⚠️ Please complete the Pre-VTP Survey before you start the project

1 Getting Started

⚠️ If you haven’t completed the Pre-VTP Survey, please do so before you start the project

1.1 RStudio, Assigned Dataset, Checkpoint Information

Here is a Google Sheet that contains, for each participant:

  • AWS-hosted RStudio URL, Username, and Password (General Info tab)
  • Sample Assignment Information (General Info tab)
  • Worksheet where you will upload your Checkpoint (CP) Results (Checkpoint (CP) Results tab)

1.2 Slack

Please join the Slack channel for this course

1.3 Troubleshooting

  1. If you’re having trouble with a step while analyzing your assigned sample, compare your code to the Example Sample code to see if you can determine where you’ve gone wrong. Each coding step requires you to first reproduce the results using the Example Sample (mini_SL342389). You should always be able to copy the code using the icon in the upper righthand corner of the screen, paste it into the R Console or Linux Terminal (depending on the step), and run it as is. You should not ask a question about the analysis of your sample if you haven’t first reproduced the results with the Example Sample.
  2. If you have a question about the Example Sample Analysis, or if you’ve successfully completed the Example Sample Analysis for a given step and have a question about analyzing your sample, go to Slack and search and see if someone has asked this question.
  3. If no one has previously asked your question on Slack, please post the question with text and screenshot(s) in a single message on the main channel. A TA or MILRD Assistant Mentor will reply in a thread to that message to assist you.
  4. If a threaded conversation in Slack does not resolve your question, the person helping you will join you in a Zoom room, have you share your screen, and help you troubleshoot.

2 Introduction to the Project

The primary goal of this project is to use the Linux and R programming languages to bioinformatically characterize, quantify and visualize the species composition of urban microbiome samples (i.e. subway swabs) from raw data to completed analysis.

What is bioinformatics?

Think, Room (4-5 ppl), Share:

  • 1-minute: think of one lab, activity, or lesson you previously taught that had an element that could have been explored using bioinformatics analysis. (Explain.)

  • 3-minutes: divide into breakout rooms and discuss with your group

  • 2-minutes: re-join the main meeting room and present 1-2 examples to the other participants.

Why is the metagenomics application of bioinformatics useful?

In case you (or more likely, your students) are thinking, “How would someone be able to use this knowledge in real life?”, here are three real-world applications of the metagenomics analysis technique you will learn:

  • Research Applications: you can ask questions about the similarity of microbiome samples, the prevalence/emergence of antimicrobial-resistant strains, etc. The Metasub project, the source of the data you’ll analyze in this VTP, is an example of this application.

  • Clinical Applications: This technique is already being employed by some biotech startups, like Karius (https://kariusdx.com/), to rapidly diagnose blood-borne infections.

  • Biotech/Industry Applications: Some companies offer microbial surveillance services to identify and monitor the presence of resistant pathogens (e.g. on hospital surfaces). One such startup called Biotia (https://www.biotia.io/) was spun out of the Mason lab (the same Cornell Med lab that created the Metasub project) and utilizes many of the same techniques that you will employ in this VTP.

In a broader sense, bioinformatics is a subdivision of data science; principles learned in the former are highly relevant to the latter. We’ll point these out as we go along.

History of metagenomics:

Example: Clinical Application of Metagenomics:

Think, Room (4-5 ppl), Share:

  • 1-minute: Come up with one potential application of metagenomics not discussed here.

  • 3-minutes: divide into breakout rooms and discuss with your group

  • 2-minutes: re-join the main meeting room and present 1-2 examples from your breakout room to the other participants.

3 Rationale for the MetaSub Project

To accomplish the goals of this project, we will use data from the Metasub project, an effort to characterize the “built environment” microbiomes of mass transit systems around the world, headed by Dr. Chris Mason’s lab at Weill Cornell Medical Center (http://www.masonlab.net/).

Here’s the recent Metasub paper in case you haven’t seen it and would like to review it later: https://milrd.org/wp-content/uploads/2022/09/Cell_A-global-metagenomic-map-of-urban-microbiomesand-antimicrobial-resistance.pdf

(To be frank, this article is a bit challenging to read, so we suggest you review it later on if you’re inclined, after you’ve done a bit of the project.)

Some additional information in case you’re interested:

Metasub was born out of a project called PathoMap (also from the Mason lab), which began in summer 2013 to profile the New York City metagenome in, around, and below the city, focusing on mass-transit areas of the built environment, particularly the subway.

Here’s the Pathomap paper in case you would like to review it later: https://milrd.org/wp-content/uploads/2022/09/Geospatial-Resolution-of-Human-and-Bacterial-Diversity-with-City-Scale-Metagenomics.pdf

Pathomap sought to establish baseline profiles across the subway system, identify potential bio-threats, and provide an additional level of data that the city can use to create a “smart city”; i.e., one that uses high-dimensional data to improve city planning, management, and human health.

Metasub extended the Pathomap project based on the recognition that NYC is not the only city in the world that could benefit from a systematic, longitudinal metagenomic profile of its subway system.

Although the NYC subway has the most stations, it ranks 7th in the world in terms of the number of riders per year. The busiest subways of the world span a wide variety of population densities, system lengths, and climates, ranging from cold (Moscow) to temperate (New York City, Paris) to subtropical (Mexico City) and tropical (São Paulo).

To address this gap in our knowledge of the built environment, the Mason lab created Metasub: an international consortium of laboratories to establish a world-wide “DNA map” of microbiomes in mass transit systems.

Take a look at Figure 1, which provides an overview of the Pathomap project’s design and execution:

As you can see, the researchers:

  1. collected samples from New York City’s five boroughs,

  2. sampled the 466 subway stations of NYC across the 24 subway lines,

  3. extracted, sequenced, QC’d, and analyzed DNA,

  4. mapped the distribution of taxa identified from the entire pooled dataset, and

  5. presented a geospatial analysis of a highly prominent genus, Pseudomonas.

Notably, as seen in (D), nearly half of the DNA sequenced (48%) did not match any known organism, underscoring the vast wealth of likely unknown organisms that surround passengers every day.

Think, Room (4-5 ppl), Share:

  • 1-minute: Why do you think so much of the sequenced DNA did not match any known organism?

  • 3-minutes: divide into breakout rooms and discuss with your group

  • 2-minutes: re-join the main meeting room and present 1-2 examples from your breakout room to the other participants.

Now, let’s focus on Fig 1C, as samples from the Metasub project were collected, sequenced, and analyzed in a similar manner to the Pathomap project. Here’s a simplified version of what the Mason lab did in Fig 1C:

4 Overview: bioinformatics analyses

This guide is intended to teach you one component of metagenomic analysis: how to plot abundances at the “phylum” level for each metagenomics sample.

Throughout the VTP, each participant characterizes, quantifies, and visualizes microbial metagenomics data from sequenced swabs of public urban environments on their own AWS High Performance Compute instance. In the Linux terminal, they perform genomic data quality control, genome alignment, and taxonomic characterization & abundance quantification; in R, they visualize the results and conduct a principal component analysis. To conclude, they investigate their most abundant species and use the Patric database to consider how they would determine the strains of these species.

[Workflow diagram: Linux steps and R steps]

Think, Room (4-5 ppl), Share:

  • 1-minute: Note that we’re using two programming languages (Linux/Bash and R). Why do you think it’s necessary to use two programming languages, and not just one?

  • 3-minutes: divide into breakout rooms and discuss with your group

  • 2-minutes: re-join the main meeting room and present 1-2 examples from your breakout room to the other participants.

All bioinformatics tasks will be performed in “the cloud” on an Amazon Web Services (AWS) hosted high performance compute instance.

What is cloud computing?

Think, Room (4-5 ppl), Share:

  • 1-minute: Why do we (as well as professional researchers, data scientists, bioinformaticians) often need to perform analyses on the cloud?

  • 3-minutes: divide into breakout rooms and discuss with your group

  • 2-minutes: re-join the main meeting room and present 1-2 examples from your breakout room to the other participants.

What are R and RStudio?

RStudio is an integrated development environment (IDE) for the open-source R programming language, which basically means it brings the core components of R (Scripting Pane, Console, Environment Pane, File Manager/Plots/Help Pane) into a quadrant-based user interface for efficient use.

Here is what the RStudio dashboard looks like:

In R, code is always executed via the Console, but you can choose whether to run that code from a script (opened in the Scripting Pane) or directly in the Console. Unless you need to quickly execute a one-liner (e.g. setting a working directory using the setwd() function), you’ll want to write your code in a script, highlight it, and click Run to execute it in the Console. This is because you can easily edit code in a script and re-run it; once code is run in the Console, it is not editable.
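For example, here is the kind of one-liner you might run directly in the Console (the path shown is illustrative):

setwd('~/reads')   # set the working directory
getwd()            # print the current working directory to confirm the change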

5 Bioinformatics Tasks

5.1 Login to your RStudio Instance

An instance of the RStudio editor has been set up for you on AWS and is waiting for you to use. Log in to your AWS-hosted RStudio instance using the URL, username, and password assigned to you in the Google Sheet.

Please make sure the RStudio user interface dashboard looks as follows: Console Pane (Lower Left Quadrant), Scripting Pane (Upper Left Quadrant), File Manager/Plots Display/Help-Tab Pane (Lower Right Quadrant), Environment/History-Tab Pane (Upper Right Quadrant).

5.2 Barplots

[Workflow diagram: current step within the Linux and R steps]

In this section, you will learn how to use R to generate two plots that help visualize the similarities and differences among a subset of the Metasub samples and compare them to metagenomics samples from the Human Microbiome Project.

For your convenience, we have provided a subset of metagenomics data (taxa_table.csv). This file contains taxonomically characterized and quantified metagenomics data from nine microbiome samples: three from the Human Microbiome Project, three from the Metasub project, and three from a mystery source.

First we will plot our samples as a stacked barplot at the phylum level. A stacked barplot shows a set of numbers as segments stacked on top of one another in a single column, colored by label. In our case, a taxonomic profile is a set of numerical abundances, each labeled with the taxon it belongs to.

Here’s the template code to make this plot in R:

library(ggplot2)  

taxa_yourSample = read.csv('yourSample_taxa.csv', header=TRUE, sep=',') 

phyla_yourSample = taxa_yourSample[taxa_yourSample$rank == 'phylum',] 

phyla_yourSample = phyla_yourSample[phyla_yourSample$percent_abundance >= 2,] 

ggplot(phyla_yourSample, aes(x=sample, y=percent_abundance, fill=taxon)) +
  geom_bar(stat="identity") +  
  xlab('Sample') + 
  ylab('Abundance') +
  labs(fill='Phylum') +
  theme(axis.text.x = element_text(angle = 90)) 
  1. This line loads the ggplot2 library into our environment. The library() function loads additional libraries that contain functions to be used by the script that follows.
  2. This line reads tabular taxonomic data from a .csv file into an object using the read.csv() function.
  3. This line filters our taxonomic table to a specific taxonomic rank (i.e. Kingdom, Phylum, Class, Order, Family, Genus, Species).
  4. This line filters out and discards rows whose value for the specified variable falls below the specified threshold.
  5. This line creates a ggplot() object.
  6. This line tells ggplot() how we want our data to be displayed.
  7. These lines tell ggplot() what our axis labels should be.
  8. This line rotates the x-axis text 90 degrees.
Execute with Example Sample mini_SL342389
Create: Phylum-level Barplot
Run Script: all at once

This code generates a phylum-level barplot by modifying taxa_yourSample, yourSample_taxa.csv, and phyla_yourSample in the template per the Example Sample.

library(ggplot2)  

taxa_mini_SL342389 = read.csv('mini_SL342389_taxa.csv', header=TRUE, sep=',') 

phyla_mini_SL342389 = taxa_mini_SL342389[taxa_mini_SL342389$rank == 'phylum',] 

phyla_mini_SL342389 = phyla_mini_SL342389[phyla_mini_SL342389$percent_abundance >= 2,] 

ggplot(phyla_mini_SL342389 , aes(x=sample, y=percent_abundance, fill=taxon)) +
  geom_bar(stat="identity") +  
  xlab('Sample') + 
  ylab('Abundance') +
  labs(fill='Phylum') +
  theme(axis.text.x = element_text(angle = 90)) 
Output
Step-by-Step Video
Run Script: Chunk-by-chunk

Clear your Console, Environment Tab, and Plots Tab using the 🧹 button and re-run the script chunk-by-chunk (noting what each chunk appears to be doing):
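If you prefer the Console to the 🧹 button, a standard R idiom (not specific to this guide) clears the Environment:

rm(list = ls())   # removes all objects from the current environment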

library(ggplot2)   
Note 1 Send the code to the Console. The library will be loaded behind the scenes. (This command worked if no error messages were printed in the Console.)

+

taxa_mini_SL342389 = read.csv('mini_SL342389_taxa.csv', header=TRUE, sep=',')
Note 1 After execution, the taxa_mini_SL342389 object should be listed in the Environment Tab. Click on the name of this object in the Environment Tab and it should load as a table in a new tab in the Scripting Pane (as long as it’s not too big). Alternatively, you can use the head() function to return the first “parts of a vector, matrix, table, data frame or function”; to learn more about this function, execute ?head() in the Console. Go ahead and execute head(taxa_mini_SL342389) in the Console now. It will show the first 6 rows of taxa_mini_SL342389 by default.
Note 2 Take a look at the rank column. Even from the first few lines of the taxa_mini_SL342389 object, it’s evident that taxonomic classifications are made at several levels (e.g. phylum, class, order, genus, etc.). Keep this in mind as you execute the next step.

+

phyla_mini_SL342389 = taxa_mini_SL342389[taxa_mini_SL342389$rank == 'phylum',] 
Note 1 After execution, run head(phyla_mini_SL342389) in the Console. How did this line of code transform the data?

+

phyla_mini_SL342389 = phyla_mini_SL342389[phyla_mini_SL342389$percent_abundance >= 2,] 
Note 1 In R parlance, an object’s rows are referred to as observations (obs. for short); an object’s column headers are referred to as variables. How did this line of code transform the data? (Hint: look at the change in the number of observations.)

+

ggplot(phyla_mini_SL342389 , aes(x=sample, y=percent_abundance, fill=taxon)) +
  geom_bar(stat="identity") +  
  xlab('Sample') + 
  ylab('Abundance') +
  labs(fill='Phylum') +
  theme(axis.text.x = element_text(angle = 90)) 
Create: Family-level Barplot

This code generates a family-level barplot by modifying taxa_yourSample, yourSample_taxa.csv, phyla_yourSample, and labs() per the Example Sample.

library(ggplot2)  

taxa_mini_SL342389 = read.csv('mini_SL342389_taxa.csv', header=TRUE, sep=',') 

family_mini_SL342389 = taxa_mini_SL342389[taxa_mini_SL342389$rank == 'family',] 

family_mini_SL342389 = family_mini_SL342389[family_mini_SL342389$percent_abundance >= 2,] 

ggplot(family_mini_SL342389 , aes(x=sample, y=percent_abundance, fill=taxon)) +
  geom_bar(stat="identity") +  
  xlab('Sample') + 
  ylab('Abundance') +
  labs(fill='Family') +
  theme(axis.text.x = element_text(angle = 90)) 
Output
Execute with Your Sample
Create: Phylum-level Barplot
Modify and run template script

Generate a phylum-level barplot by modifying taxa_yourSample, yourSample_taxa.csv, and phyla_yourSample in the template script per your assigned sample.

library(ggplot2)  

taxa_yourSample = read.csv('yourSample_taxa.csv', header=TRUE, sep=',') 

phyla_yourSample = taxa_yourSample[taxa_yourSample$rank == 'phylum',] 

phyla_yourSample = phyla_yourSample[phyla_yourSample$percent_abundance >= 2,] 

ggplot(phyla_yourSample, aes(x=sample, y=percent_abundance, fill=taxon)) +
  geom_bar(stat="identity") +  
  xlab('Sample') + 
  ylab('Abundance') +
  labs(fill='Phylum') +
  theme(axis.text.x = element_text(angle = 90)) 
Re-run: Modified Template Script Chunk-by-chunk

Clear your Console, Environment Tab, and Plots tab using the 🧹 button and re-run the script chunk-by-chunk (noting what each chunk appears to be doing):

library(ggplot2)   

+

taxa_yourSample = read.csv('yourSample_taxa.csv', header=TRUE, sep=',') 

+

phyla_yourSample = taxa_yourSample[taxa_yourSample$rank == 'phylum',] 

+

phyla_yourSample = phyla_yourSample[phyla_yourSample$percent_abundance >= 2,]

+

ggplot(phyla_yourSample, aes(x=sample, y=percent_abundance, fill=taxon)) + 
  geom_bar(stat="identity") + 
  xlab('Sample') +  
  ylab('Abundance') +
  labs(fill='Phylum') +
  theme(axis.text.x = element_text(angle = 90)) 
Create: Family-level Barplot

Modify code: edit your script to generate a barplot at the “family” level by modifying this line of code (and, per the Example Sample, updating labs()); a sketch of the modified script follows below:

phyla_yourSample = taxa_yourSample[taxa_yourSample$rank == 'phylum',] 
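For reference, here is a sketch of the full modified script body (mirroring the Example Sample’s family-level version; it assumes the library() and read.csv() lines from your phylum-level script are unchanged):

family_yourSample = taxa_yourSample[taxa_yourSample$rank == 'family',] 

family_yourSample = family_yourSample[family_yourSample$percent_abundance >= 2,] 

ggplot(family_yourSample, aes(x=sample, y=percent_abundance, fill=taxon)) +
  geom_bar(stat="identity") +
  xlab('Sample') +
  ylab('Abundance') +
  labs(fill='Family') +
  theme(axis.text.x = element_text(angle = 90))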

Checkpoint (CP) 1

Please upload the following to the Google Sheet:

  1. Your barplot at the phylum level. (To download this plot as a file, click the (+) Zoom button, right-click on the plot, and select “Save Image As”; or see the ggsave() sketch after this list.)

  2. Your barplot at the family level, downloaded the same way.

  3. Answers to these questions:

    1. Which line of code ingests data from the taxa_table.csv file into the taxa object (copy/paste the exact line of code)?
    2. Which line of code allows you to filter down to a specific taxonomic level?
    3. What is the $ operator doing in this line of code: taxa = taxa[taxa$rank == 'phylum',] ?
    4. Why wouldn’t you want to plot a barplot at the species level? (If you’re not sure, try it!)
    5. What can the barplots tell you about the similarities and dissimilarities of microbiome samples?
    6. What questions do barplots leave unanswered about microbiome samples?
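As an alternative to the right-click method in items 1 and 2, ggplot2’s ggsave() function saves the most recently displayed plot to a file. A minimal sketch (the filename and dimensions are illustrative):

ggsave('phylum_barplot.png', width = 8, height = 6)   # saves the last plot shown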

5.3 Access your Linux terminal via RStudio

Current step Linux Steps R Steps

Access your Linux terminal by logging into RStudio via the web browser (the URL can be found in the Google Sheet).

Set up your Linux environment:

It should look like this once you’re finished setting up your Linux environment:

Checkpoint (CP) 2

Please take a screenshot of the last 15 lines of your history output and upload it to the Google Sheet.

5.5 Look: assigned dataset

5.5.1 Workflow Context

[Workflow diagram: current step within the Linux and R steps]

5.5.2 A note on your Sample

To make sure you understand the workflow, we’d like you to first complete the analysis with a small dataset. We’ve taken 50,000 reads from the read1 and read2 files of each assigned sample. These files are labeled with the prefix mini_ and located in the reads directory. (Files without the mini_ prefix are the original, full-size files, which each contain millions of reads.)

5.5.3 Each Sample Has Two Files

Data from each Metasub sample is contained in two files, read1 (i.e. R1) and read2 (i.e. R2), because the researchers paired-end sequenced each sample. When samples are sequenced on an Illumina genomic sequencer, researchers get to decide: (1) the number of reads to generate for each sample library; (2) the read length (within a range of ~50 - ~300 nucleotides); and (3) whether to generate a single read or a pair of reads from each sequenced DNA library fragment.

5.5.4 Paired-end vs. Single-end

Here is a diagram that summarizes paired-end versus single-end sequencing:

5.5.5 FASTQ Format

The raw genomic data for this project is in fastq files. Fastq files have the extension .fastq; ours are additionally gzip-compressed, so their names end in .fastq.gz.

Fastq files are collections of fastq records.

A FASTQ record has the following format:

  1. A line starting with @, containing the sequence ID.
  2. One or more lines that contain the sequence.
  3. A line starting with the character +, followed either by nothing or by a repeat of the sequence ID.
  4. One or more lines that contain the quality scores.

Here’s an example of a FASTQ file with two records:

@071112_SLXA-EAS1_s_7:5:1:817:345
GGGTGATGGCCGCTGCCGATGGCGTCAAATCCCACC
+
IIIIIIIIIIIIIIIIIIIIIIIIIIIIII9IG9IC
@071112_SLXA-EAS1_s_7:5:1:801:338
GTTCAGGGATACGACGTTTGTATTTTAAGAATCTGA
+
IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII6IBI
Navigate to the reads directory (which contains the “mini” fastq data):
cd ~/reads/

PLEASE NOTE: make sure to complete all Linux steps from the reads directory.
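Before each Linux step, you can confirm your current directory with the pwd command:

  pwd   # the output should end in /reads; if it doesn’t, run: cd ~/reads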

5.5.6 Look at your assigned dataset

Execute with Example Sample mini_SL342389

Take a look at the first 10 lines of the Example Sample Read1 fastq file:

  zcat mini_SL342389.R1.fastq.gz | head

Take a look at the first 10 lines of the Example Sample Read2 fastq file:

  zcat mini_SL342389.R2.fastq.gz | head
Output
Step-by-Step Video
Execute with Your Sample

Take a look at the first 10 lines of Your Sample’s Read1 fastq file:

zcat mini_my_reads.R1.fastq.gz | head

Take a look at the first 10 lines of Your Sample’s Read2 fastq file:

zcat mini_my_reads.R2.fastq.gz | head

(Please note: mini_my_reads is a placeholder for your assigned miniature reads dataset.)

Checkpoint (CP) 3

In the Google Sheet, please:

  1. Upload a screenshot of your zcat results.

  2. Answer these questions:

    1. How are fastq files structured? (i.e. What do line 1, line 2, line 3, and line 4 of each read signify?)
    2. Why are there two read files for each sample?

5.6 Quality Control

5.6.1 Workflow Context

[Workflow diagram: current step within the Linux and R steps]

5.6.2 A note on Quality Control

5.6.3 Code Explanation

Run AdapterRemoval on your mini sample (executable version):

AdapterRemoval --file1 mini_my_reads.R1.fastq.gz --file2 mini_my_reads.R2.fastq.gz --trimns --trimqualities --minquality 40 --threads 2 --gzip --output1 clean.mini_my_reads.R1.fastq.gz  --output2 clean.mini_my_reads.R2.fastq.gz

AdapterRemoval calls the program you want to invoke.
--file1 tells the program where to find Read1 input data
--file2 tells the program where to find Read2 input data
--trimns tells the program to trim ambiguous bases (’N’s) from reads
--trimqualities --minquality tell the program to trim nucleotide bases below a certain quality threshold
--threads tells the computer how many threads to use when running the program
--gzip tells the program that both the input and output files are gzip-compressed
--output1 tells the program where to place processed Read1 data
--output2 tells the program where to place processed Read2 data

5.6.4 Run Step

Check: Ensure AdapterRemoval is running
AdapterRemoval -h

You should see a long list of all the arguments that can be given to this tool.

Execute with Example Sample mini_SL342389
AdapterRemoval --file1 mini_SL342389.R1.fastq.gz --file2 mini_SL342389.R2.fastq.gz --trimns --trimqualities --minquality 40 --threads 2 --gzip --output1 clean.mini_SL342389.R1.fastq.gz  --output2 clean.mini_SL342389.R2.fastq.gz
Output

Suggestion: Execute ls to confirm that clean.mini_SL342389.R1.fastq.gz and clean.mini_SL342389.R2.fastq.gz were generated.

Step-by-Step Video
Execute with your sample

Run AdapterRemoval on your assigned sample:

AdapterRemoval --file1 mini_my_reads.R1.fastq.gz --file2 mini_my_reads.R2.fastq.gz --trimns --trimqualities --minquality 40 --threads 2 --gzip --output1 clean.mini_my_reads.R1.fastq.gz  --output2 clean.mini_my_reads.R2.fastq.gz

Suggestion: Execute ls to confirm that clean.mini_my_reads.R1.fastq.gz and clean.mini_my_reads.R2.fastq.gz were generated.

5.6.4.1 Examine Results

Execute with Example Sample mini_SL342389

Look at the first 10 lines of your AdapterRemoval Read1 and Read2 output files (i.e. your “cleaned” or “clean” reads):

  zcat clean.mini_SL342389.R1.fastq.gz | head
  zcat clean.mini_SL342389.R2.fastq.gz | head
Compare to Raw Reads

For comparison, look again at the first 10 lines of your initial (i.e. “raw”) reads:

  zcat mini_SL342389.R1.fastq.gz | head
  zcat mini_SL342389.R2.fastq.gz | head

Suggestion: Make note of any differences you see between the lengths of the Clean and Raw reads.

Count the number of reads in each of your cleaned fastq files by running this code and dividing the output by four:

  zcat clean.mini_SL342389.R1.fastq.gz | wc -l
  zcat clean.mini_SL342389.R2.fastq.gz | wc -l

Count the number of reads in each of your raw (i.e. initial) fastq files by running this code and dividing the output by four:

  zcat mini_SL342389.R1.fastq.gz | wc -l
  zcat mini_SL342389.R2.fastq.gz | wc -l

Suggestion: Make note of any differences you see between the read counts of the Clean and Raw reads.
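If you’d rather let the shell do the division for you, here’s a one-liner sketch using bash arithmetic (shown for the Example Sample’s Read1 file; adapt the filename as needed):

  echo $(( $(zcat clean.mini_SL342389.R1.fastq.gz | wc -l) / 4 ))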

Execute with your sample

Look at your cleaned results:

  zcat clean.mini_my_reads.R1.fastq.gz | head
  zcat clean.mini_my_reads.R2.fastq.gz | head
Compare to Raw Reads

For comparison, look again at the first 10 lines of your raw reads:

  zcat mini_my_reads.R1.fastq.gz | head
  zcat mini_my_reads.R2.fastq.gz | head

Count the number of reads in each of your cleaned fastq files by running:

  zcat clean.mini_my_reads.R1.fastq.gz | wc -l
  zcat clean.mini_my_reads.R2.fastq.gz | wc -l

Count the number of reads in each of your raw (i.e. initial) fastq files by running:

  zcat mini_my_reads.R1.fastq.gz | wc -l
  zcat mini_my_reads.R2.fastq.gz | wc -l

Record your Raw and Clean read counts to report them in CP4.

Notice any differences between the Clean and Raw reads? If so, write them down.

Checkpoint (CP) 4

In the Google Sheet, please:

  1. Upload a screenshot of your zcat results.

  2. Answer these questions:

    1. Do you notice any differences in the read lengths between your Raw and Cleaned reads? If so, what are they?
    2. Report the numbers of your Raw and Cleaned reads.

5.7 Not shown: Remove contaminating human DNA from your sample

Researchers often decide at this point to remove DNA from unwanted species, such as humans. DNA that we don’t want is called “contaminating DNA.” We will not perform this step due to time constraints.

5.8 Taxonomic Characterization & Quantification

[Workflow diagram: current step within the Linux and R steps]

Install Kraken (executable version):

(base) user#@ip-#:~/reads $

conda install --channel bioconda --yes kraken 

conda is a package manager
install is a subcommand of conda
--channel bioconda tells conda to install from the bioconda channel
--yes gives conda permission to install the software without prompting
kraken is the program we are installing

You should now be able to test Kraken by running:

(base) user#@ip-#:~/reads $

kraken --help

--help is an option that displays the options for the tool

If a list of kraken options and explanations is displayed, that confirms the tool is running. Provided this happens, we can now perform taxonomic classification on our data using this command:

(base) user#@ip-#:~/reads $

kraken --gzip-compressed --fastq-input --threads 2 --paired --preload --db ~/databases/minikraken_20171101_8GB_dustmasked clean.mini_my_reads.R1.fastq.gz clean.mini_my_reads.R2.fastq.gz > mini_my_reads_taxonomic_report.csv

kraken command invocation
--gzip-compressed indicates our reads are gzip compressed
--fastq-input indicates our reads are formatted as fastq files
--threads tells the compute instance how many threads to use
--paired indicates we’re using paired end data
--preload specifies to load the database into RAM before running
--db specifies the path to the database
clean.mini_my_reads.R1.fastq.gz filepath to Read1 file
clean.mini_my_reads.R2.fastq.gz filepath to Read2 file
mini_my_reads_taxonomic_report.csv filepath to file where kraken will deposit results


5.8.3 Execute metagenomics analysis: See one, do one

See One (Example)
Sample: mini_SL342389

(base) user#@ip-#:~/reads $

kraken --gzip-compressed --fastq-input --threads 2 --paired --preload --db ~/databases/minikraken_20171101_8GB_dustmasked clean.mini_SL342389.R1.fastq.gz clean.mini_SL342389.R2.fastq.gz > mini_SL342389_taxonomic_report.csv

Confirm the file is present using the ls command.

Look at Kraken output:

(base) user#@ip-#:~/reads $

head mini_SL342389_taxonomic_report.csv

(the head command prints the first 10 lines of any file to the terminal)

It should look similar to this:

Note Column 1 indicates whether the read was classified by Kraken or not. Column 2 denotes the Read ID (line 1 of the fastq read record). Subsequent columns indicate taxonomic mappings, if any.

Execute with your sample

(base) user#@ip-#:~/reads $

kraken --gzip-compressed --fastq-input --threads 2 --paired --preload --db ~/databases/minikraken_20171101_8GB_dustmasked clean.mini_my_reads.R1.fastq.gz clean.mini_my_reads.R2.fastq.gz > mini_my_reads_taxonomic_report.csv

Confirm the file is present using the ls command.

Once this command runs, take a look at the output by running:

(base) user#@ip-#:~/reads $

head mini_my_reads_taxonomic_report.csv

5.8.4 Examine Kraken Results

1. Execute with Example Sample
mini_SL342389

The mini_SL342389_taxonomic_report.csv file is formatted for storage efficiency and is not very useful for biological interpretation. Convert it to .mpa format so that it can be interpreted more easily:

(base) user#@ip-#:~/reads $

kraken-mpa-report --db ~/databases/minikraken_20171101_8GB_dustmasked mini_SL342389_taxonomic_report.csv > mini_SL342389_final_taxonomic_results.mpa

Look at the first 20 lines by executing:

(base) user#@ip-#:~/reads $

head -20 mini_SL342389_final_taxonomic_results.mpa

Look at the last 20 lines by executing:

(base) user#@ip-#:~/reads $

tail -20 mini_SL342389_final_taxonomic_results.mpa

2. Execute with Your Sample

The mini_my_reads_taxonomic_report.csv file is formatted for storage efficiency and is not very useful for biological interpretation. Convert it to .mpa format so that it can be interpreted more easily:

(base) user#@ip-#:~/reads $

kraken-mpa-report --db ~/databases/minikraken_20171101_8GB_dustmasked mini_my_reads_taxonomic_report.csv > mini_my_reads_final_taxonomic_results.mpa

Look at the first 20 lines by executing:

(base) user#@ip-#:~/reads $

head -20 mini_my_reads_final_taxonomic_results.mpa

Look at the last 20 lines by executing:

(base) user#@ip-#:~/reads $

tail -20 mini_my_reads_final_taxonomic_results.mpa

5.8.5 Extract Species Level Taxonomic Assignments

Pull out the species level assignments in your .mpa file and store them in a .tsv file:

1. Execute with Example Sample
mini_SL342389

(base) user#@ip-#:~/reads $

grep 's__' mini_SL342389_final_taxonomic_results.mpa > mini_SL342389_species_only.tsv

Look at the first 10 lines:

(base) user#@ip-#:~/reads $

head mini_SL342389_species_only.tsv

Visually confirm that all taxonomic assignments are at the species level.

See how many species were identified:

(base) user#@ip-#:~/reads $

wc -l mini_SL342389_species_only.tsv

2. Execute with Your Sample

(base) user#@ip-#:~/reads $

grep 's__' mini_my_reads_final_taxonomic_results.mpa > mini_my_reads_species_only.tsv

Look at the first 10 lines:

(base) user#@ip-#:~/reads $

head mini_my_reads_species_only.tsv

Visually confirm that all taxonomic assignments are at the species level.

See how many species were identified:

(base) user#@ip-#:~/reads $

wc -l mini_my_reads_species_only.tsv


Checkpoint (CP) 5

In the Google Sheet, please:

  1. Upload a screenshot of your kraken command output (i.e. the one that states the % of sequences classified and unclassified).

  2. Upload a screenshot of your head -20 mini_my_reads_final_taxonomic_results.mpa output.

  3. Upload a screenshot of your tail -20 mini_my_reads_final_taxonomic_results.mpa output.

  4. Answer these questions:

    1. What do d, p, and the other abbreviations stand for?
    2. What do you think the numbers at the end of each line signify?
    3. Why do you think Kraken can’t characterize some reads at all?
    4. Why do you think Kraken can’t characterize some reads down to the species level?
    5. When identifying only the species-level taxonomic assignments, what “search term” do you use in your code?

5.9 Barplots

Now create a barplot with all of the metasub samples:

library(ggplot2)  

taxa_all_barplot = read.csv('kraken_all_metasub.csv', header=TRUE, sep=',') 

taxa_all_barplot = taxa_all_barplot[taxa_all_barplot$rank == 'phylum',] 

taxa_all_barplot = taxa_all_barplot[taxa_all_barplot$percent_abundance >= 2,] 

ggplot(taxa_all_barplot, aes(x=sample, y=percent_abundance, fill=taxon)) +
  geom_bar(stat="identity") +  
  xlab('Sample') + 
  ylab('Abundance') +
  labs(fill='Phylum') +
  theme(axis.text.x = element_text(angle = 90)) 

Checkpoint (CP) 6

Please upload the following to the Google Sheet:

  1. Barplot at the phylum level for all samples. (To download this plot as a file, click the (+) Zoom button, right click on the plot and select “Save Image As”.)

5.10 ID Abundant species

Read-in all of the Kraken data from the samples you and the other participants analyzed:

taxa_all = read.csv('kraken_all_metasub.csv', header=TRUE, sep=',')

taxa_all_spp = taxa_all[taxa_all$rank == 'species',] 

taxa_all_spp_mySampleID = taxa_all_spp[taxa_all_spp$sample == 'SampleID',]  

taxa_all_spp_mySampleID_ordered <- taxa_all_spp_mySampleID[order(taxa_all_spp_mySampleID$percent_abundance, decreasing = TRUE), ]  

head(taxa_all_spp_mySampleID_ordered, 5)
  1. Read in all Metasub data
  2. Filter down to the species level
  3. Subset to your sample. Be sure to specify your SampleID (e.g. SL342389) where indicated.
  4. Order the object from highest to lowest percent abundance
  5. Print the top 5 most abundant species in your sample to the Console

For your information, you can open taxa_all_spp_mySampleID as a table in the Scripting Pane by clicking on the name of the object in the Environment Pane. You can order rows by the increasing or decreasing values of a specific column by clicking the ▲ and ▼ arrows of that column.

Checkpoint (CP) 7

In the Google Sheet:

  1. List the top 5 most abundant species in your sample.

5.11 ID AMR Loci

Navigate to the Bacterial and Viral Bioinformatics Resource Center (BV-BRC): https://www.bv-brc.org/

Use the tools in BV-BRC to ID at least one AMR locus/gene for the top 5 most abundant species in your sample.

  1. Select Taxa option from the “All Data Types” dropdown menu
  2. Search for a given species via the search bar
  3. Select the “species” result (usually the first one) by checking the box next to that row
  4. Click TAXON OVERVIEW from the green column
  5. Click the Specialty Genes tab
  6. Click the blue Filter button
  7. For Property, select Antibiotic Resistance
  8. For Source, select PATRIC
  9. For Classification, search for efflux pump conferring antibiotic resistance using the magnifying glass feature
  10. Select a few of the results rows by checking their boxes
  11. Click the FASTA option in the green column and select View FASTA DNA. This will show you the nucleotide sequences of the selected resistance loci.

Here’s a screencast using the species Staphylococcus aureus (methicillin-resistant Staphylococcus aureus, or MRSA, is a widely known antibiotic-resistant, hospital-acquired infection):

Checkpoint (CP) 8

In the Google Sheet:

  1. List three efflux pump conferring antibiotic resistance locus IDs for the species you queried using the BV-BRC.
  2. Hypothesize how you would figure out if a resistant strain of one of the abundant species in your sample was present.

6 Biology + Data Science in the Classroom

While the Linux steps can be challenging to perform on a local computer, students should be able to perform all R/RStudio-related steps.

R is free and open source. The Desktop version of RStudio is free.

To use RStudio, you must:

  1. Download and install Base R.
  2. Download and install RStudio.

Please feel free to download taxa_table.csv and kraken_all_metasub.csv for your own use.

7 Data Science Principles Covered

  1. Use commands (e.g. head, tail) to understand the structure/format of the data.
  2. Use operators (e.g. $ or @) to filter or sort data by specific variables (i.e. columns).
  3. Use functions like summary(), density(), and plot() to understand the descriptive statistics and general distribution of a dataset (see the sketch after this list).
  4. Dataset sizes (footprints) tend to get smaller as one works through downstream analyses.
  5. To compare samples visually, one often tries to reduce the dimensionality of the data.
  6. If you’re having trouble running a command, first try to reproduce the sample code result.
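To make principles 1-3 concrete, here is a minimal R sketch, assuming taxa_table.csv has the columns used earlier in this guide (sample, rank, taxon, percent_abundance):

taxa = read.csv('taxa_table.csv', header=TRUE, sep=',')
head(taxa)                                # principle 1: inspect the data’s structure
phyla = taxa[taxa$rank == 'phylum',]      # principle 2: filter rows with the $ operator
summary(phyla$percent_abundance)          # principle 3: descriptive statistics
plot(density(phyla$percent_abundance))    # principle 3: visualize the distribution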

8 Post-VTP Survey

We would greatly appreciate it if you could complete the POST-VTP Survey: https://forms.gle/ogTzjPW61S4WfVou6