library(vcfR)
library(adegenet)
library(poppr)
This tutorial is derived from Novel tools in R for population genomic analyses, Reading VCF data and Analysis of genome data by BJ Knaus, JF Tabima and NJ Grünwald
We now live in the fast-growing era of high throughput sequencing (HTS), which is revolutionizing our ability to understand genetic variation (Grünwald, McDonald, and Milgroom 2016; Luikart et al. 2003). Two factors create a need for new methods of analysis: 1. the data now often come in a genome-wide context where location within a genome is part of the analysis, and 2. the number of variants is large.
The R computing language has become a great tool for analyzing population genomic data. A special issue in Molecular Ecology Resources provides a nice overview of the arsenal of tools available in R (Paradis et al. 2017). New tools have become available in R for analyzing HTS data including adegenet (Jombart 2008), ape (Paradis, Claude, and Strimmer 2004), vcfR (Knaus and Grünwald 2017), and poppr (Kamvar, Tabima, and Grünwald 2014; Kamvar, Brooks, and Grünwald 2015) among others. Section III of this primer is geared towards analyzing whole genome or reduced representation genomic data for populations using the variant call format (VCF). The next three chapters will focus on introducing the VCF file format, reading SNP data into R from high throughput sequencing projects, performing quality control, and conducting selected analyses using population genomic data.
Genetic variation data is typically stored in variant call format (VCF) files (Danecek et al. 2011). This format is the preferred file format obtained from genome sequencing or high throughput genotyping. One advantage of VCF files is that only variants (e.g., SNPs, indels, etc.) are reported, which economizes file size relative to formats that also include invariant sites. Variant callers typically attempt to aggressively call variants with the perspective that a downstream quality control step will remove low quality variants. Note that VCF files come in different flavors and that each variant caller may report slightly different information. A first step in working with these data is to understand their contents.
A VCF file can be thought of as having three sections: a meta region, a fix region and a gt region. The meta region is located at the top of the file and contains meta-data describing the body of the file. Each meta line begins with a ‘##’. The information in the meta region defines the abbreviations used elsewhere in the file. It may also document the software used to create the file as well as the parameters used by that software. Below the meta region, the data are tabular. The first eight columns of this table contain information about each variant. This data may be common over all variants, such as its chromosomal position, or a summary over all samples, such as quality metrics. These data are fixed, or the same, over all samples. The fix region is required in a VCF file; subsequent columns are optional, but they are common in our experience. Beginning at column ten there is a column for every sample. The values in these columns are information for each sample and each variant. The organization of each cell, containing a genotype and associated information, is specified in column nine, the FORMAT column. The location of these three regions within a file can be represented by this cartoon:
The VCF file specification is flexible. This means that there are slots for certain types of data, but any particular software that creates a VCF file does not necessarily use them all. Similarly, authors have the opportunity to include new forms of data that may not have been foreseen by the authors of the VCF specification. The result is that not all VCF files contain the same information.
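The three regions are easy to see if we sketch a miniature VCF file as text and split it by these rules in base R. The variant, chromosome and sample names below are made-up toy data, not the example file used later.

```r
# A miniature VCF as plain text (hypothetical toy data).
vcf_lines <- c(
  "##fileformat=VCFv4.1",
  "##FORMAT=<ID=GT,Number=1,Type=String,Description=\"Genotype\">",
  "#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO\tFORMAT\tsample1",
  "chr1\t101\t.\tA\tG\t40\tPASS\tDP=10\tGT\t0/1"
)
meta <- vcf_lines[grepl("^##", vcf_lines)]         # meta region: '##' lines
hdr  <- strsplit(vcf_lines[grepl("^#[^#]", vcf_lines)], "\t", fixed = TRUE)[[1]]
body <- read.table(text = vcf_lines[!grepl("^#", vcf_lines)],
                   sep = "\t", stringsAsFactors = FALSE)
names(body) <- hdr
fix <- body[, 1:8]    # fix region: first eight, per-variant columns
gt  <- body[, 9:10]   # gt region: FORMAT column plus one column per sample
```

Real files differ only in scale: more meta lines, more variants (rows), and one column per sample.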
For this example, we will use example data provided with the R package vcfR (Knaus and Grünwald 2017).
library(vcfR)
data(vcfR_example)
vcf
## ***** Object of Class vcfR *****
## 18 samples
## 1 CHROMs
## 2,533 variants
## Object size: 3.2 Mb
## 8.497 percent missing data
## ***** ***** *****
The function library() loads libraries, in this case the package vcfR. The function data() loads datasets that were included with R and its packages. Our usage of data() loads the objects ‘gff’, ‘dna’ and ‘vcf’ from the ‘vcfR_example’ dataset. Here we’re only interested in the object ‘vcf’ which contains example VCF data. When we call the object name with no function it invokes the ‘show’ method which prints some summary information to the console.
The meta region contains information about the file and its creation, as well as information needed to interpret abbreviations used elsewhere in the file. Each line of the meta region begins with a double pound sign (‘##’). The example which comes with vcfR is shown below. (Only the first seven meta lines are shown for brevity; strwrap() wraps the longer ones over several lines of output.)
strwrap(vcf@meta[1:7])
## [1] "##fileformat=VCFv4.1"
## [2] "##source=\"GATK haplotype Caller, phased with beagle4\""
## [3] "##FILTER=<ID=LowQual,Description=\"Low quality\">"
## [4] "##FORMAT=<ID=AD,Number=.,Type=Integer,Description=\"Allelic depths for"
## [5] "the ref and alt alleles in the order listed\">"
## [6] "##FORMAT=<ID=DP,Number=1,Type=Integer,Description=\"Approximate read"
## [7] "depth (reads with MQ=255 or with bad mates are filtered)\">"
## [8] "##FORMAT=<ID=GQ,Number=1,Type=Integer,Description=\"Genotype Quality\">"
## [9] "##FORMAT=<ID=GT,Number=1,Type=String,Description=\"Genotype\">"
The first line contains the version of the VCF format used in the file. This line is required. The second line specifies the software which created the VCF file. This is not required, so not all VCF files include it. When they do, the file becomes self-documenting. Note that the alignment software is not included here because it was used upstream of the VCF file’s creation (aligners typically create SAM or BAM format files). Because the file can only include information about the software that created it, the entire pipeline does not get documented. Some VCF files include a contig meta line for every chromosome (or supercontig or contig, depending on your genome), so the meta region may become rather long. Here, the remaining lines contain INFO and FORMAT specifications which define abbreviations used in the fix and gt portions of the file.
The meta region may include long lines that are not easy to view. In vcfR we’ve created a function to help process this data.
queryMETA(vcf)
## [1] "FILTER=ID=LowQual"
## [2] "FORMAT=ID=AD"
## [3] "FORMAT=ID=DP"
## [4] "FORMAT=ID=GQ"
## [5] "FORMAT=ID=GT"
## [6] "FORMAT=ID=PL"
## [7] "GATKCommandLine=ID=HaplotypeCaller"
## [8] "INFO=ID=AC"
## [9] "INFO=ID=AF"
## [10] "INFO=ID=AN"
## [11] "INFO=ID=BaseQRankSum"
## [12] "INFO=ID=ClippingRankSum"
## [13] "INFO=ID=DP"
## [14] "INFO=ID=DS"
## [15] "INFO=ID=FS"
## [16] "INFO=ID=HaplotypeScore"
## [17] "INFO=ID=InbreedingCoeff"
## [18] "INFO=ID=MLEAC"
## [19] "INFO=ID=MLEAF"
## [20] "INFO=ID=MQ"
## [21] "INFO=ID=MQ0"
## [22] "INFO=ID=MQRankSum"
## [23] "INFO=ID=QD"
## [24] "INFO=ID=ReadPosRankSum"
## [25] "INFO=ID=SOR"
## [26] "1 contig=<IDs omitted from queryMETA"
When the function queryMETA() is called with only a vcfR object as a parameter, it attempts to summarize the meta information. Not all of the information is returned. For example, ‘contig’ elements are not returned. This is an attempt to summarize information that may be most useful for comprehension of the file’s contents.
queryMETA(vcf, element = 'DP')
## [[1]]
## [1] "FORMAT=ID=DP"
## [2] "Number=1"
## [3] "Type=Integer"
## [4] "Description=Approximate read depth (reads with MQ=255 or with bad mates are filtered)"
##
## [[2]]
## [1] "INFO=ID=DP"
## [2] "Number=1"
## [3] "Type=Integer"
## [4] "Description=Approximate read depth; some reads may have been filtered"
When an element parameter is included, only the information about that element is returned. In this example the element ‘DP’ is returned. We see that this acronym is defined as both a ‘FORMAT’ and ‘INFO’ acronym. We can narrow down our query by including more information in the element parameter.
queryMETA(vcf, element = 'FORMAT=<ID=DP')
## [[1]]
## [1] "FORMAT=ID=DP"
## [2] "Number=1"
## [3] "Type=Integer"
## [4] "Description=Approximate read depth (reads with MQ=255 or with bad mates are filtered)"
Here we’ve isolated the definition of ‘DP’ as a ‘FORMAT’ element. Note that the function queryMETA() includes the parameter nice, which defaults to TRUE and attempts to present the data in a nicely formatted manner. However, our query is performed on the actual information in the ‘meta’ region, so it is sometimes appropriate to set nice = FALSE in order to see the raw data. In the above example the angle bracket (‘<’) is omitted from the nice = TRUE representation but is essential for distinguishing the ‘FORMAT’ element from the ‘INFO’ element.
The fix region contains information for each variant which is sometimes summarized over all samples. The first eight columns of the fixed region are titled CHROM, POS, ID, REF, ALT, QUAL, FILTER and INFO. This is per variant information which is ‘fixed’, or the same, over all samples. The first two columns indicate the location of the variant by chromosome and position within that chromosome. Here, the ID field has not been used, so it consists of missing data (NA). The REF and ALT columns indicate the reference and alternate allelic states for a diploid sample. When multiple alternate allelic states are present they are delimited with commas. The QUAL column attempts to summarize the quality of each variant over all samples. The FILTER field is not used here but could contain information on whether a variant has passed some form of quality assessment.
head(getFIX(vcf))
## CHROM POS ID REF ALT QUAL FILTER
## [1,] "Supercontig_1.50" "2" NA "T" "A" "44.44" NA
## [2,] "Supercontig_1.50" "246" NA "C" "G" "144.21" NA
## [3,] "Supercontig_1.50" "549" NA "A" "C" "68.49" NA
## [4,] "Supercontig_1.50" "668" NA "G" "C" "108.07" NA
## [5,] "Supercontig_1.50" "765" NA "A" "C" "92.78" NA
## [6,] "Supercontig_1.50" "780" NA "G" "T" "58.38" NA
The eighth column, titled INFO, is a semicolon delimited list of information. It can be rather long and cumbersome, so the function getFIX() suppresses this column by default. Each abbreviation in the INFO column should be defined in the meta section. We can validate this by querying the meta portion, as we did in the ‘meta’ section above.
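The semicolon-delimited KEY=VALUE structure of an INFO cell can be taken apart with nothing but base R. The cell below is abbreviated from the example data shown later in this section; vcfR’s extract.info() performs the same extraction across all variants at once.

```r
# One INFO cell, abbreviated from the example data in this tutorial.
info   <- "AC=1;AF=0.028;AN=36;DP=502;MQ=59.12"
fields <- strsplit(info, ";", fixed = TRUE)[[1]]   # semicolon-delimited list
kv     <- strsplit(fields, "=", fixed = TRUE)      # each field is KEY=VALUE
vals   <- setNames(vapply(kv, `[`, character(1), 2),
                   vapply(kv, `[`, character(1), 1))
as.numeric(vals["DP"])   # read depth, summed over samples, for this variant
```

With a vcfR object in hand, extract.info(vcf, element = "DP", as.numeric = TRUE) returns this value for every variant.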
The gt (genotype) region contains information about each variant for each sample. The values for each variant and each sample are colon delimited. Multiple types of data for each genotype may be stored in this manner. The format of the data is specified by the FORMAT column (column nine). Here we see that we have information for GT, AD, DP, GQ and PL. The definitions of these acronyms can be referenced by querying the meta region, as demonstrated previously. Not every variant necessarily has the same information (e.g., SNPs and indels may be handled differently), so the rows are best treated independently. Different variant callers may include different information in this region.
vcf@gt[1:6, 1:4]
## FORMAT BL2009P4_us23 DDR7602
## [1,] "GT:AD:DP:GQ:PL" "0|0:62,0:62:99:0,190,2835" "0|0:12,0:12:39:0,39,585"
## [2,] "GT:AD:DP:GQ:PL" "1|0:5,5:10:99:111,0,114" NA
## [3,] "GT:AD:DP:GQ:PL" NA NA
## [4,] "GT:AD:DP:GQ:PL" "0|0:1,0:1:3:0,3,44" NA
## [5,] "GT:AD:DP:GQ:PL" "0|0:2,0:2:6:0,6,49" "0|0:1,0:1:3:0,3,34"
## [6,] "GT:AD:DP:GQ:PL" "0|0:2,0:2:6:0,6,49" "0|0:1,0:1:3:0,3,34"
## IN2009T1_us22
## [1,] "0|0:37,0:37:99:0,114,1709"
## [2,] "0|1:2,1:3:16:16,0,48"
## [3,] "0|0:2,0:2:6:0,6,51"
## [4,] "1|1:0,1:1:3:25,3,0"
## [5,] "0|0:1,0:1:3:0,3,31"
## [6,] "0|0:3,0:3:9:0,9,85"
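A single genotype cell can be decoded by pairing its colon-delimited values with the keys in the FORMAT column. The cell below is copied from the first row of the output above; vcfR’s extract.gt(), used later in this tutorial, applies the same logic to the whole gt matrix.

```r
# One genotype cell and its FORMAT key, copied from the output above.
fmt  <- "GT:AD:DP:GQ:PL"
cell <- "0|0:62,0:62:99:0,190,2835"
gt_info <- setNames(strsplit(cell, ":", fixed = TRUE)[[1]],
                    strsplit(fmt,  ":", fixed = TRUE)[[1]])
gt_info["GT"]   # the genotype itself; pipe-delimited because it is phased
gt_info["AD"]   # allelic depths for REF and ALT, comma-delimited
```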
Using the R package vcfR, we can read VCF format files into memory using the function read.vcfR(). Once in memory we can use the head() method to summarize the information in the three VCF regions.
vcf <- read.vcfR("pinfsc50_filtered.vcf.gz")
## Scanning file to determine attributes.
## File attributes:
## meta lines: 29
## header_line: 30
## variant count: 2190
## column count: 27
##
Meta line 29 read in.
## All meta lines processed.
## gt matrix initialized.
## Character matrix gt created.
## Character matrix gt rows: 2190
## Character matrix gt cols: 27
## skip: 0
## nrows: 2190
## row_num: 0
##
Processed variant 1000
Processed variant 2000
Processed variant: 2190
## All variants processed
head(vcf)
## [1] "***** Object of class 'vcfR' *****"
## [1] "***** Meta section *****"
## [1] "##fileformat=VCFv4.1"
## [1] "##source=\"GATK haplotype Caller, phased with beagle4\""
## [1] "##FILTER=<ID=LowQual,Description=\"Low quality\">"
## [1] "##FORMAT=<ID=AD,Number=.,Type=Integer,Description=\"Allelic depths fo [Truncated]"
## [1] "##FORMAT=<ID=DP,Number=1,Type=Integer,Description=\"Approximate read [Truncated]"
## [1] "##FORMAT=<ID=GQ,Number=1,Type=Integer,Description=\"Genotype Quality\">"
## [1] "First 6 rows."
## [1]
## [1] "***** Fixed section *****"
## CHROM POS ID REF ALT QUAL FILTER
## [1,] "Supercontig_1.50" "80058" NA "T" "G,TACTG" "3480.23" NA
## [2,] "Supercontig_1.50" "80063" NA "C" "T" "3016.89" NA
## [3,] "Supercontig_1.50" "80067" NA "A" "C" "3555.08" NA
## [4,] "Supercontig_1.50" "80073" NA "C" "A" "104.72" NA
## [5,] "Supercontig_1.50" "80074" NA "A" "G" "2877.74" NA
## [6,] "Supercontig_1.50" "80089" NA "A" "ACG" "2250.92" NA
## [1]
## [1] "***** Genotype section *****"
## FORMAT BL2009P4_us23
## [1,] "GT:AD:DP:GQ:PL" "1|0:25,3,0:28:45:45,0,1120,129,1134,1300"
## [2,] "GT:AD:DP:GQ:PL" "1|0:29,3:32:30:30,0,1335"
## [3,] "GT:AD:DP:GQ:PL" "1|0:31,3:34:27:27,0,1372"
## [4,] "GT:AD:DP:GQ:PL" "0|0:30,0:30:99:0,102,1530"
## [5,] "GT:AD:DP:GQ:PL" "0|0:30,0:30:93:0,93,1395"
## [6,] "GT:AD:DP:GQ:PL" "0|0:33,0:33:99:0,99,1485"
## DDR7602
## [1,] "1|0:19,7,0:26:99:237,0,777,300,804,1181"
## [2,] "1|0:20,7:27:99:234,0,819"
## [3,] "1|0:19,6:25:99:189,0,864"
## [4,] "0|0:26,0:26:87:0,87,1305"
## [5,] "1|0:21,4:25:99:147,0,867"
## [6,] "1|0:20,2:22:18:18,0,918"
## IN2009T1_us22
## [1,] "0|1:29,6,0:35:99:162,0,1229,252,1252,1512"
## [2,] "0|1:27,7:34:99:204,0,1232"
## [3,] "0|1:27,6:33:99:210,0,1155"
## [4,] "0|0:33,0:33:99:0,99,1485"
## [5,] "0|1:27,6:33:99:171,0,1116"
## [6,] "0|1:27,7:34:99:213,0,1113"
## LBUS5
## [1,] "0|1:19,7,0:26:99:237,0,777,300,804,1181"
## [2,] "0|1:20,7:27:99:234,0,819"
## [3,] "0|1:19,6:25:99:189,0,864"
## [4,] "0|0:26,0:26:87:0,87,1305"
## [5,] "0|1:21,4:25:99:147,0,867"
## [6,] "0|1:20,2:22:18:18,0,918"
## NL07434
## [1,] "0|1:45,19,0:64:99:643,0,1782,793,1866,2825"
## [2,] "0|1:42,18:60:99:655,0,1748"
## [3,] "0|1:41,16:57:99:584,0,1737"
## [4,] "0|0:56,0:56:99:0,172,2565"
## [5,] "0|1:39,16:55:99:629,0,1709"
## [6,] "0|1:34,12:46:99:393,0,1518"
## [1] "First 6 columns only."
## [1]
## [1] "Unique GT formats:"
## [1] "GT:AD:DP:GQ:PL"
## [1]
After we have made any manipulations of the file we can save it as a VCF file with the function write.vcf().
write.vcf(vcf, "myVCFdata_filtered.vcf.gz")
write.vcf() will write a file to your working directory. We now have a summary of our VCF file which we can use to help understand what forms of information are contained within it. This information can be further explored with plotting functions and used to filter the VCF file for high quality variants, as we will see in the next section.
?vcfR
queryMETA(vcf, element = 'AD')
## [[1]]
## [1] "FORMAT=ID=AD"
## [2] "Number=."
## [3] "Type=Integer"
## [4] "Description=Allelic depths for the ref and alt alleles in the order listed"
tail(getFIX(vcf)) # if you don't want to show the last INFO column
## CHROM POS ID REF ALT QUAL FILTER
## [2185,] "Supercontig_1.50" "1041878" NA "T" "C" "219.68" NA
## [2186,] "Supercontig_1.50" "1041989" NA "G" "A" "803.29" NA
## [2187,] "Supercontig_1.50" "1041997" NA "GA" "G" "472.52" NA
## [2188,] "Supercontig_1.50" "1042000" NA "A" "T" "481.57" NA
## [2189,] "Supercontig_1.50" "1042001" NA "T" "C" "93.70" NA
## [2190,] "Supercontig_1.50" "1042002" NA "C" "T" "487.57" NA
# or?
tail(vcf@fix) # if you want to show the last INFO column
## CHROM POS ID REF ALT QUAL FILTER
## [2185,] "Supercontig_1.50" "1041878" NA "T" "C" "219.68" NA
## [2186,] "Supercontig_1.50" "1041989" NA "G" "A" "803.29" NA
## [2187,] "Supercontig_1.50" "1041997" NA "GA" "G" "472.52" NA
## [2188,] "Supercontig_1.50" "1042000" NA "A" "T" "481.57" NA
## [2189,] "Supercontig_1.50" "1042001" NA "T" "C" "93.70" NA
## [2190,] "Supercontig_1.50" "1042002" NA "C" "T" "487.57" NA
## INFO
## [2185,] "AC=1;AF=0.028;AN=36;BaseQRankSum=-0.961;ClippingRankSum=-0.868;DP=502;FS=0.000;InbreedingCoeff=-0.0289;MLEAC=1;MLEAF=0.028;MQ=59.12;MQ0=0;MQRankSum=-3.092;QD=11.56;ReadPosRankSum=0.765;SOR=0.665"
## [2186,] "AC=2;AF=0.056;AN=36;BaseQRankSum=-4.021;ClippingRankSum=-1.419;DP=482;FS=0.000;InbreedingCoeff=0.9586;MLEAC=2;MLEAF=0.056;MQ=54.82;MQ0=0;MQRankSum=-0.944;QD=27.70;ReadPosRankSum=-2.468;SOR=0.823"
## [2187,] "AC=5;AF=0.139;AN=36;BaseQRankSum=1.722;ClippingRankSum=1.829;DP=490;FS=5.543;InbreedingCoeff=-0.1625;MLEAC=5;MLEAF=0.139;MQ=54.37;MQ0=0;MQRankSum=0.295;QD=6.14;ReadPosRankSum=-0.116;SOR=1.724"
## [2188,] "AC=5;AF=0.139;AN=36;BaseQRankSum=1.506;ClippingRankSum=-0.134;DP=475;FS=5.618;InbreedingCoeff=-0.1625;MLEAC=5;MLEAF=0.139;MQ=54.23;MQ0=0;MQRankSum=-0.193;QD=6.78;ReadPosRankSum=0.529;SOR=1.719"
## [2189,] "AC=1;AF=0.028;AN=36;BaseQRankSum=-1.590;ClippingRankSum=-0.469;DP=477;FS=3.099;InbreedingCoeff=-0.0304;MLEAC=1;MLEAF=0.028;MQ=54.21;MQ0=0;MQRankSum=1.586;QD=4.46;ReadPosRankSum=-3.047;SOR=1.591"
## [2190,] "AC=5;AF=0.139;AN=36;BaseQRankSum=1.731;ClippingRankSum=-0.674;DP=477;FS=5.655;InbreedingCoeff=-0.1625;MLEAC=5;MLEAF=0.139;MQ=54.21;MQ0=0;MQRankSum=0.245;QD=6.87;ReadPosRankSum=0.985;SOR=1.716"
# I imagine understanding the quality of the reads would be useful but I'm not entirely clear
# on what the quantity of "QUAL" means. Like why does the plot show many low-quality reads?
plot(vcf)
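The question in the comment above has a concrete answer: per the VCF specification, QUAL is a Phred-scaled quality score, -10 * log10 of the probability that the call in ALT is wrong, so larger is better and QUAL 30 corresponds to roughly a 1-in-1000 chance the variant is spurious. A quick back-transformation makes this tangible; the six values below are the QUAL column from getFIX(vcf) earlier in this section, and phred_to_p is a helper name made up here.

```r
# QUAL is Phred-scaled: -10 * log10(P(the ALT call is wrong)).
qual <- c(44.44, 144.21, 68.49, 108.07, 92.78, 58.38)  # first six variants, from getFIX(vcf)
phred_to_p <- function(q) 10^(-q / 10)                 # back-transform to an error probability
signif(phred_to_p(qual), 3)
# For the whole file: hist(as.numeric(vcf@fix[, "QUAL"]))
```

Many low values in the plot therefore mean many variants the caller itself is unsure of, which is why downstream quality filtering is expected.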
colnames(vcf@gt)
## [1] "FORMAT" "BL2009P4_us23" "DDR7602" "IN2009T1_us22"
## [5] "LBUS5" "NL07434" "P10127" "P10650"
## [9] "P11633" "P12204" "P13527" "P1362"
## [13] "P13626" "P17777us22" "P6096" "P7722"
## [17] "RS2009P1_us8" "blue13" "t30-4"
Analysis of genome data for populations can be seen as similar to the analyses of other marker systems discussed in previous chapters of this book, except that genome data analyses include larger quantities of data. For example, VCF data (discussed in ‘reading VCF data’) can be read into R using vcfR (Knaus and Grünwald 2017) to create a vcfR object. This object can be converted into a genlight object (Jombart 2008) and then a snpclone object (Kamvar, Tabima, and Grünwald 2014; Kamvar, Brooks, and Grünwald 2015) if deemed necessary. Analysis on these objects has been covered in previous sections. Genome scale data provides additional analytical options as well. For example, when assumptions about the neutrality of the majority of the genome are appropriate, this can be used as a null hypothesis to help identify markers that deviate from it. Here we’ll provide examples of how genomic data may be analyzed.
For genomics examples we’ll use the pinfsc50 dataset. The pinfsc50 dataset comes from a number of published Phytophthora infestans (the potato late blight pathogen) genomics projects where the data has been subset to supercontig_1.50. This dataset is available as a stand-alone R package (Knaus and Grünwald 2017) or can be downloaded from the course repo. Subsetting the data to one supercontig creates a dataset of a size that can be conveniently used for examples. This dataset illustrates some important strengths and weaknesses of these studies. A strength is the amount of data we have for each individual. Among the weaknesses are that the samples are ‘opportunistic’ in that we have no control over the design of the experiment. Also, because of the large investment in data per sample, there is a relatively small number of samples.
We’ll read our VCF data into R using the function read.vcfR(). This is data from the pinfsc50 data set that we filtered for quality in the section reading VCF data. Once the file is read in we can validate its contents using the show method which is implemented by executing the object’s name at the prompt.
library('vcfR')
vcf <- read.vcfR("pinfsc50_filtered.vcf.gz")
## Scanning file to determine attributes.
## File attributes:
## meta lines: 29
## header_line: 30
## variant count: 2190
## column count: 27
##
Meta line 29 read in.
## All meta lines processed.
## gt matrix initialized.
## Character matrix gt created.
## Character matrix gt rows: 2190
## Character matrix gt cols: 27
## skip: 0
## nrows: 2190
## row_num: 0
##
Processed variant 1000
Processed variant 2000
Processed variant: 2190
## All variants processed
# This data can also be downloaded from the course repo - https://github.com/jeffreyblanchard/EvoGeno/blob/master/Grunwald/pinfsc50_filtered.vcf.gz
vcf
## ***** Object of Class vcfR *****
## 18 samples
## 1 CHROMs
## 2,190 variants
## Object size: 2.9 Mb
## 0 percent missing data
## ***** ***** *****
The show method reports that we have 18 samples and 2,190 variants. If this matches our expectation then we can proceed.
Different R packages have created different data structures to hold your data when it is imported into R. This is analogous to the different file formats you may have used to analyze your data in software outside of R. We’ve tried to engineer a suite of functions to convert data structures among the various R packages we typically use. The R package adegenet is a popular R package used for population genetic analysis and it works on data structures called ‘genlight’ objects. Here we use the function vcfR2genlight() to convert our vcfR object to a genlight object. This makes our VCF data available to the analyses in adegenet.
x <- vcfR2genlight(vcf)
## Warning in vcfR2genlight(vcf): Found 44 loci with more than two alleles.
## Objects of class genlight only support loci with two alleles.
## 44 loci will be omitted from the genlight object.
x
## /// GENLIGHT OBJECT /////////
##
## // 18 genotypes, 2,146 binary SNPs, size: 240.4 Kb
## 0 (0 %) missing data
##
## // Basic content
## @gen: list of 18 SNPbin
##
## // Optional content
## @ind.names: 18 individual labels
## @loc.names: 2146 locus labels
## @chromosome: factor storing chromosomes of the SNPs
## @position: integer storing positions of the SNPs
## @other: a list containing: elements without names
A genlight object only supports biallelic, or binary, variants, that is, variants with no more than two alleles. However, variant call format data can include multiple alleles. When we created our genlight object we received a warning message indicating that our vcfR object had variants with more than two alleles and that it was being subset to only biallelic variants. This is one of several important differences in how data is handled in VCF data versus genlight objects.
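Because multiple alternate alleles are comma-delimited in the ALT column, counting alternates per variant shows exactly which loci a genlight conversion will drop. A base-R sketch, using the six ALT entries from the fixed-section output earlier in this section:

```r
# ALT entries from the fixed section shown earlier; commas separate multiple alternates.
alt <- c("G,TACTG", "T", "C", "A", "G", "ACG")
n_alt <- lengths(strsplit(alt, ",", fixed = TRUE))  # alternate alleles per variant
biallelic <- n_alt == 1                             # REF plus one ALT = two alleles
sum(!biallelic)                                     # loci a genlight conversion would drop
```

On the full dataset the same count is 44, matching the warning from vcfR2genlight().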
Another important difference between VCF data and genlight objects is how the genotypes are stored. In VCF data the alleles are delimited by either a pipe or a forward slash (‘|’, ‘/’ respectively). Because genlight objects only use biallelic loci, the genotypes can be recoded as 0, 1 and 2: homozygous for the reference (zero) allele, heterozygous, and homozygous for the first alternate allele, respectively. We can validate this by checking a few select genotypes from both the vcfR object and the genlight object.
# vcfR
gt <- extract.gt(vcf, element = "GT")
gt[c(2,6,18), 1:3]
## BL2009P4_us23 DDR7602 IN2009T1_us22
## Supercontig_1.50_80063 "1|0" "1|0" "0|1"
## Supercontig_1.50_80089 "0|0" "1|0" "0|1"
## Supercontig_1.50_94108 "0|1" "0|1" "1|1"
# genlight
t(as.matrix(x))[c(1,5,17), 1:3]
## BL2009P4_us23 DDR7602 IN2009T1_us22
## Supercontig_1.50_80063 1 1 1
## Supercontig_1.50_80089 0 1 1
## Supercontig_1.50_94108 1 1 2
Note that in VCF data the samples are in columns and the variants are in rows. In genlight objects, and many other R objects, the samples are in rows while the variants are in columns. We can use the transpose function (t()) to convert between these two states.
Yet another difference between VCF data and genlight objects is that in VCF data there is no concept of ‘population.’ The package adegenet was designed specifically for the analysis of population data, so its genlight object has a place (a ‘slot’) to hold this information. Because there is no population data in VCF data, if we want population data we’ll have to set it ourselves.
library(adegenet)
pop(x) <- as.factor(c("us", "eu", "us", "af", "eu", "us", "mx", "eu", "eu", "sa", "mx", "sa", "us", "sa", "Pmir", "us", "eu", "eu"))
popNames(x)
## [1] "af" "eu" "mx" "Pmir" "sa" "us"
Our population designation consists of a vector that is the same length as the number of samples we have, where each element indicates which population each sample belongs to. By using the as.factor() function we transform the vector into a factor. A factor understands that all of the elements named “us” or “eu” are part of the same group. This is why when we ask for the popNames we get a vector where each population is represented only once.
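The factor behaviour described above can be seen without any genetics packages at all; this is the same population vector on its own, in base R.

```r
# as.factor() collapses repeated labels into a set of levels.
pops <- as.factor(c("us", "eu", "us", "af", "eu", "us", "mx", "eu", "eu",
                    "sa", "mx", "sa", "us", "sa", "Pmir", "us", "eu", "eu"))
nlevels(pops)   # six populations, as popNames(x) reported
table(pops)     # and how many of the 18 samples fall in each
```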
Yet another difference between VCF data and genlight objects is the concept of ploidy. In VCF data each variant is treated independently. This means that in theory VCF data may contain data that is of mixed ploidy. In a genlight object different samples may be of different ploidy levels, but within each sample all of its loci must be of the same ploidy level. Here we’ll set all the samples in the genlight object to diploid.
ploidy(x) <- 2
Let’s create a pairwise genetic distance matrix for individuals or populations (i.e., groups of individuals).
To summarize, we can create a distance matrix from a genlight object using dist():
x.dist <- dist(x)
Note that this call reuses the genlight object x we created above. We can find documentation for this function with ?dist.
There are also functions to create distance matrices from genlight objects that exist in other packages. The function bitwise.dist() in the package poppr is an example. We can find documentation for this function with ?poppr::bitwise.dist. Again, you need to know where to look for this information or you may not find it. We can use this function as follows.
x.dist <- poppr::bitwise.dist(x)
Again, this reuses the genlight object x from above. Lastly, because you can use as.matrix() on your genlight object, and most distance algorithms can use this matrix as input, you can use this as an intermediate step: create a matrix from your genlight object and pass it to your distance algorithm of choice. Options include functions in ade4, vegdist() in vegan, or daisy() in cluster. Note that it is up to you to determine which distance metric is best for your particular analysis. A number of options therefore exist for creating distance matrices from genlight objects.
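The as.matrix() intermediate step can be sketched with a toy 0/1/2 matrix standing in for as.matrix(x); the sample names here are made up, and the metric choice is yours.

```r
# A toy 0/1/2 genotype matrix; rows are samples, columns are loci.
m <- rbind(s1 = c(0, 1, 2, 0),
           s2 = c(0, 0, 2, 1),
           s3 = c(2, 1, 0, 0))
d <- dist(m)                 # Euclidean by default; swap in your preferred metric
as.matrix(d)["s1", "s2"]     # pairwise distance between two samples
```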
Genomic projects frequently incorporate several types of data. For example, the reference sequence may be stored as a FASTA format file, variants (SNPs, indels, etc.) may be stored in a variant call format (VCF) file, while annotations may be stored in GFF or BED format (tabular data). Genome browsers can be used to integrate these different data types. However, genome browsers typically lack a manipulation environment; they simply display existing files. The R environment includes a tremendous amount of statistical support that is both specific to genetics and genomics as well as more general tools (e.g., the linear model and its extensions). The R package vcfR provides a link between VCF data and the R environment, and it includes a simple genome browser to help visualize the effect of manipulations. Here we explore how we can use vcfR to survey genomic data for interesting features.
In this example we will begin by locating the example data from the pinfsc50 package. This is a separate package from vcfR that you will need to install. If you haven’t installed it already, you can install it with install.packages("pinfsc50"). For data from your own research activities you may want to omit the system.file() steps and directly use your filenames in the input steps.
library(vcfR)
# Find the files.
vcf_file <- system.file("extdata", "pinf_sc50.vcf.gz", package = "pinfsc50")
dna_file <- system.file("extdata", "pinf_sc50.fasta", package = "pinfsc50")
gff_file <- system.file("extdata", "pinf_sc50.gff", package = "pinfsc50")
# Input the files.
vcf <- read.vcfR(vcf_file, verbose = FALSE)
dna <- ape::read.dna(dna_file, format = "fasta")
gff <- read.table(gff_file, sep="\t", quote="")
# Create a chromR object.
chrom <- create.chromR(name="Supercontig", vcf=vcf, seq=dna, ann=gff, verbose=TRUE)
## Names in vcf:
## Supercontig_1.50
## Names of sequences:
## Supercontig_1.50 of Phytophthora infestans T30-4
## Warning in create.chromR(name = "Supercontig", vcf = vcf, seq = dna, ann = gff, :
## Names in variant data and sequence data do not match perfectly.
## If you choose to proceed, we'll do our best to match the data.
## But prepare yourself for unexpected results.
## Names in annotation:
## Supercontig_1.50
## Initializing var.info slot.
## var.info slot initialized.
Note that a warning message indicates that the names in all of the data sources do not match perfectly. It has been my experience that this is a frequent occurrence in genome projects. Instead of asking the user to create duplicate files that have the same data but standardized names, vcfR allows the user to exercise some judgement. If you see this message and feel the names are correct you can ignore it and proceed. In this case we see that a chromosome is named ‘Supercontig_1.50’ in the VCF data but ‘Supercontig_1.50 of Phytophthora infestans T30-4’ in the FASTA (sequence) file. Because we know that for this specific project these are synonyms, we can safely ignore the warning and proceed.
Once we have created our chromR object we can verify that its contents are what we expect. By executing the object’s name at the console, with no other arguments, we invoke the object’s ‘show’ method. The show method for chromR objects presents a summary of the object’s contents.
chrom
## ***** Class chromR, method Show *****
## Name: Supercontig
## Chromosome length: 1,042,442 bp
## Chromosome labels: Supercontig_1.50 of Phytophthora infestans T30-4
## Annotation (@ann) count: 223
## Annotation chromosome names: Supercontig_1.50
## Variant (@vcf) count: 22,031
## Variant (@vcf) chromosome names: Supercontig_1.50
## Object size: 24.1 Mb
## Use head(object) for more details.
## ***** End Show (chromR) *****
plot(chrom)
The read depth here is a sum over all samples. We see a peak that represents the depth at which most of our genome was sequenced. Low regions of sequence depth may indicate variants where there may not be enough information to call a genotype. Variants of high coverage may represent repetitive regions of the genome where the reference does not contain all the copies, so the reads pile up on the fraction of repeats that were successfully assembled. These regions may violate the ploidy assumptions made by variant callers and are therefore a target for quality filtering. Mapping quality is strongly peaked at 60 but also contains variants that deviate from this common value. Quality (QUAL) is less easily interpreted. It appears that most of our variants are of low quality with very few of them being of high quality. It is important to remember that while everyone would like high quality, quality is frequently difficult to measure. The simplest interpretation here is that QUAL may not be a good parameter for judging your variants. The last panel, for SNP densities, is empty because this data is created during the processing of chromR objects, which we will discuss below.
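Filtering on these panels amounts to a per-variant logical mask. vcfR provides masker() for this on chromR objects; the thresholds below are illustrative only, and the last two depth values are made up to show a variant falling outside the band (the first three are DP values from the INFO output earlier).

```r
# With a chromR object, something like this applies the mask (illustrative thresholds):
#   chrom <- masker(chrom, min_QUAL = 1, min_DP = 300, max_DP = 700,
#                   min_MQ = 59, max_MQ = 61)
# The mask itself is just a per-variant logical test, e.g. on summed depth:
dp <- c(502, 482, 490, 1200, 150)   # first three from the INFO output; last two hypothetical
keep <- dp >= 300 & dp <= 700       # TRUE where depth sits in the trusted band
keep
```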
chromoqc(chrom, dp.alpha = 66)
Our second plot, called a chromo plot, displays the same information as the plot method except that it distributes the data along chromosomal coordinates. It also includes a representation of the annotation data. The contents of this plot are somewhat flexible in that they depend on what data are present in the chromR object.
Creation and processing of a chromR object has been divided into separate tasks. Creation loads the data into the chromR object and should typically only be required once. Processing the chromR object generates summaries of the data. Some of these summaries will need to be updated as the chromR object is updated. For example, if the size of the sliding window used to summarize variant density and GC content is changed the chromR object will need to be processed to update this information.
chrom <- proc.chromR(chrom, verbose = TRUE)
## Nucleotide regions complete.
## elapsed time: 0.345
## N regions complete.
## elapsed time: 0.27
## Population summary complete.
## elapsed time: 0.375
## window_init complete.
## elapsed time: 0.001
## windowize_fasta complete.
## elapsed time: 0.165
## windowize_annotations complete.
## elapsed time: 0.019
## windowize_variants complete.
## elapsed time: 0.001
plot(chrom)
Subsequent to processing, our plot is identical to its previous presentation except that we now have variant densities. When we observe the chromoqc plot we see that we now have variant densities and nucleotide content, as well as a representation of where in our reference we have called nucleotides (A, C, G or T) and where we have ambiguous nucleotides.
chromoqc(chrom, dp.alpha = 66)
The above data is an example of visualizing raw data that has come from a variant caller and other automated sources. In our section on quality control we presented methods on how to filter variants on various parameters as an attempt to omit low quality variants. We can use this data to create a chromR object and compare it to the above data.
#vcf <- read.vcfR("pinfsc50_qc.vcf.gz", verbose = FALSE)
vcf <- read.vcfR("pinfsc50_filtered.vcf.gz", verbose = FALSE)
chrom <- create.chromR(name="Supercontig", vcf=vcf, seq=dna, ann=gff, verbose=FALSE)
chrom <- proc.chromR(chrom, verbose = FALSE)
chromoqc(chrom, dp.alpha = 66)
We have a smaller quantity of data after our quality control steps. However, there do appear to be a few improvements. First, the read depth is now fairly uniform and lacks the large variation in depth we saw in the raw data. In genomics projects our naive assumption is that we would sequence all regions of the genome at the same depth, so this change brings the data closer to our expectation. Second, the mapping quality appears relatively constant, and the variants with low mapping quality have been omitted. If we feel that ‘mapping quality’ is a reasonable assessment of quality, we may interpret this as an improvement. These are methods we feel improve the quality of our datasets prior to analysis.
When we process a chromR object, two forms of tabular data are created. First, summaries are made on a per-variant basis. These include sample size (minus missing data), allele counts, heterozygosity and effective size. Second, summaries are made on a per-window basis. Window size can be changed with the win.size parameter of the function proc.chromR(). Window-based summaries include nucleotide content per window (including missing data, so you can adjust window size for analyses if necessary), the number of genic sites per window (when annotation information was provided) and the number of variants per window.
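For example, to summarize over 10 kbp windows instead of the 1 kbp default, the object can simply be re-processed (a sketch; only win.size is changed here):

```r
# Re-process the existing chromR object with 10 kbp windows;
# the per-window summaries in chrom@win.info are rebuilt accordingly.
chrom <- proc.chromR(chrom, win.size = 1e4, verbose = FALSE)
head(chrom@win.info)
```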
head(chrom@var.info)
## CHROM POS MQ DP mask n Allele_counts He Ne
## 1 Supercontig_1.50 80058 58.96 508 TRUE 18 25,10,1 0.64364712 2.806207
## 2 Supercontig_1.50 80063 58.95 514 TRUE 18 25,11 0.42438272 1.737265
## 3 Supercontig_1.50 80067 58.88 499 TRUE 18 23,13 0.46141975 1.856734
## 4 Supercontig_1.50 80073 58.77 490 TRUE 18 35,1 0.05401235 1.057096
## 5 Supercontig_1.50 80074 58.75 482 TRUE 18 26,10 0.40123457 1.670103
## 6 Supercontig_1.50 80089 58.80 481 TRUE 18 25,11 0.42438272 1.737265
head(chrom@win.info)
## CHROM window start end length A C G T N other genic
## 1 Supercontig_1.50 1 1 1000 1000 267 213 293 227 0 0 0
## 2 Supercontig_1.50 2 1001 2000 1000 283 206 309 202 0 0 0
## 3 Supercontig_1.50 3 2001 3000 1000 229 213 235 177 146 0 0
## 4 Supercontig_1.50 4 3001 4000 1000 0 0 0 0 1000 0 0
## 5 Supercontig_1.50 5 4001 5000 1000 0 0 0 0 1000 0 0
## 6 Supercontig_1.50 6 5001 6000 1000 0 0 0 0 1000 0 0
## variants
## 1 0
## 2 0
## 3 0
## 4 0
## 5 0
## 6 0
While loading entire genomes into memory may not be practical due to resource limitations, it is frequently practical to break a genome up into fractions that can be processed given the resources available on any system. By processing a genome by chromosomes, or some other fraction, and saving this tabular data to file you can perform genome scans in an attempt to identify interesting features.
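One way to organize such a scan, assuming the VCF data have been split into one file per chromosome, might look like the following sketch. The chromosome names and file-naming scheme here are hypothetical placeholders.

```r
# Hypothetical per-chromosome genome scan: each chromosome is loaded,
# processed and summarized separately, and the per-variant table is
# appended to a file for later analysis.
my_chroms <- c("chr1", "chr2", "chr3")  # placeholder chromosome names
for (i in my_chroms) {
  vcf   <- read.vcfR(paste0(i, ".vcf.gz"), verbose = FALSE)
  chrom <- create.chromR(name = i, vcf = vcf, verbose = FALSE)
  chrom <- proc.chromR(chrom, verbose = FALSE)
  write.table(chrom@var.info, file = "genome_scan.csv", sep = ",",
              append = (i != my_chroms[1]), col.names = (i == my_chroms[1]),
              row.names = FALSE)
}
```

Only one chromosome is held in memory at a time, so the resource footprint stays bounded regardless of genome size.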
A fundamental question in most population studies is whether populations are diverse and whether this diversity is shared among the populations. To address the question of within-population diversity, geneticists typically report heterozygosity. This is the probability that two alleles randomly chosen from a population will be different (Nei 1973). Ecologists may know this as Simpson’s Index (Simpson 1949). To address differentiation, population geneticists typically utilize FST or one of its analogues. Population differentiation measured by FST was originally proposed by Sewall Wright (Wright 1949). This was later extended to a method based on diversity by Masatoshi Nei (Nei 1973). As researchers applied these metrics to microsatellites, genetic markers with a large number of alleles, it became clear that Nei’s measure would not correctly range from zero to one, so Philip Hedrick proposed a correction (Hedrick 2005). More recently, Lou Jost proposed another alternative (Jost 2008). You can tell a topic is popular when so many variants of it are generated, and there are more variants than mentioned here. A nice discussion as to which measure may be appropriate for your data was posted to the Molecular Ecologist blog under the title ‘should I use FST, GST or D?’.
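To make these quantities concrete, here is a small worked example in base R for one biallelic locus in two equally sized populations. The allele frequencies are made up for illustration; the formulas follow the Nei (1973) and Hedrick (2005) definitions for k populations.

```r
# Made-up allele frequencies for one biallelic locus in two populations.
p1 <- c(0.9, 0.1)  # population 1
p2 <- c(0.2, 0.8)  # population 2
k  <- 2            # number of populations

Hs   <- mean(c(1 - sum(p1^2), 1 - sum(p2^2)))  # mean within-population diversity
pbar <- (p1 + p2) / 2                          # pooled allele frequencies
Ht   <- 1 - sum(pbar^2)                        # total diversity
Gst  <- (Ht - Hs) / Ht                         # Nei's Gst

# Hedrick's standardized G'st = Gst / Gst_max
Gst_max  <- ((k - 1) * (1 - Hs)) / (k - 1 + Hs)
Gprimest <- Gst / Gst_max

round(c(Hs = Hs, Ht = Ht, Gst = Gst, Gprimest = Gprimest), 3)
# Hs 0.250, Ht 0.495, Gst 0.495, Gprimest 0.825
```

Note that even with these quite divergent populations, Gst falls well short of one; the standardized G'ST corrects for the ceiling imposed by within-population diversity.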
In vcfR, the function genetic_diff() was implemented to measure population diversity and differentiation. Because VCF data typically do not include population information, we’ll have to supply it as a factor. The method ‘nei’ employed here is based on the methods reported by Hedrick (Hedrick 2005), with the exception that the heterozygosities are weighted by the number of alleles observed in each population. This was inspired by hierfstat::pairwise.fst(), which uses the number of individuals observed in each population to weight the heterozygosities. By using the number of alleles observed instead of the number of individuals, we remove an assumption about how many alleles each individual may contribute. That is, we should be able to accommodate samples of mixed ploidy.
library(vcfR)
data(vcfR_example)
pop <- as.factor(c("us", "eu", "us", "af", "eu", "us", "mx", "eu", "eu", "sa", "mx", "sa", "us", "sa", "Pmir", "us", "eu", "eu"))
myDiff <- genetic_diff(vcf, pops = pop, method = 'nei')
knitr::kable(head(myDiff[,1:15]))
CHROM | POS | Hs_af | Hs_eu | Hs_mx | Hs_Pmir | Hs_sa | Hs_us | Ht | n_af | n_eu | n_mx | n_Pmir | n_sa | n_us |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Supercontig_1.50 | 2 | 0 | 0.0 | 0.000 | 0.5 | 0.000 | 0.00 | 0.0798611 | 2 | 4 | 4 | 2 | 4 | 8 |
Supercontig_1.50 | 246 | NaN | 0.0 | 0.375 | NaN | 0.000 | 0.50 | 0.3512397 | 0 | 4 | 4 | 0 | 6 | 8 |
Supercontig_1.50 | 549 | NaN | 0.0 | NaN | NaN | NaN | 0.50 | 0.4444444 | 0 | 2 | 0 | 0 | 0 | 4 |
Supercontig_1.50 | 668 | NaN | 0.5 | 0.000 | NaN | 0.000 | 0.50 | 0.5000000 | 0 | 4 | 2 | 0 | 2 | 8 |
Supercontig_1.50 | 765 | 0 | 0.0 | 0.000 | 0.0 | 0.000 | 0.00 | 0.1107266 | 2 | 12 | 4 | 2 | 4 | 10 |
Supercontig_1.50 | 780 | 0 | 0.0 | 0.000 | 0.0 | 0.375 | 0.18 | 0.1244444 | 2 | 8 | 4 | 2 | 4 | 10 |
The function returns the chromosome and position of each variant as provided in the VCF data. This should allow you to align its output with the VCF data. The heterozygosities for each population are reported as well as the total heterozygosity, followed by the number of alleles observed in each population. Note that in some populations zero alleles were observed. Populations with zero alleles reported heterozygosities of ‘NaN’ because of this absence of data.
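Because the output is row-aligned with the VCF, a quick sanity check using vcfR's accessors (a sketch, assuming the vcf and myDiff objects from above) can confirm the correspondence:

```r
# Confirm that genetic_diff() output is aligned row-for-row with the VCF.
stopifnot(all(getCHROM(vcf) == myDiff$CHROM))
stopifnot(all(getPOS(vcf) == myDiff$POS))
```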
knitr::kable(head(myDiff[,16:19]))
Gst | Htmax | Gstmax | Gprimest |
---|---|---|---|
0.4782609 | 0.7951389 | 0.9475983 | 0.5047085 |
NaN | 0.8057851 | NaN | NaN |
NaN | 0.6666667 | NaN | NaN |
NaN | 0.8125000 | NaN | NaN |
1.0000000 | 0.7543253 | 1.0000000 | 1.0000000 |
0.1160714 | 0.8000000 | 0.8625000 | 0.1345756 |
The remaining columns contain GST, the maximum heterozygosity, the maximum GST and finally G′ST. The maximum heterozygosity and the maximum GST are intermediary values used to calculate G′ST. They are typically not reported but provide values to help validate that G′ST was calculated correctly. Note that the populations that had zero alleles, and therefore a heterozygosity of ‘NaN’, contributed to GSTs that were also ‘NaN’. To avoid this you may want to consider omitting populations with a small sample size or that contain a large amount of missing data.
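One way to omit small populations, sketched below with vcfR's bracket operator (the first column of the gt slot is FORMAT, so it must be retained when subsetting samples), is to drop those samples and recompute:

```r
# Drop samples from the populations represented by only one or two
# individuals in this example ('af', 'mx' and 'Pmir') and recompute.
keep    <- !pop %in% c("af", "mx", "Pmir")
vcf2    <- vcf[, c(TRUE, keep)]  # TRUE retains the FORMAT column
myDiff2 <- genetic_diff(vcf2, pops = droplevels(pop[keep]), method = 'nei')
```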
We now have information for each variant in the VCF data. Because this is typically a large quantity of information, we’ll want to summarize it. One way is to take averages of the data.
knitr::kable(round(colMeans(myDiff[,c(3:9,16,19)], na.rm = TRUE), digits = 3))
x | |
---|---|
Hs_af | 0.176 |
Hs_eu | 0.188 |
Hs_mx | 0.168 |
Hs_Pmir | 0.052 |
Hs_sa | 0.198 |
Hs_us | 0.155 |
Ht | 0.247 |
Gst | 0.595 |
Gprimest | 0.632 |
Another way to summarize data is to use violin plots.
library(reshape2)
library(ggplot2)
dpf <- melt(myDiff[,c(3:8,19)], varnames=c('Index', 'Sample'), value.name = 'Depth', na.rm=TRUE)
## No id variables; using all as measure variables
p <- ggplot(dpf, aes(x=variable, y=Depth)) + geom_violin(fill="#2ca25f", adjust = 1.2)
p <- p + xlab("")
p <- p + ylab("")
p <- p + theme_bw()
p
My attempt:
ggplot(myDiff, aes(x=POS, y=Gprimest)) +
geom_point() +
xlab("Location on Chromosome/Supercontig") +
ylab("Heterozygosity (G′ST)") +
theme_bw()
## Warning: Removed 887 rows containing missing values (geom_point).
Jeff’s answer:
plot(getPOS(vcf), myDiff$Gprimest,
pch = 20,
col = "#1E90FF44",
xlab = "",
ylab = "",
ylim = c(0, 1),
xaxt = "n")
axis(side = 1,
at = seq(0, 1e5, by = 1e4),
labels = seq(0, 100, by = 10))
title(xlab='Genomic position (Kbp)')
title(ylab = expression(italic("G'"["ST"])))
Small sample size
table(pop)
## pop
## af eu mx Pmir sa us
## 1 6 2 1 3 5
chromoqc(chrom, dp.alpha = 66, xlim = c(2e05, 4e05))
queryMETA(vcf)
## [1] "FILTER=ID=LowQual"
## [2] "FORMAT=ID=AD"
## [3] "FORMAT=ID=DP"
## [4] "FORMAT=ID=GQ"
## [5] "FORMAT=ID=GT"
## [6] "FORMAT=ID=PL"
## [7] "GATKCommandLine=ID=HaplotypeCaller"
## [8] "INFO=ID=AC"
## [9] "INFO=ID=AF"
## [10] "INFO=ID=AN"
## [11] "INFO=ID=BaseQRankSum"
## [12] "INFO=ID=ClippingRankSum"
## [13] "INFO=ID=DP"
## [14] "INFO=ID=DS"
## [15] "INFO=ID=FS"
## [16] "INFO=ID=HaplotypeScore"
## [17] "INFO=ID=InbreedingCoeff"
## [18] "INFO=ID=MLEAC"
## [19] "INFO=ID=MLEAF"
## [20] "INFO=ID=MQ"
## [21] "INFO=ID=MQ0"
## [22] "INFO=ID=MQRankSum"
## [23] "INFO=ID=QD"
## [24] "INFO=ID=ReadPosRankSum"
## [25] "INFO=ID=SOR"
## [26] "1 contig=<IDs omitted from queryMETA"
Danecek, Petr, Adam Auton, Goncalo Abecasis, Cornelis A Albers, Eric Banks, Mark A DePristo, Robert E Handsaker, et al. 2011. “The Variant Call Format and VCFtools.” Bioinformatics 27 (15): 2156–8. https://doi.org/10.1093/bioinformatics/btr330.
Grünwald, Niklaus J, Bruce A McDonald, and Michael G Milgroom. 2016. “Population Genomics of Fungal and Oomycete Pathogens.” Annual Review of Phytopathology 54: 323–46. https://doi.org/10.1146/annurev-phyto-080614-115913.
Hedrick, Philip W. 2005. “A Standardized Genetic Differentiation Measure.” Evolution 59 (8): 1633–8. http://dx.doi.org/10.1111/j.0014-3820.2005.tb01814.x.
Jombart, Thibaut. 2008. “adegenet: A R Package for the Multivariate Analysis of Genetic Markers.” Bioinformatics 24 (11): 1403–5. https://doi.org/10.1093/bioinformatics/btn129.
Jost, Lou. 2008. “GST And Its Relatives Do Not Measure Differentiation.” Molecular Ecology 17 (18): 4015–26. http://dx.doi.org/10.1111/j.1365-294X.2008.03887.x.
Kamvar, Zhian N, Jonah C Brooks, and Niklaus J Grünwald. 2015. “Novel R Tools for Analysis of Genome-Wide Population Genetic Data with Emphasis on Clonality.” Frontiers in Genetics 6: 208. https://doi.org/10.3389/fgene.2015.00208.
Kamvar, Zhian N, Javier F Tabima, and Niklaus J Grünwald. 2014. “Poppr: An R Package for Genetic Analysis of Populations with Clonal, Partially Clonal, and/or Sexual Reproduction.” PeerJ 2: e281. https://doi.org/10.7717/peerj.281.
Knaus, Brian J, and Niklaus J Grünwald. 2017. “vcfR: A Package to Manipulate and Visualize Variant Call Format Data in R.” Molecular Ecology Resources 17 (1): 44–53. http://dx.doi.org/10.1111/1755-0998.12549.
Luikart, Gordon, Phillip R England, David Tallmon, Steve Jordan, and Pierre Taberlet. 2003. “The Power and Promise of Population Genomics: From Genotyping to Genome Typing.” Nature Reviews Genetics 4 (12): 981–94. https://doi.org/10.1038/nrg1226.
Nei, Masatoshi. 1973. “Analysis of Gene Diversity in Subdivided Populations.” Proceedings of the National Academy of Sciences 70 (12): 3321–3. http://www.pnas.org/content/70/12/3321.abstract.
Paradis, Emmanuel, Julien Claude, and Korbinian Strimmer. 2004. “APE: Analyses of Phylogenetics and Evolution in R Language.” Bioinformatics 20 (2): 289–90. https://doi.org/10.1093/bioinformatics/btg412.
Paradis, Emmanuel, Thierry Gosselin, Niklaus J Grünwald, Thibaut Jombart, Stéphanie Manel, and Hilmar Lapp. 2017. “Towards an Integrated Ecosystem of R Packages for the Analysis of Population Genetic Data.” Molecular Ecology Resources 17 (1): 1–4. https://doi.org/10.1111/1755-0998.12636.
Simpson, Edward H. 1949. “Measurement of Diversity.” Nature 163: 688. http://dx.doi.org/10.1038/163688a0.
Wright, Sewall. 1949. “The Genetical Structure of Populations.” Annals of Eugenics 15 (1): 323–54. https://doi.org/10.1111/j.1469-1809.1949.tb02451.x.