This document is currently under review and in BETA. It is incomplete and may contain inaccuracies. Expect changes to the content.
The tutorial outlines steps in detecting copy ratio alterations, more familiarly copy number variants (CNVs), as well as allelic segments in a single sample using GATK4. The tutorial (i) denoises case sample alignment data against a panel of normals (PoN) to obtain copy ratios (Tutorial#11682) and (ii) models segments from the copy ratios and allelic counts (Tutorial#11683). The latter modeling incorporates data from a matched control. The same workflow steps apply to targeted exome and whole genome sequencing data.
Tutorial#11682 covers sections 1–4. Section 1 prepares a genomic intervals list with PreprocessIntervals and collects read coverage counts across the intervals. Section 2 creates a CNV PoN with CreateReadCountPanelOfNormals using read coverage counts. Section 3 denoises read coverage data against the PoN with DenoiseReadCounts using principal component analysis. Section 4 plots the results of standardizing and denoising copy ratios against the PoN.
Tutorial#11683 covers sections 5–8. Section 5 collects counts of reference versus alternate alleles with CollectAllelicCounts. Section 6 incorporates copy ratio and allelic counts data to group contiguous copy ratio and allelic counts segments with ModelSegments using kernel segmentation and Markov-chain Monte Carlo. The tool can also segment either copy ratio data or allelic counts data alone. Both types of data together refine segmentation results in that segments are based on the same copy ratio and the same minor allele fraction. Section 7 calls amplification, deletion and neutral events for the segmented copy ratios. Finally, Section 8 plots the results of segmentation and estimated allele-specific copy ratios.
Plotting is across genomic loci on the x-axis and copy or allelic ratios on the y-axis. The first part of the workflow focuses on removing systematic noise from coverage counts and adjusts the data points vertically. The second part focuses on segmentation and groups the data points horizontally. The extent of grouping, or smoothing, is adjustable with ModelSegments parameters. These adjustments do not change the copy ratios; the denoising in the first part of the workflow remains invariant in the second part of the workflow. See Figure 3 of this poster for a summary of tutorial results.
► The official GATK4 workflow is capable of running efficiently on WGS data and provides much greater resolution, up to ~50-fold more resolution for tested data. In these ways, GATK4 CNV improves upon its predecessor workflows in GATK4.alpha and GATK4.beta. Validations are still in progress and therefore the workflow itself is in BETA status, even if most tools, with the exception of ModelSegments, are production ready. The ModelSegments tool is still in BETA status and may change in small but significant ways going forward. Use at your own risk.
The GATK4 CNV workflow offers a multitude of levers for fine-tuning analyses and for controls. Researchers are expected to tune workflow parameters on samples with similar copy number profiles as their case sample under scrutiny. Refer to each tool's documentation for descriptions of parameters.
Download tutorial_11682.tar.gz and tutorial_11683.tar.gz, either from the GoogleDrive or from the FTP site. To access the FTP site, leave the password field blank. If the GoogleDrive link is broken, please let us know. The tutorial also requires the GRCh38 reference FASTA, dictionary and index. These are available from the GATK Resource Bundle. For details on the example data, see Tutorial#11136's third footnote.
Alternatively, download the spacecade7/tutorial_11682_11683 docker image from DockerHub. The image contains GATK4.0.1.1 and the data necessary to run the tutorial commands, including the GRCh38 reference. Allocate at least 4GB memory to Docker before launching the container.
Before collecting coverage counts that forms the basis of copy number variant detection, we define the resolution of the analysis with a genomic intervals list. The extent of genomic coverage and the size of genomic intervals in the intervals list factor towards resolution.
Preparing a genomic intervals list is necessary whether an analysis is on targeted exome data or whole genome data. In the case of exome data, we pad the target regions of the capture kit. In the case of whole genome data, we divide the reference genome into equally sized intervals or bins. In either case, we use PreprocessIntervals to prepare the intervals list.
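The two interval-preparation modes can be sketched in a few lines of Python. This is a conceptual illustration only, with hypothetical function names; PreprocessIntervals itself also handles reference bounds, interval merging and Picard-style headers.

```python
def bin_contig(contig, length, bin_length=1000):
    """Divide a contig of the given length into equal-sized bins
    (1-based, inclusive). The final bin is truncated at the contig end."""
    return [(contig, start, min(start + bin_length - 1, length))
            for start in range(1, length + 1, bin_length)]

def pad_interval(contig, start, end, contig_length, padding=250):
    """Expand a target interval by `padding` bases on either side,
    clipped to the contig bounds."""
    return (contig, max(1, start - padding), min(contig_length, end + padding))

# Whole genome mode: a toy 2500-bp contig falls into three 1000-bp bins.
print(bin_contig("chr1", 2500))
# Exome mode: a target near the contig start is clipped at position 1.
print(pad_interval("chr1", 100, 400, 2500))
```

The toy contig length and coordinates are made up; the 1000-bp bin length and 250-bp padding correspond to the tool defaults discussed below.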
For the tutorial exome data, we provide the capture kit target regions in 1-based intervals and set --bin-length to zero.
gatk PreprocessIntervals \
    -L targets_C.interval_list \
    -R /gatk/ref/Homo_sapiens_assembly38.fasta \
    --bin-length 0 \
    --interval-merging-rule OVERLAPPING_ONLY \
    -O sandbox/targets_C.preprocessed.interval_list
This produces a Picard-style intervals list targets_C.preprocessed.interval_list for use in the coverage collection step. Each interval is expanded by 250 bases on either side.
Comments on select parameters
-L argument is optional. If provided, the tool expects the intervals list to be in Picard-style as described in Article#1319. The tool errors on other formats. If this argument is omitted, the tool assumes each contig is a single interval. See the footnotes for additional discussion.
--bin-length argument should be appropriate for the type of data, e.g. the default 1000 for whole genome or 0 for exomes. In binning, an interval is divided into equal-sized regions of the specified length. The tool does not bin regions that contain Ns.
--interval-merging-rule should be set to OVERLAPPING_ONLY, to prevent the tool from merging abutting intervals.
-R is required and implies the presence of a corresponding reference index and a reference dictionary in the same directory.
--padding. The default value of 250 bases was determined to work well empirically for TCGA targeted exome data. This argument is relevant for exome data, as binning without an intervals list does not allow for interval expansion.
Take a look at the intervals before and after padding.
For consecutive intervals less than 250 bases apart, how does the tool pad the intervals?
Now collect raw integer counts data. The tutorial uses GATK4.0.1.1's CollectFragmentCounts, which counts coverage of paired-end fragments. The tool counts each fragment once, at the interval that overlaps the fragment center. In a later GATK4 release, CollectReadCounts replaces CollectFragmentCounts. CollectReadCounts counts reads that overlap the interval.
The tutorial has already collected coverage on the tumor case sample, on the normal matched-control and on each of the normal samples that constitute the PoN. To demonstrate coverage collection, the following command uses the small BAM from Tutorial#11136's data bundle. The tutorial does not use the resulting file in subsequent steps. The CollectReadCounts command swaps out the tool name but otherwise uses identical parameters.
gatk CollectFragmentCounts \
    -I tumor.bam \
    -L targets_C.preprocessed.interval_list \
    --interval-merging-rule OVERLAPPING_ONLY \
    -O sandbox/tumor.counts.hdf5
In the tutorial data bundle, the equivalent full-length result is hcc1143_T_clean.counts.hdf5. The data tabulates CONTIG, START, END and raw COUNT values for each genomic interval.
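The center-overlap counting rule described above can be sketched as follows. This is a toy illustration of the rule only, with made-up coordinates, not CollectFragmentCounts' implementation.

```python
import bisect

def count_fragment_centers(fragments, intervals):
    """Count once per fragment whose center lies within an interval.
    `fragments` are (start, end) pairs; `intervals` are disjoint, sorted
    (start, end) pairs on the same contig (1-based, inclusive)."""
    starts = [s for s, _ in intervals]
    counts = [0] * len(intervals)
    for frag_start, frag_end in fragments:
        center = (frag_start + frag_end) // 2
        # Locate the rightmost interval starting at or before the center.
        i = bisect.bisect_right(starts, center) - 1
        if i >= 0 and center <= intervals[i][1]:
            counts[i] += 1
    return counts

# Two fragments center in the first interval; the third fragment straddles
# the boundary but its center falls in the second interval.
print(count_fragment_centers([(100, 300), (150, 350), (380, 520)],
                             [(1, 400), (401, 800)]))
# → [2, 1]
```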
Comments on select parameters
-L argument takes a Picard-style intervals list prepared with PreprocessIntervals.
-I input is alignment data.
--format default output is HDF5. To instead output a tab-separated values file, specify --format TSV. The HDF5 format allows for quicker panel of normals creation.
--interval-merging-rule should be set to OVERLAPPING_ONLY, to prevent the tool from merging abutting intervals.
See Article#11508 for an overview of the format and instructions on how to navigate the data with external application HDFView. The article illustrates features of the format using data generated in this tutorial.
In creating a PoN, CreateReadCountPanelOfNormals abstracts the counts data for the samples and the intervals using Singular Value Decomposition (SVD), a type of Principal Component Analysis (PCA). The normal samples in the PoN should match the sequencing approach of the case sample under scrutiny. This applies especially to targeted exome data because the capture step introduces target-specific noise.
The tutorial has already created a CNV panel of normals using forty 1000 Genomes Project samples. The command below illustrates PoN creation using just three samples.
gatk --java-options "-Xmx6500m" CreateReadCountPanelOfNormals \
    -I HG00133.alt_bwamem_GRCh38DH.20150826.GBR.exome.counts.hdf5 \
    -I HG00733.alt_bwamem_GRCh38DH.20150826.PUR.exome.counts.hdf5 \
    -I NA19654.alt_bwamem_GRCh38DH.20150826.MXL.exome.counts.hdf5 \
    --minimum-interval-median-percentile 5.0 \
    -O sandbox/cnvponC.pon.hdf5
This generates a PoN in HDF5 format. The PoN stores information that, when applied, will (i) standardize case sample counts to PoN median counts and (ii) remove systematic noise in the case sample.
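The SVD abstraction can be sketched on toy data: convert each normal's counts to proportional coverage, log2-transform against per-interval medians, and keep the top right-singular vectors as "eigensamples" capturing shared systematic noise. This is a minimal conceptual sketch with simulated data, not the tool's implementation, which also filters intervals and samples and imputes zeros as described below.

```python
import numpy as np

def build_pon(counts, num_eigensamples=20):
    """Toy PoN: log2 proportional coverage against per-interval medians,
    then SVD to extract the top noise components (eigensamples)."""
    prop = counts / counts.sum(axis=1, keepdims=True)   # samples x intervals
    interval_medians = np.median(prop, axis=0)
    log_ratios = np.log2(prop / interval_medians)
    _, _, vt = np.linalg.svd(log_ratios, full_matrices=False)
    k = min(num_eigensamples, counts.shape[0])
    return interval_medians, vt[:k]        # medians + eigensamples

# Simulated coverage for three normals over eight intervals.
rng = np.random.default_rng(0)
counts = rng.poisson(100, size=(3, 8)).astype(float)
medians, eigen = build_pon(counts)
print(eigen.shape)   # → (3, 8): capped at the number of samples
```

As in the tool, the number of eigensamples is capped at the number of available samples, which is why a three-sample panel yields at most three components.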
Comments on select parameters
-I. Coverage data may be in either TSV or HDF5 format. The tool will accept a single sample, e.g. the matched-normal.
--number-of-eigensamples, or principal components, defaults to twenty. The tool will adjust this number to the smaller of twenty or the number of samples the tool retains after filtering. In general, denoising against a PoN with more components improves segmentation, but at the expense of sensitivity. Ideally, researchers should perform a sensitivity analysis to choose an appropriate value for this parameter. See this related discussion.
--spark-master. See Article#11245 for details.
Comments on filtering and imputation parameters, in the order of application
--minimum-interval-median-percentile argument is changed from the default of 10.0 to a smaller value of 5.0. The tool filters out targets or bins with a median proportional coverage below this percentile. The median is across the samples. The proportional coverage is the target coverage divided by the sum of the coverage of all targets for a sample. The effect of setting this parameter to a smaller value is that we retain more information.
--maximum-zeros-in-sample-percentage default is 5.0. Any sample with more than 5% zero coverage targets is filtered.
--maximum-zeros-in-interval-percentage default is 5.0. Any target interval with more than 5% zero coverage across samples is filtered.
--extreme-sample-median-percentile default is 2.5. Any sample with less than 2.5 percentile or more than 97.5 percentile normalized median proportional coverage is filtered.
--do-impute-zeros default is set to true. The tool takes zero coverage regions and changes these values to the median of the non-zero values.
--extreme-outlier-truncation-percentile default is 0.1. The tool takes any proportional coverage below the 0.1 percentile or above the 99.9 percentile and sets it to the corresponding percentile value.
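The first filter and the zero imputation can be illustrated on a toy proportional-coverage matrix. A minimal sketch only, with made-up numbers; the tool applies the full chain of filters above in order.

```python
import numpy as np

def filter_and_impute(prop, min_interval_median_percentile=10.0):
    """Toy version of the interval-median filter plus zero imputation:
    drop intervals whose across-sample median proportional coverage falls
    below the given percentile, then replace remaining zeros with the
    median of each interval's non-zero values."""
    medians = np.median(prop, axis=0)                    # per interval
    cutoff = np.percentile(medians, min_interval_median_percentile)
    out = prop[:, medians >= cutoff].copy()
    for j in range(out.shape[1]):
        col = out[:, j]
        nonzero = col[col > 0]
        if nonzero.size and (col == 0).any():
            col[col == 0] = np.median(nonzero)           # impute zeros
    return out

# Three samples x three intervals; the last interval has very low coverage.
prop = np.array([[0.30, 0.00, 0.01],
                 [0.25, 0.40, 0.02],
                 [0.35, 0.45, 0.01]])
print(filter_and_impute(prop))
```

The low-coverage third interval is dropped, and the zero in the second interval is imputed to the median of that interval's non-zero values (0.425).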
The current filtering and imputation parameters are identical to those in the BETA release of the CNV workflow and may change for later versions based on evaluations. The implementation has been made more memory efficient so that the tool runs faster than the BETA release.
If the data are not uniform, e.g. have many intervals with zero or low counts, the tool warns that filtering parameters should be adjusted and stops the run. This may happen, for example, if one attempts to construct a panel of mixed-sex samples and includes the allosomal contigs. In this case, first be sure to either exclude allosomal contigs via a subset intervals list or subset the panel samples to those expected to have similar coverage across the given contigs, e.g. panels of the same sex. If the warning still occurs, then adjust --minimum-interval-median-percentile to a larger value. See this thread for the original discussion.
Based on what you know about PCA, what do you think are the effects of using more normal samples? A panel with some profiles that are outliers? Could PCA account for GC-bias? What do you know about the 1000 Genome Project? Specifically, the exome data? How could we tell a good PoN from a bad PoN? What control could we use?
In a somatic analysis, what is better for a PoN: tissue-matched normals or blood normals? Should we include our particular tumor’s matched normal in the PoN?
Provide DenoiseReadCounts with counts collected by CollectFragmentCounts and the CNV PoN generated with CreateReadCountPanelOfNormals.
gatk --java-options "-Xmx12g" DenoiseReadCounts \
    -I hcc1143_T_clean.counts.hdf5 \
    --count-panel-of-normals cnvponC.pon.hdf5 \
    --standardized-copy-ratios sandbox/hcc1143_T_clean.standardizedCR.tsv \
    --denoised-copy-ratios sandbox/hcc1143_T_clean.denoisedCR.tsv
This produces two files, the standardized copy ratios hcc1143_T_clean.standardizedCR.tsv and the denoised copy ratios hcc1143_T_clean.denoisedCR.tsv, each of which represents a data transformation. In the first transformation, the tool standardizes counts by the PoN median counts. The standardization includes log2 transformation and normalization of the counts data to center around one. In the second transformation, the tool denoises the standardized copy ratios using the principal components of the PoN.
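The two transformations can be sketched end to end on simulated data: standardize the case sample against the PoN medians in log2 space, then subtract its projection onto the PoN's eigensamples. A conceptual sketch with hypothetical data only, not the tool's implementation.

```python
import numpy as np

def denoise(case_counts, interval_medians, eigensamples):
    """Toy two-step transformation: (i) standardize case counts against
    PoN medians in log2 space, (ii) subtract the projection onto the
    PoN's principal components (eigensamples)."""
    prop = case_counts / case_counts.sum()
    standardized = np.log2(prop / interval_medians)
    standardized = standardized - np.median(standardized)   # recenter
    projection = eigensamples.T @ (eigensamples @ standardized)
    return standardized, standardized - projection

# Build a toy PoN from four simulated normals over ten intervals.
rng = np.random.default_rng(1)
normals = rng.poisson(100, size=(4, 10)).astype(float)
prop_n = normals / normals.sum(axis=1, keepdims=True)
interval_medians = np.median(prop_n, axis=0)
_, _, vt = np.linalg.svd(np.log2(prop_n / interval_medians), full_matrices=False)
eigensamples = vt[:2]                      # top-2 noise components

case = rng.poisson(100, size=10).astype(float)
case[:3] *= 2                              # a toy amplification
std, dn = denoise(case, interval_medians, eigensamples)
print(np.round(dn, 2))
```

After denoising, the residual is orthogonal to the PoN's noise components; systematic variation shared with the panel is removed while sample-specific events remain.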
Comments on select parameters
--number-of-eigensamples default is null, in which case the tool uses the maximum number of eigensamples available in the PoN. In section 2, by using default CreateReadCountPanelOfNormals parameters, we capped the number of eigensamples in the PoN at twenty. Setting --number-of-eigensamples in DenoiseReadCounts to lower values can change the resolution of results, i.e. how smooth segments are. See this thread for detailed discussion.
--annotated-intervals takes a file generated by AnnotateIntervals to concurrently perform GC-bias correction.
We plot the standardized and denoised read counts with PlotDenoisedCopyRatios. The plots allow visually assessing the efficacy of denoising. Provide the tool with both the standardized and denoised copy ratios from the previous step as well as a reference sequence dictionary.
gatk PlotDenoisedCopyRatios \
    --standardized-copy-ratios hcc1143_T_clean.standardizedCR.tsv \
    --denoised-copy-ratios hcc1143_T_clean.denoisedCR.tsv \
    --sequence-dictionary Homo_sapiens_assembly38.dict \
    --minimum-contig-length 46709983 \
    --output sandbox/plots \
    --output-prefix hcc1143_T_clean
This produces six files in the plots directory--two PNG images and four text files as follows.
hcc1143_T_clean.denoised.png plots the standardized and denoised read counts across the contigs and scales the y-axis to accommodate all copy ratio data.
hcc1143_T_clean.denoisedLimit4.png plots the same but limits the y-axis range from 0 to 4 for comparability across samples.
Each of the text files contains a single quality control value. The value is the median of absolute differences (MAD) in copy ratios of adjacent targets. Its calculation is robust to actual copy-number events and should decrease after denoising.
hcc1143_T_clean.standardizedMAD.txt gives the MAD for standardized copy ratios.
hcc1143_T_clean.denoisedMAD.txt gives the MAD for denoised copy ratios.
hcc1143_T_clean.deltaMAD.txt gives the difference between standardized MAD and denoised MAD.
hcc1143_T_clean.scaledDeltaMAD.txt gives the fractional difference (standardized MAD - denoised MAD)/(standardized MAD).
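The MAD quality-control value is simple to compute. The sketch below uses made-up copy-ratio tracks; a real copy-number event contributes only one large adjacent difference per breakpoint, which is why the median is robust to events.

```python
import numpy as np

def mad_of_adjacent_differences(copy_ratios):
    """Median of absolute differences between copy ratios of adjacent
    intervals -- the QC value reported by PlotDenoisedCopyRatios."""
    diffs = np.abs(np.diff(copy_ratios))
    return float(np.median(diffs))

# Hypothetical standardized (noisy) vs. denoised (smooth) tracks.
standardized = [1.0, 1.4, 0.8, 1.3, 0.7, 1.2]
denoised     = [1.0, 1.1, 1.0, 0.9, 1.0, 1.1]
print(mad_of_adjacent_differences(standardized))   # larger MAD
print(mad_of_adjacent_differences(denoised))       # smaller MAD after denoising
```

The difference and fractional difference of these two values correspond to the deltaMAD and scaledDeltaMAD outputs.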
Comments on select parameters
--sequence-dictionary should match the GRCh38 reference used in mapping.
--minimum-contig-length is changed from the default value of 1,000,000 to 46,709,983, the length of the smallest of GRCh38's primary assembly contigs.
Here are the results for the HCC1143 tumor cell line and its matched normal cell line. The normal cell line serves as a control. For each sample are two plots that show the effects of PCA denoising. The upper plot shows standardized copy ratios in blue and the lower plot shows denoised copy ratios in green.
4A. Tumor standardized and denoised copy ratio plots
4B. Normal standardized and denoised copy ratio plots
Would you guess there are CNV events in the normal? Should we be surprised?
The next step is to perform segmentation, either using copy ratios alone or in combination with allelic copy ratios. In part II, Section 6 outlines considerations in modeling segments with allelic copy ratios, Section 7 generates a callset and Section 8 shows how to plot segmented copy and allelic ratios. Again, the tutorial presents these steps using the full features of the workflow. However, researchers may desire to perform copy ratio segmentation independently of allelic counts data, e.g. for a case without a matched-control. For the case-only approach, segmentation gives the following plots. To replicate this approach, omit the allelic-counts parameters from the example commands in sections 6 and 8.
4C. Tumor case-only copy ratios segmentation gives 235 segments.
4D. Normal case-only copy ratios segmentation gives 41 segments.
While the normal sample shows trisomy of chr2 and a subpopulation with deletion of chr6, the tumor sample is highly aberrant. The extent of aneuploidy is unsurprising and consistent with these HCC1143 tumor dSKY results by Wenhan Chen. Remember that cell lines, with increasing culture time and selective bottlenecks, can give rise to new somatic events, undergo clonal selection and develop population heterogeneity much like in cancer.
Denoising with a PoN is critical for calling copy-number variants from targeted exome coverage profiles. It can also improve calls from WGS profiles that are typically more evenly distributed and subject to less noise. Furthermore, denoising with a PoN can greatly impact results for (i) samples that have more noise, e.g. those with lower coverage, lower purity or higher activity, (ii) samples lacking a matched normal and (iii) detection of smaller events that span only a few targets.
To understand the impact a PoN's constituents can have on an analysis, compare the results of denoising the normal sample against two different PoNs. Each PoN consists of forty 1000 Genomes Project exome samples. PoN-M consists of the same cohort used in the Mutect2 tutorial's PoN. We selected PoN-C's constituents with more care and this is the PoN the CNV tutorial uses.
4E. Compare standardization and denoising with PoN-C versus PoN-M.
What is the difference in the targets for the two cohorts--cohort-M and cohort-C? Is this a sufficient reason for the difference in noise profiles we observe above?
GATK4 denoises exome coverage profiles robustly with either panel of normals. However, a good panel allows maximal denoising, as is the case for PoN-C over PoN-M.
We use publicly available 1000 Genomes Project data so as to be able to share the data and to illustrate considerations in CNV analyses. In an actual somatic analysis, we would construct the PoNs using the blood normals of the tumor cohort(s). We would construct a PoN for each sex, so as to be able to call events on allosomal chromosomes. Such a PoN should give better results than that from either of the tutorial PoNs.
Somatic analyses, due to the confounding factors of tumor purity and heterogeneity, require high sensitivity in calling. However, a sensitive caller can only do so much. Use of a carefully constructed PoN augments the sensitivity and helps illuminate copy number events.
This section is adapted from a hands-on tutorial developed and written by Soo Hee Lee (@shlee) in July of 2017 for the GATK workshops in Cambridge and Edinburgh, UK. The original tutorial uses the GATK4.beta workflow and can be found in the 1707 through 1711 GATK workshops folders. Although the Somatic CNV workflow has changed from GATK4.beta and the official GATK4 release, the PCA denoising remains the same. The hands-on tutorial focuses on differences in PCA denoising based on two different panels of normals (PoNs). Researchers may find working through the worksheet to the very end with either release version beneficial, as considerations in selecting PoN constituents remain identical.
Examining the read group information for the samples in the two PoNs shows a difference in mixtures of sequencing centers--four different sequencing centers for PoN-M versus a single sequencing center for PoN-C. The single sequencing center corresponds to that of the HCC1143 samples. Furthermore, tracing sample information will show different targeted exome capture kits for the sequencing centers. Comparing the denoising results of the two PoNs stresses the importance of selective PoN creation.
A feature of the GATK4 CNV workflow is the ability to normalize a case against a single control sample, e.g. a tumor case against its matched normal. This involves running the control sample through CreateReadCountPanelOfNormals, then denoising the case against this single-sample projection with DenoiseReadCounts. To illustrate this approach, here is the result of denoising the HCC1143 tumor sample against its matched normal. For single-sample matched-control denoising, DenoiseReadCounts produces identical data for the standardized and denoised copy ratios.
4F. Tumor case standardized against the normal matched-control
Compare these results to those of section 4.1. Notice the depression in chr2 copy ratios that occurs due to the PoN normal sample's chr2 trisomy. Here, the median absolute deviation (MAD) of 0.149 is an incremental improvement over section 4.1's PoN-M denoising (MAD=0.15). In contrast, PoN-C denoising (MAD=0.125) and even PoN-C standardization alone (MAD=0.134) are seemingly better normalization approaches than the matched-normal standardization. Again, results stress the importance of selective PoN creation.
The PoN accounts for germline CNVs common to its constituents such that the workflow discounts the same variation in the case. It is possible for the workflow to detect germline CNVs not represented in the PoN, in particular, rare germline CNVs. In the case of matched-normal standardization, the workflow should discount germline CNVs and reveal only somatic events.
The workflow does not support iteratively denoising two samples each against a PoN and then against each other.
The tutorial continues in a second document at #11683.
The constituents of the forty-sample CNV panel of normals differ from those of the Mutect2 panel of normals. Preliminary CNV data was generated with v4.0.1.1 somatic CNV WDL scripts run locally on a Google Cloud Compute Engine VM with Cromwell v30.2. Additional refinements were performed on a 16GB MacBook Pro laptop. Additional plots were generated using a broadinstitute/gatk:4.0.1.1 Docker container. Note the v4.0.1.1 WDL script does not allow custom sequence dictionaries for the plotting steps.
 See original discussion on bin size here. The bin size determines the resolution of CNV breakpoints. The theoretical limit depends on coverage depth and the insert-size distribution. Typically bin sizes on the order of the read length will give reasonable results. The GATK developers have tested WGS runs where the bin size is as small as 250 bases.
Set --interval-merging-rule to OVERLAPPING_ONLY, to prevent the tool from merging abutting intervals. The default is set to ALL for GATK4.0.1.1. For future versions, the default will be set to OVERLAPPING_ONLY.
 The tool allows specifying both the padding and the binning arguments simultaneously. If exome targets are very long, it may be preferable to both pad and break up the intervals with binning. This may provide some additional resolution.
The data bundle from Tutorial#11136 contains tumor.bam and normal.bam. These tumor and normal samples are identical to those in the current tutorial and represent a subset of the full data for the following regions:
chr6 29941013 29946495 +
chr11 915890 1133890 +
chr17 1 83257441 +
chr11_KI270927v1_alt 1 218612 +
HLA-A*24:03:01 1 3502 +
The default --minimum-mapping-quality is set to 30. Thus, the tool uses reads with MAPQ30 or higher.
 The current tool version requires strategizing denoising of allosomal chromosomes, e.g. X and Y in humans, against the panel of normals. This is because coverage will vary for these regions depending on the sex of the sample. To determine the sex of samples, analyze them with DetermineGermlineContigPloidy. Aneuploidy in allosomal chromosomes, much like trisomy, can still make for viable organisms and so phenotypic sex designations are insufficient. GermlineCNVCaller can account for differential sex in data.
Many thanks to Samuel Lee (@slee), no relation, for his patient explanations of the workflow parameters and mathematics behind the workflow. Researchers may find reading through @slee's comments on the forum, a few of which the tutorials link to, insightful.