Creates a panel of normals for read-count denoising
The input read counts are first transformed to log2 fractional coverages and preprocessed according to specified filtering and imputation parameters. Singular value decomposition (SVD) is then performed to find the first number-of-eigensamples principal components, which are stored in the PoN. Some or all of these principal components can then be used for denoising case samples with DenoiseReadCounts; it is assumed that the principal components used represent systematic sequencing biases (rather than statistical noise). Examining the singular values, which are also stored in the PoN, may be useful in determining the appropriate number of principal components to use for denoising.
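For intuition, the following minimal numpy sketch mirrors this procedure on toy data. It illustrates truncated SVD denoising in the same spirit, not GATK's implementation; the random stand-in data, matrix shapes, and projection details are assumptions.

```python
import numpy as np

# Toy panel: rows = normal samples, columns = genomic intervals.
# Values stand in for preprocessed log2 fractional coverages.
rng = np.random.default_rng(0)
panel = rng.normal(size=(40, 1000))

num_eigensamples = 20

# Truncated SVD: keep the top right-singular vectors as "eigensamples".
# These capture the dominant shared (systematic) variation in the panel.
u, s, vt = np.linalg.svd(panel, full_matrices=False)
eigensamples = vt[:num_eigensamples]          # shape: (20, 1000)

# Denoise a case sample by projecting onto the eigensamples and
# subtracting the reconstructed systematic component.
case = rng.normal(size=1000)
projection = eigensamples.T @ (eigensamples @ case)
denoised = case - projection

# Large singular values correspond to strong systematic effects;
# inspecting s can guide the choice of num_eigensamples.
print(s[:5])
```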
If annotated intervals are provided, explicit GC-bias correction will be performed by GCBiasCorrector before filtering and SVD. GC-content information for the intervals will be stored in the PoN and used to perform explicit GC-bias correction identically in DenoiseReadCounts. Note that if annotated intervals are not provided, it is still likely that GC-bias correction is implicitly performed by the SVD denoising process (i.e., some of the principal components arise from GC bias).
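As a rough illustration of what explicit GC-bias correction involves, here is a simplified numpy sketch on synthetic data. It is a stand-in for GCBiasCorrector, not its actual algorithm; the binning granularity and the absence of smoothing are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
gc_content = rng.uniform(0.3, 0.7, size=1000)        # per-interval GC fraction
coverage = rng.lognormal(mean=0.0, sigma=0.2, size=1000)

# Bin intervals by GC content and compute the median coverage in each bin.
bins = np.linspace(0.0, 1.0, 101)
bin_index = np.digitize(gc_content, bins)
bin_medians = np.array([
    np.median(coverage[bin_index == b]) if np.any(bin_index == b) else np.nan
    for b in range(len(bins) + 1)
])

# Divide each interval's coverage by its GC bin's median, bringing
# intervals at all GC levels to a common scale before filtering and SVD.
corrected = coverage / bin_medians[bin_index]
```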
Note that such SVD denoising cannot distinguish between variance due to systematic sequencing biases and variance due to true common germline CNVs present in the panel; signal from the latter may thus be inadvertently denoised away. Furthermore, if the panel contains samples of mixed sex, variance arising from coverage on the sex chromosomes may also contribute significantly to the principal components. Therefore, if sex chromosomes are not excluded from coverage collection, it is strongly recommended that users avoid creating panels of mixed sex and denoise case samples only with panels composed of individuals of the same sex as the case samples. (See GermlineCNVCaller, which avoids these issues by simultaneously learning a probabilistic model for systematic bias and calling rare and common germline CNVs for samples in the panel.)
Usage examples:

```
gatk CreateReadCountPanelOfNormals \
    -I sample_1.counts.hdf5 \
    -I sample_2.counts.hdf5 \
    ... \
    -O cnv.pon.hdf5
```

With annotated intervals, for explicit GC-bias correction:

```
gatk CreateReadCountPanelOfNormals \
    -I sample_1.counts.hdf5 \
    -I sample_2.counts.tsv \
    ... \
    --annotated-intervals annotated_intervals.tsv \
    -O cnv.pon.hdf5
```
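Because the singular values are stored in the PoN (an HDF5 file), they can be inspected to guide the later choice of how many eigensamples to use in DenoiseReadCounts. A hedged h5py sketch follows; the dataset path `panel/singular_values` is a guess, so list the file's datasets (e.g., with `f.visit(print)` or `h5ls`) to confirm the actual layout:

```python
import h5py
import numpy as np

with h5py.File("cnv.pon.hdf5", "r") as f:
    # Path is an assumption; inspect the file to confirm the real layout.
    singular_values = np.asarray(f["panel/singular_values"])

# The fraction of total variance explained by each eigensample suggests
# how many principal components capture systematic bias rather than noise.
variance = singular_values ** 2
print(np.cumsum(variance) / variance.sum())
```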
This table summarizes the command-line arguments that are specific to this tool. For more details on each argument, see the list below the table.
Argument name(s) | Default value | Summary
---|---|---
**Required Arguments** | |
`--input`, `-I` | [] | Input TSV or HDF5 files containing integer read counts in genomic intervals for all samples in the panel of normals (output of CollectFragmentCounts). Intervals must be identical and in the same order for all samples.
`--output`, `-O` | null | Output file for the panel of normals.
**Optional Tool Arguments** | |
`--annotated-intervals` | null | Input file containing annotations for GC content in genomic intervals (output of AnnotateIntervals). If provided, explicit GC correction will be performed before performing SVD. Intervals must be identical to and in the same order as those in the input read-counts files.
`--arguments_file` | [] | Read one or more arguments files and add them to the command line.
`--conf` | [] | Spark properties to set on the Spark context in the format `<property>=<value>`.
`--do-impute-zeros` | true | If true, impute zero-coverage values as the median of the non-zero values in the corresponding interval. (This is applied after all filters.)
`--extreme-outlier-truncation-percentile` | 0.1 | Fractional coverages normalized by genomic-interval medians that are below this percentile or above the complementary percentile are set to the corresponding percentile value. (This is applied after all filters and imputation.)
`--extreme-sample-median-percentile` | 2.5 | Samples with a median (across genomic intervals) of fractional coverage normalized by genomic-interval medians below this percentile or above the complementary percentile are filtered out. (This is the fourth filter applied.)
`--gcs-max-retries`, `-gcs-retries` | 20 | If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection.
`--help`, `-h` | false | Display the help message.
`--maximum-zeros-in-interval-percentage` | 5.0 | Genomic intervals with a fraction of zero-coverage samples above this percentage are filtered out. (This is the third filter applied.)
`--maximum-zeros-in-sample-percentage` | 5.0 | Samples with a fraction of zero-coverage genomic intervals above this percentage are filtered out. (This is the second filter applied.)
`--minimum-interval-median-percentile` | 10.0 | Genomic intervals with a median (across samples) of fractional coverage (optionally corrected for GC bias) below this percentile are filtered out. (This is the first filter applied.)
`--number-of-eigensamples` | 20 | Number of eigensamples to use for truncated SVD and to store in the panel of normals. The number of samples retained after filtering will be used instead if it is smaller than this.
`--program-name` | null | Name of the program running.
`--spark-master` | local[*] | URL of the Spark Master to submit jobs to when using the Spark pipeline runner.
`--version` | false | Display the version number for this tool.
**Optional Common Arguments** | |
`--gatk-config-file` | null | A configuration file to use with the GATK.
`--QUIET` | false | Whether to suppress job-summary info on System.err.
`--TMP_DIR` | [] | Undocumented option.
`--use-jdk-deflater`, `-jdk-deflater` | false | Whether to use the JdkDeflater (as opposed to IntelDeflater).
`--use-jdk-inflater`, `-jdk-inflater` | false | Whether to use the JdkInflater (as opposed to IntelInflater).
`--verbosity` | INFO | Control verbosity of logging.
**Advanced Arguments** | |
`--showHidden` | false | Display hidden arguments.
Arguments in this list are specific to this tool. Keep in mind that other arguments are available that are shared with other tools (e.g. command-line GATK arguments); see Inherited arguments above.
**--annotated-intervals**
Input file containing annotations for GC content in genomic intervals (output of AnnotateIntervals). If provided, explicit GC correction will be performed before performing SVD. Intervals must be identical to and in the same order as those in the input read-counts files.
Type: File. Default value: null.

**--arguments_file**
Read one or more arguments files and add them to the command line.
Type: List[File]. Default value: [].

**--conf**
Spark properties to set on the Spark context in the format `<property>=<value>`.
Type: List[String]. Default value: [].

**--do-impute-zeros**
If true, impute zero-coverage values as the median of the non-zero values in the corresponding interval. (This is applied after all filters.)
Type: boolean. Default value: true.

**--extreme-outlier-truncation-percentile**
Fractional coverages normalized by genomic-interval medians that are below this percentile or above the complementary percentile are set to the corresponding percentile value. (This is applied after all filters and imputation.)
Type: double. Default value: 0.1. Valid range: [0, 50].

**--extreme-sample-median-percentile**
Samples with a median (across genomic intervals) of fractional coverage normalized by genomic-interval medians below this percentile or above the complementary percentile are filtered out. (This is the fourth filter applied.)
Type: double. Default value: 2.5. Valid range: [0, 50].

**--gatk-config-file**
A configuration file to use with the GATK.
Type: String. Default value: null.

**--gcs-max-retries**, **-gcs-retries**
If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection.
Type: int. Default value: 20. Valid range: [-∞, ∞].

**--help**, **-h**
Display the help message.
Type: boolean. Default value: false.

**--input**, **-I**
Input TSV or HDF5 files containing integer read counts in genomic intervals for all samples in the panel of normals (output of CollectFragmentCounts). Intervals must be identical and in the same order for all samples.
Required. Type: List[File]. Default value: [].

**--maximum-zeros-in-interval-percentage**
Genomic intervals with a fraction of zero-coverage samples above this percentage are filtered out. (This is the third filter applied.)
Type: double. Default value: 5.0. Valid range: [0, 100].

**--maximum-zeros-in-sample-percentage**
Samples with a fraction of zero-coverage genomic intervals above this percentage are filtered out. (This is the second filter applied.)
Type: double. Default value: 5.0. Valid range: [0, 100].

**--minimum-interval-median-percentile**
Genomic intervals with a median (across samples) of fractional coverage (optionally corrected for GC bias) below this percentile are filtered out. (This is the first filter applied.)
Type: double. Default value: 10.0. Valid range: [0, 100].

**--number-of-eigensamples**
Number of eigensamples to use for truncated SVD and to store in the panel of normals. The number of samples retained after filtering will be used instead if it is smaller than this.
Type: int. Default value: 20. Valid range: [1, ∞].

**--output**, **-O**
Output file for the panel of normals.
Required. Type: File. Default value: null.

**--program-name**
Name of the program running.
Type: String. Default value: null.

**--QUIET**
Whether to suppress job-summary info on System.err.
Type: Boolean. Default value: false.

**--showHidden**
Display hidden arguments.
Type: boolean. Default value: false.

**--spark-master**
URL of the Spark Master to submit jobs to when using the Spark pipeline runner.
Type: String. Default value: local[*].

**--TMP_DIR**
Undocumented option.
Type: List[File]. Default value: [].

**--use-jdk-deflater**, **-jdk-deflater**
Whether to use the JdkDeflater (as opposed to IntelDeflater).
Type: boolean. Default value: false.

**--use-jdk-inflater**, **-jdk-inflater**
Whether to use the JdkInflater (as opposed to IntelInflater).
Type: boolean. Default value: false.

**--verbosity**
Control verbosity of logging. The --verbosity argument is an enumerated type (LogLevel), which can have one of the following values: ERROR, WARNING, INFO, DEBUG.
Type: LogLevel. Default value: INFO.

**--version**
Display the version number for this tool.
Type: boolean. Default value: false.
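Taken together, the filtering, imputation, and truncation arguments above are applied in a fixed order (first through fourth filters, then imputation, then truncation). The schematic numpy sketch below illustrates that order on a samples-by-intervals matrix of fractional coverages; it is a simplified reading of the argument descriptions, not GATK's code.

```python
import numpy as np

def preprocess(cov, min_interval_median_pct=10.0,
               max_zeros_sample_pct=5.0, max_zeros_interval_pct=5.0,
               extreme_sample_median_pct=2.5, truncation_pct=0.1):
    """cov: samples-by-intervals matrix of fractional coverages."""
    # 1. Drop intervals whose median coverage across samples is too low.
    interval_medians = np.median(cov, axis=0)
    threshold = np.percentile(interval_medians, min_interval_median_pct)
    cov = cov[:, interval_medians >= threshold]

    # Normalize by genomic-interval medians before the remaining steps.
    cov = cov / np.median(cov, axis=0)

    # 2. Drop samples with too many zero-coverage intervals.
    zero_pct = (cov == 0).mean(axis=1) * 100
    cov = cov[zero_pct <= max_zeros_sample_pct]

    # 3. Drop intervals with too many zero-coverage samples.
    zero_pct = (cov == 0).mean(axis=0) * 100
    cov = cov[:, zero_pct <= max_zeros_interval_pct]

    # 4. Drop samples whose median normalized coverage is extreme.
    sample_medians = np.median(cov, axis=1)
    lo, hi = np.percentile(sample_medians, [extreme_sample_median_pct,
                                            100 - extreme_sample_median_pct])
    cov = cov[(sample_medians >= lo) & (sample_medians <= hi)]

    # 5. Impute remaining zeros as the median of non-zero values per interval.
    for j in range(cov.shape[1]):
        col = cov[:, j]
        if (col == 0).any() and (col != 0).any():
            col[col == 0] = np.median(col[col != 0])

    # 6. Truncate extreme outliers to the percentile boundaries.
    lo, hi = np.percentile(cov, [truncation_pct, 100 - truncation_pct])
    return np.clip(cov, lo, hi)
```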
See also: General Documentation | Tool Docs Index | Support Forum
GATK version 4.0.1.1.