MarkDuplicatesSpark

MarkDuplicates on Spark

Category: Read Data Manipulation


Overview

This is a Spark implementation of the MarkDuplicates tool from Picard that allows the tool to be run in parallel on multiple cores on a local machine, or on multiple machines on a Spark cluster, while still matching the output of the single-core Picard version. Since the tool must hold all of the read names in memory while it groups the read information, it is recommended to run this tool on a machine/configuration with at least 8 GB of memory overall for a typical 30x BAM.
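
For instance, the Java heap can be raised through GATK's top-level '--java-options' flag. A minimal sketch; the 8 GB value simply mirrors the guidance above rather than a tuned recommendation:

      # Run locally with an 8 GB Java heap, per the memory note above
      gatk --java-options "-Xmx8g" MarkDuplicatesSpark \
          -I input.bam \
          -O marked_duplicates.bam \
          -M marked_dup_metrics.txt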

This tool locates and tags duplicate reads in a BAM or SAM file, where duplicate reads are defined as originating from a single fragment of DNA. Duplicates can arise during sample preparation, e.g. library construction using PCR. See also EstimateLibraryComplexity for additional notes on PCR duplication artifacts. Duplicate reads can also result from a single amplification cluster, incorrectly detected as multiple clusters by the optical sensor of the sequencing instrument. These duplication artifacts are referred to as optical duplicates.

The MarkDuplicates tool works by comparing sequences in the 5 prime positions of both reads and read-pairs in a SAM/BAM file. After duplicate reads are collected, the tool differentiates the primary and duplicate reads using an algorithm that ranks reads by the sums of their base-quality scores (default method).

The tool's main output is a new SAM or BAM file, in which duplicates have been identified in the SAM flags field for each read. Duplicates are marked with the hexadecimal value of 0x0400, which corresponds to a decimal value of 1024. If you are not familiar with this type of annotation, please see the following blog post for additional information.
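
As a quick check (assuming samtools is installed; it is not part of GATK), reads carrying the duplicate bit can be counted with a flag filter:

      # Count reads whose FLAG field has the duplicate bit (0x400 = 1024) set
      samtools view -c -f 1024 marked_duplicates.bam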

" +

Although the bitwise flag annotation indicates whether a read was marked as a duplicate, it does not identify the type of duplicate. To do this, a new tag called the duplicate type (DT) tag was recently added as an optional output in the 'optional field' section of a SAM/BAM file. By invoking the 'duplicate-tagging-policy' option, you can instruct the program to mark all the duplicates (All), only the optical duplicates (OpticalOnly), or no duplicates (DontTag). The records within the output SAM/BAM file will then carry a 'DT' tag value (depending on the invoked 'duplicate-tagging-policy') of either library/PCR-generated duplicates (LB) or sequencing-platform artifact duplicates (SQ). This tool uses the 'read-name-regex' and the 'optical-duplicate-pixel-distance' options as the primary methods to identify and differentiate duplicate types. Set 'read-name-regex' to null to skip optical duplicate detection, e.g. for RNA-seq or other data where duplicate sets are extremely large and estimating library complexity is not an aim. Note that without optical duplicate counts, library size estimation will be inaccurate.
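
For example, a minimal sketch that records only optical duplicates in the DT tag, using the 'duplicate-tagging-policy' values listed above:

      gatk MarkDuplicatesSpark \
          -I input.bam \
          -O marked_duplicates.bam \
          -M marked_dup_metrics.txt \
          --duplicate-tagging-policy OpticalOnly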

MarkDuplicates also produces a metrics file indicating the numbers of duplicates for both single- and paired-end reads.

The program can take either coordinate-sorted or query-sorted inputs; however, query-sorted or query-grouped input is recommended, since with coordinate-sorted input the tool must perform an extra sort operation to associate reads from the input BAM with their mates.
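
As one possible approach (using samtools, an external tool, purely as an illustration), a coordinate-sorted BAM can be query-sorted before duplicate marking:

      # Sort by read name (queryname order) so that mates are grouped together
      samtools sort -n -o input.qsorted.bam input.bam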

If desired, duplicates can be removed using the 'remove-all-duplicates' and 'remove-sequencing-duplicates' options.
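
For example, a minimal sketch that drops optical/sequencing duplicates from the output entirely instead of flag-marking them:

      gatk MarkDuplicatesSpark \
          -I input.bam \
          -O marked_duplicates.bam \
          -M marked_dup_metrics.txt \
          --remove-sequencing-duplicates true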

Usage example:

      gatk MarkDuplicatesSpark \
          -I input.bam \
          -O marked_duplicates.bam \
          -M marked_dup_metrics.txt

MarkDuplicatesSpark run locally, specifying the number of executor cores (if 'spark.executor.cores' is unset, Spark will use all available cores on the machine):

      gatk MarkDuplicatesSpark \
          -I input.bam \
          -O marked_duplicates.bam \
          -M marked_dup_metrics.txt \
          --conf 'spark.executor.cores=5'

MarkDuplicatesSpark run on a Spark cluster of 5 machines:

      gatk MarkDuplicatesSpark \
          -I input.bam \
          -O marked_duplicates.bam \
          -M marked_dup_metrics.txt \
          -- \
          --spark-runner SPARK \
          --spark-master <spark-master-URL> \
          --num-executors 5 \
          --executor-cores 8

Here <spark-master-URL> stands for the URL of your Spark master.

Please see MarkDuplicates for detailed explanations of the output metrics.
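
As a convenience, the metrics table can be extracted from the Picard-style metrics file. A sketch assuming the standard layout, in which the table immediately follows the '## METRICS CLASS' header line, and a single library:

      # Print the column header and data row of the duplication metrics table
      grep -A 2 '^## METRICS CLASS' marked_dup_metrics.txt | tail -n +2 | column -t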

Additional Information

Read filters

This Read Filter is automatically applied to the data by the Engine before processing by MarkDuplicatesSpark.

MarkDuplicatesSpark specific arguments

This table summarizes the command-line arguments that are specific to this tool. For more details on each argument, see the argument details list further down below the table.

Argument name(s) | Default value | Summary

Required Arguments

--input / -I | [] | BAM/SAM/CRAM file containing reads
--output / -O | null | the output bam

Optional Tool Arguments

--arguments_file | [] | read one or more arguments files and add them to the command line
--bam-partition-size | 0 | maximum number of bytes to read from a file into each partition of reads. Setting this higher will result in fewer partitions. Note that this will not be equal to the size of the partition in memory. Defaults to 0, which uses the default split size (determined by the Hadoop input format, typically the size of one HDFS block).
--conf | [] | Spark properties to set on the Spark context, in the format <property>=<value>
--disable-sequence-dictionary-validation | false | If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk!
--do-not-mark-unmapped-mates | false | If enabled, unmapped mates of duplicate-marked reads will not themselves be marked as duplicates.
--duplicate-scoring-strategy / -DS | SUM_OF_BASE_QUALITIES | The scoring strategy for choosing the non-duplicate among candidates.
--duplicate-tagging-policy | DontTag | Determines how duplicate types are recorded in the DT optional attribute.
--gcs-max-retries / -gcs-retries | 20 | If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection
--gcs-project-for-requester-pays | "" | Project to bill when accessing "requester pays" buckets. If unset, these buckets cannot be accessed.
--help / -h | false | display the help message
--interval-merging-rule / -imr | ALL | Interval merging rule for abutting intervals
--intervals / -L | [] | One or more genomic intervals over which to operate
--metrics-file / -M | null | Path to write duplication metrics to.
--num-reducers | 0 | For tools that shuffle data or write an output, sets the number of reducers. Defaults to 0, which gives one partition per 10MB of input.
--optical-duplicate-pixel-distance | 100 | The maximum offset between two duplicate clusters in order to consider them optical duplicates. This should usually be set to some fairly small number (e.g. 5-10 pixels) unless using later versions of the Illumina pipeline that multiply pixel values by 10, in which case 50-100 is more normal.
--output-shard-tmp-dir | null | When writing a BAM in single-shard mode, the directory in which to write the temporary intermediate output shards; if not specified, .parts/ will be used
--program-name | null | Name of the program running
--read-name-regex |  | Regular expression that can be used to parse read names in the incoming SAM file. Read names are parsed to extract three variables: tile/region, x coordinate and y coordinate. These values are used to estimate the rate of optical duplication in order to give a more accurate estimated library size. Set this option to null to disable optical duplicate detection. The regular expression should contain three capture groups for the three variables, in order. It must match the entire read name. Note that if the default regex is specified, a regex match is not actually done; instead the read name is split on the colon character. For 5-element names, the 3rd, 4th and 5th elements are assumed to be tile, x and y values. For 7-element names (CASAVA 1.8), the 5th, 6th, and 7th elements are assumed to be tile, x and y values.
--reference / -R | null | Reference sequence
--remove-all-duplicates | false | If true, do not write duplicates to the output file; by default they are written with the appropriate flags set.
--remove-sequencing-duplicates | false | If true, do not write optical/sequencing duplicates to the output file; by default they are written with the appropriate flags set.
--sharded-output | false | For tools that write an output, write the output in multiple pieces (shards)
--spark-master | local[*] | URL of the Spark Master to submit jobs to when using the Spark pipeline runner.
--version | false | display the version number for this tool

Optional Common Arguments

--add-output-vcf-command-line | true | If true, adds a command line header line to created VCF files.
--create-output-bam-index / -OBI | true | If true, create a BAM index when writing a coordinate-sorted BAM file.
--create-output-bam-splitting-index | true | If true, create a BAM splitting index (SBI) when writing a coordinate-sorted BAM file.
--create-output-variant-index / -OVI | true | If true, create a VCF index when writing a coordinate-sorted VCF file.
--disable-read-filter / -DF | [] | Read filters to be disabled before analysis
--disable-tool-default-read-filters | false | Disable all tool default read filters (WARNING: many tools will not function correctly without their default read filters on)
--exclude-intervals / -XL | [] | One or more genomic intervals to exclude from processing
--gatk-config-file | null | A configuration file to use with the GATK.
--interval-exclusion-padding / -ixp | 0 | Amount of padding (in bp) to add to each interval you are excluding.
--interval-padding / -ip | 0 | Amount of padding (in bp) to add to each interval you are including.
--interval-set-rule / -isr | UNION | Set merging approach to use for combining interval inputs
--QUIET | false | Whether to suppress job-summary info on System.err.
--read-filter / -RF | [] | Read filters to be applied before analysis
--read-index | [] | Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically.
--read-validation-stringency / -VS | SILENT | Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.
--tmp-dir | null | Temp directory to use.
--use-jdk-deflater / -jdk-deflater | false | Whether to use the JdkDeflater (as opposed to IntelDeflater)
--use-jdk-inflater / -jdk-inflater | false | Whether to use the JdkInflater (as opposed to IntelInflater)
--verbosity | INFO | Control verbosity of logging.

Advanced Arguments

--showHidden | false | display hidden arguments

Argument details

Arguments in this list are specific to this tool. Keep in mind that other arguments are available that are shared with other tools (e.g. command-line GATK arguments); see Inherited arguments above.


--add-output-vcf-command-line / -add-output-vcf-command-line

If true, adds a command line header line to created VCF files.

boolean  true


--arguments_file / NA

read one or more arguments files and add them to the command line

List[File]  []


--bam-partition-size / NA

maximum number of bytes to read from a file into each partition of reads. Setting this higher will result in fewer partitions. Note that this will not be equal to the size of the partition in memory. Defaults to 0, which uses the default split size (determined by the Hadoop input format, typically the size of one HDFS block).

long  0  [ -∞, ∞ ]


--conf / -conf

Spark properties to set on the Spark context, in the format <property>=<value>

List[String]  []


--create-output-bam-index / -OBI

If true, create a BAM index when writing a coordinate-sorted BAM file.

boolean  true


--create-output-bam-splitting-index / NA

If true, create a BAM splitting index (SBI) when writing a coordinate-sorted BAM file.

boolean  true


--create-output-variant-index / -OVI

If true, create a VCF index when writing a coordinate-sorted VCF file.

boolean  true


--disable-read-filter / -DF

Read filters to be disabled before analysis

List[String]  []


--disable-sequence-dictionary-validation / -disable-sequence-dictionary-validation

If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk!

boolean  false


--disable-tool-default-read-filters / -disable-tool-default-read-filters

Disable all tool default read filters (WARNING: many tools will not function correctly without their default read filters on)

boolean  false


--do-not-mark-unmapped-mates / NA

If enabled, unmapped mates of duplicate-marked reads will not themselves be marked as duplicates.

boolean  false


--duplicate-scoring-strategy / -DS

The scoring strategy for choosing the non-duplicate among candidates.

The --duplicate-scoring-strategy argument is an enumerated type (MarkDuplicatesScoringStrategy), which can have one of the following values:

SUM_OF_BASE_QUALITIES
TOTAL_MAPPED_REFERENCE_LENGTH

MarkDuplicatesScoringStrategy  SUM_OF_BASE_QUALITIES


--duplicate-tagging-policy / NA

Determines how duplicate types are recorded in the DT optional attribute.

Exclusion: This argument cannot be used at the same time as remove-all-duplicates, remove-sequencing-duplicates.

The --duplicate-tagging-policy argument is an enumerated type (DuplicateTaggingPolicy), which can have one of the following values:

DontTag
OpticalOnly
All

DuplicateTaggingPolicy  DontTag


--exclude-intervals / -XL

One or more genomic intervals to exclude from processing
Use this argument to exclude certain parts of the genome from the analysis (like -L, but the opposite). This argument can be specified multiple times. You can use samtools-style intervals either explicitly on the command line (e.g. -XL 1 or -XL 1:100-200) or by loading in a file containing a list of intervals (e.g. -XL myFile.intervals).

List[String]  []


--gatk-config-file / NA

A configuration file to use with the GATK.

String  null


--gcs-max-retries / -gcs-retries

If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection

int  20  [ -∞, ∞ ]


--gcs-project-for-requester-pays / NA

Project to bill when accessing "requester pays" buckets. If unset, these buckets cannot be accessed.

String  ""


--help / -h

display the help message

boolean  false


--input / -I

BAM/SAM/CRAM file containing reads

R List[String]  []


--interval-exclusion-padding / -ixp

Amount of padding (in bp) to add to each interval you are excluding.
Use this to add padding to the intervals specified using -XL. For example, '-XL 1:100' with a padding value of 20 would turn into '-XL 1:80-120'. This is typically used to add padding around targets when analyzing exomes.

int  0  [ -∞, ∞ ]


--interval-merging-rule / -imr

Interval merging rule for abutting intervals
By default, the program merges abutting intervals (i.e. intervals that are directly side-by-side but do not actually overlap) into a single continuous interval. However you can change this behavior if you want them to be treated as separate intervals instead.

The --interval-merging-rule argument is an enumerated type (IntervalMergingRule), which can have one of the following values:

ALL
OVERLAPPING_ONLY

IntervalMergingRule  ALL


--interval-padding / -ip

Amount of padding (in bp) to add to each interval you are including.
Use this to add padding to the intervals specified using -L. For example, '-L 1:100' with a padding value of 20 would turn into '-L 1:80-120'. This is typically used to add padding around targets when analyzing exomes.

int  0  [ -∞, ∞ ]


--interval-set-rule / -isr

Set merging approach to use for combining interval inputs
By default, the program will take the UNION of all intervals specified using -L and/or -XL. However, you can change this setting for -L, for example if you want to take the INTERSECTION of the sets instead. E.g. to perform the analysis only on chromosome 1 exomes, you could specify -L exomes.intervals -L 1 --interval-set-rule INTERSECTION. However, it is not possible to modify the merging approach for intervals passed using -XL (they will always be merged using UNION). Note that if you specify both -L and -XL, the -XL interval set will be subtracted from the -L interval set.

The --interval-set-rule argument is an enumerated type (IntervalSetRule), which can have one of the following values:

UNION
Take the union of all intervals
INTERSECTION
Take the intersection of intervals (the subset that overlaps all intervals specified)

IntervalSetRule  UNION


--intervals / -L

One or more genomic intervals over which to operate

List[String]  []


--metrics-file / -M

Path to write duplication metrics to.

String  null


--num-reducers / NA

For tools that shuffle data or write an output, sets the number of reducers. Defaults to 0, which gives one partition per 10MB of input.

int  0  [ -∞, ∞ ]


--optical-duplicate-pixel-distance / NA

The maximum offset between two duplicate clusters in order to consider them optical duplicates. This should usually be set to some fairly small number (e.g. 5-10 pixels) unless using later versions of the Illumina pipeline that multiply pixel values by 10, in which case 50-100 is more normal.

int  100  [ -∞, ∞ ]
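
For example, a sketch for data from a patterned flow cell; the 2500-pixel value is the commonly cited Picard recommendation for patterned flow cells, not a value taken from this document:

      gatk MarkDuplicatesSpark \
          -I input.bam \
          -O marked_duplicates.bam \
          -M marked_dup_metrics.txt \
          --optical-duplicate-pixel-distance 2500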


--output / -O

the output bam

R String  null


--output-shard-tmp-dir / NA

When writing a BAM in single-shard mode, the directory in which to write the temporary intermediate output shards; if not specified, .parts/ will be used.

Exclusion: This argument cannot be used at the same time as sharded-output.

String  null


--program-name / NA

Name of the program running

String  null


--QUIET / NA

Whether to suppress job-summary info on System.err.

Boolean  false


--read-filter / -RF

Read filters to be applied before analysis

List[String]  []


--read-index / -read-index

Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically.

List[String]  []


--read-name-regex / NA

Regular expression that can be used to parse read names in the incoming SAM file. Read names are parsed to extract three variables: tile/region, x coordinate and y coordinate. These values are used to estimate the rate of optical duplication in order to give a more accurate estimated library size. Set this option to null to disable optical duplicate detection. The regular expression should contain three capture groups for the three variables, in order. It must match the entire read name. Note that if the default regex is specified, a regex match is not actually done; instead the read name is split on the colon character. For 5-element names, the 3rd, 4th and 5th elements are assumed to be tile, x and y values. For 7-element names (CASAVA 1.8), the 5th, 6th, and 7th elements are assumed to be tile, x and y values.

String  
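
For example, to disable optical duplicate detection as described above (at the cost of accurate library size estimation):

      gatk MarkDuplicatesSpark \
          -I input.bam \
          -O marked_duplicates.bam \
          -M marked_dup_metrics.txt \
          --read-name-regex null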


--read-validation-stringency / -VS

Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.

The --read-validation-stringency argument is an enumerated type (ValidationStringency), which can have one of the following values:

STRICT
LENIENT
SILENT

ValidationStringency  SILENT


--reference / -R

Reference sequence

String  null


--remove-all-duplicates / NA

If true, do not write duplicates to the output file; by default they are written with the appropriate flags set.

Exclusion: This argument cannot be used at the same time as duplicate-tagging-policy, remove-sequencing-duplicates.

boolean  false


--remove-sequencing-duplicates / NA

If true, do not write optical/sequencing duplicates to the output file; by default they are written with the appropriate flags set.

Exclusion: This argument cannot be used at the same time as duplicate-tagging-policy, remove-all-duplicates.

boolean  false


--sharded-output / NA

For tools that write an output, write the output in multiple pieces (shards)

Exclusion: This argument cannot be used at the same time as output-shard-tmp-dir.

boolean  false


--showHidden / -showHidden

display hidden arguments

boolean  false


--spark-master / NA

URL of the Spark Master to submit jobs to when using the Spark pipeline runner.

String  local[*]


--tmp-dir / NA

Temp directory to use.

String  null


--use-jdk-deflater / -jdk-deflater

Whether to use the JdkDeflater (as opposed to IntelDeflater)

boolean  false


--use-jdk-inflater / -jdk-inflater

Whether to use the JdkInflater (as opposed to IntelInflater)

boolean  false


--verbosity / -verbosity

Control verbosity of logging.

The --verbosity argument is an enumerated type (LogLevel), which can have one of the following values:

ERROR
WARNING
INFO
DEBUG

LogLevel  INFO


--version / NA

display the version number for this tool

boolean  false


See also General Documentation | Tool Documentation Index | Support Forum

GATK version 4.1.0.0 built at Wed, 30 Jan 2019 10:21:04 +0530.