
MarkDuplicatesSpark

MarkDuplicates on Spark

Category Read Data Manipulation


Overview

MarkDuplicates on Spark

This is a Spark implementation of Picard MarkDuplicates that allows the tool to be run in parallel on multiple cores on a local machine or multiple machines on a Spark cluster while still matching the output of the non-Spark Picard version of the tool. Since the tool requires holding all of the readnames in memory while it groups read information, machine configuration and starting sort-order impact tool performance.

Here are some differences of note between MarkDuplicatesSpark and Picard MarkDuplicates.
  • MarkDuplicatesSpark processing can replace both the MarkDuplicates and SortSam steps of the Best Practices single sample pipeline. After flagging duplicate sets, the tool automatically coordinate-sorts the records. It is still necessary to subsequently run SetNmMdAndUqTags before running BQSR (a sketch of this step appears at the end of this overview).
  • The tool is optimized to run on queryname-grouped alignments (that is, all reads with the same queryname are together in the input file). If provided coordinate-sorted alignments, the tool will spend additional time first queryname-sorting the reads internally. This can make the tool up to 2x slower under some circumstances.
  • Because MarkDuplicatesSpark queryname-sorts coordinate-sorted inputs internally at the start, the tool produces identical results regardless of the input sort-order. That is, it will flag duplicate sets that include secondary, supplementary, and unmapped mate records no matter the sort-order of the input. This differs from how Picard MarkDuplicates behaves given differently sorted inputs.
  • Collecting duplicate metrics slows down performance, so metrics collection is optional in the Spark version of the tool and must be requested with '-M'. It is also possible to collect the metrics with the standalone Picard tool EstimateLibraryComplexity.
  • MarkDuplicatesSpark is optimized to run locally on a single machine by leveraging core parallelism that MarkDuplicates and SortSam cannot. It typically runs about 15% faster than MarkDuplicates followed by SortSam on the same data at 2 cores, and scales roughly linearly up to 16 cores. This means MarkDuplicatesSpark, even without access to a Spark cluster, is faster than MarkDuplicates.
  • MarkDuplicatesSpark can be run with multiple input bams. In this case, each input must be either queryname-grouped or queryname-sorted (see the example below).
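
For example, multiple inputs are passed by repeating the -I argument (a minimal sketch; the file names are placeholders):

      # input1.bam and input2.bam are placeholder file names
      gatk MarkDuplicatesSpark \
            -I input1.bam \
            -I input2.bam \
            -O marked_duplicates.bam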

For a typical 30x coverage WGS BAM, we recommend running on a machine with at least 16 GB of memory. Memory usage scales with library complexity, and the tool will need more memory for larger or more complex data. If the tool is running slowly, it is possible that Spark is running out of memory and spilling data to disk excessively. If so, increasing the memory available to the tool should yield a speedup, up to a threshold beyond which additional memory has no further effect.
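
When running in local mode, the simplest way to raise the memory available to the tool is the GATK wrapper's --java-options argument, which sets the JVM heap size. A minimal sketch (16g is an illustrative value, not a prescription):

      # -Xmx sets the JVM heap size; 16g is illustrative
      gatk --java-options "-Xmx16g" MarkDuplicatesSpark \
            -I input.bam \
            -O marked_duplicates.bam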

Note that this tool does not support UMI-based duplicate marking.

See MarkDuplicates documentation for details on tool features and background information.
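
As noted above, after MarkDuplicatesSpark it is still necessary to run SetNmMdAndUqTags before BQSR. A minimal sketch of that follow-up step, assuming the standard Picard-style -I/-O/-R short arguments and placeholder file names:

      # placeholder file names; -R must point to the reference used for alignment
      gatk SetNmMdAndUqTags \
            -I marked_duplicates.bam \
            -O marked_duplicates.fixed.bam \
            -R reference.fasta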

Usage examples

Provide queryname-grouped reads to MarkDuplicatesSpark
      gatk MarkDuplicatesSpark \
            -I input.bam \
            -O marked_duplicates.bam
     
Additionally produce estimated library complexity metrics
     gatk MarkDuplicatesSpark \
             -I input.bam \
             -O marked_duplicates.bam \
             -M marked_dup_metrics.txt

MarkDuplicatesSpark run locally specifying the removal of sequencing duplicates
       gatk MarkDuplicatesSpark \
            -I input.bam \
            -O marked_duplicates.bam \
            --remove-sequencing-duplicates
     
MarkDuplicatesSpark run locally tagging OpticalDuplicates using the "DT" attribute for reads
       gatk MarkDuplicatesSpark \
            -I input.bam \
            -O marked_duplicates.bam \
            --duplicate-tagging-policy OpticalOnly
     
MarkDuplicatesSpark run locally specifying the number of executor cores. Note that if 'spark.executor.cores' is unset, Spark will use all available cores on the machine.
       gatk MarkDuplicatesSpark \
            -I input.bam \
            -O marked_duplicates.bam \
            -M marked_dup_metrics.txt \
            --conf 'spark.executor.cores=5'
     
MarkDuplicatesSpark run on a Spark cluster with five executors, each with eight cores
       gatk MarkDuplicatesSpark \
            -I input.bam \
            -O marked_duplicates.bam \
            -M marked_dup_metrics.txt \
            -- \
            --spark-runner SPARK \
            --spark-master MASTER_URL \
            --num-executors 5 \
            --executor-cores 8
     
Please see Picard DuplicationMetrics for detailed explanations of the output metrics.

Notes

  1. This Spark tool performs a significant amount of disk I/O. Run with both the input data and outputs on high-throughput SSDs when possible. When pipelining this tool on Google Compute Engine instances, requisition machines with LOCAL SSDs for best performance.
  2. Furthermore, we recommend explicitly setting the Spark temp directory to an available SSD when running in local mode by adding the argument --conf 'spark.local.dir=/PATH/TO/TEMP/DIR', as shown below. See this forum discussion for details.
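
For example (reusing the /PATH/TO/TEMP/DIR placeholder from the note above):

      # /PATH/TO/TEMP/DIR is a placeholder for an SSD-backed directory
      gatk MarkDuplicatesSpark \
            -I input.bam \
            -O marked_duplicates.bam \
            --conf 'spark.local.dir=/PATH/TO/TEMP/DIR'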

Additional Information

Read filters

This Read Filter is automatically applied to the data by the Engine before processing by MarkDuplicatesSpark.

MarkDuplicatesSpark specific arguments

This table summarizes the command-line arguments that are specific to this tool. For more details on each argument, see the list below the table or click on an argument name to jump directly to that entry in the list.

Argument name(s) Default value Summary
Required Arguments
--input
 -I
[] BAM/SAM/CRAM file containing reads
--output
 -O
null the output bam
Optional Tool Arguments
--arguments_file
[] read one or more arguments files and add them to the command line
--bam-partition-size
0 maximum number of bytes to read from a file into each partition of reads. Setting this higher will result in fewer partitions. Note that this will not be equal to the size of the partition in memory. Defaults to 0, which uses the default split size (determined by the Hadoop input format, typically the size of one HDFS block).
--conf
[] Spark properties to set on the Spark context in the format property=value
--disable-sequence-dictionary-validation
false If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk!
--do-not-mark-unmapped-mates
false Enabling this option will mean unmapped mates of duplicate-marked reads will not be marked as duplicates.
--duplicate-scoring-strategy
 -DS
SUM_OF_BASE_QUALITIES The scoring strategy for choosing the non-duplicate among candidates.
--duplicate-tagging-policy
DontTag Determines how duplicate types are recorded in the DT optional attribute.
--gcs-max-retries
 -gcs-retries
20 If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection
--gcs-project-for-requester-pays
"" Project to bill when accessing "requester pays" buckets. If unset, these buckets cannot be accessed.
--help
 -h
false display the help message
--interval-merging-rule
 -imr
ALL Interval merging rule for abutting intervals
--intervals
 -L
[] One or more genomic intervals over which to operate
--metrics-file
 -M
null Path to write duplication metrics to.
--num-reducers
0 For tools that shuffle data or write an output, sets the number of reducers. Defaults to 0, which gives one partition per 10MB of input.
--optical-duplicate-pixel-distance
100 The maximum offset between two duplicate clusters in order to consider them optical duplicates. This should usually be set to some fairly small number (e.g. 5-10 pixels) unless using later versions of the Illumina pipeline that multiply pixel values by 10, in which case 50-100 is more normal.
--output-shard-tmp-dir
null when writing a bam in single-shard mode, the directory in which to write the temporary intermediate output shards; if not specified, .parts/ will be used
--program-name
null Name of the program running
--read-name-regex
Regular expression that can be used to parse read names in the incoming SAM file. Read names are parsed to extract three variables: tile/region, x coordinate and y coordinate. These values are used to estimate the rate of optical duplication in order to give a more accurate estimated library size. Set this option to null to disable optical duplicate detection. The regular expression should contain three capture groups for the three variables, in order. It must match the entire read name. Note that if the default regex is specified, a regex match is not actually done; instead, the read name is split on the colon character. For 5-element names, the 3rd, 4th and 5th elements are assumed to be tile, x and y values. For 7-element names (CASAVA 1.8), the 5th, 6th and 7th elements are assumed to be tile, x and y values.
--reference
 -R
null Reference sequence
--remove-all-duplicates
false If true, duplicates are not written to the output file at all, rather than being written with the appropriate flags set.
--remove-sequencing-duplicates
false If true, optical/sequencing duplicates are not written to the output file at all, rather than being written with the appropriate flags set.
--sharded-output
false For tools that write an output, write the output in multiple pieces (shards)
--spark-master
local[*] URL of the Spark Master to submit jobs to when using the Spark pipeline runner.
--spark-verbosity
null Spark verbosity. Overrides --verbosity for Spark-generated logs only. Possible values: {ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE}
--use-nio
false Whether to use NIO or the Hadoop filesystem (default) for reading files. (Note that the Hadoop filesystem is always used for writing files.)
--version
false display the version number for this tool
Optional Common Arguments
--add-output-vcf-command-line
true If true, adds a command line header line to created VCF files.
--create-output-bam-index
 -OBI
true If true, create a BAM index when writing a coordinate-sorted BAM file.
--create-output-bam-splitting-index
true If true, create a BAM splitting index (SBI) when writing a coordinate-sorted BAM file.
--create-output-variant-index
 -OVI
true If true, create a VCF index when writing a coordinate-sorted VCF file.
--disable-read-filter
 -DF
[] Read filters to be disabled before analysis
--disable-tool-default-read-filters
false Disable all tool default read filters (WARNING: many tools will not function correctly without their default read filters on)
--exclude-intervals
 -XL
[] One or more genomic intervals to exclude from processing
--gatk-config-file
null A configuration file to use with the GATK.
--interval-exclusion-padding
 -ixp
0 Amount of padding (in bp) to add to each interval you are excluding.
--interval-padding
 -ip
0 Amount of padding (in bp) to add to each interval you are including.
--interval-set-rule
 -isr
UNION Set merging approach to use for combining interval inputs
--QUIET
false Whether to suppress job-summary info on System.err.
--read-filter
 -RF
[] Read filters to be applied before analysis
--read-index
[] Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically.
--read-validation-stringency
 -VS
SILENT Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.
--tmp-dir
null Temp directory to use.
--use-jdk-deflater
 -jdk-deflater
false Whether to use the JdkDeflater (as opposed to IntelDeflater)
--use-jdk-inflater
 -jdk-inflater
false Whether to use the JdkInflater (as opposed to IntelInflater)
--verbosity
INFO Control verbosity of logging.
Advanced Arguments
--allow-multiple-sort-orders-in-input
false Allow non-queryname sorted inputs when specifying multiple input bams.
--showHidden
false display hidden arguments
--treat-unsorted-as-querygroup-ordered
false Treat unsorted files as query-group ordered files. WARNING: This option disables a basic safety check and may result in unexpected behavior if the file is truly unordered

Argument details

Arguments in this list are specific to this tool. Keep in mind that other arguments are available that are shared with other tools (e.g. command-line GATK arguments); see Inherited arguments above.


--add-output-vcf-command-line / -add-output-vcf-command-line

If true, adds a command line header line to created VCF files.

boolean  true


--allow-multiple-sort-orders-in-input / NA

Allow non-queryname sorted inputs when specifying multiple input bams.

boolean  false


--arguments_file / NA

read one or more arguments files and add them to the command line

List[File]  []


--bam-partition-size / NA

maximum number of bytes to read from a file into each partition of reads. Setting this higher will result in fewer partitions. Note that this will not be equal to the size of the partition in memory. Defaults to 0, which uses the default split size (determined by the Hadoop input format, typically the size of one HDFS block).

long  0  [ [ -∞  ∞ ] ]


--conf / NA

Spark properties to set on the Spark context in the format property=value

List[String]  []


--create-output-bam-index / -OBI

If true, create a BAM index when writing a coordinate-sorted BAM file.

boolean  true


--create-output-bam-splitting-index / NA

If true, create a BAM splitting index (SBI) when writing a coordinate-sorted BAM file.

boolean  true


--create-output-variant-index / -OVI

If true, create a VCF index when writing a coordinate-sorted VCF file.

boolean  true


--disable-read-filter / -DF

Read filters to be disabled before analysis

List[String]  []


--disable-sequence-dictionary-validation / -disable-sequence-dictionary-validation

If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk!

boolean  false


--disable-tool-default-read-filters / -disable-tool-default-read-filters

Disable all tool default read filters (WARNING: many tools will not function correctly without their default read filters on)

boolean  false


--do-not-mark-unmapped-mates / NA

Enabling this option will mean unmapped mates of duplicate-marked reads will not be marked as duplicates.

boolean  false


--duplicate-scoring-strategy / -DS

The scoring strategy for choosing the non-duplicate among candidates.

The --duplicate-scoring-strategy argument is an enumerated type (MarkDuplicatesScoringStrategy), which can have one of the following values:

SUM_OF_BASE_QUALITIES
TOTAL_MAPPED_REFERENCE_LENGTH

MarkDuplicatesScoringStrategy  SUM_OF_BASE_QUALITIES


--duplicate-tagging-policy / NA

Determines how duplicate types are recorded in the DT optional attribute.

Exclusion: This argument cannot be used at the same time as remove-all-duplicates, remove-sequencing-duplicates.

The --duplicate-tagging-policy argument is an enumerated type (DuplicateTaggingPolicy), which can have one of the following values:

DontTag
OpticalOnly
All

DuplicateTaggingPolicy  DontTag


--exclude-intervals / -XL

One or more genomic intervals to exclude from processing
Use this argument to exclude certain parts of the genome from the analysis (like -L, but the opposite). This argument can be specified multiple times. You can use samtools-style intervals either explicitly on the command line (e.g. -XL 1 or -XL 1:100-200) or by loading in a file containing a list of intervals (e.g. -XL myFile.intervals).

List[String]  []


--gatk-config-file / NA

A configuration file to use with the GATK.

String  null


--gcs-max-retries / -gcs-retries

If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection

int  20  [ [ -∞  ∞ ] ]


--gcs-project-for-requester-pays / NA

Project to bill when accessing "requester pays" buckets. If unset, these buckets cannot be accessed.

String  ""


--help / -h

display the help message

boolean  false


--input / -I

BAM/SAM/CRAM file containing reads

R List[String]  []


--interval-exclusion-padding / -ixp

Amount of padding (in bp) to add to each interval you are excluding.
Use this to add padding to the intervals specified using -XL. For example, '-XL 1:100' with a padding value of 20 would turn into '-XL 1:80-120'. This is typically used to add padding around targets when analyzing exomes.

int  0  [ [ -∞  ∞ ] ]


--interval-merging-rule / -imr

Interval merging rule for abutting intervals
By default, the program merges abutting intervals (i.e. intervals that are directly side-by-side but do not actually overlap) into a single continuous interval. However you can change this behavior if you want them to be treated as separate intervals instead.

The --interval-merging-rule argument is an enumerated type (IntervalMergingRule), which can have one of the following values:

ALL
OVERLAPPING_ONLY

IntervalMergingRule  ALL


--interval-padding / -ip

Amount of padding (in bp) to add to each interval you are including.
Use this to add padding to the intervals specified using -L. For example, '-L 1:100' with a padding value of 20 would turn into '-L 1:80-120'. This is typically used to add padding around targets when analyzing exomes.

int  0  [ [ -∞  ∞ ] ]


--interval-set-rule / -isr

Set merging approach to use for combining interval inputs
By default, the program will take the UNION of all intervals specified using -L and/or -XL. However, you can change this setting for -L, for example if you want to take the INTERSECTION of the sets instead. E.g. to perform the analysis only on chromosome 1 exomes, you could specify -L exomes.intervals -L 1 --interval-set-rule INTERSECTION. However, it is not possible to modify the merging approach for intervals passed using -XL (they will always be merged using UNION). Note that if you specify both -L and -XL, the -XL interval set will be subtracted from the -L interval set.

The --interval-set-rule argument is an enumerated type (IntervalSetRule), which can have one of the following values:

UNION
Take the union of all intervals
INTERSECTION
Take the intersection of intervals (the subset that overlaps all intervals specified)

IntervalSetRule  UNION


--intervals / -L

One or more genomic intervals over which to operate

List[String]  []


--metrics-file / -M

Path to write duplication metrics to.

String  null


--num-reducers / NA

For tools that shuffle data or write an output, sets the number of reducers. Defaults to 0, which gives one partition per 10MB of input.

int  0  [ [ -∞  ∞ ] ]


--optical-duplicate-pixel-distance / NA

The maximum offset between two duplicate clusters in order to consider them optical duplicates. This should usually be set to some fairly small number (e.g. 5-10 pixels) unless using later versions of the Illumina pipeline that multiply pixel values by 10, in which case 50-100 is more normal.

int  100  [ [ -∞  ∞ ] ]
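
As an illustrative sketch only (the value 2500 below is a figure commonly recommended elsewhere for patterned flow cells such as HiSeq X and NovaSeq; it is not taken from this document):

      # 2500 is an assumed patterned-flow-cell value, not this tool's default
      gatk MarkDuplicatesSpark \
            -I input.bam \
            -O marked_duplicates.bam \
            -M marked_dup_metrics.txt \
            --optical-duplicate-pixel-distance 2500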


--output / -O

the output bam

R String  null


--output-shard-tmp-dir / NA

when writing a bam in single-shard mode, the directory in which to write the temporary intermediate output shards; if not specified, .parts/ will be used

Exclusion: This argument cannot be used at the same time as sharded-output.

String  null


--program-name / NA

Name of the program running

String  null


--QUIET / NA

Whether to suppress job-summary info on System.err.

Boolean  false


--read-filter / -RF

Read filters to be applied before analysis

List[String]  []


--read-index / -read-index

Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically.

List[String]  []


--read-name-regex / NA

Regular expression that can be used to parse read names in the incoming SAM file. Read names are parsed to extract three variables: tile/region, x coordinate and y coordinate. These values are used to estimate the rate of optical duplication in order to give a more accurate estimated library size. Set this option to null to disable optical duplicate detection. The regular expression should contain three capture groups for the three variables, in order. It must match the entire read name. Note that if the default regex is specified, a regex match is not actually done; instead, the read name is split on the colon character. For 5-element names, the 3rd, 4th and 5th elements are assumed to be tile, x and y values. For 7-element names (CASAVA 1.8), the 5th, 6th and 7th elements are assumed to be tile, x and y values.

String  
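
As an illustration only (this regex is hypothetical and is not the tool's default, which is elided above), a pattern for 5-element read names of the form machine:lane:tile:x:y, with the required three capture groups, might look like:

      # hypothetical regex for machine:lane:tile:x:y read names
      gatk MarkDuplicatesSpark \
            -I input.bam \
            -O marked_duplicates.bam \
            -M marked_dup_metrics.txt \
            --read-name-regex '[^:]+:[^:]+:([0-9]+):([0-9]+):([0-9]+)'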


--read-validation-stringency / -VS

Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.

The --read-validation-stringency argument is an enumerated type (ValidationStringency), which can have one of the following values:

STRICT
LENIENT
SILENT

ValidationStringency  SILENT


--reference / -R

Reference sequence

String  null


--remove-all-duplicates / NA

If true, duplicates are not written to the output file at all, rather than being written with the appropriate flags set.

Exclusion: This argument cannot be used at the same time as duplicate-tagging-policy, remove-sequencing-duplicates.

boolean  false


--remove-sequencing-duplicates / NA

If true, optical/sequencing duplicates are not written to the output file at all, rather than being written with the appropriate flags set.

Exclusion: This argument cannot be used at the same time as duplicate-tagging-policy, remove-all-duplicates.

boolean  false


--sharded-output / NA

For tools that write an output, write the output in multiple pieces (shards)

Exclusion: This argument cannot be used at the same time as output-shard-tmp-dir.

boolean  false


--showHidden / -showHidden

display hidden arguments

boolean  false


--spark-master / NA

URL of the Spark Master to submit jobs to when using the Spark pipeline runner.

String  local[*]
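
For example, to cap local-mode parallelism at four threads using Spark's standard local[N] master URL syntax (the value 4 is illustrative):

      # local[4] runs Spark locally with four worker threads
      gatk MarkDuplicatesSpark \
            -I input.bam \
            -O marked_duplicates.bam \
            --spark-master local[4]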


--spark-verbosity / NA

Spark verbosity. Overrides --verbosity for Spark-generated logs only. Possible values: {ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE}

String  null


--tmp-dir / NA

Temp directory to use.

GATKPathSpecifier  null


--treat-unsorted-as-querygroup-ordered / NA

Treat unsorted files as query-group ordered files. WARNING: This option disables a basic safety check and may result in unexpected behavior if the file is truly unordered

boolean  false


--use-jdk-deflater / -jdk-deflater

Whether to use the JdkDeflater (as opposed to IntelDeflater)

boolean  false


--use-jdk-inflater / -jdk-inflater

Whether to use the JdkInflater (as opposed to IntelInflater)

boolean  false


--use-nio / NA

Whether to use NIO or the Hadoop filesystem (default) for reading files. (Note that the Hadoop filesystem is always used for writing files.)

boolean  false


--verbosity / -verbosity

Control verbosity of logging.

The --verbosity argument is an enumerated type (LogLevel), which can have one of the following values:

ERROR
WARNING
INFO
DEBUG

LogLevel  INFO


--version / NA

display the version number for this tool

boolean  false




See also General Documentation | Tool Documentation Index | Support Forum

GATK version 4.1.4.0 built at Wed, 9 Oct 2019 15:19:59 -0400.