PathSeqBwaSpark

Step 2: Aligns reads to the microbe reference

Category Metagenomics


Overview

Align reads to a microbe reference using BWA-MEM and Spark. Second step in the PathSeq pipeline.

See PathSeqPipelineSpark for an overview of the PathSeq pipeline.

This is a specialized version of BwaSpark designed for the PathSeq pipeline. The main difference is that alignments with SAM bit flag 0x100 or 0x800 (indicating a secondary or supplementary alignment) are omitted from the output.
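
As a quick sanity check that an output BAM really contains no secondary or supplementary records, one can count flag bits with samtools (assumed to be installed separately; it is not part of GATK):

 # Secondary (0x100) and supplementary (0x800) records; both counts should be 0.
 samtools view -c -f 0x100 output_reads_paired.bam
 samtools view -c -f 0x800 output_reads_paired.bam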

Inputs

  • Unaligned queryname-sorted BAM file containing only paired reads (paired-end reads with mates); see the sort-order check sketched after this list
  • Unaligned BAM file containing only unpaired reads (paired-end reads without mates and/or single-end reads)
  • *Microbe reference BWA-MEM index image generated using BwaMemIndexImageCreator
  • *Indexed microbe reference FASTA file

*A standard microbe reference is available in the GATK Resource Bundle.
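
The paired input must be queryname-sorted. A minimal check and, if necessary, a re-sort are sketched below, assuming samtools is available (any tool that sorts by query name, such as Picard SortSam, works equally well):

 # The @HD header line should report SO:queryname.
 samtools view -H input_reads_paired.bam | grep '^@HD'

 # Re-sort by query name if the BAM is coordinate-sorted.
 samtools sort -n -o input_reads_paired.qsorted.bam input_reads_paired.bam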

Output

  • Aligned BAM file containing the paired reads (paired-end reads with mates)
  • Aligned BAM file containing the unpaired reads (paired-end reads without mates and/or single-end reads)

Usage example

This tool can be run without explicitly specifying Spark options; when none are given, as in the first example below, the tool runs locally. See Tutorial#10060 for an example of how to set up and run a Spark tool on a cloud Spark cluster.

Local mode:

 gatk PathSeqBwaSpark  \
   --paired-input input_reads_paired.bam \
   --unpaired-input input_reads_unpaired.bam \
   --paired-output output_reads_paired.bam \
   --unpaired-output output_reads_unpaired.bam \
   --microbe-bwa-image reference.img \
   --microbe-fasta reference.fa
 

Spark cluster on Google Cloud DataProc with six worker nodes, each with 32 cores and 208 GB of memory:

 gatk PathSeqBwaSpark  \
   --paired-input gs://my-gcs-bucket/input_reads_paired.bam \
   --unpaired-input gs://my-gcs-bucket/input_reads_unpaired.bam \
   --paired-output gs://my-gcs-bucket/output_reads_paired.bam \
   --unpaired-output gs://my-gcs-bucket/output_reads_unpaired.bam \
   --microbe-bwa-image /references/reference.img \
   --microbe-fasta hdfs://my-cluster-m:8020//references/reference.fa \
   --bam-partition-size 4000000 \
   -- \
   --sparkRunner GCS \
   --cluster my_cluster \
   --driver-memory 8G \
   --executor-memory 32G \
   --num-executors 4 \
   --executor-cores 30 \
   --conf spark.yarn.executor.memoryOverhead=132000
 

Note that the microbe BWA image must be copied to the same path on every worker node. The microbe FASTA may either be copied to the same path on every worker node or placed in HDFS.
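
As an illustration, the files for the cluster example above could be staged as follows; the worker node names and the /references directory are placeholders for your own cluster, and gcloud/hdfs are assumed to be configured:

 # Copy the BWA index image to the same local path on every worker node.
 for node in my-cluster-w-0 my-cluster-w-1; do
   gcloud compute scp reference.img "${node}":/references/reference.img
 done

 # Instead of per-node copies, the FASTA (with its .fai and .dict companions)
 # can go in HDFS, matching the hdfs:// path used in the example.
 hdfs dfs -mkdir -p /references
 hdfs dfs -put reference.fa reference.fa.fai reference.dict /references/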

Notes

For small input BAMs, it is recommended to reduce the BAM partition size in order to increase parallelism. Note that insert size is estimated separately for each Spark partition; consequently, partition size and other Spark parameters can affect the output of paired-end alignment.
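
For example, to split a small BAM into roughly eight partitions, one could derive a value for --bam-partition-size from the file size (a rough sketch; the stat flag shown is GNU-specific):

 # Aim for ~8 partitions: pass bam_bytes / 8 to --bam-partition-size.
 bam_bytes=$(stat -c%s input_reads_paired.bam)  # on macOS/BSD use: stat -f%z
 echo $(( bam_bytes / 8 ))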

To minimize output file size, header lines are included only for sequences with at least one alignment.
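
In other words, the number of @SQ lines in the output header reflects only microbe sequences that actually received alignments; it can be inspected with samtools (assumed installed):

 # Count reference sequences retained in the output header.
 samtools view -H output_reads_paired.bam | grep -c '^@SQ'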

PathSeqBwaSpark specific arguments

This table summarizes the command-line arguments that are specific to this tool. For more details on each argument, see the argument details list below the table.

Each entry below gives the argument name(s) and alias, the default value in parentheses, and a one-line summary.

Required Arguments

--microbe-bwa-image / -MI (null): Microbe reference BWA index image file generated using BwaMemIndexImageCreator. If running on a Spark cluster, this must be distributed to local disk on each node.
--microbe-fasta / -MF (null): Reference corresponding to the microbe reference image file.

Optional Tool Arguments

--arguments_file ([]): Read one or more arguments files and add them to the command line.
--bam-partition-size (0): Maximum number of bytes to read from a file into each partition of reads. Setting this higher will result in fewer partitions. Note that this will not be equal to the size of the partition in memory. Defaults to 0, which uses the default split size (determined by the Hadoop input format, typically the size of one HDFS block).
--bwa-score-threshold (30): Minimum score threshold for microbe alignments.
--conf ([]): Spark properties to set on the Spark context in the format <property>=<value>.
--disable-sequence-dictionary-validation (false): If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk!
--gcs-max-retries / -gcs-retries (20): If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection.
--help / -h (false): Display the help message.
--interval-merging-rule / -imr (ALL): Interval merging rule for abutting intervals.
--intervals / -L ([]): One or more genomic intervals over which to operate.
--max-alternate-hits (5000): Maximum number of alternate microbe alignments.
--microbe-min-seed-length (19): Minimum BWA-MEM seed length for the microbe alignment.
--num-reducers (0): For tools that shuffle data or write an output, sets the number of reducers. Defaults to 0, which gives one partition per 10MB of input.
--paired-input / -PI (null): Input queryname-sorted BAM containing only paired reads.
--paired-output / -PO (null): Output BAM containing only paired reads.
--program-name (null): Name of the program running.
--reference / -R (null): Reference sequence.
--sharded-output (false): For tools that write an output, write the output in multiple pieces (shards).
--spark-master (local[*]): URL of the Spark Master to submit jobs to when using the Spark pipeline runner.
--unpaired-input / -UI (null): Input BAM containing only unpaired reads.
--unpaired-output / -UO (null): Output BAM containing only unpaired reads.
--version (false): Display the version number for this tool.

Optional Common Arguments

--disable-read-filter / -DF ([]): Read filters to be disabled before analysis.
--disable-tool-default-read-filters (false): Disable all tool default read filters.
--exclude-intervals / -XL ([]): One or more genomic intervals to exclude from processing.
--gatk-config-file (null): A configuration file to use with the GATK.
--input / -I ([]): BAM/SAM/CRAM file containing reads.
--interval-exclusion-padding / -ixp (0): Amount of padding (in bp) to add to each interval you are excluding.
--interval-padding / -ip (0): Amount of padding (in bp) to add to each interval you are including.
--interval-set-rule / -isr (UNION): Set merging approach to use for combining interval inputs.
--QUIET (false): Whether to suppress job-summary info on System.err.
--read-filter / -RF ([]): Read filters to be applied before analysis.
--read-index ([]): Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically.
--read-validation-stringency / -VS (SILENT): Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.
--TMP_DIR ([]): Undocumented option.
--use-jdk-deflater / -jdk-deflater (false): Whether to use the JdkDeflater (as opposed to IntelDeflater).
--use-jdk-inflater / -jdk-inflater (false): Whether to use the JdkInflater (as opposed to IntelInflater).
--verbosity (INFO): Control verbosity of logging.

Advanced Arguments

--showHidden (false): Display hidden arguments.

Argument details

Arguments in this list are specific to this tool. Keep in mind that other arguments are available that are shared with other tools (e.g. command-line GATK arguments); see Inherited arguments above.


--arguments_file / NA

read one or more arguments files and add them to the command line

List[File]  []


--bam-partition-size / NA

maximum number of bytes to read from a file into each partition of reads. Setting this higher will result in fewer partitions. Note that this will not be equal to the size of the partition in memory. Defaults to 0, which uses the default split size (determined by the Hadoop input format, typically the size of one HDFS block).

long  0  [ [ -∞  ∞ ] ]


--bwa-score-threshold / -bwa-score-threshold

Minimum score threshold for microbe alignments
This parameter controls the minimum quality of the BWA alignments for the output.

int  30  [ [ 0  ∞ ] ]


--conf / -conf

spark properties to set on the spark context in the format <property>=<value>

List[String]  []


--disable-read-filter / -DF

Read filters to be disabled before analysis

List[String]  []


--disable-sequence-dictionary-validation / -disable-sequence-dictionary-validation

If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk!

boolean  false


--disable-tool-default-read-filters / -disable-tool-default-read-filters

Disable all tool default read filters

boolean  false


--exclude-intervals / -XL

One or more genomic intervals to exclude from processing
Use this argument to exclude certain parts of the genome from the analysis (like -L, but the opposite). This argument can be specified multiple times. You can use samtools-style intervals either explicitly on the command line (e.g. -XL 1 or -XL 1:100-200) or by loading in a file containing a list of intervals (e.g. -XL myFile.intervals).

List[String]  []
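
For illustration, an intervals file such as the myFile.intervals mentioned above simply lists one samtools-style interval per line; hypothetical contents (a bare contig name means the whole contig):

 1:100-200
 2
 MT:1-16569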


--gatk-config-file / NA

A configuration file to use with the GATK.

String  null


--gcs-max-retries / -gcs-retries

If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection

int  20  [ [ -∞  ∞ ] ]


--help / -h

display the help message

boolean  false


--input / -I

BAM/SAM/CRAM file containing reads

List[String]  []


--interval-exclusion-padding / -ixp

Amount of padding (in bp) to add to each interval you are excluding.
Use this to add padding to the intervals specified using -XL. For example, '-XL 1:100' with a padding value of 20 would turn into '-XL 1:80-120'. This is typically used to add padding around targets when analyzing exomes.

int  0  [ [ -∞  ∞ ] ]


--interval-merging-rule / -imr

Interval merging rule for abutting intervals
By default, the program merges abutting intervals (i.e. intervals that are directly side-by-side but do not actually overlap) into a single continuous interval. However you can change this behavior if you want them to be treated as separate intervals instead.

The --interval-merging-rule argument is an enumerated type (IntervalMergingRule), which can have one of the following values:

ALL
OVERLAPPING_ONLY

IntervalMergingRule  ALL


--interval-padding / -ip

Amount of padding (in bp) to add to each interval you are including.
Use this to add padding to the intervals specified using -L. For example, '-L 1:100' with a padding value of 20 would turn into '-L 1:80-120'. This is typically used to add padding around targets when analyzing exomes.

int  0  [ [ -∞  ∞ ] ]


--interval-set-rule / -isr

Set merging approach to use for combining interval inputs
By default, the program will take the UNION of all intervals specified using -L and/or -XL. However, you can change this setting for -L, for example if you want to take the INTERSECTION of the sets instead. E.g. to perform the analysis only on chromosome 1 exomes, you could specify -L exomes.intervals -L 1 --interval-set-rule INTERSECTION. However, it is not possible to modify the merging approach for intervals passed using -XL (they will always be merged using UNION). Note that if you specify both -L and -XL, the -XL interval set will be subtracted from the -L interval set.

The --interval-set-rule argument is an enumerated type (IntervalSetRule), which can have one of the following values:

UNION
Take the union of all intervals
INTERSECTION
Take the intersection of intervals (the subset that overlaps all intervals specified)

IntervalSetRule  UNION


--intervals / -L

One or more genomic intervals over which to operate

List[String]  []


--max-alternate-hits / -max-alternate-hits

Maximum number of alternate microbe alignments
The maximum number of alternate alignments for each read, i.e. the alignments appearing in the XA tag.

int  5000  [ [ 0  ∞ ] ]
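
Alternate hits are emitted in the BWA XA tag of each record; a quick way to peek at them, assuming samtools is installed:

 # Print the first few XA tags (alternate alignment lists) from the output.
 samtools view output_reads_paired.bam | grep -o 'XA:Z:[^[:space:]]*' | head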


--microbe-bwa-image / -MI

Microbe reference BWA index image file generated using BwaMemIndexImageCreator. If running on a Spark cluster, this must be distributed to local disk on each node.

String  null  (required)


--microbe-fasta / -MF

Reference corresponding to the microbe reference image file

String  null  (required)


--microbe-min-seed-length / -microbe-min-seed-length

Minimum BWA-MEM seed length for the microbe alignment
This parameter controls the sensitivity of the BWA-MEM aligner. Smaller values result in more alignments at the expense of computation time.

int  19  [ [ 1  ∞ ] ]  (recommended minimum: 11)
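
To illustrate tuning, the local usage example could be rerun with a shorter seed and a lower score threshold for a more sensitive (and slower) microbe alignment; the values 15 and 20 are arbitrary demonstration choices, not recommendations:

 gatk PathSeqBwaSpark \
   --paired-input input_reads_paired.bam \
   --unpaired-input input_reads_unpaired.bam \
   --paired-output output_reads_paired.bam \
   --unpaired-output output_reads_unpaired.bam \
   --microbe-bwa-image reference.img \
   --microbe-fasta reference.fa \
   --microbe-min-seed-length 15 \
   --bwa-score-threshold 20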


--num-reducers / NA

For tools that shuffle data or write an output, sets the number of reducers. Defaults to 0, which gives one partition per 10MB of input.

int  0  [ [ -∞  ∞ ] ]


--paired-input / -PI

Input queryname-sorted BAM containing only paired reads

String  null


--paired-output / -PO

Output BAM containing only paired reads

String  null


--program-name / NA

Name of the program running

String  null


--QUIET / NA

Whether to suppress job-summary info on System.err.

Boolean  false


--read-filter / -RF

Read filters to be applied before analysis

List[String]  []


--read-index / -read-index

Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically.

List[String]  []


--read-validation-stringency / -VS

Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.

The --read-validation-stringency argument is an enumerated type (ValidationStringency), which can have one of the following values:

STRICT
LENIENT
SILENT

ValidationStringency  SILENT


--reference / -R

Reference sequence

String  null


--sharded-output / NA

For tools that write an output, write the output in multiple pieces (shards)

boolean  false


--showHidden / -showHidden

display hidden arguments

boolean  false


--spark-master / NA

URL of the Spark Master to submit jobs to when using the Spark pipeline runner.

String  local[*]


--TMP_DIR / NA

Undocumented option

List[File]  []


--unpaired-input / -UI

Input BAM containing only unpaired reads

String  null


--unpaired-output / -UO

Output BAM containing only unpaired reads

String  null


--use-jdk-deflater / -jdk-deflater

Whether to use the JdkDeflater (as opposed to IntelDeflater)

boolean  false


--use-jdk-inflater / -jdk-inflater

Whether to use the JdkInflater (as opposed to IntelInflater)

boolean  false


--verbosity / -verbosity

Control verbosity of logging.

The --verbosity argument is an enumerated type (LogLevel), which can have one of the following values:

ERROR
WARNING
INFO
DEBUG

LogLevel  INFO


--version / NA

display the version number for this tool

boolean  false



GATK version 4.0.3.0.