PathSeqPipelineSpark

Combined tool that performs all steps: read filtering, microbe reference alignment, and abundance scoring

Category Metagenomics


Overview

Combined tool that performs all PathSeq steps: read filtering, microbe reference alignment and abundance scoring

PathSeq is a suite of tools for detecting microbial organisms in deeply sequenced biological samples. It is capable of (1) quantifying microbial abundances in metagenomic samples containing a mixture of organisms, (2) detecting extremely low-abundance (<0.001%) organisms, and (3) identifying unknown sequences that may belong to novel organisms. The pipeline is based on a previously published tool of the same name (Kostic et al. 2011), which has been used in a wide range of studies to investigate novel associations between pathogens and human disease.

The pipeline consists of three phases: (1) removing reads that are low quality, low complexity, or match a given host (e.g. human) reference, (2) aligning the remaining reads to a microorganism reference, and (3) determining the taxonomic classification of each read and estimating microbe abundances. These steps can be performed individually using PathSeqFilterSpark, PathSeqBwaSpark, and PathSeqScoreSpark. To simplify using the pipeline, this tool combines the three steps into one. Further details can be found in the individual tools' documentation.

The filtering phase ensures that only high-fidelity, non-host reads are classified, thus reducing computational costs and false positives. Note that while generally applicable to any type of biological sample (e.g. saliva, stool), PathSeq is particularly efficient for samples containing a high percentage of host reads (e.g. blood, tissue, CSF). PathSeq is able to detect evidence of low-abundance organisms and scales to comprehensive genomic reference databases (e.g. > 100 Gbp). Lastly, because PathSeq works by identifying both host and known microbial sequences, it can also be used to discover novel pathogens by reducing the sample to sequences of unknown origin, which may be followed by de novo assembly.

Because sequence alignment is computationally burdensome, PathSeq is integrated with Apache Spark, enabling parallelization of all steps in the pipeline on multi-core workstations and cluster environments. This overcomes the high computational cost and permits rapid turnaround times (minutes to hours) for deeply sequenced samples.

Reference files

Before running the PathSeq pipeline, the host and microbe references must be built. Prebuilt references for a standard microbial set are available in the GATK Resource Bundle.

To build custom references, users must provide FASTA files of the host and pathogen sequences. Tools are included to generate the necessary files: the host k-mer database (PathSeqBuildKmers), BWA-MEM index image files of the host and pathogen references (BwaMemIndexImageCreator), and a taxonomic tree of the pathogen reference (PathSeqBuildReferenceTaxonomy).
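
For example, given host and microbe FASTA files, the reference files could be generated along the following lines. This is a sketch only: the file names are placeholders, the PathSeqBuildReferenceTaxonomy inputs (here a RefSeq catalog and NCBI taxonomy dump) depend on your reference set, and the exact arguments should be verified against each tool's documentation.

 gatk PathSeqBuildKmers \
   --reference host.fasta \
   --output host_kmers.bfi

 gatk BwaMemIndexImageCreator \
   --input host.fasta \
   --output host_reference.img

 gatk BwaMemIndexImageCreator \
   --input microbe.fasta \
   --output microbe_reference.img

 gatk PathSeqBuildReferenceTaxonomy \
   --reference microbe.fasta \
   --refseq-catalog RefSeq-release.catalog.gz \
   --tax-dump taxdump.tar.gz \
   --output taxonomy.db

Note that the microbe FASTA must also be indexed (e.g. with samtools faidx), per the input requirements below.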

Input

  • BAM containing input reads (either unaligned or aligned to a host reference)
  • Host k-mer file generated using PathSeqBuildKmers
  • Host BWA-MEM index image generated using BwaMemIndexImageCreator
  • Microbe BWA-MEM index image generated using BwaMemIndexImageCreator
  • Indexed microbe reference FASTA file
  • Taxonomy file generated using PathSeqBuildReferenceTaxonomy

Output

  • Taxonomic scores table
  • Annotated BAM aligned to the microbe reference
  • Filter metrics file (optional)
  • Score metrics file (optional)

Usage example

This tool can be run without explicitly specifying Spark options; when Spark options are omitted, as in the local-mode example below, the tool runs on the local machine. See Tutorial#10060 for an example of how to set up and run a Spark tool on a cloud Spark cluster.

Local mode:

 gatk PathSeqPipelineSpark  \
   --input input_reads.bam \
   --kmer-file host_kmers.bfi \
   --filter-bwa-image host_reference.img \
   --microbe-bwa-image microbe_reference.img \
   --microbe-fasta reference.fa \
   --taxonomy-file taxonomy.db \
   --min-clipped-read-length 60 \
   --min-score-identity 0.90 \
   --identity-margin 0.02 \
   --scores-output scores.txt \
   --output output_reads.bam \
   --filter-metrics filter_metrics.txt \
   --score-metrics score_metrics.txt
 

Spark cluster on Google Cloud Dataproc with six 32-core / 208 GB memory worker nodes (the cluster must have at least as many cores per node as --executor-cores below):

 gatk PathSeqPipelineSpark  \
   --input gs://my-gcs-bucket/input_reads.bam \
   --kmer-file hdfs://my-cluster-m:8020/host_kmers.bfi \
   --filter-bwa-image /references/host_reference.img \
   --microbe-bwa-image /references/microbe_reference.img \
   --microbe-fasta hdfs://my-cluster-m:8020/reference.fa \
   --taxonomy-file hdfs://my-cluster-m:8020/taxonomy.db \
   --min-clipped-read-length 60 \
   --min-score-identity 0.90 \
   --identity-margin 0.02 \
   --scores-output gs://my-gcs-bucket/scores.txt \
   --output gs://my-gcs-bucket/output_reads.bam \
   --filter-metrics gs://my-gcs-bucket/filter_metrics.txt \
   --score-metrics gs://my-gcs-bucket/score_metrics.txt \
   -- \
   --spark-runner GCS \
   --cluster my_cluster \
   --driver-memory 8G \
   --executor-memory 32G \
   --num-executors 4 \
   --executor-cores 30 \
   --conf spark.yarn.executor.memoryOverhead=132000
 

Note that the host and microbe BWA images must be copied to the same paths on every worker node. The microbe FASTA, host k-mer file, and taxonomy file may also be copied to a single path on every worker node or to HDFS.
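
For instance, the auxiliary files could be staged ahead of time with standard Hadoop and Google Cloud tooling, along these lines (illustrative only; the cluster, node, and path names are placeholders for your own setup):

 # Copy the k-mer, microbe FASTA, and taxonomy files into HDFS
 hdfs dfs -put host_kmers.bfi hdfs://my-cluster-m:8020/host_kmers.bfi
 hdfs dfs -put reference.fa hdfs://my-cluster-m:8020/reference.fa
 hdfs dfs -put taxonomy.db hdfs://my-cluster-m:8020/taxonomy.db

 # Copy the BWA index images to the same local path on every worker node
 for node in my-cluster-w-0 my-cluster-w-1; do
   gcloud compute scp host_reference.img microbe_reference.img "$node":/references/
 done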

References

  1. Kostic, A. D. et al. (2011). PathSeq: software to identify or discover microbes by deep sequencing of human tissue. Nat. Biotechnol. 29, 393-396.

Additional Information

Read filters

The WellformedReadFilter read filter is automatically applied to the data by the Engine before processing by PathSeqPipelineSpark.

PathSeqPipelineSpark specific arguments

This table summarizes the command-line arguments that are specific to this tool. For more details on each argument, see the list below the table or click on an argument name to jump directly to that entry in the list.

Argument name(s) Default value Summary
Required Arguments
--input
 -I
[] BAM/SAM/CRAM file containing reads
--microbe-bwa-image
 -MI
null Microbe reference BWA index image file generated using BwaMemIndexImageCreator. If running on a Spark cluster, this must be distributed to local disk on each node.
--microbe-fasta
 -MF
null Reference corresponding to the microbe reference image file
--scores-output
 -SO
null URI for the taxonomic scores output
--taxonomy-file
 -T
null URI to the microbe reference taxonomy database built using PathSeqBuildReferenceTaxonomy
Optional Tool Arguments
--arguments_file
[] read one or more arguments files and add them to the command line
--bam-partition-size
0 maximum number of bytes to read from a file into each partition of reads. Setting this higher will result in fewer partitions. Note that this will not be equal to the size of the partition in memory. Defaults to 0, which uses the default split size (determined by the Hadoop input format, typically the size of one HDFS block).
--bwa-score-threshold
30 Minimum score threshold for microbe alignments
--conf
[] spark properties to set on the spark context in the format <property>=<value>
--disable-sequence-dictionary-validation
false If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk!
--divide-by-genome-length
false Divide abundance scores by each taxon's reference genome length (in millions)
--dust-mask-quality
2 Base quality to assign low-complexity bases
--dust-t
20.0 DUST algorithm score threshold
--dust-window
64 DUST algorithm window size
--filter-bwa-image
 -FI
null The BWA image file of the host reference. This must be distributed to local disk on each node.
--filter-bwa-seed-length
19 Minimum seed length for the host BWA alignment.
--filter-duplicates
true Filter duplicate reads
--filter-metrics
 -FM
null Log counts of filtered reads to this file
--gcs-max-retries
 -gcs-retries
20 If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection
--gcs-project-for-requester-pays
"" Project to bill when accessing "requester pays" buckets. If unset, these buckets cannot be accessed.
--help
 -h
false display the help message
--host-kmer-thresh
1 Host kmer count threshold.
--host-min-identity
30 Host alignment identity score threshold, in bp
--identity-margin
0.02 Identity margin, as a fraction of the best hit (between 0 and 1).
--interval-merging-rule
 -imr
ALL Interval merging rule for abutting intervals
--intervals
 -L
[] One or more genomic intervals over which to operate
--is-host-aligned
false Set if the input BAM is aligned to the host
--kmer-file
 -K
null Path to host k-mer file generated with PathSeqBuildKmers. K-mer filtering is skipped if this is not specified.
--max-adapter-mismatches
1 Maximum number of mismatches for adapter trimming
--max-alternate-hits
5000 Maximum number of alternate microbe alignments
--max-masked-bases
2 Max allowable number of masked bases per read
--microbe-min-seed-length
19 Minimum BWA-MEM seed length for the microbe alignment
--min-adapter-length
12 Minimum length of adapter sequence to trim
--min-base-quality
15 Bases below this call quality will be masked with 'N'
--min-clipped-read-length
31 Minimum length of reads after quality trimming
--min-score-identity
0.9 Alignment identity score threshold, as a fraction of the read length (between 0 and 1).
--not-normalized-by-kingdom
false If true, normalized abundance scores are reported as a percentage of the total rather than as a percentage within their kingdom.
--num-reducers
0 For tools that shuffle data or write an output, sets the number of reducers. Defaults to 0, which gives one partition per 10MB of input.
--output
 -O
null Output BAM
--output-shard-tmp-dir
null When writing a BAM in single-shard mode, the directory in which to write the temporary intermediate output shards; if not specified, .parts/ will be used
--pipeline-reads-per-partition
5000 Number of reads per partition to use for alignment and scoring.
--program-name
null Name of the program running
--quality-threshold
15 Quality score trimmer threshold
--readsPerPartitionOutput
1000000 Number of reads per partition for output. Use this to control the number of sharded BAMs (not --num-reducers).
--reference
 -R
null Reference sequence
--score-metrics
 -SM
null Log counts of mapped and unmapped reads to this file
--score-warnings
 -SW
null Write accessions found in the reads header but not the taxonomy database to this file
--sharded-output
false For tools that write an output, write the output in multiple pieces (shards)
--skip-quality-filters
false Skip low-quality and low-complexity read filtering
--spark-master
local[*] URL of the Spark Master to submit jobs to when using the Spark pipeline runner.
--version
false display the version number for this tool
Optional Common Arguments
--add-output-vcf-command-line
true If true, adds a command line header line to created VCF files.
--create-output-bam-index
 -OBI
true If true, create a BAM index when writing a coordinate-sorted BAM file.
--create-output-bam-splitting-index
true If true, create a BAM splitting index (SBI) when writing a coordinate-sorted BAM file.
--create-output-variant-index
 -OVI
true If true, create a VCF index when writing a coordinate-sorted VCF file.
--disable-read-filter
 -DF
[] Read filters to be disabled before analysis
--disable-tool-default-read-filters
false Disable all tool default read filters (WARNING: many tools will not function correctly without their default read filters on)
--exclude-intervals
 -XL
[] One or more genomic intervals to exclude from processing
--gatk-config-file
null A configuration file to use with the GATK.
--interval-exclusion-padding
 -ixp
0 Amount of padding (in bp) to add to each interval you are excluding.
--interval-padding
 -ip
0 Amount of padding (in bp) to add to each interval you are including.
--interval-set-rule
 -isr
UNION Set merging approach to use for combining interval inputs
--QUIET
false Whether to suppress job-summary info on System.err.
--read-filter
 -RF
[] Read filters to be applied before analysis
--read-index
[] Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically.
--read-validation-stringency
 -VS
SILENT Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.
--tmp-dir
null Temp directory to use.
--use-jdk-deflater
 -jdk-deflater
false Whether to use the JdkDeflater (as opposed to IntelDeflater)
--use-jdk-inflater
 -jdk-inflater
false Whether to use the JdkInflater (as opposed to IntelInflater)
--verbosity
INFO Control verbosity of logging.
Advanced Arguments
--filter-reads-per-partition
200000 Estimated reads per partition after quality, kmer, and BWA filtering
--score-reads-per-partition-estimate
200000 Estimated reads per Spark partition for scoring
--showHidden
false display hidden arguments
--skip-pre-bwa-repartition
false Skip pre-BWA repartition. Set to true for inputs with a high proportion of microbial reads that are not host coordinate-sorted.

Argument details

Arguments in this list are specific to this tool. Keep in mind that other arguments are available that are shared with other tools (e.g. command-line GATK arguments); see Inherited arguments above.


--add-output-vcf-command-line / -add-output-vcf-command-line

If true, adds a command line header line to created VCF files.

boolean  true


--arguments_file / NA

read one or more arguments files and add them to the command line

List[File]  []


--bam-partition-size / NA

maximum number of bytes to read from a file into each partition of reads. Setting this higher will result in fewer partitions. Note that this will not be equal to the size of the partition in memory. Defaults to 0, which uses the default split size (determined by the Hadoop input format, typically the size of one HDFS block).

long  0  [ [ -∞  ∞ ] ]


--bwa-score-threshold / -bwa-score-threshold

Minimum score threshold for microbe alignments
This parameter controls the minimum quality of the BWA alignments for the output.

int  30  [ [ 0  ∞ ] ]


--conf / -conf

spark properties to set on the spark context in the format <property>=<value>

List[String]  []


--create-output-bam-index / -OBI

If true, create a BAM index when writing a coordinate-sorted BAM file.

boolean  true


--create-output-bam-splitting-index / NA

If true, create a BAM splitting index (SBI) when writing a coordinate-sorted BAM file.

boolean  true


--create-output-variant-index / -OVI

If true, create a VCF index when writing a coordinate-sorted VCF file.

boolean  true


--disable-read-filter / -DF

Read filters to be disabled before analysis

List[String]  []


--disable-sequence-dictionary-validation / -disable-sequence-dictionary-validation

If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk!

boolean  false


--disable-tool-default-read-filters / -disable-tool-default-read-filters

Disable all tool default read filters (WARNING: many tools will not function correctly without their default read filters on)

boolean  false


--divide-by-genome-length / -divide-by-genome-length

Divide abundance scores by each taxon's reference genome length (in millions)
If true, the score contributed by each read is divided by the mapped organism's genome length in the reference.
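
For example, under this option a read assigned to a taxon with a 4 Mbp reference genome contributes a score of 1/4 = 0.25 rather than 1.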

boolean  false


--dust-mask-quality / -dust-mask-quality

Base quality to assign low-complexity bases

int  2  [ [ -∞  ∞ ] ]


--dust-t / -dust-t

DUST algorithm score threshold
Controls the stringency of low-complexity filtering.

double  20.0  [ [ -∞  ∞ ] ]


--dust-window / -dust-window

DUST algorithm window size

int  64  [ [ -∞  ∞ ] ]


--exclude-intervals / -XL

One or more genomic intervals to exclude from processing
Use this argument to exclude certain parts of the genome from the analysis (like -L, but the opposite). This argument can be specified multiple times. You can use samtools-style intervals either explicitly on the command line (e.g. -XL 1 or -XL 1:100-200) or by loading in a file containing a list of intervals (e.g. -XL myFile.intervals).

List[String]  []


--filter-bwa-image / -FI

The BWA image file of the host reference. This must be distributed to local disk on each node.
This file should be generated using BwaMemIndexImageCreator.

String  null


--filter-bwa-seed-length / -filter-bwa-seed-length

Minimum seed length for the host BWA alignment.
Controls the sensitivity of BWA alignment to the host reference. Shorter seed lengths will enhance detection of host reads during the subtraction phase but will also increase run time.

int  19  [ [ 1  ∞ ] [ 11  ∞ ] ]


--filter-duplicates / -filter-duplicates

Filter duplicate reads
If true, then for any two reads with identical sequences (or identical to the other's reverse complement), one will be filtered.

boolean  true


--filter-metrics / -FM

Log counts of filtered reads to this file
If specified, records the number of reads remaining after each of the following steps:

  • Pre-aligned host read filtering
  • Low-quality and low-complexity sequence filtering
  • Host read subtraction
  • Read deduplication

It also provides the following:

  • Number of low-quality and low-complexity reads removed
  • Number of host reads removed
  • Number of duplicate reads removed
  • Final number of reads
  • Final number of paired reads
  • Final number of unpaired reads

Note that using this option may substantially increase runtime.

String  null


--filter-reads-per-partition / -filter-reads-per-partition

Estimated reads per partition after quality, kmer, and BWA filtering
This is a parameter for fine-tuning memory performance. Lower values may result in less memory usage but possibly at the expense of greater computation time.

int  200000  [ [ 1  ∞ ] ]


--gatk-config-file / NA

A configuration file to use with the GATK.

String  null


--gcs-max-retries / -gcs-retries

If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection

int  20  [ [ -∞  ∞ ] ]


--gcs-project-for-requester-pays / NA

Project to bill when accessing "requester pays" buckets. If unset, these buckets cannot be accessed.

String  ""


--help / -h

display the help message

boolean  false


--host-kmer-thresh / -host-kmer-thresh

Host kmer count threshold.
Controls the stringency of read filtering based on host k-mer matching. Reads with at least this many matching k-mers in the host reference will be filtered.

int  1  [ [ 1  ∞ ] ]


--host-min-identity / -host-min-identity

Host alignment identity score threshold, in bp
Controls the stringency of read filtering based on alignment to the host reference. The identity score is defined as the number of matching bases less the number of deletions in the alignment.
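
For example, a read whose host alignment has 35 matching bases and 2 deletions has an identity score of 35 - 2 = 33, which exceeds the default threshold of 30, so the read would be filtered as host.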

int  30  [ [ 1  ∞ ] ]


--identity-margin / -identity-margin

Identity margin, as a fraction of the best hit (between 0 and 1).
For reads having multiple alignments, the best hit is always counted as long as it is above the identity score threshold. Additional hits are counted when their identity scores are within this margin of the best hit's score.

For example, consider a read that aligns to two different sequences, one with identity score 0.90 and the other with 0.85. If the minimum identity score is 0.7, the best hit (with score 0.90) is counted. In addition, if the identity margin is 10%, then any additional alignments at or above 0.90 * (1 - 0.10) = 0.81 would also be counted. Therefore in this example the second alignment with score 0.85 would be counted.

double  0.02  [ [ 0  1 ] ]


--input / -I

BAM/SAM/CRAM file containing reads

R List[String]  []


--interval-exclusion-padding / -ixp

Amount of padding (in bp) to add to each interval you are excluding.
Use this to add padding to the intervals specified using -XL. For example, '-XL 1:100' with a padding value of 20 would turn into '-XL 1:80-120'. This is typically used to add padding around targets when analyzing exomes.

int  0  [ [ -∞  ∞ ] ]


--interval-merging-rule / -imr

Interval merging rule for abutting intervals
By default, the program merges abutting intervals (i.e. intervals that are directly side-by-side but do not actually overlap) into a single continuous interval. However you can change this behavior if you want them to be treated as separate intervals instead.

The --interval-merging-rule argument is an enumerated type (IntervalMergingRule), which can have one of the following values:

ALL
OVERLAPPING_ONLY

IntervalMergingRule  ALL


--interval-padding / -ip

Amount of padding (in bp) to add to each interval you are including.
Use this to add padding to the intervals specified using -L. For example, '-L 1:100' with a padding value of 20 would turn into '-L 1:80-120'. This is typically used to add padding around targets when analyzing exomes.

int  0  [ [ -∞  ∞ ] ]


--interval-set-rule / -isr

Set merging approach to use for combining interval inputs
By default, the program will take the UNION of all intervals specified using -L and/or -XL. However, you can change this setting for -L, for example if you want to take the INTERSECTION of the sets instead. E.g. to perform the analysis only on chromosome 1 exomes, you could specify -L exomes.intervals -L 1 --interval-set-rule INTERSECTION. However, it is not possible to modify the merging approach for intervals passed using -XL (they will always be merged using UNION). Note that if you specify both -L and -XL, the -XL interval set will be subtracted from the -L interval set.

The --interval-set-rule argument is an enumerated type (IntervalSetRule), which can have one of the following values:

UNION
Take the union of all intervals
INTERSECTION
Take the intersection of intervals (the subset that overlaps all intervals specified)

IntervalSetRule  UNION


--intervals / -L

One or more genomic intervals over which to operate

List[String]  []


--is-host-aligned / -is-host-aligned

Set if the input BAM is aligned to the host
If the input reads are already aligned to a host reference, setting this flag lets PathSeq filter the mapped reads rapidly, reducing run time.

boolean  false


--kmer-file / -K

Path to host k-mer file generated with PathSeqBuildKmers. K-mer filtering is skipped if this is not specified.

String  null


--max-adapter-mismatches / -max-adapter-mismatches

Maximum number of mismatches for adapter trimming

int  1  [ [ 0  ∞ ] ]


--max-alternate-hits / -max-alternate-hits

Maximum number of alternate microbe alignments
The maximum number of alternate alignments for each read, i.e. the alignments appearing in the XA tag.

int  5000  [ [ 0  ∞ ] ]


--max-masked-bases / -max-masked-bases

Max allowable number of masked bases per read
This is the threshold for filtering reads based on the number of 'N' values present in the sequence. Note that the low-complexity DUST filter and the quality filter mask bases with 'N'. Therefore, this parameter is the threshold for the sum of:

  • The number of N's in the original input read
  • The number of low-quality base calls
  • The number of low-complexity bases
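
For example, with the default threshold of 2, a read containing one 'N' in the original sequence and two low-quality base calls has three masked bases in total and would be filtered.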

int  2  [ [ 0  ∞ ] ]


--microbe-bwa-image / -MI

Microbe reference BWA index image file generated using BwaMemIndexImageCreator. If running on a Spark cluster, this must be distributed to local disk on each node.

R String  null


--microbe-fasta / -MF

Reference corresponding to the microbe reference image file

R String  null


--microbe-min-seed-length / -microbe-min-seed-length

Minimum BWA-MEM seed length for the microbe alignment
This parameter controls the sensitivity of the BWA-MEM aligner. Smaller values result in more alignments at the expense of computation time.

int  19  [ [ 1  ∞ ] [ 11  ∞ ] ]


--min-adapter-length / -min-adapter-length

Minimum length of adapter sequence to trim
Adapter trimming will require a match of at least this length to a known adapter.

int  12  [ [ 1  ∞ ] ]


--min-base-quality / -min-base-quality

Bases below this call quality will be masked with 'N'

int  15  [ [ 1  ∞ ] ]


--min-clipped-read-length / -min-clipped-read-length

Minimum length of reads after quality trimming
Reads are trimmed based on base call quality and low-complexity content. Decreasing the value will enhance pathogen detection (higher sensitivity) but also result in undesired false positives and ambiguous microbe alignments (lower specificity).

int  31  [ [ 1  ∞ ] [ 31  ∞ ] ]


--min-score-identity / -min-score-identity

Alignment identity score threshold, as a fraction of the read length (between 0 and 1).
This parameter controls the stringency of the microbe alignment. The identity score is defined as the number of matching bases less the number of deletions, expressed as a fraction of the read length. Alignments scoring below this threshold will be ignored.
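
For example, a 100-base read whose alignment has 95 matching bases and 2 deletions has an identity score of (95 - 2) / 100 = 0.93, which passes the default threshold of 0.9.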

double  0.9  [ [ 0  1 ] ]


--not-normalized-by-kingdom / -not-normalized-by-kingdom

If true, normalized abundance scores are reported as a percentage of the total rather than as a percentage within their kingdom.
By default, the normalized abundance scores are compartmentalized by kingdom, i.e. each taxon's score is reported as a percentage within its kingdom; setting this flag reports scores as a percentage of the overall total instead.

boolean  false


--num-reducers / NA

For tools that shuffle data or write an output, sets the number of reducers. Defaults to 0, which gives one partition per 10MB of input.

int  0  [ [ -∞  ∞ ] ]


--output / -O

Output BAM

String  null


--output-shard-tmp-dir / NA

When writing a BAM in single-shard mode, the directory in which to write the temporary intermediate output shards; if not specified, .parts/ will be used

Exclusion: This argument cannot be used at the same time as sharded-output.

String  null


--pipeline-reads-per-partition / -pipeline-reads-per-partition

Number of reads per partition to use for alignment and scoring.

int  5000  [ [ 100  ∞ ] ]


--program-name / NA

Name of the program running

String  null


--quality-threshold / -quality-threshold

Quality score trimmer threshold
Controls the stringency of base call quality-based read trimming. Higher values result in more trimming.

int  15  [ [ 1  ∞ ] ]


--QUIET / NA

Whether to suppress job-summary info on System.err.

Boolean  false


--read-filter / -RF

Read filters to be applied before analysis

List[String]  []


--read-index / -read-index

Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically.

List[String]  []


--read-validation-stringency / -VS

Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.

The --read-validation-stringency argument is an enumerated type (ValidationStringency), which can have one of the following values:

STRICT
LENIENT
SILENT

ValidationStringency  SILENT


--readsPerPartitionOutput / NA

Number of reads per partition for output. Use this to control the number of sharded BAMs (not --num-reducers).
Because --num-reducers is based on the input size, it can produce too many partitions when the output is much smaller than the input.

int  1000000  [ [ 100  ∞ ] [ 100,000  ∞ ] ]


--reference / -R

Reference sequence

String  null


--score-metrics / -SM

Log counts of mapped and unmapped reads to this file
If specified, records the following metrics:

  • Number of reads mapped to the microbial reference
  • Number of unmapped reads

Note that using this option may increase runtime.

String  null


--score-reads-per-partition-estimate / -score-reads-per-partition-estimate

Estimated reads per Spark partition for scoring
This parameter is for fine-tuning memory performance. Lower values may result in less memory usage but possibly at the expense of greater computation time.

int  200000  [ [ 1  ∞ ] ]


--score-warnings / -SW

Write accessions found in the reads header but not the taxonomy database to this file

String  null


--scores-output / -SO

URI for the taxonomic scores output

R String  null


--sharded-output / NA

For tools that write an output, write the output in multiple pieces (shards)

Exclusion: This argument cannot be used at the same time as output-shard-tmp-dir.

boolean  false


--showHidden / -showHidden

display hidden arguments

boolean  false


--skip-pre-bwa-repartition / -skip-pre-bwa-repartition

Skip pre-BWA repartition. Set to true for inputs with a high proportion of microbial reads that are not host coordinate-sorted.

Advanced optimization option that should be used only in the case of inputs with a high proportion of microbial reads that are not host-aligned/coordinate-sorted.

In the filter tool, the input reads are initially divided into smaller partitions (the default size is usually the size of one HDFS block, or ~64MB) that Spark works on in parallel. In samples with a low proportion of microbial reads (e.g. < 1%), the steps leading up to the host BWA alignment will whittle these partitions down to a small fraction of their original size, at which point the distribution of reads across the partitions may be unbalanced.

For example, say the input is 256MB and Spark splits this into 4 even partitions. It is possible that, after running through the quality filters and the host k-mer search, 5% of the reads remain in partition #1, 8% in partition #2, 2% in partition #3, and 20% in partition #4, leaving the work unevenly distributed across the partitions. To correct this, a "repartitioning" is invoked that distributes the reads evenly. Note this is especially important for host-aligned, coordinate-sorted inputs, in which unmapped reads would be concentrated in the last partitions.

If, however, the proportion of microbial reads is higher, say 30%, then the partitions are generally more balanced (except in the aforementioned coordinate-sorted case). In that case, the time spent repartitioning usually exceeds the time saved by rebalancing, and this option should be enabled.

boolean  false


--skip-quality-filters / -skip-quality-filters

Skip low-quality and low-complexity read filtering

boolean  false


--spark-master / NA

URL of the Spark Master to submit jobs to when using the Spark pipeline runner.

String  local[*]


--taxonomy-file / -T

URI to the microbe reference taxonomy database built using PathSeqBuildReferenceTaxonomy

R String  null


--tmp-dir / NA

Temp directory to use.

String  null


--use-jdk-deflater / -jdk-deflater

Whether to use the JdkDeflater (as opposed to IntelDeflater)

boolean  false


--use-jdk-inflater / -jdk-inflater

Whether to use the JdkInflater (as opposed to IntelInflater)

boolean  false


--verbosity / -verbosity

Control verbosity of logging.

The --verbosity argument is an enumerated type (LogLevel), which can have one of the following values:

ERROR
WARNING
INFO
DEBUG

LogLevel  INFO


--version / NA

display the version number for this tool

boolean  false




See also General Documentation | Tool Documentation Index | Support Forum

GATK version 4.1.0.0 built at Wed, 30 Jan 2019 10:21:04 +0530.