Release Notes: June 2018

Posted by KateN on 7 Jun 2018 (0)

June 26, 2018

  • With a single click, FireCloud can now populate the outputs in your method config with reasonable default attribute names, so you don't have to. The button is right next to the Outputs section of your config in your workspace.
  • You can now choose "Copy Link Address" for the metadata files ("Download 'x' metadata") in the Data tab. Using the copied link makes downloading these files from the command line faster.

June 21, 2018

  • UX improvements related to call caching, submission and workflow monitoring:
    • Call caching status is now displayed at the submission level and has been removed from the workflow level.
    • Call caching status now accurately reflects the value supplied by the user at submission time. Previously, the call caching value could falsely show as disabled for certain workflows.
    • Hovering over a submission's status column in the Monitor tab now shows the counts of that submission's workflows, grouped by workflow status.
    • When viewing an individual workflow, that workflow's status now shows as both icon and text. Previously it only had an icon.
    • When viewing an individual workflow, that workflow's unexpanded calls now show their status.
    • When a workflow or a call does not have stdout or stderr logs, the stdout/stderr fields are now hidden. Previously the fields were displayed with a blank value, taking up screen real estate.
  • Updated the swagger-ui response models for the "Monitor submission status" and "Retrieve workflow cost" endpoints.
  • Fixed intermittent errors after restarting a cluster and opening a notebook.

June 12, 2018

  • The minimum cluster disk size for Notebooks is now 10 GB (previously 100 GB).

June 7, 2018

  • FireCloud has upgraded to Cromwell version 32; note, however, that it does not yet support WDL spec 1.0. New features for this version of Cromwell in FireCloud include the following:

    • The Google PAPI backend now supports specifying GPU through WDL runtime attributes. The two types of GPU supported are nvidia-tesla-k80 and nvidia-tesla-p100.

      • Important: Before adding a GPU, make sure it is available in the zone the job is running in:
        runtime {
          gpuType: "nvidia-tesla-k80"
          gpuCount: 2
          zones: ["us-central1-c"]
        }
    • File read limits have been introduced to help maintain Cromwell stability across all users and decrease downtime. If your workflow fails due to these read limits, the best workaround is to build a task that reads the file's contents.
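      • A minimal sketch of that workaround (the task name, input name, and runtime values below are illustrative assumptions, not FireCloud-prescribed names): instead of calling an engine function like read_string() on a large file from workflow scope, do the heavy reading inside a task's command and return only the small piece you need:
        task first_line {
          File input_file
          command {
            # Heavy file access happens here, inside the task's command
            head -n 1 ${input_file}
          }
          output {
            # Cromwell only reads back the task's small stdout
            String line = read_string(stdout())
          }
          runtime {
            docker: "ubuntu:16.04"
          }
        }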
    • Cromwell now supports retrying failed tasks up to a specified count by declaring a value for the maxRetries key in the WDL runtime attributes. This retry option provides a strategy for tackling transient job failures: for example, if a task fails due to a timeout from accessing an external service, this option lets you re-run the failed task without re-running the entire workflow. maxRetries takes an Int indicating the maximum number of times Cromwell should retry a failed task, and it applies to jobs that fail while executing the task command. If not specified, maxRetries defaults to 0.

      • If using the Google backend, it's important to note that the maxRetries count is independent of the preemptible count. For example, the task below can be retried up to 6 times if it's preempted 3 times AND the command execution fails 3 times.
        runtime {
          preemptible: 3
          maxRetries: 3
        }
  • We now log Jupyter server output to a file named jupyter.log which is accessible via a notebook or Jupyter API. This can be useful for debugging errors in Jupyter extensions/user scripts.
  • Your script output will now be logged to the initialization log in the cluster staging bucket. Failures in user scripts will now fail cluster creation.
  • The libxml2-dev library is now installed on Jupyter notebooks. You can now install and use Unicode characters in your notebooks.
  • You can now install JupyterLab as an extension to Jupyter notebooks.
  • Some sporadic cluster creation errors have been fixed.
