Release Notes: June 2018

Posted by KateN on 7 Jun 2018


June 26, 2018

  • With a single click, FireCloud can now populate the outputs in your method config with reasonable default attribute names, so you don't have to fill them in yourself. The button is located next to the Outputs section of your config in your workspace.
  • You can now choose "Copy Link Address" on the metadata file links ("Download 'x' metadata") in the Data tab. This makes downloading these files faster when using the command line.

June 21, 2018

  • UX improvements related to call caching, submission and workflow monitoring:
    • Call caching status is now displayed at the submission level and has been removed from the workflow level.
    • Call caching status now accurately reflects the value supplied by the user at submission time. Previously, the call caching value could falsely show as disabled for certain workflows.
    • Hovering over a submission's status column in the Monitor tab now shows the counts of that submission's workflows, grouped by workflow status.
    • When viewing an individual workflow, that workflow's status now shows as both icon and text. Previously it only had an icon.
    • When viewing an individual workflow, that workflow's unexpanded calls now show their status.
    • When a workflow or a call does not have stdout or stderr logs, the stdout/stderr fields are now hidden. Previously the fields were displayed with a blank value, taking up screen real estate.
  • Updated the swagger-ui response models for the "Monitor submission status" and "Retrieve workflow cost" endpoints.
  • Fixed intermittent errors after restarting a cluster and opening a notebook.

June 12, 2018

  • The minimum cluster disk size for Notebooks is now 10 GB (previously 100 GB).

June 7, 2018

  • FireCloud has upgraded to Cromwell version 32; note that it does not yet support the WDL 1.0 spec. New features in this version of Cromwell on FireCloud include the following:

    • The Google PAPI backend now supports specifying GPU through WDL runtime attributes. The two types of GPU supported are nvidia-tesla-k80 and nvidia-tesla-p100.

      • Important: Before adding a GPU, make sure it is available in the zone the job is running in: https://cloud.google.com/compute/docs/gpus/
        runtime {
          # Request two NVIDIA Tesla K80 GPUs for this task's VM
          gpuType: "nvidia-tesla-k80"
          gpuCount: 2
          # Run in a zone where this GPU type is available
          zones: ["us-central1-c"]
        }
    • File read limits have been introduced to help maintain Cromwell stability across all users and decrease downtime. If your workflow fails because of these read limits, the best workaround is to build a task that reads the file's contents for you (see the sketch at the end of this section).
    • Cromwell now supports retrying failed tasks up to a specified count by declaring a value for the maxRetries key in the WDL runtime attributes. This retry option provides a strategy for tackling transient job failures. For example, if a task fails because of a timeout from an external service, this option lets you re-run the failed task without re-running the entire workflow. maxRetries takes an Int indicating the maximum number of times Cromwell should retry a failed task, and it applies to jobs that fail while executing the task command. If not specified, maxRetries defaults to 0.

      • If you are using the Google backend, note that the maxRetries count is independent of the preemptible count. For example, the task below can be retried up to 6 times if it is preempted 3 times AND the command execution fails 3 times.
      runtime {
        # Allow up to 3 restarts after preemption, plus up to 3 retries of command failures
        preemptible: 3
        maxRetries: 3
      }
  • We now log Jupyter server output to a file named jupyter.log, which is accessible from a notebook or via the Jupyter API. This can be useful for debugging errors in Jupyter extensions and user scripts.
  • Your script output will now be logged to the initialization log in the cluster staging bucket. Failures in user scripts will now fail cluster creation.
  • The libxml2-dev library is now installed on Jupyter notebook clusters. You can now install the mlr package (https://github.com/mlr-org/mlr) and use Unicode characters in your notebooks.
  • You can now install JupyterLab as an extension to Jupyter notebooks.
  • Some sporadic cluster creation errors have been fixed.
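
  For illustration, here is a minimal sketch of the file read-limit workaround mentioned above, written in draft-2 WDL as used by Cromwell 32. The task name, the sample_table input, the cut command, and the Docker image are hypothetical placeholders; the idea is that the task command extracts only the small piece of data you need, so Cromwell only has to read the task's compact output rather than the large file itself.

    task read_sample_ids {
      # Hypothetical large tab-delimited file that would exceed the engine read limits
      File sample_table

      command {
        # Extract only the first column; the heavy read happens on the task VM
        cut -f1 ${sample_table}
      }

      output {
        # Cromwell only needs to read the task's (much smaller) stdout
        Array[String] sample_ids = read_lines(stdout())
      }

      runtime {
        docker: "ubuntu:16.04"
      }
    }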
