Release Notes: June 2018

Posted by KateN on 7 Jun 2018


June 26, 2018

  • With the click of a button, FireCloud can now populate the outputs in your method config with reasonable default attribute names, so that you don't have to. The button is right next to the Outputs section of your config in your workspace.
  • You can now choose "Copy Link Address" for the metadata files ("Download 'x' metadata") in the Data tab. This makes it faster to download these files from the command line.

June 21, 2018

  • UX improvements related to call caching, submission and workflow monitoring:
    • Call caching status is now displayed at the submission level and has been removed from the workflow level.
    • Call caching status now accurately reflects the value supplied by the user at submission time. Previously, the call caching value could falsely show as disabled for certain workflows.
    • Hovering over a submission's status column in the Monitor tab now shows the counts of that submission's workflows, grouped by workflow status.
    • When viewing an individual workflow, that workflow's status now shows as both icon and text. Previously it only had an icon.
    • When viewing an individual workflow, that workflow's unexpanded calls now show their status.
    • When a workflow or a call does not have stdout or stderr logs, the stdout/stderr fields are now hidden. Previously the fields were displayed with a blank value, taking up screen real estate.
  • Updated the swagger-ui response models for the Monitor submission status and Retrieve workflow cost endpoints.
  • Fixed intermittent errors after restarting a cluster and opening a notebook.

June 12, 2018

  • The minimum cluster disk size for Notebooks is now 10 GB (previously 100 GB).

June 7, 2018

  • FireCloud has upgraded to Cromwell version 32; note that it does not yet support the WDL 1.0 spec. New features in this version of Cromwell, as available in FireCloud, include the following:

    • The Google PAPI backend now supports specifying GPUs through WDL runtime attributes. The two GPU types supported are nvidia-tesla-k80 and nvidia-tesla-p100.

      • Important: Before adding a GPU, make sure it is available in the zone the job is running in: https://cloud.google.com/compute/docs/gpus/
        runtime {
          gpuType: "nvidia-tesla-k80"
          gpuCount: 2
          zones: ["us-central1-c"]
        }
    • File read limits have been introduced to help maintain Cromwell stability for all users and decrease downtime. If your workflow fails because of these read limits, the best workaround is to build a task that reads the file's contents as part of its command, rather than having the workflow read the file directly; see the sketch after this list.
    • Cromwell now supports retrying failed tasks up to a specified count by declaring a value for the maxRetries key in the WDL runtime attributes. This retry option provides a strategy for tackling transient job failures: for example, if a task fails due to a timeout while accessing an external service, this option re-runs the failed task without re-running the entire workflow. maxRetries takes an Int indicating the maximum number of times Cromwell should retry a failed task, and the retry applies to jobs that fail while executing the task command. If not specified, maxRetries defaults to 0.

      • If using the Google backend, it's important to note that the maxRetries count is independent of the preemptible count. For example, the task below can be retried up to 6 times if it is preempted 3 times AND its command execution fails 3 times.
      runtime {
        preemptible: 3
        maxRetries: 3
      }
  • We now log Jupyter server output to a file named jupyter.log, which is accessible from a notebook or via the Jupyter API. This can be useful for debugging errors in Jupyter extensions and user scripts.
  • Your script output will now be logged to the initialization log in the cluster staging bucket. Failures in user scripts will now fail cluster creation.
  • The libxml2-dev library is now installed on Jupyter notebook clusters. You can now install https://github.com/mlr-org/mlr and use Unicode characters in your notebooks.
  • You can now install JupyterLab as an extension to Jupyter notebooks.
  • Some sporadic cluster creation errors have been fixed.
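
  As an illustration of the file-read-limits workaround mentioned above, here is a minimal draft-2 WDL sketch. The task and workflow names, the head command, and the docker image are illustrative assumptions, not FireCloud requirements; the point is that the large file is processed inside the task's command, and the engine only reads back a small result.

    task extract_first_line {
      File input_file

      command {
        # Process the large file inside the task; only a small result file is produced.
        head -n 1 ${input_file} > first_line.txt
      }

      output {
        # The engine reads only the small result, staying well under the read limits.
        String first_line = read_string("first_line.txt")
      }

      runtime {
        docker: "ubuntu:16.04"
      }
    }

    workflow read_file_example {
      File large_file

      call extract_first_line { input: input_file = large_file }

      output {
        String result = extract_first_line.first_line
      }
    }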
