Version History
Release notes for recent updates


This page displays the release notes for the most recent updates to FireCloud. Earlier updates can be found here in the forum.


Release Notes: May 2018


May 22, 2018

  • When viewing a single workflow, FireCloud now allows you to drill down into the details of subworkflows.
  • When viewing a single submission, FireCloud now shows actual cloud costs for that submission and each workflow in the submission, when available. Cost information will be added to additional parts of the UI in upcoming releases.

May 17, 2018

  • Actual workflow run cost is now returned in the submission status API, when available (see the first sketch after this list). This will be available in the UI soon.
  • FireCloud APIs now better support the retrieval of subworkflow metadata and labels. This will also be available in the UI soon.
  • The user group all_broad_users now includes all FireCloud users who have signed up with their @broadinstitute email address.
  • You can now install server-side and client-side extensions in the Notebooks API (see the second sketch after this list). You can find some example extensions from the community here.
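
For reference, here is a minimal sketch of reading the new cost information from the submission status endpoint. The endpoint path follows the usual FireCloud Orchestration pattern, but the workspace identifiers are placeholders and the "cost" field name is an assumption; treat this as illustrative, not authoritative.

    import requests

    # Placeholder identifiers -- replace with your own workspace and submission.
    BASE = "https://api.firecloud.org/api"
    NAMESPACE = "my-billing-project"
    WORKSPACE = "my-workspace"
    SUBMISSION_ID = "00000000-0000-0000-0000-000000000000"

    def get_submission_costs(token: str) -> None:
        """Fetch submission status and print costs, where the service has them."""
        url = f"{BASE}/workspaces/{NAMESPACE}/{WORKSPACE}/submissions/{SUBMISSION_ID}"
        resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
        resp.raise_for_status()
        status = resp.json()
        # "cost" is an assumed field name; it can be absent while cost
        # data is still being computed.
        print("submission cost:", status.get("cost", "not yet available"))
        for wf in status.get("workflows", []):
            print(wf.get("workflowId"), wf.get("cost", "not yet available"))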
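
Likewise, a sketch of asking the Notebooks (Leonardo) API to install extensions at cluster-creation time. The userJupyterExtensionConfig block, with nbExtensions and serverExtensions maps pointing at archives in a bucket, is an assumed request layout; check the Notebooks API documentation for the current schema.

    import requests

    # Assumed Leonardo base URL and request shape -- verify against the docs.
    LEO = "https://notebooks.firecloud.org/api/cluster"
    PROJECT = "my-google-project"   # placeholder Google project
    CLUSTER = "my-cluster"          # placeholder cluster name

    def create_cluster_with_extensions(token: str) -> None:
        """Create a notebook cluster, asking the service to install extensions."""
        body = {
            "userJupyterExtensionConfig": {   # assumed config block name
                # client-side (notebook) extensions, keyed by name
                "nbExtensions": {
                    "my-nbextension": "gs://my-bucket/my-nbextension.tar.gz",
                },
                # server-side extensions
                "serverExtensions": {
                    "my-serverextension": "gs://my-bucket/my-serverextension.tar.gz",
                },
            },
        }
        resp = requests.put(
            f"{LEO}/{PROJECT}/{CLUSTER}",
            json=body,
            headers={"Authorization": f"Bearer {token}"},
        )
        resp.raise_for_status()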

May 8, 2018

  • When importing data entities from another workspace, the add icon is now always visible and usable. Previously, if you hid all columns, the add icon disappeared, even if you then un-hid individual columns.
  • When viewing the details of a genomics operation from a workspace's Monitor tab -> view submission -> view workflow -> show call -> operation, the JSON for the operation is now pretty-printed for easier reading. Previously it was on a single line, making it difficult to read.
  • Resolved a JavaScript error that resulted in a blank page when viewing the details of an entity within the Data tab of your workspace if that entity contained a numeric attribute created via the API. This did not occur for attributes created via TSV upload.
  • Fixed a bug where Notebooks with a space in the file name could not be localized.
  • You can now create a Spark cluster in the Notebooks API in a stopped state (see the sketch after this list).
  • Previously, if you tried to recreate a cluster that was still in the Deleting state using the same name, or tried to stop a non-Running cluster, the request would succeed but leave your cluster stuck in a bad state. The API now correctly rejects these operations.
  • You can now spin up Python kernels without Spark enabled. These start instantly, and you can run as many as you want (vs. Spark kernels, which take 20-30 seconds to start and are limited to 3-4 at a time).
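
A sketch of the stopped-state creation mentioned above. The stopAfterCreation flag name and the endpoint shape are assumptions about the Notebooks API; adjust to the current documentation.

    import requests

    LEO = "https://notebooks.firecloud.org/api/cluster"  # assumed base URL
    PROJECT = "my-google-project"    # placeholder Google project
    CLUSTER = "my-stopped-cluster"   # placeholder cluster name

    def create_stopped_cluster(token: str) -> None:
        """Create a cluster that stops as soon as creation finishes."""
        # "stopAfterCreation" is an assumed flag name for the new behavior.
        body = {"stopAfterCreation": True}
        resp = requests.put(
            f"{LEO}/{PROJECT}/{CLUSTER}",
            json=body,
            headers={"Authorization": f"Bearer {token}"},
        )
        resp.raise_for_status()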

May 1, 2018

  • In a Notebook, we've now configured pip to install packages as a user by default (see the sketch below). Previously, pip install tensorflow would throw a permission error.
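
As a small illustration, the new default amounts to passing --user implicitly; the explicit form below is equivalent and works in any environment.

    import site
    import subprocess
    import sys

    # The new notebook default is equivalent to passing --user explicitly,
    # so packages land in the user site-packages directory shown here.
    print("user site-packages:", site.getusersitepackages())

    # Explicit equivalent, runnable from a notebook cell or a script:
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "--user", "tensorflow"]
    )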
