Actual cloud costs, when available, are now displayed on the details page for individual workflows. Note: These costs are currently only available for Broad-based billing accounts.
Fixed an issue where the links to open a GCS bucket were incorrect for certain subworkflows.
May 23, 2018
You will now receive an error if you try to update a consent that is in the voting stage, or one that has been voted on but not yet archived in DUOS.
There is a new API endpoint, POST duos/structuredData, which takes the answers to your consent questions and returns data use codes as JSON.
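As a rough sketch of how such a call might be constructed from Python: the base URL, payload shape, and auth header below are assumptions for illustration, not documented specifics of the endpoint.

```python
import json
from urllib import request

# Hypothetical answers to consent questions; the real payload schema
# is defined by the structuredData endpoint, not shown here.
answers = {"question1": "yes"}

# Build (but do not send) a POST to the new endpoint. The host name
# and bearer-token auth are assumptions.
req = request.Request(
    "https://api.firecloud.org/duos/structuredData",
    data=json.dumps(answers).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <access-token>",
    },
    method="POST",
)

# request.urlopen(req) would send the request; the response body is the
# data use codes, returned as JSON.
```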
May 22, 2018
When viewing a single workflow, FireCloud now allows you to drill down into the details of subworkflows.
When viewing a single submission, FireCloud now shows actual cloud costs for that submission and each workflow in the submission, when available. Cost information will be added to additional parts of the UI in upcoming releases. Note: These costs are currently only available for Broad-based billing accounts.
May 17, 2018
Actual workflow run cost is now returned in the submission status API, when available. This will be available in the UI soon. Note: These costs are currently only available for Broad-based billing accounts.
FireCloud APIs now better support the retrieval of subworkflow metadata and labels. This will also be available in the UI soon.
The user group all_broad_users now includes all FireCloud users that have signed up with their @broadinstitute email address.
You can now install server-side and client-side extensions in the Notebooks API. You can find some example extensions from the community here.
May 8, 2018
When importing data entities from another workspace, the add icon is now always visible and usable. Previously, if you hid all columns, the add icon disappeared, even if you then un-hid individual columns.
When viewing the details of a genomics operation from a workspace's Monitor tab -> view submission -> view workflow -> show call -> operation, the JSON for the operation is now pretty-printed for easier reading. Previously it appeared on a single line, making it difficult to read.
Fixed a bug where Notebooks with a space in the file name could not be localized.
You can now create a Spark cluster in the Notebooks API in a stopped state.
Previously, the API allowed you to recreate a Deleting cluster with the same name, or to stop a non-Running cluster; either action could leave your cluster stuck in a bad state. The API now correctly rejects these operations.
You can now spin up Python kernels without Spark enabled. These start instantly, and you can run as many as you want, whereas Spark kernels take 20-30 seconds to start and are limited to 3-4 at a time.
May 1, 2018
In a Notebook, pip is now configured to install packages at the user level by default. Previously, pip install tensorflow would throw a permission error.
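One common way to make user-level installs the default is a pip configuration file; the snippet below is a sketch of that mechanism, assuming pip reads it from its usual config location (e.g. ~/.config/pip/pip.conf), and is not necessarily how the Notebook image implements it.

```
[install]
user = true
```

With this in place, pip install tensorflow places packages under the user's site directory (e.g. ~/.local) instead of the system site-packages, avoiding the permission error.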