
In FY16, we funded 13 projects, six of which presented their outcomes during the March 2017 monthly meeting. The recording and slides are posted on the meeting page (login required). Six more projects will present on April 12, and one project presented last December.

The March meeting also featured an update on recent Scientist's Challenges and the announcement of the eleven FY17 funded projects.

Thank you to all of our presenters!

Facilitating the USGS Scientific Data Management Foundation by integrating the process into current scientific workflow systems, Colin Talbert

Integrated tools that streamline the data release process with ScienceBase, plus a forthcoming standalone Metadata Wizard tool.

Integration of Phenological Forecast Maps for Assessment of Biodiversity: An Enterprise Workflow, Jake Weltzin

Developing and sharing a generalized workflow for real-time delivery of biodiversity data products, including an appearance by Al Roker on the Today Show featuring the USGS data product that illustrated the early onset of spring.


Crowd-Sourced Earthquake Detections Integrated into Seismic Processing, Michelle Guy

Improving the speed and completeness of the USGS National Earthquake Information Center's capabilities through direct integration of crowd-sourced earthquake detections with traditional global seismic processing and data.


Evaluating a new open-source, standards-based framework for web portal development in the geosciences, Rich Signell

Evaluating a new tool for delivering USGS geospatial products to the public and building prototypes, while documenting lessons learned with respect to technical communication and USGS Bureau needs for collaborative work.



Development of Recommended Practices and Workflow for Publishing Digital Data through ScienceBase for Dynamic Visualization, Kathy Chase

Telling the story of a hero's journey through the new USGS data release policies, and creating interactive maps with tools like sbtools, Leaflet, and Shiny.



Hunting Invasive Species with HTCondor: High Throughput Computing for Big Data and Next Generation Sequencing, S. Grace McCalla

High throughput computing (HTC) is like tackling a big job with a whole bunch of shovels, versus the big bulldozer of high performance computing (HPC). The implementation of HTCondor at USGS science centers uses HTC to help manage the vast amounts of USGS-produced genomic data that need to be stored, analyzed, compared, and shared.
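The "many shovels" idea can be sketched as an HTCondor submit description that fans one small job out over many independent inputs. This is a minimal, hypothetical example; the script name, file names, and resource requests are illustrative, not taken from the project itself:

```
# Hypothetical HTCondor submit file: run one analysis script over
# 100 independent sequence files, each as its own small job.
executable     = analyze_sequences.sh
arguments      = input_$(Process).fastq
output         = logs/job_$(Process).out
error          = logs/job_$(Process).err
log            = logs/htcondor.log
request_cpus   = 1
request_memory = 2GB

# Queue 100 jobs; $(Process) takes values 0..99.
queue 100
```

Submitting this file with `condor_submit` would place 100 independent jobs in the queue, which HTCondor then matches to whatever machines are available.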

More CDI Blog posts 
