Tuesday, July 30, Keynote
AI for Science
Rick Stevens
Argonne National Laboratory
The University of Chicago
In this talk, I will describe an emerging initiative at Argonne National Laboratory to advance the concept of Artificial Intelligence (AI) aimed at addressing challenge problems in science. We call this initiative “AI for Science”. The basic concept is threefold: (1) to identify those scientific problems where existing AI and machine learning methods can have an immediate impact (and organize teams and efforts to realize that impact); (2) to identify areas where new AI methods are needed to meet the unique needs of science research (frame the problems, develop test cases, and outline the work needed to make progress); and (3) to develop the means to automate scientific experiments, observations, and data generation to accelerate the overall scientific enterprise. Science offers plenty of hard problems to motivate and drive AI research, from complex multimodal data analysis, to the integration of symbolic and data-intensive methods, to coupling large-scale simulation and machine learning to improve training and to control and accelerate simulations. A major sub-theme is the idea of working toward the automation of scientific discovery through the integration of machine learning (active learning and reinforcement learning) with simulation and automated high-throughput experimental laboratories. I will provide some examples of projects underway and lay out a set of long-term driver problems.
Biography
Professor Rick Stevens is internationally known for work in high-performance computing, collaboration and visualization technology, and for building computational tools and web infrastructures to support large-scale genome and metagenome analysis for basic science and infectious disease research. He is the principal investigator for the NIH-NIAID funded PATRIC Bioinformatics Resource Center, which is developing comparative analysis tools for infectious disease research, and for the Exascale Computing Project (ECP) Exascale Deep Learning and Simulation Enabled Precision Medicine for Cancer project, which focuses on building a scalable deep neural network code called the CANcer Distributed Learning Environment (CANDLE) to address three top challenges of the National Cancer Institute. Stevens has been a professor at the University of Chicago since 1999 and Associate Laboratory Director at Argonne National Laboratory since 2004. Over the past twenty years, he and his colleagues have developed the SEED, RAST, MG-RAST, and ModelSEED genome analysis and bacterial modeling servers, which have been used by tens of thousands of users to annotate and analyze more than 250,000 microbial genomes and metagenomic samples. He teaches and supervises students in the areas of computer systems and computational biology, and he co-leads the DOE national laboratory group that has been developing the national initiative for exascale computing.
Wednesday, July 31, Plenary Speakers
Redefining Today’s HPC
Patricia (Trish) A. Damkroger
Intel Corporation
In this talk, we will look at the paradigm shift that is happening in HPC. It is no longer just simulation and modeling; it now includes other data-centric applications such as advanced analytics and AI. We will take a deep dive into what this means, review the current and future trends we are seeing in the marketplace, and look at how industries today are already moving toward this new Data-Centric HPC.
Biography
Patricia (Trish) A. Damkroger is vice president and general manager of the Extreme Computing Organization in the Data Center Group at Intel Corporation. She leads Intel’s global technical and high-performance computing (HPC) business and is responsible for developing and executing strategy, building customer relationships and defining a leading product portfolio for technical computing workloads, including emerging areas such as high performance analytics, HPC in the cloud and artificial intelligence.
An expert in the HPC field, Damkroger has more than 27 years of technical and managerial expertise in both the private and public sectors. Prior to joining Intel in 2016, she was the associate director of computation at the U.S. Department of Energy’s Lawrence Livermore National Laboratory, where she led a 1,000-member group composed of world-leading supercomputing and scientific experts. Since 2006, Damkroger has been a leader of the annual Supercomputing Conference (SC) series, the premier international meeting for high performance computing. She served as general chair of the SC14 conference and has held many other committee positions within industry organizations.
Damkroger holds a bachelor’s degree in electrical engineering from California Polytechnic State University, San Luis Obispo, and a master’s degree in electrical engineering from Stanford University. She was recognized on HPCwire’s “People to Watch” list in 2014 and 2018.
Future of HPC in the Cloud
Ross Thomson
Google
Today’s research computing demands lightning-fast speed, vast data storage, and intensive processing power in order to advance discoveries across disciplines, from genomics to climate change. With high performance computing (HPC) in the cloud you can solve problems faster, reduce queue times for large batch workloads, and relieve compute resource limitations. Attend our talk as we explore the future of HPC in the cloud to help accelerate breakthroughs and unlock new scientific frontiers. We’ll highlight how customers are breaking computing boundaries with Google Cloud Platform as well as discuss TPUs for accelerating data analysis and hybrid HPC.
Biography
Trained as a computational physicist, I have worked in a broad range of academic and industry fields, from micro-gravity fluid simulation for NASA to “computational advertising” at Google. My current role as a Solutions Architect for Scientific Computing at Google Cloud Platform is among the most rewarding of my career. I have had the pleasure of working with the astronomy community, mostly in the context of the LSST.
Are there Closets in the Cloud?
Tim Carroll
Microsoft
Commodity clusters brought profound change to computational science and, more importantly, to the science it supported. Cloud represents an equally compelling boost for research. Although HPC users have been able to run workloads on public clouds since 2007, the percentage of workloads running in production remains relatively low. This talk will examine the coming changes in both technology and funding mechanisms that will accelerate cloud adoption, specifically for the academic community.
Biography
Tim is responsible for Microsoft’s role in supporting computational research across academia and government. He has spent sixteen years collaborating with the research community to broaden and simplify access to technology. While leading Dell’s HPC business from 2007 to 2013, his team delivered breakthrough systems at several research centers, including NSF-funded centers. He joined Cycle Computing in 2014 to help cloud continue the trajectory established by Linux clusters. In 2017, Cycle was acquired by Microsoft to accelerate and simplify the management of HPC and AI workloads within Azure.
Thursday, August 1, Keynote
NSF/CISE and Advanced Cyberinfrastructure: An update and a look at, and over, the horizon
Jim Kurose
National Science Foundation
In this talk, we provide an overview, an update, and a look into the future for the National Science Foundation’s Directorate for Computer and Information Science and Engineering, with a particular emphasis on the Office of Advanced Cyberinfrastructure and NSF’s Big Ideas.
Biography
Dr. Jim Kurose is an Assistant Director of the National Science Foundation, where he leads the Directorate for Computer and Information Science and Engineering (CISE). With an annual budget of nearly $1B, CISE’s mission is to uphold the nation’s leadership in scientific discovery and engineering innovation through its support of fundamental research in computer and information science and engineering, transformative advances in cyberinfrastructure, and preparation of a diverse computing-capable workforce. Jim also co-chairs the Networking and Information Technology Research and Development (NITRD) Program, the Subcommittee on Machine Learning and AI, and the Subcommittee on Open Science of the National Science and Technology Council (NSTC), facilitating the coordination of these research and development efforts across Federal agencies. Recently, Jim also served as the Assistant Director for Artificial Intelligence in the US Office of Science and Technology Policy (OSTP). Jim is on leave from the University of Massachusetts, Amherst, where he is Distinguished University Professor of Computer Science.
Dr. Kurose received his Ph.D. in computer science from Columbia University and a B.A. in physics from Wesleyan University. He is a Fellow of the ACM and IEEE and has received a number of research and education honors, including the ACM SIGCOMM Lifetime Achievement Award, the IEEE INFOCOM Award, and the IEEE Computer Society Taylor Booth Education Medal.
Thursday, August 1, Town Hall
In this session, a brief description of the NSF Advanced Computing Systems & Services: Adapting to the Rapid Evolution of Science and Engineering Research solicitation will be presented, followed by brief presentations from the announced awardees of this solicitation. The resources will be deployed and in production in 2020 and will be available for allocation requests via XSEDE’s resource allocation process.
Presenters
Manish Parashar, National Science Foundation
John Towns, National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign
Nick Nystrom, Pittsburgh Supercomputing Center
Shawn Strande, San Diego Supercomputer Center, University of California at San Diego
Robert J. Harrison, Stony Brook University