The workshops and tutorials below will be featured at PEARC19. The full PEARC19 online program contains the most up-to-date information for the conference. (Note: The content below is the content provided by authors during the submission process.)

Monday, July 29, 2019

High Performance Distributed Deep Learning: A Beginner’s Guide

7/29/2019, 8:30am-12:00pm, Tutorial Half-day

The current wave of advances in Deep Learning (DL) has led to many exciting challenges and opportunities for Computer Science and Artificial Intelligence researchers alike. DL frameworks like TensorFlow, PyTorch, Caffe, and several others have emerged that offer ease of use and flexibility to describe, train, and deploy various types of Deep Neural Networks (DNNs). In this tutorial, we will provide an overview of interesting trends in DNN design and how cutting-edge hardware architectures are playing a key role in moving the field forward. We will also present an overview of different DNN architectures and DL frameworks. Most DL frameworks started with a single-node/single-GPU design. However, approaches to parallelize the process of DNN training are being actively explored, and the DL community has pursued different distributed training designs that exploit communication runtimes like gRPC, MPI, and NCCL. In this context, we highlight new challenges and opportunities for communication runtimes to efficiently support distributed DNN training. We also highlight some of our co-design efforts to utilize CUDA-Aware MPI for large-scale DNN training on modern GPU clusters. Finally, we include hands-on exercises to enable attendees to gain first-hand experience of running distributed DNN training experiments on a modern GPU cluster.
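The data-parallel pattern behind these distributed designs can be illustrated without any DL framework: each worker computes a gradient on its own data shard, and an allreduce-style average combines the gradients before an identical weight update on every worker. The sketch below is conceptual only; real systems delegate the averaging step to MPI or NCCL collectives.

```python
# Conceptual sketch of data-parallel DNN training: each "worker" holds a
# copy of the model weight, computes a gradient on its own data shard, and
# an allreduce-style average combines the gradients before a shared update.

def local_gradient(weight, shard):
    # Gradient of mean squared error for the toy model y = weight * x,
    # with target y = 2 * x, computed on this worker's shard.
    return sum(2 * (weight * x - 2 * x) * x for x in shard) / len(shard)

def allreduce_mean(grads):
    # Stand-in for an MPI/NCCL allreduce followed by division by the
    # number of workers: every worker ends up with the same average.
    return sum(grads) / len(grads)

def train(shards, weight=0.0, lr=0.05, steps=100):
    for _ in range(steps):
        grads = [local_gradient(weight, s) for s in shards]  # parallel in practice
        weight -= lr * allreduce_mean(grads)                  # identical update everywhere
    return weight

shards = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # data split across 3 workers
print(round(train(shards), 3))  # converges toward the true weight, 2.0
```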

Introduction to Python for Scientific Computing

7/29/2019, 8:30am-12:00pm, Tutorial Half-day

This half-day tutorial is a quick immersion in the basics of the Python programming language and associated packages for scientific computing, including tools needed to participate in the Student Modeling Challenge, part of the PEARC19 student program. Topics covered will include key language features – such as variables, types, operators, control flow, input/output, functions, classes, built-in containers, and modules – as well as an overview of some of the important libraries and packages in the Python ecosystem for scientific computing (to support plotting, data access and manipulation, numerical algorithms, and the construction of integrated computational notebooks). Most techniques will be presented in live-demo mode, and each section will feature hands-on exercises so that participants can try out the commands or methods for themselves. To participate fully in the exercises, attendees should come with the latest version of the Anaconda Python 3 distribution downloaded and installed on their computer.
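To give a flavor of the language features listed above, here is a short, standard-library-only sketch touching variables, containers, control flow, functions, a class, and a module import:

```python
import math  # modules: the standard library ships with many

# variables and a built-in container (a list)
samples = [1.0, 4.0, 9.0, 16.0]

# a function with control flow (a for loop)
def root_mean(values):
    total = 0.0
    for v in values:           # iterate over the list
        total += math.sqrt(v)  # call into an imported module
    return total / len(values)

# a minimal class with state and a method
class Accumulator:
    def __init__(self):
        self.total = 0.0

    def add(self, x):
        self.total += x
        return self.total

acc = Accumulator()
for v in samples:
    acc.add(v)

print(root_mean(samples))  # 2.5
print(acc.total)           # 30.0
```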

Leveraging a Research IT Maturity Model for Strategic Decision Making

7/29/2019, 8:30am-12:00pm, Workshop Half-day

Research IT (computing, data, and related infrastructure and services) is changing at an accelerating rate, while the range of scientific fields and disciplines depending on research cyberinfrastructure is expanding and becoming increasingly diverse. We present a new Maturity Model for Research IT that identifies the range of relevant approaches to supporting research IT, for use by IT practitioners, researchers, and campus leadership. Participants will apply and evaluate the model for their institutions, provide feedback to improve the model, and discuss its use for strategic decision making. Conference materials can be found here

Deep Dive into Microsoft’s Azure for Research (Platinum Exhibitor)

7/29/2019, 8:30am-12:00pm, Tutorial Half-day

Whether it’s a computer with more memory, a cluster with thousands of cores, a big data platform, an internet of things solution, or open-source machine learning at scale, you can achieve more using the cloud. Microsoft Azure provides an open, flexible, global platform that supports multiple programming languages, tools, and frameworks. This workshop will take us into the Azure offerings available to researchers and explore a few case studies on how these offerings have impacted academic research. This will be followed by two hands-on segments: deployment of Microsoft’s HPC Pack, and using Azure CycleCloud to create, manage, operate, and optimize HPC and big-compute clusters of any scale.

Modern tools for supercomputers

7/29/2019, 8:30am-12:00pm, Tutorial Half-day

Powerful supercomputers play an important role in the computational research community. However, the increasing complexity of modern systems can slow or hinder researchers' work. A large amount of precious time and effort is spent unnecessarily on managing the user environment, reproducing standard workflows, handling large-scale I/O work, profiling and monitoring user jobs, and understanding and resolving avoidable system issues. To help supercomputer users focus on their scientific and technical work, and to minimize the workload for the consulting team, we designed and developed a series of powerful tools for supercomputer users. These tools are portable and effective on almost all supercomputers and now serve thousands of users at TACC, XSEDE, and other institutions every day. In this tutorial, we will present and practice with tools designed for complex user environments (Lmod, Sanity Tool), tools for job monitoring and profiling (Remora, TACC-Stat, etc.), tools for large-scale I/O work (OOOPS, FanStore), and several other convenient tools. Attendees will learn how these tools are designed and how to use them in their daily work. Detailed hands-on exercises have been prepared and will be executed mainly on the Stampede2 supercomputer at the Texas Advanced Computing Center (TACC).

Pragmatic Science Engagement Using an Operations Center Approach

7/29/2019, 8:30am-12:00pm, Workshop Half-day

Establishing a process to regularly review technical requirements helps to determine the current and future science communication and collaboration needs of numerous scientific communities. The purpose of these reviews is to accurately characterize the near-term, medium-term, and long-term technical requirements of the science being performed. This approach builds a network-centric understanding of the science process used by researchers and scientists, without asking technical questions directly, and derives network requirements from that understanding. This highly interactive session outlines a process that can be adopted by CI facilitators at the campus, regional, and national levels to advance the mission of science engagement and fully realize the investments made in networking and personnel by agencies such as the National Science Foundation and the Department of Energy Office of Science. The topics of this event focus on the intersection of research and technology, specifically the use of high-speed networks:

  • The purpose of science engagement (30 mins)
  • Overview of an outline for a case study approach to gather scientific requirements via documentation and meeting preparation (45 mins)
  • A live example of how to conduct an in-person review to characterize needs with a visiting scientific group (1 hour)
  • Participant discussion on CI facilitator techniques – sharing of challenges and lessons learned (45 mins)

Participants in National Science Foundation Campus Cyberinfrastructure Program and the cyberinfrastructure engineering community are encouraged to attend, participate, and help define a strategy that can be used to encourage growth of scientific understanding and support. For more information please visit

PEARC19 Workshop Materials can be found at

The ACM SIGHPC SYSPROS Symposium 2019

7/29/2019, 8:30am-12:00pm, Workshop Half-day

In order to meet the demands of high-performance computing (HPC) researchers, large-scale computational and storage machines require many staff members who design, install, and maintain these systems. These HPC systems professionals include system engineers, system administrators, network administrators, storage administrators, and operations staff who face problems that are unique to high performance computing systems. While many conferences exist for the HPC field and the system administration field, none focus on the needs of HPC systems professionals. Support resources can be difficult to find for the issues encountered in this specialized field. Oftentimes, systems staff turn to the community as a support resource, and opportunities to strengthen and grow those relationships are highly beneficial. This workshop is designed to share solutions to common problems, provide a platform to discuss upcoming technologies, and present state-of-the-practice techniques, so that HPC centers get a better return on their investment, systems gain performance and reliability, and researchers become more productive. Additionally, this workshop is affiliated with the systems professionals’ chapter of the ACM SIGHPC (SIGHPC SYSPROS Virtual ACM Chapter). This session will serve as an opportunity for chapter members to meet face-to-face, discuss the chapter’s yearly workshop held at SC, and continue building our community’s shared knowledge base. For more information:

The Higher Education Campus Alliance for Advanced Visualization Tutorial and Workshop

7/29/2019, 8:30am-12:00pm, Tutorial Half-day


AI4GOOD@PEARC19: Workshop Proposal

7/29/2019, 8:30am-5:00pm, Workshop Full-day

The AI4GOOD@PEARC19 STEM-Trek workshop will enlighten participants about applications of artificial intelligence (AI) that are used for social good. Biomedical advances, economic empowerment strategies, agricultural innovation, and quality of life improvements for citizens in underserved regions will be emphasized. Hands-on training will be led by Kang Lee (University of Iowa, formerly Samsung’s Big Data Group), Ryan Quick, and Arno Kolster. Quick and Kolster are with Providentia Worldwide. Quick previously led the PayPal Advanced Technology Group’s HPC environments and specialized computing services, configurations, and optimized workloads; Kolster honed his wizardry skills working for the oil and gas industry, law enforcement and emergency services, a public utility, and a number of Internet startups. Our security panel, led by Florence Hudson (FDHint, LLC), will foster thoughtful discussion about emerging AI-related privacy, ethics, and compliance challenges associated with inter-institutional and international research. For more information, visit the STEM-Trek website

Cloud-based Virtual Clusters using Jetstream

7/29/2019, 8:30am-5:00pm, Tutorial Full-day

Cloud computing has grown at a significant rate over the past few years. While cloud computing alleviates much of the hardware management challenge, many researchers and educators have difficulty embracing the aspects of cloud computing that make it unique and preferred for many activities. Among the many features cloud computing brings, elastic computing – resources on demand – may be one of the most appealing. A purpose-built virtual machine can be created quickly and easily as a first step. The logical next step is scripted launching of additional resources as needed. From there, elastic computing techniques can create virtual clusters on demand, bringing compute resources into existence when needed and removing them when they are no longer necessary. This elasticity enables gateway providers and researchers to make efficient use of limited resources, while providing a resource for HPC-style jobs that isn’t dependent on external HPC resources. While modest cloud-based virtual clusters won’t replace traditional HPC resources for jobs that require high-speed interconnects or large core counts and high memory profiles, many smaller gateway, research, and education projects may benefit from the highly customizable, configurable, programmable cyberinfrastructure afforded by cloud computing environments. This tutorial will explore the basic methods required for interacting with elastic computing environments. It will then take a hands-on approach to creating virtual clusters in an OpenStack environment, including the steps necessary to make the cluster elastic and take full advantage of the cloud environment.
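The elasticity decision itself reduces to a small policy: compare pending work against current capacity, then grow or shrink within bounds. The function below is a hypothetical sketch of such a policy (the names and the jobs-per-node heuristic are illustrative, not from the tutorial); a real controller would then call the cloud provider's API, e.g. OpenStack, to add or remove instances.

```python
# Hypothetical autoscaling policy for an elastic virtual cluster: derive a
# target node count from the number of queued jobs, bounded by a minimum
# and maximum cluster size, then report the action needed to get there.

def target_nodes(queued_jobs, jobs_per_node, min_nodes=0, max_nodes=8):
    wanted = -(-queued_jobs // jobs_per_node)  # ceiling division
    return max(min_nodes, min(max_nodes, wanted))

def scaling_action(current, queued_jobs, jobs_per_node=4):
    target = target_nodes(queued_jobs, jobs_per_node)
    if target > current:
        return ("grow", target - current)
    if target < current:
        return ("shrink", current - target)
    return ("hold", 0)

print(scaling_action(2, 20))  # ('grow', 3): 20 queued jobs need 5 nodes
print(scaling_action(5, 0))   # ('shrink', 5): an idle cluster scales to zero
```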

CyberAmbassadors: Communications, Teamwork, Ethics and Leadership Training for Cyber-Infrastructure Professionals

7/29/2019, 8:30am-5:00pm, Workshop Full-day

This workshop provides communication, teamwork and leadership training for technically-proficient CI-Professionals, with the goal of developing “CyberAmbassadors” who are prepared to expand broader engagement in multidisciplinary, computationally-intensive research. The curriculum uses interactive exercises and small-group activities to help participants build professional skills within the context of multidisciplinary computational research. CyberAmbassador website link:

Humans in the Loop: Enabling and Facilitating Research on Cloud Computing

7/29/2019, 8:30am-5:00pm, Workshop Full-day

This workshop explores the role of humans in making cloud computing useful in research settings. Cloud computing is clearly a type of cyberinfrastructure, which, in its general definition, “consists of computing systems, data storage systems, advanced instruments and data repositories, visualization environments, and people, all linked together by software and high performance networks to improve research productivity and enable breakthroughs not otherwise possible.” In this workshop we will focus on the “and people” part of cyberinfrastructure, and in particular on the role of people in supporting the use of commercial cloud resources in research. Learn more about this workshop at

Portable, Reproducible High Performance Computing In the Cloud

7/29/2019, 8:30am-5:00pm, Tutorial Full-day

This tutorial will focus on providing attendees exposure to state-of-the-art techniques for portable, reproducible research computing, enabling them to easily transport analyses from cloud to HPC resources. We will introduce open source technologies such as Jupyter, Docker, and Singularity, the emerging “serverless” computing paradigm, and how to utilize these tools within two NSF-funded cyberinfrastructure platforms, the Agave and Abaco APIs. The approaches introduced not only increase application portability and reproducibility but also reduce or eliminate the need for investigators to maintain physical infrastructure, so that more time can be spent on analysis. For the tutorial, attendees will have access to allocations on XSEDE Jetstream and one or more HPC resources such as TACC’s Stampede2 or Frontera.

Programming Modern Multicore Processors

7/29/2019, 8:30am-5:00pm, Tutorial Full-day

Modern processors such as Intel’s Xeon Scalable line, AMD’s EPYC architecture, ARM’s ThunderX2 design, and IBM’s Power9 architecture are scaling out rather than up and increasing in complexity. Because the base frequencies for large core count chips hover somewhere between 2 and 3 GHz, researchers can no longer rely on frequency scaling to increase the performance of their applications. Instead, developers must learn to take advantage of the increasing core count per processor and learn how to eke out more performance per core. To achieve good performance on modern processors, developers must write code amenable to vectorization, be aware of memory access patterns to optimize cache usage, and understand how to balance multi-process programming (MPI) with multi-threaded programming (OpenMP). This tutorial will cover the basics of vectorization, multi-threaded programming, memory affinity, load balancing, and hybrid execution. We will also provide an overview of current HPC processors and profiling tools. This session will include hands-on exercises that demonstrate the techniques discussed and the usage of profiling tools. This tutorial is designed for experienced programmers, familiar with OpenMP and MPI, who wish to learn how to program for performance on modern architectures.
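The core idea of the multi-threaded part – dividing a loop's iteration space among workers and combining partial results – can be sketched in a few lines. This is pure Python for illustration only; in C or Fortran this decomposition is what an OpenMP `parallel for` reduction does, and real performance there depends on the vectorization, cache, and affinity issues the tutorial covers.

```python
# Illustration of loop-level work decomposition, the idea behind an OpenMP
# "parallel for" reduction: split the iteration space into contiguous
# chunks, let each worker reduce its chunk, then combine partial results.
from concurrent.futures import ThreadPoolExecutor

def chunked(n, workers):
    # Yield (start, stop) index ranges that partition range(n).
    step = -(-n // workers)  # ceiling division
    for start in range(0, n, step):
        yield start, min(start + step, n)

def partial_sum(bounds):
    start, stop = bounds
    return sum(i * i for i in range(start, stop))

def parallel_sum_squares(n, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunked(n, workers)))

print(parallel_sum_squares(1000) == sum(i * i for i in range(1000)))  # True
```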

Building Container Images for HPC Workloads

7/29/2019, 1:30pm-5:00pm, Tutorial Half-day

This tutorial will describe how to build container images for HPC workloads with Docker, Singularity, and HPC Container Maker (HPCCM). Best practices including image layering, caching, multi-stage builds, and managing container image size will be covered. HPCCM will be a primary focus, describing how to use this open source tool to generate Dockerfiles and Singularity definition files from a Python recipe. More information:
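To make the recipe idea concrete, a minimal HPCCM recipe might look like the sketch below. The base image and version numbers are illustrative, and `Stage0`, `baseimage`, and the building blocks are names HPCCM injects when it evaluates the recipe, so this file is run through the `hpccm` command rather than directly.

```python
# Illustrative HPCCM recipe (recipe.py). Generate container specs with:
#   hpccm --recipe recipe.py --format docker      > Dockerfile
#   hpccm --recipe recipe.py --format singularity > Singularity.def
Stage0 += baseimage(image='ubuntu:18.04')   # base layer
Stage0 += gnu()                             # GNU compiler building block
Stage0 += openmpi(version='4.0.1')          # MPI stack built on the compilers
```

The same recipe thus produces both Docker and Singularity specifications, which is the portability argument the tutorial makes.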

Developing Science Gateways using Apache Airavata

7/29/2019, 1:30pm-5:00pm, Tutorial Half-day

This half-day tutorial will build on XSEDE14, XSEDE15, XSEDE16, and PEARC17 tutorials. Extensive tutorial material is available here. Previous tutorials have had approximately 20 attendees each. Science gateways provide science-specific user interfaces for scientific applications for end users who are unfamiliar with or need more capabilities than provided by command-line interfaces. Science gateway middleware provides the general purpose capabilities behind gateway user interfaces. In this tutorial, we present the Apache Airavata middleware for creating science gateways. The target audiences for this tutorial include a) scientific software developers, who want simplified ways to deliver their software and support larger user communities; b) educators, who want to integrate scientific software usage into their classroom without having students get bogged down in the submission mechanisms of specific resources; and c) campus computing center staff, who want to use gateways to broaden their reach beyond their traditional users and to help users make more efficient use of resources. The format will be a mixture of presentations, demonstrations, and hands-on sessions, as described in the detailed agenda. The agenda indicates the roles of each of the organizers. For more information visit:

Google Cloud Workshop: HPC & ML on Google Cloud Platform (Platinum Exhibitor)

7/29/2019, 1:30pm-5:00pm, Tutorial Half-day

This workshop will provide an introduction to HPC on Google Cloud, including: 

  • Customer Story: Paul Sagona @ USC – Predicting Climate Change by Sequencing Microbiomes
  • Hands-on HPC: Slurm Auto-Scaling Clusters on Google Cloud
  • Introduction to Machine Learning for HPC Engineers 
  • Hands-on Machine Learning: Introduction to Kubeflow on Google Cloud

How to Accelerate Your Big Data and Associated Deep Learning Applications with Hadoop and Spark?

7/29/2019, 1:30pm-5:00pm, Tutorial Half-day

Apache Hadoop and Spark are gaining prominence in handling Big Data analytics. Recent studies have shown that default Hadoop and Spark cannot efficiently leverage high-performance networking and storage architectures, such as Remote Direct Memory Access (RDMA)-enabled high-performance interconnects and heterogeneous storage systems (e.g., HDD, SSD, NVMe-SSD, and Lustre). These middleware stacks are traditionally written with sockets and do not deliver the best performance on modern high-performance networks. In this tutorial, we will provide an in-depth overview of the architecture of Hadoop components (HDFS, MapReduce, etc.) and Spark. We will examine the challenges in re-designing the networking and I/O components of these middleware stacks for modern interconnects and protocols (such as InfiniBand and RoCE) with RDMA, and for modern storage architectures. Using the publicly available software packages from the High-Performance Big Data (HiBD) project, we will provide case studies of the new designs for several Hadoop/Spark components and their associated benefits. Through these case studies, we will also examine the interplay between high-performance interconnects, high-speed storage systems, and multi-core platforms to achieve the best solutions for these components and for Big Data processing and Deep Learning applications on modern HPC clusters. This tutorial will include hands-on sessions with Hadoop and Spark on the SDSC Comet supercomputer.
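The processing model at the heart of Hadoop MapReduce can be sketched in plain Python: map emits key/value pairs, a shuffle groups them by key, and reduce aggregates each group. This is a conceptual sketch only; the frameworks distribute exactly these phases across nodes, which is where the networking and I/O redesigns discussed here matter.

```python
# Conceptual sketch of the MapReduce model behind Hadoop: map emits
# (key, value) pairs, the shuffle groups values by key, and reduce
# aggregates each group. Hadoop runs these phases across many nodes,
# which is why interconnect and storage performance matter.
from collections import defaultdict

def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield word, 1

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data big compute", "big data"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["big"], counts["data"], counts["compute"])  # 3 2 1
```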

Practical OpenHPC: Cluster Management, HPC Applications, Containers and Cloud

7/29/2019, 1:30pm-5:00pm, Tutorial Half-day

Since its inception as a Linux Foundation project in 2015, OpenHPC has steadily grown to provide a modern, consistent, reference collection of HPC cluster provisioning tools, together with a curated repository of common cluster management software, I/O clients, advanced computational science libraries and software development tools, container-based execution facilities, and application performance profiling tools. Although OpenHPC enables people deploying new HPC clusters to rapidly get their clusters up and running, the OpenHPC software repository itself is a reliable, portable, integrated collection of software, libraries, tools, and user environments that can be employed in containers, VMs, and HPC clusters to develop and execute computational science applications. This half-day tutorial will begin with a brief introduction to OpenHPC. We will then guide attendees through several practical, hands-on exercise modules employing an OpenHPC-based cluster and the OpenHPC software repository to explore real-world activities, including:

  • HPC cluster management and job schedulers
  • Using containers to build and prototype HPC applications with OpenHPC
  • Running container-based applications on HPC clusters
  • Using EasyBuild and Spack to streamline application builds
  • Instrumenting applications for performance measurement
  • Using Packer to build and use OpenHPC-ready VM images in the cloud

Second Workshop on Strategies for Enhancing HPC Education and Training (SEHET19)

7/29/2019, 1:30pm-5:00pm, Workshop Half-day

High performance computing is becoming central to scientific progress in fundamental research across science, engineering, and societal domains. Rapid advances in mainstream computing technology have made it possible to solve complex, large-scale scientific applications that simulate numerical models of phenomena from diverse scientific fields. The wide distribution, heterogeneity, and dynamism of today's and future computing and software environments present both challenges and opportunities for cyberinfrastructure facilitators, trainers, and educators to develop, deliver, and support training, and to prepare a diverse community of students and professionals for careers that use high performance computing to advance discovery. The SEHET19 workshop is an ACM SIGHPC Education Chapter coordinated effort aimed at fostering collaboration among practitioners from traditional and emerging fields to explore strategies that enhance computational, data-enabled, and HPC education. Attendees will discuss approaches for developing and deploying HPC training, and will identify new challenges and opportunities for keeping pace with rapid technological advances – from collaborative and online learning tools to new HPC platforms. The workshop will provide opportunities for learning about methods for effective HPC education and training; promoting collaboration among HPC educators, trainers, and users; and disseminating resources, materials, lessons learned, and good/best practices.

Tools and Best Practices for Distributed Deep Learning with High Performance Computing

7/29/2019, 1:30pm-5:00pm, Tutorial Half-day

This tutorial is a practical guide to running distributed deep learning over distributed compute nodes effectively. Deep Learning (DL) has emerged as an effective analysis method and has been adopted quickly across many scientific domains in recent years. Domain scientists are embracing DL both as a standalone data science method and as an effective approach to reducing dimensionality in traditional simulation. However, due to its inherently high computational requirements, the application of DL is limited by available computational resources. Recently, we have seen the fusion of DL and high-performance computing (HPC): supercomputers show an unparalleled capacity to reduce DL training time from days to minutes, and HPC techniques have been used to speed up parallel DL training. Distributed deep learning therefore has great potential to augment DL applications by leveraging existing high performance computing clusters. This tutorial consists of three sessions. First, we will give an overview of state-of-the-art approaches to enabling deep learning at scale. The second session is an interactive hands-on session to help attendees run distributed deep learning with resources at the Texas Advanced Computing Center. In the last session, we will focus on best practices for evaluating and tuning performance.

Using the SPEC HPG Benchmarks for Better Analysis and Evaluation of Current and Future HPC Systems

7/29/2019, 1:30pm-5:00pm, Tutorial Half-day

The High Performance Group (HPG) of the Standard Performance Evaluation Corporation (SPEC) develops benchmark methodologies for High Performance Computing systems and releases production-quality benchmark suites like SPEC MPI2007, SPEC OMP2012, and SPEC ACCEL. These benchmarks can evaluate all dimensions of parallelism on modern HPC systems and are used in academia and industry to conduct research on HPC systems and to facilitate the procurement, testing, and tuning of HPC systems. Since 2018, SPEC HPG has offered these benchmark suites free of charge to non-commercial users. In this half-day tutorial, participants will learn how to leverage SPEC benchmarks for performance evaluation, tuning of system parameters, and comparison of systems (e.g., for procurement), and will get an outlook on the suites' power measurement capabilities. The presenters will provide demos and hands-on guidance on how to install, compile, and run the benchmarks on the HPC systems provided. The presenters will also show how results are interpreted, discuss various use cases, and present the publication process for results. An SSH-capable device is required for the hands-on sessions. Tutorial materials can be found here

Taking Advantage of Persistent Memory with Open Source PMDK and Intel Profiling Tools (Platinum Exhibitor)

7/29/2019, 1:30pm-5:00pm, Tutorial Half-day

The recent emergence of persistent memory offers a new tier of data placement, with memory-style access and storage-style persistence. This half-day workshop covers how operating systems expose persistent memory to applications, and ways applications can fully leverage what it can do.  Full program examples showing solutions using persistent memory are presented. The workshop covers the use of the Persistent Memory Development Kit, PMDK, a growing collection of libraries which have been developed for various use cases, tuned, validated to production quality, and thoroughly documented. These libraries build on the Direct Access (DAX) feature available in both Linux and Windows, which allows applications direct load/store access to persistent memory by memory-mapping files on a persistent memory aware file system. PMDK will work with any persistent memory that provides the SNIA NVM Programming Model. It is open source and welcomes community contributions.

In addition to PMDK, this educational session will show users how to take advantage of freely available tools from Intel – namely the Intel® VTune™ Amplifier.  This premier performance profiler has new capabilities to help you optimize your persistent memory programs. This workshop will show users how to:

  • Analyze systems over longer intervals. Find out which workloads can benefit from larger memory allocations and which system configuration better fits the workloads. 
  • Locate code that is sensitive to memory bandwidth and latency issues. Identify hot, warm, or cool data to optimize memory usage and placement. 
  • Identify opportunities to replace disk or SSD-based storage with faster persistent memory.

The workshop will also show users how to take advantage of the new Intel® Persistence Inspector tool. This tool finds persistence errors quickly to make software fast and reliable.

Tuesday, July 30, 2019

Python or R or Both: Tools for Your Data Analytics Workflow

7/30/2019, 11:00am-12:30pm, Workshop Quarter-day

Python and R are among the most popular programming languages for data analysis. While they share many of the same important features as high-level, open-source programming languages, they each have their own strengths and weaknesses from a data-science perspective. While many researchers starting their data analysis journey feel they must choose between Python and R as their primary analytics tool, the reality is that there currently are no clear guidelines to make a decision between the two. This workshop aims to provide an in-depth comparison between Python and R for data science, thereby sparking an active discussion about which one would be the best choice given a specific goal and circumstance – or even when using both might be an appropriate choice. This workshop will benefit researchers, professionals, and students who need to choose between the two languages but also those who need to learn and use both.

Trusted CI Workshop on Trustworthy Scientific Cyberinfrastructure

7/30/2019, 11:00am-5:00pm, Workshop Three-quarter Day

The Trusted CI Workshop on Trustworthy Scientific Cyberinfrastructure provides an opportunity for sharing experiences, recommendations, and available resources for addressing cybersecurity challenges in research computing. Presentations by Trusted CI staff and community members will cover a broad range of cybersecurity topics, including science gateways, transition to practice, cybersecurity program development, workforce development, and community engagement (e.g., via the Trusted CI Fellows program). Learn more about this workshop at

Accelerating Data Science Workflows with RAPIDS

7/30/2019, 1:30pm-5:00pm, Tutorial Half-day

The open source RAPIDS project allows data scientists to GPU-accelerate their data science and data analytics applications from beginning to end, creating possibilities for drastic performance gains and techniques not available through traditional CPU-only workflows. Learn how to GPU-accelerate your data science applications by:

  • Utilizing key RAPIDS libraries like cuDF (GPU-enabled Pandas-like dataframes) and cuML (GPU-accelerated machine learning algorithms)
  • Learning techniques and approaches to end-to-end data science, made possible by rapid iteration cycles created by GPU acceleration
  • Understanding key differences between CPU-driven and GPU-driven data science, including API specifics and best practices for refactoring

Upon completion, you’ll be able to refactor existing CPU-only data science workloads to run much faster on GPUs and write accelerated data science workflows from scratch.
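Because cuDF mirrors much of the pandas API, porting often amounts to changing an import. The sketch below uses pandas so it runs anywhere; on a RAPIDS-enabled GPU machine, replacing the import with `import cudf as pd` is frequently the only change needed for code like this (API coverage varies, so treat this as illustrative rather than a guarantee).

```python
import pandas as pd  # on a RAPIDS machine: "import cudf as pd" is often enough

# A small groupby-aggregate, the kind of operation cuDF accelerates on GPU.
df = pd.DataFrame({
    "sensor": ["a", "b", "a", "b", "a"],
    "reading": [1.0, 2.0, 3.0, 4.0, 5.0],
})
means = df.groupby("sensor")["reading"].mean()
print(means["a"], means["b"])  # 3.0 3.0
```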

Tutorial on Floating-Point Analysis Tools

7/30/2019, 1:30pm-5:00pm, Tutorial Half-day

Scientific software is central to the practice of research computing. While scientific software is widely used in several science and engineering disciplines to simulate real-world phenomena, developing accurate and reliable scientific software is notoriously difficult. One of the most serious difficulties comes from dealing with floating-point arithmetic to perform numerical computations. Round-off errors occur and accumulate at all levels of computation, and compiler optimizations and low-precision arithmetic can significantly affect the final computational results. With accelerators such as GPUs dominating high-performance computing systems, computational scientists face even bigger challenges, given that ensuring numerical reproducibility on these systems poses a very difficult problem. This tutorial will demonstrate tools that are available today to analyze floating-point scientific software. We focus on tools that allow programmers to gain insight into how different aspects of floating-point arithmetic affect their code and how to fix potential bugs. The tools presented in the tutorial will allow programmers to understand how compiler optimizations affect floating-point computations, detect hidden floating-point exceptions on GPUs, reduce floating-point precision to obtain performance speedups, and understand the sensitivity of different regions of code to floating-point rounding errors.
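A few lines are enough to see the round-off accumulation these tools target: naively summing 0.1 (which has no exact binary representation) drifts away from the true total, while a compensated summation, here via the standard library's `math.fsum`, does not.

```python
import math

# Round-off accumulates: 0.1 is not exactly representable in binary
# floating point, so adding it ten thousand times drifts from 1000.0.
naive = 0.0
for _ in range(10_000):
    naive += 0.1

exact = math.fsum([0.1] * 10_000)  # compensated (error-tracking) summation

print(naive == 1000.0)  # False: the naive loop has drifted
print(exact == 1000.0)  # True: fsum tracks the rounding error
```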

Wednesday, July 31, 2019

Introduction to Docker and Singularity

7/31/2019, 11:00am-12:30pm, Tutorial Quarter-day

A container is a portable software unit that has one or more applications and all their dependencies bundled together in a single package. Containers make software and data distribution easy, and can thus save time in software installation and maintenance. They are also useful in ensuring the reliability and reproducibility of applications by future-proofing them against changes in hardware and system software. In this beginner-level, quarter-day tutorial (1.5 hours), participants will get an overview of containers and their relevance in cloud computing and high performance computing. The presenters will demonstrate the use of Docker and Singularity – the two open-source container solutions that are popular in the advanced research community. A short hands-on session on Docker will also be included. Participants will be required to create an account on DockerHub to take part in the hands-on session and will need to bring their own laptops.
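As a flavor of what the hands-on portion involves, a typical beginner session pulls and runs the same image with both tools; Singularity can consume images published on DockerHub directly. The commands below are an illustrative sketch (the image name is a placeholder, not the tutorial's actual exercise):

```shell
# Pull a public image from DockerHub and run a command inside it.
docker pull python:3.7-slim
docker run --rm python:3.7-slim python --version

# Singularity can build its image format straight from DockerHub,
# so the same software stack runs on HPC systems without root access.
singularity pull docker://python:3.7-slim
singularity exec python_3.7-slim.sif python --version
```

This DockerHub interoperability is a large part of why the two tools are commonly taught together: images built for Docker on a laptop can often be reused unchanged on a Singularity-based cluster.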

Managing HPC Software Complexity with Spack

7/31/2019, 1:30pm-5:00pm, Tutorial Half-day

The modern scientific software stack includes thousands of packages, from C, C++, and Fortran libraries, to packages written in interpreted languages like Python and R. HPC applications may depend on hundreds of packages spanning all of these ecosystems. To achieve high performance, they must also leverage low-level and difficult-to-build libraries such as MPI, BLAS, and LAPACK. Integrating this stack is extremely challenging. The complexity can be an obstacle to deployment at HPC sites and deters developers from building on each other's work. Spack is an open source tool for HPC package management that simplifies building, installing, customizing, and sharing HPC software stacks. In the past few years, its adoption has grown rapidly: by end-users, by HPC developers, and by the world’s largest HPC centers. Spack provides a powerful and flexible dependency model, a simple Python syntax for writing package build recipes, and a repository of over 3,000 community-maintained packages. This tutorial provides a thorough introduction to Spack’s capabilities: installing and authoring packages, integrating Spack with development workflows, and using Spack for deployment at HPC facilities. Attendees will leave with foundational skills for using Spack to automate day-to-day tasks, along with deeper knowledge for applying Spack to advanced use cases.
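To illustrate the dependency model the tutorial covers, a short command sketch (package and compiler versions are illustrative): Spack's spec syntax lets a user pin a compiler with `%` and swap a dependency with `^`, and the tool concretizes the full dependency graph before building.

```shell
# Install HDF5 built with a specific compiler and MPI implementation.
spack install hdf5 %gcc@8.2.0 ^openmpi

# Show the fully concretized dependency graph before (or after) building.
spack spec hdf5 %gcc@8.2.0 ^openmpi

# List what is installed, then load a package into the environment.
spack find hdf5
spack load hdf5
```

Because every install is keyed by its full spec, multiple configurations of the same package (different compilers, MPI stacks, or build options) can coexist on one system, which is what makes Spack practical for deployment at shared HPC facilities.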

Rogues Gallery: Addressing Post-Moore Computing

7/31/2019, 1:30pm-5:00pm, Tutorial Half-day

The Rogues Gallery is a new collaborative, experimental testbed hosted at Georgia Tech that is focused on tackling “rogue” architectures for the post-Moore era of computing. Some of these devices have roots in the embedded and high-performance computing spaces, but many of the expected post-Moore technologies are limited to custom prototypes, as with quantum, neuromorphic, and reversible computing devices. This tutorial will present a brief overview of the Rogues Gallery, including related tools and resources like benchmark suites for novel architectures, and will focus on providing hands-on experience with two hardware rogues. Attendees will have an opportunity to learn about and program for the Emu Chick system, a near-memory computing architecture for sparse applications, and the Field Programmable Analog Array (FPAA), a mixed analog/digital platform for implementing machine learning and neuromorphic designs. We will provide and work through a set of demonstration codes based on Sparse Matrix-Vector Multiply for the Emu, and we will explore the open-source toolset and virtual machine image used to program the FPAA. Attendees will have an opportunity to continue their investigation into using post-Moore technologies by requesting a free account to access the Rogues Gallery at the end of the tutorial. Tutorial website:
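For readers unfamiliar with the kernel the Emu demonstration codes are built around, a generic sparse matrix-vector multiply over the compressed sparse row (CSR) format looks like the plain-Python sketch below. This is an illustration of the algorithm only; the tutorial's actual Emu codes target that architecture's programming model, not Python.

```python
# Generic CSR sparse matrix-vector multiply: y = A @ x.
# values  holds the nonzeros row by row,
# col_idx holds each nonzero's column,
# row_ptr[i]:row_ptr[i+1] delimits row i's nonzeros.
def spmv_csr(values, col_idx, row_ptr, x):
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        # Accumulate row i's nonzeros against the dense vector x.
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# The 3x3 matrix [[2,0,0],[0,3,1],[0,0,4]] in CSR form:
values  = [2.0, 3.0, 1.0, 4.0]
col_idx = [0, 1, 2, 2]
row_ptr = [0, 1, 3, 4]
print(spmv_csr(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [2.0, 4.0, 4.0]
```

The irregular, data-dependent accesses into `x` via `col_idx` are exactly the memory pattern that motivates near-memory architectures like the Emu Chick.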