Friday, October 30, 2015

SC15 Invited Talk - Dr. Laura Grigori Presents Fast and Robust Communication Avoiding Algorithms: Current Status and Future Prospects


Left: computation to communication ratio for the LU factorization with partial pivoting of a dense matrix on a model of an exascale machine (ISC 2014). Right: preconditioned Krylov solver as used in the map making problem in astrophysics, results obtained on a Cray XE6 machine (Astronomy and Astrophysics, 2014). Please click to enlarge.
This talk will address one of the main challenges in high performance computing: the increasing cost of communication relative to computation, where communication refers to data transferred either between processors or between different levels of the memory hierarchy, possibly including NVMs.

I will give an overview of novel communication avoiding numerical methods and algorithms that reduce communication to a minimum for operations at the heart of many calculations, in particular numerical linear algebra algorithms.

Communication avoiding LU uses tournament pivoting to minimize communication (SC08). Lightweight scheduling combines static and dynamic scheduling to provide a good trade-off between load balance, data locality and dequeue overhead (IPDPS 2012). Please click to enlarge.
These algorithms range from iterative methods used in numerical simulations to low-rank matrix approximations used in data analytics. I will also discuss how these algorithms map onto architectures and their integration in several applications.
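For readers unfamiliar with tournament pivoting, the sketch below (Python, serial, with names chosen purely for illustration) shows the basic idea referenced in the figure caption above: each block of rows selects b candidate pivot rows locally, and a reduction tree merges candidate sets pairwise so that only b rows, rather than a whole panel, need to move at each step. This is a toy model of the concept under simplifying assumptions, not the CALU algorithm as published.

import numpy as np

def local_pivot_rows(block, b):
    # Rows of `block` chosen as pivots by Gaussian elimination with
    # partial pivoting; stands in for the per-processor factorization step.
    work = block.astype(float).copy()
    rows = np.arange(work.shape[0])
    for k in range(min(b, work.shape[0])):
        p = k + np.argmax(np.abs(work[k:, k]))
        work[[k, p]] = work[[p, k]]
        rows[[k, p]] = rows[[p, k]]
        if work[k, k] != 0.0:
            work[k + 1:, k:] -= np.outer(work[k + 1:, k] / work[k, k], work[k, k:])
    return block[rows[:b]]

def tournament_pivot_rows(panel, num_blocks):
    # Each block picks b candidate rows locally, then a reduction tree
    # merges candidate sets pairwise until b final pivot rows remain.
    b = panel.shape[1]
    candidates = [local_pivot_rows(blk, b)
                  for blk in np.array_split(panel, num_blocks, axis=0)]
    while len(candidates) > 1:
        candidates = [local_pivot_rows(np.vstack(candidates[i:i + 2]), b)
                      for i in range(0, len(candidates), 2)]
    return candidates[0]

# Example: pick pivot rows for a 1024-by-8 panel split across 16 "processors".
pivots = tournament_pivot_rows(np.random.rand(1024, 8), num_blocks=16)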

Speaker Background:
Dr. Laura Grigori
Dr. Laura Grigori obtained her Ph.D. in Computer Science in 2001 from the University Henri Poincaré in France. She was a postdoctoral researcher at UC Berkeley and Lawrence Berkeley National Laboratory before joining the French Institute for Research in Computer Science and Automation (INRIA) in 2004.

She currently leads Alpines, a joint research group between INRIA, Pierre and Marie Curie University, and the National Center for Scientific Research (CNRS).

Her field of expertise is high performance scientific computing, numerical linear algebra, and combinatorial scientific computing. She co-authored the papers introducing communication avoiding algorithms that provably minimize communication.

She leads several projects in preconditioning, communication avoiding algorithms, and associated numerical libraries for large scale parallel machines. She is currently the Program Director of the SIAM Activity Group on Supercomputing.

Thursday, October 29, 2015

SC15 Invited Talk Focus: Dr. Panagiotis Tsarchopoulos Explains the European Supercomputing Research Programme


Over the last couple of years, through a number of policy and research initiatives, the European Union has worked to put together an ambitious supercomputing research programme.

As part of this effort, in autumn 2015 the European Commission launched several new supercomputing projects covering supercomputing hardware, software and applications. This launch marks an important milestone in European supercomputing research and development.

The talk will provide a detailed overview of the European supercomputing research programme, its current status, and its future perspectives towards exascale.

Speaker Background:
Dr. Panagiotis (Panos) Tsarchopoulos
Dr. Panagiotis (Panos) Tsarchopoulos is responsible for supercomputing research projects at the Future and Emerging Technologies unit of the European Commission. He holds a Ph.D. in computer engineering from the University of Kaiserslautern, Germany and an M.B.A. from the UBI, Brussels, Belgium.

Wednesday, October 28, 2015

SC15 Releases Short Video on How HPC is Impacting Personalized Medical Treatments, Especially in Children with Epilepsy


SC15 recently released this short video about how high performance computing (HPC) is making a difference in personalizing medical treatments as described by Dr. Chris R. Johnson, Founding Director of the Scientific Computing and Imaging Institute.

One specific example he cites is the amazing impact HPC is having on children with epilepsy. According to Dr. Johnson, 15% or more of children with epilepsy cannot be treated effectively with current epilepsy drugs.  Thus, the only recourse for giving them a more normal life is via surgery.

Through the use of patient-specific, high-resolution computer models of a child’s head, combined with high performance imaging and visualization, the surgery is much more precise and less traumatic. The end result is a much higher quality of life for the child.

This is another chapter in a series of short videos focusing on why HPC is important and of critical value to society. It is part of a three-year SC "HPC Matters" campaign.  Click here to view the entire HPC Matters video library.

SC15 Panel Line-Up for Nov. 20th

HPC Transforms DoD, DOE, and Industrial Product Design, Development, and Acquisition

Supercomputing has been shown to enable massive reductions in product development time, significant improvements in product capability, greater design innovation in new products, and effective systems engineering implementations.

Our panelists will share their intimate knowledge of the various methods and practices by which these results have been achieved in the U.S. Departments of Defense & Energy, and in industry.

Topics will include the Digital Thread & Twin of Air Force acquisition; the development and deployment of physics-based engineering analysis and design software for military aircraft, ships, ground vehicles, and antennas; high fidelity predictive simulation of challenging nuclear reactor conditions; accessibility in the era of hacking and exfiltration; STEM education using HPC; and cultural barriers to organizational adoption of HPC-based product development.

Audience questions and contributions to the list of key enablers and pitfalls for the implementation of HPC-based product development within both government and industry will be encouraged and discussed.

Moderator/Panelist Details:
  • Loren Miller (Moderator) - DataMetric Innovations, LLC
  • Christopher Atwood - U.S. Department of Defense High Performance Computing Modernization Program and CREATE Program
  • Col. Keith Bearden - United States Air Force
  • Douglas Kothe - Oak Ridge National Laboratory
  • Edward Kraft - United States Air Force
  • Lt. Col Andrew Lofthouse - United States Air Force Academy
  • Douglass Post - U.S. Department of Defense High Performance Computing Modernization Program and CREATE Program

Return of HPC Survivor: Outwit, Outlast, Outcompute

Back by popular demand, this panel brings together HPC experts to compete for the honor of "HPC Survivor 2015". Following up on the popular Xtreme Architectures (2004), Xtreme Programming (2005), Xtreme Storage (2007), Build Me an Exascale (2010), and Does HPC Really Matter? (2014) competitions, the theme for this year is "HPC Transformed: How to Reduce/Recycle/Reuse Your Outdated HPC System."

The contest is a series of "rounds," each posing a specific question about system characteristics and how that affects its transformation to new and exciting uses. After contestants answer, a distinguished commentator furnishes additional wisdom to help guide the audience. At the end of each round, the audience votes (applause, boos, etc.) to eliminate a contestant. The last contestant left wins.

While delivered in a light-hearted fashion, the panel pushes the boundaries of how HPC can/should affect society in terms of impact, relevancy, and ROI.

Moderator/Panelist Details:
  • Cherri Pancake (Moderator) - Oregon State University
  • Robin Goldstone - Lawrence Livermore National Laboratory
  • Steve Hammond - National Renewable Energy Laboratory
  • Jennifer M. Schopf - Indiana University
  • John E. West - The University of Texas at Austin

 

HPC and the Public Cloud

Where high-performance computing collides with cloud computing, just about the only point where most interested and informed parties agree is that the overlap is incomplete, complex, and dynamic. We are bringing together stakeholders on all sides of the issue to express and debate their points of view on questions such as:
  • Which HPC workloads should be running in the public Cloud? Which should not?
  • How will Cloud economics affect the choices of algorithms and tools?
  • How does Cloud computing impact computational science?
  • Is there a line to be drawn between "Big Data" and "HPC"? If so, where?
  • Will Cloud HPC encourage or discourage innovation in HPC hardware and software?
  • What is it about HPC that the Cloud providers “don’t get”?

Moderator/Panelist Details:
  • Kevin D. Kissell (Moderator) - Google
  • Jeff Baxter - Microsoft Corporation
  • Shane Canon - Lawrence Berkeley National Laboratory
  • Brian Cartwright - MetLife Insurance Company
  • Steve Feldman - CD-adapco
  • Bill Kramer - University of Illinois at Urbana-Champaign
  • Kanai Pathak - Schlumberger Limited

In Situ Methods: Hype or Necessity?

Due to the widening gap between the FLOP and I/O capacity of HPC platforms, it is increasingly impractical for computer simulations to save full-resolution computations to disk for subsequent analysis.

"In situ" methods offer hope for managing this increasingly acute problem by performing as much analysis, visualization, and related processing as possible while data is still resident in memory. While in situ methods are not new, they are presently the subject of much active R&D, though they are not yet widespread in deployment or use.

This panel examines different aspects of in situ methods, with an eye towards increasing awareness of the current state of this technology, how it is used in practice, and challenges facing widespread deployment and use. The panel will also explore the issue of whether in situ methods are really needed or useful in the first place, and invites discussion and viewpoints from the SC community.
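As a concrete illustration of the in situ pattern described above, the toy loop below (Python, with made-up names; it represents no particular in situ framework on the panel) computes a small derived quantity from the simulation state while it is still resident in memory, instead of writing the full-resolution field to disk at every step.

import numpy as np

def run_simulation(steps, analyze_every, analyze):
    field = np.random.rand(512, 512)      # stand-in for the simulation state
    summaries = []
    for step in range(steps):
        # Placeholder "solver" update: simple neighbor averaging.
        field = 0.25 * (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
                        np.roll(field, 1, 1) + np.roll(field, -1, 1))
        if step % analyze_every == 0:
            # In situ step: analyze the in-memory field and keep only a
            # small derived result rather than the full-resolution state.
            summaries.append(analyze(field, step))
    return summaries

# Keep just the global maximum and mean at every tenth step.
results = run_simulation(100, 10, lambda f, s: (s, float(f.max()), float(f.mean())))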

Moderator/Panelist Details:
  • Wes Bethel (Moderator) - Lawrence Berkeley National Laboratory
  • Patrick O'Leary - Kitware, Inc.
  • John Clyne - National Center for Atmospheric Research
  • Venkat Vishwanath - Argonne National Laboratory
  • Jacqueline Chen - Sandia National Laboratories 

Please note: these panels and the rest of the Technical Program are open to exhibitors on November 20th.

Monday, October 26, 2015

SC15 Invited Talk - Nils Thuerey Presents Virtual and Real Flows: Challenges for Digital Special Effects

To get the motion of a crown splash right, surface tension is crucial. The simulation shown here uses a mesh-based approach (please click on photo to enlarge).

Physics simulations for virtual smoke, explosions or water are by now crucial tools for special effects in feature films. Despite their widespread use, central challenges remain in making these simulations controllable, fast enough for practical use, and believable.


A drop of liquid impacting a pillar. The thin sheets that form over time are important for believable visuals, but tough for surface tracking algorithms (please click on photo to enlarge).

In this talk I will explain simulation techniques for fluids in movies, and why “art directability” is crucial in these settings. A central challenge for virtual special effects is to make them faster. Ideally, previews should be interactive. At the same time, interactive effects are highly interesting for games or training simulators.


While this image shows a two-way coupled liquid simulation, one-way coupling is often sufficient (or preferred) for special effects applications (please click on photo to enlarge).

I will highlight current research in flow capture and data-driven simulation which aims at shifting the computational load from run-time into a pre-computation stage, and give an outlook on future developments in this area.

Speaker Background:
In 2012, Dr. Nils Thuerey won a Technical Oscar.
Dr. Nils Thuerey works in the field of computer graphics, with a particular emphasis on physically-based animation. One focus of his research targets the simulation of fluid phenomena, such as water and smoke. These simulations find applications as visual effects in computer generated movies and digital games.

Examples of his work are novel algorithms to make simulations easier to control, to handle detailed surface tension effects, and to increase the amount of turbulent detail. After studying computer science, Professor Thuerey acquired a Ph.D. for his work on liquid simulations in 2006. He received both degrees from the University of Erlangen-Nuremberg.

The Technical Achievement Award
Until 2010 he held a position as a post-doctoral researcher at ETH Zurich, in collaboration with Ageia/Nvidia. Subsequently, he worked for three years as Research & Development Lead at ScanlineVFX, developing large scale physics simulators for visual effects. Since fall 2013 he has been Professor for Games Engineering at the Technische Universität München (TUM).

In 2012, Dr. Thuerey won a Technical Oscar from the Academy of Motion Picture Arts and Sciences.

SC15 Student Cluster Competition Fires Things Up with Secret Sauce

SCC Secret Sauce

Salsa secreta. Geheime Sosse. Secret sauce. 紹兴. 沙茶.  


No matter how you say it, that extra special ingredient known only to you can sometimes make all the difference in the world. And that’s just what nine teams of students are hoping as they converge on Austin, Texas, to demonstrate their prowess in the SC15 Student Cluster Competition. This year’s competition will feature teams from Australia, China, Colombia, Germany, Taiwan and the United States.

SC15 attendees are encouraged to drop by the Student Cluster Competition in the southwest corner of the first floor of the Austin Convention Center and get in on some of the secret sauce firing up the competition. Attendees who visit the competition and speak with at least four of the teams will get both inside information and their very own bottle of Student Cluster Competition Secret Sauce.

“We wanted to spice things up this year and since we are in Austin, what better way than by giving out bottles of secret sauce?” said SC15 Student Cluster Competition Chair Hai Ah Nam of Los Alamos National Laboratory.

The competition, which officially kicks off at 5 p.m. Monday, Nov. 16, pits the nine teams against each other in this high-energy event featuring young supercomputing talent from around the world competing to build and operate powerful cluster computers. In this real-time, non-stop, 48-hour competition, teams of undergraduate and/or high school students assemble small cluster computers on the SC15 exhibit floor and race to complete a real-world workload across a series of applications and impress HPC industry judges.

First held at SC07 in Reno, the Student Cluster Competition has proven so popular that it has been replicated in Europe, Africa and Asia.

“It’s a global event in every sense, from the teams to the appreciation by those watching, who understand they are seeing the future of HPC,” said Stephen Harrell of Purdue University, who has taken several Purdue teams to compete in the U.S. and Germany.

And those SC15 attendees who drop by the competition will be able to take home a small reminder of what fires up the competitive juices when students get together to show their stuff.

Sunday, October 25, 2015

SC15 Invited Talk Spotlight: Dr. Yutaka Ishikawa Presents System Software in Post K Supercomputer

The K computer manufactured by Fujitsu, currently installed at the RIKEN Advanced Institute for Computational Science campus in Kobe, Japan.

The next flagship supercomputer in Japan, the replacement for the K supercomputer, is being designed for general operation in 2020. Compute nodes based on a many-core architecture and connected by a 6-D mesh/torus network are under consideration, along with a three-level hierarchical storage system.


Co-kernel approach for HPC OS: Separation of system and process activities.

A heterogeneous operating system, combining Linux with a light-weight kernel, is being designed to build suitable environments for applications. Only through co-design with applications can the system software be designed to make maximum use of compute and storage resources.

After a brief introduction to the post-K supercomputer architecture, this talk will present the design issues of the system software. Two big-data applications, genome processing and meteorological and global environmental prediction, will be sketched out as target applications in the system software design. The talk will then show how these applications' demands affect the system software.

Dr. Yutaka Ishikawa
Speaker Background:    
Dr. Yutaka Ishikawa is the project leader of post-K supercomputer development. From 1987 to 2001, he was a member of AIST (formerly the Electrotechnical Laboratory), METI. From 1993 to 2001, he was the chief of the Parallel and Distributed System Software Laboratory at the Real World Computing Partnership.

He led the development of the SCore cluster system software, which was used in several large PC cluster systems around 2004.

From 2002 to 2014, he was a professor at the University of Tokyo. He led the project to design a commodity-based supercomputer called the T2K Open Supercomputer.

As a result, three universities (Tsukuba, Tokyo, and Kyoto) each procured a supercomputer based on the specification in 2008. He has been involved in the design of the post-T2K machine since 2013, which will be operational in 2020.

Thursday, October 22, 2015

Make New Friends at SC15 with this Free Meet-up/Networking Tool

SC15 is on Duckling! Is there an Austin restaurant or attraction that you are dying to visit, but don’t want to explore on your own? Create an informal outing of like-minded SC15 attendees with this free networking website called Duckling.

Simply visit the SC15 page on the Duckling site and click “add an outing.” Use the Yelp-connected search engine to find a specific restaurant or venue or search around for something of interest. Once you've picked the venue, fill in information about the details, meeting spot, etc. to create your event. We are encouraging all SC15 Duckling outings to meet at the information booth at the Fourth St. entrance of the Austin Convention Center. 

If you are looking to network and meet other attendees while exploring the city, just visit the SC15 Duckling site for a list of events. Or follow @ducklingquacks on Twitter. Outings created on Duckling are open to everyone, so all you have to do is show up. You can also search the #SC15 hashtag on Twitter for Duckling events.

Mentor-Protégé Program Needs You

Mentor Jim Ahrens of Los Alamos National Laboratory meets with his protégé, Maria Ruiz, at SC14.
Organized as part of the Students@SC program, the Mentor-Protégé Program supports the growth of a vibrant HPC community by connecting the newcomers at the SC conference with experienced conference attendees.

The mentors share their experiences from previous SC conferences and their expertise in high performance computing with the student protégés participating in Student Volunteers, HPC for Undergraduates, the Student Cluster Competition, the Doctoral Showcase and the ACM Student Research Competition.

For many of the students, SC will be their first conference and can be an overwhelming experience. A mentor can help them make the best use of their time and make personal connections with the HPC community.

Want to be a mentor?

Conference attendees who have attended SC one or more times in the past are encouraged to sign up to be mentors when they register for SC. Mentors will also choose their discipline on the form to help make the best matches. Mentors should plan their travel arrangements so they can attend a mixer for mentors and protégés from 3:30-5 p.m. on Monday, Nov. 16.

Click here for more details on the program including Jim Ahrens's and Maria Ruiz's experience from SC14.

Wednesday, October 21, 2015

SC15 Panel Line-Up for Nov. 19th

Asynchronous Many-Task Programming Models for Next Generation Platforms

Next generation platform architectures will require us to fundamentally rethink our programming models and environments due to a combination of factors including extreme parallelism, data locality issues, and resilience. As seen in the computational sciences community, asynchronous many-task (AMT) programming models and runtime systems are emerging as a leading new paradigm.

While there are some overarching similarities between existing AMT systems, the community lacks consistent 1) terminology to describe runtime components, 2) application- and component-level interfaces, and 3) requirements for the lower level runtime and system software stacks.

This panel will engage a group of community experts in a lively discussion on status and ideas to establish best practices in light of requirements such as performance portability, scalability, resilience, and interoperability. Additionally, we will consider the challenges of user-adoption, with a focus on the issue of productivity, which is critical given the application code rewrite required to adopt this approach.
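For readers new to the asynchronous many-task style discussed above, the minimal sketch below (Python, with a plain thread pool standing in for an AMT runtime; all names are illustrative and no panelist's runtime is implied) shows the core idea: work is expressed as many small tasks whose data dependencies form a graph, and the runtime schedules each task once its inputs are ready.

from concurrent.futures import ThreadPoolExecutor

def load(i):                 # leaf task: produce a chunk of data
    return list(range(i * 4, i * 4 + 4))

def transform(chunk):        # independent per-chunk work
    return [x * x for x in chunk]

def reduce_all(chunks):      # task that depends on every transform task
    return sum(sum(c) for c in chunks)

with ThreadPoolExecutor() as pool:
    # Submit many small tasks; futures express the dependency graph.
    loads = [pool.submit(load, i) for i in range(4)]
    transforms = [pool.submit(transform, f.result()) for f in loads]
    total = reduce_all([f.result() for f in transforms])

print(total)   # 0^2 + 1^2 + ... + 15^2 = 1240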

Moderator/Panelist Details:
  • Robert Clay (Moderator) - Sandia National Laboratories
  • Alex Aiken - Stanford University
  • Martin Berzins - University of Utah
  • Matthew Bettencourt - Sandia National Laboratories
  • Laxmikant Kale - University of Illinois at Urbana-Champaign
  • Timothy Mattson - Intel Corporation
  • Lawrence Rauchwerger - Texas A&M University
  • Vivek Sarkar - Rice University
  • Thomas Sterling - Indiana University
  • Jeremiah Wilke - Sandia National Laboratories

 

Towards an Open Software Stack for Exascale Computing

The panel will discuss what an open software stack should contain, what would make it feasible, and what does not look possible at the moment. The discussion is inspired by the fact that "this time, we have time" before the hardware actually reaches the market after 2020, so we can work on a software stack accordingly.

We will cover questions such as: What would the software development costs be? Which industries will migrate first? Would a killer app accelerate this process? Do we focus on algorithms to save power? How heterogeneous would/should “your” exascale system be? Is there a role for co-design towards exascale? Is the Square Kilometre Array (SKA) project an example to follow? Would cloud computing be possible for exascale? Who will “own” the exascale era?

Moderator/Panelist Details:
  • Nicolás Erdödy (Moderator) - Open Parallel Ltd.
  • Pete Beckman - Argonne National Laboratory
  • Chris Broekema - Netherlands Institute for Radio Astronomy
  • Jack Dongarra - University of Tennessee
  • John Gustafson - Ceranovo, Inc.
  • Thomas Sterling - Indiana University
  • Robert Wisniewski - Intel Corporation

 

Procuring Supercomputers: Best Practices and Lessons Learned

Procuring HPC systems is the challenging process of acquiring the most suitable machine under technical and financial constraints, aiming to maximize the benefits to users’ applications and minimize the risks during the machine’s lifetime.

In this panel, HPC leaders will discuss and debate key requirements and lessons learned for successful procurement of supercomputers.

How do we define the requirements of the system? Is the goal to maximize capacity and capability, to assess new/future technologies, to deliver a system designed for specific applications, or to provide an all-purpose solution for a broad range of applications? Is the system just a status symbol or must it do useful work?

This panel will give the audience an opportunity to ask questions of panelists who are involved in the procurement of leadership-class supercomputers, capturing lessons learned and turning that hindsight into best practices for procuring the most suitable HPC system.

Moderator/Panelist Details:
  • Bilel Hadri (Moderator) - King Abdullah University of Science and Technology
  • Katie Antypas - National Energy Research Scientific Computing Center
  • Bill Kramer - University of Illinois at Urbana-Champaign
  • Satoshi Matsuoka - Tokyo Institute of Technology
  • Greg Newby - Compute Canada
  • Owen Thomas - Red Oak Consulting

Monday, October 19, 2015

SC15 Invited Talk Spotlight: Revealing the Hidden Universe - Supercomputer Simulations of Black Hole Mergers

Numerical Relativity simulation of Binary Black Holes by Campanelli et al., 2006.
Supermassive black holes at the centers of galaxies power some of the most energetic phenomena in the Universe. Our understanding of these extremely powerful events and their observations has numerous exciting consequences for our understanding of galactic evolution, black hole demographics, plasmas in strong-field gravity, and general relativity.

Black holes not only have gravity so extreme that it generates observable gravitational radiation; the highly relativistic gas flows around them can also produce powerful electromagnetic signals as the gas is pulled in by extreme gravitational forces, and these flows are believed to be responsible for launching the powerful jets observed across the entire universe.

Simulation of Accretion Dynamics into a Binary Black Hole System by Scott Noble et al. 2014. The figure shows the rest mass density in the midplane of the black hole binary orbit. Insets are progressively zoomed. Black Dots are the Black Holes.
The mathematics involved in modeling these events is very sophisticated because one has to solve the equations of Einstein’s general relativity and magnetohydrodynamics simultaneously. The problem also requires very advanced supercomputers running programs on tens of thousands of CPUs at once, together with sophisticated techniques for data extraction and visualization. Petascale numerical simulation is therefore the only tool available to accurately model these systems.

This talk will review some of the new developments in the field of numerical relativity, and relativistic astrophysics that allow us to successfully simulate and visualize the innermost workings of these violent astrophysical phenomena. More images and material can be found in these links:
Speaker Background:
Dr. Manuela Campanelli
Dr. Manuela Campanelli is a professor of Mathematics and Astrophysics at the Rochester Institute of Technology. She is the director of the Center for Computational Relativity and Gravitation. Campanelli was the recipient of the Marie Curie Fellowship (1998), the American Physical Society Fellowship (2009) and the RIT Trustee Award (2014). She was also the Chair of the APS Topical Group in Gravitation in 2013.

Dr. Campanelli has extensive research experience in Einstein’s theory of General Relativity, the astrophysics of black holes, and gravitational waves. She is known for groundbreaking work on numerical simulations of binary black hole spacetimes and for explorations of physical effects such as “super kicks” and spin-driven orbital dynamics.

In 2005, she was the lead author of a work that produced a breakthrough in binary black hole simulations. In 2007, she discovered that supermassive black holes can be ejected from most galaxies at speeds of up to 4,000 km/s. Her current research focuses on computer simulations of merging supermassive black holes and on magnetohydrodynamics simulations of their accretion disk and jet dynamics, in connection with both gravitational-wave and electromagnetic observations. She also participates in the LIGO Scientific Collaboration. The search for gravitational waves from binary black holes and binary neutron stars moved forward in September with the first observing run of the upgraded Advanced LIGO gravitational-wave detectors.

Dr. Campanelli’s research record includes numerous publications, invited presentations, and review papers. One of her papers was recently highlighted by the APS as one of the landmark papers of the past century on general relativity, a collection beginning with a contribution from Einstein himself. Her work was highlighted by the American Physical Society’s Focus, New Scientist, Astronomy, and the Laser Interferometer Gravitational-Wave Observatory’s LIGO Magazine. More info can be found by clicking here.

SC15 Panel Line-Up for Nov. 18th

Supercomputing and Big Data: From Collision to Convergence

As data intensive science emerges, the need for high performance computing (HPC) to converge capacity and capabilities with Big Data becomes more apparent and urgent. Capacity requirements have stemmed from science data processing and the creation of large scale data products (e.g., earth observations, the Large Hadron Collider, the Square Kilometre Array) and simulation model output (e.g., flight mission plans, weather and climate models).

Capacity growth is further amplified by the need for more rapidly ingesting, analyzing, and visualizing voluminous data to improve understanding of known physical processes, discover new phenomena, and compare results.

  • How does HPC need to change in order to meet these Big Data needs?
  • What can HPC and Big Data communities learn from each other?
  • What impact will this have on conventional workflows, architectures, and tools?

An invited international panel of experts will examine these disruptive technologies and consider their long-term impacts and research directions.

Moderator/Panelist Details:
  • George O. Strawn (Moderator) - Networking and Information Technology Research and Development National Coordination Office
  • David Bader - Georgia Institute of Technology
  • Ian Foster - University of Chicago
  • Bruce Hendrickson - Sandia National Laboratories
  • Randy Bryant - Executive Office of the President, Office of Science and Technology Policy
  • George Biros - The University of Texas at Austin
  • Andrew W. Moore - Carnegie Mellon University


Mentoring Undergraduates Through Competition

SC14 Group Photo of the SCC Teams
The next generation of HPC talent will face significant challenges to create software ecosystems and optimally use the next generation of HPC systems. The rapid advances in HPC make it difficult for academic institutions to keep pace.

The Student Cluster Competition (SCC), now in its ninth year, was created to address this issue by immersing students in all aspects of HPC. This panel will examine the impact of the SCC on the students and schools that have participated.

Representatives from five institutions around the world will talk about their experiences with the SCC with regard to their students' career paths and its integration with curricula and academic HPC computing centers.

The panel will further discuss whether "extracurricular" activities, such as the SCC, provide sufficient return on investment and what activities could change or replace the competition to meet these goals more effectively.

Moderator/Panelist Details:
  • Brent Gorda (Moderator) - Intel Corporation
  • Jerry Chou - Tsinghua University
  • Rebecca Hartman-Baker - Lawrence Berkeley National Laboratory
  • Doug Smith - University of Colorado Boulder
  • Xuanhua Shi - Huazhong University of Science and Technology
  • Stephen Lien Harrell - Purdue University 

 

Programming Models for Parallel Architectures and Requirements for Pre-Exascale

Relying on domain scientists to provide the programmer intervention needed to develop applications for emerging exascale platforms is a real challenge. A scientist prefers to express the mathematics of the science, not to describe the parallelism of the implementing algorithms.

Do we expect too much of scientists in asking them to code for high parallel performance given the immense capabilities of the platform? This also ignores that a scientist may have a mandate to code for a new architecture and yet preserve portability in their code.

This panel will bring together experts in user experience, programming models, and architecture to discuss the pressing need to find a path forward for porting scientific codes to such platforms. We hope to discuss the evolving programming stack and application-level requirements, and to address the hierarchical nature of large systems in terms of different cores, memory levels, power consumption, and the pragmatic advances of near-term technology.

Moderator/Panelist Details:
  • Fernanda Foertter (Moderator) - Oak Ridge National Laboratory
  • Barbara Chapman - University of Houston
  • Steve Oberlin - NVIDIA Corporation
  • Satoshi Matsuoka - Tokyo Institute of Technology
  • Jack Wells - Oak Ridge National Laboratory
  • Si Hammond - Sandia National Laboratories

Friday, October 16, 2015

SC15 Releases Short Video Explaining Why High Performance Computing is Important to NASA

NASA Aerospace Engineer Dr. Shishir Pandya explains how high performance computers (HPCs) help advance airplane and rocket technologies to save fuel and make travel more affordable for the public. According to Pandya, it is also critical to keep pushing technology further in search of the next breakthrough.



This is another chapter in a series of short videos focusing on why HPC is important and of critical value to society. It is part of a three-year SC "HPC Matters" campaign.  Click here to view the entire HPC Matters video library. Also, once again, there will be an HPC Matters Plenary during the conference.  This year's chosen speaker is Intel's Diane Bryant - recently named one of the top 51 most powerful women by Forbes. To view those details, click here.

Editorial note: Special thanks to NASA's Jill Dunbar for her assistance on this project.

Thursday, October 15, 2015

SC15 Invited Talk Spotlight: the U.S. Under Secretary for Science and Energy Lynn Orr Presents the 2015 Quadrennial Technology Review

Click here to download the Quadrennial Technology Review.
The United States is in the midst of an energy revolution. Over the last decade, the United States has slashed net petroleum imports, dramatically increased shale gas production, scaled up wind and solar power, and cut the growth in electricity consumption to nearly zero through widespread efficiency measures.

Technology is helping to drive this revolution, enabled by years to decades of research and development that underpin these advances in the energy system.
The Department of Energy’s 2015 Quadrennial Technology Review (QTR) examines the status of the science and technology that are the foundation of our energy system, together with the research, development, demonstration, and deployment opportunities to advance them.

Advancing clean power technologies will be just one of many topics addressed.
This analysis is particularly instructive in the run-up to the international climate negotiations taking place later this year at the 21st Conference of the Parties, as technological advancements will be crucial to achieving global greenhouse gas emissions reductions.

During his presentation, Under Secretary for Science and Energy Lynn Orr will provide an overview of the highlights of the QTR report and discuss examples of promising research and development opportunities that can help the nation achieve a low-carbon economy.

Speaker Background:
Dr. Franklin (Lynn) Orr
Dr. Franklin (Lynn) Orr was sworn in as the Under Secretary for Science and Energy in December 2014. As the Under Secretary, Dr. Orr is the principal advisor to the Secretary and Deputy Secretary on clean energy technologies and science and energy research initiatives.

In this role, he oversees DOE’s offices of Electricity Delivery and Energy Reliability, Energy Efficiency and Renewable Energy, Fossil Energy, Indian Energy Policy and Programs, Nuclear Energy, and Science.  In total, these programs steward the majority of DOE’s National Laboratories (13 of 17).

Prior to joining the Department of Energy, Dr. Orr was the Keleen and Carlton Beal Professor Emeritus in the Department of Energy Resources Engineering at Stanford University. He joined Stanford in 1985, serving in a number of roles including Dean of the School of Earth Sciences as well as the founding director of the Precourt Institute for Energy.

Dr. Orr holds a Ph.D. from the University of Minnesota and a B.S. from Stanford University, both in Chemical Engineering.

Wednesday, October 14, 2015

SC15 Invited Talk Spotlight: The U.S. National Strategic Computing Initiative as Explained by the White House's Randal Bryant and Tim Polk

Tim Polk and Randal Bryant will explain the current status of the National Strategic Computing Initiative and provide a look into its future impact as it relates to high performance computing.
U.S. President Obama signed an Executive Order creating the National Strategic Computing Initiative (NSCI) on July 29, 2015.

In the order, he directed agencies to establish and execute a coordinated Federal strategy in high-performance computing (HPC) research, development, and deployment. The NSCI is a whole-of-government effort to be executed in collaboration with industry and academia, to maximize the benefits of HPC for the United States.

The Federal Government is moving forward aggressively to realize that vision. This presentation will describe the NSCI, its current status, and some of its implications for HPC in the U.S. for the coming decade.

Click here to review the official Executive Order from President Obama and click here for the official fact sheet.

Speaker Backgrounds:

Randal E. Bryant
Executive Office of the President, Office of Science and Technology Policy
The United States of America
Randal Bryant has been on the computer science faculty at Carnegie Mellon University for over 30 years, serving as Dean of the School of Computer Science from 2004 to 2014. Since 2014, he has also been at the White House Office of Science and Technology Policy, where he serves as Assistant Director for IT R&D.

William (Tim) Polk    
Executive Office of the President, Office of Science and Technology Policy
The United States of America 
Tim Polk joined the National Institute of Standards and Technology in 1982, where he has concentrated on Internet security since 1987. In 2013, he joined the Office of Science and Technology Policy, where high performance computing complements his duties as Assistant Director for Cybersecurity.

Register for SC15 by Oct. 15th and Save up to $275!

SC15 is the 27th year of the SC Conference series – once again featuring an exceptional Technical Program, Industry and Research Exhibits, Education Program and many other activities.  SC15 is the one place where attendees can see tomorrow’s technology being used to solve world-class challenge problems today.

The SC15 early registration deadline is October 15th.  The fee structure for SC15 makes it even more advantageous to register early this year. Registering early can save you up to $275 off your Technical Program registration (depending on your registration category).  Also, registering by Oct. 15th can save you up to $350 off Tutorial registration. Register early for both and save up to $625!

Conference attendees, click here to register.

Exhibitor Booth Management, click here to register.

Tuesday, October 13, 2015

Berkeley Lab's Katherine Yelick Wins Ken Kennedy Award

UC Berkeley Professor to Receive ACM/IEEE-CS Kennedy Award at SC15 for Contributions to International Research Agenda


Katherine Yelick
ACM and IEEE Computer Society have named Katherine Yelick as the recipient of the 2015 ACM/IEEE Computer Society Ken Kennedy Award for innovative research contributions to parallel computing languages that have been used in both the research community and in production environments. She was also cited for her strategic leadership of the national research laboratories and for developing novel educational and mentoring tools. The award will be presented at SC15: the International Conference for High Performance Computing, Networking, Storage and Analysis, on November 17 in Austin, Texas.

Yelick’s work has improved the programmability of high performance computing (HPC) through innovations to parallel languages and runtime systems. Her contributions to compiler research and open source software were key to the success of a new parallel programming model known as partitioned global address space (PGAS), an important software innovation for developers facing the challenges of exascale computing. She developed new automatic performance tuning techniques and runtime systems that maximize performance across a wide variety of computer architectures.

Yelick has authored more than 170 technical papers and reports on parallel languages, compilers, algorithms, libraries, architecture and storage, and she has also trained more than 50 graduate students, postdocs and research undergraduate students. She has worked with interdisciplinary teams developing scientific applications ranging from simulations of chemistry, fusion, and blood flow in the heart to analysis problems in phylogenetics and genome assembly.

Yelick has been a Professor of Electrical Engineering and Computer Sciences at the University of California, Berkeley since 1991 and a Faculty Scientist at Lawrence Berkeley National Laboratory since 1996, where she is currently the Associate Laboratory Director for Computing Sciences. In that role, Yelick oversees computing research and facility directions for the Lab, including both the Energy Sciences Network (ESnet) and the National Energy Research Scientific Computing Center (NERSC), where she served as director for five years. Her leadership at the national level has significantly influenced international scientific research on behalf of the U.S. Department of Energy.

An ACM Fellow, Yelick was named the 2013 Athena Lecturer by the ACM Council on Women (ACM-W). She is a member of the National Academies Computer Science and Telecommunications Board and previously served on the California Council on Science and Technology and the University of California Science and Technology Council. Yelick participated in the National Research Council report, “The Future of Computer Performance: Game Over or Next Level?” on opportunities and challenges for sustaining growth in computing performance. 

ACM and the Computer Society co-sponsor the Kennedy Award, which was established in 2009 to recognize substantial contributions to programmability and productivity in computing and significant community service or mentoring contributions. It was named for the late Ken Kennedy, founder of Rice University’s computer science program and a world expert on high performance computing. The Kennedy Award carries a US $5,000 honorarium endowed by ACM, the IEEE Computer Society and the SC Conference Steering Committee.

About ACM
ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field’s challenges. ACM strengthens the computing profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

About IEEE Computer Society
IEEE Computer Society is one of the world’s leading computing membership organizations and a trusted information and career-development source for a global workforce of technology leaders, including professors, researchers, software engineers, IT professionals, employers and students. IEEE Computer Society provides high-quality, state-of-the-art information on an on-demand basis. The Computer Society provides a wide range of forums for top minds to come together, including technical conferences, publications, a comprehensive digital library, unique training webinars and professional training. IEEE is the world's largest professional association for advancement of technology and the Computer Society is the largest society within IEEE.

About SC15
SC15, sponsored by ACM and IEEE-CS, offers a complete technical education program and exhibition to showcase the many ways high performance computing, networking, storage and analysis lead to advances in scientific discovery, research, education and commerce. This premier international conference includes a globally attended technical program, workshops, tutorials, a world class exhibit area, demonstrations and opportunities for hands-on learning.

Thursday, October 8, 2015

SC15 Invited Talk Spotlight: Societal Impact of Earthquake Simulations at Extreme Scale by USC's Dr. Thomas H. Jordan

Simulation of a “wall-to-wall” rupture of the southern San Andreas Fault. The peak ground velocities during this magnitude-8 earthquake are shown in color. White lines are seismograms at selected points. Graphic by Geoff Ely, Southern California Earthquake Center (click on image to enlarge).
The highly nonlinear, multiscale dynamics of large earthquakes is a wicked physics problem that challenges HPC systems at extreme computational scales. This presentation will summarize how earthquake simulations at increasing levels of scale and sophistication have contributed to our understanding of seismic phenomena, focusing on the practical use of simulations to reduce seismic risk and enhance community resilience.

Milestones include the terascale simulations of large San Andreas earthquakes that culminated in the landmark 2008 ShakeOut planning exercise and the recent petascale simulations that have created the first physics-based seismic hazard models.

CyberShake seismic hazard map for the Los Angeles region, showing the 2-s spectral acceleration response (in units of surface gravity) at an exceedance probability of 2% in 50 years. To create this map, over 300 million seismograms were computed at 336 sites for multiple realizations of all fault ruptures in Version 2 of the Uniform California Earthquake Rupture Forecast using the tomographic velocity model CVM-S4.26. This is the information engineers need to design seismically safe structures.
The latter show that accurate simulations can potentially reduce the total hazard uncertainty by about one-third relative to empirical models, which would lower the exceedance probabilities at high hazard levels by orders of magnitude.

Realizing this gain in forecasting accuracy will require enhanced computational capabilities, but it could have a broad impact on risk-reduction strategies, especially for critical facilities such as large dams, nuclear power plants, and energy transportation networks.
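For readers unfamiliar with the hazard metric quoted in the CyberShake caption above, the short calculation below shows how a "2% in 50 years" exceedance probability maps to an annual rate under the standard Poissonian assumption used in probabilistic seismic hazard analysis; this is a generic textbook relation, not a description of CyberShake's internals.

import math

def exceedance_probability(annual_rate, years):
    # Probability of at least one exceedance in `years` (Poisson assumption).
    return 1.0 - math.exp(-annual_rate * years)

def annual_rate_for(probability, years):
    # Annual rate implied by a target exceedance probability over `years`.
    return -math.log(1.0 - probability) / years

rate = annual_rate_for(0.02, 50)
print(f"annual rate ~ {rate:.6f}, return period ~ {1/rate:.0f} years")   # roughly a 1-in-2475-year level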

Click here for an animation of ground motions in the Los Angeles region excited by a magnitude-7.8 earthquake on the southern San Andreas Fault. Areas of high plastic strain and inelastic dissipation are shown in green. Credit: Dan Roten, San Diego Supercomputer Center.

Speaker Background:
Dr. Thomas H. Jordan
Dr. Thomas H. Jordan is a University Professor and the W. M. Keck Foundation Professor of Earth Sciences at the University of Southern California. His current research is focused on system-level models of earthquake processes, earthquake forecasting, continental structure and dynamics, and full-3D waveform tomography.

As the director of the Southern California Earthquake Center (SCEC), he coordinates an international research program in earthquake system science that involves over 1000 scientists at more than 70 universities and research organizations. He is an author of more than 230 scientific publications, including two popular textbooks.

Jordan received his Ph.D. from the California Institute of Technology in 1972 and taught at Princeton University and the Scripps Institution of Oceanography before joining the Massachusetts Institute of Technology in 1984. He was head of MIT’s Department of Earth, Atmospheric and Planetary Sciences from 1988 to 1998.

He has received the Macelwane and Lehmann Medals of the American Geophysical Union and the Woollard Award and President’s Medal of the Geological Society of America. He is a member of the National Academy of Sciences, the American Academy of Arts and Sciences, and the American Philosophical Society.

Wednesday, October 7, 2015

SC15 Panel Focus for Nov. 17th

Panel Title: Post Moore's Law Computing: Digital versus Neuromorphic versus Quantum

The end of Moore’s Law scaling has sparked research into preserving performance scaling through alternative computational models, igniting a debate about the future of computing. Currently, that future is expected to include a mix of quantum, neuromorphic, and digital computing. However, a range of questions remains unanswered for each option.

For example, which problems each approach is most efficient for remains to be determined, as do issues such as manufacturability, long-term potential, inherent drawbacks, programming, and many others. Can neuromorphic or quantum computing ever replace digital computing? Can we find alternative CMOS technologies and clever architectures to preserve digital computing performance scaling? What is the upper limit of CMOS?

This is a critical debate for a wide audience, because solving many of tomorrow’s problems requires a reasonable expectation of what tomorrow looks like.

Moderator/Panelist Details:
  • George Michelogiannakis (Moderator) - Lawrence Berkeley National Laboratory
  • John Shalf - Lawrence Berkeley National Laboratory
  • Bob Lucas - University of Southern California
  • Jun Sawada - IBM Corporation
  • Matthias Troyer - ETH Zurich
  • David Donofrio - Lawrence Berkeley National Laboratory
  • Shekhar Borkar - Intel Corporation

Panel Title: Future of Memory Technology for Exascale and Beyond III

Memory technology is in the midst of profound change as we move into the exascale era. Early analysis, including the DARPA UHPC Exascale Report, correctly identified the fundamental technology problem as one of enabling low-energy data movement throughout the system.

However, the end of Dennard Scaling and the corresponding impact on Moore’s Law has begun a fundamental transition in the relationship between the processor and memory system. The lag in the increase in the number of cores compared to what Moore’s Law would provide has proven a harbinger of the trend towards memory systems performance dominating compute capability.

Moderator/Panelist Details:
  • Richard Murphy (Moderator) - Micron Technology, Inc.
  • Shekhar Borkar - Intel Corporation
  • Bill Dally - NVIDIA Corporation
  • Wendy Elsasser - ARM Ltd.
  • Mike Ignatowski - Advanced Micro Devices, Inc.
  • Doug Joseph - IBM Corporation
  • Peter Kogge - University of Notre Dame
  • Steve Wallach - Micron Technology, Inc.

Tuesday, October 6, 2015

SC15 Releases Video Profiling the Importance of Supercomputing as Explained by IDC's Steve Conway

 
Click above to view the latest SC Conference video on the importance of HPC.

Steve Conway, Research Vice President in IDC's High Performance Computing group, provides a high-level overview of the importance of supercomputers and cites specific examples of where they are making a difference every day.

Mr. Conway plays a major role in directing and implementing HPC research related to the worldwide market for technical servers and supercomputers. A 25-year veteran of the HPC and IT industries, Mr. Conway authors key IDC studies, reports and white papers, helps organize and advance the HPC User Forum, and provides thought leadership and practical guidance for users, vendors and other members of the HPC community.

Before joining IDC, Mr. Conway was vice president of corporate communications and investor relations for Cray Inc. He was also a divisional leader for SGI and headed corporate communications and analyst relations for Cray Research and CompuServe Corporation. Mr. Conway had a 12-year career in university teaching and administration at Boston University and Harvard University.
 
Check back here for more videos on the value of high performance computing and the dynamic ways it is improving society, and mark your calendars for the HPC Matters Plenary on November 16 in Austin, Texas.