Wednesday, September 30, 2015

SC15 Invited Talk Spotlight: The Power of Visual Analytics - Unlocking the Value of Big Data

Visual Exploration of Network Traffic for Host and Server Monitoring: The screenshot shows the hourly amount of network traffic for thousands of hosts in a large computer network over 24 hours. The nested circles represent the topological subnet hierarchy of the network. Each filled circle represents a whole subnet or, when zoomed in, a single host. Each circle consists of 24 segments, and each colored segment visualizes the number of bytes transferred in the corresponding hour.
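To make that encoding concrete, here is a minimal sketch (matplotlib with synthetic traffic data, not the visualization tool shown in the screenshot) that draws one such 24-segment circle for a single host or subnet:

```python
# Minimal sketch (illustrative only): one "host circle" whose 24 equal segments
# are colored by the number of bytes transferred in each hour of the day.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm, colors

hourly_bytes = np.random.lognormal(mean=12, sigma=1.5, size=24)  # synthetic traffic

norm = colors.Normalize(vmin=hourly_bytes.min(), vmax=hourly_bytes.max())
segment_colors = list(cm.viridis(norm(hourly_bytes)))

fig, ax = plt.subplots(figsize=(4, 4))
# 24 equal wedges: the angle encodes the hour, the color encodes the traffic volume.
ax.pie(np.ones(24), colors=segment_colors, startangle=90, counterclock=False,
       wedgeprops={"width": 0.4, "edgecolor": "white"})
ax.set_title("Bytes transferred per hour for one host/subnet")
plt.show()
```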
Never before has data been generated and collected at such high volumes as it is today. As the volumes of multidimensional data available to businesses, scientists, and the public increase, their effective use becomes more challenging. Visual analytics seeks to provide people with effective ways to understand and analyze large multidimensional data sets, while also enabling them to act upon their findings immediately.

Visualization of Large-Scale Distributed Network Attacks: This picture illustrates attacks from the Internet against computers located in a large computer network (brute-force SSH attacks). The background represents the network structure, with computer systems drawn as rectangles. External hosts are shown as colored circles on the outside. The splines represent the connections between attackers and computers within the network. This reveals a network scan (from the top) and a distributed attack (from the bottom) originating from hundreds of hosts working together in an attempt to break into specific computer systems.
It integrates the analytic capabilities of the computer and the abilities of the human analyst, allowing novel discoveries and empowering individuals to take control of the analytical process.

This talk presents the potential of visual analytics and discusses the role of automated versus interactive visual techniques in dealing with big data. A variety of application examples, ranging from news analysis and network security to supercomputer performance analysis, illustrate not only the exciting potential of visual analysis techniques but also their limitations.

Speaker background:       
Dr. Daniel A. Keim
Dr. Daniel A. Keim is professor and head of the Information Visualization and Data Analysis Research Group in the Computer Science Department of the University of Konstanz, Germany. He has been actively involved in data analysis and information visualization research for more than 20 years and developed a number of novel visual analysis techniques for very large data sets.

He has been program co-chair of the IEEE InfoVis and IEEE VAST as well as the ACM SIGKDD conference, and he is a member of the IEEE VAST and EuroVis steering committees. He is coordinator of the German Science Foundation funded Strategic Research Initiative "Scalable Visual Analytics" and has been scientific coordinator of the European Commission funded Coordination Action "Visual Analytics - Mastering the Information Age (VisMaster)".

Dr. Keim received his Ph.D. and habilitation degrees in computer science from the University of Munich. Before joining the University of Konstanz, Dr. Keim was associate professor at the University of Halle, Germany, and Senior Technology Consultant at AT&T Shannon Research Labs, NJ, USA.

Tuesday, September 29, 2015

Alexander Szalay Named Recipient of 2015 IEEE Computer Society Sidney Fernbach Award

Alexander Szalay
Alexander Szalay, a Bloomberg Distinguished Professor at the Johns Hopkins University, has been selected as the recipient of the 2015 IEEE Computer Society Sidney Fernbach Award. Szalay was recognized “for his outstanding contributions to the development of data-intensive computing systems and on the application of such systems in many scientific areas including astrophysics, turbulence, and genomics.”

Established in 1992 in memory of high-performance computing pioneer Sidney Fernbach, the Fernbach Award recognizes outstanding contributions in the application of high-performance computers using innovative approaches. The award consists of a certificate and a (U.S.) $2,000 honorarium. Szalay will be presented with the award on 17 November 2015 in Austin, Texas at the SC15 Conference.  To learn more about Sidney Fernbach, click here.

Szalay teaches in the Physics and Astronomy and Computer Science Departments at Johns Hopkins. He is the Director of the JHU Institute for Data Intensive Engineering and Science. Born and educated in Hungary, Szalay spent postdoctoral periods at UC Berkeley and the University of Chicago before accepting a faculty position at Johns Hopkins.

A cosmologist, Szalay works on the statistical measures of the spatial distribution of galaxies and galaxy formation. He wrote the first papers associating dark matter with relic particles from the Big Bang. Recently he has been working on problems related to large data sets in various areas of physics and astrophysics.

Szalay is the architect for the Science Archive of the Sloan Digital Sky Survey and was project director of the NSF-funded National Virtual Observatory, building an open system out of the various distributed astronomy collections across the US. Recently he has been focusing on the challenges of building public numerical laboratories from large supercomputer simulations.

He has built various data-intensive parallel computers, one of which won the Storage Challenge at SC08. His papers span theoretical cosmology, observational astronomy, spatial statistics, and computer science.

Szalay was elected to the Hungarian Academy of Sciences as a corresponding member in 1990. In 2003 he became a Fellow of the American Academy of Arts and Sciences. He received an Alexander von Humboldt Prize in Physical Sciences in 2004 and the Microsoft Jim Gray Award in 2007. In 2008, he became Doctor Honoris Causa of the Eötvös University.

About IEEE Computer Society
IEEE Computer Society is the world’s leading membership organization for professionals in all aspects of modern computing—from cloud to big data, security to mobile, robotics to software defined systems, wearables to IoT. A true community for technology leaders, the Computer Society provides resources to keep its members current in technology and moving forward in their professions—publications, a renowned digital library, a vast program of conferences, technical committees, and much more. The Computer Society also serves the profession at large through the establishment of standards, professional qualifications and certifications, training and education programs, events on leading-edge technologies, conference publishing, and a wealth of other services and programs focused on advancing the science and art of computing. More information on IEEE Computer Society can be found at www.computer.org.

Monday, September 28, 2015

SC15 Invited Talk Spotlight: Trends and Challenges in Computational Modeling of Giant Hydrocarbon Reservoirs

Multimodal visualization of giant oil and gas reservoir models.
Giant oil and gas reservoirs continue to play an important role in providing energy to the world. Today, state-of-the-art technologies are used to further explore and produce these reservoirs, since even a slight increase in recovery amounts to discovering a mid-size reservoir somewhere else.

Mathematical modeling and numerical simulation play a major role in managing and predicting the behavior of these systems using large supercomputers. With the aid of evolving measurement technologies, a vast amount of geoscience, fluid, and dynamic data is now being collected.

Consequently, more and more high resolution, high fidelity numerical models are being constructed. However, certain challenges still remain in model construction and simulating the dynamic behavior of these reservoirs.
The benefits of fine-scale simulation are improved accuracy and a higher rate of oil recovery.
Challenges include determining rock property variation between wells, accurately locating faults, and effectively simulating multi-component, multi-phase transient flow in fractures, complex wells, and the rock matrix.

Computational challenges include effective parallelization of the simulator algorithms, cost-effective large-scale sparse linear solvers, discretization, multi-scale physics, complex well shapes, fractures, software engineering that keeps pace with rapidly evolving supercomputer architectures, and effective visualization of very large data sets.
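As a rough illustration of what the sparse-solver challenge means in practice, the toy sketch below solves a hypothetical 1-D implicit pressure step with SciPy; it is not Saudi Aramco's simulator, but it shows the kind of kernel that must scale from the 100,000 cells used here to billion-cell models:

```python
# Toy sketch of the sparse linear solve at the core of an implicit reservoir-simulation
# time step (illustrative only; hypothetical 1-D single-phase pressure equation).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100_000                                  # grid cells; real models reach a billion-plus
dt, dx, compressibility, mobility = 1.0, 1.0, 1e-3, 1.0

# Implicit discretization: (c/dt) * p_new - mobility * d2(p_new)/dx2 = (c/dt) * p_old
main = np.full(n, compressibility / dt + 2.0 * mobility / dx**2)
off = np.full(n - 1, -mobility / dx**2)
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")

p_old = np.full(n, 300.0)                    # previous-step pressure field (arbitrary units)
p_old[:100] = 250.0                          # pressure drawdown near a producing well
rhs = (compressibility / dt) * p_old

# Krylov (conjugate gradient) solve: this kernel dominates the cost of each time step
# and must be parallelized efficiently on large supercomputers.
p_new, info = spla.cg(A, rhs)
print("converged" if info == 0 else f"info={info}",
      "| pressure range:", p_new.min(), p_new.max())
```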

This presentation will cover examples of giant reservoir models with a billion-plus elements, model calibration to historical data, challenges, current status, and future trends in computational reservoir modeling.

Speaker background:
Dr. Ali H. Dogru
Dr. Ali H. Dogru is a Saudi Aramco Fellow and Chief Technologist of Computational Modeling Technology. Before joining Saudi Aramco in 1996, he worked for Core Labs Inc. from 1979 to 1982 and then Mobil R&D from 1982 to 1996.

His academic experience includes the University of Texas at Austin; the Norwegian Institute of Technology; the California Institute of Technology; the University of Texas; and Istanbul Technical University. He is a visiting scientist in Earth Sciences at MIT. He holds a PhD from The University of Texas.

He holds 12 U.S. patents and is a recipient of SPE's John Franklin Carll Award, SPE's Reservoir Description and Dynamics Award, and World Oil's Innovative Thinker Award. He has published extensively.

Sunday, September 27, 2015

Getting to Know the Student Cluster Competition Teams

From left, the Technische Universität München team is: Michael Zellner, Gregor Matl, Felix Thimm, Daniel Gallenberger, Felix Spaeth, and Sharru Moeller.
The following is an interesting look at one of the nine Student Cluster Competition teams competing in Austin this November. Click here to learn more about some of the other teams.

For what reasons are you studying computer science?
  • The joy of building something new combined with an interest in computers since childhood.
  • Enjoyment in creating new things while using innovative computer systems.
  • Enjoyment in trying to solve problems and a general interest in innovative computer systems.
  • It really influences daily life and most of all, it is fascinating!
In what other events like this have you competed? Was that as a team or individually?  One student has previously competed in the German Federal Competition of Computer Science, which deals with algorithms in a rather theoretical way, and the International Olympiad in Informatics (IOI), which also requires implementing the algorithms.

Of those two competitions, he won the first and became a bronze medalist in the latter. He also competed in the International Collegiate Programming Contest (ICPC) at the regional level (as a team) and in other programming challenges such as the Google Code Jam (individually, advancing to the top 500).
  
What training routine are you following to prepare for this year's SCC15 competition?
We are meeting every other week to discuss our progress on the codes. Furthermore, we have the unique opportunity to access the LRZ SuperMIC cluster.

Describe your team's distributed application responsibilities based on experience and interest:
  • My primary role is the RepastHPC (Zombie) code because I have experience with OpenGL and Paraview.
  • I am responsible for Weather Research and Forecasting (WRF) and post-processing, because I have a lot of experience with ParaView and Blender. My responsibility for WRF stems from my interest in climate simulations, which never stop challenging me.
  • I'm mainly focusing on Trinity and LINPACK. I am also handling system administration tasks, with a special focus on tuning the overall system for maximum efficiency.
  • My focus is on the MILC and the Trinity applications mainly because I'm interested in those algorithms.
  • I am concentrating on the WRF and MILC codes.  

What have been some challenges preparing for the competition?
  • Getting all of WRF's additional libraries to compile cleanly with the Intel compiler has proven to be a challenge. We're making progress, though.
  • Compiling the libraries for the RepastHPC code with the Intel compiler for the Xeon Phi cluster.
  • Developing the post-processing for the WRF code was quite challenging. We don't want to give too much away yet, but this could give us an advantage in the competition.
  • So far the most challenging assignment is getting the programs to run on the LRZ SuperMIC.
  • Installing all the Trinity plugins on the SuperMIC has been the hardest part so far, but we now know those plugins' structures and dependencies inside and out. 

Describe what has been relatively easy so far:
  • The easiest part was getting the Zombie code running on my laptop. It just worked as described in the ReadMe.
  • I must admit that getting WRF to work was easy, as there is very detailed documentation online.
  • Running the MILC application really was a piece of cake. It just works out of the box.

What architectural details can you share that will put fear into the hearts of your competitors?
Our system will have multiple Xeon Phi coprocessor cards per node, which will allow us to crunch huge amounts of data with high efficiency, especially with respect to power consumption.

What do you like about mysteries? How are you preparing for the mystery application?
It is the unknown that people fear most, and that is what makes the mystery application the real challenge. Not knowing what preparation is necessary, but believing that we can do it, is important. It gives us a chance to demonstrate our knowledge of our machine as well as the flexibility of our team.

We are preparing for the mystery application by working through applications from previous years' Student Cluster Competitions. We also try to find other popular codes with interesting properties or system requirements through HPC-related news and conferences. By doing this, we gain more experience in working with unknown applications in general.

Why do you want to win? What are you willing to do to win?
Our advisors and sponsors expect us to do our best. We can't let them down! We are prepared for long hours in front of our available HPC systems to be able to utilize our final cluster setup as efficiently as possible.

What are you planning to do after the competition is over?
A few of us are going to stay in the United States a bit longer and get to know Texas! Among other places, the plan is to visit the Houston Space Center. And, besides school, there are many more competitions waiting for us!

Thursday, September 24, 2015

SC15 Invited Talk Spotlight: Reproducibility in High Performance Computing


The number of lines of code published in ACM Transactions on Mathematical Software, 1960–2012, on a log scale. The proportion of articles that published code remained roughly constant at about a third, with standard error of about 0.12, and the journal consistently published around thirty-five articles each year. Source: click here and click here.
Ensuring reliability and reproducibility in computational research raises unique challenges in the supercomputing context. Specialized architectures, extensive and customized software, and complex workflows all raise barriers to transparency, while established concepts such as Validation, Verification, and Uncertainty Quantification point ways forward.

Invited speaker Victoria Stodden co-edited this book.
The topic has attracted national attention: President Obama's July 29, 2015 Executive Order "Creating a National Strategic Computing Initiative" includes accessibility and workflow capture as objectives; an XSEDE14 workshop released a report, "Standing Together for Reproducibility in Large-Scale Computing"; on May 5, 2015 ACM Transactions on Mathematical Software released a "Replicated Computational Results Initiative"; and this conference hosts a new workshop, "Numerical Reproducibility at Exascale", to name but a few examples. In this context I will outline a research agenda to establish reproducibility and reliability as a cornerstone of scientific computing.



Speaker Background:
Dr. Victoria Stodden
Dr. Victoria Stodden is an associate professor in the School of Information Sciences at the University of Illinois at Urbana-Champaign, with a faculty appointment at the National Center for Supercomputing Applications. She completed both her PhD in statistics and her law degree at Stanford University.

Her research centers on the multifaceted problem of enabling reproducibility in computational science. This includes studying adequacy and robustness in replicated results, designing and implementing validation systems, developing standards of openness for data and code sharing, and resolving legal and policy barriers to disseminating reproducible research.

She is co-chair of the Advisory Committee for the National Science Foundation's Division of Advanced Cyberinfrastructure and a member of the NSF CISE directorate's Advisory Committee.


  

Wednesday, September 23, 2015

Mateo Valero Selected as Recipient of 2015 IEEE-CS Seymour Cray Computer Engineering Award

Mateo Valero
Mateo Valero, a professor in the Computer Architecture Department at UPC in Barcelona, has been named the recipient of the 2015 IEEE Computer Society Seymour Cray Computer Engineering Award.

Prof. Valero, Director of the Barcelona Supercomputing Center, Spain's national supercomputing center, was selected as the recipient of the award “in recognition of seminal contributions to vector, out-of-order, multithreaded, and VLIW architectures.”

The Seymour Cray Computer Engineering award is one of the IEEE Computer Society’s highest awards, and is presented in recognition of innovative contributions to high-performance computing systems that best exemplify the creative spirit demonstrated by Seymour Cray. The award consists of a crystal memento, a certificate, and a US$10,000 honorarium.

Prof. Valero’s research is in the area of computer architecture, with special emphasis on high-performance computers, including processor organization, memory hierarchy, interconnection networks, numerical algorithms, compilers, and performance evaluation and runtime-aware architecture for multicore.

He has published approximately 600 papers, has served in the organization of more than 300 international conferences, and has given more than 500 invited talks at conferences, universities, and companies. Prof. Valero has been an associate editor of IEEE Transactions on Parallel and Distributed Systems, IEEE Micro, IEEE Computer Architecture Letters, Parallel Programming Languages, and the editor of several special issues of IEEE Transactions on Computers and Computer magazine.

Prof. Valero has been honored with several prestigious awards, including the IEEE Computer Society/ACM Eckert-Mauchly Award, the IEEE Computer Society Harry Goode Award, the ACM Distinguished Service Award, the Euro-Par Achievement Award, the King Jaime I Award in basic research, and two Spanish National Awards—the Julio Rey Pastor Award, in recognition of research on informatics and/or mathematics, and the Leonardo Torres Quevedo Award for engineering.

He has been named Honorary Doctor by the Spanish Universities of Las Palmas de Gran Canaria, Zaragoza, Complutense de Madrid, and Cantabria, and by Chalmers University of Technology in Sweden, the University of Belgrade in Serbia, and the University of Veracruz in Mexico. He is a Hall of Fame member of the IST European Program (selected as one of the 25 most influential European researchers in IT from 1983–2008, in Lyon, November 2008).

In 1994, Prof. Valero became a founding member of the Royal Spanish Academy of Engineering. In 2005, he was elected Correspondent Academic of the Spanish Royal Academy of Science; in 2006, member of the Royal Spanish Academy of Doctors; in 2008, member of the Academia Europaea; and in 2012, Correspondent Academic of the Mexican Academy of Sciences. Prof. Valero is a Fellow of the IEEE, a Fellow of the ACM, and an Intel Distinguished Research Fellow.

In 1998, Prof. Valero won a “Favorite Son” Award in his home town, Alfamen (Zaragoza), and in 2006, his native town named their public college after him.

Seymour Cray
Previous Seymour Cray Award recipients include Gordon Bell, Ken Batcher, John Cocke, Glen Culler, William J. Dally, Monty Denneau, Alan Gara, John L. Hennessy, Peter Kogge, Kenichi Miura, Steven L. Scott, Charles Seitz, Burton J. Smith, Marc Snir, Steven Wallach, and Tadashi Watanabe.

The 2015 IEEE Computer Society Seymour Cray Computer Engineering Award will be presented at the SC15 Conference awards plenary session in Austin, Texas on November 17,  2015. For more information about IEEE Computer Society awards, click here.



About IEEE Computer Society
IEEE Computer Society is the world’s leading membership organization for professionals in all aspects of modern computing—from cloud to big data, security to mobile, robotics to software defined systems, wearables to IoT. A true community for technology leaders, the Computer Society provides resources to keep its members current in technology and moving forward in their professions—publications, a renowned digital library, a vast program of conferences, technical committees, and much more. The Computer Society also serves the profession at large through the establishment of standards, professional qualifications and certifications, training and education programs, events on leading-edge technologies, conference publishing, and a wealth of other services and programs focused on advancing the science and art of computing. More information on IEEE Computer Society can be found at www.computer.org.

Tuesday, September 22, 2015

Invited Talk Spotlight: Supercomputing, High-Dimensional Snapshots, and Low-Dimensional Models - A Game Changing Computational Technology for Design and Virtual Testing

Aerodynamic analysis of a complete Formula 1 configuration (click on any image to enlarge it).
During the last two decades, giant strides have been achieved in many aspects of computational engineering. Higher-fidelity mathematical models and faster numerical algorithms have been developed for an ever-increasing number of applications. Linux clusters are now ubiquitous, GPUs continue to shatter computing speed barriers, and Exascale machines will increase computational power by at least two orders of magnitude.

Coupled fluid-structure analysis of an F-16 Fighting Falcon configuration at high angle of attack.
More importantly, the potential of high-fidelity physics-based simulations for providing deeper understanding of complex systems and enhancing their performance has been recognized in almost every field of engineering. Yet, in many applications, high-fidelity numerical simulations remain so computationally intensive that they cannot be performed as often as needed and are carried out only in special circumstances rather than routinely.

High-dimensional solution snapshots of a street car flow problem computed for the purpose of constructing a parametric reduced-order model.
Consequently, the impact of supercomputing on time-critical operations such as engineering design, optimization, control, and test support has not yet fully materialized. To this end, this talk will argue that there is a pressing need for a game-changing computational technology that combines the power of supercomputing with the ability of low-dimensional computational models to perform in real time.

It will also present a candidate approach for such a technology that is based on projection-based nonlinear model reduction, and demonstrate its potential for parametric engineering problems using real-life examples from the naval, automotive, and aeronautics industries.
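As a rough illustration of the projection idea, the sketch below shows a generic POD (proper orthogonal decomposition) workflow with synthetic snapshot data. It is not the specific nonlinear model-reduction method presented in the talk, but it conveys how high-dimensional snapshots yield a low-dimensional basis onto which a full model can be projected:

```python
# Generic POD sketch of projection-based model reduction (illustrative only;
# synthetic snapshots, not the nonlinear ROM methodology of the talk).
import numpy as np

n, m, r = 20_000, 100, 10          # full state dimension, snapshots, reduced dimension
rng = np.random.default_rng(0)

# 1) High-dimensional solution snapshots from the expensive full-order model,
#    faked here as combinations of a few smooth "modes" plus noise.
x = np.linspace(0.0, 1.0, n)
modes = np.stack([np.sin((k + 1) * np.pi * x) for k in range(5)], axis=1)
snapshots = modes @ rng.normal(size=(5, m)) + 0.01 * rng.normal(size=(n, m))

# 2) POD basis V: the leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :r]                        # n x r basis, with r << n

# 3) Online phase: a new full state is approximated as x ≈ V q with q = V^T x,
#    so the expensive model can be replaced by r-dimensional reduced dynamics.
x_new = modes @ rng.normal(size=5)
q = V.T @ x_new                     # r reduced coordinates instead of n unknowns
rel_err = np.linalg.norm(x_new - V @ q) / np.linalg.norm(x_new)
print(f"reduced dimension: {r}, relative reconstruction error: {rel_err:.2e}")
```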

Speaker Background:
Invited speaker Charbel Farhat prior to his flight in a F/A-18 Hornet with the legendary Blue Angels.
Charbel Farhat is the Vivian Church Hoff Professor of Aircraft Structures, Chairman of the Department of Aeronautics and Astronautics, and Director of the Army High Performance Computing Research Center at Stanford University. He is a member of the National Academy of Engineering, a Fellow of AIAA, ASME, IACM, SIAM, and USACM, and a designated Highly Cited Author in Engineering by the ISI Web of Knowledge.

He was knighted by the Prime Minister of France in the Order of Academic Palms and awarded the Medal of Chevalier dans l’Ordre des Palmes Academiques. He is also the recipient of many other professional and academic distinctions including the Lifetime Achievement Award from ASME, the Structures, Structural Dynamics and Materials Award from AIAA, the John von Neumann Medal from USACM, the Gauss-Newton Medal from IACM, the Gordon Bell Prize and Sidney Fernbach Award from IEEE, and the Modeling and Simulation Award from DoD.

Recently, he was selected by the US Navy as a Primary Key-Influencer, flown by the Blue Angels during Fleet Week 2014, and appointed to the Air Force Science Advisory Board.

 

Monday, September 21, 2015

SC15 Releases Video on How Berkeley Lab's Electrolyte Genome Project Could Be Battery Game-Changer

A new breakthrough battery—one that has significantly higher energy, lasts longer, and is cheaper and safer—will likely be impossible without a new material discovery. And a new material discovery could take years, if not decades, since trial and error has been the best available approach. But Lawrence Berkeley National Laboratory (Berkeley Lab) scientist Kristin Persson says she can take some of the guesswork out of the discovery process with her Electrolyte Genome.

Think of it as a Google-like database of molecules. A battery scientist looking for a new electrolyte would specify the desired parameters and properties, and the Electrolyte Genome would return a short list of promising candidate molecules, dramatically speeding up the discovery timeline.
Click here to watch the video.


“This is just one of several compelling videos that SC15 will be releasing over the coming weeks to help describe how high performance computing is helping to transform society and have a tremendous positive impact on everyday life,” said Jackie Kern, SC15 Conference Chair from the University of Illinois at Urbana-Champaign.

According to Kern, it is also part of a three-year “HPC Matters” campaign that will be a major focus at the SC15 conference in Austin this November. This includes a free plenary session led by Diane Bryant, one of Intel’s top executives and recently named to Fortune’s list of the 51 most powerful women.

Faster, Smarter, Better
Besides being faster and more efficient in screening out bad candidates, the Electrolyte Genome offers two other significant advantages to battery scientists. The first is that it could generate novel ideas. “While there are some amazing organic chemists out there, this allows us to be agnostic in how we search for novel ideas instead of relying purely on chemical intuition,” Persson said. “We can be surprised by what we find by combining experience with new, non-traditional ideas.”

The second advantage of the Electrolyte Genome is that it can add to scientists’ fundamental understanding of chemical interactions.

“It adds explanations to why certain things work or don’t work,” Persson said. “Frequently we rely on trial and error. If something doesn’t work, we throw it away and go to the next thing, but we don’t understand why it didn’t work. Having an explanation becomes very useful—we can apply the principles we’ve learned to future guesses. So the process becomes knowledge-driven rather than trial and error.”

How it Works – Funnel Method
The Electrolyte Genome uses the infrastructure of the Materials Project, a database of calculated properties of thousands of known materials, co-founded by Persson and Gerbrand Ceder. The researchers apply a funnel approach, doing a first screening of materials with a series of first-principles calculations for properties that can be computed quickly and robustly. This winnows down the candidate pool, which then undergoes a second screening for another property, and so on.
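The sketch below illustrates the funnel idea with hypothetical property names, values, and cutoffs; the real pipeline computes each property with first-principles calculations rather than the toy random numbers used here:

```python
# Minimal sketch of a screening funnel (hypothetical property names and cutoffs;
# not the actual Electrolyte Genome / Materials Project pipeline).
import random

def make_candidates(n):
    """Fake molecule records; in reality each property comes from a first-principles run."""
    rng = random.Random(0)
    return [
        {
            "name": f"molecule_{i}",
            "voltage_window_V": rng.uniform(2.0, 6.0),      # cheap to estimate
            "solvation_energy_eV": rng.uniform(-1.5, 0.5),  # moderately expensive
            "ion_pairing_score": rng.uniform(0.0, 1.0),     # most expensive screen
        }
        for i in range(n)
    ]

def funnel(candidates, stages):
    """Apply screens in order of increasing cost; only survivors reach the next stage."""
    pool = candidates
    for prop, keep in stages:
        pool = [m for m in pool if keep(m[prop])]
        print(f"after {prop}: {len(pool)} candidates remain")
    return pool

stages = [
    ("voltage_window_V",    lambda v: v > 4.5),   # wide electrochemical window
    ("solvation_energy_eV", lambda v: v < -0.5),  # dissolves the salt well
    ("ion_pairing_score",   lambda v: v < 0.2),   # avoids strong ion pairing
]

short_list = funnel(make_candidates(10_000), stages)
print([m["name"] for m in short_list[:5]])
```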

The concept was described in a recent essay in The Journal of Physical Chemistry Letters co-authored by Persson and her collaborators at Berkeley Lab and Argonne National Laboratory.
With a short list of candidate molecules, researchers can then perform more detailed computational evaluations, applying molecular dynamics simulations or other calculations as needed, for example to characterize the interactions of the different components.

The number of possible combinations is effectively infinite, since so many different salts can be combined with so many different solvents, and impurities also play a role. So Persson and her team work closely with experimentalists to guide their research. “Because the space is so vast, we typically don’t throw the whole kitchen sink at it because it would take forever,” she said. “We tend to take some base molecule or some idea, then we explore all the variations on that idea. That’s the way to attack it.”

The methodology has been validated with known electrolytes. Using the supercomputers at the Department of Energy’s National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab, the researchers can screen hundreds of molecules per day.

To date, more than 15,000 molecules for electrolytes—including 10,000 redox active molecules, hundreds of conductive network molecules, and salts, solvents, and more—have been calculated. Screening such quantities of molecules for suitable properties using traditional synthesis and testing techniques would take decades.
Early Success Stories

The Electrolyte Genome’s first major scientific finding—that magnesium electrolytes are very prone to forming ion pairs, which impacts several crucial aspects such as conductivity, charge transfer and stability of the electrolyte—was published in February in the Journal of the American Chemical Society.

They had another success screening molecules for redox capabilities for flow batteries for fellow Berkeley Lab scientist Brett Helms. “He basically gave us a chemical space of organogelator molecules and asked, ‘Can you tell me the best molecule if I want a voltage window that’s precisely here,’” Persson said. “We filtered down about a hundred candidates to one. It worked, and the molecule fit the intended purpose perfectly.”

The Electrolyte Genome is funded by the Joint Center for Energy Storage Research (JCESR), a Department of Energy multi-partner Energy Innovation Hub announced in 2012, led by Argonne National Laboratory and including Berkeley Lab. It is open source and will be made public by the end of JCESR’s five-year charter, at the latest, according to Persson.
This is the first in a series of SC15 videos that will be released leading up to the conference to help tell compelling and interesting stories of why HPC Matters.
 
About SC15
SC15, sponsored by ACM (Association for Computing Machinery) and IEEE Computer Society, offers a complete technical education program and exhibition to showcase the many ways high performance computing, networking, storage and analysis lead to advances in scientific discovery, research, education and commerce. This premier international conference includes a globally attended technical program, workshops, tutorials, a world-class exhibit area, demonstrations and opportunities for hands-on learning. For more information on SC15, please visit http://www.sc15.supercomputing.org/, or contact communications@info.supercomputing.org.
 
About ACM
ACM, the Association for Computing Machinery www.acm.org, is the world’s largest educational and scientific computing society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field’s challenges. ACM strengthens the computing profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

Special thanks to Julie Chao from Lawrence Berkeley National Laboratory for her assistance with this article and video.

Friday, September 18, 2015

SC15 Invited Talk Spotlight: Superscalar Programming Models - Making Applications Platform Agnostic

StarSs is a task-based programming model by Barcelona Supercomputing Center that has the potential to change the way applications are developed (click on image to enlarge).
Programming models play a key role in providing abstractions of the underlying architecture and systems to the application developer and in making the capabilities of the computing resources accessible through a suitable programming interface.

When considering complex systems with aspects such as large scale, distribution, heterogeneity, and variability, it becomes even more important to offer programming paradigms that simplify the programmer's life while still delivering competitive performance.

StarSs (Star Superscalar) is a family of task-based programming models built on the idea of writing sequential code that is executed in parallel at run time, with the runtime system taking the data dependencies between tasks into account.
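To give a flavor of the task-based idea, here is a minimal sketch using Python's standard concurrent.futures. It is not StarSs/OmpSs syntax (there, tasks are declared with code annotations and the runtime discovers the dependencies automatically), but it shows sequential-looking code whose independent tasks can run in parallel while dependent tasks wait for their input data:

```python
# Flavor of task-based execution with data dependencies, sketched with Python's
# standard library (StarSs/OmpSs express the same idea via task annotations and
# let the runtime detect the dependencies automatically).
from concurrent.futures import ThreadPoolExecutor

def load(block_id):          # task: produce a data block
    return list(range(block_id * 4, block_id * 4 + 4))

def process(block):          # task: depends on the output of one load() task
    return [x * x for x in block]

def reduce_blocks(blocks):   # task: depends on all processed blocks
    return sum(sum(b) for b in blocks)

with ThreadPoolExecutor() as pool:
    # The code reads sequentially. All load tasks are submitted up front and run
    # concurrently; each process task is submitted once its input block is ready;
    # the final reduction waits until every processed block is available.
    loaded    = [pool.submit(load, i) for i in range(8)]
    processed = [pool.submit(process, f.result()) for f in loaded]
    total     = reduce_blocks([f.result() for f in processed])

print("result:", total)
```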

The talk will describe the evolution of this programming model and the different challenges that have been addressed in order to support different underlying platforms, from heterogeneous platforms used in HPC to distributed environments such as federated clouds and mobile systems.

Speaker background:
Dr. Badia in the MareNostrum supercomputer site (BSC, Spain).
Rosa M. Badia holds a PhD in Computer Science (1994) from the Technical University of Catalonia (UPC). She is a Scientific Researcher at the Consejo Superior de Investigaciones Científicas (CSIC) and team leader of the Workflows and Distributed Computing research group at the Barcelona Supercomputing Center (BSC). She was involved in teaching and research activities at UPC from 1989 to 2008, where she was an Associate Professor from 1997, and from 1999 she was also involved with the European Center of Parallelism of Barcelona (CEPBA).

Her current research interests are programming models for complex platforms (from multicore and GPUs to Grid/Cloud). The group led by Dr. Badia has been developing the StarSs programming model for more than 10 years, with great success in adoption by application developers. Currently the group focuses its efforts on two instances of StarSs: OmpSs for heterogeneous platforms and COMPSs/PyCOMPSs for distributed computing, including the Cloud.

For this last case, the group has been working on interoperability through standards, for example using OCCI to enable COMPSs to interact with several Cloud providers at a time. Dr. Badia has published more than 150 papers in international conferences and journals on the topics of her research. She has participated in several European projects, for example BEinGRID, Brein, CoreGRID, OGF-Europe, SIENA, TEXT and VENUS-C; currently she is participating in the Severo Ochoa project (at the Spanish national level), ASCETIC, Euroserver, the Human Brain Project, EU-Brazil CloudConnect, and transPLANT, and she is a member of the HiPEAC2 NoE.

This is the second episode in a series providing you with details behind the SC15 Invited Talks Program.