Showing posts with label Jack Dongarra. Show all posts

Tuesday, September 1, 2015

Supercomputing Serengeti

The SC Perennials offer reflections on the annual migration

By Mike Bernhardt

How many of you can say you’ve attended any type of annual event 27 years in a row?  When it comes to the SC conferences, that claim to fame would only apply to 18 very hardy souls we like to refer to as the SC Perennials.  SC15 in Austin, Texas, will be the 28th annual migration for this group.  For more background on the SC Perennials, click here.

Jack Dongarra addresses the press at SC14.
Professor Jack Dongarra is one of the SC Perennials. Jack’s name is known to seasoned conference attendees and the community’s newest entrants alike.  His work on LINPACK, system performance benchmarks and analysis, the TOP500 list, and the SC conference itself, among numerous other contributions, has significantly raised awareness of the HPC community’s importance and has been a key factor in bringing an understanding and appreciation of supercomputing to much of the world’s media.

Jack Dongarra is a warm and approachable guy, and he wears his professorial title well.  He often says that attending the SC conference each year is like a family reunion – a homecoming – with the added benefit of connecting with so many old friends.  That reflection is shared by many of us.  Even though the watering hole location changes each year, the migration of the HPC community, like the great migration across the Serengeti, is an event we can all count on, and one we anticipate eagerly.

The SC conference has such a rich history and has evolved into one of the finest, most respected technical conferences in the world. Looking back, all of the SC Perennials remember that first keynote presentation with the legendary Seymour Cray, “What's this about Gallium Arsenide?”  While a few people recognized the historical significance of that first event in 1988, it never even dawned on many of us that we were participating in a gathering that would change the world of scientific computing so significantly.  Who could have known back then how the SC conference series would grow into the amazing technical program and networking event it is today?

Steve Wallach - HPC expert
One of the community’s undisputed HPC Rock Stars, Steve Wallach, fondly remembers asking Seymour a question about debugging the Cray-2, the start of many future conversations between the two. Wallach, by the way, is one of the honored recipients of the IEEE Computer Society’s Seymour Cray Award*.

I was a Marketing Communications Director with Multiflow Computer at that first event and had the awkward experience of literally bumping into Seymour as I was carrying an unmanageable armful of literature into the tiny exhibit hall area.  Papers went flying everywhere and the man himself stopped to help me pick them up.  I was both humbled and embarrassed, but by the time SC88 came to a close, I was truly inspired.  We didn’t use the term HPC back then.  It was all supercomputing.  And I knew from that time I had found my home in this community.

Several of the SC Perennials recall how manageable the conference’s size was in those early years and, interestingly, how few marketing types were in attendance.  I recall that too. I felt awkward telling people I was in marketing – a comment that usually drew rolled eyes and a brush-off.  I vowed to do what I could to bring some appreciation and respect to how the community perceived us marketing types.

Maxine Brown has a long SC history.
Maxine Brown, who currently serves as Director of the Electronic Visualization Laboratory at the University of Illinois at Chicago, recalls hosting the Visualization Theater at SC88.  Maxine and her colleagues collected videos that were published as part of SIGGRAPH’s Video Review (SVR), along with a 3D movie created by Digital Productions, a Hollywood computer animation company that owned a Cray X-MP.  In many ways, this was the infancy of 3D movies; in fact, there were no 3D projectors in Orlando, so her team had to rent one from Miami and have it brought in.

According to Maxine, “I started attending SC to showcase areas of research in which I was involved that were also relevant to the supercomputing community, from visualization in scientific computing to the CAVE virtual environment, to national and international high-performance networks. And, while that’s still an important part of why I attend, it is perhaps even more important that it is the only worldwide venue of which I am aware that attracts most of my national and international collaborators, friends and funding agency representatives, all in one place for an exhausting but productive and pleasurable week.”

We’ll have more reflections from the SC Perennials in subsequent blog posts, but we also invite you to post your comments and share with us some of your favorite SC memories – whether technical program lessons, colorful presentations, memorable keynotes, or, of course, those amazing special events and parties for which SC has become so well known.  Send memories or story ideas to: sc15communications@committee.supercomputing.org

The legendary Seymour Cray.
*The Seymour Cray Award is given by the IEEE Computer Society to recognize significant and innovative contributions in the field of high-performance computing. The award honors scientists who exhibit the creativity demonstrated by Seymour Cray, founder of Cray Research, Inc., and an early pioneer of supercomputing.

Wednesday, May 20, 2015

SC15 Selects Numerical Aerodynamic Simulation Program for the Test of Time Award

Austin, TX – May 20, 2015 – The Supercomputing Conference (SC15) Test of Time Award Committee has recognized “The NAS Parallel Benchmarks - Summary and Preliminary Results,” written by D. Bailey, E. Barszcz, J. Barton, D. Browning, R. Carter, L. Dagum, R. Fatoohi, P. Frederickson, T. Lasinski, R. Schreiber, H. Simon, V. Venkatakrishnan, and S. Weeratunga and presented at the SC91 conference, as this year’s SC15 Test of Time Award (ToTA) paper.

The ToTA recognizes an outstanding paper that has appeared at the SC conference and has deeply influenced the HPC discipline. It is a mark of historical impact, recognizing that the paper has changed HPC trends. The award will be presented at the SC15 conference in a non-plenary session, where the authors will be asked to give a presentation on the work in Austin, TX, in November 2015.  To view this specific paper, click here.

“The paper and benchmark capture specifications and implementations of an important set of representative scientific codes,” said Jack Dongarra, SC15 Test of Time Award Chair from the University of Tennessee, Knoxville.  “The work is still actively used, and has inspired numerous sets of benchmarking codes that continue to drive research and development innovation.”

In 1991, this team of computer scientists from the Numerical Aerodynamic Simulation Program—predecessor to the NASA Advanced Supercomputing (NAS) facility at Ames Research Center—unveiled the NAS Parallel Benchmarks (NPB), developed in response to the U.S. space agency’s increasing involvement with massively parallel architectures and the need for a more rational procedure for selecting supercomputers to support agency missions. At the time, existing benchmarks were usually specialized for vector computers, with shortfalls including tuning restrictions that impeded parallelism and insufficient problem sizes, making them inappropriate for highly parallel systems.

The NPBs mimic the computation and data movement of large-scale computational fluid dynamics (CFD) applications, and provide an objective evaluation of parallel HPC architectures. The original NPBs featured “pencil-and-paper” specifications, which bypassed many difficulties associated with standard benchmarking methods for sequential or vector systems. The principal focus was computational aerophysics, although most of these benchmarks have broader relevance for many real-world scientific computing applications.

The NPBs quickly became an industry standard and have since been implemented in many modern programming paradigms. Since 1991, research areas influenced by the NPBs have broadened to include network design, programming languages, compilers, and tools. Google Scholar yields over 27,000 results for the NPBs, with about 7,400 citations since 2014.

Today’s version is alive and well, and continues to significantly influence NASA projects. It is used around the world by national labs, universities, and computer vendors to evaluate sustained performance of highly-parallel supercomputers and the capability of parallel compilers and tools.

For more information on the award, click here.

Thursday, March 26, 2015

SC's Jack Dongarra Talks Exascale and Beyond with VR World

Please note: Excerpt approved by VR World.  For the full article, click here.

Dr. Jack Dongarra of Oak Ridge National Laboratory and the University of Tennessee was the SC14 Technical Program Chair and is serving as the SC15 Test of Time Award Co-Chair.

Jack Dongarra, SC15 Test of Time Award Co-Chair
VR World: During your [recent] keynote you mentioned the ‘exascale challenge’. In your opinion, how do we get there from here? What has to happen?

Jack Dongarra: We can’t use today’s technology to build that exascale machine. It would cost too much money, and the power requirements would be way too much. It would take 30 Tianhe-2 clusters in order to get there. We have to have some way to reduce the power and keep the cost under control.

Today, all of our machines are over-provisioned for floating-point. They have an excess floating-point capability. The real issues are related to data movement. It’s related to bandwidth. For example, you have a chip. And this chip has increasing computing capability — you put more cores on it. Those cores need data, and the data has to come in from the sides. You’ve got area that’s increasing due to the computing capability but the perimeter is not increasing to compensate for it. The number of pins limits the data that can go in. That’s the crisis we have.
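Dongarra’s area-versus-perimeter point can be made concrete with a back-of-the-envelope sketch (illustrative numbers only, not real chip specifications): if on-chip compute scales with die area while pin-limited bandwidth scales only with the perimeter, then bandwidth per unit of compute falls off as the square root of the scaling factor.

```python
import math

def bandwidth_per_compute(scale):
    """Relative off-chip bandwidth per unit of compute when die area
    (and thus core count) is scaled up by `scale`. Purely illustrative:
    compute ~ area, pins ~ perimeter."""
    area = scale                  # compute capability grows with area
    side = math.sqrt(scale)      # linear dimension grows as sqrt(area)
    perimeter = 4 * side         # pin count grows with the perimeter
    return perimeter / area      # bandwidth available per unit compute

base = bandwidth_per_compute(1)
for k in (1, 4, 16, 64):
    rel = bandwidth_per_compute(k) / base
    print(f"{k:>3}x compute -> {rel:.3f}x bandwidth per unit compute")
```

Scaling compute 64x leaves each unit of compute with only one-eighth of its original share of pin bandwidth, which is exactly the squeeze that 3D stacking and photonics are meant to relieve.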

That has to change. One way it changes is by doing stacking. 3D stacking is a technology that we have at our disposal now. That will allow much more information flow in a way that makes a lot more sense in terms of increasing bandwidth. We have a mechanism for doing that, so we get increased bandwidth. That bandwidth is going to help reduce [power draw] as we don’t have to move data into the chip.

The other thing that’s going to happen is that photonics is going to take over. The data is going to move not over copper lines but over optical paths. The optical paths reduce the amount of power necessary. So that’s a way to enhance the data movement, and to reduce the power consumption of these processors. The chip gets much more affordable, and we can have a chance at turning that computing capability into realized performance — which is a key thing.

In the US, I think we’ll reach exascale in 2022. 2022 is the point where the money will be in place, and it’s a question of money. We could build a machine today, but it would be too expensive. The current thinking is it will be realizable around 2020, and the US is going to be able to deploy the machine in 2022. The money won’t be in place until then, but the technology will be ready ahead of time.

VRW: What’s your take on vendors’ 3D stacking efforts so far?

JD: It’s great. It has to happen. It’s gotta be that way. It’s a natural way to move. It’s going to be the key thing in terms of performance enhancement in the next few years, and being able to effectively employ that as a device. Things look very positive.

VRW: Over the last few years we’ve witnessed China becoming a rising CPU player, with its domestic Alpha- and MIPS-based CPUs. Do you have a feeling that conventional CPU vendors have overcomplicated things for themselves?

JD: China has an indigenous processor which may or may not come out and be deployed in a high performance machine. There are some rumors that the next big machine would be based on the ShenWei CPU. I can understand the motivation for China wanting its own processor; they don’t want to be dependent on Western technology for these things. There are some issues here. It’s not going to be an x86 architecture, so software will have to be rewritten for this machine. Software is a big deal on these systems, but that can be overcome.

When China does deploy this at wide scale, Intel will stand up and take notice. It will be a big thing: China will be in a position to use its own product and not Intel’s. That becomes a big issue.

End of excerpt. For the full article, click here.

Tuesday, February 17, 2015

SC15 Announces New Test of Time Award Details

The SC Test of Time Award (ToTA) Committee is soliciting nominations for this year’s Test-of-Time Award to be given at the SC15 conference in November in Austin, Texas.  The ToTA recognizes an outstanding paper that has deeply influenced the HPC discipline. It is a mark of historical impact and recognition that the paper has changed HPC trends.

The award is also an incentive for researchers and students to send their best work to the SC conference and a tool to understand why and how results last in the HPC discipline.  Papers that appeared in the SC conference series are considered for this award.  A paper must be at least 10 years old, from 1988 to 2005, inclusive; papers from 18 years of conferences are eligible.

IMPORTANT DETAILS:

Which papers are eligible for the 2015 SC ToTA?
Only papers published at SC conferences between 1988 and 2005, inclusive. The complete SC proceedings can be found by clicking here.

Who can nominate a paper?
Anyone can nominate a paper except the author or co-author of the nominated paper.

Is there a limit on the number of papers one person can nominate?
Yes. One can nominate up to five papers from among all eligible papers. These five papers can be from any eligible year.

When is the last day to submit a nomination?
Nominations must be received by March 31, 2015.

How should a nomination be submitted?
 Nominations must be submitted by clicking here.

Selection process 

The SC ToTA committee will evaluate all submitted nominations and eligible SC papers to select the SC15 Test of Time Award paper.  The selected paper receives an award citation and a $1,000 prize. Papers authored by ToTA committee members are not eligible for this award.

Who are the current SC15 ToTA committee members?
  • Jack Dongarra, co-chair
  • Mary Hall, co-chair
  • David Abramson
  • Bill Gropp
  • Michael Heroux
  • Liz Jessup
  • Rusty Lusk
  • Leonid Oliker
  • Padma Raghavan
  • Yves Robert
  • Valerie Taylor
  • Mateo Valero

When will the award be announced and given?
The award will be presented to the authors of the selected paper at the SC15 conference in November in Austin, TX. The authors will be asked to give a presentation in a non-plenary session (typically 45 minutes).

What does the award consist of?
An award certificate, $1,000, and peer recognition.

What is the deadline?
We invite anyone to submit a nomination by the deadline of March 31, 2015. Please submit your nomination by clicking here.