Thursday, March 26, 2015

SC's Jack Dongarra Talks Exascale and Beyond with VR World

Please note: Excerpt approved by VR World.  For the full article, click here.

Dr. Jack Dongarra of Oak Ridge National Laboratory and the University of Tennessee was the SC14 Technical Program Chair and is serving as the SC15 Test of Time Award Co-Chair.

Jack Dongarra, SC15 Test of Time Award Co-Chair
VR World: During your [recent] keynote you mentioned the ‘exascale challenge’. In your opinion, how do we get there from here? What has to happen?

Jack Dongarra: We can’t use today’s technology to build that exascale machine. It would cost too much money, and the power requirements would be far too high. It would take 30 Tianhe-2 clusters to get there. We have to have some way to reduce the power and keep the cost under control.
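A rough back-of-the-envelope sketch of the arithmetic behind that “30 Tianhe-2” figure, using Tianhe-2’s approximate Linpack performance (~33.9 petaflop/s) and reported power draw (~17.8 MW) as assumed inputs:

# Back-of-the-envelope: how many Tianhe-2-class systems would it take to
# reach one exaflop/s, and roughly how much power would that aggregate draw?
# The Tianhe-2 figures below are approximate and assumed for illustration.

tianhe2_linpack_pflops = 33.86   # sustained Linpack performance, PFlop/s (assumed)
tianhe2_power_mw = 17.8          # reported system power draw, MW (assumed)

exaflop_in_pflops = 1000.0       # one EFlop/s expressed in PFlop/s

systems_needed = exaflop_in_pflops / tianhe2_linpack_pflops
total_power_mw = systems_needed * tianhe2_power_mw

print(f"Tianhe-2-class systems needed: {systems_needed:.0f}")   # roughly 30
print(f"Aggregate power draw: {total_power_mw:.0f} MW")          # over 500 MW

At those numbers the power alone, hundreds of megawatts, makes the point: simply scaling out today’s technology is not a realistic path to exascale.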

Today, all of our machines are over-provisioned for floating-point. They have excess floating-point capability. The real issues are related to data movement; it’s about bandwidth. For example, you have a chip, and this chip has increasing computing capability: you put more cores on it. Those cores need data, and the data has to come in from the sides. You’ve got area that’s increasing due to the computing capability, but the perimeter is not increasing to compensate for it. The number of pins limits the data that can go in. That’s the crisis we have.
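A toy sketch of that area-versus-perimeter squeeze: for an idealized square die, compute capability grows roughly with area while the pins that feed it data grow only with the perimeter, so the bandwidth available per unit of compute shrinks as the chip grows. The numbers here are made up purely to show the trend:

# Toy model: compute scales with die area (side**2), off-chip pin bandwidth
# scales with the perimeter (4 * side). The ratio of bandwidth to compute
# therefore falls as the die (and core count) grows. Purely illustrative.

for side in (1, 2, 4, 8):                 # relative die side length
    compute = side ** 2                   # cores / flops scale with area
    pin_bandwidth = 4 * side              # pins (and bandwidth) scale with perimeter
    ratio = pin_bandwidth / compute       # bandwidth available per unit of compute
    print(f"side={side}: compute={compute:3d}, pins={pin_bandwidth:2d}, "
          f"bandwidth per compute={ratio:.2f}")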

That has to change. One way it changes is by doing stacking. 3D stacking is a technology that we have at our disposal now. That will allow much more information flow in a way that makes a lot more sense in terms of increasing bandwidth. We have a mechanism for doing that, so we get increased bandwidth. That bandwidth is going to help reduce [power draw] as we don’t have to move data into the chip.

The other thing that’s going to happen is that photonics is going to take over. The data is going to move not over copper lines but over optical paths. The optical paths reduce the amount of power necessary. So that’s a way to enhance the data movement, and to reduce the power consumption of these processors. The chip gets much more affordable, and we can have a chance at turning that computing capability into realized performance — which is a key thing.

In the US, I think we’ll reach exascale in 2022. 2022 is the point where the money will be in place; it’s really a question of money. We could build a machine today, but it would be too expensive. The current thinking is that it will be realizable around 2020, and the US is going to be able to deploy the machine in 2022. The money won’t be in place until then, but the technology will be ready ahead of time.

VRW: What’s your take on vendors’ 3D stacking efforts so far?

JD: It’s great. It has to happen. It’s gotta be that way. It’s a natural way to move. It’s going to be the key thing in terms of performance enhancement in the next few years, and in being able to employ it effectively as a device. Things look very positive.

VRW: Over the last few years we’ve witnessed China becoming a rising CPU player, with its domestic Alpha- and MIPS-based CPUs. Do you have a feeling that conventional CPU vendors have overcomplicated things for themselves?

JD: China has an indigenous processor which may or may not come out and be deployed in a high-performance machine. There are some rumors that the next big machine will be based on the ShenWei CPU. I can understand the motivation for China wanting its own processor; they don’t want to be dependent on Western technology for these things. There are some issues here. It’s not going to be an x86 architecture, so software will have to be rewritten for this machine. Software is a big deal on these systems, but that can be overcome.

When China does deploy this on a wide scale, Intel will stand up and take notice. It will be a big thing: now China will be in a position to use their own product and not Intel’s. That becomes a big issue.

End of excerpt. For the full article, click here.