The new 50- and 100-core CPUs might actually slow down some programs? Here’s how.
"We think 'Oh, there's something I'm computing. I'll compute it once and put it into a variable'. Well, if you put it in a variable, and then a hundred processors access that variable to get that data, you've got a bottleneck. But if all one hundred of them computed it, no bottleneck," he said.
"Boy, is that foreign to my brain," he confessed.
We suggested that our generation might have to wait for our kids to become programmers before this new way of thinking became the new standard. "I hope not," he chuckled, noting that although today's programmers may have to learn a new mindset, they do have one great advantage over the next generation of code monkeys: experience.
Intel code guru: Many-core world requires radical rethink • The Register
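To make that concrete, here's a rough C++ sketch of the trade-off Breshears is describing (the helper function, thread count, and iteration counts are mine, not from the article). Version A computes a value once and makes every thread take a lock to read it; version B has each thread redo the computation itself. On a machine with lots of cores, B can come out ahead even though it does far more arithmetic, because there's nothing shared to fight over.

```cpp
// Rough sketch of "share one computed value" vs "let every thread recompute it".
// Build with: g++ -O2 -pthread contention_sketch.cpp
#include <algorithm>
#include <chrono>
#include <cmath>
#include <cstdio>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

// Stand-in for "something I'm computing" -- cheap enough to redo per thread.
double some_value() {
    double x = 0.0;
    for (int i = 1; i <= 64; ++i) x += std::sqrt(double(i));
    return x;
}

// Run `work` on every hardware thread and return elapsed milliseconds.
double run_on_all_cores(const std::function<double()>& work) {
    unsigned n = std::max(2u, std::thread::hardware_concurrency());
    std::vector<double> results(n);
    auto start = std::chrono::steady_clock::now();
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < n; ++t)
        pool.emplace_back([&, t] { results[t] = work(); });
    for (auto& th : pool) th.join();
    auto stop = std::chrono::steady_clock::now();
    double check = 0.0;
    for (double r : results) check += r;   // keep the work observable
    std::printf("(checksum %.1f) ", check);
    return std::chrono::duration<double, std::milli>(stop - start).count();
}

int main() {
    const int iters = 200000;

    // A: one shared, precomputed value; every reader funnels through one lock.
    double shared = some_value();
    std::mutex m;
    double a_ms = run_on_all_cores([&] {
        double sum = 0.0;
        for (int i = 0; i < iters; ++i) {
            std::lock_guard<std::mutex> lock(m);   // the shared-variable bottleneck
            sum += shared;
        }
        return sum;
    });
    std::printf("shared value behind a lock: %.1f ms\n", a_ms);

    // B: no sharing at all -- each thread recomputes the value every time.
    double b_ms = run_on_all_cores([&] {
        double sum = 0.0;
        for (int i = 0; i < iters; ++i) sum += some_value();
        return sum;
    });
    std::printf("recomputed per thread:      %.1f ms\n", b_ms);
}
```

Whether B actually wins depends on how cheap the recomputed value is and how many cores are hammering the lock, which is exactly the judgment call the article says programmers will now have to start making.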
The GPU-processing mindset is quite similar. Who would have thought that running applications on the GPU would dramatically increase the processing power of otherwise standard hardware? But it requires code specifically optimized for the GPU, which many programmers won't pick up easily.
Before you go optimizing for the GPU, though, keep an eye on FSA (AMD's Fusion System Architecture), a combined GPU+CPU architecture that is meant to be transparent to developers.
I wonder how fast email would come up with this one…
The XK6, announced on Tuesday, is made up of multiple supercomputer blade servers. Each blade includes up to four compute nodes containing AMD Opteron CPUs and Nvidia Tesla-architecture GPUs. It marks Cray's first attempt to blend dedicated GPUs and CPUs in a single high-performance computing (HPC) system.
Why does it look like they put in just enough blades to spell CRAY XK6?
The world’s most powerful computer currently runs at about 2.5 petaflops. The Cray beast is designed to scale to 50.