Will 2011 mark the beginning of Manycore?


As we enter the second decade of the third millennium, I have been thinking about my previous talks on a “Manycore future.”  Single-core use has already ended on the desktop and looks to end soon on smartphones, while multi-core computing has grown, primarily in the desktop and server markets.  Based on those trends, I have predicted in the past that the logical future is “manycore.”  So let me back up a bit and apply some ad-hoc definitions:

1. Single-core computing: this is a general-purpose CPU in a desktop, laptop, or smartphone that is the bottleneck for all instructions that can’t be offloaded to special-purpose processors such as a graphics card.  This period more or less ended in 2005, when dual-core computers became common.

2. Multi-core computing: CPUs with 2 to 8 cores on a single socket have become commonplace on servers, desktops, and laptops, and multi-core will soon reach smartphones as well.  Certain special-purpose desktops such as graphics workstations, for example Mac Pro and Dell Precision workstations, have included multiple CPU sockets for decades, but the cost of a multi-socket motherboard kept these machines fairly rare (probably less than 1% of desktops sold).  With the advent of multi-core CPUs from both AMD and Intel, computer manufacturers such as Apple, Dell, and HP have been putting multi-core CPUs into commonplace consumer laptops and desktops for the past 5 years.

3. Manycore computing: I am defining manycore as 16+ cores per CPU.  What separates it from multi-core is the exponential doubling of core counts over the next few years: 32, 64, 128, 256, 512, 1K cores.  Based on recent history, where the doubling of CPU clock speed has been replaced by the doubling of core counts, the prediction is that a decade from now, in 2020, 1,000-core CPUs could be commonplace in desktops, laptops, and even smartphones….
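The prediction above is easy to check as a back-of-the-envelope projection.  A minimal sketch in Python, assuming a 2-core baseline in 2005 (that starting point is my own illustrative assumption, not a figure from any source):

```python
def cores_in_year(year, start_year=2005, start_cores=2, doubling_months=18):
    """Projected core count if cores double every `doubling_months`,
    the post's reading of Moore's Law after clock speeds stalled in 2005."""
    doublings = (year - start_year) * 12 / doubling_months
    return start_cores * 2 ** int(doublings)

# Project core counts at a few milestone years.
for year in (2011, 2014, 2017, 2020):
    print(year, cores_in_year(year))
```

Under those assumptions the curve crosses 1,000 cores right around the end of the decade, which is where the 2020 prediction comes from.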

Or will it?  My last couple of tweets outlined two events that strike me as very important:

http://twitter.com/#!/talbott/status/20867677597732864 – 1,000 Core CPU from University of Glasgow, Scotland

http://twitter.com/#!/talbott/status/19949772387385344 – 1,000 Core CPU discussion with research engineer Timothy Mattson from Intel

I think the first is important: it shows that 1,000-core CPUs are possible.  But I think the second tweet, from four days ago, is even more important.  Why?  Because Mattson outlines the fundamental blocks in the market that may prevent manycore from ever occurring.  Mattson and his team built a general-purpose 48-core CPU for research only.  In the article mentioned above, he states that they have essentially proved that 1,000-core CPUs are possible, but the job of research is to tell the rest of Intel what is possible; Intel then builds what consumers will buy.  And that is the clincher: will consumers buy a 1,000-core CPU?  If developers aren’t able to exploit the massive parallelism required to take advantage of all that processing power, then the market may level off at 16 cores on cost/benefit grounds.

So is it possible?  Yes, manycore is possible.  Is it probable?  After reading Mattson’s interview, maybe not.  My personal opinion is that the industry will continue to push the edge, and manycore will definitely happen, but maybe not in the way we expect.  I think it is likely that manycore will become common for specialty purposes rather than for general-purpose CPUs.  Perhaps manycore will surface the same way FPUs did in the 90’s, when general-purpose CPUs offloaded floating-point math to a coprocessor for a while, or the way most machines today have dedicated graphics processors to handle the high-resolution display and video requirements of common computing.

Maybe it is conceivable that in the future the main CPU will level out at 8 or 16 cores and a separate manycore chip with thousands of cores will handle the special types of processing that can exploit it.  CPU clock speed doubled with Moore’s Law until 2005, when it leveled out at 2 to 3 GHz; today you rarely see a CPU greater than 4 GHz.  But starting in 2005, the core count has been doubling with Moore’s Law (every 18 months).  The Intel/AMD general-purpose central CPU may end up plateauing somewhere between 8 and 32 cores and stop doubling due to Amdahl’s Law and other fundamental barriers to scaling general-purpose code across many cores.  Mattson responds to the question, can we make programming the chip acceptable to the general-purpose programmer?, as follows:

Mattson: “If the answers to these questions are yes, then we have a path to 1,000 cores. But that assumption leads to a larger and much more difficult series of questions. Chief among these is whether we have usage models and a set of applications that would demand that many cores. We have groups working on answers to that question.” from Timothy Mattson’s interview with Jack Clark on ZDNet
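The Amdahl’s Law barrier mentioned above is worth making concrete.  The law says that if a fraction s of a program is inherently serial, the speedup on n cores is 1 / (s + (1 − s)/n).  A minimal sketch in Python (the 5% serial fraction is my own illustrative number, not one from the article):

```python
def amdahl_speedup(serial_fraction, cores):
    """Amdahl's Law: max speedup on `cores` cores when `serial_fraction`
    of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# With just 5% serial work, the speedup can never exceed 1/0.05 = 20x,
# no matter how many cores you add.
for n in (8, 16, 64, 1024):
    print(n, round(amdahl_speedup(0.05, n), 1))
```

Going from 64 cores to 1,024 barely moves the needle in this example, which is exactly why a general-purpose CPU might plateau well below 1,000 cores.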

So to summarize: we can’t predict the future, but with this information from Intel Research, we can get a clearer picture of what it may look like.
