1,000 times faster than today’s fastest processor



sdowney717
September 15th, 2011, 03:37 AM
http://mashable.com/2011/09/09/breakthrough-the-secret-to-making-processors-1000-times-faster-video/?WT.mc_id=obnetwork

Seriously - they are serious that this is what we will get.

Using a special glue, they will be able to stack many layers onto chips, starting in 2013.

When can we get our hands on this breakthrough tech? IBM’s media relations representative Michael Corrado tells us, “By the end of 2013 it should go into production. It’ll show up on servers first, and then a year after that consumers might see it.”

SoFl W
September 15th, 2011, 03:52 AM
Great, all that processing power can do nothing while I wait for my internet connection to check my email.

ninjaaron
September 15th, 2011, 03:56 AM
This will be nice for the next time I try to encode a movie, which I've needed to do about twice in my life. This technology will probably be cheap by the time I need to do it again. :p

Lucradia
September 15th, 2011, 04:04 AM
Great, all that processing power can do nothing while I wait for my internet connection to check my email.

And even if you can afford those fibre internet things, it would be a hefty bill for us Wisconsin people, because we have no competition.

drawkcab
September 15th, 2011, 06:40 AM
Wow, flash might actually run well in linux and people might finally be able to play Crysis on the medium settings!

NightwishFan
September 15th, 2011, 07:32 AM
1,000x nothing is still nothing. The problem has nothing to do with Linux's task scheduler, which is actually quite awesome. An area that would really help is better graphics card support, or perhaps a more earnestly coded Flash plugin?

That being said I have had good results from my intel mobile gpu. I can play 1080p flash without a problem.

Oxwivi
September 15th, 2011, 10:56 AM
Hmm, I wonder if we can mash this tech with Intel's 3D transistors...

fatality_uk
September 15th, 2011, 11:43 AM
A PC with 100 CPUs & 100 GPUs stacked :) hmmm
Might be able to play Crysis at something better than 800x600 with no AA

blueturtl
September 15th, 2011, 12:03 PM
It will struggle to run whatever Microsoft has out by that time.
Everyone will need to have that much power just to write emails and play solitaire... ;)

forrestcupp
September 15th, 2011, 04:22 PM
Great, all that processing power can do nothing while I wait for my internet connection to check my email.
Lol. There will be thousands of people who think they need this and all they ever do is check emails and Facebook.


1,000x nothing is still nothing. The problem has nothing to do with Linux's task scheduler, which is actually quite awesome. An area that would really help is better graphics card support, or perhaps a more earnestly coded Flash plugin?
You'll start to see more graphics being handled by the CPU. There will come a time when separate GPUs are obsolete.

_d_
September 15th, 2011, 04:26 PM
Wow, flash might actually run well in linux and people might finally be able to play Crysis on the medium settings!

Flash runs extremely well on Linux, considering. I've been using the Flash 11 RC1 (64bit native), and I can play 1080p flash without any chop or audio glitches.

Also, the CPU isn't really a bottleneck for games...it's the GPU most of the time.


That being said I have had good results from my intel mobile gpu. I can play 1080p flash without a problem.

Pretty much the same here with my little ol' ATI Radeon HD4200: flawless Flash playback @ 1080p.

docbop
September 15th, 2011, 04:31 PM
For the typical user, the CPU is idling more than it is processing, so this isn't much use outside of workstations. The real problem is the computer being I/O bound; we need faster devices, ports, and buses.

3Miro
September 15th, 2011, 04:50 PM
This would be very useful for scientific computing (my job). Although, they will have to figure out something similar with respect to the RAM; 100 cores with small, slow RAM will not cut it.

For the regular desktop, multi-core isn't that useful past 2-3 cores. Very few apps can be effectively parallelized across so many CPUs, and people don't do that many things at once.
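
A rough way to see why: Amdahl's law says the best possible speedup is 1 / ((1 - p) + p/n), where p is the fraction of the work that can run in parallel and n is the number of cores. A minimal sketch in C++ (the 1000-core count and the parallel fractions are just illustrative assumptions, not anything from the article):

#include <cstdio>

// Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction of
// the work that runs in parallel and n is the number of cores.
double amdahl(double p, double n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    const double cores = 1000.0;                     // hypothetical stacked-chip core count
    const double fractions[] = {0.50, 0.95, 0.999};  // assumed parallel fractions
    for (double p : fractions)
        std::printf("parallel fraction %.3f -> %.1fx speedup on %.0f cores\n",
                    p, amdahl(p, cores), cores);
    return 0;
}

Even code that is 99.9% parallel tops out around 500x on 1000 cores, and a half-serial desktop app gets barely 2x.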

_d_
September 15th, 2011, 04:52 PM
This would be very useful for scientific computing (my job). Although, they will have to figure out something similar with respect to the RAM; 100 cores with small, slow RAM will not cut it.

For the regular desktop, multi-core isn't that useful past 2-3 cores. Very few apps can be effectively parallelized across so many CPUs, and people don't do that many things at once.

Would be really useful for those that do BOINC, or any other crunching based projects. :D

aaaantoine
September 15th, 2011, 05:08 PM
Will it also be 1000 times more expensive? ;)

At the very least I'm concerned this will only create more heat / require more energy in the system. Indeed, it makes sense for a clustered server replacement, where you can combine multiple processors into a more compact and efficient package. My concern is that, if you want to apply the same technology to a laptop or desktop, it will also make an excellent replacement for a space heater.

snip3r8
September 15th, 2011, 05:18 PM
Wow, flash might actually run well in linux and people might finally be able to play Crysis on the medium settings!
No offence, you have been living under a large rock.

_d_
September 15th, 2011, 05:23 PM
At the very least I'm concerned this will only create more heat

I'm pretty sure they already figured that problem out, and most likely the glue they will use to fuse the layers together will have some sort of anti-heat properties to help negate the possible higher heat output.

aaaantoine
September 15th, 2011, 05:49 PM
No, no. I understand they've figured out heat dissipation. But that heat has to go somewhere.

What can I say, I'm a fan of low TDP.

Npl
September 15th, 2011, 06:20 PM
This is a thing that will be interesting for servers and mobiles.
More transistors equals more heat, and heat is already the primary concern for CPUs.
This tech will help put more CPUs in a small space, while having to pair them with sophisticated liquid cooling systems.
And it will allow creating (unupgradeable) all-in-one modules with CPU/GPU + memory + IO, saving precious space and some valuable fractions of a watt compared to separate chips.

Don't see where desktops would benefit from it, maybe GPUs with coupled high-bandwidth memory or CPUs with huge L3 caches.

IBM only creates server chips btw, so I'm curious to see when this will trickle down to the mobile space.

DZ*
September 15th, 2011, 06:22 PM
This would be very useful for scientific computing (my job).

+1
Maybe then I'll be able to ditch C++ for R completely. Although I like C++, it gets in the way of getting science done. R is shorter to write but also way too slow. Sometimes each of my runs in R takes many minutes to complete while I need to collect 100K of these runs. I got a 64 CPU linux computer but each of those CPUs isn't all that fast.

3Miro
September 15th, 2011, 06:37 PM
+1
Maybe then I'll be able to ditch C++ for R completely. Although I like C++, it gets in the way of getting science done. R is shorter to write but also way too slow. Sometimes each of my runs in R takes many minutes to complete while I need to collect 100K of these runs. I got a 64 CPU linux computer but each of those CPUs isn't all that fast.

Well, in my area we prefer Matlab to R, but this is not the point that I want to make. C++ isn't going anywhere and here is why:


R is an interpreted language typically used through a command line interpreter.

Without getting technical, this means that the R interpreter itself is written in a compiled language.

With a faster computer, you should be able to get all of your current research done in R. However, regardless of how fast a computer may be, there are problems that are too difficult for it; even if R is good enough for the current problem, it will not be good enough for the next one. Things like R and Matlab are nice, but the hardest problems will always require C++ (or something similar to it).

sanderd17
September 15th, 2011, 06:48 PM
Will it really be 1000x faster? If I look at it, gluing them together seems like you create a lot of parallel processors, so your computer would only be 1000x faster if you have 1000 threads running.

I haven't looked into this technology, so I don't know anything about it yet, but I learned that 4GHz was about the maximum they could get out of a normal processor because of power and heating issues. That's why they started with dual core, quad core ... processors.

http://www.gotw.ca/images/CPU.png

As you see on this graph: the moment they started with multi-core processors, the power usage stabilised, but the number of transistors still grows exponentially, so if you have multiple threads running, the speed also goes up exponentially.

Edit: Oh damn, how can I make this any smaller?

DZ*
September 15th, 2011, 06:51 PM
With a faster computer, you should be able to get all of your current research done in R. However, regardless of how fast a computer may be, there are problems that are too difficult for it; even if R is good enough for the current problem, it will not be good enough for the next one. Things like R and Matlab are nice, but the hardest problems will always require C++ (or something similar to it).

I don't understand your point. Give me an example of a scientific algorithm that can be programmed in C++ but is "too difficult" to program in R. I do use C++ function calls from R on occasion but the reason for that is always speed, not algorithmic limitations of R.

Or are you talking about new problems that would once again be too slow for R?
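
(For what it's worth, that kind of call-out looks roughly like the sketch below - this assumes the Rcpp package, and the sum-of-squares function is just a made-up example of a hot loop moved into compiled code.)

#include <Rcpp.h>

// A hot inner loop moved from R into compiled C++. From R, compile and load it
// with Rcpp::sourceCpp("sum_sq.cpp"), then call sum_sq(x) like any R function.
// [[Rcpp::export]]
double sum_sq(Rcpp::NumericVector x) {
    double total = 0.0;
    for (int i = 0; i < x.size(); ++i) {
        total += x[i] * x[i];   // plain compiled loop, no interpreter overhead
    }
    return total;
}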

3Miro
September 15th, 2011, 07:40 PM
Or are you talking about new problems that would once again be too slow for R?

I am talking about speed only. The algorithms can be coded in any language (including brain****), but then there is the ease of coding and the speed of execution to consider. R and Matlab are MUCH easier than C++, but C++ will always be faster and when you have a large enough problem, C++ is the only option.

sanderd17
September 15th, 2011, 07:50 PM
I am talking about speed only. The algorithms can be coded in any language (including brain****),

If you can program the Dijkstra algorithm in brain****, then I'll buy you a huge pint of beer :D

disabledaccount
September 15th, 2011, 09:08 PM
Almost every existing modern CPU is way better than Intel products - and 1000x faster is nothing special - in fact it's rather a shame for Intel, as they are just dreaming of such performance while in fact only being able to release advertisements.
For example:
http://www.tilera.com/products/processors/TILE-Gx_Family

It's not fully their fault - they just got stuck in the past with the backward-compatible, crappy (like hell) x86 and the little-used x64. 30 years ago the MC68xxx family was 100 years ahead of x86, but it was a "military" CPU (expensive and low-clocked - resistant to radiation, etc.). ColdFire and PPC were several times more efficient at the same clock rate. MS won the everyday computing market with cheap, crappy Intel - the worst misery in IT/computing history.

who cares... :)

MasterNetra
September 15th, 2011, 09:31 PM
Well, maybe with the 1000x processors you could use the processors to lift some of the load off of the RAM as well as the GPU? Granted, not much of an improvement would probably be seen by casual/workstation users, but it would be nice for those who do resource-intensive tasks: video rendering, science stuff, gaming, etc.

disabledaccount
September 15th, 2011, 09:41 PM
Well, in fact it already happened, without the need to have 1000x more processing power and in fact without the need for anything more than a blitter: the Unlimited Detail Engine.
(It's not fully ready yet, but it clearly shows that vertex HW is obsolete.)

Inodoro Pereyra
September 15th, 2011, 10:55 PM
http://mashable.com/2011/09/09/breakthrough-the-secret-to-making-processors-1000-times-faster-video/?WT.mc_id=obnetwork

Seriously - they are serious that this is what we will get.

Using a special glue, they will be able to stack many layers onto chips, starting in 2013.

It's all BS.
Everybody knows the World will end in 2012...:lolflag:

lisati
September 15th, 2011, 11:04 PM
Lol. There will be thousands of people who think they need this and all they ever do is check emails and Facebook.


You'll start to see more graphics being handled by the CPU. There will come a time when separate GPUs are obsolete.
What? There are separate GPUs? When did that happen? :D

No offence, you have been living under a large rock.
Oh..... I wondered why I had a headache. :D

But seriously, no matter how fast our processors get or how cleverly the engineers manage to deal with any heat generated, I don't see the supply of "difficult" and "uncomputable" stuff running out in the immediate future.

earthpigg
September 16th, 2011, 03:57 AM
I.... don't really think hardware matters all that much any more.

Well, I'll take that back -- it does. Especially in terms of form factor (touch screen? e-ink? traditional desktop paradigm? other options?) and battery life.

But for the vast majority of users, for the vast majority of the time they are in front of their computer, the hardware only matters in that the overpowered machines being used generate higher electricity bills than necessary.

I have an i7 quad desktop, a present to myself on my return to civilian life. I wish I'd purchased an i5 duo, or less. Even under-clocked at 1.6ghz, this thing heats my room up to be the warmest in the house during the summer as it idles at 50c (not that unusual (http://www.pugetsystems.com/blog/2009/02/26/intel-core-i7-temperatures/), as it turns out).

Moore's law may or may not be broken, but it isn't particularly relevant to me any more. A gamer? Maybe. A coder? Sure, if he remembers to get all them thar cores working when he compiles.

akand074
September 16th, 2011, 04:28 AM
Even under-clocked at 1.6ghz, this thing heats my room up to be the warmest in the house during the summer as it idles at 50c (not that unusual (http://www.pugetsystems.com/blog/2009/02/26/intel-core-i7-temperatures/), as it turns out).

My 6-core i7 idles around 26C. On normal usage I've never seen it go past like 39C. Though, it could be the 32nm lithography on this chip, and also the fact that 5 of the 6 cores are idle most of the time.

Get that 1.6GHz dual-core i5 now and it'll probably be the death of you. You don't actually realize how fast your computer is until you use something slower regularly. I use my laptop with a dual-core Turion and I am driven insane sometimes. I gave my mother a desktop with an old Core 2 Duo clocked at 2.3GHz and she said she couldn't even bear using her computer at work anymore (one she's been using for years without any particular complaining about performance). It's once you have a direct comparison with another machine you've used regularly that you're actually able to tell how well your system has been performing. People tend to take it for granted otherwise. I find my desktop slow sometimes, but once I use my laptop or other machines I feel so glad I have my desktop to go back home to.

NightwishFan
September 16th, 2011, 05:41 AM
I am not a huge fan of powerful hardware. I would prefer something efficient (power usage and work per clock cycle) over something with raw power.

sanderd17
September 16th, 2011, 10:45 AM
I don't see the supply of "difficult" and "uncomputable" stuff running out in the immediate future.

As mathematicians have proven: Almost all problems are uncomputable.

But luckily for us, the computable problems are more interesting.

Grenage
September 16th, 2011, 10:56 AM
Computers are rather well optimised these days. I have an i5 at 4.6, but it doesn't sit at 4.6 all the time - that's what Turbo is for. Graphics cards are the same; they just sip power.

As for the article, what it's basically saying is "1000 times the speed, by using 1000 times the CPU bulk"?

Oxwivi
September 16th, 2011, 02:19 PM
But seriously, no matter how fast our processors get or how cleverly the engineers manage to deal with any heat generated, I don't see the supply of "difficult" and "uncomputable" stuff running out in the immediate future.
It doesn't matter whether any difficult or uncomputable stuff is around or not - it all boils down to how much processing power you can fit into a server farm.

pqwoerituytrueiwoq
September 16th, 2011, 03:09 PM
If this is being done by adding layers to a CPU, does this mean 1000x the cores?
If so, we will need the overdue software updates for multi-core usage.
Aside from file compression/file conversion, the most cores an app uses is 3 (games).
We need a software jump to keep up with the hardware as it is now.
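
A minimal sketch of what "using more cores" has to look like in the software, using C++11 std::thread (the array size and the summing workload are just placeholders):

#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    // Placeholder workload: sum a big array, split across hardware threads.
    std::vector<double> data(50000000, 1.0);

    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 4;  // fall back if the runtime can't tell us

    std::vector<double> partial(workers, 0.0);
    std::vector<std::thread> pool;
    const std::size_t chunk = data.size() / workers;

    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end = (w + 1 == workers) ? data.size() : begin + chunk;
        pool.emplace_back([&, begin, end, w] {
            // Each thread sums its own slice into its own slot (no locking needed).
            partial[w] = std::accumulate(data.begin() + begin, data.begin() + end, 0.0);
        });
    }
    for (auto& t : pool) t.join();

    double total = std::accumulate(partial.begin(), partial.end(), 0.0);
    std::cout << "sum = " << total << " using " << workers << " threads\n";
}

Unless the software is written (or a library/compiler arranges) to split the work like this, the extra cores just sit idle, which is the point above.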

DZ*
September 17th, 2011, 04:26 PM
R and Matlab are MUCH easier [...], but C++ will always be faster and when you have a large enough problem, C++ is the only option.

Maybe "to ditch" [C++] was not a good choice of words. OTOH maybe then I'll just wait a month or so for another 1000x processor speed increase :-)

koleoptero
September 17th, 2011, 06:14 PM
Great, I guess we'll need a 5 kW PSU too.

DZ*
September 17th, 2011, 06:42 PM
I forgot, I'll also need a brain upgrade.