At SIGGRAPH in L.A., Watching the Future of Computing Unfold

I’m at the SIGGRAPH 2008 conference in Los Angeles this week. My group, Intel Software Network, has a lot of cool stuff going on around the recent paper that was published on the Larrabee architecture.

I just put up a post on the ISN blog about the history of SIGGRAPH and the ACM, in which I also wax a bit philosophical about Larrabee and the future of computing as we know it:

I can’t shake the feeling that the Larrabee architecture paper that was just published by the ACM, written mostly by Intel engineers, is one of those landmark events in computing. I’m really not trying to add to the hype that’s already surrounding Larrabee – there’s enough of that already. But it really is going to be a huge leap in computing. Imagine that in a couple of years, instead of having one, two, or maybe four cores, your computer could have a Larrabee card with 24 or 32 (I’m guessing – this number isn’t final) programmable x86 cores that can be set to any task that benefits from massive parallelism (like, say, making that 3D game you’re playing look REALLY pretty and smooth), along with 8 “bigger” traditional Nehalem (I mean, Core i7) processor cores that do the things your current single- or dual-core processor does. Oh, and with Hyper-Threading, each of those cores can run more than one thread, which makes them appear as even more “virtual” processors to the operating system and software that use them.

How in the world are operating systems, applications, and games going to have to change to deal with this massive shift to many cores and many threads?
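For a taste of what “dealing with many cores” might look like in software terms, here’s a minimal, purely illustrative sketch (the tile-shading function and the sizes are made up – this is not how Larrabee is actually programmed): instead of one serial loop, the work is split into independent chunks and fanned out across however many logical processors the machine reports.

```python
import os
from concurrent.futures import ProcessPoolExecutor

def shade_tile(tile):
    # Stand-in for real per-tile work (say, shading one strip of a frame).
    start, stop = tile
    return sum(i * i for i in range(start, stop))

if __name__ == "__main__":
    workers = os.cpu_count() or 1          # logical cores, incl. Hyper-Threading
    tiles = [(i * 1000, (i + 1) * 1000) for i in range(16)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(shade_tile, tiles))
    print(len(partials))                   # one partial result per tile
```

The point isn’t the arithmetic; it’s that the program is expressed as a pile of independent tasks, so it scales with core count instead of clock speed.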

I’ll be posting more, and helping to get some videos of the cool stuff here at SIGGRAPH posted quickly to ISN’s video site, Take Five, so keep an eye out over there for any cool stuff I come across.

It’s things like this that make me love my job! :-)


6 thoughts on “At SIGGRAPH in L.A., Watching the Future of Computing Unfold”

  2. Chris says:

    I agree that massive parallelism is the next great leap for computing. Think about the benefits of real-time rendering of CAT scans, CGI scenes, and CAD designs – all tasks that currently take either thousands of dollars in specialized equipment or many hours of rendering time (sometimes both).

    I don’t claim to understand the underlying hardware involved, but it seems to me that NVIDIA is already doing what Larrabee is talking about. Are 24-32 x86 processor cores going to be able to compete with the 480 cores in the latest Quadro release? Or the 200+ on the current desktop GPUs? My personal opinion? I hope so. I have always believed in the power of real competition for improving technology and, in turn, the common man’s experience. Of course, I am a bit motivated to hope that NVIDIA comes out on top, but I say, “Bring on the competition!”

    From the software viewpoint, which I understand well, I think that the real success will need to be in the API that we software engineers are given for using these GPUs. I don’t want to have to know the intimate details of how all these linked cores do their work. I just want to be able to assign them a bundle of computations and get the results back in an easily integrated manner.
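    Something like today’s thread-pool interfaces, scaled way up, would be ideal: hand over a bundle, get results back. A rough sketch of what I mean (the kernel and the data here are made up, and this runs on plain CPU cores, not a GPU):

```python
from concurrent.futures import ProcessPoolExecutor

def kernel(point):
    # Stand-in for one computation in the bundle (e.g., transforming a vertex).
    x, y, z = point
    return (x * 2.0, y * 2.0, z * 2.0)

if __name__ == "__main__":
    bundle = [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0), (7.0, 8.0, 9.0)]
    with ProcessPoolExecutor() as pool:
        # The caller never touches core topology; the pool schedules the work.
        results = list(pool.map(kernel, bundle))
    print(results[0])  # (2.0, 4.0, 6.0)
```

    That’s the shape of API I want: describe the work, not the wiring.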

  3. Chris, you make some good points. I don’t understand all of the hardware, either, but one thing I’ve been trying to get my head around is the difference between what NVIDIA considers a core and what Intel considers a core. Or more specifically, what can one do that the other can’t, and vice versa? Do you have any good info you can point me to on this, so I can understand better? I know it’s all pretty new/future stuff, but it’s exciting, either way.

    And like you said, bring on the competition! 😉

  4. Chris says:

    Excellent question.

    I suspect that the Intel cores will be beefier, but if my CS 333 memory serves (which is spotty at best for us old fogies), each core can still only actually perform one calculation at a time. Context switching just makes it look like more than one happens simultaneously.

    I will certainly do some research with people who do know hardware to see what the functional differences are between x86 cores and GeForce cores. I will let you know anything I can share.

  5. Art Scott says:

    Yeah, 2008 is a turning point: the SIGGRAPH ’08 Larrabee paper, GDC ’08 multi-core .ppts, GameFest too, … and F#.
    The word is getting out: this ain’t gonna be your father’s PC.

    Seems the multi-core push (Larrabee) is driven by the CG-game-playing young male scions – testosterone poisoning; they want it all and they want it now.
    I guess Dad just wants faster SQL, and is willing to wait …

    When? Let the guessing games begin: 2009? 2010? 2011? Later?
    If I’m Intel, I’m hoping sooner – and I guess MS is too.
