Giz Explains: Snow Leopard’s Grand Central Dispatch

You’ve probably heard about this snow kitty operating system for Macintosh computers. What you might not’ve heard is exactly how it’s supposed to be unleashing the power of all those processor cores crammed inside your computer.

The heart of the matter is that the trick to actually utilizing the full power of multiple processors—or multiple cores within a processor, like the Core 2 Duo you’ve probably got in your computer if you bought it in the last two years—is processing things in parallel. That is, doing lots of stuff side by side. After all, you’ve got 2, maybe 4 or even 8 processors at your disposal, so to use them as efficiently as possible, you want to pull a problem apart and throw a piece of it at each core, or at least send different problems to different cores. Sounds logical, right? Easy, even.

The rub is that writing software that can actually take advantage of all of that parallel processing at an application level isn’t easy, and without software built for it, all that power is wasted. In fact, cracking the nut of parallel processing is one of the major movements in tech right now, since parallelism, while it’s been around forever, has mostly been the domain of solving really big problems, not running Excel sheets on your laptop. It’s why, for instance, former Intel chair Craig Barrett told me at CES that Intel hires more software engineers than hardware engineers—to push the software paradigm shift that’s gotta happen.

A big part of the reason parallel programming is hard for programmers to wrestle with is simply that most of them have never spent any time thinking about parallelism, says James Reinders, Intel’s Chief Software Evangelist, who’s spent decades working with parallel processing. In the single-core world, more speed primarily came from a faster clock speed—all muscle. Multi-core is a different approach. Typically, the way a developer takes advantage of parallelism is by breaking their application down into threads, sub-tasks within a process that run simultaneously, or in parallel. And processes are just instances of an application—the things you can see running on your machine by firing up the Task Manager in Windows, or Activity Monitor in OS X. On a multi-core system, different threads can be handled by different processors, so multiple threads can run at once. An app can run a lot faster if it’s written to be multi-threaded.
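
To make that concrete, here’s a bare-bones sketch in C (using POSIX threads, which is just one of several ways to do this) of a single process splitting a big sum across four threads. The function and variable names here are made up for illustration:

```c
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4
#define N 1000000L

static long long partial[NUM_THREADS];

// Each thread runs this at the same time as the others,
// summing its own quarter of the range.
static void *worker(void *arg) {
    long id = (long)arg;
    long start = id * (N / NUM_THREADS);
    long end   = start + (N / NUM_THREADS);
    long long sum = 0;
    for (long i = start; i < end; i++)
        sum += i;
    partial[id] = sum;
    return NULL;
}

int main(void) {
    pthread_t threads[NUM_THREADS];

    // One process, four threads: on a quad-core machine the OS can put
    // each thread on its own core, so the four quarters run side by side.
    for (long i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, worker, (void *)i);

    long long total = 0;
    for (int i = 0; i < NUM_THREADS; i++) {
        pthread_join(threads[i], NULL);
        total += partial[i];
    }
    printf("sum of 0..%ld = %lld\n", N - 1, total);
    return 0;
}
```

On a quad-core box the OS is free to run each of those threads on its own core, which is the whole point.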

One of the reasons parallel programming is tricky is that some kinds of work are really hard to do in parallel—they have to be done sequentially. That is, one step in the program depends on the result of a previous step, so you can’t really run those steps side by side. And developers tend to run into problems like race conditions, where two threads try to do something with the same piece of data at the same time and the order of events gets screwed up, resulting in corrupted data or a crash.
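
Here’s what that looks like in miniature: a sketch in C (POSIX threads again, every name invented for illustration) where two threads hammer the same counter with no coordination, so increments silently get lost:

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;  // shared data with nothing protecting it

// Both threads run this at the same time.
static void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;  // read-modify-write: the two threads can interleave here
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    // Should be 2000000, but lost updates usually leave it short.
    printf("counter = %ld\n", counter);
    return 0;
}
```

Run it a few times and you’ll likely get a different (wrong) number each time, which is exactly what makes these bugs so miserable to track down.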

Snow Leopard’s Grand Central Dispatch promises to take a lot of the headache out of parallel programming by managing everything at the OS level, using a system of blocks and queues, so developers don’t even have to thread their apps in the traditional way. In the GCD system, a developer tags self-contained units of work as blocks, which are placed in a GCD queue and scheduled for execution. Queues are how GCD manages which tasks run in parallel and in what order, dispatching blocks to run whenever threads are free to run something.
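
In code, the idea looks roughly like this. It’s only a sketch against libdispatch’s plain-C function-pointer interface (Apple’s own examples usually use the terser ^{ } block syntax this paragraph is describing), and the work function is made up for illustration:

```c
#include <dispatch/dispatch.h>
#include <stdio.h>

// A self-contained unit of work: the moral equivalent of a GCD "block",
// expressed here as a plain function plus a context pointer.
static void do_work(void *context) {
    long item = (long)context;
    printf("finished work item %ld\n", item);
}

int main(void) {
    // A concurrent queue: GCD decides which threads run the work, and when.
    dispatch_queue_t queue =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();

    // Hand eight units of work to the queue; GCD schedules each one
    // onto a thread whenever a thread is free to run something.
    for (long i = 0; i < 8; i++)
        dispatch_group_async_f(group, queue, (void *)i, do_work);

    // Block until everything we submitted has run.
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    dispatch_release(group);
    return 0;
}
```

The developer never creates or manages a thread directly; they just describe the work and hand it over.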

Reinders says he’s “not convinced that parallel programming is harder, it’s just different.” Still, he’s a “big fan of what Apple’s doing with Grand Central Dispatch” because “they’ve made a very approachable, simple interface for developers to take advantage of the fact that Snow Leopard can run things in parallel and they’re encouraging apps to take advantage of that.”

How Snow Leopard handles parallelism with GCD is a little different from what Intel’s doing, however—you might recall Intel just picked up RapidMind, a company that specializes in optimizing applications for parallelism. The difference between the two, at a broad level, represents two of the major approaches to parallelism: task parallelism, like GCD, and data parallelism, like RapidMind. Reinders explained it like this: If you had a million newspapers you wanted to cut clips out of, GCD would treat cutting from each newspaper as a separate task, whereas RapidMind’s approach would treat it as one cutting operation to be executed over and over across the whole pile. For some applications, RapidMind’s approach will work better, and for some, GCD’s task-based approach will. In particular, Reinders says something like GCD works best when a developer can “figure out what the fairly separate things to do are and you don’t care where they run or in what order they run” within their app.
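
To make the distinction concrete (and this is just an illustration, not how RapidMind’s toolkit actually works), GCD itself also has a data-parallel-flavored call, dispatch_apply, which expresses “run this same cut on every newspaper” as a single operation and lets the runtime carve up the index range across cores. The clip_one function below is made up:

```c
#include <dispatch/dispatch.h>
#include <stdio.h>

#define PAPERS 1000

// Hypothetical per-item work: "cut the clipping out of newspaper i".
static void clip_one(void *context, size_t i) {
    (void)i;
    int *clipped = context;
    // ... imagine the actual scissor work on newspaper i here ...
    __sync_fetch_and_add(clipped, 1);  // atomic: iterations run concurrently
}

int main(void) {
    int clipped = 0;
    dispatch_queue_t queue =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    // One operation applied across the whole pile; GCD splits the index
    // range among the cores and returns once every iteration is done.
    dispatch_apply_f(PAPERS, queue, &clipped, clip_one);

    printf("clipped %d of %d papers\n", clipped, PAPERS);
    return 0;
}
```

Same hardware, same end result, but you’re describing one repeated operation over a data set rather than a bag of separate tasks.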

It’s also a bit different from Windows’ approach to parallelism, which is app-oriented rather than managed at the OS level, so it essentially leaves everything up to the apps—apps have to manage their own threads and make sure they’re not eating all of your resources. For now, that isn’t much of a headache, but Reinders says there is a “valid concern on Windows that a mixture of parallel apps won’t cooperate with each other as much,” so you could wind up with a situation where, say, four apps each try to use all 16 cores in your machine, when you’d rather they split up, with one app using eight cores, another using four, and so on. GCD addresses that problem at the system level, so there’s more coordination between apps, which may make the whole machine slightly more responsive to the user, if it manages tasks correctly.

You might think that the whole parallelism thing is a bit overblown—I mean, who needs a multicore computer to run Microsoft Word, right? Well, even Word benefits from parallelism, Reinders told me. For instance, when you spool something off to the printer and it doesn’t freeze, like it used to back in the day. Or spelling and grammar checking running as you type—that’s a separate thread running in parallel. If it weren’t, it’d make for a miserable-ass typing experience, or you’d just have to wait until you were totally finished with a document. There’s also the general march of software, since we love to have more features all the time: Reinders says his computer might be 100x faster than it was 15 years ago, but applications don’t run 100x faster—they’ve got new features constantly added on to make them more powerful or nicer to use. Stuff like pretty graphics, animation and font scaling. In the future, exploiting multiple cores through parallelism might mean stuff like eyeball tracking, or actually good speech recognition.
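
That spell-check trick maps straight onto the queue idea from earlier. Here’s a toy sketch (plain-C libdispatch again, with the editor’s routines stood in by made-up functions) of handing the slow dictionary pass to a background queue so the typing thread never has to wait for it:

```c
#include <dispatch/dispatch.h>
#include <stdio.h>
#include <unistd.h>

// Hypothetical stand-in for an editor's slow dictionary pass.
static void check_spelling(void *context) {
    printf("spell-checked: \"%s\"\n", (const char *)context);
}

int main(void) {
    // The "typing" thread hands the slow work to a low-priority background
    // queue and returns immediately, so keystrokes are never blocked.
    dispatch_async_f(
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0),
        "parallelism", check_spelling);

    printf("still accepting keystrokes...\n");
    sleep(1);  // crude: give the background check a moment before the demo exits
    return 0;
}
```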

Reinders actually thinks that the opportunities for parallelism are limitless. “Not having an idea to use parallelism in some cases I sometimes refer to as a ‘lack of imagination,’” because someone simply hasn’t thought of it, the same way people back in the day thought computers for home use would be glorified electronic cookbooks—they lacked the imagination to predict things like the web. But as programmers move into parallelism, Reinders has “great expectations” they’re going to imagine things the rest of us haven’t, so we could see some amazing things come out of parallelism. But whether that’s next week or five years from now, well, we’ll see.

[Back to our Complete Guide to Snow Leopard]

Still something you wanna know? Send questions about parallel processing, parallel lines or parallel universes to tips@gizmodo.com, with “Giz Explains” in the subject line.

Grand Central Terminal main concourse image from Wikimedia Commons

IBM brings the ruckus — and new Power7 processor

IBM likes its servers and supercomputers. A lot. After giving the Power6 plenty of self-congratulatory publicity, Big Blue is ready to move on to the 7th generation of Power, which is set to be announced at the Hot Chips conference this evening. With eight cores, each running up to four threads (SMT4), the 45nm Power7 can perform 32 simultaneous tasks per chip. The designers have slapped a whopping 32MB of eDRAM into each chip for improved latency, added dual DDR3 memory controllers for a sustained 100GB-per-second bandwidth, and even thrown in error-correcting code and memory mirroring for redundancy. Sounds like a major boon for research into the brains of mice and the history of dirty words, but we don’t expect to hear much about this proc outside the server farm.

IBM brings the ruckus — and new Power7 processor originally appeared on Engadget on Wed, 26 Aug 2009 10:45:00 EST.

IBM studying ‘DNA origami’ to build next-gen microchips, paralyze world with fear

IBM is already making a beeline to 28nm process technology, but it looks like the train may deviate a bit before it even reaches the bottom. Reportedly, the company responsible for PowerPC, the original business laptop and all sorts of underground things that we’ll never comprehend is now looking to use DNA as a model for crafting the world’s next great processor. DNA origami, as it’s so tactfully called, can supposedly provide a cheap framework “on which to build tiny microchips,” with IBM research manager Spike Narayan proclaiming that this is “the first demonstration of using biological molecules to help with processing in the semiconductor industry.” Sir Spike also noted that “if the DNA origami process scales to production-level, manufacturers could trade hundreds of millions of dollars in complex tools for less than a million dollars of polymers, DNA solutions, and heating implements.” The actual process still seems murky from here, but we’re told to expect real results within ten years. Which should be just in time for the robot apocalypse to really hit its stride — awesome.

[Via HotHardware]

IBM studying ‘DNA origami’ to build next-gen microchips, paralyze world with fear originally appeared on Engadget on Mon, 17 Aug 2009 10:51:00 EST.

Intel still won’t talk Core i5 details, but you can order one anyway

It’s been a long, strange road for the Core i5 series of processors, first announced way back in March not by Intel, but by a motherboard spec sheet. Since then we’ve seen rebranding talk, lots of grids of various colors, and a delay purely for selfish reasons. Intel still isn’t saying how much they’ll cost or when they’ll ship, but that’s okay, because retailers have answered the first question and given us reason to believe the answer to the second is “soon.” Two computer hardware sites confirm that the Core i5 570 will have a 2.66GHz clock speed and sport 8MB of cache, matching expectations for this new mass-market processor, and the prices (as low as $233) are a fair bit cheaper than a comparably spec’d but higher performing Core i7. Mind you, both of those retailers list the chip as being out of stock, but we’re sure if you’re so inclined they’d be happy to put you down for a pre-order.

[Via PC World]

Read – Core i5 570 at Fad Fusion
Read – Core i5 570 at Computer Connection

Intel still won’t talk Core i5 details, but you can order one anyway originally appeared on Engadget on Thu, 06 Aug 2009 10:29:00 EST.

Intel denies rumors that Z-series Atoms are headed for the grave

Intel wants you to know that the rumor that its Z-series Atom chips are headed for the “discontinued” pile is not true. A few days back, we heard that the chips — which were initially designated for MIDs but made their way into some netbooks — could no longer be ordered from Intel. A spokesperson for the company, however, speaking with Register Hardware, said that the rumors were “100 percent inaccurate.” We’ll just have to wait and see how this all pans out, but we’re still not feeling terribly positive about poor little MIDs’ odds.

Intel denies rumors that Z-series Atoms are headed for the grave originally appeared on Engadget on Mon, 03 Aug 2009 10:22:00 EST.

Samsung confirms a Tegra-based smartphone is in the works, all other details shrouded in mystery

NVIDIA’s Tegra chip has shown itself to be quite a gem, especially in the field of augmented reality zombie destruction. Looks like Samsung agrees with that sentiment, and has confirmed that it’s currently developing a smartphone with the powerful processor. That’s not a lot to go on, but knowing the capabilities of the CPU, we’re excited. It’s probably safe to assume an AMOLED touchscreen is a given, as well as a plethora of TouchWiz widgets, but whether or not the phone goes with Windows Mobile or Android is still a mystery. A recent rumor suggested one of the “top five” smartphone makers would be releasing a $199 GSM-based Tegra device by year’s end — no indication if these two reports are one and the same, but we’d love to see what Sammy has in store sooner rather than later.

Samsung confirms a Tegra-based smartphone is in the works, all other details shrouded in mystery originally appeared on Engadget on Fri, 24 Jul 2009 15:54:00 EST.

AMD parties hard after shipping 500 millionth x86 processor

Get on down with your bad self, Mr. Spaceman — AMD just shipped its 500 millionth x86 processor! Shortly after the company celebrated 40 years of hanging tough and doing its best to overtake Intel, the outfit has now revealed that a half billion x86 CPUs have left its facilities over the past two score years. We pinged Intel in order to find out just how that number stacked up, but all we were told is that the 500 million milestone was celebrated a while back down in Santa Clara. We’ll just chalk the vagueness up to Intel not wanting to spoil an otherwise raucous Silicon Valley shindig. Classy.

[Via HotHardware]

AMD parties hard after shipping 500 millionth x86 processor originally appeared on Engadget on Fri, 24 Jul 2009 02:18:00 EST.

Fujitsu’s sleek FUTRO S100 gets VIA Eden implant

By and large, thin clients are relatively boring. That said, they’re typically dead silent and plenty powerful to handle the most basic of tasks, and thanks to Fujitsu, this one’s even halfway easy on the eyes. The new FUTRO S100 was revealed today, complete with a 500MHz VIA Eden ULV processor that enables the entire system to suck down just 11 watts under full load. Other specs include 1GB of DDR2 memory, a pair of USB 2.0 connectors, VGA output, Ethernet, a VX800 media processor, Chrome9 HC3 graphics and internal CF-based storage support. There’s nary a mention of price, but it’s ready to ship today for those with the correct change.

[Via HotHardware]

Fujitsu’s sleek FUTRO S100 gets VIA Eden implant originally appeared on Engadget on Sat, 18 Jul 2009 03:49:00 EST.

AMD Phenom II TWKR Black Edition CPU up for auction, sure to fetch a bundle

Remember that AMD Phenom II X4 TWKR processor that we saw overclocked and reviewed just over a fortnight ago? Don’t you recall reading and wondering why you were even wasting your time given the scarcity of said chip? It took a while, but it seems the justification you’ve been searching for has finally arrived. AMD only manufactured a smattering of these chips in order for select media outlets to showcase the company’s potential, and somehow one has found its way onto eBay. Best of all, 100 percent of the proceeds will benefit a charity (Family Eldercare), so you can feel good about spending way, way too much on a slab of silicon. Tap the read link if you care to drive the price up even further (and you know you do).

[Thanks, Alex]

AMD Phenom II TWKR Black Edition CPU up for auction, sure to fetch a bundle originally appeared on Engadget on Wed, 15 Jul 2009 11:50:00 EST.

Leaked Intel roadmap specs upcoming Core i5 and i7 ‘Lynnfield’ CPUs

Looking for something to print out and put on your wall that demonstrates the full extent of your Intel dedication? PC Watch has some mighty high resolution charts of the company’s desktop and mobile CPU roadmaps, including a handful of chips that we haven’t seen before. On the Lynnfield / desktop side, there’s the Core i7 870 (2.93 to 3.6GHz) and 860 (up to 3.46GHz), due out in the second half of 2009, with the latter having a greater range in available clock speeds and a less power hungry, 82 watt version due out next year. Listed squarely in the Q3 2009 column is one of the first spec’d Core i5-branded chips we’ve seen, the 750 (up to 3.2GHz), which also boasts a more energy efficient iteration due out sometime in the first third of 2010. Looking to mobile, the three Core i7 Clarksfield processors that were recently rumored for September are also listed here for Q4 of this year as the 720QM, 820QM, and 920XM, and on the more value end of the charts, Intel’s Atom / Pineview series (N450 for mobile and D410 / D510 for desktop) is listed for release just after the stroke of 2010. There’s seriously a lot to digest here, so if reading over large multi-colored tables full of data is your idea of a fun time, hit up the read link for a veritable gold mine of delight.

[Via Electronista]

Leaked Intel roadmap specs upcoming Core i5 and i7 ‘Lynnfield’ CPUs originally appeared on Engadget on Wed, 15 Jul 2009 04:33:00 EST.