Intel’s Lynnfield processors now officially official, benchmarked

Sure, Taiwan’s been enjoying these chips for almost a month at this point, but it’s taken until now for Intel to go official with its announcement of the “Lynnfield” processors, the Core i5-750 and Core i7-870. If the early reviews are to be believed, both chips are dominant in their performance and price range, although there are some notable caveats for the tech savvy to take heed of. If you’re in need of the finer details, hit up the read links below for the skinny.

Read – HotHardware
Read – PC Perspective
Read – Tech Report
Read – TweakTown
Read – Official Intel Press release


Intel’s Lynnfield processors now officially official, benchmarked originally appeared on Engadget on Tue, 08 Sep 2009 00:11:00 EST. Please see our terms for use of feeds.


Seven Samurai chipmakers set to take on Intel

You know, it’s been nearly forty years since Intel introduced the first microprocessor, and even at this late date the company accounts for a whopping eighty percent of the global market for CPUs. But not so fast! Like an electronics industry remake of The Magnificent Seven (which is, of course, an American remake of The Seven Samurai), NEC and Renesas have teamed up with a stalwart band of companies, including Hitachi, Toshiba, Fujitsu, Panasonic, and Canon, to develop a new CPU that is compatible with Waseda University professor Hironori Kasahara’s “innovative energy-saving software.” The goal is to create a commercial processor that runs on solar cells, moderates power use according to the amount of data being processed (a current prototype runs on 30% of the power of a standard CPU), remains on even when mains power is cut, and, of course, upsets the apple cart over at Intel. Once a standard is adopted and the chip is used in a wide range of electronics, firms will be able to realize massive savings on software development. The new format is expected to be in place by the end of 2012. [Warning: Read link requires subscription]


Seven Samurai chipmakers set to take on Intel originally appeared on Engadget on Fri, 04 Sep 2009 15:58:00 EST. Please see our terms for use of feeds.


Intel rumored to be launching new Core i5, i7 processors September 8th

Well, Intel hasn’t exactly been keeping many secrets about its latest cadre of processors, and at least a few of them already seem to be shipping in some parts of the world, but it now looks like things could soon be about to get a whole lot more official. According to DigiTimes, Intel is set to announce its new Core i5-750, Core i7-860 and Core i7-870 CPUs (and the P55 chipset to go along with ’em) on September 8th, which is almost right in line with some of the earliest rumors on the matter. Details are otherwise a bit light, although DigiTimes‘ “sources” estimate that P55-based motherboards could account for as much as 20% of total motherboard shipments by the end of 2009.


Intel rumored to be launching new Core i5, i7 processors September 8th originally appeared on Engadget on Mon, 31 Aug 2009 13:54:00 EST. Please see our terms for use of feeds.


Giz Explains: Snow Leopard’s Grand Central Dispatch

You’ve probably heard about this snow kitty operating system for Macintosh computers. What you might not’ve heard is exactly how it’s supposed to be unleashing the power of all those processor cores crammed inside your computer.

The heart of the matter is that the trick to actually utilizing the full power of multiple processors—or multiple cores within a processor, like the Core 2 Duo you’ve probably got in your computer if you bought it in the last two years—is processing things in parallel. That is, doing lots of stuff side by side. After all, you’ve got 2, maybe 4 or even 8 processors at your disposal, so to use them as efficiently as possible, you want to pull a problem apart and throw a piece of it at each core, or at least send different problems to different cores. Sounds logical, right? Easy, even.

The rub is that writing software that can actually take advantage of all of that parallel processing at an application level isn’t easy, and without software built for it, all that power is wasted. In fact, cracking the nut of parallel processing is one of the major movements in tech right now, since parallelism, while it’s been around forever, has been the domain of solving really big problems, not running Excel sheets on your laptop. It’s why, for instance, former Intel chair Craig Barrett told me at CES that Intel hires more software engineers than hardware engineers—to push the software paradigm shift that’s gotta happen.

A big part of the reason parallel programming is hard for programmers to wrestle with is simply that most of them have never spent any time thinking about parallelism, says James Reinders, Intel’s Chief Software Evangelist, who’s spent decades working with parallel processing. In the single-core world, more speed primarily came from a faster clock—all muscle. Multi-core is a different approach. Typically, the way a developer takes advantage of parallelism is by breaking their application down into threads, sub-tasks within a process that run simultaneously or in parallel. And processes are just instances of an application—the things you can see running on your machine by firing up the Task Manager in Windows, or Activity Monitor in OS X. On a multi-core system, different threads can be handled by different processors so multiple threads can be run at once. An app can run a lot faster if it’s written to be multi-threaded.
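That process-versus-threads breakdown can be sketched in a few lines of Python (purely illustrative—the article is about native apps, not any particular language):

```python
import threading

# All of these threads live inside one process, so they share the
# process's memory -- including this dict.
results = {}

def work(name):
    # A thread is a sub-task within the process; each one runs the
    # same kind of work here, potentially on a different core.
    results[name] = sum(range(1000))

threads = [threading.Thread(target=work, args=(f"thread-{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for every sub-task to finish

print(len(results))  # 4 threads, 4 results
```

Fire up Activity Monitor while something like this runs and you’ll see one process, with the thread count ticking up inside it.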

One of the reasons parallel programming is tricky is that some kinds of processes are really hard to do in parallel—they have to be done sequentially. That is, one step in the program is dependent on the result from a previous step, so you can’t really run those steps in parallel. And developers tend to run into problems, like a race condition, where two processes try to do something with the same piece of data and the order of events gets screwed up, resulting in a crash.
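Here’s a toy Python version of the shared-data race the paragraph alludes to (a hypothetical example, not from the article): two threads bump the same counter, and the lock is what keeps the order of events from getting screwed up.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # Without the lock, two threads could each read the same value
        # of `counter`, both add 1, and write back -- silently losing
        # one update. That's a race condition: the result depends on
        # thread timing. The lock forces read-modify-write to happen
        # one thread at a time.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 200000 with the lock; timing-dependent without it
```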

Snow Leopard‘s Grand Central Dispatch promises to take a lot of the headache out of parallel programming by managing everything at the OS level, using a system of blocks and queues, so developers don’t even have to thread their apps in the traditional way. In the GCD system, a developer tags self-contained units of work as blocks, which are scheduled for execution and placed in a GCD queue. Queues are how GCD manages which tasks run in parallel and in what order, scheduling blocks to run when threads are free to run something.
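GCD itself is a C-level API in Snow Leopard, but the block-and-queue idea maps loosely onto the thread pools available in most languages. A rough Python analogy (not Apple’s actual interface—function and names here are made up):

```python
from concurrent.futures import ThreadPoolExecutor

# A "block" in GCD terms: a self-contained unit of work.
def render_thumbnail(image_id):
    return f"thumb-{image_id}"

# The executor plays the role of a queue: blocks get submitted, and
# the runtime decides when a free thread picks each one up -- the
# developer never touches a thread directly.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(render_thumbnail, i) for i in range(8)]
    results = [f.result() for f in futures]

print(results[0])  # thumb-0
```

The appeal is exactly what Reinders describes below: you hand over units of work and let the system worry about where and when they run.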

Reinders says he’s “not convinced that parallel programming is harder, it’s just different.” Still, he’s a “big fan of what Apple’s doing with Grand Central Dispatch” because “they’ve made a very approachable, simple interface for developers to take advantage of the fact that Snow Leopard can run things in parallel and they’re encouraging apps to take advantage of that.”

How Snow Leopard handles parallelism with GCD is a little different than what Intel’s doing, however—you might recall Intel just picked up RapidMind, a company that specializes in optimizing applications for parallelism. The difference between these two, at a broad level, represents two of the major approaches to parallelism—task parallelism, like GCD, or data parallelism, like RapidMind. Reinders explained it like this: If you had a million newspapers you want to cut clips out of, GCD would look at cutting from each newspaper as a task, whereas RapidMind’s approach would look at it as one cutting operation to be executed in a repetitive manner. For some applications, RapidMind’s approach will work better, and for some, GCD’s task-based approach will work better. In particular, Reinders says something like GCD works best when a developer can “figure out what the fairly separate things to do are and you don’t care where they run or in what order they run” within their app.
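Reinders’ newspaper analogy translates roughly to code like this (an illustrative Python sketch, not GCD’s or RapidMind’s real API):

```python
from concurrent.futures import ThreadPoolExecutor

papers = [f"paper-{i}" for i in range(6)]

def clip(paper):
    # stand-in for "cut the clips out of one newspaper"
    return paper.upper()

with ThreadPoolExecutor() as pool:
    # Task parallelism (GCD-style): each newspaper is an independent
    # task; we don't care which worker handles it or in what order.
    task_results = [f.result() for f in [pool.submit(clip, p) for p in papers]]

    # Data parallelism (RapidMind-style): one operation -- "clip" --
    # applied uniformly across the whole data set.
    data_results = list(pool.map(clip, papers))

print(task_results == data_results)  # True: same work, different framing
```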

It’s also a bit different from Windows’ approach to parallelism, which is app oriented, rather than managing things at the OS level, so it essentially leaves everything up to the apps—apps have got to manage their own threads and make sure they’re not eating all of your resources. Which, for now, isn’t much of a headache, but Reinders says there is a “valid concern on Windows that a mixture of parallel apps won’t cooperate with each other as much,” so you could wind up with a situation where, say, four apps try to use all 16 cores in your machine, when you’d rather they split up, with one app using eight cores, another using four, and so on. GCD addresses that problem at the system level, so there’s more coordination between apps, which may make it slightly more responsive to the user, if it manages tasks correctly.

You might think that the whole parallelism thing is a bit overblown—I mean, who needs a multicore computer to run Microsoft Word, right? Well, even Word benefits from parallelism, Reinders told me. For instance, when you spool off something to the printer and it doesn’t freeze, like it used to back in the day. Or spelling and grammar running as you type—it’s a separate thread that’s run in parallel. If it wasn’t, it’d make for a miserable-ass typing experience, or you’d just have to wait until you were totally finished with a document. There’s also the general march of software, since we love to have more features all the time: Reinders says his computer might be 100x faster than it was 15 years ago, but applications don’t run 100x faster—they’ve got new features that are constantly added on to make them more powerful or nicer to use. Stuff like pretty graphics, animation and font scaling. In the future, the stuff exploiting multiple cores through parallelism might be eyeball tracking, or actually good speech recognition.
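That background spell-checker is a classic worker-thread pattern; a stripped-down sketch (the toy dictionary and all names here are made up):

```python
import threading
import queue

typed = queue.Queue()
misspelled = []

def spell_check_worker():
    # Runs in parallel with the "typing" below; the main (UI) thread
    # never blocks waiting for the checker to catch up.
    while True:
        word = typed.get()
        if word is None:  # sentinel: document closed
            break
        if word not in {"hello", "world"}:  # toy dictionary
            misspelled.append(word)

checker = threading.Thread(target=spell_check_worker)
checker.start()

for word in ["hello", "wrld", "world"]:
    typed.put(word)  # the main thread keeps "typing" without waiting
typed.put(None)
checker.join()

print(misspelled)  # ['wrld']
```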

Reinders actually thinks that the opportunities for parallelism are limitless. “Not having an idea to use parallelism in some cases I sometimes refer to as a ‘lack of imagination,'” because someone simply hasn’t thought of it, the same way people back in the day thought computers for home use would be glorified electronic cookbooks—they lacked the imagination to predict things like the web. But as programmers move into parallelism, Reinders has “great expectations” that they’re going to imagine things the rest of us can’t, so we could see some amazing things come out of parallelism. But whether that’s next week or five years from now, well, we’ll see.

[Back to our Complete Guide to Snow Leopard]

Still something you wanna know? Send questions about parallel processing, parallel lines or parallel universes to tips@gizmodo.com, with “Giz Explains” in the subject line.

Grand Central Terminal main concourse image from Wikimedia Commons

AMD’s 40nm DirectX 11-based Evergreen GPUs could be ready for bloom by late September

Looks like AMD‘s heading off trail with its upcoming 40nm DirectX 11-based Evergreen series processors. The Inquirer’s dug up some details, and while clock speeds are still unknown, the codenames for the lineup include Cypress at the top of the pile, followed by Redwood, then Juniper and Cedar for the mainstream crowd, and finally Hemlock for the lower end. The series could reportedly be ready by late September, which gives a month of breathing room before DX11-supporting Windows 7 hits the scene. Could this give AMD its much-desired lead over NVIDIA? Hard to say, but things should get mighty interesting between now and late October.


AMD’s 40nm DirectX 11-based Evergreen GPUs could be ready for bloom by late September originally appeared on Engadget on Tue, 21 Jul 2009 19:43:00 EST. Please see our terms for use of feeds.


Mobile Chipsets: WTF Are Atom, Tegra and Snapdragon?

Low-power processors aren’t just for netbooks: These computers-on-a-chip are going to be powering our smartphones and other diminutive gadgets in the foreseeable future. So what’s the difference between the Atoms, Snapdragons and Tegras of the world?

Intel Atom
The current reigning king of low-cost, low-power processors, Intel’s Atom flat-out dominates the netbook market. Its single- and dual-core processors are also some of the most powerful on our list, despite having abilities roughly equal to, in Intel’s own terms, a 2003-2004 vintage Celeron. Based on the x86 architecture, the Atom is capable of running full versions of Windows XP, Vista (though not all that well), and 7, as well as modern Linux distros and even Hackintosh. While it requires far less power than a full-power chip, it’s still more power-hungry than the ARM-based processors on our list, requiring about 2 watts on average. That’s why netbook battery life isn’t all that much longer than that of a normal laptop.

You can find the Atom in just about every netbook, including those from HP, Dell, Asus, Acer, Sony, Toshiba, MSI, and, well, everyone else. The 1.6GHz chip is the most popular at the moment, but Intel is definitely going to keep improving and upgrading the Atom line. However, you’re unlikely to catch an Atom in a handset; it’s low-power, yes, but low-power for a notebook. Battery life on an Atom handset would be pretty atrocious, which is why Intel’s sticking to netbooks for now.

Qualcomm Snapdragon
Based on ARM, which is a 32-bit processor architecture that powers just about every mobile phone (and various other peripherals, though never desktop computers) out there, Snapdragon isn’t competing directly with the Intel Atom—it’s not capable of running full versions of Windows (only Windows Mobile and Windows CE), it’s incredibly energy-efficient (requiring less than half a watt), and is designed for always-on use. In other words, this is the evolution of the mobile computing processor. It’s got great potential: Qualcomm is trumpeting battery life stretching past 10 hours, smooth 1080p video, support for GPS, 3G, and Bluetooth, and such efficiency that a Linux-based netbook can use Snapdragon without a fan or even a heat sink. Available in single core (1GHz) or dual-core (1.5GHz), it can be used in conjunction with Android, Linux, and various mobile OSes.

Unfortunately, Qualcomm is still holding onto the notion that people want MIDs, and is championing “smartbooks,” which are essentially smartphones with netbook bodies, like Asus’s announced-then-retracted Eee with Android. Snapdragon’s got promise, but we think that promise lies in super-powered handheld devices, not even more underpowered versions of already-underpowered netbooks.

We’re frankly not sure when we’ll see Snapdragon-based devices sold in the US. We’re sure Snapdragon will end up in smartphones at some point, as at least one Toshiba handset has been tentatively announced, but the only concrete demonstrations we’ve seen have been in MIDs, and Qualcomm itself spends all its energy touting these “smartbooks.” Snapdragon’s Windows Mobile compatibility suggests we may see it roll out with Windows Mobile 7, if Tegra hasn’t snapped up all the good handsets.

Nvidia Tegra
Nvidia’s Tegra processor is very similar to Snapdragon—both are based on ARM architecture, so both are designed for even less intense applications than the Atom. Like Snapdragon, Tegra isn’t capable of running desktop versions of Windows, so it’s primarily targeted at Android and handheld OSes, especially forthcoming versions of Windows Mobile. What sets Tegra apart from Snapdragon is the Nvidia graphics pedigree: The company claims smooth 1080p video, like Snapdragon, but also hardware-accelerated Flash video and even respectable gaming (though no, you won’t be able to run Crysis). They also go even further than Qualcomm in their battery life claim, suggesting an absolutely insane 30 hours of HD video.

While Snapdragon tends to be loosely associated with Android, Tegra is an integral part of Microsoft’s plan for next-generation Windows Mobile devices. Instead of focusing on “smartbooks” and MIDs, which we think are part of a dead-end category, Nvidia’s commitment to pocketable handhelds could spell success for Tegra. We’ve seen proof-of-concept demonstrations of Tegra already, but its real commercial debut will come with Windows Mobile 7—and if WM7 doesn’t suck, Tegra could take off.

Others
We haven’t included certain other processors, especially VIA’s Nano, due to intent: The Nano draws less power than full-scale processors, but at 25 watts, it’s not even really in the same league as Atom, let alone Snapdragon or Tegra. The VIA Nano is really targeted at non-portable green technology, and looks like it’ll do a good job—it outperformed Atom in Ars Technica’s excellent test, and stands up to moderate use with ease. AMD’s Puma (Turion X2) is in a similar boat: It’s certainly markedly more energy-efficient than AMD’s other offerings, but as it’s targeted at laptops (not netbooks) with screen sizes greater than 12 inches, it’s not quite right for our list here.

These low-power processors aren’t just, as we so often think, crappier versions of “real” processors. They’ve got uses far beyond netbooks, especially in the near future as the gap between netbooks and smartphones narrows.

Still something you wanna know? Send any questions about why your iPhone can’t play Crysis, how to tie a bow tie, or anything else to tips@gizmodo.com, with “Giz Explains” in the subject line.

Intel debuts three new Core 2 Duo procs, new SU2700 ULV chip and GS40 Express Chipset

It doesn’t take an Intel-salaried futurist to see that extended battery life and thin form factors are kind of a big deal going forward, while price and performance aren’t getting swept away either — it’s been basically the ongoing state of the laptop industry since time began (as Intel has so helpfully illustrated for us). What is new is that form factors and bang-for-buck are truly getting wild of late, and Intel’s latest crop of chips should help keep moving things along. At the high end, Intel’s Core 2 Duo line is breaking 3GHz with the 3.06GHz T9900, alongside the new P9700 and P8800 chips. Meanwhile, the Pentium SU2700 is a 1.3GHz ULV chip for stuffing in everybody’s next low-cost thin and light, while Intel is also introducing the GS40 Express Chipset as a scaled-down, lower power alternative to the GS45, likely for similar aims. No word on price points or availability just yet.


Intel debuts three new Core 2 Duo procs, new SU2700 ULV chip and GS40 Express Chipset originally appeared on Engadget on Tue, 02 Jun 2009 00:01:00 EST. Please see our terms for use of feeds.


Intel’s Medfield Project May, May Not Go Into Smartphones

It’s all very wink wink, nudge nudge, hush hush, but the odor that Intel is giving off in this Fortune article about the Medfield project is that Intel’s trying to shrink x86 down to smartphones.

Intel’s roadmap looks like this: Now they have Atom, which powers many of the netbooks on the market today. Next comes Moorestown, which is supposed to be like the Atom, but houses two chips and serves as a low-power solution whose second chip can be customized for whatever gadget a client shoves it into. Moorestown isn’t quite small enough for smartphones, but Intel’s saying Medfield may be, when Medfield follows up Moorestown.

There’s a lot of hinting, but not a lot of outright declaration here, so it’s not certain that Medfield will be able to fit into something the size of an iPhone or a Pre or an Android. What they are saying is that they can fit into something the size of a UMPC or a MID or a large PMP—something that Nvidia’s Tegra or Qualcomm’s Snapdragon are aiming for as well.

The timeline for Medfield is 2011ish, so there’s a while yet before anything materializes. But if Intel does somehow find a way to get their system-on-a-chip into your phones, that means bigger OSes and more laptop-like performance. We’ll see. [Fortune]

AMD reorganizes, ATI now fully assimilated

It looks like the final step in AMD totally subsuming ATI has been taken. The company announced a reorganization around four specific pillars: products, future technology, marketing, and customer relations. The restructuring also marks the end of Randy Allen’s tenure, as the SVP of the Computing Solutions Group has decided to leave for unspecified reasons. ATI holdover Rick Bergman, who had also been head of the subsidiary known internally as the Graphics Product Group, will head up the products division with the goal of unifying the GPU and CPU teams (not necessarily the products). We highly doubt this means ATI branding is going anywhere — it’s far too valuable for AMD. Will Bergman’s lead help the company reclaim its position among the top ten chip makers? Give Fusion the kick in the pants it needs? Only time will tell.


AMD reorganizes, ATI now fully assimilated originally appeared on Engadget on Wed, 06 May 2009 20:58:00 EST. Please see our terms for use of feeds.


Ex-Seagate CEO joins startup Vertical Circuits, learns secret of the silver, gadget-shrinking ooze

Bill Watkins, the oft-outspoken former CEO of Seagate, has thrown his support behind tech startup Vertical Circuits, who claim to have an uncanny knack for shrinking gadgets with the power of voodoo — or rather, a patented silver ooze, but we prefer our theories. The goo works as a replacement for gold wires to connect vertically stacked chips, cleaning up the internal cable clutter and leaving more room for a better processor, bigger batteries, larger displays, or just a tinier form factor. Right now the focus is on stacking flash memory, but the group says they can use the same technique for processors and other chips. At this stage, there’s no product or partnership to show for it, but if they’re as good as they say, we hopefully won’t have to wait long to see the fruits of their labor.


Ex-Seagate CEO joins startup Vertical Circuits, learns secret of the silver, gadget-shrinking ooze originally appeared on Engadget on Sat, 02 May 2009 09:51:00 EST. Please see our terms for use of feeds.
