AMD’s ATI Radeon E4690 brings HD, DirectX 10.1 support to embedded GPU arena

AMD’s newfangled ATI Radeon E4690 may not be the next Crysis killer, but it should do just fine in next-gen arcade and slot machines. All kidding aside (sort of…), this new embedded graphics chip is said to triple the performance of AMD’s prior offerings in the field, bringing with it 512MB of GDDR3 RAM, DirectX 10.1 / OpenGL 3.0 support and hardware acceleration of H.264 and VC-1 high-definition video. The 35mm chip also differentiates itself by integrating directly onto motherboards and taking on many of the tasks currently assigned to the CPU, but alas, it doesn’t sound as if we’ll be seeing this in any nettops or netbooks anytime soon.

What to See, Do, Hear and Hack at the Maker Faire


Maker Faire, the largest festival for DIYers, crafters and hackers, happens Saturday and Sunday, May 30 and 31, in San Mateo, California. More than 80,000 people are expected to attend this year to check out what the 600-odd makers have to show, including robotics, music, crafts and food.

Here are some of the highlights:

  • Steve Chamberlin’s 8-bit homebrewed CPU. Some 1,253 pieces of wire were individually hand-wrapped to create its connections, and Chamberlin has built a functional computer around it. The computer and the CPU will be on display in booth 296 at the main Expo Hall.
  • A group of aficionados of Disney Pixar’s Wall-E will also be showing their handmade Wall-E robots and other characters from the movie. The hobbyists have created life-size, fully functional replicas from scratch that are indistinguishable from their namesakes in the movie. The robots will be on display at booth 147 in the Expo Hall.
  • There will also be interesting musical instruments on display, such as Yotam Mann’s multitouch musical pad, which rigs together lasers, a webcam and some custom software to provide an inexpensive way to make some cool music. The contraption will be on display at booth 211 in the Expo Hall.
  • The Bay Area Lego Users Group (BayLUG), which has more than 100 members, will show an entire city constructed of Lego bricks. The exhibit, with individual members responsible for building a single city block, will measure about 2,000 square feet.
  • Other cool exhibits include Daniel Fukuba’s DIY Segway. Fukuba, with some help from other Segway enthusiasts, has created a balancing scooter, first with a wooden frame and then with an aluminum one. “I started with raw, plain PCB boards and soldered on all the components for the speed controller and the logic controller,” says Fukuba. The project took about two months and $4,000, and at the Faire he will be sharing his expertise on how to do it yourself. Fukuba’s DIY balancing scooter will be on display at A1 in the Bike Town pavilion.
  • We are also eager to see the two-person, self-propelled Ferris wheel, where riders use their arm muscles to shift their weight and turn the wheel. It is about 20 feet tall, made of plywood, and will be in the Midway M2 area.
  • Don’t forget to check out the CandyFab Project, which uses low-cost, open-source fabrication to create 3D sugary confections. A completely new CandyFab machine will be on display at booth 293 in the Expo Hall, cranking out some sweet goodies.

Know of some other cool exhibits or events at the Faire? Post them in the comments below.

And follow @gadgetlab on Twitter, where we’ll be tweeting throughout the weekend with tips on the most interesting, fun and wacky things to see. Stay tuned.

For more on the event, check out O’Reilly’s Maker Faire website.

Photo: Wire wrapped 8-bit CPU/Steve Chamberlin


Homebrewed CPU Is a Beautiful Mess of Wires


Intel’s fabrication plants can churn out hundreds of thousands of processor chips a day. But what does it take to handcraft a single 8-bit CPU and a computer? Give or take 18 months, about $1,000 and 1,253 pieces of wire.

Steve Chamberlin, a Belmont, California, videogame developer by day, set out on a quest to custom design and build his own 8-bit computer. The homebrew CPU would be called Big Mess of Wires or BMOW. Despite its name, it is a painstakingly created work of art.

“Computers can seem like complete black boxes. We understand what they do, but not how they do it, really,” says Chamberlin. “When I was finally able to mentally connect the dots all the way from the physics of a transistor up to a functioning computer, it was an incredible thrill.”

The 8-bit CPU and computer will be on display doing an interactive chess demo at the fourth annual Maker Faire in San Mateo, California, this weekend, May 30-31. It will be one of 600 exhibits of do-it-yourself technology, hacks, mods and just plain strange hobby projects at the faire, which is expected to draw 80,000 attendees.

The BMOW is closest in design to the MOS Technology 6502 processor used in the Apple II, Commodore 64 and early Atari videogame consoles. Chamberlin designed his CPU with three 8-bit data registers, a 24-bit address size and 12 addressing modes. It took him about a year and a half from design to finish. Almost all the components come from 1970s- and 1980s-era technology.
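
To picture what that spec means in practice, here is a hypothetical C sketch of how an emulator might model a machine with BMOW-like state: three 8-bit data registers and a 24-bit address space. The register names and the addressing-mode helper are illustrative guesses for the sake of the example, not Chamberlin’s actual design.

```c
#include <stdint.h>

/* Hypothetical model of a BMOW-like machine's programmer-visible state:
   three 8-bit data registers and a 24-bit address space. */
typedef struct {
    uint8_t  a, x, y;   /* three 8-bit data registers (names invented) */
    uint32_t pc;        /* program counter; only the low 24 bits are used */
    uint8_t  *memory;   /* 16MB: everything a 24-bit address can reach */
} bmow_state;

/* One of a dozen addressing modes might resolve an operand like this:
   "absolute" addressing, where the three bytes after the opcode form a
   little-endian 24-bit address. */
static uint32_t absolute_address(const bmow_state *s) {
    uint32_t lo  = s->memory[(s->pc + 1) & 0xFFFFFF];
    uint32_t mid = s->memory[(s->pc + 2) & 0xFFFFFF];
    uint32_t hi  = s->memory[(s->pc + 3) & 0xFFFFFF];
    return (hi << 16) | (mid << 8) | lo;
}
```

A real emulator would need opcode decoding and the remaining addressing modes on top of this, but the sketch shows how small the programmer-visible state of an 8-bit machine really is.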

“Old ’80s vintage parts may not be very powerful, but they’re easy to work with and simple to understand,” he says. “They’re like the Volkswagen Beetles of computer hardware. Nobody argues they’re the best, but we love them for their simplicity.”

To connect the parts, Chamberlin used wire wrapping instead of soldering. The technique involves taking a hollow, screwdriver-shaped tool and looping the wire through it to create a tight, secure connection. Wire wraps are seen as less prone to failures than soldered junctions but can take much longer to accomplish. Still, they offer one big advantage, says Chamberlin.

“Wire wrapping is changeable,” he says. “I can unwrap and start over if I make a mistake. It is much harder to recover from a mistake if you solder.”

Chamberlin started with a 12×7-inch Augat wire-wrap board with 2,832 gold wire-wrap posts, which he purchased on eBay for $50. Eventually he used 1,253 pieces of wire to create 2,506 individually wrapped connections, wrapping at a rate of almost 25 wires an hour (roughly 50 hours of wrapping in all). “It’s like a form of meditation,” he wrote on his blog. “Despite how long it takes to wrap, the wire-wrapping hasn’t really impacted my overall rate of progress. Design, debugging, and general procrastination consume the most time.”

The BMOW isn’t just a CPU. Chamberlin added a keyboard input, an LCD output that shows a strip of text, a USB connection, three-voice audio, and VGA video output to turn it into a functioning computer. The video circuitry, a UMC 70C171 color palette chip, was hard to come by, he says. When Chamberlin couldn’t find a source for it online, he went to a local electronics surplus warehouse and dug through a box of 20-year-old video cards. Two cards in there had the chip he needed, so he took one and repurposed it for his project.

The use of retro technology and parts is essential for a home hobbyist, says Chamberlin. Working with newer electronics can be difficult because many modern parts are surface-mount chips rather than chips with through-hole pins. Soldering those typically requires a reflow oven, putting them out of reach of non-professionals.

After months of the CPU sitting naked on his desk, Chamberlin fashioned a case using a gutted X Terminal, a workstation popular in the early 1990s.

“Why did I do all this?” he says. “I don’t know. But it has been a lot of fun.”

Check out Steve Chamberlin’s log of how BMOW was built.

Photo:  Wire wrapped 8-bit CPU/Steve Chamberlin


New Intel Core i7 CPUs show up unannounced

Intel’s Core i7 has become something of a mainstay in the most recent wave of gaming rigs, but it’s been quite a while (in processor years, anyway) since we’ve seen any new siblings join the launch gang. We’d heard faint whispers that a new crew was set to steal the stage on May 31st, and those rumors are looking all the more likely now that a few heretofore unheard-of chips have appeared online. The 3.06GHz Core i7 950 is shown over at PCs For Everyone with 8MB of shared L3 cache and a $649 price tag, and it’s expected that said chip will replace the aging Core i7 940. Moving on up, there’s the luscious 3.33GHz Core i7 Extreme 975, which is also listed with 8MB of shared L3 cache but packs a staggering price tag well above the $1,100 mark. If all this pans out, this CPU will replace the Core i7 Extreme 965 as Intel’s fastest Core i7 product. Just a few more days to wait, right?

[Via PCWorld]

Read – Core i7 Extreme 975 listing
Read – Core i7 950 listing

Intel said to slip Core i5 platform to September, competition needed

Want a good example of why Intel (or we, the consumers) needs a strong competitor? DigiTimes has it from sources at motherboard makers that Intel will delay its mainstream desktop Core i5 platform (including Lynnfield processors and 5-series chipsets) from July to early September, a rumor with merit given DigiTimes’ proven sources within motherboard makers like ASUS, Gigabyte and MSI. The reason for the delay is to allow vendors to deplete 4-series inventories that have piled up during the economic slowdown. Of course, if AMD or… well, AMD could muster the silicon to compete with Intel at the same price points, such a delay would not be possible. How much, you say? DigiTimes has the Core i5 processors priced at 2.93GHz ($562), 2.8GHz ($284) and 2.66GHz ($196) when purchased in bulk.

[Via PC Perspective]

Intel details next-generation Atom platform, say hello to Pine Trail

Intel has been doing a lot of talking about big new processors and platforms as of late, and it’s now gotten official with one that’s soon to be ever-present: its next-generation Atom platform, codenamed Pine Trail. In case you haven’t been tracking the rumors, the big news here is that the processor part of the equation, dubbed Pineview, will incorporate both the memory controller and the GPU, which reduces the number of chips in the platform to two and should result in some significant size and power savings. As Ars Technica points out, the platform is also the one that’ll be going head to head with NVIDIA’s Ion, which is likely to remain more powerful but not as affordable or efficient, especially since NVIDIA can’t match Intel’s on-die GPU integration. Either way, things should only get more interesting once Pine Trail launches in the last quarter of this year.

Fujitsu’s supercomputer-ready Venus CPU said to be “world’s fastest”

Due to the intrinsic limitations of machine translation, it’s hard to say exactly what makes Fujitsu’s latest supercomputer CPU the “world’s fastest,” but we’ll hesitantly believe it for the time being. We’re told that the SPARC64 VIIIfx (codename Venus) can churn through 128 billion calculations per second, supposedly 2.5 times the performance of the current champ (a chip from Intel). An AP report on the matter states that Fujitsu shrank the size of each central circuit, which in turn doubled the number of circuits per chip. ’Course, this beast won’t be ready for supercomputer work for several years yet, giving the chip maker’s biggest rivals plenty of time to sabotage its moment in the limelight.

[Via Physorg]

Intel’s Medfield Project May, May Not Go Into Smartphones

It’s all very wink wink, nudge nudge, hush hush, but the odor that Intel is giving off in this Fortune article about the Medfield project is that Intel’s trying to shrink x86 down to smartphones.

Intel’s roadmap looks like this: right now there’s Atom, which powers many of the netbooks on the market today. Next comes Moorestown, which is supposed to be like Atom but split across two chips, a low-power solution whose second chip can be customized for whatever gadget a client shoves it into. Moorestown isn’t quite small enough for smartphones, but Intel’s saying that Medfield, which follows Moorestown, may be.

There’s a lot of hinting but not much outright declaration here, so it’s not certain that Medfield will be able to fit into something the size of an iPhone, a Pre or an Android phone. What Intel is saying is that its chips can fit into something the size of a UMPC, a MID or a large PMP, the same space Nvidia’s Tegra and Qualcomm’s Snapdragon are aiming for.

The timeline for Medfield is 2011-ish, so there’s a while yet before anything materializes. But if Intel does somehow find a way to get its system-on-a-chip into your phone, that means bigger OSes and more laptop-like performance. We’ll see. [Fortune]

Giz Explains: GPGPU Computing, and Why It’ll Melt Your Face Off

No, I didn’t stutter: GPGPU (general-purpose computing on graphics processing units) is what’s going to bring hot screaming gaming GPUs to the mainstream, with Windows 7 and Snow Leopard. Finally, everybody’s face melts! Here’s how.

What a Difference a Letter Makes
GPU sounds—and looks—a lot like CPU, but they’re pretty different, and not just ’cause dedicated GPUs like the Radeon HD 4870 here can be massive. GPU stands for graphics processing unit, while CPU stands for central processing unit. Spelled out, you can already see the big differences between the two, but it takes some experts from Nvidia and AMD/ATI to get to the heart of what makes them so distinct.

Traditionally, a GPU does basically one thing: speed up the processing of image data that you end up seeing on your screen. As AMD Stream Computing Director Patricia Harrell told me, they’re essentially chains of special-purpose hardware designed to accelerate each stage of the geometry pipeline, the process of matching image data or a computer model to the pixels on your screen.

GPUs have a pretty long history (you could go all the way back to the Commodore Amiga, if you wanted to), but we’re going to stick to the fairly recent past: the last 10 years, when, as Nvidia’s Sanford Russell says, GPUs started adding cores to distribute the workload. See, graphics calculations, the math needed to figure out what pixels to display on your screen as you snipe someone’s head off in Team Fortress 2, are particularly suited to being handled in parallel.

An example Nvidia’s Russell gave to illustrate the difference between a traditional CPU and a GPU is this: if you were looking for a word in a book and handed the task to a CPU, it would start at page 1 and read all the way to the end, because it’s a “serial” processor. It would read quickly, but it would still take time because it has to go in order. A GPU, which is a “parallel” processor, “would tear [the book] into a thousand pieces” and read them all at the same time. Even if each individual word is read more slowly, the book may be read in its entirety more quickly, because the pieces are read simultaneously.
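
If it helps to see the analogy in code, here’s a minimal C sketch of the same idea, counting how many pages contain a word either one page at a time or with many workers at once. It uses OpenMP threads on a CPU purely as a stand-in for a GPU’s cores, and the page data and function names are made up for the example.

```c
/* Compile with: cc -fopenmp search.c */
#include <stdio.h>
#include <string.h>

/* Serial version: start at page 1 and read to the end, in order. */
static int count_serial(const char *pages[], int n, const char *word) {
    int total = 0;
    for (int i = 0; i < n; i++)
        if (strstr(pages[i], word))
            total++;
    return total;
}

/* Parallel version: tear the book into pieces and let every thread
   scan its share of pages at the same time, then combine the counts. */
static int count_parallel(const char *pages[], int n, const char *word) {
    int total = 0;
    #pragma omp parallel for reduction(+:total)
    for (int i = 0; i < n; i++)
        if (strstr(pages[i], word))
            total++;
    return total;
}

int main(void) {
    const char *pages[] = { "the quick brown fox", "jumps over", "the lazy dog" };
    printf("serial: %d, parallel: %d\n",
           count_serial(pages, 3, "the"),
           count_parallel(pages, 3, "the"));
    return 0;
}
```

The two loops do the same work; the only difference is that the second one hands independent chunks of it to separate workers, which is the whole trick behind GPU-style parallelism.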

All those cores in a GPU (800 stream processors in ATI’s Radeon HD 4870) make it really good at performing the same calculation over and over on a whole bunch of data. (Hence a common GPU spec is flops, or floating-point operations per second, measured in current hardware in terms of gigaflops and teraflops.) The general-purpose CPU is better at some stuff, though, as AMD’s Harrell said: general programming, accessing memory randomly, executing steps in order, everyday stuff. It’s true, though, that CPUs are sprouting cores, looking more and more like GPUs in some respects, as retiring Intel Chairman Craig Barrett told me.

Explosions Are Cool, But Where’s the General Part?
Okay, so the thing about parallel processing (using tons of cores to break stuff up and crunch it all at once) is that applications have to be programmed to take advantage of it. It’s not easy, which is why Intel at this point hires more software engineers than hardware engineers. So even if the hardware’s there, you still need the software to get there, and it’s a whole different kind of programming.

Which brings us to OpenCL (Open Computing Language) and, to a lesser extent, CUDA. They’re frameworks that make it way easier to use graphics cards for kinds of computing that aren’t related to making zombie guts fly in Left 4 Dead. OpenCL is the “open standard for parallel programming of heterogeneous systems” standardized by the Khronos Group—AMD, Apple, IBM, Intel, Nvidia, Samsung and a bunch of others are involved, so it’s pretty much an industry-wide thing. In semi-English, it’s a cross-platform standard for parallel programming across different kinds of hardware—using both CPU and GPU—that anyone can use for free. CUDA is Nvidia’s own architecture for parallel programming on its graphics cards.
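
To give a flavor of what that code looks like, here’s a minimal OpenCL C kernel sketch: every work-item applies the same tiny calculation to one element of an array, the “same calculation over and over on a whole bunch of data” pattern GPUs are built for. The kernel name and arguments are invented for the example, and a real program still needs host-side code to compile the kernel and hand it buffers.

```c
// Minimal OpenCL C kernel: each work-item handles one array element.
__kernel void scale_array(__global float *data,
                          const float factor,
                          const unsigned int n)
{
    unsigned int i = get_global_id(0);  // this work-item's position in the grid
    if (i < n)                          // guard against extra work-items
        data[i] *= factor;              // the same calculation, across every element
}
```

CUDA code looks much the same, except the kernel is a function marked __global__ and indexed with threadIdx/blockIdx instead of get_global_id.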

OpenCL is a big part of Snow Leopard. Windows 7 will use some graphics card acceleration too (though we’re really looking forward to DirectX 11). So graphics card acceleration is going to be a big part of future OSes.

So Uh, What’s It Going to Do for Me?
Parallel processing is pretty great for scientists, but what about regular people? Does it make their stuff go faster? Not everything, and to start, it’s not going too far from graphics, since that’s still the easiest work to parallelize. But converting, decoding and creating videos, stuff you’re probably doing more of now than you did a couple of years ago, will improve dramatically soon. Say bye-bye to 20-minute renders. Ditto for image editing; there’ll be less waiting for effects to propagate across giant images (Photoshop CS4 already uses GPU acceleration). In gaming, beyond straight-up graphical improvements, physics engines can get more complicated and realistic.

If you’re just Twittering or checking email, no, GPGPU computing is not going to melt your stone-cold face. But anyone with anything cool on their computer is going to feel the melt eventually.

eASIC eDV9200 H.264 codec promises HD for all devices

We’ve already got HD in places that the cast of Step by Step would’ve sworn were never possible way back when, but eASIC is far from satisfied. To that end, it’s introducing a new H.264 codec aimed at bringing high-def capabilities to all manner of devices, including (but certainly not limited to) toys, baby monitors, public transportation, wireless video surveillance and wireless webcams. The highly integrated eDV9200 is said to “dramatically lower the cost of entry into the high-definition video market, enabling a new class of low-cost applications to fully leverage the benefits offered by HD technology.” Best of all, these guys aren’t just blowing smoke: the chip, which captures streaming data directly from a CMOS sensor, compresses it and transfers it to a host system or a variety of storage devices, is priced at just $4.99 each in volume. HD oven timers, here we come!
