Corsair pushes speed envelope with 2,333MHz Dominator GTX RAM modules

Corsair and speed generally run in the same circles, so it stands to reason that the memory outfit is cranking out the planet’s fastest Intel XMP-certified RAM. The 2,333MHz Dominator GTX now has Intel’s stamp of approval, and it easily surpasses the company’s 2,000MHz stuff that was king of the castle just yesterday. As the story goes, each module is “hand screened” and tested to the hilt before being shipped to end users, which apparently explains the $200 per 2GB stick that you’ll be asked to lay down. Speed kills… the wallet.

Corsair pushes speed envelope with 2,333MHz Dominator GTX RAM modules originally appeared on Engadget on Thu, 21 Jan 2010 15:02:00 EST. Please see our terms for use of feeds.

[Source: Hot Hardware]

Component shortages lead analysts to forecast rise in prices of personal electronics

As you might well know, we’re not the biggest fans of analyst blather, but this piece of research by Gartner is backed by some substantial numbers. The FT reports that DRAM prices have recently risen by 23 percent, followed closely by LCD prices with a 20 percent jump, both in response to the financial crisis the whole globe seems to be suffering from. Because the effects of recently renewed investment in capacity building won’t be felt for a while, we’re told to prepare for higher prices throughout this year — a significant combo breaker from the previous decade’s average of around 7.8 percent drops. Oh well, let’s just cling to the encouraging signs for the future and ignore this bump on the road to gadget nirvana.

[Thanks, Ben W]

Component shortages lead analysts to forecast rise in prices of personal electronics originally appeared on Engadget on Wed, 13 Jan 2010 07:05:00 EST.

[Source: Financial Times]

Panasonic shipping first SDXC cards next month for ungodly amounts of cash

Here we go, folks. Nearly a year to the day after the term “SDXC” cemented itself into our vernacular, Panasonic has announced the first two that’ll ever ship to end users. Unless a competitor jumps in and steals the thunder before then, of course. Announced here in the desert, the outfit has proclaimed that a 48GB and a 64GB SDXC card will begin shipping to fat-walleted consumers in February, bringing with them a Class 10 speed rating and maximum data transfer rates of 22MB/sec. You know what else they’ll be bringing? Price tags that are guaranteed to make you simultaneously weep and hoot — the 48GB model will list for $449.95, while the 64 gigger will go for $599.95. Tissues, anyone?
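
For a sense of what that 22MB/sec ceiling actually buys you, here’s a quick back-of-the-envelope calculation of our own (decimal units assumed; sustained real-world write speeds will be lower):

```python
# Rough time to fill the 64GB card at the quoted 22MB/sec peak rate.
# Uses decimal units (1GB = 1000MB); sustained speeds will be lower.
card_gb = 64
rate_mb_per_s = 22
seconds = card_gb * 1000 / rate_mb_per_s
print(round(seconds / 60), "minutes")  # roughly 48 minutes
```

Call it the better part of an hour to fill the flagship card end to end, even at the advertised maximum.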

Panasonic shipping first SDXC cards next month for ungodly amounts of cash originally appeared on Engadget on Wed, 06 Jan 2010 19:26:00 EST.

University of Tokyo Unveils Flexible Organic Flash Memory

The photo shows an example of non-volatile, flexible organic flash memory developed at the University of Tokyo, something that could lead the way to a slew of flexible computing gadgets, such as large-area sensors and electronic paper devices, Engadget reports.

The design uses a polyethylene naphthalate (PEN) resin sheet arrayed with memory cells, the report said; data can be written to it and erased over 1,000 times. The university claims it can be bent to a radius of six millimeters without any degradation.

So far, it only retains data for about a day–but researchers expect to improve that drastically over time.

64GB iPhones and 128GB iPod Touches on the Way?

Toshiba has just announced the availability of a new embedded NAND flash memory chip, which can hold up to 64GB of data. These are the chips that sit inside the iPhone and iPod Touch.

One reason that the iPod Touch usually has more memory than the iPhone is that, despite its skinny form, there is more room inside. Consequently, the Touch can fit two chips where the iPhone only has space for one. This new release from Toshiba, then, means that the iPhone could double up on storage and the iPod Touch could again leap ahead.

Of course, pricing of these chips will have a lot to do with when Apple actually starts to buy them. This, in combination with the now well-established launch schedule of iPhones in the summer and iPods in the autumn, means that we might be waiting a while. On the other hand, if Apple goes ahead with a camera-equipped Touch as expected, we may get a New Year surprise.

One thing we are sure of is that the days of the hard-drive based iPod Classic are now numbered.

Toshiba Launches Highest Density Embedded NAND Flash Memory Modules [Toshiba]


Understanding the Windows Pagefile and Why You Shouldn’t Disable It

As a tech writer, I regularly cringe at all the bad tweaking advice out there, and disabling the system pagefile is often a source of contention among geeks. Let’s examine some of the pagefile myths and debunk them once and for all.

What is a Pagefile and How Do I Adjust It?

Before we get into the details, let’s review what the pagefile actually does. When your system runs low on RAM because an application like Firefox is taking too much memory, Windows moves the least used “pages” of memory out to a hidden file named pagefile.sys in the root of one of your drives to free up more RAM for the applications you are actually using. What this actually means to you is that if you’ve had an application minimized for a while, and you are heavily using other applications, Windows is going to move some of the memory from the minimized application to the pagefile since it hasn’t been accessed recently. This can often cause restoring that application to take a little longer, and your hard drive may grind for a bit.
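
The behavior described above can be sketched as a toy least-recently-used pager. This is a deliberate simplification, not Windows’ actual memory manager; the class, sizes, and page names are all our own invention:

```python
from collections import OrderedDict

class ToyPager:
    """Toy model of paging: when RAM fills up, the least recently
    used page is moved out to a 'pagefile'. A simplification for
    illustration, not how Windows' memory manager really works."""

    def __init__(self, ram_pages):
        self.ram_pages = ram_pages
        self.ram = OrderedDict()  # page -> data, least recently used first
        self.pagefile = {}

    def access(self, page):
        if page in self.ram:
            self.ram.move_to_end(page)  # mark as recently used
        else:
            if page in self.pagefile:   # page fault: bring it back (slow)
                self.pagefile.pop(page)
            if len(self.ram) >= self.ram_pages:
                # Evict the least recently used page to the pagefile
                victim, data = self.ram.popitem(last=False)
                self.pagefile[victim] = data
            self.ram[page] = None

pager = ToyPager(ram_pages=2)
for p in ["firefox", "editor", "firefox", "browser_game"]:
    pager.access(p)
print(sorted(pager.ram), sorted(pager.pagefile))
```

With RAM full, touching “browser_game” evicts “editor”, the page that hasn’t been used in the longest time, which mirrors why a long-minimized app’s memory ends up in pagefile.sys while your active applications stay in RAM.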

If you want to take a look at your own pagefile settings, launch sysdm.cpl from the Start menu search or run box (Win+R) and navigate to Advanced -> Settings -> Advanced -> Change. From this screen you can change the paging file size (see image above), set the system to not use a paging file at all, or just leave it up to Windows to deal with—which is what I’d recommend in most cases.

Why Do People Say We Should Disable It?

Look at any tweaking site anywhere, and you’ll receive many different opinions on how to deal with the pagefile—some sites will tell you to make it huge, others will tell you to completely disable it. The logic goes something like this: Windows is inefficient at using the pagefile, and if you have plenty of memory you should just disable it since RAM is a lot faster than your hard drive. By disabling it, you are forcing Windows to keep everything in much faster RAM all the time.

The problem with this logic is that it only really affects a single scenario: switching to an open application that you haven’t used in a while won’t ever grind the hard drive when the pagefile is disabled. It’s not going to actually make your PC faster, since Windows will never page the application you are currently working with anyway.

Disabling the Pagefile Can Lead to System Problems

The big problem with disabling your pagefile is that once you’ve exhausted the available RAM, your apps are going to start crashing, since there’s no virtual memory for Windows to allocate—and worst case, your actual system will crash or become very unstable. When that application crashes, it’s going down hard—there’s no time to save your work or do anything else.

In addition to applications crashing anytime you run up against the memory limit, you’ll also come across a lot of applications that simply won’t run properly if the pagefile is disabled. For instance, you really won’t want to run a virtual machine on a box with no pagefile, and some defrag utilities will also fail. You’ll also notice some other strange, indefinable behavior when your pagefile is disabled—in my experience, a lot of things just don’t always work right.

Less Space for File Buffers and SuperFetch

If you’ve got plenty of RAM in your PC, and your workload really isn’t that huge, you may never run into application crashing errors with the pagefile disabled, but you’re also taking away from memory that Windows could be using for read and write caching for your actual documents and other files. If your drive is spending a lot of time thrashing, you might want to consider increasing the amount of memory Windows uses for the filesystem cache, rather than disabling the pagefile.

Windows 7 includes a file caching mechanism called SuperFetch that caches the most frequently accessed application files in RAM so your applications will open more quickly. It’s one of the many reasons why Windows 7 feels so much more “snappy” than previous versions—and disabling the pagefile takes away RAM that Windows could be using for caching. Note: SuperFetch was actually introduced in Windows Vista.

Put the Pagefile on a Different Drive, Not Partition

The next piece of bad advice that you’ll see or hear from would-be system tweakers is to create a separate partition for your pagefile, which is generally pointless when the partition is on the same hard drive. What you should actually do is move your pagefile to a completely different physical drive to split up the workload.

What Size should my Pagefile Be?

Seems like every IT guy I’ve ever talked to has stated the “fact” that your pagefile needs to be 1.5 to 2x your physical RAM—so if you have a 4GB system, you should have an 8GB pagefile. The problem with this logic is that if you are opening 12GB worth of in-use applications, your system is going to be extremely slow, and your hard drive is going to grind to the point where your PC will be fairly unusable. You simply will not increase or decrease performance by having a gigantic pagefile; you’ll just use up more drive space.

Mark Russinovich, the well-known Windows expert and author of the Sysinternals tools, says that if you want to optimize your pagefile size to fit your actual needs, you should follow a much different formula: The Minimum should be Peak Commit – Physical RAM, and the Maximum should be double that.

For example, if your system has 4GB of RAM and your peak memory usage was 5GB (including virtual memory), you should set your pagefile to at least 1GB and the maximum as 2GB to give you a buffer to keep you safe in case a RAM-hungry application needs it. If you have 8GB of RAM and a max 3GB of memory usage, you should still have a pagefile, but you would probably be fine with a 1 GB size. Note: If your system is configured for crash dumps you’ll need to have a larger pagefile or Windows won’t be able to write out the process memory in the event of a crash—though it’s not very useful for most end-users.
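
Russinovich’s rule of thumb is simple enough to express directly. This is our own sketch (the function name and GB units are ours); peak commit is a number you’d read off a tool like Process Explorer after a heavy workday:

```python
def pagefile_size_gb(physical_ram_gb, peak_commit_gb):
    """Minimum pagefile = peak commit minus physical RAM (clamped
    at zero), maximum = double the minimum, per the Russinovich
    guideline described above."""
    minimum = max(peak_commit_gb - physical_ram_gb, 0)
    return minimum, 2 * minimum

# 4GB of RAM with a 5GB peak commit -> 1GB minimum, 2GB maximum
print(pagefile_size_gb(4, 5))  # (1, 2)
```

Note the clamp at zero: as the article says, even when your peak commit never exceeds physical RAM, you’d still want to keep a small pagefile rather than literally setting it to nothing.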

The other size-related advice is to set the minimum and maximum size as the same so you won’t have to deal with fragmentation if Windows increases the size of the pagefile. This advice is rather silly, considering that most defrag software will defragment the pagefile even if Windows increases the size, which doesn’t happen very often.

The Bottom Line: Should You Disable It?

As we’ve seen, the only tangible benefit of disabling the pagefile is that restoring minimized applications you haven’t used in a while is going to be faster. This comes at the price of not being able to actually use all your RAM for fear of your applications crashing and burning once you hit the limit, and experiencing a lot of weird system issues in certain applications.

The vast majority of users should never disable the pagefile or mess with the pagefile settings—just let Windows deal with the pagefile and use the available RAM for file caching, processes, and SuperFetch. If you really want to speed up your PC, there are far better places to look than pagefile tweaks, starting with simply adding more RAM.

On my Windows 7 system with 6GB of RAM and a Windows-managed pagefile, every application opens quickly, and even the applications I haven’t used in a while still open almost instantaneously. I’m regularly running it up to 80-90% RAM usage, with dozens of application windows open, and I don’t see a slowdown anywhere.

If you want to read more extremely detailed information about how virtual memory and your pagefile really work, be sure to check out Mark Russinovich’s article on the subject, which is where much of this information was sourced.


Don’t agree with my conclusions? Voice your opinion in the comments, or even better—run some benchmarks to prove your point.


The How-To Geek has tested pagefile settings extensively and thinks everybody should just upgrade to Windows 7 already. His geeky articles can be found daily here on Lifehacker, How-To Geek, and Twitter.

Datel sues Microsoft, wants its Xbox 360 market back

Seems like we just can’t go a week without some corporate power plays or mudslinging making our pages. Back in October, Datel promised it would “remedy” the situation created by Microsoft’s forthcoming (now present) Dashboard update locking out its higher-capacity memory modules. The accessory company was the first (and only) third-party supplier of memory cards for the Xbox 360, but it seems that MS took a dislike to the microSD-expandable Max Memory units and has since taken the unusual step of downgrading the console so that it can read only cards up to 512MB, essentially taking Datel’s 2GB+ wares out of commission. Yeah, classy. Datel’s retaliation is in the finest Anglo-Saxon legal tradition, namely to assert antitrust concerns and to claim its right to act as a competitor to Microsoft in the memory market for Redmond’s own console. It all sounds rather silly to us too, and could probably have been avoided by a rational compromise, but what’s the fun in that?

Datel sues Microsoft, wants its Xbox 360 market back originally appeared on Engadget on Tue, 24 Nov 2009 05:19:00 EST.

[Via Joystiq; Source: Howard, Rice et al]

Samsung slims down NAND memory packaging, wafer-thin gadgets to follow

Good old Samsung and its obsession with thinness. After finally letting its 30nm 32Gb NAND chips out of the bag in May, the Korean memory maker has now successfully halved the thickness of its octa-die memory package to a shockingly thin 0.6mm (or 0.02 inches). The new stacks will start out at a 32GB size, though the real benefits are likelier to be felt down the line when the ability to pack bits more densely pays off in even higher storage capacities. Cellphones, media players and digital cameras will inevitably take the lion’s share, but we’re hopeful — eternal optimists that we are — that this could accelerate the decline of SSD prices to a borderline affordable level. Intel and Micron promised us as much, how about Samsung delivering it?

[Via Information Week]

Samsung slims down NAND memory packaging, wafer-thin gadgets to follow originally appeared on Engadget on Fri, 06 Nov 2009 08:19:00 EST.

Intel and Numonyx pave the way for scalable, higher density phase change memory

Both Intel and Numonyx have been talking up phase change memory for years now, but for some reason, we’re slightly more inclined to believe that the latest breakthrough is actually one that’ll matter to consumers. In a joint release, the two have announced a new non-volatile memory technology that supposedly “paves the way for scalable, higher density phase change memory products.” Put as simply as possible, researchers have been testing a 64Mb chip that “enables the ability to stack, or place, multiple layers of PCM arrays within a single die,” and the two are calling the discovery PCMS (phase change memory and switch). We know, you’re drowning in technobabble here, but if these two can really apply Moore’s Law to density scaling, you’ll be thanking ’em as you pick up your $50 6TB hard drive in 2014.

Intel and Numonyx pave the way for scalable, higher density phase change memory originally appeared on Engadget on Fri, 30 Oct 2009 01:51:00 EST.

Super-Sized Memory Could Fit Into Tiny Chips

North Carolina State University engineers have created a new material that could allow a fingernail-sized chip to store the equivalent of 20 high-definition DVDs or 250 million pages of text — fifty times the capacity of current memory chips.

“Instead of making a chip that stores 20 gigabytes, we have created a prototype that can [potentially] handle one terabyte,” says Jagdish Narayan, a professor of materials science and engineering at NC State. That’s at least fifty times the capacity of the best current DRAM (Dynamic Random Access Memory) systems.

The key to the breakthrough is selective doping, the process by which an impurity is added to a material to change its properties. The researchers added nickel, a metal, to magnesium oxide, a ceramic. The result has clusters of nickel atoms no bigger than 10 square nanometers that can store data. Assuming a 7-nanometer magnetic nanodot could store one bit of information, this technique would enable storage density of more than 10 trillion bits per square inch, says Narayan.
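
That quoted density is easy to sanity-check with our own arithmetic, assuming one bit per nanodot packed on a 7-nanometer grid:

```python
# Areal density if one bit sits on each 7nm x 7nm cell of a grid.
NM_PER_INCH = 25.4e6          # 1 inch = 25.4 million nanometers
dot_pitch_nm = 7
bits_per_sq_inch = (NM_PER_INCH / dot_pitch_nm) ** 2
print(f"{bits_per_sq_inch:.1e} bits per square inch")  # about 1.3e13
```

Roughly 13 trillion bits per square inch, which squares with Narayan’s claim of “more than 10 trillion bits per square inch.”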

Expanding current memory systems is a hot topic of research. At the University of California Berkeley, Ting Xu, an assistant professor of materials science, has also developed a way to guide the self-assembly of nano-sized elements in precise patterns. Xu is trying to extend the technique to create paper-thin, printable solar cells and ultra-small electronic devices.

Other researchers have demonstrated a carbon nanotube-based technique for storing data that could potentially last more than a billion years, thereby improving on the lifespan of storage.

A big challenge for Narayan and his team, who have been working on the topic for more than five years, was the creation of nanodots that can be aligned precisely.

“We need to be able to control the orientation of each nanodot,” says Narayan, “because any information that you store in it has to be read quickly and exactly the same way.” Earlier, the researchers could make only one-layer structures, and 3-D self-assembly of nanodots wasn’t possible. But using pulsed lasers they have been able to achieve greater control over the process.

Unlike many research breakthroughs, Narayan says, his team’s work is ready to go into manufacturing in just about a year or two. And memory systems based on doped nanodots won’t be significantly more expensive than current systems.

“We haven’t scaled up our prototype but we don’t think it should cost a lot more to do this commercially,” he says. “The key is to find someone to start on the large-scale manufacturing process.”

Photo: RAM (redjar/Flickr)