Here Comes the Zettabyte Age

Big pile of DVDs. Photo by John A Ward

How much information is out there?

For most of us, “a crapload” is a sufficiently accurate answer. But for a few obsessive data analysts, more precision is necessary. According to a recent study by market-research company IDC, sponsored by storage company EMC, the size of the information universe is currently 800,000 petabytes. Each petabyte is a million gigabytes, or the equivalent of 1,000 one-terabyte hard drives.

If you stored all of this data on DVDs, the study’s authors say, the stack would reach from the Earth to the moon and back.

That’s a 62% increase over the amount of digital information floating around the year before — but it’s just a down payment on next year’s total, which will reach 1.2 million petabytes, or 1.2 zettabytes.

If these growth rates continue, by 2020 the digital universe will total 35 zettabytes, or 44 times more than in 2009.

It’s interesting to compare IDC’s study with a recent UC San Diego report on how much information Americans consume per year. According to that study, media consumption in 2008 added up to 3.6 zettabytes and 10,845 trillion words, or about 34 gigabytes per person per day.
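The figures hang together under some quick arithmetic. The unit definitions below follow the article (a petabyte is a million gigabytes, a zettabyte is a million petabytes); the rough US population figure is our own assumption, not something from either study.

```python
# Quick sanity checks on the figures above. Unit definitions follow the
# article (1 PB = 1,000,000 GB); the ~300 million US population figure
# is an assumption, not a number from either study.

PB_PER_ZB = 1_000_000            # 1 zettabyte = 1,000,000 petabytes
GB_PER_PB = 1_000_000            # 1 petabyte  = 1,000,000 gigabytes

universe_2009_zb = 800_000 / PB_PER_ZB        # 0.8 ZB
universe_2020_zb = 35.0
print(f"2009 digital universe: {universe_2009_zb} ZB")
print(f"Growth to 2020: {universe_2020_zb / universe_2009_zb:.0f}x")   # ~44x

# UCSD consumption study: 3.6 ZB consumed per year, spread across
# roughly 300 million Americans.
consumed_gb = 3.6 * PB_PER_ZB * GB_PER_PB
per_person_per_day = consumed_gb / (300_000_000 * 365)
print(f"Per person per day: {per_person_per_day:.0f} GB")              # ~33 GB
```

That lands within a gigabyte or so of the study's "about 34 gigabytes per person per day," depending on what population figure you assume.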

Much of the media we view (TV shows, streaming video from YouTube) is centrally stored, on internet-connected servers, so the totals for consumption are naturally higher than the storage requirements.

IDC notes that while data storage will increase 44-fold by 2020, the number of IT professionals worldwide will only grow by 40%, which means each IT guy is going to have a lot more data to oversee.

Good luck with that, guys!

Chart showing relative size of digital universe in 2009 and 2020

See Also:


EMC-IDC Digital Universe Study


26 Percent of Wired’s Mobile Traffic Comes From the iPad

Chart: Percentage of Wired.com mobile visitors using iPhone, iPod and iPad
Less than three weeks after its launch, Apple’s iPad already accounts for 26 percent of the mobile devices accessing Wired.com.

Overall, mobile devices account for between 2.3 percent and 3.5 percent of our traffic. For April 3 to 19, iPad users represented 0.91 percent of total site traffic.
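Those percentages are consistent with each other, assuming the roughly 3.5 percent mobile share applies to the same April 3 to 19 window as the iPad figure (an assumption on our part, since the post gives the mobile share only as a range):

```python
# A rough consistency check on the percentages above, assuming the
# ~3.5 percent mobile share covers the same April 3-19 window as the
# 0.91 percent iPad figure (my assumption, not stated in the post).

ipad_share_of_total = 0.91      # percent of all Wired.com traffic
mobile_share_of_total = 3.5     # percent of all traffic from mobile devices

ipad_share_of_mobile = ipad_share_of_total / mobile_share_of_total * 100
print(f"iPad share of mobile traffic: {ipad_share_of_mobile:.0f}%")   # ~26%
```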

For the past year, the vast majority of mobile visitors to Wired have been using the iPhone. Before April, about 10 percent were using the iPod Touch, and 15 to 18 percent were on other devices, led by the Motorola Droid (with 5 to 7 percent of mobile traffic).

But with the launch of the iPad on April 3, it seems that many iPhone users have picked up iPads — and are finding them a good way to browse this site. The sudden jump in iPad users is matched by a declining share of iPhone and iPod Touch users, which suggests that most iPad customers are people who were already accustomed to mobile browsing with an Apple handheld, and are trading up to a bigger screen — rather than coming from another platform.

It’s too early to say whether the iPad is bringing new mobile users to Wired.com. The overall proportion of mobile users has remained more or less level this month, and because the mobile total varies from month to month, we need more data before we can draw conclusions about the number of new mobile users.

One conclusion we can draw: iPad users are using it to browse the web, and they’re doing it a lot.

And yes, we are aware of the irony that the majority of Wired.com’s videos, which use an Adobe Flash-based player, don’t play on the iPad. We’re working on that, starting with our homepage, which became iPad-compatible starting Wednesday evening, thanks to Wired.com managing editor Pam Statz.

Chart: Percentage of mobile users visiting Wired.com using iPhone or iPod Touch, iPad, and other devices. This month’s data is partial, covering April 3 to 19.



NVIDIA GeForce GTX 480 set up in 3-way SLI, tested against Radeon HD 5870 and 5970

Not many mortals will ever have to worry about choosing between a three-way GeForce GTX 480 SLI setup, an equally numerous Radeon HD 5870 array, or a dual-card HD 5970 monstrosity, but we know plenty of people would care about who the winner might be. Preliminary notes here include the fun facts that a 1-kilowatt PSU provided insufficient power for NVIDIA's hardware, while the mighty Core i7-965 test bench CPU proved to be a bottleneck in some situations. Appropriately upgraded to a six-core Core i7-980X and a 1,200W power supply, the testers proceeded to carry out the sacred act of benchmarking the snot out of these superpowered rigs. We won't spoil the final results of the bar chart warfare here, but rest assured both camps score clear wins in particular games and circumstances. The source link shall reveal all.

NVIDIA GeForce GTX 480 set up in 3-way SLI, tested against Radeon HD 5870 and 5970 originally appeared on Engadget on Tue, 20 Apr 2010 04:16:00 EST.

Windows 7 is safer when the admin isn’t around

Not that we necessarily needed a report to tell us this, but the fewer privileges you afford yourself as a Windows user, the more secure your operating system becomes. Such is the conclusion of a new report from BeyondTrust, a company that — surprise, surprise — sells software for "privileged access management." The only way we use Windows 7 is as admins and we've never had a moment's bother, but some of you like stats, and others among you might be involved in business, which tends to make people a little more antsy about these things. So for your collective sake, let there be pie charts! The report looks into vulnerabilities disclosed by Microsoft during 2009 and concludes that all 55 reported Microsoft Office issues and 94 percent of the 33 listed for IE could be prevented by simply running under a standard user account. Or using better software, presumably. Hit the PDF source for more info — go on, it's not like you have anything better to do while waiting for the Large Hadron Collider to go boom.

Windows 7 is safer when the admin isn't around originally appeared on Engadget on Tue, 30 Mar 2010 06:44:00 EST.

Stats: iPhone OS is still king of the mobile web space, but Android is nipping at its heels

AdMob serves north of 10 billion ads per month to more than 15,000 mobile websites and applications. Thus, although its data is about ad rather than page impressions, it can be taken as a pretty robust indicator of how web usage habits are developing and changing over time. Android is the big standout of its most recent figures, with Google loyalists now constituting a cool 42 percent of AdMob’s smartphone audience in the US. With the EVO 4G and Galaxy S rapidly approaching, we wouldn’t be surprised by the little green droid stealing away the US share crown, at least until Apple counters with its next slice of magical machinery. Looking at the global stage, Android has also recently skipped ahead of Symbian, with a 24 percent share versus 18 percent for the smartphone leader. Together with BlackBerry OS, Symbian is still the predominant operating system in terms of smartphone sales, but it’s interesting to see both falling behind in the field of web or application usage, which is what this metric seeks to measure. Figures from Net Applications (to be found at the TheAppleBlog link) and ArsTechnica‘s own mobile user numbers corroborate these findings.

Stats: iPhone OS is still king of the mobile web space, but Android is nipping at its heels originally appeared on Engadget on Mon, 29 Mar 2010 10:18:00 EST.

The Future of Storage [Memory Forever]

If you take the guts of a Blu-ray or DVD player, blow it up, and spread it across a work bench, it looks like this. So you might be surprised to know that you're looking at the future of storage.

A laser beam whose wavelength is being monitored by this Soviet-looking machine is being bounced from mirror to mirror to mirror before it lands on a spinning disc the size of a CD, but orange, and transparent. It's reading the holograms buried inside the disc, gigabytes of random test data.

This work table is deep inside the labyrinthine complex that is GE’s Global Research Lab, 550 acres of big machines and big brains, in the hinterlands of Niskayuna, New York. It’s where the company that brought us 30 Rock invents the future of energy, aviation, healthcare, and dozens of other mega-industries, including, as it turns out, data storage.

***

Hard drives, DVDs, USB sticks: This is where we store our digital lives. But while our data is timeless, our storage devices aren’t. So, what’s next? And then what?

Data storage is something most people don't spend much time thinking about, and if we do, it's in abstract terms. Laptops have a fixed amount of space; we pay for more, but accept less. DVDs hold a certain length of video, or a healthy chunk of a music collection; these are disposable. Flash drives move stuff from one place to another; we sense that they're different from hard drives, but we're not sure how.

What we know is that we need to store stuff, somewhere. And by we, I mean we: our network infrastructure won’t be ready for widespread cloud computing, or that fantasy of downloading everything you’ll ever watch in full HD, for a very, very long time, and until then—or for people with unease about that concept, even then—storage is something we need to think about.

In 2010, storage tech is in flux. Here's how we—and the people and companies we're slowly (but surely) handing our data over to—store stuff now, and more importantly, later.

Hard Drives Aren’t Dead

Hard drives! You almost certainly own at least one of these, in your laptop, desktop, or even portable music player. The basic principle revolves (ha!) around the reading and writing of data onto a magnetized, metallic platter, which is assembled inside a hard drive's case alongside a head, which is roughly analogous to the needle on a record player, except instead of reading variations in a physical groove, this head floats above the platter, reading tiny magnetic variations from a short distance.

If the immediate evocation of a record player didn’t tip you off, this technology has a long legacy (read: It’s old as hell): The first machine to utilize the concept was built in 1956; the first modern-looking, reasonably small hard drive (at 5MB, no less!) shipped in 1980, from Seagate.

The story since then has been surprisingly uncomplicated, with steady advances in data storage density, decreases in size and drastic drops in price. The first 1GB hard drive, built in 1980, weighed over 500 pounds. Today, a 2 terabyte—that's 2,000 times more capacious—hard drive is small enough to tuck into a loose jeans pocket, and can be had for under $140.

But surely this technology is reaching a breaking point, right? Not quite. With storage density approaching practical maximums, hard drive manufacturers resurrected an old theory somewhere around 2005: perpendicular storage. Seagate's senior vice president of Recording Media R&D and Operations, Mark E. Re:

We used to use a recording method called longitudinal recording, which is called that because the magnetization in the storage layer on the disk or platter is in a plane. It's parallel to the surface. And when we moved to perpendicular [storage], we change the magnetization layer on the disk so now it aligns perpendicular to the surface.

Why?

When you're trying to get your bits closer and closer together with longitudinal storage, the magnetization didn't want to stay there. It wanted to spring apart, like if you're putting two bar magnets together. But if you align them perpendicular…they want to be closer together.

Translation: More data, less surface space.

Seagate saw longitudinal recording limiting their hard drives to somewhere around 100 gigabits (12.5 gigabytes) per square inch, and at the rate things were going, without perpendicular storage, hard drive makers would be up against a wall.

With perpendicular recording, though, they think they can eventually hit somewhere around 1 terabit (about 128 gigabytes) per square inch. Today, in 2010, they're maxing out at about 400 gigabits per square inch in stuff you can buy off the shelf. There are quite a few years left of regular hard drives getting larger, faster and cheaper before the technology runs its course, and that's not even counting the wilder hard drive research that's going on. Heat-assisted magnetic recording uses localized heating of disc surfaces for ultra-dense data writing. Bit-patterned media could reduce the space needed for a bit on a hard drive's surface from 50 magnetic grains to one, by encoding the platter's substrate with molecular patterns.

Seagate's hazy prediction for what this actually means for hard drives: upwards of 50 terabits (6.25 terabytes) per square inch, which companies will be working towards, and making money from, for years. Hard drives aren't going anywhere—at least, not for now.
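If bits-per-square-inch is hard to picture, here are the same figures run through a plain decimal bits-to-bytes conversion; the article's "about 128 gigabytes" rounds with binary prefixes instead.

```python
# The areal-density figures above, converted to bytes. This uses plain
# decimal units (1 terabit = 1,000 gigabits, 8 bits per byte).

def gigabits_to_gigabytes(gigabits: float) -> float:
    """Convert gigabits to gigabytes (8 bits per byte)."""
    return gigabits / 8

print(gigabits_to_gigabytes(100))             # longitudinal ceiling: 12.5 GB/in^2
print(gigabits_to_gigabytes(1_000))           # perpendicular target: 125 GB/in^2
print(gigabits_to_gigabytes(400))             # shipping in 2010: 50 GB/in^2
print(gigabits_to_gigabytes(50_000) / 1_000)  # long-term hope: 6.25 TB/in^2
```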

The Inevitable Rise of SSDs

So what about SSDs, or solid-state drives? They’re by far the buzziest of the storage options, and we’re constantly told that solid-state drives will replace hard drives, like, now. That’s not quite right. Solid-state drives, which have no moving parts and store data with electrical charge rather than magnetism, are taking over—just, not everything.
The basics, from our last Giz Explains on the subject:

What’s inside is a bunch of flash memory chips and a controller running the show. There are no moving parts, so an SSD doesn’t need to start spinning, doesn’t need to physically hunt data scattered across the drive and doesn’t make a whirrrrr. The result is that it’s crazy faster than a regular hard drive in nearly every way, so you have insanely quick boot times (an old video, but it stands), application launches, random writes and almost every other measure of drive performance (writing large files excepted).

So, they’re fast. They don’t catastrophically fail (though they do slowly degrade). They’re perfect for laptops! And you probably want one.

But the future of SSDs is a fairly narrow one, at least for now: Consumer applications range from notebooks to desktops to NAS storage, but they’re all just that: consumer solutions. While we’re going to have to wait a few more years for Flash storage to reach a truly reasonable price point for our new gaming PCs and notebooks, the enterprise world—where data needs are rapidly outpacing ours, and the scale of storage is so much larger—will have to wait much longer.

The fastest area of growth for solid-state storage isn’t even in HDD-like SSDs anyway—it’s in portable devices, like smartphones (and soon, tablets). This storage is of a different nature, though: speed isn’t terribly important in a mobile device, nor is capacity. People are going to be fine with their iPad’s low-mid-range chips of flash storage, because they’ll run apps, play movies and store magazines just fine. Meanwhile, Google will continue to buy hundreds of thousands of massive hard drives to keep up with demand, and the rest of us will gleefully shell out for the rapidly cheapening solid-state drives that will power our laptops. This will continue in parallel, for as far as the eye can see.

But what will the SSDs of the future be like? Research now is focused on eliminating their comparative weaknesses more than anything else. They'll become more buyable, I guess? Cheaper? Longer-lived? (Current flash storage of the more affordable multi-level cell variety can only be written to about 10,000 times before failure.) Yes, all of that. Doron Myersdorf, general manager of SanDisk's SSD group, from our SSD Giz Explains: "More granular algorithms with caching and prediction means there's less unnecessary erasing and writing." In simpler terms, companies are getting smarter about writing data to SSDs, with their limited lifespan in mind. And on the storage capacity/price issue:

There have been several walls in the history of the [flash] industry—there was the transition to MLC, then three bits per cell, then four—every time there is some physical wall that physics doesn't allow you to pass, there is always a new shift of paradigm as to how we make the next step on the performance curve.

SSDs as we know them today are still young, and they've got a long way to go. And before the technology can completely take over the consumer space, we're going to see more and more awkward hybrid products, like Samsung's MH80 drive, which uses a small bank of flash memory for some tasks, and spins up the hard drive only when necessary. Progress!
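Myersdorf's point about "more granular algorithms" mostly boils down to wear leveling: each flash block survives only so many erase cycles, so the controller spreads writes around instead of hammering the same block. A toy sketch of the idea, nothing like real SSD firmware:

```python
# A toy illustration of wear leveling. Each flash block survives only a
# limited number of erase cycles, so the controller steers new writes
# toward the least-worn block. Real SSD firmware is far more elaborate
# (mapping tables, garbage collection, over-provisioning); this only
# shows why spreading writes extends the drive's life.

MAX_ERASE_CYCLES = 10_000   # rough figure for MLC flash, per the article

class ToyFlash:
    def __init__(self, num_blocks: int):
        self.erase_counts = [0] * num_blocks

    def write(self, data: bytes) -> int:
        # Pick the least-worn block instead of always reusing block 0.
        # (The toy ignores the payload; it only tracks wear.)
        block = self.erase_counts.index(min(self.erase_counts))
        if self.erase_counts[block] >= MAX_ERASE_CYCLES:
            raise IOError("all blocks worn out")
        self.erase_counts[block] += 1   # each rewrite costs an erase cycle
        return block

drive = ToyFlash(num_blocks=4)
for _ in range(12):
    drive.write(b"log entry")
print(drive.erase_counts)   # [3, 3, 3, 3] -- wear spread evenly
```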

Your next computer probably won’t have one. But the one after that? Sure. Meanwhile, cheap flash storage, like the stuff inside your crappy USB key, will only get cheaper. And when 64GB thumb drives are commonplace and cheap, you’ll probably stop caring about optical media, like Blu-ray discs, for file storage and sharing. Or not.

Our Holographic Future

Optical media isn't going anywhere, either. Put another way, Blu-ray isn't going to be the last disc you buy—it's just the last one where data will be stored only on the surface. Holographic storage, like GE is working on, and which we got to see up close at their Global Research labs, stores data down inside the disc, in many, many layers (GE's demoed up to 75), encoding the data in thousands and thousands of tiny holograms throughout the entire disc. The secret sauce is the material the disc is made out of, and how it reacts to light. On a broader level, where GE's holographic storage differs from the other major approach to holographic storage (called page-based), and what allows it to reach densities of 1TB per disc, is that it uses even tinier micro-holograms that store less data per individual hologram, but more in aggregate.

GE is mostly pitching the tech to archivists for now—like our friends at the Library of Congress, who wanna hold onto stuff for a real long time—since the discs, GE says, last for 30 years. But what makes it viable as a storage tech you might get your hands on soon after it launches in 2012 is that it's designed to fit in with the current optical media infrastructure, meaning it'll be cheaper and easier to roll out than some radically different tech. That is, the discs are the same physical size and shape as CDs and DVDs, and they use a laser that's very similar to Blu-ray's, even using the same wavelength. On a hardware level, it just uses a slightly different optical element, but the rest basically comes down to software/firmware, meaning you might still be able to play your Blu-ray discs in a holographic storage drive. (This exploded view of a disc being read, that orange spinning thing, is what all readers look like in a laboratory, even Blu-ray drives—because it's easier to tweak settings than in their actual product form.)

Sci-Fi

After SSDs and hard drives are reduced to hilarious relics, mentioned only to shock classrooms full of children to attention with a jolt of pure absurdity ("so you're saying they spun? In circles?"), how will we store data? A few of the nuttier possibilities:

Carbon Nanoballs:

Interest is growing in the use of metallofullerenes – carbon “cages” with embedded metallic compounds – as materials for miniature data storage devices. Researchers at Empa have discovered that metallofullerenes are capable of forming ordered supramolecular structures with different orientations. By specifically manipulating these orientations it might be possible to store and subsequently read out information.

Two of pop-science’s favorite buzz words, united.

Molecular memory:

What if, instead of carving transistors and other microelectronic devices out of chunks of silicon, you used organic molecules? Even large molecules are only a few nanometers in size; an integrated circuit using molecules could contain trillions of electronic devices—making possible tiny supercomputers or memories with a million times the storage density of today's semiconductor chips.

A thumb drive with more capacity than your entire NAS would actually have to be made arbitrarily larger, just so you wouldn't lose it.

Bacteria:

Trust your data with tiny bugs: Artificial DNA with encoded information can be added to the genome of common bacteria, thus preserving the data….

According to researchers, up to 100 bits of data can be attached to each organism. Scientists successfully encoded and attached the phrase "e=mc2 1905" to the DNA of Bacillus subtilis, a common soil bacterium.

Your storage drive could literally be alive, one day.
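For what it's worth, those numbers are plausible on the back of an envelope, assuming plain 8-bit ASCII and the textbook two bits per DNA base (the study's actual encoding scheme was surely different):

```python
# Does "e=mc2 1905" actually fit in the ~100 bits the researchers quote?
# Assumes plain 8-bit ASCII and 2 bits per DNA base (A/C/G/T); the real
# encoding scheme in the study was almost certainly different.

message = "e=mc2 1905"
bits_needed = len(message.encode("ascii")) * 8
print(f"{bits_needed} bits for {len(message)} ASCII characters")   # 80 bits

# At 2 bits per base, the full 100-bit budget is roughly 50 bases of
# artificial DNA per bacterium.
print(f"~{100 // 2} bases for the full 100-bit budget")
```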

Quantum mechanics: Data encoded on an unfathomable scale:

In a quantum computer, a single bit of information is encoded into a property of a quantum mechanical system—the spin of an electron, for example. In most arrangements that rely on nitrogen atoms in diamond to store data, reading the information also resets the qubit, which means there is only one opportunity to measure the state of the qubit.

Granted, research into this now is focused on storing tiny amounts of data for a matter of seconds, which is just long enough to allow a quantum computer to barely function, but still: potential!

Data: It’s everywhere. And one day, we’ll be able to take advantage of that.


Still something you wanna know? Send questions about platters, disks, bits, bops, beeps or boops here, with “Giz Explains” in the subject line.

Memory [Forever] is our week-long consideration of what it really means when our memories, encoded in bits, flow in a million directions, and might truly live forever.

Giz Explains: How Data Dies (and How It Can Be Saved) [Giz Explains]

Bits don't have expiration dates. But memories will only live forever if the media and file formats holding them remain intact and coherent. Time can be as deadly to data storage as it is to carbon-based life forms.

There are lots of ways data can die: YouTube can pull a video offline before anybody snags it, your hard drive can crash, taking ultra-rare Grateful Dead bootlegs that you never got a chance to upload to Usenet with it, or maybe you designed a brilliant piece of visual art a decade ago in some kooky file format that simply doesn’t exist anymore, and there’s no possible way to view the file without traveling to some creepy dude’s basement a thousand miles away.

What we’re talking about is digital rot—or data rot or bit decay or whatever you’d like to call it—systemic processes which can mean death to data. Kind of a problem when you’d like to keep it around forever. Let’s paint this in broad strokes: You can roughly break the major kinds of rot into hardware, software and network. That is, the hardware that breaks down, the formats that go extinct, and the online stuff that vanishes one way or another.

The Hard Life of Hardware

Everything’s gotta be stored on something. And guess what? All media age. (Except diamonds—bling bling, biatch.) Brain cells die, film degrades and hard drives break.

A sampling of common digital media and their life expectancies (assuming you take care of them):
• Floppy disk – This can theoretically survive between 3 and 10 million passes
• CD and DVDs – It depends heavily on the materials used in their construction (PDF), but you're looking at anywhere between 2 and 10 years, and up to 25 years in the best of circumstances
• Flash storage – Also depends on the type, letting you write roughly 10,000 cycles with multi-level cell flash memory, or 100,000 with single-level cell flash
• Hard disk drives – Kind of a crapshoot—anecdotally, five years is a good average, though they can last shorter or longer, depending, again, on how they're built

Google, with its millions of servers, is in the best position to test hard drives from every manufacturer, and conducted a massive study of HDD failure. Basically, if a drive makes it past the first six months, it’s pretty likely to make it through Year 4, but it is going to die at some point (and makes/models die in batches). As you probably don’t need to be told, hard drives can fail in any number of ways.

In other words, whatever you’re storing your precious data on, back it up, preferably with a mix of drives or media from different manufacturers/time periods.
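One practical way to keep those copies honest: store checksums for each one and compare them every so often, since a silent mismatch between copies is usually how bit rot announces itself. A minimal sketch, with hypothetical backup paths:

```python
# Hash every file under two (hypothetical) backup directories and flag
# disagreements. Whole files are read into memory, which is fine for a
# sketch; stream large files in practice.

import hashlib
from pathlib import Path

def checksums(root: Path) -> dict[str, str]:
    """Map each file's relative path to its SHA-256 hex digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

primary = checksums(Path("/backups/drive_a"))
mirror = checksums(Path("/backups/drive_b"))

for rel_path, digest in primary.items():
    if mirror.get(rel_path) != digest:
        print(f"MISMATCH or missing on mirror: {rel_path}")
```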

But what if you’re, say, the Library of Congress, the largest library in the world, charged with a mission “to sustain and preserve a universal collection of knowledge and creativity for future generations,” and suddenly confronted—after 200 years of relatively tranquil existence—by an unending, ever-expanding digital deluge that must be archived and cataloged? On top of a copy of every piece of material that’s registered through the United States Copyright Office, and the two centuries of (oftentimes badly damaged) cultural history you’re already trying to preserve? How do you store stuff?


"DVDs and CDs aren't even considered storage," say Martha Anderson and Beth Dulaban, from the LoC's Office of Strategic Initiatives. They need to transfer shiny-silver-disc content to something sturdier to meet their mission requirements. For digital content, the Library uses a mix of hard disks and tape, like Oracle's StorageTek T10000B 1TB tape drives, rated for 30 years of archive life. At the Packard Campus, the main battle station for the LoC's audio-visual preservation, they have 10,000 tapes providing 10 petabytes of capacity, Gregory Lukow, from the LoC's Motion Picture, Broadcasting & Recorded Sound Division, told me. In the video above, you can see a SAMMA robot hard at work. These do analog-to-digital conversion en masse, and the LoC has four of 'em.

The key, though, is that even though the LoC works with drive manufacturers on boosting reliability and meeting the Library's technical specifications, they have a policy of redundancy and diversity—two to three copies, maybe spread across different states, and stored in different kinds of hardware running different kinds of software. The Packard Campus, which is where music and video are archived and preserved in crazy labs with robots, mirrors everything to a secret location via fiber optic cable. While you probably don't have secret bunkers to stash your porn, it's a good general guideline: More copies on more disks is more better.

A Format Can Be a Tomb

It's obvious, though, that storage media age and die. The more insidious problem, particularly with "born digital" content—stuff that started life as bits—is format obsolescence. That is, just 'cause a video wrapped up in MKV, or an Ogg Vorbis music file, or a DOCX file is readable on computers today doesn't mean it will be 20 years from now. And if nothing can read what's inside the file, the data inside is basically lost.

The way you might’ve already experienced this, in a way, is via DRM that’s been deactivated (like a bunch of digital music stores did after being crushed by iTunes), rendering your songs wrapped up in it completely useless. I suspect people who bought into ebooks early, before the emergence of EPUB, are going to be effed in the ay in a similar manner. And don’t even get us started on HD DVD and other failed video and audio physical formats—that’s potentially a double whammy of format death.

It’s important, then, to store your memories using formats that are legit standards that’ll be around for a longass time, if not quite forever. Growing recognition of the problem, particularly as it pertains to ephemeral web content, is part of what’s behind the push for open standards—proprietary standards, from a long-term survival standpoint, are not the best idea, ’cause once whoever makes them dies, the format may die too.

The Library of Congress has picked out seven points that’ll give you an idea of how sustainable a format is—that is, likely to outlast your current Lady Gaga obsession:
• Disclosure – how open the specs are
• Adoption – “an open format that nobody’s adopted isn’t too useful to us”
• Transparency – how readable it is on a technical level
• Self-documentation – decent metadata, which is in some ways the secret challenge, given that it becomes more valuable as the amount of data you have grows exponentially
• External dependencies – how much you need particular hardware to read it, for example
• Impact of patents
• Technical protection mechanisms – is DRM in the way?

Quality is also an issue. So, for instance, for master digital archives of video, the Library uses Motion JPEG 2000 in an MXF wrapper, because it's mathematically lossless. It uses MPEG-2 for sub-masters, which are the source material for MPEG-4 copies that patrons can access. Or, as another example, for a long time, "PDF was considered persona non grata" because it was proprietary, but since Adobe's opened it up, they're now working with Adobe on an archivable form of PDF.

The advantage the Library has with analog-to-digital conversions is that they get to dictate the format and specs—that’s not so with most of the content out there. For instance, there’s not really an agreed upon web video standard—witness the H.264 vs. Ogg Theora codec war, though that’s lookin’ more and more like it’s going toward H.264—so web video is considered “highly at risk.” Despite the large amount of web video the Library has captured—after a year working out the process for doing so, Martha and Beth “don’t have real high hopes for them surviving.” YouTube provides one form of hope, though, in that there’s so many YouTube videos, and so many copies, “there’s bound to be some community interest in keeping them alive over time.”

Pulling the Plug

There might be community interest in keeping the copies of Trolololo alive and playable for the next generation from a format standpoint, but what if Google suddenly pulls the plug on YouTube? How much of what's there would be lost forever? Or photos uploaded to Flickr and Facebook that have been wiped from hard drives, since they're in the cloud. Consider, for instance, everything that would be lost if Wikipedia really did run out of money, and was shut down. Or Twitter.

This isn't a purely hypothetical "what if" scenario. Last year, Yahoo, who has a habit of closing services, killed GeoCities—you had a GeoCities page, right?—nuking not just people's personal pages on an individual level, but really deleting a massive archive of web history. Yahoo paid more than $3.5 billion for GeoCities just over 10 years ago. So it could happen, even to popular services—especially ones that operate under the radar, legal or otherwise, like say, Oink.CD.

They're fragile, yeah, but bits, unlike ink on paper or brain cells, can live forever, if they're taken care of. As we're awash in an ever-cresting tsunami of data, sometimes it's easy to forget that that can be a pretty big if.

Thanks to Beth, Martha and Greg at the Library of Congress, the friendliest government employees I’ve ever talked to! Still something you wanna know? Send questions about data, Data or Reading Rainbow here with “Giz Explains” in the subject line.

Original photo from RAMAC Restoration site

Memory [Forever] is our week-long consideration of what it really means when our memories, encoded in bits, flow in a million directions, and might truly live forever.

NPD: Xbox 360 wins US sales war in a downbeat February

The cosmos must clearly have approved of Microsoft’s actions over this past month, as today we’re hearing the Xbox 360 broke out of its competitive sales funk to claim the title of “month’s best-selling console” … for the first time in two years. Redmond’s own Aaron Greenberg describes it as the best February in the console’s history, with 422,000 units sold outshining the consistently popular Wii (397,900) and the resurgent PS3 (360,100 consoles shifted, which was a 30 percent improvement year-on-year). In spite of the happy campers in Redmond and Tokyo, the overall numbers for the games industry were down 15 percent on 2009’s revenues, indicating our collective gaming appetite is starting to dry up. Good thing we’ve got all those motion-sensing accessories coming up to reignite our fire.
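For the curious, here's how those hardware numbers split, plus the February 2009 PS3 figure implied by that 30 percent improvement (a derived estimate, not an NPD-published number):

```python
# The month's hardware numbers as quoted above, plus two derived
# figures: each console's share of the big three, and the Feb 2009 PS3
# total implied by the 30 percent year-on-year improvement.

sales = {"Xbox 360": 422_000, "Wii": 397_900, "PS3": 360_100}
total = sum(sales.values())

for console, units in sales.items():
    print(f"{console}: {units:,} units ({units / total:.0%} of the big three)")

implied_ps3_feb_2009 = sales["PS3"] / 1.30
print(f"Implied PS3 sales in Feb 2009: ~{implied_ps3_feb_2009:,.0f}")   # ~277,000
```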

NPD: Xbox 360 wins US sales war in a downbeat February originally appeared on Engadget on Fri, 12 Mar 2010 03:16:00 EST.

Is Amazon hiring devs to build a robust web browser for Kindle?

Are you a software dev with a bachelor's degree in computer science, familiarity with current Web standards, and experience with browser engines, Linux on embedded devices, and Java? If so, do we have the job for you. Lab126, the group at Amazon responsible for the Kindle, wants you to help "conceive, design, and bring to market" a new embedded browser on a Linux device. Might this be a sign that the company is ready to start taking web browsing on the e-reader seriously? We don't know, but it sure sparked some interesting discussion over at All Things Digital. As Peter Kafka points out, a decent browser for the thing is pretty much a no-brainer in light of the Apple iPad. On the other hand, the idea of a robust browser on the Kindle has its own complications. What about subscription content like the New York Times — why would anyone pay for something that's available for free on the web, if you're using the same device to view both? And what about all that new data traffic? Surely AT&T will have something to say about that. Of course, we've been hearing enough scuttlebutt about a mysterious next-gen device being developed at Amazon that perhaps this has nothing to do with the Kindle whatsoever. Who knows? These are all questions that will have to be answered sooner or later, but in the meantime we can say with some certainty that E Ink is definitely not the best way to troll 4chan.

Is Amazon hiring devs to build a robust web browser for Kindle? originally appeared on Engadget on Tue, 09 Mar 2010 16:21:00 EST.

AT&T USBConnect Turbo and Velocity are carrier’s first LG and GPS modems, respectively

Location-based services have finally melted our brains to the point where we're completely useless without immediate and constant access to Google Maps or a reasonable facsimile — we couldn't fold a paper map if we tried, and even if we could, we'd spend an hour looking for the pulsing blue dot. That's why we're so delighted to hear that AT&T has finally outed its very first GPS-enabled USB modem, the USBConnect Velocity from Option, which includes a so-called Option GPS Control Panel for injecting your whereabouts into popular services like Yahoo and Bing (Google, curiously, isn't mentioned). The other newbie to the lineup is the USBConnect Turbo — AT&T's very first modem from LG — with an "ergonomic design" and versatile connector for even the most awkward ports (MacBook, we're looking straight at you). Both devices will be available on the 7th of the month; the Turbo will be free on contract after rebate while the Velocity comes in at $29.99.

AT&T USBConnect Turbo and Velocity are carrier's first LG and GPS modems, respectively originally appeared on Engadget on Thu, 04 Mar 2010 13:11:00 EST.