OpenAI’s latest machine learning model has arrived. On Friday, the company released o3-mini, and it’s available to try now. What’s more, for the first time OpenAI is making one of its “reasoning” models available to free users of ChatGPT. If you want to try it yourself, select the “Reason” button under the message composer to get started.
According to OpenAI, o3-mini is faster and more accurate than its predecessor, o1-mini. In A/B testing, the company found o3-mini was 24 percent faster than o1 at delivering a response. Moreover, set to its “medium” reasoning effort, the new model can come close to the performance of the more expensive o1 system in some math, coding and science benchmarks. Like OpenAI’s other reasoning models, o3-mini will show you how it arrived at an answer instead of simply responding to a prompt. Notably, the model works with ChatGPT Search out of the box, enabling it to comb the web for the latest information and useful links. OpenAI says it’s working on integrating search across all of its reasoning models.
“The release of OpenAI o3-mini marks another step in OpenAI’s mission to push the boundaries of cost-effective intelligence. By optimizing reasoning for STEM domains while keeping costs low, we’re making high-quality AI even more accessible,” OpenAI said. “This model continues our track record of driving down the cost of intelligence — reducing per-token pricing by 95% since launching GPT-4 — while maintaining top-tier reasoning capabilities. As AI adoption expands, we remain committed to leading at the frontier, building models that balance intelligence, efficiency, and safety at scale.”
With today’s announcement, o3-mini will replace o1-mini in the model picker. Additionally, OpenAI is tripling the rate limit for Plus and Team ChatGPT users from 50 messages per day with o1-mini to 150 messages per day for o3-mini. OpenAI’s recently launched $200 per month Pro tier offers unlimited access to the new system.
When OpenAI first previewed o3 and o3-mini at the end of last year, CEO Sam Altman said the latter would arrive “around the end of January.” Altman gave a more concrete timeline on January 17 when he wrote on X that OpenAI was “planning to ship in a couple of weeks.”
Now that it’s here, it’s safe to say o3-mini arrives with a sense of urgency. On January 20, the same day Altman was attending Donald Trump’s inauguration, China’s DeepSeek quietly released its R1 chain-of-thought model. By January 27, the company’s chatbot had surpassed ChatGPT as the most-downloaded free app on the US App Store after going viral. DeepSeek’s overnight success wiped out $1 trillion in stock market value and almost certainly left OpenAI blindsided.
In the aftermath of last week, OpenAI said it was working with Microsoft to identify two accounts the company claims may have distilled its models. Distillation is the process of transferring the knowledge of an advanced AI system to a smaller, more efficient one. Distillation is not in itself a controversial practice; DeepSeek has used it on its own R1 model to train its smaller algorithms. In fact, OpenAI’s terms of service allow for distillation as long as users don’t train competing models on the outputs of the company’s AI.
OpenAI did not explicitly name DeepSeek. “We know [China]-based companies — and others — are constantly trying to distill the models of leading US AI companies,” an OpenAI spokesperson told The Guardian recently. However, David Sacks, President Trump’s AI advisor, was more direct, claiming there was “substantial evidence” that DeepSeek had “distilled the knowledge out of OpenAI’s models.”
This article originally appeared on Engadget at https://www.engadget.com/ai/openais-o3-mini-is-here-and-available-to-all-users-190918706.html?src=rss
GoPro is rolling out a software update for its entry-level Hero camera that allows users to shoot 4:3 video in 4K. This is great for the kinds of clips that populate social media sites like TikTok, as the footage is taller. The update is available for free via the company’s GoPro Quik app on iOS and Android.
Obviously, the new aspect ratio is intended for social media content, but shooting in 4:3 has several other use cases. For instance, it can be the perfect choice for capturing video from a first-person perspective. If social media isn’t your bag, GoPro says these 4:3 videos can easily be cropped to 16:9 “for a traditional widescreen look.”
There’s another tool available with this update that adds a bit of pizzazz when converting from 4:3 to 16:9. The app’s SuperView Digital Lens option adds a widening effect during the conversion process, which makes captured footage “look faster, more immersive and more exciting.” This app-based lens has been available for a while, but only worked with GoPro’s pricier offerings.
Speaking of budgets, the cute lil Hero camera is just $180 right now. It’s also incredibly light, at 86 grams. The company’s calling it the “smallest, lightest and widest angle GoPro, ever.”
This article originally appeared on Engadget at https://www.engadget.com/cameras/gopro-pushes-update-to-its-entry-level-hero-camera-adding-43-video-for-social-clips-195325285.html?src=rss
The 2025 NAMM Show is over. Every year, music gear manufacturers ranging from iconic synth brands like Korg to boutique guitar pedal makers like Walrus Audio, and even companies making fog machines and knobs, descend on Anaheim to show off their latest wares. It is chaos in all the ways that you’d expect a convention to be — miles-long lines for coffee, hordes of strangers jockeying for position around new products, food options that range from barely edible to instant heart attack. But NAMM is also a special beast. If you’ve ever wondered what eight out-of-sync drummers, two finger-tapping guitar solos, an acoustic slide blues riff and a simple ukulele ditty would all sound like simultaneously vying for your attention, well, this is the only place to experience that particular brand of hell. But now that my legs and, more importantly, my eardrums have finally started to recover, I’ve had a chance to reflect on some of the best things I saw on the show floor. Here are the 10 things that grabbed my attention the most.
Eternal Research Demon Box
Terrence O’Brien for Engadget
Eternal Research launched a successfully funded Kickstarter campaign back in September, but this was the first time I was able to see the Demon Box in person. Think of it like a supercharged version of the Soma Labs Ether featured in a handful of our gift guides. The Demon Box doesn’t make any sound on its own; instead, it features three pickups that turn EMFs (electromagnetic fields) into music — or at least audible noise. Run a cellphone, power drill or tuning fork across the top and you’ll get unique whines, hisses, clicks and beeps that only that device could produce. But where the Ether is basically just a microphone, the Demon Box is an instrument designed for live interaction and controlling other devices. In addition to outputting audio, it can convert those electromagnetic fields into CV (control voltage) for controlling eurorack synths or MIDI for triggering a visual synthesizer, or all three simultaneously. There are tons of options out there if you want a buzzy sawtooth bass, but if you want to turn the invisible radiation emitted by a TV remote into the centerpiece of a multimedia performance, this is basically your only option.
Circle Guitar
Terrence O’Brien for Engadget
The Circle Guitar is impractical. It costs over $12,000 (insert grimacing emoji). But it’s also just insane fun. Instead of playing it with a pick or plucking the strings with your fingers (though you can do that if you want), the strings are strummed by movable plectrums you mount inside a spinning wheel. There are sixteen slots allowing you to design your own strumming rhythm, and there are six sliders under the pickups for controlling the volume of each string individually. This allows you to create complex, robotic rhythms like a drum machine, but on your guitar. And, what’s more, you can sync it to a DAW to make sure you’re in lockstep with your backing track, even when it stutters and pauses. It’s a completely unique creation that has already drawn the attention of artists like Ed O’Brien of Radiohead.
Akai + Native Instruments
One of the biggest announcements out of NAMM wasn’t really a new product, but two titans of the industry joining forces. Several of Native Instruments’ (NI) Play Series synths and genre-specific Expansion Pack sound kits are being ported over to Akai’s new MPC 3.0 platform. While the availability of some existing soft synths on some existing hardware might not seem like a big deal at first, it greatly expands the sonic palette of the MPC and gives NI another foothold in the world of standalone music hardware after giving it a go on its own with the Maschine+. The selection of sounds is limited at the moment, with three synths (Analog Dreams, Cloud Supply and Nacht) and just one expansion (Faded Reels) available. But two more synths and four more expansions will be added soon and, if all goes well, I’m sure more will follow.
Korg HandyTraxx Play
Terrence O’Brien for Engadget
The HandyTraxx Play is the first and only portable turntable that I know of with built-in effects. It has a DJ filter, a delay and even a simple looper, which can, in theory, negate the need for a separate mixer and even a second turntable in some cases. While I can’t scratch, I’ve always wanted to learn, and the all-in-one portable nature of the HandyTraxx Play, including a speaker and battery power, is pretty appealing to someone who just wants to dip their toe in and doesn’t want to invest a ton of money and space in a separate mixer and dual-turntable setup. Plus, Korg designed the Play in conjunction with the late Toshihide Nakama, the founder of Vestax and builder of the original Handy Trax (two words, one x), an icon in the world of portablism.
Donner Essential L1 Synthesizer
Terrence O’Brien for Engadget
Over the last few years, Chinese music gear maker Donner has really expanded its offerings, going from mostly digital pianos and some bargain-bin guitar pedals to shockingly decent DSP effects, drum machines and even a pocketable groovebox. The L1 is the latest in its growing line of synths, and it has a lot of promise. It’s based in large part on the Roland SH-101, an iconic instrument from the ’80s that found particular favor among artists like Aphex Twin, Orbital, Depeche Mode, KMFDM and Boards of Canada.
What makes the L1 particularly intriguing is that it’s the first entry in the company’s new Snap2Connect (S2C) system. The keyboard attaches to the synth magnetically, allowing you to leave it behind if you want, or use it as a separate MIDI controller with your DAW or another synth. But Donner also says it plans on adding other instruments to the S2C system, so you could buy a module based on, say, a Juno-60 one day and just slap it on to the keyboard you already own.
Enjoy Electronics DeFeel
Terrence O’Brien for Engadget
The DeFeel is hard to explain. The company calls it a “modular monotony degenerator,” which is both extremely accurate and extremely unhelpful. Basically, you stick this thing between your sequencer and your synthesizer and it mangles the incoming CV to generate fills, stutters, and all manner of barely controlled chaos. In short, it takes that rock-solid sequence you’ve programmed and makes it less monotonous. It can resequence your sequence or add modulation. You can draw modulation curves using the 4.3-inch touchscreen, or even turn it into an X/Y pad for live performance. It’s designed mostly with eurorack synths in mind, but it’s also available as a standalone unit in a classy wooden case.
Melbourne Instruments RotoControl
Terrence O’Brien for Engadget
The RotoControl might not seem like the most exciting device at first. It’s a MIDI controller with eight knobs and keys on the right side, and a handful of other buttons on the left for navigating the device. But what makes it special is that those knobs are motorized — if you change a parameter in your DAW or softsynth, that is reflected physically on the controller. That might sound a little gimmicky, but it’s actually incredibly useful.
See, knobs on a controller or synth generally come in two flavors: pots and encoders. A pot, or potentiometer, has a beginning and end. So, if you change a preset or switch instruments, it may no longer reflect the actual setting in question. Encoders have no beginning or end. Since they don’t point to a concrete position in space, there’s no need to worry about a disagreement between knob position and an actual parameter value. But they’re also less than ideal for live performance. Judging how far you need to turn to get that filter sweep just right is difficult, and encoders generally have a less smooth response than a pot. Melbourne solves this by just moving the pots to where they’re supposed to be.
Roli Piano & Airwave
Terrence O’Brien for Engadget
I’ve been saying for a few years that I’m going to finally learn how to play piano. But I’m a busy dad of two, a part-time bartender and a full-time freelancer. I don’t really have the time or, frankly, the disposable income to treat myself to piano lessons. And the app-based and video options I’ve tried have been a bit underwhelming. I can’t say for sure that the Roli Piano and Airwave are more effective than Melodics or Duolingo at teaching you how to play, but it seems like there’s more potential there. Where most music education apps are basically glorified versions of Guitar Hero, Roli uses the Airwave’s camera to track your whole hand, letting you know if you’re out of position, if your wrists are at the wrong angle or if you’re using the wrong fingers. It’s probably not as good as having a real professional teaching you the ropes, but it’s probably better than a repurposed video game bolted onto some rudimentary music theory lesson.
Oh, and once you feel comfortable enough with your playing, the Roli Piano and Airwave combine to create what is probably the most extensive MPE controller on the market.
Entropy & Sons Recursion Studio
Terrence O’Brien for Engadget
Video synthesizers are not new, but they’re also not the most common things on the planet. And the Recursion Studio from Entropy & Sons is probably one of the most capable I’ve ever seen. For one, this is not some simple visualizer where a basic clip of animation gets manipulated; all of the visuals are generated live, algorithmically. In addition, it can process incoming video, distort images and react to incoming audio, and it even has multiple oscilloscope modes built in.
For those who like to get their hands dirty, there are over 300 modules that can be combined to create custom visual patches. But there are also about 1,000 presets on board, so you can quickly get some visuals up to go with your synth jam. And the company is constantly updating the device and adding new features.
SoundToys SpaceBlender
Terrence O’Brien for Engadget
SoundToys is one of the biggest names in effects plugins, used by everyone from Radiolab’s Jad Abumrad to Kenny Beats and Trent Reznor. The company’s Decapitator saturation plugin is one of the best things to ever happen to drums, and EchoBoy is a must-have delay. But it doesn’t introduce new effects terribly often. SuperPlate was added to the roster in mid-2023, but that was the first new addition since Little Plate in November of 2017 — the company takes its time.
SpaceBlender is SoundToys’ take on an ambient granular reverb. It’s not really a straight granular plugin that chops up your audio and spits it back out in little bits; instead, it’s a bunch of delays that get combined and smeared into something ethereal. It also has an interactive envelope designer that lets you hone the shape of your reverb and even has potential as a live performance tool. SpaceBlender isn’t quite ready for release just yet, but even in this early sneak peek it sounded phenomenal and seemed pretty stable.
This article originally appeared on Engadget at https://www.engadget.com/audio/the-10-best-things-i-saw-at-namm-140044601.html?src=rss
Apple was apparently developing augmented reality glasses powered by its Mac computers, but it canceled the project before the company could even announce it. According to Bloomberg, Apple scrapped the program this week because the product didn’t perform well when executives tested it and the company kept on changing the features it wanted for the device. The glasses, while still powered by visionOS, weren’t supposed to be the direct successor to the Vision Pro. They reportedly weren’t a headset, but a pair of normal-looking glasses instead.
Bloomberg says Apple originally wanted the AR glasses to be powered by the iPhone, but the smartphone didn’t have the processing capacity to sustain the device’s features. They also drained the iPhone’s battery. The scrapped AR glasses had built-in displays that could project information, images and video into the user’s field of view. They were lighter than the Vision Pro and didn’t show the wearer’s eyes like the headset can, but they had lenses that could change their tint to show whether the user was working on a task or was free to be approached. Bloomberg compared the canceled product to XReal’s One glasses and to the Orion prototype Meta revealed last year. While the Orion needs to be paired with a “wireless compute puck” to work, it doesn’t need to be connected to a computer or a phone.
Apple was developing the glasses as a device people could use every day. One of the issues the company is reportedly facing is that people who already own the Vision Pro aren’t using it as much as it had expected. However, employees in the company’s vision products group reportedly thought the project suffered from a lack of focus and clear direction. Apple is still working on a successor to the Vision Pro, though, and it’s still looking to develop AR glasses in the future. It’s also continuing to work on the technologies the scrapped glasses used, such as microLED-type screens, for future projects.
This article originally appeared on Engadget at https://www.engadget.com/ar-vr/apple-reportedly-shelved-a-mac-connected-ar-glasses-project-160921712.html?src=rss
Samsung recently unveiled the Galaxy S25 series — an event we had the pleasure of attending in person to check out the new devices and even catch a glimpse of the S25 Edge — introducing new Galaxy AI features aimed at enhancing the user experience. These AI-powered tools streamline everyday tasks and creative projects, offering improved search capabilities, cross-app integration and advanced photo editing options.
Galaxy AI: Photo Gallery vocal search – Main stage photo shot during Unpacked
The key AI features include:
Generative Edit: Users can animate photos with simple text commands using Drawing Assist or transform videos into GIFs with AI Select.
AI Assist: While planning trips, users can search for the best fares and seamlessly share options with friends or save them in Samsung Notes.
Simple Search: A single command allows users to quickly locate specific photos in their Gallery.
Meal Prep Made Easy: Snap a picture of your fridge’s contents and Galaxy AI suggests recipes based on the available ingredients.
Galaxy AI: Circle to Search
For those who love to travel
Need to book a flight? Galaxy AI will not only find the best deals in real time but also share them instantly and save them in Samsung Notes for later reference. When you arrive at your destination, navigating foreign menus can be a breeze — simply scan a restaurant menu and ask Galaxy AI to suggest dishes within your budget. It can even translate and place your order in the local language.
Galaxy AI: Translating the menu
When you get home, you can relive travel memories and search for photos with Photo Gallery vocal search by describing the moment, letting Galaxy AI instantly pull up the exact pictures you’re looking for. Still in the photography department, Generative Edit lets users remove unwanted objects and distractions from photos, ensuring a perfect composition. Audio Eraser, meanwhile, refines videos by eliminating background noise.
Galaxy AI: Erasing objects/people from the picture
Galaxy S25 series: available for pre-order
The Galaxy S25 series is currently available for pre-order through Amazon, Best Buy, Samsung.com, and major carriers, with general availability starting February 7. Customers who pre-order on Samsung.com can take advantage of various promotions, including:
A $50 Samsung Credit for early reservations
A free storage upgrade
Up to $900 in trade-in savings
15% off Samsung Care+ (with Theft and Loss), featuring $0 screen repairs
Discounts of up to 40% on Galaxy Buds and Watches
New Galaxy Club
Additionally, Samsung is launching the New Galaxy Club, a flexible subscription model for easier device ownership and upgrades. Users can enroll for $8.33/month for the S25 Ultra and S25+ or $6.20/month for the S25.
After 12 months, members can upgrade to the latest Galaxy device, with Samsung covering remaining installments or offering a 50% trade-in credit. New enrollees also receive one year of Samsung Care+ (excluding Theft and Loss) at no additional cost.