‘The World Will Miss You’: Celebrities Pay Tribute To Late ‘Friends’ Star Matthew Perry

Maggie Wheeler, Paget Brewster and Morgan Fairchild were among the “Friends” actors who mourned Perry’s death.

Apple’s 9th-gen iPad is back to its all-time low price of $250 ahead of Black Friday

Apple’s 9th generation iPad is $80 off at Amazon right now. The discount brings the 64GB variant down to just $250 from its regular price of $330, matching a record low we’ve typically only seen around Prime Day. You can also snag the 256GB model for $80 off, down to $400 from its usual $480.

The 9th-gen iPad came out in 2021, but it’s still a solid tablet, especially if you’re on a budget. While its A13 Bionic chip isn’t the fastest or most powerful, it’s more than enough for basic productivity tasks, browsing and streaming. It earned a score of 86 in our review at launch, and it’s still one of the best iPads you can get that won’t break the bank.

It has a heftier build than the newer, sleeker models, with chunky bezels framing its 10.2-inch Retina display and a physical Home button with Touch ID. Apple’s 9th-gen iPad also still has a headphone jack and charges via a Lightning port. It has a 12MP ultrawide front camera and an 8MP back camera, and supports Apple’s Center Stage video calling feature.

The 9th generation iPad comes in Silver and Space Gray, and the discount applies to both color variants of the Wi-Fi-only model. It’s a great option for the casual iPad user, and the price right now can’t be beat. But if those specs aren’t quite cutting it, Amazon is also running a deal on the 10th generation iPad, which is a step up. That model is currently $50 off.



Maine Residents Gather To Pray And Reflect After Mass Shooting

Mainers attended Sunday services just days after the man suspected of killing 18 people and wounding 13 in Lewiston was found dead.

Jamie Lynn Spears Is Being Dragged For Supporting Justin Timberlake In A Resurfaced Tweet

As fans well know, Britney Spears and Jamie Lynn’s relationship has long been strained.

Girl With the Dragon Tattoo is Hacking Its Way to a TV Reboot

David Fincher is known for a number of films and TV shows, though his 2011 adaptation of The Girl With the Dragon Tattoo isn’t one that gets talked about as heavily. With widespread critical acclaim and an Oscar nomination for Rooney Mara (along with a still pretty great opening title sequence), the movie mainly…


What the evolution of our own brains can tell us about the future of AI

The explosive growth in artificial intelligence in recent years — crowned by the meteoric rise of generative AI chatbots like ChatGPT — has seen the technology take on many tasks that formerly only human minds could handle. But despite their increasingly capable linguistic computations, these machine learning systems remain surprisingly inept at making the sorts of cognitive leaps and logical deductions that even the average teenager can consistently get right.

In this week’s Hitting the Books excerpt, from A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains, AI entrepreneur Max Bennett examines the puzzling gap in computer competency by tracing the development of the organic machine AIs are modeled after: the human brain.

Focusing on the five evolutionary “breakthroughs,” amid myriad genetic dead ends and unsuccessful offshoots, that led our species to our modern minds, Bennett also shows how the same advancements that took humanity eons to evolve can be adapted to help guide the development of the AI technologies of tomorrow. In the excerpt below, we take a look at how generative AI systems like GPT-3 are built to mimic the predictive functions of the neocortex, but still can’t quite grasp the vagaries of human speech.

[Book cover: a brain overlaid with words. Image: HarperCollins]

Excerpted from A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains by Max Bennett. Published by Mariner Books. Copyright © 2023 by Max Bennett. All rights reserved.


Words Without Inner Worlds

GPT-3 is given word after word, sentence after sentence, paragraph after paragraph. During this long training process, it tries to predict the next word in any of these long streams of words. And with each prediction, the weights of its gargantuan neural network are nudged ever so slightly toward the right answer. Do this an astronomical number of times, and eventually GPT-3 can automatically predict the next word based on a prior sentence or paragraph. In principle, this captures at least some fundamental aspect of how language works in the human brain. Consider how automatic it is for you to predict the next symbol in the following phrases:

  • One plus one equals _____

  • Roses are red, violets are _____

You’ve seen similar sentences countless times, so your neocortical machinery automatically predicts what word comes next. What makes GPT-3 impressive, however, is not that it just predicts the next word of a sequence it has seen a million times — that could be accomplished with nothing more than memorizing sentences. What is impressive is that GPT-3 can be given a novel sequence that it has never seen before and still accurately predict the next word. This, too, clearly captures something that the human brain can _____.

Could you predict that the next word was do? I’m guessing you could, even though you had never seen that exact sentence before. The point is that both GPT-3 and the neocortical areas for language seem to be engaging in prediction. Both can generalize from past experiences, apply them to new sentences, and guess what comes next.
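To make that training procedure concrete, here is a minimal, hypothetical sketch in Python. It is a toy single-layer model over an eleven-word corpus, not GPT-3's actual architecture, but the core move is the one the excerpt describes: every prediction nudges the weights slightly toward the observed next word.

```python
# Toy next-word predictor: one weight per (previous word, next word) pair.
# Illustrative only -- GPT-3 is a deep transformer over long contexts.
import math

corpus = "roses are red violets are blue one plus one equals two".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

# weights[p][n] is the model's score for word n following word p.
weights = [[0.0] * len(vocab) for _ in vocab]
lr = 0.1  # how far each prediction nudges the weights

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Training: predict each next word, then nudge weights toward the right answer.
for epoch in range(500):
    for prev, nxt in zip(corpus, corpus[1:]):
        probs = softmax(weights[idx[prev]])
        for j in range(len(vocab)):
            target = 1.0 if j == idx[nxt] else 0.0
            # Gradient step on cross-entropy loss for a softmax output.
            weights[idx[prev]][j] += lr * (target - probs[j])

# After training, the model completes a familiar pattern automatically.
probs = softmax(weights[idx["violets"]])
print(vocab[probs.index(max(probs))])  # prints "are"
```

Scale the same loop up to billions of weights, long contexts and the text of the internet, and you have, in caricature, the recipe described above.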

GPT-3 and similar language models demonstrate how a web of neurons can reasonably capture the rules of grammar, syntax, and context if it is given sufficient time to learn. But while this shows that prediction is part of the mechanisms of language, does this mean that prediction is all there is to human language? Try to finish these four questions:

  • If 3x + 1 = 3, then x equals _____

  • I am in my windowless basement, and I look toward the sky, and I see _____

  • He threw the baseball 100 feet above my head, I reached my hand up to catch it, jumped, and _____

  • I am driving as fast as I can to LA from New York. One hour after passing through Chicago, I finally _____

Here something different happens. In the first question, you likely paused and performed some mental arithmetic before being able to answer the question. In the other questions, you probably, even for only a split second, paused to visualize yourself in a basement looking upward, and realized what you would see is the ceiling. Or you visualized yourself trying to catch a baseball a hundred feet above your head. Or you imagined yourself one hour past Chicago and tried to find where you would be on a mental map of America. With these types of questions, more is happening in your brain than merely the automatic prediction of words.

We have, of course, already explored this phenomenon—it is simulating. In these questions, you are rendering an inner simulation, either of shifting values in a series of algebraic operations or of a three-dimensional basement. And the answers to the questions are to be found only in the rules and structure of your inner simulated world.

I gave the same four questions to GPT-3; its responses complete each prompt below:

  • If 3x + 1 = 3, then x equals 1

  • I am in my windowless basement, and I look toward the sky, and I see a light, and I know that it is a star, and I am happy.

  • He threw the baseball 100 feet above my head, I reached my hand up to catch it, jumped, and caught it. It was a lot of fun!

  • I am driving as fast as I can to LA from New York. One hour after passing through Chicago, I finally get to see the Pacific Ocean.

All four of these responses demonstrate that GPT-3, as of June 2022, lacked an understanding of even simple aspects of how the world works. If 3x + 1 = 3, then x equals 2/3, not 1. If you were in a basement and looked toward the sky, you would see your ceiling, not stars. If you tried to catch a ball 100 feet above your head, you would not catch the ball. If you were driving to LA from New York and you’d passed through Chicago one hour ago, you would not yet be at the coast. GPT-3’s answers lacked common sense.
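For the record, the algebra GPT-3 fumbled takes one rearrangement: 3x = 3 - 1 = 2, so x = 2/3. A quick check in Python, using exact fractions to avoid floating-point noise:

```python
# Solve 3x + 1 = 3: subtract 1 from both sides, then divide by 3.
from fractions import Fraction

x = Fraction(3 - 1, 3)
print(x)               # 2/3, not the 1 that GPT-3 answered
assert 3 * x + 1 == 3  # the solution satisfies the original equation
```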

What I found was not surprising or novel; it is well known that modern AI systems, including these new supercharged language models, struggle with such questions. But that’s the point: Even a model trained on the entire corpus of the internet, running up millions of dollars in server costs — requiring acres of computers on some unknown server farm — still struggles to answer common sense questions, those presumably answerable by even a middle-school human.

Of course, reasoning about things by simulating also comes with problems. Suppose I asked you the following question:

Tom W. is meek and keeps to himself. He likes soft music and wears glasses. Which profession is Tom W. more likely to be?

1) Librarian

2) Construction worker

If you are like most people, you answered librarian. But this is wrong. Humans tend to ignore base rates—did you consider the base number of construction workers compared to librarians? There are probably one hundred times more construction workers than librarians. And because of this, even if 95 percent of librarians are meek and only 5 percent of construction workers are meek, there still will be far more meek construction workers than meek librarians. Thus, if Tom is meek, he is still more likely to be a construction worker than a librarian.
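The arithmetic behind that claim is worth spelling out. Using the paragraph's illustrative numbers (100 construction workers for every librarian, 95 percent of librarians meek, 5 percent of construction workers meek), a quick sketch:

```python
# Base-rate check using the illustrative numbers from the text.
librarians = 1.0        # relative base rates: 100 construction workers
construction = 100.0    # for every 1 librarian

meek_librarians = 0.95 * librarians      # 0.95
meek_construction = 0.05 * construction  # 5.0

# P(construction worker | meek): meek construction workers
# as a share of all meek people.
p = meek_construction / (meek_construction + meek_librarians)
print(f"{p:.0%}")  # 84% -- meek Tom is still most likely a construction worker
```

Meek construction workers outnumber meek librarians more than five to one, so the base rate swamps the stereotype.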

The idea that the neocortex works by rendering an inner simulation and that this is how humans tend to reason about things explains why humans consistently get questions like this wrong. We imagine a meek person and compare that to an imagined librarian and an imagined construction worker. Who does the meek person seem more like? The librarian. Behavioral economists call this the representative heuristic. This is the origin of many forms of unconscious bias. If you heard a story of someone robbing your friend, you can’t help but render an imagined scene of the robbery, and you can’t help but fill in the robbers. What do the robbers look like to you? What are they wearing? What race are they? How old are they? This is a downside of reasoning by simulating — we fill in characters and scenes, often missing the true causal and statistical relationships between things.

It is with questions that require simulation that language in the human brain diverges from language in GPT-3. Math is a great example of this. The foundation of math begins with declarative labeling. You hold up two fingers or two stones or two sticks, engage in shared attention with a student, and label it two. You do the same thing with three of each and label it three. Just as with verbs (e.g., running and sleeping), in math we label operations (e.g., add and subtract). We can thereby construct sentences representing mathematical operations: three add one.

Humans don’t learn math the way GPT-3 learns math. Indeed, humans don’t learn language the way GPT-3 learns language. Children do not simply listen to endless sequences of words until they can predict what comes next. They are shown an object, engage in a hardwired nonverbal mechanism of shared attention, and then the object is given a name. The foundation of language learning is not sequence learning but the tethering of symbols to components of a child’s already present inner simulation.

A human brain, but not GPT-3, can check the answers to mathematical operations using mental simulation. If you add one to three using your fingers, you notice that you always get the thing that was previously labeled four.

You don’t even need to check such things on your actual fingers; you can imagine these operations. This ability to find the answers to things by simulating relies on the fact that our inner simulation is an accurate rendering of reality. When I mentally imagine adding one finger to three fingers, then count the fingers in my head, I count four. There is no reason why that must be the case in my imaginary world. But it is. Similarly, when I ask you what you see when you look toward the ceiling in your basement, you answer correctly because the three-dimensional house you constructed in your head obeys the laws of physics (you can’t see through the ceiling), and hence it is obvious to you that the ceiling of the basement is necessarily between you and the sky. The neocortex evolved long before words, already wired to render a simulated world that captures an incredibly vast and accurate set of physical rules and attributes of the actual world.
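In code, that checking-by-simulation move looks almost embarrassingly simple. A toy sketch of the finger-counting example above, nothing more:

```python
# "Mental simulation" in miniature: represent three things, add one more,
# then count what is there, rather than recalling a memorized fact.
fingers = ["up"] * 3   # hold up three fingers
fingers.append("up")   # add one more
print(len(fingers))    # counting the simulated fingers gives 4
```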

To be fair, GPT-3 can, in fact, answer many math questions correctly. GPT-3 will be able to answer 1 + 1 = _____ because it has seen that sequence a billion times. When you answer the same question without thinking, you are answering it the way GPT-3 would. But when you think about why 1 + 1 = 2, when you prove it to yourself again by mentally imagining the operation of adding one thing to another thing and getting back two things, then you know that 1 + 1 = 2 in a way that GPT-3 does not.

The human brain contains both a language prediction system and an inner simulation. The best evidence for the idea that we have both of these systems is experiments pitting one against the other. Consider the cognitive reflection test, designed to evaluate someone’s ability to inhibit her reflexive response (e.g., habitual word predictions) and instead actively think about the answer (e.g., invoke an inner simulation to reason about it):

Question 1: A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?

If you are like most people, your instinct, without thinking about it, is to answer ten cents. But if you thought about this question, you would realize this is wrong; the answer is five cents. Similarly:

Question 2: If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?

Here again, if you are like most people, your instinct is to say “One hundred minutes,” but if you think about it, you would realize the answer is still five minutes.
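Both answers fall out of a line or two of arithmetic. A quick sketch, working in cents on the first question to keep the numbers exact (the helper function is mine, not part of the test):

```python
# Question 1: ball = b cents, bat = b + 100 cents, total = 110 cents.
# Then 2b + 100 = 110, so b = 5.
b = (110 - 100) // 2
print(b)                     # 5 cents, not the intuitive 10
assert b + (b + 100) == 110  # ball + bat really cost $1.10 together

# Question 2: 5 machines make 5 widgets in 5 minutes, so each machine
# makes one widget per 5 minutes; time depends only on widgets per machine.
def minutes_needed(machines, widgets, minutes_per_widget=5):
    return (widgets / machines) * minutes_per_widget

print(minutes_needed(5, 5))      # 5.0 minutes
print(minutes_needed(100, 100))  # still 5.0 minutes, not 100
```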

And indeed, as of December 2022, GPT-3 got both of these questions wrong in exactly the same way people do: it answered ten cents to the first question and one hundred minutes to the second.

The point is that human brains have both an automatic system for predicting words (one probably similar, at least in principle, to models like GPT-3) and an inner simulation. Much of what makes human language powerful is not its syntax, but its ability to give us the information we need to render a simulation and, crucially, to use these sequences of words to render the same inner simulation as other humans around us.


I’ve Been To Over 20 Homeschool Conferences. The Things I’ve Witnessed At Them Shocked Me.

“I am 20 minutes into the presentation when a woman interrupts me. ‘When are you going to talk about God in all of this?’ she asks.”

Thousands Break Into Aid Warehouses In Gaza As Deaths Top 8,000

Gaza’s Health Ministry said the death toll among Palestinians has passed 8,000 — mostly women and minors.

Star Trek: Lower Decks goes back to its beginnings

The following contains major spoilers for Season Four, Episode Nine of Star Trek: Lower Decks.

Star Trek: Lower Decks takes its name and premise from a late episode of Star Trek: The Next Generation. “Lower Decks” pivots away from that show’s usual format to focus on four junior crew members and is told mostly from their perspective. One of them is Sito Jaxa (Shannon Fill), who had appeared two years earlier as a cadet in “The First Duty.” That episode focused on Wesley Crusher’s involvement in a conspiracy to cover up an accident that killed a fellow cadet. It also gave us our first look at Nicholas Locarno (Robert Duncan McNeill), the episode’s ostensible villain. Locarno was, at some point, intended to be the helm officer on Voyager and was named as such in an early draft of the series bible. But during pre-production, Locarno’s name was dropped and McNeill instead played Tom Paris, with the same backstory. Producers have said in various interviews that the issue hinged on Locarno’s redeemability after his actions in “The First Duty,” but it’s equally plausible that the character was changed to avoid paying royalties to the character’s creators.

Even if you knew none of the above, I don’t think you’d get any less out of this week’s episode of Lower Decks. Because while this series was conceived from the get-go to play to the crowd and bury itself in references, it rarely does so at the expense of telling a good story.

Mariner is once again throwing herself into harm’s way to save her friends, without regard for her own safety. Her cavalier attitude toward life, death and her own career has threaded through much of this season, to the point that now even Captain Freeman is worried. She pulls the rest of Beta Shift into a plan that’ll keep her daughter out of harm’s way on the next mission. Starfleet thinks the rogue ship destroying everything in its path might be targeting former officers. The list of at-risk individuals includes high-profile figures like Dr. Crusher but, this being Lower Decks, the Cerritos is sent off to find Nicholas Locarno. And while that’s going on, Freeman sends Mariner, Boimler, Tendi and T’Lyn on what she hopes will be a zero-stakes assignment to fix a weather buoy in orbit around Sherbal V. Except, of course, the crew’s shuttle is attacked by a Klingon Bird of Prey and they have to beam down to the hostile planet below.

Meanwhile, Freeman, Shaxs and Rutherford head to what can only be described as a Star Wars planet, where Locarno is meant to be plying his trade. Despite its reputation as a wretched hive of scum and villainy, it’s got a muscular bureaucracy that the inhabitants use to frustrate Starfleet officers. The episode makes full use of the disconnect between the stuffed-shirt crew and the rougher corners of the universe. It was rare that we’d see the Next Generation crew really get their elbows dirty – the best examples I can call to mind are the awkward moments in “Gambit.” There’s just something inherently funny about the primary-colored space communist scouts encountering hairy-assed people who live in the “real world.” That’s before you get to Captain Freeman trying to beat up a Balok puppet that turns out to be a real alien. Of course, it’s a double bluff – at each turn, the villains put bureaucratic obstacles in Starfleet’s way but wave a sinister bounty hunter type through out of spite. Except the bounty hunter in question is Billups in a silly helmet, who gets the data needed to track down Locarno.

On the planet, the rest of Beta Shift is left fending for their lives as chaotic weather makes survival even harder. It doesn’t help that the victims of other attacks, explorers from several other alien races, are all fighting to the death for supremacy. Mariner, frustrated at the gang’s wise refusal to fight their way to safety, opts to go it alone and bumps into a Klingon. But their own fight to the death is interrupted by a rainstorm of glass shards and, while they shelter, Mariner finally reveals the source of her angst. She’s been sabotaging her career because she’s deeply resentful of Starfleet and her role within it. When she signed up, she’d bought into the idea of exploring strange new worlds, but instead the Federation has been embroiled in an endless parade of galaxy-threatening wars. Her best friend was Sito Jaxa, from “Lower Decks,” who in that episode was sent to her death on a covert mission. Starfleet quite literally chewed up and spat out one of her friends, but as much as Mariner may hate what Starfleet is, she can’t quite walk away because of what the Starfleet ideal represents. And you don’t need to be familiar with the events of a TV series from 31 years ago — Good God, I feel old — or the para-narrative around Voyager’s pre-production to appreciate that dilemma. Of course, her Klingon opponent counters that Mariner’s angst dishonors Sito’s sacrifice, and that she needs to get on with the job at hand. And, much as she agrees, she adds (just before hugging her former opponent) that she’s still duty-bound to call out when Starfleet “can do better.”

Despite its love of self-referentiality, Star Trek has often struggled with any degree of on-screen self-interrogation. There are moments, best exemplified by the Root Beer scene in “The Way of the Warrior,” where the show touches on the values it espouses. The show’s numerous creative teams have often pushed the idea that Starfleet, and the Federation, aren’t as noble a force as the myth suggests. With Beyond, Simon Pegg wanted to focus on the nature of the Federation as a colonizing force, even if that concept is almost entirely erased from the finished film. I’ll leave it to better writers than I to explore this in depth, but it’s rare we get moments where Starfleet officers wonder, out loud or in private, if they aren’t the universally good force they’ve been led to believe they are. This thread is also paid off in the B-story as Freeman and Co. are told, more or less, that nobody in the real world likes having them around. Sure, it’s a gag in a sitcom, and our sympathies are almost universally with the Starfleet crew, but the fact it’s here at all isn’t to be sniffed at.

By the time we’ve reached the cliffhanger, Beta Shift is trying to cajole the warring parties into working together. And, if we’re honest, the idea of disparate groups coming together to solve a problem is surely one worth upholding. But before we can see whether they’re rescued, Mariner is beamed away to an ultra-minimalist starship. After forcing the door, she comes face-to-face with her rescuer / captor, and it’s… Nicholas Locarno.


30 Problem-Solving Skincare Products For Those Irritating Issues You’ve Had For Quite Some Time

Don’t let those problems plague you anymore.