Gov. Cuomo Shares Personal Impact Of Pandemic In Candid ‘Daily Show’ Interview
The New York governor said 9/11 “was supposed to be the worst experience of my life.” But that was before now.
Police in Westport, Connecticut, announced this week that they’re testing a so-called “pandemic drone” that can detect when people on the ground have fevers. The new drone platform will also be used to determine when people are closer than six feet to each other. Police will be able to deliver a verbal warning through…
Last week, Amazon shut down its fulfillment centers in France amid a dispute with unions over alleged covid-19-related health hazards. A French court ruled in favor of the union, ordering the company to limit deliveries to necessities such as medical supplies and food until it fulfills a risk assessment and improves…
Apple’s long-rumored Mac ARM chip transition could happen as early as next year, according to a new report from Bloomberg. The report says that Apple is currently working on three Mac processors based on the design of the A14 system-on-a-chip that will power the next-generation iPhone. The first of the Mac versions will greatly exceed the speed of the iPhone and iPad processors, according to the report’s sources.
Apple’s A-series line of ARM-based chips for iPhones and iPads has already been improving steadily, to the point where its performance in benchmark tests regularly exceeds that of the Intel processors currently used in Apple’s Mac line. As a result, and because Intel’s chip development has encountered a few setbacks and slowdowns in recent generations, rumors that Apple would move to using its own ARM-based designs have multiplied over the past few years.
Bloomberg says that “at least one Mac” powered by Apple’s own chip is being prepared for release in 2021, to be built by chip fabricator and longtime Apple partner Taiwan Semiconductor Manufacturing Co. (TSMC). The first of these chips to power Macs will have at least 12 cores, including eight designed for high-performance applications and four designed for lower-intensity activities with battery-preserving energy-efficiency characteristics. By comparison, the Intel designs Apple currently employs in devices such as the MacBook Air have only two or four cores.
The report claims Apple will initially focus on using the chips to power a new Mac design, leaving Intel processors in its higher-end, pro-level Macs, because the ARM-based designs, while faster by some measures, can’t yet match the top-end performance of Intel’s chips. ARM chips generally provide more power efficiency at the expense of raw computing power, which is why they’re so frequently used in mobile devices.
The first ARM-based Macs will still run macOS, per Bloomberg’s sources, and Apple will seek to make them compatible with software that works on current Intel-based Macs as well. That would be a similar endeavor to when Apple switched from using PowerPC-based processors to Intel chips for its Mac lineup in 2006, so the company has some experience in this regard. During that transition, Apple announced initially that the switch would take place between 2006 and 2007, but accelerated its plans so that all new Macs shipping by the end of 2006 were powered by Intel processors.
A few years ago, hearing about an Intel Core i5 processor inside a laptop with only an 8-inch screen would have sounded ridiculous. Today, it still sounds almost ridiculous, but not because it’s impossible. GPD, which has spawned a niche market for small gaming handhelds and laptops, is doing exactly that. No, the ridiculousness of the GPD WIN …
A team of researchers from Apple and Carnegie Mellon University’s Human-Computer Interaction Institute has presented a system for embedded AIs to learn by listening to noises in their environment, without the need for up-front training data and without placing a huge burden on the user to supervise the learning process. The overarching goal is for smart devices to more easily build up contextual/situational awareness to increase their utility.
The system, which they’ve called Listen Learner, relies on acoustic activity recognition to enable a smart device, such as a microphone-equipped speaker, to interpret events taking place in its environment via a process of self-supervised learning, with manual labelling done through one-shot user interactions — such as the speaker asking a person ‘what was that sound?’ after it has heard the noise enough times to classify it into a cluster.
A general pre-trained model can also be looped in to enable the system to make an initial guess on what an acoustic cluster might signify. So the user interaction could be less open-ended, with the system able to pose a question such as ‘was that a faucet?’ — requiring only a yes/no response from the human in the room.
Refinement questions could also be deployed to help the system figure out what the researchers dub “edge cases”, i.e. where sounds have been closely clustered yet might still signify a distinct event — say a door being closed vs a cupboard being closed. Over time, the system might be able to make an educated either/or guess and then present that to the user to confirm.
They’ve put together the below video demoing the concept in a kitchen environment.
In their paper presenting the research they point out that while smart devices are becoming more prevalent in homes and offices they tend to lack “contextual sensing capabilities” — with only “minimal understanding of what is happening around them”, which in turn limits “their potential to enable truly assistive computational experiences”.
And while acoustic activity recognition is not itself new, the researchers wanted to see if they could improve on existing deployments which either require a lot of manual user training to yield high accuracy; or use pre-trained general classifiers to work ‘out of the box’ but — since they lack data for a user’s specific environment — are prone to low accuracy.
Listen Learner is thus intended as a middle ground to increase utility (accuracy) without placing a high burden on the human to structure the data. The end-to-end system automatically generates acoustic event classifiers over time, with the team building a proof-of-concept prototype device to act like a smart speaker and pipe up to ask for human input.
“The algorithm learns an ensemble model by iteratively clustering unknown samples, and then training classifiers on the resulting cluster assignments,” they explain in the paper. “This allows for a ‘one-shot’ interaction with the user to label portions of the ensemble model when they are activated.”
Audio events are segmented using an adaptive threshold that triggers when the microphone input level is 1.5 standard deviations higher than the mean of the past minute.
“We employ hysteresis techniques (i.e., for debouncing) to further smooth our thresholding scheme,” they add, further noting that: “While many environments have persistent and characteristic background sounds (e.g., HVAC), we ignore them (along with silence) for computational efficiency. Note that incoming samples were discarded if they were too similar to ambient noise, but silence within a segmented window is not removed.”
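To make the segmentation scheme concrete, here is a minimal sketch in Python, assuming a stream of per-frame microphone levels; the 1.5-standard-deviation trigger over the past minute and the debounce idea come from the paper’s description, while the frame rate, release margin, and gap length are illustrative assumptions:

```python
import numpy as np
from collections import deque

class AdaptiveSegmenter:
    """Flags audio events when the input level rises well above recent ambient levels.

    Trigger: the current frame level exceeds the mean of the past minute by 1.5 standard
    deviations. A simple hysteresis (debounce) keeps brief dips from splitting one event.
    """

    def __init__(self, frames_per_minute=600, release_sigma=0.5, min_gap_frames=5):
        self.history = deque(maxlen=frames_per_minute)  # rolling one-minute window of levels
        self.release_sigma = release_sigma              # assumed lower (release) threshold
        self.min_gap_frames = min_gap_frames            # quiet frames needed to close a segment
        self.active = False
        self.quiet_frames = 0

    def update(self, level):
        """Feed one frame level; returns 'start', 'end', or None."""
        event = None
        if len(self.history) > 1:
            mean, std = np.mean(self.history), np.std(self.history)
            if not self.active and level > mean + 1.5 * std:
                self.active, self.quiet_frames, event = True, 0, "start"
            elif self.active:
                # Hysteresis: only end the segment after several consecutive quiet frames.
                if level < mean + self.release_sigma * std:
                    self.quiet_frames += 1
                    if self.quiet_frames >= self.min_gap_frames:
                        self.active, event = False, "end"
                else:
                    self.quiet_frames = 0
        self.history.append(level)
        return event
```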
The CNN (convolutional neural network) audio model they’re using was initially trained on the YouTube-8M dataset — augmented with a library of professional sound effects, per the paper.
“The choice of using deep neural network embeddings, which can be seen as learned low-dimensional representations of input data, is consistent with the manifold assumption (i.e., that high-dimensional data roughly lie on a low-dimensional manifold). By performing clustering and classification on this low-dimensional learned representation, our system is able to more easily discover and recognize novel sound classes,” they add.
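For illustration only, a clip could be turned into such a low-dimensional representation by computing a log-mel spectrogram with librosa and passing it through a pretrained CNN encoder. The `model.encode` call below is a hypothetical stand-in, not the network described in the paper (which was trained on YouTube-8M plus a professional sound-effects library):

```python
import librosa
import numpy as np

def embed_clip(path, model, sr=16000, n_mels=64):
    """Turn an audio clip into a low-dimensional embedding via a log-mel spectrogram and a CNN."""
    audio, _ = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)             # (n_mels, frames) log-mel spectrogram
    return model.encode(log_mel[np.newaxis, ...])  # hypothetical encoder returning a 1-D vector
```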
The team used unsupervised clustering methods to infer the location of class boundaries from the low-dimensional learned representations — using a hierarchical agglomerative clustering (HAC) algorithm known as Ward’s method.
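As a rough sketch of that step, Ward-linkage agglomerative clustering over the learned embeddings can be run with scikit-learn; the random embedding array and the sweep over candidate cluster counts are placeholders for illustration, not details from the paper:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# embeddings: (n_samples, n_dims) low-dimensional representations of segmented sounds
embeddings = np.random.rand(200, 128)  # placeholder for real CNN embeddings

# Ward's method merges the pair of clusters that least increases total within-cluster variance.
candidate_clusterings = {}
for k in range(2, 10):  # evaluate several candidate groupings, as the paper describes
    labels = AgglomerativeClustering(n_clusters=k, linkage="ward").fit_predict(embeddings)
    candidate_clusterings[k] = labels
```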
Their system evaluates “all possible groupings of data to find the best representation of classes”, given candidate clusters may overlap with one another.
“While our clustering algorithm separates data into clusters by minimizing the total within-cluster variance, we also seek to evaluate clusters based on their classifiability. Following the clustering stage, we use an unsupervised one-class support vector machine (SVM) algorithm that learns decision boundaries for novelty detection. For each candidate cluster, a one-class SVM is trained on a cluster’s data points, and its F1 score is computed with all samples in the data pool,” they add.
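A hedged sketch of that classifiability check: train a one-class SVM on one candidate cluster’s points, then score it against every sample in the pool, treating cluster membership as the positive class. The nu and gamma settings are generic defaults, not values reported by the authors:

```python
from sklearn.svm import OneClassSVM
from sklearn.metrics import f1_score

def cluster_f1(embeddings, labels, cluster_id, nu=0.1, gamma="scale"):
    """Train a one-class SVM on one candidate cluster and compute its F1 over the whole pool."""
    members = labels == cluster_id
    svm = OneClassSVM(nu=nu, gamma=gamma).fit(embeddings[members])
    # OneClassSVM predicts +1 for inliers and -1 for outliers.
    predicted_inlier = svm.predict(embeddings) == 1
    return svm, f1_score(members, predicted_inlier)
```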
“Traditional clustering algorithms seek to describe input data by providing a cluster assignment, but this alone cannot be used to discriminate unseen samples. Thus, to facilitate our system’s inference capability, we construct an ensemble model using the one-class SVMs generated from the previous step. We adopt an iterative procedure for building our ensemble model by selecting the first classifier with an F1 score exceeding the threshold, θ, and adding it to the ensemble. When a classifier is added, we run it on the data pool and mark samples that are recognized. We then restart the cluster-classify loop until either 1) all samples in the pool are marked or 2) a loop does not produce any more classifiers.”
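Putting those pieces together, the iterative cluster-classify loop might look roughly like the following, reusing `cluster_f1` from the sketch above; the F1 threshold and cluster-count cap are placeholders, and the stopping conditions mirror the quoted description:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def build_ensemble(embeddings, f1_threshold=0.7, max_clusters=10):
    """Iteratively cluster the unmarked pool and keep clusters whose one-class SVM scores well."""
    ensemble = []                                    # accepted one-class SVM classifiers
    unmarked = np.ones(len(embeddings), dtype=bool)  # samples not yet covered by the ensemble

    while unmarked.any():
        pool = embeddings[unmarked]
        if len(pool) < 2:
            break
        remaining_before = unmarked.sum()
        added = False
        for k in range(2, min(max_clusters, len(pool)) + 1):
            labels = AgglomerativeClustering(n_clusters=k, linkage="ward").fit_predict(pool)
            for cluster_id in np.unique(labels):
                svm, score = cluster_f1(pool, labels, cluster_id)
                if score > f1_threshold:  # keep the first classifier exceeding the threshold
                    ensemble.append(svm)
                    # Mark every sample in the full pool that the new classifier recognizes.
                    unmarked &= svm.predict(embeddings) != 1
                    added = True
                    break
            if added:
                break
        # Stop when no cluster yields a usable classifier, or nothing new was marked.
        if not added or unmarked.sum() == remaining_before:
            break
    return ensemble
```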
The paper touches on privacy concerns that arise from such a listening system — given how often the microphone would be switched on and processing environmental data, and because they note it may not always be possible to carry out all processing locally on the device.
“While our acoustic approach to activity recognition affords benefits such as improved classification accuracy and incremental learning capabilities, the capture and transmission of audio data, especially spoken content, should raise privacy concerns,” they write. “In an ideal implementation, all data would be retained on the sensing device (though significant compute would be required for local training). Alternatively, compute could occur in the cloud with user-anonymized labels of model classes stored locally.”
You can read the full paper here.
Scientists have announced that they have recorded a signal unlike any gravitational wave signal they have recorded before. The team recorded the signal from a gravitational wave resulting from a binary black hole merger where the black holes had unequal masses. According to the team, the binary black hole merger was a result of two black holes of approximately eight …
Apple is preparing to move some of its laptop and desktop PCs away from Intel chips, according to Bloomberg. The company is reportedly planning three Mac processors that are based on the A14, a yet-to-be-confirmed chip that is expected to power the n…
CNN’s Chris Cuomo and Brooke Baldwin each have provided regular on-air updates following their positive diagnoses.
Kristen Welker didn’t miss a beat as high winds toppled light stands during her report on the coronavirus pandemic.