For years, smartphone users have grown increasingly suspicious that their devices are listening to them to feed them advertisements and to “enhance their experience” on third-party apps. Companies like Google and Facebook have consistently denied these claims, saying that targeted ads and messages are merely a coincidence, and that the data behind these services is gathered in other ways.
However, earlier this year during the Cambridge Analytica scandal we began to see some of the first hints that our phones may actually be listening to us.
Cambridge Analytica whistleblower Christopher Wylie says that they have probably been listening all along. During an appearance before the UK parliament, Wylie said, “There’s audio that could be useful just in terms of, are you in an office environment, are you outside, are you watching TV, what are you doing right now?”
Since the scandal, experts who have studied this possibility have begun revealing their surprising results.
In a recent interview with Vice, Dr. Peter Hannay, the senior security consultant for the cybersecurity firm Asterisk, explained how third-party apps exploit a loophole to gather the voice data from your phone.
Hannay said that while your microphone is always on, your voice data is only sent out to other parties if you say specific trigger words such as “Hey Siri” or “OK Google,” but there is a catch. Third-party apps often ask to gain access to voice data in their user agreements to “enhance the experience” of their products.
“From time to time, snippets of audio do go back to [other apps like Facebook’s] servers but there’s no official understanding what the triggers for that are. Whether it’s timing or location-based or usage of certain functions, [apps] are certainly pulling those microphone permissions and using those periodically. All the internals of the applications send this data in encrypted form, so it’s very difficult to define the exact trigger,” Hannay said.
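The trigger-word gating Hannay describes can be sketched in a few lines. This is purely illustrative: the frame strings, the `TRIGGERS` set, and the idea of matching transcribed text are all invented for this sketch; real assistants detect hotwords with on-device acoustic models, not string comparison.

```python
# Hypothetical sketch of trigger-word gating: the microphone loop runs
# continuously, but audio is only forwarded once a trigger phrase is heard.

TRIGGERS = {"hey siri", "ok google"}  # assumed trigger phrases

def process_stream(frames):
    """Scan transcribed audio frames; collect only post-trigger snippets."""
    sent = []
    listening = False
    for frame in frames:
        if frame.lower() in TRIGGERS:
            listening = True        # trigger heard: start forwarding
            continue
        if listening:
            sent.append(frame)      # only this audio leaves the device
    return sent

frames = ["weather chat", "ok google", "set a timer", "thanks"]
print(process_stream(frames))  # ['set a timer', 'thanks']
```

The point Hannay raises is that a third-party app with microphone permission sits outside this gate: nothing in the sketch above stops it from choosing its own triggers.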
While this process is becoming more obvious by the day, many tech companies continue to deny that they are engaged in this practice, and since all of the outgoing information is encrypted, there is no way of telling exactly which information they are getting and how they are using it.
“Seeing Google are open about it, I would personally assume the other companies are doing the same. Really, there’s no reason they wouldn’t be. It makes good sense from a marketing standpoint, and their end-user agreements and the law both allow it, so I would assume they’re doing it, but there’s no way to be sure,” Hannay said.
Vice reporters then conducted their own experiment, saying random phrases near their phones and then seeing advertisements related to those terms pop up in their news feeds. You can try this experiment at home yourself, and it is highly likely that you have already experienced results like this by accident.
In April, I experienced something like this when a friend visited my house from the west coast. I picked him up from the Baltimore-Washington airport and during a conversation about his flight, he told me that he had a layover in Charlotte, North Carolina, and mentioned that they had a nice airport.
The following morning I woke up with these messages on my phone:
Oddly enough, I have never been to Charlotte, North Carolina, never really thought about the place, and have never typed anything about that place into Google or Facebook. But sure enough, after having a conversation about the airport in Charlotte, my phone thought I was interested.
As of right now, there is no way to avoid this spying, aside from being extremely careful about the apps that you sign up for, and actually reading their user agreements—or getting rid of your cell phone altogether, which could be counterproductive if you use it for business.
If virtual assistants have been the breakthrough technology in this year’s smartphone software, then the AI processor is surely the equivalent on the hardware side.
Apple has taken to calling its latest SoC the A11 Bionic on account of its new AI “Neural Engine”. Huawei’s latest Kirin 970 boasts a dedicated Neural Processing Unit (NPU) and is billing its upcoming Mate 10 as a “real AI phone”. Samsung’s next Exynos SoC is rumored to feature a dedicated AI chip too.
Qualcomm has actually been ahead of the curve since opening up the Hexagon DSP (digital signal processor) inside its Snapdragon flagships to heterogeneous compute and neural networking SDKs a couple of generations ago. Intel, Nvidia, and others are all working on their own artificial intelligence processing products too. The race is well and truly on.
There are some good reasons for including these additional processors inside today’s smartphone SoCs. Demand for real-time voice processing and image recognition is growing fast. However, as usual, there’s a lot of marketing nonsense being thrown around, which we’ll have to decipher.
AI brain chips, really?
Companies would love us to believe that they’ve developed a chip smart enough to think on its own or one that can imitate the human brain, but even today’s cutting edge lab projects aren’t that close. In a commercial smartphone, the idea is simply fanciful. The reality is a little more boring. These new processor designs are simply making software tasks such as machine learning more efficient.
There’s an important difference between artificial intelligence and machine learning that’s worth distinguishing. AI is a very broad concept used to describe machines that can “think like humans” or that have some form of artificial brain with capabilities that closely resemble our own.
Machine learning is not unrelated, but only encapsulates computer programs that are designed to process data and make decisions based on the results, and even learn from results to inform future decisions.
Neural networks are computer systems designed to help machine learning applications sort through data, enabling computers to classify data in ways similar to humans. This includes processes like picking out landmarks in a picture or identifying the make and color of a car. Neural networks and machine learning are smart, but they’re definitely not sentient intelligence.
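The gap between “sentient intelligence” and what these chips actually accelerate is easiest to see in a minimal example. Below is a single artificial neuron (a perceptron) trained to separate two clusters of 2D points; everything in it (the sample points, learning rate, epoch count) is made up for illustration. Production image classifiers chain millions of such units, but the underlying operation is the same: multiply inputs by weights, adjust the weights when the answer is wrong.

```python
# Illustrative only: one artificial neuron learning to separate two
# clusters of 2D points. This is the kind of arithmetic an "AI processor"
# accelerates -- repetitive multiply-accumulate, not thinking.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # one weight per input feature
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in zip(samples, labels):
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - pred            # learn from the mistake
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def classify(w, b, point):
    return 1 if (w[0] * point[0] + w[1] * point[1] + b) > 0 else 0

# Two clusters: label 0 near the origin, label 1 up and to the right.
samples = [(0.1, 0.2), (0.3, 0.1), (0.2, 0.3),
           (2.0, 2.1), (2.3, 1.9), (1.8, 2.2)]
labels = [0, 0, 0, 1, 1, 1]
w, b = train_perceptron(samples, labels)
print(classify(w, b, (0.2, 0.2)))  # near the origin -> 0
print(classify(w, b, (2.0, 2.0)))  # in the far cluster -> 1
```

Scale the inner loop up by many orders of magnitude and you have the workload that NPUs and DSPs are built to run efficiently.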
When it comes to talk of AI, marketing departments are attaching a familiar buzzword to a new area of technology, which makes it harder to explain. It’s equally an effort to differentiate themselves from their competitors. Either way, what all of these companies have in common is that they’re simply adding a new component to their SoCs that improves the performance and efficiency of tasks we now associate with smart or AI assistants. These improvements mainly concern voice and image recognition, but there are other use cases, too.
New types of computing
Perhaps the biggest question yet to be answered is: why are companies suddenly including these components? What does their inclusion make easier? Why now?
You may have noticed a recent increase in chatter about Neural Networks, Machine Learning, and Heterogeneous Computing. These are all tied into emerging use cases for smartphone users, and across a broader range of fields. For users, these technologies are helping to empower new user experiences with enhanced audio, image and voice processing, human activity prediction, language processing, speeding up database search results, and enhanced data encryption, among others.
One question still to be answered is whether these results are best computed in the cloud or on the device. Despite what one OEM or another claims, the answer more likely depends on the exact task being calculated. Either way, these use cases require some new and complicated approaches to computing, which most of today’s general-purpose 64-bit CPUs aren’t particularly well suited to. 8- and 16-bit floating point math, pattern matching, database/key lookup, bit-field manipulation, and highly parallel processing are just some examples of operations that run faster on dedicated hardware than on a general-purpose CPU.
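The appeal of 8- and 16-bit math is a trade of precision for space and bandwidth, and you can see the trade directly. The sketch below uses Python's standard `struct` module, whose `'e'` format round-trips a value through IEEE 754 half precision; the example weight value is arbitrary.

```python
# Illustrative only: what a 16-bit float costs in precision.
# Neural network weights usually tolerate this loss, which is why AI
# hardware favors fp16/int8 math -- half (or a quarter) the memory
# traffic per operand compared to 32-bit floats.
import struct

def to_fp16(x):
    """Round a Python float to the nearest IEEE 754 half-precision value."""
    return struct.unpack('e', struct.pack('e', x))[0]

weight = 0.123456789
print(to_fp16(weight))       # ~0.1235: only about 3 decimal digits survive
print(struct.calcsize('e'))  # 2 bytes per value, versus 8 for a double ('d')
```

Dedicated hardware takes the same idea further, packing many such narrow operands into one wide register and processing them in a single cycle.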
To accommodate the growth of these new use cases, it makes more sense to design a custom processor that’s better at these types of tasks than to have them run poorly on traditional hardware. There’s definitely an element of future-proofing in these chips too. Adding an AI processor early gives developers a baseline to target with new software.
Efficiency is the key
It’s worth noting that these new chips aren’t just about providing more computational power. They’re also being built to increase efficiency in three main areas: size, computation, and energy.
Today’s high-end SoCs pack in a ton of components, ranging from display drivers to modems. These parts have to fit into a small package and limited power budget, without breaking the bank (see Moore’s Law for more information). SoC designers have to stick to these rules when introducing new neural net processing capabilities too.
A dedicated AI processor in a smartphone SoC is designed around area, computational, and power efficiency for a certain subset of mathematical tasks.
It’s possible that smartphone chip designers could build larger, more powerful CPU cores to better handle machine learning tasks. However, that would significantly bulk up the size of the cores, taking up considerable die size given today’s octa-core setups, and make them much more expensive to produce. Not to mention that this would also greatly increase their power requirements, something that there simply isn’t a budget for in sub-5W TDP smartphones.
Heterogeneous Compute is all about assigning the most efficient processor to the task most suited for it, and an AI processor, HPU, or DSP are all good at Machine Learning math.
Instead, it’s much more astute to design a dedicated component for the job, something that can handle a specific set of tasks very efficiently. We have seen this many times over the course of processor development, from the optional floating point units in early CPUs to the Hexagon DSPs inside Qualcomm’s higher-end SoCs. DSPs have fallen in and out of use across audio, automotive, and other markets over the years, due to the ebb and flow of computational power versus cost and power efficiency. The low power and heavy data-crunching requirements of machine learning in the mobile space are now helping to revive demand.
It’s not cynical to question whether companies are being really accurate with their portrayal of neural networking and AI processors. However, the addition of an extra processor dedicated to complex math and data sorting algorithms is only going to help smartphones, and other pieces of technology, crunch numbers better and enable a variety of new useful technologies, from automatic image enhancement to faster video library searches.
As much as companies may tout virtual assistants and the inclusion of an AI processor as making your phone smarter, we’re nowhere near seeing true intelligence inside our smartphones. That being said, these new technologies, combined with emerging machine learning tools, are going to make our phones more useful than ever, so definitely watch this space.