Publicly, that is. They have no doubt been doing it in secret since they launched it.
Off-device processing has been the default from day one. The only thing changing is the removal of local processing on certain devices, likely because the new backing AI model will no longer be able to run on that hardware.
With on-device processing, they don’t need to send audio. They can just send the text, which is infinitely smaller and easier to encrypt as “telemetry”. They’ve probably got logs of conversations in every Alexa household.
This has always blown my mind. Watching people willingly allow Big Brother-esque devices into their home for very, very minor conveniences like turning on some gimmicky multi-colored light bulbs. Now they’re literally using home “security” cameras that store everything on some random cloud server. I’ll truly never understand.
Why has no security researcher published evidence of these devices with microphones uploading random conversations? Nobody working on the inside has ever leaked anything regarding this potentially massive breach of privacy? A perfectly secret conspiracy by everyone involved?
We know more about top secret NSA programs than we do about this proposed Alexa spy mechanism. None of the people working on this at Amazon have wanted to leak anything?
I’m not saying it’s not possible, but it seems extremely improbable to me that everyone’s microphones are listening to their conversations, they’re being uploaded somewhere to serve them better ads, and absolutely nobody has leaked anything or found any evidence.
I hate to be the bearer of bad news, but…
Sure, but that’s not the commonly repeated conspiracy, even among non-technical people: that everyone’s mics are listening all the time and being used to serve you ads or whatever. The scale of that is not at all comparable to what I’m talking about. Yeah, I’m sure devices are sometimes activated inadvertently, those recordings are uploaded, and people have listened to them without permission. That is a far cry from all devices listening nearly all the time, using some surreptitious method to upload the data, with the recordings being used for some nefarious purpose.
Again, I’m not excusing these devices for being a privacy nightmare, but I just think it’s extremely implausible that Alexa, Siri, Google, etc. are always listening and nobody has discovered a device uploading.
The real privacy nightmare is that recording your conversations is completely unnecessary to build a richly detailed profile of you and your contacts. Regular old device / browser fingerprinting and a few people in your group sharing contacts with apps is enough for that, and it’s not a top secret conspiracy.
Per that article, it only happens when it thinks it’s been activated, and only when you opt in. Not much of a bombshell.
Emphasis on “when it thinks”. Not much point to a privacy control that the device can just ignore for unspecified reasons, and they had 150+ instances of that occurring in this data set.
~150 inadvertent activations is pretty low for the number of devices times however many years it spans.
Because if they would publish it, the other security experts would say “well, duh, that’s how it works”.
It’s just average people who are unaware of it, or who don’t seem to care.
Argument from ignorance
It’s better to be safe than sorry is all I’m saying.
Edit: There’s also this.
There is no argument from ignorance fallacy in what I said. I am not claiming that these devices never send audio you didn’t intend to send simply because there’s no evidence to the contrary.
However, the idea that everyone’s microphones are always listening, and that’s why you saw an ad for whatever after talking to your friend, yet not a single person has observed a device uploading this kind of data, nor has anyone ever leaked any kind of information on this supposed system, is extremely unlikely to be true in my opinion.
They don’t need microphones to do this. Regular tracking is plenty to do a good job at suggesting you a highly relevant ad, and frequency illusion does the rest. You’re not noticing the thousand times you see ads that are irrelevant to whatever you were talking about, but the one time you do notice really sticks out.
Frankly there are plenty of more concerning ways of violating our privacy, out in the open, that I believe are a much higher priority than mics always recording, for which there is no evidence.
Stating that you don’t think that it’s possible is irrelevant. It’s either happening or it isn’t. True or false. P or ¬P.
Is an argument from ignorance. Not trying to be rude, but this is basic logic.
Do you own a smartphone?
Yeah, but it’s rooted and running a custom ROM ;)
I mean… I 100% agree, and yet you and I and everyone reading this are carrying around a phone that can do the exact same shit
I am not, thank you very much. Even if I wasn’t, you can simply disable the wake word. And you can go into your account and see/listen to any recordings it has made to verify that it has stopped listening.
This is why jailbreaking/rooting your phone is so important.
If you look at the article, it was only ever possible to do local processing with certain devices and only in English. I assume that those are the ones with enough compute capacity to do local processing, which probably made them cost more, and that the hardware probably isn’t capable of running whatever models Amazon’s running remotely.
I think that there’s a broader problem than Amazon and voice recognition for people who want self-hosted stuff. That is, throwing loads of parallel hardware at something isn’t cheap. It’s worse if you stick it on every device. Companies — even aside from not wanting someone to pirate their model running on the device — are going to have a hard time selling devices with big, costly, power-hungry parallel compute processors.
What they can take advantage of is that for a lot of tasks, the compute demand is only intermittent. So if you buy a parallel compute card, the cost can be spread over many users.
I have a fancy GPU that I bought to run LLM stuff; it ran about $1,000. Say I’m doing AI image generation with it 3% of the time. If that compute happened on a shared system off in the Internet instead, my share of the hardware cost would be about $30. That’s a heckofa big improvement.
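The back-of-envelope math can be sketched out (a toy calculation; the $1,000 price and 3% utilization are just the illustrative figures from above, not real measurements):

```python
# Back-of-envelope amortization of a shared parallel-compute card.
# The card price and per-user utilization are illustrative numbers only.

def amortized_cost(card_price: float, utilization: float) -> float:
    """Per-user hardware cost if the card is shared until ~fully utilized."""
    users_who_can_share = 1 / utilization    # ~33 users at 3% each
    return card_price / users_who_can_share  # same as card_price * utilization

print(f"${amortized_cost(1000.0, 0.03):.0f} per user")  # roughly $30
```

The point is just that per-user cost scales with utilization: at 3% duty cycle, sharing divides the hardware cost by about 33.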
And the situation that they’re dealing with is even larger, since there might be multiple devices in a household that want to do parallel-compute-requiring tasks. So now you’re talking about maybe $1k in hardware for each of them, not to mention the supporting hardware like a beefy power supply.
This isn’t specific to Amazon. Like, this is true of all devices that want to take advantage of heavyweight parallel compute.
I think that one thing that might be worth considering for the self-hosted world is the creation of a hardened parallel compute node that exposes its services over the network. In a scenario like that, you would have one device (well, or more, but you could just have one) that provides generic parallel compute services. Then your smaller, weaker, lower-power devices — phones, Alexa-type speakers, whatever — make use of it over your network, using a generic API.

There are some issues that come with this. It needs to be hardened: it can’t leak information from one device to another. Some tasks require storing a lot of state — AI image generation requires uploading a large model, and you want to cache that. If you have, say, two parallel compute cards/servers, you want to use them intelligently: keep the model loaded on one of them insofar as is reasonable, to avoid needing to reload it. And some uses are very latency-sensitive — like voice recognition — while others, like image generation, are amenable to batch use, so some kind of priority system is probably warranted. So there are some technical problems to solve.
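The scheduling side of such a node might look something like this minimal sketch. Everything here is hypothetical — the ComputeNode class, the priority levels, and the model names are made up for illustration — but it shows the two ideas: latency-sensitive jobs jump the queue, and the node tracks which model is resident so it only pays the reload cost on a cache miss.

```python
# Hypothetical "household compute node" scheduler sketch: a priority
# queue of jobs plus a one-slot model cache. Not a real API.

import heapq
import itertools
from dataclasses import dataclass, field

VOICE, BATCH = 0, 1  # lower number = higher priority

@dataclass(order=True)
class Job:
    priority: int
    seq: int                            # tie-breaker: FIFO within a priority
    model: str = field(compare=False)   # which model weights the job needs
    payload: str = field(compare=False)

class ComputeNode:
    def __init__(self) -> None:
        self._queue: list[Job] = []
        self._seq = itertools.count()
        self._loaded_model: str | None = None  # model currently resident
        self.model_loads = 0                   # expensive reloads performed

    def submit(self, priority: int, model: str, payload: str) -> None:
        heapq.heappush(self._queue, Job(priority, next(self._seq), model, payload))

    def run_next(self) -> str:
        job = heapq.heappop(self._queue)       # highest-priority job first
        if job.model != self._loaded_model:    # cache miss: reload weights
            self._loaded_model = job.model
            self.model_loads += 1
        return f"{job.model}:{job.payload}"
```

So a batch image job submitted before a voice request still runs after it, because the heap orders on (priority, arrival). A real version would need the hardening, multi-card placement, and isolation between client devices described above.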
But otherwise, the only real option for heavy parallel compute is going to be sending your data out to the cloud.
Having per-household self-hosted parallel compute on one node is still probably more-costly than sharing parallel compute among users. But it’s cheaper than putting parallel compute on every device.
Linux has some highly-isolated computing environments like seccomp that might be appropriate for implementing the compute portion of such a server, though I don’t know whether it’s too-restrictive to permit running parallel compute tasks.
In such a scenario, you’d have a “household parallel compute server”, in much the way that one might have a “household music player” hooked up to a house-wide speaker system running something like mpd or a “household media server” providing storage of media, or suchlike.