- cross-posted to:
- hackernews
An engineer got curious about how his iLife A11 smart vacuum worked and monitored the network traffic coming from the device. That’s when he noticed it was constantly sending logs and telemetry data to the manufacturer, something he hadn’t consented to. The user, Harishankar, decided to block the telemetry servers’ IP addresses on his network while keeping the firmware and OTA servers reachable. The vacuum kept working for a while, but then refused to turn on at all. After a lengthy investigation, he discovered that a remote kill command had been issued to his device.
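For anyone wondering what that kind of monitoring looks like in practice, here is a minimal sketch in Python using scapy to log which hostnames a single device resolves; telemetry endpoints tend to show up in DNS right alongside the legitimate firmware/OTA hosts. The device address and network vantage point are hypothetical placeholders, and the article doesn’t say what tooling Harishankar actually used.

```python
# Minimal sketch: log every DNS lookup one device makes, so its
# telemetry hosts can be identified and blocked at the router.
# VACUUM_IP is a made-up placeholder for the device's LAN address.
from scapy.all import sniff, DNSQR, IP

VACUUM_IP = "192.168.1.50"  # hypothetical

def log_dns_query(pkt):
    # Only look at DNS queries originating from the vacuum.
    if pkt.haslayer(IP) and pkt.haslayer(DNSQR) and pkt[IP].src == VACUUM_IP:
        name = pkt[DNSQR].qname.decode(errors="replace").rstrip(".")
        print(f"{VACUUM_IP} resolved {name}")

# Needs packet-capture privileges and a vantage point that can actually
# see the device's traffic (e.g. the router itself or a mirrored port).
sniff(filter="udp port 53", prn=log_dns_query, store=False)
```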


As a layman, can someone explain what the ramifications of smart devices sharing your data are? I know it’s bad, but I don’t understand why it’s bad or how it’s used against you.
The problem created by collecting a person’s private data against their will is primarily a philosophical one, similar to the “principle of least privilege”, which you may be familiar with. The idea is that those collecting the data have no reasonable need for it in order to provide the services they’re providing, so their collection of that information can only serve something other than the user’s benefit, and the user gets nothing in exchange for it. The user is already paying for the product or service they get, so the personal data is just a bonus freebie that the vendor is making off with. If the personal data is worthless, then there is no need to collect it, and if it does have worth, they are taking something of value without paying for it, which one might call stealing, or at least piracy. To many, this is already enough to cry foul, and we haven’t even gotten into the content and use of the collected data yet.
There is a vibrant marketplace among those in the advertising business for this personal data. There are brokers and aggregators whose goal is to correlate every data point they have gotten from every device and app they can find with a specific person. Even if no individual detail or set of details presents a risk or identifies who the specific person is, computer algorithms can analyze all the data together and narrow it down to exactly one individual, much the way the game “20 questions” works: the player can pick literally any object or concept in the whole world, and in twenty questions or fewer, the other player can often guess it. Now imagine how successful the advertisers would be at guessing who a person is if they can ask unlimited questions, forever, until there can be no doubt; that is exactly what an algorithm reading the collected data can do.
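To make the “20 questions” point concrete, here is a toy sketch in Python; the records and attributes are entirely invented, and real brokers hold vastly more columns over vastly larger populations:

```python
# Each attribute is harmless on its own, but every filter removes a
# large slice of the population; a handful of filters often leaves
# exactly one candidate.
people = [
    {"zip": "30301", "car": "sedan", "vacuum": "iLife", "gym": "6am"},
    {"zip": "30301", "car": "SUV",   "vacuum": "iLife", "gym": "6am"},
    {"zip": "30301", "car": "sedan", "vacuum": "other", "gym": "noon"},
    # ... imagine millions more rows ...
]

candidates = people
for key, value in [("zip", "30301"), ("car", "sedan"), ("vacuum", "iLife")]:
    candidates = [p for p in candidates if p[key] == value]
    print(f"after {key}={value}: {len(candidates)} match(es)")
```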
There was an infamous example of Target (the retailer) determining that a teenage girl was pregnant before she had told anyone, and creating a disastrous home situation for her by sending targeted maternity marketing materials to her house, where they were seen by her abusive family.
These companies build what many find to be disturbingly invasive dossiers on individuals, including their private health information, intimacy preferences, and personal habits, among other things. The EFF did a write-up many years ago with creepy examples of what basic metadata collection reveals, which I found helpful for understanding the problem:
https://www.eff.org/deeplinks/2013/06/why-metadata-matters?rss=1
Companies have little to no obligation to treat you fairly or even do business with you at all, which lets them effectively exile you if they have decided you belong on some “naughty list” because of an indicator given to them by an algorithm that analyzed your info. They can also exploit widely known weaknesses in human psychology to influence you in ways you don’t even realize, which is undeniably unethical and coercive. It also creates loopholes for bad actors in government to exploit. For example, in my country (USA), the police are forbidden from investigating me if I am not suspected of a crime, but they can pay a data broker $30 for a breakdown of everything I like, everything I do, and everywhere I’ve been. If it were sound government policy to allow arbitrary investigation of anyone regardless of suspicion, then ask yourself why every non-authoritarian government forbids it.
I know that’s a lot; it is a complicated topic whose implications are hard to grasp. Unfortunately, everyone who could most effectively educate the public about those risks is instead exploiting its ignorance for a wide variety of purposes. Some of those purposes are innocuous, others are ethically dubious, and many more are just objectively nefarious. To be clear, the laws against blanket investigations exist to prevent the dubious and nefarious uses, because once that data is collected, it isn’t feasible to ensure it will stay in the right hands. The determination was that the potential good of this kind of data collection is far outweighed by the potential harms.
I hope that helps!
One aspect to consider is exactly what data these devices are exfiltrating from your network. You usually can’t see the contents of the telemetry being sent, but given that a LOT of smart devices have cameras and/or microphones, do you really trust that your IoT devices aren’t sending back audio and/or video recordings of the inside of your house?
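Even when the payload is TLS-encrypted, you can still gauge how much goes where, which is often telling on its own. A rough sketch, assuming scapy, a hypothetical capture file named vacuum.pcap, and a made-up device address:

```python
# Tally outbound bytes per destination for one device. A sustained
# audio/video stream stands out immediately: megabytes to one host
# looks nothing like a few kilobytes of periodic status pings.
from collections import Counter
from scapy.all import rdpcap, IP

VACUUM_IP = "192.168.1.50"  # hypothetical

bytes_per_dest = Counter()
for pkt in rdpcap("vacuum.pcap"):  # hypothetical capture file
    if pkt.haslayer(IP) and pkt[IP].src == VACUUM_IP:
        bytes_per_dest[pkt[IP].dst] += len(pkt)

for dst, total in bytes_per_dest.most_common(10):
    print(f"{dst}: {total} bytes sent")
```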
I’m sure there’s more than a few programmers here who secretly work on crap like this at work.
Email me the blueprints to your house, your address, name, and your favorite hobbies and I will tell you the answer.
You might get some snarky comments, but the way I envision it is that the fuller the picture companies can build of you (when you’re running a vacuum, when you’re driving, when your lights are on and off, etc.), the more data they have to run predictive analytics on your behavior, and that can be used in a variety of ways that may or may not benefit you. At this point it’s mostly just to get you to buy things they think you’ll buy, but what happens when your profile starts to match up with people who commit crimes? Maybe you get harassed by the authorities a little more often? Generally, the lack of consent around how the data is collected and how it’s used is the problem most people have.
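As a toy illustration of how little it takes: bare timestamps of when a vacuum runs are already enough to guess when nobody is home. The event log below is invented for the example:

```python
# People commonly schedule cleaning for when they're out, so the most
# frequent run hour is a decent guess at when the house sits empty.
from collections import Counter
from datetime import datetime

run_starts = [  # hypothetical telemetry: vacuum job start times
    "2024-03-04 09:05", "2024-03-05 09:12", "2024-03-06 09:01",
    "2024-03-07 09:08", "2024-03-09 14:30",
]

hours = Counter(datetime.strptime(t, "%Y-%m-%d %H:%M").hour
                for t in run_starts)
likely_empty_hour, count = hours.most_common(1)[0]
print(f"vacuum usually runs around {likely_empty_hour}:00 "
      f"({count} of {len(run_starts)} runs)")
```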
I’d have dismissed this as fanciful ten years ago. But we’ve got ICE agents staking out grocery stores and flea markets looking for anyone passably “illegal”. Palantir seems to have made a trillion-dollar business model out of promising an idiot president the ability to Minority Report crime. And then you’ve got Israel’s Lavender AI and “Where’s Daddy?” programs, intended to facilitate murdering suspects by bombing the households of their relatives.
I guess it wouldn’t hurt to be a little bit more paranoid.
A detailed room-mapping scan is basically a wealth report disguised as vacuum telemetry: square footage, room count, layout complexity, “bonus” spaces like offices or nurseries; all of it feeds straight into socioeconomic profiling. And once companies have that floor plan, they’re not just storing it; they’re monetizing it, feeding it into ad networks, data brokers, and pricing algorithms that adjust what you see (and what you pay) based on the shape of your living space.
And a mapped floor plan also quietly exposes who lives in the home, how they move, and what can be inferred from that.
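To show how directly a map becomes those signals: a typical robot vacuum map is an occupancy grid, and floor area falls out of it in a few lines. The grid, cell size, and encoding below are invented for illustration:

```python
# Count free-floor cells and multiply by cell area to get square
# meters; room count and layout complexity come from the same grid.
CELL_SIZE_M = 0.05  # hypothetical: each cell is 5 cm x 5 cm

# 0 = unexplored, 1 = free floor, 2 = wall/obstacle (made-up encoding)
grid = [
    [2, 2, 2, 2, 2],
    [2, 1, 1, 1, 2],
    [2, 1, 1, 1, 2],
    [2, 2, 2, 2, 2],
]

free_cells = sum(row.count(1) for row in grid)
area_m2 = free_cells * CELL_SIZE_M ** 2
print(f"cleanable floor area: {area_m2:.4f} m^2")
```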
If they brick your device for wanting privacy, why should you trust them?