Proton Mail is famous for its privacy and security. The cool trick they do is that not even Proton can decode your email. That’s because it never exists on their systems as plain text — it’s always…
The worst part is that, once again, Proton is trying to convince its users that it's more secure than it really is. You have to wonder what else they're lying about or being deceptive about.
We really need to audit Proton
Both your take, and the author, seem to not understand how LLMs work. At all.
At some point, yes, an LLM has to process clear-text tokens. There's no getting around that. Anyone who creates a 30-billion-parameter LLM that can run inference on encrypted data will become an overnight billionaire from military contracts alone. If you want absolute privacy, process locally. Lumo has limitations, but it goes further than duck.ai at respecting privacy. Your threat model and your equipment mean YOU make a decision for YOUR needs. This is an option. It's not trying to be one-size-fits-all. You don't HAVE to use it. It's not being forced down your throat like Gemini or Copilot.
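To make "process locally" concrete, here is a minimal sketch, assuming the Hugging Face transformers library is installed; "gpt2" is just a tiny stand-in model so the example runs on ordinary hardware, not a recommendation. The point is that the prompt and the generated text never leave your machine:

```python
# Minimal local-inference sketch: the prompt and the output stay on your own
# hardware, which is what "local LLM" means here. Assumes the Hugging Face
# `transformers` library; "gpt2" is only a tiny stand-in so this runs on a
# laptop; swap in whichever open-weights model your hardware can hold.
from transformers import pipeline

generate = pipeline("text-generation", model="gpt2")

prompt = "Draft a short, polite reply declining a meeting invitation."
result = generate(prompt, max_new_tokens=60, do_sample=False)
print(result[0]["generated_text"])
```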
And their LLM: it's Mistral, OpenHands, and OLMo, all open source. It's in their documentation. So this article straight-up lies about that. Like… did Google write this article? It's simply propaganda.
Also, Proton does have some circumstances where it lets you decrypt your own email locally. Otherwise it’s basically impossible to search your email for text in the email body. They already had that as an option, and if users want AI assistants, that’s obviously their bridge. But it’s not a default setup. It’s an option you have to set up. It’s not for everyone. Some users want that. It’s not forced on everyone. Chill TF out.
Their AI is not local, so adding it to your email means breaking e2ee. That’s to some extent fine. You can make an informed decision about it.
But Proton is not putting warning labels on this. They are trying to confuse people into thinking it has the same security as their e2ee mail. Just look at the "zero trust" bullshit on Proton's own page.
Where does it say "zero trust" on Proton's own page? It does not say "zero-trust" anywhere; it says "zero-access". The data is encrypted at rest, so it is not e2ee. They never mention end-to-end encryption for Lumo, except for ghost mode, and there they are talking about the chat once it's complete and you choose to leave it there to use later, not about the prompts you send in.

Zero-access encryption

Your chats are stored using our battle-tested zero-access encryption, so even we can't read them, similar to other Proton services such as Proton Mail, Proton Drive, and Proton Pass. Our encryption is open source and trusted by over 100 million people to secure their data.
Which means that they are not advertising anything they are not doing or cannot do.
By posting this disinformation, all you're achieving is pushing people back to all the shit services out there that are "free", because many will start believing that privacy is way harder than it actually is ("so what's the point?") or, even worse, that no alternative will help them be more private, so they might as well just stop trying.
My friend, I think the confusion stems from you thinking you have deep technical understanding on this, when everything you say demonstrates that you don’t.
First off, you don’t even know the terminology. A local LLM is one YOU run on YOUR machine.
Lumo apparently runs on Proton servers - where their email and docs all live as well. So I'm not sure what "Their AI is not local!" even means, other than that you don't know what LLMs do or what they actually are. Do you expect a 32B LLM that would need about a 32 GB video card to get downloaded and run entirely in a browser? Buddy…just…no.
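For anyone wondering where that figure comes from, here is the back-of-the-envelope version (weights only; activations and the KV cache add more on top):

```python
# Rough memory needed just to hold the weights of a 32B-parameter model.
# At 8-bit quantisation that is already ~30 GiB, which is why "about a
# 32 GB video card" is the right ballpark; fp16 roughly doubles it.
params = 32e9  # 32 billion parameters

for label, bytes_per_param in [("fp16", 2), ("int8", 1), ("4-bit", 0.5)]:
    gib = params * bytes_per_param / 2**30
    print(f"{label:>5}: ~{gib:.0f} GiB of weights")
```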
Look, Proton can MITM-attack your email at any time, or, if you use them as a VPN, MITM your VPN traffic if it feels like it. Any VPN or secure email provider can actually do that. Mullvad can, Nord can, take your pick. That's just a fact. Google's business model is to MITM-attack your life, so we have the counterfactual already. So your threat model needs to include how much you trust the entity handling your data not to do that, whether intentionally or by letting others do it through negligence.
There is no such thing as e2ee LLMs. That's not how any of this works. Doing e2ee for the chats to get what you type into the LLM context window, letting the LLM process tokens the only way it can, getting your response back, and having it keep no logs or data is about as good as it gets short of a local LLM - which, remember, means on YOUR machine. If that's unacceptable for you, then don't use it. But don't brandish your ignorance like you're some expert and insist that everyone on earth adhere to whatever ill-informed "standards" you think up.
Also, clearly you aren’t using Proton anyway because if you need to search the text of your emails, you have to process that locally, and you have to click through 2 separate warnings that tell you in all bold text “This breaks the e2ee! Are you REALLY sure you want to do this?” So your complaint about warnings is just a flag saying you don’t actually know and are just guessing.
Yes, that is exactly what I am saying. You seem to be confused by basic English.
They are not supposed to be able to, and well-designed e2ee services can't. That's the whole point of e2ee.
I know. When did I say there is?
So then you object to the premise that any LLM setup that isn't local can ever be "secure", and you can't seem to articulate that.
What exactly is dishonest here? The language on their site is factually accurate; I've had to read it 7 times today because of you all. You just object to the premise of non-local LLMs and are, IMO, disingenuously making that a "brand issue" because…why? It sounds like a very emotional argument, since it's not backed by any technical discussion beyond "local only secure, nothing else."
Beyond the fact that they are not supposed to be able to, and well-designed e2ee services can't be.
So then you trust that their system is well-designed already? What is this cognitive dissonance that they can secure the relatively insecure format of email, but can’t figure out TLS and flushing logs for an LLM on their own servers? If anything, it’s not even a complicated setup. TLS to the context window, don’t keep logs, flush the data. How do you think no-log VPNs work? This isn’t exactly all that far off from that.
I object to how it is written. Yes, technically it is not wrong. But it intentionally uses confusing language and rare technical terminology to imply that it is as secure as e2ee. They compare it to Proton Mail and Drive, which are supposedly e2ee.
Only Drive is. Email is not always e2ee; it uses zero-access encryption, which I believe is the exact same mechanism used by this chatbot, so the comparison is quite fair, tbh.
Well, even the mail is sometimes e2ee. Making the comparison without specifying is like marketing your safe as being "used in Fort Knox" when it turns out to be a cheap safe used for payroll documents, like in every company. Technically true, but misleading as hell. When you hear Fort Knox, you think gold vault. When you hear Proton Mail, you think e2ee, even if most mails are external.
And even if you disagree about mail, there is no excuse for comparing to proton drive.
It is e2ee – with the LLM context window!
When you email someone outside Proton servers, doesn’t the same thing happen anyway? But the LLM is on Proton servers, so what’s the actual vulnerability?
It is not. Not in any meaningful way.
Yes it does (the same thing happens when you email someone outside Proton servers).
Again, the issue is not the technology. The issue is deceptive marketing. Why doesn't their site clearly say what you just said? Why use confusing technical terms most people won't understand, and compare it to Drive, which is fully e2ee?
You’re using their client. You get a fresh copy every time it changes. Of course you are vulnerable to a MITM attack, if they chose to attempt one.
If you insist on being a fanboy then go ahead. But this is like arguing a bulletproof vest is useless because it does not cover your entire body.
Or because the bulletproof vest company might sell you a faulty one as part of a conspiracy to kill you.
Scribe can be local, if that’s what you are referring to.
They also have a specific section on it at https://proton.me/support/proton-scribe-writing-assistant#local-or-server
Also, emails for the most part are not e2ee; they can't be, because the other party is not using encryption. They use "zero-access", which is different. It means Proton gets the email in clear text, encrypts it with your public PGP key, deletes the original, and sends it to you.
See https://proton.me/support/proton-mail-encryption-explained:

The email is encrypted in transit using TLS. It is then unencrypted and re-encrypted (by us) for storage on our servers using zero-access encryption. Once zero-access encryption has been applied, no-one except you can access emails stored on our servers (including us). It is not end-to-end encrypted, however, and might be accessible to the sender's email service.
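As a rough sketch of what that zero-access flow looks like mechanically, using the Python cryptography package and RSA+AES hybrid encryption as a stand-in for Proton's actual PGP implementation (the function and field names here are made up for illustration); the same idea applies to storing completed Lumo chats:

```python
# Zero-access storage sketch (hybrid encryption standing in for PGP): the
# server handles the plaintext once, encrypts it to the user's public key,
# then discards the plaintext. Requires the `cryptography` package.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# In reality the user's client generated this key pair; the server only ever
# holds the public half.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

def zero_access_store(plaintext: bytes, user_public_key) -> dict:
    """Encrypt an incoming message so only the user can read it later."""
    session_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, plaintext, None)
    wrapped_key = user_public_key.encrypt(
        session_key,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    # The plaintext and session key go out of scope here; nothing readable
    # is kept on the server side.
    return {"nonce": nonce, "ciphertext": ciphertext, "wrapped_key": wrapped_key}

record = zero_access_store(b"incoming email body", public_key)

# Only the private-key holder (the user) can reverse it:
session_key = private_key.decrypt(
    record["wrapped_key"],
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
print(AESGCM(session_key).decrypt(record["nonce"], record["ciphertext"], None))
```

The catch the thread keeps circling around is the step before that function runs: the server necessarily holds the plaintext for a moment, so you are trusting the provider to encrypt and discard it; there is nothing to verify cryptographically.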
If an AI can work on encrypted data, it’s not encrypted.
SMH
No one is saying it’s encrypted when processed, because that’s not a thing that exists.
End-to-end encryption of an interaction with a chatbot would mean the company doesn't decrypt your messages to it: it operates on the encrypted text, gets an encrypted response that only you can decrypt, and sends that to you. You then decrypt the response.
So yes. It would require operating on encrypted data.
The documentation says it's TLS-encrypted to the LLM context window. The LLM processes it, and the context window output goes back to you via TLS.

As long as the context window is only connected to Proton servers decrypting the TLS tunnel, the LLM runs on their servers, and, much like a VPN, they don't keep logs, then I don't see what the problem actually is here.
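For what that flow looks like from the client side, here is a minimal sketch; the hostname and JSON fields are hypothetical placeholders, not Proton's actual API. TLS protects the prompt on the wire, but whoever terminates TLS on the server side sees it in plaintext before the model can tokenise it:

```python
# What "TLS to the context window" means from the client's point of view.
# The endpoint and request fields are hypothetical placeholders, not Proton's
# real API. `requests` verifies the server certificate by default, so the
# prompt is protected in transit; the server terminating TLS still sees it
# in plaintext, which is where the no-logs promise has to take over.
import requests

resp = requests.post(
    "https://chat.example.org/api/v1/chat",  # placeholder endpoint
    json={"prompt": "Summarise this note for me"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```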
homomorphic encryption?
not there yet, of course, but it is conceptually possible
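To make "conceptually possible" concrete: textbook RSA already has a (multiplicative) homomorphic property, so here is a toy demo of computing on data without decrypting it. Deliberately insecure parameters, purely for illustration; fully homomorphic schemes generalise the idea, but the overhead is still nowhere near what running billions of transformer operations would need:

```python
# Toy demo of a homomorphic property: with textbook RSA, multiplying two
# ciphertexts yields the encryption of the product of the plaintexts, so a
# "server" can compute a * b without ever decrypting a or b.
# Tiny, insecure parameters; illustration only.
p, q = 61, 53
n = p * q                  # 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent, 2753 (Python 3.8+ modular inverse)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
blind_product = (encrypt(a) * encrypt(b)) % n  # computed without seeing a or b
print(decrypt(blind_product))                  # 42 == a * b
```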
@wewbull@feddit.uk
Mullvad FTW
MullChad is the best for anyone who doesn’t require port forwarding
Yes, indeed. Even so, just because there is a workaround, we should not ignore the issue (governments descending into fascism).
Very true
Sauce?
From Proton's own website:

Your chats are stored using our battle-tested zero-access encryption, so even we can't read them, similar to other Proton services such as Proton Mail, Proton Drive, and Proton Pass.
And why this is not true is explained in the article from the main post, and can easily be figured out with a little common sense (an AI can't respond to messages it can't understand, so the AI must decrypt them).
They actually don't explain it in the article. The author doesn't seem to understand why there is a claim of e2e chat history and zero-access for chats. The point of zero-access is trust. You need to trust the provider to do it, because it's not cryptographically verifiable. Upstream there is no encryption: zero-access means providing the service (usually unencrypted), then encrypting and discarding the plaintext.
Of course the model needs to have access to the context in plaintext, exactly like Proton has access to emails sent to non-PGP addresses. What they can do is encrypt the chat histories, because those don't need active processing, and encrypt on the fly the communication between the model (which needs plaintext access) and the client. The same thing happens with Scribe.
I personally can't stand LLMs, and I am eagerly waiting for this bubble to collapse, but this article is essentially a nothingburger.
You understand that. I understand that. But try to read it from the point of view of an average user who knows next to nothing about cybersecurity and LLMs. It sounds like the e2ee that Proton Mail and Drive are famous for. To us, that's obviously impossible, but most people will interpret the marketing that way.
It’s intentional deception, using technical terms to confuse nontechnical customers.
How would you explain it in a way that is nontechnical and accurate, and that differentiates you from all the other companies that are not doing anything even remotely similar? I'm asking genuinely, because from the perspective of a user who has decided to trust the company, zero-access is functionally much closer to e2ee than it is to "regular services", which are the alternative.
The easiest way is to explain the consequence:

"We can't access your chat history retroactively, but we can start wiretapping your future chats."

If that is too honest for you, then just explain that the data is encrypted after the LLM reads it, instead of using technical terms like "zero-access".
This I can agree on. They would have been better served, and would have made things clearer to their users, by clarifying that it is not 'zero trust' and not e2ee. At the end of the day, once the masses start trusting a company, they stop digging deep and just read the first couple of paragraphs of the details, if at all; but some of us are always digging to find the weakest links in our security and privacy and try to strengthen them. So yeah, pretty stupid of them.