Comments on the Content Security Policy specification

First, let me state up front some assumptions I'm making:

* Authors will rely on technologies that they perceive are solving their 
  problems,

* Authors will invariably make mistakes, primarily mistakes of omission,

* The more complicated something is, the more mistakes people will make.


I think CSP is orders of magnitude too complicated to be a successful 
security mechanism on the Web.

I believe that if one were to take a typical Web developer, show him this:

   X-Content-Security-Policy: allow self; img-src *;
                              object-src media1.com media2.com;
                              script-src trustedscripts.example.com

....and ask him "does this enable or disable data: URLs in <embed>" or 
"would an onclick='' handler work with this policy" or "are framesets 
enabled or disabled by this set of directives", the odds of them getting 
the answers right are about 50:50.

Similarly, given the following:

   X-Content-Security-Policy: allow https://self:443

....I don't think a random Web developer would be able to correctly guess 
whether or not inline scripts on the page would work, or whether Google 
Analytics would be disabled or not.

I think that this complexity, combined with the tendency for authors to 
rely on features they think are solving their problems, would actually 
lead to authors writing policy files in what would externally appear to be 
a random fashion, changing them until their sites worked, and would then 
assume their site is safe. This would then likely make them _less_ 
paranoid about XSS problems, which would further increase the possibility 
of them being attacked, with a good chance of the policy not actually 
being effective.



Other comments:

I'm concerned about the round-trip latency of fetching an external policy 
file. Would the policy only be enforced after it is downloaded, or would 
it block page loading? The former seems like a big security problem (you 
would be vulnerable to an XSS if the attacker can DOS the connection). The 
latter would be unacceptable from a performance perspective. Applying a 
lockdown policy in the meantime would likely break the page (e.g. no 
scripts or images could be fetched).

I think CSP should be more consistent about what happens with multiple 
policies. Right now, two headers will mean the second is ignored, and two 
<meta>s will mean the second is ignored; but a header and a <meta> will 
cause the intersection to be used. Similarly, a header with both a policy 
and a URL will cause the most restrictive mode to be used (and both 
policies to be ignored), but a misplaced <meta> will cause no CSP to be 
applied.
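
For concreteness, here is a minimal sketch of what "intersection" could 
mean in practice, assuming a source is permitted only when every 
applicable policy permits it, and that each policy has already been 
expanded so every directive has an explicit source set; the Python helper 
below is purely illustrative, not the algorithm from the spec:

   # Minimal sketch of intersecting two policies, where a policy is a mapping
   # from directive name to a set of allowed sources and "*" means any source.
   def intersect_sources(a, b):
       if "*" in a:
           return set(b)
       if "*" in b:
           return set(a)
       return a & b          # a source survives only if both policies list it

   def intersect_policies(policy_a, policy_b):
       return {d: intersect_sources(policy_a[d], policy_b[d]) for d in policy_a}

   header_policy = {"img-src": {"*"}, "script-src": {"trustedscripts.example.com"}}
   meta_policy   = {"img-src": {"media1.com"}, "script-src": {"*"}}
   print(intersect_policies(header_policy, meta_policy))
   # {'img-src': {'media1.com'}, 'script-src': {'trustedscripts.example.com'}}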

A policy-uri to a third-party domain is blocked supposedly to prevent an 
XSS from being able to run a separate policy, but then the policy can be 
included inline, so that particular hole doesn't seem to be actually 
blocked.

I don't think UAs should advertise support for this feature in their HTTP 
requests. Doing this for each feature doesn't scale. Also, browsers are 
notoriously bad at claiming support accurately; since bugs will be present 
whatever happens, servers are likely to need to do regular browser 
sniffing anyway, even if support _is_ advertised. In the long term, all 
browsers would support this, and during the transition period, browser 
sniffing would be fine. (If we do add the advertisement, we can never 
remove it, even if all browsers support it -- just like we can't remove 
the "Mozilla/4.0" part of every browser's UA string now.)

-- 
Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
Ian
7/16/2009 10:51:03 AM

Ian Hickson wrote:
> * Authors will rely on technologies that they perceive are solving their 
>   problems,

XSS is a huge and persistent problem on the web. If this solves that
problem authors will use it.

> * Authors will invariably make mistakes, primarily mistakes of omission,

CSP is designed so that mistakes of omission tend to break the site.
This won't introduce subtle bugs; rudimentary content testing
will quickly reveal problems.

> * The more complicated something is, the more mistakes people will make.

We encourage people to use the simplest policy possible. The additional
options are there for the edge cases.

> I believe that if one were to take a typical Web developer, show him this:
> 
>    X-Content-Security-Policy: allow self; img-src *;
>                               object-src media1.com media2.com;
>                               script-src trustedscripts.example.com
> 
> ...and ask him "does this enable or disable data: URLs in <embed>" or 
> "would an onclick='' handler work with this policy" or "are framesets 
> enabled or disabled by this set of directives", the odds of them getting 
> the answers right are about 50:50.

Sure, if you confuse them first by asking about "disabling".
_Everything_ is disabled; the default policy is "allow none". If you ask
"What does this policy enable?" the answers are easier.

data URLs? nope, not mentioned
inline handlers? nope, not mentioned

>    X-Content-Security-Policy: allow https://self:443

Using "self" for anything other than a keyword is a botch and I will
continue to argue against it. If you mean "myhost at some other scheme"
then it's not too much to ask you to spell it out. I kind of liked
Gerv's suggestion to syntactically distinguish keywords from host names,
too.

> ...I don't think a random Web developer would be able to correctly guess 
> whether or not inline scripts on the page would work, or whether Google 
> Analytics would be disabled or not.

Are inline scripts mentioned in that policy? Is Google Analytics? No, so
they are disabled. I'll admit that the default "no inline" behavior is
not at all obvious and people will just have to learn that, but when it
comes to domains it should be pretty clear from the syntax that anything
not explicitly "allowed" is, in fact, not allowed.

> lead to authors writing policy files in what would externally appear to be 
> a random fashion, changing them until their sites worked, and would then 
> assume their site is safe.

We are not creating this tool for naive, untrained people. We don't
expect every site to use it. Taking that approach to any security
technology is going to get you into trouble.

> This would then likely make them _less_ paranoid about XSS problems,

I hope not, since it does nothing to help their visitors using legacy
browsers that don't support CSP. CSP is a back-up insurance policy,
defense-in-depth and not the defense itself.

> I'm concerned about the round-trip latency of fetching an external policy

Us too. We don't like the complexity added by the external policy file,
but it was a popular request. It could reduce bandwidth for a site with
a complex policy since it would be cachable.

> or would it block page loading?

It will block page _parsing_, just as a <script> tag must (except, of
course, before parsing starts).

> would be unacceptable from a performance perspective.

No more than a <script> tag I would think. And anyway, if it's a
performance problem for any given site then they can always go the
preferred route of putting the policy in the header.
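
As a concrete illustration of that preferred route, here is a minimal 
sketch of a site emitting the policy with the page itself, written as a 
bare WSGI application; the handler and the policy value are just examples 
drawn from this thread, not anything prescribed by the spec:

   # Minimal sketch: send the policy as a response header so no extra
   # round trip or external policy file is needed.
   from wsgiref.simple_server import make_server

   POLICY = "allow self; img-src *"

   def app(environ, start_response):
       headers = [("Content-Type", "text/html"),
                  ("X-Content-Security-Policy", POLICY)]
       start_response("200 OK", headers)
       return [b"<html><body>Hello</body></html>"]

   if __name__ == "__main__":
       make_server("", 8000, app).serve_forever()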

> should be more consistent about what happens with multiple policies.

Yup, this is a muddled area that needs more work.

> A policy-uri to a third-party domain is blocked supposedly to prevent an 
> XSS from being able to run a separate policy, but then the policy can be 
> included inline, so that particular hole doesn't seem to be actually 
> blocked.

We're seriously considering dropping <meta> support. We added it
worrying about hosting services that don't give authors the ability to
set HTTP headers, but such sites aren't likely to have anything worth
stealing. Any server scripting environment that supports back-end data
should have the ability to set headers. Dropping <meta> support
simplifies things greatly, although it may leave some web authors out in
the cold.

> I don't think UAs should advertise support for this feature in their HTTP 
> requests. Doing this for each feature doesn't scale.

I personally agree for all the reasons you mention, but we still have a
potential versioning problem to resolve. Or not -- if we do nothing we
could always add a CSP-2 header in the future if necessary. I'm just
worried that it's unlikely that we thought of everything the first time
through.

-Dan Veditz
Daniel
7/17/2009 1:42:26 AM
Daniel Veditz wrote:
> CSP is designed so that mistakes of omission tend to break the site.
> This won't introduce subtle bugs; rudimentary content testing
> will quickly reveal problems.

But won't authors fail to understand how to solve the problem, and open 
everything wide? From experience, that's what happens with technologies 
that are too complex.

A simpler syntax for the simple cases really would help; it's just that Ian 
is coming a bit late for this.
Jean
7/17/2009 7:26:00 AM
Jean-Marc Desperrier wrote on 7/17/2009 2:26 AM: 
> Daniel Veditz wrote:
>> CSP is designed so that mistakes of omission tend to break the site.
>> This won't introduce subtle bugs; rudimentary content testing
>> will quickly reveal problems.
> 
> But won't authors fail to understand how to solve the problem, and open
> everything wide? From experience, that's what happens with technologies
> that are too complex.

If authors believe it's too complex, I would imagine they wouldn't implement it at all; but if they do configure it wide open, it's the equivalent of not using it -- the net result is identical, except perhaps for Ian's point that an uninformed author would mistakenly believe they were protected.

An external validation tool could help authors understand what their CSP rules are actually allowing/preventing (maybe something similar to validator.w3.org).  To complement it, another handy tool would be a browser plug-in that could help create CSP rules based on how the site actually works.


> A simpler syntax for the simple cases really would help; it's just that Ian
> is coming a bit late for this.

What specific changes do you recommend that would make it easier to use, but still function properly?

There appears to be a disconnect between the audience CSP is actually targeting vs. the general audience some believe it is targeting.  CSP is non-trivial; it takes a bit of work to configure it properly and requires on-going maintenance as the site evolves.  It's not targeted at the uninformed author; it simply isn't possible to achieve that kind of coverage -- I suspect in the pool of all authors, the majority of them don't even know what XSS is, let alone how to code against it or how to use CSP to augment their defenses.


- Bil

Bil
7/17/2009 3:40:57 PM
Bil Corry wrote:
> CSP is non-trivial; it takes a bit of work to configure it properly
> and requires on-going maintenance as the site evolves.  It's not
> targeted at the uninformed author; it simply isn't possible to
> achieve that kind of coverage -- I suspect in the pool of all
> authors, the majority of them don't even know what XSS is, let alone
> how to code against it or how to use CSP to augment their defenses.

But did you try to get feedback, not from the average site author, but 
from those who have experience successfully protecting against XSS on 
large sites that evolve frequently?

If the syntax has to be ugly, then there should be a tool that takes a 
site and calculates the appropriate CSP declarations.

In fact, a solution could be that every time the browser rejects 
downloading a resource due to CSP rules, it spits out a warning on the 
JavaScript console together with the minimal CSP authorization that 
would be required to obtain that resource.
This could help authors write the right declarations without 
understanding much about CSP.
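
One way such a console hint could be computed is sketched below, assuming 
the blocking directive and the blocked URL are known at the point of 
rejection; the helper name and the exact wording of the suggestion are 
made up for illustration:

   # Minimal sketch: derive the smallest source entry that would have
   # allowed a blocked resource, suitable for echoing to the console.
   from urllib.parse import urlsplit

   def suggest_authorization(directive, blocked_url):
       host = urlsplit(blocked_url).netloc
       return "%s %s" % (directive, host)

   print(suggest_authorization("script-src",
                               "http://trustedscripts.example.com/analytics.js"))
   # script-src trustedscripts.example.com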

PS: Sorry for the multi-posting earlier; I was trying to cross-post to 
www-archive@w3.org but it didn't work and I did not know it had sent the 
message to the group.
Jean
7/17/2009 4:18:24 PM
On 7/17/09 8:40 AM, Bil Corry wrote:
> An external validation tool could help authors understand
> what their CSP rules are actually allowing/preventing (maybe
> something similar to validator.w3.org).  To complement it,
> another handy tool would be a browser plug-in that could help
> create CSP rules based on how the site actually works.

These are great ideas.  We are currently working on some "how to" 
documents with the spec for CSP that cover things such as "how to create 
a policy for my site", and would love to see such tools come out of all 
this.

-Sid
Sid
7/17/2009 4:20:14 PM
Ian Hickson wrote:
> This isn't intended to be a "gotcha" question. My point is just that CSP 
> is too complicated, too powerful, to be understood by many authors on the 
> Web, and that because this is a security technology, this will directly 
> lead to security bugs on sites (and worse, on sites that think they are 
> safe because they are using a Security Policy).

So do you have a simpler syntax to suggest? A different approach
entirely? Or should we do nothing and expect site authors to write
correct and safe PHP+HTML+JavaScript as it stands. CSP seems far less
complicated than the things authors already are expected to understand.

>>>    X-Content-Security-Policy: allow https://self:443
>> Using "self" for anything other than a keyword is a botch and I will 
>> continue to argue against it.
> 
> The examples I gave in the previous e-mail were all directly from the 
> spec itself.

The spec is a group effort and I'm sure there are things in it each of
us would prefer to be different. It's also not set in stone, which is
why I mention things like this (but I don't hear a lot of agreement so
maybe everyone else likes using "self" as a pseudo-host).

>> I'll admit that the default "no inline" behavior is not at all obvious 
>> and people will just have to learn that
> 
> This strategy has not worked in the past.

But in this case they will learn rather quickly if their site doesn't work.

>> We are not creating this tool for naive, untrained people.
> 
> Naive, untrained people are who is going to use it.

Yes, but we're really trying to protect the millions of users who visit
Google, Yahoo, PayPal, banks, etc, and hopefully those kinds of
high-traffic sites are run by smart people (yes, I am being naive).

> I agree entirely. But we don't get to require that people pass a test 
> before they use a technology. They'll use it because they heard of it on 
> w3schools, or because someone on digg linked to it, or because their 
> friend at the local gym heard his sysadmin team is using it.
> 
> We know that people do this. We have to take that into account.

I don't know what to do with this feedback. Are you saying "don't do
CSP"? Do you have suggestions on how to make it safer or simpler to use?
An alternate technology that will address the XSS problem?

> I would recommend making the entire policy language significantly simpler, 
> such that it can be expressed in less space than a URL's length, which 
> would solve this problem as well as the above issues.

Since the policy is mostly a list of hosts or domains it would seem
difficult to shorten it much. We could make the directives terse or even
cryptic, but that doesn't gain much in length nor would it help
understandability.

>> It will block page _parsing_, just as a <script> tag must (except, of 
>> course, before parsing starts).
> 
> I think that would basically make the external policy unusable for Google 
> properties. Specifying a policy inline would still be ok though.

Using a policy file and having a different one for every page would be
horrid, but what would be the problem with having a cachable policy file
per service? Only the user's initial visit would suffer.

-Dan Veditz
Daniel
7/17/2009 4:24:44 PM
Jean-Marc Desperrier wrote on 7/17/2009 11:18 AM: 
> Bil Corry wrote:
>> CSP is non-trivial; it takes a bit of work to configure it properly
>> and requires on-going maintenance as the site evolves.  It's not
>> targeted at the uninformed author; it simply isn't possible to
>> achieve that kind of coverage -- I suspect in the pool of all
>> authors, the majority of them don't even know what XSS is, let alone
>> how to code against it or how to use CSP to augment their defenses.
> 
> But did you try to get feedback, not from the average site author, but
> from those who have experience at successfully protecting against XSS
> large sites that evolve frequently ?

It's my opinion that anyone with experience configuring rules for firewalls and WAFs to protect large sites will find CSP very understandable and approachable.  In fact, when compared to the syntax for iptables[1] or modsecurity[2], CSP is actually much simpler to understand and implement and is on par with the syntax of a similar technology, ABE[3].


> If the syntax has to be ugly,

It has to be functional; do you have specific suggestions on how the syntax should look?


> then there should be a tool that takes a
> site and calculates the appropriate CSP declarations.

I agree that a browser plug-in to do this would be helpful.


> In fact, a solution could be that every time the browser rejects
> downloading a resource due to CSP rules, it spits out a warning on the
> JavaScript console together with the minimal CSP authorization that
> would be required to obtain that resource.
> This could help authors write the right declarations without
> understanding much about CSP.

This could work too.  Or a tool that imports the Violation Report and allows an author to generate rules to allow the violation in the future.


- Bil

[1] http://iptables-tutorial.frozentux.net/iptables-tutorial.html
[2] http://www.modsecurity.org/documentation/modsecurity-apache/2.5.9/html-multipage/
[3] http://noscript.net/abe/abe_rules-0.5.pdf

Bil
7/17/2009 5:44:18 PM
On 7/16/09 8:17 PM, Ian Hickson wrote:
> On Thu, 16 Jul 2009, Daniel Veditz wrote:
>> Ian Hickson wrote:
>>> * The more complicated something is, the more mistakes people will 
>>> make.
>> We encourage people to use the simplest policy possible. The additional 
>> options are there for the edge cases.
> 
> It doesn't matter what we encourage. Most authors are going to be using 
> this through copy-and-paste from tutorials that were written by people who 
> made up anything they didn't work out from trial and error themselves.

Dan's point is absolutely true.  The majority of sites will be able to
benefit from simple, minimal policies.  If a site hosts all its own
content then a policy of "X-Content-Security-Policy: allow self" will
suffice and will provide all the XSS protection out of the box.  I tend
to think this will be the common example that gets cut-and-pasted the
majority of the time.  Only more sophisticated sites will need to delve
into the other features of CSP.

Content Security Policy has admittedly grown more complex since its
earliest design, but only out of necessity.  As we talked through the
model we have realized that a certain amount of complexity is in fact
necessary to support various use cases which might not be common on the
Web, but need to be supported.

>>> I believe that if one were to take a typical Web developer, show him 
>>> this:
>>>
>>>    X-Content-Security-Policy: allow self; img-src *;
>>>                               object-src media1.com media2.com;
>>>                               script-src trustedscripts.example.com
>>>
>>> ...and ask him "does this enable or disable data: URLs in <embed>" or 
>>> "would an onclick='' handler work with this policy" or "are framesets 
>>> enabled or disabled by this set of directives", the odds of them 
>>> getting the answers right are about 50:50.
>> Sure, if you confuse them first by asking about "disabling". 
>> _Everything_ is disabled; the default policy is "allow none". If you ask 
>> "What does this policy enable?" the answers are easier.
> 
> I was trying to make the questions neutral ("enable or disable"). The 
> authors, though, aren't going to actually ask these questions explicitly, 
> they'll just subconsciously form decisions about what the answers are 
> without really knowing that's what they're doing.

I don't think it makes sense for sites to work backwards from a complex
policy example as the best way to understand CSP.  I imagine sites
starting with the simplest policy, e.g. "allow self", and then
progressively adding policy as required to let the site function
properly.  This will result in more-or-less minimal policies being
developed, which is obviously best from a security perspective.

>> data URLs? nope, not mentioned
>> inline handlers? nope, not mentioned
> 
> How is an author supposed to know that anything not mentioned won't work?
> 
> And is that really true?
> 
>    X-Content-Security-Policy: allow *; img-src self;
> 
> Are cross-origin scripts enabled? They're not mentioned, so the answer 
> must be no, right?
> 
> This isn't intended to be a "gotcha" question. My point is just that CSP 
> is too complicated, too powerful, to be understood by many authors on the 
> Web, and that because this is a security technology, this will directly 
> lead to security bugs on sites (and worse, on sites that think they are 
> safe because they are using a Security Policy).

I don't think your example is proof at all that CSP is too complex.  If
I were writing that policy, my spidey senses would start tingling as
soon as I wrote "allow *".  I would expect everything to be in-bounds at
that point.  This is a whitelist mechanism after all.

>>>    X-Content-Security-Policy: allow https://self:443
>> Using "self" for anything other than a keyword is a botch and I will 
>> continue to argue against it. If you mean "myhost at some other scheme" 
>> then it's not too much to ask you to spell it out. I kind of liked 
>> Gerv's suggestion to syntactically distinguish keywords from host names, 
>> too.
> 
> The examples I gave in the previous e-mail were all directly from the 
> spec itself.

I also agree that this example is awkward.  In fact, the scheme and port
are inherited from the protected document if they are not specified in
the policy, so this policy would only make sense if it were a non-https
page which wanted to load all its resources over https.

I don't feel strongly about keeping that feature.  Perhaps we should only
allow self to be used when it is not in conjunction with a scheme or port,
as Dan says.

>>> ...I don't think a random Web developer would be able to correctly 
>>> guess whether or not inline scripts on the page would work, or whether 
>>> Google Analytics would be disabled or not.
>> Are inline scripts mentioned in that policy? Is Google Analytics? No, so 
>> they are disabled.
> 
> _I_ know the answer. I read the spec. My point is that it isn't intuitive 
> and that authors _will_ guess wrong.

Sorry, but I think this is also weak evidence for too much complexity.
This is a whitelist technology so if a source isn't whitelisted, it
won't be allowed.  This is a fundamental aspect of CSP which I think
will be the starting point of reference for most developers.

>> I'll admit that the default "no inline" behavior is not at all obvious 
>> and people will just have to learn that
> 
> This strategy has not worked in the past.

They'll learn it as soon as they apply a policy and all their inline
script stops working :-)

>> We are not creating this tool for naive, untrained people.
> 
> Naive, untrained people are who is going to use it.
> 
>> Taking that approach to any security technology is going to get you into 
>> trouble.
> 
> Have you seen the Web? :-)
> 
> I agree entirely. But we don't get to require that people pass a test 
> before they use a technology. They'll use it because they heard of it on 
> w3schools, or because someone on digg linked to it, or because their 
> friend at the local gym heard his sysadmin team is using it.
> 
> We know that people do this. We have to take that into account.

People can't generally hurt themselves if they start with "allow self"
and incrementally relax the policy until their site functions again.

>>> This would then likely make them _less_ paranoid about XSS problems,
>> I hope not, since it does nothing to help their visitors using legacy 
>> browsers that don't support CSP.
> 
> That's a temporary situation. In 20 years, when everyone supports it and 
> nobody cares about today's browsers, CSP will make people less paranoid.

It is possible that is the case, but I don't think it is justifiable to
not provide tools because we are worried that people will come to rely
upon them for security.  An analogy: seat belts were introduced in the
auto industry and yet people still (attempt to) drive safely even though
they know they're safely buckled-up.  Industry reliance upon an anti-XSS
mechanism such as CSP is a problem I would be happy to have.

>> CSP is a back-up insurance policy, defense-in-depth and not the defense 
>> itself.
> 
> Again, you and I know that. The people using it won't.
> 
>>> I'm concerned about the round-trip latency of fetching an external 
>>> policy
>> Us too. We don't like the complexity added by the external policy file, 
>> but it was a popular request. It could reduce bandwidth for a site with 
>> a complex policy since it would be cachable.
> 
> I would recommend making the entire policy language significantly simpler, 
> such that it can be expressed in less space than a URL's length, which 
> would solve this problem as well as the above issues.

I think the vast majority of sites' policies will be less than a URL
worth of text.

>>> or would it block page loading?
>> It will block page _parsing_, just as a <script> tag must (except, of 
>> course, before parsing starts).
> 
> I think that would basically make the external policy unusable for Google 
> properties. Specifying a policy inline would still be ok though.
> 
> 
>> We're seriously considering dropping <meta> support.
> 
> I would support dropping <meta> support.

I do too.  I haven't heard anyone object strongly to Sid's proposal to
drop <meta> support, so I imagine we'll be taking it out soon.

>>> I don't think UAs should advertise support for this feature in their 
>>> HTTP requests. Doing this for each feature doesn't scale.
>> I personally agree for all the reasons you mention, but we still have a 
>> potential versioning problem to resolve. Or not -- if we do nothing we 
>> could always add a CSP-2 header in the future if necessary. I'm just 
>> worried that it's unlikely that we thought of everything the first time 
>> through.
> 
> Just make sure it's forwards-compatible, so you can add new features, 
> then you don't need to version it. (The same way HTML and CSS and the DOM 
> have been designed, for instance.)

I think Dan summarized the trade-off nicely here:
http://groups.google.com/group/mozilla.dev.security/msg/787c87362d08bf5e

I can see why folks want to avoid a version string but several of us
have limited confidence in our ability to design with
forward-compatibility.  Perhaps you could provide some guidance in this
particular area since you have a lot of experience doing so.

Thanks for the feedback, Ian.  It's great to have your voice in this
discussion.

Cheers,
Brandon
Brandon
7/17/2009 9:58:40 PM
Jean-Marc Desperrier wrote:
> In fact, a solution could be that every time the browser rejects
> downloading a resource due to CSP rules, it spits out a warning on the
> JavaScript console together with the minimal CSP authorization that
> would be required to obtain that resource.
> This could help authors write the right declarations without
> understanding much about CSP.

Announcing rejected resources is an important part of the plan. The spec
has a reportURI for just this reason, and the Mozilla implementation
will also echo errors to the Error Console.
Daniel
7/18/2009 12:37:03 AM
On Thu, 16 Jul 2009, Bil Corry wrote:
> Ian Hickson wrote on 7/16/2009 5:51 AM: 
> > I think that this complexity, combined with the tendency for authors 
> > to rely on features they think are solving their problems, would 
> > actually lead to authors writing policy files in what would externally 
> > appear to be a random fashion, changing them until their sites worked, 
> > and would then assume their site is safe. This would then likely make 
> > them _less_ paranoid about XSS problems, which would further increase 
> > the possibility of them being attacked, with a good chance of the 
> > policy not actually being effective.
> 
> I think your point that CSP may be too complex and/or too much work for 
> some developers is spot on.  Even getting developers to use something as 
> simple as the Secure flag for cookies on HTTPS sites is still a 
> challenge.  And if we can't get developers to use the Secure flag, the 
> chances of getting sites configured with CSP is daunting at best.

I agree. I think many people will try, will think they got it right 
(because their site works), and will then assume that they therefore don't 
have to worry about (e.g.) people inserting scripts into their pages, when 
in fact they just allowed anything.


> At first glance, it may seem like a waste of time to implement CSP if 
> the best we can achieve is only partial coverage, but instead of looking 
> at it from the number of sites covered, look at it from the number of 
> users covered.  If a large site such as Twitter were to implement it, 
> that's millions of users protected that otherwise wouldn't be.

Assuming they got it right.

I think that something like CSP can definitely be useful. I just think it 
has to be orders of magnitude simpler.


> > I think CSP should be more consistent about what happens with multiple 
> > policies. Right now, two headers will mean the second is ignored, and 
> > two <meta>s will mean the second is ignored; but a header and a <meta> 
> > will cause the intersection to be used. Similarly, a header with both 
> > a policy and a URL will cause the most restrictive mode to be used 
> > (and both policies to be ignored), but a misplaced <meta> will cause 
> > no CSP to be applied.
> 
> I agree.  There's been some discussion about removing <meta> support 
> entirely and/or allowing multiple headers with a intersection algorithm, 
> so depending on how those ideas are adopted, it makes sense to ensure 
> consistency across the spec.

Removing <meta> altogether would be one good step towards simplification.


On Fri, 17 Jul 2009, Daniel Veditz wrote:
> Ian Hickson wrote:
> > This isn't intended to be a "gotcha" question. My point is just that 
> > CSP is too complicated, too powerful, to be understood by many authors 
> > on the Web, and that because this is a security technology, this will 
> > directly lead to security bugs on sites (and worse, on sites that 
> > think they are safe because they are using a Security Policy).
> 
> So do you have a simpler syntax to suggest? A different approach 
> entirely?

Here are some suggestions for simplification:

 * Remove external policy files.
 * Remove <meta> policies.
 * If there are multiple headers, fail to fully closed.
 * Combine img-src, media-src, object-src, frame-src
 * Combine style-src and font-src
 * Drop the "allow" directive, default all the directives to "self"
 * Move "inline" and "eval" keywords from "script-src" to a separate 
   directive, so that all the -src directives have the same syntax
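
For concreteness, under these suggestions a minimal policy for a site that 
only needs one extra script host plus inline scripts might look something 
like the following; the "script-options" directive name is purely 
hypothetical, invented here to illustrate the last two bullets (everything 
else defaulting to "self", with inline/eval moved to a directive of their 
own):

   X-Content-Security-Policy: script-src trustedscripts.example.com;
                              script-options inline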



> Or should we do nothing and expect site authors to write correct and 
> safe PHP+HTML+JavaScript as it stands. CSP seems far less complicated 
> than the things authors already are expected to understand.

Authors get the things authors already are expected to understand wrong 
all the time.


> >>>    X-Content-Security-Policy: allow https://self:443
> >> Using "self" for anything other than a keyword is a botch and I will 
> >> continue to argue against it.
> > 
> > The examples I gave in the previous e-mail were all directly from the 
> > spec itself.
> 
> The spec is a group effort and I'm sure there are things in it each of
> us would prefer to be different. It's also not set in stone, which is
> why I mention things like this (but I don't hear a lot of agreement so
> maybe everyone else likes using "self" as a pseudo-host).

My point is that these are not things I made up -- they are policies that 
have been put forward by people as examples. If they demonstrate problems, 
then it's not just me making up edge cases that show problems.


> >> I'll admit that the default "no inline" behavior is not at all 
> >> obvious and people will just have to learn that
> > 
> > This strategy has not worked in the past.
> 
> But in this case they will learn rather quickly if their site doesn't 
> work.

I'm concerned that people will eventually do something that causes the 
entire policy to be ignored, and not realise it ("yay, I fixed the 
problem") or will do something that other people will then copy and paste 
without understanding ("well this policy worked for that site... yay, now 
I'm secure").


> >> We are not creating this tool for naive, untrained people.
> > 
> > Naive, untrained people are who is going to use it.
> 
> Yes, but we're really trying to protect the millions of users who visit 
> Google, Yahoo, PayPal, banks, etc, and hopefully those kinds of 
> high-traffic sites are run by smart people (yes, I am being naive).

It doesn't matter who you are trying to protect. This _will_ be used by 
naive, untrained people, and so we have to make sure it works for them.


> > I agree entirely. But we don't get to require that people pass a test 
> > before they use a technology. They'll use it because they heard of it 
> > on w3schools, or because someone on digg linked to it, or because 
> > their friend at the local gym heard his sysadmin team is using it.
> > 
> > We know that people do this. We have to take that into account.
> 
> I don't know what to do with this feedback. Are you saying "don't do 
> CSP"? Do you have suggestions on how to make it safer or simpler to use? 
> An alternate technology that will address the XSS problem?

I think massive simplification would be a start; I'm not sure how much 
further we would have to go after that.


> > I would recommend making the entire policy language significantly 
> > simpler, such that it can be expressed in less space than a URL's 
> > length, which would solve this problem as well as the above issues.
> 
> Since the policy is mostly a list of hosts or domains it would seem 
> difficult to shorten it much. We could make the directives terse or even 
> cryptic, but that doesn't gain much in length nor would it help 
> understandability.

We could remove many of the directives, for example. That would make it 
much shorter.


> >> It will block page _parsing_, just as a <script> tag must (except, of 
> >> course, before parsing starts).
> > 
> > I think that would basically make the external policy unusable for 
> > Google properties. Specifying a policy inline would still be ok 
> > though.
> 
> Using a policy file and having a different one for every page would be 
> horrid, but what would be the problem with having a cachable policy file 
> per service? Only the user's initial visit would suffer.

Making the user's initial visit suffer wouldn't be acceptable to Google, 
for several reasons; first, it seems that far more visits than just the 
"initial" visit involve cache misses, and second, the first visit is the 
most important one in terms of having a good (= fast) user experience.


On Fri, 17 Jul 2009, Brandon Sterne wrote:
> 
> Dan's point is absolutely true.  The majority of sites will be able to
> benefit from simple, minimal policies.  If a site hosts all its own
> content then a policy of "X-Content-Security-Policy: allow self" will
> suffice and will provide all the XSS protection out of the box.

It will also break inline scripts, analytics, and ads.


> I tend to think this will be the common example that gets cut-and-pasted 
> the majority of the time.

I doubt it, because of what it breaks.


> Content Security Policy has admittedly grown more complex since its
> earliest design, but only out of necessity.

I strongly recommend reconsidering what is necessary.


> I don't think it makes sense for sites to work backwards from a complex
> policy example as the best way to understand CSP.

I don't think it makes sense either. Yet that's what people do with HTML, 
CSS, and JS. Why would it be any different here?


> I imagine sites starting with the simplest policy, e.g. "allow self", 
> and then progressively adding policy as required to let the site 
> function properly.  This will result in more-or-less minimal policies 
> being developed, which is obviously best from a security perspective.

This is maybe how competently written sites will do it. It's not how most 
sites will do it.


> >> data URLs? nope, not mentioned
> >> inline handlers? nope, not mentioned
> > 
> > How is an author supposed to know that anything not mentioned won't work?
> > 
> > And is that really true?
> > 
> >    X-Content-Security-Policy: allow *; img-src self;
> > 
> > Are cross-origin scripts enabled? They're not mentioned, so the answer 
> > must be no, right?
> > 
> > This isn't intended to be a "gotcha" question. My point is just that CSP 
> > is too complicated, too powerful, to be understood by many authors on the 
> > Web, and that because this is a security technology, this will directly 
> > lead to security bugs on sites (and worse, on sites that think they are 
> > safe because they are using a Security Policy).
> 
> I don't think your example is proof at all that CSP is too complex.  If
> I were writing that policy, my spidey senses would start tingling as
> soon as I wrote "allow *".  I would expect everything to be in-bounds at
> that point.  This is a whitelist mechanism after all.

Ok, consider:

   X-Content-Security-Policy: allow self;

Will cross-site form submission work? It's not mentioned, so the answer 
must be no, right? As far as I can tell, the answer is actually yes.

You are assuming the person reading all this is familiar with security 
concepts, with Web technologies, with "whitelists" and wildcards and so 
on. This is a fundamentally flawed assumption.


> >>> This would then likely make them _less_ paranoid about XSS problems,
> >>
> >> I hope not, since it does nothing to help their visitors using legacy 
> >> browsers that don't support CSP.
> > 
> > That's a temporary situation. In 20 years, when everyone supports it 
> > and nobody cares about today's browsers, CSP will make people less 
> > paranoid.
> 
> It is possible that is the case, but I don't think it is justifiable to 
> not provide tools because we are worried that people will come to rely 
> upon them for security.  An analogy: seat belts were introduced in the 
> auto industry and yet people still (attempt to) drive safely even though 
> they know they're safely buckled-up.  Industry reliance upon an anti-XSS 
> mechanism such as CSP is a problem I would be happy to have.

Seatbelts are simple to understand. Make CSP as simple as seatbelts and 
I'll agree.


> >>> I don't think UAs should advertise support for this feature in their 
> >>> HTTP requests. Doing this for each feature doesn't scale.
> >>
> >> I personally agree for all the reasons you mention, but we still have 
> >> a potential versioning problem to resolve. Or not -- if we do nothing 
> >> we could always add a CSP-2 header in the future if necessary. I'm 
> >> just worried that it's unlikely that we thought of everything the 
> >> first time through.
> > 
> > Just make sure it's forwards-compatible, so you can add new features, 
> > then you don't need to version it. (The same way HTML and CSS and the 
> > DOM have been designed, for instance.)
> 
> I think Dan summarized the trade-off nicely here: 
> http://groups.google.com/group/mozilla.dev.security/msg/787c87362d08bf5e
> 
> I can see why folks want to avoid a version string but several of us 
> have limited confidence in our ability to design with 
> forward-compatibility.  Perhaps you could provide some guidance in this 
> particular area since you have a lot of experience doing so.

Make the BNF that defines the syntax be something that matches all 
possible strings.

For example, here is a trivial syntax like that:

   <list>              ::= "" | <directive> | <directive> ";" <list>
   <directive>         ::= <name> | <name> ":" | <name> ":" <value>
   <name>              ::= anything but ":" or ";"
   <value>             ::= anything but ";"

Then, extend this so that it covers the syntax you want to support, and 
leaves the invalid stuff in productions with "invalid" in the name, e.g.:

   <list>              ::= "" | <directive> | <directive> ";" <list>
   <directive>         ::= <valid-directive> | <invalid-directive>

   <valid-directive>   ::= <color> | <smell>
   <color>             ::= "color" ":" <color-values> <invalid-extra>
   <color-values>      ::= "red" | "green"
   <smell>             ::= "smell" ":" <smell-values> <invalid-extra>
   <smell-values>      ::= "flower" | "honey"

   <invalid-extra>     ::= <value>

   <invalid-directive> ::= <name> | <name> ":" | <name> ":" <value>
   <name>              ::= anything but ":" or ";"
   <value>             ::= anything but ";"

Then, define that the UA has to parse all input strings into the tree of 
directives that it maps to.

So for example, with the above syntax, "foo:bar" would parse to:

   list
    +- directive
        +- invalid-directive
            +- name
            |   +- "foo"
            +- ":"
            +- value
                +- "bar"

....and "color:red blue" would parse to:

   list
    +- directive
        +- valid-directive
            +- color
                +- "color:"
                +- color-values
                |   +- "red"
                +- invalid-extra
                    +- value
                        +- " blue"

....then, define how you walk this tree. Typically, you would say that you 
ignore any directives that have "invalid" in their name, so the examples 
above would be treated as "" and "color:red" respectively.

This then allows you to extend the syntax in both directions -- ignoring 
new bits, and ignoring entire directives. For example, "color:red except 
orange" would be treated as "color:red" in v1 UAs, whereas 
"color:excluding orange from red" would be treated as "".

So, let's look at the example Dan gave. Today, to allow scripts in the 
head, you need:

   script-src: inline;

If we ever add a keyword to make only scripts in the head work:

   script-src: head;

....legacy UAs will end up stripping the scripts you wanted kept, thus 
causing a backwards-compatibility problem. If you design it with the model 
above, you could introduce the feature as the following instead:

   script-src: inline only head;

Legacy UAs would see just "inline", new UAs would see both.

HTH,
-- 
Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
Ian
7/29/2009 10:23:46 PM
On 29/07/09 23:23, Ian Hickson wrote:
>   * Remove external policy files.

I'm not sure how that's a significant simplification; the syntax is 
exactly the same just with an extra level of indirection, and if that 
makes things too complicated for you, don't use them.

>   * Remove <meta> policies.

Done.

>   * If there are multiple headers, fail to fully closed.

How is this a simplification? It means that if there are multiple people 
(e.g. an ISP and their customer) who want input into the policy, the ISP 
or the customer has to manually merge and intersect the policies to make 
one header, rather than the browser doing it. In other words, the 
intersection code gets written 1000 times, often badly, rather than 
once, hopefully right.

>   * Combine img-src, media-src, object-src, frame-src

But then the combined lotsofthings-src would have to be set to the 
union of all the above, which means e.g. far more potential 
sources of objects (in particular) than you might otherwise want. 
"object-src: none" sounds to me like a great idea for a load of sites 
which also want to display images.

OTOH, "lotsofthings-src: host1.com host2.com host3.com" would still be a 
big improvement over now, where we effectively have "lotsofthings-src: all".

>   * Combine style-src and font-src

That makes sense.

>   * Drop the "allow" directive, default all the directives to "self"

That's an interesting idea.

>   * Move "inline" and "eval" keywords from "script-src" to a separate
>     directive, so that all the -src directives have the same syntax

Yes, we've done this.

> I'm concerned that people will eventually do something that causes the
> entire policy to be ignored, and not realise it ("yay, I fixed the
> problem") or will do something that other people will then copy and paste
> without understanding ("well this policy worked for that site... yay, now
> I'm secure").

These would be issues with any possible formulation.

>> I imagine sites starting with the simplest policy, e.g. "allow self",
>> and then progressively adding policy as required to let the site
>> function properly.  This will result in more-or-less minimal policies
>> being developed, which is obviously best from a security perspective.
>
> This is maybe how competently written sites will do it. It's not how most
> sites will do it.

How do you expect them to do it? Start with "allow all"? That's like 
saying "some people will start their Ruby on Rails web application by 
writing it full of XSS holes, and then try and remove them later". This 
may be true, but we don't blame Ruby on Rails. Do we?

> You are assuming the person reading all this is familiar with security
> concepts, with Web technologies, with "whitelists" and wildcards and so
> on. This is a fundamentally flawed assumption.

I don't see how we could change CSP to make it understandable to people 
unfamiliar with Web technologies and wildcards. I think almost everyone 
is familiar with the concept of a whitelist, but perhaps under a 
different name. Any suggestions?

> Seatbelts are simple to understand. Make CSP as simple as seatbelts and
> I'll agree.

Ah, the magic "fix my security problems" header. Why didn't we think of 
implementing that before?

> Make the BNF that defines the syntax be something that matches all
> possible strings.
<snip>

This is great. We should do this.

Gerv
Gervase
7/30/2009 2:06:34 PM
Gervase Markham wrote on 7/30/2009 9:06 AM: 
> On 29/07/09 23:23, Ian Hickson wrote:
>>   * Remove external policy files.
> 
> I'm not sure how that's a significant simplification; the syntax is
> exactly the same just with an extra level of indirection, and if that
> makes things too complicated for you, don't use them.

If both a policy definition and a policy-uri field are present, CSP fails closed.  Not allowing external policy files means avoiding this issue entirely -- one less point of potential failure.

That said, the external policy file may actually make CSP easier to deploy for some organizations.  If authors are responsible for including the CSP header via a dynamic language, but another person is responsible for creating/maintaining the actual CSP policy definitions, then having them in multiple external policy files may make it easier to separate the duties.


>>   * If there are multiple headers, fail to fully closed.
> 
> How is this a simplification? It means that if there are multiple people
> (e.g. an ISP and their customer) who want input into the policy, the ISP
> or the customer has to manually merge and intersect the policies to make
> one header, rather than the browser doing it. In other words, the
> intersection code gets written 1000 times, often badly, rather than
> once, hopefully right.

Wouldn't an ISP have to leave all the restrictions wide-open?  Since intersecting policies can only result in a more restrictive policy, I don't think an ISP could lock down anything as it would disallow it for all of their client sites.  The only feature of intersecting policies that I see them taking advantage of is the report-uri, so that they get a report too.  Or maybe I just need to see a practical example of a policy that an ISP would implement.

 
>>   * Combine img-src, media-src, object-src, frame-src
> 
> But then the combined lotsofthings-src would have to be set to the
> intersection of all the above, which means e.g. far more potential
> sources of objects (in particular) than you might otherwise want.
> "object-src: none" sounds to me like a great idea for a load of sites
> which also want to display images.
> 
> OTOH, "lotsofthings-src: host1.com host2.com host3.com" would still be a
> big improvement over now, where we effectively have "lotsofthings-src:
> all".

I like the granular control of img-src, media-src, etc, but wouldn't be opposed to a single directive that still achieves that:

	X-Content-Security-Policy: allow self; source host1.tld host2.tld object host3.tld image host4.tld;

Or maybe it's still too confusing?


>>   * Drop the "allow" directive, default all the directives to "self"
> 
> That's an interesting idea.

I like this idea.



- Bil

Bil
7/30/2009 5:36:44 PM
Ian Hickson wrote:
>> If a large site such as Twitter were to implement it, 
>> that's millions of users protected that otherwise wouldn't be.
> 
> Assuming they got it right.

If they don't some researcher gets an easy conference talk out of
bypassing the restrictions and poking fun at them, and then it gets
fixed. The sites most likely to use and benefit from CSP are the ones
most likely to be closely watched.

> I think that something like CSP can definitely be useful. I just think it 
> has to be orders of magnitude simpler.

That's what stalled "Content-Restrictions", and nothing simpler came out
of it. As-is, with all its complications, it has gotten favorable
interest from enough of the right people that it's worth continuing the
experiment.

> Here are some suggestions for simplification:
> 
>  * Remove external policy files.

I'd be happy to drop those, personally. Some people have expressed
bandwidth concerns that would be solved by a cacheable policy file.

>  * Combine style-src and font-src

Ian, meet Eric; Eric meet Ian -- you two work it out. If we don't get
agreement I'd tend to go the way that makes it more likely other browser
vendors will adopt the spec.

>  * Drop the "allow" directive, default all the directives to "self"

"allow" is an important simplification. A fairly simple site (but not
single host) could well use a policy like "CSP: allow *.mydomain.com"
wheras with your proposed simplification they would have to enter each
term of each web technology they use just so they can duplicate
"*.mydomain.com".

I strongly encourage sites to use simple "allow <list>" policies, and
only get into the others if they want to disable certain technologies or
specifically relax the restrictions on something relatively safe
like img-src.

>  * Move "inline" and "eval" keywords from "script-src" to a separate 
>    directive, so that all the -src directives have the same syntax

I've argued that too and I think we agreed, although I don't see that
reflected in the spec or on the talk page.

>> Or should we do nothing and expect site authors to write correct and 
>> safe PHP+HTML+JavaScript as it stands. CSP seems far less complicated 
>> than the things authors already are expected to understand.
> 
> Authors get the things authors already are expected to understand wrong 
> all the time.

Whatever we create those guys are going to get wrong. I'd rather focus
on what features are useful and necessary for the ones who are able to
get it right.

> I'm concerned that people will eventually do something that causes the 
> entire policy to be ignored, and not realise it ("yay, I fixed the 
> problem") or will do something that other people will then copy and paste 
> without understanding ("well this policy worked for that site... yay, now 
> I'm secure").

Don't know that any CSP formulation would help prevent that. Not even if
we simplified it to the point of uselessness for complex sites.

>>>> We are not creating this tool for naive, untrained people.
>>> Naive, untrained people are who is going to use it.
>> Yes, but we're really trying to protect the millions of users who visit 
>> Google, Yahoo, PayPal, banks, etc, and hopefully those kinds of 
>> high-traffic sites are run by smart people (yes, I am being naive).
> 
> It doesn't matter who you are trying to protect. This _will_ be used by 
> naive, untrained people, and so we have to make sure it works for them.

"If you make something idiot-proof they'll just make a better idiot"
comes to mind. Or perhaps "Build a system that even a fool can use, and
only a fool will want to use it." George Bernard Shaw (Shaw? Really?)

> We could remove many of the directives, for example. That would make it 
> much shorter.

Make what shorter? The spec, certainly, but probably not the typical
policy. And if a complex policy needed those directives then eliminating
them hasn't really helped.

Frankly we're not going to resolve this as a mental exercise. Feedback from
people trying to use a working X-CSP implementation will be more
valuable than our guesses about how people will use it. That feedback
will go into the non-X- version of the spec.

>> Using a policy file and having a different one for every page would be 
>> horrid, but what would be the problem with having a cachable policy file 
>> per service? Only the user's initial visit would suffer.
> 
> Making the user's initial visit suffer wouldn't be acceptable to Google, 
> for several reasons; first, it seems that far more visits than just the 
> "initial" visit involve cache misses, and second, the first visit is the 
> most important one in terms of having a good (= fast) user experience.

That's good feedback. However, the ability to use a policy file doesn't
mean you'd have to.

>> If a site hosts all its own
>> content then a policy of "X-Content-Security-Policy: allow self" will
>> suffice and will provide all the XSS protection out of the box.
> 
> It will also break inline scripts, analytics, and ads.

Did you miss the "If a site hosts all its own content" part? Frankly
running scripts from a 3rd party ad provider in their domain should be
scaring sites shitless. Not only do they have to get the security of
their own content EXACTLY RIGHT they also have to trust one or more
other parties to do the same thing. Parties who don't necessarily care
about the site's content or security except to the extent breaches would
lose them business.

> If you design it with the model 
> above, you could introduce the feature as the following instead:
> 
>    script-src: inline only head;
> 
> Legacy UAs would see just "inline", new UAs would see both.

Thanks for that explanation and example.

-Dan
Daniel
7/30/2009 5:51:45 PM
Ian Hickson wrote:
>> If a large site such as Twitter were to implement it, 
>> that's millions of users protected that otherwise wouldn't be.
> 
> Assuming they got it right.

If they don't, some researcher gets an easy conference talk out of
bypassing the restrictions and poking fun at them, and then it gets
fixed. The sites most likely to use and benefit from CSP are the ones
most likely to be closely watched.

> I think that something like CSP can definitely be useful. I just think it 
> has to be orders of magnitude simpler.

That's what stalled "Content-Restrictions", and nothing simpler came out
of it. As-is, with all its complications, it has gotten favorable
interest from enough of the right people that it's worth continuing the
experiment.

> Here are some suggestions for simplification:
> 
>  * Remove external policy files.

I'd be happy to drop those, personally. Some people have expressed
bandwidth concerns that would be solved by a cacheable policy file.

>  * Combine style-src and font-src

Ian, meet Eric; Eric, meet Ian -- you two work it out. If we don't get
agreement I'd tend to go the way that makes it more likely other browser
vendors will adopt the spec.

>  * Drop the "allow" directive, default all the directives to "self"

"allow" is an important simplification. A fairly simple site (but not
single host) could well use a policy like "CSP: allow *.mydomain.com"
wheras with your proposed simplification they would have to enter each
term of each web technology they use just so they can duplicate
"*.mydomain.com".

I strongly encourage sites to use simple "allow <list>" policies, and
only get into the others if they want to disable certain technologies or
specifically relax the restrictions on something relatively safe like
img-src.

>  * Move "inline" and "eval" keywords from "script-src" to a separate 
>    directive, so that all the -src directives have the same syntax

I've argued that too and I think we agreed, although I don't see that
reflected in the spec or on the talk page.

>> Or should we do nothing and expect site authors to write correct and 
>> safe PHP+HTML+JavaScript as it stands. CSP seems far less complicated 
>> than the things authors already are expected to understand.
> 
> Authors get the things authors already are expected to understand wrong 
> all the time.

Whatever we create, those guys are going to get wrong. I'd rather focus
on what features are useful and necessary for the ones who are able to
get it right.

> I'm concerned that people will eventually do something that causes the 
> entire policy to be ignored, and not realise it ("yay, I fixed the 
> problem") or will do something that other people will then copy and paste 
> without understanding ("well this policy worked for that site... yay, now 
> I'm secure").

Don't know that any CSP formulation would help prevent that. Not even if
we simplified it to the point of uselessness for complex sites.

>>>> We are not creating this tool for naive, untrained people.
>>> Naive, untrained people are who is going to use it.
>> Yes, but we're really trying to protect the millions of users who visit 
>> Google, Yahoo, PayPal, banks, etc, and hopefully those kinds of 
>> high-traffic sites are run by smart people (yes, I am being naive).
> 
> It doesn't matter who you are trying to protect. This _will_ be used by 
> naive, untrained people, and so we have to make sure it works for them.

"If you make something idiot-proof they'll just make a better idiot"
comes to mind. Or perhaps "Build a system that even a fool can use, and
only a fool will want to use it." George Bernard Shaw (Shaw? Really?)

> We could remove many of the directives, for example. That would make it 
> much shorter.

Make what shorter? The spec, certainly, but probably not the typical
policy. And if a complex policy needed those directives then eliminating
them hasn't really helped.

Frankly, we're not going to resolve this as a mental exercise. Feedback
from people trying to use a working X-CSP implementation will be more
valuable than our guesses about how people will use it. That feedback
will go into the non-X- version of the spec.

>> Using a policy file and having a different one for every page would be 
>> horrid, but what would be the problem with having a cachable policy file 
>> per service? Only the user's initial visit would suffer.
> 
> Making the user's initial visit suffer wouldn't be acceptable to Google, 
> for several reasons; first, it seems that far more visits than just the 
> "initial" visit involve cache misses, and second, the first visit is the 
> most important one in terms of having a good (= fast) user experience.

That's good feedback. However, the ability to use a policy file doesn't
mean you'd have to.

>> If a site hosts all its own
>> content then a policy of "X-Content-Security-Policy: allow self" will
>> suffice and will provide all the XSS protection out of the box.
> 
> It will also break inline scripts, analytics, and ads.

Did you miss the "If a site hosts all its own content" part? Frankly,
running scripts from a 3rd-party ad provider in their domain should be
scaring sites shitless. Not only do they have to get the security of
their own content EXACTLY RIGHT, they also have to trust one or more
other parties to do the same thing -- parties who don't necessarily care
about the site's content or security except to the extent that breaches
would lose them business.

> If you design it with the model 
> above, you could introduce the feature as the following instead:
> 
>    script-src: inline only head;
> 
> Legacy UAs would see just "inline", new UAs would see both.

Thanks for that explanation and example.

-Dan
0
Daniel
7/30/2009 5:51:45 PM
On Thu, 30 Jul 2009 19:51:45 +0200, Daniel Veditz <dveditz@mozilla.com> wrote:
> Ian Hickson wrote:
>>> If a large site such as Twitter were to implement it,
>>> that's millions of users protected that otherwise wouldn't be.
>>
>> Assuming they got it right.
>
> If they don't some researcher gets an easy conference talk out of
> bypassing the restrictions and poking fun at them, and then it gets
> fixed. The sites most likely to use and benefit from CSP are the ones
> most likely to be closely watched.

I seriously doubt that. I was at a conference in Portugal where a major
ISP had the enormous number of holes in their sites pointed out to them,
which makes me think that, given the severity of the problem (that, and
Rasmus Lerdorf indicating this was nothing new), it needs a rather
simple solution, because authors will not get it. They are not informed
about all the various attacks that can happen on sites. Not at all. And
this is not surprising given the vast complexity of the Web platform.

(The conference was a few months ago.)


-- 
Anne van Kesteren
http://annevankesteren.nl/
0
Anne
7/30/2009 6:05:45 PM
On 30/07/09 18:51, Daniel Veditz wrote:
>>   * Remove external policy files.
>
> I'd be happy to drop those, personally. Some people have expressed
> bandwidth concerns that would be solved by a cacheable policy file.

Can we quantify that? At this stage, it's looking like most policies
won't be significantly longer than a URL. And the extra RTT on first
load, as Hixie says, means that big sites may well choose not to use
them. So if removing them reduces implementation and spec complexity,
why don't we do that? At least for the first "X-" version.

>>   * Move "inline" and "eval" keywords from "script-src" to a separate
>>     directive, so that all the -src directives have the same syntax
>
> I've argued that too and I think we agreed, although I don't see that
> reflected in the spec or on the talk page.

Yes, we did agree this.

Gerv
0
Gervase
8/10/2009 12:00:18 PM
On a related note (to Ian's initial message), I'd like to ask again to
see some real-world policy examples.  I suggested CNN last time, but
if something like Twitter would be an easier place to start, maybe we
could see that one?  Or see the example for mozilla.org, maybe?  Or
even just some toy problems to start, working up to real-world stuff
later.

I'm asking for a reason: I think the process of trying to determine
good policy for some real sites will give a lot of insight into where
CSP may be too complex, or equally, where it's unable to be
sufficiently precise.  And it provides a bit of a usability test:
remember that initially, many people wanting to use CSP will be
applying it to existing sites as opposed to designing sites such that
they work well with CSP.

People will want examples eventually as part of the documentation for
CSP because, as has been pointed out, they're more likely to cut and
paste from these examples than to generate policy from scratch.  So
let's see what sort of examples people will be cutting and pasting
from!

 Terri

PS - Full Disclosure: I'm one of the authors of a much simpler system
with similar goals, called SOMA: http://www.ccsl.carleton.ca/software/soma/
so obviously I'm a big believer in simpler policies.  We presented
SOMA last year at ACM CCS, so I promise this isn't just another system
from some random internet denizen -- This is peer-reviewed work from
professional security researchers.
0
TO
8/10/2009 5:27:59 PM
On 8/10/09 10:27 AM, TO wrote:
> I'd like to ask again to
> see some real-world policy examples.  I suggested CNN last time, but
> if something like Twitter would be an easier place to start, maybe we
> could see that one?  Or see the example for mozilla.org, maybe?  Or
> even just some toy problems to start, working up to real-world stuff
> later.

Working examples will be forthcoming as soon as we have Firefox builds
available which contain CSP.  Absent the working builds, do you think
it's valuable for people to compare page source for an existing popular
site and a CSP-converted version?

> I'm asking for a reason: I think the process of trying to determine
> good policy for some real sites will give a lot of insight into where
> CSP may be too complex, or equally, where it's unable to be
> sufficiently precise.  And it provides a bit of a usability test:
> remember that initially, many people wanting to use CSP will be
> applying it to existing sites as opposed to designing sites such that
> they work well with CSP.
> 
> People will want examples eventually as part of the documentation for
> CSP because, as has been pointed out, they're more likely to cut and
> paste from these examples than to generate policy from scratch.  So
> let's see what sort of examples people will be cutting and pasting
> from!
> 
>  Terri
> 
> PS - Full Disclosure: I'm one of the authors of a much simpler system
> with similar goals, called SOMA: http://www.ccsl.carleton.ca/software/soma/
> so obviously I'm a big believer in simpler policies.  We presented
> SOMA last year at ACM CCS, so I promise this isn't just another system
> from some random internet denizen -- This is peer-reviewed work from
> professional security researchers.

I read through your ACM CCS slides and the project whitepaper and SOMA
doesn't appear to address the XSS vector of inline scripts in any way.
Have I overlooked some major aspect of SOMA, or does the model only
provide controls for remotely-included content?

-Brandon
0
Brandon
8/10/2009 6:50:39 PM
On 8/10/09 5:00 AM, Gervase Markham wrote:
> On 30/07/09 18:51, Daniel Veditz wrote:
>>> * Move "inline" and "eval" keywords from "script-src" to a separate
>>> directive, so that all the -src directives have the same syntax
>>
>> I've argued that too and I think we agreed, although I don't see that
>> reflected in the spec or on the talk page.
>
> Yes, we did agree this.

I tried to find in my notes and email archives how exactly we decided to 
move the keywords out, and couldn't find anything specific.  Anyway, I 
added an "options" directive to the spec[0] that captures this change. 
I also added a thread on the wiki discussion page[1].
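
For reference, a policy that opts back into inline script via the new
directive would look something like this (see [0] for the exact keyword
names):

   X-Content-Security-Policy: allow self; options inline-script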

Cheers,
Sid

[0]https://wiki.mozilla.org/Security/CSP/Spec#options
[1]https://wiki.mozilla.org/Talk:Security/CSP/Spec#Option_.28not_source.29_Keywords_.28OPEN.29
0
Sid
8/10/2009 9:56:50 PM
On 10/08/09 19:50, Brandon Sterne wrote:
> Working examples will be forthcoming as soon as we have Firefox builds
> available which contain CSP.

We shouldn't need to wait for working builds to try and work out the 
policies, should we? Although perhaps it would be a lot easier if you 
could test them via trial and error.

Here are some possibilities for www.mozilla.org, based on the home page -
which does repost RSS headlines, so there's at least the theoretical
possibility of an injection. To begin with:

allow self; options inline-script;

would be a perfectly reasonable policy. The inline-script is required 
because the Urchin tracker script appears to need kicking off using a 
single line of inline script. If this could be avoided, you could remove 
that second directive.

A tighter alternative would be:

allow none; options inline-script; img-src self; script-src self; 
style-src self;

I used the Page Info tab on the home page to get lists of image URLs in 
some categories. An add-on which did this for all CSP categories and 
provided other help would definitely be useful.

(Note that mozilla.org is going through a redesign, so the new version 
might require a different policy.)

I must say I do find myself automatically wanting to use colons (like 
CSS) or equals signs in these directives...

Gerv
0
Gervase
8/11/2009 10:19:26 AM
On 10/08/09 22:56, Sid Stamm wrote:
> I tried to find in my notes and email archives how exactly we decided to
> move the keywords out, and couldn't find anything specific. Anyway, I
> added an "options" directive to the spec[0] that captures this change. I
> also added a thread on the wiki discussion page[1].

I think we agreed to make them standalone top-level directives. 
"Options" is a vague word and it doesn't make it clear that these are 
script-related.

Gerv
0
Gervase
8/11/2009 11:46:01 AM
On Thu, 30 Jul 2009, Gervase Markham wrote:
> On 29/07/09 23:23, Ian Hickson wrote:
> >   * Remove external policy files.
> 
> I'm not sure how that's a significant simplification; the syntax is 
> exactly the same just with an extra level of indirection, and if that 
> makes things too complicated for you, don't use them.

Complexity affects everyone, not just those who use it.


> >   * If there are multiple headers, fail to fully closed.
> 
> How is this a simplification? It means that if there are multiple people 
> (e.g. an ISP and their customer) who want input into the policy, the ISP 
> or the customer has to manually merge and intersect the policies to make 
> one header, rather than the browser doing it. In other words, the 
> intersection code gets written 1000 times, often badly, rather than 
> once, hopefully right.

I think in almost all cases, multiple headers will be a sign of an attack 
or error, not the sign of cooperation.


> >   * Combine img-src, media-src, object-src, frame-src
> 
> But then the combined lotsofthings-src would have to be set to the 
> intersection of all the above, which means e.g. far more potential 
> sources of objects (in particular) than you might otherwise want. 
> "object-src: none" sounds to me like a great idea for a load of sites 
> which also want to display images.
> 
> OTOH, "lotsofthings-src: host1.com host2.com host3.com" would still be a 
> big improvement over now, where we effectively have "lotsofthings-src: 
> all".

I think simplification is a win here, even if it makes the language less 
expressive. Obviously, it's a judgement call. I'm just letting you know 
what I think is needed to make this good.


> > I'm concerned that people will eventually do something that causes the 
> > entire policy to be ignored, and not realise it ("yay, I fixed the 
> > problem") or will do something that other people will then copy and 
> > paste without understanding ("well this policy worked for that site... 
> > yay, now I'm secure").
> 
> These would be issues with any possible formulation.

It's dramatically reduced if the format fails safe and is of minimal 
expressiveness.


> > > I imagine sites starting with the simplest policy, e.g. "allow 
> > > self", and then progressively adding policy as required to let the 
> > > site function properly.  This will result in more-or-less minimal 
> > > policies being developed, which is obviously best from a security 
> > > perspective.
> > 
> > This is maybe how competentely written sites will do it. It's not how 
> > most sites will do it.
> 
> How do you expect them to do it?

Copy-and-paste from sites that didn't understand the spec, for example 
copying from w3schools.com, and then modifying it more or less at random. 
Or copy-and-paste from some other site, without understanding what they're 
doing.


> That's like saying "some people will start their Ruby on Rails web 
> application by writing it full of XSS holes, and then try and remove 
> them later". This may be true, but we don't blame Ruby on Rails. Do we?

Ruby on Rails isn't purporting to be a standard.


> > You are assuming the person reading all this is familiar with security 
> > concepts, with Web technologies, with "whitelists" and wildcards and 
> > so on. This is a fundamentally flawed assumption.
> 
> I don't see how we could change CSP to make it understandable to people 
> unfamiliar with Web technologies and wildcards. I think almost everyone 
> is familiar with the concept of a whitelist, but perhaps under a 
> different name. Any suggestions?

I think the dramatic simplification I described would be a good start. I'd 
have to look at the result before I could really say what else could be 
done to make the language safer for novices.


On Thu, 30 Jul 2009, Daniel Veditz wrote:
> > 
> >  * Drop the "allow" directive, default all the directives to "self"
> 
> "allow" is an important simplification.

I don't think that making policies shorter is the same as simplification.
In fact, when it comes to security policies, I think simplicity
corresponds almost directly to how explicit the language is. Anything
left implicit can end up tripping up authors, IMHO.


> > We could remove many of the directives, for example. That would make 
> > it much shorter.
> 
> Make what shorter? The spec, certainly, but probably not the typical 
> policy. And if a complex policy needed those directives then eliminating 
> them hasn't really helped.

Making the spec shorter is a pretty important part of simplifying the 
language. The simpler the spec, the more people will be able to understand 
it, the fewer mistakes will occur.

-- 
Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
0
Ian
8/11/2009 11:11:36 PM
On 12/08/09 00:11, Ian Hickson wrote:
> I think in almost all cases, multiple headers will be a sign of an attack
> or error, not the sign of cooperation.

OK. I think that's a fair challenge. Can someone come up with a 
plausible and specific scenario where multiple headers would be useful?

The ones that come immediately to my mind are where the ISP would want a 
strict general policy but might allow customers to loosen it on a 
site-by-site basis (e.g. allowing media from a particular site). But 
that can't be achieved by multiple headers anyway, because you get the 
permissions intersection, not the union.
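
Concretely -- with invented hostnames, and assuming the intersection
behaviour under discussion -- that scenario would look something like:

   X-Content-Security-Policy: allow *.isp.example
   X-Content-Security-Policy: allow self; img-src media.customer.example

where the customer's second header could only ever tighten the ISP's
policy, never loosen it.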

>> How do you expect them to do it?
>
> Copy-and-paste from sites that didn't understand the spec, for example
> copying from w3schools.com, and then modifying it more or less at random.
> Or copy-and-paste from some other site, without understanding what they're
> doing.

The fix for that seems to me to be good error reporting, both to the 
server and in the browser. If their site doesn't work, we want them to 
know why. If it does work, but it's because their policy is far too lax, 
then they've gained little benefit - but if you try and deploy 
technologies you don't understand, the best you can hope for is not to 
shoot yourself in the foot.

> Making the spec shorter is a pretty important part of simplifying the
> language. The simpler the spec, the more people will be able to understand
> it, the fewer mistakes will occur.

I don't think people should be writing policies based on reading the
spec. People don't write HTML based on reading the HTML 4.01 spec - do
they? A spec has to give as much space to error conditions and corner
cases as it does to the important, mainstream stuff. Whereas a "How to
write a CSP policy" document can just talk about best practice and
common situations.

Gerv
0
Gervase
9/3/2009 11:06:39 AM
On 07/30/2009 07:06 AM, Gervase Markham wrote:
> On 29/07/09 23:23, Ian Hickson wrote:
>>   * Combine style-src and font-src
> 
> That makes sense.

I agree.  @font-face has to come from CSS which is already subject to
style-src restrictions.  I don't think there are any practical attacks
we are preventing by allowing a site to say "style can come from <foo>
but not fonts".  I propose we combine the two directives and will do so
if there aren't objections.

Separately, there is another style-src related problem with the current
model [1]:

style-src restricts which sources are valid for externally linked
stylesheets, but all inline style is still allowed.  The current model
offers no real protection against style injected by an attacker.  If
anything, it provides a way for sites to prevent outbound requests
(CSRF) via injected <link rel="stylesheet"> tags.  But if this is the
only protection we are providing, we could easily have stylesheets be
restricted to the "allow" list.

I think we face a decision:
A) we continue to allow inline styles and make external stylesheet loads
be subject to the "allow" policy, or
B) we disallow inline style and create an opt-in mechanism similar to
the inline-script option [2]

IOW, we need to decide if webpage defacement via injected style is in
the threat model for CSP and, if so, then we need to do B.
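
If we went with B, the opt-in could parallel the script case -- sketching
with a purely illustrative keyword name:

   X-Content-Security-Policy: allow self; options inline-style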

Thoughts?

-Brandon

[1] https://wiki.mozilla.org/Security/CSP/Spec#style-src
[2] https://wiki.mozilla.org/Security/CSP/Spec#options
0
Brandon
10/15/2009 9:20:56 PM
On 15/10/09 22:20, Brandon Sterne wrote:
> I think we face a decision:
> A) we continue to allow inline styles and make external stylesheet loads
> be subject to the "allow" policy, or
> B) we disallow inline style and create an opt-in mechanism similar to
> the inline-script option [2]

C) We do A, but disallow entirely some dangerous stylesheet constructs.

> IOW, we need to decide if webpage defacement via injected style is in
> the threat model for CSP and, if so, then we need to do B.

Is it just about defacement, or is it also about the fact that CSS can 
bring in behaviours etc?

If it's about defacement, then there's no set of "non-dangerous
stylesheet constructs", and you can ignore my C. I think that, without
the ability to execute JS code, the successful attacks you could mount
using CSS are limited. I guess you might put a notice on the bank
website: "Urgent! Call this number and give them all your personal
info!"...

Gerv
0
Gervase
10/19/2009 11:34:32 AM
On 19-Oct-09, at 7:34 AM, Gervase Markham wrote:
> On 15/10/09 22:20, Brandon Sterne wrote:
>> IOW, we need to decide if webpage defacement via injected style is in
>> the threat model for CSP and, if so, then we need to do B.
>
> Is it just about defacement, or is it also about the fact that CSS  
> can bring in behaviours etc?
>
> If it's about defacement, then there's no set of "non-dangerous  
> stylesheet constructs", and you can ignore my C. I think that,  
> without executing JS code support, the successful attacks you could  
> mount using CSS are limited. I guess you might put a notice on the  
> bank website: "Urgent! Call this number and give them all your  
> personal info!"...


Not as limited as you might like. Remember that even apparently
non-dangerous constructs (e.g. background-image, the :visited pseudo
class) can give people power to do surprising things (e.g. internal
network ping sweeping, user history enumeration respectively).

J

---
Johnathan Nightingale
Human Shield
johnath@mozilla.com



0
Johnathan
10/19/2009 1:43:46 PM
On Mon, Oct 19, 2009 at 6:43 AM, Johnathan Nightingale
<johnath@mozilla.com> wrote:
> Not as limited as you might like. Remember that even apparently
> non-dangerous constructs (e.g. background-image, the :visited pseudo class)
> can give people power to do surprising things (e.g. internal network ping
> sweeping, user history enumeration respectively).

I'm not arguing for or against providing the ability to
block-inline-css, but keep in mind that an attacker can do all those
things as soon as you visit attacker.com.

There are many ways for the attacker to convince the user to visit
attacker.com.  In the past, I've found it helpful to simply assume the
user is always visiting attacker.com in some background tab.  After
all, Firefox is supposed to let you view untrusted web sites securely.

Adam
0
Adam
10/19/2009 9:39:52 PM
On 19-Oct-09, at 5:39 PM, Adam Barth wrote:
> On Mon, Oct 19, 2009 at 6:43 AM, Johnathan Nightingale
> <johnath@mozilla.com> wrote:
>> Not as limited as you might like. Remember that even apparently
>> non-dangerous constructs (e.g. background-image, the :visited  
>> pseudo class)
>> can give people power to do surprising things (e.g. internal  
>> network ping
>> sweeping, user history enumeration respectively).
>
> I'm not arguing for or against providing the ability to
> block-inline-css, but keep in mind that an attacker can do all those
> things as soon as you visit attacker.com.

Yeah, I think you're absolutely right that CSP is primarily about  
preventing attackers from exploiting your browser's trust relationship  
with victim.com, and the examples I offered are (for lack of a better  
term), victim-agnostic. They don't steal victim.com credentials or  
cause unwanted changes to, or transactions with, your victim.com  
presence.

I do think, though, that a helpful secondary effect of CSP is that it  
reduces attackers' ability to amplify the effect of their attacks.  
You're right that it doesn't take much to get users to click on a  
link, but I think it is nevertheless the case that a good history  
enumerator or ping sweep which happens in the background while you're  
reading a NYTimes article will have a substantially higher success  
rate than a link in the comment section that says "Click here for free  
goodies." Basically by definition, link-clickers are a subset of your  
total prospective victim pool.

I think this is more specifically what makes me feel like there's  
still value to locking down all inline styling, or at least providing  
that facility, but I appreciate you forcing me to refine my thinking a  
little more.

>   In the past, I've found it helpful to simply assume the
> user is always visiting attacker.com in some background tab.  After
> all, Firefox is supposed to let you view untrusted web sites securely.

Yes, absolutely so. We should continue to try to bend smarts towards  
fixing :visited and other nasty sleights-of-hand. But the one course  
of work doesn't preclude the other (and I don't think you were saying  
that it did).

Johnathan

---
Johnathan Nightingale
Human Shield
johnath@mozilla.com



0
Johnathan
10/20/2009 1:26:17 PM
If you want to make a module that prevents history sniffing completely
against specific sites and avoids assuming the user never interacts
with a bad site, you could have a CSP module that allows a server to
specify whether its history entries can be treated as visited by other
origins. Sites concerned about user privacy would then have control
over whether other sites could detect that they've been visited. A
similar module could be used for cross-origin cache loads to address
timing attacks.

0
Collin
10/20/2009 5:03:31 PM
I put together a brief description of the history module proposal on
the wiki:

https://wiki.mozilla.org/Security/CSP/HistoryModule

0
Collin
10/20/2009 5:24:58 PM
Collin Jackson wrote:
> If you want to make a module that prevents history sniffing completely
> against specific sites and avoids assuming the user never interacts
> with a bad site, you could have a CSP module that allows a server to
> specify whether its history entries can be treated as visited by other
> origins. Sites concerned about user privacy would then have control
> over whether other sites could detect that they've been visited. A
> similar module could be used for cross-origin cache loads to address
> timing attacks.

Collin Jackson wrote:
> I put together a brief description of the history module proposal on the wiki:
> 
> https://wiki.mozilla.org/Security/CSP/HistoryModule

The threat model of HistoryModule, as currently defined, seems to be 
precisely the threat model that would be addressed by a similar module 
implementing a per-origin cache partitioning scheme to defeat history 
timing attacks.

If these are to be kept as separate modules, then perhaps the threat 
model should be more tightly scoped, and directive names should be 
specific to the features they enable?

I like the idea of modularizing CSP.

Mike
0
Mike
10/20/2009 7:47:33 PM
> class) can give people power to do surprising things (e.g. internal
> network ping sweeping, user history enumeration respectively).

Isn't the ping sweeping threat already taken care of by CSP? No
requests to internal networks will be honored, as they won't be allowed
by the policy. (Although it's not a threat present in the threat model
for CSP.)

Regarding history enumeration -- I don't see why it should be part
of CSP. A separate header, X-Safe-History, could be used.

Cheers
Devdatta

0
Devdatta
10/20/2009 7:50:13 PM
On Tue, Oct 20, 2009 at 12:47 PM, Mike Ter Louw <mterlo1@uic.edu> wrote:
> The threat model of HistoryModule, as currently defined, seems to be
> precisely the threat model that would be addressed by a similar module
> implementing a per-origin cache partitioning scheme to defeat history timing
> attacks.

Good point.  I've added cache timing as an open issue at the bottom of
the HistoryModule wiki page.

> If these are to be kept as separate modules, then perhaps the threat model
> should be more tightly scoped, and directive names should be specific to the
> features they enable?

It's somewhat unclear when to break things into separate modules, but
having one module per threat seems to make sense.  The visited link
issue and the cache timing issue seem related enough (i.e., both about
history stealing) to be in the same module.

Adam
0
Adam
10/20/2009 7:55:26 PM
On Tue, Oct 20, 2009 at 12:50 PM, Devdatta <dev.akhawe@gmail.com> wrote:
> Regarding , History enumeration -- I don't see why it should be part
> of CSP. A separate header - X-Safe-History can be used.

I think one of the goals of CSP is to avoid having one-off HTTP
headers for each threat we'd like to mitigate.  Combining different
directives into a single policy mechanism has advantages:

1) It's easier for web site operators to manage one policy.
2) The directives can share common infrastructure, like the reporting
facilities.

Adam
0
Adam
10/20/2009 7:58:31 PM
On 10/20/09 12:58 PM, Adam Barth wrote:
> I think one of the goals of CSP is to avoid having one-off HTTP
> headers for each threat we'd like to mitigate.  Combining different
> directives into a single policy mechanism has advantages:
> 
> 1) It's easier for web site operators to manage one policy.
> 2) The directives can share common infrastructure, like the reporting
> facilities.

While I agree with your points enumerated above, we should be really
careful about scope creep and stuffing new goals into an old idea.  The
original point of CSP was not to provide a global security
infrastructure for web sites, but to provide content restrictions and
help stop XSS (mostly content restrictions).  Rolling all sorts of extra
threats like history sniffing into CSP will make it huge and complex,
and that's not what was initially desired.  (A complex CSP wouldn't be
so bad if it were modular, but I don't think 'wide-reaching' was the
original aim for CSP).

Brandon, Gerv, step in and correct me if I'm wrong -- you were working
on this long before me -- but I want to be really careful if we're going
to start changing the goals of this project.  If we want to come up with
something extensible and wide-reaching, we should probably step back and
seriously overhaul the design.

-Sid
0
Sid
10/20/2009 8:20:12 PM
On Tue, Oct 20, 2009 at 1:20 PM, Sid Stamm <sid@mozilla.com> wrote:
> While I agree with your points enumerated above, we should be really
> careful about scope creep and stuffing new goals into an old idea.  The
> original point of CSP was not to provide a global security
> infrastructure for web sites, but to provide content restrictions and
> help stop XSS (mostly content restrictions).  Rolling all sorts of extra
> threats like history sniffing into CSP will make it huge and complex,
> and that's not what was initially desired.  (A complex CSP wouldn't be
> so bad if it were modular, but I don't think 'wide-reaching' was the
> original aim for CSP).

I think we're completely in agreement, except that I don't think
making CSP modular is particularly hard. In fact, I think it makes the
proposal much more approachable because vendors can implement just
BaseModule (the CSP header syntax) and other modules they like such as
XSSModule without feeling like they have to implement the ones they
think aren't interesting. And they can experiment with their own
modules without feeling like they're breaking the spec.

One idea that might make a module CSP more approachable for vendors is
to have a status page that shows the various modules, like this:
https://wiki.mozilla.org/Security/CSP/Modules
0
Collin
10/20/2009 8:42:01 PM
In the modular approach, this is not true.  You simply send this header:

X-Content-Security-Policy: safe-history

The requirements to remove inline script, eval, etc aren't present
because you haven't opted into the XSSModule.  You can, of course,
combine them using this sort of policy:

X-Content-Security-Policy: safe-history, block-xss

but you certainly don't have to.

Adam


On Tue, Oct 20, 2009 at 1:59 PM, Devdatta <dev.akhawe@gmail.com> wrote:
> The history enumeration threat is a simple threat with a simple
> solution. Opting into Safe History protection shouldn't require me to
> do all the work of opting into CSP. In addition, I don't see any
> infrastructure that is needed by this feature that is in common with
> CSP.
>
> Let's say I am a website administrator, and I am concerned about this
> particular threat. Opting into CSP involves a lot of work --
> understanding the spec, noting down all the domains that interact
> everywhere on my site, converting inline scripts, evals, and
> javascript URLs to corrected code, etc. My fear is that this will
> make admins write policies that are too lenient (say with allow-eval),
> just to get the safe history feature.
>
> Cheers
> Devdatta
0
Adam
10/20/2009 9:03:15 PM
On Tue, Oct 20, 2009 at 1:42 PM, Collin Jackson
<mozilla@collinjackson.com> wrote:
> I think we're completely in agreement, except that I don't think
> making CSP modular is particularly hard. In fact, I think it makes the
> proposal much more approachable because vendors can implement just
> BaseModule (the CSP header syntax) and other modules they like such as
> XSSModule without feeling like they have to implement the ones they
> think aren't interesting. And they can experiment with their own
> modules without feeling like they're breaking the spec.

I've factored the BaseModule out of the XSSModule, so it's clear that
you could implement the HistoryModule without the XSSModule.  I'd be
happy to take a crack at breaking up the main CSP spec into modules on
the wiki if you'd like to see what that would look like.  I don't
think it would be that hard.

Adam
0
Adam
10/20/2009 9:06:35 PM
Hi

Sorry, I didn't read your modular approach proposal before sending the
email.

Cheers
Devdatta

0
Devdatta
10/20/2009 9:07:42 PM
I'm not sure that providing a modular approach for vendors to implement
pieces of CSP is really valuable to our intended audience (web
developers).  It will be hard enough for developers to keep track of
which user agents support CSP, without requiring a matrix to understand
which particular versions of which agents support the mix of CSP
features they want to use, and what it means if a given browser only
supports 2 of the 3 modules they want to use.  If this means some more
up-front pain for vendors in implementation costs vs. pushing more
complexity to web developers, the former approach seems to be a lot
less expensive in the net.
   Lucas.

0
Lucas
10/20/2009 9:25:35 PM
Why do web developers need to keep track of which user agents support
CSP? I thought CSP was a defense in depth. I really hope people don't
use this as their only XSS defense. :)

0
Collin
10/20/2009 9:30:53 PM
We should think ahead, not just a year or two but to the point that  
all current browsers will be EOL and (just like every other feature  
that is currently in HTML5) this will be widely adopted and reliable.
   Lucas.


0
Lucas
10/20/2009 9:49:57 PM
It seems to me that thinking ahead would tend to favor the modular
approach, since we're unlikely to guess the most compelling use cases
on the first try, and modules will provide a backwards-compatible
means of evolving the spec to what web authors actually need.

0
Collin
10/20/2009 9:53:26 PM
I actually think the modular approach is better for the web developer
as the policy is easier to write and understand.

But I do share your concern. At least right now, it is pretty easy to
say -- user agents that support the XSSModule are protected against XSS
and user agents that support the history module are protected against
history enumeration attacks.  Going forward, we want to keep the
separation just as clear and simple.

* This would require very clear and simply stated threat models for
each module. Each module's threats should be (ideally) disjoint.
* A module should be small and complete. We should make it clear why
every part of the module is important for the given threat model. This
would hopefully ensure that browser vendors either implement the whole
module or none of it. (i.e., implementing half of a module will give no
security)

I think this breakup of the spec into modules is useful to web
developers (making it easier to understand) and easier for browser
vendors to implement.

Regards
Devdatta
0
Devdatta
10/20/2009 10:07:54 PM
I've been a firm believer that CSP will evolve over time, but that's an  
argument for versioning, not modularity. We are as likely to  
have to modify existing behaviors as introduce whole new sets.  It's  
also not a reason to split the existing functionality into modules.
   Lucas

On Oct 20, 2009, at 14:53, Collin Jackson <mozilla@collinjackson.com>  
wrote:

> It seems to me that thinking ahead would tend to favor the modular
> approach, since we're unlikely to guess the most compelling use cases
> on the first try, and modules will provide a backwards-compatible
> means of evolving the spec to what web authors actually need.
>
> On Tue, Oct 20, 2009 at 2:49 PM, Lucas Adamski <lucas@mozilla.com>  
> wrote:
>> We should think ahead, not just a year or two but to the point that  
>> all
>> current browsers will be EOL and (just like every other feature  
>> that is
>> currently in HTML5) this will be widely adopted and reliable.
>>  Lucas.
>>
>> On Oct 20, 2009, at 2:30 PM, Collin Jackson wrote:
>>
>>> Why do web developers need to keep track of which user agents  
>>> support
>>> CSP? I thought CSP was a defense in depth. I really hope people  
>>> don't
>>> use this as their only XSS defense. :)
>>>
>>> On Tue, Oct 20, 2009 at 2:25 PM, Lucas Adamski <lucas@mozilla.com>  
>>> wrote:
>>>>
>>>> I'm not sure that providing a modular approach for vendors to  
>>>> implement
>>>> pieces of CSP is really valuable to our intended audience (web
>>>> developers).
>>>>  It will be hard enough for developers to keep track of which  
>>>> user agents
>>>> support CSP, without requiring a matrix to understand which  
>>>> particular
>>>> versions of which agents support the mix of CSP features they  
>>>> want to
>>>> use,
>>>> and what it means if a given browser only supports 2 of the 3  
>>>> modules
>>>> they
>>>> want to use.  If this means some more up-front pain for vendors in
>>>> implementation costs vs. pushing more complexity to web  
>>>> developers, the
>>>> former approach seems to be a lot less expensive in the net.
>>>>  Lucas.
>>>>
>>>> On Oct 20, 2009, at 1:42 PM, Collin Jackson wrote:
>>>>
>>>>> On Tue, Oct 20, 2009 at 1:20 PM, Sid Stamm <sid@mozilla.com>  
>>>>> wrote:
>>>>>>
>>>>>> While I agree with your points enumerated above, we should be  
>>>>>> really
>>>>>> careful about scope creep and stuffing new goals into an old  
>>>>>> idea.  The
>>>>>> original point of CSP was not to provide a global security
>>>>>> infrastructure for web sites, but to provide content  
>>>>>> restrictions and
>>>>>> help stop XSS (mostly content restrictions).  Rolling all sorts  
>>>>>> of
>>>>>> extra
>>>>>> threats like history sniffing into CSP will make it huge and  
>>>>>> complex,
>>>>>> and for not what was initially desired.  (A complex CSP isn't  
>>>>>> so bad if
>>>>>> it were modular, but I don't think 'wide-reaching' was the  
>>>>>> original aim
>>>>>> for CSP).
>>>>>
>>>>> I think we're completely in agreement, except that I don't think
>>>>> making CSP modular is particularly hard. In fact, I think it  
>>>>> makes the
>>>>> proposal much more approachable because vendors can implement just
>>>>> BaseModule (the CSP header syntax) and other modules they like  
>>>>> such as
>>>>> XSSModule without feeling like they have to implement the ones  
>>>>> they
>>>>> think aren't interesting. And they can experiment with their own
>>>>> modules without feeling like they're breaking the spec.
>>>>>
>>>>> One idea that might make a module CSP more approachable for  
>>>>> vendors is
>>>>> to have a status page that shows the various modules, like this:
>>>>> https://wiki.mozilla.org/Security/CSP/Modules
>>>>> _______________________________________________
>>>>> dev-security mailing list
>>>>> dev-security@lists.mozilla.org
>>>>> https://lists.mozilla.org/listinfo/dev-security
>>>>
>>>>
>>
>>
0
Lucas
10/20/2009 10:21:39 PM
I'm confident we can figure out how best to communicate CSP use cases  
to developers independent of implementation.  What we should have are  
documentation modules that walk a web dev through specific goal-driven  
examples.

The problem I see with modules is that they will complicate the model in  
the long run, as the APIs they govern will not be mutually exclusive.   
What if 3 different modules dictate image loading behaviors?  What if  
the given user agent in a scenario does not implement the module where  
the most restrictive of the 3 policies is specified?
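To make that overlap concrete, here is a minimal sketch. The img-src
directive is real; the second directive and the module that would own it
are invented purely for illustration and appear in no draft:

   X-Content-Security-Policy: img-src *.example.com;
                              x-media-policy images self

If one module defines img-src and a different module defines the
hypothetical x-media-policy, a user agent implementing only the first
would enforce *.example.com for images and silently drop the stricter
"self" rule, so two conforming browsers could end up applying different
effective policies to the same header.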
   Lucas

On Oct 20, 2009, at 15:07 Devdatta <dev.akhawe@gmail.com> wrote:

> I actually think the modular approach is better for the web developer
> as the policy is easier to write and understand.
>
> But I do share your concern, Atleast right now, it is pretty easy to
> say -- user agents that support XSSModule are protected against XSS
> and user agents that support history module are protected against
> history enumeration attacks.  Going forward, we want to keep the
> separation just as clear and simple.
>
> * This would require very clear and simply stated threat models for
> each module. Each module's threats should be (ideally) disjoint.
> * A module should be small and complete. We should make it clear why
> every part of the module is important for the given threat model. This
> would hopefully ensure that browser vendors either implement the whole
> module or none of it. (I.E implementing half of a module will give no
> security)
>
> I think this breakup of the spec into modules is useful to the
> webdevelopers (making it easier to understand) and easier for the
> browser vendors to implement.
>
> Regards
> Devdatta
> _______________________________________________
> dev-security mailing list
> dev-security@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security
0
Lucas
10/20/2009 10:35:54 PM
On a related note, just to have one more example (and for my learning),
I went ahead and wrote a draft for ClickJackingModule.
https://wiki.mozilla.org/Security/CSP/ClickJackingModule

In general I like how short and simple each individual module is.
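For instance, assuming the frame-ancestors syntax sketched in that draft,
a site worried only about framing could ship just:

   X-Content-Security-Policy: frame-ancestors self

and get the anti-clickjacking protection from any user agent that
implements that module, without having to adopt (or even understand) the
XSS-related restrictions.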

Cheers
Devdatta

2009/10/20 Lucas Adamski <lucas@mozilla.com>:
> I'm confident we can figure out how best to communicate CSP use cases to
> developers independent of implementation.  What we should have are
> documentation modules that walk a web dev through specific goal-driven
> examples, for example.
>
> The problem with modules I see is they will complicate the model in the long
> run, as the APIs they govern will not be mutually exclusive.  What if 3
> different modules dictate image loading behaviors?  What if the given user
> agent in a scenario does not implement the module where the most restrictive
> of the 3 policies is specified?
>  Lucas
>
> On Oct 20, 2009, at 15:07 Devdatta <dev.akhawe@gmail.com> wrote:
>
>> I actually think the modular approach is better for the web developer
>> as the policy is easier to write and understand.
>>
>> But I do share your concern, Atleast right now, it is pretty easy to
>> say -- user agents that support XSSModule are protected against XSS
>> and user agents that support history module are protected against
>> history enumeration attacks.  Going forward, we want to keep the
>> separation just as clear and simple.
>>
>> * This would require very clear and simply stated threat models for
>> each module. Each module's threats should be (ideally) disjoint.
>> * A module should be small and complete. We should make it clear why
>> every part of the module is important for the given threat model. This
>> would hopefully ensure that browser vendors either implement the whole
>> module or none of it. (I.E implementing half of a module will give no
>> security)
>>
>> I think this breakup of the spec into modules is useful to the
>> webdevelopers (making it easier to understand) and easier for the
>> browser vendors to implement.
>>
>> Regards
>> Devdatta
>> _______________________________________________
>> dev-security mailing list
>> dev-security@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-security
>
0
Devdatta
10/21/2009 1:05:52 AM
On 20/10/09 21:20, Sid Stamm wrote:
> While I agree with your points enumerated above, we should be really
> careful about scope creep and stuffing new goals into an old idea.  The
> original point of CSP was not to provide a global security
> infrastructure for web sites, but to provide content restrictions and
> help stop XSS (mostly content restrictions).  Rolling all sorts of extra
> threats like history sniffing into CSP will make it huge and complex,
> and for not what was initially desired.  (A complex CSP isn't so bad if
> it were modular, but I don't think 'wide-reaching' was the original aim
> for CSP).

I think we need to differentiate between added complexity in syntax and 
added complexity in implementation.

If we design the syntax right, there is no need for additional CSP 
directives to make the syntax more complicated for those who neither 
wish to know nor care about them.

If we modularise CSP correctly, there is no necessity that additional 
ideas lead to greater implementation complexity for those browsers who 
don't want to adopt those ideas (yet).

I think it would be good if we didn't have to invent a new header for 
each idea of ways to lock down content. I think it would be great if 
people could experiment with Content-Security-Policy: x-my-cool-idea, 
and see if it was useful before standardization. Any idea which is a 
policy for content security should be in scope for experimentation.
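As a rough sketch of what that might look like (x-my-cool-idea is of
course just a placeholder, and this assumes unrecognised directives are
ignored rather than treated as fatal errors):

   X-Content-Security-Policy: allow self; img-src *;
                              x-my-cool-idea opt-in

A browser that knows about the experimental directive can act on it;
every other browser enforces the rest of the policy exactly as before,
so the experiment costs nothing for other user agents.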

I agree with your concerns about scope creep, but I don't think making 
sure the syntax is forwards-compatible requires a fundamental redesign. 
And I don't think allowing the possibility of other things means we are 
on the hook to implement them, either for Firefox 3.6 or for any other 
release.

We may wish to say "OK, CSP 1.0 is these 3 modules", so that a browser 
could say "I support CSP 1.0" without having to be more specific and 
detailed. But given that CSP support is unlikely to be a major marketing 
sell, I don't think that's a big factor.

Gerv
0
Gervase
10/21/2009 9:49:31 AM
On 10/21/09 2:49 AM, Gervase Markham wrote:
> I think we need to differentiate between added complexity in syntax and
> added complexity in implementation.
> 
> If we design the syntax right, there is no need for additional CSP
> directives to make the syntax more complicated for those who neither
> wish to know nor care about them.

Additional directives are not a problem either, unless they're mandatory
for all policies (which is not the case ... yet).  I'm still more in
favor of extension via new directives than extension by modifying
existing ones: this seems more obviously backward compatible and in
reality probably more forward compatible too.
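A quick sketch of the difference (frame-ancestors here stands in for any
future directive, and the extra keyword in the second example is
invented):

   Extension via a new directive -- an agent that does not know it
   simply enforces the rest of the policy:

      X-Content-Security-Policy: allow self; frame-ancestors self

   Extension by changing what an existing directive accepts -- an older
   agent may misparse the allow clause or quietly apply something
   different from what the author intended:

      X-Content-Security-Policy: allow self x-some-new-source-keyword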

> If we modularise CSP correctly, there is no necessity that additional
> ideas lead to greater implementation complexity for those browsers who
> don't want to adopt those ideas (yet).

Agreed.  I'm not against modularization at all; I just want to be
careful so that it is specced out that way -- we just need to keep this
in mind.

> I think it would be good if we didn't have to invent a new header for
> each idea of ways to lock down content. I think it would be great if
> people could experiment with Content-Security-Policy: x-my-cool-idea,
> and see if it was useful before standardization. Any idea which is a
> policy for content security should be in scope for experimentation.

Right.  This was proposed a while back (I don't recall the thread off
hand) as one header to convey all relevant security policies.  Something
like Accept-Policies I think.  If we want to turn CSP into that, we
could, but it surely wasn't designed from the ground up with that in mind.

> I agree with your concerns about scope creep, but I don't think making
> sure the syntax is forwards-compatible requires a fundamental redesign.
> And I don't think allowing the possibility of other things means we are
> on the hook to implement them, either for Firefox 3.6 or for any other
> release.

Point taken.  I'm on board for modularization so long as we don't have
to completely redesign the policy syntax.

I'm also a bit worried that we might lose sight of the original goals of
CSP, and so I wanted to bring up the fact that we have wandered far, far
away from where CSP started.  If everyone is okay with the diversion, I
see no cause for concern.

> We may wish to say "OK, CSP 1.0 is these 3 modules", so that a browser
> could say "I support CSP 1.0" without having to be more specific and
> detailed. But given that CSP support is unlikely to be a major marketing
> sell, I don't think that's a big factor.

What?  No "CSP 1.0 Compatible!" stickers for my laptop?  Or "CSP
inside"?  :)

-Sid
0
Sid
10/21/2009 4:25:44 PM
On 21/10/09 17:25, Sid Stamm wrote:
> Additional Directives are not a problem either, unless they're mandatory
> for all policies (which is not the case ... yet).  I'm still more in
> favor of extension via new directives than extension by modifying
> existing ones: this seems more obviously backward compatible and in
> reality probably more forward compatible too.

Ideally, this would always be the case. And the thinking that's going 
into the modularization should help us to correctly separate concerns.

> Right.  This was proposed a while back (I don't recall the thread off
> hand) as one header to convey all relevant security policies.  Something
> like Accept-Policies I think.  If we want to turn CSP into that, we
> could, but it surely wasn't designed from the ground up with that in mind.

I think the name "Content Security Policy" is generic enough already :-)

Gerv
0
Gervase
10/22/2009 9:21:02 AM
Gervase Markham wrote:
> I think it would be good if we didn't have to invent a new header for 
> each idea of ways to lock down content. I think it would be great if 
> people could experiment with Content-Security-Policy: x-my-cool-idea, 
> and see if it was useful before standardization. Any idea which is a 
> policy for content security should be in scope for experimentation.

I've added a CSRF straw-man:

https://wiki.mozilla.org/Security/CSP/CSRFModule

This page borrows liberally from XSSModule.  Comments are welcome!

Mike
0
Mike
10/22/2009 3:58:33 PM
On 8/11/09 3:19 AM, Gervase Markham wrote:
> Here's some possibilities for www.mozilla.org, based on the home page -
> which does repost RSS headlines, so there's at least the theoretical
> possibility of an injection. To begin with:
>
> allow self; options inline-script;

Blocking inline-script is key to stopping XSS. We added the ability to 
turn that bit of CSP off as an interim crutch for complex sites trying 
to convert, but if our proof-of-concept site has to rely on it we've 
clearly failed and will be setting a bad example to boot.
0
Daniel
10/23/2009 12:50:59 AM
On 23/10/09 01:50, Daniel Veditz wrote:
> Blocking inline-script is key to stopping XSS. We added the ability to
> turn that bit of CSP off as an interim crutch for complex sites trying
> to convert, but if our proof-of-concept site has to rely on it we've
> clearly failed and will be setting a bad example to boot.

What I was doing in my message was creating a policy for the site exactly 
as it is now - i.e. one you could use without any modifications. So, as 
the site had inline script, I had to add the inline-script directive. 
What else would you have me do? :-)

If we are doing a proof-of-concept conversion, then let's actually do 
some conversion work. That would mean moving the one line of JS which 
kicks off Urchin into an external file.
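Roughly, and assuming the classic Urchin snippet (the account ID and the
local path are placeholders), the change would be from:

   <script src="http://www.google-analytics.com/urchin.js"
           type="text/javascript"></script>
   <script type="text/javascript">
   _uacct = "UA-xxxxxx-x";
   urchinTracker();
   </script>

to:

   <script src="http://www.google-analytics.com/urchin.js"
           type="text/javascript"></script>
   <script src="/js/urchin-start.js" type="text/javascript"></script>

with /js/urchin-start.js containing just the kick-off lines:

   _uacct = "UA-xxxxxx-x";
   urchinTracker();

The policy could then drop "options inline-script" (the external
urchin.js host would still need to be permitted by script-src, of
course).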

Gerv
0
Gervase
10/23/2009 8:37:53 AM