Content Security Policy feedback

Giorgio Maone mentioned CSP on the OWASP Intrinsic Security list[1]
and I wanted to provide some feedback.

(1) Something that appears to be missing from the spec is a way for
the browser to advertise to the server that it will support Content
Security Policy, possibly with the CSP version.  By having the browser
send an additional header, it allows the server to make decisions
about the browser, such as limiting access to certain resources,
denying access, redirecting to an alternate site that tries to
mitigate using other techniques, etc.  Without the browser advertising
if it will follow the CSP directives, one would have to test for
browser compliance, much like how tests are done now for cookie and
JavaScript support (maybe that isn't a bad thing?).
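A server taking advantage of such an advertisement header might branch like this (a minimal Python sketch; the header name "X-CSP-Version" is hypothetical, since the spec defines no such header):

```python
def choose_response(request_headers):
    """Decide how to serve a client based on an advertised CSP version.

    The "X-CSP-Version" header name is purely illustrative; the draft
    spec defines no advertisement header.
    """
    version = request_headers.get("X-CSP-Version")
    if version is None:
        # No advertisement: fall back to other mitigations, deny
        # access, or redirect to an alternate hardened site.
        return "serve-legacy-mitigations"
    # Client claims CSP support: serve content plus the policy header.
    return "serve-with-csp-policy"
```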

(2) Currently the spec allows/denies based on the host name, it might
be worthwhile to allow limiting it to a specific path as well.  For
example, say you use Google's custom search engine, one way to
implement it is to use a script that sits on www.google.com (e.g.
http://www.google.com/coop/cse/brand?form=cse-search-box&lang=en).
By having an allowed path, you could prevent loading other scripts
from the www.google.com domain.
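As a sketch of what path-scoped matching could look like (hypothetical; the draft spec matches on host only):

```python
from urllib.parse import urlparse

# Hypothetical path-scoped allow-list; the current draft matches hosts only.
ALLOWED_SCRIPT_SOURCES = [
    ("www.google.com", "/coop/cse/"),
]

def script_allowed(url):
    """Return True if url matches an allowed (host, path-prefix) pair."""
    parts = urlparse(url)
    return any(parts.hostname == host and parts.path.startswith(prefix)
               for host, prefix in ALLOWED_SCRIPT_SOURCES)
```

Under such a rule, the custom-search script would load while any other script on the same host would be blocked.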

(3) Currently the spec focuses on the "host items" -- has any thought
been given to allowing CSP to extend to sites being referenced by "host
items"?  That is, allowing a site to specify that it can't be embedded
on another site via frame or object, etc?  I imagine it would be
similar to the Access Control for XS-XHR[2].


- Bil

[1] https://lists.owasp.org/pipermail/owasp-intrinsic-security/2008-November/000062.html
[2] http://www.w3.org/TR/access-control/
Bil
11/17/2008 10:19:53 PM
mozilla.dev.security

Bil Corry wrote:
> Giorgio Maone mentioned CSP on the OWASP Intrinsic Security list[1]
> and I wanted to provide some feedback.
> 
> (1) Something that appears to be missing from the spec is a way for
> the browser to advertise to the server that it will support Content
> Security Policy, possibly with the CSP version. 

That's intentional. CSP is a backstop solution, not front-line security.
If you are depending on the presence of CSP, as the lolcats say, U R
Doin It Wrong.

> (2) Currently the spec allows/denies based on the host name, it might
> be worthwhile to allow limiting it to a specific path as well.  For
> example, say you use Google's custom search engine, one way to
> implement it is to use a script that sits on www.google.com (e.g.
> http://www.google.com/coop/cse/brand?form=cse-search-box&lang=en).
> By having an allowed path, you could prevent loading other scripts
> from the www.google.com domain.

For this and the next one, I'll wait for bsterne to reply, as he's doing
the implementation and speccing work.

> (3) Currently the spec focuses on the "host items" -- has any thought
> been given to allowing CSP to extend to sites being referenced by "host
> items"?  That is, allowing a site to specify that it can't be embedded
> on another site via frame or object, etc?  I imagine it would be
> similar to the Access Control for XS-XHR[2].

I would suspect that would be out of scope.

Gerv
Gervase
11/20/2008 5:39:45 PM
On Nov 17, 2:19 pm, Bil Corry <b...@corry.biz> wrote:
> (1) Something that appears to be missing from the spec is a way for
> the browser to advertise to the server that it will support Content
> Security Policy, possibly with the CSP version.  By having the browser
> send an additional header, it allows the server to make decisions
> about the browser, such as limiting access to certain resources,
> denying access, redirecting to an alternate site that tries to
> mitigate using other techniques, etc.  Without the browser advertising
> if it will follow the CSP directives, one would have to test for
> browser compliance, much like how tests are done now for cookie and
> JavaScript support (maybe that isn't a bad thing?).

This isn't a bad idea, as I have seen this sort of "compatibility
level" used successfully elsewhere.  If future changes are made to the
model which would define restrictions for new types of content (e.g.
<video>), or which would affect the default behaviors for how content
is allowed to load, then it will be useful to servers to have their
clients' CSP version information.  If we are going to add this to the
model, then we should do so from the beginning to avoid the
potentially messy browser compliance testing that would result after
the first set of changes.

> (2) Currently the spec allows/denies based on the host name, it might
> be worthwhile to allow limiting it to a specific path as well.  For
> example, say you use Google's custom search engine, one way to
> implement it is to use a script that sits on www.google.com (e.g.
> http://www.google.com/coop/cse/brand?form=cse-search-box&lang=en).
> By having an allowed path, you could prevent loading other scripts
> from the www.google.com domain.

I don't have a strong opinion on this one.  My initial reaction is
that it adds complexity to the model, but perhaps complexity that's
warranted if people feel it's a useful feature.  Do you have some
specific use cases to share which would demonstrate the usefulness of
your suggestion?

> (3) Currently the spec focuses on the "host items" -- has any thought
> been given to allowing CSP to extend to sites being referenced by "host
> items"?  That is, allowing a site to specify that it can't be embedded
> on another site via frame or object, etc?  I imagine it would be
> similar to the Access Control for XS-XHR[2].

I would agree with Gerv, that this feels a bit out of scope for this
particular proposal.

Cheers,
Brandon
bsterne
11/20/2008 10:37:49 PM
On Nov 20, 4:37 pm, bsterne <bste...@mozilla.com> wrote:
> On Nov 17, 2:19 pm, Bil Corry <b...@corry.biz> wrote:
>
> > (1) Something that appears to be missing from the spec is a way for
> > the browser to advertise to the server that it will support Content
> > Security Policy, possibly with the CSP version.  By having the browser
> > send an additional header, it allows the server to make decisions
> > about the browser, such as limiting access to certain resources,
> > denying access, redirecting to an alternate site that tries to
> > mitigate using other techniques, etc.  Without the browser advertising
> > if it will follow the CSP directives, one would have to test for
> > browser compliance, much like how tests are done now for cookie and
> > JavaScript support (maybe that isn't a bad thing?).
>
> This isn't a bad idea, as I have seen this sort of "compatibility
> level" used successfully elsewhere.  If future changes are made to the
> model which would define restrictions for new types of content (e.g.
> <video>), or which would affect the default behaviors for how content
> is allowed to load, then it will be useful to servers to have their
> clients' CSP version information.  If we are going to add this to the
> model, then we should do so from the beginning to avoid the
> potentially messy browser compliance testing that would result after
> the first set of changes.

I personally see value there for the website, but if 99.9% of websites
will never do anything with the header, then it probably isn't
worthwhile (or it may take version 2 before the need is evident).  The
big challenge here is making sure the CSP announcement header can not
be spoofed via XHR, so to that end, I'd recommend prefixing the header
name with "Sec-" such as "Sec-Content-Security-Policy" -- the latest
draft of XHR2 specifies that any header beginning with "Sec-" is not
allowed to be overwritten with setRequestHeader():

http://www.w3.org/TR/XMLHttpRequest2/#setrequestheader

Of course, XHR2 would have to be implemented in the browsers first in
order to take advantage of the requirement.
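A rough Python model of that XHR2 guard (the draft's actual algorithm is defined for browsers, not servers; this only illustrates why a "Sec-" header could be trusted):

```python
def set_request_header(headers, name, value):
    """Mimic the XHR2 setRequestHeader() guard on "Sec-" headers.

    Per the draft, script may not set headers whose names begin with
    "Sec-" (or "Proxy-"); this sketch models that by ignoring the call.
    """
    if name.lower().startswith(("sec-", "proxy-")):
        return headers  # forbidden prefix: silently ignored
    headers[name] = value
    return headers
```

Because script can never set such a header, a server that receives "Sec-Content-Security-Policy" knows the browser itself sent it.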


> > (2) Currently the spec allows/denies based on the host name, it might
> > be worthwhile to allow limiting it to a specific path as well.  For
> > example, say you use Google's custom search engine, one way to
> > implement it is to use a script that sits on www.google.com (e.g.
> > http://www.google.com/coop/cse/brand?form=cse-search-box&lang=en).
> > By having an allowed path, you could prevent loading other scripts
> > from the www.google.com domain.
>
> I don't have a strong opinion on this one.  My initial reaction is
> that it adds complexity to the model, but perhaps complexity that's
> warranted if people feel it's a useful feature.  Do you have some
> specific use cases to share which would demonstrate the usefulness of
> your suggestion?

I don't have a specific use case, I'm thinking more of the edge cases
where content is allowed from a domain that allows a multitude of
third-party content.  Maybe this is something to explore for v2 if
warranted.


> > (3) Currently the spec focuses on the "host items" -- has any thought
> > been given to allowing CSP to extend to sites being referenced by "host
> > items"?  That is, allowing a site to specify that it can't be embedded
> > on another site via frame or object, etc?  I imagine it would be
> > similar to the Access Control for XS-XHR[2].
>
> I would agree with Gerv, that this feels a bit out of scope for this
> particular proposal.

Then maybe something to consider down the road.  It would be useful to
prevent hot linking and clickjacking.

- Bil
Bil
11/21/2008 4:56:30 PM
Bil Corry wrote:
> 
> I personally see value there for the website, but if 99.9% of websites
> will never do anything with the header, then it probably isn't
> worthwhile (or it may take version 2 before the need is evident).  The
> big challenge here is making sure the CSP announcement header can not
> be spoofed via XHR, so to that end, I'd recommend prefixing the header
> name with "Sec-" such as "Sec-Content-Security-Policy" -- the latest
> draft of XHR2 specifies that any header beginning with "Sec-" is not
> allowed to be overwritten with setRequestHeader():
> 
> http://www.w3.org/TR/XMLHttpRequest2/#setrequestheader
> 
> Of course, XHR2 would have to be implemented in the browsers first in
> order to take advantage of the requirement.

My 2c is that if we do this we should do versioning from the get go,
otherwise servers will have a hard time telling CSP v1.0 from CSP
unsupported clients in the future.  On one hand this may waste some
bandwidth now, but then again if it saves the server from sending CSP
responses to clients that don't support it, it may actually save
bandwidth and simplify server logic (since servers will be able to
determine conclusively that CSP is supported, rather than guessing).

> I don't have a specific use case, I'm thinking more of the edge cases
> where content is allowed from a domain that allows a multitude of
> third-party content.  Maybe this is something to explore for v2 if
> warranted.
> 

I think part of the challenge is that CSP governs a number of different
operations, some of which may be meaningful to restrict to a specific
path but others may not be (i.e. scripting vs asset loading).  A few
specific examples would help us get our brains around whether or not
enforcing restrictions on a per-path basis would actually be an
enforceable contract.

For (a contrived) example, say mashup.com hosts a number of different
widgets, but myapp.com wants to restrict loading of iframes to only
mashup.com/good.  If the user happens to have another app from
mashup.com/bad loaded in another window/tab, then in theory content from
mashup.com/bad could script directly into the iframe containing
mashup.com/good within myapp.com, bypassing the loading restriction.

That is probably not the best example, but the root of this problem is
that scripting permissions are really still only enforceable on a per
fully-qualified domain name basis, regardless of any loading restrictions.

> 
>>> (3) Currently the spec focuses on the "host items" -- has any thought
>>> be given to allowing CSP to extend to sites being referenced by "host
>>> items"?  That is, allowing a site to specify that it can't be embedded
>>> on another site via frame or object, etc?  I imagine it would be
>>> similar to the Access Control for XS-XHR[2].
>> I would agree with Gerv, that this feels a bit out of scope for this
>> particular proposal.
> 
> Then maybe something to consider down the road.  It would be useful to
> prevent hot linking and clickjacking
> .

I think the primary reason this seems out of scope is that CSP is a
mechanism for servers to govern their own content, rather than
specifying policies for 3rd party content.  The latter seems more like
the domain of Access Control.  Access Control AFAIK is not intended just
for XHR2, so I could imagine it being extended to govern opt-out of
cross-domain content loading, as well as to opt-in.

Thank you for your feedback btw, it is much appreciated.
  Lucas.

> 
> - Bil
> _______________________________________________
> dev-security mailing list
> dev-security@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security
Lucas
11/22/2008 12:50:28 AM
On Nov 21, 6:50 pm, Lucas Adamski <lu...@mozilla.com> wrote:
> >>> (3) Currently the spec focuses on the "host items" -- has any thought
> >>> been given to allowing CSP to extend to sites being referenced by "host
> >>> items"?  That is, allowing a site to specify that it can't be embedded
> >>> on another site via frame or object, etc?  I imagine it would be
> >>> similar to the Access Control for XS-XHR[2].
> >> I would agree with Gerv, that this feels a bit out of scope for this
> >> particular proposal.
>
> > Then maybe something to consider down the road.  It would be useful to
> > prevent hot linking and clickjacking.
>
> I think the primary reason this seems out of scope is that CSP is a
> mechanism for servers to govern their own content, rather than
> specifying policies for 3rd party content.  The latter seems more like
> the domain of Access Control.  Access Control AFAIK is not intended just
> for XHR2, so I could imagine it being extended to govern opt-out of
> cross-domain content loading, as well as to opt-in.

I was thinking Access Control was close, but it currently has this as
its abstract:

-----
This document defines a mechanism to enable client-side cross-site
requests. Specifications that want to enable cross-site requests in an
API they define can use the algorithms defined by this specification.
If such an API is used on http://example.org resources, a resource on
http://hello-world.example can opt in using the mechanism described by
this specification (e.g., specifying Access-Control-Allow-Origin:
http://example.org as response header), which would allow that
resource to be fetched cross-site from http://example.org.
-----

That to me means it's geared strictly for XHR, but maybe "cross-site
requests" is supposed to include any type of cross-site request,
including img, script, object, etc.

I agree though, Access Control seems like a better fit for this type
of functionality.  I'll approach Anne and see what he thinks.

Thanks for the reply,


- Bil
Bil
11/22/2008 4:22:55 PM
Yes, my understanding is that Access Control is actually intended as a
generic cross-site server policy mechanism, and XHR is just its first
implementation.  Thanks,
 Lucas.

Bil Corry wrote:
> On Nov 21, 6:50 pm, Lucas Adamski <lu...@mozilla.com> wrote:
>>>>> (3) Currently the spec focuses on the "host items" -- has any thought
>>>>> been given to allowing CSP to extend to sites being referenced by "host
>>>>> items"?  That is, allowing a site to specify that it can't be embedded
>>>>> on another site via frame or object, etc?  I imagine it would be
>>>>> similar to the Access Control for XS-XHR[2].
>>>> I would agree with Gerv, that this feels a bit out of scope for this
>>>> particular proposal.
>>> Then maybe something to consider down the road.  It would be useful to
>>> prevent hot linking and clickjacking.
>> I think the primary reason this seems out of scope is that CSP is a
>> mechanism for servers to govern their own content, rather than
>> specifying policies for 3rd party content.  The latter seems more like
>> the domain of Access Control.  Access Control AFAIK is not intended just
>> for XHR2, so I could imagine it being extended to govern opt-out of
>> cross-domain content loading, as well as to opt-in.
> 
> I was thinking Access Control was close, but it currently has this as
> its abstract:
> 
> -----
> This document defines a mechanism to enable client-side cross-site
> requests. Specifications that want to enable cross-site requests in an
> API they define can use the algorithms defined by this specification.
> If such an API is used on http://example.org resources, a resource on
> http://hello-world.example can opt in using the mechanism described by
> this specification (e.g., specifying Access-Control-Allow-Origin:
> http://example.org as response header), which would allow that
> resource to be fetched cross-site from http://example.org.
> -----
> 
> That to me means it's geared strictly for XHR, but maybe "cross-site
> requests" is supposed to include any type of cross-site request,
> including img, script, object, etc.
> 
> I agree though, Access Control seems like a better fit for this type
> of functionality.  I'll approach Anne and see what he thinks.
> 
> Thanks for the reply,
> 
> 
> - Bil
Lucas
11/22/2008 8:03:29 PM
Lucas Adamski wrote:
> My 2c is that if we do this we should do versioning from the get go,
> otherwise servers will have a hard time telling CSP v1.0 from CSP
> unsupported clients in the future.  On one hand this may waste some
> bandwidth now, but then again if it saves the server from sending CSP
> responses to clients that don't support it, 

What do you mean by "CSP responses to clients that don't support it"?
What is a "CSP response"? CSP is not supposed to make page authors do
anything different, it's supposed to cover their asses when they mess
up. Relying on CSP is using it for something it's not designed for.

bsterne - I'm not talking crack, right?

Gerv
Gervase
11/25/2008 10:43:59 PM
On Nov 25, 2:43 pm, Gervase Markham <g...@mozilla.org> wrote:
> What do you mean by "CSP responses to clients that don't support it"?
> What is a "CSP response"? CSP is not supposed to make page authors do
> anything different, it's supposed to cover their asses when they mess
> up. Relying on CSP is using it for something it's not designed for.
>
> bsterne - I'm not talking crack, right?

I think what Lucas is saying is that servers won't send policy to
clients who don't announce that they support CSP.

-Brandon
bsterne
11/26/2008 5:01:40 PM
On Nov 22, 2:03 pm, Lucas Adamski <lu...@mozilla.com> wrote:
> Yes, my understanding is that Access Control is actually intended as a
> generic cross-site server policy mechanism, and XHR is just its first
> implementation.

Anne confirmed that it's not intended to be XHR-only, however it's not
intended for all types of requests either.  He specifically said it
would not work for <iframe> due to cross-site scripting issues.


- Bil
Bil
12/1/2008 9:00:21 PM
I think this is true, but it kind of depends on how you look at it.  I
think sometimes different types of cross-domain operations can get
conflated together:

* cross-domain scripting - when code in one domain has the ability to
access another domain's code or DOM
* cross-domain data importing - transferring data from the context of
one domain into another domain (XHR with AC, stylesheets)
* cross-domain content loading - hands-off content loading operations
such as IFRAME and IMG tags that leave content in their respective
security domains--aka embedding

In this (conveniently simplified) model, since iframe is a content
loading operation, it doesn't need Access Control.  Nor am I sure what
it would really even mean to apply Access Control to it (would it be
permitting data importing or scripting?).

Probably the biggest fly in my otherwise nicely-simple ointment is
<SCRIPT SRC=>.  Is it cross-domain scripting or data importing?  It may
seem like scripting at first blush, but you may not have even
instantiated any code from the source domain, and in the end it's not
much different than loading data via XHR+AC and then calling eval() on
it.  So I would argue that even <SCRIPT SRC=> is a data import
operation, just one that is (alas) permitted by default and
automatically evals everything it loads.

So perhaps we are just agreeing that Access Control should never
govern cross-domain scripting.  Whether it could/should be extended to
govern (opt-out of) cross-domain loading/embedding is an interesting
one.  Thanks,
  Lucas.

Bil Corry wrote:
> On Nov 22, 2:03 pm, Lucas Adamski <lu...@mozilla.com> wrote:
>> Yes, my understanding is that Access Control is actually intended as a
>> generic cross-site server policy mechanism, and XHR is just its first
>> implementation.
> 
> Anne confirmed that it's not intended to be XHR-only, however it's not
> intended for all types of requests either.  He specifically said it
> would not work for <iframe> due to cross-site scripting issues.
> 
> 
> - Bil
Lucas
12/2/2008 1:07:38 AM
bsterne wrote:
> I think what Lucas is saying is that servers won't send policy to
> clients who don't announce that they support CSP.

To save 60 bytes in a header?

Gerv
Gervase
12/3/2008 10:56:26 PM
Gervase Markham wrote on 12/3/2008 4:56 PM: 
> bsterne wrote:
>> I think what Lucas is saying is that servers won't send policy to
>> clients who don't announce that they support CSP.
> 
> To save 60 bytes in a header?

No, so that in the event CSPv2 is incompatible with CSPv1, it won't require two response headers to be sent to every client.  Instead, since the browser tells the server which version of CSP it's accepting, the server can send back the CSP header in the most recent format that both the client and server understand (e.g. server knows CSPv2, client knows CSPv3, server sends CSPv2 header).


- Bil

Bil
12/3/2008 11:22:43 PM
Bil Corry wrote:
> No, so that in the event CSPv2 is incompatible with CSPv1, it won't
> require two response headers to be sent to every client.  Instead,
> since the browser tells the server which version of CSP it's
> accepting, the server can send back the CSP header in the most recent
> format that both the client and server understand (e.g. server knows
> CSPv2, client knows CSPv3, server sends CSPv2 header).

That makes no sense. You are saying that servers won't send any policy
at all, now, because in the future they might have to send two headers?

Gerv
Gervase
12/8/2008 6:32:21 PM
Gervase Markham wrote on 12/8/2008 12:32 PM: 
> Bil Corry wrote:
>> No, so that in the event CSPv2 is incompatible with CSPv1, it won't
>> require two response headers to be sent to every client.  Instead,
>> since the browser tells the server which version of CSP it's
>> accepting, the server can send back the CSP header in the most recent
>> format that both the client and server understand (e.g. server knows
>> CSPv2, client knows CSPv3, server sends CSPv2 header).
> 
> That makes no sense. You are saying that servers won't send any policy
> at all, now, because in the future they might have to send two headers?

Let's back up.  The CSP method you support (correct me if I'm wrong) is for the server to send a CSP header to all clients.  And if the client understands the header, it'll kick on some extra protections not currently afforded to the site.  And that's great for CSPv1.  But let's take it to the extreme: say there are now five different CSP versions, and none of them are compatible with each other.  The server then will have to issue five headers for all five CSP versions and hope the client supports one or more of them:

	X-Content-Security-Policy: ...
	X-Content-Security-Policy2: ...
	X-Content-Security-Policy3: ...
	X-Content-Security-Policy4: ...
	X-Content-Security-Policy5: ...

I'm suggesting instead that the client announce the CSP version it supports; something like:

	Sec-Content-Security-Policy: v3

And the server can respond with just that CSP version:

	X-Content-Security-Policy: ... v3 format here ...

So the main benefit is unambiguous communication, not saving bytes in a header.
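Negotiation along those lines might look like the following sketch (hypothetical; neither the advertisement header nor any "v2"/"v3" formats exist in the current draft):

```python
def negotiate_csp_version(client_versions, server_versions):
    """Pick the most recent CSP version both sides understand.

    client_versions would come from a hypothetical request header such
    as "Sec-Content-Security-Policy: v2, v3"; returns None when there
    is no overlap (the server then omits its CSP header entirely).
    """
    common = set(client_versions) & set(server_versions)
    if not common:
        return None
    # Versions are "v1", "v2", ...; compare by numeric part.
    return max(common, key=lambda v: int(v.lstrip("v")))
```

With a server that knows up through v2 and a client that accepts up through v3, this picks v2, matching the example above.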

Beyond that, it has other benefits, perhaps the biggest one is being able to measure how many clients are using CSP.  How will you measure the success of CSP if you have no way of knowing if 1% of browsers are using it, or 99%?  And websites may not want to implement it if they can't see the number of clients affected; if there is only 1% of their visitors using it, maybe they don't want to spend the effort to devise and keep a CSP header up-to-date.  But if 99% of their visitors use it, it now becomes more worthwhile.

And there's also debugging -- when some site visitors are having trouble using the site, but others are not, how can the website debug the problem when it's a misconfigured CSP?  Will the browser pop up an alert each time there's a CSP violation?  If not, and without a client sending a CSP header, it'll be hard to debug.



- Bil

Bil
12/8/2008 9:53:37 PM
Bil Corry wrote:
> Let's back up.  The CSP method you support (correct me if I'm wrong)
> is for the server to send a CSP header to all clients.  And if the
> client understands the header, it'll kick on some extra protections
> not currently afforded to the site.  And that's great for CSPv1.  But
> let's take it to the extreme: say there are now five different CSP
> versions, and none of them are compatible with each other. 

Stop right there. How does this potential future problem dissuade people
from deploying CSP now (which is what you were worried about)?

Anyway, CSP is designed to be forwardly-compatible in syntax. HTTP, a
complex protocol, has had one backwardly-compatible revision in 10+
years. I suspect we won't have five, or even two versions of CSP.
Particularly as it's currently an X- header for testing purposes, and
will move to not being an X- header when it hits 1.0, which allows
breaking changes at that point.

> Beyond that, it has other benefits, perhaps the biggest one is being
> able to measure how many clients are using CSP.  How will you measure
> the success of CSP if you have no way of knowing if 1% of browsers
> are using it, or 99%?

This is a feature. The only reason you'd want to do this is to see if
you could rely on it.

Anyway, you could get approximate stats by mapping from browser versions.

Gerv
Gervase
12/12/2008 5:22:15 PM
Gervase Markham wrote on 12/12/2008 11:22 AM: 
> Bil Corry wrote:
>> Let's back up.  The CSP method you support (correct me if I'm wrong)
>> is for the server to send a CSP header to all clients.  And if the
>> client understands the header, it'll kick on some extra protections
>> not currently afforded to the site.  And that's great for CSPv1.  But
>> let's take it to the extreme: say there are now five different CSP
>> versions, and none of them are compatible with each other. 
> 
> Stop right there. How does this potential future problem dissuade people
> from deploying CSP now (which is what you were worried about)?

It doesn't.  For this, I am worried about future, non-compatible revisions.  But your point is taken that it may be unlikely to happen.


>> Beyond that, it has other benefits, perhaps the biggest one is being
>> able to measure how many clients are using CSP.  How will you measure
>> the success of CSP if you have no way of knowing if 1% of browsers
>> are using it, or 99%?
> 
> This is a feature. The only reason you'd want to do this is to see if
> you could rely on it.

The reason site owners want to know how many people are using it is to see if it's worth the effort to implement and maintain it.  Take Cookie2 for example.  Only Opera supports it, so most sites only use Cookie.


> Anyway, you could get approximate stats by mapping from browser versions.

Browsers without built-in CSP support might have a plug-in available, and browsers with built-in CSP support might have it turned off.  But yes, once most browsers have built-in support, you could approximate stats based on user-agent.


In the end, sites that want to know which browsers support CSP will simply test for it (just like cookies and JavaScript); I see the client header as a convenience versus having to test for it, and it offers some other limited benefits, but if it violates the CSP paradigm, then certainly skip the suggestion.


- Bil

Bil
12/12/2008 7:08:01 PM
From this discussion I'm still seeing good reasons to have a version
flag; mainly to allow servers to detect whether a given client
supports CSP (and what version of it) in an unequivocal manner.
Browser version sniffing is not a good solution to that problem IMHO.

If a server is to rely on CSP to reliably enforce security constraints
it needs to know what version the client supports, so it can tailor
its content accordingly.  Even if the API is future-compatible, it is
very likely that as web technologies and attacks evolve we need to
revise CSP to take into account new APIs that need to be governed,
and/or new policies that need to be applied to existing APIs.  At which
point the server may need to modify its behavior/content based upon
the specific version of CSP provided.
  Lucas.

On Dec 12, 2008, at 11:08 AM, Bil Corry wrote:

> Gervase Markham wrote on 12/12/2008 11:22 AM:
>> Bil Corry wrote:
>>> Let's back up.  The CSP method you support (correct me if I'm wrong)
>>> is for the server to send a CSP header to all clients.  And if the
>>> client understands the header, it'll kick on some extra protections
>>> not currently afforded to the site.  And that's great for CSPv1.   
>>> But
>>> let's take it to the extreme: say there are now five different CSP
>>> versions, and none of them are compatible with each other.
>>
>> Stop right there. How does this potential future problem dissuade  
>> people
>> from deploying CSP now (which is what you were worried about)?
>
> It doesn't.  For this, I am worried about future, non-compatible  
> revisions.  But your point is taken that it may be unlikely to happen.
>
>
>>> Beyond that, it has other benefits, perhaps the biggest one is being
>>> able to measure how many clients are using CSP.  How will you  
>>> measure
>>> the success of CSP if you have no way of knowing if 1% of browsers
>>> are using it, or 99%?
>>
>> This is a feature. The only reason you'd want to do this is to see if
>> you could rely on it.
>
> The reason site owners want to know how many people are using it is  
> to see if it's worth the effort to implement and maintain it.  Take  
> Cookie2 for example.  Only Opera supports it, so most sites only use  
> Cookie.
>
>
>> Anyway, you could get approximate stats by mapping from browser  
>> versions.
>
> Browsers without built-in CSP support might have a plug-in  
> available, and browsers with built-in CSP support might have it  
> turned off.  But yes, once most browsers have built-in support, you  
> could approximate stats based on user-agent.
>
>
> In the end, sites that want to know which browsers support CSP will  
> simply test for it (just like cookies and JavaScript); I see the  
> client header as a convenience vs. having to test for it and it  
> offers some other limited benefits, but if it violates the CSP  
> paradigm, then certainly skip the suggestion.
>
>
> - Bil
>
> _______________________________________________
> dev-security mailing list
> dev-security@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security

Lucas
12/16/2008 7:51:59 PM
It seems to me that version-compatibility announcement is helpful
unless CSP is intended to be a short-lived or rarely-used security
construct (which I gather is not the point).   It is very likely that
CSP's governing scope will change in the future, or an additional
policy may be created to govern other pieces.  It is fairly likely
that different, incompatible policy-version support may need to be
treated differently by the web server.

In fact, supported version announcement is probably necessary, not
just useful, if implementations of CSP are initially rolled out in
browser add-ons; suddenly there's the possibility of multiple browser/
add-on version combinations, especially when people delay updating
browsers or updating plugins when prompted (old-browser + new-addon,
new-browser + old-addon, etc).  In these scenarios user-agent
profiling just won't work reliably.  Also, it's not clear we should
burden web site developers with staying up-to-date on which browsers
support which policies; it might be difficult to track which agents
support which engines, especially for unknown or niche browsers.
Instead, it might be ideal to explicitly tell the server what is
supported so regardless of the user-agent, the server can be fairly
confident it serves an appropriate policy.

A reasonable approach to specify the CSP version might be seen in the
Accept-Charset header, or really any of the Accept-* request headers.
It need not be present, but if it is, such an Accept-Security-Policy
request header can contain which versions the user agent supports
(e.g., CSP-1.0), and can be comma-separated in case multiple versions
or multiple policies are supported.  Another option would be to shove
the supported CSP versions into the User-Agent string, but that's a
nasty abuse of the User-Agent header (though arguably the security
policy enforcement is part of the "platform" on which web apps will
run).
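Sid's Accept-* analogy can be sketched server-side. Note this is purely illustrative: the Accept-Security-Policy header and the "CSP-1.0" version tokens are assumptions taken from this thread, not part of any shipped standard.

```python
# Hedged sketch of the version negotiation Sid proposes.  The header
# name "Accept-Security-Policy" and tokens like "CSP-1.0" are
# hypothetical, taken from this discussion.

def negotiate_csp_version(accept_header, server_versions):
    """Return the highest CSP version both sides support, or None.

    accept_header: value of the hypothetical Accept-Security-Policy
    request header, e.g. "CSP-1.0, CSP-2.0" (comma-separated, like
    the other Accept-* headers); None if the client didn't send it.
    """
    if not accept_header:
        return None  # client advertised nothing; serve no policy
    client = {v.strip() for v in accept_header.split(",") if v.strip()}
    common = client & set(server_versions)
    # Tokens of the form "CSP-<major>.<minor>" sort correctly as plain
    # strings while version numbers stay single-digit.
    return max(common) if common else None

print(negotiate_csp_version("CSP-1.0, CSP-2.0", ["CSP-1.0"]))  # CSP-1.0
print(negotiate_csp_version(None, ["CSP-1.0"]))                # None
```

Absent such a header, the fallback discussed above is user-agent sniffing, which is exactly what this scheme is meant to avoid.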

-Sid
Sid
12/17/2008 6:38:01 PM
Lucas Adamski wrote:
> From this discussion I'm still seeing good reasons to have a version
> flag; mainly to allow servers to detect whether a given client supports
> CSP (and what version of it) in an unequivocal manner. 

How do you react to my point that they shouldn't need to know that
because, if they do, it means they are relying on CSP, which they
shouldn't be?

> If a server is to rely on CSP to reliably enforce security constraints

If it's doing that, it's broken. CSP is explicitly not designed for
this. (As I understand it.)

Gerv
Gervase
12/17/2008 8:23:35 PM
Gervase Markham wrote:
> > If a server is to rely on CSP to reliably enforce security constraints
> If it's doing that, it's broken. CSP is explicitly not designed for
> this. (As I understand it.)

Maybe it's not completely bad for browsers to advertise whether or not
they support CSP (and which versions).  There's a benefit for web
developers who can decide to serve more restricted/filtered content to
browsers that won't "catch them when they fall".  This benefit is not
there if browsers don't advertise what they will enforce.  For
example, consider a webmaster who is just learning some new technology
X and may not be comfortable enough to serve X content without a safety
net that CSP provides, but is being pressured to add features to his
site.  If a client doesn't support CSP, his server can simply not
serve any script content that he isn't sure about, but when CSP is
present and can be enforced, he has that to fall back on and can serve
experimental stuff.  In an ideal world, all developers would
understand how all the code their site serves will behave in every
situation, but I doubt this is the case in reality, especially for
smaller, feature-driven sites.

I can see both sides of this issue, though.  It is not healthy to rely
on CSP for a primary layer of security, especially since it will take
some time for CSP to be adopted widely (and we *really* don't want to
encourage sloppy design).

-Sid
Sid
12/18/2008 2:31:43 AM
Hi Gerv,

Well, I think any security feature/model has to have some properties  
that are reliable.  So CSP may not prevent XSS is the blanket sense,  
but it still needs to be able to enforce some set of restrictions that  
the developer can rely upon.

Certainly the language within http://people.mozilla.org/~bsterne/content-security-policy/details.html 
  is unambiguous (i.e. "Scripts from non-white-listed hosts will not  
be requested or executed", not "Scripts from non-white-listed hosts  
may or may not be requested or executed").  Thanks,
   Lucas.

On Dec 17, 2008, at 12:23 PM, Gervase Markham wrote:

> Lucas Adamski wrote:
>> From this discussion I'm still seeing good reasons to have a version
>> flag; mainly to allow servers to detect whether a given client supports
>> CSP (and what version of it) in an unequivocal manner.
>
> How do you react to my point that they shouldn't need to know that
> because, if they do, it means they are relying on CSP, which they
> shouldn't be?
>
>> If a server is to rely on CSP to reliably enforce security constraints
>
> If it's doing that, it's broken. CSP is explicitly not designed for
> this. (As I understand it.)
>
> Gerv

Lucas
12/18/2008 6:01:14 AM
Gervase Markham wrote on 12/17/2008 2:23 PM: 
> Lucas Adamski wrote:
>> From this discussion I'm still seeing good reasons to have a version
>> flag; mainly to allow servers to detect whether a given client supports
>> CSP (and what version of it) in an unequivocal manner. 
> 
> How do you react to my point that they shouldn't need to know that
> because, if they do, it means they are relying on CSP, which they
> shouldn't be?

Is CSP supposed to be user-centric or site-centric?

By user-centric, I mean is CSP going to be similar to NoScript and AdBlockPlus where it's up to the user to configure its use and behavior, with the site being able to helpfully suggest the appropriate rules for itself?  If so, then I agree, sites should not rely on CSP because who knows how the user has configured CSP to behave.

By site-centric, I mean is CSP going to be entirely driven by the site, so the lack of a CSP header from the site means there is no CSP protection in place?  If so, then it is counter-intuitive that the entire model is premised on the site implementing the CSP header, but the site is blind to how many visitors use it and must not rely on CSP to actually do anything.  What I think will happen instead is sites that implement it will have some expectation that it does something (otherwise, why implement it?), and they will test to see which browsers are supporting it.  And if there is more than one version of CSP, they'll create multiple tests.


- Bil

Bil
12/18/2008 3:30:25 PM
It is site-centric.  Someone might write an add-in to monitor or  
modify content policies but that's not a core use case.
   Lucas.

On Dec 18, 2008, at 7:30 AM, Bil Corry wrote:

> Gervase Markham wrote on 12/17/2008 2:23 PM:
>> Lucas Adamski wrote:
>>> From this discussion I'm still seeing good reasons to have a version
>>> flag; mainly to allow servers to detect whether a given client supports
>>> CSP (and what version of it) in an unequivocal manner.
>>
>> How do you react to my point that they shouldn't need to know that
>> because, if they do, it means they are relying on CSP, which they
>> shouldn't be?
>
> Is CSP supposed to be user-centric or site-centric?
>
> By user-centric, I mean is CSP going to be similar to NoScript and  
> AdBlockPlus where it's up to the user to configure its use and  
> behavior, with the site being able to helpfully suggest the  
> appropriate rules for itself?  If so, then I agree, sites should not  
> rely on CSP because who knows how the user has configured CSP to  
> behave.
>
> By site-centric, I mean is CSP going to be entirely driven by the
> site, so the lack of a CSP header from the site means there is no  
> CSP protection in place?  If so, then it is counter-intuitive that  
> the entire model is premised on the site implementing the CSP  
> header, but the site is blind to how many visitors use it and must  
> not rely on CSP to actually do anything.  What I think will happen  
> instead is sites that implement it will have some expectation that  
> it does something (otherwise, why implement it?), and they will test  
> to see which browsers are supporting it.  And if there is more than  
> one version of CSP, they'll create multiple tests.
>
>
> - Bil
>

Lucas
12/18/2008 9:48:03 PM
Sid Stamm wrote:
> Gervase Markham wrote:
>>> If a server is to rely on CSP to reliably enforce security constraints
>> If it's doing that, it's broken. CSP is explicitly not designed for
>> this. (As I understand it.)
> 
> Maybe it's not completely bad for browsers to advertise whether or not
> they support CSP (and which versions).  There's a benefit for web
> developers who can decide to serve more restricted/filtered content to
> browsers that won't "catch them when they fall". 

If there's additional filtering they know how to do, they should be
doing it for everyone.

> example, consider a webmaster who is just learning some new technology
> X and may not be comfortable enough to serve X content without a safety
> net that CSP provides, but is being pressured to add features to his
> site.  

Then he shouldn't use X. (Who designed X to be unsafe by default? Go
shoot them. :-)

Gerv
Gervase
12/19/2008 5:30:55 PM
Bil Corry wrote:
> Is CSP supposed to be user-centric or site-centric?

Using your definitions, it's site-centric.

> By site-centric, I mean is CSP going to be entirely driven by the
> site, so the lack of a CSP header from the site means there is no CSP
> protection in place?  If so, then it is counter-intuitive that the
> entire model is premised on the site implementing the CSP header, but
> the site is blind to how many visitors use it and must not rely on
> CSP to actually do anything.  What I think will happen instead is
> sites that implement it will have some expectation that it does
> something (otherwise, why implement it?), 

Because it might save you when you screw up. That's the entire point of
it. If you never screw up, you don't need to use it (and please come and
work for me).

If you do screw up, people using browser which support CSP will be saved
(and will, perhaps, be able to warn you that you've screwed up) and
people using other browsers won't be saved. Such is life. It was still
worth implementing it, even if you didn't mean to screw up and even if
some people still get attacked.

Gerv
Gervase
12/19/2008 5:35:54 PM
Lucas Adamski wrote:
> Well, I think any security feature/model has to have some properties
> that are reliable.  So CSP may not prevent XSS in the blanket sense, but
> it still needs to be able to enforce some set of restrictions that the
> developer can rely upon.

Your second sentence doesn't follow from your first, in this context.

Yes, if CSP promises it'll prevent exact attack scenario X, it should
prevent X, and if it doesn't prevent X, it's a bug. (But all that's
really saying is that it's deterministic.) No, that doesn't mean that
developers should rely on a particular browser preventing attack X.
There may be a bug, the user may have turned it off, there may be a very
similar attack Y using the same flaw which CSP can't prevent, and so on.

Gerv
Gervase
12/19/2008 5:35:57 PM
Developers rely on the browser security model in countless ways  
already.   A fundamental attribute of security models is reliability.   
I realize that not all browsers will have CSP in the foreseeable  
future, but that is orthogonal to being able to detect & rely upon
CSP when it is present.  And so no, I don't think there is an
inconsistency in my earlier statements below.

> there may be a bug
- we fix it

> the user may have turned it off
- that's why you need to send a CSP supported header, and not rely on  
version sniffing.  Furthermore, I'm not sure why the user would turn it
off (does the user turn off same-origin restrictions, or cross-frame
navigation restrictions, or ...)

> there may be a very similar attack Y using the same flaw which CSP  
> can't prevent, and so on.
- which is why we aren't preventing attacks, we are enforcing policies.
There is no "PREVENT XSS" switch in CSP for that reason.  If
anything, this is a compelling argument for versioning, because we may
have to update CSP in the future to modify existing policies or add
new ones.

   Lucas.

On Dec 19, 2008, at 9:35 AM, Gervase Markham wrote:

> Lucas Adamski wrote:
>> Well, I think any security feature/model has to have some properties
>> that are reliable.  So CSP may not prevent XSS in the blanket sense, but
>> it still needs to be able to enforce some set of restrictions that  
>> the
>> developer can rely upon.
>
> Your second sentence doesn't follow from your first, in this context.
>
> Yes, if CSP promises it'll prevent exact attack scenario X, it should
> prevent X, and if it doesn't prevent X, it's a bug. (But all that's
> really saying is that it's deterministic.) No, that doesn't mean that
> developers should rely on a particular browser preventing attack X.
> There may be a bug, the user may have turned it off, there may be a  
> very
> similar attack Y using the same flaw which CSP can't prevent, and so  
> on.
>
> Gerv

Lucas
12/19/2008 6:18:07 PM
On Dec 19, 12:30 pm, Gervase Markham <g...@mozilla.org> wrote:
> > Maybe it's not completely bad for browsers to advertise whether or not
> > they support CSP (and which versions).  There's a benefit for web
> > developers who can decide to serve more restricted/filtered content to
> > browsers that won't "catch them when they fall".
> If there's additional filtering they know how to do, they should be
> doing it for everyone.

I'm not sure I agree with that... take for instance a browser that
only supports SSL v2 (and not 3): a site concerned with avoiding MITM
attacks might serve different content (or none) to someone whose
browser only supports SSL v2, and serve all the site's content to
someone whose browser supports v3.  That doesn't warrant blocking
content to all visitors regardless of what security constructs their
browser supports.  If the filtering in question just removes possibly-
evil data, then yeah, it should be done for everyone. However, the
filtering in question might remove site functionality because the
client's browser may not play nice.

> > consider a webmaster who is just learning some new technology
> > X and may not be comfortable enough to serve X content without a safety
> > net that CSP provides, but is being pressured to add features to his
> > site.
> Then he shouldn't use X. (Who designed X to be unsafe by default? Go
> shoot them. :-)

I see your point.  One would hope X is not *designed* to be unsafe,
but it might not be rock-solid, with a history of security issues
(like Flash).  The webmaster might not feel completely comfortable
with his mastery of it, so only feels comfortable providing Flash-
based content to people whose browsers will help protect them.  I
block Flash content from most sites (and don't employ it on my own web
sites), but may change my ways if CSP were available to help out with
more CSRF protection.

-Sid
Sid
12/19/2008 6:53:32 PM
Bil Corry wrote on 12/18/2008 9:30 AM: 
> By user-centric, I mean is CSP going to be similar to NoScript and
> AdBlockPlus where it's up to the user to configure its use and
> behavior, with the site being able to helpfully suggest the
> appropriate rules for itself?  If so, then I agree, sites should not
> rely on CSP because who knows how the user has configured CSP to
> behave.

Here's a good example of "user-centric", Giorgio Maone's ABE:

	http://hackademix.net/2008/12/20/introducing-abe/

The details of it are here:

	http://hackademix.net/wp-content/uploads/2008/12/abe_rules_03.pdf

So while ABE doesn't send a request header advertising itself, due to the user-centric nature of the protection, it doesn't seem necessary to me.  I do admit there's a fine line here that's entirely based on how CSP and ABE have been framed for use.


- Bil

Bil
12/20/2008 1:40:55 PM
Sid Stamm wrote:
> I'm not sure I agree with that... take for instance a browser that
> only supports SSL v2 (and not 3): 

That's a difficult "for instance" to accept, because there aren't any.
At least, not that anyone uses.

> a site concerned with avoiding MITM
> attacks might serve different content (or none) to someone whose
> browser only supports SSL v2, and serve all the site's content to
> someone whose browser supports v3.  That doesn't warrant blocking
> content to all visitors regardless of what security constructs their
> browser supports.  

Right. In that far-fetched scenario, they might. But the security
provided by SSL (privacy, authentication) is very different to the
security provided by CSP (anti-XSS), so the analogy doesn't hold.
Security is a multi-faceted beast.

> I see your point.  One would hope X is not *designed* to be unsafe,
> but it might not be rock-solid, with a history of security issues
> (like Flash).  The webmaster might not feel completely comfortable
> with his mastery of it, so only feels comfortable providing Flash-
> based content to people whose browsers will help protect them. 

In which case, for the foreseeable future, he won't be providing it to
many people. :-) Again, CSP is here being used as a front line of
defence, and it shouldn't be.

Another feature of CSP is "herd immunity" - it doesn't have to be used
by everyone to be helpful.

Gerv
Gervase
12/23/2008 3:33:02 PM
Lucas Adamski wrote:
> Developers rely on the browser security model in countless ways
> already.   A fundamental attribute of security models is reliability. 

I am not arguing we should make CSP work a random 50% of the time. I am
arguing that CSP is not a "security model", it's a "phew, I would have
just got stuffed, but it saved me this time" model. Security models are
things you rely on. CSP is a second line of defence for when your
security model fails, and it doesn't promise to save your ass every time.

Gerv
Gervase
12/23/2008 3:34:46 PM
Gervase Markham <g...@mozilla.org> wrote:
> Security is a multi-faceted beast.
Point taken, and I agree, it was a crappy analogy.

> Again, CSP is here being used as a front line of
> defence, and it shouldn't be.
I agree with you... optimally, CSP should not be front-line defense.
But for it to be helpful in practice, there must be a motivation for
people to put it on their sites.

What worries me is that with no assurance that they're enforced, CSP
policies won't be provided by web sites since it takes time (granted,
not much of it) to compose them.  It's likely that a profit-driven
company might rather have their engineers spend time fuzzing or bug
fixing than designing a good CSP string that may or may not ever be
used.

One point of view is, screw 'em... sites that don't provide CSP will
just be vulnerable to more XSS attacks, and it is only skin off their
own back.  On the other hand, the client through his browser is
usually the real victim, not the site, and I think we want to
encourage sites to give as much protection to the client as possible.
This might mean tailoring CSP a bit to give companies motivation to
put CSP into their sites.

Though, perhaps in the long run a good policy can help them later
identify possible vulnerabilities, it may not be obviously beneficial
in the short run and won't be enough to make up for the fact that the
site can't tell whether or not their CSP is helping out at all (and
so they won't provide it).

> Another feature of CSP is "herd immunity" -
> it doesn't have to be used by everyone to
> be helpful.
Surely using CSP won't *hurt*, but I think that it will only help the
people who use it.  Herd immunity applies mainly to viral spreads or
epidemics, and I would argue that most of what CSP prevents are not
viral attacks.  A few browsers with CSP can help slow an XSS worm from
spreading to the rest of the "herd", but it won't change the
persistent or reflected XSS attacks to steal contact lists or deface a
site that doesn't use CSP.

These one-shot (non-viral) attacks only become less frequent when it
becomes more futile to try. CSP actually has to be adopted enough by
sites in practice (and not just theorized) to make attacks it prevents
less attractive, and thus reduce the overall number of attempted
attacks.  For instance, if only 10% of visitors to an XSS-defaced site
enforce CSP, attackers will probably still deface that site because
90% isn't bad.  If we can make it irrational to attack a site (by
having 60% of browsers and sites implement CSP), then we'll see
attackers stop trying.  Until then, only those implementing CSP will
get the benefit of extra security.

-Sid
Sid
1/5/2009 7:52:40 PM
Sid Stamm wrote:
> Gervase Markham <g...@mozilla.org> wrote:
>> Security is a multi-faceted beast.
> Point taken, and I agree, it was a crappy analogy.
> 
>> Again, CSP is here being used as a front line of
>> defence, and it shouldn't be.
> I agree with you... optimally, CSP should not be front-line defense.
> But for it to be helpful in practice, there must be a motivation for
> people to put it on their sites.
> 
> What worries me is that with no assurance that they're enforced, CSP
> policies won't be provided by web sites since it takes time (granted,
> not much of it) to compose them.  It's likely that a profit-driven
> company might rather have their engineers spend time fuzzing or bug
> fixing than designing a good CSP string that may or may not ever be
> used.

It really doesn't take long - it's not a complicated spec. I'm not sure
we need to make it "more attractive" by promising what we can't deliver.

>> Another feature of CSP is "herd immunity" -
>> it doesn't have to be used by everyone to
>> be helpful.

Sorry, I realise that in hindsight I was ambiguous here. I meant that
not all end-users have to use it for it to be helpful in the case of a
particular site which is using it. I say this because once the site
owner is warned of the problem, he can fix it. If no-one has CSP, it may
take much longer for people to notice the compromise.

Gerv
Gervase
1/12/2009 10:53:16 AM
Gervase Markham wrote:
> Sid Stamm wrote:
>> What worries me is that with no assurance that they're enforced, CSP
>> policies won't be provided by web sites since it takes time (granted,
>> not much of it) to compose them.  It's likely that a profit-driven
>> company might rather have their engineers spend time fuzzing or bug
>> fixing than designing a good CSP string that may or may not ever be
>> used.
> 
> It really doesn't take long - it's not a complicated spec. I'm not sure
> we need to make it "more attractive" by promising what we can't deliver.

One concern is the time and effort required to refactor existing code to 
use only external scripts (a non-trivial task).  Development of new web 
code can take this restriction into account but still requires 
deliberate effort throughout the development cycle to maintain support 
for CSP.

I think utilizing CSP will be a very conscious decision by web site 
operators, weighing the benefits CSP offers, the cost of implementing 
and maintaining CSP support, and the risks of not adding CSP to their 
web site.  While it would be nice to have a low cost, effective, add-on 
layer of security, it seems the requirement of no inline script code 
adds significantly to the cost of CSP.  Therefore site owners should be 
able to estimate the benefit CSP will give them by measuring the level 
of browser support among the site's visitors, so it can be weighed 
against the cost of CSP deployment.

Is it correct that the rule against inline scripts is in effect for all 
CSP policies, even when script-src is not used?

Mike
Mike
1/12/2009 6:34:10 PM
On Jan 12, 5:53 am, Gervase Markham <g...@mozilla.org> wrote:
> not all end-users have to use it for it to be helpful in the case of a
> particular site which is using it. I say this because once the site
> owner is warned of the problem, he can fix it. If no-one has CSP, it may
> take much longer for people to notice the compromise.

Of course, unless the site breaks in a noticeable way when violations
of CSP occur, there is no additional help for the site developer...
and I don't believe that CSP is intended to have a violation reporting
mechanism.  Additionally, it is my impression that a lot of attacks
stopped by CSP would break un-noticed.  For example, a cross-site
exploit that simply embeds a <script> and steals cookies would likely
not modify the page visually, so whether or not it fails, the end-user
wouldn't notice.

Maybe something to add value to CSP support would be a CSP developer
mode or warning logo somewhere in the browser that alerts the end-user
when a policy is violated.  That would indeed be an easy add-on, and
perhaps testers could just flip it on for sites they fool with on a
daily basis.

Or do we want phone-home features for CSP so the browser will
automatically tell a site when its policy is violated?  This sounds
like it could be abused to help sites identify which browsers support
CSP (essentially providing that 'this-browser-supports-csp' flag
you're arguing against).

-Sid
Sid
1/12/2009 6:52:11 PM
Sid Stamm wrote on 1/12/2009 12:52 PM: 
> Or do we want phone-home features for CSP so the browser will
> automatically tell a site when its policy is violated?

It already has this feature, see #6:

	http://people.mozilla.org/~bsterne/content-security-policy/details.html


- Bil

Bil
1/12/2009 7:23:12 PM
On Jan 12, 2:23 pm, Bil Corry <b...@corry.biz> wrote:
> It already has this feature, see #6:

Ah, sorry for my blindness Bil.  It has been a while since I read
that, and simply spaced on that feature.

Gerv: what are your thoughts on (mis)use of the Report-URI to
determine which browsers support CSP?  For example, given a policy "X-
Content-Security-Policy: allow self", Report-URI "http://self.com/
report" and a tag served "<script src='http://forbidden.com/js'>", a
report would be generated.  Assuming the report URI and the page
containing the violation are in the same domain, cookies could be used
to connect the report to a specific client.   It seems to me that
unless client browsers *never* send CSP-related data to the server
then the server can ultimately determine which clients are using CSP.
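The detection trick described here can be sketched as follows. Everything is an illustrative stand-in (the `/csp-report` path, the in-memory dict in place of a real session store), and the actual violation-report format from the CSP details page is not modeled:

```python
# Sketch of inferring CSP support from violation reports: serve a page
# that deliberately violates its own policy, then treat any hit on the
# report URI as proof that this visitor's browser enforces CSP.

csp_support = {}  # session cookie value -> True once a report arrives

def serve_probe_page(session_id):
    """Record the visit; CSP support starts out unknown (False).
    The page body would include <script src='http://forbidden.com/js'>,
    which a CSP-enforcing browser blocks *and* reports."""
    csp_support.setdefault(session_id, False)
    return {"X-Content-Security-Policy":
            "allow self; report-uri /csp-report"}

def handle_csp_report(session_id):
    """Hit on the report URI: this browser enforced the policy."""
    csp_support[session_id] = True

serve_probe_page("alice")
serve_probe_page("bob")
handle_csp_report("alice")   # only Alice's browser sent a report
print(csp_support)           # {'alice': True, 'bob': False}
```

As noted, tying the report to the session cookie is what lets the server attribute CSP support to a specific client rather than to traffic in aggregate.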

-Sid
Sid
1/12/2009 9:19:52 PM
Sid Stamm wrote on 1/12/2009 3:19 PM: 
> It seems to me that unless client browsers *never* send CSP-related
> data to the server then the server can ultimately determine which
> clients are using CSP.

I agree, without the client advertising CSP-support, sites will test for CSP just as they test for JavaScript, cookies, etc.  You could probably test for CSP by using policy-uri: if the browser requests it from your server, then it supports CSP.  To prevent an attacker from causing a browser to load it à la CSRF, you could even add a nonce to the request:

	X-Content-Security-Policy: policy-uri /policy.csp?nonce=ABC123
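On the server, that nonce scheme could look roughly like this. The header syntax is taken from the example above; the storage and function names are invented for the sketch, and `secrets` is just a convenient way to mint an unguessable nonce:

```python
# Sketch of the nonce-protected policy-uri idea: bind each policy-uri
# to one page load, so a forged (CSRF-style) fetch of the policy file
# can't be mistaken for genuine CSP support.
import secrets

issued = {}  # nonce -> session it was issued to (one-time use)

def csp_header_for(session_id):
    """Attach a fresh nonce to the policy-uri for this page load."""
    nonce = secrets.token_hex(8)
    issued[nonce] = session_id
    return f"X-Content-Security-Policy: policy-uri /policy.csp?nonce={nonce}"

def policy_fetched(nonce):
    """Browser requested /policy.csp: it understands CSP.  Returns the
    session that proved support, or None for an unknown/replayed nonce."""
    return issued.pop(nonce, None)

header = csp_header_for("session-42")
nonce = header.rsplit("nonce=", 1)[1]
print(policy_fetched(nonce))   # session-42
print(policy_fetched(nonce))   # None (nonce already consumed)
```

Popping the nonce on first use keeps a logged or replayed URL from registering as a second "supporting" client.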



- Bil

Bil
1/12/2009 10:40:45 PM
Sorry I haven't been more vocal on this thread lately.  I think it's
important that we keep our momentum moving forward here if we hope to
get something meaningful implemented any time soon.

I am getting the sense that we aren't in agreement on one or two of
the fundamental goals of this project and I think it potentially
jeopardizes overall progress if we are working with different base
assumptions.  My near-term goal is to start driving toward a stable
design (if not specification) for CSP.  The design is certainly still
open for comments and feedback, but those discussions will be easier
to resolve after we've settled the issue of project goals.  More
below...

On Dec 23 2008, 7:34 am, Gervase Markham <g...@mozilla.org> wrote:
> I am not arguing we should make CSP work a random 50% of the time. I am
> arguing that CSP is not a "security model", it's a "phew, I would have
> just got stuffed, but it saved me this time" model.  Security models are
> things you rely on. CSP is a second line of defence for when your
> security model fails, and it doesn't promise to save your ass every time.

I think that CSP should be considered part of the browser security
model.  Mike and others have made the excellent point that there are
significant costs to bear for a website that wants to start using this
model: policy development as well as migrating inline scripts to
external script files.  Websites will not be willing to pay this cost
if user agents are not strongly committed to enforcing the policies.
We won't be able to make security guarantees like "XSS will never
happen on your site", but we can provide smaller guarantees like
"inline script will not execute in this page if the CSP header is
sent".

I have previously agreed with Gerv's "belt-and-(suspenders|braces)"
logic with regard to CSP as it had twofold appeal to me: 1) it is
consistent with the defense-in-depth approach found elsewhere in
computer security, and 2) it provided an escape hatch from design
flaws, implementation bugs, or other deficiencies later discovered
with the model.  It appears now, though, that this issue is impeding
us a bit and I am going to weigh in on the side of stronger commitment
to policy enforcement.  Perhaps a stronger design is produced as the
result of a firm commitment to CSP as a part of the browser security
model (or perhaps it is required by such a commitment).
bsterne
1/14/2009 1:24:11 AM
Sid Stamm wrote:
> Gerv: what are your thoughts on (mis)use of the Report-URI to
> determine which browsers support CSP?  For example, given a policy "X-
> Content-Security-Policy: allow self", Report-URI "http://self.com/
> report" and a tag served "<script src='http://forbidden.com/js'>", a
> report would be generated.  Assuming the report URI and the page
> containing the violation are in the same domain, cookies could be used
> to connect the report to a specific client.   It seems to me that
> unless client browsers *never* send CSP-related data to the server
> then the server can ultimately determine which clients are using CSP.

I have no objection in principle to servers knowing that clients have
CSP capability. What I object to is bulking up every HTTP request with
that information, or making the protocol or system more complicated in
order to allow people to do things they shouldn't be doing (like relying
on it as a first line of defence).

Gerv
Gervase
1/16/2009 5:55:41 AM
bsterne wrote:
> I think that CSP should be considered part of the browser security
> model.  Mike and others have made the excellent point that there are
> significant costs to bear for a website that wants to start using this
> model: policy development as well as migrating inline scripts to
> external script files.  Websites will not be willing to pay this cost
> if user agents are not strongly committed to enforcing the policies.
> We won't be able to make security guarantees like "XSS will never
> happen on your site", but we can provide smaller guarantees like
> "inline script will not execute in this page if the CSP header is
> sent".

I completely agree that we should make these guarantees, in the sense
that if that doesn't work, it's a bug :-) That's not the sort of
guarantee I'm objecting to. The sort I'm objecting to is "you don't have
to validate and escape user input properly because even if you let a
<script> tag through accidentally, CSP will catch it and save you".

Some understandings of "CSP being strongly part of the browser security
model" would have us making such guarantees. And I think they would be a
mistake. If "CSP being strongly part of the browser security model" just
means "we guarantee that it does what it says on the tin" then I have no
problem with it :-) My reduced commitment to guarantees was not designed
as an ass-covering measure for shoddy coding ;-)

Gerv
0
Gervase
1/16/2009 5:58:36 AM
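The migration cost bsterne mentions — moving inline scripts into external script files so that a no-inline-script policy can be enforced without breaking the page — can be sketched mechanically. The function below is a toy for illustration only: the regex approach and the `script_N.js` naming scheme are assumptions of this sketch (a real migration tool would use a proper HTML parser and handle attributes, encodings, and event handlers).

```python
import re

# Toy sketch of migrating inline scripts to external files: each inline
# <script> body is extracted into its own file and replaced with a
# <script src=...> reference, so a CSP that forbids inline script can
# be deployed without losing the page's behaviour.  Regex parsing of
# HTML is fragile and used here only to keep the example short.

SCRIPT_RE = re.compile(r"<script>(.*?)</script>", re.DOTALL)

def externalize_inline_scripts(html, make_name=lambda i: "script_%d.js" % i):
    """Return (rewritten_html, {filename: script_body})."""
    extracted = {}

    def replace(match):
        name = make_name(len(extracted))
        extracted[name] = match.group(1).strip()
        return '<script src="%s"></script>' % name

    return SCRIPT_RE.sub(replace, html), extracted
```

Even this toy makes the cost visible: every inline handler and template-generated script block on a site has to be found, extracted, and re-served, which is exactly why sites will only pay it if browsers hold up their end of the guarantee.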