web-platform-tests now running in automation

The web-platform-tests testsuite has just landed on
Mozilla-Central. It is an import of a testsuite collated by the W3C
[1], which we intend to keep up-to-date with upstream. The tests are
located in /testing/web-platform/tests/ and are now running in automation.

Initially the testsuite, excluding the reftests, is running on Linux
64 opt builds only. If it doesn't cause problems there, it will be
rolled out to other configurations once we are confident they will be
equally stable.

The jobs are indicated on tbpl and treeherder by the symbols W1-W4. The
reftests will appear as Wr once they are enabled.

== How does this affect me? ==

Because web-platform-tests is imported from upstream we can't make
assumptions like "all tests will pass". Instead we explicitly store
the expected result of every test that doesn't simply pass in an
ini-like file under /testing/web-platform/meta/, named after the test
with a .ini suffix. If you make a change that affects the
result of a web-platform-test you need to update the expected results
or the testsuite will go orange.
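
For example, an expectation file for a testharness.js test with one
failing subtest might look like the following sketch (the test and
subtest names here are hypothetical; the README [2] describes the
real format):

[example-test.html]
  [A subtest name that does not currently pass]
    expected: FAIL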

Instructions for performing the updates are in the README file
[2]. There is tooling available to help in the update process.
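
As a sketch of that flow (the README [2] is authoritative; the flag
and command names here are best-effort assumptions), you run the
testsuite with structured logging and feed the resulting log into the
update tool:

mach web-platform-tests --log-raw=wpt.log
mach web-platform-tests-update wpt.log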

== OK, so how do I run the tests? ==

Locally, using mach:

mach web-platform-tests

or, to run only a subset of tests:

mach web-platform-tests --include=dom/

To run multiple tests at once (at the expense of undefined ordering
and greater nondeterminism), use the --processes=N option.
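
For example, to run the dom/ tests on four concurrent browser
instances:

mach web-platform-tests --include=dom/ --processes=4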

The tests are also available on Try; the trychooser syntax is

-u web-platform-tests

Individual chunks can also be run, much like for mochitest.
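
By analogy with mochitest (the chunk number here is illustrative),
requesting a single chunk looks like:

-u web-platform-tests-1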

It's also possible to just start the web server and load tests into
the browser, as long as you add the appropriate entries to your hosts
file. These are documented in the web-platform-tests README file
[3]. Once these are added, running

python serve.py

in testing/web-platform/tests will start the server and allow the
tests to be loaded from http://web-platform.test:8000.
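
For reference, the hosts entries in question map web-platform.test
and several subdomains to the loopback address. This is a sketch of
the list at the time of writing; treat the README [3] as
authoritative:

127.0.0.1 web-platform.test
127.0.0.1 www.web-platform.test
127.0.0.1 www1.web-platform.test
127.0.0.1 www2.web-platform.test
127.0.0.1 xn--n8j6ds53lwwkrqhv28a.web-platform.test
127.0.0.1 xn--lve-6lad.web-platform.test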

== What does it mean if the tests are green? ==

It means that there are no "unexpected" results. These expectations
are set based on the existing behaviour of the browser. Every time the
tests are updated the expectations will be updated to account for
changes in the tests. It does *not* mean that there are no tests that
fail. Indeed there may be tests that have even worse behaviour like
hanging or crashing; as long as the behaviour is stable, the test will
remain enabled (this can ocassionally have somewhat wonky interaction
with the tbpl UI. When looking at jobs, unexpected results always start
TEST-UNEXPECTED-).

So far I haven't spent any time filing bugs about issues found by the
tests, but there is a very basic report showing those that didn't pass
at [4]. I am very happy to work with people who have insight into
which bugs have already been filed, so we can get new issues into Bugzilla. I
will also look at making a continually updated HTML report. In the
longer term I am hopeful that this kind of reporting can become part
of the Treeherder UI so it's easy to see not just where we have
unexpected results but also where there are expected failures
indicating buggy code.

== What kinds of things are covered by these tests? ==

web-platform-tests is, in theory, open to any tests for web
technologies. In practice most of the tests cover technologies in the
WHATWG/W3C stable, e.g. HTML, DOM, various WebApps specs, and so
on. The notable omission is CSS; for historical reasons the CSS tests
are still in their own repository. Convergence here is a goal for the
future.

== We already have mochitests; why are we adding a new testsuite? ==

Unlike mochitests, web-platform-tests are designed to work in any
browser. This means that they aren't just useful for avoiding
regressions in Gecko, but also for improving cross-browser interop;
when developing features we can run tests that other implementers have
written, and they can run tests we have written. This will allow us to
detect compatibility problems early in a feature's life-cycle, before
they have the chance to become a source of frustration for
authors. With poor browser compatibility being one of the main
complaints about developing for the web, improvements in this area are
critical for the ongoing success of the platform.

== So who else is running the web-platform-tests? ==

* Blink run some of the tests in CI ([5] and various other locations
  scattered through their tree)
* The Servo project are running all the tests for spec areas they have
  implemented in CI [6]
* Microsoft have an Internet Explorer-compatible version of the test runner.

In addition we are using web-platform-tests as one component of the
FirefoxOS certification suite.

The harness [7] we are using for testing Gecko is browser-agnostic, so
it's possible to experiment with running tests in other browsers. In
particular it supports Firefox OS, Servo and Chrome, and Microsoft
have patches to support IE. Adding support for other browsers
supporting some sort of remote-control protocol (e.g. WebDriver)
should be straightforward.

== Does this mean I should be writing web-platform-tests? ==

Yes.

When we are implementing web technologies, writing cross-browser tests
is generally better than writing proprietary tests. Having tests that
multiple vendors run helps advance the mission, by providing a
concrete way of assessing spec conformance and improving interop. It
also provides short term wins since we will discover compatibility
issues closer to the time that the code is originally written, rather
than having to investigate broken sites later on. This also applies to
other vendors of course; by encouraging them to run tests that we have
written they are less likely to introduce bugs that manifest as
compatibility issues which, in the worst case, lead to us having to
"fix" our implementation to match their mistakes.

But.

At the moment, the process for interacting with web-platform-tests
requires direct submission to the upstream GitHub repository. In the
near future this workflow will be improved by adding a directory for
local modifications or additions to web-platform-tests in the Mozilla
tree (e.g. testing/web-platform/local). Once landed in m-c any tests
here will automatically be pushed upstream during the next
web-platform-tests sync (as long as the test has r+ in Bugzilla it
doesn't need to be reviewed again to land upstream). This, combined
with the more limited featureset and platform coverage of
web-platform-tests compared to mochitest, means that this email is
explicitly *not* a call to change any policy around test formats at this
time.

== I'm feeling virtuous! Where's the documentation for writing tests? ==

The main documentation is at Test The Web Forward [8]. I am in the
process of updating this to be more current; for now the most
up-to-date documentation is in my fork of the website at [9]. This will be
merged upstream in the near future.

For tests that require server-side logic, web-platform-tests uses a
custom Python-based server that allows test-specific behaviour
through simple .py files. Documentation for this is found at [10].
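
As a flavour of what such a handler looks like (the file and
parameter names here are hypothetical; see [10] for the authoritative
interface), a file such as echo.py defines a main function that
receives request and response objects:

def main(request, response):
    # Echo a query parameter back to the test; request.GET holds the
    # parsed query string.
    token = request.GET.first("token", "none")
    response.headers.set("Content-Type", "text/plain")
    return "token: " + token

A test can then fetch echo.py?token=foo and assert on the response
body.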

If you have any questions, feel free to ask me.

== How do I write tests that require non-web-exposed APIs? ==

One of the disadvantages of cross-browser testing is that you are
limited to APIs that work in multiple browsers. This means that tests
in web-platform-tests can't use e.g. SpecialPowers. For anything
requiring such APIs you will still have to write a mochitest, as today.

In the future we plan to integrate WebDriver support into
web-platform-tests which will make some privileged operations, and
simulation of user interaction with the content area, possible.

== You didn't answer my question! ==

If you have any further questions I'm very happy to answer them,
either here, by email, or on IRC (#ateam or #testing on
irc.mozilla.org).

[1] https://github.com/w3c/web-platform-tests/
[2] https://hg.mozilla.org/mozilla-central/file/tip/testing/web-platform/README.md (or formatted: https://github.com/mozilla/gecko-dev/blob/master/testing/web-platform/README.md)
[3] https://github.com/mozilla/gecko-dev/blob/master/testing/web-platform/tests/README.md
[4] http://hoppipolla.co.uk/web-platform-tests/gecko_failures_2014-08-28.html
[5] https://code.google.com/p/chromium/codesearch#chromium/src/third_party/WebKit/LayoutTests/w3c/web-platform-tests/
[6] https://travis-ci.org/servo/servo/ (see the AFTER_BUILD=wpt jobs)
[7] http://wptrunner.readthedocs.org/en/latest/
[8] http://testthewebforward.org
[9] http://jgraham.github.io/docs/
[10] http://wptserve.readthedocs.org/en/latest/
James
9/5/2014 3:55:13 PM

On 9/5/14, 11:55 AM, James Graham wrote:
> The web-platform-tests testsuite has just landed on
> Mozilla-Central.

This is fantastic.  Thank you!

Does this obsolete our existing "imptests" tests, or is this a set of 
tests disjoint from those?

-Boris
Boris
9/5/2014 1:01:01 AM
On 05/09/14 18:00, Boris Zbarsky wrote:
> On 9/5/14, 11:55 AM, James Graham wrote:
>> The web-platform-tests testsuite has just landed on
>> Mozilla-Central.
> 
> This is fantastic.  Thank you!
> 
> Does this obsolete our existing "imptests" tests, or is this a set of
> tests disjoint from those?

I think Ms2ger has a better answer here, but I believe it obsoletes most
of them, except a few that never got submitted to web-platform-tests
(the editing tests are in that class, because the spec effort sort of died).

I've filed bug 1063632 to remove the imptests once we have better
platform coverage from web-platform-tests.

James
9/5/2014 5:23:47 PM
On 9/5/14, 11:55 AM, James Graham wrote:
> Instructions for performing the updates are in the README file
> [2]. There is tooling available to help in the update process.

Is there a way to document the spec or test suite bugs in the 
expectations file?  e.g. if I want to add an "expected: FAIL" and link 
to https://github.com/w3c/web-platform-tests/issues/1223 as an 
explanation for why exactly we're failing it.

-Boris
Boris
9/6/2014 1:01:01 AM
On Fri, Sep 5, 2014 at 8:23 PM, James Graham <james@hoppipolla.co.uk> wrote:
> I think Ms2ger has a better answer here, but I believe it obsoletes most
> of them, except a few that never got submitted to web-platform-tests
> (the editing tests are in that class, because the spec effort sort of died).

FWIW, the editing tests are still very useful for regression-testing.
They often catch unintended behavior changes when changing editor
code, just because they test quite a lot of code paths.  I think it
would be very valuable for web-platform-tests to have a section for
"tests we don't know are even vaguely correct, so don't try to use
them to improve your conformance, but they're useful for regression
testing anyway."  That might not help interop, but it will help QoI,
and it makes sense for browsers to share in that department as well.

(This is leaving aside the fact that the editing tests are
pathologically large and should be chopped up into a lot of smaller
files.  I have a vague idea to do this someday.  They would also
benefit from only being run by the Mozilla testing framework on
commits that actually touch editor/, because it's very unlikely that
they would be affected by code changes elsewhere that don't fail other
tests as well.  I think.)
Aryeh
9/7/2014 11:34:53 AM
On 06/09/14 05:05, Boris Zbarsky wrote:
> On 9/5/14, 11:55 AM, James Graham wrote:
>> Instructions for performing the updates are in the README file
>> [2]. There is tooling available to help in the update process.
> 
> Is there a way to document the spec or test suite bugs in the
> expectations file?  e.g. if I want to add an "expected: FAIL" and link
> to https://github.com/w3c/web-platform-tests/issues/1223 as an
> explanation for why exactly we're failing it.

There isn't anything at the moment, but it seems like a good idea to
invent something. The easiest thing would be a new key-value pair like

expected-reason: Some reason string
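
For concreteness, in an expectation file that might look like
(hypothetical test name):

[example-test.html]
  expected: FAIL
  expected-reason: https://github.com/w3c/web-platform-tests/issues/1223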

Do you have a preferred syntax here?
James
9/7/2014 2:21:34 PM
On 07/09/14 12:34, Aryeh Gregor wrote:
> On Fri, Sep 5, 2014 at 8:23 PM, James Graham <james@hoppipolla.co.uk> wrote:
>> I think Ms2ger has a better answer here, but I believe it obsoletes most
>> of them, except a few that never got submitted to web-platform-tests
>> (the editing tests are in that class, because the spec effort sort of died).
> 
> FWIW, the editing tests are still very useful for regression-testing.
> They often catch unintended behavior changes when changing editor
> code, just because they test quite a lot of code paths.  I think it
> would be very valuable for web-platform-tests to have a section for
> "tests we don't know are even vaguely correct, so don't try to use
> them to improve your conformance, but they're useful for regression
> testing anyway."  That might not help interop, but it will help QoI,
> and it makes sense for browsers to share in that department as well.

Well, it would also make sense to have interop for editing of course :)
I would certainly be in favour of someone pushing those tests through
review so that they can land in web-platform-tests, but historically we
haven't been that successful in getting review for large submissions
where no one is actively working on the code (e.g. [1] which has a lot
of tests, mostly written by me, for document loading). I don't really
know how to fix that other than say "it's OK to land stuff no one has
looked at because we can probably sort it out post-hoc", which has some
appeal, but also substantial downsides if no one is making even basic
checks for correct usage of the server or for patterns that are known to
result in unstable tests.

> (This is leaving aside the fact that the editing tests are
> pathologically large and should be chopped up into a lot of smaller
> files.  I have a vague idea to do this someday.  They would also
> benefit from only being run by the Mozilla testing framework on
> commits that actually touch editor/, because it's very unlikely that
> they would be affected by code changes elsewhere that don't fail other
> tests as well.  I think.)

In the long term I'm hopeful that we can end up with a much smarter
testing system that uses a combination of human input and recorded data
to prioritise the tests most likely to break for a given commit. For
example, a push to Try that only changes code in editor/ might, with
default settings, first run the editing tests and then, once those
passed, run some additional tests from, say, dom/, or whatever else
turns out to be likely to regress for broken patches in the changed
code. On inbound
a somewhat larger set of tests would run, and then on m-c we'd do a full
testrun.

Obviously we're a long way from that at the moment, but it's a
reasonable thing to aim for and I think that some of the pieces are
starting to come together.

[1] https://critic.hoppipolla.co.uk/r/282

James
9/7/2014 2:49:38 PM
On 9/7/14, 10:21 AM, James Graham wrote:
> There isn't anything at the moment, but it seems like a good idea to
> invent something. The easiest thing would be a new key-value pair like
>
> expected-reason: Some reason string
>
> Do you have a preferred syntax here?

Nope.  Pretty much anything works for me.

-Boris

Boris
9/8/2014 1:01:01 AM
On Sun, Sep 7, 2014 at 5:49 PM, James Graham <james@hoppipolla.co.uk> wrote:
> Well, it would also make sense to have interop for editing of course :)

Not a single major browser has significant resources invested in
working on their editing code.  Until that changes, nothing much is
going to happen.

> I would certainly be in favour of someone pushing those tests through
> review so that they can land in web-platform-tests, but historically we
> haven't been that successful in getting review for large submissions
> where no one is actively working on the code (e.g. [1] which has a lot
> of tests, mostly written by me, for document loading). I don't really
> know how to fix that other than say "it's OK to land stuff no one has
> looked at because we can probably sort it out post-hoc", which has some
> appeal, but also substantial downsides if no one is making even basic
> checks for correct usage of the server or for patterns that are known to
> result in unstable tests.

I think unreviewed tests should still be run by browsers' automated
testing framework (obviously unless they take too long, are
unreliable, etc.).  They just shouldn't be counted toward any claims
of conformance.  Even if the expected values are entirely silly, which
they probably aren't, they'll still help regression testing.  There's
already an external set of tests that Mozilla runs (browserscope)
which I think is wrong in a number of its expected results, but it's
still been useful for catching regressions in my experience.
Aryeh
9/8/2014 10:47:02 AM
On 2014-09-08, 6:47 AM, Aryeh Gregor wrote:
> On Sun, Sep 7, 2014 at 5:49 PM, James Graham <james@hoppipolla.co.uk> wrote:
>> Well, it would also make sense to have interop for editing of course :)
>
> Not a single major browser has significant resources invested in
> working on their editing code.  Until that changes, nothing much is
> going to happen.
>
>> I would certainly be in favour of someone pushing those tests through
>> review so that they can land in web-platform-tests, but historically we
>> haven't been that successful in getting review for large submissions
>> where no one is actively working on the code (e.g. [1] which has a lot
>> of tests, mostly written by me, for document loading). I don't really
>> know how to fix that other than say "it's OK to land stuff no one has
>> looked at because we can probably sort it out post-hoc", which has some
>> appeal, but also substantial downsides if no one is making even basic
>> checks for correct usage of the server or for patterns that are known to
>> result in unstable tests.
>
> I think unreviewed tests should still be run by browsers' automated
> testing framework (obviously unless they take too long, are
> unreliable, etc.).  They just shouldn't be counted toward any claims
> of conformance.  Even if the expected values are entirely silly, which
> they probably aren't, they'll still help regression testing.  There's
> already an external set of tests that Mozilla runs (browserscope)
> which I think is wrong in a number of its expected results, but it's
> still been useful for catching regressions in my experience.

Yeah, I second this.  There is a lot of value in having tests that 
detect the changes in Gecko's behavior.

Ehsan
9/8/2014 6:42:51 PM
On 08/09/14 19:42, Ehsan Akhgari wrote:
>> I think unreviewed tests should still be run by browsers' automated
>> testing framework (obviously unless they take too long, are
>> unreliable, etc.).  They just shouldn't be counted toward any claims
>> of conformance.  Even if the expected values are entirely silly, which
>> they probably aren't, they'll still help regression testing.  There's
>> already an external set of tests that Mozilla runs (browserscope)
>> which I think is wrong in a number of its expected results, but it's
>> still been useful for catching regressions in my experience.
> 
> Yeah, I second this.  There is a lot of value in having tests that
> detect the changes in Gecko's behavior.

Yes, I agree too. One option I had considered was making a suite
"web-platform-tests-mozilla" for things that we can't push upstream e.g.
because the APIs aren't (yet) undergoing meaningful standardisation.
Putting the editing tests into this bucket might make some sense.

James
9/9/2014 12:44:39 PM
On 2014-09-09, 8:44 AM, James Graham wrote:
> On 08/09/14 19:42, Ehsan Akhgari wrote:
>>> I think unreviewed tests should still be run by browsers' automated
>>> testing framework (obviously unless they take too long, are
>>> unreliable, etc.).  They just shouldn't be counted toward any claims
>>> of conformance.  Even if the expected values are entirely silly, which
>>> they probably aren't, they'll still help regression testing.  There's
>>> already an external set of tests that Mozilla runs (browserscope)
>>> which I think is wrong in a number of its expected results, but it's
>>> still been useful for catching regressions in my experience.
>>
>> Yeah, I second this.  There is a lot of value in having tests that
>> detect the changes in Gecko's behavior.
>
> Yes, I agree too. One option I had considered was making a suite
> "web-platform-tests-mozilla" for things that we can't push upstream e.g.
> because the APIs aren't (yet) undergoing meaningful standardisation.
> Putting the editing tests into this bucket might make some sense.

That sounds good to me.  As long as we recognize and support this use 
case, I'd be happy to leave the exact solution to you.  :-)

Ehsan
9/9/2014 3:20:35 PM
On Tue, Sep 9, 2014 at 3:44 PM, James Graham <james@hoppipolla.co.uk> wrote:
> Yes, I agree too. One option I had considered was making a suite
> "web-platform-tests-mozilla" for things that we can't push upstream e.g.
> because the APIs aren't (yet) undergoing meaningful standardisation.
> Putting the editing tests into this bucket might make some sense.

That definitely sounds like a great idea, but I think it would be even
better if upstream had a place for these tests, so we could share them
with other engines (and hopefully they would reciprocate).  Anyone
who's just interested in conformance test figures would be free not to
run these extra tests, of course.  I don't see why upstream would mind
hosting these tests.

In the longer term, I think it would be very interesting if all simple
mochitests were written in a shareable format, and if other engines
did similarly.  I imagine we'd find lots of interesting regressions if
we ran a large chunk of WebKit/Blink tests as part of our regular test
suite, even if many of the tests will expect the wrong results from
our perspective.
Aryeh
9/10/2014 6:32:58 PM
On 10/09/14 19:32, Aryeh Gregor wrote:
> On Tue, Sep 9, 2014 at 3:44 PM, James Graham <james@hoppipolla.co.uk> wrote:
>> Yes, I agree too. One option I had considered was making a suite
>> "web-platform-tests-mozilla" for things that we can't push upstream e.g.
>> because the APIs aren't (yet) undergoing meaningful standardisation.
>> Putting the editing tests into this bucket might make some sense.
> 
> That definitely sounds like a great idea, but I think it would be even
> better if upstream had a place for these tests, so we could share them
> with other engines (and hopefully they would reciprocate).  Anyone
> who's just interested in conformance test figures would be free not to
> run these extra tests, of course.  I don't see why upstream would mind
> hosting these tests.

I tend to agree, but I suggest that you bring this up on public-test-infra.

> In the longer term, I think it would be very interesting if all simple
> mochitests were written in a shareable format, and if other engines
> did similarly.  I imagine we'd find lots of interesting regressions if
> we ran a large chunk of WebKit/Blink tests as part of our regular test
> suite, even if many of the tests will expect the wrong results from
> our perspective.

Yes, insofar as "written in a shareable format" means "written in one of
the formats that is accepted into wpt". We should strive to make sharing
our tests just as fundamental a part of our culture as working with open
standards is today.
James
9/12/2014 5:17:19 PM