pdf.js/test/test.py


import json, platform, os, shutil, sys, subprocess, tempfile, threading, time, urllib, urllib2
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
import SocketServer
from optparse import OptionParser
from urlparse import urlparse, parse_qs
USAGE_EXAMPLE = "%prog"
# The local web server uses the git repo as the document root.
DOC_ROOT = os.path.abspath(os.path.join(os.path.dirname(__file__),".."))
ANAL = True
DEFAULT_MANIFEST_FILE = 'test_manifest.json'
EQLOG_FILE = 'eq.log'
REFDIR = 'ref'
TMPDIR = 'tmp'
VERBOSE = False
SERVER_HOST = "localhost"

class TestOptions(OptionParser):
    def __init__(self, **kwargs):
        OptionParser.__init__(self, **kwargs)
        self.add_option("-m", "--masterMode", action="store_true", dest="masterMode",
                        help="Run the script in master mode.", default=False)
        self.add_option("--manifestFile", action="store", type="string", dest="manifestFile",
                        help="A JSON file in the form of test_manifest.json (the default).")
        self.add_option("-b", "--browser", action="store", type="string", dest="browser",
                        help="The path to a single browser (right now, only Firefox is supported).")
        self.add_option("--browserManifestFile", action="store", type="string",
                        dest="browserManifestFile",
                        help="A JSON file in the form of those found in resources/browser_manifests")
        self.add_option("--reftest", action="store_true", dest="reftest",
                        help="Automatically start reftest showing comparison test failures, if there are any.",
                        default=False)
        self.add_option("--port", action="store", dest="port", type="int",
                        help="The port the HTTP server should listen on.", default=8080)
        self.set_usage(USAGE_EXAMPLE)

    def verifyOptions(self, options):
        if options.masterMode and options.manifestFile:
            self.error("--masterMode and --manifestFile must not be specified at the same time.")
        if not options.manifestFile:
            options.manifestFile = DEFAULT_MANIFEST_FILE
        if options.browser and options.browserManifestFile:
            print "Warning: ignoring browser argument since manifest file was also supplied"
        if not options.browser and not options.browserManifestFile:
            print "Starting server on port %s." % options.port
        return options

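# Example invocations (a sketch only; the browser-manifest filename below is
# hypothetical -- point it at whichever JSON file you keep under
# resources/browser_manifests):
#
#   python test.py --browserManifestFile=resources/browser_manifests/browser_manifest.json
#   python test.py -b /path/to/firefox --port 8081
#   python test.py -m --browserManifestFile=resources/browser_manifests/browser_manifest.json
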
def prompt(question):
    '''Return True iff the user answered "yes" to |question|.'''
    inp = raw_input(question + ' [yes/no] > ')
    return inp == 'yes'

MIMEs = {
    '.css': 'text/css',
    '.html': 'text/html',
    '.js': 'application/javascript',
    '.json': 'application/json',
    '.svg': 'image/svg+xml',
    '.pdf': 'application/pdf',
    '.xhtml': 'application/xhtml+xml',
    '.ico': 'image/x-icon',
    '.log': 'text/plain'
}

class State:
    browsers = [ ]
    manifest = { }
    taskResults = { }
    remaining = 0
    results = { }
    done = False
    numErrors = 0
    numEqFailures = 0
    numEqNoSnapshot = 0
    numFBFFailures = 0
    numLoadFailures = 0
    eqLog = None

class Result:
    def __init__(self, snapshot, failure, page):
        self.snapshot = snapshot
        self.failure = failure
        self.page = page

class TestServer(SocketServer.TCPServer):
    allow_reuse_address = True

class PDFTestHandler(BaseHTTPRequestHandler):
    # Disable annoying noise by default
    def log_request(self, code=0, size=0):
        if VERBOSE:
            BaseHTTPRequestHandler.log_request(self, code, size)

    def sendFile(self, path, ext):
        self.send_response(200)
        self.send_header("Content-Type", MIMEs[ext])
        self.send_header("Content-Length", os.path.getsize(path))
        self.end_headers()
        with open(path, "rb") as f:
            self.wfile.write(f.read())

    def sendIndex(self, path, query):
        if not path.endswith("/"):
            # we need trailing slash
            self.send_response(301)
            redirectLocation = path + "/"
            if query:
                redirectLocation += "?" + query
            self.send_header("Location", redirectLocation)
            self.end_headers()
            return

        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        if query == "frame":
            self.wfile.write("<html><frameset cols=*,200><frame name=pdf>" +
                             "<frame src='" + path + "'></frameset></html>")
            return

        location = os.path.abspath(os.path.realpath(DOC_ROOT + os.sep + path))
        self.wfile.write("<html><body><h1>PDFs of " + path + "</h1>\n")
        for filename in os.listdir(location):
            if filename.lower().endswith('.pdf'):
                self.wfile.write("<a href='/web/viewer.html?file=" + path + filename + "' target=pdf>" +
                                 filename + "</a><br>\n")
        self.wfile.write("</body></html>")

    def do_GET(self):
        url = urlparse(self.path)
        # Ignore query string
        path, _ = url.path, url.query
        path = os.path.abspath(os.path.realpath(DOC_ROOT + os.sep + path))
        prefix = os.path.commonprefix(( path, DOC_ROOT ))
        _, ext = os.path.splitext(path.lower())

        if url.path == "/favicon.ico":
            self.sendFile(os.path.join(DOC_ROOT, "test", "resources", "favicon.ico"), ext)
            return

        if os.path.isdir(path):
            self.sendIndex(url.path, url.query)
            return

        if not (prefix == DOC_ROOT
                and os.path.isfile(path)
                and ext in MIMEs):
            print path
            self.send_error(404)
            return

        if 'Range' in self.headers:
            # TODO for fetch-as-you-go
            self.send_error(501)
            return

        self.sendFile(path, ext)

    def do_POST(self):
        numBytes = int(self.headers['Content-Length'])

        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.end_headers()

        url = urlparse(self.path)
        if url.path == "/tellMeToQuit":
            tellAppToQuit(url.path, url.query)
            return

        result = json.loads(self.rfile.read(numBytes))
        browser, id, failure, round, page, snapshot = (result['browser'], result['id'],
                                                       result['failure'], result['round'],
                                                       result['page'], result['snapshot'])

        taskResults = State.taskResults[browser][id]
        taskResults[round].append(Result(snapshot, failure, page))

        def isTaskDone():
            numPages = result["numPages"]
            rounds = State.manifest[id]["rounds"]
            for round in range(0, rounds):
                if len(taskResults[round]) < numPages:
                    return False
            return True

        if isTaskDone():
            # sort the results since they sometimes come in out of order
            for results in taskResults:
                results.sort(key=lambda result: result.page)
            check(State.manifest[id], taskResults, browser,
                  self.server.masterMode)
            # Please oh please GC this ...
            del State.taskResults[browser][id]
            State.remaining -= 1

        State.done = (0 == State.remaining)

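# For reference, the result record a test slave POSTs back is JSON shaped roughly
# like the following; the values are illustrative and inferred only from the keys
# do_POST reads above:
#
#   { "browser": "firefox", "id": "some-task-id", "failure": "", "round": 0,
#     "page": 1, "snapshot": "data:image/png;base64,...", "numPages": 5 }
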
# Applescript hack to quit Chrome on Mac
def tellAppToQuit(path, query):
    if platform.system() != "Darwin":
        return
    d = parse_qs(query)
    path = d['path'][0]
    cmd = """osascript<<END
tell application "%s"
quit
end tell
END""" % path
    os.system(cmd)

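# A sketch of the request that reaches the hack above; the application named in the
# "path" query parameter is hypothetical and is whatever the test slave sends:
#
#   POST /tellMeToQuit?path=Google%20Chrome
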
class BaseBrowserCommand(object):
    def __init__(self, browserRecord):
        self.name = browserRecord["name"]
        self.path = browserRecord["path"]
        self.tempDir = None
        self.process = None

        if platform.system() == "Darwin" and (self.path.endswith(".app") or self.path.endswith(".app/")):
            self._fixupMacPath()

        if not os.path.exists(self.path):
            raise Exception("Path to browser '%s' does not exist." % self.path)

    def setup(self):
        self.tempDir = tempfile.mkdtemp()
        self.profileDir = os.path.join(self.tempDir, "profile")

    def teardown(self):
        # If the browser is still running, wait up to ten seconds for it to quit
        if self.process and self.process.poll() is None:
            checks = 0
            while self.process.poll() is None and checks < 20:
                checks += 1
                time.sleep(.5)
            # If it's still not dead, try to kill it
            if self.process.poll() is None:
                print "Process %s is still running. Killing." % self.name
                self.process.kill()

        if self.tempDir is not None and os.path.exists(self.tempDir):
            shutil.rmtree(self.tempDir)

    def start(self, url):
        raise Exception("Can't start BaseBrowserCommand")

class FirefoxBrowserCommand(BaseBrowserCommand):
    def _fixupMacPath(self):
        self.path = os.path.join(self.path, "Contents", "MacOS", "firefox-bin")

    def setup(self):
        super(FirefoxBrowserCommand, self).setup()
        shutil.copytree(os.path.join(DOC_ROOT, "test", "resources", "firefox"),
                        self.profileDir)

    def start(self, url):
        cmds = [self.path]
        if platform.system() == "Darwin":
            cmds.append("-foreground")
        cmds.extend(["-no-remote", "-profile", self.profileDir, url])
        self.process = subprocess.Popen(cmds)

class ChromeBrowserCommand(BaseBrowserCommand):
    def _fixupMacPath(self):
        self.path = os.path.join(self.path, "Contents", "MacOS", "Google Chrome")

    def start(self, url):
        cmds = [self.path]
        cmds.extend(["--user-data-dir=%s" % self.profileDir,
                     "--no-first-run", "--disable-sync", url])
        self.process = subprocess.Popen(cmds)

def makeBrowserCommand(browser):
    path = browser["path"].lower()
    name = browser["name"]
    if name is not None:
        name = name.lower()

    types = {"firefox": FirefoxBrowserCommand,
             "chrome": ChromeBrowserCommand }
    command = None
    for key in types.keys():
        if (name and name.find(key) > -1) or path.find(key) > -1:
            command = types[key](browser)
            command.name = command.name or key
            break

    if command is None:
        raise Exception("Unrecognized browser: %s" % browser)

    return command

def makeBrowserCommands(browserManifestFile):
    with open(browserManifestFile) as bmf:
        browsers = [makeBrowserCommand(browser) for browser in json.load(bmf)]
    return browsers

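# makeBrowserCommands() expects the browser manifest to be a JSON array of records
# carrying "name" and "path" keys. A hypothetical sketch (adjust the paths for your
# machine):
#
#   [
#     { "name": "firefox", "path": "/usr/bin/firefox" },
#     { "name": "chrome",  "path": "/Applications/Google Chrome.app" }
#   ]
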
def downloadLinkedPDFs(manifestList):
    for item in manifestList:
        f, isLink = item['file'], item.get('link', False)
        if isLink and not os.access(f, os.R_OK):
            linkFile = open(f + '.link')
            link = linkFile.read()
            linkFile.close()

            sys.stdout.write('Downloading ' + link + ' to ' + f + ' ...')
            sys.stdout.flush()
            response = urllib2.urlopen(link)

            with open(f, 'wb') as out:
                out.write(response.read())

            print 'done'

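# downloadLinkedPDFs() assumes a manifest entry with "link": true names a PDF that is
# not kept in the repo; the URL to fetch it from lives in a sibling "<file>.link" text
# file. A hypothetical sketch of such an entry (other fields, e.g. "id" and "rounds",
# are read elsewhere in the harness):
#
#   { "id": "some-test", "file": "pdfs/some-test.pdf", "link": true, "rounds": 1 }
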
def setUp(options):
Initial import of first test harness The harness (test.py) operates as follows. First it locates executable browsers (or symlinks or scripts) named "[browser][version]", e.g. "firefox4". It then launches the located browsers and asks them to load the file test_slave.html. At the same time, test.py sets up an HTTP server on localhost:8080 (there's a race condition here currently ;). After test_slave loads in the browser(s), it fetches the task manifest (test_manifest.json). The entries in the manifest specify which PDF to load and how many times to cycle through page rendering. This will probably evolve over time. test_slave then performs the requested tasks and POSTs the results back to test.py, which saves them. When all the results of for a task are in, test.py checks them. There are three types of tests currently. "==" tests compare the rendering of a PDF against a master copy. This is not yet implemented because setting up a master copy is complicated. "fbf" tests render all a PDF's pages, then go back to page 1 and render all pages a second time. The renderings from the first round must match the ones from the second round. "load" tests just check that a PDF's pages load without errors. Currently the test harness will only launch a "firefox4" target. This can be a bash script in your pdf.js checkout, pdf.js/firefox4, something like the following #!/bin/bash dist="/path/to/firefox4/installation" profile=`mktemp -dt 'pdf.js-test-ff-profile-XXXXXXXXXX'` $dist/firefox -no-remote -profile $profile $* rm -rf $profile (Yes, this script doesn't clean up properly on early termination.) It's possible to run the tests in a normal browsing session, but that might be annoying. With that set up, run the harness like so python test.py If all goes well, you'll see all "TEST-PASS" messages printed to stdout. If something goes wrong, you'll see "TEST-UNEXPECTED-FAIL" printed to stdout.
2011-06-19 10:09:21 +09:00
# Only serve files from a pdf.js clone
assert not ANAL or os.path.isfile('../pdf.js') and os.path.isdir('../.git')
Initial import of first test harness The harness (test.py) operates as follows. First it locates executable browsers (or symlinks or scripts) named "[browser][version]", e.g. "firefox4". It then launches the located browsers and asks them to load the file test_slave.html. At the same time, test.py sets up an HTTP server on localhost:8080 (there's a race condition here currently ;). After test_slave loads in the browser(s), it fetches the task manifest (test_manifest.json). The entries in the manifest specify which PDF to load and how many times to cycle through page rendering. This will probably evolve over time. test_slave then performs the requested tasks and POSTs the results back to test.py, which saves them. When all the results of for a task are in, test.py checks them. There are three types of tests currently. "==" tests compare the rendering of a PDF against a master copy. This is not yet implemented because setting up a master copy is complicated. "fbf" tests render all a PDF's pages, then go back to page 1 and render all pages a second time. The renderings from the first round must match the ones from the second round. "load" tests just check that a PDF's pages load without errors. Currently the test harness will only launch a "firefox4" target. This can be a bash script in your pdf.js checkout, pdf.js/firefox4, something like the following #!/bin/bash dist="/path/to/firefox4/installation" profile=`mktemp -dt 'pdf.js-test-ff-profile-XXXXXXXXXX'` $dist/firefox -no-remote -profile $profile $* rm -rf $profile (Yes, this script doesn't clean up properly on early termination.) It's possible to run the tests in a normal browsing session, but that might be annoying. With that set up, run the harness like so python test.py If all goes well, you'll see all "TEST-PASS" messages printed to stdout. If something goes wrong, you'll see "TEST-UNEXPECTED-FAIL" printed to stdout.
2011-06-19 10:09:21 +09:00
if options.masterMode and os.path.isdir(TMPDIR):
print 'Temporary snapshot dir tmp/ is still around.'
print 'tmp/ can be removed if it has nothing you need.'
if prompt('SHOULD THIS SCRIPT REMOVE tmp/? THINK CAREFULLY'):
subprocess.call(( 'rm', '-rf', 'tmp' ))
assert not os.path.isdir(TMPDIR)
testBrowsers = []
if options.browserManifestFile:
testBrowsers = makeBrowserCommands(options.browserManifestFile)
elif options.browser:
testBrowsers = [makeBrowserCommand({"path":options.browser, "name":None})]
if options.browserManifestFile or options.browser:
assert len(testBrowsers) > 0
with open(options.manifestFile) as mf:
manifestList = json.load(mf)
downloadLinkedPDFs(manifestList)
for b in testBrowsers:
State.taskResults[b.name] = { }
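        # Pre-build the result slots for this browser: for each manifest
        # task, one empty list per requested round, keyed by task id in
        # State.taskResults.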
for item in manifestList:
id, rounds = item['id'], int(item['rounds'])
State.manifest[id] = item
taskResults = [ ]
for r in xrange(rounds):
taskResults.append([ ])
State.taskResults[b.name][id] = taskResults
State.remaining = len(testBrowsers) * len(manifestList)
return testBrowsers
def startBrowsers(browsers, options):
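    # Launch each browser against the test slave page, passing the
    # browser name, the task manifest file and the browser path in the
    # query string.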
for b in browsers:
b.setup()
print 'Launching', b.name
host = 'http://%s:%s' % (SERVER_HOST, options.port)
path = '/test/test_slave.html?'
qs = 'browser='+ urllib.quote(b.name) +'&manifestFile='+ urllib.quote(options.manifestFile)
        qs += '&path=' + urllib.quote(b.path)
b.start(host + path + qs)
def teardownBrowsers(browsers):
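    # Best-effort cleanup: a failure while tearing down one browser
    # (e.g. a temp profile that cannot be removed) should not prevent
    # the remaining browsers from being cleaned up.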
for b in browsers:
try:
b.teardown()
except:
print "Error cleaning up after browser at ", b.path
print "Temp dir was ", b.tempDir
print "Error:", sys.exc_info()[0]
def check(task, results, browser, masterMode):
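    # First scan every round/page result for a reported failure; if any
    # page failed, record it and bail out.  Otherwise dispatch to the
    # type-specific checker (eq, fbf or load).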
failed = False
for r in xrange(len(results)):
pageResults = results[r]
for p in xrange(len(pageResults)):
pageResult = pageResults[p]
if pageResult is None:
continue
failure = pageResult.failure
if failure:
failed = True
State.numErrors += 1
print 'TEST-UNEXPECTED-FAIL | test failed', task['id'], '| in', browser, '| page', p + 1, 'round', r, '|', failure
if failed:
return
kind = task['type']
if 'eq' == kind:
checkEq(task, results, browser, masterMode)
elif 'fbf' == kind:
checkFBF(task, results, browser)
elif 'load' == kind:
checkLoad(task, results, browser)
else:
        assert 0, 'Unknown test type'
def checkEq(task, results, browser, masterMode):
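    # Compare each page snapshot from round 0 against the stored
    # reference under REFDIR/<platform>/<browser>/<task id>/<page>.
    # Mismatches are logged in Mozilla reftest format so that
    # reftest-analyzer can be reused; in master mode, new or differing
    # snapshots are also written under TMPDIR for a later sync to ref/.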
pfx = os.path.join(REFDIR, sys.platform, browser, task['id'])
results = results[0]
taskId = task['id']
passed = True
for page in xrange(len(results)):
snapshot = results[page].snapshot
ref = None
eq = True
path = os.path.join(pfx, str(page + 1))
if not os.access(path, os.R_OK):
print 'WARNING: no reference snapshot', path
State.numEqNoSnapshot += 1
else:
f = open(path)
ref = f.read()
f.close()
eq = (ref == snapshot)
if not eq:
print 'TEST-UNEXPECTED-FAIL | eq', taskId, '| in', browser, '| rendering of page', page + 1, '!= reference rendering'
if not State.eqLog:
State.eqLog = open(EQLOG_FILE, 'w')
eqLog = State.eqLog
# NB: this follows the format of Mozilla reftest
# output so that we can reuse its reftest-analyzer
# script
print >>eqLog, 'REFTEST TEST-UNEXPECTED-FAIL |', browser +'-'+ taskId +'-page'+ str(page + 1), '| image comparison (==)'
print >>eqLog, 'REFTEST IMAGE 1 (TEST):', snapshot
print >>eqLog, 'REFTEST IMAGE 2 (REFERENCE):', ref
passed = False
State.numEqFailures += 1
if masterMode and (ref is None or not eq):
tmpTaskDir = os.path.join(TMPDIR, sys.platform, browser, task['id'])
try:
os.makedirs(tmpTaskDir)
except OSError, e:
print >>sys.stderr, 'Creating', tmpTaskDir, 'failed!'
of = open(os.path.join(tmpTaskDir, str(page + 1)), 'w')
of.write(snapshot)
of.close()
if passed:
print 'TEST-PASS | eq test', task['id'], '| in', browser
def checkFBF(task, results, browser):
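    # Forward-back-forward check: the snapshot of every page rendered in
    # the first round must be identical to the one from the second round.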
round0, round1 = results[0], results[1]
assert len(round0) == len(round1)
passed = True
for page in xrange(len(round1)):
r0Page, r1Page = round0[page], round1[page]
if r0Page is None:
break
if r0Page.snapshot != r1Page.snapshot:
print 'TEST-UNEXPECTED-FAIL | forward-back-forward test', task['id'], '| in', browser, '| first rendering of page', page + 1, '!= second'
passed = False
State.numFBFFailures += 1
if passed:
print 'TEST-PASS | forward-back-forward test', task['id'], '| in', browser
def checkLoad(task, results, browser):
# Load just checks for absence of failure, so if we got here the
# test has passed
print 'TEST-PASS | load test', task['id'], '| in', browser
def processResults():
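    # Summarize the run from the counters accumulated in State: page
    # errors, eq mismatches and forward-back-forward mismatches.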
print ''
numFatalFailures = (State.numErrors + State.numFBFFailures)
if 0 == State.numEqFailures and 0 == numFatalFailures:
print 'All tests passed.'
else:
print 'OHNOES! Some tests failed!'
if 0 < State.numErrors:
print ' errors:', State.numErrors
if 0 < State.numEqFailures:
print ' different ref/snapshot:', State.numEqFailures
if 0 < State.numFBFFailures:
print ' different first/second rendering:', State.numFBFFailures
def maybeUpdateRefImages(options, browser):
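    # In master mode, offer to rsync the snapshots collected in tmp/
    # over the master copies in ref/, but only when no non-eq test
    # failed; when options.reftest is set, open the analyzer first so
    # the differences can be inspected before answering the prompt.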
if options.masterMode and (0 < State.numEqFailures or 0 < State.numEqNoSnapshot):
print "Some eq tests failed or didn't have snapshots."
print 'Checking to see if master references can be updated...'
numFatalFailures = (State.numErrors + State.numFBFFailures)
if 0 < numFatalFailures:
print ' No. Some non-eq tests failed.'
else:
print ' Yes! The references in tmp/ can be synced with ref/.'
if options.reftest:
startReftest(browser, options)
if not prompt('Would you like to update the master copy in ref/?'):
print ' OK, not updating.'
else:
sys.stdout.write(' Updating ... ')
# XXX unclear what to do on errors here ...
# NB: do *NOT* pass --delete to rsync. That breaks this
# entire scheme.
subprocess.check_call(( 'rsync', '-arv', 'tmp/', 'ref/' ))
print 'done'
def startReftest(browser, options):
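    # Open reftest-analyzer in the given browser, pointed at the eq
    # failure log (served as /test/eq.log), and block until the browser
    # process exits.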
url = "http://%s:%s" % (SERVER_HOST, options.port)
url += "/test/resources/reftest-analyzer.xhtml"
url += "#web=/test/eq.log"
try:
browser.setup()
browser.start(url)
print "Waiting for browser..."
browser.process.wait()
finally:
teardownBrowsers([browser])
print "Completed reftest usage."
def runTests(options, browsers):
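    # Drive a full run: start the browsers, poll until State.done is
    # set, report the results, and always tear the browsers down.
    # Master-mode reference updates and the optional reftest analyzer
    # run happen afterwards.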
t1 = time.time()
try:
startBrowsers(browsers, options)
while not State.done:
time.sleep(1)
processResults()
finally:
teardownBrowsers(browsers)
t2 = time.time()
print "Runtime was", int(t2 - t1), "seconds"
if options.masterMode:
maybeUpdateRefImages(options, browsers[0])
elif options.reftest and State.numEqFailures > 0:
print "\nStarting reftest harness to examine %d eq test failures." % State.numEqFailures
startReftest(browsers[0], options)
def main():
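    # Parse and verify the options, start the HTTP server on a daemon
    # thread, then either run the test suite or, when no browsers are
    # configured, just keep serving until interrupted.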
optionParser = TestOptions()
options, args = optionParser.parse_args()
options = optionParser.verifyOptions(options)
    if options is None:
sys.exit(1)
httpd = TestServer((SERVER_HOST, options.port), PDFTestHandler)
httpd.masterMode = options.masterMode
httpd_thread = threading.Thread(target=httpd.serve_forever)
httpd_thread.setDaemon(True)
httpd_thread.start()
browsers = setUp(options)
if len(browsers) > 0:
runTests(options, browsers)
else:
# just run the server
print "Running HTTP server. Press Ctrl-C to quit."
try:
while True:
time.sleep(1)
        except KeyboardInterrupt:
print "\nExiting."
if __name__ == '__main__':
main()