Since we really don't want to let a particular Webpack-mode dictate how we can/can't write code, let's add a note in the webpack-example README about minification instead.
A significant portion of the code-base has now been converted to use `let`/`const`, rather than `var`, hence it should be possible to simply enable the ESLint `no-var` rule globally.
This way we can ensure that new code won't accidentally use `var`, and it also removes the need to manually enable the rule in various folders.
Obviously it makes sense to continue the efforts to replace `var`, but that should probably happen on a file and/or folder basis.
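For example:
```
// Flagged by the `no-var` rule:
//   var pageNumber = 1;
// Preferred:
let pageNumber = 1; // or `const`, when the value is never re-assigned
```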
Please note that this patch excludes the following code:
- The `extensions/` folder, since that seemed easiest for now (and I don't know exactly what the support situation is for the Chromium-extension).
- The entire `external/` folder is ignored, since most of it is currently excluded from linting.
For the code that isn't simply imported from elsewhere (and is currently being ignored), we should probably (at some point) bring it up to the same linting/formatting standard as the rest of the code-base.
- Various files in the `test/` folder are ignored, as necessary, since the way that a lot of this code is loaded will require some care (or perhaps larger re-factoring) when removing `var` usage.
(This is something that I happened to notice while working on the previous patch.)
The code in the `examples/mobile-viewer/viewer.js` file is essentially copied from an older version of the default viewer, hence we can slightly simplify the `animationStarted` handling here.
The *main* purpose of this patch is to allow scripting to be used together with the viewer components; note the updated "simpleviewer"/"singlepageviewer" examples, in addition to the full default viewer.
Given how the scripting functionality is currently implemented in the default viewer, trying to re-use this with the standalone viewer components would be *very* hard and ideally you'd want it to work out-of-the-box.
For an initial implementation, in the default viewer, of the scripting functionality it probably made sense to simply dump all of the code in the `app.js` file, however that cannot be used with the viewer components.
To address this, the functionality is moved into a new `PDFScriptingManager` class which can thus be handled in the same way as all other viewer components (and e.g. be passed to the `BaseViewer`-implementations).
Obviously the scripting functionality needs quite a lot of data, during its initialization, and for the default viewer we want to maintain the current way of doing the lookups since that helps avoid a number of redundant API-calls.
To that end, the `PDFScriptingManager` implementation accepts (optional) factories/functions such that we can maintain the current behaviour for the default viewer. For the viewer components specifically, fallback code-paths are provided to ensure that scripting will "just work"[1].
Besides moving the viewer handling of the scripting code to its own file/class, this patch also takes the opportunity to re-factor the functionality into a number of helper methods to improve overall readability[2].
Note that it's definitely possible that the `PDFScriptingManager` class could be improved even further (e.g. for general re-use), since it's still heavily tailored to the default viewer use-case, however I believe that this patch is still a good step forward overall.
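Roughly, the updated "simpleviewer"/"singlepageviewer" examples now do something along these lines (the sandbox bundle path is illustrative, and some additional wiring is omitted):
```
const scriptingManager = new pdfjsViewer.PDFScriptingManager({
  eventBus,
  sandboxBundleSrc: "../../node_modules/pdfjs-dist/build/pdf.sandbox.js",
});

const pdfViewer = new pdfjsViewer.PDFViewer({
  container,
  eventBus,
  scriptingManager,
});
```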
---
[1] Obviously *all* the relevant document properties might not be available in the viewer components use-case (e.g. the various URLs), but most things should work just fine.
[2] The old `PDFViewerApplication._initializeJavaScript` method, where everything was simply inlined, has over time (in my opinion) become quite large and somewhat difficult to *easily* reason about.
At this point in time the standalone `acroforms` example is no longer needed, since both the `simpleviewer` and `singlepageviewer` examples already support AcroForms by default.
Moreover, the `acroforms` example no longer illustrates best practices, given its direct usage of `PDFPageView` for a multi-page document.
Once the next PDF.js release is made, the `webpack` example will no longer work since the non-translated builds now use ECMAScript features not supported by older `webpack`-versions.
The `debugger`-statement would only, potentially, make sense during development and we thus want to prevent it from being accidentally included when landing code.
The `alert`, `confirm`, and `prompt` functions should generally be avoided, with the few intended cases manually allowed.
Please find additional details about the ESLint rules at:
- https://eslint.org/docs/rules/no-debugger
- https://eslint.org/docs/rules/no-alert
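For example (the messages are made up):
```
// Flagged by `no-debugger` and `no-alert`, respectively:
debugger;
alert("Unable to print this document.");

// The few intended cases are allowed manually:
// eslint-disable-next-line no-alert
const printAll = confirm("Print the entire document?");
```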
This moves, and slightly simplifies, code that's currently residing in the unit-test utils into the actual library, such that it's bundled with `GENERIC`-builds and used in e.g. the API-code.
As an added bonus, this also brings out-of-the-box support for CMaps in e.g. the Node.js examples.
Using `require.resolve("worker-loader")` to check if `worker-loader` is installed causes webpack to include `worker-loader` in the output bundle, which is not the intended effect. Aside from increasing the bundle size unnecessarily, it also causes errors for webpack configs with targets that don't have node's built-in modules.
These errors can be fixed by configuring webpack `externals` to exclude `worker-loader`, but it's more difficult to figure out this solution than to figure out that `worker-loader` needs to be installed (even without this explicit error message).
To solve this, the explicit check for `worker-loader` has been removed. An alternative solution would be to use webpack's `resolveWeak`. Documentation has also been added in `examples/webpack` to help users.
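The documentation added in `examples/webpack` describes (roughly) the following kind of setup; the exact loader syntax depends on the webpack/`worker-loader` versions used, so treat this as a sketch:
```
import * as pdfjsLib from "pdfjs-dist";
// With `worker-loader` installed, the worker can be bundled explicitly:
import PdfjsWorker from "worker-loader!pdfjs-dist/build/pdf.worker.js";

pdfjsLib.GlobalWorkerOptions.workerPort = new PdfjsWorker();
```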
Currently some JPEG images are decoded by the built-in PDF.js decoder in `src/core/jpg.js`, while others attempt to use the browser JPEG decoder. This inconsistency seems unfortunate for a number of reasons:
- It adds, compared to the other image formats supported in the PDF specification, a fair amount of code/complexity to the image handling in the PDF.js library.
- The PDF specification supports JPEG images with features, e.g. certain ColorSpaces, that browsers are unable to decode natively. Hence, determining if a JPEG image can be decoded natively in the browser requires a non-trivial amount of parsing. In particular, we're parsing (part of) the raw JPEG data to extract certain marker data, and we also need to parse the ColorSpace for the JPEG image.
- While some JPEG images may, for all intents and purposes, appear to be natively supported, there are still cases where the browser may fail to decode some JPEG images. In order to support those cases, we've had to implement a fallback to the PDF.js JPEG decoder if there are any issues during the native decoding. This also means that it's no longer possible to simply send the JPEG image to the main-thread and continue parsing, but you now need to actually wait for the main-thread to indicate success/failure first.
In practice this means that there's a code-path where the worker-thread is forced to wait for the main-thread, while the reverse should *always* be the case.
- The native decoding, for anything except the *simplest* of JPEG images, results in increased peak memory usage because there's a handful of short-lived copies of the JPEG data (see PR 11707).
Furthermore this also leads to data being *parsed* on the main-thread, rather than the worker-thread, which you usually want to avoid for e.g. performance and UI-responsiveness reasons.
- Not all environments, e.g. Node.js, fully support native JPEG decoding. This has, historically, led to some issues and support requests.
- Different browsers may use different JPEG decoders, possibly leading to images being rendered slightly differently depending on the platform/browser where the PDF.js library is used.
Originally the implementation in `src/core/jpg.js` was unable to handle all of the JPEG images in the test-suite, but over the last couple of years I've fixed (hopefully) all of those issues.
At this point in time, there are two kinds of failures with this patch:
- Changes which are basically imperceptible to the naked eye, where some pixels in the images are essentially off-by-one (in all components), which could probably be attributed to things such as different rounding behaviour in the browser/PDF.js JPEG decoders.
This type of "failure" accounts for the *vast* majority of the total number of changes in the reference tests.
- Changes where the JPEG images now look *ever so slightly* blurrier than with the native browser decoder. For quite some time I've just assumed that this pointed to a general deficiency in the `src/core/jpg.js` implementation, however when comparing two viewers side-by-side I've discovered that the differences vanish at higher zoom levels (usually around 200% is enough).
Basically if you disable [this downscaling in canvas.js](8fb82e939c/src/display/canvas.js (L2356-L2395)), which is what happens when zooming in, the differences simply vanish!
Hence I'm pretty satisfied that there's no significant problems with the `src/core/jpg.js` implementation, and the problems are rather tied to the general quality of the downscaling algorithm used. It could even be seen as a positive that *all* images now share the same downscaling behaviour, since this actually fixes one old bug; see issue 7041.
This rule complements the existing `accessor-pairs` nicely, and ensures that a getter/setter pair is always consistently ordered.
Please find additional details about this rule at https://eslint.org/docs/rules/grouped-accessor-pairs
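For example (class and property names are made up):
```
// Flagged by `grouped-accessor-pairs`, since the `label` accessors are separated:
class BadPage {
  get label() { return this._label; }
  get index() { return this._index; }
  set label(value) { this._label = value; }
}

// Preferred, with the setter immediately following its getter:
class GoodPage {
  get label() { return this._label; }
  set label(value) { this._label = value; }
  get index() { return this._index; }
}
```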
Given that none of the PDF.js contributors know React, maintaining and/or providing support for the example isn't really feasible unfortunately.
Even something as simple as running/testing the example becomes difficult for anyone completely unfamiliar with React, and furthermore:
- It's very difficult to tell if the example demonstrates React best-practices, since the PDF.js contributors don't know React.
- We also have no reasonable way of keeping the example up-to-date with changes in React.
- The React example, in its current form, is even *hard-coding* the PDF.js version to a now unsupported version.
- The example is currently triggering "fake worker" usage, see issue 11729, which is really *really* bad. Note that the "fake worker" functionality is *only* intended as a fallback, and it should absolutely *not* under any circumstances be advertised and certainly shouldn't be triggered in official PDF.js examples.
*This patch implements https://github.com/mozilla/pdf.js/pull/11777#issuecomment-609741348*
This extends the work from PR 11773 and 11777 further, by immediately releasing the `font.data` property once the font has been attached to the DOM. By not unnecessarily holding onto this data on the main-thread, we'll thus reduce the memory usage of fonts even further (especially beneficial in longer documents with composite fonts).
The new behaviour is controlled by the recently added `fontExtraProperties` API option (adding a new option just for this patch didn't seem necessary), since there's one edge-case in the SVG renderer where the `font.data` property is necessary (see the `pdf2svg` example).
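For example, an SVG-renderer use-case (such as the `pdf2svg` example) would opt out of the new behaviour roughly as follows (the URL is illustrative):
```
const loadingTask = pdfjsLib.getDocument({
  url: "document.pdf",
  // Keep the extra font properties, including `font.data`, alive on the
  // main-thread, since the SVG renderer still needs them.
  fontExtraProperties: true,
});
```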
Note that while the default viewer does run clean-up with an idle timeout, that timeout will be reset whenever rendering occurs *or* when scrolling happens in the viewer. In practice this means that unless the user leaves the viewer completely idle for an extended period of time, currently set to 30 seconds, the `PDFDocumentProxy.cleanup` method will never be called and font resources will thus not be cleaned up.
*Please note:* These changes were done automatically, using the `gulp lint --fix` command.
This rule is already enabled in mozilla-central, see https://searchfox.org/mozilla-central/rev/567b68b8ff4b6d607ba34a6f1926873d21a7b4d7/tools/lint/eslint/eslint-plugin-mozilla/lib/configs/recommended.js#103-104
The main advantage, besides improved consistency, of this rule is that it reduces the size of the code (by 3 bytes for each case). In the PDF.js code-base there are close to 8000 instances being fixed by the `dot-notation` ESLint rule, which ends up reducing the size of even the *built* files significantly; the total size of the `gulp mozcentral` build target changes from `3 247 456` to `3 224 278` bytes, which is a *reduction* of `23 178` bytes (or ~0.7%) for a completely mechanical change.
A large number of these changes affect the (large) lookup tables used on the worker-thread, but given that they are still initialized lazily I don't *think* that the new formatting this patch introduces should undo any of the improvements from PR 6915.
Please find additional details about the ESLint rule at https://eslint.org/docs/rules/dot-notation
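For example (identifier names are made up):
```
// Flagged by `dot-notation`:
let width = dimensions["width"];
// Preferred, and three bytes shorter per occurrence:
width = dimensions.width;
```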
Please note that these changes were done automatically, using `gulp lint --fix`.
Given that the major version number was increased, there's a fair number of (primarily whitespace) changes; please see https://prettier.io/blog/2020/03/21/2.0.0.html
In order to reduce the size of these changes somewhat, this patch maintains the old "arrowParens" style for now (once mozilla-central updates Prettier we can simply choose the same formatting, assuming it will differ here).
To avoid outright breaking third-party usages of the "viewer components" the `getGlobalEventBus` functionality is left intact, but a deprecation message is printed if the function is invoked.
The various examples are updated to *explicitly* initialize an `EventBus` instance, and provide that when initializing the relevant viewer components.
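For example, the updated "simpleviewer" pattern looks roughly like this (the container lookup is illustrative):
```
const container = document.getElementById("viewerContainer");

const eventBus = new pdfjsViewer.EventBus();
const pdfViewer = new pdfjsViewer.PDFViewer({
  container,
  eventBus,
});
```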
This patch makes the follow changes:
- Remove no longer necessary inline `// eslint-disable-...` comments.
- Fix `// eslint-disable-...` comments that Prettier moved down, thus causing new linting errors.
- Concatenate strings which now fit on just one line.
- Fix comments that are now too long.
- Finally, and most importantly, adjust comments that Prettier moved down, since the new positions are often confusing or outright wrong.
Note that Prettier, purposely, has only limited [configuration options](https://prettier.io/docs/en/options.html). The configuration file is based on [the one in `mozilla central`](https://searchfox.org/mozilla-central/source/.prettierrc) with just a few additions (to avoid future breakage if the defaults ever change).
Prettier is being used for a couple of reasons:
- To be consistent with `mozilla-central`, where Prettier is already in use across the tree.
- To ensure a *consistent* coding style everywhere, which is automatically enforced during linting (since Prettier is used as an ESLint plugin). This thus ends "all" formatting discussions once and for all, removing the need for review comments on most stylistic matters.
Many ESLint options are now redundant, and I've tried my best to remove all the now unnecessary options (but I may have missed some).
Note also that since Prettier considers the `printWidth` option as a guide, rather than a hard rule, this patch resorts to a small hack in the ESLint config to ensure that *comments* won't become too long.
*Please note:* This patch is generated automatically, by appending the `--fix` argument to the ESLint call used in the `gulp lint` task. It will thus require some additional clean-up, which will be done in a *separate* commit.
(On a more personal note, I'll readily admit that some of the changes Prettier makes are *extremely* ugly. However, in the name of consistency we'll probably have to live with that.)
For performance reasons, and to avoid hanging the browser UI, the PDF.js library should *always* be used with web workers enabled.
At this point in time all of the supported browsers should have proper worker support, and Node.js is thus the only environment where workers aren't supported. Hence it no longer seems relevant/necessary to provide, by default, fake worker loaders for various JS builders/bundlers/frameworks in the PDF.js code itself.[1]
In order to simplify things, the fake worker loader code is thus simplified to now *only* support Node.js usage and "normal" browser usage out-of-the-box.[2]
*Please note:* The officially intended way of using the PDF.js library is with workers enabled, which can be done by setting `GlobalWorkerOptions.workerSrc`, `GlobalWorkerOptions.workerPort`, or manually providing a `PDFWorker` instance when calling `getDocument`.
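For reference, a minimal sketch of the officially supported setup (the worker path is illustrative):
```
pdfjsLib.GlobalWorkerOptions.workerSrc = "../build/pdf.worker.js";

const loadingTask = pdfjsLib.getDocument("document.pdf");
loadingTask.promise.then(function (pdfDocument) {
  // ... the API now runs parsing/rendering work on a real web worker ...
});
```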
---
[1] Note that it's still possible to *manually* disable workers, simply by loading the built `pdf.worker.js` file into the (current) global scope, however this is mostly intended for testing/debugging purposes.
[2] Unfortunately some bundlers such as Webpack, when used with third-party deployments of the PDF.js library, will start to print `Critical dependency: ...` warnings when run against the built `pdf.js` file from this patch. The reason is that despite the `require` calls being protected by *runtime* `isNodeJS` checks, it's not possible to simply tell Webpack to just ignore the `require`; please see [Webpack issue 8826](https://github.com/webpack/webpack) and libraries such as [require-fool-webpack](https://github.com/sindresorhus/require-fool-webpack).
This patch addresses a couple of smaller issues with the `PDFHistory` class:
- Most, if not all, other viewer components can be reset in one way or another, and there's no good reason for the `PDFHistory` implementation to be different here.
- Currently it's (technically) possible to keep adding entries to the browser history, via the `PDFHistory` instance, even after the document has been closed. That obviously makes no sense, and is caused by the lack of a `reset` method.
- The internal `this._isPagesLoaded` property was never actually reset, which would lead to it being temporarily wrong when a new document was opened in the default viewer.
This major version bump required two changes:
- The global line in the mobile viewer example should be removed because
the `.eslintrc` file already defines these globals and with the new
`eslint` version we otherwise get an error saying "'pdfjsLib' is already
defined as a built-in global variable".
- The ECMA version for the examples must be set to 6 since we're using
modules, otherwise we get an error saying "sourceType 'module' is not
supported when ecmaVersion < 2015". It turns out that the previous
version of `eslint` already used ECMA version 6 silently even though
we set 5, see https://github.com/eslint/eslint/issues/9687#issuecomment-432413384,
so in terms of our code nothing really changes.
If, as PR 10368 suggests, more parameters should be added to `getViewport` I think that it would be a mistake to not change the signature *first* to avoid needlessly unwieldy call-sites.
To not break any existing code and third-party use-cases, this is obviously implemented with a deprecation warning *and* with a working fallback[1] for the old method signature.
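Concretely, the change amounts to the following (the scale/rotation values are just examples):
```
// Old, now deprecated, call-site (still supported in GENERIC builds, with a warning):
let viewport = page.getViewport(1.5, /* rotate = */ 90);
// New call-site, which easily accommodates additional parameters later on:
viewport = page.getViewport({ scale: 1.5, rotation: 90 });
```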
---
[1] This is limited to `GENERIC` builds, which should be sufficient.
This required the following changes in the Gulpfile:
- Defining a series of tasks is no longer done with arrays, but with the
`gulp.series` function. The `web` target is refactored to use a
smaller number of tasks to prevent tasks from running multiple times.
- Getting all tasks must now be done through the task registry.
- Tasks that don't return anything must call `done` upon completion.
Moreover, this upgrade allows us to use the latest Node.js on Travis CI
again.
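As an illustration of the first point above, a minimal sketch of the gulp 4 style (task names and bodies are hypothetical):
```
// gulp 3: gulp.task("web", ["generic", "jsdoc"]);
gulp.task(
  "web",
  gulp.series("generic", "jsdoc", function createWebsite(done) {
    // ... build the website from the output of the previous tasks ...
    done(); // tasks that don't return a stream/promise must signal completion
  })
);
```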
*This patch is based on something that I noticed while working on PR 10126.*
The recent re-factoring of `PDFFindController` brought many improvements, among those the fact that access to `BaseViewer` is no longer required. However, with these changes there's one thing which now strikes me as not particularly user-friendly[1]: The fact that in order for searching to actually work, `PDFFindController.setDocument` must be called *and* a 'pagesinit' event must be dispatched (from somewhere).
For all other viewer components, calling the `setDocument` method[2] is enough in order for the component to actually be usable.
The `PDFFindController` thus stands out quite a bit, and it also becomes difficult to work with in any sort of custom implementation. For example: Imagine someone trying to use `PDFFindController` separately from the viewer[3], which *should* now be relatively simple given the re-factoring, and thus having to (somehow) figure out that they'll also need to manually dispatch a 'pagesinit' event for searching to work.
Note that the above even affects the unit-tests, where an out-of-place 'pagesinit' event is being used.
To attempt to address these problems, I'm thus suggesting that *only* `setDocument` should be used to indicate that searching may start. For the default viewer and/or the viewer components, `BaseViewer.setDocument` will now call `PDFFindController.setDocument` when the document is ready, thus requiring no outside configuration anymore[4]. For custom implementation, and the unit-tests, it's now as simple as just calling `PDFFindController.setDocument` to allow searching to start.
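In a custom implementation or unit-test, a sketch of all that's now required before searching can start (the constructor options shown are assumptions):
```
const findController = new PDFFindController({ linkService, eventBus });
// Previously a 'pagesinit' event also had to be dispatched manually; now this is enough:
findController.setDocument(pdfDocument);
```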
---
[1] I should have caught this during review of PR 10099, but unfortunately it's sometimes not until you actually work with the code in question that things like these become clear.
[2] Assuming, obviously, that the viewer component in question actually implements such a method :-)
[3] There's even a very recent issue, filed by someone trying to do just that.
[4] Short of providing a `PDFFindController` instance when creating a `BaseViewer` instance, of course.
With `PDFFindController` instances no longer (directly) depending on
`BaseViewer` instances, we can pass a single `findController` when
initializing a viewer, similar to other components.
This removes the dependency on a `PDFViewer` instance from the find
controller, which makes it more similar to other components and makes it
easier to unit test with a mock link service.
Finally, we remove the search capabilities from the SVG example since
searching doesn't work there because there is no separate text layer.
Now it follows the same pattern as e.g., the document properties
component, which allows us to have one instance of the find controller
and set a new document to search upon switching documents.
Moreover, this allows us to get rid of the dependency on `pdfViewer` in
order to fetch the text content for a page. This is working towards
getting rid of the `pdfViewer` dependency upon initializing the
component entirely in future commits.
Finally, we make the `reset` method private since it's not supposed to
be used from the outside anymore now that `setDocument` takes care of
this, similar to other components.
The purpose of this patch is to provide a better default behaviour when `JpegImage` is used to parse standalone JPEG images with CMYK colour spaces.
Since the issue that the patch concerns is somewhat of a special-case, the implementation utilizes the already existing decode support in an attempt to minimize the impact w.r.t. code size.
*Please note:* It's always possible for the user of `JpegImage` to control image inversion, and thus override the new behaviour, by simply passing a custom `decodeTransform` array upon initialization.
This should provide a better out-of-the-box experience when using PDF.js in a Node.js environment, since it's missing native support for both `@font-face` and `Image`.
Please note that this change *only* affects the default values, hence it's still possible for an API consumer to override those values when calling `getDocument`.
Also, this prevents "ReferenceError: document is not defined" errors when running the unit-tests in Node.js/Travis.
We need to pass `disableFontFace` and `nativeImageDecoderSupport`
because Node.js has no native support for `@font-face` and `Image`.
Doing so makes it possible to render e.g., the Tracemonkey paper, which
failed before. I made this PDF file the default because it's also the
default in other examples/demos and because it showcases the
possibilities better than the very simple hello world PDF file.
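A sketch of what the example's `getDocument` call looks like with these options (the file path is illustrative):
```
const loadingTask = pdfjsLib.getDocument({
  url: "../../web/compressed.tracemonkey-pldi-09.pdf",
  // Node.js has no native `@font-face` or `Image` support:
  disableFontFace: true,
  nativeImageDecoderSupport: "none",
});
```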
Building the library with `gulp dist-install` is easier and is already
recommended in the other examples.
Rather than having two different (but connected) options for the textLayer, I think that it makes sense to try and unify this. For example: currently if `disableTextLayer === true`, then the value of `enhanceTextSelection` is simply ignored.
Since PDF.js version `2.0` already won't be backwards compatible in lots of ways, I don't think that we need to worry about migrating existing preferences here.
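Assuming the two options get merged into a single enumeration-style viewer option, e.g. a `textLayerMode` option (the name and values here are an assumption), usage could look like:
```
// 0 = disabled, 1 = enabled, 2 = enabled with enhanced text selection
const pdfViewer = new pdfjsViewer.PDFViewer({
  container,
  textLayerMode: 2,
});
```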
This removes the `PDFJS.useOnlyCssZoom` dependency from the viewer components, but please note that as a *temporary* solution the default viewer still uses it.
Despite this patch removing the `disableWorker` option itself, please note that we'll still fallback to loading the worker file(s) on the main-thread when running in environments without proper Web Worker support.
Furthermore it's still possible, even with this patch, to force the use of fake workers by manually loading the necessary file using a `<script>` tag on the main-thread.[1]
That way, the functionality of the now removed `SINGLE_FILE` build target and the resulting `build/pdf.combined.js` file can still be achieved simply by adding e.g. `<script src="build/pdf.worker.js"></script>` to the HTML (obviously with the path adjusted as needed).
Finally note that the `disableWorker` option is a performance footgun, and unfortunately many existing third-party examples actually use it without providing any sort of warning/justification.
---
[1] This approach is used in the default viewer, since certain kind of debugging may be easier if the code is running directly on the main-thread.
These were removed in PR 9170, since they were unused in the browsers that we'll support in PDF.js version `2.0`.
However looking at the output of Travis, where a subset of the unit-tests are run using Node.js, there's warnings about `btoa` being undefined. This doesn't appear to cause any errors, which probably explains why we didn't notice this before (despite PR 9201).
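A minimal sketch of the kind of Node.js-only polyfill that addresses the warning (the `isNodeJS`/`globalScope` helpers are assumptions based on the surrounding code-base):
```
if (typeof btoa === "undefined" && isNodeJS()) {
  globalScope.btoa = function (chars) {
    // Node.js lacks `btoa`, but `Buffer` produces the same base64 output.
    return Buffer.from(chars, "binary").toString("base64");
  };
}
```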
The `DOMParser` is most likely overkill and may be less secure.
Moreover, it is not supported in Node.js environments.
This patch replaces the `DOMParser` with a simple XML parser. This
should be faster and gives us Node.js support for free. The simple XML
parser is a port of the one that existed in the examples folder with a
small regex fix to make the parsing work correctly.
The unit tests are extended for increased test coverage of the metadata
code. The new method `getAll` is provided so the example does not have
to access internal properties of the object anymore.
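A usage sketch of the new method (the exact shape of the returned object is an assumption):
```
pdfDocument.getMetadata().then(function (data) {
  if (data.metadata) {
    // Returns all parsed metadata entries at once, e.g. { "dc:title": "...", ... },
    // instead of requiring repeated `get(name)` calls or access to internals.
    console.log(data.metadata.getAll());
  }
});
```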
Implement a serialization "generator" for `DOMElement` in domutils.js
that yields the serialization of the SVG element. This method is used by
a newly added `ReadableSVGStream` class, which can be used like any
other readable stream in Node.js.
This reduces the memory requirements. Now, it is not needed to require
the serialization to fully fit in memory.
Note: The implementation of the serializer is a state machine in ES5
since the rest of the file is also in ES5. Its functionality is
equivalent to:
```
function* serializeSVGElement(elem) {
  yield '<' + elem.nodeName;
  if (elem.nodeName === 'svg:svg') {
    yield ' xmlns:xlink="http://www.w3.org/1999/xlink"' +
          ' xmlns:svg="http://www.w3.org/2000/svg"';
  }
  for (let i in elem.attributes) {
    yield ' ' + i + '="' + xmlEncode(elem.attributes[i]) + '"';
  }
  yield '>';
  if (elem.nodeName === 'svg:tspan' || elem.nodeName === 'svg:style') {
    yield xmlEncode(elem.textContent);
  } else {
    for (let childNode of elem.childNodes) {
      yield* serializeSVGElement(childNode);
    }
  }
  yield '</' + elem.nodeName + '>';
}
```
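A usage sketch in Node.js (the constructor options shown are assumptions):
```
const fs = require("fs");

// `svgElement` is the SVG element produced by the SVG backend for one page.
const svgStream = new ReadableSVGStream({ svgElement });
svgStream.pipe(fs.createWriteStream("page.svg"));
```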
- Mark the test as async, and don't swallow exceptions.
- Fix the DOMElement polyfill to behave closer to the actual getAttributeNS
method, which excludes the namespace prefix.
Do not directly export to global. Instead, export all stubs in domstubs.js and
add a method setStubs to assign all exported stubs to a namespace. Then replace
the import domstubs with an explicit call to this setStubs method. Also added
unsetStubs for undoing the changes. This is done to allow unit testing of the
SVG backend without namespace pollution.
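A sketch of the resulting usage pattern (the path, and using Node's `global` as the namespace, are assumptions):
```
const domstubs = require("./domstubs.js");

domstubs.setStubs(global);   // install the DOM stubs before using the SVG backend
// ... run the SVG rendering code ...
domstubs.unsetStubs(global); // undo the changes, e.g. in unit-test teardown
```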
Wait for the completion of writing the generated SVG file before
processing the next page. This is to enable the garbage collector to
garbage-collect the (potentially large) SVG string before trying to
allocate memory again for the next page.
Note that since the PDF-to-SVG conversion is now sequential instead of
parallel, the time to generate all pages increases.
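A minimal sketch of the sequential flow described above; `renderPageToSVG` and `writeSvgToFile` are hypothetical helpers (render one page to an SVG element, and resolve once the output stream has finished, respectively):
```
let chain = Promise.resolve();
for (let pageNum = 1; pageNum <= pdfDocument.numPages; pageNum++) {
  chain = chain
    .then(() => renderPageToSVG(pdfDocument, pageNum))
    // Only start the next page once the (potentially large) SVG string for
    // this page has been written out and can be garbage-collected.
    .then(svgElement => writeSvgToFile(svgElement, `page${pageNum}.svg`));
}
```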
Test case:
node --max_old_space_size=200 examples/node/pdf2svg.js /tmp/FatalProcessOutOfMemory.pdf
Before this patch:
- Node.js crashes due to OOM after processing 20 pages.
After this patch:
- Node.js is able to convert all 203 pages to SVG without crashing.
Test case:
Using the PDF file from https://github.com/mozilla/pdf.js/issues/8534
node --max_old_space_size=200 examples/node/pdf2svg.js /tmp/FatalProcessOutOfMemory.pdf
Before this patch:
Node.js crashes due to OOM after processing 10 pages.
After this patch:
Node.js crashes due to OOM after processing 19 pages.