With the changes made in the previous patch, the `web/app_options.js` file no longer depends on anything *except* files residing in the `web/` folder. Hence the `gulp default_preferences` task can now be further simplified and thus becomes even faster than before; see also PR 11724.
With the changes in previous patches, the `disableCreateObjectURL` option/functionality is no longer used for anything in the API and/or in the Worker code.
Note however that there's some functionality, mainly related to file loading/downloading, in the GENERIC version of the default viewer which still depends on this option.
Hence the `disableCreateObjectURL` option (and related compatibility code) is moved into the viewer, see e.g. `web/app_options.js`, such that it's still available in the default viewer.
Currently some JPEG images are decoded by the built-in PDF.js decoder in `src/core/jpg.js`, while others attempt to use the browser JPEG decoder. This inconsistency seems unfortunate for a number of reasons:
- It adds, compared to the other image formats supported in the PDF specification, a fair amount of code/complexity to the image handling in the PDF.js library.
- The PDF specification supports JPEG images with features, e.g. certain ColorSpaces, that browsers are unable to decode natively. Hence, determining if a JPEG image can be decoded natively in the browser requires a non-trivial amount of parsing. In particular, we're parsing (part of) the raw JPEG data to extract certain marker data, and we also need to parse the ColorSpace for the JPEG image.
- While some JPEG images may, for all intents and purposes, appear to be natively supported, there are still cases where the browser may fail to decode some JPEG images. In order to support those cases, we've had to implement a fallback to the PDF.js JPEG decoder if there are any issues during the native decoding. This also means that it's no longer possible to simply send the JPEG image to the main-thread and continue parsing; instead you now need to actually wait for the main-thread to indicate success/failure first.
In practice this means that there's a code-path where the worker-thread is forced to wait for the main-thread, while the reverse should *always* be the case.
- The native decoding, for anything except the *simplest* of JPEG images, results in increased peak memory usage because there's a handful of short-lived copies of the JPEG data (see PR 11707).
Furthermore this also leads to data being *parsed* on the main-thread, rather than the worker-thread, which you usually want to avoid for e.g. performance and UI-responsiveness reasons.
- Not all environments, e.g. Node.js, fully support native JPEG decoding. This has, historically, led to some issues and support requests.
- Different browsers may use different JPEG decoders, possibly leading to images being rendered slightly differently depending on the platform/browser where the PDF.js library is used.
Originally the implementation in `src/core/jpg.js` was unable to handle all of the JPEG images in the test-suite, but over the last couple of years I've fixed (hopefully) all of those issues.
At this point in time, there are two kinds of failures with this patch:
- Changes which are basically imperceptible to the naked eye, where some pixels in the images are essentially off-by-one (in all components), which could probably be attributed to things such as different rounding behaviour in the browser and PDF.js JPEG decoders.
This type of "failure" accounts for the *vast* majority of the total number of changes in the reference tests.
- Changes where the JPEG images now look *ever so slightly* blurrier than with the native browser decoder. For quite some time I had just assumed that this pointed to a general deficiency in the `src/core/jpg.js` implementation; however, when comparing two viewers side-by-side, I discovered that the differences vanish at higher zoom levels (usually around 200% is enough).
Basically if you disable [this downscaling in canvas.js](8fb82e939c/src/display/canvas.js (L2356-L2395)), which is what happens when zooming in, the differences simply vanish!
Hence I'm pretty satisfied that there are no significant problems with the `src/core/jpg.js` implementation, and that the remaining differences are rather tied to the general quality of the downscaling algorithm used. It could even be seen as a positive that *all* images now share the same downscaling behaviour, since this actually fixes one old bug; see issue 7041.
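For reference, a rough sketch of how the built-in decoder is used; treat the exact names/signatures below as approximations of the `src/core/jpg.js` API rather than a precise transcript:

```js
// Rough usage sketch of the built-in JPEG decoder; the exact
// names/signatures are approximations of the src/core/jpg.js API.
import { JpegImage } from "./src/core/jpg.js";

const jpegImage = new JpegImage();
jpegImage.parse(jpegData); // `jpegData` is a Uint8Array of raw JPEG bytes.

// Fetch the decoded pixels, optionally converting e.g. CMYK data to RGB.
const pixels = jpegImage.getData({
  width: jpegImage.width,
  height: jpegImage.height,
  forceRGB: true,
});
```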
Currently image resources, as opposed to e.g. font resources, are handled exclusively on a page-specific basis. Generally speaking this makes sense, since pages are separate from each other, however there are PDF documents where many (or even all) pages actually reference exactly the same image resources (through the XRef table). Hence, in some cases, we're decoding the *same* images over and over for every page, which is obviously slow and wastes both CPU and memory resources that would be better used elsewhere.[1]
Obviously we cannot simply treat all image resources as-if they're used throughout the entire PDF document, since that would end up increasing memory usage too much.[2]
However, by introducing a `GlobalImageCache` in the worker we can track image resources that appear on more than one page. Hence we can switch image resources from being page-specific to being document-specific, once the image resource has been seen on more than a certain number of pages.
In many cases, such as e.g. the referenced issue, this patch will thus lead to reduced memory usage for image resources. Scrolling through all pages of the document, there's now only a few main-thread copies of the same image data, as opposed to one for each rendered page (i.e. there could theoretically be *twenty* copies of the image data).
While this obviously benefits both CPU and memory usage in this case, for *very* large image data this patch *may* possibly increase persistent main-thread memory usage a tiny bit. Thus, to avoid negatively affecting memory usage too much in general, particularly on the main-thread, the `GlobalImageCache` will *only* cache a certain number of image resources at the document level and simply fall back to the default behaviour otherwise.
Unfortunately the asynchronous nature of the code, with ranged/streamed loading of data, actually makes all of this much more complicated than if all data could be assumed to be immediately available.[3]
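To make the caching policy concrete, here's a minimal sketch of the idea; all names and thresholds below are illustrative, not the actual implementation:

```js
// Minimal sketch of a document-level image cache; the names and the
// thresholds are illustrative, not the actual implementation.
const NUM_PAGES_THRESHOLD = 2; // Promote once seen on this many pages.
const MAX_IMAGES_TO_CACHE = 10; // Cap document-level memory usage.

class GlobalImageCacheSketch {
  constructor() {
    this._refCache = new Map(); // imageRef -> Set of page indexes seen.
    this._imageCache = new Map(); // imageRef -> decoded image data.
  }

  // Called whenever an image resource is encountered during parsing.
  shouldCache(imageRef, pageIndex) {
    let pageIndexSet = this._refCache.get(imageRef);
    if (!pageIndexSet) {
      this._refCache.set(imageRef, (pageIndexSet = new Set()));
    }
    pageIndexSet.add(pageIndex);
    // Only promote the image to document-level caching once it's been
    // seen on enough pages, and only while there's still room left.
    return (
      pageIndexSet.size >= NUM_PAGES_THRESHOLD &&
      this._imageCache.size < MAX_IMAGES_TO_CACHE
    );
  }

  setData(imageRef, data) {
    this._imageCache.set(imageRef, data);
  }

  getData(imageRef) {
    return this._imageCache.get(imageRef) ?? null;
  }
}
```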
*Please note:* The patch will lead to *small* movement in some existing test-cases, since we're now using the built-in PDF.js JPEG decoder more. This was done in order to simplify the overall implementation, especially on the main-thread, by limiting it to only the `OPS.paintImageXObject` operator.
---
[1] There's e.g. PDF documents that use the same image as background on all pages.
[2] Given that data stored in the `commonObjs`, on the main-thread, is only cleared manually through `PDFDocumentProxy.cleanup`. This is in contrast to data stored in the `objs` of each page, which is automatically removed when the page is cleaned up, e.g. by being evicted from the cache in the default viewer.
[3] If the latter case were true, we could simply check for repeat images *before* parsing started and thus avoid handling *any* duplicate image resources.
With these changes SystemJS is now only used, during development, on the worker-thread and in the unit/font-tests, since Firefox is currently missing support for worker modules; please see https://bugzilla.mozilla.org/show_bug.cgi?id=1247687
Hence all the JavaScript files in the `web/` and `src/display/` folders are now loaded *natively* by the browser (during development) using standard `import` statements/calls, thanks to a nice `import-maps` polyfill.
*Please note:* As soon as https://bugzilla.mozilla.org/show_bug.cgi?id=1247687 is fixed in Firefox, we should be able to remove all traces of SystemJS and thus finally be able to use every possible modern JavaScript feature.
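In practice this means that, in development mode, the viewer files can use standard imports where bare specifiers are resolved through an import map handled by the polyfill. A simplified illustration (the exact map entries are assumptions):

```js
// With an import map along the lines of
//   { "imports": { "pdfjs-lib": "../src/pdf.js" } }
// in place, development-mode viewer files can simply do:
import { getDocument, GlobalWorkerOptions } from "pdfjs-lib";
```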
As part of reducing our reliance on SystemJS in the development viewer, this patch replaces usage of `require` statements with modern, standard `import`/`export` statements instead.
If we want to try and move forward with reducing the usage of SystemJS, we don't have much choice but to make these kinds of changes (despite what prior test-results showed; I'm however no longer able to reproduce the issues locally).
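To illustrate the kind of change involved (the module/export names are just an example):

```js
// Before, using a SystemJS-handled require call:
const { DefaultTextLayerFactory } = require("./text_layer_builder.js");

// After, using a standard import statement:
import { DefaultTextLayerFactory } from "./text_layer_builder.js";
```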
As part of trying to reduce the usage of SystemJS in the development viewer, this patch is a necessary step that will allow removal of some `require` statements.
Currently this uses `SystemJS.import` in non-PRODUCTION mode, but it should be possible to replace those with standard *dynamic* `import` calls in the future.
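A sketch of that future change (the module name is illustrative):

```js
// Non-PRODUCTION mode currently:
const { GenericCom } = await SystemJS.import("pdfjs-web/genericcom.js");

// What it should eventually become, with a standard dynamic import:
const { GenericCom } = await import("./genericcom.js");
```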
Given that the `getPDFFileNameFromURL` helper function has a specific code-path for handling non-string inputs, this empty string fallback really isn't necessary at the call-site in `web/pdf_document_properties.js`.
Apparently the old link format used in MOZCENTRAL-builds, where the blob URL is separated from the filename by a `?` character, violates the specification; see https://bugzilla.mozilla.org/show_bug.cgi?id=1632644#c5
Obviously just removing the `?`-part of the URL would have worked, but that would also have meant that we'd no longer be able to provide the correct filename when the user attempts to download the opened PDF attachment.
To fix this we'll instead append the filename in the hash-part of the URL, which however required using a *custom* hash-parameter to avoid triggering the fallback "named destination" code-paths in the viewer.
Note that only changing the `web/pdf_attachment_viewer.js` file wasn't sufficient to fix the bug, and we also needed to tweak the `webViewerInitialized` function in `web/app.js`, since MOZCENTRAL-builds used to ignore *everything* in the URL hash.
This particular code is very old, but changing it *should* be completely safe given that the `PDFViewerApplication.setTitleUsingUrl` method has, for quite some time now, stored both the original URL (in `this.url`) as well as one without the hash (in `this.baseUrl`). The latter one is already used everywhere that it matters, so this change seems fine to me.
This patch thus restores the original behaviour for PDF attachments in the MOZCENTRAL-build, by once again allowing them to be opened *directly* in the browser without downloading. (The fallback added in PR 11845 is obviously kept, since it seems generally useful to have.)
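A sketch of the old and new link formats (the hash-parameter name below is illustrative):

```js
// Old format, which violates the URL specification:
//   <blobUrl>?<filename>
//
// New format, carrying the filename in a *custom* hash-parameter such
// that the viewer won't mistake it for a named destination:
const viewerUrl = blobUrl + "#filename=" + encodeURIComponent(filename);
```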
Originally the `default_preferences.json` file was checked into the repository, and we thus needed to load it in non-PRODUCTION mode (which was originally done asynchronously using `XMLHttpRequest`). Over the years a lot has changed and the `default_preferences.json` file is now built, by the `gulp default_preferences` task, from the `web/app_options.js` file. Hence it's no longer necessary, in non-PRODUCTION mode, to use SystemJS here since we can simply use a standard `import` statement instead.
Note how e.g. `web/app.js` already imports from `web/app_options.js` in the same exact way that `web/preferences.js` now does, hence this patch will *not* result in any significant changes in the built/bundled viewer file.
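A simplified sketch of what the non-PRODUCTION code-path now looks like:

```js
// web/preferences.js, non-PRODUCTION mode (simplified sketch):
import { AppOptions, OptionKind } from "./app_options.js";

function getDefaultPreferences() {
  // Collect every option that's been marked as a preference.
  return AppOptions.getAll(OptionKind.PREFERENCE);
}
```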
This is another (small) part in trying to reduce usage of SystemJS, with the goal of hopefully getting rid of it completely. (I've started working on this, and doing so has identified a number of problem areas; this patch addresses one of them.)
This replaces some additional `require`/`exports` usage with standard `import`/`export` statements instead.
Hence another, small, part in the effort to reduce the reliance on SystemJS-specific functionality in the development viewer.
By re-factoring the existing gulp tasks, most of the code can be re-used for both the existing `gulp minified` as well as the new `gulp minified-es5` task.
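A hypothetical sketch of the re-factoring, with the shared steps living in one helper that's parameterized by the build configuration:

```js
// Hypothetical sketch; the helper/task bodies are placeholders.
function buildMinified(defines, dir) {
  // ...bundle, (optionally) transpile and minify into `dir`...
}

gulp.task("minified", function () {
  const defines = { ...DEFINES, MINIFIED: true, GENERIC: true };
  return buildMinified(defines, BUILD_DIR + "minified/");
});

gulp.task("minified-es5", function () {
  const defines = { ...DEFINES, MINIFIED: true, GENERIC: true, SKIP_BABEL: false };
  return buildMinified(defines, BUILD_DIR + "minified-es5/");
});
```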
While working on PR 11872, it occurred to me that it probably wouldn't be a bad idea to change the `_parsedAnnotations` getter to handle errors individually for each annotation. This way, one broken/corrupt annotation won't prevent the rest of them from being e.g. fetched through the API.
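A sketch of the per-annotation error handling (simplified from the actual getter):

```js
// Simplified sketch of the `_parsedAnnotations` error handling; one
// broken/corrupt annotation no longer rejects the entire list.
const annotationPromises = annotationRefs.map(annotationRef =>
  AnnotationFactory.create(xref, annotationRef, pdfManager, idFactory).catch(
    reason => {
      warn(`_parsedAnnotations: "${reason}".`);
      return null;
    }
  )
);
const annotations = (await Promise.all(annotationPromises)).filter(
  annotation => annotation !== null
);
```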
Given that the `PDFLinkService.setHash` method itself is completely synchronous, moving the handling of "nameddest" to occur last *shouldn't* cause any problems (famous last words).
This way the destination will still override any previous parameter, such as e.g. the "page", as expected. Furthermore, given that the `PDFLinkService.navigateTo` method is asynchronous that should provide additional guarantees that the "nameddest" parameter is always respected.
As sort-of expected, this fairly innocent looking change also required some tweaks in the `PDFHistory` to prevent dummy history entries upon document load (only an issue when both "page" *and* "nameddest" parameters are provided in the hash).
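A simplified sketch of the new ordering in `PDFLinkService.setHash`:

```js
// Simplified sketch; all other parameters are handled first...
const params = parseQueryString(hash);
if ("page" in params) {
  pageNumber = params.page | 0;
}
// ...and "nameddest" is handled last, such that the (asynchronous)
// navigation always overrides e.g. the "page" parameter:
if ("nameddest" in params) {
  this.navigateTo(params.nameddest);
}
```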
This rule complements the existing `accessor-pairs` nicely, and ensures that a getter/setter pair is always consistently ordered.
Please find additional details about this rule at https://eslint.org/docs/rules/grouped-accessor-pairs
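For example, with the rule configured to require the getter first (the exact configuration used is an assumption), a consistently ordered pair looks like this:

```js
/* eslint grouped-accessor-pairs: ["error", "getBeforeSet"] */

class Example {
  // The getter always comes first, immediately followed by its setter.
  get value() {
    return this._value;
  }

  set value(val) {
    this._value = val;
  }
}
```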
Having `assert` calls without a message string isn't very helpful when debugging, and it turns out that it's easy enough to make use of ESLint to enforce better `assert` call-sites.
In a couple of cases the `assert` calls were changed to "regular" throwing of errors instead, since that seemed more appropriate.
Please find additional details about the ESLint rule at https://eslint.org/docs/rules/no-restricted-syntax
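A sketch of the rule configuration, in `.eslintrc.js` form; treat the exact selector as an approximation:

```js
module.exports = {
  rules: {
    // Disallow `assert` calls that don't include a message string.
    "no-restricted-syntax": [
      "error",
      {
        selector:
          "CallExpression[callee.name='assert'][arguments.length!=2]",
        message: "`assert` must always be invoked with two arguments.",
      },
    ],
  },
};
```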
This should ensure that a page will always render successfully, even if there's errors during the Annotation fetching/parsing.
Additionally the `OperatorList.addOpList` method is also adjusted to ignore invalid data, to make it slightly more robust.
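A sketch of the added guard (simplified):

```js
// In OperatorList (simplified sketch of the added guard):
addOpList(opList) {
  if (!(opList instanceof OperatorList)) {
    warn('addOpList - ignoring invalid "opList" parameter.');
    return;
  }
  // ...append the operators/arguments from `opList` as before...
}
```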
All of the other *similar* helper functions only target one file per function, and there's no particular reason for this one to be different.
This patch will simplify future changes, e.g. experimenting with using `gulp watch` instead of SystemJS for the development viewer.
Not only is there no code depending on it now, but the actual task itself doesn't even make sense as-is. Note that it uses the default `DEFINES` configuration *unaltered*, which is neither useful nor correct since the resulting build thus won't make sense without an actual build target set.
Given that none of the PDF.js contributors know React, maintaining and/or providing support for the example isn't really feasible, unfortunately.
Even something as simple as running/testing the example becomes difficult for anyone completely unfamiliar with React, and furthermore:
- It's very difficult to tell if the example demonstrates React best-practices, since the PDF.js contributors don't know React.
- We also have no reasonable way of keeping the example up-to-date with changes in React.
- The React example, in its current form, is even *hard-coding* the PDF.js version to a now unsupported version.
- The example is currently triggering "fake worker" usage, see issue 11729, which is really *really* bad. Note that the "fake worker" functionality is *only* intended as a fallback, and it should absolutely *not* under any circumstances be advertised and certainly shouldn't be triggered in official PDF.js examples.
By preserving the exception type, more fine-grained error handling can be performed via client-side logic (e.g. redirect to a search page if a PDF is not found, or to a ticket system in case of invalid PDF files).
The original exception is now re-thrown.
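For example, client code can now distinguish between the exception types exposed by the API (a sketch; the redirect targets are obviously application-specific):

```js
import {
  getDocument,
  InvalidPDFException,
  MissingPDFException,
} from "pdfjs-dist";

getDocument("document.pdf").promise.catch(reason => {
  if (reason instanceof MissingPDFException) {
    // E.g. redirect the user to a search page.
  } else if (reason instanceof InvalidPDFException) {
    // E.g. point the user to a ticket system instead.
  } else {
    throw reason; // Unknown error; let it propagate.
  }
});
```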
Fixes #11658