Rather than adding two buttons, it seems easier to simply link to the relevant section of the README instead (since it also means fewer things to keep up-to-date).
This complements the existing `PDFViewerApplication.initialized` boolean property, and may be helpful for custom implementations of the default viewer. It thus provides users of the default viewer with an alternative to setting the preference that dispatches events to the DOM (and listening for the "localized" event), since they can instead use:
```javascript
document.addEventListener("webviewerloaded", function() {
  PDFViewerApplication.initializedPromise.then(function() {
    // The viewer has now been initialized.
  });
});
```
Note that in order to avoid manually tracking the initialization state *twice*, this implementation purposely uses the `PromiseCapability` functionality to handle both `PDFViewerApplication.initialized` and `PDFViewerApplication.initializedPromise` internally.
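To illustrate the idea, here's a minimal sketch of that pattern (the capability helper below merely approximates pdf.js' `createPromiseCapability` utility, and `viewerApplication` is a stand-in object, not the actual `PDFViewerApplication`):
```javascript
// Minimal sketch of a promise capability: an object exposing the promise
// together with its resolve/reject functions and a "settled" flag.
function createPromiseCapability() {
  const capability = { settled: false };
  capability.promise = new Promise(function (resolve, reject) {
    capability.resolve = function (data) {
      capability.settled = true;
      resolve(data);
    };
    capability.reject = function (reason) {
      capability.settled = true;
      reject(reason);
    };
  });
  return capability;
}

const initializedCapability = createPromiseCapability();
const viewerApplication = {
  // The existing boolean property is derived from the capability ...
  get initialized() {
    return initializedCapability.settled;
  },
  // ... and the new promise property exposes that very same state.
  get initializedPromise() {
    return initializedCapability.promise;
  },
};

// Once initialization has finished, resolving the capability updates both.
initializedCapability.resolve();
```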
*I've had this patch locally for a while, but have apparently forgotten to upstream it.*
This simplifies enabling new ESLint rules, since most of them support automatically fixing errors, without having to edit `gulpfile.js` or invoke ESLint manually.
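As a rough illustration of the idea, the sketch below (not the actual `gulpfile.js` change) forwards an optional `--fix` argument to ESLint from a gulp task:
```javascript
// Hedged sketch: run ESLint from a gulp task and pass "--fix" along when the
// task was invoked as `gulp lint --fix`.
const { spawnSync } = require("child_process");

function lint(done) {
  const args = ["node_modules/eslint/bin/eslint", "--ext", ".js", "."];
  if (process.argv.includes("--fix")) {
    args.push("--fix");
  }
  const result = spawnSync("node", args, { stdio: "inherit" });
  done(result.status !== 0 ? new Error("ESLint failed.") : undefined);
}

exports.lint = lint;
```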
With the way that the `default_preferences.json` file is now generated at build time, the `gulp lint` task is now noticeably slower than before. This slowdown has been, and still is, somewhat annoying during the deployment of new ESLint rules.
Hence this patch, which moves the `chromium/preferences_schema.json` validation from `gulp lint` and into a new `gulp lint-chromium` task instead. *Obviously* this new task is run as part of the `gulp npm-test` task, and thus through `npm test` on Node.js/Travis, such that it's still being tested as before.
Please note that these changes do *not* affect the *public* interface of the `Metadata` class, but only touch internal data structures.[1]
These changes were prompted by looking at the `getAll` method, which simply returns the "private" metadata object to the consumer. This seems conceptually wrong, since it makes accidental changes to the internally parsed metadata far too easy.
As part of fixing this, the internal metadata was changed to use a `Map` rather than a plain Object.
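A minimal sketch of that approach, assuming method names matching the existing public interface (`get`/`getAll`/`has`), could look like this:
```javascript
// Keep the parsed metadata in a "private" Map, and have `getAll` hand out a
// plain-object *copy*, so that consumers cannot mutate the internal state.
class Metadata {
  constructor(parsedData) {
    this._metadataMap = new Map(parsedData); // e.g. [["dc:title", "Example"]]
  }

  get(name) {
    return this._metadataMap.has(name) ? this._metadataMap.get(name) : null;
  }

  getAll() {
    const obj = Object.create(null);
    for (const [key, value] of this._metadataMap) {
      obj[key] = value;
    }
    return obj;
  }

  has(name) {
    return this._metadataMap.has(name);
  }
}
```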
---
[1] Basically, we shouldn't need to worry about someone depending on internal implementation details.
- Remove the "capturing group" in the regular expression that removes leading "junk" from the raw metadata, since it's not necessary here (it's simply a case of too much copy-pasting in a prior patch).
According to [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions/Cheatsheet#Groups_and_ranges) you want to, for performance reasons, avoid "capturing groups" unless actually needed.
- Add inline comments to document a bunch of magic values in the code.
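As a small illustration (the exact pattern used in `Metadata` differs), the capturing group below adds bookkeeping for a match that is never used:
```javascript
const rawData = "leading junk<x:xmpmeta>...</x:xmpmeta>";

rawData.replace(/^([^<]+)/, ""); // Capturing group; the captured text is unused.
rawData.replace(/^[^<]+/, "");   // No group; same result: "<x:xmpmeta>...</x:xmpmeta>"
```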
Given that all `TypedArray` polyfills were removed in PDF.js version `2.0`, since native support is now required, this branch has been dead code for a while.
In practice it's extremely rare[1] for the padding to be zero in *all* components, hence it seems better to just set it directly rather than first creating a temporary variable and checking for the "no padding" case.
---
[1] In the `tracemonkey.pdf` file that only happens with `0.08%` of all text elements.
Over the years there's been a fair number of issues/PRs opened, where people have wanted to add `hasOwnProperty` checks in (hot) loops in the font parsing code. This has always been rejected, since we don't want to risk reducing performance in the Firefox PDF viewer simply because some users of the general PDF.js library are *incorrectly* extending the `Array.prototype` with enumerable properties.
With this patch the general PDF.js library will now fail immediately with a hopefully useful Error message, rather than having (some) fonts fail to render, when the `Array.prototype` is incorrectly extended.
Note that I did consider making this a warning, but ultimately decided against it: first of all, it's possible to disable warnings (with the `verbosity` parameter); secondly, even when printed, warnings can be easy to overlook; and finally, a warning may also *seem* OK to ignore (as opposed to an actual Error).
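A hedged sketch of what such a check could look like (the function name is illustrative, not necessarily the actual implementation):
```javascript
// An untouched Array.prototype has no enumerable properties, so a `for...in`
// loop over an empty array shouldn't iterate at all; if it does, some code
// has incorrectly extended the prototype.
function checkArrayPrototype() {
  for (const property in []) {
    throw new Error(
      `The Array.prototype contains the unexpected enumerable property "${property}"; ` +
        "please refrain from extending Array.prototype, since that may break font parsing."
    );
  }
}

checkArrayPrototype(); // Fails immediately in incorrectly configured environments.
```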
This patch contains some *much* needed clean-up of, and improvements to, this old code, thus addressing a number of issues:
- Set more reasonable *default* values for the widths, in `web/viewer.css`, since the current ones are actually too small even for the (default) `en-US` locale.
This obviously results in a slightly larger zoom dropdown width for many locales, but the more consistent UI does look good to me.
- Stop setting the `min-width`/`max-width` and just use `width` instead.
- Set an explicit `height` of the zoom dropdown, in an attempt to get Google Chrome to display it with the same height as the toolbar buttons.
- Remove additional `Element.setAttribute("style", ...)` usage.
- Actually check *all* of the predefined l10n strings, since the old implementation (implicitly) assumed that the currently selected one was the longest (note e.g. the `ja-JP` locale where one string is considerably longer than the rest).
- Stop invalidating the DOM multiple times when doing the measurements. This was achieved by using a temporary in-memory `canvas`, and we now only need to query the DOM once in order to get the current font properties of the zoom dropdown.
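A minimal sketch of that measurement approach (names are illustrative, and the actual code obviously lives in the viewer's toolbar handling):
```javascript
// Measure the widest of the predefined l10n strings with an in-memory canvas,
// so the DOM only needs to be queried once (for the computed font).
function measureMaxWidth(strings, selectElement) {
  const { fontSize, fontFamily } = getComputedStyle(selectElement);

  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d", { alpha: false });
  ctx.font = `${fontSize} ${fontFamily}`;

  let maxWidth = 0;
  for (const string of strings) {
    const { width } = ctx.measureText(string);
    if (width > maxWidth) {
      maxWidth = width;
    }
  }
  return maxWidth;
}
```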
This patch makes the following changes, to improve these API methods:
- Let `PDFPageProxy.cleanup` return a boolean indicating if clean-up actually happened, since ongoing rendering will block clean-up.
Besides being used in other parts of this patch, it seems that an API user may also be interested in the return value given that clean-up isn't *guaranteed* to happen.
- Let `PDFDocumentProxy.cleanup` return the promise indicating when clean-up is finished.
- Improve the JSDoc comment for `PDFDocumentProxy.cleanup` to mention that clean-up is triggered on *both* threads (without going into unnecessary specifics regarding what *exactly* said data actually is).
Add a note in the JSDoc comment about not calling this method when rendering is ongoing.
- Change `WorkerTransport.startCleanup` to throw an `Error` if it's called when rendering is ongoing, to prevent rendering from breaking.
Please note that this won't stop *worker-thread* clean-up from happening (since there's no general "something is rendering"-flag), however I'm not sure if that's really a problem; but please don't quote me on that :-)
All of the caches that are being cleared in `Catalog.cleanup`, on the worker-thread, *should* be re-filled automatically even if cleared *during* parsing/rendering, and the only thing that will probably happen is that e.g. font data has to be re-parsed.
On the main-thread, on the other hand, clearing the caches is more-or-less guaranteed to cause rendering errors, since the rendering code in `src/display/canvas.js` isn't able to re-request any image/font data that's suddenly being pulled out from under it.
- Last, but not least, add a couple of basic unit-tests for the clean-up functionality.
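A hedged usage sketch, from an API consumer's point of view, of the updated return values:
```javascript
// Assumes a document named "example.pdf" and that page rendering has finished.
(async function () {
  const pdfDocument = await pdfjsLib.getDocument("example.pdf").promise;
  const pdfPage = await pdfDocument.getPage(1);

  // ... render the page, and wait for the `RenderTask` to complete ...

  if (!pdfPage.cleanup()) {
    // Rendering was still ongoing, hence page resources were *not* released.
    console.log("Page clean-up was blocked by ongoing rendering.");
  }
  // The returned promise resolves once document clean-up has finished.
  await pdfDocument.cleanup();
})();
```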
While it would be nice to change the `PDFFormatVersion` property, as returned through `PDFDocumentProxy.getMetadata`, to a number (rather than a string), that would unfortunately be a breaking API change.
However, it does seem like a good idea to at least *validate* the PDF header version on the worker-thread, rather than potentially returning an arbitrary string.
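Such a validation step could, for example, look like the following sketch (the exact pattern used on the worker-thread may differ):
```javascript
// Only accept "major.minor" digit strings, e.g. "1.7"; anything else is
// treated as an invalid/unknown PDF header version.
const PDF_VERSION_REGEXP = /^[1-9]\.\d$/;

function validatePdfVersion(version) {
  return typeof version === "string" && PDF_VERSION_REGEXP.test(version)
    ? version
    : null;
}

console.log(validatePdfVersion("1.7"));     // "1.7"
console.log(validatePdfVersion("garbage")); // null
```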
This should hopefully be useful in environments where restrictive CSPs are in effect.
In most cases the replacement is entirely straightforward, and there are only a couple of special cases:
- For the `src/display/font_loader.js` and `web/pdf_outline_viewer.js` cases, since the elements aren't appended to the document yet, it shouldn't matter if the style properties are set one-by-one rather than all at once.
- For the `web/debugger.js` case, there's really no need to set the `padding` inline at all and the definition was simply moved to `web/viewer.css` instead.
*Please note:* There's still *a single* case left, in `web/toolbar.js` for setting the width of the zoom dropdown, which is left intact for now.
The reasons are that this particular case shouldn't matter for users of the general PDF.js library, and that it'd make a lot more sense to just try and re-factor that very old code anyway (thus fixing the `setAttribute` usage in the process).
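To illustrate the straightforward replacement mentioned above (hedged; the exact properties differ between call sites), strict CSPs without `style-src 'unsafe-inline'` block the inline `style` attribute, whereas setting the properties via the CSSOM is allowed:
```javascript
const element = document.createElement("div");

// Before (rejected by restrictive CSPs):
element.setAttribute("style", "width: 100px; height: 50px;");

// After (set the style properties directly instead):
element.style.width = "100px";
element.style.height = "50px";
```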
Having thought *briefly* about using `css-vars-ponyfill`, I'm no longer convinced that it'd be a good idea. The reason is that if we actually want to properly support CSS variables, then that functionality should be available in *all* of our CSS files.
Note in particular the `pdf_viewer.css` file that's built as part of the `COMPONENTS` target, in which case I really cannot see how a rewrite-at-the-client solution would ever be guaranteed to always work correctly and without accidentally touching other CSS in the surrounding application.
All-in-all, simply re-writing the CSS variables at build-time seems much easier and is thus the approach taken in this patch; courtesy of https://github.com/MadLittleMods/postcss-css-variables
By using its `preserve` option, the built files will thus include *both* a fallback value and the modern `var(...)` format[1]. As a proof-of-concept this patch removes a couple of manually added fallback values, and converts an additional sidebar-related property to use a CSS variable.
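A hedged sketch of how the plugin can be applied in a PostCSS step (the actual `gulpfile.js` wiring differs):
```javascript
const postcss = require("postcss");
const postcssCssVariables = require("postcss-css-variables");

const inputCss = `
:root { --sidebar-width: 200px; }
#sidebarContainer { width: var(--sidebar-width); }
`;

// With `preserve: true` the computed fallback value is kept *alongside* the
// original var(...) declaration, as in the viewer.css diff below [1].
postcss([postcssCssVariables({ preserve: true })])
  .process(inputCss, { from: undefined })
  .then(function (result) {
    console.log(result.css);
    // #sidebarContainer { width: 200px; width: var(--sidebar-width); }
  });
```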
---
[1] Comparing the `master` branch with this patch, when using `gulp generic`, produces the following diff for the built `web/viewer.css` file:
```diff
@@ -408,6 +408,7 @@
 :root {
   --sidebar-width: 200px;
+  --sidebar-transition-duration: 200ms;
 }
 * {
@@ -550,27 +551,28 @@
   position: absolute;
   top: 32px;
   bottom: 0;
-  width: 200px; /* Here, and elsewhere below, keep the constant value for compatibility
-                   with older browsers that lack support for CSS variables. */
+  width: 200px;
   width: var(--sidebar-width);
   visibility: hidden;
   z-index: 100;
   border-top: 1px solid rgba(51, 51, 51, 1);
   -webkit-transition-duration: 200ms;
   transition-duration: 200ms;
+  -webkit-transition-duration: var(--sidebar-transition-duration);
+  transition-duration: var(--sidebar-transition-duration);
   -webkit-transition-timing-function: ease;
   transition-timing-function: ease;
 }
 html[dir='ltr'] #sidebarContainer {
   -webkit-transition-property: left;
   transition-property: left;
-  left: -200px;
+  left: calc(-1 * 200px);
   left: calc(-1 * var(--sidebar-width));
 }
 html[dir='rtl'] #sidebarContainer {
   -webkit-transition-property: right;
   transition-property: right;
-  right: -200px;
+  right: calc(-1 * 200px);
   right: calc(-1 * var(--sidebar-width));
 }
@@ -640,6 +642,8 @@
 #viewerContainer:not(.pdfPresentationMode) {
   -webkit-transition-duration: 200ms;
   transition-duration: 200ms;
+  -webkit-transition-duration: var(--sidebar-transition-duration);
+  transition-duration: var(--sidebar-transition-duration);
   -webkit-transition-timing-function: ease;
   transition-timing-function: ease;
 }
```
Based on the PDF specification, with the `v` operator the current point should be used as the first control point of the curve.
Do not overwrite the current point before the SVG curve is built, so that it can actually be used as the first control point.
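A minimal, self-contained sketch of that mapping (not the actual SVG back-end code):
```javascript
// The PDF `v` operator (curveTo2) supplies only the second control point and
// the end point; the *current* point doubles as the first control point, so
// it must be read before the current point is advanced to the end point.
function curveTo2ToSvg(current, x2, y2, x3, y3) {
  const segment = `C ${current.x} ${current.y} ${x2} ${y2} ${x3} ${y3}`;
  // Only *after* building the segment is the current point updated.
  current.x = x3;
  current.y = y3;
  return segment;
}

// Starting at (10, 10), a "20 20 30 10 v" operator becomes:
const point = { x: 10, y: 10 };
console.log(curveTo2ToSvg(point, 20, 20, 30, 10));
// -> "C 10 10 20 20 30 10"; `point` is now { x: 30, y: 10 }.
```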
Since this rule is now enforced in the entire `/web` folder, there's no good reason for it to remain disabled in this file. Most likely, it was only added to reduce the number of linting errors when block-scoped variables first started being enforced.
As can be seen in the code, the `xScaleBlockOffset` typed array doesn't depend on the actual image data but only on the width and x-scale. The width is obviously consistent for an image, and it turns out that in practice the `componentScaleX` is quite often identical between two (or more) adjacent image components.
All-in-all it's thus not necessary to *unconditionally* re-compute the `xScaleBlockOffset` when getting the JPEG image data.
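A hedged sketch of the caching idea (variable names are illustrative, not the exact `JpegImage` internals):
```javascript
// Only rebuild the offsets when the x-scale actually changed compared to the
// previously processed component; adjacent components often share the scale.
let lastComponentScaleX = 0;
let xScaleBlockOffset = null;

function getXScaleBlockOffset(width, componentScaleX) {
  if (
    !xScaleBlockOffset ||
    xScaleBlockOffset.length !== width ||
    componentScaleX !== lastComponentScaleX
  ) {
    xScaleBlockOffset = new Uint32Array(width);
    for (let x = 0; x < width; x++) {
      // Map an output x-coordinate to the x-coordinate of its source sample.
      xScaleBlockOffset[x] = (componentScaleX * x) | 0;
    }
    lastComponentScaleX = componentScaleX;
  }
  return xScaleBlockOffset;
}
```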
While avoiding, in many cases, one or more loops can never be a bad thing, these changes are unfortunately completely dominated by the rest of the `JpegImage` code and consequently don't really show up in benchmark results. *Hence I'd understand if this patch is ultimately deemed unnecessary.*