Given that none of the relevant options are marked as optional in the code/JSDocs, and that the `PDFDocumentProperties` class is specific to the default viewer (and not exposed as part of the viewer components), there's no good reason as far as I can tell for these checks.
The `BaseViewer.setDocument` method in particular is necessary for rendering to start when the viewer loads, hence you obviously want that to happen as soon as possible and without any unnecessary delays.
Unfortunately *some* API calls need to be made before that (note the existing comments); however, the `ViewHistory` initialization (and the subsequent fetching of data) in particular can be moved slightly without any adverse effects.
As part of testing I've inserted logging with `performance.now()` in various parts of this code, and there are *obviously* no discernible changes between `master` and this patch for e.g. when rendering starts in the viewer.
*Note:* The vast majority of this patch is simple indentation changes, which were forced by Prettier (and done automatically with `gulp lint --fix`).
- Ensure that `database.files` actually contains an Array, rather than some arbitrary data.
- Only try to look up an existing entry when the `database` existed on load, since there's obviously nothing to find when `database.files = []` was set (this case is very common in the MOZCENTRAL build, since `sessionStorage` is used there); see the sketch after this list.
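A rough sketch of what this validation could look like (the `database`/`this.fingerprint` names follow the existing `ViewHistory` code, but the details here are illustrative rather than the exact patch):

```js
// Illustrative sketch only; `databaseStr` comes from the (already awaited)
// storage read, and the actual patch may differ in detail.
const database = JSON.parse(databaseStr || "{}");
let index = -1;
if (Array.isArray(database.files)) {
  // The database existed on load, so an entry for the current document
  // may already be present.
  for (let i = 0, ii = database.files.length; i < ii; i++) {
    if (database.files[i].fingerprint === this.fingerprint) {
      index = i;
      break;
    }
  }
} else {
  // Ensure that `database.files` actually contains an Array, rather than
  // some arbitrary data.
  database.files = [];
}
if (index === -1) {
  // No existing entry found; create a new one for the current document.
  index = database.files.push({ fingerprint: this.fingerprint }) - 1;
}
this.file = database.files[index];
```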
*This is part of a series of patches that will try to split PR 11566 into smaller chunks, to make reviewing more feasible.*
Once all the code has been fixed, we'll be able to eventually enable the ESLint `no-shadow` rule; see https://eslint.org/docs/rules/no-shadow
Letting Travis update npm packages can lead to unexpected and completely unrelated failures for any PR and/or merge, see e.g. 11719, if there are ever backwards *incompatible* changes when a package is updated.
The *exact* package versions are specified in `package-lock.json`, and they should thus be used when running tests. Note that the bots won't update npm packages, and neither should Travis as far as I'm concerned.
This property has never been documented and/or *intentionally* exposed through the API; instead the `PDFPageProxy.pageNumber` property is the documented/intended API to use here.
Hence `pageIndex` is changed to a "private" property on `PDFPageProxy` instances, and the internal API functionality is also updated to *consistently* use `this._pageIndex` rather than a mix of formats.
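In practice the change boils down to something along these lines (a simplified sketch of `PDFPageProxy`, with unrelated constructor parameters/properties omitted):

```js
// Simplified sketch; unrelated parameters and properties are omitted.
class PDFPageProxy {
  constructor(pageIndex, pageInfo, transport) {
    this._pageIndex = pageIndex; // Previously the "public" `this.pageIndex`.
    this._pageInfo = pageInfo;
    this._transport = transport;
  }

  /**
   * @type {number} Page number of the page. First page is 1.
   */
  get pageNumber() {
    return this._pageIndex + 1;
  }
}
```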
The viewer doesn't currently support executing any JavaScript, as found in some PDF documents, for security reasons. (Although there's a small hack to at least provide basic support for automatic printing on document load, without running scripts.)
However, in the event that the browser doesn't support printing, we don't run *any* of this code. In particular that means that we also don't display the "Warning: JavaScript is not supported" message, which seems strange since the JavaScript found in a PDF document can really contain *anything* (and not only printing instructions).
It thus seems reasonable, as far as I'm concerned, to always display the JavaScript warning *even* if printing happens to be unsupported.
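Conceptually that amounts to moving the warning out of the printing-support check, roughly as follows (`triggerAutoPrint` is a hypothetical stand-in for the relevant viewer code, and the actual warning shown in the viewer is localized):

```js
// Rough sketch of the intended behaviour, not the exact viewer code.
pdfDocument.getJavaScript().then(javaScript => {
  if (!javaScript || javaScript.length === 0) {
    return;
  }
  // Always warn, regardless of printing support, since the JavaScript found
  // in a PDF document can really contain *anything*.
  console.warn("Warning: JavaScript is not supported");

  if (!this.supportsPrinting) {
    return;
  }
  // Hack to support automatic printing on load, without running any scripts.
  for (const js of javaScript) {
    if (js && /\bprint\s*\(/.test(js)) {
      triggerAutoPrint();
      break;
    }
  }
});
```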
Don't accidentally accept invalid glyphNames which *appear* to follow the Cdd{d}/cdd{d} format in `PartialEvaluator._buildSimpleFontToUnicode` (issue 11697)
The /Differences array of the problematic font contains a `/c.1` entry, which is consequently detected as a *possible* Cdd{d}/cdd{d} glyphName by the existing heuristics.
Because of how the base 10 conversion is implemented, which is necessary for the base 16 special case, the parsed charCode becomes `0.1`, thus causing `String.fromCodePoint` to throw since that obviously isn't a valid code point.
To fix the referenced issue, and to hopefully prevent similar ones in the future, the patch adds *additional* validation of the charCode found by the heuristics.
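The failure mode is easy to demonstrate, since `String.fromCodePoint` only accepts non-negative integer code points, and the extra validation can thus be a simple integer check (a sketch only; the exact condition used in the patch may differ):

```js
// String.fromCodePoint only accepts non-negative integer code points:
String.fromCodePoint(0x41); // "A"
// String.fromCodePoint(0.1); // Throws a RangeError: not a valid code point.

// Sketch of the extra validation; `code` is the charCode parsed from a
// potential Cdd{d}/cdd{d} glyphName, e.g. `0.1` for the invalid "/c.1" entry.
const code = 0.1;
let glyphName = "";
if (code >= 0 && Number.isInteger(code)) {
  glyphName = String.fromCodePoint(code);
}
```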
*This is part of a series of patches that will try to split PR 11566 into smaller chunks, to make reviewing more feasible.*
Once all the code has been fixed, we'll be able to eventually enable the ESLint `no-shadow` rule; see https://eslint.org/docs/rules/no-shadow
Trying to enable the ESLint rule `no-shadow`, against the `master` branch, would result in a fair number of errors in the `Glyph` class in `src/core/fonts.js`.
Since the glyphs are exposed through the API, we can't very well change the `isSpace` property on `Glyph` instances. Thus the best approach seems, at least to me, to simply rename the `isSpace` helper function to `isWhiteSpace` which shouldn't cause any issues given that it's only used in the `src/core/` folder.
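For reference, the helper only checks the usual PDF white-space characters, so the rename is purely mechanical; roughly:

```js
// Essentially the existing helper, just renamed from `isSpace`; checks the
// white-space characters used throughout the PDF parsing code.
function isWhiteSpace(ch) {
  return ch === 0x20 || ch === 0x09 || ch === 0x0d || ch === 0x0a;
}
```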
Rather than duplicating the lookup and caching in multiple files, it seems easier to simply move all of this functionality into `src/shared/util.js`.
This will also help avoid a bunch of ESLint errors once the `no-shadow` rule is eventually enabled.
While Firefox originally used transition effects for browser UI toolbar buttons, that was removed years ago in https://bugzilla.mozilla.org/show_bug.cgi?id=1393057
Since the PDF.js viewer toolbar transitions were likely based on the Firefox ones, it seems reasonable that these transition effects are removed from PDF.js as well. Besides removing a bunch of CSS, this also makes the toolbar feel ever so slightly more "snappy" without these delays on mouse interaction.
(In order to make it more feasible to modernize/improve the viewer UI, trying to clean up/simplify the existing rules iteratively seems like the most reasonable way to make any progress here w.r.t. being able to test/review things.)
When the reftest analyzer shows magnified pixels, there is a seemingly random offset between the mouse position and the magnified position. The reason for this is that the reftest analyzer assumes that all images are 800 × 1000 pixels, while the test images actually have varying sizes.
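A minimal sketch of the idea, assuming the magnifier simply reads the natural size of the currently displayed test image instead of relying on hard-coded 800 × 1000 values (the function and parameter names here are hypothetical):

```js
// Hypothetical helper: map a mouse position over the displayed <img> to the
// corresponding pixel in the underlying test image, using its actual size.
function magnifiedPosition(image, event) {
  const rect = image.getBoundingClientRect();
  const scaleX = image.naturalWidth / rect.width;
  const scaleY = image.naturalHeight / rect.height;
  return {
    x: (event.clientX - rect.left) * scaleX,
    y: (event.clientY - rect.top) * scaleY,
  };
}
```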
*This patch fixes something that's annoyed me every now and then over the years, when debugging/fixing corrupt PDF documents.*
For corrupt PDF documents where `Lexer.getHexString` encounters invalid characters, there's very rarely just a handful of them. In practice it's not uncommon for there to be many hundreds, or even many thousands, invalid hex characters found.
Not only is the resulting console warning spam utterly useless in these cases, there's often enough of it that performance may even suffer; hence this patch, which limits the number of messages that any one `Lexer.getHexString` invocation may print.
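The limiting itself can be done with a simple counter; the following self-contained sketch shows the pattern (the real patch applies it inside `Lexer.getHexString`, and the threshold/messages here are illustrative):

```js
const MAX_WARNINGS = 5; // Hypothetical cap on the number of printed warnings.

function parseHexDigits(str) {
  let invalidCount = 0;
  const digits = [];
  for (const ch of str) {
    if (/[0-9a-fA-F]/.test(ch)) {
      digits.push(ch);
    } else if (++invalidCount <= MAX_WARNINGS) {
      console.warn(`getHexString - ignoring invalid character: "${ch}"`);
    }
  }
  if (invalidCount > MAX_WARNINGS) {
    // Summarize the remaining warnings, rather than spamming the console.
    console.warn(
      `getHexString - ignored ${invalidCount - MAX_WARNINGS} additional ` +
        "invalid characters."
    );
  }
  return digits.join("");
}
```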
The PDF document in question is *corrupt*, since it contains an XObject with a truncated dictionary and where the stream contents start without a "stream" operator.
For reasons that I don't even pretend to understand, the `textLayerFactory` property is determined for *every single* page in the PDF document.
Given that the `TextLayerMode` should be consistent for *all* pages in a document, we obviously could/should define `textLayerFactory` just once instead.
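In other words, hoist the factory out of the per-page loop in `BaseViewer.setDocument`, roughly along these lines (a simplified sketch; the remaining `PDFPageView` options are unchanged and omitted here):

```js
// Simplified sketch of the relevant part of `BaseViewer.setDocument`.
const textLayerFactory =
  this.textLayerMode !== TextLayerMode.DISABLE ? this : null;

for (let pageNum = 1; pageNum <= pagesCount; ++pageNum) {
  const pageView = new PDFPageView({
    container: this.viewer,
    id: pageNum,
    defaultViewport: viewport.clone(),
    textLayerFactory, // Now computed once, before the loop.
    textLayerMode: this.textLayerMode,
    // ... remaining options unchanged ...
  });
  this._pages.push(pageView);
}
```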