The rotation handling that's currently living in `PDFViewerApplication` is *very* old, and pre-dates the introduction of the viewer components by years.
As can be seen in the `BaseViewer.pagesRotation` setter, we're not actually normalizing the rotation as intended and instead rely on the caller to handle that correctly. This is first of all inconsistent, given how other setters are implemented, and secondly it could also lead to the rotation being set to a value outside of the `[0, 360)`-range.
Finally, for improved consistency, the rotation handling in `PageViewport` is updated similarly. Please note that in this case it's *not* changing the pre-existing logic.
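For reference, the normalization itself boils down to something like the following sketch (the helper name is hypothetical; the real setter also validates the input and dispatches viewer events):
```js
// Simplified sketch of the normalization intended in the pagesRotation setter.
function normalizeRotation(rotation) {
  // Normalize to the [0, 360) range, handling negative multiples of 90 too.
  rotation %= 360;
  if (rotation < 0) {
    rotation += 360;
  }
  return rotation;
}

// e.g. normalizeRotation(-90) === 270, and normalizeRotation(450) === 90
```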
- implement few positioning properties: position, width, height, anchor;
- implement font element;
- implement fill element (used by font) and its children (linear, radial, ...);
- the font property is inherited from the ancestor container (see https://www.pdfa.org/wp-content/uploads/2020/07/XFA-3_3.pdf#page=43), so let CSS handle that;
- in order to reduce the number of properties to set, only set non-default properties and put the defaults in CSS;
- set a background to some containers to be able to see them (will be removed in a future commit).
Similar to the existing `annotationsPromise` and `_jsActionsPromise` properties, the new `_xfaPromise` should obviously also be reset, since otherwise you might end up holding onto a lot of data for pages that are no longer active.
(That caching wasn't present in the original version of PR 13069, which is why I didn't spot it until now.)
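In other words, the reset amounts to clearing the new cached promise together with the existing ones when the page is cleaned up; a rough sketch (simplified, and using a hypothetical stand-in class, since the actual `PDFPageProxy` cleanup code does considerably more):
```js
// Hypothetical stand-in for the relevant part of PDFPageProxy.
class PageCacheExample {
  constructor() {
    this.annotationsPromise = null;
    this._jsActionsPromise = null;
    this._xfaPromise = null;
  }

  cleanup() {
    // Drop the cached promises, so inactive pages don't keep their data alive.
    this.annotationsPromise = null;
    this._jsActionsPromise = null;
    this._xfaPromise = null;
  }
}
```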
- add an option to enable XFA rendering if any;
- for now, keep the canvas layer: it could be useful for implementing XFAF forms (an embedded PDF in an XML stream for the background and an XFA form for the foreground);
- UI elements in the template DOM are pretty close to their HTML counterparts, so we generate a fake HTML DOM from the template one (see the sketch after this list):
  - it makes it easier to translate template properties to HTML ones;
  - it makes the creation of the HTML elements in the main thread faster.
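Purely as an illustration of the idea (the property and element names below are hypothetical, not the actual XFA code), the translation can be thought of as producing plain objects that describe HTML elements, which are cheap to send to, and instantiate on, the main thread:
```js
// Hypothetical sketch: map an XFA template node to a plain "fake HTML" node.
function templateNodeToHTML(node) {
  return {
    name: "div",
    attributes: {
      style: {
        position: "absolute",
        left: `${node.x}px`,
        top: `${node.y}px`,
        width: `${node.w}px`,
        height: `${node.h}px`,
      },
    },
    children: node.children.map(templateNodeToHTML),
  };
}
```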
While there is nothing *outright* wrong with the existing implementation, it can however lead to increased memory usage in one particular case (that I completely overlooked when implementing this):
For "data:"-URLs, which by definition contain the entire PDF document and can thus be arbitrarily large, we obviously want to avoid sending, storing, and/or logging the "raw" docBaseUrl.
To address this, this patch makes the following changes:
- Ignore any non-string in the `docBaseUrl` option passed to `getDocument`, since those are unsupported anyway, already on the main-thread.
- Ignore "data:"-URLs in the `docBaseUrl` option passed to `getDocument`, to avoid having to send what could potentially be a *very* long string to the worker-thread (see the sketch after this list).
- Parse the `docBaseUrl` option *directly* in the `BasePdfManager`-constructors, on the worker-thread, to avoid having to store the "raw" docBaseUrl in the first place.
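A rough sketch of the main-thread filtering (parameter names are illustrative, and the actual `getDocument` parameter handling differs in its details):
```js
// Illustrative sketch only; ignore unsupported docBaseUrl values up-front.
let docBaseUrl = null;
if (typeof params.docBaseUrl === "string" && !isDataScheme(params.docBaseUrl)) {
  docBaseUrl = params.docBaseUrl;
}
// Only the filtered (possibly null) value is sent to the worker-thread,
// where the BasePdfManager-constructors parse it directly.
```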
It seems reasonable to place this alongside the *similar* `getFilenameFromUrl` helper function. This way, with the changes in the next patch, we also avoid having to expose the `isDataScheme` function in the API itself and we instead expose `getPdfFilenameFromUrl` in the API (which feels overall more appropriate).
This extends PR 13033 slightly, with a heuristic to support corrupt PDF documents where the `LineAnnotation`s have an empty /Rect-entry. Please note that while I have no idea if this is "correct", this patch at least makes us output the same /BBox as re-saving in Adobe Reader does.
This is mostly done using `gulp lint --fix`, with a few manual changes shown in the following diff:
```diff
diff --git a/src/core/pattern.js b/src/core/pattern.js
index 365491ed3..eedd8b686 100644
--- a/src/core/pattern.js
+++ b/src/core/pattern.js
@@ -105,7 +105,7 @@ const Pattern = (function PatternClosure() {
return Pattern;
})();
-var Shadings = {};
+const Shadings = {};
// A small number to offset the first/last color stops so we can insert ones to
// support extend. Number.MIN_VALUE is too small and breaks the extend.
@@ -597,16 +597,15 @@ Shadings.Mesh = (function MeshClosure() {
if (!(0 <= f && f <= 3)) {
throw new FormatError("Unknown type6 flag");
}
- var i, ii;
const pi = coords.length;
- for (i = 0, ii = f !== 0 ? 8 : 12; i < ii; i++) {
+ for (let i = 0, ii = f !== 0 ? 8 : 12; i < ii; i++) {
coords.push(reader.readCoordinate());
}
const ci = colors.length;
- for (i = 0, ii = f !== 0 ? 2 : 4; i < ii; i++) {
+ for (let i = 0, ii = f !== 0 ? 2 : 4; i < ii; i++) {
colors.push(reader.readComponents());
}
- var tmp1, tmp2, tmp3, tmp4;
+ let tmp1, tmp2, tmp3, tmp4;
switch (f) {
// prettier-ignore
case 0:
@@ -729,16 +728,15 @@ Shadings.Mesh = (function MeshClosure() {
if (!(0 <= f && f <= 3)) {
throw new FormatError("Unknown type7 flag");
}
- var i, ii;
const pi = coords.length;
- for (i = 0, ii = f !== 0 ? 12 : 16; i < ii; i++) {
+ for (let i = 0, ii = f !== 0 ? 12 : 16; i < ii; i++) {
coords.push(reader.readCoordinate());
}
const ci = colors.length;
- for (i = 0, ii = f !== 0 ? 2 : 4; i < ii; i++) {
+ for (let i = 0, ii = f !== 0 ? 2 : 4; i < ii; i++) {
colors.push(reader.readComponents());
}
- var tmp1, tmp2, tmp3, tmp4;
+ let tmp1, tmp2, tmp3, tmp4;
switch (f) {
// prettier-ignore
case 0:
@@ -897,7 +895,7 @@ Shadings.Mesh = (function MeshClosure() {
decodeType4Shading(this, reader);
break;
case ShadingType.LATTICE_FORM_MESH:
- var verticesPerRow = dict.get("VerticesPerRow") | 0;
+ const verticesPerRow = dict.get("VerticesPerRow") | 0;
if (verticesPerRow < 2) {
throw new FormatError("Invalid VerticesPerRow");
}
```
A significant portion of the code-base has now been converted to use `let`/`const`, rather than `var`, hence it should be possible to simply enable the ESLint `no-var` rule globally.
This way we can ensure that new code won't accidentally use `var`, and it also removes the need to manually enable the rule in various folders.
Obviously it makes sense to continue the efforts to replace `var`, but that should probably happen on a file and/or folder basis.
Please note that this patch excludes the following code:
- The `extensions/` folder, since that seemed easiest for now (and I don't know exactly what the support situation is for the Chromium-extension).
- The entire `external/` folder is ignored, since most of it is currently excluded from linting.
For the code that isn't imported from elsewhere (and should be ignored), we should probably (at some point) bring the code up to the same linting/formatting standard as the rest of the code-base.
- Various files in the `test/` folder are ignored, as necessary, since the way that a lot of this code is loaded will require some care (or perhaps larger re-factoring) when removing `var` usage.
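For illustration, enabling the rule globally with folder-specific exceptions could look roughly as follows (a sketch only; the project's actual ESLint configuration files and ignore patterns differ):
```js
// .eslintrc.js-style sketch, not the actual configuration.
module.exports = {
  rules: {
    "no-var": "error", // Applies to all linted code by default.
  },
  overrides: [
    {
      // Folders/files where removing `var` still requires more care
      // (the external/ folder is largely excluded from linting altogether).
      files: ["extensions/**", "test/**"],
      rules: {
        "no-var": "off",
      },
    },
  ],
};
```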
While the JSDocs have never advertised `getDocument` as supporting Node.js `Buffer`s, that apparently doesn't stop users from passing such data structures to `getDocument`.
In theory, the existing `instanceof Uint8Array` check ought to have caught Node.js `Buffer`s, however for reasons that I don't even pretend to understand that check actually passes. Hence this patch which, *only* in Node.js environments, will special-case `Buffer`s to hopefully provide slightly better out-of-the-box behaviour[1].
---
[1] Although I'm not sure that we necessarily want to advertise this in the JSDocs, given the specialized use-case.
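A minimal sketch of the special-casing (the function name is hypothetical, and the actual parameter handling in `getDocument` is more involved):
```js
// Simplified sketch; only relevant in Node.js environments, where Buffer exists.
function normalizeData(data) {
  if (typeof Buffer !== "undefined" && data instanceof Buffer) {
    // Copy the Buffer contents into a plain Uint8Array, which is supported.
    return new Uint8Array(data);
  }
  return data;
}
```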
Replace the `objectFromEntries` helper function with an `objectFromMap` one instead, and simplify the data lookup in the `AnnotationStorage.getValue` method.
Note that the majority of these changes were done automatically, by using `gulp lint --fix`, and the manual changes were limited to the following diff:
```diff
diff --git a/src/core/cff_parser.js b/src/core/cff_parser.js
index d684c200e..2e2b811e4 100644
--- a/src/core/cff_parser.js
+++ b/src/core/cff_parser.js
@@ -555,7 +555,7 @@ const CFFParser = (function CFFParserClosure() {
stackSize %= 2;
validationCommand = CharstringValidationData[value];
} else if (value === 10 || value === 29) {
- var subrsIndex;
+ let subrsIndex;
if (value === 10) {
subrsIndex = localSubrIndex;
} else {
@@ -886,15 +886,15 @@ const CFFParser = (function CFFParserClosure() {
format = bytes[pos++];
switch (format & 0x7f) {
case 0:
- var glyphsCount = bytes[pos++];
+ const glyphsCount = bytes[pos++];
for (i = 1; i <= glyphsCount; i++) {
encoding[bytes[pos++]] = i;
}
break;
case 1:
- var rangesCount = bytes[pos++];
- var gid = 1;
+ const rangesCount = bytes[pos++];
+ let gid = 1;
for (i = 0; i < rangesCount; i++) {
const start = bytes[pos++];
const left = bytes[pos++];
@@ -938,7 +938,7 @@ const CFFParser = (function CFFParserClosure() {
}
break;
case 3:
- var rangesCount = (bytes[pos++] << 8) | bytes[pos++];
+ const rangesCount = (bytes[pos++] << 8) | bytes[pos++];
for (i = 0; i < rangesCount; ++i) {
let first = (bytes[pos++] << 8) | bytes[pos++];
if (i === 0 && first !== 0) {
@@ -1173,7 +1173,7 @@ class CFFDict {
}
}
-var CFFTopDict = (function CFFTopDictClosure() {
+const CFFTopDict = (function CFFTopDictClosure() {
const layout = [
[[12, 30], "ROS", ["sid", "sid", "num"], null],
[[12, 20], "SyntheticBase", "num", null],
@@ -1229,7 +1229,7 @@ var CFFTopDict = (function CFFTopDictClosure() {
return CFFTopDict;
})();
-var CFFPrivateDict = (function CFFPrivateDictClosure() {
+const CFFPrivateDict = (function CFFPrivateDictClosure() {
const layout = [
[6, "BlueValues", "delta", null],
[7, "OtherBlues", "delta", null],
@@ -1265,11 +1265,12 @@ var CFFPrivateDict = (function CFFPrivateDictClosure() {
return CFFPrivateDict;
})();
-var CFFCharsetPredefinedTypes = {
+const CFFCharsetPredefinedTypes = {
ISO_ADOBE: 0,
EXPERT: 1,
EXPERT_SUBSET: 2,
};
+
class CFFCharset {
constructor(predefined, format, charset, raw) {
this.predefined = predefined;
@@ -1695,7 +1696,7 @@ class CFFCompiler {
// For offsets we just insert a 32bit integer so we don't have to
// deal with figuring out the length of the offset when it gets
// replaced later on by the compiler.
- var name = dict.keyToNameMap[key];
+ const name = dict.keyToNameMap[key];
// Some offsets have the offset and the length, so just record the
// position of the first one.
if (!offsetTracker.isTracking(name)) {
```
Rather than first checking if data exists before fetching it from storage, we can simply do the lookup directly and then check its value.
Note that this follows the same pattern as utilized in the `AnnotationStorage.setValue` method.
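A simplified sketch of that pattern (not the exact `AnnotationStorage` code):
```js
// Sketch only; the class name is a stand-in for the relevant storage object.
class StorageExample {
  constructor() {
    this._storage = new Map();
  }

  getValue(key, defaultValue) {
    // Look the value up directly, and only then check what was returned.
    const value = this._storage.get(key);
    return value !== undefined ? value : defaultValue;
  }
}
```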
Given that it's only used with `Map`s, and that it's currently implemented in such a way that we (indirectly) must iterate through the data *twice*, some simplification cannot hurt here.
Note that the only reason we're not using `Object.fromEntries(...)` directly, at each call-site, is that it won't guarantee that a `null` prototype is used.
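A minimal sketch of what such a helper can look like (the actual implementation may differ slightly):
```js
function objectFromMap(map) {
  // Use a null prototype, which Object.fromEntries(...) wouldn't guarantee.
  const obj = Object.create(null);
  for (const [key, value] of map) {
    obj[key] = value;
  }
  return obj;
}
```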
Now that we have scripting support, warning about e.g. JavaScript actions doesn't seem necessary anymore. Especially considering that scripting-related actions aren't (and won't be) parsed by the `Catalog.parseDestDictionary` method anyway, since it's intended for handling "simple" actions.
All of this code predates the existence of native JS classes; however, we can now clean this up a bit. This patch thus lets us remove some variable "shadowing" from the code.
This helper function is first of all only called *twice*, and secondly it also leads to unnecessary intermediate allocations given how the `TypedArray`s are handled.
Hence we can simply inline this small function, and thus directly allocate the combined `TypedArray` instead.
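That is, instead of going through the helper, the caller can allocate and fill the combined array directly, e.g. along these lines (a sketch with hypothetical variable names):
```js
// Allocate the combined TypedArray once, and copy both parts into it.
const first = new Uint8Array([1, 2, 3]);
const second = new Uint8Array([4, 5]);

const combined = new Uint8Array(first.length + second.length);
combined.set(first, 0);
combined.set(second, first.length);
// combined is now Uint8Array [1, 2, 3, 4, 5].
```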
The `compareByteArrays` helper is, first of all, duplicated in multiple closures in the `src/core/crypto.js` file. Secondly, despite its name, it's also functionally equivalent to the now-existing `isArrayEqual` helper function.
The `isArrayEqual` helper function is changed to use a standard `for`-loop, rather than `Array.prototype.every`, since that ought to be slightly more efficient given that we're now using it with (potentially) larger data.
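The changed helper then looks roughly like this (a sketch of the described `for`-loop version, not necessarily the exact shipped code):
```js
function isArrayEqual(arr1, arr2) {
  if (arr1.length !== arr2.length) {
    return false;
  }
  // A plain for-loop, rather than Array.prototype.every, to avoid the extra
  // callback invocations when comparing (potentially) larger data.
  for (let i = 0, ii = arr1.length; i < ii; i++) {
    if (arr1[i] !== arr2[i]) {
      return false;
    }
  }
  return true;
}
```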
All of this code predates the existence of native JS classes; however, we can now clean this up a bit. This patch thus lets us remove some variable "shadowing" from the code.
Note that this particular helper function is, with the exception of the `GENERIC` default viewer and the (unsupported) SVG-backend, mostly unused at this point in time. Hence we should be able to clean up this helper function slightly.
Also, this fixes a small inconsistency in the `SVGGraphics` initialization in the viewer, by passing in the `disableCreateObjectURL` compatibility-option. Given that the SVG-backend isn't officially supported/recommended this shouldn't have been an issue, but since I spotted it anyway it can't hurt to fix it.
Note how the `PDFAttachmentViewer` handles PDF file attachments specially, by opening them in a new window/tab, rather than forcing them to be downloaded. This is done to improve the overall UX, since browsers in general are able to handle PDF files internally.
However, for file *annotations* we're currently not attempting to do the same thing and are instead just downloading them directly. In order to unify the behaviour, without having to duplicate a lot of code, the opening of PDF file attachments is thus moved into a new `DownloadManager.openOrDownloadData` method.
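A simplified sketch of the idea behind such a method (the helper names and the exact signature are assumptions, not the actual viewer code):
```js
// Open PDF data in a new window/tab when possible, otherwise download it.
function openOrDownloadData(downloadManager, data, filename) {
  if (/\.pdf$/i.test(filename)) {
    // Let the browser display the PDF attachment internally.
    const blobUrl = URL.createObjectURL(
      new Blob([data], { type: "application/pdf" })
    );
    window.open(blobUrl);
    return true;
  }
  // Fall back to a regular download for non-PDF attachments.
  downloadManager.downloadData(data, filename, "");
  return false;
}
```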