- since the resetForm function resets field values, a calculateNow is consequently triggered.
But the calculate callback can itself call resetForm, hence an infinite recursive loop.
So basically, prevent calculateNow from being triggered by itself (see the first sketch after this list).
- in Firefox, the letters entered in some fields were duplicated: "AaBb" instead of "AB".
It was mainly because beforeInput was triggering a Keystroke, which was itself triggering
an input value update, and then the input event was triggered as well.
So in order to avoid that, beforeInput calls preventDefault and then it's up to the JS to
handle the event (see the second sketch after this list).
- fields have a property valueAsString which returns the value as a string. In the previous
implementation it was wrongly used to store the formatted value of a field ("2€" when the user
entered "2"). So this patch implements valueAsString correctly.
- non-rendered fields can be updated using JS, but when they are, they must store some properties
in the annotationStorage. This was implemented for field values, but not for
display, colors, ...
- it fixes #14862 and #14705.
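For the first item, the guard amounts to something like the following (a minimal sketch; the class and helper names are illustrative, not the actual pdf.js scripting code):

```js
// Minimal sketch of a re-entrancy guard; names are illustrative.
class EventDispatcher {
  constructor() {
    this._isCalculating = false;
  }

  calculateNow() {
    // If a calculate callback (e.g. one calling resetForm) re-triggers
    // calculateNow, bail out instead of recursing forever.
    if (this._isCalculating) {
      return;
    }
    this._isCalculating = true;
    try {
      this._runCalculationOrder(); // hypothetical helper running the Calculate actions
    } finally {
      this._isCalculating = false;
    }
  }

  _runCalculationOrder() {
    // ... run the document's calculation order, which may call resetForm ...
  }
}
```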
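For the second item, the beforeinput handling boils down to this pattern (sketch; `dispatchKeystroke` is a hypothetical stand-in for the hand-off to the scripting sandbox):

```js
// Sketch: cancel the browser's default text insertion and let the scripting
// code update the value exactly once, so characters aren't duplicated.
function setupTextField(input) {
  input.addEventListener("beforeinput", event => {
    event.preventDefault(); // don't let the browser insert the character itself
    // Hypothetical hand-off to the sandboxed Keystroke handling, which is now
    // solely responsible for updating input.value.
    dispatchKeystroke({
      value: input.value,
      change: event.data ?? "",
      selStart: input.selectionStart,
      selEnd: input.selectionEnd,
    });
  });
}

function dispatchKeystroke(data) {
  // ... forward the event data to the scripting sandbox ...
}
```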
Given that the `isNodeJS`-constant will, after PR 14858, *always* be `false` in non-GENERIC builds, we can simplify a couple of `getDocument`-parameter default values slightly.
The old format, with inline `PDFJSDev`-checks, wasn't exactly a wonder of readability, which was my fault.
This first of all simplifies the file, since we no longer need dummy-classes and can instead *directly* define the actual classes.
Furthermore, and more importantly, this means that we no longer need to bundle this code in e.g. MOZCENTRAL-builds, which reduces the size of the *built* `pdf.js` file slightly.
Given that the `compileType3Glyph` function *returns* a function, see `drawOutline`, we'll thus keep the surrounding scope alive. Hence it shouldn't hurt to *explicitly* mark the temporary `Uint8Array`s, used during parsing, as no longer needed. Given the current `MAX_SIZE_TO_COMPILE`-value these `Uint8Array`s may be approximately two mega-bytes large *for every* Type3-glyph.
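A simplified sketch of the pattern (not the actual `compileType3Glyph` code; `buildOutline` is a placeholder for the parsing logic):

```js
// Simplified sketch: the returned drawOutline closure keeps this whole scope
// alive, so the large parsing scratch-array is explicitly released.
function compileType3Glyph(imgData) {
  let points = new Uint8Array(imgData.width * imgData.height); // temporary data
  const outline = buildOutline(points, imgData); // placeholder for the parsing

  // Mark the temporary array as no longer needed, so that it can be garbage
  // collected even though the closure below keeps the surrounding scope alive.
  points = null;

  return function drawOutline(ctx) {
    // ... draw `outline` on the canvas context `ctx` ...
  };
}
```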
Interestingly enough this appears to be the very first case of *encoded* dest-strings, in /GoTo destination dictionaries, that we've actually come across. What's really fascinating is that it's less than a week after issue 14847, given that these issues are *somewhat* similar.
This moves the `COMPILE_TYPE3_GLYPHS`/`MAX_SIZE_TO_COMPILE`-checks into the `compileType3Glyph` function itself, which allows for some simplification at the call-site.
These changes also mean that the `COMPILE_TYPE3_GLYPHS`-check is now done *once* per Type3-glyph, rather than every time that the glyph is being rendered.
- it aims to fix https://bugzilla.mozilla.org/show_bug.cgi?id=1264608;
- it's only a partial fix for #3351;
- some tiled images have some spurious white lines between the tiles.
When the current transform is applied, the corners of an image can have
some non-integer coordinates, leading to some extra transparency being added
to handle that. So with this patch the current transform is applied to the
corner point and to the dimensions, in order to end up with integer values only.
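In other words, the destination rectangle of each tile is snapped to the pixel grid roughly like this (illustrative sketch; for a rotation-free transform the mixed terms are simply zero):

```js
// Sketch: transform the corner and the dimensions, then round, so that the
// edges of adjacent tiles land on the same integer coordinates (no gaps).
function snapRectToGrid([a, b, c, d, e, f], x, y, width, height) {
  const tx = Math.round(a * x + c * y + e);
  const ty = Math.round(b * x + d * y + f);
  const tw = Math.round(a * width + c * height);
  const th = Math.round(b * width + d * height);
  return [tx, ty, tw, th];
}
```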
As it turns out, most of the code-paths in the `PDFImage`-class won't actually pass the TypedArray (containing the image-data) to the `ColorSpace`-code. Hence we *generally* don't need to force the image-data to be a `Uint8ClampedArray`, and can just as well directly use a `Uint8Array` instead.
In the following cases we're returning the data without any `ColorSpace`-parsing, and the exact TypedArray used shouldn't matter:
- b72a448327/src/core/image.js (L714)
- b72a448327/src/core/image.js (L751)
In the following cases the image-data is only used *internally*, and again the exact TypedArray used shouldn't matter:
- b72a448327/src/core/image.js (L762) with the actual image-data being defined (as `Uint8ClampedArray`) further below
- b72a448327/src/core/image.js (L837)
*Please note:* This is tagged `api-minor` because it's API-observable, given that *some* image/mask-data will now be returned as `Uint8Array` rather than using `Uint8ClampedArray` unconditionally. However, that seems like a small price to pay to (slightly) reduce memory usage during image-conversion.
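For context, the main observable difference between the two TypedArray types is how out-of-range values are stored on write:

```js
const clamped = new Uint8ClampedArray(2);
const plain = new Uint8Array(2);

clamped[0] = 300; // stored as 255 (clamped)
clamped[1] = -20; // stored as 0   (clamped)

plain[0] = 300; // stored as 44  (wraps modulo 256)
plain[1] = -20; // stored as 236 (wraps modulo 256)
```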
Initially I considered updating the `NameOrNumberTree`-implementation to handle encoded keys, however that quickly became somewhat messy (especially in the `NameOrNumberTree.get`-method) since only NameTrees use string-keys.
Hence the easiest solution, as far as I'm concerned, was to just update the `Catalog.destinations`-getter instead. Please note that in the referenced PDF document the `Catalog.destination`-method will thus fall back to fetching all destinations, which should be fine since this is the very first case of encoded keys that we've seen.
Also changes the `NameOrNumberTree.getAll`-method to prevent a possible run-time error, although we've so far not seen such a case, for any non-Array Kids-entries found in a NameTree/NumberTree.
Finally, to improve overall consistency and to hopefully prevent future bugs, the patch also updates a couple of other `NameTree` call-sites to correctly handle encoded keys. (Note that the `Catalog.attachments`-getter was already doing this.)
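The call-site handling of such keys essentially just means running them through the existing `stringToPDFString` helper; a sketch (the iteration source and `parseDestination` are placeholders, not the actual getter code):

```js
import { stringToPDFString } from "../shared/util.js";

// Sketch: decode each raw NameTree key (which may e.g. be UTF-16BE encoded)
// before exposing it, so look-ups use the same form as everywhere else.
const dests = Object.create(null);
for (const [rawKey, value] of nameTreeEntries) { // placeholder iteration source
  dests[stringToPDFString(rawKey)] = parseDestination(value); // placeholder helper
}
```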
While working on PR 14825, I couldn't help noticing that the code to increment the `count` for cached ImageMasks was repeated multiple times. Hence it makes sense, as far as I'm concerned, to move this into a helper function instead.
Currently we only insert optionalContent-data into the operatorList the first time that an image is parsed, which will (in hindsight) obviously cause problems for cached images.
Hence we also need to insert the optionalContent-data in the various worker-thread image caches, such that it can be accessed in the fast-paths that are used to skip re-parsing of images.
In order to reduce the amount of repeated code, this patch also adds a new `OperatorList`-method that takes care of inserting the necessary data in the operatorList.
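The helper in question can look roughly like this (sketch; the method name and arguments are illustrative, while `OPS.beginMarkedContentProps`/`OPS.endMarkedContent` are existing operators):

```js
import { OPS } from "../shared/util.js";

// Sketch: emit an image operation together with its optional-content markers,
// so that the cached fast-paths replay all of the relevant operators.
function addImageOps(operatorList, fn, args, optionalContent) {
  if (optionalContent !== undefined) {
    operatorList.addOp(OPS.beginMarkedContentProps, ["OC", optionalContent]);
  }
  operatorList.addOp(fn, args);
  if (optionalContent !== undefined) {
    operatorList.addOp(OPS.endMarkedContent, []);
  }
}
```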
In the referenced PDF document the fonts have /Encoding-entries that are Streams (containing completely bogus data), which are thus obviously not valid here.
Hence, only when `ignoreErrors` is set, we'll now ignore these corrupt /Encoding-entries and fall back to the existing code to try and infer a usable encoding.
Given that this is *clearly* a case of corrupt PDF documents, there's no guarantee that this will "fix" all such cases, however it's the best that we can do here and it shouldn't really be worse than ignoring an entire font.
- most of the time the current transform is a scaling one (modulo translation),
hence it's possible to avoid applying the transform to each bbox and to instead apply
it a posteriori (see the sketch below);
- compute the bbox in the worker when it's possible.
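A sketch of the scaling fast path (illustrative; `transform` is the usual `[a, b, c, d, e, f]` matrix):

```js
// Sketch: when there's no rotation/skew (b === 0 && c === 0), the bbox can be
// accumulated untouched and the transform applied a posteriori in one go.
function applyTransformToBBox([x0, y0, x1, y1], [a, b, c, d, e, f]) {
  if (b === 0 && c === 0) {
    const tx0 = a * x0 + e, tx1 = a * x1 + e;
    const ty0 = d * y0 + f, ty1 = d * y1 + f;
    return [Math.min(tx0, tx1), Math.min(ty0, ty1), Math.max(tx0, tx1), Math.max(ty0, ty1)];
  }
  // General case: transform all four corners and take the extrema.
  const xs = [a * x0 + c * y0 + e, a * x1 + c * y0 + e, a * x0 + c * y1 + e, a * x1 + c * y1 + e];
  const ys = [b * x0 + d * y0 + f, b * x1 + d * y0 + f, b * x0 + d * y1 + f, b * x1 + d * y1 + f];
  return [Math.min(...xs), Math.min(...ys), Math.max(...xs), Math.max(...ys)];
}
```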
- it's the second part of the fix for https://bugzilla.mozilla.org/show_bug.cgi?id=857031;
- some image masks can be used several times but at different positions;
- an image needs to be pre-processed before being rendered:
* rescale it;
* use the fill color/pattern.
- the two operations above are time-consuming, so we can cache the generated canvas;
- the cache key is based on the current transform matrix (without the translation part)
and the current fill color when it isn't a pattern.
- the rendering of the pdf in the above bug is much faster with this patch.
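A sketch of such a cache key (format illustrative):

```js
// Sketch: key the pre-processed canvas on the scale/skew part of the current
// transform plus the fill color; pattern fills are simply not cached.
function getMaskCacheKey(currentTransform, fillStyle, isPatternFill) {
  if (isPatternFill) {
    return null; // patterns can't be keyed by a simple string
  }
  const [a, b, c, d] = currentTransform; // the translation part (e, f) is ignored
  return `${a}_${b}_${c}_${d}_${fillStyle}`;
}
```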
- avoid calling normalizeRect, which clones the rectangles: it's useless
and time-consuming;
- in profiling the pdf in bug 1135277, the time spent in intersect drops
from ~1s to ~30ms.
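The clone-free version essentially reduces to min/max comparisons on the raw values, along these lines (a sketch, not the exact `Util.intersect` code):

```js
// Sketch: compute the intersection of two rectangles [x0, y0, x1, y1] without
// normalizing/cloning them first; returns null when they don't overlap.
function intersect(rect1, rect2) {
  const xLow = Math.max(Math.min(rect1[0], rect1[2]), Math.min(rect2[0], rect2[2]));
  const xHigh = Math.min(Math.max(rect1[0], rect1[2]), Math.max(rect2[0], rect2[2]));
  if (xLow > xHigh) {
    return null;
  }
  const yLow = Math.max(Math.min(rect1[1], rect1[3]), Math.min(rect2[1], rect2[3]));
  const yHigh = Math.min(Math.max(rect1[1], rect1[3]), Math.max(rect2[1], rect2[3]));
  if (yLow > yHigh) {
    return null;
  }
  return [xLow, yLow, xHigh, yHigh];
}
```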
Because of a bug in previous `core-js` versions, which caused an Error to be thrown if its `structuredClone` polyfill was called with an *explicit* `null`/`undefined` transfer-parameter, the `LoopbackPort`-class contained a work-around.
In the latest `core-js` version this has been fixed, and we can thus simplify our code ever so slightly; please see https://github.com/zloirock/core-js/releases/tag/v3.22.0
This CSS variable is only used together with the `annotationCanvasMap`-functionality in the canvas-code, however its value can be *trivially* computed by using the older `--zoom-factor` CSS variable together with the `PixelsPerInch`-structure.
Rather than having *two different* CSS variables that are this closely linked, it seems better to simplify things by using just one CSS variable instead.
- write uint32 values instead of uint8 ones to avoid the clamping check on each write;
- unroll the loop that writes the data into the buffer;
- but keep a loop for the last element of a line: it likely doesn't hurt
that much since it's only executed once per line;
- I tested on a MacBook with an Apple chip: on Firefox Nightly the new
code is almost 3.5x faster than before (~1.8x with Chrome).
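The uint32 part of the change boils down to this kind of write pattern (sketch; assumes a little-endian platform and omits the loop unrolling):

```js
// Sketch: convert packed RGB data to RGBA by writing one uint32 per pixel
// through a Uint32Array view, instead of four separate clamped uint8 writes.
function rgbToRgba(src, width, height) {
  const rgba = new Uint8ClampedArray(width * height * 4);
  const dest32 = new Uint32Array(rgba.buffer);
  for (let i = 0, j = 0, ii = width * height; i < ii; i++, j += 3) {
    // Little-endian memory layout of 0xAABBGGRR is [R, G, B, A].
    dest32[i] = 0xff000000 | (src[j + 2] << 16) | (src[j + 1] << 8) | src[j];
  }
  return rgba;
}
```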
- it aims to partially fix the performance issue reported in https://bugzilla.mozilla.org/show_bug.cgi?id=857031;
- the idea is to avoid using byte arrays and to use ImageBitmaps instead, which are way faster to draw:
  * an ImageBitmap is Transferable, which means that it can be built in the worker instead of in the main thread:
    - this is achieved by using an OffscreenCanvas when it's available; there is a bug to enable them
      for pdf.js: https://bugzilla.mozilla.org/show_bug.cgi?id=1763330;
    - or by using createImageBitmap: in Firefox a task is sent to the main thread to build the bitmap, so
      it's slightly slower than using an OffscreenCanvas.
  * it's transferred from the worker to the main thread by "reference";
  * the byte buffers used to create the image data have a very short lifetime and hence the memory used is globally
    lower than before.
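A sketch of the worker-side idea (simplified; the real code also needs feature detection and fallbacks):

```js
// Sketch: build an ImageBitmap on the worker thread and transfer it, instead of
// sending the raw byte buffer and drawing it pixel-by-pixel on the main thread.
async function buildAndTransferBitmap(rgbaData, width, height) {
  const imageData = new ImageData(new Uint8ClampedArray(rgbaData), width, height);

  let bitmap;
  if (typeof OffscreenCanvas !== "undefined") {
    const canvas = new OffscreenCanvas(width, height);
    canvas.getContext("2d").putImageData(imageData, 0, 0);
    bitmap = canvas.transferToImageBitmap();
  } else {
    // In Firefox this schedules work on the main thread, so it's a bit slower.
    bitmap = await createImageBitmap(imageData);
  }

  // The ImageBitmap is Transferable: it's handed over "by reference", and the
  // short-lived byte buffer above can be collected right away.
  postMessage({ bitmap }, [bitmap]);
}
```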
- Use the localImageCache for the mask;
- Fix the pdf issue4436r.pdf: it was expected to have a binary stream for the image;
- Move the singlePixel trick from operator_list to image: this way we can use this trick even if it isn't in a set
as defined in operator_list.
We don't need to first check if the Dictionary contains the key, since trying to get a non-existent key simply returns `undefined` and we're already ensuring that the value is a boolean.
Furthermore, we shouldn't need to worry about the `Object.prototype` containing enumerable properties since the checks (in `src/core/worker.js`) done for `Array.prototype` *indirectly* also cover `Object`s. (Keep in mind that an `Array` is just a special kind of `Object` in JavaScript.)
*Please note:* This is possibly bad/wrong in general, but I figured that submitting it for review wouldn't hurt.
It seems that even Adobe Reader doesn't handle the non-ASCII characters that appear in some of the fields correctly, however it should be pretty easy to improve things on the PDF.js side.
- it aims to fix #14685;
- add a basic object to get values from the parsed datasets;
- these annotations don't have an appearance so we must create one when printing or saving.
- it aims to fix issue #14627;
- the basic idea of the recent text refactoring was to only consider the rendered visible whitespaces.
But sometimes the heuristics aren't correct, and although some whitespaces are in the text stream
they weren't in the text chunks because they were too small. Hence we added some exceptions; for example,
we always add a whitespace when it is between two non-whitespace chars, but only when they are in the same Tj.
So basically, this patch removes the constraint of having the chars in the same Tj
(by using a circular buffer to save the last two chars) but doesn't add a space when the visible space is really
too small (hence `NOT_A_SPACE_FACTOR`).
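Schematically, the cross-Tj bookkeeping works along these lines (a simplified sketch; only `NOT_A_SPACE_FACTOR` is taken from the patch, and its value here is illustrative):

```js
// Sketch: keep the last two non-whitespace glyphs across Tj boundaries, and when
// a tiny whitespace from the stream falls between them, emit it anyway unless it
// is really too small to be visible.
const NOT_A_SPACE_FACTOR = 0.03; // illustrative threshold, relative to the font size
const lastChars = [null, null]; // minimal circular buffer of the two last glyphs

function trackGlyph(glyph) {
  lastChars[0] = lastChars[1];
  lastChars[1] = glyph;
}

function shouldKeepTinyWhitespace(spaceWidth, fontSize) {
  const [prev, curr] = lastChars;
  // Previously this required prev and curr to come from the same Tj operator;
  // the buffer above makes the check work across Tj boundaries too.
  if (!prev || !curr || prev.isWhitespace || curr.isWhitespace) {
    return false;
  }
  return spaceWidth > NOT_A_SPACE_FACTOR * fontSize;
}
```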
Given that the textLayer-code has been using a `DocumentFragment` ever since PR 3356 (back in 2013), simply updating the type of the `container` property should be fine.
This patch also tries to, ever so slightly, improve the grammar of a couple of other properties in the typedef.
There's a couple of `getDocument` parameters that should be numbers, but which are currently not *fully* validated to prevent issues elsewhere in the code-base.
Also, improves validation of the `ownerDocument` parameter since we currently accept more-or-less anything here.
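The kind of validation meant here amounts to, e.g. (sketch; `rangeChunkSize`/`maxImageSize` are existing `getDocument` parameters, while the exact fallback behaviour shown is illustrative):

```js
const DEFAULT_RANGE_CHUNK_SIZE = 65536; // 64 KiB, the existing default

// Sketch: validate numeric parameters up front, rather than letting bad
// values surface as confusing errors elsewhere in the code-base.
const rangeChunkSize =
  Number.isInteger(params.rangeChunkSize) && params.rangeChunkSize > 0
    ? params.rangeChunkSize
    : DEFAULT_RANGE_CHUNK_SIZE;

const maxImageSize =
  Number.isInteger(params.maxImageSize) && params.maxImageSize > -1
    ? params.maxImageSize
    : -1; // -1 means "no limit" for this option
```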
Note that the Prettier update made it possible to move a couple of comments after `default:`-cases back to their original/intended positions, please see https://prettier.io/blog/2022/03/16/2.6.0.html
This patch removes the existing `forEach` methods, in favor of making the classes properly iterable instead. Given that the classes use a `Set` and a `Map` internally, respectively, implementing this is very easy/efficient and allows us to simplify some existing code.
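Concretely, making such a class iterable just means delegating to the underlying collection's iterator, e.g. (sketch, with an illustrative class name):

```js
// Sketch: expose the internal Set's iterator instead of a custom forEach method.
class RefSetLike {
  #refs = new Set();

  put(ref) {
    this.#refs.add(ref);
  }

  [Symbol.iterator]() {
    return this.#refs.values();
  }
}

// Call-sites can now use plain for...of loops and the spread syntax:
//   for (const ref of refSet) { ... }
//   const allRefs = [...refSet];
```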