In the rare situation that an optional content dictionary lacks a /Type-entry, we currently throw, which may prevent e.g. Form XObjects from rendering completely.
Fixes https://bugs.ghostscript.com/show_bug.cgi?id=707147
Especially on slower bots there is some time between clicking the
element and the actual visibility change, but we didn't await this and
checked the visibility state immediately after clicking. This can be
reproduced 100% of the time by introducing a delay in the `display` and
`hidden` handlers of the `_commonActions` shadow call.
This commit fixes the problem by waiting until the first visibility
change actually happened before continuing with the assertions.
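A minimal sketch of the waiting pattern, assuming a hypothetical `#trigger`/`#popup` pair (the real test uses the actual annotation elements; `page` is the Puppeteer page in the integration test):

```js
// Inside the integration test, where `page` is the Puppeteer page.
await page.click("#trigger");

// Don't assert immediately after the click: wait until the first visibility
// change has actually happened, even on slower bots.
await page.waitForFunction(
  sel => !document.querySelector(sel).hidden,
  {},
  "#popup"
);
```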
This integration test currently fails intermittently on the bots because
of the fixed timeout in the test, which is sometimes too low on slower
systems. The issue can be reproduced 100% of the time by introducing a
delay just before dispatching the `switchannotationeditormode` event.
Puppeteer also discourages this and recommends waiting for a
selector instead, which we now do here. This ensures that the test only
continues if the element under test is available and therefore prevents
any timing problems.
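Roughly, the change boils down to the following (the selector is only a placeholder for the element under test; `page` is the Puppeteer page):

```js
// Before: a fixed timeout, which is sometimes too short on slower systems.
// await page.waitForTimeout(2000);

// After: wait for the element under test itself, so the test only continues
// once it's actually available in the DOM.
await page.waitForSelector(".annotationEditorLayer", { visible: true });
```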
The x/y-coordinates are floats instead of integers like one might
expect. The current approach rounds both the old and the new
coordinates in order to do integer comparison. However, rounding each
coordinate individually causes too much loss of precision because,
depending on the decimal value, they are either rounded up or down,
which causes intermittent off-by-one errors.
This commit fixes the problem by comparing coordinate differences
instead of the coordinates themselves. The precision loss is avoided
by subtracting the old from the new coordinate as-is and only rounding
the final result.
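A small illustration of the precision loss (the numbers are made up, but show the off-by-one effect):

```js
const oldX = 9.4; // illustrative values only
const newX = 10.6;

// Rounding each coordinate individually can be off by one:
Math.round(newX) - Math.round(oldX); // 11 - 9 === 2

// Rounding only the final difference avoids the precision loss:
Math.round(newX - oldX); // Math.round(1.2...) === 1
```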
This integration test currently fails intermittently on the bots because
of the fixed timeout in the test, which is sometimes too low on slower
systems. The issue can be reproduced 100% of the time by introducing a
delay in the `WidgetAnnotationElement.showElementAndHideCanvas` method.
Puppeteer also discourages this and recommends waiting for a
selector instead, which we now do here. This ensures that the test only
continues if the element under test is available and therefore prevents
any timing problems.
We already use `page.$eval` in most other integration tests and it's
simpler because it already takes the selector as argument, so we don't
have to do a separate `querySelector` call ourselves.
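For example, the difference looks roughly like this, with a hypothetical `#inputField` selector (`page` is the Puppeteer page):

```js
// Before: a manual `querySelector` call inside `page.evaluate`.
const before = await page.evaluate(
  () => document.querySelector("#inputField").value
);

// After: `page.$eval` takes the selector as an argument directly.
const after = await page.$eval("#inputField", el => el.value);
```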
When there is no tree, the tags for the new annotations are just put under the root element.
When there is a tree, we insert the new tags at the right place using the value
of structTreeParentId (added in PR #16916).
The unit test is re-enabled because it no longer seems to fail after 10
runs on Linux where this used to fail often. Code inspection also shows
that the code is correct and should not raise the previous exception
anymore. Finally, a lot has changed since this test was disabled, such
as new Jasmine versions, new Linux bot OS version and new browser
versions.
Over time the amount of "document level" data potentially needed during parsing of Annotations has increased a fair bit, which means that we currently need to ensure that a bunch of data is available for each individual Annotation.
Given that this data is "constant" for a PDF document we can instead create (and cache) it lazily, only when needed, *before* starting to parse the Annotations on a page. This way the parsing of individual Annotations should become slightly less asynchronous, which really cannot hurt.
An additional benefit of these changes is that we can reduce the number of parameters that need to be explicitly passed around in the annotation-code, which helps overall readability in my opinion.
One potential drawback of these changes is that the `AnnotationFactory.create` method no longer handles "everything" on its own, however given how few call-sites there are I don't think that's too much of a problem.
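Conceptually the caching looks something like the sketch below; all names are made up and this is not the actual pdf.js implementation:

```js
class DocumentLike {
  constructor() {
    this._annotationGlobalsPromise = null;
  }

  // Stand-ins for the real "document level" lookups (AcroForm data, etc.).
  async _loadAcroForm() {
    return {};
  }

  async _loadFieldObjects() {
    return {};
  }

  // Created lazily, only once, right before parsing a page's Annotations,
  // and then shared by every individual Annotation in the document.
  ensureAnnotationGlobals() {
    if (!this._annotationGlobalsPromise) {
      this._annotationGlobalsPromise = Promise.all([
        this._loadAcroForm(),
        this._loadFieldObjects(),
      ]).then(([acroForm, fieldObjects]) => ({ acroForm, fieldObjects }));
    }
    return this._annotationGlobalsPromise;
  }
}
```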
The classes were stripped out when creating the field name, but
this led to a wrong name.
Since class components in a path are irrelevant, they're just ignored
when searching for a node in the datasets.
The focus callback must only be called when the element has been blurred.
For example, the blur callback (which implies some potential validation) is not called
because the newly focused element is another tab, an alert dialog, etc., so consequently
the focus callback mustn't be called when the element gets its focus back.
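A rough sketch of the pairing; the function and callback names are made up, and the real logic lives in the annotation/scripting event handling:

```js
// `element` is any focusable DOM element; the callbacks are placeholders.
function wireFocusCallbacks(element, runBlurCallback, runFocusCallback) {
  let blurCallbackRan = false;

  element.addEventListener("blur", event => {
    // The blur callback only runs when the focus moved to another element on
    // the page; switching to another tab or an alert dialog doesn't count.
    if (event.relatedTarget) {
      blurCallbackRan = true;
      runBlurCallback(event);
    }
  });

  element.addEventListener("focus", event => {
    // Only run the focus callback if a matching blur callback ran before,
    // i.e. not when the element merely gets its focus back.
    if (blurCallbackRan) {
      blurCallbackRan = false;
      runFocusCallback(event);
    }
  });
}
```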
The goal is to always have something which is focusable to let the user select
it with the keyboard.
It fixes the mentioned bug because the annotation layer will now have a container
to attach the canvas to for annotations that have their own canvas.
Testing the `tagged_stamp.pdf` document locally in the viewer, I noticed that e.g. the /Alt entry for the StampAnnotation contains "Secondary text for stamp\u0000".
Elsewhere in the viewer we're skipping null-chars and it's easy enough to do that in the `StructTreeLayerBuilder` class as well. (Note that we generally let the API itself return the data as-is.)
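The clean-up itself is trivial, something along these lines:

```js
// Strip embedded null-chars from the string, e.g. for the /Alt entry:
const alt = "Secondary text for stamp\u0000";
const cleaned = alt.replaceAll("\u0000", ""); // "Secondary text for stamp"
```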
The issue described in the mentioned bug occurs because
Acrobat is rendering the XFA instead of the Acroform.
The original patch just tried to work around the issue, but it
induced some regressions.
Currently this unit-test will pass just fine if compression is disabled, e.g. by commenting out the relevant code in the `src/core/writer.js` file.
While we don't have a simple way of *directly* checking that the Annotation text-content is compressed, we can however use the resulting file-size as a fairly good proxy. (Note that if compression is disabled the file-size is more than doubled.)
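The check in the unit-test is therefore something along the lines of the sketch below; the threshold is just an example value, and `pdfDocument`/`expect` are the usual objects in the Jasmine test:

```js
// Save the document and use the resulting size as a compression proxy;
// with compression disabled the size would be more than twice as large.
const data = await pdfDocument.saveDocument();
expect(data.length).toBeLessThan(90000); // threshold is only illustrative
```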
Please note that for performance reasons it's not really advised to use the same worker-thread *in parallel* for parsing multiple PDF documents, since they will then unnecessarily compete for resources.
However, given that it's still possible to do that e.g. when using the global `workerPort` it probably won't hurt to add a unit-test for this particular situation.
Given that the `PDFDocumentLoadingTask.destroy()`-method is documented as being asynchronous, you thus need to await its completion before attempting to load a new PDF document when using the global `workerPort`.
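In other words, with the global `workerPort` the intended usage is roughly the following (sketch; the file names are placeholders and `pdfjsLib` is the pdf.js entry point):

```js
const loadingTask1 = pdfjsLib.getDocument("file1.pdf");
await loadingTask1.promise;
// `destroy()` is asynchronous, so await its completion before re-using the
// global worker for another document.
await loadingTask1.destroy();

const loadingTask2 = pdfjsLib.getDocument("file2.pdf");
await loadingTask2.promise;
```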
If you don't await destruction as intended then a new `getDocument`-call can remain pending indefinitely, without any kind of indication of the problem, as shown in the issue.
In order to improve the current situation, without unnecessarily complicating the API-implementation, we'll now throw during the `getDocument`-call if the global `workerPort` is in the process of being destroyed.
This part of the code-base has apparently never been covered by any tests, hence the patch adds unit-tests for both the *correct* usage (awaiting destruction) as well as the specific case outlined in the issue.
The main stamp button will be used to just enter an add/edit image mode:
- the user can add a new image using the new button.
- the user can edit an image by resizing or moving it.
In image mode, when the user clicks on the page but not on an editor,
then all the selected editors will be unselected.
Given that the FieldObjects are parsed in parallel, in combination with the existing caching in the `getPage`-method and `annotations`-getter, adding additional caches for this fallback code-path doesn't seem entirely necessary.
When moving an element in the DOM, the focus is potentially lost, so we need to make sure
that the element focused before the move gets its focus back afterwards.
But we must take care not to execute any focus/blur callbacks, because the user didn't
do anything that should trigger such events: it's an implementation detail. For example,
when several editors are selected and moved, the same ones must still be selected at the end, so
no element must receive a focus event which would set it as the only selected one.
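Roughly, the idea is the following; all names are made up and the callback suppression is only hinted at via a wrapper:

```js
// `editorDiv` is the element being moved and `parentLayer` its new parent;
// `suppress` stands in for whatever mechanism disables the callbacks.
function moveEditorKeepingFocus(editorDiv, parentLayer, suppress) {
  const activeElement = document.activeElement;
  const hadFocus = editorDiv.contains(activeElement);

  parentLayer.append(editorDiv); // moving the element can drop the focus

  if (hadFocus) {
    // Restore the focus without letting the focus/blur callbacks run: the
    // user didn't do anything, this is purely an implementation detail.
    suppress(() => activeElement.focus({ preventScroll: true }));
  }
}
```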
There are two rotations we have to deal with: the viewer one and the editor one.
The previous implementation was a bit complex and having to deal with these
rotations would have potentially increased that complexity.
So this patch aims to simplify the implementation and deal with all the possible
cases.
The main idea is to transform the mouse deltas according to the rotations and then
apply the resizing in the page coordinates system.
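For example, transforming the deltas boils down to rotating the mouse-movement vector; the function below is only a sketch, and the name and sign convention are assumptions:

```js
// `totalRotation` would combine the viewer and editor rotations, in degrees.
function toPageDeltas(dx, dy, totalRotation) {
  const angle = (totalRotation * Math.PI) / 180;
  const cos = Math.cos(angle);
  const sin = Math.sin(angle);
  // Rotate the screen-space delta into the page coordinate system, so the
  // resizing logic never has to special-case the rotation itself.
  return [dx * cos + dy * sin, -dx * sin + dy * cos];
}
```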