*Please note:* These changes only affect the GENERIC build, since `NullL10n` is only a stub elsewhere (see PR 17135).
After the changes in PR 17115, which modernized and improved l10n-handling, the `NullL10n`-implementation is no longer a good fallback for the "proper" `L10n`-classes.
To improve this situation, especially for the *standalone* viewer-components, this patch makes the following changes:
- Let the `NullL10n`-implementation extend an actual `L10n`-class, which is constant and lazily initialized, to ensure that it works *exactly* like the "proper" ones.
- Automatically bundle the "en-US" l10n-strings in the build, via the pre-processor, such that we don't need to remember to manually update them.
- Ensure that the *standalone* viewer-components register their DOM-elements for translation, similar to the default viewer, since this will allow future code improvements by using "data-l10n-id"/"data-l10n-args" in most (if not all) parts of the viewer (see the sketch below).
- Remove the `NullL10n` from the `AnnotationLayer`, to avoid affecting bundle size too much.
Third-party users that access the `AnnotationLayer`, as exposed in the main PDF.js library, will now need to *manually* register it for translation. (However, the *standalone* viewer-components still work given the point above.)
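As a rough illustration of the attribute-based translation mentioned above (the element, string-id and arguments below are made up; Fluent's DOM bindings read these attributes when translating registered roots):

```js
// Hypothetical sketch: mark an element for translation, rather than
// setting its textContent manually. The string-id here is invented.
const span = document.createElement("span");
span.setAttribute("data-l10n-id", "pdfjs-example-label");
span.setAttribute("data-l10n-args", JSON.stringify({ count: 2 }));
```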
Use existing helper to calculate the Box
Co-authored-by: Jonas Jenwald <jonas.jenwald@gmail.com>
Ensure that there are non-zero
Co-authored-by: Jonas Jenwald <jonas.jenwald@gmail.com>
Add a reference test for #17147
- For the generic viewer we use @fluent/dom and @fluent/bundle
- For the built-in PDF viewer in Firefox, we set a localization URL
and then we rely on document.l10n which is a DOMLocalization object.
The current test fails intermittently only on Windows for unknown
reasons: the code is correct and on Linux it always passes. However, we
have already spent quite a lot of time on this test, so rather than
spending even more time on it I figured we should look at what behavior
the test is trying to check and find an alternative way to do it that
can't trigger this intermittent issue anymore.
This commit changes the test to use a term that only exists once in the
entire document so we cannot accidentally highlight another match
anymore. This doesn't change anything about the behavior that this test
aims to check: we still test searching in the XFA layer, we still test
that the original term is matched case-insensitively and we still test
that that match is actually highlighted. Note that the only objective of
the test is confirming that the search functionality covers the XFA
layer, so the exact phrase/match is not the interesting bit.
Given that we only use standard `import`/`export` statements now, after recent PRs, the "exports" global is unused.
Instead we add "__non_webpack_import__" to the `globals` to avoid having to sprinkle disable statements throughout the code.
Finally, the way that `globals` are defined has changed in ESLint and we should thus explicitly specify them as "readonly"; please find additional details at https://eslint.org/docs/latest/use/configure/language-options#specifying-globals
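A minimal sketch of what such a configuration entry looks like (the exact file contents are assumed, not quoted from the repository):

```js
// ESLint configuration sketch: declare the global as "readonly", as per
// the linked documentation, instead of the legacy boolean form.
{
  "globals": {
    "__non_webpack_import__": "readonly"
  }
}
```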
The previous change that set the timeout had an effect: we have since
seen quite a few protocol timeouts correctly being raised in the context
of the active test. However, we have also still seen a handful of cases
where this wasn't the case and the one second difference turned out to
be too low (likely because the operation was started slightly after one
second into the test run). We therefore tweak the value to be 75% of the
Jasmine timeout. This should be enough to catch operations that happen
later on in the test run, and if a single operation takes that long any
hope for success is already gone anyway.
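As a sketch of the relation described here (assuming the 30 second Jasmine timeout mentioned elsewhere in these notes, and Puppeteer's `protocolTimeout` launch option):

```js
import puppeteer from "puppeteer";

const JASMINE_TIMEOUT = 30_000; // ms, as configured in the boot files

const browser = await puppeteer.launch({
  // 75% of the Jasmine timeout, so that protocol (CDP) errors are
  // raised in the context of the test that actually triggered them.
  protocolTimeout: JASMINE_TIMEOUT * 0.75,
});
```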
It's not necessary because we have configured silent printing for
Firefox and Chrome in the browser arguments we pass in `test.mjs`. This
means that the print dialog is not even shown at all or disappears
automatically once printing is done, so the Escape key press serves no
purpose. Since it has been shown to time out, likely because the page
loses focus during printing, and because the page itself doesn't know
when the printing dialog is shown and we therefore can't possibly do the
key press at the right time anyway, this commit gets rid of it to
stabilize the test.
Given the amount of work put into removing `require`-calls from the code-base, let's ensure that new ones aren't accidentally added in the future.
Note that we still have a couple of files where `require` is being used, in particular:
- The Node.js examples, however those will be updated to use `import` in PR 17081.
- The Webpack examples, and related support files, however I unfortunately don't know enough about Webpack to be able to update those. (Hopefully users of that code will help out here, once version `4` is released.)
- The `statcmp`-tool, since *some* of those `require`-calls cannot be converted to `import` without other code changes (and that file is only used during benchmarking).
Please find additional details at https://github.com/import-js/eslint-plugin-import/blob/main/docs/rules/no-commonjs.md
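For reference, enabling the linked rule looks roughly like this (a sketch, not the project's exact configuration):

```js
// ESLint configuration sketch: with eslint-plugin-import enabled, this
// rule forbids require()/module.exports in favor of import/export.
{
  "plugins": ["import"],
  "rules": {
    "import/no-commonjs": "error"
  }
}
```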
This *finally* allows us to mark the entire PDF.js library as a "module", which should thus conclude the (multi-year) effort to re-factor and improve how we import files/resources in the code-base.
This also means that the `gulp ci-test` target, which is what's run in GitHub Actions, now uses JavaScript modules since that's supported in modern Node.js versions.
The default protocol timeout is 180 seconds according to the
documentation at https://pptr.dev/api/puppeteer.browserconnectoptions,
but the Jasmine timeout we configure in the individual boot files is 30
seconds. The consequence of this is that if a protocol (CDP) error
occurs after 30 seconds Jasmine will fail the test, but the actual
protocol error from Puppeteer is raised much later in the context of
another test, which causes unrelated failures or tracebacks.
This commit fixes the problem by configuring Puppeteer to always use a
lower protocol timeout than the Jasmine timeout so that protocol errors
are always raised in the context of the test that actually triggered it.
The Windows bot is usually slower than the Linux bot, and therefore
text layer rendering is as well. However, the `autoprint` test awaited
text layer rendering to complete before activating the selector check,
which made it timing-sensitive and caused it to never resolve because
the page was already printed (and the printed page div removed) by then.
This commit should fix the issue by activating the selector check as
soon as possible, namely as soon as the viewer appears, which should
ensure we're always registering the selector check in time because we're
doing it even before rendering is starting.
At this point in time all browsers, and also Node.js, support standard `import`/`export` statements and we can now finally consider outputting modern JavaScript modules in the builds.[1]
In order for this to work we can *only* use proper `import`/`export` statements throughout the main code-base, and (as expected) our Node.js support made this much more complicated since both the official builds and the GitHub Actions-based tests must keep working.[2]
One remaining issue is that the `pdf.scripting.js` file cannot be built as a JavaScript module, since doing so breaks PDF scripting.
Note that my initial goal was to try and split these changes into a couple of commits, however that unfortunately didn't really work since it turned out to be difficult for smaller patches to work correctly and pass (all) tests that way.[3]
This is a classic case of every change requiring a couple of other changes, with each of those changes requiring further changes in turn and the size/scope quickly increasing as a result.
One possible "issue" with these changes is that we'll now only output JavaScript modules in the builds, which could perhaps be a problem with older tools. However it unfortunately seems far too complicated/time-consuming for us to attempt to support both the old and modern module formats, hence the alternative would be to do "nothing" here and just keep our "old" builds.[4]
---
[1] The final blocker was module support in workers in Firefox, which was implemented in Firefox 114; please see https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import#browser_compatibility
[2] It's probably possible to further improve/simplify especially the Node.js-specific code, but it does appear to work as-is.
[3] Having partially "broken" patches, that fail tests, as part of the commit history is *really not* a good idea in general.
[4] Outputting JavaScript modules was first requested almost five years ago, see issue 10317, and nowadays there *should* be much better support for JavaScript modules in various tools.
Setting the alpha-value explicitly to `1` in `rgb` colors is unnecessary, since that's the default value (e.g. `rgb(255 255 255 / 1)` is equivalent to `rgb(255 255 255)`), and this way we ever so slightly reduce the size of our CSS files.
Unfortunately I've not found a Stylelint rule to enforce this automatically, and the patch was generated using search and replace.
When an editing button is disabled, focused and the user presses Enter (or space), an
editor is automatically added at the center of the current page.
Further editors can then be created using the same keys within the focused page.
When an element has the hasOwnCanvas flag, we must have an HTML container to attach
the canvas on which the element will be rendered.
So the noHTML flag must take this information into account:
- in some cases the noHTML flag is reset depending on the hasOwnCanvas value;
- in others, the hasOwnCanvas flag is set depending on the value of noHTML.
To reduce the risk of regressing something else, given that the issue only applies to a (for the default viewer) non-default configuration, this patch is purposely limited to only TextWidget-annotations in the display layer.
It happens only on Windows with Chrome.
For some reason the click event isn't correctly triggered, but it seems
to work correctly with pointerup.
And it seems that when drawing an SVG on an OffscreenCanvas we need to wait
a little in order to be able to transfer it: that's why this patch adds
a check on the canvas content.
This has been deprecated since version `2.15.349`, which was released a year ago.
Removing this will also simplify some upcoming changes, specifically outputting of JavaScript modules in the builds.
This removes the only remaining old and non-standard handling of exports in the `web/`-folder, since some initial attempts at outputting JavaScript modules in the builds have identified this file as a potential problem.
While this uses a hard-coded list, for overall simplicity, I don't believe that that's a big problem since:
- Generating this file automatically would require a bunch more parsing *every single time* that the library is built.
- The official API-surface doesn't change often enough for this to really impede development in any significant way.
- The added unit-test helps ensure that this list cannot accidentally become outdated.
When the editor is invisible (because it's on a non-rendered page) its parent is null.
But when we undo its deletion, we need a parent to attach it to.
In the scripting integration tests we use a few different typing
delays, mostly 100 or 200 milliseconds. According to for example
https://www.typingpal.com/en/documentation/school-edition/pedagogical-resources/typing-speed,
a fast typing speed is around 300 characters per minute, which is 5
characters per second and therefore a delay of 200 milliseconds between
each keystroke. Note that this is already above average, so in practice
the delay will be even larger. The 100 millisecond variant is therefore
unrealistically fast and not suitable for the integration tests, which
aim to simulate average user behavior.
On top of that, the quick typing speeds are problematic for the tests
that involve validation alert dialogs appearing during typing. In those
tests a handler is registered to close the dialog once it pops up, but
it takes time for Puppeteer to notice the dialog, trigger the handler
and close it. If the typing delay, which is the delay between the key
down and key up events according to the Puppeteer source code at
https://github.com/puppeteer/puppeteer/blob/master/packages/puppeteer-core/src/cdp/Input.ts#L209-L215,
is too short, the key up event will be fired before the dialog is
closed. In that time the text box we're typing in is not focused, so
when the dialog is closed the `page.type()` call on the text box will
never resolve because the key up event never reached the text box.
This commit aims to fix the issues by converting all 100 millisecond
delays to 200 milliseconds. For instance, the "must check input for US
zip format" test failed pretty consistently locally before and hasn't
failed anymore with a 200 millisecond delay.
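For illustration, the delay in question is the one accepted by Puppeteer's typing helper (selector and text are placeholders):

```js
// 200 ms between keystrokes, matching a fast typist at ~300 CPM.
await page.type("#zip-field", "12345", { delay: 200 });
```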
This integration test fails often because we wait for scripting to be
ready before we check the printed page, but most of the time the PDF
is already done printing before scripting is reported to be ready.
This happens because the print trigger is on the `Open` event, which is
one of the first events to be dispatched and, most notably, before
scripting is marked as ready; please see
https://github.com/mozilla/pdf.js/blob/master/web/pdf_scripting_manager.js#L176-L191.
Given that the PDF document is only one page, printing it usually
finishes between triggering the `Open` event and scripting being
reported as ready. If this happens the printed page is already destroyed
before we get to our actual test, which will then timeout because it
will never find the printed page in the DOM.
This commit fixes the problem by not awaiting scripting to be ready
because the fact that the printed page appears is already enough to know
that autoprint was triggered (after all, there is no other user
interaction involved here). While we're here we also switch to the
shorter `page.waitForSelector` function.
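A sketch of the resulting check (the selector is an assumption about the printed page's markup):

```js
// The printed page appearing in the DOM is already enough to know that
// autoprint was triggered; no need to wait for scripting readiness.
await page.waitForSelector(".printedPage");
```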
In the rare situation that an optional content dictionary lacks a /Type-entry we currently throw, which may prevent e.g. Form XObjects from rendering completely.
Fixes https://bugs.ghostscript.com/show_bug.cgi?id=707147
Especially on slower bots there is some time between clicking the
element and the actual visibility change, but we didn't await this and
checked the visibility state immediately after clicking. This can be
reproduced 100% of the time by introducing a delay in the `display` and
`hidden` handlers of the `_commonActions` shadow call.
This commit fixes the problem by waiting until the first visibility
change actually happened before continuing with the assertions.
This integration test currently fails intermittently on the bots because
of the fixed timeout in the test, which is sometimes too low on slower
systems. The issue can be reproduced 100% of the time by introducing a
delay just before dispatching the `switchannotationeditormode` event.
Puppeteer also discourages this and recommends waiting for a selector
instead, which we now do here. This ensures that the test only
continues if the element under test is available and therefore prevents
any timing problems.
The x/y-coordinates are floats instead of integers like one might
expect. The current approach rounds both the old and the new
coordinates in order to do integer comparison. However, rounding each
coordinate individually causes too much loss of precision because,
depending on the decimal value, they are either rounded up or down
which causes intermittent off-by-one errors.
This commit fixes the problem by comparing coordinate differences
instead of the coordinates themselves. The precision loss is avoided
by subtracting the old from the new coordinate as-is and only rounding
the final result.
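A small numeric illustration of the off-by-one issue, with made-up coordinates:

```js
const oldX = 10.5; // old coordinate
const newX = 40.49; // new coordinate, i.e. moved by 29.99 pixels

Math.round(newX) - Math.round(oldX); // 40 - 11 = 29, off by one
Math.round(newX - oldX); // Math.round(29.99) = 30, as expected
```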
This integration test currently fails intermittently on the bots because
of the fixed timeout in the test, which is sometimes too low on slower
systems. The issue can be reproduced 100% of the time by introducing a
delay in the `WidgetAnnotationElement.showElementAndHideCanvas` method.
Puppeteer also discourages this and recommends waiting for a selector
instead, which we now do here. This ensures that the test only
continues if the element under test is available and therefore prevents
any timing problems.
We already use `page.$eval` in most other integration tests and it's
simpler because it already takes the selector as argument, so we don't
have to do a separate `querySelector` call ourselves.
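The difference in a nutshell (selector and property are placeholders):

```js
// Separate querySelector call inside evaluate:
const value = await page.evaluate(
  () => document.querySelector("#field").value
);
// Equivalent, shorter $eval variant that takes the selector directly:
const sameValue = await page.$eval("#field", el => el.value);
```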
When there is no tree, the tags for the new annotations are just put under the root element.
When there is a tree, we insert the new tags at the right place, using the value
of structTreeParentId (added in PR #16916).
The unit test is re-enabled because it no longer seems to fail after 10
runs on Linux, where this used to fail often. Code inspection also shows
that the code is correct and shouldn't raise the previous exception
(anymore). Finally, a lot has changed since this test was disabled, such
as new Jasmine versions, a new Linux bot OS version and new browser
versions.
Over time the amount of "document level" data potentially needed during parsing of Annotations has increased a fair bit, which means that we currently need to ensure that a bunch of data is available for each individual Annotation.
Given that this data is "constant" for a PDF document we can instead create (and cache) it lazily, only when needed, *before* starting to parse the Annotations on a page. This way the parsing of individual Annotations should become slightly less asynchronous, which really cannot hurt.
An additional benefit of these changes is that we can reduce the number of parameters that need to be explicitly passed around in the annotation-code, which helps overall readability in my opinion.
One potential drawback of these changes is that the `AnnotationFactory.create` method no longer handles "everything" on its own, however given how few call-sites there are I don't think that's too much of a problem.
The classes were stripped out when creating the field name, but
it led to a wrong name.
Since class components in a path are irrelevant, they're just ignored
when searching for a node in the datasets.
The focus callback must only be called when the element has previously been blurred.
For example, the blur callback (which implies some potential validation) isn't called
when the newly focused element is another tab, an alert dialog, etc., so consequently
the focus callback mustn't be called when the element gets its focus back.
The goal is to always have something which is focusable to let the user select
it with the keyboard.
It fixes the mentioned bug because the annotation layer will now have a container
to attach the canvas for annotations that have their own canvas.
Testing the `tagged_stamp.pdf` document locally in the viewer, I noticed that e.g. the /Alt entry for the StampAnnotation contains "Secondary text for stamp\u0000".
Elsewhere in the viewer we're skipping null-chars and it's easy enough to do that in the `StructTreeLayerBuilder` class as well. (Note that we generally let the API itself return the data as-is.)
The issue described in the mentioned bug is really because
Acrobat is rendering the XFA instead of the AcroForm.
The original patch just tried to work around the issue, but it
induced some regressions.
Currently this unit-test will pass just fine if compression is disabled, e.g. by commenting out the relevant code in the `src/core/writer.js` file.
While we don't have a simple way of *directly* checking that the Annotation text-content is compressed, we can however use the resulting file-size as a fairly good proxy. (Note that if compression is disabled the file-size is more than doubled.)
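A sketch of the proxy-check idea (the size bound is hypothetical):

```js
// With compression disabled the saved file would be more than twice as
// large, so an upper bound on the size acts as a compression check.
const data = await pdfDocument.saveDocument();
expect(data.length).toBeLessThan(MAX_COMPRESSED_SIZE); // invented bound
```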
Please note that for performance reasons it's not really advised to use the same worker-thread *in parallel* for parsing multiple PDF documents, since they will then unnecessarily compete for resources.
However, given that it's still possible to do that e.g. when using the global `workerPort` it probably won't hurt to add a unit-test for this particular situation.
Given that the `PDFDocumentLoadingTask.destroy()`-method is documented as being asynchronous, you thus need to await its completion before attempting to load a new PDF document when using the global `workerPort`.
If you don't await destruction as intended then a new `getDocument`-call can remain pending indefinitely, without any kind of indication of the problem, as shown in the issue.
In order to improve the current situation, without unnecessarily complicating the API-implementation, we'll now throw during the `getDocument`-call if the global `workerPort` is in the process of being destroyed.
This part of the code-base has apparently never been covered by any tests, hence the patch adds unit-tests for both the *correct* usage (awaiting destruction) as well as the specific case outlined in the issue.
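A minimal sketch of the *correct* usage pattern, assuming a shared `workerPort` has been set on `GlobalWorkerOptions` (the URLs are placeholders):

```js
const task1 = getDocument("first.pdf");
await task1.promise;
// destroy() is asynchronous; await it before loading another document,
// otherwise the next getDocument-call will now throw.
await task1.destroy();

const task2 = getDocument("second.pdf");
await task2.promise;
```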
The main stamp button will now just be used to enter an add/edit image mode:
- the user can add a new image using the new button;
- the user can edit an image by resizing or moving it.
In image mode, when the user clicks on the page but not on an editor,
all the selected editors will be unselected.
Given that the FieldObjects are parsed in parallel, in combination with the existing caching in the `getPage`-method and `annotations`-getter, adding additional caches for this fallback code-path doesn't seem entirely necessary.
When moving an element in the DOM, the focus is potentially lost, so we need to make sure
that the element that was focused before the move gets its focus back afterwards.
But we must take care not to execute any focus/blur callbacks, because the user didn't
do anything that should trigger such events: it's an implementation detail. For example,
when several editors are selected and moved, the same ones must still be selected at the
end, so no element must receive a focus event that would set it as selected.
There are two rotations we have to deal with: the viewer one and the editor one.
The previous implementation was a bit complex, and having to deal with these
rotations would potentially have increased that complexity.
So this patch aims to simplify the implementation and deal with all the possible
cases.
The main idea is to transform the mouse deltas according to the rotations and then
apply the resizing in the page coordinate system.
This method is very old, however with the exception of the auto-print hack (when scripting is disabled) in the viewer it's never actually been used.
Most likely the idea with `PDFDocumentProxy.getJavaScript` was that it'd be useful if scripting support was added, however it turned out that it was a bit too simplistic and instead a number of new methods were added for the scripting use-cases.
When searching for "endobj"-operators, make sure that we don't accidentally match a "trailer"-string in /Content-streams without /Filter-entries (i.e. streams that contain "raw" and thus human-readable data).
When an editor is selected using the keyboard, it has the focus.
But if the editor is then unselected with the Escape key, the focus must
be removed, otherwise we'd still have a blue outline around it.
And add a few missing timeouts in the integration tests.
Selected editors can be moved using the arrow keys:
- up/down/left/right will move the editors by 1 page unit;
- ctrl (or meta)+up/down/left/right will move them by 10 page units.
The keyboard shortcuts (copy, paste, ...) didn't work correctly when the
main container was not focused.
This patch adds a few waitForTimeout calls in the FreeText integration tests
in order to avoid possible intermittent failures.
When the flag is set, the appearance has to be generated from the value so it's
useless/meaningless to extract the content from the existing appearance.
When a PDF has /NeedAppearances set to true, the annotation appearance must be
generated from its value, and we must take into account the hasOwnCanvas property.
By leveraging import maps we can get rid of *most* of the remaining `require`-calls in the `src/display/`-folder, since we should strive to use modern `import`-statements wherever possible.
The only remaining cases are Node.js-specific dependencies, since those seem very difficult to convert unless we start producing a bundle *specifically* for Node.js environments.
With the changes in the previous patch the `isNodeJS`-helper no longer needs to live in its own file, which helps get rid of a closure in the *built* files.
Currently this class contains a few "special" code-paths for the COMPONENTS build-target, which normally wouldn't be a problem. However, in this particular case that means accessing code that we don't want to include unconditionally in all builds.
This is currently implemented using build-time `require`-calls which we nowadays want to avoid, and we should strive to remove all such cases from the code-base. (Generally speaking `import` is the future, and build-tools may not always play well with a mix of both formats.)
We can easily improve things here by using sub-classing for the COMPONENTS build-target, and then use the ability to re-name when exporting (to avoid breaking existing code).
Occasionally some test-suites may fail to start on the bots, however that's not correctly reflected in the botio-output posted to GitHub which makes it easy to accidentally overlook this situation.
Looking at the raw logs when that happens they always seem to contain a line such as `Run NaN tests` which means that we should be able to easily make this situation a *failure* as intended.
- Take into account the page translation,
- Take into account the correct translation for the editor border,
- Take into account the position of the first glyph in the annotation,
- Take into account the rotation of the editor.
Closes #16633.
createImageBitmap doesn't work with SVG files (see bug 1841972), so we need to work around
this by using an Image.
When printing/saving we must rasterize the image, hence we use the biggest bitmap as the image
reference, to avoid duplication or poor quality on rendering.
The existing code is unable to *correctly* extract the color from the appearance-stream when the ColorSpace-data is "complex". To reproduce this:
- Open `freetexts.pdf` in the viewer.
- Note the purple color of the "Hello World from Preview" annotation.
- Enable any of the Editors.
- Note how the relevant annotation is now black.
When there was a rotation, the generated bbox was wrong because of an inversion
between width and height.
This patch aims to fix this issue by re-writing the FreeText code generation
to produce something similar to what Acrobat does.
It also fixes the name of the font, which wasn't the correct one when calling the
evaluator.
Rather than having to *manually* determine the potential `transfers` at various spots in the API, we can let the `AnnotationStorage.serializable` getter include this.
To further simplify things, we can also let the `serializable` getter compute and include the `hash`-string as well.
In order to minimize the size of a saved PDF, we generate only one
image and use a reference to it in each annotation using it.
When printing, it's slightly different since we have to render each page
independently, but we use the same image within a page.
It occurred to me that we can actually run this unit-test in Node.js environments by making use of the preprocessor to stub out the browser globals there.
Until now we've not actually had *any* tests that ensure that the *official* PDF.js-viewer API exposes the intended functionality, which means that things can easily break accidentally.
*Please note:* This unit-test cannot (easily) be run in Node.js-environments, since the `external/webL10n/l10n.js` file contains various browser-specific functionality.
Until now we've not actually had *any* tests that ensure that the *official* PDF.js API exposes the intended functionality, which means that things can easily break accidentally.
With the changes in PR 16552 we can now move general translation into the `AnnotationLayer` itself, which should improve things ever so slightly in third-party implementations where the default viewer isn't used.
- it'll make it possible to move popups on screen to let the user read the text
- popups won't inherit some properties from their parent:
- the popup can be misrendered if for example the parent has a clip-path property.
- add an outline to the popup when the parent is focused.
- hide a popup when it's clicked.
Fix handling of /Filter-entries, since the current implementation could potentially corrupt the data if there's multiple filters present.
Please note that filters are applied *sequentially* during decoding, starting from the first one in the Array, hence the first Array-entry needs to be /FlateDecode in order for things to actually work correctly.
To prevent a future bug, if we want to save more "complex" data such as images, also ensure that we include any existing /DecodeParms-entries when updating the /Filter-entry.
The existing unit-test doesn't work as intended, since the page never actually renders. Note how `cleanup` is *not* allowed to run when parsing and/or rendering is ongoing, however an (old) incorrect condition could prevent rendering from ever starting.
This is very old code, which has been slightly re-factored a couple of times (many years ago), however this doesn't appear to affect e.g. the default viewer since the incorrect behaviour seem highly dependent on "unlucky" timing.
Note also how at the start of the `PDFPageProxy.prototype.render`-method we purposely cancel any pending `cleanup`-call, to prevent unnecessary re-parsing for multiple sequential `render`-calls.
Finally, avoid running `cleanup` when document/page destruction has already started since it's pointless in that case.
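The documented interaction, sketched with the public API (the canvas setup is assumed):

```js
const page = await pdfDocument.getPage(1);
const viewport = page.getViewport({ scale: 1 });
const canvasContext = canvas.getContext("2d"); // `canvas` assumed to exist

const renderTask = page.render({ canvasContext, viewport });
await renderTask.promise;

// `cleanup` is refused while parsing/rendering is ongoing, and returns
// a boolean indicating whether resources were actually released.
page.cleanup();
```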
The original `trimCache` functionality was intended to be exposed on the
top-level `puppeteer` module, but due to a bug in Puppeteer this didn't
work correctly and we had to call `trimCache` on the default Puppeteer
node instance instead, which was fortunately exposed. However, since
this didn't feel like intended API usage, this bug was reported and is
now fixed in Puppeteer 20.5.0, so this commit updates Puppeteer to that
version so we can use the intended API.
The full history of this issue can be found at
https://github.com/puppeteer/puppeteer/issues/10174.
This patch is the result of me going through some old issues regarding non-embedded Wingdings support.
There's a few different things wrong in the referenced PDF document:
- The /BaseFont and /FontName entries don't agree on the name of the fonts, with one font using `/BaseFont /Wingdings-Regular` and `/FontName /wg09np` which obviously makes no sense.
To address this we'll compare the font-names against our lists of known ones and ignore /FontName entries that don't make sense iff the /BaseFont entry is a known font-name.
- The non-embedded Wingdings font also set an incorrect /Encoding, in this case /MacRomanEncoding, which should have been fixed by PR 16465. However this doesn't work since the font has *bogus* font-flags, that fail to categorize the font as Symbolic.
To address this we'll also compare the font-name against the list of known symbol fonts.
This commit makes the following required changes:
- Replace custom cache trimming logic in favor of the (per our request)
newly added `trimCache` method in Puppeteer. Not only does this greatly
simplify our code and prevent having to import Puppeteer internals,
it's also necessary because Puppeteer 20 removed the `BrowserFetcher`
API in favor of the new separate `@puppeteer/browsers` package.
- Start browsers in series instead of in parallel. Parallel browser
starts broke in Puppeteer 19.1.0 and it turns out that this has never
been supported officially, so it worked more-or-less by accident.
Starting browsers in series is the supported way, is almost equally
fast and ensures that we avoid any race conditions during startup.
Finally, it also allows us to remove the `browserPromise` state on our
session objects.
Fixes #15865.
Now that font-substitution has been implemented, we should be able to do a much better job at supporting non-embedded Wingdings fonts.
Given that this is a Windows-specific font, see https://en.wikipedia.org/wiki/Wingdings, this is however not guaranteed to work (well) on other platforms.
The affected font is non-embedded ZapfDingbats, however the PDF document for some inexplicable reason specifies the encoding as "WinAnsiEncoding" (which is obviously wrong).
To work-around this bug in the PDF generator, we'll simply ignore any explicitly specified named encoding for non-embedded symbol fonts.
- Remove the dependency on fit-curve;
- Improve the way the current line is drawn, by using a Path2D and
by clearing only the last part of the curve instead of clearing
the whole canvas;
- Smooth the curve when drawing, to avoid having some changes after
the drawing ends;
- Make the smoothing a bit less aggressive.
Given that inline images may contain "EI"-sequences in the image-data itself, actually finding the end-of-image operator isn't always straightforward.
Here we extend the implementation from PR 12028 to potentially check all of the following bytes, rather than stopping immediately. While we have fairly decent test-coverage for this code, whenever you're changing it there's unfortunately a slightly higher than normal risk of regressions. (You'd really wish that PDF generators would just stop using inline images.)