The following are some highlights of this patch:
- In the Worker we only extract a *subset* of the potential contents of the `Usage` dictionary, to avoid having to implement/test a bunch of code that'd be completely unused in the viewer.
- In order to still allow the user to *manually* override the default visible layers in the viewer, the viewable/printable state is purposely *not* enforced during initialization in the `OptionalContentConfig` constructor.
- Printing will now always use the *default* visible layers, rather than using the same state as the viewer (as was the case previously).
This ensures that the printing-output will correctly take the `Usage` dictionary into account, and in practice toggling of visible layers rarely seems to be necessary except in the viewer itself (if at all).[1]
---
[1] In the unlikely case that it'd ever be deemed necessary to support fine-grained control of optional content visibility during printing, some new (additional) UI would likely be needed to support that case.
Given that the Fetch API is supported since Node.js 18 we should be able to use it when downloading l10n files, which allows us to simplify the code and to make it fully `async`.
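A minimal sketch of what such a download helper could look like (the helper name and error handling are assumptions, not the actual gulpfile code):

  import { promises as fs } from "fs";

  // Download a single l10n file; `fetch` is available natively since Node.js 18.
  async function downloadL10nFile(url, destination) {
    const response = await fetch(url);
    if (!response.ok) {
      throw new Error(`Failed to download ${url}: ${response.status}`);
    }
    await fs.writeFile(destination, await response.text());
  }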
The fact that the highlight-thickness can only be changed in "free" mode isn't really obvious visually in the toolbar, so attempt to provide at least some indication of the `disabled`-state by "dimming" the slider.
When implementing caret browsing mode in pdf.js, I didn't notice that selectstart isn't always triggered.
So this patch removes the use of selectstart and relies only on selectionchange.
In order to simplify the selection management, the selection code is moved into the AnnotationUIManager:
- it simplifies the code;
- it allows us to have only one listener for selectionchange, instead of one per visible page
for selectstart.
I had to add a delay in the integration tests for highlighting (there's a comment with an explanation).
It isn't really nice, but it's the only way I found, and in real life there is always a delay between
press and release.
The function caretPositionFromPoint returns the position within the last visible element,
and sometimes there are some elements on top of the ones in the text layer.
So the idea is to hide the visible elements which aren't in the text layer, in order
to get the right caret position.
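A rough sketch of that idea (the helper name is hypothetical, and the actual patch differs in the details):

  // Hide elements covering the text layer, query the caret position,
  // then restore the original visibility.
  function caretPositionInTextLayer(x, y, textLayer) {
    const hidden = [];
    let element = document.elementFromPoint(x, y);
    while (element && !textLayer.contains(element)) {
      hidden.push([element, element.style.display]);
      element.style.display = "none"; // Temporarily hide the covering element.
      element = document.elementFromPoint(x, y);
    }
    const caret = document.caretPositionFromPoint(x, y);
    for (const [el, display] of hidden) {
      el.style.display = display; // Restore the original visibility.
    }
    return caret;
  }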
*Please note:* This is a micro optimization, hence I fully understand if the patch is rejected.
Currently we create two temporary Arrays and have to iterate twice in total when building the final `hexNumbers` Array.
With this patch there's only one temporary Array and a single iteration required to build the final `hexNumbers` Array.
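A sketch of the difference (the exact code in the patch may differ slightly; two variable names are only used to keep the sketch valid):

  // Before: the spread creates a temporary Array (first iteration) and
  // `map` then creates the final Array (second iteration).
  const hexNumbersBefore = [...Array(256).keys()].map(n =>
    n.toString(16).padStart(2, "0")
  );

  // After: `Array.from` consumes the iterator once, applying the
  // map-function on the fly.
  const hexNumbersAfter = Array.from(Array(256).keys(), n =>
    n.toString(16).padStart(2, "0")
  );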
When highlighting, the annotation editor layer is disabled to get pointer events
from the text layer, but the annotation layer must then be disabled as well, in
order to avoid bad interactions.
Previously we'd simply export this directly from `web/app_options.js`, which meant that it'd be technically possible to *accidentally* modify the `compatibilityParams` Object when accessing it.
To avoid this we instead introduce a new `AppOptions`-method that is used to lookup data in `compatibilityParams`, which means that we no longer need to export this Object.
Based on these changes, it's now possible to simplify some existing code in `AppOptions` by taking full advantage of the nullish coalescing (`??`) operator.
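A minimal sketch of the pattern (all names and values here are illustrative, not the actual `AppOptions` code):

  class AppOptions {
    static #compat = { maxCanvasPixels: 5242880 }; // Example data only.
    static #defaults = { maxCanvasPixels: 16777216 };
    static #user = Object.create(null);

    // New method: look up compatibility data without exposing the Object.
    static getCompat(name) {
      return this.#compat[name] ?? undefined;
    }

    // Nullish coalescing picks the first defined value.
    static get(name) {
      return this.#user[name] ?? this.getCompat(name) ?? this.#defaults[name];
    }
  }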
Given that the "PREFERENCE" kind is used e.g. to generate the preference-list for the Firefox PDF Viewer, those options need to be carefully validated.
With this patch we'll now check this unconditionally in development mode, during testing, and when creating the preferences in the gulpfile.
As part of the changes in PR 17686 we "accidentally" enabled source-maps for the *minified* builds, which seems unnecessary since those have never been included in the `pdfjs-dist` output.
Locally this patch reduces the run-time of `gulp minified` by ~15 percent.
Rather than first building the library and then use Terser "manually" to minify the files, we can utilize a Webpack plugin to combine these steps which helps to simplify the gulpfile.
The `handler` method contained this code in two inline functions,
triggered via callbacks, which made the `handler` method big and harder
to read. Moreover, this code relied on variables from the outer scope,
which made it harder to reason about because the inputs and outputs
weren't easily visible.
This commit fixes the problems by extracting the request checking code
into a dedicated private method, and modernizing it to use e.g. `const`/
`let` instead of `var` and template strings. The logic is now
self-contained in a single method that can be read from top to bottom
without callbacks and with comments annotating each check/section.
- Run the minification in "parallel" since that should be a *tiny* bit more efficient.
- Don't rename the minified files since that seems unnecessary, especially considering that they are only used in the `dist-pre` target where we currently change the name back manually.
After the changes in PR 17637 there's no longer any reason to invoke `tweakWebpackOutput` without an argument, since the `__non_webpack_import__` re-writing was moved into the Babel plugin.
This way we can avoid a (little) bit of unnecessary parsing during building.
The specs are unclear about what kind of xref table format must be used.
From checking the validity of some PDFs in the preflight tool from Acrobat,
we can infer that using the same format is the correct approach.
The PDF in the mentioned bug, after having been changed, wasn't displayed
correctly in either Chrome or Acrobat: it's now fixed.
Note how we're using custom `__non_webpack_import__`-calls in the code-base, that we replace during the post-processing stage of the build, to be able to write `import`-calls that Webpack will leave alone during parsing.
This work-around is necessary since we let Babel discard all comments, given that we generally don't need/want them in the builds, hence why we cannot utilize `/* webpackIgnore: true */`-comments in the source-code.
After the changes in PR 17563 it thus seems to me that we should be able to just move this re-writing into the Babel plugin instead.
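A rough sketch of how such a re-write could look inside a Babel plugin (the real plugin is more involved):

  const nonWebpackImportVisitor = {
    CallExpression(path) {
      const { callee } = path.node;
      if (callee.type === "Identifier" &&
          callee.name === "__non_webpack_import__") {
        // Turn the placeholder into a real dynamic `import(...)`, which
        // Webpack never saw (and thus never re-wrote) during parsing.
        path.node.callee = { type: "Import" };
      }
    },
  };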
The `handler` method contained this code in an inline function, which
made the `handler` method big and harder to read. Moreover, this code
relied on variables from the outer scope, which made it harder to reason
about because the inputs and outputs weren't easily visible.
This commit fixes the problems by extracting the directory listing code
into a dedicated private method, and modernizing it to use e.g. `const`/
`let` instead of `var` and template strings.
The `handler` method contained this code in an inline function, which
made the `handler` method big and harder to read. Moreover, this code
relied on variables from the outer scope, which made it harder to reason
about because the inputs and outputs weren't easily visible.
This commit fixes the problems by extracting the range file serving code
into a dedicated private method, and modernizing it to use e.g. `const`/
`let` instead of `var` and template strings.
The `handler` method contained this code in an inline function, which
made the `handler` method big and harder to read. Moreover, this code
relied on variables from the outer scope, which made it harder to reason
about because the inputs and outputs weren't easily visible.
This commit fixes the problems by extracting the file serving code into
a dedicated private method, and modernizing it to use e.g. `const`/`let`
instead of `var` and template strings.
Currently the `web/app.js` file pulls in various build-specific dependencies, via the use of import maps, and those files in turn import from `web/app.js` thus creating undesirable import cycles.
To avoid this we instead pass in a `PDFViewerApplication`-reference, immediately after it's been created, to the relevant code.
Note that we use an ESLint plugin rule, see `import/no-cycle`, that is normally able to catch import cycles. However, in this case import maps are involved which is why this wasn't caught.
Looking at the *built* files you'll notice some lines containing nothing more than a semicolon. This is the result of (mostly top-level) `if`-statements, which include `PDFJSDev`-checks, that evaluate to `false` during Babel parsing.
This has always annoyed me a bit, and looking at the Babel plugin it seems that we can fix this simply by *removing* the relevant nodes.
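A minimal sketch of the approach (simplified; the real plugin handles more cases):

  const deadIfVisitor = {
    IfStatement(path) {
      const test = path.get("test").evaluate();
      if (test.confident && !test.value && !path.node.alternate) {
        path.remove(); // Leaves no lone `;` behind in the generated code.
      }
    },
  };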
This part of the (modern) preprocessor is now dead code, since we no longer use `require` statements anywhere in the main code-base.
Note that as part of the changes leading up to PDF.js version `4` we removed all[1] the remaining `require` statements, and we also have an ESLint rule to ensure that no new ones are accidentally added.
---
[1] With two small exceptions, in benchmarking-code and in the Webpack-example.
Given that we need to pass in a `PDFDataRangeTransport`-instance a number of the needed parameters can be obtained from it, rather than having to specify them manually.
This should be a *tiny* bit more efficient, since it avoids parsing substrings that we don't care about.
*Please note:* I cannot find an ESLint rule to enforce this automatically.
- Ensure that localization works in the GENERIC viewer, even if the necessary locale files cannot be loaded.
This was the behaviour prior to the introduction of Fluent, and it seems worthwhile to keep that (especially since we already bundle the en-US strings anyway).
- Let the `GenericL10n`-implementation use the *bundled* en-US strings directly when no language is provided.
- Remove the `NullL10n`-implementation, and simply fallback to `GenericL10n`, to reduce the maintenance burden of viewer-components localization.
- Indirectly, given the previous point, stop exporting `NullL10n` in the viewer-components since it's now removed.
Note that it was never really intended to be used directly and only existed as a fallback.
*Please note:* This doesn't affect the Firefox PDF Viewer, thanks to the use of import maps.
When a highlight is self-intersecting, the outline was drawn inside.
In order to remove it, we use an SVG mask to exclude the inner part of
the shape when drawing the outlines.
That changes the outline from 1px white - 2px blue - 1px white to
2px white - 2px blue: the part of the stroke which is inside the shape
is removed because of the mask.
All of our static evaluation & dead-code elimination transforms need to
happen in post-order, transforming inner nodes first. This is so that
in complex nested cases all transforms see the simplified version of
their inner nodes.
For example:

  async getNimbusExperimentData() {
    if (!PDFJSDev.test("GECKOVIEW")) { return null; }
    // other code
  }

-> [evaluation of PDFJSDev.*]

  async getNimbusExperimentData() {
    if (!false) { return null; }
    // other code
  }

-> [!false -> true]

  async getNimbusExperimentData() {
    if (true) { return null; }
    // other code
  }

-> [if (true) -> replace with the if branch]

  async getNimbusExperimentData() {
    return null;
    // other code
  }

-> [early return -> remove dead code]

  async getNimbusExperimentData() {
    return null;
  }
This was done correctly in all cases except for our `UnaryExpression`
transform, which was happening in pre-order.
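A minimal sketch of the fix, i.e. running the transform on exit (post-order) rather than on enter (pre-order), so that inner nodes have already been simplified:

  const unaryVisitor = {
    UnaryExpression: {
      exit(path) {
        const arg = path.node.argument;
        if (path.node.operator === "!" && arg.type === "BooleanLiteral") {
          // By now `arg` has already been simplified, e.g. from
          // `PDFJSDev.test("GECKOVIEW")` to `false`.
          path.replaceWith({ type: "BooleanLiteral", value: !arg.value });
        }
      },
    },
  };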
Having this parameter among a list of DOM-elements seems slightly strange now, however this is very old code and the reason for doing it this way is most likely historical (as is often the case).
Hence we can simply move this into `AppOptions` instead, which seems more appropriate overall.
Given that only the GENERIC viewer supports opening more than one PDF document, we can simplify things a tiny bit by instead generating the necessary DOM-element in JavaScript.
This unit-test is now failing in up-to-date versions of both Node.js and Chromium-based browsers, since `CompressionStream` no longer produces consistent data across all environments/browsers.
However, logging the compressed TypedArray produced by `writeStream` in Firefox and Chrome respectively, and then feeding *both* of those TypedArrays as input to `DecompressionStream`, produced the same (correct) result in both browsers.
Hence the *exact* output of `CompressionStream` shouldn't matter, as long as we're able to successfully decompress it when the resulting PDF document is opened with the PDF.js library, and the unit-test is thus extended to check this.
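A sketch of what the extended check amounts to (the helper name is hypothetical):

  // Rather than comparing exact compressed bytes, decompress the output
  // and verify the round-trip.
  async function isValidCompression(input, compressed) {
    const stream = new Blob([compressed])
      .stream()
      .pipeThrough(new DecompressionStream("deflate"));
    const decompressed = new Uint8Array(
      await new Response(stream).arrayBuffer()
    );
    return (
      decompressed.length === input.length &&
      decompressed.every((byte, i) => byte === input[i])
    );
  }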
Starting with Chrome 120.0.6099.109 (shipped with Puppeteer 21.8.0+) the
unit test fails in Chrome as well. The issue is tracked in #17399, but
for now we'll only run the unit test in Firefox so we can continue to
update Puppeteer while also still having a browser in which it runs,
until we figure out why the behavior of `CompressionStream` changed.
The `DefaultExternalServices` code, which is used to provide build-specific functionality, is very old. This results in a pattern where we first initialize `PDFViewerApplication.externalServices` and then *override* it for the different builds.
By converting `DefaultExternalServices` into a "regular" class, and leveraging import maps, we can directly initialize the correct instance depending on the build.
Given the simplicity of the `createPreferences` method, we can leverage import maps to directly initialize the correct `Preferences`-instance depending on the build.
Given the simplicity of the `createDownloadManager` method, we can leverage import maps to directly initialize the correct `DownloadManager`-instance depending on the build.
The latest mozilla-central update has test failures, because some CSS variables are not "properly" referenced; in particular:
- Give `--hcm-highlight-selected-filter` a default value, of `none`, similar to the previously existing HCM filter.
- Remove the `--mix-blend-mode` variable, since it's unused.
It isn't really a fix for the mentioned bug, but it slightly improves things.
By reducing the memory use, the time spent in the GC is reduced as well.
The algorithm to compute the bounding box is the same as before, but it has just
been rewritten to be more efficient.
This commit converts the pdfjsdev-loader transform into a Babel plugin,
to skip an AST->string->AST round-trip.
Before this commit, the webpack build process was:
1. Babel parses the code
2. Babel transforms the AST
3. Babel generates the code
4. Acorn parses the code
5. pdfjsdev-loader transforms the AST
6. @javascript-obfuscator/escodegen generates the code
7. Webpack parses the file
8. Webpack concatenates the files
After this commit, it is reduced to:
1. Babel parses the code
2. Babel transforms the AST
3. babel-plugin-pdfjs-preprocessor transforms the AST
4. Babel generates the code
5. Webpack parses the file
6. Webpack concatenates the files
This change improves the build time by ~25% (tested on MacBook Air M2):
- `gulp lib`: 3.4s to 2.6s
- `gulp dist`: 36s to 29s
- `gulp generic`: 5.5s to 4.0s
- `gulp mozcentral`: 4.7s to 3.2s
The new Babel plugin doesn't support the `saveComments` option of
pdfjsdev-loader, and it just always discards comments. Even though
pdfjsdev-loader supported multiple values for that option, it was
effectively ignored due to `acorn` dropping comments by default.
This is in preparation for the next commit, which will convert
preprocessor2.mjs to a Babel plugin. The purpose of this commit
is to help git track the rename regardless of the large amount
of changes.
In the Gulpfile, the exit codes of the `test.mjs` child processes are the
only ones that erroneously aren't checked. This causes failures in `test.mjs` to be
logged but not propagated to the master process, which in turn causes
test runners such as GitHub Actions to succeed because they only
monitor the master process. This is easy to reproduce by throwing an
error at the top of `test.mjs` and running `gulp makeref` or `gulp
unittest`: the error is logged, but the task that spawned the child
process succeeds and the master process exits with exit code 0. This is
problematic because it can easily cause errors to go by unnoticed.
This commit fixes the issue by making sure that the `test.mjs`
invocations are handled in the same way as the other child processes
in the file, i.e., if the child process exits with a non-zero exit code
then the master process also exits with a non-zero exit code. After this
patch the error is still logged, but the task now also fails and the
master process exits with exit code 1 to properly signal failure.
This manually ignores some cases where the resulting auto-formatting would not, as far as I'm concerned, constitute a readability improvement or where we'd just end up with more overall indentation.
Please see https://eslint.org/docs/latest/rules/arrow-body-style
For arrow functions that are both simple and short, we can avoid using explicit `return` to shorten them even further without hurting readability.
For the `gulp mozcentral` build-target this reduces the overall size of the output by just under 1 kilo-byte (which isn't a lot but still can't hurt).
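An illustrative before/after for this rule:

  // Before: explicit `return` in a trivial arrow function.
  const lengthsBefore = items =>
    items.map(item => {
      return item.length;
    });

  // After: concise body, same behavior.
  const lengthsAfter = items => items.map(item => item.length);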
The `if` statement is no longer necessary because the Node.js versions
that didn't provide `dns.setDefaultResultOrder` are no longer supported,
but looking into this a bit more it turns out that the entire workaround
is no longer necessary because the issue got fixed in Firefox 105 in bug
1769994. Indeed, Firefox now starts nicely with the workaround removed.
Reverts 60ed3cd297c4045b90f4114a74e5baa4ef1c5056.
In order to do that we must change the text layer opacity to 1, but
it has several implications:
- the selection color must have an alpha component;
- the background color of the span used for highlighted words
  must have an alpha component as well, but now that the opacity is 1
  we can use some backdrop-filters in HCM, making the highlighted
  words more visible;
- fix a regression caused by #17196: the css variable --hcm-highlight-filter
  has to live under the #viewer element because in HCM it's overwritten
  by js at this level, hence link annotations, for example, didn't
  have the right colors when hovered.
The free highlighting is enabled when the mouse pointer isn't over some text.
Then we draw a shape with smoothed borders corresponding to the movement of
the mouse.
Printing/saving and changing the thickness will come later.
and try to load the font family (guessed from the font name) before trying
the local substitution.
The local(...) command expects a real font name and not a predefined
substitution, which is why we try the font family first.
By always removing the "visibilitychange" listener in the `PDFViewer.#onePageRenderedOrForceFetch`-method we can (ever so slightly) reduce duplication in the code.
Ensure that users cannot provide incorrect values when trying to set the global worker-options.
This patch was prompted by occasionally seeing users manually loading the `pdf.worker.mjs`-file and then assigning it to the `workerSrc`-option, something that obviously doesn't make sense and will cause fake-workers to be used (with poor performance as a result).
When the text of an annotation is extracted using getTextContent, consecutive white spaces
are just replaced by a single space. So this patch adds an option to make sure that white
spaces are preserved when the appearance is parsed.
For the case where there's no appearance, we can have a fast path to get the correct string
from the Content entry.
When an existing FreeText is edited, spaces (0x20) are replaced by non-breaking ones (0xa0)
in order to make all of them visible on screen.
The system locale (used in OffscreenCanvas) can be different from the one guessed by Fluent;
consequently, in order to avoid any mismatch, we just use an attached canvas element.
The original issue can easily be reproduced locally by adding lang="ja" in viewer.html
(or with another language for Japanese users).
With modern JavaScript class features we can move the relevant event handling into private methods, and thus invoke it directly when resetting the toolbar UI-state.
*Please note:* This patch slightly reduces the size of the `web/secondary_toolbar.js` file.
With modern JavaScript class features we can move the relevant event handling into private methods, and thus invoke it directly when resetting the toolbar UI-state.
*Please note:* This patch slightly reduces the size of the `web/toolbar.js` file.
In PR 11912 we started caching images that occur on multiple pages globally, which improved performance a lot in many PDF documents.
However, one slightly annoying limitation of the implementation is the need to re-parse the image once the global-caching threshold has been reached. Previously this was difficult to avoid, since large image-resources will cause cleanup to run on the main-thread after rendering has finished. In PR 16108 we started delaying this cleanup a little bit, to improve performance if a user e.g. zooms and/or rotates the document immediately after rendering completes.
Taking those two PRs together, we now have a situation where it's much more likely that the main-thread has "globally used" images cached at the page-level. Hence we can instead attempt to *copy* a locally cached image into the global object-cache on the main-thread and thus reduce unnecessary re-parsing of large/complex global images, which significantly reduces the rendering time in many cases.
For the PDF document in issue 11878, the rendering time of *the second page* changes as follows (on my computer):
- With the `master`-branch it takes >600 ms to render.
- With this patch that goes down to ~50 ms, which is one order of magnitude faster.
(Note that all other pages are, as expected, completely unaffected by these changes.)
This new main-thread copying is limited to "large" global images, since:
- Re-parsing of small images, on the worker-thread, is usually fast enough to not be an issue.
- With the delayed cleanup after rendering, it's still not guaranteed that an image is available in a page-level cache on the main-thread.
- This forces the worker-thread to wait for the main-thread, which is a pattern that you always want to avoid unless absolutely necessary.
This commit changes the code to use a template string and to use `const`
instead of `var`. Combined with the previous commits this allows for
enabling the ESLint `no-var` rule for this file now.
The test helper code largely predates the introduction of modern
JavaScript features and should be refactored to improve readability.
In particular callbacks make the code harder to understand and maintain.
This commit:
- replaces the callback argument with returning a promise;
- replaces the recursive function calls with a simple loop;
- uses `const`/`let` instead of `var`;
- uses arrow functions for shorter code;
- uses template strings for shorter string formatting code.
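A generic sketch of that refactoring style, using a hypothetical polling helper (not the actual test helper code):

  // The recursive, callback-based poll becomes an async function with
  // a plain loop.
  const probe = async url => {
    try {
      return (await fetch(url)).ok;
    } catch {
      return false;
    }
  };

  async function waitForServer(url, attempts) {
    for (let i = 0; i < attempts; i++) {
      if (await probe(url)) {
        return true; // The server responded; stop polling.
      }
    }
    return false;
  }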
The test helper code largely predates the introduction of modern
JavaScript features and should be refactored to improve readability.
In particular callbacks make the code harder to understand and maintain.
This commit:
- replaces the callback argument with returning a promise;
- uses `const` instead of `var`;
- uses arrow functions for shorter code;
- uses template strings for shorter string formatting code;
- uses `Array.includes` for shorter response code checking code.
This test intermittently fails, likely because the auto-print is triggered fast enough
that we don't manage to catch it.
So this patch sets a listener very early, in order to be sure that
we'll be aware that a print has been triggered.
It seems this unit-test now fails consistently in "all" up-to-date Node.js versions. We should probably try and understand why, but for now just disable it to get passing CI tests.
There's obviously a few things wrong with the Annotations in the referenced PDF document, however parsing of an Annotation shouldn't just break if the /BS-entry isn't a dictionary.
When opening a pdf from the secondary toolbar, a second color picker is added.
So in order to avoid that, we just stop listening for the annotationeditoruimanager
event in the toolbar.
The doorhanger for highlighting has a basic color picker composed of 5 predefined colors
to set the default color to use.
These colors can be changed thanks to a preference for now, but it's something which could
be changed in the Firefox settings in the future.
Each highlight has, in its own toolbar, a color picker to change just its color.
The different color pickers are so similar (modulo a few differences in their styles) that
this patch introduces a new class ColorPicker which provides a color picker component
that could be reused in future editors.
All in all, a large part of this patch is dedicated to the color picker itself and its style,
and the rest is almost just a matter of wiring up the component.
When a pdf has a FreeText without an appearance, we use a fake font in order to render it,
which leads to creating a few new refs for the font.
But then when we're saving, we create some new refs which start at the same number
as the previously created ones.
Consequently, when saving we're using some wrong objects (like a font) to check if
we're able to render the newly added FreeText.
In order to fix this bug, we just remove the persistent refs (which are only used
when rendering/printing) during saving.
Having just tested PR 17337 locally I noticed that especially the `JpxImage`-test causes a "ridiculous" amount of warning messages to be printed, which doesn't seem helpful.
Given that only actual `Error`s should be relevant here, we can easily disable this logging during the tests.
The test helper code largely predates the introduction of modern
JavaScript features and should be refactored to improve readability.
In particular callbacks and recursive function calls make the code
harder to understand and maintain.
This commit:
- replaces the callback argument with returning a promise;
- replaces the recursive function calls with a simple loop;
- uses `const`/`let` instead of `var`;
- uses template strings for shorter string formatting code;
- improves the error messages to have more details.
The test helper code largely predates the introduction of modern
JavaScript features and should be refactored to improve readability.
In particular callbacks and recursive function calls make the code
harder to understand and maintain.
This commit:
- replaces the callback argument with returning a promise;
- uses `const` instead of `var`;
- uses arrow functions for shorter code;
- uses template strings for shorter string formatting code;
- improves the error messages to have more details.
This unfortunately broke in PR 17060, since I had completely forgotten about https://bugzilla.mozilla.org/show_bug.cgi?id=1632644#c5 when writing that patch.
The easiest solution, while slightly unfortunate, seems to be to add a couple of non-standard hash parameters specifically for the PDF attachment use-case in the Firefox PDF Viewer. (Note that we cannot use "nameddest" here, since we also need to support the stringified destination-Array case.)
It seems this unit-test started failing in Node.js version 20.10 as well. We should probably try and understand why, but for now just disable it to get passing CI tests.
Given that this event listener is only used to trigger rendering after the sidebar has been opened/closed, we can utilize the existing one in the `PDFSidebar` class for this purpose instead. That one is registered on the sidebar DOM-element, and is needed to remove a CSS-class indicating that the sidebar is moving.
It fixes a few errors in the CSS for HCM.
It now complies with the specs from UI/UX.
Only the foreground must change in HCM and not the background, similarly to what
we had for the alt-text button before moving it.
After the two previous commits, which removed the remaining call-sites, this method is no longer used and can thus be removed.
As mentioned in the JSDocs for the now removed method, synchronous communication between the viewer and the platform code isn't really a good idea.
Once this patch has landed in mozilla-central some additional clean-up of the platform code will also be possible.
The return value is not, nor has it ever been, used for anything and we should thus be able to just send the message.
Note that the responses are already handled by the "message" event listener registered above.
This commit fixes the JSDoc comment for the `annotationEditorMode` setter.
The types tests fail on that now because the input value was changed from
a number to an object with various properties in recent patches, but the
JSDoc comment was not updated accordingly.
Moreover, the types tests also fail because TypeScript 5.3 assumes that
getters and setters have equal return and input value types, which is
arguably also what one would expect, but our `annotationEditorMode`
getter and setter deviate from that because the getter returns a number
while the setter accepts an object. Given that it seems more important
to document the setter entirely, including the meaning and types of its
properties, and the type of the getter can easily be inferred from this
comment and the other JSDoc comments that have `annotationEditorMode` in
it, we remove the getter type to make the types tests pass again.
- Extend the `fetchData` helper function to also support fetching of "blob" data.
- Use the `fetchData` helper function more in the code-base, when fetching non-PDF data. Given that the Fetch API isn't supported for all protocols, this should improve compatibility for the PDF.js library.
Currently the SVG images for the loading-icons exist in two versions, for the light- and dark-themes respectively, which nowadays are the only "duplicated" icons left.
The reason for this is that these icons are being used in `input`-elements, where the regular `mask-image` approach used for all buttons doesn't work.
To address this we add containers for the `input`-elements, such that we have a "regular" DOM-element where we can use `mask-image`.
The goal is to be able to get these outlines to fill the shape corresponding
to a text selection, in order to highlight some text contents.
The outlines will be used to show either selected or hovered highlights.
- Re-factor the existing `fetchData` helper function such that it can fetch more types of data; it now supports "arraybuffer", "json", and "text" (see the sketch below).
This only needed minor adjustments in the `DOMCMapReaderFactory` and `DOMStandardFontDataFactory` classes.[1]
- Expose the `fetchData` helper function in the API, such that the viewer is able to access it.
- Use the `fetchData` helper function in the `GenericL10n` class, since this should allow fetching of localization-data even if the default viewer is run in an environment without support for the Fetch API.
---
[1] While testing this I also noticed a minor inconsistency when handling standard font-data on the worker-thread.
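A rough sketch of what such a helper can look like, using only the Fetch API (the real `fetchData` also needs a fallback for environments/protocols where the Fetch API isn't supported; the "blob" case corresponds to the related patch above):

  async function fetchData(url, type = "text") {
    const response = await fetch(url);
    if (!response.ok) {
      throw new Error(response.statusText);
    }
    switch (type) {
      case "arraybuffer":
        return new Uint8Array(await response.arrayBuffer());
      case "blob":
        return response.blob();
      case "json":
        return response.json();
      default:
        return response.text();
    }
  }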
This is consistent with the implementation used in the (now removed) webL10n-library, and by only using lowercase language-codes internally in the `L10n`-implementations we should avoid future issues e.g. when users manually set the `locale`-option (in the default viewer).
Some fields, somewhere under the Fields entry in the AcroForm, could have no name (in T)
but have a parent which has a name yet isn't itself somewhere under Fields.
As a side-effect, this patch prevents infinite loops caused by potential cycles
under Fields.
This commit migrates the font tests away from the bots. Not only are the
font tests broken on the Windows bot since some time, they also run on
Python 2 (end of life since January 2020) and `ttx` 3.19.0 (released in
November 2017). The latter is installed via a submodule, which requires
more complicated logic for finding and running `ttx`.
We solve the issues by implementing a modern workflow that installs the
most recent stable Python and `ttx` (`fonttools` package) versions. This
simplifies the `ttx` driver code as well because it can now assume `ttx`
is available on the path (just like we do for e.g. `node` invocations).
GitHub Actions takes care of creating a virtual environment with
`fonttools` in it so that the `ttx` entrypoint is available. Locally
the font tests can be run in a similar way by creating and sourcing a
virtual environment with `fonttools` in it before running the font
tests, and a README file is included with instructions for doing so.
This commit prepares for running the font tests on GitHub Actions where
we can't spin up headful browsers because there are no display
capabilities on the workers. This will also be useful for porting other
test targets to GitHub Actions at a later time, as well as running the
tests locally in headless mode.
This commit prepares for the introduction of extra options in later
commits by changing the function signatures of the `startBrowser(s)`
functions to take parameter objects instead of plain parameters. This
makes the call sites explicitly state which parameters they pass,
improving overall readability as well.
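An illustrative sketch of the signature change (the parameter names are assumptions):

  // Before: positional parameters obscure the call site.
  function startBrowserOld(browserName, headless, startUrl) {}
  startBrowserOld("firefox", false, "http://localhost:8888/test.html");

  // After: a parameter object makes each argument explicit.
  function startBrowser({ browserName, headless, startUrl }) {}
  startBrowser({
    browserName: "firefox",
    headless: false,
    startUrl: "http://localhost:8888/test.html",
  });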
The current logic is more complicated than it needs to be because it's
passing a callback function to `startBrowsers` instead of a string.
This commit simplifies the logic by passing the base URL as a string to
`startBrowsers` and having it do further augmentation internally,
thereby removing all indirection of the function calls to `makeTestUrl`
and the inner function it returned.
After recent PRs the size and scope of the CI workflow is now reduced, and this patch tries to simplify things further. More specifically we can directly specify the gulp-tasks in the workflow, and thus clean-up the `gulpfile` a tiny bit.
Note that this will technically be slower, since the tests are now run in series (rather than in parallel), however `gulp externaltest` runs so quickly that it really won't matter in practice.
Hopefully this is enough to address the problem of initializing the Worker in Chromium-based browsers.
Locally I've tried to *force* use of `createCDNWrapper` in development mode, by commenting out the `isSameOrigin` checks, and worker-loading fails against `master` and works with this patch.
Currently the background-color of the `editorParamsToolbar`s doesn't match that of the arrow, which is especially noticeable in dark mode (see zoomed-in screen-shots below).
The simplest solution seems to be to just style the `editorParamsToolbar`s like the `secondaryToolbar`, to limit the amount of CSS changes required.
This should *hopefully* fix 17228, by tweaking the build scripts to give the GENERIC viewer something to await to avoid breaking third-party users of the standalone viewer components.
This button is *only* used in the GENERIC viewer, and will currently be visible either in the main or secondary toolbars (depending on the viewer width).
To simplify upcoming changes, and to avoid then having to complicate the relevant CSS rules unnecessarily, let's place the "Open file"-button permanently in the secondary toolbar instead.
(Note that the GENERIC viewer also, since five years, supports drag-and-drop in order to open local files.)
The `fieldObjects`-getter is implemented in the `PDFDocument` class, which means that the `this._localIdFactory`-property that we pass to `AnnotationFactory.create` doesn't actually exist.
The reason that this hasn't caused any bugs, that I'm aware of, is that all /Fields-entries need to be References to actually make sense.
The `fieldObjects`-getter itself is called, from `src/core/worker.js`, in a way that'll ensure that any `MissingDataException`s are handled. However the problem is that the actual data-lookups in `fieldObjects` and `#collectFieldObjects` are done inside of a Promise, which means that `MissingDataException`s won't be handled and parsing could thus break.
To address this we change all data-lookups to be asynchronous instead.
The `viewerCssTheme`-implementation has always been somewhat hacky, and now it's also *partially* broken ever since we've started using CSS nesting.
Trying to support nested media queries would thus require a lot more parsing of the CSS rules, which seems inefficient and thus generally undesirable.[1]
As discussed on Matrix, let's try to remove the `viewerCssTheme`-option and see if there's any (significant) fallout from this.
---
[1] If this option is brought back, it seems to me that it (in Firefox) should probably be set through the platform-code that handles theming.
Depending on the structure of the outline we could potentially need to expand a few levels, especially in long PDF documents, hence it cannot hurt to pause translation in that case as well.
With the changes in PR 17208, where browser-preferences are now handled as "regular" viewer-options, we can tweak the definition of `canvasMaxAreaInBytes` to slightly simplify things in the `PDFViewerApplication.open` method.
Hopefully this will allow us to catch bugs in new Node.js versions earlier, rather than having to wait for bug reports.
Given that `CompressionStream` is (currently) only potentially used when saving a *modified* PDF document, which is unlikely to be a common use-case in Node.js environments, let's just disable the affected unit-test for now.
Given that this branch is only necessary in development mode and *during* building, but is never actually used in the final viewer-bundles, we can utilize the pre-processor to ignore this code.
Currently we *synchronously* fetch a number of browser preferences/options, from the platform code, during the viewer respectively PDF document initialization paths.
This seems unnecessary, and we can re-factor the code to instead include the relevant data when fetching the regular viewer preferences.
The active LTS version is now based on Node.js version 20, hence let's update the relevant workflows to use that one instead; see https://en.wikipedia.org/wiki/Node.js#Releases
Given that we still support Node.js version 18, i.e. the maintenance LTS version, in the PDF.js library we'll keep testing both versions in GitHub Actions to prevent regressions.
I noticed the following warning in the GitHub Actions workflow logs:
`Configuration file not found: .github/linter_config.yml`
The configuration file is called `fluent_linter_config.yml` instead, so
this commit fixes the path so it points to the correct file.
Fixes 487816b.
The current stable version of Python is Python 3.12, see
https://www.python.org/downloads, so we should switch to that since
Python 3.10 is older and only receives security updates.
This commit tweaks the Fluent linter workflow to match the other
workflow files we have, so we make sure the steps have a newline between
them for better readability and align names and descriptions of steps
with how they are called in the other workflow files we have.
There are environments that include *incomplete* polyfills for the `navigator`-object, which may thus cause the PDF.js library to break.
Despite that clearly not being our fault, it may still result in bug reports filed against the PDF.js project; see e.g. 15728.
Currently this even seems to affect *the latest* version of Node.js; see e.g. [here].
*Please note:* Thanks to the pre-processor none of these changes affect the Firefox PDF Viewer, however it does add "overhead" when working with and reviewing the affected code (which is why I'm not crazy about this).
*Please note:* While following the steps in the README still works with this patch, in the sense that the example runs and successfully renders a PDF document, I unfortunately cannot tell if it illustrates Webpack best practices.
- Remove the `errorWrapper`-element, since it simplifies the example and is consistent with the default viewer; see PR 15533.
- Simplify the l10n-handling, since the `NullL10n` should be able to translate everything e.g. without fallback values; see PR 17146.
Note that we must append the textLayer to the DOM *before* enabling the `highlighter` and `accessibilityManager`, to avoid breaking e.g. a pending searching operation.
The least invasive solution, that I was able to come up with, is to introduce a new `TextLayerBuilder` callback-function for this purpose.
Currently the `WidgetAnnotationElement._getKeyModifier` method will always be falsy on Linux, which seems like a simple oversight. Looking at all the other `FeatureTest.platform` accesses we only handle the `isMac`-case specially, and it seems reasonable to do the same thing here.
The reason that this hasn't led to any bug reports is most likely that the `modifier`-property seems completely unused in the scripting-implementation.
Finally, with these changes we can (slightly) simplify the `FeatureTest.platform` implementation.
After PR 17177 the interface of `XfaLayerBuilder` is now inconsistent, since whether or not we directly append the xfaLayer to the DOM now depends on the rendering intent.
I forgot to include `web/l10n_utils.js` in PR 17161, which currently breaks `ConstL10n` since there's no longer a method called `setL10n`; sorry about that!
Most of the strings shouldn't contain special chars (<= 0x1F), so we can
have a fast path which just checks if the string contains at least one such
char.
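A minimal sketch of such a fast path (the actual escaping performed in the patch may differ):

  function escapeSpecialChars(str) {
    if (!/[\x00-\x1f]/.test(str)) {
      return str; // Fast path: no special chars, nothing to escape.
    }
    return str.replaceAll(
      /[\x00-\x1f]/g,
      c => `\\x${c.charCodeAt(0).toString(16).padStart(2, "0")}`
    );
  }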
To prevent the *standalone* viewer-components from breaking, we need to ensure that the `NullL10n`-interface won't accidentally diverge from the actual `L10n`-implementations.
Looking at the `PDFThumbnailView.setPageLabel` method you'll see that we update e.g. the "aria-label" of the thumbnail-image for documents that contain (valid) pageLabels.
This isn't done in `PDFPageView`, which seems inconsistent, hence this patch.
This patch changes almost all viewer-components[1] to use "data-l10n-id"/"data-l10n-args" for localization, which means that in many cases we no longer need to pass around the `L10n`-instance any more.
One part of the code-base where the `L10n`-instance is still being used "directly" is the AnnotationEditors, however while it might be possible to convert (most of) that code as well that's not attempted in this patch.
---
[1] The one exception is the `PDFDocumentProperties` dialog, since the way it's currently implemented makes that less straightforward to fix without a lot of code changes.
*Please note:* In the Firefox PDF Viewer this findbar is only used for PDF documents placed in e.g. `<iframe>` elements.
By registering a `ResizeObserver` when the `PDFFindBar` is open we slightly unify and simplify how the findbar layout (row vs column) is handled.
This will be especially helpful with upcoming changes, where we'll make use of "data-l10n-id"/"data-l10n-args" to trigger translation in the viewer.
- The old translation engine handled language code casing slightly differently, hence we need to tweak the non-metric locale check in `PDFDocumentProperties` to account for that.
- Use only lowercase names for the pre-defined page names, to improve overall consistency.
*Please note:* These changes only affect the GENERIC build, since `NullL10n` is only a stub elsewhere (see PR 17135).
After the changes in PR 17115, which modernized and improved l10n-handling, the `NullL10n`-implementation is no longer a good fallback for the "proper" `L10n`-classes.
To improve this situation, especially for the *standalone* viewer-components, this patch makes the following changes:
- Let the `NullL10n`-implementation extend an actual `L10n`-class, which is constant and lazily initialized, to ensure that it works *exactly* like the "proper" ones.
- Automatically bundle the "en-US" l10n-strings in the build, via the pre-processor, such that we don't need to remember to manually update them.
- Ensure that the *standalone* viewer-components register their DOM-elements for translation, similar to the default viewer, since this will allow future code improvements by using "data-l10n-id"/"data-l10n-args" in most (if not all) parts of the viewer.
- Remove the `NullL10n` from the `AnnotationLayer`, to avoid affecting bundle size too much.
For third-party users that access the `AnnotationLayer`, as exposed in the main PDF.js library, they'll now need to *manually* register it for translation. (However, the *standalone* viewer-components still works given the point above.)
Use existing helper to calculate the Box
Co-authored-by: Jonas Jenwald <jonas.jenwald@gmail.com>
Ensure that there are non-zero
Co-authored-by: Jonas Jenwald <jonas.jenwald@gmail.com>
Add a reference test for #17147
Given that there's now a bit more asynchronicity in the l10n-initialization in the Firefox PDF Viewer, after PR 17115, try to limit the impact of that by moving it to occur a tiny bit earlier in the default viewer initialization.
In Firefox debug builds, there is an assertion to check that we don't connect
a subelement of an already connected root. Thanks to this assertion, we can see
that the root has already been added to Fluent, hence we don't need to do it
a second time.
We no longer need to await the translation in order to update the
toolbar: it'll be done by Fluent, so we can safely remove the "localized"
event and avoid waiting for it.
*Please note:* This patch contains a couple of micro-optimizations, hence I understand if it's deemed unnecessary.
Move the `AppOptions` initialization into the `Preferences` constructor, since that allows us to remove a couple of function calls, a bit of asynchronicity and one loop that's currently happening in the early stages of the default viewer initialization.
Finally, move the `Preferences` initialization to occur a *tiny* bit earlier since that cannot hurt given that the entire viewer initialization depends on it being available.
Note that CSS-features such as e.g. `flex` didn't exist, or had poor cross-browser support, back when the JavaScript-based solution was initially implemented.
- For the generic viewer we use @fluent/dom and @fluent/bundle
- For the builtin pdf viewer in Firefox, we set a localization url
and then we rely on document.l10n which is a DOMLocalization object.
Currently we're unnecessarily converting data between strings and typed-arrays, when dealing with compressible data, in the `writeStream` function.
Note how we're *first* getting a string-representation of the stream, which involves converting the underlying typed-array into a string, only to immediately convert this back into a typed-array. This seems completely unnecessary, and is easy enough to avoid, and we'll now only do a *single* type-conversion in this function.
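A sketch of the idea (the helper name is hypothetical):

  // Compress the stream's underlying bytes directly; no intermediate
  // string representation is created.
  async function compressStreamBytes(bytes) {
    const readable = new Blob([bytes])
      .stream()
      .pipeThrough(new CompressionStream("deflate"));
    return new Uint8Array(await new Response(readable).arrayBuffer());
  }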
The current test fails intermittently only on Windows for unknown
reasons: the code is correct and on Linux it always passes. However, we
have already spent quite a lot of time on this test, so rather than
spending even more time on it I figured we should look at what behavior
the test is trying to check and find an alternative way to do it that
can't trigger this intermittent issue anymore.
This commit changes the test to use a term that only exists once in the
entire document so we cannot accidentally highlight another match
anymore. This doesn't change anything about the behavior that this test
aims to check: we still test searching in the XFA layer, we still test
that the original term is matched case-insensitively and we still test
that that match is actually highlighted. Note that the only objective of
the test is confirming that the search functionality covers the XFA
layer, so the exact phrase/match is not the interesting bit.
Given that we only use standard `import`/`export` statements now, after recent PRs, the "exports" global is unused.
Instead we add "__non_webpack_import__" to the `globals` to avoid having to sprinkle disable statements throughout the code.
Finally, the way that `globals` are defined has changed in ESLint and we should thus explicitly specify them as "readonly"; please find additional details at https://eslint.org/docs/latest/use/configure/language-options#specifying-globals
The previous change that set the timeout had effect because we have seen
quite a few protocol timeouts now correctly being raised in the context
of the active test, however we have also still seen a handful of cases
where this wasn't the case and the one second difference turned out to
be too low (likely because the operation was started slightly after one
second into the test run). We therefore tweak the value to be 75% of the
Jasmine timeout. This should be enough to catch operations that happen
later on in the test run, and if a single operation takes that long any
hope for success is already gone anyway.
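A sketch of the relationship between the two timeouts (values for illustration only):

  import puppeteer from "puppeteer";

  const JASMINE_TIMEOUT = 30000; // As configured in the boot files.

  const browser = await puppeteer.launch({
    // Keep the protocol timeout below the Jasmine timeout, so protocol
    // errors surface in the test that actually triggered them.
    protocolTimeout: Math.floor(0.75 * JASMINE_TIMEOUT), // 22500 ms.
  });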
It's not necessary because we have configured silent printing for
Firefox and Chrome in the browser arguments we pass in `test.mjs`. This
means that the print dialog is not even shown at all or disappears
automatically once printing is done, so the Escape key press serves no
purpose. Since it has been shown to time out, likely because the page
loses focus during printing, and because the page itself doesn't know
when the printing dialog is shown and we therefore can't possibly do the
key press at the right time anyway, this commit gets rid of it to
stabilize the test.
Those files only contain old debugging code that is not used/imported
anywhere anymore, which is generating code scanning alerts. Moreover,
they rely on globals/platform-specific code and don't import/export
logic properly.
Given the amount of work put into removing `require`-calls from the code-base, let's ensure that new ones aren't accidentally added in the future.
Note that we still have a couple of files where `require` is being used, in particular:
- The Node.js examples, however those will be updated to use `import` in PR 17081.
- The Webpack examples, and related support files, however I unfortunately don't know enough about Webpack to be able to update those. (Hopefully users of that code will help out here, once version `4` is released.)
- The `statcmp`-tool, since *some* of those `require`-calls cannot be converted to `import` without other code changes (and that file is only used during benchmarking).
Please find additional details at https://github.com/import-js/eslint-plugin-import/blob/main/docs/rules/no-commonjs.md
This *finally* allows us to mark the entire PDF.js library as a "module", which should thus conclude the (multi-year) effort to re-factor and improve how we import files/resources in the code-base.
This also means that the `gulp ci-test` target, which is what's run in GitHub Actions, now uses JavaScript modules since that's supported in modern Node.js versions.
For large/complex images it's possible that the image-data arrives in the API *after* the page has been scrolled out-of-view and thus been cleaned-up. In this case we obviously shouldn't cache such page-level data, since it'll first of all be unused and secondly can increase memory usage *a lot*.
Also, ensure that we *immediately* release any `ImageBitmap` data in this case to help reclaim memory faster.
The examples themselves were updated to account for JavaScript modules, which didn't require changing the actual URLs.
However, since it seems that JSFiddle doesn't support JavaScript modules in its separate "JavaScript" editing-area we need to change how we embed the examples to avoid showing a blank "JavaScript"-tab.
When pdfBug is true, the substitution font is used in the text layer in order
to be able to know, thanks to the devtools, which font is really used.
And to be sure that fonts are loaded, the font cache isn't cleaned up when
the debugger is active.
The default protocol timeout is 180 seconds according to the
documentation at https://pptr.dev/api/puppeteer.browserconnectoptions,
but the Jasmine timeout we configure in the individual boot files is 30
seconds. The consequence of this is that if a protocol (CDP) error
occurs after 30 seconds Jasmine will fail the test, but the actual
protocol error from Puppeteer is raised much later in the context of
another test, which causes unrelated failures or tracebacks.
This commit fixes the problem by configuring Puppeteer to always use a
lower protocol timeout than the Jasmine timeout so that protocol errors
are always raised in the context of the test that actually triggered it.
It's been loaded as a JavaScript module for a long time, and given that the file is bundled as-is (without building) it seems reasonable to just change the file extension now.
The Windows bot is usually slower than the Linux bot, and therefore
text layer rendering is as well. However, the `autoprint` test awaited
text layer rendering to complete before activating the selector check,
which makes it timing-sensitive and causes it to never resolve because
the page is already printed (and the printed page div removed) by then.
This commit should fix the issue by activating the selector check as
soon as possible, namely as soon as the viewer appears, which should
ensure we're always registering the selector check in time because we're
doing it even before rendering is starting.
Comparing the currently supported browsers/environments, see [the FAQ](https://github.com/mozilla/pdf.js/wiki/Frequently-Asked-Questions#faq-support) and the [MDN compatibility data](https://developer.mozilla.org/en-US/docs/Web/API/structuredClone#browser_compatibility), the `structuredClone` polyfill is *only* needed in Google Chrome versions < 98. Because of some limitations in the core-js polyfill we're currently forced to special-case the `transfer` handling to prevent bugs, and it'd be nice to avoid that.
Note that `structuredClone`, with transfers, is only used in two spots:
- The `LoopbackPort` class, which is only used with fake workers. Given that fake workers should *never* be used in browsers, breaking that edge-case in older Google Chrome versions seem fine.
- The `AnnotationStorage` class, when Stamp-annotations have been added to the document. Given that Google Chrome isn't the main focus of development, breaking *part* of the editing-functionality in older Google Chrome versions should hopefully be acceptable.
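For reference, the standard `structuredClone` transfer usage in question:

  const buffer = new Uint8Array([1, 2, 3]).buffer;
  const clone = structuredClone({ data: buffer }, { transfer: [buffer] });
  // The original buffer is now detached; the clone owns the bytes.
  console.log(buffer.byteLength, clone.data.byteLength); // 0 3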
To avoid problems with `export` statements in the QuickJS Javascript Engine, we can work-around that by *explicitly* exposing `pdfjsScripting` globally instead.
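A minimal sketch of that work-around (the exposed property names are assumptions):

  const pdfjsScripting = {
    initSandbox: params => {
      /* ... */
    },
  };
  // Assign to `globalThis` instead of using an `export` statement,
  // which the QuickJS Javascript Engine has trouble with.
  globalThis.pdfjsScripting = pdfjsScripting;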
At this point in time all browsers, and also Node.js, support standard `import`/`export` statements and we can now finally consider outputting modern JavaScript modules in the builds.[1]
In order for this to work we can *only* use proper `import`/`export` statements throughout the main code-base, and (as expected) our Node.js support made this much more complicated since both the official builds and the GitHub Actions-based tests must keep working.[2]
One remaining issue is that the `pdf.scripting.js` file cannot be built as a JavaScript module, since doing so breaks PDF scripting.
Note that my initial goal was to try and split these changes into a couple of commits, however that unfortunately didn't really work since it turned out to be difficult for smaller patches to work correctly and pass (all) tests that way.[3]
This is a classic case of every change requiring a couple of other changes, with each of those changes requiring further changes in turn and the size/scope quickly increasing as a result.
One possible "issue" with these changes is that we'll now only output JavaScript modules in the builds, which could perhaps be a problem with older tools. However it unfortunately seems far too complicated/time-consuming for us to attempt to support both the old and modern module formats, hence the alternative would be to do "nothing" here and just keep our "old" builds.[4]
---
[1] The final blocker was module support in workers in Firefox, which was implemented in Firefox 114; please see https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import#browser_compatibility
[2] It's probably possible to further improve/simplify especially the Node.js-specific code, but it does appear to work as-is.
[3] Having partially "broken" patches, that fail tests, as part of the commit history is *really not* a good idea in general.
[4] Outputting JavaScript modules was first requested almost five years ago, see issue 10317, and nowadays there *should* be much better support for JavaScript modules in various tools.
The user should *always* provide a correct `GlobalWorkerOptions.workerSrc` value when using the PDF.js library in browser environments. Note that the fallback:
- Has been deprecated ever since PR 11418, first released in version `2.4.456` over three years ago.
- Was always a best-effort solution, with no guarantees that it'd actually work correctly.
- With upcoming changes, w.r.t. outputting JavaScript modules, it'd now be more difficult to determine the correct value.
The minified default viewer has never been distributed in either official releases or through pdfjs-dist, which means that it's most likely unused; it has also never been tested nor actively maintained.
Setting the alpha-value explicitly to `1` in `rgb` colors is unnecessary, since that's the default value, and this way we ever so slightly reduce the size of our CSS files; e.g. `rgb(255 255 255 / 1)` is equivalent to `rgb(255 255 255)`.
Unfortunately I've not found a Stylelint rule to enforce this automatically, and the patch was generated using search and replace.
When an editing button is disabled, focused, and the user presses Enter (or Space), an
editor is automatically added at the center of the current page.
Subsequent creations can be done using the same keys within the focused page.
When an element has the hasOwnCanvas flag we must have an HTML container to attach
the canvas where the element will be rendered.
So the noHTML flag must take this information into account:
- in some cases the noHTML flag is reset depending on the hasOwnCanvas value;
- in some others, the hasOwnCanvas flag is set depending on the value of noHTML.
To reduce the risk of regressing something else, given that the issue only applies to a (for the default viewer) non-default configuration, this patch is purposely limited to only TextWidget-annotations in the display layer.
This happens only on Windows with Chrome.
For some reason the click event isn't triggered correctly, but it seems
to work correctly with pointerup.
And it seems that when drawing an SVG on an OffscreenCanvas we need to wait
a little in order to be able to transfer it, which is why this patch adds
a check on the canvas content.
This has been deprecated since version `2.15.349`, which was released about a year ago.
Removing this will also simplify some upcoming changes, specifically outputting of JavaScript modules in the builds.
This removes the only remaining old and non-standard handling of exports in the `web/`-folder, since some initial attempts at outputting JavaScript modules in the builds have identified this file as a potential problem.
While this uses a hard-coded list, for overall simplicity, I don't believe that that's a big problem since:
- Generating this file automatically would require a bunch more parsing *every single time* that the library is built.
- The official API-surface doesn't change often enough for this to really impede development in any significant way.
- The added unit-test helps ensure that this list cannot accidentally become outdated.
When the editor is invisible (because it's on a non-rendered page) its parent is null.
But when we undo its deletion, we need a parent to attach it to.
Given that this is accessed multiple times per page in the viewer, that leads to a number of (strictly speaking unneeded) function calls and allocated Objects for each invocation. By converting `layerProperties` to a, lazily initialized, Object we can avoid this.
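A rough sketch of the pattern (with hypothetical property names): the Object is created once, on first access, and re-used afterwards:

```js
class PageViewSketch {
  #layerProperties = null;

  get layerProperties() {
    // Lazily initialize the Object instead of re-creating it per call.
    return (this.#layerProperties ||= {
      annotationStorage: this._annotationStorage ?? null,
      linkService: this._linkService ?? null,
    });
  }
}
```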
When PR 17015 removed the `disabled` handling for the "Save"-button it left a bunch of now unused CSS rules behind, which seems like a simple oversight.
Rather than shipping "dead" CSS rules, let's remove those until such a time that they're actually needed.
The old pre-processor used for CSS, and HTML, files leaves comments intact which unnecessarily contributes to the overall size of the *built* CSS files (note that the built JavaScript files don't include comments).
Rather than trying to "hack" comment removal into the pre-processor it seems easier to use a PostCSS plugin instead. The one potential issue is that it also affects *some* whitespaces, and it's not clear to me if this'll work with the various CSS-related tests that run in mozilla-central.
Please refer to https://www.npmjs.com/package/postcss-discard-comments for additional information.
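Usage boils down to adding the plugin to a PostCSS pipeline, roughly like this (a sketch; the actual gulpfile integration differs):

```js
import postcss from "postcss";
import discardComments from "postcss-discard-comments";

const css = "/* removed at build time */ .toolbar { color: red; }";
postcss([discardComments()])
  .process(css, { from: undefined })
  .then(result => console.log(result.css)); // ".toolbar { color: red; }" (modulo whitespace)
```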
*For many non-English locales the translated strings will be longer, which is easy to forget about during development/review.*
Note how for some locales (e.g. Swedish) the altText-button ends up looking horizontally "cramped", hence it seems reasonable to add a bit of inline padding to improve this.
In the scripting integration tests we use a few different typing
delays, mostly 100 or 200 milliseconds. According to for example
https://www.typingpal.com/en/documentation/school-edition/pedagogical-resources/typing-speed,
a fast typing speed is around 300 characters per minute, which is 5
characters per second and therefore a delay of 200 milliseconds between
each keystroke. Note that this is already above average, so in practice
the delay will be even larger. The 100 millisecond variant is therefore
unrealistically fast and not suitable for the integration tests, which
aim to simulate average user behavior.
On top of that, the quick typing speeds are problematic for the tests
that involve validation alert dialogs appearing during typing. In those
tests a handler is registered to close the dialog once it pops up, but
it takes time for Puppeteer to notice the dialog, trigger the handler
and close it. If the typing delay, which is the delay between the key
down and key up events according to the Puppeteer source code at
https://github.com/puppeteer/puppeteer/blob/master/packages/puppeteer-core/src/cdp/Input.ts#L209-L215,
is too short, the key up event will be fired before the dialog is
closed. In that time the text box we're typing in is not focused, so
when the dialog is closed the `page.type()` call on the text box will
never resolve because the key up event never reached the text box.
This commit aims to fix the issues by converting all 100 millisecond
delays to 200 milliseconds. For instance, the "must check input for US
zip format" test failed pretty consistently locally before, and hasn't
failed anymore with a 200 millisecond delay.
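In Puppeteer terms the change amounts to the following (hypothetical selector and input):

```js
// A 200ms delay between keystrokes corresponds to ~300 characters per
// minute, i.e. an already fast typist.
await page.type("#zipField", "12345", { delay: 200 });
```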
This integration test fails often because we wait for scripting to be
ready before we check the printed page, but most of the time the PDF
is already done printing before scripting is reported to be ready.
This happens because the print trigger is on the `Open` event, which is
one of the first events to be dispatched and, most notably, before
scripting is marked as ready; please see
https://github.com/mozilla/pdf.js/blob/master/web/pdf_scripting_manager.js#L176-L191.
Given that the PDF document is only one page, printing it usually
finishes between triggering the `Open` event and scripting being
reported as ready. If this happens the printed page is already destroyed
before we get to our actual test, which will then timeout because it
will never find the printed page in the DOM.
This commit fixes the problem by no longer waiting for scripting to be
ready, because the printed page appearing is already enough to know
that autoprint was triggered (after all, there is no other user
interaction involved here). While we're here we also switch to the
shorter `page.waitForSelector` function.
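The waiting part then becomes a single call (the selector here is illustrative):

```js
// Resolves as soon as the printed page shows up in the DOM, without a
// fixed timeout and without depending on scripting being marked ready.
await page.waitForSelector(".printedPage", { timeout: 0 });
```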
Disable pointerdown on the alt-text button, to prevent dragging the editor
when the button is clicked (especially when slightly moving the mouse
between the down and the up), but keep it for the text area.
The dialog element handles closing with <kbd>Esc</kbd> automatically, however we're not reporting telemetry in that case.
In order to fix that the easiest solution, as far as I'm concerned, seems to be to move the telemetry reporting into the dialog-close handler since it's always invoked.
This patch addresses an edge-case that'll probably never happen, but it nonetheless seems like something that we want to fix.
Note how we're using the `#currentEditor`-field to prevent opening the dialog when it's already active, and it being reset once the dialog has been closed.
By also resetting the `#currentEditor`-field during destruction, instead of waiting until the dialog has actually closed (assuming it's currently open), there's a *tiny* window of time[1] during which the dialog could theoretically be (incorrectly) re-opened, leading to out-of-sync state in the viewer-component.
---
[1] Since the "close" event, on a dialog-element, is dispatched asynchronously by the browser.
When the user edits an existing alt-text and removes it, we want to be able
to save this state and consequently remove the done state from the
alt-text button.
Remove the button from its parent when the editor is removed: it should
help to save a few KB of memory.
Radio-buttons can also be toggled by clicking on their associated `label`-elements, and not only on the `input`-elements themselves, however it seems that "pointerdown" event listeners don't cover that case.
Hence it's possible that telemetry could miss certain cases of a mouse being used, and the easiest solution seems to be to instead use "click" event listeners and just ignore keyboard-based events.
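One way of ignoring keyboard-based activations, sketched below under the assumption that keyboard-triggered "click" events report `detail === 0` (as they do in the major browsers), whereas real mouse clicks carry the click count:

```js
function recordMouseInteraction() {
  // ... hypothetical telemetry helper ...
}

document.querySelector("label")?.addEventListener("click", evt => {
  if (evt.detail === 0) {
    return; // keyboard-based activation, don't count it as mouse usage
  }
  recordMouseInteraction();
});
```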
Rather than trying to be "clever" here, and possibly affect code readability negatively, let's just restore the `collectFields` parameter to address the unneeded parsing that now happens when printing new Annotations.
Given the limitations of the old pre-processor that's used for CSS/HTML files, this unfortunately isn't as "easy" to implement as it is for JavaScript code.
Since this is the first case where we've wanted to do conditional CSS imports, rather than trying to completely re-write the pre-processor, this patch settles for handling it explicitly in the `expandCssImports` function.
Looking at the save-telemetry values they're all boolean *except* for "alt_text_edit" in one instance, since `this.#previousAltText` may be an empty string (looking at the `editAltText` method) and this value may thus become an empty string as well.
When closing a document in the viewer, e.g. by running `PDFViewerApplication.close()` in the console, the `AltTextManager.#finish` method currently throws *unless* the `altText` dialog is actually open.
Similar to e.g. the PasswordPrompt, we should thus only attempt to close the `altText` dialog when it's open.
In the rare situation that an optional content dictionary lacks a /Type-entry we currently throw, which may prevent e.g. Form XObjects from rendering completely.
Fixes https://bugs.ghostscript.com/show_bug.cgi?id=707147
It's a part of the UX specifications. There's a drawing issue in Firefox
(see bug https://bugzilla.mozilla.org/1853288) but setting the
background-clip property to content-box seems to be a good workaround.
Especially on slower bots there is some time between clicking the
element and the actual visibility change, but we didn't await this and
checked the visibility state immediately after clicking. This can be
reproduced 100% of the time by introducing a delay in the `display` and
`hidden` handlers of the `_commonActions` shadow call.
This commit fixes the problem by waiting until the first visibility
change actually happened before continuing with the assertions.
This integration test currently fails intermittently on the bots because
of the fixed timeout in the test, which is sometimes too low on slower
systems. The issue can be reproduced 100% of the time by introducing a
delay just before dispatching the `switchannotationeditormode` event.
Puppeteer also discourages this and recommends waiting for a selector
instead, which we now do here. This ensures that the test only
continues if the element under test is available and therefore prevents
any timing problems.
The x/y-coordinates are floats instead of integers like one might
expect. The current approach rounds both the old and the new
coordinates in order to do integer comparison. However, rounding each
coordinate individually causes too much loss of precision because,
depending on the decimal value, they are either rounded up or down
which causes intermittent off-by-one errors.
This commit fixes the problem by comparing coordinate differences
instead of the coordinates themselves. The precision loss is avoided
by subtracting the old from the new coordinate as-is and only rounding
the final result.
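A concrete illustration of the precision loss:

```js
const oldX = 10.5;
const newX = 11.4; // the element actually moved by 0.9 pixels

// Rounding each coordinate separately can be off by one:
console.log(Math.round(newX) - Math.round(oldX)); // 11 - 11 = 0

// Rounding the difference instead gives the expected result:
console.log(Math.round(newX - oldX)); // Math.round(0.9) = 1
```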
This integration test currently fails intermittently on the bots because
of the fixed timeout in the test, which is sometimes too low on slower
systems. The issue can be reproduced 100% of the time by introducing a
delay in the `WidgetAnnotationElement.showElementAndHideCanvas` method.
Puppeteer also discourages this and recommends waiting for a selector
instead, which we now do here. This ensures that the test only
continues if the element under test is available and therefore prevents
any timing problems.
We already use `page.$eval` in most other integration tests and it's
simpler because it already takes the selector as argument, so we don't
have to do a separate `querySelector` call ourselves.
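For example (with a hypothetical selector), the two-step version collapses into a one-liner:

```js
// Before: run querySelector ourselves inside the page context.
const before = await page.evaluate(
  sel => document.querySelector(sel).value,
  "#field"
);

// After: page.$eval() takes the selector and hands us the element.
const after = await page.$eval("#field", el => el.value);
```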
When there is no tree, the tags for the new annotations are just put under the root element.
When there is a tree, we insert the new tags at the right place using the value
of structTreeParentId (added in PR #16916).
Now that modern JavaScript is fully supported also in the worker-thread we no longer need to keep old closures, which slightly reduces the size of the code.
Given that this is a shadowed getter, the `opMap` is already lazily initialized and it shouldn't be necessary to *also* use the `getLookupTableFactory` helper function here. Looking at the history of the code, it seems that this is simply a leftover from before JavaScript classes existed.
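For readers unfamiliar with the pattern, here's a simplified version of the `shadow` helper: on first access the getter replaces itself with a plain data property, so the value is only ever computed once.

```js
function shadow(obj, prop, value) {
  Object.defineProperty(obj, prop, {
    value,
    enumerable: true,
    configurable: true,
    writable: false,
  });
  return value;
}

class EvaluatorSketch {
  get opMap() {
    const opMap = Object.create(null);
    // ... fill in the operator table ...
    return shadow(this, "opMap", opMap); // later reads skip the getter
  }
}
```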
While this cache will not contain a huge amount of data in practice, it's nonetheless a *global* cache that currently will never be cleared.
This patch also removes the existing closure, since it shouldn't really be necessary nowadays given that the code is a JavaScript module which means that only explicitly listed properties will be exported.
When I started looking at PR 16938 it occurred to me that some of the new structTree-methods are synchronously accessing certain dictionary-data (not used during "normal" structTree-parsing), which may not be generally safe since everything in a dictionary could be a reference (and the relevant data may not have been loaded yet).
Rather than suggesting that we make all those new methods even more asynchronous, to me the overall simplest and safest solution is to ensure that the *entire* PDF document has been loaded *before* we begin saving it. In practice this shouldn't really affect "performance" of saving noticeably, since it's always depended on the entire PDF document being downloaded.
Finally note that with the exception of the PDF document possibly not having been fully downloaded when saving is triggered, all other "global" document properties are pretty much guaranteed to already be available at this point.
The unit test is re-enabled because it no longer seems to fail after 10
runs on Linux where this used to fail often. Code inspection also shows
that the code is correct and shouldn't raise the previous exception
anymore. Finally, a lot has changed since this test was disabled, such
as new Jasmine versions, new Linux bot OS version and new browser
versions.
While it makes sense to check that the `destDict` parameter is indeed a Dictionary, since that data comes from the PDF document itself, the `resultObj` parameter is an internal PDF.js implementation detail that should always be correct (or tests will fail).
Over time the amount of "document level" data potentially needed during parsing of Annotations has increased a fair bit, which means that we currently need to ensure that a bunch of data is available for each individual Annotation.
Given that this data is "constant" for a PDF document we can instead create (and cache) it lazily, only when needed, *before* starting to parse the Annotations on a page. This way the parsing of individual Annotations should become slightly less asynchronous, which really cannot hurt.
An additional benefit of these changes is that we can reduce the number of parameters that need to be explicitly passed around in the annotation-code, which helps overall readability in my opinion.
One potential drawback of these changes is that the `AnnotationFactory.create` method no longer handles "everything" on its own, however given how few call-sites there are I don't think that's too much of a problem.
The classes were stripped out when creating the field name, but
that led to a wrong name.
Since class components in a path are irrelevant, they're just ignored
when searching for a node in the datasets.
The focus callback must only be called when the element has been blurred.
For example, the blur callback (which implies some potential validation) is not called
when the newly focused element is another tab, an alert dialog, etc., so consequently
the focus callback mustn't be called when the element gets its focus back.
While reviewing PR 16898 it occurred to me that it's currently impossible to trigger downloading of FileAttachment annotations using the keyboard.
Hence this patch adds `Ctrl + Enter` as the keyboard shortcut to download those, thus supplementing the existing double-clicking when using a mouse.
The goal is to always have something which is focusable to let the user select
it with the keyboard.
It fixes the mentioned bug because the annotation layer will now have a container
to attach the canvas for annotations having their own canvas.
`.grab-to-pan-grab:active` is `#viewerContainer` when the mouse is
pressed down. It is supposed to have a `cursor: grabbing` appearance
immediately on mousedown.
`.grab-to-pan-grabbing` is the overlay that is supposed to cover
everything, and also has the `cursor: grabbing` appearance. The "cover
everything" result is achieved through `position:fixed`, `inset:0`, etc.
The block with these CSS properties for "cover everything" is currently
shared by `.grab-to-pan-grab:active` and `.grab-to-pan-grabbing`, but
only "cursor" need to be shared. The original JS and CSS code at
https://github.com/Rob--W/grab-to-pan.js shows that these were supposed
to be associated with the overlay only.
The PR that added this to PDF.js also shows that the "cover everything"
CSS properties were supposed to be limited to the overlay only:
https://github.com/mozilla/pdf.js/pull/4209#discussion-diff-9285917
But the final version of the PR mistakenly merged them together.
This patch rectifies that mistake.
The TypeScript compiler is now configured to know about the import map
to be able to resolve those imports and find the associated types.
As tsc outputs declaration files using the original module identifiers
and not the resolved ones, tsc-alias is used to post-process the
declaration files by resolving those paths.
Some configuration settings like `paths` cannot be provided through CLI
arguments but only in a configuration file. And when using a
configuration file, only a few options (like `--outDir`) can still be
provided through the CLI.
Using `removeNullCharacters` on the URL should be completely redundant, given the kind of data that we're passing to the `addLinkAttributes` helper function. Note that whenever we're handling a URL, originating in the worker-thread, in the viewer that helper function is always being used.
Furthermore, on the worker-thread all URLs are parsed with the `createValidAbsoluteUrl` helper function, which uses `new URL()` to ensure that a valid URL is obtained. Note that the `URL` constructor will either throw, or in some cases just ignore them, when encountering `\u0000`-characters during parsing.
Hence it should be *impossible* for a valid URL to contain `\u0000`-characters and we can thus simplify the viewer-code a tiny bit. The use of `removeNullCharacters` is most likely a left-over from back when `new URL()` wasn't generally available in browsers.
Testing the `tagged_stamp.pdf` document locally in the viewer, I noticed that e.g. the /Alt entry for the StampAnnotation contains "Secondary text for stamp\u0000".
Elsewhere in the viewer we're skipping null-chars and it's easy enough to do that in the `StructTreeLayerBuilder` class as well. (Note that we generally let the API itself return the data as-is.)
This fixes invalid type references (either due to invalid paths for the
import or missing imports) in the JS doc, as well as some missing or
invalid parameter names for @param annotations.
The issue described in the mentioned bug really occurs because
Acrobat is rendering the XFA instead of the Acroform.
The original patch just tried to work around the issue, but it
induced some regressions.
This was added in PR 14899, over a year ago, however it's still completely unused in the PDF.js library/viewer. In hindsight I think that it was a mistake to add unused functionality, and the issue should probably have been WONTFIXed instead, however we probably can't just remove it now.
Thanks to the pre-processor, we can at least exclude this code in the *built-in* Firefox PDF Viewer.
After the changes in PR 16828 the `StampEditor` can now be initialized with a File, in addition to a URL, hence it seems that the `isEmpty` method ought to take that property into account as well.
Looking at this I also noticed that the assignment in the constructor may cause the `this.#bitmapUrl`/`this.#bitmapFile` fields to be `undefined`, which "breaks" the comparisons in the `isEmpty` method.
We could obviously fix those specific cases, but it seemed overall safer (with future changes) to just update the `isEmpty` method to be less sensitive to exactly how these fields are initialized and reset.
Currently we're repeating virtually the same code *four times* when fetching the bitmap-data, which seems unnecessary.
Also, ensure that the `#bitmapPromise` is always `null`ed by moving that into the `StampEditor.#getBitmapDone` method.
Currently this unit-test will pass just fine if compression is disabled, e.g. by commenting out the relevant code in the `src/core/writer.js` file.
While we don't have a simple way of *directly* checking that the Annotation text-content is compressed, we can however use the resulting file-size as a fairly good proxy. (Note that if compression is disabled the file-size is more than doubled.)
Please note that for performance reasons it's not really advised to use the same worker-thread *in parallel* for parsing multiple PDF documents, since they will then unnecessarily compete for resources.
However, given that it's still possible to do that e.g. when using the global `workerPort` it probably won't hurt to add a unit-test for this particular situation.
Given that the `PDFDocumentLoadingTask.destroy()`-method is documented as being asynchronous, you thus need to await its completion before attempting to load a new PDF document when using the global `workerPort`.
If you don't await destruction as intended then a new `getDocument`-call can remain pending indefinitely, without any kind of indication of the problem, as shown in the issue.
In order to improve the current situation, without unnecessarily complicating the API-implementation, we'll now throw during the `getDocument`-call if the global `workerPort` is in the process of being destroyed.
This part of the code-base has apparently never been covered by any tests, hence the patch adds unit-tests for both the *correct* usage (awaiting destruction) as well as the specific case outlined in the issue.
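In other words, when sharing the global `workerPort` the intended usage looks like this:

```js
import { getDocument } from "pdfjs-dist";

const firstTask = getDocument("first.pdf");
await firstTask.promise;
// `destroy()` is asynchronous -- await it before re-using the global
// workerPort, otherwise the next getDocument-call may never resolve
// (and, with this patch, will now throw instead).
await firstTask.destroy();

const secondTask = getDocument("second.pdf");
await secondTask.promise;
```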
- The `src/core/unicode.js` exclude ought to have become unnecessary already with PR 16200, which significantly shortened and simplified that file.
- The `src/core/glyphlist.js` exclude no longer seems necessary in practice either, possibly because of improvements in Babel.
The main stamp button will now be used to just enter an add/edit image mode:
- the user can add a new image using the new button;
- the user can edit an image by resizing or moving it.
In image mode, when the user clicks on the page but not on an editor,
then all the selected editors will be unselected.
After the `src/core/`-changes in PR 16779 the `PDFDocumentProxy.getJSActions` method should no longer be able to return *empty* entries, which means that we can simplify the "JavaScript support is not enabled"-warning in the viewer.
Furthermore, improve the auto-printing hack used when scripting is disabled.
Given that the FieldObjects are parsed in parallel, in combination with the existing caching in the `getPage`-method and `annotations`-getter, adding additional caches for this fallback code-path doesn't seem entirely necessary.
We're adding the action to the undo/redo stack whatever the status of the
operation was. This patch aims to add the action only when the image has been
successfully added.
When several editors are selected and the window loses and then regains focus,
the previously focused editor triggers its focus callback, making it the only
selected one.
This patch aims to avoid triggering the focus callback when the main window
gets its focus back.
When moving an element in the DOM, the focus is potentially lost, so we need to make sure
that the element focused before the move gets its focus back afterwards.
But we must take care not to execute any focus/blur callbacks, because the user didn't
do anything that should trigger such events: it's an implementation detail. For example,
when several editors are selected and moved, the same ones must be selected at the end, so
no element should receive a focus event which would set it as the only selected one.
There are two rotations we have to deal with: the viewer one and the editor one.
The previous implementation was a bit complex and having to deal with these
rotations would potentially have increased that complexity.
So this patch aims to simplify the implementation and deal with all the possible
cases.
The main idea is to transform the mouse deltas according to the rotations and then
apply the resizing in the page coordinate system.
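A sketch of the idea (hypothetical helper; the real implementation also has to combine the viewer and editor rotations):

```js
// Map mouse deltas into the page coordinate system for the four possible
// 90-degree rotations, before applying the resize.
function rotateDeltas(dx, dy, rotation) {
  switch (((rotation % 360) + 360) % 360) {
    case 90:
      return [dy, -dx];
    case 180:
      return [-dx, -dy];
    case 270:
      return [-dy, dx];
    default:
      return [dx, dy];
  }
}
```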
When resizing an editor we're currently using unidirectional cursors, please refer to https://developer.mozilla.org/en-US/docs/Web/CSS/cursor
Given that editors can (generally) be resized to become either smaller or larger, it seems overall more appropriate to use bidirectional cursors to make this clearer to the user.
Note that, as mentioned in the MDN article, some environments (which seems to apply to e.g. Windows 11) don't differentiate between the two cursor formats and simply use bidirectional ones unconditionally.
One additional benefit of these changes is that the relevant CSS rules become slightly more compact.
We obviously don't want to re-introduce any `require` usage in e.g. the viewer, since we should strive to only use native `import` statements wherever possible.[1]
Hopefully exposing e.g. the library globally in more cases won't break anything, however it's somewhat difficult for me to imagine all the ways in which third-party users may be accessing the PDF.js library. (Given the lack of a runnable test-case in the issue, I also cannot guarantee that this is enough to fully address the problem.)
---
[1] Ideally we should probably not rely on e.g. `pdfjsLib` being globally available in the *built* viewer, and rather always `import` the library instead.
Unfortunately this would require larger (possibly breaking) changes in the builds that we provide, however note that Firefox only recently got support for `import` in workers and that Webpack still only have *experimental* support for outputting "proper" modules.
This method is very old, however with the exception of the auto-print hack (when scripting is disabled) in the viewer it's never actually been used.
Most likely the idea with `PDFDocumentProxy.getJavaScript` was that it'd be useful if scripting support was added, however it turned out that it was a bit too simplistic and instead a number of new methods were added for the scripting use-cases.
Without this patch the password dialog is pretty difficult to use in the GeckoView-viewer, because of a number of missing CSS variables.
*Please note:* This patch makes no effort at actually styling the dialog to better suit the overall look of the GeckoView-viewer, but focuses solely on making it actually usable (since password protected PDF documents are somewhat rare).
If the current PDF document is closed while the password dialog is open, e.g. manually by calling `PDFViewerApplication.close()` from the console, the password dialog wouldn't be closed as intended.
*Please note:* This could only affect the GENERIC viewer, although it's very unlikely to ever happen, since that's the only one that supports opening more than one PDF document.
Next, install Node.js via the [official package](https://nodejs.org) or via
[nvm](https://github.com/creationix/nvm). You need to install the gulp package
globally (see also [gulp's getting started](https://github.com/gulpjs/gulp/tree/master/docs/getting-started)):
$ npm install -g gulp-cli@^2.3.0
If you prefer to not install `gulp-cli` globally, you have to prefix all the `gulp` commands with `npx` (for example, `npx gulp server` instead of `gulp server`).
If everything worked out, install all dependencies for PDF.js:
$ npm install
> [!NOTE]
> On MacOS M1/M2 you may see some `node-gyp`-related errors when running `npm install`. This is because one of our dependencies, `"canvas"`, does not provide pre-built binaries for this platform and instead `npm` will try to build it from source. Please make sure to first install the necessary native dependencies using `brew`: https://github.com/Automattic/node-canvas#compiling.
Finally, you need to start a local web server as some browsers do not allow opening
"description":"The theme to use.\n0 = Use system theme.\n1 = Light theme.\n2 = Dark theme.",
"type":"integer",
"enum":[
0,
1,
2
],
"enum":[0,1,2],
"default":2
},
"showPreviousViewOnLoad":{
"title":"View position on load",
"description":"The position in the document upon load.\n -1 = Default (uses OpenAction if available, otherwise equal to `viewOnLoad = 0`).\n 0 = The last viewed page/position.\n 1 = The initial page/position.",
"type":"integer",
"enum":[
-1,
0,
1
],
"enum":[-1,0,1],
"default":0
},
"defaultZoomDelay":{
"title":"Sidebar state on load",
"description":"Controls the state of the sidebar upon load.\n -1 = Default (uses PageMode if available, otherwise the last position if available/enabled).\n 0 = Do not show sidebar.\n 1 = Show thumbnails in sidebar.\n 2 = Show document outline in sidebar.\n 3 = Show attachments in sidebar.",
"type":"integer",
"enum":[
-1,
0,
1,
2,
3
],
"enum":[-1,0,1,2,3],
"default":-1
},
"enableHandToolOnLoad":{
"type":"boolean",
"default":false
},
"enableML":{
"type":"boolean",
"default":false
},
"cursorToolOnLoad":{
"title":"Cursor tool on load",
"description":"The cursor tool that is enabled upon load.\n 0 = Text selection tool.\n 1 = Hand tool.",
"type":"integer",
"enum":[
0,
1
],
"enum":[0,1],
"default":0
},
"pdfBugEnabled":{
"description":"Whether to allow execution of active content (JavaScript) by PDF files.",
"description":"Controls if the text layer is enabled, and the selection mode that is used.\n 0 = Disabled.\n 1 = Enabled.",
"type":"integer",
"enum":[
0,
1
],
"enum":[0,1],
"default":1
},
"externalLinkTarget":{
"title":"External links target window",
"description":"Controls how external links will be opened.\n 0 = default.\n 1 = replaces current window.\n 2 = new window/tab.\n 3 = parent.\n 4 = in top window.",
"type":"integer",
"enum":[
0,
1,
2,
3,
4
],
"enum":[0,1,2,3,4],
"default":0
},
"disablePageLabels":{
},
"annotationMode":{
"type":"integer",
"enum":[
0,
1,
2,
3
],
"enum":[0,1,2,3],
"default":2
},
"annotationEditorMode":{
"type":"integer",
"enum":[
-1,
0,
3,
15
],
"enum":[-1,0,3,15],
"default":0
},
"enablePermissions":{
"title":"Scroll mode on load",
"description":"Controls how the viewer scrolls upon load.\n -1 = Default (uses the last position if available/enabled).\n 3 = Page scrolling.\n 0 = Vertical scrolling.\n 1 = Horizontal scrolling.\n 2 = Wrapped scrolling.",
"type":"integer",
"enum":[
-1,
0,
1,
2,
3
],
"enum":[-1,0,1,2,3],
"default":-1
},
"spreadModeOnLoad":{
"title":"Spread mode on load",
"description":"Whether the viewer should join pages into spreads upon load.\n -1 = Default (uses the last position if available/enabled).\n 0 = No spreads.\n 1 = Odd spreads.\n 2 = Even spreads.",
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Main toolbar buttons (tooltips and alt text for images)
previous.title=Папярэдняя старонка
previous_label=Папярэдняя
next.title=Наступная старонка
next_label=Наступная
# LOCALIZATION NOTE (page.title): The tooltip for the pageNumber input.
page.title=Старонка
# LOCALIZATION NOTE (of_pages): "{{pagesCount}}" will be replaced by a number
# representing the total number of pages in the document.
of_pages=з {{pagesCount}}
# LOCALIZATION NOTE (page_of_pages): "{{pageNumber}}" and "{{pagesCount}}"
# will be replaced by a number representing the currently visible page,
# respectively a number representing the total number of pages in the document.
page_of_pages=({{pageNumber}} з {{pagesCount}})
zoom_out.title=Паменшыць
zoom_out_label=Паменшыць
zoom_in.title=Павялічыць
zoom_in_label=Павялічыць
zoom.title=Павялічэнне тэксту
presentation_mode.title=Пераключыцца ў рэжым паказу
presentation_mode_label=Рэжым паказу
open_file.title=Адкрыць файл
open_file_label=Адкрыць
print.title=Друкаваць
print_label=Друкаваць
save.title=Захаваць
save_label=Захаваць
bookmark1.title=Дзейная старонка (паглядзець URL-адрас з дзейнай старонкі)
bookmark1_label=Цяперашняя старонка
# LOCALIZATION NOTE (open_in_app.title): This string is used in Firefox for Android.
open_in_app.title=Адкрыць у праграме
# LOCALIZATION NOTE (open_in_app_label): This string is used in Firefox for Android. Length of the translation matters since we are in a mobile context, with limited screen estate.
open_in_app_label=Адкрыць у праграме
# Secondary toolbar and context menu
tools.title=Прылады
tools_label=Прылады
first_page.title=Перайсці на першую старонку
first_page_label=Перайсці на першую старонку
last_page.title=Перайсці на апошнюю старонку
last_page_label=Перайсці на апошнюю старонку
page_rotate_cw.title=Павярнуць па сонцу
page_rotate_cw_label=Павярнуць па сонцу
page_rotate_ccw.title=Павярнуць супраць сонца
page_rotate_ccw_label=Павярнуць супраць сонца
cursor_text_select_tool.title=Уключыць прыладу выбару тэксту
cursor_text_select_tool_label=Прылада выбару тэксту