Please note that this patch *purposely* doesn't add every standard (or semi-standard) page name in existence, but rather only a few common ones. This is done to lessen the burden on localizers, since it's quite possible that all of the page names could need translation (depending on locale).
It's easy to add more standard page sizes in the future, but we should take care to *only* add those that are very commonly used in actual PDF files.
This uses a whitelist, based on the locale, to determine where non-metric units should be used.
Note that the behaviour implemented here seems consistent with desktop PDF viewers (e.g. Adobe Reader), where the page sizes are *always* displayed with locale-dependent units rather than page-size-dependent ones (since the latter would probably be quite confusing).
A couple of basic unit-tests are added, and a manual `isLandscape` check (in `web/base_viewer.js`) is also converted to use the helper function instead.
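As an illustration of the locale whitelist mentioned above, here is a minimal sketch; the locale list, helper name and conversion below are illustrative only, not the patch's actual code:

```js
// Locales that conventionally use non-metric paper sizes; illustrative list.
const NON_METRIC_LOCALES = ["en-us", "en-lr", "my"];

function useNonMetricUnits(locale) {
  return NON_METRIC_LOCALES.includes(locale.toLowerCase());
}

// The displayed unit depends on the locale, never on the page size itself.
const sizeInches = { width: 8.5, height: 11 };
const display = useNonMetricUnits("en-US")
  ? { ...sizeInches, unit: "in" }
  : { width: sizeInches.width * 25.4, height: sizeInches.height * 25.4, unit: "mm" };
```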
Jasmine had a major version bump and required a few minor changes in our
booting code. Most notably, using `pending` in a `describe` block is no
longer supported, so we can only return early there. On the positive
side, the unit tests now run in a random order by default, which
helps to expose any unintended dependencies between unit tests.
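For clarity, a small sketch of the `pending` change (the suite and the
flag below are illustrative, not the actual booting code):

```js
describe("example suite", function () {
  const environmentSupported = typeof someRequiredApi !== "undefined";

  // Previously the whole suite could be marked as pending from here:
  //   pending("Not supported in this environment.");
  // With the new Jasmine version that is no longer supported in a `describe`
  // block, so we simply return early and define no specs instead.
  if (!environmentSupported) {
    return;
  }

  it("runs only when the environment is supported", function () {
    expect(true).toBe(true);
  });
});
```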
Note that upgrading to Webpack 4 is out of scope for this patch since
the bots do not work well with the newly generated bundles (neither
browser on either bot responds within 120 seconds). Moreover, Webpack 4
is not faster for us than Webpack 3, so for now there is no need to
upgrade.
Test case to exercise the different encodings:
1. Create a file "some file#@%M<br>%25 .pdf"
2. Build the extension with `gulp chromium` and load it in Chrome.
3. Go to `chrome://extensions/` and ensure that the
"Allow access to file URLs" option is disabled.
4. Try to open the file from step 1 in Chrome (maybe reload once).
5. PDF.js should be showing a file chooser button.
6. Click on that button and select a different file.
Test: Check that a confirmation dialog pops up that warns about
a different file name. Cancel the dialog.
7. Click on the button again and select the original file.
Test: Check that the file opens as expected.
The current PageLabel dictionary validation code won't catch some (unlikely) forms of corruption. For example: a `Type`/`S` entry being `null`/`0`/empty string, a `P`/`St` entry being `null`/`0`.
Please note: I'm not aware of any bugs caused by the old code, but I've had this patch sitting locally for some time and figured it couldn't hurt to submit it.
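To make the intent concrete, here is a rough sketch of the kind of stricter validation meant above, written against plain JavaScript values rather than the `Dict`/`Name` primitives that the actual code operates on:

```js
// Sketch only; the real validation works on PDF Dict/Name objects.
function isValidPageLabelDict(labelDict) {
  const { Type, S, P, St } = labelDict;
  // /Type, if present, must be the name "PageLabel" (not null, 0, "", ...).
  if (Type !== undefined && Type !== "PageLabel") {
    return false;
  }
  // /S, if present, must be one of the numbering styles defined in the spec.
  if (S !== undefined && !["D", "R", "r", "A", "a"].includes(S)) {
    return false;
  }
  // /P, if present, must be a string (the label prefix).
  if (P !== undefined && typeof P !== "string") {
    return false;
  }
  // /St, if present, must be an integer greater than or equal to 1.
  if (St !== undefined && !(Number.isInteger(St) && St >= 1)) {
    return false;
  }
  return true;
}
```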
The units are currently repeated after each dimension, which seems unnecessary and is also not done in other PDF viewers (e.g. Adobe Reader).
Furthermore, the names of the l10n arguments can be simplified slightly, since the names of the strings themselves provide enough information.
Finally, the `width`/`height` should be formatted according to the current locale, as is already done for other strings in the document properties dialog.
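A sketch of what such locale-aware formatting could look like; the helper name and output format are illustrative, and the viewer uses its own l10n machinery for the actual strings:

```js
function formatPageSize(widthInches, heightInches, locale, nonMetric) {
  const formatter = new Intl.NumberFormat(locale, { maximumFractionDigits: 2 });
  const factor = nonMetric ? 1 : 25.4; // keep inches, or convert to millimetres
  const width = formatter.format(widthInches * factor);
  const height = formatter.format(heightInches * factor);
  // State the unit once, instead of repeating it after each dimension.
  return `${width} × ${height} ${nonMetric ? "in" : "mm"}`;
}

// e.g. formatPageSize(8.5, 11, "de-DE", false) -> "215,9 × 279,4 mm"
```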
The `getPageSizeInches` method was implemented on `PDFDocumentProxy`, which seems conceptually wrong since the size property isn't global to the document but rather specific to each page. Hence the method is moved into `PDFPageProxy`, as a `pageSizeInches` getter, to address this.
Despite the fact that new API functionality was implemented, no unit-tests were added. To prevent issues later on, we should *always* ensure that new functionality has at least some test-coverage; something that this patch also takes care of.
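Conceptually, the new getter derives the size from the page's MediaBox (given in PDF units of 1/72 of an inch) and the optional UserUnit scale. A simplified, self-contained sketch, where the internal field names are assumptions:

```js
// Sketch of the getter's logic, using a stand-in for PDFPageProxy.
class PageProxySketch {
  constructor(pageInfo) {
    this._pageInfo = pageInfo; // { view: [x1, y1, x2, y2], userUnit: number }
  }

  get pageSizeInches() {
    const { view, userUnit } = this._pageInfo;
    const [x1, y1, x2, y2] = view;
    // 72 PDF units per inch, optionally scaled by /UserUnit.
    return {
      width: ((x2 - x1) / 72) * userUnit,
      height: ((y2 - y1) / 72) * userUnit,
    };
  }
}

// An A4 MediaBox of [0, 0, 595.28, 841.89] yields roughly 8.27 x 11.69 inches.
const page = new PageProxySketch({ view: [0, 0, 595.28, 841.89], userUnit: 1 });
console.log(page.pageSizeInches);
```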
The new `PDFDocumentProperties._parsePageSize` method seemed unnecessarily convoluted. Furthermore, in the "no data provided" case it even returned incorrect data (an array, rather than the expected object).
Finally, the fallback strings didn't actually agree with the `en-US` locale. This inconsistency doesn't look too great, and it's thus addressed here as well.
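In outline, the parsing should always resolve to an object, with fallback strings that match the `en-US` locale. A hedged sketch; the l10n id, the fallback string and the exact call are assumptions, not the actual implementation:

```js
async function parsePageSize(l10n, pageSizeInches) {
  if (!pageSizeInches) {
    // "No data provided": still resolve to an object, never an array.
    return { width: "-", height: "-", unit: "-" };
  }
  // Assumed l10n id and fallback; the fallback should agree with the
  // en-US locale ("in" for inches).
  const unit = await l10n.get("document_properties_page_size_unit_inches",
                              null, "in");
  return {
    width: pageSizeInches.width.toLocaleString(),
    height: pageSizeInches.height.toLocaleString(),
    unit,
  };
}
```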
This required changing the import script in two ways:
- we should use the `default` branch and not the `tip` tag, since the
latter may refer to a branch other than `default` (this is the case
for the `vi` locale, which caused the files to be overwritten with
incorrect contents since `tip` referred to the
`THUNDERBIRD600b1_2018031614_RELBRANCH` branch);
- we should check whether the response code is indeed 200, because a
script recently removed all empty localization files upstream (refer
to https://bugzilla.mozilla.org/show_bug.cgi?id=1443175); a sketch of
this check follows below.
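A sketch of the second change, as a small Node.js helper; the URLs,
paths and logging are illustrative, this is not the actual import
script:

```js
const fs = require("fs");
const https = require("https");

function downloadLocaleFile(url, destination) {
  https.get(url, function (response) {
    if (response.statusCode !== 200) {
      // The file was removed upstream (e.g. because it was empty), so do
      // not overwrite the local copy with an error page.
      console.log("Skipping " + url + " (HTTP " + response.statusCode + ")");
      response.resume(); // drain the response so the socket is released
      return;
    }
    response.pipe(fs.createWriteStream(destination));
  });
}

// The URL itself should point at the `default` branch rather than the
// `tip` tag, per the first change above.
```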
PR #9493 moved from `appConfig.defaultUrl` to `AppOptions.get('defaultUrl')`.
However, it forgot to replace `appConfig.defaultUrl` in
`chromecom.js`, and as a result the extension cannot open any PDF
file.
This patch fixes that issue.
Chrome 60 and earlier do not include credentials (cookies) in
requests made with `fetch`, regardless of extension permissions. This
was fixed in 61.0.3138.0 by commit 2e231cf052.
This patch disables the fetch backend in all affected Chrome versions.
The browser detection is done by checking for a change that coincides
with the release of Chrome 61.
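For illustration only, the intended decision boundary of that check;
the actual patch relies on feature detection rather than user-agent
parsing:

```js
// Rough illustration of the decision boundary, not the patch's code.
function isChromeWithFetchCredentials() {
  var match = /Chrome\/(\d+)/.exec(navigator.userAgent);
  if (!match) {
    return false; // not Chrome, or an unrecognized user agent
  }
  // 61.0.3138.0 and later include credentials in fetch requests.
  return parseInt(match[1], 10) >= 61;
}

// When this returns false, the viewer should fall back to the
// XMLHttpRequest-based transport so that cookies are still included.
```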
Test case:
1. Copy the `isChromeWithFetchCredentials` function from the patch.
2. Run it in the JS console of Chrome and verify the return value.
Verified results:
- 49.0.2623.75 - false (earliest supported version by us)
- 60.0.3112.90 - false (last major version affected by bug)
- 61.0.3163.100 - true (first major version without bug)
- 65.0.3325.146 - true (current stable)
Test case 2:
1. Build the extension (`gulp chromium`) and load it in Chrome.
2. Open the developer tools, and open any PDF file.
3. In the "Network tab" of the developer tools, look at "request type".
In Chrome 60-: Should be "xhr"
In Chrome 61+: Should be "fetch"
This function combines the logic of two separate methods into one.
The loop limit is also a good thing to have for the calls in
`src/core/annotation.js`.
Moreover, since this is important functionality, a set of unit tests and
documentation is added.
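As a sketch of what such a combined lookup with a loop limit can look
like, using a `Map`-based stand-in for the real `Dict` objects (the
names and the limit value are illustrative):

```js
const LOOP_LIMIT = 100; // guards against reference cycles in corrupt files

function getInheritableProperty(dict, key, stopWhenFound = true) {
  let values;
  for (let loopCount = 0; dict; loopCount++) {
    if (loopCount > LOOP_LIMIT) {
      console.warn(`getInheritableProperty: maximum loop count exceeded for "${key}".`);
      break;
    }
    const value = dict.get(key);
    if (value !== undefined) {
      if (stopWhenFound) {
        return value;
      }
      (values || (values = [])).push(value);
    }
    dict = dict.get("Parent"); // walk up towards the root of the page tree
  }
  return values;
}

// E.g. a /Resources entry that is only present on an ancestor node:
const root = new Map([["Resources", { fonts: {} }]]);
const page = new Map([["Parent", root], ["Rotate", 90]]);
console.log(getInheritableProperty(page, "Resources")); // -> { fonts: {} }
console.log(getInheritableProperty(page, "Rotate"));    // -> 90
```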
It's only used in two places in the class, and those call sites can
get the information directly from the dictionary, which is more
readable and avoids an additional method call.