From 168c6aecae30243732f23949216dd4e46ca43f3d Mon Sep 17 00:00:00 2001
From: Jonas Jenwald
Date: Thu, 28 Nov 2019 16:16:04 +0100
Subject: [PATCH] Stop caching Streams in `XRef.fetchCompressed`

I'm slightly surprised that this hasn't actually caused any (known) bugs,
but that may be more luck than anything else, since it fortunately doesn't
seem common for Streams to be defined inside of an 'ObjStm'.[1]

Note that in the `XRef.fetchUncompressed` method we're *not* caching
Streams, and for very good reasons too:

 - Streams, especially the `DecodeStream` ones, can become *very* large
   once read. Hence caching them really isn't a good idea, simply because
   of the (potential) memory impact of doing so.

 - Attempting to read from the *same* Stream more than once won't work,
   unless it's `reset` in between, since any method such as e.g.
   `getBytes` always starts at the current data position.

 - Given that even the `src/core/` code is now fairly asynchronous, see
   e.g. the `PartialEvaluator`, it's generally impossible to assert that
   any one Stream isn't being accessed "concurrently" by e.g. different
   `getOperatorList` calls. Hence `reset`-ing a cached Stream isn't going
   to work in the general case.

All in all, I cannot understand why it'd ever be correct to cache Streams
in the `XRef.fetchCompressed` method.

---
[1] One example where that happens is the `issue3115r.pdf` file in the
test-suite, where the streams in question aren't actually used for
anything within the PDF.js code.
---
 src/core/obj.js | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/core/obj.js b/src/core/obj.js
index 6a6c8ff2e..253a48666 100644
--- a/src/core/obj.js
+++ b/src/core/obj.js
@@ -1748,6 +1748,9 @@ var XRef = (function XRefClosure() {
       if ((parser.buf1 instanceof Cmd) && parser.buf1.cmd === 'endobj') {
         parser.shift();
       }
+      if (isStream(obj)) {
+        continue;
+      }
       const num = nums[i], entry = this.entries[num];
       if (entry && entry.offset === tableOffset && entry.gen === i) {
         this._cacheMap.set(num, obj);