Stop caching Streams in XRef.fetchCompressed
I'm slightly surprised that this hasn't actually caused any (known) bugs, but that may be more luck than anything else, since it fortunately doesn't seem common for Streams to be defined inside of an 'ObjStm'.[1]

Note that in the `XRef.fetchUncompressed` method we're *not* caching Streams, and for very good reasons too:
 - Streams, especially the `DecodeStream` ones, can become *very* large once read. Hence caching them really isn't a good idea, simply because of the (potential) memory impact of doing so.
 - Attempting to read from the *same* Stream more than once won't work unless it's `reset` in between, since any method such as e.g. `getBytes` always starts reading at the current data position (see the sketch below).
 - Given that even the `src/core/` code is now fairly asynchronous, see e.g. the `PartialEvaluator`, it's generally impossible to assert that any one Stream isn't being accessed "concurrently" by e.g. different `getOperatorList` calls. Hence `reset`-ing a cached Stream isn't going to work in the general case.

All in all, I cannot understand why it'd ever be correct to cache Streams in the `XRef.fetchCompressed` method.

---
[1] One example where that happens is the `issue3115r.pdf` file in the test-suite, where the streams in question are not actually used for anything within the PDF.js code.
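To illustrate the second point above, here is a minimal sketch of position-based reading. The `SimpleStream` class below is hypothetical and only mimics the behaviour described; it is not the actual PDF.js `Stream`/`DecodeStream` code:

```js
// Hypothetical, minimal stand-in for a position-based stream: every read
// starts at the current position and advances it past the returned data.
class SimpleStream {
  constructor(bytes) {
    this.bytes = bytes;
    this.pos = 0;
  }
  getBytes() {
    const chunk = this.bytes.subarray(this.pos);
    this.pos = this.bytes.length;
    return chunk;
  }
  reset() {
    this.pos = 0;
  }
}

const stream = new SimpleStream(new Uint8Array([0x25, 0x50, 0x44, 0x46]));
console.log(stream.getBytes().length); // 4
console.log(stream.getBytes().length); // 0 -- the data was already consumed
stream.reset();
console.log(stream.getBytes().length); // 4 -- but only after an explicit reset()
```

A cached Stream would therefore only be usable by a later consumer if something `reset` it first, which is exactly what the asynchronous code paths cannot guarantee.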
parent 06412a557b
commit 168c6aecae
@@ -1748,6 +1748,9 @@ var XRef = (function XRefClosure() {
 if ((parser.buf1 instanceof Cmd) && parser.buf1.cmd === 'endobj') {
   parser.shift();
 }
+if (isStream(obj)) {
+  continue;
+}
 const num = nums[i], entry = this.entries[num];
 if (entry && entry.offset === tableOffset && entry.gen === i) {
   this._cacheMap.set(num, obj);
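The asynchronous-access concern from the commit message can also be illustrated with a small, hypothetical example. None of the names below come from PDF.js; this only models two interleaved consumers sharing one position-based stream:

```js
// Hypothetical illustration of why reset()-ing a shared, cached stream does
// not help: two asynchronous consumers each reset() and then read, but they
// interleave, so one of them ends up reading nothing.
function makeStream(length) {
  let pos = 0;
  return {
    reset() { pos = 0; },
    getBytes() {
      const remaining = length - pos;
      pos = length;
      return new Uint8Array(remaining);
    },
  };
}

async function consume(name, stream) {
  stream.reset();
  // Yield to the event loop, as real asynchronous parsing code often does
  // between obtaining a stream and actually reading from it.
  await Promise.resolve();
  console.log(name, 'read', stream.getBytes().length, 'bytes');
}

const shared = makeStream(16);
Promise.all([consume('A', shared), consume('B', shared)]);
// Logs "A read 16 bytes" and "B read 0 bytes" -- the second consumer loses,
// which is why not caching Streams at all is the safer behaviour.
```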