In the referenced issue, there is a PDF which uses a fill pattern that does not
have a matrix defined. This causes singularValueDecompose2dScale to fail with an
undefined property error when it accesses elements of that matrix.
This fix only uses the matrix when it is defined. The output for the PDF in
question now looks identical to Chrome and Preview with respect to the gradient
fill pattern.
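A minimal sketch of the guard, assuming a pattern dictionary |dict| and the
Util.singularValueDecompose2dScale helper (the names are assumptions; the
actual call site may differ):

    // Only decompose the matrix when the pattern actually defines one;
    // otherwise fall back to a neutral scale instead of passing undefined.
    var matrix = dict.getArray('Matrix');   // may be undefined for this PDF
    var scale = [1, 1];
    if (matrix) {
      scale = Util.singularValueDecompose2dScale(matrix);
    }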
QueueOptimizer is really hard to read -- enough so that it's blocking my
efforts to streamline the representation used for operator lists.
This patch improves its readability in the following ways.
- More descriptive variable names make the sequence checking much clearer,
as do additional comments.
- The addState() functions now return the index of the first op past the
sequence, instead of setting context.currentOperation to the last op of
the sequence.
- The loop in optimize() is clearer.
- The array modification in the fourth addState() function is much clearer
-- we're just removing trios of ops.
- All four |addState| functions are now more consistent with each other.
I used some debug printfs to find documents where these optimizations are
used and then checked that the number of optimized ops was the same before
and after my changes.
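To illustrate the second point above, here is a hypothetical sketch of the new
addState() contract (the op name and the grouping step are placeholders, not
the actual QueueOptimizer code):

    // Scan a run of identical ops starting at `i` and return the index of the
    // first op past the run, instead of mutating context.currentOperation.
    function addState(context, i) {
      var fnArray = context.fnArray;
      var j = i;
      while (j < fnArray.length && fnArray[j] === OPS.paintImageXObject) {
        j++;
      }
      // ...replace the run fnArray[i..j) with a single grouped op here...
      return j;   // caller resumes scanning at the first op past the sequence
    }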
DecodeStream currently initializes its |buffer| field to |null|, which
is reasonable, because lots of DecodeStreams never need to instantiate a
buffer. But this requires various special cases in the code.
This patch changes it so that DecodeStreamClosure holds a single empty
Uint8Array which is shared by all DecodeStreams as their initial buffer.
This avoids the special cases.
DecodeStream.prototype.ensureBuffer() is really hot, and this removes a
test from the fast path. For one 226 page scanned document this sped up
rendering by about 2%.
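A minimal sketch of the idea (simplified; only the shared empty buffer and the
ensureBuffer() fast path matter here):

    var DecodeStream = (function DecodeStreamClosure() {
      // One zero-length buffer shared by every DecodeStream until it actually
      // needs to decode something, so |buffer| is never null.
      var emptyBuffer = new Uint8Array(0);

      function DecodeStream() {
        this.pos = 0;
        this.bufferLength = 0;
        this.eof = false;
        this.buffer = emptyBuffer;
      }

      DecodeStream.prototype.ensureBuffer = function (requested) {
        var buffer = this.buffer;
        if (requested <= buffer.byteLength) {
          return buffer;               // fast path: no null test needed
        }
        var size = 512;
        while (size < requested) {
          size *= 2;
        }
        var newBuffer = new Uint8Array(size);
        newBuffer.set(buffer);         // copying from the empty buffer is a no-op
        this.buffer = newBuffer;
        return newBuffer;
      };

      return DecodeStream;
    })();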
In src/core/obj.js, we convert a Ref to a string to index into a table like
this: 'R1.0'. This conversion is repeated numerous times.
This patch factors out the conversion into a new function,
Ref.prototype.toString().
This function can be called 100s of 1000s or even millions of times, and the
allocated return object accounts for 10% of all GC thing allocations for some
documents. It's easy to avoid, which reduces stress on the garbage collector,
and this patch does that.
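A minimal sketch of such a helper (the field names are assumptions, and the
message doesn't show how the allocation is avoided, so the per-Ref caching
below is one plausible approach rather than necessarily the patch's):

    // Build the 'R<num>.<gen>' key in one place, and remember it on the Ref so
    // repeated lookups for the same Ref don't allocate a fresh string each time.
    Ref.prototype.toString = function Ref_toString() {
      if (this.str === undefined) {
        this.str = 'R' + this.num + '.' + this.gen;
      }
      return this.str;
    };

Call sites in src/core/obj.js can then index the table with ref.toString()
instead of rebuilding the string inline.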
PartialEvaluator.getTextContent() builds up textChunk strings 1 char at a time,
creating many 100s of 1000s of intermediate strings along the way. This patch
makes it instead push chars to an array and then join them at the end, as we
have done in numerous other places.
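A minimal sketch of the pattern (the helper and variable names are
illustrative, not the actual getTextContent() code):

    // Build a text chunk's string by pushing chars and joining once at the end,
    // rather than growing a string one character at a time with +=.
    function buildChunkString(glyphs) {
      var strBuf = [];
      for (var i = 0; i < glyphs.length; i++) {
        strBuf.push(glyphs[i].unicode);   // was: str += glyphs[i].unicode;
      }
      return strBuf.join('');
    }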
This new function is much faster than ensureRange(pos, pos+1), which is a very
common case.
This speeds up the rendering of some test cases (including the Tracemonkey
paper) by 4--5%.
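The message doesn't name the new function, but a single-byte fast path along
these lines is presumably what is meant (the method and field names here are
assumptions):

    // Check that the single byte at `pos` is available, without the range
    // arithmetic and chunk loop that ensureRange(pos, pos + 1) performs.
    ChunkedStream.prototype.ensureByte = function (pos) {
      var chunk = Math.floor(pos / this.chunkSize);
      if (chunk === this.lastSuccessfulEnsureByteChunk) {
        return;                              // same chunk as last time
      }
      if (!this.loadedChunks[chunk]) {
        throw new MissingDataException(pos, pos + 1);
      }
      this.lastSuccessfulEnsureByteChunk = chunk;
    };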