Re: [xsl] saxon:next-in-chain and memory usage

Subject: Re: [xsl] saxon:next-in-chain and memory usage
From: "Frederik Fouvry frederik.fouvry@xxxxxxxxxxxx" <xsl-list-service@xxxxxxxxxxxxxxxxxxxxxx>
Date: Wed, 9 Nov 2016 17:19:33 -0000
Thanks for the pointer. Unfortunately, it looks like we'll have to take a very different route: the XProc processors I've looked at (Calabash, MorganaXProc) run into the same problem.

FWIW:
I'm now trying (with XSLT 2.0) to use a combination of xsl:result-document and saxon:next-in-chain to split the input file (it consists of relatively small chunks; the original file is big mainly because of the large number of chunks), process each chunk separately, and then merge them again in the last step. I have a prototype that looks promising.
(I'll need to deal with any cross-references later on, but I'm fairly confident that this will work too. And I'll have to get rid of the temporary files ...)
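A minimal sketch of the splitting step, assuming the chunks are chunk elements under a chunks root (those element names and the output URI pattern are illustrative, not from the actual data):

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                version="2.0">
  <xsl:template match="/chunks">
    <xsl:for-each select="chunk">
      <!-- Write each chunk to its own small file, so that each one
           can be transformed independently, keeping peak memory low. -->
      <xsl:result-document href="chunk-{position()}.xml">
        <xsl:copy-of select="."/>
      </xsl:result-document>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>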


Thanks,

Frederik Fouvry


On 09.11.2016 14:01, Michael Kay mike@xxxxxxxxxxxx wrote:
I remember at some stage doing some work to ensure that saxon:next-in-chain released memory from one stylesheet before running the next, but I haven't checked that this is the case today, and there have been considerable changes in the way it's implemented. And in any case 9.1 is a pretty old release. I would encourage you to use a different pipelining technology, for example XProc.
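For readers unfamiliar with XProc, a minimal two-step pipeline of XSLT transformations looks roughly like this (the stylesheet names step1.xsl and step2.xsl are placeholders):

<p:pipeline xmlns:p="http://www.w3.org/ns/xproc" version="1.0">
  <!-- Each p:xslt step feeds its result to the next step in
       document order; only the final result is serialized. -->
  <p:xslt>
    <p:input port="stylesheet">
      <p:document href="step1.xsl"/>
    </p:input>
  </p:xslt>
  <p:xslt>
    <p:input port="stylesheet">
      <p:document href="step2.xsl"/>
    </p:input>
  </p:xslt>
</p:pipeline>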

Michael Kay
Saxonica


On 9 Nov 2016, at 12:57, Frederik Fouvry frederik.fouvry@xxxxxxxxxxxx <xsl-list-service@xxxxxxxxxxxxxxxxxxxxxx> wrote:

Hello,

I am running a pipeline of XML transformations over an XML file of about 130 MB, using Saxon 9.1.0.5J and Java 1.8.0_112. The pipeline is implemented using saxon:next-in-chain. It works fine as long as I have fewer than about five steps, but with five or more, Java runs out of memory, regardless of the complexity of the stylesheets (as the simplest case, I tried the identity transformation). Is this to be expected? Is the result of each step kept in memory? Or is it due to something else?
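For reference, each step in such a chain is a stylesheet whose xsl:output names its successor; a minimal sketch (step2.xsl is a placeholder, shown here with the identity transformation as the body):

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:saxon="http://saxon.sf.net/"
                version="2.0">
  <!-- saxon:next-in-chain feeds this stylesheet's output into the
       next stylesheet instead of serializing it. -->
  <xsl:output saxon:next-in-chain="step2.xsl"/>
  <!-- Identity transformation: copies the input unchanged. -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>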

Giving Java more memory seems to help in some cases, but because the size of the file can vary a lot (mostly it will be smaller, but in some cases, it may be bigger), that is not a scalable solution.
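For concreteness, giving the JVM more heap means invoking the first step along these lines (the heap size and file names are illustrative):

java -Xmx4g -cp saxon9.jar net.sf.saxon.Transform -s:input.xml -xsl:step1.xsl -o:output.xml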

Many thanks,

Frederik Fouvry
