Subject: Re: [xsl] [xslt performance for big xml files]
From: Robert Koberg <rob@xxxxxxxxxx>
Date: Sat, 25 Apr 2009 17:36:46 -0400
On Apr 25, 2009, at 12:40 PM, Liam Quin wrote:

Finally - the real reason for posting to this thread --

* If you are repeatedly scanning the same XML document and generating
  different things, such as reports or small documents, consider an
  XQuery implementation that uses an index. E.g. for a file this small,
  the free MarkLogic or Qizx/fe engines would work fine, I expect; the
  first is limited to 50 MBytes (or was, last I looked) and the second
  to a gigabyte.


I assume you mean 'use an XQuery implementation that runs against a proprietary XML database', rather than one of the broadly defined and mostly non-interoperable implementations of the standard XQuery language, right?

* If you read the document once, but tend not to need to look ahead or
 behind very far, you could split the input into smaller XML files,

most likely (and perhaps you would want to use StAX to do the split)
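
Something along these lines would do the split with StAX (a rough,
untested sketch - the input name "big.xml", the "record" element and
the "chunk-N.xml" output names are just placeholders for whatever the
real document uses, and it assumes the record elements don't nest):

import java.io.FileInputStream;
import java.io.FileOutputStream;
import javax.xml.stream.XMLEventFactory;
import javax.xml.stream.XMLEventReader;
import javax.xml.stream.XMLEventWriter;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.events.XMLEvent;

public class StaxSplit {
    public static void main(String[] args) throws Exception {
        XMLInputFactory inFactory = XMLInputFactory.newInstance();
        XMLOutputFactory outFactory = XMLOutputFactory.newInstance();
        XMLEventFactory eventFactory = XMLEventFactory.newInstance();

        XMLEventReader reader =
            inFactory.createXMLEventReader(new FileInputStream("big.xml"));

        XMLEventWriter writer = null;
        FileOutputStream out = null;
        int fileCount = 0;

        while (reader.hasNext()) {
            XMLEvent event = reader.nextEvent();

            // Open a new output file at the start of each "record"
            // element (assumes "record" elements are not nested).
            if (writer == null && event.isStartElement()
                    && "record".equals(
                        event.asStartElement().getName().getLocalPart())) {
                out = new FileOutputStream("chunk-" + (++fileCount) + ".xml");
                writer = outFactory.createXMLEventWriter(out, "UTF-8");
                writer.add(eventFactory.createStartDocument());
            }

            // Copy every event that falls inside the current record.
            if (writer != null) {
                writer.add(event);
            }

            // Close the chunk when the "record" element ends.
            if (writer != null && event.isEndElement()
                    && "record".equals(
                        event.asEndElement().getName().getLocalPart())) {
                writer.add(eventFactory.createEndDocument());
                writer.close();
                out.close();
                writer = null;
            }
        }
        reader.close();
    }
}

Then you'd run the stylesheet over each chunk-N.xml separately instead
of over the whole input.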


-Rob
