Subject: Re: [xsl] huge xml processing
From: Mike Brown <mike@xxxxxxxx>
Date: Mon, 17 Feb 2003 13:11:46 -0700 (MST)

Vasu Chakkera wrote:
> [lots of XML, not enough memory to transform it]

There's not a whole lot that an XSLT processor can do to optimize the
processing of very large files. If the processor knows that it doesn't need to
use the preceding or preceding-sibling axes, it can process the input as it is
read and discard branches it knows it won't need, but this requires that the
input be fairly regular and the stylesheet not be doing anything too fancy 
with keys or whatever.

Usually we say that a custom SAX filter (i.e., don't use XSLT at all; just
write a SAX application) is most appropriate in this situation, but you might
also take a look at STX at http://stx.sourceforge.net/, which aims to make
this a much easier task. There are two implementations to choose from: one in
Java, one in Perl.
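To give you an idea of what the SAX-only route looks like, here is a rough
sketch in Java. The element name "name" is just a placeholder for whatever
your document actually contains, and you'd do something more useful than
printing, but the shape is the same: react to events as they stream past and
never build a tree.

  import java.io.File;
  import javax.xml.parsers.SAXParser;
  import javax.xml.parsers.SAXParserFactory;
  import org.xml.sax.Attributes;
  import org.xml.sax.helpers.DefaultHandler;

  public class BigFileFilter extends DefaultHandler {

      // Collects the text of the one element we care about; nothing else
      // is kept, so memory use stays flat no matter how big the input is.
      private final StringBuilder buf = new StringBuilder();
      private boolean collecting = false;

      public void startElement(String uri, String local, String qName,
                               Attributes atts) {
          if ("name".equals(qName)) {     // placeholder element name
              collecting = true;
              buf.setLength(0);
          }
      }

      public void characters(char[] ch, int start, int length) {
          if (collecting) {
              buf.append(ch, start, length);
          }
      }

      public void endElement(String uri, String local, String qName) {
          if ("name".equals(qName)) {
              collecting = false;
              // Replace this with whatever output your transformation needs.
              System.out.println(buf.toString());
          }
      }

      public static void main(String[] args) throws Exception {
          SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
          parser.parse(new File(args[0]), new BigFileFilter());
      }
  }

Because nothing is accumulated beyond that small text buffer, this handles
arbitrarily large files in roughly constant memory; the price is that you
hand-code the logic XSLT would otherwise give you for free, which is exactly
the gap STX tries to fill.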

Mike

-- 
  Mike J. Brown   |  http://skew.org/~mike/resume/
  Denver, CO, USA |  http://skew.org/xml/

 XSL-List info and archive:  http://www.mulberrytech.com/xsl/xsl-list

