Subject: RE: [xsl] anyway obvious way to speed this tranfrom up?
From: "Ray Tayek" <rtayek@xxxxxxxxxxxxxxx>
Date: Fri, 2 Jan 2004 17:28:34 -0800

> -----Original Message-----
> From: owner-xsl-list@xxxxxxxxxxxxxxxxxxxxxx [mailto:owner-xsl-
> list@xxxxxxxxxxxxxxxxxxxxxx] On Behalf Of Michael Kay
> Sent: Friday, January 02, 2004 3:59 PM
> To: xsl-list@xxxxxxxxxxxxxxxxxxxxxx
> Subject: RE: [xsl] anyway obvious way to speed this tranfrom up?
> 
> Perhaps the thing that's making this slow is that for each <row> element
> you do
> 
> <xsl:template match="/inputDocument/row">
> 	<xsl:for-each select="child::*">
> 		<xsl:if test="name=/inputDocument/header/name[4]">
> 			<xsl:call-template name="generateOutputRecord">
> 
> where generateOutputRecord then does:
> 
>       <xsl:for-each select="../*">
>          <xsl:call-template name="processField">
> 
> which means that for every cell you are processing every cell, in other
> words it's O(n^2) in the number of cells per row. That shouldn't matter
> too much if there are only five cells per row as in your example: but
> perhaps there are actually more?
> 

Yes, it is n^2, and the file we are currently trying to make work has 80
fields, but we expect to throw away most of these, as we do not need them.
Also, the processing for the fields that do not generate records (they are
just copied and have some data-value mapping done on them) could be done
just once if I knew what I was doing with XSLT, but there was this deadline
(and I have never used a functional programming language before).
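One small thing that may help even without restructuring: the quoted test re-evaluates the absolute path `/inputDocument/header/name[4]` for every cell. A sketch of hoisting that lookup into a global variable, assuming the structure shown in the quoted stylesheet (element and template names taken from there):

```xml
<!-- Sketch only: compute the header name once, instead of once per cell. -->
<xsl:variable name="key-name" select="/inputDocument/header/name[4]"/>

<xsl:template match="/inputDocument/row">
  <xsl:for-each select="child::*">
    <!-- compare against the precomputed value -->
    <xsl:if test="name = $key-name">
      <xsl:call-template name="generateOutputRecord"/>
    </xsl:if>
  </xsl:for-each>
</xsl:template>
```

Whether this matters depends on how well Xalan already caches such expressions; it does not change the O(n^2) behaviour inside generateOutputRecord.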

> It would be useful to know (a) what the actual data sizes are like, (b)
> what performance you are actually getting (and with what processor), and
> (c) how the performance scales as the data size increases.
> 
> For all that we know, you could simply be thrashing for lack of memory.
> 
Using Java 1.3 and a recent Xalan. The sample customer file has about 60k
records. Timing tests indicate a few hundred hours on a dual-processor
Pentium 4 at 3 GHz with 4 GB RAM on Red Hat Linux, so an XSLT solution may
not be feasible for files with 60k records, even if the fields get reduced
to around 10 and the fields that are copied and data-mapped are processed
just once (although the 64:1 reduction in per-row work might do the trick
when the number of fields drops from 80 to 10, since (80/10)^2 = 64).
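The bigger win would be removing the O(n^2) behaviour itself: do the copy-and-map work once at the row level rather than once per matching cell. A sketch under the same assumed input structure (the `copy-and-map` mode is hypothetical, standing in for whatever the generated stylesheet does to the pass-through fields):

```xml
<!-- Sketch, not the generated stylesheet: each cell is now visited a
     constant number of times per row instead of once per matching cell. -->
<xsl:template match="/inputDocument/row">
  <!-- copy/map the pass-through fields a single time per row -->
  <xsl:apply-templates select="child::*" mode="copy-and-map"/>
  <!-- then emit one output record per cell that needs one -->
  <xsl:for-each select="child::*[name = /inputDocument/header/name[4]]">
    <xsl:call-template name="generateOutputRecord"/>
  </xsl:for-each>
</xsl:template>
```

If the generated stylesheets can be restructured along these lines, the cost per row drops from quadratic to linear in the number of fields, which compounds with the 80-to-10 field reduction.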

I will probably not have time to do the experiments, but your observations
have been very helpful.

Thank you for your assistance.

> 
> > -----Original Message-----
> > From: owner-xsl-list@xxxxxxxxxxxxxxxxxxxxxx
> > [mailto:owner-xsl-list@xxxxxxxxxxxxxxxxxxxxxx] On Behalf Of Ray Tayek
> > Sent: 30 December 2003 19:41
> > To: xsl-list@xxxxxxxxxxxxxxxxxxxxxx
> > Subject: [xsl] anyway obvious way to speed this tranfrom up?
> >
> >
> > hi, newbie managed to get something to work, but it's *real* slow.
> >
> > the xslt's are generated by a program, so they can not be
> > hand tuned, but maybe there is a way to do some things faster?
> >
> > The xml input comes from a spreadsheet via a .csv file, so
> > the original names in the <header> can contains spaces and
> > strange characters, but the names in the <cell>'s are legal
> > database names....


 XSL-List info and archive:  http://www.mulberrytech.com/xsl/xsl-list