Re: [xsl] optimization of complex XPath

From: Liam R E Quin <liam@xxxxxx>
Date: Thu, 18 Nov 2010 20:29:58 -0500
On Thu, 2010-11-18 at 20:20 -0500, Graydon wrote:
[...]

> I am currently using the XPath expression:
> 
> for $x in (//link[@area='decisions']/@cite) return
>     (if ($x = (//num/@cite))
>      then 'good'
>      else concat('bad cite value ',$x,'&#x000a;')
>     )
> 
> to check the links.
[..]

> How can I make this check faster?

Since this is xsl-list, I'll note that xsl:key is generally a good
way to make this faster.
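For example (a sketch only, assuming the element names from the expression quoted above), the num elements can be indexed by @cite with xsl:key, so each link is checked with a key() lookup instead of a full-document scan of //num/@cite:

```xml
<!-- Index every num element by its @cite value -->
<xsl:key name="num-by-cite" match="num" use="@cite"/>

<xsl:template match="/">
  <xsl:for-each select="//link[@area='decisions']">
    <xsl:choose>
      <!-- key() replaces the repeated scan of //num/@cite -->
      <xsl:when test="key('num-by-cite', @cite)">good</xsl:when>
      <xsl:otherwise>
        <xsl:value-of
            select="concat('bad cite value ', @cite, '&#xa;')"/>
      </xsl:otherwise>
    </xsl:choose>
  </xsl:for-each>
</xsl:template>
```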

In XQuery you could try building a sequence of the distinct
link/@cite values and one of the distinct num/@cite values, then
taking the set difference (the XQuery Use Cases document has a
similar example, I think) and processing only those values -- this
assumes that most of the links are correct.
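A sketch of that approach in XQuery (element names again assumed from the expression quoted above):

```xquery
(: each distinct value is compared once, rather than once per link :)
let $links := distinct-values(//link[@area='decisions']/@cite)
let $nums  := distinct-values(//num/@cite)
(: set difference: only values with no matching num/@cite are reported :)
for $bad in $links[not(. = $nums)]
return concat('bad cite value ', $bad, '&#xa;')
```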

It's also sometimes worth trying more implementations - your 2G of
data is too much for the zero-dollar qizx-fe, but maybe BaseX could
handle it.

Liam

PS: &#xa; is the same as &#x000a; although that change won't make a
measurable performance difference :-)

-- 
Liam Quin - XML Activity Lead, W3C, http://www.w3.org/People/Quin/
Pictures from old books: http://fromoldbooks.org/
Ankh: irc.sorcery.net irc.gnome.org www.advogato.org
