Re: [xsl] Re: Simple problem - complicated solution - performance

From: Antonio Fiol Bonnín <fiol@xxxxxxxxxx>
Date: Fri, 17 May 2002 09:01:40 +0200
Hello all.

I had already seen FXSL before posting to this list.

Regarding performance, AFAICT the O(n) approach would seem worth using, except that I am planning to use this in a browser-side computation model.

That is: I have the XML file and the XSLT file on a server. The XML is dynamically generated and includes a "call" to the XSLT file. In the case of FXSL, the size of the "functions" themselves makes the approach not worthwhile.

I have not studied in depth what was proposed for O(n log n), but it may be interesting if it is simple.

Otherwise, the --very simple-- O(n²) approach will do, as my data sets will invariably have 10 elements each, which should not be a big deal.
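For reference, the very simple O(n²) approach can be written in XSLT 1.0 roughly as follows. This is only a sketch: the element names `data` and `item` are assumptions for illustration, not from my actual documents.

```xml
<!-- A minimal sketch of the O(n^2) min/max, assuming input of the
     form <data><item>3</item><item>7</item>...</data>.
     Each predicate scans all sibling items for every item,
     hence the quadratic number of comparisons. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="data">
    <!-- min: the item that no other item is smaller than -->
    <min><xsl:value-of select="item[not(../item &lt; .)]"/></min>
    <!-- max: the item that no other item is greater than -->
    <max><xsl:value-of select="item[not(../item &gt; .)]"/></max>
  </xsl:template>
</xsl:stylesheet>
```

With duplicates, the predicate selects every tied node and xsl:value-of takes the first, which is fine for this purpose.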

IIRC, O(n) and O(n log n) should be roughly equivalent for a 10-element data set, and O(n²) is worse, but only 100 operations versus 10 (one order of magnitude). The processing time spent there may well be a fair trade against the network transfer time over a 56K analog modem (yes, still more common in Europe than DSL lines).


I will study that, and maybe do some performance tests. It seems to me that I will get some interesting results.


Thank you very much, especially for such a quick answer.


Antonio Fiol Bonnín




Dimitre Novatchev wrote:

Stuart,

The performance will remain linear if a DVC (divide and conquer) algorithm is used.

Read about DVC algorithms and their optimisation at:

http://vbxml.com/snippetcentral/main.asp?view=viewsnippet&lang=&id=v20020107050418

and

http://www.topxml.com/xsl/articles/recurse/

Many functions in FXSL have their DVC implementation.
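To make the DVC idea concrete, here is a minimal sketch of a divide-and-conquer maximum. The template name and parameter are my own illustration, not FXSL's actual code; it assumes a non-empty node-set of numeric values. The node list is split in half at each step, so the pending-call stack grows only O(log n) deep instead of O(n):

```xml
<!-- Hypothetical DVC maximum: halve the node-set, recurse on each
     half, keep the larger result.  Recursion depth is O(log n). -->
<xsl:template name="dvc-max">
  <xsl:param name="nodes"/>
  <xsl:variable name="count" select="count($nodes)"/>
  <xsl:choose>
    <xsl:when test="$count = 1">
      <xsl:value-of select="$nodes"/>
    </xsl:when>
    <xsl:otherwise>
      <xsl:variable name="half" select="floor($count div 2)"/>
      <xsl:variable name="max1">
        <xsl:call-template name="dvc-max">
          <xsl:with-param name="nodes"
                          select="$nodes[position() &lt;= $half]"/>
        </xsl:call-template>
      </xsl:variable>
      <xsl:variable name="max2">
        <xsl:call-template name="dvc-max">
          <xsl:with-param name="nodes"
                          select="$nodes[position() &gt; $half]"/>
        </xsl:call-template>
      </xsl:variable>
      <xsl:choose>
        <xsl:when test="number($max1) &gt;= number($max2)">
          <xsl:value-of select="$max1"/>
        </xsl:when>
        <xsl:otherwise>
          <xsl:value-of select="$max2"/>
        </xsl:otherwise>
      </xsl:choose>
    </xsl:otherwise>
  </xsl:choose>
</xsl:template>
```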

Cheers,
Dimitre Novatchev.


"Stuart Celarier" <stuart at ferncrk dot com> wrote:


Dimitre raises an interesting point about using recursion for computing
the minimum and maximum values of a set of data. Let me throw this
question back out to the list, especially to people with XSLT
implementation experience:


It seems like there must be some practical limits to recursion, since that would involve a call stack in memory. Is it reasonable to think about recursion that stacks up a couple of thousand or tens of thousands of calls deep? Taking a page fault on a call stack seems like it could get very expensive very quickly.

Clearly computing the minimum and maximum should require linear time, O(n). But if the computation itself doesn't scale well, then a seemingly O(n) algorithm could perform much worse in practice. Comments?

Cheers,
Stuart
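Stuart's concern applies mainly to the naive linearly recursive form. A sketch (with hypothetical names, assuming a node-set of numbers) that makes one recursive call per node, and therefore needs a stack n calls deep on processors that do not optimise tail calls:

```xml
<!-- Naive linear recursion: one call per node, O(n) stack depth
     unless the XSLT processor performs tail-call optimisation.
     Names are illustrative, not from FXSL. -->
<xsl:template name="linear-max">
  <xsl:param name="nodes"/>
  <xsl:param name="best" select="$nodes[1]"/>
  <xsl:choose>
    <xsl:when test="not($nodes)">
      <xsl:value-of select="$best"/>
    </xsl:when>
    <xsl:otherwise>
      <xsl:call-template name="linear-max">
        <xsl:with-param name="nodes"
                        select="$nodes[position() &gt; 1]"/>
        <xsl:with-param name="best">
          <xsl:choose>
            <xsl:when test="$nodes[1] &gt; $best">
              <xsl:value-of select="$nodes[1]"/>
            </xsl:when>
            <xsl:otherwise>
              <xsl:value-of select="$best"/>
            </xsl:otherwise>
          </xsl:choose>
        </xsl:with-param>
      </xsl:call-template>
    </xsl:otherwise>
  </xsl:choose>
</xsl:template>
```

This form is tail-recursive, so processors that optimise tail calls run it in constant stack space; the DVC formulation Dimitre points to bounds the depth at O(log n) regardless.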





XSL-List info and archive: http://www.mulberrytech.com/xsl/xsl-list







