Subject: Re: [xsl] how to estimate speed of a transformation
From: "J.Pietschmann" <j3322ptm@xxxxxxxx>
Date: Thu, 11 Dec 2003 13:49:03 +0100
> The same holds for XSLT. Most extensions to XSLT 1.0 compensate for
> deficiencies of implementations which do not support efficient
> computation based on features of XSLT.
I don't think so. There are plenty of extensions, in particular functions for string manipulation, regexes, or date/time/calendar functionality, which are always faster when implemented natively in the underlying programming language, regardless of whether the XSLT processor makes guarantees about optimizing certain constructs. Not to mention the advantage of being able to use such functions directly in XPath expressions.
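To make the contrast concrete, here is a sketch (assuming an EXSLT-capable processor; the `str:tokenize` function and its namespace are from EXSLT, and the comma delimiter is just an example): tokenizing a string in pure XSLT 1.0 takes a recursive named template, one call per token, while the native extension does the whole job in a single XPath-callable function.

```xml
<!-- Pure XSLT 1.0: recursive named template, one call per token -->
<xsl:template name="tokenize">
  <xsl:param name="s"/>
  <xsl:if test="string-length($s) &gt; 0">
    <token>
      <xsl:value-of select="substring-before(concat($s, ','), ',')"/>
    </token>
    <xsl:call-template name="tokenize">
      <xsl:with-param name="s" select="substring-after($s, ',')"/>
    </xsl:call-template>
  </xsl:if>
</xsl:template>

<!-- EXSLT: one native call, usable directly in an XPath expression -->
<xsl:for-each select="str:tokenize($s, ',')"
              xmlns:str="http://exslt.org/strings">
  <token><xsl:value-of select="."/></token>
</xsl:for-each>
```

The native version wins on speed no matter how clever the processor is about the recursive one, which is the point above.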
I still think the division into XSLT for manipulating node trees and XPath for navigation and manipulating primitive data like strings and dates is a good one.
> Therefore, determining required optimizations (and 'optimization' is a
> wrong word -- computational rules) and requiring conformant
> implementations to follow them would keep the language simple and easy
> to use.
The C/C++ and in particular the FORTRAN people did quite well without any explicit guarantees about what the compiler will optimize, although everybody takes well-known techniques like constant folding, dead code elimination, loop strength reduction, loop unrolling, and jump optimization for granted.

I can see why a Lisp-ish/functional programming language requires the run-time environment to provide tail recursion elimination: first, because it is a reasonably simple and well-established technique, and second, because the language encourages writing recursive functions instead of loops. I don't see what other optimizations should be guaranteed, mainly because techniques like lazy evaluation or caching are *not* yet well established, in the sense that the algorithms providing the optimization are not guaranteed to always be an advantage rather than a detriment, even in quite common cases. You don't want to require a technique which, by the current level of wisdom, is likely to slow down at least every tenth program.
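For illustration, here is the kind of template tail recursion elimination is meant for (a sketch; the template name and accumulator parameter are made up for the example). The recursive call is the last thing the template does, so a processor with tail-call elimination can reuse the stack frame and run it in constant stack; without it, a large $n risks a stack overflow.

```xml
<!-- Sum 1..$n with an accumulator; the xsl:call-template is in
     tail position, i.e. nothing happens after it returns. -->
<xsl:template name="sum-to">
  <xsl:param name="n"/>
  <xsl:param name="acc" select="0"/>
  <xsl:choose>
    <xsl:when test="$n = 0">
      <xsl:value-of select="$acc"/>
    </xsl:when>
    <xsl:otherwise>
      <xsl:call-template name="sum-to">
        <xsl:with-param name="n"   select="$n - 1"/>
        <xsl:with-param name="acc" select="$acc + $n"/>
      </xsl:call-template>
    </xsl:otherwise>
  </xsl:choose>
</xsl:template>
```

Contrast this with lazy evaluation or caching: whether memoizing a template call helps or hurts depends on hit rates and memory pressure, which is exactly why it is harder to mandate.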