
Subject: Re: [xsl] How to efficiently obtain the first 10 records of a file with over 2 million records?
From: "Michael Kay mike@xxxxxxxxxxxx" <xsl-list-service@xxxxxxxxxxxxxxxxxxxxxx>
Date: Thu, 20 Jul 2023 07:54:34 -0000
> The DPH was one intended beneficiary of the stated goal that "It shall
> be easy to write programs which process XML documents."
>

I've always been amused by the term, because my first encounter with XML was
around 1998, when I was called in to audit a project that had just failed its
customer acceptance tests because of chronically non-scalable performance. It
quickly became apparent that the project was doing an XML transformation using
Perl regular expressions, and the situation was salvaged by substituting
Microsoft's very new WD-xsl processor.

It was a classic case, one I've encountered very often: management are in a
state of panic because you're an order of magnitude away from meeting your
performance requirement, yet the problem can be fixed by changing 20 lines of
code. In fact, I sometimes argue these days that the bigger the performance
problem appears to be, the easier it is to find the cause.

If you track StackOverflow, there are still plenty of DPHs out there trying to
process XML without a proper parser, but these days the P tends to be PHP or
Python.
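
For what it's worth, a minimal sketch of the "proper parser" approach in
Python, assuming each record is a <record> child of the document element
(adapt the tag name to the real file), streams the first 10 records without
ever building the whole tree:

    import xml.etree.ElementTree as ET

    def first_records(path, limit=10, tag="record"):
        # Stream the document with iterparse rather than regexes,
        # stopping as soon as enough records have been seen.
        records = []
        for event, elem in ET.iterparse(path, events=("end",)):
            if elem.tag == tag:
                records.append(ET.tostring(elem, encoding="unicode"))
                elem.clear()  # drop the record's content once it has been copied out
                if len(records) >= limit:
                    break
        return records

    for r in first_records("records.xml"):
        print(r)

The point is not the language; it is letting a real parser do the tokenising
instead of pattern-matching the markup by hand.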

Michael Kay
Saxonica
