Subject: Re: [xsl] Get the duplicates in a list
From: "Liam R. E. Quin liam@xxxxxxxxxxxxxxxx" <xsl-list-service@xxxxxxxxxxxxxxxxxxxxxx>
Date: Fri, 3 Jan 2025 22:24:59 -0000
On Fri, 2025-01-03 at 22:13 +0000, Bauman, Syd
s.bauman@xxxxxxxxxxxxxxxx wrote:
>
> All well and good, and thank you for that, Liam. (Like some others
> here, I understand what it means to be O(1) vs O(n) vs O(n²), but am
> not much good at figuring it out.)
>
>
> That said, isn't this post about efficiency as opposed to complexity?

I read it as wanting something easy to understand: human efficiency
rather than computational efficiency.

O(...) notation comes from complexity theory, in computer science, but
algorithmic complexity is only one part of computational efficiency.
Too often indirect effects such as memory use are ignored: if you're
sorting 4,000 three-gigabyte buffers by name, with in-place swaps to
save temporary space, the number of comparisons of their 8-byte names
(say) pales into insignificance compared to the cost of even one swap.
So complexity is only part of efficiency as i see it. But it's always
worth watching for complexity.

At Extreme Markup or Balisage once i wrote a quick XQuery expression to
compare two dictionaries. When i got back from lunch it hadn't got very
far, and i looked and realized it was O(n³) or even O(n⁴), and the
dictionaries had 10,000 entries each... it wasn't likely to finish any
time soon. But the expression was super easy to read and understand!
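For the thread's original question, the trade-off here can be sketched
as follows (in Python as an illustrative stand-in, not the actual
XQuery): the pairwise version is easy to read but O(n²), while a
one-pass version with a "seen" set is O(n) on average.

```python
def duplicates_pairwise(items):
    """Easy to read, but O(n^2): checks each item against all earlier ones."""
    return sorted({x for i, x in enumerate(items) if x in items[:i]})

def duplicates_hashed(items):
    """O(n) on average: one pass, remembering items already seen."""
    seen, dups = set(), set()
    for x in items:
        if x in seen:
            dups.add(x)
        else:
            seen.add(x)
    return sorted(dups)

values = ["cat", "dog", "cat", "eel", "dog", "cat"]
print(duplicates_pairwise(values))  # ['cat', 'dog']
print(duplicates_hashed(values))    # ['cat', 'dog']
```

Both give the same answer; the difference only shows up as the input
grows, which is the whole point of the O(...) argument above.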

But i got into complexity theory a bit in this thread because someone
asked about it as a side-track :-)

Human complexity depends not on the size of the input but on the
recentness of coffee, which was 12 hours ago here!

--
Liam Quin, https://www.delightfulcomputing.com/
Available for XML/Document/Information Architecture/XSLT/
XSL/XQuery/Web/Text Processing/A11Y training, work & consulting.
Barefoot Web-slave, antique illustrations: http://www.fromoldbooks.org