Subject: Re: [xsl] Get the duplicates in a list
From: "Bauman, Syd s.bauman@xxxxxxxxxxxxxxxx" <xsl-list-service@xxxxxxxxxxxxxxxxxxxxxx>
Date: Fri, 3 Jan 2025 22:13:04 -0000
All well and good, and thank you for that, Liam. (Like some others here, I understand what it means to be O(1) vs O(n) vs O(n²), but am not much good at figuring it out.) That said, isn't this post about efficiency as opposed to complexity? Not that I think we have a very solid definition of complexity, but run time is not it.

________________________________

> I would hope that the map:merge is ensured to be more efficient than
> that count($fieldnames[. eq $f]) gt 1

First, Dmitre's solution is great.

On complexity: if a map is implemented as a hash table, insertion of a single item is usually O(1) (which is why they are popular), although this disregards collisions, growing the table, garbage collection, etc. So inserting n items is O(n).

count($fieldnames) has to look at every item in $fieldnames, so it is O(n); but an implementation of sequences as a finger list might store the length at the start, in which case that is O(1) too. BUT count($fieldnames[some expr]) likely has to evaluate the expression n times, once per item, and then do a count, so it is O(n).

I played with a solution that put the items inside elements and used the preceding:: axis, but that meant that for each item you looked at all the preceding items, making it n × n, i.e. O(n²).

Note: ordered maps are likely slower than O(n) for insertion and deletion, but with the data structures that have been mentioned, likely still O(n); we'll need a way to say where the insertion happens, though.
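For the archive, the two approaches being compared can be sketched in XPath 3.1. This is only an illustration of the complexity argument, not the actual code posted earlier in the thread; the variable $fieldnames and its sample values are assumed here.

```xpath
(: Assumed sample input: a sequence of strings with two duplicated values. :)
let $fieldnames := ('id', 'name', 'id', 'date', 'name'),

    (: Quadratic approach: the predicate rescans $fieldnames once per
       distinct value, so n scans of an O(n) filter is O(n²) overall. :)
    $dupes-quadratic :=
      for $f in distinct-values($fieldnames)
      return $f[count($fieldnames[. eq $f]) gt 1],

    (: Linear approach: one pass builds a frequency map. With a
       hash-table map each insertion is amortized O(1), so building
       the map is O(n), and reading the duplicate keys back out
       is another O(n) pass. :)
    $freq := map:merge(
               for $f in $fieldnames return map { $f : 1 },
               map { 'duplicates' : 'combine' })

return map:keys($freq)[count($freq(.)) gt 1]
```

Both variants select 'id' and 'name' (map:keys returns keys in implementation-dependent order); the difference is only in how the work grows with the length of $fieldnames.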