Subject: Re: [xsl] Large transforms (was Re: [xsl] GByte Transforms)
From: Kevin Jones <kjones@xxxxxxxxxxx>
Date: Fri, 4 Jun 2004 15:47:44 +0100
On Friday 04 June 2004 09:01, Michael Kay wrote:

> Yes; but to develop ideas on how to write transforms that work well with
> this kind of implementation we need to understand a lot more about the
> characteristics of the implementation.

I take your point. The access costs in the model I have been working with are a little different from those of the traditional object-model approach, and probably beyond what you could easily explain to people. Given that, I think the earlier suggestion that the compiler has to do much of the hard work is the only viable one. It is not clear whether this could be done alongside traditional optimisation techniques, or whether something more dedicated would be needed, along the lines of what SQL query engines use to estimate likely costs. Obviously database engines have a big advantage here in having easy access to the schema, sizes and indexes for the data they query.

Kev.
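[Editor's note: the SQL-style cost estimation mentioned above could, very roughly, look like the following toy sketch. This is not from the thread and not how any real XSLT or SQL engine works; all names, statistics, and cost formulas are invented purely to illustrate the idea of picking an evaluation strategy from estimated costs.]

```python
# Hypothetical sketch: choose between two invented evaluation strategies
# for a match pattern, based on rough statistics about the input document.
from dataclasses import dataclass

@dataclass
class DocStats:
    node_count: int        # total nodes in the input tree (assumed known)
    matching_nodes: int    # estimated nodes matched by the pattern

def estimate_cost(strategy: str, stats: DocStats) -> float:
    """Return an invented cost estimate for a strategy (arbitrary units)."""
    if strategy == "full-scan":
        # Walk every node and test the pattern against each one.
        return stats.node_count * 1.0
    if strategy == "index-lookup":
        # Assume a prebuilt index: fixed build cost plus per-match cost.
        index_build = 5000.0
        return index_build + stats.matching_nodes * 2.0
    raise ValueError(f"unknown strategy: {strategy}")

def choose_strategy(stats: DocStats) -> str:
    """Pick the candidate strategy with the lowest estimated cost."""
    candidates = ["full-scan", "index-lookup"]
    return min(candidates, key=lambda s: estimate_cost(s, stats))

# A small document favours the simple scan; a large one amortises the index.
print(choose_strategy(DocStats(node_count=1000, matching_nodes=10)))
print(choose_strategy(DocStats(node_count=10_000_000, matching_nodes=500)))
```

The point Kevin makes is that a database engine can fill in `node_count`-style statistics from its schema and indexes, whereas an XSLT compiler usually has to guess them without a schema, which is what makes this kind of costing hard to apply.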