Subject: Re: Char node-type
From: Richard Light <richard@xxxxxxxxxxxxxxxxx>
Date: Thu, 23 Nov 2000 19:42:18 +0000
In message <002501c05565$e5804320$7e0aa8c0@xxxxxxx>, Dave Hartnoll
<Dave_Hartnoll@xxxxxxx> writes
>I have an idea that will alleviate your depth of recursion problem, but
>as I'm a relative newcomer to XSL, I'm not fluent enough to express
>this idea in XSL itself yet.
>
>The idea is that your character-processing template should first check
>the length of its string. When it's exactly 1, process the character as
>you do now. Otherwise, call yourself recursively, once for the first
>half of the string, then again for the second half.
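
In XSLT 1.0 terms I read that suggestion as roughly the following.
This is only a sketch: "process-chars" and its "str" parameter stand
in for whatever the real template and parameter are called.

  <xsl:template name="process-chars">
    <xsl:param name="str"/>
    <xsl:choose>
      <xsl:when test="string-length($str) = 1">
        <!-- base case: handle the single character here,
             e.g. emit the image for it -->
        <xsl:value-of select="$str"/>
      </xsl:when>
      <xsl:when test="string-length($str) > 1">
        <xsl:variable name="mid"
          select="floor(string-length($str) div 2)"/>
        <!-- first half of the string -->
        <xsl:call-template name="process-chars">
          <xsl:with-param name="str"
            select="substring($str, 1, $mid)"/>
        </xsl:call-template>
        <!-- second half of the string -->
        <xsl:call-template name="process-chars">
          <xsl:with-param name="str"
            select="substring($str, $mid + 1)"/>
        </xsl:call-template>
      </xsl:when>
    </xsl:choose>
  </xsl:template>

That keeps the recursion depth proportional to the logarithm of the
string length rather than to the length itself.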

That's a thought.  What I have actually done for now is to split the
string on word boundaries (i.e. spaces), which reduces the load on the
stack too.  The problem with a 'binary chop' technique is that one
thing we need to do is to combine 'character-plus-Unicode-combining-
character(s)' sequences into an image representing the single combined
character.  The chop could split such a sequence apart, unless it
looked around for a space before deciding exactly where to divide the
string.
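
Roughly, the word-splitting driver might look like this - again only a
sketch, with "process-string" and "process-chars" as illustrative
names, and the spaces themselves simply copied through:

  <xsl:template name="process-string">
    <xsl:param name="str"/>
    <xsl:choose>
      <xsl:when test="contains($str, ' ')">
        <!-- hand the first word to the character-level template -->
        <xsl:call-template name="process-chars">
          <xsl:with-param name="str"
            select="substring-before($str, ' ')"/>
        </xsl:call-template>
        <!-- copy the space through, then recurse on the rest -->
        <xsl:text> </xsl:text>
        <xsl:call-template name="process-string">
          <xsl:with-param name="str"
            select="substring-after($str, ' ')"/>
        </xsl:call-template>
      </xsl:when>
      <xsl:otherwise>
        <!-- last (or only) word -->
        <xsl:call-template name="process-chars">
          <xsl:with-param name="str" select="$str"/>
        </xsl:call-template>
      </xsl:otherwise>
    </xsl:choose>
  </xsl:template>

The character-level recursion then never goes deeper than the longest
word (the word-level recursion still grows with the number of words),
and because the split points are always spaces, a base character can
never be separated from its combining characters.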

Richard.

Richard Light
SGML/XML and Museum Information Consultancy
richard@xxxxxxxxxxxxxxxxx


 XSL-List info and archive:  http://www.mulberrytech.com/xsl/xsl-list

