Subject: Re: [xsl] Symbol handling in XSLT
From: Alex <alexscott@xxxxxxxxxxx>
Date: Fri, 26 Mar 2004 16:00:49 +0000
So what difference does it make if it writes characters not bytes?
If MSXML (which I am coming to dislike) uses UTF-16 as a default, then this means that it will interpret any entity, i.e. &NBSP;, as a ?.
Note that entity names are case sensitive in XML, so &NBSP; is an error. &nbsp; is expanded to character 160 (no-break space) before XSLT starts, so XSLT does not see whether the original source had an entity or not.
If you (get XSLT to) put in a declaration that the document is encoded in Latin 1 (ISO-8859-1), but XSLT does not actually linearise the document and some other part of the system finally linearises it as UTF-16 (or any other encoding), then the browser will try to interpret the bytes in the file according to the specified encoding. As this is not the encoding actually used, it will produce essentially arbitrary rubbish. If it manages to get most characters "correct" and only uses its missing-glyph mark (?) on a few, that is by luck rather than design.
So how do I get MSXML to write: <xsl:output method="html" encoding="ISO-8859-1" />
You don't want it to write xsl instructions into the output: xsl:output is an instruction to the serialiser, placed at the top level of the stylesheet, and it never appears in the result.
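A minimal sketch of where xsl:output belongs (the identity template here is purely illustrative, not from the original post):

```xml
<!-- xsl:output is a top-level element of the stylesheet itself; it never
     appears in the output, it only tells the serialiser what to emit. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html" encoding="ISO-8859-1"/>
  <!-- identity transform, for illustration only -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>
```

The encoding attribute only takes effect if MSXML itself performs the serialisation; if you pull the result out as a string, it is ignored.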
If you want the transform to take note of an encoding specified in the stylesheet, see Julian's post from a few minutes ago.
If this is served from IIS, you almost certainly have XSLT/ASP bug #1: sending the transformation result as a string to the response object, instead of using transformNodeToObject.
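The bug above can be sketched in classic ASP (filenames are hypothetical; the object names are the standard MSXML ProgIDs). transformNode returns a UTF-16 string, which Response.Write then re-encodes without regard to xsl:output, whereas transformNodeToObject streams into the Response and lets MSXML serialise with the declared encoding:

```vbscript
' Buggy pattern: the transformation result comes back as a
' UTF-16 BSTR and Response.Write re-encodes it, ignoring xsl:output:
'   Response.Write xml.transformNode(xsl)
Dim xml, xsl
Set xml = Server.CreateObject("MSXML2.DOMDocument")
Set xsl = Server.CreateObject("MSXML2.DOMDocument")
xml.async = False : xsl.async = False
xml.load Server.MapPath("source.xml")   ' hypothetical filename
xsl.load Server.MapPath("style.xsl")    ' hypothetical filename
' Fix: stream directly into the Response object so MSXML
' serialises with the encoding declared in xsl:output.
xml.transformNodeToObject xsl, Response
```

This is a sketch of the fix Julian describes, not a complete page; error handling is omitted.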