Subject: Re: [xsl] xslt on server-side vs. client-side
From: Michael Case <mecase@xxxxxxxxxxx>
Date: Fri, 16 Nov 2001 12:51:28 -0800
Hi,

Just to chime in and see if I'm understanding the data side of the
discussion (not the 'green' part).

Is there really a generic answer to this question?  It seems to me that
the data you are manipulating, the kinds of manipulation you perform on
it, and how much user interaction affects the presentation are all
factors in weighing the pros and cons of how XML would work for a site.

I don't know how to explain my point of view in a clear way, but I'll
give it a shot.

It seems to me quite acceptable that a site have three ways to handle
things, and what is more interesting is how that is managed by the site.

1) static HTML
  1.1) semi-static XML to HTML via XSL, converted periodically or
occasionally.
2) dynamic client-side XML (depending on client-side capability).
3) dynamic server-side XML (due to factors like size of data being sent
over).
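To make 1.1) concrete, here is a minimal sketch of the semi-static idea
in Python: regenerate the HTML only when the XML source has changed
since the last run.  The transform() function is just a stand-in for
whatever XSLT processor you actually run (Xalan, Saxon, whatever); the
file names and the wrapping it does are made up for illustration.

```python
import os

def transform(xml_text):
    # Stand-in for a real XSLT processor -- hypothetical, not a real API.
    return "<html><body><pre>%s</pre></body></html>" % xml_text

def regenerate_if_stale(xml_path, html_path):
    """Re-run the XML-to-HTML conversion only when the source is newer
    than the output -- the 'semi-static' option 1.1 above."""
    if (os.path.exists(html_path)
            and os.path.getmtime(html_path) >= os.path.getmtime(xml_path)):
        return False  # output is current; nothing to do
    with open(xml_path) as f:
        html = transform(f.read())
    with open(html_path, "w") as f:
        f.write(html)
    return True
```

Run it from cron (or whatever scheduler you like) and the site stays
plain static HTML as far as the web server is concerned.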

These are characterized by

1) least complex (simple hyper-links and display, some minimal
interaction)
   smallest size (not necessarily over-all size, but rather the chunks
that people look at)
   least user-controlled

2) more complex (sorting, changing presentation, searching)
   mid-size: small enough that the initial download does not exceed the
user's patience
   much more user-controlled/interactive

3) most complex (data from different sources, domain-spanning searches,
merging information, passing information to automated systems (i.e. no
user interaction necessary))
   large data base (not database, but basis of data) which is very
freely explored
   less user control/interaction than (2) in some ways; or, by a
different measure, it could be "slower" and more server-dependent
because of 2.

I could also see 3) and 2) complementing each other in the sense that 3)
could generate a subset of XML/XSLT which then provides at the
client-side all that 2) gives, but also has some means to re-initiate a
3) type interaction if the local base of data is changed.  One way I
think of that is drilling down to a particular subset of information
which then nicely fits into the 2) type of profile.

I could also see 2) as being conditional on the browser.  A simple
redirection might be able to handle this, say if you have a
cocoon/jakarta-tomcat side of the site, maybe?  Then whatever is making
decisions about what to do with a 2) could decide if the processing was
going to be client-side (IE) or server side.  As usual, it is the
practicalities that might be limiting 2) right now, not the ideal
functionality.  

a) Size of what initially goes to the client-side vs. many (smaller?)
chunks from server-side processing and network traffic ramifications.
b) Deciding which client you are dealing with (especially since IE
tempts people to use MS-specific implementations) and how standard
XML-aware clients are developing.
c) Fighting a common perception that doing "the same thing many times on
different machines" means it's better to do it at the server.  My
argument against this, again, is that it depends.  If "thing" is
variable, i.e. processing or manipulations based on user requests, then
it doesn't matter whether it is done locally or remotely, and frankly,
if it does not cross outside of the chunk the client has, then it would
be more efficient on the client side.  If "thing" is static, like data
produced, or "all users want all US states sorted alphabetically", then
the "server should do it" argument makes more sense, and the 3) to 2)
relationship I tried to explain above seems like an acceptable
compromise.
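To illustrate b), the kind of client check I have in mind could be as
crude as the sketch below.  The User-Agent substrings are assumptions
for illustration (IE 5/6 shipped with MSXML and can transform
client-side); a real site would need a more careful detection table than
this.

```python
def transform_location(user_agent):
    """Crude User-Agent sniff: send raw XML plus stylesheet to browsers
    believed to transform client-side, otherwise transform on the
    server.  The substring checks are illustrative, not exhaustive."""
    ua = user_agent.lower()
    if "msie 5" in ua or "msie 6" in ua:  # IE 5/6 bundle MSXML
        return "client"
    return "server"
```

Whatever sits at the front of the site (a cocoon/jakarta-tomcat layer,
say) could use the result to redirect or to run the transform itself.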

The thing is, if each user is doing the exact same thing with the data
as any other user (just viewing it on the screen, for example) then 1)
may be enough.  If each user sorts, re-arranges, queries the same data,
then getting that to them in the form of 2) above is good enough.  If
there is a need to select, search and limit the size of what goes to the
client, or if things come from so many different sources and are brought
together with complex logic, in "real" time, then 3) needs to happen (at
least first, before 2)).
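That paragraph boils down to a decision rule, which could be sketched
like this (the two boolean inputs are my simplification of the factors
above, not a complete taxonomy):

```python
def strategy(per_user_interaction, multi_source_or_large):
    """Map a rough use profile onto options 1), 2), 3) above."""
    if multi_source_or_large:
        return 3  # server-side selection/merging first (then maybe 2)
    if per_user_interaction:
        return 2  # ship XML + XSLT to the client
    return 1      # static (or semi-static) HTML is enough
```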

Finally, there is the real meat, which is, wanting the data.  If I want
users to be able to have the real data, in XML format, then no matter
which way you do it above, you must have a mechanism to say "if they
come at my web-site from this entry-point, they get XML".  Because then,
relying on that, they can use XML technologies to do whatever they want
to do with the data you provide.  Essentially, feed them the "raw" data.
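A sketch of that entry-point idea, with a made-up /xml/ prefix standing
in for the special entry point and a stubbed server-side rendering in
place of the real transformation:

```python
def respond(path, xml_source):
    """Hypothetical entry-point rule: requests under /xml/ get the raw
    data as-is, so clients can apply their own XML tooling; everything
    else gets the (stubbed) server-side HTML rendering."""
    if path.startswith("/xml/"):
        return ("text/xml", xml_source)
    # stand-in for running the real XSLT transformation on the server
    return ("text/html", "<html><body>%s</body></html>" % xml_source)
```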

Generating the characteristics or properties of the kind of information
that best suits 1, 2 or 3 would be neat.  Maybe there are books out
there on such issues that someone could suggest for learning?

Anyway, those are my rambling opinions and I've enjoyed thinking about
this and reading your discussion.  And if any of my thoughts above seem
rather naive, please educate me!

Sincerely,

Michael Case

-- 

Michael E. Case
UC Davis
case@xxxxxxxxxxxxxxxxxx
(530) 754-7226

 XSL-List info and archive:  http://www.mulberrytech.com/xsl/xsl-list

