I have worked for a company that was (and probably still is) heavily invested in XSLT for XML templating. It's not good, and they would probably migrate away from it if they could.
1. Even though there are newer XSLT standards, XSLT 1.0 is still dominant. It is quite limited and weird compared to the newer standards.
2. Resolving performance problems in XSLT templates is hell. XSLT is a Turing-complete functional-style language, with performance very much abstracted away. There were XSLT templates that worked fine for most documents, but then one document came in with a ~100 row table and it blew up. It turned out the template that processed the table was O(N^2) or worse, without any obvious way to optimize it (it might even have had an XPath on each row that was itself O(N) or worse). I don't remember exactly how it manifested, but as I recall that one document took XSLT more than 7 minutes to process.
JS might have other problems, but not being able to resolve algorithmic complexity issues is not one of them.
Features like `xsl:key` (indexing) are now available to greatly speed up processing.
A good XSLT implementation like Saxon definitely helps on the performance front as well.
When it comes to transforming XML into something else, XSLT is quite handy for structuring the logic.
I never really grokked later XSLT and XPath standards though.
XSLT 1.0 had a steep learning curve, but it was elegant in the way poetry is elegant because of the extra restrictions imposed on it compared to prose. You really had to stretch your mind to do useful stuff with it. Does anyone remember Muenchian grouping? It was gorgeous.
Newer standards lost elegance and kept the ugly syntax.
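For anyone who never saw it, here is a minimal XSLT 1.0 sketch of Muenchian grouping (the `item`/`@category` names are made up): members of a group come back via `xsl:key`, and one representative per group is picked with a `generate-id()` comparison.

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Index every item by its category attribute -->
  <xsl:key name="by-cat" match="item" use="@category"/>

  <xsl:template match="/items">
    <groups>
      <!-- Keep only the first item of each category: the one whose
           generate-id() equals that of the first node the key returns -->
      <xsl:for-each select="item[generate-id() =
                                 generate-id(key('by-cat', @category)[1])]">
        <group name="{@category}">
          <!-- All members of this group come back via the same key -->
          <xsl:copy-of select="key('by-cat', @category)"/>
        </group>
      </xsl:for-each>
    </groups>
  </xsl:template>
</xsl:stylesheet>
```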
"Newer standards lost elegance and kept the ugly syntax."
My biggest problem with XSLT is that I've never encountered a problem that I wouldn't rather solve with an XPath library and literally any other general purpose programming language.
When XSLT was the only thing with XPath you could rely on, maybe it had an edge, but once everyone has an XPath library what's left is a very quirky and restrictive language that I really don't like. And I speak Haskell, so the critic reaching for the reply button can take a pass on the "Oh you must not like functional programming" routine... no, Haskell is included in that set of "literally any other general purpose programming language" above.
XSLT just needs a different, non-XML serialization.
XML (the data structure) needs a non-XML serialization.
Similar to how the Semantic Web's OWL has several different serializations, only one of them being the XML serialization. (E.g. OWL can be represented in Functional, Turtle, Manchester, JSON, and N-Triples syntaxes.)
Generally speaking, it's part of the problem with the entire "XML as a savior" mindset of that earlier era, and a big reason why we left it behind, whether it's XSLT or SOAP or even XHTML in a way... Those were defined as machine languages meant for machines talking to machines, and invariably something goes south and they're not really made for us to intervene in the middle; it can be done, but it's way more work than it should be, especially since they were clearly never designed around the idea that those machines will sometimes speak "wrong", or a different "dialect".
It looks great, then you design your stuff and it goes great, then you deploy to the real world and everything catches fire instantly, and every time you put one fire out another one starts.
> Generally speaking, it's part of the problem with the entire "XML as a savior" mindset of that earlier era, and a big reason why we left it behind
Generally speaking I feel like this is true for a lot of stuff in programming circles, XML included.
New technology appears, some people play around with it. Others come up with using it for something else. Give it some time, and eventually people start putting it everywhere. Soon "X is not for Y" blogposts appear, and usage finally starts to decrease as people rediscover "use the right tool for the right problem". Wait yet some more time, and a new technology appears, and the same cycle begins again.
Seen it with so many things by now that I think "we'll" (the software community) forever be stuck in this cycle and the only way to win is to explicitly jump out of the cycle and watch it from afar, pick up the pieces that actually make sense to continue using and ignore the rest.
A controversial opinion, but JSON is that too. Not as bad as XML was (̶t̶h̶e̶r̶e̶'̶s̶ ̶n̶o̶ ̶"̶J̶S̶L̶T̶"̶)̶, but wasting cycles to manifest structured data in an unstructured textual format has massive overhead on the source and destination sides. It only took off because "JavaScript everywhere" was taking off — performance be damned. Protobufs and other binary formats already existed, but JSON was appealing because it's easily inspectable (it's plaintext) and easy to use — `JSON.stringify` and `JSON.parse` were already there.
We eventually said, "what if we made databases based on JSON," and then came MongoDB. Worse performance than a relational database, but who cares! It's JSON! People have mostly moved away from document databases, but that's because they realized it was a bad idea for the majority of use cases.
I think the only part left out is the people currently believing in the currently hyped thing, "because this time it's right!" or whatever they claim. Kind of like how TypeScript people always appear when you say that TypeScript is currently one of those hyped things and will eventually be overshadowed by something else, just like the other languages before it; then soon enough, sure enough, someone will share why TypeScript happens to be different.
There have been many such cycles, but the XML hysteria of the 00s is the worst I can think of. It lasted a long time, and the XML square peg was shoved into so many round holes.
IDK, the XML hysteria is comparable to the dynamic- and functional-language hysterias. And it pales in comparison to the microservices, SPA, and current AI hysterias.
IMHO it's pretty comparable, the difference is only in the magnitude of insanity. After all, the industry did crap out these hardware XML accelerators that were supposed to improve performance of doing massive amounts of XML transformations — is it not the GPU/TPU craze of today?
> Those were defined as machine languages meant for machines talking to machines
i don't believe this is true. machine language doesn't need the kind of verbosity that xml provides. sgml/html/xml were designed to allow humans to produce machine readable data. so they were meant for humans to talk to machines and vice versa.
> part of the problem with the entire "XML as a savior" mindset of that earlier era
I think part of the problem is focusing on the wrong aspect. In the case of XSLT, I'd argue its most important properties are being pure, declarative, and extensible. Those can have knock-on effects, like enabling parallel processing, untrusted input, static analysis, etc. The fact it's written in XML is less important.
Its biggest competitor is JS, which might have nicer syntax but it loses those core features of being pure and declarative (we can implement pure/declarative things inside JS if we like, but requiring a JS interpreter at all is bad news for parallelism, security, static analysis, etc.).
When fashions change (e.g. XML giving way to JS, and JSON), we can end up throwing out good ideas (like a standard way to declare pure data transformations).
(Of course, there's another layer to this, since XML itself was a more fashionable alternative to S-expressions; and XSLT is sort of like Lisp macros. Everything old is new again...)
> Even though there are newer XSLT standards, XSLT 1.0 is still dominant.
I'm pretty sure that's because implementing XSLT 2.0 needs a proprietary library (Saxon XSLT[0]). It was certainly the case in the oughts, when I was working with XSLT (I still wake up screaming).
XSLT 1.0 was pretty much worthless. I found that I needed XSLT 2.0, to get what I wanted. I think they are up to XSLT 3.0.
Are you saying it is specified that you literally cannot implement it other than on top of, or by mimicking bug-for-bug, that library (the way it was impossible to implement WebSQL without a particular version of SQLite), or is Saxon XSLT just the only existing implementation of the spec?
It's odd because XSLT was clearly made in an era when processing long source XML was expected to be the norm, and nested loops would obviously blow up...
Yeah, I was using Novell DirXML to do XSLT processing of inbound/outbound data in 2000 (https://support.novell.com/techcenter/articles/ana20000701.h...) for directory services stuff. It was full XML body (albeit small document sizes, as they were usually user or identity style manifests from HR systems), no streaming as we know it today.
But they worked on the xml body as a whole, in memory, which is where all the headaches started. Then we introduced WSDLs on top, and then we figured out streaming.
How, where? In 2013 I was still working a lot with XSLT and 1.0 was completely dead everywhere one looked. Saxon was free for XSLT 2 and was excellent.
I used to do transformation of both huge documents, and large number of small documents, with zero performance problems.
Probably corps. I was working at Factset in the early 2000s when there was a big push for it, and I imagine the same thing played out at every Microsoft shop across corporate America, a market Microsoft was winning big share in at the time. (I bet there are still a ton of internal web apps that only work with IE... sigh)
Obviously, that means there are a lot of legacy processes likely still using it.
The easiest way to improve the situation seems to be to upgrade to a newer version of XSLT.
I recently had the occasion to work with a client that was heavily invested in XML processing for a set of integrations. They’re migrating / modernizing, but they’re so heavily invested in XSL that they don’t want to migrate away from it. So I conducted some perf tests, and the performance I found for XSLT in .NET (“Core”) was slightly to significantly better than that of current Java with Saxon. But they were both fast.
In the early days, XSL was all interpreted, and it was slow. From ~2004 or so, all the XSLT engines came to be JIT-compiled. XSL benchmarks used to be a thing, but they rapidly declined in value from then on because the perf differences just stopped mattering.
Are you using the commercial version of Saxon? It's not expensive, and IMHO worth it for the features it supports (including the newer standards) and the performance. If I remember correctly (it was a long time ago) it does some clever optimizations.
We didn't use Saxon; I don't work there anymore. We also supported client-side (browser) XSLT processing as well as server-side. It might have helped on the server side; maybe it could even have resolved some of the algorithmic complexity issues with memoization (possibly trading off memory consumption).
But in the end the core problem is XSLT, the language. Despite it being a complete programming language, your options for resolving performance issues are very limited when working within the language.
O(n^2) issues can typically be solved using keyed lookups, but I agree that the base processing speed is slow and the language really is too obscure to provide good DX.
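For reference, a sketch of what the keyed version looks like (element names here are hypothetical): the index is declared once with `xsl:key`, and each row then does a keyed lookup instead of rescanning the whole document.

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Build an index over all products once -->
  <xsl:key name="product-by-id" match="product" use="@id"/>

  <xsl:template match="row">
    <!-- Without the key, each row rescans the document:
         select="//product[@id = current()/@ref]/name"  (O(N) per row) -->
    <!-- With the key, each row is a cheap indexed lookup: -->
    <xsl:value-of select="key('product-by-id', @ref)/name"/>
  </xsl:template>
</xsl:stylesheet>
```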
I worked with a guy who knew all about complexity analysis, but was quick to assert that "n is always small". That didn't hold - but he'd left the team by the time this became apparent.
What incoherent writing lol. I'm not sure if grug = incoherent necessarily, but I'm sure there is a type of genius whose every sentence is painfully clear. Wouldn't it be better to cater to that?
Anyway.
Paco Grug talks about how they want a website (e.g. a blog) without a server-side build step. Just data, the shape of the data, and the building happening automagically, this time on the client. HTML has JavaScript and frames for that, but HTML painfully lacks transclusion, for the header menu, sidebar and footer, which birthed myriad web servers and webserver technologies.
It seems that .xml can do it too, e.g. transclusion and probably more. The repo doesn't really showcase it.
Anyway, I downloaded the repo and ran it on a local webserver; it works. It also works with JavaScript disabled, on an old browser. Nice technology, maybe it is possible to use it for something useful (in a very specific niche). For most other things a JavaScript/build-step/dynamic webserver is better.
Also, I think that for a blog you'll want the posts in separate files, and you can't just dump them in a folder and expect that the browser will find them. You'll need a webserver/build-step/javascript for that.
Ok, so it might be a long shot, but I would say that
1. the browsers were inconsistent in the 1990s-2000s, so we started using JS to make them behave the same
2. meanwhile the only things we needed were good CSS styles (which were not yet there) and consistent behaviour
3. over the years the browsers started behaving the same (mainly because Highlander rules - there can be only one, but Firefox is also coping well)
4. but we had already got used to having frameworks that would make the pages look the same on all browsers. Also, the paradigm switched to rendering JSON data
5. with current technology we could cope with server-generated old-school web pages, because they would have a low footprint, work faster and require less memory.
Why do I say that? Recently we started working on a migration from a legacy system. It looks like the 2000s standard: one page per HTTP request. Every action like add, remove, etc. requires an HTTP refresh. However, it works much faster than our React system. Because:
1. Nowadays the internet is much faster
2. Phones have a lot of memory which is wasted by js frameworks
3. in the backend it's almost the same old story - CRUD, CRUD and CRUD (+ pagination, + transactions)
AJAX and updating the DOM weren't there just to "make things faster"; they were implemented to change the paradigm of "web sites" or "web documents" (because the web was for displaying documents). A full page reload makes sense if you are working in a document paradigm.
It works well here on HN for example as it is quite simple.
There are a lot of other examples where people most likely should do a simple website instead of using JS framework.
But "we could all go back to full page reloads" is not true, as there really are proper "web applications" out there for which full page reloads would be a terrible UX.
To summarize there are:
"websites", "web documents", "web forms" that mostly could get away with full page reloads
"web applications" that need complex stuff presented and manipulated while full page reload would not be a good solution
Yes, of course for web applications you can't do full page reloads (you didn't back in the day either, when web applications existed in the form of Java applets or Flash content).
Let's face it: most uses of JS frameworks are for blogs or things where you wouldn't even notice a full page reload. Browsers nowadays are advanced and only redraw the screen once they have finished loading the content, meaning that out of the box they mostly do what React does (only render the DOM elements that changed), so a reload of a page that only changes one button at the UI level does not result in a flicker or a visible reload of the whole page.
BTW, even React is now suggesting people run the code server-side when possible (it's the default in Next.JS), since it makes the project easier to maintain, debug and test, as well as getting a better SEO score from search engines.
I'm still a fan of the "old" MVC models of classical frameworks such as Laravel, Django, Rails, etc.; to me they make projects that are overall easier to maintain, because all code runs in the backend (except maybe some jQuery animation client-side), the model is well separated from the view, there is no API to maintain, etc.
Classic frames were quite bad. Every frame on a page was a separate, independent, coequal instance of the browser engine. This is almost never what you actually want. The header/footer/sidebar frames are subordinate and should not navigate freely. Bookmarks should return me to the frameset state as I left it, not the default for that URL. History should contain the frameset state I saw, not separate entries for each individual frame.
Even with these problems, classic frames might have been salvageable, but nobody bothered to fix them.
> Every frame on a page was a separate, independent, coequal instance of the browser engine. This is almost never what you actually want.
Most frames are used for a menu, navigation, a frame for data, a frame for additional information about the data. And they are great for that. I don't think frames are different instances of the browser engine(?), but that doesn't matter in the slightest(?). They are fast and lightweight.
> The header/footer/sidebar frames are subordinate and should not navigate freely.
They have the ability to navigate freely, but obviously they don't do that; they navigate other frames.
Yup, they are not enough for an SPA, not without javascript. And if you have javascript to handle history, URL, bookmarks and all that, you can just use divs without frames.
They can navigate targeting any other frame. For example, clicking "System Interfaces" updates the bottom-left navigation menu, while keeping the state of the main document frame.
It's quite simple, it just uses the `target` attribute (`target="_blank"` remains popular as a vestigial limb of this whole approach).
This also worked with multiple windows (yes, there were multi-window websites that could present interactions that handled multiple windows).
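A minimal sketch of the pattern (file and frame names are made up): the frameset names its frames, and links elsewhere address them by name.

```html
<!-- index.html: the frameset -->
<frameset cols="200,*">
  <frame name="nav"  src="nav.html">
  <frame name="main" src="welcome.html">
</frameset>

<!-- inside nav.html: this link replaces only the "main" frame,
     so the navigation frame keeps its own state -->
<a href="chapter1.html" target="main">Chapter 1</a>
```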
The popular iframe is sort of salvaged from frame tech; it is still used extensively and is not deprecated.
An iframe is inherently subordinate. This solves one of the major issues with classic frames.
Classic frames are simple. Too simple. Your link goes to the default state of that frameset. Can you link me any non-default state? Can I share a link to my current state with you?
That timeline doesn't sound right to me. JS was rarely used to standardise behaviour - we had lots of user-agent detection and relied on quirks ordering to force the right layout. JS really was for interactivity at the beginning - DHTML and later AJAX. I don't think it even had easy access to layout-related things? (I may be mistaken though.) CSS didn't really make things more consistent either - once it became capable, it was still a mess. Sure, CSS Zen Garden was great and everyone was so impressed with semantic markup while coding tables everywhere. It took ages for anything to actually pass the first two ACID tests. I'm not sure frameworks ever really impacted the "consistent looks" side of things - by the time we grew out of jQuery, CSS was the looks thing.
Then again, it was a long time. Maybe it's me misremembering.
Before jQuery there was Prototype.js, part of early AJAX support in RoR, which fixed inconsistencies in how browsers could fetch data, especially in the era between IE 5 and 7 (native JS `XMLHttpRequest` was only available from IE 7 onwards, before that it was some ActiveX thing. The other browsers supported it from the get go). My memory is vague, but it also added stuff like selectors, and on top of that was script.aculo.us which added animations and other such fanciness.
jQuery took over very quickly though for all of those.
> native JS `XMLHttpRequest` was only available from IE 7 onwards, before that it was some ActiveX thing.
Almost sure it was available on IE6. But even if not, you could emulate it using hidden iframes to call pages which embedded some javascript interacting with the main page. I still have fond memories of using mootools for lightweight nice animations and less fond ones of dojo.
Internet Explorer 5–6 was the ActiveX control. Then other browsers implemented XMLHTTPRequest based on how that ActiveX control worked, then Internet Explorer 7 implemented it without ActiveX the same way as the other browsers, and then WHATWG standardised it.
Kuro5hin had a dynamic commenting system based on iframes like you describe.
jQuery in ~2008 was when it kinda took off, but jQuery was itself an outgrowth of work done before it on browser compatibility with JavaScript. In particular, events.
Internet Explorer didn’t support DOM events, so addEventListener wasn’t cross-browser compatible. A lot of people put work in to come up with an addEvent that worked consistently cross-browser.
The DOMContentLoaded event didn’t exist, only the load event. The load event wasn’t really suitable for setting up things like event handlers because it would wait until all external resources like images had been loaded too, which was a significant delay during which time the user could be interacting with the page. Getting JavaScript to run consistently after the DOM was available, but without waiting for images was a bit tricky.
These kinds of things were iterated on in a series of blog posts from several different web developers. One blogger would publish one solution, people would find shortcomings with it, then another blogger would publish a version that fixed some things, and so on.
This is an example of the kind of thing that was happening, and you’ll note that it refers to work on this going back to 2001:
When jQuery came along, it was really trying to achieve two things: firstly, incorporating things like this to help browser compatibility; and second, to provide a “fluent” API where you could chain API calls together.
I wasn't clear: jQuery was definitely used for browser inconsistencies, but in behaviour, not layout. It had just a small overlap with CSS functionality (at first, until it all got exposed to JS).
In 2002, I was using "JSRS" and returning HTTP 204 No Content, which causes the browser to NOT refresh/reload the page.
Just for small interactive things, like a start/pause button for scheduled tasks. The progress bar etc.
But yeah, in my opinion we lost about 15 years of proper progress.
"The network is the computer" came true.
The SUN/JEE model is great.
It’s just that monopolies stifle progress and better standards.
Standards are pretty much dead, and everything is at the application layer.
That said.. I think XSLT sucks, although I haven’t touched it in almost 20 years. The projects I was on, there was this designer/xslt guru. He could do anything with it.
> But yeah, in my opinion we lost about 15 years of proper progress.
Internet Explorer 6 was released in 2001 and didn’t drop below 3% worldwide until 2015. So that’s a solid 14 years of paralysis in browser compatibility.
jQuery, along with a number of similar attempts and more single-item-focused polyfills¹ was as much about DOM inconsistencies as JS ones. It was also about making dealing with the DOM more convenient² even where it was already consistent between commonly used browsers.
DOM manipulation of that sort is JS dependent, of course, but I think considering language features and the environment, like the DOM, to be separate-but-related concerns is valid. There were less kitchen-sink-y libraries that only concentrated on language features or specific DOM features. Some may even consider a few parts in a third section: the standard library, though that feature set might be rather small (not much more than the XMLHTTPRequest replacement/wrappers?) to consider its own thing.
> For stuff which didn't need JS at all, there also shouldn't be much need for JQuery.
That much is mostly true, as it by default didn't do anything to change non-scripted pages. Some polyfills for static HTML (for features that were inconsistent, or missing entirely in, usually, old-IE) were implemented as jQuery plugins though.
--------
[1] Though I don't think they were called that back then, the term coming later IIRC.
[2] Method chaining³, better built-in searching and filtering functions⁴, and so forth.
[3] This divides opinions a bit though was generally popular, some other libraries did the same, others tried different approaches.
[4] Which we ended up coding repeatedly in slightly different ways when needed otherwise.
Old guy here. Agreed- the actual story of web development and JavaScript’s use was much different.
HTML was the original standard, not JS. HTML was evolving early on, but the web was much more standard than it is today.
Early-mid 1990s web was awesome. HTML served HTTP, and pages used header tags, text, hr, then some backgound color variation and images. CGI in a cgi-bin dir was used for server-side functionality, often written in Perl or C: https://en.m.wikipedia.org/wiki/Common_Gateway_Interface
Back then, if you learned a little HTML, you could serve up audio, animated gifs, and links to files, or Apache could just list files in directories to browse like a fileserver, without any search. People might get a friend to give them access to their server, or a university account, and put content up on it. You might be on a server where they had a cgi-bin script or two to email people or save/retrieve from a database, etc. There was also mailto in addition to href for the a (anchor) tag for hyperlinks, so you could just put your email address there.
Then a ton of new things were appearing. PHP on the server side. JavaScript came out but wasn't used much except for a couple of party tricks. ColdFusion on the server side. Around the same time was VBScript, which was nice but just for IE/Windows, but it was big. Perl and then PHP were also big on the server side. If you installed Java you could use applets, which were neat little applications on the page. Java Web Server came out server-side and there were JSPs. Java Tomcat came out on the server side. ActionScript came out to basically replace VBScript but do it server-side with ASPs. VBScript support went away.
During this whole time, JavaScript had just evolved into more party tricks and things like form validation. It was fun, but it was PHP, ASP, JSP/Struts/etc. on the server side in the early 2000s, with Rails coming out and ColdFusion mostly going away. Facebook was PHP in the mid-2000s, plus the LAMP stack, etc. People were breaking up images using tables, and CSS was coming out with slow adoption. It wasn't until the mid-to-late 2000s that JavaScript started being used much for UI, with Google's fostering of it and development of V8, where it was taken more seriously because it had been slow before then. And when it finally got big, there was an awful several years of framework-after-framework super-JavaScript ADHD, which drove a lot of developers to leave web development, because of the move from server-side to client-side, along with NoSQL DBs and seemingly stupid things happening like client-side credential storage, ignoring ACID for data, etc.
So- all that to say, it wasn’t until 2007-2011 before JS took off.
> with current technology we could cope with server-generated old-school web pages, because they would have a low footprint, work faster and require less memory
I've got a .NET/Kestrel/SQLite stack that can crank out SSR responses in no more than ~4 milliseconds. Average response time is measured in hundreds of microseconds when running release builds. This is with multiple queries per page, many using complex joins to compose view-specific response shapes. Getting the data in the right shape before interpolating HTML strings can really help with performance in some of those edges like building a table with 100k rows. LINQ is fast, but approaches like materializing a collection per row can get super expensive as the # of items grows.
The closer together you can get the HTML templating engine and the database, the better things will go in my experience. At the end of the day, all of that fancy structured DOM is just a stream of bytes that needs to be fed to the client. Worrying about elaborate AST/parser approaches when you could just use StringBuilder and clever SQL queries has created an entire pointless, self-serving industry. The only arguments I've ever heard against using something approximating this boil down to arrogant security hall monitors who think developers can't be trusted to use the HTML escape function properly.
> arrogant security hall monitors who think developers can't be trusted to use the HTML escape function properly.
Unfortunately, they're not actually wrong though :-(
Still, there are ways to enforce escaping (like preventing "stringly typed" programming) which work perfectly well with streams of bytes, and don't impose any runtime overhead (e.g. equivalent to Haskell's `newtype`)
> with current technology we could cope with server-generated old-school web pages, because they would have a low footprint, work faster and require less memory.
However, when you have a high-latency connection, the "thick client" JSON-filled webapp only has its advantages if most of the business logic happens in the browser. E.g. Google Docs - great, and much better than it would have been in the 2000s design style. An application that searches for apartments to rent? Not really, I would say.
-- edit --
By the way, in 2005 I programmed using a very funny PHP framework, PRADO, that sent every change in the UI to the server. Boy, was it slow and server-heavy. That was a direction we should never have gone...
> An application that searches for apartments to rent? Not really, I would say.
not a good example. i can't find it now, but there was a story/comment about a realtor app that people used to sell houses. often when they were out with a potential buyer they had bad internet access and loading new data and pictures for houses was a pain. it wasn't until they switched to using a frontend framework to preload everything with the occasional updates that the app became usable.
high latency affects any interaction with a site. even hackernews is a pain to read over a high-latency connection and would improve if new comments were loaded in the background. the problem creeps up on you faster than you think.
Prefetching pages doesn't require a frontend framework though. All it takes is a simple script to preload all or specific anchor links on the page, or you could get fancier with a service worker and a site manifest if you want to preload pages that may not be linked on the current page.
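For the simple case, a couple of hints in the page head are enough (URLs are hypothetical), no script required:

```html
<!-- Ask the browser to fetch likely next pages ahead of time -->
<link rel="prefetch" href="/posts/next-post.html">
<link rel="prefetch" href="/about.html">
```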
Yep, that works as well. I'll reach for a script still if I want more logic around when to prefetch, like only prefetching on link hover or focus. A script is also needed for any links that you need to preload but aren't included on the current page.
It's sad how the bloat of '00s enterprise XML made the tech seem outdated and drove everyone to 'cleaner' JSON, because things like XSLT and XPath were very mature and solved a lot of the problems we still struggle with in other formats.
I'm probably guilty of some of the bad practice: I have fond memories of (ab)using XSLT includes back in the day with PHP stream wrappers to have stuff like `<xsl:include href="mycorp://invoice/1234">`
This may be out-of-date bias, but I'm still a little uneasy letting the browser do the transform locally, just because it used to be a minefield of incompatibility.
It's been 84 years but I still miss some of the "basics" of XML in JSON - a proper standards organization, for one. But things like schemas were (or, felt like) so much better defined in XML land, and it took nearly a decade for JSON land to catch up.
Last thing I really did with XML was a technology called EXI, a transfer method that converted an XML document into a compressed binary data stream. Because translating a data structure to ASCII, compressing it, sending it over HTTP etc and doing the same thing in reverse is a bit silly. At this point protobuf and co are more popular, but imagine if XML stayed around. It's all compatible standards working with each other (in my idealized mind), whereas there's a hard barrier between e.g. protobuf/grpc and JSON APIs. Possibly for the better?
I just learned about EXI as it's being used on a project I work on. It's quite amazingly fast and small! It is a binary representation of the XML stream. It can compress quite small if you have an XML schema to go with your XML.
I was curious about how it is implemented and I found the spec easy to read and quite elegant:
https://www.w3.org/TR/exi/
That data transform thing XSLT could do was so cool. You could twist it into emitting just about any other format, and XML was the top layer. You want it in tab-delimited or YAML? Feed it the right stylesheet and there you go. Another system wants CSV? Sure thing, different stylesheet and there you go.
For a transport tech XML was OK. It just wasted 20% of your bandwidth on being a text encoding. Plus wrapping your head around those stylesheets was a mind twister. Not surprised people despise it, as it has the ability to be wickedly complex for no real reason.
It depends what you use it for. I worked on an interbank messaging platform that normalised everything into a series of standard XML formats, and then used XSLT for representing data to the client. Common use case: we could re-render data to whatever a receiver's risk system was expecting, in config (not compiled code). You could have people trained in XSLT doing that; they did not need to be more experienced developers. Fixes were fast. It was good for this. Another time I worked on a production pipeline for a publisher of education books. Again, data stored in normalised XML. XSLT is well suited to mangling in that scenario.
That's funny, I would reverse those. I loved XSLT though it took me a long time for it to click; it was my gateway drug to concepts like functional programming and idempotency. XPath is pretty great too. The problem was XML, but it isn't inherent to it -- it empowered (for good and bad) lots of people who had never heard of data normalization to publish data and some of it was good but, like Irish Alzheimer's, we only remember the bad ones.
True, and it's even more sad that XML was originally just intended as a simplified subset of SGML (HTML's meta syntax with tag inference and other shortforms) for delivery of markup on the web and to evolve markup vocabularies and capabilities of browsers (of which only SVG and MathML made it). But when the web hype took over, W3C (MS) came up with SOAP, WS-this and WS-that, and a number of programming languages based on XML including XSLT (don't tell HNers it was originally Scheme but absolutely had to be XML just like JavaScript had to be named after Java; such was the madness).
If your document has namespaces, xpath has to reflect that. You can either tank it or explicitly ignore namespaces by foregoing the shorthands and checking `local-name()`.
Ok. Perhaps 'namespace the query' wasn't quite the right way of explaining it. All I'm saying is, whenever I've used XPath, instead of it looking nice like `/bookstore/book/title`, it ends up as something like `//*[local-name()='book']/*[local-name()='title']`
... I guess because they couldn't bear to have it just match on tags as they are in the file, and it had to be tethered to some namespace stuff that most people don't bother with. A lot of XML is ad-hoc without a namespace defined anywhere
It's like
Me: Hello Xpath, heres an XML document, please find all the bookstore/book/title tags
Xpath: *gasps* Sir, I couldn't possibly look for those tags unless you tell me which namespace we are in. Are you some sort of deviant?
Whether most people bother with it is not actually relevant, and it's not information the average XML processor even receives. If the file uses a default namespace (xmlns), then the elements are namespaced, and anything processing the XML has to either properly handle namespaces or explicitly ignore them.
> A lot of XML is ad-hoc without a namespace defined anywhere
If the element is not namespaced, XPath does not require a prefix; you just write something like `/bookstore/book/title`.
I don't recall ever needing to do that for unnamespaced tags. Are you sure the issue you're having isn't that the tags have a namespace?
my:book is a different thing from your:book and you generally don't want to accidentally match on both. Keeping them separate is the entire point of namespaces. Same as in any programming language.
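To make the two options concrete, here is a hedged XSLT 1.0 sketch, assuming the document declares `xmlns="http://example.com/books"` as its default namespace:

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:b="http://example.com/books">

  <xsl:template match="/">
    <!-- Option 1: bind a prefix to the document's namespace and use it -->
    <xsl:copy-of select="/b:bookstore/b:book/b:title"/>

    <!-- Option 2: ignore namespaces entirely by matching on local names:
         select="//*[local-name()='book']/*[local-name()='title']" -->
  </xsl:template>
</xsl:stylesheet>
```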
In The Art of Unix Programming (2003), the author advocated bespoke text formats and writing parsers for them. Writing XML by hand is on his list of war crimes. Since then, syntax highlighting, autocomplete and autoformatting have narrowed the effort gap, and tolerant parsers (browsers being the main example) got a bad rap. Would Markdown and YAML exist with modern editors?
XML is a markup language system. You typically have a document, and various parts of it can be marked up with metadata, to an arbitrary degree.
JSON is a data format. You typically have a fixed schema and things are located within it at known positions.
Both of these have use-cases where they are better than the other. For something like a web page, you want a markup language that you progressively render by stepping through the byte stream. For something like a config file, you want a data format where you can look up specific keys.
Generally speaking, if you’re thinking about parsing something by streaming its contents and reacting to what you see, that’s the kind of application where XML fits. But if you’re thinking about parsing something by loading it into memory and looking up keys, then that’s the kind of application where JSON fits.
I've built my personal site on XSLT a couple times just to see how far I could push it.
It works surprisingly well; the only issue I ever ran into was a decades-old bug in Firefox where it doesn't support rendering HTML content directly from the XML document. I.e., if the blog post content is HTML via CDATA, I needed a quick script to force Firefox to render that text via innerHTML rather than rendering the raw CDATA text.
Me simple man. Me see caveman readme, me like. Sometimes me feel like caveman hitting keyboard to make machine do no good stuff. But sometimes, stuff good. Me no do websites or web things, but me not know about XSLT. Me sometimes hack XML. Me sometimes want show user things. Many many different files format makes head hurt. Me like pretty things though. Me might use this.
People love to complain about verbosity of XML, and it looks complicated from a distance, but I love how I can create a good file format based on XML, validate with a DTD and format with XSLT if I need to make it very human readable.
XML is the C++ of text based file formats if you ask me. It's mature, batteries included, powerful and can be used with any language, if you prefer.
Like old and mature languages with their own quirks, it's sadly fashionable to complain about it. If it doesn't fit the use case, it's fine, but treating it like an abomination is not.
One of my first projects as a professional software engineer at the ripe age of 19 was customizing a pair of Google Search Appliances that my employer had bought. They'd shelled out hundreds of thousands of dollars to rack yellow-faced Dell servers running CentOS with some Google-y Python because they thought that being able to perform full-text searches of vast CIFS document stores would streamline their business development processes. Circa 2011 XHTML was all the rage and the GSA's modus operandi was to transform search results served from the backend in XML into XHTML via XSLT. I took the stock template and turned it into an unholy abomination that served something resembling the rest of the corporate intranet portal by way of assets and markup stolen from rendered Coldfusion application pages, StackOverflow, and W3Schools tutorials.
I learned quickly to leave this particular experience off of my resume as sundry DoD contractors contacted me on LinkedIn for my "XML expertise" to participate in various documentation modernization projects.
The next time you sigh as you use JSX to iterate over an array of Typescript interfaces deserialized from a JSON response remember this post - you could be me doing the same in XSLT :-).
A long time ago, in a dystopic project far, far, away:
Depressed and quite pessimistic about the team’s ability to orchestrate Java development in parallel with the rapid changes to the workbook, he came up with the solution: a series of XSLT files that would automatically build Java classes to handle the Struts actions defined by the XML that was built by Visual Basic from the workbook that was written in Excel.
What is this "XSLT works natively in the browser" sourcery? The last time I used XSLT was like 20 years ago- but I used it A LOT, FOR YEARS. In those days you needed a massive wobbly tower of enterprise Java to make it work which sort of detracted from the elegance of XSLT itself. But if XSLT actually works in the browser- has the holy grail of host-anywhere static templating actually been sitting under our noses this whole time?
I'm also more concerned about deprecation risk. However, you can still do a lot with XSLT 1.0. There is also SaxonJS, which allows you to run XSLT 3.0. However, embedding JavaScript to use XSLT defeats the purpose of this exercise.
Chrome has libxslt; Firefox has something called "Transformiix". Both are 1.0. Chrome has no extensions other than 'exsl:node-set'; Firefox has quite a few, although not all of EXSLT.
Plug: here is a small project to get the basic information about the XSLT processor and available extensions. To use with a browser find the 'out/detect.xslt' file there and drag it into the browser. Works with Chrome and Firefox; didn't work with Safari, but I only have an old Windows version of it.
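In the same spirit, a minimal sketch of what such a detector can report using nothing but standard XSLT 1.0 facilities (`system-property()` and `function-available()`):

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:exsl="http://exslt.org/common">
  <xsl:output method="text"/>
  <xsl:template match="/">
    <xsl:text>Vendor:  </xsl:text>
    <xsl:value-of select="system-property('xsl:vendor')"/>
    <xsl:text>&#10;Version: </xsl:text>
    <xsl:value-of select="system-property('xsl:version')"/>
    <xsl:text>&#10;exsl:node-set available: </xsl:text>
    <xsl:value-of select="function-available('exsl:node-set')"/>
  </xsl:template>
</xsl:stylesheet>
```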
I was _really_ deep into XSLT- I even wrote the XSLT 2 parser for Wikipedia in like 2009, so I'm not sure why I haven't been aware of browser native support for transformations until now. Or maybe I was and I just forgot.
I updated an XSLT system to work with then latest Firefox a couple of years ago. We have scripts in a different directory to the documents being transformed which requires a security setting to be changed in Firefox to make it work, I don't know if an equivalent thing is needed for Chrome.
> massive wobbly tower of enterprise Java to make it work
It wasn't that bad. We used tomcat and some apache libraries for this. Worked fine.
Our CMS was spitting out XML files with embedded HTML that were very cacheable. We handled personalization and rendering to HTML (and JS) server-side with a caching proxy. The XSL transformation ran after the cache and was fast enough to keep up with a lot of traffic. Basically the point of the XML here was to put all the ready HTML in blobs and all the stuff that needed personalization in XML tags. So the final transform was pretty fast. The XSL transformer was heavily optimized, and the trick was to stream its output straight to the response output stream and not do in-memory buffering of the full content. That's still a good trick BTW, one that most frameworks get wrong out of the box because in-memory buffering is easier for the user. It can make a big difference for large responses.
These days, you can run whatever you want in a browser via wasm of course. But back then javascript was a mess and designers delivered photoshop files, at best. Which you then had to cut up into frames and tables and what not. I remember Google Maps and Gmail had just come out and we were doing a pretty javascript heavy UI for our CMS and having to support both Netscape and Internet Explorer, which both had very different ideas about how to do stuff.
XSLT works, though if I'm not mistaken browsers are all stuck on older versions of the spec. Firefox has a particularly annoying bug that I run into, related to `disable-output-escaping` not really working when you need HTML stored in the document to render as actual DOM (it renders the raw HTML text).
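For context, the construct in question looks like this (`content` is a hypothetical element whose text is escaped HTML):

```xml
<xsl:template match="post">
  <div class="post-body">
    <!-- Ask the serializer to emit the stored HTML unescaped instead of
         re-escaping it; this is the part that misbehaves when the result
         is rendered directly as DOM -->
    <xsl:value-of select="content" disable-output-escaping="yes"/>
  </div>
</xsl:template>
```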
To show how wild things got w/ XML and XSLT in the early 2000s, I worked for a company that built an ASIC to parse XML at wire speed and process XSLT natively in the chip - because the anticipated future of the internet was all XML/XSLT. Intel bought the company and the guts made their way into the SSE accelerators.
I used XSLT in the past for trade message transformation from one format of XML (produced by an upstream system) to another (used by the downstream consuming system). It works reasonably well for not overly complex stuff but debugging things are a pain once the complexity increases. Prefer to not do that again.
> how I can run it? open XML file
> open blog.xml -a Safari
This didn't work for me on my browsers (FF/Chrome/Safari) on Mac, apparently XSLT only works there when accessed through HTTP:
$ python3 -m http.server --directory .
$ open http://localhost:8000/blog.xml
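For reference, the wiring is just a stylesheet processing instruction at the top of the XML file; a minimal sketch (the `blog.xsl` name is hypothetical):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="blog.xsl"?>
<!-- The browser fetches blog.xsl and applies the transform before rendering,
     which is why file:// access often fails and a local HTTP server helps -->
<blog>
  <post title="Hello">...</post>
</blog>
```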
I remember long hours using XSLT to transform custom XML formats into some other representation that was used by WXWindows in the 2000s, maybe I should give it a shot again for Web :)
In my first job, when .NET didn't yet exist, XML + XSLT was the templating engine we used for HTML and (HTML) e-mail and sometimes CSV. I'd write queries in SQL Server using "for xml" and it would output all the data needed for a page and feed it to an XSL template (all server-side) which would output HTML. Microsoft had a caching XSL parser that let such a page load in less than 10ms. Up until we thought "hey, let's start using XML namespaces, that sounds like a good idea!". It was a bit less fun after that!
Looking back it was a pretty good stack, and it would still work fine today imho. I never started disliking it, but after leaving that job I never wrote another stylesheet.
When I was a teenager around 2002, I made what one might call a blogging platform today, and it used ASP, XHTML, XSLT, and XML. It worked well in the browsers of that time. When I look back on it, it depresses me that I didn't realize someone could make money hacking together web applications until like a decade later.
I built an actual shipping product that used this approach over 25 years ago. The server would have the state of every session, that would be serialized to xml, and then xslt templates would be used to render html. Idea was that this would allow customers to customize the visual appearance of the webpages, but xslt was too difficult. Not a success.
I did something like this at an employer a while ago as well. Taking it a step further, we wanted to be able to dynamically build the templates that the browser would then use for building the HTML. The senior dev felt the best way would be to have a "master" XSLT that would then generate the XSLT for the browser. I ended up building the initial implementation and it was a bit of a mind-bender. Fun, but not developer-friendly for sure.
ZjsComponent: A Pragmatic Approach to Modular, Reusable UI Fragments for Web Development
In this paper, I present ZjsComponent, a lightweight and framework-agnostic web component designed for creating modular, reusable UI elements with minimal developer overhead. ZjsComponent is an example implementation of an approach to creating components and object instances that can be used purely from HTML. Unlike traditional approaches to components, the approach implemented by ZjsComponent does not require build-steps, transpiling, pre-compilation, any specific ecosystem or any other dependency. All that is required is that the browser can load and execute Javascript as needed by Web Components. ZjsComponent allows dynamic loading and isolation of HTML+JS fragments, offering developers a simple way to build reusable interfaces with ease. This approach is dependency-free, provides significant DOM and code isolation, and supports simple lifecycle hooks as well as traditional methods expected of an instance of a class.
Please let this come back since I was highly skilled at it and nobody uses it and I am the sads.. since it was a bit functional and a good challenge and was fun. And I would like to be paid to write teh complicated stylesheets again. Thanks
I looked into this a while ago and concluded that it works fine but browsers are making stroppy noises about deprecating it, so ended up running the transform locally to get html5. Disappointing.
Whoa, I just realized that Zope's page templates were basically XSLT that looked slightly different.
This gives me new appreciation for how powerful XSLT is, and how glad I am that I can use almost anything else to get the same end results. Give me Jinja or Mustache any day. Just plain old s-exprs for that matter. Just please don’t ever make me write XML with XML again.
Zope was cool in that you couldn't generate ill-formed markup, and optionally wrapping something in `<a>` didn't need repeating the same condition for `</a>`.
However, it was much simpler imperative language with some macros.
XSLT is more like a set of queries competing to run against a document, and it's easy to make something incomprehensibly complex if you're not careful.
I have created a CMS that supported different building blocks (plugins), each would output its data in XML and supply its XSLT for processing. The CMS called each block, applied the concatenated XSLT and output HTML.
It was novel at the time and really nice and handy to use.
It felt like a great idea at the time, but it was incredibly slow to generate all the HTML pages that way.
Looking back I always assumed it was partly because computers back then were too weak, although reading other comments in this thread it seems like even today people are having performance problems with XSLT.
I used XSLT as a build system for websites way back in 1999–2000. The developer ergonomics were terrible. Looking at the example given, it doesn’t seem like anything much has changed.
Has there been any progress on making this into something developers would actually like to use? As far as I can tell, it’s only ever used in situations where it’s a last resort, such as making Atom/RSS feeds viewable in browsers that don’t support them.
I made a website based on XML documents and XSLT transformations about 20 years ago. I really liked the concept. The infrastructure could have been made much simpler but I guess I wanted to have an excuse to play with these technologies.
After spending months working on my development machine, I deployed the website to my VPS, to realize to my utter dismay that the XSLT module was not activated on the PHP configuration. I had to ask the (small) company to update their PHP installation just for me, which they promptly did.
In the early 2000s, XSLT allowed me as a late teenager with some HTML experience but without real coding skills (I could copy some lines of PHP from various forums and get it to work) to build a somewhat fancy intranet for a local car shop, complete with automatic styling of a feed of car info from a nationwide online sales portal.
Somehow it took me many years, basically until starting uni and taking a proper programming class, before I started feeling like I could realize my ideas in a normal programming language.
XSLT was a kind of tech that allowed a non-coder like me to step by step figure out how to get things to show on the screen.
I think XSLT really has some strong points, in this regard at least.
Sometimes I wish we could have kept XML alive alongside JSON. I miss the comments, CDATA, etc., especially when you have to serialize complex state. I know there are alternatives to JSON like YAML, but I felt XML was better than YAML. We adopted JSON for its simplicity but then tried to retrofit schemas and the other things that made XML complex. We kind of reinvented them with JSON Schema, ending up where XSD was decades ago, and we still lack a good alternative to XSLT.
Let's not romanticize XML. I wrote a whole app that used XSL:T about 25 years ago (it was a military contract and for some reason that required the use of an XML database, don't ask me). Yes it had some advantages over JSON but XSL:T was a total pain to work with at scale. It's a functional language, so you have to get into that mindset first. Then it's actually multiple functional languages composed together, so you have to learn XPath too, which is only a bit more friendly than regular expressions. The language is dominated by hacks working around the fact that it uses XML as its syntax. And there are (were?) no useful debuggers or other tooling. IIRC you didn't even have any equivalent of printf debugging. If you screwed up in some way you just got the wrong output.
Compared to that React is much better. The syntax is much cleaner and more appropriate, you can mix imperative and FP, you have proper debugging and profiling tools, and it supports incremental re-transform so it's actually useful for an interactive UI whereas XSL:T never was so you needed JS anyway.
I just had to explain to some newbies that SOAP is a protocol with rigid rules; REST is an architectural style with flexibility. The latter means that you have to work and document really well and consumers of the API need tools like Postman etc. to be even able to use the API. With SOAP, you get most of that for free.
Postman is just a terrible GUI for making HTTP requests. Using a REST API can be as simple as `curl https://api.github.com/repos/torvalds/linux`, and you can even open that link in a browser. SOAP requires sending a ton of XML [0] - it is not very usable without a dedicated SOAP-aware tool.
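For comparison, a hedged sketch of what the equivalent SOAP request body tends to look like (the envelope namespace is the standard SOAP 1.1 one; the service namespace and operation are made up), typically POSTed with a SOAPAction header:

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetRepository xmlns="http://example.com/repos">
      <Owner>torvalds</Owner>
      <Name>linux</Name>
    </GetRepository>
  </soap:Body>
</soap:Envelope>
```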
I'm old enough to remember when Google released AJAXSLT in 2005. It was a JS implementation of XSLT so that you could consistently use XSLT in the browser.
The funny thing is that the concept of AJAX was fairly new at the time, and so for them it made sense that the future of "fat" web pages (that's the term they use in their doc) was to use AJAX to download XML and transform it. But then people quickly learned that if you could just use JS to generate content, why bother with XML at all?
Back in 2005 I was evaluating some web framework concepts from R&D at the company I worked, and they were still very much in an XML mindset. I remember they created an HTML table widget that loaded XML documents and used XPATH to select content to render in the cells.
XSLT is cool and was quite mind-expanding for me when it came out - I wouldn't say it's "grug brain" level technology at all. An XML language for manipulating XML - can get quite confusing and "meta". I wouldn't pick it as a tool these days.
I know XML and XSLT gets a lot of hate. To some extent, the hate for XSLT is warranted. But I have to work with XML files for my job, and it was pretty refreshing to not have to install any libraries to work with them in a web app. We use XML as the serialization format for a spaceflight mission planning app, so there's a lot of complex data that would be trickier to represent with JSON. At the end of the day, HTML is spicy XML, so you can use all the native web APIs to read/write/query/manipulate XML files and even apply XSLT transformations.
I suspect some of the hate towards XML from the web dev community boils down to it being "old". I'll admit that I used to have the same opinion until I actually started working with it. It's a little bit more of a PITA than working with JSON, but I think I'm getting a much more expressive and powerful serialization format for the cost of the added complexity.
I remember that I did the same in 2005-2006, just combine XML with XSL(T) to let the browser transform the XML into HTML.
After that, also combined XML with XSL(T) with PHP.
At the time it was the modern way of working: separating concerns in the frontend.
Around 2008-2009 I stopped using this method and started using e.g. Smarty.
I still like the idea of using only native browser methods, as described by the W3C.
No frameworks or libraries needed, keep it simple and robust.
I think there are just a few people who know XSL(T) these days, or who need a refresher (like me).
XSLT was many people’s first foray into functional programming (usually unwilling, because their company got a Google Search Appliance or something). I can’t imagine ever reaching for it again personally, but it was useful and somewhat mind-expanding in its heyday.
I made many transformation pipelines with XSLT back in the days, and even a validation engine using Schematron; it was one of the most pleasant experience I had.
It never broke, ever.
It could have bugs, of course! -- but only "programmer bugs" (behavior coded in a certain way that should have been coded in another); it never suddenly stopped working for no reason like everything does nowadays.
I love XSLT, that is what I ported my site to after the CGI phase.
Unfortunately it is not a sentiment that is shared by many, and many developers always had issues understanding the FP approach of its design, looking beyond the XML.
25 years later we have the JSON and YAML formats reinventing the wheel, mostly badly, for what we already had nicely available in the XML ecosystem.
XML is tooling-based, and there have been plenty of tools to write XSLT in, including debugging and processing example fragments; naturally not something the vi crowd ever became aware of amid their complaints.
Agree, when MS moved their office file formats to xml, I made plenty of money building extremely customizable templating engines all based on a very small amount of XSLT - it worked great given all the structure and metadata available in xml
Does anybody remember Cocoon? It was an XSLT Web Framework that built upon Spring. It was pretty neat, you could do the stuff XSLT was great at with stylesheets that were mapped to HTTP routes, and it was very easy to extend it with custom functions and supporting Java code to do the stuff it wasn't really great at. Though I must say that as the XSLT stylesheets grew in complexity, they got *really* hard to understand, especially compared to something like a Jinja template.
Yes! In the mid 00's, two places I worked (major US universities) used Cocoon heavily. It was a good fit for reporting systems that had to generate multiple output formats, such as HTML and PDF.
Early in my career I worked on a carrier's mobile internet portal in the days before smartphones. It was XSLT all the way down, including individual XSLT transforms for every single component the CMS had for every single handset we supported (hundreds) as they all had different capabilities and browser bugs. It was not that fun to write complex logic in haha but was kind of an interesting thing to work on, before iPhone etc came along and everything could just render normal websites.
Same. I was part of the mobile media messaging (WAP) roll-out at Vodafone. Oh man, XSLT was one of those "theoretical" W3C languages that (rightfully) aged like milk. Never again.
i have a static website with a menu. keeping the menu synchronized over the half dozen pages is a pain.
my only option to fix this are javascript, xslt or a server side html generator. (and before you ask, static site generators are no better, they just make the generation part manual instead of automatic.)
i don't actually care if the site is static. i only care that maintenance is simple.
build tools are not simple. they tend to suffer from bitrot because they are not bundled with the hosting of the site or the site content.
server side html generators (aka content management systems, etc.) are large and tie me to a particular platform.
frontend frameworks by default require a build step and of course need javascript in the browser. some frameworks can be included without build tools, and that's better, but also overkill for large sites. and of course then you are tied to the framework.
another option is writing custom javascript code to include an html snippet from another file.
or maybe i can try to rig include with xslt. will that shut up the people who want to view my site without javascript?
at some point there was discussion for html include, but it has been dropped. why?
> i have a static website with a menu. keeping the menu synchronized over the half dozen pages is a pain
You can totally do that with PHP? It can find all the pages, generate the menu, transform markdown to html for the current page, all on the fly in one go, and it feels instantaneous. If you experience some level of traffic you can put a CDN in front but usually it's not even necessary.
that's the server side html generator i already mentioned. ok, this one is not large, but it still ties me to a limited set of server platforms that support running php. and if i have to write code i may as well write javascript and get a platform independent solution.
the point is, none of the solutions are completely satisfactory. every approach has its downsides. but most critically, all this complaining about people picking the wrong solution is just bickering that my chosen solution does not align with their preference.
my preferred solution btw is to take a build-less frontend framework, and build my site with that. i did that with aurelia, and recently built a proof of concept with react.
You didn't actually indicate a downside to using xslt, and yes it would fit your use case of a static include for a shared menu, though the better way to do it is to move all of the shared pieces of your site into the template and then each page is just its content. Sort of like using a shared CSS file.
To just do the menu, if your site is xhtml, IIRC you could link to the template, use a <my-menu> in the page, and then the template just gives a rule to expand that to your menu.
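A minimal sketch of what the parent describes, assuming the pages are plain (un-namespaced) XML served with an <?xml-stylesheet type="text/xsl" href="site.xsl"?> processing instruction; the <my-menu> placeholder and the menu entries are made up:

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- identity transform: copy the whole page through unchanged -->
      <xsl:template match="@*|node()">
        <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
      </xsl:template>
      <!-- expand the placeholder element into the shared menu -->
      <xsl:template match="my-menu">
        <ul class="menu">
          <li><a href="index.xml">Home</a></li>
          <li><a href="posts.xml">Posts</a></li>
          <li><a href="about.xml">About</a></li>
        </ul>
      </xsl:template>
    </xsl:stylesheet>

Each page then carries only its own content plus the one-line placeholder, and the menu lives in exactly one file.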
the downside to xslt is xslt itself, and the lack of maintenance of xslt support in the browser. (browsers only support xslt 1.0 and it looks like even that may be dropped in the future, making its use not futureproof without server side support)
I recently tried building a website using Server Side Includes (SSI) with apache/nginx to make templates for the head, header and footer. Then I found myself missing the way Hugo does things, using a base template and injecting the content into the base template instead.
This was easy to achieve with PHP with a super minimal setup, so I thought, why not? Still no build steps!
PHP is quite ubiquitous and stable these days so it is practically equivalent to making a static site. Just a few sprinkles of dynamism to avoid repeating HTML all over the place.
I had done a couple of nontrivial projects with XSLT at the time, and the problem with it is its lack of good mnemonics, discoverability from source code, and other ergonomics, coupled with the fact that it's only used rarely, so you find yourself basically relearning it after having not used it for a couple of weeks. Template specificity matching is a particularly bad idea under those circumstances.
XSLT technically would make sense the more you're using large amounts of boilerplate XML literals in your template, because it uses XML itself as language syntax. But even though it uses XML as language meta-syntax, it has lots of microsyntax, i.e. XPath, variables, parameters, that you need to cram into XML attributes with the usual quoting restrictions and lack of syntax highlighting. There's really nothing in XSLT that couldn't be implemented better using a general-purpose language with proper testing and library infrastructure such as Prolog/Datalog (in fact, DSSSL, XSLT's close predecessor for templating full SGML/HTML and not just the XML subset, was based on Scheme) or just, you know, vanilla JavaScript, which was introduced for DOM manipulation.
Note that maintenance of libxml2/libxslt is currently understaffed [1], and it's a miracle to me that XSLT (version 1.0 from 1999) still ships as a native implementation in browsers, unlike e.g. PDF (which gets PDF.js instead).
My first intranet job early 2000s reporting was done this way. You could query a db via asp to get some xml, then transform using xslt and get a big html report you could print. I got pretty good at xslt.
Nowadays I steer towards a reporting system for reports, but for other scenarios you're typically using one of the stacks he mentioned: JSON or md + angular/vue/react/next/nuxt/etc
I’ve kinda gotten to a point and curious if others feel same: it’s all just strings. You get some strings from somewhere, write some more strings to make those strings show other strings to the browser. Sometimes the strings reference non strings for things like video/audio/image. But even those get sent over network with strings in the http header. Sometimes people have strong feelings about their favorite strings, and there are pros and cons to various strings. Some ways let you write less strings to do more. Some are faster. Some have angle brackets, some have curly brackets, some have none at all! But at the end of the day- it’s just strings.
Just my two cents - the worst pieces of tech I ever worked with in my 40+ year career were Hibernate (second) and XSLT templating for an email templating system around 2005. Would not touch it with a stick if I can avoid it.
I remember Blizzard actually using this concept for their battle.net site like, 10 years ago. I found it always really cool, but at some point I think they replaced it with a "regular" SPA stack.
I think one big problem with popularizing that approach is that XSLT as a language frankly sucks. As an architecture component, it's absolutely the right idea, but as long as actually developing in it is a world of pain, I don't see how people would have any incentive to adopt it.
The tragic thing is that there are other pure-functional XML transformation languages that are really well-designed - like XQuery. But there is no browser that supports those.
My favorite thing about XQuery is that it supports logically named functions, not just templates that happen to work upon whatever one provides it as with XSLT. I think golang's text/template suffers from the same problem - good luck being disciplined enough to always give it the right context, or you get bad outcomes
An example I had lying around:
declare function local:find-outline-num( $from as element(), $num as xs:integer ) as element()* {
for $el in $from/following-sibling::h:div[@class=concat('outline-', $num)]/*[local-name()=concat('h', $num)]
return $el
};
> [...] the idea of building a website like this in XML and then transforming it using XSL is absurd in and of itself [...]
In the comments the creators themselves acknowledge problems with it, e.g. that it was a mess to debug. But I could not find anything wrong with the technique itself, assuming that it works.
There are 2 main problems with XSLT.
The first one is that manipulating strings is a pain. Splitting strings, concatenating them is verbose like hell and difficult to read.
The second one is that it quickly becomes a mess when you use the "priority" attribute to overload functions.
I compare XSLT to regular expressions, with great flexibility but impossible to maintain due to poor readability. To my knowledge, it's impossible to trace.
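To illustrate the first point: splitting a comma-separated string in XSLT 1.0 means writing a recursive named template along these lines (a sketch; the template and output element names are made up):

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- recursively split a comma-separated string into <item> elements -->
      <xsl:template name="split">
        <xsl:param name="text"/>
        <xsl:choose>
          <xsl:when test="contains($text, ',')">
            <item><xsl:value-of select="normalize-space(substring-before($text, ','))"/></item>
            <xsl:call-template name="split">
              <xsl:with-param name="text" select="substring-after($text, ',')"/>
            </xsl:call-template>
          </xsl:when>
          <xsl:otherwise>
            <item><xsl:value-of select="normalize-space($text)"/></item>
          </xsl:otherwise>
        </xsl:choose>
      </xsl:template>
    </xsl:stylesheet>

XSLT 2.0 added tokenize(), which collapses all of that into a single function call.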
My first resume was in XSLT, because I didn't want to duplicate HTML tags and style around, it worked really well, and it was fun to see the xml first when clicking "view source".
You don't even need XML anymore to do XML, "thanks" to iXML where you can provide a grammar of any language and have that work as if you are working with XML. Not saying that is a good idea though.
I’m disappointed that this uses a custom XML format, rather than RSS (tolerable) or Atom (better). Then you could just drop it into a feed reader fine.
At the time, I strongly considered making the next iteration of my website serve all blog stuff as Atom documents—post lists as feeds, and individual pages as entries. In the end, I’ve decided to head in a completely different direction (involving a lot of handwriting!), but I don’t think the idea is bad.
XML needs a renaissance because it solves problems modern formats still fumble with. Robust schema validation, namespaces, mixed content, and powerful tooling like XPath/XSLT. It's verbose, yes. It can be made to look like shit and make you wanna throw up, but it's also battle-tested and structured for complexity. We ditched it too soon chasing simplicity.
Presumably part of the goal is to implicitly claim that what's being described is so simple a caveman could understand it. But writing such a post about XSLT is like satire. Next up, grug brain article about the Coq proof assistant?
Many, many years back I used Symphony21[0] for an events website. Its whole premise was: build an XML structure via blueprints, and then your theme is just XSLT templates for pages.
Gave it up because it turns out the little things are just a pain. Formatting dates, showing article numbers and counts etc.
I use XSLT to generate a markdown README from a Zotero export XML file. It works well, but some simple things become much harder - sorting, counting, uniqueness.
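For reference, those three in XSLT 1.0 look roughly like this (a sketch; the library/item structure and attributes are made up). Sorting and counting are built in; uniqueness needs the key/generate-id() trick:

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:key name="by-tag" match="item" use="@tag"/>
      <xsl:template match="/library">
        <!-- sorting -->
        <xsl:for-each select="item">
          <xsl:sort select="@title"/>
          <xsl:value-of select="@title"/>
        </xsl:for-each>
        <!-- counting -->
        <xsl:value-of select="count(item)"/>
        <!-- uniqueness: one hit per distinct @tag (Muenchian grouping) -->
        <xsl:for-each select="item[generate-id() = generate-id(key('by-tag', @tag)[1])]">
          <xsl:value-of select="@tag"/>
        </xsl:for-each>
      </xsl:template>
    </xsl:stylesheet>

Workable, but it is easy to see why none of this feels simple compared to a sort call or a set in a general-purpose language.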
Still maintaining an e-commerce site using XML/xslt and Java/servlet... It passed easily through each wave of tech and survived two database migrations (mainframe/db2 => sqlserver => ERP)
I have last used XSLT probably about 2 decades ago. Back then XML was king. Companies were transferring data almost always using XML and translating it to a visual web-friendly format with XSLT was pretty neat. Cool tech and very impressive.
clutter? i find it MUCH more elegant and simple, both conceptually and practically, than the absolute clown-car of the modern js driven web, css framework hacks, etc etc
XSLT is probably the #1 reason people get turned off from XML and swear it off as a mistaken technology. I actually quite like XML, so I have been trying lately to tease out exactly what it is that makes XSLT a mistake.
XML is a semi-structured format, which (apart from & < >) includes plain text as a more or less degenerate case. I don't think we have any other realistic format for marking up plain text with arbitrary semantics. You can have, for example, a recipe format with <ingredient> as part of its schema, and it's trivial to write an Xpath to pull out all the <ingredient>s (to put them in your shopping list, or whatever).
Obviously, XSLT is code. Nobody denies this really. One thing about code is that it's inherently structured. Only the craziest of literate programmers would try to embed executable code inside of text. But I don't think that's the biggest problem. Code is special in that special purpose programming languages always leak outside the domain they're designed for. If you try and write a little language that's really well-scoped to transforming XML, you are definitely going to want to call stuff outside it sooner or later.
Combined with the fact that there really isn't any value in ever parsing or processing a stylesheet, it seems like it was doomed never to pan out.
xslt does one thing cleanly: walks trees on tree input. both data and layout stay in structured memory. no random jumps. browser-native xslt eval can hit perf spots most json-to-dom libs might miss. memory layout was aligned by design. we dropped it too early just cuz xml got unpopular
Long time ago somebody wanted to put a searchable directory of products on a CD. It was maybe 100MB. There was no sqlite back then and the best browser you could count on your client having was probably IE 5.5
JS was waay too slow, but it turned out that even back then XSLT was blazing fast. So I basically generated XML with all the data, wrote a simple XSLT with one clever XPath that generated search input form, did the search and displayed the results, slapped the xml file in CD auto-run and called it a day. It was finding results in a second or less. One of my best hacks ever.
Since then I always wanted to make a html templating system that compiles to XSLT and does the HTML generation on client side. I wrote some, but back then Firefox didn't support displaying XML+XSLT directly and the workaround I came up with I didn't like. Then the AJAX came and then JS got faster and client side rendering with JS became viable. But I still think it's a good idea, to send just dynamic XMLs with static XSLTs preloaded and cached, if we ever want to come back to purely server driven request-response flow. Especially if binary format for XML catches on.
XSLT is great fun as a general functional programming language! You can build native functional data-structures[1], implement graph-traversal algorithms[2], and even write test assertions[3]!
Internet Explorer also had the ability to render XML directly into HTML tables without using any JS using the datasrc attribute. I had to deal with this nonsense early in my career in the early 2000s, along with people regularly complaining that it did not work in Firefox.
Abandoning XML tech was, is, and forever will be the web's biggest mistake. The past 20 years have been just fumbling about, trying to implement things that it would have provided easily.
>HTML Components (HTCs) are a legacy technology used to implement components in script as Dynamic HTML (DHTML) "behaviors" in the Microsoft Internet Explorer web browser. Such files typically use an .htc extension and the "text/x-component" MIME type.
JavaScript Pie Menus, using Internet Explorer "HTC" components, xsl, and xml:
If that wasn't obsolete enough, here is the "ConnectedTV Skin Editor". It was a set of HTC components, XML, and XML Schemas, and a schema driven wysiwyg skin editor for ConnectedTV: a Palm Pilot app that turned your Palm into a personalized TV guide + smart remote.
Full fresh lineup of national and local broadcast + TiVo + Dish TV guides with customized channel groups, channel and show filtering and favorites, hot sync your custom tv guide with just the shows you watch, weeks worth of schedules you could download and hot sync nightly with the latest guide updates.
Integrated with a trainable consumer IR remote controller with custom touch screen user interfaces (with 5-function "finger pie menus" that let you easily tap or stroke up/down/left/right to stack up multiple gesture controls on each button (conveniently opposite and orthogonal for volume up/down, channel next/previous, page next/previous, time forward/back, show next/previous, mute/unmute, favorite/ignore, etc.) -- finger pies are perfect for the kind of opposite and directionally oriented commands on remote controls, and you need a lot fewer 5-way buttons than single purpose physical buttons on normal remotes, so you could pack a huge amount of functionality into one screen, or have any number of less dense screens, customized for just the devices you have and features you use). Goodbye TiVo Monolith Monster remote controls, since only a few of the buttons were actually useful, and ConnectedTV could put 5x the number of functions per gesture-activated finger pie menu button.
The skin editor let you make custom user interfaces by wysiwyg laying out and editing out any number of buttons however you liked and bind tap/left/right/up/down page navigation, tv guide time and channel and category navigation, sending ir commands to change the channel (sends multi digits per tap on station or show so you can forget the numbers), volume, mute, rewind/skip tivo, etc.
Also you could use finger pies easily and reliably on the couch in a dark room with your finger instead of the stylus. Users tended to lose their Palm stylus in the couch cushions (which you sure don't wanna go fishing around for if JD Vance has been visiting) while eating popcorn and doing bong hits and watching tv and patting the dog and listening to music and playing video games in their media cave, so non-stylus finger gesture control was crucial.
Finger pies were like iPhone swipe gestures, but years earlier and much cheaper (you could get a low-end Palm for dirt cheap and dedicate it to the tv). And self-revealing (prompting with labels, giving feedback (with nice clicky sounds), and training you to use the gestures efficiently) instead of invisible mysterious iPhone gestures you have to discover and figure out without visual affordances. After filtering out all the stuff you never watch and favoriting the ones you do, it was much easier to find just the shows you like and what was on right now.
More on the origin of the term "Finger Pie" for Beatles fans (but I digress ;) :
It was really nice to have the TV guide NOT on the TV screen taking you away from watching the current show, and NOT to have to wait 10 minutes while it slowly scrolled the two visible rows through 247 channels to finally see the channel you wanted to watch (by which time you'd have missed a lot of the show, but been offered lots of useless shit and psychic advice to purchase from an 800 number with your credit card!).
Kids these days don't remember how horrible and annoying those slow scrolling TV guides with ads for tele-psychics and sham wows and exercise machines used to be.
I can objectively say that it was much better than the infamous ad laden TV Guide Scroll:
Using those slow scrolling non-interactive TV guides with obnoxious ads was so painful that you needed to apply HEAD ON directly to the forehead again and again and again to ease the pain.
You could use the skin editor to create your own control panels and buttons for whatever TV, TiVO, DVR, HiFi, Amplifier, CD, DVD, etc players you wanted to use together. And we had some nice color hires skins for the beautiful silver folding Sony Clie.
It was also nice to be able to curate and capture just the buttons you wanted for the devices that you actually use together, and put them all onto one page, or factor them out into different pages per device. You could ignore the 3 digit channel number and never peck numbers again, just stroke up on your favorite shows to switch the channel automatically.
We ran out of money because it was so expensive to license the nightly feed of TV guide (downloading a huge sql dump every night of the latest schedules as they got updated), and because all of our competitors were just stealing their data by scraping it from TV guide web sites instead of licensing it legally. (We didn't have Uber or OpenAI to look up to for edgy legal business practice inspiration.)
Oh well, it was fun while it lasted, during the days that everybody was carrying a Palm Pilot around beaming their contacts back and forth with IR. What a time that was, right before and after 9/11 2001. I remember somebody pointedly commented that building a Palm app at that time in history was kind of like opening a flower shop at the base of the World Trade Center. ;(
Blast from the past. I actually used XSLT quite a bit in the early 00s. Eventually I think everyone figured out XML is an ugly way to write S-expressions.
What is needed more now is YAML, especially visualization of the YAML formats that k8s supports by default. Conversely, in the devops community people need to generate YAML through HTML to run CI/CD. For example, this tool shows that: k8s-generator.vercel.app
Features are now available like key (index) to greatly speedup the processing. Good XSLT implementation like Saxon definitively helps as well on the perf aspect.
When it comes to transform XML to something else, XSLT is quite handy by structuring the logic.
XSLT 2+ was more about side effects.
I never really grokked later XSLT and XPath standards though.
XSLT 1.0 had a steep learning curve, but it was elegant in a way poetry is elegant because of extra restrictions imposed on it compared to prose. You really had to stretch your mind to do useful stuff with it. Anyone remembers Muenchian grouping? It was gorgeous.
Newer standards lost elegance and kept the ugly syntax.
No wonder they lost mindshare.
My biggest problem with XSLT is that I've never encountered a problem that I wouldn't rather solve with an XPath library and literally any other general purpose programming language.
When XSLT was the only thing with XPath you could rely on, maybe it had an edge, but once everyone has an XPath library what's left is a very quirky and restrictive language that I really don't like. And I speak Haskell, so the critic reaching for the reply button can take a pass on the "Oh you must not like functional programming" routine... no, Haskell is included in that set of "literally any other general purpose programming language" above.
XML (the data structure) needs a non-XML serialization.
Similar to how Semantic Web's Owl has four different serializations, only one of them being the XML serialization. (eg. Owl can be represented in Functional, Turtle, Manchester, Json, and N-triples syntaxes.)
That's YAML, and it is arguably worse. Here's a sample YAML 1.2 document straight from their spec:
Nightmare fuel. Just by looking at it, can you tell what it does?
Some notes:
- SemWeb also has JSON-LD serialization. It's a good compromise that fits modern tooling nicely.
- XML is still a damn good compromise between human readable and machine readable. Not perfect, but what is perfect anyway?
- HTML5 is now more complex than XHTML ever was (all sorts of historical caveats in this claim, I know, don't worry).
- Markup beauty is relative, we should accept that.
- Xee: https://github.com/Paligo/xee
- xrust: https://docs.rs/xrust/latest/xrust/xslt/
- XJSLT (compiles XSLT to JS): https://github.com/egh/xjslt
It looks great, then you design your stuff and it goes great, then you deploy to the real world and everything catches on fire instantly and everytime you stop one another one starts.
Generally speaking I feel like this is true for a lot of stuff in programming circles, XML included.
New technology appears, some people play around with it. Others come up with using it for something else. Give it some time, and eventually people start putting it everywhere. Soon "X is not for Y" blogposts appear, and usage finally starts to decrease as people rediscover "use the right tool for the right problem". Wait yet some more time, and a new technology appears, and the same cycle begins again.
Seen it with so many things by now that I think "we'll" (the software community) forever be stuck in this cycle and the only way to win is to explicitly jump out of the cycle and watch it from afar, pick up the pieces that actually make sense to continue using and ignore the rest.
We eventually said, "what if we made databases based on JSON" and then came MongoDB. Worse performance than a relational database, but who cares! It's JSON! People have mostly moved away from document databases, but that's because they realized it was a bad idea for the majority of use cases.
I think the only part left out is people believing in the currently hyped way "because this time it's right!" or whatever they claim. Kind of like how TypeScript people always appear when you say that TypeScript is currently one of those hyped things and will eventually be overshadowed by something else, just like the other languages before it; then, soon enough, someone will share why TypeScript happens to be different.
https://en.wikipedia.org/wiki/XML_appliance
E.g.
https://www.serverwatch.com/hardware/power-up-xml-data-proce...
i don't believe this is true. machine language doesn't need the kind of verbosity that xml provides. sgml/html/xml were designed to allow humans to produce machine readable data. so they were meant for humans to talk to machines and vice versa.
I think part of the problem is focusing on the wrong aspect. In the case of XSLT, I'd argue its most important properties are being pure, declarative, and extensible. Those can have knock-on effects, like enabling parallel processing, untrusted input, static analysis, etc. The fact it's written in XML is less important.
Its biggest competitor is JS, which might have nicer syntax but it loses those core features of being pure and declarative (we can implement pure/declarative things inside JS if we like, but requiring a JS interpreter at all is bad news for parallelism, security, static analysis, etc.).
When fashions change (e.g. XML giving way to JS, and JSON), we can end up throwing out good ideas (like a standard way to declare pure data transformations).
(Of course, there's another layer to this, since XML itself was a more fashionable alternative to S-expressions; and XSLT is sort of like Lisp macros. Everything old is new again...)
I'm pretty sure that's because implementing XSLT 2.0 needs a proprietary library (Saxon XSLT[0]). It was certainly the case in the oughts, when I was working with XSLT (I still wake up screaming).
XSLT 1.0 was pretty much worthless. I found that I needed XSLT 2.0, to get what I wanted. I think they are up to XSLT 3.0.
[0] https://en.wikipedia.org/wiki/Saxon_XSLT
Streaming was not supported until later versions.
How, where? In 2013 I was still working a lot with XSLT and 1.0 was completely dead everywhere one looked. Saxon was free for XSLT 2 and was excellent.
I used to do transformation of both huge documents, and large number of small documents, with zero performance problems.
Obviously, that means there's a lot of legacy processes likely still using it.
The easiest way to improve the situation seems to be to upgrade to a newer version of XSLT.
In the early days the xsl was all interpreted. And was slow. From ~2004 or so, all the xslt engines came to be jit compiled. XSL benchmarks used to be a thing, but rapidly declined in value from then onward because the perf differences just stopped mattering.
But in the end the core problem is XSLT, the language. Despite being a complete programming language, your options are very limited for resolving performance issues when working within the language.
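The one real lever inside the language is xsl:key, which replaces a repeated O(N) scan with an indexed lookup; a minimal sketch (the order/line elements and attributes are made up):

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- index every <order> element by its id attribute -->
      <xsl:key name="order-by-id" match="order" use="@id"/>
      <!-- without the key, this cross-reference would rescan all orders for each line -->
      <xsl:template match="line">
        <xsl:value-of select="key('order-by-id', @order-id)/@customer"/>
      </xsl:template>
    </xsl:stylesheet>

Beyond that, you are mostly restructuring templates and hoping the processor is smart.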
I worked with a guy who knew all about complexity analysis, but was quick to assert that "n is always small". That didn't hold - but he'd left the team by the time this became apparent.
I've seen a couple of blue-chip websites that could be completely taken down just by requesting the sitemap (more than once per minute).
PS: That being said, it is an implementation issue. But it may speak for itself that 100% of the XSLT projects I've seen had it.
Anyway.
Paco Grug talks about how they want a website (e.g. a blog) without a server-side build-step. Just data, shape of data, and the building happening automagically, this time on the client. HTML has javascript and frames for that, but HTML painfully lacks transclusion, for header menu, sidebar and footer, which birthed myriads of web servers and webserver technologies.
It seems that .xml can do it too, e.g. transclusion and probably more. The repo doesn't really showcase it.
Anyway, I downloaded the repo and ran it on a local webserver, and it works. It also works with javascript disabled, on an old browser. Nice technology, maybe it is possible to use it for something useful (in a very specific niche). For most other things javascript/build-step/dynamic webserver is better.
Also, I think that for a blog you'll want the posts in separate files, and you can't just dump them in a folder and expect that the browser will find them. You'll need a webserver/build-step/javascript for that.
1. the browsers were inconsistent in 1990-2000 so we started using JS to make them behave the same
2. meanwhile the only thing we needed were good CSS styles which were not yet present and consistent behaviour
3. over the years the browsers started behaving the same (mainly because Highlander rules - there can be only one, but Firefox is also coping well)
4. but we had already gotten used to having frameworks that would make the pages look the same on all browsers. Also, the paradigm switched to rendering JSON data
5. with current technology we could cope with server-generated old-school web pages because they would have a low footprint, work faster and require less memory.
Why do I say that? Recently we started working on a migration from a legacy system. Looks like 2000s standard page per HTTP request. Every action like add remove etc. requires a http refresh. However it works much faster than our react system. Because:
1. Nowadays the internet is much faster
2. Phones have a lot of memory which is wasted by js frameworks
3. in the backend all's almost same old story - CRUD CRUD and CRUD (+ pagination, + transactions)
It works well here on HN for example as it is quite simple.
There are a lot of other examples where people most likely should do a simple website instead of using JS framework.
But "we could all go back to full page reloads" is not true, as there really are proper "web applications" out there for which full page reloads would be a terrible UX.
To summarize there are:
"websites", "web documents", "web forms" that mostly could get away with full page reloads
"web applications" that need complex stuff presented and manipulated while full page reload would not be a good solution
Let's face it, most uses of JS frameworks are for blogs or things where you would not even notice a full page reload: nowadays browsers are advanced and only redraw the screen when they have finished loading the content, meaning that out of the box they mostly do what React does (only re-render the DOM elements that changed), and meaning that a reload of a page that only changes one button at the UI level does not result in a flicker or a visible reload of the whole page.
BTW, even React now suggests running the code server-side if possible (it's the default in Next.js), since it makes the project easier to maintain, debug and test, as well as getting a better SEO score from search engines.
I'm still a fan of the "old" MVC models of classical frameworks such as Laravel, Django, Rails, etc. to me make overall projects that are easier to maintain for the fact that all code runs in the backend (except maybe some jQuery animation client side), model is well separated from the view, there is no API to maintain, etc.
grug remember ancestor used frames
then UX shaman said frame bad all sour faced frame ugly they said, multiple scrollbar bad
then 20 years later people use fancy js to emulate frames grug remember ancestor was right
https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...
Even with these problems, classic frames might have been salvageable, but nobody bothered to fix them.
Most frames are used for a menu, navigation, a frame for data, and a frame for additional information about that data. And they are great for that. I don't think that frames are different instances of the browser engine(?), but that doesn't matter in the slightest(?). They are fast and lightweight.
> The header/footer/sidebar frames are subordinate and should not navigate freely.
They have the ability to navigate freely but obviously they don't do that, they navigate different frames.
History doesn't work right
Bookmarks don't work right -- this applies to link sharing and incoming links too
Back button doesn't work right
The concept is good. The implementation is bad.
https://pubs.opengroup.org/onlinepubs/9799919799/
They can navigate targeting any other frame. For example, clicking "System Interfaces" updates the bottom-left navigation menu, while keeping the state of the main document frame.
It's quite simple, it just uses the `target` attribute (target="_blank" remains popular as a vestigial limb of this whole approach).
This also worked with multiple windows (yes, there were multi-window websites that could present interactions that handled multiple windows).
The popular iframe is sort of salvaged from frame tech, it is still used extensively and not deprecatred.
Classic frames are simple. Too simple. Your link goes to the default state of that frameset. Can you link me any non-default state? Can I share a link to my current state with you?
Then again, it was a long time ago. Maybe I'm misremembering.
This was maybe 2008?
jQuery took over very quickly though for all of those.
Almost sure it was available on IE6. But even if not, you could emulate it using hidden iframes to call pages which embedded some javascript interacting with the main page. I still have fond memories of using mootools for lightweight nice animations and less fond ones of dojo.
Kuro5hin had a dynamic commenting system based on iframes like you describe.
Internet Explorer didn’t support DOM events, so addEventListener wasn’t cross-browser compatible. A lot of people put work in to come up with an addEvent that worked consistently cross-browser.
The DOMContentLoaded event didn’t exist, only the load event. The load event wasn’t really suitable for setting up things like event handlers because it would wait until all external resources like images had been loaded too, which was a significant delay during which time the user could be interacting with the page. Getting JavaScript to run consistently after the DOM was available, but without waiting for images was a bit tricky.
These kinds of things were iterated on in a series of blog posts from several different web developers. One blogger would publish one solution, people would find shortcomings with it, then another blogger would publish a version that fixed some things, and so on.
This is an example of the kind of thing that was happening, and you’ll note that it refers to work on this going back to 2001:
https://robertnyman.com/2006/08/30/event-handling-in-javascr...
When jQuery came along, it was really trying to achieve two things: firstly, incorporating things like this to help browser compatibility; and second, to provide a “fluent” API where you could chain API calls together.
2002, I was using “JSRS”, and returning http 204/no content, which causes the browser to NOT refresh/load the page.
Just for small interactive things, like a start/pause button for scheduled tasks. The progress bar etc.
But yeah, in my opinion we lost about 15 years of proper progress.
The network is the computer came true
The SUN/JEE model is great.
It’s just that monopolies stifle progress and better standards.
Standards are pretty much dead, and everything is at the application layer.
That said.. I think XSLT sucks, although I haven’t touched it in almost 20 years. The projects I was on, there was this designer/xslt guru. He could do anything with it.
XPath is quite nice though
Internet Explorer 6 was released in 2001 and didn’t drop below 3% worldwide until 2015. So that’s a solid 14 years of paralysis in browser compatibility.
DOM manipulation of that sort is JS dependent, of course, but I think considering language features and the environment, like the DOM, to be separate-but-related concerns is valid. There were less kitchen-sink-y libraries that only concentrated on language features or specific DOM features. Some may even consider a few parts in a third section: the standard library, though that feature set might be rather small (not much more than the XMLHTTPRequest replacement/wrappers?) to consider its own thing.
> For stuff which didn't need JS at all, there also shouldn't be much need for JQuery.
That much is mostly true, as it by default didn't do anything to change non-scripted pages. Some polyfills for static HTML (for features that were inconsistent, or missing entirely in, usually, old-IE) were implemented as jQuery plugins though.
--------
[1] Though I don't think they were called that back then, the term coming later IIRC.
[2] Method chaining [3], better built-in searching and filtering functions [4], and so forth.
[3] This divides opinions a bit though was generally popular, some other libraries did the same, others tried different approaches.
[4] Which we ended up coding repeatedly in slightly different ways when needed otherwise.
HTML was the original standard, not JS. HTML was evolving early on, but the web was much more standard than it is today.
Early-mid 1990s web was awesome. HTML served over HTTP, and pages used header tags, text, hr, then some background color variation and images. CGI in a cgi-bin dir was used for server-side functionality, often written in Perl or C: https://en.m.wikipedia.org/wiki/Common_Gateway_Interface
Back then, if you learned a little HTML, you could serve up audio, animated gifs, and links to files, or Apache could just list files in directories to browse like a fileserver without any search. People might get a friend to let them have access to their server and put content up in it or university, etc. You might be on a server where they had a cgi-bin script or two to email people or save/retrieve from a database, etc. There was also a mailto in addition to href for the a (anchor) tag for hyperlinks so you could just put you email address there.
Then a ton of new things were appearing. PHP on the server-side. JavaScript came out but wasn't used much except for a couple of party tricks. ColdFusion on the server-side. Around the same time there was VBScript, which was nice but just for IE/Windows, but it was big. Perl and then PHP were also big on the server-side. If you installed Java you could use Applets, which were neat little applications on the page. Java Web Server came out server-side and there were JSPs. Java Tomcat came out on the server-side. ActionScript came out to basically replace VBScript but do it on the server-side with ASPs. VBScript support went away.
During this whole time, JavaScript had just evolved into more party tricks and things like form validation. It was fun, but it was PHP, ASP, JSP/Struts/etc. server-side in the early 2000s, with Rails coming out and ColdFusion going away mostly. Facebook was PHP mid-2000s, and the LAMP stack, etc. People were breaking up images using tables, and CSS was coming out with slow adoption. It wasn't until the mid-to-late 2000s that JavaScript started being used much for UI, with Google's fostering of it and development of V8, because it was slow and not taken seriously before then. And when it finally got big, there was an awful several years of framework-after-framework super-JavaScript ADHD, which drove a lot of developers to leave web development, because of the move from server-side to client-side, along with NoSQL DBs; seemingly stupid things were happening like client-side credential storage, ignoring ACID for data, etc.
So- all that to say, it wasn’t until 2007-2011 before JS took off.
I've got a .NET/Kestrel/SQLite stack that can crank out SSR responses in no more than ~4 milliseconds. Average response time is measured in hundreds of microseconds when running release builds. This is with multiple queries per page, many using complex joins to compose view-specific response shapes. Getting the data in the right shape before interpolating HTML strings can really help with performance in some of those edges like building a table with 100k rows. LINQ is fast, but approaches like materializing a collection per row can get super expensive as the # of items grows.
The closer together you can get the HTML templating engine and the database, the better things will go in my experience. At the end of the day, all of that fancy structured DOM is just a stream of bytes that needs to be fed to the client. Worrying about elaborate AST/parser approaches when you could just use StringBuilder and clever SQL queries has created an entire pointless, self-serving industry. The only arguments I've ever heard against using something approximating this boil down to arrogant security hall monitors who think developers can't be trusted to use the HTML escape function properly.
Unfortunately, they're not actually wrong though :-(
Still, there are ways to enforce escaping (like preventing "stringly typed" programming) which work perfectly well with streams of bytes, and don't impose any runtime overhead (e.g. equivalent to Haskell's `newtype`)
unless you have a high latency internet connection: https://news.ycombinator.com/item?id=44326816
-- edit --
by the way, in 2005 I programmed using a very funny PHP framework, PRADO, that sent every change in the UI to the server. Boy, it was slow and server-heavy. This was a direction we should never have gone...
not a good example. i can't find it now, but there was a story/comment about a realtor app that people used to sell houses. often when they were out with a potential buyer they had bad internet access and loading new data and pictures for houses was a pain. it wasn't until they switched to using a frontend framework to preload everything with the occasional updates that the app became usable.
high latency affects any interaction with a site. even hackernews is a pain to read over a high-latency connection and would improve if new comments were loaded in the background. the problem creeps up on you faster than you think.
It can also be imposed by the client, e.g. via a https://en.wikipedia.org/wiki/Web_accelerator
I'm probably guilty of some of the bad practice: I have fond memories of (ab)using XSLT includes back in the day with PHP stream wrappers to have stuff like `<xsl:include href="mycorp://invoice/1234">`
This may be out-of-date bias, but I'm still a little uneasy letting the browser do the transform locally, just because it used to be a minefield of incompatibility
Last thing I really did with XML was a technology called EXI, a transfer method that converted an XML document into a compressed binary data stream. Because translating a data structure to ASCII, compressing it, sending it over HTTP etc and doing the same thing in reverse is a bit silly. At this point protobuf and co are more popular, but imagine if XML stayed around. It's all compatible standards working with each other (in my idealized mind), whereas there's a hard barrier between e.g. protobuf/grpc and JSON APIs. Possibly for the better?
I was curious about how it is implemented and I found the spec easy to read and quite elegant: https://www.w3.org/TR/exi/
For a transport tech XML was OK. Just wasted 20% of your bandwidth on being a text encoding. Plus wrapping your head around those style sheets was a mind twister. Not surprised people despise it. As it has the ability to be wickedly complex for no real reason.
XPath is kind of fine. It's hard to remember all the syntax but I can usually get there with a bit of experimentation.
XSLT is absolutely insane nonsense and needs to die in a fire.
True, and it's even more sad that XML was originally just intended as a simplified subset of SGML (HTML's meta syntax with tag inference and other shortforms) for delivery of markup on the web and to evolve markup vocabularies and capabilities of browsers (of which only SVG and MathML made it). But when the web hype took over, W3C (MS) came up with SOAP, WS-this and WS-that, and a number of programming languages based on XML including XSLT (don't tell HNers it was originally Scheme but absolutely had to be XML just like JavaScript had to be named after Java; such was the madness).
If your document has namespaces, xpath has to reflect that. You can either tank it or explicitly ignore namespaces by foregoing the shorthands and checking `local-name()`.
Instead of being able to just write /bookstore/book/title, it's been some godawful mess like
/*[name()='bookstore']/*[name()='book']/*[name()='title']
... I guess because they couldn't bear to have it just match on tags as they are in the file and it had to be tethered to some namespace stuff that most people don't bother with. A lot of XML is ad-hoc without a namespace defined anywhere
It's like
Me: Hello XPath, here's an XML document, please find all the bookstore/book/title tags
XPath: *gasps* Sir, I couldn't possibly look for those tags unless you tell me which namespace we are in. Are you some sort of deviant?
Me: oh ffs *googles xpath name() syntax*
That is not actually relevant, and it is not information the average XML processor even receives. If the file uses a default namespace (xmlns), then the elements are namespaced, and anything processing the XML has to either properly handle namespaces or explicitly ignore namespaces.
> A lot of XML is ad-hoc without a namespace defined anywhere
If the element is not namespaced, xpath does not require a prefix; you just write /bookstore/book/title.
my:book is a different thing from your:book and you generally don't want to accidentally match on both. Keeping them separate is the entire point of namespaces. Same as in any programming language.
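For completeness, the namespace-aware way to write that path in XSLT is to bind the document's namespace to a prefix of your own choosing and use it in the XPath (a sketch; the namespace URI is made up):

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:bk="http://example.com/bookstore">
      <!-- bk: is bound to the document's default namespace, so the path matches namespaced elements -->
      <xsl:template match="/">
        <xsl:for-each select="/bk:bookstore/bk:book/bk:title">
          <xsl:value-of select="."/>
        </xsl:for-each>
      </xsl:template>
    </xsl:stylesheet>

The prefix in the stylesheet does not have to match whatever prefix (if any) the document uses; only the URI has to line up.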
XML is a markup language system. You typically have a document, and various parts of it can be marked up with metadata, to an arbitrary degree.
JSON is a data format. You typically have a fixed schema and things are located within it at known positions.
Both of these have use-cases where they are better than the other. For something like a web page, you want a markup language that you progressively render by stepping through the byte stream. For something like a config file, you want a data format where you can look up specific keys.
Generally speaking, if you’re thinking about parsing something by streaming its contents and reacting to what you see, that’s the kind of application where XML fits. But if you’re thinking about parsing something by loading it into memory and looking up keys, then that’s the kind of application where JSON fits.
https://susam.net/feed.xml
https://susam.net/feed.xsl
It works surprisingly well, the only issue I ever ran into was a decades-old bug in Firefox that doesn't support rendering HTML content directly from the XML document. I.e. if the blog post content is HTML via CDATA, I needed a quick script to force Firefox to render that text via innerHTML rather than rendering the raw CDATA text.
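The stylesheet-side escape hatch is disable-output-escaping, which Gecko's built-in XSLT famously does not honor, hence the script workaround; a sketch, assuming an RSS-style <description> element holding the CDATA HTML:

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- emit the CDATA payload as raw HTML where the processor supports d-o-e -->
      <xsl:template match="item">
        <div class="post">
          <xsl:value-of select="description" disable-output-escaping="yes"/>
        </div>
      </xsl:template>
    </xsl:stylesheet>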
Thank you reading specs.
Thank you making tool.
XML is the C++ of text based file formats if you ask me. It's mature, batteries included, powerful and can be used with any language, if you prefer.
Like old and mature languages with their own quirks, it's sadly fashionable to complain about it. If it doesn't fit the use case, it's fine, but treating it like an abomination is not.
    // XML
    $xml_doc = new DOMDocument();
    $xml_doc->load("file1.xml");

    // XSL
    $xsl_doc = new DOMDocument();
    $xsl_doc->load("file.xsl");

    // Proc
    $proc = new XSLTProcessor();
    $proc->importStylesheet($xsl_doc);
    $newdom = $proc->transformToDoc($xml_doc);

    print $newdom->saveXML();
XSLT lacks functionality? No problem, use php functions in xslt: https://www.php.net/manual/en/xsltprocessor.registerphpfunct...
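For reference, the stylesheet side of registerPHPFunctions looks roughly like this (a sketch; the matched element name is made up). The php namespace is the documented http://php.net/xsl URI, and the PHP side just needs $proc->registerPHPFunctions() before transforming:

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:php="http://php.net/xsl">
      <!-- call a registered PHP function from inside an XPath expression -->
      <xsl:template match="title">
        <h1><xsl:value-of select="php:function('strtoupper', string(.))"/></h1>
      </xsl:template>
    </xsl:stylesheet>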
RTFM
I learned quickly to leave this particular experience off of my resume as sundry DoD contractors contacted me on LinkedIn for my "XML expertise" to participate in various documentation modernization projects.
The next time you sigh as you use JSX to iterate over an array of Typescript interfaces deserialized from a JSON response remember this post - you could be me doing the same in XSLT :-).
Depressed and quite pessimistic about the team’s ability to orchestrate Java development in parallel with the rapid changes to the workbook, he came up with the solution: a series of XSLT files that would automatically build Java classes to handle the Struts actions defined by the XML that was built by Visual Basic from the workbook that was written in Excel.
https://raganwald.com/2008/02/21/mouse-trap.html
HN Discussions:
https://news.ycombinator.com/item?id=120379 · https://news.ycombinator.com/item?id=947952
Recently I needed a solution for a problem and what XSLT promises is a big part of the solution, so I'm in an existential and emotional crisis.
I would rather they introduced support for v3, as that would make it easier to serve static webpages with native support for templating.
Plug: here is a small project to get the basic information about the XSLT processor and available extensions. To use with a browser find the 'out/detect.xslt' file there and drag it into the browser. Works with Chrome and Firefox; didn't work with Safari, but I only have an old Windows version of it.
https://github.com/MikhailEdoshin/xslt-detect-ext/
I updated an XSLT system to work with then latest Firefox a couple of years ago. We have scripts in a different directory to the documents being transformed which requires a security setting to be changed in Firefox to make it work, I don't know if an equivalent thing is needed for Chrome.
It wasn't that bad. We used tomcat and some apache libraries for this. Worked fine.
Our CMS was spitting out XML files with embedded HTML that were very cacheable. We handled personalization and rendering to HTML (and js) server-side with a caching proxy. The XSL transformation ran after the cache and was fast enough to keep up with a lot of traffic. Basically the point of the XML here was to put all the ready HTML in blobs and all the stuff that needed personalization as XML tags. So the final transform was pretty fast. The XSL transformer was heavily optimized and the trick was to stream its output straight to the response output stream and not do in-memory buffering of the full content. That's still a good trick BTW, one that most frameworks get wrong out of the box because in-memory buffering is easier for the user. It can make a big difference for large responses.
These days, you can run whatever you want in a browser via wasm of course. But back then javascript was a mess and designers delivered photoshop files, at best. Which you then had to cut up into frames and tables and what not. I remember Google Maps and Gmail had just come out and we were doing a pretty javascript heavy UI for our CMS and having to support both Netscape and Internet Explorer, which both had very different ideas about how to do stuff.
??
I was transforming XML with, like, three lines of VBScript in classic ASP.
You needed the jvm and saxon and that was about it...
Just imagine how fast websites would have rendered if we went that route
https://evidlo.github.io/xsl-website
It has worked amazingly well for us, and the generated files are already merged in the Linux Kernel.
[1] https://gitlab.com/x86-cpuid.org/x86-cpuid-db
This didn't work for me on my browsers (FF/Chrome/Safari) on Mac, apparently XSLT only works there when accessed through HTTP:
I remember long hours using XSLT to transform custom XML formats into some other representation that was used by wxWindows in the 2000s, maybe I should give it a shot again for the Web :)
Huh, neat! Didn't know it supported that. (python3 -m http.server will default to the current directory anyway though)
"XSLT is a failure wrapped in pain"
original article seems offline but relevant HN discussion: https://news.ycombinator.com/item?id=8708617
https://news.ycombinator.com/item?id=44290315
Paper abstract:
ZjsComponent: A Pragmatic Approach to Modular, Reusable UI Fragments for Web Development
This gives me new appreciation for how powerful XSLT is, and how glad I am that I can use almost anything else to get the same end results. Give me Jinja or Mustache any day. Just plain old s-exprs for that matter. Just please don’t ever make me write XML with XML again.
However, it was a much simpler imperative language with some macros.
XSLT is more like a set of queries competing to run against a document, and it's easy to make something incomprehensibly complex if you're not careful.
Xee: A Modern XPath and XSLT Engine in Rust
https://news.ycombinator.com/item?id=43502291
I have created a CMS that supported different building blocks (plugins), each would output its data in XML and supply its XSLT for processing. The CMS called each block, applied the concatenated XSLT and output HTML.
It was novel at the time and really nice and handy to use.
all in VBScript, god help me
It felt like a great idea at the time, but it was incredibly slow to generate all the HTML pages that way.
Looking back I always assumed it was partly because computers back then were too weak, although reading other comments in this thread it seems like even today people are having performance problems with XSLT.
Has there been any progress on making this into something developers would actually like to use? As far as I can tell, it’s only ever used in situations where it’s a last resort, such as making Atom/RSS feeds viewable in browsers that don’t support them.
After spending months working on my development machine, I deployed the website to my VPS, to realize to my utter dismay that the XSLT module was not activated on the PHP configuration. I had to ask the (small) company to update their PHP installation just for me, which they promptly did.
Somehow it took me many years, basically until starting uni and taking a proper programming class, before I started feeling like I could realize my ideas in a normal programming language.
XSLT was a kind of tech that allowed a non-coder like me to step by step figure out how to get things to show on the screen.
I think XSLT really has some strong points, in this regard at least.
Turns out you can do a lot with the RegEx-support in XSLT 2.0!
https://saml.rilspace.com/exercise-in-xslt-regex-partial-gal...
The result? A Java-based tool for creating CLI commands via a wizard:
https://www.youtube.com/watch?v=WMjXsBVqp7s
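For those who only ever saw 1.0: the 2.0 regex support is xsl:analyze-string, roughly like this (a sketch; the line element and the pattern are made up):

    <xsl:stylesheet version="2.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- split each match into regex groups; everything else passes through as text -->
      <xsl:template match="line">
        <xsl:analyze-string select="." regex="(\d+)-(\d+)">
          <xsl:matching-substring>
            <range from="{regex-group(1)}" to="{regex-group(2)}"/>
          </xsl:matching-substring>
          <xsl:non-matching-substring>
            <xsl:value-of select="."/>
          </xsl:non-matching-substring>
        </xsl:analyze-string>
      </xsl:template>
    </xsl:stylesheet>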
Let's not romanticize XML. I wrote a whole app that used XSL:T about 25 years ago (it was a military contract and for some reason that required the use of an XML database, don't ask me). Yes it had some advantages over JSON but XSL:T was a total pain to work with at scale. It's a functional language, so you have to get into that mindset first. Then it's actually multiple functional languages composed together, so you have to learn XPath too, which is only a bit more friendly than regular expressions. The language is dominated by hacks working around the fact that it uses XML as its syntax. And there are (were?) no useful debuggers or other tooling. IIRC you didn't even have any equivalent of printf debugging. If you screwed up in some way you just got the wrong output.
Compared to that React is much better. The syntax is much cleaner and more appropriate, you can mix imperative and FP, you have proper debugging and profiling tools, and it supports incremental re-transform so it's actually useful for an interactive UI whereas XSL:T never was so you needed JS anyway.
https://github.com/jqlang/jq
Learn it. It is insanely useful for munging json in day-to-day work.
[0] https://en.wikipedia.org/wiki/SOAP#Example_message_(encapsul...
The funny thing is that the concept of AJAX was fairly new at the time, and so for them it made sense that the future of "fat" web pages (that's the term they use in their doc) was to use AJAX to download XML and transform it. But then people quickly learned that if you could just use JS to generate content, why bother with XML at all?
Back in 2005 I was evaluating some web framework concepts from R&D at the company I worked for, and they were still very much in an XML mindset. I remember they created an HTML table widget that loaded XML documents and used XPath to select content to render in the cells.
I suspect some of the hate towards XML from the web dev community boils down to it being "old". I'll admit that I used to have the same opinion until I actually started working with it. It's a little bit more of a PITA than working with JSON, but I think I'm getting a much more expressive and powerful serialization format for the cost of the added complexity.
I think there are just a few people who know XSL(T) these days, or who need a refresher (like me).
It never broke, ever.
It could have bugs, of course! -- but only "programmer bugs" (behavior coded in a certain way that should have been coded in another); it never suddenly stopped working for no reason like everything does nowadays.
Unfortunately that sentiment is not shared by many, and plenty of developers have always had trouble understanding the FP approach of its design and looking beyond the XML.
25 years later we have JSON and YAML reinventing, mostly badly, the wheel we already had nicely available in the XML ecosystem.
Schemas, validation, graphical transformation tools, structured editors, comments, plugins, namespaces,...
It would probably help if xslt was not a god-awful language even before it was expressed via an even worse syntax.
Now ironically, we have to reach for tooling to work around the design flaws of json and yaml.
That reads like an indictment of using XML for a programming language.
Not that it has anything to do with the semantics of XSLT.
XML is tooling-based, and there have been plenty of tools for writing XSLT, including debugging and processing example fragments; naturally not something the vi crowd ever became aware of amid their complaints.
Our mobile and web portal was made of J2EE services producing XML, which was then transformed by XSLT into HTML or WAP.
At the time it blew me away that they expected web designers to work in an esoteric language like that.
But it was also nicely separated.
my only options to fix this are javascript, xslt or a server side html generator. (and before you ask, static site generators are no better, they just make the generation part manual instead of automatic.)
i don't actually care if the site is static. i only care that maintenance is simple.
build tools are not simple. they tend to suffer from bitrot because they are not bundled with the hosting of the site or the site content.
server side html generators (aka content management systems, etc.) are large and tie me to a particular platform.
frontend frameworks by default require a build step and of course need javascript in the browser. some frameworks can be included without build tools, and that's better, but also overkill for large sites. and of course then you are tied to the framework.
another option is writing custom javascript code to include an html snippet from another file.
or maybe i can try to rig include with xslt (a rough sketch follows below). will that shut up the people who want to view my site without javascript?
at some point there was discussion of an html include, but it has been dropped. why?
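For what it's worth, the "include with xslt" idea above can be sketched roughly like this (file and element names are made up, each page carries an <?xml-stylesheet?> PI pointing at the stylesheet, and client-side document() is subject to same-origin rules):

    <!-- site.xsl: hypothetical sketch that pulls a shared menu.xml into every page -->
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/page">
        <html>
          <body>
            <nav>
              <!-- the "include": copy the shared menu document in -->
              <xsl:copy-of select="document('menu.xml')/menu/*"/>
            </nav>
            <main>
              <xsl:copy-of select="content/node()"/>
            </main>
          </body>
        </html>
      </xsl:template>
    </xsl:stylesheet>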
You can totally do that with PHP? It can find all the pages, generate the menu, transform markdown to html for the current page, all on the fly in one go, and it feels instantaneous. If you experience some level of traffic you can put a CDN in front but usually it's not even necessary.
the point is, none of the solutions are completely satisfactory. every approach has its downsides. but most critically, all this complaining about people picking the wrong solution is just bickering that my chosen solution does not align with their preference.
my preferred solution btw is to take a build-less frontend framework, and build my site with that. i did that with aurelia, and recently built a proof of concept with react.
To just do the menu, if your site is xhtml, IIRC you could link to the template, use a <my-menu> in the page, and then the template just gives a rule to expand that to your menu.
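Roughly like this, I think (element name and menu contents made up): an identity transform plus one rule that expands the placeholder wherever it appears.

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns="http://www.w3.org/1999/xhtml">
      <!-- copy everything else through unchanged -->
      <xsl:template match="@*|node()">
        <xsl:copy>
          <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
      </xsl:template>
      <!-- expand the placeholder (matched by local name, in case the page
           puts it in the XHTML namespace) into the real menu -->
      <xsl:template match="*[local-name()='my-menu']">
        <ul class="menu">
          <li><a href="/">Home</a></li>
          <li><a href="/posts/">Posts</a></li>
        </ul>
      </xsl:template>
    </xsl:stylesheet>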
This was easy to achieve with PHP with a super minimal setup, so I thought, why not? Still no build steps!
PHP is quite ubiquitous and stable these days, so it is practically equivalent to making a static site. Just a few sprinkles of dynamism to avoid repeating HTML all over the place.
XSLT technically makes the most sense the more your templates consist of boilerplate XML literals, because it uses XML itself as its syntax. But even though it uses XML as its meta-syntax, it has lots of microsyntax (XPath, variables, parameters) that you need to cram into XML attributes, with the usual quoting restrictions and no syntax highlighting. There's really nothing in XSLT that couldn't be implemented better in a general-purpose language with proper testing and library infrastructure, such as Prolog/Datalog (in fact DSSSL, XSLT's close predecessor for templating full SGML/HTML and not just the XML subset, was based on Scheme), or just, you know, vanilla JavaScript, which was introduced for DOM manipulation in the first place.
Note that maintenance of libxml2/libxslt is currently understaffed [1], and it's a miracle to me that XSLT (version 1.0, from 1999) still ships as a native implementation in browsers, unlike e.g. PDF, which is handled by PDF.js.
[1]: https://gitlab.gnome.org/GNOME/libxml2/-/issues/913
I’ve kinda gotten to a point and curious if others feel same: it’s all just strings. You get some strings from somewhere, write some more strings to make those strings show other strings to the browser. Sometimes the strings reference non strings for things like video/audio/image. But even those get sent over network with strings in the http header. Sometimes people have strong feelings about their favorite strings, and there are pros and cons to various strings. Some ways let you write less strings to do more. Some are faster. Some have angle brackets, some have curly brackets, some have none at all! But at the end of the day- it’s just strings.
I think one big problem with popularizing that approach is that XSLT as a language frankly sucks. As an architecture component, it's absolutely the right idea, but as long as actually developing in it is a world of pain, I don't see how people would have any incentive to adopt it.
The tragic thing is that there are other pure-functional XML transformation languages that are really well-designed - like XQuery. But there is no browser that supports those.
My favorite thing about XQuery is that it supports properly named functions, not just templates that happen to work on whatever one provides them, as with XSLT. I think golang's text/template suffers from the same problem: good luck being disciplined enough to always give it the right context, or you get bad outcomes.
An example I had lying around:
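(A made-up stand-in in the same spirit, not the original snippet: a data file pointing at a stylesheet via the xml-stylesheet processing instruction.)

    <!-- books.xml (hypothetical) -->
    <?xml-stylesheet type="text/xsl" href="books.xsl"?>
    <books>
      <book><title>The SGML Handbook</title></book>
      <book><title>XSLT Cookbook</title></book>
    </books>

    <!-- books.xsl (hypothetical) -->
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/books">
        <html>
          <body>
            <ul>
              <xsl:for-each select="book">
                <li><xsl:value-of select="title"/></li>
              </xsl:for-each>
            </ul>
          </body>
        </html>
      </xsl:template>
    </xsl:stylesheet>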
And the browser takes care of the rendering.
Good times.
> [...] the idea of building a website like this in XML and then transforming it using XSL is absurd in and of itself [...]
In the comments the creators remark on it, for instance that it was a mess to debug. But I could not find anything wrong with the technique itself, assuming it works.
This is the first I've seen it. Interesting...
A few years ago, I decided to style my own feeds, and ended up with this: https://chrismorgan.info/blog/tags/fun/feed.xml. https://chrismorgan.info/atom.xsl is pretty detailed, I don’t think you’ll find one with more comprehensive feature support. (I wrote a variant of it for RSS too, since I was contemplating podcasts at the time and almost all podcast software is stupid and doesn’t support Atom, and it’s all Apple’s fault: https://temp.chrismorgan.info/2022-05-10-rss.xsl.)
At the time, I strongly considered making the next iteration of my website serve all blog stuff as Atom documents—post lists as feeds, and individual pages as entries. In the end, I’ve decided to head in a completely different direction (involving a lot of handwriting!), but I don’t think the idea is bad.
Gave it up because it turns out the little things are just a pain. Formatting dates, showing article numbers and counts etc.
[0] https://www.getsymphony.com/
https://github.com/captn3m0/boardgame-research
It also feels very arcane - hard to debug and understand unfortunately.
https://zvon.org/xxl/XSLTutorial/Books/Output/contents.html
Still a great resource.
--
I would say CSS selectors superseded XPath for the web. If one could do XSLT using CSS selectors instead, it would feel fresh and modern.
me come to hn, see xml build system, me happy, much smiling, me hit up arrow, me thank good stranger.
I learned one thing: Apply XSL to an XML by editing the XML. But can we flip it?
The web works in MVC ways. Web servers are controllers that output the view populated with data.
(XML) Data is in the backend. (XSLT) View page is the front end. (XPath) Query filters request (XML) data the way controllers do.
XML is a semi-structured format, which (apart from & < >) includes plain text as a more or less degenerate case. I don't think we have any other realistic format for marking up plain text with arbitrary semantics. You can have, for example, a recipe format with <ingredient> as part of its schema, and it's trivial to write an Xpath to pull out all the <ingredient>s (to put them in your shopping list, or whatever).
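A minimal sketch of that shopping-list pull (the recipe schema is made up):

    <!-- emit every <ingredient> in the document, one per line -->
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="text"/>
      <xsl:template match="/">
        <xsl:for-each select="//ingredient">
          <xsl:value-of select="normalize-space(.)"/>
          <xsl:text>&#10;</xsl:text>
        </xsl:for-each>
      </xsl:template>
    </xsl:stylesheet>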
Obviously, XSLT is code. Nobody denies this really. One thing about code is that it's inherently structured. Only the craziest of literate programmers would try to embed executable code inside of text. But I don't think that's the biggest problem. Code is special in that special purpose programming languages always leak outside the domain they're designed for. If you try and write a little language that's really well-scoped to transforming XML, you are definitely going to want to call stuff outside it sooner or later.
Combined with the fact that there really isn't any value in ever parsing or processing a stylesheet, it seems like it was doomed never to pan out.
XSLT controls the styling, Lua the running functions. When Lua adjusts a visible thing, it generates XSLT.
"FrameXML" is a thin Lua wrapper around the base XSLT.
me have make vomit from seeing xml
JS was waay too slow, but it turned out that even back then XSLT was blazing fast. So I basically generated XML with all the data, wrote a simple XSLT with one clever XPath that generated search input form, did the search and displayed the results, slapped the xml file in CD auto-run and called it a day. It was finding results in a second or less. One of my best hacks ever.
Since then I have always wanted to make an HTML templating system that compiles to XSLT and does the HTML generation on the client side. I wrote some, but back then Firefox didn't support displaying XML+XSLT directly, and I didn't like the workaround I came up with. Then AJAX came, then JS got faster, and client-side rendering with JS became viable. But I still think it's a good idea to send just dynamic XMLs with static XSLTs preloaded and cached, if we ever want to come back to a purely server-driven request-response flow. Especially if a binary format for XML catches on.
https://en.wikipedia.org/wiki/Efficient_XML_Interchange
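That "one clever XPath" search could have looked something like this; a rough guess with made-up element names, where $query would be fed in by whatever invokes the transform:

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- case-insensitive substring match over a flat <records> file -->
      <xsl:param name="query" select="''"/>
      <xsl:template match="/records">
        <html>
          <body>
            <ul>
              <xsl:for-each select="record[contains(
                  translate(title, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ',
                                   'abcdefghijklmnopqrstuvwxyz'),
                  translate($query, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ',
                                    'abcdefghijklmnopqrstuvwxyz'))]">
                <li><xsl:value-of select="title"/></li>
              </xsl:for-each>
            </ul>
          </body>
        </html>
      </xsl:template>
    </xsl:stylesheet>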
If you're manually writing the <>-stuff in an editor you're doing it wrong; do it programmatically, or with applications that abstract it away.
Use things like JAXB or other mature libraries, eXist-db (http://exist-db.org), programs that can produce visualisations and so on.
From talking to AI, it seems the main issues would be:
- SEO (googlebot)
- Social Media Sharing
- CSP heavy envs could be trouble
Is this right?
1: https://github.com/pjlsergeant/xslt-fever-dream/blob/main/ut...
2: https://github.com/pjlsergeant/xslt-fever-dream/blob/main/ut...
3: https://github.com/pjlsergeant/xslt-fever-dream/blob/main/ut...
Well, Apache says hi: https://httpd.apache.org/docs/2.4/howto/ssi.html (Look for "include")
https://learn.microsoft.com/en-us/previous-versions/windows/...
Here's how to use XSLT to make Punkemon Pie Menus! [ WARNING: IE 5 required! ;) ]
The "htc" files are ActiveX components written in JScript, aka "Dynamic HTML (DHTML) behaviors":
https://en.wikipedia.org/wiki/HTML_Components
>HTML Components (HTCs) are a legacy technology used to implement components in script as Dynamic HTML (DHTML) "behaviors" in the Microsoft Internet Explorer web browser. Such files typically use an .htc extension and the "text/x-component" MIME type.
JavaScript Pie Menus, using Internet Explorer "HTC" components, xsl, and xml:
https://www.youtube.com/watch?v=R5k4gJK-aWw
>Pie menus for JavaScript on Internet Explorer version 5, configured in XML, rendered with dynamic HTML, by Don Hopkins.
punkemonpiemenus.html: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/ma...
punkemon.xsl: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/ma...
punkemon.xml: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/ma...
punkemonpiemenus.xml: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/ma...
piemenu.htc: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/ma...
Also an XML Schema driven pie menu editor:
piemenuschemaeditor.html: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/ma...
piemenuschemaeditor.xsl: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/ma...
piemenuschema.xml: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/ma...
piemenuschemaeditor.htc: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/ma...
piemenuxmlschema-1.0.xsd: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/ma...
Here's an earlier version that uses ActiveX OLE Control pie menus, xsl, and xml, not as fancy or schema driven:
ActiveX Pie Menus:
https://www.youtube.com/watch?v=nnC8x9x3Xag
>Demo of the free ActiveX Pie Menu Control, developed and demonstrated by Don Hopkins.
ActiveXPieMenuEditor.html: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/ma...
piemenueditor.xsl: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/ma...
piemenueditor.html: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/ma...
piemenueditor.htc: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/ma...
piemenumetadata.xml: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/ma...
Fasteroids (Asteroids comparing Pie Menus -vs- Linear Menus):
fasteroids.html: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/ma...
fasteroids.htc: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/ma...
If that wasn't obsolete enough, here is the "ConnectedTV Skin Editor". It was a set of HTC components, XML, and XML Schemas, plus a schema-driven WYSIWYG skin editor for ConnectedTV: a Palm Pilot app that turned your Palm into a personalized TV guide + smart remote.
A full, fresh lineup of national and local broadcast + TiVo + Dish TV guides, with customized channel groups, channel and show filtering, and favorites; hot-sync your custom TV guide with just the shows you watch, with weeks' worth of schedules you could download and hot-sync nightly with the latest guide updates.
Integrated with a trainable consumer IR remote controller with custom touch-screen user interfaces. Its 5-function "finger pie menus" let you easily tap or stroke up/down/left/right to stack multiple gesture controls on each button (conveniently opposite and orthogonal for volume up/down, channel next/previous, page next/previous, time forward/back, show next/previous, mute/unmute, favorite/ignore, etc). Finger pies are perfect for the kind of opposite, directionally oriented commands on remote controls, and you need a lot fewer 5-way buttons than single-purpose physical buttons on normal remotes, so you could pack a huge amount of functionality into one screen, or have any number of less dense screens customized for just the devices you have and the features you use. Goodbye TiVo Monolith Monster remote controls, since only a few of the buttons were actually useful, and ConnectedTV could put 5x the number of functions per gesture-activated finger pie menu button.
The skin editor let you make custom user interfaces by laying out and editing any number of buttons, WYSIWYG, however you liked, and binding tap/left/right/up/down to page navigation, TV guide time/channel/category navigation, or sending IR commands for changing the channel (it sends multiple digits per tap on a station or show, so you can forget the numbers), volume, mute, TiVo rewind/skip, etc.
Also you could use finger pies easily and reliably on the couch in a dark room with your finger instead of the stylus. Users tended to lose their Palm stylus in the couch cushions (which you sure don't wanna go fishing around for if JD Vance has been visiting) while eating popcorn and doing bong hits and watching tv and patting the dog and listening to music and playing video games in their media cave, so non-stylus finger gesture control was crucial.
Finger pies were like iPhone swipe gestures, but years earlier and much cheaper (you could get a low-end Palm for dirt cheap and dedicate it to the TV). And they were self-revealing: they prompt with labels, give feedback (with nice clicky sounds), and train you to use the gestures efficiently, instead of the invisible, mysterious iPhone gestures you have to discover and figure out without visual affordances. After filtering out all the stuff you never watch and favoriting the shows you do, it was much easier to find just the shows you like and what was on right now.
More on the origin of the term "Finger Pie" for Beatles fans (but I digress ;) :
https://news.ycombinator.com/item?id=16615023
https://donhopkins.medium.com/gesture-space-842e3cdc7102
It was really nice to have the TV guide NOT on the TV screen taking you away from watching the current show, and NOT to have to wait 10 minutes while it slowly scrolled the two visible rows through 247 channels before you finally saw the channel you wanted to watch (by which time you'd have missed a lot of the show, but been offered lots of useless shit and psychic advice to purchase from an 800 number with your credit card!).
Kids these days don't remember how horrible and annoying those slow scrolling TV guides with ads for tele-psychics and sham wows and exercise machines used to be.
I can objectively say that it was much better than the infamous ad laden TV Guide Scroll:
https://www.youtube.com/watch?v=JkGR29TSueM
Using those slow scrolling non-interactive TV guides with obnoxious ads was so painful that you needed to apply HEAD ON directly to the forehead again and again and again to ease the pain.
https://www.youtube.com/watch?v=Is3icfcbmbs
You could use the skin editor to create your own control panels and buttons for whatever TV, TiVO, DVR, HiFi, Amplifier, CD, DVD, etc players you wanted to use together. And we had some nice color hires skins for the beautiful silver folding Sony Clie.
https://en.wikipedia.org/wiki/Sony_CLI%C3%89_PEG-TG50
It was also nice to be able to curate and capture just the buttons you wanted for the devices that you actually use together, and put them all onto one page, or factor them out into different pages per device. You could ignore the 3 digit channel number and never peck numbers again, just stroke up on your favorite shows to switch the channel automatically.
We ran out of money because it was so expensive to license the nightly feed of TV guide (downloading a huge sql dump every night of the latest schedules as they got updated), and because all of our competitors were just stealing their data by scraping it from TV guide web sites instead of licensing it legally. (We didn't have Uber or OpenAI to look up to for edgy legal business practice inspiration.)
Oh well, it was fun while it lasted, during the days that everybody was carrying a Palm Pilot around beaming their contacts back and forth with IR. What a time that was, right before and after 9/11 2001. I remember somebody pointedly commented that building a Palm app at that time in history was kind of like opening a flower shop at the base of the World Trade Center. ;(
https://github.com/SimHacker/ConnectedTVSkinEditor
https://www.pencomputing.com/palm/Pen44/connectedTV.html
https://uk.pcmag.com/first-looks/29965/turn-your-palm-into-a...
Connected TV User Guide:
Overview: https://donhopkins.com/home/ConnectedTVUserGuide/Guide1-Over...
Setting Up: https://donhopkins.com/home/ConnectedTVUserGuide/Guide2-Sett...
Using: https://donhopkins.com/home/ConnectedTVUserGuide/Guide3-Usin...
Memory: https://donhopkins.com/home/ConnectedTVUserGuide/Guide4-Memo...
Sony: https://donhopkins.com/home/ConnectedTVUserGuide/Guide5-Sony...