

https://www.w3.org/TR/2004/REC-xml-infoset-20040204/
[children] An ordered list of child information items, in document order.
Does this not cover it?
Do you mean if you were to follow XML standard but not XML information set standard?


while JSON is a generalized data structure with support for various data types supported by programming languages
Honestly, I find it surprising that you say “support for various data types supported by programming languages”. Data types are particularly weak in JSON once you go beyond JavaScript: only a single number type, no integer types, no date, no time, etc.
Regarding use, I see, at least to some degree, JSON outside of use for network transfer. For example, used for configuration files.
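To illustrate the type weakness (a Python sketch; the field names are made up): anything beyond JSON’s few primitive types has to be encoded as a string by convention, and the type information is gone after a round trip.

```python
import json
from datetime import date

# JSON has no integer, date, or time types. Numbers are just "number",
# and dates must be encoded as strings by convention (e.g. ISO 8601).
payload = {"count": 3, "ratio": 0.5, "when": date(2026, 1, 26).isoformat()}
decoded = json.loads(json.dumps(payload))

print(decoded["when"])  # a plain string now, not a date; the consumer
                        # must know the convention to get a date back
```

Python happens to hand back `int` for `3`, but that is the deserializer’s choice; the JSON text itself only says “number”.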


The point is that there are degrees to readability, specificity, and obviousness, even without a common understanding. Self-describing data, much like self-describing code, is different from a dense serialization without much support in that regard.


Making XML schemas work was often a hassle. You have a schema ID, and sometimes you can open or load the schema through that URL. Other times, it serves only as an identifier, and your tooling/IDE must support mappings from schema ID to a local XSD file that you configure.
Every time it didn’t immediately work, you’d think: Man, why don’t they publish the schema under that public URL.
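A hypothetical fragment of what I mean: the namespace URI only identifies the schema and is not necessarily resolvable, while `xsi:schemaLocation` pairs it with a location your tooling may or may not be able to fetch.

```xml
<!-- Hypothetical config file: "http://example.com/ns/config" is just an
     identifier. Whether config.xsd is actually found depends on your
     tooling and its ID-to-local-file mapping. -->
<config xmlns="http://example.com/ns/config"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://example.com/ns/config config.xsd">
</config>
```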


In XML the practice to approximate arrays is to put the index as an attribute. It’s incredibly gross.
I don’t think I’ve seen that much, if ever.
Typically, XML repeats tag names. Repeating keys are not possible in JSON, but are possible in XML.
<items>
  <item></item>
  <item></item>
  <item></item>
</items>
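A Python sketch of the difference (made-up key names): repeated keys in JSON text are silently collapsed by most parsers, while repeated tags in XML are simply ordered children.

```python
import json
import xml.etree.ElementTree as ET

# Repeated keys in JSON are not really usable: most parsers,
# including Python's, keep only the last value.
data = json.loads('{"item": 1, "item": 2, "item": 3}')
print(data)  # {'item': 3}

# XML has no such problem: repeated tags are ordered children.
root = ET.fromstring("<items><item>1</item><item>2</item><item>3</item></items>")
print([child.text for child in root])  # ['1', '2', '3']
```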


They can be used as alternatives. In MSBuild you can use attributes and sub-elements interchangeably, which, if you’re writing it, gives you a choice of preference. I typically prefer attributes for conciseness (vertical density), but switch to sub-elements once the length/number becomes a (significant) downside.
Of course, that’s more of a human-writing view. Your point about ambiguity in de-/serialization still stands, at least until the interface defines expectation or behavior as a general mechanism one way or the other, or with a specific schema.
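For example (file names made up; attribute-form metadata is a newer MSBuild feature, so take the exact syntax with a grain of salt), the same item metadata can be written either way:

```xml
<ItemGroup>
  <!-- Metadata as attributes: concise, vertically dense -->
  <Compile Include="Form1.Designer.cs" DependentUpon="Form1.cs" />

  <!-- Equivalent metadata as child elements: verbose, but scales
       better once there are many or long values -->
  <Compile Include="Form2.Designer.cs">
    <DependentUpon>Form2.cs</DependentUpon>
  </Compile>
</ItemGroup>
```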


The readability and obviousness of XML cannot be overstated. JSON is simple and dense (within the limits of text). But look at JSON alone, and all you can do is hope for named fields; beyond that, you depend on context knowledge and on specific structure and naming.
Whenever I start editing JSON config files, I have to be careful about trailing commas, nesting with opening and closing braces and brackets, placement, and field naming. The best you can do is offer a default-filled config file that already has the full structure.
While XML does not solve all of it, it certainly is more descriptive and more structured, easing many of those pain points.
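The trailing-comma trap in one line (Python sketch, field name made up):

```python
import json

# Trailing commas are invalid JSON, a classic config-editing mistake.
try:
    json.loads('{"retries": 3,}')
except json.JSONDecodeError as err:
    print("rejected:", err)
```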
It’s interesting that web tech had XML in the early stages of AJAX, the dynamic web. But in the end, we sent JSON through XMLHttpRequest. JSON won.


Yeah, I wish I had something like XPath as consistently (in terms of availability and syntax) for JSON.
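Even Python’s standard library ships a (limited) XPath subset; for JSON there are JSONPath and jq, but neither is as uniformly available or as consistent in syntax. A sketch with made-up element names:

```python
import xml.etree.ElementTree as ET

# ElementTree supports a limited XPath subset out of the box.
root = ET.fromstring(
    "<config><servers>"
    "<server name='alpha'/><server name='beta'/>"
    "</servers></config>"
)
names = [s.get("name") for s in root.findall("./servers/server")]
print(names)  # ['alpha', 'beta']
```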


There was a time when HTML moved towards a more formalized, XML-valid definition named XHTML. Ultimately, web/browser backwards compatibility and HTML’s messy, forgiving nature led to us giving up on that, and now we have the HTML living standard with rules, but browsers (not sure to what degree it’s standardized or not) are very forgiving in their interpretation.
While HTML, prior to HTML5, was defined as an application of Standard Generalized Markup Language (SGML), a flexible markup language framework, XHTML is an application of XML, a more restrictive subset of SGML. XHTML documents are well-formed and may therefore be parsed using standard XML parsers, unlike HTML, which requires a lenient, HTML-specific parser.[1]
XHTML 1.0 became a World Wide Web Consortium (W3C) recommendation on 26 January 2000. XHTML 1.1 became a W3C recommendation on 31 May 2001. XHTML is now referred to as “the XML syntax for HTML”[2][3] and being developed as an XML adaptation of the HTML living standard.[4][5]


Most use cases don’t need fully semantic data storage
If both sides have a shared data model, it’s a good base format without further needs. Anything else quickly becomes complicated because of the dynamic nature of JSON, at least if you want a robust or well-documented solution.


Depending on how stable your work, environment, and risks are, the range size and confidence rating may change a lot or reach refinement limits quite fast.


It’ll come crashing down 😱


I’m not saying you’re wrong, but would we even be seeing them when they exist? When I publish or update my personal projects as public GitHub repos, nobody sees them and nobody cares. I imagine it would be the same if I were using an LLM.
I wonder what the login requirement for the benefit is about. Does it preload or something?
If they load faster, will they get fixed faster too? /s


Maybe I can build a bird feeder that is as tall as a skyscraper. 🤔/s


relevant, from a PR comment
On Monday January 26, 2026, I intend to merge this pull-request and post an explainer blog post detailing some further reasoning and details behind this move. The change, the end of the bounty, is officially set for January 31 but I am certain it will take some days to “take effect” and by merging the update a few days early I don’t think we actually hurt anyone.


His comments came as cURL users complained that the move was treating the symptoms caused by AI slop without addressing the cause. The users said they were concerned the move would eliminate a key means for ensuring and maintaining the security of the tool.
A single user commented, and they responded. “Users complained” and “the users” are wrong, implying something different.
“users complained” feels like a misrepresentation to me as well, at least how I read and understand “complained”. The user wrote “As a security researcher, this is honestly painful to see, but also completely understandable.” Is it complaining if they understand the act and change?
In a separate post on Thursday, Stenberg wrote: “We will ban you and ridicule you in public if you waste our time on crap reports.”
The linked separate post is a /.well-known/security.txt file. It’s not really a “separate post”. And I don’t see where they got the date from. Maybe from whatever linked to that in the first place.
An update to cURL’s official GitHub account made the termination, which takes effect at the end of this month, official.
Isn’t that from the merge request, which is not merged yet? It’s definitely not in the main branch; the current MR state is something different. The MR discussion clearly states that they will merge on the 26th, not early.
“an update to the official GitHub account” makes no sense to me in the first place, when it’s a file in a repo, not even the account.
At first, I only wanted to point out one thing. Now this whole article feels like AI slop. Dunno how warranted that feeling/assessment is. Is it sloppy reporting? Am I, as a reader, the problem?
/edit: The bleeping computer article posted in the community is much better/consistent/coherent. Of course, this one was earlier and already has traction.
I can’t read this because it’s not in code fencing /s


If the XML parser parses into an ordered representation (the XML information set), isn’t it then the deserializer’s choice how they map that to the programming language/type system they are deserializing to? So in a system with ordered arrays it would likely map to those?
If XML can be written in an ordered way, and the parsed XML information set has ordered children for those, I still don’t see where order gets lost or is impossible [to guarantee] in XML.
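That’s how I understand it too. A sketch of the mapping step (element names made up): the parser hands the deserializer ordered children, and mapping them to an ordered array is the obvious choice in any language that has one.

```python
import xml.etree.ElementTree as ET

# The parsed tree exposes children in document order; mapping them
# to a list preserves that order through deserialization.
root = ET.fromstring("<seq><n>1</n><n>2</n><n>3</n></seq>")
values = [int(n.text) for n in root]
print(values)  # [1, 2, 3]
```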