



I would say doneness is about completeness within context, not immutability.
The environment may change, but within context, it can still be considered done.
It’s fine to say, and to consider, that software is never done, because there are known and unknown unknowns, extrapolations, and expectations. But I think calling something done has value too.
It is a label of intention, of consideration, within the current context. If the environment changes and you want or need to use it, by all means update it. That doesn’t mean the done label assigned previously was wrong [in its context].
We also say “I’m done” to announce our own departure, even when it’s not the product that has reached completeness, only our own tolerance.
In the same way, if you shift focus, done may very well be done and not done at the same time. Done for someone in one environment, and not done for someone in another.
More often than ‘done’ I see ‘feature complete’ or ‘in maintenance mode’ in project READMEs, which I think are better labels.


and figure out whether the new framework with a weird name actually addresses
Couldn’t name what this is about in the title, nor in the teaser, I guess?
“Latest hotness” and “the new framework with a new name” aren’t very discerning.


From the paper abstract:
[…] Novice workers who rely heavily on AI to complete unfamiliar tasks may compromise their own skill acquisition in the process. We conduct randomized experiments to study how developers gained mastery of a new asynchronous programming library with and without the assistance of AI.
We find that AI use impairs conceptual understanding, code reading, and debugging abilities, without delivering significant efficiency gains on average. Participants who fully delegated coding tasks showed some productivity improvements, but at the cost of learning the library.
We identify six distinct AI interaction patterns, three of which involve cognitive engagement and preserve learning outcomes even when participants receive AI assistance. Our findings suggest that AI-enhanced productivity is not a shortcut to competence and AI assistance should be carefully adopted into workflows to preserve skill formation – particularly in safety-critical domains.


And there are cookies 🍪
Do they share my cookies with third parties? /s


That’s really cool.
Repo already archived, only 26 days after the linked post?
Oh, they rewrote the Ruby tool in Go. Even better.
https://github.com/git-pkgs/git-pkgs
Looks like it probably lost support for some dependency-manager ecosystems in the process.
You add more tags?
In my main work projects I regularly archive tags into refs/archive/tags/*, which is hidden from normal tooling, but still accessible in Git and (some?) Git tooling and UIs.
Branches get “path” prefixes like draft/* or other long-term category indications. I don’t archive them, but if I did, I would put them outside of refs/heads/*, e.g. into refs/archive/heads/*.
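A minimal sketch of that archiving flow (the tag name v1.0 is made up):

# move the tag into the archive namespace, then drop the visible tag
git update-ref refs/archive/tags/v1.0 refs/tags/v1.0
git tag -d v1.0
# list archived tags later
git for-each-ref refs/archive/tags

Note that refs outside the usual namespaces aren’t fetched or pushed by default, so you’d need an explicit refspec to share them.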


What’s Codeberg’s stance on this? Do they advocate for this, accept it, or dislike it?
Their FAQ talks about having disabled mirroring because of the resource use of abandoned mirrored repos. Their blog post about dropping mirroring says that manual mirroring is still possible.


Is it open source if there’s no source yet 🤔


I wouldn’t call such things agents though. They’re not acting autonomously or out-of-process.


Powered by LiveKit
Apparently it’s AI coded? Or maybe not?
What does “powered by” mean when LiveKit is about AI agents?
Are we letting AI agents meet instead of ourselves?
The linked LiveKit website and linked LiveKit blog post seem completely disconnected. I don’t get it.


Do good work, be interested and show interest, and be in a receptive environment.
If your current environment is overbearing, with power politics you don’t succeed in, and you want change, you’ll probably have to change environments.
If you want impact, consider whether smaller companies and teams would be beneficial. You may be able to fulfill your desire for impact and control even without a formal lead role. Or you may become one implicitly, or naturally more quickly, in smaller, less formal and structured environments.
You can also look for job offerings for those kinds of roles specifically. No need to seek a climb in-house when you can find more direct routes.


If the XML parser parses into an ordered representation (the XML information set), isn’t it then the deserializer’s choice how it maps that to the programming language/type system it is deserializing to? So in a system with ordered arrays, it would likely map to those?
If XML can be written in an ordered way, and the parsed XML information set has ordered children for those, I still don’t see where order gets lost or is impossible [to guarantee] in XML.
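A minimal sketch of what I mean (element names made up). The infoset’s [children] list keeps these in document order:

<playlist>
  <track>Intro</track>
  <track>Main Theme</track>
  <track>Outro</track>
</playlist>

A deserializer targeting a language with ordered arrays can map this one-to-one, e.g. to [“Intro”, “Main Theme”, “Outro”]; nothing in that round trip loses the order.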


https://www.w3.org/TR/2004/REC-xml-infoset-20040204/
[children] An ordered list of child information items, in document order.
Does this not cover it?
Do you mean if you were to follow the XML standard but not the XML information set standard?


while JSON is a generalized data structure with support for various data types supported by programming languages
Honestly, I find it surprising that you say “support for various data types supported by programming languages”. Data types are particularly weak in JSON once you go beyond JavaScript: only number for numbers, no integer types, no date, no time, etc.
Regarding use, I see JSON, at least to some degree, outside of network transfer as well; for example, in configuration files.
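A made-up illustration of what gets lost (JSON has no comments, so the caveats go here): count and ratio are both just number, and created is only a string by convention.

{
  "count": 3,
  "ratio": 3.0,
  "created": "2024-05-01T12:00:00Z"
}

Whether count stays an integer, or created becomes a date, is entirely up to the consumer and its conventions.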


The point is that there are degrees to readability, specificity, and obviousness, even without a common understanding. Self-describing data, much like self-describing code, is different from a dense serialization without much support in that regard.


Making XML schemas work was often a hassle. You have a schema ID, and sometimes you can open or load the schema through that URL. Other times, it serves only as an identifier, and your tooling/IDE must support ID-to-local-xsd-file mappings that you configure.
Every time it didn’t immediately work, you’d think: man, why don’t they publish the schema under that public URL?
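One common shape for such a mapping is an OASIS XML catalog; the schema URL and local path here are placeholders:

<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
  <uri name="http://example.com/schemas/order.xsd"
       uri="file:///home/me/schemas/order.xsd"/>
</catalog>

Tooling that honors catalogs resolves the public URL to the local file instead of fetching it.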


In XML the practice to approximate arrays is to put the index as an attribute. It’s incredibly gross.
I don’t think I’ve seen that much, if ever.
Typically, XML repeats tag names. Repeating keys are not possible in JSON, but are possible in XML.
<items>
  <item></item>
  <item></item>
  <item></item>
</items>


They can be used as alternatives. In MSBuild you can use attributes and sub-elements interchangeably, which, if you’re writing it, gives you a choice of preference. I typically prefer attributes for conciseness (vertical density), but switch to sub-elements once their length/number becomes a significant downside.
Of course, that’s more of a human writing view. Your point about ambiguity in de-/serialization still stands, at least until the interface defines expectation or behavior as a general mechanism one way or the other, or with a specific schema.
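For illustration, these two PackageReference items are equivalent in MSBuild; the package names and versions are arbitrary:

<ItemGroup>
  <PackageReference Include="Newtonsoft.Json" Version="13.0.3" />
  <PackageReference Include="Serilog">
    <Version>3.1.1</Version>
  </PackageReference>
</ItemGroup>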


The readability and obviousness of XML cannot be overstated. JSON is simple and dense (within the limits of text). But look at JSON alone, and all you can do is hope for named fields. Beyond that, you depend on contextual knowledge and on specific structure and naming.
Whenever I edit JSON config files, I have to be careful about trailing commas, the structure of opening and closing braces and brackets, placement, and field naming. The best you can do is offer a default-filled config file that already has the full structure.
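A made-up fragment showing the classic trap; the comma after the last entry makes the whole file invalid JSON:

{
  "retries": 3,
  "timeout": 30,
}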
While XML does not solve all of it, it certainly is more descriptive and more structured, easing many of those pain points.
It’s interesting that web tech had XML in the early stages of AJAX, the dynamic web. But in the end, we sent JSON through XMLHttpRequest. JSON won.