

, so it’s in the public domain and they’re free to do with it whatever they want, and they would legally be right.
What do you base this “all AI code is public domain by legal definition” on?


Being asked to gesture words I don’t know, or Nierenfunktionsstörung (“kidney dysfunction”) and other long German words like that 😅☠️


The need to manually download and load a lib and then email the results yourself is somewhat of a hassle, unfortunately.


A meta-analysis is an interesting reaction to, or should I say founded in, the post title. But we’d better let it go.


“early stages”, “could not verify”, “company did not respond”, “considers making available for purchase”
That’s neither solid news, nor a real or full GitHub alternative.


The CLA can never override the code license. It handles the transition of your code into their code, and what they can do with it. But once it’s published as AGPL, you or anyone else can fork it and work with it as AGPL anyway. The CLA can allow them to change the license to something different. But the AGPL published code remains published and usable under AGPL.
I’m usually fine with contributing under a CLA. A CLA often makes sense, because the alternative is a hassle and a lock-in to current constructs, which can have its own set of disadvantages.
A FOSS license and CLA combination can offer a reasonable deal to both parties: you can be sure your contribution is published as FOSS, and they know they can continue to maintain the project with some autonomy and freedom of choice. (Those choices can be better or worse for others, of course.)
I’m a bit confused by them publishing their personal essays on their htmx project page. This essay certainly doesn’t have anything to do with htmx directly. Either way, it’s valuable content, and possibly a strategy to draw people to htmx, or to reuse a domain and website they already have.


abstracting away determinism /s


This part from the article supports this sentiment:
In a pleasant surprise, reactions have been positive. Throttled organizations were “surprised and apologetic,” mistaking issues for malice rather than “ignorance, unawareness.”


I sneakily changed our pipeline to pull from the in-house Docker registry, and to pull from upstream package repos only when lockfiles changed. Our CI is faster than every other team’s, but nobody noticed.
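As a sketch of what that setup can look like (GitLab CI syntax here; the registry name, image, and paths are all made up for illustration): images come from the in-house mirror, and the dependency cache is keyed on the lockfile so upstream package repos are only hit when the lock changes.

```yaml
build:
  # In-house registry mirror instead of docker.io (hostname is hypothetical)
  image: registry.internal.example/library/node:20
  cache:
    key:
      files:
        - package-lock.json   # cache is invalidated only when the lock changes
    paths:
      - node_modules/
  script:
    - npm ci --prefer-offline # prefer the restored cache over the network
```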
So yeah, charge the companies! Please!
How come this isn’t an obvious improvement opportunity that materializes in other teams too, and visibly so, rather than being “sneakily” hidden?
Isn’t it better not only for performance but also for reliability?


The article doesn’t even mention this critical risk and history. Huge gap.


Think about whether TODOs will actually be revisited, and how you can guarantee that. What do you gain and lose by replacing warnings with TODOs?
In my projects and work projects, I advocate for documenting the reasoning: .NET warning-suppression attributes have a Justification property, and editorconfig severity changes, disabling, and suppression can carry a comment.
If it’s your own project and you know when and how you will revisit it, what do you gain by dropping the warning? No warning, but then you have TODOs with the same uncertainties?
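As a minimal sketch of what I mean (CA1031 is just an example rule ID):

```ini
# .editorconfig — a severity change can carry a comment with the reasoning.
# CA1031 (do not catch general exception types): intentional in the top-level
# handler, which must never crash; failures are logged instead.
dotnet_diagnostic.CA1031.severity = none
```

And in code, the `[SuppressMessage(...)]` attribute’s `Justification` property records the reasoning right next to the suppression itself.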


I do. But I’m very selective and critical in choosing and trusting the right ones. They’re also not my only source.
I don’t think YouTube reviews are any worse than other forms of reviews. There are plenty of bad text reviews out there, too.


It’s a fund you donate to; they invest the money, then fund open source with the investment gains.
I posted a comment on this other post that summarizes the most relevant points (because it wasn’t clear to me either, and as a note/explanation to myself too).


Data-driven grant model. There’s no perfect model for distributing OSS grants. Our approach is an open, measurable, algorithmic (but not automatic) model, […] We’re finalizing the first version of the selection model after the public launch, and its high-level description is at osendowment/model.
The fund invests all donations in a low-risk portfolio and uses only the investment income for grants, making it independent of annual budgets and market volatility. Even a modest $10M fund at this rate would generate ~$500K every year — enough for $10K grants to 50 critical open source projects.
Currently standing at $700k.
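The quoted figures are consistent with an implied ~5% annual return (the rate itself is my inference; the quote doesn’t name it):

```python
# Sanity-check of the numbers quoted above.
fund = 10_000_000          # "a modest $10M fund"
annual_income = 500_000    # "~$500K every year"

implied_rate = annual_income / fund   # 0.05, i.e. ~5% per year
grants = annual_income // 10_000      # number of $10K grants per year

print(implied_rate, grants)  # 0.05 50
```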
Regarding the model:
We aim to focus our support on the core of open-source ecosystems — like ~1% of packages accounting for 99% of downloads and dependencies. Our model shall be a data-driven approximation of the global usage of the open-source supply chain, helping to detect its most critical but underfunded elements.


We onboarded our team with VS integrated Copilot.
I regularly use inline suggestions. I sometimes use the suggestions that go beyond what VS suggested before the Copilot license… I am regularly annoyed at suggestions shifting code around, greyed-out text that is sometimes ambiguous with real code (grey commas and semicolons), and the Ctrl key conflicting with basic cursor navigation (Ctrl+Right arrow).
I am very selective about where I use Copilot. Even for simple systematic changes, I often prefer my own editing, quick actions, or multi cursor, because they are deterministic and don’t require a focused review that takes the same amount of time but with worse mental effect.
Probably more than my IDE “AI”, I use AI search to get information. I have the knowledge to assess results, and know when to check sources anyway, in addition, or instead.
My biggest issue with our AI use is the code some of my colleagues produce and hand me for review, where I don’t/can’t know how much they themselves thought about the issue and solution at hand. A missing description, or worse, an AI-generated summary, compounds that issue.


And it’s so popular! It must be good!


Many times I’ve used piefed, written a comment, some longer, some shorter, and without fail, it denied posting after I’d written it out, without telling me specifically why I can’t post. Just no permission. Consequently, it never stuck with me.


I’ve been using TortoiseGit since the beginning, but it’s Windows-only.
In TortoiseGit, the Log view is my single entry point to all regular and semi-regular operations.
Occasionally, I use native git CLI to manage refs (archive old tags into a different ref path, mass-remote-delete, etc).
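The tag-archiving part can be sketched like this (tag name and archive path are made up; the repo setup only exists to make the snippet self-contained):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=me@example.com -c user.name=me commit -q --allow-empty -m "init"
git tag v1.0

# Copy the tag ref into an archive namespace, then drop the original tag.
git update-ref refs/archive/tags/v1.0 refs/tags/v1.0
git tag -d v1.0

# Against a real remote you would also push the deletion and the archived ref:
#   git push origin :refs/tags/v1.0 refs/archive/tags/v1.0
```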
Originally, it was a switch from TortoiseSVN to TortoiseGit, and from then on, no other GUI or TUI met my needs and wants. I explored/tried out many alternative GUIs and TUIs over the years, but none felt as intuitive, gave as much overview, or offered the same capabilities. Whenever I’m in Visual Studio and use git blame, I’m reminded that it is lacking: in the blame view, you can’t blame a previous revision to navigate backwards through history within a code view. I can do that in TortoiseGit.
I’ve also tried out GitButler and jj, which are interesting in that they’re different. Ultimately, they couldn’t convince me for regular use when git works well enough, and additional tooling can introduce new complexities and issues when you don’t make a full switch. I remember GitButler adding refs that made plain git use impractical. jj had a barrier to entry (understanding and following its concepts and workflow) that I think I simply haven’t passed yet, so I can’t give a more accurate assessment.
I did explore TUIs as no-install-required fallback alternatives, but in practice, I never needed them. When I do use the console, native git covers my needs: on remote shells, plain git; locally, Nushell on top of native git for mass queries and operations.
AI-generated art not being copyrightable doesn’t necessarily mean AI-generated art can’t violate original copyright, though.
This is not about AI-generated code being relicensed to different AI-generated code. It’s about the original licensed code being relicensed or otherwise violated through AI-generated code.