Stone Mason, Canadian ExPat living in the UK, Hobbyist musician.

  • I personally believe there could be a place in a person’s creative workflow for the use of AI as a tool to enhance their own creative work…with caveats…

    As for the ethics of using AI to copy an art style? It’s theft. End of.

    These models are trained on stolen data. The artists/musicians/writers/intellectuals, or their estates, never gave permission for their works to be used to train these models. They never receive royalties, or payment of any kind, for the use of their works. And as we’re finding out, at the very least, Meta took that data…those creative works… illegally. People’s lives have been destroyed by laws put in place to protect IP. I personally feel those laws are fucked and should be fully scrapped in favour of something that actually protects the people creating these works. That doesn’t change the fact that when Joe Shmoe shares a torrent he could be hit with fines and possibly jail. Fines alone could essentially make a person’s life literal hell for however long they have left. The companies who have trained these models are likely going to get a “cost of doing business” slap on the wrist.

    It’s ethically ambiguous if you look at it from the standpoint of “IP law shouldn’t exist” while totally ignoring that, even if these companies get away with it, common people nearly never will.


  • because why else would you go to a whole other post to “prove a point” about downvoting?
    It wasn’t you (you claim)

    I do claim. I have an alt, didn’t downvote you there either. Was just pointing out that you were also making assumptions. And it’s all comments in the same thread, hardly me going to an entirely different post to prove a point.

    We will not get the benefits of Generative AI if we don’t 1. deal with the problems that are coming from it, and 2. Stop trying to shoehorn it into everything. And that’s the discussion that’s happening here.

    I agree. And while I personally feel there’s already room for it in some people’s workflows, it is very clearly problematic in many ways, as I pointed out in my first comment.

    I’m not going to even try to justify to you what I said in this post or that one because I honestly don’t think you care.

    I do actually! Might be hard to believe, but I reacted the way I did because I felt your first comment was reductive, and intentionally trying to invalidate and derail my comment without actually adding anything to the discussion. That made me angry because I want a discussion, not because I want to be right and say “fuck you” to anyone who thinks differently.

    If you’re willing to talk about your views and opinions, I’d be happy to continue talking. If you’re just going to assume I don’t care, and don’t want to hear what other people think…then just block me and move on. 👍


  • and here you are, downvoting my valid point

    Wasn’t me actually.

    valid point

    You weren’t really making a point in line with what I was saying.

    regardless of whether we view it as a reliable information source, that’s what it is being marketed as, and results like this harm both the population using it and the people who have found good uses for it. And no, I don’t actually agree that it’s good for creative processes as an assistance tool, and a lot of that has to do with how you view the creative process and how I view it differently. Any other tool at the very least has a known quantity of what went into it; Generative AI does not have that benefit and is therefore problematic.

    This is a really valid point, and if you had taken the time to actually write this out in your first comment, instead of “Tell that to the guy that was expecting factual information from a hallucination generator!”, I wouldn’t have reacted the way I did, and we’d be having a constructive conversation right now. Instead you made a snide remark, seemingly (personal opinion here, I probably can’t read minds) intending it as an invalidation of what I was saying, and then were smug about my taking offence at you not contributing to the conversation and instead being kind of a dick.



  • Ok? If you read what I said, you’ll see that I’m not talking about using ChatGPT as an information source. I strongly believe that using LLMs as a search tool is incredibly stupid…for exactly this kind of reason: they’re so very confident even when relaying inaccurate or completely fictional information.
    What I was trying to say, and I get that I may not have communicated that very well, was that Generative Machine Learning Algorithms might find a niche as creative process assistant tools. Not as a way to search for publicly available information on your neighbour or boss or partner. Not as a way to search for case law while researching the defence of your client in a lawsuit. And it should never be relied on to give accurate information about what colour the sky is, or the best ways to make a custard using gasoline.

    Does that clarify things a bit? Or do you want to carry on using an LLM in a way that has been shown to be unreliable, at best, as some sort of gotcha…when I wasn’t talking about that as a viable use case?


  • Oh, and it also hallucinates.

    This is arguably a feature, depending on how you use it. I’m absolutely not an AI acolyte. It’s highly problematic at every step: resource usage; training on illegally obtained information. The latter wouldn’t necessarily be an issue if people who aren’t tech broligarchs weren’t routinely getting their lives destroyed for the same thing, and if the people creating the material being used for training weren’t also being fucked…just capitalism things, I guess. Attempts by capitalists to cut workers out of the cost/profit equation.

    If you’re using AI to make music, images or video… you’re depending on those hallucinations.
    I run a Stable Diffusion model on my laptop. It’s kinda neat. I don’t make things for a profit, and now that I’ve played with it a bit I’ll likely delete it soon. I think there’s room for people to locally host their own models, preferably trained with legally acquired data, to be used as a tool to assist with the creative process. The current monetisation model for AI is fuckin criminal…
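
    For anyone curious what “locally hosting your own model” can look like in practice, here’s a minimal sketch using the Hugging Face diffusers library. The checkpoint name and prompt are just illustrative placeholders, not a recommendation or what I actually run:

    ```python
    # Minimal local Stable Diffusion sketch (assumes `torch` and `diffusers` are
    # installed and the checkpoint below is available from the Hugging Face hub).
    import torch
    from diffusers import StableDiffusionPipeline

    # Use a GPU if one is available; otherwise fall back to CPU (much slower,
    # but it runs on an ordinary laptop).
    device = "cuda" if torch.cuda.is_available() else "cpu"

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint; swap in your own
        torch_dtype=torch.float16 if device == "cuda" else torch.float32,
    )
    pipe = pipe.to(device)

    # Generate one image from a text prompt and save it to disk.
    image = pipe("a dry stone wall in rolling hills, watercolour style").images[0]
    image.save("wall.png")
    ```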


  • Since you tagged @realitista@lemm.ee I’ll do the same so they see my counter…argument?

    In its first release (2017) it was 50.3MB.

    Nov 2024: 340MB
    Oct 2024: 327MB
    6th Jan, 2024: 266MB
    24th Oct, 2023: 229MB
    21st Oct, 2023: 180MB
    30th Jul, 2023: 214MB
    23rd Jun, 2023: 196MB

    This is a very active project; things are added constantly, then optimised, tweaked, or removed. Going through the releases, it looks like their install packages get bigger over the span of a few months, then get shrunk back down through optimisation. The above numbers are only for the Windows full install package, but all their different packages show a very similar pattern of bloat followed by trimming. I don’t see how this is weird at all.

    What about this is actually concerning for you?
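
    If anyone wants to sanity-check numbers like these themselves, here’s a rough sketch, assuming the project is hosted on GitHub (OWNER/REPO is a placeholder, not the actual repo), that pulls asset sizes straight from the public releases API:

    ```python
    # List installer/asset sizes per release via GitHub's public REST API.
    import requests

    OWNER_REPO = "OWNER/REPO"  # placeholder; substitute the real project
    url = f"https://api.github.com/repos/{OWNER_REPO}/releases"

    for release in requests.get(url, params={"per_page": 30}, timeout=10).json():
        for asset in release.get("assets", []):
            size_mb = asset["size"] / 1_000_000
            print(f"{release['tag_name']:>12}  {asset['name']:<45}  {size_mb:6.1f} MB")
    ```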






  • No worries bud! It really makes me angry what M$ is doing here. Obviously they’re not alone in being a cause of our eWaste “addiction”…but this is really a situation that should be opening people’s eyes. Apple is bad for it too: their laptops have a fairly short lifecycle, and unless you want to (and can) put Linux on your MacBook whatever, it’s no longer secure after usually about 5 years. Intentionally driving the disposable culture: buy, use briefly, replace for arguably too much, repeat.

    Have her try Nobara for a week, but tell her it’s hardly the only option! With how easy it is to include SLK into (almost?) any distro, I really wasn’t kidding when I said the Linux Universe is your oyster! Hell, you could compile your own distro if you’re a masochist!