• 1 Post
  • 47 Comments
Joined 3 years ago
Cake day: June 12th, 2023

  • For shared things like documents or database entities, a shared test dataset should also be available.

    Then they cannot play around and modify those outputs anymore without others noticing, because their unit tests would fail.

    My assumption here is only an example; I don't know what you're dealing with.

    While I understand the rant, and am on your side regarding those jerk moves, it's a management issue. Even if they don't act, it's up to you to bring this to attention if it seriously conflicts with your work.

    And in the long run it's a win-win for everyone.

    edit: I work in early development myself, and despite being an engineer by background, I'm coding. So I know quite well how difficult it is to do it properly instead of quick and dirty.
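    A minimal sketch of the shared-dataset idea in Python; `SHARED_DATASET` and `export_invoice` are made-up names standing in for whatever shared entities and producer code apply in your case:

    ```python
    # Sketch (hypothetical names): a shared test dataset that every team's
    # unit tests load, so a change to shared entities breaks tests visibly
    # instead of breaking downstream consumers silently.

    # Stand-in for the shared dataset; in practice this could be a versioned
    # fixture file checked into a common repository.
    SHARED_DATASET = {"invoice": {"id": 42, "total": "19.99", "currency": "EUR"}}

    def export_invoice(entity: dict) -> dict:
        """Stand-in for the producer code owned by the other team."""
        return {"id": entity["id"], "amount": f"{entity['total']} {entity['currency']}"}

    def test_export_matches_agreed_shape():
        # Pins down the agreed output format; any unilateral change to the
        # export (or to the shared dataset) makes this test fail for everyone.
        result = export_invoice(SHARED_DATASET["invoice"])
        assert result["id"] == 42
        assert result["amount"] == "19.99 EUR"

    test_export_matches_agreed_shape()
    print("shared-dataset contract test passed")
    ```

    Once every team's test suite loads the same fixture, a silent change to the shared outputs shows up as a red build instead of a surprise in production.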




  • The plot seems so random that it probably fucks. In a good way.

    While I had my doubts when seeing the thumbnail, I think this has potential to be quite fun.

    The textures look kind of poor, but I guess that fits the concept of the game.

    The desert looks straight out of Breaking Bad, so I guess you can fetch some storyline quirks from there.

    But really, put some work into the cover image. Based on the image, I thought this would be another generic low-effort mobile game ad.





  • Well, indeed, the devil's in the detail.

    But going with your story: yes, you are right in general. But the human input is already there.

    > But you have to have human-made material to train the classifier, and if the classifier doesn't improve, then the generator never does either.

    AI can already understand what stripes are, and can draw the connection that a zebra is a horse with stripes. Therefore the human input is already given. Brute-force learning will do the rest, simply because time is irrelevant and computations occur at a much faster rate.

    Therefore I believe that in the future AI will enhance itself, because the input it already got is sufficient to hone its skills.

    I know that for now we are just talking about LLMs as black boxes that are repetitive in generating output (no creativity). But a 2nd grader also has many skills that are sufficient to enlarge their knowledge without everything being taught by a human. In that sense, I simply doubt this:

    > LLMs will get progressively less useful

    > Where will it get data about new programming languages or solutions to problems in new software?

    On the other hand, you are right that AI will not understand abstractions beyond its realm. But this does not mean it won't excel at stuff it can draw conclusions from.

    And even in the case of new programming languages, I think a trained model will pick up the logic of the code, basically making use of its already learned pattern-recognition skills, and probably at a faster pace than a human can understand a new programming language.






  • I could not comprehend what you were trying to tell us.

    But the summary is:

    The key essence of this post is a deeply disillusioned and angry critique of modern American society, government, and technology. The author expresses a sense of frustration with the perceived emptiness, manipulation, and decay of U.S. institutions—seeing democracy as a facade, tech innovation as overhyped and hollow, and the government as ineffective. They convey a desire for systemic collapse or radical upheaval (accelerationism), suggesting that elites will soon resort to authoritarianism to maintain control. There’s also an undercurrent of socio-political pessimism, nihilism, and rejection of both corporate and state power—coupled with a belief that the current system is unsustainable and nearing a breaking point.