• xia@lemmy.sdf.org · +5 · 5 hours ago

    I imagine it is theoretically possible to successfully vibe-code, but probably not with a conventional project layout, nor would it look much like traditional programming. Something like: your interaction is primarily a “requirements list”, which gets translated into interfaces and heavy requirements tests against those interfaces; each implementation file is disposable (regenerated) and super-self-contained; and you can only “save” (or commit) implementations that pass the tests.

    …and if you are building a webapp, it would not be able to touch the API layer except through operational transforms (which trigger new [major] version numbers). Sorta like MCP.

    Said another way, if we could make it more like a “ratchet” incrementing, and less like an out-of-control aircraft… then maybe?!?
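The ratchet idea above can be sketched in a few lines (all naming here is hypothetical, not any real tool's API): regenerate disposable implementations, and keep only the first one that passes the requirements tests.

```python
def ratchet(candidates, passes_tests):
    """Try disposable, regenerated implementations in order; keep ("commit")
    only the first one that passes the requirements tests."""
    for impl in candidates:
        if passes_tests(impl):
            return impl  # the ratchet clicks forward one notch
    return None  # nothing passed; the last good commit stands

# Toy usage: the candidate "implementations" are just functions here.
attempts = [lambda x: x - 1, lambda x: x + 1]   # first attempt is buggy
good = ratchet(attempts, lambda f: f(2) == 3)   # the requirements test
assert good is attempts[1]
```

The point of the shape is that there is no way to move forward except through the tests, so a failed generation can only be discarded, never merged.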

    • I vibe code helper functions and gut them to repurpose them for my needs. But I suppose even that isn’t really vibe coding; it’s actually more like browsing Stack Overflow.

    • zalgotext@sh.itjust.works · +3 · 3 hours ago

      For that to work, the people doing the vibe coding would need to be experienced and skilled at writing test suites and managing strict version-control practices. At which point you’re not really a vibe coder, you’re an actual software engineer. So what you’re describing is just software engineering with extra steps lol

      • xia@lemmy.sdf.org · +2 · 2 hours ago

        Well, I do have MBSE on the brain, but the idea here is more like a low-code/no-code environment with an ABSOLUTELY ENORMOUS “pit of success”… so large that even GenAI can reliably fall into it. Numbered tabs, you go left to right answering questions and fiddling with prompts, paint-by-numbers for working software.

  • logi@piefed.world · +77 · 1 day ago

    Yes. Except that Cursor is running at a loss, and so is the company running the LLM that they pass all the work on to.

    • abbadon420@sh.itjust.works · +27 · 1 day ago

      It’s not about the company. It’s the investors that are making the profits. They don’t care whether it’s making a profit or not, as long as they are making one themselves.

      • exu@feditown.com · +3 · 17 hours ago

        OTOH, OpenAI is not on the public stock market, so current investors can’t really sell their shares, and there’s no way for them to actually realize the valuation it has.

      • snooggums@piefed.world · +22 · 1 day ago

        How are the investors making a profit when the company is being run at a massive loss?

        Probably selling their shares to the next grifter or something; I don’t know how the stock-market casino actually works.

        • abbadon420@sh.itjust.works · +25 · 1 day ago

          Yeah, that’s basically it. They’re betting that they won’t be holding the shares when the company falls. Sometimes they actually bet the opposite.

    • FaceDeer@fedia.io · +6/−2 · 1 day ago

      So many of the complaints I see about LLM behaviour can be so easily solved by just adding “don’t behave this way” to the prompt. Most LLM frameworks these days let you add stuff like that to the default system prompt so you don’t even have to remember to do it.
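As a sketch of that, assuming the common system/user message shape (the rule strings and function names here are made up, not any framework's real defaults):

```python
DEFAULT_RULES = [
    "Do not invent APIs you have not verified exist.",
    "Do not rewrite files you were not asked to touch.",
]

def build_messages(user_prompt, extra_rules=()):
    """Prepend standing "don't behave this way" rules as the system message,
    so they apply to every request without retyping them each time."""
    system = "\n".join([*DEFAULT_RULES, *extra_rules])
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages("Refactor utils.py", extra_rules=("Keep diffs minimal.",))
assert msgs[0]["role"] == "system"
assert "Keep diffs minimal." in msgs[0]["content"]
```

Most agent frameworks expose some equivalent of this: a default system prompt you edit once, which then rides along with every request.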

  • BeigeAgenda@lemmy.ca · +9/−4 · 1 day ago

    Recently I roo-coded a Node.js MVP without knowing too much about Node, though I knew a bit of JS/CSS/HTML, although it’s been years since I last used them.

    I got something working decently by:

    • Make a project plan and use cases
    • Take (very) small steps
    • Commit often
    • Throw away bad attempts
    • Make test cases
    • Hand edit from time to time, especially CSS stuff.

    Would I have been able to fling something together by reading some Node.js guides and using Stack Overflow? Yes. Would it have taken around the same time? Yes, but without test cases and documentation. Do I think vibe coding is the best thing since sliced bread? No!

  • generator@lemmy.zip · +6 · 1 day ago

    After using opencode.ai to create some Python apps and a web UI: when you ask it to do something, you don’t know whether it will fix things or break everything.

  • FaceDeer@fedia.io · +6/−4 · 1 day ago

    The very first comparison fails, though. I run LLMs locally on my own computer; tokens cost me nothing.

      • FaceDeer@fedia.io · +3 · 9 hours ago

        I pay for my electricity. It uses roughly the same amount of power when I’m running an LLM as it would if I was playing a game. It’s negligible.

        And contrary to all the breathless headlines about water-guzzling data centers, my computer doesn’t consume any water at all when I run an LLM.
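Back-of-envelope, with assumed numbers (≈400 W under GPU load, ≈$0.17/kWh; both vary a lot by rig and region), the gaming comparison is easy to check:

```python
def session_cost_usd(watts, hours, usd_per_kwh):
    """Electricity cost of running a PC at a given draw for a while."""
    return watts / 1000 * hours * usd_per_kwh

# Assumed numbers: ~400 W draw, one hour, ~$0.17/kWh.
cost = session_cost_usd(400, 1, 0.17)
assert round(cost, 3) == 0.068  # a few cents, gaming-session territory
```

The same arithmetic scales linearly, so even heavy daily use stays in pocket-change territory at residential electricity prices.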

        • edinbruh@feddit.it · +2/−1 · 6 hours ago

          If you count only the cost to you, maybe it doesn’t consume water, but your toy still guzzled lakes while it was training. Plus, the hardware to run a full-sized LLM is expensive, so your bragging about how it costs nothing is like a millionaire preaching to gamblers that it’s better to just be rich than to try to win at the slots.

          • FaceDeer@fedia.io · +3 · 6 hours ago

            Plus, the hardware to run a full sized LLM is expensive

            It’s a regular gaming PC. Are you going to dismiss all gamers as “millionaires”?

            • edinbruh@feddit.it · +2/−1 · edited · 6 hours ago

              I specifically said “full sized”: a PC with a modern GPU and more than 32 GB of VRAM is not a regular computer that most gamers have access to. If you are running a 7B model on a GTX 1080 or even an RTX 3060, you are not running a full LLM like the ones you would get from a subscription service.
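A weight-only back-of-envelope supports the size point (rough rule of thumb: GB ≈ billions of parameters × bytes per parameter; the KV cache and activations need more on top):

```python
def weights_gb(params_billion, bytes_per_param):
    """Weight-only memory estimate; KV cache and activations add more."""
    return params_billion * bytes_per_param  # 1e9 params * bytes / (1e9 B/GB)

# A 7B model quantized to ~4 bits (0.5 byte/param) fits a mid-range card:
assert weights_gb(7, 0.5) == 3.5   # GB
# A 70B model at fp16 (2 bytes/param) overwhelms even a 32 GB card:
assert weights_gb(70, 2) == 140    # GB
```

So the gap between "runs on a gaming GPU" and "runs the model behind a subscription service" is one to two orders of magnitude of memory, not a settings tweak.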

              • FaceDeer@fedia.io · +2 · 5 hours ago

                Yes, I know. You’re saying that a 32 GB graphics card is millionaire hardware? You’ve got a weird view of the cost of these things.

                • edinbruh@feddit.it · +2/−1 · 5 hours ago

                  It’s an analogy; it needs to be similar in principle, not in numbers. A subscription to ChatGPT also costs less than what gamblers spend at the slots. But whatever, I don’t care enough to argue much more.