• 0 Posts
  • 312 Comments
Joined 2 years ago
Cake day: June 30th, 2023


  • Yeh, but you only need 10 vibe code cleaner-uppers per vibe coder.
    And a vibe coder is a 10x developer.
    You just have to mitigate the increased cost of AI API calls.
    It pretty much balances out, with the obvious 20% efficiency boost - which is where everyone makes their money: companies, developers and ~~shovel~~ AI platforms… All 20% efficiency boost. Which directly relates to profit boosts. 20% line goes up!
    Which also pays for the datacenters, the ~~shovels~~ GPUs, the power, the cooling and the water for the cooling. It's all cheaper, cause AI is at least 20% more productive.

    Even if your vibe-coder-code-fixers turn into vibe-coder-code-vibe-fixers… That's just another 20% efficiency boost. Basically printing money! Oh, but you need to buy more ~~shovels~~ GPUs. But that's also a win because ~~shovels~~ GPUs don't have unions or require holidays. Think of the profits! They work 24/7.
    And all you need are vibe-coder-code-vibe-fixer-code-fixers.

    …As long as your vibe-coder-code-vibe-fixer-code-fixers don’t turn into vibe-coder-code-vibe-fixer-code-vibe-fixers (I’m so lost, I think that’s right).

    Edit: forgot some shovels





  • Yes they are, you’re just not looking at it, you fucking idiot

    Fuck that. You think I’m gonna watch that librul propaganda conspiracy theory shit?
    Immigrants are taking all our jobs, they're leeching off unemployment benefits, they're fucking idiots (they aren't sending their best!) and they're the masterminds behind the price of eggs.
    Where does librul fake news propaganda media tell you about that?!
    Donald father-son-holy-spirit-lord-and-savior-only-tru-christian trump is dismantling the swamp. Sometimes to root out the bad apples, some eggs have to be broken. Just… not MY eggs. Hopefully

    (Them, probably)







  • Yes. I was laying on the sarcasm heavily.
    I presume that’s what these oracle services provide.
    Essentially, it hosts the US government's GDP NFT, so you can right-click and download it just like every NFT crypto bro hates you doing.
    Whether it's actually the US Government hosting the file, or these oracle services hosting it… It doesn't matter.

    Why not just host the files on a government website with appropriate file hashes (so users can verify the file is still the same), and let the Internet Archive and the National Archives take snapshots of the files and pages and hashes etc…? That's a well-regarded site archival system plus the governmental archival system. Has redundancy, pedigree and public acceptance.
    Fuck it, publish just the hash on some blockchains so the "fingerprint" of the report is immutable. But call it what it is.
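    And the "verify the hash" part isn't exotic, it's a few lines of script. A rough sketch (the filename and the published digest are made-up placeholders, not anything an agency actually publishes):

    ```python
    import hashlib

    def sha256_of_file(path: str) -> str:
        """Return the SHA-256 hex digest of a local file, read in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical values for illustration: the report you downloaded and the
    # digest published next to it (on the .gov page, or pinned in a blockchain entry).
    published_hash = "0" * 64  # placeholder - paste the digest copied from the publishing page
    local_hash = sha256_of_file("gdp_report_q2.pdf")

    print("hashes match" if local_hash == published_hash else "MISMATCH - the file has changed")
    ```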

    The report isn’t “published on the Blockchain”.
    It is linked from some blockchains.
    There is still a file hosted by some servers.
    You can't download your favourite blockchain, take it to the top of Mount Rushmore with no internet and inspect the US GDP figures without first downloading the file linked from the blockchain.

    Blockchain oracles are entities that connect blockchains to external systems, allowing smart contracts to execute depending on real-world inputs and outputs. Oracles give the Web 3.0 ecosystem a method to connect to existing legacy systems, data sources and advanced calculations.

    https://cointelegraph.com/learn/articles/what-is-a-blockchain-oracle-and-how-does-it-work




  • Programming isn’t about syntax or language.
    LLMs can’t do problem solving.
    Once a problem has been solved, the syntax and language is easy.
    But reasoning about the problem is the hard part.

    Like the classic case of “how many 'r’s in ‘strawberry’”, LLMs would state 2 occurrences.

    Just check Google's AI Mode.
    The strawberry problem was found and reported on, and has been specifically solved.

    Prompted how many 'r's in the word 'strawberry':

    There are three 'r’s in the word ‘strawberry’. The letters are: S-T-R-A-W-B-E-R-R-Y.

    Prompted how many 'c's in the word 'occurrence':

    The word “occurrence” has two occurrences of the letter ‘c’.

    So, the specific case has been solved. But not the problem.
    In fact, I could slightly alter my prompt and get either 2 or 3 as the answer.
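    Which is the joke, really: the general problem is a couple of lines of deterministic code that doesn't care which word or letter you pick. Quick sketch:

    ```python
    # Deterministic letter counting - solves the general problem,
    # not just the specific strings people have already complained about.
    def count_letter(word: str, letter: str) -> int:
        return word.lower().count(letter.lower())

    print(count_letter("strawberry", "r"))   # 3
    print(count_letter("occurrence", "c"))   # 3 (not "two occurrences of the letter 'c'")
    print(count_letter("occurrence", "r"))   # 2
    ```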


  • I was in a company that tried to develop some ai apps, but kind of failed, but I learned a lot about how to use ai, what can be done and what is not sensible to do with ai

    That’s basically the “AI is replacing jobs. AI can’t replace jobs”.
    C-suite don't get it. It's a hugely accessible framework that anyone can use, but only trained people can use the results. And c-suite trust the results, because software has been so predictable (so trustworthy) in the past.
    C-suite replace employees with AI. AI can’t actually do the job that it pretends it can do. Everyone suffers, and the people selling the shovels profit the most from the gold rush.
    It lies on its resume and in its interviews, but in ways that are hard to detect.

    I bet there was a similar sentiment when automation replaced blue collar jobs.
    And yet, all those automations still require tool and die manufacturing and maintenance. Buy a tool & die from wherever, purpose-built for your process, and a year down the line you require the supplier to maintain the actual die - the actuators and machine can be maintained by anyone, but the "business logic" is what produces a good, high-quality part. Process changes? Updated design? Changing supplier to a slightly different material? Back to the supplier for a new die.
    But so many jobs were made “redundant” by cheap tooling and automation, and now it’s (nearly) impossible to actually manufacture something at scale in America.

    Except LLMs action the next most likely step, to the most likely dimensions, based on the prompt and on the popularity of similar/previous processes.
    Fine for art and subjective medium, not for manufacturing and not for engineering.

    I guess you could write automated tests which define the behaviour you want.
    Probably better to write the behaviour you want and get AI to generate automated tests…
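    For the first option, it'd look something like this: the human writes the tests that pin down the behaviour, and the model only gets to fill in the implementation until they pass. Names and the module are made up for the example:

    ```python
    # Hypothetical behaviour spec: parse_price is whatever function the AI is
    # asked to generate; these tests define what "correct" means before it exists.
    import pytest

    from pricing import parse_price  # hypothetical module to be generated

    def test_parses_plain_number():
        assert parse_price("19.99") == 1999          # prices stored in cents

    def test_strips_currency_symbol_and_commas():
        assert parse_price("$1,249.50") == 124950

    def test_rejects_garbage():
        with pytest.raises(ValueError):
            parse_price("three dollars")
    ```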


  • I think it can elevate the level of a power user. But not to the level of a sysadmin, unless the user is then picking apart everything the LLM is telling them to do and reading the man pages. At which point, they are pretty much just learning to become a sysadmin.

    A smart power user would likely search for some solutions to a problem, get some rough background, then ask an LLM to either explain how a solution solves their problem, or to use their research to validate the response of the LLM.

    I don’t think an LLM can elevate a normal user to a power user.
    Because the user is still going to be copying & pasting commands without understanding them (unless they want to understand them, instead of merely solving the problem in front of them. At which point they are learning to become a power user).

    I can imagine a general sentiment amongst employees of "support the use of AI or be the first to be laid off".
    So even if it lets them close tickets earlier, the tickets might not actually be resolved. Instead of kicking it to someone that actually knows how to fix it, they’ve just bodged it - and hopefully that bodge doesn’t fuck things up down the line.
    But the metrics look better, and the employees aren’t going to complain.
    Looks great to a manager



  • I find AI to be extremely knowledgeable about everything, except anything I am knowledgeable about. Then it’s like 80% wrong. Maybe 50% wrong. But it’s significant.

    So, c-suite see it churning out some basic code - not realising that code is 80% wrong - and think they don’t need as many junior devs. Hell, might as well get rid of some mid level devs as well, cause AI will make the other mid level devs more efficient.

    And when there aren’t as many jobs for junior devs, there aren’t as many people eligible for mid devs or senior devs.

    I know it seems like the whole “Immigrants are lazy and leech off benefits. Immigrants are taking all our jobs” kinda thing.
    But actually it’s that LLMs are very good at predicting what the next word might be, not should be.
    So it seems correct to people that don't actually know. While people that do know can see it's wrong (but maybe not in all the ways it's wrong), and have to spend as much time fixing it as they would have if they had just fucking written it themselves in the first place.

    Besides which, by the time an AI prompt is suitably created to get the LLM to generate its approximation of the solution for a problem… Most of the work is done, the programmer has constrained the problem. The coding part is trivial in comparison.


  • Yeh, exactly.
    It’s a private company.
    It’s a huge platform, but YouTube can choose what YouTube is.

    The only way any change happens is if YouTube gets raked over the coals by enough content producers (that they could collectively start their own platform), by the media, and potentially by governments (recognising them as some sort of critical communications service or something and implementing regulations?).
    Or if all the YouTube viewers decide they have had enough and go elsewhere (where, tho? Kinda goes hand-in-hand with creators starting their own platform).

    So the pressure needs to keep building, and YouTube needs to keep doing shitty things. Eventually… Hopefully?.. Something changes: YouTube gets better, or a new platform is born.