  • The problem, as evidenced by a lot of outsourcing success, is that the people cutting the checks are not fazed by broken software.

    This applies to a lot of industries where laypeople are at the mercy of ‘expertise’: a lot of folks doing things like HVAC or auto repair are actually not that good, and while they are the bane of the good HVAC techs and mechanics, they manage to secure market share just fine. Yes, some mechanics have crappier mechanics to thank for sending them stuff to fix, but the crappy mechanics can do the easy stuff fine, and lots of people are driving around with something busted because the mechanic couldn’t figure it out and told the customer “yeah, it actually is normal for it to be that way”.






  • I agree with you, and I consider it similar to the ‘Hollywood effect’: ask any expert to review typical depictions of their field in film and TV and they will mostly groan at inaccuracies most people won’t catch.

    Problem is that if you compare the works that do it ‘right’ to the ones that do it ‘wrong’, there’s no correlation between doing it right and being more popular; the horribly wrong depictions get plenty of ratings regardless.

    Now one might reasonably argue ‘sure, but that’s purely fiction anyway; if it had real consequences, that would actually matter’, except the same thing constantly happens in real-world situations.

    A work colleague picked up his car from a chain mechanic after having it ‘fixed’ and took us to lunch. There was an awful squeal as he started the car, and I asked why it was making that noise right after getting fixed. He said, “Oh, the staff told me that cars just sound like that after a repair until the parts break in,” and that bullshit worked to get him to pay and walk out the door. I asked if I could take a quick look under his hood, and there was a flashlight wedged against a belt. He just laughed it off and said, “Hey, free flashlight, thanks for figuring that out,” and a few months later he mentioned going back to the exact same place for something else.

    A few days ago I went to a hardware store whose site said they had the item I needed, but under location it said “see associate”. The first associate checked his device and didn’t understand what the deal was, so he said, “Oh, go over there and ask John, he knows all this stuff.” OK, so I walked over to John, who took one glance and confidently said, “Oh yeah, that stuff is locked up in a cage in the back row; just go up to the cage and press the button to get someone to get it.” I thought, “OK, good, a guy who really knows his stuff, and the other staff recognize him for it.” I rolled up to the cage, looked in, and realized “uh oh, this is not the type of stuff I’m looking for; he made a pretty amateur mistake,” but I pushed the button anyway. I showed my phone to the guy who came up and said that “John” told me it would be here but I couldn’t see it, and at the mention of “John” the guy clearly rolled his eyes; it was abundantly clear that John’s “expertise” was a recurring annoyance for him. The actual answer was that they kept that stuff in the back, and the employees are all supposed to see the notation in their devices telling them this, but none of them seem to figure it out, and John just keeps sending people to his department instead.

    This has also come up in the use of AI. I offered that my group could crank out a quick tool to address a potential problem, and one of the people said, “In this new era, we don’t need you for this quick tool; I just asked Claude and it made me this application.” So I tested it and reported that (a) it didn’t actually work: it produced output that looked right, but the target tool wouldn’t accept it because the syntax was wrong, and (b) even if it had worked, it faked authentication and had a huge vulnerability. He just laughed it off and said, “Guess LLMs sometimes aren’t perfect yet.” No consequences for what could have been a disastrous tool, no change in stance on using LLMs, and I’m pretty sure the audience found the report that it didn’t work to be an annoying buzzkill and were rooting for the LLM to do all the work instead. People who need your expertise are desperate to not need it anymore, are willing to believe anything that enables that, and will accept a lot of badness just to not be dependent on you.

    AI produces what is seen as a plausible narrative, and a plausible narrative can win even when the facts are against it. To be very charitable, a quick, “usually” correct answer is indeed frequently “good enough” for a lot of purposes, and an LLM’s speed at generating output can’t be beat.



  • Which even they saw as a diminishing opportunity, so they bought Sun, which gave them Solaris, Java, and a bunch of other miscellaneous crap.

    They get non-trivial amounts of money by punishing anyone who has a business relationship with them with audits and superfluous invoices.

    Story time: a product at my company used to provide a Java Web Start application from a web GUI. We did not use any Oracle software, including any of their Java editions, so we paid Oracle no mind (though I hated the applet demanding Java, but at least it wasn’t ActiveX).

    Anyway, several of our customers said we needed to purge it, because Oracle detected JSPs served by our software, and their audit reasoned that if JSPs were being served but no Java runtimes were detected, the company must obviously be “hiding” the JREs, so Oracle invoiced the company for a paid Java runtime for every employee. This happened to multiple of our clients.

    So that’s what drove us to finally purge Java and embrace modern HTML capabilities. It’s also a way that Oracle makes money, and why no one who knows anything willingly ends up in a business relationship with Oracle.


  • To split hairs, I think it can be conceivable, just not possible to assign a viable probability to.

    Someone could have conceived of a black swan, but since one had never been seen, there was no data to drive a percent chance of finding one.

    Take some virus that, instead of killing anyone, somehow makes everyone compulsively tell the truth. It could be imagined, but it would be ridiculous to ask how that possibility is accounted for in someone’s financial model.

    Even if someone could somehow define a probability for that event, there’s no way to really model the outcome, since the ability to lie has always been part of the economy.