AI lying would imply that it can think, which it fucking can't, because it's just a glorified statistical output generator.
AI is overhyped while it completely underperforms, and unless someone finds a new way to build an actual intelligence - one that can reason, learn, and produce things without input, which is a completely different approach from everything the current models do - AI doesn't have a real future beyond being a generator for shit that needs to be verified anyway.
It's a fucking math function. Numbers go in, numbers go out. It's a glorified text-suggestion engine.
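The "numbers in, numbers out" point is easy to demonstrate. Here's a minimal sketch using Hugging Face transformers with GPT-2 as a stand-in (the model choice and prompt are just for illustration): the model is a pure function from token IDs to a probability distribution over the next token.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# A language model is literally a function: token IDs in, a probability
# distribution over the next token out. GPT-2 used here purely as an example.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The weather today is", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]  # a score for every token in the vocabulary
probs = torch.softmax(logits, dim=-1)  # numbers out: a probability distribution
print(tok.decode(probs.argmax()))      # the single most likely next token
```

Everything a chatbot does is this function applied in a loop plus some sampling; there's no inner agent in there deciding anything.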
If your results are that it's hiding information or trying to lock the files used for its configuration, then either you specifically allowed it to do that, or, more probably, you have no idea how file locking works in the first place.
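For anyone unclear on that last point, here's a minimal sketch of advisory file locking (assuming Linux and Python's fcntl module; the filename is made up). A lock only exists because the program explicitly asks the OS for one.

```python
import fcntl

# Advisory file locking: the program has to explicitly request the lock.
# If a config file is "mysteriously" locked, it's because the software was
# written to do exactly this, not because it decided anything on its own.
with open("config.yaml", "a") as f:
    fcntl.flock(f, fcntl.LOCK_EX)  # block until we hold an exclusive lock
    # ... read or update the config while cooperating processes wait ...
    fcntl.flock(f, fcntl.LOCK_UN)  # release the lock
```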
I hate this kind of AI doomsaying with a passion, because it makes zero sense and only sways the discussion away from actual problems, while being comparable in its bullshit to anti-vaxxers.
I mean, the problem they're gesturing at is sort of misalignment, but they're making nonsensical claims about the AI trying to go rogue instead of talking about the real dangers of misalignment (like manipulating people into extremism to maximize their engagement on platforms, or simply not being factually correct). Those dangers will always be a limitation of any ML algorithm, and they're the reason it shouldn't be used for 90% of the cases it's being used in.
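To make the engagement-misalignment point concrete, here's a toy sketch (every name in it is hypothetical, not any real platform's code): the ranking objective only sees engagement, so nothing in it can distinguish outrage bait from accurate content.

```python
# Toy illustration of objective misalignment in a recommender.
# The optimizer only sees predicted engagement; truthfulness and user
# wellbeing aren't part of the objective, so they can never win.
def rank_posts(posts, predicted_watch_seconds):
    """Rank posts purely by a (hypothetical) predicted-engagement signal."""
    return sorted(posts, key=lambda p: predicted_watch_seconds[p], reverse=True)

# If extreme content keeps people watching longer, it ranks higher.
feed = rank_posts(["calm_news", "outrage_bait"],
                  {"calm_news": 40.0, "outrage_bait": 95.0})
print(feed)  # ['outrage_bait', 'calm_news']
```

That's the boring, real version of misalignment: a system doing exactly what it was optimized for, not a model "going rogue".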
The article is literally cold reading. They're trying so hard to push their bullshit narrative that it's painful to read. A piece of software that locks its configuration file while running? Oh, I guess my git is also AI gone rogue and doesn't want me to delete it.
Lol.
Yup, agreed. AI is not actual AI, but it seems more and more that the general public doesn't understand this. Just because something has been fed enough data to mimic conversation doesn't mean it's actually thinking for itself.
I agree that the article is pushing a narrative, but you have to recognize that AIs are absolutely not being kept in sandboxes where they cannot affect the outside world.
Some AIs are being asked to write code. Do all the users of that code check it thoroughly before putting it into production?
Apple have recently rolled out Apple Intelligence. Siri can do all kinds of things on your phone.
People are racing each other to put AI in everything, and the restrictions on them will be looser and looser.
Yes, agreed. The problem is the marketing bullshit term that is "AI". What we have now are sophisticated algorithms for remixing data - very impressive, but in no way AI.
The big problem is how inaccurate they are, how they drift over time and need resetting, and how biased they are in terms of what they're taught and what they're allowed to say (which then has unpredictable consequences).
A good example with the visual models is how many seem incapable of drawing a penis and give men vaginas. That's an inherent bias in what the models have been taught, and while amusing, it speaks to all the other biases that models pick up from being given curated data.
Another example is the shit summaries that search engines now bolt on, which are frequently wrong.
This is basically alpha software that has been unleashed on the world and oversold to inflate share prices. All the companies care about is being first and grabbing market share, in the hope that a 90-9-1 split will occur and they'll be the ones holding the 90% slice.