• stoy@lemmy.zip · 3 days ago

    Just as with the incident a few months ago, when another AI/LLM destroyed a project in a similar way, this happened due to a failure to classify AI/LLM risks properly.

    1. In both instances the person treated the AI/LLM as if it were capable of reason. It is not. Remember what Isaac Asimov wrote about the robots in his universe: they are capable of extreme logic, but they cannot deal with reason at all. The same applies here: an AI/LLM can absolutely be logical, but it cannot use reason.
    2. Failure to treat the AI/LLM as a potentially bad actor: the person looked at the AI/LLM, probably ran a few simple tests, then granted it full access to highly sensitive files. Because the software is quite good at chatting with humans the way we chat with each other, the developer ends up trusting it as if it were a person (a rough sketch of the safer alternative follows below).
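
    A minimal sketch of what "treat it as a potentially bad actor" could look like in practice, assuming a hypothetical agent whose only file tools are wrappers enforcing a sandbox directory. The paths and function names here are illustrative assumptions, not taken from any specific tool:

    ```python
    from pathlib import Path

    # Hypothetical least-privilege setup: instead of handing the agent unrestricted
    # file access, every file operation goes through a gatekeeper that confines it
    # to a disposable sandbox directory. The sandbox path is an assumption.
    ALLOWED_ROOT = Path("/srv/agent-sandbox").resolve()

    def safe_read(requested_path: str) -> str:
        """Read a file only if it resolves inside the sandbox directory."""
        target = (ALLOWED_ROOT / requested_path).resolve()
        if ALLOWED_ROOT not in target.parents and target != ALLOWED_ROOT:
            raise PermissionError(f"Agent tried to read outside the sandbox: {target}")
        return target.read_text()

    def safe_write(requested_path: str, content: str) -> None:
        """Write a file only inside the sandbox, never to production data."""
        target = (ALLOWED_ROOT / requested_path).resolve()
        if ALLOWED_ROOT not in target.parents:
            raise PermissionError(f"Agent tried to write outside the sandbox: {target}")
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(content)

    # The LLM only ever sees these two functions as tools; it never gets a shell
    # or production credentials, so a confidently wrong plan can only damage
    # disposable copies.
    ```

    The point is the same as with any untrusted process: grant the narrowest access that still lets it do the job, and assume it will eventually misuse whatever you give it.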