• nelly_man@lemmy.world
    18 hours ago

    The court document lays out how OpenAI developed its latest model to prioritize engagement. In this case, OpenAI had a system that was consistently flagging his conversations as high risk for harm, but it didn't have any safeguards to actually end the conversation, the way it does when it's asked to generate copyrighted material.

    The complaint ultimately argues that OpenAI should have implemented safeguards to stop the conversation once the system determined it was high risk, rather than letting the large language model keep responding.