• plc@feddit.dk · 3 days ago

    Hmm, isn’t that a somewhat paradoxical take?

    If the proposed solution is to pin your idea of what is safe to a rigorously formalised security policy, doesn’t that entail that you know what you’re doing (i.e. that your problem domain is narrow enough for you to comprehend it fully)? And isn’t that exactly not the case for most, if not all, applications that benefit from AI?

    I didn’t read the complete article, so mea culpa, but some examples of systems where this is feasible would be welcome.

    It certainly doesn’t seem feasible for my go-to example, software development with Claude Code. A rough sketch of what I mean is below.
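
    For what it’s worth, here is roughly what I imagine such a formalised policy would have to look like for a coding agent. This is a made-up sketch (the command allowlist, the project root, all of it is my own assumption, not anything from the article), and even this toy version already blocks legitimate work like installing a dependency:

        # Hypothetical sketch of an allowlist-style policy for a coding agent's
        # shell access; every name and rule here is invented for illustration.
        import shlex
        from pathlib import Path

        ALLOWED_COMMANDS = {"ls", "cat", "git", "pytest"}    # commands the agent may run
        WRITABLE_ROOT = Path("/home/dev/project").resolve()  # only tree it may touch

        def command_allowed(command_line: str) -> bool:
            """Return True only if every part of the command fits the policy."""
            tokens = shlex.split(command_line)
            if not tokens or tokens[0] not in ALLOWED_COMMANDS:
                return False
            # Reject any path argument that escapes the project tree.
            for token in tokens[1:]:
                if token.startswith("-"):
                    continue
                path = (WRITABLE_ROOT / token).resolve()
                if not path.is_relative_to(WRITABLE_ROOT):
                    return False
            return True

        print(command_allowed("pytest tests/"))         # True
        print(command_allowed("rm -rf /"))              # False: 'rm' is not allowlisted
        print(command_allowed("cat ../../etc/passwd"))  # False: escapes the project tree

    The moment the agent needs pip, a compiler, or the network, a policy like this either grows beyond what I can reason about or gets in the way, which is exactly my doubt about Claude Code.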

    • d.rizo@piefed.social · 3 days ago

      I only use AI cautiously, mostly because I see how AI companies apply “flex tape” to fix issues. They themselves don’t really know how the AI will behave; they only treat symptoms and ignore the real problem.

      That’s why I think your stance of “we don’t ‘really’ know what AI is doing” is right. “Knowing the problem well enough to formalize the risks/policies” is a valid point, but on the other hand, I think you would agree that we have already solved similar problems. Think of a “black box”.

      If you treat AI as a black box, you might reach a conclusion similar to the author’s.

      I know this is not the fix we need, or even the one we want. But it’s better than playing beta tester for AI companies.
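
      To make the black-box idea concrete, here is a rough sketch of what I mean: never act on the model’s output directly, run it through your own checks first. The function names and the forbidden patterns are my own invention, not anything from the article or a real API:

          # Hypothetical sketch: treat the model as an opaque black box and
          # validate whatever it produces before acting on it.
          import re

          FORBIDDEN_PATTERNS = [
              r"\brm\s+-rf\b",           # destructive shell commands
              r"curl\s+[^|]+\|\s*sh",    # pipe-to-shell installs
              r"PRIVATE KEY|AWS_SECRET", # credential-looking strings
          ]

          def generate_patch(prompt: str) -> str:
              # Placeholder for the opaque model call you cannot inspect.
              return "print('hello from the black box')"

          def apply_if_safe(prompt: str) -> str | None:
              """Return the model's output only if it passes the policy checks."""
              patch = generate_patch(prompt)
              for pattern in FORBIDDEN_PATTERNS:
                  if re.search(pattern, patch):
                      return None  # reject: the box produced something forbidden
              return patch         # only now hand it on to be applied

          print(apply_if_safe("add a greeting"))  # prints the harmless placeholder patch

      Of course this only treats symptoms too; the point is just that the checks are mine rather than the vendor’s flex tape.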