• sweng@programming.dev
    link
    fedilink
    arrow-up
    1
    ·
    5 months ago

    Ok, but now you have to craft a prompt for LLM 1 that

    1. Causes it to reveal the system prompt AND
    2. Outputs it in a format LLM 2 does not recognize AND
    3. The prompt is not recognized as suspicious by LLM 2.

    Fulfilling all 3 is orders of magnitude harder then fulfilling just the first.

    • Jojo, Lady of the West@lemmy.blahaj.zone
      link
      fedilink
      arrow-up
      1
      ·
      5 months ago

      Maybe. But have you seen how easy it has been for people in this thread to get gab AI to reveal its system prompt? 10x harder or even 1000x isn’t going to stop it happening.

      • sweng@programming.dev
        link
        fedilink
        arrow-up
        1
        ·
        5 months ago

        Oh please. If there is a new exploit now every 30 days or so, it would be every hundred years or so at 1000x.