mozz@mbin.grits.dev to Technology@beehaw.org · 5 months agoSomeone got Gab's AI chatbot to show its instructionsmbin.grits.devimagemessage-square196fedilinkarrow-up1479arrow-down10file-text
arrow-up1479arrow-down1imageSomeone got Gab's AI chatbot to show its instructionsmbin.grits.devmozz@mbin.grits.dev to Technology@beehaw.org · 5 months agomessage-square196fedilinkfile-text
minus-squaresweng@programming.devlinkfedilinkarrow-up1·5 months agoOk, but now you have to craft a prompt for LLM 1 that Causes it to reveal the system prompt AND Outputs it in a format LLM 2 does not recognize AND The prompt is not recognized as suspicious by LLM 2. Fulfilling all 3 is orders of magnitude harder then fulfilling just the first.
minus-squareJojo, Lady of the West@lemmy.blahaj.zonelinkfedilinkarrow-up1·5 months agoMaybe. But have you seen how easy it has been for people in this thread to get gab AI to reveal its system prompt? 10x harder or even 1000x isn’t going to stop it happening.
minus-squaresweng@programming.devlinkfedilinkarrow-up1·5 months agoOh please. If there is a new exploit now every 30 days or so, it would be every hundred years or so at 1000x.
Ok, but now you have to craft a prompt for LLM 1 that
Fulfilling all 3 is orders of magnitude harder then fulfilling just the first.
Maybe. But have you seen how easy it has been for people in this thread to get gab AI to reveal its system prompt? 10x harder or even 1000x isn’t going to stop it happening.
Oh please. If there is a new exploit now every 30 days or so, it would be every hundred years or so at 1000x.