

This is entirely correct, and it's deeply troubling to watch the general public use LLMs for confirmation bias when they don't understand anything about how these systems work. It's not "accidentally confessing," as the other reply to your comment suggests. An LLM is just designed to process language, and because it's trained on the largest datasets in history, there's practically no way to know where any individual output came from unless you can directly verify it yourself.
The information you prompt it with is tokenized, run through a transformer model whose hundreds of billions or even trillions of parameters were adjusted according to god only knows how many petabytes of text data (weighted and sanitized however the trainers decided), and then detokenized and printed to the screen. There's no "thinking" involved here, but if we anthropomorphize it like that, the explanation could be any number of things: it "thinks" that's what you want to hear; it "thinks" Musk is racist because mountains of its training text call him that; etc. You're talking to a faceless amalgam unslakably feeding on unfathomable quantities of information with minimal scrutiny and literally no way to enforce quality beyond bare-bones manual constraints.
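To make that pipeline concrete, here's a minimal sketch of the same tokenize → predict → detokenize loop, using GPT-2 through the Hugging Face transformers library (my choice of model and library purely for illustration; frontier models are vastly larger, but the mechanics are the same):

```python
# Minimal sketch of the prompt -> tokens -> transformer -> tokens -> text
# pipeline described above. Assumes `transformers` and `torch` are installed;
# GPT-2 is tiny next to modern frontier models, but the flow is identical.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Is this claim true?"
inputs = tokenizer(prompt, return_tensors="pt")  # text -> token IDs

# The model only predicts statistically likely next tokens from its frozen
# weights; nothing here consults a source, checks a fact, or "thinks".
output_ids = model.generate(**inputs, max_new_tokens=20)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))  # IDs -> text
```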
There are ways to exploit LLMs into revealing sensitive information, yes, but you then have to confirm that the information is true, because all you've done is send data into a black box and get something back out. You can get a GPT to solve a sudoku puzzle, but you can't parade the answer around before you've checked that the solution is actually correct. You cannot ever, under literally any circumstance, trust anything a generative AI produces for factual accuracy; at best, you can use it as a shortcut to an answer that you then attempt to verify.
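The sudoku case is actually the best-case scenario, because verification is cheap and mechanical. A minimal checker (my own sketch; the function name and grid format are assumptions, not from any real tool) looks like this:

```python
# Sketch: verify an LLM-produced 9x9 sudoku solution instead of trusting it.
# `grid` is assumed to be a list of 9 rows of 9 ints in 1..9.
def is_valid_sudoku(grid):
    def ok(cells):
        # A unit is valid iff it contains exactly the digits 1 through 9.
        return sorted(cells) == list(range(1, 10))

    rows = grid
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    boxes = [
        [grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)]
        for br in range(0, 9, 3) for bc in range(0, 9, 3)
    ]
    return all(ok(unit) for unit in rows + cols + boxes)
```

The same principle applies everywhere: if you can't write (or find) the equivalent of `is_valid_sudoku` for the claim in front of you, the model's output is just unverified text.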
OP, you say "free, open source, and fully attributed," but it's really not fully attributed. I know Google will live, but you need to be more attentive to licensing and credit. Here are some major problems (in no particular order):
In your LICENSE file, you call the copyright status of the icons "uncertain". This confuses the hell out of me, because the bottom of Google's icon pack page clearly reads: "Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License."
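Satisfying CC BY 4.0 mostly means crediting the source, linking the license, and noting any changes. Something along these lines in your LICENSE or README would cover it (the wording below is my own example, not official Google text):

```
Icons from Google's icon pack, licensed under the Creative Commons
Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/).
Changes, if any, are noted in this repository.
```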
Now you have all your research done for you, and Cunningham’s law is proven right again.