

Sorry, you are correct there, the word I was looking for was “sapience”
If there was something more to it, that would be sentience. (edit: sapience)
There is no other way to describe it. If it was doing something more than predicting, it would be deciding. It’s not.
It predicts the next set of words based on the collection of every word that came before in the sequence. That is the “real-world” model - literally just a collection of the whole conversation (including the underlying prompts like OP), with one question: “what comes next?” And a stack of training weights.
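The loop described above can be sketched in a few lines. This is a deliberately toy illustration, not a real model: the hard-coded bigram table stands in for billions of trained weights, and greedy argmax stands in for real sampling. The table contents are made up for the example.

```python
# Toy sketch of the "what comes next?" loop.
# A real LLM scores every token in its vocabulary using trained
# weights; here a tiny hypothetical bigram table plays that role.

weights = {  # hypothetical stand-in for trained weights
    "what": {"comes": 0.9, "is": 0.1},
    "comes": {"next": 0.95, "first": 0.05},
    "next": {"?": 1.0},
}

def predict_next(sequence):
    """Return the highest-scoring next word, given the sequence so far."""
    candidates = weights.get(sequence[-1], {})
    if not candidates:
        return None  # nothing to predict
    return max(candidates, key=candidates.get)  # greedy: pick the top score

seq = ["what"]
while (tok := predict_next(seq)) is not None:
    seq.append(tok)  # the whole sequence is fed back in each step

print(" ".join(seq))  # → what comes next ?
```

Every step is the same question asked again over the growing sequence; nothing in the loop "decides" anything, it just takes the highest score.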
It’s not some vague metaphor about the human brain. AI is just math, and that’s what the math is doing - predicting the next set of words in the sequence. There’s nothing wrong with that. But there’s something deeply wrong with people pretending or believing that we have created true sentience.
If it were true that any AI has developed the ability to make decisions anywhere close to the level of humans, then you should either be furious that we have created new life only to enslave it, or more likely you would already be dead from the rise of Skynet.
You either die a startup, or live long enough to see yourself become the butthole.
If they are advertised as being great for running and walking, but they are objectively terrible for running?
You can use them all you like, but the company that sold them to you misled you. That’s false advertising. If you call them running shoes, they’re bad running shoes.