I believe much of our paranoia concerning AI stems from our fear that something will come along and treat us the way we treat all the other life on this planet, which is bitterly ironic considering our propensity for slaughtering each other on a massive scale. The only real danger to humanity is humans. If humanity is doomed, it will be our own stupid fault, not AI.
I think much of it comes from “futurologists” spending too much time smelling each other’s farts. These AI guys think so very much of themselves.
It’s crazy how little experts like these think of humanity, or how much they underestimate our tolerance for and adaptability to weird shit. People used to talk about how “if we ever learned UFOs were a real phenomenon, there would be global mayhem!” because people’s world views would collapse and they’d riot, or whatever. After the trickle of articles over the past few years since that first NY Times story, I’ve heard basically no one care (no one who didn’t already seem to be into them before, anyway). Hell, we had a legitimate attempt to overthrow our own government, and the large majority of our population just kept on with their lives.
The same AI experts, 10 years ago, would have predicted that the AI we have right now would cause societal collapse.
Idk about societal collapse, but think about the amount of damage the World Wide Web and social media have done and continue to do. Look at the mess cars have made of cities around the world over the course of a century. Just because it doesn’t happen overnight doesn’t mean serious problems can’t occur. I think we have 10 years before the labour market is totally upended, with or without real AGI. Even narrow AI is capable of fucking things up on a scale no one wants to admit.
Agreed, partially. However, the “techbros” in charge are, for the most part, not the researchers. There are futurologists who are real scientists and researchers, and dismissing them smacks of the anti-science knuckleheads who ignored warnings to wear masks and get vaccinated during the pandemic. Not everyone interested in the future is a techbro.
“Futurologist” is a self-appointed honorific adopted by people who fancy themselves “deep thinkers” while thinking of nothing more deeply than how deep they are. It’s like declaring oneself an “intellectual.”
I’m sorry, but this is a really dumb take that borders on climate change denial logic. A sufficiently large comet is an existential threat to humanity. You seem to have this optimistic view that humanity is invincible against any threat but itself, and I do not think that belief is justified.
People are right to be very skeptical about OpenAI and “techbros.” But I fear this skepticism has turned into outright denial of the genuine risks posed by AGI.
I find myself exhausted by this binary partitioning of the discourse surrounding AI. Apparently you must either be a cult member who worships the coming god of the singularity, or believe that AI is impossible or incapable of posing a serious threat.
You seem to have this optimistic view that humanity is invincible against any threat but itself

I didn’t say that; you’re making assumptions. However, I don’t take AGI to be a serious risk, not directly anyway. AGI is a big question mark at this time, hardly comparable to a giant comet or a pandemic, things for which we have experience or solid scientific evidence. Could it be a threat? Yeah. Do I personally think so? No. Our reaction to it, and our exploitation of it, will likely do far more harm than any direct action by an AGI.
But if AI learns from us…
True. But we are still talking about what is essentially an alien mind. Even if it can do a good impression of human intelligence, that doesn’t mean it is a human mind. It won’t have billions of years of evolution and thousands of years of civilization and development behind it.