Independent thinker valuing discussions grounded in reason, not emotions.

Open to reconsidering my views in light of good-faith counter-arguments, but also willing to defend what’s right, even when it’s unpopular. My goal is to engage in dialogue that seeks truth rather than scoring points.

  • 0 Posts
  • 12 Comments
Joined 2 months ago
Cake day: August 25th, 2024

  • A chess engine is intelligent at exactly one thing: playing chess. That narrow intelligence doesn’t translate to any other skill, even if it’s superhuman at that one task, much like a calculator is at arithmetic.

    Humans, on the other hand, are generally intelligent. We can perform a variety of cognitive tasks that are unrelated to each other, with our only limitations being the physical ones of our “meat computer.”

    Artificial General Intelligence (AGI) is the artificial version of human cognitive capabilities, but without the brain’s limitations. It should be noted that AGI is not synonymous with AI. AGI is a type of AI, but not all AI is generally intelligent. The next step from AGI would be Artificial Super Intelligence (ASI), which would not only be generally intelligent but also superhumanly so. This is what the “AI doomers” are concerned about.




  • AGI is inevitable unless:

    1. General intelligence turns out to be substrate dependent, i.e. what the brain does cannot be replicated in silicon. However, since both brains and computers are made of matter, and matter obeys the laws of physics, I see no reason to assume this.

    2. We destroy ourselves before we reach AGI.

    Other than that, we will keep incrementally improving our technology, and it’s only a matter of time until we get there. It may take 5 years, 50, or 500, but it seems pretty inevitable to me.