• theshatterstone54@feddit.uk · 2 days ago

    I’m actually on this man’s side.

    The idea-stealing he talks about is not unheard of, and multiple people or groups coming up with similar ideas at the same time by looking at market trends is actually quite common.

    If you also look at the fact that he has evidence for pretty much all his claims,

    AND

    He registered the domain and has evidence of his ideas and ownership of “Open AI” from before Altman’s “OpenAI” was formed

    AND

    He says a lot of his ideas never came to fruition because he couldn’t get funding, but the one thing he didn’t need crazy funding for, investing in Bitcoin when it was $10 per coin, is something he did end up doing, and it left him well-off.

    All of that, to me, is enough evidence that this man is one hell of an unlucky individual.

    And as such, I believe him.

    • skillissuer@discuss.tchncs.de · 22 hours ago

      i’m not. just because he’s the underdog here, does that mean you’re gonna ignore all the harms of generative ai up to this day? it’s like complaining that big oil stole the idea of adding tetraethyllead to gasoline from you and that you got no profits as a result

      • theshatterstone54@feddit.uk · 18 hours ago

        Not necessarily. A lot of the harms disappear when everything goes open, which is what this person stands for, and what OpenAI was supposed to stand for.

        Open LLM + Open Training Data = Open AI

        Copyright and IP concerns disappear with an open dataset.

        Open models are inherently more trustworthy because of an obvious reduction in vendor lock-in.

        • skillissuer@discuss.tchncs.de · 18 hours ago

          Copyright and IP concerns disappear with an open dataset.

          i don’t think i’d agree with that; it doesn’t matter if the dataset goes open if the content went into it without any consideration for the authors

          also even things like thispersondoesnotexist were used to mass-create fake identities and such

            • theshatterstone54@feddit.uk · 13 hours ago

            Yeah, but something like that would be super easy to find and fix without going through lawsuits. And I’d argue the dataset creators would be far less likely to add copyrighted material to the training data when it’s all out in the open and they can immediately be made to remove it and retrain the AI without that data.

  • utopiah@lemmy.ml · 2 days ago

    What this shows is a total lack of originality.

    AI is not new. Open source is not new. Putting two well-known concepts together wasn’t new either because… AI has historically been open. A lot of the cutting-edge research is done in public laboratories, with public funding, and is published in journals (sadly often behind a paywall, but still).

    So the name and the concept are both unoriginal.

    A lot of the popularity OpenAI gained by using a chatbot is not new either. Relying on ever-larger datasets and benefiting from Moore’s law is not new either.

    So I’m not taking either side, neither this person’s nor the corporation’s.

    I find that claiming to “own” common ideas is destructive for most.