On Tuesday, Microsoft Research Asia unveiled VASA-1, an AI model that can create a synchronized animated video of a person talking or singing from a single photo and an existing audio track. In the future, it could power virtual avatars that render locally and don’t require video feeds—or allow anyone with similar tools to take a photo of a person found online and make them appear to say whatever they want.

  • leds@feddit.dk · 8 months ago

    Great! When will this be included in Teams, so that I can deepfake all my meetings?

    • Dave@lemmy.ml · 8 months ago

      I’ll give it a photo of myself from 10 years ago so that my coworkers don’t realize that I’m getting old.

    • bitfucker@programming.dev · 8 months ago

      Someone was always bound to make this someday. At least the makers are announcing it, which is decent enough. Actually, I’ve always thought: if AI can generate images and voice, what is stopping someone from identity theft? And bam, we are now in an age where digital data will soon be unreliable unless we have a protocol in place to prove its origin.
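The "protocol to prove the origin of the data" imagined here could look roughly like the sketch below: the publisher stamps the bytes with a cryptographic tag, and anyone holding the key can later check whether a copy is untampered. This is a minimal illustration only; real provenance schemes (e.g. C2PA) use asymmetric signatures so verifiers never hold a secret, whereas this sketch uses an HMAC to stay within the Python standard library, and `ORIGIN_KEY` is a hypothetical publisher secret.

```python
import hashlib
import hmac

# Hypothetical secret held by the publisher. A real provenance protocol
# would use an asymmetric key pair instead, so anyone could verify.
ORIGIN_KEY = b"publisher-secret"

def stamp(data: bytes) -> bytes:
    """Return a tag binding the data to its origin."""
    return hmac.new(ORIGIN_KEY, data, hashlib.sha256).digest()

def verify(data: bytes, tag: bytes) -> bool:
    """Check the tag; compare_digest avoids timing side channels."""
    return hmac.compare_digest(stamp(data), tag)

video = b"frame bytes..."
tag = stamp(video)
assert verify(video, tag)             # authentic copy passes
assert not verify(video + b"x", tag)  # any tampering fails
```

Any edit to the bytes, even one character, produces a different SHA-256 digest, so the stamped tag no longer matches.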

    • duncesplayed@lemmy.one · 8 months ago

      If you pump out enough research papers, maybe Microsoft won’t move you over to the Office team.

  • cygnus@lemmy.ca · 8 months ago

    We’re going to need strong digital signatures on everything, and we need them fast, or else we won’t be able to believe anything we see. It will be Steve Bannon’s “flood the zone with shit” dream come true.

    • simple@lemm.ee · 8 months ago

      “We’re going to need strong digital signatures on everything”

      That won’t help anything considering how easy it is to strip metadata.

      • cygnus@lemmy.ca · 8 months ago

        I mean the opposite scenario, where if there’s no signature we assume it’s fake.

        • catloaf@lemm.ee · 8 months ago

          We’ve had email forgery, and signatures to prevent it, for decades, but barely anyone uses those either.

  • simple@lemm.ee · 8 months ago

    That lip sync is scary good. It’s still a little off (the teeth are weirdly stretchy), but nobody would notice it’s a deepfake at first glance.

    Seems very similar to Nvidia’s idea of using only a moving photo for video calls to reduce the bandwidth needed. Very nice.

    • Aatube@kbin.melroy.org · 8 months ago

      We’d need better optimization and more powerful processing on ye average laptop for that to happen.

  • Vendetta9076@sh.itjust.works · 8 months ago

    Revenge porn machine go brrrrr.

    Parents need to learn this stuff and teach their kids about it. Rumored nudes were enough to ruin kids’ lives at my high school, never mind “real” ones.

  • NoneYa@lemm.ee · 8 months ago

    The hair is the giveaway for me. Though I might not have noticed it if I hadn’t been looking for something.

    • Gebruikersnaam@lemmy.ml · 8 months ago

      Also the teeth that keep expanding and shrinking. But if you’re just casually watching something, it’s really hard to notice…

  • skatrek47@sh.itjust.works · 8 months ago

    This is impressive and terrifying as hell… I’m pretty tech savvy and have watched other AI videos, but if you presented this to me as real, I’d totally believe it! I can especially imagine it being spliced with B-roll footage, and maybe “real” footage, to make it even more believable. I’m floored…

  • P03 Locke@lemmy.dbzer0.com · 8 months ago

    No. No, they can’t. This shit still takes lots and lots of training data.

    It’s just like any job. You can’t just fully fake something in one day. At best, you might get 60% of the way there, maybe 80% after adding on some generic experience. But, you’re not going to fully mimic anything without lots of training and experience.

  • AutoTL;DR@lemmings.world (bot) · 8 months ago

    This is the best summary I could come up with:


    On Tuesday, Microsoft Research Asia unveiled VASA-1, an AI model that can create a synchronized animated video of a person talking or singing from a single photo and an existing audio track.

    In the future, it could power virtual avatars that render locally and don’t require video feeds—or allow anyone with similar tools to take a photo of a person found online and make them appear to say whatever they want.

    To show off the model, Microsoft created a VASA-1 research page featuring many sample videos of the tool in action, including people singing and speaking in sync with pre-recorded audio tracks.

    The examples also include some more fanciful generations, such as Mona Lisa rapping to an audio track of Anne Hathaway performing a “Paparazzi” song on Conan O’Brien.

    While the Microsoft researchers tout potential positive applications like enhancing educational equity, improving accessibility, and providing therapeutic companionship, the technology could also easily be misused.

    “We are opposed to any behavior to create misleading or harmful contents of real persons, and are interested in applying our technique for advancing forgery detection,” write the researchers.


    The original article contains 797 words, the summary contains 183 words. Saved 77%. I’m a bot and I’m open source!