Online chat service Discord has announced it will begin testing age verification for some users, joining a growing list of platforms trying to work out who is actually behind the screen.

  • Kairos@lemmy.today
    2 days ago

    No it can’t.

    Any cryptographic “solution” relies on some magical disconnect between a picture of your ID and the verification. There’s no way to enforce that disconnect if the verification service is actively malicious. This is similar to the cryptographic properties of Signal’s sealed sender.

    • Hirom@beehaw.org
      2 days ago

      Are you referring to a verification service running on the user’s device, or on a third-party provider’s server?

      Running a verification service on the user’s device can’t work, regardless of the approach. The software on the user’s device can be altered to give fake results.

      Relying on a third-party service for remote verification implies trusting that the third party isn’t giving fake results, regardless of the approach.

      The company/org that needs to verify age should run the actual age verification itself, ideally using a privacy-friendly method and a free reference implementation that has received public scrutiny. That requires trusting only that the government issues accurate IDs, and trusting the math.

      • Kairos@lemmy.today
        2 days ago

        Basically, it reduces to

        1. User does face scan or whatever
        2. Service verifies this
        3. Service generates cryptographic ticket of some sort and promises to not attach this ticket to the user’s face
        4. User uses ticket.

        Fundamentally there’s always a way to break that promise.
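
        The four steps above can be sketched in a few lines. This is a hypothetical illustration (the class and function names are made up, not any real API): the point is that nothing technical stops the service from also keeping a private log that links each ticket back to a face or identity.

        ```python
        # Hypothetical sketch of the ticket-based flow described above.
        # All names here are illustrative, not a real verification API.
        import secrets

        class IDService:
            def __init__(self):
                self._issued = set()
                # The privacy guarantee is purely a promise: nothing stops
                # the service from also keeping this mapping.
                self._secret_log = {}

            def issue_ticket(self, face_scan_ok, user_identity):
                """Step 2-3: verify the scan, hand back an 'anonymous' ticket."""
                if not face_scan_ok:
                    raise ValueError("verification failed")
                ticket = secrets.token_hex(16)
                self._issued.add(ticket)
                self._secret_log[ticket] = user_identity  # the broken promise
                return ticket

            def redeem(self, ticket):
                """Step 4: the platform checks the ticket is genuine."""
                return ticket in self._issued

        svc = IDService()
        t = svc.issue_ticket(face_scan_ok=True, user_identity="alice")
        assert svc.redeem(t)                   # platform sees only a valid ticket
        assert svc._secret_log[t] == "alice"   # ...but the service can deanonymize it
        ```

        The `redeem` side never sees the identity, which is the advertised property; the `_secret_log` line is the part you can’t audit from outside.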

        • Hirom@beehaw.org
          2 days ago

          That approach is indeed easy to break and not privacy-friendly. But it’s also not what this article is referring to.

          For example, the German eID exchanges data directly between a microprocessor in a person’s plastic “eID card” and the platform. The microprocessor proves it belongs to a government-issued eID via a cryptographic key, which is shared with 9,999 other eIDs. This means the only thing a platform learns is that one of 10,000 potential people signed up.
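
          A toy model of that group-key idea, assuming a symmetric scheme for simplicity (the real eID uses asymmetric chip authentication, and all names below are illustrative): because 10,000 cards hold the same key, any two cards answer a challenge identically, so the platform can only narrow a signup down to the group.

          ```python
          # Toy model of the shared-group-key idea, NOT the actual eID protocol.
          # Simplification: an HMAC over a shared symmetric key stands in for
          # the card's cryptographic proof of membership.
          import hmac
          import hashlib
          import secrets

          GROUP_SIZE = 10_000
          group_key = secrets.token_bytes(32)   # burned into all 10,000 cards
          cards = [group_key] * GROUP_SIZE      # every card holds the same key

          def card_respond(card_key, challenge):
              """The chip proves it belongs to a government-issued group."""
              return hmac.new(card_key, challenge, hashlib.sha256).digest()

          def platform_verify(response, challenge):
              """The platform checks group membership, nothing more."""
              expected = hmac.new(group_key, challenge, hashlib.sha256).digest()
              return hmac.compare_digest(response, expected)

          challenge = secrets.token_bytes(16)
          resp_a = card_respond(cards[0], challenge)    # card #0
          resp_b = card_respond(cards[9_999], challenge)  # card #9,999
          assert platform_verify(resp_a, challenge)
          assert platform_verify(resp_b, challenge)
          assert resp_a == resp_b  # identical: the platform can't tell cards apart
          ```

          The last assertion is the anonymity-set property: within one group, every card’s response to a given challenge is indistinguishable.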

          • Kairos@lemmy.today
            2 days ago

            You did not listen to me. The German government could keep a mapping between each chip and the ID it’s issued to. They just don’t. (Edit: I don’t actually know whether they do.)

            It might be easier to understand this via algorithm runtime.

            Imagine there’s some eID database of some sort: a record of which of all possible keys are tied to valid IDs, or something. It doesn’t really matter what it stores.

            Now imagine that Germany wants to issue another one of these eIDs to an adult. This changes that database in some way, and the change only touches a small amount of data, maybe just adding a row. It’d be infeasible to rebuild the entire database for each issuance.

            Now, if this adult were to verify their ID, the process would necessarily touch this, and pretty much only this, data. If the German government were to keep metadata tying that DB entry to the person’s name, it could tie the cryptographic age-verification process back to their real identity.
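
            A minimal sketch of that argument, with every name made up for illustration: issuance writes one row, verification reads that same row, and an optional side table kept by the issuer is all it takes to link the two.

            ```python
            # Illustrative sketch only: issuance touches one row, verification
            # touches the same row, and a side table links it to a person.
            registry = {}   # eID key -> validity flag (the minimal database)
            metadata = {}   # the side table the issuer *could* quietly keep

            def issue_eid(person_name):
                key = f"key-{len(registry)}"     # issuing = one small change
                registry[key] = True
                metadata[key] = person_name      # nothing prevents keeping this
                return key

            def verify(key):
                # Verification necessarily touches exactly this entry...
                return registry.get(key, False)

            k = issue_eid("Alice")
            assert verify(k)
            # ...so if the side table exists, a verification event deanonymizes:
            assert metadata[k] == "Alice"
            ```

            Whether the side table exists is a policy question, not something the cryptography itself can rule out, which is the point of the argument above.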