My colleague gets a ton of emails purporting to be from her associates and clients, urgently asking for her phone number. A lot of these emails claim to be from me. Every last one is spam, and it’s relentless.
Spam is an old story, but of course every instance is disturbing to her – and to me too. The problem is getting worse, and in case you haven’t noticed, it’s moved into voice and video modalities. Synthetic identity fraud has spiked. We’re drowning in fakes. How can we weed out the bots and find each other?
I’m On a Hunt for the Killer Credential
A couple of years ago, I began exploring the need for a killer credential in more depth.
We all know what a killer app is: an irresistible category-creating application that makes the platform it sits on worthwhile to adopt. The decentralized identity world needs a verifiable credential (VC) that makes everyone want to use this new technology stack. Mandating its use, à la the EU wallet initiative, isn’t enough; it needs to be popular.
In my talks on the subject (1 and 2; both require subscription but I’m happy to share the decks), I highlighted that the verifiable credentials space must be viewed as an ecosystem challenge, with economics at its core. Each credential could be seen as a different “product” that involves a complex multi-sided market.
After reviewing a variety of candidate VCs, I found that a credential for "is-a-person" (or if you like, “not-a-bot”) stood out as the strongest.
Candidate issuers are often already in the position of detecting and – in some cases – even sharing this information. That means the liability for attesting to it is already a known quantity.
Candidate verifiers are drowning in bot activity, and adversarial AI is making the situation worse.
And holders – ideally treated as the most important actor – prize their anonymity in a variety of circumstances, and value knowing they too are dealing exclusively with human beings. (To convince yourself, just spend any time at all on Reddit!)
Dave Birch and I have been singing this tune in harmony for a while now. Traditional “I-am-me” identity credentials are pretty demanding to use in terms of both technology and experience, and AI-generated content, deepfakes, and automated systems are making it extra-difficult to prove who's real online. What if merely distinguishing between "is-a-person" and “is-a-bot” could unlock greater trust, privacy, and scalability as a first step towards building a digital relationship?
So when I came across a new research paper, Personhood Credentials: Artificial Intelligence and the Value of Privacy-Preserving Tools to Distinguish Who is Real Online, I was thrilled that this “PHC” topic is getting a deeper look.
Cool Things About the PHC Paper
The paper, with a roster of impressive authors, demonstrates an admirable community approach to ideation on the PHC topic, involving identity, privacy, and AI experts. It’s sharpening a conversation we need to have.
Its approach: Seriously advocating proof-of-personhood as a verifiable credential contender, with a creative analysis and a lot of backup data. This is a significant step forward. Some examples:
A recommendation to limit how many credentials any one issuer can issue per person – called bounded credentials – in an effort to preserve privacy and civil liberties while still protecting against scalable deception (read: AI). (A minimal sketch of the idea follows this list.)
Promoting a marketplace of issuers, as a strategy to protect privacy and civil liberties and to give people a choice, along with suggesting a right to be forgotten – that is, to request retraction of the credential – if its holder loses trust in that issuer. This aligns nicely with my call to recognize individuals’ right to determine their relationship status.
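To make the bounded-credentials idea concrete, here's a minimal sketch of an issuer-side check in Python. The `ISSUANCE_LIMIT` constant, the in-memory registry, and the deduplication key are my own illustrative assumptions – the paper doesn't prescribe a mechanism.

```python
# Toy sketch of "bounded credentials": an issuer caps how many live
# personhood credentials any one verified person can hold at once.
# ISSUANCE_LIMIT and the in-memory registry are illustrative assumptions.
from dataclasses import dataclass, field
from uuid import uuid4

ISSUANCE_LIMIT = 1  # e.g., at most one live PHC per person per issuer

@dataclass
class BoundedIssuer:
    # Maps a privacy-reduced deduplication key (say, a salted hash of the
    # person's verified record) to the IDs of their live credentials.
    live: dict[str, set[str]] = field(default_factory=dict)

    def issue(self, dedup_key: str) -> str:
        held = self.live.setdefault(dedup_key, set())
        if len(held) >= ISSUANCE_LIMIT:
            raise PermissionError("issuance bound reached for this person")
        cred_id = str(uuid4())
        held.add(cred_id)
        return cred_id

    def revoke(self, dedup_key: str, cred_id: str) -> None:
        # The right-to-be-forgotten path: retiring a credential frees up
        # the holder's issuance bound at this issuer.
        self.live.get(dedup_key, set()).discard(cred_id)
```

Even this toy exposes the design tension: to enforce the bound, the issuer must retain some per-person deduplication key, which is exactly the kind of linkable state the rest of the design works to minimize.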
What PHC Adds to PoP
(What do you say, shall we make PoP stand for Proof of Personhood – or Plain old Personhood? I might persist in thinking of it as Proof of Possession regardless…)
The paper refers to some existing PoP systems built on technology other than VCs, such as Worldcoin and Proof of Humanity, and points out an important effect of using VCs instead: the decentralized ecosystem breaks the classic client-server interaction involved in federated identity, ensuring that the issuer can’t track where you’re choosing to share a credential.
When applied to PHCs specifically, this amounts to an opportunity for privacy-preserving single sign-on, where what you share attests only that you’re human, without giving away the rest of your life’s details.
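Here's a minimal sketch of why the issuer drops out of the loop, assuming Ed25519 signatures via the `cryptography` package. The claim format and function names are mine; real VC presentations layer standard formats, nonces, and holder binding on top. But the core point holds: verification needs only the issuer's public key.

```python
# Minimal sketch: once the issuer signs a credential, the holder can
# present it anywhere, and the verifier checks it offline against the
# issuer's public key. The issuer is never contacted at presentation
# time, so it can't track where the credential is used.
# Requires the 'cryptography' package; all names here are illustrative.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# --- Issuance (happens once, at the issuer) ---
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()  # published out of band

claim = b'{"type": "PersonhoodCredential", "is_person": true}'
signature = issuer_key.sign(claim)

# --- Presentation (any number of times, issuer not involved) ---
def verify_phc(claim: bytes, signature: bytes) -> bool:
    try:
        issuer_public_key.verify(signature, claim)
        return True
    except InvalidSignature:
        return False

print(verify_phc(claim, signature))  # True; verifier never pinged the issuer
```

Contrast this with federated single sign-on, where the identity provider participates in every login and can log each relying party you visit.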
Impertinent Questions About the Paper and PHC in General
An article in The Register provides a useful both-sides view of the paper’s PHC treatment, with some spicy quotes – and general-public comments – that challenge the entire notion.
“From the start it's wildly dystopian.”
– Jacob Hoffman-Andrews of EFF in The Register, 3 Sep 2024
Read the whole thing. This level of skepticism is healthy, and I agree with many of the concerns. Let’s get a little more specific about ways the analysis needs to be taken even deeper.
Hidden and Missing Players
The biggest gap is the nature of the upstream threat. If the likeliest issuers of PHCs are the services that today perform identity verification, account registration, and authentication… aren’t they already under siege by “highly capable AI” (the paper’s term)? PHCs are a delivery mechanism, not themselves an attestation mechanism.
Another gap is the absence of a discussion on VC economics. Proposing PHCs is great, but without delving into the economic incentives – like the ones I discussed above with the "Killer Credential" concept – the proposition feels incomplete.
Verifiers, in particular, need more than just an incrementally better fraud detection measure; they likely need a 10x economic rationale to back VCs as a way to benefit from PoP. And for true popularity, credential holders need to find PHCs a home run. Because VC mechanisms are not yet fully standardized, we may not know what the technology sweet spot looks like for some years.
There’s also the matter of who the key players in this space could be. The role of OS, device, and browser makers, like Apple, Google, and Microsoft, is absent. An “edge” method of detecting proof of personhood would seem ideal: a mobile device is equipped with all kinds of sensors that can detect whether you’re moving around with it, helping confirm you’re a person (an “ugly giant bag of mostly water,” according to Star Trek!). AI doesn’t walk around with a phone in the real world, right? Ignoring the potential influence of this bloc feels like a missed opportunity.
Use Case Myopia
I’d like to take a closer look at two of the ideas posited in the paper.
Verifying AI agent delegation: This scenario is proposed as one of the three key benefits of PHCs. It’s interesting because it mixes AI-for-good and protection-against-AI-for-bad. I’m on record as strongly favoring delegation as a solution in general, and have considered questions around how we empower agentic AI to act on our behalf.
Here’s the challenge. The reason it’s important to do this kind of delegation right is that we need to ascribe the agent’s actions to a legally responsible human. Just knowing that the human is, well, human isn’t sufficient. We need to be able to identify them for liability and permission purposes. A verifier system’s acceptance and execution of an agent’s instructions may well depend on knowing more about the human standing behind them.
We can try using PHCs and see if they’re sufficient in this case. My suspicion is that a faceless credential won’t satisfy the need.
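To make the gap concrete, compare what the two payloads would carry. These are hypothetical sketches – the field names are mine, not any standard's schema:

```python
# Hypothetical payloads (field names are mine, not any standard's schema).
# A bare PHC tells a verifier only that some human exists behind the agent:
phc_only = {
    "type": "PersonhoodCredential",
    "is_person": True,          # attests humanity...
    "subject": "anonymous",     # ...but identifies no accountable party
}

# A delegation credential fit for liability has to name who is answerable
# and what the agent is allowed to do on their behalf:
delegation = {
    "type": "DelegationCredential",
    "delegator": "did:example:alice",   # the legally responsible human
    "agent": "did:example:agent-7",     # the AI acting on her behalf
    "scope": ["book_travel", "spend_up_to:500_USD"],
    "expires": "2025-01-01T00:00:00Z",
}
# A verifier executing the agent's instructions can fall back on
# `delegator` when something goes wrong; `phc_only` gives it no one
# to hold accountable.
```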
Right to be forgotten (RTBF): As discussed above, this is proposed as a key check on issuer power. Exercising one’s RTBF could serve as a signal of distrust to an issuer. But if I’m right about the parties that would typically serve as issuers of PHCs, in many circumstances they’re not going to let you disappear into the digital ether without a trace.
They might not be able to forget whether the individual is human, due to being, say, a government entity in charge of identifying real-world people.
Or they might be able to stop issuing PHCs as a convenience, but still be in charge of KYC or other identity verification requirements, due to being, say, a bank. They could delete the credential and start fresh, but they still know who you are, and it’s unreasonable to quit your bank over the fact.
If it’s already an option for the individual to make a regular RTBF request through a GDPR-style pathway, then we know the issuer can handle the request – and it’s a much stronger signal.
Privacy-Destroying Context
Speaking of who already knows who we are… Not only do services in a position to issue PHCs likely know, but so do most services in a position to accept PHCs.
As I documented, at length and perhaps depressingly, in my Consent Is Dead series, companies playing the personal data monetization game often use third-party services to figure out who we are. Even if we share the bare minimum of personal data with them, they can still use our exhaust data to track and re-identify us – even with zero-knowledge proofs in play.
The ecosystem involving consumer-facing businesses, data brokers, identity resolution systems, and customer data platforms is a behemoth, and if it’s resistant to GDPR enforcement, then it’s going to be even more resistant to disruptive and expensive retooling.
Focusing on What We Really Need
Going through this exercise has made me wonder. What if the best PHC play is not its privacy-preservation characteristics, but its potential as an efficient distribution network?
It’s still valuable for services to know how strongly any other credentials presented – identifying or anonymizing – are bound to a status of “actually proved to be human at this moment.” As already noted, this falls under a bot and fraud detection remit, and it’s an urgent need.
“Bot attacks increased by 167 percent in the first half of the year, with a staggering 291 percent increase in intelligent bots.”
– Interesting Engineering, 2 Dec 2023
Given that it’s a time-sensitive binding we seek, maybe application-level services shouldn’t be doing this at all, and we should rely on the OS/device/browser level for it.
Apple did launch a PHC-like solution in 2022 called Private Access Tokens, and Google introduced Private State Tokens a year later – and there’s a slow-rolling IETF standards community around a Privacy Pass spec that these implementations may be based on. I would love to know how we could use these examples to inform PoP and PHC research.
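For a feel for the machinery underneath Privacy Pass-style tokens, here's a toy blind-signature round trip in pure Python, using textbook-sized RSA demo parameters. It's a sketch of the unlinkability idea only – real deployments use standardized blind RSA or VOPRF constructions:

```python
# Toy RSA blind signature: the arithmetic skeleton behind Privacy
# Pass-style tokens. Textbook-sized parameters for illustration only;
# real deployments use standardized blind RSA or VOPRFs.
import secrets
from math import gcd

# Issuer's toy RSA key (classic demo values: p=61, q=53).
n, e, d = 3233, 17, 2753

def blind(m: int) -> tuple[int, int]:
    # Holder picks a random blinding factor r and hides m inside m * r^e.
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break
    return (m * pow(r, e, n)) % n, r

def issuer_sign(blinded: int) -> int:
    # Issuer signs without ever seeing m itself.
    return pow(blinded, d, n)

def unblind(blind_sig: int, r: int) -> int:
    # Holder strips r, leaving a plain signature on m.
    return (blind_sig * pow(r, -1, n)) % n

def verify(m: int, sig: int) -> bool:
    return pow(sig, e, n) == m % n

token = 1234                      # stand-in for a hashed token value
blinded, r = blind(token)
sig = unblind(issuer_sign(blinded), r)
print(verify(token, sig))         # True, yet the issuer never saw `token`
```

The takeaway for PoP and PHC research: unlinkable issuance is well-trodden cryptography; the open questions are the upstream attestation and the economics.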
As for the human-to-human challenge… How does my colleague fend off the email spam claiming to be me? In the end, her solution is simple. The emails get all the details right – except for context. When in doubt, an old-school but effective technique – recently used by a Ferrari executive battling a deepfake – is to demand some ever-changing piece of knowledge that only the right person would have: dynamic knowledge-based authentication. It turns out that online humans can become finely tuned personhood detection devices if they practice.
Happy International Identity Day! Thank you so much for reading.
Please join me on September 17 — tomorrow! — for a webinar with Ping Identity and Sift to talk about how we can win the race against AI-driven fraud! I’m pretty sure personhood credentials will crop up as a topic. 😊 And I hope you’ll also download my white paper on Authentication and AI: A Race Against Time.
Finally, don’t forget to let me know what you think about personhood and PHCs in the comments.
Are PHCs reinventing the wheel?
If a person has any VCs (driver's licence, identity card, passport, ...), then why couldn't some form of zero-knowledge proof over aspects of your personal profile fulfill that role?
If I want to get a quick quote for a vacation, all the travel service needs to know is ZKP-derived information – that I'm a citizen of a given country – and my preferred airport (using a temporary or one-time DID to avoid DID tracking).