7 Comments
John Wunderlich

Your comment about more identity <> more security reminded me of something that Bruce Schneier wrote years ago. Paraphrasing, he said that we don't need to know anything ABOUT the person sitting next to us on the plane (i.e. identity attributes). All we need to know is that they are not carrying explosives or weapons.

Eve Maler

Yowza. Interesting use case. I think I disagree about there being no need to understand more about individuals who carry harmful items onto planes, because they may be indicators of intent to do very serious harm, and we very much need better prevention vs. remediation in those cases. But I take your point: In general, it's certainly true that lots of identity data gets collected – more than just for security purposes.

John Wunderlich

It’s possible you are conflating two use cases. The first, and the one that Schneier’s view applies to, is the assurance that a passenger does not have a weapon on their person. The second use case, and the one that too easily falls down the police-state surveillance rabbit hole, is the one that thinks we can preserve our democracy using surveillance, especially if we can avoid the whole pesky warrant thing. That’s how we ended up with no-fly lists with thousands of people listed based on weak to nonexistent evidence. Police work should only be easy in a police state.

Eve Maler

I get it — it’s a classic tension. So how does one do a good job of providing a high “harm assurance level” assertion? Self-assertion by passengers is untrustworthy (HAL0?), as is casual visual inspection (maybe HAL1). A lot of it goes to intention, where discovery techniques tend to be invasive.

John Wunderlich

I think the point is this: all that should be needed for reasonable assurances of the physical safety of passengers are physical measures. It becomes an identity issue only when one makes the (possibly unwarranted) assumption that identity predicts risk, as one does in a surveillance state.

James Bonifield

Totally agreed with those observations. Your point that “more identity != more security” takes on, I think, another layer of meaning in a world captivated by LLMs.

With these models being stochastic in nature, I think we are at risk of drifting even further away from a clear, deterministic way of enforcing and managing access (AuthN + AuthZ). And if the solution to this problem relies too heavily on AI alone (at least as we currently understand it), we risk adding an unauditable and unaccountable piece to this quagmire.

Eve Maler

Thanks, James. So true! Classic “default deny” approaches often seek to progressively limit access at each step. But now we’re asking agents airily to “go access whatever is necessary to accomplish my mission” — possibly outright impersonating us.
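For readers unfamiliar with the term, the “default deny” stance mentioned above can be sketched in a few lines. This is a minimal illustration with hypothetical names and a hypothetical policy table, not any particular product’s API: access is granted only when an explicit allow rule matches, so a broad agent request to “access whatever is necessary” fails closed.

```python
# Hypothetical allow-list policy: (subject, action, resource) triples.
ALLOW_RULES = {
    ("alice", "read", "calendar"),
    ("alice", "write", "calendar"),
    ("agent-for-alice", "read", "calendar"),  # narrowly delegated scope
}

def is_allowed(subject: str, action: str, resource: str) -> bool:
    """Deny unless an explicit rule matches -- the 'default deny' stance."""
    return (subject, action, resource) in ALLOW_RULES

# The agent's delegated read succeeds; anything beyond it is refused.
print(is_allowed("agent-for-alice", "read", "calendar"))   # True
print(is_allowed("agent-for-alice", "write", "calendar"))  # False
```

The contrast with a stochastic gatekeeper is the point: every decision here is deterministic and auditable against the rule table.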
