This talk by Mark S. Miller was the keynote for Day 2 of the ActivityPub Conference 2019. You can find the talk at https://archive.org/details/apconf-mark. Wikipedia describes him as: “He is known for his work as one of the participants in the 1979 hypertext project known as Project Xanadu; for inventing Miller columns; as the co-creator of the Agoric Paradigm of market-based distributed secure computing; and the open-source coordinator of the E programming language. He also designed the Caja programming language. Miller is a Senior Research Fellow at the Foresight Institute.”
His talk is about social networks that are robust against attacks but open to strangers. He begins with a historical overview of how cooperation has spread to become a dominant form of interaction, which surprises many people: our world today is extraordinarily less violent than it used to be. But what about the online world? It started out as a place where you could reasonably assume pleasant, cooperative interactions, but something happened.
He gives the example of junk mail. No one likes it, but we can manage it by throwing out the junk. In the online world, however, we got SPAM, and it is not feasible to manually sort through and discard the junk in e-mail. Security is similarly asymmetric. In the physical world there is no perfect wall, since enough force will breach any wall; in the online world we can have effectively impenetrable walls through things like cryptography, which is close enough to perfect for our purposes. On the other hand, in the physical world an attack consumes scarce resources of the attacker, while in the online world the attacker can multiply the attack very cheaply.
Mark sees a trade-off between safety and cooperation, and worries that we will give up cooperation to retreat into a closed realm of safety. He wants to change the terms of the trade-off (since you can’t get rid of it) to allow for more cooperation at any given level of safety. To cooperate in a decentralized environment, you have to address the problem of identity. You don’t want an attacker to impersonate you online, but at the same time we don’t want a centralized naming authority that can take your name away, which is censorship. Bitcoin has solved an analogous problem: your key pair cannot be impersonated, and no one can stop you from spending your Bitcoin.
There are two fundamental safety problems we need to solve: Proactive and Reactive. Proactive safety lets us act online in ways that do not create safety issues, but occasionally people will make mistakes, so we also need Reactive damage control.
There are two ways to provide safety: through authorizations or through identities. Authorization gets us to Object Capabilities (OCaps), which is the decentralized way, while identity relies on centralized systems such as Access Control Lists. OCaps are very good at Proactive safety, while identity-based systems are best for Reactive safety. The example he used was a car key. A car key grants its bearer the right to operate the vehicle. You can transfer that right (e.g. to the valet at the restaurant) by handing that person the key. You build Proactive safety by controlling who gets the key. My wife, for example, has a copy of my car key, and I have a copy of hers, and that lets us cooperate in safety; I don’t need to tell my car that my wife is an authorized driver. But if I make a mistake and give a key to the wrong person, I now need a Reactive system to fix my problem.
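The car-key model can be sketched in a few lines of code. This is my own illustration, not something from the talk: authority is simply an unforgeable object reference, whoever holds it can act, and transfer is just handing the reference to someone else.

```python
# A minimal object-capability sketch (illustrative; the names Car and
# Key are my own, not from the talk). Holding the key object IS the
# authorization -- the car never checks who the driver is.

class Car:
    def __init__(self, name):
        self.name = name

    def make_key(self):
        car = self  # the key closes over the car it can drive

        class Key:
            def drive(self):
                return f"driving {car.name}"

        return Key()


car = Car("sedan")
my_key = car.make_key()
wifes_key = car.make_key()   # my wife holds her own copy of the key
valet_key = my_key           # transfer: I simply hand the valet my key

print(valet_key.drive())     # the valet can drive; no identity check
```

Note what is missing: there is no list of authorized drivers anywhere, which is exactly why this model is strong on Proactive safety and weak on tracing misuse afterward.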
So, how do we build a system that combines the strengths of both? If we start with a fundamentally Access Control system, we can add some authorization features; examples of that would be Polaris, Plash, and Bitfrost. In the 1980s a number of hybrid systems were built, like SCAP and the System/38. Mark says that neither of these approaches was successful, and that what we need to do is start with a pure Object Capabilities base and add ingredients to it to improve Reactive safety. The problem here is that an Object Capabilities system has an inherent tendency toward anonymity. Think about trying to know who used the car when all you know is that it was someone with a key. So in an Object Capabilities framework, if you decide in hindsight that a message was abusive in some way, you may not be able to determine where it came from. You might try a hybrid with an identity list, but you run into the problem that identities are designed to come and go.
One approach he discusses is Two-Party Intermediation, wherein the party sending the message logs the message and who it was sent to, and the receiving party logs the message and who it was from. HP had a system like this called SCoopFS (Simple Cooperative File Sharing). This works only as long as you can prevent impersonation and censorship, and the names are meaningful to humans and globally meaningful. No single naming system can do all three, but using several systems together can accomplish all three.
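The two-party logging idea can be sketched as follows. This is my own toy illustration of the pattern, not SCoopFS itself: each side independently records what was sent and to or from whom, so an abusive message can later be traced by either party.

```python
# A toy sketch of two-party intermediation (my own illustration, not
# the HP system): both sender and receiver keep their own log entry
# for every message, giving the Reactive traceability that a pure
# OCap system lacks.

class Party:
    def __init__(self, name):
        self.name = name
        self.log = []    # this party's own record of traffic
        self.inbox = []

    def send(self, recipient, text):
        self.log.append(("sent", recipient.name, text))
        recipient.receive(self, text)

    def receive(self, sender, text):
        self.log.append(("received", sender.name, text))
        self.inbox.append(text)


alice, bob = Party("alice"), Party("bob")
alice.send(bob, "hello")

print(alice.log)  # [('sent', 'bob', 'hello')]
print(bob.log)    # [('received', 'alice', 'hello')]
```

Because each party keeps its own log, neither depends on the other (or on a central server) to reconstruct what happened, which is why the naming properties above matter: the logged names are only useful if they can't be impersonated.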
Another approach is Three-Party Intermediation. Consider this example: you call Uber to get a ride. Sometime later someone you have never met drives up, in a car you have never seen, and you get in it! And similarly, the person who drove up is letting a complete stranger get into her car, and she drives off. Why would we do this? The answer is that Uber is a third party that is essentially vouching for each of us to the other. But can either of the other parties know for sure that the person they are introduced to is independent of the third party?
Four-Party Intermediation systems, where only joint introductions corroborate independence, solve this problem. Secure Scuttlebutt is such a system, where multiple people may attest to the identity of a participant. With this, we have shown how to build a decentralized, federated social network with naming integrity (the names are not subject to impersonation or censorship) that is a network of consent. But is it welcoming to strangers? Not quite. In a pure object capabilities system there is a saying that only connectivity begets connectivity. Two isolated sub-groups will remain isolated until something links them, and someone not already linked to anyone in a community has no way to “knock on the door”. That would require a publicly open inbox of some kind to which people could send messages asking for admission, but if it is open we still have the SPAM problem. To get around that, you need to impose a cost of some kind on the sender. CAPTCHA was such a cost, though it is becoming less useful.
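One well-known way to impose a cost on the sender is a hashcash-style proof-of-work stamp. This sketch is my own illustration of the general idea, not something specified in the talk: the sender must find a nonce whose hash has a few leading zeros, which is cheap for the inbox to verify but costs real computation to produce at SPAM scale.

```python
# A hashcash-style sketch of imposing a cost on the sender (my own
# illustration of the idea). Finding a valid nonce takes work on
# average; checking one takes a single hash.

import hashlib

DIFFICULTY = 3  # number of leading zero hex digits required


def valid(message: str, nonce: int) -> bool:
    """Cheap check the open inbox runs on every incoming message."""
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)


def stamp(message: str) -> int:
    """Expensive search the sender must perform before sending."""
    nonce = 0
    while not valid(message, nonce):
        nonce += 1
    return nonce


msg = "please let me join your community"
nonce = stamp(msg)          # sender pays: thousands of hashes on average
assert valid(msg, nonce)    # inbox verifies: one hash
```

The asymmetry is the point: a genuine stranger knocking on the door pays a trivial one-time cost, while a spammer sending millions of messages pays it millions of times.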
This proposed system would hold people responsible for Requests, Responses, and Introductions, which is more than an identity-based system can do. And we have robust openness: there is no global naming authority, there is a skeptical aggregation strategy and a corroboration-driven disaggregation strategy, and the system is still open to strangers.
This was an interesting but difficult talk for a non-developer, since a lot of the concepts and language come from object-oriented programming, but I think it is a valuable talk to listen to.
Listen to the audio version of this post on Hacker Public Radio!