3 Comments

Great article, Dorian. One of the few good things to have emerged from the "web3" bubble has been a renewed spurt of development in privacy-centric infrastructure stacks and protocols. I'm a particular fan of source.network. We're building a decentralized science framework on top of their database: http://bit.ly/dtwins-wp


Thanks Rafael. When I think about what it means to be "decentralized", the Web is already that (and always has been) to the extent that you can spin up a piece of e-waste as a server and run it off a home cable modem (something I have been doing since the late 90s). What are of course still centralized are mechanisms for a) discovery, b) availability, and c) identity. Discovery of course being things like DNS and search engines, availability being (mainly implemented as) CDNs, and identity reducing to accounts on various social media platforms. (I actually worked at a federated identity startup in 2005-06.)

My observation here is that in order to have distributed (not just decentralized) discovery, you need distributed availability (let's bracket big payloads for the moment), and in order to manipulate either, you need some kind of distributed identity protocol. Where blockchain may play a role in the short term (I'm assuming that if blockchain isn't involved, it isn't "web3") is in floating these relatively small chunks of content in the ether in perpetuity. But then there's the question of what you actually put in the payload. Is this something you're dealing with?


Hey Dorian, thanks for the great questions. The capabilities you're describing, IMO, are exactly what folks like Source and Protocol Labs (creators of IPFS, Filecoin, the new compute-over-data framework Bacalhau, etc.) have been building. Two of the core elements here are decentralized identifiers (DIDs) and verifiable credentials (VCs). You can see the v1 spec for DIDs here, and implementations are already available: https://www.w3.org/TR/did-core . I don't pretend that all the pieces are robust (e.g., there are some known infosec problems with the current VC specs), but it's all coming together, and Protocol Labs in particular has been putting substantial grant money behind this development programme.
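For concreteness, here's roughly what a minimal DID document from that spec looks like, sketched as a Python dict (the `did:example` method and the key material are illustrative placeholders in the spec's style, not a real identity):

```python
# A minimal DID document, following the shape described in the W3C DID
# Core v1 spec (https://www.w3.org/TR/did-core). All identifiers and key
# material below are placeholders for illustration only.
did_document = {
    "@context": ["https://www.w3.org/ns/did/v1"],
    "id": "did:example:123456789abcdefghi",
    "verificationMethod": [{
        # A public key the DID subject controls, referenced by fragment id.
        "id": "did:example:123456789abcdefghi#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": "did:example:123456789abcdefghi",
        "publicKeyMultibase": "zPlaceholderPublicKeyValue",
    }],
    # Which verification methods may be used to authenticate as this DID.
    "authentication": ["did:example:123456789abcdefghi#key-1"],
}

# A DID resolver maps the "did:..." identifier to this document; the
# subject then proves control by signing with a referenced key.
assert did_document["verificationMethod"][0]["controller"] == did_document["id"]
```

The important design point is that resolution and proof of control need no central registrar: any method (ledger-based or not) that can resolve the identifier to a document like this can participate.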

What we're doing specifically is a higher-level protocol on top of those primitives to perform distributed yet fully verifiable computational statistics, i.e., to build shared, evolving inference graphs where nodes can consume, compose, and contribute assertions of the type "if you believe this input set of statements and this statistical model, you should believe this output set of statements".
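To make that concrete, one such assertion might be modeled as a record like the following. To be clear, every field name here is my own illustration of the idea, not Source's or our actual schema; the `bafy...` strings stand in for content hashes:

```python
from dataclasses import dataclass

# Hypothetical shape of one inference-graph assertion. Field names and
# hash values are illustrative only, not a real protocol schema.
@dataclass(frozen=True)
class Assertion:
    inputs: tuple   # content hashes of the statements being assumed
    model: str      # content hash of the statistical model applied
    outputs: tuple  # content hashes of the statements thereby derived
    signer: str     # DID of the node vouching for this inference

a = Assertion(
    inputs=("bafy...stmt1", "bafy...stmt2"),
    model="bafy...model",
    outputs=("bafy...conclusion",),
    signer="did:example:lab-node-1",
)

# Composition: another node can cite a.outputs among its own inputs,
# chaining "if you believe X, you should believe Y" steps into a graph
# whose every edge is signed and content-addressed, hence verifiable.
```

Because inputs, model, and outputs are all content-addressed and the assertion is tied to a DID, anyone can re-run the statistics and check that the edge holds, without trusting a central coordinator.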
