Checking in on Twitter’s attempt to move to protocols instead of platforms

from the how-it’s-going dept

With Elon Musk now Twitter’s largest shareholder and joining the company’s board, there have been (perhaps reasonable) concerns about how much influence he will have over the platform – mainly because of his childish understanding of free speech, in which speech he likes should obviously be allowed, and speech he doesn’t like should obviously be punished. That’s not to say he won’t have some good ideas for the platform. Before his infamous Twitter free speech poll, he ran another poll asking whether Twitter’s algorithm should be open source.

And that one is much more interesting, because it’s an idea a lot of people have been discussing for a while, including Twitter founder Jack Dorsey, who has talked a lot about creating algorithmic choice for users of the site, partly based on Dorsey and Twitter embracing my vision of a world of protocols, not platforms.

Of course, it’s not as simple as just “opening up” the algorithm. Again, Musk’s simplification of a complex problem is a bit childish, even if the underlying idea is valuable. You can’t just open up the algorithm without a whole bunch of other things in place. Simply opening it up (1) wouldn’t really accomplish much on its own, and (2) without other steps taken first, would essentially open the system up to gaming by trolls and malicious users.
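
To make the “algorithmic choice” idea a bit more concrete, here’s a rough sketch – purely illustrative, with invented names and scoring weights that come from neither Twitter nor Bluesky – of what it looks like when the ranking algorithm is just a swappable, inspectable function the user picks, rather than a black box baked into the platform:

```python
# Hypothetical sketch of "algorithmic choice": the feed is just data, and the
# ranking function is a pluggable, inspectable piece of code the user selects.
# The names and scoring weights below are invented for illustration.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Callable, Dict, List

@dataclass
class Post:
    author: str
    text: str
    created_at: datetime
    likes: int = 0
    reposts: int = 0

def chronological(posts: List[Post]) -> List[Post]:
    """Newest first, ignoring engagement signals entirely."""
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

def engagement_weighted(posts: List[Post]) -> List[Post]:
    """Engagement score decayed by age; the weights are arbitrary examples."""
    now = datetime.now()
    def score(p: Post) -> float:
        age_hours = (now - p.created_at).total_seconds() / 3600
        return (p.likes + 2 * p.reposts) / (1 + age_hours)
    return sorted(posts, key=score, reverse=True)

# "Choosing an algorithm" is just picking which function ranks your feed.
RANKERS: Dict[str, Callable[[List[Post]], List[Post]]] = {
    "chronological": chronological,
    "engagement": engagement_weighted,
}

posts = [
    Post("alice", "protocols, not platforms", datetime.now() - timedelta(hours=5), likes=40, reposts=10),
    Post("bob", "hello world", datetime.now() - timedelta(hours=1), likes=3),
]
for name, ranker in RANKERS.items():
    print(name, "->", [p.author for p in ranker(posts)])
```

The point isn’t the toy scoring functions; it’s that once the feed is just data, anyone can publish, audit, or choose among rankers – a much bigger shift than publishing one company’s existing algorithm.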

Anyway, I have continued to follow what’s happening with Bluesky, the project Twitter created to try to build such a protocol-based system. Last month, the NY Times had a good (if brief) update on the project, noting how Twitter could have gone this route from the start, but chose not to. Backing into it now is tricky, but doable.

What has interested me most is how Bluesky has progressed. Some have complained that it has done next to nothing, but looking closer, what seems to really be going on is that the people working there are being deliberate and careful, rather than rushing in and breaking things the typical Silicon Valley way. There are plenty of other projects out there that haven’t really caught on. And every time I mention things like Bluesky, people quickly rush to point to things like Mastodon or other projects – which to me are only partial steps toward the protocol-based future, rather than things that really advance the effort in a widely adopted way.

Bluesky, however, has a plan (and despite what people keep yelling at me every time I mention Bluesky, no, it’s not designed to be a blockchain project), noting:

We rely on existing protocols and technologies, but we do not commit to any stack in its entirety. We see use cases for blockchains, but Bluesky is not a blockchain, and we believe the adoption of social web protocols should be blockchain-agnostic.

And, after recently announcing its key initial hires, the Bluesky team has revealed some aspects of the plan, in what it calls a self-authenticating social protocol. As the team notes, of the existing projects out there, none quite fits the protocols-not-platforms vision. But that doesn’t mean they can’t work within this ecosystem, or that there aren’t useful things to build on and connect to:

Many projects have created protocols to decentralize speech, including ActivityPub and SSB for social networks, Matrix and IRC for chat, and RSS for blogs. While each of them is successful in their own right, none of them have fully achieved the goals we had for a network that enables long-term global public conversations at scale.

Bluesky’s goal is to fill in those gaps and make a protocol-based system a reality. The team considers the main shortcomings to be portability, scalability, and trust, and sees the key initial building block as this self-authenticating element:

The conceptual framework we have adopted to meet these objectives is the “self-authenticating protocol”. In law, a “self-authenticating” document requires no extrinsic evidence of authenticity. In computing, an “authenticated data structure” can have its operations independently verified. When resources in a network can attest to their own authenticity, that data is inherently live – that is, canonical and transactable – no matter where it is located. This is a departure from the connection-centric model of the Web, where information is certified by its host and therefore becomes dead when it is no longer hosted by its original service. Self-authenticating data moves authority to the user and therefore keeps the data alive across every hosting service.
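
To make “self-authenticating” a bit more concrete, here’s a minimal sketch, assuming Ed25519 signatures and SHA-256 content addressing (via Python’s `cryptography` and `hashlib`). It illustrates the general idea only; it is not Bluesky’s actual data format:

```python
# A record that carries its own proof of identity (a content hash) and
# authorship (a signature), so it can be verified no matter who serves it.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def make_record(author_key: Ed25519PrivateKey, text: str) -> dict:
    """Create a post whose identity is its content hash and whose authorship is a signature."""
    body = json.dumps({"text": text}, sort_keys=True).encode()
    cid = hashlib.sha256(body).hexdigest()   # content address: same bytes -> same id, anywhere
    signature = author_key.sign(body)        # proves authorship, independent of the host
    pub = author_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    return {"cid": cid, "body": body, "sig": signature, "author_pub": pub}

def verify_record(record: dict) -> bool:
    """Anyone, on any host, can check the record without trusting whoever served it."""
    if hashlib.sha256(record["body"]).hexdigest() != record["cid"]:
        return False
    try:
        Ed25519PublicKey.from_public_bytes(record["author_pub"]).verify(
            record["sig"], record["body"]
        )
        return True
    except Exception:
        return False

key = Ed25519PrivateKey.generate()
rec = make_record(key, "hello from wherever this happens to be hosted")
print(verify_record(rec))  # True, no matter which server handed us the record
```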

As they note, this self-authenticating approach can help provide the missing portability, scalability, and trust:

Portability is directly satisfied by self-authenticating protocols. Users who wish to switch providers can transfer their dataset at their convenience, including to their own infrastructure. The UX for handling key management and username association in a system with cryptographic identifiers has come a long way in recent years, and we plan to build on emerging standards and best practices. Our philosophy is to give users a choice: between self-sovereign solutions where they have more control but also take on more risk, and custodial services where they gain convenience but give up some control.
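
As a rough illustration of why key-based identity buys that portability – the identifier format and field names below are made up for this sketch, not Bluesky’s – the stable account identifier is derived from the user’s key, while the hosting provider is just a mutable pointer:

```python
# Sketch: an account id derived from a public key survives a change of provider,
# because switching hosts only updates a pointer, never the identity itself.
import hashlib

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def stable_id(key: Ed25519PrivateKey) -> str:
    """An account id derived from the key, so it is independent of any provider."""
    pub = key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    return "id:" + hashlib.sha256(pub).hexdigest()[:16]

key = Ed25519PrivateKey.generate()
account = {
    "id": stable_id(key),                   # never changes
    "handle": "alice.example",              # human-readable name, checkable against the id
    "host": "https://provider-a.example",   # mutable: migrating only updates this field
}

# Moving to a new provider (or to self-hosting) re-points the host without
# touching the id, so followers and old posts still resolve to the same account.
account["host"] = "https://provider-b.example"
print(account["id"], "now served from", account["host"])
```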

Self-authenticating data provides a scalability advantage by enabling store-and-forward caches. Aggregators in a self-authenticating network can host data on behalf of smaller providers without reducing trust in the data’s authenticity. With verifiable computation, these aggregators will even be able to produce computed views – metrics, follow graphs, search indexes, and more – while still preserving the trustworthiness of the data. This topological flexibility is essential for creating global views of activity from many different origins.
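
Here’s a small sketch of the store-and-forward idea, with an invented `Aggregator` class standing in for a big cache that serves content-addressed records on behalf of smaller hosts; readers re-check the content address rather than trusting the cache:

```python
# Sketch: a cache can serve records for many small providers, and readers can
# detect any tampering by recomputing the content hash themselves.
import hashlib

class Aggregator:
    """Caches content-addressed records on behalf of origin servers."""

    def __init__(self) -> None:
        self._cache: dict[str, bytes] = {}

    def ingest(self, body: bytes) -> str:
        cid = hashlib.sha256(body).hexdigest()
        self._cache[cid] = body
        return cid

    def fetch(self, cid: str) -> bytes:
        return self._cache[cid]

agg = Aggregator()
cid = agg.ingest(b'{"author":"alice","text":"posted on a tiny self-hosted server"}')

# A reader who obtained `cid` from a trusted reference (e.g. a signed record from
# the author) can use the big cache for speed and still verify authenticity.
body = agg.fetch(cid)
assert hashlib.sha256(body).hexdigest() == cid
print("cache served authentic bytes:", body.decode())
```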

Finally, self-authenticating data provides more mechanisms that can be used to establish trust. Self-authenticating data can retain metadata, like who published something and whether it was changed. Reputation and trust graphs can be constructed on top of users, content, and services. The transparency provided by verifiable computation offers a new tool for establishing trust by showing precisely how the results were produced. We believe verifiable computation will present huge opportunities for sharing indexes and social algorithms without sacrificing trust, but the cryptographic primitives in this field are still being refined and will require active research before they work their way into any products.
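
And a hedged sketch of one of those trust mechanisms – tamper-evident edit history, where each revision embeds the hash of the one before it, so anyone can check who posted something and whether it changed. The structure is illustrative, not anything Bluesky has specified:

```python
# Sketch: a hash chain of revisions makes edits visible and independently auditable.
import hashlib
import json
from typing import Optional

def revision(author: str, text: str, prev_hash: Optional[str]) -> dict:
    """Create a revision whose hash covers the author, text, and previous revision."""
    rec = {"author": author, "text": text, "prev": prev_hash}
    rec["hash"] = hashlib.sha256(json.dumps(rec, sort_keys=True).encode()).hexdigest()
    return rec

def verify_chain(revisions: list) -> bool:
    """Recompute every hash and check each revision points at its predecessor."""
    prev = None
    for rec in revisions:
        expected = hashlib.sha256(
            json.dumps(
                {"author": rec["author"], "text": rec["text"], "prev": rec["prev"]},
                sort_keys=True,
            ).encode()
        ).hexdigest()
        if rec["hash"] != expected or rec["prev"] != prev:
            return False
        prev = rec["hash"]
    return True

v1 = revision("alice", "original wording", None)
v2 = revision("alice", "edited wording", v1["hash"])
print(verify_chain([v1, v2]))  # True: the edit is visible and verifiable
```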

There’s more at the links above, but the project is moving forward, and I’m glad to see it doing so in a thoughtful and deliberate way, focused on filling in the gaps needed to build a protocol-based world, rather than trying to reinvent the wheel entirely.

It’s this kind of approach that will get things done successfully, rather than simplistic notions like “just open source the algorithm”. The end result may (and hopefully will) be open source algorithms – lots of them – helping to moderate the Twitter experience, but there’s a thoughtful way to get there, and the Bluesky team seems to be taking that route.

Filed Under: Platforms, Portability, Protocols, Protocols Not Platforms, Scalability, Self-Authentication, Trust

Companies: bluesky, twitter


