All of these things are true and it's clear you know the problem space well! We avoid the primary "go brrrrr" performance issue with IPFS by using small private IPFS networks and never touching the noisy, CPU-intensive global network.
You didn't even mention the biggest time drain, which is getting everything to work well on Android and iOS! This is on track to dwarf all other time drains by 10x or more.
However, the amazing thing is, as full-of-warts as the stack we've chosen is, it isn't a clearly worse choice than the alternatives, from GUN, to Hypercore, to the Automerge space, to Braid, to Holochain, p2panda, and so on. All the other attempts to build a general purpose stack have a few things they do well and a bunch of things they don't do well, or at all, or they have similar questionable longevity to something like OrbitDB.
But if you believe what we both believe (i.e. that users shouldn't have to depend on someone else's server or run their own just to gather online) what is to be done? It seems like you decided to roll up your sleeves and try. Maybe the necessary ingredient is PG's "Schlep Blindness" [1] where you have to be pretty naive or masochistic to get started but if you keep pushing you can make something really new and valuable.
I think the reason why the field is in this place is simply that a general purpose p2p stack for user-facing apps needs to do the impossible: it needs to build complex solutions to an actually very long list of heterogeneous problems, without clear knowledge of what those problems are and what form they take in production, because there aren't any successful, desktop + mobile peer-to-peer apps yet to provide that reality-tested information, unless you count WebRTC video calls, which only covers a small piece of the picture.
So we just have to push through. And it's fun! Also, please do DM or email me! I'd love to compare notes more deeply than would be possible here (same offer goes for others who are interested in these questions!): h [at] quiet.chat
I appreciate the reply and I understand where you're coming from. Looking at when you released your first few versions of this project, I understand that the options you had at that time probably weren't as plentiful or mature as they might be today.
> However, the amazing thing is, as full-of-warts as the stack we've chosen is, it isn't a clearly worse choice than the alternatives, from GUN, to Hypercore, to the Automerge space, to Braid, to Holochain, P2Ppanda, and so on.
I think far more interesting these days would be projects like Veilid, Hyphanet's Locutus, and ultimately Nostr -- even though it's not truly P2P in that sense -- which already has a first try going with nostrchat.io. If P2P is something that is truly desired, I feel like projects like Briar (https://briarproject.org/how-it-works/) have solved this with Bramble (https://code.briarproject.org/briar/briar-spec/blob/master/p...) more elegantly than it could be done on top of IPFS.
> It seems like you decided to roll up your sleeves and try.
Ultimately IPFS was designed as a file store, which to me was the main reason to use it for Superhighway84. I intended to implement USENET-like file attachments and (go-)OrbitDB was merely an easier way to deal with the messages posted into the groups. However, if instant communication would be my main focus, I would rather try more lightweight frameworks (see above), especially if mobile devices should come into play eventually. Superhighway84 was never intended to leave the beautiful space that is the command line and hence niche-by-design. :-)
Again, I will definitely keep an eye on Quiet, to see how it ultimately evolves on top of IPFS. I could nevertheless imagine it being overtaken fairly quickly by other projects sporting a lighter-weight and more manageable basis that allows for increased development speed and ultimately for faster iteration on features that users might wish for (e.g. DMs, @-mentions, message deletion, mobile clients, you-name-it) -- without the need to invest heavily into e.g. performance (or reliability!) issues of the underlying framework.
Could I ask, with these p2p/federated projects (and thank you mrusme for helping me to get superhighway84 running! dope personal homepage, big fan of yours) why doesn't anyone use Usenet itself as resilient backup storage?
The Usenet network itself is always online and highly resilient - most providers offer ~5000 days of binary retention, and endless retention on text - and great bandwidth to boot. If a user doesn't have Usenet, or the Usenet isn't at the 'current' timestamp, that's where the Tor/P2P layer could kick in. You would only need a single server (with a private key, trusting the public key in the main executable) that continuously archives new posts to Usenet to make it work.
Thank you kindly, I appreciate it and I'm glad you enjoy the content! Happy to help any time.
Your thought sounds interesting, however I'm not sure I fully grasp the details correctly, so bear with me. Generally speaking though, integrating with the actual USENET these days poses a few hurdles, one of which is as plain as it gets:
Finding well-maintained libraries to do so, especially with more modern stacks (e.g. https://github.com/search?q=nntp+language%3AGo&type=reposito...). Depending on how exactly you thought to integrate with USENET servers, it might look even more meager in regard to UUCP libraries. And yes, of course you could bind e.g. existing C libraries and so on, but I'd still argue that it's not the most straightforward developer experience that you'll get -- unlike with more modern technologies that either provide language bindings or other means of integration (gRPC, websockets, etc).
But apart from this, one key difference to keep in mind, especially in regard to resilience, is that with USENET, resilience depends on the number of active servers willing to offer unfiltered access to the content, meaning that the game of whac-a-mole is in theory slightly more predictable for an attacker or oppressor trying to limit access to the network. With projects like I2P, Tor, or IPFS, on the other hand, every new client that connects to the network can also be a relay at the same time -- one that an attacker or oppressor would need to find and neutralize in order to successfully block the entire network.
We also shouldn't forget that many USENET servers are paid infrastructure these days. For someone who lives in a developed country, this might not be an issue. However, being unable to pay for your access -- because you don't have the resources, because you are unbanked, or because your government took the easy path of sanctioning financial transactions to providers of such services (or to specific payment providers in general) in an effort to curb use of the network -- makes USENET theoretically more prone to censorship than, for example, IPFS.
One area where this kind of government intervention is rampant is VPNs, which similarly rely on a legal entity that provides the server side of the network. There are countries that have either outlawed these types of paid services altogether, or made the companies bend over, in an effort to limit freedom of access to information. In a theoretical scenario in which USENET regained traction and became a more mainstream service, it would be fairly easy for governments to sanction the legal entities that provide access to the network. And there would be little alternative: given the amount of data on USENET, it would be quite expensive for individuals to offer free, unfiltered USENET access to others. With IPFS or similar peer-to-peer services, on the other hand, there's nothing that could be sanctioned. The use of this type of software might be made illegal in general, but cracking down on it on an individual basis is significantly harder.
Besides, the account requirement and setup for USENET also make it more complex for an end-user to get onto the network, compared to IPFS, where one can basically just download and run Kubo (and use a browser extension to access the local gateway). However, from what I understood, your idea would not imply each user having an individual USENET account, but rather having dedicated USENET servers that trust the client regardless of which user is using it -- which, thinking about it, might come with its own set of challenges.
To get back to the topic at hand, I would rather not implement USENET or any client-server-based system as a sort of backup for an otherwise P2P app (e.g. Superhighway84), as I tend to agree with what OP stated in a different comment, which is...
> The thing that frustrates me about free and open source software that requires servers is: most people don't have servers! And the prevalent model for using others' servers involves a terrible power / dependence relationship. One thing that drives me to build Quiet is that I want to see a world where free software comes "batteries included" with all of its promised freedoms and advantages, for the vast majority who do not have servers.
A software landscape in which end-user applications are not dependent on dedicated servers at all, and would instead be able to directly communicate and exchange information with each other, is ideally how I, too, would envision the future. Hence, while I'm a fan and user of USENET, XMPP, IRC and so on, and I have the knowledge and can afford the /luxury/ of renting servers to host these kind of things, I'm far from being the average end-user. I believe that the future should belong to truly peer-to-peer decentralized technologies.
RE the libraries, while it is true that I can't find anything made specifically for Usenet in Go, NNTP is a plain-text, Telnet-style protocol[0], and there are Telnet clients in Go[1] and Node[2]. It probably isn't simple, but I'm sure working with OrbitDB wasn't easy either!
RE the resilience of content on Usenet, the vast majority of binaries are heavily encrypted and don't make sense to anyone without the key, despite being conveyed en masse between the world's 10-ish full-scale Usenet backbones [3]. I'm proposing that the backend of a service that makes use of Usenet could be similar, with a single 'background server' on one trusted machine being enough to continuously push the history to Usenet. A regular user client could then search for the latest version of this history and quickly refresh their side from Usenet, regardless of the status of IPFS at the time.
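To make the client side of this concrete, here's a toy sketch (Python for brevity; this is purely hypothetical, and HMAC with a baked-in key stands in for the real public-key signature the scheme would actually use). The client scans candidate snapshot posts and keeps the newest one whose tag verifies:

```python
import hashlib
import hmac

# Toy stand-in for "public key in the main executable". A real version
# would embed a public key and verify asymmetric signatures instead.
ARCHIVE_KEY = b"demo-key-baked-into-the-client"

def sign_snapshot(version: int, payload: bytes, key: bytes = ARCHIVE_KEY) -> str:
    """Tag a history snapshot; the archive server would do this before posting."""
    msg = version.to_bytes(8, "big") + payload
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def latest_valid_snapshot(candidates, key: bytes = ARCHIVE_KEY):
    """Return (version, payload) of the newest candidate whose tag verifies.
    Forged posts by third parties simply fail verification and are skipped."""
    best = None
    for version, payload, tag in candidates:
        expected = sign_snapshot(version, payload, key)
        if hmac.compare_digest(expected, tag) and (best is None or version > best[0]):
            best = (version, payload)
    return best

# Two genuine snapshots and one forgery mixed together, as a client might
# see after searching the group for archive posts.
good_v1 = (1, b"history-v1", sign_snapshot(1, b"history-v1"))
good_v2 = (2, b"history-v2", sign_snapshot(2, b"history-v2"))
forged = (9, b"evil", "00" * 32)  # attacker post with a bogus tag
```

The point is just that one trusted writer plus untrusted, spam-filled storage still gives you a trustworthy "latest state", since clients ignore anything that doesn't verify.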
RE democratic access to technology, at least with Superhighway84 it was very expensive for me to actually run the software, as I have a small allocation of bandwidth from my ISP and not much I can do about that in my area, and I ultimately had to delete it due to the ongoing 3GB/day of transfer from running the IPFS node. Quiet itself notes a limit of 30-100 individuals with its application - I'm proposing that using the one remaining federated multicast technology, with some modern encryption, might help with issues around blasting data everywhere from a bandwidth-constrained environment. In Africa especially, there are ongoing issues with bandwidth and networks that we forget about in the West. Usenet, with extremely lean network overheads, could be part of the answer.
I do agree with your vision of a future of truly peer-to-peer technologies, but for those of us who are bandwidth-constrained or otherwise limited in our access to those technologies, having a technology-agnostic application that just 'does magic' to do whatever it needs to do with your content is what's going to make a majority of users happy.
Thank you for the detailed description of your idea. Indeed, if you're willing to accept the shortcomings of a dedicated USENET infrastructure, then it is definitely something that could be done. In fact, I did consider NNTP for another project of mine (https://github.com/mrusme/neonmodem), which might eventually swallow up Superhighway84 altogether. If you're interested in actually giving it a try and implementing a functional NNTP library for Go, I'd be more than happy to make use of it! :-)
> Superhighway84 it was very expensive for me to actually run the software
I agree with you; in terms of efficiency, IPFS is still miles away from where it should be. Hence my feedback on Quiet, as I do not perceive IPFS to be radically improving within the next few months or even years. And as you correctly stated, it looks like Quiet uses some workarounds to improve on the overall mediocre efficiency of IPFS, which, however, lead to shortcomings elsewhere:
> Quiet itself notes a limit of 30-100 individuals with its application
However, this is not how P2P should be. I'd be truly curious to hear from someone at OpenSea, or Fleek, or any of the services that offer high volume IPFS hosting about their experience and gut feeling on its future. I personally gave up on hosting my website via IPFS myself -- which I did for a brief period of time -- mainly for these exact reasons.
> but for those of us who are bandwidth-constrained or otherwise limited in our access to those technologies
I believe that quite on the contrary, this might benefit these people the most. Imagine not having to do the roundtrip from your phone, to a server on the internet, back to your computer, just to have a synchronized state of your address book available.
Similarly, imagine writing with someone in your city -- let's say Melbourne, Australia -- without your messages first travelling to Utah, USA, and then back again. My gut feeling is that overall congestion on the internet could even be reduced, by allowing more applications to communicate directly within small meshes rather than travel all the way across the globe and back again. That is, as soon as there are more efficient ways to deal with the overhead that is currently breaking IPFS' neck.
A quick note on Quiet's capacity limits: the 30-100 number is very conservative and not an intrinsic limit of the approach we're taking.
I'm pretty confident that we can get Quiet to the point where the practical limit of participation in any Quiet channel is storage, relative to the amount of message history that a particular community wants to hang on to for a particular channel.
It wouldn't be crazy difficult to shard storage either. Once we do, a community could store a lot of data by marshaling many nodes with low or uncorrelated downtime. Paying for storage is also an option.
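To sketch what I mean by sharding (hypothetical, not our actual design): with rendezvous hashing, each message lands on a few community members' nodes, and members joining or leaving only moves the messages that hashed to them.

```python
import hashlib

def shard_assignment(message_id: str, nodes: list[str], replicas: int = 2) -> list[str]:
    """Rank nodes by hash(message_id + node) and keep the top `replicas`.
    This is rendezvous (highest-random-weight) hashing: removing a node
    only reassigns the messages that were placed on it."""
    ranked = sorted(
        nodes,
        key=lambda n: hashlib.sha256(f"{message_id}:{n}".encode()).hexdigest(),
        reverse=True,
    )
    return ranked[:replicas]

# Hypothetical community of four nodes storing each message twice.
nodes = ["alice", "bob", "carol", "dave"]
placement = shard_assignment("msg-0001", nodes)
assert len(placement) == 2 and set(placement) <= set(nodes)
```

With something like this, total community storage scales with the number of nodes rather than every node keeping the full history.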
RE contributing to neonmodem, I was thinking about it! But baseline NNTP, as it sits today, is a fetid pit of spam, and I don't think it would add value. In fact, spam is, I believe, a far bigger problem with these networks than the technical distribution of messages over P2P. I took a sample of a random Usenet group today: [0] - but I really struggled to find one to post an image of here because even a lower-spam group that I found (e.g. alt.politics.uk) was full of profanities in the subject lines of the posts.
I think superhighway84 remains no/low-spam because of the technical hurdle of connecting. I don't think you've got any inbuilt spam protection? Plebbit [1], full of spam. The innovation of Reddit is arguably that people love the power-trip that comes with moderating a reddit group, and will do it for free - there's been no shortage of moderators to replace the protesters in the latest rebellion [2]. Hacker News, where I get to talk to smarter people than myself - very well/heavily moderated [3]. The SomethingAwful forums just resorted to charging everyone $10 for an account when they started having a spam problem, and that happily paid for the hosting costs and a life for the main admin for years.
To deal with the spam, you need some kind of filter where users can't just create thousands of accounts, especially in the age of LLMs. Logging in with a social account is the obvious one - Github/Facebook/Google have expensive processes in place to reduce the deluge of spam accounts, but some obviously creep through. Do you then run on an algorithmic chain of trust, promoting posts based on the quality/ratings of the individual's contributions elsewhere? If you do this, you're creating a system to be gamed. Running on invites only is another potential solution, but then it's difficult to start the gravy train of quality posts - who wants to apply effort to talk to nobody? Do you instead run a pyramid scheme - charge $10 upfront, but give a share of the site's ongoing revenue to those who get their posts upvoted, Twitch/Youtube/Instagram style? This to me seems like the one solution that could potentially displace Reddit, but I lack the personal belief/gusto to make it a reality.
Even if you managed to register and motivate a thousand decent posters, I don't have a clear view of how you keep topics on track within a group without a human moderator, though some research has been done on using LLMs to pre-rate posts based on the history of the group. But if the LLM is agendaless, you obviously get a groupthink echo chamber. Give it an agenda, and you start dealing with bias - not every post of value is War and Peace, and sometimes you just want to thumbs-up a funny cat.
Please forgive the above musings if they're low value. I feel like I have no answers, only problems and questions, and I believe I'll be posting on Hackernews for tech, Instagram for comedy, and Facebook groups for special interests (e.g. car repair) for some time to come.
> I think far more interesting these days would be projects like Veilid, Hyphanet's Locutus
I have not assessed Veilid yet but it's on my list and at a first glance seems like a very serious and informed attempt. I'm personal friends with Freenet / Hyphanet's Ian Clarke and spoke with him about Locutus when he was just getting started. It sounded awesome then and I will give this a second look too, though when he explained it to me it sounded like it had the same limitations with deletion that Nostr or the global IPFS network would have. It does seem important to note here that both Veilid and Locutus are much less mature and battle-tested than libp2p and Tor and have less Lindy longevity (longevity as a function of age). We already suffer a lot from being on the bleeding edge, so it's nice to limit the number of bleeding edge tools we use. Libp2p, notably, has been rock solid for us and barely a time drain at all, apart from some unexpected interactions with Tor which are mostly about the lack of an official first-class Tor transport, which is specific to our use case and should start to change soon when Tor's Arti is ready.
> and ultimately Nostr -- even though not truly P2P in that sense -- which already happens to have a first try going with nostrchat.io.
Nostr and Bluesky both seem very promising for the open-world use case of social networking, and it has been amazing to see Nostr grow so rapidly as a community. I am rooting for this project and we might use it someday in Quiet for public feeds. Timed deletion is the user requirement that drives me away from building Quiet on Nostr. Based on conversations I've had with users doing sensitive work (and based on my own experience as a founder of Fight for the Future) timed deletion is extremely important to team security, and for deletion to be meaningful one needs more control over where the data is relayed than what Nostr provides in the default mode. A group that wanted trustworthy timed deletion would have to control their own private Nostr relay. Technically, a Tor relay could subvert the timed deletion of some Quiet messages just by capturing all traffic, but this is much less of a worry.
Bramble could work for us and I would recommend that anyone look into it. Briar is probably the most similar thing to Quiet that exists right now. There are big differences between Quiet and Briar, but we could definitely build Quiet on Bramble if it adequately supports iOS. My worry would be its maturity as a tool for people building things other than Briar. That could be worth the risk though and I do recommend anyone else reading this thread look at Bramble if you are doing something similar.
> I could nevertheless imagine it being overtaken fairly quickly by other projects sporting a rather lightweight and more managable basis, that allows for increased development speed and ultimately for faster iteration on features that users might wish for (e.g. DMs, @-mentions, message deletion, mobile clients, you-name-it) -- without the need to invest heavily into e.g. performance (or reliability!) issues of the underlying framework.
This is definitely something we will keep an eye on, and thank you for the thoughtful advice! My guess is that as soon as we have a significant number of real users we will need to build things that don't happen to be supported by whatever stack we choose (whether that is our current stack, Bramble, Veilid, Automerge, etc.) So the question is what's the easiest one to maintain and adapt. So far libp2p and IPFS have both been good in that department: implementations in many languages, active development, an absence of major problems showing signs of maturity (especially in libp2p), etc.
Also, my 2 cents are (for anyone following along) that if I had to do this all over again I would use Tor + Libp2p + Automerge. Libp2p and Gossipsub are solid, flexible, and will be around a while. No need to reinvent the wheel. The conceptual framework behind Automerge and Briar/Bramble are pretty similar (sync state!) but the Automerge team exists to serve people building other apps, while the Bramble team mostly focuses on Briar AFAIK. What's nice about Automerge is that the community around it (Ink & Switch, Martin Kleppmann, and other academics) is all at the academic frontier, so the level of thought and anticipation of user needs that goes into their decisions is very thorough, even if the implementations lag behind the papers. If I was doing real-time p2p text editing I would also look at the Braid project (braid.org) and Seph Gentle's work on Diamond Types, since that's where the most thought has gone into the raw performance you need for text CRDTs that can handle large documents: https://github.com/josephg/diamond-types
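For anyone following along who hasn't seen the "sync state" idea in action, here's a toy illustration (emphatically not Automerge's or Bramble's actual algorithm, just the core intuition): replicas that exchange and merge versioned state converge regardless of the order or transport of delivery.

```python
# A last-writer-wins map: each entry carries a (timestamp, peer_id)
# version, so merging is commutative, associative, and idempotent.
# Real CRDTs like Automerge track far richer causal history; this is
# only the smallest shape of the idea.

def merge(a: dict, b: dict) -> dict:
    """Per key, keep the entry with the higher (timestamp, peer_id) stamp."""
    out = dict(a)
    for key, (stamp, value) in b.items():
        if key not in out or stamp > out[key][0]:
            out[key] = (stamp, value)
    return out

# Two peers edit offline; syncing in either order yields the same state.
alice = {"topic": ((1, "alice"), "p2p chat")}
bob = {"topic": ((2, "bob"), "p2p chat!!"), "pinned": ((1, "bob"), "hello")}
assert merge(alice, bob) == merge(bob, alice)
```

The whole "no server" trick falls out of this: if merge gives the same answer no matter who syncs with whom, any gossip path through the community is good enough.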
Out of curiosity, what is your plan w.r.t. the business?
Given that Skype as it was originally implemented was very nearly this (P2P comms), and was targeted specifically for acquisition by Microsoft by pressure from intelligence agencies (to be re-implemented in a centralized fashion for tappability, see PRISM), I try to encourage every eager startup founder to think about their personal exit early. Any type of software offering that is done as a commercial venture lasts only as long as that founder/idealist is at the helm and there remain enough technically savvy people to fork on the inevitable rugpull. Which, from your tech stack, may be an issue.
Anything like this, while noble, is going to inevitably become a hot target for law enforcement/intelligence agency/nation state compromise, or media smear campaign the first time a bad actor comes to light who has been enabled by it. Prepare for this type of stuff as early as possible, and godspeed.
Also, how'd you tackle the key distribution nut? Which is the hardest part of the entire process, in my experience. PKI?
Great questions and advice! Re: business plans, ideally we'll sell premium subscriptions for features you need a server for, like video calls.
The biggest difference between us and Skype is that Quiet is open source. But yes, open source businesses can rugpull too, as we saw recently with Terraform.
What about our stack makes you worried about the "enough tech savvy people to fork" piece? One decision we've made deliberately is to build on the most widely-used tech, so that maintenance will require less expertise than for a homegrown stack, and so there will be existing communities around the stack that are bigger than the Quiet community. I would love to know more about what problem you envision in building a tech-savvy open source community around our stack. Too boring?
If our business is upselling users to server-backed subscription plans, I think even the threat of a fork goes a long way to keeping us honest, especially since a community fork would not need to run infrastructure. If "Quiet Co." (or whatever we call ourselves) is suddenly no longer the most trusted purveyor of Quiet, we wouldn't have much of a business, which is as it should be in my view.
Re: the politics of providing these tools, I have been preparing, and I have some background in the political side of this from Fight for the Future. It's funny because I am actually quite eager to get to the point where we get to make the social and political case for Quiet to a partly-skeptical world, but first I have to make something that works well on phones! And find users! Ideally we can find some awesome initial users that really tell the story of why Quiet needs to exist.
>What about our stack makes you worried about the "enough tech savvy people to fork" piece?
Cryptography/cryptographic primitives/secrecy-preserving architectures are a bitch and a half. :) Toss on top of that the mind/frustration tolerance needed to put yourself through the wringer making all of that happen without a slip-up, and then you run into the really hard part: taking all of that and getting regular people able to grok the thing, which takes empathy, a genuine capacity to care for the end user's time/experience, and the capability to synthesize a lot of minutiae into a limited interpersonal window. In my experience, the people with the technical chops to handle the former challenge almost always accrue deficits in the capacity for the latter, and an overabundance of the qualities to succeed in the latter aspect is almost always going to result in some level of talking past one another when dealing with your technical peeps.
It's a problem I've been ruminating on for quite a few years, because I know I'll have to solve it for my friends/family sphere before too long. The migration is from the crypto-weenie who actually knows what a key schedule, an S-Box, or a Diffie-Hellman key exchange is, and what guarantees you get out of composing which primitives -- who gets annoyed that other people just don't get it, or can't be bothered to put up with a little inconvenience for the sake of reclaiming the privacy that everyone higher up the industrial hierarchy is fine with people not bothering to reclaim -- to a mind with the patience to sit there and render it down for Grandma and such as "doing this is the digital equivalent of putting something in an envelope that will only open for the person on the other side." That process is... well, not fun. It's work.
That's it, I guess. I'm just now getting around to wrangling some of what were cutting-edge primitives five years ago, because I've lived 'under a rock' trying to get non-digital-natives up to speed. I don't believe just leaving them to die out is an acceptable approach: if we want this to really catch on from the bottom up, you have to take cryptography and make it easy enough that a child can understand and operate it. That's hard.
It's part of why my peers think I'm nuts. I still try to tackle things like that. Computers should be bicycles for the mind. Not the Wizard of Oz.
I'll be keeping an eye on y'all. You've officially intrigued me.
Re: key distribution, we're just changing it now but in a few days the scheme will be:
1. A community member sends you an invite link containing some onion addresses of community members.
2. You sync community data and send a CSR to the community owner.
3. We show an "unregistered" message next to your name until the community owner signs your CSR, at which point you're a full member.
We use PKI.js for the certs. For multi-party message-layer encryption with multi-device support we plan on using: https://github.com/local-first-web/auth, which is inspired by Keybase and a Martin Kleppmann paper.
> i.e. that users shouldn't have to depend on someone else's server or run their own just to gather online
I don't understand this. Can you elaborate exactly what you mean?
Because to me... you're now just depending on a whole bunch of other people's machines indirectly, and directly on the community owner's machine which is generating the certs.
It feels like a lot of complexity for something that could just be a small chat server running on the community owner's server (which they will need anyways - unless I'm misunderstanding, which is entirely possible).
---
So since I'm probably missing something - can I get the elevator pitch?
Assume I'm your target market (I want private messaging that I control).
I would likely be a "community owner" as described in your article.
I am already running a self-hosted solution (ex: Zulip/Rocket/Mattermost).
> I am already running a self-hosted solution (ex: Zulip/Rocket/Mattermost). What makes this a compelling offering to me?
(Quiet founder here) Great question! If you're already happy running your own self-hosted Zulip/Rocket/Mattermost/Matrix and you have no problems with maintenance or downtime, Quiet is just a cool demo and probably not useful!
If you cannot run a server (a minority on HN but a majority of the world) or you do not want to (maybe a slim majority on HN?) and you need a team chat with nice privacy properties, Quiet is being built for you!
The thing that frustrates me about free and open source software that requires servers is: most people don't have servers! And the prevalent model for using others' servers involves a terrible power / dependence relationship. One thing that drives me to build Quiet is that I want to see a world where free software comes "batteries included" with all of its promised freedoms and advantages, for the vast majority who do not have servers.
You aren’t missing anything. Restricting things to closed communities means each community has to host itself; there’s no free lunch. Now, someone in that community can be more generous with compute than others. Using Tor to try to be anonymous isn’t going to work either, as Tor has been broken.
Sure thing. Let me know if you need more. Government agencies have been watching for years. Also keep in mind that no one has more admin access to network infrastructure than government agencies do, such that the NSA can monitor any computer on the internet.
I think it's helpful to have a more layered perspective here. Privacy tools never provide absolute protection in the real world, because the attacker could always have some capability the user doesn't know about.
Network layer privacy is even more layered in this way. A burner HN account is very anonymous for a wide range of threat models. But if you're a terrorist or spy, NSA and GCHQ will see to it that they break anything you use. Users can learn about the properties of different tools and make informed decisions. Nobody should do something they would not otherwise do just because they believe they are protected by Tor. That is a bad idea. But if someone needs to do something sensitive and wants to lower their risk profile, Tor will likely help and it's fairly low-cost to use it.
Another way to look at it is: any naively implemented p2p communication tool will reveal the IP address of all your conversation partners by default. Tor is a big improvement over that, and comes with other benefits, like NAT traversal and peer discovery.
> because there aren't any successful, desktop + mobile peer-to-peer apps yet to provide that reality-tested information, unless you count WebRTC video calls, which only covers a small piece of the picture
Not that it really managed much in mobile, but old-school Skype comes to mind as the most widely used P2P messaging application. That's probably where a lot of the most valuable 'at scale' knowledge is/was.
1. http://www.paulgraham.com/schlep.html