Decentralized Applications and Disaster Recovery

The Crazy Idea™

I have an idea that I am not sure how to implement, but I would like to explore it in theory with the community.

I ponder of something great
My lungs will fill and then deflate
They fill with fire, exhale desire
I know it’s dire my time today
  Credit Twenty One Pilots - “Car Radio”

The Premise

One topic I have seen come up a few times on Discord is what makes a truly decentralized application, and I think it is healthy to examine DApps from a disaster recovery perspective.

My big question:
What happens if the server running the DApp goes down, or is taken offline?

This is related to an earlier conversation we had around governance, which asked questions like “who runs the server”, “who has access to make changes”, and “who can shut it down”. I think this can all be summed up by the question above, and hopefully solved by the idea below.

What if running a server for a DApp could be split among multiple server instances, with some sort of load balancing or fail-over mechanism in place, similar to how download mirrors work?
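As a rough sketch of what that fail-over could look like from a client's perspective (the mirror URLs below are made up, and a real implementation would also need mirror discovery and health checks):

```python
import urllib.request

# Hypothetical mirror list -- in practice this could be published in the
# app's repository so any client can discover backup servers.
MIRRORS = [
    "https://app.example.com",       # primary
    "https://backup1.example.net",   # community-run backup
    "https://backup2.example.org",
]

def fetch_with_failover(path, mirrors=MIRRORS, timeout=5):
    """Try each mirror in order and return the first successful response."""
    last_error = None
    for base in mirrors:
        try:
            with urllib.request.urlopen(base + path, timeout=timeout) as resp:
                if resp.status == 200:
                    return resp.read()
        except OSError as err:
            last_error = err  # remember why this mirror failed, try the next
    raise ConnectionError(f"all mirrors failed: {last_error}")
```

This is the simplest "download mirrors" model; smarter load balancing (nearest mirror, weighted rotation) could layer on top of the same list.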

The Implementation

@dant shared a comment in Discord about NoteRiot, where a user shared how easy it was to deploy their own instance using the “Deploy to Netlify” button. From the conversation:

i love that you have the ‘deploy to netlify’ button. i forked you on gitlab, clicked the button, and then i don’t have to do anything except push commits!

I challenge you all to enable your users to run your applications in a trust-minimized way.

This got me thinking: what if there were an easy way to encourage or incentivize someone to run a second or third instance of an application server as a backup?

The main application developer would be required to open source the code (which a lot of us are fans of anyway), and it would be their job to manage the “master repository”.

Then, there could be a set amount of STX set aside by the application developer, or even put up by the community much like a bounty, that serves as a “fund” for running additional servers. The latter would be more difficult, but I wonder if smart contracts could be adapted to serve this purpose.

There would have to be some sort of code verification, where a user could fork a repository, bring a node online, and be evaluated or selected as a “backup server”. I see this being possible using file hashes for integrity (which git/GitHub handles nicely) and an interactive badge like those we see on most repos.
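A minimal sketch of that integrity check, assuming the master repository publishes a digest of its file tree that backup operators must match (the function names and layout here are mine, not an existing tool):

```python
import hashlib
import os

def tree_hash(root):
    """Hash every file under `root` in a deterministic order, so two
    checkouts of the same code produce the same digest."""
    digest = hashlib.sha256()
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()                      # deterministic traversal
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            digest.update(rel.encode())      # include the path itself
            with open(path, "rb") as f:
                digest.update(f.read())
    return digest.hexdigest()

def verify_backup(backup_dir, master_digest):
    """A backup qualifies only if its tree hash matches the master's."""
    return tree_hash(backup_dir) == master_digest
```

In practice, pinning backups to a signed git commit hash would accomplish the same thing, and the pass/fail result is what the “code integrity” badge would display.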

(keeping with the NoteRiot theme, and using totally fake badges/shields)

[Badge examples: Backup Server · Netlify Success · Code Integrity]

This way, if the main server were ever to go down, there would be readily available backup servers to keep the application running. There would be no way to shut down the app if it had enough community interest, and there would be no downtime waiting for someone to spin up something new, update code to point to the new server, push updates to clients, and so on.

The data should move easily as well, thanks to the decentralized design of Gaia, and the same principles can be applied to Gaia hubs.

The Funding

To me, this fits right in with BUIDL and HODL: someone has to get the server up, running, and passing tests, then keep it updated and online to serve as a backup to the application and qualify for rewards.

It also doesn’t get in the way of implementing other business models, such as a subscription model, up-front charges, etc.

In theory, it could be built into the anticipated operating costs, but that would mean divulging some of this information to the public. That’s the missing link, I think. Plus, the budget for server operation would increase to at least 2x-3x what it already is, which may scale beyond what is feasible depending on the required availability of the service and the business model itself.

One thing we commonly do not know when using applications is how much running the server costs versus what the application charges, and I think encouraging concepts like Open Collective could help remedy this.

Then, if the running servers meet certain criteria on a regular basis, payouts are made. Those payouts could be proportionate to the traffic served, the number of backup servers online, the number of users in the DApp, or something else.
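For example, a traffic-proportional payout could be as simple as the following (the STX amounts and server names are placeholders, and real traffic numbers would need to be attested somehow):

```python
def proportional_payouts(fund_stx, traffic_by_server):
    """Split `fund_stx` among backup servers in proportion to the
    requests each one served during the payout period."""
    total = sum(traffic_by_server.values())
    if total == 0:
        # No traffic served this period: nobody earns a payout.
        return {server: 0.0 for server in traffic_by_server}
    return {
        server: fund_stx * served / total
        for server, served in traffic_by_server.items()
    }

# e.g. proportional_payouts(100, {"backup-a": 750, "backup-b": 250})
# splits the 100 STX fund 75/25 between the two operators.
```

The same shape works for the other criteria mentioned (number of backups online, number of users); only the weighting input changes.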

Wrapping it Up

The idea is a bit out there, and definitely nontraditional. My goal is to find a way to create an extra layer of redundancy for DApps, so that they are not only “distributed” away from the control of a single entity, but also provide a sense of security for someone moving their services over to a dedicated DApp solution. Plus, this would limit the possibility of censorship or a hostile takedown.

Instead of trying to grow as an individual, how do we grow as a community? How do we take ideas like a web server, password manager, blogging software, wallet, or something else and work toward a common goal, versus creating individual competing entities and diluting the ecosystem?

That, to me, could be one of the true strengths of DApps, especially if they are critical to personal or business operations. I am curious what everyone else thinks, and what other collaborative models we can come up with!


I read this as decentralized redundancy … I wonder if it could be tied to an app staking model?


@dant Sure… sum up all my paragraphs into two words :stuck_out_tongue:

Seriously though, I think this has potential. Today’s ecosystem says you have to guard your ideas, run and scale your own services, but what if we allowed (and encouraged) others to get involved? Beyond just opening the source.


This is what I was waiting for - “tied to an app staking model”. I see every road leading here or perhaps beginning here.


I sat down to write down a few open questions/issues:

  • How do you verify availability?
  • How do you load balance as well as or better than existing CDNs (currently Netlify)?
  • How do you prevent multiple “redundancy” systems on the same infrastructure? Why reward duplication that adds no value? 14 servers in Amazon West means 13 are potentially redundant.
  • How do you handle latency issues? I.e., node A is slow but node B is fast, and the user is logged into node A. Existing CDNs deliver content from the nearest location, mitigating some of this.
  • Is there a P2P protocol needed here, or just distributed load balancing?
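On the first question, a naive starting point would be periodic probes plus an uptime threshold; a minimal sketch (real monitoring would need multiple vantage points to be trustworthy, and the threshold value is just an assumption):

```python
import urllib.request

def check_once(url, timeout=5):
    """One availability probe: True if the server answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def uptime_ratio(results):
    """Fraction of successful probes over a payout period; a criterion
    could require, say, uptime_ratio(...) >= 0.99 to qualify."""
    return sum(results) / len(results) if results else 0.0
```

The harder part, as the question implies, is who runs the probes and why their results should be trusted.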