It’s currently impossible to conduct operations on user data when the user isn’t actively accessing the web application (‘off-client’?). Technically, an application could interact with user data whenever it wants by configuring a multi-reader account that the user shares data with, but that’s not ideal at all – an application with read privileges could easily abuse the user’s data.
One solution to said problem is what I’d describe as an “exposed” microservice cloud, or a cloud of publicly-visible and community-monitored microservices that are granted privileges to user data.
How this would look in practice:
App developer writes code that A) iterates over the list of users currently allowing off-client data operations by the app and then B) performs other functions with that data (e.g. sends out emails, performs anonymized analytics processing, etc.)
App developer deploys the code to a trusted microservice provider that offers visibility into the microservice logic
Community approves the microservice logic (e.g. “Yes, the logic maintains the privacy of the user’s data”)
Blockstack app requests privilege from the user and the magic happens.
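The steps above could be sketched roughly as follows. All of the names here (`listConsentingUsers`, `runOffClientJob`, the user-registry shape) are hypothetical stand-ins for whatever API a trusted microservice provider would actually expose – this is just a minimal illustration of the consent-then-operate loop, not a real implementation:

```javascript
// Minimal sketch of an off-client microservice job (hypothetical names).
// Step A: only operate on users currently allowing off-client operations.
function listConsentingUsers(registry) {
  return registry.filter((u) => u.allowsOffClientOps);
}

// Step B: perform the community-audited operation on each consenting
// user's data (e.g. build an email digest). `operate` is the logic the
// community would inspect and approve before the service is deployed.
function runOffClientJob(registry, operate) {
  return listConsentingUsers(registry).map((u) => operate(u));
}

// Example run with a toy registry:
const registry = [
  { id: "alice.id", allowsOffClientOps: true, unreadCount: 3 },
  { id: "bob.id", allowsOffClientOps: false, unreadCount: 7 },
];
const digests = runOffClientJob(
  registry,
  (u) => `${u.id}: ${u.unreadCount} unread`
);
// bob.id is skipped because he has not granted off-client privileges
```

The point of the sketch is that step A (the consent check) and step B (the data operation) are both part of the published microservice logic, so both are visible to community auditors.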
I’m not sure what the problem is? If a third-party server can read your data once, it can just keep a copy. The online/offline status of the user has nothing to do with the ability of someone else to read and publish their data once they have it.
Can you give an example of a specific problem you’re trying to solve that cannot be solved with multi-reader storage alone?
Multi-reader would play a huge role, but as of now, if I give an app privileges to read my data whenever it wants, the developer can upload whatever code they want to the app to perform logic on that data.
For instance:
App developer creates an identity bad_apple and an app with multireader enabled so that they can fetch user data.
App developer asks users to allow reads from bad_apple and users accept
Any code published by the app developer from the Blockstack app now has read access. Sure, the dev could point users to a repo and say “this is the code that operates on your data”, but the dev could easily upload a temporary malicious script that operates on the data and then just remove the script once they get what they need.
The proposed solution is a way to guarantee that code run at a particular microservice endpoint is what it says it is. Perhaps the microservices are hosted on a distributed network so changes must be tracked and agreed upon.
Let me see if I can reformulate the fundamental problem. You’re saying that the app developer could temporarily publish malicious code for their app that a user would run, and that malicious code would (1) gain access to the app-specific app private key and (2) exfiltrate the user’s private data. This is possible because the user had approved the sign-in request to the malicious app, and the malicious app now has decryption access. Is my understanding correct?
I do not believe that the general problem is solvable. A malicious app developer can do arbitrary bad things with their app. This is not a new result.
However, one thing we could do is make it so all the history of changes a developer makes to their code are auditable, and make it so the user has to opt-in to loading unaudited app code. Then, if the developer does something evil or screws up and loses their private key, at least we can prove that it happened, when it happened, and when it was resolved.
@ryan had the idea of using the Atlas network to achieve this. When a developer publishes an application, they could tie it to a Blockstack ID by putting the hashes of the app’s index.html and manifest.json file in the zone file. Then, when the browser loads the app, it (1) uses subresource integrity to ensure that all the app’s resources are consistent with the index.html file, and (2) uses the hashes in the app’s Blockstack ID’s zone file to verify the authenticity of the index.html and manifest.json files. Then, we preserve the history of all changes made to the app and can identify when malicious code was introduced (and later corrected), thereby preventing developers (or hackers impersonating the developer) from silently introducing evil functionality. This could be augmented with a “known-evil app list” that the community curates in order to help new users avoid apps that have exhibited bad behavior.
I’m not surprised that the team has already gone down this path of thought. If we were to use the Atlas network the way you’re describing, it would work for both 1) apps clients use directly, and 2) apps built to operate on shared data via multireader.
^ this has some serious implications about how data migrations might be performed on the client across app versions, and is worth a separate discussion if we go down that path.
How much storage space would be required in Atlas to store a copy of every version of every code subresource? This does not seem realistic. And even with infinite space to store all code ever run for future review, how would such a system handle a malicious server that returns 404s to everyone except the victim, serving the hidden code only to its target?