March 2019 App Mining Results

Cross-posting here that the March 2019 results are out.

Public Scores:

Things to note:

  • Product Hunt now provides us with a ‘credible votes’ metric. This filters out votes that are flagged by their internal algorithms as potentially ‘suspicious’. Your total number of PH upvotes may not be exactly what is shown on PH.
  • No more Product Hunt team score
  • A new tab includes a ‘dry run’ of our new digital rights reviewer
    • Although I definitely think the new reviewer is an awesome addition, I think there are some minor points that could be improved. I’ve opened 3 tickets for discussion: please join in.


Great points Hank! I think the Digital Rights Reviewer is a great idea. It should definitely reward decentralized tech when that option is available. But when the option is not available, it should not penalize somebody trying to build a user-friendly, high-quality user experience (e.g. indexing for collaborative apps, email for notifications, etc.).

I think it will be iterative: as the underlying decentralized infrastructure is built out (e.g. Radiks), we will see the top apps adopt that technology over their centralized counterparts.

Hi all, Jack from Recall here. I’m writing this today because I feel that the Blockstack app mining program is not achieving what it set out to do. I have no reason to complain, but I don’t feel that what is happening is right and I wish to express my discontent about what I see that is wrong.

There are apps within the top 20 that have had no activity for the past 2 months. There are apps within the top 20 that are just a landing page (with no Blockstack ID or Gaia working in any capacity at the time of the vote). There are apps whose developers just joined, did a basic MVP back in January, then didn't touch it at all (no new features, no Twitter activity, nothing but silence for the past 60 days), and now they are within the top 20. There are apps within the top 20 that have fewer than 6 installations (according to theblockstats).

What have they done to deserve being within the top 20? Sure, some ideas look great, but remaining in the top 20 for over 2 months with an idea and no execution is laughable at best. This shouldn't be rewarded; it sets a bad example and attracts the wrong crowd.

Basically, some developers just showed up for the free money, put in a couple of hours, but are getting rewarded as much as, or sometimes more than, others that put in a lot of hours and have solid working products with user installations. These developers that quickly show up and then leave after ticking all the boxes to qualify bring no value to the platform, yet they keep getting rewarded. They barely engage, or don't even join places like Slack or the forum. They stay away from the community and continue making minimal or no effort, yet they continue to be rewarded. This is a bad joke to everyone that keeps putting hours into their apps and pushing releases out every week. It's no wonder that developers that put in the hours start to feel cheated by the developers that game the programme.

All these questions, doubts and disapproval are a product of a lack of transparency in some areas of the ranking system. By making it as decentralized as possible to avoid collusion or interference from PBC, the programme started to spiral down a negative path by losing control of quality and consistency.

It should be possible for PBC to weed out all of the apps that are not actively developing new features or that don't go beyond a landing page, but month after month they continue to rise in the ranking and end up staying there in the following months. It's not the first time developers have raised these concerns to PBC. PBC should come forward and agree that the current model is flawed, and act upon all of these issues with some drastic changes to avoid what we have seen in these past months. Personally, even though it affects us, I would go as far as suggesting halting the program until all of this can be fixed, since thinking through these changes takes time. If this continues as it is for much longer, with little tweaks spaced out over long periods, it gives me reason to suspect there are ulterior motives behind the chosen model, which I want to believe there aren't. I'm not the only one that feels this way; more developers in the program share the same thinking. Solutions like the Digital Rights Reviewer are good and address some of the issues the community raised previously on channels like Slack and the app mining review calls, but they are not enough.

A few suggestions I can think of to try to improve fairness in the process:

  1. Let developers rank their peers. There are an awful lot of discussions on Slack about which apps are good. People that put in the hours know each other, and they know the projects that are actually putting in the work. They know who is just coming in to collect the cash and barely works on the app. They can help identify the fake developers that are gaming the system. Let them vote on projects in which they are not involved. Example: 1st place gets 100 votes, 2nd gets 90, 3rd gets 80, etc. Just as you receive Bitcoin, you would also receive a set of votes that you can cast. PBC could even give access to a Google Sheet and make the process entirely transparent, showing who voted for whom, so there is transparency and little to no collusion. PBC could also create rules that forbid all votes going to one app, or disallow voting for the same app in consecutive months. I'm sure a system like this can be thought out and implemented. This system could carry something like 1/3 of the weight, and would also help bring the community closer together by fostering direct engagement.

  2. Create a channel for app creators to address the stakeholders that vote for the DE. This was raised on a previous call but still not acted upon. At this stage, we can't identify the voters (which raises transparency questions, although I understand this might be to shield their identities for security reasons) and we can't address their concerns and questions about our apps. Not only that, but if my app is downvoted once, the votes remain forever. Therefore, the votes should be mandatorily reviewed every month (or reset), since the app keeps evolving, and adoption by users should also be a factor presented to the stakeholders. If this is not possible, then this score should carry less weight. I have no guarantee that the stakeholders that vote on my app are fully connected to what is going on on the platform. Not everyone goes on Slack or checks the forum often, so how can I be sure they are not fooled by the ones gaming the system?

  3. Include the number of installations of the app, using metrics such as theblockstats. If an app is able to capture a new user via their Blockstack ID, it means it is doing a great job of evangelizing others to join Blockstack and discover what it is. This should be rewarded. Still, I know this can be gamed too, so I'm not saying to weight it extremely highly, but it should factor in, at least as a minimum requirement for the top placements. Plus, apps that are not within the top 100 installations after 1 or 2 months should not be allowed in the top 20 of the programme. This just means that either the idea is not appealing to users or the app itself is still in the idea stage (since Blockstack IDs are not registering), so why should an app like this be within the top 20 and get rewarded for more than 1 or 2 months? Ideally, we would replace this with a fairer metric like user activity, but since these are the early days, installations should matter for now as a way to factor in demand for the app.

  4. Remove apps that were not actively developed in the past 30 or 60 days from having a chance of reaching the top 20. This one should be obvious and carry major weight. These apps should drop in position as time passes, to allow more active developers to be rewarded for their dedication. This would help solve one of the biggest injustices currently in the program, since some apps keep rising every month without any further development or engagement over the past two months.
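To make suggestion 1 concrete, here is a minimal sketch of how such a peer-voting tally could work. Everything here is an assumption built from the description above: the point scale, the rule that skipped votes roll down to the next pick, and identifying each voter by their own app's name are all illustrative choices, not a proposed implementation.

```python
from collections import defaultdict

# Hypothetical point scale: a voter's 1st pick earns 100, 2nd 90, ... 10th 10.
POINTS = list(range(100, 0, -10))

def tally_peer_votes(ballots, last_month_ballots=None):
    """Tally peer votes with two example anti-collusion rules:
    voters cannot vote for their own app, and cannot vote for the
    same app in two consecutive months. `ballots` maps a voter
    (identified here by their own app's name) to an ordered list
    of app names, best first. Disallowed picks are skipped and the
    next valid pick inherits the higher point value.
    """
    last_month_ballots = last_month_ballots or {}
    scores = defaultdict(int)
    for voter, ranking in ballots.items():
        blocked = {voter} | set(last_month_ballots.get(voter, []))
        points = iter(POINTS)
        for app in ranking:
            if app in blocked:
                continue  # self-votes and repeat votes earn nothing
            scores[app] += next(points, 0)
    return dict(scores)
```

For example, with ballots `{"recall": ["appA", "appB"], "appA": ["recall", "appA"]}`, appA's self-vote is skipped, giving appA 100, appB 90 and recall 100. Publishing the `ballots` mapping itself (e.g. in the Google Sheet mentioned above) would make the whole tally auditable by anyone.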

Developers are the decisive factor of any platform, since they are the ones that add value, and that value ultimately brings users to the platform. This program is great at generating incentives for developers: it allows them to foster their creativity without worrying about paying their bills and without having to constrain themselves to developing whatever is hip and trending in web3 at the moment. However, if this continues, I wouldn't be surprised if some of the teams that are heavily involved, that put in a lot of hours and actually create something that works, start to consider whether they should stop developing for the platform, thereby affecting the quality of the apps being developed and the knowledge shared within the community. I don't think developers will keep developing for the program if the program isn't fair (as much as it can be). Incentives in the right hands motivate good developers to go the extra mile, but incentives in the wrong hands just push good developers away. I understand that PBC wants the process to be as decentralized as possible, as I mentioned above, and is trying its best not only to please as many as possible but to be transparent and fair, but maybe it should consider that this might not be the best approach for the time being, because the process itself is affecting the transparency and fairness of the program.


Hear, hear! What a simple, effective idea!


So let's be clear here about theblockstats.

theblockstats is an accurate representation ONLY of apps with a SOCIAL sharing component … not all apps have a social sharing component.

Some apps included the 'publish_share' scope originally, then removed it after realizing it was not needed. If you are not enabling users to share data publicly, then having that scope needlessly leaks user data (usage of the app).

theblockstats is not part of the app mining ranking algorithm. You can certainly argue that it should be, but currently it is not.


Hey Jack! There is a lot to unpack here, and thank you for sharing.

You have brought up a number of interesting suggestions, but the forum is a really hard place for us to track these suggestions and bring any of them to reality. In short, please post your suggestions individually to our Github.

We have had a hard time tracking issues that are brought up in the forum. The conversation is easily hijacked, and there is no way to see a clear stream of responses and feedback to just one suggestion. When suggestions are brought up in different, unrelated forum threads, you can’t follow the conversation. Github also allows us to track the status of different issues, and assign them to specific individuals when action items are necessary. We have weekly calls where we go through issues and make sure they’re being followed through on.

I have no problem with you sharing thoughts on the forum, but it’d be a lot more helpful if you could post this with links to Github issues, so that others can follow the conversation from there.

Hey @jackv, this is really great feedback. Can you please point out exactly which apps, the exact functionality failures, and the installation figures? The best place to share is on the Github repo. Thanks.

I like the spirit of this idea, but in practice it's very susceptible to collusion (e.g. a group of N apps whose developers are friends can rank each other highly, or take turns ranking each other number 1, in order to achieve a certain expected payout). I'm sure the Princeton/NYU game theorists can give you a better explanation, as I've only taken one class on it.

I have tried to translate the comments into actionable github issues:


I massaged the data and produced a chart that shows the ranking of apps over time:

It is up to you to interpret the graph - whether there is movement or not in the top 20.
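For anyone who wants to reproduce this kind of view themselves, the transformation behind a rank-over-time chart is straightforward. The sketch below uses made-up data (the app names and month labels are placeholders, not the real results):

```python
def rank_history(results):
    """Map each app to its 1-based rank per month (None when the app
    did not appear that month). `results` maps a month label to that
    month's ranked list of apps, best first."""
    months = sorted(results)
    apps = {app for ranking in results.values() for app in ranking}
    return {
        app: [results[m].index(app) + 1 if app in results[m] else None
              for m in months]
        for app in sorted(apps)
    }

# Illustrative data only:
monthly_results = {
    "2019-01": ["appA", "appB", "appC"],
    "2019-02": ["appB", "appA", "appC"],
    "2019-03": ["appB", "appC", "appA"],
}
```

Here `rank_history(monthly_results)["appA"]` reads `[1, 2, 3]`, i.e. appA slid one place each month; feeding each app's list of ranks to any plotting tool produces a movement chart like the one above.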

We have lost 4 apps this month (Hermes, Healthfundit, Huddle Group, Ourtopia - why?) and 5 new apps were added.

@dant you are right; I had forgotten that apps could opt out since the 'publish_share' scope is not required, therefore the data currently displayed is not accurate. In regards to having theblockstats or another stats metric added to the app mining ranking algorithm, that was what I was trying to convey, since at the moment there is no way of rewarding demand for the app.

@hank I’ll follow your suggestion and go on github, thanks!

@jeffd please disregard my comment in regards to the installations, since as @dant reminded us you can opt out of sharing those stats with theblockstats.

In regards to my point about a landing page ranking within the top 20, there is a discussion on it here

@avthars yes, I know collusion can be a problem, as I've highlighted, but as I've written there are ways to reduce such risks and have a fair peer review system in place. I'm sure there is a way for the community to reward peers that put in the hours, or new developers that join and bring fresh ideas and amazing execution, since nothing in the current ranking rewards that.

@friedger thank you for creating the issues on github, I’ll have a look at them later.


If you wonder how you ranked on the individual reviewers here is a sheet for you:

Hey everyone, cross posting from the Blockstack blog and Jeff’s forum post.

Wanted to share some of the priorities we’ve had, and progress we’ve made, over the last few months: Developer Success: Highlights Q1 2019

It would be incredibly helpful if you could complete the survey below. This is the primary way we measure our progress and your answers directly feed into our priorities for next quarter:

Help us improve Blockstack and App Mining

Thank you in advance! :pray:


Totally understand. Let’s hope we can make it happen!