TryMyUI allows us to dial in a specific audience using filters and screener questions. Assuming we get this working well, it should help make the testing more relevant and fair. We’ve discussed this with TryMyUI a few times and there are a few complications:
Information overhead: Keeping track of the specific, and constantly shifting, audience requests from 50+ apps will become messy and error-prone.
Audience advantage: This could shift the nature of the ranking from "Who has the best UX?" to "Who has dialed in the audience the most so their app tests perfectly?" which is an outcome we don't want.
No testers: Based on conversations with TryMyUI, it is easy to narrow the audience so much that very few, or no, testers are available.
We want to solve this while maintaining the value this reviewer creates, and we think we might have a solution. As a first step, we want to share some audience test data with TryMyUI and see how it affects tester numbers, test quality, and audience advantage. This won't affect rankings; we're just gathering some info:
Only allow testers who are ____________ .
The blank is a single personal descriptor, and it can be professional, personal, societal, etc. For example:
Only allow testers who are computer programmers.
Only allow testers who are lawyers.
Only allow testers who are office workers.
Only allow testers who are college/university students.
Only allow testers who are parents.
Thanks for the input and patience as we work on this.