Two weeks after Devcon VI, we had a call with the ad hoc user research team to evaluate our initiative of running live UX research at Devcon. This page is a recap of that discussion and serves as reading material for future initiatives, and as an FYI to Devcon organizers.
Conclusion
It was fun: we really enjoyed the collaboration as a research team and with the participating projects, and we received positive feedback from projects about the findings. In terms of impact, we did not meet all of our objectives. At the same time, the initiative sets a precedent and generated learnings on how a gathering like Devcon can be used to evaluate products and promote user research.
Impact
✅ Direct impact on 4 open-source, public-good, Ethereum-focused projects that had their products evaluated
- We tested products by Jolocom, OUSD (Origin Protocol), AlphaDay, and Web3Auth with 3 participants each
✅ Opportunity to ‘learn by example’ (i.e. how to validate their ideas and designs with end-users)
- The process and observation room gave projects the experience of working with a UX researcher and getting feedback from end-users
❌ Opportunity to observe people interacting with common flows and draw conclusions on how to improve their own product or infrastructure
- There was little attendance outside of the participating projects; the sessions sat in a liminal space between public and private in order to maintain a safe environment for participants
❌ Gain perspective on the local context of using Ethereum-based applications by recruiting people locally whose perspectives would otherwise not be represented at Devcon
- Due to security measures at the venue, we could only recruit Devcon attendees
Key learnings
- The initiative demands significant time and effort (2-3 days) from the researchers, with little incentive in return. This was particularly taxing in combination with giving presentations or sitting on panels
- Knowledge sharing could be improved: impact outside of the participating projects was limited. Moderated sessions sit in a liminal space between public and private, so the current concept needs to be combined with an active, public share-out, or needs to be revisited
- Communication with clients can be improved: we need better expectation management and clearer communication of the cost of research. Charging only enough to cover participant incentives ($150 per project) can create the perception that research is cheap
- Operations can be improved when it comes to timing, space, and IT setup. During the Friday afternoon sessions, issues compounded: lack of focus, noise from the closing ceremony, and hardware limitations in the test setup
Retrospective
What are future risks (and actions)?
| Issue/Risk/Learning | Mitigation/Action |
| --- | --- |
| Not setting clear goals for ourselves concerning the effort (what do we get out of it?) | Create a document explaining what researchers should expect and what tasks they will need to do<br>Point to this retrospective |
| Running the work at a financial loss, or as a loss-leader without a non-financial alternative objective | Define objectives for researchers |
| Added stress, especially when you are also on a panel or speaking | Invite other researchers (new to Web3)<br>Approach projects whom we want to be doing user research (make it exciting to do the research) |
| Space that is not ideal (especially during the closing ceremony) | |
| Missing out on the conference experience (+100%) | |
| Not monetizing the panel | Present the panel + promote follow-up to clients |
| Not educating clients about research costs | |
| No team :) Too much of an investment / missing out on the event | |
| If we make sessions publicly viewable for conference attendees, will participants agree to take part? Do we need to raise incentives? | Better incentives generally, so people are clamouring to take part!<br>Follow the example of https://lastmile.money/: explicitly invite prior participants to a live session with an audience |
| Not fully realizing the potential impact of doing research at a conference | Knowledge sharing<br>Invite other researchers, designers, and dev teams to understand what research is about<br>Workshop to find a model for research at a conference (i.e. an alternative to in-person) |
| Unclear accountability | |
| Burning out by the very end (resulting in subpar results for the last client) | Different days, making sure a functional mode is still available<br>More researchers, so no one has to do a whole day?<br>Informal briefing with the research team on (mental) availability, close to the event |
| Expectation from clients of a follow-up after the event | Make sure clients understand what we are offering |
What helped us move forward
- Testers mostly showed up on time, and we had a back-up recruitment channel in the form of the design@Devcon Telegram group