Better playtests for game developers [How To Be A Games User Researcher - Bonus Edition 🎮✨]
Five of the biggest problems game developers face when playtesting, and a tool to solve them.
I spent last year interviewing game developers about playtesting - how & why they run playtests, and what makes playtesting harder than it should be.
As user researchers, we’re lucky enough to spend all of our time crafting reliable studies - but for most game developers, playtesting is just one small part of their role. And it’s a huge time-sink.
Many of the problems they described were very familiar:
"It's hard to find real players to take part"
"It’ll just tell me problems I already know"
"I don’t have the tools I need to run a playtest"
"It's hard & take ages to deal with the messy data"
"I’m not sure I’m doing it right. Am I biasing my results?"
This uncertainty adds up to "we don’t have time to playtest", and teams don’t run playtests or UX research studies as often as they would like to (or should). Risky!
As user researchers, we know how to deal with all of these. But not every team can afford a dedicated UX professional. I’ve been interested in fixing this problem.
At Develop last year I gave a talk, Better Playtesting for Indie Developers, which explained some of the challenges developers face, and how to overcome them using UX and user research approaches. In this bonus issue I wanted to share some of those lessons.
Today is also the launch of The Playtest Kit, which helps solve these problems for game developers. Because you’re a How To Be A Games User Researcher reader, I’ve included a discount code that’ll get you 20% off if used this month. Find the code at the end of this bonus post.
Better playtesting for game developers
In my Develop talk, I explored some of the barriers to more frequent and efficient playtesting for game developers, and shared how we deal with them in the research community.
🔎 Find the right players
We know how important recruiting the right players is to running successful playtests. Without the right players, the problems + behaviour we discover are unlikely to be representative - it’s not helpful to test the difficulty of a complex first-person shooter with someone who has never played games before. (As an aside, when I ran playtests at PlayStation, unfamiliarity with dual-stick controls was always the first clue that a participant had been misrecruited!)
Unfortunately, participant recruitment takes time to do right. Many game teams find it hard to dedicate that time, and instead test with friends, colleagues or their existing fan community. They then draw unreliable conclusions about “will players like my game?” or “is the difficulty balanced correctly?” from those playtests - and I heard a few horror stories about that too!
Recently I’ve been encouraging teams in this position to develop playtest participant panels, topped up regularly with fresh batches of playtesters who can be called upon when needed. This is a repeatable process that maximises the value of each individual recruitment effort, and minimises the time needed to call on playtesters in the future. As long as the team is aware of the risks, this can be the perfect balance for small teams.
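To make the panel idea concrete, here’s a minimal sketch of screening a panel for a session - the field names, criteria and thresholds are all hypothetical illustrations, not part of any specific toolkit:

```python
from dataclasses import dataclass

@dataclass
class Panelist:
    name: str
    genres_played: set           # self-reported, e.g. {"fps", "racing"}
    hours_per_week: int          # self-reported play time
    days_since_last_invite: int  # avoid over-using the same testers

def screen_for_fps_test(panel):
    """Pick panelists who plausibly match the target audience for a
    complex first-person shooter, and haven't been invited recently."""
    return [
        p for p in panel
        if "fps" in p.genres_played
        and p.hours_per_week >= 5
        and p.days_since_last_invite > 30
    ]

panel = [
    Panelist("Sam", {"fps", "racing"}, 10, 60),
    Panelist("Alex", {"puzzle"}, 2, 90),  # would be a misrecruit for this test
]
print(screen_for_fps_test(panel))  # only Sam matches
```

The exact criteria will depend on your game and research objectives, but the point stands: a panel with structured sign-up data lets you filter quickly for representative players, rather than falling back on whoever is nearby.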
🆕 Learn new things with your playtest
The teams I spoke to often worried about not discovering new things in their playtest. They believed they already knew what a playtest would tell them, and that it wouldn’t be useful. This was often rooted in vague research objectives, unfocused tasks, and just “seeing what players think”.
Researchers deal with this through a repeatable, structured process: identify what decisions have been made recently, rank them by risk, and use the riskiest assumptions to form research objectives. The research objective informs the method, rather than the other way round. And with a bit of help, teams can identify their riskiest assumptions and then pick the right method to deal with them.
This means that playtests are focused and relevant, and ultimately lead to a culture of running small, frequent tests that reduce the risk created by large, festering, untested assumptions.
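As a rough illustration of that prioritisation step, here’s a hypothetical sketch - scoring each assumption by impact × uncertainty is a common prioritisation heuristic I’m borrowing for the example, and the assumptions themselves are invented:

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str
    impact: int       # 1-5: how costly is it if we're wrong?
    uncertainty: int  # 1-5: how little evidence do we already have?

    @property
    def risk(self) -> int:
        return self.impact * self.uncertainty

assumptions = [
    Assumption("Players understand the crafting UI without a tutorial", 4, 5),
    Assumption("The first boss is beatable within three attempts", 5, 3),
    Assumption("Players notice the minimap", 2, 2),
]

# The riskiest assumptions become this playtest's research objectives,
# and those objectives then drive the choice of method.
for a in sorted(assumptions, key=lambda a: a.risk, reverse=True)[:2]:
    print(f"Objective: find out whether '{a.statement}' holds (risk={a.risk})")
```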
🛠️ Get the right tools for playtests
Playtesting creates a lot of admin work. For ethical and legal reasons, data from players has to be handled very carefully, in line with local laws. Many game teams are also invested in keeping their game secret, which means creating and managing NDAs. And tests can be technically complex - potentially requiring installing builds remotely, recording cameras + screens, and checking the technical state of the game.
This is hard when doing it for the first time. Templates, processes and the right tools reduce the risk of critical failure, and make playtesting faster and more efficient. Time invested in ‘research ops’ pays dividends, making playtesting easier to run as a regular activity.
⏳ Make the most of the time with players
Playtests take a lot of effort to plan and run. I was concerned to hear that teams were missing opportunities to make the most of that effort, choosing underperforming research methods such as ‘get players to leave some comments in our Discord channel’.
Playtests can potentially generate lots of different types of data. Observational data from watching people play. Sentiment data from what players say, or ratings they give. Behavioural data from recording what they do within the game.
Researchers understand the types of data a playtest generates, and how best to match them to research objectives. As mentioned above, “what we want to learn from the playtest” should inform “what we ask players to do”, and ultimately “what data we collect”. Balancing appropriate mixed-method studies against the pragmatic decisions game developers have to make has been a particularly interesting challenge!
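Here’s a sketch of what that chain from objective to data might look like - the objectives, tasks and data types are invented examples, not a taxonomy from the talk:

```python
# Hypothetical mapping: research objective -> player task -> data collected.
PLAYTEST_PLAN = {
    "Can players finish the tutorial unaided?": {
        "task": "Play from the start for 20 minutes, thinking aloud",
        "data": ["observation notes", "task completion (behavioural)"],
    },
    "Is level 3's difficulty fair?": {
        "task": "Play level 3 after a short warm-up",
        "data": ["deaths per attempt (behavioural)", "difficulty rating (sentiment)"],
    },
}

for objective, plan in PLAYTEST_PLAN.items():
    print(f"{objective}\n  task: {plan['task']}\n  data: {plan['data']}")
```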
🪠 Deal with messy playtest data, draw reliable conclusions & take the right action
Playtests, and research studies in general, generate a lot of raw data. Interpreting that data takes practice, so that we don’t gain misguided confidence in our decisions (or get too distracted by players saying “I don’t like this”).
In my interviews, I heard that game developers were not confident about this, and were worried about the risk of introducing bias, misinterpreting the data - or just getting bogged down in it for days, taking away valuable time.
In the Develop talk, I shared an approach for separating weak + strong signals in your data, and for treating opinion data and behavioural data differently to draw reliable conclusions. This is essential for reducing the risk of misinterpreting playtest data, and for keeping game development moving.
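As a toy example of that idea - the ‘seen by three players’ threshold below is arbitrary and invented for illustration, not a rule from the talk:

```python
from collections import Counter

# (participant, issue, evidence type) tuples logged during sessions -
# invented data for illustration.
observations = [
    ("p1", "missed the sprint prompt", "behaviour"),
    ("p2", "missed the sprint prompt", "behaviour"),
    ("p3", "missed the sprint prompt", "behaviour"),
    ("p1", "dislikes the menu font", "opinion"),
]

counts = Counter((issue, evidence) for _, issue, evidence in observations)
for (issue, evidence), n in counts.items():
    # Behaviour repeated across several players is a strong signal;
    # a single stated opinion is a weak one.
    strength = "STRONG" if evidence == "behaviour" and n >= 3 else "weak"
    print(f"{strength} signal: {issue} ({n} player(s), {evidence})")
```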
👇👇👇 The Playtest Kit 👇👇👇
This brings me to The Playtest Kit - my attempt to solve these playtest problems at scale.
I’ve made The Playtest Kit to solve all of these problems for game developers (and a whole bunch more!): a structured, repeatable playtest process for running high-quality playtests efficiently, without having to start from scratch (or hire expensive consultants 👋).
It’s designed for game developers who are not user researchers, but just need to run better playtests - making our knowledge and expertise accessible to teams without dedicated UX professionals. The toolkit speeds up effective playtesting, making it possible to run more regularly, and ultimately improving the quality of games.
During May, The Playtest Kit is at a launch discount price (the price will go up next month!). Because you are a HTBAGUR subscriber, I also have a discount code that’ll get you an extra 20% off this month.
🚨 The code is ‘HTBAGUR’, and runs out when the price rises at the end of May. Until then - double discount! 🚨
Learn more, and pick up the playtest kit before the discount runs out, at playtestkit.com
(no need to enter the code with the button above!)
The Playtest Kit has received some great feedback from developers so far, including Size Five Games’ Dan Marshall, Snap Finger Click’s Jo Haslam, Paradox Interactive’s Juney Dijkstra, and more (see all their thoughts on the playtestkit.com website). If you pick up the toolkit, I’d love to hear your impressions - do drop me an email.
Stay tuned for our regular How To Be A Games User Researcher issue, later this month.
Steve