Find a Player

Background

In Autumn 2018 I had the idea to create an app that would help people fill in missing numbers in their sports fixtures. As a regular member of a small-sided cricket team and casual five-a-side player, I was well aware of the pain and anguish involved in running casual sports teams.

The next day I googled to see if anyone was working on this problem and as it happened there was a small start-up in Glasgow running the Find a Player app. I reached out to them and after an exciting chat they asked if I wanted to join the team.

With a professional background in creative advertising and experience moderating focus groups, I got started with a few user research sessions.

UX / UI Work

I recruited non-users for the first round of user research.

I sat with them asking what sports they played, how they currently organised them and what their biggest pains were with their usual process.

I then introduced the product. I'd observe them signing up, record their initial impressions and what they thought about the idea and execution of the app.

What was clear from this set of interviews was that the idea was liked and everyone could see how it might be useful - but the existing app felt dated and not user-friendly.

Alongside that there was some confusion as to what the app really was.

Was it like eBay for sports people? Matching players to organisers, as the name would suggest.

Or was it more like what Hello Fresh does for home cooking? A transformative way to help organisers run their game.

Or was it something bigger than that? More like a social network of sports people.

What was clear was that it wasn't totally clear for prospective users.

These interviews also highlighted some standard usability problems across the app. Below I'll share some of the work I did on event creation and onboarding, as well as creating a design system.

General Design System

There were a few stages at which users got confused, sighed audibly or grew irritated while going through key flows.

It was clear that a few general UX principles needed to be applied across the app to iron out these pain points.

Simplified Colour Palette

On some screens the app was using over 40 different colours. Colourful UIs are not an issue when used deliberately, but when applied inconsistently the feel of the app becomes jarring and hard to navigate.

I simplified the colour scheme down to 9 colours that users would learn to associate with various states - positive actions were predominantly green, warning states were red, and so on. The two blacks and greys accounted for designing in dark mode.
Above: Comparing two sets of screens before and after the colour palette was implemented.

Right Thumb Rule

This is principally an ergonomic design decision, but one that's often overlooked.

Simply, most people hold their phone with their right hand. And since 2013 smartphones have been getting bigger, which means buttons at the top of the screen are harder to reach than buttons near the bottom.

So when designing every screen it was important to mark out what the primary, secondary and tertiary actions were - to ensure the primary action (most likely/desirable) was close to the right thumb.

It's really amazing how tiny, simple changes like this keep a user in flow and relaxed when using the product.
Above: Showing screens with the right thumb principle applied

Declutter & Add Clarity

"Clarity refers to the focus on one particular message or goal at a time, rather than attempting to accomplish too much at once" said the interaction designer, Michael Hoffman.

There was a general trend in the existing app to try and reduce the total number of screens the user has to go through in any flow. While this looks like an efficient UX on a wireframe, it tends to result in a bad user experience.

Generally, going through 100 clear screens is better than going through one cluttered one.

When users see one task ahead of them, it feels manageable and they won't have to think.

When users see they're going to be asked 40 different things they'll often sigh, get annoyed and leave the process before they start, because they "CBA right now".

I put an emphasis on clearing up screens by removing anything that wasn't absolutely necessary and reducing reliance on explanatory text.

Onboarding

They say first impressions last the longest so it's important to consider the first encounter users have with a product.

To start the process of improving the app's onboarding I plotted out an experience map based on interviews I'd done with existing and prospective users.
The experience map helped us identify the different stages of the organiser journey - and revealed that prospective organisers were most likely to discover the product during "panic recruiting".

This meant the organiser was in a state of high anxiety because their game was at risk of being called off if they couldn't "find a player".

This anxiety state meant the user was highly motivated to get through registration but their tolerance for unnecessary friction was low - i.e. small UX problems made them angry.

Re-wiring registration

The registration interviews with new users revealed some positives as well as pain points.
Positives
Some users liked the Twilio phone-number verification because it "makes it feel more legit".

Users also enjoyed selecting the sports they played as it made them nostalgic for sports they hadn't played in a while - this gave them a pleasurable sense of possibility.

When they finished signing up, landing on a map showing games and players nearby was a bit disorientating - but once they discovered the live feed of local games they started to see real value in the app.

When re-sequencing the wireframes, I took guidance from the "peak-end rule" by placing the phone verification and sports selection up front and then finishing the flow by landing on the Live Feed.
Pain Points
Design was dated. A few noted that the App Store cards and copy were dated and unclear.

Permission primers were text based. Users don't like reading text unless they have to, which meant we were potentially getting fewer contacts and notification permissions.

We asked for a skill rating for each sport immediately after the user tapped the sport, which made a potentially fun task feel laboured.

The user's profile details were all squeezed onto one screen which felt cluttered.
From these findings we wire-framed a new flow that you can see in blue below.
Once the new wireframe was agreed it was time to design the screens, implementing the new design system, while also emphasising some of the existing positive touches.
Once the above prototype was built and live, it was up to me to do a few more rounds of user research to iron out the "last mile of UX". These would help me identify where anyone got stuck and any potential error states or edge cases.

What was great about doing research for onboarding and registration is that you can go up to anyone at the side of a sports pitch and ask them to run through it. Once I was confident the flow took two minutes or less, it was hard for people to refuse!
Some examples of friction in the "last mile"
On any form field the input is either numeric, alphabetic or multiple choice.

For example when you input your phone number for authentication, it's best to have the numeric keyboard open automatically, so you don't have to tap on the numbers tab of your keyboard.

Likewise, if you're filling out an e-mail address you'll want to open the alphabetic keyboard and not have the field default to a capital letter first - as most emails don't start with a capital.

Lastly, my biggest pet peeve is when date of birth fields default to today's date. (This is standard when using the iOS and Android date pickers). "Do I look like I was born yesterday?!" I yell, laughing at my own joke.

In Find a Player, we set the default DOB field to today minus 25 years, because that was close to the age of the average user. When observing users, I was always surprised at how many remarked on details like that.
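That default can be sketched in a few lines. Python here is purely illustrative - the real app was a native mobile client, and `default_dob` is a hypothetical helper, not our actual code:

```python
from datetime import date

def default_dob(today: date, offset_years: int = 25) -> date:
    """Default the date-of-birth picker to roughly the average user's age,
    rather than today's date (nobody was born today)."""
    try:
        return today.replace(year=today.year - offset_years)
    except ValueError:
        # 29 Feb on a non-leap target year: fall back to 28 Feb
        return today.replace(year=today.year - offset_years, day=28)
```

For example, `default_dob(date(2020, 6, 1))` gives 1 June 1995 - a far smaller scroll for most users than starting from today's date.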

Event Creation

Another crucial flow to get right in the app was the event creation - the process organisers would have to go through to set up a game on the app.

In research I found that new users were taking three to five minutes to set up a game, and there were many bumps in the process.

There's a lot of upfront information to capture when a user wants to list a game on the app, but having watched users struggle through the existing flow I felt it needed to take less than two minutes.

Re-wiring Event Creation

During user interviews I'd get organisers to tell me how they ask someone to join their game if they saw them in person - and I'd get them to say it out loud.

A typical response went something like:

"Hey mate I've got a game of footie on Thursday in Streatham if you want to play? It's just a group of work mates and we're one short. Nothing too serious."

What struck me when reviewing the interview transcripts was that the order in which they'd give the details was almost always the same: sport type, time/date, location, who's playing, and then any extra details.

So when drawing out a new set of wireframes it seemed sensible to start by borrowing the language flow of what users were saying.
What, When, Where, Who, How much and AOB. In that order.

Measuring Results

Once the new flow was designed and live it was easy to analyse the performance to see where users were getting stuck or taking time.

The redesign took the time to create an event down from 3-5 minutes to an average 1.8 - 2.5 minutes per event created.
Line graph showing the time it took users to complete each step in seconds.
Setting Location was reliably the longest part of the flow.
But that wasn't the main improvement: the new flow had a 4x effect on the conversion rate of events created.

So while in the previous flow only 10-15% of those who started creating an event would finish, the new UX delivered between 40-60% week on week.
This waterfall chart from the Facebook Developer platform shows a 51% conversion rate with a mean completion time of 150 seconds.
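The arithmetic behind a funnel like that is straightforward to sketch. The step counts below are made up for illustration - they are not the real Find a Player numbers:

```python
def funnel_conversion(step_counts):
    """Given the number of users reaching each step of a flow in order,
    return the per-step pass-through rates and the overall conversion."""
    rates = [step_counts[i + 1] / step_counts[i]
             for i in range(len(step_counts) - 1)]
    overall = step_counts[-1] / step_counts[0]
    return rates, overall

# Hypothetical counts for What -> When -> Where -> Who -> Event created
rates, overall = funnel_conversion([1000, 900, 800, 700, 510])
```

With those made-up counts the overall conversion comes out at 51%, and the per-step rates point to exactly which screen is leaking users.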

Wrapping Up

Those are just three examples of the UX/UI work I did at Find a Player. While I was there we also launched payments, and re-skinned the app according to the simplified design system.

One final design responsibility I had was briefing an illustrator to visually bring the benefits of using the Find a Player app to life.

Finding players, collecting game payments and helping players find games across a range of sports.

Growth

If you're familiar with the start-up world you might have read the essay Startup = Growth, by the inimitable Paul Graham.

In it he says what makes a start-up distinct from a small business is a start-up's scalability.

He shares some examples of iconic start ups and how obsessively they cared about growth.

When I joined Find a Player there were some metrics being discussed and measured, but it was unclear whether they were the key metrics, and there was no regular forum to review them.

I felt it was important to set up a regular metrics review to ensure the whole team was orientated around moving the metrics that mattered.

A popular framework that many use is the pirate metrics funnel.
AAARRR "Pirate" Metrics

AAARRR at Find a Player

The first task was to work out what these pirate metrics meant in relation to the Find a Player product. We had to prioritise which were worth tracking, as analysis paralysis is a serious obstacle to getting things done.

Once we'd prioritised, we would work out how often to review them and set goals in order to improve them.
Awareness
Awareness is largely a marketing metric. Bluntly, it's the % of people in your market who are aware of your product.

It's important, as no-one can use your product if they're not aware it exists. But it wasn't particularly relevant to a pre-product market fit start-up like Find a Player.

A marketing team for a larger product might also break this down into some brand metrics like consideration and likability.
Acquisition
Acquisition concerns how you get users.

The team had demonstrated some success using Facebook ads. But because the product didn't have a reliable revenue stream yet it was important not to spend too much time or money on paid acquisition.

Absent paid acquisition the main acquisition channels were "organic" discovery and word of mouth (e.g. referrals).
Activation
Activation is the process by which users who have engaged with the product (e.g. downloaded the app) then get to using the product.

For us this was download, registration, and setting up or joining a game.
Retention
Retention is the measure of how many people continue to use your product over time.

This was a crucial indicator of product-market fit (PMF). We were targeting weekly small sided sports players, and the majority of organisers would play every week.

We set key targets for how many users would still be active four weeks and twelve weeks after sign up.

If we had PMF, we hypothesised it would be likely that the % of users still using the product after one month and three months (week 4 and week 12) would be fairly consistent.

Likewise, if the % of users still using the product in W4 was going up, we could reasonably say we were improving the product.
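A W4 or W12 retention figure like that can be sketched as follows - a hypothetical helper, assuming you have signup dates and a set of active days per user:

```python
from datetime import date, timedelta

def retention_at_week(signup_dates, activity_by_user, week):
    """Fraction of a signup cohort still active in the given week
    after signup.

    signup_dates: {user_id: date the user signed up}
    activity_by_user: {user_id: set of dates the user was active}
    """
    retained = 0
    for user, signed_up in signup_dates.items():
        window_start = signed_up + timedelta(weeks=week)
        window_end = window_start + timedelta(weeks=1)
        # Active at least once during week N after signup?
        if any(window_start <= d < window_end
               for d in activity_by_user.get(user, ())):
            retained += 1
    return retained / len(signup_dates)
```

Tracking this per signup cohort is what lets you say "W4 retention is going up, so the product is improving" rather than relying on raw active-user counts, which paid acquisition can inflate.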
Revenue
Revenue concerns the main stream of revenue for the product. While for big businesses it can be important to build a diversified set of revenue streams, in small start-ups it's important to focus on getting one right.

For Find a Player, the group payments feature was still in the alpha & beta stages being tested on a small number of users.

This payment system allowed organisers to collect payments from all their players in one place - instead of collecting cash, sorting bank transfers or simply not being paid by fee-dodging friends.

The important metric to track was total number of transactions on the platform per week.
Referral
Referral is the metric that tracks how likely a user is to invite or sign up a friend.

This is a difficult metric to measure: while some players would sign up because a friend sent them a link (something we could track), others would simply hear about the app from a friend and sign up themselves (something our tech team found a proxy for tracking).

Because sport is a social experience, and the product helped organisers and players run their weekly games - referral was a relevant metric for both our retention and acquisition.

Our referral metric was called the viral coefficient and was tracked weekly. We had separate metrics for organisers and players, as we hypothesised that organisers might bring more of their friends onto the platform than casual players.

If you now have a favourite epidemiologist, you will understand that a viral coefficient is something like R0: if R0 > 1.0 you have exponential growth. In the case of an app, this is a very good thing.
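A minimal sketch of the viral coefficient and why k > 1.0 compounds - hypothetical helpers, not our actual tracking code:

```python
def viral_coefficient(invites_per_user, invite_conversion_rate):
    """k = average invites each new user sends, multiplied by the
    fraction of invites that convert into new users.
    k > 1.0 means each cohort recruits a larger one: exponential growth."""
    return invites_per_user * invite_conversion_rate

def users_after_generations(seed_users, k, generations):
    """Total users after n invite 'generations' - a geometric series in k."""
    return seed_users * sum(k ** g for g in range(generations + 1))
```

So 100 seed users with k = 0.5 fizzle out at 175 users after two generations, whereas any k above 1.0 keeps compounding - which is why we tracked organisers and casual players separately, expecting organisers to have the higher k.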

AAARRR at Find a Player

As a pre-PMF start-up we agreed to focus on the lower half of the funnel: Retention, Revenue and Referral.

We used Facebook Developer Analytics to track these metrics (a painfully slow, hard-to-use tool - I'm sure there are better analytics tools out there).

We reviewed them weekly in a metrics review on Friday mornings, where the whole team would discuss in depth what had happened to the metrics, why they might have moved and what had changed in the product that week, then set targets for where we'd like them to be the next week.

I chaired this meeting for 27 consecutive weeks and shared the metrics pack with everyone in the team - to align the whole company around working on what mattered.

I can't share precise metrics as I am no longer at Find a Player but I can show a graph of product usage during the first half of 2020. Bonus points for you if you can guess what happened around the middle of March.
Find a Player sent out an e-mail to all users on March 10th suggesting social distancing guidelines for sports games.
While quantitative growth metrics were really important for measuring the progress of the app, they were not sufficient to tell us how to improve the product.

Equally, having no regular review of quantitative metrics would mean not knowing whether your product is going in the right direction.

Over the course of my time at Find a Player I was proud to have set up, together with the head of engineering, a comprehensive metrics-tracking system that would put the team in good stead to provide accurate, hard evidence of product improvements to potential investors and new team members.

Product Management

Product management can be a tough job to explain. Sachin Rekhi's post The Top 10 Deliverables of Product Managers brings the most clarity to what a PM has to juggle.

He breaks it down into Vision, Strategy, Design and Execution.

As Find a Player was still a small start up, we didn't have any one individual assigned as a "product manager", so most of the team had to chip in to take responsibility for different parts of the vision, strategy, design and execution.

Below, I'll outline what I felt I contributed to product management at Find a Player, but it's important to stress it was always a team effort.

Vision

The vision of the company usually comes from the CEO. And the clearer the vision, the easier it is to aim for.

Find a Player's vision revolved around making it possible to play your favourite sport, anywhere in the world, through the touch of a button.

The mission was to build the most seamless way to play sport, as easy as ordering a Deliveroo, or buying a gig ticket on Dice.

Strategy

Strategy at a start-up is typically a conversation about product market fit.

We had to identify where we were in relation to PMF, and how we were going to get there.
Where are we?
Product-market fit is a tricky thing to describe to anyone who's not familiar with the term - but in essence it's the point in building a product when you realise you've built something that people really want.

It's famously hard to measure, but they say you can always feel when it's not happening.

Andrew Chen, ex-head of growth at Uber, says about product market fit:
"These days, most start-ups fail because of lack of PMF, not technology risk. So if there's one thing that will kill you, it'll be the product never quite working, and thus, all the subsequent problems that come with that: lack of investor interest, employees leaving, cofounder fighting, etc.

It's a stressful time when it's not working. I find that product/market fit is usually reached either right away or you have to be incredibly thoughtful about your iterations if you're almost there but not quite."
This quote certainly struck a chord with the team. We knew we had an idea people wanted but we hadn't built what they wanted yet.

Thankfully, Rahul Vohra at Superhuman had shared a guide he'd created to measure progress towards PMF and how that helped his team to focus.

In short, his PMF engine asks 4 questions.
1. How would you feel if you could no longer use Find a Player?

2. What kind of person do you think would benefit from Find a Player?

3. What is the main benefit you receive from Find a Player?

4. How can we improve Find a Player for you?
The aim with the first question is to reach 40%+ of respondents saying they would be very disappointed if they could no longer use the app.

The aim of the second question is to identify the users who say "people like me".

The third question teases out the strengths of your existing product.

The fourth question highlights what you need to improve the product.

He then recommends spending 50% of your time continuing to improve the strong points that users highlight in question three, and the other half working on the areas highlighted in question four.
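Scoring the first question is simple enough to sketch - a hypothetical helper built around the 40% "very disappointed" benchmark, not the actual survey tooling we used:

```python
from collections import Counter

def pmf_score(responses):
    """Fraction of respondents answering 'very disappointed' to
    'How would you feel if you could no longer use the product?'.
    The usual benchmark for product-market fit is 0.4 (40%) or above."""
    counts = Counter(responses)
    return counts["very disappointed"] / len(responses)
```

For example, 4 "very disappointed" answers out of 10 responses scores exactly 0.4 - right on the threshold.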
I ran the PMF survey on existing users and it gave us some clear feedback.
Fewer than 40% of respondents said they'd be "very disappointed" - below the threshold needed to hit PMF.

Some users liked us but very few loved us.

The main benefit was helping people organise their regular games.

The app had problems in key areas: the performance of in-app messaging, and the presence of inactive games and players.
This process gave us a quant metric of how close we were to PMF.

It also gave us a clear hypothesis for how we could get there.

In short, we needed to continue to improve the whole experience for game organisers; not necessarily just for finding players and finding games.

In addition, we should remove any inactive users or games, and work hard to maintain an active user base.

For the next 12 months we had to evaluate any new ideas for product improvements against this hypothesis.
Where are we going?
While the vision and mission of the company were fairly clear - we needed to break down the steps towards realising that vision.

As a seed stage company, it was felt we needed to demonstrate clearly that we were making progress towards product-market fit and building a business with a tangible revenue stream.

This meant prioritising working on parts of the product that would deliver a revenue stream as well as improving the user experience.

We worked out we needed to launch the group payments feature and demonstrate its impact on the PMF score before looking to do any more fundraising.

We set a date on when we expected to start the process of raising money (pre-Series A) and worked backwards from that date to determine what we needed to do week-on-week.
How are we going to get there?
The challenge of deciding what to work on when you have limited resources is a universal one.

I learnt the hard way that it's very easy to spend weeks designing and shipping something that ultimately has little impact on the most important metrics.

On top of that, we were having lots of ideas about what would be nice to have in the product, spending long meetings discussing these amazing ideas (they're always amazing), and then letting them sit in the product backlog waiting for them to get built.

As our head of engineering rightly said at the time "this is the opposite of agile".

The Product Development Cycle

It's not too harsh to say our product development cycle looked something like this.
"Planning Poker", how we prioritised:
1. Outline of new feature or other idea.

2. Job value (estimate between 1-5)

3. Job size (estimate between 1-5 - for how much engineering time it would take)

4. Each member of the team shares their value and size estimates.

5. Discussion ensues, to iron out discrepancies.

6. Team revisit their value and size estimates.

7. Revised estimates put in a Google sheet.

8. An average score for each idea is created.

9. The ideas with the highest average job value and the lowest job size were prioritised.
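The averaging and ranking at the end of that process can be sketched as follows - a hypothetical `prioritise` helper, with illustrative idea names rather than our actual backlog:

```python
def prioritise(ideas):
    """ideas: {name: (value_estimates, size_estimates)}, where each
    estimate list holds 1-5 scores from individual team members.
    Returns idea names ordered so that high average value and
    low average size come first."""
    def avg(xs):
        return sum(xs) / len(xs)

    scored = {name: (avg(values), avg(sizes))
              for name, (values, sizes) in ideas.items()}
    # Sort by average value descending, breaking ties on smaller size
    return sorted(scored, key=lambda n: (-scored[n][0], scored[n][1]))
```

In practice this lived in a Google sheet rather than code, but the logic is the same: a high-value, low-size idea floats to the top, and a high-value idea with a huge engineering cost sinks below it.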
This was a relatively useful process - however, what would inevitably emerge was tens of ideas that would sit in the backlog for weeks or months, either going stale or needing re-prioritising at a later date.

Instead of wasting lots of time discussing these old ideas again and again, what I felt we needed was a more formal process to shape the ideas before they were pitched, discussed and estimated.

Ideally new ideas would arrive better shaped, formed and tested - so we could make more accurate predictions as to their likely value.
The revised product creation cycle
I looked around to find some examples of how other companies were solving this problem. One insightful document came from Hiten Shah who had studied various product teams' working habits at companies like Amazon. This Product Habits document informed my approach.
The new cycle put an emphasis on defining user problems and testing prototypes
The loop starts with Research.

Research typically involves listening to users; existing or potential. When listening to users we would ensure we used principles of the Mom Test, so that we weren't asking leading questions or taking compliments as reliable feedback.
Problem Definition.

99.9% of the time, ideas that were presented, built or designed without a clear problem statement would create more and more work, eventually demoralising the team.

Working to write down a clear problem statement using a technique like Jobs-to-be-Done ("People don't want to buy a quarter-inch drill. They want a quarter-inch hole!") or the Kaizen 5 Whys was very useful for reframing problems and identifying root causes.
Brainstorm.

Once you have defined a problem, it's a lot easier to come up with lots of different ideas that might solve it. This contrasts with when you already have an idea, love the idea, and can't seem to let go of the idea. Having an attachment to one idea stops other, sometimes better, ideas rising to the surface.
Create.

This is the exercise of bringing the best ideas from your brainstorm to life - taking them from a raw form into something more tangible.

If it's a new illustration style, mock up the first attempt at it.

If it's a new feature idea, write it up as if you're writing the feature announcement email.
Prototype/Test.

Show the idea to other people. Ideally users.

Any form of prototype is good, and it's better if it's rough.

For example, if you've already written the feature launch e-mail, send it to 50 users, see how many reply showing an interest.

If your idea is that the app needs speeding up, first try slowing it down to see if users complain.

If you think users want a specific setting in the app, put a placebo button in and see if users click on it.

Do what you need to do to make sure you're not misleading users. But also do what you need to work out if your idea is worth building, shipping or scaling.
Learn and share.

Write up your findings from your prototype test. Even if they were negative. Especially if they were negative.

Write these notes down so other people can review and learn from them - this will help them give you some ideas to go back and test again.
Build

Once you've shared the idea with the team, work out the minimum viable version of the feature you could launch and set a deadline for when it will be launched.

Once it's out, go through the cycle again.
While this more formal cycle looks longer than the more informal cycle I shared upfront, it reduced the likelihood of work getting out of control or wasting weeks on a feature that flops.

In the twelve weeks before working through this cycle, we had managed to push out just two product updates.

Once we started implementing this approach, we were releasing updates every week over a similar time-frame.

Now this cycle is far from perfect and 'number of releases' is not an entirely reliable gauge of progress - but both the engineering team and users remarked on how the new tempo improved their respective flows.

Final Thoughts

I loved my time at Find a Player: I got stuck in doing things that don't scale, shipping product and working closely with developers.

It taught me that getting to product-market fit is a process of focus and iteration - the faster you can run that loop, the more likely you are to succeed.

If you're in the process of trying to build a marketplace or similar network, and have any questions, I'd be more than happy to hear them.

Thanks for reading.

Go well.