UX Research of an Online Banking App: Our Experience, Mistakes, and Discoveries
Hi. I’m Denis Elianovsky, the CDO at JTC and Founder of Opium Pro. We work in very narrow segments of the Russian IT market related to finance and document workflow. I have been involved with design in one way or another for the past 12 years (8 of them spent working closely on the design of complex interfaces), and I want to talk to you about how we conducted usability testing of a Remote Banking Service mobile app. I also want to point out some of the mistakes that were made and the conclusions that were drawn as a result. At some point, I will recommend a couple of books.
So, what is a Remote Banking Service? As you can derive from the name, it’s a type of service that lets users manage their banking operations remotely (usually with the help of a device such as a phone, tablet, or PC); these applications are also often referred to as online banking apps. Well, we are one of the companies that design and develop these types of apps — nice to meet you.
You’re very likely to find something useful in this article if you’ve already heard about UX research and would like to test your own app, but don’t really know where to start. If you already have experience with this type of testing, then I’m sorry to say that you might find the article somewhat boring.
For those who prefer to process information by watching videos, you can follow this link; just remember to turn on the subtitles, because in this video I speak Russian:
What kinds of tests are there?
If you asked me to describe UX research in 31 words using only a single comma, then: UX research is a quick and useful activity that can be performed at the early stages of development, which can help in avoiding mistakes that would inevitably come out during production. Keep in mind that launching an online banking app is not only incredibly stressful, but also incredibly expensive. However, UX research lets you test your app on real-life users, long before you start unloading dump trucks full of money into the pockets of marketers and developers.
Of course, every designer (every good designer, at least) tests the app before releasing it. But more often than not, this procedure resembles a DIY project, rather than a serious systematic approach. That is to say, apps are tested on friends and acquaintances, as opposed to people that accurately represent the actual target audience. In this article, I want to convey that for large and complex applications, design testing should be a standardized process and integrated into the design process itself.
First and foremost, it’s important to understand that research can be divided into quantitative and qualitative kinds. Quantitative research is about audience coverage: we try to increase the number of people we test with. Qualitative research involves fewer people, but we conduct detailed interviews with each respondent and dive deeper into each specific case.
Research types can also be divided into behavioral and attitudinal (sometimes called perceptive) categories. In behavioral testing, we monitor what a person does with our application. In attitudinal research, it’s more important to analyze what the person says (and thinks) about our application.
In this article, we will focus on Usability tests, which can be classified as qualitative behavioral tests. In Usability testing, fewer people are involved, and what they actually do with the application is carefully observed and recorded.
The design process is well reflected by Damien Newman’s squiggle.
And since I established above that design and testing should be integrated, the squiggle represents the unified process of creating the design and testing it. Two conclusions can be drawn from this graphic: 1) design and testing form an integrated, yet non-linear process; 2) it’s also an iterative process. This becomes obvious as soon as you look at the squiggle, don’t you think?
What do I mean by non-linear? At the beginning of development, we have many different theories to test and prototypes to create, and only closer to the end of the design process do things start to settle down. With every new prototype (a new iteration of the design), each testing session starts to resemble the ones that came before it, and the changes in design become fewer in number: they may become deeper and more elaborate, but from the outside they become much harder to notice.
When I call the process iterative, I mean that you can’t just test the app once and be done with it. Testing should be performed regularly in order for it to be sensible. Especially considering that the design can also change quite drastically in the process.
How do you prepare for a UX study?
- A Usability test begins with putting forth a hypothesis. So, what’s a hypothesis? It’s an assumption about how a specific person will use the application. More specifically, it’s an educated guess at which scenario the user will choose to follow. When putting forward a hypothesis, we need to study the analysts’ predictions, try to take into account our own personal experience in using similar applications (if there is any), and use all this to compile a User Story Map. Using the User Story Map as the foundation, a clickable prototype of the application can then be created.
- Compose a questionnaire. It’s important that it is short. If you test people for too long, they will quickly start to get tired. Ideally, one session with one person should take 10–15 minutes, 20 at most (after that, in order to keep the respondent’s attention, you may have to resort to extreme measures such as taking their relatives hostage, getting them high, or begging them for the sake of all that is Holy to keep going). To hit the sweet spot, we usually prepare 5–7 questions/scenarios. The questions must be open-ended: this is very important. Open-ended questions are ones that can’t simply be answered with a “yes” or a “no.” We must provoke the person to want to share something with us; to open up their heart and soul.
- Find a group of people. It should also be a small one (5–7 people). After all, we’re conducting qualitative tests, which involve the use of small group samples. There are various ways you can recruit people. You can schedule interviews in advance and even pay them a bit, or you can go out into “the field” and look for suitable people in cafes and other public places.
Before you ask it yourself, I will answer the most popular question from our customers, which goes a little something like: “You probably test everything yourselves to ensure that you get good results, don’t you?” No. Not only do we not test on our own staff, we also try our best to exclude people with professional bias from testing, i.e. people who participate in tests for money, as a profession, as well as designers, programmers, and anyone else subject to professional bias.
Let’s go over what we need for preparation in order to solidify the knowledge.
1. Putting forth a hypothesis
The picture below shows several possible options for visualizing this process.
The leftmost screen shows how you can create a User Story Map using the tools that are immediately at your disposal. As you can see, even post-it notes will do. By connecting the notes with strings, we map our assumptions about the paths the user will take through the application.
The second option is also a User Story Map, but drawn in Miro. It’s basically the same post-it notes, but switched over to a digital form for convenience.
And the third screen is a clickable prototype, created in Figma.
Although there are specific tools mentioned above, there’s no rule written in stone saying that you have to use them, because hypotheses can be created and visualized with whatever is convenient to you. For example, our team has enthusiasts who conduct all testing with pieces of paper. They also have a clickable prototype on pieces of paper — at least it’s a start.
2. Creating questions
Open-ended ones. It’s also great if they are in the form of a story. So, let’s assume a respondent has to block their bank card to solve a particular problem. We don’t just straight up say “Block the card!” We tell them a story instead. Ideally, the story should immerse the person as much as possible and push them to tell their own story in response.
3. Finding respondents
This graph was created by Jakob Nielsen back in the 90s. Even back then, UX studies were already being conducted.
The horizontal axis represents the number of people we’re testing, and the vertical axis represents the number of errors found. Notice how the graph starts to level off after about the 5th person. What does this really mean? It means that after testing 5 people, the efficiency of testing drops sharply with each new respondent: each additional person uncovers fewer and fewer new errors. Jakob Nielsen drew this conclusion, and we fully agree with it.
It is more efficient to use small samples in tests, but perform the tests often.
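The curve behind this conclusion can be described with the problem-discovery model Nielsen published with Tom Landauer: the share of usability problems found by n testers is 1 − (1 − λ)^n, where λ is the chance that a single tester reveals a given problem (about 0.31 in their data). A minimal sketch, assuming that λ value:

```python
# Nielsen & Landauer's problem-discovery model:
# share of usability problems found by n testers = 1 - (1 - L)**n,
# where L is the probability that one tester reveals a given problem
# (roughly 0.31 in their published data).

def problems_found(n, L=0.31):
    """Share of all usability problems uncovered by n respondents."""
    return 1 - (1 - L) ** n

for n in range(1, 11):
    print(f"{n:2d} testers -> {problems_found(n):.0%} of problems found")
```

With λ = 0.31, five respondents already uncover about 84% of the problems, which is why small samples tested often beat one big expensive session.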
At first I wanted to suggest Jakob’s book, but it’s already somewhat outdated. I have something better: the author is still performing UX research to this day, and on his site you will be able to read plenty of articles on the subject: nngroup.com
The testing process
It is ridiculously important to record all the tests on video. Video is the most important artifact of testing. If you don’t have a video of it, then you can safely say that you didn’t do the test.
First: since we’re testing a mobile app, before even starting the test we hand the person a phone. We ask the person to relax and to narrate everything that’s happening in front of them (including everything they themselves are doing). It’s very important for us to understand the respondent’s train of thought.
Secondly, (and we call this rule the “5 Why’s”) during the process of going through the script, it is necessary to get the person to explain any stop they make, or any doubt they may have in an action that they’ve already made. At this point, it can help a lot to ask questions such as: “What did you expect to see on this screen?”, or “Why did you click on this button?” It’s not always exactly 5 Why’s, but the point is for you to ask as many questions as possible and immerse yourself in the person’s mind as much as possible.
And thirdly, at the end of the test, we ask, “Do you think you’ve fully completed the task?” Moreover, not only does the respondent answer this question; the person running the test does as well. Why this needs to be done, I’ll explain a bit further on.
Now let’s move on to the tests themselves. This table shows what the summary results of a research study might look like. These are the actual results of our first test.
On the left is a list of questions and stories, followed by columns for each of the respondents. If you see a 1, it means that both the test subject and the tester considered the task to be fully completed. If 0.5 — it means that one of the two believes that the task was not fully completed. If 0 — both agree that the task was not completed. Using this data, we can understand which of our scenarios are solid, and which are flimsy.
Using this particular data, we can conclude that, for example, we did well with the card blocking process, as everyone believes that the task was fully completed. And regarding money transfers — not so great, and this is what we need to focus on.
We tested our mobile RBS application for the consumer market. In total, at the time this report was made, two iterations of testing were performed with 6 people tested in each of them. In total, there were 7 women and 5 men, with an age range of 20 to 50.
Initially, we weren’t trying to select from a very wide spectrum of professions, but it actually turned out to be quite diverse: teachers, doctors, restaurant administrators, and so on.
Following a request from our client, the second session had more people that were over 40 years of age in the group sample. And it was with this audience that the most errors were found. In contrast to the previous groups, they often got stuck on some screens, stopped here and there to think about what to do next, and had to ask the most questions.
The test results in terms of “fully completed / not completed”:
It turned out that the people we tested actually fully completed 93% of the tasks. However, they themselves believed they had fully completed only 83% of them. This 10% difference represents the moments when a person went along the designed scenario and our tester saw the task completed, but the respondent wasn’t entirely sure of the result. These are also problems that need to be worked on: in such moments the application doesn’t give the person the desired feedback, so this needs to be fixed as well. On average, a session took about 12 minutes, which is pretty good, considering we estimated 10–15 minute sessions.
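The scoring described above is easy to reproduce. Here is a minimal sketch using made-up session records (not our actual data): each task gets two verdicts, one from the tester and one from the respondent, which collapse into the 1 / 0.5 / 0 scores and the two completion rates:

```python
from collections import defaultdict

# Each record: (task, tester_says_done, respondent_says_done).
# Hypothetical data for illustration only.
sessions = [
    ("block card", True, True),
    ("block card", True, True),
    ("transfer money", True, False),   # tester saw it done, user unsure -> 0.5
    ("transfer money", False, False),  # both agree it failed -> 0
]

def score(tester_done, respondent_done):
    """1 = both agree done, 0.5 = the two disagree, 0 = both agree not done."""
    if tester_done and respondent_done:
        return 1.0
    if tester_done != respondent_done:
        return 0.5
    return 0.0

# Per-scenario averages show which flows are solid and which are flimsy.
per_task = defaultdict(list)
for task, t, r in sessions:
    per_task[task].append(score(t, r))
for task, scores in per_task.items():
    print(f"{task}: {sum(scores) / len(scores):.2f}")

# The tester/respondent gap (our 93% vs 83%) falls out of the same data:
tester_rate = sum(t for _, t, _ in sessions) / len(sessions)
respondent_rate = sum(r for _, _, r in sessions) / len(sessions)
print(f"tester: {tester_rate:.0%}, respondent: {respondent_rate:.0%}")
```

Any scenario whose average drifts below 1, or whose respondent rate trails the tester rate, is a candidate for redesign or better feedback.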
Below is the app design that was used in the first iteration of the tests. Let me explain what we decided to change after analyzing the test results.
I will talk about it from the point of view of a user who is poking around on this app.
Suppose I need to pay my phone carrier and add some money to my balance. There should probably be a “pay” button somewhere around here. I don’t find the button and leave the screen to look for it in the hamburger menu in the upper left corner.
So what’s the problem? There are two “pay” buttons on the screen, yet users noticed neither of them. This was observed in 3 cases out of 6.
There’s another problem. The analytics section, which we thought was incredibly useful, was unfortunately not considered as such by the users. All it did was confuse people.
If you take a step back and look at the screen globally, then you will see that the screen is overwhelmed with information; it’s difficult for the user to sort through the clutter and find what is located where.
The second screen is the payments/transfer section:
During testing, we found that people are interested in seeing their regular payments, and they look for them on the payments screen. In the first version of the design, these items sat at the very edge of the screen and were partially hidden by a horizontal scroll. Worse, the regular payments lived in a separate tab altogether, which made them even harder to find.
The third screen is one that shows the user’s various bank products:
All the people we tested (okay, almost all the people) said that this screen was useful. They knew how to get to it, and they used it often. This is where we created a problem for ourselves, by placing the link to this screen in the upper left hamburger menu. In the video, we noticed that in order to press this button, many people shuffled the phone around in their hand, and this caused them discomfort.
Today’s phones have become pretty big for one-handed use and can easily be dropped; we decided this was a problem for us and that we will work on it. By the way, guess why the author of this article is walking around with a broken phone?
The following pictures show the significantly changed design of the second iteration.
As you can see, the screen has become much simpler in design. Now, when we asked users to tell us what they saw, their answers came pretty close to what we imagined they would say. The analytics section was moved to a separate screen behind its own button, so it no longer clutters up the main screen.
On the payment screen, regular payments are shown in such a way that they are easily noticeable. But people still get stuck here, which implies that we will definitely need to improve and simplify it.
The third screen shows the list of the bank’s products belonging to the user. Here we moved its access point to the bottom of the screen, instead of the upper-left hamburger menu.
Now this item is located directly under the user’s finger, and there’s no need to reach for it. All the user needs to do is just swipe up or click on it and it will open the list of bank products.
Here are some more observations and conclusions that we made during testing.
Who would have thought that there are lefties in the world (or people who are just used to holding the phone in their left hand)? And they have their own usage patterns to boot. While the majority of people have to shift their grip to reach the hamburger menu, a leftie doesn’t. We took note that lefties simply use their devices differently, and we will keep running tests to find out whether we can improve the experience for them.
There are also people with poor eyesight. Everyone knows it, and yet everyone forgets about it (by everyone, I mean designers). So how can people with poor vision be helped? Well first of all, you can increase the size of the icons and text in the design. Secondly, you can increase the contrast in the design. And there is another, less obvious hint: you can also separate information and increase the distance between sections, which will also help people to read the interface better.
According to the most conservative estimates that can be found on Wikipedia, 10% of people are left-handed, and 13% of people have poor eyesight. According to the more pessimistic estimate, left-handed people take up around 15% of the population, while people with low vision are at about 30%.
And some girls have long nails. These same girls also use their phones differently. It’s hard for them to press something in the lower right corner, because they can’t press with the very end of their fingertip, due to the fact that their fingernail gets in the way. Subsequently, this forces them to switch the way they hold their phone. There are no official statistics on the matter, but I can assume that at some point in their life, up to 50% of the planet’s adult population may end up in a similar situation.
In addition to the Usability tests highlighted above, there are many different ways to test UX. Among them are:
- Test using eyetracking
- A/B testing
- Online polls
- In-depth interviews
As the years of testing go by, we have come to better understand that UX testing is very close to the science of cognitive psychology, and especially close to the concept of “cognitive bias.”
For those who want to dig deeper into the matter, I recommend reading Daniel Kahneman’s book “Thinking, Fast and Slow”. Although you won’t find much about testing, the book will provide some food for thought by showing how the same people can answer the same question in completely different ways.
Thanks for reading! Did this article help bring you closer to testing your own interfaces? What did you consider to be useful (if you did manage to find a thing or two), and what did you think was irrelevant?
I’d like to thank everyone who helped to conduct this research and prepare the report:
- JTC Team — analytics, design, UX-research
- Denis Krasilnikov — design
- Anton Kazakov — UX-research
- Ekaterina Kashkovskaya — UX-research
- Dmitry Dobrodeev — UX-research
- Irina Ponomareva — video
- JTC Team — forming the report
- Maxim Blokhin — design
- Irina Ponomareva — video
- Nadezhda Molodtsova — video
- Tatiana Kitaeva — editing
- Pavel Chernetsov — translation
- Eoin Finnegan — proofreading, editing