Essential steps to successful user acceptance testing

Gill Walker's presentation from the 2018 Software Testing Symposium #STS18

Transcript of presentation

Globally, 60% of IT projects are said to fail. 60%. Across the US, that is costing $150 billion per year. That is significantly more than the loose change you'll ever find in my purse. If we look at the CRM and the ERP space, which is where I play, that increases to somewhere between 70% and 80%. Now think about that for a moment. Would you go to a surgeon who had a 70% failure rate? And yet, at the moment, that is what we, as in the CRM, ERP, and slightly broader community, are delivering to our clients.

If we keep the focus on the CRM / ERP space, one of the big problems with it is that projects are deemed to have failed. The word "deemed" is very important. That comes about because after the fact, after the solution is in and live, people are saying it's failed. But one of the reasons that it has failed is because it just didn't fix all the problems in the business. They didn't actually have anything up front that they could compare it to, against which it may or may not have succeeded. It was just, "We've put this solution in, we've spent all this money, and life isn't wonderful. Therefore, the CRM ERP has failed."

So, why does this happen? One of the reasons that we get this horrendously high failure rate with the sort of products that CRM and ERP are, and it's not only those two products, is that we are working on the implementation of a solution based on an existing product, as opposed to building software from scratch.

So let me ask you, 'how many of you primarily work in what I might call "traditional software development," so building software, whatever sort of software it might be, from scratch?' Okay, relatively few of you. That surprises me. 'How many of you, therefore, are working in that environment where you are taking a product; it could be a CRM technology, it could be a CMS website technology, it could be a whole range of things, and then implementing that technology, which gives us a lot of functionality straight out of the box, but we're making relatively minor changes to meet a particular client's needs?' That's definitely the space that I play in. Okay, so we do have a big majority.

So that brings us to another question: 'what are the key differences between those two types of project?'

I sat and worked through this. When we're looking at development projects, they are typically larger projects than what I'm calling "implementation projects." They're also more general, because they need to meet a wide range of end clients, whereas when you are in the end client space, it's really only your own needs that need to be met.

The starting point for a complete application is fundamentally nothing, whereas the starting point for an implementation is whatever product you have purchased to start with. Or, of course, if you are a phase two or an upgrade project, you're starting from whatever existed before. And in that case, of course, you may, courtesy of your colleagues or predecessors, be inheriting a load of problems that got through. And if we look at the degree of rigour for implementation projects, it tends to be a lot lower. So those are the reasons, I believe, that we get this higher failure rate. I'm going to go on a little bit further.

One of the points is how projects are sold. When we look at how all of these applications are sold, it is very, very competitive. A businessperson has decided they need to implement CRM, and they go out; they might do a tender, they might just invite half a dozen vendors to come and pitch to them. Those vendors obviously all want the project. They all want to win that sale. And the effect of that is that the vendors make it out to be a lot simpler than it is. It's a real dog-eat-dog world.

Why do they do that? If any one vendor didn't make it out to be simple, all the competitors would still make it out to be simple, and they would lose the project. There, automatically, we have a problem.

And of course, the purchasers, whoever they may be, don't know what they don't know. They've turned to these vendors for advice. They don't realise how much it's a dog-eat-dog world; they don't realise that if the vendors were a bit more honest and pointed out all the problems, they would lose the gig, therefore they don't. So they are trusting people.

Another big problem that we have is who is leading the project. Something that I find horrendously frightening is that if we allow IT to lead the project, then rather than meeting business requirements, it tends to become, "Here's an opportunity for me to play. If I develop this, I learn technology X, and technology X is now on my resume, so I can get another opportunity, and I'll get more pay, or I can travel," or whatever the things are that float my boat. So it becomes a bit of a playground.

We hear frequently that, for all of these reasons, IT should not lead these projects. But if we let the business lead the project, businesses typically don't have a depth of technology understanding. We get into the "Yes, sir," "No, sir," three bags full, where anything the business asks for is granted. I'm honestly not sure which is better. Do we want a business lead, or do we want an IT lead? The ideal world is to get somebody who can sit in the middle, who can talk in one direction to business people and understand what they want, and who can talk in the other direction and understand the technology and what it delivers.

There is another big problem that we can get sucked into, and that is AGILE. Don't get me wrong: AGILE, when done properly, is absolutely awesome. But in the space that I play in, AGILE has come to mean, "We'll show you a bit of the technology, or we'll do a bit where you can have a look at it, you can have a think about it, and then you can say, 'Oh, no, that's not quite right. We need this changed, we need that changed.'" And then some changes are done, and so we go on and on and on. To me, what we have got when that happens is not AGILE development, but frAGILE development. Because it is asking for all of the problems that Andrew highlighted earlier.

So that explains a little bit of what has happened before we get to testing, and why some of these problems exist, and therefore we have got senior management saying this has failed. They would never say it's deemed to have failed; they'd say it has failed.

If we go and look a little bit further, why are they saying that this project is deemed to have failed? Probably the commonest reason is that the users just refuse to use it. The organisation has spent large amounts of money in buying the software and doing the implementation, and now the users are just saying, "No. My spreadsheet's better. I'll go back to email. It's much easier."

And the other reason is that it is just quite simply not fit for purpose. So the users are under pressure to do whatever their job requires them to do, they've got their software, and it quite simply doesn't work. And therefore they quite simply don't use it. They're being asked to deliver particular results, particular metrics; the software doesn't help them, so they ignore it.

Moving on now to what the bulk of my presentation is about, i.e. user acceptance testing, which sits on the platform of what I have talked about earlier. User acceptance testing requires real users to accept that the software does what it was scoped to do. But as you will see, there are a lot of things that we need to have in place if that is going to be something that anyone has got any hope of doing.

So, successful user acceptance testing you could think of as an iceberg. And as many of you will know, what we see of the iceberg floating on top of the water is a very small amount of that total iceberg. Successful user acceptance testing is similar. We've got a lot of things that need to happen beforehand if we've got any hope of achieving successful user acceptance testing. I'm going to go through these in order.

We need to train our testers. The training I'm talking about here is quite specifically training of the testers. This is not our end user training, or training for developers so they know the product, or anything else. This is training the testers so they know what they need to test, and so they also know the reasons that the project came into being in the first place. Surprisingly, that one is often not done.

We also need the rest of the family of testing to have been done before UAT. We should not be finding whole gaps in integrations that just don't work, or lumps of code that are just falling over. You might get the odd one, but that is not what should be found in user acceptance testing. Those issues should all have been found earlier.

We also should have had the data migration. Users should be testing this application on real data, or at least representative data.

We need to have had some development. This is not just, typically, an out of the box product. Something should have been built to meet those requirements that the out of the box product didn't deliver.

We should have had some design, and that design should always be making best use of the out of the box product. The organisation, to get to this stage, has bought a product. That product probably meets somewhere of the order of 80% of the organisational requirements, and the whole project is really only about that final 20%. And we have a responsibility, when we are designing those solutions, to make sure that we are making best use of the out of the box functionality to get the 80%, and that we are not overcomplicating.

We also should have a scope; in other words, what is this project delivering? You might be thinking that's obvious; it's standard software development practice. And you're right. But I can say, hand on heart, I have been called in to rescue projects where pretty much every combination of those points that I've just gone through is missing. Which is really quite frightening, but we can see why we're getting to the failure rates that I talked about earlier.

So, successful UAT is sitting at the top of the iceberg, on top of a number of other things. They also are all held together by the timeline and the project manager. In theory. I have also been involved in projects where all of those are missing. So we've got this idea, we need CRM, we've bought a CRM product, we've installed it, bang, it will all just work. And then we start complaining because we don't have customers, we have clients, and the software ... But the software should have just known to change its terminology from "customer" to "client." Or that is the impression that we get.

Bringing it back to you people, what can you do if you are a test lead and you find yourself being asked to deliver successful user acceptance testing and one or more of those essential steps is missing? What do you do if the scope is missing? This is a situation where we've bought a CRM. "Well, all CRM is the same, all business is the same, so why do we need to do a scope? Implementing partner? You've been in this business for a while. You know all of this stuff. We don't need to do a scope." If you find yourself in that situation, my strong advice would be to, as far as you can, build some sort of scope. Talk to people: what were they expecting from the product that is being implemented? And once you've got that, you can then start looking at what actually has been developed, and you'll be in a position to start working out which bits are missing.
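To make that concrete, here is a minimal sketch in Python of the kind of lightweight scope register you might rebuild in that situation; the items and names are entirely hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ScopeItem:
    """One expectation gathered from talking to the business."""
    ident: str
    description: str
    delivered: bool = False  # ticked off as you compare against the build

# A reconstructed scope, assembled after the fact from those conversations.
scope = [
    ScopeItem("S1", "Record client contact details", delivered=True),
    ScopeItem("S2", "Track financial planning reviews"),
    ScopeItem("S3", "Report on overdue follow-ups"),
]

# The gap list shows which bits are missing before UAT can mean anything.
for item in scope:
    if not item.delivered:
        print(f"Missing: {item.ident} - {item.description}")
```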

What do we do when the design is missing? Surprisingly enough, this is probably the commonest of all of the bits that are missing. So we get a scope, and then no design, planning, or documentation. It's just, "We'll hire a developer, whatever a developer happens to be, and we'll spend five minutes saying we're this sort of business, we're a financial planning company and we've bought this CRM solution. Off you go, developer, you know what to do, don't you?" To me, that is like deciding that the house is getting a little small with the bub on the way, so I met a brickie down at the pub and I've asked him to come over Monday morning. There's a few bits missing.

If you're in the situation where the design is missing, what you need to do is take the scope. For the purposes of this, I'm assuming that only one of the components is missing, but as I said, it's relatively common for two or more of them to be missing. What you then do is take the scope, look at what has been delivered, and backfill from design with a focus on those bits that have not yet been delivered. Although if there was no design, chances are a fair bit of what the client was expecting will still be in the vapourware bucket.

Well, development. I think if the development is missing, it's probably time to go home, because you're not likely to achieve very much. Certainly not in the time frame that you've probably had foisted upon you at this point. But if you're there and you really see an opportunity to shine, what you need to do is go and get the design documentation, and then look for a small number of people with the skills in the technology itself, who can do the building for you. I would strongly counsel you to keep that team as small as you possibly can, because once you get a big team, you get what I love to call Chinese whispers. So you'll have bits happening over here and bits happening over there, and the whole lot is just not hanging together.

Data migration. I've been involved in a number of projects where management have said, "Now, all that data that people have built up over the past however many months or years, it's all crap, it's duplicate data, it's this, it's that, we're not going to bother." Or maybe they accept that data migration needs to happen in the project as a whole, but not for testing: "You can just make some data up. Doesn't really matter." In that situation, I think you really do need to stand up to the powers that be, explain the importance of realistic data for the testers, and get people to rewind and give you, if not a full data set, at least representative data, so that the testing can be done.
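As a sketch of what representative data can mean in practice, here is one way, in Python with hypothetical field names, of taking a sample of real records and masking the identifying details before handing them to testers:

```python
import random

# Stand-ins for production records; in practice these would come from
# the legacy system whose data is being migrated.
production = [
    {"name": "Alice Wong", "email": "alice@example.com", "segment": "retail"},
    {"name": "Bob Smith", "email": "bob@example.com", "segment": "wholesale"},
    {"name": "Carol Jones", "email": "carol@example.com", "segment": "retail"},
]

def representative_sample(records, k):
    """Sample k real records and mask the personal details, keeping the
    shape and distribution (here, the segment mix) realistic for testers."""
    masked = []
    for i, rec in enumerate(random.sample(records, k), start=1):
        masked.append({
            "name": f"Test User {i}",
            "email": f"test{i}@test.invalid",
            "segment": rec["segment"],  # keep the real distribution
        })
    return masked

print(representative_sample(production, 2))
```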

What happens if no other testing has been done? I think the simple answer to that is that you will have problem after problem after problem, which are really not appropriate for the users to be finding. When you're in that situation (and it's probably easier for people like you to handle, because you're more likely to have had prior exposure to it), you need to explain to management why all of those other testing components are important, and put in place a mini-project to make sure that those other phases do happen.

The final one is the training for our testers. It is completely wrong to just assume that we can bring in a user who has been doing whatever their job is for however long, sit them down in front of a new application, and let them go. But it happens. So the fix for that one is to go right the way back to the beginning of your project, look at the scope, train your testers in the reason for the project, make sure that they understand what the solution does, and then let the testing happen.

So now, I'm going to move on to what we need to do within the user acceptance testing phase of the project itself. What I want to do for this is to begin with the end in mind. And the end, of course, is successful user acceptance testing. But what do you need to do to make sure that that happens?

Well, the first point is: give the testers scripts. Make sure that they have got detailed steps to follow. What do they put in this field? Where do they move to next? How do they navigate the solution? And so on and so forth. It needs to be documented in a fair amount of detail.
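To illustrate the level of detail, here is one possible shape for a scripted test step, sketched in Python with entirely hypothetical form and field names; in practice the same structure often lives in a spreadsheet or a test management tool:

```python
from dataclasses import dataclass

@dataclass
class ScriptStep:
    action: str    # exactly what the tester does
    data: str      # exactly what they enter
    expected: str  # exactly what they should see

# A detailed script for one process: creating a new client record.
create_client_script = [
    ScriptStep("Open the Clients area and click New", "",
               "A blank client form opens"),
    ScriptStep("Enter the client name", "Test User 1",
               "The name field accepts the value"),
    ScriptStep("Enter the email address", "test1@test.invalid",
               "No validation error is shown"),
    ScriptStep("Click Save", "",
               "The record saves and appears in the Clients list"),
]

for n, step in enumerate(create_client_script, start=1):
    print(f"Step {n}: {step.action} -> expect: {step.expected}")
```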

We also need to make sure that we are covering all of the processes, and not just the normal or the straight-through processes, but what I call exception processes. To understand this, imagine you are testing an application that needs to take credit card payments. We all know that, on the balance of probability, 95% or more of credit card payments will just go through with no problems. In those cases, the cashier lets the person go with their goods: no problem. But the bulk of our testing needs to be not that straight-through process; it needs to be testing, what do we do when the credit card is damaged? What do we do when the credit card is stolen? What do we do when there are insufficient funds on the card? And any other issues. So it is essential when you are building those scripts that you're thinking not just of the straight-through process, but of your fraudulent credit cards, insufficient funds, damaged credit cards, and so on and so forth.
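Those same exception paths should already have been exercised in the automated testing that precedes UAT (UAT itself stays manual, as I discuss later). A minimal sketch in Python with pytest, where process_payment is a hypothetical stand-in for whatever the real solution exposes:

```python
import pytest

# Hypothetical payment handler, for illustration only.
def process_payment(card):
    if card["status"] == "stolen":
        return "REJECT_AND_RETAIN_CARD"
    if card["status"] == "damaged":
        return "REQUEST_ALTERNATIVE_PAYMENT"
    if card["funds"] < card["amount"]:
        return "DECLINE_INSUFFICIENT_FUNDS"
    return "APPROVED"

# One straight-through case, then the exception processes named above.
@pytest.mark.parametrize("card,expected", [
    ({"status": "ok", "funds": 100, "amount": 20}, "APPROVED"),
    ({"status": "damaged", "funds": 100, "amount": 20}, "REQUEST_ALTERNATIVE_PAYMENT"),
    ({"status": "stolen", "funds": 100, "amount": 20}, "REJECT_AND_RETAIN_CARD"),
    ({"status": "ok", "funds": 10, "amount": 20}, "DECLINE_INSUFFICIENT_FUNDS"),
])
def test_payment_paths(card, expected):
    assert process_payment(card) == expected
```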

And finally, we need to make sure that all of our UAT has structure. I've been at organisations deploying various CRM solutions where their idea of UAT is: at an appropriate time, we just let all of the users into a room, give them computers (if they're lucky, they'll get test log-ons), and say, "Off you go. Tell me which bits don't work." Of course, one of the biggest risks of that approach is that the faults that will be found are not faults as such; they're new requirements.

The user doesn't like that we've chosen pink as the background colour for the application. The user doesn't like that it takes four clicks to do this process. The user doesn't like whatever. But users not liking something is not a fault. If the product was scoped to be pink and take four clicks to do that process, then being pink and taking four clicks to do the process is a 100% pass, whatever the user might think. This is not the time when we are inviting people to think up what they would like, or what they think would be beneficial.

What user acceptance testing needs to be is: we have a scope, which we agreed with appropriate users, management, whoever it is. We have built to that scope, and now your role as the user acceptance testing team is to tell us: have we delivered on this scope? Whether you like it or dislike it is irrelevant.
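One way to keep that discipline is to triage every UAT finding against the scope. A minimal sketch in Python, reusing the pink-and-four-clicks example; the scope entries are hypothetical:

```python
# What the agreed scope actually specifies.
SCOPE = {
    "background_colour": "pink",
    "clicks_for_process": 4,
}

def triage(finding_key, observed):
    """Anything the scope specifies is a pass/fail question; anything
    it doesn't is a change request, not a fault."""
    if finding_key not in SCOPE:
        return "CHANGE_REQUEST"        # a new requirement: park it for later
    if observed == SCOPE[finding_key]:
        return "PASS"                  # delivered as scoped, like it or not
    return "DEFECT"                    # scoped but not delivered

print(triage("background_colour", "pink"))  # PASS, whatever the user thinks
print(triage("clicks_for_process", 6))      # DEFECT
print(triage("dark_mode", True))            # CHANGE_REQUEST
```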

Those, in summary, are the key things to think about if you are presented with a user acceptance testing project where you are being measured on the UAT being successful and some, or worst case scenario all, of your preceding colleagues have not done a particularly good job. It is still possible for you people to be successful and to come up smelling of roses. It might not be easy, but it is possible.

Questions.

Question: This is just something that, from a third-party implementation point of view, it's not all about the testing; you want a successful implementation for the customer. You can't just turn around and say, "We did this, my job's done." You need to get to customer success.

Absolutely.

To be clear, the problem with UAT is, as you said, the business end users only engage further on, so you get to that point where the business end user uses it, and they go, "Why is this one pink?" How do you address that? Because just saying, "Oh, it's blue in the scope, therefore it's blue," isn't going to cut it for the user.

It will depend on who the person is and how functional the issue is, because although I (and you, following me) used colour as an example, in most instances colour is not overly important. But there will be instances where we've got similar things, where the scope said A, and A has been delivered, but the user thinks that B would be better.

What I think needs to happen there is, we need to agree that A was in the scope, so for the time being, we've delivered A, so we'll get a tick on it. But then, go back to business and management and so on, and talk about B. Is there justification right now to make a change in scope, which is leading us into scope creep, and deliver B? Is B sufficiently better that that's really what we should do?

In most instances, I would say leave it as A right now, but put B forward as a change request that can be addressed later. It would be the very minor changes where I would say, "Oh, just do it." That is, of course, a risk. That whole, "Oh, I can do it for you, it's just quick, I'll make that change," where you're trying to please a user, can cause problems.

Question: I think you're right, and normally I wouldn't talk about it. In my experience, you might have 10 or 15 users all facing the same issue of not having been engaged early on, and they had this strange fixation with the end product. It's a big enough issue that it probably should have been shown on the iceberg: this end user engagement with the product.

Yes. It is part of the scoping. I suppose all of those layers we could have split out into more. The challenge is how many users you engage at that scoping, so as not to end up with, "Well, I think A," "I think B," "I think C," and then who ultimately should make the call on whether A, B, or C is the better way of implementing that scope point. And in that situation, if we assume that approximately equal numbers were voting A, B, and C, two thirds of them, when we get to the testing, are going to be dissatisfied.

That's engagement from an end user perspective, but there's also engagement from a management perspective. We decide this is what we'll do, and in three months' time the users start to use the product. The end users need to be trained in the new product.

Oh, absolutely. To me, that comes after UAT. End user training, absolutely is very important, but it comes after the user acceptance testing.

Question: How do you engage the end users?

To me, end user training is the "how do you use the product that we are giving you to do your job?" What I would do up front, and I strongly advocate, is training a wide audience within the business in what the solution does out of the box. But to me, that is business user or subject matter expert training, not end user training. To me, end user training does need to come after UAT, after you've got a sign-off and a stable product.

But yes, I completely agree, even before scoping. So if I were to go back to my iceberg, I would put my business training, or training in how the selected technology works, in front of or beneath my scoping. And then I would have my end user training up in the sky above the successful UAT.

Question: How would you fit user acceptance testing into an agile framework? A separate phase, or built into the sprint?

I would argue that everything that was on the iceberg exists in every sprint of agile. But there are alternatives. To me, agile is quite similar to waterfall, except it's much smaller. With waterfall, we aim to deliver this much in one go: major change. With agile, we deliver this much, this much, this much, this much. Which means that, of course, the users have had some benefit from whatever those first chunks were.

Question: I guess with the advent of agile, we're moving away from traditional planning techniques and more into automation techniques. Does that mean that our automation technicians have got to think about UAT, or should we maintain UAT as manual?

I feel, in the world that I play in, that UAT does need to stay manual, with real users. But that is very specifically within UAT as I defined it. One of the challenges that I see is that people ... and when I say people, I mean the business ... are just saying we need to do testing. Which, yes, you do need to do testing, but they're not splitting the family out into unit testing, system integration testing, stress testing, user acceptance testing, and so on. And user acceptance testing, by its very nature, is users. So while the users who ultimately will be using the product are living, breathing human beings, I think the users who do the user acceptance testing also need to be living, breathing human beings, not automated.

In which case, does UAT lend itself to a specific sprint?

I would say that you've got to have UAT prior to the go-live. It depends whether a sprint is something you're going to release, or just a chunk of functionality that you get to a point of regarding as finished before you move on to the next sprint. If a sprint is releasable in and of itself, then there needs to be a chunk of UAT for the functionality within that sprint. But if you're not releasing it, you can then do the UAT combined across two or three, or however many, sprints. There need to be as many chunks, to use a highly technical word, of UAT as we've got releases into production.

Any more questions? I will be hanging around for most of the rest of the day. More than happy to talk to people.

I'm now going to hand back to our MC.

This presentation came out of an earlier blog article User Acceptance Testing (UAT) for CRM Projects – Secrets of Success
