Just like every team within ClickValue, team Research is constantly improving its processes, with the aim of working faster and smarter. Until recently, we used the standard ICE model to determine the order of our experiments. The team saw several areas for improvement and spent the first quarter of 2023 optimizing this way of working.
For anyone unfamiliar with it, the ICE model helps determine which experiments are best to start with: based on Importance, Confidence and Ease (ICE), the backlog of experiments is put in order.
Importance: How important is this experiment for the user and the customer? The more users will experience the change, the higher the score.
Confidence: How confident are you that this experiment will make an impact? The more sources support this assumption, the better.
Ease: How easy is it to set up and turn on the experiment? And how easy is the implementation afterwards?
Each of these three categories is given a score from 1 to 10 (sometimes 1 to 5), and the scores are added together. The total determines where the experiment is placed on the backlog.
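To make that calculation concrete, here is a minimal sketch in Python. The scores are made up for illustration; the classic model itself is not tied to any tool.

```python
# Classic ICE scoring: each category gets a score from 1 to 10 and the
# three scores are added, giving a total between 3 and 30.

def ice_score(importance: int, confidence: int, ease: int) -> int:
    """Return the classic (additive) ICE score for one experiment."""
    for value in (importance, confidence, ease):
        if not 1 <= value <= 10:
            raise ValueError("each category is scored from 1 to 10")
    return importance + confidence + ease

# Illustrative scores, not taken from a real backlog.
print(ice_score(8, 6, 4))  # 18
```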
The model is meant to make prioritization relatively objective, and it is, as long as everyone on the team always assigns the same scores to the categories. In practice, that turned out differently: scores varied between team members and sometimes over time, because it was not clear when something deserves a 5 rather than a 6. In addition, a range of 3 to 30 produces many overlapping scores. If two experiments both score 18, how do you decide which one goes higher on the list?
We solved the subjectivity problem by drawing up four yes-no questions per category. These questions can be answered relatively objectively, so every team member arrives at the same answer. Think of ‘Is the proposed adjustment immediately visible to every visitor (is it above the fold)?’ within the Importance category, or, under Ease, ‘Can the test be built within 1 to 7 hours?’. We then built an Excel sheet that returns a range of scores based on the number of times ‘yes’ is filled in. With three times ‘yes’, for example, a range of 6 to 8 rolls out. The consultant decides which score within that range is filled in, so a certain degree of judgment remains. We believe you cannot pin everything down in rules, and we also want to make use of the consultant’s expertise.
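As a sketch of how the answers could map to a score range: only the case of three times ‘yes’ yielding 6 to 8 is taken from the text above; the other ranges below are assumptions for illustration.

```python
# Hypothetical mapping from the number of 'yes' answers (0-4 per
# category) to a suggested score range. Only the "3 yes -> 6 to 8"
# case comes from the article; the other ranges are assumed.
YES_COUNT_TO_RANGE = {
    0: (1, 2),
    1: (2, 4),
    2: (4, 6),
    3: (6, 8),
    4: (8, 10),
}

def suggested_range(yes_answers: list[bool]) -> tuple[int, int]:
    """Return the (low, high) score range for one category."""
    return YES_COUNT_TO_RANGE[sum(yes_answers)]

# Example: three of the four Importance questions answered 'yes'.
low, high = suggested_range([True, True, True, False])
print(f"Consultant picks a score between {low} and {high}")  # 6 and 8
```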
The second problem, the overlapping scores, is solved by calculating differently: instead of adding the three category scores, we now multiply them. The maximum score is much higher (up to 1,000 instead of 30), so the scores are spread out more and there is less chance of overlap.
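A small sketch of the difference, again with illustrative scores: under the old calculation both experiments tie at 18, while multiplication separates them.

```python
# Additive (old) versus multiplicative (new) final score.

def additive(i: int, c: int, e: int) -> int:
    return i + c + e   # possible totals: 3 to 30

def multiplicative(i: int, c: int, e: int) -> int:
    return i * c * e   # possible totals: 1 to 1000

# Two made-up experiments that tie under the old calculation.
experiments = {"A": (8, 6, 4), "B": (5, 6, 7)}
for name, scores in experiments.items():
    print(name, additive(*scores), multiplicative(*scores))
# A 18 192
# B 18 210  -> B now ranks above A
```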
The twelve questions (four per category) are clearly arranged in an Excel sheet, and the built-in formulas ensure that the calculations are the same every time. This way, up to 50 experiments can be placed in the columns and all be assessed in the same way. The final scores are then listed side by side, so a priority list can be drawn up immediately.
The revised model is off to a promising start: the team can prioritize faster and the scores are more consistent. We are enthusiastic and would love to show you how this upgrade can speed up your processes as well. Contact us and we will walk you through how it works, or enter your details below and you will receive the model in your inbox.