One of the most important production tasks is that of prioritizing work. Prioritization takes place throughout a project’s life cycle, from determining the importance of items in the ever-growing backlog to determining which elements of a given sprint should take precedence, all things being equal. Making these choices can have a profound effect on the success of a project.
You, as a producer, can certainly prioritize a set of features on your own. Your leads should be able to do the same. Yet, how do you approach and justify your prioritization when an external actor applies pressure to your project?
Imagine, for example, a scenario where a high-level stakeholder comes in and reviews the current roadmap with your team. You’re aware of the team’s capacity, and you’re aware of the time remaining. Four features are targeted for the upcoming sprint, but, combined, they exceed the sprint’s capacity by 50%. Even if you happen to be running under budget, you can’t simply throw more people at the project to hit your goals. New people need time to ramp up, and the next sprint begins before any ramp-up could feasibly complete.
Your stakeholder is insistent, however. They want everything on the list as soon as possible. How do you approach this?
You could take a capacity-based approach. After all, the numbers should reflect what is possible, given the time. Exceeding the team’s capacity introduces stability risk that could jeopardize the long-term success of the project. That would be a reasonable tactic, and it will be covered in detail in a future post on calculating costs and capacity.
Development costs don’t necessarily indicate what is important for the game, however, and prioritization is all about determining what is important.
MoSCoW Prioritization
A popular way of determining the priority of features within an Agile context is MoSCoW prioritization. MoSCoW stands for “Must have, Should have, Could have, Would like to have.” Anyone familiar with the P1–P4 prioritization paradigm in JIRA or Hansoft will recognize this general hierarchical structure.
For those who are less familiar with that paradigm, the MoSCoW categories are defined as follows:
- Must Have (P1): These features or components are essential for the release to be a success. If even one “must have” isn’t included, the release should be considered a failure. “Must have” features reflect the agreed-upon minimum usable subset or minimum viable product.
- Should Have (P2): These features are important but not necessary for immediate delivery. Generally, there’s another way to satisfy the core requirements, at least in the short term, allowing for a more complete roll out of all features in the future.
- Could Have (P3): These features are desirable, but not necessary. They might improve the overall user experience or satisfaction, but aren’t critical to the core experience. These can generally be released later in the schedule, time permitting, as product improvements.
- Would Like to Have (P4): Also referred to as “wish list” or “won’t have,” these items are the least critical, have the lowest return on investment for their effort, or are otherwise not appropriate for the project at this point in time.
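As a quick sketch, the MoSCoW-to-P-level mapping lends itself to a simple backlog triage. The feature names below are invented placeholders, not items from this article’s example:

```python
# MoSCoW categories map onto the P1-P4 paradigm; a backlog can then
# be triaged by sorting on that priority. Feature names here are
# invented placeholders for illustration only.

MOSCOW = {"must": 1, "should": 2, "could": 3, "would": 4}

backlog = [
    ("Cloud saves", "could"),
    ("Crash fix on startup", "must"),
    ("New emote pack", "would"),
    ("Localized UI strings", "should"),
]

# Sort by P-level: "must have" items rise to the top.
triaged = sorted(backlog, key=lambda item: MOSCOW[item[1]])
for name, category in triaged:
    print(f"P{MOSCOW[category]}: {name}")
```

The same sort works unchanged whether the backlog holds four items or four hundred, which is why the P1–P4 labels travel so well between tools.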
To use MoSCoW successfully, it’s important to understand the core aspects of each proposed feature, as well as any functional or holistic impact the feature implies for the rest of the product. If you and your team agree on what the core experience and peak secondary aspects of your game are, this should be something you can uncover easily during reviews of the design spec.
I understand that’s a huge assumption. No worries, though. I will cover spec reviews in another post!
Once you’ve prioritized the features and their components, isolating what you must have in order to release a successful product or product update, the team can move on to more discrete costing of tasks and the integration of tasks into the upcoming sprints.
This assumes that everyone is a rational actor and that they are able to take each other’s word regarding the relative importance of a feature to the product. It all sounds good on paper, but that ain’t how things will necessarily play out.
Let’s say that every feature is a “must have” top priority in the eyes of at least one stakeholder on the team. How can you continue to break this down?
As a producer, you should have a sense of the weighted average value of each feature. It’s also useful to have a basic utility analysis handy.
Weighted Averages
To create a feature ranking based on weighted averages, it’s first essential to:
- list the desired features
- determine the core impact vectors for proposed features (here, listed as stability, revenue, and user experience)
- determine the weight of each core impact vector (the weights must sum to 1.0)
- determine the value for each impact vector, on a scale of 1-10
- multiply each value by its vector’s weight, then total the weighted values
- assign a ranking based on each total (the highest total gets rank 1, putting the feature at the top of the list)
As an example, I’ve created a list of four features, each of high importance to the project. Any new feature that carries low technical risk gets a high stability score. Any feature that contributes strongly to revenue gets a high revenue score from PM. Any feature that enhances the overall experience of the game gets a high user experience score from Design. For this theoretical project, we’ve determined beforehand that stability is the most important factor in our development, followed by revenue, then design. Revenue is given slight precedence over design because we are making a free-to-play game.
From this simplified chart, it’s possible to see that the proposed Modular Event Structure has the highest overall rank, narrowly beating Multi-Stage Crafting as the priority feature.
The 3D Interactive Narrative System is, by a large margin, the lowest-ranked feature. At this point, it’s possible to discuss taking this feature out of the roadmap and putting it in the backlog.
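The weighted-average steps can be sketched in a few lines of Python. The weights and 1–10 scores below are invented for illustration, chosen only so that the resulting order matches the one just described:

```python
# Weighted-average feature ranking. The impact-vector weights sum to
# 1.0, with stability weighted highest, then revenue, then user
# experience, as in the free-to-play example. All numbers are invented.

WEIGHTS = {"stability": 0.40, "revenue": 0.35, "ux": 0.25}

# 1-10 scores per impact vector, as provided by the relevant leads.
features = {
    "Modular Event Structure":         {"stability": 8, "revenue": 7, "ux": 7},
    "Multi-Stage Crafting":            {"stability": 8, "revenue": 7, "ux": 6},
    "3D Interactive Narrative System": {"stability": 3, "revenue": 5, "ux": 9},
}

def weighted_total(scores):
    """Multiply each score by its vector's weight and sum the results."""
    return sum(WEIGHTS[vector] * value for vector, value in scores.items())

# The highest total gets rank 1, putting the feature at the top of the list.
ranked = sorted(features, key=lambda name: weighted_total(features[name]), reverse=True)
for rank, name in enumerate(ranked, start=1):
    print(f"{rank}. {name}: {weighted_total(features[name]):.2f}")
```

With these invented numbers, the Modular Event Structure (7.40) narrowly edges out Multi-Stage Crafting (7.15), while the 3D Interactive Narrative System (5.20) trails by a wide margin, mirroring the discussion above.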
Because the Modular Event Structure and Multi-Stage Crafting are very close to each other in rank, it’s useful to have a basic utility analysis handy.
Basic Utility Analysis
In a basic utility analysis, you have columns for utility and probability, which are multiplied together to create an expected value for a given feature.
Each utility value is on a scale of 1-100. The utility value can be developed by considering the overall benefit to the game, the benefit to the players alone, the benefit to the long-term development of future features, and so on. In this case, we’re imagining that each lead provided a utility score on a scale of 1-100 for each proposed feature, based on the overall benefit of that feature to the game. Those scores were then averaged, creating the utility score displayed.
Probabilities can be estimated based on the risk and/or skill required to attain each outcome. The probability scores themselves must add up to 1.0. In this imagined scenario, the team has built a multi-stage crafting system before in a similar development framework, so that task remains a known quantity and can be easily scoped for the project. The 3D Interactive Narrative feature, however, has never been done before and represents some real risks in terms of performance, especially on lower-end devices. For this reason, it carries the lowest probability score.
After multiplying the utility and the probability score for each feature, it’s possible to see the expected value of the given features. In this case, we can see that Multi-Stage Crafting has a significantly higher expected value than the Modular Event Structure, largely because of the lower risk associated with its development.
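A minimal sketch of the utility analysis, again with invented utility scores and probabilities (per the method above, the probabilities sum to 1.0):

```python
# Basic utility analysis: expected value = utility x probability.
# All utility scores (1-100) and probabilities below are invented
# for illustration; they are not the article's actual figures.

features = {
    # name: (utility score 1-100, probability score)
    "Modular Event Structure":         (80, 0.30),
    "Multi-Stage Crafting":            (75, 0.50),  # known quantity: highest probability
    "3D Interactive Narrative System": (85, 0.20),  # never done before: lowest probability
}

# Per the method described above, the probability scores sum to 1.0.
assert abs(sum(p for _, p in features.values()) - 1.0) < 1e-9

expected = {name: utility * prob for name, (utility, prob) in features.items()}
for name, ev in sorted(expected.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: expected value {ev:.1f}")
```

Note how the ordering flips relative to the weighted-average ranking: Multi-Stage Crafting (37.5) now clearly beats the Modular Event Structure (24.0), because its lower delivery risk outweighs its slightly lower utility.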
Because the outcome of the utility analysis is different from the weighted average value, the producer should lead a discussion with the team in order to determine which feature takes priority. If the producer believes that the team can take on the risk of a feature that has a higher weighted average value, then she should make an argument that supports that outcome. If the producer believes that the team might have difficulty in successfully executing against a higher-risk feature within the time allotted, then she should make the counter-argument.
Importance and Risk
All of this looks good, but I’m sure at least some of you are asking, “How do you actually calculate the weights or the probability?” I’ve read several books on decision making and game theory, including The Thinker’s Toolkit, The Compleat Strategyst, and a downright ancient (1963) business book called New Decision Making Tools for Managers. Ultimately, the weights you use and the probabilities you list are subjective. There is no quantitative way to determine what is important to you or your project. You must be willing to state your informed opinion and to give it an appropriate weight, based on your experience, skill, and expertise in the field.
I’m sure some people don’t like this answer. Those are the same people who think you can quantify something like “fun” or “joy” or “love.” I’m sure, given enough time, enough data, and enough computing power, it may become possible to calculate objective values for a population with specific expectations of reality, negating any real project development risk. As of right now, we do not yet live in that world. We must be willing to put our own judgment to the test. We must be willing to take risks, if we are to lead and if we are to win.
Allow me to reiterate: risk will, for the foreseeable future, remain a part of the development equation. It’s best to make your peace with that and to — to bastardize the sentiment expressed by Martin Luther — “sin boldly,” if you are to “sin” with your prioritization methodology at all.
I will point back to my earlier post on trust to share that it is important for you to tell your team why you are valuing some things more highly than others. For example, the product managers may not agree that stability is ranked higher than revenue. Likewise, the designers may balk at the thought that the user experience — the “fun factor” — ranks so low in your calculations. If you are transparent in your decision making, however, your team will at least understand how you came to your conclusions, even if they don’t necessarily agree with the conclusions themselves. That understanding will help engender trust and a united front among your team, which is essential to creating the consensus necessary for productive development.
After all of this, it should be possible to establish a prioritization with the stakeholders that respects what is possible within the immediate future, what will be addressed over the current and coming quarters, and what the player will experience with each release. Again, it is important to communicate not only your decision, but your methodology. By demonstrating that you have a strategy that informs your process, your stakeholders are more likely to trust your decision making and the tactical choices that you must make during development.