Over the last year and a half, we’ve successfully installed the OKR (Objectives and Key Results) model at Mostly Serious. As a framework for setting and consistently evaluating progress against goals, the system has been an absolute game changer. But, what’s been even more important to us has been the process of turning our team members into OKR experts, capable of independently using the model throughout the company for a variety of initiatives.

In this way, our approach to installing OKRs has created a self-replicating goal system, one that starts with a single point of introduction and education and spreads throughout the organization, turning all of our team members into strategic goal-setters. It was a long process, sure, but it will pay off in meaningful ways for many years. Here’s the (kind of) brief story of how we did it, and a thorough explanation of our approach so that you can do it, too!

The Critical Problem

Setting top-level goals for an organization is challenging enough. Developing an ecosystem of autonomous-but-interdependent goals across all of our teams seemed especially difficult. As we started to think through a comprehensive goal-setting framework that would work across Mostly Serious, we were specifically worried about three things:

  1. How could we ensure that everyone in the organization has access to an at-a-glance tool for seeing progress on company-wide and departmental goals? After all, it’s one thing to set goals and celebrate having written them down somewhere. It’s a fundamentally different thing to make those goals visible and hold one another accountable for publicly displaying their progress.

  2. How do we ensure that departments have autonomy over their goals while also ensuring that those goals aren’t at odds with the larger company direction? We, unlike many organizations, don’t require that all departmental goals necessarily and specifically ladder up to every organization-wide goal. But, we also don’t want our company-wide goals sending us in one direction while a given department races down another path.

  3. How do we ensure that all members of our team can become expert enough with our goal system to manage and interact with their team’s initiatives without bogging them down in unnecessary jargon or bureaucratic clutter? In my time working with other business owners, I’ve seen far too many organizations install complex webs of goal-based systems that functionally require a decoder ring and a Rosetta Stone for anyone to work with them. In those settings, I like to remind people that the best systems are usable systems.

With those concerns in mind, we set out to build an initial, homegrown goal system. And it was pretty good, but it wasn’t good enough (which sucks to say out loud, by the way, but it’s true). We couldn’t get the visibility and measurement elements quite right, and it wasn’t structured enough that our team members could seamlessly pick it up and work within it to produce consistently great goals. So we tweaked this and we played with that and still, much to my chagrin, it just wasn’t perfect.

One of our principles at Mostly Serious (we have six, for your information) is evolution. We strive to have ongoing conversations about the effectiveness of our processes and the quality of our work. If we determine that something could be better, we try like hell to make it better. And our goal system could be better.

OKRs

Sometimes the universe just gets it. Here I am thinking long and hard about a way to improve our goal system and then Jarad (our CEO and my business partner) says something like, “Hey I just finished this book called Measure What Matters and I think it could be really helpful for our goal system.”

Whammy.

Within days I had finished the book and was absolutely certain that this was the solution we were looking for (it also meant Jarad was right, which is another thing that sucks to say out loud). For the uninitiated, OKR stands for “Objectives and Key Results,” a model first developed and refined by Andy Grove during his time at Intel in the 1970s and ’80s. The system is most fully explored in John Doerr’s aforementioned Measure What Matters, and has been adopted by a number of successful organizations over the last 30 years.

Here’s the gist of the OKR model:

  • Objectives are inspirational, aspirational outcomes that you’d like to achieve. For example, “Be the best digital-first creative agency in the Midwest.”
  • Key Results are the associated, measurable benchmarks or data points that give texture to your Objectives. For example, one way to determine if you’ve successfully become “the best digital-first creative agency in the Midwest” is to win three (or any specific number of) meaningful awards for the quality of your work. Another way to measure your success against that Objective might be to set a revenue goal that would benchmark you at or above other quality agencies in the region. Key Results answer the question, “How will we know if we’ve met our Objective?”
  • Organizations and/or teams should establish a limited number of Objectives (three to five) and, for each of those, they should develop three to five measurable Key Results.
  • OKR users should evaluate their progress at consistent intervals with a Scorecard that clearly defines success criteria and visually displays change over time (over each quarter, for example).
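If it helps to see the shape of the model outside of prose, here’s a minimal sketch in Python of Objectives, Key Results, the three-to-five limit, and a simple roll-up score. It’s purely illustrative: the class names, the 0.0–1.0 scoring convention, and the averaging are assumptions borrowed from common OKR practice, not a description of our actual tooling.

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    description: str       # measurable benchmark, e.g. "Win 3 meaningful industry awards"
    target: float          # the number that counts as fully achieved
    current: float = 0.0   # progress recorded at each scorecard review

    def score(self) -> float:
        """Progress on a 0.0-1.0 scale, a convention commonly used with OKRs."""
        return min(self.current / self.target, 1.0) if self.target else 0.0

@dataclass
class Objective:
    title: str                                       # inspirational, aspirational outcome
    key_results: list = field(default_factory=list)  # three to five measurable Key Results

    def add_key_result(self, kr: KeyResult) -> None:
        # Keep the set focused: no more than five Key Results per Objective.
        if len(self.key_results) >= 5:
            raise ValueError("Limit each Objective to three to five Key Results.")
        self.key_results.append(kr)

    def score(self) -> float:
        """Average of Key Result scores -- one simple way to roll progress up to the Objective."""
        return sum(kr.score() for kr in self.key_results) / len(self.key_results)

# Example: the Objective and Key Results described above, reviewed quarterly.
objective = Objective("Be the best digital-first creative agency in the Midwest")
objective.add_key_result(KeyResult("Win meaningful awards for quality of work", target=3, current=1))
objective.add_key_result(KeyResult("Hit benchmark revenue relative to regional peers ($M)", target=5.0, current=2.5))
print(f"{objective.title}: {objective.score():.2f}")  # 0.42
```

However you choose to record them (a spreadsheet works just as well), the point is the same: each Objective carries a handful of measurable Key Results, and each Key Result can be scored at a consistent interval.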

There’s certainly more to the entire model and, if you’re interested in using OKRs for your team, check out Measure What Matters or email me (because I absolutely love helping people develop their OKRs and spend quite a bit of time doing just that as a co-founder of Habitat Communication & Culture). But, for now, we’d found our solution and it was time to build an implementation plan to install it.

The Self-Replicating Goal Framework

Here’s where a lot of companies struggle: installing things. There are good ideas, systems, models, frameworks, and processes everywhere all the time, but very few of them are properly installed. It’s challenging to do the necessary due diligence on the front-end, including building an education strategy and mapping the points of interaction, and it’s even more challenging to stick with the implementation as the pressures of other work (e.g., projects, other initiatives) mount. Then, for the small fraction of initiatives that actually reach a meaningful implementation milestone, it’s work just to ensure that people use the thing appropriately and consistently. A lot of great systems fall apart before they ever get to be great systems.

But we knew this could be a truly transformational system for us, and we took our time to ensure that we (hopefully) wouldn’t bungle the installation. Now, nearly two years later, the OKR model has become, functionally, a self-replicating goal framework at Mostly Serious. And, to be radically clear about what that means, I’m saying that our approach to educating our team on the framework and installing it ensured that nearly anyone at Mostly Serious can lead a team, a client project, or an internal initiative and seamlessly use the Objectives and Key Results language to immediately enhance the quality of that goal (and its likelihood of success). Put another way, we built an installation strategy that trained a few team members and provided them with the tools to train a few more team members until we had total organizational coverage from a single, initial point of implementation. Here’s how we did it (and how you can do it, too):

Start Small

Rather than introduce OKRs to our entire team at once, we started by just building a set of trial Objectives and Key Results with our Leadership Team (made up of our Department Directors). The OKRs that we built were certainly meaningful and we hoped to accomplish them as part of our larger organizational strategy, but the more important outcome here was that our Directors learned to use the model. This process started with building a comprehensive set of educational resources (one-sheets for key concepts, a large presentation and associated deck covering the most important elements, and a list of additional reading materials and videos) to give our people maximum opportunities to understand and ask questions before trying the thing on. Then, we spent a half-day building strategic goals using the framework with a dedicated OKR champion/facilitator (that’s me) leading the conversation. At the end of this phase, we had a set of new, fairly educated OKR users ready to interact with the system.

Use the First 90 Days To Learn

After we established our initial set of Objectives and Key Results, we built our scorecard (the all-important visibility and accountability tool) and set a cadence of monthly reviews for the first three months. At each review, we measured our progress against our scoring criteria and talked about why we had (or hadn’t) moved the needle. More important than the progress against OKRs, though, were the opportunities to learn about the installation of the model in those meetings. When a Director misused the scoring criteria on a specific Key Result, for example, it became clear to me that we needed to enhance our educational content about that item. When we didn’t see much progress against our goals during the first scorecard review, which was deflating, it became clear to me that I needed to more properly set expectations for success in future roll-outs. Each of these learning opportunities was captured, considered, and incorporated into our educational and training ecosystem.

Process the Process (In Defense of the Post-Mortem)

At the end of the 90 days, we got together for our final scorecard review of our trial period. We’d made significant progress against some of those goals (hooray!) and the gap in achievement in just that short window of time was astonishing. But, in addition to reviewing our progress on the specific OKRs, we also had an open forum discussion on the system. What worked? What didn’t work? What do you wish you’d known beforehand to be more successful? What do you think other members of our team will struggle with when using the system? And our Directors were a wealth of information. They clearly outlined their challenges with the OKR model, identified key gaps in our education and onboarding process, and offered incredibly useful suggestions about how to ensure success in a future roll-out to the larger team.

Take a Breath

Rather than roll right into a whole-team implementation after that 90-day trial period, we took 30 days to review everything we’d learned. We made a hit list of critical changes to our approach, built (and scrapped) some educational content, and totally revamped our scorecard tool. And, by including our Directors in the process, including having them work directly with me in making some of those changes, we were also giving them time to become even more expert on the model. At the end of this 30-day breathing window (a mere 120 days total after introducing OKRs into the Mostly Serious vocabulary), we now had seven OKR experts and a comprehensive toolbox of all the resources they would need to develop OKRs with their teams.

Go Bigger—Slowly

Rather than ask the entire team to show up to our annual strategic planning days (yes, we take two whole days in January to strategically plan for the year) and use a new framework they were learning about in real time, we phased OKRs into our organizational discourse. We started with a Lunch and Learn to introduce and give an overview of the framework, as well as visually demonstrate the Leadership Team’s experience using the model via our scorecard. Then, we shared a folder of educational materials and asked our team members to at least look through some of those documents (but many looked at all of them, it seems). Then, at our next few weekly meetings, we dedicated some time to questions about OKRs. Then, nearly a full month after learning about the model, getting time to ask questions about the model, and hearing from others (our Directors) about the value of the model, we used the model. Our 2021 strategic plan, which includes our company-wide goals as well as department-level goals, was entirely driven by the Objectives and Key Results framework. And, because we’d taken the time to turn our Directors into OKR experts, they were able to run their team-based planning sessions without me being in the room.

The Secret is the Scorecard

We set aside a half-day at the end of every quarter for a team-wide review. We celebrate our wins, we talk openly about challenges, and we review a scorecard for each OKR at Mostly Serious. Company-wide goals kick off the discussion, and leaders are tasked with publicly talking about our progress (or lack of progress) on those critical items. That act of transparent accountability sets the tone for individual departments as they review their scorecards, sharing with the larger team where they’ve succeeded and where they’ve struggled. And the scorecard is the key. As a reasonably objective tool with clear scoring criteria, it takes pressure off of people to creatively interpret their progress. Rather than defending why they think they’ve made strides, they get to hold their efforts up against a rubric and articulate the gap. No judgment. No squishiness. Just an item held up against a ruler.
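To show what “an item held up against a ruler” can look like in practice, here’s a small, hedged sketch of scoring criteria expressed as code. The 0.3/0.7 thresholds, the status labels, and the example scores are illustrative assumptions borrowed from common OKR grading practice, not our actual rubric.

```python
def scorecard_status(score: float) -> str:
    """Translate a 0.0-1.0 score into a status band using explicit, pre-agreed thresholds.

    The 0.3 / 0.7 cut points mirror a common OKR grading convention
    (illustrative only -- adjust the rubric to your own criteria).
    """
    if score >= 0.7:
        return "on track"
    if score >= 0.3:
        return "needs attention"
    return "at risk"

# A quarterly review then becomes reading each item off the rubric,
# rather than creatively interpreting progress.
quarter_scores = {
    "Win meaningful awards": 0.33,
    "Hit benchmark revenue": 0.50,
    "Launch new service line": 0.10,
}
for key_result, score in quarter_scores.items():
    print(f"{key_result:<26} {score:.2f}  {scorecard_status(score)}")
```

The specific cut points matter far less than agreeing on them ahead of time, so that no one has to argue about what a score means in the middle of a review.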

And Finally: Be Vigilant

We’re knee-deep in 2022 and still using OKRs as our planning model at Mostly Serious. And, while I continue to run and manage our company-level goals, our Directors develop and manage their teams’ goals (and some of our project-specific teams are even using the model). Over two annual strategic plans and five separate quarterly reviews, we’ve certainly seen moments where someone misses the mark on an Objective or Key Result. Rather than letting that stuff go because it’s “close enough,” we use those moments as an opportunity to educate and correct. We’d rather get it right than get it done, but that requires attention and intention. I, for one, can’t think of many things to be more attentive to or intentional about.

Conclusion

Our experience with OKRs has been truly transformative. We’re a better team and a better organization for having adopted the model. But, it was not a quick fix, nor was it an easy implementation. It took planning, time, honest feedback, and a commitment to thinking through second- and third-order consequences to get us to this high-impact planning model. And we still make mistakes. We’ve modified some Key Results to reflect changes in our reality. We’ve added some Key Results to put more pressure on ourselves after learning we were aiming too low. We’ve even entirely scrapped some OKRs after working with them for a few months. That’s all okay, because getting it right the first time is less important than getting it right for you.

One other note about this approach to integrating a scalable framework into a team: this method works with more than OKRs (or your preferred goal-setting model). We’ve used the same approach with our performance management system and seen similar, high-impact outcomes. And, future big initiatives at Mostly Serious will follow this approach until we learn it doesn’t work, and then we’ll iterate and evolve (because that’s how we want to be).
