Week 5 of the year 2025
This Week’s Challenges:
FY26Q1 planning - Q1 Game Plan (Management)
This week we’ve finalized the scope of the upcoming quarter (FY26Q1). I personally call this final step a Game Plan because it covers all of our team’s moves throughout the coming quarter.
The input for this process is the list of OKRs the team commits to, along with their level-of-effort (LOE) in sprints. This is described in Level-of-Effort Estimations, and the list is supposed to cover everything: spill-overs, product OKRs, and engineering (technical) OKRs.
The Game Plan is the next level of detail needed to start burning down the Q1 scope. At this point, we finalize OKR priorities and assign an engineering owner (each OKR is delivered by a dedicated senior-level engineer). The worksheet I typically use to create a game plan is pretty straightforward:
| Initiative | Priority | LOE | Engg. Owner | Sprint 1 | Sprint 2 | Sprint 3 | Sprint 4 | Sprint 5 | Sprint 6 |
|---|---|---|---|---|---|---|---|---|---|
| Spill-overs | | | | | | | | | |
| OKR1 | P0 | 3 | Name 1 | Todo.. | Todo.. | Todo.. | | | |
| OKR2 | P1 | 2 | Name 2 | Todo.. | Todo.. | | | | |
| Product OKRs | | | | | | | | | |
| OKR3 | P1 | 3 | Name 1 | Todo.. | Todo.. | Todo.. | | | |
| OKR4 | P3 | 3 | Name 2 | Todo.. | Todo.. | Todo.. | | | |
| Engg. OKRs | | | | | | | | | |
| OKR5 | P2 | 1 | Name 2 | Todo.. | | | | | |
As you can see, this allows us to detect resource constraints early and approach the OKRs according to their priorities.
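This check is easy to automate once the worksheet is exported. Below is a minimal sketch, assuming the game plan is flattened into (initiative, owner, sprint) rows mirroring the placeholder table above; the one-OKR-per-engineer-per-sprint capacity rule is my illustrative assumption, not a fixed policy.

```python
from collections import Counter

# Flattened game plan rows, mirroring the placeholder worksheet above:
# one (initiative, owner, sprint) tuple per "Todo.." cell.
ASSIGNMENTS = [
    ("OKR1", "Name 1", 1), ("OKR1", "Name 1", 2), ("OKR1", "Name 1", 3),
    ("OKR2", "Name 2", 1), ("OKR2", "Name 2", 2),
    ("OKR3", "Name 1", 1), ("OKR3", "Name 1", 2), ("OKR3", "Name 1", 3),
    ("OKR4", "Name 2", 1), ("OKR4", "Name 2", 2), ("OKR4", "Name 2", 3),
    ("OKR5", "Name 2", 1),
]

CAPACITY = 1  # assumed rule of thumb: at most one OKR per engineer per sprint

# Count how many OKRs each engineer carries in each sprint and flag overloads.
load = Counter((owner, sprint) for _, owner, sprint in ASSIGNMENTS)
for (owner, sprint), okrs in sorted(load.items()):
    if okrs > CAPACITY:
        print(f"Overallocated: {owner} carries {okrs} OKRs in sprint {sprint}")
```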
The dev game plan is a big step forward, but not the final one. The remaining step is story mapping, the most granular level of detail we can get to in the planning process.
As with all the previous steps, it requires the tech leads to sit together in a room and brainstorm the KRs, scope, assumptions, and resource constraints (at the end of the day, even in a cross-functional team, not all roles are interchangeable).
The worksheet for Agile/Scrum story mapping:
| Initiative (O) | Key Result (KR) | Story | Description | Story Points | Assigned To |
|---|---|---|---|---|---|
| Objective 1 | KR 1 | Story 1 | Todo… | 3 (M) | Name 1 |
| Objective 1 | KR 1 | Story 2 | Todo… | 5 (L) | Name 1 |
| Objective 1 | KR 2 | Story 3 | Todo… | 8 (XL) | Name 1 |
| Objective 1 | KR 2 | Story 4 | Todo… | 2 (S) | Name 1 |
| Objective 1 | KR 2 | Story 5 | Todo… | 1 (XS) | Name 1 |
| Objective 1 | KR 2 | Story 6 | Todo… | 5 (L) | Name 1 |
And the very last steps:
- Host a meeting with the team to walk everyone through the game plan.
- Convert the stories from the worksheet above to JIRA stories and put them under the proper epics that represent the OKRs (a minimal API sketch follows this list).
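If the worksheet grows large, the JIRA conversion can be scripted against the Jira Cloud REST API. This is only a sketch under assumptions: the URL, project key, and epic key below are placeholders, and the way a story links to its epic varies by Jira setup (team-managed projects accept `parent`; company-managed ones often require the Epic Link custom field instead).

```python
import os
import requests

# Placeholder values for illustration only.
JIRA_URL = "https://example.atlassian.net"
AUTH = ("me@example.com", os.environ["JIRA_API_TOKEN"])  # Jira Cloud basic auth

def create_story(project_key: str, epic_key: str, summary: str, description: str) -> str:
    """Create a story under an epic and return its issue key."""
    payload = {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Story"},
            "summary": summary,
            "description": description,
            # Works for team-managed projects; company-managed ones may need
            # the Epic Link custom field here instead.
            "parent": {"key": epic_key},
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]

# One call per row of the story-mapping worksheet, e.g.:
# create_story("TEAM", "TEAM-100", "Story 1", "Todo…")
```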
Engineering excellence metrics (Engineering)
From the engineering side, engineering excellence (EE) metrics were top of mind, not surprisingly, since the annual performance review season is coming.
I added two new features to the Codehub dashboard:
- Rework points
- Rework score
Both of them are computed at the PR level and, of course, can be rolled up to an individual or a group.
Rework Points (RP) =
  number of "request_changes" reviews * 3 +
  number of "comment" reviews * 2 +
  number of "dismissed" reviews * 1

Rework Score (RS) = RP / log10(lines added + lines deleted + 1)
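For reference, here is what both formulas look like in code. A minimal sketch: the review-state names follow the GitHub API conventions (CHANGES_REQUESTED, COMMENTED, DISMISSED), and the guard against empty PRs is my defensive addition, not part of the formula.

```python
import math

# Weights from the formulas above (still under debate within the team).
REVIEW_WEIGHTS = {
    "CHANGES_REQUESTED": 3,  # "request_changes" reviews
    "COMMENTED": 2,          # "comment" reviews
    "DISMISSED": 1,          # "dismissed" reviews
}

def rework_points(review_states: list[str]) -> int:
    """Rework Points (RP): a weighted count of a PR's review events."""
    return sum(REVIEW_WEIGHTS.get(state, 0) for state in review_states)

def rework_score(review_states: list[str], lines_added: int, lines_deleted: int) -> float:
    """Rework Score (RS): RP normalized by the PR size on a log scale."""
    denominator = math.log10(lines_added + lines_deleted + 1)
    if denominator == 0:  # defensive guard for empty PRs, not in the formula
        return 0.0
    return rework_points(review_states) / denominator

# Example: one "request_changes" and two "comment" reviews on a +120/-30 PR:
# rework_score(["CHANGES_REQUESTED", "COMMENTED", "COMMENTED"], 120, 30) ≈ 7 / 2.18 ≈ 3.2
```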
As a group, we’re still debating these formulas. As of now, it looks like "request_changes" is the more significant signal and deserves a heavier weight, e.g., 10 points. Also, RS is quite a sensitive metric: it grows pretty fast and has no upper bound, while I would prefer something in the 0-to-1 range. But let’s see where all these conversations land.
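Purely as food for that debate, one hypothetical way to get the 0-to-1 range is to squash RS through a saturating function; this is not something we’ve agreed on.

```python
def bounded_rework_score(rs: float) -> float:
    """Squash an unbounded RS >= 0 into [0, 1); x / (1 + x) is one common choice."""
    return rs / (1.0 + rs)
```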
Another side of the "normalization" coin: how do we score the complexity and quality of a pull request? Obviously, the number of merged PRs (into main or release branches, of course) along with PR duration are the most popular/useful EE metrics. I’m thinking about running a little hackathon project to apply AI to this problem. And of course, there is a data problem here: Codehub calculates EE numbers based on PR metadata, but for anything more sophisticated, we’ll need to retrieve the code and the actual PR patches to evaluate them.
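Retrieving the patches themselves is straightforward with the GitHub REST API: requesting the diff media type on the pull request endpoint returns the raw patch. A minimal sketch (owner, repo, and PR number are placeholders):

```python
import os
import requests

def fetch_pr_diff(owner: str, repo: str, number: int) -> str:
    """Fetch the raw diff of a pull request via the GitHub REST API."""
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/pulls/{number}",
        headers={
            "Accept": "application/vnd.github.diff",  # ask for the raw diff media type
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        },
    )
    resp.raise_for_status()
    return resp.text

# Example (placeholder values): diff = fetch_pr_diff("my-org", "my-repo", 42)
```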
For example, this is how I use GitHub Copilot for reviews: I copy the URL of the PR of interest from the Codehub dashboard, check it out via the GitHub CLI, and use Copilot to understand it. Surprisingly, the `/tests` command helps me a lot: the generated tests are comprehensive, so it’s easier to figure out where to look during a review.
gh pr checkout <url>