About This Project
People managers navigating significant organizational change needed practical, behavior-grounded feedback skills. Data and anecdotal signals from performance cycles pointed to a consistent gap: managers were having fewer developmental conversations, and when they did, feedback quality varied widely. Generic praise was common; observable, behavior-grounded feedback was not.
Conversations That Count was designed to close that gap—a live workshop for people managers focused on building practical feedback and communication skills, paired with a workbook artifact for continued reference and application. Josh designed and developed the content; delivery was handled by facilitation colleagues.
This demonstrates: curriculum design grounded in a diagnosed behavioral gap, instructional judgment in framework selection, blended design pairing a live workshop format with a durable self-service artifact, and the ability to design within a broader program architecture while delivering a complete, standalone experience.
Context
| Role | Content Designer & Developer, Learning Experience & Enablement |
|---|---|
| Year | 2024–2025 |
| Format | Live workshop (designed by Josh; delivered by facilitation colleagues) + participant workbook |
| Audience | People managers |
| Reach | ~60–70 managers during an active period of organizational change |
| Program context | Standalone workshop; component of a broader manager development journey being actively mapped |
The Problem
Feedback quality across performance cycles was inconsistent—a pattern confirmed by both cycle data and direct signals from managers and their teams. The existing feedback infrastructure provided a format (the SBID framework) but limited structured practice in applying it.
Two compounding factors made this harder:
- Organizational change at pace—rapid structural changes meant many managers were leading teams with new compositions, lower psychological safety, and higher ambiguity
- Performance data without application—feedback data from performance cycles was available but managers lacked a structured framework for translating scores into development conversations
Design Approach
The SBID Framework
S — Situation: Establish context—where and when did this occur?
“When we partnered on the Q3 planning process last month…”
B — Behavior: Name the specific, observable action—what did you see?
“You took initiative to pull in the analytics team before I asked, to get ahead of the data gaps…”
I — Impact: State the consequence—what was the outcome or effect?
“…which saved the team roughly 6 hours of rework during the final review week.”
D — Discuss: Invite dialogue—clarify, listen, and align on what’s next.
“I’d like to hear how you experienced that—and talk about where to take it next quarter.”
Session Structure
Part 1 — Grounding the Conversation
Opening with organizational context: why feedback quality matters now, what the data shows, and how this session connects to the manager’s role in development—not just performance evaluation.
Part 2 — Framework Application
Guided practice with the SBID model using real-pattern scenarios drawn from performance cycle data. Managers practiced moving from generic praise to observable, behavior-grounded feedback statements.
SBID → “When Alex joined the cross-functional sprint in Q2 (S), they proactively flagged a dependency the engineering team hadn’t surfaced (B), which prevented a two-week delay in the release timeline (I). I’d like to explore how we build that kind of early-signal thinking into their regular workflow (D).”
Part 3 — Application to Their Own Teams
Managers applied the framework to real team members and real scenarios from their current cycle—leaving with 1–2 draft feedback statements they could use immediately.
Workbook Structure
| Section | Purpose |
|---|---|
| SBID Quick Reference | One-page framework card with labels, prompts, and scale anchors |
| Worked Examples | Strong vs. weak feedback comparisons across core competency areas |
| Practice Space | Guided prompts for drafting SBID statements for 2 real team members |
| Development Conversation Planner | Template for structuring a 1:1 development conversation using cycle results |
| Reflection Prompts | Post-session questions to support ongoing application |
Scale & Outcomes
- Reached ~60–70 managers during a period of significant organizational change
- Session sat within a larger manager development journey being actively mapped—one component of a planned multi-module curriculum
- Workbook designed for continued self-directed use beyond the session
This Demonstrates
- Curriculum design grounded in diagnosed need—built from real performance data and organizational signal, not designed in isolation
- Instructional judgment in framework selection—chose to reinforce an existing model (SBID) rather than introduce competing vocabulary mid-cycle
- Blended design—live workshop format paired with a structured, durable self-service artifact
- Organizational context sensitivity—scoped and framed content for a high-change environment where psychological safety was already under pressure
- Program architecture thinking—designed as both a complete standalone experience and a component of a longer-term manager development curriculum