RISA Labs · Product Design Assignment · 4 min read
Product Managers don't struggle with decision-making.
They struggle with gathering the context required to make decisions.
I asked: what if the browser itself became the execution layer?
The shift
The agentic browser doesn't just make PM workflows faster; it fundamentally changes what a PM spends their cognitive energy on.
| Before – PM as coordinator | After – PM as decision-maker |
|---|---|
| 10–20 tabs open, still lacking full context | Auto-grouped contextual workspace, full picture in one view |
| Manual standup updates rewritten every day | AI generates summary in one tap, PM reviews and sends |
| Blockers surface during standups, too late | Predictive detection flags blockers before they cause delays |
| Meetings produce messy notes, manual conversion | Meeting Mode converts discussions to structured, assigned tasks |
| Following up with devs manually – "any update?" | Auto-notify Slack, AI drafts, PM approves in one tap |
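The "auto-grouped contextual workspace" row above can be sketched as a small heuristic: cluster open-tab URLs into named workspaces. Everything here is an assumption for illustration – the `PROJECT_KEYWORDS` map and the keyword-then-domain fallback are hypothetical, not the product's actual grouping logic.

```python
from collections import defaultdict
from urllib.parse import urlparse

# Hypothetical keyword -> workspace mapping; the real product would infer this.
PROJECT_KEYWORDS = {"checkout": "Checkout Revamp", "auth": "Login Flow"}

def group_tabs(urls):
    """Group open-tab URLs into named workspaces by project keyword, else by domain."""
    workspaces = defaultdict(list)
    for url in urls:
        parsed = urlparse(url)
        text = (parsed.netloc + parsed.path).lower()
        label = next(
            (name for kw, name in PROJECT_KEYWORDS.items() if kw in text),
            parsed.netloc,  # no keyword match: fall back to grouping by domain
        )
        workspaces[label].append(url)
    return dict(workspaces)

tabs = [
    "https://linear.app/team/issue/checkout-123",
    "https://github.com/org/repo/pull/42",
    "https://www.figma.com/file/abc/checkout-flow",
]
print(group_tabs(tabs))
```

The point of the sketch is the interaction shape, not the heuristic: the Linear issue and the Figma file land in one "Checkout Revamp" workspace, so the PM sees one contextual group instead of scattered tabs.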
01 – Problem
PMs are the most tool-dependent role in any product org, and the most penalised by fragmentation. I mapped the structural failures before opening Figma.
The core insight: the problem isn't too many tools. It's that context is never assembled at the moment of decision. Information exists everywhere yet is available nowhere when it's needed.
02 – Primary Research
Before committing to any direction, I spoke with three PMs across different stages – a Series A SaaS, an early-stage startup, and a fintech scale-up. Three questions: tool-switching frequency, most repetitive task, and whether they'd ever missed a blocker that existed somewhere in their stack.
"Information is there but scattered… nobody connects it until it's too late." – Senior PM, Fintech. This became the design brief in one sentence.
03 – Ideation
Each alternative had genuine merit. That's exactly why pressure-testing them mattered before committing.
Early explorations β wireframes
Before moving to high-fidelity I mapped the core flows in lo-fi to validate structure and information hierarchy without visual noise getting in the way.
The defining principle: don't replace the tools PMs use. Sit underneath them, at the browser layer, and make the entire stack intelligent from below.
04 – Design principles
05 – Final screens
Every element has a reason rooted in research, not aesthetic preference. Here's what I designed and the exact decision behind each key element.

06 – Visual identity
Every visual choice was functional – rooted in the PM's work context, session length, and cognitive environment. Nothing was chosen for aesthetics alone.
Colour palette
Typography
A single type family across the system keeps hierarchy clear without visual noise. Inter – readable body and UI at every scale.
Display / Headings · Weights 500, 600
Body / UI text · Weights 300, 400, 500, 600
The Light – Regular – Medium – SemiBold range gives enough flexibility for dense PM dashboards while staying consistent.
Dark mode wasn't an aesthetic choice; it was a UX decision. PMs work long sessions in front of high-density dashboards. A warm dark palette reduces eye strain while matching the dev-tools ecosystem they operate alongside daily.
07 – Decision log
| Decision | Chosen | Rejected & why |
|---|---|---|
| Colour mode | Dark throughout – reduces eye strain for long sessions, matches dev-tools ecosystem | Light mode – conflicts with the long-session use case and creates dissonance with GitHub, VS Code |
| Navigation | Persistent left sidebar – mirrors Linear/Jira, predictable, reduces decision fatigue | Top nav – conflicts with the PM's existing mental model and reduces vertical screen space |
| AI panel placement | Right panel – keeps the main tool intact; AI acts alongside work, not replacing it | Inline overlay – competes visually with the tool the PM is actually using |
| Execution model | Step-by-step modal with content preview – trust built through visibility, not assumed | Silent background execution – creates a black-box perception, kills adoption |
| Homepage state | Empty canvas on load – PM starts calm, pulls context on demand | Dense homepage – replicates the overwhelm PMs are trying to escape |
| Primary interaction | "Ask Anything" centred – natural language, AI feels accessible, not buried | Menu-driven AI access – requires the PM to remember where features live |
08 – Validation
After completing the initial screens, I shared core flows with the same three PMs from the research – a directional gut-check on the Dashboard, Agentic Task View, and AI Action Modal.
09 – Edge cases
Trust is built in failure moments, not success moments. I designed explicit handling for four critical failure scenarios.
Design principle for all error states: never fail silently, never guess confidently on low-signal data, always leave the PM in control of the final state.
10 – Success metrics
These hypotheses anchor every design decision to a measurable outcome – baselines drawn directly from the research conversations.
| Metric | Baseline | Target | Feature |
|---|---|---|---|
| Context-gathering time per task | ~15–20 min | Under 2 min | Auto-tab grouping, AI summary panel |
| Daily standup prep time | ~25–30 min | Under 5 min | Auto-generate standup quick action |
| Blocker detection lag | 12–24 hrs | Under 30 min | Predictive blocker detection |
| Manual update actions/sprint | 40–60 actions | Under 10 actions | AI Action Modal, Auto-notify |
| AI suggestion acceptance rate | No baseline | Above 65% at 4 wks | Analytics feedback loop |
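The acceptance-rate metric in the last row could be computed from a simple suggestion event log. A minimal sketch, assuming a hypothetical event schema of `{"suggestion_id", "action"}` records – the product's real analytics pipeline would differ:

```python
def acceptance_rate(events):
    """Share of AI suggestions the PM accepted, from suggestion event records.

    Returns None when no suggestions have been decided yet,
    mirroring the "No baseline" state in the metrics table.
    """
    decided = [e for e in events if e["action"] in ("accepted", "dismissed")]
    if not decided:
        return None
    accepted = sum(1 for e in decided if e["action"] == "accepted")
    return accepted / len(decided)

log = [
    {"suggestion_id": 1, "action": "accepted"},
    {"suggestion_id": 2, "action": "dismissed"},
    {"suggestion_id": 3, "action": "accepted"},
    {"suggestion_id": 4, "action": "accepted"},
]
print(f"{acceptance_rate(log):.0%}")  # prints 75%
```

Tracking this weekly against the 65% target would show whether the trust curve is actually bending upward by week four.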
11 – What's next
12 – Reflection
The most important decision in this project wasn't a UI choice – it was the framing. Choosing "browser as co-pilot" instead of "yet another dashboard" shaped every subsequent decision about interaction model, information architecture, and automation design. The framing is the strategy.
Working on an agentic interface forced me to think carefully about the human-AI trust curve. Automation is only valuable if users trust it enough to let it act – and trust is built through transparency, not capability.
If I were to iterate, I'd invest most in onboarding – specifically how a PM first connects their tools, sets privacy preferences for AI reading, and establishes the baseline trust that makes agentic behaviour feel safe rather than intrusive. The first 10 minutes of a product like this will determine whether a PM uses it for a week or for a career.