Designing an AI-Assisted Interface
TIMELINE
Jul - Aug 2025
TEAM
Sibira Gopal
Shivangi Jain
1 PM and 3 Developers
MY ROLE
I conducted user research, developed the end-to-end solution and prototype, and participated in QA testing up until the go-live date.
About Plane
Plane is a project management tool that scales from startups to the enterprise. It's a B2B SaaS product that competes directly with tools like Jira (Atlassian), Linear, and Monday. It aims to simplify managing and assigning workflows within an org while maintaining visibility with clients.
What?
To start with some context...
The initial brief came from the product team: “Enable users to ask questions about their workspace in plain language.” That was it - no specific use cases, success metrics, or guiding principles beyond “be helpful.”
On the surface, it sounded straightforward: build an AI chat inside Plane. Users wanted more control over how they view their work. But the brief didn't say much about why they wanted this, which grouping options mattered most, or how this would fit into their actual workflows. It also didn't address how this would scale since Plane supports everything from small startups to teams managing thousands of issues.

Initial set of concerns
Are they organizing work for themselves, or for their team?
Is analytics/data enough to provide real world answers?
What happens when someone has 500 tasks in a single project?
What concrete workflows or frustrations are we solving?
How quickly can users start getting value from the product, hassle-free?
How do we design for accuracy and safety?
Preliminary research
I reviewed support tickets and feature requests, talked to a few active users, and spent time in our own Plane workspace watching how the team used grouping day-to-day. A pattern emerged:

Users used group by status to focus on progress
→ switch to assignee to check their own work
→ then switch to priority to triage.
User personas

Manager/Scrum masters
- Primary concern is workflow - making sure work moves smoothly.
- Constantly scanning for bottlenecks and for dependencies blocking them.
- Their biggest frustration is when the board doesn't reflect reality. Issues get stale, statuses aren't updated, and they waste time asking "is this actually in progress?" in Slack.
- The tool gives them control, but doesn't give them clarity.

PMs or team leads
- Who's working on what? Is anyone overloaded? Who has capacity?
- Answerable to stakeholders: "When will this feature ship? Who's working on the integration?"
- Biggest pain point is visibility gaps. Someone might have 15 issues assigned, but 10 of them are blocked or low-priority.
- Tool shows assignments, but not actual capacity or progress.

Individual contributors
- Their core question: what do I need to focus on right now?
- They group by priority or filter to show only their assigned issues, sorted by due date.
- Their view is personal and immediate - they're not looking at the whole project, just their slice of it.
- With a new task added, their mental prioritization is scrambled. They want the tool to help them stay focused.
- Often stuck on knowledge transfer (KT) or dependencies before they can get their tasks done.
- Users weren't just asking for more grouping options; they were struggling with context switching and information overload.
- The tool was flexible, but it forced them to constantly reconfigure their view to get the information they needed. These findings helped pivot the brief from “add chat” to “augment insight discovery and reduce friction.”
Goals
Reframing the Problem
Rather than treating Plane AI as a generic chatbot, I reframed it as a design assistant for workspace context - a tool that helps users answer questions, analyze progress, and surface insights without deep dives into UI filters.
Key design questions I defined early on:
- What tasks do users really want to accomplish via AI?
- When does typing become easier than just filtering?
- How can responses reduce cognitive load instead of adding noise?
This shift moved the focus from building a generic chatbot to removing friction in how users discover insights.
Market research

Scaled with versions and iterations
Version 1 - Simple Query Interface
We initially built a basic chat overlay where users could type questions and see responses. Early prototypes focused on handling simple queries like “show recent bugs” or “what tasks are overdue.”
Observation: Users expected actions after answers - not just text responses.
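To make the v1 behavior concrete, here is a minimal sketch of how simple plain-language queries can be mapped to workspace filters with rules. This is illustrative only: the pattern list and filter keys are assumptions, not Plane's actual query pipeline or API.

```python
import re

# Hypothetical rule-based mapping from a plain-language query to filter
# parameters - the kind of simple handling a v1 prototype might use.
PATTERNS = [
    (re.compile(r"\b(recent|latest)\b.*\bbugs?\b"), {"type": "bug", "sort": "created_desc"}),
    (re.compile(r"\boverdue\b"), {"due": "past", "state": "open"}),
    (re.compile(r"\bhigh[- ]priority\b"), {"priority": "high"}),
]

def query_to_filters(query: str) -> dict:
    """Collect filter params from every pattern the query matches."""
    q = query.lower()
    filters = {}
    for pattern, params in PATTERNS:
        if pattern.search(q):
            filters.update(params)
    return filters
```

A query like “show recent bugs” would resolve to a bug-type filter sorted by creation date, while anything unmatched falls through with no filters, which is exactly where a richer model is needed.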
Version 2 - Contextual Interaction
Next iteration integrated context awareness:
- The AI recognized the active workspace or project.
- Results referenced issues with links back into the UI.
- Users could attach screenshots or docs for richer queries.
Even here, we slipped into overly verbose responses that didn't help users take action - a common early-version trap.
Version 3 - Task-Oriented Queries
We emphasized task outcomes in the conversational UI, e.g., “show me all high-priority issues blocking this cycle” turned into a structured list with links back into the workspace. At this stage, user feedback highlighted another tension: clarity vs. verbosity. We iterated on response design patterns - concise bullets with optional expanded explanations.
Some of the wireframe sketches



Version 1:
A chat interface with pre-designed prompt templates to lower the learning curve.
Version 2:
Users frequently requested the ability to take action based on context and edit responses.
Also, version 2:
An agentic AI feature was introduced to find duplicate tasks by reading titles. The LLM was designed to pick up keywords and, when a matching task already existed, suggest discarding the new one to avoid information overload.
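The underlying idea can be sketched as keyword overlap between titles. The real feature used an LLM; this is only a heuristic stand-in, and the stopword list and threshold are assumptions for illustration.

```python
# Hypothetical sketch of duplicate detection by title keywords:
# compare a new title against existing ones using Jaccard similarity.
STOPWORDS = {"a", "an", "the", "to", "for", "in", "on", "of", "and"}

def keywords(title: str) -> set:
    """Lowercased title words minus common stopwords."""
    return {w for w in title.lower().split() if w not in STOPWORDS}

def likely_duplicate(new_title: str, existing_titles: list, threshold: float = 0.6):
    """Return the first existing title whose keyword overlap crosses the threshold."""
    new_kw = keywords(new_title)
    for title in existing_titles:
        kw = keywords(title)
        union = new_kw | kw
        if union and len(new_kw & kw) / len(union) >= threshold:
            return title
    return None
```

The threshold controls how aggressive the suggestion is; in practice this kind of check surfaces a prompt ("this task may already exist") rather than discarding anything automatically.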
Version 3:
Aimed for a more structured, cleaner UI to incorporate complex actions.
Where things got interesting
1. Ambiguous Queries
Free-form natural language is inherently ambiguous. “Show overdue tasks” could mean different things depending on filters, projects, or views.
Solution:
We designed disambiguation prompts and offered filter suggestions to refine queries. If the AI wasn't confident, it asked clarifying questions before answering.
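The flow above can be sketched as a simple confidence gate: below a threshold, the assistant returns a clarifying question instead of an answer. The data shapes, threshold, and question wording here are illustrative assumptions, not the production logic.

```python
from dataclasses import dataclass

@dataclass
class Interpretation:
    """A hypothetical parsed query: proposed filters plus parser confidence."""
    filters: dict
    confidence: float

def respond(interp: Interpretation, threshold: float = 0.75) -> dict:
    """Ask a clarifying question when confidence is low; otherwise answer."""
    if interp.confidence < threshold:
        options = ", ".join(sorted(interp.filters)) or "scope"
        return {"type": "clarify",
                "question": f"Did you mean to filter by {options}, or the whole workspace?"}
    return {"type": "answer", "filters": interp.filters}
```

The key design choice is that "clarify" is a first-class response type, so the UI can render suggested filters as tappable refinements rather than a wall of text.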
2. Balancing Accuracy and UX
There was a high chance of responses feeling helpful but being technically imprecise.
Solution:
We invested in previewing the reasoning behind results, surfacing the logic so users could understand why a certain set of issues matched their query. This transparency built trust and helped users refine their questions.
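One way to make that transparency concrete: attach to each result the specific filter clauses it satisfied, so the UI can show "matched because…" next to every issue. The field names below are hypothetical, not Plane's schema.

```python
def explain_matches(issues: list, filters: dict) -> list:
    """Return issues matching every filter, each with the clauses it satisfied."""
    results = []
    for issue in issues:
        matched = [f"{k} = {v}" for k, v in filters.items() if issue.get(k) == v]
        if len(matched) == len(filters):  # all clauses satisfied
            results.append({"title": issue["title"], "matched_because": matched})
    return results
```

Because the explanation is computed from the same clauses used to filter, it can never drift from the actual result set, which is what makes it trustworthy.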
3. Designing for Read-Only at Launch
Originally, the read-only design (no automatic write capabilities) limited the assistant's usefulness.
Solution:
We focused instead on insight amplification. By steering users toward more actionable queries and linking results back to the core Plane UI, users still felt productive without the AI performing automated actions (which came later in the roadmap).
Outcomes
Plane AI launched as a conversational assistant for querying, analyzing, and navigating workspace data.
Though quantitative metrics are internal, users qualitatively reported less frustration with navigating complex workspaces and a noticeable reduction in time spent on analysis tasks. More importantly, the design set expectations for future iterations, including action execution and iterative refinement of conversational flows.
Intrigued?
Let's connect to discuss more about the AI agents, workflows and what the final designs looked like.