

Sep - Dec '25
Chatbot to Proactive AI

Verdigris asked us to validate a chatbot created for data centre technicians. Our research showed it was not helping technicians — so, I designed the replacement: a proactive, push-first AI alert system built for the technician, not the dashboard.
ROLE
UX Researcher
•
Sole UX & Brand Designer
•
Design System
what i did
Usability testing
•
Contextual Inquiry
•
0-1 AI interface design
TEAM
Researchers (4)
•
Designer
•
Engineer
•
PM
impact
9
Usability Studies
2
Contextual Inquiries
1
Product
Pivot
NEW
AI
product
CONTEXT
Verdigris makes AI sensors that detect equipment degradation in data centres before failure. Their dashboard was built for executives — not the technicians on the floor. A tablet chatbot seemed like a way to bridge that gap. It didn't work.

“We are testing this chatbot because a lot of the data centre technicians do not know what to do with the data itself. They don't know how to analyze the data. They just want the building to run efficiently.”
Jimit Shah, Head of Product @ Verdigris
The AI had the knowledge. The interface expected users to know how to use it.

Arc - The proactive AI assistant.
The gap we addressed
Predictive alerts sent before failure occurs. Verdigris' technology could detect issues early, but the dashboard made them hard to read and track in real time, especially for data centre users.
the system
Arc is a proactive alerting system that surfaces issues early, tells technicians how to respond, and automates the fix.
KEY ADDITION
Structured handoffs so issue management stays continuous across shifts as technicians change.


Contextual
Trustworthy
Proactive
Confident
Arc

01 / 05
Push notifications that proactively alert technicians
The goal is to alert technicians the moment something goes wrong. Making the lock screen the entry point, instead of a chat text box, means one tap to act. The push notification eliminates the blank-prompt problem entirely.

The chatbot failed to engage data centre technicians because it added friction to their process without helping them.
We ran moderated usability tests with 9 technicians and contextual inquiries with 2 data centre technicians to identify their daily workflows and surface their needs.

Glanceability
8/9 participants
Dense text, no hierarchy, technicians could not scan while standing at a panel.

Actionability
8/9 participants
The chatbot surfaced problems without advising on what to do with the data.

Trust
9/9 participants
Participants were not AI-savvy; they did not trust the data and asked for its source.

Focus
9/9 participants
Blank prompt, open-ended chat interface. Technicians didn't know what to ask.
What technicians and energy managers actually need from AI.
Triage
When I'm between buildings, I need to see which ones need attention right now so I can respond to the most urgent issue first.
Diagnose + Act
When the AI flags an anomaly, I need to know what's wrong, its seriousness, and what to do so I can fix it without interpreting raw data.
Hand Off
When my shift ends, I need to pass context to the next operator so they can pick up where I left off without starting over.
As the sole designer, I replaced the chatbot with a proactive, notification-first alert system.
AI can predict when systems will fail, but the chat interface waits for the right query before surfacing that, making poor use of the technology.
Our research showed that data centre technicians work in fast-paced, high-stakes conditions on iPads and phones, and they are not electrical engineers with the knowledge to read dashboards and plan next steps. They need a tool that analyzes dashboards quickly and surfaces next steps for them.
Before: User types a question into a chatbot
After: App sends a push notification when something goes wrong

Before: Desktop-only dashboard
After: iPad and phone app for operators in the field

Before: Open-ended text box with no starting point
After: Chat only opens for one specific alert at a time

Before: AI gives answers with no source
After: Every answer shows which sensor, what time window, and a confidence %

Before: Tells you what's wrong, then stops
After: Tells you what's wrong, suggests a fix, and lets you schedule it
We recommended against the very thing the client was testing: no free-form chatbot, and no augmenting the existing dashboard with AI.
Iterating on layout alone would not have fixed it; the problem was the interaction model. An open prompt still put the analysis burden on technicians. Embedding alerts into the existing dashboard was closer, but operators are not at desks. Verdigris already had a dashboard. What they needed was an assistant that helped them bypass it.

Not every AI product needs to be conversational.
The AI already knew what was wrong, but technicians did not know its capabilities; the insight was hiding behind the right query, waiting to be asked.
The best AI interface is the one the user never has to learn.
We framed it as avoiding costly misalignment rather than invalidating their work. The AI layer is still there; it moved from the interaction surface to the intelligence layer underneath.
Research is most powerful when it changes the direction, not just the design.
Our findings didn't improve the chatbot; they convinced Verdigris not to ship it. That's the more honest outcome, and the more valuable one.