Case study
Chatbot UX
Implementing a real-time hybrid chatbot on the Choices platform to improve user engagement, answer common questions in the moment, and reduce drop-off caused by unanswered queries.
- Role
- UX/UI Design · UX Research · Design Strategy · Accessibility Consulting
- Timeline
- 3 months
- Tools
- Figma · Miro · UserZoom
- Outcome
- Response time 12h → minutes
- AI
- Conversational UX
- FinTech
Overview
The goal was to implement a chatbot on the Choices platform landing page and internal flow to improve user engagement, answer common questions in real time, and increase conversions. The platform helps users move their savings into Sun Life so those savings keep growing. The previous design lacked immediate support, leading to a high drop-off rate caused by unanswered queries during the decision-making flow.
The work covered conversational flow design, chatbot model selection, response copy, widget placement, seamless live agent handoff, and a post-response feedback mechanism for continuous improvement.
Problem & UX Research
Customer support volume clustered around a predictable set of recurring intents, yet the existing experience handled them poorly. The absence of immediate support created three measurable failure modes.
Key Challenges
- Delayed response time: users had to wait for email responses, leading to frustration and abandonment.
- High bounce rate caused by unanswered queries during the decision-making flow.
- Low engagement on the landing page: users hovered around FAQs but left without taking action.
Research Insights
- User interviews surfaced recurring questions about products, fees, transfer methods, and retirement policies.
- Users consistently hovered around FAQs but left without acting; passive information delivery wasn't enough.
- Competitor analysis revealed that chatbots significantly improved customer engagement and reduced drop-offs when designed with clear intent taxonomy.
Strategy & Discovery
The design thinking process structured exploration across four stages: Empathise, Define, Ideate, and Prototype, building from user research through to a validated hybrid chatbot model.
Empathise
User tests and feedback analysis identified the core pain points. Interview questions focused on what information users sought, what caused them to abandon, and what would have kept them engaged.
Define
The refined problem statement: users need quick, easy access to information without navigating away from the landing page. Every design decision was evaluated against this constraint.
Ideate
Three chatbot models were evaluated: a rule-based chatbot with predefined Q&A, an AI-powered chatbot using natural language processing, and a hybrid model combining both approaches. The hybrid model was selected for its balance of flexibility and accuracy, capable of handling structured intents while gracefully managing out-of-scope queries.
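A minimal sketch of how such a hybrid router could be layered, assuming a first pass of exact-match rules, an NLP classifier for phrasing variation, and an explicit handoff fallback; every name, answer string, and threshold below is illustrative rather than the production logic:

```typescript
// Sketch of a hybrid intent router: rule-based matching first, an NLP
// classifier second, live-agent handoff when confidence is too low.
// All intents, copy, and thresholds are hypothetical.

type BotReply =
  | { kind: "answer"; text: string }
  | { kind: "handoff"; reason: string };

interface RuleIntent {
  triggers: string[]; // exact phrases the rule-based layer recognises
  answer: string;
}

const ruleBasedIntents: Record<string, RuleIntent> = {
  fees: {
    triggers: ["what are the fees", "fee structure"],
    answer: "Here is how fees work on Choices...",
  },
  transfer: {
    triggers: ["how do i transfer", "move my savings"],
    answer: "You can transfer your savings in three steps...",
  },
};

// Stand-in for a trained NLP intent classifier; a real implementation
// would call a model. A naive keyword score keeps the sketch runnable.
function classifyIntent(message: string): { intent: string; confidence: number } {
  const lower = message.toLowerCase();
  if (lower.includes("fee")) return { intent: "fees", confidence: 0.8 };
  if (lower.includes("transfer")) return { intent: "transfer", confidence: 0.8 };
  return { intent: "unknown", confidence: 0 };
}

const CONFIDENCE_THRESHOLD = 0.75; // hypothetical tuning value

function route(message: string): BotReply {
  const normalized = message.trim().toLowerCase();

  // 1. Rule-based layer: deterministic and accurate for common intents.
  for (const { triggers, answer } of Object.values(ruleBasedIntents)) {
    if (triggers.some((t) => normalized.includes(t))) {
      return { kind: "answer", text: answer };
    }
  }

  // 2. NLP layer: absorbs phrasing variation the rules miss.
  const { intent, confidence } = classifyIntent(message);
  if (confidence >= CONFIDENCE_THRESHOLD && intent in ruleBasedIntents) {
    return { kind: "answer", text: ruleBasedIntents[intent].answer };
  }

  // 3. Explicit fallback: out-of-scope queries route to a live agent.
  return { kind: "handoff", reason: "out-of-scope or low-confidence query" };
}
```

The ordering matters: the cheap, deterministic layer answers the high-volume intents exactly, and the classifier only sees what the rules miss, which is what gives the hybrid its balance of accuracy and flexibility.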
Prototype
Initial wireframes and chatbot interaction flows were created, mapping the top intents to structured conversation paths with explicit fallback logic at every branch.
Design Process
Chat Widget Placement
The widget was positioned in the bottom-right corner for consistent, low-friction access. This follows established conversational interface conventions, reducing the cognitive effort of finding support without competing with primary page actions.
Quick Action Buttons
Predefined quick-action buttons were added for the most common intents, allowing users to initiate queries without typing. This reduced the barrier to first interaction and surfaced the most valuable chatbot capabilities immediately on open.
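As a sketch of the mechanism, the quick actions can be modelled as a small config mapping button labels to known intents; the labels and intent ids here are invented for illustration:

```typescript
// Illustrative quick-action config: each button pairs the label shown
// on open with the intent it fires. Labels and ids are hypothetical.
interface QuickAction {
  label: string;    // button text shown when the widget opens
  intentId: string; // intent triggered on tap
}

const quickActions: QuickAction[] = [
  { label: "What are the fees?", intentId: "fees" },
  { label: "How do I transfer my savings?", intentId: "transfer" },
  { label: "Retirement policy questions", intentId: "retirement" },
  { label: "Talk to a person", intentId: "live_agent" },
];

// Tapping a button injects a known intent directly into the router,
// so the first interaction never depends on classification accuracy.
function onQuickActionTap(action: QuickAction): void {
  console.log(`Quick action fired intent: ${action.intentId}`);
}
```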
Seamless Handoff to Live Support
A smooth transition to live agents was designed for complex queries outside the chatbot's scope. The handoff was treated as a designed experience, not a failure state. Agents received complete conversation context, and users saw a clear, reassuring status message confirming their request was being transferred.
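One plausible shape for that context package, assuming the agent tooling accepts a structured payload (all field names are hypothetical):

```typescript
// Hypothetical handoff payload: the agent receives the full transcript
// and the intents already resolved, so users never repeat themselves.
interface ChatTurn {
  from: "user" | "bot";
  text: string;
}

interface HandoffPayload {
  sessionId: string;
  transcript: ChatTurn[];    // complete conversation so far
  resolvedIntents: string[]; // intents the bot already answered
  handoffReason: string;     // e.g. "low-confidence" or "user requested agent"
}

// Packages the context for the agent and returns the reassuring status
// message shown to the user while the transfer is in progress.
function initiateHandoff(
  sessionId: string,
  transcript: ChatTurn[],
  resolvedIntents: string[],
  reason: string
): { payload: HandoffPayload; userMessage: string } {
  return {
    payload: { sessionId, transcript, resolvedIntents, handoffReason: reason },
    userMessage:
      "Connecting you with a specialist who can see our conversation so far...",
  };
}
```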
Feedback Loop
After each response, the chatbot asked a brief satisfaction question. This real-time feedback mechanism enabled continuous improvement of responses over time and helped identify intent coverage gaps early after launch.
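A minimal sketch of how that loop could be instrumented, assuming a simple helpful/not-helpful prompt logged per intent (all names are illustrative):

```typescript
// Illustrative feedback capture: each answer is followed by a
// satisfaction prompt, logged against its intent so coverage gaps
// surface as intents with low helpfulness rates.
interface FeedbackEvent {
  intentId: string;
  helpful: boolean;
  timestamp: number;
}

const feedbackLog: FeedbackEvent[] = [];

function recordFeedback(intentId: string, helpful: boolean): void {
  feedbackLog.push({ intentId, helpful, timestamp: Date.now() });
}

// Aggregates helpfulness per intent; low-scoring intents become
// candidates for copy rewrites or additional training phrases.
function helpfulnessByIntent(): Map<string, number> {
  const totals = new Map<string, { yes: number; all: number }>();
  for (const e of feedbackLog) {
    const t = totals.get(e.intentId) ?? { yes: 0, all: 0 };
    t.all += 1;
    if (e.helpful) t.yes += 1;
    totals.set(e.intentId, t);
  }
  const rates = new Map<string, number>();
  for (const [intentId, t] of totals) rates.set(intentId, t.yes / t.all);
  return rates;
}
```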
Solution & Key Improvements
The redesigned system solved the drop-off problem through architecture rather than content volume: every path through the bot led to a defined resolution state, even when that state was a graceful handoff to a live agent.
- Hybrid chatbot model combining rule-based accuracy for common intents with NLP flexibility for variation.
- Bottom-right widget placement for always-accessible support without disrupting primary page interactions.
- Quick action buttons surfacing top intents immediately on open, with no typing required for the most common queries.
- Seamless live agent handoff with full conversation context passed automatically, reducing repeat explanation.
- Post-response feedback loop enabling continuous improvement of bot responses from real usage data.
Results
After launching the chatbot, performance was tracked over 60 days, demonstrating improvements across all key indicators.
- ↓ Bounce rate: unanswered queries no longer the primary abandonment driver
- ↑ User engagement and conversion rate post-launch
- 12h → min: average response time reduced from 12 hours to minutes
Learnings
- A well-placed and intuitive chatbot significantly improves engagement; placement and discoverability are as important as response quality.
- Combining AI with human support ensures a seamless experience: the hybrid model outperformed both pure rule-based and pure AI approaches for this intent profile.
- Continuous monitoring helps refine chatbot interactions; the post-response feedback loop was the most valuable ongoing improvement mechanism after launch.
Next Steps
- Optimise chatbot responses based on new user queries and intent patterns surfaced by the feedback loop.
- Extend the chatbot to the entire enrolment flow, not just the landing page.
Conclusion
This case study demonstrates that conversational design is a trust problem before it is a flow problem. The highest-leverage decisions weren't about intent coverage or NLP accuracy; they were about what happens when the bot reaches its limit and whether users feel handled or helped.
Designing the handoff as a first-class experience, not a fallback, and building a feedback loop into the product from day one produced a system that improved itself after launch and maintained user trust even when it couldn't resolve an issue independently.