Case Study · AI & NLP Analytics

User Sentiment Prediction Dashboard

Service desk teams were handling thousands of tickets with no way to measure how users actually felt. We built an end-to-end AI pipeline — from ServiceNow data to fine-tuned transformer model to automated Power BI dashboards — that makes user sentiment a first-class operational metric.

1,000+ tickets labelled & trained
DistilBERT transformer model selected
100% automated pipeline
Real-time operational monitoring

The Problem

Service desk teams handle large volumes of tickets every day. Priority levels, SLA compliance, and resolution times are tracked — but user sentiment is not. How a customer actually felt about their experience was anecdotal at best, entirely invisible at worst.

The consequences were predictable: dissatisfied users were identified late, escalations were handled reactively, and there was no systematic way to correlate service performance with how users perceived it. Management could see that tickets were being closed, but couldn't see whether users were walking away satisfied or frustrated.

The gap: Structured fields (priority, SLA, resolution time) tell you what happened. Sentiment tells you how it felt. Without both, service managers are making decisions with half the picture.

What We Built

End-to-End AI Sentiment Pipeline

We designed and built a fully automated sentiment prediction pipeline — from data ingestion through to Power BI visualisation — with a fine-tuned transformer model at its core. The system processes both historical tickets (for trend analysis) and new tickets as they arrive (for live operational monitoring).

ServiceNow Data Ingestion

Ticket data is retrieved directly from ServiceNow via REST API. The system captures a rich set of ticket attributes — not just the description, but the full context needed to understand sentiment accurately.

This breadth of data ensures the model works from real ticket context — not isolated text snippets stripped of the information that makes them interpretable.
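The ingestion step can be sketched against the ServiceNow Table API. The instance name, credentials, and the exact field list below are illustrative assumptions, not the production configuration:

```python
# Sketch of pulling ticket context from the ServiceNow Table API.
# FIELDS is an assumed selection -- the real pipeline captures a richer set.
FIELDS = [
    "number", "short_description", "description",
    "priority", "category", "assignment_group",
    "resolved_at", "close_notes",
]

def build_params(since: str) -> dict:
    """Query params for GET /api/now/table/incident."""
    return {
        "sysparm_fields": ",".join(FIELDS),
        "sysparm_query": f"sys_updated_on>={since}",
        "sysparm_limit": "1000",
    }

def fetch_tickets(instance: str, auth: tuple, since: str) -> list[dict]:
    """Fetch tickets updated since the given timestamp."""
    import requests  # lazy import: only needed when actually calling the API
    resp = requests.get(
        f"https://{instance}.service-now.com/api/now/table/incident",
        params=build_params(since),
        auth=auth,
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]
```

Keeping the query construction separate from the HTTP call makes the field selection easy to review and test on its own.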

Sentiment Definition and Labelling

Before training, we established a clear and consistent sentiment definition aligned with operational decision-making.

Approximately 1,000 historical tickets were manually reviewed and labelled by domain experts to create a reliable, balanced training dataset. This upfront labelling investment is what made the model genuinely useful — rather than a generic sentiment classifier trained on unrelated data.

Model Selection and Training

We evaluated three candidate models before selecting the final approach:

Random Forest: lower accuracy on text-heavy ticket data
LSTM: improved, but limited contextual understanding
DistilBERT ✓: best accuracy; selected for production

The selected model is a fine-tuned DistilBERT (uncased), a lightweight transformer architecture optimised for natural-language classification tasks. It was trained on the labelled ticket dataset, with positive and negative examples balanced to reduce prediction bias.
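A minimal fine-tuning sketch, assuming the Hugging Face transformers stack; the hyperparameters are illustrative, not the production settings. The `class_weights` helper shows one common way to offset label imbalance:

```python
# Fine-tuning sketch for a binary sentiment classifier on labelled tickets.
from collections import Counter

def class_weights(labels: list[int]) -> dict[int, float]:
    """Inverse-frequency weights to offset positive/negative imbalance."""
    counts = Counter(labels)
    total = len(labels)
    return {lab: total / (len(counts) * n) for lab, n in counts.items()}

def train(texts: list[str], labels: list[int]):
    # Heavy dependencies imported lazily so the helper above stays standalone.
    import torch
    from transformers import (AutoModelForSequenceClassification,
                              AutoTokenizer, Trainer, TrainingArguments)

    tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    enc = tok(texts, truncation=True, padding=True, return_tensors="pt")

    class TicketDataset(torch.utils.data.Dataset):
        def __len__(self):
            return len(labels)
        def __getitem__(self, i):
            item = {k: v[i] for k, v in enc.items()}
            item["labels"] = torch.tensor(labels[i])
            return item

    model = AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=2)
    args = TrainingArguments(output_dir="out", num_train_epochs=3,
                             per_device_train_batch_size=16)
    Trainer(model=model, args=args, train_dataset=TicketDataset()).train()
    return model
```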

The trained model is hosted and versioned on Hugging Face: bistecglobal/distilbert-uncased-classifier-new. Versioning the model on Hugging Face means the pipeline can be updated and improved without disrupting the production deployment.
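Loading the hosted model for scoring might look like the following. The repository id comes from the case study, but pinning a revision and the low-confidence routing rule are our suggestions, and the `LABEL_0`/`LABEL_1` output format is an assumption about the model's config:

```python
# Scoring sketch against the versioned model on the Hugging Face Hub.
def load_classifier(revision: str = "main"):
    from transformers import pipeline  # lazy: heavy dependency
    return pipeline(
        "text-classification",
        model="bistecglobal/distilbert-uncased-classifier-new",
        revision=revision,  # pin a version so model upgrades are deliberate
        truncation=True,
    )

def to_sentiment(pred: dict, threshold: float = 0.5) -> str:
    """Map a raw pipeline prediction to an operational label.

    Assumes the model emits LABEL_0 (negative) / LABEL_1 (positive).
    """
    if pred["score"] < threshold:
        return "uncertain"  # route low-confidence tickets to human review
    return "negative" if pred["label"].upper().endswith("0") else "positive"
```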

Automated Prediction Workflow

The prediction pipeline runs automatically on a schedule.

The entire process — from new ticket arriving in ServiceNow to sentiment appearing in a Power BI dashboard — requires zero manual intervention.
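The scheduled cycle can be sketched as a single composable function; in production a cron job or orchestrator would call `run_once`, and the loop below is purely illustrative:

```python
# Minimal scheduler sketch (stdlib only).
import time

def run_once(fetch, predict, publish) -> int:
    """One pipeline cycle: pull new tickets, score them, publish results."""
    tickets = fetch()
    scored = [{**t, "sentiment": predict(t["description"])} for t in tickets]
    publish(scored)
    return len(scored)

def run_forever(fetch, predict, publish, interval_s: int = 900):
    """Illustrative loop; a cron job or orchestrator replaces this in practice."""
    while True:
        run_once(fetch, predict, publish)
        time.sleep(interval_s)
```

Passing `fetch`, `predict`, and `publish` in as callables keeps each stage independently testable.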

Power BI Reporting

The dashboard layer translates AI outputs into operational insight. Service managers can track sentiment trends over time, identify which services or ticket categories consistently generate negative sentiment, and correlate SLA compliance with user experience — giving them the evidence base to prioritise improvement initiatives where they'll have the most impact.
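The hand-off to the dashboard layer can be as simple as a flat table that Power BI refreshes from; the column names below are our assumptions, not the production schema:

```python
# Sketch of exporting scored tickets as CSV for dashboard consumption.
import csv

COLUMNS = ["number", "category", "sla_met", "sentiment"]

def write_report(rows: list[dict], fh) -> int:
    """Write scored tickets to a file-like object; extra keys are ignored."""
    writer = csv.DictWriter(fh, fieldnames=COLUMNS, extrasaction="ignore")
    writer.writeheader()
    for row in rows:
        writer.writerow(row)
    return len(rows)
```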

Tech Stack

ServiceNow REST API · DistilBERT (Hugging Face) · Python · Power BI · NLP / Transformers · Automated Scheduling

The Results

1,000+ tickets labelled for training
DistilBERT production transformer model
100% automated end-to-end

Service managers now have a live, quantified view of user sentiment across their ticket queue — something that previously didn't exist. Dissatisfied users are identified proactively rather than after an escalation. The SLA-sentiment correlation gives leadership a more complete picture of service quality than SLA compliance alone ever could. And because the pipeline is fully automated, the insight is always current.


