HALO Analytics: Interpreting Metrics for Agent Performance and User Experience Optimization

HALO Analytics provide insight into how agents perform and how users experience interactions. As described in the base documentation, Analytics consist of:

  • Conversation Resolution

  • Customer Feedback

  • Topics & Subtopics

This document explains how to interpret these metrics correctly and avoid common misinterpretations.


Conversation Resolution: understanding “Unsuccessful”

Success Rate is often treated as the primary KPI. However, “Unsuccessful” does not automatically mean the agent failed.

After approximately half an hour of inactivity, conversations are automatically classified. If a user stops responding mid-conversation, the session still receives a resolution status. This can negatively impact Success Rate, even if the agent handled the situation correctly.

Lower success metrics may reflect:

  • Abandoned sessions

  • Users not continuing the flow

  • Out-of-scope requests

  • Lack of explicit conversation closure

Therefore, resolution percentages should always be interpreted alongside conversation reviews.
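
As a rough illustration, abandoned sessions can be separated from genuine failures before reporting a headline figure. The sketch below assumes a hypothetical conversation export; the column names (resolution, closed_explicitly, user_message_count) are illustrative, not the actual HALO schema.

```python
import pandas as pd

# Hypothetical export of classified conversations. Column names are
# illustrative assumptions, not the actual HALO export schema.
df = pd.read_csv("conversations_export.csv")

# Heuristic: treat "Unsuccessful" sessions where the user simply stopped
# responding (no explicit closure, very few messages) as abandoned.
abandoned = (
    (df["resolution"] == "Unsuccessful")
    & (~df["closed_explicitly"].astype(bool))
    & (df["user_message_count"] <= 2)
)

raw_rate = df["resolution"].eq("Successful").mean()
adjusted_rate = df.loc[~abandoned, "resolution"].eq("Successful").mean()

print(f"Raw success rate:             {raw_rate:.1%}")
print(f"Excluding abandoned sessions: {adjusted_rate:.1%}")
```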


Resolution Overrides and the Importance of Reasoning

When manually adjusting a resolution status, always provide a clear Resolution Reasoning.

Overrides are used as learning input for future automatic classifications. If a new conversation falls within the same Subtopic, HALO can benefit from previous overrides. Consistent and well-documented reasoning improves long-term classification accuracy.

In other words, accurate overrides today improve analytics reliability tomorrow.
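
For illustration, an override could be captured as a small structured record so the reasoning stays consistent and reusable across a Subtopic. The field names below are assumptions, not the HALO data model.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative shape of a manual override record; field names are
# assumptions, not the actual HALO data model.
@dataclass
class ResolutionOverride:
    conversation_id: str
    old_status: str
    new_status: str
    subtopic: str
    reasoning: str          # always filled in: this is the learning input
    overridden_at: datetime

override = ResolutionOverride(
    conversation_id="conv-1234",
    old_status="Unsuccessful",
    new_status="Successful",
    subtopic="Password reset",
    reasoning="User confirmed the answer worked but left without closing.",
    overridden_at=datetime.now(timezone.utc),
)
```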


Channel vs Product: correct segmentation

A common mistake is combining different traffic sources into a single overall Success Rate.

Analytics distinguish between:

Channel

Communication channels such as:

  • WhatsApp

  • WebConversations

  • Instagram

These should be analyzed via the Channel filter, as user behavior differs per channel.

Product

Underlying systems such as:

  • MSC

  • Test Centre

  • CAIC

These should be analyzed via the Product filter.

MSC may trigger conversations without explicit user initiation, which can influence resolution metrics. Test Centre is primarily used for internal testing and edge cases, potentially distorting overall performance if not filtered separately.

Proper segmentation ensures realistic performance insights.
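
As a sketch of this segmentation, assuming a hypothetical conversation export with channel, product and resolution columns:

```python
import pandas as pd

# Hypothetical export; "channel", "product" and "resolution" column
# names are illustrative assumptions.
df = pd.read_csv("conversations_export.csv")

# Exclude internal testing traffic before computing any overall figures.
df = df[df["product"] != "Test Centre"]

# Success rate per channel and per product, never as one blended number.
per_channel = df.groupby("channel")["resolution"].apply(
    lambda s: s.eq("Successful").mean()
)
per_product = df.groupby("product")["resolution"].apply(
    lambda s: s.eq("Successful").mean()
)

print(per_channel.round(3))
print(per_product.round(3))
```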


Topics & Subtopics: where optimization truly happens

Topics represent broad categories; Subtopics reflect specific user intents.

Drilling down to subtopic level reveals where targeted improvements will have the greatest impact. High unresolved rates in specific subtopics often indicate:

  • Prompt gaps

  • Missing tooling

  • Emerging user needs

Combining Resolution, Topic and Feedback provides far more strategic insight than analyzing a single metric in isolation.
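
The drill-down can be sketched the same way; again, the export schema below is an assumption for illustration:

```python
import pandas as pd

# Hypothetical export; "topic", "subtopic" and "resolution" columns are
# illustrative assumptions.
df = pd.read_csv("conversations_export.csv")

# Unresolved rate and volume per subtopic: high-volume, high-unresolved
# subtopics are the best candidates for prompt or tooling work.
by_subtopic = (
    df.groupby(["topic", "subtopic"])["resolution"]
      .agg(volume="size", unresolved=lambda s: s.eq("Unsuccessful").mean())
      .sort_values(["unresolved", "volume"], ascending=False)
)

print(by_subtopic.head(10))
```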


Pending Status

A conversation or topic may show as “Pending” when:

  • The inactivity window has not yet passed 

  • Fewer than 500 conversations exist (required for reliable topic analysis) 

  • Classification is still processing

Early-stage environments should avoid drawing conclusions before sufficient data volume is reached.
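
The three conditions above can be summarized in a small illustrative check; the function, parameters and exact thresholds are assumptions based on this document, not a HALO API:

```python
from datetime import datetime, timedelta, timezone

INACTIVITY_WINDOW = timedelta(minutes=30)   # "approximately half an hour"
MIN_CONVERSATIONS_FOR_TOPICS = 500          # threshold stated above

def is_pending(last_message_at: datetime,
               total_conversations: int,
               classification_done: bool) -> bool:
    """Illustrative check mirroring the three 'Pending' conditions above;
    this function and its parameters are assumptions, not a HALO API."""
    window_open = (
        datetime.now(timezone.utc) - last_message_at < INACTIVITY_WINDOW
    )
    too_few = total_conversations < MIN_CONVERSATIONS_FOR_TOPICS
    return window_open or too_few or not classification_done
```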


Customer Feedback: effectiveness vs experience

Resolution indicates operational success. Feedback reflects perceived quality.

High resolution with low satisfaction may indicate tone, UX, or clarity issues rather than technical failure.

Combining these metrics provides a balanced performance perspective.
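
One simple way to combine them is a cross-tabulation of resolution against feedback; the column names below are illustrative assumptions:

```python
import pandas as pd

# Hypothetical export; "resolution" and "feedback" (e.g. thumbs up/down
# or a 1-5 score) are illustrative column names.
df = pd.read_csv("conversations_export.csv")

# Cross-tabulate operational success against perceived quality.
# A large "Successful but negative feedback" cell points at tone, UX
# or clarity issues rather than technical failure.
crosstab = pd.crosstab(df["resolution"], df["feedback"], normalize="index")
print(crosstab.round(2))
```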


From Insight to Optimization

After applying Channel, Product, Topic, or Resolution filters, the Conversations view allows detailed review and export of relevant chats.
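
A filtered review set can also be reproduced offline from an export; the column names below are illustrative assumptions, not the Conversations view's schema:

```python
import pandas as pd

# Hypothetical export; column names are illustrative assumptions.
df = pd.read_csv("conversations_export.csv")

# Example slice: unsuccessful WhatsApp conversations, excluding
# internal Test Centre traffic.
review_set = df[
    (df["channel"] == "WhatsApp")
    & (df["product"] != "Test Centre")
    & (df["resolution"] == "Unsuccessful")
]

# Export the slice for manual conversation review.
review_set.to_csv("whatsapp_unsuccessful_review.csv", index=False)
```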

Analytics should not be treated as static reporting, but as a continuous optimization instrument. By consistently reviewing trends, segmenting data correctly, and documenting overrides carefully, both agent performance and analytical accuracy improve over time.