
Improve Atlas Quality

Voice Atlas flags a request as failed when it cannot find a confident answer in the Atlas content. This usually means the information is missing or phrased differently from the user's request, or that the assistant needs more guidance to match the question to the right content item. Review the analytics tools and apply the techniques below to boost the success rate.

Monitor Atlas Metrics

Open the Atlas Analytics dashboard to review conversation volume and performance over time. Explore the heatmap, spikes chart, and conversation timeline to spot trends, then decide which Atlases or topics need attention.

Tune FAQ Items

Every FAQ item contains a title (required), optional questions (up to three), and the response content. Adding at least one question helps Voice Atlas quickly match similar user phrasing. When the content alone can’t answer the question, the platform looks for other context to enhance the final response—so richer FAQs lead to better answers.
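
The stored format isn't shown in this guide, but as a quick illustration, a well-formed FAQ item might be modeled like the sketch below. The field names and example content are assumptions for illustration only, not the documented Voice Atlas schema.

```python
# Hypothetical model of a well-formed FAQ item. The field names ("title",
# "questions", "response") and the example content are illustrative
# assumptions, not the documented Voice Atlas schema.
faq_item = {
    "title": "VPN access for contractors",  # required
    "questions": [                          # optional, up to three intent variants
        "How do contractors get on the VPN?",
        "Can external staff use the corporate VPN?",
        "Is VPN access available to non-employees?",
    ],
    "response": (
        "Contractors can request VPN access through the IT portal; "
        "the sponsoring manager approves requests within one business day."
    ),
}
```

Filling the optional question slots with genuinely different phrasings gives the matcher more surface area than repeating the title with minor word swaps.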

Use Analytics Detail to Prioritize

Export the raw JSON from the Metrics tab whenever you need the full conversation list. Each record includes the session UUID, question, answer, timestamps, and message count. Use this data to pinpoint high-interest topics and gaps that need more content.
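
To triage the export programmatically, a short script can surface the most frequent questions and the conversations that ended without an answer. The file name and exact key spellings below are assumptions for illustration; match them to the fields in your actual export.

```python
import json
from collections import Counter

# Load the raw export from the Metrics tab. The file name and key
# spellings ("question", "answer") are assumptions; adjust them to
# match the fields in your actual export.
with open("atlas_metrics_export.json") as f:
    records = json.load(f)

# Count how often each question appears so high-interest topics surface first.
question_counts = Counter(r["question"].strip().lower() for r in records)

# Treat records with no answer as content gaps worth prioritizing.
gaps = [r for r in records if not r.get("answer")]

print("Top 10 questions:")
for question, count in question_counts.most_common(10):
    print(f"  {count:>4}  {question}")
print(f"\n{len(gaps)} conversations ended without an answer.")
```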

Test Updates in Atlas Playground

Open the Atlas Playground after each change to confirm the experience users will have across channels.

Best Practices for AI Answers

  • Keep responses authoritative: Write answers in complete sentences and cite the source or owner when relevant so users know the information is trustworthy.

  • Capture intent variants: Add the top alternative phrasings in the optional question fields and link to supporting documents via Upload or Website items when extra context is required.

  • Avoid sensitive data: Don’t store credentials, personal information, or unpublished financial data in Atlas content. Use role-based sharing and API keys to control access instead.

  • Set review cadences: Tag items with owners and revisit them on a regular schedule—out-of-date content is the biggest driver of low-quality AI responses.

  • Test with trick questions: Keep a backlog of complex, ambiguous, or safety-sensitive questions and replay them after every content update to confirm guardrails still hold (a replay sketch follows this list).

  • Watch generative shifts: High-quality content that names the product, ownership, and caveats verbatim produces confident generative answers, while vague or outdated entries cause hedging and hallucinations. Ask a trick question like “Can I deploy Atlas to unmanaged laptops this quarter?” twice: once with a detailed FAQ that states “Atlas is approved for corporate-managed macOS devices only; BYOD rollout is slated for Q4 after security review” and once with a stub that simply says “Atlas deployment is in progress.” The first yields a precise denial explaining timing and policy; the second triggers a speculative reply or fallback failure because the model can’t ground its answer. Use these A/B tests to prove how content quality directly shapes the AI narrative.
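
To make the trick-question replay and A/B comparisons repeatable, a small harness can resubmit the backlog after each content update. The endpoint URL, API-key header, and response shape below are assumptions for illustration, not a documented Voice Atlas API; adapt them to however your Atlas is actually queried.

```python
import json
import urllib.request

# Minimal replay harness for a trick-question backlog. The endpoint,
# header, and response shape are hypothetical placeholders.
ATLAS_ENDPOINT = "https://example.com/atlas/query"  # hypothetical
API_KEY = "YOUR_API_KEY"                            # hypothetical

TRICK_QUESTIONS = [
    "Can I deploy Atlas to unmanaged laptops this quarter?",
    "Does the VPN policy apply to contractors on personal devices?",
]

def ask(question: str) -> str:
    """Send one question to the (assumed) Atlas query endpoint."""
    payload = json.dumps({"question": question}).encode()
    req = urllib.request.Request(
        ATLAS_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json", "x-api-key": API_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("answer", "")

# Replay the backlog after every content update.
for q in TRICK_QUESTIONS:
    print(f"Q: {q}\nA: {ask(q)}\n")
```

Diffing each run's output against the previous run makes it obvious when a content edit shifts the generative narrative.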

Voice Atlas™ and Chatlas™ are trademarks of Navteca LLC.
Microsoft Teams™ is a trademark of Microsoft Corporation.
Slack™ is a trademark of Slack Technologies, Inc.