
Use Atlas Playground

Atlas Playground simulates real user requests in a controlled space, letting enterprise teams harden a release before it hits production users. Treat Playground as the final staging gate: the same content, permissions, and integrations run here without exposing results to the wider org.

🧪 Enterprise testing checklist

  • Curate a living test matrix that covers each business-critical intent, regulated scenario, and known edge case.
  • Define approval thresholds (for example, 95% of scripted test prompts must return approved responses) and document who signs off for each business unit.
  • Run your content review workflow inside Playground: reviewers log how the response was sourced, annotate gaps or compliance concerns, and assign fixes to content owners.
  • When the checklist is met, capture evidence (screenshots, transcript exports) and tag the Atlas state as "staging-approved" before promoting it to production.
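The approval-threshold step above can be sketched as a small script. This is a minimal illustration only: the result format, field names, and the 95% figure are assumptions for the example, not a Voice Atlas export schema or API.

```python
# Hypothetical sketch: check scripted Playground test results against an
# approval threshold before sign-off. The result dictionaries below are
# an assumed format, not an actual Voice Atlas export schema.

APPROVAL_THRESHOLD = 0.95  # e.g. 95% of scripted prompts must be approved


def approval_rate(results):
    """results: list of dicts like {"prompt": ..., "approved": bool}."""
    if not results:
        return 0.0
    approved = sum(1 for r in results if r["approved"])
    return approved / len(results)


def meets_threshold(results, threshold=APPROVAL_THRESHOLD):
    """True when the scripted test run clears the sign-off threshold."""
    return approval_rate(results) >= threshold


results = [
    {"prompt": "reset my password", "approved": True},
    {"prompt": "export payroll data", "approved": True},
    {"prompt": "legacy leave policy", "approved": False},
    {"prompt": "open a support ticket", "approved": True},
]

rate = approval_rate(results)
print(f"approval rate: {rate:.0%}, sign-off ready: {meets_threshold(results)}")
```

A script like this can run after each Playground test pass, so the sign-off decision for each business unit is computed the same way every release.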

Test Your Atlas

  1. Open the Voice Atlas web app and select your Atlas.
  2. Use the Ask something… field on the right to enter questions just like a user would.


  3. Review the responses and adjust your content items as needed.

Playground requests do count against your monthly quota, so schedule testing alongside the Improve Atlas Quality checklist to focus on the most impactful scenarios.

Close the Playground Feedback Loop

  1. Open the Transcripts tab for your Atlas (or export the run log) to pull the full question, answer, and metadata for each Playground test. Every entry shows which sources Voice Atlas used to construct the response, so you can verify sourcing and identify untrusted or missing citations.
  2. Annotate each transcript with test results: mark satisfactory answers, flag compliance concerns, and capture any gaps between the response and your expected policy/knowledge. Spot recurring misses (for example, repeated deflections or legacy policy references) by filtering on prompt tags or intents.
  3. Translate the findings into actionable edits. Update or add content items to cover the gaps, adjust ranking/guardrails, and tag each change back to the transcript that prompted it. Re-run the affected prompts in Playground to confirm the fix before moving to the next issue.
  4. Share the summarized findings (what was added, revised, or retired) with stakeholders so everyone understands how Playground data improved Atlas quality. This running report becomes your quality audit trail when you promote the Atlas to production.
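The triage step in this loop, spotting recurring misses by filtering on prompt tags, can be sketched as follows. The transcript fields (`tag`, `status`) are illustrative assumptions, not a documented Voice Atlas export schema.

```python
# Hypothetical sketch: group flagged Playground transcript entries by
# intent tag to surface recurring misses. The entry fields used here are
# assumptions for illustration, not a documented Voice Atlas export format.

from collections import Counter

transcripts = [
    {"question": "What is the travel policy?", "tag": "travel-policy", "status": "flagged"},
    {"question": "Travel reimbursement limits?", "tag": "travel-policy", "status": "flagged"},
    {"question": "How do I reset my password?", "tag": "it-support", "status": "satisfactory"},
    {"question": "Old PTO carryover rules?", "tag": "pto", "status": "flagged"},
]


def recurring_misses(entries, min_count=2):
    """Return intent tags that were flagged at least min_count times."""
    misses = Counter(e["tag"] for e in entries if e["status"] == "flagged")
    return {tag: n for tag, n in misses.items() if n >= min_count}


print(recurring_misses(transcripts))  # intents with repeated flagged answers
```

Tags that appear repeatedly point to the content items most worth fixing first; after each fix, re-run the matching prompts in Playground to confirm the miss no longer recurs.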

Voice Atlas™ and Chatlas™ are trademarks of Navteca LLC.
Microsoft Teams™ is a trademark of Microsoft Corporation.
Slack™ is a trademark of Slack Technologies, Inc.