FAQ
Questions and answers.
Everything you need to know about Ansight.
What is Ansight?
Ansight is an SDK and desktop studio that gives AI agents runtime visibility into your mobile app. The SDK captures telemetry inside the app, and Studio acts as a local MCP bridge so agents like Codex, Claude, or Cursor can query the app's live state, inspect sessions, and help fix issues.
How do I use Ansight?
Ansight provides a natural language bridge to the internals of your app. We expose discrete, primitive operations that any AI agent can build on. You could ask your agent to inspect a database by running a remote SQL query, analyze the current visual tree, take a screenshot, or download a sandboxed file from your app. We provide the tools; your creativity defines how you use them.
Which mobile frameworks are supported?
.NET MAUI is in beta now. Flutter, React Native, iOS (SwiftUI/UIKit), and Android (Jetpack Compose) are planned. The SDK is open source, and Studio works with any framework once the SDK is integrated.
How does the MCP bridge work?
Studio exposes your paired app's tools and data over MCP (Model Context Protocol). When an AI agent connects to Studio through MCP, it can request screenshots, logs, performance telemetry, and even call app-defined tools — all through the local network. The agent gets real evidence instead of guessing from source code alone.
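Under the hood, MCP messages are JSON-RPC 2.0. As a rough sketch of what an agent sends the bridge when it invokes a tool, here is a minimal request builder — note that the tool name `take_screenshot` and its arguments are hypothetical illustrations, not Ansight's actual tool surface:

```python
import json

# MCP tool invocations are JSON-RPC 2.0 "tools/call" requests.
# The tool name and arguments below are invented for illustration.
def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(payload)

print(make_tool_call(1, "take_screenshot", {"display": "main"}))
```

The agent never needs Ansight-specific client code: anything that speaks MCP can discover and call whatever tools Studio advertises.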
Does my data leave my machine?
No. Studio runs locally on your desktop, and the SDK communicates with Studio over your local network. Pairing is secured with public/private key pairs and time-boxed configs. No telemetry, sessions, or app data are sent to any cloud service.
Can I use Ansight in debug and release builds, and on simulators or physical devices?
Yes. Ansight is useful in both debug and release builds, and on simulators as well as physical devices. Debug and simulator workflows are usually the fastest place to integrate and iterate. Release builds and physical devices matter when you need to validate production-like behavior, performance characteristics, hardware-specific issues, or device-only bugs.
How is pairing secured?
Studio issues a signed, time-boxed pairing config that includes the host public key, a one-time token, and a challenge public key. On first pair, the app verifies the signed config, pins the host key, presents the token, and validates a challenge proof during the handshake before the live session opens. In practice, pairing is bound to the intended Studio instance instead of blindly trusting the network.
What kinds of things could I use Ansight for?
Ansight is useful anywhere runtime evidence matters. Common examples include extracting high-quality bug reports, creating flow chart diagrams from a user's test session, analyzing memory usage, investigating performance regressions, understanding reproduction paths, and mapping user flows from real captured behavior instead of guesswork.
What AI agents work with Ansight?
Any agent that supports MCP can connect to Studio. This includes Codex, Claude Code, Cursor, and any other MCP-compatible client. You choose the agent and model — Studio is the bridge, not the AI.
Do you provide a CLI version of Ansight?
Not yet. Today, Ansight Studio is the main entry point for pairing, session inspection, and the local MCP bridge. A CLI is planned so teams can run headless, scriptable workflows for automation, CI-style tasks, and tighter integration with existing developer tooling, while keeping the same local-first model.
How does the QA tester workflow work?
The SDK continuously captures logs, metrics, screenshots, and navigation state during QA. When a tester hits a bug, they trigger a snapshot from Studio. Studio shows the full session timeline, generates a Mermaid flow chart of the user journey, and lets the tester export or share the session with a developer — complete with reproduction steps and all the evidence.
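A generated journey chart might look like this Mermaid snippet — the screen names and crash point are invented for illustration, not output from a real session:

```mermaid
flowchart TD
    Launch --> Login
    Login --> Home
    Home --> Checkout
    Checkout -->|crash on Pay| Snapshot[Snapshot captured]
```

Because the chart is derived from captured navigation state, it reflects what the tester actually did, not what the test plan said they would do.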
What kind of analysis does the AI provide?
The AI agent reconstructs the execution timeline from the captured session, identifies anomalies, and correlates code paths with runtime behavior. It provides root-cause guidance with specific file and method references — not generic suggestions, but analysis grounded in real execution data.
Can I choose what the AI agent sees?
Yes. Selective analysis inputs let you decide exactly which data the agent receives: logs, FPS metrics, memory samples, screenshots, device metadata, and linked app context. You stay in control of the investigation.
How do I pair a device with Studio?
Scan a QR code from Studio, or copy a time-boxed pairing config as JSON. The device connects over your local network — setup takes under ten seconds.
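For the copy-paste path, the config is plain JSON. The exact schema is not shown here; this hypothetical fragment only illustrates the fields the pairing flow implies (host public key, one-time token, challenge public key, expiry, and a signature over the config):

```json
{
  "host": "192.168.1.20:53127",
  "hostPublicKey": "base64…",
  "token": "one-time-pairing-token",
  "challengePublicKey": "base64…",
  "expiresAt": "2025-01-01T12:05:00Z",
  "signature": "base64…"
}
```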
Is the SDK open source?
Yes. The Ansight SDK is open source and available on GitHub. Studio is a separate desktop application.
What does it cost?
Ansight is currently in beta. Join the waitlist for early access — pricing will be announced closer to launch.
Can I export sessions?
Yes. You can package any session into a portable archive and attach it to a ticket, hand it to another developer, or revisit it later. Saved analyses can also be browsed, filtered, and reopened by app and session.
Put Ansight between your app and your agent.
Join the waitlist for early access to the local MCP bridge and Studio workflow.