Tracing
Observability & tracing with Langfuse.
Gomus AI includes a built-in Langfuse integration so you can inspect and debug every retrieval and generation step of your RAG pipelines in near real-time.
Langfuse stores traces, spans, and prompt payloads in a purpose-built observability backend and offers filtering and visualisations on top.
You will need a Langfuse workspace (cloud or self-hosted) and that project's Public Key and Secret Key.
1. Collect your Langfuse credentials
- Sign in to your Langfuse dashboard.
- Open Settings > Projects and either create a new project or select an existing one.
- Copy the Public Key and Secret Key.
- Note the Langfuse host (e.g. https://cloud.langfuse.com).
The keys are project-scoped: one pair of keys is enough for all environments that should write into the same project.
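A quick sanity check can catch swapped or truncated keys before you paste them into Gomus AI. The helper below is an illustrative sketch that assumes the usual Langfuse key prefixes, `pk-lf-` for the Public Key and `sk-lf-` for the Secret Key; it is not part of Gomus AI or the Langfuse SDK.

```python
def looks_like_langfuse_keys(public_key: str, secret_key: str) -> bool:
    """Rough sanity check for a Langfuse key pair.

    Assumes the conventional Langfuse prefixes: 'pk-lf-' for the
    project Public Key and 'sk-lf-' for the Secret Key.
    """
    return public_key.startswith("pk-lf-") and secret_key.startswith("sk-lf-")


# A well-formed pair passes; a swapped pair fails.
assert looks_like_langfuse_keys("pk-lf-abc123", "sk-lf-def456")
assert not looks_like_langfuse_keys("sk-lf-def456", "pk-lf-abc123")
```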
2. Add the keys to Gomus AI
- Log in to Gomus AI and click your avatar in the top-right corner.
- Select API, then scroll down to Langfuse Configuration.
- Fill in your Langfuse Host, Public Key, and Secret Key.
- Click Save.
Once saved, Gomus AI starts emitting traces automatically — no code changes required.
3. Run a pipeline and watch the traces
- Execute any chat or retrieval pipeline in Gomus AI.
- Open your Langfuse project > Traces.
- Filter traces by name using the pattern gomus-ai-*.
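The gomus-ai-* filter is a glob-style wildcard: it selects every trace whose name begins with gomus-ai-. Python's standard fnmatch module applies the same matching rule, so you can preview which names a pattern selects (the trace names below are made-up examples, not names Gomus AI is guaranteed to emit):

```python
from fnmatch import fnmatch

# Hypothetical trace names for illustration.
trace_names = ["gomus-ai-chat", "gomus-ai-retrieval", "other-service-job"]

# Keep only the names matching the glob pattern gomus-ai-*.
matched = [name for name in trace_names if fnmatch(name, "gomus-ai-*")]
print(matched)  # ['gomus-ai-chat', 'gomus-ai-retrieval']
```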
For every user request you will see:
- A trace representing the overall request.
- Spans for retrieval, ranking, and generation steps.
- The complete prompts, retrieved documents, and LLM responses as metadata.
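Conceptually, each request therefore produces a small tree: one trace at the root, with a child span per pipeline step and the payloads attached as metadata. The dataclasses below are an illustrative model of that hierarchy, assuming nothing about the Langfuse wire format; the field and span names are placeholders.

```python
from dataclasses import dataclass, field


@dataclass
class Span:
    name: str  # e.g. "retrieval", "ranking", "generation"
    metadata: dict = field(default_factory=dict)  # prompts, documents, responses


@dataclass
class Trace:
    name: str  # e.g. "gomus-ai-chat"
    spans: list = field(default_factory=list)


# One user request -> one trace with a span per pipeline step.
trace = Trace(name="gomus-ai-chat")
trace.spans.append(Span("retrieval", {"documents": ["doc-1", "doc-2"]}))
trace.spans.append(Span("ranking"))
trace.spans.append(Span("generation", {"response": "answer text"}))

print([span.name for span in trace.spans])  # ['retrieval', 'ranking', 'generation']
```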
Use Langfuse's diff view to compare prompt versions or drill down into long-running retrievals to identify bottlenecks.