
FAQ: Evaluation

Have any other questions? Please add them on GitHub Discussions.

  • How do I upgrade my trace-level evaluators to observation-level evaluators?
  • How to create and manage Score Configs in Langfuse?
  • How to evaluate sessions/conversations?
  • How to retrieve experiment scores via UI or API/SDK?
  • How to use Langfuse-hosted Evaluators on Dataset Runs?
  • I have set up Langfuse, but I do not see any traces in the dashboard. How do I solve this?
  • What are scores in Langfuse and when should I use them?