© 2026 NovaHaven Tech. All rights reserved.

Getting Started with Costara

This guide walks you from zero to tracking your first LLM cost event in under 5 minutes.

Note: Costara is currently in beta. The dashboard at costara.novahaven.tech is coming soon. Sign up to be notified when access opens.

Prerequisites

  • Python 3.8 or later
  • An OpenAI, Anthropic, or Google API key (you need at least one LLM provider)
  • A Costara account — sign up at costara.novahaven.tech

Step 1 — Sign up and create a project

  1. Go to costara.novahaven.tech/signup and create your account
  2. After logging in, create a new Project — this represents one application or service you want to monitor
  3. Navigate to Settings → API Keys and generate a new API key. It will look like cst_live_xxxxxxxxxxxxxxxx

Keep this key in an environment variable — don't hardcode it in source:

export COSTARA_API_KEY=cst_live_xxxxxxxxxxxxxxxx
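A small startup check surfaces a missing key immediately instead of at the first tracked call. This is a sketch of our own convention — `require_api_key` is not part of the costara SDK; it only assumes the `cst_` key prefix shown above:

```python
def require_api_key(environ):
    """Return the Costara API key from an environment mapping, failing
    fast with a clear message if it is missing or doesn't look like a
    Costara key. (Our own helper, not part of the costara SDK.)"""
    key = environ.get("COSTARA_API_KEY", "")
    if not key.startswith("cst_"):
        raise RuntimeError(
            "COSTARA_API_KEY is not set or doesn't look like a Costara key; "
            "see Step 1 for how to generate one."
        )
    return key
```

Call it with `os.environ` at startup, before `costara.init()`.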

Step 2 — Install the SDK

pip install costara

Or with Poetry:

poetry add costara

Step 3 — Initialize the SDK

Call costara.init() once at application startup — in your main.py, app.py, or wherever you bootstrap.

import os
import costara

costara.init(
    api_key=os.environ["COSTARA_API_KEY"],
    project="my-app",          # name of your project in the dashboard
    environment="production",  # "production", "staging", or "development"
)
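If the same codebase deploys to several environments, you can derive the environment argument from a variable rather than hardcoding it. A minimal sketch — COSTARA_ENV is our own convention here, not an official SDK setting:

```python
import os

def costara_environment(environ=None):
    """Pick the environment string passed to costara.init().

    Reads a COSTARA_ENV variable (our own convention, not an official
    SDK setting) and defaults to "development" for local runs."""
    environ = os.environ if environ is None else environ
    env = environ.get("COSTARA_ENV", "development")
    if env not in ("production", "staging", "development"):
        raise ValueError(f"unexpected COSTARA_ENV value: {env!r}")
    return env
```

Then pass `environment=costara_environment()` to `costara.init()`.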

Step 4 — Track a call (Explicit mode)

After each LLM API call, call costara.track() with the usage data:

import time
import openai
import costara

client = openai.OpenAI()

start = time.time()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize this document..."}],
)
elapsed_ms = (time.time() - start) * 1000

costara.track(
    provider="openai",
    model="gpt-4o",
    prompt_tokens=response.usage.prompt_tokens,
    completion_tokens=response.usage.completion_tokens,
    cost=costara.estimate_cost("openai", "gpt-4o", response.usage),
    latency_ms=elapsed_ms,
    feature_tag="document-summarizer",  # what feature triggered this call?
)

The feature_tag is the most important field — it's how Costara attributes costs to your features.
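If you'd rather not repeat the timing and track() boilerplate around every call, you can factor it into a small decorator. A sketch under two assumptions: the wrapped function returns an OpenAI-style response with a `.usage` attribute, and `track_fn` takes the same keyword arguments as `costara.track()` above (it is injected rather than imported, so the sketch has no hard dependency on the SDK; the `cost`/`estimate_cost` field is omitted for brevity):

```python
import time
from functools import wraps

def tracked(feature_tag, track_fn):
    """Decorator: time the wrapped LLM call and report its token usage,
    latency, and feature tag via track_fn (e.g. costara.track)."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.time()
            response = fn(*args, **kwargs)
            track_fn(
                provider="openai",
                model=kwargs.get("model", "unknown"),
                prompt_tokens=response.usage.prompt_tokens,
                completion_tokens=response.usage.completion_tokens,
                latency_ms=(time.time() - start) * 1000,
                feature_tag=feature_tag,
            )
            return response
        return wrapper
    return decorator
```

You would then decorate each call site once, e.g. `@tracked("document-summarizer", costara.track)`, instead of timing and tracking by hand.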


Step 4 (alternative) — Auto-instrument OpenAI

If you're using OpenAI and don't want to add costara.track() to every call, use auto-instrument mode:

import os
import openai
import costara

costara.init(api_key=os.environ["COSTARA_API_KEY"], project="my-app", environment="production")
costara.patch_openai()  # patches the openai module globally

# All chat.completions.create() calls made through any client are now
# tracked automatically. You can still pass a feature tag as a kwarg:
client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[...],
    extra_body={"costara_feature_tag": "customer-support"},  # optional
)

Note: Auto-instrument is currently available for OpenAI only. Anthropic and Google auto-instrument is coming in v0.2.0. Use explicit tracking mode for those providers.
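Until then, the explicit pattern from Step 4 carries over to Anthropic with one wrinkle: Anthropic's Messages API reports usage as `input_tokens` / `output_tokens`, which need to be mapped onto the `prompt_tokens` / `completion_tokens` names `costara.track()` expects. A sketch — `track_claude_call` is our own helper, and `client` and `track` are passed in rather than imported so nothing here requires either SDK to be installed:

```python
import time

def track_claude_call(client, track, *, feature_tag, **create_kwargs):
    """Call Anthropic's Messages API and report usage via track().

    client is an anthropic.Anthropic() instance; track is costara.track.
    Maps Anthropic's input_tokens / output_tokens onto the
    prompt/completion names costara.track() expects."""
    start = time.time()
    response = client.messages.create(**create_kwargs)
    track(
        provider="anthropic",
        model=create_kwargs.get("model", "unknown"),
        prompt_tokens=response.usage.input_tokens,
        completion_tokens=response.usage.output_tokens,
        latency_ms=(time.time() - start) * 1000,
        feature_tag=feature_tag,
    )
    return response
```

With the real SDKs installed you would call it as `track_claude_call(anthropic.Anthropic(), costara.track, feature_tag="...", model="...", max_tokens=..., messages=[...])`.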


Step 5 — View your dashboard

Open costara.novahaven.tech and navigate to your project. Within a few seconds you should see your first event appear in the Live Feed and your cost totals begin to update.


Next steps

  • SDK Reference — full API documentation for all SDK methods
  • Set up a budget alert in the dashboard under Budgets → New Budget
  • Invite a teammate under Settings → Team