{
  "heading": "Quickstart",
  "intro": "Run one guarded agent call locally. Add the hosted dashboard later.",
  "note": "Install. Run. Verify.",
  "human_path": [
    {
      "number": "01",
      "title": "Install",
      "body": "Use one install command for the SDK and your provider client."
    },
    {
      "number": "02",
      "title": "Run locally",
      "body": "Start with one guarded call so you can see the stop conditions before you wire any hosted service."
    },
    {
      "number": "03",
      "title": "Verify",
      "body": "Run agentguard doctor, then use the CLI report or incident commands and the local trace file to confirm the run behaved the way you expected."
    }
  ],
  "frameworks": [
    {
      "framework": "openai",
      "label": "OpenAI",
      "language": "python",
      "install": "pip install agentguard47 openai",
      "code": "import agentguard\nfrom openai import OpenAI\n\nagentguard.init(\n    service=\"openai-agent\",\n    budget_usd=5.00,\n    trace_file=\"traces.jsonl\",\n    local_only=True,\n)\n\nclient = OpenAI()\nresponse = client.chat.completions.create(\n    model=\"gpt-4o-mini\",\n    messages=[{\"role\": \"user\", \"content\": \"Give me a one-line summary of AgentGuard.\"}],\n)\n\nprint(response.choices[0].message.content)\nprint(\"Traces saved to traces.jsonl\")",
      "summary": "Smallest OpenAI path: init once, keep the proof local, and let AgentGuard auto-patch the client.",
      "nextCommands": [
        "agentguard doctor",
        "python agentguard_openai_quickstart.py",
        "agentguard incident traces.jsonl"
      ],
      "requiresEnv": [
        "OPENAI_API_KEY"
      ],
      "notes": [
        "local_only=True keeps trace output local. Your OpenAI call still uses OPENAI_API_KEY.",
        "Auto-patching is on by default in agentguard.init()."
      ]
    },
    {
      "framework": "raw",
      "label": "Plain Python",
      "language": "python",
      "install": "pip install agentguard47",
      "code": "import agentguard\n\ntracer = agentguard.init(\n    service=\"my-agent\",\n    budget_usd=5.00,\n    trace_file=\"traces.jsonl\",\n    local_only=True,\n)\n\n@agentguard.trace_tool(tracer)\ndef search_docs(query: str) -> str:\n    return f\"results for {query}\"\n\nwith tracer.trace(\"agent.run\") as span:\n    result = search_docs(\"agentguard quickstart\")\n    span.event(\"agent.answer\", data={\"result\": result})\n\nprint(\"Traces saved to traces.jsonl\")",
      "summary": "Offline starter that proves local tracing and guard wiring without any API keys or network calls.",
      "nextCommands": [
        "agentguard doctor",
        "python agentguard_raw_quickstart.py",
        "agentguard report traces.jsonl"
      ],
      "requiresEnv": [],
      "notes": [
        "This path is fully local. No dashboard and no provider API keys are required.",
        "Start here if you want a zero-risk first run before wiring a real model client."
      ]
    },
    {
      "framework": "langchain",
      "label": "LangChain",
      "language": "python",
      "install": "pip install agentguard47[langchain] langchain langchain-openai",
      "code": "from agentguard import BudgetGuard, JsonlFileSink, LoopGuard, Tracer\nfrom agentguard.integrations.langchain import AgentGuardCallbackHandler\nfrom langchain_openai import ChatOpenAI\n\ntracer = Tracer(\n    sink=JsonlFileSink(\"traces.jsonl\"),\n    service=\"langchain-agent\",\n)\nloop_guard = LoopGuard(max_repeats=3, window=6)\nbudget_guard = BudgetGuard(max_cost_usd=5.00, max_calls=20)\nhandler = AgentGuardCallbackHandler(\n    tracer=tracer,\n    loop_guard=loop_guard,\n    budget_guard=budget_guard,\n)\n\nllm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)\nresponse = llm.invoke(\n    \"Give me a one-line summary of AgentGuard.\",\n    config={\"callbacks\": [handler]},\n)\n\nprint(response.content)\nprint(\"Traces saved to traces.jsonl\")",
      "summary": "Callback-based LangChain starter with explicit loop and budget guards.",
      "nextCommands": [
        "agentguard doctor",
        "python agentguard_langchain_quickstart.py",
        "agentguard report traces.jsonl"
      ],
      "requiresEnv": [
        "OPENAI_API_KEY"
      ],
      "notes": [
        "LangChain uses the callback handler rather than agentguard.init() auto-patching.",
        "This mirrors the current SDK LangChain integration path."
      ]
    },
    {
      "framework": "langgraph",
      "label": "LangGraph",
      "language": "python",
      "install": "pip install agentguard47[langgraph] langgraph",
      "code": "from typing import TypedDict\n\nfrom langgraph.graph import END, START, StateGraph\n\nfrom agentguard import BudgetGuard, JsonlFileSink, LoopGuard, Tracer\nfrom agentguard.integrations.langgraph import guard_node\n\ntracer = Tracer(\n    sink=JsonlFileSink(\"traces.jsonl\"),\n    service=\"langgraph-agent\",\n)\nloop_guard = LoopGuard(max_repeats=3, window=6)\nbudget_guard = BudgetGuard(max_cost_usd=5.00, max_calls=20)\n\n# LangGraph derives state channels from the schema's annotations,\n# so use a TypedDict rather than a bare dict.\nclass State(TypedDict):\n    question: str\n    answer: str\n\ndef research_node(state: State) -> dict:\n    question = state[\"question\"]\n    return {\"answer\": f\"local answer for {question}\"}\n\nbuilder = StateGraph(State)\nbuilder.add_node(\n    \"research\",\n    guard_node(\n        research_node,\n        tracer=tracer,\n        loop_guard=loop_guard,\n        budget_guard=budget_guard,\n    ),\n)\nbuilder.add_edge(START, \"research\")\nbuilder.add_edge(\"research\", END)\n\ngraph = builder.compile()\nresult = graph.invoke({\"question\": \"What is AgentGuard?\"})\nprint(result[\"answer\"])\nprint(\"Traces saved to traces.jsonl\")",
      "summary": "Local LangGraph starter with a guarded node wrapper and no provider dependency in the example itself.",
      "nextCommands": [
        "agentguard doctor",
        "python agentguard_langgraph_quickstart.py",
        "agentguard report traces.jsonl"
      ],
      "requiresEnv": [],
      "notes": [
        "This example is fully local and keeps the first proof simple.",
        "Swap the node body for your real agent logic once the guard wiring is in place."
      ]
    },
    {
      "framework": "crewai",
      "label": "CrewAI",
      "language": "python",
      "install": "pip install agentguard47[crewai] crewai",
      "code": "from crewai import Agent, Crew, Task\n\nfrom agentguard import BudgetGuard, JsonlFileSink, LoopGuard, Tracer\nfrom agentguard.integrations.crewai import AgentGuardCrewHandler\n\ntracer = Tracer(\n    sink=JsonlFileSink(\"traces.jsonl\"),\n    service=\"crewai-crew\",\n)\nloop_guard = LoopGuard(max_repeats=3, window=6)\nbudget_guard = BudgetGuard(max_cost_usd=5.00, max_calls=20)\nhandler = AgentGuardCrewHandler(\n    tracer=tracer,\n    loop_guard=loop_guard,\n    budget_guard=budget_guard,\n)\n\nagent = Agent(\n    role=\"researcher\",\n    goal=\"Answer one short question clearly.\",\n    backstory=\"You are concise and careful.\",\n    llm=\"gpt-4o-mini\",\n    step_callback=handler.step_callback,\n    verbose=True,\n)\ntask = Task(\n    description=\"Explain what AgentGuard does in one short paragraph.\",\n    expected_output=\"One short paragraph describing AgentGuard.\",\n    agent=agent,\n    callback=handler.task_callback,\n)\n\ncrew = Crew(agents=[agent], tasks=[task], verbose=True)\nresult = crew.kickoff()\nprint(result)\nprint(\"Traces saved to traces.jsonl\")",
      "summary": "CrewAI starter that wires AgentGuard into both step and task callbacks.",
      "nextCommands": [
        "agentguard doctor",
        "python agentguard_crewai_quickstart.py",
        "agentguard incident traces.jsonl"
      ],
      "requiresEnv": [
        "OPENAI_API_KEY"
      ],
      "notes": [
        "CrewAI still uses its normal model credentials. AgentGuard adds tracing and guard enforcement.",
        "If you want the shortest first proof, start with the raw or OpenAI starter, then come back here."
      ]
    },
    {
      "framework": "anthropic",
      "label": "Anthropic",
      "language": "python",
      "install": "pip install agentguard47 anthropic",
      "code": "import agentguard\nfrom anthropic import Anthropic\n\nagentguard.init(\n    service=\"anthropic-agent\",\n    budget_usd=5.00,\n    trace_file=\"traces.jsonl\",\n    local_only=True,\n)\n\nclient = Anthropic()\nresponse = client.messages.create(\n    model=\"claude-3-5-sonnet-20241022\",\n    max_tokens=128,\n    messages=[{\"role\": \"user\", \"content\": \"Give me a one-line summary of AgentGuard.\"}],\n)\n\nprint(response.content[0].text)\nprint(\"Traces saved to traces.jsonl\")",
      "summary": "Minimal Anthropic path with local trace output and automatic message tracing.",
      "nextCommands": [
        "agentguard doctor",
        "python agentguard_anthropic_quickstart.py",
        "agentguard incident traces.jsonl"
      ],
      "requiresEnv": [
        "ANTHROPIC_API_KEY"
      ],
      "notes": [
        "local_only=True affects only the trace sink. Anthropic requests still use ANTHROPIC_API_KEY.",
        "Anthropic auto-patching is enabled by default through agentguard.init()."
      ]
    }
  ],
  "dashboard_benefits": [
    "Hosted alerts",
    "Retained incidents",
    "Remote kill",
    "Shared visibility"
  ],
  "dashboard_connection": {
    "install": "pip install agentguard47 openai",
    "code": "from agentguard import BudgetGuard, HttpSink, Tracer, patch_openai\nfrom openai import OpenAI\n\nguard = BudgetGuard(max_cost_usd=50.00, warn_at_pct=0.8)\nhttp_sink = HttpSink(\n    url=\"https://app.agentguard47.com/api/ingest\",\n    api_key=\"ag_YOUR_KEY_HERE\",\n    batch_size=5,\n    flush_interval=0.5,\n)\ntracer = Tracer(\n    sink=http_sink,\n    service=\"openai-agent\",\n)\npatch_openai(tracer, budget_guard=guard)\n\nclient = OpenAI()\nresponse = client.chat.completions.create(\n    model=\"gpt-4o-mini\",\n    messages=[{\"role\": \"user\", \"content\": \"Summarize the latest support ticket.\"}],\n)\n\nprint(response.choices[0].message.content)",
    "notes": [
      "Keep the local SDK proof first. Add HttpSink when you want retained history in the hosted dashboard.",
      "The budget guard is passed to patch_openai so provider usage feeds the guard directly.",
      "The dashboard is the paid control plane for alerts, remote kill, retention, and team workflows."
    ]
  },
  "links": [
    {
      "label": "SDK GitHub",
      "url": "https://github.com/bmdhodl/agent47"
    },
    {
      "label": "PyPI",
      "url": "https://pypi.org/project/agentguard47/"
    },
    {
      "label": "Dashboard sign up",
      "url": "https://app.agentguard47.com/sign-up"
    },
    {
      "label": "Machine-readable quickstart",
      "url": "/quickstart.json"
    }
  ]
}