yertle-sre lets you ask plain-English questions about your software systems — “are any alarms firing for the payments service?”, “what changed in this repo last week?”, “which AWS resources back this Yertle node?” — and get answers grounded in your Yertle architecture graph plus live state from the yertle, aws, and gh CLIs.
Repo: model-context/yertle-sre
Status: alpha (v0.1). The CLI surface and configuration are still evolving. Expect changes.
## How it works
yertle-sre is a small LangGraph agent powered by Anthropic’s Claude. It runs in a ReAct loop: Claude decides which tool to call, the tool returns results, Claude decides what to do next, and the loop ends when Claude has an answer.
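The loop above can be sketched in plain Python. This is an illustrative stand-in, not yertle-sre's actual implementation: `fake_model` replaces the Claude call, and the message shapes and tool registry are assumptions made for the sketch.

```python
# Minimal ReAct-loop sketch. `fake_model`, `TOOLS`, and the message
# format are illustrative; a real agent would call Claude here.

TOOLS = {
    "yertle_run": lambda args: f"(yertle output for {args!r})",
}

def fake_model(messages):
    # Stub: ask for one tool call, then answer once a tool result arrives.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "yertle_run", "args": "nodes"}
    return {"answer": "2 nodes found"}

def react_loop(question, model, tools, max_steps=5):
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        decision = model(messages)
        if "answer" in decision:
            return decision["answer"]          # model is done
        result = tools[decision["tool"]](decision["args"])  # run the tool
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not converge")

print(react_loop("how many nodes?", fake_model, TOOLS))  # → 2 nodes found
```

The `max_steps` cap mirrors the usual safeguard in agent loops: if the model never produces a final answer, the loop fails loudly instead of running forever.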
Three read-only tool runners are exposed to the model:
- `yertle_run` — wraps the `yertle` CLI (`orgs`, `nodes`, `tree`, `canvas`, `about`, `config`) with `--format json` auto-appended.
- `aws_run` — wraps the `aws` CLI, restricted to read verbs (`describe-*`, `list-*`, `get-*`, `show-*`, `search-*`, `head-*`, `lookup-*`).
- `gh_run` — wraps the `gh` CLI for `list`/`view`/`status` reads and GET-only `gh api` calls.
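The read-verb restriction on `aws_run` amounts to a prefix allowlist on the service subcommand. A sketch of that kind of check, assuming only the prefix list stated above (the function name and argument handling are illustrative, not yertle-sre's code):

```python
# Prefix allowlist check in the spirit of `aws_run`'s read-only gate.
# `is_read_only` is an illustrative name, not part of yertle-sre.

READ_PREFIXES = ("describe-", "list-", "get-", "show-",
                 "search-", "head-", "lookup-")

def is_read_only(aws_args):
    """True if an aws invocation like ['ec2', 'describe-instances']
    uses an allowed read verb as its action."""
    positional = [a for a in aws_args if not a.startswith("-")]
    if len(positional) < 2:       # need at least a service and an action
        return False
    return positional[1].startswith(READ_PREFIXES)

print(is_read_only(["ec2", "describe-instances"]))   # True
print(is_read_only(["ec2", "terminate-instances"]))  # False
```

Filtering out `-`/`--` flags first means options like `--profile` don't shift which token is treated as the action.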
## What you can do with it
- `from yertle_sre import ask, build_agent` — for embedding in larger workflows.
## Setup
Requires Python 3.11+ and the `yertle`, `aws`, and `gh` CLIs already authenticated on your machine. yertle-sre uses your credentials and inherits your shell environment — there's no separate credential store.
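Since yertle-sre shells out to these CLIs rather than bundling them, a quick preflight check can catch a missing binary before the agent runs. A minimal sketch (the function name is illustrative; this checks presence on `PATH` only, not authentication state):

```python
# Illustrative preflight: report which of the required CLIs are
# missing from PATH. Does not verify that they are authenticated.
import shutil

def missing_clis(required=("yertle", "aws", "gh")):
    """Return the subset of `required` CLIs not found on PATH."""
    return [cli for cli in required if shutil.which(cli) is None]

# e.g. an empty list means all three CLIs were found
```

Authentication itself is still up to you — for example `aws sts get-caller-identity` and `gh auth status` are the standard read-only ways to confirm the `aws` and `gh` CLIs are signed in.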