Persistent graph indexes + small decision model = fantastic results on CPU
Stop rebuilding context every session. AleutianFOSS gives AI coding agents persistent understanding of your codebase using lightweight graph infrastructure and a small CPU-runnable model for decision-making. Six synchronized indexes. Zero GPUs required.
Current AI coding agents rebuild context from scratch every session. This wastes compute, hits token limits, and forces you to use large expensive models for simple structural queries.
| | Cost | Speed | Context |
|---|---|---|---|
| ❌ Typical AI Agent | $$$ | Slow | Incomplete |
| ✓ AleutianFOSS | ¢ | Instant | Complete |
The Insight: You don't need a large model to navigate code structure. You need persistent graphs and a small decision-making model. AleutianFOSS provides the graphs. You bring a CPU-runnable model (7B, 4B, even 1B works) to make decisions.
Six persistent graph indexes handle structure. Small CPU-runnable model handles decisions. No GPUs. No expensive inference. Fantastic results.
Six synchronized indexes stored in BadgerDB. Content-addressable caching (SHA256 hash). Build once per commit, query forever. Survives restarts, sessions, reboots.
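Content-addressable caching means the cache key is derived from the bytes of the source itself, so the same commit always maps to the same cached graph. A minimal sketch of the idea (the `cacheKey` helper is hypothetical, not the AleutianFOSS API):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// cacheKey derives a content-addressed key from source bytes.
// Identical content always yields the same key, so a graph built
// once for a commit can be reused across sessions and restarts
// without comparing timestamps or file paths.
func cacheKey(source []byte) string {
	sum := sha256.Sum256(source)
	return hex.EncodeToString(sum[:])
}

func main() {
	a := cacheKey([]byte("package main\nfunc main() {}\n"))
	b := cacheKey([]byte("package main\nfunc main() {}\n"))
	fmt.Println(a == b) // same content, same key: cache hit
}
```

Because keys are pure functions of content, invalidation is automatic: any edit changes the hash, and untouched files keep hitting the cache.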
Use 7B, 4B, or 1B parameter models on CPU. Graphs provide the structure; the model provides the reasoning. No GPU inference costs. Fantastic results from small models + good data.
Graph cache hit: 5s → 1ms. HLD session startup: 6s → 163ms. Parallel BFS memory: 11.2MB → 493KB. Production-optimized for real codebases.
Dominators (bottlenecks), PageRank (criticality), Leiden communities (modules), Heavy-Light Decomposition (O(log²V) path queries). All pre-computed and cached.
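To make "criticality via PageRank" concrete, here is a minimal power-iteration sketch over a call graph given as adjacency lists. This is an illustrative toy, not the AleutianFOSS implementation; the graph and function names are made up:

```go
package main

import "fmt"

// pageRank runs fixed power iterations over caller -> callees edges.
// Functions reached from many (or highly ranked) callers accumulate
// score, surfacing "hot" critical functions. Dangling nodes simply
// leak mass, which is fine for a sketch.
func pageRank(adj map[string][]string, iters int, d float64) map[string]float64 {
	n := float64(len(adj))
	rank := make(map[string]float64, len(adj))
	for node := range adj {
		rank[node] = 1.0 / n
	}
	for i := 0; i < iters; i++ {
		next := make(map[string]float64, len(adj))
		for node := range adj {
			next[node] = (1 - d) / n // teleport term
		}
		for node, callees := range adj {
			if len(callees) == 0 {
				continue
			}
			share := d * rank[node] / float64(len(callees))
			for _, c := range callees {
				next[c] += share
			}
		}
		rank = next
	}
	return rank
}

func main() {
	// main calls parse and render; parse also calls render.
	adj := map[string][]string{
		"main":   {"parse", "render"},
		"parse":  {"render"},
		"render": {},
	}
	r := pageRank(adj, 20, 0.85)
	fmt.Printf("render=%.3f parse=%.3f main=%.3f\n", r["render"], r["parse"], r["main"])
}
```

Because every caller path funnels into `render`, it ends up with the highest score. Pre-computing and caching these scores is what lets an agent ask "what is critical here?" without re-running the analysis per query.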
find_dominators, find_hot_spots, get_callers, find_lca, path_aggregate, and 22 more. Ready-to-use for your agent. OpenTelemetry instrumented.
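The tool surface follows a standard dispatch pattern: the small model emits a tool name plus arguments, and a dispatcher routes the call to a handler. A hypothetical sketch of that pattern (the real dispatcher lives in `services/trace/cli/tools`; this is not its actual API):

```go
package main

import (
	"errors"
	"fmt"
)

// Handler is one named tool the model can invoke.
type Handler func(args map[string]interface{}) (interface{}, error)

// Dispatcher routes tool-call requests by name.
type Dispatcher struct {
	handlers map[string]Handler
}

func NewDispatcher() *Dispatcher {
	return &Dispatcher{handlers: map[string]Handler{}}
}

func (d *Dispatcher) Register(name string, h Handler) {
	d.handlers[name] = h
}

func (d *Dispatcher) Execute(name string, args map[string]interface{}) (interface{}, error) {
	h, ok := d.handlers[name]
	if !ok {
		return nil, errors.New("unknown tool: " + name)
	}
	return h(args)
}

func main() {
	d := NewDispatcher()
	d.Register("get_callers", func(args map[string]interface{}) (interface{}, error) {
		// A real handler would consult the persistent call graph;
		// this stub returns fixed, illustrative caller names.
		return []string{"main", "handleRequest"}, nil
	})
	out, _ := d.Execute("get_callers", map[string]interface{}{"symbol": "MyFunction"})
	fmt.Println(out)
}
```

The point of the pattern: the model never touches graph internals, it only picks a tool name and arguments, keeping its job small enough for a CPU-runnable model.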
Open source with copyleft protection. AI systems must attribute usage. Prevents corporate LLMs from laundering your infrastructure work.
20+ algorithmic optimizations. Production-validated on 100K+ node graphs. All metrics from real codebases, not synthetic benchmarks.
| Operation | Without AleutianFOSS | With AleutianFOSS | Improvement |
|---|---|---|---|
| Graph cache hit | 5000ms (rebuild from AST) | 1ms (content-addressed) | 5000x faster |
| Find symbol definition | O(V) linear scan | O(1) hash lookup | 100x faster |
| Parallel BFS memory | 11.2 MB (concurrent maps) | 493 KB (optimized) | 22x reduction |
| HLD session startup | 6024ms (rebuild tree) | 163ms (BadgerDB load) | 37x faster |
| Community detection | Every query recomputes | Cache-first lookup | 100x faster |
| LCA path queries | O(V) tree traversal | O(log² V) HLD | 1000x on deep trees |
| Critical path analysis | Multiple BFS passes | Single traversal | 3.5x faster |
| Dominator computation | Full graph every time | Subgraph + memoization | 10-250x faster |
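To build intuition for why logarithmic LCA queries beat O(V) tree walks, here is a binary-lifting sketch: a simpler relative of Heavy-Light Decomposition with O(V log V) preprocessing and O(log V) per query. HLD additionally supports path aggregates, which this sketch deliberately omits; it is illustrative only, not the AleutianFOSS code.

```go
package main

import "fmt"

const logDepth = 17 // supports trees up to 2^17 nodes

// LCA answers lowest-common-ancestor queries via binary lifting.
// up[v][k] is the 2^k-th ancestor of v.
type LCA struct {
	up    [][logDepth]int
	depth []int
}

// NewLCA expects parent[v] for each node, numbered so that
// parent[v] < v for every non-root and parent[0] == 0 (node 0 is root).
func NewLCA(parent []int) *LCA {
	n := len(parent)
	l := &LCA{up: make([][logDepth]int, n), depth: make([]int, n)}
	for v := 0; v < n; v++ {
		l.up[v][0] = parent[v]
		if v > 0 {
			l.depth[v] = l.depth[parent[v]] + 1
		}
		for k := 1; k < logDepth; k++ {
			l.up[v][k] = l.up[l.up[v][k-1]][k-1]
		}
	}
	return l
}

func (l *LCA) Query(u, v int) int {
	if l.depth[u] < l.depth[v] {
		u, v = v, u
	}
	// Lift the deeper node up to the same depth.
	diff := l.depth[u] - l.depth[v]
	for k := 0; k < logDepth; k++ {
		if diff&(1<<k) != 0 {
			u = l.up[u][k]
		}
	}
	if u == v {
		return u
	}
	// Lift both nodes until just below their common ancestor.
	for k := logDepth - 1; k >= 0; k-- {
		if l.up[u][k] != l.up[v][k] {
			u = l.up[u][k]
			v = l.up[v][k]
		}
	}
	return l.up[u][0]
}

func main() {
	// Toy call tree: 0 -> {1, 2}, 1 -> {3, 4}
	l := NewLCA([]int{0, 0, 0, 1, 1})
	fmt.Println(l.Query(3, 4)) // 1
	fmt.Println(l.Query(3, 2)) // 0
}
```

The same depth-halving idea underlies HLD's O(log² V) path queries: each query touches only a logarithmic number of ancestors instead of walking the whole chain, which is where the "1000x on deep trees" class of speedups comes from.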
Why This Matters: A small 4B model on CPU can outperform a 70B model on GPU for code navigation tasks—when it has the right structural data. AleutianFOSS provides that data. You save on inference costs and hardware, and you get faster results.
Get AleutianFOSS running on your codebase in under 5 minutes. Go is supported today; Python, JavaScript, and Rust support are in development.
```bash
# Clone the repository
git clone https://github.com/AleutianAI/AleutianFOSS.git
cd AleutianFOSS

# Build the trace agent (Go 1.21+)
cd services/trace/cli
go build -o trace .

# Initialize graphs on your codebase
./trace init /path/to/your/go/project

# Run queries
./trace query find_hot_spots --limit 10
./trace query find_dominators --threshold 5
./trace query get_callers --symbol "MyFunction"
```
Your small CPU-runnable model queries the graphs; AleutianFOSS returns structured data.
```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/AleutianAI/AleutianFOSS/services/trace/cli/tools"
	"github.com/AleutianAI/AleutianFOSS/services/trace/graph"
)

func main() {
	ctx := context.Background()

	// Initialize graphs (persisted across sessions)
	g := graph.NewCallGraph()
	analytics := graph.NewAnalytics(g)

	// Register tools
	dispatcher := tools.NewDispatcher()
	tools.RegisterExploreTools(dispatcher, analytics, g, nil)

	// Your small model decides what to query;
	// AleutianFOSS provides structured results.
	result, err := dispatcher.Execute(ctx, "find_hot_spots", map[string]interface{}{
		"limit": 10,
	})
	if err != nil {
		log.Fatal(err)
	}

	// result: top 10 critical functions by PageRank.
	// No LLM needed for structural queries.
	fmt.Println(result)
}
```
Pro Tip: Use a 7B quantized model (GGUF format, CPU-runnable) as your decision maker. It queries AleutianFOSS graphs for structure and focuses its reasoning on high-level decisions. Costs pennies, runs on a laptop, beats 70B models on code tasks.
Three-phase architecture: Code Understanding (graphs) → Planning (small model decisions) → Code Generation. Phase 1 is production-ready. Phase 2 is in active development.

- Phase 1 — Code Understanding: 6 indexes, 27 tools, production-ready
- Phase 2 — Planning: MCTS, beam search, active development
- Phase 3 — Code Generation: builds on graphs + planning
40+ technical articles documenting architecture, optimizations, and algorithm implementations.
Join researchers and infrastructure teams building AI agents with persistent memory and CPU-runnable decision models. No GPUs required.