Long-lived crush agents running as Kubernetes Deployments. One operator. Many drones. One shared link - the vinculum.
“The processing device at the core of every Borg vessel. It interconnects the minds of all the drones. It purges individual thoughts and disseminates information relevant to the Collective.”
“It brings order to chaos.”
– Seven of Nine and Kathryn Janeway
Spinning up a fresh pod for every prompt is wasteful. Vinculum runs each Agent as a long-lived pod with an open crush session and a persistent workspace - so context survives, PVCs survive, and cold-start disappears.
A pod per Agent - no per-prompt cold-start. Crush session + /workspace PVC survive restarts.
Declarative Agent, Task, AgentSchedule, MCPServer CRDs. One operator reconciles them into Deployments, PVCs, RBAC, Services.
Azure OpenAI, Anthropic, OpenAI, or bring-your-own - a provider is just a labeled Secret.
Port-forwards through your active kubecontext - no exposed operator endpoint, no long-lived local state.
Four CRDs. Everything else is derived.
Long-running agent drone. Model, provider secret ref, instructions, workspace size, attached MCP servers. Operator creates Deployment, Service, PVC, RBAC.
A unit of work. Prompt, fresh-session flag, workspace mode (shared or ephemeral), timeout, artifacts, env. Serial execution per Agent.
Cron trigger that stamps Tasks from a template. Concurrency policy: Allow, Forbid, Replace.
Reusable Model Context Protocol server - stdio or http. Attach by name to any Agent. secretRef wired as envFrom.
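The CRDs above can also be applied declaratively as a multi-doc manifest. A sketch of what an Agent plus a Task might look like - note that the `vinculum.dev/v1alpha1` API group and the exact spec field names here are assumptions for illustration, not confirmed by this page:

```yaml
# Hypothetical manifest - API group/version and field names are assumed
apiVersion: vinculum.dev/v1alpha1
kind: Agent
metadata:
  name: locutus
spec:
  model: claude-sonnet-4          # a model served by the referenced provider
  providerSecretRef: anthropic    # labeled provider Secret
  instructions: "You are a careful Kubernetes assistant."
  workspaceSize: 5Gi              # backs the /workspace PVC
  mcpServers:
    - filesystem                  # attach MCPServer resources by name
---
apiVersion: vinculum.dev/v1alpha1
kind: Task
metadata:
  name: haiku
spec:
  agentRef: locutus
  prompt: "Compose a haiku about the Borg collective."
  workspaceMode: shared           # or: ephemeral
  timeout: 300
```

Applied with `vnclm create -f manifest.yaml`, the operator would reconcile the Agent into its Deployment, Service, PVC, and RBAC, then queue the Task for serial execution.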
Any Kubernetes cluster. Any active context. Helm chart + Homebrew CLI.
Helm OCI install - no repo add, no values overrides required for a default run.
helm install vinculum oci://ghcr.io/florianwenzel/helm/vinculum \
  --version 0.1.0 \
  -n vinculum-system --create-namespace
Homebrew tap for macOS + Linux. Prebuilt binaries available for all platforms on every release.
brew install FlorianWenzel/vinculum/vnclm
Drop API keys into a labeled Secret. Wizard handles the labeling + encoding.
vnclm create provider # interactive wizard
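Under the hood the wizard just writes a Kubernetes Secret. A sketch of a roughly equivalent manifest - the label key, data key, and namespace are assumptions, since the page only says the Secret is "labeled":

```yaml
# Hypothetical equivalent of the wizard output - label and key names are assumed
apiVersion: v1
kind: Secret
metadata:
  name: anthropic
  namespace: vinculum-system
  labels:
    vinculum.dev/provider: anthropic   # assumed label the operator selects on
stringData:
  ANTHROPIC_API_KEY: sk-ant-...        # your real key here
```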
Pick a model, point at a provider, optionally attach MCP servers. Operator provisions the pod.
vnclm create agent # wizard
Streams live. Blocks until terminal phase. Run again later - crush picks up the session, history intact.
vnclm ctx set-agent locutus
vnclm run "Compose a haiku about the Borg collective."
kubectl-shaped verbs. Interactive wizards. Per-invocation port-forward. No exposed endpoints.
vnclm ctx show | set-agent <name> | clear-agent
vnclm get agents|tasks|schedules|providers|mcps [name] [-o table|wide|json|yaml]
vnclm delete <kind> <name> [--yes]
vnclm create provider|agent|task|schedule|mcp   # wizard
vnclm create -f manifest.yaml                   # apply (multi-doc)
vnclm logs <task> [-f]
vnclm run "<prompt>" [--agent] [--fresh] [--timeout N]
Give agents extra tools by declaring MCPServer resources - stdio processes or HTTP endpoints - and attaching them by reference. One MCPServer, many agents.
# stdio MCP - filesystem over the agent's /workspace PVC
vnclm create mcp --name filesystem --command npx \
  --arg -y --arg @modelcontextprotocol/server-filesystem --arg /workspace \
  --enabled

# http MCP with a secret injected as envFrom
vnclm create mcp --name github --url https://api.githubcopilot.com/mcp/ \
  --secret-ref github-mcp --enabled

vnclm create agent   # wizard → multiselect attaches MCPs
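The same two servers could be declared as MCPServer resources instead of via the wizard. A sketch - again, the API group/version and spec field names are assumed for illustration:

```yaml
# Hypothetical MCPServer manifests mirroring the CLI commands above -
# API group/version and field names are assumed
apiVersion: vinculum.dev/v1alpha1
kind: MCPServer
metadata:
  name: filesystem
spec:
  type: stdio
  command: npx
  args: ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]
  enabled: true
---
apiVersion: vinculum.dev/v1alpha1
kind: MCPServer
metadata:
  name: github
spec:
  type: http
  url: https://api.githubcopilot.com/mcp/
  secretRef: github-mcp   # injected into the agent pod as envFrom
  enabled: true
```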