CLI Reference
The Alveare command-line interface. Run specialists from your terminal, pipe files, and script workflows.
Installation
npm install -g @alveare-ai/cli
Verify the installation:
alveare --version
# alveare-cli/1.4.2
Configuration
Set your API key before making requests. The key is stored in ~/.alveare/config.json.
alveare config set api-key alv_live_abc123...
You can also set the key via environment variable:
export ALVEARE_API_KEY="alv_live_abc123..."
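For scripts, it can help to fail fast when no key is configured rather than hitting an auth error mid-run. A minimal sketch built around the ALVEARE_API_KEY variable shown above (the guard itself is a convention, not part of the CLI):

```shell
# Sketch: abort early with a clear message when no API key is available,
# instead of failing later with an authentication error.
require_key() {
  if [ -z "${ALVEARE_API_KEY:-}" ]; then
    echo "error: set ALVEARE_API_KEY or run 'alveare config set api-key ...'" >&2
    return 1
  fi
}

# Usage: call at the top of any script that invokes the CLI.
# require_key || exit 1
```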
alveare infer
Run a specialist on text input. The primary command for most workflows.
alveare infer -s summarise "The quarterly report shows revenue grew 23%..."
# Output:
# Revenue grew 23% YoY, driven by strong enterprise adoption...
Flags
| Flag | Alias | Description |
|---|---|---|
| --specialist | -s | Specialist to use: classify, summarise, extract, qa, chat, code |
| --max-tokens | -m | Maximum tokens to generate (default: 512) |
| --temperature | -t | Sampling temperature 0.0-2.0 (default: 0.7) |
| --json | | Output the raw JSON response instead of just the result text |
| --file | -f | Read input from a file instead of an argument |
Examples
# Classify with low temperature for consistency
alveare infer -s classify -t 0.2 "I want a refund for my order"
# Extract structured data
alveare infer -s extract "Invoice #1234, Amount: $5,000, Due: March 30"
# Read from file
alveare infer -s summarise -f report.txt -m 256
# Generate code
alveare infer -s code "Write a Python function that checks if a string is a palindrome"
# JSON output for scripting
alveare infer -s classify --json "Need to update my address"
# {"id":"inf-abc123","specialist":"classify","result":"account_update","tokens_used":24,"latency_ms":98}
alveare chat
Start an interactive multi-turn conversation. Press Ctrl+C or type /quit to exit.
alveare chat
# Alveare Chat (alveare-chat) — type /quit to exit
#
# You: What is a cognitive hive?
#
# Alveare: A cognitive hive is an architecture where multiple
# specialized AI agents share a single underlying model. Each
# specialist has its own system prompt and parameters, but they
# all run on the same GPU, reducing memory and cost.
#
# You: How does that save money?
#
# Alveare: Instead of loading 6 separate models for 6 tasks,
# a hive loads one model and uses different system prompts to
# create specialists. That's 80-90% less GPU memory.
Chat accepts the same --temperature and --max-tokens flags as infer:
alveare chat -t 0.9 -m 1024
alveare models
List available specialists.
alveare models
# ID OWNED BY
# alveare-classify alveare
# alveare-summarise alveare
# alveare-extract alveare
# alveare-qa alveare
# alveare-chat alveare
# alveare-code alveare
alveare usage
Show usage statistics for the current billing period.
alveare usage
# Period: 2026-03-01 to 2026-03-31
# Requests: 42,370 / 100,000 (42.4%)
# Tokens: 8,420,156
#
# By specialist:
# classify 18,200
# summarise 15,830
# extract 8,340
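These numbers can feed a simple quota alert in a cron job. A hedged sketch: it assumes the JSON form of this command (shown under "JSON output" below) exposes `requests_used` and a `requests_limit` field — `requests_used` appears in the sample output, but `requests_limit` is inferred from the "42,370 / 100,000" line and may be named differently, so check the actual JSON before relying on it.

```shell
# Sketch: warn when request usage crosses a threshold.
# Field names are assumptions: requests_used is in the sample JSON;
# requests_limit is a guess based on the human-readable output.
usage_pct() {
  jq -r 'if (.requests_limit // 0) > 0
         then (.requests_used * 100 / .requests_limit | floor)
         else 0 end'
}

check_quota() {
  local pct
  pct=$("$@" | usage_pct)   # e.g. check_quota alveare usage --json
  if [ "$pct" -ge 80 ]; then
    echo "warning: ${pct}% of request quota used" >&2
  else
    echo "ok: ${pct}% used"
  fi
}
```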
alveare health
Check the API health status.
alveare health
# Status: ok
# Version: 1.4.2
# Uptime: 30d 0h 0m
alveare config
Manage CLI configuration stored in ~/.alveare/config.json.
# Show all configuration
alveare config show
# api-key: alv_live_***...abc
# base-url: https://api.alveare.ai
# Set a config value
alveare config set api-key alv_live_newkey123...
alveare config set base-url https://custom.endpoint.com
# Get a specific config value
alveare config get api-key
# alv_live_***...abc
Piping and stdin
The CLI reads from stdin when no text argument or --file flag is given. This makes it composable with Unix pipes.
# Pipe a file
cat report.txt | alveare infer -s summarise
# Pipe command output
git log --oneline -20 | alveare infer -s summarise "Summarise these commits"
# Extract from curl output
curl -s https://example.com/api/data | alveare infer -s extract
# Chain specialists: extract then classify
cat email.txt | alveare infer -s extract --json | jq -r .result | alveare infer -s classify
# Process multiple files in a loop
for f in docs/*.txt; do
echo "--- $f ---"
cat "$f" | alveare infer -s summarise -m 128
echo
done
JSON output
Add --json to any command to get machine-readable output. Useful for scripting and piping into jq.
alveare infer -s classify --json "I love this product"
# {"id":"inf-abc123","specialist":"classify","result":"positive","tokens_used":24,"latency_ms":98}
alveare models --json
# {"object":"list","data":[{"id":"alveare-classify",...},...]}
alveare usage --json
# {"period_start":"2026-03-01T00:00:00Z","requests_used":42370,...}
# Extract just the result with jq
alveare infer -s extract --json "Jane Smith, jane@acme.com" | jq -r .result
The CLI exits with code 0 on success, 1 on client errors (bad input, auth), and 2 on server errors. This makes it safe to use in shell scripts with set -e.
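The three exit codes map naturally onto different recovery strategies, for example retrying only on server errors. A sketch: the code-to-meaning mapping below restates the documented behavior, while the handling choices (retry vs. don't) are just one reasonable convention:

```shell
# Sketch: dispatch on the documented exit codes
# (0 = success, 1 = client error, 2 = server error).
handle_exit() {
  case "$1" in
    0) echo "success" ;;
    1) echo "client error: fix the input or API key, do not retry" ;;
    2) echo "server error: safe to retry with backoff" ;;
    *) echo "unexpected exit code: $1" ;;
  esac
}

# Usage (illustrative): capture the status without set -e aborting,
# then dispatch on it.
# out=$(alveare infer -s classify --json "some input") || true
# handle_exit $?
```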