Terminal AI coding agent with:
- an interactive TUI
- an autonomous single-shot CLI preset
- two execution modes: persistent `repl` and provider-native `standard`
- patch-based editing
- skills, delegated child-session workers, planning, shell, and optional web search
- a Harbor Terminal Bench runner with a local results UI
Latest release:
```
curl -fsSL https://github.com/SamGalanakis/lash/releases/latest/download/install_lash.sh | bash
```

From this repo:

```
./install_lash.sh
```

First run opens provider setup if needed:

```
lash
```

Autonomous single-shot usage:
```
lash -p "summarize this repo"
lash --print "explain src/main.rs"
```

`--print` runs a single autonomous turn, prints the final response to stdout, and skips the interactive prompt bridge.
While it runs, live progress and tool activity stream to stderr.
Common flags:
```
lash --model gpt-5.4
lash --execution-mode standard
lash --provider
lash --no-mouse
lash --reset
```

Start with the docs hub: `docs/index.html`. It links to:
- `README.md`: install, CLI, execution modes, and integration overview
- `docs/architecture.html`: runtime, plugin, prompt, and tool architecture
- `docs/design.html`: TUI visual system and interaction design
- `repl` mode: persistent runtime across turns
- `standard` mode: provider-native tool calling
- `lashlang` REPL with `parallel { ... }` concurrency
- `apply_patch` editing flow
- shell execution and streamed output
- durable sessions with resume and retry
- live token/context accounting in the TUI status bar as usage streams in
- skills loaded from global and repo-local skill directories
- image/file/path references in prompts
- planning with `update_plan`
- benchmark runner and results browser
`repl` is the default.

- `repl`: persistent `lashlang` runtime, best when the agent benefits from state across turns
- `standard`: provider-native tool calling, including multiple native tool calls in one response for independent concurrent work
Choose explicitly:
```
lash --execution-mode repl
lash --execution-mode standard
```

Concurrency surface:

- `repl`: use `parallel { ... }`; in expression position it returns a source-ordered list of branch results, and bare expression branches contribute their values directly
- `standard`: emit multiple independent tool calls in the same response; the runtime executes them concurrently and returns all results before the next model step
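The `repl` side can be sketched with a hypothetical `parallel` block. This is illustrative only — the assignment and comment syntax are assumed rather than taken from the language reference; `exec_command` is the shell tool named later in this README:

```
// Illustrative sketch: two independent branches run concurrently.
// In expression position, `parallel` evaluates to a source-ordered
// list, so results[0] holds the first branch's value.
results = parallel {
  exec_command("cargo check")
  exec_command("cargo test --no-run")
}
```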
Selected lashlang semantics:
- `slice(value, start, end)` treats `null` bounds as omitted: `start=null` means from the beginning, `end=null` means through the end
- negative list/string indices count from the end; out-of-bounds indices return `null`
- `contains(record, key)` checks record keys in addition to string substring and list membership
- string comparisons like `"abc" < "def"` are lexicographic
- `to_string([1, 2])` and `to_string({ a: 1 })` preserve integer formatting instead of forcing `1.0`
Shell surface:
- `standard` and `repl`: `exec_command` and `write_stdin` are PTY-backed and return incremental terminal output
Inside the TUI:
`/help`, `/clear`, `/fork [prompt]`, `/version`, `/info`, `/model [name]`, `/variant [name]`, `/mode [repl|standard]`, `/provider`, `/login`, `/logout`, `/retry`, `/resume`, `/skills`, `/tools`, `/reconfigure`, `/exit`
Useful keys:
- `Esc`: cancel run / close prompt
- `Shift+Enter`: newline
- `Ctrl+V`: paste image
- `Ctrl+Shift+V`: paste text only
- `Ctrl+Y`: copy last response
- `Ctrl+O`: cycle expansion
- `Alt+O`: full expansion
Skills are markdown-based workflow bundles.
Locations:
- global: `~/.lash/skills/`
- repo-local: `.agents/lash/skills/`
Use them with:
- `/skills` in the TUI
- `$<skill-name>` in your prompt
When a skill is selected, lash injects a `<skill>` block with that skill's
instructions into the turn input before the model runs.
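As a hypothetical example (the file name, layout, and wording here are invented, not taken from the repo), a repo-local skill could look like:

```markdown
<!-- .agents/lash/skills/release-check.md — hypothetical skill file -->
# release-check

Before tagging a release:

1. Run the full test suite and make sure it passes.
2. Confirm CHANGELOG.md has an entry for the new version.
3. Verify the version bump in Cargo.toml.
```

Selecting it via `/skills`, or writing `$release-check` in a prompt, would then inject these instructions as the turn's `<skill>` block.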
Override prompt sections at launch:
```
lash --prompt-replace "intro=You are terse."
lash --prompt-append "guidance=Always run tests."
lash --prompt-disable "available_tools"
lash --prompt-replace-file "guidance=./prompt.md"
```

Run Harbor + Terminal Bench with the in-repo lash adapter:
```
scripts/run-terminalbench.sh --sample --execution-mode repl --model gpt-5.4 --variant high
scripts/run-terminalbench.sh --sample --preset smoke --execution-mode repl --model gpt-5.4 --variant high
scripts/run-terminalbench.sh --sample --preset fast-medium --execution-mode standard --model gpt-5.4 --variant high
scripts/run-terminalbench.sh --sample --execution-mode standard --tasks regex-log,sqlite-with-gcov --model gpt-5.4 --variant high
scripts/run-terminalbench.sh --full --execution-mode standard --task "git-*" --model gpt-5.4 --variant high
```

Run the same harness with OpenCode:
```
scripts/run-terminalbench.sh --agent opencode --sample --model openrouter/openai/gpt-5 --variant high
scripts/run-terminalbench.sh --agent opencode --sample --model anthropic/claude-sonnet-4-5 --variant high
```

Notes:
- `lash` remains the default agent.
- `--execution-mode` only applies to `lash`; OpenCode uses its native execution path.
- `--preset smoke` expands to `regex-log,log-summary-date-ranges`.
- `--preset fast-medium` expands to `regex-log,log-summary-date-ranges,fix-code-vulnerability,sqlite-with-gcov`.
- `--variant` is required for all benchmark runs so provider-native reasoning settings are explicit and reproducible.
- OpenCode benchmark runs require an explicit `--model provider/model`.
- OpenCode benchmark runs automatically copy local `opencode auth login` credentials from `~/.local/share/opencode/auth.json` into the Harbor container when present.
- OpenCode can still fall back to provider env vars such as `OPENROUTER_API_KEY`, `OPENAI_API_KEY`, or `ANTHROPIC_API_KEY`.
Each run exports a structured snapshot to `.benchmarks/terminalbench/` by default, including:
- global stats
- per-task rollups
- per-trial timing and token usage
- per-trial CPU time and peak memory where available
- copied logs and verifier output
- run parameters such as agent, provider, model, variant, execution mode, concurrency, and timeouts
Open the local results UI:
```
python3 scripts/bench_ui.py --results-dir .benchmarks/terminalbench --open
```

The UI supports:
- browsing runs
- multi-run selection
- side-by-side comparison
- trial log inspection
- deleting runs end-to-end, including the exported snapshot and original Harbor job directory
Change the export location with:
```
scripts/run-terminalbench.sh --results-dir /path/to/results --sample --execution-mode repl
```

Build:

```
cargo build -p lash-cli
cargo build -p lash-cli --release
```

`lash-core` is available as a library:
```toml
[dependencies]
lash-core = { path = "../lash", default-features = false, features = ["full"] }
```

Embedders provide model metadata explicitly and can choose their own catalog source and storage. The
first-party CLI is plugin-first: it builds the session from tool plugins plus stateful plugins such
as history, planning, and delegation. PluginHost can opt sessions into a
DynamicToolProvider, and the active PluginSession owns that live tool graph. The built-in
tool_surface plugin owns:
- which tools are injected into the prompt
- whether `search_tools()` is exposed because additional tools were omitted from the REPL prompt
- omitted-tool notes attached to the available tool list
For plugin authors, lash-core exposes grouped registrar namespaces on PluginRegistrar such as
reg.tools(), reg.prompt(), reg.surface(), reg.turn(), reg.tool_calls(), reg.output(),
reg.messages(), reg.tool_results(), reg.session(), and reg.external(). Small static
plugins can use StaticPluginFactory, context-sensitive declarative plugins can use
PluginSpecFactory, and bespoke SessionPlugin implementations can still register the full hook
set directly. Plugin factories build against a session context that includes both agent_id and
execution_mode, so a plugin can choose the correct per-mode tool instance up front. Turn
lifecycle orchestration is plugin-owned: the active PluginSession prepares turns, applies
checkpoint directives, and finalizes committed turns before history is persisted. The host-facing
SessionManager also exposes generic child-session orchestration primitives such as
create_session, start_turn_stream, await_turn, cancel_turn, and close_session, which is
how agent_call launches delegated workers without special core-level subagent state.
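That delegation flow can be sketched with stand-in types. This is not lash-core's real API — `SessionManager` here is a toy struct and the method bodies are placeholders — it only illustrates the create/await/close lifecycle that `agent_call` is described as building on:

```rust
// Toy stand-ins, NOT lash-core's actual types or signatures.
struct SessionManager;

struct SessionId(u32);

impl SessionManager {
    // Spawn a delegated child session for a worker prompt.
    fn create_session(&mut self, _prompt: &str) -> SessionId {
        SessionId(1)
    }

    // Wait for the child's turn to finish and collect its result.
    fn await_turn(&mut self, _id: &SessionId) -> String {
        "worker result".to_string()
    }

    // Tear the child session down.
    fn close_session(&mut self, _id: SessionId) {}
}

// `agent_call`-style delegation: create a worker, await its turn, close it.
fn agent_call(mgr: &mut SessionManager, prompt: &str) -> String {
    let id = mgr.create_session(prompt);
    let result = mgr.await_turn(&id);
    mgr.close_session(id);
    result
}

fn main() {
    let mut mgr = SessionManager;
    println!("{}", agent_call(&mut mgr, "summarize module"));
}
```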
Stored config lives at:
`~/.lash/config.json`

Relevant runtime settings include:

- `runtime.context_strategy`
- `runtime.low_tier_subagent_execution_mode`: `standard` by default for `agent_call` low-tier child sessions; set to `repl` only if you want low-tier delegates to execute via `lashlang`
Supported context strategies:
- `rolling_context`
- `recall_agent`
Config shape:
```json
{
  "active_provider": "openai-generic",
  "providers": {
    "openai-generic": {
      "type": "openai-generic",
      "api_key": "...",
      "base_url": "https://openrouter.ai/api/v1"
    }
  },
  "auxiliary_secrets": {
    "tavily_api_key": "..."
  },
  "agent_models": {
    "low": "...",
    "medium": "...",
    "high": "..."
  },
  "runtime": {
    "context_strategy": {
      "type": "rolling_context"
    },
    "low_tier_subagent_execution_mode": "standard"
  }
}
```

The config file is saved with mode 0600 on Unix.
`openai-generic` defaults to `https://openrouter.ai/api/v1`.

All rights reserved.