I built Shex because I kept forgetting command syntax. Need to find files by content? Extract an archive? Check disk usage? I know what I want, but googling the exact flags gets old.
What it does:
shex find all python files larger than 1MB
shex show disk usage sorted by size
shex compress the logs folder to tar.gz
shex what processes are using port 3000
It sends your natural language request to an LLM, which generates and executes the appropriate shell command. The output streams in real time.
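The core loop is simple enough to sketch. This is not Shex's actual code — `fake_llm` is a stand-in for the real model call — but it shows the generate-then-stream shape:

```python
import subprocess

def run_streaming(command: str) -> int:
    """Run a shell command, echoing output line by line as it arrives."""
    proc = subprocess.Popen(
        command, shell=True,
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True,
    )
    for line in proc.stdout:  # iterating the pipe streams output incrementally
        print(line, end="")
    return proc.wait()

# Stand-in for the LLM call; the real tool asks the configured provider.
def fake_llm(request: str) -> str:
    return "echo hello"

exit_code = run_streaming(fake_llm("say hello"))
```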
Key design decisions:
Auto-retry on failure - If a command fails, Shex automatically tries alternative approaches (up to 3 retries by default). The LLM sees the error and adapts.
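The retry loop amounts to feeding stderr back into the next generation. A minimal sketch, with a toy generator standing in for the LLM (the real prompt and provider calls differ):

```python
import subprocess

def run(cmd: str):
    return subprocess.run(cmd, shell=True, capture_output=True, text=True)

def execute_with_retry(request: str, generate_command, max_retries: int = 3):
    """Ask the model for a command; on failure, show it the error and retry."""
    error = None
    for _ in range(max_retries + 1):
        cmd = generate_command(request, error)
        result = run(cmd)
        if result.returncode == 0:
            return cmd, result.stdout
        error = result.stderr  # the model sees this on the next attempt
    raise RuntimeError(f"gave up after {max_retries} retries: {error}")

# Toy generator: first attempt is a typo, "the model" corrects it on retry.
def toy_generator(request, error):
    return "ech hello" if error is None else "echo hello"

cmd, out = execute_with_retry("say hello", toy_generator)
```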
Safety first - Dangerous commands (rm -rf, format, etc.) require explicit confirmation. The LLM flags these automatically.
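In Shex the LLM itself flags dangerous commands, but a client-side pattern check is a cheap second line of defense. The patterns below are illustrative, not Shex's actual list:

```python
import re

# Hypothetical deny-list; real coverage would be broader.
DANGEROUS_PATTERNS = [
    r"\brm\s+-[a-zA-Z]*[rf]",   # rm -rf, rm -fr, rm -r ...
    r"\bmkfs(\.\w+)?\b",        # formatting a filesystem
    r"\bdd\b.*\bof=/dev/",      # writing to a raw device
]

def is_dangerous(command: str) -> bool:
    """Return True if the command matches a known-destructive pattern."""
    return any(re.search(p, command) for p in DANGEROUS_PATTERNS)

def maybe_run(command: str, confirm=input) -> bool:
    """Require explicit confirmation before anything flagged as dangerous."""
    if is_dangerous(command):
        return confirm(f"Really run '{command}'? [y/N] ").strip().lower() == "y"
    return True
```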
Multi-provider support - Works with DeepSeek, OpenAI, Claude, Gemini, Mistral, Groq, Qwen, and any OpenAI-compatible API. Bring your own key.
PTY support on Unix - Proper pseudo-terminal handling means progress bars and interactive prompts work correctly.
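The PTY point is easy to demonstrate: under an ordinary pipe a child process sees a non-terminal and disables progress bars and prompts; attached to a pseudo-terminal it sees a real TTY. A Unix-only sketch using Python's `pty` module (not Shex's actual implementation):

```python
import os
import pty
import subprocess
import sys

def run_in_pty(command: str) -> str:
    """Run a shell command attached to a pseudo-terminal; capture its output."""
    master, slave = pty.openpty()
    proc = subprocess.Popen(command, shell=True,
                            stdin=slave, stdout=slave, stderr=slave)
    os.close(slave)  # only the child holds the slave end now
    chunks = []
    while True:
        try:
            data = os.read(master, 1024)
        except OSError:  # Linux raises EIO once the child closes the PTY
            break
        if not data:
            break
        chunks.append(data)
    proc.wait()
    os.close(master)
    return b"".join(chunks).decode()

# The child believes it has a real terminal, so isatty() reports True.
check = f"{sys.executable} -c 'import sys; print(sys.stdout.isatty())'"
out = run_in_pty(check)
```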
Install:
pip install shex
First run walks you through LLM configuration.
Technical notes:
1. Uses OpenAI function calling / tool use pattern
2. Streams both LLM thinking and command output
3. Cross-platform (Windows, macOS, Linux)
4. ~400 lines of core code
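For note 1, the function-calling pattern means the model is handed a tool definition and returns a structured call instead of free text, which is easier to validate than parsing a command out of prose. A hypothetical tool definition in the standard OpenAI format (field names here are illustrative, not Shex's actual schema):

```python
# Illustrative tool definition in the OpenAI function-calling format.
run_command_tool = {
    "type": "function",
    "function": {
        "name": "run_command",
        "description": "Execute a shell command and return its output.",
        "parameters": {
            "type": "object",
            "properties": {
                "command": {
                    "type": "string",
                    "description": "The exact shell command to execute.",
                },
                "dangerous": {
                    "type": "boolean",
                    "description": "True if the command could destroy data; "
                                   "triggers a confirmation prompt.",
                },
            },
            "required": ["command", "dangerous"],
        },
    },
}
```

Having the model emit a `dangerous` flag alongside the command is what makes the confirmation step automatic rather than relying solely on local pattern matching.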
Happy to answer questions. Source: https://github.com/YUHAI0/shex