Developers switch context constantly. You're in your editor, you need to ask Claude something, so you open a browser tab, navigate to claude.ai, type your question, copy the answer, close the tab, and go back to your editor. Or you're in Slack, need to draft a message, so you open ChatGPT, draft it there, and copy it back. Each of these switches is a small interruption, and over a day they add up to significant lost focus. SlashGPT was our attempt to fix that — AI actions wherever you are, triggered with a slash command.
Table of Contents
- The Problem: AI in the Wrong Place
- What SlashGPT Does
- Tech Architecture Decisions
- The LLM Routing Layer
- What Was Unexpectedly Hard
- What We Shipped vs. What We Cut
- Lessons From Building Developer Tools
- The Open-Source Repo
- Ready to Build?
The Problem: AI in the Wrong Place
When AI tools arrived, they came as destinations — websites and apps you navigate to. Claude.ai. ChatGPT. Perplexity. You leave your workflow to use them, then come back. For casual questions, that friction is fine. For workflow-integrated tasks — drafting, summarizing, transforming text, generating code — it's a consistent interruption.
The tools that tried to solve this (browser extensions, sidebar integrations) did so by attaching to specific surfaces. Chrome extensions that only work in the browser. IDE plugins that only work in VS Code. Each integration is siloed.
We wanted something more composable: a command palette concept — like Alfred or Raycast — but built specifically for AI actions, accessible from any context, with the ability to define custom commands that trigger specific AI behaviors.
The slash command interface is the right paradigm for this because developers already use it (Slack, Notion, Linear all use /commands), it's discoverable, and it's composable — you can build a library of commands that cover your specific workflow needs.
What SlashGPT Does
SlashGPT is a system-level command palette with an AI layer. The core behavior:
Trigger: A global keyboard shortcut (configurable) opens the command palette from anywhere on your computer — editor, terminal, browser, Slack, doesn't matter.
Command library: You type / to see your available commands. Built-in defaults: /ask (direct question to LLM), /summarize (summarize clipboard content), /rewrite (rewrite selected text in a specified tone), /explain (explain code in clipboard), /draft (draft a message given a prompt), /commit (generate a git commit message from clipboard diff).
Custom commands: You define your own commands as simple config entries — a name, a prompt template, an optional input source (clipboard, selected text, typed input), and a preferred model. Store your personal command library in a YAML config file.
Output options: Results go to clipboard (default), to a floating text window, or directly pasted into the active text field (optional, requires accessibility permissions).
Model routing: Each command can specify a preferred model. Fast, cheap commands (summarize, rewrite) default to GPT-4o Mini or Claude Haiku. Complex commands (explain complex code, architectural reasoning) default to GPT-4o or Claude 3.5 Sonnet. Override at runtime with the --model flag.
Tech Architecture Decisions
SlashGPT is built in TypeScript, running as an Electron application. That was a deliberate and somewhat reluctant choice.
Why Electron: System-level keyboard shortcut registration, cross-platform (macOS, Windows, Linux), clipboard access, and the ability to paste into arbitrary applications all require native OS APIs. Electron gives you those via Node.js native modules while keeping the UI layer in TypeScript/React — the same stack we use everywhere else at V12 Labs.
Why TypeScript: Type safety for the command configuration schema matters when you're letting users define arbitrary YAML configs that map to LLM prompt templates. TypeScript catches the category of errors where a user's custom command config is missing a required field before it tries to make an API call at runtime.
Why not a browser extension: Extensions only work in the browser. The core value proposition of SlashGPT is everywhere access, not browser access. A browser extension would have been significantly simpler to build, but it would have solved a smaller problem.
The config format: Custom commands are defined in ~/.slashgpt/commands.yml. We went with YAML over JSON because developers write their command configs by hand, and YAML is more readable for that use case. The config is validated against a TypeScript schema at startup.
Example custom command:
```yaml
commands:
  - name: "standup"
    description: "Generate a standup update from recent git log"
    prompt: "Here is my recent git log: {clipboard}. Write a concise standup update (3-4 bullet points) covering what I did yesterday and what I'm working on today."
    model: "gpt-4o-mini"
    input: "clipboard"
    output: "window"
```
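The {clipboard} token in the prompt above is a template placeholder filled in at invocation time. A minimal sketch of that interpolation step (the function name and the behavior for unknown placeholders are our illustration, not necessarily the repo's exact code):

```typescript
// Fill {placeholder} tokens in a prompt template with runtime inputs.
// Unknown placeholders are left intact so a typo in a config is visible
// in the output instead of silently dropping text.
function renderPrompt(template: string, inputs: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (match, key) =>
    key in inputs ? inputs[key] : match
  );
}

const prompt = renderPrompt(
  "Here is my recent git log: {clipboard}. Write a concise standup update.",
  { clipboard: "abc123 fix login bug" }
);
```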
The LLM Routing Layer
One of the more interesting engineering decisions was the LLM routing layer. SlashGPT supports multiple providers (OpenAI, Anthropic) and multiple models within each provider. The routing logic needs to:
- Check which API keys are configured
- Apply per-command model preferences
- Handle the --model runtime override
- Fall back gracefully if the preferred model is unavailable
We built a simple provider abstraction:
```typescript
interface LLMProvider {
  complete(prompt: string, options: CompletionOptions): Promise<string>
  stream(prompt: string, options: CompletionOptions): AsyncGenerator<string>
}
```
OpenAI and Anthropic each get their own implementation. The router selects the right provider based on the model name prefix — gpt-* routes to OpenAI, claude-* routes to Anthropic. Adding a new provider means implementing the interface and registering the name prefix.
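A sketch of that prefix routing, including the graceful fallback mentioned above (the registry shape and the fallback policy are assumptions for illustration, not the repo's exact code):

```typescript
// Route a model name to a provider by its name prefix; fall back to any
// provider with a configured API key rather than failing the command.
type ProviderName = "openai" | "anthropic";

const prefixTable: Array<[string, ProviderName]> = [
  ["gpt-", "openai"],
  ["claude-", "anthropic"],
];

function routeModel(model: string, configured: Set<ProviderName>): ProviderName {
  const match = prefixTable.find(([prefix]) => model.startsWith(prefix));
  if (match && configured.has(match[1])) return match[1];
  // Graceful degradation: any provider with a key beats a hard error.
  const fallback = [...configured][0];
  if (!fallback) throw new Error("No LLM provider API keys configured");
  return fallback;
}
```

Adding a provider is then one prefixTable entry plus an LLMProvider implementation.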
Streaming is important for the floating window output — users see the response generate in real-time rather than waiting for the full response before anything appears. The Electron IPC layer passes stream chunks from the main process (where the API calls happen) to the renderer process (the UI).
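The main-process side of that pipeline boils down to draining the provider's async generator and forwarding each chunk as it arrives. A testable sketch with the IPC send abstracted into a callback (in the actual app this callback would wrap Electron's webContents.send):

```typescript
// Forward stream chunks to the UI as they arrive, and return the fully
// assembled response (for clipboard or history) once the stream completes.
async function pumpStream(
  stream: AsyncGenerator<string>,
  send: (chunk: string) => void
): Promise<string> {
  let full = "";
  for await (const chunk of stream) {
    send(chunk); // e.g. win.webContents.send("llm-chunk", chunk) in Electron
    full += chunk;
  }
  return full;
}
```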
What Was Unexpectedly Hard
Cross-platform clipboard handling. Clipboard APIs behave differently across macOS, Windows, and Linux in subtle ways — particularly around preserving rich text vs. plain text, handling images, and the timing of read/write operations. We spent more time on clipboard reliability than on any other single component.
Accessibility permissions on macOS. For SlashGPT to paste into arbitrary applications (the most convenient output mode), it needs macOS Accessibility permissions. The user experience around requesting those permissions is rough — Apple's permission dialog is confusing, and if a user grants permission and a system update later revokes it, the failure mode is silent. We ended up building an explicit permission checker that runs at startup and surfaces a clear error if accessibility access is missing.
YAML config validation errors. When a user writes an invalid custom command config, the error messages from raw YAML parsing are cryptic. We built a custom validation layer with human-readable error messages: "Command 'standup' is missing required field 'prompt'" instead of "TypeError: Cannot read property 'prompt' of undefined."
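In sketch form, the validation pass looks something like this (the field list and error wording are illustrative, not copied from the repo):

```typescript
// Validate a parsed command entry, collecting human-readable errors
// instead of letting a missing field surface later as a TypeError.
interface CommandConfig {
  name: string;
  prompt: string;
  model?: string;
  input?: "clipboard" | "selection" | "typed";
  output?: "clipboard" | "window" | "paste";
}

function validateCommand(raw: Record<string, unknown>): string[] {
  const errors: string[] = [];
  const name = typeof raw.name === "string" ? raw.name : "<unnamed>";
  for (const field of ["name", "prompt"] as const) {
    if (typeof raw[field] !== "string") {
      errors.push(`Command '${name}' is missing required field '${field}'`);
    }
  }
  const outputs = ["clipboard", "window", "paste"];
  if (raw.output !== undefined && !outputs.includes(raw.output as string)) {
    errors.push(`Command '${name}' has invalid output '${String(raw.output)}' (expected one of: ${outputs.join(", ")})`);
  }
  return errors;
}
```

Collecting all errors in one pass means a user fixing a config sees every problem at once instead of one per restart.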
The global keyboard shortcut conflict space. Our default shortcut (⌘+Shift+Space on macOS) conflicted with shortcuts in a surprising number of applications. We made it configurable on first run and documented common conflicts.
What We Shipped vs. What We Cut
Shipped:
- Core command palette with built-in default commands
- Custom command YAML config
- Multi-provider LLM routing (OpenAI + Anthropic)
- Clipboard and floating window output modes
- Paste-to-active-application output (macOS, optional)
- Command history
- macOS, Windows, Linux builds
Cut:
- Plugin system for community-contributed commands (complexity too high for v1)
- Context-aware command suggestions based on active application (required deeper OS integration than we wanted to maintain)
- Built-in command sharing/sync via cloud storage (scope crept)
- VS Code extension that surfaces SlashGPT commands natively in the editor (good idea, separate project)
Lessons From Building Developer Tools
Developer tools are unforgiving in ways that consumer products aren't. Developers notice every rough edge, inconsistency, and missing feature. They compare what you built to their mental model of what it should be, and the bar is high because they know what's possible.
The positive side: developers give detailed, specific feedback. Our early testers on SlashGPT filed issues with reproduction steps, system configuration details, and specific suggestions. That kind of feedback makes improving the product fast.
The thing we'd do differently: ship the macOS version first, get feedback, then do cross-platform. Supporting three operating systems from day one tripled the surface area for bugs and ate a significant portion of our development time. The v1 could have shipped faster and been better on one platform than it was across three.
The Open-Source Repo
SlashGPT is fully open-source at github.com/v12labs-engineering/slashgpt. The repo includes:
- Full Electron + TypeScript codebase
- Pre-built binaries for macOS, Windows, and Linux in releases
- Documentation for the custom command config schema
- The LLM provider abstraction with guides for adding new providers
- Contributing guide
If you're building developer productivity tools on top of LLMs, the routing layer and command configuration schema are probably the most reusable pieces.
Ready to Build?
At V12 Labs, we build production-ready AI tools for founders and development teams. $6K flat fee, 15-day delivery, full source code ownership.
If you have an AI developer tool in mind, book a discovery call at v12labs.io and let's figure out what we can ship in 15 days.