How Pocket Code's AI Works Under the Hood
A technical look at how we designed Pocket Code's AI assistant: smart autocomplete, contextual chat, and 15 tools integrated with the app's modules.
Pocket Code's artificial intelligence isn't a generic chatbot glued to an editor. It's an integrated system that understands your project, your code, and the modules you're using. In this post, we'll tell you how we built it.
General Architecture
The AI system has three main layers:
1. Smart Autocomplete (LSP)
Autocomplete works through the Language Server Protocol (LSP). When you type code, the editor sends context (current file, cursor position, open files) to the AI model, which returns completion suggestions.
What makes our autocomplete different:
- Full project context: It doesn't just see the current file, but the project structure, imports, and dependencies
- Optimized models: We use temperature 0.0 and topK=1 for maximum suggestion accuracy
- Smart cache: We separate cache by model (chat vs completion) to avoid context contamination
- Fill-in-the-Middle (FIM): The model receives code before AND after the cursor for more precise suggestions
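Conceptually, the autocomplete request combines these ideas. The sketch below is illustrative, not Pocket Code's actual API: the FIM sentinel tokens follow a common convention and vary by model, and the function names are ours.

```python
# Illustrative sketch: Fill-in-the-Middle prompt assembly, deterministic
# sampling parameters, and per-model cache separation.
PREFIX_TOKEN = "<|fim_prefix|>"
SUFFIX_TOKEN = "<|fim_suffix|>"
MIDDLE_TOKEN = "<|fim_middle|>"

def build_fim_prompt(source: str, cursor: int) -> str:
    """Split the file at the cursor so the model sees the code
    before AND after the insertion point."""
    prefix, suffix = source[:cursor], source[cursor:]
    return f"{PREFIX_TOKEN}{prefix}{SUFFIX_TOKEN}{suffix}{MIDDLE_TOKEN}"

def completion_params() -> dict:
    # Deterministic settings for maximum suggestion accuracy.
    return {"temperature": 0.0, "top_k": 1}

def cache_key(model_role: str, prompt: str) -> str:
    """Separate caches for 'chat' and 'completion' models so one
    model's responses never contaminate the other's context."""
    return f"{model_role}:{hash(prompt)}"
```

The key design point is determinism: with `temperature=0.0` and `top_k=1`, identical context always yields the same suggestion, which also makes the cache effective.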
2. Contextual Chat
The AI chat isn't a simple prompt-and-response exchange. When you ask a question:
- The current open file is analyzed
- Selected code is extracted (if there's a selection)
- The project structure is included
- Everything is sent as context to the model
This means you can ask "what does this function do?" without having to copy and paste code.
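Assembling that context amounts to concatenating the pieces listed above into a single prompt. A simplified sketch, with illustrative names rather than Pocket Code's real types:

```python
# Illustrative sketch: building the contextual chat prompt.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChatContext:
    current_file: str          # contents of the open file
    selection: Optional[str]   # selected code, if any
    project_tree: str          # textual project structure

def build_chat_prompt(ctx: ChatContext, question: str) -> str:
    parts = [
        f"Project structure:\n{ctx.project_tree}",
        f"Current file:\n{ctx.current_file}",
    ]
    if ctx.selection:  # only included when there is a selection
        parts.append(f"Selected code:\n{ctx.selection}")
    parts.append(f"Question: {question}")
    return "\n\n".join(parts)
```

Because the open file travels with every question, "what does this function do?" resolves against the code on screen without any copy and paste.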
3. Module Tools (Tool Calling)
This is the most powerful part. We've registered 15 tools that the AI can invoke to interact directly with the app's modules. Here are some of them:
| Tool | What it does |
|---|---|
| `create_file` | Creates files in your project |
| `run_terminal_command` | Executes commands in the integrated terminal |
| `query_database` | Queries SQLite databases |
| `preview_layout` | Previews layouts in the designer |
| `git_status` | Checks Git status |
| `run_tests` | Runs unit tests |
| `install_package` | Installs dependencies |
| `read_file` | Reads project files |
| `search_code` | Searches the entire codebase |
| `refactor_symbol` | Safely renames symbols |
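Under the hood, tool calling boils down to a registry that the model's function calls are dispatched against. A minimal sketch, assuming a decorator-based registry (the API shown is illustrative, not Pocket Code's actual code):

```python
# Illustrative sketch: a tool registry and dispatcher for model-issued calls.
from typing import Callable

TOOLS: dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Register a handler the model can invoke by name."""
    def register(fn: Callable[..., str]) -> Callable[..., str]:
        TOOLS[name] = fn
        return fn
    return register

@tool("create_file")
def create_file(path: str, content: str) -> str:
    # The real implementation writes inside the project sandbox
    # and updates imports; here we just report the action.
    return f"created {path}"

def dispatch(call: dict) -> str:
    """Execute a model-issued tool call shaped like
    {'name': 'create_file', 'args': {...}}."""
    handler = TOOLS.get(call["name"])
    if handler is None:
        return f"unknown tool: {call['name']}"
    return handler(**call["args"])
```

The model never touches the modules directly: it emits a structured call, the dispatcher validates the tool name, and the registered handler does the actual work.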
When you tell the AI "create a ViewModel for the user list", it doesn't just generate code: it creates the file, places it in the correct folder, and updates the necessary imports.
Security
All execution goes through strict validations:
- Command injection: Terminal commands are sanitized against injection
- Sandboxing: Commands can only run within the project directory
- Timeouts: Every operation has a time limit to prevent zombie processes
- Null safety: All tools validate null values before executing
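The first two checks can be sketched as a pre-execution gate. The metacharacter pattern below is deliberately naive and the paths are illustrative; a production filter would be stricter:

```python
# Illustrative sketch: validating a terminal command before execution.
import os
import re

# Naive shell-metacharacter filter against command injection.
FORBIDDEN = re.compile(r"[;&|`$]")

def validate_command(cmd: str, cwd: str, project_root: str) -> bool:
    """Reject commands carrying injection metacharacters, or whose
    working directory escapes the project sandbox."""
    if FORBIDDEN.search(cmd):
        return False
    real_cwd = os.path.realpath(cwd)
    real_root = os.path.realpath(project_root)
    return real_cwd == real_root or real_cwd.startswith(real_root + os.sep)
```

Resolving paths with `realpath` before the prefix check matters: a `cwd` like `/project/../etc` would otherwise pass a plain string comparison. Timeouts are handled at execution time (e.g. a per-operation time limit on the subprocess), not in this gate.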
Supported Models
Pocket Code works with multiple AI providers:
- Gemini (Google): the default model, excellent for Kotlin/Android
- OpenAI (GPT-4, GPT-3.5): a robust alternative
- Local models: via Ollama for offline use
Users choose their model and API key in Settings > API Keys.
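Provider selection can be modeled as a small registry keyed by the user's choice in Settings. The config shape below is an assumption for illustration; only the provider names come from the list above:

```python
# Illustrative sketch: a provider registry keyed by the Settings choice.
from dataclasses import dataclass

@dataclass(frozen=True)
class Provider:
    name: str
    needs_api_key: bool   # local models via Ollama need no key

PROVIDERS = {
    "gemini": Provider("Gemini", needs_api_key=True),
    "openai": Provider("OpenAI", needs_api_key=True),
    "ollama": Provider("Local (Ollama)", needs_api_key=False),
}

def select_provider(key: str) -> Provider:
    if key not in PROVIDERS:
        raise ValueError(f"unknown provider: {key}")
    return PROVIDERS[key]
```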
What's Coming
We're working on:
- RAG (Retrieval-Augmented Generation): So the AI searches Android and Kotlin documentation before answering
- Multi-file editing: Edit multiple files in a single operation
- Prompt templates: Reusable prompt templates for common tasks
Conclusion
Pocket Code's AI isn't a bolted-on component; it's an integral part of the development experience. Every app module (terminal, database, designer, Git) is connected, and the AI can orchestrate them all.
You can see the complete development status in our status post.