
AI Chat Guide

AI Chat is IfAI's core feature: an intelligent conversational interface that understands your codebase and helps you write better code faster.

Quick Start

Opening the Chat Panel

There are several ways to open AI Chat:

  • Keyboard Shortcut: Cmd+K (Mac) or Ctrl+K (Windows/Linux)
  • Command Palette: Cmd+Shift+P > "AI: Open Chat"
  • Sidebar: Click the AI icon in the right panel

Interface Overview

[AI Chat demo screenshot]

Layout Diagram

┌─────────────────────────────────────────────────────────┐
│  AI Chat                                             [×]    │
├─────────────────────────────────────────────────────────┤
│  💬 Message History                                    │
│  ┌─────────────────────────────────────────────────┐   │
│  │ You: How do I implement authentication?         │   │
│  │                                                  │   │
│  │ 🤖 AI: I'll help you implement JWT auth...      │   │
│  │     [Code Block]                                │   │
│  │     [Insert] [Copy] [Retry]                     │   │
│  └─────────────────────────────────────────────────┘   │
│                                                         │
│  [📎 Attach Files]  [🖼️ Add Image]                     │
│  ┌─────────────────────────────────────────────────┐   │
│  │ Type your message...                    [Send]   │   │
│  └─────────────────────────────────────────────────┘   │
│                                                         │
│  Context: auth.ts (352 lines)  +2 files                 │
└─────────────────────────────────────────────────────────┘

Main Interface Elements

  • Message Area: Conversation history with AI
  • Code Actions: Insert, copy, retry buttons for code blocks
  • Context Bar: Shows currently included files
  • Attach Buttons: Add files or images to conversation
  • Input Box: Type your questions or requests

Context Awareness

IfAI's AI Chat is context-aware: it automatically includes relevant information from your codebase.

What AI Sees

By default, AI Chat includes:

  • Current File: The file you're currently editing
  • Recently Viewed: Files you've recently opened
  • Symbol Relationships: Functions, classes, imports/exports
  • Project Structure: Directory layout and file organization
  • Conversation History: Previous messages in the current session

Managing Context

View Included Context

The context bar at the bottom of the chat panel shows:

  • Context: auth.ts (352 lines) +2 files

Add Specific Files

Mention file names in your message:

Check utils/auth.ts and helpers/api.ts

Or use the attach button to manually select files.

Remove Context

To exclude sensitive or irrelevant files:

  1. Click the context bar
  2. Uncheck files you want to exclude
  3. AI will re-analyze with updated context

Context Optimization

For best results, focus context on 3-5 relevant files. Too much context may reduce response quality.

Conversation Features

Multi-turn Conversations

AI Chat maintains conversation context, allowing you to:

  1. Ask Follow-up Questions:

    You: Create a login form
    AI: [Provides login form code]
    You: Add password validation
    AI: [Updates code and adds validation]
  2. Refine Responses:

    You: Explain this function
    AI: [Provides explanation]
    You: Explain it more simply
    AI: [Provides simplified explanation]
  3. Iterate on Code (see the sketch after this list):

    You: Write a sorting algorithm
    AI: [Provides implementation]
    You: Optimize for space complexity
    AI: [Provides optimized version]
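
Picking up the third example, the optimization step might turn a simple out-of-place implementation into an in-place one, roughly along these lines (an illustrative sketch, not actual AI output):

// First response: simple, but copies the array on every call (O(n) extra space).
function sortCopy(nums: number[]): number[] {
  return [...nums].sort((a, b) => a - b);
}

// After "Optimize for space complexity": insertion sort in place, O(1) extra space.
function sortInPlace(nums: number[]): void {
  for (let i = 1; i < nums.length; i++) {
    const current = nums[i];
    let j = i - 1;
    // Shift larger elements one slot to the right to make room for current.
    while (j >= 0 && nums[j] > current) {
      nums[j + 1] = nums[j];
      j--;
    }
    nums[j + 1] = current;
  }
}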

Streaming Responses

AI responses stream in real-time, showing progress as they're generated:

  • Instant Feedback: See the response as it's being written
  • Early Stopping: Stop generation early if the response is heading in the wrong direction
  • Progressive Understanding: Start reading before completion

Code Generation

Ask AI to generate code based on specific requirements:

Create a user profile card React component with:
- Profile picture
- Name and email
- Edit button
- Use Tailwind CSS for styling

AI will generate production-ready code with correct imports and structure.
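
A response to the prompt above might look roughly like this (an illustrative sketch rather than the exact output; the UserProfileCard name and props are assumptions):

interface UserProfileCardProps {
  name: string;
  email: string;
  avatarUrl: string;
  onEdit: () => void;
}

// Profile card laid out with Tailwind utility classes.
export function UserProfileCard({ name, email, avatarUrl, onEdit }: UserProfileCardProps) {
  return (
    <div className="flex items-center gap-4 rounded-lg bg-white p-4 shadow">
      <img
        src={avatarUrl}
        alt={`${name}'s profile picture`}
        className="h-16 w-16 rounded-full object-cover"
      />
      <div className="flex-1">
        <p className="font-semibold text-gray-900">{name}</p>
        <p className="text-sm text-gray-500">{email}</p>
      </div>
      <button
        onClick={onEdit}
        className="rounded bg-blue-600 px-3 py-1 text-white hover:bg-blue-700"
      >
        Edit
      </button>
    </div>
  );
}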

Error Diagnosis

Paste error messages directly into AI Chat:

I'm getting this error:
TypeError: Cannot read property 'map' of undefined
  at UserProfile.tsx:15:23

This happens when I render the user list.

AI will:

  1. Analyze the error message
  2. Check your code
  3. Identify the root cause
  4. Suggest fixes (see the sketch below)
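
For the error above, the suggested fix usually boils down to guarding against the undefined value before calling map (an illustrative sketch; the users prop and its shape are assumptions):

interface User {
  id: string;
  name: string;
}

// Before: users is undefined while data is loading, so users.map throws.
// After: default to an empty array so the list renders safely.
export function UserList({ users = [] }: { users?: User[] }) {
  return (
    <ul>
      {users.map((user) => (
        <li key={user.id}>{user.name}</li>
      ))}
    </ul>
  );
}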

Code Explanation

Select code and request explanation:

Explain what this useEffect hook does

Or use the /explain slash command for an instant explanation.
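
For example, you might select a hook like this one (a hypothetical snippet) and ask what it does; the AI would walk through the event subscription, the cleanup function, and the empty dependency array:

import { useEffect, useState } from "react";

function useWindowWidth(): number {
  const [width, setWidth] = useState(window.innerWidth);

  useEffect(() => {
    const onResize = () => setWidth(window.innerWidth);
    window.addEventListener("resize", onResize);
    // Cleanup removes the listener when the component unmounts.
    return () => window.removeEventListener("resize", onResize);
  }, []); // Empty dependency array: subscribe once on mount.

  return width;
}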

Slash Commands

Slash commands provide quick access to common AI operations:

  • /explain: Explain selected code. Example: "Explain this function"
  • /fix: Fix errors in the selection. Example: "Fix the type error here"
  • /test: Generate unit tests. Example: "Generate tests for this component"
  • /refactor: Refactor code. Example: "Refactor for readability"
  • /optimize: Optimize performance. Example: "Optimize this loop"
  • /document: Add documentation. Example: "Add docs for this API"

Using Slash Commands

Type / in chat input to see available commands:

/ [explain] [fix] [test] [refactor] [optimize] [document]

Or select code first, then use the command:

  1. Select code in editor
  2. Press Cmd+K
  3. Type /fix
  4. AI analyzes and fixes selection
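
As an illustration, running /test on a small utility function might generate something like this (a sketch assuming a Jest-style test runner; formatPrice is a hypothetical function):

// Hypothetical function under test.
export function formatPrice(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}

// Tests the AI might generate.
describe("formatPrice", () => {
  it("formats whole dollar amounts", () => {
    expect(formatPrice(500)).toBe("$5.00");
  });

  it("keeps two decimal places", () => {
    expect(formatPrice(1999)).toBe("$19.99");
  });

  it("handles zero", () => {
    expect(formatPrice(0)).toBe("$0.00");
  });
});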

Advanced Features

Inline AI

Quick edits without leaving the editor:

  1. Select code to modify
  2. Press Cmd+K (or Ctrl+K)
  3. Describe changes:
    Add error handling to this function
  4. AI provides inline suggestions (see the sketch below)
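
For instance, asking "Add error handling to this function" on a small fetch helper might produce an inline edit like this (an illustrative sketch; fetchUser is a hypothetical function):

// Before: any network or HTTP failure surfaces as an unhandled rejection.
// async function fetchUser(id: string) {
//   const res = await fetch(`/api/users/${id}`);
//   return res.json();
// }

// After: the HTTP status is checked and failures are logged and rethrown.
async function fetchUser(id: string): Promise<unknown> {
  try {
    const res = await fetch(`/api/users/${id}`);
    if (!res.ok) {
      throw new Error(`Request failed with status ${res.status}`);
    }
    return await res.json();
  } catch (error) {
    console.error(`Failed to fetch user ${id}:`, error);
    throw error;
  }
}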

File References

AI can read and reference multiple files:

Compare authentication logic in these files:
- src/auth/jwt.ts
- src/auth/session.ts

AI will analyze both files and provide a comparison.

Code Selection Actions

Right-click selected code to perform AI operations:

  • Explain Code: Get detailed explanation
  • Refactor: Improve code quality
  • Add Tests: Generate unit tests
  • Find Bugs: Identify potential issues
  • Optimize: Improve performance

AI Providers

Cloud Providers

IfAI supports multiple cloud AI providers:

OpenAI

  • Models: GPT-4, GPT-4 Turbo, GPT-3.5 Turbo
  • Best For: Complex reasoning, creative tasks
  • Setup: Settings > AI Providers > OpenAI > Enter API key

Anthropic Claude

  • Models: Claude 3.5 Sonnet, Claude 3 Opus
  • Best For: Long conversations, detailed explanations
  • Setup: Settings > AI Providers > Anthropic > Enter API key

DeepSeek

  • Models: DeepSeek-V3, DeepSeek-Coder
  • Best For: Code generation, technical tasks
  • Setup: Settings > AI Providers > DeepSeek > Enter API key

Zhipu AI

  • Models: GLM-4.7, GLM-4.6, GLM-4.5V
  • Best For: Chinese language support, multimodal
  • Setup: Settings > AI Providers > Zhipu > Enter API key

Kimi (Moonshot)

  • Models: Moonshot-v1-8k, Moonshot-v1-32k
  • Best For: Long context windows
  • Setup: Settings > AI Providers > Kimi > Enter API key

Local Models

Use local LLMs for privacy and cost savings:

Ollama Integration

Hybrid Mode

Configure IfAI to use local models for simple tasks and cloud APIs for complex tasks. See Settings Reference.

Tips and Best Practices

Effective Prompts

  1. Be Specific: Instead of "fix this", say "fix the null pointer exception on line 23"
  2. Provide Context: Include relevant file names and error messages
  3. Use Examples: Show example code of what you want
  4. Iterate: Refine your request if the first response isn't perfect

Token Management

  • Monitor Usage: Check token count in context bar
  • Clear History: Clear old conversations to save tokens
  • Optimize Context: Only include relevant files

Privacy Considerations

Cloud Providers: Code is sent to external servers

  • Check provider privacy policies
  • Avoid sharing sensitive data (API keys, passwords)

Local Models: Everything stays on your machine

  • No data leaves your device
  • Best for sensitive projects

Troubleshooting

"Cannot connect to AI provider"

Solutions:

  1. Check API keys in settings
  2. Verify network connection
  3. Check provider status page
  4. Try switching providers

"Exceeded context limit"

Solutions:

  1. Remove some files from context
  2. Clear chat history
  3. Switch to a model with a larger context window
  4. Use a local model with a large context window (local models avoid provider caps, but their own context limit still applies)

Slow Response

Solutions:

  1. Reduce the number of files in context
  2. Switch to a faster model
  3. Use a local model to avoid network latency
  4. Check your network speed

Next Steps
