Getting Started with ADI: From Zero to Autonomous in 5 Minutes
No long tutorials. No complex configuration. Just four steps between you and autonomous task implementation.
What You'll Need
Before you start, make sure you have:
- GitLab account (GitHub support coming Q2 2026)
- AI API key - At least one of:
- Anthropic API key (Claude)
- OpenAI API key (GPT-4/Codex)
- Google AI API key (Gemini)
- Task source - One of:
- Jira workspace access
- Linear workspace access
- GitLab project with issues
- GitHub project with issues (coming soon)
Time estimate: 5 minutes
Cost: Free during beta, then BYOK (you control AI spending)
Step 1: Connect Your GitLab Account
- Visit the-ihor.com
- Click "Activate ADI"
- Authorize ADI to access your GitLab account
- Grant permissions:
- Read repository information
- Write to repositories (for creating branches and MRs)
- Access CI/CD variables (for secure API key storage)
- Create merge requests
Why these permissions?
- ADI needs repo access to read code and create implementation branches
- CI/CD variable access allows secure storage of your API keys
- MR creation is how ADI delivers completed work
Security note: ADI uses OAuth. We never see your GitLab password.
Step 2: Add Your API Keys (BYOK)
ADI uses a Bring Your Own Key model. You provide the AI API keys, ADI uses them to implement your tasks.
Why BYOK?
- Full cost transparency - You see exactly what you spend on AI
- Data control - Your API keys, your usage, your data
- Flexibility - Switch providers, adjust limits, cancel anytime
Adding Your Keys
- In ADI dashboard, navigate to Settings → API Keys
- Add at least one provider:
Anthropic (Claude) - Recommended:
Key Name: Anthropic
API Key: sk-ant-api03-...
OpenAI (GPT-4/Codex):
Key Name: OpenAI
API Key: sk-proj-...
Google (Gemini):
Key Name: Google AI
API Key: AIza...
- Click "Test Connection" to verify each key works
- Save
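The key formats shown above can be sanity-checked before you paste them in. A minimal sketch (the prefix strings come from this guide's examples and may change on the providers' side; the function name is illustrative):

```typescript
// Rough prefix check for the key formats shown above.
// Prefixes are taken from this guide's examples; providers may change them.
const KEY_PREFIXES: Record<string, string> = {
  anthropic: "sk-ant-",
  openai: "sk-",
  google: "AIza",
};

function looksLikeValidKey(provider: string, key: string): boolean {
  const prefix = KEY_PREFIXES[provider];
  // Require the documented prefix plus a reasonable amount of key material.
  return prefix !== undefined && key.startsWith(prefix) && key.length > prefix.length + 8;
}
```

This only catches obvious paste errors; "Test Connection" in the dashboard remains the real check.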
Pro tip: Start with one provider. ADI will auto-select the best model per task. You can add more providers later.
Expected Costs
Based on real usage:
- Simple bug fix: $0.15 - $0.30
- Feature implementation: $0.40 - $0.80
- Technical debt cleanup: $0.20 - $0.50
- Evaluation only (rejected task): $0.05 - $0.10
Average monthly cost for active development: $30-80 in AI API usage.
Compare to developer time saved: 20-40 hours/month.
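As a back-of-envelope check on those numbers, here is one hypothetical monthly mix (the task counts are an assumption for illustration; per-task costs are the midpoints of the ranges above):

```typescript
// Hypothetical monthly task mix; per-task costs are midpoints of the
// ranges quoted above, not measured data.
const monthlyMix = [
  { kind: "bug fix", count: 40, costUsd: 0.22 },
  { kind: "feature", count: 30, costUsd: 0.6 },
  { kind: "debt cleanup", count: 20, costUsd: 0.35 },
  { kind: "rejected evaluation", count: 20, costUsd: 0.07 },
];

// 40*0.22 + 30*0.60 + 20*0.35 + 20*0.07 = 8.8 + 18 + 7 + 1.4 = 35.2
const total = monthlyMix.reduce((sum, t) => sum + t.count * t.costUsd, 0);
```

Even a fairly busy mix like this lands near the low end of the $30-80 range.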
Step 3: Configure Task Sources
Tell ADI where to watch for tasks.
Option A: Jira
- Go to Settings → Task Sources
- Click "Add Jira Integration"
- Enter:
- Jira workspace URL: yourcompany.atlassian.net
- API token: Generate from Jira
- Project key: PROJ (or multiple: PROJ,TEAM,ENG)
- Configure auto-trigger:
- ✅ Trigger on: Status changed to "Ready for Dev"
- ✅ Only if: Labels include "ai-solvable"
Option B: Linear
- Click "Add Linear Integration"
- Authorize ADI to access Linear
- Select teams to monitor
- Configure auto-trigger:
- ✅ Trigger on: Status changed to "Todo"
- ✅ Only if: Labels include "auto-implement"
Option C: GitLab Issues
- Click "Add GitLab Issues"
- Select repositories to monitor
- Configure auto-trigger:
- ✅ Trigger on: Issue labeled "adi-auto"
- ✅ Only if: Issue assigned to ADI bot user
Option D: GitHub Issues (Coming Soon)
GitHub integration is in active development, ETA Q2 2026.
Multiple Sources?
You can connect all of them. ADI monitors everything and processes tasks from any configured source.
Step 4: Create Your First Task
Let's test the setup with a simple task.
Writing a Good Task
ADI works best with clear, specific tasks. Think: "briefing a competent junior developer."
Example Good Task:
Title: Add health check endpoint to API
Description:
Create GET /api/health endpoint that returns:
- Status: "ok"
- Timestamp: current ISO datetime
- Version: from package.json
Follow existing controller patterns in src/controllers/.
Include test coverage.
Why this works:
- Clear deliverable (new endpoint)
- Specific requirements (what to return)
- Context clues (where patterns live)
- Success criteria (test coverage)
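For reference, the handler that task describes might look roughly like this. This is a sketch, not actual ADI output: the version constant is a placeholder, and the controller wiring is left out.

```typescript
// Sketch of the health endpoint described in the example task above.
// In a real codebase the version would be read from package.json and the
// handler would be registered following the patterns in src/controllers/.
interface HealthResponse {
  status: "ok";
  timestamp: string; // current ISO 8601 datetime
  version: string;
}

const APP_VERSION = "1.0.0"; // placeholder for the package.json version

function healthHandler(): HealthResponse {
  return {
    status: "ok",
    timestamp: new Date().toISOString(),
    version: APP_VERSION,
  };
}
```

Note how each bullet in the task maps to one field in the response: that one-to-one mapping is what makes the task easy to evaluate and verify.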
Create It in Your Task Source
In Jira:
- Create new issue
- Set status to "Ready for Dev"
- Add label "ai-solvable"
- Save
In Linear:
- Create new issue
- Set status to "Todo"
- Add label "auto-implement"
- Save
In GitLab:
- Create new issue in configured repo
- Add label "adi-auto"
- Assign to ADI bot user
- Save
What Happens Next
Within 60 seconds:
- ADI picks up the task from your task source
- Evaluation starts (2-phase analysis)
- Simple filter: Can this be automated?
- Deep evaluation: How should it be implemented?
- You get notification: "Task evaluation complete"
- Check ADI dashboard to see evaluation result
If evaluation passes:
- Implementation pipeline starts
- ADI implements the task (usually 4-12 minutes)
- Merge request created in GitLab
- You get notification: "MR ready for review"
If evaluation fails:
- You get notification with rejection reason
- Task stays in your backlog for manual implementation
Step 5: Review Your First Merge Request
When implementation completes:
- Open the MR from notification link
- Review the changes:
- Code diff
- Implementation approach
- Test coverage
- CI pipeline status (all checks should pass)
- Check the MR description:
- Summary of changes
- Files modified
- Cost breakdown
- Iterations count
- Approve or request changes
If It Looks Good:
Click "Merge". Done. Task complete.
If Changes Needed:
Add review comments in MR. ADI won't auto-respond to comments (yet), so either:
- Make small fixes yourself
- Create new task with specific change requests
Monitoring and Metrics
Dashboard Overview
Your ADI dashboard shows:
- Tasks processed (last 7 days, 30 days, all time)
- Success rate (% of evaluated tasks implemented)
- Cost breakdown (per task, per month)
- Time saved (estimated developer hours)
- Active pipelines (currently running)
Usage Tracking
Every task creates usage records:
Task: Add health check endpoint
├─ Evaluation
│ ├─ Simple filter: $0.02 (0.5s)
│ └─ Deep agentic: $0.08 (12s)
├─ Implementation: $0.28 (6m 40s)
└─ Total: $0.38
Full transparency. You see every API call, every token, every dollar.
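In code form, a usage record like the one above is just itemized line entries summing to the task total. The field names here are illustrative, not ADI's actual schema:

```typescript
// Illustrative shape for a per-task usage record; not ADI's real schema.
interface UsageLine {
  phase: string;
  costUsd: number;
}

function totalCost(lines: UsageLine[]): number {
  const sum = lines.reduce((acc, l) => acc + l.costUsd, 0);
  return Math.round(sum * 100) / 100; // round to cents to avoid float drift
}

// The breakdown shown above for the health check task:
const healthCheckTask: UsageLine[] = [
  { phase: "evaluation: simple filter", costUsd: 0.02 },
  { phase: "evaluation: deep agentic", costUsd: 0.08 },
  { phase: "implementation", costUsd: 0.28 },
];
```

Summing the lines reproduces the $0.38 total from the example.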
Advanced Configuration (Optional)
Once you're comfortable with basics:
GitLab Runner Tags
ADI uses GitLab CI runners. You can specify custom runner tags in settings:
Evaluation pipeline: evaluation
Implementation pipeline: claude, codex, gemini
This lets you use different runners for different AI providers or workload types.
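For orientation, this is what runner tags look like in a plain GitLab CI pipeline. This is a generic `.gitlab-ci.yml` sketch, not ADI's generated pipeline; it only shows what the tag names above map to:

```yaml
# Generic GitLab CI example: the `tags` key routes a job to runners
# registered with matching tags.
evaluate:
  tags: [evaluation]
  script:
    - echo "runs on a runner tagged 'evaluation'"

implement:
  tags: [claude]
  script:
    - echo "runs on a runner tagged 'claude'"
```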
API Rate Limits
Set monthly spending limits to avoid surprise bills:
Max monthly AI spend: $100
Action when limit reached: Pause new tasks
Webhook Notifications
Get notified via webhook for pipeline events:
Webhook URL: https://yourapp.com/adi-webhook
Events: evaluation_complete, implementation_complete, pipeline_failed
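A receiver for these events might dispatch on the event name like this. The payload fields below are assumptions for illustration; only the three event names come from this guide, so check the actual webhook schema before relying on any field:

```typescript
// Hypothetical payload shape; ADI's real webhook schema may differ.
type AdiEvent = "evaluation_complete" | "implementation_complete" | "pipeline_failed";

interface AdiWebhookPayload {
  event: AdiEvent;
  taskId: string; // assumed field, for illustration only
}

// Turn a raw webhook body into a human-readable notification line.
function describeEvent(rawBody: string): string {
  const payload = JSON.parse(rawBody) as AdiWebhookPayload;
  switch (payload.event) {
    case "evaluation_complete":
      return `Task ${payload.taskId}: evaluation finished`;
    case "implementation_complete":
      return `Task ${payload.taskId}: MR is ready for review`;
    case "pipeline_failed":
      return `Task ${payload.taskId}: pipeline failed, check the logs`;
    default:
      return `Task ${payload.taskId}: unknown event`;
  }
}
```

Your endpoint at the configured Webhook URL would parse the POST body this way and route it to Slack, email, or wherever your team watches.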
Troubleshooting
"Evaluation rejected my task"
Common reasons:
- Task description too vague
- Requires human judgment (design, UX)
- Missing context (which repo? which feature?)
Fix: Make the task more specific, add context.
"Pipeline failed during implementation"
Check:
- ADI dashboard → Pipeline logs
- Look for error in execution phase
- Common issues:
- Missing dependencies
- Test failures (ADI iterates, but may hit limit)
- API rate limits
Fix: Review error, update task requirements if needed.
"MR created but code quality is poor"
Remember: ADI follows patterns in your codebase. If it's producing poor code, check:
- Are existing patterns clear and consistent?
- Are tests present to guide implementation?
- Was the task specific enough?
Improve: Add better examples in codebase, write clearer tasks.
Best Practices for Success
1. Start Small
First 5-10 tasks should be simple, well-scoped. Build confidence.
2. Write Specific Tasks
"Add X endpoint following Y pattern" beats "improve the API."
3. Maintain Good Codebase Patterns
ADI learns from your code. Clean patterns = better output.
4. Review Thoroughly at First
Until you trust ADI's output, review every line. You'll learn its strengths and weaknesses.
5. Label Tasks Appropriately
Not everything should be auto-triggered. Use labels deliberately.
What's Next?
Now that you're set up:
- Create 3-5 simple tasks to test the flow
- Review the MRs to understand ADI's output quality
- Adjust your task writing based on results
- Expand to more task sources once comfortable
- Monitor costs and ROI
Need help?
- Read the technical deep-dive
- Understand what ADI solves
- Join the community (Discord link TBD)
Ready to activate ADI? Start here with $100 free credit. First 5 tasks are on us.