The "Guide Coding > Vibe Coding / Technique Exchange" session conducted a live survey of roughly 30 participants, tallying which AI coding tools they use and how they interact with them. The results paint a picture of a community deep in experimentation:
Tool usage:
| Tool | Users | Notes |
| --- | --- | --- |
| Copilot | 11 | Most popular; bundled with IDEs; called "horrible" by some |
| Gemini | 8 | Surprisingly popular; partly due to Android Studio integration |
| Junie (JetBrains) | 7 | Strong IntelliJ ecosystem |
| Windsurf | 7 | Many teams adopted it in late 2024 beta |
| Claude Code | 6 | CLI terminal tool |
| Cursor | 6 | Noted for clever context-grabbing heuristics |
| Roo Code | 4 | Part of the Cline/Roo/Kilo family |
| Ollama (local) | 3 | For local/private experimentation |
| Zed | 3 | Editor with built-in AI |
| Codex CLI | 2 | OpenAI's CLI tool |
| OpenCode | 2 | CLI tool; works with Copilot license |
| ECA (Emacs) | 1 | Generic Emacs client for any model backend |
| Cline | 1 | IDE plugin |
A telling observation: the two most popular tools (Copilot, Gemini) were described as "kind of horrible" in quality but widely adopted because they come bundled with IDEs. Claude Code, Cursor, Junie, and Windsurf were seen as the higher-quality options.
How people interact with AI tools:
| Mode | Users |
| --- | --- |
| IDE Plugin | 17 |
| Chat (Web) | 15 |
| Full AI IDE (Cursor, Windsurf) | 9 |
| CLI (Claude Code, Codex, OpenCode) | 7 |
One participant noted the perception advantage of CLI tools: "Your boss walks by, 'This guy's doing some serious stuff.' Otherwise it's just on the website."
Techniques That Work: A Practitioner's Field Guide
The Technique Exchange session was a goldmine of concrete practices that participants had battle-tested. Here are the ones that generated the most energy in the room.
1 Aggressive Context Clearing
Write a plan to markdown. /clear the session. Feed it only the plan. Ask it to break down phase 1. Write that to markdown. /clear again. Only then implement.
This feels counterintuitive — you lose accumulated context. But it prevents stale context from polluting later tasks. Research cited from the book "AI Engineering" supports this: models use the beginning and end of prompts more than the middle. The middle is a dead zone.
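Inside a Claude Code session, the loop looks roughly like this (the prompts are illustrative; `/clear` is the built-in command that wipes conversation history):

```text
> Write an implementation plan for the feature to plan.md
/clear
> Read plan.md and break phase 1 into concrete steps; write them to phase-1.md
/clear
> Read phase-1.md and implement it
```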
2 See-Do-Teach
One participant implemented a feature manually first ("See"), polished it with good commits, then told the AI: "Look at my last commits and implement a similar feature following the same patterns" ("Do"). After iteration and correction, he extracted rules and commands so future features could be generated nearly perfectly ("Teach"). The initial setup took a full weekend, but it reduced subsequent similar features to 20–30 minutes each.
3 Hierarchical Context Files
Place CLAUDE.md files (or equivalent rules files) at different directory levels: root for general project rules, database directory for database access patterns, UI directory for frontend patterns. When agents work in a specific area, they only load relevant context. One participant reported this enabled "one-shot, chunky pieces of functionality."
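As a sketch, the layout might look like this (the `CLAUDE.md` file name follows Claude Code's convention; the directory names are made up):

```text
repo/
├── CLAUDE.md          # general project rules, build/test commands
├── db/
│   └── CLAUDE.md      # data-access patterns, migration conventions
└── ui/
    └── CLAUDE.md      # component structure, styling rules
```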
Caution: A Windsurf user noted that .rules files have a 1,200-character limit. Files exceeding this are silently omitted — and the AI itself can generate rules files that exceed the limit, creating a frustrating loop.
4 Multi-Model Red/Blue Team
Use at least two different models and have them review each other's output. One practitioner runs a "red team / blue team" approach: one model writes code, another tears it apart, cycling back and forth.
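The cycle can be sketched as a small orchestration loop; `write` and `critique` are placeholders for calls to two *different* model APIs (hypothetical wiring, not any specific SDK):

```python
from typing import Callable

def red_blue_cycle(write: Callable[[str], str],
                   critique: Callable[[str], str],
                   spec: str, rounds: int = 2) -> str:
    """Alternate a 'blue' writer model and a 'red' critic model."""
    draft = write(spec)  # blue team produces the first version
    for _ in range(rounds):
        feedback = critique(draft)  # red team tears it apart
        draft = write(f"{spec}\n\nRevise to address:\n{feedback}")
    return draft
```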
5 Architecture Tests Over Prompt Rules
Multiple developers on the same team using different models leads to inconsistent coding styles. Rather than relying on prompt-based rules (which agents can and do ignore), participants recommended architecture tests — executable checks that enforce patterns.
"We started to enforce it by having architecture tests... the test will fail if you don't follow what we agreed."
Concrete tools recommended: ArchUnit (Java/Kotlin), Dependency Cruiser (JS/TS), CodeScene (language-agnostic behavioral analysis), ESLint, and Biome.
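For stacks not covered by those tools, the same idea can be hand-rolled. Here is a minimal Python sketch that detects when a module imports from a forbidden layer; the `myapp.adapters`/`myapp.ui` paths are hypothetical examples:

```python
import ast

def find_forbidden_imports(source: str, banned_prefixes: tuple) -> list:
    """Return imported module names that violate the layering rule."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [node.module or ""]
        else:
            continue
        violations.extend(n for n in names if n.startswith(banned_prefixes))
    return violations

# In CI this would read every module under e.g. myapp/domain/ and assert
# that none of them imports myapp.adapters or myapp.ui — the test fails
# whether the code was written by a human or generated by an agent.
```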
6 Baby Steps and Inside-Out Development
Work domain-first, then adapters, one layer at a time. The rule: "If you or AI produce code that you cannot explain, it does not pass to main."
7 MCP Integrations (With Caveats)
Participants shared experiences with Model Context Protocol integrations:
- Playwright MCP: Used not for writing tests but for driving browsers so agents can visually verify UI changes
- Figma MCP: Streamlining frontend development
- Notion MCP: Connecting documentation
But a cautionary tale: one team built an MCP on Jira that auto-created tickets from error logs. They "shut down the system after two days because the amount of backlog that generated was insane."
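For reference, wiring an MCP server into a client is usually a small JSON entry; this sketch follows the common `mcpServers` configuration convention (the server name and package are illustrative):

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```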
8 Sub-Agents and Swarms
Specialized agents that each handle one task and hand off to the next, avoiding context overload.
"They don't need to know so much. They just need to know what they need to know. And therefore the system, in a way, works better."
Voice-First Interaction: An Unexpected Breakthrough
One of the most surprising and personal revelations came from a participant in the "How We Can Improve Remote Work / Talking With AI" session. He described a simple change that transformed his relationship with AI tools: he started speaking instead of typing.
Using the built-in speech-to-text on his MacBook (free, "reasonably" good), he began dictating prompts while standing at his desk. The effects surprised him:
"Something happened to my brain. I felt a little bit more better, I felt a little bit more engaged, and I didn't hate AI so much like I did before."
He identified a qualitative difference: when typing, he is in execution mode with a fixed track. When standing and speaking, he enters a more creative, exploratory thinking mode. For a remote worker whose team met only twice a week, AI-as-voice-partner served as the pair he was missing.
Another participant took this much further — spending roughly 500 EUR/month on ElevenLabs for voice AI:
- Before starting a project, he walks around the city discussing it with a voice AI for 40–60 minutes, producing detailed specifications from the conversation
- He trained a voice AI as his PhD mentor/focus coach
- He uses AI as a "personal psychologist" for end-of-day debriefing
- He creates evolving AI "avatars" with distinct personalities
- He even sends AI avatars to students when he cannot meet them
He also recommended interacting with AI in your mother tongue rather than English, because the richness of expression in your native language provides better context.
Running Models Locally: Simpler Than You Think
The "AI And The Physical World" session (Day 2) included a surprisingly practical deep-dive into running models locally with Ollama. Participants who had tried it reported:
- Setup is "so simple that it feels a bit ridiculous"
- Download from ollama.com, install, pull models from Hugging Face or the Ollama library
- Models from Meta, IBM, Microsoft are freely available
- Works completely offline (one participant used it on a 10-hour flight)
- Rule of thumb: approximately 1GB of RAM per billion parameters
- On a Mac with 16GB RAM, 7B models run reasonably well; 13B models are slow
- Privacy benefit: no data sent to external companies
The quality trade-off is real — local models (3–24GB) are far less capable than ChatGPT-class models (~500GB). Practitioners recommended a split strategy: complex tasks go to cloud models, private or simple tasks go to local ones.
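The rule of thumb translates into a trivial estimator; the default bytes-per-parameter value is an assumption matching the ~1 GB per billion figure (roughly 8-bit weights), adjustable for other quantizations:

```python
def estimated_ram_gb(params_billion: float, bytes_per_param: float = 1.0) -> float:
    """Rough memory estimate for running a model locally.

    Session rule of thumb: ~1 GB of RAM per billion parameters.
    bytes_per_param is an assumption: try 0.5 for 4-bit quantization,
    2.0 for fp16 weights.
    """
    return params_billion * bytes_per_param

# A 7B model fits comfortably in 16 GB; a 13B model leaves little headroom.
```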
Scaling AI to Real Codebases
The "How to Transition From Toy Projects to Real-World?" session tackled the gap between impressive AI demos on small projects and the reality of large codebases:
- "30,000 → AI can generate it": Below about 30,000 lines/tokens, AI can effectively manage entire codebases. Beyond that, things break down.
- The modular code insight: "Actually we want modular code." The answer is not bigger context windows — it is better-structured code. And the most quotable insight from the session: "Human-friendly code is also AI-friendly."
- Serena for semantic search: The tool Serena was highlighted for providing semantic retrieval capabilities over codebases, going beyond simple text search.
- Make IDE knowledge available to agents: IDEs already understand type information, call graphs, and symbol resolution. Agents currently lack this. MCP servers were identified as the potential bridge.
- Architecture Decision Records (ADRs): Keep them in the repo. "For AI, provide example / add comments to good code / Have ADR in the Repo."