The Social Media Parallel

The most memorable framing of the unconference came from the "AI, Ethics, Energy Consumption" session (Day 2), captured on a flipchart as two parallel timelines:

(1) 10 years ago: Social media — cool idea! now: brain rot.
(2) Now: AI/LLM — cool idea! 10 years from now: ???

A participant referenced interviews with former Instagram engineers who said they were "only told to hit this number of people returning to the platform." They did not intend the harmful outcomes, but their algorithmic tweaks produced them. The implication for AI builders was clear.

The Ethics Catalogue

The unconference collectively assembled a catalogue of ethical concerns:

  • Copyright: LLMs trained on copyrighted materials without compensation
  • Greed as the driving force: AI advancement primarily solving a "wage problem," not genuinely innovating
  • Employment displacement: not just job loss but quality degradation, with people rehired for "lower jobs to do stupid jobs" checking AI output
  • Power concentration: "An extreme way to control a lot of information... a great way to form the masses"
  • Vendor lock-in: deep dependency on AI tools creating a structural inability to leave ecosystems
  • Bias amplification: AI trained on biased data amplifies existing biases
  • Impact on children: "Children communicate more with AI than with adults... cannot give feedback, only reinforcing, no corrective"
  • Energy consumption: US data centers relying on gas and diesel despite "green" marketing; major tech companies abandoning zero-emission goals; roughly 0.3 Wh per prompt on average (Google's reported figure), though this likely excludes training costs and infrastructure
  • Military and surveillance applications: the dual-use potential of AI capabilities
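The per-prompt energy figure above (Google's reported average, roughly 0.3 Wh per prompt) only becomes tangible at scale. A back-of-envelope sketch, where the user count and prompt volume are pure assumptions for illustration, not measurements:

```python
# Scale the reported ~0.3 Wh/prompt figure to a hypothetical usage level.
# NOTE: user and prompt counts below are invented for illustration, and the
# per-prompt figure itself likely excludes training and much infrastructure.
WH_PER_PROMPT = 0.3

prompts_per_user_per_day = 20   # assumption
users = 100_000_000             # assumption

daily_wh = WH_PER_PROMPT * prompts_per_user_per_day * users
annual_gwh = daily_wh * 365 / 1e9   # Wh -> GWh

print(f"Daily:  {daily_wh / 1e6:.0f} MWh")   # 600
print(f"Annual: {annual_gwh:.0f} GWh")       # 219
```

Even under these modest assumptions the annual total lands in the hundreds of gigawatt-hours, which is why the session treated inference energy as a concern in its own right, separate from training.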

The Push for a Software Engineers' Manifesto

The "SWE Manifesto / Union" session grew directly from the Day 1 "AI Lean in Team Structure" discussion, where participants realized the scope of concerns exceeded what individual engineers could address.

The session produced four core grievances:

  1. Greed: AI-assisted programming is primarily solving a wage problem
  2. Enshittification: Degradation of products and tools
  3. Harm to psychological safety: One engineer proving high productivity with AI agents leads to unrealistic expectations for all
  4. Decay of skill (Juniors): A 2–3 year hiring gap already exists; the pipeline is breaking

Concrete next steps were proposed:

  • Organize an in-person gathering to continue the conversation
  • Compile and publish a document of concerns
  • Convene a "Wisdom Circle" structured dialogue
  • Map existing organizations (Weizenbaum Institute, Cyberpeace, Forum für Informatik, Gesellschaft für Informatik)
  • Establish a website and discussion board (potentially hosted through a university for anonymity)
  • Schedule regular follow-up meetings

The group debated organizational forms — union, lobbying group, professional association — and cited the Free Software Foundation and Open Source Initiative as examples of philosophical differences with pragmatic collaboration. The idea of professional certification was raised but deemed too politically divisive.

Do Our Metrics Still Work?

The "What Happens to Accelerate?" session (Day 2, Session 3) questioned whether the DORA metrics that have become industry-standard benchmarks (deployment frequency, lead time for changes, change failure rate, and mean time to restore) remain valid when AI changes how software is developed.

If AI generates code in seconds that previously took days, does lead time lose its diagnostic value? If AI produces more code faster but with higher failure rates, do the metrics create perverse incentives?
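For concreteness, the four DORA metrics can be computed from a deployment log in a few lines. The record structure and numbers below are invented for illustration, not taken from the session:

```python
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical deployment records (fields are illustrative, not a real tool's schema):
# when it shipped, commit-to-production lead time, whether it failed, time to restore.
deployments = [
    {"at": datetime(2024, 5, 1), "lead": timedelta(days=2),     "failed": False, "restore": None},
    {"at": datetime(2024, 5, 2), "lead": timedelta(hours=3),    "failed": True,  "restore": timedelta(hours=1)},
    {"at": datetime(2024, 5, 4), "lead": timedelta(hours=5),    "failed": False, "restore": None},
    {"at": datetime(2024, 5, 5), "lead": timedelta(minutes=40), "failed": True,  "restore": timedelta(hours=2)},
]
window_days = 7

# Deployment frequency: deployments per day over the observation window.
deploy_frequency = len(deployments) / window_days

# Lead time for changes: mean commit-to-production time, in hours.
lead_time = mean(d["lead"].total_seconds() for d in deployments) / 3600

# Change failure rate: share of deployments that caused a production failure.
failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

# MTTR: mean time to restore service, averaged over failed deployments, in hours.
mttr = mean(d["restore"].total_seconds() for d in deployments if d["failed"]) / 3600

print(f"Deploys/day:  {deploy_frequency:.2f}")  # 0.57
print(f"Lead time:    {lead_time:.1f} h")       # 14.2
print(f"Failure rate: {failure_rate:.0%}")      # 50%
print(f"MTTR:         {mttr:.1f} h")            # 1.5
```

The session's worry maps directly onto these numbers: if AI collapses lead time to minutes while failure rate climbs, a team optimizing the first two metrics in isolation looks better on paper while its product gets worse.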

The question remained open, but the mere fact that it was asked signals a community grappling with the inadequacy of existing measurement frameworks for the AI era.

AI Beyond Software: The Physical World

The session "AI And The Physical World" was a refreshing departure, exploring what happens when AI tools meet problems that are not purely digital. A participant's use cases — repairing motor scooters and 3D printing — revealed fundamental limitations:

  • AI has no sense of time: It cannot distinguish between a follow-up question asked seconds later and one asked after hours of repair work
  • Sensory translation is lossy: Everything you see, hear, and feel must be translated to text, and much is lost
  • Physical iterations are expensive: Unlike code (essentially free to regenerate), each 3D print attempt costs time and material

The session's vision for the future: "It would be a game changer if AI would have more sensors... if it could listen or always be there and listen and watch." Until then, a neighbor who can physically see and hear a problem remains more helpful than a chatbot receiving text descriptions.

The Skill Atrophy Question

This theme surfaced in nearly every session but was most pointedly discussed in the "AI and non-AI in You" session (Day 2) and the physical world discussion:

  • Doctors getting worse: After AI diagnostic tools were introduced for cancer detection, doctors' own diagnostic capabilities measurably worsened within three months
  • The GPS analogy: Navigation skills have already atrophied for most people, and we adapted (mostly)
  • The evolutionary parallel: Humans stopped producing certain vitamins because dietary sources made self-production unnecessary. Our brains may naturally offload cognitive work to AI the same way — and we may not be able to reverse it
  • The counterargument: "I'd rather have a tool and do it than not have a tool and not do it" — pragmatic acceptance that some atrophy is an acceptable trade-off for capability

The counterpoint from the "Advice for New Coders" session: you can use AI as a trainer rather than a crutch. "Maybe you can ask the agent to give you exercises to help you improve your memory instead of trying to use a supplement."