AI Conference 2026 — When Observations Become Confirmation About Real AI Systems

At the AI Conference 2026 – Science × Business / AICon26 in Heidelberg, many topics were discussed: AI technology, regulation, healthcare, infrastructure, business value, and the impact on software development.

But for me, the most interesting aspect of the conference was something else.

Many discussions confirmed patterns that have already been emerging for some time.

Ideas I have written about in recent months appeared again in multiple talks and conversations:

  • data quality determines AI system behavior
  • AI systems increase operational complexity
  • AI introduces new categories of risk
  • software development shifts from coding to reviewing
  • understanding the real problem matters more than implementing technology fast

Seeing these ideas appear independently across different sessions was a strong signal.

AI is moving from experimentation into operational reality. Once AI becomes operational, the challenges change — as I already described in April 2024.

References and Sessions

This post is based on my personal notes and reflections from the AI Conference Science × Business / AICon26 in Heidelberg on April 14–15, 2026. The official conference agenda described the two days as Vision Day and Impact Day.

The official conference framing was useful for this post:

  • Vision Day focused on trends, technological development, and the long-term impact of AI on business, industry, and society.
  • Impact Day focused on implementation, high-value use cases, integration into existing systems, and scaling AI solutions across teams and processes.

The following sessions and speakers influenced the observations in this post:

  • Martin Förtsch and Thomas Endres — “Was bei der Einführung von KI wirklich zählt” (“What really matters when introducing AI”)
  • Hans-Petter Dalen — “Operationalise AI faster, better, more reliably and completely trusted”
  • Falk Borgmann, Dr. Nils Kaufmann, Fabian Schlier, Martin Mayr — “Ohne Daten keine KI: Mehrwert schaffen zwischen Datenstrategie, Souveränität und Datenschutz” (“No AI without data: creating value between data strategy, sovereignty, and data protection”)
  • Florian Kieser — “Künstliche Intelligenz erfolgreich anwenden” (“Applying artificial intelligence successfully”)
  • Manuel Haupt — “Conversational AI”
  • Anna Windisch — QUMEA GmbH / contactless patient monitoring reference
  • Matthias Blatz — “Auswirkungen von KI auf Rechenzentren und IT-Infrastruktur” (“The impact of AI on data centers and IT infrastructure”)
  • Fiona Sailer — “Wie verändert Künstliche Intelligenz unseren Wasserverbrauch” (“How artificial intelligence changes our water consumption”)
  • Ralf Müller, Prof. Dr. Jürgen Hesser, Polina Galkin — “Wie wird Forschung marktfähig – Grüße aus dem Elfenbeinturm!” (“How does research become marketable – greetings from the ivory tower!”)
  • Polina Galkin — “The evolution of explainable AI: Between research and application”
  • Andreas Loroch — “Menschliches Vermögen im KI-Zeitalter: Die neue Führungsqualität” (“Human capability in the AI age: the new quality of leadership”)
  • Dr.-Ing. Karl-Michael Nigge — “Monetarisierung von KI – Wie aus Mehrwert Umsatz wird” (“Monetizing AI – how added value becomes revenue”)
  • Daniil Starikov — “From Volatile Energy Markets to Smart Production: Building Digital Products in Cement Industry”
  • My session — “KI verändert die Softwareentwicklung dramatisch: vom Coding zum Review” (“AI changes software development dramatically: from coding to review”)

Important additional personal exchanges:

  • Andreas Ediger — personal exchange around the “door handle instead of the whole door” metaphor.
  • Thomas Euler — inspiration reference for the vibe-coding experiment.

Table of contents:

  1. Conference Context: Vision Day and Impact Day
  2. Data Quality: The Real Foundation of AI Systems
  3. Operational Complexity: AI Systems Are Still Software Systems
  4. The Risk Landscape of AI Systems
  5. The Shift in Software Development
  6. Understanding the Problem First
  7. Summary
  8. Resources

1. Conference Context: Vision Day and Impact Day

The official conference structure was useful because it separated two perspectives.

The first day, Vision Day, focused more on trends, technological development, and the long-term impact of AI on business, industry, and society.

The second day, Impact Day, focused more on implementation, use cases, integration into existing systems, and scaling solutions across teams and processes.

This distinction also reflects a broader movement I observed during the conference.

The discussion around AI is shifting.

It is no longer only about what AI can do in theory.

The more relevant questions are now:

  • How do we use AI responsibly?
  • How do we integrate AI into real systems?
  • How do we evaluate the results?
  • How do we handle risks?
  • How do we create business value?
  • Who remains responsible?

This is why the conference confirmed one of my main impressions:

AI is not only about models.
AI is about systems.

2. Data Quality: The Real Foundation of AI Systems

A recurring topic during the conference was data quality.

This connected directly to the session “Ohne Daten keine KI: Mehrwert schaffen zwischen Datenstrategie, Souveränität und Datenschutz”.

For me, this was one of the most important confirmations.

Organizations often struggle with:

  • fragmented knowledge
  • duplicated internal tools
  • inconsistent documentation
  • outdated information
  • unclear ownership of data
  • missing data governance

AI systems make these issues more visible; they do not remove them. If the data is inconsistent, the answers will also be inconsistent.

This idea connects closely to something I described in another post:

The container (the cup, meaning the AI interface or AI system) is not the most important part. The quality of the content inside determines the result. In other words:

Good data → useful AI
Bad data → confident but unreliable answers

This was my main takeaway from the data strategy panel. The official agenda lists “Ohne Daten keine KI…” as a Vision Day panel with Falk Borgmann, Dr. Nils Kaufmann, Fabian Schlier, and Martin Mayr. 

3. Operational Complexity: AI Systems Are Still Software Systems

Another discussion topic was the complexity of operating AI systems.

Many conversations moved away from pure model performance toward operational questions such as:

  • How do we monitor AI systems?
  • How do we evaluate results?
  • How do we manage changing models?
  • How do we handle unexpected behavior?
  • How do we integrate AI into existing systems?
  • How do we scale AI solutions across teams and processes?

These questions highlight an important reality:
Even with AI, we are still building software systems —
and software systems always introduce operational complexity.
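As a hedged sketch of what “evaluating results” can mean in practice, the following minimal harness runs a fixed set of prompts against a system and reports a pass rate. Keyword matching is a deliberately simplistic stand-in for real evaluation criteria, and the function names and test cases are hypothetical.

```python
def evaluate(system, test_cases: list[tuple[str, str]]) -> float:
    """Run fixed prompts and report the share of acceptable answers."""
    passed = 0
    for prompt, expected_keyword in test_cases:
        answer = system(prompt)
        if expected_keyword.lower() in answer.lower():
            passed += 1
    return passed / len(test_cases)

# Example with a stand-in system that always gives the same answer:
cases = [("What is the capital of France?", "Paris"),
         ("Who wrote Faust?", "Goethe")]
rate = evaluate(lambda p: "Paris is the capital of France.", cases)
# rate is 0.5: the stand-in answers only the first question correctly.
```

Tracking such a rate over time, across model versions, is one concrete answer to the monitoring and “changing models” questions above.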

The core idea is that AI systems must be understood across multiple dimensions simultaneously:

  • model behavior
  • system architecture
  • operational environment

At the conference, this appeared again in different forms.

The session “Operationalise AI faster, better, more reliably and completely trusted” directly addressed the operational side of AI adoption.

The infrastructure discussion also supported this view. AI does not only need prompts and models. It needs compute, data, monitoring, governance, security, and operating discipline.

Once AI becomes part of production, the real work begins.

This observation is based on Hans-Petter Dalen’s session on operationalizing AI, which was listed on Vision Day in the official conference agenda. This perspective is also supported by Matthias Blatz’s session on AI infrastructure and by the Heidelberg iT article describing how AI applications change the requirements for data centers and IT infrastructure.

This also fits my post “Exploring the AI Operational Complexity Cube idea for LLM testing.”

4. The Risk Landscape of AI Systems

With operational systems comes risk.

The discussions pointed to several recurring risk categories:

  • Regulatory: compliance with regulations such as the EU AI Act
  • Reputational: incorrect AI output can damage trust and reputation
  • Operational: AI introduces new failure modes into production systems

This means AI systems must be designed differently from traditional applications.

Important aspects include:

  • monitoring
  • evaluation
  • human oversight
  • fallback mechanisms

AI systems cannot simply be deployed and forgotten.
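A minimal sketch of the last two points, human oversight and fallback mechanisms, might look like the wrapper below. The names (`answer_with_fallback`, `escalate_to_human`) and the confidence threshold are hypothetical, not taken from any session.

```python
import logging

def escalate_to_human(question: str) -> str:
    """Human-oversight path: route the request to a person instead of the model."""
    return f"Routed to a human reviewer: {question}"

def answer_with_fallback(question: str, ai_answer, threshold: float = 0.7) -> str:
    """Call an AI component, validate its result, and fall back if needed."""
    try:
        answer, confidence = ai_answer(question)
    except Exception:
        logging.exception("AI component failed")   # monitoring hook
        return escalate_to_human(question)         # fallback on failure
    if confidence < threshold or not answer.strip():
        return escalate_to_human(question)         # fallback on low confidence
    return answer
```

The design choice is that the fallback path is part of the system from the start, not an afterthought bolted on once something goes wrong in production.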

References:

NIST AI Risk Management Framework
https://www.nist.gov/itl/ai-risk-management-framework

It’s All About Risk-Taking: Why “Trustworthy” Beats “Deterministic” in the Era of Agentic AI

5. The Shift in Software Development

One of the most visible changes discussed during the conference is the impact of AI on software development.

AI tools are already capable of generating large amounts of code. This changes the role of developers.

The value moves from:

writing code → verifying systems

Developers increasingly focus on:

  • architecture
  • system understanding
  • reviewing generated code
  • evaluating AI outputs

This was also the topic of my talk during the conference:

“AI changes the Software Development Lifecycle dramatically: from coding to review.”

If machines generate code, humans must become better reviewers.

6. Understanding the Problem First

One small idea from a conversation during the conference stayed with me.

When solving problems with technology we often assume the solution must be a complex system. But sometimes the problem requires something much simpler.

In a discussion with Andreas Ediger we used a metaphor:

Instead of building a whole door, sometimes you only need a door handle.

The challenge is not implementing technology.

The challenge is understanding the real problem.

7. Summary

Looking back at the conference, one thing became very clear: the discussion around AI is evolving. The important questions are no longer only about:

  • bigger models
  • faster GPUs
  • better benchmarks
  • more impressive demos

Instead, the focus is shifting toward:

  • data quality
  • operational complexity
  • engineering discipline
  • risk management
  • business value
  • responsible system design
  • human accountability

AI is moving from experimentation into real operational environments. Once AI becomes part of real systems, the real work begins. For me, the conference did not create a completely new view. It confirmed patterns that were already visible.
That is why the title fits:

When observations become confirmation.

The future of AI will not be decided by models alone — 
but by the systems we build around them.

8. Resources

  1. AI Conference – Science × Business
    https://www.dcyphr.io/conference
  2. Exploring the AI Operational Complexity Cube idea for LLM testing
    https://suedbroecker.net/2025/03/24/exploring-the-ai-operational-complexity-cube-idea-for-llm-testing/
  3. European Commission – EU AI Act
    https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
  4. NIST AI Risk Management Framework
    https://www.nist.gov/itl/ai-risk-management-framework
  5. Stanford AI Index Report
    https://aiindex.stanford.edu/report/

Note: This post reflects my own ideas and experience; AI was used only as a writing and thinking aid to help structure and clarify the arguments, not to define them.


#ArtificialIntelligence, #AISystems, #AIEngineering, #DataQuality, #AIinProduction, #ResponsibleAI, #SoftwareEngineering
