LLMs as a Direct Interface to Event Streams

Throughout this site, we have explored a structured pipeline: events flow into projections, projections become features, features feed models, and models produce predictions. This approach works well for production-grade, automated AI systems.

But what if you could simply ask your event store a question – in plain language – and get an answer?

Large Language Models (LLMs) open up a fundamentally different way of working with event data. Instead of building a pipeline first, you can start a conversation with your event store – exploring patterns, investigating anomalies, and generating insights interactively.

Beyond the Pipeline

The classical pipeline is designed for repeatability and scale: once set up, it runs continuously, producing consistent outputs. It is the right choice for automated decision-making, production models, and large-scale analytics.

But not every question requires a pipeline. Many valuable questions are ad hoc: a product manager investigating an unusual trend, a domain expert exploring whether a new event pattern has emerged, or a developer prototyping a new analytical perspective.

For these cases, an LLM that can directly access the event store offers a faster, more flexible path. You describe what you want to know, the LLM translates your intent into queries, retrieves the relevant events, and presents the results conversationally.

This is not a replacement for the pipeline approach – it is a complement. Pipelines handle production workloads; conversational access handles exploration and discovery.

Why Events and LLMs Fit Together

Events are naturally suited to LLM interaction, for several reasons:

  • Business language – events like BookBorrowed, LateFeeIncurred, or MembershipRenewed are already expressed in terms a human (and an LLM) can understand. There is no translation layer needed between the data and the question.
  • Immutable facts – events represent things that actually happened, timestamped and ordered. This gives the LLM a reliable foundation for reasoning, rather than mutable state that might be inconsistent.
  • Rich context – each event carries metadata (who, what, when, why) that helps the LLM understand not just what happened, but the circumstances around it.
  • Temporal structure – the ordered nature of event streams aligns well with how LLMs process narratives and sequences.

These properties mean that an LLM does not need extensive preprocessing or feature engineering to work with event data – it can engage with the raw event stream directly.
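To make these properties concrete, a single event in the library domain might look like the following sketch. The field names and the `Event` structure here are illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)  # frozen mirrors the immutability of recorded events
class Event:
    event_type: str           # business language: "BookBorrowed", "LateFeeIncurred", ...
    occurred_at: str          # ISO-8601 timestamp gives the stream temporal order
    data: dict[str, Any]      # what happened
    metadata: dict[str, Any]  # who, why, and the surrounding circumstances

event = Event(
    event_type="LateFeeIncurred",
    occurred_at="2024-03-18T14:05:00+00:00",
    data={"member_id": "m-417", "book_id": "b-9031", "fee_cents": 250},
    metadata={"caused_by": "BookReturned", "branch": "central"},
)

print(event.event_type)  # an LLM can read this record as-is, with no feature engineering
```

Because the record is already expressed in business terms, it can be handed to an LLM directly, exactly as stored.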

What Becomes Possible

When an LLM can access your event store, several new capabilities emerge:

  • Ad-hoc exploration – ask questions that no one anticipated when designing projections. No need to build a new read model first.
  • Conversational analytics – non-technical stakeholders can explore event data without writing queries or waiting for reports.
  • Rapid prototyping – test analytical hypotheses in minutes through conversation before investing in a full pipeline.
  • Incident investigation – interactively walk through sequences of events to understand what happened and why.
  • Event generation – describe a business scenario and let the LLM write well-structured events to the store, useful for testing, simulation, or bootstrapping.

Practical Example (Library Domain)

Consider a librarian who notices an uptick in late returns and wants to understand the pattern. In the pipeline approach, this would require:

  1. Defining the right projection (late returns by genre, time period, member category)
  2. Implementing and deploying the projection
  3. Waiting for the data to accumulate
  4. Querying the projection and interpreting the results
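To illustrate what step 1 involves, a late-returns-by-genre projection might be sketched as a simple fold over the event stream. The event shape and field names below are assumptions for illustration:

```python
from collections import Counter

def project_late_returns_by_genre(events):
    """Fold a stream of event dicts into late-return counts per genre."""
    counts = Counter()
    for e in events:
        if e["type"] == "BookReturned" and e["data"].get("days_late", 0) > 0:
            counts[e["data"]["genre"]] += 1
    return counts

events = [
    {"type": "BookReturned", "data": {"genre": "graphic novel", "days_late": 3}},
    {"type": "BookReturned", "data": {"genre": "mystery", "days_late": 0}},
    {"type": "BookBorrowed", "data": {"genre": "mystery"}},
]
print(project_late_returns_by_genre(events))  # Counter({'graphic novel': 1})
```

Even this small fold has to be designed, implemented, deployed, and maintained before it answers a single question, which is the overhead the conversational approach avoids for one-off investigations.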

With an LLM connected to the event store, the librarian can simply ask:

"Which genres had the most late returns last quarter? Were there any spikes in specific weeks?"

The LLM translates this into the appropriate queries against the event store, retrieves the matching events, analyzes the patterns, and responds conversationally – perhaps noting that graphic novels had a 40% increase in late returns during the last two weeks of the quarter, coinciding with a school holiday period.
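The query the LLM generates behind the scenes might resemble the following sketch. The in-memory filter, the event shape, and the field names are assumptions rather than a specific event-store API:

```python
from collections import defaultdict
from datetime import date

def late_returns_by_genre_and_week(events, start, end):
    """Count late returns per (genre, ISO week) within [start, end)."""
    buckets = defaultdict(int)
    for e in events:
        if e["type"] != "BookReturned" or e["data"].get("days_late", 0) <= 0:
            continue
        day = date.fromisoformat(e["occurred_at"][:10])
        if start <= day < end:
            buckets[(e["data"]["genre"], day.isocalendar().week)] += 1
    return dict(buckets)

q1 = [
    {"type": "BookReturned", "occurred_at": "2024-03-25T10:00:00Z",
     "data": {"genre": "graphic novel", "days_late": 5}},
    {"type": "BookReturned", "occurred_at": "2024-02-10T09:00:00Z",
     "data": {"genre": "mystery", "days_late": 2}},
]
print(late_returns_by_genre_and_week(q1, date(2024, 1, 1), date(2024, 4, 1)))
```

The librarian never sees this code: the LLM writes it, runs it against the retrieved events, and summarizes the buckets conversationally.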

The librarian can then follow up naturally: "How does that compare to the same period last year?" – without any additional infrastructure.

If this analysis proves valuable enough to run continuously, it can then be formalized into a proper projection and pipeline. The conversational exploration served as a fast prototyping step.

AI/ML Considerations

Understanding where each approach excels helps you choose the right tool for each situation:

  • Exploration vs. production – conversational LLM access is ideal for ad-hoc questions and prototyping; pipelines are built for automated decisions at scale.
  • Setup time – an LLM can start answering questions immediately, while pipelines require design and implementation upfront.
  • Consistency – LLM responses are non-deterministic and may vary; pipelines produce highly consistent, reproducible outputs.
  • Audience – conversational access serves domain experts, analysts, and developers; pipelines feed automated systems and dashboards.

The two approaches are complementary. Use conversational access to discover what matters, then build pipelines to operationalize what you have found.

Best Practices

  • Govern write access carefully – enforce schema validation and ensure every LLM-generated event is clearly attributed. See Event Evolution and Schema Management for guidance on maintaining event integrity.
  • Maintain audit trails for every LLM interaction – what was queried, what was returned, and what actions were taken. This aligns with the principles in Privacy, Compliance, and AI Ethics.
  • Treat conversational insights as hypotheses – validate important findings through formal analysis before acting on them.
  • Scope access appropriately – apply the same controls you would use for any other interface to your event store.
  • Use structured event schemas – the more consistent and well-documented your events are, the better an LLM can reason about them.
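The first two practices above, governed writes and audit trails, can be sketched as a thin gate in front of the append operation. The schema table, attribution fields, and in-memory stores below are illustrative assumptions:

```python
from datetime import datetime, timezone

# Hypothetical per-type schemas; a real system would use a schema registry.
REQUIRED_FIELDS = {
    "BookBorrowed": {"member_id", "book_id"},
    "LateFeeIncurred": {"member_id", "book_id", "fee_cents"},
}

event_store = []  # stand-in for the actual event store
audit_log = []    # in a real system, a durable append-only log

def append_llm_event(event_type, data, conversation_id):
    """Validate an LLM-generated event, attribute it, and record an audit entry."""
    if event_type not in REQUIRED_FIELDS:
        raise ValueError(f"rejected unknown event type: {event_type}")
    missing = REQUIRED_FIELDS[event_type] - data.keys()
    if missing:
        raise ValueError(f"rejected {event_type}: missing fields {missing}")
    event = {
        "type": event_type,
        "data": data,
        "metadata": {
            "source": "llm",                     # clear attribution
            "conversation_id": conversation_id,  # ties the event back to the chat
            "recorded_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    event_store.append(event)
    audit_log.append({"action": "append", "event_type": event_type,
                      "conversation_id": conversation_id})
    return event

append_llm_event("BookBorrowed", {"member_id": "m-1", "book_id": "b-2"}, "conv-42")
```

Malformed or unknown events are rejected before they reach the store, and every accepted write carries both attribution and an audit entry.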

By combining the structured rigor of event-sourced pipelines with the flexibility of conversational LLM access, you get a system that supports both continuous automation and on-demand exploration – meeting the needs of technical and non-technical stakeholders alike.

Next up: Return to Closing the Loop and review how all these elements fit together into a continuous learning and improvement cycle.