My AI 'Coworker': A Simple Mindset Shift

Matthias Walter
24 October 2025
6 mins
ai · software-development · agents · llm · mental-models

A while back, I kept running into a point of friction when working with AI, and I couldn't quite put my finger on it. As someone with a background in development, I value determinism. I expect that if I provide the same input, I should get the same output. Every single time. Reproducibility is key.

But I found that Large Language Models (LLMs) don't work that way. I'd use the same prompt and get a slightly different result. This variability felt like a bug, a flaw that was causing frustration and slowing me down.

Then, I had a realization that shifted my entire approach: I was treating the AI like a strict function, but I should have been treating it like a collaborator.

My Realization: Variability Is a Feature, Not a Bug

The simple truth is that LLMs, like people, don't guarantee identical outputs for identical requests. Once I started approaching AI as a non-deterministic partner instead of a predictable machine, my frustration dropped and my results improved.
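This variability isn't magic: it comes from how LLMs sample their next token. As a minimal sketch (using a toy softmax sampler, not any real model's API), the same input scores produce a fixed answer at temperature 0 and varying answers at higher temperatures:

```python
import math
import random

def sample_token(logits, temperature):
    """Pick a token index from raw scores.

    temperature == 0 -> greedy argmax: deterministic, same input, same output.
    temperature > 0  -> softmax sampling: the same input can yield different outputs.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]

logits = [2.0, 1.5, 0.1]  # toy scores for three candidate tokens

greedy = [sample_token(logits, 0.0) for _ in range(5)]   # always index 0
sampled = [sample_token(logits, 0.8) for _ in range(5)]  # may differ run to run
```

The greedy path is the "strict function" I was expecting; the sampled path is what production LLMs actually do, which is exactly why identical prompts drift.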

To be clear, this isn't about lowering my standards for quality. It's about widening the acceptable path to reach that quality. I now operate with what I call an "expectation band." I aim for about 80% alignment with my core request, but I accept that the phrasing, naming, examples, or specific solution paths might differ with each run.

Structure is Fixed, Details are Fluid

The key to making this work is to differentiate between structure and details. I now focus on locking down the essential structure of any task. For a document, that means defining the outline, the sections and the acceptance criteria. For a piece of code, it's the overall software design, the method signature, and the required logic.

Inside that fixed structure, I allow the details to vary. I fully expect to see different solutions, architecture, examples, or implementation details across different runs, and I've come to see that as a feature, not a bug.
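One way to make "structure is fixed, details are fluid" operational is to check only the structure of each run and stay indifferent to the wording. A minimal sketch, assuming a hypothetical list of required document sections:

```python
# Hypothetical fixed outline: these section headings must appear in every draft.
REQUIRED_SECTIONS = ["Overview", "Implementation", "Testing"]

def meets_structure(draft: str) -> bool:
    """Accept any draft whose fixed outline is intact; wording is free to vary."""
    return all(section in draft for section in REQUIRED_SECTIONS)

# Two runs with completely different wording both pass the structural check.
draft_a = "Overview\nA brief summary.\nImplementation\nPath one.\nTesting\nUnit tests."
draft_b = "Overview\nTotally different prose.\nImplementation\nAnother approach.\nTesting\nOther tests."
```

The same idea applies to code: lock the method signature and acceptance tests, and let naming, helpers, and internal design vary between runs.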

The "Coworker" Mental Model

The most helpful mental model I've adopted is to treat an LLM as a coworker or, more accurately, as a team of distinct colleagues.

To make this more concrete, imagine giving the same feature request to three capable developers on your team. Each one can handle the technical implementation just fine. They will:

  • write the code
  • create the unit tests
  • and update the documentation.

In the end, the result from all three is acceptable, meets all requirements, and adheres to the coding standards established for the project.

However, if you were to review their work, you'd notice their individual styles:

  • The software design might differ slightly.
  • The exact naming of methods or what gets encapsulated in a function would vary.
  • The unit tests might be structured differently.
  • The documentation could be written with different words and sentence structures.

A reviewer who knows their work could see their "handwriting", their unique signature.

This is exactly how I now see working with an LLM. Each new run is like getting a solution from a different developer, each with its own signature. The only real difference is that I've learned I sometimes need to give the AI a little more room and flexibility than I might with a human colleague.

This variability in style doesn't mean a lack of standards. Just like every developer on a team, the AI must still follow the same project guidelines. For software design, we follow core principles like Separation of Concerns, and for documentation, the required sections and format are non-negotiable.

Putting the "Coworker" Model into Practice

So, how does this "team of coworkers" model work day-to-day? It led me to add one more technique to my workflow: the multi-sample strategy.

Instead of asking one person for one perfect draft or technical concept, I now treat it like a quick brainstorming session with multiple coworkers.

I run the same prompt multiple times, essentially giving the same instructions to several of my 'coworkers' to see the different, creative approaches they each come up with.

This gives me several strong candidates to work with. From there, I can either pick the best draft or merge the strongest parts of each one into a final result.

With every run, I get new ideas because I might get a solution I hadn't thought of myself. I can also identify where my initial request was too vague and start over with a more refined prompt.
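The multi-sample loop can be sketched in a few lines. Here `generate` is a stand-in for any non-deterministic LLM call (hypothetical, not a real API), and the scoring function is whatever quality signal fits the task:

```python
import random

def generate(prompt: str) -> str:
    """Stand-in for a non-deterministic LLM call (hypothetical)."""
    return random.choice([
        "short draft",
        "a medium-length draft with more detail",
        "a long draft covering edge cases and examples",
    ])

def best_of_n(prompt: str, n: int, score) -> str:
    """Multi-sample strategy: send the same prompt n times, keep the top draft."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

# Length as a toy quality score; in practice this could be tests passing,
# a rubric, or a human picking or merging the strongest candidates.
winner = best_of_n("Summarize the release notes", n=5, score=len)
```

Merging the strongest parts of several candidates works the same way, except the selection step becomes an editing step instead of a `max`.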

In the end, I've learned that AI output is variable by nature. Expecting strict determinism from it is counterproductive.

What's Your Take?

This "coworker" model has been very helpful for me, and I hope it will be helpful to you too. I'd love to hear from you: have you found a helpful metaphor for working with AI?


About Matthias Walter

Co‑Founder & CEO at run_as_root. I build e‑commerce and custom software with clean architecture, automation, and measurable results.