Frank Kroondijk

Executive Perspectives

Direct answers to the questions I get asked most by C-suite leaders.

Why step away from management and become an advisor?

I stopped fighting my own wiring and focussed on my personal USP.

Complex technical problems make me come alive in a way that managing P&Ls simply does not. What I discovered over a long career in software is that people who can genuinely zoom out to board level, grasp the business case, the strategic stakes, the organisational dynamics, and then zoom all the way back in to write the code or design the architecture themselves, are extraordinarily rare. That dual range is where I operate.

It also means my value is fundamentally intermittent. I am not the person you hire to run your AI department. I am the person you bring in when something important is stuck, unclear, or moving dangerously in the wrong direction. When your gut tells you something is off, you call or message me. I analyse the business case, figure out the technology to actually realise it, build or stress-test the core of the solution, and when the hard problem is cracked, I hand off to your IT team or external provider and move on to the next challenge.

Think of it as a consigliere model. I work in the margins, outside the formal accountability chain, which means I can move fast, ask the questions nobody internally dares to ask, and 'hack' together a working prototype before the procurement committee has finished its first meeting. I am not here to write your governance policy. I am here to tell you whether your current AI trajectory will fail you within twelve months, and to show you a faster path if it will.

Why is autonomous AI the defining enterprise challenge right now, not just another technology cycle?

Because this shift breaks your operating model in ways that ERP and cloud migration never did, and most of your IT department likely does not yet grasp the depth of that break.

When you moved to the cloud, your org chart stayed intact. Autonomous AI agents do not augment your workflow, they replace entire decision layers and workflows within it. Your vendors will not tell you this clearly, because it makes the sale more complicated. Your internal IT team may not tell you this clearly, because it makes their role more uncertain. That is exactly the kind of uncomfortable truth an outside eye is positioned to surface.

The organisations that will lead in 2026 are not the ones that ran the most pilots. They are the ones that had someone in the room who had actually built these systems before, who knew which architectural decisions were irreversible, which vendor promises were fantasy, and where the real complexity was hiding.


Where is the real, defensible ROI, and how do I explain it to my board?

The ROI frame has shifted. In 2024, the conversation was about productivity gains, tasks done faster, headcount avoided. Your board has heard that story. In 2026, what actually moves the needle is different:

  1. Self-augmenting systems. Unlike conventional software, well-architected agents improve with operational exposure. The system running your procurement workflow in Q4 should be measurably sharper than it was in Q1, not because you reprogrammed it, but because it has iterated and learned across thousands of real decisions. That is a fundamentally different asset class from a software licence.
  2. Speed of strategic execution. The real ROI is not replacing individual tasks, it is compressing the time between a strategic decision and its operational reality. Enterprises that can iterate on strategy in days rather than quarters have a structural advantage that compounds. That is the number worth putting in front of your board.
  3. Eliminating pilot purgatory. McKinsey found that roughly 90 percent of high-value, function-specific AI use cases never make it out of pilot mode, killed by technical debt, organisational friction, and lack of architectural clarity. The ROI conversation should start with a brutally honest answer to one question: why are your pilots not reaching production? That answer is usually technical, not political. And it is usually fixable fast, if the right person is looking at it.

What are the four critical problems your enterprise must confront in 2026?

1. Your IT department is probably asking the wrong questions.

The most dangerous AI risk in 2026 is not a rogue agent. It is an enterprise that has deployed a layer of AI on top of fundamentally broken data and process architecture, declared victory, and moved on. When things go wrong, and they will, no one will be able to explain why, trace the failure, or fix it without rebuilding from the ground up. The hard question to ask your IT team is not 'are we compliant?' It is: 'if this agent makes a catastrophically wrong call next Tuesday, can you show me exactly why it happened within the hour?' Most teams cannot. IT is traditionally focussed on implementing expected use cases and all their edge cases. The true power of AI lies in completely rethinking that approach: making software goal-orientated and letting it figure out the most efficient way to reach the goal from the current state. For classically schooled IT professionals, that is a genuinely hard shift to grasp and to work with.
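To make that shift concrete, here is a minimal, hypothetical sketch of the difference. The restocking scenario and every function name are invented for illustration, not taken from any real system:

```python
# Contrast: procedural automation vs. goal-orientated control.
# All names and the scenario are illustrative only.

def procedural_restock(inventory: dict) -> dict:
    # Classical IT: every step and edge case is scripted in advance.
    # Anything the script did not anticipate simply does not happen.
    if inventory["widgets"] < 10:
        inventory["widgets"] += 50
    return inventory

def goal_orientated_restock(inventory: dict, goal_level: int = 10) -> dict:
    # Goal-orientated: state the target, then let the loop decide
    # what to do based on the *current* state, not a pre-scripted path.
    while any(qty < goal_level for qty in inventory.values()):
        item = min(inventory, key=inventory.get)          # worst gap first
        inventory[item] += goal_level - inventory[item]   # close that gap
    return inventory
```

The procedural version only handles the one item someone thought to code for; the goal-orientated loop keeps acting until the stated condition holds, whatever the starting state.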

2. Stateful, secure memory, the missing layer of enterprise AI.

Every agent that forgets who you are when you close the browser is not an enterprise tool. It is a demo. What enterprises need in 2026 is multi-level memory management: agents that carry institutional context, learn from cross-functional interactions, and maintain continuity across sessions, all within a data perimeter your security team can actually sign off on. This is the architectural difference between an AI assistant and an AI colleague, and most vendors are nowhere near delivering it. Personally, I hate explaining something twice to an AI, or being fooled by a surrogate sense of memory only to discover the system did not retain my business objectives or core requirements. That is where multi-level memory solutions come into play.
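A minimal sketch of what "multi-level" means in practice, assuming three scopes (session, user, organisation); the class and method names are hypothetical, not a real product's API:

```python
from dataclasses import dataclass, field

# Hypothetical layered agent memory: ephemeral session context sits on
# top of durable user and organisation context. Illustrative names only.

@dataclass
class LayeredMemory:
    session: dict = field(default_factory=dict)   # gone when the tab closes
    user: dict = field(default_factory=dict)      # your objectives, your style
    org: dict = field(default_factory=dict)       # institutional context

    def remember(self, key, value, level="session"):
        getattr(self, level)[key] = value

    def recall(self, key):
        # Most specific layer wins; fall through to broader context.
        for layer in (self.session, self.user, self.org):
            if key in layer:
                return layer[key]
        return None

    def end_session(self):
        # Only the ephemeral layer is discarded; the rest persists.
        self.session.clear()
```

The point of the design: closing the session wipes scratch context, but an objective stored at user level survives, so the agent does not need it explained twice.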

3. Multi-agent orchestration, the complexity your vendors are not warning you about.

Single-agent deployments are the easy part. The compounding value, and the compounding risk, emerges when you have agents operating across functions simultaneously: finance talking to operations, legal flagging customer success, procurement coordinating with logistics. Building that without clear handoff logic, conflict resolution, and failure containment is how enterprises end up with cascading errors that propagate at machine speed before any human can intervene. This is a systems design problem. It requires someone who understands your enterprise architecture as deeply as the AI capabilities sitting on top of it.
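One concrete containment pattern is a circuit breaker between agents: after repeated failures, an agent is isolated and work is escalated rather than passed downstream. A minimal, hypothetical sketch (the orchestrator, its agents, and the failure threshold are all invented for illustration):

```python
# Hypothetical multi-agent handoff with failure containment.
# Illustrative names; not any vendor's actual orchestration API.

class AgentError(Exception):
    pass

class Orchestrator:
    def __init__(self, max_failures: int = 3):
        self.agents = {}        # name -> callable(task) -> result
        self.failures = {}      # name -> consecutive failure count
        self.max_failures = max_failures

    def register(self, name, handler):
        self.agents[name] = handler
        self.failures[name] = 0

    def handoff(self, name, task):
        # Containment: an agent that keeps failing is circuit-broken,
        # so errors cannot cascade downstream at machine speed.
        if self.failures[name] >= self.max_failures:
            raise AgentError(f"{name} is circuit-broken; escalate to a human")
        try:
            result = self.agents[name](task)
            self.failures[name] = 0   # healthy again, reset the counter
            return result
        except Exception:
            self.failures[name] += 1
            raise AgentError(f"{name} failed; containing, not propagating")
```

The interesting design decisions are exactly the ones this sketch glosses over: who decides the threshold, where the escalation lands, and how in-flight work is unwound. That is the systems design problem.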

4. Your data is likely not AI-ready, and that is the actual bottleneck, not the model.

The uncomfortable truth behind most stalled AI initiatives: the constraint is almost never the AI. It is the fragmented, inconsistently defined, undocumented data architecture underneath it. Processes that live in spreadsheets and institutional memory do not become AI-ready by deploying agents on top of them. The fastest-moving enterprises in 2026 diagnosed this problem early and fixed the data layer first. If that diagnosis has not happened in your organisation, that is where I would start.


What does working with me actually look like?

Not a retainer. Not a transformation programme. Not a 47-slide strategy deck.

You bring me a specific challenge: a stalled initiative, an architectural decision with long-term consequences, a vendor claim that does not quite add up, a slick seminar presentation you want sanity-checked, a pilot that keeps failing for reasons nobody can articulate. I look at it with fresh eyes, tell you what I actually think, and either fix the core of it directly or give your team a clear enough picture that they can.

The organisations that get the most from this model are the ones willing to let someone bypass the usual filters. I cannot do useful work if every finding has to be softened for internal politics before it reaches the decision-maker. The value is in the honest directness.

You get a direct line: no gatekeepers, no assistants, direct contact via WhatsApp, Signal, email or phone. I am in the Lisbon time zone, one hour behind CET/Amsterdam.

If you need someone to run a multi-year governance implementation or manage a compliance programme, I am not your person, and I will tell you that upfront. But if you need to know, quickly and honestly, whether your current AI architecture will hold up under real operational conditions, that is exactly the conversation I am here for.
