The HYPE Innovation Blog

AI Opportunities & Challenges: Effective Governance & Delivering Real Value

Written by Colin Nelson | Apr 27, 2026


A session with industry leaders, led by Colin Nelson, on how organizations are exploring and applying AI in real business contexts.

What does it actually take to turn AI from experimentation into real business impact?

In this knowledge session, Colin Nelson hosted a discussion with two practitioners working directly on AI initiatives across very different environments:

Gavin McClafferty, AI Portfolio Lead at Subsea7

John Toon, Head of Technology at HLB

Drawing on their experience working inside complex, global organizations, they shared how they are approaching AI beyond the hype — from surfacing use cases and running experiments to navigating governance, culture, and scaling what works.

Rather than focusing on theory alone, the session explored what happens when organizations try to operationalize AI, where things break down, and what it takes to start delivering measurable value.

Use this summary for a fast read or dive into the full session to explore the discussion, real-world examples, and key takeaways in more detail.

Why AI Initiatives Stall Before Delivering Real Business Value

Many organizations have already started exploring AI. Pilots are running. Teams are experimenting. Tools are being tested across different functions.

But moving from that early momentum to real, measurable impact is where things start to break down.

A few patterns show up consistently:

  • Too many disconnected experiments
    Teams explore AI in silos, often solving similar problems without visibility across the organization.

  • No clear link to business priorities
    Interesting use cases emerge, but they are not always tied to strategic outcomes or measurable value.

  • Uncertainty around where to focus
    With so many possibilities, organizations struggle to decide what to scale, what to stop, and what actually matters.

  • Governance slows things down or blocks progress
    Concerns around data, risk, and compliance create friction, especially in regulated environments.

  • Early enthusiasm fades
    Without visible results, AI risks becoming another initiative that generates interest but not impact.

As Colin Nelson pointed out during the session, AI is not just another innovation topic. It moves faster, is easier to access, and spreads quickly across organizations.

That combination creates both opportunity and risk.

Without structure, alignment, and clear direction, AI efforts don’t fail because of the technology. They stall because organizations struggle to turn exploration into execution.

From Hype to Real Use Cases

One of the biggest shifts organizations are making is moving away from broad AI discussions toward specific, testable use cases.

At Subsea7, Gavin McClafferty described how they approached this transition. Instead of starting with a rigid strategy, they focused on understanding what actually matters to the business.

They began by working closely with leadership to identify the challenges keeping teams up at night. From there, they translated those challenges into targeted AI opportunities and launched an internal campaign to surface ideas across the organization.

The response was immediate.

Ideas came from engineering, supply chain, HR, and beyond. The volume of engagement was higher than anything they had seen before. More importantly, it created a pipeline of real use cases grounded in day-to-day problems, not abstract concepts.

From there, the process became structured, which made it easier to move from curiosity to execution.

At HLB International, John Toon shared a similar shift. Early on, there was a temptation to explore everything at once. But that quickly became unsustainable.

Instead, the focus moved to experimentation with intent:

  • Start small
  • Test ideas quickly
  • Keep what works
  • Drop what doesn’t

This introduced a mindset that is not always natural in more traditional environments, especially where processes are structured and risk tolerance is low.

The result is not just better use cases, but better decisions.

AI stops being a broad ambition and becomes a series of concrete opportunities that can be tested, validated, and scaled.

What AI Actually Changes (Beyond Efficiency)

A lot of AI conversations start with efficiency. Faster processes. Lower costs. Less manual work.

Those gains are real. But they are not the most important shift.

What stood out in the discussion is that AI is enabling things that were simply not possible before.

At Subsea7, Gavin McClafferty shared how access to data has fundamentally changed. Like many organizations, they have vast amounts of information spread across systems. Historically, only a small fraction of that data could actually be used to inform decisions.

AI changes that.

Instead of manually searching, filtering, and connecting information, teams can now work with that data directly.

In one example, engineering data that would normally take significant time to analyze could be processed instantly, allowing teams to identify similar past projects, relevant experts, and key insights almost immediately.

This is not just faster. It changes how decisions are made.

At HLB International, John Toon highlighted a similar shift. Tasks that used to take days, like researching a new client or understanding an unfamiliar industry, can now be done in minutes with a level of depth that was previously difficult to achieve.

More recently, the impact goes even further:

  • Entire workflows can be automated end-to-end.
  • Multiple AI agents can handle different parts of a process.
  • Ideas can be turned into working prototypes in hours instead of weeks.

The result is a compression of time.

Work that once required coordination across people, tools, and systems can now happen in a fraction of the time, with fewer constraints.

But this also changes expectations.

When speed increases, the bar for value rises with it. Organizations are no longer competing on whether they use AI, but on how effectively they use it to unlock insights, make decisions, and act faster than before.

The People Problem for AI

The biggest challenge with AI is not the technology. It is how people respond to it.

Across organizations, reactions tend to sit on a spectrum. Some are fully bought in. Others are cautious. Some actively resist.

At Subsea7, Gavin McClafferty emphasized that this is not something to ignore. People are trying to make sense of a shift that is happening faster than anything they have experienced before. The role of leadership is not just to push adoption, but to bring people along in a way that feels clear and non-threatening.

Because without that, progress stalls.

There is also a more direct concern that shows up in many organizations:

If AI can do parts of my job, what happens to me?

At HLB International, John Toon addressed this head on. In many cases, AI is already capable of handling large parts of process-driven work. Ignoring that reality does not protect roles. It delays the moment when change becomes unavoidable.

What matters instead is how people adapt.

This is not a new pattern. Technology has been reshaping roles for decades. The difference now is the speed and visibility of the change.

That is where culture becomes critical.

As Gavin pointed out, organizations need to create an environment where people can experiment, learn, and build confidence with these tools. Not just use them, but understand where they add value and where they don’t.

Because AI adoption is not just a technical rollout. It is a mindset shift.

And without that shift, even the best initiatives struggle to scale.

What Works in Practice

Turning AI into real impact requires more than good ideas. It depends on how organizations structure, govern, and test those ideas over time.

What stood out in the discussion is that successful teams don’t treat AI as a side experiment. They build just enough structure to move quickly without losing control.

At Subsea7, Gavin McClafferty described a model that balances flexibility with oversight. A small core team works across the organization, connecting with different functions, accessing data, and advancing initiatives. Around that, a broader group of stakeholders helps evaluate and prioritize opportunities.

This creates two things at the same time:

  • Space to experiment
  • A clear path to scale what works

Ideas are not treated equally. They are tested.

  • Early concepts are explored quickly
  • Promising ones move into proof of concept
  • Strong candidates become minimum viable products
  • Only then do they move toward production

This fail-fast, learn-fast approach reduces risk while keeping momentum.

At HLB International, John Toon highlighted a similar need for structure, especially in more distributed environments. With multiple teams and varying levels of maturity, the focus shifts to creating clear frameworks rather than strict control.

That includes:

  • Guidelines for evaluating tools and vendors
  • Shared approaches to data, risk, and compliance
  • Training to help teams use AI effectively, not blindly

The goal is not to standardize everything. It is to make better decisions, faster.

Governance also plays a critical role. Not as a blocker, but as a filter.

Organizations that move forward successfully are the ones that:

  • Involve the right stakeholders early
  • Align use cases with business priorities
  • Treat AI initiatives as part of a broader portfolio, not isolated projects

Because without that alignment, even strong ideas struggle to go beyond the pilot stage.

In practice, the difference is clear.

Teams that experiment without structure create noise.

Teams that structure without experimentation create inertia.

The ones that combine both are the ones that start to see real results.

How It All Connects

These elements don’t work in isolation. They reinforce each other.

When these pieces come together, AI stops being a scattered set of initiatives and becomes something the organization can build on.

Without that connection, progress stays fragmented.

With it, organizations can move faster, make better decisions, and turn early experimentation into something that delivers lasting value.

In practice, successful AI adoption depends on aligning use cases, governance, experimentation, and people into a single, coordinated approach.

Quick Action Checklist

  • Identify a small number of high-impact use cases tied to real business problems.

  • Create visibility across AI initiatives to avoid duplication and misalignment.

  • Test ideas quickly through proofs of concept before committing resources.

  • Define a clear path from experiment to production.

  • Involve cross-functional stakeholders early in the process.

  • Establish simple governance principles around data, risk, and decision-making.

  • Invest in training so teams know how to use AI effectively.

  • Make outcomes visible to maintain momentum and credibility.

Want to Go Deeper?

If you’re planning your innovation year and want support with prioritization, portfolio clarity, engagement, or strategic listening, HYPE offers a free consultation and a short Innovation Management Assessment to benchmark maturity and identify prioritized next steps.