Finbar O’Hanlon, founder of ION Video (ASX: IOV) and original inventor of its core technology, has re-emerged with a compelling tech demo and a clear thesis: video is the last major medium that AI cannot truly work with, and ION has built the infrastructure to change that.
In his first official communication since returning to lead the company, O’Hanlon presented what he calls “the world’s first prompt-to-virtual-video interface” — a system that allows AI to dynamically assemble video content from existing rendered files without re-editing or re-rendering.
The Problem
Video accounts for over 80% of global internet traffic and represents the largest repository of human knowledge and creativity ever assembled, yet it remains opaque to intelligent systems.
AI can label, describe, and timestamp video, but it cannot restructure or recompose it dynamically.
O’Hanlon calls this the “sealed artifact” problem — once rendered, a video becomes a static file that AI cannot build with.
Re-editing and re-rendering introduce cost, latency, rights complexity, and fragmented workflows.
As AI shifts from responding to prompts to composing experiences, video has become the bottleneck.
The Solution
ION’s approach is virtualisation.
The technology strips rendered video down to a lightweight manifest — a 52-megabyte file becomes a 49-kilobyte virtual representation — separating structure from samples so video can be addressed, sequenced, and assembled on the fly.
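To make the idea concrete, a manifest of this kind might look roughly like the sketch below. The shape and every field name here are guesses for illustration only, not ION’s actual format:

```python
from dataclasses import dataclass, field

@dataclass
class Sample:
    """One addressable slice of a rendered source file (hypothetical schema)."""
    source_uri: str    # the original rendered file, left untouched
    start_ms: int      # where the slice begins in the source
    end_ms: int        # where it ends
    labels: list[str]  # machine-generated tags, e.g. "chopping", "plating"

@dataclass
class VirtualManifest:
    """Lightweight stand-in for a rendered video: structure without the bytes.

    The heavy media samples stay in the source file; the manifest only
    records how to address them, which is why it can weigh kilobytes
    while the video it describes weighs megabytes.
    """
    title: str
    duration_ms: int
    samples: list[Sample] = field(default_factory=list)
```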
In the live demo, O’Hanlon used voice prompts to generate personalised cooking compilations from multiple source files in seconds, with no manual editing.
A follow-up prompt refined the output further, stripping dialogue and non-essential content to deliver only the cooking process and final dish.
The backend console showed how quickly files are virtualised, and how their internal structure is exposed so AI can sequence video samples like building blocks.
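In principle, an agent that can read such manifests assembles a new sequence simply by filtering and ordering sample references; nothing is decoded or re-rendered. A minimal sketch, continuing the hypothetical schema above:

```python
def assemble(manifests: list[VirtualManifest], wanted: set[str]) -> list[Sample]:
    """Build a play sequence from samples whose labels match the prompt.

    Returns an ordered list of sample references; a player would stream
    each slice directly from its source file. No media is copied or
    re-encoded -- the output is itself just structure.
    """
    sequence = []
    for manifest in manifests:
        for sample in manifest.samples:
            if wanted & set(sample.labels):
                sequence.append(sample)
    return sequence

# e.g. a prompt like "just the cooking process and the final dish" might
# map to: assemble(all_manifests, {"cooking", "plating"})
```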
Key Questions for the Road Ahead
++Securing platform partnerships++
ION positions itself as a “Dolby for programmable video,” sitting beneath AI platforms and video networks.
O’Hanlon pointed to Google’s Gemini, which can reason across YouTube but still cannot assemble personalised video without traditional editing.
The direction these platforms are heading validates the market need; translating that alignment into formal integration commitments will be a key milestone for ION.
++Content coherence at scale++
Scene-level assembly worked well for recipe compilations.
As use cases grow more complex — narrative storytelling, education, entertainment — the demands on transitions, audio continuity, and contextual flow will increase.
This signals where ION’s technology will need to evolve.
++Monetisation model++
O’Hanlon spoke about shifting value “from files to moments” — the idea that content owners could monetise at the scene level rather than the file level.
It is an exciting concept, and building it out into concrete pricing and revenue-sharing frameworks will be important for market confidence.
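One way to picture “moments” as the unit of account: because an assembled sequence keeps a pointer from every slice back to its source, playback could emit per-scene usage records that roll up to rights holders. Again a hypothetical sketch continuing the schema above, not a disclosed ION mechanism:

```python
from collections import defaultdict

def scene_level_usage(sequence: list[Sample]) -> dict[str, int]:
    """Tally milliseconds played per source file, at sample granularity.

    Because the assembled output is structure rather than media, every
    slice retains its origin -- which is what would let owners be paid
    for the moments actually used rather than for whole files.
    """
    usage = defaultdict(int)
    for sample in sequence:
        usage[sample.source_uri] += sample.end_ms - sample.start_ms
    return dict(usage)
```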
++Intellectual property++
New patent filings extend ION’s original foundation into the AI era.
In a market where major technology companies are racing to define the next content interface, well-timed IP is a genuine strategic asset that could prove critical as the category matures.
Why This Matters
ION is targeting one of the most significant infrastructure gaps in the AI landscape.
Google’s push toward personal intelligence with Gemini and the broader shift toward intent-driven experiences directly validate the market ION is building for.
O’Hanlon’s return, the strengthened patent position, and a working demo mark a meaningful step forward.
The challenges ahead—scale, partnerships, and commercial model—are the natural next stages for any infrastructure company moving from proof of concept to market.
If ION can execute, it is well positioned to become the connective layer between intelligent systems and the world’s video archives.
