It feels like one of those moments when you hear "video killed the radio star." Except this time, no radio died. Video was reborn.
In the span of days, two launches reshaped what video even means. Synthesia 3.0 unveiled video agents, lifelike avatars, AI copilots, and interactive courses—video not as a clip but as a system. Then Sam Altman announced Sora 2, describing it as a “ChatGPT for creativity,” a social platform where anyone can spin ideas into video, complete with character consistency and playful cameos. One leans toward the enterprise layer, the other toward culture and connection. Together, they form the outlines of a Video Renaissance.
I’ve been inside the video industry since 2017, back when I joined RealNetworks to run Latin America and later EMEA. We experimented with AI-driven analytics—facial recognition, pattern detection—the kind of work that instantly collided with law and ethics. That was the early wave: video as something you watched or measured. What just happened feels different. Now, video has become something you think with.
Philosopher Andy Clark’s extended mind thesis argues that cognition spills out of our heads and into the tools around us—our notebooks, our smartphones, our feeds. Synthesia and Sora extend that argument into motion and sound. Video stops being an output and becomes a prosthetic for reasoning, remembering, training, even entertaining. It’s cognition, exported; a mind, extended.
Synthesia’s keynote made the case on the professional side:
- Video Agents: Real-time, interactive colleagues that can quiz, roleplay, or interview, grounded in specific, vertical, even proprietary content and data.
- Avatars: Generated from prompts or images, but finally moving like real speakers.
- Copilot: A professional-grade video editor that writes scripts, pulls from knowledge bases, and suggests visuals.
- Interactivity 2.0: Polls, quizzes, hotspots—video that measures engagement instead of merely playing.
- Courses: Stitching it all into end-to-end training flows with analytics baked in.
Meanwhile, Altman’s Sora 2 paints the consumer horizon. If Synthesia is the operating system for the workplace, Sora is the playground. The launch post called it a “Cambrian explosion of creativity”—a world where anyone can drop themselves or their friends into video, remix ideas, and share them socially. But Altman also flagged the trepidation: the risk of AI video devolving into “Reinforcement Learning optimized slop feeds,” the dangers of deepfake misuse, the addictive pull of endless creation and consumption.
What ties these together isn’t just AI, it’s economics. The eclonomy—the emerging economy of information, knowledge, and learning (IKL)—will run on programmable video. Every economy has its medium. Steel powered the industrial age. Code powers the digital one. Video is becoming a pillar of the IKL age: enterprise knowledge systems, creative networks, and personal prosthetics for thought.
The implications are dizzying. Reid Hoffman has suggested that by 2034, the 9-to-5 job could dissolve into atomized gigs—skills deployed fluidly, across contexts, like apps on demand. Synthesia is building the infrastructure for those modular jobs. Sora is building the canvas for culture and identity. Both will challenge our sense of stability, and both may liberate or destabilize, depending on where you sit.
I marvel at the vision and am unsettled by the execution. For two years I’ve tracked digital clones and avatar labor. My benchmark study dreamed of a player that could stitch these ambitions together. This week, it emerged.
Video is no longer a file, an MP4 stuck in your inbox. It’s a colleague. A feed. An ecosystem. A prosthetic for thought. An extended mind. We don’t yet know if this Renaissance ends with a flourishing of creativity or with collapse into monoculture—but one thing is certain: video has finally been reborn.