
So far, users have been captivated by Sora’s stunning ability to transform mundane prompts into gloriously baffling masterpieces. A request for “a man walking a dog on the beach” might yield a humanoid octopus dragging a tuba across a sandstorm while the sky rains lasagne. Naturally, fans call it “groundbreaking.”
Sora’s supporters—largely made up of tech bros who wear hoodies with the words *Prompt Engineer* embroidered in Comic Sans—insist this surrealism isn’t a bug but a feature. “This is what the future of storytelling looks like,” claimed Arjun Mehta, a Delhi-based NFT cinema critic and full-time chai-influencer. “Sure, the characters have seven fingers and melt into the pavement, but that’s intentional. It’s a metaphor for late-stage capitalism. Probably.”
Sora’s outputs have been embraced by the AI community with the enthusiasm usually reserved for startup pitch decks or overpriced oat milk. On forums like *Prompt Junkie* and *Render and Weep*, users share their AI-generated videos with comments like “Sora made my grandpa a jellyfish, 10/10” and “My wedding video now features a sky made of fish eyes. It’s perfect.” Some even sell these videos as NFTs under the title *This Is Not What I Asked For But I’ll Take It.*
But while Sora is basking in praise for what critics call “courageous disobedience to prompts,” rivals are not sitting idle. Pika Labs, Gen-2 by Runway, and Synthesia have all scrambled to catch up, launching what experts are calling the AI Video Hallucination Arms Race.
Pika Labs responded by training their model to interpret “office party” as “interdimensional rave hosted by dolphins wearing sunglasses.” Their spokesperson explained, “We noticed Sora was cornering the market on disturbing surrealism. So we doubled down. If the user asks for a cat, we give them a sentient macaroni with anxiety issues. Art, baby.”
Runway’s Gen-2 has gone one step further by deliberately including “temporal drift,” a charming euphemism for timeline confusion. Videos generated on their platform may begin with a birthday party and end with a Viking raid, often blending the two into a traumatic Pixar short. “It’s storytelling with spice,” a representative told FD Staff during a late-night video call where their avatar was stuck morphing between a llama and a fax machine.
Synthesia, known for their stiff, humanoid presenters who look like they just emerged from a 1998 Microsoft demo reel, has leaned fully into the uncanny valley. “Our users demand realism,” said Ravi Sharma, Chief of Lip Sync Misalignment, “but not *too* real. So we’ve developed what we call 'Facial Jazz'—a patented system where the AI occasionally forgets what expression it’s supposed to be making. It’s edgy. It’s disruptive. It’s confusing.”
The tech community, unwilling to admit flaws lest it tank their VC pitch, has rebranded these glitches with impressive creativity. Sora’s frequent object-morphing? That’s now “dynamic identity fluidity.” Gen-2’s randomly moving backgrounds? “Narrative elasticity.” Synthesia’s tendency to randomly glitch speakers into T-posing avatars? “Digital dominance posture.”
A Bangalore-based creative agency, *HyperHyperReality*, has already launched an AI-powered streaming platform called *OopsFlix*, showcasing only AI-generated oddities. Their top trending show, *Cooking with Grandma and Occasionally a Wolf*, was created with Sora and features a non-linear narrative arc, a grandmother who floats, and frequent cutaways to spaghetti dancing in reverse. It has a Rotten Tomatoes score of “eggplant.”
Across social media, creators are glorifying these AI hallucinations under hashtags like #SoraSzn, #GlitchGlam and #MadeYouLookTwice. Influencer-cum-philosopher Komal Bhatt, wearing 2000s cyberpunk goggles, declared in a 3-minute TikTok: “What is a narrative, if not a sequence of lies strung together by memory? AI videos don’t lie. They forget, dream, and remember—simultaneously. Like my ex.”
Despite the fanfare, not everyone is convinced. Some viewers complain that AI videos still struggle with basic physics and logic, with one viral clip from Gen-2 showing a dog walking a man, both of whom phase through a wall while humming the national anthem in reverse. When asked about these errors, a Gen-2 developer replied, “Look, if people wanted realism, they’d just go outside.”
There are growing rumours that OpenAI’s next Sora update will include a “Vishwakarma Mode,” allowing Indian users to add *“mythical cinematic chaos”* to any scene. A simple prompt like “traffic jam in Pune” could then feature characters from the Mahabharata debating GST while an auto rickshaw levitates and bursts into poetry. The feature is already in beta and has been described by testers as “unusable and essential.”
Startups are springing up faster than Deepika Padukone’s outfit changes at Cannes, each promising the ultimate AI video tool. One Hyderabad-based company, *MasalaMotion.AI*, guarantees “full Tollywood energy” in every video, complete with wind-blown hair physics, randomly exploding background items, and the occasional choreographed monsoon dance sequence—even in funeral scenes.
Meanwhile, Delhi’s *KyaSceneHai.ai* has pioneered a platform that turns everyday security footage into dramatic AI-rendered thrillers. A leaked test video shows a man buying samosas from a roadside stall, reinterpreted as a heist scene with dialogue like “They took my chutney. Now I take theirs.”
Traditional filmmakers are reportedly nervous but not entirely dismissive. A few directors have even started outsourcing their storyboarding to Sora, hoping its hallucinogenic flair might spice up stale remakes. Rumours suggest the next Rohit Shetty film may include AI-generated vehicles that explode before entering frame, as a statement on existential acceleration.
But not all are laughing. An anonymous editor from a Mumbai VFX house, currently recovering from a failed attempt to edit a Sora-made wedding video, confessed to FD Staff: “We tried to replace a glitching bride with the correct face. But every frame kept shifting. One moment she’s crying, next she’s flying. Eventually, the groom married a traffic cone. It’s now considered canon.”
Still, the market seems undeterred. Global brands are already commissioning AI ads based on this “weird is wonderful” philosophy. A soda commercial generated by Pika Labs—featuring a squirrel doing taxes while drinking cola on Mars—has won eight marketing awards, two of which were invented just to honour the ad’s “commitment to disorientation.”
Some tech insiders whisper that all this weirdness is secretly a way to make users lower their expectations so drastically that even a 12-second clip of a bicycle not melting becomes Oscar-worthy. One venture capitalist even told FD Staff, off the record and halfway through a matcha protein shake, “Reality is broken anyway. Why not capitalise on it?”