“What if your next favourite love story wasn’t just written by humans, but co‑crafted by algorithms?”
That’s the question Mumbai‑based director and lyricist Vivek Anchalia set out to answer when he spent over 1,500 hours collaborating with artificial intelligence to create Naisha—hailed as India’s first full‑length AI‑powered feature film. In this deep‑dive, we’ll take you behind the scenes of Naisha’s creation, explore the AI tools and techniques that brought its characters and worlds to life, and consider what this revolutionary approach means for storytellers everywhere.
Who’s Behind “Naisha”? Meet Vivek Anchalia
In 2024, Anchalia’s thriller Tikdam (Trickeries), set against a fictional hill town backdrop, garnered praise for its tight screenplay and atmospheric visuals. Yet, rather than ride that wave waiting for studio green‑lights, Anchalia felt a creative itch: “I didn’t want to mope around waiting for pitches to materialise,” he recalls. So he did what any restless filmmaker would—he turned to the tool on everyone’s mind: artificial intelligence.
What began as a modest AI‑generated music video soon revealed a far richer narrative. By iterating on prompts, visuals, and story arcs, Anchalia realised he had enough to craft a full‑length feature. Thus was born Naisha, a sweeping romantic tale that traverses eras and continents—all rendered through the lens of machine learning.
Why Did Vivek Anchalia Choose AI?
The Promise
- Speed & Cost: Traditional VFX can eat up massive budgets and months of work. AI‑driven effects, by Anchalia’s estimate, can cost less than 10% of conventional methods.
- Democratisation: Without deep pockets or big‑studio backing, indie creators can now conjure worlds—volcanic eruptions or sprawling palaces—with the same ease as dialogue scenes.
- Creative Partnership: Instead of replacing artists, AI can serve as an infinite intern, churning out storyboards, animatics, or set designs on demand.
The Puzzle
- Character Consistency: Generating the same face or costume across hundreds of shots remains a thorny challenge.
- Ethical Gray Areas: Voice cloning, de‑aging, and deepfake potential raise questions about consent and artistic integrity.
- Rapid Obsolescence: AI models evolve so quickly that techniques can date within months, forcing filmmakers into a constant tech‑chase.
Anchalia’s journey would tackle each of these puzzles head‑on—sometimes by brute persistence, other times by creative workarounds.
Vivek Anchalia's 1,500‑Hour AI Odyssey
“I spent more than 1,500 hours just wrestling with prompts and platforms,” says Anchalia. From dawn‑til‑dusk sessions tweaking lighting parameters to late‑night tests of voice‑cloning modules, he treated AI not as a magic bullet but as a demanding collaborator.
- Prompt Mastery: Breaking each scene into granular prompts—camera angle, mood, colour palette—he trained the system to ‘understand’ his vision.
- Iterative Refinement: Early outputs often felt uncanny or inconsistent. By feeding back results and adjusting parameters, the AI began to ‘learn’ Anchalia’s aesthetic.
- Cross‑Platform Synergy: No single tool did it all. He combined image‑generation engines for backgrounds, specialised VFX models for effects, and voice‑cloning APIs for dubbing (a rough sketch of this kind of chaining follows below).
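To make "granular prompts" and tool chaining a little more concrete, here is a minimal illustrative sketch. It is an assumption‑heavy mock‑up rather than Anchalia's actual pipeline: ScenePrompt, generate_frames, clone_voice, and the example scene are all hypothetical placeholders standing in for whichever generation and voice‑cloning services a filmmaker happens to use.

```python
from dataclasses import dataclass


@dataclass
class ScenePrompt:
    """One scene broken into the granular attributes a director might control."""
    description: str  # what happens in the shot
    camera: str       # e.g. "slow dolly-in, 35mm lens, eye level"
    mood: str         # e.g. "wistful, golden-hour nostalgia"
    palette: str      # e.g. "warm ambers with teal shadows"

    def to_prompt(self) -> str:
        # Collapse the structured fields into a single text prompt for a generator.
        return (f"{self.description}. Camera: {self.camera}. "
                f"Mood: {self.mood}. Colour palette: {self.palette}.")


def generate_frames(prompt: str) -> list:
    # Hypothetical stand-in for an image/video generation service.
    print(f"[image model] {prompt}")
    return []


def clone_voice(line: str, voice_id: str) -> bytes:
    # Hypothetical stand-in for a voice-cloning API.
    print(f"[voice model: {voice_id}] {line}")
    return b""


def render_shot(scene: ScenePrompt, dialogue: str) -> None:
    """Chain the two placeholder services: visuals first, then a voice track."""
    frames = generate_frames(scene.to_prompt())
    audio = clone_voice(dialogue, voice_id="lead_actor")
    print(f"assembled shot with {len(frames)} frames and {len(audio)} bytes of audio")


render_shot(
    ScenePrompt(
        description="Two lovers part on a rain-soaked street at dusk",
        camera="handheld tracking shot, shallow depth of field",
        mood="melancholic, longing",
        palette="desaturated blues with sodium-lamp orange",
    ),
    dialogue="A single line of farewell dialogue.",
)
```

The point is simply that each creative decision (camera, mood, palette) becomes an explicit, repeatable field rather than a one‑off sentence typed into a chat box.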
By year’s end, Anchalia had a proof‑of‑concept that felt more like a fully shot film than a rough animatic—and distributors took notice. Naisha is now slated for theatrical release in May–June 2025, with talks underway for pan‑India distribution.
Building the AI Toolbox
Here’s a peek at the arsenal behind Naisha:
| Stage | AI Application | Impact |
| --- | --- | --- |
| Storyboarding | Rapid animatic generation from text prompts | 80% time savings vs. manual sketches |
| Set & VFX | Generative backgrounds, digital set extensions | High‑end visuals at indie budgets |
| Camera Movements | Prompt‑driven virtual cinematography controls | Precise shot framing without a physical crew |
| Voice Cloning | Accent modulation, multi‑language dubbing | Consistent performances, cost‑effective ADR |
| De‑aging | Deepfake‑style age regression for actors | Seamless youth/flashback sequences |
| Soundtrack | AI‑assisted composition suggestions (mood‑based loops) | Inspiration for composers, faster revisions |
Anchalia’s favourite discovery? Some platforms now offer near‑real camera controls—zoom, dolly, focus pulls—purely via text prompts. “It felt like puppeteering a virtual camera,” he laughs.

Anatomy of the AI Workflow Behind Naisha
- Ideation & Script
  - Traditional writing, but enriched by AI‑generated moodboards.
  - Early feedback loops where the AI’s visuals sparked new plot ideas.
- Previsualisation
  - Generating 3–5‑second animatics per scene.
  - Rapid iteration: swap out backgrounds, tweak character positioning, all in under an hour (a rough sketch of this loop follows after the list).
- Production
  - No physical sets—every location was AI‑rendered.
  - Virtual cameras ‘shot’ the film based on Anchalia’s detailed prompts.
- Post‑Production
  - Voice recordings by actors over AI‑generated lip movements.
  - Final render scheduled just weeks before release to leverage the latest model updates.
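For a sense of what that rapid previsualisation iteration can look like in practice, here is a small illustrative sketch, again built on an assumed placeholder (generate_animatic) rather than any tool actually used on Naisha. It regenerates a short animatic for every combination of background and character blocking and keeps each draft as a numbered file for review.

```python
import itertools
from pathlib import Path


def generate_animatic(prompt: str, seconds: int = 4) -> bytes:
    # Hypothetical stand-in for a text-to-video animatic service.
    print(f"[animatic model] {seconds}s clip for: {prompt}")
    return b""


def previz_iterations(scene: str, backgrounds: list, blockings: list) -> None:
    """Render every background/blocking combination as a numbered draft clip."""
    out_dir = Path("previz") / scene.replace(" ", "_")
    out_dir.mkdir(parents=True, exist_ok=True)

    combos = itertools.product(backgrounds, blockings)
    for version, (background, blocking) in enumerate(combos, start=1):
        prompt = f"{scene}. Background: {background}. Blocking: {blocking}."
        clip = generate_animatic(prompt)
        # Keep every draft so earlier versions can be compared side by side.
        (out_dir / f"v{version:02d}.mp4").write_bytes(clip)


previz_iterations(
    scene="a farewell on a railway platform",
    backgrounds=["1920s colonial-era station", "futuristic maglev terminal"],
    blockings=["lovers face to face", "lovers separated by a crowd"],
)
```

Swapping a background then becomes a one‑line change to a list, which is roughly the kind of under‑an‑hour turnaround described above.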
By collapsing what once took months into days, Anchalia proved that AI can turbocharge indie filmmaking without sacrificing artistic depth.
Maintaining the Human Touch in Naisha
Despite its AI DNA, Naisha is far from a soulless experiment. Anchalia emphasises:
- Actors’ Performances: Human voice‑overs ensure emotional nuance.
- Music by Daniel B George: Four original tracks and the score were composed traditionally, lending warmth and texture.
- Additional Songs: Two more numbers by Pratik Yuti Ghosh and Ujwal Kashyap ground the film in authentic Indian musical sensibilities.
“AI did the heavy lifting on visuals, but the heart of Naisha still beats to human rhythms.”
This blend of machine efficiency and personal artistry is what Anchalia believes will define the next golden age of storytelling.
Challenges, Ethics, and Character Consistency
The Consistency Conundrum
Generating the same character look across hundreds of shots is notoriously tricky. Early tests produced variations—subtle changes in facial structure or costume details—that threatened narrative immersion. Anchalia’s solution was relentless iteration, but he predicts that soon character consistency will be a one‑click feature as AI models standardise.
Ethical Crossroads
- Voice Cloning: Who truly ‘owns’ a cloned performance?
- De‑aging & Deepfakes: When does creative enhancement become deception?
- Creative Credit: How should AI contributions be acknowledged in credits?
Renowned director Shekhar Kapur weighed in at a Goa panel:
“AI has a long way to catch up with human imagination… for AI, everything is certain; for us, creativity thrives on uncertainty, love, fear.”
Anchalia echoes this caution: AI should amplify human voices, not silence them.
Democratising Filmmaking: Empowering Indies
Perhaps the most exciting promise of AI is democratisation. Anchalia envisions a future where:
- First‑Time Directors: Pitch entire visual trailers instead of dusty script binders.
- Regional Storytellers: Craft high‑quality films without metro‑city studios.
- Budget‑Conscious Crews: Allocate resources to story and performance rather than technical overhead.
“AI will empower small filmmakers and democratise filmmaking in India,” Anchalia asserts, noting that the traditional studio‑star loop often shuts out fresh voices.
This could spark a renaissance of diverse, personal narratives—stories that might never have seen the light of day in the old system.
Global Context: AI in Hollywood and Beyond
India’s Naisha is not alone in pioneering AI cinema. Around the world:
- The Brutalist (2024): Used AI voice‑cloning tool Respeecher to refine Adrien Brody’s Hungarian dialogue after ADR fell short.
- De‑Aging in South India: Vijayakanth in GOAT (2024) and Mammootty in Rekhachithram (2025) were digitally rejuvenated for flashbacks.
- Hollywood Previsualisation: Major studios now pilot AI‑driven animatics to test visual styles before green‑lighting big budgets.
Yet controversies abound. The Brutalist sparked debate over authenticity and awards eligibility, with some arguing that AI‑enhanced performances muddy the line between human skill and machine assistance.
These global case studies underscore both AI’s potential to innovate and the importance of clear ethical guardrails.
Looking Ahead: The Next Frame of AI Cinema
What’s on the horizon for AI filmmakers?
- One‑Click Character Consistency: No more 1,500‑hour grind—just a simple toggle.
- Real‑Time Virtual Production: Directors walking through AI‑generated sets in VR before shooting.
- Adaptive Storylines: Films that adjust their narrative based on audience reactions, powered by sentiment‑analysis AI.
- Ethical Frameworks: Industry‑wide standards for AI crediting, consent, and creative ownership.
Anchalia is already exploring AI‑driven marketing campaigns that personalise trailers based on viewer profiles. “Why show the same trailer to everyone?” he asks.
Conclusion
Naisha stands at the crossroads of art and algorithm, proving that machine learning can be more than a gimmick—it can be a genuine creative partner. By marrying human emotion with computational power, Vivek Anchalia has opened a door to a future where every storyteller, regardless of budget or background, can paint with the full spectrum of cinematic tools.
So here’s my question to you: What story would you tell if you had AI at your fingertips? Share your wildest ideas in the comments below—because in the era of AI‑powered cinema, imagination truly knows no bounds.