Could the Next Big Game Be One That Doesn’t Exist Until You Ask for It?

TL;DR: Artificial intelligence is set to revolutionize gaming by potentially creating entire video games on the fly from simple text prompts. Today, AI is already making games more immersive with smarter NPCs and procedurally generated content. In the future, you might just ask for your dream game and watch it come to life. But while the idea is exciting, there are still big technical hurdles to overcome – from making AI worlds truly interactive and coherent, to keeping latency low for real-time play. For now, AI is a powerful helper in game development, but fully AI-generated games that respond to every move are still on the horizon.

Imagine a video game that doesn’t even exist until you ask for it. Sounds crazy, right? But with rapid advances in artificial intelligence, especially generative AI, this wild idea is starting to feel almost possible. AI is transforming how we play and create games in real time. It’s already helping developers design worlds and characters, and it might one day build entire games from scratch based on a few words you type.

In this article, we’ll explore what’s happening right now with AI in gaming and where it could go next. We’ll look at current real-world examples – like AI making game characters smarter and worlds bigger – as well as mind-blowing future scenarios – like an AI conjuring up a custom game just for you. We’ll keep things super simple and straightforward (no heavy tech jargon here) so that whether you’re a gamer or a game developer, you’ll get the full picture of this exciting AI-driven gaming revolution.

AI in Gaming Today: Smarter NPCs and Procedural Worlds

AI is already changing the way games are made and played, even if we don’t have on-demand AI-generated games just yet. Here are a few ways AI is being used in games right now in 2025:

  • Smarter NPCs and Conversations: Non-player characters (NPCs) in games are getting brainier thanks to AI. Remember those boring NPCs that repeat the same one-liner every time you walk by? That’s changing. Game developers and modders have started using AI language models to give NPCs more dynamic dialogue. For example, there’s a mod for Grand Theft Auto V that hooks up NPC dialogue to ChatGPT – meaning you can literally have a free-form conversation with an NPC, and they’ll respond with unique lines. It’s like NPCs suddenly went from scripted robots to improv actors. Some projects even let NPCs remember your past interactions and respond accordingly, making conversations feel much more real and unscripted.
  • Procedural Generation on Steroids: Gamers have seen procedural generation in titles like Minecraft or No Man’s Sky (which creates endless planets with code). Now, generative AI is taking procedural content to the next level. AI can analyze tons of game maps, textures, and level designs, then generate new ones that feel natural. This could mean infinite unique levels or landscapes crafted by AI rather than by hand. For instance, developers are experimenting with AI-generated dungeons, cities, and even whole planets. The idea is that an AI could learn what makes a level fun or a landscape believable, and then create new ones on the fly – saving developers time and surprising players with endless variety.
  • AI-Assisted Game Art and Audio: Creating all the art and sound for a big game is a huge job. AI is stepping in as a helper here too. Tools like DALL-E or Stable Diffusion (for images) and advanced text-to-speech models (for voice acting) are being used to generate concept art, textures, or even placeholder voice lines. Big studios like Ubisoft have introduced AI tools (like Ubisoft’s Ghostwriter program) to generate first drafts of NPC dialogue “barks” (those one-liner shouts NPCs make during events). This doesn’t replace human writers, but it gives them a head start. It’s super useful for filling a massive open world with lots of ambient chatter. Similarly, indie developers might use AI to quickly create artwork for their game or to voice minor characters without hiring an actor for every line.
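The NPC-memory idea above can be sketched in a few lines of code. This is a minimal illustration, not code from any real mod: `generate_reply` is a hypothetical stand-in for a call to a language model, and "memory" is just a capped list of past exchanges prepended to the prompt.

```python
# Minimal sketch of an NPC that "remembers" past interactions.
# generate_reply is a hypothetical stand-in for a real language-model call.

class NPC:
    def __init__(self, name, persona, memory_limit=20):
        self.name = name
        self.persona = persona          # e.g. "a grumpy shopkeeper"
        self.memory = []                # past (player_line, npc_line) pairs
        self.memory_limit = memory_limit

    def build_prompt(self, player_line):
        # Prepend the persona and recent history so the model stays in
        # character and can refer back to earlier conversation.
        lines = [f"You are {self.name}, {self.persona}."]
        for player, npc in self.memory[-self.memory_limit:]:
            lines.append(f"Player: {player}")
            lines.append(f"{self.name}: {npc}")
        lines.append(f"Player: {player_line}")
        lines.append(f"{self.name}:")
        return "\n".join(lines)

    def talk(self, player_line, generate_reply):
        prompt = self.build_prompt(player_line)
        reply = generate_reply(prompt)  # would call an LLM in a real mod
        self.memory.append((player_line, reply))
        return reply
```

Because each new prompt carries the recent history, the NPC can react to something you said minutes ago – which is exactly what makes these modded conversations feel unscripted.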

In short, AI is already woven into the game development process in 2025, making games smarter, bigger, and helping speed up some of the grunt work. But what about the really futuristic stuff – like prompting an AI to make you a whole game? Let’s dive into that, because things are about to sound like science fiction (that’s slowly turning into science fact).

Grand Theft Auto VI: A Gold Mine for Training Game AI

One interesting idea floating around is that huge open-world games could become training data to teach AI how to build virtual worlds. Think about Grand Theft Auto VI (GTA 6), one of the most anticipated games ever. GTA games create incredibly rich cities with tons of detail – realistic physics, hundreds of unique NPCs walking around, dynamic weather, you name it. If you’re an AI trying to learn how a “world” works, a GTA game is like a jackpot of data.

Tech experts speculate that a game as complex and detailed as GTA 6 could be a “data gold mine” for AI models learning to simulate worlds. Every street corner in GTA, every pedestrian interaction, every car crash or police chase – it’s all training material. Feed enough of that into a generative model, and theoretically the AI could start to understand patterns of how a living game world operates. That could inch us closer to AI that can produce GTA-like environments on demand.

And it’s not just GTA. Other major franchises – Assassin’s Creed, Call of Duty, The Elder Scrolls, you name it – have years’ worth of content showing how levels are built, how enemies behave, how stories progress. These big titles could serve as sandboxes for AI, teaching an algorithm what a stealth mission looks like, or how a medieval fantasy world is structured, or how a shooter game spawns enemies and set-pieces. Imagine an AI digesting all the Elder Scrolls games and then spitting out a brand new fantasy world that feels like Skyrim, but isn’t any place you’ve ever been – because it literally generates a new world map, new lore, new quests on the spot.

Of course, game companies aren’t exactly handing over their raw game data to AI researchers (there are legal and ethical hurdles there). But the concept is intriguing: the next leap in AI-generated games might come from training on the massive, detailed games we already have. GTA 6, with its sprawling world, might quietly be paving the way for AI models that think like a game world.

Beyond One Game: Franchises as AI Content Sandboxes

Let’s extend that idea beyond just GTA 6. Picture all the major game franchises we love becoming the training grounds for AI. The more data, the better an AI can learn, right? Here’s how a few famous series could influence AI-generated gaming down the line:

  • Assassin’s Creed: This series spans different historical settings – from Renaissance Italy to Ancient Egypt. An AI could learn what makes an Assassin’s Creed game tick (stealth gameplay, parkour-friendly cities, historical characters) and potentially generate a brand new historical stealth adventure. One day you might prompt, “Hey AI, give me a stealth game in feudal Japan with ninja assassins,” and it could cook up a playable world that feels like an Assassin’s Creed spin-off set in Japan.
  • Call of Duty: Decades of warzone settings, weapons, and mission styles could train AI on how a blockbuster shooter is structured. You could ask for a “Cold War spy thriller shooter with missions in Berlin,” and an AI might assemble some maps and missions on the fly, inspired by what it learned from CoD campaigns.
  • The Elder Scrolls / Fallout: Bethesda’s open-world RPGs are full of lore, quests, and interactive NPCs. An AI could learn from these how to create compelling quests or interesting NPC interactions. Imagine getting a fresh RPG world – new maps, new lore, new factions – just by requesting it. It could feel like a brand-new Elder Scrolls-esque game tailored to whatever setting or theme you want (desert kingdom, arctic tundra, steampunk city – you name it).

The key point: these beloved franchises provide a blueprint for what works in game design. They show AI examples of good level design, story arcs, and gameplay mechanics. If an AI is trained on enough of these, the hope is it can start generating something that hits those same notes.

Now, we’re not at the stage where you can say “make me a game like Skyrim but in space with zombies” and get a perfect result – not even close. But we can see the building blocks forming. Big franchises could inspire AIs the same way they inspire human developers and modders to create new experiences.

Can We Really Make a Game from a Text Prompt?

Right now, you might wonder: is it actually possible to create a whole game just by typing a prompt to an AI? Today, if you use an AI image generator, you can type “a castle on a hill at sunset” and get a pretty painting of a castle. So what if you typed “a stealth-based historical adventure in feudal Japan” and expected a full playable game to pop out?

At the moment, the short answer is no – we can’t do that yet. Generating a video of a game or a nice screenshot is one thing; generating a real, interactive game is a whole other level of complexity. Here’s why making a true playable game from a prompt is insanely hard:

  • It’s not just about visuals: A game isn’t just graphics or video; it’s about interaction. You need to move around, press buttons, make choices, and see the world respond logically. An AI can spit out a sequence of images (like a video of someone playing a game), but that doesn’t mean it has the underlying game logic. It might create the look of a feudal Japan stealth game, but is there an actual mission to complete? Can you really climb that castle wall, or is the AI just painting a picture of it?
  • Consistency and Memory: Games require keeping track of a lot of things. Did you pick up the key in room 1 so the door in room 2 opens? Did you anger an NPC an hour ago who might react differently later? These kinds of state-tracking and long-term effects are hard for current AI. AI models don’t naturally have a built-in memory of everything that happened in your play session. They’d have to somehow remember all your actions to keep the world consistent. If the AI “forgets” that you already defeated a boss, it might spawn the boss again or break the storyline. Maintaining that coherent state over hours of gameplay is a huge challenge.
  • Understanding Game Mechanics: Every game has rules and mechanics. Gravity should work consistently. Characters have health and abilities. Enemies follow certain patterns. These mechanics are like the laws of the game world. If an AI is just generating frames or scenes, it might not truly understand those underlying rules. You could end up with a beautiful scene that doesn’t actually play correctly – like a car that looks real but when you try to drive it, nothing happens, because the AI never learned what it means for a player to drive a car in-game.
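To see why state matters, here's a toy example of the bookkeeping a traditional game engine does explicitly – and that a frame-generating AI would somehow have to learn to do implicitly. The flag names are made up for illustration:

```python
# Toy world state: explicit flags a game engine tracks so the world
# stays consistent. A generative model has no such ledger unless it
# learns one, which is why it can "forget" a defeated boss.

class WorldState:
    def __init__(self):
        self.flags = set()

    def set_flag(self, flag):
        self.flags.add(flag)

    def has(self, flag):
        return flag in self.flags

def can_open_door(state):
    # The door in room 2 only opens if the key from room 1 was picked up.
    return state.has("key_room_1")

def should_spawn_boss(state):
    # A defeated boss must never respawn.
    return not state.has("boss_defeated")

state = WorldState()
assert not can_open_door(state)      # no key yet
state.set_flag("key_room_1")
assert can_open_door(state)          # key picked up, door works
state.set_flag("boss_defeated")
assert not should_spawn_boss(state)  # boss stays dead
```

A game tracks thousands of flags like these over hours of play; an AI generating the world frame-by-frame has to keep all of them straight without ever being told they exist.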

In summary, generating a coherent, interactive world is orders of magnitude harder than generating a simple video clip or image. It’s the difference between making a movie and making a video game: movies are passive (you just watch), games are interactive (you actively participate and the game world needs to respond).

This hasn’t stopped people from trying to move in that direction. We have some early experiments (more on those soon) that hint it’s possible to at least generate parts of a game environment via AI. But the dream of typing a prompt and getting a fully playable, polished game that reacts to your every move? That’s still science fiction as of 2025. We can dream about it and prototype it, but nobody’s cracked it yet.

Technical Hurdle: Latency and Real-Time Generation

Imagine for a second that an AI is churning out game content on the fly as you play. One big technical hurdle is latency – basically, delay. Games, especially fast-paced ones, need to feel responsive. If you press a button and there’s a noticeable lag before the game reacts, it feels terrible.

Now, consider if the game’s content (say the next area you’re entering, or the dialogue an NPC is about to say) is being generated by an AI in some cloud server the moment it’s needed. That generation process takes computing power and time. If it takes even a second to fabricate the next bit of world or come up with a response, you’ll feel that delay.

  • The 50 Millisecond Goal: In gaming, especially VR and online streaming, there’s a general goal to keep latency under ~50 milliseconds (0.05 seconds) for actions, because beyond that, people start to sense the lag. Current cloud gaming services (like streaming a game over the internet) try to minimize lag by sending pre-rendered frames as quickly as possible. If we add real-time AI generation to the mix, it’s like adding another layer of potential delay. The AI would have to generate content and then that content still has to stream to your device. Keeping all that under 50 ms is a massive ask with today’s tech.
  • Sheer Computing Power: These AI models aren’t light – they usually need powerful GPUs or specialized AI hardware to run. To generate high-resolution graphics quickly, you need serious horsepower. For example, Google DeepMind’s latest demo (called Genie 3) can generate simple interactive scenes at 720p resolution and 24 frames per second. Cool, right? But that’s with beefy hardware behind the scenes. To get something like a full HD or 4K game at 60 fps, you’d need orders of magnitude more computing power. And if you want it on a VR headset (which needs even higher frame rates), you’re in sci-fi supercomputer territory, at least for now.
  • Internet Speed and Infrastructure: Even if the AI on the server can whip up frames super fast, you still have to send those frames to the player’s device. That means you need a blazing fast internet connection with minimal lag. Many places in the world don’t have the kind of internet needed to make cloud gaming seamless, let alone cloud gaming with AI generation on the fly. If your connection stutters, the whole illusion breaks down (imagine the AI game freezing because your Wi-Fi hiccuped – immersion killer!).
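The latency math above is easy to sketch as back-of-the-envelope arithmetic. All the numbers below are illustrative round figures, not measurements from any real service:

```python
# Back-of-the-envelope latency budget for a hypothetical cloud service
# that generates game frames with AI. All numbers are illustrative.

TARGET_MS = 50  # the rough responsiveness target mentioned above

def total_latency_ms(network_rtt_ms, generation_ms, encode_stream_ms):
    # Total delay the player feels: network round trip, plus AI
    # generation time, plus encoding/streaming the frame out.
    return network_rtt_ms + generation_ms + encode_stream_ms

# Plain cloud gaming: no AI generation step in the loop.
classic = total_latency_ms(network_rtt_ms=30, generation_ms=0,
                           encode_stream_ms=15)

# Add an AI model that needs ~40 ms per frame (already very fast for
# a big model) and the budget is blown.
ai_assisted = total_latency_ms(network_rtt_ms=30, generation_ms=40,
                               encode_stream_ms=15)

# At Genie 3's 24 fps, the model has at most 1000/24 ≈ 41.7 ms per frame.
frame_budget_ms = 1000 / 24

print(classic)                   # 45 – under the 50 ms target
print(ai_assisted)               # 85 – well over it
print(ai_assisted <= TARGET_MS)  # False
```

Even with generous assumptions, squeezing an extra generation step into a sub-50 ms loop leaves almost no room for the network – which is why frame-by-frame AI generation is such a hard real-time problem.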

So, for AI-generated games to work in real time, we’d need not just smart algorithms but a rock-solid tech backbone: insanely fast processing and top-tier internet, working together without a hitch. Otherwise, even the coolest AI-driven game idea could become unplayable if it feels laggy or unresponsive. This is a big reason why, in the near future, you’re more likely to see AI used for assisting game content (like generating a level ahead of time) rather than literally creating things frame-by-frame while you play.

DeepMind’s Genie and Other Glimpses of the Future

You might be wondering, has anyone gotten close to this idea of AI-generated interactive worlds? The answer is kind of, yes – in early forms. Researchers and companies are actively exploring this, and some demos feel like sneak peeks of the future:

  • DeepMind’s Genie: Google’s DeepMind (an AI research lab) has been working on what they call a generative “world model.” Their latest version, Genie 3, was announced in 2025. It’s a system capable of generating simple interactive environments in real time from a text prompt. Think of it as a baby step toward the Star Trek Holodeck dream (a simulation room where you can create any world you want). For example, you could type “a calm forest with a river” and Genie will generate a 3D scene of a forest that you can actually move around in. It runs at about 24 frames per second in 720p resolution – not high-res by today’s gaming standards, but impressive considering it’s all AI-generated on the fly.

    What’s wild is that Genie’s world isn’t just a looping video; you can interact with it a bit. The model has a sort of short-term memory: if you pick up an object or make a change in the scene, Genie keeps track of it for a while. That means the AI isn’t just making disconnected frames; it has some notion of the world state.

    Genie 3 has limitations, though. The “agents” (any characters or moving objects in the scene) have a limited set of actions they can perform. Genie struggles with many independent characters all interacting – it’s not going to generate a bustling city street with dozens of AI-driven NPCs doing their own thing. Multi-agent interactions in a complex environment are still too much. It can also only maintain a simulation for a short time (a few minutes) before things need to reset, and the visuals, while coherent, aren’t as crisp or detailed as a modern game engine’s. And as mentioned, it demands a lot of computing power behind the scenes.

    So Genie is more a proof of concept than a Ready Player One. But it’s a big leap from just a year or two ago, when AI could only make static images or non-interactive video clips. It shows that interactive generative worlds are possible at a basic level.
  • Oasis (AI Minecraft): Another eye-opening project was something nicknamed Oasis, which was basically an AI-generated version of Minecraft, released in late 2024 as a tech demo. The developers trained an AI on millions of hours of Minecraft gameplay videos. The AI then tried to imitate the game – meaning there was no traditional game engine or code, just the AI predicting what should happen next in the game world based on what it had learned.

    The result was a rough, real-time playable simulation of Minecraft. You could move around and do Minecraft-y things, but the world often looked surreal and dreamlike. Since the AI wasn’t following strict rules (like actual Minecraft code would), it sometimes produced weird glitches or unexpected scenery changes. Without a hard-coded memory or physics, the game could suddenly morph – one moment you’re chopping a tree, the next the landscape shifts or your inventory items transform because the AI got “confused.” It ran at a low resolution (think 720p) and about 20 frames per second, with no sound. Players described it as playing inside a fever-dream version of Minecraft.

    As a traditional game, it was pretty broken. But as an experiment, it was stunning – the first time we saw an AI attempt to recreate an entire game world on its own. It showed that an AI could learn how a game is supposed to look, and some basic rules, just from video data, even if it didn’t get everything right.
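Under the hood, systems like Genie and Oasis share the same basic shape: the "game" is a model repeatedly predicting the next frame from a short window of recent frames plus the player's input. Here's a skeleton of that loop; `predict_next_frame` is a hypothetical stand-in for a trained neural network:

```python
# Skeleton of the autoregressive loop behind world-model demos.
# predict_next_frame is a hypothetical stand-in for a trained model.
from collections import deque

def run_world_model(predict_next_frame, initial_frame, get_player_action,
                    context_size=16, steps=100):
    # The context window is the model's entire short-term memory:
    # anything older than context_size frames is simply forgotten,
    # which is one reason these demos drift and glitch over time.
    context = deque([initial_frame], maxlen=context_size)
    frames = [initial_frame]
    for _ in range(steps):
        action = get_player_action()
        next_frame = predict_next_frame(list(context), action)
        context.append(next_frame)
        frames.append(next_frame)
    return frames
```

The sketch makes the limitation concrete: there's no game state anywhere, only a sliding window of frames, so long-term consistency depends entirely on what the model can infer from that window.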

All these glimpses show that we’re inching toward AI that can handle pieces of what makes a game. One system does visuals and basic physics (like Genie), another learns game appearance and simple dynamics (like Oasis), others might generate level layouts or dialogue. Stitch those advancements together down the line, and you can imagine a pipeline where an AI could indeed generate many aspects of a game automatically. But as of now, no single AI system can do it all, and certainly not at the quality of an AAA game made by a human studio.

Hyper-Personalized Games: A Dream and a Dilemma

Let’s talk about one particularly exciting (and maybe unsettling) implication of AI in gaming: hyper-personalization. This is the idea that games in the future could be tailor-made for one single player – you.

On one hand, it sounds like the ultimate dream. Don’t like how a game’s story went? Wish the game was set in a different world or had a different art style? In the future, you might just be able to ask for a version that suits your exact taste. It’d be like having an infinite library of games that an AI can generate whenever you want. You could have an AI create a game that mixes all your favorite elements – say, a game that has the exploration of Zelda, the building of Minecraft, and the social elements of Animal Crossing, all in one, just for you. Each player could experience a unique world and storyline. No more waiting years for a sequel or DLC – the AI can make a new adventure on demand.

However, this raises a big question: what happens to gaming as a shared experience? Part of what makes games (and movies, and books, and other media) special is the community and culture around them. Think about big hits like Minecraft, Fortnite, or Skyrim – millions of players played the same game, and that created a shared experience. We had common points of reference, like everyone remembering that first night in Minecraft when the creepers come out, or discussing how to defeat a tough boss in Skyrim. If everyone is playing a completely different, personalized game, we lose that common ground.

Imagine a world where there’s no single “big game” that everyone is talking about, because everyone’s busy playing whatever custom AI-generated game they requested that day. Sure, you got exactly the game you wanted, but you can’t hop on Reddit or Discord and chat about it, because nobody else is playing the same thing. We’d be trading the kind of communal excitement of a game launch (midnight release hype, shared memes, streaming a game and having viewers know what’s going on) for a bunch of isolated experiences.

There’s also a question of quality and balance. If games become one-player-sized, will they be as deep and well-crafted as games designed for a wide audience? You might have an AI spinning up content just for you, but it could also overfit to your preferences in a way that makes the game too easy or predictable. It could reinforce your personal preferences to an extreme, potentially leading people to stick to their comfort zones (like always designing games where they win easily). That might get boring or even a bit unhealthy over time.

We’re speculating pretty far into the future here, but it’s something to ponder: if AI tech allows entertainment to be completely personalized, does mass-market entertainment fade away? Will there be any more cultural phenomena where millions of people are playing the same game, or will it fragment into millions of unique games for each person?

Perhaps we’ll find a balance. Maybe AI will be used to personalize certain aspects of a common game – like everyone plays Pokémon, but your game has some unique Pokémon or custom storylines that only you experience. That way, you still share a core experience with others, just with personal twists. Or maybe new genres of games will emerge where the whole point is that it’s different for everyone, and the community forms around comparing what each person’s AI came up with.

It’s both exciting and a bit mind-bending to think about. The future of gaming could either bring us all into the same virtual worlds like never before, or put each of us in our own little world. Or both!

The Long Road Ahead: AI as a Tool, Not a Replacement (Yet)

So, after all this excitement, let’s ground ourselves. Fully AI-generated games that spring to life from a text prompt aren’t here yet, and they won’t be for a while. We’re likely years away from anything close to that level of sophistication. There are a lot of pieces to figure out – technologically, creatively, and even ethically.

The good news is, progress is happening steadily. AI models are getting better at understanding context, keeping things coherent, and running faster as hardware improves. Every year brings new breakthroughs – maybe a model that remembers longer, or one that handles multiple characters at once, or one that generates graphics more efficiently. Each step adds up.

Expect AI to remain more of a supporting character in development rather than the all-powerful game director for now. We’ll see more AI tools in game engines helping artists, designers, and writers in their tasks. For players, this means games with more life: NPCs that chat more naturally, side-quests that are a bit different for each person, or environments that feel fresh thanks to AI tweaks. These improvements will come under human guidance, ensuring the game still feels cohesive and fun.

Game creation still needs that human touch. A human designer with AI tools is like a director with a super-smart assistant: the creativity is human, but the AI can help execute and offer suggestions quickly. We’re already seeing indie developers use AI-generated art or even code assistants to speed up projects, and that trend will only grow.

It’s an exciting future, but we need to approach it thoughtfully. We don’t want a bunch of auto-generated games that are buggy or that do weird things a human designer would’ve caught. The creative magic that human game makers add – the soul of a Mario or a Zelda – isn’t something AI can replicate right now. Maybe one day AI will co-create such magic with us, but it’s not there yet.

At the end of the day, the dream of instantly playing your “prompted” game isn’t pure fantasy anymore – you can almost see it on the horizon. But it’s still a distant horizon. In the meantime, AI will keep sneaking into our games in smaller ways, making each new generation of games a bit more immersive and alive. So keep your eyes on the tech, but don’t uninstall your favorite games just yet – we’ll be enjoying human-crafted adventures for a while longer.

Someday, you might boot up a game that no human ever hand-crafted, and that will truly show how far we’ve come. Until then, happy gaming – whether it’s powered by humans, AI, or a bit of both – and let’s enjoy the ride as these two exciting worlds collide in the coming years.