Seeded Creativity for LLMs: Controlled Randomness That Helps

Generate random seeds outside the model, feed them into prompts, and let the LLM produce varied yet coherent output.

Large language models can feel frustratingly repetitive. Because they’re largely deterministic and stateless, each call has no awareness of what came before unless you explicitly provide that context. So you’ll see the same names show up, the same “random” numbers that aren’t really random, and the same defaults models tend to gravitate toward. If you’re trying to build an application, or even just generate lots of varied creative output, that repetition can be a problem.

But I actually think this is a strength.

That predictability means you can rely on the model to behave consistently. And when you truly need randomness and variety, it’s not that hard to add. You just have to do a little bit of coding instead of asking the model to do something it’s not great at.

Why “randomness” is better handled outside the model

Randomness is trivial to produce in code. Trying to get a model to be genuinely random doesn’t make much sense, because the model will keep falling into familiar patterns. What you want is to create your own randomness and then use the model for what it’s good at: building coherence, structure, and meaning from a starting point.

The practical approach is simple:

  1. Generate random seeds outside the model
  2. Feed those seeds into the prompt
  3. Let the model do the creative work from there
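The three steps above can be sketched in a few lines of Python. Everything here is illustrative: the name list and prompt template are placeholders, and the final step would hand the prompt to whatever LLM client you use.

```python
import random

# Step 1: generate the random seed outside the model.
# This candidate list is a made-up placeholder, not from any library.
names = ["Amara", "Tobias", "Ines", "Kenji", "Priya"]
seed_name = random.choice(names)

# Step 2: feed the seed into the prompt.
prompt = f"Write a short story about a character named {seed_name}."

# Step 3: send the prompt to your LLM client of choice.
print(prompt)
```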

Use seeds to break the “default name” problem

A concrete example: let’s say I want a model to write stories, and I don’t want it using the same default names it always picks.

I’ll use a library like Faker (available in JavaScript and Python) to generate a random name—or even an occupation, or an age—and I’ll give that to the model as a seed and let it build from there.

Even something as simple as a random letter generator can work: give the model a letter and say, “You’ve got to choose a word that begins with this.” The point is to push it into a different space than the one it lazily returns to.
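Here is a minimal sketch of both ideas using only the standard library. In practice you might call Faker’s `fake.name()` and `fake.job()` instead; the lists below are invented stand-ins.

```python
import random
import string

# Stand-ins for Faker's fake.name() / fake.job(); swap in the real library.
names = ["Rosa Delgado", "Theo Lindqvist", "Mei Watanabe"]
occupations = ["beekeeper", "air traffic controller", "tattoo artist"]

seed = {
    "name": random.choice(names),
    "occupation": random.choice(occupations),
    "age": random.randint(18, 80),
}

# The random-letter trick: force the model to start from an arbitrary letter.
letter = random.choice(string.ascii_uppercase)

prompt = (
    f"Write a story about {seed['name']}, a {seed['age']}-year-old "
    f"{seed['occupation']}. The story's title must begin with the letter {letter}."
)
print(prompt)
```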

Build “random dials” you can mix and match

Once you start thinking in seeds, you can create a whole set of adjustable random dials. For example, you can pick:

  • a random name
  • a random occupation
  • a random location
  • a genre

…and then prompt:

Write a story about a person with this job in this city in this kind of story, and make something happen.

You can take it further by seeding structure too. For instance, you can define (or randomize) an outline like:

  • Act one: this happens
  • Act two: an upset
  • Act three: a reversal
  • Act four: they solve it

Or mix that structure up. The main idea is that you can control variability explicitly. Then you tell the model, “Now you have to tell a story that fits this,” or “Write an article that satisfies these constraints,” or “Create something new that fits in this space.”
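Structure can be seeded the same way, for instance by drawing each act’s beat from a pool of options. The beats below are invented examples; any pool you like works.

```python
import random

# Pools of possible beats per act (illustrative).
beats = {
    "Act one": ["a stranger arrives", "a letter goes missing", "a storm hits"],
    "Act two": ["an upset", "a betrayal", "a false victory"],
    "Act three": ["a reversal", "a revelation", "an escape"],
    "Act four": ["they solve it", "they walk away", "they start over"],
}

# Randomize the outline, then hand it to the model as a constraint.
outline = {act: random.choice(options) for act, options in beats.items()}
constraint = "\n".join(f"{act}: {beat}" for act, beat in outline.items())
prompt = f"Tell a story that fits this outline:\n{constraint}"
print(prompt)
```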

If the model can use tools, let it run the randomness

With models that are capable of tool use, you can even tell the model to do this step itself. You can say: “Run a Python script, pull a random number, and start from there. Come up with X number of ideas based on that.”

It still takes a little bit of thinking. You’re designing how to prompt it into a very different random space rather than hoping it’ll magically diversify on its own.

Two more techniques to force novelty

  1. Anchor it with a large list. You can give it a list of 100 movies or 100 book titles and say, “Pick one and think from there.” The list becomes a constraint that nudges it away from its usual defaults.
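You can either hand the whole list to the model and let it pick, or pick for it in code. Here is a sketch of the latter; the short title list stands in for the full 100-item list you would use in practice.

```python
import random

# A short stand-in for the "100 movies" list (use a real, longer list in practice).
movies = [
    "Jaws", "Alien", "Heat", "Amelie", "Rashomon",
    "The Thing", "Chinatown", "Paddington", "Stalker", "Clue",
]

# Pick the anchor in code, then ask the model to think from there.
anchor = random.choice(movies)
prompt = f"Start from the mood and themes of {anchor} and brainstorm from there."
print(prompt)
```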

  2. Connect two unrelated things. This is one of my favorite techniques, and it was something I used with GPT-3 to show how novel it could be.

Take two random movies like Goodfellas and Clueless and say: “Explain the connection between these movies.”

You’ll often get a genuinely imaginative bridge—an explanation connecting them in ways you wouldn’t expect, and sometimes in a way that actually makes a lot of sense. It’s a great example of how you can use randomness plus the model’s ability to discover patterns to create something new.
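The pairing itself can be randomized so you never fall back on the same two works. A minimal sketch, with an invented pool of titles:

```python
import random

# Illustrative pool; any two sufficiently different works will do.
works = ["Goodfellas", "Clueless", "Blade Runner", "Mary Poppins", "Se7en"]

# random.sample guarantees two distinct picks.
a, b = random.sample(works, 2)
prompt = f"Explain the connection between {a} and {b}."
print(prompt)
```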

Predictable models, randomized inputs, better output

The model doesn’t need to be random. You just need to give it random starting conditions.

When you do that, you get the best of both worlds: the model stays predictable and coherent, and you still get variety—because you’ve deliberately moved it into different creative spaces. That’s a much better way to generate novelty than trying to coax “true randomness” out of something that’s designed to be consistent.