Can AI come up with new ideas?


Update 29 Apr 2026: In a twist of fate, just two days after I wrote this article the UK government announced that it was backing Ineffable Intelligence through its Sovereign AI fund, to scale a ‘breakthrough’ self-learning AI that actually can discover new knowledge. Ineffable Intelligence broke a European record with the largest seed funding round ever, with over $1.1bn raised. This is very exciting, and I’ll be sure to cover this closely as it develops!

Ask ChatGPT or Claude for “a completely new business idea” and it’ll confidently produce something plausible. Maybe even exciting.

After a little brainstorming with your digital friend, you may feel like Sir Isaac Newton when he saw an apple fall in his garden. But can ideas generated by AI actually be new, or is AI restricted to producing a smörgåsbord of ideas it has already absorbed?

Let’s take a dive into novelty, usefulness and originality (with a light touch of philosophy), and take a look at what AI is truly capable of now. I’m going to keep this article as high-level as I can, to make it accessible to a wider audience with a range of technical backgrounds.

Just to clarify, unless otherwise specified, when I refer to ‘AI’ in this article, I’m specifically talking about things like ChatGPT, Claude and Gemini. There are of course other types of artificial intelligence, but the most common (and public-facing) ones at the moment are those we call chatbots. These are now largely powered behind the scenes by technology called Large Language Models.

Isaac Newton and the apple

Contrary to popular belief, the apple probably didn’t hit Sir Isaac on the head. The more credible account is that he saw an apple fall from a tree, which inspired his thinking about gravity.


Can AI genuinely create new ideas, or does it just recombine existing ones?

If, like me, you had many ‘eureka!’ moments as a kid, you probably remember thinking up new inventions that felt revolutionary, helped along by that grandiose childlike curiosity. For example, one of mine (going back to the early ’90s) was a motorised system that could open and close your blinds by remote control, with an elaborate system of strings and pulleys.

“No one’s ever thought of this before!” you excitedly exclaim into the void, before realising that, as a literal child, you have no seed capital, investors, or ability to access credit, so you’re completely unable to bring your idea to fruition. The idea that could have made you millions gets archived in the dark recesses of your memory, where it gathers brain-dust.

Later in adult life, you happen across an article advertising the exact product you dreamed up as a child, presenting it as a completely new idea. “I thought of that first!” you grumble, as you recall those dusty memories to the forefront of your mind. The FOMO hits you hard.

Fast-forward to the 2020s and humankind now has technology, commonly referred to as artificial intelligence, that can pass some Turing-test-style interactions (under certain conditions)1.

Originally called The Imitation Game by the brilliant Alan Turing, the Turing test was devised to examine a machine’s ability to exhibit intelligent behaviour equivalent to that of a human. Essentially, it asks whether a human can distinguish the natural language output from a machine (like ChatGPT) from output written by a human.

Portrait-style image of Alan Turing

Alan Turing devised techniques for breaking German ciphers during the Second World War, work that played a crucial role in the Allied war effort. He is widely regarded as one of the fathers of modern computing.

In fact, we’ve reached a bizarre point where, ironically, humans can often tell something was written by a machine because it’s too ‘intelligent’ (…but mainly because it’s just scattering em-dashes all over the place—the ones I’m using in this article are genuinely for editorial flair!)

You may think that models like Claude, ChatGPT and Gemini can provide endless opportunities for exploring new ideas and pushing the frontier of discovery. However, the very way these models work might be a limiting factor in their ‘inventiveness’. Understanding how they work can feel like learning how a magician does his tricks—shattering the mirage of ‘intelligence’. Is the output of ChatGPT a mere illusion, obfuscated by flowery-sounding prose? Do these models actually think, or are they just word-prediction machines?

In 1950, Alan Turing began his paper Computing Machinery and Intelligence with:

I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’.2

Bear in mind this was written only 5 years after the end of the Second World War, so I think it’s fascinating that we’re still asking similar questions today, more than 75 years later. Before I digress into more of Turing’s work, and the philosophy that intertwines language, emotion, thought, intelligence and consciousness (of both natural and synthetic origin), let’s dial back to the question at hand. Can AI come up with new ideas? To answer that, let’s first take a look at what “new idea” actually means, followed by a brief overview of how a large language model works.


What is a ‘new idea’?

‘New’ can mean different things. I propose that novelty be split into three levels:

  1. New to the individual
  2. New combinations of existing ideas
  3. Truly new to the world

Let’s take the first, weakest form of novelty. It’s not much different to doing lots of research on Google and combining the ideas yourself, only much faster. But these ideas can spark deeper creativity, potentially leading you to discover something truly new to the world.

Before AI came on the scene, you’d spend ages trying to find information—trying an assortment of search terms, sifting through results, trying to piece it all together into something useful. Before that, we’d rely on these physical repositories of information called ‘libraries’ along with a real-life version of Wikipedia called an encyclopaedia, which took even longer. Now, we’re able to be more vague, and get a response that, most of the time, genuinely answers what we asked.

But asking AI vague questions comes with the risk of getting answers that either miss the mark, or are just plain wrong. For things like medical advice, that could be downright dangerous. This is commonly referred to as ‘hallucinating’, and it’s a known issue. The AI isn’t actually ‘lying’ as such, but it can produce information that is inaccurate or misleading, because these systems still aren’t very good at verifying the truth of their own output.

Garbage icon
‘Garbage in, garbage out’ was an old adage I learned while studying IT at school in the early 2000s, and it still applies to AI.

New combinations of existing ideas

The second level is where things get interesting. Think about a lot of the innovation you’ve seen in the 21st century. Most inventions actually build on existing technologies or concepts. It seems most originality now lies less in the ingredients, and more in the pairing, application or execution. This could represent an opportunity—are most startups simply new combinations of existing ideas? Can AI spot combinations that humans might have overlooked, because we’re all stuck inside our own industry assumptions?

For instance, an AI might come up with an idea for Tinder-style swiping for houseplants. It’s certainly a novel idea in itself (albeit not likely to be a very popular or commercially viable one), but it’s just taking one successful idea and applying it to a brand new domain. Large language models are actually quite good at coming up with combinations like this, and they can do it faster than humans can. Whether those ideas are any good is a different question, of course.
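Mechanically, this kind of cross-domain pairing is easy to sketch. Here’s a toy Python example (the patterns and domains below are placeholders I’ve made up for illustration); a real LLM does something far richer than a brute-force cross-product, but the ‘recombine known ingredients’ framing looks something like this:

```python
from itertools import product

# Cross proven interaction patterns with unrelated domains to
# mass-produce "new combination" ideas, Tinder-for-houseplants style.
patterns = ["Tinder-style swiping", "subscription box", "loyalty points"]
domains = ["houseplants", "second-hand books", "local walking routes"]

# product() yields every (pattern, domain) pairing: 3 x 3 = 9 ideas.
ideas = [f"{pattern} for {domain}" for pattern, domain in product(patterns, domains)]

print(len(ideas))   # 9
print(ideas[0])     # Tinder-style swiping for houseplants
```

The hard part, as ever, isn’t generating the nine combinations; it’s judging which (if any) are worth pursuing.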

Going back to my childhood idea of the auto-blinds, this was clearly inspired by a combination of remote controls for the TV and the idea of motorised systems. If I had somehow invented the motor itself (a surprisingly impressive feat for a 7-year-old), that would have been a much more significant breakthrough. But the fact that I utilised existing ideas in a new way is still a form of creativity, and it’s something that AI can do.

Then there are more borderline cases. Some ideas may seem truly new to the world not because every component is new, but because the combination, timing and application create something unprecedented…

Gold Bitcoin token embossed with the B symbol on a pale surface
Bitcoin wasn’t the first attempt at digital cash, but it was the first decentralised cryptocurrency to solve the double-spending problem at scale.

Take cryptocurrencies and the blockchain for example. Bitcoin was world-changing because it solved a real problem in a new way: how to create a peer-to-peer ‘digital cash’ system without any kind of central authority (think governments, banks and the like). The ingredients, however, weren’t actually all new. Earlier work in 1991 by Haber and Stornetta3 explored cryptographic timestamping of digital documents—ideas that now look very blockchain-like.

Bitcoin’s proof-of-work mechanism built on Adam Back’s Hashcash4, which was originally proposed as a way to impose a computational cost on things like denial-of-service attacks and spam, but it was repurposed by Satoshi Nakamoto (the pseudonymous, mysterious creator of Bitcoin) to secure its blockchain—the public ledger of all Bitcoin transactions—by making it computationally expensive to rewrite or attack.

Those who contribute their computing power to the network (‘miners’) compete to add ‘blocks’ to the chain (blocks being the equivalent of ‘pages’ of the digital ledger), and are rewarded with new bitcoins when they successfully do so. This takes enormous computational power, which is what makes the system secure. So, while the components of Bitcoin weren’t new per se, the way they were combined and applied was revolutionary.
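To make the ‘computationally expensive’ part concrete, here’s a heavily simplified Python sketch of the Hashcash-style proof-of-work idea: keep hashing until you stumble on a value with a rare property. The block format and difficulty here are my own toy assumptions and bear no resemblance to real Bitcoin mining:

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Search for a nonce whose SHA-256 hash starts with `difficulty` zero hex digits."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest  # found a valid proof of work
        nonce += 1  # otherwise, keep guessing

# Tens of thousands of hashes on average just for 4 leading zeros
nonce, digest = mine("Alice pays Bob 1 BTC")
print(nonce, digest)
```

Real Bitcoin uses double SHA-256 over a binary block header, with a difficulty target the network adjusts automatically, but the principle is the same: finding a valid nonce is expensive, while verifying someone else’s takes a single hash.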

Theoretically, there’s nothing stopping AI from coming up with this kind of thing, but what matters is the timing and the execution. An AI could spit out a thousand ideas like this, but it’d be up to a human to decide which one to pursue, and how to bring it about.


How does a Large Language Model work?

At a high level, a model like ChatGPT predicts a continuation of a conversation based on patterns learned during training. It does this using a neural network. What does this mean exactly? Well, the model is fed training data—which might include news articles, Wikipedia entries, academic papers and social media posts—and it learns patterns from these. This is then encoded in the model’s parameters (think of this as the equivalent of a human brain’s neural connections that cause us to have reactions to certain things). When given a prompt, it’ll use this ‘knowledge’ to generate a response. It’s essentially continuing a conversation with you by predicting the next likely tokens—chunks of text—based on what it learned from training.
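To caricature this (and it really is a caricature: real models use neural networks with billions of parameters, not lookup tables, and my tiny ‘corpus’ below is invented), here’s a toy ‘predict the next word’ model in Python that simply counts which word tends to follow which in its training text:

```python
from collections import Counter, defaultdict

# "Training": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    # "Inference": emit the most frequent continuation seen in training.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # cat  (seen twice after "the", vs once each for mat/fish)
```

An LLM works on tokens rather than whole words, and weighs a vast amount of preceding context rather than a single word, but the underlying loop of ‘given what came before, emit a likely continuation’ is the same.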

Teal, purple and blue nodes connected in layers to represent a neural network
A neural network is a series of interconnected nodes that process information. The way it’s designed is inspired by the human brain.

Think about this for a moment—it’s (broadly) similar to how you learn something. Humans generate ideas by combining prior experience, conscious memory, instinct and lessons from life: things we’ve read, problems we’ve seen, analogies, conversations, cultural references, etc. So going back to our original question—if AI is just recombining ideas, we should be honest that humans do this too.

The main difference is that humans have lived experience, intention, taste, emotions, stakes, and direct contact with the world. AI does not (as of 2026, at least!).


Where AI falls short

Whilst AI can generate ideas that seem plausible, it lacks the added human layer I mentioned above, so it can’t reliably tell which ideas matter.

For instance, it doesn’t know whether people actually want the thing or not. It doesn’t experience frustration and boredom like humans do (and ideally never will!), so it can’t really tell whether a new idea would quickly lose traction. It might produce ideas that work from a technical standpoint, but don’t fit culturally.

Of course, we can always prompt the AI with more context to narrow its scope, but it doesn’t have that lived experience to know what people actually want, and what would be useful to them. This is a key point, because the most successful inventions and ideas are often those that solve a real problem or meet a real need.

A cluttered workshop filled with tools and materials
Human creativity and invention is often inspired by frustration or born out of emotion.

Truly new to the world

Going back to our three levels of novelty, this third level is the most difficult to achieve. Is there a finite number of novel ideas in the world? If so, as time goes on, the chance of one of us coming up with something truly new to the world becomes smaller and smaller.

For an AI trained largely on existing human output? Well, this is difficult to imagine—at least in the case of the models available today.

But remember that we are also trained on human output. We learn from the world around us, particularly in those younger years where our brains are still forming. So, what’s the difference? Why can humans come up with ideas that are truly new to the world, but AI can’t?

A patent document illustration
Patents give inventors legal protection against others making, using, selling or importing their inventions without permission.

The answer lies in what we touched on earlier—humans have lived experience, intention, taste, emotions, stakes and direct contact with the world. An AI doesn’t care whether an idea works. It doesn’t feel the consequences of its ideas, and it can’t feel why an idea matters.

AI doesn’t suffer inconvenience. Inconvenience is often the catalyst for humans finding a gap in the market that can be exploited with a new product idea. My theory is that if we add these human elements to AI, then it may indeed be able to come up with ideas that are truly new to the world.

However, this is a very big ‘if’, because it raises a lot of ethical and philosophical questions. Providing AI with these characteristics would call into question the very nature of consciousness itself. These questions are beyond the scope of this article, but they’re ones we should keep thinking about as AI continues to evolve.


Working alongside AI

Collage of solar panel, crystal, semiconductor and mineral imagery from DeepMind materials research
DeepMind’s GNoME predicted over 2.2 million new crystals.

AI is a tool—one that can work alongside humans to help us come up with new ideas. Stepping back from chatbots, other types of ‘AI’ (such as Google DeepMind’s GNoME) are already doing this in a more direct way, predicting millions of crystal structures that would take traditional methods years to solve. This is a great example of how AI can be used to help humans come up with new ideas and push the frontier of discovery, because AI can search through a much larger space of possibilities than a human can, and it can do it much, much faster.

So, can AI come up with new ideas?

The answer depends on what we mean by new. AI can absolutely generate ideas that are new to us as individuals, and also combine existing ideas in ways that are surprising, useful or commercially interesting. But truly world-new ideas still seem to require things that AI doesn’t yet have: a sense of purpose, judgement, taste (however subjective that might be!) and the ability to care whether an idea succeeds.

For now, AI is probably best understood not as an inventor in its own right, but as a creative partner—one that can widen the search space, challenge assumptions and help humans notice possibilities they might have otherwise missed!


Thanks for reading! If you have any thoughts on this topic, or any questions, feel free to reach out to me (contact details on my main page here). I’m hoping to write more about this topic in the future, and if you have any ideas for future articles, please let me know!

Footnotes

  1. A 2025 preprint reported that GPT-4.5 was judged human 73% of the time in a controlled three-party Turing-test-style setup.

  2. Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433-460.

  3. Haber, S., Stornetta, W.S. How to time-stamp a digital document. J. Cryptology 3, 99-111 (1991).

  4. Back, A. (1997). Hashcash announcement. http://www.hashcash.org/papers/announce.txt



Written by

Jordan Cheney

Data Science & Analytics

I write about AI, analytics, and the messy human bit behind the numbers!