Artificial intelligence has been infiltrating our daily workflows and routine tasks for a while now. It may be AI working in the background, as with Gemini's integration across Google products, or you may be engaging more directly with popular content generators such as OpenAI's ChatGPT and Dall-E. Looming in the not-too-distant future are amped-up virtual assistants.
As if AI itself weren't futuristic enough, now there's a whole new leap forward on the horizon: quantum AI. It fuses artificial intelligence with quantum computing, an unconventional and still largely experimental technology, with the goal of making AI dramatically faster and more efficient. Quantum computers will be the muscles, while AI will be the brains.
Here’s a quick breakdown of the basics to help you better understand quantum AI.
What are AI and generative AI?
Artificial intelligence is a technology that mimics human decision-making and problem-solving. It's software that can recognize patterns, learn from data and even "understand" language well enough to chat with us, recommend movies or identify faces and objects in photos.
One powerful type of AI is generative AI, which goes beyond simple data analysis or predictions. Gen AI models create new content based on their training data — like text, images and sounds. Think ChatGPT, Dall-E, Midjourney, Gemini, Claude and Adobe Firefly, to name a few.
These tools are powered by large language models trained on tons of data, allowing them to produce realistic outputs. But behind the scenes, even the most advanced AI is still limited by classical computing — the kind that happens in Windows and Mac computers, in the servers that populate data centers and even in supercomputers. But there’s only so far that binary operations will get you.
And that’s where quantum computing could change the game.
Quantum computing
Classical and quantum computing differ in several ways, one of which is processing. Classical computing uses linear processing (step-by-step calculations), while quantum uses parallel processing (multiple calculations at once).
Another difference is in the basic processing units they use. Classical computers use bits as the smallest data unit (either a 0 or 1). Quantum computers use quantum bits, aka qubits, based on the laws of quantum mechanics. Qubits can represent both 0 and 1 simultaneously thanks to a phenomenon called superposition.
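To make superposition concrete, here's a toy sketch that simulates the math on an ordinary classical computer using NumPy. This is only an illustration of how a qubit's state is described, not a real quantum computation: a qubit's state is a pair of complex "amplitudes," and a standard operation called the Hadamard gate turns a definite 0 into an equal mix of 0 and 1.

```python
import numpy as np

# A qubit's state is a 2-element complex vector: amplitudes for |0> and |1>.
ket0 = np.array([1, 0], dtype=complex)  # a definite, classical-like 0

# The Hadamard gate puts a qubit into an equal superposition of 0 and 1.
H = np.array([[1,  1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0  # amplitudes are now (1/sqrt(2), 1/sqrt(2))

# When measured, the probability of each outcome is the squared
# magnitude of its amplitude -- here, a 50/50 chance of 0 or 1.
probs = np.abs(state) ** 2
print(probs)
```

Until it's measured, the qubit genuinely carries both possibilities at once, which is what lets a quantum machine explore many inputs in parallel.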
Another property that quantum computers can leverage is entanglement, in which two qubits become linked so that measuring one instantly reveals the state of the other, no matter how far apart they are.
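Entanglement can be sketched the same way. The snippet below (again, a classical NumPy simulation, purely for illustration) builds the simplest entangled pair, known as a Bell state: after a Hadamard gate and a CNOT gate, the only possible measurement outcomes are "both qubits read 0" or "both read 1" — the qubits' fates are locked together.

```python
import numpy as np

# A two-qubit state is a 4-element vector of amplitudes
# for the outcomes |00>, |01>, |10> and |11>.
ket00 = np.zeros(4, dtype=complex)
ket00[0] = 1  # start with both qubits at 0

# Apply a Hadamard gate to the first qubit (identity on the second).
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I = np.eye(2, dtype=complex)
step1 = np.kron(H, I) @ ket00

# CNOT flips the second qubit whenever the first qubit is 1.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ step1  # the entangled Bell state (|00> + |11>) / sqrt(2)

# Only the 00 and 11 outcomes have any probability: measuring one
# qubit immediately tells you what the other will read.
probs = np.abs(bell) ** 2
print(probs)
```

No matter how the pair is separated after this point, the measurement results stay perfectly correlated — the property quantum algorithms exploit.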
Superposition and entanglement allow quantum computers to solve certain complex problems much faster than traditional computers. Where classical computing can take weeks or even years on some problems, quantum computing could cut that down to mere hours. So why aren't they mainstream?
Quantum computers are incredibly delicate and must be kept at extremely low temperatures to work properly. They're massive and not practical for everyday use yet. Still, companies like Intel, Google, IBM, Amazon and Microsoft are heavily invested in quantum computing, and the race is on to make it viable. While most companies don't have the funds or specialized teams to support their own quantum computers, cloud-based quantum computing services like Amazon Braket and Google's Quantum AI could be options.
While the potential is enormous, quantum AI faces challenges like hardware instability and a need for specialized algorithms. However, improvements in error correction and qubit stability are making it more reliable.
Current quantum computers, like IBM’s Quantum System Two and Google’s quantum machinery, can handle some calculations but aren’t yet ready to run large-scale AI models. Additionally, quantum computing requires highly controlled environments, so scaling up for widespread use will be a big challenge.
That’s why most experts believe we’re likely years away from fully realized quantum AI. As Lawrence Gasman, president of LDG Tech Advisors, wrote for Forbes at the start of 2024: “It is early days for quantum AI, and for many organizations, quantum AI right now might be overkill.”
The what-if game
Quantum AI is still in the early trial stages, but it’s a promising technology. Right now, AI models are limited by the power of classical computers, especially when processing big datasets or running complex simulations. Quantum computing could provide the necessary boost AI needs to process large, complex datasets at ultrafast speeds.
Although the future real-world applications are somewhat speculative, we can assume certain fields would benefit the most from this technological breakthrough, including:

- Financial trading
- Natural language processing
- Image and speech recognition
- Health care diagnostics
- Robotics
- Drug discovery
- Supply chain logistics
- Cybersecurity, through quantum-resistant cryptography
- Traffic management for autonomous vehicles
Here are some other ways that quantum computing could enhance AI:
- Training large AI models, like LLMs, takes massive amounts of time and computing power. It’s one reason AI companies need huge data centers to support their tools. Quantum computing could speed up this process, allowing models to learn faster and more efficiently. Instead of taking weeks or months to train, quantum AI models might be trained in days.
- AI thrives on pattern recognition, whether it’s in images, text or numbers. Quantum computing’s power to process many possibilities at once could lead to faster, more accurate pattern recognition. This would be particularly beneficial in fields where AI needs to consider many factors simultaneously, like financial forecasting for trading.
- Although impressive, generative AI tools still have limitations, especially when it comes to creating realistic, nuanced outputs. Quantum AI could enable generative AI models to process more data and create content that’s even more realistic and sophisticated.
- In decision-making processes where multiple factors need to be balanced, like drug discovery or climate modeling, quantum computers could allow AI to test countless possible scenarios and outcomes simultaneously. This could help scientists find optimal solutions in a fraction of the time it takes them now.