When fake data is a good thing – how synthetic data trains AI to solve real problems
These faces are fake – generated by artificial intelligence – but useful for training other AI systems about human faces.
David Beniaguev
You’ve just finished a strenuous hike to the top of a mountain. You’re exhausted but elated. The view of the city below is gorgeous, and you want to capture the moment on camera. But it’s already quite dark, and you’re not sure you’ll get a good shot. Fortunately, your phone has an AI-powered night mode that can take stunning photos even after sunset.
Here’s something you might not know: That night mode may have been trained on synthetic nighttime images, computer-generated scenes that were never actually photographed.
As artificial intelligence researchers exhaust the supply of real data on the web and in digitized archives, they are increasingly turning to synthetic data, artificially generated examples that mimic real ones. But that creates a paradox. In science, making up data is a cardinal sin. Fake data and misinformation are already undermining trust in information online. So how can synthetic data possibly be good? Is it just a polite euphemism for deception?
As a machine learning researcher, I think the answer lies in intent and transparency. Synthetic data is generally not created to manipulate results or mislead people. In fact, ethics may require AI companies to use synthetic data: Releasing real human face images, for example, can violate privacy, whereas synthetic faces can offer similar benefits with formal privacy guarantees.
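Differential privacy is one common way to make such a guarantee precise. As an illustration only (this is not the method behind any particular face generator, and the count scenario is invented for the sketch), here is the Laplace mechanism, the simplest differentially private data release:

```python
import numpy as np

def dp_count(true_count: int, epsilon: float, rng: np.random.Generator) -> float:
    """Release a count with epsilon-differential privacy via the Laplace
    mechanism: add Laplace noise with scale 1/epsilon. A count changes by
    at most 1 when any one person's record is added or removed, so this
    noise provably hides each individual's presence in the data."""
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

rng = np.random.default_rng(7)
noisy = dp_count(120, epsilon=0.5, rng=rng)  # noisy, but formally private
```

Smaller values of epsilon mean more noise and a stronger privacy guarantee; the same trade-off governs how realistic privately generated synthetic data can be.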
There are other reasons that help explain the growing use of synthetic data in training AI models. Some things are so scarce or rare that they are barely represented in real data. Rather than letting these gaps become an Achilles’ heel, researchers can simulate those situations instead.
Another motivation is that collecting real data can be costly or even risky. Imagine collecting data for a self-driving car during storms or on unpaved roads. It is often much more efficient, and far safer, to generate such data virtually.
Here’s a quick take on what synthetic data is and why researchers and developers use it.
How synthetic data is made
Training an AI model requires large amounts of data. Like students and athletes, the more an AI is trained, the better its performance tends to be. Researchers have known for a long time that if data is in short supply, they can use a technique known as data augmentation. For example, a given image can be rotated or scaled to yield additional training data. Synthetic data is data augmentation on steroids. Instead of making small alterations to existing images, researchers create entirely new ones.
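As a toy illustration (the 4x4 “image” and the function name are invented for this sketch), basic data augmentation can look like this in Python:

```python
import numpy as np

def augment(image: np.ndarray) -> list:
    """Turn one image into several: three rotations plus a mirror flip."""
    variants = [np.rot90(image, k) for k in (1, 2, 3)]  # 90, 180, 270 degrees
    variants.append(np.fliplr(image))                   # left-right mirror
    return variants

image = np.arange(16).reshape(4, 4)   # a toy 4x4 "image"
extra = augment(image)                # four additional training examples
```

Each variant is a small alteration of the original; synthetic data, by contrast, starts from no original image at all.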
But how do researchers create synthetic data? There are two main approaches. The first approach relies on rule-based or physics-based models. For example, the laws of optics can be used to simulate how a scene would appear given the positions and orientations of objects within it.
The second approach uses generative AI to produce data. Modern generative models are trained on vast amounts of data and can now create remarkably realistic text, audio, images and videos. Generative AI offers a flexible way to produce large and diverse datasets.
Both approaches share a common principle: If data does not come directly from the real world, it must come from a realistic model of the world.
Downsides and dangers
It is also important to remember that while synthetic data can be useful, it is not a panacea. Synthetic data is only as reliable as the models of reality it comes from, and even the best scientific or generative models have weaknesses.
Researchers have to be careful about potential biases and inaccuracies in the data they produce. For example, researchers may simulate the home-insurance ecosystem to help detect fraud, but those simulations could embed unfair assumptions about neighborhoods or property types. The benefits of such data must be weighed against risks to fairness and equity.
It’s also important to maintain a clear distinction between models and simulations on one hand and the real world on the other. Synthetic data is invaluable for training and testing AI systems, but when an AI model is deployed in the real world, its performance and safety should be proved with real, not simulated, data for both technical and ethical reasons.
Future research on synthetic data in AI is likely to face many challenges. Some are ethical, some are scientific, and others are engineering problems. As synthetic data becomes more realistic, it will be more useful for training AI, but it will also be easier to misuse. For example, increasingly realistic synthetic images can be used to create convincing deepfake videos.
I believe that researchers and AI companies should keep clear records to show which data is synthetic and why it was created. Clearly disclosing which parts of the training data are real and which are synthetic is a key aspect of responsibly producing AI models. California’s law, “Generative artificial intelligence: training data transparency,” set to take effect on Jan. 1, 2026, requires AI developers to disclose if they used synthetic data in training their models.
Researchers should also study how mistakes in simulations or models can lead to bad data. Careful work will help keep synthetic data transparent, trustworthy and reliable.
Keeping it real
Most AI systems learn by finding patterns in data. Researchers can improve an AI system’s ability to do this by adding synthetic data. But AI has no sense of what is real or true. The desire to stay in touch with reality and to seek truth belongs to people, not machines. Human judgment and oversight in the use of synthetic data will remain essential for the future.
The next time you use a cool AI feature on your smartphone, think about whether synthetic data might have played a role. Our AIs may learn from synthetic data, but reality remains the ultimate source of our knowledge and the final judge of our creations.
Ambuj Tewari receives funding from NSF and NIH.