This article will be available in Spanish at El Tiempo Latino.
Misinformation is nothing new. It has, however, become ubiquitous and, in some cases, more difficult and time-consuming than ever to debunk.

When we first started publishing in 2003 — which predated Facebook (2004), YouTube (2005) and Twitter (2006) — viral misinformation took the form of chain emails. Although they were a problem at the time, chain emails were to misinformation what the Pony Express is to ChatGPT.
As the popularity of social media platforms has grown, so too has the scope of viral misinformation and the speed with which it travels. And this falsehood-fraught environment is increasingly where people get their news.
In a survey of U.S. adults last year, the Pew Research Center found that “just over half of U.S. adults (54%) say they at least sometimes get news from social media.”
The incredible growth of podcasts also has helped spread misinformation on social media. According to the Pew Research Center, 42% of Americans 12 and older said they had listened to a podcast in the past month in 2023 — up from only 9% in 2008. In February, YouTube — the largest video platform — announced that it had more than 1 billion monthly podcast users.
The emergence of artificial intelligence, or AI, makes it even more difficult for social media users to separate fact from fiction.
“AI technologies, with their capability to generate convincing fake texts, images, audio and videos (often referred to as ‘deepfakes’), present significant difficulties in distinguishing authentic content from synthetic creations,” Cathy Li and Agustina Callegari of the World Economic Forum wrote last year in an article on how to combat AI misinformation.
Our work aims to inform the public and debunk political falsehoods. But we can’t fact-check everything. Here’s our advice on how to identify bogus posts and factual distortions.
Think before sharing. We have long advised our readers, “Be skeptical, not cynical.” When it comes to online content, that means: Think twice before you share that social media post.
“Don’t hit reshare until you stop and think to yourself, ‘Am I reasonably sure that this is accurate … does this seem plausible?’” David Rand, a professor of brain and cognitive sciences at MIT, told PBS NewsHour last year.
We know that this can be hard to do, particularly if the content evokes a powerful response in you and aligns with your beliefs — which is often the case. There are two reasons for that:
- Seeking clicks, content providers give us text, images and videos that often provoke a reaction from us.
- Using algorithms, social media platforms feed us what they think we want to see and hear.
As a result, social media posts often play to our emotions, and, as humans, we are susceptible to confirmation bias — which is the tendency to give too much weight to information that confirms our beliefs. The combination of the two makes misinformation go viral.
But resist the urge to immediately reshare.
Consider the source. Who shared the claim? What do you know about this person or organization? Do they have any partisan or financial conflicts? What qualifies them to write or speak about the subject?
We’ve seen a lot of misinformation from people who draw conclusions and share opinions, despite a lack of expertise in the subject or a clear conflict of interest, or both.
At the height of the COVID-19 pandemic, we debunked bogus claims about the virus from several chiropractors. A multiple offender, chiropractor Eric Nepute, was sued by the Justice Department and Federal Trade Commission for violating the COVID-19 Consumer Protection Act. In a settlement, Nepute agreed to pay a fine and stop making false claims about supplements that he advertised and sold as preventatives and treatments for COVID-19. The civil complaint said Nepute and his companies “have earned a substantial amount of money from selling these and other Wellness Warrior Products.”
If someone is making claims in an effort to sell you something, that’s a red flag to be skeptical.
Of course, we also see a lot of misinformation from partisans — so be wary of liberal and conservative social media accounts making claims about the other side.
For example, we recently debunked the misleading claim spread by President Donald Trump and conservative commentators that Politico, an online news outlet, was being “completely” or “massively funded” by the U.S. Agency for International Development under the Biden administration. In fact, the payments to the media outlet were for subscriptions, which were common at many federal agencies under both the Trump and Biden administrations.
Evaluate the evidence. Does the person making the claim provide any evidence, such as links to articles, published research or other sources? Are some sources mentioned, but no links provided? How credible is the evidence provided?
It’s a red flag if no sources are provided. If sources are cited, find the source material and see if the evidence supports the claim. You would be surprised how often the “evidence” doesn’t support the claim. (Be careful when clicking on links. Make sure they lead to a legitimate website.)
Last month, we did a story on social media posts that falsely claimed Trump ordered former Philippine President Rodrigo Duterte’s release from the International Criminal Court. The posts cited “Executive Order 2025-03” — which doesn’t exist. That’s not even the numbering system for executive orders.
You should also check the credibility of the source material provided in the social media post.
We recently debunked misleading claims about measles in a video posted to X by Mary Holland, the CEO of Children’s Health Defense, the anti-vaccine advocacy group founded by Health and Human Services Secretary Robert F. Kennedy Jr. Holland based her claims on an article written by Sayer Ji, the founder of an alternative medicine website who was named in the Center for Countering Digital Hate’s “Disinformation Dozen,” a list of top spreaders of vaccine misinformation on social media. Ji has a bachelor’s degree in philosophy from Rutgers University.
Ji’s history of spreading misinformation and his lack of expertise in the area of infectious diseases are red flags.
In another case, we wrote about an article in a peer-reviewed journal that made numerous false claims about COVID-19 mRNA vaccines. The article — which was later retracted — was written by known vaccination opponents who have spread misinformation about the mRNA vaccines, and it was published in a journal that did not have the same standards as more reputable journals.
If the social media post includes an image that you suspect might be a fake, then you can use reverse image search engines, such as Google and TinEye, which may help you find the original image and where and when it first appeared online. We have used such tools numerous times over the years.
Evidence or opinion? Cable TV commentators, podcasters and columnists have blurred the line between news and opinion.
If the evidence cited in the social media post comes from a news source — or purports to come from a news source, sometimes falsely labeled “breaking news” — you should consider if the social media post is sharing fact-based reporting or someone’s opinion of the news.
Everyone is entitled to their own opinions, but we’ve found that many partisan websites, podcasters and commentators — whether they are pushing a liberal or conservative agenda — aren’t telling the full story. Their version of the facts is often slanted to benefit their side.
Consult the experts. If you are still uncertain about the veracity of a social media claim, then you should consult the experts. That includes FactCheck.org — we’re on YouTube, Facebook, Instagram, Threads, X, Bluesky, WhatsApp and TikTok.
A good place to start is Google or the search engines of FactCheck.org and other fact-checking websites.
The search should include keywords or a short excerpt of the social media post, podcast or video. For example, social media posts claimed the Department of Government Efficiency stopped “royalties” to former President Barack Obama for “Obamacare,” formally known as the Affordable Care Act. The top two results of a recent Google search of “royalties + Obama + Affordable Care Act” turned up articles by FactCheck.org and AFP Fact Check, a France-based fact-checking organization.
Google has also created a tool called “Fact Check Explorer” — a searchable database of fact-checking articles from around the world. The same search for “royalties + Obama + Affordable Care Act” on Google’s Fact Check Explorer turned up six fact-checking articles — all debunking the claim about Obama.
Fact-checking articles take time to produce, so in some cases you may not immediately find a fact-checking article on the topic. You may, however, find some news articles on the subject — but make sure you are using trusted sources, such as the Wall Street Journal, Reuters, the Associated Press, the New York Times and other established news outlets.
We know that trust in the media is low, but the fact is that legitimate news organizations, such as the Washington Post and New York Times, have written policies and procedures for such things as newsgathering, editing and corrections, as well as standards for ethical conduct and conflicts of interest.
Even when using such trusted sources, you might want to check more than one source to see what others are reporting. Multiple news outlets will report on breaking news and major news developments, so be wary if only one news organization is reporting on the “news” that you are seeing on social media.
AI-Generated Images
As we mentioned earlier, online content may be created by generative AI, which “can create original content — such as text, images, video, audio or software code — in response to a user’s prompt or request,” as IBM explains on its website.
We’ve already covered text in the section above; the same rules apply whether the text was created by a human or an AI service. Here we focus on AI-generated images, videos and audio.
We have been writing about fake photos for years. In the early years, the fakes were real images that were altered using Photoshop or other editing programs.
In 2008, for example, we wrote about an image that purportedly showed then-Alaska Gov. Sarah Palin wearing a red, white and blue bikini and holding a rifle. But it wasn’t her. Her head had been Photoshopped onto the body of another woman.
Using AI, people looking to entertain or cause mischief can create entirely new images, video and audio. Experts say you may be able to spot a fake by looking closely for red flags.
“It is possible to create realistic appearing images, audio, and video with today’s generative AI tools,” Matthew Groh, an assistant professor of management and organizations at Northwestern University’s Kellogg School of Management, told us in an email. “One of the best ways to spot a lie (and likewise AI-generated media) is to search for contradictions.”
Groh and his colleagues published a research paper in February that measured the accuracy of more than 50,000 participants who were asked to identify whether images were real or AI-generated. The participants were given “unlimited time, 20 seconds, 10 seconds, 5 seconds, and 1 second.” The paper found that “longer viewing times” improved the participants’ accuracy.
Unnatural body parts. Groh and his Northwestern colleagues identified telltale signs of AI-generated photos for an article last year in Kellogg Insight, a school publication. They advised social media users to look closely at various body parts for “anatomical implausibilities.”
“Are there missing or extra limbs or digits? Bodies that merge into their surroundings or into other nearby bodies? A giraffe-like neck on a human? In AI-generated images, teeth can overlap or appear asymmetrical. Eyes may be overly shiny, blurry, or hollow-looking,” the article said.
If the person is a public figure, you can compare facial features with existing news photos to spot discrepancies, the article also noted.
Odd objects. There may also be oddities in the way that body parts interact with objects, or even problems with the objects themselves.
For example, the Kellogg Insight article included an AI-generated image that showed a person’s hand inside a hamburger. The hamburger itself was improbably large.
“When there’s interactions between people and objects, there are often things that don’t look quite right,” Groh told Kellogg Insight, referring to these oddities as “functional implausibilities.”
Irregular shadows and reflections. AI also has difficulty with shadows and reflections. Shadows may be cast in different directions, and reflections may not match the object they pretend to reflect, Groh and his colleagues said.
For example, an AI-generated image in the Kellogg Insight article shows a person wearing a short-sleeved shirt, while his mirror image is wearing a long-sleeved shirt. The Northwestern research paper describes these irregularities as “violations of physics.”
The researchers also identified two other telltale signs of AI-generated images: “stylistic artifacts,” which refer to “overly glossy, waxy, or picturesque qualities of specific elements of an image,” and “sociocultural implausibilities,” which are “scenarios that violate social norms, cultural context, or historical accuracy.”
Nonsensical words. Jonathan Jarry, a science communicator with McGill University’s Office for Science and Society, explained in an article on the university’s website last year that AI-generated images have trouble with words. In his article, Jarry asked an AI service to create a photo of Montreal circa 1931. One problem, however, was that the lettering displayed on background signage was “gibberish.”
(Think you can tell real photos from bogus AI-generated images? Take the “Detect Fakes” test on Kellogg School’s website.)
AI-Generated Video and Audio
Unlike fake images, bogus video and audio are fairly new phenomena.
We recently wrote about an audio clip circulating on social media that purported to capture Donald Trump Jr. saying “the U.S. should have been sending weapons to Russia,” instead of Ukraine. But we found no evidence that Trump Jr. ever made such a comment, and a digital forensic expert told us it was likely fake.
Look for contextual clues. Determine if the text of the post or the audio or video clip itself offers some contextual clues — such as where and when the words were allegedly spoken.
In the case of the fake audio, one red flag was the claim that the president’s son made his remark about Russia on a Feb. 25 episode of his podcast, “Triggered with Donald Trump Jr.” However, Trump Jr. did not make any such comment about Russia during that episode.
Listen for audio anomalies. The European Digital Media Observatory, a project of the European Commission, offers tips for detecting AI-generated audio and video. When listening to audio, it says to “[p]ay attention to choices of words, intonation, breaths, unnatural pauses and other elements that can manifest anomalies.”
Watch the video quality. The EDMO suggests checking “the quality of the video” to spot “out of focus contours, unrealistic features” and poor “synchronization of audio and video” — i.e., when the lips don’t match the audio.
Look for disclaimers. Some social media platforms — including Meta, YouTube and TikTok — require users to add a label to AI-generated content. Check to see if the platform you are using has such a policy and, if so, look for the disclaimers.
For example, Meta, which owns Facebook, Threads and Instagram, uses an “AI info” label “for content we detect was generated by an AI tool and share whether the content is labeled because of industry-shared signals or because someone self-disclosed.”
Groh, the Northwestern assistant professor, said that Community Notes on X — the platform formerly known as Twitter — can be useful at flagging AI-generated content.
“Community notes can be very useful for adding context and directing people’s attention to possible tells,” such as this note in response to an image posted following Hurricane Milton, Groh said. “Likewise, context and insights from trusted sources like fact checkers or digital forensics experts can be useful for helping people on social media make up their minds about whether what they’ve seen online is AI-generated or real.”
Editor’s note: FactCheck.org does not accept advertising. We rely on grants and individual donations from people like you. Please consider a donation. Credit card donations may be made through our “Donate” page. If you prefer to give by check, send to: FactCheck.org, Annenberg Public Policy Center, P.O. Box 58100, Philadelphia, PA 19102.