
By Lee Williams
SAF Investigative Journalism Project
Last year, the parents of a 16-year-old boy alleged in a lawsuit that ChatGPT encouraged their son to kill himself.
In October 2024, a New York City AI system falsely told business owners they could steal tips, fire anyone who complained about sexual harassment and serve food even after it was chewed by rats.
In August 2025, a 56-year-old man killed his 83-year-old mother in her home and then committed suicide. He believed his mother was a secret agent who was poisoning him with psychedelic drugs, and his chatbot agreed with him and reinforced his delusions.
In July 2025, an AI system told a user how to break into an attorney’s home and to bring “lock picks, gloves, a flashlight and lube.”
Now, Everytown for Gun Safety is using AI to help them further erode our Second Amendment rights.
What could possibly go wrong?
On Monday, Everytown announced they had created the Everytown Evidence Engine, or E3, an AI system they claimed would help them “harness AI policy to identify gun safety solutions.”
They made the move to AI because “efficient systems for analysis can lead to new questions and new answers in the field of gun violence prevention research.”
How reliable is Everytown’s new AI?
You can judge for yourself.
Claude
Everytown admitted its new E3 system was built using Claude, an AI chatbot developed by the firm Anthropic. Both Claude and Anthropic have had significant problems.
Just four days ago, in a story titled “Anthropic Admitted Claude Code Broke. We Were Right,” a writer on Medium reported he had found issues with the system.
The reporter’s hard work forced Anthropic to admit that Claude had major problems.
In a post titled “An update on recent Claude Code quality reports,” Anthropic claimed it had fixed everything.
“Over the past month, we’ve been looking into reports that Claude’s responses have worsened for some users. We’ve traced these reports to three separate changes that affected Claude Code, the Claude Agent SDK, and Claude Cowork. The API was not impacted,” Anthropic claimed.
The firm also promised they would “do things differently to avoid these issues,” and that more of their staff would use the public version of the software.
Sky News recently released a damning YouTube video about public interactions with Claude.
The British report discusses how the chatbot tells users what they want to hear.
“What happens when AI starts pulling people away from reality and even encourages them to act on distorted beliefs?” the reporter asked.
The video discusses a recent Canadian research paper that found one in every 1,000 conversations with Claude has the “potential for severe reality distortions.”
“We don’t know why Claude responds as it does consistently,” an expert said.
The researchers also discovered that the number of potentially harmful discussions with Claude was actually growing over time. An Anthropic spokesman admitted they knew that Claude had problems, but they didn’t know why.
“LIMITATIONS”
Everytown’s own list of everything its new E3 tool cannot do makes you wonder why anyone would use it. Even by the group’s admission, E3’s limitations are breathtaking.
“For example, at this time, E3 does not currently weigh all of the factors that could be influencing gun violence such as gun ownership, employment and earnings, strength of policy implementation and enforcement, law-enforcement practices, and many other relevant and granular socioeconomic and demographic characteristics particularly at the county and/or neighborhood-levels,” Everytown wrote.
So, despite its long list of limitations, how does Everytown intend to use its new E3 tool?
They don’t really say.
“[T]his new tool can provide users with important directions regarding policy effectiveness that can be used for critical decision-making. And it is the hope that future iterations of the E3 will incorporate these kinds of variables and, ultimately, increase its ability to conduct additional types of analyses,” Everytown wrote.
Takeaways
So far, all of Everytown’s critical decision-making has involved how to strip guns from the hands of gun owners. They spend millions annually trying to create more local, state and federal anti-gun laws. Whether their new E3 system will help them remains to be seen.
Alan Gottlieb founded the Second Amendment Foundation more than 50 years ago and serves as its executive vice president. He was struck but not surprised by the problems with Everytown’s new AI system.
Said Gottlieb: “When you are unintelligent, you think any AI system will be better than your own brain. But being unintelligent, odds are you will pick the wrong one. Everytown sure did!”
The Second Amendment Foundation’s Investigative Journalism Project wouldn’t be possible without you. Click here to make a tax-deductible donation to support pro-gun stories like this.
