Rising use, falling confidence — and a path forward
More Americans are adopting AI tools, according to a new Quinnipiac poll, even as a shrinking share say they trust the results. That disconnect is telling: widespread use demonstrates the technology's practical value and reach, while declining trust shows where attention is needed to sustain long-term adoption and social benefit.
The poll shows voters are especially worried about transparency, oversight and the broader societal impacts of AI. Those concerns are not roadblocks so much as signals — clear priorities for developers, companies and regulators who want to make AI systems more reliable and broadly useful.
Practical steps can close the gap. Industry and public-sector leaders can respond with concrete measures that build confidence while preserving innovation:
- Increase explainability and user-facing transparency so people understand how outputs are generated.
- Adopt stronger guardrails, auditing and standards to reduce harm and ensure accountability.
- Invest in public education and clearer labeling so users know when and how AI is being used.
Viewed positively, the poll points to momentum: AI is reaching more people than ever, and the feedback on trust offers a roadmap for improvements that will make these tools safer and more effective. With coordinated action, the current moment can accelerate AI's ability to deliver broad societal and economic benefits.