What You Should Expect from the Media in Terms of AI Usage
Generative artificial intelligence (AI) continues to reshape the media landscape. While AI has ushered in a new era of rapid research, data synthesis, and analysis, it has also created ethical dilemmas. These ethical concerns can erode consumer trust or even trigger a crisis. It is imperative that members of the media uphold best practices for AI use, and leaders must know what to look for and what to expect.
Below are standards we should expect the media to follow in terms of AI usage.
Transparency about AI Usage
You should expect newsrooms to disclose the use of AI in any portion of their reporting and information-gathering process. We can reasonably assume newsrooms use AI to transcribe interviews, filter reader comments, or adapt the same article for multiple outlets. We might also expect newsrooms to use AI to sift through voluminous documents. But if journalists are using AI to write articles from scratch, that use should be clearly disclosed.
Verification of AI-Generated Data
You should expect reporters to confirm that they have verified the data they used. AI-generated data has its limits. GPT-5, the latest ChatGPT model, has a knowledge cutoff of September 2024; Claude's is March 2025, and Gemini's is January 2025. Information generated by these AI tools is based on training data that is, in some cases, nearly a year old, and the tools cannot reliably produce information from after their knowledge cutoff dates.
Diligence in Confirming Authenticity
AI tools can hallucinate research, meaning they can fabricate headlines and research citations that look plausible but are not based on real evidence. It's imperative that any data cited by AI be double-checked for validity in an external tool. Additionally, AI translation tools, particularly for languages that are markedly different from English or Chinese, are often lacking in accuracy and can also flatten the cultural nuances that shape language and the human experience. We should expect members of the media to go the extra mile in confirming the authenticity of information gleaned from AI.
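For newsrooms with technical support, even a lightweight automated check can help. Below is a minimal sketch, assuming Python with the requests package and the public CrossRef REST API, that tests whether a DOI cited by an AI tool actually resolves. The DOI shown is purely illustrative, and a passing check is only a first step: a real fact-check must still confirm that the title and authors match the claim being cited.

```python
# Minimal sketch: check whether a DOI from an AI-generated citation exists.
# Assumes Python 3 with the `requests` package; uses the public CrossRef
# REST API (https://api.crossref.org). Illustrative only -- a resolving DOI
# still needs a human to verify that the paper says what the AI claims.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if CrossRef has a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Example DOI pulled from an AI-generated citation (illustrative).
cited_doi = "10.1038/s41586-020-2649-2"
if doi_exists(cited_doi):
    print("DOI resolves -- now confirm title and authors match the claim.")
else:
    print("DOI not found -- likely a hallucinated citation.")
```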
Awareness of and Commitment to Rooting Out Biases
We should expect members of the media to be aware of biases in AI-assisted coverage. Biased representations can influence policy and adversely impact marginalized communities. We do not want members of the media to unwittingly contribute to misrepresentations of different communities.
This is especially concerning for populations that have historically been marginalized, excluded, or negatively represented in mainstream data sources. We should expect journalists to exercise care in ferreting out biases in AI data models. If you'd like to discuss this more, or the crises that can result from AI, contact us immediately.
Protection of Human Capital and Creativity
We expect media outlets to value human capital and capacity. They should never allow AI to be the primary brain behind their reporting and writing.
Besides, AI's efficiency gains for humans are up for debate. While AI can save time on tasks like data gathering and research, it can also add time because its output requires more verification. The research is conflicting, with some studies finding that AI actually increased workload for up to 77% of workers.
Additionally, over-reliance on AI for writing can flatten a person's creativity and critical thinking. Studies have shown that students who over-rely on AI can experience cognitive atrophy. Efficiency matters, but sometimes friction is necessary for the best ideas to come through.
In sum, AI can be an efficient and powerful tool, but only when used correctly and responsibly. If you want to dive into this further, book a consultation with Spotlight PR.
Tiffany Onyejiaka is a medical student and freelance writer with Spotlight PR LLC. Be sure to check out other blogs and subscribe for regular communications updates.

