News organizations are increasingly integrating artificial intelligence (AI) tools and infrastructure into their operations. Yet the implications of this trend for the broader information environment that they constitute and shape (the public arena) remain poorly understood. On one level, AI is further rationalizing news work through the logics of platform companies, prioritizing greater calculability and efficiency in journalistic processes and creating new opportunities for monetization. On another level, the use of AI tools in the news media raises a series of ethical questions about how these systems interact with broader societal concerns, including privacy, transparency and bias.
Among the most prominent applications of AI are machine learning, data analytics and natural language processing (NLP), which help extract information from large datasets and identify patterns. AI also encompasses reasoning and decision-making systems, which apply logical rules, probability models and algorithms to reach reliable decisions. Other forms of AI include perception, such as face recognition and object detection, and problem-solving, which uses data analysis to devise solutions for specific problems.
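The kind of pattern extraction described above can be illustrated with a deliberately simple toy: counting which terms recur across a set of articles. This sketch is purely illustrative (the function name, the sample texts, and the word-level counting are all assumptions for the example, not a description of any production NLP system, which would use far more sophisticated models).

```python
import re
from collections import Counter

def top_terms(documents, n=3):
    """Toy stand-in for NLP pattern extraction: return the n most
    frequent words across a collection of texts."""
    words = []
    for doc in documents:
        words.extend(re.findall(r"[a-z']+", doc.lower()))
    return Counter(words).most_common(n)

# Hypothetical sample "articles" for demonstration only.
articles = [
    "Newsrooms adopt AI tools to summarize reports.",
    "AI tools help reporters analyze large datasets.",
]
print(top_terms(articles, 2))
```

Even this crude frequency count surfaces a recurring theme ("ai", "tools") in the sample texts; real NLP pipelines perform the same basic move, finding regularities in text, at vastly greater scale and subtlety.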
Generative AI is a type of AI that generates or transforms content based on inputs such as text, images or video frames. Its outputs range from the simple, and sometimes visibly flawed (such as an image of a hand with the wrong number of fingers), to more complex, story-based outputs such as headlines or full articles. It is often hailed for its ability to produce fluent, humanlike content, but critics point out that it can reproduce societal biases, because much of its training material comes from public sources that may reflect existing social prejudices.
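The core idea that a generative model can only recombine patterns present in its training data, which is why biases in that data resurface in its output, can be shown with a minimal bigram sketch. Everything here (the function names, the one-line corpus, the fixed random seed) is an assumption for illustration; real generative systems use large neural networks, not word-pair lookup tables.

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    # Map each word to the list of words observed to follow it.
    words = text.lower().split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, seed, length=6, rng=None):
    # Walk the model from a seed word, sampling a follower at each step.
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    out = [seed]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Hypothetical one-sentence "training corpus".
corpus = "the newsroom uses ai and the newsroom writes headlines"
model = build_bigram_model(corpus)
print(generate(model, "the"))
```

Whatever the model emits is drawn entirely from the corpus it was trained on; a skewed corpus yields skewed output, which is the bias dynamic critics describe, in miniature.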