September 20, 2024


In mid-2019, I read a fascinating piece in Cosmos magazine, one of Australia’s leading science publications. It featured an image of a man lying on an operating table covered in bags of McCain’s frozen fries and hash browns.

Scientists have discovered that rapid cooling of the body can improve the survival rates of patients who have experienced heart attacks. This man was one such patient, hence the Frozen Food Fresco. The accompanying report was written by Paul Biegler, a bioethicist at Monash University, who visited a trauma ward at the Alfred Hospital, Melbourne, to learn about the method and to understand whether humans may be capable of hibernation in the distant future.

This is the kind of story I return to when I start to panic about AI’s infiltration of the news. After all, AI cannot visit the Alfred Hospital and – at least for now – it cannot conduct interviews.

But AI-generated articles are already being written, and their latest appearance in the media marks a worrying development. Last week it was revealed that staff and contributors to Cosmos claim they were not consulted about the launch of explainer articles believed to be written by generative AI. The articles cover topics such as “what is a black hole?” and “what are carbon sinks?” At least one of them contained inaccuracies. The explainers were generated by OpenAI’s GPT-4 and then fact-checked against Cosmos’ archive of 15,000 articles.

Full details of the publication’s use of AI were published by the ABC on August 8. In that article, CSIRO Publishing, an independent arm of CSIRO and the current publisher of Cosmos, said the AI-generated articles were an “experimental project” to assess the “potential utility (and risks)” of using a model such as GPT-4 to “assist our science communication professionals in producing draft science explainer articles”. Two former editors said the editorial team at Cosmos was not informed of the proposed custom AI service. The experiment came just four months after Cosmos made five of its eight staff members redundant.

The ABC also reported that Cosmos contributors were unaware of the publication’s intention to run the AI model, nor were they notified that their work would be used as part of the fact-checking process. CSIRO Publishing dismissed concerns that the AI service was trained on contributors’ articles, with a spokesperson noting that the experiment used a pre-trained GPT-4 model from OpenAI.

But the lack of internal transparency and consultation left journalists and contributors feeling betrayed and angry. Several sources suggest the experiment has now been put on hiatus, but CSIRO Publishing did not respond to a request for comment.

The controversy provided a dizzying sense of deja vu. We’ve seen this before. The respected American technology website CNET, where I served as science editor until August 2023, published dozens of articles generated by a custom AI engine at the end of 2022. In total, CNET’s robot writer picked up 77 bylines and, after investigation by competing publications, more than half of its articles were found to contain inaccuracies.

The backlash was swift and damning. One report said the internet was “horrified” by CNET’s use of AI. The Washington Post dubbed the experiment “a journalistic disaster.” Trust in the publication was broken practically overnight, and journalists within the organization felt betrayed and angry.

The Cosmos example provides a startling parallel. The backlash was once again swift, with journalists weighing in. “Thoroughly terrible,” wrote Natasha Mitchell, host of the ABC’s Big Ideas. And even the responses from the organizations are almost identical: call it an experiment, halt the deployment.

This time, however, the AI is being used to present facts backed by scientific research. That is a worrying development with potentially catastrophic consequences. At a time when trust in scientific expertise and trust in the media are both declining (the latter more steeply than the former), rolling out an AI experiment with a lack of transparency is, at best, ignorant and, at worst, dangerous.

Science can reduce uncertainty, but not erase it. Effective science journalism involves helping the audience understand that uncertainty and, research shows, doing so improves confidence in the scientific process. Generative AI, unfortunately, remains a predictive text tool prone to undermining that process with confident bullshit.

That’s not to say generative AI doesn’t have a place in newsrooms and should be banned. It is already used as an idea generator, for quick feedback on concepts, or for help with headlines. And, with appropriate oversight, it may become important for smaller publishers, like Cosmos, to maintain a steady stream of content in an internet age hungry for more.

Even so, if AI is to be deployed in this way, there are outstanding issues that have not been resolved. Confident-sounding fake information is just the beginning. Issues surrounding copyright and the theft of art to train these models have made their way to court, and there are serious sustainability issues to contend with: AI’s energy and water consumption, while difficult to quantify definitively, is enormous.

The bigger hurdle, however, is the audience: the University of Canberra’s Digital News Report 2024 found that only 17% of Australians are comfortable with news produced “mostly by AI”. It also noted that only 25% of respondents were comfortable with AI being used specifically for science and technology reporting.

If the audience doesn’t want to read AI-generated content, who is it made for?

The Cosmos controversy brings that question into stark relief. It is the first question that must be answered when rolling out AI, and it is a question that must be answered transparently. Both editors and readers need to know why an outlet might start using generative AI and where it will do so. There can be no secrecy or deception – this, we have seen time and time again, is how you destroy trust.

But, if you’re anything like me, you’ve reached the end of this article and want to know more about the heart attack man who was saved by a batch of McCain’s frozen food. And there is a lesson in that: the best stories stay with you.

From what we’ve seen so far, AI-generated articles don’t have that staying power.


