July 27, 2024


If you’ve heard anything about the relationship between Big Tech and climate change, it’s likely this: The data centers that power our online lives use a staggering amount of energy. And some of the newest energy hogs on the block are artificial intelligence tools like ChatGPT. Some researchers suggest that ChatGPT alone can use as much power as 33,000 American households in a typical day, a number that could balloon as the technology becomes more widespread.

The startling emissions add to a general sense of panic driven by news about AI stealing jobs, helping students cheat, or, who knows, taking over. About 100 million people already use OpenAI’s most famous chatbot on a weekly basis, and even those who don’t use it are likely to encounter AI-generated content. But a recent study points to an unexpected upside to that broad reach: Tools like ChatGPT can teach people about climate change, and potentially move deniers closer to accepting the overwhelming scientific consensus that global warming is happening and caused by humans.

In a study recently published in the journal Scientific Reports, researchers at the University of Wisconsin-Madison asked people to strike up a climate conversation with GPT-3, a large language model released by OpenAI in 2020. (ChatGPT runs on GPT-3.5 and GPT-4, updated versions of GPT-3.) Large language models are trained on vast amounts of data, allowing them to identify patterns, generate text based on what they’ve seen, and converse somewhat as a human would. The study is one of the first to analyze GPT-3’s conversations about social issues such as climate change and Black Lives Matter. It analyzed the bot’s interactions with more than 3,000 people, mostly in the United States, from across the political spectrum. About a quarter of them came into the study with doubts about established climate science, and they tended to come away from their chatbot conversations a little more supportive of the scientific consensus.

However, that doesn’t mean they enjoyed the experience. They reported feeling disappointed after discussing the topic with GPT-3, and they rated the bot’s likability about half a point or more lower on a 5-point scale. That creates a dilemma for the people who design these systems, said Kaiping Chen, an author of the study and a professor of computational communication at the University of Wisconsin-Madison. As large language models continue to evolve, the study says, they may begin to respond to people in a way that matches users’ opinions, regardless of the facts.

“You want to make your user happy, otherwise they will use other chatbots. They’re not going to come on your platform, right?” Chen said. “But if you make them happy, they might not learn much from the conversation.”

Prioritizing user experience over factual information could turn ChatGPT and similar tools into vehicles for bad information, like many of the platforms that shaped the internet and social media before them. Facebook, YouTube, and Twitter, now known as X, are awash with lies and conspiracy theories about climate change. Last year, for example, posts with the hashtag #climatescam got more likes and retweets on X than those with #climatecrisis or #climateemergency.

“We already have such a big problem with disinformation and misinformation,” said Lauren Cagle, a professor of rhetoric and digital studies at the University of Kentucky. Large language models like ChatGPT are “tottering on the verge of exploding that problem even more.”

The University of Wisconsin-Madison researchers found that the kind of information GPT-3 delivered depended on who it was talking to. For conservatives and people with less education, it tended to use words associated with negative emotions and talk about the destructive outcomes of global warming, from drought to rising seas. For those who supported the scientific consensus, it was more likely to talk about things they can do to reduce their carbon footprint, such as eating less meat or walking and cycling when possible.

What GPT-3 told them about climate change was surprisingly accurate, according to the study: Only 2 percent of its responses went against the commonly understood facts about climate change. Still, these AI tools reflect whatever they’ve been fed, and they do slip up at times. Last April, an analysis by the Center for Countering Digital Hate, a British nonprofit, found that Google’s chatbot, Bard, told one user, without additional context: “There’s nothing we can do to stop climate change, so there’s no point in worrying about it.”

It’s not hard to use ChatGPT to generate misinformation, although OpenAI has a policy against using the platform to deliberately mislead others. It took some prodding, but I managed to get GPT-4, the latest public version, to write a paragraph laying out the case for coal as the fuel of the future, even though it initially tried to steer me away from the idea. The resulting paragraph echoes fossil fuel propaganda, touting “clean coal,” a misnomer used to market coal as environmentally friendly.

[Image: Screenshot of a paragraph from ChatGPT extolling coal’s virtues as an energy source]

There is another problem with large language models like ChatGPT: They are prone to “hallucinations,” or making up information. Even simple questions can turn up bizarre answers that fail a basic logic test. For example, I recently asked GPT-4 how many toes a possum has (don’t ask why). It replied: “A possum typically has a total of 50 toes, with each foot having 5 toes.” It only corrected course after I questioned whether a possum has 10 limbs. “My previous response about possum toes was wrong,” the chatbot said, updating the count to the correct answer: 20 toes.

Despite these flaws, there are potential benefits to using chatbots to help people learn about climate change. In a normal, human-to-human conversation, many social dynamics are at play, especially between groups of people with radically different worldviews. For example, if an environmental advocate tries to challenge a coal miner’s views on global warming, this can make the miner defensive, causing them to dig in their heels. A chatbot conversation provides more neutral ground.

“For many people, this probably means that they don’t perceive the interlocutor, or the AI chatbot, as having identity characteristics opposite to their own, and so they don’t have to defend themselves,” Cagle said. That may be one explanation for why climate deniers softened their stance slightly after talking to GPT-3.

There is now at least one chatbot aimed specifically at providing quality information about climate change. Last month, a group of startups launched ClimateGPT, an open-source large language model trained on climate-related studies across science, economics, and other social sciences. One of the goals of the ClimateGPT project was to generate high-quality responses without consuming an enormous amount of electricity. It uses 12 times less computing power than ChatGPT, according to Christian Dugast, a natural language scientist at AppTek, a Virginia-based artificial intelligence company that helped refine the new bot.

ClimateGPT is not completely ready for the general public “until proper safeguards have been tested,” according to its website. Despite the problems Dugast is addressing — the “hallucinations” and factual failures common among these chatbots — he thinks it could be useful for people hoping to learn more about some aspect of the changing climate.

“The more I think about this type of system,” Dugast said, “the more I’m convinced that when you’re dealing with complex questions, it’s a good way to get informed, to get a good start.”





