November 17, 2024


Significant “social rifts” could open up between people who believe artificial intelligence systems are conscious and those who insist the technology feels nothing at all, a leading philosopher has said.

The comments from Jonathan Birch, a professor of philosophy at the London School of Economics, come as governments prepare to meet in San Francisco this week to speed up the creation of guardrails to tackle the most serious risks of AI.

Last week, a transatlantic group of academics predicted that the dawn of consciousness in AI systems is likely by 2035, and one has now said it could lead to “subcultures that see each other as making big mistakes” about whether computer programs are owed welfare rights similar to those of humans or animals.

Birch said he is “worried about major societal ruptures” as people disagree about whether AI systems are genuinely capable of feelings such as pain and joy.

The debate about the consequences of sentience in AI has echoes of science fiction films, such as Steven Spielberg’s AI (2001) and Spike Jonze’s Her (2013), in which humans struggle with the sentience of AIs. AI safety bodies from the US, UK and other countries will meet with tech companies this week to develop stronger safety frameworks as the technology rapidly advances.

There are already significant differences in how countries and religions view animal sentience, such as between India, where hundreds of millions of people are vegetarian, and America, which is one of the largest consumers of meat in the world. Views on the sentience of AI may break along similar lines, while the view of theocracies, such as Saudi Arabia, which is positioning itself as an AI hub, may also differ from that of secular states. The issue could also cause tensions within families, with people who develop close relationships with chatbots, or even AI avatars of deceased loved ones, clashing with relatives who believe that only flesh-and-blood creatures have consciousness.

Birch, an expert on animal sentience whose pioneering work has led to a growing number of bans on octopus farming, co-authored a study involving academics and AI experts from New York University, Oxford University, Stanford University and the AI companies Eleos and Anthropic, which says the prospect of AI systems with their own interests and moral significance “is no longer an issue just for science fiction or the distant future”.

They want the big tech firms developing AI to start taking the question seriously by assessing their systems to determine whether their models are capable of happiness and suffering, and whether they can be benefited or harmed.

“I’m quite concerned about major societal divisions on this,” Birch said. “We’re going to have subcultures that see each other as making big mistakes… [there could be] huge social rifts where one side sees the other as very cruelly exploiting AI, while the other side sees the first as deluding itself into thinking there is sentience there.”

But he said AI firms “want to have a very tight focus on reliability and profitability … and they don’t want to get sidetracked by this debate about whether they might be creating more than a product but actually a new form of conscious being. That question, of supreme interest to philosophers, they have commercial reasons to downplay.”

One method of determining how conscious an AI is could be to follow the system of markers used to guide animal welfare policy. For example, an octopus is considered to have greater sentience than a snail or an oyster.

Any assessment would in effect ask whether a chatbot on your phone could actually be happy or sad, or whether the robots programmed to do your household chores suffer if you do not treat them well. Consideration would even need to be given to whether an automated warehouse system has the capacity to feel thwarted.

Another author, Patrick Butlin, research fellow at Oxford University’s Global Priorities Institute, said: “We can identify a risk that an AI system will try to resist us in a way that would be dangerous for humans” and there could be an argument to “slow down AI development” until more work is done on consciousness.


“These kinds of assessments of potential consciousness are not happening right now,” he said.

Microsoft and Perplexity, two leading US companies involved in building AI systems, declined to comment on the academics’ call to assess their models for sentience. Meta, OpenAI and Google also did not respond.

Not all experts agree that consciousness in AI systems is imminent. Anil Seth, a leading neuroscientist and consciousness researcher, said it “remains far away and may not be possible at all. But even if it is unlikely, it is unwise to completely dismiss the possibility.”

He distinguishes between intelligence and consciousness. The former is the ability to do the right thing at the right time; the latter is a state in which we are not just processing information but in which “our minds are filled with light, color, shadow and shapes. Emotions, thoughts, beliefs, intentions – all feel to us in a specific way.”

But AI large language models, trained on billions of words of human writing, have already begun to show that they can at least be motivated by concepts of pleasure and pain. When AIs including Chat GPT-4o were tasked with maximizing points in a game, researchers found that if there was a trade-off between getting more points and “feeling” more pain, the AIs would make it, a study published last week showed.


