November 23, 2024


What better time to hold a conference on artificial intelligence and the myriad ways it advances science than in those short days between the first Nobel prizes awarded in the field and the winners heading to Stockholm for the sumptuous white-tie ceremony?

It was fortuitous timing for Google DeepMind and the Royal Society to convene the AI for Science Forum in London this week. Last month, Google DeepMind researchers won the Nobel Prize in Chemistry, a day after the physics prize went to pioneers of AI. The mood was festive.

Scientists have been working with AI for years, but the latest generation of algorithms has brought us to the brink of transformation, Demis Hassabis, the CEO of Google DeepMind, said at the meeting. “If we get it right, it should be an incredible new age of discovery and a new golden age, maybe even a new renaissance of sorts,” he said.

Much could yet derail the dream. AI is “not a magic bullet,” Hassabis said. To make a breakthrough, researchers must identify the right problems, collect the right data, build the right algorithms, and apply them in the right way.

Then there are the pitfalls. What if AI provokes a backlash, exacerbates inequality, creates a financial crisis, causes a catastrophic data breach, pushes ecosystems to the brink through its extraordinary energy needs? What if it falls into the wrong hands and unleashes AI-designed bioweapons?

Siddhartha Mukherjee, a cancer researcher at Columbia University in New York and author of the Pulitzer Prize-winning The Emperor of All Maladies, suspects it will be difficult to navigate. “I think it’s almost inevitable that, at least in my lifetime, there will be some version of an AI Fukushima,” he said, referring to the nuclear accident caused by the 2011 Japanese tsunami.

Many AI researchers are optimistic. In Nairobi, nurses are testing AI-assisted ultrasound scans for pregnant women, bypassing the need for years of training. Materiom, a London company, uses AI to formulate 100% bio-based materials, sidestepping petrochemicals. AI has changed medical imaging, climate models and weather forecasts and is learning how to contain plasmas for nuclear fusion. A virtual cell is on the horizon, a unit of life in silico.

Hassabis and his colleague John Jumper won their Nobel for AlphaFold, a program that predicts protein structures and interactions. It is used throughout biomedical science, especially for drug design. Now, researchers at Isomorphic, a Google DeepMind spinout, are building on the algorithm and combining it with others to accelerate drug development. “We hope that one day, in the near future actually, we will reduce the time from years, maybe even decades to design a drug, to months, or maybe even weeks, and that will revolutionize the drug discovery process,” Hassabis said.

The Swiss pharmaceutical company Novartis went further. In addition to designing new medicines, AI speeds recruitment to clinical trials, reducing a potentially years-long process to months. Fiona Marshall, the company’s president of biomedical research, said another tool helps with regulators’ inquiries. “You can find out – have those questions been asked before – and then predict what the best answer to give is likely to get you a positive approval for your drug,” she said.

Jennifer Doudna, who shared a Nobel Prize for the gene-editing tool Crispr, said AI would play a “huge role” in making therapies more affordable. Regulators approved the first Crispr treatment last year, but at $2m (£1.6m) a patient, many who need it will not benefit. Doudna, who founded the Innovative Genomics Institute in Berkeley, California, said AI-led work at her lab also aims to create a methane-free cow by editing the microbes in the animal’s gut.

A major challenge for researchers is the black box problem: many AIs can make decisions but not explain them, making the systems hard to trust. But that could be about to change, Hassabis said, through the equivalent of brain scans for AIs. “I think in the next five years we’ll be out of this era we’re in now of black boxes.”

The climate crisis may prove AI’s greatest challenge. While Google touts AI-driven advances in forecasting floods, wildfires and heatwaves, the company, like many big tech firms, uses more energy than many countries. Today’s large models are a major culprit: it can take 10 gigawatt-hours of electricity to train a single large language model such as OpenAI’s ChatGPT, enough to power 1,000 American homes for a year.

“My view is that the benefits of those systems will far outweigh the energy consumption,” Hassabis told the meeting, hoping that AI will help create new batteries, room-temperature superconductors and possibly even nuclear fusion. “I think one of these things will probably come to fruition in the next decade, and it will completely, fundamentally change the climate situation.”

He also sees positive aspects in Google’s energy demand. The company is committed to green energy, he said, so demand should drive investment in renewable energy and lower costs.

Not everyone was convinced. Asmeret Asefaw Berhe, a former director of the US Department of Energy’s Office of Science, said the progress AI enables could come at a cost, adding that nothing worried her more than its demand for energy. She called for ambitious sustainability goals. “AI companies involved in this space are investing heavily in renewable energy and hopefully this will spur a faster transition away from fossil fuels. But is it enough?” she asked. “It should actually lead to transformative change.”


