September 16, 2024


Intelligent machines have served and enslaved humans for decades in the realm of the imagination. The omniscient computer—sometimes benign, usually malevolent—was a staple of the science fiction genre long before such an entity was feasible in the real world. That moment may now be approaching faster than societies can set appropriate rules. In 2023, the capabilities of artificial intelligence (AI) have come to the attention of a wide audience far outside technology circles, largely thanks to ChatGPT (launched in November 2022) and similar products.

Given how quickly the field is progressing, that fascination is sure to intensify in 2024, along with alarm about some of the more apocalyptic scenarios possible if the technology is not adequately regulated. The closest historical parallel is humanity’s acquisition of nuclear power, and the challenge posed by AI is arguably greater. Getting from a theoretical understanding of how to split the atom to assembling a reactor or a bomb is difficult and expensive; malicious code, by contrast, can be transmitted and replicated online with viral efficiency.

The worst outcome – human civilization accidentally programming itself into obsolescence and collapse – is still the stuff of science fiction, but even a low probability of catastrophe should be taken seriously. Meanwhile, damage on a more mundane scale is not just possible but already present. The use of AI in automated systems in the administration of public and private services risks embedding and reinforcing racial and gender bias. An “intelligent” system trained on data skewed by centuries in which white men dominated culture and science will make medical diagnoses or evaluate job applications according to criteria with built-in bias.
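The mechanism is concrete enough to sketch. The toy example below uses invented data and a deliberately naive frequency “model” rather than any real deployed system, but it shows how faithfully a statistical learner reproduces the skew in its training record:

```python
# A hypothetical illustration: the records below are invented, and the
# "model" is just frequency counting, but the mechanism is the same one
# that affects far more sophisticated systems.
from collections import defaultdict

# Synthetic historical hiring records: (group, hired). Groups "A" and "B"
# are equally qualified throughout, yet the record is skewed against "B".
history = (
    [("A", True)] * 80 + [("A", False)] * 20
    + [("B", True)] * 40 + [("B", False)] * 60
)

# "Training": estimate P(hired | group) from the record, exactly as a
# naive statistical model would.
counts = defaultdict(lambda: [0, 0])  # group -> [times hired, total seen]
for group, hired in history:
    counts[group][0] += int(hired)
    counts[group][1] += 1

def predicted_hire_rate(group):
    hired, total = counts[group]
    return hired / total

# Equally qualified candidates, unequal predictions: the model has
# faithfully learned the bias baked into its training data.
print(predicted_hire_rate("A"))  # 0.8
print(predicted_hire_rate("B"))  # 0.4
```

Replace the frequency table with a deep network and the invented records with decades of real decisions, and the same dynamic operates at scale.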

This is the less glamorous end of concerns about AI, which perhaps explains why it receives less political attention than terrifying fantasies of robot uprisings, but it is also the most pressing task for regulators. While in the medium and long term there is a risk of underestimating what AI can do, in the shorter term the opposite tendency – being unduly overawed by the technology – impedes prompt action. The systems currently being rolled out in all sorts of spheres, making useful scientific discoveries as well as sinister deepfake political propaganda, rest on concepts that are formidably complex at the level of code, but not conceptually inscrutable.

Organic nature
Large language model technology works by absorbing and processing vast data sets (much of them scraped from the internet without permission from the original content producers) and generating solutions to problems at astonishing speed. The end result looks like human intelligence, but is in fact a brilliantly plausible synthetic product. It has almost nothing in common with the subjective human experience of cognition and consciousness.
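The principle can be illustrated at toy scale. The sketch below is not how production LLMs are built (they use neural networks trained on vast corpora); it is a minimal bigram model over an invented miniature corpus, showing the core idea of statistical next-word continuation rather than comprehension:

```python
# A minimal, hypothetical sketch of the statistical principle behind
# language models: predict the next word from patterns in training text.
# Real LLMs use neural networks over vast corpora; this toy bigram model
# only illustrates that output is pattern continuation, not thought.
import random
from collections import defaultdict

corpus = (
    "the machine reads the text and the machine writes the text "
    "and the reader believes the machine understands the text"
).split()

# "Training": record which words follow which in the corpus.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # sample a plausible continuation
    return " ".join(words)

print(generate("the"))  # fluent-looking output with nothing behind it
```

Run it a few times: the output is a fluent-sounding recombination of the training text, which is the sense in which the result is plausible synthesis rather than thought.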

Some neuroscientists plausibly argue that the organic nature of a human mind – the way we have evolved to navigate the universe through biochemical mediation of sensory perception – is so qualitatively different from the modeling of an external world by machines that the two experiences will never converge.

This does not prevent robots from overtaking humans in the performance of increasingly sophisticated tasks, which is clearly happening. But it does mean that the essence of what it means to be human is not as soluble in the rising tide of AI as some gloomy predictions imply. This is no mere philosophical nicety. To manage the social and regulatory implications of increasingly intelligent machines, it is essential to maintain a clear sense of human agency: where the balance of power lies and how it might shift.

It’s easy to be impressed by the capabilities of an AI program while forgetting that the machine is executing an instruction designed by a human mind. Data-processing speed is the muscle, but the driving force behind the wonders of computing power is the imagination. The answers that ChatGPT gives to difficult questions impress because it is the question, framed by a human mind with its infinite possibilities, that does the impressive work. The actual text is usually banal, even rather dull compared with what a qualified person could produce. The quality will improve, but we must not lose sight of the fact that the sophistication on display is our own human intelligence reflected back at us.

Ethical impulses
That reflection is also our greatest vulnerability. We anthropomorphize robots in our own minds, projecting onto them emotions and conscious thoughts that do not actually exist; this is also how they can be used for deception and manipulation. The better machines get at replicating and surpassing technical human achievements, the more important it becomes to study and understand the nature of the creative impulse and the way societies are defined and held together by shared experiences of the imagination.

The further robotic capability spreads into our everyday lives, the more essential it becomes to understand and teach future generations about culture, art, philosophy and history – fields called the humanities for a reason. While 2024 will not be the year robots take over the world, it will be a year of growing awareness of the ways in which AI has already embedded itself in society, and of demands for political action.

The two most powerful engines currently accelerating the development of the technology are the commercial race for profit and the competition between states for strategic and military advantage. History teaches that those impulses are not easily restrained by ethical considerations, even when there is an express declaration of intent to proceed responsibly. In the case of AI, there is a particular danger that public understanding of the science cannot keep up with the questions policymakers are grappling with. This can lead to apathy and unaccountability, or to moral panic and bad legislation. That is why it is essential to distinguish between the science fiction of all-powerful robots and the reality of brilliantly sophisticated tools that ultimately take instruction from humans.

Most non-experts struggle to grasp the inner workings of super-powerful computers, but that is not the qualification needed to understand how to regulate the technology. We don’t have to wait to find out what robots can do if we already know what it is to be human, and that the power for good and evil lies in the choices we make, not in the machines we build.


