27/04/2023

AI: Regulation required!

 

Artificial intelligence refers to intelligence demonstrated by machines rather than the natural intelligence of humans or animals. In short, AI is software, often running on powerful computers, built to assist with needs from the simple to the complex. But as AI has developed at a pace rarely seen before in science, how can we ensure that its numerous setbacks don’t frighten society off its brilliant potential?

What is the history of AI?

Since computer scientist John McCarthy coined the phrase artificial intelligence in 1956, technology has made huge leaps – from computers that could defeat a world chess champion (IBM’s Deep Blue, 1997) to robot vacuums hoovering our floors unassisted (2002). Thanks to significant funding, the years 2015–2019 saw a significant jump, with the volume of AI research growing by around 50%. And in 2020, during the development of the Covid-19 vaccines, AI was used to predict the secondary structure of the virus’s RNA sequence in just 27 seconds.

AI is central to our daily lives

Google has embraced AI for over ten years: YouTube uses AI to recommend videos, Gmail uses AI to block spam emails, and a new multisearch function will use AI to search for text and images simultaneously. Meanwhile, Spotify’s recommendations run almost entirely on AI – yes, your annual listening round-up is purely AI-driven. Using collaborative filtering, natural language processing and audio analysis models, AI can assemble your Spotify Wrapped and predict which new releases belong in your weekly playlists. And the technology continues – Spotify’s new AI DJ will predict what we want to listen to next while we stream music.
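To give a flavour of how collaborative filtering works – this is a deliberately tiny, hypothetical sketch, not Spotify’s actual system – the core idea is that tracks listened to by similar audiences are probably similar. The matrix, users and track indices below are all invented for illustration:

```python
# A minimal, hypothetical sketch of item-based collaborative filtering.
# Rows are users, columns are tracks; a 1 means "this user played this track".
import numpy as np

play_matrix = np.array([
    [1, 1, 0, 0, 1],   # user A
    [1, 1, 1, 0, 0],   # user B
    [0, 0, 1, 1, 0],   # user C
])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two track columns (0 = unrelated, 1 = identical audiences)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recommend(user: int, matrix: np.ndarray, top_n: int = 2) -> list[int]:
    """Score each unheard track by its similarity to tracks the user already plays."""
    heard = set(np.flatnonzero(matrix[user]))
    scores = {}
    for track in range(matrix.shape[1]):
        if track in heard:
            continue
        scores[track] = sum(
            cosine_similarity(matrix[:, track], matrix[:, h]) for h in heard
        )
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend(user=0, matrix=play_matrix))  # unheard tracks for user A, best first
```

Real systems combine this signal with the language and audio models mentioned above, but the principle – recommend what similar listeners already enjoy – is the same.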

The arrival of ChatGPT raises even more questions about what artificial intelligence can currently do and, more importantly, what it could achieve. It’s clear that AI is not a new phenomenon, but its rapid development has outpaced the regulations and guidelines required to make it safe and fair.

What is ChatGPT?

ChatGPT (created by OpenAI) is a familiar chatbot experience with a difference. You can hold a human-like conversation with the system and prompt it to handle complex requests: writing emails, essays, poetry and speeches, generating code and Excel formulas, and producing CVs and cover letters.

Starting where Siri finishes, GPT stands for Generative Pre-trained Transformer: a large language model (LLM) whose neural networks have been trained, through deep learning, on vast swathes of text from the internet. In short, it has consumed a huge snapshot of the web and is ready to spit it back out at you. Using RLHF (Reinforcement Learning from Human Feedback), the model then fine-tunes its responses to sound more human. The front end of the programme is the chatbot that responds to you, while the advanced neural network acts as the back end, sifting the information for you.
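That front-end/back-end split is easy to see in practice: the ChatGPT website is essentially a polished interface over API calls like the one sketched below, which uses OpenAI’s Python library (the pre-1.0 interface current at the time of writing) and assumes you have an API key in the OPENAI_API_KEY environment variable:

```python
# A minimal sketch of talking to the model behind ChatGPT via OpenAI's API,
# using the pre-1.0 "openai" Python library current in early 2023.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes the key is set

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model ChatGPT launched with
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Draft a polite two-line email declining a meeting."},
    ],
)

# The chat front end simply renders this generated text back to the user.
print(response["choices"][0]["message"]["content"])
```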

This deep-learning approach puts ChatGPT miles in front of any other readily accessible AI service, including Google’s latest release, Bard. Backed by Microsoft, ChatGPT is set to be embedded into Outlook, Word, Excel and PowerPoint, and it has already become the fastest-growing consumer app of all time.

Why do people worry about AI and ChatGPT?

People are worrying about ChatGPT and the AI behind it because of its potential for negative, and in some cases criminal, activity. Here are just some of the problems:

  • ChatGPT has already shown that it will produce content violating OpenAI’s own content policy, including sexist, racist and hateful responses.
  • The deep-learning model has already been used to write university assignments while evading plagiarism detectors.
  • Its concise, accurate writing means criminal content such as phishing emails can be created without their typical typos – a universally recognised indicator of a phishing scam. And it is capable of writing working code, which in the wrong hands could be used in malware attacks.

At this point, the criminality of these acts lies in a grey area – who is at fault? The person who made the request, or the AI that actioned it? And all of this is only the tip of the AI iceberg.

Is our fear of AI driven by media stories?

Despite the potential criminality of AI misuse, the media remains focused on the philosophical and moral implications of advancing AI technology. The Evening Standard’s article on a London private school banning homework seemed shocking at first, until recent education policies reflected a move towards classroom-based learning and research-based homework. The Guardian’s article about the International Baccalaureate’s acceptance of ChatGPT seemed horrendous, until the education body clarified that pupils may use the tool only if they reference it properly. Similarly, Boris Eldagsen’s award-winning AI-generated entry to the Sony World Photography Awards produced headlines designed to shock – opening up conversations about AI’s role.

As threatening as AI seems to academia and artistic expression, these issues could be resolved through academic debate and thorough regulation. Because of the media’s approach, society is preoccupied with these moral shocks rather than with the legality of AI’s methods and its potential misuse. It doesn’t help that, internationally, little has been done to prepare for AI’s future by developing a regulatory framework of the kind we have for cloning. For instance, although the media framed Italy’s ban on ChatGPT as a moral choice, the Italian government had in fact first taken issue with the system’s data-mining methods and its infringement of GDPR. Had an international regulatory framework with UNESCO backing been in place, Italy’s ban might never have happened.

Next step: Regulation and PR

The media’s heavy focus on ChatGPT has made it clear that AI’s development should only continue alongside an international regulatory framework that anticipates the implications of rapid progress. The next step perhaps isn’t further development of AI’s capabilities, but regulation, accompanied by better dialogue with the companies developing AI.

It is all too clear that governments are moving too slowly to catch up with AI’s trajectory. The UK, for example, published new proposals for an AI rulebook in July 2022, mere months before ChatGPT was released to the public – proposals that received minimal exposure from the UK media and press.

How far other countries have got with similar rulebooks remains unclear, and one nation tightening up its regulatory framework is not enough. An international approach must be taken to ensure that new AI technology developed by any nation is compatible with both legal and moral frameworks.

Better dialogue between governments and AI development organisations is not all that’s required. Governments also need to fully brief their nations’ media, so that society is better informed about the steps being taken to regulate and accommodate AI’s advancement, and everyone can cooperate on positive development.

AI’s development has taken barely 70 years, and as research receives more focus and funding, the coming decades will bring leaps never seen before. Worryingly, the unknowns that need discussing require more than a newspaper article’s word count or a click-bait blog post. Left unreported and unacknowledged, they leave society moving onwards, unable to comprehend what AI has in store for us – a consequence of the media’s focus on salacious headlines. Our international governments’ failure to confront the rapid trajectory of AI means that scientific breakthroughs such as ChatGPT already exceed our ability to comprehend the true extent of AI’s potential.

And besides, could you function with a Gmail inbox that doesn’t sift out your spam? I don’t think I could…

 
