Written by Stephan Rudolph, Developer at Aqilla
The last couple of years have seen dramatic developments in AI (Artificial Intelligence). Whenever a computer needs to recognise text (OCR), voice or images, it now does so via a trained neural network. Text and voice recognition work at roughly human level, while image recognition, at least in specialised cases such as cancer detection, far surpasses human expert capability. On the other side of this is generative AI, which is far more complex since it involves creativity – something humans often like to say will never be the domain of computers… well, “hold my beer”, says OpenAI! In this article, we’ll be discussing the latest advancements in ‘creative’ AI: ChatGPT, its limitations, dangers, controversies and its significance for the future of finance.
Imagine training a neural network to recognise a dog by showing it numerous pictures of dogs. The AI works out for itself which features distinguish a dog and uses them to identify the animal in future images. Now imagine training the same kind of AI on a vast slice of human knowledge and how it is all connected: books, articles, websites, code and more. OpenAI did just this and created a model (of the world) called GPT-3. GPT-3 ‘understands’, or recognises, the world like a human trained to college-degree level in every field of expertise that exists. Because AI is so good at extracting patterns, it ‘knows’ the style of Van Gogh versus Picasso; Shakespeare versus Eminem; or Pong versus Call of Duty. It can infer meaning and intent from vague requests and questions, and can even self-correct when an error is pointed out.
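To make the “show it lots of dog pictures” idea a little more concrete, here is a minimal training sketch in Python using PyTorch. The folder name, model choice and settings are illustrative assumptions only – it shows the general technique of learning from labelled examples, not how OpenAI built GPT-3:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical folder of photos sorted into "dog/" and "not_dog/" sub-folders.
train_data = datasets.ImageFolder("dog_photos/", transform=preprocess)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Start from a network pre-trained on general images and re-train its final
# layer to answer a single question: dog or not?
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                      # a few passes over the pictures
    for images, labels in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()                     # adjust the features the network
        optimiser.step()                    # uses to tell "dog" from "not dog"
```

GPT-3 applies the same principle at a vastly larger scale, learning patterns from text rather than from labelled photos.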
This latest development was, for many, considered the stuff of science fiction, alongside cold fusion or even faster-than-light travel! It’s no surprise that this new AI phenomenon has been branded ‘revolutionary’: it is, after all, essentially a conversational, ‘Star Trek’ computer interface! Thankfully, however, it is not sentient or alive, nor does it have wants, needs, will or its own agenda. In fact, it’s curated to barely form an opinion about anything. So far it is a tool… a very, very capable tool. However, it is built and trained solely on human knowledge, which is incomplete and prone to error, meaning it can and does make mistakes – although probably fewer than the average human tasked with the same request!
DALL·E 2: This is an image generator that can create any image based on a text prompt. To create a relevant image, it needs to understand every single word in the prompt, as well as how to combine them meaningfully. Is this creative? If the same prompt were given to a random person, would the outcome be ‘better’ or perhaps ‘more creative’? This exact line of thought has sparked huge controversy: around copyright in the sources the AI was trained on; around what ‘real’ creativity is and whether it is simply a remix of prior impressions; and around multiple other ethical questions.
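Mechanically, at least, generating such an image is as simple as sending a text prompt to an API. A minimal sketch, assuming the current OpenAI Python SDK and an API key; the prompt itself is just an invented example:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# The model has to understand every word of the prompt and combine them
# into one coherent picture.
result = client.images.generate(
    model="dall-e-2",
    prompt="a ledger book painted in the style of Van Gogh's Starry Night",
    n=1,
    size="512x512",
)
print(result.data[0].url)  # link to the generated image
```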
ChatGPT: Let’s face it, if you haven’t seen or heard the name ‘ChatGPT’ since the end of 2022, you’re probably living under a rock! Released only as an open beta test, ChatGPT is essentially a super-powered chatbot: put in a question or request and you can have a conversation with the AI as if you were talking to another human. What is so impressive about this technology is that every answer you receive is freshly synthesised and generated, rather than ‘looked up’, copied and pasted into the chat. Here are a few ideas of what you can request:
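For instance, here is a minimal sketch that sends a few requests programmatically, using the current OpenAI Python SDK rather than the chat window; the prompts and model name are invented purely for illustration, and an API key is assumed to be set:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# A few invented examples of the kind of request you can make.
prompts = [
    "Summarise the main changes in IFRS 16 in plain English.",
    "Write a polite email chasing a customer about an overdue invoice.",
    "Explain double-entry bookkeeping as if I were ten years old.",
    "Write a limerick about month-end close.",
]

for prompt in prompts:
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    # Each answer is generated fresh, not looked up and pasted in.
    print(reply.choices[0].message.content)
```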
You get the point. It is so good at giving relevant answers that educational institutions have begun to ban it due to its ability to complete students’ homework tasks. In addition, new websites have already been created to detect if a text is AI-generated!
Most people are rightly blown away by ChatGPT, even in its ‘beta version’. However, the real significance of this milestone is often missed: how well ChatGPT actually understands things – in particular, intent and meaning. Basic examples like those above barely scratch the surface of the underlying technology’s full potential.
The question we should be asking is: if this isn’t even ‘Version 1’, what will ChatGPT, DALL·E 2 and other AI software look like in five years, with fewer flaws and limitations and trained on even more data? Think about how the internet has changed our lives over the last 20–30 years, including the emergence of the smartphone combined with high-speed mobile internet: programmes like ChatGPT have the potential to be far more transformative!
Limitations:
• It is trained on a closed dataset that ends in early 2022 – if you asked it, the Queen would still be alive
• It has no internet access, so you can’t ask it for the weather or train times
• It restricts certain requests to avoid creating harmful content (like instructions on how to build a bomb)
• It does not ‘feel’ human empathy.
Potential dangers and controversies:
• It is always confident despite making mistakes
• Every answer has to be taken with a pinch of salt
• It doesn’t provide a rating of how sure/confident it is with the answer it gives
• It does not cite sources
• It could transform society and, in particular, the job market.
Based on what we have learned so far about ChatGPT, we can begin to speculate how it could change our lives, our work and, in particular, the financial sector. In general, most administrative work could be done by ChatGPT, and eventually we could have the equivalent of a personal secretary or assistant that knows your wishes, intentions and preferences so well that it can handle most day-to-day tasks for you. Alongside advancements in robotics, this could even give households a ‘maid’ for mundane chores: tidying, cleaning, cooking, etc.
However, employment and job markets could be even more drastically transformed by software like ChatGPT, with many desk jobs seeing major changes, and administrative and advisory roles potentially becoming obsolete! For workers and artists in creative industries, the challenges posed by DALL·E 2 and ChatGPT may force them to find new and innovative ways to set their work apart from AI output. The same could be said for engineers and programmers, as much of the code humans currently write by hand could instead be generated by tools like ChatGPT.
And yet, it’s not all doom and gloom, or robots taking over our jobs! For industries such as the financial sector, programmes like ChatGPT could become an invaluable asset for increasing productivity: time-consuming administrative tasks could be handed off to AI, and AI-assisted analysis is already spreading through the industry as the technology develops (a small sketch of what this might look like in practice follows the transcript below). Of course, if you ask it, ChatGPT itself can predict the impact AI could have on the financial sector:
Me: How will AI like chatGPT change the financial sector?
ChatGPT: AI like ChatGPT has the potential to significantly impact the financial sector in a variety of ways. For example, it could be used to automate and streamline various financial processes, such as compliance and risk management. Additionally, it could be used to improve the customer experience by providing more personalised and efficient service through natural language processing and understanding. It could also be used to analyse large amounts of data and make more accurate predictions about market trends, which could help with investment decisions and portfolio management. Finally, it could be used to detect and prevent fraud more effectively.
Me: What are the dangers?
ChatGPT: While AI has the potential to bring many benefits to the financial sector, there are also potential dangers that should be considered. Some of these include:
• Job Loss: Automation of financial processes could lead to job loss for some employees.
• Bias: AI systems can perpetuate and even magnify the biases present in the data they are trained on. This can lead to unfair or discriminatory outcomes for certain groups of people.
• Security: AI systems can be vulnerable to hacking and other cyber attacks, which could compromise sensitive financial data.
• Lack of explainability: AI systems can be opaque and difficult to understand, which can make it difficult to explain how they arrived at certain decisions.
• Dependence: If a financial organisation becomes too dependent on an AI system, it could be at risk if the system fails or is unavailable.
• Transparency: AI-based decision-making systems can be opaque, making it difficult to understand the decisions made by the system and how it arrived at a particular outcome, which can make it difficult to trust the system.
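To make the “administrative tasks” point above a little more concrete, here is a minimal sketch, again assuming the OpenAI Python SDK and an API key, of how a routine bookkeeping chore – categorising transaction descriptions – might be handed off to a model like ChatGPT. The transactions, categories and model name are invented for illustration:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# Invented transaction descriptions; in practice these would come from a ledger export.
transactions = [
    "AWS EMEA monthly invoice 1042.17 GBP",
    "Pret A Manger client lunch 36.80 GBP",
    "HMRC PAYE payment 12408.00 GBP",
]

instructions = (
    "You are a bookkeeping assistant. For each transaction description, "
    "reply with a single expense category such as 'IT & hosting', "
    "'Entertainment' or 'Payroll taxes'."
)

for line in transactions:
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": instructions},
            {"role": "user", "content": line},
        ],
    )
    print(line, "->", reply.choices[0].message.content)
```

A human would still need to review the output: as noted in the limitations above, the model is always confident, even when it is wrong.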
For many, the truth behind programmes such as ChatGPT is that we are one step closer to the creation of artificial sentient life, or a technological singularity, which could go either way: Star Trek or Terminator… On a less daunting note, ChatGPT will hopefully allow us to become more productive in our jobs and to enjoy much more free time. Whilst such revolutionary technology and the societal changes it brings may seem unnerving, the hope is that, used with positive intentions, proper regulation, transparency and human collaboration, programmes like ChatGPT could mean a huge step forward for humanity and help solve some of our world’s biggest challenges.