How to set a culture ready for the adoption of AI

Calum Chace, Author of “Surviving AI” & Chief Marketing Officer at Conscium, discusses the changes the fund management world needs to make to be ready for AI technology.

Andrew Putwain | Posted on 10/8/2024


Andrew Putwain: Why is everyone so excited about Artificial Intelligence (AI) at the moment? Is it just hype or is it going to change how we do business?

Calum Chace: The reason everybody's excited about AI at the moment, in short, is GPT.

ChatGPT was launched in November 2022 and became the fastest-adopted consumer app ever. Then the much more impressive GPT-4 launched in March 2023, and people thought, “wow, AI has finally arrived, and this is going to change everything overnight”, which, of course, was wrong. But GPT-4 and its successors, which came out earlier this year, are impressive models.

These are AIs that we can all play with and interact with directly, and they give people who know how to use them a leg up in productivity.

"This isn’t hype. It's not going away, and it is going to change everything about

how we do business and the way we live."

This stemmed from the so-called second ‘Big Bang’ in AI, which happened in 2017 when Google researchers published a paper called “Attention is All You Need”. (The first ‘Big Bang’ was in 2012, when deep learning arrived.)

This second ‘Big Bang’ introduced transformer AIs, which led to large language models.

The innovation hasn't stopped. It's going gangbusters and accelerating with constant new advances. For instance, text-to-video was unknown last year, and now it's incredibly good.

That's why everybody's excited about AI: GPT and large language models.

This isn’t hype. It's not going away, and it is going to change everything about how we do business and the way we live.

Now, there are people who say, “it's a bubble”. People such as Gary Marcus say that OpenAI is about to collapse in ignominy, but the smart people who spend and invest money seem to disagree: they've just valued it at $160 billion, and banks and financial services firms are all in on these products.

Finance is an area where there's a lot of data, a great deal of money, and a lot of smart people, so it's a great fit for AI. For example, the American banking industry spent $20 billion on AI in 2023, and the European industry spent $5 billion.

So, there is a vast amount of money being spent on AI, and it is going to change everything.

Andrew: What is the current status of AI in the world of work? What are the latest trends that we're seeing then?

Calum: It's a strange situation at the moment. It's a bit of a phoney war.

When GPT-4 came out, I and lots of other people thought this was going to dramatically improve the productivity of people around the world, which it has to an extent.

I don't understand why 100% of people working in offices aren't using it every day. I know they're not because I ask all the time, and the majority have tried it once or twice, found that it didn't do exactly what they wanted, and haven’t tried it again.

The management of organisations is understandably nervous about GPT-style large language models, because they do hallucinate, they do make mistakes, and there is a worry that if you put your own proprietary data into them, that data might leak back into the main model and then out to other people. There are good reasons to be hesitant.

Organisations are still trying to figure out exactly how to use these models, and they're getting there, but I'm still puzzled that so many individuals aren't using them.

The people who aren't already using the technology will learn, perhaps when their organisations do.

"The big tech firms are going to be working with companies, helping them

get over the ‘hallucination problem’ and the privacy problem."

For instance, I had a conversation with somebody at a top law firm recently. He wouldn't commit to a precise figure, but he thinks that his firm and his clients have saved 30% of the cost involved in drafting legal documents, which is a lot of what they do.

They're doing that by working with Microsoft and a start-up in the legal sector called Harvey. That is, in all likelihood, going on in all industries. I'm sure it's going on in financial services.

The big tech firms such as Microsoft, IBM, and Google are going to be working with companies, helping them get over the ‘hallucination problem’ and the privacy problem, so they can deploy these systems in their organisations in an approved, organised, consistent way.

We're in this phoney war stage. Some people are using the technology, and often they're using it in a way that their bosses have not entirely approved of.

Andrew: Where is AI heading? Is the rapid change going to continue?

Calum: This is where it gets interesting. Yes, it is going to continue. We’re going to end up in some very strange places, and how strange depends on your time frame.

In the next five years or so, we're going to see people working closely with AIs every moment of every day. They won't do much without using GPT or its successors, and we’ll need to figure out the best way to manage this.

Also, we'll have inventions such as AI glasses before very long, which means we'll be interacting with [AI] all the time.

Now, if you go beyond five years, that's when it gets really interesting. There are two phases I like to think about in the future – the first I call the ‘Age of Miracles’ because we’re going to see some miraculous things. One of them is that there won’t be jobs for humans. Now, I don't know when this is going to happen. It might be 10 years. [OpenAI CEO] Sam Altman seems to think it's three or four years, and he's in a decent position to know. I see it as probably longer than that, and we’ll edge up to it bit by bit.

For example, I don't think jobs are going to be automated one by one. What will happen is there'll be a rapid and accelerating churn. But eventually, we’ll come to a tipping point where machines are able to do everything that we can do for money, cheaper, better, and faster than us.

What happens when there aren't any jobs for humans is that we will need a different version of capitalism.

Almost nobody is thinking seriously about this. Most people are in complete denial. If you ask them, they say, “there's something special about humans. We're alive, we're conscious. Machines aren't and can't be, and therefore they'll never take our jobs”. This strikes me as being dangerously complacent.

"Once we've created superintelligence, it won't be a tool anymore."

Another huge impact will be that we will figure out how to slow down ageing. We already know at a high level what causes ageing, but the details of human biology are unbelievably complicated, and you need machines that can deal with enormous amounts of complicated data in order to tackle that problem. This will affect the workforce and the economy in ways we can’t truly predict.

We can see that this is coming, but we don't know when it will arrive; whenever it does, it’s going to have an enormous impact. There are several more developments like this in the Age of Miracles.

But the really big change will be the arrival of superintelligence. This, again, is something that very few people are thinking about seriously. There's not an awful lot of difference between humans and chimpanzees in terms of DNA, or in terms of the weight of our brains as a percentage of body weight, but because we're a bit smarter than they are, there are eight billion of us and half a million of them, and we decide their future.

We are engaged in the fascinating project of making ourselves the chimpanzees, because once we've created superintelligence, it won't be a tool anymore. Right now it is, but the time will come when it isn't. It will be an independent entity, and we will be the chimpanzees in that relationship.

There are a number of different ways that might work out. I'm quite optimistic that it could have a good outcome, but it's risky, and some of the people who think seriously about this topic are deeply scared and think we should stop. Unfortunately, we cannot stop.

I can't tell you when this might all happen. 20 years? 50 years? Some people say it's not going to happen for thousands of years, but unless there's some strange physical limit on the ability of machines to keep getting smarter, it is coming, and we'll make it happen as soon as we possibly can, because owning a smarter AI means you always win any competition you’re engaged in.

Andrew: What should we be doing to prepare for our future with AI?

Calum: The most important thing is to get informed about what AI is, where it's going, and how to use it. As I say, for the moment, it's a tool.

For some time, the cliché will apply: AI won't take your job, but somebody who knows how to work well with AI might.

If you're informed about AI, and how to use it, you can avoid that.

We’re going to have to make decisions about AI. We're going to have to decide whether we want to try to stop certain types of models being developed, how the rapid improvements in longevity science should be rolled out, etc. The better informed we are, the better we will make those decisions.

Andrew: What about the aspects around regulation?

Calum: There are quite a lot of people, and many of them live in Silicon Valley, who say regulation is a bad thing.

They say regulators always fight last year's battles and that they're not as smart as the people they're trying to regulate. There is some merit to this argument.

"There has to be regulation, simply because this is too important to leave to some

smart people in Silicon Valley, even if they're well intentioned."

However, AI is our most important technology, and it isn't good enough for clever people in Silicon Valley to say, “you’re too stupid to understand what we're doing, so let us do what we do, and we'll take care of it.”

Everyone has a right, even a duty, to insist that our political leaders understand what's going on with this evolving technology, and from time to time to say, “we want this, and we don't want that”.

There's been a big debate recently about Senate Bill 1047, which [California Governor] Gavin Newsom has just vetoed. It said that anybody developing a frontier model (a model above a certain processing capability) would have to report it to the government and prove that they had taken steps to stop it causing harm.

That's a perfectly reasonable idea. Silicon Valley was horrified and fought a strong lobbying campaign against it. That's the debate we all need to be involved in, and that's why we need to get informed. We have a civic duty to take part in these debates.

We need our governments to be involved. There has to be regulation, simply because this is too important to leave to some smart people in Silicon Valley, even if they're well-intentioned.

Calum Chace will speak at the Fund Operator Summit | Europe on 15 October 2024 in London.

Read the agenda and find out how to register here.
