“Collaboration is crucial to use AI responsibly”

Published on: 23 February 2023

With the arrival of ChatGPT, Artificial Intelligence (AI) is suddenly finding its way into the living rooms of “ordinary” citizens. AI has become mainstream. And the whole world is rushing into it. What does this mean for financial services? Will it change the way we invest? What impact will it have on client contact? And: is our society ethically and legally ready for artificial intelligence? In this series, we explore the answers with experts from inside and outside APG.


In Part 1: Pieter Custers, Director Business Development & Community Development at Brightlands Smart Service Campus.


An absolute gamechanger. That is what you can safely call ChatGPT, says Pieter Custers. At the Brightlands Smart Service Campus (BSSC) in Heerlen and within the Brightlands AI Hub, he has long been working on the potential of AI, but also on its ethical and legal implications. In recent weeks, Custers also delved into the much-discussed tool from OpenAI. Just like the rest of the world, for that matter. The chatbot with generative artificial intelligence - which enables it to create something new based on existing data - is being tried out and put to practical use around the world. Competitors such as Google are also stirring. And all the while, developments in the AI field have been under way for years. “But with ChatGPT there is a breakthrough. You see that the models are now ready, so they can actually be implemented. When you start working with it, you also notice that this is different from previous AI software. I liken it somewhat to the step Google Translate took when deep learning took its translations to the next level.”


Custers would know. He explores the possibilities of artificial intelligence on a daily basis and was closely involved in the creation of the Brightlands AI Hub, founded in 2021 by 20 major employers from Limburg, with APG and BSSC as co-initiators. The goal: to connect companies, knowledge institutions and other organizations in the region working on artificial intelligence and data science. AI offers many possibilities and opportunities, he argues, but to use them optimally and safely, collaborations like the Brightlands AI Hub are badly needed.


Why that need for collaboration?

“Developments in the field of artificial intelligence are moving very fast. Much faster, for example, than the laws and regulations governing it. And yet there are numerous legal, social and ethical components to the use of AI. You can compare it to the development of smartphones: they are still evolving at breakneck speed. So fast that we don’t really know how to deal with them. We know that smartphones are disastrous for our attention span, but strangely enough, whether they should be allowed in the classroom remains a matter of debate. To cite just one example.

With AI, we need to get ahead of that as much as possible. Through cooperation between governments, companies, knowledge institutions and citizens, we can use AI in a way that we, as a society, consider responsible and useful, in which privacy is protected and discrimination is prevented.”


If laws and regulations lag behind global usage, aren’t tools like ChatGPT really coming too soon?

“No, because the time is right for this technology. And the possibilities it offers are huge. We’re going to benefit a lot from that, too. Think of administratively heavy work, like that of lawyers and jurists. Much of their work consists of searching for precedents: intensive work that AI is currently capable of doing very well. Or consider programming: ChatGPT can help write code, and it seems to do that well. AI can and will also be a gamechanger in client contact, for example in pension administration. APG, for instance, is already deploying AI at its Client Contact Center [read more about this in the next installment of this series, ed.]. In all these areas and more, AI can provide valuable support.”


Supporting work or replacing work? Many also see AI as a threat to their jobs.

“We need to start seeing AI much more as an additional tool we can use. An enrichment, rather than a replacement. Of course, it can take over certain tasks, including ones now done by humans. But that also frees up time and space for other things. It is now more a question of who is most creative with the input.


“Think of it this way: at the end of the 1990s, it took you a day to create one web page; now, thanks to handy tools, you can do it in five minutes. You’d be crazy to still do it the old way. AI does the same thing in that respect: it greatly increases the opportunities available to many people. That also offers serious prospects for less developed countries, provided it is brought to market properly, of course. The need for democratization, making and keeping the technology freely available, is therefore great. With that, AI has the potential to distribute power more fairly.”

Back to regulation and policy vs. lightning-fast developments. What can “we” do to make the use of AI go as smoothly as possible?

“Create awareness around the use of AI. I believe very strongly in that, and education and retraining are essential for it. So I would really argue for an education program in primary and secondary schools. There is a lot to gain there; we have to prepare this generation for the future.”


Because the older generations are lost?

“No, not at all. But we didn’t grow up with it, and we are no longer the ones studying on a daily basis. Our generations have to get their awareness of AI from SIRE public-service ads. And that’s not enough.”


You praise the potential of AI, but you stress the importance of proper education just as emphatically. There seems to be a concern there.

“You should not be blind to the risks. ChatGPT operates on data. If that data is flawed or too one-sided, it affects the results. It can cause a chatbot to be biased and thus discriminate. Or consider the algorithms on social media - also a form of artificial intelligence - and their role in polarizing, and sometimes even disrupting, society.


“Another risk is the issue of responsibility: who is legally liable if a self-driving car causes personal injury? These are crucial issues that go hand in hand with technical developments in AI, and there are still steps to be taken there, because it turns out to be quite difficult to innovate ethically. In the Netherlands, incidentally, we are leading the way, but there is always room for improvement.”


Because ... those issues must first be “tackled” before we can use AI optimally?

“That is a tough one. You don’t want to slow down development, but because of the risks mentioned, you also see parties being deterred from developing data innovations, afraid of making legal or ethical mistakes. However, having the opportunity to address social problems such as poverty and climate change and not seizing it is also unethical. The question then becomes: which ethic outweighs the other?”


Within the AI Hub, you are responsible for the ELSA Lab Poverty and Debt, which works on improving the ethical aspects of artificial intelligence, among other things. Can you give an example of how that works in practice?

“At the ELSA Lab Poverty and Debt, we are working with universities, colleges, governments, companies (including APG) and citizens to use AI in an ethical way to combat poverty and debt.


“Together we are conducting research and developing applications to share data between citizens, government and companies in a way that is safe, privacy-friendly and ethical. We want to ensure, for example, that a mother on welfare no longer has to identify herself and explain her situation time and again when applying for support, with all the embarrassment that entails, and that this process instead becomes much more citizen-friendly.”