
Why AI Regulation is a Confidence Builder

In a fast-changing world, our CEO explores how regulation and safeguards will be essential to building confidence and trust in AI applications.

On July 21, the President of the United States, Joe Biden, met with seven leading Artificial Intelligence (AI) companies that have committed to voluntary standards to step up safety and transparency around emerging AI technology.

Closer to home, on June 16 the European Parliament passed the text of the Artificial Intelligence Act (the “AI Act”) by an overwhelming majority. This paves the way for the European Council to debate the text of the AI Act in the coming weeks and brings it one step closer to its formal introduction into law. If passed, the AI Act will be the first large-scale framework for the use of Artificial Intelligence (“AI”) globally. If adopted, it will have direct effect in Ireland, with the regulation automatically forming part of Irish law.

How GDPR legislation paved the way

In 2018, Europeans’ relationship with data changed with the implementation of the General Data Protection Regulation (GDPR). For the first time, citizens of the then 28-member trading bloc had a harmonised approach to finding out what information was being held about them, by whom, what it was being used for and, just as importantly, how long it could be held on to. For the first time there was also a consistent approach to reporting data breaches, the penalties involved and who would enforce them. At first it was big and scary, but its successes have informed similar legislation the world over.

We have reminders of the success of GDPR every day: you probably got one when you clicked on our website and received a notice about our use of cookies. Most people click these notices away, but the point is that organisations are aware of the consequences should user data fall into the hands of bad actors.

For us at Galvia, a company with its head office in Galway, that means we are overseen by the Irish Data Protection Commissioner and any breach of our data could cost us as much as 4% of our turnover. It’s a stiff penalty, but the regulations are clear, and compliance is a huge confidence builder when it comes to our clients and our product development journey.

GDPR has been a huge success and its fingerprints are on similar measures the world over. But the EU wasn’t done, and has since introduced the Digital Services Act and the Digital Markets Act to protect the rights of citizens and foster innovation at an international level. To these we can add the Artificial Intelligence Act, which is currently working its way through the European Parliament and is expected to be adopted by the end of 2023. Once enacted, we believe this will herald a greater willingness to embrace emerging technologies like decision intelligence and reap competitive rewards. For us, it’s another logical step we’re already equipped to take.

Transparency is key to AI regulation

The Artificial Intelligence Act takes a risk-based approach to regulation, concerned more with principles than with specific technologies, so it won’t have to be completely reviewed on a ‘per innovation’ basis. It cites the “opacity, complexity, bias and partially autonomous behaviour of AI systems” as the biggest issues.

Applications the Act considers a ‘high risk’ to citizens’ privacy and freedom of expression include social scoring and facial recognition: effectively, anything that limits quality of life or creates a chilling effect on public conversation or markets.

For example, an application that would not pass muster under the Act is ‘social scoring’, where individuals are ascribed values based on their behaviour. This is clearly intrusive and likely to act as a form of control, which is wholly unacceptable.

A ‘high risk’ application could be something much more benign, but still with serious consequences in the event of failure. Cruising from coast to coast in a self-driving car without ever putting your hands on the steering wheel might be the dream. For now, the best we have is an assisted driving system that can do some of the work while you maintain situational awareness and keep your hands on the wheel, ready to take over at a moment’s notice.

In contrast, a ‘low-risk’ application could be something as simple as building a better spam filter. The gamut really is that wide.
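To make the ‘low-risk’ end of that spectrum concrete, here is a purely illustrative sketch of the kind of technique a simple spam filter might use: a naive Bayes classifier with add-one smoothing, built only on Python’s standard library. The training messages and word counts are invented for the example; a real filter would learn from far more data.

```python
from collections import Counter
import math

def train(messages):
    """Count word frequencies per class from (text, is_spam) pairs."""
    counts = {True: Counter(), False: Counter()}
    totals = Counter()
    for text, spam in messages:
        counts[spam].update(text.lower().split())
        totals[spam] += 1
    return counts, totals

def is_spam(text, counts, totals):
    """Naive Bayes: compare log-probabilities of the two classes."""
    vocab = len(counts[True]) + len(counts[False])
    scores = {}
    for label in (True, False):
        # Class prior plus per-word likelihoods with add-one smoothing.
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (n + vocab))
        scores[label] = score
    return scores[True] > scores[False]

training = [
    ("win a free prize now", True),
    ("claim your free cash prize", True),
    ("meeting moved to three", False),
    ("lunch at the canteen today", False),
]
counts, totals = train(training)
print(is_spam("free prize inside", counts, totals))  # classified as spam
print(is_spam("meeting at lunch", counts, totals))   # classified as ham
```

Nothing here touches personal data or affects anyone’s rights, which is exactly why this kind of system sits at the low-risk end of the Act.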

University of Galway, Ireland

With AI potential comes responsibility

The AI Act has direct implications for our business. Chatbots are considered a ‘limited risk’ application, which means anyone has a right to know that they are interacting with an AI so they can make an informed decision about whether to continue the interaction or end it. That risk shrinks as the depth of the interaction is limited. We have no interest in creating virtual therapists or replacing a board of directors, but an interactive and automated point of contact can deliver significant value while respecting user privacy and delivering a satisfying customer experience.
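As an illustration of that transparency obligation, the sketch below shows one way a chatbot session could open with an explicit AI disclosure and offer a route to a human. The function names, session dictionary and messages are hypothetical, not Galvia’s actual implementation.

```python
DISCLOSURE = (
    "Hi! I'm an automated assistant, not a human. "
    "Reply 'AGENT' at any time to reach a member of staff."
)

def start_session(session):
    """Open every conversation with an explicit AI disclosure, once."""
    session.setdefault("disclosed", False)
    if not session["disclosed"]:
        session["disclosed"] = True
        return DISCLOSURE
    return None

def handle_message(session, text):
    """Route the user out of the automated flow on request."""
    if text.strip().upper() == "AGENT":
        return "No problem - transferring you to a human colleague now."
    return "Automated reply: " + text  # placeholder for the real bot logic
```

The point is simply that disclosure and an exit route are part of the conversation design from the first message, not an afterthought.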

As we found in our work with the University of Galway during the Covid-19 pandemic, our bespoke chatbot assistant, ‘Cara’, led to a 40% reduction in the number of mundane tasks handled by student services staff. By presenting information in a student-friendly way through Cara and push notifications to common applications like WhatsApp and Microsoft Teams, we were able to gauge student sentiment across campus, giving a valuable picture of the student body’s wellbeing. We were even able to create visualisations predicting trends in student behaviour based on past data.

By increasing engagement through Cara, we delivered an impactful service to a large educational institution without the need for uniquely identifiable data. User privacy came as standard.

As with GDPR before it, a harmonised approach to AI will do much to clarify standards, promote confidence in new technology and spell out the legal ramifications of failing to deliver secure solutions that respect the rights of individuals throughout the EU. Our decision intelligence platform already goes far beyond our forthcoming obligations. We’re happy to talk with you about how we do it.

Book a call with one of our experts.

For more insights, sign up to our newsletter.