Why ethical AI matters
Often I’m asked by people who are interested in my career journey, “How did you know you were passionate about data science?” As a child, I always enjoyed solving problems and finding effective solutions. I’m driven by finding better ways of doing things. So it is no wonder I find myself working in AI and solving previously insurmountable challenges with elegant, efficient solutions.
I passionately believe in the enormous positive potential AI has for solving some of the world’s largest challenges. It is already revolutionising healthcare and education in many positive ways. However, I’m equally passionate about developing responsible and ethical AI solutions.
At Galvia, for instance, AI is at the heart of all our solutions. We help enterprises make intelligent decisions with AI to achieve their business objectives, drive revenue growth, reduce costs, or automate their work practices. While developing these AI solutions, we constantly strive to integrate the latest technologies and state-of-the-art AI algorithms. However, our focus is not limited to building cutting-edge solutions; we are equally committed to building solutions that are ‘ethical AI’.
While developing AI solutions, it is imperative to consider the ethical and moral implications of AI for individuals, the environment, and society. Ethical AI is a broad discipline that aims to implement solutions that respect applicable laws and regulations as well as ethical principles and values, and that weigh social impact alongside technical considerations.
Today, Galvia’s AI solutions are being used by renowned organisations like the University of Galway, NTT Data, Talentguard, and Nestlé. One of our objectives while developing AI solutions is to get our stakeholders to ‘trust’ our AI solution. This is where having an ethical AI framework comes into play.
The European Union has developed guidelines that put forward a set of seven key requirements that AI systems should meet in order to be deemed trustworthy. Galvia’s ethical framework is designed in line with the EU’s Ethics Guidelines for Trustworthy AI and here’s how we meet the requirements:
- A human-centric approach
Galvia’s AI solutions are intended to ‘augment’ the end user’s decision-making process, not make decisions on their behalf. That is, our AI solution provides the human user with all the analysis and insights, but the final ‘action’ of making a decision remains under the human user’s control. Thus, from the ideation of an AI solution to its realisation, we ensure strong collaboration with our stakeholders, subject-matter experts (SMEs), and end users.
Human involvement in our AI solutions does not stop after deployment. All our AI solutions are integrated with a feedback mechanism that requires human intervention to evaluate system outcomes and correct system performance as and when needed. Having a ‘human in the loop’ to evaluate the feedback not only helps us achieve our functional and innovation goals, but also uncovers any social, legal, compliance, or environmental issues that could pose a societal threat.
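Galvia’s internal implementation isn’t public, but the feedback mechanism described above can be sketched in broad strokes. In this hypothetical Python sketch (all names are invented for illustration), the system only recommends, the human user makes the final call, and every verdict is recorded so the model can be re-evaluated later:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Recommendation:
    """An AI-generated insight presented to the human user."""
    item_id: str
    suggested_action: str
    confidence: float

@dataclass
class FeedbackLog:
    """Collects human verdicts so the model can be re-evaluated later."""
    records: List[dict] = field(default_factory=list)

    def record(self, rec: Recommendation, accepted: bool, note: str = "") -> None:
        # Store the recommendation alongside the human decision about it.
        self.records.append({
            "item_id": rec.item_id,
            "suggested_action": rec.suggested_action,
            "confidence": rec.confidence,
            "accepted": accepted,
            "note": note,
        })

    def acceptance_rate(self) -> float:
        """Share of recommendations the human user actually followed."""
        if not self.records:
            return 0.0
        return sum(r["accepted"] for r in self.records) / len(self.records)

# The system only recommends; the final decision stays with the human.
log = FeedbackLog()
rec = Recommendation("INV-104", "flag for manual review", 0.87)
human_accepts = True  # decided by the end user, not by the model
log.record(rec, accepted=human_accepts, note="confirmed anomaly")
```

The acceptance rate then gives the human-in-the-loop reviewers a simple signal: a sustained drop suggests the model’s recommendations need re-evaluation.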
- Technical Robustness and Safety
Professor of Ethics and Technology Joanna J. Bryson rightly said that ‘artificial intelligence only occurs by and with design’. Design decisions with respect to the input data ETL (Extract, Transform, Load) pipeline, the AI model, model outcomes, and deployment play an important role in the development of ethical AI solutions. All technical and ethical outcomes are the result of the numerous design decisions we make.
From a technical standpoint, we are committed to making responsible design decisions that ensure maximum protection, safety, security, accessibility, scalability, and reliability in our AI solutions. From an ethical AI standpoint, we thoroughly research, evaluate, and document our design decisions to guarantee maximum transparency in our AI solutions.
- Data Protection and Privacy
Data is one of the most precious resources in any business. Used responsibly and correctly, it is an opportunity to drive growth; used incorrectly, it can be a threat to the business as well as to society. Galvia’s AI solutions are built with information security management in mind. At Galvia, we comply with the GDPR (General Data Protection Regulation) and follow ISO 27001 policies and procedures to protect the integrity, privacy, and confidentiality of our customers’ data.
- Transparency
When a human makes a judgement, they can justify the rationale behind it. But how do we ask an AI solution to justify its outcome? This is where transparency comes into play. Galvia embraces the power of interpretable or explainable AI models in its solutions to provide an ‘understanding’ of their outcomes. Galvia also puts considerable effort into documenting design decisions and manuals to ensure that all developers and users have a clear and complete understanding of the AI solution, its functionality, and its behaviour.
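To illustrate what that kind of ‘understanding’ can look like (this is not Galvia’s actual code; the features and weights are invented), a transparent linear scorer exposes exactly how each input moved the final score up or down:

```python
# Hypothetical churn-risk features and hand-set weights, for illustration only.
WEIGHTS = {"overdue_invoices": 0.6, "support_tickets": 0.3, "logins_per_week": -0.4}

def score_with_explanation(features: dict):
    """Return the total score plus each feature's individual contribution."""
    contributions = {
        name: WEIGHTS[name] * value
        for name, value in features.items()
        if name in WEIGHTS
    }
    return sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"overdue_invoices": 3, "support_tickets": 2, "logins_per_week": 5}
)
# 'why' shows how each input contributed, so the outcome can be
# explained to the end user rather than presented as a black box.
```

The design choice here is the point: with a model this simple, the explanation *is* the model, which is the trade-off interpretable approaches make against more opaque, higher-capacity ones.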
- Fairness and Non-discrimination
Data is generated by humans, and humans are inherently biased to varying extents. This bias creeps into the data and then propagates through the AI solution, making its outcomes unfair and biased. The problem needs to be fixed at the source: the data needs to be corrected before it enters the AI solution. At Galvia, we continuously strive to uncover and eliminate potential unfair biases in the data and to ensure fairness in the end user’s decision-making. Collaborating with SMEs, stakeholders, and end users to evaluate the outcomes of AI solutions is crucial in addressing unfair bias.
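One simple, widely used first check for unfair bias in data is to compare positive-outcome rates across groups (demographic parity). A minimal sketch, with an invented toy dataset of (group, outcome) pairs:

```python
def selection_rates(records):
    """Positive-outcome rate per group — a first check for disparate impact."""
    totals, positives = {}, {}
    for group, outcome in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(outcome)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(records):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(records)
    return max(rates.values()) - min(rates.values())

# Toy data: group "A" receives the positive outcome twice as often as "B".
data = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap = demographic_parity_gap(data)
```

A large gap does not prove unfairness on its own, but it flags exactly the kind of pattern worth raising with SMEs, stakeholders, and end users before the data feeds an AI solution.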
- Environmental Sustainability
At Galvia, sustainability is at the core of our values and operations. We are dedicated to creating a sustainable future for our organisation, the communities we serve, and the environment. To honour this commitment, we have integrated sustainability considerations into our AI solutions, including the use of energy-efficient cloud services that limit carbon emissions and responsible data management practices that optimise storage and reduce duplication.
- Accountability
In the transparency section, we asked how we can get an AI solution to justify its outcome. But who do we hold accountable when an AI solution produces an outcome with social implications? The answer is clear: the developers of the AI solution.
At Galvia, we understand that we are accountable for our AI solutions, and we strive to develop responsible and ethical AI. We achieve this through due diligence on every design decision and documentation of it for future reference, strict assessment of input data for unfair biases, assessment of models for algorithmic or systemic bias, thorough testing of our solutions before deployment, continuous logging and monitoring of outcomes, and constant technical support for our stakeholders.
I’m planning a long and exciting career in AI technology. AI’s transformative power means that the ethical guidelines for AI will need to evolve as technology advances. It’s crucial that we keep working on improving and adapting these ethical rules to create trustworthy AI solutions.
If you’d like to speak to one of our experts about how our AI-powered decision intelligence platform can help you, reach out here.