The arrival of generative artificial intelligence (AI) apps like ChatGPT from OpenAI has been nothing short of seismic. In a few short months, advances in generative AI technology have thrust AI into the mainstream.
What is it? ChatGPT generates human-like text in response to user prompts: it predicts the next word in a given text from the patterns it learnt during training on a massive amount of data. It can be used to write news articles, poetry, and even scripts. It can also translate text from one language to another.
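To make that prediction idea concrete, here is a deliberately tiny sketch in Python. It uses simple bigram counts, which is nothing like the large transformer network behind ChatGPT, but it illustrates the underlying task: learn from text which words tend to follow which, then use those learned patterns to predict the next word.

```python
from collections import Counter, defaultdict

# Toy next-word predictor built from bigram counts. A drastically simplified
# illustration, not how ChatGPT actually works (ChatGPT uses a large neural
# network trained on vast data), but the core task is the same: given the
# text so far, pick a likely next word.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the training text."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # -> 'cat', seen twice after 'the' in the corpus
```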
However, ChatGPT’s responses only touch reality at a tangent. While they may sound convincing, they are ultimately the creations of a system whose inner workings nobody outside the company really understands. OpenAI presents the appearance of intelligence, yet we have no way to understand how it is actually being produced.
In a recent statement regarding artificial general intelligence, OpenAI stated that “at some point it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models”. We have to ask: is now that time?
Should they open source the technology? In short, I believe they should not: in the wrong hands, it could have detrimental ramifications. But we do need to put a framework of understanding in place before we sleepwalk into further versions and capabilities.
In my role as an advisor to the Irish government on the Enterprise Digital Advisory Forum, I am looking forward to a robust discussion on this topic. If generative AI companies are not willing to engage in an open and transparent manner, maybe it is time to take a leaf out of the Italian government’s book and take some hard measures?
Italy has become the first European country to block the advanced artificial intelligence chatbot ChatGPT. Italy’s data-protection authority said it had made the decision to block the chatbot over privacy concerns and to investigate whether it complied with the European General Data Protection Regulation (GDPR).
As an AI entrepreneur, I’m uncomfortable with the idea of blocking. Thousands of AI experts, from Elon Musk and Steve Wozniak to Andrew Yang, along with CEOs of global technology companies and leading AI academics, signed an open letter warning that AI systems with human-competitive intelligence can pose profound risks to society and humanity, and calling for a pause in the training of systems more powerful than GPT-4.
So why are they calling for this? The main reason cited appears to be that “AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy and loyal”.
Is this tool perfect? Absolutely not. But preparing for a future means understanding it, and to understand it you need to try it, test it, break it and try again, both to see how this AI will really benefit us and to grasp the potential pitfalls of this capability.
Just as road safety measures evolved to protect all road users, I believe we need the same type of accountability measures for AI usage. We need to work together with private and public bodies across Europe and the world to understand and capture the benefits, concerns and capabilities that this new future may bring. Unpredictable black-box models with very powerful capabilities must be transparent, understood and accountable.
Security & reliability, accountability & oversight, inclusive ethical and responsible use must take precedence over developing future more powerful AI systems.
At Galvia, we make the power of AI accessible to organisations big and small, delivered with transparency, auditability, openness and explainability. We strive to ensure all stakeholders understand how and why AI is unlocking insights and predicting outcomes that drive growth and efficiencies in their organisations.
Digital trust in all emerging technologies must be established first.
AI systems with human-like intelligence present profound benefits but also risks to both society and humanity. We are now possibly at a tipping point: is it time to focus on ‘should we’ rather than the excitement of ‘yes we can’? Now, I believe, is the time to look deeper and put some guard rails in place around governance, compliance and transparency.
AI is not science fiction, it’s not an experiment from the 70s or some movie you may have watched from the 90s.
It’s here now.
For more thoughts from John on the concerns surrounding Generative AI listen back to his interview on Newstalk FM, The Pat Kenny Show, April 11th, 2023.