Hype-cycles, risk and the need for responsible AI governance
- planaria.black
- Dec 12, 2023
- 3 min read
Updated: Dec 18, 2023

While AI and machine learning have existed for over 50 years, a giant leap in public consciousness took place when OpenAI released ChatGPT in late 2022, with various other generative AI tools and Large Language Models (LLMs) following shortly after. This brought AI, or at least generative AI, from the fringes of business to mass adoption across many workforces.
In the 12 months that followed, the pace of AI development has been relentless, with ‘breakthroughs’ announced almost weekly. Paired with this has been a growing conversation about regulation, safety and ethics, with the White House issuing an executive order on the safety, security and trustworthiness of AI in October, and the AI Safety Summit hosted at Bletchley Park in the UK last month.
As is often the case with technology, the pace of development outstrips the pace of regulation, and in the case of AI, development is also far ahead of many users’ understanding.
Why does this matter?
With the hype around AI so great, there’s a real sense of FOMO across businesses. The prospect of AI increasing productivity, efficiency, profitability and valuations is compelling; however, concerns are also seeping into boardroom conversations regarding copyright violation, data breaches, bias perpetuation, workforce displacement and potential ‘hallucinations’.
AI is not immune to manipulation, and inaccurate information can be presented as fact. In addition, AI may not always perform as expected in complex, real-world settings, and it is these risks that governance policies seek to mitigate. As such, it’s critical that guardrails are in place for the use of AI in order to de-risk the business’s exposure.
The need for governance
While AI brings huge opportunities, governance is required to ensure that it is used responsibly and safely, and that it becomes a technology trusted by both employees and the business itself.
It’s fair to say that only a minority of businesses are actively developing proprietary AI solutions; however, the majority are adopting third-party AI in one form or another, either formally or informally. This exposes businesses to greater risk.
As such, businesses have two choices. One: ban AI use across the company. While this mitigates the risk the technology poses, it also raises the risk of talent loss, impaired innovation and value erosion. Two: put governance in place to manage the adoption and use of third-party AI across the business and, in the future, extend this to cover the development, deployment and monitoring of proprietary AI.
What does responsible AI governance entail?
Bringing this complex undertaking back to basics, there are three key considerations:
Governance Framework - Definition of the core principles that will determine how AI is explored, assessed, used, monitored and reported on, to ensure that ethics, safety and explainability remain at the fore.
Governance Committee - The establishment of a group composed of stakeholders (internal, external, AI experts and academics) to review and govern AI adoption, usage and approval.
Governance Policy - Formal documentation to include the governance framework, foundational principles, processes, management, reporting, working groups and escalation along with the business’s commitments to employees and red lines that won’t be crossed.
Once in place, the socialisation of the governance is critical. Transparent communication, open forums and ongoing education of the workforce are key, as is ensuring that the risks associated with AI, and the dos and don’ts of its use, are fully understood.
We’ve heard on multiple occasions about employees uploading sensitive documents to generative AI tools to change the document’s tone of voice or create a summary, or breaching data regulations by uploading personal data for analysis. It’s also not uncommon for employees to be using AI throughout their working day that has not been reviewed or sanctioned by their employer.
What’s involved?
This varies from business to business, but it doesn’t have to be an arduous undertaking. Access to key stakeholders from around the business, an understanding of associated and connecting policies and internal processes, and a designated policy owner are a great start.
There should also be an appreciation that this will be a ‘live’ document. As the recent drama at OpenAI proved, a lot can happen in a week when it comes to this technology, so any policy must be reviewed regularly and adapted to reflect major changes and developments.
In summary, AI has now passed the tipping point. It’s no longer a question of if businesses will adopt AI; it’s a question of when and, arguably more importantly, how they will adopt it. In our opinion, those who take a responsible approach will be the winners.
A provocative thought
With 92% of businesses expecting to use AI within five years, and day-to-day use by employees predicted to reach 88%, isn’t it time you started giving governance greater consideration?