Innovation versus regulation: managing risks of early AI adoption
As more companies rush to integrate AI into the workplace, there is growing concern about the lack of regulation in the field and the potential risks this poses for early adopters of AI.
The global pandemic caused a boom in investment in Artificial Intelligence (AI) as businesses turned to technology to solve problems for them. But this drastic increase in AI adoption poses unique questions about the future of AI and work, and about what considerations companies must make when integrating AI into their business operations.
Research by Stanford University suggests that investment in Artificial Intelligence increased 40 per cent between 2019 and 2020, compared with a 12 per cent increase between 2018 and 2019. This reflects the huge impact that Covid-19 has had on investment in AI, and the trend looks set to continue, with predictions for the future of AI growing more positive year on year.
‘Between 2019 and 2020, investment in Artificial Intelligence increased 40 per cent…’
However, despite the enthusiasm for and clear investment opportunities in AI, as well as the potential for AI to become entrenched in all aspects of our working lives, the thorny question of regulation still challenges businesses looking to get on the AI bandwagon.
AI regulation lags behind
In its research into the future of work, PwC established four distinct models for how the future of work could look and function. One of its models highlights the potential for innovation and AI adoption to continue to outpace regulators, causing businesses that adopt AI strategies now without thinking about future regulation to face unexpected consequences. In PwC’s own words, ‘Today’s winning business could be tomorrow’s court case.’
Creating policy, particularly policy that intersects with questions of ethics, is a time-intensive process and must be approached from all angles. Policymakers cannot keep up with the pace of AI innovation in each and every business context; even so, they are still working hard on establishing ethical standards and codes of practice for AI, and there is potential that in the next few years we will see tighter regulations being put in place to regulate AI use in the private sector.
Consequently, if businesses start to adopt AI without considering the potential for tighter regulation in the future, they run the risk of falling into the gap between what is currently legal and what precautions future regulations will insist upon.
At the extreme end of the spectrum, companies could end up exposing themselves to legal action or, more broadly, be faced with starting their investment in AI from scratch in order to meet regulations and higher ethical standards.
The question of AI regulation has already established itself as a concern for business leaders. Research by KPMG suggests that more than half of surveyed respondents working in organisations with high AI knowledge believe that AI adoption is moving faster than it should. In fact, 92 per cent of those respondents recommend that the government should be more involved in regulating AI technology.
Where does this leave AI?
Given the threats and challenges involved, does this mean that companies should steer clear of engaging with AI and integrating it into their working practices?
It is clear that while AI might have its complications, it can also offer companies a myriad of benefits. Investment in AI remains a worthwhile consideration for businesses and will continue to play an ever-increasing role in the future of work.
But what this research does highlight is that companies must invest in the ethical side of AI as well as the technical side.
Developing comprehensive and ethical AI policies may seem like a daunting task for companies that are just dipping their toe into the AI field. But ultimately, it is consulting experts and taking a holistic approach to AI integration that will safeguard companies from the potential impacts of future regulation and ensure that their investment into AI continues to add value.
‘Taking a holistic approach to AI will safeguard companies from the impacts of future regulation…’
The Alan Turing Institute addresses this issue, stating that ‘Ethics should not be an add-on after the fact, or a roadblock, but rather the foundation that enables innovation to flourish’. Companies that adopt this approach, and believe in the importance of establishing ethical standards and policies around their use of AI, will be placing themselves in the strongest position possible as they continue to develop as organisations.