
What are the risks of using AI and how to deal with them

Artificial intelligence (AI) is changing the way we work and transforming the digital workplace. The release of ChatGPT has highlighted the rapid advances in generative AI and opened the way for organisations to improve processes, automate repetitive tasks, supercharge productivity, reduce costs and more. AI has also received a lot of media attention, covering both its exciting potential and some of the associated risks.

From the outset it's worth acknowledging that there are some risks associated with AI. This is only to be expected for any truly transformative technology that is still in its relatively early days and will continue to evolve. This presents challenges for digital workplace teams who are keen to extend the use of AI and enjoy its considerable benefits, but are unsure how to proceed, particularly in risk-averse organisations. Some companies, for example, have chosen to ban the use of ChatGPT altogether.

In our view, there are significant actions that digital workplace teams can put in place to mitigate some of the risks and establish the necessary guardrails and governance to extend AI into the digital workplace.

In this post, we're going to explore some of the risks of AI and the actions and tactics that can help reduce them.

What are the key risks surrounding AI?

There are a number of key risks surrounding the use and implementation of artificial intelligence, some of which particularly relate to generative AI.

Data privacy and confidentiality

A major risk around the use of generative AI concerns data privacy and confidentiality. Because a Large Language Model (LLM) may use submitted data to help train the model, anything entered into the public version of ChatGPT is effectively no longer private or confidential. Breaking GDPR rules, submitting confidential information, and the inability to safeguard employee and customer data are just some of the reasons why generative AI is considered very risky. Thankfully, many paid AI services have guardrails in place to ensure data privacy and confidentiality concerns are addressed. We explore this in more detail below.
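As a simple illustration of the kind of guardrail an organisation might add itself, the sketch below redacts obvious personal data before a prompt ever leaves the business. It is a minimal, hypothetical example: the patterns and the redact helper are our own, and a real safeguard would be far more thorough (for example, using named-entity recognition and document classification).

```python
import re

# Hypothetical patterns for obvious personal data; a production guardrail
# would be far more robust than simple regular expressions.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace anything matching a pattern with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Draft a reply to jane.doe@example.com, phone +44 20 7946 0123."
print(redact(prompt))
# -> Draft a reply to [EMAIL REDACTED], phone [PHONE REDACTED].
```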

Intellectual property

A key risk of using generative AI is protecting copyright and intellectual property. Large Language Models (LLMs) like GPT-3 and GPT-4 are trained on vast amounts of content available on the internet, without taking into account any of the intellectual property rights over that content. It is quite possible that a response generated by ChatGPT might reproduce copyrighted material; this not only potentially infringes the rights of the copyright holder, but also inadvertently places organisations using responses from ChatGPT at risk of copyright infringement. Who is the true author of a piece of content?

Overall, this is a very grey and complex area, with legal cases in progress that will determine how things unfold in the coming months and likely years. We can expect the position to evolve as test cases progress and regulators, legal experts and AI providers like OpenAI take action and argue it out.

Fairness and bias

There are significant risks around fairness and bias in AI, which can undermine processes where bias needs to be removed, for example in recruitment, and which might impact issues relating to Diversity, Equity and Inclusion. The bias may reflect the material that the AI has been trained on and the beliefs of the group that created the algorithms. Well-documented experiments covered in the media have shown, for example, that AI can produce stereotyped results in image generation and facial recognition.

Ethics

AI is incredibly powerful and can be put to many uses, including some that are ultimately unethical. For example, there have been widespread concerns that AI has been used to spread false news via social media, and that it could be used by "bad actors", for example to support cybercrime.

Accuracy

Accuracy is a significant problem with AI. Factual errors within responses from ChatGPT (known as "hallucinations") have been well documented. What makes these errors riskier is that they are presented as fact, certainly in ChatGPT, often in an authoritative and confident way, making it very difficult to know what is true and what isn't.

Lack of explainability and transparency

One of the current issues with many AI and related products is a lack of transparency. It is understandable that proprietary algorithms are confidential, but it is hard to have full confidence that an AI product meets standards relating to, for example, bias or ethics when it is effectively a black box.

A related and more specific risk for generative AI is a lack of "explainability": output is produced without acknowledging the sources that have been drawn upon to create it.

Impact on roles

A lot of media speculation has centred on how AI will impact jobs. Sentiment tends to be negative, focusing on job losses across certain types of roles and professions. As AI extends its influence over the way we work, it is inevitable that roles will change, bringing a significant risk of job losses.

However, this risk needs to be put in perspective. Firstly, automation leading to job losses has already been happening for many years, so it's not a new issue. Secondly, AI and technology will also create new roles; many positions today simply didn't exist thirty years ago. Thirdly, many people's roles will be positively impacted by AI, making tasks easier and improving productivity.

Misconceptions

AI is still a very emotive topic that provokes strong reactions. This can mean many people hold false perceptions, positive or negative, about the promise or threat of AI. Misconceptions about AI can also lead to false expectations about its impact, or an underestimation of the foundational work that sometimes needs to be done to get the best out of AI. Because of this, change management is an important factor in any AI implementation.

Reducing the risks around AI

Like any area of technology that is still evolving and has the potential for dramatic impact, AI carries associated risks. The good news is that there are solid actions organisations can take to reduce those risks without limiting the opportunities for innovation and for leveraging AI.

1. Create an ethical framework around use

It’s important to set high level guardrails about the use of AI. Creating an ethical framework around its use which leaders buy into and put their name to can be a powerful way to ensure AI is used responsibly. It also supports the definition of more detailed policies, guidelines and change management efforts. An ethical framework could be a manifesto or a high-level strategy that aligns with existing commitments to areas such as data privacy and existing ethical policies.

2. Provide clarity on usage for users
Within an ethical framework there then needs to be more detail about the usage of AI. Here, clarity is king. For example, this should spell out your organisation's policy on using the public ChatGPT service. Providing the detail can be challenging and may be subject to change, as this is a rapidly evolving landscape; it may also leave quite a lot up to individual judgement, depending on your organisational culture. But AI is here and is being used across the digital workplace, so you need to have some guidance in place.

3. Build review processes around using AI in applications
Similarly, there need to be guardrails in place for using AI in applications, whether procuring new tools or building applications from scratch. This means ensuring that the use of AI is considered within any existing technology due diligence or procurement process, and is also reviewed for any custom project. The responsibility for any review will likely sit with IT, but could also involve other stakeholders from compliance and risk functions. Processes may also need to cover implementation, for example around training AI.

4. Invest in change management and communications
Change management and related communications are critical when introducing AI in order to:

  • reduce risks around user behaviour
  • ensure there aren’t misconceptions about AI
  • raise adoption of new AI-based services.

Always invest in the change management side of things through messaging, e-learning, leadership communications, digital champions and more.

5. Create a safe environment for moving forward with AI
Creating a safe environment in which to experiment with and use AI, where you know data privacy and confidentiality issues are no longer a risk, can make a huge difference in being able to leverage AI and move forward. You can create solutions that generate business value, learn what works, and gain new skills and knowledge that are going to be important for the future.

One way to do this is to leverage OpenAI services through Azure and to use Microsoft products like Copilot; here you can use generative AI through your own digital workplace, knowing that data will remain within your Microsoft 365 tenant, content will still be security-trimmed, and you can adhere to any governance measures already in place for your digital workplace.
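As a rough illustration, the snippet below shows what calling a model deployed in your own Azure subscription might look like using the openai Python package; the endpoint, key and deployment name are placeholders rather than real values, and the exact approach will depend on your own Azure setup.

```python
from openai import AzureOpenAI  # pip install openai

# All values below are placeholders for your own Azure OpenAI resource.
client = AzureOpenAI(
    azure_endpoint="https://your-resource.openai.azure.com",  # hypothetical
    api_key="YOUR_AZURE_OPENAI_KEY",                          # hypothetical
    api_version="2024-02-01",
)

# "model" is the name you gave your own deployment, not a public service;
# prompts and responses stay within your Azure environment.
response = client.chat.completions.create(
    model="your-gpt4-deployment",  # hypothetical deployment name
    messages=[
        {"role": "system", "content": "You are a helpful digital workplace assistant."},
        {"role": "user", "content": "Summarise our annual leave policy in three bullet points."},
    ],
)
print(response.choices[0].message.content)
```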

Moving forward with AI

AI is truly transformative, very powerful and evolving at breakneck speed. Inevitably there are associated risks. By taking the right governance and change management approach, organisations can mitigate some of these risks and move forward with AI.

If you’d like to discuss reducing risks around AI or your individual AI project, then get in touch!
