
AI in the workplace: webinar recording and key takeaways 

Since the release of ChatGPT, the use of generative AI has probably been the number one topic in the digital workplace space. How can organisations best use AI both now and in the future? What are the issues they need to think about? And how do they get started? 

In the latest webinar from the Content Formula team we take a deep dive into AI in the workplace, exploring use cases, trends and some practical considerations about implementation. Featuring Dan Hawtrey (Managing Director), Joe Perry (Head of Technology) and Alex Yeomans (Business Development Manager), you can watch a recording of the webinar above. In this post we’re also going to explore some of the takeaways from the session.  

 

AI in the workplace: key takeaways 

Here are some of the key takeaways from the webinar.  

1. You don’t need to be intimidated by AI 

At the beginning of the session, Dan pointed out that some people feel slightly intimidated by AI, partly because it is such a new and powerful technology. However, many of the AI tools that can be deployed today have already had a great deal of work put into them and are very accessible, so there is no need to be intimidated. In fact, it's easy to get started with AI, and this was one of the central messages of the webinar.

2. There are different types of AI

AI is a wide topic and there are many different types and "flavours" of AI. One way to segment AI is to think about its strength. Although the definitions overlap, Dan pointed out that there are three main types of AI:

  • Narrow or weak AI: tools that have a very specific objective, such as text to speech, sentiment analysis or identifying objects in a photograph. There are many examples of narrow or weak AI services currently available through Amazon Web Services AI Services and Azure Cognitive Services.
  • General or strong AI: AI tools that can work on many different types of task with multiple objectives. ChatGPT is an example of this. Organisations can also now leverage Azure OpenAI Service to implement their own "private" ChatGPT relating to their own Microsoft 365 digital workplace and proprietary data, avoiding many of the risks associated with using the public version of ChatGPT (see the sketch after this list).
  • Super-human AI: The AI that doesn’t quite exist yet and is in the realm of science fiction, although it could arrive sooner than we think! 
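To make this concrete, here is a minimal sketch of calling a "private" ChatGPT-style model through Azure OpenAI Service using the openai Python package. The endpoint, API key, deployment name and prompts below are placeholders rather than settings from any real project.

```python
# Minimal sketch: a "private" ChatGPT-style call via Azure OpenAI Service.
# The endpoint, key and deployment name are placeholders for your own resource.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR-AZURE-OPENAI-KEY",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="YOUR-GPT-DEPLOYMENT",  # the name you gave your model deployment
    messages=[
        {"role": "system", "content": "You are a helpful assistant for our digital workplace."},
        {"role": "user", "content": "Summarise our holiday policy in three bullet points."},
    ],
)
print(response.choices[0].message.content)
```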

3. There are six core functions of generative AI in the digital workplace  

Now that Azure OpenAI Service can be applied to the digital workplace, many Content Formula clients are asking what generative AI can be used for. In our view there are six core functions of generative AI in the digital workplace:

  • Answering questions: giving employees an answer to anything they ask.
  • Drafting new content: writing an article, an email, meeting notes or other text based on a prompt. 
  • Editing: the ability to tidy up and improve text, correcting any mistakes. 
  • Summarising: summarising content, for example automatically creating a synopsis. 
  • Classification: getting generative AI to tag documents against a taxonomy (see the sketch after this list).
  • Advising & consulting: turning the LLM into an advisor or consultant (a use case not often highlighted), where you can program ChatGPT to support users in a particular area.
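As a simple illustration of the classification function, the sketch below asks an Azure OpenAI chat model to tag a document against a fixed taxonomy. The taxonomy terms, document text and deployment name are made up for the example.

```python
# Illustrative sketch: tagging a document against a fixed taxonomy with a prompt.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR-AZURE-OPENAI-KEY",
    api_version="2024-02-01",
)

taxonomy = ["HR", "Finance", "IT", "Health & Safety", "Facilities"]
document_text = "All laptops must be encrypted and patched within 14 days of a security update..."

response = client.chat.completions.create(
    model="YOUR-GPT-DEPLOYMENT",
    messages=[
        {"role": "system",
         "content": "Classify the document against this taxonomy and return only the "
                    "matching terms as a comma-separated list: " + ", ".join(taxonomy)},
        {"role": "user", "content": document_text},
    ],
)

tags = [t.strip() for t in response.choices[0].message.content.split(",")]
print(tags)  # e.g. ['IT']
```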

In the session Dan demonstrated the advising & consulting function using a natural language script which detailed a role, objective, approach and fix (with set lists of questions) relating to Cognitive Behaviour Therapy (CBT), and which asks the user pertinent questions about their situation.
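The exact script from the demo wasn't shared in full, but a role/objective/approach style script can be expressed as a system prompt along the following lines. The wording and questions below are illustrative only, not the script Dan used.

```python
# Illustrative only: a role / objective / approach style script expressed as a
# system prompt, similar in spirit to the CBT-style advisor shown in the webinar.
ADVISOR_SCRIPT = """
Role: You are a supportive workplace wellbeing advisor drawing on CBT-style techniques.
Objective: Help the user reflect on a situation that is troubling them.
Approach: Ask one question at a time from the list below, listen to the answer,
then gently explore the thoughts, feelings and behaviours involved.
Questions:
1. What situation is on your mind at the moment?
2. What went through your mind when it happened?
3. How did that make you feel, and what did you do next?
You are not a therapist; suggest professional help where appropriate.
"""

messages = [{"role": "system", "content": ADVISOR_SCRIPT}]
# Each user turn is appended to `messages` and sent to the chat model,
# for example via the Azure OpenAI client shown in the earlier sketches.
```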

4. There are four key categories of use case for AI

Content Formula has categorised AI use cases into four areas, each with an ascending level of complexity and a corresponding increase in the investment, skills, implementation time and size of data set required, as well as the potential ROI:

  • Using existing off-the-shelf tools such as ChatGPT. Many organisations are now working in this category.  
  • Using an existing Large Language Model (LLM) to scan documents, perhaps with a custom UI and super prompts, effectively creating a custom KM tool for your organisation. Many customers are currently looking into this option. 
  • Training an existing LLM with proprietary labelled data, although this can involve significant data cleaning and subsequent testing and alignment. This can be used to produce a high-value solution such as an effective customer-service bot; however, it then requires ongoing monitoring and optimisation.
  • Building a proprietary AI model from scratch using a large proprietary dataset, creating a solution such as a diagnostic tool that examines X-rays. This involves the most investment and data, with significant testing, alignment, monitoring and optimisation.  

5. Organisations can build their own custom solutions 

In the webinar, Dan stressed that he thought organisations should be focusing on the second and third categories of AI use cases, where there is a lot of "low-hanging fruit." To illustrate this, he mentioned some actual projects that are currently being worked on by Content Formula.

Example #1: Document generation 

Document generation for a consultancy around proposals, briefs and market analyses. With a collection of existing documents and proprietary knowledge, Content Formula is building a guided prompt builder based on Azure OpenAI Service with Azure Cognitive Search.
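A much-simplified sketch of how a guided prompt builder might work is shown below: answers gathered from a guided form fill a prompt template, which is then sent to Azure OpenAI Service. The fields, template wording and deployment name are illustrative; the real solution also draws on existing documents via Azure Cognitive Search.

```python
# Simplified sketch of a guided prompt builder for document generation.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR-AZURE-OPENAI-KEY",
    api_version="2024-02-01",
)

# Answers gathered from a guided form in the custom UI (example values only)
answers = {
    "document_type": "project proposal",
    "client": "Acme Insurance",
    "sector": "financial services",
    "scope": "migrating the intranet to SharePoint Online over six months",
}

prompt = (
    f"Draft a {answers['document_type']} for {answers['client']}, "
    f"a company in the {answers['sector']} sector. "
    f"The scope is: {answers['scope']}. "
    "Use our standard structure: background, approach, timeline, team and costs."
)

response = client.chat.completions.create(
    model="YOUR-GPT-DEPLOYMENT",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```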

Example #2: Querying contracts 

Querying contracts for a property company, with the ability to ask common questions about specific contracts. This uses Azure Cognitive Search to query an existing database of contracts and find a specific item, and then uses Azure OpenAI Service to answer the question about that particular document. 
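This is the classic retrieve-then-answer pattern, sketched below under some assumptions: the index name, field names, deployment name and sample question are placeholders, not details of the actual project.

```python
# Hedged sketch of retrieve-then-answer: Azure Cognitive Search finds the relevant
# contract text, then Azure OpenAI answers the question grounded on that text.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

search_client = SearchClient(
    endpoint="https://YOUR-SEARCH.search.windows.net",
    index_name="contracts",
    credential=AzureKeyCredential("YOUR-SEARCH-KEY"),
)
openai_client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR-AZURE-OPENAI-KEY",
    api_version="2024-02-01",
)

question = "What is the notice period in the lease for 12 High Street?"

# 1. Retrieve the most relevant contract passages (field name is a placeholder)
results = search_client.search(question, top=3)
context = "\n\n".join(doc["content"] for doc in results)

# 2. Ask the model to answer using only the retrieved text
response = openai_client.chat.completions.create(
    model="YOUR-GPT-DEPLOYMENT",
    messages=[
        {"role": "system",
         "content": "Answer the question using only the contract extracts provided. "
                    "If the answer is not in the extracts, say so."},
        {"role": "user", "content": f"Extracts:\n{context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```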

Example #3: Department knowledge bots 

Answering common questions from employees for a healthcare company. There is already existing structured departmental content on the company intranet, including policies, processes and contact information. The bot uses Azure Cognitive Search to find this structured content and then Azure OpenAI Service to interpret and answer questions.

6. Powerful AI solutions don’t necessarily require much training 

To illustrate some of the above use cases, Joe carried out two live demos of AI solutions that don't necessarily need much (or any) training to be effective within your organisation.

The first solution helps employees interact with enterprise data, querying policies and an employee handbook. Here Azure Cognitive Search finds the right documents and then ChatGPT is used to find answers within them. For example, an employee could ask questions about a benefits plan and the solution will come back with an answer based on the text of a particular document, with a link back to the source document too. Employees can then ask further questions in natural language.

The second solution acts as a support for call centre agents. Here the agent can be on a call with a customer (for example relating to motor insurance) and get prompts from a script. The call is then transcribed and fed back into Azure OpenAI Service, which can then offer near real-time suggestions to the agent for further questions that need to be asked.

Finally, the whole conversation can automatically be summarised and coded so it can either be stored in a database or used for analysis and reporting, for example through Power BI. These conversations can also further train the model.
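Below is a hedged sketch of that final summarise-and-code step: the transcript is turned into a structured record that could be written to a database and reported on in Power BI. The categories, field names and deployment name are illustrative only.

```python
# Hedged sketch: summarising and "coding" a call transcript into a structured record.
import json
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR-AZURE-OPENAI-KEY",
    api_version="2024-02-01",
)

transcript = "Agent: Thanks for calling... Customer: I'd like to add a named driver to my policy..."

response = client.chat.completions.create(
    model="YOUR-GPT-DEPLOYMENT",
    messages=[
        {"role": "system",
         "content": "Summarise the call and return JSON with the keys: "
                    "summary, call_reason, sentiment, follow_up_required."},
        {"role": "user", "content": transcript},
    ],
)

# In practice you would validate the model's output before parsing it.
record = json.loads(response.choices[0].message.content)
print(record)  # a structured record that Power BI can report against
```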

Neither example necessarily requires training to set up, and both demonstrate the power of AI in the digital workplace.

7. CEOs need to start thinking about AI now 

Many organisations are looking at how they can best leverage AI, and CEOs need to act now to prepare for this exciting and emerging area. Dan shared some useful insights gained from a recent McKinsey article about what CEOs need to be doing to get started with generative AI, including:

  • Build a cross-functional team to identify high-value use cases, consider the proprietary data that can be used, identify any risks and report back to senior leaders.
  • Move forward with AI education and training. For example, at Content Formula we have “AI Mondays” where the team members present things they’ve learnt about AI back to the group.  
  • Evaluate the in-house technical resources, skills, infrastructure and access to AI models that you have at your disposal. Are there any gaps? 
  • Evaluate the “preparedness” of your proprietary data and whether it needs to be cleaned up or prepared. 
  • Identify some use cases and “low hanging fruit” for potential AI pilots or Proof of Concepts (POCs) so you can learn from these. 
  • Draft appropriate AI policies and principles to manage risks, ideally building on existing policies.  
  • Get familiar with the AI legislation that is being drafted and will impact AI in the future – for example the EU has published a draft AI framework.  

8. It’s important to consider the risks around generative AI 

Generative AI is still evolving and has the power to not only transform the digital workplace but also the way we work. It’s important to consider some of the risks involved. These currently fall into seven different areas: 

  • Fairness and bias: Generative AI can include inherent bias, either within the Large Language Model or based on the documents it has been trained upon. This could be a serious issue if, for example, you are using it to examine CVs and it introduces bias into your recruitment process.
  • Intellectual property (IP): If using a public AI like the popular version of ChatGPT, it is possible that your IP is not protected in the responses that ChatGPT gives, or you could inadvertently plagiarise someone else’s IP when publishing content generated by ChatGPT.
  • Privacy and confidentiality: Using the public version of ChatGPT can mean a breach of private and confidential data. Even when using a non-public version within your own company, you need to make sure that private data is not exposed internally to people who should not be able to see it.
  • Security: The use of AI is likely to present new risks around cybersecurity, with new opportunities for cyber criminals. Employees will need to be prepared for the associated risks.
  • Explainability: AI does not explain the source behind its output or the reasoning, although some solutions are making advances in this area. 
  • Reliability & accuracy: There are some questions about the accuracy of generative AI – for example with some well-publicised “hallucinations” from ChatGPT with erroneous and made-up “facts”.  
  • Cultural & social impact: As we go further with AI there will be more ethical questions that arise, especially as jobs and roles are impacted.

 Our thanks to everyone who participated in the session! 

Want to discuss AI in the workplace? Get in touch! 

AI in the workplace is a very exciting area. If you’d like to discuss any of the issues described in the webinar, have a question, or want to get your own AI project off the ground, then get in touch!


