What Exactly is Generative AI?

You’ve likely heard of something called ChatGPT. Maybe you’ve even incorporated it into your workplace. Or you may have heard of similar creations from Microsoft and Google. More recently we’ve seen the arrival of a new tool called DeepSeek. But what exactly are these AI tools all about, and why do they matter?

This new breed of internet tools, years in the making but only sweeping the mainstream in the last few years, is what’s known as Generative AI. In essence, these are machine learning algorithms that learn from existing content, such as text, audio, or images, and use it as an aid to create new content. The algorithms abstract the underlying patterns from that content and use them to generate “similar” content.

Generative AI has been built into the world’s leading search platforms and business software applications. It can be used to search for and generate content on an almost endless range of topics, from poems and stories to programming code and websites. Generative AI will ultimately be incorporated into more and more of our everyday products and services. Two prominent examples are ChatGPT (AI text) and DALL-E (AI images).

ChatGPT (Chat Generative Pre-trained Transformer) is an AI chatbot developed by OpenAI, with backing from global tech giant Microsoft, and launched in November 2022. It is built on top of OpenAI’s GPT-3 family of large language models (LLMs) and has been fine-tuned, a way of further training an AI on task-specific examples, using both supervised and reinforcement learning techniques.

Using Generative AI with Automation

Generative AI combined with automation could be a powerful partnership, potentially saving a great deal of time and effort in the development of robotic process automations.

However, since generative AI is still relatively new, we’ve yet to see the full effect of using these models. They carry some inherent risks, and caution needs to be exercised.

For instance, the outputs produced by generative AI will mostly appear completely plausible and convincing, which is the intended result. However, plausible does not mean correct: the models learn from existing data and content, which can be incorrect, false, or biased.

Future Uses

At the moment, generative AI has very limited knowledge of most robotic process automation platforms. But some plausible future uses for generative AI could be:

  • Developer Code Creation: Automation developers could significantly speed up aspects of the automation development lifecycle using generative AI, creating entire blocks of code in whichever programming language suits the tools being used. This applies equally to generating large amounts of code and to producing complex code in less time.
  • Citizen Code Creation: Slightly trickier, but with a massive potential upside, non-technical and business users could also use generative AI, for example to generate automation code they lack the skills or knowledge to write themselves. They can simply ask for what they need using natural language descriptions.
  • Automation Testing: Automation developers could use generative AI for testing automations and generating the required test scripts and data. In theory, this could also be extended to the peer review process, and become a standard fixture in the automation development and testing procedures.
  • Customer Interaction: Because generative AI understands text, audio and images, it could begin to play a more important role in managing customer interactions, especially in multi-channel strategies. Combined with chat services and automation, it could enhance the customer experience and optimise triage, routing customers to the parties best able to address their needs.

Mitigating the Risks of Generative AI with Automation

The risks can, to some extent, be mitigated. Firstly, it’s important to understand what data is used to train the AI models, how it is used, and the sources behind the outputs that searches produce. It may not be possible to know the answer to the first of those if you’re using an AI service from another vendor, such as ChatGPT or Bard. Legislation for AI is underway to try to ensure that AI is both responsible and explainable, and citing the sources of reference for outputs produced will hopefully become a more common feature as the technology evolves.

Probably more important, but less obvious, is to ensure that a human-in-the-loop is in place: someone to perform sensible human checks on the output of generative AI before using it to develop bots. Generated code should be fully tested, both as a single unit and integrated within the overall automation flow, covering as many scenarios as possible, especially expected failures, with additional notation and logging built in to capture anomalies and exceptions.
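
As an illustration, a human-in-the-loop gate can be enforced in code rather than left to convention. The sketch below is minimal and hypothetical (the `GeneratedArtifact`, `approve`, and `deploy` names are ours, not from any RPA platform): nothing generated by an AI can be deployed until a named reviewer has signed it off.

```python
from dataclasses import dataclass, field

@dataclass
class GeneratedArtifact:
    """A piece of AI-generated code awaiting human review."""
    name: str
    code: str
    approved: bool = False
    review_notes: list = field(default_factory=list)

def approve(artifact: GeneratedArtifact, reviewer: str, note: str) -> None:
    """Record a human sign-off; only approved artifacts may be deployed."""
    artifact.review_notes.append(f"{reviewer}: {note}")
    artifact.approved = True

def deploy(artifact: GeneratedArtifact) -> str:
    """Refuse to deploy anything a human has not signed off on."""
    if not artifact.approved:
        raise PermissionError(f"{artifact.name} has not passed human review")
    return f"deployed {artifact.name}"
```

The point of the design is that the check lives in the deployment path itself, so a missed review fails loudly instead of silently shipping untested bot code.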

Governance Framework for Bots and Generative AI

If you’re going to use generative AI as part of your development process, you’ll want to begin by including it in your governance framework. Be clear on what to use, when and how to use it, and how to verify and test it, and specifically address security, especially if you’re sharing potentially confidential or sensitive data on an open platform. The following framework outlines a suggested approach to generative AI automation governance.

Mitigating Errors, False Answers, and Bias

  1. Data Quality: It’s critical to ensure that the data used to train the AI model is of high quality and accurately represents the desired outcomes. This includes removing duplicates, correcting inaccuracies, and removing irrelevant data.
  2. Algorithm Selection: The selection of the appropriate AI algorithm for the task at hand is critical to avoiding errors, false answers, and bias. It is important to carefully evaluate the performance and accuracy of different algorithms and select the one that best meets the needs of the situation.
  3. Bias Testing: The AI model must be tested for bias in both the training and deployment phases. This may include using a diverse data set, testing the model against a variety of use cases, and monitoring the output for any patterns of discrimination.
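
The data quality step above can be sketched in a few lines. This is a minimal illustration, assuming training records are simple dictionaries with hypothetical `text` and `label` fields; a real pipeline would do far more (near-duplicate detection, accuracy correction, relevance filtering).

```python
def clean_training_records(records):
    """Drop exact duplicates and incomplete records, preserving order
    (step 1, Data Quality)."""
    seen = set()
    cleaned = []
    for rec in records:
        key = (rec.get("text"), rec.get("label"))
        if None in key:   # incomplete record: missing text or label
            continue
        if key in seen:   # exact duplicate of an earlier record
            continue
        seen.add(key)
        cleaned.append(rec)
    return cleaned
```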

Addressing Security and Data Requirements

  1. Data Security: Generated code must be secured to ensure that sensitive information is not compromised. This includes encrypting data in transit and at rest, as well as controlling access to the data through authentication and authorisation mechanisms.
  2. Compliance with Data Protection Regulations: If you’re using natural language descriptions to generate code, make sure that what you’re describing, and any data being pasted in, doesn’t breach relevant data protection regulations such as GDPR and HIPAA. This ensures that the privacy of individuals is protected.
  3. Threat Detection: Put in place a robust threat detection and response system to detect and respond to any security incidents.
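
For the compliance point, one common precaution is redacting obvious personal identifiers before a prompt ever leaves your network. The sketch below is minimal, with two hypothetical patterns (email addresses and US-style SSNs); real GDPR or HIPAA compliance would need a much broader, reviewed pattern set.

```python
import re

# Hypothetical patterns; extend to match your own data protection needs.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),
]

def redact(prompt: str) -> str:
    """Replace obvious personal identifiers with placeholders
    before the prompt is sent to an external AI service."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt
```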

Recommendations for Searching and Selecting Generated Code

    1. Code Repository: The company should maintain a repository of all AI-generated code, with clear documentation on the purpose and use of each piece of code.
    2. Code Quality: The code generated by the AI should be reviewed and tested to ensure its quality and accuracy. The company should have a process in place for monitoring and addressing any issues that arise.
    3. Code Reuse: The company should encourage the reuse of code that has been previously generated and tested, rather than generating new code for each project.
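
The repository and reuse recommendations combine naturally into a look-up-before-generate pattern. This is a minimal in-memory sketch (the class and method names are hypothetical, and `generate` stands in for whatever AI service you call): previously tested code is reused, and the generator is only invoked for genuinely new purposes.

```python
class CodeRepository:
    """Minimal in-memory store of AI-generated code, keyed by purpose
    (items 1 and 3 above)."""

    def __init__(self):
        self._store = {}  # purpose -> (code, documentation)

    def register(self, purpose: str, code: str, documentation: str) -> None:
        """File reviewed code with its documentation."""
        self._store[purpose] = (code, documentation)

    def get_or_generate(self, purpose: str, generate):
        """Reuse previously stored code when available; otherwise call
        the (hypothetical) generator once and cache the result."""
        if purpose not in self._store:
            self._store[purpose] = (generate(purpose), "pending review")
        return self._store[purpose][0]
```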

Testing and Deployment

    1. User Acceptance Testing: The code should be thoroughly tested by end-users to ensure that it meets their needs and expectations.
    2. Performance Testing: The code should be performance tested to ensure that it can handle the desired volume and complexity of tasks.
    3. Deployment: The code should be deployed in a controlled manner, with clear procedures for monitoring and addressing any issues that arise.
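
The performance testing step can start very simply: run the generated task over a representative batch of inputs and check it stays within a time budget. A minimal sketch, assuming the task is a callable applied per input item (the function name is ours):

```python
import time

def performance_test(task, inputs, max_seconds: float) -> bool:
    """Run a generated task over a batch of inputs and report whether
    it finished within the time budget (item 2, Performance Testing)."""
    start = time.perf_counter()
    for item in inputs:
        task(item)
    return (time.perf_counter() - start) <= max_seconds
```

In practice you would run this against volumes that reflect production load, not toy batches, and record the timings alongside the code in the repository.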

Conclusion

  1. Continuous Improvement: The governance framework should be regularly reviewed and updated to ensure that it remains relevant and effective.
  2. Cross-functional Collaboration: The company should encourage cross-functional collaboration between the operational, AI, automation, and security teams. This ensures that all relevant perspectives are taken into account.
  3. Ethical Considerations: The company must ensure that the use of generative AI is ethical, avoiding potential harm to individuals or communities.

In summary, the governance framework above outlines only one possible approach to ensuring the controlled use of generative AI when developing automations. It balances the benefits of the technology against the need to ensure quality, security, and ethical use. By following this kind of framework, a company can ensure that its automation projects are successful and meet the needs of its stakeholders.

Talk to us and share your thoughts on automation and the Future of Work.