
Navigating AI responsibly

March 6, 2024


In an agency environment, the plethora of AI tools coming onto the market presents an opportunity to fundamentally transform the way we work. But there is a danger of getting caught up in the excitement and losing sight of the fundamentals.

Keeping an eye on what matters

For me, establishing a pragmatic position on the use of AI is about three things:

Managing data
Ensuring that the AI tools used aren’t absorbing data from inputs (direct prompts or the contents of files) into their training models, in breach of contractual confidentiality clauses. This means doing due diligence on every tool used, and understanding what happens to any data that is provided or accessible to it.

Managing quality and reputation
While AI has clear potential to improve efficiency, those gains should not come at the expense of the quality of the work. That requires the people using the tools to understand their limitations, and to treat the outputs as a starting point for exploration and discussion rather than as finished production work. Expert (human) knowledge and creativity should be valued and nurtured; if responsibility for thinking and creativity is delegated entirely to AI, there is a significant risk of becoming ‘ordinary’ (or plain wrong) and of damaging a hard-earned reputation.

Managing cost
AI tools can be expensive to roll out. It’s important to identify the tools that deliver the most value for the cost, and to understand who in the organisation gets the most value from them. Step carefully and focus on ROI, which can be tricky to measure for tools like these (what is ‘saved time’ actually worth?); even a rough model, like the one sketched below, at least makes the assumptions explicit. And avoid duplicated cost where the same functionality is provided by multiple tools.
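
One way to keep the ROI question honest is a back-of-envelope calculation that puts an explicit (and debatable) price on ‘saved time’ and compares it with licence cost. Here is a minimal sketch in Python; every figure in it is an illustrative assumption, not a benchmark:

    # Hypothetical back-of-envelope ROI model for an AI tool rollout.
    # All figures are illustrative assumptions, not benchmarks.
    def monthly_roi(seats: int, licence_per_seat: float,
                    hours_saved_per_seat: float, hourly_rate: float) -> float:
        """Return monthly ROI as a ratio: (value of time saved - cost) / cost."""
        cost = seats * licence_per_seat
        value = seats * hours_saved_per_seat * hourly_rate
        return (value - cost) / cost

    # e.g. 20 seats at £30/month, each seat saving an estimated
    # 3 hours of work billed at £75/hour:
    print(f"ROI: {monthly_roi(20, 30.0, 3.0, 75.0):.0%}")  # ROI: 650%

Even a crude model like this forces the key assumptions (how many hours are really saved, and at what rate?) into the open, which is where the useful discussion happens.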

Taking a position

A formal AI policy provides clarity to employees on the business’s position on the use of AI and the specific contexts in which it is or is not permitted. A client-facing AI statement, based on that policy, also provides reassurance to clients that we recognise the limitations and risks and are taking steps to mitigate them. As an example, our client-facing AI statement at Flag is built around these five key principles:

1. Understand the tools we’re using
We will ensure that our use of AI is informed by researching the tools being used to understand:

  • any known limitations of the tool
  • what data has been used to train the underlying AI model
  • what happens to data that is used in inputs/prompts to the tool.

2. Protect confidential and sensitive data

  • We will not use any client-supplied information with AI tools without explicit written permission.
  • We will not use AI tools that expose client-supplied data to the tool’s underlying training models.
  • We will not use AI tools that expose their inputs or outputs to third parties.
  • We will not expose any personal or sensitive data to AI tools.
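
To make that last commitment concrete, here is a minimal illustrative sketch (in Python) of the kind of pre-flight check that could sit between a prompt and any external AI tool. The patterns are hypothetical examples, not a serious personal-data detector:

    import re

    # Illustrative patterns only; a real deployment would need a far more
    # thorough personal-data detector than a handful of regexes.
    PERSONAL_DATA_PATTERNS = {
        "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "UK phone number": re.compile(r"(?:\+44|\b0)\d{9,10}\b"),
        "UK National Insurance number": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),
    }

    def check_prompt(prompt: str) -> list[str]:
        """Return the names of any personal-data patterns found in the prompt."""
        return [name for name, pattern in PERSONAL_DATA_PATTERNS.items()
                if pattern.search(prompt)]

    issues = check_prompt("Summarise this email from jane.doe@example.com")
    if issues:
        # Block the request rather than expose personal data to the tool.
        print("Blocked: prompt contains " + ", ".join(issues))
    else:
        print("OK to send")

A check like this is a backstop, not a substitute for the written-permission rule above.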

3. Follow responsible and ethical AI principles

  • We will only use tools that are reputable and have stated ethical and responsible AI principles.
  • We will avoid any actions that could harm others, violate privacy or facilitate malicious activities.
  • We will use AI tools in compliance with all applicable laws and regulations, including data protection, privacy and intellectual property laws.

4. Be sceptical about AI outputs

  • We will take the time to read and understand what the tool has produced.
  • We will not assume that the AI output is correct or complete; we will use it as a starting point for discussion and/or further exploration and seek additional corroboration.
  • We will also review all AI outputs for other potential issues, including checking for personal data; considering the risk of IP infringement; identifying content that is biased, discriminatory, inappropriate or offensive; and checking for inappropriate tone.

5. Be transparent about our use of AI

  • If we use AI tools, we will be transparent with you about where, how and why those tools were used.


I suspect that over time, AI will become less visible and more contextually integrated into processes and workflows in the background. But in this period of rapid evolution, it’s important to have some guardrails in place to ensure that AI is being used in an informed, responsible and ethical way that meets contractual and legal obligations.


© Mike Taylor 2024