March 6, 2024
In an agency environment, the plethora of AI tools coming onto the market presents an opportunity to fundamentally transform the way we work. But there is a danger of getting caught up in the excitement and losing sight of the fundamentals.
For me, establishing a pragmatic position on the use of AI is about three things:
Managing data
Ensuring that the AI tools used aren’t absorbing input data (direct prompts or the content of files) into their training data in breach of contractual confidentiality clauses. This means doing due diligence on the tools used, and understanding what happens to the data that is provided or accessible to those tools.
Managing quality and reputation
While AI has clear potential for improving efficiency, those gains should not come at the expense of the quality of the work. That requires the people using the tools to understand their limitations, and to ensure that the outputs are used as a starting point for exploration and discussion, rather than as a direct production tool. Expert (human) knowledge and creativity should be valued and nurtured; if responsibility for thinking and creativity is delegated entirely to AI, there is a significant risk of producing work that is ‘ordinary’ (or plain wrong) and of damaging a hard-earned reputation.
Managing cost
AI tools can be expensive to roll out. It’s important to identify which tools offer the best value for their cost, and who in the organisation gets the most from them. Step carefully and focus on ROI, which can be tricky to measure for tools like these (how do you price ‘saved time’?). And avoid duplicated cost where the same functionality is provided by multiple tools.
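One way to ground the ‘saved time’ question is a back-of-the-envelope calculation. Here is a minimal sketch in Python, with entirely hypothetical figures; the seat count, licence cost, hours saved and hourly rate would all come from your own rollout, not from anything above:

```python
# Rough ROI estimate for an AI tool rollout.
# All figures below are hypothetical placeholders, not real pricing.

seats = 20                   # licensed users
cost_per_seat_month = 25.0   # licence cost per user per month
hours_saved_per_month = 3.0  # estimated hours saved per user per month
hourly_rate = 45.0           # fully loaded cost of an hour of that time

monthly_cost = seats * cost_per_seat_month
monthly_value = seats * hours_saved_per_month * hourly_rate

# ROI as net gain relative to cost: (value - cost) / cost
roi = (monthly_value - monthly_cost) / monthly_cost
print(f"cost: {monthly_cost:.0f}, value: {monthly_value:.0f}, ROI: {roi:.0%}")
```

The fragile input is hours_saved_per_month: it is an estimate, not a measurement, which is exactly why ROI is tricky to pin down here.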
A formal AI policy provides clarity to employees on the business’s position on the use of AI and the specific contexts in which it is permitted or not. A client-facing AI statement, based on that policy, also provides reassurance to clients that we recognise the limitations and risks and are taking steps to mitigate those. As an example, our client-facing AI statement at Flag is based around these five key principles:
1. Understand the tools we’re using
We will ensure that our use of AI is informed by researching the tools being used, in particular what happens to the data we provide to them and where their limitations lie.
2. Protect confidential and sensitive data
3. Follow responsible and ethical AI principles
4. Be sceptical about AI outputs
5. Be transparent about our use of AI
I suspect that over time, AI will become less visible and more contextually integrated into processes and workflows in the background. But in this period of rapid evolution, it’s important to have some guardrails in place to ensure that AI is being used in an informed, responsible and ethical way that meets contractual and legal obligations.