The Responsible Use of AI In Communication
As a speaker and writer, I’m often asked how I maintain originality when using AI. But this isn’t a creative challenge; it’s an ethical one. In my world, responsible AI use requires a blend of transparency, training and what I call ‘confident tweaking’.
Transparency
To prevent misunderstandings and build trust, my clients must know when I’ve used an AI tool or Large Language Model, so I’ve developed my own AI policy.
I disclose when and where an LLM is likely to be deployed, which specific LLM(s) I use, and where the LLM stops and the human starts (that is, when I step in and take over, and why I always do).
Training
Everyone who works for, with and around me is trained to use AI responsibly, because it doesn’t come naturally. Our training clarifies:
Data exposure — What confidential information, proprietary ideas, trade secrets or sensitive data should never be entered into an LLM?
Privacy — How can we safeguard customer details and business strategies? How can we use anonymized or dummy data?
Ownership — Who owns which pieces of generated content? Me (the service provider), the client or the LLM provider?
Confident tweaking
Human communicators must be able to deftly massage LLM-generated output to maintain the balance between automation, authenticity and nuance.
Part of becoming AI-literate is learning to pre-empt and remedy limitations and biases. For example, LLM copy might nail the grammar but fall flat on cultural sensitivity, which is where confident human tweaking comes in.
Connect with Tiffany Markman on LinkedIn for tips, tricks, ideas, and loads of free resources: https://www.linkedin.com/in/tiffany-markman/