With AI fast becoming an integral component of business, how should it be used, and what should a company policy on generative AI include? Geoff Wilson shares a few thoughts on the matter.
I’m going to open this article with the same statement I will use in all AI articles: I’m still learning. We all are.
The question I and our team at WGP have been wrestling with is this: Now that Generative AI (GAI) is in the popular lexicon and is beginning to permeate academia and some workplaces, what is an effective and flexible policy statement that informs our own practices around it?
My current answer comes down to four points. I would appreciate any reactions or feedback on whether these policies would satisfy you as an executive, or your customers if you work in similar professional services. I’m also genuinely curious to understand what this leaves out.
Our emerging AI policy for WGP is encompassed in the following four policy points:
Be Human-Centered – Because no generative artificial intelligence will replace the understanding and judgment required to navigate organizations, cultures, and individual relationships, we will always have a human in the loop for content, recommendations, and basic communications (yes, even automated emails, which we will never use). This means our people must be experts in understanding what AI can and cannot do.
Be Secure – Generative AI has the potential to “see” very complex data associations in even basic user-provided data. No proprietary data will be shared with generative AI platforms unless those platforms are trusted and certified as proprietary, walled, or otherwise data-safe. Otherwise, whenever we feed data to a GAI platform or query one, we should treat those actions as if they were posts to social media.
Be Transparent – Use of AI as a force multiplier is quite possibly a general good. However, because it is not yet clear that generative AI platforms are reliable on background facts, we will disclose whenever we use such tools to generate content in a given document. This communicates the risks of accepting such output, and it prevents our professionals from misrepresenting their own capabilities and work behind an AI shield.
Be Ethical – Every deployment of complex technology has ethical use questions. We must remain independent in our recommendations on our and our clients’ use of AI in general as to its benefits, its risks, and its overall impact on society. We will not recommend uses that, in our judgment, create net-negative impact when private and public benefits and costs are considered.
Read the full article, Framing our AI approach: Establishing professional policies, on WilsonGrowthPartners.com.