She’s a whip-smart millennial tech CEO of a small woman-owned business whose organization contracts with the federal government on cybersecurity and national defense. For the last two years it’s been just the two of us. Now, however, we call it our “coaching triangle,” and it includes an AI Coach. It’s new and definitely different. We’re still figuring out whether we like it … how it’s working. (TO BE CONTINUED…) Here’s how it started.

We were discussing the contract renewal for the coming year and she requested more “real-time” support (i.e., phone or text). Before she had even finished the request, she laughed and interrupted herself, saying, “OK, probably not realistic” (we both travel pretty extensively internationally). I said, “Well, hold on a second…” and shared that I’d been preparing an upcoming presentation for the Capital Coaches Conference on the topic of “AI Coaches” (October, btw) … the exploration into AI Coaches had been pretty interesting … AI Coaches are available 24/7 … etc., etc. … new frontier … ~ PERHAPS ~ something we might consider (?) … etc.

Bottom line: She was game and wanted to include that option moving forward — we’d bring in an AI Coach as part of our triangle coaching approach in the coming year.

That decision kicked off a load of other questions/considerations — too many to go into here, but suffice it to say a few things:

1) We needed some new contract language that I didn’t have.

2) We needed to pin down the logistical and other details of how we’d integrate the “AI Coach” into our existing work in the “triangle.” Triangles generally send the flags flying and crowds booing in my head — heebie-jeebies that it’s a tee-up for persecutor/victim/rescuer dramas. We’d need to be intentional and effective when it came to integrating “data” and reflections gleaned from the AI Coach into our creative, empathetic, and contextual process work as the two humans!

3) (and this was cool…) Because I had some presentation prep/due diligence under my belt by that point, I could respond when she fired off a few tech-focused questions, like: “Do you know what LLM (Large Language Model) that AI chatbot was trained on?” I responded confidently (veiling my pride), “Claude 3 Opus — a model developed by Anthropic.”

(NOTE: Claude 3 Opus is the LLM lineage I selected because Anthropic — at this point in time — is generally considered one of the “good ones” in the hyper-competitive AI world. They’ve been more concerned about and sensitive to ethics and ethical issues in AI. This being my first go with a client who KNOWS technology and the underbelly of these tools, I suspected she might be wondering about inherent biases as a first-generation Indian-American woman. That said, NO LLMs are immune to bias.)

Before we launched our triangle experiment, I sent her a “quick-hit” (from ChatGPT 4.0) on some key areas where bias might manifest in Claude 3 Opus — included below. Note the text highlighted in red, which I flagged because of the clear departures from pure coaching as defined in the International Coaching Federation world (to which I predominantly subscribe).

  1. Data Bias: The training data used to develop Claude comes from publicly available sources, which are often biased in various ways. For example, text from social media platforms, news websites, and academic papers reflects the predominant views, ideologies, and cultural norms of the people who produce it. If certain groups, perspectives, or regions are underrepresented in these datasets, the model might unintentionally favor the more represented groups’ viewpoints.
  2. Cultural Bias: Large language models often reflect the dominant cultural context embedded in their training data. Western-centric, English-dominated content can influence the types of responses the model gives, potentially overlooking non-Western or less dominant perspectives. In a coaching context, this could lead to advice or feedback that inadvertently aligns with certain cultural norms while being less adaptable to others.
  3. Social and Gender Bias: Research has shown that language models can perpetuate harmful stereotypes related to race, gender, and social class. For instance, models trained on vast datasets without careful curation might unintentionally associate certain professions with specific genders (e.g., doctors with men, nurses with women). In the context of coaching, this could subtly influence the advice the model gives or the perspectives it reflects.
  4. Optimizing for Safety and Alignment: Anthropic, the creator of Claude, is known for its focus on developing “aligned” AI that operates according to human ethical values and safety standards. This means that Claude 3 Opus is likely tuned to avoid generating harmful or dangerous content. While this is a critical feature, overzealous filtering can sometimes limit the breadth of responses, resulting in a model that may avoid complex or sensitive topics altogether or provide overly cautious responses in nuanced situations.
  5. Confirmation Bias: AI models, including Claude 3 Opus, might exhibit a form of confirmation bias, where they tend to agree with the user’s viewpoint rather than challenge it. In a coaching scenario, this could reduce the effectiveness of the model in helping clients explore new perspectives or challenge their assumptions. It might cater to the user’s existing preferences or beliefs more than a human coach would.
  6. Bias in Reinforcement Learning with Human Feedback (RLHF): If the human feedback used to fine-tune the model reflects certain biases (consciously or unconsciously), those biases could propagate into the model’s responses. For example, if feedback evaluators prefer certain types of responses—those that align with their ethical or cultural values—the model may overfit to those preferences and marginalize others.