Notes From the Field: Gen AI in Customer Experience
“It is not the critic who counts, not the man who points out how the strong man stumbles or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does strive to do the deeds; who knows great enthusiasms, the great devotions; who spends himself in a worthy cause; who at best knows, in the end, the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat.” — Theodore Roosevelt.
A wise friend once shared this quote with me, and it resonated deeply. It’s about those who are brave enough to take risks and try new things. As a Gen AI evangelist, I have the privilege of working with this cutting-edge technology, and I interact with digital natives daily. They share their insights, and I share mine. With each interaction, I learn something new. The “Notes From the Field” series documents what I’ve learned in a conversational format.
My goal is to share the essence of these complex topics in a way that is easy to understand.
Generative AI is a transformative change that challenges how we create and use technology. It goes beyond technology and involves ethics, collaboration between humans and AI, social science, economic impacts, and regulations.
I recently participated in a panel discussion on Generative AI titled “CX Talks: Truth, trust, and Automation with Generative AI.” The conversation was thought-provoking and provided valuable insights. It highlighted the extensive and diverse nature of the Gen AI discussion.
The seven questions posed to the panelists covered strategy, efficiency measurement, risk, governance, loyalty, trust, and the misuse and abuse of Gen AI systems. I used the following constructs to structure the conversation on these myriad topics:
- The SAO Framework
- The IDEA Framework
- The Responsible AI Framework
- Deployment Guiding Principles
- Pillars of Holistic Governance Framework
In this blog, the first of the “Notes From the Field” series, I reflect on the key points from that discussion.
The Conversation
What preparation needs to happen behind the scenes before a generative AI deployment?
Deploying generative AI isn’t just about integrating a new technology; it’s about introducing a whole new way for businesses to operate and deliver value. Let’s dive into this:
- Co-Pilots Concept: First and foremost, let me introduce the ‘Co-Pilots.’ Think of generative AI systems not as standalone entities but as co-pilots working alongside human teams. Whether across existing business processes or developing new products, these AI co-pilots boost human capabilities, making things more efficient and innovative.
- Structured Approach over Random Deployment: Moving away from a ‘spray and pray’ approach is crucial. Deploying AI everywhere without a clear strategy can waste resources and miss out on opportunities. Instead, organizations need a structured framework to guide their AI deployments.
- The SAO Framework: This brings me to the SAO Framework, which I believe is essential for any organization looking to deploy generative AI. The SAO Framework looks at Gen AI opportunities through three complementary lenses:
- Strategic: Start by evaluating your business processes. Identify the areas where co-pilot deployment can add the most value, is feasible, and significantly improves the organization’s capabilities. It’s about aligning AI deployment with business goals.
- Architectural: Once you’ve identified the high-value Gen AI deployments, the next step is to design a blueprint. How will these AI co-pilots be integrated into the existing infrastructure? The focus should be on cost-effectiveness, modularity, and scalability. It’s about building a strong foundation.
- Operational: The final piece of the puzzle is the actual deployment. Organizations must be careful, considering factors like compliance, ethics, and change management. Deploying AI isn’t just a technical challenge; it’s an organizational one. Ensuring smooth integration while mitigating risks is crucial.
In conclusion, being prepared is key as we stand on the edge of a generative AI revolution. By seeing AI as co-pilots, adopting a structured approach, and using the SAO framework, organizations can set the stage for successful and transformative AI deployments.
How do you decide where to use generative AI, and how do you measure the efficiency gains of those different use cases?
This is an important question for any organization entering the Gen AI space. Measuring efficiency gain through Gen AI involves four steps.
1. The first step is to understand the process, assess the impact, evaluate feasibility, and take action. The IDEA Framework for Gen AI deployment helps with this:
- I — Inventory of Processes: Take stock of all business processes and product functionalities. Understand where you’re starting from before considering Gen AI.
- D — Determine Potential Impact: Evaluate how Gen AI can significantly affect each process. This could involve enhancing creativity, automating tasks, or improving decision-making.
- E — Evaluate Feasibility: Not all processes suit Gen AI integration. Consider factors like data availability, technological readiness, and potential risks.
- A — Action and Integration: Once the right processes are identified, integrate Gen AI solutions while ensuring seamless collaboration between human and AI components.
2. The second step is to measure the efficiency gains from the existing state (As-Is) to the desired state (To-Be).
- Baseline Metrics: Before integrating Gen AI, capture the current performance metrics. This provides a reference point for measuring efficiency gains.
- Post-Integration Metrics: Continuously track the same metrics to gauge improvements after deployment.
3. The third step is to apply two types of assessments for measuring efficiency gains.
- Quantitative Assessments: Use measurable metrics such as time saved, cost reduction, increased output, or improved accuracy. For example, if Gen AI is used in content generation, measure the time taken before and after AI integration.
- Qualitative Assessments: Besides numbers, consider factors like improved user satisfaction, enhanced creativity, or better decision quality. Surveys, focus groups, or expert reviews can provide valuable insights.
4. The fourth step is balancing these assessments. Organizations should recognize that Gen AI is a new capability and be willing to invest in its development for future benefits.
While quantitative metrics provide tangible evidence of efficiency gains, it is equally important to consider the qualitative benefits of Gen AI. For example, Gen AI has the potential to drive innovation in product designs, which may not immediately translate to measurable numbers but can lead to long-term advantages.
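The As-Is/To-Be measurement described in steps 2 and 3 can be sketched in a few lines. The metric names and example numbers below are hypothetical, chosen only to illustrate the baseline-versus-post comparison:

```python
from dataclasses import dataclass

@dataclass
class ProcessMetrics:
    """Snapshot of a business process at a point in time (hypothetical metrics)."""
    avg_handle_time_min: float   # average time to complete one task
    cost_per_task_usd: float     # fully loaded cost per task
    tasks_per_day: int           # throughput

def efficiency_gains(baseline: ProcessMetrics, post: ProcessMetrics) -> dict:
    """Percentage change from the As-Is baseline to the To-Be state."""
    def pct_change(before: float, after: float) -> float:
        return round((after - before) / before * 100, 1)
    return {
        "handle_time_change_pct": pct_change(baseline.avg_handle_time_min, post.avg_handle_time_min),
        "cost_change_pct": pct_change(baseline.cost_per_task_usd, post.cost_per_task_usd),
        "throughput_change_pct": pct_change(baseline.tasks_per_day, post.tasks_per_day),
    }

# Example: a content-generation process before and after Gen AI integration
before = ProcessMetrics(avg_handle_time_min=45.0, cost_per_task_usd=20.0, tasks_per_day=40)
after = ProcessMetrics(avg_handle_time_min=15.0, cost_per_task_usd=8.0, tasks_per_day=120)
print(efficiency_gains(before, after))
# → {'handle_time_change_pct': -66.7, 'cost_change_pct': -60.0, 'throughput_change_pct': 200.0}
```

Capturing the baseline before any Gen AI work begins is the critical step; without it, the post-integration numbers have nothing to be compared against.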
In conclusion, deciding where to deploy Gen AI and evaluating its effectiveness requires a combination of structured frameworks like ‘IDEA,’ careful tracking of both objective and subjective metrics, and a balanced approach that considers both quantitative and qualitative benefits. It is a continuous journey of learning and improvement.
In the context of deploying generative AI to build trust and loyalty, where should it be deployed first?
When we talk about introducing new technologies, especially something as powerful and transformative as Generative Artificial Intelligence (Gen AI), it’s crucial to approach it with caution and foresight. Let me explain why.
Imagine Gen AI as a new car model that’s just been designed. Before it’s sold to the public, it undergoes rigorous testing on private tracks, ensuring it’s safe and performs as expected. Similarly, before Gen AI interacts directly with consumers, we must test it in controlled, “fail-safe” environments.
- What do I mean by “fail-safe”? It’s an environment where, if the AI makes a mistake or behaves unexpectedly, the consequences are minimal, and the situation can be controlled. Think of it as a sandbox or a playground where the AI can learn, make errors, and be corrected without causing any real-world harm or inconvenience to consumers.
- Why is this important? Gen AI has the potential to revolutionize how we interact with technology. It can create content, make decisions, and even predict behaviors. But with great power comes great responsibility. If deployed prematurely to the public, and it makes an error, it could lead to mistrust, financial losses, or even harm in some cases.
So, the idea is simple: First, let’s deploy Gen AI in these controlled, fail-safe environments. Let’s understand its strengths and weaknesses. Once we’re confident in its reliability and have addressed potential issues, we can introduce it to the broader consumer market, ensuring a safer and more beneficial experience for everyone involved. This implies that it is apt for the Co-Pilot to be ring-fenced to an internal audience before exposing it to the wider consumer.
In essence, it’s about being proactive rather than reactive, ensuring that by the time consumers interact with Gen AI, it’s as refined, reliable, and safe as possible.
I have thought about three guiding principles to make this happen:
- Think Big. Start Small: Envision a comprehensive co-pilot but initiate with laser-focused precision on one or two pivotal features.
- Agility: Prioritize rapid value delivery and actively seek business user insights. Commit to an iterative refinement fueled by real-world feedback.
- Architect for Tomorrow: Craft with an expansive future in mind. Embrace a production-ready stance from day one, harnessing the power of Infrastructure as Code, A/B testing, and rigorous security protocols.
What are your top tips for organizations to ensure they stay on the right side of that line and, in the context of personalization, don’t overuse or misuse generative AI?
Generative AI is a powerful technology that can enable personalized customer experiences, but it also comes with ethical and legal challenges. Organizations need to be careful not to cross the line between personalization and manipulation or to violate the privacy and consent of their users. Here are my top three tips for responsible use of generative AI for personalization:
- Tip 1: Use generative AI to augment, not replace, human creativity. Generative AI can help generate relevant, engaging, and diverse content, but it cannot replace the human touch and judgment essential for effective communication. Organizations should use generative AI to enhance their existing content creation processes, not as a shortcut to bypass them. They should also ensure that there is always a human in the loop to review, edit, and approve the generated content before it is delivered to the users.
- Tip 2: Use generative AI to empower, not exploit, users. Generative AI can help create personalized experiences that cater to the needs, preferences, and goals of each user, but it should not be used to manipulate or coerce them into actions that are not in their best interest. Organizations should use generative AI to provide value and convenience to their users, not to trick or deceive them. They should also respect the users’ right to know how their data is used and how the content is generated and provide them with options to opt out or customize their preferences.
- Tip 3: Use generative AI to optimize, not overdo, personalization. Generative AI can help create content that is tailored to each user’s context and situation, but it should not be used to bombard or overwhelm them with too much or too frequent information. Organizations should use generative AI to optimize the quality and quantity of their content, not to overdo it. They should also monitor the performance and feedback of their content and adjust their strategies accordingly.
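As one illustrative sketch of how these tips might be enforced in practice, the pre-screen below routes AI-generated messages into a mandatory human review queue before delivery. The trigger rules (opt-in status, manipulative phrasing, message length) and the threshold values are hypothetical placeholders, not a recommended policy:

```python
import re

def requires_human_review(generated_text: str, recipient_opted_in: bool) -> bool:
    """Gate AI-generated content behind a human approval step.
    All rules below are illustrative placeholders, not a real policy."""
    if not recipient_opted_in:          # Tip 2: respect consent and opt-out choices
        return True
    urgency_pressure = re.search(r"\b(act now|last chance|limited time)\b",
                                 generated_text, re.IGNORECASE)
    if urgency_pressure:                # Tip 2: flag possibly manipulative phrasing
        return True
    if len(generated_text) > 2000:      # Tip 3: avoid overwhelming the user
        return True
    return False                        # routine content: lighter-touch review

print(requires_human_review("Here is your order status update.", recipient_opted_in=True))
print(requires_human_review("Act now! Last chance to claim your reward!", recipient_opted_in=True))
```

In line with Tip 1, a check like this would sit in front of the human reviewer, prioritizing their attention rather than replacing it.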
Where should it be deployed first to drive loyalty and trust? I’d start with areas that have immediate, tangible benefits without compromising the human touch. Maybe it’s in after-sales support, where AI can provide instant solutions and then hand over trickier issues to human experts. Or perhaps in content recommendations, where AI can curate personalized user experiences. The key is to start where AI can be a reliable supporting act, building trust and setting the stage for bigger roles.
In conclusion, while generative AI has the potential to revolutionize CX and EX, it’s essential to ensure it plays in harmony with human elements.
How should organizations adapt their data policies in preparation for a generative AI deployment?
The overall governance framework becomes critically important when discussing data policies for a Gen AI deployment. Let’s break this down into three key areas of focus: Data Governance, AI Governance, and Organizational Governance.
1. Data Governance: When discussing Data Governance in the context of generative AI, it’s not just about managing data. It’s about ensuring the right quality and integrity of data that feeds into these AI systems. A few key elements of Data Governance include:
- Data Classification: We must classify our data based on sensitivity and relevance. This helps us determine which datasets are suitable for training generative AI models and which might pose privacy or ethical concerns.
- Data Quality: The data we use must be accurate and free from biases. Remember, the output of our AI models is only as good as the input data. Poor data quality can lead to biased or inaccurate AI outputs.
- Access Control: Strict controls on who can access and modify data sets are crucial. This safeguards against unauthorized data tampering or misuse.
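The classification step above can be sketched as a simple gate. The four-level sensitivity scale and the training-eligibility policy below are hypothetical, meant only to show the shape of such a control:

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4  # e.g. PII, payment data

# Hypothetical policy: only datasets at or below this level may feed model training
MAX_TRAINING_SENSITIVITY = Sensitivity.INTERNAL

def eligible_for_training(datasets: dict) -> list:
    """Return dataset names whose classification permits Gen AI training use."""
    return [name for name, level in datasets.items()
            if level.value <= MAX_TRAINING_SENSITIVITY.value]

catalog = {
    "product_docs": Sensitivity.PUBLIC,
    "support_transcripts": Sensitivity.CONFIDENTIAL,  # contains customer details
    "style_guides": Sensitivity.INTERNAL,
}
print(eligible_for_training(catalog))  # → ['product_docs', 'style_guides']
```

Even a coarse gate like this forces the classification conversation to happen before any dataset reaches a training pipeline.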
2. AI Governance: AI Governance goes hand in hand with Data Governance, especially when deploying generative AI. A few key elements of AI governance include:
- Model Transparency: Our AI models should be transparent. Stakeholders should be able to understand how decisions are made, ensuring accountability.
- Regular Audits: We must audit our AI models regularly. This isn’t just about checking for technical accuracy but also for biases or unintended consequences.
- Feedback Loops: Implementing feedback mechanisms is essential. It allows for continuous refinement of our models based on real-world feedback.
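A feedback loop like the one described can start very simply. The sketch below assumes a hypothetical 1–5 user rating on each AI response and flags the model for audit when a rolling average dips below a threshold; the window and threshold values are placeholders:

```python
from collections import deque
from statistics import mean

class FeedbackLoop:
    """Minimal feedback mechanism: track recent user ratings of AI outputs
    and flag the model for review when quality drifts below a threshold."""

    def __init__(self, window: int = 100, alert_threshold: float = 3.5):
        self.ratings = deque(maxlen=window)   # rolling window of 1-5 ratings
        self.alert_threshold = alert_threshold

    def record(self, rating: int) -> None:
        if not 1 <= rating <= 5:
            raise ValueError("rating must be 1-5")
        self.ratings.append(rating)

    def needs_review(self) -> bool:
        """Trigger an audit when the rolling average drops below the threshold."""
        return bool(self.ratings) and mean(self.ratings) < self.alert_threshold

loop = FeedbackLoop(window=5, alert_threshold=3.5)
for r in [5, 4, 2, 2, 3]:   # recent user ratings of generated answers
    loop.record(r)
print(loop.needs_review())  # → True (average 3.2 < 3.5), so flag for audit
```

The point is less the arithmetic than the wiring: real-world signals flow back into a trigger that prompts the regular audits described above.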
3. Organizational Governance: Lastly, Organizational Governance is about the broader picture. How does the organization, as a whole, approach and manage the deployment of generative AI? A few key elements of Organizational Governance include:
- Ethical Guidelines: Organizations should establish clear ethical guidelines for AI deployment. This isn’t just about compliance; it’s about ensuring that AI aligns with the organization’s values and societal norms.
- Stakeholder Engagement: Engage with internal teams, external experts, customers, and possibly even the public. Diverse perspectives can highlight potential pitfalls and opportunities.
- Continuous Learning: The AI landscape is evolving rapidly. Organizations should invest in continuous learning and training programs to ensure teams remain updated and can leverage AI responsibly.
In conclusion, as we stand on the cusp of a generative AI revolution, it’s not just about technology. It’s about ensuring that our organizations are prepared, from a data, AI, and broader governance perspective, to deploy AI in a manner that’s responsible, ethical, and adds genuine value.
What risks does generative AI pose to customer trust, and how can that risk be mitigated?
Generative AI sometimes feels like using those fancy new kitchen gadgets. They promise to make the perfect meal every time, but occasionally, you end up with burnt toast. And nobody likes burnt toast.
Generative AI, as powerful and promising as it is, comes with challenges, especially regarding customer trust. Some of these risks include:
- Misinformation and Fake Content: Generative AI can create content that’s so convincing that it’s hard to tell if it’s real or AI-generated. This can lead to misinformation or even fake news. Imagine an AI writing a fake review for a product or creating a fictional news story. It’s like that kitchen gadget promising a gourmet meal and delivering burnt toast.
- Loss of Personal Touch: Another concern is losing the human touch. If every piece of content, every customer support message, or every creative artwork is generated by AI, where’s the human element? It’s like receiving a mass-produced gift instead of a handcrafted one.
- Over-reliance and Laziness: Then there’s the risk of becoming too dependent. If we rely solely on AI for everything, we might become lazy or lose our creative edge. It’s like using that kitchen gadget for every meal and forgetting how to cook on your own.
For any organization, trust is paramount. It takes years to build it and moments to lose it. Having a Responsible AI Framework across the board that promotes fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability is important. The threads of this framework need to be interwoven in every AI deployment.
So, while generative AI poses some risks to customer trust, we can harness its power responsibly with the right approach. And maybe, just maybe, we can avoid serving burnt toast!
What is the outlook for generative AI in CX for 2024?
As we look towards 2024, the trajectory for generative AI, especially in the realm of Customer Experience, is undeniably upward.
- Overall View: To provide a bird’s-eye view, we’re witnessing an exponential rise in Gen AI deployments. If we refer to Gartner’s hype curve, which is a trusted barometer for technology adoption and expectations, Gen AI is swiftly moving from the ‘Peak of Inflated Expectations’ towards the ‘Plateau of Productivity.’ This suggests that the technology is maturing and finding its footing in real-world applications. This means that we can expect a rapid increase in the adoption and deployment of generative AI solutions in various domains and industries, especially in customer experience (CX). However, this also means there will be a lot of hype and unrealistic expectations about what generative AI can do and how easy it is to implement and scale.
- Innovators and Laggards: Like any technological revolution, there’s a spectrum of adoption. I think that in 2024, we will see a clear distinction between the innovators and the laggards in terms of generative AI adoption. The innovators will be those with a clear vision and strategy for leveraging generative AI to enhance their offerings and who are willing to experiment, fail, and learn from their mistakes. Conversely, we have the laggards, who are more cautious, often waiting to see proven results before jumping in. In 2024, the laggards will start the Gen AI journey, as 2023 proved the art of the possible.
- A Holistic AI Strategy: One thing will become abundantly clear by 2024: A piecemeal approach to AI won’t suffice. Organizations will realize that for AI to be truly transformative, the strategy must be comprehensive. 2024 will prove that a comprehensive AI strategy that is not siloed but holistic across the organization will be necessary for continually innovating and being relevant. Generative AI is not a standalone solution that can be plugged into any existing system or process. It requires a deep understanding of the customer’s needs, preferences, behaviors, business goals, values, and culture. It also requires close collaboration between different teams and functions, such as marketing, sales, product development, design, engineering, and data science.
In conclusion, Generative AI is not a magic bullet that can solve all the problems. It is a powerful tool that can augment human creativity and intelligence and enable new possibilities for customer engagement and satisfaction. However, it also comes with challenges and risks that must be carefully managed and mitigated. Therefore, I believe that in 2024, successful organizations will be those who can balance the opportunities and challenges of generative AI and who can integrate it into their overall strategy responsibly and ethically.