Editorial

AI as a persona: defining the new rules of engagement

As businesses embrace AI internally, a new challenge emerges: how to strategically engage with customers’ AI agents as distinct personas – by Sarah Peña.

Posted 21 August 2025 by Christine Horton


As AI adoption accelerates across industries, organisations will need to start grappling with an often-overlooked challenge: how do we interact with our customers’ AI agents? More fundamentally, should we be treating AI as distinct personas with defined capabilities and limitations?


While businesses are understandably focused on leveraging AI internally, from automating tasks to enhancing decision-making and deploying customer service chatbots, the external-facing aspect of AI interactions demands urgent attention and strategic thinking.

The emergence of AI-to-business interactions

We’re entering an era where AI agents will start to routinely interact with business systems on behalf of their users. Consider these scenarios:

  • A customer’s AI agent calling your contact centre at 10am to dispute a charge, armed with perfect recall of every previous interaction
  • AI agents engaging with your live chat systems, processing information faster than human agents can respond
  • Automated AI systems handling email correspondence with sophisticated conversational abilities

These interactions raise critical questions. What are your policies for these engagements? How do they impact frontline processes and terms of service? When something goes wrong, where does liability rest?

Beyond internal AI: recognising external AI personas

Many businesses are already working with AI as internal users within their systems, defining capabilities, data requirements, and staff interactions. However, fewer are explicitly modelling external AI entities as distinct personas requiring specific consideration.

This gap extends to three key areas:

Internal AI personas

For AI tools supporting your business operations, are you defining them with the same rigour as employee personas? Consider their ‘goals,’ ‘pain points,’ and ‘behaviours’ from a systems perspective. An AI customer service assistant, for instance, needs clearly defined boundaries around escalation triggers, data access levels, and decision-making authority.
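One way to make such a persona definition tangible is as a machine-readable policy object that systems can enforce, rather than a slide in a strategy deck. The sketch below is illustrative only: the field names, the £50 authority limit, and the escalation intents are assumptions, not a standard.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AIPersona:
    """Illustrative persona definition for an internal AI assistant.

    All field names and values are hypothetical; adapt them to your
    own governance model.
    """
    name: str
    data_access_levels: frozenset   # e.g. {"public", "account_basic"}
    decision_authority_limit: float # max value it may approve, in GBP
    escalation_triggers: tuple      # intents that must go to a human

    def must_escalate(self, intent: str, value: float) -> bool:
        # Escalate when the intent is flagged, or the transaction value
        # exceeds the persona's decision-making authority.
        return (intent in self.escalation_triggers
                or value > self.decision_authority_limit)


# Example: a customer-service assistant with tight boundaries.
cs_assistant = AIPersona(
    name="cs-assistant",
    data_access_levels=frozenset({"public", "account_basic"}),
    decision_authority_limit=50.0,
    escalation_triggers=("legal_complaint", "account_closure"),
)

print(cs_assistant.must_escalate("refund_request", 20.0))  # within authority
print(cs_assistant.must_escalate("legal_complaint", 0.0))  # always escalates
```

Encoding the boundaries this way means an audit can check what the assistant was permitted to do, not just what it happened to do.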

Customer AI personas

As customers increasingly deploy AI agents for routine interactions, your systems must be prepared to engage with these entities effectively while maintaining security and service standards.

Partner and supplier AI integration

Third-party AI systems will inevitably interact with your processes. How do you design workflows to integrate with external AI entities while maintaining control and accountability?

Defining the boundaries: what AI can and cannot do

Beyond understanding AI capabilities, organisations must establish clear limitations and identify potential risk areas:

Security and access control

  • Should a customer’s AI agent access sensitive data if it passes current authentication protocols?
  • What additional verification layers might be necessary for AI-initiated requests?
  • How do you prevent AI systems from circumventing standard security processes through sophisticated social engineering?
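These questions suggest a layered policy: standard authentication remains necessary but not sufficient when the caller is an AI agent and the resource is sensitive. The sketch below assumes a hypothetical step-up check (an out-of-band confirmation from the human principal); the resource names and check names are invented for illustration.

```python
# Hypothetical step-up policy: AI-initiated requests to sensitive data
# require extra verification, even after standard authentication passes.
SENSITIVE_RESOURCES = {"payment_methods", "personal_documents"}


def required_checks(resource: str, caller_is_ai: bool,
                    authenticated: bool) -> list:
    """Return the ordered list of checks a request must clear."""
    checks = []
    if not authenticated:
        checks.append("standard_authentication")
    if resource in SENSITIVE_RESOURCES:
        checks.append("transaction_pin")
        if caller_is_ai:
            # Extra layer for AI agents: confirm with the human principal
            # out-of-band before releasing sensitive data.
            checks.append("human_principal_confirmation")
    return checks


print(required_checks("payment_methods", caller_is_ai=True, authenticated=True))
```

The design point is that "is the caller an AI?" becomes an explicit input to authorisation, rather than something the policy silently ignores.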

Accountability and liability

  • When an AI agent makes commitments or provides incorrect information, who bears responsibility: the customer deploying the AI, or your business for engaging without clear boundaries?
  • How do you establish audit trails for AI-driven decisions that could lead to financial or legal consequences?
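An audit trail for AI-driven decisions needs to be tamper-evident, not just a log file. One common pattern is hash chaining, where each record includes a hash of its predecessor so retrospective edits break the chain. This is a minimal sketch of that pattern; the field names are illustrative assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone


def append_audit_entry(log: list, agent_id: str, action: str,
                       detail: dict) -> dict:
    """Append a tamper-evident audit record (hash-chained to the last one)."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,   # which AI agent acted
        "action": action,       # e.g. a commitment it made
        "detail": detail,
        "prev_hash": prev_hash,
    }
    # Hash the canonical JSON form so any later edit is detectable.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry


log = []
append_audit_entry(log, "customer-agent-42", "commitment", {"refund": 25.0})
append_audit_entry(log, "customer-agent-42", "info_provided", {"topic": "billing"})
print(log[1]["prev_hash"] == log[0]["hash"])  # chain links verify
```

In production this would live in an append-only store, but even this shape makes "who committed to what, and when" answerable after the fact.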

Transaction complexity and volume management

  • Which transactions or discussions are too sensitive or complex for AI handling, regardless of capabilities?
  • How do you prevent legitimate AI interactions from overwhelming your systems or creating unintended bottlenecks?
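Because AI agents can generate requests far faster than humans, legitimate traffic alone can overwhelm a channel. A token-bucket limiter is a standard way to cap burst volume per agent; the capacity and refill rate below are illustrative values, not recommendations.

```python
import time


class TokenBucket:
    """Simple per-agent token-bucket rate limiter (illustrative values)."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens in proportion to elapsed time, up to capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


bucket = TokenBucket(capacity=5, refill_per_sec=1.0)
results = [bucket.allow() for _ in range(7)]
print(results)  # the burst beyond capacity is rejected
```

Applying a limiter like this per declared AI agent lets you serve automated traffic without letting one agent starve human customers of capacity.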

Deception and manipulation prevention

  • How do you identify when an AI agent’s conversational style or ‘personality’ might be designed to circumvent standard processes?
  • What safeguards prevent subtle forms of system manipulation through AI interactions?

Proactive integration

These considerations must be embedded early in organisational planning, during requirements gathering, process mapping, service design, governance frameworks, and system architecture. By routinely asking “What if AI is handling this interaction?” and “What are our red lines for AI engagement?” organisations can identify both challenges and opportunities while implementing necessary safeguards.

The conversation about AI persona management is quietly emerging across industries. Organisations that address these questions proactively will be better positioned to capture the benefits of AI-driven external interactions while mitigating associated risks.

Taking action

The time for reactive approaches has passed. Individuals can deploy basic AI agents to interact with businesses far faster than organisations can prepare adequate policies and safeguards to handle them. I could have a functional AI agent calling your contact centre within days, while your business might need weeks or months to develop appropriate response protocols.

Organisations should begin:

  • Auditing current processes to identify AI interaction points
  • Developing AI persona definitions and interaction protocols
  • Establishing clear policies for AI-to-business engagements
  • Training staff and systems to recognise and appropriately handle AI interactions
  • Creating governance frameworks that address liability and accountability

The future of customer engagement will be increasingly mediated by AI. The organisations that define the rules of this engagement today will lead tomorrow’s digital marketplace.

What steps is your business taking to prepare for AI-mediated customer interactions? The conversation starts now.


If you are interested in this article, why not register to attend our Think AI for Government conference, where digital leaders tackle the most pressing AI-related issues facing government today?

