
How Companies Can Build Online Trust Amid Deepfakes And Agentic AI
Aaron Painter is the CEO of Nametag Inc., the identity verification company revolutionizing online account protection.
Whether you’re trying to get a refund for a purchase or meeting someone on a dating app, chances are you’ll eventually encounter agentic AI. Agentic AI takes generative AI (GenAI) several steps further. Instead of merely responding to user prompts, it acts as an independent “agent” that can plan and make decisions on its own, with little or no human instruction. For example, in March, Observe.AI launched human-sounding VoiceAI agents that can speak to customers to help alleviate some of the burden on human call center agents.
While agentic AI offers benefits to customers and companies, there is a downside: Like GenAI and deepfakes, agentic AI could inadvertently erode people’s trust in everyday interactions. After all, how do you know whether the agent you’re talking to is human or AI? Conversely, how does an AI agent verify that the person it is speaking to is really human before it, for example, helps that person reset a password?
Agentic AI trust issues are further amplified by data privacy concerns. Per Business Insider, Meredith Whittaker, president of Signal, said agentic AI “poses serious security risks to users” due to the large amount of data it needs to effectively do its job. Whittaker pointed out that for agentic AI to book concert tickets, make reservations or open an app and create a group chat, it may need access to data we’d prefer to keep private. With this level of access, traditional encryption could become ineffective, as the agentic AI would “almost certainly” process data through cloud servers.
How AI Is Impacting Trust
It used to be that when you spoke with someone on the phone, you could reasonably trust they were a real human, and it was relatively easy to confirm that the person on the other end was who they claimed to be. But now, with audio deepfakes and social engineering, many people may struggle to know whether they’re talking to the right person.
We’ve even seen GenAI successfully “hijack” and modify a phone conversation in real time. In an experiment by IBM, large language models (LLMs) successfully changed the details of a financial conversation between two human speakers and redirected funds to a fake account instead of the intended recipient. Instead of using a deepfake voice during the entire call, which is easier to detect, the experimenters “discovered a way to intercept a live conversation and replace keywords based on the context.” Whenever “bank account” was mentioned on the call, the LLM replaced the legitimate bank account number with a fake one.
Similarly, live video deepfakes have eroded trust in video calls. One financial firm lost millions after an employee was duped by deepfaked executives on a video call. In fact, a Deloitte study on deepfakes in the banking sector projects that fraud losses will jump from $12.3 billion in 2023 to $40 billion by 2027.
Restoring Confidence
Companies that employ AI agents should implement certain ethical standards around their use, such as ensuring the AI agent discloses in every interaction that it is not human.
Companies can also shore up their safety protocols. Consider Bumble, for example: On the dating app, users submit a photo of their government-issued ID to verify their identity. Their profile is then updated with a badge that lets other users know who is and isn’t verified.
Evaluating Identity Verification Tools
While using identity verification (IDV) to rebuild trust in online interactions is a step in the right direction, companies should keep in mind that not all IDV methods are equally reliable. My company provides IDV solutions, and I’ve found that because AI can convincingly mimic anyone’s voice or appearance, traditional know-your-customer (KYC) tools can fall short.
Deepfake detection solutions often use AI to identify synthetic media, such as voice clones, fake IDs and AI-generated selfie photos or videos. This approach, however, can create a perpetual cat-and-mouse game where defensive technologies are inherently reactive. These systems must continuously retrain their models using new examples of AI-generated content, keeping companies constantly on the back foot. Additionally, some IDV solutions only analyze data, not the data’s source, which can leave them vulnerable to digital injection attacks that circumvent the prescribed capture process in a way that’s extremely difficult to detect after the fact.
Given this, when evaluating IDV solutions, buyers should weigh three primary factors: assurance, integrations and deployment.
Assurance: For a high level of identity assurance, ask the solution provider whether the tool uses advanced security features, such as modern cryptography, to prevent the injection of deepfake IDs and selfies (a simplified sketch of this idea follows the list below).
Integrations: Examine whether the IDV company provides integrations that serve the specific enterprise applications and systems you are looking to protect.
Deployment: Thoroughly evaluate how much time and money it will take to deploy each system. Does the company offer ready-made solutions, or an API that you’ll need to integrate into your own front end and back end? Which model is appropriate for your particular use case?
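To make the assurance question concrete, here is a minimal sketch, in Python, of the kind of check a cryptographically hardened capture flow might perform: confirming that a selfie was signed by a trusted device key, so that media injected outside the prescribed capture process fails verification. The function, key handling and workflow are illustrative assumptions, not any vendor’s actual implementation.

```python
# A minimal, illustrative sketch of one cryptographic assurance check:
# verifying that a selfie capture was signed by a trusted device key, so that
# media injected outside the prescribed capture process fails verification.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def capture_is_authentic(image_bytes: bytes, signature: bytes,
                         device_public_key: ec.EllipticCurvePublicKey) -> bool:
    """Return True only if the capture was signed by the trusted device key."""
    try:
        device_public_key.verify(signature, image_bytes, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        # A signature mismatch suggests injected or tampered media.
        return False
```

The point of the sketch is the design choice: rather than trying to spot a fake after the fact, the system trusts only media that can prove, cryptographically, where and how it was captured.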
Going Beyond Technology
There has long been an adage in cybersecurity that security works best when it combines people, processes and technology. To build trust in the age of deepfakes and agentic AI, companies should adopt a similar approach. Beyond technology, there are people and process elements to consider as well.
Train your people—your customers, employees, executive leadership team, etc.—to understand the threats of deepfakes and agentic AI. I recommend running simulated attacks to make the threats feel real.
Additionally, build business processes that are resistant to deepfake and agentic AI threats. For example, require multiple layers of approval for large money transfers, and use out-of-band authentication to obtain that approval, as sketched below.
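As a rough illustration of that process control, here is a minimal sketch assuming a hypothetical approval threshold and two independent approvers. The console prompt stands in for a real out-of-band channel, such as a push notification or a callback to a known phone number.

```python
# A minimal sketch of a multi-approver, out-of-band authorization flow for
# large money transfers. The console prompt stands in for a separate channel
# (push notification, callback, hardware token) in a real deployment.
THRESHOLD = 50_000        # assumption: transfers above this need extra approvals
REQUIRED_APPROVALS = 2    # assumption: two independent approvers

def out_of_band_approval(approver: str, amount: float) -> bool:
    """Stand-in for a confirmation delivered over a channel separate from the request."""
    answer = input(f"{approver}: approve transfer of ${amount:,.2f}? [y/N] ")
    return answer.strip().lower() == "y"

def authorize_transfer(amount: float, approvers: list[str]) -> bool:
    """Release a transfer only after enough independent out-of-band approvals."""
    if amount <= THRESHOLD:
        return True  # routine transfers follow the normal process
    approvals = sum(out_of_band_approval(a, amount) for a in approvers)
    return approvals >= REQUIRED_APPROVALS

if __name__ == "__main__":
    if authorize_transfer(250_000, ["cfo", "controller"]):
        print("Transfer released.")
    else:
        print("Transfer blocked pending additional verification.")
```

The thresholds and approver roles are illustrative; what matters is that a deepfaked request on one channel cannot, by itself, move money without confirmation on another.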
Only by layering training and awareness, robust business processes and secure technology tools can companies truly build online trust.
Final Thoughts
Over the coming years, I believe agentic AI will continue straining the foundations of authentic interaction. On a business level, cybersecurity leaders and executives must work together to implement proactive, multi-layered protections throughout their customer and employee journeys.
And everyone involved in agentic AI development must consider both technological innovation and ethical standards. We cannot allow trust—the basis of any society—to fall by the wayside in the rush toward a future we haven’t fully conceived.