As artificial intelligence adoption rises and cyber threats grow more sophisticated, security leaders across Asia Pacific are optimistic about using AI agents to bolster their defences. However, many still face critical gaps in data readiness and compliance measures, according to new findings from Salesforce’s State of IT report.
The global study, based on responses from over 2,000 IT security leaders (including 588 in Asia Pacific), found that while all APAC respondents believe AI agents could help address at least one of their security concerns, only half feel confident their organisations are ready to support these technologies effectively.
Data foundations and governance remain weak links
AI agents have the potential to automate routine security tasks, allowing IT teams to focus on more complex problem-solving. Yet their success depends on reliable data foundations and proper governance. In Asia Pacific, 50% of security leaders say their data infrastructure is not yet ready to support agentic AI, and 57% are not fully confident that their organisations have the right guardrails in place for responsible deployment.
Salesforce’s Gavin Barfield, Vice President & Chief Technology Officer for ASEAN, emphasised the importance of data governance. “Organisations can only trust AI agents as much as they trust their data. When 62% of security leaders in Asia Pacific report that customers remain hesitant about AI adoption due to security and privacy concerns, it’s clear that robust data governance isn’t optional, but essential. IT teams that establish strong data governance frameworks will find themselves uniquely positioned to harness AI agents for their security operations all while ensuring data protection and compliance standards are met,” he said.
Despite the challenges, CIOs are starting to invest in the foundations needed to support AI agents. A separate CIO survey revealed that companies are allocating four times more budget to data infrastructure and management than to AI technology itself, indicating a focus on long-term readiness.
AI deployment runs up against compliance hurdles
Security teams are also under pressure to ensure AI adoption complies with fast-evolving global privacy laws. While 82% of IT security leaders see AI agents as tools to improve regulatory compliance, only 52% are fully confident they can deploy these agents while meeting all regulatory requirements. Meanwhile, 85% report that their compliance processes remain largely manual, raising the risk of errors.
Compounding the issue is the complexity of international and industry-specific regulations, which continue to evolve. Without greater automation in compliance workflows, companies will find it difficult to scale AI responsibly.
Trust, transparency and adoption remain works in progress
Even as AI gains momentum, a trust gap persists among consumers and within organisations. A recent consumer survey showed that 61% of APAC respondents believe AI advancements make trustworthiness more critical, and 64% say they now trust companies less than they did in 2023.
Within IT teams, confidence in AI output is still developing. About 51% of security leaders are not fully confident in the accuracy or explainability of AI-generated results. Additionally, over half of the respondents (54%) do not yet offer full transparency into how customer data is used in AI applications, and an equal percentage admit they have not finalised ethical guidelines for AI usage.
Nevertheless, the use of AI agents is on the rise. Currently, 45% of IT security teams in the region use them in daily operations, a figure expected to reach 74% within two years. Security teams expect agents to support a wide range of activities, from threat detection to performance auditing of AI models.
Strengthening foundations for the agentic future
Despite the clear momentum, many organisations are still not fully prepared. Only 57% of IT security leaders in Asia Pacific believe their current security and compliance practices are ready to support the implementation of AI agents. These findings underline the urgent need for stronger data management, clearer ethical frameworks, and more streamlined compliance processes as businesses move towards a future shaped by agentic AI.