A vast majority of professionals in Pakistan are integrating artificial intelligence tools into their daily routines, yet many remain untrained in how to use them securely and responsibly, according to a new study by Kaspersky titled “Cybersecurity in the Workplace: Employee Knowledge and Behavior.” The report highlights that 86% of professionals across various sectors in Pakistan now rely on AI tools for their work. However, only 52% have undergone any formal cybersecurity training related to AI, revealing a significant gap between technological adoption and security awareness.
The findings show that while 98% of respondents understand the concept of generative artificial intelligence, practical knowledge and safe use remain limited. Many professionals have incorporated AI tools into their regular workflows, using them for writing and editing content, creating emails, generating images and videos, and analyzing data. Despite this widespread adoption, 21% of professionals admitted to having received no AI-related training at all. Among those who did, 66% were trained in effective prompt creation and usage, while only half received guidance on cybersecurity risks associated with AI systems, such as data leaks, prompt injection, and misuse of neural networks. This imbalance between utility and security awareness underscores an emerging challenge for organizations adopting AI without proper training programs or governance frameworks in place.
The report also highlights that AI use has become officially accepted in most workplaces. Around 81% of employees said generative AI tools are permitted within their organizations, while 15% reported a ban and 4% were unsure of company policy. However, Kaspersky warns that many employees use AI without adequate oversight, contributing to what is commonly referred to as “shadow IT.” This occurs when technology is adopted independently by staff, often outside corporate approval processes, creating potential vulnerabilities in the organization’s data environment. To counter this, the report recommends that companies establish clear and comprehensive policies governing AI use. These policies should outline approved tools, define data sensitivity levels, and specify usage limitations in critical or confidential business functions.
Rashed Al Momani, General Manager for the Middle East at Kaspersky, said that both complete restrictions and unrestricted access to AI can be counterproductive. He advised that a balanced strategy, allowing varying levels of AI access depending on the sensitivity of departmental data, ensures both flexibility and security. When complemented by structured training programs, such policies can help organizations leverage AI efficiently while maintaining strong cybersecurity standards. Kaspersky further recommends that businesses provide dedicated AI security training to all employees and advanced courses to IT professionals on identifying and defending against AI-related threats. The company’s Automated Security Awareness Platform includes specialized modules on AI safety, while its Large Language Models Security course focuses on protecting systems that integrate generative AI tools.
The cybersecurity firm also advises that organizations deploy robust endpoint security solutions, such as Kaspersky Next, to safeguard devices from phishing, malware, and fake AI tools. It urges companies to build comprehensive AI-use frameworks aligned with Kaspersky’s best practices for responsible and secure implementation. The report makes clear that Pakistan’s workforce is rapidly embracing AI, but the pace of technological adoption must be matched by greater focus on training, governance, and data protection to ensure safe and sustainable integration across industries.
Follow the SPIN IDG WhatsApp Channel for updates across the Smart Pakistan Insights Network covering all of Pakistan’s technology ecosystem.