Quarter of Organizations Halt Generative AI Over Privacy Concerns, Cisco Reports
A significant share of organizations are stepping back from Generative AI (GenAI) over privacy and data-security concerns, according to a Cisco study. In a global survey, 27% of organizations said they have banned its use, at least temporarily.
Insights from the Cisco Data Privacy Benchmark Study
The 'Cisco 2024 Data Privacy Benchmark Study' highlights organizations' cautious stance towards GenAI. Drawing on responses from 2,600 privacy and security professionals in countries including India, Australia, Brazil, China, France, Germany, Italy, Japan, Mexico, Spain, the U.K., and the U.S., the survey offers a comprehensive view of the GenAI landscape.
The report notes that businesses face trust challenges when deploying AI, yet are also seeing tangible returns on their privacy investments. Privacy has moved beyond a regulatory box-ticking exercise and is now viewed as fundamental to managing an organization's legal and intellectual-property rights.
Risk Management and Limitations on GenAI
To address the risks of GenAI, organizations are employing a range of controls: limiting what data can be entered into these systems (63%), restricting which GenAI tools employees may use (61%), and, for now, banning GenAI applications outright (27%). Despite these controls, the survey found many instances in which sensitive information, such as employee details or confidential company data, was entered into GenAI systems.
Dev Stahlkopf, Cisco's Chief Legal Officer, emphasized that GenAI poses distinct challenges and calls for new techniques to manage the associated data and risk. The survey found that over 90% of respondents agree GenAI demands new approaches to governance and risk management.