Study: Nearly Half of Companies Exclude Cybersecurity Teams When Developing, Onboarding and Implementing AI Solutions
Only 35 percent of cybersecurity professionals or teams are involved in the development of policy governing the use of AI technology in their enterprise, and nearly half (45 percent) report no involvement in the development, onboarding, or implementation of AI solutions, according to the recently released 2024 State of Cybersecurity survey report from ISACA, a global professional association advancing trust in technology.
In response to new questions in the annual study, sponsored by Adobe, which gathers feedback from more than 1,800 cybersecurity professionals on topics related to the cybersecurity workforce and threat landscape, security teams reported that they primarily use AI for:
- Automating threat detection/response (28 percent)
- Endpoint security (27 percent)
- Automating routine security tasks (24 percent)
- Fraud detection (13 percent)
“In light of cybersecurity staffing issues and increased stress among professionals in the face of a complex threat landscape, AI’s potential to automate and streamline certain tasks and lighten workloads is certainly worth exploring,” says Jon Brandt, ISACA Director, Professional Practices and Innovation. “But cybersecurity leaders cannot singularly focus on AI’s role in security operations. It is imperative that the security function be involved in the development, onboarding and implementation of any AI solution within their enterprise, including existing products that later receive AI capabilities.”