PwC's recent "Responsible AI Survey" of 1,001 US-based executives in business and technology roles reveals that 73% of respondents use or plan to use GenAI in their organizations.
Of those, slightly more are focusing their efforts solely on operational systems used by employees (AI: 40%; GenAI: 43%), while a slightly smaller share are targeting both employee-facing and customer-facing systems (AI: 38%; GenAI: 35%).
However, only 58% of the respondents have started a preliminary assessment of AI risks. Survey respondents were asked about 11 capabilities that PwC identified as “a subset of capabilities organizations appear to be most commonly prioritizing today.” These include:
Upskilling
Getting embedded AI risk specialists
Periodic training
Data privacy
Data governance
Cybersecurity
Model testing
Model management
Third-party risk management
Specialized software for AI risk management
Monitoring and auditing
According to the PwC survey, more than 80% of respondents reported progress on these capabilities. Only 11%, however, claimed to have implemented all 11, and even then PwC was skeptical: "We suspect many of these are overestimating progress."
PwC added that some of the markers for responsible AI can be difficult to manage, making it challenging for organizations to implement them fully. It further stated that data governance will be necessary to define AI models' access to internal data and to put guardrails around that access.
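To make that last point concrete, the kind of guardrail PwC describes can be as simple as a policy layer that is consulted before a model is handed any internal data. The sketch below is purely illustrative and not from the survey; every name in it (DataGovernancePolicy, "support-bot", the data-source labels) is hypothetical.

```python
# Illustrative sketch of a data-governance guardrail: each AI model is
# mapped to the internal data sources it is allowed to read, and every
# fetch is checked against that policy before data is returned.

class AccessDenied(Exception):
    """Raised when a model requests a data source it is not granted."""


class DataGovernancePolicy:
    """Maps each model ID to the set of internal data sources it may access."""

    def __init__(self):
        self._allowed = {}  # model_id -> set of data-source names

    def grant(self, model_id, source):
        self._allowed.setdefault(model_id, set()).add(source)

    def check(self, model_id, source):
        if source not in self._allowed.get(model_id, set()):
            raise AccessDenied(f"{model_id} may not read {source}")


def fetch_for_model(policy, model_id, source, store):
    """Guardrail: consult the governance policy before returning data."""
    policy.check(model_id, source)
    return store[source]


if __name__ == "__main__":
    # A support chatbot may read the FAQ corpus but not HR records.
    store = {"faq": ["How do I reset my password?"], "hr_records": ["..."]}
    policy = DataGovernancePolicy()
    policy.grant("support-bot", "faq")

    print(fetch_for_model(policy, "support-bot", "faq"))  # allowed
    try:
        fetch_for_model(policy, "support-bot", "hr_records")
    except AccessDenied as err:
        print("blocked:", err)
```

In a real deployment the policy would live in a central governance service and be audited, but the shape is the same: access is defined per model, and the guardrail sits between the model and the data.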