JPMorgan Chase warns: AI launch speed is outpacing security. This California lawmaker has a bill to fix that.
In an extraordinary callout to the AI, cybersecurity, and financial industries, JPMorgan Chase Chief Information Security Officer Pat Opet published an open letter earlier today warning that software-as-a-service (SaaS) companies are adopting AI-powered systems at a speed that leaves the entire financial system vulnerable to “potentially catastrophic systemwide consequences.”
“Convenience can no longer outpace control,” wrote Patrick Opet, CISO of the nation’s largest banking firm.
SecurityWeek’s Ryan Naraine noted that the missive was timed to coincide with the opening of the annual RSA Conference, a major gathering of information security professionals in San Francisco. The open letter, Naraine wrote, “landed like a sobriety test.”
JPMorgan Chase’s Opet wrote:
“Fierce competition among software providers has driven prioritization of rapid feature development over robust security. This often results in rushed product releases without comprehensive security built in or enabled by default, creating repeated opportunities for attackers to exploit weaknesses.
“The pursuit of market share at the expense of security exposes entire customer ecosystems to significant risk and will result in an unsustainable situation for the economic system.”
When Opet says “entire customer ecosystems,” he’s talking about the personal financial data of millions of banking customers.
What this means, and why it’s important
Steve Wimmer, Transparency Coalition’s policy and technical advisor, has been working on this exact issue with lawmakers in California. He notes that California State Sen. Josh Becker’s SB 468 specifically addresses these security risks and vulnerabilities for AI systems that make consumer decisions based on personal information.
“The JPMorgan open letter recognizes that SaaS suppliers have unfettered access to critical company infrastructure and consumer data,” Wimmer explained, “so these AI platforms as described in SB 468 are extremely valuable targets for bad actors.”
“These life-altering systems handle vast amounts of personal data.”
Sen. Josh Becker, testifying on behalf of his AI cybersecurity bill, SB 468.
More specifically, Opet calls for sophisticated authorization methods, advanced detection capabilities, and proactive measures to prevent the abuse of interconnected systems.
Wimmer noted: “This is consistent with the language in Sen. Becker’s bill, which requires developers and deployers to follow industry best practices and carry out periodic reviews of infosec protocol implementation.”
SB 468 and what it would do
Sen. Becker’s bill holds AI developers and deployers accountable for the security of consumers’ personal information. “SB 468 ensures that businesses using high-risk AI systems to process personal data have strong security measures in place,” Becker noted during a committee hearing last week.
The bill would impose a duty on a covered deployer (a business that deploys a high-risk artificial intelligence system that processes personal information) to protect personal information held by the covered deployer.
From the bill’s legislative overview:
The bill would require deployers whose high-risk artificial intelligence systems process personal information to develop, implement, and maintain a comprehensive information security program, as specified, that contains administrative, technical, and physical safeguards that are appropriate for, among other things, the covered deployer’s size, scope, and type of business. The bill would require the program to meet specified requirements, including, among other things, that the program incorporate safeguards that are consistent with the safeguards for the protection of personal information and information of a similar character under applicable state or federal laws and regulations.
Wimmer notes that the bill would require written information security programs for businesses using high-risk AI systems that process personal data. “That includes designating a security manager, conducting regular risk assessments, training employees, restricting physical access to personal information, overseeing third-party access, and having clear incident response and post-breach procedures. These are all common-sense best practices.”
Violations would be treated as deceptive practices under California’s Unfair Competition Law, and the California Privacy Protection Agency would be empowered to adopt regulations to keep standards current.