The boom in generative AI is exposing small to medium enterprises (SMEs) to a unique set of problems—AI-powered cyberattacks—leaving budding businesses overwhelmed.
In a recent US Congressional hearing, a group of cybersecurity experts from leading organizations including IBM, Hitachi, Protect AI, and SentinelOne highlighted the escalating scale and efficacy of cyberattacks facilitated by generative AI.
The experts voiced concerns about the effects on small and medium-sized enterprises (SMEs), attributing the increase in threats to the growing use of AI applications by both private-sector businesses and cybercriminal organizations.
A losing battle
According to an article by TechMonitor, SentinelOne’s chief trust officer, Alex Stamos, emphasized the vulnerability of SMEs to cyberattacks. Smaller businesses, he said, were finding it hard to defend themselves against hackers.
“We’re kind of losing” the battle against cyber threats, Stamos said.
He pointed to hacker groups like BlackCat and LockBit, which he said now possess specialized capabilities previously associated with state-sponsored entities such as Russian intelligence agencies.
A Sage survey released in October highlighted that almost half of SMEs (48%) had experienced at least one cyber incident in the past year.
Stamos also warned that future malware could detect vulnerabilities in systems and take down grids, even air-gapped ones.
The expert also criticized recent incident reporting requirements imposed by the Securities and Exchange Commission (SEC), claiming that the mandated 48-hour reporting window complicates effective cyber defense.
“Usually, at 48 hours, you’re still in a knife fight with these guys,” said Stamos.
Stamos pointed out a recent incident where the cybercriminal gang BlackCat exploited the reporting process, announcing that it had reported a hacked company to the SEC for failing to disclose the breach promptly.
“SolarWinds moments” could repeat themselves
Protect AI CEO Ian Swanson called on industry leaders to take concerted steps to address systemic security issues in AI and machine learning (ML) services. To uncover security vulnerabilities specific to ML products and services, Swanson suggested developing a “machine learning bill of materials” standard.
“Manufacturers and consumers of AI systems must put in place systems to provide the visibility they need to see threats deep inside their ML systems and AI applications quickly and easily,” said Swanson.
Referring to the 2020 software supply chain attack, he warned of a future “SolarWinds moment” for ML applications and called for stronger federal investment in best practices and standardized security requirements for open-source AI/ML software.
Focus on increasing cybersecurity education
“Bad things are going to happen; when you look at the solutions that are in the marketplace in general, the majority of them are on the front end of that loop,” IBM Consulting’s vice president for global cybersecurity, Debbie Taylor Moore, said.
“The back end is where we really need to look toward how we prepare for the onslaught of how creatively attackers might use AI.”
She also emphasized the need for a focus on cybersecurity education and on the resilience of businesses targeted by hackers, adding that policymakers have a critical role to play in making this happen.
Moore stressed that while cyber threats are inevitable, the key lies in how a company rebuilds after a data breach.