Featured February 12, 2023
Humans Are Insured Against Aliens, but Not Against AI… Why?
If investments in and trust in AI are building up, insurance should follow suit at some point. More so because robots can cause accidents, and can create a new kind of accountability for enterprises that use them instead of humans. Yet insurance here remains largely uncharted territory.
UFO kidnappings, bad weddings, ghosts, moustaches and taste-buds. Not only is there someone somewhere asking for insurance for every strange possibility but someone somewhere providing it too.
And yet, we are far from figuring out questions like: who is liable for the death of that young bride-to-be who was crushed by a robot's error in an automotive factory in Alabama? Or for a worker trapped among confused robots in the world's biggest e-commerce warehouse? Or a welder who fractured his skull in a factory in Chakan because he forgot to wear a helmet near a robotic machine?
AI’s liability – or penalty
Incidentally, as per some media reports, the serious injury rate in the world's top e-commerce player's warehouses is higher where humans work alongside robots than in warehouses without robots.
So why is AI being considered only as an enabler and not as a risk area, per se, by most insurers? After all, we have already started to witness accidents caused by robots. To add to that, humans can suffer not just unemployment, displacement and augmentation but also workplace-injury-risks when bots arrive in their factories and offices. Or is that a Gray Swan still far away somewhere?
Before we think of insurance, we need to ask some tough questions – What is a robot? Who takes the blame for it? Who owns it? Is cyber-insurance enough?
What will complicate or simplify matters here is how one defines a robot, as the Swiss Re paper argues. Policy language will also be crucial for disputing parties when they seek guidance about coverage. Insurers, too, need to sharpen intent and exposure. There is also time to flesh out who will bear liability when there are multiple contributors to a robot (manufacturers, software designers, operators, data-service providers etc.). What will kick in – owner liability, agency theories, traditional underwriting models or the moth-balled corporate legal-entity theories that insurers have used so far?
Very few insurers have started to talk about, and think of, solutions in this emerging but unfamiliar terrain. Munich Re and Swiss Re are the top ones that come up on the radar here.
There is a solution called aiSure in Munich Re’s stable of insurance solutions. As Irmgard Joas, Spokesperson, Group Media Relations, Munich Re explains,
“Munich Re helps to insure the performance of AI solutions innovatively by e.g. absorbing risks of AI underperformance. Munich Re backs the performance guarantee of companies towards their clients.”
According to Munich Re reports, modelling risk related to robots causing accidents is a new field for insurance risk management. "It is strongly related to the question of insurability of algorithms, which represent the fundamentals for robot actions."
A Swiss Re paper rightly reminded us that – “Advanced robotics is going to thrust upon insurers a world that is extremely different from the one they sought to indemnify in the 20th century. And roughly 30% of leading organizations will create a chief robotics officer role or a similar role for their business in the next two years. Ready or not: the robots are here and more are coming.” It explains how more and more robots introduce new coverage and/ or liability issues for nearly every line of business in insurance.
So let’s ask just two questions for now.
We are entering the wormhole to a new future. Are we insured against possible mishaps?
Whose collar to pull?
Indranil Bandyopadhyay, Principal Analyst, Financial Services, Insurance, Data Science and AI at Forrester, offers an objective comparison between humans and robots here. "In general, the error rate of humans can be three to six errors per hour. Mechanical robots are slightly better off that way. AI and robotic solutions are emerging fields and should be treated with cognizance of their novelty and not merely with a dystopian view. There's always a probability of something going wrong. Some things do not work as envisaged. That's where insurance and compensation for AI-failure can come in. I am aware of only one organization – Munich Re – in that context. It's again something that would need a 'Horses for Courses' mindset."
"There's a big difference between general cyber insurance and AI/ML insurance. Cyber insurance covers failures of digital systems, such as business interruptions, and information security and privacy liability breaches," notes Suresh Pokhriyal, Vice President, Xceedance, as he explains why AI/ML-specific insurance policies are still in their early stages, and why, as the use of these technologies grows, more businesses will likely need to purchase coverage.
Pokhriyal avers:
“What will be covered by AI/ML-specific insurance policies is still unknown. Still, the policies will probably protect companies from losses related to data corruption, model theft, and adversarial attacks.”
First #ChatGPT answer which made me pause. pic.twitter.com/nbc9uRoWlA
— Kevin Schawinski (@kevinschawinski) January 12, 2023
What to ask for – and under which tab?
AI-related insurance can be under various areas like Commercial General Liability, Product Liability, Employment Practices Liability, Technology Errors and Omissions, Workers’ Compensation, Cyber Coverage, Professional Liability, and Directors and Officers Liability, and, of course, standalone robotics policies.
The Swiss Re paper pointed out that – “Bundled or hybrid policies that include many component coverages are attractive as one-stop offerings because insureds often prefer broad coverages (vs. numerous standalone policies). Bundled offerings can simplify purchasing and help reduce an insured’s risk of insurance gaps.”
These contours will become clearer once we have better and more precise standards and guidelines for this new field. In their 2021 report 'AI Accidents: An Emerging Threat', Zachary Arnold and Helen Toner from the Center for Security and Emerging Technology, Georgetown University, outline how policymakers can help reduce these risks. Policymakers should, among other things, invest in AI standards development and testing capacity, which will help develop the basic concepts and resources needed to ensure AI systems are safe and reliable, they point out.
Work on standards is in progress, with proposals from the International Organization for Standardization (ISO), the American National Standards Institute (ANSI) and the Robotic Industries Association (RIA).
It looks like there is a long way to go to capture this side of AI. And if we do that soon, we will also remove a lot of the cynicism around robots. 'The monster we do not know is always scarier than the one we can sketch.' Right?
That's why Bandyopadhyay strongly recommends that we be excited about technology.
"Every industrial revolution has been plagued by a cynical view. Let's not succumb to the idea that 'machines will eat us'. Robots are consistent, efficient, precise; and they lead to lower production costs, and better economics, eventually."
AI
ChatGPT’s Bogus Citations Land US Lawyer in Hot Water
A lawyer in the United States is facing disciplinary action after his law firm used popular AI chatbot ChatGPT for legal research and cited fake cases in a lawsuit.
Steven A. Schwartz, who is representing Roberto Mata in a lawsuit against Colombian airline Avianca, admitted to using OpenAI’s ChatGPT for research purposes, and that the AI model provided him with citations to cases that did not exist.
Mata is suing Avianca for a personal injury caused by a serving cart in 2019, claiming negligence by an employee.
Also read: Opera Unveils GPT-Powered AI Chatbot Aria
Bogus all the way
According to a BBC report, the matter came to light after Schwartz, a lawyer with 30 years' experience, used these cases as precedent to support Mata's case.
But the opposing counsel flagged the ChatGPT-generated citations as fake. US District Court Judge Kevin Castel confirmed six of them as non-existent. He demanded an explanation from Schwartz, an attorney with New York-based law firm Levidow, Levidow & Oberman.
“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” Judge Castel wrote in a May 4 order.
“The court is presented with an unprecedented circumstance.”
The supposed cases include: Varghese v. China South Airlines, Martinez v. Delta Airlines, Shaboon v. EgyptAir, Petersen v. Iran Air, Miller v. United Airlines, and Estate of Durden v. KLM Royal Dutch Airlines, none of which appeared to exist to either the judge or the defense.
Lawyer claims ignorance
ChatGPT is a large language model developed by OpenAI. Launched in November 2022, the AI is trained on vast amounts of data from the internet and can perform a variety of tasks such as generating text, translating languages, writing poetry, and even solving difficult math problems.
But ChatGPT is prone to “hallucinations” – tech industry speak for when AI chatbots produce false or misleading information, often with confidence.
In an affidavit last week, Schwartz said he was "unaware of the possibility that its [ChatGPT's] content could be false." He claimed to have never used ChatGPT prior to this case, and said he "greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity."
A lawyer used ChatGPT to do "legal research" and cited a number of nonexistent cases in a filing, and is now in a lot of trouble with the judge 🤣 pic.twitter.com/AJSE7Ts7W7
— Daniel Feldman (@d_feldman) May 27, 2023
The career attorney now faces a court hearing on June 8 after accepting responsibility for not confirming the authenticity of the ChatGPT sources. Schwartz was asked to show cause why he shouldn’t be sanctioned “for the use of a false and fraudulent notarization.”
ChatGPT’s confident lies
According to the BBC report, Schwartz's affidavit contained screenshots confirming his chats with ChatGPT.
Schwartz asked the chatbot, “is varghese a real case?”, to which ChatGPT responded “yes, [it] is a real case.” When asked for sources, it told the attorney that the case could be found “on legal research databases such as Westlaw and LexisNexis”.
Again, the attorney asked: “Are the other cases you provided fake?” ChatGPT responded “No”, adding that the cases could be found on other legal databases. “I apologize for the confusion earlier,” ChatGPT said.
“Upon double-checking, I found the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis. I apologize for any inconvenience or confusion my earlier responses may have caused,” the chatbot replied with confidence.
Cryptocurrencies
Hong Kong Police Launch Metaverse Platform to Fight Cyber Crime
The Hong Kong Police Force cybersecurity unit has launched a metaverse platform, CyberDefender, to promote metaverse crime prevention and highlight the risks associated with Web3. The initiative will equip citizens with skills and strategies relevant to tackling technology-related crimes in the digital age.
The city is also ramping up its regulatory efforts to prevent criminals from using crypto to launder money.
Also read: UK Police Record Child Abuse in the Metaverse
To mark the launch, the police force organized an inaugural event titled “Exploring the Metaverse” within the virtual realm.
This initiative aims to raise public awareness of the potential risks linked to the metaverse and Web3, at a time when digitalization is growing fast and gaining traction all over the world.
The launch event took place across three virtual venues and was organized on the newly-launched platform with the aim of engaging participants in proactive conversations about ensuring safety within this virtual realm.
During the event, Chief Inspector Ip Cheuk-yu from the Cyber Security and Technology Crime Bureau (CSTCB) presented on the dangers associated with Web3 and urged the public to exercise caution.
Metaverse a breeding ground for criminals
Reports of verbal and sexual harassment within VR games surfaced last year. Campaigners later said an avatar of a 21-year-old researcher was sexually assaulted on Meta's VR platform Horizon Worlds.
"All crimes in the cyberspace could also happen in the metaverse, such as investment frauds, unauthorized access to systems, theft and sexual offenses," said the chief inspector.
UK police forces also recorded 45 cases of child abuse in the metaverse, while 30,925 individual offences involving indecent images of children on social media platforms were recorded in 2021-2022, according to figures from the National Society for the Prevention of Cruelty to Children (NSPCC).
The chief inspector further emphasized that the metaverse presents potential dangers such as hacking and theft of digital assets by modern cybercriminals.
“The decentralized nature of virtual assets in Web3 may also increase the likelihood of cybercriminals targeting endpoint devices, virtual asset wallets and smart contracts,” he added.
Attendees at the event were briefed on the advances made in combating crypto crime and the ongoing efforts to mitigate its impact, gaining valuable insight into the evolving landscape of cybercrime and the steps taken to curb illicit activities involving digital assets.
Increase in cybercrimes
In 2022 alone, the city witnessed a staggering 2,336 virtual asset-related crimes, according to the Hong Kong Police Force in a press release that accompanied the launch.
The incidents resulted in financial losses of $1.7 billion for victims. Figures from the police force also show that 663 cases of a similar nature were reported during the first quarter of 2023 alone.
These losses amounted to $570 million, an alarming increase of 75% compared to the same period last year. The police stated that most of the cases involved virtual asset investment.
“Criminals took advantage of the public’s lack of knowledge about virtual assets and lured them into non-existent investments,” they warned.
According to the police, such figures underscored the urgent need for proactive measures to address the rising trend in virtual asset-related crimes and protect individuals from significant financial harm.
City gets tough on money laundering
Concurrent with the introduction of the new metaverse platform, the Hong Kong Securities Regulatory Commission (HKSRC) released revised anti-money laundering (AML) guidelines.
The guidelines outline the tactics employed by offenders to launder money through digital assets and offer comprehensive measures for financial institutions to shield themselves from illicit engagements. Changes include enhanced Know Your Customer (KYC) and due diligence requirements.
Enforcing the enhanced KYC rules means Hong Kong is stepping up efforts to prevent dirty money from flowing through the city, which will also make it less attractive for criminals to use crypto for their illicit transactions.
Under the updated guidelines, institutions that facilitate crypto transactions valued at 8,000 RMB or more must collect identifying information about both sender and receiver.
International efforts
The increase in cyber-related crimes is pushing authorities to aggressively tackle the problem and raise awareness among the public.
Aside from Hong Kong, other jurisdictions adapting their AML guidelines to keep up with the use of digital assets by criminal networks include Japan, which recently announced stricter AML rules for crypto transfers. The country will specifically impose what is known as the “travel rule,” whereby exchanges must ensure details about the sender are shared with other parties.
If effective, efforts to fight crime are expected to be as international as the criminal networks themselves. Last month, reports suggested the Internal Revenue Service (IRS) would deploy cyber agents internationally to investigate the use of crypto in financial crimes.
Business
Microsoft Accuses CMA of Irrationally Blocking $68.7bn Activision Takeover
Microsoft has blamed Britain's Competition and Markets Authority (CMA) for "irrationally" blocking its $68.7 billion takeover of Call of Duty video game maker Activision.
The tech giant announced its plan to acquire Activision to “bring the joy and community of gaming to everyone, across every device,” back in January 2022. The proposed takeover aimed to bring Activision’s popular franchises, like Call of Duty, under Microsoft’s umbrella.
“Microsoft will acquire Activision Blizzard for $95.00 per share, in an all-cash transaction valued at $68.7 billion, inclusive of Activision Blizzard’s net cash,” stated the company.
Read Also: Google Opens Up Access to its Search Labs with Generative AI
However, its bid faced regulatory obstruction amid concerns over industry consolidation. The deal’s block provoked Microsoft to appeal, heightening anticipation for the outcome and its potential implications on the gaming landscape.
The regulatory body blocked Microsoft’s takeover of Activision in April, saying it would encourage market monopolies and stifle competition in the growing video game streaming market.
EU offers favorable conditions for businesses
The company has accused the regulator of making “fundamental errors” while blocking its deal.
In its filing with the Competition Appeal Tribunal, Microsoft claimed the CMA had not taken "proper account of three long-term commercial agreements which Microsoft had entered into with the other party."
After the ruling, Microsoft president Brad Smith slated the regulator, suggesting the decision conveyed a “clear message” that the European Union (EU) offered more favorable conditions for starting a business in comparison to Britain.
Additionally, Activision, which is also the creator of the popular mobile game Candy Crush, accused the UK of having an unwelcoming business environment, stating that it was “closed for business.”
Notably, while the CMA has halted the acquisition, the EU has given the merger the green light.
Need for broader understanding
Chancellor Jeremy Hunt expressed his belief that regulators should understand their “wider responsibilities for economic growth” following the blockage of the deal.
"I would not want to undermine that at all, but I do think it's important all our regulators understand their wider responsibilities for economic growth," stated Hunt.
One of the reasons companies like Microsoft and Google are interested in investing in the UK is due to the presence of independent regulators that are not influenced by politicians, argued Hunt.
‘Takeover won’t be unfair’
The fact that the deal was blocked by the UK but welcomed by the EU has made headlines. Evidently in giving the green light, EU officials believe Activision’s takeover by Microsoft won’t be unfair.
In the meantime, Microsoft still awaits a confrontation with the US Federal Trade Commission, which has filed a lawsuit to block the deal. The trial is scheduled to start in early August, with a decision expected by the end of the year.
EU approves — with conditions to license to rivals access to Activision's content for 10 years — @Microsoft's $MSFT's deal to buy 'Call of Duty' maker @Activision https://t.co/kjSxzJzjwm via @YahooFinance
— Alexis Keenan (@alexiskweed) May 15, 2023
“Where we diverged with the CMA was on remedies,” stated Margrethe Vestager, the EU’s competition chief.
She stated that a 10-year free license was granted to consumers, enabling them to stream, via any cloud service, all the Activision games they hold licenses for.
“And why did we do this instead of blocking the merger?” she questioned.
“Well, to us, this solution fully addressed our concerns. And on top of that, it had significant pro-competitive effects.”
However, the Chief Executive of the CMA Sarah Cardell reiterated her support for the decision, emphasizing the regulator’s objective to establish favorable conditions for competition that would foster the growth of both large and small companies.
"I don't find that we are operating, sort of, broadly speaking, in a hostile environment," stated Cardell.