New AI Tool Claims to Stop ChatGPT Taking Company Secrets

Tech entrepreneur Wayne Chang claims to have developed an AI tool, LLM Shield, that prevents chatbots and large language models (LLMs) like ChatGPT from taking company secrets. The use of such AI tools has surged recently, raising concerns that companies could be exposed to data leaks and lawsuits.

According to a report by Cyberhaven, an estimated 2.3% of workers have put confidential company information into ChatGPT, while 11% of the data pasted into the chatbot is confidential.

As a result, many businesses, including JPMorgan Chase & Co., have reportedly blocked employee access to the tool.


LLM Shield is designed to protect businesses and governments from uploading sensitive data to AI-powered tools like ChatGPT and Bard.

Chang told Fox Business that when OpenAI’s ChatGPT was released to the market in November, he saw how powerful it would be and that it came “with big, big risks.”

“Things are going to escalate pretty quickly – that’s both a positive and a negative,” he said.

“My focus here is that I want to make sure that we can positively steer AI in a positive direction and avoid the downside as much as possible.”

Technology to control technology

Released only last week, LLM Shield alerts organizations each time there is an attempt to upload sensitive information. According to Fox Business, administrators can set guardrails for what types of data the company wants to protect.

Any attempt to upload such information prompts LLM Shield to warn users that they are about to send sensitive data, and to obfuscate the details so the content remains useful but is not readable to humans.
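LLM Shield's implementation is proprietary, but the alert-and-obfuscate pattern described above can be sketched in a few lines. The rule names, regular expressions, and function below are illustrative assumptions, not the product's actual API:

```python
import re

# Hypothetical guardrail rules an administrator might configure.
# These patterns are examples only; LLM Shield's real rules are not public.
RULES = {
    "api_key": re.compile(r"\b[A-Za-z0-9]{32}\b"),       # 32-char token
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN format
}

def screen_prompt(text: str) -> tuple[str, list[str]]:
    """Return (obfuscated_text, triggered_rule_names).

    Each match is replaced with a placeholder token, so the prompt
    stays useful to the model while the secret itself never leaves
    the user's machine. The triggered rule names can drive an alert
    to the user or an administrator.
    """
    triggered = []
    for name, pattern in RULES.items():
        if pattern.search(text):
            triggered.append(name)
            text = pattern.sub(f"<{name.upper()}_REDACTED>", text)
    return text, triggered

clean, hits = screen_prompt(
    "Contact bob@example.com, key a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6"
)
# 'hits' lists the rules that fired; 'clean' contains placeholders
# instead of the original email address and token.
```

A real deployment would sit between the user and the chatbot (for example, as a browser extension or network proxy) and apply such rules before any request is sent.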

LLM Shield is expected to continue getting smarter, just like the AI tools whose powers it is tasked with limiting. Like a spam filter, it will update itself automatically as more bots hit the market, strengthening its protection.

According to Chang, there are plans to release a personal version for individuals to download for home use.

Data leaks worry companies

This year has seen a boom in the use of AI tools by businesses, and it’s likely we will look back on 2023 as the year when the AI revolution began in earnest.

However, the influx of these tools has also raised concerns that workers may deliberately or accidentally leak sensitive information to AI-powered tools like ChatGPT.

The risk is already real. Samsung recently experienced a series of data leaks when its employees allegedly pasted source code into the new bots, potentially exposing proprietary information.

Increasingly, employers are getting nervous about how their staff might use AI tools at work. Some businesses have already banned the use of ChatGPT at work, though employees still find their way around the restrictions.

In a leaked memo, Walmart warned employees not to share confidential information with ChatGPT. The retailer said it had previously blocked the chatbot due to "activity that presented risk to our company."

Amazon has also warned its employees of the same challenge. Its corporate lawyer told employees the company had already seen instances of ChatGPT responses that were similar to internal Amazon data.

Sneaky employees

Much as in countries where ChatGPT is prohibited but still widely used, such as China and parts of Africa, bans have not deterred employees from using the AI-powered tool within their organizations.

A Fishbowl survey released in February revealed that nearly 70% of employees are using ChatGPT at work without the knowledge of their bosses.

Cyberhaven CEO Howard Ting told Axios that employees always found ways to evade corporate network bans blocking access to chatbots.

He said some companies believed employees had no access to the chatbot before his firm told them otherwise.

"Users can use a proxy to get around a lot of those network-based security tools," he revealed.

Image credits: Shutterstock, CC images, Midjourney, Unsplash.