A MetaNews journalist was asked to provide sensitive information, including personal documentation and biometric data, when signing up for the Sandbox metaverse.
With users’ every move potentially tracked in the virtual realm, tying such personal information to an account raises questions about data use and personal privacy.
What are users signing up for?
A MetaNews investigation of Sandbox shows that users are required to share significant amounts of personal information with the metaverse firm.
During registration for Sandbox, this journalist was asked to provide a cryptocurrency wallet address (MetaMask, Coinbase, Bitski, WalletConnect or Venly), an email address, personal documentation such as a passport or driver’s license, and to complete what is known as a “liveness” biometric data test. Submitting this personal information required the use of both a laptop and a personal mobile phone, potentially linking multiple devices to the account as well.
Sandbox maintains that thanks to decentralization, users will own their data and in-game assets – but more privacy-focused users might balk at a game that requires them to tie sensitive KYC information to one of their emails, as well as their personal Ethereum addresses.
As another potential sticking point, Sandbox also encourages its users to link their Sandbox accounts to other sites such as their Google and Facebook accounts.
Once the registration process is complete, users are required to download the Sandbox installer app, a 440-megabyte installation client, on their personal laptop or home computer. Warning: to install Sandbox, you may have to bypass your operating system’s security warnings.
Given the very real possibility of digital surveillance in the metaverse, and given that social media firms including Meta have misused personal data in the past, there are certainly grounds for privacy concerns. In its terms of service, Sandbox confirms that harvesting data during registration is only the very beginning.
The Sandbox terms of service state: “We and our service providers collect personal information in a variety of ways.” Those ways are numerous and ever-present. Everyday activities in Sandbox, including making purchases, completing surveys, and entering raffles, sweepstakes, or promotions, are all recorded by the firm. Personal information is also collected from crypto wallet providers. Users may be justified in feeling that this level of personal intrusion is the start of a data privacy nightmare.
An explosion of information
The threat to personal privacy in the metaverse is severe, and it can be measured in a variety of ways, including by counting unique data elements.
The two-dimensional web already collects user information in the form of mouse clicks, cookies, and text submitted through forms and search engines. The metaverse can collect far richer data, far more quickly.
Twenty minutes of immersion in a virtual reality (VR) landscape can create two million unique data elements. That information can include the way you walk, move, or look, and can even be leveraged to infer more invasive, deeper-level insights into everything from the way you breathe to the way your brain processes information. It’s not quite reading your mind, but it’s not far away either. The collection of this sort of data is involuntary and continuous, making consent a virtually meaningless notion.
What we know from the two-dimensional web is that the people who collect this data do not always have their users’ best interests at heart.
Case study: Cambridge Analytica
If there is a lesson to learn from social media it’s that large tech firms do not place the privacy of their users ahead of corporate profits. Meta, the social media giant formerly known as Facebook, is one of the greatest offenders in this regard. In December of last year, Meta agreed to pay out $725 million to the plaintiffs of a class action lawsuit suing for privacy violations in relation to the Cambridge Analytica scandal.
Meta, which owns Instagram, WhatsApp, and Facebook, has been clear about its long-term strategy to re-focus on the metaverse and metaverse-related technologies.
The Cambridge Analytica (CA) lawsuit argued that from 2015 onward, the data of millions of Facebook users was harvested without their knowledge or consent. This data was then sold for profit to a variety of political actors including United States Senator Ted Cruz, Donald Trump, and the Leave campaign of the British Brexit referendum. The data was especially valuable to CA and its clients.
Although these three instances of data harvesting became big news in the west, Cambridge Analytica and its parent firm SCL Group also had significant dealings in the developing world, where it had been conducting what whistleblowers called “psychological operations” for a considerably longer time. The scandal was eventually exposed by British newspaper The Guardian in May 2017.
The crux of the case is that despite promises to the contrary, Facebook and Meta failed to prevent app developers from harvesting user data and using it. The suit stated that “Facebook, despite its promises to restrict access, continued to allow a preferred list of app developers to access the information of users’ friends.”
Naturally, Meta agreed to the $725 million settlement on a “no-fault” basis – a lot of money to pay for something they now claim they were not responsible for. In any case, the fact that Meta is now so focused on the metaverse does not seem to augur well for individual privacy, unless you believe the company is now wiser and has learned its lesson.
AI may make data privacy moot
Whether metaverse firms make personal privacy a concern or not, AI may ultimately render the notion a moot point.
A study from 2020 shows that, given five minutes of VR motion data, a machine learning algorithm – a form of artificial intelligence (AI) – can identify a person with 95% accuracy.
A more recent study is even more frightening, reflecting the significant leaps AI has made in the past couple of years.
As MetaNews previously reported this week, new work from the University of California, Berkeley shows that 100 seconds of recorded head and hand motion is enough to identify a person with 94% accuracy from a pool of 50,000 participants.
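As a loose illustration of how this kind of identification can work, the sketch below enrolls a pool of simulated users by averaging their noisy motion recordings into per-user “signatures,” then matches a fresh recording to the nearest one. The user pool, feature choices, noise levels, and nearest-centroid matching are all illustrative assumptions, not the actual methods of the studies cited above.

```python
# Hypothetical sketch of motion-based identification. All parameters and
# the nearest-centroid matching below are illustrative assumptions, not
# the cited studies' actual methods.
import random

random.seed(0)

N_USERS = 50     # pool of users to distinguish
N_FEATURES = 8   # summary stats of head/hand motion (e.g. mean head
                 # height, hand sway amplitude) -- illustrative only
N_SAMPLES = 20   # recorded sessions per user used for enrollment

# Each user has a stable underlying "motion signature": a point in
# feature space that their recordings scatter around.
signatures = {u: [random.gauss(0, 1) for _ in range(N_FEATURES)]
              for u in range(N_USERS)}

def record_session(user):
    """Simulate one noisy recording of a user's motion features."""
    return [x + random.gauss(0, 0.3) for x in signatures[user]]

# "Enrollment": average several sessions per user into a centroid.
centroids = {}
for u in range(N_USERS):
    sessions = [record_session(u) for _ in range(N_SAMPLES)]
    centroids[u] = [sum(s[i] for s in sessions) / N_SAMPLES
                    for i in range(N_FEATURES)]

def identify(sample):
    """Return the enrolled user whose centroid is nearest the sample."""
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(sample, c))
    return min(centroids, key=lambda u: sq_dist(centroids[u]))

# Try to identify one fresh, unlabeled recording per user.
hits = sum(identify(record_session(u)) == u for u in range(N_USERS))
print(f"correctly identified {hits}/{N_USERS} users")
```

Even this toy matcher identifies almost everyone in its small pool, because each person’s motion habits sit in a distinct region of feature space; the real studies achieve similar results at far larger scale with richer models.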
Given the ability of AI to so swiftly identify a person from just a few head and hand motions in the metaverse, will users really want to have that identification then tied to their passport details, biometric data, and personal financial information?