Ethics for the Metaverse


The “metaverse” isn’t here yet, and when it does arrive, it won’t be a single domain controlled by a single company. Facebook wanted to create that impression when it changed its name to Meta, but its rebranding coincided with major metaverse investments by Microsoft and Roblox.

All three companies seek to shape how virtual reality and digital identities will be used to organize more of our daily lives, from work and healthcare to shopping, gaming, and other forms of entertainment.

The metaverse is not a new concept. The term was coined by the science fiction novelist Neal Stephenson in his 1992 novel Snow Crash, which depicts a hypercapitalist dystopia in which humanity has collectively opted for life in virtual environments.

So far, the experience has been no less dystopian here in the real world. Early experiences with immersive digital environments have already been marred by bullying, harassment, digital sexual assault, and all the other abuses we’ve come to associate with platforms that “move fast and break things.”

None of this should come as a surprise. The ethics of new technologies have always lagged behind the innovations themselves. That is why independent parties should offer governance models as early as possible, before self-interested companies do so with their own profit margins in mind.

The evolution of ethics in artificial intelligence is instructive here. Following a major breakthrough in AI image recognition in 2012, business and government interest in the field exploded, attracting major contributions from ethicists and activists who published, and widely circulated, research on the dangers of training AIs on biased data sets. A new language emerged for embedding the values we wish to uphold into the design of new AI applications.

Thanks to this work, we now know that AI “effectively automates inequality,” as Virginia Eubanks of the University at Albany, SUNY, puts it, while also perpetuating racial bias in law enforcement. To draw attention to this problem, computer scientist Joy Buolamwini of the MIT Media Lab founded the Algorithmic Justice League in 2016.

This first wave of responses drew public attention to the ethical issues associated with AI. But it was soon overshadowed by a new push for self-regulation within the industry. AI developers introduced technical toolkits for conducting internal and third-party assessments, hoping this would allay public fears. It did not, because most companies pursuing AI development have business models that are in open conflict with the ethical standards the public wants them to uphold.

To take the most common example, Twitter and Facebook will not deploy AI effectively against the full range of abuse on their platforms, because doing so would hurt “engagement” (outrage) and therefore profits. Likewise, these and other technology companies have exploited value extraction and economies of scale to achieve near-monopolies in their respective markets. They will not now voluntarily surrender the power they have acquired.

More recently, corporate consultancies and various programs have professionalized AI ethics to address the reputational and practical risks of ethical lapses. Those working on AI at large tech companies are encouraged to consider questions such as whether a feature should default to opt-in or opt-out, whether it is appropriate to delegate a task to AI, and whether the data used to train AI applications can be trusted.

To this end, many tech companies have created nominally independent ethics boards. But the reliability of this form of governance has since been called into question by the ousting of senior in-house researchers who raised concerns about the ethical and social implications of certain AI models.

Establishing a strong ethical foundation for the metaverse requires that we get ahead of industry self-regulation before it becomes the norm. We also need to be mindful of how the metaverse is already diverging from AI. Whereas AI has largely centered on internal business operations, the metaverse is decidedly consumer-facing, which means it will come with all sorts of behavioral risks that most people have not yet considered.

Just as telecommunications regulation, specifically Section 230 of the US Communications Decency Act of 1996, provided the governance model for social media, social media regulation will become the default governance model for the metaverse.

This should worry us all. Although we can easily predict many of the abuses that will occur in immersive digital environments, our experience with social media suggests that we may be underestimating the scale they will reach and the ripple effects they will have.

It would be better to overestimate the risks than to repeat the mistakes of the past 15 years. A fully digital environment creates the potential for even more comprehensive data collection, including personal biometric data. And since no one really knows exactly how people will react to these environments, there are good reasons to use regulatory sandboxes before allowing wider deployment.

Anticipating the ethical issues of the metaverse is still possible, but time is running out. Without effective independent oversight, this new digital realm will almost certainly go rogue, recreating all the abuses and injustices of AI and social media, and adding new ones that we haven’t even bargained for. A “Metaverse Justice League” may be our best hope.

Josh Entsminger is a PhD student in innovation and public policy at the UCL Institute for Innovation and Public Purpose; Mark Esposito, co-founder of Nexus FrontierTech, is a policy associate at the UCL Institute for Innovation and Public Purpose and a professor at Hult International Business School; and Terence Tse, co-founder and executive director of Nexus FrontierTech, is a professor at Hult International Business School. © Project Syndicate, 2022. www.project-syndicate.org
