Digital technology has changed our society in immeasurable ways. From cellphones to social media to email, our entire lives are now shaped by technology in ways we may not even be fully aware of.
However, the more digital technology permeates our world, the more people worry about its negative effects. The displacement of workers, data privacy, the rise of misinformation, the environmental crisis and the global mental health emergency can all be, at least in part, attributed to the growing prevalence of technology in all facets of society.
While many of these societal ills are most commonly associated with social media platforms, gaming systems and popular mobile apps, the truth is that even the most benign technology can, intentionally or unintentionally, be weaponized to cause damage. In addition, the success of digital transformation depends on the trust of stakeholders. If users and customers don’t trust your organization or the technology you operate, your digital transformation, and your business with it, will likely fail.
And we are not alone in thinking so. According to a recent study conducted by Deloitte, 57% of respondents from “digitally maturing” organizations say their leaders spend enough time thinking about and communicating about the societal impact of digital initiatives.
In addition to outcomes, ethics frameworks should also consider data sources, calculation methods, use of technology, security and operational risks, and the assumptions built into automated decision-making.
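The dimensions above can be treated as a simple review checklist. As a rough sketch only, the structure and field names below are illustrative, not drawn from any standard framework:

```python
from dataclasses import dataclass, field

@dataclass
class EthicsReview:
    """Illustrative checklist covering the dimensions an ethics
    framework should examine for an automated decision system.
    (Field names are hypothetical, for demonstration only.)"""
    data_sources: list[str] = field(default_factory=list)         # where the data comes from
    calculation_methods: list[str] = field(default_factory=list)  # models/formulas used
    technology_used: list[str] = field(default_factory=list)      # platforms, vendors, APIs
    security_risks: list[str] = field(default_factory=list)       # security/operational exposure
    assumptions: list[str] = field(default_factory=list)          # assumptions baked into decisions

    def gaps(self) -> list[str]:
        """Return the dimensions that have not been documented yet."""
        return [name for name, values in vars(self).items() if not values]

# A review that has documented only two of the five dimensions:
review = EthicsReview(
    data_sources=["CRM export", "third-party credit bureau"],
    assumptions=["default cash allocation serves the client's interest"],
)
print(review.gaps())  # the three dimensions still missing documentation
```

The point of the sketch is that "ethics" becomes reviewable once each dimension is an explicit, inspectable artifact rather than an implicit judgment call.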
It goes without saying that ethical business practices start with compliance. However, when it comes to data protection and privacy, the ethical use of data is more than just a regulatory requirement; it is a strategic imperative. After all, data-driven apps and automations are only as good as the data they ingest.
With this in mind, forward-thinking organizations are developing and implementing comprehensive data ethics guidelines to help ensure that digital technology and AI do not cause unintended harm.
One of the biggest concerns about intelligent automation and digital transformation is that new technologies will displace human workers. Indeed, this fear is not unfounded.
For example, as we pointed out in a previous article on burnout, 45% of American workers say that the technology they use at work does not make their jobs easier and that they are, in fact, frustrated by it.
The time has come for organizations to evaluate digital technology not just for the value it brings to shareholders, but for its potential impact on their human workforce. At the heart of this enterprise is IT/business alignment. By working closely with business units to ensure new digital investments drive both business goals and the employee experience, IT can increase adoption rates and the likelihood of overall success.
There is no doubt about it. The proliferation of digital technology is exacerbating many, if not all, of the world’s most pressing environmental crises. From the disastrous environmental impact of rare-metal mining to the staggering amounts of energy a single AI model consumes, digital technologies of all kinds come with substantial environmental costs.
Although calculating the environmental impact of digital technology can be incredibly difficult and complex, organizations and researchers are beginning to do so. Big tech companies like Apple, Meta and Google have all made ambitious commitments to reducing their carbon footprint. Although some of their claims are a bit dubious, they have dramatically increased the efficiency of GPUs, TPUs, and other data processing technologies.
As AI and automation become more prevalent, so do scandals involving their unintended consequences.
Take, for example, the recent Charles Schwab robo-advisor saga. In June 2022, Charles Schwab agreed to pay $187 million to settle an SEC investigation into alleged hidden fees charged by the company’s robo-advisor, Schwab Intelligent Portfolios. As reported by The Washington Post, “The Securities and Exchange Commission accused Schwab — which controls $7.28 trillion in client assets — of developing robo-advisory products that recommended investors hold 6% to 29.4% of their assets in cash, rather than investing them in stocks or other securities. Investors would have earned significant income had this money been invested; instead, Schwab used the money to issue loans and collect interest on those funds. In other words, it was [allegedly] designed to make money for Charles Schwab, not the customer.”
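To make the forgone-income claim concrete, consider the "cash drag" on a portfolio: the return an investor gives up when part of the portfolio sits in cash instead of the market. The numbers below are hypothetical, chosen only for illustration, not Schwab's actual figures:

```python
def cash_drag(portfolio: float, cash_pct: float,
              market_return: float, cash_yield: float) -> float:
    """Annual return forgone by holding `cash_pct` of `portfolio` in cash
    yielding `cash_yield` instead of investing it at `market_return`.
    All inputs here are hypothetical, for illustration only."""
    cash = portfolio * cash_pct
    return cash * (market_return - cash_yield)

# A $100,000 portfolio holding 15% cash, assuming a 7% market
# return versus 0.5% earned on the idle cash:
print(cash_drag(100_000, 0.15, 0.07, 0.005))  # 975.0 forgone per year
```

Even at modest assumed rates, the drag compounds year over year, which is why a systematically high cash allocation can quietly work against the customer.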
While the settlement doesn’t require Charles Schwab to admit any wrongdoing, it’s easy to see how something like this could happen. The humans behind the technology (i.e. programmers, product marketers, etc.) are conditioned from the day they enter the workforce to prioritize profitability above all else. It is natural that these biases are reflected in the technology they create.
However, this does not mean that these results cannot be avoided. By integrating ethical decision-making into every step of the development and operationalization process, you can minimize ethics-related risks.