The rocket trajectory of a startup is well known: come up with an idea, build a team and together create a minimum viable product (MVP) that you can present to users.
However, today’s startups need to reconsider the MVP model as artificial intelligence (AI) and machine learning (ML) become ubiquitous in tech products and the market is increasingly aware of the ethical implications of augmenting or replacing humans with AI in the decision-making process.
An MVP allows you to gather critical feedback from your target market, which then informs the minimum development required to launch a product, creating a powerful feedback loop that drives today’s customer-centric development. This lean, agile model has seen great success over the past two decades, launching thousands of successful startups, some of which have grown into billion-dollar companies.
However, creating high-performance products and solutions that work for the majority is no longer enough. From facial recognition technology that is biased against people of color to credit-lending algorithms that discriminate against women, recent years have seen several AI- or ML-powered products killed because of ethical dilemmas that arose downstream, after millions of dollars had been funneled into their development and commercialization. In a world where you often get just one chance to bring an idea to market, that risk can be fatal, even for established businesses.
Startups don’t have to abandon the lean business model in favor of a more cautious alternative. There is a common ground that can bring ethics into the mindset of startups without sacrificing the agility of the lean model, and it starts with a startup’s initial goal: getting an early-stage proof of concept in front of potential customers.
However, instead of developing an MVP, companies should develop and deploy an ethically viable product (EVP) based on responsible artificial intelligence (RAI), an approach that takes ethical, moral, legal, cultural, sustainability and socioeconomic considerations into account during the development, deployment and use of AI/ML systems.
And while this is good practice for startups, it is also standard good practice for large tech companies building AI/ML products.
Here are three steps that startups, especially those that incorporate significant AI/ML techniques into their products, can use to develop an EVP.
Find an ethics officer to lead the charge
Startups have strategy directors, investment directors and even entertainment directors. Just as important, if not more so, is an ethics officer. This person can work with different stakeholders to ensure that the startup develops a product that meets the moral standards set by the company, the market and the public.
They should serve as a liaison between the founders, senior management, investors and the board on one side and the development team on the other, making sure everyone is asking the right ethical questions in a thoughtful, risk-aware manner.
Machine learning models are trained on historical data. If there is a systemic bias in a current business process (such as unequal lending practices based on race or gender), the AI will pick up that bias and conclude that this is how it should continue to behave. If it later turns out that your product does not meet the ethical standards of the market, you cannot simply delete the data and find new data.
These algorithms have already been trained. You can no more erase that influence than a 40-year-old can negate the influence of his parents or older siblings on his upbringing. For better or for worse, you are stuck with the results. Ethics officers need to detect this inherent bias across the organization before it takes root in AI-powered products.
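As a concrete illustration of the kind of check an ethics officer might ask the development team to run, the sketch below computes a simple demographic-parity gap: the difference in approval rates between two groups in a model’s decisions. The data, group labels and tolerance threshold are all hypothetical, and demographic parity is only one of several fairness metrics a team might choose.

```python
# Minimal bias-audit sketch: compare approval rates across groups.
# All data, group names, and the 0.2 tolerance below are hypothetical.

def approval_rate(decisions, groups, group):
    """Fraction of applicants in `group` that were approved (decision == 1)."""
    relevant = [d for d, g in zip(decisions, groups) if g == group]
    return sum(relevant) / len(relevant)

def demographic_parity_gap(decisions, groups, group_a, group_b):
    """Absolute difference in approval rates between two groups."""
    return abs(approval_rate(decisions, groups, group_a)
               - approval_rate(decisions, groups, group_b))

# Hypothetical lending decisions (1 = approved) and applicant groups.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_gap(decisions, groups, "a", "b")
print(f"demographic parity gap: {gap:.2f}")  # 0.75 vs. 0.25 -> gap of 0.50
if gap > 0.2:  # hypothetical tolerance set by the ethics officer
    print("flag for review: approval rates differ substantially")
```

A gap this large would trigger exactly the kind of conversation described above: is the disparity explained by legitimate factors, or is the model reproducing a historical bias baked into the training data?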
Integrate ethics into the entire development process
Responsible AI is not a point-in-time exercise. It is an end-to-end governance framework focused on the risks and controls of an organization’s AI journey. This means that ethics must be embedded throughout the development process, from strategy and planning through development, deployment and operations.
During scoping, the development team should work with the ethics officer to learn general AI ethics principles, behavioral principles that hold across many cultural and geographic applications. These principles prescribe, suggest or inspire how AI solutions should behave when faced with moral decisions or dilemmas in a specific area of use.
Above all, a risk and harm assessment should be carried out, identifying any risk to the physical, emotional or financial well-being of any person. The assessment should also look at sustainability and evaluate the damage the AI solution might cause to the environment.
During the development phase, the team must constantly consider how their use of AI aligns with company values, whether their models treat different groups of people fairly and whether they respect people’s right to privacy. They should also consider whether their AI technology is safe, secure and robust, and how effective the operating model is in ensuring accountability and quality.
The data used to train the model is an essential part of any machine learning model. Startups need to be concerned not only with the MVP and how the model is initially validated, but also with the eventual context and geographic scope of the model. This will allow the team to select the most representative dataset and avoid future data bias issues.
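One lightweight way a team might operationalize that dataset selection step is to compare how each group is represented in the training data against its share of the population the product will actually serve. The sketch below uses hypothetical region labels and target shares; in practice the groups and targets would come from the scoping and risk assessment work described earlier.

```python
# Sketch: check whether training data reflects the intended deployment
# population. Group names and target shares are hypothetical.
from collections import Counter

def representation_gaps(records, target_shares):
    """Per-group difference between dataset share and target-population share."""
    counts = Counter(records)
    total = len(records)
    return {group: counts.get(group, 0) / total - share
            for group, share in target_shares.items()}

# Hypothetical training records labeled by the region of each example.
records = ["north"] * 70 + ["south"] * 20 + ["east"] * 10
# Hypothetical shares of the market the product is meant to serve.
target = {"north": 0.4, "south": 0.4, "east": 0.2}

for group, gap in representation_gaps(records, target).items():
    status = "over" if gap > 0 else "under"
    print(f"{group}: {status}-represented by {abs(gap):.0%}")
```

A report like this makes the gap between "the data we happen to have" and "the market we intend to serve" visible before the model is trained, rather than after launch.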
Don’t forget ongoing AI governance and regulatory compliance
Given the implications for society, it is only a matter of time before the European Union, the United States or some other legislative body passes consumer protection laws governing the use of AI/ML. Once a law is passed, those protections are likely to spread to other regions and markets around the world.
It’s happened before: The adoption of the General Data Protection Regulation (GDPR) in the EU triggered a wave of other consumer protections around the world that require companies to prove user consent for the collection of personal information. Today, people across the political and business spectrum are calling for ethical guidelines around AI. Once again, the EU is leading the way, having released a 2021 proposal for a legal framework for AI.
Startups deploying AI/ML-powered products or services should be prepared to demonstrate ongoing governance and regulatory compliance, making sure to create those processes now, before regulations are imposed on them later. Performing a quick scan of proposed legislation, guidance documents and other relevant guidelines before building the product is a necessary step in the EVP.
Additionally, it is advisable to review the regulatory and policy landscape prior to launch. Having someone on your board or advisory board who is plugged into the active deliberations taking place globally would also help you anticipate what is likely to come. Regulation is coming, and it is good to be prepared.
There is no doubt that AI/ML will deliver enormous benefits to humanity. The ability to automate manual tasks, streamline business processes and improve the customer experience is too valuable to ignore. But startups need to be aware of the impacts AI/ML will have on their customers, the market and society at large.
Startups usually get only one shot at success, and it would be a shame if an otherwise high-performing product were killed because ethical concerns weren’t discovered until after it hit the market. Startups need to embed ethics into the development process from the start, develop an RAI-based EVP and continue to provide governance for the AI after launch.
AI is the future of business, but we cannot lose sight of the need for compassion and the human element in innovation.