Why the future of AI will be fought on a battleground of ethics

Over the last few months, layoffs have gripped the tech industry. Facing macroeconomic headwinds that are weighing on revenue forecasts and financial performance, the tech giants are navigating unprecedented times destined for the history books.

Amid the macro pressures and turbulent markets, one of the glimmers of light this year has been in emerging technologies, specifically the radical growth and adoption of AI. While AI isn’t exactly new, the world hasn’t been able to stop this technology from seeping into every conversation, both in and outside of the workplace.

And for good reason. The technology’s potential is significant, and its adoption in our daily lives is driving industrial change and powering emerging applications such as generative AI.

But, as with any emerging technology, at this stage there are lots of questions and few answers. With the rise of ChatGPT, a number of new legal questions have surfaced around generative AI and the ownership of its output, as well as whether that work can benefit from copyright protection. Court cases are already underway as AI developers face uproar from artists and content creators who claim their works are being used without fair compensation.

While the outcomes of this litigation will help shape the future use of AI, for this technology to achieve sustainable long-term growth, it ultimately must be ethical and responsible. A responsible version of AI must consider, at its core, privacy and bias, among other areas. Many of the tech giants had already started building responsible AI teams to advise on the safety of consumer products powered by AI, but as the layoffs continue, the plug seems to have been pulled on this critical component as well.

Amazon and Twitter have made cuts to their responsible AI teams, and Microsoft, which has embedded OpenAI’s ChatGPT technology into its Microsoft 365 suite of products, went as far as eliminating its ethics team altogether.

This is concerning.

These teams were created to develop and implement the moral principles and practices that inform the development and responsible use of AI technology. By working within the boundaries of an AI ethics framework, they played a pivotal role in setting clearer guidelines for the employees building a technology that will become deeply embedded in our daily lives. Without these structures to hold developers to account, there is an increased risk that AI will be used irresponsibly and unsafely for the users of the tech.

Critically, ethical AI investment and implementation must be the responsibility of an engaged leadership team. Much like corporate social responsibility (CSR) and environmental, social, and corporate governance (ESG), AI must have a home on the top shelf of business acronyms prioritized by boardrooms. After all, they are interconnected. A key pillar of CSR, alongside environmental, philanthropic, and economic factors, is whether a company operates its business ethically and upholds human rights, including protections against racial and gender discrimination. Companies developing and integrating AI tools into their consumer products retain the responsibility to uphold those rights, and to mitigate, and ideally avoid, algorithmic bias.

We have seen racial bias in facial recognition algorithms that recognize white men more accurately than others, and voice recognition systems built by companies including IBM have been found to have higher error rates when transcribing the voices of Black speakers than those of white speakers. AI models are trained on vast data sets, which of course makes it very difficult to ensure that the data fits within a defined ethical framework. But that difficulty speaks to the critical importance of having dedicated teams that are accountable in this regard.

Similarly, on the environmental front, training AI models can consume large amounts of energy. Will this allow companies to meet their ESG targets? That remains to be determined, but what’s clear is that this technology’s success will be defined by its ability to fight a battle of ethics.

This starts with keeping privacy governance at the heart of design. Global privacy regulations, such as the EU’s General Data Protection Regulation, govern how personal data is used, including by AI systems, but it’s up to businesses to ensure that their AI, legal, and privacy teams collaborate with developers to operate within the boundaries of ethical and privacy frameworks.

Now, with the Future of Life Institute’s recent open letter calling for a pause on the training of powerful AI models for fear of harmful consequences, we have a defining choice to make.

As a society made up of businesses and governments, we each have a role to play in laying the foundations for a more inclusive, human future. Only by acting now to put frameworks and governance structures in place can we begin to build ethical versions of a technology that will undoubtedly change our world.


Matt Burt is the director of business operations and business affairs of Europe, the Middle East and Africa at R/GA London.