Trust is essential for any organisation that wants to win customers, investors and fans. But what additional complexity does the collection and use of data add to earning it?
Philosopher Baroness Onora O’Neill has long argued that “what is important in the first place is not trust but trustworthiness.” But what really is the difference between trust and trustworthiness? She suggests that trustworthiness is the demonstration of honesty, competence and reliability, which influences our decisions on where and in whom we place our trust.
The 2020 Edelman Trust Barometer shows that people tend to “grant their trust based on […] competence and ethical behaviour.” So there is support for the idea that organisations must both deliver on promises and do so ethically if they want to demonstrate trustworthiness.
Building trust in the data and AI space
The evolution of technology and the burgeoning use of AI have, arguably, made it harder to acquire (and hang onto) people’s trust. More than ever, organisations must think about how they manage the data they access, use and share. The same goes for how ethically they use AI technologies.
Over the past few years, there have been countless examples of organisations that have suffered legal, financial and/or reputational consequences for mishandling data or using AI irresponsibly.
Companies like Amazon and Google have been heavily criticised for using biased algorithms in HR processes. Amazon’s AI resume-screening system was accused of penalising female candidates and excluding women from its pool, while Google AdWords was found to show adverts for high-paying executive positions to male job seekers more often than to female ones. These findings are concerning considering that “72% of resumes are weeded out before a human eye sees them” (Harvard Business Review).
Communicating Corporate Digital Responsibility
As companies become more alert to the repercussions that misusing data and AI can have on their reputation and overall stability, they are paying more attention to their Corporate Digital Responsibility (CDR). This is driving a growing emphasis on the responsible and ethical use of data and AI, increasingly embedded in board-backed policies and processes that ensure the business uses both wisely.
The AI Ethics Guidelines Global Inventory, a project by AlgorithmWatch, compiles examples of best practice in responsible AI from around the world.
For example, although Google has been criticised many times for using biased algorithms in tools like Google’s Perspective, the tech giant has acknowledged the need for organisational processes and policies to ensure the ethical use of AI across the company. In June 2018, they announced Google’s AI Principles – an ethical charter to guide the responsible development and use of AI in […] research and products – which were to be implemented with the support of the Advanced Technology External Advisory Council (ATEAC).
Telefónica has followed suit, designing training that explains to employees the negative impact that improper practice can have on society and human rights. They have also developed a self-assessment process that holds product managers accountable, requiring them to evaluate their products and services against AI ethics risks and make informed decisions.
Demonstrating trustworthiness through communications
Commitments to high standards of CDR represent a step change in how organisations understand the risks and opportunities afforded by data and digital technologies. To build trust, it is critical that they invest in communicating their commitment to CDR both internally and externally.
Communications practitioners themselves are considering the critical role of ethics in the use of data and AI; in 2020, the CIPR’s #AIinPR Panel produced an Ethics Guide to Artificial Intelligence in PR. Our CEO, Emma Thwaites, is a member of the panel, and at Allegory we have worked with our clients since 2012 to help them build trust amid the complexities of an ever-changing landscape of technological advancement.