Deloitte’s “State of Ethics and Trust in Technology” report reveals disturbing gaps between the use and understanding of generative artificial intelligence.
Artificial intelligence (AI) has supplanted cryptocurrency as the trendiest tech around and, unlike crypto, it seems poised to make a real impact on the day-to-day life of the average person. Industries ranging from healthcare to customer service are finding ways to deploy AI, with eager investors pumping hundreds of billions into the emerging market.
And yet, despite the differences in their practicality, there remains some concerning overlap between AI and cryptocurrency.
For one, both are afflicted with serious sustainability problems. Blockchain, the decentralized ledger system that underpins crypto transactions, is notoriously resource-hungry, and that’s on top of all the energy devoted to mining: Bitcoin alone consumes more electricity than the entire country of Argentina. Large language models (LLMs) such as ChatGPT depend on the same kind of energy-intensive data-center infrastructure, and given the industry’s growth rate, they’re already running into similar sustainability issues.
But at least all that energy is going into something productive, right?
Surely, businesses wouldn’t be spending all this money on a technology they don’t really understand, would they?
A recent report from Deloitte suggests the answers to these questions might not be so simple.
The second annual “State of Ethics and Trust in Technology” includes some worrying findings, chief among them that more than half of respondents (56 percent) don’t know or are unsure whether their organizations have ethical standards guiding the use of generative AI. That’s despite the fact that nearly three quarters (74 percent) reported that their companies have begun testing generative AI, 65 percent said they are using it internally and 31 percent said they are using it for external consumption.
Deloitte surveyed over 1,700 business and technical professionals across industry sectors for the report, with nearly half (45 percent) coming from Technology, Media & Telecommunications. “[T]he adoption of Generative AI is outpacing the development of ethical principles around the use of the technology, intensifying the potential risks to society and corporate trust if these standards continue to lag,” said Deloitte chief purpose and DEI officer Kwasi Mitchell in a press release for the report.
It’s worth noting that the ethical concerns in the case of generative AI are not of the spurious Skynet variety but are more realistic and immediate. Data privacy was the most common concern among those surveyed (22 percent), followed by transparency (14 percent), then data poisoning, IP ownership and data provenance (each at 12 percent).
To put the point bluntly: in the absence of a framework for ensuring that their training data isn’t being leaked, tampered with, illegally acquired or significantly biased, three out of four companies are happily turning it over to generative AI tools. Even as an experiment, that seems like seriously bad practice.
Imagine if a third of businesses had opted to accept NFTs in place of hard (i.e., actual) currency.
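What would even a minimal data-privacy guardrail look like in practice? Here’s a rough sketch of pre-submission redaction, scrubbing obvious personal data from a prompt before it ever reaches an external generative AI tool. Everything in it, from the regex patterns to the call_llm stand-in, is an illustrative assumption rather than anything prescribed by the Deloitte report:

```python
import re

# Illustrative patterns only: a real data-privacy framework would go far
# beyond regexes (named-entity recognition, allow-lists, audit logs,
# human review).
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace likely personal data with typed placeholders before the
    text leaves the organization."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever external generative AI API an
    organization actually uses."""
    return f"(model response to: {prompt})"

def safe_prompt(user_text: str) -> str:
    """Scrub the prompt, then send it out."""
    return call_llm(redact(user_text))

# The model never sees the address or the number:
print(safe_prompt("Email jane.doe@example.com or call 555-123-4567."))
```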
For what it’s worth, the Deloitte report does include some suggestions for incorporating AI safely, such as using the public and private-instance capabilities offered by major platform developers, or partnering with those developers to build custom private instances. A small minority (8 percent) of respondents even reported building their AI tools completely in-house, though each of these approaches carries obvious risks.
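To make those options concrete: for the first two approaches, the practical difference between “public” and “private instance” often comes down to which endpoint an organization’s client talks to and who operates it. Here is a minimal sketch using the openai Python client, in which the endpoint URL, environment variable names and model name are all placeholder assumptions:

```python
import os
from openai import OpenAI  # openai-python v1.x

# Public platform capability: prompts go to the vendor's shared endpoint.
public_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Private-instance capability: the same client, pointed at a dedicated
# deployment. The URL and key variable below are placeholders.
private_client = OpenAI(
    base_url="https://llm.internal.example.com/v1",
    api_key=os.environ["INTERNAL_LLM_KEY"],
)

def ask(client: OpenAI, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # whichever model the instance actually serves
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content
```

The code path is identical either way; where the prompts land, and who can see them, depends entirely on the endpoint, which is why the two count as distinct risk postures.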
To return to the comparison with crypto, FTX would have been considered a “major platform” only a few years ago. Let’s hope we’ve learned our lesson since then.