Can AI replace human performance?
By Jeff Roach, Executive Director, Digital Consulting
From Star Trek to Doctor Who to The Jetsons, science fiction is infused with AI. Whether a fully embodied character or a voice emanating from a machine panel, AI has captured imaginations and bolstered box offices for decades. But as AI continues to shift from the stuff of science fiction to a fixture of mainstream science, one common plot point gains particular relevance: will AI ever advance far enough to replace human creativity and productivity?
It’s an easy scenario to imagine, given the remarkable advancements in AI across a short span of years. For example, ChatGPT, a powerful language model developed by OpenAI, can generate human-like text, while image generators like DALL-E and Midjourney, trained on large artistic data sets, can produce highly realistic images from simple text prompts. Yet, while the abilities of AI systems are incredibly (and increasingly) sophisticated, it’s important to remember that AI isn’t a replacement for people.
Unlike people, AI isn’t capable of making decisions or performing tasks that require critical thinking; it’s simply a tool that generates an output based on a given input. Because AI can’t understand situational context or nuance, it can’t make appropriate, independent decisions, so it still requires human supervision to operate effectively. Nonetheless, like the innovations that preceded it (trains, cars, planes, televisions, computers, and cellphones), AI tends to elicit a reactive fear that these new tools will make people redundant. The key to overcoming this fear is to view AI as a toolkit that can enhance, rather than replace, human performance.
The ethics of AI
Using AI to support human work raises several ethical concerns worth discussing. Perhaps the most recognizable is the fear described above: that AI systems may automate specific types of work currently performed by humans, leading to widespread job loss and economic disruption. Without retraining programs that are well planned and implemented, AI-related unemployment could become chronic and depress entire regional economies.
A less familiar, but equally crucial ethical factor is transparency. To avoid the possibility of harmful outcomes, we must build transparency and accountability into how humans train AI systems. An ethical foundation is necessary because AI has the potential to perpetuate—or even amplify—the societal biases already present in AI-training data. Left unchecked, these societal biases result in unfair practices or discriminatory outcomes. For example, facial-recognition systems trained on data sets that lack diversity can result in higher error rates for people with darker skin tones. Similarly, language models trained on biased text can promote stereotypes and discriminate against marginalized groups.
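One practical first step toward transparency is auditing a system's error rates across demographic groups, since a model can look accurate overall while failing badly for one population. The sketch below is a minimal, hypothetical audit; the group labels and predictions are illustrative, not drawn from any real facial-recognition system.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the error rate per group from (group, predicted, actual) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    # A large gap between groups is a signal to revisit the training data.
    return {group: errors[group] / totals[group] for group in totals}

# Illustrative results: group_a sees one error in two attempts, group_b none.
results = [
    ("group_a", "match", "match"),
    ("group_a", "match", "no_match"),
    ("group_b", "no_match", "no_match"),
    ("group_b", "match", "match"),
]
rates = error_rates_by_group(results)  # {"group_a": 0.5, "group_b": 0.0}
```

An audit like this only reveals disparities; closing them still requires the diverse, representative training data discussed below.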
The “hidden” human costs involved in training AI models must also be addressed. Human labor is required to collect and label the data used to train a model. This process, known as data annotation, is often time-consuming and repetitive. It can also be disturbing, since training an AI model to know what’s acceptable sometimes requires tagging and cataloging graphic or explicit content. Moreover, this work tends to be performed by people in developing countries, who are paid low wages to work long hours viewing and sorting troubling content. So we must consider fair compensation, along with copyright issues, when building AI-training frameworks.
Finally, the ethics of AI affect people and the environment alike, since training AI models requires significant computational power. This can contribute to higher carbon-emission loads, which are known to exact environmental and human health costs.
Best practices for AI
Creating a plan to address AI concerns is not only an ethical responsibility, but also a strategic one. Demonstrated, thoughtful leadership can increase trust in AI systems. It can also help you avoid negative consequences, such as legal issues, lack of adoption, and reputational damage.
Consider the following best practices when determining how to use AI to automate business and creative work.
Data diversity and quality: To reduce bias in AI systems, use diverse, high-quality training data that’s representative of the population the AI will serve.
Transparency and explainability: Design AI systems to be transparent and explainable so users can understand the AI’s decision-making process and help identify potential biases.
Regulation and oversight: Become familiar with local, state, and federal government roles in regulating AI usage, as these agencies are often responsible for ensuring AI is used ethically and appropriately.
Responsible AI development: Adopt responsible AI-development practices, such as regularly reviewing and testing AI systems for bias and discrimination.
Human-in-the-loop: Explore the human-in-the-loop approach to AI design, where a human operator can review and override decisions made by the AI system.
Job transition and retraining: As AI automates specific types of work, provide organizational support for people affected by job loss, like retraining programs that help them transition to new roles.
Environmental sustainability: Consider the environmental impact of your AI systems. Take steps to minimize the energy consumption and carbon footprint of AI development and deployment.
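The human-in-the-loop practice above can be sketched in a few lines: route high-confidence AI decisions through automatically, and defer low-confidence ones to a human reviewer who may override them. The threshold value and reviewer callback here are illustrative assumptions, not a standard API.

```python
# Confidence below this threshold triggers human review (illustrative value).
REVIEW_THRESHOLD = 0.85

def decide(ai_label, ai_confidence, human_review):
    """Return (final_label, path), deferring to a human when confidence is low."""
    if ai_confidence >= REVIEW_THRESHOLD:
        return ai_label, "automatic"
    # Below threshold: a human operator reviews and may override the AI.
    return human_review(ai_label), "human-reviewed"

# A confident decision passes through unchanged.
auto_label, auto_path = decide("approve", 0.95, lambda label: label)

# An uncertain decision is escalated by the human reviewer.
reviewed_label, reviewed_path = decide("approve", 0.60, lambda label: "escalate")
```

The design keeps the AI as an accelerator rather than a final authority: the system handles routine cases, while people retain control over the ambiguous ones.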
So, will AI ever advance far enough to replace human creativity and productivity? Unlike your favorite science fiction book or movie, AI can’t think critically, make independent decisions, or act without human intervention. Instead, AI is a powerful productivity accelerator that can assist people with a variety of tasks. While AI can and does enhance human performance, it doesn’t replace it.
Now, what will you and your organization accomplish with the time AI saves you? That’s a question only you can answer.
Developmental Editing & Grammar
Lisa Rosenberger and Ren Iris