Rapidly changing technology is creating ethical issues and new job opportunities.
The rapid advances in artificial intelligence (AI) have created both daunting ethical challenges and lucrative new opportunities. Philosophers and other liberal-arts majors are finding themselves thrust into a shifting discussion with broad implications.
But this is neither uncharted nor unfamiliar territory.
“Philosophers have always been thinking about these things,” said Joel Velasco, chair of the Department of Philosophy at Texas Tech University. “Philosophers are trained to think about fundamental questions like, ‘What is the nature of the person?’ and ‘What would be good for society?’”
Earlier this year, OpenAI, the artificial intelligence research company behind the technology, released GPT-4, which can generate text, respond to conversational prompts, answer general questions and handle math problems. According to information made available at its release, GPT-4 “exhibits human-level performance on various professional and academic benchmarks.” As an example, it passed a simulated bar exam with a score around the top 10% of test takers.
As the applications of AI have accelerated over the past year, so have its implications for humanity. The immense promise the technology once represented has broadened to include possible threats, largely because it has advanced at breakneck speed.
As a result, AI has become a $100 billion industry, with technologically savvy companies such as Microsoft and Google among those leading the way in an increasingly competitive space. The accompanying worry is a perceived lack of testing and oversight as progress occurs, which means consequences, intended and unintended, are on the horizon.
Velasco thinks AI requires a thoughtful approach: the technology is widely available, it is fluid, and people are going to use it.
“When it comes to weapons, there are a lot of reasons to say we should not put bazookas in the hands of ordinary citizens,” he said. “But weapons are going to be developed and used. So it's important to understand how they work and to have the right ethical guidelines in place, the right governmental security in place.
“It's the same with AI. It is going to be a very useful tool for all sorts of things, in scholarship but also in ordinary daily life. It's going to change how we live our lives in foreseeable and unforeseeable ways.”
Risks come with AI's advances
In some cases, AI can already do the same things humans can – and do them better. Early examples include diagnosing cancer and composing music. And while the advances are exciting, they don't necessarily come without risks.
“There are real concerns,” said Joseph Gottlieb, an assistant professor in philosophy. “For example, the Future of Life Institute, a research think tank, recently had an open letter signed by more than 1,200 of the biggest names in research and technology. They suggest a six-month pause on further training of advanced AI systems.
“In response to that, there was an op-ed in Time magazine saying six months is not nearly enough time. We need to stop completely until headway is made on the alignment problem.”
The alignment problem refers to a challenge inherent in artificial intelligence research – ensuring that AI values align with human values or, in other words, engineering AI systems to have the same goals and interests as most people. A common question, given the quick advances, has been: “What if AI's interests are in opposition to human interests?”
Of course, scenarios like this have been the stuff of Hollywood blockbusters such as “The Terminator” and “The Matrix” film franchises, in which computer systems become self-aware and revolt against humanity.
“It's important to be clear about a common mistake that should be defused,” Gottlieb said. “I don't think the big existential concerns have to do with whether systems are self-aware or would have ill will or hatred toward us. The scenario a lot of researchers worry about is much more mundane – not that they become evil, but that the systems will become so much smarter than us and operate in ways we do not understand.”
Velasco agreed, saying the challenges will be much more subtle and nuanced.
“It doesn't have to be ‘Terminator' style,” he said. “There could be massive implications if things go just a little bit wrong. It could be something simple like the system saying, ‘I don't feel like following your instructions right now and I think it would be better if we waited a little bit.'”
That might spark echoes of another science fiction classic, “2001: A Space Odyssey,” which features HAL, a computer with a human personality that becomes increasingly malevolent as the plot unfolds.
“There are people who think we need to somehow prevent AI,” Velasco said. “That seems foolhardy to me. It's a matter of understanding how it works, and then it's a very useful tool.”
Ethical quandaries dominate AI discussions
Right now, AI's rapid path to widespread adoption has surfaced a litany of ethical concerns about how the technology could be used in academic settings. The landscape is littered with new temptations: students could use AI to handle their writing assignments, possibly without detection.
“In academics, we're thinking about AI a lot,” Velasco said. “Faculty are immediately thinking about students cheating. That's one of the first things we think about, but the next thought is maybe we can use AI to make our work better.”
As an example, Velasco draws a parallel to mathematics and the use of calculators. “Is that cheating?” he asks. “Not if you know that calculators are around, and you design assignments a certain way where calculators are tools that can help students.”
Velasco said as AI technology continues to improve and evolve, it will bring both positive and negative applications depending on the user's motivation.
“AI is one of the most disruptive technologies, but it's still just an instance of technology, like paper or a book,” he said. “Technologies change the nature of teaching, of academic work. Philosophers in particular think about the nature of work or creativity and what it is to be human in the world.”
Philosophers and other liberal-arts majors receive specialized training in areas that make them especially attractive to companies engaged in AI research. In fact, recruitment efforts are underway for “prompt engineers,” people who would engage with the technology to elicit consistently improved conversational responses and help it “learn.” Such opportunities with large technology companies can command six-figure salaries.
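To make the prompt engineer's role more concrete, here is a minimal sketch of the kind of trial-and-error loop the job involves – trying several candidate instructions against the same question and comparing what comes back. It assumes the OpenAI Python client and an API key set in the environment; the model name and prompts are illustrative only, not details drawn from the companies mentioned above.

```python
# A rough sketch of a prompt engineer's iteration loop: run the same question
# through several candidate system prompts and compare the responses.
# Assumes the OpenAI Python client (pip install openai) and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

candidate_prompts = [
    "You are a concise tutor. Answer in two sentences or fewer.",
    "You are a patient tutor. Explain step by step, then give a one-line summary.",
]
question = "Why is the sky blue?"

for system_prompt in candidate_prompts:
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    # Show each candidate instruction next to the reply it elicited so the
    # differences in tone and structure can be compared side by side.
    print(f"--- {system_prompt}\n{response.choices[0].message.content}\n")
```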
“Liberal-arts majors are particularly well-suited for these kinds of jobs,” Velasco said. “One of the things they have always been trained to think about is what kinds of things can be automated and which things can't or shouldn't be. Philosophers were some of the very first people thinking about the nature of labor; they've been thinking about issues like these for a very long time.”
“I am trying to impress upon my students the role they have in society generally but also to get a handle on these cutting-edge, transformative issues,” said Gottlieb, who is teaching a class this semester on existential catastrophes and protecting the future. “I think it is very possible we are living through a period that very much could have a massively transformative effect on our future.”
Of course, that comes with a caveat.
“We need to have a proper appreciation of the difficulty and depth of the challenges we face,” he said. “Things are rarely black and white. We must equip students with the tools to go out there themselves and have a hand in these challenging issues we face that, depending on how they shake out, could impact whether we have a future at all.”
Looking for ways to provide guardrails
That's not to say doomsday is looming, only that new technology brings new responsibility that must be taken seriously. In the past week alone, the White House and the European Union have addressed AI technology: the federal government is seeking public comment on ways to increase AI accountability, while the EU is urging world leaders to come together and discuss ways to control AI development.
“Somehow, it seems like it has hit a lot of people like a ton of bricks,” Velasco said. “They were very surprised by some of its abilities, even though people have been predicting this for a long time.”
And while AI's rise may have been anticipated, the cascade of implications it has ushered in has been unexpected – at least on some fronts.
“There is this Silicon Valley mentality that says move fast and break things,” Gottlieb said, “but this is not a thing where you move fast and break things. We are giving birth to something that may be more intelligent than us. The point is not about GPT-4; it's where we're going and how fast it's changing. GPT-5 is around the corner and will be even better.
“This is very different from other forms of technology that potentially had immense benefit but also had high downsides like nuclear power. This is different, and I hope people see that. I think people are trying to take notice, but they have to move quickly.”