What if a machine could create art or technology on its own in seconds?
ChatGPT, an AI chatbot, can do just that, and one artificial intelligence ethicist from the University of Waterloo says technology like this has the potential to create art that people will want.
ChatGPT by OpenAI is available as a “research preview” to the public for now. It allows users to enter questions or tasks into a textbox — whether it’s asking it to write a poem, song lyrics or computer code — and it’ll output copy or code that’s passable as human.
“I think it’ll be a slow process, but I do think a lot of people are going to be interested in creativity or creative work product that is written by AI as opposed to humans,” said Maura Grossman, a research professor with the David R. Cheriton School of Computer Science at the University of Waterloo who studies AI ethics, among other things.
“It may be very different, though, since it isn’t going to be coming from the heart in the same way,” she said.
Grossman guessed that this year, AI technology may go beyond just creating literature and could potentially make videos. She mused about AI music, which is already a reality.
“If one of these tools can create music that’s just as pleasing as other music, and you can get it for free and you can create the music that you like best, maybe you do stop buying tickets to go see other people or whatever,” Grossman said.
During a visit to Grossman’s home to experiment with the system, ChatGPT was asked to write a poem about the benefits of winter. It wrote:
Winter’s here a time of cheer, the air is crisp, the sky is clear.
The trees are bare, but in the air, there’s a magic that’s beyond compare.
Grossman then asked it to revise the poem to rhyme less and be “more soulful.” This is how it revised the text:
Winter’s chill, a crispness in the air, the trees stand bare, stripped of their leaves.
The sky is grey, a blanket overhead, but there is a stillness, a sense of peace instead.
Issues with accuracy
All that said, Grossman was a bit skeptical about ChatGPT when she learned about it six months ago.
“My tendency is not to think that all new technologies are magic,” she said.
Users can also ask the system questions and it will generally output an answer, but Grossman said that there were inaccuracies in an answer the system gave her.
“It sounds very authoritative and it doesn’t make grammar mistakes and spelling mistakes like the rest of us do, so things like that it can do,” Grossman said.
“The problem comes in whether what it’s saying is accurate or not, and that’s where the challenge is. Very general things that it can parrot back, because it has seen so much text on the internet, it can do well.”
The accuracy of ChatGPT’s outputs is one of the system’s limitations outlined by OpenAI on its site, which states: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”
‘I’m not that worried about my job’
First-year University of Waterloo computer science student Kabir Verma found that the computer code ChatGPT creates is unsophisticated and imperfect.
“The code, it works, but it’s very basic and you have to walk it through some of the processes of what to do,” he said.
When he first tried the system a month ago, he joked to a friend they weren’t going to find jobs, but it’s not something he’s actually worried about. He can see AI technology like this writing the code that’s akin to “grunt work,” which allows a programmer to focus on solving problems.
“Computer science is a very vast field and problem solving will always be something that people are needed for,” Verma said. “So I’m not that worried about my job, but I do realize that it means that programmers have to step up their game a little bit.”
University of Oxford computer science professor Michael Wooldridge told CBC’s The Current host Matt Galloway that the launch of the system was an AI “landmark event” and that the technology could take the strain out of a tedious or difficult task.
“For the vast majority of people, it’s just going to be another tool that they use and it’s going to make them more productive,” he said.
Wooldridge agrees it won’t make human work obsolete and says one of the big concerns with a program like ChatGPT is the spread of AI-generated fake news.
“You can create the skeleton of a fake news story, ask it to produce 100 different variations on that, and then just use 100 different fake Twitter IDs or Facebook IDs to start spreading that,” he said.
One way this issue could be fought is by inserting a hidden digital signature into text generated by an AI program in order to signify it was produced by a computer and not a human journalist, he said.
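The idea Wooldridge describes can be illustrated with a toy sketch. One naive approach is to embed an invisible marker (here, zero-width Unicode characters) into generated text so a checker can later detect it. The function names and the particular marker below are invented for illustration; actual research proposals instead weave a statistical watermark into the model’s word choices, which this simplified version does not attempt.

```python
# Toy illustration of tagging AI-generated text with a hidden signature.
# Real watermarking proposals bias the model's word choices statistically;
# this naive sketch just hides zero-width characters in the text.

ZW_TAG = "\u200b\u200c\u200b"  # arbitrary zero-width "signature" (invented for this sketch)

def sign_text(text: str) -> str:
    """Append an invisible signature to machine-generated text."""
    return text + ZW_TAG

def is_machine_signed(text: str) -> bool:
    """Check whether the hidden signature is present."""
    return text.endswith(ZW_TAG)

story = sign_text("A plausible-sounding but fabricated headline.")
print(is_machine_signed(story))                      # True
print(is_machine_signed("A human-written sentence."))  # False
```

A literal marker like this is trivially stripped by anyone who knows to look for it, which is why serious proposals hide the signal in the statistics of the generated words themselves rather than in extra characters.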
As with AI-created artwork, questions have also been raised about copyright.
“There are copyright issues that we’re going to have to work out,” Grossman said, explaining that what would be considered plagiarism when using a system like this is yet to be defined.
“This is so new, and law and ethics tend to stumble far behind technology developments, so we’re just starting to think about this.”