What is ChatGPT? KSAT Explains how it works and its ethical dilemmas

ChatGPT is the latest tech innovation blowing people’s minds while simultaneously raising questions about ethics, critical thinking and originality.

Maybe you don’t consider yourself “into tech.” But ChatGPT and other generative AI models like it are worth paying attention to because of their potential to revolutionize just about everything: business, art, academia and more.

ChatGPT is a large language model and a form of generative artificial intelligence, or AI.

Let’s call that the technical description.

What does it actually do?

You can ask ChatGPT a question or give it a prompt, and it can respond with paragraph after paragraph in seconds — think full articles or essays.

The tool produces text fast and is free to try.

“It is one of the most revolutionary technologies that is going to impact everything we do in society,” said Ryan McPherson, associate professor of practice in the UTSA Communications Department.

McPherson is excited about the possibilities with ChatGPT.

“Sometimes we struggle to explain things to other people, and this can give us a way to explain it at different levels,” he said. “So I can say, ‘Explain ChatGPT or any other subject like I’m a fifth grader,’ and it can give me a simple bulleted list that I can more easily understand than going to Wikipedia and searching or going to Google and searching.”

Watch what happened when I asked ChatGPT to write this article for KSAT.com:

The program produced that content in 50 seconds.

“It not only accelerates your work, but it gives you more time to be the editor and the curator of your work,” said Grace Delgado, a digital marketer who uses ChatGPT to help make blogs, ads and other content.

“Instead of focusing on that first paragraph, that second paragraph, and your conclusion, it just gets it started for you so you can add the best to it. That is your twist,” Delgado added.

The ability to create something this thorough, this fast, is new. But the technology to predict language is not.

“It’s actually been out there for a long, long time. It’s just getting better over time,” said Anthony Rios, information systems & cybersecurity professor at UTSA.

Think of the autocomplete feature in your email or text messages. That’s a simpler version of the same technology.

“A language model is basically when you’re given some input sequence of tokens, and then you’re trying to predict future tokens. You can think of a token as basically a word, but sometimes we’re thinking about sub-words,” Rios said. “As an example, you know, given the input sequence, ‘The University of Texas at,’ and then the model would learn to predict things like ‘San Antonio or Austin or Dallas.’”
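The next-token idea Rios describes can be sketched with a toy example. The following Python snippet is illustrative only — it counts which word follows which in a tiny made-up corpus, where real language models use neural networks trained on billions of tokens — but it shows the same prediction principle, including Rios’s “The University of Texas at…” example:

```python
from collections import Counter, defaultdict

# Toy corpus; a real model is trained on vast amounts of web text.
corpus = [
    "the university of texas at san antonio",
    "the university of texas at austin",
    "the university of texas at austin",
    "the university of texas at dallas",
]

# For each token, count which token follows it.
follows = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        follows[prev][nxt] += 1

def predict_next(token):
    """Return candidate next tokens, most frequent first."""
    return [word for word, _ in follows[token].most_common()]

# Given the context word "at", the model ranks likely continuations.
print(predict_next("at"))
```

Here “austin” ranks first because it appears most often after “at” in the toy data — the same statistical logic, at a vastly larger scale, drives ChatGPT’s fluent-sounding output.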

Where did it come from?

OpenAI debuted ChatGPT in November 2022.

At the time, the large language model was trained to predict text based on content created and published online up until that point.

Then it was updated in mid-March 2023.

With every update, ChatGPT incorporates more current content.

Users can ask it to come up with anything. The more specific the prompt, the more thorough the result.

ChatGPT is also refining its own product as it gets feedback from users.

“You’re playing with ChatGPT, and you say, ‘Hey, no, this is not good,’ or ‘This is good,’ like a thumbs up or thumbs down, then it can learn it over time,” Rios said. “Humans are actually providing that feedback.”

It’s not perfect

ChatGPT isn’t the only generative AI in use. It has received the biggest buzz, but other models are already out there.

And there are some words of caution with whatever model you might use: It has flaws, and it makes mistakes.

Just one example:

“The bot saying that 13 is not a prime number because it’s divisible by three,” said Ronni Gura Sadovsky, assistant professor of philosophy at Trinity University. “And it was correctly defining what a prime is and what divisible means.”

“Somebody might not realize it’s false because it sounds so confident,” said Rios. “And if people don’t take that into consideration, there’s these unintentional harms as well.”

Ethics in education and beyond

As more people use generative AI to find out what it’s capable of, this type of technology is raising even more questions about ethics.

“There’s huge dangers. First off, we need guardrails. Because if we come to a place where we trust AI, we know how those movies end, and it’s not good,” McPherson said.

The world of academia may be the first to wrestle with the pros and cons of this technology.

“I think a lot of people immediately assume that it’s going to be used for nefarious ends, that students are going to use it to cheat and to plagiarize,” said Scott Gage, associate professor of English at Texas A&M University-San Antonio. “I think that potential exists, but I don’t think it’s inevitable.”

The internet has already provided plenty of ways to cheat.

A big challenge for the professors we talked with is incorporating ChatGPT and getting students to use it responsibly while ensuring they’re still learning something.

It’s been the subject of professor workshops at Trinity University.

“Students are trying to get from point A to point B, whether that’s, you know, a credential or a grade or even just an experience that they hope to have in the class. And we try to put the learning in between point A to point B,” said Gura Sadovsky. “And this is almost like there’s a way around the entire apparatus that we’ve created that’s supposed to teach them something.”

For all the dangers, this tech could have an upside in education and beyond.

“A lot of people come from backgrounds where, say, they learned English as a second language or maybe, you know, they don’t have very strong writing history or educational background, or they just haven’t practiced writing very often,” said Rios. “So this actually is a very good tool to help create professional writing samples.”

“What if I showed my students all the things that this technology is doing well? Like it’s organizing an essay really nicely. It’s got great grammar. It’s supporting its claims with evidence,” Gura Sadovsky said.

“It can take the place of critical thinking for our students. So, in education, we’re looking for ways to integrate it into our courses, into our assessment practices, with a bias towards making sure that students are still able to think critically and be prepared for the job market,” McPherson said.

Because, like it or not, this tech exists in that job market.

“We can’t ignore the technologies that students will have access to in those lives outside of school,” Gage said.

“Is using Google cheating? Is using books cheating? Is using reference guides cheating? At the end of the day, AI is a tool,” Delgado said.

New technology signals innovation. It’s up to the user to let it spark, rather than stifle, their own.

“One of the big dangers here is creativity and innovation,” McPherson said. “Because if we’re all following the recipe and we’re all looking for answers in the back of the book, then where is our innovation? Where’s our creativity? And how do we drive conversations forward in meaningful ways?”



About the Authors:

Myra Arthur is passionate about San Antonio and sharing its stories. She graduated high school in the Alamo City and always wanted to anchor and report in her hometown. Myra anchors KSAT News at 6:00 p.m. and hosts and reports for the streaming show, KSAT Explains. She joined KSAT in 2012 after anchoring and reporting in Waco and Corpus Christi.

Valerie Gomez is lead video editor and graphic artist for KSAT Explains. She began her career in 2014 and has been with KSAT since 2017. She helped create KSAT’s first digital-only newscast in 2018, and her work on KSAT Explains and various specials has earned her a Gracie Award from the Alliance for Women in Media and multiple Emmy nominations.