ChatGPT: The Next Big Program or Internet Gimmick?

Courtesy of Pixabay.

The latest sensation prompting headlines for the past month is a new artificial intelligence (AI) program, ChatGPT, which can generate text within seconds. For those who don’t know, Entrepreneurs Herald describes the program as “specifically designed for generating human-like text in a conversational setting. It can be used to generate responses to user input in real-time, making it an ideal tool for chatbots and virtual assistants.”

When given the prompt, “How would you describe yourself?” ChatGPT quickly replied:

“I am a machine learning model, specifically a type of deep learning model called a transformer, that has been trained on a large dataset of text. I am capable of understanding and responding to natural language inputs in a human-like manner. I can be used for a variety of natural language processing tasks such as text generation, language translation, and question answering. I have knowledge cut off of 2021, and my training is based on a neural network architecture.” 

York University philosophy professor Regina Rini explains that “GPT is basically an interactive repository of patterns in human-authored text. That’s how it works: it draws in an enormous database of things that real people wrote online, then uses that, plus clever mathematics, to generate new text. Basically it is trying to predict what humans would write, starting from whatever prompt you give it.”
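
For readers curious what “predicting what humans would write” looks like in practice, here is a deliberately simplified Python sketch: it continues a prompt using word-pair patterns counted in a tiny sample of text. This is only a toy illustration of pattern-based prediction; ChatGPT itself uses a large transformer neural network trained on vastly more data, not simple word counts, and none of the code below comes from OpenAI.

import random
from collections import defaultdict

# Tiny sample of "human-authored" text standing in for the enormous
# datasets real models are trained on.
training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the ball ."
)

# Count which words tend to follow each word in the sample.
followers = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word].append(next_word)

def continue_prompt(prompt, length=8):
    """Extend a prompt by repeatedly choosing a plausible next word."""
    output = prompt.split()
    for _ in range(length):
        candidates = followers.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))
    return " ".join(output)

print(continue_prompt("the cat"))
# Possible output: "the cat sat on the rug . the dog chased"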

Essentially, users type in a question or a task. Depending on the task’s length or difficulty, the program typically responds within seconds. You can give it a variety of prompts, including requests for dinner recipes, historical questions, and cover letter writing.

As CNET notes, “the AI is trained on large volumes of data from the internet written by humans, including conversations. But ChatGPT isn’t connected to the internet, so it sometimes produces incorrect answers and has limited knowledge.” 

The CBC reports that the program comes from OpenAI, a research and development firm based in San Francisco. It was co-founded by Elon Musk and boasts investors such as Peter Thiel and Microsoft.

Despite the innovation, there is some worry that the program could cause a shift in the workforce as it becomes more sophisticated – that it could replace jobs, even whole lines of work, especially in the writing industry.

If the technology advances, companies could use ChatGPT or similar programs for content writing tasks such as articles, blogs, and social media posts.

Entrepreneurs Herald notes that it could be useful for organizations that produce “a large amount of written material on a regular basis, such as news organizations, marketing agencies, and e-commerce websites.”

“LLMs (Large Language Models) will diminish some industries but also create new ones, like most technological change. Think about how the web and Google changed traditional advertising: some old jobs disappeared, but there were also new jobs like Search Engine Optimization (SEO) or social media influencing,” says Professor Rini.

Additionally, ChatGPT’s impact could extend to the virtual assistant and search engine industries. According to CNET, a program like ChatGPT might change customer service chatbots, virtual assistants such as Siri, and search engines like Google.

Despite fears that the program could eliminate lines of work, Professor Rini describes how it may create new jobs, such as LLM prompt-writing or text database curation. “The big unknown is how many old jobs will go and how many new ones will appear,” says Rini.

At first glance, ChatGPT sounds flawless, but it still has some issues.

For now, OpenAI’s website describes ChatGPT as a “free research preview,” and before entering the program, a notification appears stating that “the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.”

In response to the prompt, “Describe some of your abilities, simply,” the program also states, “please note that I am not always 100% accurate and my abilities might be limited by my training data and its quality.”

One of the main issues with ChatGPT is that it responds with misinformation or factual inaccuracies. According to Professor Rini, “it gets facts wrong all the time, because it doesn’t really have any knowledge of the world, only patterns in its source texts (including fiction and misinformation). We need to build the human skill of fact-checking.”

ChatGPT and other LLMs have further limitations. A recent report by Current Digest argues that they are not advanced enough to replace human critical thinking and creativity.

Only time will tell what shifts ChatGPT will bring, both culturally and in the working world.

About the Author

By David Clarke

Former Editor

David is in his fourth year, studying English at York University. He has a keen interest in filmmaking, writing, literature, video-editing, and ideas. When he isn’t working on his next project or studying, you can catch him watching film-noirs on Turner Classic Movies.
