
The Good and Bad About ChatGPT

By admin

Mar 21, 2023

ChatGPT is an artificial intelligence (AI) chatbot created by OpenAI. Released to the public in November 2022, it has already proved hugely popular.

It can answer questions, draft text, assist with schoolwork and even generate code for you.

But it also presents ethical problems. It can be used to phish personal information out of you, and it doesn’t always tell the truth.

The technology industry faces a monumental challenge: learning how to use these systems safely and responsibly. It is essential that we consider the ethics of such systems and establish guidelines and best practices for their use.

Despite these difficulties, ChatGPT has been heralded as a game-changer. It already powers Bing’s new challenge to Google search, and in January Microsoft announced it would invest billions of dollars in OpenAI.

It’s a historic milestone, and this AI chatbot can perform some truly remarkable tasks. You can ask it to write a song in the style of your favourite band, answer complex questions in encyclopedic detail (say, 500 words on World War Two), create website copy for you and even compose speeches!

But it can be a problem when you need accurate answers, as developers on the question-and-answer site Stack Overflow discovered. Because it is a statistical AI system, it is prone to hallucinations: confident-sounding but factually inaccurate responses to factual queries.

To reduce these errors, frequent human feedback is essential. Feedback lets the model be fine-tuned by people who have actually seen its output, teaching it to answer human inquiries more accurately and to produce text that is grammatically correct.
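As a rough illustration of how such feedback might be folded back into a model, consider the hypothetical snippet below. OpenAI’s actual approach (reinforcement learning from human feedback) is considerably more elaborate; this sketch only shows the simplest version of the idea, keeping highly rated responses and reusing them as training examples. The feedback_log data and fine_tune_step function are invented for illustration.

```python
# Hedged sketch, NOT OpenAI's actual RLHF pipeline: keep only the
# responses that human reviewers rated highly, then fine-tune on them.
feedback_log = [
    {"prompt": "capital of France?", "response": "Paris.", "rating": 5},
    {"prompt": "capital of France?", "response": "Lyon.", "rating": 1},
]

def fine_tune_step(prompt, response):
    # Stand-in for a real gradient update on a (prompt, response) pair.
    print(f"training on: {prompt!r} -> {response!r}")

# Filter for responses people approved of, and learn from those.
approved = [ex for ex in feedback_log if ex["rating"] >= 4]
for ex in approved:
    fine_tune_step(ex["prompt"], ex["response"])
```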

Unfortunately, though, the machine learning model requires a considerable amount of time to train. Training involves reading enormous amounts of text from the Web, repeatedly guessing the next word in a passage, and comparing each guess against what the human author actually wrote.

Once it’s finished “reading” the Web, the result is a neural net that can generate text of its own. This is how a large language model is trained: it has to scan through huge amounts of text in order to learn what it should say next.
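A toy example can make the “read and guess the next word” idea concrete. The sketch below is not a neural network at all, just a bigram counter, but it captures the same training signal: observe which word actually follows which, then predict accordingly.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: "read" a corpus and count
# which word tends to follow which.
corpus = "the cat sat on the mat the cat ate the food".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1  # observe what actually came next

def predict_next(word):
    """Guess the most likely next word, as seen in training data."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> 'cat' (seen most often after 'the')
```

A real language model replaces the lookup table with billions of learned weights and conditions on the whole preceding passage rather than a single word, but the objective, predicting what comes next, is the same.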

The finished network contains on the order of hundreds of billions of weights, roughly comparable to the number of words (or tokens) in its training data. When generating text, it runs an outer loop: each token it produces is appended to the input and fed back in, so the model’s own output shapes what it says next, making the process effectively recursive.
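The hypothetical sketch below illustrates that outer loop under a simplifying assumption: the next_token function stands in for the real model and conditions only on the last word, whereas ChatGPT conditions on the entire context so far.

```python
# Minimal sketch of the autoregressive "outer loop": the model emits
# one token at a time, and each output is appended to the context and
# fed back in as input for the next step.
def next_token(context):
    # Invented stand-in for a trained language model.
    table = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}
    return table.get(context[-1])  # condition only on the last word

def generate(prompt, max_tokens=8):
    tokens = prompt.split()
    for _ in range(max_tokens):
        nxt = next_token(tokens)
        if nxt is None:
            break
        tokens.append(nxt)  # the output becomes part of the next input
    return " ".join(tokens)

print(generate("the"))  # -> "the cat sat on the cat sat on the"
```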

This neural network is an example of what’s known as a transformer net.

A transformer net processes a whole sequence of tokens at once and uses an attention mechanism: for each token, it computes weights that say how relevant every other token is, then uses those weights to decide which parts of the input should most influence its prediction of what comes next.
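Here is a minimal sketch of that attention computation, assuming the standard scaled dot-product formulation; the Q, K and V matrices are random stand-ins for what would, in a real transformer, be learned projections of the token embeddings.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: compare each token's query against
    every token's key, then blend the values by those weights."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of queries to keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V  # weighted mix of values, one vector per token

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 tokens, 8-dimensional queries
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(attention(Q, K, V).shape)  # (4, 8): one blended vector per token
```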
