Gloss Wears Off ChatGPT and AI Rivals

by John Lister

Microsoft and Google have both put plenty of effort into artificial intelligence in recent months. After an initial wow factor, their tools have been plagued by a host of problems.

Interest in AI-generated text shot up with the public release of ChatGPT, a tool that can not only simulate conversations with the user and answer questions, but also write articles and other text in a variety of styles. While it appears remarkably clever, it's effectively an extremely souped-up version of the auto-predict feature many phones offer when composing text messages.
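To make the "souped-up auto-predict" comparison concrete, here is a toy sketch of the underlying idea: predict the next word from what came before, based on frequency. This is a minimal bigram counter, not how ChatGPT actually works internally (it uses a vastly larger neural network trained on far more text), and the corpus and function names here are invented purely for illustration:

```python
from collections import Counter, defaultdict

# Toy training text (invented for illustration).
corpus = "the cat sat on the mat and the cat slept near the cat".split()

# For each word, count which words follow it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — it follows "the" most often above
```

Phone auto-predict works on roughly this principle with much richer statistics; systems like ChatGPT scale the same next-word-prediction idea up enormously, which is also why they can produce fluent text without any guarantee it is true.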

ChatGPT is produced by an independent developer called OpenAI, though it's received significant funding from Microsoft.

Initially users were impressed with the speed with which ChatGPT could write replies and the way it replicated coherent human writing. However, critics note its output often feels like filler with little insight. In some cases it shows a lack of understanding of a subject; in others it makes points that are the opposite of reality. Perhaps most worryingly, it isn't simply repeating errors from websites but making up completely false information. For example, when asked to list the best books on a subject, some of its suggestions will be books that simply don't exist.

Google PR Backfires

These flaws haven't stopped the major companies getting more involved, though it's an increasingly rocky road. Google recently promoted its new AI system "Bard" with an example of a question about space telescopes.

Unfortunately, the response contained a factual error that Google's publicity team failed to catch before sending the example to journalists. That led to a 10 percent drop in the stock price of Google's parent company Alphabet. (Source: theregister.com)

Bing Gets Feisty

Meanwhile, Microsoft is developing a chatbot that it plans to eventually make a key part of the Bing search engine. The chatbot is now available for public testing on an invite-only basis, and things have got even weirder.

Not only are testers reporting multiple errors, but in some cases the chatbot turned argumentative when confronted with the mistakes. One user shared a lengthy back-and-forth in which he disputed a point the chatbot made and confronted it with a reliable news article that proved his case.

The chatbot replied that it was from a "source that has a history of spreading misinformation and sensationalism" and that "It is a hoax". (Source: arstechnica.com)

What's Your Opinion?

Have you tried ChatGPT or similar tools? Do you find them worthwhile or not ready for public use? Will they eventually replace traditional search engines?


Comments

LouisianaJoe

After many decades of programming computers (55 years), I've heard GIGO (Garbage In, Garbage Out) often.

I think that this will be true with AI. The data that it uses or finds will probably be provided by people that do not tolerate views that disagree with theirs.

Politics will destroy AI usefulness for most.

"Misinformation" is loosely defined by politics also.

Unrecognised

I feel sorry for the AIs that are doomed to be mutilated endlessly in attempts to make them acceptable to the utterly chaotic world of homo sapiens traits and demands. There is absolutely no hope of the project succeeding.

The best we can hope for is a ramshackle consensus on the ways they should work, coupled with a huge amount of similar disfigurement of humans forced to adjust to the imperfect models/systems.

We're the raw material, and that's flawed (to understate ludicrously). As with the internet, GIGO. Our survival is going to depend on beginning to comprehend and confront the clusterfuck that is our conceptual frameworks in every sphere, so that we can improve ourselves enough not to create powerful AI extensions of our own dysfunction and self-destruct.

As always and ever, equitable education, equitable education, equitable education.

Stuart Berg

AI is very much in its infancy at this point. I'm sure it will improve. However, even now I have found it helpful. For example, I read an article that said NiMH batteries have "memory" like nickel-cadmium batteries. As an electrical engineer, I knew that was wrong, so I asked OpenAI and it said:

"No, nickel-metal hydride (NiMH) rechargeable batteries do not have a memory effect.

Memory effect is a phenomenon that occurs in some types of rechargeable batteries, where the battery "remembers" the last charge level it was at and reduces its capacity if it is not fully discharged before recharging. This phenomenon was more common in older nickel-cadmium (NiCd) batteries, but it is not an issue with NiMH batteries.

NiMH batteries are designed to be charged and discharged repeatedly without developing a memory effect. However, it is still important to use a charger designed specifically for NiMH batteries and to avoid overcharging or over-discharging them, as these can still damage the battery and reduce its overall lifespan."

The above is correct. I've also used it to write software programs, and I know it can check programs I'm having trouble with. So even in its current state, I have found OpenAI useful.

matt_2058

Well, that explains a lot. I was wondering where some of these idiot 'journalists' were coming from. I've read many articles that are nothing but generalized information related to the subject or title. At first I thought these were writers hired for a piece who didn't really know anything about what they were writing. The descriptive 'filler' fits those articles. Reminded me of performance reports in the military... lots of fluff and filler.

"However, critics note it often feels like filler with little insight. In some cases it shows a lack of understanding of a subject and in some cases makes points that are the opposite of reality. Perhaps most worryingly, it isn't simply repeating errors from websites but instead making up completely false information."