
The busy month of July may be behind us now, but August is looking to be just as full of major headlines. Here are the biggest stories of the final days of July and the first week of August 2023. 

Williams Racing’s partnership with Kraken could give NFTs a place in F1. 

The crypto exchange Kraken has been working closely with Williams Racing since earlier this year. According to a recent post by Kraken, NFT holders will now be able to vote on an NFT to be placed on the team’s F1 cars in time for the October Grand Prix.

Starting today (August 1), any NFT holder whose asset is listed on Kraken’s native NFT marketplace will be eligible to submit their token before the contest closes on August 18. Williams and Kraken will then select the top 20 entries and put them up for a community vote from August 28 to 31.

Are AIs and language models getting worse? 

At the end of last year, ChatGPT, AI, and language models as a whole exploded into the mainstream. For a while, the rise of AI-based technology seemed inevitable, and the internet looked set to undergo a rapid period of change.

SERPs were going to be filled with AI-generated content that was identical from website to website and contained little information of value. Tech giants such as Google, Reddit and Twitter (now known as ‘X’) raced to ensure that their services would not be ruined by the rise of AI.

Despite this mix of panic and optimism, a recent study suggests that language models are not ready for widespread implementation. If anything, allowing the general public unrestricted access to AI has made the technology worse.

A recent study conducted and published by Stanford University researchers found that ChatGPT is underperforming in several areas compared with its performance just a few months ago. One of the most eye-catching results was that, when the language model was given a specific maths question, it got the answer right just 2% of the time, down from 98% at the beginning of the year.

The degeneration of the language model isn’t limited to closed or mathematical questions either. The same study noted that the AI’s responses to text prompts were also of a lower quality than before.

Why is AI getting worse?

There are multiple factors that could explain why ChatGPT and other language models have declined at the rate they have. One possibility is that, since the program was released to the general public, the language model has repeatedly been fed text prompts and questions that it generated itself.

In the same manner that photocopying a copy of something over and over again produces lacklustre results, ChatGPT’s responses to prompts are now infected by small errors that have been replicated for so long that the model has internalised them.
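The photocopy analogy can be illustrated with a toy simulation. The sketch below (an illustrative assumption, not taken from the studies mentioned in this article) stands in for a language model with a simple statistical model: each "generation" is fitted only to samples produced by the previous generation's fit, so small estimation errors compound over time and the model drifts away from the real data it started with.

```python
import random
import statistics

def fit(samples):
    # The "model": estimate mean and spread from the training data.
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mu, sigma, n, rng):
    # The model's "output": samples drawn from the fitted distribution.
    return [rng.gauss(mu, sigma) for _ in range(n)]

rng = random.Random(0)
data = generate(0.0, 1.0, 500, rng)  # generation 0: real data, spread = 1.0

sigmas = []
for generation in range(20):
    mu, sigma = fit(data)
    sigmas.append(sigma)
    # Next generation trains only on the previous model's own output --
    # the statistical equivalent of photocopying a photocopy.
    data = generate(mu, sigma, 500, rng)

# Each refit introduces a small estimation error, and because later
# generations never see the original data again, those errors accumulate
# like a random walk rather than averaging out.
```

Tracking `sigmas` across generations shows the fitted spread wandering away from the true value of 1.0; with no fresh real data to anchor it, nothing pulls the model back.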

It is important to remember that, despite being called a language model, AI such as ChatGPT does not understand English or any other language in the same manner that we do. This is why its responses can sometimes contain nonsensical or empty phrases.

Another possible explanation for AI getting worse is that it is a conscious decision by developers to improve the human-AI workflow. In a similar study, Stanford researchers analysed thousands of interactions between humans and language models. They noted that when the AI responded with a sentence that was more pragmatically accurate but tonally sedate or neutral, people found the answer lacklustre.

On the other hand, when the AI responded in a more confident tone but was less accurate in its answer, people found the response more fulfilling and positive.

“[…] by exaggerating its confidence levels, the overall performance of the human users improved significantly. The humans got more of the either/or questions right and also reported higher confidence levels in their correct answers. This improvement arose because in the particular tasks used in the study the AI performed better than human users and thus usually had “good” advice; yet, human users underutilised the AI’s advice when stated at a more sedate, mathematically accurate level.”

What does “worse” AI mean for digital marketing?

It is unlikely that the findings from the two studies will stem the flood of AI-written content that now crowds websites and search results. That being said, they do imply that AI-generated content will become easier to spot as it develops its own distinct style. Repetitive AI content is already pushed down by search engines such as Google in favour of more organic, human-centric content, and this is likely to increase to combat the rise of AI-written articles.

Moving towards author-focused, decentralised content marketing will only become more important as tech giants switch to Web3-style services and close the doors to AI scraping tools through API restrictions.