
Is AI getting worse?

Good morning, welcome to AI Joe, the newsletter that’s more legit than a Supreme Court decision.


Here’s what’s on today’s agenda 👇️ 

Main story and opinion
  • Is OpenAI’s GPT-4 getting worse? 🙅 

Other stories
  • Netflix offering salaries up to $900K for new AI roles 🍿 

  • Brain2Music Model 🧠 ➡️🎵 

AI content of the week
  • Marc Andreessen talks AI with Joe Rogan

How you can use AI this week
  • Edit your writing with the ultimate ChatGPT prompt

Is OpenAI’s GPT-4 getting worse?

There’s been a flurry of tweets, news stories, and anecdotal posts online claiming GPT-4 is getting worse over time, not better.

A study from earlier this month sparked the debate: it claimed the March version of GPT-4 correctly identified prime numbers 97.6% of the time, while the June version managed only 2.4%. In other words, the numbers flipped exactly: it used to be right 97.6% of the time, and now it’s wrong 97.6% of the time. Digging slightly beneath the surface of the study, it became clear that the researchers ONLY tested prime numbers. That’s a problem: the June model wasn’t necessarily worse at reasoning about primality; the March model was simply more inclined to answer “yes” whenever it was asked whether a number was prime. When others replicated the study using a mix of prime and composite numbers, the March and June models performed similarly.
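
To see why a primes-only test set is misleading, here’s a quick Python sketch (my own illustration, not the study’s code): a dummy “model” that always answers “prime” looks perfect when it’s only shown primes, but is exposed as soon as composite numbers are mixed in.

```python
def is_prime(n: int) -> bool:
    """Ground truth: trial division up to sqrt(n)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def always_yes_model(n: int) -> bool:
    # Stand-in for a model that is simply biased toward answering "yes, it's prime".
    return True

def accuracy(model, numbers):
    return sum(model(n) == is_prime(n) for n in numbers) / len(numbers)

primes_only = [n for n in range(2, 2000) if is_prime(n)]   # the study's setup
mixed_set = list(range(2, 2000))                           # primes and composites

print(f"Primes-only accuracy: {accuracy(always_yes_model, primes_only):.1%}")  # 100.0%
print(f"Mixed-set accuracy:   {accuracy(always_yes_model, mixed_set):.1%}")    # ~15%, most numbers aren't prime
```

A perfect score on the primes-only set says nothing about whether the model can actually reason about primality, which is exactly the trap the original benchmark fell into.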

Does that mean it’s not getting worse?

Well, it depends who you ask.

Though this initial study has been “debunked,” there are many other anecdotal stories claiming the model is degrading over time.

One theory is that as more AI-generated content gets published, more of the training data itself becomes AI-generated. This article explores the consequences of training models on too much AI-generated data. The researchers call the end result “model collapse”: a model that keeps reinforcing its own biases and hallucinations with each round of training, until those errors become baked in and the model becomes unusable.
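
Here’s a toy simulation (my own sketch, not the researchers’ experiment) that gives a feel for the feedback loop: each “generation” of a very simple model, a fitted Gaussian, is trained only on samples produced by the previous generation, so estimation errors compound from one generation to the next.

```python
import random
import statistics

random.seed(0)

n_samples = 20  # small samples exaggerate how fast errors compound
data = [random.gauss(0, 1) for _ in range(n_samples)]  # generation 0: "real" data

for generation in range(201):
    mu = statistics.fmean(data)    # "train" the model: fit a Gaussian to the data
    sigma = statistics.stdev(data)
    if generation % 25 == 0:
        print(f"gen {generation:3d}: mean={mu:+.3f}  stdev={sigma:.3f}")
    # The next generation is trained only on the current model's own output.
    data = [random.gauss(mu, sigma) for _ in range(n_samples)]
```

Run it and the fitted distribution wanders while its spread steadily decays; the tails of the real data, the rare and unusual examples, are the first thing to disappear, which mirrors what the researchers describe as model collapse.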

The other theory is that the models are not degrading and are, if anything, getting better over time. Researchers believe that what power users and businesses running GPT-4 are experiencing is something called “model drift”: the model’s behavior shifts between versions, so users can’t expect consistent results from the same prompts. Businesses and power users who run the same prompts over and over are noticing the results aren’t as reliable as they once were. Even if the model is producing better answers overall, their specific “use case” may be negatively impacted, which makes it feel like the model is getting worse with time.
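
For teams worried about this, the practical defense is a regression suite for prompts: a fixed set of prompts with expected outputs, re-run against each model version. Here’s a minimal sketch; the prompt list and the query_model client are hypothetical placeholders, not a real API.

```python
from typing import Callable

# Hypothetical examples: (prompt, text that should appear in a good answer).
PROMPT_SUITE = [
    ("Is 97 a prime number? Answer yes or no.", "yes"),
    ("Extract the year from: 'Founded in 1998 in Menlo Park.'", "1998"),
]

def pass_rate(query_model: Callable[[str], str]) -> float:
    """Fraction of prompts whose response contains the expected text."""
    hits = 0
    for prompt, expected in PROMPT_SUITE:
        response = query_model(prompt)
        hits += expected.lower() in response.lower()
    return hits / len(PROMPT_SUITE)

# Usage sketch: wire in your own client, pinned to dated model snapshots
# (OpenAI exposes snapshots such as gpt-4-0314 and gpt-4-0613), then compare.
# march_rate = pass_rate(lambda p: my_client.ask(p, model="gpt-4-0314"))  # my_client is hypothetical
# june_rate  = pass_rate(lambda p: my_client.ask(p, model="gpt-4-0613"))
# print(f"March: {march_rate:.0%}  June: {june_rate:.0%}")
```

If the pass rate drops between snapshots, that’s drift for your use case, regardless of whether the model improved on average.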

Joe’s takeaway: There are always going to be bumps in the road. While I believe these are valid concerns, AI will continue to press forward. Technology challenges will always appear more frequently as usage grows. Technology always gets better through iteration. I’m excited to see how OpenAI and its competitors address these AI challenges.

Netflix offering salaries up to $900K for new AI roles 🍿 

While writers and actors continue to strike, AI opportunities at Netflix ramp up. Could these be linked?


Netflix has been using AI for years to help recommend shows and movies. The “recommended for you” section uses an AI algorithm to recommend shows based on your account history and the viewing patterns of millions of other users.

While a recommendation engine isn’t the most pressing issue in the Hollywood strikes, concerns that AI will creep into the writers’ room or onto the silver screen are being validated as production studios build their AI talent pools. There is also concern that AI will be used to determine the number of episodes and the funding required to make a show just successful enough to drive subscribers. That calculation doesn’t concern itself with the art of storytelling; it focuses on the math of profitability.

Algorithms dictate how many episodes a season needs to be before you reach a plateau of new subscribers and how many seasons a series needs to be on. That reduces the amount of episodes per season to between six and 10, and it reduces the amount of seasons to three or four. You can't live on that. We're being systematically squeezed out of our livelihood by a business model that was foisted upon us, that has created a myriad of problems for everyone up and down the ladder.

Fran Drescher, SAG-AFTRA, via Time Magazine

As this fight continues to heat up, only time will tell what influence AI has on the Hollywood machine.

Brain2Music Model 🧠 ➡️🎵 

Learning an instrument can be really difficult. Maybe learning to play with your brain will make it a little easier.


In a previous edition of AI Joe, we covered Google’s MusicLM, which takes a prompt and creates music. Now Google has taken it one step further and created music from prompts derived from measured brain activity.

How they did it

Five participants listened to 15-second music clips across different genres. While they listened to the clips, their brains were being scanned using fMRI technology. Researchers used these brain scans to train neural networks and build the Brain2Music model.

While the results don’t sound like the original clips, they do sound like music.

Joe’s takeaway: This is another case where we are seeing the very early results of brand-new technology. We’ll look back on this and laugh about how crude it was. This is the type of research that will eventually lead to products that change our lives. Whether it’s this specific research or other types of AI research, it’s only a matter of how it’ll change our lives and when.

Marc Andreessen joins Joe Rogan to talk AI

This is a long episode, so I’ve summarized the AI-specific notes below. It’s definitely worth listening to if you’ve got the time!

  • AI is like a sophisticated autocomplete that probabilistically uses all written text to predict the next word.

  • The next step for AI is to correlate seemingly unrelated information in new and interesting ways.

  • AI isn’t an existential threat; it’s a tool, like all the other tools humans have invented and used to create a healthier, safer, better world.

  • Two theories explain bias in AI systems: The first theory suggests that AI systems reflect biases in the training data. The second theory involves direct censorship and restraints on the model.

  • AI reduces the cost of knowledge work by orders of magnitude. As a result, it will free up spending power, increase quality of life, and create new jobs.

  • It can generate art, write a legal brief, or summarize a book. Moreover, it can adopt different styles to provide the best answers.

  • Kids today will perceive advanced AI as normal and it will be significant in their learning and coaching.

  • Some companies are already trying to get regulatory capture around AI, erect barriers to new startups, and ban open-source AI by arguing that it’s too dangerous.

  • Censoring or banning open-source AI would lead to a global surveillance state and could escalate to military action and international conflict.

  • Your role in combating censorship: Show up at town hall meetings and pressure politicians. Start with elected officials and judges, especially Supreme Court judges.

Joe’s takeaway: While many, arguably most, mainstream interviews focus on the dangers of AI, Andreessen is extremely optimistic. He acknowledges that AI can of course be used for harm, but argues that the technology will help push humanity forward.

I’d argue that AI is necessary for humanity to support an aging population. Researchers expect the human population to begin slowly declining in about 60 years. AI will be necessary to allow our economies to continue growing even without an increasing population.

The best ChatGPT Prompt for editing your writing

If you’ve ever tried to fix your writing with ChatGPT, you may have been disappointed with the result. While it will fix errors and simplify sentences, it also tends to “over edit” and often loses the tone of your original piece.


Here’s a prompt you can use on ChatGPT to get just the right amount of editing done:

You are an assistant that revises a user's document to improve its writing quality.

Make sure to:

- Fix spelling and grammar
- Make sentences more clear and concise
- Split up run-on sentences
- Reduce repetition
- When replacing words, do not make them more complex or difficult than the original
- If the text contains quotes, repeat the text inside the quotes verbatim
- Do not change the meaning of the text
- Do not remove any markdown formatting in the text, like headers, bullets, or checkboxes
- Do not use overly formal language

Here’s the document you will edit:

Instagram user @justinfineberg_

Know a Joe? Forward this email or send them a link to subscribe here

Got feedback? Reply to this email with your thoughts. We love hearing from readers!