
News · August 31, 2024

What you need to know about Project Strawberry

Behind closed doors, the AI company OpenAI is working on a new project called Strawberry. The project is still top secret, but several leaks tell us more about the release and the capabilities of the new OpenAI model.

What is Strawberry by OpenAI?

In essence, Strawberry is a new AI model from OpenAI. According to initial insider information, it is said to solve more complex tasks than ChatGPT currently can – without hallucinating.

Strawberry is also said to solve tasks for which no specific training data is available, such as mathematical problems or specialized subject areas the AI has not been trained on. Even without this “knowledge”, the AI is reportedly able to offer reliable solutions.

Initially, Strawberry was apparently developed to help train Orion, which is expected to be OpenAI’s next flagship AI. Strawberry is intended to provide high-quality synthetic training data for Orion and thus eliminate the risk of hallucinations in the new model.

However, according to insiders, OpenAI has now decided to integrate a distilled version of Strawberry into ChatGPT. That way, we could also try out the new AI as a chatbot and put it to the test as soon as it is available.

When can we expect Strawberry?

According to insiders, we can expect Strawberry as early as fall 2024. OpenAI is said to have presented the AI to the NSA’s AI officials back in summer 2024, so an early release would not be entirely unreasonable. However, the release window has yet to be confirmed; OpenAI is still keeping quiet about Strawberry.

What we could use Strawberry for

If a chatbot version of Strawberry were to appear, we could use the AI for a variety of tasks. These include the mathematical tasks mentioned above, but also programming challenges and answering technical questions. According to insiders, Strawberry will take a little longer to respond than ChatGPT, but will provide more in-depth answers.

This is probably because the AI approaches tasks strategically and maps out a path to the solution. Strawberry was supposedly able to solve the New York Times’ Connections word puzzle. This approach would also make it possible to give detailed answers on more subjective topics; for example, the AI is reportedly able to develop marketing strategies for specific products.

What Strawberry could mean for ChatGPT and OpenAI

Wolfgang Stieler, editor for AI, robotics, and physics at MIT Technology Review, summarized his assessment of what Strawberry could mean for ChatGPT and other language models, and also discussed it in the podcast:

Solving mathematical problems could actually prove to be the key to taking large language models to a new level. Language models like ChatGPT have only been trained to produce sentences by predicting the next word that statistically best fits the prompt. This means such a model can learn to answer questions like “What is 2 + 2?”, but it is not actually doing any calculation – it has merely learned that the answer 4 appears most frequently in the training data. With complex tasks – and for a language model, even a simple rule-of-three problem can be complex – a second problem arises: language models work sequentially from input to output. They generate their answers word by word and cannot revise them after the fact.
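
To illustrate the difference Stieler describes, here is a small, purely illustrative Python sketch: a toy “model” that answers “2 + 2 =” by frequency lookup, contrasted with actually computing the result. The corpus and counts are invented for the example.

```python
from collections import Counter

# Pretend corpus: continuations that were observed after the prompt "2 + 2 ="
observed_continuations = ["4", "4", "4", "five", "4", "22"]

def predict_next_token(prompt: str) -> str:
    """Pick the statistically most frequent continuation -- no calculation involved."""
    counts = Counter(observed_continuations)
    return counts.most_common(1)[0][0]

def actually_calculate(prompt: str) -> str:
    """What a calculator or symbolic system would do instead."""
    left, right = prompt.rstrip("= ").split("+")
    return str(int(left) + int(right))

print(predict_next_token("2 + 2 ="))  # "4", because it is the most frequent continuation
print(actually_calculate("2 + 2 ="))  # "4", because it is actually computed
```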

Symbolic AI can do that kind of thing, but firstly it is not as flexible as large language models, and secondly it is very likely that OpenAI will first try to do even better what it already does very well – namely, improving large language models. A now widespread approach here is so-called chain-of-thought (CoT) prompting, in which AI models are made to work step by step when answering queries, using targeted prompting and suitable examples.
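
As a rough idea of what chain-of-thought prompting can look like in practice, here is a minimal sketch. The worked example and the ask_llm stub are hypothetical; in real use you would call the chat completion API of your choice.

```python
def build_cot_prompt(question: str) -> str:
    # A worked step-by-step example plus the "think step by step" instruction
    # nudges the model to lay out its reasoning before the final answer.
    example = (
        "Q: A train covers 60 km in 1.5 hours. How far does it go in 4 hours?\n"
        "A: Let's think step by step. 60 km / 1.5 h = 40 km per hour. "
        "40 km/h * 4 h = 160 km. The answer is 160 km.\n\n"
    )
    return example + f"Q: {question}\nA: Let's think step by step."

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to a chat-completion API."""
    raise NotImplementedError("Wire this up to the LLM of your choice.")

prompt = build_cot_prompt("A recipe needs 3 eggs for 12 muffins. How many eggs for 30 muffins?")
# answer = ask_llm(prompt)
```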

In keeping with these considerations, the new OpenAI approach is said to be similar to a method developed by Stanford researchers called Self-Taught Reasoner (STaR). The idea behind it is to teach language models to provide rationales for their answers, use the rationales that lead to correct answers as new training data, then use the fine-tuned model to generate further training data, and so on.
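
The STaR-style bootstrapping loop described here could be sketched roughly as follows. All helper functions and the Problem type are hypothetical placeholders, not code from Stanford or OpenAI.

```python
from dataclasses import dataclass

@dataclass
class Problem:
    question: str
    correct_answer: str

def generate_rationale(model, question):
    """Hypothetical: ask the current model for a step-by-step rationale and a final answer."""
    raise NotImplementedError

def fine_tune(model, examples):
    """Hypothetical: fine-tune the model on (question, rationale, answer) triples."""
    raise NotImplementedError

def star_loop(model, problems, iterations=3):
    for _ in range(iterations):
        verified = []
        for p in problems:
            rationale, answer = generate_rationale(model, p.question)
            # Keep only rationales whose final answer matches the known solution.
            if answer == p.correct_answer:
                verified.append((p.question, rationale, answer))
        # The verified rationales become training data for the next, better model.
        model = fine_tune(model, verified)
    return model
```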

The Tree of Thoughts method is also promising: using a tree structure, a language model can try out different reasoning paths and backtrack to an earlier point when it gets stuck. The code name OpenAI originally used for Strawberry, Q*, also points in this direction, because an algorithm called A* was a historic breakthrough in solving search and navigation problems.
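
A heavily simplified sketch of such a tree search over “thoughts”, with backtracking when a branch dead-ends, might look like this. The proposal, scoring, and solution-check functions are hypothetical stubs that would be backed by a language model.

```python
def propose_thoughts(path):
    """Hypothetical: ask a language model for a few candidate next reasoning steps."""
    raise NotImplementedError

def score(path):
    """Hypothetical: ask the model (or a heuristic) how promising this partial path is."""
    raise NotImplementedError

def is_solution(path):
    """Hypothetical: check whether the path already solves the task."""
    raise NotImplementedError

def tree_of_thoughts(start, depth=4, branch=3):
    # Depth-first search over reasoning steps; returning from a recursion level
    # is the "go back to an earlier point" behaviour described above.
    def search(path, remaining):
        if is_solution(path):
            return path
        if remaining == 0:
            return None  # dead end -> backtrack
        candidates = sorted(propose_thoughts(path),
                            key=lambda t: score(path + [t]), reverse=True)[:branch]
        for thought in candidates:
            result = search(path + [thought], remaining - 1)
            if result is not None:
                return result
        return None

    return search([start], depth)
```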
