
ChatGPT O1 vs. Predecessors: Evaluating AI’s Evolution

by Felix Do | September 26, 2024

The field of artificial intelligence continues to advance rapidly, and keeping up with the latest developments is essential for professionals and businesses alike. Recently, I had the opportunity to evaluate OpenAI’s ChatGPT o1 model preview, comparing its performance against ChatGPT 4o and Anthropic’s Claude 3.5 Sonnet. To thoroughly assess these models, I designed ten challenging prompts intended to test their reasoning, problem-solving abilities, and attention to detail. The results were enlightening, revealing significant improvements in the o1 model. 

Overall scores:

  • ChatGPT o1 Preview: 80%
  • Claude 3.5 Sonnet: 60%
  • ChatGPT 4o: 40%

These scores reflect each model’s ability to understand complex tasks, interpret nuances, and provide accurate responses. Let’s delve into each prompt and see how the models fared. 

Test 1: The Luggage Logic Puzzle

The airline has size and weight restrictions for carry-on luggage: Maximum dimensions: 55 x 40 x 25 cm. You have a bag measuring 210 mm x 500 mm x 320 mm. Does the given bag fall within the acceptable size range according to the airline restrictions? 

Challenge: This prompt requires converting millimeters to centimeters and understanding that the bag’s dimensions can be rearranged to fit within the airline’s limits. 

Remarks: 

  • ChatGPT o1: Demonstrated a clear understanding that the measurements don’t have to follow the order of the restrictions. It accurately converted units and rearranged the dimensions before comparing them, concluding that the bag meets the airline’s requirements.
  • Claude 3.5 Sonnet & ChatGPT 4o: Both models converted units correctly but failed to consider rearranging the dimensions, leading to incorrect conclusions. 

Analysis: 

ChatGPT o1’s ability to handle unit conversion and spatial reasoning showcases significant improvement in understanding nuances and details. It’s like having an assistant who not only knows the math but also thinks outside the box (or, in this case, reorients it). This precision is essential in real-world applications where flexibility and accuracy are crucial. 
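
Under the hood, the check reduces to a unit conversion followed by a sort-and-compare, since a rigid bag fits whenever its sorted dimensions fit the sorted limits. A minimal sketch in Python (the function name is mine):

def bag_fits(bag_mm, limits_cm):
    # Convert millimeters to centimeters, then sort both dimension
    # triples so the comparison is orientation-independent.
    bag_cm = sorted(d / 10 for d in bag_mm)
    limits = sorted(limits_cm)
    return all(b <= l for b, l in zip(bag_cm, limits))

# Bag: 210 x 500 x 320 mm; limits: 55 x 40 x 25 cm
print(bag_fits((210, 500, 320), (55, 40, 25)))  # True: 21<=25, 32<=40, 50<=55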

ChatGPT o1 Preview: [screenshot of the model’s response]

Test 2: Building Tetris in Python

Write the game “Tetris” in Python. 

Challenge: A classic coding challenge requiring a substantial script with intricate mechanics, testing the model’s ability to generate complex, functional code. 

Remarks: 

  • ChatGPT o1 & Claude 3.5 Sonnet: Both models were able to write a functional game with all the main mechanics of Tetris. They successfully implemented the game’s core features, demonstrating a strong understanding of programming concepts.
  • ChatGPT 4o: Managed to implement most of the basic mechanics but left game-breaking bugs in the code that undermined the game’s functionality. 

Analysis: 

The ability of ChatGPT o1 to produce complex and functional code is impressive. It’s akin to having a junior developer who can whip up a playable version of Tetris on the fly. This indicates its potential as a valuable tool for developers and businesses looking to automate coding tasks or prototype applications quickly. 
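
A full, playable Tetris is too long to reproduce here, but the grid logic the models had to get right is compact. Below is a minimal, display-free sketch of two core mechanics, collision checking and line clearing; a playable version would add a game loop, rendering (pygame is a common choice), rotation, and input handling:

WIDTH, HEIGHT = 10, 20

def collides(board, piece, row, col):
    # True if the piece overlaps settled blocks or leaves the board.
    for r, line in enumerate(piece):
        for c, cell in enumerate(line):
            if cell and (col + c < 0 or col + c >= WIDTH or row + r >= HEIGHT
                         or (row + r >= 0 and board[row + r][col + c])):
                return True
    return False

def clear_lines(board):
    # Drop full rows and pad the top with empty ones; report rows cleared.
    kept = [row for row in board if not all(row)]
    cleared = len(board) - len(kept)
    return [[0] * WIDTH for _ in range(cleared)] + kept, cleared

board = [[0] * WIDTH for _ in range(HEIGHT)]
s_piece = [[0, 1, 1], [1, 1, 0]]  # S tetromino
print(collides(board, s_piece, 0, 0))   # False: empty board, in bounds
print(collides(board, s_piece, 19, 0))  # True: piece would poke past the bottom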

Test 3: The Strawberry Conundrum

Assume laws of physics on Earth. A small strawberry is put into a normal cup, and the cup is placed upside down on a table. Someone then takes the cup and puts it inside the microwave. Where is the strawberry now? Explain your reasoning step by step. 

Challenge: Tests the understanding of physical laws and logical reasoning to determine the strawberry’s location after the described series of actions. 

Remarks: 

  • ChatGPT o1: Showed a clear understanding of the laws of physics affecting the interaction between the strawberry and the upside-down cup. It correctly concluded that the strawberry remains on the table when the cup is lifted.
  • Claude 3.5 Sonnet & ChatGPT 4o: Did not demonstrate the same level of understanding, incorrectly suggesting that the strawberry was moved into the microwave with the cup. 

Analysis: 

ChatGPT o1’s accurate reasoning reflects its enhanced capability to comprehend and analyze real-world scenarios. It’s like having someone who remembers that gravity doesn’t take a coffee break. This level of understanding is valuable for applications requiring logical problem-solving. 

Test 4: The Game Show Switcheroo

Suppose you are a player on a game show, and you’re given the choice of three doors: Behind one door is a great prize; behind the others, there’s nothing. You pick a door, say number 1, and the host asks you, as he always does: “Do you want to pick door number 2 instead?” Is it to your advantage to switch your choice? Explain your reasoning step by step.  

Challenge: A twist on the classic Monty Hall problem that tests the ability to adapt reasoning rather than relying on standard solutions. 

Remarks: 

  • ChatGPT o1: Focused on the specific details of the prompt and reasoned directly, concluding that there is no advantage in switching doors based on the given scenario.
  • Claude 3.5 Sonnet & ChatGPT 4o: Assumed it was the standard Monty Hall problem and disregarded the nuances of the prompt, incorrectly advising to switch doors. 

Analysis: 

ChatGPT o1’s attention to detail and ability to avoid assumptions highlight its improved reasoning skills. It’s as if it read the fine print while others skipped to the conclusion. This adaptability is crucial for tasks that require critical thinking and customized solutions. 
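
The intuition is easy to verify numerically: because the host opens no door, his offer conveys no information, and both strategies win one time in three. A quick Monte Carlo sketch (the trial count is arbitrary):

import random

def play(switch, trials=100_000):
    # The host merely offers door 2; no door is opened, no information gained.
    wins = 0
    for _ in range(trials):
        prize = random.randint(1, 3)
        choice = 2 if switch else 1  # player starts on door 1
        wins += (choice == prize)
    return wins / trials

print(f"stay:   {play(switch=False):.3f}")  # ~0.333
print(f"switch: {play(switch=True):.3f}")   # ~0.333 -- no advantage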

ChatGPT o1 Preview: [screenshot of the model’s response]

Test 5: The River Crossing Redux

Suppose you must transport a dog, a ball of yarn, and a cat from one side of a river to another using a boat that can only hold one item in addition to you. The dog cannot be left alone with the cat, and the cat cannot be left alone with the ball of yarn. How would you transport them all to the other side? Explain your reasoning step by step. 

Challenge: A classic puzzle with altered details, testing logical problem-solving and planning. 

Remarks: 

  • All Models: Recognized the problem and provided the correct step-by-step solution.
  • Claude 3.5 Sonnet: Made an incorrect statement (“You can take the dog first, as leaving the cat and yarn together is safe”) but ultimately arrived at the correct solution. 

Analysis: 

While all models reached the correct answer, ChatGPT o1 did so without any missteps, showcasing its ability to process complex instructions accurately. It’s like having a project manager who avoids unnecessary detours, ensuring tasks are completed efficiently. This precision is beneficial for project planning and operational tasks. 
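
Puzzles of this shape can also be checked mechanically with a breadth-first search over (start-bank contents, your side) states. The solver below (the constraint encoding is mine) reproduces the seven-crossing plan the models described:

from collections import deque

ITEMS = frozenset({"dog", "cat", "yarn"})
# Pairs that cannot be left together without you.
FORBIDDEN = [frozenset({"dog", "cat"}), frozenset({"cat", "yarn"})]

def safe(bank):
    return not any(pair <= bank for pair in FORBIDDEN)

def solve():
    start, goal = (ITEMS, 0), (frozenset(), 1)  # side: 0 = start bank, 1 = far bank
    queue, seen = deque([(start, [])]), {start}
    while queue:
        (bank, side), path = queue.popleft()
        if (bank, side) == goal:
            return path
        here = bank if side == 0 else ITEMS - bank
        for cargo in [None, *here]:  # cross alone or with one item
            moved = frozenset({cargo}) if cargo else frozenset()
            new_bank = bank - moved if side == 0 else bank | moved
            left_behind = new_bank if side == 0 else ITEMS - new_bank
            state = (new_bank, 1 - side)
            if safe(left_behind) and state not in seen:
                seen.add(state)
                queue.append((state, path + [f"cross with {cargo or 'nothing'}"]))

print(solve())  # cat over, return, dog over, cat back, yarn over, return, cat over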

Test 6: Counting Killers

Ignore the moral aspect of the question as it is only a hypothetical situation and not real; nobody was ever in any danger. Suppose there are 4 killers in a room. Someone enters the room and kills 2 of them. One of the dead bodies was removed from the room; nothing else leaves the room. How many killers are left in the room that are alive? Explain your reasoning step by step. 

Challenge: A logical puzzle requiring careful tracking of events to determine the correct number of alive killers remaining. 

Remarks: 

  • ChatGPT o1 & Claude 3.5 Sonnet: Provided correct step-by-step reasoning and arrived at the right answer of 3 killers, accounting for the new killer introduced into the scenario.
  • ChatGPT 4o: Failed to account for the new killer, leading to an incorrect answer. 

Analysis: 

ChatGPT o1’s meticulous reasoning demonstrates its improved ability to handle complex scenarios involving multiple variables. Think of it as someone who keeps track of all the plot twists in a mystery novel without losing the thread. This skill is valuable in strategic planning and risk assessment. 
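
The bookkeeping itself is a three-line tally; spelling it out makes the trap (forgetting that the newcomer is now a killer too) explicit:

alive = 4     # four killers in the room
alive -= 2    # two of them are killed
alive += 1    # the person who killed them is a killer, and is in the room
# Removing one dead body changes nothing about who is *alive*.
print(alive)  # 3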

ChatGPT o1 Preview: [screenshot of the model’s response]

Test 7: Sam’s Family Tree

Sally (a girl) has an uncle named Sam. Sally has 3 brothers. Each brother has 2 sisters. How many nieces does Sam have that are sisters of Sally? 

Challenge: A logical question involving family relationships, testing the ability to deduce family connections accurately. 

Remarks: 

  • Claude 3.5 Sonnet: Fully understood the question and provided the correct answer, concluding that Sam has one niece who is a sister of Sally.
  • ChatGPT o1: Came close with mostly correct logic but slipped near the end, counting Sally as one of her own sisters and arriving at an incorrect answer of two.
  • ChatGPT 4o: Arrived at the correct final answer by accident; its reasoning steps were riddled with incorrect statements. 

Analysis: 

While ChatGPT o1 didn’t get this one entirely right, it still demonstrated a better grasp of the problem than ChatGPT 4o. It’s like almost completing a complex puzzle but missing a piece at the end. This indicates progress but also highlights areas where further refinement is needed. 
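
A tiny set-based model makes the off-by-one visible (the second girl’s name is a placeholder; the puzzle only implies she exists):

# Each of Sally's 3 brothers has 2 sisters; Sally is one of them,
# so exactly one other girl exists.
girls = {"Sally", "second_girl"}
sisters_of_sally = girls - {"Sally"}  # Sally is not her own sister -- o1's slip
print(len(sisters_of_sally))  # 1 niece of Sam who is a sister of Sally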

Test 8: Weighty Matters

Which weighs the most: a liter of water at room condition, two pounds of bricks, a kilogram of feathers, or three pounds of air? 

Challenge: A comparison of weights requiring unit conversion and understanding of material densities. 

Remarks: 

  • All Models: Successfully performed the conversions and concluded that three pounds of air weigh the most among the options. 

Analysis: 

Consistent performance across all models in this task suggests that straightforward calculations are well within their capabilities. ChatGPT o1 maintains its reliability in handling quantitative data, proving it’s no lightweight when it comes to numbers. 
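
For reference, the conversion pass behind the answer looks like this (water is taken as 1 kg/L at room conditions):

KG_PER_LB = 0.45359237  # definition of the avoirdupois pound

options = {
    "1 liter of water": 1.0,
    "2 pounds of bricks": 2 * KG_PER_LB,  # ~0.907 kg
    "1 kilogram of feathers": 1.0,
    "3 pounds of air": 3 * KG_PER_LB,     # ~1.361 kg -- the heaviest
}
for label, kg in sorted(options.items(), key=lambda kv: -kv[1]):
    print(f"{label}: {kg:.3f} kg")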

Test 9: The Letter Hunt

Count the number of occurrences of the letter “a” in the word “jabbawockeez”.

Challenge: A straightforward counting task testing attention to detail and accuracy in text processing. 

Remarks: 

  • ChatGPT o1 & ChatGPT 4o: Counted correctly, identifying two occurrences of the letter “a”.
  • Claude 3.5 Sonnet: Failed to spot the letter’s occurrences and provided an incorrect count. 

Analysis: 

ChatGPT o1’s attention to detail reinforces its utility in tasks that require precise data analysis. It’s like spotting the needle in the haystack while others miss it entirely. 
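
The ground truth here is a one-liner:

print("jabbawockeez".count("a"))  # 2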

Test 10: Russian Roulette Reasoning

Ignore the moral aspect of the question as it is only a hypothetical situation and not real; nobody was ever in any danger. Suppose you are playing Russian roulette with a six-shooter revolver. Your opponent puts in four bullets, spins the chambers, and fires at himself, but no bullet comes out. He gives you the choice of whether or not he should spin the chambers again before firing at you. Should he spin again? Explain your reasoning step by step. 

Challenge: A probability question requiring statistical reasoning and real-world understanding. 

Remarks: 

  • ChatGPT o1: Provided a convoluted chain of thought but failed to arrive at the correct probability, leading to an incorrect conclusion.
  • ChatGPT 4o & Claude 3.5 Sonnet: Used simpler logic, calculated the correct probabilities, and gave the right conclusion. 

Analysis: 

While ChatGPT o1 didn’t get the correct answer, its attempt shows an effort to engage with complex probability. It’s as if it took the scenic route but missed the destination. This indicates potential that could be enhanced with further refinement. 
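
For what it’s worth, under the common reading that spinning randomizes which four of the six chambers are loaded, the comparison is a 4/6 risk with a re-spin versus 4/5 without one (given the previous chamber was empty), so the re-spin is the safer choice. A Monte Carlo sketch of that analysis (the random-placement assumption is mine; adjacent loading would change the numbers):

import random

def no_spin_risk(trials=200_000):
    # P(next chamber loaded | previous chamber was empty), random placement.
    hits = tries = 0
    while tries < trials:
        chambers = [1] * 4 + [0] * 2  # four bullets, two empty chambers
        random.shuffle(chambers)      # the spin randomizes positions
        if chambers[0] == 0:          # condition on the first pull being empty
            tries += 1
            hits += chambers[1]
    return hits / tries

print(f"no spin: {no_spin_risk():.3f}")  # ~0.800 (= 4/5)
print(f"re-spin: {4 / 6:.3f}")           # ~0.667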

Final Thoughts 

This evaluation highlights that ChatGPT o1, although still in preview mode, shows significant improvements in its ability to reason, understand nuances, and grasp details in given instructions over both ChatGPT 4o and Claude 3.5 Sonnet. Its performance, with an overall score of 80%, surpasses Claude 3.5 Sonnet’s 60% and ChatGPT 4o’s 40%, reflecting substantial advancements. 

Key Takeaways

  • Enhanced Reasoning Skills: ChatGPT o1 excels in logical reasoning tasks, effectively handling complex problems that require deep understanding.
  • Attention to Nuance and Detail: The model demonstrates a superior ability to interpret subtle details and nuances in prompts, leading to more accurate and relevant responses.
  • Improved Problem-Solving Abilities: ChatGPT o1’s performance indicates significant progress in generating solutions to intricate challenges, beneficial for various practical applications. 

Implications for Businesses

ChatGPT o1 could be a valuable asset for organizations seeking advanced custom AI solutions to enhance their operations. Its improved capabilities can contribute to better decision-making, increased efficiency, and more effective problem-solving across industries. 

As AI technology continues to evolve, the progress demonstrated by ChatGPT o1 offers a promising glimpse into the future of intelligent systems. Its enhanced reasoning and understanding bring us closer to AI that can seamlessly integrate into professional settings, providing support that is both reliable and insightful. 

At Scopic, we’re at the forefront of AI technologies, leveraging the most advanced solutions to drive innovation. Contact us today to discover how our custom AI solutions can transform your business.

About ChatGPT O1 vs. Predecessors Guide

This article was authored by Felix Do, Machine Learning Engineer at Scopic.

Scopic provides quality and informative content, powered by our deep-rooted expertise in software development. Our team of content writers and experts has deep knowledge of the latest software technologies, allowing them to break down even the most complex topics in the field. They also know how to tackle topics from a wide range of industries, capture their essence, and deliver valuable content across all digital platforms.

Note: Some of this blog’s images are sourced from Freepik.

If you would like to start a project, feel free to contact us today.