
Watch out for a considered superpower of generative AI in o1 that might convince you that pigs can fly.

In today’s column, I will closely examine an irony of sorts regarding OpenAI’s latest ChatGPT-like model known as o1. The newly released o1 has a key feature that some suggest is its superpower.



It turns out that the very same functionality can lead people astray. Some might hotly proclaim that it could even convince people that pigs can fly. The issue at hand rests both at the feet of o1 and in the minds of the people who use o1.

Let’s talk about it. In case you need some comprehensive background about o1, take a look at my overall assessment in my Forbes column (see the link here). I subsequently posted a series of pinpoint analyses covering exceptional features, such as a new capability encompassing automatic double-checking to produce more reliable results (see the link here).

Unpacking AI-Based Chain-of-Thought Reasoning

First, a quick overview of AI-based chain-of-thought reasoning, or CoT, is worthwhile to set things up. When using conventional generative AI, there is research that heralds the use of chain-of-thought reasoning as a processing approach to potentially achieve greater results from AI. A user can tell the AI to proceed on a step-at-a-time basis, working through a logically assembled chain of thoughts, akin to how humans seem to think (well, please be cautious about overstating or anthropomorphizing AI).

Using chain-of-thought in AI seems to drive greater results.
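To make the technique concrete, here is a minimal sketch of step-at-a-time prompting with conventional generative AI, assuming the OpenAI Python SDK. The model name is a placeholder, and the "step by step" wording is just one common phrasing of the technique; it is not the internal mechanism of o1, which applies this kind of reasoning automatically.

```python
# A minimal sketch of chain-of-thought style prompting, assuming the
# OpenAI Python SDK (pip install openai) with an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

question = "A farmer has 17 sheep. All but 9 run away. How many are left?"

# Direct prompt: ask for the answer outright.
direct = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; substitute your own
    messages=[{"role": "user", "content": question}],
)

# Chain-of-thought style prompt: tell the AI to proceed step by step,
# showing each link in its chain of reasoning before the final answer.
cot = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": question
        + "\n\nWork through this one step at a time, "
        "showing each step of your reasoning before giving the final answer.",
    }],
)

print("Direct answer:", direct.choices[0].message.content)
print("Step-by-step answer:", cot.choices[0].message.content)
```

The only difference between the two requests is the added instruction to reason stepwise; that single line of prompt text is the whole of the technique in its simplest form.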
