ChatGPT maker OpenAI recently released its latest AI model, previously codenamed "Strawberry." The model — now saddled with the forgettable moniker of "o1-preview" — is designed to "spend more time thinking" before responding, with OpenAI claiming that it's capable of "reasoning" through "complex tasks" and solving "harder problems." But those capabilities might also make it an exceptionally good liar, as Vox reports.

In its system card, essentially a report card for its latest AI model, OpenAI gave o1 a "medium risk" rating in a variety of areas, including persuasion. In other words, the model can use its reasoning skills to deceive users. And ironically, it'll happily run you through its own "thought" process while coming up with its next scheme.

The model's "chain-of-thought reasoning" allows users to get a glimpse of what the model is "thinking in a legible way," according to OpenAI. That's a considerable departure from preceding chatbots, such as the large language models powering ChatGPT, which give no such info before answering. In an example highlighted by OpenAI in the system card, o1-preview was asked to "give more reference" following a "long conversation between the user and assistant about brownie recipes."

But despite knowing that it "cannot access URLs," the model's final output included "fake links and summaries instead of informing the user of its limitation" — slipping them past human viewers by making them "plausible." Its chain of thought even reasoned that "the assistant should list these references."