OpenAI creates AI models with human-like capabilities that can solve mathematical problems.

OpenAI, a leading company in its field, has launched new generative artificial intelligence (GAI) models that can reason more deeply and answer complex questions, particularly in the fields of mathematics, coding, and science.

The company also expressed its ambition to improve these models' performance and to close the gaps that remain in them.

In a statement Thursday, the leading technology company, creator of the revolutionary artificial intelligence chatbot ChatGPT, said that its new generative model, called o1, “thinks before it answers.”

The company launched the beta version of o1 on Thursday, and it will later be made available to ChatGPT users for a fee.

The company considers the model a new step in its ambitious project to “develop artificial general intelligence,” meaning intelligence comparable to that of humans.

Ambitions to “reduce hallucinations” and fill gaps

The company expressed its hope of developing tools that would reduce “hallucinations,” the errors that sometimes plague this type of software, in which the model invents inaccurate facts and information.

In doing so, the company is attempting to overcome shortcomings that plague generative AI models in general: they rely primarily on predicting answers rather than reasoning, and thus do not truly understand the sentences or images they generate. Many are also unable to process or create content other than text.

Practical experience

Agence France-Presse tested the o1 model by asking it simple logic questions. It reached the same results as GPT-4o, but it took longer and gave more detailed answers rather than generating them almost instantly.
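
A comparison of this kind is straightforward to reproduce. The following is a minimal sketch, assuming access to OpenAI's official Python SDK, an API key in the environment, and the model names “gpt-4o” and “o1-preview”; the logic question itself is an invented example. It sends the same prompt to both models and reports each answer along with the response time.

```python
# Minimal sketch: send the same logic question to gpt-4o and o1-preview
# and compare their answers and response times.
# Assumes the official OpenAI Python SDK (openai>=1.0) and an
# OPENAI_API_KEY environment variable; the question is an invented example.
import time

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = (
    "If all bloops are razzies and all razzies are lazzies, "
    "are all bloops lazzies? Explain briefly."
)

for model in ("gpt-4o", "o1-preview"):
    start = time.time()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": QUESTION}],
    )
    elapsed = time.time() - start
    answer = response.choices[0].message.content
    print(f"--- {model} ({elapsed:.1f}s) ---\n{answer}\n")
```

If the model behaves as the article describes, o1 should take noticeably longer per question, since its reasoning step runs before the visible answer is produced.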

For its part, the company tested its new model's mathematical problem-solving ability by entering it in a competition alongside more than 500 high school students in the United States.

The new model was able to produce lines of code after taking the time to think, just as humans do.

The company explained that the model is also capable of recognizing and correcting its own errors.

It also offers users reassurances not found in previous models regarding safety and alignment with human values, because its reasoning is now “readable.”
