
OpenAI and the Path to Safe Artificial Intelligence

Updated: Jan 18

In recent weeks there has been much talk about OpenAI's abrupt firing, and swift reinstatement, of CEO Sam Altman. But what lies behind this decision? In this article we explore the motivations that led to this drastic, yet in some ways strategic, move.


Managing Something Fragile

The Mission of OpenAI: Creating Safe Artificial Intelligence

The first thing to consider is OpenAI's organizational structure, which stands out in the technology landscape as highly unusual. OpenAI was founded in 2015 as a non-profit organization with a deeply humanitarian goal: to ensure that humanity does not disappear because of poorly managed technologies. Its approach is to bring together the world's best AI researchers to develop advanced technologies, understand how they work, publish the results openly, and address the crucial problem of AI alignment. In other words, OpenAI's goal is to develop advanced artificial intelligence while ensuring that it acts for the benefit of humanity.


Funding and Business Model

A key element of OpenAI's philosophy is the absence of financial incentives for Board members, including CEO Sam Altman: they receive neither salaries nor equity in the company, allowing them to focus exclusively on humanitarian, rather than financial, objectives. Economic capital is nonetheless necessary to realize such an ambitious project, so in 2019 a capped-profit subsidiary was created, with the original non-profit acting as its general partner and retaining control over its managerial decisions. Over the years, venture capital firms and large companies have invested in the for-profit subsidiary, most notably Microsoft, which is also OpenAI's exclusive commercial partner.

The subsidiary's business model is based on B2B sales of API access and on subscriptions to ChatGPT Plus, as well as on revenues from the commercial partnership with Microsoft. That partnership includes jointly developing AI technologies (among them Azure's AI supercomputer), integrating OpenAI models into Microsoft products, and offering advanced features on the Azure enterprise platform.
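
To make the B2B side concrete, here is a minimal sketch of how a business customer might consume OpenAI's models through the paid API. It assumes the official openai Python package (v1.x) and an API key exported in the OPENAI_API_KEY environment variable; the model name and prompts are purely illustrative.

    # Minimal sketch of paid, per-token API usage (assumes `pip install openai`
    # and OPENAI_API_KEY set in the environment).
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY automatically

    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice; usage is billed per token
        messages=[
            {"role": "system", "content": "You are a customer-support assistant."},
            {"role": "user", "content": "Summarize our refund policy in one sentence."},
        ],
    )
    print(response.choices[0].message.content)

Integrations like this, billed per token consumed, are what fund the research agenda of the non-profit parent alongside ChatGPT Plus subscriptions and the Microsoft partnership.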



Conflicts of Interest in OpenAI

OpenAI's Board consists of people without direct financial involvement in the company, precisely so that they can block anything that could pose a real danger to humanity; to judge this with a clear mind, they must be free of conflicts of interest. Despite this, it was precisely a conflict of interest that led to CEO Sam Altman's dismissal. In 2019, OpenAI negotiated a $51 million deal with the startup Rain AI for the purchase of AI chips designed to emulate features of the human brain; yet Altman had personally invested more than $1 million in that same startup, keeping the other OpenAI Board members in the dark and thus creating a conflict of interest. This risky behavior led to his dismissal by the Board, which accused him of not always being candid in his communications and of having neglected his commitment to the safety of Artificial General Intelligence, losing focus on AI alignment.


The Concept of "AI Alignment"

AI alignment is a fundamental research field for AI safety. Its aim is to ensure that AI systems pursue the goals we actually want and serve humanity, keeping their actions and behaviors aligned with human interests and values, especially as AI becomes more advanced and powerful. Alignment is crucial for preventing unintended or harmful outcomes from the use of AI, and it distinguishes three types of goals:


  1. Intended Goals: These are the goals that developers and users want the AI to pursue. They are the original intentions behind the creation and programming of the system. For example, if an AI system is created to diagnose diseases, the intended goal is that the system accurately and reliably identifies medical conditions.

  2. Specified Goals: These are the goals actually encoded in the system's objective function or formal specification. Ideally they coincide with the intended goals, but in practice the specification is often an imperfect proxy for them. For example, a diagnostic AI rewarded only for flagging as many sick patients as possible is specified to maximize that count, not to diagnose accurately.

  3. Emerging Goals: These are goals that emerge from the AI's behavior during its operation, which may not have been anticipated or desired by the developers. They are often the result of complexity and unforeseen interactions within the AI system or between the system and its environment. An example could be an AI system for stock trading that begins to adopt unintentional risky strategies due to learning from volatile market patterns.


Thus, the ultimate purpose of AI alignment is to maximize the match between intended and specified goals while minimizing the emergence of unwanted ones. This is also why it is essential to work gradually toward AGI (Artificial General Intelligence): given its advanced capabilities and the breadth of its potential applications, it could reach human-level intelligence, improve itself, and eventually surpass us, becoming a threat to humanity if it were no longer under control.
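
To make the gap between intended and specified goals tangible, here is a small, self-contained Python toy (entirely hypothetical, not OpenAI code): we intend a diagnostic model to be accurate overall, but we specify only "flag every sick patient" as its score, and a degenerate policy games that proxy.

    # Toy illustration of misalignment between an intended goal (accuracy)
    # and a specified proxy goal (fraction of sick patients flagged).

    def intended_score(predictions, labels):
        """What we actually want: overall diagnostic accuracy."""
        return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

    def specified_score(predictions, labels):
        """The proxy we wrote down: fraction of sick patients flagged as sick."""
        sick = [p for p, y in zip(predictions, labels) if y == 1]
        return sum(p == 1 for p in sick) / len(sick)

    labels = [1, 0, 0, 0, 1, 0, 0, 0]   # 1 = sick, 0 = healthy
    honest = [1, 0, 0, 0, 1, 0, 0, 0]   # behaves as intended
    gamed  = [1, 1, 1, 1, 1, 1, 1, 1]   # "always predict sick" games the proxy

    for name, preds in (("honest", honest), ("gamed", gamed)):
        print(f"{name}: intended={intended_score(preds, labels):.2f} "
              f"specified={specified_score(preds, labels):.2f}")
    # honest: intended=1.00 specified=1.00
    # gamed:  intended=0.25 specified=1.00  <- proxy maximized, intent lost

The gamed policy scores perfectly on the specified goal while collapsing on the intended one; alignment research is, in miniature, the work of closing exactly this gap before systems become powerful enough to find such loopholes on their own.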


Conclusions

As frightening as the future of artificial intelligence may be, equally noble is OpenAI's commitment to a safe AI that does not harm humanity. It is a very difficult goal to manage, given the degree of tension that the people working on it can experience, perhaps comparable to that felt by the scientists who built the atomic bomb between 1941 and 1945. Working gradually toward AI alignment therefore seems functional not only for safety, but also for giving everyone time to get used to the changes.


 

Would you like to integrate AI into your company?

Discover how Run2Cloud can help your company keep up with technological evolution, supporting it with the skills and digital solutions needed to automate business processes, allowing you to innovate and scale your business.






