OpenAI's new anti-jobs program


On Tuesday, I planned to write a story about the implications of the Trump administration's repeal of the Biden executive order on AI. (The biggest implication: that labs are no longer required to report dangerous capabilities to the government, though they may do so anyway.) But then two bigger and more important AI stories dropped: one of them technical and one of them economic.

Sign up here to explore the big, complicated problems the world faces and the most efficient ways to solve them. Sent twice a week.

Stargate is a jobs program, but maybe not for humans

The economic story is Stargate. OpenAI co-founder Sam Altman, together with companies such as Oracle and SoftBank, announced a staggering planned $500 billion investment in "new AI infrastructure for OpenAI", that is, in data centers and the power plants that will be needed to power them.

People immediately had questions. First, there was Elon Musk's public declaration that "they don't actually have the money," followed by Microsoft CEO Satya Nadella's rejoinder: "I'm good for my $80 billion."

Second, some challenged OpenAI's assertion that the program would "create hundreds of thousands of American jobs."

Why? Well, the only plausible way investors get their money back on this project is if, as the company has been betting, OpenAI will soon develop AI systems that can do most of the work humans can do on a computer. Economists are fiercely debating exactly what economic impacts that would have, if it came about, though the creation of hundreds of thousands of jobs doesn't seem like one, at least not over the long term. (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent.)

Mass automation has happened before, at the start of the Industrial Revolution, and some people sincerely expect that in the long run it will be a good thing for society. (My view: that really, really depends on whether we have a plan to maintain democratic accountability and adequate oversight, and to share the benefits of the alarming new sci-fi world. Right now, we have no such plan, so I'm not cheering for the prospect of being automated.)

But even if you're more excited about automation than I am, "we will replace all office work with AI", which is fairly widely understood to be OpenAI's business model, is an absurd plan to spin as a jobs program. But then, a $500 billion investment in eliminating countless jobs probably wouldn't get President Donald Trump's imprimatur, as Stargate did.

DeepSeek may have figured out reinforcement on AI's own feedback

This week's other big story was DeepSeek R1, a new release from the Chinese AI startup DeepSeek, which the company advertises as a rival to OpenAI's o1. What makes R1 a big deal is less the economic implications and more the technical ones.

To teach AI systems to give good answers, we rate the answers they give us and train them to home in on the ones we rate highly. This is called "reinforcement learning from human feedback" (RLHF), and it has been the main approach to training modern LLMs since an OpenAI team got it working. (The process is described in this 2019 paper.)
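For the mechanically curious, here is a deliberately toy Python sketch of that loop. Every function is a stand-in I made up for illustration; in a real pipeline the answers come from a language model, the rankings from human raters, and the reward model is itself a neural network. Only the shape of the cycle (generate, rank, score, reinforce) mirrors RLHF.

```python
import random

def generate_answers(prompt, n=4):
    # Stand-in for sampling n candidate answers from a language model.
    return [f"{prompt} -> candidate answer #{i}" for i in range(n)]

def human_ranking(answers):
    # Stand-in for a human rater ordering candidates from best to worst
    # (random here, since there is no real rater).
    return sorted(answers, key=lambda _: random.random())

reward_model = {}  # stand-in for a learned reward model

def train_reward_model(ranked_answers):
    # Higher-ranked answers are assigned higher reward scores.
    for rank, answer in enumerate(ranked_answers):
        reward_model[answer] = len(ranked_answers) - rank

def policy_step(prompt):
    # Stand-in for the reinforcement-learning update: the "policy"
    # learns to favor whichever candidate the reward model scores highest.
    candidates = generate_answers(prompt)
    return max(candidates, key=lambda a: reward_model.get(a, 0))

prompt = "Explain RLHF in one sentence"
train_reward_model(human_ranking(generate_answers(prompt)))
print("Policy now favors:", policy_step(prompt))
```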

But RLHF is not how we got the wildly superhuman AI games program AlphaZero. That was trained with a different strategy, based on self-play: the AI was able to invent new puzzles for itself, solve them, learn from the solutions, and improve from there.

This strategy is particularly useful for teaching a model to do quickly anything it can do expensively and slowly. AlphaZero could slowly and time-intensively consider lots of different policies to figure out which move was best, and then learn from the best solution. It is this kind of self-play that made it possible for AlphaZero to vastly improve on previous game engines.
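Here's an equally toy sketch of that slow-teaches-fast loop, using a made-up number-guessing game rather than Go or chess. None of this is AlphaZero's actual implementation (which uses Monte Carlo tree search and a deep network); the point is just the structure: an expensive search finds a strong move, and the cheap policy is nudged toward it.

```python
import random

TARGET = 42  # the hidden "winning move" in our toy game

def score(move):
    # Game outcome: closer to the target is better.
    return -abs(move - TARGET)

fast_policy = {"guess": 0.0}  # stand-in for a cheap learned policy

def slow_search(budget=200):
    # Slow, expensive deliberation: evaluate many candidate moves
    # and keep the best one found.
    candidates = [random.randint(0, 100) for _ in range(budget)]
    return max(candidates, key=score)

def train_fast_policy(best_move, lr=0.5):
    # Self-play learning step: pull the cheap policy toward the move
    # that the expensive search discovered.
    fast_policy["guess"] += lr * (best_move - fast_policy["guess"])

for _ in range(10):
    train_fast_policy(slow_search())

print("Fast policy now guesses:", round(fast_policy["guess"]))  # ~42
```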

Of course, labs have been trying to figure out something similar for large language models. The basic idea is simple: you let a model consider a question for a long time, potentially using lots of expensive computation. Then you train it on the answer it eventually found, trying to produce a model that can get the same result more cheaply.
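In LLM terms, the recipe looks roughly like the sketch below. Again, every function here is a hypothetical stand-in rather than any lab's actual API; verifiable domains like math and code are popular targets because the "keep only good answers" step can be automated.

```python
def think_slowly(model, question, thinking_tokens=10_000):
    # Stand-in for expensive generation with a long chain of thought.
    return f"worked-out answer to {question!r}"

def verify(question, answer):
    # Stand-in verifier; math and code answers can often be
    # checked automatically.
    return True

def finetune(model, pairs):
    # Stand-in for training the model to emit these answers directly,
    # without the long deliberation.
    model["shortcuts"].update(pairs)
    return model

model = {"shortcuts": {}}
questions = ["integrate x^2", "reverse a linked list"]

good_pairs = {}
for q in questions:
    answer = think_slowly(model, q)   # slow and expensive
    if verify(q, answer):             # keep only answers worth learning
        good_pairs[q] = answer

model = finetune(model, good_pairs)   # now cheap to reproduce
print(model["shortcuts"])
```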

But until now, "the big labs seemed to not be having much success with this sort of self-improving RL," machine learning engineer Peter Schmidt-Nielsen wrote in an explanation of R1's technical significance. What has engineers so impressed (and so alarmed) by R1 is that the team appears to have made significant progress using this technique.

This would mean that AI systems can be taught to do quickly and cheaply anything they know how to do slowly and expensively, which would make for some of the fast and shocking improvements in capabilities that the world witnessed with AlphaZero, only in areas of the economy far more important than playing games.

One other notable fact here: these advances are coming from a Chinese AI company. Given that US AI companies are not shy about using the threat of Chinese AI dominance to push their own interests, and given that there really is a geopolitical race around this technology, that says a lot about how fast China may be catching up.

A lot of people I know are sick of hearing about AI. They're sick of AI slop in their feeds and AI products that are worse than humans but dirt cheap, and they aren't exactly rooting for OpenAI (or anyone else) to become the world's first trillionaires by automating entire industries.

But I think that in 2025, AI is really going to matter, not because of whether these powerful systems get developed, which at this point looks well underway, but because of whether society is ready to stand up and insist that it's done responsibly.

When AI systems start acting independently and committing serious crimes (all of the major labs are working on "agents" that can act independently right now), will we hold their creators accountable? If OpenAI lowballs its nonprofit entity in its conversion to fully for-profit status, will the government step in to enforce nonprofit law?

A lot of these decisions will be made in 2025, and the stakes are very high. If AI unsettles you, that's much more a reason to demand action than a reason to tune out.

A version of this story originally appeared in the Future Perfect newsletter. Sign up here!

Editor's note, January 25, 2025, 9 am: This story has been updated to include a disclosure about Vox Media's relationship to OpenAI.
