The OpenAI Chaos is Proof We Need To Slow Down — But We Won’t

We’ve already gone too far.

Matt Croak Code

Photo by Andrew Neel on Unsplash

Even before all of this, I had my worries about unfettered AI “progress”. With the recent firing (and rehiring?) of Sam Altman, a lot of chaos has ensued over the last few days, and it has only bolstered my pessimism.

I’ve written a post about the impact of AI in video production. I also wrote one about how it can contribute to something called representational harms.

Now, I am not blind to the fact that I have a very limited understanding of the underpinnings of AI research, and of research into AI ethics. My fears, while I believe they stem from legitimate issues posed by AI’s recent, rapid, and very public growth, do come from the perspective of a novice.

So I ask: are there any experts in the field of AI research? Its efficacy? The ethics surrounding its growth, implementation, and distribution?

Indeed there are.

In fact, one of them was responsible for the recent firing (and rehiring) of Sam Altman.

Sam Altman (center) and Ilya Sutskever (right). Image via WSJ.

The Wall Street Journal reported that Ilya Sutskever, a fellow OpenAI board member (as of now), initially texted Altman and asked him to hop on a Google (not Microsoft Teams) call at noon Pacific Time last Friday.

He told Altman that he was fired “and the news was going out very soon”.

Well, the news has come out, and it has not been received well by, well, anybody.

OpenAI employees are mad.

OpenAI co-founder and president Greg Brockman is mad.

Investors are really mad.

Everyone has the same question…

“Why?” — Literally everybody

At this point, you are probably familiar with all of the players so far, their expertise and their role in the fiasco — the main one being Sutskever…
