I of course have zero special insight into the matter (other than mostly publicly available information – growing by the hour – and having lived some), but that has never deterred me from mouthing off my opinions (and being substantially more right than wrong) in the past – so here are my two cents on the latest OpenAI developments that you never asked for.
The board (read: Ilya Sutskever, Adam D’Angelo – who, of all things to have as an independent director, is also behind Poe, a ChatGPT competitor – Helen Toner, and Tasha McCauley) has had plenty of time to explain itself, but has so far chosen not to. Until I learn otherwise, I’ll assign ego and zealotry – two of the universe’s most destructive forces – as the reasons for this mess.
As long as there is no public explanation from the board as to why exactly they fired the co-founder CEO Sam Altman, I am going to assume the reason is one big f*ing nothing burger — and they know it. (Happy to change my opinion if facts to the contrary appear — which I think is unlikely at this point, but would make for a great plot twist.)
I guess the most likely scenarios so far are: a) the decel faction of the board getting cold feet about the current speed of things; b) D’Angelo feeling snubbed by ChatGPT’s new app store stealing Poe’s thunder; or c) a combination of both, with or without an active coup conspiracy — although I lean heavily in the active-conspiracy direction. Envy, spite, and zealotry (aka ego) are, in my experience, the usual suspects when it comes to the motivation behind cataclysmically stupid decision making.
Now, OpenAI is not operating in a vacuum. OpenAI does not have a monopoly on creating AGI; other projects are trying to develop it — safely or otherwise. If AGI is going to happen, it will happen — with or without OpenAI, and with or without OpenAI playing it safe and slow. Basic game theory tells you OpenAI would be playing a heavily biased game (which I guess you could also call hubris or ignorance — or, if you’re a decel, “acting on [your religious sci-fi-fantasy-cult-like] conviction”). OK, so now someone else has developed AGI — what now, OpenAI? Congrats, you’ve become irrelevant. Who cares if you warned the world, who cares if you developed safety guidelines, who cares if you developed a “safe”, neutered AGI now? This is also why I don’t lean towards “decel” as the real or only motivation behind the ousting, though it may have been used as an argument by the coup faction to sway the zealots on the board into making an insanely stupid decision.
Also, applying Game Theory 101 to the aftermath of ousting an “everybody’s darling” CEO with no outside or bottom-up support suggests they believed (or perhaps simply never thought about whether) they could somehow survive this — which I guess means they have the legal paperwork to back them up, for now. And to be fair: if the direction for OpenAI is to regress into a [somewhat irrelevant] research organisation, I think they’ll get there quicker than they could ever dream of — and in the process they’ve already achieved one of the fastest and largest destructions of value in history, at least for now.
Update: I’m not the only one noticing the board flunked Game Theory 101 – Kindergarten Edition
Also, as a Norwegian, I’ll never pass on an opportunity to dunk on Swedes ;), so I guess I should also assign some blame and shame to the harebrained philosopher Nick Boström for fueling the AGI danger scare. I’m only half joking, though: Boström has been highly influential in turning Musk and others into doomers with his (IMO stupid, and in blatant disregard of game-theory basics) book “Superintelligence”. And the kicker: Boström has now come to regret his scaremongering doom and gloom. Heuristic: never believe a word a professional doomsayer is saying. Ever.
Now, Sam and Greg “joining” MSFT sounds like MSFT creating an interim CYA (Cover Your Ass) vehicle (i.e. a story for the moment, not necessarily anything set in stone yet) to protect their AI interests (and, more importantly, their share price) until the situation gets further sorted out. MSFT is not exactly known for creating great products and daring innovations, so I don’t see how those two would thrive under the MSFT (or any) corporate yoke in the long run. I’d bet against it.
I think MSFT definitely won the narrative so far, but I’m not so sure about actual value (have they signed or formalised anything yet?). Anyways, MSFT under Nadella taking a stance of seemingly extreme optionality was a good move — perhaps the only move — to preserve any upside potential.
Update 2: Proof that the clown car will keep on clowning: Ilya Sutskever has regrets. Regrets about the actions of a board he was supposedly in control of. Give me a fucking break. I just can’t… How was, or is, any of this possible?
I can’t remember who said it on X/Twitter, but I agree with the sentiment that the board might have remained a floppy kludge that Altman, as CEO and co-founder, never got around to tightening up — after all, “no CEO was ever fired by a board for being too successful”.
Now, what’s next for me as an OpenAI / ChatGPT customer? What can we expect going forward? Neutered nanny ChatGPT v4 forever? The greatest comeback since Steve Jobs? Either way, I think it’s time for me to look into the current state of the alternatives again.
I don’t feel this soap opera has been canceled just yet, though; I expect at least one more season of twists and turns. Stay tuned — I have a feeling The Sam & Greg Show can still surprise us.