December 10, 2023

OpenAI drama, AI and Big Tech

Bappa Sinha

LAST month witnessed high drama at OpenAI, the company behind ChatGPT. The company’s CEO and co-founder, Sam Altman, was fired by the OpenAI board, apparently at the behest of chief scientist Ilya Sutskever. The move sent shockwaves across the tech world. OpenAI engineers wrote a letter to the board threatening to quit in protest. Microsoft, which had invested more than USD 10 billion in the company, offered to hire Sam Altman along with other key executives and engineers. Faced with an investor revolt and the collapse of its sky-high USD 86 billion valuation, the OpenAI board capitulated, and Sam Altman made a triumphant return as CEO. What explains this head-spinning sequence of events?

The mainstream media has sought to portray the events as a clash of worldviews, with CEO Sam Altman leading OpenAI down a reckless path, chasing profits and valuations without paying heed to the ethical and safety concerns of the board, led by chief scientist Ilya Sutskever. According to OpenAI’s website, its mission is “to ensure that artificial general intelligence benefits all of humanity.” Artificial General Intelligence (AGI) refers to machines developing human-like intelligence, enabling them to perform unfamiliar tasks and not just the tasks they are programmed or trained for. While the board officially gave only a vague statement about the firing, the narrative pushed by the media is that the board, and especially Sutskever, was getting increasingly concerned over the direction of OpenAI under Altman. They were apparently worried that OpenAI was on the path to developing AGI which, without safety concerns being addressed, could “wipe out humanity.”

Reuters reported that several staff members had written to the board warning about the “prowess and potential danger of an internal research project known as Q* (pronounced Q Star).” These people believed that Q*, which, unlike ChatGPT, could solve certain mathematical problems, had the potential to provide the elusive breakthrough towards AGI. However, other AI researchers remain sceptical of such claims. They point out that Q* can only solve primary-school-level math problems, which can hardly be classified as monumental, and that such capabilities are not new but have been known for some time, having already been published by other research teams.

Ilya Sutskever has been driving the talk about AGI being imminent, even claiming that ChatGPT showed glimpses of being “slightly conscious.” Such claims have been contested by other researchers, who have described large language models (LLMs) like ChatGPT as “stochastic parrots”: models that are great at mimicry, fooling people into believing they are intelligent by parroting the vast trove of text they have ingested, without any human-like understanding. Of late, Sutskever himself has been talking about such models “going rogue” and posing a threat to humanity. Whether or not he believes his own hyperbole, it clearly adds to the hype that OpenAI is doing path-breaking work towards achieving AGI, driving up the company’s valuation in the eyes of investors and enabling it to raise further funding to fuel its expensive research activities.

Setting aside the self-serving hype of rogue super-intelligent AI models presented as ethical concerns, we need to look at the far more ordinary ways in which AI models working on behalf of monopoly corporate interests harm society and humanity, often targeting the poor and ethnic and religious minorities. We know the havoc social media algorithms have wreaked on societies and democracies through the propagation of fake news and the creation of hate-filled bubbles. We also have numerous examples of AI decision systems the world over denying people legitimate insurance claims, medical and hospitalisation benefits, and state welfare benefits. AI systems in the US have been implicated in sentencing minorities to longer prison terms. There have even been reports of parental rights being withdrawn from minority parents based on spurious statistical correlations, which often boil down to them not having enough money to “properly” feed and take care of their children. As the noted linguist Noam Chomsky wrote in a recent article, “ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation.”

We cannot rely on the morality of big tech monopolies such as Google, Microsoft and Facebook, or companies funded by them such as OpenAI, to self-regulate, curb their monopolistic pursuit of super-profits, and work for the betterment of humanity. Nor will senior multi-millionaire executives at such companies, driven by fads like “effective altruism”, play such a role. It is for governments worldwide to step up and act quickly to prevent these tech monopolies and their AI models from harming society in myriad ways. Given their foundational nature, governments should treat these technologies as public goods. Governments should set up public initiatives to fund research and development in these promising technologies so that they can be safely and ethically developed and deployed for humanity’s greater good.