School Bans, ChatGPT’s Drama, and Sam Altman’s Ouster: The Big AI Developments of 2023
The artificial intelligence (AI) industry began 2023 with a bang as schools and universities grappled with students using OpenAI’s ChatGPT to help with homework and essay writing.
Less than a week into the new year, New York City public schools banned ChatGPT—released weeks earlier to great fanfare—a move that would set the tone for much of the discussion around generative AI in 2023.
As the buzz grew around the Microsoft-backed ChatGPT and rivals like Google's Bard, Baidu's Ernie Bot and Meta's LLaMA, so too did questions over how to manage a powerful new technology that had become publicly accessible virtually overnight.
While AI-generated images, music, videos and computer code created by platforms like Stability AI’s Stable Diffusion or OpenAI’s DALL-E opened up exciting new possibilities, they also fueled concerns over misinformation, targeted harassment and copyright infringement.
In March, a group of over 1,000 signatories—including Apple co-founder Steve Wozniak and tech billionaire Elon Musk—called for a pause in the development of more advanced AI in light of its “profound risks to society and humanity.”
While no pause materialized, governments and regulators did begin rolling out new laws and regulations to put guardrails around the development and use of AI.
Though many AI questions remain unresolved heading into the new year, 2023 will likely be remembered as a major milestone in the history of the industry.
**Drama at OpenAI**
After ChatGPT racked up over 100 million users in 2023, developer OpenAI returned to the spotlight in November when its board abruptly fired CEO Sam Altman, saying he was “not consistently candid in his communications with the board.”
Though the Silicon Valley startup has been tight-lipped about the reasons behind Altman’s ouster, his removal was widely attributed to an ideological struggle within the company between safety concerns and commercial ones.
Altman’s removal kicked off five days of high-profile public drama in which OpenAI staff threatened to resign en masse, Microsoft announced it would hire Altman, and Altman was ultimately reinstated with a reconstituted board.
While OpenAI eventually moved past the drama, the questions raised during the upheaval remain unresolved for the industry as a whole, including how to weigh the push for profit and new product launches against fears that AI could quickly become too powerful or fall into the wrong hands.
In a July Pew Research Center survey of 305 developers, policymakers and academics, 79 percent of respondents said they were either more concerned than enthusiastic about the future of AI or equally concerned and enthusiastic.
Despite AI’s potential to transform fields from medicine to education to mass communications, respondents expressed worries over risks like mass surveillance, government and police abuse, job displacement and social isolation.
Sean McGregor, founder of the Responsible AI Collaborative, told ZME Science the year had laid bare the hopes and fears around generative AI, as well as the deep philosophical divides within the industry.
“The most hopeful thing is the light now being shone on the societal choices being made by technologists, though it’s concerning that many of my colleagues in tech seem to resent this attention,” McGregor told ZME, adding that AI should be shaped by the “needs of the most impacted people.”
“I’m still broadly positive, but we are in for a few rough decades as we come to terms with the fact that the discourse around AI safety is a dressed-up tech version of age-old societal challenges,” he said.
**Legislating the Future**
In December, European Union lawmakers agreed on landmark legislation to regulate the future of AI, capping a year of efforts by national governments and international bodies like the United Nations and G7.
Major concerns include the sources of the training data used to teach AI algorithms, much of which is scraped from the internet with little regard for privacy, bias, accuracy or copyright.
The draft EU legislation requires developers to disclose summaries of the data used to train their models and to comply with EU copyright law, restricts certain uses of the technology, and gives users an avenue to file complaints.
Similar legislative efforts are underway in the United States, where President Joe Biden in October issued a sweeping executive order on AI standards, and in the United Kingdom, which in November hosted an AI Safety Summit involving 27 countries and industry stakeholders.
China has also moved to regulate the future of AI, issuing interim rules for developers that require them to undergo a “safety assessment” before releasing products to the public.
The guidelines also restrict AI training data and prohibit content deemed to be “supporting terrorism,” “undermining social stability,” “subverting the socialist system” or “damaging the country’s image.”
On a global scale, 2023 also saw the first detailed international agreement on keeping AI safe, a non-binding accord signed by 18 countries including the United States, the United Kingdom, Germany, Italy, Poland, Estonia, the Czech Republic, Singapore, Nigeria, Israel and Chile.
**AI and the Future of Work**
Questions about the future of AI are also rampant in the private sector, where its use has already led to class-action lawsuits in the United States by writers, artists and news organizations over alleged copyright infringement.
Fears that AI could replace jobs were a driving factor behind the months-long Hollywood strikes by the actors’ union SAG-AFTRA and the Writers Guild of America.
In March, Goldman Sachs predicted that generative AI could replace up to 300 million jobs through automation and affect, at least in some way, two-thirds of current jobs in Europe and the US, making work more productive even as much of it is automated.
Others have sought to temper the more dire predictions.
In August, the International Labour Organization, the United Nations agency dealing with labor issues, said generative AI was more likely to augment most jobs than replace them, with clerical work the category most exposed to automation.
**Year of the ‘Deepfake’?**
2024 will be a major test year for generative AI as new applications hit the market and new laws come into force against a backdrop of global political upheaval.
Over the next 12 months, more than two billion people will vote in elections in a record 40 countries, including geopolitical hotspots like the United States, India, Indonesia, Pakistan, Venezuela, South Sudan and Taiwan.
While online disinformation campaigns are already a regular part of many election cycles, AI-generated content is expected to make things worse as false information becomes increasingly difficult to distinguish from real information and easier to replicate on a mass scale.
AI-generated content, including “deepfake” images, has already been used to sow confusion and anger in conflict zones like Ukraine and Gaza, and has begun appearing in campaigning for closely fought contests such as the 2024 US presidential election.
Meta last month told advertisers that it would bar political campaigns from using its generative AI advertising tools on Facebook and Instagram, while YouTube announced it would require creators to label realistic-looking AI-generated content.