Altman in Nigeria
Sam Altman, co-founder and CEO of OpenAI, the company behind the smash-hit AI chatbot ChatGPT, was in Lagos, Nigeria on May 19th. There was no officially stated reason for the visit, but some reports say it was part of a 17-city tour around the world, with Lagos his only stop in Africa. Some might also read it as a PR move to repair the company’s image in Africa, following a low-wage controversy involving African workers hired to help train ChatGPT.
But this is unlikely, as the controversy was barely discussed at the event. A more likely motive is that the visit is part of Altman and OpenAI’s continuing effort to shape the narrative on how the development of AI technologies should be regulated. The probable risks associated with AI have been debated for a long time, but until recently it was a niche debate conducted by scholars and nerds. ChatGPT, being the first widely available and commercialised application of the technology, has pushed the debate into the mainstream and accelerated the timeline for political intervention.
And trust me, the regulations are coming. This is from the communiqué of the just-concluded meeting of the leaders of the G7 countries (point 38):
‘’We commit to further advancing multi-stakeholder approaches to the development of standards for AI, respectful of legally binding frameworks, and recognize the importance of procedures that advance transparency, openness, fair processes, impartiality, privacy and inclusiveness to promote responsible AI. We stress the importance of international discussions on AI governance and interoperability between AI governance frameworks, while we recognize that approaches and policy instruments to achieve the common vision and goal of trustworthy AI may vary across G7 members. We support the development of tools for trustworthy AI through multi-stakeholder international organizations, and encourage the development and adoption of international technical standards in standards development organizations through multi-stakeholder processes.
We recognize the need to immediately take stock of the opportunities and challenges of generative AI, which is increasingly prominent across countries and sectors, and encourage international organizations such as the OECD to consider analysis on the impact of policy developments and Global Partnership on AI (GPAI) to conduct practical projects. In this respect, we task relevant ministers to establish the Hiroshima AI process, through a G7 working group, in an inclusive manner and in cooperation with the OECD and GPAI, for discussions on generative AI by the end of this year. These discussions could include topics such as governance, safeguard of intellectual property rights including copy rights, promotion of transparency, response to foreign information manipulation, including disinformation, and responsible utilization of these technologies. We welcome the Action Plan for promoting global interoperability between tools for trustworthy AI from the Digital and Tech Ministers’ Meeting.’’
Beyond the fears of AI becoming conscious enough to take over human systems and then deciding we are no longer necessary, there are also spirited debates about the socioeconomic implications of AI technology — particularly how it will affect jobs and widen income inequalities. The fear is that many workers, such as copywriters, paralegals, and teachers, could be replaced by AI, creating what economists call technological unemployment and costing many people their livelihoods.
Economist Daron Acemoglu (co-author of the famous book Why Nations Fail), who has been quite vocal on the subject and recently co-authored another historical tome on it, had this to say in his Lunch with the FT:
‘’The research shows that major technological disruption — such as the Industrial Revolution — can flatten wages for an entire class of working people. It also points to the distributional conflict and power dynamics inherent in it. “Yes, you got progress,” Acemoglu says, “but you also had costs that were huge and very long-lasting. A hundred years of much harsher conditions for working people, lower real wages, much worse health and living conditions, less autonomy, greater hierarchy. And the reason that we came out of it wasn’t some law of economics, but rather a grassroots social struggle in which unions, more progressive politics and, ultimately, better institutions played a key role — and a redirection of technological change away from pure automation also contributed importantly.’’
Anxiety about the impact of new technology on jobs is highly relevant in Africa. Investment in education has been poor, hence the level of human capital is low. Unemployment is very high — especially among the youth population. Efforts to boost manufacturing on the continent have not yielded much in terms of job creation — and some researchers have argued that this is due to firms opting for capital-intensive technologies and processes that do not involve human labour. The most prevalent jobs are low-paying, low-productivity service sector jobs, and there are fears that automation will worsen the unemployment situation.
Development economist Lant Pritchett wrote a characteristically blunt essay six years ago asking why technological entrepreneurs and geniuses are trying to destroy jobs in the poorest region of the world. He said:
‘’As I rode the Boston Red Line yesterday from the Kendall Square/MIT station I saw the poster above. Amazon is recruiting some of the global top talent, MIT engineers, to create robots.
That same day I was talking to a development practitioner who argued the biggest and hardest challenge facing Uganda was creating sufficient jobs for its youth bulge — about three quarters of the population is under 30 and 18–30 year olds make up 64 percent of the unemployed.
These two come together because the last time I was in Uganda the airport had switched to an all automated pre-paid parking system. The parking lot booth no longer employed anyone to collect payment.
This juxtaposition makes my head explode: Why are the world’s scarcest economic resources devoted to economizing one of the world’s most abundant economic resources?
The world’s scarcest economic resources are people with entrepreneurial and technical ability that can transform ideas and technology into scaled profitable production. We know entrepreneurial talent is scarce because its returns when successful are astronomical: Bill Gates, Larry Page and Sergey Brin, Mark Zuckerberg, Elon Musk, Jeff Bezos, Jack Ma are each worth tens of billions. And scientists and engineers with top talent — like MIT graduates — are scarce and heavily recruited into high paying jobs.
What drives world poverty is that the wage of low-skill labor is low, which suggests that it is abundant relative to demand. So what are Larry Page and Sergey Brin devoting their extremely scarce time and talents to? Among other things, developing a self-driving car. Elon Musk is working on a self-driving truck. Jeff Bezos wants even more robotics and drones and self-driving vehicles. Why are these geniuses working on destroying jobs?’’
Altman did not directly address this issue in his interactive session — possibly because we were not the ones asking the questions. He got different questions from the moderator and the audience, ranging from interesting to strange. For example, he was asked how ChatGPT will impact youth unemployment in Nigeria, and I immediately rolled my eyes. Digital technology has had an overwhelmingly positive effect on young people in Nigeria; however, the belief that we can ‘’tech’’ away some of our deep structural and policy problems is a baffling one.
One thing that stood out to me in his answers was his take on failure and the San Francisco tech subculture. He mentioned that SF is a place where failure does not define you — as long as you do good work, people will forget your past failures. He thinks this attitude has encouraged creative risk-taking better than any other place he has observed. Sam himself has benefitted from this in his career. His first attempt as a founder failed, after which he became a successful investor. Now he is having his second life as a founder, and everything seems to be going well so far.
Overall, I think tech leaders like Altman are trying to write the rules on AI development themselves (see Feyi’s note below). I suspect that they are worried politicians will make a mess of things. In my opinion, it is too early to tell what the impact of AI will be, and all sides trying to frame the debate need to be careful not to instigate a moral panic (see this brilliant essay by Cardiff Garcia). Africa also needs domestic leadership in harnessing the positives of AI. I think there can be a massive impact on education and healthcare. We can improve living standards in no small measure by delivering simplified academic instruction, diagnoses, and even prescriptions using AI technology on mobile phones to people (even in remote places) at very low costs. At least that is as far as my imagination can go for now. (Tobi)
It was hard to shake the feeling that he was overdoing his warnings about the dangers of AI. He stressed this several times and called for regulation, albeit with the caveat that the only regulation he thinks worthwhile must be done globally. I wasn’t sure how to process this, as global regulation of anything, let alone AI, would be practically impossible to achieve for a technology that is moving so rapidly, with America and China trying to gain advantages over each other.
Much of this is understandable: as much as OpenAI is technically a startup, it has been backed heavily by a giant corporation, Microsoft, which is embedded in the fabric of computing and the internet. However minimal the risk of AI going rogue and wiping out all of humanity is, Microsoft cannot be seen to be backing that kind of risk in any way. So the explicit ‘regulate me, now!’ calls make sense. How seriously should they be taken, though? Hard to say. One thing he did say was that he hoped for regulation as light as possible. And perhaps relatedly, he said on more than one occasion that he was against calls for pausing the development of AI. Make of that what you will.
One other thing I found interesting was the only time he said he was scared about what AI might do. Note that as much as he kept repeating his calls for regulation, he never did explicitly say what he thought the risks to regulate away were. But he did say he was terrified (I think that is the word he used) of how AI could be used for fraud, giving the example of a model trained on the voice and mannerisms of someone familiar to you, who then calls you asking for money. He did not link this in any way to Nigeria, but I couldn’t help but wonder about the damage that could be done by Yahoo Boys with access to AI. If you think things are bad now…
At an earlier, more intimate session, I got to ask him a question that had been bugging me since I read about it. In my head, I think of the Large Language Models behind AI tools like ChatGPT as the inverse of Moore’s Law, i.e. they have to keep getting bigger (more inputs) as they get better. So I was a bit surprised to read these widely circulated comments attributed to him a few weeks ago (from the best piece on how LLMs work I’ve read, by the way):
The sheer scale at which LLMs can process data has been driving their recent growth. GPT-3 has hundreds of layers, billions of weights, and was trained on hundreds of billions of words. By contrast, the first version of GPT, created five years ago, was just one ten-thousandth of the size.
But there are good reasons, says Dr Bengio, to think that this growth cannot continue indefinitely. The inputs of LLMs — data, computing power, electricity, skilled labour — cost money. Training GPT-3, for example, used 1.3 gigawatt-hours of electricity (enough to power 121 homes in America for a year), and cost Openai an estimated $4.6m. GPT-4, which is a much larger model, will have cost disproportionately more (in the realm of $100m) to train. Since computing-power requirements scale up dramatically faster than the input data, training LLMs gets expensive faster than it gets better. Indeed, Sam Altman, the boss of Openai, seems to think an inflection point has already arrived. On April 13th he told an audience at the Massachusetts Institute of Technology: “I think we’re at the end of the era where it’s going to be these, like, giant, giant models. We’ll make them better in other ways.”
My question was: what happens if LLMs simply can’t get any larger? Does AI become sort of fixed in what it knows? Happily, his answer was that his comments had been widely misinterpreted (he seemed annoyed by it) and that LLMs will continue to grow, but past a certain point the bulk of the improvements will come from much better algorithms rather than just feeding new information into them.
On a final point, Sam Altman is 38. He recalled vividly how fearful people were of Google when it launched, and the worry that kids would become stupid by being able to Google everything instead of thinking for themselves.
Maybe that’s why he’s only pretending to be afraid of what AI might do to us. (Feyi)