"Winter is coming" is a common mantra within the AI community. Multiple AI winters have come and gone since the field's inception at the 1956 Dartmouth workshop. Shouts that AI will solve all our problems have quieted, and funding has dried up. Yet the AI climate, like our real climate, is getting warmer.
In addition to winters, fallacies abound within the AI domain. One in particular is quite pernicious: the goalpost fallacy. It involves changing the criteria of an argument so that the evidence provided can never conclusively meet the shifting standards. Sound familiar?
Imagine living in the 1960s. You would have been astounded if some AI had outpaced the human computers who performed all your calculations (see Hidden Figures). Yet when, a few years later, human computers had all but disappeared, no one attributed the feat to AI. It was just silicon, which had gotten better.
Denialists said that a true AI would be able to prove new mathematical theorems. Yet Simon and Newell's programs did just that, and no one cared. Indeed, their 1975 Turing Award was given in the middle of the first AI winter, triggered by the 1973 Lighthill report.
AIs should make decisions and know stuff. Expert systems arose from this new mantra. Yet the knowledge model they were built on hit a complexity wall. People could encode all their explicit knowledge via LISP, but much of what we know cannot be written down. The second AI winter (late 1980s to mid 1990s), triggered when companies had to scale back their surefire knowledge-engineering investments, was cold. Indeed, programs could beat humans at checkers in 1994, but a true AI would need to beat humans at chess.
The closest I had been to an AI summer was in 1997, when Deep Blue did just that; according to Kasparov, it happened because of random chance (see here). But in reality, the use of simple heuristics like the minimax algorithm and more abstract notations enabled a feat that almost reached the goalposts.
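If you have never seen it, minimax fits in a dozen lines of Python. This is a toy sketch only; the callbacks are placeholders I made up, and Deep Blue's real search added alpha-beta pruning, opening books, and custom hardware on top of this basic idea:

```python
# A toy minimax search, purely illustrative. `evaluate`, `moves`, and
# `apply_move` stand in for game-specific logic: a heuristic board
# score, the legal moves, and the successor state.

def minimax(state, depth, maximizing, evaluate, moves, apply_move):
    """Best achievable heuristic score, searching `depth` plies ahead."""
    legal = moves(state)
    if depth == 0 or not legal:          # leaf: fall back on the heuristic
        return evaluate(state)
    children = (minimax(apply_move(state, m), depth - 1, not maximizing,
                        evaluate, moves, apply_move) for m in legal)
    return max(children) if maximizing else min(children)
```

The whole trick is that a dumb heuristic, applied millions of times per second, starts to look like intelligence.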
Yet the late 90s were coke-fueled mayhem. Most of our digital present was written down and codified into standards during those years. Think of computing, the internet, online commerce, intellectual property law; all were made anew in that decade.
Were we happy with this new world brought by science? Hell no. By October 2002, the NASDAQ had fallen over 75% from its March 2000 peak. Computers were dumb. AI, given its reliance on computers, was dumb too.
But the early 2000s marked a warming of the AI winters. Simple algorithms like OLS regression became central to organizational decision-making. As computers entered more and more of our lives and the internet went through its three cycles (corporate, personal, things), companies realized that simple regressions paired with big data could help them predict the future, at least with enough certainty that the benefits outweighed the costs.
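To make that concrete, here is roughly what the workhorse looks like: an ordinary least-squares fit in a few lines of Python. A minimal sketch with simulated data; the scenario and numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "big data": 100k customers, two behavioral predictors
# (say, spend on lotion and on vitamins; the scenario is made up).
X = rng.normal(size=(100_000, 2))
true_beta = np.array([3.0, -1.5])
y = X @ true_beta + rng.normal(scale=0.5, size=100_000)  # noisy outcome

# Ordinary least squares: the closed-form fit behind the 2000s boom.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)  # ~[3.0, -1.5]: imperfect, but profitable at scale
```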
With this, we entered the era of surveillance capitalism. If you have time, read this article: How Companies Learn Your Secrets. The article profiles Andrew Pole, a statistician at Target in the early 2000s who realized he could use data to understand what different customers need. Andy found that one of the most critical changes in a person's spending life is having a child. As children come, complexity explodes and money is shed. So knowing exactly when a baby was coming was fundamental for a company chasing the lion's share of baby sales. There is some evidence that these algorithms sent coupons for baby products even before the mother-to-be had shared the news with her family (see here).
I share this in detail because, just as we think of computers as things made of silicon, we think of regressions as things made of statistics. Statistics and silicon are not AI; AI is a magical, mysterious thing made of sugar, spice, and everything nice. When I was a student in the mid-2000s, AI was a fun and useful but unimportant topic. I took an elective in neural networks and learned they could fix some minor problems in situations I might not fully understand. Still, careful and dutiful data analysis was better than nets.
I graduated with my MSc in 2012. That same year, the latest AI spring began, triggered by the 2012 ImageNet competition. That competition saw the first winning use of GPUs to train otherwise boilerplate neural networks. The GPU exhaust blew away the competition.
That GPU exhaust is analogous to the exhaust of cars and airplanes. For centuries, stoves and trains had been producing carbon dioxide. Yet it was only in the mid-20th century, when cars and planes went mainstream, that our world began to warm in earnest. Likewise, it was only after GPUs came along that the true power of dumb nets could blow us away.
Yet critics continued to move the goalposts. AI would not be real until it beat humans at games other than chess. It did. Wait, but Go should be the benchmark; it is so hard. It did. Actually, it should pass the Turing test. It did. It should be creative. It did. It should be empathetic. It did. It should be able to drive. It did. To run a company. It did. It should be whatever floats your boat. It did.
Still, the idea of an AI winter looms in the background. Last month, Nature wrote about how "The AI revolution is running out of data". This is a real problem: AI is not a good cannibal. It thrives on a diet of human data, yet feeding it its own output does not produce more knowledge.
So, yes, winter is coming. But what will this winter look like? For one, it won't be cold. In fact, the next AI winter will probably be warmer than any prior spring. Our world is changing. Since the turn of the century, we have gone from a world in which a sweaty Steve Ballmer yelled for more and more developers to one in which software companies see little prospect of hiring new ones.
So what happened to that wall AI was supposed to hit?
Well, if Westeros were hit by massive climate change, the Wall way up in the north would melt. We would continue to go up and up and up. Sometimes the pace might slow, but winters, even when they come, will never again be cold.
Building igloos or ice hotels might become a flawed business model. Just as insurance companies are pulling out of Florida and Los Angeles, jobs built on tasks AI systems do well, such as reading text, should disappear. Gone are the days in which lawyers could keep an army of paralegals collecting data. The same goes for low-level consultants, investors, and so on.
Yet other professions will fare better. Medicine, in particular, I expect to grow. As the need for rote memory decreases, the thresholds for selecting potential doctors will fall. Nurses and medical aides will be given more and more freedom. Specialists will become more common and more desirable, as research will be needed to feed the machine. And I imagine the standard of care will improve.
Contrary to many, I expect the liberal arts to grow as well. The machine needs new knowledge, and that knowledge will be built in areas where hard-to-codify knowledge starts getting written down. The people who will do this have broad backgrounds: people who can work with an expert system, leverage its value, and connect pieces others cannot.
Note, however, that this does not spell doom, even in areas where a decline might be expected. The Jevons paradox predicts that demand rises as a good's cost goes down; mind-blowing stuff. Think of psychotherapists or lawyers. Their services are costly. Yet with ChatGPT you can prepare a low-level lawsuit and take it to court reasonably well. Sure, you won't beat Microsoft in court, but contesting a traffic ticket will become much easier in the future.
A million people are aiming to upend psychotherapy. I hope they succeed. As 2020 showed us, we are fragile. A radical improvement in our global emotional resilience and state of mind would be priceless.
That said, I wonder what the world will bring. As data gets scarce, oligopolies will form. Boundaries will be introduced. Some LLMs will have better science data; others, better legal data; others will be built for entertainment. And so on. Big efforts will be spent on maximizing the openness of data, as any competitive advantage will depend on differentiation. Yet I imagine that companies sitting on huge pay-walled knowledge piles, such as SpringerNature, Wiley, and Elsevier, will become highly valuable in the future.
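How concentrated might that market get? The standard measure is the Herfindahl-Hirschman index, the sum of squared market shares, which the bsky post in the PS below wonders about. A toy calculation; the shares are made up purely for illustration:

```python
# Herfindahl-Hirschman index: the sum of squared market shares. It runs
# from 1/n (perfectly even) to 1 (monopoly); above ~0.25 is usually
# called highly concentrated.

def herfindahl(shares):
    total = sum(shares)
    return sum((s / total) ** 2 for s in shares)

# Hypothetical shares of pay-walled training text (invented numbers).
data_market = {"Elsevier": 0.30, "SpringerNature": 0.25,
               "Wiley": 0.15, "everyone else": 0.30}
print(round(herfindahl(data_market.values()), 3))  # 0.265: concentrated
```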
It's hard to make predictions about the future. But I am confident that the next winter will be warm.
PS: This builds on a prior bsky post that used the figure above and linked to the same paper. The post read:
I wonder what the Herfindahl index of data will be. It seems LLM training sets approach all our data.
If there are just 100x more data to be used in training, data oligopolies will soon emerge.
Elsevier, SpringerNature, or Wiley might gain a lot of value very soon