Wanism’s Newsletter
What happened in tech that actually mattered, and what did it mean?
Electricity shortages have become a critical bottleneck as AI’s rapid development drives soaring power demand. This issue explores potential power sources such as solar, wind, and nuclear energy, and examines how pursuing efficiency in AI algorithms and chips could help alleviate the power crunch in the long run.
Meta CEO Mark Zuckerberg and Tesla CEO Elon Musk have both said that electricity shortages are now the biggest challenge in building AI data centers: even with ample funds to purchase GPUs, the power simply is not there. Constructing a large power plant takes many years, with site selection and environmental assessment alone consuming several. Electricity will therefore remain a bottleneck for AI development for the next three to four years. OpenAI’s Sam Altman likewise believes that future AI progress will depend on significant breakthroughs in energy technology.
According to an analysis by Goldman Sachs, global data center electricity demand is expected to grow by 160% from 2024 to 2030. It has become a consensus that electricity is the key bottleneck for the future development of AI.
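The 160% headline figure implies a steady compound growth rate. A quick back-of-the-envelope check (the uniform-annual-growth assumption is mine, not Goldman Sachs's):

```python
# Goldman Sachs projection cited above: global data center electricity
# demand grows 160% from 2024 to 2030. Implied compound annual growth
# rate over the 6-year span, assuming uniform annual growth:
total_growth = 1.60                      # +160% -> demand ends at 2.6x
years = 2030 - 2024
cagr = (1 + total_growth) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")   # roughly 17% per year
```

That is a demanding pace for grid operators: capacity planning cycles for new plants and transmission lines run on comparable or longer timescales.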
As AI development faces the bottleneck of electricity supply, exploring potential electricity sources becomes crucial.
Because of climate change, thermal power, today’s mainstay of generation, is unlikely to remain the primary source. Relatively clean natural gas will still play an important role as a buffer for renewables, thanks to its quick start-up. Coal-fired power is traditionally highly polluting; although ultra-supercritical technology combined with carbon capture can now significantly reduce emissions, it may still fail to meet tightening environmental standards.
Among renewable energy sources, solar has the most development potential. The cost of solar generation has fallen dramatically over the past few decades, and as it continues to decline, solar is poised to become the most important energy source for human civilization.
According to the latest LCOE 2023 report from the investment bank Lazard, utility-scale solar generates power at $24-96 per megawatt-hour. New nuclear plants are estimated at $141-221, while fully depreciated nuclear plants run about $31. New coal-fired plants range from $68-166 per megawatt-hour; even fully depreciated, coal costs about $52, still more than twice the cheapest solar, because fuel is the main cost of coal-fired power.
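The Lazard figures quoted above can be lined up side by side; a small sketch (the dictionary layout and labels are mine, the numbers are from the text):

```python
# LCOE figures in $/MWh, as quoted from Lazard's 2023 report above.
lcoe = {
    "solar, utility-scale":       (24, 96),
    "nuclear, new build":         (141, 221),
    "nuclear, fully depreciated": (31, 31),
    "coal, new build":            (68, 166),
    "coal, fully depreciated":    (52, 52),
}
cheapest_solar = lcoe["solar, utility-scale"][0]
for source, (low, high) in lcoe.items():
    # Compare each source's low end against the cheapest solar figure.
    ratio = low / cheapest_solar
    print(f"{source:27s} ${low}-${high}/MWh  ({ratio:.1f}x cheapest solar)")
```

Even fully depreciated coal ($52) lands at more than 2x the low end of utility-scale solar ($24), which is the "more than twice as expensive" comparison in the text.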
In addition to its low price, solar energy has several other significant advantages, though it also has notable disadvantages, chief among them unstable, weather-dependent output.
Wind power comes in two forms: onshore and offshore. Onshore wind is even cheaper than solar, but development is difficult because suitable sites are limited and noise draws local protests; most viable onshore projects have already been developed. Offshore wind costs more, generally above solar, at $72-140 per megawatt-hour. Like solar, offshore wind faces strict siting requirements (it needs strong, steady winds) and unstable output.
Renewable energy is developing at full speed, coupled with the large-scale construction of energy storage facilities. Together, these sources are economically viable and relatively inexpensive.
Nuclear energy is one of the hottest topics of recent discussion, and nuclear fusion is the ideal choice for power generation: light elements (such as hydrogen isotopes) fuse at ultra-high temperatures and release tremendous energy. Compared with fission, fusion produces almost no radioactive waste and is considered the cleanest possible generation method. However, its technical threshold is extremely high; fusion remains at the laboratory stage, with a long road to commercialization.
Even once the technology matures, large-scale commercialization of fusion will take at least 20 years, too slow for urgent needs. Microsoft has signed a contract with Helion Energy, a fusion startup backed by Sam Altman, to supply fusion power by 2028. But even if Helion builds the first commercially viable fusion plant within ten years, it will probably take another decade before governments worldwide adopt such technology.
Small modular reactors (SMRs), another hot topic, may generate power at a slightly higher cost than large nuclear plants, but they are safer: being smaller and containing less radioactive material, they pose less risk to the environment and the public in an accident. The International Atomic Energy Agency (IAEA) has stated that SMRs can significantly reduce the possibility of releasing radioactive material into the environment in the event of an accident. Most SMR projects, however, are still under construction or in planning, and some data suggest they will cost even more than existing nuclear plants; the technology does not yet seem fully ready.
From the demand side, will human appetite for computing power prove a bottomless glutton, or will it eventually shift toward a model that pursues efficiency? So far the focus has been on adding compute, consuming electricity, and generating power as fast as possible to keep up. But this curve will stall at some point unless fusion delivers a breakthrough. Ultimately, solving the energy problem may come back to cutting power consumption through more efficient AI algorithms and chips: all else being equal, a 20-million-parameter model consumes far more power to train than a 1-million-parameter one.
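The parameter-count claim can be made concrete with the common "6 FLOPs per parameter per training token" rule of thumb for dense transformer training. A minimal sketch; the token count is an illustrative assumption, not from the text:

```python
# Rough training-compute comparison using the common ~6*N*D FLOPs
# heuristic (6 floating-point operations per parameter per token).
def training_flops(params: int, tokens: int) -> float:
    return 6.0 * params * tokens

tokens = 1_000_000_000          # assume both models train on 1B tokens
small = training_flops(1_000_000, tokens)    # 1M-parameter model
large = training_flops(20_000_000, tokens)   # 20M-parameter model
print(f"20M-param model needs {large / small:.0f}x the compute")  # 20x
```

At fixed hardware efficiency, compute is roughly proportional to energy, so a 20x compute gap translates directly into the power gap the paragraph describes.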
The situation resembles the early days of CPUs, when power consumption was not a major concern. Early Intel x86 Pentium 4 CPUs were power-hungry monsters, and ARM later gained the upper hand precisely because of its superior power efficiency. The same holds for GPUs now: initially no one cared about GPU power draw, but within three or four years we may see chips that match the previous generation’s performance at only 50% of the power. As power expansion hits bottlenecks, AI chip efficiency will inevitably improve, and application-specific integrated circuits (ASICs) may be used to run computations more efficiently.
For example, take speech recognition and speech synthesis, models already widely used in AI products. Suppose one recognition model achieves 99.99% accuracy at a compute cost of 100, while another reaches only 99% accuracy but at a compute and power cost of just 10. When it comes to commercialization, the slightly less accurate but far cheaper version will likely win.
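The tradeoff in the hypothetical above can be tabulated; the accuracy and cost numbers come from the text, while the errors-per-10,000-utterances framing is my illustrative assumption:

```python
# Toy comparison of the two hypothetical speech recognition models.
models = {
    "high-accuracy": {"accuracy": 0.9999, "cost": 100},
    "efficient":     {"accuracy": 0.99,   "cost": 10},
}
for name, m in models.items():
    # The efficient model makes ~100x the errors, at 1/10th the cost.
    errors_per_10k = (1 - m["accuracy"]) * 10_000
    print(f"{name:13s} errors/10k = {errors_per_10k:.0f},"
          f" relative cost = {m['cost']}")
```

Whether 100x the errors is acceptable depends entirely on the application; for many consumer products, the answer is yes, which is the commercial logic the paragraph points to.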
In the long run, perhaps 70-80% of AI applications will pursue not peak performance but a balance between efficiency and performance. So we should not think about AI power consumption purely from the generation and supply side; the demand side will change too. Humans are greedy, but when resources are limited we eventually hit real bottlenecks, and then we find new paths.