Commentary by our President, Keiichi Shimada
A nanometer is a unit of length equal to one-billionth of a meter. If one meter were magnified to the diameter of the earth, one nanometer would grow to roughly the size of a marble. It is a unit almost never encountered in everyday life, but in the world of electronic devices it is commonly used to express the fineness of semiconductor microfabrication technology. A semiconductor is made by forming circuit patterns on a silicon wafer through processes such as photolithography and etching, and nanometers serve as the scale for measuring the line width of these circuits.
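The scale analogy above can be checked with a quick back-of-the-envelope calculation; the earth-diameter figure used here is an approximate value assumed for illustration.

```python
# Check of the scale analogy: if 1 meter were magnified to the
# diameter of the earth, how large would 1 nanometer become at
# the same magnification? (Earth diameter is an assumed approximation.)

EARTH_DIAMETER_M = 1.2742e7   # ~12,742 km, approximate mean diameter of the earth
NANOMETER_M = 1e-9            # one-billionth of a meter

magnification = EARTH_DIAMETER_M / 1.0     # scale factor applied to 1 meter
scaled_nm_m = NANOMETER_M * magnification  # size of 1 nm under that magnification

print(f"1 nm scales to {scaled_nm_m * 100:.1f} cm")  # about 1.3 cm, marble-sized
```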
In the past, the standard unit of scale for semiconductor technology was the micrometer, which had been used for a long time since the days of integrated circuits. In the mid-2000s, however, a design rule specifying a processing line width of 90 nanometers (0.09 micrometers) was adopted for CPUs in personal computers, and nanometers came into wider use thereafter. I recall that around that time, “nanotechnology” was a popular term referring to a wide range of advanced microfabrication technologies. Since then, this technology has continued to evolve, and as of 2020, the minimum processing line width of advanced semiconductors under development is as small as 5 to 7 nanometers.
Semiconductors have evolved in accordance with Moore's Law, with the degree of circuit integration doubling every one and a half to two years. This ongoing technological evolution has also been driven by the application fields that showed the greatest promise at each time: minicomputers in the 1970s, mainframes and consumer appliances in the 1980s, servers and PCs from the 1990s to the 2000s, and the cloud and smartphones from the late 2000s. Moore's Law describes short-term technology node (generation) cycles, but semiconductors can also be said to have undergone continuous innovation through long-term application cycles as their fields of application changed.
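The exponential pace that Moore's Law describes can be sketched numerically; the starting transistor count and doubling period below are illustrative assumptions, not historical data.

```python
# A minimal sketch of the doubling described by Moore's Law.
# initial_count and doubling_period are illustrative assumptions.

def transistor_count(years_elapsed, initial_count=2300, doubling_period=2.0):
    """Projected integration level after a given number of years,
    assuming a fixed doubling period in years."""
    return initial_count * 2 ** (years_elapsed / doubling_period)

# With a 2-year doubling period, 20 years means 10 doublings: a 1024x increase.
growth = transistor_count(20) / transistor_count(0)
print(f"Growth over 20 years: {growth:.0f}x")  # prints "Growth over 20 years: 1024x"
```

Even this simple model makes clear why a few decades of steady doubling turn micrometer-scale line widths into nanometer-scale ones.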
This semiconductor technology has now become a growing point of conflict between the United States and China. In May, the U.S. Department of Commerce strengthened export controls on U.S.-made semiconductor design software and manufacturing equipment. Specifically, it banned the export to Huawei and its semiconductor-related affiliates of semiconductors manufactured overseas using U.S.-made manufacturing equipment. In August, it further announced that it would add 38 of Huawei's overseas affiliates to the entity list restricting access to these technologies and products, in order to prevent circumvention through roundabout exports.
This is the first conflict between countries over semiconductors since the Japan-U.S. semiconductor dispute of the 1980s. At that time, the main products in this field were DRAMs and semiconductors for consumer electronics. The friction ended in 1986 with the conclusion of the Japan-U.S. Semiconductor Agreement, which could also be regarded as a form of self-imposed export restraint by Japan, and the Japanese semiconductor industry, rapidly losing its market competitiveness, headed into decline. For several decades thereafter there were no international disputes over semiconductors, until the confrontation between the United States and China surfaced.
I sometimes wonder why this was so. Japanese semiconductor manufacturers reduced their market share by self-regulation, which undoubtedly improved conditions for the United States. Yet after that, the United States did not gain a monopoly on advanced semiconductor technology. This was because in the field of memory technology, Korean manufacturers made aggressive investments, secured advanced semiconductor technology, and greatly expanded their market shares. However, this did not lead to a conflict as severe as the one that had occurred between Japan and the United States.
Although friction over advanced semiconductors stems from issues of industrial and market structure, I believe it is also influenced by the question of who holds the dominant position in the leading application fields of the era. In the 1980s, those fields were mainframes, televisions, and VCRs, areas in which Japan and the United States competed fiercely in technological development. Since then, however, the United States has maintained a high degree of competitiveness over two successive application cycles: PCs and servers, then the cloud and smart devices. It can even be considered the sole winner in those fields. I believe this may be one reason why there were no conflicts over semiconductors for decades.
Now, the leading application fields for advanced semiconductors are AI and 5th-generation mobile communications (5G). From here on, the processing capacity of AI, electronic devices, and networks will increase, and the distribution of data and the shift to real-time processing will accelerate. As the basic structure of our society’s digital infrastructure shifts from the “cloud” to the “edge” and we approach the next application cycle, the important questions will be how the United States evaluates technology from China, and whether it perceives that technology as a threat.