When John Bardeen, Walter Brattain, and William Shockley built the first successful working model of a transistor in 1947, little did they know that it would go on to become the most important invention in modern-day electronics. What started as an effort to solve the longstanding power and size problems of vacuum tubes laid the foundation for the impact that electronic devices have on people's lives today.
No doubt, 76 years is a long time to reflect on, long enough that most of us can't imagine a world without transistors. Over the years, engineers have devoted their careers to understanding the transistor's uses and applications while reinventing the humble device several times. Today, an average of 10 billion transistors go into a smartphone's processor, a number that would have been inconceivable to Bardeen, Brattain, and Shockley.
This article will explore the transistor's historical journey and potential future, evaluating how the first transistor evolved to meet new and emerging application demands as underlying structures transform and multi-die systems gain further adoption.
Transistors have come a long way from the first point-contact model (Fig. 1). But even before the transistor was invented, two other important milestones occurred: the discovery of electrons and the invention of the vacuum tube.
Figure 1: Diagram of a point-contact transistor.
Electrons were discovered 10 years before the vacuum tube was invented, and it took another 40 years for transistors to be conceived. When the vacuum tube was developed, it allowed for switching electrical signals or power between "on" and "off" states (similar to a relay, the foundation of digital design) and amplifying signals (the foundation of analog design): a powerful duo that could ultimately control the flow of electrons in a circuit.
However, like a light bulb, a vacuum tube has a glass enclosure with a metal filament inside. Vacuum tubes revolutionized electronics, leading to desktop radios and early computers, but they were large and consumed lots of power. Transistors, built from a small rectangular block of semiconductor material such as silicon or germanium, offered scientists and engineers the opportunity to dramatically reduce the power consumed by existing designs, or to build larger and more complex systems.
Ten years later, the invention of integrated circuits (ICs), a compact, effective method of arranging multiple transistors and other electronic components together, became the central driving force behind the transistor's ubiquity today. In fact, the ICs embedded in Apollo 11's two guidance computers, one in the command module and one in the lunar module, were essential in enabling the mission to land humans on the Moon in 1969, a feat that wouldn't have been possible with vacuum tubes.
The advancement of the transistor has gone through several stages, each driven by new demands on size, performance, and power consumption. Integrating billions of transistors into complex forms and systems allowed electronics to be miniaturized and made devices more efficient and reliable.
Three application phases specifically influenced how the transistor evolved over the decades.
The first phase involved running communication and computing in a much smaller form factor. Think of a lighter radio or computer, like the one used in Apollo 11. The second wave revolved around the application's capability itself, catalyzed by the personal computer. Once you have an innovative, functioning application such as a computer, how do you make it perform functions that have never been done before? For instance, using a computer to write documents or play games. It was this school of thought that propelled the growth of ICs.
In the third phase, ICs and transistors went mobile, with the cellular phone, digital camera, and music player, and, eventually, the smartphone integrating them all. This moment can be compared to the 2007 Macworld Expo, where Steve Jobs first unveiled the iPhone as a revolutionary device that combined the functionality of a mobile phone with that of a PC and an iPod.
ICs developed in the 1960s and onward used a traditional planar structure to create basic digital circuits. However, the industry began transitioning to newer structures in the following decades, with FinFET (fin field-effect transistor) devices appearing in 2011 (Fig. 2) and GAA (gate-all-around) transistors expected to pick up steam in 2024. GAA devices specifically remove the constraint of working at the granularity of discrete fins.
Figure 2: Intel moved from 32-nm planar transistors (left) to 22-nm tri-gate FinFET transistors (right).
Besides scaling down transistor size and scaling up transistor density, engineers also devoted their efforts to developing new materials, reducing power consumption, and increasing computing speed.
As manufacturers reach the physical limits of how many transistors they can fit onto a single chip, chips of the future will comprise multiple chiplets in a single package, vertically stacked in some cases. While some transistors can be designed with GAA technology, others can be planar or built on 1D or 2D devices.
Multi-die systems are already helping designers scale system functionality and are becoming mainstream in the semiconductor world, giving teams more tools to get closer to streamlined 3D integration. But this journey isn't for the faint-hearted.
Future transistors will need to be highly specialized, a double-edged sword that forces teams to determine how to design a system that not only houses diverse types of transistors, but is also highly efficient. The key will be to design the transistor accurately from the bottom up with a system-centric mindset.
Circuit designers are accustomed to having a minimal set of options when dealing with different transistor specialization types. However, future transistors will largely be about domain-specific applications and material-specific choices, both at the chip level and at the system level.
Where does the future of Moore's Law stand amidst all of this? After spending decades treading the path of the famed law, Synopsys believes it will continue, but that we may also need to recalibrate what "density of transistors" means.
For instance, should we worry about the number of transistors per unit area, or the number of transistors per footprint? Because the footprint measure accounts for the 3D volume of a stack, dividing by the cross-section of its largest die in the X-Y plane, it may be a better indicator of performance and speed as transistors continue to shrink.
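To make the distinction concrete, here is a minimal sketch, with hypothetical numbers and function names of our own choosing, showing how the two metrics diverge once dies are stacked: the per-area density of each die stays the same, while the per-footprint density scales with the height of the stack.

```python
# A minimal sketch (hypothetical numbers and function names) contrasting
# the classic transistors-per-unit-area metric with a
# transistors-per-footprint metric for a vertically stacked package.

def density_per_area(transistors: float, die_area_mm2: float) -> float:
    """Classic 2D metric: transistors per mm^2 of a single die."""
    return transistors / die_area_mm2

def density_per_footprint(transistors_per_die: list[float],
                          footprint_mm2: float) -> float:
    """Footprint metric: total transistors in the whole stack divided
    by the X-Y area of the largest die (the package footprint)."""
    return sum(transistors_per_die) / footprint_mm2

# One 100-mm^2 die with 10 billion transistors (the smartphone figure
# cited earlier in this article)...
single_die = density_per_area(10e9, 100.0)

# ...versus four such dies stacked over the same 100-mm^2 footprint.
stacked = density_per_footprint([10e9] * 4, 100.0)

print(f"per-area density:      {single_die:.2e} transistors/mm^2")
print(f"per-footprint density: {stacked:.2e} transistors/mm^2")  # 4x higher
```

By the footprint measure, a four-high stack quadruples effective density without shrinking a single transistor, which is precisely the lever that multi-die systems pull.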
Looking at applications available today, an area that we are really excited about is the extension of human capability to perceive, observe, and better understand the world around us (i.e., virtual reality and augmented reality, or VR/AR). In a similar vein, autonomous vehicles on the road today already use a variety of sensors, cameras, and other transistor-powered electronic systems to extract the signals the vehicle needs to function.
This evolution, from a single transistor to systems of chips, has been accompanied by transistors getting smaller, lighter, and cheaper. It also brings new considerations into the picture, such as co-designing the hardware system with the software running on it.
A renaissance in transistor invention is already underway, and the blank canvas of potential is fascinating. The moonshot will lie in finding the best way to design a system-centric view of chiplets that can be used to better model the world, enabled purely by better transistors.
Synopsys is empowering technology visionaries with a comprehensive and scalable multi-die solution for fast heterogeneous integration.