Monday, October 27, 2008

Limits to Nanoelectronics – Theoretical and Physical Limits – Plasmonics – Economic Analogies of Limits to Growth from the Bulk Metals Industry

"Researchers from Umea University in Sweden and the University of Maryland, USA, have demonstrated that nanosized electronic components cannot transfer information using plasmon technology. This is because electrons at that scale can no longer be defined, causing the plasmon to lose energy. This may affect the development of nanoelectronics."; reads the intrigueing tiny insert at the foot of page 11 in the current issue, Oct. 2008, of my print copy of my Institute's house journal Materials World (MW).  For the many rich online articles and materials, jobs, conference dates; etc. freely available in MW cf. References at the end of this post.

This complex fundamental work on theoretical and physical limits, here in nanoelectronics, merits a few additional pointers, both for our colleagues in materials science, technology and engineering, whether already in the field or considering entering it, and for the more general reader of scientific endeavour and its accompanying trials and tribulations.

Umeå's press release gives further general insight into the highly complex phenomena at play:

The electronics we know in our computers today is, as the name suggests, based on the transfer of information with the help of electrons. Using electrons has allowed us to shrink the size of computer circuits without losing efficacy. At the same time, communication with the help of electrons represents a rather slow means of transmission. To alleviate this problem, light can be used instead of electrons. This is the basis of so-called photonic components. While the transfer speed in photonics is extremely high, the size of the components cannot be shrunk to the same level as ‘ordinary’ electronics.

For a number of years, so-called plasmonic components have proven to be a possible way around the dilemma of electronics and photonics. By combining photonics and electronics, scientists have shown that information can be transferred with the help of so-called plasmons. Plasmons are surface waves, like waves in the ocean, but here consisting of electrons, which can spread at extremely high speeds in metals.

The findings now being presented by the Swedish-American research team show that difficulties arise when the size of such components is reduced to the nanometer level. At that point it turns out that the dual nature of electrons makes itself felt: the electrons no longer act like particles but rather have a diffuse character, with their location and movement no longer being clearly defined. This elusive personality leads to the energy of the plasmon being dissipated and lost in the transfer of information. For nanocomponents, this consequence is devastating, entailing the loss of all information before it can be transferred.

“The effects we have discovered cannot be fully avoided, but the behavior of the plasmons might nevertheless be controlled by meticulous component design that takes into consideration the quantum nature of the nanoscale. It’s our hope that continued research will provide a solution to this problem,” says Mattias Marklund.

"Combining ordinary electronics with light has been a potential way to create minimal computer circuits with super fast information transfer. Researchers at Umeå University in Sweden and the University of Maryland in the U.S. are now showing that there is a limit. When the size of the components approaches the nanometer level, all information will disappear before it has time to be transferred.

"Our findings throw a monkey wrench in the machinery of future nanoelectronics." This is arguably not the proper expression to be applied as will be shown in the short history of limits to which will follow. cf also my precious post and links recalled. (Knowledge is most certainly better than ignorance) However  the author, Mattias Marklund, professor of theoretical physics at Umeå University in Sweden does indeed rectify; "At the same time, it’s a fascinating issue to address just how we might be able to prevent the information from being lost,” thus laying a foundation block for future sucessful research.


The findings are presented in the September issue of the journal Europhysics Letters. Title: New quantum limits in plasmonic devices
Authors: M. Marklund, G. Brodin, L. Stenflo and C. S. Liu
The freely available paper may be found at the following link:
Arxiv-pdf.
Other sources reporting these findings, together with related articles, may be found at
PHYSORG.


The key relation for the development and design of plasmonic components is summarised in the Arxiv paper above as follows:

"It should be stressed that the electromagnetic contribution to the group velocity dominates over the quantum induced contribution Vq for wavelengths λ ≥ 30 nm. Assuming that the dielectric consists of SiO2 , we have [42] from Arxiv ref. above.
              εd ∼ 3 – 5
Ref:
List of Dielectric Constants for different materials

and with the plasma frequency of the metal

              ω_p ∼ 4 × 10¹⁵ s⁻¹

The quantum damping length δ_SP is given by eqn. (10) of the Arxiv paper above (the expression is not reproduced here).


Thus, due to the strong wavelength dependence in eqn. (10) [cf. Arxiv], µm-waves can propagate without significant quantum damping, while decreasing the scale much below the µm regime will affect the effective propagation distance. For example, for λ ∼ 30 nm the damping length δ_SP ∼ 10 nm. Although different geometries may affect the possibilities to design smaller devices [15, Arxiv], our result (10) [cf. Arxiv] is robust, and its consequences must therefore be considered in the design of plasmonic devices.
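The paper's eqn. (10) is not reproduced above, but the two quoted parameters already fix the textbook surface-plasmon resonance for a metal/SiO₂ interface, which helps set the scale. Below is a minimal Python sketch assuming a lossless Drude metal and the standard flat-interface resonance condition ε_m(ω) = -ε_d, giving ω_sp = ω_p/√(1 + ε_d); this is a classical orientation point only, not the paper's quantum result.

```python
import math

c = 2.998e8        # speed of light in vacuum, m/s
omega_p = 4e15     # plasma frequency quoted above, s^-1

# Lossless Drude metal: eps_m(w) = 1 - (omega_p / w)**2.
# Surface plasmon resonance at a flat metal/dielectric interface occurs
# where eps_m = -eps_d, i.e. omega_sp = omega_p / sqrt(1 + eps_d).
for eps_d in (3.0, 5.0):                      # SiO2 range quoted above
    omega_sp = omega_p / math.sqrt(1.0 + eps_d)
    lam = 2.0 * math.pi * c / omega_sp        # free-space wavelength at resonance
    print(f"eps_d = {eps_d}: omega_sp ~ {omega_sp:.2e} s^-1, "
          f"lambda_0 ~ {lam * 1e9:.0f} nm")
```

With these numbers the resonance falls in the near infrared (free-space wavelengths of roughly 900 to 1200 nm), well above the 30 nm plasmon wavelengths at which the quoted quantum damping sets in.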

Useful and Necessary Concepts, Definitions and Formulae concerning Plasmons:

The obvious place to start is the free encyclopedia Wikipedia, which rapidly produces a rich introduction to the field; cf. Footnote II.


Recent History of Limits to Progress.
 
Researchers have defined a fundamental limit that will help extend a half-century's progress in producing ever-smaller microelectronic devices for increasingly powerful and less expensive computerized equipment.
The fundamental limit defines the minimum amount of energy needed to perform the most basic computing operation: binary logic switching that changes a 0 to a 1 or vice-versa. This limit provides the foundation for determining a set of higher-level boundaries on materials, devices, circuits and systems that will define future opportunities for miniaturization advances possible through traditional microelectronics -- and its further extension to nanoelectronics.

James D. Meindl and collaborator Jeffrey A. Davis studied the fundamental limit from two different perspectives: the minimum energy required to produce a binary transition that can be distinguished, and the minimum energy necessary for sending the resulting signal along a communications channel. The result was the same in both cases.


The fundamental limit, expressed as E(min) = (ln 2)kT, was first reported 50 years ago by mathematician John von Neumann, who never provided an explanation for its derivation. (In this equation, T represents absolute temperature, k is Boltzmann's constant, and ln 2 is the natural logarithm of 2.)

Meindl and collaborators Jeffrey A. Davis and Qiang Chen found that this minimum switching energy depends on just one variable: the absolute temperature. Based on this fundamental limit, they studied a hierarchy of limits that are much less absolute because they depend on assumptions about the operation of devices, circuits and systems.
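For orientation, the limit is tiny at room temperature. A quick numeric check in Python (taking T = 300 K as "room temperature" is my assumption for illustration):

```python
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
T = 300.0             # assumed room temperature, K

E_min = math.log(2) * k_B * T                  # E(min) = (ln 2) k T
print(f"E_min = {E_min:.3e} J "
      f"= {E_min / 1.602176634e-19 * 1e3:.1f} meV")
# -> about 2.87e-21 J, i.e. roughly 18 meV at room temperature
```

Real switching energies in 2008-era devices sat many orders of magnitude above this figure, which is why the higher-level limits Meindl describes bite first.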

Though this fundamental limit provides the theoretical stopping point for electrical and computer engineers, Meindl says no future device will ever operate close to it, because device designers will first bump into the higher-level limits -- and economic realities.


For example, electronic signals can move through interconnects no faster than the speed of light. And quantum mechanical theory introduces uncertainties that would make devices smaller than a certain size impractical. Beyond that is a more important issue -- devices operating at the fundamental limit would be wrong as often as they are right.


"The probability of making an error while operating at this fundamental limit of energy transfer in a binary transition is one-half," Meindl noted. "In other words, if you are operating just above the limit, you'll be right most of the time, but if you are operating just below it, you'd be wrong most of the time."


"What does this mean for electronic and computer engineers?" asks Meindl rhetorically.


"We can expect another 10 to 15 years of the exponential pace of the past 40 years in reducing cost per function, improving productivity and improving performance," Meindl said. "There will be lots of problems to solve and inventions that will be needed, just as there have been over the past four decades."

Steel and Bulk Metals: Economic Analogies
He expects the world's use of silicon will follow the pattern set by its use of steel. During the second half of the 19th century, steel use increased exponentially as the world built its industrial infrastructure. Growth in steel demand fell after that, but it remains the backbone of world economies, though other materials increasingly challenge it.


"In the middle of the 21st century, we are going to be using more silicon than we are now, by far," he predicted. "There will be other materials that will come in to replace it, like plastics and aluminum came in to push steel out of certain applications. But we don't know yet what will replace silicon."


Though the limits provide a final barrier to innovation, Meindl believes economic realities will bring about the real end to advances in microelectronics.


"What has enabled the computer revolution so far is that the cost per function has continued to decrease," he said. "It is likely that after a certain point, we will not be able to continue to increase productivity. We may no longer be able to see investment pay off in reduced cost per function."


Beyond that point, designers will depend on nanotechnology for continuing advances in miniaturization.


"What happens next is what nanotechnology research is trying to answer," he said. "Work that is going on in nanotechnology today is trying to create a discontinuity and jump to a brand new science and technology base. Fundamental physical limits encourage the hypothesis that silicon technology provides a singular opportunity for exploration of nanoelectronics."

"It is reassuring to know that you are not fighting against a law of physics," Meindl said. "Knowing the fundamental limits gives you hope that cleverness can produce the inventions that you need to continue miniaturization. Now that the fundamental limits have been pinned down, we can start to see what other factors will impede us as we approach this limit." cf.  also my previous post and links.

The semiconductor industry publishes an annual "roadmap" (the International Technology Roadmap for Semiconductors) that lays out the challenges expected for the next 15 years.

To produce trillion-transistor chips, he noted, the industry must be able to economically mass-produce structures on the nanometer-size scale. That means double-gate metal-oxide-semiconductor field effect transistors (MOSFETs) with gate oxide thicknesses of about one nanometer, silicon channel thicknesses of about three nanometers and channel lengths of about 10 nanometers - along with nanoscale wires for interconnecting such tiny components.



For MORE from James Meindl: Link to Research Horizons Magazine article by James Meindl.

Footnote I:
Online articles in MW.

Footnote II

Useful and Necessary Concepts, Definitions and Formulae concerning Plasmons:

The following definitions are quoted in full from Wikipedia:
'In physics, a plasmon is a quantum of a plasma oscillation. The plasmon is the quasiparticle resulting from the quantization of plasma oscillations (or "Langmuir waves", after Irving Langmuir, one of the few metallurgists to win the Nobel Prize!), just as photons and phonons are quantizations of light and sound waves, respectively. Thus, plasmons are collective oscillations of the free electron gas density, often at optical frequencies. They can also couple with a photon to create a third quasiparticle called a plasmon polariton.
Since plasmons are the quantization of classical plasma oscillations, most of their properties can be derived directly from Maxwell's Equations.
Plasmons are explained in the classical picture using the Drude model of metals. The metal is treated as a three dimensional crystal of positively charged ions, and a delocalized electron gas is moving in a periodic potential of this ion grid.
Plasmons play a large role in the optical properties of metals. Light of frequency below the plasma frequency is reflected, because the electrons in the metal screen the electric field of the light. Light of frequency above the plasma frequency is transmitted, because the electrons cannot respond fast enough to screen it. In most metals, the plasma frequency is in the ultraviolet, making them shiny (reflective) in the visible range. Some metals, such as copper and gold, have electronic interband transitions in the visible range, whereby specific light energies (colors) are absorbed, yielding their distinct color. In semiconductors, the valence electron plasma frequency is usually in the deep ultraviolet[1][2]. That is why they are reflective, too.
The plasmon energy can often be estimated in the free electron model as

                                                       E_p = ℏ √(n e² / (m ε₀))

where n is the conduction electron density, e is the elementary charge, m is the electron mass and ε₀ is the permittivity of free space.
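As a sanity check on this free-electron formula, here is a short Python sketch evaluating it; the aluminium conduction electron density n ≈ 1.8 × 10²⁹ m⁻³ is a standard textbook value, used here purely for illustration:

```python
import math

hbar = 1.054571817e-34    # reduced Planck constant, J*s
e = 1.602176634e-19       # elementary charge, C
m_e = 9.1093837015e-31    # electron mass, kg
eps0 = 8.8541878128e-12   # permittivity of free space, F/m

def plasmon_energy_eV(n: float) -> float:
    """Free-electron plasmon energy E_p = hbar * sqrt(n e^2 / (m eps0)), in eV."""
    return hbar * math.sqrt(n * e**2 / (m_e * eps0)) / e

# Aluminium, three conduction electrons per atom: n ~ 1.8e29 m^-3 (textbook value)
print(f"E_p(Al) ~ {plasmon_energy_eV(1.8e29):.1f} eV")   # ~15.8 eV
```

The result, about 15.8 eV, is close to the measured aluminium bulk plasmon energy of roughly 15 eV, which shows how good the free electron model is for simple metals.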

TUTORIAL -