More Engineering News: Protein Engineering and Next-Generation Computer Architecture

Here are two new articles I’ve written about exciting newly published research on protein engineering and computer systems, led by engineers at Stanford University:


Stanford engineers invent process to accelerate protein evolution

A new tool enables researchers to test millions of mutated proteins in a matter of hours or days, speeding the search for new medicines, industrial enzymes and biosensors.

All living things require proteins, members of a vast family of molecules that nature “makes to order” according to the blueprints in DNA.

Through the natural process of evolution, DNA mutations generate new or more effective proteins. Humans have found so many alternative uses for these molecules – as foods, industrial enzymes, anti-cancer drugs – that scientists are eager to better understand how to engineer protein variants designed for specific uses.

Now Stanford engineers have invented a technique to dramatically accelerate protein evolution for this purpose. This technology, described in Nature Chemical Biology, allows researchers to test millions of variants of a given protein, choose the best performers for a given task and determine the DNA sequences that create them.

An overview of the directed evolution process with μSCALE: preparing protein libraries, screening them, extracting desired cells, and then inferring the DNA sequence at work. (Credit: Cochran Lab, Stanford)

“Evolution, the survival of the fittest, takes place over a span of thousands of years, but we can now direct proteins to evolve in hours or days,” said Jennifer Cochran, an associate professor of bioengineering who co-authored the paper with Thomas Baer, executive director of the Stanford Photonics Research Center.

“This is a practical, versatile system with broad applications that researchers will find easy to use,” Baer said.

By combining Cochran’s protein engineering know-how with Baer’s expertise in laser-based instrumentation, the team created a tool that can test millions of protein variants in a matter of hours.
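
The instrument itself is detailed in the paper; as a conceptual illustration of the directed-evolution loop it accelerates (diversify a protein library, screen it, keep the winners, recover their DNA), here is a minimal toy simulation. Everything in it (the sequences, the fitness function, the mutation rate) is a hypothetical stand-in for a real laboratory assay, offered only to make the loop concrete.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
TARGET = "MKTAYIAKQR"  # hypothetical "ideal" sequence standing in for a lab assay

def fitness(seq):
    """Toy stand-in for a binding/activity screen: fraction of matching residues."""
    return sum(a == b for a, b in zip(seq, TARGET)) / len(TARGET)

def mutate(seq, rate=0.1):
    """Point-mutate each residue with probability `rate` (diversify the library)."""
    return "".join(random.choice(AMINO_ACIDS) if random.random() < rate else a
                   for a in seq)

def directed_evolution(generations=20, library_size=1000):
    parent = "".join(random.choice(AMINO_ACIDS) for _ in TARGET)
    for gen in range(generations):
        library = [mutate(parent) for _ in range(library_size)]  # mutate
        best = max(library, key=fitness)                         # screen & select
        if fitness(best) > fitness(parent):
            parent = best                                        # next round's template
        print(f"gen {gen:2d}  best fitness = {fitness(parent):.2f}  {parent}")
    return parent

if __name__ == "__main__":
    directed_evolution()
```

In the real workflow, the fitness call corresponds to the optical screening step and the selection corresponds to extracting the desired cells for sequencing, as the figure caption above describes.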

“The demonstrations are impressive and I look forward to seeing this technology more widely adopted,” said Frances Arnold, a professor of chemical engineering at Caltech who was not affiliated with the study.

[For more, check out the entire article in Stanford News, published on 7 Dec. 2015. Thanks to Tom Abate for help with editing.]


Stanford-led skyscraper-style chip design boosts electronic performance by factor of a thousand

In modern computer systems, processor and memory chips are laid out like single-story structures in a suburb. But suburban layouts waste time and energy. A new skyscraper-like design, based on materials more advanced than silicon, could provide the next computing platform.

For decades, engineers have designed computer systems with processors and memory chips laid out like single-story structures in a suburb. Wires connect these chips like streets, carrying digital traffic between the processors that compute data and the memory chips that store it.

But suburban-style layouts create long commutes and regular traffic jams in electronic circuits, wasting time and energy.

That is why researchers from three other universities are working with Stanford engineers, including Associate Professor Subhasish Mitra and Professor H.-S. Philip Wong, to create a revolutionary new high-rise architecture for computing.

A multi-campus team led by Stanford engineers Subhasish Mitra and H.-S. Philip Wong has developed a revolutionary high-rise architecture for computing.

In Rebooting Computing, a special issue of the IEEE journal Computer, the team describes its new approach as Nano-Engineered Computing Systems Technology, or N3XT.

N3XT will break data bottlenecks by integrating processors and memory like floors in a skyscraper and by connecting these components with millions of “vias,” which play the role of tiny electronic elevators. The N3XT high-rise approach will move more data, much faster, using far less energy, than would be possible using low-rise circuits.
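
The paper has the real performance analysis; the intuition, though, fits in one ratio. A bit moving between a processor and off-chip memory travels centimeters of board trace, while a bit moving between stacked layers travels micrometers through a via. The lengths below are rough illustrative assumptions, not figures from the N3XT work:

```python
# Back-of-the-envelope 'commute distance' comparison (illustrative
# assumptions only; not figures from the N3XT paper).
board_trace_m = 0.02   # ~2 cm of board trace between processor and DRAM
via_hop_m = 5e-6       # ~5 micrometers through a vertical inter-layer via

ratio = board_trace_m / via_hop_m
print(f"A via hop is roughly {ratio:,.0f}x shorter than an off-chip trace.")
# Shorter wires have less capacitance to charge per bit moved, which is
# why the vertical 'elevator' saves both time and energy.
```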

“We have assembled a group of top thinkers and advanced technologies to create a platform that can meet the computing demands of the future,” Mitra said.

Shifting electronics from a low-rise to a high-rise architecture will demand huge investments from industry – and the promise of big payoffs for making the switch.

“When you combine higher speed with lower energy use, N3XT systems outperform conventional approaches by a factor of a thousand,” Wong said.
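
The article does not say how speed and energy are combined into one number; a standard figure of merit that does exactly that is the energy-delay product (EDP). Assuming EDP is the metric behind the claim, the arithmetic is straightforward:

```latex
% Energy-delay product, EDP = E * t (assumed metric; not named in the article).
% A system that runs s times faster and uses r times less energy per
% operation improves EDP by the product of the two gains:
\frac{\mathrm{EDP}_{\text{old}}}{\mathrm{EDP}_{\text{new}}}
  = \frac{E \cdot t}{(E/r)\,(t/s)} = r \cdot s,
\qquad \text{e.g. } r = 25,\ s = 40 \;\Rightarrow\; 1000\times
```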

[For more, check out the entire article in Stanford News, published on 9 Dec. 2015.]

Engineering News: Solar Cells and Plasma Combustion

Check out these new articles I’ve written about solar cells and plasma combustion research led by Stanford University engineers:


Plasma experiments bring astrophysics down to Earth

New laboratory technique allows researchers to replicate on a tiny scale the swirling clouds of ionized gases that power the sun, to further our understanding of fusion energy, solar flares and other cosmic phenomena.

Intense heat, like that found in the sun, can strip gas atoms of their electrons, creating a swirling mass of positively and negatively charged particles known as a plasma.

For several decades, laboratory researchers have sought to replicate plasma conditions similar to those found in the sun, in order to understand the basic physics of ionized matter and, ultimately, to harness and control fusion energy on Earth or use plasmas for space propulsion.

Now Stanford engineers have created a tool that enables researchers to make detailed studies of certain types of plasmas in a laboratory. Their technique allows them to study astrophysical jets – very powerful, focused streams of plasma.

A long-exposure photographic image capturing the Stanford Plasma Gun during a single firing. The image shows where the plasma is brightest during the acceleration process, which occurs over tens of microseconds.

Writing in Physical Review Letters, mechanical engineering graduate students Keith Loebner and Tom Underwood, together with Professor Mark Cappelli, describe a device they built that creates tiny plasma jets and enables detailed measurements of these ionized clouds.

The researchers also showed that these plasmas exhibit some of the same behavior as the gas flows created by, say, firing a rocket engine or burning fuel inside an internal combustion engine.

Their instrument, coupled with this new understanding of the fire-like behavior of plasmas, creates a down-to-earth way to explore the physics of solar flares, fusion energy and other plasma phenomena.

“The understanding of astrophysical phenomena has always been hindered by the inability to generate scaled conditions in the laboratory and measure the results in great detail,” Cappelli said.
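
“Scaled conditions” in laboratory astrophysics usually means matching dimensionless numbers between the small jet and its cosmic counterpart. Here is a minimal sketch of one such number, the jet’s Mach number relative to the ion sound speed; the parameter values are illustrative assumptions, not measurements from the Stanford experiments:

```python
import math

# Illustrative parameters (not values from the Stanford experiments).
k_B = 1.380649e-23      # Boltzmann constant, J/K
m_i = 1.67262192e-27    # proton mass, kg (hydrogen plasma assumed)
T_e = 2 * 11604.5       # electron temperature: 2 eV expressed in kelvin
v_jet = 30e3            # assumed jet velocity, m/s

# Ion sound speed (cold-ion approximation), the plasma analogue of the
# speed of sound in a neutral gas.
c_s = math.sqrt(k_B * T_e / m_i)
mach = v_jet / c_s
print(f"ion sound speed ~ {c_s/1e3:.1f} km/s, jet Mach number ~ {mach:.1f}")
# Matching dimensionless numbers like this is what lets a centimeter-scale
# laboratory jet stand in for an astrophysical one.
```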

[For more, check out the entire article in Stanford News, published on 1 Dec. 2015. Thanks to Tom Abate for help with editing.]


Stanford designs underwater solar cells that turn captured greenhouse gases into fuel

Taking a cue from plants, researchers figure out how to use the sun’s energy to combine CO2 with H2O to create benign chemical products, as part of a futuristic technology called artificial photosynthesis.

Stanford engineers have developed solar cells that can function under water. Instead of pumping electricity into the grid, though, the power these cells produce would be used to spur chemical reactions to convert captured greenhouse gases into fuel.

This new work, published in Nature Materials, was led by Stanford materials scientist Paul McIntyre, whose lab has been a pioneer in an emerging field known as artificial photosynthesis.

Stanford engineers have shown how to increase the power of corrosion-resistant solar cells, setting a record for solar energy output under water. (Photo credit: Shutterstock)

In plants, photosynthesis uses the sun’s energy to combine water and carbon dioxide to create sugar, the fuel on which they live. Artificial photosynthesis would use the energy from specialized solar cells to combine water with captured carbon dioxide to produce industrial fuels, such as natural gas.
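
For concreteness, here is the natural reaction alongside one stoichiometrically balanced example of a solar fuel (methane, the main component of natural gas). The methane reaction is offered only as an illustrative target, not necessarily the chemistry pursued in the paper:

```latex
% Natural photosynthesis, with glucose as the product:
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O}
  \;\xrightarrow{\text{sunlight}}\;
  \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}

% An illustrative artificial-photosynthesis target, methane
% (balanced example only; not necessarily the reaction in the paper):
\mathrm{CO_2} + 2\,\mathrm{H_2O}
  \;\xrightarrow{\text{solar energy}}\;
  \mathrm{CH_4} + 2\,\mathrm{O_2}
```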

Until now, artificial photosynthesis has faced two challenges: ordinary silicon solar cells corrode under water, and even corrosion-proof solar cells have been unable to capture enough sunlight under water to drive the envisioned chemical reactions.

Four years ago, McIntyre’s lab made solar cells resistant to corrosion in water. In the new paper, working with doctoral student Andrew Scheuermann, the researchers have shown how to increase the power of corrosion-resistant solar cells, setting a record for solar energy output under water.

“The results reported in this paper are significant because they represent not only an advance in performance of silicon artificial photosynthesis cells, but also establish the design rules needed to achieve high performance for a wide array of different semiconductors, corrosion protection layers and catalysts,” McIntyre said.

Such solar cells would be part of a larger system to fight climate change. The vision is to funnel greenhouse gases from smokestacks or the atmosphere into giant, transparent chemical tanks. Solar cells inside the tanks would spur chemical reactions to turn the greenhouse gases and water into what are sometimes called “solar fuels.”

[For more, check out the entire article in Stanford News, published on 18 Nov. 2015.]

Frontiers of Computer Engineering: Graphene and Cognitive Networking

Check out these new articles I’ve written about exciting computer engineering research going on at Stanford University and the University of California, San Diego:


Graphene key to high-density, energy-efficient memory chips, Stanford engineers say

Only an atom thick, graphene is a key ingredient in three Stanford projects to create data storage technologies that use nanomaterials other than standard silicon.

The memory chips in phones, laptops and other electronic devices need to be small and fast and to draw as little power as possible. For years, silicon chips have met those demands.

But to dramatically extend the battery life of mobile gadgets, and to create data centers that use far less energy, engineers are developing memory chips based on new nanomaterials with capabilities that silicon can’t match.

Stanford electrical engineering professor H.-S. Philip Wong, left, graduate student Joon Sohn and postdoctoral fellow Seunghyun Lee (seated) are developing high-capacity, energy-efficient memory chips that are not based on silicon. (Photo: Norbert von der Groeben)

In three recent experiments, Stanford engineers demonstrate post-silicon materials and technologies that store more data per square inch and use a fraction of the energy of today’s memory chips.

The unifying thread in all three experiments is graphene, an extraordinary material first isolated a decade ago but one that has had, until now, relatively few practical applications in electronics…

[For more, check out the entire article in Stanford News, published on 23 Oct. 2015. Thanks to Tom Abate for help with editing.]


New Frontiers of Cognitive Networking

Most of us use many devices – perhaps too many – throughout the day: a smartphone at home and then at the cafe around the corner, a laptop computer at work and maybe a tablet or e-reader in the evening. That’s not counting all of the other possible “smart” devices at our fingertips, such as health monitors, fitness trackers and smart watches.

Few people think about how all these devices could be efficiently networked via software and wireless technology. But if you were trying to download a video while coworkers on the same network were chatting over Google Talk, you would want to be sure those limited wireless resources were allocated optimally.

UC San Diego Postdoctoral Researcher Giorgio Quer.

Giorgio Quer, a postdoctoral researcher in electrical and computer engineering at the University of California, San Diego, is on a quest to solve these and other networking problems. Quer has been working on “cognitive networking” and related projects at UC San Diego’s Qualcomm Institute for the past five years as a visiting scholar from the University of Padua in northern Italy. He works with Ramesh Rao, director of the Qualcomm Institute and principal investigator of the research, as well as Matteo Danieletto, another postdoc in the department.

The concept of cognitive networking, according to Quer, refers to “a way to apply cognition to wireless, where a network learns from past history.” Such a process enables a network to “perceive” and learn about its current conditions and then plan, decide and act on those conditions, resembling (in a limited sense) cognition in the human brain…
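
As a minimal, hypothetical sketch of that perceive-learn-decide-act loop (a generic epsilon-greedy bandit, not the researchers’ actual algorithm), here is a selector that learns from past history which wireless channel tends to succeed:

```python
import random

class ChannelSelector:
    """Minimal 'cognitive' loop: learn from past history which wireless
    channel succeeds most often (epsilon-greedy multi-armed bandit)."""

    def __init__(self, n_channels, epsilon=0.1):
        self.epsilon = epsilon
        self.successes = [0] * n_channels
        self.attempts = [0] * n_channels

    def decide(self):
        # Plan/decide: mostly exploit the best-known channel, sometimes explore.
        if random.random() < self.epsilon:
            return random.randrange(len(self.attempts))
        rates = [s / a if a else 1.0 for s, a in zip(self.successes, self.attempts)]
        return rates.index(max(rates))

    def observe(self, channel, success):
        # Perceive/learn: record the outcome of the transmission.
        self.attempts[channel] += 1
        self.successes[channel] += success

# Toy environment: channel 2 is least congested (hypothetical probabilities).
true_rates = [0.3, 0.5, 0.9, 0.4]
selector = ChannelSelector(n_channels=4)
for _ in range(2000):
    ch = selector.decide()
    selector.observe(ch, random.random() < true_rates[ch])
print("attempts per channel:", selector.attempts)  # channel 2 dominates
```

Run long enough, the selector concentrates its transmissions on the least congested channel while still occasionally probing the others – the “learning from past history” that Quer describes, in miniature.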

[For more, check out the entire article at the California Institute for Telecommunications and Information Technology (Calit2), published on 21 Sep. 2015. Thanks to Tiffany Fox for help with editing.]