What Zhang did was to create the world’s most precise value for a subatomic particle called the neutral pion, a meson made of a quark and an antiquark. The neutral pion (also known as π0) is the lightest of the mesons, but a key player in the strong attractive force that holds the atom’s nucleus together.
And that, in turn, makes it a part of the puzzle Gao and her students have been trying to solve for many years. The prevailing theory about the strong force is called quantum chromodynamics (QCD), and it’s been probed for years by high-energy physics. But Gao, Zhang and their collaborators are trying to study QCD under more normal energy states, a notoriously difficult problem.
Yang Zhang spent six years analyzing and writing up the data from a Primakoff (PrimEx-II) experiment in Hall B at Thomas Jefferson National Accelerator Facility (Jefferson Lab) in Newport News, VA. His work was done on equipment supported by both the National Science Foundation and the Department of Energy.
In a Primakoff experiment, a photon beam is directed on a nuclear target, producing neutral pions. In both the PrimEx-I and PrimEx-II experiments at Jefferson Lab, the two photons from the decay of a neutral pion were subsequently detected in an electromagnetic calorimeter. From that, Zhang extracted the pion’s ‘radiative decay width.’ That decay width is a handy thing to have, because it is directly related to the pion’s life expectancy, and QCD has a direct prediction for it.
Zhang’s hard-won answer: The neutral pion has a radiative decay width of 7.8 electron-volts, give or take. That makes it an important piece of the dauntingly huge puzzle about QCD. Gao and her colleagues will continue to ask the fundamental questions about nature, at the finest but perhaps most profound scale imaginable.
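The link between a decay width and a lifetime is the quantum uncertainty relation τ = ħ/Γ. As a rough illustration (treating the quoted radiative width as the total width, which is approximately true for the neutral pion), a few lines of Python turn Zhang’s number into a lifetime:

```python
# Illustrative back-of-the-envelope check, not a calculation from the paper:
# the uncertainty relation tau = hbar / Gamma converts a decay width to a lifetime.
HBAR_EV_S = 6.582119569e-16  # reduced Planck constant in eV*s (CODATA value)

def lifetime_from_width(gamma_ev: float) -> float:
    """Mean lifetime in seconds for a decay width given in eV."""
    return HBAR_EV_S / gamma_ev

tau = lifetime_from_width(7.8)  # the measured width, ~7.8 eV
print(f"lifetime ~ {tau:.2e} s")  # on the order of 1e-17 to 1e-16 seconds
```

That is roughly a tenth of a quadrillionth of a second, which is why the width, rather than the lifetime itself, is what experiments measure.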
The PrimEx-I and PrimEx-II collaborations were led by Prof. Ashot Gasparian from North Carolina A&T State University. Gao and Zhang joined the collaboration in 2011.
“Precision Measurement of the Neutral Pion Lifetime,” appears in Science May 1. Dr. Yang Zhang is now a quantitative researcher at JPMorgan Chase & Co.
Many university labs may have gone quiet amid coronavirus shutdowns, but faculty continue to analyze data, publish papers and write grants. In this guest post from Duke chemistry professor David Beratan and colleagues, the researchers describe a new study showing how water’s ability to shepherd electrons can change with subtle shifts in a water molecule’s 3-D structure:
Water, the humble combination of hydrogen and oxygen, is essential for life. Despite its central place in nature, relatively little is known about the role that single water molecules play in biology.
Researchers at Duke University, in collaboration with Arizona State University, Pennsylvania State University and University of California-Davis have studied how electrons flow through water molecules, a process crucial for the energy-generating machinery of living systems. The team discovered that the way that water molecules cluster on solid surfaces enables the molecules to be either strong or weak mediators of electron transfer, depending on their orientation. The team’s experiments show that water is able to adopt a higher- or a lower-conducting form, much like the electrical switch on your wall. They were able to shift between the two structures using large electric fields.
In a previous paper published fifteen years ago in the journal Science, Duke chemistry professor David Beratan predicted that water’s mediation properties in living systems would depend on how the water molecules are oriented.
Water assemblies and chains occur throughout biological systems. “If you know the conducting properties of the two forms for a single water molecule, then you can predict the conducting properties of a water chain,” said Limin Xiang, a postdoctoral scholar at University of California, Berkeley, and the first author of the paper.
“Just like the piling up of Lego bricks, you could also pile up a water chain with the two forms of water as the building blocks,” Xiang said.
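Xiang’s Lego analogy can be sketched in a few lines of code. In the simplest picture, each molecule in a chain attenuates electron transfer by a factor that depends on its form, and the chain’s relative conductance is the product of those factors. The attenuation values below are made-up illustrative numbers, not measured ones:

```python
# Hypothetical sketch of the "building blocks" idea: each water molecule in a chain
# contributes a per-molecule attenuation factor set by its orientation, and the
# chain's relative conductance is the product of the factors.
# These factors are assumed for illustration only.
ATTENUATION = {"high": 0.8, "low": 0.2}

def chain_conductance(forms, g0=1.0):
    """Relative conductance of a chain given each molecule's form."""
    g = g0
    for form in forms:
        g *= ATTENUATION[form]
    return g

print(chain_conductance(["high", "high", "high"]))  # 0.512: all high-conducting
print(chain_conductance(["high", "low", "high"]))   # a single low link dominates
```

The multiplicative form captures why even one low-conducting molecule in a chain sharply suppresses the overall electron flow.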
In addition to discovering the two forms of water, the authors found that water can change its structure at high voltages: when the voltage is large, water switches from a high- to a low-conducting form. It may even be possible that this switching gates the flow of electron charge in living systems.
This study marks an important first step toward establishing synthetic water structures that could help make electrical contact between biomolecules and electrodes. In addition, the research may help reveal nature’s strategies for maintaining appropriate electron transport through water molecules and could shed light on diseases linked to oxidative damage processes.
The researchers dedicate this study to the memory of Prof. Nongjian (NJ) Tao.
Imagine robots that can move, sense and respond to stimuli, but that are smaller than a hair’s width. This is the project that Cornell professor and biophysicist Itai Cohen, who gave a talk on Wednesday, January 29 as part of Duke’s Physics Colloquium, has been working on with his team. His project is inspired by the microscopic robots in Paul McEuen’s book Spiral. Building robots at such a small scale involves a lot more innovation than simply shrinking all of the parts of a normal robot. At low Reynolds number, viscous forces dominate over inertia, Van der Waals forces come into play, and other factors change how the robot can move and function.
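To see why inertia drops out at this scale, one can estimate the Reynolds number Re = ρvL/μ for a micron-scale robot in water. The robot dimensions follow the talk; the swimming speed is an assumed illustrative value:

```python
# Back-of-the-envelope estimate (speed is assumed, not from the talk):
# Reynolds number Re = rho * v * L / mu for a micron-scale robot in water.
rho = 1000.0   # water density, kg/m^3
mu = 1.0e-3    # water dynamic viscosity, Pa*s
L = 70e-6      # robot length, ~70 microns
v = 100e-6     # assumed speed, ~100 microns per second

Re = rho * v * L / mu
print(f"Re = {Re:.1e}")  # far below 1: viscous forces dominate inertia
```

With Re on the order of 0.01 or less, the robot lives in a world where coasting is impossible and motion stops the instant the driving force does.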
To resolve this issue, Cohen and his team decided to build and pattern their microrobots in 2D. Then, inspired by origami, a computer prints the 2D pattern of a robot that can fold itself into a 3D structure. Because paper origami is scale invariant, mechanisms built at one scale will work at another, so the idea is to build robot patterns that can be printed and then walk off the page or out of a petri dish. However, as Cohen said in his talk last Wednesday, “an origami artist is only as good as their origami paper.” And to build robots at a microscopic scale, one needs some pretty thin paper. Cohen’s team uses graphene, a single sheet of which is only one atom thick. Atomic layer deposition films also behave very similarly to paper: they can be cut, stretched locally, and made to adopt a 3D shape. Key steps to making sure the robot self-folds include making elements that bend and adding stiff pads that localize the bends in the robot’s pattern. This is what allows the team to produce what they call “graphene bimorphs.”
Cohen and his team are looking to use microscopic robots to make artificial cilia, the small hair-like protrusions on cells. Cilia can be sensory or used for locomotion. In the brain, there are cavities where neurotransmitters are redirected based on the beating of cilia, so if one could control the beating of individual cilia, one could control where neurotransmitters are directed. This could potentially have biomedical implications for detecting and resolving neurological disorders.
Right now, Cohen and his lab have microscopic robots made of graphene, which have photovoltaics attached to their legs. When a light shines on the photovoltaic receptor, it activates the robot’s arm movement, and it can wave hello. The advantage of using photovoltaics is that to control the robot, scientists can shine light instead of supplying voltage through a probe—the robot doesn’t need any tethers. During his presentation, Cohen showed the audience a video of his “Brobot,” a robot that flexes its arms when a light shines on it. His team has also successfully made microscopic robots with front and back legs that can walk off a petri dish. Their dimensions are 70 microns long, 40 microns wide and two microns thick.
Cohen wants to think critically about which problems are important to solve with technology; he wants to make projects that can predict the behavior of people in crowds, predict the direction people will go in response to political issues, and help resolve water crises. Cohen’s research has the potential to find solutions for a wide variety of current issues. Using science fiction and origami as the inspiration for his projects reminds us that the ideas we dream of can become tangible realities.
It was a Frankenstein moment for Duke alumnus and adjunct physics professor Henry Everitt.
After years of working out the basic principles behind his new laser, last Halloween he was finally ready to put it to the test. He turned some knobs and toggled some switches, and presto, the first bright beam came shooting out.
“It was like, ‘It’s alive!’” Everitt said.
This was no laser for presenting Powerpoint slides or entertaining cats. Everitt and colleagues have invented a new type of laser that emits beams of light in the ‘terahertz gap,’ the no-man’s-land of the electromagnetic spectrum between microwaves and infrared light.
Terahertz radiation, or ‘T-rays,’ can see through clothing and packaging, but without the health hazards of harmful radiation, so they could be used in security scanners to spot concealed weapons without subjecting people to the dangers of X-rays.
It’s also possible to identify substances by the characteristic frequencies they absorb when T-rays hit them, which makes terahertz waves ideal for detecting toxins in the air or gases between the stars. And because such frequencies are higher than those of radio waves and microwaves, they can carry more bandwidth, so terahertz signals could transmit data many times faster than today’s cellular or Wi-Fi networks.
“Imagine a wireless hotspot where you could download a movie to your phone in a fraction of a second,” Everitt said.
Yet despite the potential payoffs, T-rays aren’t widely used because there isn’t a portable, cheap or easy way to make them.
Now Everitt and colleagues at Harvard University and MIT have invented a small, tunable T-ray laser that might help scientists tap into the terahertz band’s potential.
While most terahertz molecular lasers take up an area the size of a ping pong table, the new device could fit in a shoebox. And while previous sources emit light at just one or a few select frequencies, their laser could be tuned to emit over the entire terahertz spectrum, from 0.1 to 10 THz.
The laser’s tunability gives it another practical advantage, researchers say: the ability to adjust how far the T-ray beam travels. Terahertz signals don’t go very far because water vapor in the air absorbs them. But because some terahertz frequencies are more strongly absorbed by the atmosphere than others, the tuning capability of the new laser makes it possible to control how far the waves travel simply by changing the frequency. This might be ideal for applications like keeping car radar sensors from interfering with each other, or restricting wireless signals to short distances so potential eavesdroppers can’t intercept them and listen in.
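The range-by-frequency trick follows from simple exponential (Beer-Lambert) attenuation: I(d) = I0·exp(−αd), where the absorption coefficient α depends on frequency. The α values below are assumed for illustration, not measured atmospheric data:

```python
import math

# Illustrative sketch, not from the paper: with Beer-Lambert attenuation
# I(d) = I0 * exp(-alpha * d), the distance at which a signal falls below a
# detection threshold depends strongly on the absorption coefficient alpha,
# which varies across the terahertz band. Alpha values here are assumed.
def usable_range(alpha_per_m: float, threshold_fraction: float = 1e-3) -> float:
    """Distance (m) at which intensity drops to threshold_fraction of initial."""
    return math.log(1.0 / threshold_fraction) / alpha_per_m

for alpha in (0.01, 0.1, 1.0):  # weakly to strongly absorbed frequencies
    print(f"alpha = {alpha}/m -> range ~ {usable_range(alpha):.0f} m")
```

Tuning from a weakly absorbed frequency to a strongly absorbed one shrinks the usable range by the same factor, which is the eavesdropper-limiting behavior described above.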
Everitt and a team co-led by Federico Capasso of Harvard and Steven Johnson of MIT describe their approach this week in the journal Science. The device works by harnessing discrete shifts in the energy levels of spinning gas molecules when they’re hit by another laser emitting infrared light.
Their T-ray laser consists of a pencil-sized copper tube filled with gas, and a 1-millimeter pinhole at one end. A zap from the infrared laser excites the gas molecules within, and when the molecules in this higher energy state outnumber the ones in a lower one, they emit T-rays.
The team dubbed their gizmo the “laughing gas laser” because it uses nitrous oxide, though almost any gas could work, they say.
Everitt started working on terahertz laser designs 35 years ago as a Duke undergraduate in the mid-1980s, when a physics professor named Frank De Lucia offered him a summer job.
De Lucia was interested in improving special lasers called “OPFIR lasers,” which were the most powerful sources of T-rays at the time. They were too bulky for widespread use, and they relied on an equally unwieldy infrared laser called a CO2 laser to excite the gas inside.
Everitt was tasked with trying to generate T-rays with smaller gas laser designs. A summer gig soon grew into an undergraduate honors thesis, and eventually a Ph.D. from Duke, during which he and De Lucia managed to shrink the footprint of their OPFIR lasers from the size of an axe handle to the size of a toothpick.
But the CO2 lasers they were partnered with were still quite cumbersome and dangerous, and each time researchers wanted to produce a different frequency they needed to use a different gas. When more compact and tunable sources of T-rays came along, OPFIR lasers were largely abandoned.
Everitt would shelve the idea for another decade before a better alternative to the CO2 laser came along: a compact infrared laser invented by Harvard’s Capasso that could be tuned to any frequency over a swath of the infrared spectrum.
By replacing the CO2 laser with Capasso’s laser, Everitt realized they wouldn’t need to change the laser gas anymore to change the frequency. He thought the OPFIR laser approach could make a comeback. So he partnered with Johnson’s team at MIT to work out the theory, then with Capasso’s group to give it a shot.
The team has moved to patent their design, but there is still a long way to go before it finds its way onto store shelves or into consumers’ hands. Nonetheless, the researchers — who couldn’t resist a laser joke — say the outlook for the technique is “very bright.”
This research was supported by the U.S. Army Research Office (W911NF-19-2-0168, W911NF-13-D-0001) and by the National Science Foundation (ECCS-1614631) and its Materials Research Science and Engineering Center Program (DMR-1419807).
CITATION: “Widely Tunable Compact Terahertz Gas Lasers,” Paul Chevalier, Arman Amirzhan, Fan Wang, Marco Piccardo, Steven G. Johnson, Federico Capasso, Henry Everitt. Science, Nov. 15, 2019. DOI: 10.1126/science.aay8683.
The proton, that little positively-charged nugget inside an atom, is fractions of a quadrillionth of a meter smaller than anyone thought, according to new research appearing Nov. 7 in Nature.
In work they hope solves the contentious “proton radius puzzle” that has been roiling some corners of physics for the last decade, a team of scientists including Duke physicist Haiyan Gao has addressed the question of the proton’s radius in a new way and discovered that it is 0.831 femtometers across, about 4 percent smaller than the best previous measurement using electrons from accelerators. (Read the paper!)
A single femtometer is 0.000000000000039370 inches imperial, if that helps, or think of it as a millionth part of a billionth part of a meter. And the new radius is just over 80 percent of that.
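The conversions in the paragraph above are easy to verify with standard unit definitions (0.0254 meters per inch); nothing here comes from the paper itself:

```python
# Quick unit check of the figures in the text, using standard conversions only.
FM_IN_METERS = 1e-15          # one femtometer: a millionth of a billionth of a meter
METERS_PER_INCH = 0.0254      # exact definition of the inch

fm_in_inches = FM_IN_METERS / METERS_PER_INCH
print(f"1 fm = {fm_in_inches:.5e} in")  # ~3.93701e-14 inches

radius_fm = 0.831             # the new proton charge radius, in femtometers
print(f"fraction of a femtometer: {radius_fm:.0%}")  # ~83%
```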
But this is a big — and very small — deal for physicists, because any precise calculation of energy levels in an atom will be affected by this measure of the proton’s size, said Gao, who is the Henry Newson professor of physics in Trinity College of Arts & Sciences.
What the physicists actually measured is the radius of the proton’s charge distribution, but that’s never a smooth, spherical point, Gao explained. The proton is made of still smaller bits, called quarks, that have their own charges, and those aren’t evenly distributed. Nor does anything sit still. So it’s kind of a moving target.
One way to measure a proton’s charge radius is to scatter an electron beam from the nucleus of an atom of hydrogen, which is made of just one proton and one electron. But the electron must only perturb the proton very gently to enable researchers to infer the size of the charge involved in the interaction. Another approach measures the difference between two atomic hydrogen energy levels. Past results from these two methods have generally agreed.
But in 2010, an experiment at the Paul Scherrer Institute replaced the electron in a hydrogen atom with a muon, a much heavier and shorter-lived member of the electron’s particle family. The muon is still negatively charged like an electron, but it’s about 200 times heavier, so it can orbit much closer to the proton. Measuring the difference between muonic hydrogen energy levels, these physicists obtained a proton charge radius that is highly precise, but much smaller than the previously accepted value. And this started the dispute they’ve dubbed the “proton charge radius puzzle.”
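The “orbits much closer” claim follows from textbook hydrogen-like scaling: the Bohr radius is inversely proportional to the orbiting particle’s reduced mass. A quick estimate (standard particle masses, approximate values) shows the muon sits nearly 200 times closer:

```python
# Sketch of why the muon orbits closer (standard hydrogen-like scaling; the mass
# values are approximate textbook numbers, in units of the electron mass):
# Bohr radius a ~ 1/mu, with reduced mass mu = m*M / (m + M).
M_E = 1.0        # electron mass
M_MU = 206.77    # muon mass, ~207 electron masses
M_P = 1836.15    # proton mass

def reduced_mass(m, M=M_P):
    return m * M / (m + M)

shrink = reduced_mass(M_MU) / reduced_mass(M_E)
print(f"muonic hydrogen orbit is ~{shrink:.0f}x smaller")  # roughly 186x
```

Because the muon’s wavefunction overlaps the proton far more, its energy levels are much more sensitive to the proton’s size, which is what made the muonic measurement so precise.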
To resolve the puzzle, Gao and her collaborators set out to do a completely new type of electron scattering experiment with a number of innovations. And they looked at electron scattering from both the proton and the electron of the hydrogen atom at the same time. They also managed to get the beam of electrons scattered at near zero degrees, meaning it came almost straight forward, which enabled the electron beam to “feel” the proton’s charge response more precisely.
Voila, a 4-percent-smaller proton. “But actually, it’s much more complicated,” Gao said, in a major understatement.
The work was done at the Department of Energy’s Thomas Jefferson National Accelerator Facility in Newport News, Virginia, using new equipment supported by both the National Science Foundation and the Department of Energy, and some parts that were purpose-built for this experiment. “To solve the argument, we needed a new approach,” Gao said.
Gao said she has been interested in this question for nearly 20 years, ever since she became aware of two different values for the proton’s charge radius, both from electron scattering experiments. “Each one claimed about 1 percent uncertainty, but they disagreed by several percent,” she said.
And as always in modern physics, had the answer not worked out so neatly, it might have called into question parts of the Standard Model of particle physics. But alas, not this time.
“This is particularly important for a number of reasons,” Gao said. The proton is a fundamental building block of visible matter, and the energy level of hydrogen is a basic unit of measure that all physicists rely on.
The new measure may also help advance new insights into quantum chromodynamics (QCD), the theory of the strong interaction among quarks and gluons, Gao said. “We really don’t understand how QCD works.”
“This is a very, very big deal,” she said. “The field is very excited about it. And I should add that this experiment would not have been so successful without the heroic contributions from our highly talented and hardworking graduate students and postdocs from Duke.”
This work was funded in part by the U. S. National Science Foundation (NSF MRI PHY-1229153) and by the U.S. Department of Energy (Contract No. DE-FG02-03ER41231), including contract No. DE-AC05-06OR23177 under which Jefferson Science Associates, LLC operates Thomas Jefferson National Accelerator Facility.
CITATION: “A Small Proton Charge Radius from An Electron-Proton Scattering Experiment,” W. Xiong, A. Gasparian, H. Gao, et al. Nature, Nov. 7, 2019. DOI: 10.1038/s41586-019-1721-2 (ONLINE)
The technical-sounding category of “light-driven charge-transfer reactions” becomes more familiar to non-physicists when you just call it photosynthesis or solar electricity.
When a molecule (in a leaf or solar cell) is hit by an energetic photon of light, it first absorbs the little meteor’s energy, generating what chemists call an excited state. This excited state then almost immediately (within trillionths of a second) shuttles an electron away to a charge acceptor to lower its energy. That transference of charge is what drives plant life and solar electricity.
The energy of the excited state plays an important role in determining solar energy conversion efficiency. That is, the more of that photon’s energy that can be retained in the charge-separated state, the better. For most solar-electric devices, the excited state rapidly loses energy, resulting in less efficient devices.
But what if there were a way to create even more energetic excited states from that incoming photon?
Using a very efficient photosynthesizing bacterium as their inspiration, a team of Duke chemists that included graduate students Nick Polizzi and Ting Jiang, and faculty members David Beratan and Michael Therien, synthesized a “supermolecule” to help address this question.
“Nick and Ting discovered a really cool trick about electron transfer that we might be able to adapt to improving solar cells,” said Michael Therien, the William R. Kenan, Jr. Professor of Chemistry. “Biology figured this out eons ago,” he said.
“When molecules absorb light, they have more energy,” Therien said. “One of the things that these molecular excited states do is that they move charge. Generally speaking, most solar energy conversion structures that chemists design feature molecules that push electron density in the direction they want charge to move when a photon is absorbed. The solar-fueled microbe, Rhodobacter sphaeroides, however, does the opposite. What Nick and Ting demonstrated is that this could also be a winning strategy for solar cells.”
The chemists devised a clever synthetic molecule that shows the advantages of an excited state that pushes electron density in the direction opposite to where charge flows. In effect, this allows more of the energy harvested from a photon to be used in a solar cell.
“Nick and Ting’s work shows that there are huge advantages to pushing electron density in the exact opposite direction from where you want charge to flow,” Therien said in his top-floor office of the French Family Science Center. “The biggest advantage of an excited state that pushes charge the wrong way is it stops a really critical pathway for excited state relaxation.”
“So, in many ways it’s a Rube Goldberg-like conception,” Therien said. “It is a design strategy that’s been maybe staring us in the face for several years, but no one’s connected the dots like Nick and Ting have.”
In a July 2 commentary for the Proceedings of the National Academy of Sciences, Bowling Green State University chemist and photoscientist Malcolm D.E. Forbes calls this work “a great leap forward,” and says it “should be regarded as one of the most beautiful experiments in physical chemistry in the 21st century.”
CITATION: “Engineering Opposite Electronic Polarization of Singlet and Triplet States Increases the Yield of High-Energy Photoproducts,” Nicholas Polizzi, Ting Jiang, David Beratan, Michael Therien. Proceedings of the National Academy of Sciences, June 10, 2019. DOI: 10.1073/pnas.1901752116
From the minuscule particles underlying matter, to vast amounts of data from the far reaches of outer space, Chris Walter, a professor of physics at Duke, pursues research into the great mysteries of the universe, from the infinitesimal to the infinite.
As an undergraduate at the University of California at Santa Cruz, he thought he would become a theoretical physicist, but while continuing his education at the California Institute of Technology (Caltech), he found himself increasingly drawn to experimental physics, deriving knowledge of the universe by observing its phenomena.
Neutrinos — minuscule particles emitted during radioactive decay — captured his attention, and he began work with the KamiokaNDE (Kamioka Nucleon Decay Experiment, now typically written as Kamiokande) at the Kamioka Observatory in Hida, Japan. Buried deep underground in an abandoned mine to shield the detectors from cosmic rays and submerged in water, Kamiokande offered Walter an opportunity to study a long-supposed but still unproven hypothesis: that neutrinos were massless.
Recalling one of his most striking memories from his time in the lab, he described observing and finding answers in Cherenkov light, a kind of ‘sonic boom’ of light. A sonic boom is created when an object outruns the speed of sound in air. Light, likewise, travels more slowly through water than through a vacuum, so a charged particle moving through water faster than light itself does there emits a shock wave of light. Walter described it as a ring of light bursting out of the darkness.
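The condition for Cherenkov light is textbook physics: a particle radiates when its speed exceeds light’s speed in the medium, i.e. β = v/c > 1/n. A short calculation (standard constants, water’s refractive index n ≈ 1.33) gives the threshold and the kinetic energy an electron needs to cross it:

```python
import math

# Textbook Cherenkov threshold, not a result from the talk: light in water moves
# at c/n, so a charged particle radiates when beta = v/c exceeds 1/n.
N_WATER = 1.33
beta_threshold = 1.0 / N_WATER  # ~0.75 of the vacuum speed of light

# Relativistic kinetic energy an electron needs to reach that speed:
gamma = 1.0 / math.sqrt(1.0 - beta_threshold**2)
ke_mev = (gamma - 1.0) * 0.511  # electron rest energy ~0.511 MeV
print(f"beta > {beta_threshold:.3f}, electron KE > {ke_mev:.2f} MeV")
```

Electrons knocked loose by neutrino interactions easily exceed this threshold, which is why the detector’s wall of light sensors can see the characteristic rings.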
During his time at the Kamioka Observatory, he was part of groundbreaking research on the mass of neutrinos. Neutrinos had long been thought to be massless, but Kamiokande discovered neutrino oscillation — that neutrinos can change from flavor to flavor — indicating that, contrary to the prevailing assumption, they have mass. Seventeen years later, in 2015, the leader of his team, Takaaki Kajita, was co-awarded the Nobel Prize in Physics, citing research from their collaboration.
Those neutrinos were born of cosmic rays from outer space, and soon another mystery from the cosmos captured Walter’s attention.
“If you died and were given the chance to know the answer to just one question,” he said, “for me, it would be, ‘What is dark energy?’”
Observations made in the 1990s, as Walter was concluding his time at the Kamioka Observatory, found that the expansion of the universe was accelerating. The nature of the dark energy causing this accelerating expansion remained unknown to scientists, and it offered a new course of study in the field of astrophysics.
Walter has recently joined the Large Synoptic Survey Telescope (LSST) as part of a 10-year, 3D survey of the entire sky, gathering over 20 terabytes of data nightly and detecting thousands of changes in the night sky, observing asteroids, galaxies, supernovae, and other astronomical phenomena. With new machine learning techniques and supercomputing methods to process the vast quantities of data, the LSST offers incredible new opportunities for understanding the universe.
To Walter, this is the next big step for research into the nature of dark energy and the great questions of science.
Protein crystals don’t usually display the glitz and glam of gemstones. But no matter their looks, each and every one is precious to scientists.
Patrick Charbonneau, a professor of chemistry and physics at Duke, along with a worldwide group of scientists, teamed up with researchers at Google Brain to use state-of-the-art machine learning algorithms to spot these rare and valuable crystals. Their work could accelerate drug discovery by making it easier for researchers to map the structures of proteins.
“Every time you miss a protein crystal, because they are so rare, you risk missing out on an important biomedical discovery,” Charbonneau said.
Knowing the structure of proteins is key to understanding their function and possibly designing drugs that work with their specific shapes. But the traditional approach to determining these structures, called X-ray crystallography, requires that proteins be crystallized.
Crystallizing proteins is hard — really hard. Unlike the simple atoms and molecules that make up common crystals like salt and sugar, these big, bulky molecules, which can contain tens of thousands of atoms each, struggle to arrange themselves into the ordered arrays that form the basis of crystals.
“What allows an object like a protein to self-assemble into something like a crystal is a bit like magic,” Charbonneau said.
Even after decades of practice, scientists have to rely in part on trial and error to obtain protein crystals. After isolating a protein, they mix it with hundreds of different types of liquid solutions, hoping to find the right recipe that coaxes them to crystallize. They then look at droplets of each mixture under a microscope, hoping to spot the smallest speck of a growing crystal.
“You have to manually say, there is a crystal there, there is none there, there is one there, and usually it is none, none, none,” Charbonneau said. “Not only is it expensive to pay people to do this, but also people fail. They get tired and they get sloppy, and it detracts from their other work.”
The machine learning software searches for points and edges (left) to identify crystals in images of droplets of solution. It can also identify when non-crystalline solids have formed (middle) and when no solids have formed (right).
Charbonneau thought perhaps deep learning software, which is now capable of recognizing individual faces in photographs even when they are blurry or caught from the side, should also be able to identify the points and edges that make up a crystal in solution.
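The intuition that crystals announce themselves through straight, high-contrast edges can be illustrated with a toy gradient filter; this is only a sketch of the idea, not the Google Brain model, which learns such features automatically from the images:

```python
# Toy sketch, not Google Brain's classifier: crystals show up as sharp,
# high-contrast edges, which even a simple gradient filter can flag.
def edge_score(img):
    """Sum of absolute horizontal + vertical gradients over a 2D grayscale grid."""
    h, w = len(img), len(img[0])
    score = 0
    for y in range(h - 1):
        for x in range(w - 1):
            score += abs(img[y][x + 1] - img[y][x]) + abs(img[y + 1][x] - img[y][x])
    return score

flat_droplet = [[0] * 8 for _ in range(8)]  # clear droplet: no solids, no edges
crystal = [[0] * 8 for _ in range(8)]
for y in range(2, 6):                       # a bright faceted block in the middle
    for x in range(2, 6):
        crystal[y][x] = 255

print(edge_score(flat_droplet), edge_score(crystal))  # 0 vs. a large score
```

A deep network goes far beyond this hand-built score, distinguishing crystal edges from the blobs and precipitates that also produce contrast.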
Scientists from both academia and industry came together to collect half a million images of protein crystallization experiments into a database called MARCO. The data specify which of these protein cocktails led to crystallization, based on human evaluation.
The team then worked with a group led by Vincent Vanhoucke from Google Brain to apply the latest in artificial intelligence to help identify crystals in the images.
After “training” the deep learning software on a subset of the data, they unleashed it on the full database. The A.I. was able to accurately identify crystals about 95 percent of the time. Estimates show that humans spot crystals correctly only 85 percent of the time.
“And it does remarkably better than humans,” Charbonneau said. “We were a little surprised because most A.I. algorithms are made to recognize cats or dogs, not necessarily geometrical features like the edge of a crystal.”
Other teams of researchers have already asked to use the A.I. model and the MARCO dataset to train their own machine learning algorithms to recognize crystals in protein crystallization experiments, Charbonneau said. These advances should allow researchers to focus more time on biomedical discoveries instead of squinting at samples.
Charbonneau plans to use the data to understand how exactly proteins self-assemble into crystals, so that researchers rely less on chance to get this “magic” to happen.
“We are trying to use this data to see if we can get more insight into the physical chemistry of self-assembly of proteins,” Charbonneau said.
CITATION: “Classification of crystallization outcomes using deep convolutional neural networks,” Andrew E. Bruno, et al. PLOS ONE, June 20, 2018. DOI: 10.1371/journal.pone.0198883
A new conductive “felt” carries electricity even when twisted, bent and stretched. Credit: Matthew Catenacci
The exercise-tracking power of a Fitbit may soon jump from your wrist and into your clothing.
Researchers are seeking to embed electronics such as fitness trackers and health monitors into our shirts, hats, and shoes. But no one wants stiff copper wires or silicon transistors deforming their clothing or poking into their skin.
Scientists in Benjamin Wiley’s lab at Duke have created a new conductive “felt” that can be easily patterned onto fabrics to create flexible wires. The felt, composed of silver-coated copper nanowires and silicone rubber, carries electricity even when bent, stretched and twisted, over and over again.
“We wanted to create wiring that is stretchable on the body,” said Matthew Catenacci, a graduate student in Wiley’s group.
The conductive felt is made of stacks of interwoven silver-coated copper nanowires filled with a stretchable silicone rubber (left). When stretched, felt made from more pliable rubber is more resilient to small tears and holes than felts made of stiffer rubber (middle). These tears can be seen in small cavities in the felt (right). Credit: Matthew Catenacci
To create a flexible wire, the team first sucks a solution of copper nanowires and water through a stencil, creating a stack of interwoven nanowires in the desired shape. The material is similar to the interwoven fibers that comprise fabric felt, but on a much smaller scale, said Wiley, an associate professor of chemistry at Duke.
“The way I think about the wires are like tiny sticks of uncooked spaghetti,” Wiley said. “The water passes through, and then you end up with this pile of sticks with a high porosity.”
The interwoven nanowires are heated to 300 F to melt the contacts together, and then silicone rubber is added to fill in the gaps between the wires.
To show the pliability of their new material, Catenacci patterned the nanowire felt into a variety of squiggly, snaking patterns. Stretching and twisting the wires up to 300 times did not degrade the conductivity.
The material maintains its conductivity when twisted and stretched. Credit: Matthew Catenacci
“On a larger scale you could take a whole shirt, put it over a vacuum filter, and with a stencil you could create whatever wire pattern you want,” Catenacci said. “After you add the silicone, you will just have a patch of fabric that is able to stretch.”
Their felt is not the first conductive material that displays the agility of a gymnast. Flexible wires made of silver microflakes also exhibit this same set of properties. But the new material outperforms any developed so far, and at a much lower cost.
“This material retains its conductivity after stretching better than any other material with this high of an initial conductivity. That is what separates it,” Wiley said.
Heat-loving thermophile bacteria may have been some of the earliest lifeforms on Earth. Researchers are studying their great-great-great-grandchildren, like those living in Yellowstone’s Grand Prismatic Spring, to understand how these early bacteria repaired their DNA.
Think your life is hard? Imagine being a tiny bacterium trying to get a foothold on a young and desolate Earth. The earliest lifeforms on our planet endured searing heat, ultraviolet radiation and an atmosphere devoid of oxygen.
Benjamin Rousseau, a research technician in David Beratan’s lab at Duke, studies one of the molecular machines that helped these bacteria survive their harsh environment. This molecule, called photolyase, fixes DNA damaged by ultraviolet (UV) radiation — the same wavelengths of sunlight that give us sunburn and put us at greater risk of skin cancer.
“Anything under the sun — in both meanings of the phrase — has to have ways to repair itself, and photolyase proteins are one of them,” Rousseau said. “They are one of the most ancient repair proteins.”
Though these proteins have been around for billions of years, scientists are still not quite sure exactly how they work. In a new study, Rousseau and coworkers, working with Professor David Beratan and Assistant Research Professor Agostino Migliore, used computer simulations to study photolyase in thermophiles, the great-great-great-great-grandchildren of Earth’s original bacterial pioneers.
The study appeared in the Feb. 28 issue of the Journal of the American Chemical Society.
DNA is built of chains of bases — A, C, G and T — whose order encodes our genetic information. UV light can trigger two adjacent bases to react and latch onto one another, rendering these genetic instructions unreadable.
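This kind of UV damage typically fuses two neighboring pyrimidine bases (C or T), most often a pair of thymines. As a rough illustration of the idea — not code from the study, and with a hypothetical function name — a short sketch can scan a DNA sequence for adjacent pyrimidine pairs, the spots where such a dimer could form:

```python
# Illustrative sketch: find positions in a DNA sequence where two
# adjacent pyrimidine bases (C or T) sit side by side -- the sites
# where UV light can fuse neighboring bases into a dimer.

def dimer_sites(seq):
    """Return indices i where seq[i] and seq[i+1] are both pyrimidines."""
    pyrimidines = {"C", "T"}
    return [i for i in range(len(seq) - 1)
            if seq[i] in pyrimidines and seq[i + 1] in pyrimidines]

sequence = "GATTACATTCG"
print(dimer_sites(sequence))  # -> [2, 7, 8]: the TT, TT and TC pairs
```

Each reported index marks a base pair that UV light could weld together, garbling the instructions until a repair protein like photolyase splits the pair back apart.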
Photolyase uses a molecular antenna to capture light from the sun and convert its energy into an excited electron. It then hands the electron over to the DNA strand, sparking a reaction that splits the two fused bases apart and restores the genetic information.
Photolyase proteins use a molecular antenna (green, blue and red structure on the right) to harvest light and convert it into an electron. The adenine-containing structure in the middle hands the electron to the DNA strand, splitting apart DNA bases. Credit: Benjamin Rousseau, courtesy of the Journal of the American Chemical Society.
Rousseau studied the role of a molecule called adenine in shuttling the electron from the molecular antenna to the DNA strand. He looked at photolyase in both the heat-loving descendants of ancient bacteria, called thermophiles, and more modern bacteria like E. coli that thrive at moderate temperatures, called mesophiles.
He found that in thermophiles, adenine played a role in transferring the electron to the DNA. But in E. coli, the adenine was in a different position, providing mainly structural support.
The results “strongly suggest that mesophiles and thermophiles fundamentally differ in their use of adenine for this electron transfer repair mechanism,” Rousseau said.
He also found that when he cooled E. coli down to 20 degrees Celsius — about 68 degrees Fahrenheit — the adenine shifted back into place, resuming its transport function.
“It’s like a temperature-controlled switch,” Rousseau said.
Though humans no longer use photolyase for DNA repair, the protein persists in life as diverse as bacteria, fungi and plants — and is even being studied as an ingredient in sunscreens to help repair UV-damaged skin.
Understanding exactly how photolyase works may also help researchers design proteins with a variety of new functions, Rousseau said.
“Photolyase does all of the work on its own — it harvests the light, it transfers the electron over a huge distance to the other site, and then it cleaves the DNA bases,” Rousseau said. “Proteins with that kind of plethora of functions tend to be an attractive target for protein engineering.”