Following the people and events that make up the research community at Duke



Another hint of the Higgs, maybe

By Ashley Yeager

This cartoon shows a "line-up" of possible suspects for the Higgs boson. Click image for a larger view. Credit: Mark Kruse, Duke University.

Scientists may have spotted the Higgs boson again.

But, Duke physicist Mark Kruse says Fermilab has made its latest announcement prematurely.

Physicists have been searching for the Higgs for more than 40 years, hoping to find it and at last explain how elementary particles acquire mass.

Last year, the Fermilab team announced that it had found no significant hint of the particle after analyzing about 80 percent of the data from its two Higgs-hunting instruments, CDF and DZero.

Now, after adding the remaining 20 percent of the data and making some analytic improvements, the team is suggesting that Fermilab has seen the particle.

The signal, however, would be “almost fantastically high” if seen with other Higgs detection methods, Kruse says. He is on one of the committees reviewing the analyses from Fermilab’s CDF experiment and once led the instrument’s Higgs Discovery Group.

He also works at the Large Hadron Collider (LHC), where teams made a similar announcement last December.

A “tremendous amount of work” has gone into the latest Fermilab results, Kruse says. But, the team could have waited for upcoming improvements in the CDF and DZero studies and also worked to better understand the discrepancy between the lab’s latest results and those from last year.

This might, of course, all be sorted out soon, he adds. But, “my feeling is that it was a little soon to make this announcement with the suggested claims we made, without the full results and proper understanding of the present analyses.”

This “rush to announce” mentality may also create a certain amount of distrust in the public eye, Kruse says.

Composing music with Xbox Kinect

By Ashley Yeager

Ken Stewart uses his motions and an Xbox Kinect to narrate, musically, a dance by Thomas DeFrantz. Credit: Duke University Dance Program.

To watch Ken Stewart dance in front of his Xbox Kinect gives a whole new meaning to the “Dance Your Ph.D.” contest.

Stewart, a graduate student in the music department and a composer, is using the camera, along with specialized computer software, to narrate dance with sound. He demoed the program while walking an audience through imnewhere, or “I’m new here,” his composition about dance professor Tommy DeFrantz’s journey to Duke.

The Jan. 27 presentation was part of the Visualization Friday Forum and gave attendees a behind-the-scenes look at the research and mathematics behind Stewart’s new, “more expressive way” to write music.

With the Kinect, which has motion-detection technology for interacting with video games, Stewart can transform his gestures into sound, intimately controlling the loudness, pitch and rhythmic intensity of the score he creates. The system tracks 15 points on the performer’s body, including his head, neck, shoulders, knees and feet.

Drawing on a library of sounds, the performer can then choreograph a composition, with the computer calculating the angles between his hands or the distance between his body and the camera. Those measurements are then converted into musical notes.
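As a rough illustration of that kind of mapping, here is a minimal sketch, not Stewart’s actual software, of how the joint positions in a Kinect-style skeleton frame might be turned into a pitch and a loudness value. The joint names, the ranges and the MIDI-style numbers are assumptions chosen for the example.

```python
import math

# Hypothetical 3-D joint positions in metres, as a Kinect-style tracker might report them.
frame = {
    "left_hand":  (-0.40, 1.20, 2.10),
    "right_hand": ( 0.35, 1.45, 2.05),
    "torso":      ( 0.00, 1.00, 2.20),
}

def angle_between_hands(frame):
    """Angle (degrees) of the line joining the two hands, measured from horizontal."""
    lx, ly, _ = frame["left_hand"]
    rx, ry, _ = frame["right_hand"]
    return math.degrees(math.atan2(ry - ly, rx - lx))

def distance_to_camera(frame):
    """Depth (metres) of the torso joint; the z axis points away from the sensor."""
    return frame["torso"][2]

def gesture_to_note(frame, low=48, high=84):
    """Map the hand angle to a MIDI pitch and the body's depth to a MIDI velocity."""
    angle = angle_between_hands(frame)                      # roughly -90 to 90 degrees
    pitch = int(round(low + (angle + 90) / 180 * (high - low)))
    depth = min(max(distance_to_camera(frame), 0.5), 4.0)   # clamp to 0.5-4 m
    velocity = int(round(127 * (1 - (depth - 0.5) / 3.5)))  # closer to the camera = louder
    return pitch, velocity

print(gesture_to_note(frame))   # (70, 65) for the frame above
```

A real system would smooth the tracking data and map many more gestures, but the basic move, turning continuous body measurements into note and dynamics values, is the one Stewart described.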

The work, Stewart says, gives him a way to use his ears and actions to “feel out” a song. He concedes that there are hiccups between how he moves and the sounds created, but he thinks the imprecision adds to the expressivity of the composing process.

Stewart said he and DeFrantz are still working on imnewhere. They plan to expand the piece to 15 minutes and will perform it again in Grand Rapids, Mich., Berkeley, Calif. and Belfast, UK.

Double-walled nanotubes shine, sometimes

By Ashley Yeager


This montage shows modeled and imaged double-walled carbon nanotubes. Courtesy of: Morinobu Endo, Shinshu University.

Nanotubes are tiny, and they can give off light. Those properties make the carbon constructions promising for looking at cells inside our bodies and for building small electronics that can capture and manipulate light.

But recent research suggests that not all nanotubes shine as chemists thought, a discovery that ends a debate in the field about which type of tube to use for applications relying on their ability to emit light.

The debate pitted single-walled carbon nanotubes against double-walled ones. Some scientists thought only single-walled tubes could give off light and be used in light-related applications. But other scientists showed that double-walled nanotubes could also emit light and possibly replace their single-walled cousins.

Now, Sungwoo Yang, a former chemistry graduate student at Duke, and his colleagues in Jie Liu’s lab have shown that both single- and double-walled carbon nanotubes shine when hit with lasers. But in the double-walled tubes, only the inner wall emits light, and only inner tubes within a small range of diameters can get that light to the outside world. Inner tubes outside of this range gave off light too, but it got doused on its way through the outer layer.

Bottom line: Some double-walled carbon nanotubes do emit light but most don’t, if you’re looking for the light outside of the tube. That discovery makes both camps in the nanotube debate correct, depending on the diameter of tube being considered.

CITATION: “Photoluminescence from Inner Walls in Double-Walled Carbon Nanotubes: Some Do, Some Do Not.” Sungwoo Yang, Ashley Parks, Stacey Saba, P. Lee Ferguson, and Jie Liu. Nano Lett., 2011, 11:10, 4405–4410
DOI: 10.1021/nl2025745

Solving problems with iPad (or Android) apps

eCLIP iPad application

By Becca Bayham

When a patient comes into the E.R. with a lung problem, doctors usually put them on a ventilator. Unfortunately, this procedure helps some patients, but hurts others. Doctors have difficulty predicting which will be the case, due to a lack of data on risk factors. A predictive model was recently developed to solve this problem, but the calculations require more time and information than E.R. doctors usually have.

Enter Raquel Bartz, an emergency room doctor at Duke Hospital. She envisioned an iPad application where doctors and family members could input the necessary medical information, and the app would spit out the treatment protocol for a particular patient. Bartz turned to Richard Lucic and Robert Duvall’s Software for Mobile Devices class (COMPSCI 196) to make her idea a reality.

The result? An application called eCLIP, developed by students last fall and available now in the iTunes App Store.

eCLIP is one of five applications created by students during the two semesters COMPSCI 196 has been offered. Lucic and Duvall described the course — and its various student-produced applications — at last week’s Visualization Friday Forum, sponsored by the Visualization Studies Initiative (http://visualstudies.duke.edu/) and Duke’s computer science department.

“We’re trying to teach students about the mobile app world,” Lucic said. “In addition, we’re trying to teach students about the software development process, from conception of an idea to delivering a product to a client.”

Lucic emphasized the importance of teamwork, as well as the value of visual design skills for increasing a product’s appeal. Furthermore, user testing is a critical step for identifying problems.

This semester, nine clients pitched their application ideas. Students voted for their favorite projects, and three were ultimately chosen:

  • Ajay Patel, IT Manager in the Duke Cancer Center, wanted a way to track medical samples during processing and reduce human error
  • Allison Besch, educational curator for the North Carolina Maritime Museum, wanted a fun, educational tool for teaching marine resource conservation to 4th graders
  • Rachel Cook, Duke alumna and former futures trader, wanted an app to encourage microlending and bridge the gap between lenders and borrowers

Each client worked with a team of 3-4 students, and met with them every other week to discuss the team’s progress.

“A lot of students are learning how to code mobile apps for the first time, so there’s only 6-7 weeks of actual coding time,” Duvall said.

Despite the time crunch, students try to present a finished product to their clients by the end of the semester. But who keeps the app going after the course’s conclusion?

“What we’re trying to do is have the students provide enough documentation and write their code well enough that the app can be maintained by the client’s organization,” Lucic said. “Clients have been thrilled with the experience. I think we’ve done a superb job of meeting their needs, as much as you can in a one-semester course.”

Particles of light overcome their lack of attraction

By Ashley Yeager

This waveguide shows an electric field moving from right to left. Credit: Setreset, Wikimedia Commons.

Electrons typically repel each other. But sometimes they can actually attract one another and pair up, which is why superconductors exist. The particles that make up light, on the other hand, have no charge and rarely attract or repel one another.

Now, Duke theoretical physicist Harold Baranger and his collaborators think it’s possible to get these particles, called photons, to pair up, and stay that way, as they travel through space.

The new idea could help with the development of quantum communications, and possibly quantum computing in the future, Baranger says.

Particles of light have no electric charge, so they feel no attraction to the particles around them. That lack of attraction gives photons the ability to travel long distances without losing information, a good trait for building quantum networks.

But, the photons’ reserve has a drawback. It makes it more difficult for scientists to control each particle and retrieve the information it carries. To rein in the seemingly aloof photons, physicists have designed cavities that trap them and boost their contact with individual atoms.

In these cavities, the particles pass one by one through an atom in a phenomenon called a photon blockade. But, the cavities trap the photons for a really long time, so they pass through the cavities really slowly, which isn’t good for networking — especially in this age of instant information.

Using pencil and paper, Baranger, his student Huaixiu Zheng and colleague Daniel Gauthier figured out that they could avoid the problems with cavities if the photons instead traveled through a one-dimensional structure, called a waveguide, which also channels the photons past an atom one by one.

In the waveguide, a control atom acts as an intermediary between the incoming particles. The first photon passes through the atom and changes its state.

This diagram shows a photon blockade in a waveguide. Multiple photons (left, yellow) pass by the control atom in a one-by-one manner, ending with a train of single-photon pulses and empty pulses (gray). Courtesy of Harold Baranger, Duke.

The atom then interacts differently with the next photon, which ultimately causes the two particles of light to interact. What’s surprising, Baranger says, is that the photons remain bound to each other for a long time, even as they move away from the atom that paired them.

These bound states also end up producing a photon blockade much like in a cavity, but through a completely different mechanism, and the photons move a lot faster, he says.

The work, which appears online in Physical Review Letters, “paves the way” for experimentalists who want to try to build quantum networks without using cavities, Baranger says. He says experiments in this area may be done at Duke in the next few years.

Right now, he plans to work out what happens to the photons if more than one atom sits in the waveguide. The photons will be interacting in a lot of different places, and “one can imagine that there could even be a quantum phase transition, giving rise to some new quantum state,” he says. “But, that’s just a hope at this point.”

CITATION: “Cavity-Free Photon Blockade Induced by Many-Body Bound States.” Zheng, H., Gauthier, D., and Baranger, H. Phys. Rev. Lett. 107, 223601 (2011).

DOI: 10.1103/PhysRevLett.107.223601

Your brain on memories

By Ashley Yeager

Students map the molecules associated with memory and how they flow through a brain cell. Courtesy of Craig Roberts, Duke.

9/11. JFK’s assassination. A man on the moon.

These words probably evoke a memory of where you were and how you reacted, if you were alive when the events occurred.

The exact molecules and brain processes that form memories and make some memories stronger than others haven’t been worked out yet. But by “walking” through our brain cells, a team of Duke students is taking a more vivid look at how we remember the past.

With Duke computer science faculty, neuroscientist Craig Roberts and his students have created and tested a virtual representation of our brain cells. In this world, students move around a virtual neuron, rearranging and organizing molecules to express their understanding of our memories.

In this 3-D environment of a neuron, students can model how molecules flow through the brain to make memories. Courtesy of Craig Roberts, Duke.

Working in a shared digital space from individual computers, the students collaborate in both real life and cyberspace to model the flow of molecules from brain cell to brain cell. Computer scientists Julian Lombardi and Mark McCahill designed the neuronal landscape in Open Cobalt, a community-supported, open-source platform for building virtual 3-D workspaces.

Roberts, the assistant director of education of the Duke Institute for Brain Sciences, says he is trying to harness the “eventuality of the Internet,” where we’ll explore ideas and solve scientific problems on media-rich, multi-dimensional websites.

Roberts says he wanted to teach students about learning and memory. But he also wanted to experiment with whether 2-D or 3-D environments affected how different types of learners participated in class and retained what they were supposed to be studying.

He and undergraduate Daniel Wilson assessed the learning types of the students in the neurobiology class and then gauged their reactions to the 3-D environment compared with the 2-D work done in a collaborative Google document.

A 2-D Google doc mapping molecule movements for making memories. Courtesy of Craig Roberts, Duke.

“We’re finding that active learners perceive greater benefit from the 2-D environment than reflective learners. Visual learners perceive greater benefit than verbal learners from the 3-D environment,” Roberts says. He presented the 3-D neuronal environment, his research results and other learning media he has been experimenting with at the 2011 Society for Neuroscience meeting in Washington, D.C., on Saturday, Nov. 12.

By developing different environments in which students can learn, teachers may be able to engage all their students, independent of learning style, Roberts says.

He also said he “sees it as icing on the cake” that in a neurobiology course on learning and memory, students are working in a “learner-centric,” non-lecturing environment to expand their understanding of how they remember and recall the past.

Prescription lens brings spinning black holes into focus


This computer-generated image highlights how strange space would look if you could fly right up to a black hole. The effect of gravity on light causes some very unusual visual distortions. Credit & Copyright: Alain Riazuelo.

By Ashley Yeager

If a black hole is the eye of a galaxy, then Duke mathematician Arlie Petters is its optometrist.

Petters, along with his colleagues, visiting scholar Amir Aazami and Rutgers astronomer Charles Keeton, has written the prescription, or mathematical equation, that describes the lens of a spinning black hole.

The new equation provides astronomers with an easier way of calculating what’s going on around a spinning black hole, says Harvard astrophysicist Avi Loeb, who was not involved in the research.

Astronomers typically classify black holes into two types: static or spinning.

Static black holes are easier to describe mathematically, which is why most previous studies describing a black hole’s action on light did not include a spin variable.

In reality, though, everything is in motion. Stars, planets, even black holes, spin. “As scientists, we need to add that spin into the equation if we are going to try to explain spinning black holes as an element of nature and how they work on a grand scale,” Petters says.

To describe black holes mathematically, Petters and his team had to first consider how elements of nature distort light. On Earth, air, water, glass and even our eyes alter how we interpret patterns of light.

In the case of our eyes, doctors can describe the distortion with a “lensmaker’s equation,” which underlies how they write precise prescriptions for our contacts or glasses.

In space, it’s gravity that bends light. Black holes have so much gravity, due to their extreme mass, that they can pull particles of light onto new paths. That bending and pulling of light acts as a cosmic lens, creating mirages such as Einstein rings.
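To get a feel for what such a prescription looks like in its simplest form, here is the textbook lens equation for a non-spinning point mass, written in standard notation rather than the notation of the Duke papers; the new work generalizes this kind of relation to black holes with spin.

```latex
% Simplest gravitational-lens "prescription": a non-spinning point mass M.
% \beta  = true (unlensed) angular position of the source on the sky
% \theta = observed angular position of its image
\beta \;=\; \theta \;-\; \frac{\theta_E^{2}}{\theta},
\qquad
\theta_E \;=\; \sqrt{\frac{4GM}{c^{2}}\,\frac{D_{LS}}{D_{L}\,D_{S}}}
% D_L, D_S and D_LS are the observer-lens, observer-source and lens-source distances.
% Perfect alignment (\beta = 0) produces an Einstein ring of angular radius \theta_E.
```

Spin drags spacetime around the hole, so light passing on one side is deflected differently than light passing on the other, which is why a closed-form spinning version of the equation is much harder to come by, and why having one is useful.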

The mirages or effects of the lensing can convey a lot of information about the universe, such as its age and the nature of dark matter. They also reveal details about the black holes themselves, Petters says.


This Hubble Space Telescope image shows a double Einstein ring. Credit: NASA

But, to decode the mirages, astronomers need a precise prescription of the lenses creating them, just as we need prescription lenses to see our world more clearly.

In the past, astronomers would calculate the characteristics of a black hole lens using the equation for a static black hole. Or, they would use heavy-duty computer simulations or other painstakingly difficult methods to track particle trajectories and describe the lensing effects.

The new prescription Petters and his team have written, however, allows astronomers to calculate certain characteristics of a black hole by observing it and recording its mass and lensing effects. The researchers can then solve the lensmaker’s equation for the spin of the black hole. Petters and his colleagues describe the equation in two papers published in the Journal of Mathematical Physics.

Aside from making it easier to study black holes, the new equation also gives scientists another way to test Albert Einstein’s theory of gravity.

It is important to test Einstein, just as scientists continued to test Newton’s theory of gravity, Petters says. “We need to find any discrepancies in Einstein’s theory in order to push beyond it and to continue to comprehend and to appreciate the structure of the universe around us.”

Citations

A. B. Aazami, C. R. Keeton, and A. O. Petters. Lensing by Kerr Black Holes. I. General Lens Equation and Magnification Formula. J. Math. Phys., vol 52, (2011). doi:10.1063/1.3642614

A. B. Aazami, C. R. Keeton, and A. O. Petters. Lensing by Kerr Black Holes. II. Analytical Study of Quasi-Equatorial Lensing Observables. J. Math. Phys., vol 52, (2011). doi:10.1063/1.3642616

Visualizing the past


Duke Academic Quad, circa 1932 (Duke University Archives)

By Becca Bayham

Perkins Library didn’t always look the way it does now. Since the sanctum of scholarly thought was built in 1928, it has been expanded and renovated several times — so if you looked at a blueprint from 1928, you’d only be getting part of the story. The same applies to historical structures, according to Caroline Bruzelius, professor of art, art history & visual studies.

“Buildings are constantly changing, and a [building] plan represents one part of the process … of course it is useful in many ways, but it’s very frozen,” Bruzelius said during the Sept. 16 Visualization Friday Forum, a recurring lecture series sponsored by the Research Computing Center. Bruzelius was joined by fellow art, art history and visual studies professors Sheila Dillon and Mark Olson for a discussion of how digital representational technologies — such as animation, 3D modeling and virtual reality — can benefit the humanities.

Unlike static drawings or building plans, digital technologies can illustrate how forms change over time, something “no one’s really thought about showing,” Bruzelius said. Structural changes often reflect changing social, religious, political and ideological concerns, as was the case with the church of San Francesco in Folloni, Italy. See below: a student project about the church’s transformation over several centuries.

See the video: San Francesco a Folloni on Vimeo.

Dillon has also used visualization technologies to show change — but for ancient sculpture bases, instead of buildings.

“We’ve been really good about representing the buildings of an ancient site. But for the most part, the bases on which statues stood tend to be ‘edited out’ of ground plans,” Dillon said, either because of uncertainty about the bases’ original locations or because they make a site seem impossibly cluttered. The reality is that statues were abundant and constantly vying with each other for the attention of passersby.

“When you set up your statue monument, you wanted it to be visible. You wanted it to be in the most prestigious location,” Dillon said. “I tell my students that the best way to imagine these spaces is to imagine the most open part of East campus and fill it up with 3,000 statues of Benjamin Duke.”

The accumulation of statues over time (courtesy Sheila Dillon)

According to Dillon, some archeologists have qualms about digital representation as a research tool, claiming that it is misleading and hypothetical. Dillon argued that ground plans can be misleading too, because they represent 3D objects in 2D space. 3D representation can offer a more true-to-life view, especially in the case of ancient statues.

“When you open up that elevation, [the space] becomes much less crowded,” Dillon said.

Olson acknowledged a few challenges with digital representation: disseminating and preserving large amounts of data, conveying uncertainty and allowing annotation by other scholars. Still, he said, digital representational technologies can help humanities researchers ask and answer new questions.

“Visualization becomes a way of doing our research, not just [something we do] at the end,” Olson said.

Blue Crab Love Is Indeed Blind

Guest post from graduate student Kia Walcott

A reconstruction of what a female crab may see of a displaying male when her lenses are off.

Female blue crabs (Callinectes sapidus) are literally blind when they choose their mates, according to new research from Duke biologists Jamie Baldwin and Sönke Johnsen.

Blue crabs are one of many crustacean species that undergo molting and mating at the same time.  Because the multi-faceted lenses that make up the crab’s eyes are part of the exoskeleton, they too are shed.  So it’s like a molting female has taken out her contact lenses.

Baldwin and Johnsen put the crabs through an eye exam of sorts, using a rotating black-and-white striped drum. When the crab can see, she will move her eyes in the same direction as the rotating stripes. When she can’t see, she will not perform this behavior. By finding the stripe width at which the female no longer moves her eyes, Baldwin and Johnsen were able to measure her visual acuity.
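For a sense of the geometry behind this kind of eye exam, here is a small sketch of how a stripe width and viewing distance translate into a visual angle and a spatial frequency. The numbers and the threshold convention are illustrative assumptions, not values from the study.

```python
import math

def stripe_visual_angle_deg(stripe_width_cm, viewing_distance_cm):
    """Angle (degrees) that a single stripe subtends at the eye."""
    return math.degrees(2 * math.atan(stripe_width_cm / (2 * viewing_distance_cm)))

def acuity_cycles_per_degree(stripe_width_cm, viewing_distance_cm):
    """One grating cycle is one black plus one white stripe, so the spatial
    frequency is 1 / (2 * angle of a single stripe)."""
    return 1.0 / (2 * stripe_visual_angle_deg(stripe_width_cm, viewing_distance_cm))

# Hypothetical example: 1 cm stripes viewed from 10 cm inside the drum.
print(round(stripe_visual_angle_deg(1.0, 10.0), 1))    # ~5.7 degrees per stripe
print(round(acuity_cycles_per_degree(1.0, 10.0), 3))   # ~0.087 cycles per degree
```

The narrower the stripes a crab can still track, the finer the detail she can resolve; past her threshold, the rotating drum just looks like a uniform gray blur.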

They found that a female’s vision can be blurry from 3 days prior to molting until 3-6 days after molting.

This means that during the critical time of mating, when these female crabs should be experiencing all of the romantic courtship behaviors displayed by male blue crabs like claw waving, standing tall on the walking legs, and rhythmic waving of swim paddles, these single ladies can’t see a thing.

The males, on the other hand, can see perfectly and, in fact, use their color vision to choose females with red claws over those with claws of other hues. All hope is not lost for female blue crabs, however, Baldwin and Johnsen say. They believe that chemical cues, what we would call smell, may help the females overcome their blurred vision.

Other studies have found that visual sexual cues are nearly non-existent or at least not documented in species that mate and molt simultaneously like this. These findings may explain why, at least for one species, looks aren’t everything.

CITATION: Baldwin and Johnsen (2011). Effects of molting on the visual acuity of the blue crab, Callinectes sapidus. J Exp Biol. 214: 3055-61. <http://jeb.biologists.org/content/214/18/3055.long>

 

Envision Yourself A Winner

The second Abhijit Mahato visualization contest, “Envisioning the Invisible,” is now underway, and you don’t have to be a member of the Duke community to participate.

Post-doctoral fellow Anna Loksztejn of the Center for Biologically Inspired Materials created this image of aggregated insulin proteins using atomic force microscopy.

Last year’s first contest was a stunning success, both for the images it produced and for what it symbolized: making something beautiful out of something very ugly. Mahato was a second-year graduate student in the Pratt School of Engineering who was murdered in his apartment near campus in 2008, pretty much at random. His friends and colleagues wanted to do something long-lasting and worthy of Abhijit’s memory.

 

You can learn more about the contest at this site. There are two categories: making something ordinary beautiful, and making scientific data into a picture.

The gala awards ceremony and slideshow, with keynote speaker Nickolay Hristov, is set for 5 p.m. Wednesday Sept. 28 in Schiciano Auditorium.

Here are the rules. Hurry, the contest ends at midnight, Sept. 7!

