Following the people and events that make up the research community at Duke



Detecting disease with sound

By Becca Bayham

Most people experience ultrasound technology either as an expectant mother or as a fetus. Ultrasound is also employed for cardiac imaging and for guiding semi-invasive surgeries, largely because of its ability to produce real-time images. And Kathy Nightingale, associate professor of biomedical engineering, is pushing the technology even further.

“We use high-frequency sound (higher than audible range) to send out echoes. Then we analyze the received echoes to create a picture,” Nightingale said at a Chautauqua Series lecture last Tuesday.

According to Nightingale, ultrasound maps differences in the acoustic properties of tissue. Muscles, blood vessels and fatty tissue have different densities and sound passes through them at different speeds. As a result, they show up as different colors on the ultrasound. Blood is more difficult to image, but researchers have found an interesting way around that problem.
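The pulse-echo timing behind those pictures can be sketched with a quick calculation. The article doesn't give numbers, so this assumes the conventional scanner value of 1,540 m/s for the average speed of sound in soft tissue:

```python
# Pulse-echo depth estimation: the scanner times how long an emitted
# pulse takes to echo back, then converts round-trip time to depth.
SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, conventional soft-tissue average

def echo_depth_m(round_trip_time_s: float,
                 c: float = SPEED_OF_SOUND_TISSUE) -> float:
    """Depth of a reflector, given the round-trip echo time.

    The pulse travels to the reflector and back, so the one-way
    depth is half the total path length.
    """
    return c * round_trip_time_s / 2.0

# An echo arriving 50 microseconds after the pulse implies a
# reflector 1540 * 50e-6 / 2 = 0.0385 m (about 3.9 cm) deep.
print(echo_depth_m(50e-6))
```

Repeating that timing for thousands of pulses along different beam directions, and coloring each depth by echo strength, is what builds the familiar gray-scale image.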

“The signal from blood is really weak compared to the signal coming from tissue. But what you can do is inject microbubbles, and that makes the signal brighter,” Nightingale said.

Microbubbles are small enough to travel freely throughout the circulatory system — anywhere blood flows. Because fast-growing tumors require a large blood supply, microbubbles can be particularly helpful for disease detection.

Like most other electronics, ultrasound scanners have gotten smaller and smaller over the years. Hand-held ultrasounds “are not as fully capable as one of those larger scanners, just as with an iPad you don’t have as many options as your computer or laptop,” Nightingale said. However, the devices’ portability has earned them a place both on the battlefield and in the emergency room.

Nightingale’s research explores another aspect of ultrasonic sound — its ability to “push” on tissue at a microscopic scale. The amount of movement reveals how stiff a tissue is (which, in turn, can indicate whether tissue is healthy or not). It’s the same concept as breast, prostate and lymph node exams, but allows analysis of interior organs too.
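One standard way this "push" is turned into a stiffness number (not spelled out in the talk, but a textbook relation used in shear-wave elastography) is to measure how fast the induced shear wave travels: stiffer tissue carries it faster, and Young's modulus is approximately three times the density times the speed squared. A minimal sketch, assuming a typical soft-tissue density of 1,000 kg/m³:

```python
# Shear-wave elastography sketch: the ultrasonic "push" launches a
# shear wave, and its speed reveals stiffness. For nearly
# incompressible, elastic tissue: E ~ 3 * rho * c_s**2.
TISSUE_DENSITY = 1000.0  # kg/m^3, approximate soft-tissue value

def youngs_modulus_kpa(shear_wave_speed_m_s: float,
                       rho: float = TISSUE_DENSITY) -> float:
    """Estimate tissue stiffness in kPa from shear-wave speed."""
    return 3.0 * rho * shear_wave_speed_m_s ** 2 / 1000.0

# Shear waves crawl through soft tissue at roughly 1-2 m/s but move
# noticeably faster through stiff regions such as scarring or tumors.
print(youngs_modulus_kpa(1.5))  # soft tissue: a few kPa
print(youngs_modulus_kpa(4.0))  # stiff region stands out clearly
```

Because stiffness grows with the square of the wave speed, even a modest speed difference makes a stiff region stand out sharply on the resulting map.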

“We can use an imaging system to identify regions in organs that are stiffer than surrounding tissue,” Nightingale said. “That would allow doctors to look at regions of pathology (cancer or scarring) rather than having to do a biopsy or cut someone open to look at something.”

Visualizing the past

Duke Academic Quad from Duke Chapel, circa 1932 (Duke University Archives)

By Becca Bayham

Perkins Library didn’t always look the way it does now. Since the sanctum of scholarly thought was built in 1928, it has been expanded and renovated several times — so if you looked at a blueprint from 1928, you’d only be getting part of the story. The same applies to historical structures, according to Caroline Bruzelius, professor of art, art history & visual studies.

“Buildings are constantly changing, and a [building] plan represents one part of the process … of course it is useful in many ways, but it’s very frozen,” Bruzelius said during the Sept. 16 Visualization Friday Forum, a recurring lecture series sponsored by the Research Computing Center. Bruzelius was joined by fellow art, art history and visual studies professors Sheila Dillon and Mark Olson for a discussion of how digital representational technologies — such as animation, 3D modeling and virtual reality — can benefit the humanities.

Unlike static drawings or building plans, digital technologies can illustrate how forms change over time, something “no one’s really thought about showing,” Bruzelius said. Structural changes often reflect changing social, religious, political and ideological concerns, as was the case with the church of San Francesco in Folloni, Italy. See below: a student project about the church’s transformation over several centuries.

See video: San Francesco a Folloni on Vimeo.

Dillon has also used visualization technologies to show change — but for ancient sculpture bases, instead of buildings.

“We’ve been really good about representing the buildings of an ancient site. But for the most part, the bases on which statues stood tend to be ‘edited out’ of ground plans,” Dillon said, either because of uncertainty about the bases’ original location or because they make a site seem impossibly cluttered. The reality is that statues were abundant, constantly vying with one another for the attention of passersby.

“When you set up your statue monument, you wanted it to be visible. You wanted it to be in the most prestigious location,” Dillon said. “I tell my students that the best way to imagine these spaces is to imagine the most open part of East campus and fill it up with 3,000 statues of Benjamin Duke.”

The accumulation of statues over time (courtesy Sheila Dillon)

According to Dillon, some archeologists have qualms about digital representation as a research tool, claiming that it is misleading and hypothetical. Dillon argued that ground plans can be misleading too, because they represent 3D objects in 2D space. 3D representation can offer a more true-to-life view, especially in the case of ancient statues.

“When you open up that elevation, [the space] becomes much less crowded,” Dillon said.

Olson acknowledged a few challenges with digital representation: disseminating and preserving large amounts of data, conveying uncertainty and allowing annotation by other scholars. Still, he argued, digital representational technologies can help humanities researchers ask and answer new questions.

“Visualization becomes a way of doing our research — not just [something we do] at the end,” Olson said.

Mapping Movement

Guest Post By Viviane Callier

The way your brain tells your hands and feet to move is unique to you. Your brain uses a map to control where your hands are going, and your map wouldn’t work in anyone else’s head. Understanding how these maps are created in different brains, and how unique they are to individuals, is key to designing next-generation prosthetics.


In contrast to a robot, the brain is not programmed to solve mathematical equations for the forces required to produce a given movement. Instead, it learns both forward maps, which link a firing pattern of neurons to a specific movement, and inverse maps, which are used to infer the pattern of neuron activation required to produce a desired movement. These maps are built through trial and error, and kept in memory. Due to the randomness involved in trial-and-error exploration, no two brains build quite the same movement maps.
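A toy illustration of the forward/inverse idea (purely illustrative, not the Nicolelis lab's actual model): suppose neural activation drives hand velocity through some hidden linear mapping. The learner probes the body with trial activations to build a forward map, then inverts it to produce a desired movement. Real brains explore with noisy random trials; probing one activation at a time is the cleanest deterministic analogue.

```python
# Toy forward/inverse map learning for a hypothetical linear "body":
# neural activation (a1, a2) produces hand velocity v = M @ a for a
# hidden 2x2 matrix M that the learner never sees directly.
HIDDEN_M = [[2.0, 0.5],
            [0.3, 1.5]]  # the body's true, unknown mapping

def body_response(activation):
    """What the 'muscles' do for a given neural activation."""
    a1, a2 = activation
    return (HIDDEN_M[0][0] * a1 + HIDDEN_M[0][1] * a2,
            HIDDEN_M[1][0] * a1 + HIDDEN_M[1][1] * a2)

def learn_forward_map():
    """Build the forward map by trial: probe each unit activation
    and record the movement it produces (one column per trial)."""
    col1 = body_response((1.0, 0.0))
    col2 = body_response((0.0, 1.0))
    return [[col1[0], col2[0]],
            [col1[1], col2[1]]]

def inverse_map(fwd, desired_velocity):
    """Invert the learned 2x2 forward map to find the activation
    that should produce a desired hand velocity."""
    (a, b), (c, d) = fwd
    det = a * d - b * c
    vx, vy = desired_velocity
    return ((d * vx - b * vy) / det, (a * vy - c * vx) / det)

fwd = learn_forward_map()
activation = inverse_map(fwd, (1.0, 0.0))  # "move the hand right"
print(body_response(activation))           # recovers (1.0, 0.0)
```

A brain-controlled prosthetic faces the same structure in reverse: it must fit a map between recorded neural firing and intended movement, and keep refining that map as the user practices.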

To study the process by which brains learn these maps, Tim Hanson, a graduate student in the neurobiology lab of Miguel Nicolelis, has been using electrodes to record activity patterns from individual neurons in the brains of monkeys who are learning to control prosthetic arms.

The team wants to design a general way to connect prosthetics to a person’s individual brain map, which is a challenge because no two brains make the same map.  The commonality between people is not the particular maps, but rather how these maps are learned.  Therefore, future brain-controlled prosthetics need to integrate well with the biological process by which these maps are learned.

By recording the activity of the brain and correlating it with behavior over time as control of the prosthetic arm improves, Hanson can visualize the map being created in the monkey’s brain.

Observing the learning process in real time in a monkey’s brain will further understanding of how we learn, and potentially aid prosthetic design, Hanson says.
