Duke Research Blog

Following the people and events that make up the research community at Duke.

Category: Art

Library’s Halloween Exhibit Fascinates and Thrills

Research is not always for the faint of heart.


Screamfest V combed through centuries of Rubenstein materials to find the very spookiest of artifacts

At least, that’s what Rubenstein Library seemed to be saying this Halloween with the fifth installment of its sometimes freaky, always fascinating “Screamfest” exhibition. With everything from centuries-old demonology textbooks, to tarot cards, to Duke-based parapsychology studies, Screamfest V took a dive into the deep end of the research Duke has gathered throughout its long history.

There’s a lot to unpack in this exhibit, but one of the most unsettling pieces has to be the 1949 written exchange between Duke parapsychologist Joseph Rhine and Lutheran Reverend Luther Schulze about a boy they believed might be demonically possessed.

“Now he has visions of the devil and goes into a trance and speaks a strange language,” Schulze wrote.

Anything about that sound familiar? If so, that might be because this case was the basis for the 1973 horror classic The Exorcist. (And people say research isn’t cool!)

The Rubenstein also exhibited a pack of cards used by Rhine’s parapsychology lab to test for extrasensory perception. Inscribed with vaguely arcane symbols, one of these “Zener cards” would be flipped over by a researcher behind a screen, and a test subject on the other side would attempt to “sense” what card the researcher displayed.


A pack of “Zener cards” Duke researchers once used to test for ESP

Although the results of this test were never replicated outside of Duke and are today widely considered debunked, Rhine’s research did create a stir in some circles at the time. One of the most interesting things about this exhibit, in fact, was the way it showed how much methods and topics in science have changed over time.

A 1726 publication of the book Sadducismus triumphatus: or, A full and plain evidence concerning witches and apparitions, for example, was loaded with supernatural “research” and “findings” every bit as dense and serious as the title would suggest. The section this tome was opened to bore this subheading: “Proving partly by Holy Scripture, partly by a choice Collection of Modern Relations, the Real EXISTENCE of Apparitions, Spirits, & Witches.”

A similar book, titled The Discoverie of Witchcraft, was also on display—only this one was printed over two centuries later, in 1930.

A Depression-era miniature of the Duke mascot, somewhat worse for wear.

Other historical gems in the exhibit included a threadbare ‘blue devil’ doll from the ’30s; a book made up of a lengthy collection of newspaper clippings following the case of Lizzie Borden, a reported axe murderer from the 1890s; and an ad for the 1844 “Life Preserving Coffin … for use in doubtful cases of death.”

It’s not every day research will leave the casual viewer quaking in their boots, but Screamfest V was quick to live up to its name. Covering a broad swath of Duke materials from several centuries, this exhibit successfully pulled off vibes of education, spookiness, and Halloween fun, all at the same time.

Post by Daniel Egitto

Sizing Up Hollywood's Gender Gap

DURHAM, N.C. — A mere seven-plus decades after she first appeared in comic books in the early 1940s, Wonder Woman finally has her own movie.

In the two months since it premiered, the film has brought in more than $785 million worldwide, making it the highest grossing movie of the summer.

But if Hollywood has seen a number of recent hits with strong female leads, from “Wonder Woman” and “Atomic Blonde” to “Hidden Figures,” it doesn’t signal a change in how women are depicted on screen — at least not yet.

Those are the conclusions of three students who spent ten weeks this summer compiling and analyzing data on women’s roles in American film, through the Data+ summer research program.

The team relied on a measure called the Bechdel test, introduced by the cartoonist Alison Bechdel in a 1985 comic strip.


The “Bechdel test” asks whether a movie features at least two women who talk to each other about anything besides a man. Surprisingly, a lot of films fail. Art by Srravya [CC0], via Wikimedia Commons.

To pass the Bechdel test, a movie must satisfy three basic requirements: it must have at least two named women in it, they must talk to each other, and their conversation must be about something other than a man.

It’s a low bar. The female characters don’t have to have power, or purpose, or buck gender stereotypes.

Even a movie in which two women speak to each other only briefly in one scene, about nail polish — as was the case with “American Hustle” — gets a passing grade.

And yet more than 40 percent of all U.S. films fail.

The team used data from the bechdeltest.com website, a user-compiled database of over 7,000 movies where volunteers rate films based on the Bechdel criteria. The number of criteria a film passes adds up to its Bechdel score.
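That scoring rule is simple enough to sketch in a few lines of code. Below is a minimal illustration in Python; the `Movie` record and the example films are hypothetical stand-ins, not the actual bechdeltest.com data format.

```python
from dataclasses import dataclass

@dataclass
class Movie:
    """Toy record of the three Bechdel criteria (not the real bechdeltest.com schema)."""
    title: str
    has_two_named_women: bool
    women_talk_to_each_other: bool
    conversation_not_about_a_man: bool

def bechdel_score(m: Movie) -> int:
    """Count how many of the three criteria a film satisfies (0-3)."""
    # The criteria are cumulative: the conversation can't be "not about a man"
    # unless the two women actually talk, and they can't talk unless both exist.
    criteria = [
        m.has_two_named_women,
        m.has_two_named_women and m.women_talk_to_each_other,
        m.has_two_named_women and m.women_talk_to_each_other
        and m.conversation_not_about_a_man,
    ]
    return sum(criteria)

def passes_bechdel(m: Movie) -> bool:
    return bechdel_score(m) == 3

# Hypothetical examples
films = [
    Movie("Example A", True, True, True),
    Movie("Example B", True, True, False),
]
print([(f.title, bechdel_score(f), passes_bechdel(f)) for f in films])
```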

“Spider-Man,” “The Jungle Book,” “Star Trek Beyond” and “The Hobbit” all fail at least one of the criteria.

Films are more likely to pass today than they were in the 1970s, according to a 2014 study by FiveThirtyEight, the data journalism site created by Nate Silver.

The authors of that study analyzed 1,794 movies released between 1970 and 2013. They found that the number of passing films rose steadily from 1970 to 1995 but then began to stall.

In the past two decades, the proportion of passing films hasn’t budged.

Since the mid-1990s, the proportion of films that pass the Bechdel test has flatlined at about 50 percent.


The Duke team was also able to obtain data from a 2016 study of the gender breakdown of movie dialogue in roughly 2,000 screenplays.

Men played two out of three top speaking roles in more than 80 percent of films, according to that study.

Using data from the screenplay study, the students plotted the relationship between a movie’s Bechdel score and the number of words spoken by female characters. Perhaps not surprisingly, films with higher Bechdel scores were also more likely to achieve gender parity in terms of speaking roles.
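A comparison like that takes only a few lines of pandas and matplotlib. Here is a minimal sketch, assuming a hypothetical CSV with `bechdel_score` and `female_word_share` columns; the team's actual files and column names aren't specified in the post.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file and column names, for illustration only.
df = pd.read_csv("movies_dialogue.csv")  # columns: title, bechdel_score, female_word_share

# Average share of dialogue spoken by women at each Bechdel score (0-3).
summary = df.groupby("bechdel_score")["female_word_share"].mean()

summary.plot(kind="bar")
plt.xlabel("Bechdel score (number of criteria passed)")
plt.ylabel("Mean share of words spoken by female characters")
plt.title("Female dialogue share vs. Bechdel score")
plt.tight_layout()
plt.show()
```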

“The Bechdel test doesn’t really tell you if a film is feminist,” but it’s a good indicator of how much women speak, said team member Sammy Garland, a Duke sophomore majoring in statistics and Chinese.

Previous studies suggest that men do twice as much talking in most films — a proportion that has remained largely unchanged since 1995. The reason, researchers say, is not because male characters are more talkative individually, but because there are simply more male roles.

“To close the gap of speaking time, we just need more female characters,” said team member Selen Berkman, a sophomore majoring in math and computer science.

Achieving that, they say, ultimately comes down to who writes the script and chooses the cast.

The team did a network analysis of patterns of collaboration among 10,000 directors, writers and producers. Two people are joined whenever they worked together on the same movie. The 13 most influential and well-connected people in the American film industry were all men, whose films had average Bechdel scores ranging from 1.5 to 2.6 — meaning no top producer is regularly making films that pass the Bechdel test.
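The collaboration network the team describes can be sketched with networkx: each person becomes a node, and an edge is added whenever two people share a film credit. The crew lists below are made up, and degree centrality is one reasonable stand-in for "influential and well-connected"; the post doesn't say which measure the team actually used.

```python
import networkx as nx
from itertools import combinations

# Hypothetical credits: film title -> list of directors, writers and producers.
credits = {
    "Film A": ["Producer 1", "Writer 1", "Director 1"],
    "Film B": ["Producer 1", "Writer 2", "Director 2"],
    "Film C": ["Writer 2", "Director 1", "Producer 2"],
}

G = nx.Graph()
for film, crew in credits.items():
    # Join every pair of people who worked together on the same movie.
    for a, b in combinations(crew, 2):
        G.add_edge(a, b)

# Rank people by how connected they are (one possible notion of "influence").
ranking = sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])
print(ranking[:5])
```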

“What this tells us is there is no one big influential producer who is moving the needle. We have no champion,” Garland said.

Men and women were equally represented in fewer than 10 percent of production crews.

But assembling a more gender-balanced production team in the early stages of a film can make a difference, research shows. Films with more women in top production roles have female characters who speak more too.

“To better represent women on screen you need more women behind the scenes,” Garland said.

Dollar for dollar, making an effort to close the Hollywood gender gap can mean better returns at the box office too. Films that pass the Bechdel test earn $2.68 for every dollar spent, compared with $2.45 for films that fail — a 23-cent better return on investment, according to FiveThirtyEight.

Other versions of the Bechdel test have been proposed to measure race and gender in film more broadly. The advantage of analyzing the Bechdel data is that thousands of films have already been scored, said English major and Data+ team member Aaron VanSteinberg.

“We tried to watch a movie a week, but we just didn’t have time to watch thousands of movies,” VanSteinberg said.

A new report on diversity in Hollywood from the University of Southern California suggests the same lack of progress is true for other groups as well. In nearly 900 top-grossing films from 2007 to 2016, disabled, Latino and LGBTQ characters were consistently underrepresented relative to their makeup in the U.S. population.

Berkman, Garland and VanSteinberg were among more than 70 students selected for the 2017 Data+ program, which included data-driven projects on photojournalism, art restoration, public policy and more.

They presented their work at the Data+ Final Symposium on July 28 in Gross Hall.

Data+ is sponsored by Bass Connections, the Information Initiative at Duke, the Social Science Research Institute, the departments of mathematics and statistical science and MEDx. 

Writing by Robin Smith; video by Lauren Mueller and Summer Dunsmore

Students Share Research Journeys at Bass Connections Showcase

From the highlands of north central Peru to high schools in North Carolina, student researchers in Duke’s Bass Connections program are gathering data in all sorts of unique places.

As the school year wound down, they packed into Duke’s Scharf Hall last week to hear one another’s stories.

Students and faculty gathered in Scharf Hall to learn about each other’s research at this year’s Bass Connections showcase. Photo by Jared Lazarus/Duke Photography.

The Bass Connections program brings together interdisciplinary teams of undergraduates, graduate students and professors to tackle big questions in research. This year’s showcase, which featured poster presentations and five “lightning talks,” was the first to include teams spanning all five of the program’s diverse themes: Brain and Society; Information, Society and Culture; Global Health; Education and Human Development; and Energy.

“The students wanted an opportunity to learn from one another about what they had been working on across all the different themes over the course of the year,” said Lori Bennear, associate professor of environmental economics and policy at the Nicholas School, during the opening remarks.

Students seized the chance, eagerly perusing peers’ posters and gathering for standing-room-only viewings of other teams’ talks.

The different investigations took students from rural areas of Peru, where teams interviewed local residents to better understand the transmission of deadly diseases like malaria and leishmaniasis, to the North Carolina Museum of Art, where mathematicians and engineers worked side-by-side with artists to restore paintings.

Machine learning algorithms created by the Energy Data Analytics Lab can pick out buildings from a satellite image and estimate their energy consumption. Image courtesy Hoël Wiesner.

Students in the Energy Data Analytics Lab didn’t have to look much farther than their smart phones for the data they needed to better understand energy use.

“Here you can see a satellite image, very similar to one you can find on Google maps,” said Eric Peshkin, a junior mathematics major, as he showed an aerial photo of an urban area featuring buildings and a highway. “The question is how can this be useful to us as researchers?”

With the help of new machine-learning algorithms, images like these could soon give researchers oodles of valuable information about energy consumption, Peshkin said.

“For example, what if we could pick out buildings and estimate their energy usage on a per-building level?” said Hoël Wiesner, a second year master’s student at the Nicholas School. “There is not really a good data set for this out there because utilities that do have this information tend to keep it private for commercial reasons.”

The lab has had success developing algorithms that can estimate the size and location of solar panels from aerial photos. Peshkin and Wiesner described how they are now creating new algorithms that can first identify the size and locations of buildings in satellite imagery, and then estimate their energy usage. These tools could provide a quick and easy way to evaluate the total energy needs in any neighborhood, town or city in the U.S. or around the world.
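As a rough illustration of that two-stage idea, here is a toy sketch in Python: a crude brightness threshold stands in for the lab's machine-learning building detector, and a placeholder per-pixel coefficient stands in for its energy model. None of this is the Energy Data Analytics Lab's actual code.

```python
import numpy as np
from scipy import ndimage

def estimate_energy(image, kwh_per_pixel=12.0):
    """Toy two-stage pipeline: (1) segment building-like regions,
    (2) map each footprint's size to an energy estimate.

    `image` is a 2D grayscale aerial image scaled to [0, 1]; the threshold
    and the kWh-per-pixel coefficient are placeholders, not the lab's model.
    """
    # Stage 1: crude "detector" -- bright, connected regions as building candidates.
    mask = image > 0.6
    labels, n_buildings = ndimage.label(mask)
    footprints = ndimage.sum(mask, labels, index=range(1, n_buildings + 1))

    # Stage 2: placeholder regression from footprint size to annual energy use.
    return [float(area) * kwh_per_pixel for area in footprints]

# Hypothetical usage with random noise standing in for real satellite data.
rng = np.random.default_rng(0)
fake_image = rng.random((256, 256))
print(f"{len(estimate_energy(fake_image))} building candidates found")
```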

“It’s not just that we can take one city, say Norfolk, Virginia, and estimate the buildings there. If you give us Reno, Tuscaloosa, Las Vegas, Phoenix — my hometown — you can absolutely get the per-building energy estimations,” Peshkin said. “And what that means is that policy makers will be more informed, NGOs will have the ability to best service their community, and more efficient, more accurate energy policy can be implemented.”

Some students’ research took them to the sidelines of local sports fields. Joost Op’t Eynde, a master’s student in biomedical engineering, described how he and his colleagues on a Brain and Society team are working with high school and youth football leagues to sort out what exactly happens to the brain during a high-impact sports game.

While a particularly nasty hit to the head might cause clear symptoms that can be diagnosed as a concussion, the accumulation of lesser impacts over the course of a game or season may also affect the brain. Eynde and his team are developing a set of tools to monitor both these impacts and their effects.

A standing-room only crowd listened to a team present on their work “Tackling Concussions.” Photo by Jared Lazarus/Duke Photography.

“We talk about inputs and outputs — what happens, and what are the results,” Eynde said. “For the inputs, we want to actually see when somebody gets hit, how they get hit, what kinds of things they experience, and what is going on in the head. And the output is we want to look at a way to assess objectively.”

The tools include surveys to estimate how often a player is impacted, an in-ear accelerometer called the DASHR that measures the intensity of jostles to the head, and tests of players’ performance on eye-tracking tasks.

“Right now we are looking on the scale of a season, maybe two seasons,” Eynde said. “What we would like to do in the future is actually follow some of these students throughout their career and get the full data for four years or however long they are involved in the program, and find out more of the long-term effects of what they experience.”


Post by Kara Manke

Visualizing the Fourth Dimension

Living in a 3-dimensional world, we can easily visualize objects in 2 and 3 dimensions. But for a mathematician, playing with only 3 dimensions is limiting, laments Dr. Henry Segerman, an assistant professor of mathematics at Oklahoma State University. Segerman spoke to Duke students and faculty on visualizing 4-dimensional space as part of the PLUM lecture series on April 18.

What exactly is the 4th dimension?

Let’s break down spatial dimensions into what we know. We can describe a point in 2-dimensional space with two numbers x and y, visualizing an object in the xy plane, and a point in 3D space with 3 numbers in the xyz coordinate system.

Plotting three dimensions in the xyz coordinate system.

While the green right-angle markers are not actually 90 degrees, we are able to infer the 3-dimensional geometry as shown on a 2-dimensional screen.

Likewise, we can describe a point in 4-dimensional space with four numbers – x, y, z, and w – where the purple w-axis is at a right angle to the other three axes; in other words, we can visualize 4 dimensions by squishing them down to three.

Plotting four dimensions in the xyzw coordinate system.

One commonly explored 4D object we can attempt to visualize is known as a hypercube. A hypercube is to a cube what a cube is to a square.

How do we make a hypercube?

To create a 1D line, we take a point, make a copy, move the copy some distance away, and then connect the two points with a line.

Similarly, a square can be formed by making a copy of a line, sliding it parallel to the original, and connecting the corresponding endpoints to add the second dimension.

So, to create a hypercube, we move two identical 3D cubes parallel to each other, and then connect corresponding corners with eight lines, as depicted in the image below.

To create an n–dimensional cube, we take 2 copies of the (n−1)–dimensional cube and connect corresponding corners.
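That recipe translates directly into code, as in the sketch below: the vertices of an n-cube are all n-bit tuples, and an edge joins two vertices exactly when they differ in one coordinate, which is the same as connecting corresponding corners of two copies of the (n−1)-cube.

```python
from itertools import product

def n_cube(n):
    """Return the vertices and edges of the n-dimensional unit cube.

    Vertices are n-bit tuples; an edge joins two vertices that differ
    in exactly one coordinate (i.e., corresponding corners of the two
    (n-1)-cube copies, plus the edges within each copy).
    """
    vertices = list(product((0, 1), repeat=n))
    edges = [
        (u, v)
        for i, u in enumerate(vertices)
        for v in vertices[i + 1:]
        if sum(a != b for a, b in zip(u, v)) == 1
    ]
    return vertices, edges

for n in range(1, 5):
    v, e = n_cube(n)
    print(f"{n}-cube: {len(v)} vertices, {len(e)} edges")
# The 4-cube (hypercube) has 16 vertices and 32 edges.
```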

Even with a 3D-printed model, trying to visualize the hypercube can get confusing. 

How can we make a better picture of a hypercube? “You sort of cheat,” Dr. Segerman explained. One way to cheat is by casting shadows.

Parallel projection shadows, depicted in the figure below, are cast by rays of light falling at a right angle to the plane of the table. We can see that some of the edges of the shadow are parallel, which is also true of the physical object. However, some of the edges that cross in the 2D shadow don’t actually touch in the 3D object, making the projection more complicated to map back to the 3D object.

Parallel projection of a cube on a transparent sheet of plastic above the table.

One way to cast shadows with no collisions is through stereographic projection as depicted below.

The stereographic projection is a mapping (function) that projects a sphere onto a plane. The projection is defined on the entire sphere, except the point at the top of the sphere.
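In coordinates, the standard version of this map sends each point of the unit sphere (other than the north pole, where it is undefined) along the line through the pole down to the plane:

```latex
% Stereographic projection from the north pole N = (0, 0, 1) of the unit sphere
% x^2 + y^2 + z^2 = 1 onto the plane z = 0; undefined at the pole itself (z = 1).
\[
  (x, y, z) \;\longmapsto\; \left( \frac{x}{1 - z}, \; \frac{y}{1 - z} \right)
\]
```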

For the object below, the curves on the sphere cast shadows, mapping them to a straight-line grid on the plane. With stereographic projection, every point of the 3D object maps to its own point on the plane, with no two shadows colliding, so we can view all sides of the original object.

Stereographic projection of a grid pattern onto the plane. 3D print the model at Duke’s Co-Lab!

Just as shadows of 3D objects are images formed on a 2D surface, our retina has only a 2D surface area to detect light entering the eye, so we actually see a 2D projection of our 3D world. Our minds are computationally able to reconstruct the 3D world around us by using previous experience and information from the 2D images such as light, shade, and parallax.

Projection of a 3D object on a 2D surface.

Projection of a 4D object on a 3D world

How can we visualize the 4-dimensional hypercube?

To use stereographic projection, we radially project the edges of a 3D cube (left of the image below) to the surface of a sphere to form a “beach ball cube” (right).

The faces of the cube radially projected onto the sphere.

Placing a point light source at the north pole of the bloated cube, we can obtain the projection onto a 2D plane as shown below.

Stereographic projection of the “beach ball cube” pattern to the plane. View the 3D model here.

Applied one dimension higher, we can theoretically blow a 4-dimensional shape up into a ball, place a light at the top of the object, and project the image down into 3 dimensions.
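The same formula works one dimension up. A minimal, self-contained sketch of the idea: push the hypercube's corners onto the unit 3-sphere ("blow the shape up into a ball"), then project from the pole (0, 0, 0, 1) down into ordinary 3D space. The code is illustrative only and ignores the edges and faces that a full rendering would also project.

```python
import numpy as np
from itertools import product

def stereographic_4d_to_3d(points):
    """Project points on the unit 3-sphere in R^4 down to R^3
    from the pole (0, 0, 0, 1); undefined where w = 1."""
    points = np.asarray(points, dtype=float)
    # "Blow the shape up into a ball": push each vertex onto the unit 3-sphere.
    points = points / np.linalg.norm(points, axis=1, keepdims=True)
    x, y, z, w = points.T
    return np.stack([x / (1 - w), y / (1 - w), z / (1 - w)], axis=1)

# Hypercube corners at (+/-1, +/-1, +/-1, +/-1).
verts = np.array(list(product((-1.0, 1.0), repeat=4)))
shadow = stereographic_4d_to_3d(verts)
print(shadow.shape)  # (16, 3): the 3D "shadow" of the hypercube's 16 corners
```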

Left: 3D print of the stereographic projection of a “beach ball hypercube” to 3-dimensional space. Right: computer render of the same, including the 2-dimensional square faces.

Forming n–dimensional cubes from (n−1)–dimensional renderings.

Thus, the constructed 3D model of the “beach ball cube” shadow is the projection of the hypercube into 3-dimensional space. Here the cubical faces of the hypercube appear in the projection as distorted cubes, rather than flat strips.

Just as the edges of the top object in the figure can be connected together by folding the squares through the 3rd dimension to form a cube, the edges of the bottom object can be connected through the 4th dimension.

Why are we trying to understand things in 4 dimensions?

As far as we know, the space around us consists of only 3 dimensions. Mathematically, however, there is no reason to limit our understanding of higher-dimensional geometry and space to only 3, since there is nothing special about the number 3 that makes it the only possible number of dimensions space can have.

From a physics perspective, Einstein’s theory of Special Relativity suggests a connection between space and time, so the space-time continuum consists of 3 spatial dimensions and 1 temporal dimension. For example, consider a blooming flower. The flower’s position is not changing: it is not moving up or sideways. Yet we can observe the transformation, which points to change along an additional dimension: time. Equating time with the 4th dimension is one example, but the 4th dimension can also be positional like the first 3. While it is possible to visualize space-time by examining snapshots of the flower with time as a constant, it is also useful to understand how space and time interrelate geometrically.

Explore more in the 4th dimension with Hypernom or Dr. Segerman’s book “Visualizing Mathematics with 3D Printing”!

Post by Anika Radiya-Dixit.


Seeing Nano

Take pictures at more than 300,000 times magnification with electron microscopes at Duke


An image of a sewer gnat’s head taken through a scanning electron microscope. Courtesy of Fred Nijhout.

The sewer gnat is a common nuisance around kitchen and bathroom drains that’s no bigger than a pea. But magnified thousands of times, its compound eyes and bushy antennae resemble a first place winner in a Movember mustache contest.

Sewer gnats’ larger cousins, horseflies, are known for their painful bite. Zoom in and it’s easy to see how they hold onto their furry livestock prey: the tiny hooked hairs on their feet look like Velcro.

Students in professor Fred Nijhout’s entomology class photograph these and other specimens at more than 300,000 times magnification at Duke’s Shared Material & Instrumentation Facility (SMIF).

There the insects are dried, coated in gold and palladium, and then bombarded with a beam of electrons from a scanning electron microscope, which can resolve structures tens of thousands of times smaller than the width of a human hair.

From a ladybug’s leg to a weevil’s suit of armor, the bristly, bumpy, pitted surfaces of insects are surprisingly beautiful when viewed up close.

“The students have come to treat travels across the surface of an insect as the exploration of a different planet,” Nijhout said.


The foot of a horsefly is equipped with menacing claws and Velcro-like hairs that help it hang onto fur. Photo by Valerie Tornini.


The hard outer skeleton of a weevil looks smooth and shiny from afar, but up close it’s covered with scales and bristles. Courtesy of Fred Nijhout.


Magnified 500 times, the rippled edges of this fruit fly wing are the result of changes in the insect’s genetic code. Courtesy of Eric Spana.

You, too, can gaze at alien worlds too small to see with the naked eye. Students and instructors across campus can use the SMIF’s high-powered microscopes and other state-of-the-art research equipment at no charge with support from the Class-Based Explorations Program.

Biologist Eric Spana’s experimental genetics class uses the microscopes to study fruit flies that carry genetic mutations that alter the shape of their wings.

Students in professor Hadley Cocks’ mechanical engineering 415L class take lessons from objects that break. A scanning electron micrograph of a cracked cymbal once used by the Duke pep band reveals grooves and ridges consistent with the wear and tear from repeated banging.


Magnified 3000 times, the surface of this broken cymbal once used by the Duke Pep Band reveals signs of fatigue cracking. Courtesy of Hadley Cocks.

These students are among more than 200 undergraduates in eight classes who benefitted from the program last year, thanks to a grant from the Donald Alstadt Foundation.

You don’t have to be a scientist, either. Historians and art conservators have used scanning electron microscopes to study the surfaces of Bronze Age pottery, the composition of ancient paints and even dust from Egyptian mummies and the Shroud of Turin.

Instructors and undergraduates are invited to find out how they could use the microscopes and other nanotech equipment in the SMIF in their teaching and research. Queries should be directed to Dr. Mark Walters, Director of SMIF, via email at mark.walters@duke.edu.

Located on Duke’s West Campus in the Fitzpatrick Building, the SMIF is a shared use facility available to Duke researchers and educators as well as external users from other universities, government laboratories or industry through a partnership called the Research Triangle Nanotechnology Network. For more info visit http://smif.pratt.duke.edu/.


This scanning electron microscope could easily be mistaken for equipment from a dentist’s office.


Post by Robin Smith

When Art Tackles the Invisibly Small

Huddled in a small cinderblock room in the basement of Hudson Hall, visual artist Raewyn Turner and mechatronics engineer Brian Harris watch as Duke postdoc Nick Geitner positions a glass slide under the bulky eyepiece of an optical microscope.

To the naked eye, the slide is completely clean. But after some careful adjustments of the microscope, a field of technicolor spots splashes across the viewfinder. Each point shows light scattering off one of the thousands of silver nanoparticles spread in a thin sheet across the glass.

“It’s beautiful!” Turner said. “They look like a starry sky.”


A field of 10-nanometer diameter silver nanoparticles (blue points) and clusters of 2-4 nanoparticles (other colored points) viewed under a dark-field hyperspectral microscope. The clear orbs are cells of live Chlorella vulgaris algae. Image courtesy Nick Geitner.

Turner and Harris, New Zealand natives, have traveled halfway across the globe to meet with researchers at the Center for the Environmental Implications of Nanotechnology (CEINT). Here, they are learning all they can about nanoparticles: how scientists go about detecting these unimaginably small objects, and how these tiny bits of matter interact with humans, with the environment and with each other.


The mesocosms, tucked deep in the Duke Forest, currently lie dormant.

The team hopes the insights they gather will inform the next phases of Steep, an ongoing project with science communicator Maryse de la Giroday that uses visual imagery to explore how humans interact with and “sense” the nanoparticles that are increasingly being used in our electronics, food, medicines, and even clothing.

“The general public, including ourselves, we don’t know anything about nanoparticles. We don’t understand them, we don’t know how to sense them, we don’t know where they are,” Turner said. “What we are trying to do is see how scientists sense nanoparticles, how they take data about them and translate it into sensory data.”

Duke Professor and CEINT member Mark Wiesner, who is Geitner’s postdoctoral advisor, serves as a scientific advisor on the project.

“Imagery is a challenge when talking about something that is too small to see,” Wiesner said. “Our mesocosm work provides an opportunity to visualize how we are investigating the interactions of nanomaterials with living systems, and our microscopy work provides some useful, if not beautiful images. But Raewyn has been brilliant in finding metaphors, cultural references, and accompanying images to get points across.”


Graduate student Amalia Turner describes how she uses the dark-field microscope to characterize gold nanoparticles in soil. From left: Amalia Turner, Nick Geitner, Raewyn Turner, and Brian Harris.

On Tuesday, Geitner led the pair on a soggy tour of the mesocosms, 30 miniature coastal ecosystems tucked into the Duke Forest where researchers are finding out where nanoparticles go when released into the environment. After that, the group retreated to the relative warmth of the laboratory to peek at the particles under a microscope.

Even at 400 times magnification, the silver nanoparticles on the slide can’t really be “seen” in any detail, Geitner explained.

“It is sort of like looking at the stars,” Geitner said. “You can’t tell what is a big star and what is a small star because they are so far away, you just get that point of light.”

But the image still contains loads of information, Geitner added, because each particle scatters a different color of light depending on its size and shape: particles on their own shine a cool blue, while particles that have joined together in clusters appear green, orange or red.

During the week, Harris and Turner saw a number of other techniques for studying nanoparticles, including scanning electron microscopes and molecular dynamics simulations.


An image from the Steep collection, which uses visual imagery to explore how humans interact with the increasingly abundant gold nanoparticles in our environment. Credit: Raewyn Turner and Brian Harris.

“What we have found really, really interesting is that the nanoparticles have different properties,” Turner said. “Each type of nanoparticle is different to each other one, and it also depends on which environment you put them into, just like how a human will behave in different environments in different ways.”

Geitner says the experience has been illuminating for him, too. “I have never in my life thought of nanoparticles from this perspective before,” Geitner said. “A lot of their questions are about really, what is the difference when you get down to atoms, molecules, nanoparticles? They are all really, really small, but what does small mean?”


Post by Kara Manke

Meet the New Blogger: Shanen Ganapathee

Hi y’all! My name is Shanen and I am from the deep, deep South… of the globe. I was born and raised in Mauritius, a small island off the coast of Madagascar, once home to the now-extinct Dodo bird.


Shanen Ganapathee is a senior who wishes to be ‘a historian of the brain’

The reason I’m at Duke has to do with a desire to do what I love most — exploring art, science and their intersection. You will often find me writing prose inspired by lessons in neuroanatomy, or casting a DNA strand as the main character in a short story.

I’m excited about Africa, and the future of higher education and research on the continent. I believe in ideas, especially when they are big and bold. I’m a dreamer and an idealist, though some might call me naive. I am deeply passionate about research, but above all about how it is made accessible to a wide audience.

I am currently a senior pursuing a Program II in Human Cognitive Evolution, a major I designed in my sophomore year with the help of my advisor, Dr. Leonard White, whom I had the luck to meet through the Neurohumanities Program in Paris.

This semester, I am working on a thesis project under the guidance of Dr. Greg Wray, inspired by an independent study I did under Dr. Steven Churchill, in which we examined differences between early human and Neandertal cognition and behavior. I am interested in using ancient DNA genomics to answer the age-old question: what makes us human? My claim is that the advent of artistic ventures truly shaped the beginning of behavioral modernity. In a sense, I want to be a historian of the brain.

My first exposure to the world of genomics was through the FOCUS program — Genome in our Lives — my freshman fall. Ever since, I have been fascinated by what the human genome can teach us. It is a window into our collective pasts as much as it informs us about our present and future. I am particularly intrigued by how the forces of evolution have shaped us to become the species we are.

I am excited about joining the Duke Research blog and sharing some great science with you all.

Cracking a Hit-and-Run Whodunit — With Lasers

The scratch was deep, two feet long, and spattered with paint flecks. Another vehicle had clearly grazed the side of Duke graduate student Jin Yu’s silver Honda Accord.

But the culprit had left no note, no phone number, and no insurance information.


Duke graduate student Jin Yu used laser-based imaging to confirm the source of a large scratch on the side of her car. Paint samples from an undamaged area on her Honda Accord (top left) and a suspected vehicle (top right) gave her the unique pump-probe microscopy signatures of the pigments on each car. The damaged areas of the Honda (bottom left) and the suspected vehicle on right (bottom right) show pigment signatures from both vehicles.

The timing of the accident, the location of the scratch, and the color of the foreign paint all pointed to a likely suspect: another vehicle in her apartment complex parking lot, also sporting a fresh gash.

She had a solid lead, but Yu wasn’t quite satisfied. The chemistry student wanted to make sure her case was rock-solid.

“I wanted to show them some scientific evidence,” Yu said.

And lucky for her, she had just the tools to do that.

As a researcher in the Warren Warren lab, Yu spends her days as a scientific sleuth, investigating how a laser-based tool called pump-probe microscopy can be used to differentiate between individual pigments of paint, even if they appear identical to the human eye.

The team is developing the technique as a way for art historians and conservators to peer under the surface of priceless paintings without damaging the artwork. But Yu thought there was no reason the technique couldn’t be used for forensics, too.

“The idea popped into my mind — car paint is made up of pigments, just like paintings,” Yu said. “So, if I can compare the pigments remaining on my car with the suspected car, and they match up, that would be a pretty nice clue for finding the suspected car.”

Using a clean set of eyebrow tweezers, Yu carefully gathered small flecks of paint from her car and from the suspected vehicle and sealed them up inside individual Ziploc bags. She collected samples both from the scratched up areas, where the paint was mixed, and from undamaged areas on both cars.

She left a note on the car, citing the preliminary evidence and stating her plan to test the paint samples. Then, back at the lab, she examined all four samples with the pump-probe microscope. Unlike a standard optical microscope, this device illuminates each sample with a precisely timed series of laser pulses; each pigment absorbs and then re-emits this laser light in a slightly different pattern depending on its chemical structure, creating a unique signature.


After finding the gash on her Accord (top left), Yu left a note (top right) on the car that she suspected of having caused the accident. Under an optical microscope, samples from damaged areas on the cars show evidence of the same two kinds of paint (bottom). Yu used pump-probe microscopy to confirm that the pigments in the paint samples matched.

The samples from the undamaged areas gave her the characteristic pigment signatures of each of the two vehicles.

She then looked at the paint samples taken from the scratched areas. She found clear evidence of paint pigment from the suspected car on her Honda, and clear evidence of paint pigment from her Honda on the suspected car. This was like DNA evidence, of the automotive variety.
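At its core, that comparison is a signal-matching problem. Here is a toy sketch of the idea, assuming each pigment's pump-probe response has been reduced to a one-dimensional transient trace; the synthetic decay curves and the simple correlation score below are illustrative stand-ins, not the Warren lab's actual analysis.

```python
import numpy as np

def similarity(trace_a, trace_b):
    """Normalized correlation between two pump-probe transient traces.
    Values near 1 suggest the same pigment; this is a toy stand-in for
    the lab's real analysis."""
    a = (trace_a - trace_a.mean()) / trace_a.std()
    b = (trace_b - trace_b.mean()) / trace_b.std()
    return float(np.mean(a * b))

# Synthetic transients: exponential decays with different time constants.
t = np.linspace(0, 5, 200)
honda_ref = np.exp(-t / 1.0)        # signature from undamaged Honda paint
suspect_ref = np.exp(-t / 2.5)      # signature from undamaged suspect-car paint
unknown = np.exp(-t / 2.5) + 0.02 * np.random.default_rng(1).normal(size=t.size)

print(f"vs Honda:   {similarity(unknown, honda_ref):.2f}")
print(f"vs suspect: {similarity(unknown, suspect_ref):.2f}")  # higher -> better match
```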

Fortunately, the owner of the suspect vehicle contacted Yu to confess and pay to have her car fixed, without demanding the results of the paint analysis. “But it was reassuring to have some scientific evidence in case she denied the accident,” Yu said.

Yu says she had no interest in forensic science when she started the investigation, but the experience has certainly piqued her curiosity.

“I had never imagined that I can use pump-probe microscopy for forensic science before this car accident happened,” Yu said. “But I think it shows some interesting possibilities.”


Post by Kara Manke

Sandcastles of Stars Make Stable Structures

Sandcastles are not known for their structural stability; even the most steadfast seaside fortresses won’t survive a crashing wave or a bully’s kick.

But what if, instead of round grains of sand, you built your castle from tiny stars?

Duke graduate student Yuchen Zhao tests the stability of a tower made from six-armed stars or “hexapods.”

Duke graduate student Yuchen Zhao has spent the last year studying such “sandcastles of stars” — towers crafted from hundreds of six-armed stars or “hexapods” which bear a remarkable resemblance to the jacks you might have played with as a kid.

To build these towers, Zhao simply pours the stars into a hollow tube, and then removes the tube. But unlike columns of sand, these towers stand on their own, stay up when shaken, and can even bear up to twice their own weight.

“When you remove the support, you see that the star particles have really jammed together!” said Zhao. “Nobody understands exactly how this rigidity comes about.”

Sand is a classic example of a granular material, and like other types of granular materials — rice, flour, marbles, or even bags of jacks — it sometimes pours like a liquid, and other times “jams” up, forming a rigid solid.

The physics of jamming has been well-studied for round and spherical particles, says Duke physics professor Bob Behringer, an expert on granular materials who advises Zhao. But much less is understood about jamming in particles with more complex shapes, like hexapods.

“As soon as you move away from spheres, you can create jammed systems at the drop of a hat,” said Behringer. “People think they understand these systems, but there are still a lot of outstanding questions about how they behave: how do they break? Or how do they respond to shear stress?”

These questions aren’t only interesting to physicists, Behringer says. Architects Karola Dierichs and Achim Menges, collaborators on the project, are experimenting with using custom-designed granular materials, from hexapods to hooks, to create structures like walls and bridges.

Similar to a sandcastle or a bird’s nest, structures made this way can be porous, light, recyclable and even adaptable.

“One of their big ideas is, can you actually design a structure that could build itself or be constructed at random, rather than designing something very precise?” said Zhao.

Zhao says that the first goal of his project was simply to explore the physical limits of towers built from hexapods. To do so, he constructed towers out of stars ranging in size from 2 to 10 centimeters and made from two different materials. For each combination, he investigated how high he could build the tower before it collapsed. He then subjected the towers to various stressors, including vibration, tilting, and added weight.

One of the most surprising findings, Zhao said, was that the friction between the particles — whether they were made of smooth acrylic or rougher nylon — had the biggest impact on the stability of the towers. He also noted that when these towers collapse, they don’t just fall over in a heap, they fall apart in a series of mini avalanches.


A 3D illustration of a tower of stars reconstructed from CT-scan data. The red dots indicate the points of contact between the stars. Image courtesy of Jonathan Barés.

The team has published this initial study, which they hope will be used as a “handbook of mechanical rules” to improve the design of aggregate structures, in a special edition of the journal Granular Matter.

As a next step in the experiment, Zhao and collaborator Jonathan Barés are using a CT scanner in the Duke SMIF lab to take detailed 3D pictures of the “skeletons” of these structures. With the data, they hope to gain a better understanding of how all the individual contacts between stars add up to a stable tower.

“It is amazing to see how these particles can make stable structures capable of supporting big loads,” said Jonathan Barés, who is a former Duke postdoc. “Just changing a small property of the particles — their ability to interlock — creates a dramatic change in the behavior of the system.”

CITATION: “Packings of 3D stars: stability and structure.” Yuchen Zhao, Kevin Liu, Matthew Zheng, Jonathan Barés, Karola Dierichs, Achim Menges, and Robert P. Behringer. Granular Matter, April 11, 2016. DOI: 10.1007/s10035-016-0606-4


Post by Kara Manke

What Makes a Face? Art and Science Team Up to Find Out

From the man in the moon to the slots of an electrical outlet, people can spot faces just about everywhere.

As part of a larger Bass Connections project exploring how our brains make sense of faces, a Duke team of students and faculty is using state-of-the-art eye-tracking to examine how the presence of faces — from the purely representational to the highly abstract — influences our perception of art.

The Making Faces exhibit is on display in the Nasher Museum of Art’s Academic Focus Gallery through July 24th.

The artworks they examined are currently on display at the Nasher Museum of Art in an installation titled, “Making Faces: At the Intersection of Art and Neuroscience.”

“Faces really provide the most absorbing source of information for us as humans,” Duke junior Sophie Katz said during a gallery talk introducing the installation last week. “We are constantly attracted to faces and we see them everywhere. Artists have always had an obsession with faces, and recently scientists have also begun grappling with this obsession.”

Katz said our preoccupation with faces evolved because they provide us with key social cues, including information about another individual’s gender, identity, and emotional state. Studies using functional Magnetic Resonance Imaging (fMRI) even indicate that we have a special area of the brain, called the fusiform face area, that is specifically dedicated to processing facial information.

The team used eye-tracking in the lab and newly developed eye-tracking glasses in the Nasher Museum as volunteers viewed artworks featuring both abstract and representational images of faces. They created “heat maps” from these data to illustrate where viewers gazed most on a piece of art to explore how our facial bias might influence our perception of art.
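Heat maps like these can be built from raw fixation data with two standard steps: bin the gaze coordinates over the image, then smooth the counts so dense clusters stand out. Below is a minimal sketch, assuming fixation points are already in image pixel coordinates; the cluster positions and image size are made up, and this is not the team's actual analysis code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_heatmap(xs, ys, width, height, sigma=25):
    """Build a smoothed heat map from gaze fixation points.
    xs, ys are pixel coordinates of fixations on an image of size width x height."""
    counts, _, _ = np.histogram2d(ys, xs, bins=(height, width),
                                  range=[[0, height], [0, width]])
    heat = gaussian_filter(counts, sigma=sigma)   # blur so clusters stand out
    return heat / heat.max()                      # normalize to [0, 1] for overlay

# Hypothetical fixations clustered near the eyes and mouth of a portrait.
rng = np.random.default_rng(0)
xs = np.concatenate([rng.normal(300, 20, 200), rng.normal(500, 20, 200),
                     rng.normal(400, 30, 150)])
ys = np.concatenate([rng.normal(250, 20, 200), rng.normal(250, 20, 200),
                     rng.normal(450, 30, 150)])
heat = gaze_heatmap(xs, ys, width=800, height=600)
print(heat.shape, heat.max())
```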

This interactive website created by the team lets you observe these eye-tracking patterns firsthand.

When looking at faces straight-on, most people direct their attention on the eyes and the mouth, forming a triangular pattern. Katz said the team was surprised to find that this pattern held even when the faces became very abstract.

“Even in a really abstract representation of a face, people still scan it like they would a face. They are looking for the same social information regardless of how abstract the work is,” said Katz.


A demonstration of the eye-tracking technology used to track viewers’ gaze at the Nasher Museum of Art. Credit: Shariq Iqbal, John Pearson Lab, Duke University.

Sophomore Anuhita Basavaraju pointed out how a Lonnie Holley piece titled “My Tear Becomes the Child,” in which three overlapping faces and a seated figure emerge from a few contoured lines, demonstrates how artists are able to play with our facial perception.

“There really are very few lines being used, but at the same time it’s so intricate, and generates the interesting conversation of how many lines are there, and which face you see first,” said Basavaraju. “That’s what’s so interesting about faces. Because human evolution has made us so drawn towards faces, artists are able to create them out of really very few contours in a really intricate way.”


Sophomore Anuhita Basavaraju discusses different interpretations of the face in Pablo Picasso’s “Head of a Woman.”

In addition to comparing ambiguous and representational faces, the team also examined how subtle changes to a face, like altering the color contrast or applying a mask, might influence our perception.

Sophomore Eduardo Salgado said that while features like eyes and a nose and mouth are the primary components that allow our brains to construct a face, masks may remove the subtler dimensions of facial expression that we rely on for social cues.

For instance, participants viewing a painting titled “Decompositioning” by artist Jeff Sonhouse, which features a masked man standing before an exploding piano, spent most of their time dwelling on the man’s covered face, despite the violent scene depicted on the rest of the canvas.

“When you cover a face, it’s hard to know what the person is thinking,” Salgado said. “You lack information, and that calls more attention to it. If he wasn’t masked, the focus on his face might have been less intense.”

In connection with the exhibition, Nasher MUSE, DIBS, and the Bass Connections team will host visiting illustrator Hanoch Piven this Thursday, April 7th, and Friday, April 8th, for a lunchtime conversation and hands-on workshop about his work creating portraits with found objects.

Making Faces will be on display in the Nasher Museum of Art’s Academic Focus Gallery through July 24th.


Post by Kara Manke

