Following the people and events that make up the research community at Duke


Category: Computers/Technology

3D Storytelling of Livia’s Villa

by Anika Radiya-Dixit


Eva Pietroni is in charge of the 3D modeling project, “Livia’s Villa Reloaded”

Have you ever pondered how 3D virtual realities are constructed? Or their potential to tell stories about architectural masterpieces built millennia ago?

The 5th International Conference on Remote Sensing in Archaeology, held in the Fitzpatrick Center this weekend, explored new technologies such as remote sensing, 3D reconstruction, and 3D printing used across the many facets of archaeology.

In her talk about a virtual archaeology project called “Livia’s Villa Reloaded,” Eva Pietroni, an art historian and co-director of the Virtual Heritage Lab in Italy, explored ways to integrate 3D modeling techniques into a virtual reality that best describes the history behind the reconstruction of the villa. The project is dedicated to the Villa Ad Gallinas Albas, which Livia Drusilla brought as a dowry when she married Emperor Augustus in the first century B.C.

The archaeological landscape and the actual site have been modeled as 3D scenes in a virtual reality application, with guides situated around the area to explain details of the reconstruction to visitors. The model combined images from the currently observable landscape and the potential ancient landscape, derived from both hypotheses and historical references. Many parts of the model have been implemented in the Duke Immersive Virtual Environment (DiVE).

Instead of using simple 3D characters to talk to the public, the team decided to film real actors performing on a small virtual set in front of a green screen. They used a specialized cinematic camera and experimented with lighting and filtering effects to obtain the best shots of the actors, which were later composited into the virtual environment. Pietroni expressed her excitement at the numerous feats the team was able to accomplish, especially since they were not limited to rudimentary technology such as joysticks and push buttons. As a result, the 3D scenes have been implemented by testing the “grammar of gesture,” or in other words, interactivity driven by mid-air gestures, in a virtual environment. Hearteningly, the public has been “attracted by this possibility,” encouraging the team to further refine what the virtual character is able to do.

In her video demonstration, Pietroni showed the audience Livia’s Villa being reconstructed in real time with cinematographic paradigms and virtual set practices. It was fascinating to watch the video move smoothly over the virtual reconstruction, giving a helicopter view of the site.

 


Helicopter view of the villa

One important point Pietroni emphasized was testing how much freedom of exploration to give the user. Currently, the exploration mode, indicated by the red dots hovering over the bird in the bottom left corner of the virtual reality, follows a predefined camera animation path, since the site is very large and users could easily get lost. At the same time, the user can interrupt this automated navigation at any point to look around and rotate their arm to explore the area. The effect is a combination of a “movie and a free exploration” that keeps the audience engaged for an optimal length of time.
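For readers curious how such a guided-but-interruptible tour might work under the hood, here is a minimal Python sketch of the pattern Pietroni described: the camera advances along a predefined animation path, and any user gesture pauses the tour for free look-around until input stops. This is only an illustration of the idea, not the project’s code; the class name, waypoints, and parameters are invented for the example.

```python
import numpy as np

class GuidedTourCamera:
    """Camera that follows a predefined tour path but can be interrupted for free exploration."""

    def __init__(self, waypoints, speed=1.0):
        self.waypoints = np.asarray(waypoints, dtype=float)  # (N, 3) positions along the tour
        self.speed = speed       # waypoints traversed per second while in automatic mode
        self.progress = 0.0      # position along the path, in waypoint units
        self.yaw = 0.0           # user-controlled look-around angle, in radians

    def update(self, dt, user_yaw_rate=None):
        """Advance one frame; any non-None user input interrupts the automated tour."""
        if user_yaw_rate is not None:
            # The visitor is gesturing: pause the tour and rotate the view instead.
            self.yaw += user_yaw_rate * dt
        else:
            # No input: resume the predefined camera animation so visitors don't get lost.
            self.progress = min(self.progress + self.speed * dt, len(self.waypoints) - 1)

        # Interpolate the camera position between the two nearest waypoints.
        i = int(self.progress)
        j = min(i + 1, len(self.waypoints) - 1)
        frac = self.progress - i
        position = (1 - frac) * self.waypoints[i] + frac * self.waypoints[j]
        return position, self.yaw

# Example: a three-waypoint "helicopter" path over the site (coordinates are made up).
camera = GuidedTourCamera(waypoints=[(0, 0, 30), (60, 10, 35), (120, 40, 40)])
pos, yaw = camera.update(dt=1 / 60)                     # automated flight
pos, yaw = camera.update(dt=1 / 60, user_yaw_rate=0.8)  # a gesture interrupts to look around
```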

Another feature in the menu options allows the user to navigate to a closer view of a specific part of the villa. Here, the user can walk through different areas, such as the kitchens and gardens, with guides located in specific spots who activate once the user enters the desired region. This virtual storytelling is important for giving the user a vicarious sense of the life and perspective of people living in ancient times. For example, a guide dressed in a toga explained the traditions observed during mealtimes in a kitchen, and another guide in the private gardens described the family’s sleeping habits. The virtual details of the private garden were spectacular, each leaf realistically swaying in the wind, each flower so well crafted that one could almost feel the texture of the petals while strolling past.

 


Guide talking about a kitchen in the villa


Strolling through the gardens

The novelty of the “Livia’s Villa Reloaded” project is especially remarkable because the team was able to incorporate new archaeological findings about the villa, rather than simply building a system from old data and never updating the visuals. Sometimes, as the speaker noted, new data required the team to entirely reconfigure the lighting of a certain part of the villa, so unfortunately the update process is not yet automatic. To keep improving the application, the team often asks the public which aspects they liked and disliked, and perhaps in the future the virtual scenes of the villa will be developed to such perfection that they could be confused with reality itself.

 

See details about the conference at: http://space2place.classicalstudies.duke.edu/program/dive

Mathematical Restoration of Renaissance Masterpieces


The Ghissi Masterpiece, missing the ninth panel

By Olivia Zhu


Ninth panel of the Ghissi masterpiece, as reconstructed by Charlotte Caspers

What do Renaissance masterpieces and modern medical images have in common?

The same mathematical technique, “oriented elongated filters,” originally developed to detect blood vessels in medical images, can also be used to detect cracks in digital images of centuries-old Renaissance paintings.

On September 19, Henry Yan, Rowena Gan, and Ethan Levine, three undergraduate students at Duke, presented their work on oriented elongated filters and many other techniques to the Math Department. Yan, Gan, and Levine spent the summer doing research to detect and correct cracks in the digitized Ghissi masterpiece, an altarpiece by the 14th-century Italian painter Francescuccio di Cecco Ghissi. The altarpiece originally consisted of nine panels, but one was lost in the annals of history and has recently been reconstructed by artist and art historian Charlotte Caspers.

The role of the three undergrads was to digitally rejuvenate the panels of the Ghissi masterpiece, which had faded and accumulated cracks in the paint layers because of weathering factors like pressure and temperature. Using various mathematical analysis techniques implemented in Matlab, including oriented elongated filters, linear combinations of 2-D Gaussian kernels (which essentially create directional filters), K-SVD (which updates dictionary atoms to better fit an image), and multi-scale top-hat filtering (which extracts small elements and details from an image), the research group created a “crack map,” which they overlaid on the original image.

Henry Yan’s K-SVD analysis to detect cracks in the image at left
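To give a concrete sense of what an “oriented elongated filter” does, here is a minimal sketch in Python (rather than the group’s Matlab): the image is convolved with zero-mean, line-shaped kernels at many orientations, the strongest response at each pixel is kept, and the result is thresholded into a binary crack map. The kernel sizes and threshold are illustrative assumptions, not the students’ actual parameters.

```python
import numpy as np
from scipy.ndimage import convolve

def oriented_line_kernel(length=15, sigma=1.5, theta=0.0):
    """Zero-mean matched filter for a thin dark line (a crack) oriented at angle theta."""
    half = length // 2
    ax = np.arange(-half, half + 1)
    xx, yy = np.meshgrid(ax, ax)
    across = xx * np.cos(theta) + yy * np.sin(theta)         # axis across the line
    along = -xx * np.sin(theta) + yy * np.cos(theta)         # axis along the line
    support = np.abs(along) <= half                          # limit the kernel's elongated extent
    kernel = -np.exp(-across**2 / (2 * sigma**2)) * support  # dark-line intensity profile
    kernel -= (kernel.sum() / support.sum()) * support       # zero mean, so flat paint gives no response
    return kernel

def crack_map(gray, n_orientations=12):
    """Maximum oriented-filter response over all angles, thresholded into a binary crack map."""
    responses = [
        convolve(gray.astype(float), oriented_line_kernel(theta=t), mode="reflect")
        for t in np.linspace(0, np.pi, n_orientations, endpoint=False)
    ]
    best = np.max(responses, axis=0)
    # Keep pixels whose best-orientation response stands well above the image's typical level.
    return best > best.mean() + 3 * best.std()
```

Overlaying the resulting binary map on the original image gives the kind of crack map described above; in the students’ pipeline, the other detectors (K-SVD, multi-scale top-hat) presumably contribute to the same map.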

Then they instructed the computer to fill in the cracks with the colors directly adjacent to the cracks, thereby creating a smoother, crack-free image—this method is called inpainting.
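The inpainting step can be approximated with OpenCV’s built-in routine, which fills each masked pixel from the surrounding undamaged colors. This is a stand-in for the students’ own implementation, and the file names below are placeholders.

```python
import cv2
import numpy as np

# Placeholder file names: a digitized panel and the binary crack map from the previous step.
painting = cv2.imread("ghissi_panel.png")
mask = cv2.imread("crack_map.png", cv2.IMREAD_GRAYSCALE)
mask = (mask > 0).astype(np.uint8) * 255  # OpenCV expects an 8-bit mask; nonzero pixels get filled

# Fill each crack pixel from nearby colors (Telea's fast-marching inpainting, radius 3 pixels).
restored = cv2.inpaint(painting, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("ghissi_panel_restored.png", restored)
```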

In the future, Yan, Gan, and Levine hope to optimize the procedures they have developed in order to accomplish color remapping, digitally aging or refurbishing images so that they look contemporary to their historical period, and to digitally restore gilding, the gold leaf applied to paintings.

Teachers Look to 'Alice' for Help

Guest Post by Leah Montgomery, NC Central University

With technology and computer science among the fastest growing fields of study today, it’s a wonder there are so few computer science classes in public middle and high schools.

Florida teacher Chari Distler’s message to a Duke classroom full of her middle and high school teaching colleagues was a promising one: They can get a new generation of kids interested in computer science.


School teachers from all over the country learned programming at Duke this summer.

All they have to do is follow Alice.

Alice is a 3D virtual worlds programming environment that offers an easy way to create animations for games and storytelling. Since 2008, Duke Professor Susan Rodger has led a two-week summer program training teachers to use Alice to help promote computer literacy among young students.

“What we’re trying to do is teach middle school and high school teachers, in all disciplines, how to program and then help them to integrate it into their discipline,” said Rodger. “The teachers will then expose students to what computer science is. The idea is that if they know what it is then they might choose it as a career when they go to college.”

Distler attended her first Adventures in Alice Programming session at Duke two years ago and returned this week to advise this year’s class on how she implemented the program in her classes.

She said one of her students from North Broward Preparatory School won second place in the annual Alice contest for his animated 45-second video titled “From Rags to Riches.”

Audrey Toney, an instructional coach for teachers in the North Carolina New Schools network, said she learned about Alice through a teacher who wanted to add programming to her curriculum.

“It gives students computational thinking and critical thinking and offers another way to present other than PowerPoint and Prezi,” said Toney.

Toney wants to challenge her professional development students to use Alice to replicate a design of a robotic arm that will lift and unload boxes. The program will allow students to budget money, price the cost of parts and code the robot’s movements.

During the first week of the workshop, teachers get familiar with the Alice software through interactive activities. Teachers created worlds with flying dragons, flipping princesses and annoyed Garfields.


The teachers worked together on learning Alice programming. (Les Todd, Duke Photography)

In week two, teachers learned about the use of 3-D imaging in the classroom at the Duke Immersive Virtual Environment (DiVE). The teachers also started creating their own Alice-based lesson plans this week. New Jersey high school teacher Kenneth McCarthy said he found his inspiration in the Sunday paper.

“I was thumbing through the Sunday paper and saw Garfield,” said McCarthy, who teaches algebra two and a beginner programming class. “It just looked like something that could be easily used with Alice.”

McCarthy is familiar with Alice, having used the program last year when his students participated in the Hour of Code, an initiative that challenges students and teachers to learn programming in one hour.

“I think the traditional thought was that you have to know algebra two (and other higher mathematics) to learn this, but Alice can be used in elementary schools,” said McCarthy.

Rising Duke senior Samantha Huerta was a workshop assistant for Susan Rodger for nine weeks this summer, helping develop workshop materials and finding ways to integrate computer science into math and other subjects.

“I wasn’t exposed to any type of computer science growing up,” said Huerta. “This is a field that isn’t going to go away, and we need to have more diversity. As a female Latina, I am a double minority and it is my hope to continue researching and bringing diversity to this field.”

Calderbank Honored For Being Honored


Robert Calderbank (left) shares a laugh with Engineering Dean Tom Katsouleas and Provost Sally Kornbluth at a reception in his honor Wednesday. (Jared Lazarus, Duke Photography)

By Karl Leif Bates

Robert Calderbank, director of the Information Initiative at Duke (iiD) and the Charles S. Sydnor Professor of Computer Science, was the guest of honor at a small reception hosted by top administrators this week.

On July 3, he was named the 2015 recipient of the Claude E. Shannon Award by the IEEE Information Theory Society, the most coveted prize in Calderbank’s field.

“The Shannon Award is as big as it gets in electrical engineering and computer science,” said Tom Katsouleas, dean of the Pratt School of Engineering. “It reflects the fundamental role he’s played in communications, with many of his algorithms in use in mobile phones and internet communications today.”

The iiD is an interdisciplinary program, headquartered in Gross Hall but reaching into many areas of campus, that is expanding Duke’s “big data” computational research.

The Claude Shannon Award honors “consistent and profound contributions to the field of information theory.” It is named for the man considered the father of information theory, who in his 1937 MIT master’s thesis first proposed applying Boolean logic to electrical circuits.

Calderbank joined Duke in 2010 as dean of natural sciences in Trinity College of Arts & Sciences. Previously, he directed the Program in Applied and Computational Mathematics at Princeton University, beginning in 2004. Before that, he was vice president for research at AT&T, responsible for one of the first industrial research labs to focus on “big data.”

Calderbank will present a Shannon Lecture at the IEEE International Symposium on Information Theory in Summer 2015 in Hong Kong.

 

Math and Comp Sci Junior Studies Fruit Flies

By Ashley Mooney


Dorsal closure is a stage in fruit fly embryonic development that is used to study wound healing.

Roger Zou, a computer science and math major from Solon, Ohio, is working on creating more efficient ways to study wound-healing in fruit flies. It turns out that the way fruit flies heal actually has implications for how mammals heal too.

The junior is developing computational methods that can more accurately quantify cellular properties of fruit flies. As fruit fly embryos develop, he tracks cells through space and time to learn more about a process called dorsal closure. It’s a developmental stage that is similar to wound healing, where a gap in the embryo’s epithelium—which is like its skin—is closed by the coordinated effort of different types of cells. (see movie below)


Roger Zou is a junior spending the summer in Dan Kiehart’s lab.

“It’s fun to study the morphological forces because it’s not entirely understood how organisms develop,” Zou said.

In his experiments, Zou uses a laser under a microscope to make cuts in areas of the fly embryos, then applies computational methods to measure how the wounds heal.

Beyond collecting such data, Zou is developing a computer program that analyzes images from the microscope more accurately.
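To give a flavor of what tracking cells through space and time can involve, here is a minimal Python sketch of frame-to-frame cell linking using scikit-image and SciPy. It illustrates the general approach rather than Zou’s actual program; the segmentation threshold and matching distance are arbitrary choices for the example.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from skimage import filters, measure

def cell_centroids(frame):
    """Segment bright cells in one microscope frame and return their centroid coordinates."""
    mask = frame > filters.threshold_otsu(frame)   # simple global threshold
    labels = measure.label(mask)                   # connected components = candidate cells
    return np.array([region.centroid for region in measure.regionprops(labels)])

def link_cells(prev_pts, next_pts, max_dist=15.0):
    """Match cells between consecutive frames by minimizing total centroid displacement."""
    cost = np.linalg.norm(prev_pts[:, None, :] - next_pts[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    # Discard implausibly long jumps; those cells are treated as appearing or disappearing.
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]
```

Chaining these links across every pair of frames yields a track for each cell, from which properties such as speed or shape change during dorsal closure can be quantified.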

Zou has worked in Biology Professor Daniel Kiehart’s lab since his freshman year. His project was originally a component of a graduate student’s dissertation, but after she graduated, he continued some aspects of her research.

His project has been funded by the Dean’s Summer Research Fellowship for two consecutive summers. He also has done several independent study projects. Zou plans to publish his research this summer and will likely use the data eventually for a senior thesis.

Several of Zou’s math and computer science classes have given him a background in the techniques needed to use a computer to analyze large sets of image data, he said.

“My favorite thing about my research is the ability to learn new things independently,” Zou said. “[Kiehart] is very good at leading me in the right direction but allowing me to be very independent and I think because of that I’ve been able to learn a lot more and learn from my mistakes.”

Outside of his research, Zou is a teaching assistant for the computer science class Data Structures and Algorithms. He also tutors Duke students in organic chemistry and middle school children in math through the America Reads*America Counts program, and he does web development for The Chronicle, Duke University’s independent student newspaper.

After graduating, Zou said he hopes to pursue a PhD in computational biology or computer science, or perhaps a combined MD-PhD program. No matter which path he chooses, Zou said he wants to continue doing research.

[youtube http://www.youtube.com/watch?v=Yk-O_W1Wqbc?rel=0]

Copper Nanowires Now Match Performance of Leading Competitor

Images of the first and last stages (intermediate photo excluded) of the copper nanowire growth as seen through a transmission electron microscope. Interestingly, though not visible here, the nanowires are pink. (Photo: Shengrong Ye)


By Erin Weeks

Copper nanowires are one step closer to becoming a low-cost substitute for the transparent conductor in solar cells, organic LEDs and flexible, electronic touch screens. A team at Duke has succeeded in making transparent conductors from copper nanowires that are only 1% less transparent than the conventional material, indium-tin oxide (ITO).

Copper is 1000 times more abundant and 100 times cheaper than indium, the main ingredient in ITO, but for years copper nanowires have lagged behind in terms of transmittance.

Assistant chemistry professor Benjamin Wiley’s lab has fixed that — simply by changing the aspect ratio, or the proportion of length to diameter, of the nanowires. The findings were reported recently in the journal Chemical Communications.

“We finally have something competitive with ITO in terms of performance, and we got there by increasing the nanowire aspect ratio,” Wiley said.

Using a special growth solution, Wiley’s lab can “sprout” the nanowires in under half an hour and at normal atmospheric pressure. By further tweaking the synthesis, the team was able to prompt the nanowires to grow long and uniform in diameter, instead of tapered and baseball bat-shaped, forms that have lower aspect ratios.

The paper also includes images of copper nanowire growth observed in real time, the first time such observations have been published.

Still, at least one kink remains before the nanowire technology will be attractive for commercial production. The copper nanowires are susceptible to corrosive oxidation, which Wiley’s team has tried to remedy by coating the nanowires with materials like nickel. Unfortunately, a nickel coating reduces the transparency of the nanowire films.

“So now we’re trying to figure out ways to protect the nanowires without decreasing the performance,” Wiley said. “We’re focused on getting the same performance, but having more stability.”

Citation: “A rapid synthesis of high aspect ratio copper nanowires for high-performance transparent conducting films.” Shengrong Ye, Aaron Rathmell, et al. Chemical Communications, March 11, 2014. DOI:10.1039/c3cc48561g.

Sign Up For Datafest 2014 to Work on Mystery Big Data

Heads up, Duke undergrads and graduate students — here’s an opportunity to hang out in the beautifully renovated Gross Hall, get creative with your friends using big data and compete for cash prizes and statistics fame.

Datafest, a data analysis competition that started at UCLA, is in its third year in the Triangle. Every year, a mystery client provides a dataset that teams can analyze, tinker with and visualize however they’d like over the course of a weekend. Think hackathon, but for data junkies.

“The datasets are bigger and more complex than what you’ll see in a classroom, but they’re of general interest,” said organizer Mine Çetinkaya-Rundel, an assistant professor of the practice in the Duke statistics department. “We want to encourage students from all levels.”

Last year’s mystery client was online dating website eHarmony (you can read about it here), and teams investigated everything from heightism to Myers-Briggs personality matches in online dating. In 2012, the dataset came from Kiva, the microlending site.

This year’s dataset provider will be revealed on the first day of Datafest. Sign-up ends Monday, March 10, so assemble your team and register here!

 

The Catastrophic Origins of Our Moon


This still from a model shows Earth just after collision with a planet-sized object. The colors indicate temperature. (Photo: Robin Canup)

By Erin Weeks

About 65 million years ago, an asteroid the size of Manhattan collided with the Earth, resulting in the extinction of 75% of the planet’s species, including the dinosaurs.

Now imagine an impact eight orders of magnitude more powerful — that’s the collision most scientists believe formed the moon.

One of the leading researchers of the giant impact theory of the moon’s origin is Robin Canup, associate vice president of the Planetary Science Directorate at the Southwest Research Institute. Canup was elected to the National Academy of Sciences in 2012, and she’s also a graduate of Duke University — where she returned yesterday to give the fifth Hertha Sponer Lecture, named for the physicist and first woman awarded a full professorship in science at Duke.

According to the giant impact hypothesis, another planet-sized object crashed into Earth shortly after its formation 4.5 billion years ago. The catastrophic impact sent an eruption of dust and vaporized rock into space, which coalesced into a disk of material rotating around Earth’s smoldering remains (see a very cool video of one model here).  Over time, that wreckage accreted into larger and larger “planetesimals,” eventually forming our moon.


Robin Canup (Photo: Horst Meyer, who taught Canup as an undergrad at Duke)

Scientists favor this scenario, Canup said, because it answers a number of questions about our planet’s unusual lunar companion.

For instance, our moon has a depleted iron core, with 10% instead of the usual 30% iron composition. Canup’s models have shown the earth may have sucked up the molten core of the colliding object, leaving the dust cloud from which the moon originated with very little iron in it.

Another mystery is the identical isotopic signature of the moon and the earth’s mantle, which could be explained if the two original bodies mixed, forming a hybrid isotopic composition from the collision.

Canup’s models of the moon’s formation help us understand the evolution of just one (albeit important) cosmic configuration in our galaxy. As for the rest out there, she says scientists are just beginning to plumb the depths of how they came to be. Already, the models show “they’re even crazier than the theoreticians imagined.”

Are We Merely Machines? Defining the Qualities That Make Us Human


Dr. Picard speaks with Dr. Michael Gustafson of Duke University

By Olivia Zhu

In an age when the line between humans and robots begins to blur, we’re hard-pressed to identify the source of our uniqueness as humans. Dr. Rosalind Picard of MIT provided insight into that question during the Veritas Forum on Wednesday, January 29.

As a leader of the Affective Computing Research Group, Dr. Picard develops technologies that interpret and display emotion. For example, MACH is an interactive program that analyzes voice inflections and their corresponding emotional connotations to help MIT students refine their interview skills.

Improved sensors can inform parents and educators when autistic children and infants are under stress, which the child may not be able to communicate. But despite their lifelike appearances, the robots still lack feeling and experience, according to Dr. Picard.

Although Picard attempts to mimic humanity in her technology, she firmly denied that we are merely machines. She said that assembling a system—in this instance, a human—lends one a better understanding of that system; however, it does not give one a complete understanding of what makes us human.

Adding an element of faith to her lecture, she said that a person can only have full knowledge of humanity after death. What, then, makes us human? While the audience primarily suggested love or consciousness, Picard held that the defining human quality is the capacity for a relationship with God, “the very author of all meaning, of all emotion, all consciousness.” She went on to discuss her own faith, which she said was grounded largely in reading the Bible.

The conversation will continue with a panel discussion at 7:00 p.m. Wednesday, February 5, in Social Psychology 130, featuring Duke professors Ray Barfield, Bill Allard, and Connie Walker.

Learn to Fly a Drone in Three Minutes

By Erin Weeks

Missy Cummings has accomplished a lot of difficult things in her life — she was one of the Navy’s first female pilots, after all — but being a guest on The Colbert Report, she said, was hard.

Cummings told the story of her journey from Naval lieutenant to media drone expert last week at the Visualization Friday Forum seminar series in a talk (video archived here) titled “Designing a System for Navigating Small Drones in Tight Spaces.”


Missy Cummings joined Duke as an associate professor of mechanical engineering and materials science last semester.

Last semester, Cummings moved her renowned Humans and Automation Lab from MIT to Duke University. She’s wasted no time immersing herself in the new university and volunteered for the semester’s first seminar to introduce herself and her lab’s latest work to Duke’s visualization community.

Cummings’ research over recent years has centered on the development of a smartphone interface through which, she said, anyone can learn to pilot a one-pound drone in three minutes. The technology could be a boon to the U.S. Army, which now issues smartphones to its personnel and mostly relies on cumbersome, gas-powered drones.

The lab tested the technology by asking volunteers to maneuver a drone through an obstacle course both in the field — where they learned wind and cold temperatures are not a drone’s friend — and in simulated environments.

One of the things they discovered in both cases was that individuals who performed well in a spatial reasoning test were more likely to complete the obstacle course. Moreover, these performances tended to be gendered, with men scoring higher than women in spatial reasoning. Interestingly, Cummings noted, other studies have shown women tend to perform better piloting drones in long-term, “boring” scenarios with little action.

Cummings is interested in teasing out the reasons for these results, which could have significant implications for the U.S. Army or companies one day interested in hiring drone pilots.

As Stephen Colbert confirmed, you may be able to fly a drone with three minutes’ training, but that doesn’t mean you can fly it well.


Cummings talks to a full house at the Visualization Friday Forum on January 24.

