Following the people and events that make up the research community at Duke

Students exploring the Innovation Co-Lab

Category: Computers/Technology

X-mas Under X-ray

If, like me, you just cannot wait until Christmas morning to find out what goodies are hiding in those shiny packages under the tree, we have just the solution for you: stick them in a MicroCT scanner.

Our glittery package gets the X-ray treatment inside Duke’s MicroCT scanner. Credit Justin Gladman.

Micro computed-tomography (CT) scanners use X-ray beams and sophisticated visual reconstruction software to “see” into objects and create 3D images of their insides. In recent years, Duke’s MicroCT has been used to tackle some fascinating research projects, including digitizing fossils, reconstructing towers made of stars, peeking inside 3D-printed electronic devices, and creating a gorgeous 3D reconstruction of the organs and muscle tissue inside a Southeast Asian tree shrew.

A 20 minute scan revealed a devilish-looking rubber duck. Credit Justin Gladman.

But when engineer Justin Gladman offered to give us a demo of the machine last week, we both agreed there was only one object we wanted a glimpse inside: a sparkly holiday gift bag.

While securing the gift atop a small, rotating pedestal inside the device, Gladman explained how the device works. Like the big CT scanners you may have encountered at a hospital or clinic, the MicroCT uses X-rays to create a picture of the density of an object at different locations. By taking a series of these scans at different angles, a computer algorithm can then reconstruct a full 3D model of the density, revealing bones inside of animals, individual circuits inside electronics – or a present inside a box.
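
The scan-then-reconstruct loop can be illustrated with a toy, two-dimensional version of the idea: project a synthetic object at a series of angles, then smear each projection back across the image. This is only a rough sketch (unfiltered back-projection on an invented “phantom”; real MicroCT software uses filtered back-projection or iterative methods, and the grid size, angles, and disk phantom here are all made up for illustration):

```python
import numpy as np
from scipy.ndimage import rotate

N = 64
yy, xx = np.mgrid[0:N, 0:N]
# A dense disk in the middle of an otherwise empty image -- our "present."
phantom = ((xx - N // 2) ** 2 + (yy - N // 2) ** 2 <= 5 ** 2).astype(float)

angles = range(0, 180, 10)

# "Scan": at each angle, sum density along parallel X-ray paths.
sinogram = [rotate(phantom, a, reshape=False, order=1).sum(axis=0) for a in angles]

# Reconstruct: smear each projection back across the image at its angle.
recon = np.zeros((N, N))
for a, proj in zip(angles, sinogram):
    recon += rotate(np.tile(proj, (N, 1)), -a, reshape=False, order=1)

peak = np.unravel_index(recon.argmax(), recon.shape)  # brightest spot of the reconstruction
```

Even this crude version recovers a bright (if blurry) spot where the dense object sits, which is the essence of how a series of angled scans becomes a density map.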

“Our machine is built to handle a lot of different specimens, from bees to mechanical parts to computer chips, so we have a little bit of a jack-of-all-trades,” Gladman said.

Within a few moments of sticking the package in the beam, a 2D image of the object in the bag appears on the screen. It looks kind of like the Stay Puft Marshmallow Man, but wait – are those horns?

Blue devil ducky in the flesh.

Gladman sets up a full 3D scan of the gift package, and after 20 minutes, the contents of our holiday loot are clear. We have a blue devil rubber ducky on our hands!

Blue ducky is a fun example, but the SMIF lab always welcomes new users, Gladman says, especially students and researchers with creative new applications for the equipment. For more information on how to use Duke’s MicroCT, contact Justin Gladman or visit the Duke SMIF lab at their website, Facebook, Youtube or Instagram pages.

Kara J. Manke, PhD

Post by Kara Manke

Mapping the Brain With Stories

Dr. Alex Huth. Image courtesy of The Gallant Lab.

On October 15, I attended a presentation on “Using Stories to Understand How The Brain Represents Words,” sponsored by the Franklin Humanities Institute and Neurohumanities Research Group and presented by Dr. Alex Huth. Dr. Huth is a neuroscience postdoc who works in the Gallant Lab at UC Berkeley and was here on behalf of Dr. Jack Gallant.

Dr. Huth started off the lecture by discussing how semantic tasks activate huge swaths of the cortex, and how naturally told stories richly engage this semantic system. The central question was understanding “how the brain represents words.”

To investigate this, the Gallant Lab designed a natural language experiment. Subjects lay in an fMRI scanner and listened to ten naturally spoken narratives, or stories, 72 hours’ worth in all. They heard many different words and concepts. Using an imaging technique called GE-EPI fMRI, the researchers were able to record BOLD responses from the whole brain.

Dr. Huth explaining the process of obtaining the new colored models that revealed semantic "maps are consistent across subjects."

Dr. Huth showed a scan and said, “So looking…at this volume of 3D space, which is what you get from an fMRI scan…is actually not that useful to understanding how things are related across the surface of the cortex.” This limitation led the researchers to improve their methods by reconstructing the cortical surface and flattening it into a 2D image that reveals what is going on across the whole brain. This approach would let them see where in the brain the relationship between what the subject was hearing and the brain’s response was occurring.

A model was then created that required voxel interpretation, which “is hard and lots of work,” said Dr. Huth. “There’s a lot of subjectivity that goes into this.” To simplify voxel interpretation, the researchers reduced the dimensionality of the voxel space using principal components analysis: they took the data, found the important factors that were shared across subjects, and interpreted the meaning of those components. To visualize the components, the researchers sorted words into twelve different categories.
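
The dimensionality-reduction step can be sketched in a few lines. The voxel-by-feature matrix below is random, purely a stand-in for the fitted semantic model weights the researchers actually used, but the mechanics of principal components analysis (done here via an SVD) are the same:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in: 1000 voxels x 50 semantic features.
weights = rng.normal(size=(1000, 50))

# PCA via SVD: center the features, then decompose.
centered = weights - weights.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# Project every voxel onto the first few components; each voxel
# gets a low-dimensional coordinate instead of 50 raw weights.
n_components = 4
voxel_coords = centered @ Vt[:n_components].T

explained = (S ** 2) / (S ** 2).sum()  # fraction of variance per component
```

Each voxel’s low-dimensional coordinates can then be mapped to a color, which is how colored semantic maps like the ones shown in the talk are produced.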

The Four Categories of Words Sorted in an X,Y-like Axis

These categories were then further simplified into four “areas” on what might resemble an x,y axis: violent words in the top right, social perceptual words in the top left, words relating to “social” concepts in the lower left, and emotional words in the lower right. Instead of x and y axis labels, there were PC (principal component) labels. The words from the study were then colored based on where they appeared in the PC space.

By using this model, the Gallant Lab could identify which patches of the brain were doing different things. Small patches of color showed which “things” the brain was “doing” or “relating.” The researchers found that these complex cortical maps of semantic information were consistent across subjects.

These responses were then used to create models that could predict BOLD responses from the semantic content in stories. The result of the study was that the parietal cortex, temporal cortex, and prefrontal cortex represent the semantics of narratives.

Post by Meg Shieh

Does Digital Healthcare Work?

Wearable technologies like Fitbit have been shown to provide a short-term increase in physical activity, but long-term benefits are still unclear, even if recent studies on corporate wellness programs highlight the potential healthcare savings.

Luca Foschini, PhD is a co-founder and head of data science at Evidation Health, and a visiting research scientist at UCSB. Source: Network Science IGERT at UCSB.

To figure out the effects of these technologies on our health, we need ways to efficiently mine through the vast amounts of data and feedback that wearable devices constantly record.

As someone who has recently jumped on the Fitbit “band” wagon, I have often wondered about what happens with all the data collected from my wrist day after day, week after week.

Luca Foschini, a co-founder and head of data science at Evidation Health, recently spoke at Duke’s Genomic and Precision Medicine Forum where he explained how his company uses these massive datasets to analyze and predict how digital health interventions — Fitbits and beyond — can result in better health outcomes.

California-based Evidation Health uses real-life data collected, with authorization, from 500,000-plus users of mobile health applications and devices. This mobile health or “mHealth” data is quickly becoming a focus of intense research interest because of its ability to provide such a wealth of information about an individual’s behavior.

Foschini and Evidation Health have taken the initiative to design and run clinical studies to show the healthcare field that digital technologies can be used for assessing patient health, behavioral habits, and medication adherence, just to name a few.

Foschini said that the benefits of mobile technologies could go far beyond answering questions about daily behavior and lifestyle to formulate predictions about health outcomes. This opens the door for “wearables and apps” to be used in the realm of behavior change intervention and preventative care.

Foschini speaks at Duke’s Genomic and Precision Medicine Forum

Foschini explains how data collected from thousands of individuals wearing digital health trackers was used to find associations between activity tracking patterns and weight loss.

Evidation Health is not only exploring data based on wearable technologies, but data within all aspects of digital health. For example, an interesting concept to consider is whether devices create an opportunity for faster clinical trials. So-called “virtual recruiting” of participants for clinical studies might use social media, email campaigns and online advertising, rather than traditional ads and fliers. Foschini said a study by his firm found this type of recruitment is up to twelve times faster than normal recruitment methods for clinical trials (Kumar et al 2016). 

While Foschini and others in his field are excited about the possibilities that mHealth provides for the betterment of healthcare, he acknowledges the hurdles standing in the way of this new approach. There is no standardization in how this type of data is gathered, and greater scrutiny is needed to ensure the reliability and accuracy of some of the apps and devices that supply the data.

Post by Amanda Cox

Girls Get An Eye-Opening Introduction to Photonics

Demonstration of the Relationship between Solar Power and Hydrogen Fuel. Image courtesy of DukeEngineering.

Last week I attended the “Exploring Light Technologies” open house hosted by the Fitzpatrick Institute for Photonics, held to honor International “Introduce a Girl to Photonics” Week. It was amazing!

I was particularly enraptured by a MEDx Wireless Technology presentation and demonstration titled “Using Light to Monitor Health and View Health Information.” There were three “stations” with a presenter at each station.

At the first station, the presenter, Julie, discussed how wearable technologies are used in optical heart rate monitoring. For example, a finger pulse oximeter uses light to measure blood oxygen levels and heart rates, and fitness trackers typically contain LED lights in the band. These lights shine into the skin and the devices use algorithms to read the amount of light scattered by the flow of blood, thus measuring heart rate.
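
The heart-rate algorithm Julie described boils down to counting pulses in the scattered-light signal. Here is a minimal sketch using a synthetic optical (PPG) waveform; the sampling rate, waveform, and threshold are invented for illustration, and real trackers add heavy filtering and motion-artifact rejection:

```python
import math

fs = 50          # samples per second (hypothetical sensor rate)
bpm_true = 72    # simulated pulse
seconds = 10

# Simulated light-intensity signal: a pulse wave plus a slow baseline drift.
t = [i / fs for i in range(fs * seconds)]
signal = [math.sin(2 * math.pi * (bpm_true / 60) * x)
          + 0.3 * math.sin(2 * math.pi * 0.2 * x) for x in t]

# Count local maxima above a threshold -- each one is a heartbeat.
peaks = [i for i in range(1, len(signal) - 1)
         if signal[i] > signal[i - 1]
         and signal[i] > signal[i + 1]
         and signal[i] > 0.5]

bpm_estimate = len(peaks) * 60 / seconds
```

Counting peaks over this 10-second window gives roughly 72 beats per minute, matching the simulated pulse.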

At the second station, the presenter, Jackie, spoke about head-mounted displays and their uses. The Google Glass helped inspire the creation of the Microsoft Hololens, a new holographic piece of technology resembling a hybrid of laboratory goggles and a helmet. According to Jackie, the Microsoft Hololens “uses light to generate 3D objects we can see in our environment.”

Using the Microsoft Hololens. Image courtesy of DukeEngineering.

After viewing a video on how the holographic technology worked, I put on the Microsoft Hololens at the demonstration station. The team had set up 3D images of a cat, a dog and a chimpanzee. “Focus the white point of light on the object and make an L-shape with your fingers,” directed Eric, the overseer. “Snap to make the objects move.” With the heavy Hololens pressing down on my nose, I did as he directed. Moving my head moved the point of light. Using either hand to snap made the dog bark, the cat meow and lick its paws, and the chimpanzee eat. Even more interesting was the fact that I could move around the animals and see every angle, even when the objects were in motion. Throughout the day, I saw visitors of all ages with big smiles on their faces, patting and “snapping” at the air.

Applications of the Microsoft Hololens are promising. In the medical field, it can be used to display patient health information or electronic health records in one’s line of sight. In health education, students can view interactive 3D anatomical animations. Architects can use the Hololens to explore buildings. “Imagine learning about Rome in the classroom. Suddenly, you can actually be in Rome, see the architecture, and explore the streets,” Jackie said. “[The Microsoft Hololens] deepens the educational experience.”

Tour of the Facilities. Image courtesy of DukeEngineering.

Throughout the day, I oohed and aahed at the three floors’ worth of research presentations lining the walls. Interesting questions were posed on posters that were easy to comprehend, even for a non-engineer such as myself. The event organizers truly made sure that every visitor would find at least one presentation to pique their interest. There were photonic displays and demonstrations with topics ranging from art to medicine to photography to energy conservation…you get my point.

Truly an eye-opening experience!

Post by Meg Shieh

Walla Scores Grand Prize at 17th Annual Start-Up Challenge

The finalists of Duke’s 17th Annual Start-Up Challenge have found time between classes, homework, and West Union runs to research and develop pitches aiming to solve real-world problems with entrepreneurship. The event, hosted last week at the Fuqua School of Business, featured a Trinity alum as the keynote speaker. Beating out the other seven start-up pitches for the $50,000 Grand Prize was Walla, an app founded by Judy Zhu, a Pratt senior.

Judy Zhu and the Walla team pose with their $50,000 check, which is giant in more ways than one.

Walla aims to create a social health platform for college students by addressing widespread loneliness and creating a more inclusive campus community. The app’s users post open invitations to activities, from study groups to pick-up sports, allowing students to connect over shared interests.

Walla is closely tied to Duke Medicine, providing data from user activity to medical researchers. User engagement is analyzed to supply professionals with valuable information on mental health in young adults. The app currently has 700 monthly active users, with 3,000 anticipated within the next month and many more as the app opens to other North Carolina colleges.

Tatiana Birgisson returned to Duke to talk about her own experience creating a business as an undergrad, one that won the Start-Up Challenge in 2013. Birgisson’s venture, the MATI energy drink, was born in her Central Campus dorm room and, through the support of Duke I&E resources, became the major energy drink contender it is today, a healthy alternative to Monster or Red Bull.

The $2,500 Audience Choice award went to Ebb, an app designed to empower women on their periods by keeping them informed of physical and emotional symptoms throughout the course of their cycles, and creating a community through which menstruating women can receive support from those they choose to share information with.

Tatiana Birgisson won the 2013 startup challenge with an energy drink brewed in her dorm room, now sold as MATI.

Other finalists included BioMetrix, a wearable platform for injury prevention; GoGlam, an application to connect working women with beauticians in Latin America; Grow With Nigeria, which provides engaging STEM experiences for students in Nigeria; MedServe; Tiba Health; and Teraphic.

This year’s Start-Up Challenge was a major success, with innovative entrepreneurs coming together to share their projects on changing the world. Be sure to come out next year; I’ll post an invite on Walla!

Post by Devin Nieusma

Taking Math Beyond the Blackboard

https://youtu.be/cZVxTeUeez8

Most days, math graduate student Veronica Ciocanel spends her time modeling how frog eggs go from jelly-like blobs to tiny tadpoles with a well-defined front and back, top and bottom. But for a week this summer, she used some of the same mathematical tools from her Ph.D. research at Brown to help a manufacturing company brainstorm better ways to filter nasty-smelling pollutants from industrial exhaust fumes.

Math professor Ryan Pellico of Trinity College took a similar leap. Most of his research aims to model suspension bridges that twist and bounce to the point of collapse. But he spent a week trying to help a defense and energy startup devise better ways to detect landmines using ground-penetrating radar.

Ciocanel and Pellico are among more than 85 people from across the U.S., Canada and the U.K. who met at Duke University June 13-17 for a five-day problem-solving workshop for mathematicians, scientists and engineers from industry and academia.

The concept got its start at Oxford University in 1968 and has convened 32 times since. Now the Mathematical Problems in Industry (MPI) workshop takes place every summer at a different university around the U.S. This was the first time Duke has hosted the event.

The participants’ first task was to make sense of the problems presented by the companies and identify areas where math, modeling or computer simulation might help.

One healthcare services startup, for example, was developing a smartphone app to help asthma sufferers and their doctors monitor symptoms and decide when patients should come in for care. But the company needed additional modeling and machine learning expertise to perfect their product.

Another company wanted to improve the marketing software they use to schedule TV ads. Using a technique called integer programming, their goal was to ensure that advertisers reach their target audiences and stay within budget, while also maximizing revenue for the networks selling the ad time.
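
That scheduling problem can be written as a 0/1 integer program: one binary variable per ad slot, a budget constraint, a reach constraint, and revenue as the objective. The toy version below (with invented slot data) simply enumerates every selection; at real scale the same formulation goes to an integer-programming solver:

```python
from itertools import combinations

# Hypothetical ad slots: (name, cost to advertiser, network revenue, audience reach)
slots = [
    ("primetime", 50, 40, 900),
    ("late-night", 20, 15, 300),
    ("daytime", 15, 12, 250),
    ("weekend", 30, 22, 500),
]
budget = 70       # advertiser's spending cap
min_reach = 1000  # audience the campaign must reach

best_revenue, best_pick = 0, ()
for r in range(1, len(slots) + 1):
    for pick in combinations(slots, r):
        cost = sum(s[1] for s in pick)
        reach = sum(s[3] for s in pick)
        revenue = sum(s[2] for s in pick)
        # Keep the feasible schedule (within budget, hits the reach target)
        # that earns the network the most revenue.
        if cost <= budget and reach >= min_reach and revenue > best_revenue:
            best_revenue, best_pick = revenue, tuple(s[0] for s in pick)
```

For this invented data, the best feasible schedule pairs the primetime and late-night slots for a revenue of 55.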

“Once we understood what the company really cared about, we had to translate that into a math problem,” said University of South Carolina graduate student Erik Palmer. “The first day was really about listening and letting the industry partner lead.”

Mathematicians Chris Breward of the University of Oxford and Sean Bohun of the University of Ontario Institute of Technology were among more than 80 people who met at Duke in June for a week-long problem-solving workshop for scientists and engineers from industry and academia.

For the rest of the week, the participants broke up into teams and fanned out into classrooms scattered throughout the math and physics building, one classroom for each problem. There they worked for the next several days, armed with little more than caffeine and WiFi.

In one room, a dozen or so faculty and students sat in a circle of desks in deep concentration, intently poring over their laptops and programming in silence.

Another team paced amidst a jumble of power cords and coffee cups, peppering their industry partner with questions and furiously scribbling ideas on a whiteboard.

“Invariably we write down things that turn out later in the week to be completely wrong, because that’s the way mathematical modeling works,” said University of Oxford math professor Chris Breward, who has participated in the workshop for more than two decades. “During the rest of the week we refine the models, build on them, correct them.”

Working side by side for five days, often late into the night, was intense.

“It’s about learning to work with people in a group on math and coding, which are usually things you do by yourself,” Ciocanel said.

“By the end of the week you’re drained,” said math graduate student Ann Marie Weideman of the University of Maryland, Baltimore County.

For Weideman, one of the draws of the workshop was the fresh input of new ideas. “Everyone comes from different universities, so you get outside of your bubble,” she said.

“Here people have tons of different approaches to problems, even for things like dealing with missing data, that I never would have thought of,” Weideman added. “If I don’t know something I just turn to the person next to me and say, ‘hey, do you know how to do this?’ We’ve been able to work through problems that I never could have solved on my own in a week’s worth of time.”

Supported by funding from the National Science Foundation and the industry partners, the workshop attracts a wide range of people from math, statistics, biostatistics, data science, computer science and engineering.

More than 50 graduate students participated in this year’s event. For them, one of the most powerful parts of the workshop was discovering that the specialized training they received in graduate school could be applied to other areas, ranging from finance and forensics to computer animation and nanotechnology.

“It’s really cool to find out that you have some skills that are valuable to people who are not mathematicians,” Pellico said. “We have some results that will hopefully be of value to the company.”

On the last day of the workshop, someone from each group presented their results to their company partner and discussed possible future directions.

The participants rarely produce tidy solutions or solve all the problems in a week. But they often uncover new avenues that might be worth exploring, and point to new approaches to try and questions to ask.

“We got lots of new ideas,” said industry representative Marco Montes de Oca, whose company participated in the MPI workshop for the second time this year. “This allows us to look at our problems with new eyes.”

Next year’s MPI workshop will be held at the New Jersey Institute of Technology in Newark.

Post by Robin A. Smith

What Makes a Face? Art and Science Team Up to Find Out

From the man in the moon to the slots of an electrical outlet, people can spot faces just about everywhere.

As part of a larger Bass Connections project exploring how our brains make sense of faces, a Duke team of students and faculty is using state-of-the-art eye-tracking to examine how the presence of faces — from the purely representational to the highly abstract — influences our perception of art.

The Making Faces exhibit is on display in the Nasher Museum of Art’s Academic Focus Gallery through July 24th.

The artworks they examined are currently on display at the Nasher Museum of Art in an installation titled, “Making Faces: At the Intersection of Art and Neuroscience.”

“Faces really provide the most absorbing source of information for us as humans,” Duke junior Sophie Katz said during a gallery talk introducing the installation last week. “We are constantly attracted to faces and we see them everywhere. Artists have always had an obsession with faces, and recently scientists have also begun grappling with this obsession.”

Katz said our preoccupation with faces evolved because they provide us with key social cues, including information about another individual’s gender, identity, and emotional state. Studies using functional Magnetic Resonance Imaging (fMRI) even indicate that we have a special area of the brain, called the fusiform face area, that is specifically dedicated to processing facial information.

The team used eye-tracking in the lab, and newly developed eye-tracking glasses in the Nasher Museum, as volunteers viewed artworks featuring both abstract and representational images of faces. They created “heat maps” from these data, illustrating where viewers gazed most on a piece of art, to explore how our facial bias might influence our perception of art.
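
A gaze heat map of this kind is essentially a smoothed 2D histogram of fixation points. Below is a small numpy sketch, with randomly generated fixations standing in for real eye-tracker output (the artwork size, cluster location, and grid are all invented):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical fixations on a 400x300 artwork, clustered near a "face" at (200, 120).
x = rng.normal(200, 20, size=500)
y = rng.normal(120, 15, size=500)

# Bin fixations into a coarse grid; each cell counts how often viewers looked there.
heat, _, _ = np.histogram2d(y, x, bins=(30, 40), range=[[0, 300], [0, 400]])

# Box-blur so isolated fixations blend into a smooth "heat" surface.
kernel = np.ones((3, 3)) / 9.0
padded = np.pad(heat, 1)
smooth = sum(padded[i:i + 30, j:j + 40] * kernel[i, j]
             for i in range(3) for j in range(3))

hottest = np.unravel_index(smooth.argmax(), smooth.shape)  # most-viewed cell
```

The hottest cell lands where the fixations cluster, here the invented “face” region of the canvas.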

This interactive website created by the team lets you observe these eye-tracking patterns firsthand.

When looking at faces straight-on, most people direct their attention to the eyes and the mouth, forming a triangular pattern. Katz said the team was surprised to find that this pattern held even when the faces became very abstract.

“Even in a really abstract representation of a face, people still scan it like they would a face. They are looking for the same social information regardless of how abstract the work is,” said Katz.


A demonstration of the eye-tracking technology used to track viewers’ gaze at the Nasher Museum of Art. Credit: Shariq Iqbal, John Pearson Lab, Duke University.

Sophomore Anuhita Basavaraju pointed out how a Lonnie Holley piece titled “My Tear Becomes the Child,” in which three overlapping faces and a seated figure emerge from a few contoured lines, demonstrates how artists are able to play with our facial perception.

“There really are very few lines being used, but at the same time it’s so intricate, and generates the interesting conversation of how many lines are there, and which face you see first,” said Basavaraju. “That’s what’s so interesting about faces. Because human evolution has made us so drawn towards faces, artists are able to create them out of really very few contours in a really intricate way.”

Sophomore Anuhita Basavaraju discusses different interpretations of the face in Pablo Picasso’s “Head of a Woman.”

In addition to comparing ambiguous and representational faces, the team also examined how subtle changes to a face, like altering the color contrast or applying a mask, might influence our perception.

Sophomore Eduardo Salgado said that while features like eyes and a nose and mouth are the primary components that allow our brains to construct a face, masks may remove the subtler dimensions of facial expression that we rely on for social cues.

For instance, participants viewing a painting titled “Decompositioning” by artist Jeff Sonhouse, which features a masked man standing before an exploding piano, spent most of their time dwelling on the man’s covered face, despite the violent scene depicted on the rest of the canvas.

“When you cover a face, it’s hard to know what the person is thinking,” Salgado said. “You lack information, and that calls more attention to it. If he wasn’t masked, the focus on his face might have been less intense.”

In connection with the exhibition, Nasher MUSE, DIBS, and the Bass Connections team will host visiting illustrator Hanoch Piven this Thursday, April 7th, and Friday, April 8th, for a lunchtime conversation and hands-on workshop about his work creating portraits with found objects.

Making Faces will be on display in the Nasher Museum of Art’s Academic Focus Gallery through July 24th.

Kara J. Manke, PhD

Post by Kara Manke

The Art of Asking Questions at DataFest 2016

Students engaged in intense collaboration during DataFest 2016, a stats and data analysis competition held from April 1-3 at Duke. Image courtesy of Rita Lo.

On Saturday night, while most students were fast asleep or out partying, Duke junior Callie Mao stayed up until the early hours of the morning pushing and pulling a real-world data set to see what she could make of it — for fun. Callie and her team had planned for months in advance to take part in DataFest 2016, a statistical analysis competition that occurred from April 1 to April 3.

A total of 277 students, hailing from schools as disparate as Duke, UNC Chapel Hill, NCSU, Meredith College, and even one high school, the North Carolina School of Science and Mathematics, gathered in the Edge to extract insight from a mystery data set. The camaraderie was palpable, as students animatedly sketched out their ideas on whiteboard walls and chatted while devouring mountains of free food.

Duke junior Callie Mao ponders which aspects of the data to include in her analysis.

Callie observed that the challenges the students faced at DataFest were unique: “The most difficult part of DataFest is coming up with an idea. In class, we get specific problems, but at DataFest, we are thrown a massive data set and must figure out what to do with it. We originally came up with a lot of ideas, but the data set just didn’t have enough information to fully visualize, though.”

At its core, the task meant that Callie and her team, instead of answering questions posed in class, had to come up with innovative and insightful questions of their own. With virtually no guidance, the team chose which aspects of the data to include and which to exclude.

Another principal consideration across all categories was which tools to use to quickly and clearly represent the data. Callie and her team used R to parse the relevant data, converted their desired data into JSON files, and used D3, a Javascript library, to code graphics to visualize the data. Other groups, however, used Tableau, a drag and drop interface that provided an expedited method for creating beautiful graphics.
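
The hand-off in that pipeline, between the statistics tool and D3 in the browser, is just serialization to JSON. Sketched here in Python rather than the R the teams actually used, and with hypothetical aggregated records, the shape of the step is:

```python
import json
from collections import Counter

# Hypothetical parsed records, as they might come out of the cleaning step.
records = [
    {"state": "NC", "visits": 3},
    {"state": "NC", "visits": 5},
    {"state": "VA", "visits": 2},
]

# Aggregate, then shape the result the way a D3 bar chart expects:
# an array of {name, value} objects.
totals = Counter()
for rec in records:
    totals[rec["state"]] += rec["visits"]

chart_data = [{"name": k, "value": v} for k, v in sorted(totals.items())]
payload = json.dumps(chart_data)  # written to a .json file for D3 to fetch
```

D3 can then fetch the resulting file and bind each {name, value} object to a bar or point in the visualization.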

Mentors assisted participants with formulating insights and presenting their results

Mentors assisted participants with formulating insights and presenting their results. Image courtesy of Rita Lo.

On Sunday afternoon, students presented their findings to their attentive peers and to a panel of judges composed of industry professionals, statistics professors from various universities, and representatives from Data and Visualization Services at Duke Libraries. Judges commended projects on aspects such as the incorporation of other data sources, like Google AdWords, the comprehensibility of the data presentation, and the applicability of findings in a real industry setting.

Students competed in four categories: best use of outside data, best data insight, best visualization, and best recommendation. The Baeesians, pictured below, took first place in best outside data, the SuperANOVA team won best data insight, the Standard Normal team won best visualization, and the Sample Solution team won best recommendation. The winning presentations will be available to view by May 2 at http://www2.stat.duke.edu/datafest/.

The Baeesians, winner of the Best Outside Data category at DataFest 2016: Rahul Harikrishnan, Peter Shi, Qian Wang, Abhishek Upadhyaya. (Not pictured: Justin Wang) Image courtesy of Rita Lo.

 

By student writer Olivia Zhu

Finding other Earths: the Chemistry of Star and Planet Formation

In the last two decades, humanity has discovered thousands of extrasolar planetary systems. Recent studies of star and planet formation have shown that chemistry plays a pivotal role both in shaping these systems and in delivering water and organic species to the surfaces of nascent terrestrial planets. Professor Geoffrey A. Blake, a chemist at the California Institute of Technology, talked to Duke faculty and students over late-afternoon pizza in the Physics building about the role of chemistry in star and planet formation and the search for other Earth-like planets.

The Milky Way rising above the Pacific Ocean and McKay Cove off the central California coast.

In the late 18th century, French scholar Pierre-Simon Laplace analyzed what our solar system could tell us about the formation and evolution of planetary systems. Since then, scientists have combined our knowledge of small bodies like asteroids, large bodies such as planets, and studies of extrasolar planetary systems to figure out how solar systems and planets are formed.

The “Astronomer’s periodic table,” showing the relative contents of the various elements present in stars like the sun.

In 2015, Professor Blake and other researchers investigated which ingredients in planets are necessary for the development of life. Using the Earth and our solar system as the basis for their data, they explored the relative disposition of carbon and nitrogen at each stage of star and planet formation to learn more about core formation and atmospheric escape. Analyzing the carbon-silicon atomic ratio in planets and comets, Professor Blake discovered that rocky bodies in the solar system are generally carbon-poor. Since carbon is essential for our survival, however, Blake and his colleagues would like to determine the range of carbon content that terrestrial planets can have and still support an active biosystem.

Analysis of C/Si ratios in extraterrestrial bodies revealed low carbon content in the formation of Earth-like planets.


With the Kepler mission, scientists have detected a variety of planetary objects in the universe. How many of these star-planet systems – based on measured distributions – have “solar system”-like outcomes? A “solar system”-like planetary system has at least one Earth-like planet at approximately 1 astronomical unit (AU) from its star – where conditions for life are more favorable – and at least one ice giant or gas giant like Jupiter at 3-5 AU to deflect comets away from the Earth-like planet. In our galaxy alone, there are around 100 billion stars and at least as many planets. Among stars similar to our sun, there may be over 4 million planetary systems like our solar system, with the closest Earth-like planet at least 20 light-years away. With the rapid improvement of scientific knowledge and technology, Professor Blake estimates that within the next 5-6 years we will be able to collect evidence on planets within 40-50 light-years to determine whether they have habitable atmospheres.
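As a rough sanity check, the quoted figures can be strung together as a Fermi estimate. The fractions below are illustrative assumptions chosen to reproduce the ~4 million number, not values from Blake's talk:

```python
# Back-of-envelope estimate of "solar system"-like systems in the Milky Way.
# The fractions are illustrative assumptions, not measured occurrence rates.
n_stars = 100e9            # stars in the Milky Way (~100 billion)
f_sunlike = 0.1            # assumed fraction of stars roughly similar to the sun
f_earth_at_1au = 0.02      # assumed fraction with an Earth-like planet near 1 AU
f_giant_3_5au = 0.02       # assumed fraction also hosting a giant at 3-5 AU

n_solar_like = n_stars * f_sunlike * f_earth_at_1au * f_giant_3_5au
print(f"{n_solar_like:,.0f} solar-system-like systems")  # ~4 million with these inputs
```

The point of such an estimate is that even small occurrence fractions, multiplied across 100 billion stars, still leave millions of candidate systems.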


Graph displaying the locations of Earth-like planets found at 0.01-1 AU from a star, and Jupiter-like planets at 0.01-50 AU from a star.

How do an Earth and a Jupiter form at their ideal distances from a star? Let’s take a closer look at how stars and planets are created, via the astrochemical cycle. Essentially, dense clouds of gas and dust become so opaque and cold that they collapse into a disk. The disk, rotating around a nascent star, transports mass inward toward the center and angular momentum outward. Roughly 1% of the star’s mass is left over in the disk – enough to form planets, which is why planets around stars are ubiquitous.

 

The Astrochemical Cycle: how solar systems are formed.


How are the planets formed? The dust grains unused by the star collide and grow, forming larger particles at specific distances from the star – called snowlines – where water vapor freezes into ice and solidifies. These “dust bunnies” grow into planetesimals (~10-50 km in diameter), such as asteroids and comets. If the force of gravity is large enough, the planetesimals grow further to form oligarchs (~0.1-10 times the mass of the Earth), which then become the large planets of the solar system.


Depiction of the snow line for planet formation.
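The location of the water snow line can be sketched with a standard textbook approximation for the blackbody equilibrium temperature around a sun-like star. The 278 K and 170 K constants below are rough, commonly used illustrative values, not figures from the talk:

```python
import math

# Rough blackbody equilibrium temperature around a sun-like star:
#   T(r) ≈ 278 K / sqrt(r / 1 AU)    (zero albedo, solar luminosity)
def equilibrium_temp_K(r_au):
    return 278.0 / math.sqrt(r_au)

# Approximate temperature below which water vapor freezes onto grains.
T_SNOW = 170.0  # kelvin

# Invert T(r) to find the radius where the disk crosses that temperature.
r_snow = (278.0 / T_SNOW) ** 2
print(f"water snow line ≈ {r_snow:.1f} AU")  # ≈ 2.7 AU, between Mars and Jupiter
```

This crude estimate lands the water snow line in the asteroid-belt region, consistent with icy bodies (and giant-planet cores) forming beyond it.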

In our solar system, a process called dynamic reorganization is thought to have restructured the order of our planets, putting Uranus before Neptune. This means that if other solar systems did not undergo such dynamic reorganization early in their formation, then other Earths may have lower organic and water content than our Earth. In that case, what constraints do we need to apply to determine whether a water/organic delivery mechanism exists for exo-Earths? Although we do not currently have the scientific knowledge to answer this, with ALMA and the next generation of optical/IR telescopes, we will be able to image the birth of solar systems directly and better understand how our universe came to be.

To the chemistry students at Duke, Professor Blake relayed an important message: learn the fundamentals of chemistry very carefully while in college. Over the next 40-50 years, your interests will change gears many times. Strong fundamentals, however, will serve you well, equipping you to learn in many different areas and careers.

Professor Blake and the team of former and current Caltech researchers.


Learn more about the Blake research group or their work.


By Anika Radiya-Dixit.

 

The Future of 3D Printing in Medicine

While 3D printers were once huge, expensive devices available only to the industrial elite, they have rapidly gained popularity with everyday consumers over the last decade. I enjoy printing a myriad of objects at the Duke Innovation Co-Lab, ranging from the Elder Wand to laptop stands.

One of the most important recent applications of 3D printing is in the medical industry. Customized implants and prosthetics, medical models and equipment, and synthetic skin are just a few of the prints that have begun to revolutionize health care.

3D printed prosthetic leg: “customizable, affordable and beautiful.”

Katie Albanese is a student in the Medical Physics Graduate Program who has been 3D printing breasts, abdominal skeletons, and lungs to test the coherent scatter x-ray imaging system she developed. Over spring break, I had the opportunity to talk with Katie about her work and experience. She uses the scatter x-ray imaging system to identify the different kinds of tissue, including tumors, within the breast. When she isn’t busy printing 3D human-sized breasts to determine if the system works within the confines of normal breast geometries, Katie enjoys tennis, running, napping and watching documentaries in her spare time. Below is the transcript of the interview.

How did you get interested in your project?

When I came to Duke in 2014, I had no idea what research lab I wanted to join within the Medical Physics program. After hearing a lot of research talks from faculty within my program, I ultimately chose my lab based on how well I got along with my current advisor, Anuj Kapadia in the Radiology department. He had an x-ray project in the works with the hope of using coherent scatter in tissue imaging, but the system had yet to be used on human-sized objects.

Could you tell me more about the scatter x-ray imaging system you’ve developed?

Normally, scatter in a medical image is actively removed because it doesn’t contribute to diagnostic image quality in conventional x-ray imaging. However, because of the unique inter-atomic spacing of every material – and Bragg’s law – every material has a unique scatter signature. So, using the scattered radiation from a sample (instead of the primary x-ray beam transmitted through it), we can identify the inter-atomic spacing of that material and trace it back, through a library of known inter-atomic spacings, to what the material actually is.

Bragg diffraction: Two beams with identical wavelength and phase approach a crystalline solid and are scattered off two different atoms within it.
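As a rough illustration of the physics Katie describes, Bragg’s law (nλ = 2d sin θ) can be inverted to recover an inter-atomic spacing from a measured scattering angle. The photon energy and angle below are made-up example values, not parameters of her system:

```python
import math

def bragg_spacing(wavelength_nm, bragg_angle_deg, order=1):
    """Inter-atomic spacing d from Bragg's law: n * lambda = 2 * d * sin(theta).

    theta is the Bragg angle, i.e. half of the total angle between the
    incident and diffracted beams.
    """
    theta = math.radians(bragg_angle_deg)
    return order * wavelength_nm / (2 * math.sin(theta))

# Example: a 30 keV x-ray photon has wavelength ~0.041 nm
# (lambda [nm] ≈ 1.2398 / E [keV]).
wavelength = 1.2398 / 30
d = bragg_spacing(wavelength, bragg_angle_deg=5.0)
print(f"d ≈ {d:.3f} nm")  # ≈ 0.237 nm for this example angle
```

Measuring the angle at which scatter peaks thus pins down d, which is then matched against the library of known spacings to identify the tissue type.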

How do you use this method with the 3D printed body parts?

One of the first things we did with the system was see if it could identify the different types of human tissue (e.g., fat, muscle, tumor). The breast has all of these tissues within a relatively small piece of anatomy, so that is where the focus began. We were able to show that the system could discern different tissue types within a small sample, such as a piece of excised human tissue. However, in order to use any system in vivo, which is ideally the aim, you have to determine whether or not it works on a normal human geometry. Another professor in our department built a dedicated breast CT system, so we used patient scans from that machine to model and print an accurate breast, both in anatomy and physical size.

 

What are the three biggest benefits of the x-ray imaging system for future research? 

Main breast phantom used and a mammogram of that phantom with tissue samples in it


Coherent scatter imaging is gaining momentum as an imaging field. At the SPIE Medical Imaging Conference a few weeks ago in San Diego, there was a dedicated section on the use of scatter imaging (and our group had 3 out of 5 talks on the topic!). One major benefit is that it is noninvasive. There is always a need for a noninvasive diagnostic step in the medical field. One use we foresee for this technology is as a replacement for certain biopsy procedures. For instance, if a radiologist finds something suspicious in a mammogram, a repeat scan of that area could be taken on a scatter imaging system to determine whether the suspicious lesion is malignant. It has the potential to reduce the number of unnecessary invasive (and painful!) biopsies done in cancer diagnosis.

Another thing we envision, and work has been done on this in our group, is using this imaging technique for intra-operative margin detection. When a patient gets a lumpectomy or mastectomy, the excised tissue is sent to pathology to make sure all the cancer has been removed from the patient. This is done by assessing whether or not there is cancer on the outer margins of the sample and can often take several days. If there is cancerous tissue in the margin, then it is likely that the extent of the cancer was not removed from the patient and a repeat surgery is required. Our imaging system has the potential to scan the entirety of the tissue sample while the patient is still open in the operating room. With further refinement of system parameters and scanning technique, this could be a reality and help to prevent additional surgeries and the complications that could arise from that.

What was the hardest or most frustrating part of working on the project? 

We use a coded aperture within the x-ray beam, which is basically a mask that allows us to have a depth-resolved image. The aperture tells us where the scatter came from so that we can reconstruct the image. The location of this aperture relative to the rest of the apparatus in our setup is carefully calibrated, down to the sub-millimeter range. If any part of the system is moved, everything must be recalibrated within the code, which is very time-consuming and frustrating. So basically every time we wanted to move something in our setup to make things better or more efficient, it was like redesigning the system from scratch.

 What is your workspace like?


Katie presented in a special session on breast imaging at the American Association of Physicists in Medicine conference this past summer in Anaheim, CA. From left to right: Robert Morris, also working in the lab; Katie; Dr. James Dobbins III, former program director and current Associate Vice Provost for Duke-Kunshan University; and Dr. Anuj Kapadia, Katie’s advisor and current director of graduate studies.

We have a working experimental lab within the hospital. It looks like any other physics lab you might come across – messy, full of wires and strange electronics. It is unique among labs in the Medical Physics department, where a lot of research focuses on image processing or radiation therapy treatment planning and can be done entirely on a computer. This lab is very hands-on in that we need to engineer the system ourselves. It is not uncommon for us to be using power tools or soldering or welding.

What do you like best about 3D printing? 

3D printing has become such a great community for creativity. One of my favorite websites now, called Thingiverse, is basically a haven for 3D printable files of anything you could ever dream of, with comments on the best printing settings, printers and inks. You can really print anything you want — I’ve printed everything from breasts, lungs and spines to small animal models and even Harry Potter memorabilia to add to my collection. If you can dream it, you can print it in three dimensions, and I think that’s amazing.

 

By Anika Radiya-Dixit

 

