Duke Research Blog

Following the people and events that make up the research community at Duke.

Category: Computers/Technology Page 2 of 13

How a Museum Became a Lab

Encountering and creating art may be among humankind’s most complex experiences. Art, not just visual art but also dance and song, requires the brain to make sense of an object or performance presented to it and then to associate it with memories, facts, and emotions.

A piece in Dario Robleto’s exhibit titled “The Heart’s Knowledge Will Decay” (2014)

In an ongoing experiment, Jose “Pepe” Contreras-Vidal and his team set up in artist Dario Robleto’s exhibit “The Boundary of Life Is Quietly Crossed” at the Menil Collection near downtown Houston. They then asked visitors if they were willing to have their trips through the museum and their brain activities recorded. Robleto’s work was displayed from August 16, 2014 to January 4, 2015. By engaging museum visitors, Contreras-Vidal and Robleto gathered brain activity data while also educating the public, combining research and outreach.

“We need to collect data in a more natural way, beyond the lab,” explained Contreras-Vidal, an engineering professor at the University of Houston, during a talk with Robleto sponsored by the Nasher Museum.

More than 3,000 people have participated in this experiment, and the number is growing.

To measure brain activity, the volunteers wear EEG caps, which record the electrical impulses the brain uses for communication. The caps are noninvasive, simply pulled onto the head like swim caps, and they allow museum-goers to move around freely so Contreras-Vidal can record their natural movements and interactions.
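A common way to summarize signals like the ones these caps record is to compute the power in standard frequency bands (alpha, beta, and so on). A minimal sketch of that idea, using a synthetic single-channel trace rather than any data from this study:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Fraction of the signal's power in the [lo, hi] Hz band, via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return power[mask].sum() / power.sum()

fs = 256                         # sampling rate in Hz (typical for EEG)
t = np.arange(0, 4, 1.0 / fs)    # four seconds of signal
# Synthetic trace: a 10 Hz "alpha" rhythm buried in noise
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)

alpha = band_power(eeg, fs, 8, 12)    # the alpha band dominates this trace
beta = band_power(eeg, fs, 13, 30)
```

Real EEG analysis involves many channels, artifact rejection, and far more sophisticated models, but the band-power summary above is the usual first step.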

By watching individuals interact with art, Contreras-Vidal and his team can find patterns between their experiences and their brain activity. They also ask the volunteers to reflect on their visit, adding a first-person perspective to the experiment. These three sources of data showed them what a young girl’s favorite painting was, how she moved and expressed her reaction to this painting, and how her brain activity reflected this opinion and reaction.

The volunteers can also watch the recordings of their brain signals, giving them an opportunity to ask questions and engage with the science community. For most participants, this is the first time they’ve seen recordings of their brain’s electrical signals. In one trip, these individuals learned about art, science, and how the two can interact. Throughout this entire process, every member of the audience forms a unique opinion and learns something about both the world and themselves as they interact with and make art.

Children with EEG caps explore art.

Contreras-Vidal is especially interested in the gestures people make when exposed to the various stimuli in a museum and hopes to apply this information to robotics. In the future, he wants someone with a robotic arm to not only be able to grab a cup but also to be able to caress it, grip it, or snatch it. For example, you probably can tell if your mom or your best friend is approaching you by their footsteps. Contreras-Vidal wants to restore this level of individuality to people who have prosthetics.

Contreras-Vidal thinks science can benefit art just as much as art can benefit science. Both he and Robleto hope that their research can reduce many artists’ distrust of science and help advance both fields through collaboration.

Post by Lydia Goff

Using Drones to Feed Billions

A drone flying over an agricultural field

Drones revolutionizing farming

As our population continues its rapid growth, food is becoming increasingly scarce. By the year 2050, we will need to double our current food production to feed the estimated 9.6 billion mouths that will inhabit Earth.

Maggie Monast

Thankfully, introducing drones and other high-tech equipment to farmers could be the solution to keeping our bellies full.

Last week, Dr. Ramon G. Leon of North Carolina State University and Maggie Monast of the Environmental Defense Fund spoke at Duke’s monthly Science & Society Dialogue, sharing their knowledge of what’s known as “precision agriculture.” At its core, precision agriculture is integrating technology with farming in order to maximize production.

It is easy to see that farming has already changed as a result of precision agriculture. The old family-run plot of land with animals and diverse crops has turned into large-scale, single-crop operations. This transition was made possible through the use of new technologies — tractors, irrigation, synthetic fertilizer, GMOs, pesticides — and is no doubt far more productive.

Dr. Ramon G. Leon

So while the concept of precision agriculture certainly isn’t new, in today’s context it incorporates some particularly advanced and unexpected tools meant to further optimize yield while also conserving resources.

Drones equipped with special cameras and sensors, for example, can be flown over thousands of acres to gather huge amounts of data. This data produces a map of things like pest damage, crop stress and yield. One image from a drone can easily help a farmer monitor what’s going on: where to cut back on resources, what needs more attention, and where to grow a certain type of crop. Some drones can even plant and water crops for you.

Blue River’s “See & Spray” focuses on cutting back herbicide use. Instead of spraying herbicide over an entire field and wasting most of it, this machine is trained to spray weeds directly, using 10% of the normal amount of herbicide.

Similarly, another machine called the Greenseeker can decide where, when and how much fertilizer should be applied based on the greenness of the crop. Fertilizing efficiently means saving money and emitting less ozone-depleting nitrous oxide.
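Sensors like these typically estimate “greenness” with the Normalized Difference Vegetation Index (NDVI), computed from how strongly the crop reflects near-infrared versus red light. A rough sketch of the logic, with illustrative thresholds and rates rather than any real device’s calibration:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance (0 to 1)."""
    return (nir - red) / (nir + red)

def fertilizer_rate(nir, red, full_rate_kg_ha=100.0):
    """Apply less nitrogen where the crop is already green and vigorous."""
    v = ndvi(nir, red)
    if v > 0.7:                       # dense, healthy canopy: cut way back
        return 0.25 * full_rate_kg_ha
    if v > 0.4:                       # moderate vigor: partial rate
        return 0.6 * full_rate_kg_ha
    return full_rate_kg_ha            # sparse or stressed crop: full rate

vigorous = fertilizer_rate(nir=0.55, red=0.08)   # NDVI ~ 0.75
stressed = fertilizer_rate(nir=0.20, red=0.15)   # NDVI ~ 0.14
```

Healthy vegetation reflects much more near-infrared than red light, so a high NDVI signals a crop that needs less added nitrogen.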

As you can see, fancy toys like these are extremely beneficial, and there are more out there. They enable farmers to make faster, better decisions and understand their land on an unprecedented level. At the same time, farmers can cut back on their resource usage. This should eventually result in a huge productivity boom while helping out the environment. Nice.

One problem preventing these technologies from really taking off is teaching farmers how to take advantage of them. As Dr. Leon put it, “we have all these toys, but nobody knows how to play with them.” However, this issue can be resolved with enough time. Some older farmers love messing around with the drones, and the next generations of farmers will have more exposure to this kind of technology growing up. Sooner or later, it may be no big deal to spot drones circling above fields of wheat as you road trip through the countryside.

A piece of farm equipment in a field

A Greenseeker mounted on a Boom Sprayer

Precision agriculture is fundamental to the modern agricultural revolution. It increases efficiency and reduces waste, and farming could even become a highly profitable business again as the cost for these technologies goes down. Is it the solution to our environmental and production problems? I guess we’ll know by 2050!

Post By Will Sheehan

What is a Model?

When you think of the word “model,” what do you think?

As an Economics major, the first thing that comes to my mind is a statistical model, modeling phenomena such as the effect of class size on student test scores. A car connoisseur’s mind might go straight to a model of their favorite vintage Aston Martin. Someone else studying fashion might even imagine a runway model. The point is, the term “model” is used incredibly frequently in popular discourse, but are we even sure what it implies?

Annabel Wharton, a professor of Art, Art History, and Visual Studies at Duke, gave a talk entitled “Defining Models” at the Visualization Friday Forum. The forum is a place “for faculty, staff and students from across the university (and beyond Duke) to share their research involving the development and/or application of visualization methodologies.” Wharton’s goal was to answer the complex question, “what is a model?”

Wharton began the talk by defining the term “model,” knowing that it can oftentimes be rather ambiguous. She observed that models are “a prolific class of things,” from architectural models, to video game models, to runway models. Some of these things seem unrelated, but Wharton, throughout her talk, pointed out the similarities between them and ultimately tied them together as all being models.

The word “model” itself has become a heavily loaded term. According to Wharton, the dictionary definition of “model” is 9 columns of text in length. Wharton then stressed that a model “is an autonomous agent.” This implies that models must be independent of the world and from theory, as well as being independent of their makers and consumers. For example, architecture, after it is built, becomes independent of its architect.

Next, Wharton outlined different ways to model. They include modeling iconically, in which the model resembles the actual thing, such as how the video game Assassin’s Creed models historical architecture. Another way to model is indexically, in which parts of the model are always ordered the same, such as the order of utensils at a traditional place setting. The final way to model is symbolically, in which a model symbolizes the mechanism of what it is modeling, such as in a mathematical equation.

Wharton then discussed the difference between a “strong model” and a “weak model.” A strong model is defined as a model that determines its weak object, such as an architect’s model or a runway model. On the other hand, a “weak model” is a copy that is always less than its archetype, such as a toy car. These different classifications include examples we are all likely aware of, but weren’t able to explicitly classify or differentiate until now.

Wharton finally transitioned to discussing one of her favorite models of all time: a model of the Hagia Sophia in Istanbul, a former Greek Orthodox church and later imperial mosque. She detailed how the model that provides the best sense of the building without being there is found in a surprising place: an Assassin’s Creed video game. This model not only closely resembles the actual Hagia Sophia, but is also experiential and immersive. Wharton joked that, even better, the model allows explorers to avoid tourists, unlike the actual Hagia Sophia.

Wharton described why the Assassin’s Creed model is a highly effective agent. Not only does the model closely resemble the actual architecture, but it also engages history by being surrounded by a historical fiction plot. Further, Wharton mentioned how the perceived freedom of the game is illusory, because the course of the game actually limits players’ autonomy with code and algorithms.

After Wharton’s talk, it’s clear that models are indeed “a prolific class of things.” My big takeaway is that so many things in our everyday lives are models, even if we don’t classify them as such. Duke’s East Campus is a model of the University of Virginia’s campus, subtraction is a model of the loss of an entity, and an academic class is a model of an actual phenomenon in the world. Leaving my first Visualization Friday Forum, I am even more positive that models are powerful, and stretch far beyond the statistical models in my Economics classes.


By Nina Cervantes

Game-Changing App Explores Conservation’s Future

In the first week of February, students, experts and conservationists from across the country were brought together for the second annual Duke Blueprint symposium. Focused on the theme of “Nature and Progress,” this conference hoped to harness the power of diversity and interdisciplinary collaboration to develop solutions to some of the world’s most pressing environmental challenges.

Scott Loarie spoke at Duke’s Mary Duke Biddle Trent Semans Center.

One of the most exciting parts of this symposium’s first night was without a doubt its all-star cast of keynote speakers. The experiences and advice each of these researchers had to offer were far too diverse for any single blog post to capture, but one particularly interesting presentation was that of National Geographic fellow Scott Loarie, co-director of the game-changing iNaturalist app.

iNat, as Loarie explained, is a collaborative citizen-scientist network with aspirations of developing a comprehensive mapping of all terrestrial life. Any time they go outside, users of the app can photograph and upload any wildlife they encounter. A network of scientists and experts from around the world then helps the users identify their finds, generating data points on an interactive, user-generated map of various species’ ranges.

Simple, right? Multiply that by 500,000 users worldwide, though, and it’s easy to see why researchers like Loarie are excited by the possibilities an app like this can offer. The software first went live in 2008, and since then its user base has roughly doubled each year. This has meant the generation of over 8 million data points of 150,000 different species, including one-third of all known vertebrate species and 40% of all known species of mammal. Every day, the app catalogues around 15 new species.
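At its core, the pipeline turns individual (species, location) sightings into range data. A toy sketch of that aggregation, with invented observations (the species and coordinates here are illustrative, not iNaturalist records):

```python
import math
from collections import defaultdict

# Hypothetical observations: (species, latitude, longitude)
observations = [
    ("Danaus plexippus", 36.0, -78.9),
    ("Danaus plexippus", 35.9, -79.0),
    ("Lontra canadensis", 36.1, -78.8),
]

# Bin each sighting into a 1-degree grid cell to approximate a range map
ranges = defaultdict(set)
for species, lat, lon in observations:
    ranges[species].add((math.floor(lat), math.floor(lon)))

species_count = len(ranges)                      # distinct species observed so far
monarch_cells = len(ranges["Danaus plexippus"])  # grid cells where monarchs were seen
```

Scale this up to millions of verified observations and the per-cell sets become the species range maps researchers can watch change in real time.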

“We’re slowly ticking away at the tree of life,” Loarie said.

Through iNaturalist, researchers are able to analyze and connect to data in ways never before thought possible. Changes to environments and species’ distributions can be observed or modeled in real time and with unheard-of collaborative opportunities.

To demonstrate the power of this connectedness, Loarie recalled one instance of a citizen scientist in Vietnam who took a picture of a snail. The species had never been captured, never been photographed, and hadn’t been observed in over a century. One of iNat’s users recognized it anyway. How? He’d seen it in one of the journals from Captain James Cook’s 18th-century voyage to circumnavigate the globe.

It’s this kind of interconnectivity that demonstrates not just the potential of apps like iNaturalist, but also the power of collaboration and the possibilities symposia like Duke Blueprint offer. Bridging gaps, tearing down boundaries, building up bonds—these are the heart of conservationism’s future. Nature and Progress, working together, pulling us forward into a brighter world.

Post by Daniel Egitto


“I Heart Tech Fair” Showcases Cutting-Edge VR and More

Duke’s tech game is stronger than you might think.

OIT held an “I Love Tech Fair” in the Technology Engagement Center / Co-Lab on Feb. 6 that was open to anyone to come check out things like 3D printers and augmented reality, while munching on some Chick-fil-A and cookies. There was a raffle for some sweet prizes, too.

I got a full demonstration of the 3D printing process—it’s so easy! It requires some really expensive software called Fusion, but thankfully Duke is awesome and students can get it for free. You can make some killer stuff with 3D printing; the technology is so advanced now. I’ve seen all kinds of things: models of my friend’s head, a doorstop made out of someone’s name … one guy even made a working ukulele, apparently!

One of the cooler things at the fair was the augmented reality books. These books look like ordinary picture books, but when you look at a page through your phone’s camera, the image suddenly comes to life in 3D with tons of detail and color, seemingly floating above the book! All you have to do is download an app and get the right book. Augmented reality is only getting better as time goes on and will soon be a primary tool in education and gaming, which is why the Duke Digital Initiative (DDI) wanted to show it off.

By far my favorite exhibit at the tech fair was virtual reality. Throw on a headset and some bulky goggles, grab a controller in each hand, and suddenly you’re in another world. The guy running the station, Mark McGill, had actually hand-built the machine that ran it all. Very impressive guy. He told me the machine is the most expensive and important part, since it accounts for how smooth the immersion is. The smoother the immersion, the more realistic the experience. And boy, was it smooth. A couple years ago I experienced virtual reality at my high school and thought it was cool (I did get a little nauseous), but after Mark set me up with the “HTC Vive” connected to his sophisticated machine, it blew me away (with no nausea, too).

I smiled the whole time playing “Super Hot,” where I killed incoming waves of people in slow motion with ninja stars, guns, and rocks. Mark had tons of other games too, all downloaded from Steam, for both entertainment and educational purposes. One called “Organon” lets you examine human anatomy inside and out, and you can even upload your own MRIs. There’s an unbelievable amount of possibilities VR offers. You could conquer your fear of public speaking by being simulated in front of a crowd, or realistically tour “the VR Museum of Fine Art.” Games like these just aren’t the same were you to play them on, say, an Xbox, because it simply doesn’t have that key factor of feeling like you’re there. In Fallout 4, your heart pounds fast in your chest as you blast away Feral Ghouls and Super Mutants right in front of you. But in reality, you’re just standing in a green room with stupid-looking goggles on. Awesome!

There’s another place on campus — the Bolt VR in Edens residence hall — that also has a cutting-edge VR setup going. Mark explained to me that Duke wants people to get experience with VR, as it will soon be a huge part of our lives. Having exposure now could give Duke graduates a very valuable head start in their career (while also making Duke look good). Plus, it’s nice to have on campus for offering students a fun break from all the hard work we put in.

If you’re bummed you missed out, or even if you don’t “love tech,” I recommend checking out the Tech Fair next time — February 13, from 6-8pm. See you there.

Post By Will Sheehan

Researchers Get Superman’s X-ray Vision

X-ray vision just got cooler. A technique developed in recent years boosts researchers’ ability to see through the body and capture high-resolution images of animals inside and out.

This special type of 3-D scanning reveals not only bones, teeth and other hard tissues, but also muscles, blood vessels and other soft structures that are difficult to see using conventional X-ray techniques.

Researchers have been using the method, called diceCT, to visualize the internal anatomy of dozens of different species at Duke’s Shared Materials Instrumentation Facility (SMIF).

There, the specimens are stained with an iodine solution that helps soft tissues absorb X-rays, then placed in a micro-CT scanner, which takes thousands of X-ray images from different angles while the specimen spins around. A computer then stitches the scans into digital cross sections and stacks them, like slices of bread, to create a virtual 3-D model that can be rotated, dissected and measured as if by hand.
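The stacking step can be sketched in a few lines, assuming the scanner’s software hands you each reconstructed cross section as a 2-D array (the sizes here are made up):

```python
import numpy as np

# Pretend the reconstruction produced one 2-D cross section per step;
# here we fake a stack of 128 slices, each 64x64 pixels.
rng = np.random.default_rng(0)
slices = [rng.random((64, 64)) for _ in range(128)]

# Stack the slices like bread into a 3-D volume: axes are (depth, height, width)
volume = np.stack(slices, axis=0)

# "Digital dissection": re-slice the volume along any plane, no scalpel required
sagittal = volume[:, :, 32]    # a cut orthogonal to the original slice plane
```

Once the data lives in a single 3-D array, rotating, cutting and measuring the specimen are just array operations.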

Here’s a look at some of the images they’ve taken:

See-through shrimp

If you get flushed after a workout, you’re not alone — the Caribbean anemone shrimp does too.

Recent Duke Ph.D. Laura Bagge was scuba diving off the coast of Belize when she noticed the transparent shrimp Ancylomenes pedersoni turn from clear to cloudy after rapidly flipping its tail.

To find out why exercise changes the shrimp’s complexion, Bagge, Duke professor Sönke Johnsen, and colleagues compared the shrimp’s internal anatomy before and after physical exertion using diceCT.

In the shrimp cross sections in this video, blood vessels are colored blue-green, and muscle is orange-red. The researchers found that more blood flowed to the tail after exercise, presumably to deliver more oxygen-rich blood to working muscles. The increased blood flow between muscle fibers causes light to scatter or bounce in different directions, which is why the normally see-through shrimp lose their transparency.

Peer inside the leg of a mouse

Duke cardiologist Christopher Kontos, M.D., and MD/PhD student Hasan Abbas have been using the technique to visualize the inside of a mouse’s leg.

The researchers hope the images will shed light on changes in blood vessels in people, particularly those with peripheral artery disease, in which plaque buildup in the arteries reduces blood flow to the extremities such as the legs and feet.

The micro-CT scanner at Duke’s Shared Materials Instrumentation Facility made it possible for Abbas and Kontos to see structures as small as 13 microns, or a fraction of the width of a human hair, including muscle fibers and even small arteries and veins in 3-D.

Take a tour through a tree shrew

DiceCT imaging allows Heather Kristjanson at the Johns Hopkins School of Medicine to digitally dissect the chewing muscles of animals such as this tree shrew, a small mammal from Southeast Asia that looks like a cross between a mouse and a squirrel. By virtually zooming in and measuring muscle volume and the length of muscle fibers, she hopes to see how strong they are. Studying such clues in modern mammals helps Kristjanson and colleagues reconstruct similar features in the earliest primates that lived millions of years ago.

Try it for yourself

Students and instructors who are interested in trying the technique in their research are eligible to apply for vouchers to cover SMIF fees. People at Duke University and elsewhere are encouraged to apply. For more information visit https://smif.pratt.duke.edu/Funding_Opportunities, or contact Dr. Mark Walters, Director of SMIF, via email at mark.walters@duke.edu.

Located on Duke’s West Campus in the Fitzpatrick Building, the SMIF is a shared use facility available to Duke researchers and educators as well as external users from other universities, government laboratories or industry through a partnership called the Research Triangle Nanotechnology Network. For more info visit http://smif.pratt.duke.edu/.

Post by Robin Smith, News and Communications

Farewell, Electrons: Future Electronics May Ride on New Three-in-One Particle

“Trion” may sound like the name of one of the theoretical particles blamed for mucking up operations aboard the Starship Enterprise.

But believe it or not, trions are real — and they may soon play a key role in electronic devices. Duke researchers have for the first time pinned down some of the behaviors of these one-of-a-kind particles, a first step towards putting them to work in electronics.

A carbon nanotube, shaped like a rod, is wrapped in a helical coating of polymer

Three-in-one particles called trions — carrying charge, energy and spin — zoom through special polymer-wrapped carbon nanotubes at room temperature. Credit: Yusong Bai.

Trions are what scientists call “quasiparticles,” bundles of energy, electric charge and spin that zoom around inside semiconductors.

“Trions display unique properties that you won’t be able to find in conventional particles like electrons, holes (positive charges) and excitons (electron-hole pairs that are formed when light interacts with certain materials),” said Yusong Bai, a postdoctoral scholar in the chemistry department at Duke. “Because of their unique properties, trions could be used in new electronics such as photovoltaics, photodetectors, or in spintronics.”

Usually these properties – energy, charge and spin – are carried by separate particles. For example, excitons carry the light energy that powers solar cells, and electrons or holes carry the electric charge that drives electronic devices. But trions are essentially three-in-one particles, combining these elements together into a single entity – hence the “tri” in trion.

A diagram of how a trion is formed in carbon nanotubes.

A trion is born when a particle called a polaron (top) marries an exciton (middle). Credit: Yusong Bai.

“A trion is this hybrid that involves a charge marrying an exciton to become a uniquely distinct particle,” said Michael Therien, the William R. Kenan, Jr. Professor of Chemistry at Duke. “And the reason why people are excited about trions is because they are a new way to manipulate spin, charge, and the energy of absorbed light, all simultaneously.”

Until recently, scientists hadn’t given trions much attention because they could only be found in semiconductors at extremely low temperatures – around 2 Kelvin, or -271 degrees Celsius. A few years ago, researchers observed trions in carbon nanotubes at room temperature, opening up the potential to use them in real electronic devices.

Bai used a laser probing technique to study how trions behave in carefully engineered and highly uniform carbon nanotubes. He examined basic properties including how they are formed, how fast they move and how long they live.

He was surprised to find that under certain conditions, these unusual particles were actually quite easy to create and control.

“We found these particles are very stable in materials like carbon nanotubes, which can be used in a new generation of electronics,” Bai said. “This study is the first step in understanding how we might take advantage of their unique properties.”

The team published their results Jan. 8 in the Proceedings of the National Academy of Sciences.

“Dynamics of charged excitons in electronically and morphologically homogeneous single-walled carbon nanotubes,” Yusong Bai, Jean-Hubert Olivier, George Bullard, Chaoren Liu and Michael J. Therien. Proceedings of the National Academy of Sciences, Jan. 8, 2018 (online) DOI: 10.1073/pnas.1712971115

Post by Kara Manke

David Carlson: Engineering and Machine Learning for Better Medicine

How can we even begin to understand the human brain?  Can we predict the way people will respond to stress by looking at their brains?  Is it possible, even, to predict depression based on observations of the brain?

These answers will have to come from data sets too big for human minds to work with alone. We need mechanical minds for this task.

Machine learning algorithms can analyze this data much faster than a human could, finding patterns in the data that could take a team of researchers far longer to discover. It’s just like how we can travel so much faster by car or by plane than we could ever walk without the help of technology.

David Carlson in his Duke office.

I had the opportunity to speak to David Carlson, an assistant professor of Civil and Environmental Engineering with a dual appointment at the Department of Biostatistics and Bioinformatics at Duke University.  Through machine learning algorithms, Carlson is connecting researchers across campus, from doctors to statisticians to engineers, creating a truly interdisciplinary research environment around these tools.

Carlson specializes in explainable machine learning: algorithms with inner workings comprehensible by humans. Most deep machine learning today exists in a “black box” — the decisions made by the algorithm are hidden behind layers of reasoning that give it incredible predictive power but make it hard for researchers to understand the “why” and the “how” behind the results. The transparent algorithms used by Carlson offer a way to capture some of the predictive power of machine learning without sacrificing our understanding of what they’re doing.
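The contrast is easiest to see with one of the simplest interpretable models: a logistic regression whose learned weights can be read off directly, unlike a deep network’s millions of entangled parameters. A toy sketch on synthetic data, not Carlson’s actual models:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "neural features": feature 0 genuinely predicts the outcome, feature 1 is noise
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(float)

# Logistic regression trained by plain gradient descent -- every weight is inspectable
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= (X.T @ (p - y)) / len(y)            # gradient step on the log loss
    b -= (p - y).mean()

accuracy = (((X @ w + b) > 0) == (y == 1)).mean()
# The "explanation" is the weights themselves: w[0] ends up large, w[1] near zero
```

After training, a researcher can point at w[0] and say which feature drove the prediction; a black-box model offers no such handle.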

In his most recent research, Carlson collaborated with Dr. Kafui Dzirasa, associate professor of psychiatry and behavioral sciences and assistant professor in neurobiology and neurosurgery, on the effects of stress on the brains of mice, trying to understand the underlying causes of depression.

“What’s happening in neuroscience is the amount of data we’re sorting through is growing rapidly, and it’s really beginning to outstrip our ability to use classical tools,” Carlson says. “A lot of these classical tools made a lot more sense when you had these small data sets, but now we’re talking about this canonically overused word, Big Data.”

With machine learning algorithms, it’s easier than ever to find trends in these huge sets of data.  In his most recent study, Carlson and his fellow researchers could find patterns tied to stress and even to how susceptible a mouse was to depression. By continuing this project and looking at new ways to investigate the brain and check their results, Carlson hopes to help improve treatments for depression in the future.

In addition to his ongoing research into depression, Carlson has brought machine learning to a number of other collaborations with the medical center, including research into autism and patient care for diabetes. When there’s too much data for the old ways of data analysis, machine learning can step in, and Carlson sees potential in harnessing this growing technology to improve health and care in the medical field.

“What’s incredibly exciting is the opportunities at the intersection of engineering and medicine,” he said. “I think there’s a lot of opportunities to combine what’s happening in the engineering school and also what’s happening at the medical center to try to create ways of better treating people and coming up with better ways for making people healthier.”

Guest Post by Thomas Yang, a junior at North Carolina School of Math and Science.

Martin Brooke: Mentoring Students Toward an X Prize for Ocean Robotics

We know less about the ocean floor than the surface of the moon. Because the ocean is one of the most unexplored areas of the world, multiple organizations have begun to incentivize ingenuity toward exploring it. Among these organizations are the Gates Foundation, the National Academy of Sciences, and X Prize.

Martin Brooke, second from left, and the student team with their giant drone.

Martin Brooke, an Associate Professor of Electrical and Computer Engineering at Duke, is presently leading a group of students who are working on mapping the ocean floor in an efficient way for the X Prize challenge.

Brooke said “open-ended problems where you don’t know what to do” inspire him to do research about ocean engineering and design.

Martin Brooke

Collaborating with professors at the Duke Marine Lab who “strap marine sensors on whales” was a simple lead-in to starting a class about ocean engineering a few years ago. His teaching philosophy includes presenting students with problems that make them think, “we want to do this, but we have no idea how.”

Before working on a drone that drops sensor pods down into the ocean to map the ocean floor, Brooke and his students built a sensor that could be in the ocean for a month or more and take pH readings every five seconds for a previous X Prize challenge.

Addressing the issues that many fisheries face, he told me he met an oyster farmer in Seattle who wished there were pH sensors in the bay, because tides sometimes bring waves of low-pH water into the sound and “kill all of the oysters without warning.” Citing climate change as the cause of this drop in pH, Brooke explained how increased carbon dioxide in the air dissolves into the water and raises its acidity. As he emphasized, “there’s not enough data on it”; knowing more about our oceans is clearly beneficial both economically and ecologically.

Guest Post by Sofia Sanchez, a senior at North Carolina School of Math and Science

Generating Winning Sports Headlines

What if there were a scientific way to come up with the most interesting sports headlines? With the development of computational journalism, this could be possible very soon.

Dr. Jun Yang is a database and data-intensive computing researcher and professor of Computer Science at Duke. One of his latest projects is computational journalism, in which he and other computer science researchers are considering how they can contribute to journalism with new technological advances and the ever-increasing availability of data.

An exciting and very relevant part of his project is based on raw data from Duke men’s basketball games. With computational journalism, Yang and his team of researchers have been able to generate diverse player or team factoids using the statistics of the games.

Grayson Allen headed for the hoop.

An example factoid might be that, in the first 8 games of this season, Duke has won 100% of its games when Grayson Allen has scored over 20 points. While this fact is obvious, since Duke is undefeated so far this season, Yang’s programs will also be able to generate very obscure factoids about each and every player that could lead to unique and unprecedented headlines.

While these statistics relating player and team success can only imply correlation, and not necessarily causation, they definitely have potential to be eye-catching sports headlines.

Extracting factoids hasn’t been a particularly challenging part of the project, but developing heuristics to choose which factoids are the most relevant and usable has been more difficult.

So far, developing these heuristics has involved scoring criteria based on what the researcher intuitively finds impressive. Another possible measure of a factoid’s strength is ranking the types of headlines that are most viewed. Using this method, heuristics could, in theory, be based more on past successes and less on one researcher’s intuition.

Something else to consider is which types of factoids are more powerful. For example, what’s better: a bolder claim in a shorter period of time, or a less bold claim but over many games or even seasons?
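A stripped-down sketch of the generate-then-score idea, with invented stat lines and a made-up heuristic rather than Yang’s actual criteria:

```python
# Hypothetical per-game stat lines: (player, points, team_won)
games = [
    ("Allen", 22, True), ("Allen", 25, True), ("Allen", 15, True),
    ("Allen", 8, True),  ("Allen", 21, True),
]

def factoid(player, threshold, games):
    """Build: 'Duke won X% of games when <player> scored over <threshold> points.'"""
    outcomes = [won for p, pts, won in games if p == player and pts > threshold]
    if not outcomes:
        return None
    win_rate = 100.0 * sum(outcomes) / len(outcomes)
    return {"text": f"Duke won {win_rate:.0f}% of games when {player} "
                    f"scored over {threshold} points",
            "win_rate": win_rate,
            "support": len(outcomes)}

def score(f):
    # Heuristic: bolder claims (win rates far from 50%) backed by more games rank higher
    boldness = abs(f["win_rate"] - 50.0) / 50.0
    return boldness * f["support"]

candidates = [f for t in (10, 15, 20) if (f := factoid("Allen", t, games)) is not None]
best = max(candidates, key=score)   # here the factoid with the most supporting games wins
```

The score function makes the boldness-versus-support tradeoff explicit: a claim backed by one game can outrank a duller claim backed by ten only if it is dramatically bolder.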

The goal of this project is to continue to analyze data from the Duke men’s basketball team, generate interesting factoids, and post them to a public website within about 10-15 minutes of each game’s end.

Looking forward, computational journalism has huge potential for Duke men’s basketball, sports in general, and even for generating other news factoids. Even further, computational journalism and its scientific methodology might lead to the ability to quickly fact-check political claims.

Right now, however, it is fascinating to know that computer science has the potential to touch our lives in some pretty unexpected ways. As our current men’s basketball beginning-of-season winning streak continues, who knows what unprecedented factoids Jun Yang and his team are coming up with.

By Nina Cervantes

