Duke Research Blog

Following the people and events that make up the research community at Duke.

Category: Computers/Technology

Traveling Back in Time Through Smart Archaeology

The British explorer George Dennis once wrote, “Vulci is a city whose very name … was scarcely remembered, but which now, for the enormous treasures of antiquity it has yielded, is exalted above every other city of the ancient world.” He’s correct in assuming that most people do not know where or what Vulci is, but for explorers and historians – including Duke’s Bass Connections team Smart Archaeology – Vulci is a site of enormous potential.

Vulci, Italy, was an ancient Etruscan city, the remains of which are situated about an hour outside of Rome. The Etruscan civilization originated roughly in the area of Tuscany, western Umbria, and northern Lazio, extending into the Po Valley (the current Emilia-Romagna region, south-eastern Lombardy, and southern Veneto) and some areas of Campania. Etruscan culture is thought to have emerged in Italy around 900 B.C.E. and endured through the Roman-Etruscan Wars, coming to an end with the establishment of the Roman Empire.

As a dig site, Vulci is extremely valuable for the information it can give us about the Etruscan and Roman civilizations – especially since the ruins found at Vulci date back beyond the 8th century B.C.E. On November 20th, Professor Maurizio Forte, of the Art, Art History and Visual Studies department at Duke as well as Duke’s Dig@Lab, led a talk and interactive session, summarizing the Smart Archaeology team’s experience this past summer in Italy and letting audience members learn about and try the various technologies the team used. With Duke being the first university granted an excavation permit for Vulci in the last 60 years, the Bass Connections team set out to explore the region, with their primary concerns being data collection, data interpretation, and the use of virtual technology.

Trying out some of the team’s technologies on November 20th (picture credits Sarah Dwyer)

The team, led by Professor Maurizio Forte, Professor Michael Zavlanos, David Zalinsky, and Todd Barrett, sought to be as diverse as possible. With 32 participants ranging from undergraduate and graduate students to professionals, as well as Italian faculty and student members, the team flew to Italy at the beginning of the summer with a research model focused on an educational approach of practice and experimentation for everyone involved. With a naturally interdisciplinary focus ranging from classical studies to mechanical engineering, the team was divided into groups focusing on excavation in Vulci, remote sensing, haptics, virtual reality, robotics, and digital media.

Professor Maurizio Forte

So what did the team accomplish? Well, technology was a huge driving force in most of the data collected. For example, with the use of drones, photos taken from an aerial view were patched together to create bigger layout pictures of the area that would have been the city of Vulci. The computer graphics created by the drone pictures were also used to create a video and aided in the process of creating a virtual reality simulation of Vulci. VR can be an important documentation tool, especially in a field as ever-changing as archaeology. And as Professor Forte remarked, it’s possible for anyone to see exactly what the researchers saw over the summer – and “if you’re afraid of the darkness of a cistern, you can go through virtual reality instead.” 
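As a rough illustration of that patching step, here is a minimal Python sketch using OpenCV’s off-the-shelf image stitcher. The file names are hypothetical and the team’s actual photogrammetry pipeline was far more sophisticated; this only shows the basic idea of merging overlapping aerial photos into one mosaic.

import cv2

# Load a handful of overlapping aerial photos (hypothetical file names).
images = [cv2.imread(f"drone_{i:03d}.jpg") for i in range(3)]

# SCANS mode suits flat, map-like scenes such as aerial surveys.
stitcher = cv2.Stitcher.create(cv2.Stitcher_SCANS)
status, mosaic = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("vulci_mosaic.jpg", mosaic)  # one large layout picture
else:
    print("Stitching failed with status", status)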

An example of one of the maps created by the team
The team at work in Vulci

In addition, the team used sensor technology to get around the labor and time it would take to excavate the entire site – which by the team’s estimate would take 300 years! Sensors in the soil, in particular, can detect the remnants of buildings and archaeological features up to five meters below ground, allowing researchers to imagine what monuments and buildings might have looked like.

One of the biggest takeaways from the data the team collected – based on remnants of the city’s infrastructure and layout – was the Etruscans’ mastery of water, developing techniques that the Romans also used. More work was also done on classifying Etruscan pottery, tools, and materials, building on earlier work by previous researchers. Discovering decorative and religious artifacts was also impactful for the team because, as Professor Forte emphasized, these objects are the “primary documentation of history.”

But the discoveries won’t stop there. The Smart Archaeology team is launching their 2019-2020 Bass Connections project on a second phase of their research – specifically focusing on identifying new archaeological sites, analyzing the landscape’s transformation, and testing new methods of data capture, simulation, and visualization. With two more years of work on site, the team is hopeful that the research will be able to explain in even greater depth how the people of Vulci lived, which will certainly help to shine a light on the significance of the Etruscan civilization in global history.

By Meghna Datta

Predicting sleep quality with the brain

Modeling functional connectivity allows researchers to compare brain activation to behavioral outcomes. Image: Chu, Parhi, & Lenglet, Nature, 2018.

For undergraduates, sleep can be as elusive as it is important. For undergraduate researcher Katie Freedy, Trinity ’20, understanding sleep is even more important because she works in Ahmad Hariri’s Lab of Neurogenetics.

After taking a psychopharmacology class while studying abroad in Copenhagen, Freedy became interested in the default mode network, a brain network implicated in autobiographical thought, self-representation, and depression. Upon returning to her lab at Duke, Freedy wanted to explore how brain networks like the default mode network interact with sleep and depression.

Freedy’s project uses data from the Duke Neurogenetics Study, a study that collected data on brain scans, anxiety, depression, and sleep in 1,300 Duke undergraduates. While previous research has found connections between brain connectivity, sleep, and depression, Freedy was interested in a novel approach.

Connectome predictive modeling (CPM) is a statistical technique that uses fMRI data to create models for connections within the brain. In the case of Freedy’s project, the model takes in data on resting state and task-based scans to model intrinsic functional connectivity. Functional connectivity is mapped as a relationship between the activation of two different parts of the brain during a specific task. By looking at both resting state and task-based scans, Freedy’s models can create a broader picture of connectivity.

To build the best model, a leave-one-out procedure is repeated for each subject: a single subject’s data is left out, the model is constructed from the remaining subjects, and its validity is tested by taking the brain scan data of the left-out subject and assessing how well the model predicts that subject’s behavioral data. Repeating this for every subject trains the model to make the most generally applicable yet accurate predictions of behavioral data based on brain connectivity.
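For readers curious about the mechanics, a minimal sketch of that leave-one-subject-out loop might look like the Python below. The array names are hypothetical, and real connectome predictive modeling also selects predictive connections within each training fold, a step this sketch omits.

import numpy as np
from sklearn.linear_model import LinearRegression

def leave_one_out_predictions(connectivity, behavior):
    """For each subject, fit on everyone else, then predict the
    held-out subject's behavioral score from their connectivity."""
    n = len(behavior)
    preds = np.empty(n)
    for i in range(n):
        train = np.arange(n) != i  # boolean mask leaving subject i out
        model = LinearRegression().fit(connectivity[train], behavior[train])
        preds[i] = model.predict(connectivity[i:i + 1])[0]
    return preds

# Toy usage: 100 subjects, 50 connectivity features, random data.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 50)), rng.normal(size=100)
print(np.corrcoef(leave_one_out_predictions(X, y), y)[0, 1])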

Freedy presented the preliminary results from her model this past summer at the BioCORE Symposium as a Summer Neuroscience Program fellow. The preliminary results showed that patterns of brain connectivity were able to predict overall sleep quality. With additional analyses, Freedy is eager to explore which specific patterns of connectivity can predict sleep quality, and how this is mediated by depression.

Freedy presented the preliminary results of her project at Duke’s BioCORE Symposium.

Understanding the links between brain connectivity, sleep, and depression is of particular importance to often sleep-deprived undergraduates.

“Using data from Duke students makes it directly related to our lives and important to those around me,” Freedy says. “With the field of neuroscience, there is so much we still don’t know, so any effort in neuroscience to directly tease out what is happening is important.”

Post by undergraduate blogger Sarah Haurin

These Microbes ‘Eat’ Electrons for Energy

The human body is populated by a greater number of microbes than its own cells. These microbes survive using metabolic pathways that differ drastically from our own.

Arpita Bose’s research explores the metabolism of microorganisms.

Arpita Bose, PhD, of Washington University in St. Louis, is interested in understanding the metabolism of these ubiquitous microorganisms, and putting that knowledge to use to address the energy crisis and other applications.

Photoferrotrophic organisms use light and electrons from the environment as an energy source

One of the biggest research questions for her lab involves understanding photoferrotrophy, or using light and electrons from an external source for carbon fixation. Much of the energy humans consume comes from carbon fixation in phototrophic organisms like plants. Carbon fixation involves using energy from light to fuel the production of sugars that we then consume for energy.

Before Bose began her research, scientists had found that some microbes interact with electricity in their environments, even donating electrons to the environment. Bose hypothesized that the reverse could also be true and sought to show that some organisms can also accept electrons from metal oxides in their environments. Using a bacterial strain called Rhodopseudomonas palustris TIE-1 (TIE-1), Bose identified this process, called extracellular electron uptake (EEU).

After showing that some microorganisms can take in electrons from their surroundings and identifying a collection of genes that code for this ability, Bose found that this ability was dependent on whether a light source was also present. Without the presence of light, these organisms lost 70% of their ability to take in electrons.   

Because the organisms Bose was studying can rely on light as a source of energy, she hypothesized that this dependence on light for electron uptake could mean the electrons play a role in photosynthesis. With subsequent studies, Bose’s team found that the electrons the microorganisms were taking in were entering their photosystem.

To show that the electrons were playing a role in carbon fixation, Bose and her team looked at the activity of an enzyme called RuBisCo, which plays an integral role in converting carbon dioxide into sugars that can be broken down for energy. They found that RuBisCo was most strongly expressed and active when EEU was occurring, and that, without RuBisCo present, these organisms lost their ability to take in electrons. This finding suggests that organisms like TIE-1 are able to take in electrons from their environment and use them in conjunction with light energy to synthesize molecules for energy sources.  

In addition to broadening our understanding of the great diversity in metabolisms, Bose’s research has profound implications in sustainability. These microbes have the potential to play an integral role in clean energy generation.

Post by undergraduate blogger Sarah Haurin

The Making of queerXscape

Sinan Goknur

On September 10th, queerXscape, a new exhibit in The Murthy Agora Studio at the Rubenstein Arts Center, opened. Sinan Goknur and Max Symuleski, PhD candidates in the Computational Media, Arts & Cultures Program, created the installation with digital prints of collages, cardboard structures, videos, and audio. Max explains that this multi-media approach transforms the studio from a room into a landscape, providing an immersive experience.

Max Symuleski

The two artists combined their experiences with changing urban environments when planning this exhibit. Sinan reflects on his time in Turkey, where he saw constant construction and destruction resulting in a quickly shifting landscape. While processing all of this displacement, he began taking pictures as “a way of coping with the world.” These pictures later became layers in the collages he designed with Max.

Meanwhile, Max drew on their time in New York City, where they had to move from neighborhood to neighborhood as gentrification raised prices. Approaching this project, they wondered, “What does queer mean in this changing landscape? What does it mean to queer something? Where are our spaces? Where do we need them to survive?” They had previously worked on smaller collages made from magazines, which inspired the pair of artists to try larger-scale works.

Both Sinan and Max have watched the exploding growth in Durham while studying at Duke. From this perspective, they were able to tackle this project while living in a city that exemplifies the themes they explore in their work.

One of the cardboard structures

Using a video that Sinan had made as inspiration for the exhibit, they began assembling four large digital collages. To collaborate on the pieces, they would send the documents back and forth while making edits. When it came time to assemble their work, they had to print the collages in large strips and then carefully glue them together. Through this process, they learned the importance of researching materials and experimented with the best way to smoothly place the strips together. While putting together mound-like cardboard structures of building, tire, and ice cube cut-outs, Max realized that “we’re now doing construction.” Consulting with friends who do small construction and maintenance jobs for a living also helped them assemble and install the large-scale murals in the space. The installation process was, for them, yet another example of the tension between the various drives for, and scales of, construction taking place around them.

While collage and video may seem like an odd combination, they work together in this exhibit to surround the viewer and appeal to both the eyes and ears. Both artists share a background in queer performance and are drawn to the rough aesthetics of photo collage and paper. The show brings together aspects of their experience in drag performance, collage, video, photography, and paper sculpture in a balanced collaboration. Their work demonstrates the value of partnership that crosses genres.

Poster for the exhibit

When concluding their discussion of changing spaces, Max mentioned that, “our sense of resilience is tied to the domains where we could be queer.” Finding an environment where you belong becomes even more difficult when your landscape resembles shifting sand. Max and Sinan give a glimpse into the many effects of gentrification, destruction, and growth within the urban context. 

The exhibit will be open until October 6. If you want to see the results of weeks of collaging, printing, cutting, and pasting together photography accumulated from near and far, stop by the Ruby.

Post by Lydia Goff

Big SMILES All Around for Polymer Chemists at Duke, MIT and Northwestern

Science is increasingly asking artificial intelligence machines to help us search and interpret huge collections of data, and it’s making a difference.

But unfortunately, polymer chemistry — the study of large, complex molecules — has been hampered in this effort because it lacks a crisp, coherent language to describe molecules that are not tidy and orderly.

Think nylon. Teflon. Silicone. Polyester. These and other polymers are what chemists call “stochastic”: they’re assembled from predictable building blocks and follow a finite set of attachment rules, but can be very different in the details from one strand to the next, even within the same polymer formulation.

Plastics, love ’em or hate ’em, they’re here to stay.
Photo: Mathias Cramer/temporealfoto.com

Chemistry’s old stick-and-ball models and shorthand chemical notations aren’t adequate for a long molecule that is best described as a series of probabilities that one kind of piece might be in a given spot, or not.

Polymer chemists searching for new materials for medical treatments or plastics that won’t become an environmental burden have been somewhat hampered by using a written language that looks like long strings of consonants, equal signs, brackets, carets and parentheses. It’s also somewhat equivocal, so the polymer Nylon-6-6 ends up written like this: 

{<C(=O)CCCCC(=O)<,>NCCCCCCN>}

Or this,

{<C(=O)CCCCC(=O)NCCCCCCN>}

And when we get to something called ‘concatenation syntax,’ matters only get worse.  

Stephen Craig, the William T. Miller Professor of Chemistry, has been a polymer chemist for almost two decades and he says the notation language above has some utility for polymers. But Craig, who now heads the National Science Foundation’s Center for the Chemistry of Molecularly Optimized Networks (MONET), and his MONET colleagues thought they could do better.

Stephen Craig

“Once you have that insight about how a polymer is grown, you need to define some symbols that say there’s a probability of this kind of structure occurring here, or some other structure occurring at that spot,” Craig says. “And then it’s reducing that to practice and sort of defining a set of symbols.”

Now he and his MONET colleagues at MIT and Northwestern University have done just that, resulting in a new language – BigSMILES – that’s an adaptation of the existing language called SMILES (simplified molecular-input line-entry system). They think it can reduce the hugely combinatorial problem of describing polymers down to something even a dumb computer can understand.

And that, Craig says, should enable computers to do all the stuff they’re good at – searching huge datasets for patterns and finding needles in haystacks.

The initial heavy lifting was done by MONET members Prof. Brad Olsen and his co-worker Tzyy-Shyang Lin at MIT, who conceived of the idea and developed the set of symbols and the syntax together. Now polymers, their constituent building blocks, and their variety of linkages might be described like this:

Examples of bigSMILES symbols from the recent paper

It’s certainly not the best reading material for us and it would be terribly difficult to read aloud, but it becomes child’s play for a computer.
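To see why, consider a toy Python tokenizer that pulls the repeat units out of the Nylon-6-6 string shown earlier. The regex approach here is only an illustration of the idea, not the reference parser published with BigSMILES.

import re

NYLON_66 = "{<C(=O)CCCCC(=O)<,>NCCCCCCN>}"

def stochastic_objects(bigsmiles):
    """Return the comma-separated repeat units of each {...} block,
    where '<' and '>' are the directional bonding descriptors."""
    return [block.split(",") for block in re.findall(r"\{(.*?)\}", bigsmiles)]

print(stochastic_objects(NYLON_66))
# [['<C(=O)CCCCC(=O)<', '>NCCCCCCN>']]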

Members of MONET spent a couple of weeks trying to stump the new language with the weirdest polymers they could imagine, which turned up the need for a few more parts to the ‘alphabet.’ But by and large, it holds up, Craig says. They also threw a huge database of polymers at it and it translated them with ease.

“One of the things I’m excited about is how the data entry might eventually be tied directly to the synthetic methods used to make a particular polymer,” Craig says. “There’s an opportunity to actually capture and process more information about the molecules than is typically available from standard characterizations. If that can be done, it will enable all sorts of discoveries.”

BigSMILES was introduced to the polymer community by an article in ACS Central Science last week, and the MONET team is eager to see the response.

“Can other people use it and does it work for everything?” Craig asks. “Because polymer structure space is effectively infinite.” Which is just the kind of thing you need Big Data and machine learning to address. “This is an area where the intersection of chemistry and data science can have a huge impact,” Craig says.

Leaving the Louvre: Duke Team Shows How to Get out Fast

Students finish among top 1% in 100-hour math modeling contest against 11,000 teams worldwide


Imagine trying to move the 26,000 tourists who visit the Louvre each day through the maze of galleries and out of harm’s way. One Duke team spent 100 straight hours doing just that, and took home a prize.

If you’ve ever visited the Louvre in Paris, you may have been too focused on snapping a selfie in front of the Mona Lisa to think about the nearest exit.

But one Duke team knows how to get out fast when it matters most, thanks to a computer simulation they developed for the Interdisciplinary Contest in Modeling, an international contest in which thousands of student teams participate each year.

Their results, published in the Journal of Undergraduate Mathematics and Its Applications, placed them in the top 1% against more than 11,000 teams worldwide.

With a record 10.2 million visitors flooding through its doors last year, the Louvre is one of the most popular museums in the world. Just walking through a single wing in one of its five floors can mean schlepping the equivalent of four and a half football fields.

For the contest, Duke undergraduates Vinit Ranjan, Junmo Ryang and Albert Xue had four days to figure out how long it would take to clear out the whole building if the museum really had to evacuate — if the fire alarm went off, for instance, or a bomb threat or a terror attack sent people pouring out of the building.

It might sound like a grim premise. But with a rise in terrorist activity in Europe in recent years, facilities are trying to plan ahead to get people to safety.

The team used a computer program called NetLogo to create a small simulated Louvre populated by 26,000 visitors, the average number of people to wander through the maze of galleries each day. They split each floor of the Louvre into five sections, and assigned people to follow the shortest path to the nearest exit unless directed otherwise.

Computer simulation of a mob of tourists as they rush to the nearest exit in a section of the Louvre.

Their model uses simple flow rates — the number of people that can “flow” through an exit per second — and average walking speeds to calculate evacuation times. It also lets users see what happens to evacuation times if some evacuees are disabled, or can’t push through the throngs and start to panic.
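The agent-based NetLogo simulation is far more detailed, but the core arithmetic of a flow-rate model fits in a few lines of Python. The parameter values below are illustrative guesses, not the team’s actual figures.

def evacuation_time(n_people, distance_m, walk_speed_mps, exit_flow_pps):
    """Walking time for the farthest person, plus time for the whole
    crowd to queue through exits at a fixed combined flow rate."""
    return distance_m / walk_speed_mps + n_people / exit_flow_pps

# Example: 26,000 visitors, 400 m farthest walk at 1.2 m/s,
# exits passing a combined 20 people per second.
seconds = evacuation_time(26_000, 400, 1.2, 20)
print(f"{seconds / 60:.1f} minutes")  # about 27 minutes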

If their predictions are right, the team says it should be possible to clear everyone out in just over 24 minutes.

Their results show that the exit at the Passage Richelieu is critical to evacuation — if that exit is blocked, the main exit through the Pyramid would start to gridlock and evacuating would take a whopping 15 minutes longer.

The students also identified several narrow corridors and sharp turns in the museum’s ground floor that could contribute to traffic jams. Their analyses suggest that widening some of these bottlenecks, or redirecting people around them, or adding another exit door where evacuees start to pile up, could reduce the time it takes to evacuate by 15%.

For the contest, each team of three had to choose a problem, build a model to solve it, and write a 20-page paper describing their approach, all in less than 100 hours.

“It’s a slog fest,” Ranjan said. “In the final 48 hours I think I slept a total of 90 minutes.”

Duke professor emeritus David Kraines, who advised the team, says the students were the first Duke team in over 10 years to be ranked “outstanding,” one of only 19 out of the more than 11,200 competing teams to do so this year. The team was also awarded the Euler Award, which comes with a $9,000 scholarship to be split among the team members.

Robin Smith – University Communications

Hamlet is Everywhere. To Cite, or Not to Cite?

Some stories are too good to forget. With almost formulaic accuracy, elements from classic narratives are constantly being reused and retained in our cultural consciousness, to the extent that a room of people who’ve never read Romeo and Juliet could probably still piece out its major plot points. But when stories are so pervasive, how can we tell what’s original and what’s Shakespeare with a facelift?

This summer, three Duke undergraduate students in the Data+ summer research program built a computer program to find reused stories.

“We’re looking for invisible adaptations, or appropriations, of stories where there are underlying themes or the messages remain the same,” explains Elise Xia, a sophomore in mechanical engineering. “The goal of our project was to create a model where we could take one of these original stories, get data from it, and find other stories in literature, film, TV that are adaptations.”

The Lion King, for example, is a well-known appropriation of Hamlet. The savannahs of Africa are a far cry from Denmark, and “Simba” bears no etymological resemblance to “Hamlet,” yet they’re fundamentally the same story: a power-hungry uncle kills the king and ousts the heir to the throne, only for the prince to make an eventually cataclysmic return. In an alternate ending for the film, Disney directors even considered quoting Hamlet.

“The only difference is that there’s no incest in The Lion King,” jokes Mikaela Johnson, an English and religious studies major and member of the Invisible Adaptations team.

With Hamlet as their model text, the team used a Natural Language Processing system to turn words into data points and compare other movie scripts and novels to the original play.

But the students had to strike a balance between the more superficial yet comprehensive analysis that computers offer (comparing place names, character names, and direct quotes) and the deeper textual analysis that humans provide.

So, they developed another branch of analysis: after sifting through about 30,000 scholarly texts on Hamlet to identify major themes — monarchy, death, ghost, power, revenge, uncle, etc. — their computer program screened Wikipedia’s database for those keywords to identify new adaptations. After comparing the titles found from both primary and secondary sources, they had their final list of Hamlet adaptations.
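A minimal sketch of that screening step, in Python, might score each candidate plot summary by the fraction of the Hamlet theme words it mentions. The theme list comes from the article; the summaries and the scoring rule are stand-ins for the team’s actual Wikipedia analysis.

HAMLET_THEMES = {"monarchy", "death", "ghost", "power", "revenge", "uncle"}

def theme_score(summary):
    """Fraction of the theme keywords appearing in a plot summary."""
    words = set(summary.lower().split())
    return len(HAMLET_THEMES & words) / len(HAMLET_THEMES)

candidates = {
    "The Lion King": "a prince seeks revenge after his uncle seizes "
                     "power and the ghost of the dead king appears",
    "Finding Nemo": "a clownfish crosses the ocean to rescue his son",
}

for title, summary in candidates.items():
    print(title, round(theme_score(summary), 2))
# The Lion King scores 0.67 and Finding Nemo 0.0, so only the
# former would survive the screen.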

“What we really tried to do was break down what a story is and how humans understand stories, and then try to translate that into a way a computer can do it,” says Nikhil Kaul, rising junior in computer science and philosophy. “And in a sense, it’s impossible.”

Finding the threshold between a unique story and derivative stories could have serious implications for copyright law and intellectual property in the future. But Grant Glass, UNC graduate student of English and comparative literature and the project manager of this study, believes that the real purpose of the research is to understand the context of each story.

“Appropriating without recognition removes the historical context of how that story was made,” Glass explains. Often, problematic facets of the story are too deeply ingrained to coat over with fresh literary paint: “All of the ugliness of text shouldn’t be capable of being whitewashed – They are compelling stories, but they’re problematic. We owe past baggage to be understood.”

Adaptations include small hat-tips to their original source, such as quoting the original or using character names. But appropriations of works do nothing to signal their source to their audience, which is why the Data+ team’s thematic analysis of Wikipedia pages was vital to getting a comprehensive list of previously unrecognized adaptations.

“A good adaptation would subvert expectations of the original text,” Glass says. Seth Rogen’s animated comedy, Sausage Party, one of the more surprising movie titles the students’ program found, does just that. “It’s a really vulgar, pretty funny movie,” Kaul explains. “It’s very existential and meta and has a lot of death at the end of it, much like Hamlet does. So, the program picked up on those similarities.”

Without this new program, the unexpected resemblance could’ve gone unnoticed by literary academia – and whether or not Seth Rogen intended to parallel a grocery store to the Danish royal court, it undoubtedly spins a reader’s expectation of Hamlet on its head.

By Vanessa Moss

Vulci 3000: Technology in Archaeology

This is Anna’s second post from a dig site in Italy this summer. Read the first one here.

Duke PhD Candidate Antonio LoPiano on Site

Once home to Etruscan and Roman cities, Vulci holds ruins that date back beyond the 8th century B.C.E.

As archaeologists dig up the remains of these ancient civilizations, they are better able to understand how humans from the past lived their daily lives. The problem is, they can only excavate each site once.

No matter how careful the diggers are, artifacts and pieces of history can be destroyed in the process. Furthermore, excavations take a large amount of time, money and strenuous labor to complete. As a result, it’s important to carefully choose the location.

Map of the Vulci Landscape Created Using GIS Technology

In response to these challenges, Dr. Maurizio Forte decided to supplement the excavation of ancient Vulci sites with innovative non-invasive technologies.

Considering that it once housed entire cities, Vulci is an extremely large site. To optimize excavation time, money, and resources, Dr. Forte used technology to predict the most important urban areas of the site. Forte and his team also used remote sensing, which allowed them to interpret the site prior to digging.

Georadar Imaging
Duke Post Doc Nevio Danelon Gathering Data for Photogrammetry

Having decided where on the site to look, the team was then able to digitally recreate both the landscape and the excavation trench in 3D. This allowed them to preserve the site in its entirety and uncover the history that lay below. Maps of the landscape are created using Web-GIS (Geographic Information Systems). These are then combined with 3D models created using photogrammetry to develop a realistic model of the site.

Forte decided to make the excavation entirely paperless. All “paperwork” on site is done on tablets. There is also an onsite lab that analyzes all of the archaeological discoveries and archives them into a digital inventory.

This unique combination of archaeology and technology allows Forte and his team to study, interpret and analyze the ancient Etruscan and Roman cities beneath the ground of the site in a way that has never been done before. He is able to create exact models of historic artifacts, chapels and even entire cities that could otherwise be lost for good.

3D Model Created Using Photogrammetry

Forte also thinks it is important to share what is uncovered with the public. One way he is doing this is by integrating the excavation with virtual reality applications.

I’m actually on site with Forte and the team now. One of my responsibilities is to take photos with the Insta360x, which is compatible with the OculusGo, allowing people to experience what it’s like to be in the trench through virtual reality. The end goal is to create interactive applications that could be used by museums or individuals.

Ultimately, this revolutionary approach to archaeology brings to light new perspectives on historical sites and utilizes innovative technology to better understand discoveries made in excavations.

By: Anna Gotskind ’22

Vulci 3000: A High-Tech Excavation

This summer I have the incredible opportunity to work with the Vulci 3000 Bass Connections team. The project focuses on combining archaeology and innovative technology to excavate and understand an ancient Etruscan and Roman site. Over the next several weeks I will be writing a series of articles highlighting the different parts of the excavation. This first installment recounts the history of the project and what we plan to accomplish in Vulci.

Covered in tall grasses and grazing cows, the Vulci Archaeology Park is hard to imagine as ever having been more than beautiful countryside. In reality, however, it was home to one of the largest, most important cities of ancient Etruria. In fact, it was one of the biggest cities on the entire Italian peninsula in the 1st millennium B.C.E. Buried under the ground are the incredible remains of Iron Age, Etruscan, Roman, and Medieval settlements.

Duke’s involvement with the Vulci site began in 2015, when Maurizio Forte, the William and Sue Gross Professor of Classical Studies, Art, Art History, and Visual Studies, visited the site. What was so unique about the site was that most of it was untouched.

One of the perils of archaeology is that any site can be physically excavated only once, and it is inevitable that some parts will be damaged regardless of how careful the team is. Vulci presented a unique opportunity. Because much of the site was still undisturbed, Forte could utilize innovative technology to create digital landscapes that could be viewed in succession as the site was excavated. This would allow him and his team to revisit the site at each stage of excavation. In 2015 he applied for his first permit to begin researching the Vulci site.

In 2016, Forte created a Bass Connections project titled Digital Cities and Polysensing Environments. That summer they ventured to Italy to begin surveying the Vulci site. Because Vulci is a large site, it would take too much time and money to excavate the entire city. Instead, Forte and his team decided to find the most important spots to excavate. They did this by combining remote sensing data and procedural modeling to analyze the various layers underground. They collected data using magnetometry and ground-penetrating radar. They also used drones to capture aerial photography of the site.

These technologies allowed the team to locate the urban areas of the site through the discovery of large buildings and streets revealed by the aerial photographs, radiometrically-calibrated orthomaps, and 3D point cloud/mesh models.

Anne-Lise Baylé Cleaning a Discovered Artifact on Site

The project continued into 2017 and 2018, with a team returning to the site each summer to excavate. Within the trench were archaeologists ranging from undergrads to postdocs, digging, scraping, and brushing for months to discover what lay beneath the surface. As they began to uncover rooms, pottery, coins, and even a cistern, groups outside the trench continued to use advanced technology to collect data and improve the understanding of the site.

Nevio Danelon Releasing a Drone

One unit focused on drone sensing to digitally create multispectral imagery as well as high-resolution elevation models. This allowed them to use soil and crop marks to better interpret and classify the archaeological features.

By combining traditional archaeology and innovative technology, the team has been able to more efficiently discover important ancient artifacts and analyze them in order to understand the ancient Etruscan and Roman civilizations that once called Vulci their home.

Photo Taken Using the Insta360 Camera in “Planet” Mode

This year, archaeologists return to the site to continue excavation. As another layer of Vulci is uncovered, students and faculty will use technology like drones, photogrammetry, geophysical prospections, and GIS to document and interpret the site. We will also be using a 360 camera to capture VR-compatible content for the OculusGo in order to allow anybody to visit Vulci virtually.

By Anna Gotskind

800+ Teams Pitched Their Best Big Ideas. With Your Help, This Duke Team Has a Chance to Win

A Duke University professor says the time is ripe for new research on consciousness, and he needs your help.

More than 800 teams pitched their best “big ideas” to a competition sponsored by the National Science Foundation (@NSF) to help set the nation’s long-term research agenda. Only 33 are still in the running for the grand prize, and a project on the science of consciousness led by Duke artificial intelligence expert Vincent Conitzer is among them!

You can help shape the NSF’s research questions of the future by watching Conitzer’s video pitch and submitting your comments on the importance and potential impact of the ideas at https://nsf2026imgallery.skild.com/entries/theory-of-conscious-experience.

But act fast. The public comment period ends Wednesday, June 26. Winners will be announced and prizes awarded by October 2019. Stay tuned.

Watch all the video pitches until June 26 at nsf2026imgallery.skild.com.

