Following the people and events that make up the research community at Duke



Artificial Intelligence Innovation in Taiwan

Taiwan is a small island off the coast of China, roughly one-fourth the size of North Carolina. Despite its size, Taiwan has made significant waves in science and technology. In the 2019 Global Talent Competitiveness Index, Taiwan (listed as Chinese Taipei) ranked first in Asia and 15th globally.

Even so, Taiwan continued to look for ways to improve and support research within the country. In 2017, the Taiwan Ministry of Science and Technology (MOST) launched an AI innovation research program to promote the development of AI technologies and attract top AI professionals to work in Taiwan.

Tsung-Yi Ho, a professor in the Department of Computer Science at National Tsing Hua University in Hsinchu, Taiwan, came to Duke to present on the four AI research centers launched since then: the MOST Joint Research Center for AI Technology and All Vista Healthcare (AINTU), the AI for Intelligent Manufacturing Systems Research Center (AIMS), the Pervasive AI Research (PAIR) Labs, and the MOST AI Biomedical Research Center (AIBMRC), hosted at National Taiwan University, National Tsing Hua University, National Chiao Tung University, and National Cheng Kung University, respectively.

Within the four research centers, there are 79 research teams with more than 600 professors, experts, and researchers. The centers are focused on smart agriculture, smart factories, AI biomedical research, and AI manufacturing. 

The research centers run many different AI-focused programs. Tsung-Yi Ho first discussed the AI cloud service program. In the two years since the program launched, it has built the Taiwania 2 supercomputer, which has a computing capacity of 9 quadrillion floating-point operations per second (9 petaflops). The supercomputer ranks 20th in computing power and 10th in energy efficiency.

Next, Tsung-Yi Ho introduced the AI Semiconductor Moonshot Program. Its teams have been working on cognitive computing and AI chips; next-generation memory design; IoT systems and security for the intelligent edge; innovative sensing devices, circuits, and systems; emerging semiconductor processes, materials, and device technology; and component, circuit, and system design for unmanned vehicles and AR/VR applications.

One of the things Taiwan is known for is manufacturing. The research centers are also looking to incorporate AI into manufacturing through motion generation, production-line optimization, and process optimization.

Keeping up with the biggest technological trends, the MOST research centers are all working to develop human-robot interaction, autonomous drones, and embedded AI for self-driving cars.

Lastly, some of the research groups focus on medical technology innovation, including advances in brain-image segmentation, homecare robots, and precision medicine.

Beyond this, MOST has sponsored several programming, robotics, and other contests to support tech growth and young innovators.

Tsung-Yi Ho’s goal in presenting at Duke was to showcase research highlights from the four centers and bring research opportunities to his Duke audience.

If interested, Duke students can reach out to Dina Khalilova to connect with Tsung-Yi Ho and get involved with the incredible AI innovation in Taiwan.

Post by Anna Gotskind

Polymath Mae Jemison encourages bolder exploration, collaboration

Photo from Biography.com

“I don’t believe that [going to] Mars pushes us hard enough.” This was just one of the bold, thought-provoking statements made by Dr. Mae Jemison, who came to speak at Duke on Monday, February 24 as part of the 15th annual Jean Fox O’Barr Distinguished Speaker Series, presented by Baldwin Scholars.

Dr. Jemison is at the pinnacle of interdisciplinary engagement—though she is most famous for serving as a NASA astronaut and being the first African American woman to go into space, she is also trained as an engineer, social scientist and dancer. Dr. Jemison always knew that she was going to space—even though there were no women or people of color participating in space exploration as she was growing up.

Dr. Jemison says that simply “looking up” brought her here. As a child, she would look up at the sky, see the stars and wonder if other children in other places in the world were looking at the same view that she had. Growing up in the 1960s instilled in Dr. Jemison, at an early age, the sense that our potential is limitless; the political culture of civil rights, changing art and music, and decolonization was all about “people declaring that they had a right to participate.”

Photo courtesy of Elizabeth Roy

One of the biggest pieces of advice that Dr. Jemison wanted to impart to her audience was the value of confidence, and how to build confidence in situations where people are tempted to feel incapable or forget the strengths they already possess. “They told me if I wanted to lead projects I needed an M.D.,” Dr. Jemison explained. “I went to medical school because I know myself and I knew I would want to be in charge one day.”

At 26 years old, Dr. Jemison was on call 24 hours a day, 7 days a week, 365 days a year as the Area Peace Corps Medical Officer for Sierra Leone and Liberia. She described a case in which a man came back from Senegal with a diagnosis of malaria. When Dr. Jemison first examined him, the illness looked more like meningitis. After mixing an “antibiotic cocktail” from what she had on site, she realized the man might lose his life if they didn’t get him to a better hospital. At that point, Dr. Jemison wanted to call a military medical evacuation, and she had the authority to do it. However, a man working with her suggested first calling a doctor in Ivory Coast, or a doctor at the hospital in Germany, to see what he thought before ordering the evacuation. Dr. Jemison knew that what the patient needed was to be flown to Germany, regardless of the cost of the evacuation. Reflecting on this experience, she says she could have handed her authority to someone else, but surrendering her confidence in herself and in what she knew was right would have harmed her patient.

So, how do you maintain confidence? According to Dr. Jemison, you come prepared. She knew her job was to save people’s lives, not to listen to someone else. Dr. Jemison also admonished the audience to “value, corral and protect your energy.” She couldn’t afford to always make herself available for non-emergency situations, because she needed her energy for when a patient’s life would depend on it. 

Photo courtesy of Elizabeth Roy

Dr. Jemison’s current project, 100 Year Starship, is about trying to ensure we have the capabilities to travel to interstellar space. “The extreme nature of interstellar hurdles requires we re-evaluate what we think we know,” Dr. Jemison explained. Alpha Centauri, the nearest star system to our own, is more than 25 trillion miles away. Even traveling at 10% of the speed of light, it would take us roughly 50 years to get there. We need to be able to travel faster, the vehicle has to be self-replenishing, and we have to think about space-time changes. What Dr. Jemison calls the “long pole in the tent” is human behavior. We need to know how humans will act and interact in a small spaceship setting for possibly decades of space travel. Dr. Jemison is thinking deeply about how we can apply the knowledge we already possess to fix world problems, and how we can start preparing now for problems we may face in the future. For example, how would health infrastructure in deep space look different? How would we act on a starship that contains 5,000 people when we can’t figure out how to interact with each other on the “starship” we’re on now?
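Those figures check out with simple arithmetic. Here is a back-of-the-envelope sketch (the constants below are standard reference values, not numbers from the talk):

```python
# Back-of-the-envelope check of the interstellar figures cited in the talk.
# Alpha Centauri is about 4.37 light-years from the Sun.
LIGHT_YEAR_MILES = 5.88e12          # miles in one light-year
distance_ly = 4.37                  # distance to Alpha Centauri in light-years

distance_miles = distance_ly * LIGHT_YEAR_MILES
print(f"Distance: {distance_miles:.2e} miles")      # ~2.6e13, i.e. more than 25 trillion miles

fraction_of_c = 0.10                # cruising at 10% of the speed of light
travel_years = distance_ly / fraction_of_c
print(f"Travel time at 10% of c: ~{travel_years:.0f} years")  # ~44 years of cruising
```

That works out to about 44 years of cruise time, consistent with the roughly 50-year figure once acceleration and deceleration are factored in.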

Returning to the childhood love of stargazing that brought her here, Dr. Jemison noted toward the end of her talk that a stumbling block for most people is an insufficient appreciation of our connection across time and space. She has worked with a team to develop Skyfie, an app that allows you to upload photos and videos of your sky to the Sky Tapestry and explore the images other people in different parts of the world are posting of their sky. Dr. Jemison’s hope is that this app will help people realize that we are interconnected with the rest of the universe, and that we won’t be able to figure out how to survive as a species on this planet alone.

By Victoria Priester

Origami-inspired robots that could fit in a cell?

Imagine robots that can move, sense and respond to stimuli, but that are smaller than a hair’s width. This is the project that Cornell professor and biophysicist Itai Cohen, who gave a talk on Wednesday, January 29 as part of Duke’s Physics Colloquium, has been working on with his team. His project is inspired by the microscopic robots in Paul McEuen’s book Spiral. Building robots at such a small scale involves a lot more innovation than simply shrinking all of the parts of a normal robot. At low Reynolds number, viscous forces dominate over inertia, Van der Waals forces come into play, and other factors change how the robot can move and function.
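A rough estimate shows why the physics changes at this scale. The Reynolds number Re = ρvL/μ compares inertial to viscous forces, and for a micron-scale robot in water it falls far below 1. The sketch below uses an assumed, illustrative speed (not a figure from the talk):

```python
# Rough Reynolds-number estimate for a microscopic robot moving through water.
# Re = rho * v * L / mu. The speed is an illustrative assumption, not a figure from the talk.
rho = 1000.0     # density of water, kg/m^3
mu = 1.0e-3      # dynamic viscosity of water, Pa*s
L = 70e-6        # characteristic length: ~70 microns, the robot length mentioned later
v = 10e-6        # assumed crawling speed: ~10 microns per second

Re = rho * v * L / mu
print(f"Re = {Re:.1e}")   # about 7e-4 -- far below 1, so viscous forces dominate inertia
```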

Cohen’s team designs robots that fold like origami creatures. Image from Origami.me

To resolve this issue, Cohen and his team decided to build and pattern their micro robots in 2D. Then, inspired by origami, a computer prints the 2D pattern of a robot that can fold itself into a 3D structure. Because paper origami is scale invariant, mechanisms built at one scale will work at another, so the idea is to build robot patterns that can be printed and then walk off of the page or out of a petri dish. However, as Cohen said in his talk last Wednesday, “an origami artist is only as good as their origami paper.” And to build robots at a microscopic scale, one needs some pretty thin paper. Cohen’s team uses graphene, a single sheet of which is only one atom thick. Atomic layer deposition films also behave very similarly to paper: they can be cut up, stretched locally, and made to adopt a 3D shape. Key steps for making sure the robot self-folds include building elements that bend and adding stiff pads that localize the bends within the robot’s pattern. This is what allows the team to produce what they call “graphene bimorphs.”

Cilia on the surface of a cell. Image from MedicalXpress.

Cohen and his team are looking to use microscopic robots to make artificial cilia, the small, leg-like protrusions on cells. Cilia can be sensory or used for locomotion. In the brain, there are cavities where neurotransmitters are redirected by the beating of cilia, so controlling the beating of individual cilia would make it possible to control where neurotransmitters are directed. This could have biomedical implications for detecting and treating neurological disorders.

Right now, Cohen and his lab have microscopic robots made of graphene, which have photovoltaics attached to their legs. When a light shines on the photovoltaic receptor, it activates the robot’s arm movement, and it can wave hello. The advantage of using photovoltaics is that to control the robot, scientists can shine light instead of supplying voltage through a probe—the robot doesn’t need any tethers. During his presentation, Cohen showed the audience a video of his “Brobot,” a robot that flexes its arms when a light shines on it. His team has also successfully made microscopic robots with front and back legs that can walk off a petri dish. Their dimensions are 70 microns long, 40 microns wide and two microns thick. 

Cohen wants to think critically about which problems are important enough to solve with this technology; he wants to make projects that can predict the behavior of people in crowds, predict the direction people will go in response to political issues, and help resolve water crises. Cohen’s research has the potential to find solutions to a wide variety of current issues. Using science fiction and origami as the inspiration for his projects reminds us that the ideas we dream of can become tangible realities.

By Victoria Priester

First-Year Students Designing Real-World Solutions

In the first week of fall semester, four first-year engineering students, Sean Burrell, Teya Evans, Adam Kramer, and Eloise Sinwell, had a brainstorming session to determine how to create a set of physical therapy stairs designed for children with disabilities. Their goal was to construct something that provided motivation through reward, had variable step height, and could physically support the students. 

Evans explained, “The one they were using before did not have handrails and the kids were feeling really unstable.”

Teya Evans is pictured stepping on the staircase her team designed and built. With each step, the lightbox displays different colors.

The team was extremely successful: the staircase they designed met all of the goals set out by their clients, the physical therapists. It provided motivation through the multi-colored lightbox, included an additional smaller step that could be pulled out to adjust step height, had a handrail to physically support the students, and could even be taken apart for easy transportation.

The project is part of Engineering 101 (EGR 101), a course all Pratt students are required to take. Teams are paired with a real client and work together throughout the semester to design and create a deliverable solution to the problem they are presented with. At the end of the semester, they present their products at a poster presentation, which I attended. It was pretty incredible to see what first-year undergraduates were able to create in just a few months.

The next poster I visited focused on designing a device to stabilize hand tremors. The team’s client, Kate, has ataxia, a neurological disorder that causes uncontrollable tremors in her arms and hands. She wanted a device that would enable her to use her iPad independently, because she currently needs a caregiver to stabilize her arm to use it. This team, Mohanapriya Cumaran, Richard Sheng, Jolie Mason, and Tess Foote, needed to design something that would let Kate access the entire screen while stabilizing her tremors, and that would be comfortable, easy to set up, and durable.

The team accomplished its task by developing a device that lets Kate stabilize her tremors by gripping a 3D-printed handlebar. The handlebar is attached to two rods resting on springs, which allow vertical motion, and to a drawer slide, which allows horizontal motion.

“We had her [Kate] touch apps in all areas of the iPad and she could do it,” Foote said. “Future plans are to make it comfier.”

The team plans to improve the product by adding a foam grip to the handlebar, attaching a ball and socket joint for index finger support, and adding a waterproof layer to the wooden pieces in their design. 

The last project I visited was a “Fly Flipping Device.” The team, C. Fischer, E. Song, L. Tarman, and S. Gorbaly, was paired with the Mohamed Noor Lab in the Duke Biology Department as its client.

Tarman explained, “We were asked to design a device that would expedite the process of transferring fruit flies from one vial to another.”

The Noor lab frequently uses fruit flies to study genetics, and currently fly flipping has to be done by hand, which can take a lot of time. The goal was to increase the efficiency of lab experiments by creating a device that would last for more than a year, avoid damaging the vials or flies, be portable, and fit within a desk space.

The team came up with over 50 ideas for accomplishing this task, which they narrowed down to the one they would build. The product they created consists of two arms made of PVC pipe resting on a wooden base. Attached to the arms are 3D-printed “sleeves” that hold the vials containing flies. To flip the flies efficiently, one of the arms rotates about an axis, allowing multiple vials to be flipped in the time it would normally take to flip one. The team was very successful, and their creation will contribute to important genetic research.

The Fly Flipping Device

It was mind-blowing to see what first-year students were able to create in their first few months at Duke. Starting engineering education with a hands-on design process, one that takes a real problem from idea to implementation, is a great concept, and I am excited to see what other EGR 101 students will design in the future.

By Anna Gotskind


Games, Art, and New Frontiers

This is the third of several posts written by students at the North Carolina School of Science and Math as part of an elective about science communication with Dean Amy Sheck.

Beneath Duke University’s Perkins library, an unassuming, yet fiercely original approach to video games research is underway. Tied less to computer science and engineering than you might expect, the students and faculty are studying games for their effects on players.

I was introduced to a graduate researcher who has turned a game into an experiment. His work exists between the humanities, psychology, and computer science. Some games, particularly modern ones, feature complex economies that require players to collaborate as often as they compete. These researchers have adapted that property to create an economics game in which participants anonymously affect the opportunities – and setbacks – of other players. Wealth inequality is built in. The players’ behavior, they hope, will inform them about ‘real-world’ economic decisions.

Shai Ginsburg playing

At the intersection of this interdisciplinary effort with games, I met Shai Ginsburg, an associate professor in the Department of Asian and Middle Eastern Studies who studies video games and board games the way other humanities professors might study Beowulf.

For example, he is able to divide human history into eras of games rather than of geopolitics.

“Until recently, games were not all that interactive,” he says. “Video games are, obviously, interactive, but board games have evolved, too, over the same period of time.” This shift is compelling because it offers us new freedoms in the way we express human experience.

A new gaming suite at Lawrence Tech University in Southfield, Mich. (LTU/Matt Roush)

“The fusion of storytelling and interactivity in games is very compelling,” Ginsburg says. “We haven’t seen that many games that handle issues like mental illness,” until more recently, he points out. The degree of interactivity in a video game grants a player a closeness to the narrative in the areas where writing, music, and visual art alone would be restricted. This closeness gives game designers – as artists – the freedom to explore themes where those artistic restrictions also hinder communication.

However, Dr. Ginsburg is not a game historian; the time that a game feature evolved is far less relevant to him than how its parent game affects players. “We tend to focus on the texts that interest us in a literature class,” he says, by way of example. He studies the games that interest him for the play opportunities they provide.

One advantage of using games as a medium to study their effects on people is that, “the distinction between highbrow and lowbrow is not yet there,” Ginsburg says. In painting, writing, and plenty of other mediums, a clear distinction between “good” and “bad” is decided simultaneously by communities of critics and consumers. Not so, in the case of games.

“I look at communities as a measure of the effectivity of the game less than for itself,” Ginsburg notes. “I think the question is ‘how was I reacting?’ and ‘why was I reacting in such a way?’” he says. Ginsburg’s effort seeks to reveal the mechanisms that give games their societal impact, though those impacts can be elusive. How to learn more? “Play lots of games. Play different kinds of games. Play more games.”

Guest Post by Jackson Meade, NCSSM 2020

Traveling Back in Time Through Smart Archaeology

The British explorer George Dennis once wrote, “Vulci is a city whose very name … was scarcely remembered, but which now, for the enormous treasures of antiquity it has yielded, is exalted above every other city of the ancient world.” He’s correct in assuming that most people do not know where or what Vulci is, but for explorers and historians – including Duke’s Bass Connections team Smart Archaeology – Vulci is a site of enormous potential.

Vulci, Italy, was an ancient Etruscan city whose remains are situated about an hour outside of Rome. The Etruscan civilization originated roughly in the area of Tuscany, western Umbria, and northern Lazio, and extended into the northern Po Valley (the current Emilia-Romagna region, south-eastern Lombardy, and southern Veneto) and parts of Campania. Etruscan culture is thought to have emerged in Italy around 900 BC and endured through the Roman-Etruscan Wars, coming to an end with the establishment of the Roman Empire.

As a dig site, Vulci is extremely valuable for the information it can give us about the Etruscan and Roman civilizations, especially since the ruins found there date back beyond the 8th century B.C.E. On November 20th, Professor Maurizio Forte, of the Art, Art History and Visual Studies department at Duke and of Duke’s Dig@Lab, led a talk and interactive session. He summarized the Smart Archaeology team’s experience in Italy this past summer and let audience members learn about and try the various technologies the team used. Duke is the first university in 60 years to hold an excavation permit for Vulci, and the Bass Connections team set out to explore the region with three primary concerns: data collection, data interpretation, and the use of virtual technology.

Trying out some of the team’s technologies on November 20th (photo by Renate Kwon)

The team, led by Professor Maurizio Forte, Professor Michael Zavlanos, David Zalinsky, and Todd Barrett, sought to be as diverse as possible. With 32 participants ranging from undergraduate and graduate students to professionals, as well as Italian faculty and student members, the team flew into Italy at the beginning of the summer with a research model focused on practice and experimentation for everyone involved. With an interdisciplinary scope ranging from classical studies to mechanical engineering, the team was divided into groups focusing on excavation in Vulci, remote sensing, haptics, virtual reality, robotics, and digital media.

Professor Maurizio Forte

So what did the team accomplish? Technology was a huge driving force in most of the data collected. For example, photos taken from drones were stitched together to create larger layout images of the area that would have been the city of Vulci. The computer graphics built from the drone photos were also used to create a video and aided in creating a virtual reality simulation of Vulci. VR can be an important documentation tool, especially in a field as ever-changing as archaeology. And as Professor Forte remarked, it’s possible for anyone to see exactly what the researchers saw over the summer – and “if you’re afraid of the darkness of a cistern, you can go through virtual reality instead.”
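The post doesn’t say which software the team used for the aerial mosaics, so the snippet below is only a minimal sketch of the general idea, assuming a folder of overlapping drone frames and using OpenCV’s built-in image stitcher (the folder name is hypothetical):

```python
# Minimal sketch: stitch overlapping drone photos into one large mosaic with OpenCV.
# The folder name is hypothetical and this is not the Smart Archaeology team's actual pipeline.
import glob
import cv2

frames = [cv2.imread(path) for path in sorted(glob.glob("drone_frames/*.jpg"))]

stitcher = cv2.Stitcher.create(cv2.Stitcher_SCANS)   # SCANS mode suits top-down aerial imagery
status, mosaic = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("site_mosaic.jpg", mosaic)           # one layout image covering the survey area
else:
    print(f"Stitching failed with status code {status}")
```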

An example of one of the maps created by the team
The team at work in Vulci

In addition, the team used sensor technology to get around the labor and time it would take to excavate the entire site – which, by the team’s estimate, would take 300 years! Sensors in the soil, in particular, can detect the remnants of buildings and archaeological features up to five meters below ground, allowing researchers to imagine what monuments and buildings might have looked like.

One of the biggest takeaways from the data the team collected, based on the remnants of infrastructure and the layout of the city, was the Etruscans’ mastery of water management, with techniques that the Romans also used. The team also did further work classifying Etruscan pottery, tools, and materials, building on earlier work by previous researchers. Discovering decorative and religious artifacts was also impactful for the team because, as Professor Forte emphasized, these objects are the “primary documentation of history.”

But the discoveries won’t stop there. The Smart Archaeology team is launching their 2019-2020 Bass Connections project on a second phase of their research – specifically focusing on identifying new archaeological sites, analyzing the landscape’s transformation and testing new methods of data capturing, simulation and visualization. With two more years of work on site, the team is hopeful that research will be able to explain in even greater depth how the people of Vulci lived, which will certainly help to shine a light on the significance of the Etruscan civilization in global history.

By Meghna Datta

Predicting sleep quality with the brain

Modeling functional connectivity allows researchers to compare brain activation to behavioral outcomes. Image: Chu, Parhi, & Lenglet, Nature, 2018.

For undergraduates, sleep can be as elusive as it is important. For undergraduate researcher Katie Freedy, Trinity ’20, understanding sleep is even more important because she works in Ahmad Hariri’s Lab of Neurogenetics.

After taking a psychopharmacology class while studying abroad in Copenhagen, Freedy became interested in the default mode network, a brain network implicated in autobiographical thought, self-representation and depression. Upon returning to her lab at Duke, Freedy wanted to explore the interaction between brain regions like the default mode network with sleep and depression.

Freedy’s project uses data from the Duke Neurogenetics Study, a study that collected data on brain scans, anxiety, depression, and sleep in 1,300 Duke undergraduates. While previous research has found connections between brain connectivity, sleep, and depression, Freedy was interested in a novel approach.

Connectome-based predictive modeling (CPM) is a statistical technique that uses fMRI data to build models of connections within the brain. In Freedy’s project, the model takes in data from resting-state and task-based scans to model intrinsic functional connectivity. Functional connectivity is measured as the relationship between the activation of two different parts of the brain during a scan. By looking at both resting-state and task-based scans, Freedy’s models can build a broader picture of connectivity.

To build the best model, a leave-one-out procedure is repeated for each subject: a single subject’s data is left out, the model is constructed from everyone else’s data, and its validity is tested by taking the brain scan data of the left-out subject and assessing how well the model predicts that subject’s behavioral data. Repeating this for every subject yields a model whose predictions of behavior from brain connectivity are both accurate and generally applicable.
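Here is a minimal sketch of that leave-one-out loop, assuming connectivity features in a matrix X and one behavioral score (e.g., sleep quality) per subject in y. The variable names, the stand-in data, and the simple linear model are illustrative assumptions, and the feature-selection step of full CPM is omitted:

```python
# Leave-one-out cross-validation sketch: predict a behavioral score (e.g., sleep quality)
# from brain-connectivity features. Stand-in data; not the Duke Neurogenetics Study.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 200))                               # (subjects x connectivity features)
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=50)    # stand-in behavioral scores

predictions = np.empty_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])  # fit without the held-out subject
    predictions[test_idx] = model.predict(X[test_idx])          # predict that subject's score

# How well the held-out predictions track the observed scores across all subjects
print(f"Prediction-observation correlation: {np.corrcoef(predictions, y)[0, 1]:.2f}")
```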

Freedy presented the preliminary results from her model this past summer at the BioCORE Symposium as a Summer Neuroscience Program fellow. The preliminary results showed that patterns of brain connectivity were able to predict overall sleep quality. With additional analyses, Freedy is eager to explore which specific patterns of connectivity can predict sleep quality, and how this is mediated by depression.

Freedy presented the preliminary results of her project at Duke’s BioCORE Symposium.

Understanding the links between brain connectivity, sleep, and depression is of particular importance to often sleep-deprived undergraduates.

“Using data from Duke students makes it directly related to our lives and important to those around me,” Freedy says. “With the field of neuroscience, there is so much we still don’t know, so any effort in neuroscience to directly tease out what is happening is important.”

Post by undergraduate blogger Sarah Haurin

These Microbes ‘Eat’ Electrons for Energy

The human body is populated by a greater number of microbes than its own cells. These microbes survive using metabolic pathways that differ drastically from our own.

Arpita Bose’s research explores the metabolism of microorganisms.

Arpita Bose, PhD, of Washington University in St. Louis, is interested in understanding the metabolism of these ubiquitous microorganisms, and putting that knowledge to use to address the energy crisis and other applications.

Photoferrotrophic organisms use light and electrons from the environment as an energy source

One of the biggest research questions for her lab involves understanding photoferrotrophy: the use of light and electrons from an external source for carbon fixation. Much of the energy humans consume comes from carbon fixation in phototrophic organisms like plants. Carbon fixation uses energy from light to fuel the production of sugars that we then consume for energy.

Before Bose began her research, scientists had found that some microbes interact with electricity in their environments, even donating electrons to the environment. Bose hypothesized that the reverse could also be true and sought to show that some organisms can accept electrons from metal oxides in their environments. Using a bacterial strain called Rhodopseudomonas palustris TIE-1 (TIE-1), Bose identified this process, called extracellular electron uptake (EEU).

After showing that some microorganisms can take in electrons from their surroundings and identifying a collection of genes that code for this ability, Bose found that this ability was dependent on whether a light source was also present. Without the presence of light, these organisms lost 70% of their ability to take in electrons.   

Because the organisms Bose was studying can rely on light as a source of energy, she hypothesized that this dependence on light for electron uptake might mean the electrons play a role in photosynthesis. In subsequent studies, Bose’s team found that the electrons the microorganisms were taking in were entering their photosystem.

To show that the electrons were playing a role in carbon fixation, Bose and her team looked at the activity of an enzyme called RuBisCo, which plays an integral role in converting carbon dioxide into sugars that can be broken down for energy. They found that RuBisCo was most strongly expressed and active when EEU was occurring, and that, without RuBisCo present, these organisms lost their ability to take in electrons. This finding suggests that organisms like TIE-1 are able to take in electrons from their environment and use them in conjunction with light energy to synthesize molecules for energy sources.  

In addition to broadening our understanding of the great diversity in metabolisms, Bose’s research has profound implications in sustainability. These microbes have the potential to play an integral role in clean energy generation.

Post by undergraduate blogger Sarah Haurin

The Making of queerXscape

Sinan Goknur

queerXscape, a new exhibit in the Murthy Agora Studio at the Rubenstein Arts Center, opened on September 10th. Sinan Goknur and Max Symuleski, PhD candidates in the Computational Media, Arts & Cultures Program, created the installation with digital prints of collages, cardboard structures, videos, and audio. Max explains that this multi-media approach transforms the studio from a room into a landscape, providing an immersive experience.

Max Symuleski

The two artists combined their experiences with changing urban environments when planning this exhibit. Sinan reflects on his time in Turkey, where he saw constant construction and destruction resulting in a quickly shifting landscape. While processing all of this displacement, he began taking pictures as “a way of coping with the world.” These pictures later became layers in the collages he designed with Max.

Meanwhile, Max drew on their time in New York City, where they had to move from neighborhood to neighborhood as gentrification raised prices. Approaching this project, they wondered, “What does queer mean in this changing landscape? What does it mean to queer something? Where are our spaces? Where do we need them to survive?” They had previously worked on smaller collages made from magazines, which inspired the pair of artists to try larger-scale works.

Both Sinan and Max have watched the exploding growth in Durham while studying at Duke. From this perspective, they were able to tackle this project while living in a city that exemplifies the themes they explore in their work.

One of the cardboard structures

Using a video that Sinan had made as inspiration for the exhibit, they began assembling four large digital collages. To collaborate on the pieces, they would send the documents back and forth while making edits. When it came time to assemble their work, they had to print the collages in large strips and then carefully glue them together. Through this process, they learned the importance of researching materials and experimented with the best way to smoothly place the strips together. While putting together mound-like cardboard structures of building, tire, and ice cube cut-outs, Max realized that “we’re now doing construction.” Consulting with friends who do small construction and maintenance jobs for a living also helped them assemble and install the large-scale murals in the space. The installation process was yet another example of the tension between the various drives for, and scales of, construction taking place around them.

While collage and video may seem like an odd combination, they work together in this exhibit to surround the viewer and appeal to both the eyes and the ears. Both artists share a background in queer performance and are drawn to the rough aesthetics of photo collage and paper. The show brings together aspects of their experience in drag performance, collage, video, photography, and paper sculpture in a balanced collaboration. Their work demonstrates the value of partnership that crosses genres.

Poster for the exhibit

When concluding their discussion of changing spaces, Max mentioned that, “our sense of resilience is tied to the domains where we could be queer.” Finding an environment where you belong becomes even more difficult when your landscape resembles shifting sand. Max and Sinan give a glimpse into the many effects of gentrification, destruction, and growth within the urban context. 

The exhibit will be open until October 6. If you want to see the results of weeks of collaging, printing, cutting, and pasting together photography accumulated from near and far, stop by the Ruby.

Post by Lydia Goff

Big SMILES All Around for Polymer Chemists at Duke, MIT and Northwestern

Science is increasingly asking artificial intelligence machines to help us search and interpret huge collections of data, and it’s making a difference.

But unfortunately, polymer chemistry — the study of large, complex molecules — has been hampered in this effort because it lacks a crisp, coherent language to describe molecules that are not tidy and orderly.

Think nylon. Teflon. Silicone. Polyester. These and other polymers are what chemists call “stochastic”: they’re assembled from predictable building blocks and follow a finite set of attachment rules, but can be very different in the details from one strand to the next, even within the same polymer formulation.

Plastics, love ’em or hate ’em, they’re here to stay.
Photo: Mathias Cramer/temporealfoto.com

Chemistry’s old ball-and-stick models and shorthand chemical notations aren’t adequate for a long molecule that is best described as a series of probabilities that one kind of piece might be in a given spot, or not.

Polymer chemists searching for new materials for medical treatments, or for plastics that won’t become an environmental burden, have been somewhat hampered by a written language that looks like long strings of consonants, equal signs, brackets, carets and parentheses. It’s also somewhat equivocal, so the polymer nylon-6,6 ends up written like this:

{<C(=O)CCCCC(=O)<,>NCCCCCCN>}

Or like this:

{<C(=O)CCCCC(=O)NCCCCCCN>}
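Those curly-brace “stochastic objects” are at least regular enough for a program to pull apart. As a toy illustration only (a simplified sketch that works for these two examples, not a conforming BigSMILES parser), one way to split a stochastic object into its repeat-unit fragments:

```python
# Toy illustration: split a BigSMILES stochastic object "{...}" into repeat-unit fragments
# by stripping the braces, splitting on commas, and dropping the <, > bonding descriptors.
# A simplified sketch for these two examples only, not a conforming BigSMILES parser.
import re

def repeat_units(bigsmiles: str) -> list[str]:
    inner = bigsmiles.strip().lstrip("{").rstrip("}")
    fragments = inner.split(",")                                # repeat units are comma-separated
    return [re.sub(r"[<>]", "", frag) for frag in fragments]    # drop bonding descriptors

for example in ("{<C(=O)CCCCC(=O)<,>NCCCCCCN>}",
                "{<C(=O)CCCCC(=O)NCCCCCCN>}"):
    print(example, "->", repeat_units(example))

# The first encoding lists the diacid and diamine units separately; the second writes them
# as one combined repeat unit -- exactly the kind of equivalent-but-different spellings
# a machine-readable grammar has to reconcile.
```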

And when we get to something called ‘concatenation syntax,’ matters only get worse.  

Stephen Craig, the William T. Miller Professor of Chemistry, has been a polymer chemist for almost two decades and he says the notation language above has some utility for polymers. But Craig, who now heads the National Science Foundation’s Center for the Chemistry of Molecularly Optimized Networks (MONET), and his MONET colleagues thought they could do better.

Stephen Craig

“Once you have that insight about how a polymer is grown, you need to define some symbols that say there’s a probability of this kind of structure occurring here, or some other structure occurring at that spot,” Craig says. “And then it’s reducing that to practice and sort of defining a set of symbols.”

Now he and his MONET colleagues at MIT and Northwestern University have done just that, resulting in a new language – BigSMILES – that’s an adaptation of the existing language called SMILES (simplified molecular-input line-entry system). They think it can reduce the hugely combinatorial problem of describing polymers down to something even a dumb computer can understand.

And that, Craig says, should enable computers to do all the stuff they’re good at – searching huge datasets for patterns and finding needles in haystacks.

The initial heavy lifting was done by MONET members Prof. Brad Olsen and his co-worker Tzyy-Shyang Lin at MIT, who conceived of the idea and developed the set of symbols and the syntax together. Now polymers, their constituent building blocks, and their variety of linkages might be described like this:

Examples of BigSMILES symbols from the recent paper

It’s certainly not the best reading material for us and it would be terribly difficult to read aloud, but it becomes child’s play for a computer.

Members of MONET spent a couple of weeks trying to stump the new language with the weirdest polymers they could imagine, which turned up the need for a few more parts to the ‘alphabet.’ But by and large, it holds up, Craig says. They also threw a huge database of polymers at it and it translated them with ease.

“One of the things I’m excited about is how the data entry might eventually be tied directly to the synthetic methods used to make a particular polymer,” Craig says. “There’s an opportunity to actually capture and process more information about the molecules than is typically available from standard characterizations. If that can be done, it will enable all sorts of discoveries.”

BigSMILES was introduced to the polymer community by an article in ACS Central Science last week, and the MONET team is eager to see the response.

“Can other people use it and does it work for everything?” Craig asks. “Because polymer structure space is effectively infinite.” Which is just the kind of thing you need Big Data and machine learning to address. “This is an area where the intersection of chemistry and data science can have a huge impact,” Craig says.

