Duke Research Blog

Following the people and events that make up the research community at Duke.

Category: Computers/Technology (Page 1 of 12)

Meet New Blogger Brian Du

Brian survives his week in the desert.

Hi! My name is Brian Du, and I’m a sophomore from Texas. I’m a pre-med majoring in computer science. I like vacations, hiking, and hiking on vacation. Besides these hobbies, I also love learning about science and hearing a good story. These latter two are exactly why I’m excited to be writing for the Duke Research Blog.

My first exposure to science happened in third grade because my goldfish kept getting sick and dying. This made me sad and I became invested in making them well again. I would measure pH levels regularly with my dad and keep notes on the fishes’ health. Eventually the process turned into a science fair project. I remember I loved presenting because I got to point out to the judges the ‘after’ pictures of my fish, which showed them alive, healthy, and happy (I think? it’s hard to tell with fish).

One happy fish!
Source: Reddit

My fish and I go way back.

After that third-grade experiment, I kept doing science projects — almost year after year actually — since I love the research process. From framing the right questions and setting up the experiment, to running the trials and writing up and sharing my work, my enthusiasm grew with each step. Come competition day, I noticed that in interviews that went well, my excitement was contagious, so that judges grew more eager too as they listened. And so I understood: a huge part of science is communication. Science, like food or a good story, is meant to be shared with others. The scientist is a storyteller, adjusting his presentation to captivate different audiences. With judges, I spoke jargon, but during public exhibition, where I chatted with anyone who came up to me, I got creative when asked about my research. Analogies helped me link strange concepts to everyday objects and experiences. An important protein channel became a pipe, and its inhibitor molecule a rock which would clog the pipe to make it unusable.

protein channel “pipe”
edited from CThompson02

Now that I’m at Duke, there are so many stories to tell of the rich variety of research being done right on campus! I’ve written a few articles for the Chronicle covering some of the new medicines and proteins Duke professors have been involved in developing. As I keep an ear out for more stories, I hope to share a few of them in my upcoming posts, because I know they’ll be exciting!

Smart Phones Are the New Windows to the Soul

It’s one of those things that seems so simple and elegant that you’re left asking yourself, “Geez, why didn’t I think of that?”

Say you were trying to help people lose weight, prep for a surgery or take their meds every day. They’re probably holding a smartphone in at least one of their hands — all you need to do is enlist that ever-present device they’re staring at to bug them!

So, for example, have the health app send a robo-text twice a day to check in: “Did you weigh yourself?” Set up a group chat where their friends all know what they’re trying to accomplish: “We’re running today at 5, right?”

This is a screenshot of a Pattern Health app for pre-operative patients.

It’s even possible to make them pinky-swear a promise to their phone that they will do something positive toward the goal, like walking or skipping dessert that day. And if they don’t? The app has their permission to lock them out of all their apps for a period of time.

Seriously, people agree to this and it works.

Two app developers on this frontier of personalized, portable “mHealth” told a lunchtime session sponsored by the Duke Mobile App Gateway on Thursday that patients not only willingly play along with these behavior-modification apps, but their behaviors also change for the better.

The idea of using phones for health behavior came to pediatric hematologist Nirmish Shah MD one day while he attempted to talk to a 16-year-old sickle cell disease patient as she snapped selfies with the doctor. Her mom and toddler sister nearby both had their noses to screens as well. “I need to change how I do this,” Shah thought to himself.

Pediatric hematologist Nirmish Shah MD

Pediatric hematologist Nirmish Shah MD is director of Duke’s sickle cell transition program.

Twenty health apps later, he’s running phase II clinical trials of phone-based interventions for young sickle cell patients that encourage them to stay on their medication schedule and ask them often about their pain levels.

One tactic that seems to work pretty well is to ask his patients to send in selfie videos as they take their meds each day. The catch? The female patients send a minute or so of chatty footage a day. The teenage boys average 13 seconds, and they’re grumpy about it.

Clearly, different activities may be needed for different patient populations, Shah said.

While it’s still early days for these approaches, we do have a lot of behavioral science on what could help, said Aline Holzwarth, a principal of the Center for Advanced Hindsight and head of behavioral science for a Durham health app startup called Pattern Health.

Aline Gruneisen Holzwarth

Aline Holzwarth is a principal in the Center for Advanced Hindsight.

“It’s not enough to simply inform people to eat better,” Holzwarth said. The app has to secure a commitment from the user, make them set small goals and then ask how they did, enlist the help of social pressures, and then dole out rewards and punishments as needed.

Pattern Health’s app says “You need to do this, please pick a time when you will,” followed by a reward or a consequence.

Thursday’s session, “Using Behavioral Science to Drive Digital Health Engagement and Outcomes,” was the penultimate session of the annual Duke Digital Health Week. Except for the Hurricane Florence washout on Monday, the week has been a tremendous success this year, said Katie McMillan, the associate director of the App Gateway.

What Happens When Data Scientists Crunch Through Three Centuries of Robinson Crusoe?

Reading 1,400-plus editions of “Robinson Crusoe” in one summer is impossible. So one team of students tried to train computers to do it for them.

Since Daniel Defoe’s shipwreck tale “Robinson Crusoe” was first published nearly 300 years ago, thousands of editions and spinoff versions have been published, in hundreds of languages.

A research team led by Grant Glass, a Ph.D. student in English and comparative literature at the University of North Carolina at Chapel Hill, wanted to know how the story changed as it went through various editions, imitations and translations, and to see which parts stood the test of time.

Reading through them all at a pace of one a day would take years. Instead, the researchers are training computers to do it for them.

This summer, Glass’ team in the Data+ summer research program used computer algorithms and machine learning techniques to sift through 1,482 full-text versions of Robinson Crusoe, compiled from online archives.

“A lot of times we think of a book as set in stone,” Glass said. “But a project like this shows you it’s messy. There’s a lot of variance to it.”

“When you pick up a book it’s important to know what copy it is, because that can affect the way you think about the story,” Glass said.

Just getting the texts into a form that a computer could process proved half the battle, said undergraduate team member Orgil Batzaya, a Duke double major in math and computer science.

The books were already scanned and posted online, so the students used software to download the scans from the internet, via a process called “scraping.” But processing the scanned pages of old printed books, some of which had smudges, specks or worn type, and converting them to a machine-readable format proved trickier than they thought.

The software struggled to decode the strange spellings (“deliver’d,” “wish’d,” “perswasions,” “shore” versus “shoar”), different typefaces between editions, and other quirks.

Special characters unique to 18th century fonts, such as the curious f-shaped version of the letter “s,” make even humans read “diftance” and “poffible” with a mental lisp.
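Some of these quirks can be patched after OCR with simple, rule-based normalization. A minimal sketch (the rules below are illustrative, not the team's actual cleanup pipeline):

```python
import re

def normalize(text: str) -> str:
    """Apply a few rule-based fixes for 18th-century typography.

    Illustrative only: misreadings like "diftance" (the long s scanned
    as an "f") really need dictionary-based correction, not simple rules.
    """
    text = text.replace("ſ", "s")            # the f-shaped "long s"
    text = re.sub(r"[’']d\b", "ed", text)    # "deliver'd" -> "delivered"
    return text

print(normalize("diſtance deliver’d"))  # → distance delivered
```

Rules like these only go so far; archaic spellings such as “perswasions” still need a period-specific dictionary to resolve.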

Their first attempts came up with gobbledygook. “The resulting optical character recognition was completely unusable,” said team member and Duke senior Gabriel Guedes.

At a Data+ poster session in August, Guedes, Batzaya and history and computer science double major Lucian Li presented their initial results: a collection of colorful scatter plots, maps, flowcharts and line graphs.

Guedes pointed to clusters of dots on a network graph. “Here, the red editions are American, the blue editions are from the U.K.,” Guedes said. “The network graph recognizes the similarity between all these editions and clumps them together.”

Once they turned the scanned pages into machine-readable texts, the team fed them into a machine learning algorithm that measures the similarity between documents.

The algorithm takes in chunks of texts — sentences, paragraphs, even entire novels — and converts them to high-dimensional vectors.

Creating this numeric representation of each book, Guedes said, made it possible to perform mathematical operations on them. They added up the vectors for each book to find their sum, calculated the mean, and looked to see which edition was closest to the “average” edition. It turned out to be a version of Robinson Crusoe published in Glasgow in 1875.
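The team's actual vectors were high-dimensional document embeddings, but the sum/mean/nearest-edition idea can be sketched with plain word-count vectors and cosine similarity. The toy "editions" below are made up:

```python
from collections import Counter
import math

def vectorize(texts):
    """Word-count vector for each text over a shared vocabulary."""
    vocab = sorted({w for t in texts for w in t.lower().split()})
    counts = [Counter(t.lower().split()) for t in texts]
    return [[c[w] for w in vocab] for c in counts]

def cosine(a, b):
    """Similarity of two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Made-up stand-ins for the 1,482 editions:
editions = [
    "crusoe was shipwrecked on an island",
    "crusoe was shipwrecked on a desert island alone",
    "a sailor was lost at sea",
]
vecs = vectorize(editions)

# Sum the vectors, take the mean, then find the edition nearest the mean:
# the same operations the team used to locate the "average" edition.
mean = [sum(col) / len(vecs) for col in zip(*vecs)]
closest = max(range(len(vecs)), key=lambda i: cosine(vecs[i], mean))
print(editions[closest])  # → the second, middle-of-the-road edition
```

On the real corpus, the edition nearest the mean turned out to be the 1875 Glasgow printing.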

They also analyzed the importance of specific plot points in determining a given edition’s closeness to the “average” edition: what about the moment when Crusoe spots a footprint in the sand and realizes that he’s not alone? Or the time when Crusoe and Friday, after leaving the island, battle hungry wolves in the Pyrenees?

The team’s results might be jarring to those unaccustomed to seeing 300 years of publishing reduced to a bar chart. But by using computers to compare thousands of books at a time, “digital humanities” scholars say it’s possible to trace large-scale patterns and trends that humans poring over individual books can’t.

“This is really something only a computer can do,” Guedes said, pointing to a time-lapse map showing how the Crusoe story spread across the globe, built from data on the place and date of publication for 15,000 editions.

“It’s a form of ‘distant reading’,” Guedes said. “You use this massive amount of information to help draw conclusions about publication history, the movement of ideas, and knowledge in general across time.”

This project was organized in collaboration with Charlotte Sussman (English) and Astrid Giugni (English, ISS). Check out the team’s results at https://orgilbatzaya.github.io/pirating-texts-site/

Data+ is sponsored by Bass Connections, the Information Initiative at Duke, the Social Science Research Institute, the departments of Mathematics and Statistical Science and MEDx. This project team was also supported by the Duke Office of Information Technology.

Other Duke sponsors include DTECH, Duke Health, Sanford School of Public Policy, Nicholas School of the Environment, Development and Alumni Affairs, Energy Initiative, Franklin Humanities Institute, Duke Forge, Duke Clinical Research, Office for Information Technology and the Office of the Provost, as well as the departments of Electrical & Computer Engineering, Computer Science, Biomedical Engineering, Biostatistics & Bioinformatics and Biology.

Government funding comes from the National Science Foundation.

Outside funding comes from Lenovo, Power for All and SAS.

Community partnerships, data and interesting problems come from the Durham Police and Sheriff’s Department, Glenn Elementary PTA, and the City of Durham.

Videos by Paschalia Nsato and Julian Santos; writing by Robin Smith

Can’t Decide What Clubs to Join Outside of Class? There’s a Web App for That

With 400-plus student organizations to choose from, Duke has more co-curriculars than you could ever hope to take advantage of in one college career. Navigating the sheer number of options can be overwhelming. So how do you go about finding your niche on campus?

Now there’s a Web app for that: the Duke CoCurricular Eadvisor. With just a few clicks it comes up with a personalized ranked list of student clubs and programs based on your interests and past participation compared to others.

“We want it to be like the activity fair, but online,” said Duke computer science major Dezmanique Martin, who was part of a team of Duke undergrads in the Data+ summer research program that developed the “recommendation engine.”

“The goal is to make a web app that recommends activities like Netflix recommends movies,” said team member Alec Ashforth.

The project is still in the testing stage, but you can try it out for yourself, or add your student organization to the database, at https://eadvisorduke.shinyapps.io/login/

A “co-curricular” can be just about any learning experience that takes place outside of class and doesn’t count for credit, be it a student magazine, Science Olympiad or community service. Research shows that students who get involved on campus are more likely to graduate and thrive in the workplace post-graduation.

For the pilot version, the team compiled a list of more than 150 student programs related to technology. Each program was tagged with certain attributes.

Students start by entering a Net ID, major, and expected graduation date. Then they enter all the programs they have participated in at Duke so far, submit their profile, and hit “recommend.”

The e-advisor algorithm generates a ranked list of activities recommended just for the user.

The e-advisor might recognize that a student who did DataFest and HackDuke in their first two years likes computer science, research, technology and competitions. Based on that, the Duke Robotics Club might be highly recommended, while the Refugee Health Initiative would be ranked lower.

A new student can just indicate general interests by selecting a set of keywords from a drop-down menu. Whether it’s literature and humanities, creativity, competition, or research opportunities, the student and her advisor won’t have to puzzle over the options — the e-advisor does it for them.

The tool comes up with its recommendations using a combination of approaches. One, called content-based filtering, finds activities you might like based on what you’ve done in the past. The other, collaborative filtering, looks for other students with similar histories and tastes, and recommends activities they tried.
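A minimal sketch of the collaborative-filtering half (content-based filtering would instead match activity tags against a student's own history). The students and activities here are hypothetical, not the real e-advisor data:

```python
from collections import defaultdict

# Hypothetical participation data: each student maps to the set of
# activities they have joined. Names are illustrative only.
students = {
    "alice": {"DataFest", "HackDuke", "Robotics Club"},
    "bob":   {"DataFest", "HackDuke"},
    "carol": {"Student Magazine", "Refugee Health Initiative"},
}

def jaccard(a, b):
    """Overlap between two activity sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

def recommend(user):
    """Collaborative filtering: rank activities tried by similar students."""
    mine = students[user]
    scores = defaultdict(float)
    for other, theirs in students.items():
        if other == user:
            continue
        sim = jaccard(mine, theirs)
        if sim == 0:
            continue                      # ignore dissimilar students
        for activity in theirs - mine:    # only things the user hasn't done
            scores[activity] += sim
    return sorted(scores, key=scores.get, reverse=True)

# Bob looks most like Alice, so her extra activity tops his list:
print(recommend("bob"))  # → ['Robotics Club']
```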

This could be a useful tool for advisors, too, noted Vice Provost for Interdisciplinary Studies Edward Balleisen, while learning about the EAdvisor team at this year’s Data+ Poster Session.

“With sole reliance on the app, there could be a danger of some students sticking with well-trodden paths, at the expense of going outside their comfort zone or trying new things,” Balleisen said.

But thinking through app recommendations along with a knowledgeable advisor “might lead to more focused discussions, greater awareness about options, and better decision-making,” he said.

Led by statistics Ph.D. candidate Lindsay Berry, the team has collected data from more than 80 students so far. Moving forward they’d like to add more co-curriculars to the database and incorporate more features, such as an upvote/downvote system.

“It will be important for the app to include inputs about whether students had positive, neutral, or negative experiences with extra-curricular activities,” Balleisen added.

The system also doesn’t take into account a student’s level of engagement. “If you put Duke machine learning, we don’t know if you’re president of the club, or just a member who goes to events once a year,” said team member Vincent Liu, a rising sophomore majoring in computer science and statistics.

Ultimately, the hope is to “make it a viable product so we can give it to freshmen who don’t really know what they want to do, or even sophomores or juniors who are looking for new things,” said Brooke Keene, a rising junior majoring in computer science and electrical and computer engineering.

Video by Paschalia Nsato and Julian Santos; writing by Robin Smith


Teaching a Machine to Spot a Crystal

A collection of iridescent crystals grown in space

Not all protein crystals exhibit the colorful iridescence of these crystals grown in space. But no matter their looks, all are important to scientists. Credit: NASA Marshall Space Flight Center (NASA-MSFC).

Protein crystals don’t usually display the glitz and glam of gemstones. But no matter their looks, each and every one is precious to scientists.

Patrick Charbonneau, a professor of chemistry and physics at Duke, along with a worldwide group of scientists, teamed up with researchers at Google Brain to use state-of-the-art machine learning algorithms to spot these rare and valuable crystals. Their work could accelerate drug discovery by making it easier for researchers to map the structures of proteins.

“Every time you miss a protein crystal, because they are so rare, you risk missing out on an important biomedical discovery,” Charbonneau said.

Knowing the structure of proteins is key to understanding their function and possibly designing drugs that work with their specific shapes. But the traditional approach to determining these structures, called X-ray crystallography, requires that proteins be crystallized.

Crystallizing proteins is hard — really hard. Unlike the simple atoms and molecules that make up common crystals like salt and sugar, these big, bulky molecules, which can contain tens of thousands of atoms each, struggle to arrange themselves into the ordered arrays that form the basis of crystals.

“What allows an object like a protein to self-assemble into something like a crystal is a bit like magic,” Charbonneau said.

Even after decades of practice, scientists have to rely in part on trial and error to obtain protein crystals. After isolating a protein, they mix it with hundreds of different types of liquid solutions, hoping to find the right recipe that coaxes it to crystallize. They then look at droplets of each mixture under a microscope, hoping to spot the smallest speck of a growing crystal.

“You have to manually say, there is a crystal there, there is none there, there is one there, and usually it is none, none, none,” Charbonneau said. “Not only is it expensive to pay people to do this, but also people fail. They get tired and they get sloppy, and it detracts from their other work.”

Three microscope images of protein crystallization solutions

The machine learning software searches for points and edges (left) to identify crystals in images of droplets of solution. It can also identify when non-crystalline solids have formed (middle) and when no solids have formed (right).

Charbonneau thought deep learning software, which is now capable of recognizing individual faces in photographs even when they are blurry or caught from the side, might also be able to identify the points and edges that make up a crystal in solution.
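Deep networks learn their own point and edge detectors in their early layers. A classic hand-crafted filter such as the Sobel operator gives a feel for what "responding to an edge" means; this toy sketch in pure Python is not the actual MARCO/Google Brain model:

```python
# Horizontal-gradient (Sobel) kernel: responds where brightness changes
# left-to-right, i.e. at vertical edges like a crystal facet boundary.
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def filter2d(image, kernel):
    """Naive 2D filtering (valid mode, 3x3 kernel)."""
    h, w = len(image), len(image[0])
    return [[sum(kernel[a][b] * image[i + a][j + b]
                 for a in range(3) for b in range(3))
             for j in range(w - 2)]
            for i in range(h - 2)]

# A tiny image with a sharp vertical boundary between dark (0) and bright (1):
image = [[0, 0, 0, 1, 1, 1]] * 4
edges = filter2d(image, SOBEL_X)
# Each output row is [0, 4, 4, 0]: zero over the flat regions,
# a strong response right at the boundary.
```

A trained network stacks many learned filters like this one, which is why it can pick out crystal edges that tired human eyes miss.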

Scientists from both academia and industry came together to collect half a million images of protein crystallization experiments into a database called MARCO. The data specify which of these protein cocktails led to crystallization, based on human evaluation.

The team then worked with a group led by Vincent Vanhoucke from Google Brain to apply the latest in artificial intelligence to help identify crystals in the images.

After “training” the deep learning software on a subset of the data, they unleashed it on the full database. The A.I. was able to accurately identify crystals about 95 percent of the time. Estimates show that humans spot crystals correctly only 85 percent of the time.

“And it does remarkably better than humans,” Charbonneau said. “We were a little surprised because most A.I. algorithms are made to recognize cats or dogs, not necessarily geometrical features like the edge of a crystal.”

Other teams of researchers have already asked to use the A.I. model and the MARCO dataset to train their own machine learning algorithms to recognize crystals in protein crystallization experiments, Charbonneau said. These advances should allow researchers to focus more time on biomedical discoveries instead of squinting at samples.

Charbonneau plans to use the data to understand how exactly proteins self-assemble into crystals, so that researchers rely less on chance to get this “magic” to happen.

“We are trying to use this data to see if we can get more insight into the physical chemistry of self-assembly of proteins,” Charbonneau said.

CITATION: “Classification of crystallization outcomes using deep convolutional neural networks,” Andrew E. Bruno, et al. PLOS ONE, June 20, 2018. DOI: 10.1371/journal.pone.0198883

 

Post by Kara Manke

Heating Up the Summer, 3D Style

While some students like to spend their summer recovering from a long year of school work, others are working diligently in the Innovation Co-Lab in the Telcom building on West Campus.

They’re working on the impacts of dust and particulate matter (PM) pollution on solar panel performance, and discovering new technologies that map out the 3D volume of the ocean.

The Co-Lab is one of three 3D printing labs located on campus. It allows students and faculty the opportunity to creatively explore research through the use of new and emerging technologies.

Third-year PhD candidate Michael Valerino said his long term research project focuses on how dust and air pollution impacts the performance of solar panels.

“I’ve been designing a low-cost prototype which will monitor the impact of dust and air pollution on solar panels,” said Valerino. “The device is going to be used to monitor the impacts of dust and particulate matter (PM) pollution on solar panel performance. This process is known as soiling. This is going to be a low-cost alternative (~$200) to other monitoring options that are at least $5,000.”

Most of the 3D printers come with standard polylactic acid (PLA) material for printing. However, because his first prototype completely melted in India’s heat, Valerino decided to switch to black carbon fiber-infused nylon.

“It really is a good fit for what I want to do,” he said. “These low-cost prototypes will be deployed in China, India, and the Arabian Peninsula to study global soiling impacts.”

In a step-by-step process, he applied acid-free glue to the base plate that holds the black carbon fiber-infused nylon. He then placed the glass plate into the printer and closely examined how the thick carbon fiber holds his project together.

Michael Bergin, a professor of civil and environmental engineering at Duke, collaborated with the Indian Institute of Technology-Gandhinagar and the University of Wisconsin last summer on a study about soiling.

The study indicated that there was a decrease in solar energy as the panels became dirtier over time. The solar cells jumped 50 percent in efficiency after being cleaned for the first time in several weeks. Valerino’s device will be used to expand Bergin’s work.

As Valerino tackles his project, Duke student volunteers and high school interns are in another part of the Co-Lab developing technology to map the ocean floor.

The Blue Devil Ocean Engineering team will be competing in the Shell Ocean Discovery XPRIZE, a global technology competition challenging teams to advance deep-sea technologies for autonomous, fast and high-resolution ocean exploration. (Their mentor, Martin Brooke, was recently featured on Science Friday.)

The team is developing large, highly redundant carbon drones that are eight feet across. The drones will fly over the ocean and drop pods into the water that will sink to collect sonar data.

Tyler Bletsch, a professor of the practice in electrical and computer engineering, is working alongside the team. He describes the team as having the most creative approach in the competition.

“We have many parts of this working, but this summer is really when it needs to come together,” Bletsch said. “Last year, we made it through round one of the competition and secured $100,000 for the university. We’re now using that money for the final phase of the competition.”

The final phase of the competition is scheduled to be held in fall 2018.

Though campus is slow this summer, the Innovation Co-Lab is keeping busy. You can keep up-to-date with their latest projects here.

Post by Alexis Owens

 

Duke Alumni Share Their SpaceX Experiences

It was 8 o’clock on a Monday night and Teer 203 was packed. A crowd of largely Pratt Engineering students had crammed into practically every chair in the room, as if for lecture. Only, there were no laptops out tonight. No one stood at the blackboard, teaching.

SpaceX launches

SpaceX’s Falcon Heavy and Dragon rockets in simultaneous liftoff

No, these students had given up their Monday evening for something more important. Tonight, engineering professor Rebecca Simmons was videoconferencing with six recent Duke grads—all of whom are employed at the legendary aerospace giant SpaceX, brainchild of tech messiah Elon Musk.

Eager to learn as much as possible about the mythic world of ultracompetitive engineering, the gathered students spent the next hour and fifteen minutes grilling Duke alumni Anny Ning (structures design engineering), Kevin Seybert (integration and test engineering), Matthew Pleatman and Daniel Lazowski (manufacturing engineering), and Zachary Loncar (supply chain) with as many questions as they could squeeze in.

Over the course of the conversation, Duke students seemed particularly interested in the overall culture of SpaceX: What was it like to actually work there? What do the employees think of the SpaceX environment, or the way the company approaches engineering?

One thing all of the alumni were quick to key in on was the powerful emphasis their company placed on flexibility and engagement.

“It’s much harder to find someone that says ‘no’ at SpaceX,” Pleatman said. “It’s way easier to find someone who says ‘yes.’ ”

SpaceX’s workflow, Seybert added, is relentlessly adaptive. There are no strict boundaries on what you can work on in your job, and the employee teams are made up of continually evolving combinations of specialists and polymaths.

“It’s extremely dynamic,” Seybert said. “Whatever the needs of the company are, we will shift people around from week to week to support that.”

“It’s crazy—there is no typical week,” Lazowski added. “Everything’s changing all the time.”

SpaceX Launch

Launch of Hispasat 30W-6 Mission

Ning, for her part, focused a great deal on the flexibility SpaceX both offers and demands. New ideas and a willingness to question old ways of thinking are critical to this company’s approach to innovation, and Ning noted that one of the first things she had to learn was to be continuously on the lookout for ways her methods could be improved.

“You should never hear someone say, ‘Oh, we’re doing this because this is how we’ve always done it,’ ” she said.

The way SpaceX approaches engineering and innovation, Seybert explained, is vastly different from how traditional aerospace companies have tended to operate. SpaceX employees are there because of their passion for their work. They focus on the projects they want to focus on, they move between projects on a day-to-day basis, and they don’t expect to stay at any one engineering company for more than a few years. Everything is geared around putting out the best possible product, as quickly as humanly possible.

So now, the million dollar question: How do you get in?

“One thing that I think links us together is the ability to work hands-on,” Loncar offered.

Pleatman agreed. “If you want to get a job at SpaceX directly out of school, it’s really important to have an engineering project that you’ve worked on. It doesn’t matter what it is, but just something where you’ve really made a meaningful contribution, worked hard, and can really talk through the design from start to finish.”

Overall, passion, enthusiasm and flexibility were overarching themes. And honestly, that seems pretty understandable. We are talking about rockets, after all — what’s not to be excited about? These Duke alums are out engineering the frontier of tomorrow — bringing our species one step closer to its place among the stars.

As Ning put it, “I can’t really picture a future where we’re not out exploring space.”

Post by Daniel Egitto

Artificial Intelligence Knows How You Feel

Ever wondered how Siri works? Afraid that super smart robots might take over the world soon?

On April 3rd researchers from Duke, NCSU and UNC came together for Triangle Machine Learning Day to provoke everyone’s curiosity about the complex field that is artificial intelligence. A.I. is an overarching term for smart technologies, ranging from self-driving cars to targeted advertising. We can arrive at artificial intelligence through what’s known as “machine learning.” Instead of explicitly programming a machine with the basic capabilities we want it to have, we can make its code flexible so that it adapts based on the information it’s presented with. Its knowledge grows as a result of training it. In other words, we’re teaching a computer to learn.
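That "learning from data" loop can be shown in a few lines. A toy sketch, not drawn from the talks themselves: the program is never told that y = 2x; it adjusts its one parameter from the examples it sees.

```python
# Toy "machine learning": fit y = w * x by gradient descent.
data = [(1, 2), (2, 4), (3, 6)]  # inputs x with targets y = 2x

w = 0.0                  # initial guess for the parameter
lr = 0.05                # learning rate: how big each nudge is
for _ in range(200):     # each pass over the data shrinks the error
    for x, y in data:
        error = w * x - y
        w -= lr * error * x   # gradient step on the squared error

print(round(w, 3))  # → 2.0
```

Swap the single parameter for millions and the straight line for images or speech, and this same adapt-from-examples loop is what powers the systems discussed below.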

Matthew Philips is working with Kitware to get computers to “see,” also known as “machine vision.” By providing thousands and thousands of images, a computer with the right coding can learn to actually make sense of what an image is beyond different colored pixels.

Machine vision has numerous applications. An effective way to search satellite imagery for arbitrary objects could be huge in the advancement of space technology – a satellite could potentially identify obscure objects or potential lifeforms that stick out in those images. This is something we as humans can’t do ourselves just because of the sheer amount of data there is to go through. Similarly, we could teach a machine to identify cancerous or malignant cells in an image, thus giving us a quick diagnosis if someone is at risk of developing a disease.

The problem is, how do you teach a computer to see? Machines don’t easily understand things like similarity, depth or orientation — things that we as humans do automatically without even thinking about. That’s exactly the type of problem Kitware has been tackling.

One hugely successful piece of Artificial Intelligence you may be familiar with is IBM’s Watson. Labeled as “A.I. for professionals,” Watson was featured on 60 Minutes and even played Jeopardy on live television. Watson has visual recognition capabilities, can work as a translator, and can even understand things like tone, personality or emotional state. And obviously it can answer crazy hard questions. What’s even cooler is that it doesn’t matter how you ask the question – Watson will know what you mean. Watson is basically Siri on steroids, and the world got a taste of its power after watching it smoke its competitors on Jeopardy. However, Watson is not to be thought of as a physical supercomputer. It is a collection of technologies that can be used in many different ways, depending on how you train it. This is what makes Watson so astounding – through machine learning, its knowledge can adapt to the context it’s being used in.

Source: CBS News.

IBM has been able to develop such a powerful tool thanks to data. Stacy Joines from IBM noted, “Data has transformed every industry, profession, and domain.” From our smartphones to fitness devices, data is being collected about us as we speak (see: digital footprint). While that’s definitely pretty scary, the point is that a lot of data is out there. The more data you feed Watson, the smarter it gets. IBM has combined this abundance of data with machine learning to produce some of the most sophisticated A.I. out there.

Sure, it’s a little creepy how much data is being collected on us. Sure, there are tons of movies and theories out there about how intelligent robots will one day outsmart humans and take over. But A.I. isn’t a thing to be scared of. It’s a remarkable creation that can do things no purely hand-programmed model, however advanced, can match. It’s joining the health care system to save lives, advising businesses and could someday help find a new habitable planet. What we choose to do with A.I. is entirely up to us.

Post by Will Sheehan


How a Museum Became a Lab

Encountering and creating art may be among humankind’s most complex experiences. Art, not just visual art but also dance and song, requires the brain to make sense of an object or performance presented to it and then to associate it with memories, facts, and emotions.

A piece in Dario Robleto’s exhibit titled “The Heart’s Knowledge Will Decay” (2014)

In an ongoing experiment, Jose “Pepe” Contreras-Vidal and his team set up in artist Dario Robleto’s exhibit “The Boundary of Life Is Quietly Crossed” at the Menil Collection near downtown Houston. They then asked visitors if they were willing to have their trips through the museum and their brain activities recorded. Robleto’s work was displayed from August 16, 2014 to January 4, 2015. By engaging museum visitors, Contreras-Vidal and Robleto gathered brain activity data while also educating the public, combining research and outreach.

“We need to collect data in a more natural way, beyond the lab,” explained Contreras-Vidal, an engineering professor at the University of Houston, during a talk with Robleto sponsored by the Nasher Museum.

More than 3,000 people have participated in this experiment, and the number is growing.

To measure brain activity, the volunteers wear EEG caps which record the electrical impulses that the brain uses for communication. EEG caps are noninvasive because they are just pulled onto the head like swim caps. The caps allow the museum goers to move around freely so Contreras-Vidal can record their natural movements and interactions.

By watching individuals interact with art, Contreras-Vidal and his team can find patterns between their experiences and their brain activity. They also asked the volunteers to reflect on their visit, adding a first-person perspective to the experiment. Together, these three sources of data showed them what a young girl’s favorite painting was, how she moved and expressed her reaction to it, and how her brain activity reflected that opinion and reaction.

The volunteers can also watch the recordings of their brain signals, giving them an opportunity to ask questions and engage with the science community. For most participants, this is the first time they’ve seen recordings of their brain’s electrical signals. In one trip, these individuals learned about art, science, and how the two can interact. Throughout this entire process, every member of the audience forms a unique opinion and learns something about both the world and themselves as they interact with and make art.

Children with EEG caps explore art.

Contreras-Vidal is especially interested in the gestures people make in response to the various stimuli in a museum and hopes to apply this information to robotics. In the future, he wants someone with a robotic arm not only to be able to grab a cup but also to caress it, grip it, or snatch it. Consider how you can probably tell whether your mom or your best friend is approaching just by the sound of their footsteps. Contreras-Vidal wants to restore that level of individuality to people who have prosthetics.

Contreras-Vidal thinks science can benefit art just as much as art can benefit science. Both he and Robleto hope that their research can reduce many artists’ distrust of science and help advance both fields through collaboration.

Post by Lydia Goff

Using Drones to Feed Billions

A drone flying over an agricultural field

Drones revolutionizing farming

As our population continues its rapid growth, food is becoming increasingly scarce. By the year 2050, we will need to double our current food production to feed the estimated 9.6 billion people who will inhabit Earth.

A portrait of Maggie Monast

Maggie Monast

Thankfully, introducing drones and other high-tech equipment to farmers could be the solution to keeping our bellies full.

Last week, Dr. Ramon G. Leon of North Carolina State University and Maggie Monast of the Environmental Defense Fund spoke at Duke’s monthly Science & Society Dialogue, sharing their knowledge of what’s known as “precision agriculture.” At its core, precision agriculture is integrating technology with farming in order to maximize production.

It is easy to see that farming has already changed as a result of precision agriculture. The old family-run plot of land with animals and diverse crops has turned into large-scale, single-crop operations. This transition was made possible through the use of new technologies — tractors, irrigation, synthetic fertilizer, GMOs, pesticides — and is no doubt way more productive.

A portrait of Dr. Ramon G. Leon

Dr. Ramon G. Leon

So while the concept of precision agriculture certainly isn’t new, in today’s context it incorporates some particularly advanced and unexpected tools meant to further optimize yield while also conserving resources.

Drones equipped with special cameras and sensors, for example, can be flown over thousands of acres to gather huge amounts of data. This data produces maps of things like pest damage, crop stress and yield. One image from a drone can easily help a farmer monitor what’s going on: where to cut back on resources, what needs more attention, and where to grow a certain type of crop. Some drones can even plant and water crops for you.
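As a rough sketch of how raw drone imagery becomes a field map a farmer can act on, the snippet below averages per-pixel “greenness” readings over grid cells and flags the weakest blocks. The data, grid size, and threshold here are all invented for illustration; real systems work on calibrated multispectral imagery.

```python
import numpy as np

# Hypothetical sketch: turn a drone's per-pixel greenness readings into
# a coarse field map by averaging over grid cells, so a farmer can see
# at a glance which blocks of the field look stressed.
rng = np.random.default_rng(0)
greenness = rng.uniform(0.2, 0.9, size=(120, 120))  # stand-in for drone data

cell = 30  # pixels per grid cell
h, w = greenness.shape
field_map = greenness.reshape(h // cell, cell, w // cell, cell).mean(axis=(1, 3))

print(field_map.shape)        # a 4x4 summary of the whole field
stressed = field_map < 0.5    # flag cells below an arbitrary cutoff
print(int(stressed.sum()), "grid cells flagged for closer inspection")
```

Collapsing millions of pixels into a handful of grid cells is what makes the data digestible: the farmer scouts a few flagged blocks instead of walking the entire field.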

Blue River’s “See & Spray” focuses on cutting back herbicide use. Instead of spraying herbicide over an entire field and wasting most of it, this machine is trained to spray weeds directly, using only 10% of the normal amount of herbicide.

Similarly, another machine called the Greenseeker can decide where, when and how much fertilizer should be applied based on the greenness of the crop. Fertilizing efficiently means saving money and emitting less ozone-depleting nitrous oxide.
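Sensors of this kind typically estimate “greenness” with a vegetation index such as NDVI, which compares near-infrared and red reflectance (healthy plants reflect far more near-infrared light than red). The sketch below shows the idea; the cutoff values are invented for illustration and are not the Greenseeker’s actual calibration.

```python
# Sketch of greenness-based fertilizer decisions, in the spirit of
# sensors like the Greenseeker. NDVI (normalized difference vegetation
# index) ranges from -1 to 1; higher means a greener, healthier canopy.

def ndvi(nir: float, red: float) -> float:
    """NDVI = (NIR - red) / (NIR + red)."""
    return (nir - red) / (nir + red)

def fertilizer_rate(nir: float, red: float) -> str:
    """Map greenness to a coarse application rate (hypothetical cutoffs)."""
    g = ndvi(nir, red)
    if g > 0.6:      # already vigorous: extra fertilizer adds little
        return "low"
    elif g > 0.3:    # moderately stressed
        return "medium"
    else:            # pale, stressed crop
        return "high"

print(fertilizer_rate(nir=0.50, red=0.08))  # vigorous canopy -> "low"
print(fertilizer_rate(nir=0.30, red=0.25))  # stressed crop -> "high"
```

Applying more fertilizer only where the crop is pale is exactly the savings the speakers describe: less money spent and less nitrous oxide released.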

As you can see, fancy toys like these are extremely beneficial, and there are more out there. They enable farmers to make faster, better decisions and understand their land on an unprecedented level. At the same time, farmers can cut back on their resource usage. This should eventually result in a huge productivity boom while helping out the environment. Nice.

One problem preventing these technologies from really taking off is teaching farmers how to take advantage of them. As Dr. Leon put it, “we have all these toys, but nobody knows how to play with them.” However, this issue can be resolved with enough time. Some older farmers love messing around with the drones, and the next generation of farmers will have more exposure to this kind of technology growing up. Sooner or later, it may be no big deal to spot drones circling above fields of wheat as you road trip through the countryside.

A piece of farm equipment in a field

A Greenseeker mounted on a Boom Sprayer

Precision agriculture is fundamental to the modern agricultural revolution. It increases efficiency and reduces waste, and farming could even become a highly profitable business again as the cost of these technologies goes down. Is it the solution to our environmental and production problems? I guess we’ll know by 2050!

Post By Will Sheehan
