Using a deep convolutional neural network to identify Dreissenid mussels

Session: Remote Sensing, Visualization, and Spatial Data Applications for the Great Lakes (4)

Jennifer Wardell, U.S. Geological Survey, [email protected]
Peter Esselman, U.S. Geological Survey, [email protected]

Abstract

Deep convolutional neural networks (DCNNs) analyze visual imagery by passing it through many stacked layers of convolutional filters. This project uses a DCNN developed by Dan Buscombe that identifies features in an image by assigning each pixel to a category and labeling each category with a different color. The code was originally written to identify landscape features in aerial photographs; we have applied it to underwater photographs to identify mussels, substrates, and other aquatic life. The main focus of this project is to determine whether a DCNN can accurately differentiate live mussels from dead mussels and distinguish both from the surrounding substrate. This work is a starting point for evaluating how labeling accuracy improves as more training photographs are added and for identifying where the approach needs improvement in order to create and train a DCNN that reliably identifies dreissenid mussels. Once such a network is developed, it will support accurate estimates of dreissenid abundance and biomass in the Great Lakes, improving our understanding of the current state of the food web and nutrient cycling and informing future management techniques.
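
The sketch below is purely illustrative and is not the project's actual code (which is based on Dan Buscombe's software). It shows, under assumed class names and an assumed minimal architecture, the general idea described above: a small encoder-decoder CNN assigns every pixel of a photograph to a category and the result is rendered as a color-coded label map.

```python
# Minimal sketch of pixel-wise classification (semantic segmentation); all class
# names, colors, and the network shape are assumptions for illustration only.
import torch
import torch.nn as nn

NUM_CLASSES = 4  # assumed classes: live mussel, dead mussel, substrate, other

class TinySegmenter(nn.Module):
    """Tiny encoder-decoder producing one class logit per pixel."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # downsample by 2
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1),          # per-pixel class logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Display color (RGB) for each predicted class index, mimicking the
# color-coded label images described in the abstract (colors are arbitrary).
PALETTE = torch.tensor([
    [255, 0, 0],      # live mussel (assumed)
    [128, 128, 128],  # dead mussel (assumed)
    [139, 90, 43],    # substrate (assumed)
    [0, 0, 255],      # other aquatic life (assumed)
], dtype=torch.uint8)

def label_image(model, image):
    """image: float tensor of shape (3, H, W), values scaled to [0, 1]."""
    model.eval()
    with torch.no_grad():
        logits = model(image.unsqueeze(0))           # (1, C, H, W)
        classes = logits.argmax(dim=1).squeeze(0)    # (H, W) class indices
    return PALETTE[classes]                          # (H, W, 3) color-coded map

if __name__ == "__main__":
    model = TinySegmenter()
    fake_photo = torch.rand(3, 256, 256)  # stand-in for an underwater photograph
    colored = label_image(model, fake_photo)
    print(colored.shape)  # torch.Size([256, 256, 3])
```

In practice the network would be trained on hand-labeled underwater photographs so that the per-pixel predictions reflect real mussel, substrate, and other classes; the untrained weights here only demonstrate the input-to-color-map workflow.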