Habitat-Net: Habitat interpretation using deep neural nets
Biological diversity is decreasing at 100–1,000 times the pre-human rate [1] [2], and tropical rainforests are among the most vulnerable ecosystems. To avoid species extinction, we need to understand the factors influencing the occurrence of species. Fast, reliable computer-assisted tools can help to describe the habitat and thus to understand species-habitat associations. This understanding is of utmost importance for more targeted species conservation efforts. Due to logistical challenges and time-consuming manual processing of field data, months up to years are often needed to progress from data collection to data interpretation. Deep learning can be used to significantly shorten this time while keeping a similar level of accuracy. Here, we propose Habitat-Net: a novel Convolutional Neural Network (CNN) based method to segment habitat images of rainforests. Habitat-Net takes color images as input and, after multiple layers of convolution and deconvolution, produces a binary segmentation of an image. The primary contribution of Habitat-Net is the translation of medical imaging knowledge (inspired by U-Net [3]) to ecological problems. The entire Habitat-Net pipeline works automatically without any user interaction. Our only assumption is the availability of annotated images, from which Habitat-Net learns the most distinguishing features automatically. In our experiments, we use two habitat datasets: (1) canopy and (2) understory vegetation. We train the model with 800 canopy images and 700 understory images separately. Our testing dataset has 150 canopy and 170 understory images. We use the Dice coefficient and the Jaccard index to quantify the overlap between ground-truthed segmentation images and those obtained by the Habitat-Net model. This results in a mean Dice coefficient (mean Jaccard index) for the segmentation of canopy and understory images of 0.89 (0.81) and 0.79 (0.69), respectively.
Compared to manual segmentation, Habitat-Net prediction is approximately 3,000 to 150,000 times faster. For a typical canopy dataset of 335 images, Habitat-Net reduces total processing time to 5 seconds (15 milliseconds/image) from 4 hours (45 seconds/image). In this study, we show that it is possible to speed up the data pipeline in the ecological domain using deep learning. In the future, we plan to create a freely available mobile app based on Habitat-Net technology to characterize habitat directly and automatically in the field. In combination with ecological models, our tools will help to understand the ecology of some poorly known, but often highly threatened, species and thus contribute to more timely conservation interventions.
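The Dice coefficient and Jaccard index reported above are standard overlap measures between a predicted binary mask and its ground truth. A minimal NumPy sketch of both metrics (function names are ours for illustration, not from the Habitat-Net code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Convention: two empty masks overlap perfectly.
    return 2.0 * intersection / total if total else 1.0

def jaccard_index(pred: np.ndarray, truth: np.ndarray) -> float:
    """Jaccard = |A ∩ B| / |A ∪ B| for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return intersection / union if union else 1.0
```

Both scores range from 0 (no overlap) to 1 (identical masks), and they are monotonically related: for the same pair of masks, Jaccard is always the lower of the two, which matches the paired values reported above, e.g. 0.89 (0.81).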
REFERENCES:
1. Sachs, Jeffrey D., et al. "Biodiversity conservation and the millennium development goals." Science 325.5947 (2009): 1502-1503.
2. Chapin III, F. Stuart, et al. "Consequences of changing biodiversity." Nature 405.6783 (2000): 234.
3. Ronneberger, Olaf, Philipp Fischer, and Thomas Brox. "U-net: Convolutional networks for biomedical image segmentation." International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2015.
Interrelationship between electrocoalescence and interfacial tension in a high acidity crude: Effect of pH and nature of alkalinity
The efficacy of electrocoalescence is critically dependent upon the interfacial tension of the crude-water interface. This study demonstrates the effect of interfacial tension on electrocoalescence efficiency in crudes with high acidity. The interfacial tension is estimated using a spinning drop tensiometer (SDT), and electrocoalescence experiments are performed at an electric field of 1.15 kV(rms)/cm at a frequency of 50 Hz. It is observed that separation of water from the crude is hindered at high pH for two very different reasons, depending upon the source of alkalinity. Calcium hydroxide induced alkalinity leads to a more rigid interface, resulting in delayed electrocoalescence. On the other hand, sodium hydroxide based alkalinity leads to an ultra-low tension of the crude-water interface, thereby causing an oil-in-water emulsion. An increase in pH also leads to poor brine resolution; in the case of sodium hydroxide based alkalinity (pH = 10), an unresolved turbid emulsion is obtained.
Habitat-Net: Segmentation of habitat images using deep learning
Understanding the environmental factors that influence forest health, as well as the occurrence and abundance of wildlife, is a central topic in forestry and ecology. However, the manual processing of field habitat data is time-consuming, and months are often needed to progress from data collection to data interpretation. To shorten the time needed to process the data, we propose here Habitat-Net: a novel deep learning application based on Convolutional Neural Networks (CNN) to segment habitat images of tropical rainforests. Habitat-Net takes color images as input and, after multiple layers of convolution and deconvolution, produces a binary segmentation of the input image. We worked on two different types of habitat datasets that are widely used in ecological studies to characterize forest conditions: canopy closure and understory vegetation. We trained the model with 800 canopy images and 700 understory images separately, and then used 149 canopy and 172 understory images to test the performance of Habitat-Net. We compared the performance of Habitat-Net to that of a simple threshold-based method, manual processing by a second researcher, and a CNN approach called U-Net, upon which Habitat-Net is based. Habitat-Net, U-Net, and simple thresholding reduced total processing time to milliseconds per image, compared to 45 s per image for manual processing. However, the higher mean Dice coefficient of Habitat-Net (0.94 for canopy and 0.95 for understory) indicates that the accuracy of Habitat-Net is higher than that of both simple thresholding (0.64, 0.83) and U-Net (0.89, 0.94). Habitat-Net will be of great relevance for ecologists and foresters who need to monitor changes in forest structure. The automated workflow not only reduces the time but also standardizes the analytical pipeline and, thus, reduces the degree of uncertainty that would be introduced by manual processing of images by different people (either over time or between study sites).
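The simple threshold-based baseline mentioned above is not specified in detail in the abstract; a minimal sketch of one plausible variant, assuming a fixed global brightness threshold (the threshold value and function name are our assumptions, not from the paper), could look like:

```python
import numpy as np

def threshold_segment(gray: np.ndarray, threshold: float = 128.0) -> np.ndarray:
    """Binarize a grayscale image with a fixed global threshold.

    Pixels brighter than the threshold (e.g. sky visible through the
    canopy) map to 1; the rest (e.g. foliage) map to 0. The default
    threshold of 128 is illustrative only.
    """
    return (gray > threshold).astype(np.uint8)
```

Such a global threshold is fast but brittle under varying illumination, which is consistent with its lower Dice scores (0.64, 0.83) relative to the learned models.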