28 research outputs found

    A deep learning approach to photo-identification demonstrates high performance on two dozen cetacean species

    We thank the countless individuals who collected and/or processed the nearly 85,000 images used in this study and those who assisted, particularly those who sorted these images from the millions that did not end up in the catalogues. Additionally, we thank the other Kaggle competitors who helped develop the ideas, models, and data used here, particularly those who released their datasets to the public. The graduate assistantship for Philip T. Patton was funded by the NOAA Fisheries QUEST Fellowship. This paper represents HIMB and SOEST contribution numbers 1932 and 11679, respectively. The technical support and advanced computing resources from University of Hawaii Information Technology Services—Cyberinfrastructure, funded in part by National Science Foundation CC* awards #2201428 and #2232862, are gratefully acknowledged. Every photo-identification image was collected under permits according to relevant national guidelines, regulations, and legislation.

    Deep diving by offshore bottlenose dolphins (Tursiops spp.)

    We used satellite-linked tags to evaluate dive behavior in offshore bottlenose dolphins (Tursiops spp.) near the island of Bermuda. The data provide evidence that bottlenose dolphins commonly perform both long (>272 s) and deep (>199 m) dives, with the deepest and longest dives reaching 1,000 m and 826 s (13.8 min), respectively. The data show a relationship between dive duration and dive depth for dives longer than about 272 s. There was a diurnal pattern to dive behavior, with most dives deeper than 50 m performed at night; deep diving began at sunset and varied throughout the night. We used the cumulative frequency of dive duration to estimate a behavioral aerobic dive limit (bADL) of around 560-666 s (9.3-11.1 min) for adult dolphins in this population. Following dives that exceeded the bADL, dolphins spent significantly more time in the uppermost 50 m than they did following dives shorter than the bADL. We conclude that the offshore ecotype off Bermuda, unlike the shallow-diving near-shore bottlenose dolphin, is a deep-diving ecotype and may provide a useful animal model for studying extreme diving behavior and adaptations. Funding Agencies: Office of Naval Research (ONR YIP award) [N00014-14-1-0563]; Dolphin Quest Inc.
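
    As a side note on the cumulative-frequency approach mentioned in this abstract, the sketch below shows one simple way a behavioral dive limit could be estimated from tag data in Python. The quantile cutoff, the function name, and the simulated gamma-distributed dive durations are all illustrative assumptions, not the authors' actual procedure.

        import numpy as np

        def badl_from_durations(durations_s, quantile=0.95):
            """Estimate a behavioral aerobic dive limit (bADL) as the dive
            duration below which a chosen fraction of all dives fall.

            A simplified proxy for a cumulative-frequency analysis; the
            paper's exact procedure (e.g., a breakpoint in the cumulative
            distribution) may differ.
            """
            durations = np.sort(np.asarray(durations_s, dtype=float))
            # Empirical cumulative frequency of dive durations.
            cum_freq = np.arange(1, durations.size + 1) / durations.size
            # Duration at which the cumulative frequency first reaches `quantile`.
            idx = np.searchsorted(cum_freq, quantile)
            return durations[min(idx, durations.size - 1)]

        # Example with simulated dive durations (seconds); real data would come
        # from the satellite-linked tag records.
        rng = np.random.default_rng(42)
        dives = rng.gamma(shape=2.0, scale=150.0, size=500)
        print(f"bADL estimate: {badl_from_durations(dives):.0f} s")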

    Biologically Important Areas II for cetaceans within U.S. and adjacent waters - Updates and the application of a new scoring system

    Building on earlier work identifying Biologically Important Areas (BIAs) for cetaceans in U.S. waters (BIA I), we describe the methodology and structured expert elicitation principles used in the “BIA II” effort to update existing BIAs, identify and delineate new BIAs, and score BIAs for 25 cetacean species, stocks, or populations in seven U.S. regions. BIAs represent areas and times in which cetaceans are known to concentrate for activities related to reproduction, feeding, and migration, as well as known ranges of small and resident populations. In this BIA II effort, regional cetacean experts identified the full extent of any BIAs in or adjacent to U.S. waters, based on scientific research, Indigenous knowledge, local knowledge, and community science. The new BIA scoring and labeling system improves the utility and interpretability of the BIAs by designating an overall Importance Score that considers both (1) the intensity and characteristics underlying an area’s identification as a BIA; and (2) the quantity, quality, and type of information, and associated uncertainties, upon which the BIA delineation and scoring depend. Each BIA is also scored for boundary uncertainty and spatiotemporal variability (dynamic, ephemeral, or static). BIAs are region-, species-, and time-specific, and may be hierarchically structured where detailed information is available to support different scores across a BIA. BIAs are compilations of the best available science and have no inherent regulatory authority. BIAs may be used by international, federal, state, local, or Tribal entities and the public to support planning and marine mammal impact assessments, and to inform the development of conservation and mitigation measures, where appropriate under existing authorities. Information provided online for each BIA includes: (1) a BIA map; (2) BIA scores and label; (3) a metadata table detailing the data, assumptions, and logic used to delineate, score, and label the BIA; and (4) a list of references used in the assessment. Regional manuscripts present maps and scores for the BIAs, by region, and narratives summarizing the rationale and information upon which several representative BIAs are based. We conclude with a comparison of BIA II to similar international efforts and recommendations for improving future BIA assessments.
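
    To make the two-axis scoring idea concrete, here is a minimal, hypothetical Python sketch of how an overall Importance Score could combine an intensity score with a data-support score. The level names, the 1-3 scale, and the min-based combination rule are assumptions for illustration only; the published BIA II scoring matrix defines the actual rules.

        # Hypothetical levels for the two scoring axes (assumed, not from the paper).
        INTENSITY = {"low": 1, "moderate": 2, "high": 3}
        DATA_SUPPORT = {"limited": 1, "adequate": 2, "strong": 3}

        def importance_score(intensity: str, data_support: str) -> int:
            """Combine the two axes into a single 1-3 Importance Score.

            Taking the minimum means weak supporting data caps the overall
            score; the published scoring matrix may combine the axes
            differently.
            """
            return min(INTENSITY[intensity], DATA_SUPPORT[data_support])

        # An intensely used area with limited supporting data scores low overall.
        print(importance_score("high", "limited"))  # -> 1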

    DataSheet_2_Rise of the Machines: Best Practices and Experimental Evaluation of Computer-Assisted Dorsal Fin Image Matching Systems for Bottlenose Dolphins.pdf

    Photographic-identification (photo-ID) of bottlenose dolphins using individually distinctive features on the dorsal fin is a well-established and useful tool for tracking individuals; however, this method can be labor-intensive, especially when dealing with large catalogs and/or infrequently surveyed populations. Computer vision algorithms have been developed that can find a fin in an image, characterize the features of the fin, and compare the fin to a catalog of known individuals to generate a ranking of potential matches based on dorsal fin similarity. We examined if and how researchers use computer vision systems in their photo-ID process and developed an experiment to evaluate the performance of the most commonly used, recently developed systems, using a long-term photo-ID database of known individuals curated by the Chicago Zoological Society’s Sarasota Dolphin Research Program. Survey results obtained for the “Rise of the machines – Application of automated systems for matching dolphin dorsal fins: current status and future directions” workshop held at the 2019 World Marine Mammal Conference indicated that most researchers still rely on manual methods for comparing unknown dorsal fin images to reference catalogs of known individuals. Experimental evaluation of the finFindR R application, as well as the CurvRank, CurvRank v2, and finFindR implementations in Flukebook, suggests that high match rates can be achieved with these systems, with the highest match rates found when only good- to excellent-quality images of fins with average to high distinctiveness are included in the matching process: for the finFindR R application and the CurvRank and CurvRank v2 algorithms within Flukebook, more than 98.92% of correct matches were in the top 50 ranked positions, and more than 91.94% of correct matches were returned in the first ranked position. Our results offer the first comprehensive examination of the performance and accuracy of computer vision algorithms designed to assist with the photo-ID process for bottlenose dolphins and can be used to build trust among researchers hesitant to use these systems. Based on our findings and discussions from the “Rise of the Machines” workshop, we provide recommendations for best practices for using computer vision systems for dorsal fin photo-ID.
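
    The top-1 and top-50 statistics reported above are instances of a top-k match-rate metric. The sketch below shows how such a metric is typically computed from ranked candidate lists in Python; the function and the toy catalog IDs are illustrative and are not code from the evaluated systems.

        def top_k_match_rate(ranked_candidates, true_ids, k):
            """Fraction of query fins whose correct identity appears in the
            top-k candidates returned by a matching system.

            ranked_candidates: one ranked list of catalog IDs per query image.
            true_ids: the known identity of each query fin.
            """
            hits = sum(true_id in ranks[:k]
                       for ranks, true_id in zip(ranked_candidates, true_ids))
            return hits / len(true_ids)

        # Toy example: three query fins matched against a small catalog.
        ranks = [["F12", "F03", "F91"], ["F07", "F12", "F55"], ["F91", "F55", "F07"]]
        truth = ["F12", "F12", "F03"]
        print(top_k_match_rate(ranks, truth, k=1))  # 1/3 of queries matched at rank 1
        print(top_k_match_rate(ranks, truth, k=2))  # 2/3 matched within the top 2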

    DataSheet_3_Rise of the Machines: Best Practices and Experimental Evaluation of Computer-Assisted Dorsal Fin Image Matching Systems for Bottlenose Dolphins.pdf


    DataSheet_1_Rise of the Machines: Best Practices and Experimental Evaluation of Computer-Assisted Dorsal Fin Image Matching Systems for Bottlenose Dolphins.pdf


    Vulnerability to climate change of United States marine mammal stocks in the western North Atlantic, Gulf of Mexico, and Caribbean.

    Climate change and climate variability are affecting marine mammal species, and these impacts are projected to continue in the coming decades. Vulnerability assessments provide a framework for evaluating climate impacts over a broad range of species using currently available information. We conducted a trait-based climate vulnerability assessment using expert elicitation for 108 marine mammal stocks and stock groups in the western North Atlantic, Gulf of Mexico, and Caribbean Sea. Our approach combined the exposure (projected change in environmental conditions) and sensitivity (ability to tolerate and adapt to changing conditions) of marine mammal stocks to estimate their vulnerability to climate change and to categorize stocks with a vulnerability index. The climate vulnerability score was very high for 44% (n = 47) of these stocks, high for 29% (n = 31), moderate for 20% (n = 22), and low for 7% (n = 8). The majority of stocks (n = 78; 72%) scored very high exposure, whereas 24% (n = 26) scored high and 4% (n = 4) scored moderate. The sensitivity score was very high for 33% (n = 36) of these stocks, high for 18% (n = 19), moderate for 34% (n = 37), and low for 15% (n = 16). Vulnerability results were summarized for stocks in five taxonomic groups: pinnipeds (n = 4; 25% high, 75% moderate), mysticetes (n = 7; 29% very high, 57% high, 14% moderate), ziphiids (n = 8; 13% very high, 50% high, 38% moderate), delphinids (n = 84; 52% very high, 23% high, 15% moderate, 10% low), and other odontocetes (n = 5; 60% high, 40% moderate). Factors including temperature, ocean pH, and dissolved oxygen were the primary drivers of high climate exposure, with effects mediated through prey and habitat parameters. We quantified sources of uncertainty by bootstrapping vulnerability scores, conducting leave-one-out analyses of individual attributes and individual scorers, and scoring data quality for each attribute. These results provide information for researchers, managers, and the public on marine mammal responses to climate change to enhance the development of more effective marine mammal management, restoration, and conservation activities that address current and future environmental variation and biological responses due to climate change.
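
    As a rough illustration of the bootstrapping step described above, the sketch below resamples hypothetical expert scores for a single stock to obtain a mean vulnerability-attribute score with a percentile interval. The 1-4 scoring scale and the example values are assumptions; the study's actual resampling scheme may differ.

        import numpy as np

        def bootstrap_score(attribute_scores, n_boot=1000, seed=0):
            """Bootstrap the mean of a stock's attribute scores by resampling
            expert scores with replacement; returns the bootstrap mean and a
            95% percentile interval."""
            rng = np.random.default_rng(seed)
            scores = np.asarray(attribute_scores, dtype=float)
            boot_means = rng.choice(scores, size=(n_boot, scores.size)).mean(axis=1)
            return boot_means.mean(), np.percentile(boot_means, [2.5, 97.5])

        # Hypothetical expert scores (1 = low ... 4 = very high) for one stock.
        mean, (lo, hi) = bootstrap_score([3, 4, 4, 2, 3, 4])
        print(f"mean = {mean:.2f}, 95% interval = ({lo:.2f}, {hi:.2f})")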

    Exposure factor mean scores for all scored stocks.

    Exposure factor mean scores for 108 U.S. marine mammal stocks in the western North Atlantic, Gulf of Mexico, and Caribbean Sea. The vertical bar represents the median; the box is bounded by the first and third quartiles; whiskers represent 1.5 times the interquartile range; points represent all outlying values.