
    Dwarf minke whale tourism monitoring program (2003--2008)

    This report provides a comprehensive account of interactions with dwarf minke whales by swimming-with-whales (SWW)-endorsed vessels in the Cairns/Cooktown Management Area of the Great Barrier Reef Marine Park over the period 2003–2008. Results presented in this report are primarily based on analyses of Whale Sighting Sheets collected by the Great Barrier Reef tourism industry. Key management processes and outcomes arising from the twice-yearly stakeholder workshops (held pre- and post-season) during the 2003–2008 Great Barrier Reef Marine Park Authority-funded Dwarf Minke Whale Tourism Monitoring Program are also summarised and discussed. During the latter three years of this program, three PhD studies (by Mangott, Sobtzick and Curnock) contributed significantly to our knowledge of this unique aggregation of dwarf minke whales, their interactions with humans in the Great Barrier Reef Marine Park, and the sustainable management of these interactions. Some of the key findings of these three PhD studies are included in this report.

    Sound Design for a System of 1000 Distributed Independent Audio-Visual Devices

    This paper describes the sound design for Bloom, a light and sound installation made up of 1000 distributed independent audio-visual pixel devices, each with RGB LEDs, Wi-Fi, an accelerometer, a GPS sensor, and sound hardware. Systems of this type have been explored previously, but few have exceeded 30–50 devices and very few have included sound capability, so the sound design possibilities for large systems of distributed audio devices are not yet well understood. In this article we describe the hardware and software implementation of sound synthesis for this system, and the implications for the design of media in this context.
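
    A minimal sketch of what per-device synthesis in a system like this could look like, assuming each device renders short grains locally and individualises a shared pitch with its own sensor reading. All names, the sample rate, and the tilt-to-pitch mapping are illustrative assumptions, not the installation's actual implementation.

```python
# Hypothetical per-device synthesis voice for a Bloom-like system.
# Names, parameters, and the tilt-to-pitch mapping are assumptions.
import numpy as np

SAMPLE_RATE = 22050  # modest rate plausible for embedded sound hardware


def render_grain(base_freq_hz, tilt, duration_s=0.25, amp=0.5):
    """Render one sine grain; accelerometer tilt (-1..1) detunes the pitch."""
    freq = base_freq_hz * (1.0 + 0.1 * tilt)  # +/-10% detune from tilt
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    envelope = np.hanning(t.size)             # avoid clicks at grain edges
    return (amp * envelope * np.sin(2 * np.pi * freq * t)).astype(np.float32)


# Example: a network message sets the shared base pitch; each device
# individualises it with its own accelerometer reading.
grain = render_grain(base_freq_hz=440.0, tilt=0.3)
print(grain.shape, grain.dtype)
```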

    Individual minke whale recognition using deep learning convolutional neural networks

    The only known predictable aggregation of dwarf minke whales (Balaenoptera acutorostrata subsp.) occurs in the Australian offshore waters of the northern Great Barrier Reef in May–August each year. The identification of individual whales is required for research on the whales’ population characteristics and for monitoring the potential impacts of tourism activities, including commercial swims with the whales. At present, it is not cost-effective for researchers to manually process and analyze the tens of thousands of underwater images collated after each observation/tourist season, and a large database of historical non-identified imagery exists. This study reports the first proof of concept for recognizing individual dwarf minke whales using deep learning convolutional neural networks (CNNs). The “off-the-shelf” ImageNet-trained VGG16 CNN was used as the feature encoder of the per-pixel semantic segmentation Automatic Minke Whale Recognizer (AMWR). The most frequently photographed whale in a sample of 76 individual whales (MW1020) was identified in 179 of the 1320 images provided. Training and image augmentation procedures were developed to compensate for the small number of available images. The trained AMWR achieved 93% prediction accuracy on the testing subset of 36 positive/MW1020 and 228 negative/not-MW1020 images, where each negative image contained at least one of the other 75 whales. Furthermore, on the test subset, AMWR achieved 74% precision, 80% recall, and a 4% false-positive rate, making the presented approach comparable to or better than other state-of-the-art individual animal recognition results.
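
    A hedged sketch of the general architecture the abstract describes: an ImageNet-pretrained VGG16 convolutional stack as the feature encoder with a per-pixel classification head on top. The single 1x1 convolution head and the upsampling choice are assumptions for illustration, not the paper's exact AMWR architecture.

```python
# Minimal VGG16-encoder + per-pixel binary head, in the spirit of AMWR.
# Assumes PyTorch/torchvision; the head design is an illustrative assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights


class WhaleSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        # ImageNet-pretrained VGG16 convolutional stack as the feature encoder
        self.encoder = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features
        # 1x1 conv head: per-pixel logits for {not-MW1020, MW1020}
        self.head = nn.Conv2d(512, 2, kernel_size=1)

    def forward(self, x):
        feats = self.encoder(x)          # (N, 512, H/32, W/32)
        logits = self.head(feats)
        # upsample back to input resolution for a per-pixel prediction
        return F.interpolate(logits, size=x.shape[-2:],
                             mode="bilinear", align_corners=False)


model = WhaleSegmenter().eval()
with torch.no_grad():
    out = model(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 2, 224, 224])
```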

    Efficiency of scanning and attention to faces in infancy independently predict language development in a multiethnic and bilingual sample of 2-year-olds

    Efficient visual exploration in infancy is essential for cognitive and language development. It allows infants to participate in social interactions by attending to faces and learning about objects of interest. Visual scanning of scenes depends on a number of factors, and early differences in efficiency likely contribute to differences in learning and language development during subsequent years. Predicting language development in diverse samples is particularly challenging, as multiple additional sources of variability affect infant performance. In this study we tested how the complexity of visual scanning in the presence or absence of a face at 6–7 months of age relates to language development at 2 years of age in a multi-ethnic and predominantly bilingual sample from diverse socio-economic backgrounds. We used Recurrence Quantification Analysis to measure the temporal and spatial distribution of fixations recurring in the same area of a visual scene. We found that in the absence of a face, the temporal distribution of re-fixations on selected (but not all) objects of interest significantly predicted both receptive and expressive language scores, explaining 16–20% of the variance. A lower rate of re-fixations in the presence of a face also predicted higher receptive language scores, suggesting larger vocabularies in infants who effectively disengage from faces. Altogether, our results suggest that dynamic measures that quantify the complexity of visual scanning can reliably and robustly predict language development in highly diverse samples, and that selective attending to objects predicts language independently of attention to faces. As the eye-tracking and language assessments were carried out in early intervention centres, our study demonstrates the utility of mobile eye-tracking setups for early detection of risk in attention and language development.
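
    To make the core RQA quantity concrete, a minimal sketch of the recurrence rate of a fixation sequence: the share of fixation pairs that land within a spatial radius of each other (i.e. re-fixations on the same region). The radius value is an assumption; the papers' exact parameterisation may differ.

```python
# Recurrence rate of a scanpath: fraction of fixation pairs closer than
# `radius` pixels. The radius is an illustrative assumption.
import numpy as np


def recurrence_rate(fixations, radius=50.0):
    """fixations: (n, 2) array of fixation coordinates in pixels."""
    fix = np.asarray(fixations, dtype=float)
    # pairwise distances between all fixations in the sequence
    d = np.linalg.norm(fix[:, None, :] - fix[None, :, :], axis=-1)
    recur = d <= radius
    np.fill_diagonal(recur, False)  # a fixation trivially recurs with itself
    n = len(fix)
    return recur.sum() / (n * (n - 1)) if n > 1 else 0.0


# Example: repeated returns to the same object raise the recurrence rate.
scanpath = [(100, 100), (300, 120), (105, 98), (500, 400), (98, 103)]
print(round(recurrence_rate(scanpath), 3))
```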

    Beyond fixation durations: Recurrence quantification analysis reveals spatiotemporal dynamics of infant visual scanning

    Standard looking-duration measures in eye-tracking data provide only general quantitative indices, while details of the spatiotemporal structuring of fixation sequences are lost. To overcome this, various tools have been developed to measure the dynamics of fixations. However, these analyses are only useful when stimuli have high perceptual similarity, and they require the prior definition of areas of interest (AOIs). Although these methods have been widely applied in adult studies, relatively little is known about the temporal structuring of infant gaze-foraging behaviors, such as the variability of scanning over time or individual scanning patterns. To shed more light on the spatiotemporal characteristics of infant fixation sequences, we apply for the first time a methodology for nonlinear time-series analysis: recurrence quantification analysis (RQA). We present how the dynamics of infant scanning vary depending on the scene content during a "pop-out" search task. Moreover, we show how normalizing RQA measures by average fixation durations provides a more detailed account of the dynamics of fixation sequences. Finally, we link the RQA measures of the temporal dynamics of scanning with spatial information about the stimuli using heat maps of recurrences, without the need to define AOIs a priori, and present how infants’ foraging strategies are driven by image content. We conclude that the RQA methodology has potential applications in the analysis of the temporal dynamics of infant visual foraging, offering advantages over existing methods.
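
    One way to picture the AOI-free "heat map of recurrences" the abstract mentions: accumulate only the fixations that participate in at least one recurrence into a 2D histogram over the stimulus. The grid size and radius below are assumptions, not the paper's settings.

```python
# AOI-free recurrence heat map: fixations that revisit a location within
# `radius` pixels are binned over the stimulus. Bins/radius are assumptions.
import numpy as np


def recurrence_heatmap(fixations, width, height, radius=50.0, bins=32):
    fix = np.asarray(fixations, dtype=float)
    d = np.linalg.norm(fix[:, None, :] - fix[None, :, :], axis=-1)
    recur = d <= radius
    np.fill_diagonal(recur, False)
    recurrent = fix[recur.any(axis=1)]  # fixations involved in any recurrence
    hist, _, _ = np.histogram2d(recurrent[:, 0], recurrent[:, 1],
                                bins=bins, range=[[0, width], [0, height]])
    return hist


heat = recurrence_heatmap([(100, 100), (105, 98), (400, 300), (98, 103)],
                          width=800, height=600)
print(heat.sum())  # 3 recurrent fixations accumulated
```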

    Automatic sorting of Dwarf Minke Whale underwater images

    A predictable aggregation of dwarf minke whales (Balaenoptera acutorostrata subspecies) occurs annually in the Australian waters of the northern Great Barrier Reef in June–July, and has been the subject of a long-term photo-identification study. Researchers from the Minke Whale Project (MWP) at James Cook University collect large volumes of underwater digital imagery each season (e.g., 1.8 TB in 2018), much of which is contributed by citizen scientists. Manual processing and analysis of this quantity of data had become infeasible, and Convolutional Neural Networks (CNNs) offered a potential solution. Our study sought to design and train a CNN that could detect whales from video footage in complex near-surface underwater surroundings and differentiate the whales from people, boats and recreational gear. We modified known classification CNNs to localise whales in video frames and digital still images. The required high classification accuracy was achieved by discovering an effective negative-labelling training technique. This resulted in a less than 1% false-positive classification rate and a below 0.1% false-negative rate. The final operational CNN pipeline processed all videos (sampling every 10th frame) in approximately four days running on two GPUs, delivering 1.95 million sorted images.
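
    A hedged sketch of the frame-sampling side of such a sorting pipeline, assuming OpenCV for video decoding. The classifier stub and the file path are placeholders standing in for the trained CNN and the MWP data, not the project's actual code.

```python
# Run a classifier on every `step`-th frame of a video, mirroring the
# 10-frame sampling interval. classify_frame is a stand-in for the CNN.
import cv2


def classify_frame(frame):
    """Stand-in for the whale/not-whale CNN; returns True if a whale is found."""
    return False  # replace with the trained model's prediction


def sort_video(path, step=10):
    cap = cv2.VideoCapture(path)
    kept, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0 and classify_frame(frame):
            kept.append(index)  # frame indices flagged as containing a whale
        index += 1
    cap.release()
    return kept


print(sort_video("example_dive_video.mp4"))  # placeholder filename
```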

    Brain responses and looking behavior during audiovisual speech integration in infants predict auditory speech comprehension in the second year of life

    The use of visual cues during the processing of audiovisual (AV) speech is known to be less efficient in children and adults with language difficulties, and such difficulties are known to be more prevalent in children from low-income populations. In the present study, we followed an economically diverse group of thirty-seven infants longitudinally from 6–9 months to 14–16 months of age. We used eye-tracking to examine whether individual differences in visual attention during AV processing of speech at 6–9 months, particularly when processing congruent and incongruent auditory and visual speech cues, might be indicative of later language development. Twenty-two of these infants also participated in an event-related potential (ERP) AV task within the same experimental session. Language development was then followed up at 14–16 months using two measures: the Preschool Language Scale and the Oxford Communicative Development Inventory. The results show that infants who were less efficient in auditory speech processing at 6–9 months had lower receptive language scores at 14–16 months. A correlational analysis revealed that the pattern of face scanning and ERP responses to audiovisually incongruent stimuli at 6–9 months were both significantly associated with language development at 14–16 months. These findings add to the understanding of individual differences in neural signatures of AV processing and associated looking behavior in infants.