4 research outputs found

    A field and video-annotation guide for baited remote underwater stereo-video surveys of demersal fish assemblages

    Researchers TL, BG, JW, NB and JM were supported by the Marine Biodiversity Hub through funding from the Australian Government's National Environmental Science Program. Data validation scripts and GlobalArchive.org were supported by the Australian Research Data Commons, the Gorgon-Barrow Island Net Conservation Benefits Fund (administered by the Government of Western Australia), and the BHP/UWA Biodiversity and Societal Benefits of Restricted Access Areas collaboration.

    1. Baited remote underwater stereo-video systems (stereo-BRUVs) are a popular tool for sampling demersal fish assemblages and gathering data on their relative abundance and body-size structure in a robust, cost-effective, and non-invasive manner. Given the rapid uptake of the method, subtle differences have emerged in the way stereo-BRUVs are deployed and how the resulting imagery is annotated. These disparities limit the interoperability of datasets obtained across studies, preventing broad-scale insights into the dynamics of ecological systems.
    2. We provide the first globally accepted guide for using stereo-BRUVs to survey demersal fish assemblages and associated benthic habitats.
    3. Information on stereo-BRUV design, camera settings, field operations, and image annotation is outlined. Additionally, we provide links to protocols for data validation, archiving, and sharing.
    4. Globally, the use of stereo-BRUVs is spreading rapidly. We provide a standardised protocol that will reduce methodological variation among researchers and encourage the use of Findable, Accessible, Interoperable, and Reusable (FAIR) workflows, increasing the ability to synthesise global datasets and answer a broad suite of ecological questions.

    Unravelling student evaluations of courses and teachers

    No full text
    There is debate over the functional basis of student evaluations of academics, and fresh potential for looking at the data in new ways. Student evaluation data were collated over a three-year period (Semester 2 2015 to Semester 1 2018). We used a General Linear Model to estimate the variation in course scores explained by a number of coordinator and course attributes. Three significant factors collectively explain 49% of the School's variation in course scores: individual coordinator, student evaluation response rate, and mode of delivery. Next, we used hierarchical clustering to explore the inter-relationships among the eight course and teaching evaluation questions. Learning appears to be related to stimulation, whereas overall satisfaction appears to be related to quality of learning materials and course structure (i.e. aspects of course organisation). Student evaluation response rate is positively correlated with all eight course questions, but most strongly with a question relating to receiving adequate feedback. This perhaps implies some reciprocity in the flow of information between student and coordinator. The overall teaching rating awarded to academics clusters most with approachability and encouragement of student input (aspects of temperament and style) and not with explanatory skill or organisational ability.
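    The two analyses this abstract describes — a linear model reporting the share of variance in course scores explained by predictors, followed by hierarchical clustering of inter-correlated evaluation questions — can be sketched on synthetic data. This is a minimal illustration of the general techniques, not the study's actual analysis: all data, variable names, and parameter choices below are hypothetical.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    n = 200

    # Hypothetical predictors: evaluation response rate and delivery mode
    # (0 = on-campus, 1 = online), driving a synthetic course score.
    response_rate = rng.uniform(0.1, 0.9, n)
    online = rng.integers(0, 2, n)
    course_score = 3.0 + 1.5 * response_rate - 0.3 * online + rng.normal(0, 0.5, n)

    # Ordinary least squares with an intercept; R^2 is the share of variance
    # in course scores explained by the predictors.
    X = np.column_stack([np.ones(n), response_rate, online])
    beta, *_ = np.linalg.lstsq(X, course_score, rcond=None)
    resid = course_score - X @ beta
    r_squared = 1 - resid.var() / course_score.var()

    # Hierarchical clustering of question inter-correlations: questions that
    # students answer similarly end up in the same cluster. Two latent factors
    # generate two correlated pairs of questions.
    latent = rng.normal(0, 1, (n, 2))
    questions = np.column_stack([
        latent[:, 0] + rng.normal(0, 0.3, n),  # Q1 } correlated pair
        latent[:, 0] + rng.normal(0, 0.3, n),  # Q2 }
        latent[:, 1] + rng.normal(0, 0.3, n),  # Q3 } correlated pair
        latent[:, 1] + rng.normal(0, 0.3, n),  # Q4 }
    ])
    # Distance between questions = 1 - correlation (condensed upper triangle).
    dist = 1 - np.corrcoef(questions.T)
    Z = linkage(dist[np.triu_indices(4, k=1)], method="average")
    labels = fcluster(Z, t=2, criterion="maxclust")

    print(f"R^2 = {r_squared:.2f}")
    print("question clusters:", labels)
    ```

    Cutting the dendrogram at two clusters recovers the two correlated pairs, mirroring how clustering grouped "learning" with "stimulation" and the overall teaching rating with approachability in the study.
    
    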