283 research outputs found

    The Larch Environment - Python programs as visual, interactive literature

    The Larch Environment is designed for the creation of programs that take the form of interactive technical literature. We introduce a novel approach to combined textual and visual programming by allowing visual, interactive objects to be embedded within textual source code, and segments of source code to be further embedded within those objects. We retain the strengths of text-based source code, while enabling visual programming where it is beneficial. Additionally, embedded objects and code provide a simple object-oriented approach to extending the syntax of a language, in a similar fashion to LISP macros. We provide a rapid prototyping and experimentation environment in the form of an active document system which mixes rich text with executable source code. Larch is supported by a simple type-coercion-based presentation protocol that displays normal Java and Python objects in a visual, interactive form. The ability to freely combine objects and source code within one another allows for the construction of rich interactive documents and experimentation with novel programming language extensions.
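    The abstract above describes a type-coercion presentation protocol that maps ordinary objects to visual forms. A minimal sketch of the general idea is shown below; all names here (`present`, `PRESENTERS`) are hypothetical illustrations, not Larch's actual API, and the "visual" output is stubbed out as tagged strings.

```python
# Hypothetical sketch of a type-coercion presentation protocol:
# presenters are registered per type, and presentation dispatches
# on the object's method resolution order.

PRESENTERS = {}

def presenter(tp):
    """Register a presentation function for a given type."""
    def register(fn):
        PRESENTERS[tp] = fn
        return fn
    return register

@presenter(int)
def present_int(x):
    return f"<number>{x}</number>"

@presenter(list)
def present_list(xs):
    # Presentation recurses, so objects nest inside one another.
    items = "".join(present(x) for x in xs)
    return f"<sequence>{items}</sequence>"

def present(obj):
    """Coerce an ordinary object to a presentational form by walking its MRO."""
    for tp in type(obj).__mro__:
        if tp in PRESENTERS:
            return PRESENTERS[tp](obj)
    return f"<text>{obj!r}</text>"  # fallback: plain repr
```

    The recursive call in `present_list` is what lets objects and nested content freely combine, in the spirit of the embedding described above.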

    Convolutional Neural Networks for Counting Fish in Fisheries Surveillance Video

    We present a computer vision tool that analyses video from a CCTV system installed on fishing trawlers to monitor discarded fish catch. The system aims to support expert observers who review the footage and verify numbers, species and sizes of discarded fish. The operational environment presents a significant challenge for these tasks. Fish are processed below deck under fluorescent lights, they are randomly oriented and there are multiple occlusions. The scene is unstructured and complicated by the presence of fishermen processing the catch. We describe an approach to segmenting the scene and counting fish that exploits the N^4-Fields algorithm. We performed extensive tests of the algorithm on a data set comprising 443 frames from 6 belts. Results indicate the relative count error (for individual fish) ranges from 2% to 16%. We believe this is the first system that is able to handle footage from operational trawlers.
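    The relative count error quoted above can be stated precisely. A small sketch, where the exact normalisation used in the paper is an assumption (here: error relative to the true count):

```python
def relative_count_error(predicted, actual):
    """Relative count error for one belt: |predicted - actual| / actual.

    Assumes the error is normalised by the true (observer-verified) count;
    the paper's exact definition may differ.
    """
    return abs(predicted - actual) / actual

# e.g. 92 fish counted where observers recorded 100 gives an 8% error:
# relative_count_error(92, 100) == 0.08
```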

    Eggshell conductance and respiration during development in a burrowing and non-burrowing bird


    Future NASA solar system exploration activities: A framework for international cooperation

    The goals and approaches for planetary exploration as defined for the NASA Solar System Exploration Program are discussed. The evolution of the program since the formation of the Solar System Exploration Committee (SSEC) in 1980 is reviewed, and the primary missions comprising the program are described.

    Self-ensembling for visual domain adaptation

    This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant (Tarvainen et al., 2017) of temporal ensembling (Laine et al., 2017), a technique that achieved state-of-the-art results in the area of semi-supervised learning. We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectiveness. Our approach achieves state-of-the-art results in a variety of benchmarks, including our winning entry in the VISDA-2017 visual domain adaptation challenge. In small image benchmarks, our algorithm not only outperforms prior art, but can also achieve accuracy that is close to that of a classifier trained in a supervised fashion. Comment: 20 pages, 3 figures; accepted as a poster at ICLR 201
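    The mean teacher idea referenced above maintains a teacher whose weights are an exponential moving average (EMA) of the student's. A minimal sketch of that update, with weights flattened to plain lists and the smoothing coefficient `alpha=0.99` chosen only for illustration:

```python
def ema_update(teacher, student, alpha=0.99):
    """One mean-teacher step: teacher <- alpha * teacher + (1 - alpha) * student.

    `teacher` and `student` are flat lists of weights; in practice this runs
    over every parameter tensor of the networks after each training step.
    """
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher, student)]
```

    Because the teacher averages the student over many steps, its predictions are more stable, which is what makes it a useful target for the consistency loss in self-ensembling.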

    Multi-spectral Pedestrian Detection via Image Fusion and Deep Neural Networks

    The use of multi-spectral imaging has been found to improve the accuracy of deep neural network-based pedestrian detection systems, particularly in challenging night-time conditions in which pedestrians are more clearly visible in thermal long-wave infrared bands than in plain RGB. In this article, the authors use the Spectral Edge image fusion method to fuse visible RGB and IR imagery, prior to processing using a neural network-based pedestrian detection system. The use of image fusion permits the use of a standard RGB object detection network without requiring the architectural modifications that are required to handle multi-spectral input. We contrast the performance of networks trained using fused images with that of networks that use plain RGB images and networks that use a multi-spectral input. © 2018 Society for Imaging Science and Technology
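    The appeal of fusion here is architectural: the detector still sees a three-channel image. As a stand-in for the Spectral Edge method (which is a specific gradient-domain technique, not reproduced here), a naive per-pixel weighted blend illustrates the input/output shapes involved:

```python
import numpy as np

def weighted_fusion(rgb, ir, w=0.5):
    """Naive per-pixel weighted blend of an RGB image and an IR band.

    This is a placeholder for illustration only -- NOT the Spectral Edge
    method. `rgb` has shape (H, W, 3) in [0, 1]; `ir` has shape (H, W).
    The output keeps the 3-channel shape a standard RGB detector expects.
    """
    ir3 = np.repeat(ir[..., None], 3, axis=-1)  # replicate IR across channels
    return (1 - w) * rgb + w * ir3
```

    Any fusion method with this signature preserves the key property named in the abstract: a stock RGB detection network can consume the result unchanged.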

    Threat-Based Approach to Risk, Case Study: The Strategic Homeland Infrastructure Risk Assessment (SHIRA)

    The culture of risk management is beginning to grow at the Department of Homeland Security (DHS). Created in response to the attacks of September 2001, the Department has, as one of its primary missions, to protect the nation from terrorism. Five years after its creation, and through several reorganizations, DHS still struggles to master risk management with respect to terrorism. Although DHS realized at its inception the need for the collaboration of intelligence and security professionals to jointly assess risk, it was not until the formation of the Homeland Infrastructure Threat and Risk Analysis Center (HITRAC) that DHS had a truly integrated approach to terrorism risk analysis.

    Using Deep Learning to Count Albatrosses from Space

    In this paper we test the use of a deep learning approach to automatically count Wandering Albatrosses in Very High Resolution (VHR) satellite imagery. We use a dataset of manually labelled imagery provided by the British Antarctic Survey to train and develop our methods. We employ a U-Net architecture, designed for image segmentation, to simultaneously classify and localise potential albatrosses. We aid training with the use of the Focal Loss criterion, to deal with extreme class imbalance in the dataset. Initial results achieve peak precision and recall values of approximately 80%. Finally we assess the model's performance in relation to interobserver variation, by comparing errors against an image labelled by multiple observers. We conclude that model accuracy falls within the range of human counters. We hope that the methods will streamline the analysis of VHR satellite images, enabling more frequent monitoring of a species which is of high conservation concern.
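    The focal loss mentioned above down-weights easy examples so training is not swamped by the overwhelming majority of background pixels. A minimal binary form, with the common defaults gamma = 2 and alpha = 0.25 assumed for illustration (the paper's exact settings are not given in the abstract):

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for one predicted probability p in (0, 1), label y in {0, 1}.

    FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).
    With gamma = 0 this reduces to (alpha-weighted) cross-entropy;
    larger gamma shrinks the loss on well-classified examples.
    """
    pt = p if y == 1 else 1 - p          # probability of the true class
    a = alpha if y == 1 else 1 - alpha   # class-balance weight
    return -a * (1 - pt) ** gamma * math.log(pt)
```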

    Using deep learning to count albatrosses from space: Assessing results in light of ground truth uncertainty

    Many wildlife species inhabit inaccessible environments, limiting researchers' ability to conduct essential population surveys. Recently, very high resolution (sub-metre) satellite imagery has enabled remote monitoring of certain species directly from space; however, manual analysis of the imagery is time-consuming, expensive and subjective. State-of-the-art deep learning approaches can automate this process; however, often image datasets are small, and uncertainty in ground truth labels can affect supervised training schemes and the interpretation of errors. In this paper, we investigate these challenges by conducting both manual and automated counts of nesting Wandering Albatrosses on four separate islands, captured by the 31 cm resolution WorldView-3 sensor. We collect counts from six observers, and train a convolutional neural network (U-Net) using leave-one-island-out cross-validation and different combinations of ground truth labels. We show that (1) interobserver variation in manual counts is significant and differs between the four islands, (2) the small dataset can limit the network's ability to generalise to unseen imagery and (3) the choice of ground truth labels can have a significant impact on our assessment of network performance. Our final results show the network detects albatrosses as accurately as human observers for two of the islands, while in the other two, misclassifications are largely caused by the presence of noise, cloud cover and habitat, which was not present in the training dataset. While the results show promise, we stress the importance of considering these factors for any study where data is limited and observer confidence is variable.
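    Leave-one-island-out cross-validation, as used above, is grouped cross-validation where the grouping variable is the island: every image from one island is held out together, so test performance measures generalisation to a genuinely unseen site. A minimal sketch (equivalent in spirit to scikit-learn's `LeaveOneGroupOut`):

```python
def leave_one_group_out(samples, groups):
    """Yield (held_out_group, train, test) splits, holding out each group once.

    With `groups` set to each sample's island, this is leave-one-island-out:
    no island ever appears in both the training and test sets of a split.
    """
    for held_out in sorted(set(groups)):
        train = [s for s, g in zip(samples, groups) if g != held_out]
        test = [s for s, g in zip(samples, groups) if g == held_out]
        yield held_out, train, test
```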

    Colour augmentation for improved semi-supervised semantic segmentation.

    Consistency regularization describes a class of approaches that have yielded state-of-the-art results for semi-supervised classification. While semi-supervised semantic segmentation proved to be more challenging, a number of successful approaches have recently been proposed. Recent work explored the challenges involved in using consistency regularization for segmentation problems. In their self-supervised learning work, Chen et al. found that colour augmentation prevents a classification network from using image colour statistics as a short-cut for self-supervised learning via instance discrimination. Drawing inspiration from this, we find that a similar problem impedes semi-supervised semantic segmentation and offer colour augmentation as a solution, improving semi-supervised semantic segmentation performance on challenging photographic imagery. Comment: 9 pages, 1 figure
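    Colour augmentation of the kind discussed above perturbs an image's colour statistics so the network cannot rely on them as a short-cut. A minimal sketch of one such perturbation; the transform (random brightness scale plus per-channel shift) and its parameter ranges are illustrative assumptions, not the paper's exact recipe:

```python
import numpy as np

def colour_jitter(img, rng, brightness=0.2, shift=0.1):
    """Hypothetical colour augmentation for an (H, W, 3) image in [0, 1].

    Applies a random global brightness scale and an independent random
    shift per colour channel, then clips back into the valid range.
    """
    scale = 1.0 + rng.uniform(-brightness, brightness)
    offsets = rng.uniform(-shift, shift, size=3)  # one offset per channel
    return np.clip(img * scale + offsets, 0.0, 1.0)
```

    In a consistency-regularization setup, the student and teacher would each see a differently jittered copy of the same unlabelled image, and the loss would penalise disagreement between their segmentation outputs.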