11 research outputs found

    ON THE FOUNDATIONS OF COMPUTABILITY THEORY

    The principal motivation for this work is the observation that there are significant deficiencies in the foundations of conventional computability theory. This thesis examines the problems with conventional computability theory, including its failure to address discrepancies between theory and practice in computer science, semantic confusion in terminology, and limitations in the scope of conventional computing models. In light of these difficulties, fundamental notions are re-examined and revised definitions of key concepts such as “computer,” “computable,” and “computing power” are provided. A detailed analysis is conducted to determine desirable semantics and scope of applicability of foundational notions. The credibility of the revised definitions is ascertained by demonstrating their ability to address identified problems with conventional definitions. Their practical utility is established through application to examples. Other related issues, including hidden complexity in computations, subtleties related to encodings, and the cardinalities of sets involved in computing, are examined. A resource-based meta-model for characterizing computing model properties is introduced. The proposed definitions are presented as a starting point for an alternate foundation for computability theory. However, formulation of the particular concepts under discussion is not the sole purpose of the thesis. The underlying objective of this research is to open discourse on alternate foundations of computability theory and to inspire re-examination of fundamental notions.

    Sparse coding for efficient bioacoustic data mining: Preliminary application to analysis of whale songs

    No full text
    Bioacoustic monitoring, such as surveys of animal populations and migration, needs efficient data mining methods to extract information from large datasets covering multi-year and multi-location recordings. Usually, the study of humpback whale songs is based on the classification of sound units, notably to extract the song theme of the singers, which might signify the geographic origin and the year of the song. Most of these analyses are currently done with expert intervention, but the volume of recordings drives the need for automated methods for sound unit classification. This paper introduces a method for sparse coding of bioacoustic recordings in order to efficiently compress and automatically extract patterns in data. Moreover, this paper proposes that sparse coding of the song at different time scales supports the distinction of stable song components versus those which evolve year to year. It is shown that shorter codes are more stable, occurring with similar frequency across two consecutive years, while the occurrence of longer units varies across years as expected based on the prior manual analysis. We conclude by exploring further possibilities of the application of this method for biopopulation analysis.
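
    A minimal sketch of the sparse-coding idea described above, using scikit-learn's dictionary learning on fixed-length spectrogram windows. The placeholder spectrogram, window length, and dictionary size are illustrative assumptions, not the authors' settings; varying the window length corresponds to the different time scales discussed in the abstract.

```python
# Minimal sketch: sparse coding of spectrogram windows with a learned
# dictionary. All data and parameters below are placeholders.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
S = np.abs(rng.standard_normal((128, 1024)))  # placeholder spectrogram (freq x time)

# Slice the spectrogram into fixed-length windows along time; varying
# `patch_len` gives the different time scales compared across years.
patch_len = 16
patches = np.stack([S[:, t:t + patch_len].ravel()
                    for t in range(0, S.shape[1] - patch_len + 1, patch_len)])

# Learn a dictionary and sparse-code each window: each row of `codes`
# represents a window with only a few active dictionary atoms.
dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0,
                                   batch_size=32, random_state=0,
                                   transform_algorithm="lasso_lars")
codes = dico.fit(patches).transform(patches)

# Atom usage frequencies are one way to compare code stability across
# recordings, e.g. between two consecutive years.
usage = (codes != 0).mean(axis=0)
print(usage.round(2))
```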

    Expert, Crowd, Students or Algorithm: who holds the key to deep-sea imagery ‘big data’ processing?

    No full text
    2018 Ocean Sciences Meeting, 11-16 February, Portland, Oregon. Recent technological development has increased our capacity to study the deep sea and the marine benthic realm, particularly with the development of multidisciplinary seafloor observatories. Since 2006, Ocean Networks Canada's cabled observatories have acquired nearly 65 TB and over 90,000 hours of video data from seafloor cameras and remotely operated vehicles. Manual processing of these data is time-consuming and highly labour-intensive, and cannot be comprehensively undertaken by individual researchers. These videos are a crucial source of information for assessing natural variability and ecosystem responses to increasing human activity in the deep sea. We compared the performance of three groups of humans and one computer vision algorithm in counting individuals of the commercially important sablefish (or black cod) Anoplopoma fimbria in recorded video from a cabled camera platform at 900 m depth in a submarine canyon in the Northeast Pacific. The first group of human observers comprised untrained volunteers recruited via a crowdsourcing platform; the second comprised experienced university students, who performed the task for their ichthyology class. Results were validated against counts obtained from a scientific expert. All groups produced relatively accurate results in comparison to the expert, and all succeeded in detecting patterns and periodicities in the fish abundance data. Trained volunteers displayed the highest accuracy and the algorithm the lowest. As seafloor observatories increase in number around the world, this study demonstrates the value of a hybrid combination of crowdsourcing and computer vision techniques as a tool to help process large volumes of imagery to support basic research and environmental monitoring. Reciprocally, by engaging large numbers of online participants in deep-sea research, this approach can contribute significantly to ocean literacy and informed citizen input to policy development. Peer reviewed.
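
    As a hedged illustration of the periodicity detection mentioned above, a periodogram of an hourly count series picks out a dominant cycle. The synthetic diel signal below is an assumption for demonstration, not the study's data.

```python
# Hedged sketch: find the dominant period in a fish-count time series
# with a periodogram. `counts` (hourly, synthetic) is a placeholder.
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)                       # 60 days of hourly samples
counts = 5 + 2 * np.sin(2 * np.pi * hours / 24)  # placeholder diel cycle
counts = counts + rng.poisson(1.0, hours.size)   # count-like noise

freqs, power = periodogram(counts, fs=1.0)       # cycles per hour
peak = freqs[np.argmax(power[1:]) + 1]           # skip the DC bin
print(f"dominant period ~ {1 / peak:.1f} hours") # expect ~24 h
```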

    Learning deep-sea substrate types with visual topic models

    No full text
    DOI: 10.1109/WACV.2016.7477600. No abstract available.

    Expert, Crowd, Students or Algorithm: who holds the key to deep-sea imagery ‘big data’ processing?

    No full text
    1. Recent technological development has increased our capacity to study the deep sea and the marine benthic realm, particularly with the development of multidisciplinary seafloor observatories. Since 2006, Ocean Networks Canada's cabled observatories have acquired nearly 65 TB and over 90,000 hours of video data from seafloor cameras and Remotely Operated Vehicles (ROVs). Manual processing of these data is time-consuming and highly labour-intensive, and cannot be comprehensively undertaken by individual researchers. These videos are a crucial source of information for assessing natural variability and ecosystem responses to increasing human activity in the deep sea. 2. We compared the performance of three groups of humans and one computer vision algorithm in counting individuals of the commercially important sablefish (or black cod) Anoplopoma fimbria in recorded video from a cabled camera platform at 900 m depth in a submarine canyon in the Northeast Pacific. The first group of human observers comprised untrained volunteers recruited via a crowdsourcing platform; the second comprised experienced university students, who performed the task for their ichthyology class. Results were validated against counts obtained from a scientific expert. 3. All groups produced relatively accurate results in comparison to the expert, and all succeeded in detecting patterns and periodicities in the fish abundance data. Trained volunteers displayed the highest accuracy and the algorithm the lowest. 4. As seafloor observatories increase in number around the world, this study demonstrates the value of a hybrid combination of crowdsourcing and computer vision techniques as a tool to help process large volumes of imagery to support basic research and environmental monitoring. Reciprocally, by engaging large numbers of online participants in deep-sea research, this approach can contribute significantly to ocean literacy and informed citizen input to policy development.

    Data from: Expert, crowd, students or algorithm: who holds the key to deep-sea imagery ‘big data’ processing?

    No full text
    1. Recent technological development has increased our capacity to study the deep sea and the marine benthic realm, particularly with the development of multidisciplinary seafloor observatories. Since 2006, Ocean Networks Canada's cabled observatories have acquired nearly 65 TB and over 90,000 hours of video data from seafloor cameras and Remotely Operated Vehicles (ROVs). Manual processing of these data is time-consuming and highly labour-intensive, and cannot be comprehensively undertaken by individual researchers. These videos contain valuable information for faunal and environmental monitoring, and are a crucial source of information for assessing natural variability and ecosystem responses to increasing human activity in the deep sea. 2. In this study, we compared the performance of three groups of humans and one computer vision algorithm in counting individuals of the commercially important sablefish (or black cod) Anoplopoma fimbria in recorded video from a cabled camera platform at 900 m depth in a submarine canyon in the Northeast Pacific. The first group of human observers comprised untrained volunteers recruited via a crowdsourcing platform; the second comprised experienced university students, who performed the task in the context of an ichthyology class. Results were validated against counts obtained from a scientific expert. 3. All groups produced relatively accurate results in comparison to the expert, and all succeeded in detecting patterns and periodicities in the fish abundance data. Trained volunteers displayed the highest accuracy and the algorithm the lowest. 4. As seafloor observatories increase in number around the world, this study demonstrates the value of a hybrid combination of crowdsourcing and computer vision techniques as a tool to help process large volumes of imagery to support basic research and environmental monitoring. Reciprocally, by engaging large numbers of online participants in deep-sea research, this approach can contribute significantly to ocean literacy and informed citizen input to policy development.
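
    A hypothetical sketch of the validation step, aligning the algorithm's counts with the expert's on the shared MatchDateTime key described in the data records below; the file names and the choice of metrics (mean absolute error, Pearson correlation) are assumptions, not the study's exact procedure.

```python
# Hypothetical sketch: align algorithm and expert counts on the shared
# MatchDateTime key and score agreement. File names are assumptions.
import pandas as pd

algo = pd.read_csv("algorithm_data.csv")   # assumed file name
expert = pd.read_csv("expert_data.csv")    # assumed file name

# Both tables carry Date, Time and Counts; suffixes keep them apart.
merged = algo.merge(expert, on="MatchDateTime",
                    suffixes=("_algo", "_expert"))

mae = (merged["Counts_algo"] - merged["Counts_expert"]).abs().mean()
r = merged["Counts_algo"].corr(merged["Counts_expert"])
print(f"MAE vs expert: {mae:.2f}, Pearson r: {r:.2f}")
```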

    algorithm_data

    No full text
    Contains all the data acquired using the computer vision algorithm. Details of the columns are as follows:
    - Video: name of the video in the Ocean Networks Canada digital infrastructure database.
    - Date: date of video acquisition.
    - Time: time of video acquisition.
    - Counts: number of sablefish in the video.
    - MatchDateTime: date and time used to match the counts among the different groups.

    expert_data

    No full text
    Contains all the data acquired by the PhD student referred to as the expert. Details of the columns are as follows:
    - Date: date of video acquisition.
    - Time: time of video acquisition.
    - Counts: number of sablefish in the video.
    - MatchDateTime: date and time used to match the counts among the different groups.

    students_data

    No full text
    Contains all the data acquired by the group of students. Details of the columns are as follows:
    - UserID: unique ID attributed to each observer in the Ocean Networks Canada digital infrastructure database.
    - Date: date of video acquisition.
    - Time: time of video acquisition.
    - Counts: number of sablefish in the video.
    - MatchDateTime: date and time used to match the counts among the different groups.
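
    A minimal sketch, assuming a CSV export of the record above, of reducing the multi-observer student counts to one value per matched timestamp before comparison with the other groups; the median aggregation rule is an assumption.

```python
# Minimal sketch: several students (UserID) may count the same video,
# so reduce to one count per MatchDateTime before matching. The file
# name and the median aggregation are assumptions.
import pandas as pd

students = pd.read_csv("students_data.csv")  # assumed file name

per_video = (students.groupby("MatchDateTime")["Counts"]
                     .median()
                     .rename("Counts_students")
                     .reset_index())

# `per_video` now has one row per matched timestamp and can be merged
# with the expert and algorithm tables on MatchDateTime as shown earlier.
print(per_video.head())
```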