
    From eye to machine: shifting authority in color measurement

    Given a subject so imbued with contention and conflicting theoretical stances, it is remarkable that automated instruments ever came to replace the human eye as sensitive arbiters of color specification. Yet dramatic shifts in assumptions and practice did occur in the first half of the twentieth century. How and why was confidence transferred from careful observers to mechanized devices when the property being measured – color – had become so closely identified with human physiology and psychology? A fertile perspective on the problem is via the history of science and technology, paying particular attention to social groups and disciplinary identity to determine how those factors affected their communities’ cognitive territory. There were both common and discordant threads motivating the various technical groups that took on the problems of measuring light and color from the late nineteenth century onwards, and leading them to develop instruments appropriate to their own needs. The transition from visual to photoelectric methods could be portrayed as a natural evolution, replacing the eye by an alternative providing more sensitivity and convenience – indeed, this is the conventional positivist view propounded by technical histories. However, the adoption of new measurement technologies is seldom simple, and frequently has a significant cultural component. Beneath this slide towards automation lay a raft of implicit assumptions about objectivity, the nature of the observer, the role of instruments, and the trade-offs between standardization and descriptive power. While espousing rational arguments for a physical detector of color, its proponents weighted their views with tacit considerations. The reassignment of trust from the eye to automated instruments was influenced as much by historical context as by intellectual factors. I will argue that several distinct aspects were involved: the reductive view of color provided by the trichromatic theory; the impetus provided by its association with photometry; the expanding mood for a quantitative and objective approach to scientific observation; and the pressures for commercial standardization. As these factors suggest, there was another shift of authority at play: from one technical specialism to another. The regularization of color involved appropriation of the subject by a particular set of social interests: communities of physicists and engineers espousing a ‘physicalist’ interpretation, rather than psychologists and physiologists for whom color was conceived as a more complex phenomenon. Moreover, the sources for automated color measurement and instrumentation were primarily industrial rather than academic. To understand these shifts, then, this chapter explores differing views of the importance of observers, machines and automation.

    Reviewer Integration and Performance Measurement for Malware Detection

    We present and evaluate a large-scale malware detection system integrating machine learning with expert reviewers, treating reviewers as a limited labeling resource. We demonstrate that even in small numbers, reviewers can vastly improve the system's ability to keep pace with evolving threats. We conduct our evaluation on a sample of VirusTotal submissions spanning 2.5 years and containing 1.1 million binaries with 778GB of raw feature data. Without reviewer assistance, we achieve 72% detection at a 0.5% false positive rate, performing comparably to the best vendors on VirusTotal. Given a budget of 80 accurate reviews daily, we improve detection to 89% and are able to detect 42% of malicious binaries undetected upon initial submission to VirusTotal. Additionally, we identify a previously unnoticed temporal inconsistency in the labeling of training datasets. We compare the impact of training labels obtained at the same time the training data is first seen with training labels obtained months later. We find that using training labels obtained well after samples appear, and thus unavailable in practice for current training data, inflates measured detection by almost 20 percentage points. We release our cluster-based implementation, as well as a list of all hashes in our evaluation and 3% of our entire dataset. Comment: 20 pages, 11 figures, accepted at the 13th Conference on Detection of Intrusions and Malware & Vulnerability Assessment (DIMVA 2016).
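
    As a minimal Python sketch of how a fixed daily review budget might be integrated, the snippet below routes the classifier's most uncertain unreviewed submissions to experts first. The uncertainty-first selection rule and all names here are illustrative assumptions, not the paper's actual policy.

        import numpy as np

        def pick_reviews(scores, already_labeled, budget=80):
            # Select the day's most uncertain unreviewed samples for expert review.
            # scores: classifier malice probabilities for the day's submissions.
            # already_labeled: boolean mask of samples a reviewer has handled.
            # NOTE: this uncertainty-first policy is an illustrative assumption.
            scores = np.asarray(scores, dtype=float)
            uncertainty = -np.abs(scores - 0.5)              # highest near a score of 0.5
            candidates = np.flatnonzero(~np.asarray(already_labeled))
            ranked = candidates[np.argsort(uncertainty[candidates])[::-1]]
            return ranked[:budget]                           # indices to send for review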

    Multimedia information technology and the annotation of video

    The state of the art in multimedia information technology has not progressed to the point where a single solution is available to meet all reasonable needs of documentalists and users of video archives. In general, we do not have an optimistic view of the usability of new technology in this domain, but digitization and digital processing power can be expected to cause a small revolution in the area of video archiving. The growing volume of data leads to two views of the future: on the pessimistic side, the overload of data will outstrip annotation capacity; on the optimistic side, there will be enough data from which to learn selected concepts that can be deployed to support automatic annotation. At the threshold of this interesting era, we attempt to describe the state of the art in technology. We sample the progress in text, sound, and image processing, as well as in machine learning.

    Automated multimodal volume registration based on supervised 3D anatomical landmark detection

    We propose a new method for automatic 3D multimodal registration based on anatomical landmark detection. Landmark detectors are learned independently in the two imaging modalities using Extremely Randomized Trees and multi-resolution voxel windows. A least-squares fitting algorithm is then used for rigid registration based on the landmark positions predicted by these detectors in the two imaging modalities. Experiments are carried out with this method on a dataset of pelvis CT and CBCT scans from 45 patients. On this dataset, our fully automatic approach yields results competitive with a manually assisted state-of-the-art rigid registration algorithm.
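
    A closed-form least-squares rigid fit from matched landmark pairs is commonly computed with the Kabsch algorithm; the Python sketch below is a minimal illustration of that standard step, assuming paired landmark coordinate arrays from the two modalities. The function and variable names are hypothetical, not the authors' code.

        import numpy as np

        def rigid_fit(P, Q):
            # Least-squares rigid transform (R, t) with Q ~= R @ p + t (Kabsch algorithm).
            # P, Q: (N, 3) arrays of corresponding landmark positions in each modality.
            P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
            cp, cq = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance of centred points
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # proper rotation, det(R) = +1
            t = cq - R @ cp
            return R, t

        # e.g. R, t = rigid_fit(ct_landmarks, cbct_landmarks); aligned = ct_landmarks @ R.T + t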

    K-Space at TRECVid 2007

    In this paper we describe the K-Space participation in TRECVid 2007. K-Space participated in two tasks, high-level feature extraction and interactive search. We present our approaches for each of these activities and provide a brief analysis of our results. Our high-level feature submission utilized multi-modal low-level features which included visual, audio and temporal elements. Specific concept detectors (such as face detectors) developed by K-Space partners were also used. We experimented with different machine learning approaches including logistic regression and support vector machines (SVMs). Finally, we experimented with both early and late fusion for feature combination. This year we also participated in interactive search, submitting 6 runs. We developed two interfaces which both utilized the same retrieval functionality. Our objective was to measure the effect of context, which was supported to different degrees in each interface, on user performance. The first of the two systems was a ‘shot’-based interface, where the results from a query were presented as a ranked list of shots. The second interface was ‘broadcast’-based, where results were presented as a ranked list of broadcasts. Both systems made use of the outputs of our high-level feature submission as well as low-level visual features.
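
    As a concrete illustration of the early versus late fusion distinction, here is a minimal scikit-learn sketch; the feature matrices, classifier choices, and score averaging are illustrative assumptions, not the exact K-Space configuration.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.svm import SVC

        # Hypothetical stand-ins for per-shot visual and audio feature matrices.
        rng = np.random.default_rng(0)
        X_vis = rng.normal(size=(200, 64))
        X_aud = rng.normal(size=(200, 16))
        y = rng.integers(0, 2, size=200)          # binary concept labels per shot

        # Early fusion: concatenate the modality features, train a single classifier.
        early_clf = LogisticRegression(max_iter=1000).fit(np.hstack([X_vis, X_aud]), y)

        # Late fusion: train one classifier per modality, then average their scores.
        vis_clf = SVC(probability=True).fit(X_vis, y)
        aud_clf = SVC(probability=True).fit(X_aud, y)
        late_scores = 0.5 * (vis_clf.predict_proba(X_vis)[:, 1]
                             + aud_clf.predict_proba(X_aud)[:, 1])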