1,543 research outputs found

    Letter from Richard McNemar

    Copy of a letter written by Shaker Richard McNemar to Laurence Roelosson, dated “Henderson County (Ky.) March 3d. 1809.”

    A Review from Three Directions


    Forensic analysis of digital evidence from Palm Personal Digital Assistants

    Personal Digital Assistants are becoming more affordable and commonplace. They provide mobile data storage, computation, and network capabilities. When handheld devices are involved in a crime, forensic examiners need tools to properly retrieve and analyze data present on the device. Unfortunately, forensic analysis of handheld devices is not adequately documented and supported. This report gives an overview of Palm handheld development and of current forensic software related to Personal Digital Assistants. Procedures for device seizure, storage, imaging, and analysis are documented. In addition, a tool was developed as part of this work to aid forensic examiners in recovering evidence from memory image files.
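
    As a generic illustration of the kind of first pass an examiner might take on a memory image file, the short sketch below scans a raw dump for runs of printable ASCII, much like the Unix strings utility. It is not the tool described above, and the file name in the usage comment is hypothetical.

        # Illustrative sketch only: extract printable-ASCII runs from a raw memory image.
        import re
        import sys

        def ascii_strings(path, min_len=6):
            """Yield printable-ASCII runs of at least min_len bytes from a binary image."""
            with open(path, "rb") as f:
                data = f.read()
            for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
                yield match.group().decode("ascii")

        if __name__ == "__main__":
            # Usage (hypothetical file name): python strings_scan.py palm_image.bin
            for s in ascii_strings(sys.argv[1]):
                print(s)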

    Enhancing Sensitivity Classification with Semantic Features using Word Embeddings

    Government documents must be reviewed to identify any sensitive information they may contain before they can be released to the public. However, traditional paper-based sensitivity review processes are not practical for reviewing born-digital documents, so there is a timely need for automatic sensitivity classification techniques to assist the digital sensitivity review process. Sensitivity is typically a product of the relations between combinations of terms, such as who said what about whom, which makes automatic sensitivity classification a difficult task. Vector representations of terms, such as word embeddings, have been shown to be effective at encoding latent term features that preserve semantic relations between terms, and these can also benefit sensitivity classification. In this work, we present a thorough evaluation of the effectiveness of semantic word embedding features, along with term and grammatical features, for sensitivity classification. On a test collection of government documents containing real sensitivities, we show that extending text classification with semantic features and additional term n-grams results in significant improvements in classification effectiveness, correctly classifying 9.99% more sensitive documents than the text classification baseline.
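
    As a rough sketch of how semantic embedding features can be combined with term n-gram features in a document classifier (this is not the authors' system; the embeddings dictionary, vector dimensionality, and choice of logistic regression are assumptions made for illustration):

        # Hedged sketch: term n-grams plus averaged word-embedding features.
        import numpy as np
        from sklearn.base import BaseEstimator, TransformerMixin
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import FeatureUnion, Pipeline

        class MeanEmbedding(BaseEstimator, TransformerMixin):
            """Represent each document by the mean of its tokens' word vectors."""
            def __init__(self, embeddings, dim):
                self.embeddings = embeddings   # assumed dict: token -> numpy vector
                self.dim = dim
            def fit(self, X, y=None):
                return self
            def transform(self, X):
                rows = []
                for doc in X:
                    vecs = [self.embeddings[t] for t in doc.split() if t in self.embeddings]
                    rows.append(np.mean(vecs, axis=0) if vecs else np.zeros(self.dim))
                return np.vstack(rows)

        def build_classifier(embeddings, dim=300):
            features = FeatureUnion([
                ("ngrams", TfidfVectorizer(ngram_range=(1, 2))),  # term and bigram features
                ("semantic", MeanEmbedding(embeddings, dim)),     # embedding features
            ])
            return Pipeline([("features", features), ("clf", LogisticRegression(max_iter=1000))])

    Grammatical features of the kind mentioned in the abstract could be appended to the same feature union in an analogous way.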

    Rapid Online Analysis of Local Feature Detectors and Their Complementarity

    A vision system that can assess its own performance and take appropriate actions online to maximize its effectiveness would be a step towards achieving the long-cherished goal of imitating humans. This paper proposes a method for performing an online performance analysis of local feature detectors, the primary stage of many practical vision systems. It advocates the spatial distribution of local image features as a good performance indicator and presents a metric that can be calculated rapidly, concurs with human visual assessments and is complementary to existing offline measures such as repeatability. The metric is shown to provide a measure of complementarity for combinations of detectors, correctly reflecting the underlying principles of individual detectors. Qualitative results on well-established datasets for several state-of-the-art detectors are presented based on the proposed measure. Using a hypothesis testing approach and a newly-acquired, larger image database, statistically-significant performance differences are identified. Different detector pairs and triplets are examined quantitatively and the results provide a useful guideline for combining detectors in applications that require a reasonable spatial distribution of image features. A principled framework for combining feature detectors in these applications is also presented. Timing results reveal the potential of the metric for online applications. © 2013 by the authors; licensee MDPI, Basel, Switzerland
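
    The fragment below is only a crude stand-in for the idea of measuring how evenly keypoints are spread over an image: it scores a detector by the fraction of grid cells it occupies. This grid-occupancy proxy is not the metric proposed in the paper, and the detectors, image file, and grid size are arbitrary choices.

        # Hedged sketch: grid-occupancy proxy for the spatial distribution of keypoints.
        import cv2

        def coverage(keypoints, shape, grid=16):
            """Fraction of grid cells containing at least one keypoint."""
            h, w = shape[:2]
            occupied = set()
            for kp in keypoints:
                x, y = kp.pt
                occupied.add((min(grid - 1, int(y * grid / h)),
                              min(grid - 1, int(x * grid / w))))
            return len(occupied) / float(grid * grid)

        img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
        fast = cv2.FastFeatureDetector_create().detect(img, None)
        orb = cv2.ORB_create().detect(img, None)

        print("FAST coverage:", coverage(fast, img.shape))
        print("ORB  coverage:", coverage(orb, img.shape))
        # A detector pair is complementary in this crude sense if their union
        # occupies cells that neither detector occupies alone.
        print("FAST+ORB coverage:", coverage(list(fast) + list(orb), img.shape))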

    Book reviews

    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/45686/1/11336_2005_Article_BF02289203.pd

    The power of literacy: special education students’ perceptions of themselves as literate beings

    Doctor of Philosophy, Curriculum and Instruction, Jeong-Hee Kim. This phenomenological case study focuses on three secondary special education students’ perceptions of themselves and their lived experiences. The purpose of this study is twofold: first, to understand how secondary special education students perceive themselves as literate beings; and second, to illuminate how secondary special education students understand what it means to be literate and how their lived experiences have shaped their perceptions of being literate. Based on qualitative data, such as interviews, observations, and a questionnaire, and a qualitative analysis method called Interpretive Phenomenological Analysis, I have identified three themes in the lived experiences of the participants: 1) the stability and/or instability of students’ lived experiences influenced their literacy practices; 2) being identified as special education students did not prevent them from being literate; and 3) different lived experiences led to different literacy practices. Based on these themes, I provide implications for educators and policy makers, including: understanding secondary special education (SSE) students as literate beings; valuing the varied experiences that SSE students bring to classrooms; capitalizing on SSE students’ self-efficacy and resilience to promote students’ literacy; respecting SSE students’ out-of-school literacy skills; paying attention to the personal dimensions of literacy practices to meet the needs of diverse learners; allowing SSE students to demonstrate their literacies in multiple ways; and collaborating between general education and special education teachers to benefit all students. The significance of this study lies in its focus on the literacy practices of secondary special education students, whose voices have been largely missing from the literature. This understanding of the voices and lived experiences that secondary special education students bring to the classroom will help educators, policy makers, and curriculum writers find ways to better serve special education students. In so doing, this study reconceptualizes the power of literacy that needs to be fostered in SSE students so that they can succeed not only in college and career but also in their personal lives.

    Combining support vector machines and segmentation algorithms for efficient anomaly detection: a petroleum industry application

    Proceedings of: International Joint Conference SOCO’14-CISIS’14-ICEUTE’14, Bilbao, Spain, June 25th–27th, 2014. Anomaly detection is the problem of finding patterns in data that do not conform to expected behavior; when patterns are numerically distant from the rest of the sample, they are flagged as outliers. Anomaly detection has recently attracted the attention of the research community for real-world applications, and the petroleum industry is one of the application contexts where these problems arise. The correct detection of such unusual information empowers the decision maker to act on the system in order to avoid, correct, or react to the situations associated with it. Heavy extraction machines used for pumping and generation operations, such as turbomachines, are each intensively monitored by hundreds of sensors that send measurements at high frequency for damage prevention. To deal with this, and with the lack of labeled data, in this paper we propose combining a fast, high-quality segmentation algorithm with a one-class support vector machine approach for efficient anomaly detection in turbomachines. We perform empirical studies comparing our approach to other methods applied to benchmark problems and to a real-life application related to oil platform turbomachinery anomaly detection. This work was partially funded by CNPq BJT Project 407851/2012-7 and CNPq PVE Project 314017/2013-
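
    A minimal sketch of the one-class SVM side of such an approach, using fixed-length windows as a stand-in for the paper's segmentation algorithm and only per-segment mean and standard deviation as features; the window length, kernel, and nu parameter are assumptions for illustration, not the paper's setup.

        # Hedged sketch: segment a sensor signal, then fit a one-class SVM on normal data.
        import numpy as np
        from sklearn.svm import OneClassSVM

        def segment_features(signal, window=100):
            """Split a 1-D signal into fixed windows and summarise each with mean and std."""
            n = len(signal) // window
            segs = signal[: n * window].reshape(n, window)
            return np.column_stack([segs.mean(axis=1), segs.std(axis=1)])

        rng = np.random.default_rng(0)
        normal = rng.normal(0.0, 1.0, 50_000)        # training data: normal operation only
        model = OneClassSVM(nu=0.01, kernel="rbf", gamma="scale").fit(segment_features(normal))

        test = rng.normal(0.0, 1.0, 5_000)
        test[2_000:2_100] += 8.0                      # inject a fault-like excursion
        print(model.predict(segment_features(test)))  # -1 marks segments flagged as anomalous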

    The first Automatic Translation Memory Cleaning Shared Task

    This is an accepted manuscript of an article published by Springer in Machine Translation on 21/01/2017, available online: https://doi.org/10.1007/s10590-016-9183-x The accepted version of the publication may differ from the final published version. This paper reports on the organization and results of the first Automatic Translation Memory Cleaning Shared Task. This shared task is aimed at finding automatic ways of cleaning translation memories (TMs) that have not been properly curated and thus include incorrect translations. As a follow-up to the shared task, we also conducted two surveys, one targeting the teams participating in the shared task and the other targeting professional translators. While the researcher-oriented survey aimed at gathering information about the opinions of participants on the shared task, the translator-oriented survey aimed to better understand what constitutes a good TM unit and to inform decisions that will be taken in future editions of the task. In this paper, we report on the process of data preparation and the evaluation of the automatic systems submitted, as well as on the results of the collected surveys.

    Analysis of matched case–control data with multiple ordered disease states: possible choices and comparisons

    In an individually matched case–control study, effects of potential risk factors are ascertained through conditional logistic regression (CLR). Extension of CLR to situations with multiple disease or reference categories has been made through polychotomous CLR and is shown to be more efficient than carrying out separate CLRs for each subgroup. In this paper, we consider matched case–control studies where there is one control group, but there are multiple disease states with a natural ordering among themselves. This scenario can be observed when the cases can be further classified in terms of the seriousness or progression of the disease, for example, according to different stages of cancer. We explore several popular models for ordered categorical data in this context. We first adopt a cumulative logit or equivalently, a proportional-odds model to account for the ordinal nature of the data. The important distinction of this model from a stratified dichotomous and polychotomous logistic regression model is that the stratum-specific nuisance parameters cannot be eliminated in this model via the conditional-likelihood approach. We discuss a Mantel–Haenszel approach for analysing such data. We point out possible difficulties with standard likelihood-based approaches with the cumulative logit model when applied to case–control data. We then consider an alternative conditional adjacent-category logit model. We illustrate the methods by analysing data from a matched case–control study on low birthweight in newborns where infants are classified according to low and very low birthweight and a child with normal birthweight serves as a control. A simulation study compares the different ordinal methods with methods ignoring sub-classification of the ordered disease states. Copyright © 2007 John Wiley & Sons, Ltd. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/56068/1/2790_ftp.pd
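
    For orientation, a textbook form of the two ordinal models mentioned above can be written as follows; the notation here is ours, not the paper's. For matched set i, subject j, covariates x_ij, and ordered outcome categories k = 1, ..., K, the cumulative logit (proportional-odds) model with stratum effects is

        \[
          \operatorname{logit}\,\Pr(Y_{ij} \le k \mid x_{ij}) = \alpha_k + \theta_i + \beta^{\top} x_{ij},
          \qquad k = 1, \dots, K - 1,
        \]

    while a generic adjacent-category logit model is

        \[
          \log \frac{\Pr(Y_{ij} = k + 1 \mid x_{ij})}{\Pr(Y_{ij} = k \mid x_{ij})} = \alpha_k^{*} + \theta_i^{*} + \beta^{*\top} x_{ij},
          \qquad k = 1, \dots, K - 1.
        \]

    In the cumulative-logit form the stratum nuisance parameters admit no sufficient statistic and so cannot be conditioned away, which is the difficulty the abstract points to; the adjacent-category form is log-linear in the stratum effects, so a conditional-likelihood argument remains available.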