
    The Partial Evaluation Approach to Information Personalization

    Information personalization refers to the automatic adjustment of information content, structure, and presentation tailored to an individual user. By reducing information overload and customizing information access, personalization systems have emerged as an important segment of the Internet economy. This paper presents a systematic modeling methodology, PIPE ('Personalization is Partial Evaluation'), for personalization. Personalization systems are designed and implemented in PIPE by modeling an information-seeking interaction in a programmatic representation. The representation supports the description of information-seeking activities as partial information and their subsequent realization by partial evaluation, a technique for specializing programs. We describe the modeling methodology at a conceptual level and outline representational choices. We present two application case studies that use PIPE for personalizing web sites and describe how PIPE suggests a novel evaluation criterion for information system designs. Finally, we mention several fundamental implications of adopting the PIPE model for personalization and when it is (and is not) applicable. Comment: Comprehensive overview of the PIPE model for personalization
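    The core idea above, personalization as program specialization, can be made concrete with a small sketch. The Python below is a hedged illustration, not the paper's implementation: the site model, guard variables, and function names are hypothetical. It models an information space as pages guarded by information-seeking choices and partially evaluates that model against the inputs a user has already supplied.

    ```python
    # Toy illustration of 'Personalization is Partial Evaluation': an
    # information space is modeled as a program over user inputs, and
    # personalization specializes that program with the inputs already
    # supplied. The site model and names here are hypothetical.

    def specialize(pages, known):
        """Partially evaluate a site model: drop pages whose guards
        contradict the known inputs, and remove satisfied guards."""
        residual = {}
        for path, guards in pages.items():
            if any(known.get(var, val) != val for var, val in guards.items()):
                continue  # a guard contradicts what the user already chose
            # What remains is the residual (still-unanswered) interaction.
            residual[path] = {v: val for v, val in guards.items() if v not in known}
        return residual

    # A site modeled as pages guarded by information-seeking choices.
    site = {
        "/faculty/ai":      {"dept": "cs",   "area": "ai"},
        "/faculty/theory":  {"dept": "cs",   "area": "theory"},
        "/faculty/algebra": {"dept": "math", "area": "algebra"},
    }

    # Specializing with respect to dept=cs yields a smaller residual
    # 'program' in which only the 'area' choice remains to be made.
    print(specialize(site, {"dept": "cs"}))
    # {'/faculty/ai': {'area': 'ai'}, '/faculty/theory': {'area': 'theory'}}
    ```

    In PIPE's terms, the residual program is the personalized site: every choice the user has already made is compiled away, and only the remaining questions are presented.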

    Regulating Mobile Mental Health Apps

    Mobile medical apps (MMAs) are a fast-growing category of software typically installed on personal smartphones and wearable devices. A subset of MMAs are aimed at helping consumers identify mental states and/or mental illnesses. Although this is a fledgling domain, there are already enough extant mental health MMAs both to suggest a typology and to detail some of the regulatory issues they pose. As to the former, the current generation of apps includes those that facilitate self-assessment or self-help, connect patients with online support groups, connect patients with therapists, or predict mental health issues. Regulatory concerns with these apps include their quality, safety, and data protection. Unfortunately, the regulatory frameworks that apply have failed to provide coherent risk-assessment models. As a result, prudent providers will need to proceed with caution when it comes to recommending apps to patients or relying on app-generated data to guide treatment.

    Rethinking Linking: Breathing New Life into OpenURL

    [manuscript] In this issue of Library Technology Reports, authors Cindi Trainor and Jason Price revisit OpenURL and library linking. The OpenURL framework for context-sensitive linking has been in use for a decade, during which library collections and users' behaviors have undergone radical change. This report examines how libraries can make use of web usability principles and data analysis to improve their local resolver installations and looks to the wider web for what the future of this integral library technology might hold.
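    For readers new to the framework, here is a minimal sketch of what a context-sensitive OpenURL looks like in the Z39.88-2004 key/encoded-value (KEV) form. The resolver base URL and the citation values are illustrative assumptions, not taken from the report.

    ```python
    # A minimal sketch of an OpenURL 1.0 (Z39.88-2004) link in
    # key/encoded-value (KEV) form. The resolver hostname and the
    # citation values below are illustrative, not from the report.
    from urllib.parse import urlencode

    resolver = "https://resolver.example.edu/openurl"  # hypothetical resolver
    citation = {
        "url_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # journal article format
        "rft.jtitle": "Library Technology Reports",
        "rft.atitle": "Rethinking Linking: Breathing New Life into OpenURL",
        "rft.date": "2010",
    }

    # The source (e.g., an index or database) builds the link; the
    # library's resolver interprets it against local holdings.
    print(f"{resolver}?{urlencode(citation)}")
    ```

    Because the citation travels as data rather than as a hard-coded link, each library's resolver can route the same reference to its own appropriate copy, which is the "context-sensitive" part of the framework.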

    ASCR/HEP Exascale Requirements Review Report

    This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of demand on the 2025 timescale is at least two orders of magnitude greater than what is currently available, and in some cases more. 2) The growth rate of data produced by simulations is overwhelming the current ability, of both facilities and researchers, to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources, the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems. Comment: 77 pages, 13 figures; draft report, subject to further revision

    Improving Regional and Teleseismic Detection for Single-Trace Waveforms Using a Deep Temporal Convolutional Neural Network Trained with an Array-Beam Catalog

    The detection of seismic events at regional and teleseismic distances is critical to Nuclear Treaty Monitoring. Traditionally, detecting regional and teleseismic events has required the use of an expensive multi-instrument seismic array; however, in this work we present DeepPick, a novel seismic detection algorithm capable of array-like detection performance from a single trace. We achieve this performance through three novel steps. First, a high-fidelity dataset is constructed by pairing array-beam catalog arrival times with single-trace waveforms from the reference instrument of the array. Second, an idealized characteristic function is created, with exponential peaks aligned to the cataloged arrival times. Third, a deep temporal convolutional neural network is employed to learn the complex non-linear filters required to transform the single-trace waveforms into corresponding idealized characteristic functions. The training data consists of all arrivals in the International Seismological Centre Database for seven seismic arrays over a five-year window from 1 January 2010 to 1 January 2015, yielding a total training set of 608,362 detections. The test set consists of the same seven arrays over a one-year window from 1 January 2015 to 1 January 2016. We report our results by training the algorithm on six of the arrays and testing it on the seventh, so as to demonstrate the generalization and transportability of the technique to new stations. Detection performance against this test set is outstanding, yielding significant improvements in recall over existing techniques. Fixing a type I error rate of 0.001, the algorithm achieves an overall recall (true positive rate) of 56% against the 141,095 array-beam arrivals in the test set, yielding 78,802 correct detections. This is more than twice the 37,572 detections made by an STA/LTA detector over the same period, and represents a 35% improvement over the 58,515 detections made by a state-of-the-art kurtosis-based detector. Furthermore, DeepPick provides at least a 4 dB improvement in detector sensitivity across the board, and is more computationally efficient, with run-times an order of magnitude faster than either of the other techniques tested. These results demonstrate the potential of our algorithm to significantly enhance the effectiveness of the global treaty monitoring network.
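    The second step above, the idealized characteristic function, is easy to make concrete. The sketch below is an assumption-laden illustration, not the authors' code: the sampling rate and decay constant are invented, and only the shape (exponential peaks aligned to cataloged arrival times) follows the description in the abstract.

    ```python
    # Sketch of an idealized characteristic function: a target trace
    # that is near zero everywhere except for exponentially decaying
    # peaks centered on cataloged arrival times. The decay constant and
    # sampling rate are illustrative assumptions, not from the paper.
    import numpy as np

    def characteristic_function(n_samples, arrival_indices, decay=50.0):
        """Return a target signal with a two-sided exponential peak
        (amplitude 1) at each cataloged arrival index."""
        t = np.arange(n_samples, dtype=float)
        target = np.zeros(n_samples)
        for idx in arrival_indices:
            target = np.maximum(target, np.exp(-np.abs(t - idx) / decay))
        return target

    # 60 s of 40 Hz data with cataloged arrivals at 10 s and 35 s.
    target = characteristic_function(60 * 40, [10 * 40, 35 * 40])
    # A temporal convolutional network is then trained to regress the
    # raw single-trace waveform onto this target, so detection reduces
    # to thresholding the network's output.
    ```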

    Natural language processing

    Beginning with the basic issues of NLP, this chapter aims to chart the major research activities in this area since the last ARIST chapter in 1996 (Haas, 1996), including: (i) natural language text processing systems, such as text summarization, information extraction, and information retrieval, including domain-specific applications; (ii) natural language interfaces; (iii) NLP in the context of the World Wide Web and digital libraries; and (iv) evaluation of NLP systems.

    Analysis and Synthesis of Metadata Goals for Scientific Data

    The proliferation of discipline-specific metadata schemes contributes to artificial barriers that can impede interdisciplinary and transdisciplinary research. The authors considered this problem by examining the domains, objectives, and architectures of nine metadata schemes used to document scientific data in the physical, life, and social sciences. They used a mixed-methods content analysis and Greenberg’s (2005) metadata objectives, principles, domains, and architectural layout (MODAL) framework, and derived 22 metadata-related goals from textual content describing each metadata scheme. Relationships are identified between the domains (e.g., scientific discipline and type of data) and the categories of scheme objectives. For each strong correlation (> 0.6), a Fisher’s exact test for nonparametric data was used to determine significance (p < .05). Significant relationships were found between the domains and objectives of the schemes. Schemes describing observational data are more likely to have “scheme harmonization” (compatibility and interoperability with related schemes) as an objective; schemes with the objective “abstraction” (a conceptual model exists separate from the technical implementation) also have the objective “sufficiency” (the scheme defines a minimal amount of information to meet the needs of the community); and schemes with the objective “data publication” do not have the objective “element refinement.” The analysis indicates that many metadata-driven goals expressed by communities are independent of scientific discipline or the type of data, although they are constrained by historical community practices and workflows as well as the technological environment at the time of scheme creation. The analysis reveals 11 fundamental metadata goals for metadata documenting scientific data in support of sharing research data across disciplines and domains. The authors report these results and highlight the need for more metadata-related research, particularly in the context of recent funding agency policy changes.
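    For readers unfamiliar with the statistical step, the sketch below shows how a Fisher's exact test of a domain/objective relationship might look. The 2x2 counts are invented for illustration (nine schemes in total, as in the study) and are not the authors' data.

    ```python
    # A worked sketch of the significance test described above: does
    # the "scheme harmonization" objective co-occur with observational
    # data? The counts below are invented for illustration only.
    from scipy.stats import fisher_exact

    #                         harmonization   no harmonization
    # observational data            4                1
    # non-observational data        1                3
    table = [[4, 1],
             [1, 3]]

    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
    # The study applies this test to each strong correlation (> 0.6)
    # and reports relationships as significant at p < .05.
    ```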