
    Caregiver Assessment Using Smart Gaming Technology: A Preliminary Approach

    As pre-diagnostic technologies become increasingly accessible, using them to improve the quality of care available to dementia patients and their caregivers is of growing interest. Specifically, we aim to develop a tool for non-invasively assessing task performance in a simple gaming application. To this end, we have developed Caregiver Assessment using Smart Gaming Technology (CAST), a mobile application that personalizes a traditional word scramble game. Its core functionality uses a Fuzzy Inference System (FIS), optimized via a Genetic Algorithm (GA), to provide customized performance measures for each user of the system. With CAST, we match the relative difficulty of play to the individual's ability to solve the word scramble tasks. We provide an analysis of preliminary results for determining task difficulty with respect to our current participant cohort. Comment: 7 pages, 1 figure, 6 tables
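The abstract does not give the details of CAST's fuzzy inference system, but the general pattern of mapping crisp performance measures to a difficulty level can be sketched as follows. The membership functions, rule base, and variable names below are illustrative assumptions, not CAST's actual design, and the GA-driven optimization of these parameters is omitted.

```python
# Hypothetical Mamdani-style fuzzy inference step for matching game
# difficulty to a player's performance. All shapes and thresholds are
# assumptions for illustration only.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer_difficulty(solve_time_s, accuracy):
    """Return a difficulty level in [0, 1] from solve time and accuracy."""
    # Fuzzify the two crisp inputs.
    fast = tri(solve_time_s, -1, 0, 30)
    slow = tri(solve_time_s, 20, 60, 120)
    low_acc = tri(accuracy, -0.1, 0.0, 0.6)
    high_acc = tri(accuracy, 0.4, 1.0, 1.1)

    # Rule firing strengths (min as the fuzzy AND).
    raise_diff = min(fast, high_acc)   # fast and accurate -> harder
    lower_diff = min(slow, low_acc)    # slow and inaccurate -> easier
    keep_diff = max(min(fast, low_acc), min(slow, high_acc))

    # Weighted-average defuzzification over singleton outputs 1.0 / 0.5 / 0.0.
    num = raise_diff * 1.0 + keep_diff * 0.5 + lower_diff * 0.0
    den = raise_diff + keep_diff + lower_diff
    return num / den if den else 0.5
```

In a CAST-like system, a genetic algorithm would tune the breakpoints of the membership functions against each player's history rather than leaving them fixed as above.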

    Cooperation between expert knowledge and data mining discovered knowledge: Lessons learned

    Expert systems are traditionally built from knowledge elicited from a human expert, and it is precisely this knowledge elicitation that is the bottleneck in expert system construction. A data mining system, on the other hand, automatically extracts knowledge but needs expert guidance on the successive decisions to be made in each of its phases. In this context, expert knowledge and data-mining-discovered knowledge can cooperate, maximizing their individual capabilities: discovered knowledge can serve as a complementary source of knowledge for the expert system, whereas expert knowledge can guide the data mining process. This article summarizes several systems in which expert knowledge and data-mining-discovered knowledge cooperate, and reports our experience of such cooperation from a medical diagnosis project we developed, called Intelligent Interpretation of Isokinetics Data. A series of lessons were learned throughout project development; some are generally applicable, while others pertain exclusively to certain project types.
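One direction of the cooperation the article describes, mined rules complementing an expert rule base under expert-supplied admission criteria, can be sketched minimally. The rule representation, attribute names, and thresholds below are invented for illustration and are not taken from the isokinetics project.

```python
# Illustrative sketch: rules discovered by data mining augment an expert
# rule base, while expert constraints decide which mined rules are admitted.

def merge_rule_bases(expert_rules, mined_rules, min_confidence, forbidden_attrs):
    """Add mined rules to the expert base if they meet the expert's criteria."""
    combined = list(expert_rules)
    for rule in mined_rules:
        # Expert guidance: reject low-confidence rules and rules built on
        # attributes the expert considers clinically meaningless.
        if rule["confidence"] < min_confidence:
            continue
        if any(attr in forbidden_attrs for attr in rule["antecedent"]):
            continue
        if rule not in combined:  # complement, don't duplicate
            combined.append(rule)
    return combined

expert = [{"antecedent": ["peak_torque_low"], "conclusion": "weakness",
           "confidence": 1.0}]
mined = [
    {"antecedent": ["curve_irregular"], "conclusion": "possible_injury",
     "confidence": 0.85},
    {"antecedent": ["patient_id_odd"], "conclusion": "weakness",
     "confidence": 0.99},  # spurious attribute the expert excludes
]
rules = merge_rule_bases(expert, mined, min_confidence=0.8,
                         forbidden_attrs={"patient_id_odd"})
```

The reverse direction, expert knowledge guiding the mining process itself (feature selection, discretization choices, stopping criteria), is harder to reduce to a single function and is where several of the article's lessons apply.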

    Prospect patents, data markets, and the commons in data-driven medicine: openness and the political economy of intellectual property rights

    Scholars who point to political influences and the regulatory function of patent courts in the USA have long questioned the courts’ subjective interpretation of what ‘things’ can be claimed as inventions. The present article sheds light on a different but related facet: the role of the courts in regulating knowledge production. I argue that the recent cases decided by the US Supreme Court and the Federal Circuit, which made diagnostics and software very difficult to patent and which attracted criticism for a wealth of different reasons, are fine case studies of the current debate over the proper role of the state in regulating the marketplace and knowledge production in the emerging information economy. The article explains that these patents are prospect patents that may be used by a monopolist to collect data that everybody else needs in order to compete effectively. As such, they raise familiar concerns about coordination failures that emerge when a monopolist controls a resource, such as datasets, that others need and cannot replicate. In effect, the courts regulated the market, focusing primarily on ensuring the free flow of data in the emerging marketplace, very much in the spirit of the ‘free the data’ language in various policy initiatives, yet at the same time with an eye to boosting downstream innovation. In doing so, these decisions essentially endorse practices of personal information processing that constitute a new type of public domain: a source of raw materials that are there for the taking and that have become among the most important inputs to commercial activity. From this vantage point, the legal interpretation of the private and the shared legitimizes a model of extracting data from individuals, the raw material of information capitalism, that will fuel the next generation of data-intensive therapeutics in the field of data-driven medicine.

    A Review of Atrial Fibrillation Detection Methods as a Service

    Atrial Fibrillation (AF) is a common heart arrhythmia that often goes undetected, and even when it is detected, managing the condition can be challenging. In this paper, we review how the RR interval and Electrocardiogram (ECG) signals, incorporated into a monitoring system, can be used to track AF events. Were such an automated system to be implemented, it could help manage AF and thereby reduce patient morbidity and mortality. The main impetus behind developing such a service is that analyzing a greater volume of data can lead to better patient outcomes. Based on the literature review presented herein, we introduce methods that can detect AF efficiently and automatically via the RR interval and ECG signals. A cardiovascular disease monitoring service that incorporates one or more of these detection methods could extend event observation to all times and could therefore become useful for establishing any AF occurrence. The development of an automated, efficient method that monitors AF in real time would likely become a key component for meeting public health goals regarding the reduction of fatalities caused by the disease. Yet, at present, significant technological and regulatory obstacles remain that prevent the deployment of any proposed system. Establishing the scientific foundation for monitoring is important to provide an effective service to patients and healthcare professionals.
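The simplest family of RR-interval methods in this literature exploits the fact that AF produces irregularly irregular beat-to-beat intervals, so high normalized variability over a window flags a possible episode. The sketch below uses RMSSD as the variability measure; the 0.1 threshold and the example interval values are assumptions for illustration, not clinically validated parameters.

```python
# Minimal RR-interval AF screening heuristic: flag a window whose
# normalized beat-to-beat variability (RMSSD / mean RR) is high.
import math

def rmssd(rr):
    """Root mean square of successive RR-interval differences (seconds)."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def possible_af(rr_intervals, threshold=0.1):
    """Flag a window of RR intervals as possibly AF when normalized
    variability exceeds the (illustrative) threshold."""
    mean_rr = sum(rr_intervals) / len(rr_intervals)
    return rmssd(rr_intervals) / mean_rr > threshold

regular = [0.80, 0.82, 0.81, 0.80, 0.83, 0.81]    # sinus-like rhythm
irregular = [0.60, 1.05, 0.72, 1.20, 0.55, 0.98]  # AF-like irregularity
```

A deployed service would apply such a detector over sliding windows of a continuous stream and confirm positives against the full ECG morphology (e.g. absence of P waves), since ectopic beats alone can also raise RR variability.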

    Information Systems and Healthcare XXXIV: Clinical Knowledge Management Systems—Literature Review and Research Issues for Information Systems

    Knowledge Management (KM) has emerged as a possible solution to many of the challenges facing U.S. and international healthcare systems. These challenges include concerns regarding the safety and quality of patient care, critical inefficiency, disparate technologies and information standards, rapidly rising costs, and clinical information overload. In this paper, we focus on clinical knowledge management systems (CKMS) research. The objectives of the paper are to evaluate the current state of knowledge management systems diffusion in the clinical setting, assess the present status and focus of CKMS research efforts, and identify research gaps and opportunities for future work across the medical informatics and information systems disciplines. The study analyzes the literature along two dimensions: (1) the knowledge management processes of creation, capture, transfer, and application, and (2) the clinical processes of diagnosis, treatment, monitoring, and prognosis. The study reveals that the vast majority of CKMS research has been conducted by the medical and health informatics communities, and that information systems (IS) researchers have played a limited role in past CKMS research. Overall, the results indicate considerable potential for IS researchers to contribute their expertise to the improvement of clinical processes through technology-based KM approaches.

    A survey of outlier detection methodologies

    Outlier detection has been used for centuries to detect and, where appropriate, remove anomalous observations from data. Outliers arise from mechanical faults, changes in system behaviour, fraudulent behaviour, human error, instrument error, or simply natural deviations in populations. Their detection can identify system faults and fraud before they escalate with potentially catastrophic consequences. It can also identify errors and remove their contaminating effect on the data set, thereby purifying the data for processing. The original outlier detection methods were arbitrary, but principled and systematic techniques are now used, drawn from the full gamut of computer science and statistics. In this paper, we present a survey of contemporary techniques for outlier detection. We identify their respective motivations and distinguish their advantages and disadvantages in a comparative review.
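One classical statistical technique covered by surveys of this kind is the modified z-score of Iglewicz and Hoaglin, which scales by the median absolute deviation (MAD) instead of the standard deviation so that a single extreme value cannot mask itself by inflating the scale estimate. The example data below (an instrument-error reading among stable measurements) is invented for illustration.

```python
# MAD-based outlier flagging: a point is an outlier if its modified
# z-score |0.6745 * (x - median) / MAD| exceeds the conventional 3.5.
import statistics

def mad_outliers(data, threshold=3.5):
    """Return the values whose modified z-score exceeds the threshold."""
    med = statistics.median(data)
    mad = statistics.median(abs(x - med) for x in data)
    if mad == 0:  # degenerate case: more than half the data identical
        return []
    return [x for x in data if abs(0.6745 * (x - med) / mad) > threshold]

readings = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 55.0]  # one instrument error
```

With only seven points, a plain standard-deviation z-score would fail here, since the 55.0 reading inflates the standard deviation enough to keep its own z-score below 3; robustness of the scale estimate is exactly why MAD-based rules appear throughout the survey literature.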