52 research outputs found

    Linking medical records to an expert system

    Get PDF
    This presentation will be given using the IMR-Entry (Intelligent Medical Record Entry) system. IMR-Entry is a software program developed as a front end to our diagnostic consultant software MEDAS (Medical Emergency Decision Assistance System). MEDAS is a diagnostic consultant system that uses a multimembership Bayesian design for its inference engine and relational database technology for its knowledge base maintenance. Research on MEDAS began at the University of Southern California and the Institute of Critical Care in the mid-1970s with support from NASA and NSF. The MEDAS project moved to Chicago in 1982; its current progress is due to collaboration among the Illinois Institute of Technology, The Chicago Medical School, Lake Forest College, and NASA at KSC. Since the purpose of an expert system is to derive a hypothesis, its communication vocabulary is limited to the features used by its knowledge base. We studied the development of a comprehensive, problem-based medical record entry system that could handshake with an expert system while simultaneously creating an electronic medical record. IMR-E is a graphically oriented, comprehensive computer-based patient record that serves as a front end to the MEDAS expert system. The program's major components are demonstrated.
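    A multimembership Bayesian design scores each disorder independently, so a patient can rank highly on several disorders at once. A minimal illustrative sketch of that idea (not the actual MEDAS implementation; the prior and likelihood ratios below are invented):

```python
from math import prod

def disorder_posterior(prior, likelihood_ratios):
    """Posterior probability of one disorder, given independent feature
    likelihood ratios P(feature | disorder) / P(feature | not disorder)."""
    odds = (prior / (1.0 - prior)) * prod(likelihood_ratios)
    return odds / (1.0 + odds)

# Each disorder is scored on its own posterior, so high scores for two
# disorders can coexist ("multimembership"), unlike a single-label classifier.
features = {"fever": 4.0, "cough": 2.5}     # hypothetical likelihood ratios
p = disorder_posterior(0.05, features.values())
```

    Because every disorder hypothesis is updated separately, adding a new disorder to the knowledge base does not require retraining the others.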

    International Telemedicine/Disaster Medicine Conference: Papers and Presentations

    Get PDF
    The first International Telemedicine/Disaster Medicine Conference was held in Dec. 1991. The overall purpose was to convene an international, multidisciplinary gathering of experts to discuss the emerging field of telemedicine and assess its future directions: principally the application of space technology to disaster response and management, but also to clinical medicine, remote health care, public health, and other needs. This collection is intended to acquaint the reader with recent landmark efforts in telemedicine as applied to disaster management and remote health care, the technical requirements of telemedicine systems, the application of telemedicine and telehealth in the U.S. space program, and the social and humanitarian dimensions of this area of medicine.

    Powertrain Systems for Net-Zero Transport

    Get PDF
    The transport sector continues to shift towards alternative powertrains, particularly with the UK Government’s announcement to end the sale of petrol and diesel passenger cars by 2030 and increasing support for alternatives. Despite this announcement, the internal combustion engine continues to play a significant role, both in the passenger car market through the use of hybrids and sustainable low-carbon fuels, and in other sectors such as heavy-duty vehicles and off-highway applications across the globe. Building on the industry-leading IC Engines conference, the 2021 Powertrain Systems for Net-Zero Transport conference (7-8 December 2021, London, UK) focussed on the internal combustion engine’s role in net-zero transport and covered developments in the wide range of propulsion systems available (electric, fuel cell, sustainable fuels etc) and their associated powertrains. To achieve net-zero transport across the globe, life-cycle analysis of future powertrains and energy sources was also discussed. Powertrain Systems for Net-Zero Transport provided a forum for engine, fuels, e-machine, fuel cell and powertrain experts to look closely at the developments in powertrain technology required to meet the demands of the net-zero future and global competition in all sectors of the road transportation, off-highway and stationary power industries.

    Expressive and modular rule-based classifier for data streams

    Get PDF
    The advances in computing software, hardware, connected devices and wireless communication infrastructure in recent years have led to the desire to work with streaming data sources. Yet the number of techniques, approaches and algorithms that can work with data from a streaming source is still very limited compared with batched data. Although data mining has been a well-studied topic of knowledge discovery for decades, many unique properties of, and challenges in, learning from a data stream have not been considered properly, despite the growing presence of streaming data sources and the real need to mine information from them. This thesis aims to contribute to knowledge by developing a rule-based algorithm that learns classification rules directly from data streams, such that the learned rules are expressive enough for a human user to easily interpret the concept and rationale behind the model’s predictions. There are two main structures for representing a classification model: the ‘tree-based’ structure and the ‘rule-based’ structure. Even though both forms of representation are popular and well known in traditional data mining, they differ in interpretability and model quality in certain circumstances. The first part of this thesis analyses background work and topics relevant to learning classification rules from data streams. This study identifies the essential requirements for producing high-quality classification rules from data streams, and shows why many systems, algorithms and techniques designed for classifying a static dataset are not applicable in a streaming environment. The second part of the thesis investigates a new technique to improve the efficiency and accuracy of learning heuristics from numeric features in a streaming data source.
Computational cost is an important consideration for an effective and practical learning algorithm, because the learner must process data examples sequentially as they arrive and then discard them. If the computational cost is too high, the learner may not keep pace with high-velocity and possibly unbounded data streams. The proposed technique is first discussed in the context of using Gaussian distributions as heuristics for building rule terms on numeric features. Empirical evaluation then shows the successful integration of the proposed technique into an existing rule-based algorithm for data streams, eRules. Continuing the topic of rule-based classification of data streams, the use of Hoeffding’s Inequality addresses another problem in learning from a data stream: how much data should be seen before learning starts, and how the model should be kept up to date over time. Building on Hoeffding’s Inequality, this study presents the Hoeffding Rules algorithm, which induces modular rules directly from a streaming data source, with dynamic window sizes throughout the learning period to ensure efficiency and robustness towards concept drift. Concept drift is another challenge unique to mining data streams, in which the underlying concept of the data changes, gradually or abruptly, over time, and the learner should adapt to these changes as quickly as possible. This research focuses on the development of a rule-based algorithm, Hoeffding Rules, that treats streaming environments as the primary data source and addresses several unique challenges in learning rules from data streams, such as concept drift and computational efficiency.
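    The use of Hoeffding's Inequality described above can be sketched as follows. This is an illustrative sketch of the standard Hoeffding bound, not the thesis's actual Hoeffding Rules code; the decision function and its parameters are assumptions:

```python
from math import log, sqrt

def hoeffding_bound(value_range, delta, n):
    """Hoeffding's inequality: with probability 1 - delta, the observed
    mean of n samples lies within epsilon of the true mean."""
    return sqrt((value_range ** 2) * log(1.0 / delta) / (2.0 * n))

def enough_examples(best, second_best, value_range, delta, n):
    """Decide whether enough stream examples have been seen to commit to
    the best-scoring rule term: the observed gap between the two best
    candidates must exceed the Hoeffding epsilon."""
    return (best - second_best) > hoeffding_bound(value_range, delta, n)
```

    Because epsilon shrinks as n grows, the learner can wait for exactly as much data as the current candidate gap requires, rather than using a fixed window size.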
This work underscores the need for, and importance of, interpretable machine learning models, applying new studies to improve the ability to mine useful insights from potentially high-velocity, high-volume and unbounded data streams. More broadly, this research complements the study of learning classification rules from data streams, addressing some of their unique challenges compared with conventional batch data and providing the knowledge necessary to systematically and effectively learn expressive and modular classification rules from data streams.
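    The Gaussian heuristic for numeric features mentioned earlier can be sketched as fitting a per-class normal distribution and scoring how well a candidate value fits it. This is an illustrative sketch, not the thesis's implementation; in a real stream the mean and variance would be maintained incrementally rather than refit:

```python
from statistics import mean, stdev, NormalDist

def gaussian_term_heuristic(values_for_class, x):
    """Score numeric value x by its probability density under a Gaussian
    fitted to the values this class has produced so far."""
    dist = NormalDist(mean(values_for_class), stdev(values_for_class))
    return dist.pdf(x)

# A rule term like "temperature around 37" scores values near the class
# mean far higher than outliers:
warm = [36.5, 37.0, 37.2, 36.8, 37.1]
near = gaussian_term_heuristic(warm, 37.0)
far = gaussian_term_heuristic(warm, 40.0)
```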

    AMANDA: density-based adaptive model for nonstationary data under extreme verification latency scenarios

    Get PDF
    Gradual concept drift refers to a smooth, gradual change over time in the relations between input and output data in the underlying distribution. This problem renders a model obsolete and consequently degrades prediction quality. In addition, there is a challenging task during the stream: the extreme verification latency (EVL) in verifying the labels. For batch scenarios, state-of-the-art methods propose adapting a supervised model by using an unconstrained least-squares importance fitting (uLSIF) algorithm, or a semi-supervised approach along with a core support extraction (CSE) method. However, these methods do not properly tackle the problems above, owing to their high computational cost on large data volumes, their failure to select the samples that best represent the drift, or the many parameters they require tuning. We therefore propose a density-based adaptive model for nonstationary data (AMANDA), which uses a semi-supervised classifier along with a density-based CSE method. AMANDA has two variations: AMANDA with a fixed cutting percentage (AMANDA-FCP), and AMANDA with a dynamic cutting percentage (AMANDA-DCP). Our results indicate that the two variations of AMANDA outperform the state-of-the-art methods on almost all synthetic and real datasets, with an improvement of up to 27.98% in average error. We found that AMANDA-FCP improved results for gradual concept drift even with a small amount of initial labeled data. Moreover, our results indicate that SSL classifiers improve when they work along with our static or dynamic CSE methods. We therefore emphasize the importance of research directions based on this approach.
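    The core support extraction step with a cutting percentage can be sketched as keeping only the densest fraction of the current sample pool. This is a simplified one-dimensional illustration, not the actual AMANDA code; its density estimate (inverse mean distance to other points) and the function name are assumptions:

```python
def core_support(points, keep_fraction):
    """Density-based core support extraction (illustrative, 1-D):
    keep the densest keep_fraction of points, ranking each point by
    how close it sits to the rest of the pool."""
    n = len(points)

    def density(i):
        # Negative total distance to the others: larger means denser.
        return -sum(abs(points[i] - points[j]) for j in range(n) if j != i)

    ranked = sorted(range(n), key=density, reverse=True)
    keep = max(1, int(n * keep_fraction))
    return [points[i] for i in ranked[:keep]]

# With a fixed cutting percentage (as in AMANDA-FCP) the isolated point
# falls outside the densest half and is discarded:
core = core_support([0.0, 0.1, 0.2, 5.0], 0.5)
```

    AMANDA-DCP would differ only in choosing `keep_fraction` dynamically from the estimated drift rather than fixing it in advance.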