65 research outputs found

    A Qualitative Evaluation of IoT-driven eHealth: Knowledge Management, Business Models and Opportunities, Deployment and Evolution

    eHealth has major potential, and its adoption may be considered necessary to achieve increased ambulant and remote medical care, higher quality, reduced personnel needs, and lower costs in healthcare. In this paper the authors give a reasoned, qualitative evaluation of IoT-driven eHealth from theoretical and practical viewpoints. They examine associated knowledge management issues and the contributions of IoT to eHealth, along with requirements, benefits, limitations and entry barriers. Particular attention is given to security and privacy issues. Finally, the conditions for business plans and accompanying value chains are analyzed realistically. The resulting implementation issues and required commitments are also discussed on the basis of a case study. The authors confirm that IoT-driven eHealth can and will happen; however, much more must be addressed to bring it into sync with medical and general technological developments from an industrial, state-of-the-art perspective, and to ensure that its benefits are recognized and realized in a timely manner.

    Semantics-Empowered Big Data Processing with Applications

    We discuss the nature of Big Data and address the role of semantics in analyzing and processing Big Data that arises in the context of Physical-Cyber-Social Systems. We organize our research around the Five Vs of Big Data, where four of the Vs are harnessed to produce the fifth V: value. To handle the challenge of Volume, we advocate semantic perception that can convert low-level observational data into higher-level abstractions more suitable for decision-making. To handle the challenge of Variety, we resort to semantic models and annotations of data so that much of the intelligent processing can be done at a level independent of the heterogeneity of data formats and media. To handle the challenge of Velocity, we seek to use a continuous-semantics capability to dynamically create event- or situation-specific models and recognize relevant new concepts, entities and facts. To handle Veracity, we explore the formalization of trust models and approaches to glean trustworthiness. These four Vs of Big Data are harnessed by semantics-empowered analytics to derive value in support of practical applications transcending the physical-cyber-social continuum.

    Dietary Phenylalanine Requirement of Fingerling Oreochromis niloticus (Linnaeus)

    This study was conducted to determine the dietary phenylalanine requirement of fingerling Oreochromis niloticus in an 8-week experiment in a flow-through system (1-1.5 L/min) at 28°C water temperature. The phenylalanine requirement was determined by feeding six casein-gelatin-based amino acid test diets (350 g kg–1 CP; 16.72 kJ g–1 GE) with graded levels of phenylalanine (4, 6.5, 9, 11.5, 14 and 16.5 g kg–1 dry diet) at a constant level (10 g kg–1) of dietary tyrosine to triplicate groups of fish (1.65±0.09 g) to near-satiation. Absolute weight gain (AWG, g fish–1), feed conversion ratio (FCR), protein deposition (PD%), phenylalanine retention efficiency (PRE%) and the RNA/DNA ratio were found to improve with increasing concentrations of phenylalanine, peaking at 11.5 g kg–1 dry diet. Quadratic regression analysis of AWG, PD and PRE against the varying levels of dietary phenylalanine indicated requirements of 12.1, 11.6 and 12.7 g kg–1 dry diet, respectively; the inclusion of phenylalanine at 12.1 g kg–1 dry diet, corresponding to 34.6 g kg–1 of dietary protein, is therefore optimum for this fish. Based on the above data, the total aromatic amino acid requirement of fingerling O. niloticus was found to be 20.6 g kg–1 (12.1 g kg–1 phenylalanine + 8.5 g kg–1 tyrosine) dry diet, corresponding to 58.8 g kg–1 of dietary protein.
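
    The quadratic-regression step above can be sketched numerically. The phenylalanine levels are those listed in the abstract, but the weight-gain values below are invented stand-ins chosen only to peak near the reported optimum, so the sketch illustrates the procedure, not the study's data.

```python
import numpy as np

# Dietary phenylalanine levels tested (g/kg dry diet), from the abstract.
levels = np.array([4.0, 6.5, 9.0, 11.5, 14.0, 16.5])

# Illustrative absolute weight gain values (g/fish) -- NOT the paper's data,
# just a plausible dose-response curve that peaks near 11.5 g/kg.
awg = np.array([2.1, 3.4, 4.3, 4.8, 4.6, 4.2])

# Fit a quadratic y = c2*x^2 + c1*x + c0 and locate its vertex,
# which is taken as the estimated requirement.
c2, c1, c0 = np.polyfit(levels, awg, deg=2)
requirement = -c1 / (2.0 * c2)
print(f"estimated requirement: {requirement:.1f} g/kg dry diet")
```

    With these stand-in values the fitted vertex lands near 12.5 g/kg, in the same neighborhood as the 12.1 g/kg requirement the study reports from its AWG regression.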

    Wellness Representation of Users in Social Media: Towards Joint Modelling of Heterogeneity and Temporality

    The increasing popularity of social media has encouraged health consumers to share, explore, and validate health and wellness information on social networks, which provide a rich repository of Patient Generated Wellness Data (PGWD). While data-driven healthcare has attracted much attention from academia and industry for improving care delivery through personalized healthcare, limited research has been done on harvesting and utilizing the PGWD available on social networks. Recently, representation learning has been widely used in many applications to learn low-dimensional embeddings of users. However, existing approaches for representation learning are not directly applicable to PGWD due to its domain nature, characterized by the longitudinality, incompleteness, and sparsity of the observed data as well as the heterogeneity of the patient population. To tackle these problems, we propose an approach which learns the embedding directly from longitudinal data of users, instead of from a vector-based representation. In particular, we simultaneously learn a low-dimensional latent space as well as the temporal evolution of users in the wellness space. The proposed method takes into account two types of wellness prior knowledge: (1) the temporal progression of wellness attributes; and (2) the heterogeneity of wellness attributes in the patient population. Our approach scales well to large datasets using parallel stochastic gradient descent. We conduct extensive experiments to evaluate our framework on three major tasks in the wellness domain: attribute prediction, success prediction, and community detection. Experimental results on two real-world datasets demonstrate the ability of our approach to learn effective user representations.
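
    The core modeling idea can be illustrated with a minimal sketch: a low-dimensional trajectory per user, attribute vectors shared across users, and a smoothness penalty tying consecutive time steps, trained by plain SGD. Everything here (dimensions, synthetic observations, hyperparameters) is hypothetical and only illustrates the idea, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Learn a trajectory U[u, t] per user plus attribute vectors A[a], so that an
# observed wellness attribute satisfies y[u, t, a] ~ U[u, t] . A[a], with a
# temporal-smoothness prior linking consecutive time steps of each user.
n_users, n_steps, n_attrs, k = 20, 10, 5, 3
U = rng.normal(scale=0.1, size=(n_users, n_steps, k))
A = rng.normal(scale=0.1, size=(n_attrs, k))

# Sparse synthetic observations: (user, time, attribute, value) tuples.
obs = [(rng.integers(n_users), rng.integers(n_steps), rng.integers(n_attrs),
        rng.normal()) for _ in range(1500)]

lr, lam = 0.05, 0.1
for _ in range(20):                      # epochs of plain SGD
    for u, t, a, y in obs:
        err = U[u, t] @ A[a] - y         # prediction error on one observation
        gU, gA = err * A[a], err * U[u, t]
        if t + 1 < n_steps:              # temporal-progression prior
            gU += lam * (U[u, t] - U[u, t + 1])
        U[u, t] -= lr * gU
        A[a] -= lr * gA

mse = np.mean([(U[u, t] @ A[a] - y) ** 2 for u, t, a, y in obs])
print(f"training MSE: {mse:.3f}")
```

    Updating one observation at a time is what makes the scheme amenable to the parallel stochastic gradient descent mentioned in the abstract: disjoint (user, attribute) pairs touch disjoint parameters and can be processed concurrently.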

    Integrated Machine Learning and Optimization Frameworks with Applications in Operations Management

    Incorporation of contextual inference in the optimality analysis of operational problems is a canonical characteristic of data-informed decision making that requires interdisciplinary research. In an attempt to achieve individualization in operations management, we design rigorous yet practical mechanisms that boost efficiency, restrain uncertainty and elevate real-time decision making through the integration of ideas from the machine learning and operations research literature. In our first study, we investigate the decision of whether to admit a patient to a critical care unit, a crucial operational problem that significantly influences both hospital performance and patient outcomes. Hospitals currently lack a methodology to selectively admit patients to these units in a way that incorporates each patient's individual health metrics while considering the hospital's operational constraints. We model the problem as a complex loss queueing network with a stochastic model of how long risk-stratified patients spend in particular units and how they transition between units. A data-driven optimization methodology then approximates an optimal admission control policy for the network of units. While enforcing low levels of patient blocking, we optimize a monotonic dual-threshold admission policy. Our methodology captures utilization and accessibility in a network model of care pathways while supporting the personalized allocation of scarce care resources to the neediest patients. The benefits of admission thresholds that vary by day of week are also examined. In the second study, we analyze the efficiency of surgical unit operations in the era of big data. The accuracy of surgical case duration predictions is a crucial element in hospital operational performance.
    We propose a comprehensive methodology that incorporates both structured and unstructured data to generate individualized predictions of the overall distribution of surgery durations. We then investigate methods to incorporate such individualized predictions into operational decision-making. We introduce novel prescriptive models to address optimization under uncertainty in the fundamental surgery appointment scheduling problem by utilizing the multi-dimensional data features available prior to the surgery. Electronic medical record systems provide detailed patient features that enable the prediction of individualized case-time distributions; however, existing approaches in this context usually employ only limited, aggregate information and do not take advantage of these detailed features. We show how quantile regression forests can be integrated into three common optimization formulations that capture the stochasticity of this problem: stochastic optimization, robust optimization and distributionally robust optimization. In the last part of this dissertation, we provide the first study of online learning problems under stochastic constraints that are "soft", i.e., need to be satisfied with high likelihood. Under a Bayesian framework, we propose and analyze a scheme that provides statistical feasibility guarantees throughout the learning horizon by using posterior Monte Carlo samples to form sampled constraints that generalize the scenario generation approach commonly used in chance-constrained programming. We demonstrate how our scheme can be integrated into Thompson sampling and illustrate it with an application in online advertisement.
    PhD thesis, Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/145936/1/meisami_1.pd
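
    The sampled-constraints idea from the last study can be illustrated on a deliberately simple toy problem; the budget setting, the Gamma stand-in posterior, and all numbers below are invented for illustration. Each posterior draw of the uncertain parameter becomes one hard constraint, and satisfying every draw approximates the soft chance constraint.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setting (not from the dissertation): choose an ad-volume
# decision x to maximize revenue r*x, subject to the soft budget constraint
# P(cost_per_unit * x <= budget) >= 1 - alpha, with cost_per_unit uncertain.
budget, revenue_per_unit = 100.0, 2.0

# Posterior Monte Carlo samples of the uncertain cost-per-unit parameter
# (drawn here from a stand-in Gamma posterior).
theta_samples = rng.gamma(shape=4.0, scale=0.5, size=2000)

# Scenario-generation step: enforce the constraint in every sampled scenario.
# The largest sampled cost binds, giving the feasible set x <= budget/max(theta).
x_max = budget / theta_samples.max()

# The objective revenue_per_unit * x is increasing in x, so the optimum sits
# on the boundary of the sampled feasible set.
x_star = x_max
print(f"feasible decision: x* = {x_star:.2f}")
```

    More samples tighten the feasibility guarantee at the cost of a more conservative decision, which is the usual scenario-generation trade-off in chance-constrained programming.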

    Preface


    Making augmented human intelligence in medicine practical: A case study of treating major depressive disorder

    Individualized medicine tailors diagnoses and treatment options to the individual patient. This is a paradigm shift from choosing a treatment based on the highest reported efficacy in clinical trials, which is often not effective for all individuals. In this dissertation, we assert that treatment selection and management can be individualized when clinicians' assessments of disease symptoms are augmented with a few analytically identified patient-specific measures (e.g., genomics, metabolomics) that are prognostic or predictive of treatment outcomes. Patient-derived biological, clinical and symptom measures are complex: heterogeneous, noisy and high-dimensional. The research question then becomes: "Which few among these large, complex measures are sufficient to augment the clinician's disease assessment and treatment logic to individualize treatment decisions?" This dissertation introduces ALMOND, an Analytics and Machine Learning Framework for Actionable Intelligence from Clinical and Omics Data. As a case study, this dissertation describes how ALMOND addresses the unmet need for individualized medicine in treating major depressive disorder, the leading cause of medical disability worldwide. The biggest challenge in individualizing the treatment of depression lies in the heterogeneity of how depressive symptoms manifest between individuals and in their varied response to the same treatment. ALMOND comprises a systematic analytical workflow to individualize antidepressant treatment by addressing this heterogeneity. First, the "right patients" are identified by stratifying patients using unsupervised learning, which serves as a foundation for associating their disease states with multiple pharmacological (drug-associated) measures.
    Second, "right drug" selection is shown to be feasible by demonstrating that psychiatrists' depression severity assessments, augmented with pharmacogenomic measures, can accurately predict remission of depressive symptoms using supervised learning. Finally, probabilistic graphs provide early and easily interpretable prognoses at the "right time" by accounting for changes in routinely assessed depressive symptom severity. Choosing the antidepressant with the highest likelihood of achieving remission reduces the chance of persistent depressive symptoms, which are often among the leading medical conditions in those who die by suicide or develop chronic illnesses.

    Bilingual Autoencoder-based Efficient Harmonization of Multi-source Private Data for Accurate Predictive Modeling

    Sharing electronic health record data is essential for advanced analysis but may put sensitive information at risk. Several studies have attempted to address this risk using contextual embedding, but with many hospitals involved they are often inefficient and inflexible. We therefore propose a bilingual autoencoder-based model to harmonize local embeddings that live in different spaces. Cross-hospital reconstruction of embeddings drives the encoders to map embeddings from different hospitals into a shared space and align them spontaneously. We also suggest two-phase training to prevent distortion of the embeddings when harmonizing with hospitals that hold biased information. In our experiments, we used medical event sequences from the Medical Information Mart for Intensive Care-III (MIMIC-III) dataset and simulated a multi-hospital setting. For evaluation, we measured the alignment of events from different hospitals and the accuracy of predicting a patient's diagnosis at the next admission in three scenarios in which local embeddings do not work. The proposed method efficiently harmonizes embeddings in different spaces, increases prediction accuracy, and offers the flexibility to include new hospitals, and is thus superior to previous methods in most cases. It will be useful in predictive tasks that utilize distributed data while preserving private information.
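
    A minimal sketch of the cross-reconstruction/alignment idea, assuming linear encoders and toy synthetic data in which paired embeddings are two linear "views" of the same latent concepts; nothing here reproduces the paper's actual architecture or training scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: each hospital's local event embeddings are a different linear
# view of shared underlying concepts z.
d_local, d_shared, n = 8, 4, 200
z = rng.normal(size=(n, d_shared))              # shared latent concepts
view_a = rng.normal(size=(d_shared, d_local))   # hospital A's embedding space
view_b = rng.normal(size=(d_shared, d_local))   # hospital B's embedding space
xa, xb = z @ view_a, z @ view_b                 # paired local embeddings

# Linear encoders mapping each hospital's embeddings into a shared space.
Ea = rng.normal(scale=0.1, size=(d_local, d_shared))
Eb = rng.normal(scale=0.1, size=(d_local, d_shared))

lr = 0.01
for _ in range(1000):
    # Cross-alignment term: encodings of paired items should coincide.
    diff = xa @ Ea - xb @ Eb
    # Gradient descent on ||xa Ea - xb Eb||^2 plus an anchor pulling both
    # encodings toward z, which rules out the trivial all-zero solution.
    Ea -= lr * (xa.T @ (diff + (xa @ Ea - z)) / n)
    Eb -= lr * (xb.T @ (-diff + (xb @ Eb - z)) / n)

align_err = np.mean((xa @ Ea - xb @ Eb) ** 2)
print(f"mean alignment error: {align_err:.4f}")
```

    In the toy setup the anchor target z is known; the paper's contribution is precisely that alignment emerges from cross-hospital reconstruction without sharing such ground truth, so this sketch only shows why a shared space plus a pairwise alignment loss pulls the two hospitals' embeddings together.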
