
    Machine Learning in Chronic Pain Research: A Scoping Review

    Given its high prevalence and associated cost, chronic pain has a significant impact on individuals and society. Improvements in the treatment and management of chronic pain may increase patients’ quality of life and reduce societal costs. In this paper, we evaluate state-of-the-art machine learning approaches in chronic pain research. A literature search was conducted using the PubMed, IEEE Xplore, and Association for Computing Machinery (ACM) Digital Library databases. Relevant studies were identified by screening titles and abstracts for keywords related to chronic pain and machine learning, followed by analysing the full texts. Two hundred and eighty-seven publications were identified in the literature search; in total, fifty-three papers on chronic pain research and machine learning were reviewed. The review showed that while many studies have emphasised machine learning-based classification for the diagnosis of chronic pain, far less attention has been paid to the treatment and management of chronic pain. More research is needed on machine learning approaches to the treatment, rehabilitation, and self-management of chronic pain. As with other chronic conditions, patient involvement and self-management are crucial. To achieve this, patients with chronic pain need digital tools that can help them make decisions about their own treatment and care.
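
    The screening step described above can be illustrated with a minimal sketch: a record is kept only if its title or abstract mentions at least one chronic-pain term and at least one machine-learning term. The keyword lists and sample records below are hypothetical and do not reproduce the review's actual search strategy.

```python
# Hypothetical keyword lists; the review's actual search strategy is more elaborate.
PAIN_TERMS = ["chronic pain", "fibromyalgia", "chronic low back pain"]
ML_TERMS = ["machine learning", "deep learning", "neural network", "random forest"]

def passes_screening(record):
    """Keep a record only if it mentions both a pain term and an ML term."""
    text = (record.get("title", "") + " " + record.get("abstract", "")).lower()
    return any(t in text for t in PAIN_TERMS) and any(t in text for t in ML_TERMS)

# Toy records standing in for database search results (PubMed, IEEE Xplore, ACM DL).
records = [
    {"title": "Random forest classification of chronic low back pain phenotypes",
     "abstract": "..."},
    {"title": "Opioid prescribing trends in primary care", "abstract": "..."},
]
included = [r for r in records if passes_screening(r)]
print([r["title"] for r in included])   # only the first record survives screening
```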

    Context-based energy disaggregation in smart homes

    In this paper, we address the problem of energy conservation and optimization in residential environments by providing users with useful information to solicit a change in consumption behavior. Taking care to keep installation and management costs very low, our work proposes a Non-Intrusive Load Monitoring (NILM) approach, which consists of disaggregating the whole-house power consumption into the individual portions associated with each device. State-of-the-art NILM algorithms need monitoring data sampled at high frequency, thus requiring high costs for data collection and management. In this paper, we propose an NILM approach that relaxes the requirements on monitoring data, since it uses total active power measurements gathered at low frequency (about 1 Hz). The proposed approach is based on the use of Factorial Hidden Markov Models (FHMM) in conjunction with context information related to the user's presence in the house and the hourly utilization of appliances. Through a set of tests, we investigated how the use of these additional context-awareness features could improve disaggregation results with respect to the basic FHMM algorithm. The tests have been performed using Tracebase, an open dataset consisting of data gathered from real home environments.
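
    As a rough illustration of the disaggregation idea, the sketch below treats each appliance as a two-state (off/on) chain with a nominal power draw, models the 1 Hz aggregate reading as the sum of the active draws plus Gaussian noise, and lets a context feature (occupancy) scale the prior probability that user-operated appliances are on; Viterbi decoding over the joint state space then recovers per-appliance states. This is not the paper's implementation: appliance names, power levels, probabilities, and the way context is injected are all invented for illustration.

```python
import itertools
import numpy as np

appliances = {            # name: (nominal power in W, base probability of being on)
    "fridge": (120.0, 0.40),
    "kettle": (2000.0, 0.02),
    "tv":     (100.0, 0.20),
}
stay = 0.95               # probability that an appliance keeps its current state
noise_std = 30.0          # std. dev. of the aggregate measurement noise (W)

names = list(appliances)
powers = np.array([appliances[n][0] for n in names])
base_on = np.array([appliances[n][1] for n in names])
# Joint (factorial) state space: every on/off combination of the appliances.
states = np.array(list(itertools.product([0, 1], repeat=len(names))))
state_power = states @ powers

def log_emission(y):
    # Gaussian log-likelihood of the aggregate reading under each joint state.
    return -0.5 * ((y - state_power) / noise_std) ** 2

def log_context(occupied):
    # Context feature: when nobody is home, user-operated appliances are less likely on.
    on_prob = base_on * (1.0 if occupied else 0.2)
    on_prob[0] = base_on[0]                     # the fridge ignores occupancy
    p = np.where(states == 1, on_prob, 1.0 - on_prob)
    return np.log(p).sum(axis=1)

# Joint transition matrix assuming appliances switch independently.
log_trans = np.array([[np.log(np.where(si == sj, stay, 1.0 - stay)).sum()
                       for sj in states] for si in states])

def disaggregate(aggregate, occupancy):
    """Viterbi decoding of per-appliance on/off states from 1 Hz aggregate power."""
    T = len(aggregate)
    back = np.zeros((T, len(states)), dtype=int)
    delta = log_context(occupancy[0]) + log_emission(aggregate[0])
    for t in range(1, T):
        scores = delta[:, None] + log_trans
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emission(aggregate[t]) + log_context(occupancy[t])
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return states[path[::-1]]                   # shape (T, n_appliances), 0/1 entries

# Toy usage: the kettle switches on for a few seconds while the occupant is home.
agg = np.array([130.0, 125.0, 2140.0, 2120.0, 2135.0, 150.0, 140.0])
occ = np.array([True] * len(agg))
print(names)
print(disaggregate(agg, occ))
```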

    Clustering Approaches for Multi-source Entity Resolution

    Entity Resolution (ER) or deduplication aims at identifying entities, such as specific customer or product descriptions, in one or several data sources that refer to the same real-world entity. ER is of key importance for improving data quality and has a crucial role in data integration and querying. The previous generation of ER approaches focused on integrating records from two relational databases or performing deduplication within a single database. Nevertheless, in the era of Big Data the number of available data sources is increasing rapidly, so large-scale data mining or querying systems need to integrate data obtained from numerous sources. For example, in online digital libraries or e-shops, publications or products are incorporated from a large number of archives or suppliers across the world, or within a specified region or country, to provide a unified view for the user. This process requires data consolidation from numerous heterogeneous data sources, most of which are evolving. As the number of sources grows, data heterogeneity and velocity increase, as does the variance in data quality. Multi-source ER, i.e., finding matching entities in an arbitrary number of sources, is therefore a challenging task. Previous efforts for matching and clustering entities across multiple sources (> 2) mostly treated all sources as a single source. This approach rules out the use of metadata or provenance information for enhancing integration quality and leads to poor results because differences in source quality are ignored. The conventional ER pipeline consists of blocking, pair-wise matching of entities, and classification. To meet the new needs and requirements, holistic clustering approaches that are capable of scaling to many data sources are needed. Holistic clustering-based ER should further overcome the restriction of pairwise linking by grouping entities from multiple sources into clusters. The clustering step aims at removing false links while adding missing true links across sources. Additionally, incremental clustering and repairing approaches need to be developed to cope with the ever-increasing number of sources and new incoming entities. To this end, we developed novel clustering and repairing schemes for multi-source entity resolution. The approaches are capable of grouping entities from multiple clean (duplicate-free) sources, as well as handling data from an arbitrary combination of clean and dirty sources. The multi-source clustering schemes developed specifically for multi-source ER obtain superior results compared to general-purpose clustering algorithms. Additionally, we developed incremental clustering and repairing methods in order to handle evolving sources. The proposed incremental approaches are capable of incorporating new sources as well as new entities from existing sources. The more sophisticated approach is able to repair previously determined clusters, and consequently yields improved quality and a reduced dependency on the insertion order of the new entities. To ensure scalability, parallel variants of all approaches are implemented on top of Apache Flink, a distributed processing engine. The proposed methods have been integrated in a new end-to-end ER tool named FAMER (FAst Multi-source Entity Resolution system). The FAMER framework comprises Linking and Clustering components encompassing both batch and incremental ER functionalities. The output of the Linking component is recorded as a similarity graph, where each vertex represents an entity and each edge carries the similarity between two entities. This similarity graph is the input of the Clustering component. Comprehensive comparative evaluations show that the proposed clustering and repairing approaches for both batch and incremental ER achieve high quality while maintaining scalability.
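
    A minimal sketch of the source-aware clustering idea (not FAMER itself): entities are (source, id) pairs, the similarity graph is a list of weighted edges, and a greedy union-find merge accepts an edge only if the resulting cluster would not contain two entities from the same clean (duplicate-free) source. Link repair and incremental updates are omitted, and the entity names and similarity scores are invented.

```python
from collections import defaultdict

def cluster(entities, edges, clean_sources):
    """entities: list of (source, entity_id); edges: list of (i, j, similarity)."""
    parent = list(range(len(entities)))

    def find(x):
        # Union-find with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Per-cluster count of entities from each source, to enforce the clean-source rule.
    members = {i: {entities[i][0]: 1} for i in range(len(entities))}

    for i, j, sim in sorted(edges, key=lambda e: -e[2]):   # strongest links first
        ri, rj = find(i), find(j)
        if ri == rj:
            continue
        merged = defaultdict(int)
        for src, n in list(members[ri].items()) + list(members[rj].items()):
            merged[src] += n
        # Reject the merge if any clean source would contribute more than one entity.
        if any(src in clean_sources and n > 1 for src, n in merged.items()):
            continue
        parent[rj] = ri
        members[ri] = dict(merged)

    clusters = defaultdict(list)
    for idx in range(len(entities)):
        clusters[find(idx)].append(entities[idx])
    return list(clusters.values())

# Toy usage: sources A and B are clean (duplicate-free), C is dirty.
ents = [("A", "iPhone 13"), ("B", "Apple iPhone 13"), ("B", "iPhone 13 case"), ("C", "i-Phone 13")]
sims = [(0, 1, 0.92), (0, 3, 0.85), (1, 2, 0.70), (2, 3, 0.60)]
for group in cluster(ents, sims, clean_sources={"A", "B"}):
    print(group)
```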

    Ecosystemic Evolution Feeded by Smart Systems

    Information Society is advancing along a route of ecosystemic evolution. Advances in ICT and the Internet, together with the progression of the systemic approach to the enhancement and application of Smart Systems, are the foundation of this evolution. The approach is therefore expected to evolve to meet the basic requirement of a significant general enhancement of human and social well-being in all spheres of life (public, private, professional). This implies enhancing and exploiting the net-living virtual space so that it becomes a virtuous, beneficial complement to the real-life space. Meanwhile, the contextual evolution of smart cities aims to strongly empower this ecosystemic approach by enhancing and diffusing net-living benefits across the territory where people live, while also targeting a new, stable local socio-economic development that meets social, ecological, and economic sustainability requirements. This territorial focus matches a new glocal vision, which enables a more effective diffusion of well-being benefits and thus moderates the current global vision fed primarily by a global-scale market development view. Basic technological advancements must therefore be pursued at the system level. They include system architecting for the virtualization of functions, data integration and sharing, flexible composition of basic services, and viable personalization of end services, for the operation and interoperation of smart systems supporting effective net-living advancements in all application fields. Growing, essentially mandatory importance must also be given to human-technical and social-technical factors, as well as to the associated need to strengthen the cross-disciplinary approach to related research and innovation. The prospected ecosystemic impact also implies proactive social participation, as well as coping with possible negative effects of net-living, such as social exclusion and isolation, which require decisive actions for a matching socio-cultural development. In this respect, the speed, continuity, and expected long-term duration of innovation processes, pushed by basic technological advancements, make the ecosystemic requirements stricter. This evolution also requires a new approach targeting the development of the basic and vocational education needed for net-living, which should be considered an engine for the development of the related ‘new living know-how’ as well as of the corresponding ‘new making know-how’.

    Improvement of Power Quality of Hybrid Grid by Non-Linear Controlled Device Considering Time Delays and Cyber-Attacks

    Power quality is defined as the ability of the electrical grid to supply clean and stable power. Steady-state disturbances such as harmonics, faults, and voltage sags and swells deteriorate the power quality of the grid. To ensure constant voltage and frequency for consumers, power quality should be improved and maintained at a desired level. Although several methods are available to improve power quality in traditional power grids, significant challenges exist in modern power grids, such as non-linearity, time delays, and cyber-attacks, which need to be considered and solved. This dissertation proposes novel control methods to address these challenges and thus improve the power quality of modern hybrid grids. In hybrid grids, the first issue is faults occurring at different points in the system. To overcome this issue, this dissertation proposes non-linear control methods such as the Fuzzy Logic controlled Thyristor Switched Capacitor (TSC), the Adaptive Neuro-Fuzzy Inference System (ANFIS) controlled TSC, and the Static Non-Linear controlled TSC. The next issue is the time delay introduced in the network due to its complexity and the various computations required. This dissertation proposes two new methods, a Fuzzy Logic Controller and a Modified Predictor, to minimize the adverse effects of time delays on power quality enhancement. The last and major issue is the cyber-security aspect of the hybrid grid. This research analyzes the effects of cyber-attacks on various components of a hybrid power grid, such as the Energy Storage System (ESS), the automatic voltage regulator (AVR) of the synchronous generator, the grid-side converter (GSC) of the wind generator, and the voltage source converter (VSC) of the photovoltaic (PV) system. This dissertation also proposes two new techniques, a Non-Linear (NL) controller and a Proportional-Integral (PI) controller, for mitigating the adverse effects of cyber-attacks on the mentioned devices, as well as a new voltage-threshold-based detection and mitigation technique for the Supercapacitor Energy System (SES). Simulation results obtained with MATLAB/Simulink show the effectiveness of the proposed control methods for power quality improvement; the proposed methods also perform better than conventional methods.
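
    Two of the ideas named above, a PI controller and voltage-threshold-based detection, can be sketched in a purely illustrative simulation: a PI loop drives a measured bus voltage back to its 1.0 p.u. reference, and a simple threshold check flags and ignores an implausibly biased measurement (a crude stand-in for a false-data-injection attack). The first-order plant, gains, thresholds, and attack profile below are assumptions, not the dissertation's models.

```python
KP, KI = 2.0, 5.0        # PI gains (assumed)
V_REF = 1.0              # voltage reference, per unit
THRESHOLD = 0.15         # |measurement - reference| above this is treated as an attack
DT = 0.01                # simulation time step, s
TAU = 0.05               # first-order plant time constant, s (assumed)

def simulate(steps=400, attack_start=200, attack_bias=0.4):
    v = 0.9              # actual bus voltage (p.u.), starting in a sag
    integral = 0.0
    history = []
    for k in range(steps):
        measured = v + (attack_bias if k >= attack_start else 0.0)  # spoofed sensor reading
        attacked = abs(measured - V_REF) > THRESHOLD
        feedback = V_REF if attacked else measured   # mitigation: ignore implausible readings
        error = V_REF - feedback
        integral += error * DT
        u = KP * error + KI * integral               # PI control action
        v += DT * (-(v - 0.9) + u) / TAU             # first-order plant response
        history.append((k * DT, v, attacked))
    return history

# Print a few samples: the sag is corrected, and the attack after t = 2 s is flagged
# while the voltage is held near the reference instead of being driven away from it.
for t, v, flagged in simulate()[::100]:
    print(f"t={t:.2f}s  v={v:.3f} p.u.  attack_flagged={flagged}")
```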

    Advances in Artificial Intelligence: Models, Optimization, and Machine Learning

    The present book contains all the articles accepted and published in the Special Issue “Advances in Artificial Intelligence: Models, Optimization, and Machine Learning” of the MDPI Mathematics journal, which covers a wide range of topics connected to the theory and applications of artificial intelligence and its subfields. These topics include, among others, deep learning and classic machine learning algorithms, neural modelling, architectures and learning algorithms, biologically inspired optimization algorithms, algorithms for autonomous driving, probabilistic models and Bayesian reasoning, and intelligent agents and multiagent systems. We hope that the scientific results presented in this book will serve as valuable sources of documentation and inspiration for anyone willing to pursue research in artificial intelligence, machine learning, and their widespread applications.

    Advanced traffic video analytics for robust traffic accident detection

    Automatic traffic accident detection is an important task in traffic video analysis due to its key applications in developing intelligent transportation systems. Reducing the time delay between the occurrence of an accident and the dispatch of the first responders to the scene may help lower the mortality rate and save lives. Since 1980, many approaches have been presented for the automatic detection of incidents in traffic videos. In this dissertation, some challenging problems for accident detection in traffic videos are discussed, and a new framework is presented to automatically detect single-vehicle and intersection traffic accidents in real time. First, a new foreground detection method is applied to detect the moving vehicles and subtract the ever-changing background in traffic video frames captured by static or non-stationary cameras. For traffic videos captured during the day, cast shadows degrade the performance of foreground detection and road segmentation. A novel cast shadow detection method is therefore presented to detect and remove the shadows cast by moving vehicles as well as those cast by static objects on the road. Second, a new method is presented to detect the region of interest (ROI); it uses the locations of the moving vehicles and initial road samples and extracts discriminating features to segment the road region. After detecting the ROI, the moving direction of the traffic is estimated based on the rationale that crashed vehicles often make rapid changes of direction. Lastly, single-vehicle traffic accidents and trajectory conflicts are detected using a first-order logic decision-making system. Experimental results using publicly available videos and a dataset provided by the New Jersey Department of Transportation (NJDOT) demonstrate the feasibility of the proposed methods. Finally, the main challenges and future directions are discussed regarding (i) improving the performance of the foreground segmentation, (ii) reducing the computational complexity, and (iii) detecting other types of traffic accidents.
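
    A minimal sketch of two of the building blocks described above, using standard OpenCV primitives rather than the dissertation's own algorithms: MOG2 background subtraction with its built-in shadow labelling stands in for the foreground and shadow-handling steps, and a crude check on the largest moving blob's centroid track flags a rapid change of direction. The video path and all thresholds are assumptions.

```python
import math
import cv2

MIN_AREA = 500        # ignore foreground blobs smaller than this (pixels); assumed
TURN_DEG = 60.0       # heading change treated as "rapid"; assumed

cap = cv2.VideoCapture("traffic.mp4")        # hypothetical input video
# MOG2 background subtraction; shadow pixels are labelled separately (value 127).
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
track = []                                   # recent centroids of the largest moving blob

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Keep only confident foreground (255), dropping the shadow label (127).
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    # OpenCV 4 return convention: (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = [c for c in contours if cv2.contourArea(c) > MIN_AREA]
    if not blobs:
        continue
    x, y, w, h = cv2.boundingRect(max(blobs, key=cv2.contourArea))
    track.append((x + w / 2.0, y + h / 2.0))
    if len(track) >= 3:
        (ax, ay), (bx, by), (cx, cy) = track[-3:]
        v1, v2 = (bx - ax, by - ay), (cx - bx, cy - by)
        if math.hypot(*v1) > 1.0 and math.hypot(*v2) > 1.0:
            turn = abs(math.degrees(math.atan2(v2[1], v2[0]) - math.atan2(v1[1], v1[0])))
            turn = min(turn, 360.0 - turn)
            if turn > TURN_DEG:
                print("rapid direction change detected; possible incident")
cap.release()
```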