
    Behavioural pattern identification and prediction in intelligent environments

    In this paper, the application of soft computing techniques to the prediction of an occupant's behaviour in an inhabited intelligent environment is addressed. This research studies the daily activities of elderly people with dementia who live in their own homes. Occupancy sensors are used to extract the movement patterns of the occupant. The occupancy data are then converted into temporal sequences of activities, which are eventually used to predict the occupant's behaviour. To build the prediction model, different dynamic recurrent neural networks are investigated. Recurrent neural networks have shown a great ability to capture the temporal relationships of input patterns. The experimental results show that the nonlinear autoregressive network with exogenous inputs (NARX) model correctly extracts the long-term prediction patterns of the occupant and outperforms the Elman network. The results presented here are validated using data generated from a simulator and from real environments.
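As an illustration of the NARX idea described above, the following sketch trains a feedforward network on lagged activity codes (autoregressive terms) and lagged time-of-day (exogenous input), which is the standard open-loop way to realise a NARX predictor. The activity encoding, lag orders, and network size below are assumptions for illustration, not the paper's configuration:

```python
# Minimal NARX-style sketch: predict the next activity code from past
# activity codes (autoregressive terms) and past time-of-day (exogenous).
# Encoding, lag orders, and network size are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
T = 500
hour = np.arange(T) % 24                               # exogenous: time of day
activity = ((hour // 6) + rng.integers(0, 2, T)) % 4   # synthetic activity codes

def make_lagged(y, u, ny=3, nu=3):
    """Stack ny past outputs and nu past exogenous inputs as regressors."""
    X, target = [], []
    for t in range(max(ny, nu), len(y)):
        X.append(np.concatenate([y[t - ny:t], u[t - nu:t]]))
        target.append(y[t])
    return np.array(X), np.array(target)

X, y = make_lagged(activity.astype(float), hour.astype(float))
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])
print("held-out MSE:", np.mean((model.predict(X[400:]) - y[400:]) ** 2))
```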

    Modeling and identification of power electronic converters

    Nowadays, many industries are moving towards more electrical systems and components, with the purpose of enhancing the efficiency of their systems while becoming more environmentally friendly and sustainable. The development of power electronic systems is therefore one of the most important points of this transition. Many manufacturers have improved their equipment and processes in order to satisfy the new needs of these industries (aircraft, automotive, aerospace, telecommunications, etc.). In the particular case of the More Electric Aircraft (MEA), there are several power converters, inverters and filters that are usually acquired from different manufacturers. These are switched-mode power converters that feed multiple loads, making them critical elements in the transmission systems. In some cases, the manufacturers do not provide sufficient information regarding the functionality of devices such as DC/DC power converters, rectifiers, inverters or filters. Consequently, there is a need to model and identify the performance of these components so that the aforementioned industries can develop models for the design stage, for predictive maintenance, for detecting possible failure modes, and for better control over the electrical system. Thus, the main objective of this thesis is to develop models that describe the behavior of power electronic converters whose parameters and/or topology are unknown. The algorithms must be replicable and should work on other types of converters used in the power electronics field. The thesis is divided into two main parts: parameter identification for white-box models and black-box modeling of power electronic devices. The proposed approaches are based on optimization algorithms and deep learning techniques that use non-intrusive measurements to obtain a set of parameters or to generate a model, respectively. In both cases, the algorithms are trained and tested using real data gathered from converters used in aircraft and electric vehicles. This thesis also shows how the proposed methodologies can be applied to more complex power systems and to prognostics tasks. In conclusion, this thesis aims to provide algorithms that allow industries to obtain realistic and accurate models of the components they are using in their electrical systems.
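As a concrete illustration of the white-box parameter identification idea, the sketch below fits the component values of an averaged buck converter model to a measured output-voltage step response with a generic least-squares optimizer. The converter model, true values, and optimizer choice are assumptions for illustration, not the thesis method:

```python
# White-box identification sketch: recover L and C of an averaged buck
# converter from its output-voltage step response via least squares.
# Model structure, true values, and optimizer choice are illustrative.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

Vin, D, R = 24.0, 0.5, 10.0           # input voltage, duty cycle, load [ohm]
t = np.linspace(0, 2e-3, 200)

def response(params):
    L, C = params
    def f(_, x):                       # averaged model: x = [iL, vC]
        iL, vC = x
        return [(D * Vin - vC) / L, (iL - vC / R) / C]
    sol = solve_ivp(f, (t[0], t[-1]), [0.0, 0.0], t_eval=t)
    return sol.y[1]                    # output voltage vC(t)

true = (100e-6, 47e-6)
measured = response(true) + np.random.default_rng(1).normal(0, 0.05, t.size)

fit = least_squares(lambda p: response(p) - measured,
                    x0=(50e-6, 100e-6), bounds=([1e-6] * 2, [1e-3] * 2))
print("identified L, C:", fit.x)       # should approach the true values
```

The same fit-simulated-to-measured-waveforms loop generalises to rectifiers or inverters by swapping in the appropriate averaged model.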

    Nonlinear predictive threshold model for real-time abnormal gait detection

    Falls are critical events for human health due to the associated risk of physical and psychological injuries. Several fall-related systems have been developed in order to reduce injuries. Among them, fall-risk prediction systems are one of the most promising approaches, as they strive to predict a fall before its occurrence. One category of fall-risk prediction systems evaluates balance and muscle strength through clinical functional assessment tests, while other prediction systems recognize abnormal gait patterns to predict a fall in real time. The main contribution of this paper is a nonlinear model of user gait combined with a threshold-based classification to recognize abnormal gait patterns with low complexity and high accuracy. In addition, a dataset with realistic parameters is prepared to simulate abnormal walks and to evaluate fall prediction methods. The accelerometer and gyroscope sensors available in a smartphone have been exploited to create the dataset. The proposed approach has been implemented and compared with state-of-the-art approaches, showing that it is able to predict an abnormal walk with higher accuracy (93.5%) and higher efficiency (up to 3.5 times faster) than other feasible approaches.
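A minimal sketch of the threshold-based classification idea follows. The window length, feature, and threshold value are assumptions for illustration, not the paper's model: windows of smartphone accelerometer data whose statistics deviate too far from a normal-gait template are flagged as abnormal:

```python
# Threshold-based abnormality sketch: flag windows of accelerometer
# magnitude whose energy deviates from a "normal gait" template by more
# than a threshold. Window size, template, and threshold are illustrative.
import numpy as np

rng = np.random.default_rng(6)
fs = 50                                    # sampling rate [Hz], assumed
t = np.arange(0, 20, 1 / fs)
normal = 9.8 + 1.5 * np.sin(2 * np.pi * 2 * t)   # ~2 Hz gait rhythm
signal = normal.copy()
signal[600:650] += rng.normal(0, 5.0, 50)  # injected erratic burst at t=12 s

win = fs                                   # 1-second sliding windows
template_energy = np.var(normal[:win])     # energy of a normal stride window

for start in range(0, len(signal) - win, win):
    e = np.var(signal[start:start + win])
    if abs(e - template_energy) > 3.0 * template_energy:   # threshold rule
        print(f"abnormal gait detected in window starting at t={start/fs:.1f}s")
```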

    Short and Long-Term Structural Health Monitoring of Highway Bridges

    Structural Health Monitoring (SHM) is a promising tool for the condition assessment of bridge structures. SHM of bridges can be performed for different purposes over the long or short term. Several aspects of short- and long-term monitoring of highway bridges are addressed in this research. Without quantifying environmental effects, applying vibration-based damage detection techniques may result in false damage identification. As part of a long-term monitoring project, the effect of temperature on the vibrational characteristics of two continuously monitored bridges is studied. Natural frequencies of the structures are identified from ambient vibration data using the Natural Excitation Technique (NExT) along with the Eigensystem Realization Algorithm (ERA). Variability of the identified natural frequencies is investigated based on their statistical properties. Different statistical models are tested and the most accurate model is selected to remove the effect of temperature from the identified frequencies. After removing temperature effects, different damage cases are simulated on calibrated finite-element models. Comparing the effect of the simulated damage on natural frequencies showed what levels of damage could be detected with this method. Evaluating traffic loads can be helpful in different areas including bridge design and assessment, pavement design and maintenance, fatigue analysis, economic studies and the enforcement of legal weight limits. In this study, the feasibility of using a single-span bridge as a weigh-in-motion tool to quantify the gross vehicle weights (GVW) of trucks is examined. As part of a short-term monitoring project, this bridge was subjected to four sets of high-speed, live-load tests. Measured strain data are used to implement bridge weigh-in-motion (B-WIM) algorithms and to calculate the corresponding velocities and GVWs. A comparison is made between the calculated and static weights, and furthermore, between the nominal and estimated speeds of the trucks. Vibration-based techniques that use finite-element (FE) model updating for SHM of bridges are common in infrastructure applications. This study presents the application of both static and dynamic FE model updating on a full-scale bridge. Both dynamic and live-load testing were conducted on this bridge, and vibration, strain, and deflections were measured at different locations. An FE model is calibrated using different error functions. This model captures both the global and local response of the structure, and the performance of the updated model is validated against the part of the collected measurements that was not included in the calibration process.
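The temperature-removal step described above can be made concrete with a simple regression sketch. The linear frequency-temperature model and synthetic data below are assumptions; the study tests several statistical models and selects the most accurate one:

```python
# Sketch of removing temperature effects from identified natural
# frequencies: regress frequency on temperature, then track the
# residuals. Linear model and synthetic data are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
temperature = rng.uniform(-5, 35, 365)               # daily temperatures [C]
# Synthetic: frequency drops slightly as temperature rises
frequency = 2.50 - 0.004 * temperature + rng.normal(0, 0.005, 365)

coeffs = np.polyfit(temperature, frequency, deg=1)   # fit f = a*T + b
residual = frequency - np.polyval(coeffs, temperature)

print("raw std of frequency   :", frequency.std())
print("std after T correction :", residual.std())
# Damage detection then looks for shifts in the residuals, which are
# no longer masked by seasonal temperature variation.
```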

    Machine Learning-Enabled Resource Allocation for Underlay Cognitive Radio Networks

    Due to the rapid growth of new wireless communication services and applications, much attention has been directed to frequency spectrum resources and the way they are regulated. Considering that the radio spectrum is a limited natural resource, supporting the ever-increasing demands for higher capacity and higher data rates for diverse sets of users, services and applications is a challenging task which requires innovative technologies capable of providing new ways of efficiently exploiting the available radio spectrum. Consequently, dynamic spectrum access (DSA) has been proposed as a replacement for static spectrum allocation policies. DSA can be implemented in three modes: interweave, overlay and underlay [1]. The key enabling technology for DSA is cognitive radio (CR), which is among the core prominent technologies for the next generation of wireless communication systems. Unlike a conventional radio, which is restricted to operating in designated spectrum bands, a CR has the capability to operate in different spectrum bands owing to its ability to sense and understand its wireless environment, learn from past experience and proactively change its transmission parameters as needed. These features are provided by an intelligent software package called the cognitive engine (CE). In general, the CE manages radio resources to accomplish cognitive functionalities, and allocates and adapts the radio resources to optimize the performance of the network. The cognitive functionality of the CE can be achieved by leveraging machine learning techniques. Therefore, this thesis explores the application of two machine learning techniques in enabling the cognition capability of the CE: neural network-based supervised learning and reinforcement learning. Specifically, this thesis develops resource allocation algorithms that leverage machine learning to solve the resource allocation problem for heterogeneous underlay cognitive radio networks (CRNs). The proposed algorithms are evaluated under extensive simulation runs. The first resource allocation algorithm uses a neural network-based learning paradigm to present a fully autonomous and distributed underlay DSA scheme where each CR operates based on predicting its transmission's effect on a primary network (PN). The scheme is based on a CE with an artificial neural network that predicts the adaptive modulation and coding configuration for the primary link nearest to a transmitting CR, without exchanging information between the primary and secondary networks. By managing the effect of the secondary network (SN) on the primary network, the presented technique maintains the relative average throughput change in the primary network within a prescribed maximum value, while also finding transmit settings for the CRs that yield throughput as large as the primary network's interference limit allows. The second resource allocation algorithm uses reinforcement learning and aims at distributively maximizing the average quality of experience (QoE) across the transmissions of CRs with different types of traffic while satisfying a primary network interference constraint. To best satisfy the QoE requirements of delay-sensitive traffic, a cross-layer resource allocation algorithm is derived and its performance is compared against a physical-layer algorithm in terms of meeting end-to-end traffic delay constraints.
Moreover, to accelerate the learning performance of the presented algorithms, the idea of transfer learning is integrated. The philosophy behind transfer learning is to allow well-established, expert cognitive agents (i.e. base stations or mobile stations in the context of wireless communications) to teach newly activated, naive agents. The exchange of learned information is used to improve the learning performance of a distributed CR network. This thesis further identifies best practices for transferring knowledge between CRs so as to reduce the communication overhead. The investigations in this thesis propose a novel technique which accurately predicts the modulation scheme and channel coding rate used in a primary link without the need to exchange information between the two networks (e.g. access to feedback channels), while succeeding in the main goal of determining the transmit power of the CRs such that the interference they create remains below the maximum threshold that the primary network can sustain with minimal effect on its average throughput. The investigations also provide both physical-layer and cross-layer machine learning-based algorithms to address the challenge of resource allocation in underlay cognitive radio networks, resulting in better learning performance and reduced communication overhead.
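As a hedged illustration of the reinforcement-learning component, the sketch below shows a tabular Q-learning agent choosing a transmit power that trades secondary throughput against a primary interference limit. The action set, reward shape, and constants are assumptions for illustration, not the thesis design:

```python
# Tabular Q-learning sketch for underlay power control: a CR picks one
# of a few transmit powers; reward favours throughput but penalises
# exceeding a primary interference limit. All constants are illustrative.
import numpy as np

rng = np.random.default_rng(3)
powers = np.array([0.1, 0.5, 1.0, 2.0])      # candidate transmit powers [W]
I_max = 0.8                                   # primary interference limit
q = np.zeros(len(powers))                     # single-state Q-table
alpha, eps = 0.1, 0.2                         # learning rate, exploration

for episode in range(2000):
    a = rng.integers(len(powers)) if rng.random() < eps else int(q.argmax())
    gain = rng.uniform(0.3, 0.7)              # random channel gain to primary
    interference = powers[a] * gain
    throughput = np.log2(1 + 5.0 * powers[a])           # Shannon-like reward
    reward = throughput if interference <= I_max else -10.0
    q[a] += alpha * (reward - q[a])           # stateless Q-learning update

# The agent settles on the largest power that never violates I_max.
print("learned best power:", powers[int(q.argmax())], "W")
```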

    The Challenge of Machine Learning in Space Weather Nowcasting and Forecasting

    The numerous recent breakthroughs in machine learning (ML) make it imperative to carefully ponder how the scientific community can benefit from a technology that, although not necessarily new, is today living its golden age. This Grand Challenge review paper is focused on the present and future role of machine learning in space weather. The purpose is twofold. On the one hand, we discuss previous works that use ML for space weather forecasting, focusing in particular on the few areas that have seen the most activity: the forecasting of geomagnetic indices, of relativistic electrons at geosynchronous orbits, of solar flare occurrence, of coronal mass ejection propagation time, and of solar wind speed. On the other hand, this paper serves as a gentle introduction to the field of machine learning tailored to the space weather community and as a pointer to a number of open challenges that we believe the community should undertake in the next decade. The recurring themes throughout the review are the need to shift our forecasting paradigm to a probabilistic approach focused on the reliable assessment of uncertainties, and the combination of physics-based and machine learning approaches, known as gray-box.
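The probabilistic shift advocated above can be made concrete with a short scoring sketch. The event, forecast values, and choice of the Brier score as the metric are illustrative assumptions, not the paper's recommendation of a specific tool:

```python
# Sketch: scoring a probabilistic forecast (e.g. "probability of a
# geomagnetic storm tomorrow") with the Brier score rather than a
# deterministic hit/miss count. Data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
true_prob = rng.uniform(0, 1, 1000)           # latent event probabilities
outcomes = (rng.random(1000) < true_prob).astype(float)

forecast_good = true_prob                     # well-calibrated forecaster
forecast_flat = np.full(1000, 0.5)            # uninformative forecaster

def brier(p, y):
    """Mean squared error between forecast probability and outcome."""
    return np.mean((p - y) ** 2)

print("calibrated :", brier(forecast_good, outcomes))   # lower is better
print("flat 0.5   :", brier(forecast_flat, outcomes))
```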

    An analysis of software aging in cloud environment

    Cloud computing is an environment in which several virtual machines (VMs) run concurrently on physical machines. The cloud computing infrastructure hosts multiple cloud service segments that communicate with each other through interfaces, which creates a distributed computing environment. During operation, software systems accumulate errors or garbage that lead to system failure and other hazardous consequences; this state is called software aging. Software aging happens because of memory fragmentation, large-scale resource consumption and the accumulation of numerical error. Software aging degrades performance and may result in system failure through premature resource exhaustion. This issue cannot be detected during the software testing phase, and can only be resolved at run time, because of the dynamic nature of operation. The errors that cause software aging are of a special type: they do not disturb the software's functionality but affect its response time and environment. To alleviate the impact of software aging, software rejuvenation techniques are used. The rejuvenation process reboots the system or restarts the affected software, thereby avoiding faults or failure. Software rejuvenation removes accumulated error conditions, frees up deadlocks and defragments operating system resources such as memory. Hence, it prevents future system failures that might otherwise happen due to software aging. As service availability is crucial, software rejuvenation should be carried out at defined schedules without disrupting the service. Software rejuvenation techniques can make software systems more trustworthy, and software designers are using this concept to improve the quality and reliability of their software. Software aging and rejuvenation have generated a lot of research interest in recent years. This work reviews some of the research related to the detection of software aging and identifies research gaps.
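A minimal sketch of threshold-triggered rejuvenation follows. The monitoring interval, memory threshold, and restart action are assumptions for illustration; production schedulers typically also model aging trends statistically before deciding when to rejuvenate:

```python
# Rejuvenation sketch: watch a process's resident memory and restart it
# once usage crosses a threshold, mimicking threshold-based rejuvenation
# policies. Threshold and interval are illustrative; the resource module
# is Unix-only and reports ru_maxrss in KB on Linux.
import resource
import time

THRESHOLD_KB = 512_000         # restart once RSS exceeds ~500 MB (assumed)
CHECK_EVERY_S = 60

def rejuvenate():
    """Placeholder for the restart/cleanup action (e.g. re-exec the service)."""
    print("rejuvenating: restarting service to reclaim leaked resources")

def monitor_loop(iterations=3):
    for i in range(iterations):
        rss_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        print(f"current RSS: {rss_kb} KB")
        if rss_kb > THRESHOLD_KB:
            rejuvenate()
        if i + 1 < iterations:
            time.sleep(CHECK_EVERY_S)

monitor_loop(iterations=1)     # single check for demonstration
```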

    Soft sensors in automotive applications

    In this work, design and validation techniques for two soft sensors estimating the vertical dynamics of a motorcycle are proposed. The aim is to develop soft sensors able to predict the rear and front stroke of a motorcycle suspension. This kind of information is typically used in the control loop of semi-active or active suspension systems; replacing the hard sensor with a soft sensor reduces cost and improves the reliability of the system. An analysis of the motorcycle physical model has been carried out to study the correlations among the motorcycle's vertical dynamic quantities and to determine which of them are necessary for the development of a suspension-stroke soft sensor. More in detail, a first soft sensor for the rear stroke has been developed using a Nonlinear Auto-Regressive with eXogenous inputs (NARX) neural network. A second soft sensor, for the front suspension stroke velocity, has been designed using two different techniques based respectively on digital filtering and a NARX neural network. As an example of application, an Instrument Fault Detection (IFD) scheme based on the rear-stroke soft sensor is shown. Experimental results have demonstrated the good reliability and promptness of the scheme in detecting different typologies of faults, such as loss-of-calibration faults, hold faults, and open/short-circuit faults, thanks to the soft sensor developed. Finally, the scheme has been successfully implemented and tested on an ARM microcontroller, confirming the feasibility of a real-time implementation on the processing units actually used in this context. [edited by Author]
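A sketch of how an IFD scheme can exploit a soft-sensor estimate follows. The residual test, fault signature, and constants are illustrative assumptions, not the thesis implementation: the hard sensor's reading is compared against the soft sensor's prediction, and a large, persistent disagreement is classified as a fault:

```python
# Instrument Fault Detection sketch: declare the hard sensor faulty when
# its reading departs from the soft-sensor estimate for several
# consecutive samples. Threshold and persistence count are illustrative.
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(0, 10, 0.01)
true_stroke = 20 * np.sin(2 * np.pi * 1.2 * t)             # stroke [mm]
soft_estimate = true_stroke + rng.normal(0, 0.5, t.size)   # NARX-like output
hard_sensor = true_stroke.copy()
hard_sensor[700:] = hard_sensor[700]                       # hold fault at t=7s

THRESH_MM, PERSIST = 3.0, 10
bad = np.abs(hard_sensor - soft_estimate) > THRESH_MM
count = 0
for i, flag in enumerate(bad):
    count = count + 1 if flag else 0
    if count == PERSIST:
        print(f"fault detected at t={t[i]:.2f}s (hard sensor frozen)")
        break
```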