
    Novel Model Based on Artificial Neural Networks to Predict Short-Term Temperature Evolution in Museum Environment

    Environmental microclimatic conditions are often subject to considerable fluctuations, which can cause irreparable damage to works of art. We explored the applicability of Artificial Intelligence (AI) techniques to the Cultural Heritage domain, with the aim of predicting short-term microclimatic values from data collected at Rosenborg Castle (Copenhagen), which houses the Royal Danish Collection. Specifically, this study applied NAR (Nonlinear Autoregressive) and NARX (Nonlinear Autoregressive with Exogenous inputs) models to the Rosenborg microclimate time series. Although both models were applied to small datasets, they showed good adaptive capacity in predicting short-term future values. This work explores the use of AI for very short-term forecasting of microclimate variables in museums as a potential tool for decision-support systems that limit climate-induced damage to artworks within the scope of preventive conservation. The proposed model could be a useful support tool for museum management.
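    A NAR model regresses the next temperature value on its own recent lags, while NARX additionally feeds in lagged exogenous measurements such as relative humidity. Below is a minimal sketch of that distinction; the lag order, network size, and the choice of humidity as the exogenous variable are illustrative assumptions, not the configuration used in the Rosenborg study.

```python
# Hedged sketch: NAR vs. NARX forecasting of an indoor temperature series.
# All hyperparameters and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged(series, exog=None, lags=6):
    """Build a design matrix of `lags` past target values (NAR),
    optionally augmented with lagged exogenous inputs (NARX)."""
    X, y = [], []
    for t in range(lags, len(series)):
        row = list(series[t - lags:t])
        if exog is not None:
            row += list(exog[t - lags:t])
        X.append(row)
        y.append(series[t])
    return np.array(X), np.array(y)

# Synthetic stand-ins for hourly temperature and relative humidity readings.
rng = np.random.default_rng(0)
hours = np.arange(500)
temp = 20 + np.sin(hours * 2 * np.pi / 24) + rng.normal(0, 0.1, 500)
rel_hum = 50 + 5 * np.cos(hours * 2 * np.pi / 24) + rng.normal(0, 0.5, 500)

X_nar, y = make_lagged(temp, lags=6)                 # NAR: past temperature only
X_narx, _ = make_lagged(temp, exog=rel_hum, lags=6)  # NARX: adds humidity lags

for name, X in (("NAR", X_nar), ("NARX", X_narx)):
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X[:-48], y[:-48])                       # hold out the last 48 steps
    print(name, "held-out R^2:", round(model.score(X[-48:], y[-48:]), 3))
```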

    Personalized Posture and Fall Classification with Shallow Gated Recurrent Units

    Link to final publication: https://ieeexplore.ieee.org/document/8787455. Activities of Daily Living (ADL) classification is a key part of assisted living systems, as it can be used to assess a person's autonomy. In this paper we present an activity classification pipeline using Gated Recurrent Units (GRU) and inertial sequences. We aim to take advantage of the feature extraction properties of neural networks to free ourselves from defining rules or manually choosing features. We also investigate the benefits of resampling input sequences and of personalizing GRU models to improve performance. We evaluate our models on two datasets: one containing five common postures (sitting, lying, standing, walking, and transfer) and one named MobiAct V2, which provides ADL and falls. Results show that the proposed approach could benefit eHealth services, particularly activity monitoring.
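    The classifier described above can be sketched as a single GRU layer whose final hidden state is mapped to posture logits. The input size (3-axis accelerometer), hidden size, and window length below are illustrative assumptions rather than the paper's settings; personalization would correspond to fine-tuning such a model on one user's recordings.

```python
# Hedged sketch of a shallow GRU posture classifier over inertial windows (PyTorch).
import torch
import torch.nn as nn

class ShallowGRUClassifier(nn.Module):
    def __init__(self, n_features=3, hidden=32, n_classes=5):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, num_layers=1, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                # x: (batch, time, features)
        _, h_last = self.gru(x)          # h_last: (num_layers, batch, hidden)
        return self.head(h_last[-1])     # logits: (batch, n_classes)

# Dummy batch: 8 resampled windows of 100 accelerometer samples each.
model = ShallowGRUClassifier()
logits = model(torch.randn(8, 100, 3))
print(logits.argmax(dim=1))              # predicted posture index per window
```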

    Overflow Control in Sewer Networks with Different Modeling Techniques and the Internet of Things

    Increased urbanization and extreme rainfall events are causing more frequent sewer overflows, leading to the pollution of water resources and negative environmental, health, and fiscal impacts. At the same time, the treatment capacity of wastewater treatment plants is seriously affected. The main aim of this Ph.D. thesis is to use the Internet of Things and various modeling techniques to investigate the use of real-time control in existing sewer systems to mitigate overflow. The role of the Internet of Things is to provide continuous monitoring and real-time control of sewer systems; the data it collects are also useful for model development and calibration.

    Models serve various purposes in real-time control, and they can be distinguished as those suitable for simulation and those suitable for prediction. Models suitable for simulation, which describe the important phenomena of a system in a deterministic way, are useful for developing and analyzing different control strategies. Models suitable for prediction, in contrast, are usually employed to predict future system states; they use measurement information about the system and must have a high computational speed.

    To demonstrate how real-time control can be used to manage sewer systems, a case study was conducted for this thesis in Drammen, Norway. In this study, a hydraulic model was used as the model suitable for simulation to test the feasibility of different control strategies. Considering recent advances in artificial intelligence and the large amount of data collected through the Internet of Things, the study also explored the possibility of using artificial intelligence as the model suitable for prediction. A summary of the results of this work is presented through five papers.

    Paper I demonstrates that one mainstream artificial intelligence technique, long short-term memory, can precisely predict time series data from the Internet of Things. Indeed, the Internet of Things and long short-term memory can be powerful tools for sewer system managers and engineers, who can take advantage of real-time data and predictions to improve decision-making. In Paper II, a hydraulic model and artificial intelligence are used to investigate an optimal in-line storage control strategy that uses the temporary storage volume in pipes to reduce overflow. Simulation results indicate that during heavy rainfall events, the response behavior of the sewer system differs with respect to location. Overflows at a wastewater treatment plant under different control scenarios were simulated and compared, and the results from the hydraulic model show that overflows were reduced dramatically through the intentional control of pipes with in-line storage capacity. To determine the available in-line storage capacity, recurrent neural networks were employed to predict the upcoming inflow to the pipes to be controlled.

    Paper III and Paper IV describe a novel inter-catchment wastewater transfer solution, which aims at redistributing spatially mismatched sewer flows by transferring wastewater from a wastewater treatment plant to its neighboring catchment. In Paper III, the hydraulic behavior of the sewer system under different control scenarios is assessed using the hydraulic model; based on the simulations, inter-catchment wastewater transfer could efficiently reduce total overflow from a sewer system and wastewater treatment plant. Artificial intelligence was used to predict inflow to the wastewater treatment plant to improve inter-catchment wastewater transfer functioning. The results of Paper IV indicate that inter-catchment wastewater transfer might place an extra burden on a pump station; to enhance the operation of the pump station, long short-term memory was employed to provide multi-step-ahead water level predictions.

    Paper V proposes a DeepCSO model based on large-scale, high-resolution sensor data and multi-task learning techniques. Experiments demonstrated that the multi-task approach is generally better than single-task approaches. Furthermore, the gated recurrent unit and long short-term memory based multi-task learning models are especially suitable for capturing the temporal and spatial evolution of combined sewer overflow events and are superior to other methods. The DeepCSO model could help guide the real-time operation of sewer systems at a citywide level.
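    The prediction models in Papers I and IV are recurrent networks that map a window of past sensor readings to future values. A minimal sketch of a multi-step-ahead LSTM predictor is given below; the window length, horizon, number of sensors, and layer sizes are illustrative assumptions, not the thesis settings. The DeepCSO idea of Paper V would correspond to giving one such shared network a separate output per monitored overflow site (multi-task learning).

```python
# Hedged sketch: LSTM mapping past IoT sensor readings to multi-step-ahead
# water-level predictions (PyTorch). Shapes and sizes are assumptions.
import torch
import torch.nn as nn

class MultiStepLSTM(nn.Module):
    def __init__(self, n_sensors=4, hidden=64, horizon=6):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon)   # predict `horizon` future steps at once

    def forward(self, x):                  # x: (batch, past_steps, n_sensors)
        _, (h_last, _) = self.lstm(x)
        return self.head(h_last[-1])       # (batch, horizon)

# Dummy batch: 16 windows of 48 past time steps from 4 level/flow sensors.
model = MultiStepLSTM()
preds = model(torch.randn(16, 48, 4))
print(preds.shape)                         # torch.Size([16, 6]) -> 6 steps ahead
```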

    Recurrent Neural Networks for Representing, Segmenting, and Classifying Surgical Activities

    Robot-assisted surgery has enabled scalable, transparent capture of high-quality data during operation, and this has in turn led to many new research opportunities. Among these opportunities are those that aim to improve the objectivity and efficiency of surgical training, which include making performance assessment and feedback more objective and consistent; providing more specific or localized assessment and feedback; delegating this responsibility to machines, which have the potential to provide feedback in any desired abundance; and having machines go even further, for example by optimizing practice routines, in the form of a virtual coach. In this thesis, we focus on a foundation that serves all of these objectives: automated surgical activity recognition, or in other words the ability to automatically determine what activities a surgeon is performing and when those activities are taking place.

    First, we introduce the use of recurrent neural networks (RNNs) for localizing and classifying surgical activities from motion data. Here, we show for the first time that this task is possible at the level of maneuvers, which unlike the activities considered in prior work are already a part of surgical training curricula.

    Second, we study the ability of RNNs to learn dependencies over extremely long time periods, which we posit are present in surgical motion data, and we introduce MIST RNNs, a new RNN architecture that is capable of capturing these extremely long-term dependencies.

    Third, we investigate unsupervised learning using surgical motion data: we show that predicting future motion from past motion with RNNs, using motion data alone, leads to meaningful and useful representations of surgical motion. This approach leads to the discovery of surgical activities from unannotated data, and to state-of-the-art performance for querying a database of surgical activity using motion-based queries.

    Finally, we depart from a common yet limiting assumption in nearly all prior work on surgical activity recognition: that annotated training data, which is difficult and expensive to acquire, is available in abundance. We demonstrate for the first time that both gesture recognition and maneuver recognition are feasible even when very few annotated sequences are available, and that future-prediction based representation learning, prior to the recognition phase, yields significant performance improvements when annotated data is scarce.
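    Recognition at the level described above amounts to per-frame sequence labeling: an RNN reads kinematic features and emits an activity label at every time step, which segments and classifies the trial in one pass. The sketch below uses a standard LSTM as a stand-in (the thesis's MIST RNN architecture is not reproduced here), and the feature count, hidden size, and number of activity classes are illustrative assumptions. The unsupervised variant in the thesis would replace the classification head with one that regresses the next motion frame.

```python
# Hedged sketch of per-time-step surgical activity recognition (PyTorch).
import torch
import torch.nn as nn

class ActivityTagger(nn.Module):
    def __init__(self, n_features=38, hidden=64, n_activities=10):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_activities)

    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.rnn(x)               # hidden state at every time step
        return self.head(out)              # per-frame logits: (batch, time, classes)

model = ActivityTagger()
logits = model(torch.randn(2, 500, 38))    # two trials, 500 motion frames each
labels = logits.argmax(dim=-1)             # predicted activity at each frame
print(labels.shape)                        # torch.Size([2, 500])
```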

    Machine Learning-Enabled Resource Allocation for Underlay Cognitive Radio Networks

    Due to the rapid growth of new wireless communication services and applications, much attention has been directed to frequency spectrum resources and the way they are regulated. Considering that the radio spectrum is a limited natural resource, supporting the ever-increasing demand for higher capacity and higher data rates across diverse sets of users, services, and applications is a challenging task that requires innovative technologies capable of providing new ways of efficiently exploiting the available radio spectrum. Consequently, dynamic spectrum access (DSA) has been proposed as a replacement for static spectrum allocation policies. DSA is implemented in three modes: interweave, overlay, and underlay [1]. The key enabling technology for DSA is cognitive radio (CR), which is among the core prominent technologies for the next generation of wireless communication systems. Unlike a conventional radio, which is restricted to operating in designated spectrum bands, a CR has the capability to operate in different spectrum bands owing to its ability to sense and understand its wireless environment, learn from past experience, and proactively change its transmission parameters as needed. These capabilities are provided by an intelligent software package called the cognitive engine (CE). In general, the CE manages radio resources to accomplish cognitive functionalities and allocates and adapts the radio resources to optimize the performance of the network. The cognitive functionality of the CE can be achieved by leveraging machine learning techniques. Therefore, this thesis explores the application of two machine learning techniques in enabling the cognition capability of the CE: neural network-based supervised learning and reinforcement learning.

    Specifically, this thesis develops resource allocation algorithms that leverage machine learning to solve the resource allocation problem for heterogeneous underlay cognitive radio networks (CRNs). The proposed algorithms are evaluated through extensive simulation runs. The first resource allocation algorithm uses a neural network-based learning paradigm to provide a fully autonomous and distributed underlay DSA scheme in which each CR operates based on predicting the effect of its transmission on a primary network (PN). The scheme is based on a CE with an artificial neural network that predicts the adaptive modulation and coding configuration for the primary link nearest to a transmitting CR, without exchanging information between the primary and secondary networks. By managing the effect of the secondary network (SN) on the primary network, the presented technique keeps the relative average throughput change in the primary network within a prescribed maximum value, while also finding transmit settings for the CRs that result in throughput as large as the primary network interference limit allows.

    The second resource allocation algorithm uses reinforcement learning and aims at distributively maximizing the average quality of experience (QoE) across the transmissions of CRs carrying different types of traffic while satisfying a primary network interference constraint. To best satisfy the QoE requirements of delay-sensitive traffic, a cross-layer resource allocation algorithm is derived and its performance is compared against a physical-layer algorithm in terms of meeting end-to-end traffic delay constraints.

    Moreover, to accelerate the learning performance of the presented algorithms, the idea of transfer learning is integrated. The philosophy behind transfer learning is to allow well-established, expert cognitive agents (i.e., base stations or mobile stations in the context of wireless communications) to teach newly activated, naive agents. The exchange of learned information is used to improve the learning performance of a distributed CR network, and this thesis further identifies best practices for transferring knowledge between CRs so as to reduce the communication overhead.

    The investigations in this thesis propose a novel technique that accurately predicts the modulation scheme and channel coding rate used on a primary link without the need to exchange information between the two networks (e.g., access to feedback channels), while succeeding in the main goal of determining the transmit power of the CRs such that the interference they create remains below the maximum threshold the primary network can sustain with minimal effect on its average throughput. They also provide both a physical-layer and a cross-layer machine learning-based algorithm to address the challenge of resource allocation in underlay cognitive radio networks, resulting in better learning performance and reduced communication overhead.
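    The reinforcement-learning side of the thesis can be pictured as each CR learning a mapping from its observed state to a transmit-power choice, rewarded for its own throughput and penalized when the primary-network interference limit is exceeded. The toy tabular Q-learning loop below illustrates only that idea; the state definition, power levels, and reward shape are illustrative assumptions, not the algorithm evaluated in the thesis. Transfer learning, in this picture, would amount to initializing a newly activated radio's Q-table from an expert node's table instead of from zeros.

```python
# Hedged sketch: tabular Q-learning for underlay transmit-power selection.
# Environment, states, and reward are toy assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
power_levels = np.linspace(0.1, 1.0, 8)        # candidate transmit powers (normalized)
n_states, n_actions = 4, len(power_levels)     # states = coarse interference bins
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1              # learning rate, discount, exploration

def step(state, p):
    """Toy environment: more power means more throughput, but risks exceeding
    the primary-network interference threshold and incurring a penalty."""
    interference = p * (0.5 + 0.5 * state / (n_states - 1))
    reward = np.log2(1 + 10 * p) - (5.0 if interference > 0.6 else 0.0)
    next_state = min(n_states - 1, int(interference * n_states))
    return reward, next_state

state = 0
for _ in range(5000):
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
    r, nxt = step(state, power_levels[a])
    Q[state, a] += alpha * (r + gamma * Q[nxt].max() - Q[state, a])
    state = nxt

print("Learned power per interference state:", power_levels[Q.argmax(axis=1)])
```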