
    Research and development of a pilot project using GNSS and Earth observation (GeoSHM) for structural health monitoring of the Forth Road Bridge in Scotland

    GeoSHM (GNSS and Earth Observation for Structural Health Monitoring) is a feasibility study project funded under the Integrated Applications Promotion (IAP) programme of the European Space Agency (ESA) in August 2013. Through the integrated use of GNSS, remote sensing technologies and environmental data, GeoSHM offers bridge owners an effective tool to assess the operational condition of their assets. A reference system consisting of four GNSS receivers and two anemometers was installed on the Forth Road Bridge (FRB) in Scotland. This first-stage monitoring system is producing precise 3D real-time displacements under different loading conditions. It can also provide essential land movement information to assess potential threats from underground water extraction, geo-hazards and other industrial activities. The GeoSHM Feasibility Study has shown that even a small-scale monitoring system makes it possible for the Bridgemaster of the FRB to fully understand the loading and response behaviour of the bridge and to identify unusual deformations under extreme weather conditions (wind gusts, etc.). Furthermore, EO data has proved extremely useful for subsidence detection: SAR interferometry images show no significant subsidence of the FRB towers or of the surrounding area. Gathering real-time GNSS data has produced continuous and accurate estimation of the displacement time series of the structure. The issues and gaps identified during the GeoSHM Feasibility Study form a solid foundation for the next stage of the GeoSHM service, a two-year demonstration project that started in February 2016. A new GeoSHM consortium has been formed, focusing on significant refinements to system reliability, sensor integration, data acquisition, data transmission, data fusion and SHM information extraction. This further developed GeoSHM system will be installed on a few Chinese bridges, and the reference monitoring system on the FRB will be expanded into a pre-operational system.
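
    A minimal sketch of the core processing step the abstract mentions, producing a 3D displacement time series by differencing GNSS positions against a reference coordinate. The coordinates, sampling rate and sway amplitudes below are invented for illustration and are not GeoSHM data.

```python
# Illustrative only: 3D displacement time series from GNSS positions relative to
# a reference coordinate. All positions below are simulated, not GeoSHM data.
import numpy as np

def displacement_series(positions_enu, reference_enu):
    """Displacement (m) of each epoch from the reference antenna position.

    positions_enu : (N, 3) array of East/North/Up coordinates per GNSS epoch
    reference_enu : (3,) long-term mean (or surveyed) antenna position
    """
    return np.asarray(positions_enu, dtype=float) - np.asarray(reference_enu, dtype=float)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(600)                                   # 600 epochs at an assumed 1 Hz
    east = 0.15 * np.sin(2 * np.pi * t / 120) + rng.normal(0, 0.01, t.size)   # simulated lateral sway
    north = rng.normal(0, 0.01, t.size)
    up = -0.05 * np.sin(2 * np.pi * t / 300) + rng.normal(0, 0.02, t.size)
    disp = displacement_series(np.column_stack([east, north, up]), [0.0, 0.0, 0.0])
    print(f"peak lateral displacement: {np.abs(disp[:, 0]).max():.3f} m")
```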

    RFID-Based Manufacturing Execution System for Mould Enterprises

    Because it is difficult for mould enterprises to manage and control the production process accurately with current manufacturing execution systems, radio frequency identification (RFID) technology was introduced into the manufacturing execution system. In this paper, an RFID-based manufacturing execution system is proposed for tracing and managing the real-time manufacturing process of moulds. The framework of an RFID-based manufacturing execution system for mould enterprises was established, and under this framework the key technologies, including an RFID-based shop-floor model for mould enterprises, an information fusion model for real-time monitoring, and the objective function of dynamic job-shop scheduling, were described. Finally, through the research and application of the system, a novel mode of manufacturing process management for mould enterprises was provided.
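
    The abstract centres on tracing moulds through the shop floor from RFID reads. The sketch below, with hypothetical tags, stations and timestamps, shows one plausible way an execution system could derive each mould's current station and dwell start time from such events; it is not the paper's implementation.

```python
# Minimal sketch (hypothetical data): deriving mould work-in-progress status
# from RFID read events of the form (tag_id, station, timestamp).
from datetime import datetime

events = [  # hypothetical shop-floor reads
    ("MOULD-001", "milling", "2024-01-08 08:00"),
    ("MOULD-001", "EDM", "2024-01-08 11:30"),
    ("MOULD-002", "milling", "2024-01-08 09:15"),
]

def current_status(events):
    """Return the latest station and the time of arrival there for each mould tag."""
    latest = {}
    for tag, station, ts in events:
        t = datetime.strptime(ts, "%Y-%m-%d %H:%M")
        if tag not in latest or t > latest[tag][1]:
            latest[tag] = (station, t)
    return latest

for tag, (station, since) in current_status(events).items():
    print(f"{tag}: at {station} since {since}")
```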

    A sensor fusion strategy based on a distributed optical sensing of airframe deformation applied to actuator load estimation

    Real-time health monitoring of mechatronic onboard systems often involves model-based approaches comparing measured (physical) signals with numerical models or statistical data. This approach often requires the accurate measurement of specific physical quantities characterizing the state of the real system, the command inputs, and the various boundary conditions that can act as sources of disturbance. In this regard, the authors study sensor fusion techniques capable of integrating the information provided by a network of optical sensors based on Bragg gratings to reconstruct the signals acquired by one or more virtual sensors (separately or simultaneously). With an appropriate Fiber Bragg Grating (FBG) network, it is possible to measure several physical quantities directly (locally) (e.g. temperature, vibration, deformation, humidity) and, at the same time, use these data to estimate other effects that significantly influence the system behavior but which, for various reasons, are not directly measurable. In this case, such signals can be "virtually measured" by suitably designed and trained artificial neural networks (ANNs). The authors propose a specific sensing technology based on FBGs, combining suitable accuracy with minimal invasiveness, low complexity, and robustness to EM disturbances and harsh environmental conditions. The test case considered to illustrate the proposed methodology refers to a servomechanical application designed to monitor the health status of flight control actuators in real time using a model-based approach. Since the external aerodynamic loads acting on the system influence the operation of most of the actuators, measuring them would help to accurately simulate the monitoring model's dynamic response. Therefore, the authors evaluate the effectiveness of the proposed sensor fusion strategy by using distributed sensing of the airframe strain to infer the aerodynamic loads acting on the flight control actuator. Operationally speaking, a structural and an aerodynamic model are combined to generate a database used to train data-based surrogates correlating strain measurements to the corresponding actuator load.
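
    As a rough illustration of the data-based surrogate idea (strain field in, actuator load out), the sketch below trains a small neural network on synthetic data standing in for the database generated by the structural and aerodynamic models; it is not the authors' network, dataset or load model.

```python
# Illustrative surrogate: map a vector of distributed strain readings (synthetic
# stand-ins for FBG measurements) to an actuator load with a small MLP.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_samples, n_gratings = 2000, 16
strain = rng.normal(0.0, 1.0, size=(n_samples, n_gratings))        # scaled strain field
# Hypothetical "true" load: a smooth nonlinear function of the strain field plus noise.
load = np.tanh(strain @ rng.normal(size=n_gratings)) * 5.0 + rng.normal(0, 0.1, n_samples)

X_train, X_test, y_train, y_test = train_test_split(strain, load, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X_train, y_train)
print("R^2 on held-out samples:", round(surrogate.score(X_test, y_test), 3))
```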

    Development of an underwater camera system for inland freshwater aquaculture

    Computer vision and image processing technologies are applied to aquatic research to understand fish and their interactions with other fish and their environment. An understanding of vision-based data acquisition and processing aids in developing predictive frameworks and decision support systems for efficient aquaculture monitoring and management. However, this emerging field is confronted by a lack of high-quality underwater visual data, whether from public or local setups. An accessible underwater camera system that obtains underwater visual data intensively, periodically and in real time is the most desired system for such emerging studies. In this regard, an underwater camera system that captures underwater images from an inland freshwater aquaculture setup was proposed. The components of the underwater camera system are primarily based on the Raspberry Pi, an open-source computing platform. The underwater camera continuously provides a real-time video streaming link of underwater scenes, and the local processor periodically acquires and stores data from this link in the form of images. These data are stored locally and remotely. The local processor also initiates a connection to a remote processor to allow remote viewing of the real-time video stream. Aside from accessing the data and streaming link remotely, the remote processor analyzes the statistics of the underwater images to motivate the application of color balance and fusion, a state-of-the-art underwater image enhancement method. The applications of the proposed system and the enhancement of the captures are objectively evaluated. The proposed system captured around 1.2 GB of 8 MP underwater images every day during the daytime and stored these images in cloud storage. The system also captured subjects within 10-35 cm in turbid fishpond water. Statistical analysis of the gathered data revealed that underwater images from turbid fishpond setups have low quality in terms of inaccurate color representation (i.e., dominant green intensities and mostly suppressed blue intensities) and low contrast. These observations motivated the application of color balance and fusion to the locally acquired data. Furthermore, the objective evaluation revealed that color balance and fusion is the most effective method for improving information content and edge detail, as quantified by high color information entropies and high average gradients. These metrics demonstrate the effectiveness of the proposed data acquisition and preprocessing system.
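
    Two of the ingredients named above, a colour balance step and the average-gradient metric used to score edge detail, can be sketched as follows on a synthetic frame. The gray-world balance shown is a simplified stand-in, not the paper's colour balance and fusion pipeline, and the image statistics are invented.

```python
# Minimal sketch: gray-world colour balance and an average-gradient sharpness
# metric, applied to a synthetic greenish, low-contrast frame.
import numpy as np

def gray_world_balance(img):
    """Scale each channel so its mean matches the overall mean (gray-world assumption)."""
    img = img.astype(float)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / np.maximum(channel_means, 1e-6)
    return np.clip(img * gains, 0, 255).astype(np.uint8)

def average_gradient(gray):
    """Mean magnitude of horizontal/vertical intensity differences."""
    gray = gray.astype(float)
    gx = np.diff(gray, axis=1)[:-1, :]
    gy = np.diff(gray, axis=0)[:, :-1]
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic frame imitating turbid pond water: strong green, weak blue channel.
    img = np.stack([rng.integers(40, 80, (120, 160)),
                    rng.integers(120, 180, (120, 160)),
                    rng.integers(20, 50, (120, 160))], axis=-1).astype(np.uint8)
    balanced = gray_world_balance(img)
    print("avg gradient before/after:",
          round(average_gradient(img.mean(axis=2)), 2),
          round(average_gradient(balanced.mean(axis=2)), 2))
```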

    Activity recognition in mental health monitoring using multi-channel data collection and neural network

    Master's thesis, Erasmus Mundus Master in Geospatial Technologies (2013 curriculum). Code: SIW013. Academic year 2020-2021. Ecological momentary assessment (EMA) methods can be used to extract context-related information by studying a subject's behaviour in an environment in real time. In mental health, EMA can be used to assess patients with mental disorders by deriving contextual information from data and providing psychological interventions based on the person's behaviour. With advancements in technology, smart devices such as mobile phones and smartwatches can be used to collect EMA data. Such a contextual information system is used in SyMptOMS, which uses accelerometer data from a smartphone for activity recognition of the patient. Monitoring patients with mental disorders can be useful, and psychological interventions can be provided in real time to manage their behaviour. In this research study, we investigate the effect of multi-channel data on the accuracy of human activity recognition with a neural network model, by predicting activities based on data from smartphone and smartwatch accelerometer sensors. The study also investigates model performance for similar activities such as SITTING and LYING DOWN. Tri-axial accelerometer data is collected simultaneously from the smartphone and the smartwatch using a data collection application. Features are extracted from the raw data and then used as input to a neural network. The model is trained on single-device input from the smartphone and the smartwatch as well as on the fused data from both sensors. The performance of the model is evaluated using test samples from the collected data. Results show that the model with multi-channel data achieves higher activity recognition accuracy than the model with only a single-channel data source.
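
    A rough sketch of the multi-channel idea: per-window features from the smartphone and smartwatch accelerometers are concatenated before classification. The signals, labels, window length and feature set below are synthetic assumptions, not the study's data or network.

```python
# Illustrative multi-channel fusion: concatenate window features from two
# tri-axial accelerometers (phone + watch) and classify with a small MLP.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def window_features(acc):
    """Per-axis mean and standard deviation of one tri-axial window of shape (N, 3)."""
    return np.concatenate([acc.mean(axis=0), acc.std(axis=0)])

rng = np.random.default_rng(0)
X, y = [], []
# Two synthetic classes standing in for similar activities (e.g. SITTING vs LYING DOWN).
for label, (phone_bias, watch_bias) in enumerate([(0.0, 0.0), (0.3, 0.6)]):
    for _ in range(300):
        phone = rng.normal(phone_bias, 0.1, size=(128, 3))   # 128-sample window (assumed)
        watch = rng.normal(watch_bias, 0.1, size=(128, 3))
        X.append(np.concatenate([window_features(phone), window_features(watch)]))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(np.array(X), np.array(y), random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print("fused-channel accuracy:", round(clf.score(X_test, y_test), 3))
```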

    Camera Planning and Fusion in a Heterogeneous Camera Network

    Wide-area camera networks are becoming more and more common. They have a wide range of commercial and military applications, from video surveillance to smart homes and from traffic monitoring to anti-terrorism. The design of such a camera network is a challenging problem due to the complexity of the environment, self- and mutual occlusion of moving objects, diverse sensor properties, and a myriad of performance metrics for different applications. In this dissertation, we consider two such challenges: camera planning and camera fusion. Camera planning determines the optimal number and placement of cameras for a target cost function. Camera fusion describes the task of combining images collected by heterogeneous cameras in the network to extract information pertinent to a target application. I tackle the camera planning problem by developing a new unified framework based on binary integer programming (BIP) to relate the network design parameters and the performance goals of a variety of camera network tasks. Most BIP formulations are NP-hard problems, and various approximate algorithms have been proposed in the literature. In this dissertation, I develop a comprehensive framework for comparing the entire spectrum of approximation algorithms, from greedy and Markov Chain Monte Carlo (MCMC) methods to various relaxation techniques. The key contribution is to provide not only a generic formulation of the camera planning problem but also novel approaches to adapt the formulation to powerful approximation schemes, including Simulated Annealing (SA) and Semi-Definite Programming (SDP). The accuracy, efficiency and scalability of each technique are analyzed and compared in depth. Extensive experimental results are provided to illustrate the strengths and weaknesses of each method. The second problem, heterogeneous camera fusion, is very complex. Information can be fused at different levels, from pixels or voxels to semantic objects, with large variation in accuracy, communication and computation costs. My focus is on the geometric transformation of shapes between objects observed on different camera planes. This so-called geometric fusion approach usually provides the most reliable fusion at the expense of high computation and communication costs. To tackle this complexity, a hierarchy of camera models with different levels of complexity is proposed to balance the effectiveness and efficiency of the camera network operation. Different calibration and registration methods are then proposed for each camera model. Finally, I provide two specific examples to demonstrate the effectiveness of the model: 1) a fusion system that improves the segmentation of the human body in a camera network consisting of thermal and regular visible-light cameras, and 2) a view-dependent rendering system that combines information from depth and regular cameras to collect scene information and generate new views in real time.
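
    The greedy approximation mentioned above can be illustrated on a toy coverage-style placement problem: choose k candidate camera positions that together cover as many target points as possible. The coverage matrix below is random, and the formulation is deliberately simplified relative to the dissertation's BIP framework.

```python
# Toy camera planning sketch: greedy selection of k candidate placements that
# maximise the number of covered target points, given a visibility matrix.
import numpy as np

def greedy_placement(coverage, k):
    """coverage: boolean (n_candidates, n_points) matrix; returns chosen indices and count."""
    covered = np.zeros(coverage.shape[1], dtype=bool)
    chosen = []
    for _ in range(k):
        gains = (coverage & ~covered).sum(axis=1)    # new points each candidate would add
        best = int(np.argmax(gains))
        if gains[best] == 0:                          # nothing left to gain
            break
        chosen.append(best)
        covered |= coverage[best]
    return chosen, int(covered.sum())

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    coverage = rng.random((20, 200)) < 0.15           # random visibility of 200 grid points
    chosen, n_covered = greedy_placement(coverage, k=4)
    print("cameras:", chosen, "points covered:", n_covered, "/ 200")
```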

    Model-based observer proposal for surface roughness monitoring

    Paper presented at MESIC 2019, the 8th Manufacturing Engineering Society International Conference (Madrid, 19-21 June 2019). In the literature, many different machining monitoring systems for surface roughness and tool condition have been proposed and validated experimentally. However, these approaches commonly require costly equipment and experimentation. In this paper, we propose an alternative monitoring system for surface roughness based on a model-based observer that considers simple relationships between tool wear, power consumption and surface roughness. The system estimates the surface roughness according to simple models and updates the estimate by fusing information from quality inspection and power consumption. This monitoring strategy is aligned with Industry 4.0 practices and promotes the fusion of data at different shop-floor levels.
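
    A minimal sketch of the kind of observer the abstract describes, under invented model parameters: surface roughness is propagated with a simple power-driven wear model and corrected with a fixed gain whenever a sampled quality inspection becomes available. It is not the paper's observer design.

```python
# Illustrative fixed-gain observer: predict roughness drift from power readings,
# correct when a sparse inspection measurement is available. Parameters invented.
import numpy as np

def observer(power, inspections, ra0=0.8, k_wear=0.002, gain=0.5):
    """Estimate Ra (um) per machined part from mean spindle power, fusing inspections.

    power       : iterable of mean power per part (W)
    inspections : dict {part_index: measured Ra (um)} from sampling inspection
    """
    ra_hat, history = ra0, []
    for i, p in enumerate(power):
        ra_hat = ra_hat + k_wear * p            # prediction: roughness drifts with load
        if i in inspections:                     # correction step with a fixed gain
            ra_hat += gain * (inspections[i] - ra_hat)
        history.append(ra_hat)
    return history

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    power = rng.normal(50, 5, size=30)           # simulated power per part
    inspections = {9: 2.1, 19: 3.0}              # hypothetical sampled Ra measurements
    est = observer(power, inspections)
    print(f"estimated Ra for last part: {est[-1]:.2f} um")
```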