53 research outputs found

    Contributions to improve the technologies supporting unmanned aircraft operations

    International Doctoral Mention (Mención Internacional en el título de doctor)

    Unmanned Aerial Vehicles (UAVs), in their smaller versions known as drones, are becoming increasingly important in today's societies. The systems that make them up present a multitude of challenges, of which error can be considered the common denominator. The environment is perceived through sensors that have errors; the models that interpret this information and/or define behaviors are approximations of the world and therefore also have errors. Explaining error makes it possible to extend the limits of deterministic models to address real-world problems. The performance of the technologies embedded in drones depends on our ability to understand, model, and control the error of the systems that integrate them, as well as of any new technologies that may emerge. Flight controllers integrate various subsystems that are generally dependent on other systems. One example is the guidance system, which provides the motors' propulsion controller with the information needed to accomplish a desired mission. To this end, the guidance system comprises a control law that reacts to the information perceived by the perception and navigation systems. The error of any of the subsystems propagates through the controller's ecosystem, so the study of each of them is essential. Among the strategies for error control are state-space estimators, where the Kalman filter has been a great ally of engineers since its appearance in the 1960s. Kalman filters are at the heart of information fusion systems, minimizing the error covariance of the system and allowing the measured states to be filtered and estimated in the absence of observations. State Space Models (SSMs) are developed from a set of hypotheses for modeling the world: among these are that models of the world must be linear and Markovian, and that their error must be Gaussian.
    In general, systems are not linear, so linearizations are performed on models that are already approximations of the world. In other cases, the noise to be controlled is not Gaussian, but it is approximated by that distribution in order to deal with it. Moreover, many systems are not Markovian, i.e., their states do not depend only on the previous state; there are other dependencies that state-space models cannot handle. This thesis presents a collection of studies in which error is formulated and reduced. First, the error in a computer vision-based precision landing system is studied; then, estimation and filtering problems are addressed from a deep learning perspective; finally, classification with deep learning over trajectories is studied. The first case of the collection studies the consequences of error propagation in a machine vision-based precision landing system. This work proposes a set of strategies to reduce the impact on the guidance system and, ultimately, to reduce the error. The next two studies approach the estimation and filtering problem from a deep learning perspective, where the error is a function to be minimized through learning. The last case of the collection deals with a trajectory classification problem with real data. This work completes coverage of the two main fields in deep learning, regression and classification, where the error is treated as a probability function of class membership.

    I would like to thank the Ministry of Science and Innovation for granting me the funding with reference PRE2018-086793, associated with the project TEC2017-88048-C2-2-R, which provided me the opportunity to carry out all my PhD activities, including completing an international research internship.

    Programa de Doctorado en Ciencia y Tecnología Informática por la Universidad Carlos III de Madrid. Presidente: Antonio Berlanga de Jesús.- Secretario: Daniel Arias Medina.- Vocal: Alejandro Martínez Cav
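    The linear, Markovian, Gaussian assumptions described above can be made concrete with a minimal Kalman filter for a constant-velocity target observed through a noisy position sensor. The model, noise values, and seed below are illustrative choices, not parameters taken from the thesis.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x: state estimate, P: error covariance, z: measurement,
    F: state transition, H: observation model,
    Q: process noise covariance, R: measurement noise covariance.
    """
    # Predict: propagate the state through the (linear, Markovian) model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: fuse the Gaussian prediction with the Gaussian measurement.
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Constant-velocity model: state = [position, velocity], position observed only.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[0.25]])

x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(0)
for t in range(50):
    true_pos = 1.0 * t * dt  # target moving at 1 m/s
    z = np.array([true_pos + rng.normal(0, 0.5)])
    x, P = kalman_step(x, P, z, F, H, Q, R)

print(x)  # estimated [position, velocity] after 50 noisy observations
```

    When any of the three hypotheses fails, as discussed above, variants such as linearized filters or the learned estimators studied in this thesis become necessary.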

    A hybrid localization method for Wireless Capsule Endoscopy (WCE)

    Wireless capsule endoscopy (WCE) is a well-established diagnostic tool for visualizing the Gastrointestinal (GI) tract. WCE provides a unique view of the GI system with minimal discomfort for patients. Doctors can determine the type and severity of an abnormality by analyzing the captured images, and early diagnosis helps them act on and treat the disease in its early stages. However, location information is missing from the frames. Images labeled with their location assist doctors in prescribing suitable medicines: disease progress can be monitored, the best treatment can be advised for the patient, and, at the time of surgery, the correct position to operate is indicated. Several attempts have been made to localize the WCE accurately. Methods based on Radio Frequency (RF), magnetics, image processing, ultrasound, and radiative imaging techniques have been investigated, each with its own strengths and weaknesses. RF-based and magnetic-based localization methods need an external reference point, such as a belt or box around the patient, which limits their activities and causes discomfort. Computing the location solely from an external reference cannot distinguish GI motion from capsule motion: the GI tract can move inside the body while the capsule is stationary inside the intestine, and this relative motion causes errors in position estimation. This work presents two pose fusion methods, Method 1 and Method 2, that compensate for the relative motion of the GI tract with respect to the body. Method 1 is based on the fusion of data from an Inertial Measurement Unit (IMU) sensor and side wall cameras. The IMU is a 9-Degree-Of-Freedom (DOF) sensor comprising a gyroscope, an accelerometer, and a magnetometer that monitor the capsule's orientation and its heading vector (a three-dimensional vector pointing in the direction of the capsule's head). Four monochromatic cameras are placed at the side of the capsule to measure displacement. The proposed method computes the heading vector using IMU data; the heading vector is then fused with the displacements to estimate the 3D trajectory. This method has high accuracy in the short term, but due to the accumulation of errors from the side wall cameras, the estimated trajectory tends to drift over time. Method 2 was developed to resolve the drifting issue while maintaining the same positioning error. The capsule is equipped with four side wall cameras and a magnet. Magnetic localization acquires the capsule's global position using nine three-axis Hall effect sensors. However, magnetic localization alone cannot distinguish between the capsule's and the GI tract's motions. To overcome this issue and increase tracking accuracy, the side wall cameras are utilized, which measure the capsule's movement rather than the involuntary motion of the GI system. A complete setup was designed to test the capsule and perform the experiments. The results show that Method 2 has an average position error of only 3.5 mm and can compensate for the GI tract's relative movements. Furthermore, environmental parameters such as magnetic interference and the nonhomogeneous structure of the GI tract have little influence on our system compared to the available magnetic localization methods. The experiments showed that Method 2 is suitable for localizing the WCE inside the body.
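    Method 1's fusion of the IMU-derived heading vector with camera-measured displacements amounts to dead reckoning along the heading direction. The sketch below is a simplified illustration assuming yaw/pitch orientation inputs and scalar displacement magnitudes; the actual sensor models and fusion details of the thesis are more elaborate.

```python
import numpy as np

def heading_vector(yaw, pitch):
    """Unit vector along the capsule's axis from yaw/pitch angles (radians).
    A simplified stand-in for the full 9-DOF IMU orientation estimate."""
    return np.array([
        np.cos(pitch) * np.cos(yaw),
        np.cos(pitch) * np.sin(yaw),
        np.sin(pitch),
    ])

def fuse_trajectory(orientations, displacements, start=np.zeros(3)):
    """Dead-reckon the 3D trajectory: at each step, move by the camera-measured
    displacement magnitude along the IMU-derived heading vector."""
    positions = [start]
    for (yaw, pitch), d in zip(orientations, displacements):
        positions.append(positions[-1] + d * heading_vector(yaw, pitch))
    return np.array(positions)

# Toy run: capsule moving straight along x, then pitching upward by 45 degrees.
orientations = [(0.0, 0.0)] * 5 + [(0.0, np.pi / 4)] * 5
displacements = [1.0] * 10  # mm per step, as measured by the side wall cameras
traj = fuse_trajectory(orientations, displacements)
print(traj[-1])  # final estimated position
```

    Because each step adds a small camera-measurement error, the estimated position drifts over time, which is exactly the limitation that motivates Method 2's magnetic global-position correction.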

    Monitoring, modelling and quantification of accumulation of damage on masonry structures due to recursive loads

    Induced seismicity is becoming increasingly common, particularly in Northern Europe, as efforts to increase local energy supplies intensify. The local building stock, comprising mainly low-rise domestic masonry structures without any aseismic design, has been found to be susceptible to these induced tremors. Induced seismicity is generally characterized by frequent small-to-medium magnitude earthquakes, for which structural and non-structural damage has been reported. Since the induced earthquakes are caused by third parties, liability issues arise and a damage-claim mechanism is activated. Typically, any damage is evaluated by visual inspection. This damage assessment process has been found rather cumbersome, since visual inspections are laborious, slow, and expensive, while identifying the cause of any light damage is a challenging task, rendering the development of a more reliable approach essential. The aim of this PhD study is to gain a better understanding of the monitoring, modelling and quantification of the accumulation of damage in masonry structures due to recursive loads. Fraeylemaborg, the most emblematic monument in the Groningen region, dating back to the 14th century, has experienced damage due to the induced seismic activity in the region in recent years. A novel monitoring approach is proposed to detect damage accumulation on the monument due to induced seismicity. Results of the monitoring, in particular of the effects of induced seismic activity, are discussed, as well as the usefulness of and need for various monitoring data in similar cases. A numerical model is developed and calibrated based on experimental findings, and different loading scenarios are compared with the actual damage patterns observed on the structure. Vision-based techniques are developed for the detection of damage accumulation in masonry structures in an attempt to enhance the effectiveness of the inspection process. In particular, an artificial intelligence solution is proposed for the automatic detection of cracks on masonry structures. A dataset of photographs of masonry structures is produced, containing complex backgrounds and various crack types and sizes. Moreover, different convolutional neural networks are evaluated for their efficacy in automatically detecting cracks. Furthermore, computer vision and photogrammetry methods are considered, along with novel invisible markers for monitoring cracks. The proposed method shifts the marker reflection, and its contrast with the background, into the invisible wavelengths of light (i.e. the near-infrared), so that the markers are not easily distinguishable. The method is thus particularly suitable for monitoring historical buildings, where it is important to avoid any interventions or disruption to the authenticity of the basic fabric of the construction. Next, the quantification and modelling of damage in masonry structures are attempted by taking into consideration the initiation and propagation of damage due to earthquake excitations. The evaluation of damage in masonry structures due to (induced) earthquakes represents a challenging task. Cumulative damage due to subsequent ground motions is expected to have an effect on the seismic capacity of a structure. Crack patterns obtained from experimental campaigns in the literature are investigated and their correlation with damage propagation is examined. Discontinuous modelling techniques are able to reliably reproduce damage initiation and propagation by accounting for residual cracks, even for low-intensity loading. Detailed models based on the Distinct Element Method and Finite Element Method analysis are considered to capture and quantify cumulative damage at the micro level in masonry subjected to seismic loads. Finally, an experimental campaign is undertaken to investigate the accumulation of damage in masonry structures under repetitive loads. Six wall specimens resembling the configuration of a spandrel element are tested under three-point in-plane bending, considering different loading protocols. The walls were prepared using materials and practices followed in the Groningen region. Different numerical approaches are investigated for their efficacy in reproducing the experimental response, and any limitations are highlighted.
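    The convolution operation at the core of the crack-detection networks mentioned above can be illustrated with a hand-crafted line filter applied to a toy wall patch; a trained CNN learns banks of such filters from the photograph dataset rather than using a fixed kernel. All values below are illustrative.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the basic building block of the
    convolutional networks used for crack detection."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 8x8 masonry patch: bright wall (1.0) with a dark vertical crack (0.0).
patch = np.ones((8, 8))
patch[:, 4] = 0.0

# Hand-crafted kernel responding to a dark vertical line; a CNN would learn
# many such filters (and their combinations) instead of using a fixed one.
kernel = np.array([[1.0, -2.0, 1.0],
                   [1.0, -2.0, 1.0],
                   [1.0, -2.0, 1.0]])

response = conv2d(patch, kernel)
crack_columns = np.where(response.max(axis=0) > 3.0)[0]
print(crack_columns)  # output columns where the crack filter fires
```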

    Monitoring Snow Cover and Snowmelt Dynamics and Assessing their Influences on Inland Water Resources

    Snow is one of the most vital cryospheric components owing to its wide coverage as well as its unique physical characteristics. It not only affects the balance of numerous natural systems but also influences various socio-economic activities of human beings. Notably, the importance of snowmelt water to global water resources is outstanding, as millions of people rely on snowmelt water for daily consumption and agricultural use. Nevertheless, due to the unprecedented temperature rise driven by climate change, global snow cover extent (SCE) has been shrinking significantly, which endangers the sustainability and availability of inland water resources. Therefore, in order to understand cryo-hydrosphere interactions under a warming climate, (1) monitoring SCE dynamics and snowmelt conditions, (2) tracking the dynamics of snowmelt-influenced waterbodies, and (3) assessing the causal effect of snowmelt conditions on inland water resources are indispensable. However, for each of these points, many research questions remain to be answered. Consequently, five objectives are proposed in this thesis. Objective 1: Reviewing the characteristics of Synthetic Aperture Radar (SAR) and its interactions with snow, and exploring the trends, difficulties, and opportunities of existing SAR-based SCE mapping studies; Objective 2: Proposing a novel total and wet SCE mapping strategy based on freely accessible SAR imagery, applicable to all land cover classes and transferable globally; Objective 3: Enhancing total SCE mapping accuracy by fusing SAR- and multi-spectral sensor-based information, and providing reliability maps for total SCE mapping; Objective 4: Proposing a cloud-free and illumination-independent inland waterbody dynamics tracking strategy using freely accessible datasets and services; Objective 5: Assessing the influence of snowmelt conditions on inland water resources.
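    For context on Objective 2: wet-snow mapping from SAR is commonly based on thresholding the backscatter ratio between a snow-covered acquisition and a snow-free reference, with a threshold of around -3 dB (the widely used Nagler-Rott approach). The sketch below illustrates that baseline idea, not the novel strategy proposed in this thesis; all values are illustrative.

```python
import numpy as np

def wet_snow_mask(sigma0, sigma0_ref, threshold_db=-3.0):
    """Classify wet snow where backscatter drops markedly below the snow-free
    reference (ratio thresholding in dB), following the widely used
    Nagler-Rott approach. Inputs are linear-scale backscatter rasters."""
    ratio_db = 10.0 * np.log10(sigma0 / sigma0_ref)
    return ratio_db < threshold_db

# Toy scene: reference backscatter 0.1 (linear); wet snow absorbs the signal,
# lowering backscatter to 0.02 (about -7 dB relative) in the top half.
ref = np.full((4, 4), 0.1)
scene = ref.copy()
scene[:2, :] = 0.02

mask = wet_snow_mask(scene, ref)
print(mask.sum())  # number of pixels flagged as wet snow
```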

    A Study on Efficient Designs of Approximate Arithmetic Circuits

    Approximate computing is a popular field in which accuracy is traded for energy. It can benefit applications such as multimedia, mobile computing, and machine learning, which are inherently error resilient. Error introduced into these applications, up to a certain degree, is beyond human perception. This flexibility can be exploited to design area-, delay-, and power-efficient architectures. However, care must be taken regarding how approximation compromises the correctness of results. This research work aims to provide approximate hardware architectures, with their error metrics and design metrics analyzed, along with their effects in image processing applications.

    Firstly, we study and propose unsigned array multipliers based on probability statistics, using approximate 4-2 compressors, full adders, and half adders. This work presents a new design approach for the approximation of multipliers: the partial products of the multiplier are altered to introduce terms of varying probability, and the logic complexity of the approximation is varied for the accumulation of the altered partial products based on their probability. The proposed approximation is utilized in two variants of 16-bit multipliers. Synthesis results reveal that the two proposed multipliers achieve power savings of 72% and 38%, respectively, compared to an exact multiplier, and have better precision than existing approximate multipliers. Mean relative error distance (MRED) figures are as low as 7.6% and 0.02% for the proposed approximate multipliers, which are better than previous state-of-the-art works. Performance of the proposed multipliers is evaluated with a geometric mean filtering application, where one of the proposed models achieves the highest peak signal-to-noise ratio (PSNR).

    Second, approximation is proposed for signed Booth multiplication, introduced in both the partial product generation and partial product accumulation circuits. Three multipliers (ABM-M1, ABM-M2, and ABM-M3) are proposed in which the modified Booth algorithm is approximated. In all three designs, approximate Booth partial product generators are designed with different variations of approximation. The approximations are performed by reducing the logic complexity of the Booth partial product generator, and the accumulation of partial products is slightly modified to improve circuit performance. Compared to the exact Booth multiplier, ABM-M1 achieves up to a 15% reduction in power consumption with an MRED value of 7.9 × 10⁻⁴, ABM-M2 has power savings of up to 60% with an MRED of 1.1 × 10⁻¹, and ABM-M3 has power savings of up to 50% with an MRED of 3.4 × 10⁻³. Compared to existing approximate Booth multipliers, the proposed multipliers ABM-M1 and ABM-M3 achieve up to a 41% reduction in power consumption while exhibiting very similar error metrics. Image multiplication and matrix multiplication are used as case studies to illustrate the high performance of the proposed approximate multipliers.

    Third, the approximation of distributed arithmetic-based sum-of-products units is analyzed. Sum-of-products units are key elements in many digital signal processing applications. Three approximate sum-of-products models based on distributed arithmetic are proposed, designed for different levels of accuracy. The first model achieves improvements of up to 64% in area and 70% in power compared to the conventional unit. The other two models provide improvements of 32% and 48% in area and 54% and 58% in power, respectively, with a reduced error rate compared to the first model. The third model achieves MRED and normalized mean error distance (NMED) figures as low as 0.05% and 0.009%. Performance of the approximate units is evaluated with a noisy image smoothing application, where the proposed models are capable of achieving higher PSNR than existing state-of-the-art techniques.

    Fourth, approximation is applied to a division architecture. Two approximation models are proposed for the restoring divider. In the first design, approximation is performed at the circuit level, where approximate divider cells are utilized in place of exact ones by simplifying the logic equations. In the second model, the restoring divider is analyzed strategically and the number of restoring divider cells is reduced by finding the portions of the divisor and dividend with significant information. An approximation factor p is used in both designs. In model 1, the design with p = 8 achieves a 58% reduction in both area and power consumption compared to the exact design, with a Q-MRED of 1.909 × 10⁻² and a Q-NMED of 0.449 × 10⁻². The second model with an approximation factor p = 4 has 54% area savings and 62% power savings compared to the exact design. The proposed models are found to have better error metrics than existing designs, with better performance at similar error values. A change detection image processing application is used for real-time assessment of the proposed and existing approximate dividers, and one of the models achieves a PSNR of 54.27 dB.
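    The MRED metric reported throughout can be illustrated with a toy approximate multiplier that zeroes the low bits of each operand before multiplying. This stand-in is not one of the proposed designs; the truncation scheme and bit widths are assumptions for illustration only.

```python
import itertools

def approx_multiply(a, b, trunc_bits=2):
    """Toy approximate multiplier: zero the low trunc_bits of each operand
    before multiplying. A stand-in for the partial-product approximations
    described above, not the proposed ABM designs themselves."""
    mask = ~((1 << trunc_bits) - 1)
    return (a & mask) * (b & mask)

def mred(exact_fn, approx_fn, bits=8):
    """Mean relative error distance over all pairs of non-zero operands."""
    total, count = 0.0, 0
    for a, b in itertools.product(range(1, 1 << bits), repeat=2):
        exact = exact_fn(a, b)
        total += abs(exact - approx_fn(a, b)) / exact
        count += 1
    return total / count

value = mred(lambda a, b: a * b, approx_multiply)
print(f"MRED = {value:.4f}")
```

    Exhaustive enumeration as above is feasible for small bit widths; for wider multipliers, MRED is typically estimated by random sampling of operand pairs.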

    Quantitative Analysis of Radiation-Associated Parenchymal Lung Change

    Radiation-induced lung damage (RILD) is a common consequence of thoracic radiotherapy (RT). We present here a novel classification of the parenchymal features of RILD. We developed a deep learning algorithm (DLA) to automate the delineation of five classes of parenchymal texture of increasing density. 200 scans were used to train and validate the network, and the remaining 30 scans were used as a hold-out test set. The DLA automatically labelled the data with Dice scores of 0.98, 0.43, 0.26, 0.47, and 0.92 for the five respective classes. Qualitative evaluation showed that the automated labels were acceptable in over 80% of cases for all tissue classes and achieved similar ratings to the manual labels. Lung registration was performed, and the effect of radiation dose on each tissue class and its correlation with respiratory outcomes were assessed. The change in volume of each tissue class over time, generated by both manual and automated segmentation, was calculated. The five parenchymal classes showed distinct temporal patterns. We quantified the volumetric change in textures after radiotherapy and correlated these with radiotherapy dose and respiratory outcomes. The effect of local dose on tissue class revealed a strong dose-dependent relationship. We have developed a novel classification of parenchymal changes associated with RILD that shows a convincing dose relationship. The tissue classes are related to both global and local dose metrics, and have a distinct evolution over time. Although less strong, there is a relationship between the radiological texture changes we can measure and respiratory outcomes, particularly the MRC score, which directly represents a patient's functional status. We have demonstrated the potential of using our approach to analyse and understand the morphological and functional evolution of RILD in greater detail than previously possible.
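    The Dice scores reported above follow the standard Dice similarity coefficient for comparing segmentation masks, 2|A ∩ B| / (|A| + |B|); the toy masks below are illustrative.

```python
import numpy as np

def dice_score(pred, target):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, target).sum() / denom

# Toy masks: the prediction is the target shifted one pixel to the right,
# so 2 of the 4 target pixels overlap.
target = np.zeros((4, 4), dtype=bool)
target[1:3, 1:3] = True   # 4 target pixels
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 2:4] = True     # 4 predicted pixels, 2 overlapping

print(dice_score(pred, target))  # 2*2 / (4+4) = 0.5
```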

    A review of technical factors to consider when designing neural networks for semantic segmentation of Earth Observation imagery

    Semantic segmentation (classification) of Earth Observation imagery is a crucial task in remote sensing. This paper presents a comprehensive review of technical factors to consider when designing neural networks for this purpose. The review focuses on Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and transformer models, discussing prominent design patterns for these artificial neural network (ANN) families and their implications for semantic segmentation. Common pre-processing techniques for ensuring optimal data preparation are also covered. These include methods for image normalization and chipping, strategies for addressing data imbalance in training samples, and techniques for overcoming limited data, including data augmentation, transfer learning, and domain adaptation. By encompassing both the technical aspects of neural network design and the data-related considerations, this review provides researchers and practitioners with a comprehensive and up-to-date understanding of the factors involved in designing effective neural networks for semantic segmentation of Earth Observation imagery.

    Comment: 145 pages with 32 figures
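    Image chipping and normalization, two of the pre-processing steps mentioned above, can be sketched as follows. The chip size, the drop-partial-edges policy, and the global min-max scaling are generic illustrative choices, not prescriptions from the review.

```python
import numpy as np

def chip_image(image, chip_size, stride=None):
    """Cut a large Earth Observation raster into fixed-size chips (tiles) for
    network training. Edges that do not fill a full chip are dropped in this
    simple sketch; padding the scene is a common alternative."""
    stride = stride or chip_size
    h, w = image.shape[:2]
    chips = []
    for top in range(0, h - chip_size + 1, stride):
        for left in range(0, w - chip_size + 1, stride):
            chips.append(image[top:top + chip_size, left:left + chip_size])
    return np.stack(chips)

def normalize(chips):
    """Global min-max normalization to [0, 1], a common pre-processing step
    (per-band statistics are often used for multi-spectral data)."""
    lo, hi = chips.min(), chips.max()
    if hi == lo:
        return np.zeros_like(chips, dtype=float)
    return (chips - lo) / (hi - lo)

# Stand-in single-band scene; a real raster would come from a satellite product.
scene = np.arange(64 * 64, dtype=float).reshape(64, 64)
chips = normalize(chip_image(scene, chip_size=32))
print(chips.shape)  # four non-overlapping 32x32 chips
```

    An overlapping stride (stride < chip_size) is often used at inference time so that chip-boundary artifacts can be blended away.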

    Recent Developments in Smart Healthcare

    Medicine is undergoing a sector-wide transformation thanks to advances in computing and networking technologies. Healthcare is changing from reactive and hospital-centered to preventive and personalized, from disease-focused to well-being-centered. In essence, healthcare systems, as well as fundamental medicine research, are becoming smarter. We anticipate significant improvements in areas ranging from molecular genomics and proteomics to decision support for healthcare professionals through big data analytics, and to support for behavior change through technology-enabled self-management and social and motivational support. Furthermore, with smart technologies, healthcare delivery could also be made more efficient, of higher quality, and lower in cost. In this special issue, we received a total of 45 submissions and accepted 19 outstanding papers that span several topics in smart healthcare, including public health, health information technology (Health IT), and smart medicine.