377 research outputs found

    A trust supportive framework for pervasive computing systems

    Recent years have witnessed the emergence and rapid growth of pervasive computing technologies such as mobile ad hoc networks, radio frequency identification (RFID) and Wi-Fi. Much research has been devoted to providing services while hiding the computing systems in the background environment. Trust is of critical importance for protecting service integrity and availability as well as user privacy. In our research, we design a trust-supportive framework that allows heterogeneous pervasive devices to collaborate with high security confidence while keeping the underlying details in the background. We design the overall system architecture and investigate its components and their relations, then examine the critical components in detail, such as authentication and/or identification and trust management. With our trust-supportive framework, the pervasive computing system can offer a low-cost, privacy-friendly and secure environment for its vast number of services.
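
    A minimal Python sketch of the kind of trust-management component such a framework might include; the TrustManager class, its update rule and its threshold are illustrative assumptions, not the authors' design.

        # Minimal sketch of a trust-management component: per-device trust scores
        # are blended from observed interaction outcomes and used to gate services.
        class TrustManager:
            def __init__(self, initial_trust=0.5, learning_rate=0.2, threshold=0.6):
                self.scores = {}                  # device_id -> trust score in [0, 1]
                self.initial_trust = initial_trust
                self.learning_rate = learning_rate
                self.threshold = threshold

            def report(self, device_id, outcome):
                """Blend a new interaction outcome (1.0 = good, 0.0 = bad) into the score."""
                old = self.scores.get(device_id, self.initial_trust)
                self.scores[device_id] = (1 - self.learning_rate) * old + self.learning_rate * outcome

            def is_trusted(self, device_id):
                """Authorize a service request only if the score exceeds the threshold."""
                return self.scores.get(device_id, self.initial_trust) >= self.threshold

        tm = TrustManager()
        tm.report("rfid-tag-17", 1.0)   # well-behaved interaction observed
        tm.report("rfid-tag-17", 0.0)   # misbehaviour observed
        print(tm.is_trusted("rfid-tag-17"))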

    Modularity-based approaches to community detection in multilayer networks with applications toward precision medicine

    Networks have become an important tool for the analysis of complex systems across many different disciplines including computer science, biology, chemistry, social sciences, and importantly, cancer medicine. Networks in the real world typically exhibit many forms of higher-order organization. The subfield of network analysis known as community detection aims to provide tools for discovering and interpreting the global structure of a network based on the connectivity patterns of its edges. In this thesis, we provide an overview of the methods for community detection in networks with an emphasis on modularity-based approaches. We discuss several caveats and drawbacks of currently available methods. We also review the success that network analyses have had in interpreting large-scale 'omics' data in the context of cancer biology. In the second and third chapters, we present CHAMP and multimodbp, two useful community detection tools that seek to overcome several of the deficiencies in modularity-based community detection. In the final chapter, we develop a networks-based significance test for addressing an important question in the field of oncology: are mutations in DNA damage repair genes associated with elevated levels of tumor mutational burden? We apply the tools of network analysis to this question and showcase how this approach yields new insight into the structure of the problem, revealing what we call the TMB Paradox. We close by demonstrating the clinical utility of our findings in predicting patient response to novel immunotherapies.
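
    For illustration, the modularity objective that these approaches maximize can be explored with off-the-shelf tooling; the sketch below uses networkx's greedy modularity maximization on a standard single-layer benchmark graph and is not the CHAMP or multimodbp implementation.

        # Sketch of single-layer modularity-based community detection with networkx.
        # Modularity: Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j).
        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities, modularity

        G = nx.karate_club_graph()                    # classic benchmark network
        parts = greedy_modularity_communities(G)      # greedy maximization of Q
        print("communities:", [sorted(c) for c in parts])
        print("modularity Q =", round(modularity(G, parts), 3))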

    Deep Learning Methods for Remote Sensing

    Remote sensing is a field where important physical characteristics of an area are extracted using emitted radiation, generally captured by satellite cameras, sensors onboard aerial vehicles, etc. Captured data help researchers develop solutions to sense and detect various characteristics such as forest fires, flooding, changes in urban areas, crop diseases, soil moisture, etc. The recent impressive progress in artificial intelligence (AI) and deep learning has sparked innovations in technologies, algorithms, and approaches and led to results that were unachievable until recently in multiple areas, among them remote sensing. This book consists of sixteen peer-reviewed papers covering new advances in the use of AI for remote sensing.

    Using contour information and segmentation for object registration, modeling and retrieval

    This thesis considers different aspects of the utilization of contour information and syntactic and semantic image segmentation for object registration, modeling and retrieval in the context of content-based indexing and retrieval in large collections of images. Target applications include retrieval in collections of closed silhouettes, holistic word recognition in handwritten historical manuscripts and shape registration. The thesis also explores the feasibility of contour-based syntactic features for improving the correspondence between the output of bottom-up segmentation and the semantic objects present in the scene, and discusses the feasibility of different strategies for image analysis utilizing contour information, e.g. segmentation driven by visual features versus segmentation driven by shape models, or semi-automatic segmentation, in selected application scenarios. There are three contributions in this thesis. The first contribution considers structure analysis based on the shape and spatial configuration of image regions (so-called syntactic visual features) and their utilization for automatic image segmentation. The second contribution is the study of novel shape features, matching algorithms and similarity measures. Various applications of the proposed solutions are presented throughout the thesis, providing the basis for the third contribution, which is a discussion of the feasibility of different recognition strategies utilizing contour information. In each case, the performance and generality of the proposed approach have been analyzed through extensive and rigorous experimentation using test collections as large as possible.
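
    As a hedged illustration of contour-based shape matching for silhouette retrieval, the sketch below uses OpenCV's Hu-moment contour comparison; it is a generic stand-in, not the shape features or similarity measures proposed in the thesis.

        # Sketch of contour extraction and shape matching on binary silhouettes.
        import cv2
        import numpy as np

        def silhouette_contour(binary_image):
            """Extract the largest external contour from a binary silhouette image."""
            contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            return max(contours, key=cv2.contourArea)

        def shape_distance(img_a, img_b):
            """Lower values indicate more similar silhouettes (Hu-moment based)."""
            ca, cb = silhouette_contour(img_a), silhouette_contour(img_b)
            return cv2.matchShapes(ca, cb, cv2.CONTOURS_MATCH_I1, 0.0)

        # Toy query: a filled circle versus a filled square.
        a = np.zeros((128, 128), np.uint8); cv2.circle(a, (64, 64), 40, 255, -1)
        b = np.zeros((128, 128), np.uint8); cv2.rectangle(b, (24, 24), (104, 104), 255, -1)
        print(shape_distance(a, a), shape_distance(a, b))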

    Fingerprint-based localization in massive MIMO systems using machine learning and deep learning methods

    As wireless communication networks are growing into 5G, an enormous amount of data will be produced and shared on the new platform, which can be employed in promoting new services. Location information of mobile terminals (MTs) is remarkably useful among these data and can be used in different use cases such as inquiry and information services, community services, personal tracking, as well as location-aware communications. Nowadays, although the Global Positioning System (GPS) offers the possibility to localize MTs, it has poor performance in urban areas where a direct line-of-sight (LoS) to the satellites is blocked by many tall buildings. Besides, GPS has a high power consumption. Moreover, ranging-based localization techniques, which rely on radio signal information received from MTs such as time-of-arrival (ToA), angle-of-arrival (AoA), and received signal strength (RSS), are not able to provide satisfactory localization accuracy. Therefore, it is a notably challenging problem to provide precise and reliable location information of MTs in complex environments with rich scattering and multipath propagation. Fingerprinting (FP)-based machine learning methods are widely used for localization in complex areas due to their high reliability, cost-efficiency, and accuracy, and they are flexible enough to be used in many systems. In 5G networks, besides accommodating more users at higher data rates with better reliability while consuming less power, high-accuracy localization is also required. To meet such a challenge, massive multiple-input multiple-output (MIMO) systems have been introduced in 5G as a powerful and potential technology to not only improve spectral and energy efficiency using relatively simple processing but also provide accurate locations of MTs using a very large number of antennas combined with high carrier frequencies. There are two types of massive MIMO (M-MIMO), distributed and collocated. Here, we aim to use the FP-based method in M-MIMO systems to provide an accurate and reliable localization system in a 5G wireless network. We mainly focus on the two extremes of the M-MIMO paradigm: a large collocated antenna array (i.e., collocated M-MIMO) and a large geographically distributed antenna array (i.e., distributed M-MIMO). We then extract signal and channel features from the received signal in M-MIMO systems as fingerprints and propose FP-based models using clustering and regression to estimate the MTs' locations. Through this procedure, we are able to improve localization performance significantly and reduce the computational complexity of the FP-based method.
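
    A simplified sketch of the clustering-plus-regression fingerprinting idea on synthetic RSS-style fingerprints; the path-loss model, cluster count and regressors below are illustrative assumptions, not the exact pipeline of the thesis.

        # Cluster fingerprints, then regress a 2-D position within each cluster.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.neighbors import KNeighborsRegressor

        rng = np.random.default_rng(0)
        positions = rng.uniform(0, 100, size=(2000, 2))           # training MT locations (metres)
        antennas = rng.uniform(0, 100, size=(64, 2))              # large distributed antenna array
        dists = np.linalg.norm(positions[:, None, :] - antennas[None], axis=2)
        fingerprints = -20 * np.log10(dists) + rng.normal(0, 1, dists.shape)  # path-loss-like RSS + noise

        kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(fingerprints)
        models = {c: KNeighborsRegressor(n_neighbors=5).fit(fingerprints[kmeans.labels_ == c],
                                                            positions[kmeans.labels_ == c])
                  for c in range(8)}

        def localize(fp):
            """Assign the fingerprint to a cluster, then regress its position."""
            c = kmeans.predict(fp.reshape(1, -1))[0]
            return models[c].predict(fp.reshape(1, -1))[0]

        print(localize(fingerprints[0]), positions[0])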

    Robust and real-time hand detection and tracking in monocular video

    In recent years, personal computing devices such as laptops, tablets and smartphones have become ubiquitous. Moreover, intelligent sensors are being integrated into many consumer devices such as eyeglasses, wristwatches and smart televisions. With the advent of touchscreen technology, a new human-computer interaction (HCI) paradigm arose that allows users to interface with their device in an intuitive manner. Using simple gestures, such as swipe or pinch movements, a touchscreen can be used to directly interact with a virtual environment. Nevertheless, touchscreens still form a physical barrier between the virtual interface and the real world. An increasingly popular field of research that tries to overcome this limitation is video-based gesture recognition, hand detection and hand tracking. Gesture-based interaction allows users to directly interact with the computer in a natural manner by exploring a virtual reality using nothing but their own body language.

    In this dissertation, we investigate how robust hand detection and tracking can be accomplished under real-time constraints. In the context of human-computer interaction, real-time is defined as both low latency and low complexity, such that a complete video frame can be processed before the next one becomes available. Furthermore, for practical applications, the algorithms should be robust to illumination changes, camera motion, and cluttered backgrounds in the scene. Finally, the system should be able to initialize automatically, and to detect and recover from tracking failure. We study a wide variety of existing algorithms, and propose significant improvements and novel methods to build a complete detection and tracking system that meets these requirements. Hand detection, hand tracking and hand segmentation are related yet technically different challenges. Whereas detection deals with finding an object in a static image, tracking considers temporal information and is used to track the position of an object over time, throughout a video sequence. Hand segmentation is the task of estimating the hand contour, thereby separating the object from its background. Detection of hands in individual video frames allows us to automatically initialize our tracking algorithm, and to detect and recover from tracking failure.

    Human hands are highly articulated objects, consisting of finger parts that are connected with joints. As a result, the appearance of a hand can vary greatly, depending on the assumed hand pose. Traditional detection algorithms often assume that the appearance of the object of interest can be described using a rigid model and therefore cannot be used to robustly detect human hands. We therefore developed an algorithm that detects hands by exploiting their articulated nature. Instead of resorting to a template-based approach, we probabilistically model the spatial relations between different hand parts and the centroid of the hand. Detecting hand parts, such as fingertips, is much easier than detecting a complete hand. Based on our model of the spatial configuration of hand parts, the detected parts can be used to obtain an estimate of the complete hand's position. To comply with the real-time constraints, we developed techniques to speed up the process by efficiently discarding unimportant information in the image. Experimental results show that our method is competitive with the state-of-the-art in object detection while reducing computational complexity by a factor of 1,000. Furthermore, we showed that our algorithm can also be used to detect other articulated objects such as persons or animals and is therefore not restricted to the task of hand detection.

    Once a hand has been detected, a tracking algorithm can be used to continuously track its position over time. We developed a probabilistic tracking method that can cope with uncertainty caused by image noise, incorrect detections, changing illumination, and camera motion. Furthermore, our tracking system automatically determines the number of hands in the scene, and can cope with hands entering or leaving the video canvas. We introduced several novel techniques that greatly increase tracking robustness, and that can also be applied in domains other than hand tracking. To achieve real-time processing, we investigated several techniques to reduce the search space of the problem, and deliberately employ methods that are easily parallelized on modern hardware. Experimental results indicate that our methods outperform the state-of-the-art in hand tracking, while providing a much lower computational complexity.

    One of the methods used by our probabilistic tracking algorithm is optical flow estimation. Optical flow is defined as a 2D vector field describing the apparent velocities of objects in a 3D scene, projected onto the image plane. Optical flow is known to be used by many insects and birds to visually track objects and to estimate their ego-motion. However, most optical flow estimation methods described in the literature are either too slow to be used in real-time applications, or are not robust to illumination changes and fast motion. We therefore developed an optical flow algorithm that can cope with large displacements and that is illumination independent. Furthermore, we introduce a regularization technique that ensures a smooth flow field. This regularization scheme effectively reduces the number of noisy and incorrect flow-vector estimates, while maintaining the ability to handle motion discontinuities caused by object boundaries in the scene.

    The above methods are combined into a hand tracking framework which can be used for interactive applications in unconstrained environments. To demonstrate the possibilities of gesture-based human-computer interaction, we developed a new type of computer display. This display is completely transparent, allowing multiple users to perform collaborative tasks while maintaining eye contact. Furthermore, our display produces an image that seems to float in thin air, such that users can touch the virtual image with their hands. This floating image display has been showcased at several national and international events and trade shows. The research described in this dissertation has been evaluated thoroughly by comparing detection and tracking results with those obtained by state-of-the-art algorithms. These comparisons show that the proposed methods outperform most algorithms in terms of accuracy, while achieving a much lower computational complexity, resulting in a real-time implementation. Results are discussed in depth at the end of each chapter. This research further resulted in an international journal publication; a second journal paper that has been submitted and is under review at the time of writing this dissertation; nine international conference publications; a national conference publication; a commercial license agreement concerning the research results; two hardware prototypes of a new type of computer display; and a software demonstrator.
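
    The dissertation develops its own illumination-independent optical flow method; the sketch below only illustrates dense optical flow between two frames, using OpenCV's Farneback implementation as a generic stand-in and a synthetic moving rectangle as input.

        # Dense optical flow between two consecutive frames (generic illustration).
        import cv2
        import numpy as np

        def dense_flow(prev_bgr, next_bgr):
            """Return an (H, W, 2) field of apparent pixel velocities between two frames."""
            prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
            next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
            # Args: pyr_scale=0.5, levels=3, winsize=15, iterations=3, poly_n=5, poly_sigma=1.2, flags=0
            return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)

        # Synthetic pair of frames: a white rectangle shifted 5 pixels to the right.
        prev = np.zeros((120, 160, 3), np.uint8); cv2.rectangle(prev, (40, 40), (70, 70), (255, 255, 255), -1)
        cur = np.zeros((120, 160, 3), np.uint8);  cv2.rectangle(cur, (45, 40), (75, 70), (255, 255, 255), -1)
        flow = dense_flow(prev, cur)
        print("median horizontal motion inside the object (pixels):",
              float(np.median(flow[40:70, 45:70, 0])))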

    Advanced image processing techniques for detection and quantification of drusen

    Dissertation presented to obtain the degree of Doctor of Philosophy in Electrical Engineering, speciality in Perceptional Systems, by the Universidade Nova de Lisboa, Faculty of Sciences and Technology. Drusen are common features in the ageing macula, caused by the accumulation of extracellular materials beneath the retinal surface and visible in retinal fundus images as yellow spots. In the ophthalmologists' opinion, the evaluation of the total drusen area, in a sequence of images taken during a treatment, will help to understand the disease progression and the effectiveness of the treatment. However, this evaluation is fastidious and difficult to reproduce when performed manually. A literature review on automated drusen detection showed that the works already published were limited to adaptive or global thresholding techniques, which showed a tendency to produce a significant number of false positives. The purpose of this work was to propose an alternative method to automatically quantify drusen using advanced digital image processing techniques. The methodology is based on a detection and modelling algorithm to automatically quantify drusen. It includes an image pre-processing step that corrects the uneven illumination by means of smoothing-spline fitting and normalizes the contrast. The detection stage uses a new gradient-based segmentation algorithm that isolates drusen and provides basic drusen characterization to the modelling stage. The detected spots are then fitted by Gaussian functions to produce a model of the image, which is used to compute the affected areas. To validate the methodology, two software applications were implemented, one for semi-automated (MD3RI) and the other for automated detection of drusen (AD3RI). The first was developed for ophthalmologists to manually analyse and mark drusen deposits, while the other implements the algorithms for automatic drusen quantification. Four studies involving twelve specialists were conducted to assess the accuracy of the methodology. These compared the automated method to the specialists and evaluated its repeatability. The studies were analysed with regard to several indicators based on the total affected area and on a pixel-to-pixel analysis. Due to the high variability among the graders involved in the first study, a new evaluation method, the Weighed Matching Analysis, was developed to improve the pixel-to-pixel analysis by using the statistical significance of the observations to differentiate positive and negative pixels. From the results of these studies it was concluded that the proposed methodology is capable of measuring drusen automatically in an accurate and reproducible process. The thesis also proposes new image processing algorithms for image pre-processing, image segmentation, image modelling and image comparison, which are applicable to other image processing fields as well.
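
    A hedged sketch of the Gaussian-modelling step: a synthetic bright spot is fitted with a 2-D Gaussian whose footprint approximates the affected area. The fitting setup is a simplified assumption, not the AD3RI implementation.

        # Fit a 2-D Gaussian to a drusen-like bright spot and estimate its footprint.
        import numpy as np
        from scipy.optimize import curve_fit

        def gauss2d(coords, amplitude, x0, y0, sigma, offset):
            x, y = coords
            return offset + amplitude * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

        # Synthetic bright spot on a flat background with mild noise.
        yy, xx = np.mgrid[0:64, 0:64]
        truth = gauss2d((xx, yy), amplitude=80, x0=30, y0=34, sigma=5, offset=20)
        image = truth + np.random.default_rng(1).normal(0, 2, truth.shape)

        popt, _ = curve_fit(gauss2d, (xx.ravel(), yy.ravel()), image.ravel(),
                            p0=(50, 32, 32, 4, 10))
        amplitude, x0, y0, sigma, offset = popt
        area_px = np.pi * (2 * abs(sigma)) ** 2      # circular footprint within ~2 sigma of the centre
        print(f"fitted centre=({x0:.1f}, {y0:.1f}), sigma={abs(sigma):.1f}, area approx {area_px:.0f} px")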

    Distributed Load Testing by Modeling and Simulating User Behavior

    Modern human-machine systems such as microservices rely upon agile engineering practices which require changes to be tested and released more frequently than classically engineered systems. A critical step in the testing of such systems is the generation of realistic workloads, or load testing. Generated workload emulates the expected behaviors of users and machines within a system under test in order to find potentially unknown failure states. Typical testing tools rely on static testing artifacts to generate realistic workload conditions. Such artifacts can be cumbersome and costly to maintain; moreover, even model-based alternatives can prevent adaptation to changes in a system or its usage. Lack of adaptation can prevent the integration of load testing into system quality assurance, leading to an incomplete evaluation of system quality. The goal of this research is to improve the state of software engineering by addressing open challenges in load testing of human-machine systems with a novel process that a) models and classifies user behavior from streaming and aggregated log data, b) adapts to changes in system and user behavior, and c) generates distributed workload by realistically simulating user behavior. This research contributes a Learning, Online, Distributed Engine for Simulation and Testing based on the Operational Norms of Entities within a system (LODESTONE): a novel process for distributed load testing by modeling and simulating user behavior. We specify LODESTONE within the context of a human-machine system to illustrate distributed adaptation and execution in load testing processes. LODESTONE uses log data to generate and update user behavior models, cluster them into similar behavior profiles, and instantiate distributed workload on software systems. We analyze user behavioral data with differing characteristics to replicate human-machine interactions in a modern microservice environment. We discuss tools, algorithms, software design, and implementation in two different computational environments: client-server and cloud-based microservices. We illustrate the advantages of LODESTONE through a qualitative comparison of key feature parameters and through experimentation based on shared data and models. LODESTONE continuously adapts to changes in the system under test, which allows for the integration of load testing into the quality assurance process for cloud-based microservices.
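
    A condensed sketch of the log-driven behavior modeling that underlies this kind of load generation: per-endpoint transition probabilities are estimated from logged sessions and synthetic sessions are sampled from them to drive workload. The endpoint names and the Markov-style model are illustrative assumptions, not LODESTONE's actual design.

        # Learn a transition model from logged user sessions and sample synthetic ones.
        import random
        from collections import defaultdict

        def transition_model(sessions):
            """Estimate P(next endpoint | current endpoint) from observed sessions."""
            counts = defaultdict(lambda: defaultdict(int))
            for session in sessions:
                for cur, nxt in zip(session, session[1:]):
                    counts[cur][nxt] += 1
            return {cur: {nxt: n / sum(nxts.values()) for nxt, n in nxts.items()}
                    for cur, nxts in counts.items()}

        def simulate_session(model, start="/login", steps=5):
            """Sample a synthetic user session to drive workload generation."""
            path, state = [start], start
            for _ in range(steps):
                choices = model.get(state)
                if not choices:
                    break
                state = random.choices(list(choices), weights=list(choices.values()))[0]
                path.append(state)
            return path

        logs = [["/login", "/catalog", "/item", "/cart", "/checkout"],
                ["/login", "/catalog", "/item", "/catalog", "/item", "/cart"],
                ["/login", "/profile", "/logout"]]
        model = transition_model(logs)
        print(simulate_session(model))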