    A skewness-aware matrix factorization approach for mesh-structured cloud services

    Online cloud services must fulfill clients' requests quickly and at scale. State-of-the-art cloud services are increasingly deployed as a distributed service mesh, in which service-to-service communication is frequent. Unfortunately, problematic events may occur between any pair of nodes in the mesh, so it is vital to maximize network visibility. A state-of-the-art approach models pairwise RTTs with a latent factor model represented as a low-rank matrix factorization. A latent factor corresponds to a rank-one component in the factorization model and is shared by all node pairs. However, different node pairs usually experience a skewed set of hidden factors, which should be fully accounted for in the model. In this paper, we propose a skewness-aware matrix factorization method named SMF. We decompose the matrix factorization into basic units of rank-one latent factors and progressively combine rank-one factors for different node pairs. We present a unifying framework that automatically and adaptively selects the rank-one factors for each node pair, which not only preserves the low rankness of the matrix model but also adapts to skewed network latency distributions. On real-world RTT data sets, SMF significantly improves the relative error by a factor of 0.2x to 10x, converges quickly and stably, and compactly captures fine-grained local and global network latency structures.
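
    The per-pair factor selection at the heart of SMF can be illustrated compactly. Below is a minimal NumPy sketch, not the authors' implementation: it fits a sum of rank-one components to a partially observed RTT matrix (assumed scaled to O(1)) by gradient descent, then lets each observed pair keep the prefix of components that best explains its own latency. The greedy prefix rule and all hyperparameters are illustrative assumptions.

        import numpy as np

        def smf_sketch(R, k=4, lr=0.02, epochs=400, lam=0.05, seed=0):
            """Fit k rank-one components to RTT matrix R (NaN = unobserved),
            then select a per-pair prefix of factors."""
            n = R.shape[0]
            rng = np.random.default_rng(seed)
            obs = ~np.isnan(R)
            target = np.nan_to_num(R)
            U = rng.normal(scale=0.1, size=(k, n))
            V = rng.normal(scale=0.1, size=(k, n))
            for _ in range(epochs):
                pred = np.einsum('ri,rj->ij', U, V)   # sum of rank-one terms
                err = (target - pred) * obs           # error on observed pairs only
                for r in range(k):
                    U[r] += lr * (err @ V[r] - lam * U[r])
                    V[r] += lr * (err.T @ U[r] - lam * V[r])
            # Skewness-aware step: each observed pair keeps the prefix of
            # rank-one factors that best explains its own RTT.
            cum = np.cumsum(np.einsum('ri,rj->rij', U, V), axis=0)
            best = np.argmin(np.abs(cum - target), axis=0)
            best = np.where(obs, best, k - 1)         # unobserved pairs: use all k
            est = np.take_along_axis(cum, best[None], axis=0)[0]
            return est, best

    Pairs whose latency is dominated by one hidden factor end up with small entries in best, while pairs with skewed, multi-factor latency keep more components, loosely mirroring SMF's adaptive rank-one selection.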

    EvAC3D: From Event-based Apparent Contours to 3D Models via Continuous Visual Hulls

    3D reconstruction from multiple views is a successful computer vision field with many deployed applications. The state of the art is based on traditional RGB frames, which enable the optimization of photo-consistency across views. In this paper, we study the problem of 3D reconstruction from event cameras, motivated by the advantages of event-based cameras in terms of low power and latency, as well as by the biological evidence that eyes in nature capture the same kind of data and still perceive 3D shape well. Our hypothesis that 3D reconstruction is feasible using events rests on the information contained in the occluding contours and on the continuous scene acquisition that events provide. We propose Apparent Contour Events (ACE), a novel event-based representation that defines the geometry of the apparent contour of an object. We represent ACE by a spatially and temporally continuous implicit function defined in the event x-y-t space. Furthermore, we design a novel continuous Voxel Carving algorithm enabled by the high temporal resolution of the Apparent Contour Events. To evaluate the performance of the method, we collect MOEC-3D, a 3D event dataset of common real-world objects. We demonstrate the ability of EvAC3D to reconstruct high-fidelity mesh surfaces from real event sequences while allowing the refinement of the 3D reconstruction for each individual event. Comment: 16 pages, 8 figures, European Conference on Computer Vision (ECCV) 2022
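
    The continuous carving idea can be sketched in a few lines. The toy below is not the EvAC3D algorithm: it assumes the ACE stream has already been fit into a continuous implicit contour function, exposed here as a hypothetical inside(u, v) oracle per event timestamp, together with a hypothetical project(points, t) that uses the known camera trajectory. Voxels that repeatedly project outside the apparent contour are carved, and the update happens per event rather than per frame.

        import numpy as np

        def continuous_carving(contour_stream, project, grid=(64, 64, 64), votes=8):
            """Carve a voxel grid with one update per contour-event time."""
            axes = [np.linspace(-1.0, 1.0, g) for g in grid]
            X, Y, Z = np.meshgrid(*axes, indexing='ij')
            pts = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)   # voxel centres
            carves = np.zeros(pts.shape[0], dtype=np.int32)
            for t, inside in contour_stream:
                u, v = project(pts, t)           # camera pose at event time t
                carves[~inside(u, v)] += 1       # vote against outside voxels
            return (carves < votes).reshape(grid)  # surviving visual hull

    Projecting every voxel per event, as done here, is far too slow for real streams that deliver millions of events per second, which is presumably why an efficient continuous formulation is needed in practice.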

    Real-Time Trigger and Online Data Reduction Based on Machine Learning Methods for Particle Detector Technology

    Modern particle accelerator experiments generate immense volumes of data at runtime. Storing the entire amount of data produced quickly exceeds the available budget for the data readout infrastructure. This problem is usually addressed by a combination of trigger and data reduction mechanisms, both of which are placed as close to the detectors as possible so that the desired reduction of the outgoing data rates takes effect as early as possible. The methods traditionally used in such systems, however, struggle to achieve efficient reduction in modern experiments, partly because of the complex distributions of the background events that occur. The situation is aggravated during the development of the detector readout by the fact that the properties of the accelerator and detector under high-luminosity operation are unknown in advance. For this reason, a robust and flexible algorithmic alternative is needed, which machine learning methods can provide. Since such trigger and data reduction systems must operate under demanding conditions, such as a tight latency budget, a large number of data transmission links, and general real-time requirements, FPGAs are often used as the technological basis for their implementation. In this work, several FPGA-based approaches were developed and implemented that address the prevailing problems for the Belle II experiment. These approaches are presented and discussed throughout this thesis.
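
    To give a flavor of what an ML-based trigger on an FPGA computes, here is a minimal fixed-point sketch; the Q1.7 quantization, layer sizes, and threshold are illustrative assumptions, not the Belle II design developed in the thesis.

        import numpy as np

        def quantize(w, frac_bits=7):
            # Signed 8-bit fixed point (Q1.7), a common FPGA-friendly format.
            return np.clip(np.round(w * (1 << frac_bits)), -128, 127).astype(np.int64)

        def mlp_trigger(hits, W1, b1, W2, b2, frac_bits=7, thresh=0):
            # hits: detector features scaled to [-1, 1) before quantization.
            x = quantize(hits, frac_bits)
            h = x @ W1 + (b1 << frac_bits)          # wide integer accumulators
            h = np.maximum(h >> frac_bits, 0)       # rescale and ReLU
            y = h @ W2 + (b2 << frac_bits)
            return bool((y >> frac_bits) > thresh)  # keep/drop decision

        # Weights are trained offline in floating point, then quantized
        # once before synthesis (shapes here are arbitrary).
        rng = np.random.default_rng(1)
        W1 = quantize(rng.normal(scale=0.3, size=(16, 8)))
        b1 = quantize(rng.normal(scale=0.3, size=8))
        W2 = quantize(rng.normal(scale=0.3, size=8))
        b2 = quantize(np.array(0.0))
        keep = mlp_trigger(rng.uniform(-1.0, 1.0, 16), W1, b1, W2, b2)

    On hardware, the same arithmetic maps to integer multiply-accumulate logic with deterministic latency, which is what makes such models usable inside a tight trigger latency budget.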

    Neuromorphic Systems for Pattern Recognition and UAV Trajectory Planning

    Detection and control are two essential components of an intelligent system. This thesis investigates novel techniques in both areas, focusing on handwritten text recognition and UAV flight control. Recognizing handwritten text is a challenging task due to the wide variety of writing styles and the lack of clear boundaries between adjacent characters. The difficulty increases greatly if the detection algorithm relies solely on pattern matching, without information about the dynamics of the handwriting trajectories. Motivated by these challenges, this thesis first investigates the pattern recognition problem. We use offline handwritten text recognition as a case study to explore the performance of a recurrent belief propagation model. We first develop a probabilistic inference network that post-processes the recognition results of a deep Convolutional Neural Network (CNN) (e.g., LeNet) and assembles individual characters into words. The output of the inference network is a set of words and their probabilities. A series of post-processing and improvement techniques is then introduced to further increase recognition accuracy. We study the performance of the proposed model through various comparisons. The results show that it significantly improves accuracy by correcting deletion, insertion, and replacement errors, which are the main sources of invalid candidate words.

    Deep Reinforcement Learning (DRL) has been widely applied to the control of autonomous systems because it provides solutions for complex decision-making tasks that previously could not be solved with deep learning alone. To enable autonomous Unmanned Aerial Vehicles (UAVs), this thesis presents a two-level trajectory planning framework for UAVs in indoor environments. At the higher level, a sequence of waypoints is selected that leads the UAV from its current position to the destination. At the lower level, an optimal trajectory is generated analytically between each pair of adjacent waypoints. The goal of trajectory generation is to maintain the stability of the UAV, and the goal of waypoint planning is to select waypoints that minimize control thrust over the entire trip while avoiding collisions with obstacles. The entire framework is implemented using DRL, which learns the highly complicated, nonlinear interaction between the two levels and the influence of the environment. Given the pre-planned trajectory, this thesis further presents an actor-critic reinforcement learning framework that realizes continuous trajectory control of the UAV through a set of desired waypoints. We construct a deep neural network and develop reinforcement learning for better trajectory tracking. In addition, Field Programmable Gate Array (FPGA) based hardware acceleration is designed for energy-efficient real-time control.

    If the trajectory planning model is to be integrated into a UAV system for real-time on-board planning, a key challenge is delivering the required performance under strict memory and computational constraints. Techniques that compress Deep Neural Network (DNN) models are attractive because they allow optimized neural network models to be deployed efficiently on platforms with limited energy and storage capacity. However, conventional model compression techniques prune the DNN after it is fully trained, which is very time-consuming, especially when the model is trained using DRL. To overcome this limitation, we present an early-phase integrated neural network weight compression system for DRL-based waypoint planning. By applying pruning at an early phase, the DRL model can be compressed without significant training overhead. By tightly integrating pruning and retraining in the early phase, we achieve a higher model compression rate, further reduce memory and computational complexity, and improve the success rate compared to the original work.
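
    The early-phase compression idea can be sketched as magnitude pruning applied during, rather than after, training. In the sketch below (a rough stand-in, not the thesis's exact system), sparsity ramps up over the first part of training, and the resulting masks are then reapplied after every DRL update so that pruned weights stay zero; all schedule constants are illustrative assumptions.

        import numpy as np

        def prune_masks(weights, step, total_steps, final_sparsity=0.8,
                        ramp_start=0.05, ramp_end=0.3):
            """Per-layer magnitude-pruning masks with an early ramp-up."""
            frac = np.clip((step / total_steps - ramp_start)
                           / (ramp_end - ramp_start), 0.0, 1.0)
            sparsity = final_sparsity * frac
            masks = []
            for W in weights:
                k = int(sparsity * W.size)
                if k == 0:
                    masks.append(np.ones_like(W, dtype=bool))
                    continue
                cutoff = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
                masks.append(np.abs(W) > cutoff)   # drop smallest magnitudes
            return masks

        # Inside the DRL training loop (pseudo-usage):
        #   masks = prune_masks(policy_weights, step, total_steps)
        #   policy_weights = [W * m for W, m in zip(policy_weights, masks)]

    Freezing the masks once the ramp ends means the remaining training budget goes into retraining the surviving weights, which loosely corresponds to the tight integration of pruning and retraining described in the abstract.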