
    Navigation of Mobile Robots in Unknown Environments Using Range Measurements

    The ability of a robot to navigate itself in its environment is a crucial step towards autonomy. Navigation, as a subtask of the development of autonomous robots, is the subject of this thesis, which focuses on a method for simultaneous localization and mapping (SLAM) of mobile robots in six degrees of freedom (6DOF). As part of this research, a platform for 3D range data acquisition based on a continuously inclined laser rangefinder was developed. The platform, built as a standalone module that can be fitted onto different mobile robot chassis, is presented together with an evaluation of its measurements. The localization and mapping task is equivalent to registering multiple 3D images into a common frame of reference; for this purpose, a method based on the Iterative Closest Point (ICP) algorithm was developed. First, the originally implemented SLAM method, which uses ICP accelerated with k-d trees, is presented, focusing on its runtime performance and on the registration quality issues introduced by the implemented algorithms. To accelerate and improve the quality of the time-demanding 6DOF image registration, an extended method was developed. Its major extension is a factorized registration: 2D representations of vertical objects, called leveled maps, are extracted from the 3D point sets, and their invariance to three degrees of freedom is ensured both technologically and algorithmically. The extracted representations are registered in 3DOF using the ICP algorithm, and the resulting transformation pre-aligns the 3D data for the subsequent robust 6DOF ICP-based registration. The extended method is presented with all important modifications to the original method. The developed registration method was evaluated on real 3D data acquired in different indoor environments, examining the benefits of the factorization and the other extensions as well as the performance of the original ICP-based method. The factorization gives promising results compared to single-phase 6DOF registration in vertically structured environments. The disadvantages of the method are also discussed and possible solutions proposed. Finally, the future prospects of the research are presented.
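    To make the registration step concrete, here is a minimal 2D ICP sketch (not the thesis implementation): each iteration matches every source point to its nearest neighbor in the reference scan via a k-d tree, as the thesis does, and solves for the rigid transform in closed form with an SVD.

```python
# Minimal 2D ICP sketch: repeatedly match each point to its nearest
# neighbor in the reference scan and solve for the rigid transform
# (rotation R, translation t) in closed form via SVD (Kabsch).

import numpy as np
from scipy.spatial import cKDTree  # k-d tree acceleration, as in the thesis

def icp(src, ref, iters=50):
    """Align 2D point set `src` (N,2) to `ref` (M,2); returns R, t."""
    R, t = np.eye(2), np.zeros(2)
    tree = cKDTree(ref)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)            # nearest-neighbor matches
        matched = ref[idx]
        mu_s, mu_r = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_r)
        U, _, Vt = np.linalg.svd(H)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:           # avoid reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mu_r - Ri @ mu_s
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti          # accumulate the transform
    return R, t
```

    The thesis's factorized variant would first run such a registration in 3DOF on the extracted leveled maps and apply the result as a pre-alignment before the full 6DOF pass.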

    A Survey on Homomorphic Encryption Schemes: Theory and Implementation

    Legacy encryption systems depend on sharing a key (public or private) among the peers involved in exchanging an encrypted message. However, this approach poses privacy concerns. Especially with popular cloud services, control over the privacy of sensitive data is lost. Even when the keys are not shared, the encrypted material is shared with a third party that does not necessarily need to access the content. Moreover, untrusted servers, providers, and cloud operators can retain identifying elements of users long after users end their relationship with the services. Homomorphic Encryption (HE), a special kind of encryption scheme, can address these concerns because it allows any third party to operate on the encrypted data without decrypting it in advance. Although this extremely useful feature of HE has been known for over 30 years, the first plausible and achievable Fully Homomorphic Encryption (FHE) scheme, which allows any computable function to be performed on encrypted data, was introduced by Craig Gentry in 2009. Even though this was a major achievement, implementations so far have demonstrated that FHE still needs significant improvement to be practical on every platform. First, we present the basics of HE and the details of the well-known Partially Homomorphic Encryption (PHE) and Somewhat Homomorphic Encryption (SWHE) schemes, which are important pillars on the way to FHE. Then, the main FHE families, which have become the base for follow-up FHE schemes, are presented. Furthermore, the implementations and recent improvements of Gentry-type FHE schemes are surveyed. Finally, further research directions are discussed. This survey is intended to give a clear foundation to researchers and practitioners interested in knowing, applying, and extending state-of-the-art HE, PHE, SWHE, and FHE systems. Comment: Updated (October 6, 2017). This paper is an early draft of the survey being submitted to ACM CSUR and has been uploaded to arXiv for feedback from stakeholders.
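    To make the homomorphic property concrete, here is a minimal sketch (not from the survey) using textbook, unpadded RSA, which is multiplicatively homomorphic and thus an example of a PHE scheme; the key values are insecure toy numbers chosen only for illustration.

```python
# Toy demonstration of a partially homomorphic property:
# textbook (unpadded) RSA is multiplicatively homomorphic, i.e.
# E(m1) * E(m2) mod n == E(m1 * m2 mod n).
# Toy key sizes only -- not secure, for illustration.

p, q = 61, 53          # small primes (insecure toy values)
n = p * q              # RSA modulus
phi = (p - 1) * (q - 1)
e = 17                 # public exponent, gcd(e, phi) == 1
d = pow(e, -1, phi)    # private exponent (Python 3.8+ modular inverse)

def encrypt(m): return pow(m, e, n)
def decrypt(c): return pow(c, d, n)

m1, m2 = 7, 11
c1, c2 = encrypt(m1), encrypt(m2)

# A third party multiplies ciphertexts without ever decrypting them:
c_prod = (c1 * c2) % n
assert decrypt(c_prod) == (m1 * m2) % n
print(decrypt(c_prod))  # 77
```

    FHE extends this idea from a single operation (here, multiplication) to arbitrary computable functions over ciphertexts.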

    Cloud computing resource scheduling and a survey of its evolutionary approaches

    A disruptive technology fundamentally transforming the way computing services are delivered, cloud computing offers information and communication technology users a new dimension of convenience: resources as services via the Internet. Because the cloud provides a finite pool of virtualized on-demand resources, scheduling them optimally has become an essential and rewarding topic, and a trend of applying Evolutionary Computation (EC) algorithms to it is emerging rapidly. After analyzing the cloud computing architecture, this survey first presents a two-level taxonomy of cloud resource scheduling. It then paints a landscape of the scheduling problem and its solutions and, following the taxonomy, systematically surveys state-of-the-art approaches. Looking forward, it identifies challenges and potential future research directions, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration in the presence of big data is also discussed. Research in this area is only in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.
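    As a concrete illustration of the EC approach the survey describes, here is a hedged toy sketch (not from the survey): a minimal genetic algorithm that assigns independent tasks to virtual machines to minimize makespan. The task costs, population size, and mutation rate are all hypothetical.

```python
import random

# Toy genetic algorithm for cloud task scheduling: a chromosome maps each
# task to a VM; fitness is the makespan (load of the busiest VM).

task_cost = [4, 7, 2, 9, 5, 3, 8, 6]   # hypothetical task runtimes
NUM_VMS = 3
POP, GENS, MUT = 30, 200, 0.1

def makespan(assign):
    load = [0.0] * NUM_VMS
    for task, vm in enumerate(assign):
        load[vm] += task_cost[task]
    return max(load)

def crossover(a, b):
    cut = random.randrange(1, len(a))   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(a):
    return [random.randrange(NUM_VMS) if random.random() < MUT else g
            for g in a]

pop = [[random.randrange(NUM_VMS) for _ in task_cost] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=makespan)              # lower makespan is fitter
    elite = pop[:POP // 2]              # keep the fitter half
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP - len(elite))]
    pop = elite + children

best = min(pop, key=makespan)
print(best, makespan(best))
```

    Real EC schedulers in the surveyed literature add constraints (deadlines, cost, energy) and often optimize several objectives at once, but the evolve-select loop is the same.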

    A comprehensive meta-analysis of cryptographic security mechanisms for cloud computing

    The concept of cloud computing offers measurable computational and information resources as a service over the Internet. The major motivation behind the cloud setup is economic: it promises reduced operational and infrastructural expenditure. To turn this into reality, several impediments must be tackled, the most profound of which are security, privacy, and reliability issues. As user data is revealed to the cloud, it leaves the protection sphere of the data owner, which raises partly new security and privacy concerns. This work focuses on these issues across the various cloud service and deployment models by spotlighting their major challenges. While classical cryptography is an ancient discipline, modern cryptography, developed mostly in the last few decades, is the body of knowledge that must be applied to ensure strong security and privacy mechanisms in today's real-world scenarios. The technological solutions and the short- and long-term research goals of cloud security are described and addressed using classical as well as modern cryptographic mechanisms. This work explores new directions in cloud computing security while highlighting the correct selection of these fundamental technologies from a cryptographic point of view.

    Crux: Locality-Preserving Distributed Services

    Distributed systems achieve scalability by distributing load across many machines, but wide-area deployments can introduce worst-case response latencies proportional to the network's diameter. Crux is a general framework for building locality-preserving distributed systems: it transforms an existing scalable distributed algorithm A into a new locality-preserving algorithm A^LP, which guarantees for any two clients u and v interacting via A^LP that their interactions exhibit worst-case response latencies proportional to the network latency between u and v. Crux builds on compact-routing theory but generalizes those techniques beyond routing applications. Crux provides weak and strong consistency flavors and shows latency improvements for localized interactions in both cases, up to several orders of magnitude for weakly consistent Crux (from roughly 900 ms to 1 ms). We deployed locality-preserving versions of a Memcached distributed cache, a Bamboo distributed hash table, and a Redis publish/subscribe service on PlanetLab. Our results indicate that Crux is effective and applicable to a variety of existing distributed algorithms. Comment: 11 figures.
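    The following toy sketch is not Crux's actual algorithm, only an illustration of the locality-preserving principle the abstract describes: if two clients always interact through the smallest cluster that contains both of them, and cluster radii grow geometrically as in compact routing, interaction cost scales with the clients' mutual distance rather than with the network diameter. The grid clustering and latency model are invented for illustration.

```python
# Toy sketch (not Crux): nodes live in nested grid clusters whose radii
# double per level, echoing compact routing. Two clients interact through
# the smallest cluster containing both, so the cost tracks their mutual
# distance rather than the network diameter.

def cluster_id(pos, level):
    """Cluster cell containing `pos` at `level`; cell size is 2**level."""
    size = 2 ** level
    return (pos[0] // size, pos[1] // size, level)

def smallest_shared_level(u, v, max_level=20):
    """Lowest level at which u and v fall into the same cluster."""
    for level in range(max_level + 1):
        if cluster_id(u, level) == cluster_id(v, level):
            return level
    return max_level

def interaction_latency(u, v):
    # Model latency as proportional to the smallest shared cluster's radius.
    return 2 ** smallest_shared_level(u, v)

print(interaction_latency((3, 3), (4, 4)))      # nearby clients: cheap
print(interaction_latency((3, 3), (900, 900)))  # distant clients: costly
```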

    Using Alternating Decision Trees in Multi-Leveled Hierarchical Cloud Based System

    Cloud platforms are an essential part of the modern world. Used in all kinds of fields, from education to science, they have become inseparable from information technology and computing. However, the problem of performance optimization is still not entirely solved, even today. With the introduction of machine learning and artificial intelligence, this task has become easier to tackle. This paper presents a way to use decision trees to control the performance of a computational system and to balance load across nodes in an attempt to increase quality of service, as sketched below.
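    A hedged sketch of the idea (not the paper's code): scikit-learn provides no alternating decision tree, so a standard DecisionTreeClassifier stands in, and the node metrics and SLA label are synthetic. A tree trained on per-node metrics decides which node should take the next job.

```python
# Minimal load-balancing sketch: a decision tree (stand-in for an
# alternating decision tree) predicts whether a node will meet its
# latency target, and jobs are routed to the most promising node.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Hypothetical training data: [cpu_load, mem_load, queue_len] per node,
# labeled 1 if the node handled the job within its latency target.
X = rng.random((200, 3))
y = (X @ [0.5, 0.3, 0.2] < 0.5).astype(int)  # synthetic "met SLA" label

clf = DecisionTreeClassifier(max_depth=4).fit(X, y)

def pick_node(node_metrics):
    """Route the job to the node most likely to meet the latency target."""
    scores = clf.predict_proba(np.asarray(node_metrics))[:, 1]
    return int(np.argmax(scores))

nodes = [[0.9, 0.8, 0.7], [0.2, 0.3, 0.1], [0.5, 0.5, 0.5]]
print(pick_node(nodes))  # likely the lightly loaded node (index 1)
```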

    Performance benchmarking, analysis, and optimization of deep learning inference

    The world sees a proliferation of deep learning (DL) models and their wide adoption in different application domains. This has made the performance benchmarking, understanding, and optimization of DL inference an increasingly pressing task for both hardware designers and system providers, who want to offer the best possible computing system to serve DL models within the desired latency, throughput, and energy requirements while maximizing resource utilization. However, DL performance engineering faces the following challenges.

    Benchmarking: While there have been significant efforts to develop benchmark suites that evaluate widely used DL models, developing, maintaining, and running benchmarks takes a non-trivial amount of effort, and DL benchmarking has been hampered in part by the lack of representative and up-to-date benchmark suites.

    Performance understanding: Understanding the performance of DL workloads is challenging because their characteristics depend on the interplay between the models, frameworks, system libraries, and hardware (the HW/SW stack). Existing profiling tools are disjoint, however, and only profile within a particular level of the stack, which largely limits the types of analysis that can be performed on model execution.

    Optimization advising: The current DL optimization process is manual and ad hoc, requiring substantial effort and expertise. Existing tools lack the highly desired abilities to characterize ideal performance, identify sources of inefficiency, and quantify the benefits of potential optimizations. These deficiencies have led to slow DL characterization/optimization cycles that cannot keep up with the pace at which new DL innovations are introduced.

    Evaluation and comparison: The current DL landscape is fast-paced and rife with non-uniform models and hardware/software (HW/SW) stacks, but it lacks a DL benchmarking platform to facilitate the evaluation and comparison of DL innovations, be they models, frameworks, libraries, or hardware. Without such a platform, the current practice of evaluating the benefits of proposed DL innovations is both arduous and error-prone, stifling their adoption.

    This thesis addresses the above challenges in DL performance engineering. First, we introduce DLBricks, a composable benchmark generation design that reduces the effort of developing, maintaining, and running DL benchmarks. DLBricks decomposes DL models into a set of unique runnable networks and constructs the original model's performance from the performance of the generated benchmarks. Then, we present XSP, an across-stack profiling design that correlates profiles from different sources to obtain a holistic and hierarchical view of DL model execution. XSP innovatively leverages distributed tracing and accurately captures profiles at each level of the HW/SW stack in spite of profiling overhead. Next, we propose Benanza, a systematic DL benchmarking and analysis design that guides researchers to potential optimization opportunities and assesses hypothetical execution scenarios on GPUs. Finally, we design MLModelScope, a consistent, reproducible, and scalable DL benchmarking platform to facilitate the evaluation and comparison of DL innovations. This thesis also briefly discusses TrIMS, TOPS, and CommScope, which were developed based on needs observed in the performance benchmarking and optimization work to solve related problems in the DL domain.
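    A minimal sketch of the DLBricks idea described above (not the authors' code): decompose a model into unique runnable units, benchmark each unique unit once, and compose an estimate of the full model's performance from those pieces. The layer names and latencies are hypothetical stand-ins for real measurements.

```python
# Sketch of composable benchmarking: benchmark each unique layer once,
# then estimate the whole model's latency as the weighted sum of the
# per-layer results, weighted by how often each layer occurs.

from collections import Counter

model_layers = ["conv3x3_64", "relu", "conv3x3_64", "relu",
                "maxpool", "conv3x3_128", "relu", "fc_1000"]

def benchmark(layer):
    # Stand-in for actually timing the layer on the target hardware.
    fake_latency_ms = {"conv3x3_64": 1.8, "conv3x3_128": 3.1,
                       "relu": 0.1, "maxpool": 0.2, "fc_1000": 0.9}
    return fake_latency_ms[layer]

unique = Counter(model_layers)          # benchmark each unique unit once
per_unit = {layer: benchmark(layer) for layer in unique}
estimate = sum(per_unit[layer] * count for layer, count in unique.items())
print(f"estimated latency: {estimate:.1f} ms")
```

    The saving comes from benchmarking only the unique units: a new model built from known layers needs no new benchmark runs at all.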