541 research outputs found

    Cybersecurity: Past, Present and Future

    The digital transformation has created a new digital space known as cyberspace. This new cyberspace has improved the workings of businesses, organizations, governments, society as a whole, and the day-to-day life of individuals. With these improvements come new challenges, and one of the main challenges is security. The security of this new cyberspace is called cybersecurity. Cyberspace has created new technologies and environments such as cloud computing, smart devices, the IoT, and several others. To keep pace with these advancements in cyber technologies, there is a need to expand research and develop new cybersecurity methods and tools to secure these domains and environments. This book is an effort to introduce the reader to the field of cybersecurity, highlight current issues and challenges, and provide future directions to mitigate or resolve them. The main specializations of cybersecurity covered in this book are software security, hardware security, the evolution of malware, biometrics, cyber intelligence, and cyber forensics. We must learn from the past, evolve our present, and improve the future. Based on this objective, the book covers the past, present, and future of these main specializations of cybersecurity. The book also examines upcoming areas of research in cyber intelligence, such as hybrid augmented and explainable artificial intelligence (AI). Human and AI collaboration can significantly increase the performance of a cybersecurity system. Interpreting and explaining machine learning models, i.e., explainable AI, is an emerging field of study with a lot of potential to improve the role of AI in cybersecurity. Comment: Author's copy of the book published under ISBN: 978-620-4-74421-

    The Role of a Microservice Architecture in cybersecurity and operational resilience in critical systems

    Critical systems are characterized by their intolerance to threats, in other words, by the high level of resilience they require: depending on the context in which a system operates, the slightest failure can imply significant damage, whether in economic terms or in loss of reputation, information, infrastructure, the environment, or human life. The security of such systems is traditionally associated with legacy infrastructures and data centers that are monolithic, which translates into increasingly demanding evolution and protection challenges. In the current context of rapid transformation, where the variety of threats to systems has been consistently increasing, this dissertation carries out a compatibility study of the microservice architecture, which is noted for characteristics such as resilience, scalability, modifiability, and technological heterogeneity; it is flexible under structural adaptation and in rapidly evolving, highly complex settings, which makes it well suited to agile environments. The dissertation also explores what response artificial intelligence, more specifically machine learning, can provide for security and monitorability when combined with a simple banking system that adopts the microservice architecture.
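    The dissertation does not name a concrete machine-learning technique for the security-monitoring part; purely as an illustrative sketch (not the dissertation's method), unsupervised anomaly detection over per-request metrics of a hypothetical banking microservice could look like this, using scikit-learn's IsolationForest:

```python
# Illustrative only: the dissertation does not specify a model. Flag
# anomalous requests to a (hypothetical) banking microservice from
# simple per-request features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Invented feature vectors: [latency_ms, payload_bytes, auth_failures_per_min]
normal = rng.normal(loc=[40.0, 900.0, 0.2], scale=[8.0, 150.0, 0.4], size=(500, 3))
suspect = np.array([[400.0, 52000.0, 9.0]])  # e.g. an abuse-like pattern

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspect))  # -1 marks an anomaly, +1 an inlier
```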

    Energy Concerns with HPC Systems and Applications

    For various reasons, including those related to climate change, energy has become a critical concern in all relevant activities and technical designs. For the specific case of computing, the problem is exacerbated by the emergence and pervasiveness of so-called intelligent devices. From the application side, we point out the special topic of Artificial Intelligence, which clearly needs efficient computing support in order to succeed in its purpose of being a ubiquitous assistant. There are mainly two contexts where energy is a top-priority concern: embedded computing and supercomputing. For the former, power consumption is critical because the amount of energy available to the devices is limited. For the latter, the heat dissipated is a serious source of failure, and the financial cost of energy is likely to be a significant part of the maintenance budget. On a single computer, the problem is commonly considered through electrical power consumption. In this paper, written in the form of a survey, we depict the landscape of energy concerns in computing, from both the hardware and the software standpoints. Comment: 20 pages
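    Since the survey frames the single-machine view through electrical power consumption, a tiny worked example of the underlying arithmetic, energy as the discrete time-integral of sampled power, may help; all sample values below are invented:

```python
# Energy as the discrete time-integral of sampled power: E = sum(p_i * dt).
# Sample readings are invented for illustration.
samples_watts = [95.0, 140.0, 310.0, 305.0, 120.0]  # e.g. RAPL-style samples
dt_seconds = 0.5                                     # sampling interval

energy_joules = sum(p * dt_seconds for p in samples_watts)
print(f"{energy_joules:.1f} J over {len(samples_watts) * dt_seconds:.1f} s")
```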

    Architecture and Advanced Electronics Pathways Toward Highly Adaptive Energy-Efficient Computing

    With the explosion of the number of compute nodes, the bottleneck of future computing systems lies in the network architecture connecting the nodes. Addressing this bottleneck requires replacing current backplane-based network topologies. We propose to revolutionize computing electronics by realizing embedded optical waveguides for onboard networking and wireless chip-to-chip links at a 200-GHz carrier frequency connecting neighboring boards in a rack. The control of novel rate-adaptive optical and mm-wave transceivers needs tight interlinking with the system software for runtime resource management.
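    The abstract leaves the transceiver control policy open; one minimal, hypothetical sketch of runtime rate adaptation (the SNR thresholds and data rates are invented) could be:

```python
# Hypothetical runtime rate-adaptation policy for a rate-adaptive
# optical/mm-wave link. Thresholds and rates are invented for illustration.
RATE_TABLE = [  # (min_snr_db, gbit_per_s), highest rate first
    (24.0, 100.0),
    (18.0, 50.0),
    (12.0, 25.0),
    (0.0, 10.0),  # most robust fallback rate
]

def select_rate(snr_db: float) -> float:
    """Pick the highest rate whose SNR requirement the link currently meets."""
    for min_snr, rate in RATE_TABLE:
        if snr_db >= min_snr:
            return rate
    return RATE_TABLE[-1][1]  # below all thresholds: fall back

print(select_rate(19.5))  # -> 50.0
```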

    Transactional memory for high-performance embedded systems

    The increasing demand for computational power in embedded systems, required for tasks such as autonomous driving, can only be met by exploiting the resources offered by modern hardware. Due to physical limitations, hardware manufacturers have moved to increasing the number of cores per processor instead of further increasing clock rates. Therefore, in our view, the additionally required computing power can only be achieved by exploiting parallelism. Unfortunately, writing parallel code is considered a difficult and complex task. Hardware Transactional Memories (HTMs) are a suitable tool for writing sophisticated parallel software. However, HTMs were not specifically developed for embedded systems and therefore cannot be used without consideration. The use of conventional HTMs increases complexity and makes it more difficult to foresee interactions with other important properties of embedded systems. This thesis therefore describes how an HTM for embedded systems can be implemented. The HTM was designed to allow the parallel execution of software and to offer functionality that is useful for embedded systems. The focus lay on: eliminating the typical limitations of conventional HTMs, providing several conflict resolution mechanisms, investigating real-time behavior, and offering a feature to conserve energy. To enable the desired functionality, the structure of the HTM described in this work differs strongly from that of a conventional HTM. Compared to the baseline HTM, which was also designed and implemented in this thesis, the biggest adaptation concerns conflict detection: it was modified so that conflicts can be detected and resolved centrally. For this, the cache hierarchy as well as the cache coherence had to be adapted and partially extended. The system was implemented in the cycle-accurate gem5 simulator. The eight benchmarks of the STAMP benchmark suite were used for evaluation. The evaluation of the various functionalities shows that the mechanisms work and add value for operation in embedded systems.
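    The thesis's hardware mechanisms cannot be reproduced from the abstract, but the programming model HTMs provide (optimistic execution, conflict detection, abort and retry) can be sketched as a software analogue; everything below is a simplification for illustration, not the thesis's design:

```python
# Software analogue of the transactional programming model: run the
# critical section optimistically, detect conflicts via a version
# counter, and retry on abort. Real HTMs do this in hardware.
import threading

class VersionedCell:
    def __init__(self, value=0):
        self.value, self.version = value, 0
        self._lock = threading.Lock()

    def transact(self, update, max_retries=8):
        for _ in range(max_retries):
            snap_val, snap_ver = self.value, self.version  # "tx begin"
            new_val = update(snap_val)                     # speculative work
            with self._lock:                               # commit point
                if self.version == snap_ver:               # no conflicting writer
                    self.value, self.version = new_val, snap_ver + 1
                    return new_val
            # version moved on: conflict detected, abort and retry
        raise RuntimeError("transaction kept aborting")

cell = VersionedCell()
print(cell.transact(lambda v: v + 1))  # -> 1
```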

    A multi-level functional IR with rewrites for higher-level synthesis of accelerators

    Specialised accelerators deliver orders of magnitude higher energy efficiency than general-purpose processors. Field Programmable Gate Arrays (FPGAs) have become the substrate of choice, because the ever-changing nature of modern workloads, such as machine learning, demands reconfigurability. However, they are notoriously hard to program directly using Hardware Description Languages (HDLs). Traditional High-Level Synthesis (HLS) tools improve productivity, but come with their own problems. They often produce sub-optimal designs, and programmers are still required to write hardware-specific code, so development cycles remain long. This thesis proposes Shir, a higher-level synthesis approach for high-performance accelerator design with a hardware-agnostic programming entry point, a multi-level Intermediate Representation (IR), a compiler, and rewrite rules for optimisation. First, a novel, multi-level functional IR structure for accelerator design is described. The IRs operate on different levels of abstraction, cleanly separating different hardware concerns. They enable the expression of different forms of parallelism and standard memory features, such as asynchronous off-chip memories or synchronous on-chip buffers, as well as arbitration of such shared resources. Exposing these features at the IR level is essential for achieving high performance. Next, mechanical lowering procedures are introduced to automatically compile a program specification through Shir’s functional IRs until low-level HDL code for FPGA synthesis is emitted. Each lowering step gradually adds implementation details. Finally, this thesis presents rewrite rules for automatic optimisations around parallelisation, buffering and data reshaping. Reshaping operations pose a challenge to functional approaches in particular. They introduce overheads that compromise performance or even prevent the generation of synthesisable hardware designs altogether. This fundamental issue is solved by the application of rewrite rules. The viability of this approach is demonstrated by running matrix multiplication and 2D convolution on an Intel Arria 10 FPGA. A limited design space exploration is conducted, confirming the ability of the IR to exploit various hardware features. Using rewrite rules for optimisation, it is possible to generate high-performance designs that are competitive with highly tuned OpenCL implementations and that outperform hardware-agnostic OpenCL code. The performance impact of the optimisations is further evaluated, showing that they are essential to achieving high performance, and in many cases also necessary to produce hardware that fits the resource constraints.
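    Shir's IRs and rules are not given in the abstract, but the core idea of an optimisation expressed as a rewrite rule over a functional IR can be shown generically; the toy IR and the classic map-fusion rule below are assumptions for illustration, not Shir's actual rules:

```python
# Toy functional IR with one classic rewrite rule, map fusion:
#   Map(f, Map(g, xs))  ->  Map(f . g, xs)
# Purely illustrative; Shir's IRs are richer and hardware-oriented.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Map:
    fn: Callable[[Any], Any]
    arg: Any  # another IR node or a concrete list

def fuse_maps(node):
    """Apply map fusion bottom-up wherever the pattern matches."""
    if isinstance(node, Map):
        inner = fuse_maps(node.arg)
        if isinstance(inner, Map):
            f, g = node.fn, inner.fn
            return Map(lambda x: f(g(x)), inner.arg)  # fuse the two maps
        return Map(node.fn, inner)
    return node

def evaluate(node):
    return [node.fn(x) for x in evaluate(node.arg)] if isinstance(node, Map) else node

prog = Map(lambda x: x + 1, Map(lambda x: x * 2, [1, 2, 3]))
print(evaluate(fuse_maps(prog)))  # -> [3, 5, 7] in a single traversal
```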

    Multiscale visualization approaches for Volunteered Geographic Information and Location-based Social Media

    Today, “zoomable” maps are a state-of-the-art way to explore the world, available to anyone with Internet access. However, the process of creating such visualizations has been rather loosely investigated and documented. With an increasing amount of available data, interactive maps have become an integral approach to visualizing and exploring big datasets and user-generated data. OpenStreetMap and online platforms such as Twitter and Flickr offer application programming interfaces (APIs) with geographic information and are well-known examples of this visualization challenge. In addition, a growing number of public administrations collect and publish open datasets, which makes the task of visualization even more relevant. This dissertation deals with the visualization of user-generated geodata as multiscale maps. The basics of today’s multiscale maps, their history, technologies, and possibilities, are explored and abstracted. This work introduces two new multiscale-focused visualization approaches for point data from volunteered geographic information (VGI) and location-based social media (LBSM). One contribution is a visualization methodology for spatially referenced information in the form of point geometries, using nominally scaled data from social media such as Twitter or Flickr. Typical for this data is a high number of social media posts in different categories, where a post corresponds to a point in a specific category. Due to their sheer quantity and similar characteristics, the posts appear generic rather than unique. This type of dataset can be explored using the new method of micro diagrams, which visualizes the dataset at multiple scales and resolutions. The data is aggregated into small grid cells, and the numerical proportions are shown with small diagrams whose colors depict the categories and which can visually merge into heterogeneous areas. The diagram sizes let the user estimate the overall number of aggregated points in a grid cell. A different visualization approach, based on a selection method, is proposed for more unique points, considered points of interest (POIs). The goal is to identify the locally most relevant points in the dataset, those more important than other points in their neighborhood when compared by a numerical attribute. The measure, derived from topographic isolation and called discrete isolation, is the distance from one point to the nearest point with a higher attribute value. Using this measure, the most essential points can be selected simply by choosing a minimum distance, producing a homogeneous spatial distribution of the selected points within the chosen dataset. The two newly developed approaches are applied to multiscale mapping by constructing example workflows that produce multiscale maps. The publicly available multiscale mapping workflows OpenMapTiles and OpenStreetMap Carto, which use OpenStreetMap data, are systematically explored and analyzed. The result is a general workflow for multiscale map production and a short overview of the toolchain software. In particular, the generalization approaches in the example projects are discussed and classified into cartographic theories on the basis of the literature. The workflow is demonstrated by building a raster tile service for the micro diagrams and a vector tile service for the discrete isolation, both usable with just a web browser.
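    A minimal sketch of the aggregation step behind the micro diagrams described above (grid size and sample posts invented, rendering omitted):

```python
# Aggregation behind micro diagrams: bin categorised points (e.g.
# social-media posts) into regular grid cells and count per category.
# Cell size and sample data are invented; the diagram rendering is omitted.
from collections import Counter, defaultdict

posts = [  # (longitude, latitude, category)
    (13.73, 51.05, "photo"), (13.74, 51.05, "photo"),
    (13.74, 51.06, "text"), (13.41, 52.52, "video"),
]
cell_deg = 0.05  # grid resolution; each zoom level would pick its own

cells = defaultdict(Counter)
for lon, lat, cat in posts:
    cell = (int(lon // cell_deg), int(lat // cell_deg))
    cells[cell][cat] += 1

for cell, counts in cells.items():
    total = sum(counts.values())      # drives the diagram's overall size
    print(cell, total, dict(counts))  # category shares drive the colors
```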
    In conclusion, these new approaches for point data from VGI and LBSM allow better qualitative visualization of geodata. While analyzing vast global datasets remains challenging, exploring and analyzing hidden patterns in the data is fruitful. Creating this degree of visualization and producing maps at multiple scales is a complicated task; the workflows and tools provided in this thesis make map production on a worldwide scale easier.
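    The discrete-isolation measure introduced in this thesis, the distance from a point to the nearest point with a higher attribute value, is simple enough to sketch directly; the brute-force version below is quadratic and uses invented sample data:

```python
# Discrete isolation: for each point, the distance to the nearest point
# with a strictly higher attribute value. Points whose isolation exceeds
# a chosen minimum distance become the selected POIs. O(n^2) brute force.
import math

points = [  # (x, y, attribute_value, name); invented sample data
    (0.0, 0.0, 90, "A"), (1.0, 0.0, 40, "B"),
    (5.0, 5.0, 70, "C"), (5.2, 5.1, 20, "D"),
]

def discrete_isolation(p, others):
    higher = [q for q in others if q[2] > p[2]]
    if not higher:
        return math.inf  # global maximum: isolated by definition
    return min(math.dist(p[:2], q[:2]) for q in higher)

min_distance = 2.0  # selection knob: larger values thin out the selection
selected = [p[3] for p in points
            if discrete_isolation(p, [q for q in points if q is not p]) >= min_distance]
print(selected)  # -> ['A', 'C']
```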

    Flexible Hardware-based Security-aware Mechanisms and Architectures

    For decades, software security has been the primary focus in securing our computing platforms. Hardware was always assumed trusted and inherently served as the foundation, and thus the root of trust, of our systems. This has been further leveraged in developing hardware-based dedicated security extensions and architectures to protect software from attacks exploiting software vulnerabilities such as memory corruption. However, the recent outbreak of microarchitectural attacks has shaken these long-established trust assumptions in hardware entirely, thereby threatening the security of all of our computing platforms and bringing hardware and microarchitectural security under scrutiny. These attacks have revealed the grave consequences of hardware and microarchitectural security flaws for platform security as a whole, and how they can even subvert the security guarantees promised by dedicated security architectures. Furthermore, they shed light on the challenges particular to hardware and microarchitectural security: it is more critical, and more challenging, to extensively analyze hardware for security flaws prior to production, since hardware, unlike software, cannot be patched or updated once fabricated. Hardware can no longer reliably serve as the root of trust unless we develop and adopt new design paradigms in which security is proactively addressed and scrutinized across the full stack of our computing platforms, at all hardware design and implementation layers. Novel flexible security-aware design mechanisms must also be incorporated into processor microarchitecture and hardware-assisted security architectures to practically address the inherent conflict between performance and security, by allowing the trade-off to be configured to the desired requirements. In this thesis, we investigate the prospects and implications at the intersection of hardware and security across the full stack of our computing platforms and Systems-on-Chip (SoCs). On one front, we investigate how we can leverage hardware and its advantages over software to build more efficient and effective security extensions that serve security architectures, e.g., by providing execution attestation and enforcement, to protect software from attacks exploiting software vulnerabilities. We further propose that these extensions be microarchitecturally configurable at runtime to provide different types of security services, thus adapting flexibly to different deployment requirements. On another front, we investigate how we can protect these hardware-assisted security architectures and extensions themselves from microarchitectural and software attacks that exploit design flaws originating in the hardware, e.g., insecure resource sharing in SoCs. In particular, we focus on cache-based side-channel attacks, for which we propose cache designs that fundamentally mitigate these attacks while preserving performance by making the performance/security trade-off configurable by design. We also investigate how these designs can be incorporated into flexible and customizable security architectures, complementing them to further support a wide spectrum of emerging applications with different performance/security requirements. Lastly, we inspect our computing platforms beneath the design layer, scrutinizing how the actual implementation of these mechanisms is yet another potential attack surface. We explore how the security of hardware designs and implementations is currently analyzed prior to fabrication, shedding light on how state-of-the-art hardware security analysis techniques are fundamentally limited and on the potential for improved and scalable approaches.
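    The thesis's concrete cache designs are not disclosed in this abstract; as one generic illustration of a design whose security/performance trade-off is fixed by construction, the sketch below statically partitions the ways of a set-associative cache between security domains, a standard mitigation idea rather than necessarily the proposed one:

```python
# Generic illustration: statically partition the ways of a set-associative
# cache between security domains so one domain cannot evict another's
# lines (removes the cross-domain eviction channel; not necessarily the
# thesis's mechanism).
NUM_SETS, NUM_WAYS = 64, 8
WAYS_BY_DOMAIN = {"trusted": range(0, 4), "untrusted": range(4, 8)}

# cache[set_index][way] -> cached tag (None when empty)
cache = [[None] * NUM_WAYS for _ in range(NUM_SETS)]

def access(domain: str, addr: int) -> bool:
    """Return True on hit; on a miss, fill only within the domain's ways."""
    set_idx, tag = addr % NUM_SETS, addr // NUM_SETS
    ways = WAYS_BY_DOMAIN[domain]
    if any(cache[set_idx][w] == tag for w in ways):
        return True
    cache[set_idx][min(ways)] = tag  # toy replacement: always the first way
    return False

access("untrusted", 0x1040)
print(access("trusted", 0x1040))  # False: domains do not share ways
```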

    Applying Hypervisor-Based Fault Tolerance Techniques to Safety-Critical Embedded Systems

    This document details the work conducted in the development of this thesis, and it is structured as follows:
    • Chapter 1, Introduction, briefly presents the motivation, objectives, and contributions of this thesis.
    • Chapter 2, Fundamentals, presents a series of concepts that are necessary to correctly understand the information in the rest of the thesis, such as virtualization, hypervisors, and software-based fault tolerance. In addition, this chapter includes an exhaustive review and comparison of the different hypervisors used in scientific studies dealing with safety-critical systems, and a brief review of works that try to improve fault tolerance in the hypervisor itself, an area of research that is outside the scope of this work but that complements the mechanism presented and could be established as a line of future work.
    • Chapter 3, Problem Statement and Related Work, explains the main reasons why the concept of Hypervisor-Based Fault Tolerance was born and reviews the main articles and research papers on the subject. This review includes both papers related to safety-critical embedded systems (such as the research carried out in this thesis) and papers related to cloud servers and cluster computing that, although not directly applicable to embedded systems, raise useful concepts that make our solution more complete or allow us to establish future lines of work.
    • Chapter 4, Proposed Solution, begins with a brief comparison of the work presented in Chapter 3 to establish the requirements that our solution must meet in order to be as complete and innovative as possible. It then sets out the architecture of the proposed solution and explains in detail its two main elements: the Voter and the Health Monitoring partition.
    • Chapter 5, Prototype, explains in detail the prototyping of the proposed solution, including the choice of the hypervisor, the processing board, and the critical functionality to be made redundant. With respect to the Voter, it includes prototypes for both the software version (the Voter implemented in a virtual machine) and the hardware version (the Voter implemented as IP cores on the FPGA).
    • Chapter 6, Evaluation, covers the evaluation of the prototype developed in Chapter 5. As a preliminary step, and given that there is no prior evidence in this regard, an exercise is carried out to measure the overhead of using the XtratuM hypervisor versus not using it. Subsequently, qualitative tests check that Health Monitoring works as expected, and a fault injection campaign measures the error detection and correction rate of our solution. Finally, the performance of the hardware and software versions of the Voter is compared.
    • Chapter 7, Conclusions and Future Work, collects the conclusions obtained and the contributions made during the research (in the form of journal articles, conference papers, and contributions to projects and proposals in industry). In addition, it establishes some lines of future work that could complete and extend the research carried out during this doctoral thesis.
    Programa de Doctorado en Ciencia y Tecnología Informática por la Universidad Carlos III de Madrid. Presidente: Katzalin Olcoz Herrero. Secretario: Félix García Carballeira. Vocal: Santiago Rodríguez de la Fuente.
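    The Voter compares the outputs of redundant executions of the critical functionality; the standard mechanism behind such schemes is a 2-out-of-3 majority vote, sketched below (the thesis's Voter, whether VM-based or as FPGA IP cores, is naturally more involved):

```python
# Minimal 2-out-of-3 majority voter, the standard voting step behind
# triple-modular-redundant execution; only illustrates the principle.
from collections import Counter

def vote(replica_outputs):
    """Return (voted_value, fault_detected) for the replicas' outputs."""
    value, count = Counter(replica_outputs).most_common(1)[0]
    if count * 2 <= len(replica_outputs):
        raise RuntimeError("no majority: uncorrectable disagreement")
    return value, count != len(replica_outputs)

print(vote([42, 42, 42]))  # -> (42, False): all replicas agree
print(vote([42, 7, 42]))   # -> (42, True): one faulty replica outvoted
```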

    Data Management for Dynamic Multimedia Analytics and Retrieval

    Multimedia data in its various manifestations poses a unique challenge from a data storage and data management perspective, especially if search, analysis, and analytics in large data corpora are considered. The inherently unstructured nature of the data itself, and the curse of dimensionality that afflicts the representations we typically work with in its stead, cause a broad range of issues that require sophisticated solutions at different levels. This has given rise to a huge corpus of research focused on techniques that allow for effective and efficient multimedia search and exploration. Many of these contributions have led to an array of purpose-built multimedia search systems. However, recent progress in multimedia analytics and interactive multimedia retrieval has demonstrated that several of the assumptions usually made for such multimedia search workloads do not hold once a session has a human user in the loop. Firstly, many of the required query operations cannot be expressed by mere similarity search, and since the concrete requirement cannot always be anticipated, one needs a flexible and adaptable data management and query framework. Secondly, the widespread notion of static data collections does not hold for analytics workloads, whose purpose is to produce and store new insights and information. And finally, it is impossible even for an expert user to specify exactly how a data management system should produce and arrive at the desired outcomes of the potentially many different queries. Guided by these shortcomings, and motivated by the fact that similar questions were once answered for structured data in classical database research, this thesis presents three contributions that seek to mitigate the aforementioned issues. We present a query model that generalises the notion of proximity-based query operations and formalises the connection between those queries and high-dimensional indexing. We complement this with a cost model that makes the often implicit trade-off between query execution speed and result quality transparent to the system and the user. And we describe a model for the transactional and durable maintenance of high-dimensional index structures. All contributions are implemented in the open-source multimedia database system Cottontail DB, on top of which we present an evaluation that demonstrates the effectiveness of the proposed models. We conclude by discussing avenues for future research in the quest to converge the fields of databases on the one hand and (interactive) multimedia retrieval and analytics on the other.
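    The speed/quality trade-off that the cost model makes explicit can be illustrated with a toy proximity query: an approximate k-nearest-neighbour scan whose budget parameter trades result quality for execution time (invented here; Cottontail DB's query and cost models are considerably richer):

```python
# Toy proximity query with an explicit speed/quality knob: scan only a
# budgeted fraction of the collection, so a smaller budget is faster but
# may miss true neighbours. Invented for illustration only.
import math, random

random.seed(0)
vectors = [[random.random() for _ in range(8)] for _ in range(10_000)]
query = [0.5] * 8

def knn(query, data, k=3, budget=1.0):
    """budget in (0, 1]: fraction of the collection actually scanned."""
    n = max(k, int(len(data) * budget))
    candidates = data[:n]  # a real system would pick candidates via an index
    return sorted(candidates, key=lambda v: math.dist(v, query))[:k]

exact = knn(query, vectors, budget=1.0)   # slow, exact
cheap = knn(query, vectors, budget=0.05)  # fast, possibly approximate
print(math.dist(exact[0], query) <= math.dist(cheap[0], query))  # always True
```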