12 research outputs found

    A New Cryptographic Encryption Approach for Cloud Environment

    Cloud security and trust management are important issues in the cloud environment. Cloud computing is the result of the evolution of virtualization, service-oriented design, and the widespread adoption of autonomic and utility computing. Today, cloud computing is one of the fastest-growing areas of technology and commands a global service-oriented market, so cloud service providers and cloud consumers need to maintain trust between them. As with the security procedures of traditional IT systems, designing security into cloud software during the software development life cycle can greatly reduce the cloud attack surface. With cloud computing providing Security as a Service (SaaS), security software is an important issue. From a cloud customer's perspective, using security services in the cloud reduces the need for in-house security software development; those requirements are transferred to the cloud provider. This work proposes a new security and trust management algorithm for the cloud environment, a cryptosystem that improves on single-alphabet (monoalphabetic) substitution using the concept of a multi-letter cipher. Encryption converts plaintext to ciphertext, and decryption converts ciphertext back to plaintext. The algorithm's power consumption, encryption and decryption throughput, and security analysis are also presented.
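
    The abstract does not specify the cipher's construction; as a minimal illustration of the multi-letter (polyalphabetic) idea it builds on, the following Python sketch implements a Vigenère-style cipher in which the shift applied to each letter varies with its position:

        # Polyalphabetic (Vigenere-style) cipher: each plaintext letter is
        # shifted by a different key letter, so identical plaintext letters
        # map to different ciphertext letters depending on their position.
        ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

        def encrypt(plaintext: str, key: str) -> str:
            out = []
            for i, ch in enumerate(plaintext.upper()):
                if ch not in ALPHABET:
                    out.append(ch)        # pass non-letters through unchanged
                    continue
                shift = ALPHABET.index(key[i % len(key)].upper())
                out.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26])
            return "".join(out)

        def decrypt(ciphertext: str, key: str) -> str:
            out = []
            for i, ch in enumerate(ciphertext.upper()):
                if ch not in ALPHABET:
                    out.append(ch)
                    continue
                shift = ALPHABET.index(key[i % len(key)].upper())
                out.append(ALPHABET[(ALPHABET.index(ch) - shift) % 26])
            return "".join(out)

        assert decrypt(encrypt("CLOUD SECURITY", "KEY"), "KEY") == "CLOUD SECURITY"

    Unlike a monoalphabetic substitution, the same letter no longer encrypts to a fixed ciphertext letter, which defeats simple frequency analysis.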

    A Secure Mechanism for Big Data Collection in Large Scale Internet of Vehicle

    As an extension of the Internet of Things (IoT), the Internet of Vehicles (IoV) achieves unified management in the smart transportation area. With the development of IoV, an increasing number of vehicles are connected to the network. A large-scale IoV collects data from different places and with various attributes, conforming to the heterogeneous nature of big data in size, volume, and dimensionality. Big data collection between vehicles and the application platform becomes increasingly frequent over various communication technologies, exposing it to evolving security attacks. However, existing IoT protocols cannot be directly applied to big data collection in a large-scale IoV. The dynamic network structure and the growing number of vehicle nodes increase both the complexity and the necessity of a secure mechanism. In this paper, a secure mechanism for big data collection in large-scale IoV is proposed to improve security and efficiency. To begin with, vehicles register with the big data center to join the network. Afterwards, vehicles associate with the big data center via mutual authentication and a single sign-on algorithm. Two different secure protocols are proposed for business data and confidential data collection. The collected big data is stored securely using distributed storage. The discussion and performance evaluation results demonstrate the security and efficiency of the proposed mechanism.
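
    The abstract leaves the authentication details to the paper body; the sketch below shows a generic challenge-response mutual authentication between a vehicle and the big data center, assuming a secret key shared at registration (illustrative only, not the proposed protocol or its single sign-on scheme):

        # Generic HMAC-based challenge-response: each side proves knowledge
        # of the registration key without sending the key over the network.
        import hashlib
        import hmac
        import os

        def prove(key: bytes, challenge: bytes) -> bytes:
            return hmac.new(key, challenge, hashlib.sha256).digest()

        shared_key = os.urandom(32)          # established during registration

        # The big data center authenticates the vehicle.
        center_challenge = os.urandom(16)
        vehicle_response = prove(shared_key, center_challenge)
        assert hmac.compare_digest(vehicle_response,
                                   prove(shared_key, center_challenge))

        # The vehicle authenticates the center with its own fresh challenge.
        vehicle_challenge = os.urandom(16)
        center_response = prove(shared_key, vehicle_challenge)
        assert hmac.compare_digest(center_response,
                                   prove(shared_key, vehicle_challenge))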

    Edge-MapReduce-Based Intelligent Information-Centric IoV: Cognitive Route Planning

    With the rapid development of autonomous vehicles (AVs), vehicles have become important intelligent objects in the Smart City. Vehicles generate huge amounts of data for the Intelligent Transportation System (ITS) and, at the same time, introduce new application requirements. However, it is difficult to acquire and analyze such massive data and to provide accurate application services for AVs. With traffic volumes exploding, vehicle route planning has become a pressing issue. To address this problem, we introduce a content-friendly information-centric networking (ICN) architecture into the Internet of Vehicles (IoV) and achieve efficient route planning for AVs through the big data acquisition and analysis architecture in ICN. We use the network's analytical capabilities to achieve active cognitive access to traffic data, and we use game theory to build an incentive mechanism for task distribution and information sharing. Finally, the simulation results show that the method is effective.
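
    The abstract does not state the planning algorithm itself; as one illustration of congestion-aware route planning, the sketch below runs Dijkstra's algorithm over a hypothetical road graph whose edge costs are travel times scaled by congestion factors (a stand-in, not the paper's ICN-based cognitive method):

        # Dijkstra over a road graph; edge cost = base travel time scaled
        # by a congestion factor obtained from collected traffic data.
        import heapq

        def plan_route(graph, src, dst):
            # graph: node -> list of (neighbor, base_time, congestion_factor)
            dist, prev = {src: 0.0}, {}
            heap = [(0.0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if u == dst:
                    break
                if d > dist.get(u, float("inf")):
                    continue                    # stale heap entry
                for v, t, c in graph.get(u, []):
                    nd = d + t * c
                    if nd < dist.get(v, float("inf")):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(heap, (nd, v))
            path, node = [], dst
            while node != src:                  # walk predecessors back to src
                path.append(node)
                node = prev[node]
            return [src] + path[::-1]

        roads = {"A": [("B", 5, 1.0), ("C", 2, 2.5)], "B": [("D", 4, 1.2)],
                 "C": [("D", 3, 1.0)], "D": []}
        print(plan_route(roads, "A", "D"))      # -> ['A', 'C', 'D']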

    Preserving Edge Knowledge Sharing Among IoT Services: A Blockchain-Based Approach

    Edge computational intelligence, integrating artificial intelligence (AI) and edge computing into the Internet of Things (IoT), will generate a great deal of scattered knowledge. To enable auditable and delay-sensitive IoT services, this knowledge will be shared frequently among decentralized intelligent network edges (DINEs), end users, and supervisors. Blockchain has a promising ability to provide a traceable, privacy-preserving, and tamper-resistant ledger for sharing edge knowledge. However, due to the complicated environments of network edges, knowledge sharing among DINEs still faces many challenges. Firstly, the resource limitations and mobility of DINEs impede the applicability of existing blockchain consensus mechanisms (e.g., Proof of Work, Proof of Stake, and Paxos). Secondly, adversaries may eavesdrop on the content of edge knowledge or force the blockchain into forks using various attack models (such as man-in-the-middle and denial-of-service attacks). In this article, a user-centric blockchain (UCB) framework is proposed for preserving edge knowledge sharing in IoT. UCB's main advantages stem from its proof-of-popularity (PoP) consensus mechanism, which is faster and more energy-efficient. Security analysis and experiments on Raspberry Pi 3 Model B devices demonstrate its feasibility, with low block-generation delay and complexity.
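
    The abstract does not detail the PoP mechanism, but the tamper-resistance claim rests on the standard hash-linked ledger structure; a minimal, generic Python sketch follows (no consensus logic, names are illustrative):

        # Hash-linked ledger: each block commits to the hash of its
        # predecessor, so editing any earlier block breaks the chain.
        import hashlib
        import json
        import time

        def block_hash(block: dict) -> str:
            payload = json.dumps(block, sort_keys=True).encode()
            return hashlib.sha256(payload).hexdigest()

        def append_block(chain: list, knowledge: dict) -> None:
            chain.append({
                "index": len(chain),
                "timestamp": time.time(),
                "knowledge": knowledge,       # the shared edge knowledge
                "prev_hash": block_hash(chain[-1]) if chain else "0" * 64,
            })

        def verify(chain: list) -> bool:
            return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
                       for i in range(1, len(chain)))

        chain = []
        append_block(chain, {"model": "anomaly-detector", "version": 1})
        append_block(chain, {"model": "anomaly-detector", "version": 2})
        assert verify(chain)
        chain[0]["knowledge"]["version"] = 99   # tampering with old knowledge...
        assert not verify(chain)                # ...is detected immediately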

    Polymer-based additive manufacturing: process optimisation for low-cost industrial robotics manufacture

    The robotics design process can be complex, potentially involving multiple design iterations. 3D printing is ideal for rapid prototyping and has conventionally been utilised in concept development and for exploring different design parameters that are ultimately used to meet an intended application or routine. During the initial stage of robot development, exploiting 3D printing can provide design freedom, customisation, and sustainability, and can ultimately lead to direct cost benefits. Traditionally, robot specifications are selected on the basis of being able to deliver a specific task. However, a robot specified by design parameters linked to a distinctive task can be developed quickly, inexpensively, and with little overall risk by utilising a 3D printing process. Numerous factors are inevitably important in the design of industrial robots using polymer-based additive manufacturing; however, the extensive range of new polymer-based additive manufacturing techniques and materials could provide significant benefits for future robotics design and development.

    Schedulability, Response Time Analysis and New Models of P-FRP Systems

    Functional Reactive Programming (FRP) is a declarative approach for modeling and building reactive systems. FRP has been shown to be an expressive formalism for building applications in computer graphics, computer vision, robotics, and other domains. Priority-based FRP (P-FRP) is a formalism that allows preemption of executing programs and guarantees real-time response. Since functional programs cannot maintain state and mutable data, changes made by programs that are preempted have to be rolled back. Hence in P-FRP, a higher-priority task can preempt the execution of a lower-priority task, but the preempted lower-priority task has to restart after the higher-priority task has completed execution. This execution paradigm is called Abort-and-Restart (AR). Current real-time research is focused on preemptive or non-preemptive models of execution, and several state-of-the-art methods have been developed to analyze the real-time guarantees of these models. Unfortunately, due to its transactional nature, where preempted tasks are aborted and have to restart, the execution semantics of P-FRP does not fit the standard definitions of preemptive or non-preemptive execution, and research on standard preemptive and non-preemptive models may not be applicable to the P-FRP AR model. Among the many research areas that P-FRP opens up, we focus on task scheduling, which includes task and system modeling, priority assignment, schedulability analysis, response time analysis, improved P-FRP AR models, algorithms, and corresponding software. In this work, we review existing results on P-FRP task scheduling and then present our research contributions: (1) a tighter feasibility test interval regarding task release offsets, as well as a linked-list-based algorithm and implementation for scheduling simulation; (2) P-FRP with software transactional memory with lazy conflict detection (STM-LCD); (3) a non-work-conserving scheduling model called Deferred Start; (4) a multi-mode P-FRP task model; (5) SimSo-PFRP, the P-FRP extension of SimSo, a SimPy-based, highly extensible and user-friendly task generator and task-scheduling simulator.
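
    Because AR differs subtly from ordinary preemption, a small simulation makes the semantics concrete. The Python sketch below (task set and scheduler structure are hypothetical, not the thesis's models) runs fixed-priority tasks in which an aborted task loses all progress and must re-run its full worst-case execution time (WCET):

        def simulate_ar(tasks, horizon):
            # tasks: {name: (priority, wcet, period)}; lower number = higher priority.
            releases = sorted((t, name) for name, (_, _, p) in tasks.items()
                              for t in range(0, horizon, p))
            ready, log, now = [], [], 0   # ready entries: [priority, name, remaining]
            while releases or ready:
                if not ready:
                    now = releases[0][0]          # idle until the next release
                while releases and releases[0][0] <= now:
                    _, name = releases.pop(0)
                    ready.append([tasks[name][0], name, tasks[name][1]])
                    ready.sort()
                    for entry in ready[1:]:       # AR: any task not at the head
                        entry[2] = tasks[entry[1]][1]   # has no saved progress
                prio, name, remaining = ready[0]
                next_rel = releases[0][0] if releases else float("inf")
                if now + remaining <= next_rel:   # head task finishes undisturbed
                    now += remaining
                    ready.pop(0)
                    log.append((name, now))
                else:                             # run until the release, re-check
                    ready[0][2] -= next_rel - now
                    now = next_rel
            return log

        # "lo" is aborted at t = 4 and t = 8 and only completes at t = 15,
        # once it finds a window of 5 uninterrupted time units.
        print(simulate_ar({"hi": (0, 2, 4), "lo": (1, 5, 12)}, 12))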

    Vectorization system for unstructured codes with a Data-parallel Compiler IR

    With Dennard scaling coming to an end, Single Instruction Multiple Data (SIMD) offers itself as a way to improve the compute throughput of CPUs. One fundamental technique in SIMD code generators is the vectorization of data-parallel code regions. This has applications in outer-loop vectorization, whole-function vectorization, and the vectorization of explicitly data-parallel languages. This thesis makes contributions to the reliable vectorization of data-parallel code regions with unstructured, reducible control flow. Reducibility means that every control-flow loop has exactly one entry point, which is the case in practice. We present P-LLVM, a novel, full-featured intermediate representation for vectorizers that provides a semantics for the code region at every stage of the vectorization pipeline. Partial control-flow linearization is a novel partial if-conversion scheme, an essential technique for vectorizing divergent control flow. Unlike prior techniques, partial linearization has linear running time, does not insert additional branches or blocks, and gives proven guarantees on the control flow retained. Divergence of control induces value divergence at join points in the control-flow graph (CFG). We present a novel control-divergence analysis for directed acyclic graphs with optimal running time and prove that it is correct and precise under common static assumptions. We extend this technique to obtain a quadratic-time control-divergence analysis for arbitrary reducible CFGs. For this analysis, we show on a range of realistic examples how earlier approaches are either less precise or incorrect. We present a feature-complete divergence analysis for P-LLVM programs; the analysis is the first to analyze stack-allocated objects in an unstructured control setting. Finally, we generalize single-dimensional vectorization of outer loops to multi-dimensional tensorization of loop nests. SIMD targets benefit from tensorization through more opportunities for reuse of loaded values and more efficient memory access behavior. The techniques were implemented in the Region Vectorizer (RV) for vectorization and in TensorRV for loop-nest tensorization. Our evaluation validates that the general-purpose RV vectorization system matches the performance of more specialized approaches: RV performs on par with the ISPC compiler, which only supports its structured domain-specific language, on a range of tree-traversal codes with complex control flow, and RV outperforms the loop vectorizers of state-of-the-art compilers, as we show for the SPEC2017 nab_s benchmark and the XSBench proxy application.
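
    Partial control-flow linearization refines full if-conversion, the classic scheme in which both sides of a divergent branch execute on all SIMD lanes and a per-lane mask blends the results. A minimal NumPy sketch of that baseline (illustrative, not RV's implementation):

        # Full if-conversion: both branch sides run for every lane; a mask
        # selects the correct result per lane (NumPy stands in for SIMD).
        import numpy as np

        def scalar(x):
            # Divergent scalar control flow: each element takes its own path.
            return x * 2 if x > 0 else x - 1

        def vectorized(xs: np.ndarray) -> np.ndarray:
            mask = xs > 0            # per-lane condition
            then_val = xs * 2        # executed for every lane
            else_val = xs - 1        # executed for every lane
            return np.where(mask, then_val, else_val)

        xs = np.array([-2, -1, 0, 1, 2])
        assert (vectorized(xs) == np.array([scalar(x) for x in xs])).all()

    Partial linearization improves on this baseline by retaining branches whose outcome can be proven uniform across lanes, so code that no lane needs is not executed at all.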

    Season-Based Occupancy Prediction in Residential Buildings Using Data Mining Techniques

    Considering the continuous increase in global energy consumption and the fact that buildings account for a large share of electricity use, it is essential to reduce energy consumption in buildings to mitigate greenhouse gas emissions and costs for both building owners and tenants. A reliable occupancy prediction model plays a critical role in improving the performance of energy simulation and occupant-centric building operations. In general, occupancy and occupant activities differ by season, so it is important to account for the dynamic nature of occupancy when running simulations and proposing energy-efficient strategies. The present work develops a data mining-based framework, including feature selection and the establishment of seasonal-customized occupancy prediction (SCOP) models, to predict occupancy in buildings across different seasons. In the proposed framework, recursive feature elimination with cross-validation (RFECV) is first applied to select the set of variables yielding the highest prediction accuracy. Six machine learning (ML) algorithms are then considered for establishing the four SCOP models that predict occupancy presence, and their performances are compared in terms of prediction accuracy and computational cost. To evaluate the effectiveness of the framework, it was applied to an apartment in Lyon, France. The results show that the RFECV process reduced computational time while improving the ML models' prediction performance. Additionally, the SCOP models achieved higher prediction accuracy than a conventional prediction model, as measured by the F1 score and the area under the curve. Among the considered ML models, the gradient-boosting decision tree, random forest, and artificial neural network performed best, achieving more than 85% accuracy in summer, fall, and winter, and over 80% in spring. The framework is valuable for developing strategies for building energy consumption estimation and higher-resolution occupancy prediction, both of which are strongly influenced by season.
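
    As an illustration of the RFECV step, the sketch below uses scikit-learn's RFECV with a gradient-boosting classifier, one of the better-performing model families named above; the features and data are synthetic placeholders, not the study's dataset:

        # Recursive feature elimination with cross-validation: repeatedly
        # drops the weakest feature and keeps the subset with the best
        # cross-validated F1 score.
        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.feature_selection import RFECV

        rng = np.random.default_rng(0)
        X = rng.random((500, 8))        # e.g., hour, weekday, CO2, humidity, ...
        y = (X[:, 0] + X[:, 2] > 1).astype(int)   # synthetic occupied/vacant label

        selector = RFECV(
            estimator=GradientBoostingClassifier(),
            step=1,                     # eliminate one feature per round
            cv=5,                       # 5-fold cross-validation per subset
            scoring="f1",               # matches the F1-based evaluation
        )
        selector.fit(X, y)
        print("optimal number of features:", selector.n_features_)
        print("selected feature mask:", selector.support_)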

    Improving Demand Forecasting: The Challenge of Forecasting Studies Comparability and a Novel Approach to Hierarchical Time Series Forecasting

    Demand forecasts are essential in business. Based on expected customer demand, companies decide, for example, which products to develop, how many factories to build, how many staff to hire, or how much raw material to order. Errors in demand forecasting can have serious consequences, lead to poor decisions, and in the worst case drive a company into bankruptcy. In many cases, however, anticipating actual future demand is complex: the influencing factors can be manifold, for example macroeconomic developments, competitor behaviour, or technological change, and even when all influencing factors are known, the relationships and interactions among them are often hard to quantify. This dissertation contributes to improving the accuracy of demand forecasts. In the first part, within a comprehensive survey of the full range of application fields of demand forecasting, a novel approach for systematically comparing demand forecasting studies is introduced and applied to 116 recent studies. Improving the comparability of studies is a substantial contribution to current research because, unlike in medical research for instance, there are no major comparative quantitative meta-studies for demand forecasting. The reason is that empirical demand forecasting studies do not use a unified scheme to describe their data, methods, and results. If studies can instead be compared directly through systematic description, other researchers can better analyze how variations in approach affect forecast accuracy, without the costly need to re-run empirical experiments that have already been described. This work is the first to introduce such a descriptive scheme. The remainder of the thesis addresses forecasting methods for intermittent time series, i.e., time series with a substantial share of zero demands. Such series violate the continuity assumptions of most forecasting methods, so common methods often achieve insufficient forecast accuracy. Nevertheless, intermittent time series are highly relevant; spare parts in particular typically exhibit this demand pattern. Three studies first show that even the tested state-of-the-art machine learning approaches bring no general improvement on several well-known data sets. As a key research contribution, the thesis then presents a novel method: the Similarity-based Time Series Forecasting (STSF) approach, which uses an aggregation-disaggregation procedure based on a self-generated hierarchy of statistical properties of the time series. Any available forecasting algorithm can be combined with STSF, since the aggregation satisfies the continuity requirement. In experiments on seven public data sets and one proprietary data set, forecast accuracy (measured by the root mean square error, RMSE) improves statistically significantly by 1-5% on average compared with the same method without STSF, a substantial improvement in forecast quality. In summary, this dissertation makes significant contributions through these methods: the proposed scheme for standardizing empirical studies accelerates research progress by enabling comparative studies, and STSF reliably improves forecast accuracy while remaining flexible with respect to the underlying forecasting algorithm. Based on the comprehensive literature review, no comparable approaches have been described to date.
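
    The abstract describes STSF only at a high level; the following Python sketch illustrates the general aggregation-disaggregation idea under strong simplifications (the grouping and the forecaster here are stand-ins, not the dissertation's actual method):

        # Pool similar intermittent series so the aggregate is continuous
        # enough for a standard forecaster, then split the aggregate
        # forecast back according to each series' historical share.
        import numpy as np

        def forecast_group(series: np.ndarray) -> np.ndarray:
            # series: shape (n_series, n_periods), many zero entries per row.
            aggregate = series.sum(axis=0)              # pooled demand, fewer zeros
            agg_forecast = aggregate.mean()             # stand-in forecaster
            shares = series.sum(axis=1) / series.sum()  # historical share per series
            return shares * agg_forecast                # disaggregated forecasts

        demand = np.array([[0, 3, 0, 0, 4, 0],          # intermittent demand,
                           [2, 0, 0, 5, 0, 0],          # e.g., spare parts
                           [0, 0, 6, 0, 0, 2]])
        print(forecast_group(demand))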

    Telecommunication Systems

    This book is based on both industrial and academic research efforts and presents a number of recent advancements and rare insights into telecommunication systems. The volume is organized into four parts: "Telecommunication Protocol, Optimization, and Security Frameworks", "Next-Generation Optical Access Technologies", "Convergence of Wireless-Optical Networks", and "Advanced Relay and Antenna Systems for Smart Networks". Chapters within these parts are self-contained and cross-referenced to facilitate further study.