499 research outputs found

    A New Framework for the Analysis of Large Scale Multi-Rate Power Data

    A new framework for the analysis of large-scale, multi-rate power data is introduced. The system comprises high-rate power grid data acquisition devices and software modules for big data management and large-scale time series analysis. The power grid modeling and simulation modules make it possible to run power flow simulations. Visualization methods support data exploration for captured, simulated, and analyzed energy data. A remote software control module for the proposed tools is also provided.
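A core task in any multi-rate analysis framework of this kind is placing series captured at different rates onto a common time axis. As a minimal illustrative sketch (not code from the paper), a low-rate series can be linearly interpolated onto a high-rate axis:

```python
def align_to_axis(times, values, target_times):
    """Linearly interpolate the series (times, values) onto target_times.

    Assumes `times` and `target_times` are sorted ascending; values
    outside the recorded range are clamped to the nearest endpoint.
    """
    out = []
    j = 0
    for t in target_times:
        # Advance to the segment [times[j], times[j+1]] containing t.
        while j + 1 < len(times) and times[j + 1] < t:
            j += 1
        if t <= times[0]:
            out.append(values[0])
        elif t >= times[-1]:
            out.append(values[-1])
        else:
            t0, t1 = times[j], times[j + 1]
            v0, v1 = values[j], values[j + 1]
            out.append(v0 + (v1 - v0) * (t - t0) / (t1 - t0))
    return out
```

With series aligned this way, high-rate grid measurements and low-rate simulation outputs can be compared sample by sample.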

    Data processing of high-rate low-voltage distribution grid recordings for smart grid monitoring and analysis

    Power networks will change from a rigid hierarchical architecture to dynamic interconnected smart grids. In traditional power grids, the frequency is the controlled quantity used to maintain the balance between supply and load power, and the high inertia of rotating masses ensures stability. In the future, system stability will have to rely more on real-time measurements and sophisticated control, especially when integrating fluctuating renewable power sources or high-load consumers such as electric vehicles into the low-voltage distribution grid. In the present contribution, we describe a data processing network for the in-house developed low-voltage, high-rate measurement devices called electrical data recorders (EDR). These capture units are capable of sending the full high-rate acquisition data for permanent storage in a large-scale database. The EDR network is specifically designed for reliable and secure transport of large data volumes, live performance monitoring, and deep data mining. We integrate different dedicated interfaces for statistical evaluation, big data queries, comparative analysis, and data integrity tests in order to provide a wide range of useful post-processing methods for smart grid analysis. We implemented the developed EDR network architecture for high-rate measurement data processing and management at different locations in the power grid of our Institute. The system has run stably and collected data successfully for several years. The results of the implemented evaluation functionalities show the feasibility of the implemented methods for signal processing, in view of enhanced smart grid operation. © 2015, Maaß et al.; licensee Springer.
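To give a flavor of the kind of signal processing such high-rate recordings enable, here is a toy sketch (an illustration, not the authors' implementation): estimating the mains frequency from high-rate voltage samples by counting rising zero crossings.

```python
import math

def estimate_frequency(samples, sample_rate):
    """Estimate the fundamental frequency by counting rising zero crossings.

    Crude but cheap; assumes a roughly sinusoidal, zero-mean signal.
    """
    crossings = [
        i for i in range(1, len(samples))
        if samples[i - 1] < 0 <= samples[i]
    ]
    if len(crossings) < 2:
        return 0.0
    # Average period between the first and last rising crossing, in samples.
    period = (crossings[-1] - crossings[0]) / (len(crossings) - 1)
    return sample_rate / period

# A synthetic 50 Hz test tone sampled at 12.8 kHz, one second long.
fs = 12800
wave = [math.sin(2 * math.pi * 50 * n / fs) for n in range(fs)]
f = estimate_frequency(wave, fs)
```

Real EDR-style analysis would of course use more robust estimators, but the sketch shows how cheaply a frequency trace can be derived from raw samples.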

    Exploring New Paradigms for Mobile Edge Computing

    Edge computing has been growing rapidly in recent years to meet the surging demands of mobile apps and the Internet of Things (IoT). Like the cloud, edge computing provides computation, storage, data, and application services to end users. However, edge computing is deployed at the edge of the network, so it can provide low-latency and high-bandwidth services to end devices. So far, edge computing is still not widely adopted. One significant challenge is that the edge computing environment is usually heterogeneous, involving various operating systems and platforms, which complicates app development and maintenance. In this dissertation, we explore combining edge computing with virtualization techniques to provide a homogeneous environment, in which edge nodes and end devices run exactly the same operating system. We develop three systems based on this homogeneous edge computing environment to improve the security and usability of end-device applications. First, we introduce vTrust, a new mobile Trusted Execution Environment (TEE), which offloads the general execution and storage of a mobile app to a nearby edge node and secures the I/O between the edge node and the mobile device with the aid of a trusted hypervisor on the mobile device. Specifically, vTrust establishes an encrypted I/O channel between the local hypervisor and the edge node, such that any sensitive data flowing through the hosted mobile OS is encrypted. Second, we present MobiPlay, a record-and-replay tool for mobile app testing. By pairing a mobile phone with an edge node, MobiPlay can effectively record and replay all types of input data on the mobile phone without modifying the mobile operating system. To do so, MobiPlay runs the application under test on the edge node in exactly the same environment as the mobile device and allows the tester to operate the application from a mobile device.
Last, we propose vRent, a new mechanism for leveraging smartphone resources as edge nodes, based on Xen virtualization and MiniOS. vRent aims to mitigate the shortage of available edge nodes. It enforces isolation and security by running the users' Android OSes as guest OSes and renting out resources to third parties in the form of MiniOSes.
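The record-and-replay idea behind a tool like MobiPlay can be sketched in a few lines. This is a deliberately simplified, hypothetical model (the real system intercepts input at the platform level between phone and edge node): events are logged with relative timestamps and later fed back to a handler with the same relative timing.

```python
import time

class RecordReplay:
    """Record timestamped input events, then replay them to a handler."""

    def __init__(self):
        self.log = []          # list of (offset_seconds, event)
        self._start = None

    def record(self, event):
        now = time.monotonic()
        if self._start is None:
            self._start = now
        self.log.append((now - self._start, event))

    def replay(self, handler, speed=1.0):
        """Feed recorded events to `handler`, preserving relative timing."""
        prev = 0.0
        for offset, event in self.log:
            time.sleep(max(0.0, (offset - prev) / speed))
            prev = offset
            handler(event)

rr = RecordReplay()
rr.record({"type": "tap", "x": 10, "y": 20})
rr.record({"type": "swipe", "dx": -5})
seen = []
rr.replay(seen.append, speed=1e6)  # accelerated replay for the demo
```

Because timing is stored as offsets, the same log can be replayed faster or slower, which is useful when driving an app under test on an edge node.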

    ECLAP 2012 Conference on Information Technologies for Performing Arts, Media Access and Entertainment

    There is a long history of Information Technology innovation in the Cultural Heritage area. The performing arts have also benefited from a number of innovations that unveil a range of synergies and possibilities. Most of the technologies and innovations produced for digital libraries, media entertainment, and education can be exploited in the field of performing arts, with adaptation and repurposing. The performing arts offer many interesting challenges and opportunities for research, innovation, and the exploitation of cutting-edge research results from interdisciplinary areas. For these reasons, ECLAP 2012 can be regarded as a continuation of past conferences such as AXMEDIS and WEDELMUSIC (both published by IEEE and FUP). ECLAP is a European Commission project to create a social network and media access service for performing arts institutions in Europe and to build the e-library of performing arts, exploiting innovative solutions coming from ICT.

    Intrusion Detection from Heterogeneous Sensors

    Nowadays, protecting computer systems and networks against various distributed and multi-step attacks is a vital challenge for their owners. One of the essential threats to the security of such computer infrastructures is attacks by malicious individuals, from inside or outside the system environment, aiming to abuse available services or reveal confidential information. Consequently, managing and supervising computer systems is a considerable challenge, as new threats and attacks are discovered on a daily basis. Intrusion Detection Systems (IDSs) play a key role in the surveillance and monitoring of computer network infrastructures. These systems inspect events occurring in computer systems and networks and, in case of malicious behavior, generate alerts describing the attacks' details. However, a number of shortcomings need to be addressed to make them reliable enough for real-world situations.
One of the fundamental challenges for real-world IDSs is the large number of redundant, non-relevant, and false-positive alerts that they generate, making it a difficult task for security administrators to determine and identify the alerts that really matter. Part of the problem is that most IDSs do not take into account contextual information (type of systems, applications, users, networks, etc.), and therefore a large portion of the alerts are non-relevant in the sense that, even though an intrusion is correctly recognized, it fails to reach its objectives. Additionally, relying on only one type of detection sensor is not adequate for detecting newer and more complicated attacks, and as a result many current IDSs are unable to detect them. This is especially important with respect to targeted attacks that try to avoid detection by conventional IDSs and other security products. While many system administrators are known to successfully incorporate context information and many different types of sensors and logs into their analysis, an important problem with this approach is the lack of automation in both storage and analysis. In order to address these problems in IDS applicability, various IDS types have been proposed in recent years, and commercial off-the-shelf (COTS) IDS products have found their way into the Security Operations Centers (SOCs) of many large organizations. From a general perspective, these works can be categorized into: machine-learning-based approaches, including Bayesian networks, data mining methods, decision trees, neural networks, etc.; alert correlation and alert fusion based approaches; context-aware intrusion detection systems; distributed intrusion detection systems; and ontology-based intrusion detection systems. To the best of our knowledge, since these works only focus on one or a few of the IDS challenges, the problem as a whole has not been resolved.
Hence, there is no comprehensive work addressing all the mentioned challenges of modern intrusion detection systems. For example, works that utilize machine learning approaches only classify events based on some features of the behavior observed for one type of event, and they do not take into account contextual information and event interrelationships. Most of the proposed alert correlation techniques consider correlation only across multiple sensors of the same type having a common event and alert semantics (homogeneous correlation), leaving it to security administrators to perform correlation across heterogeneous types of sensors. Context-aware approaches only employ limited aspects of the underlying context. The lack of accurate evaluation based on data sets that encompass modern, complex attack scenarios is another major shortcoming of most of the proposed approaches. The goal of this thesis is to design an event correlation system that can correlate across several heterogeneous types of sensors and logs (e.g. IDS/IPS, firewall, database, operating system, anti-virus, web proxy, routers, etc.) in order to detect complex attacks that leave traces in various systems, and to incorporate context information into the analysis in order to reduce false positives. To this end, our contributions can be split into four main parts: 1) We propose Pasargadae, a comprehensive context-aware and ontology-based event correlation framework that automatically performs event correlation by reasoning on the information collected from various information resources. Pasargadae uses ontologies to represent and store information on events, context and vulnerability information, and attack scenarios, and uses simple ontology logic rules written in the Semantic Query-Enhanced Web Rule Language (SQWRL) to correlate various information and filter out non-relevant alerts, duplicate alerts, and false positives.
2) We propose a meta-event based, topological-sort based, and semantic-based event correlation approach that employs Pasargadae to perform event correlation across events collected from several sensors distributed in a computer network. 3) We propose a semantic-based, context-aware alert fusion approach that relies on some of the subcomponents of Pasargadae to fuse heterogeneous alerts collected from heterogeneous IDSs. 4) In order to show the level of flexibility of Pasargadae, we use it to implement some other proposed alert and event correlation approaches. The sum of these contributions represents a significant improvement in the applicability and reliability of IDSs in real-world situations. In order to test the performance and flexibility of the proposed event correlation approach, we need to address the lack of experimental infrastructure suitable for network security. A study of the literature shows that current experimental approaches are not appropriate for generating high-fidelity network data. Consequently, in order to accomplish a comprehensive evaluation, we first conduct our experiments on two separate case study scenarios, inspired by the DARPA 2000 and UNB ISCX IDS evaluation data sets. Next, as a complete field study, we employ Pasargadae in a real computer network for a two-week period to inspect its detection capabilities on real network traffic with ground truth. The results obtained show that, compared to other existing IDS improvements, the proposed contributions significantly improve IDS performance (detection rate) while reducing false-positive, non-relevant, and duplicate alerts.
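The heterogeneous alert fusion idea can be made concrete with a toy sketch (a simplification for illustration; the thesis performs this reasoning over ontologies and SQWRL rules, not Python dicts): alerts from different sensor types are normalized into a common schema, and alerts sharing the same source, destination, and category within a time window are merged into a single, richer alert.

```python
def fuse_alerts(alerts, window=60.0):
    """Merge alerts that share (src, dst, category) within `window` seconds.

    `alerts`: iterable of dicts with keys time, src, dst, category, sensor.
    Returns fused alerts carrying the set of contributing sensors, so one
    multi-sensor event yields a single alert instead of duplicates.
    """
    fused = []
    last = {}  # (src, dst, category) -> index of the open fused alert
    for a in sorted(alerts, key=lambda a: a["time"]):
        key = (a["src"], a["dst"], a["category"])
        i = last.get(key)
        if i is not None and a["time"] - fused[i]["last_time"] <= window:
            # Same incident seen again (possibly by a different sensor).
            fused[i]["last_time"] = a["time"]
            fused[i]["sensors"].add(a["sensor"])
            fused[i]["count"] += 1
        else:
            fused.append({
                "src": a["src"], "dst": a["dst"], "category": a["category"],
                "first_time": a["time"], "last_time": a["time"],
                "sensors": {a["sensor"]}, "count": 1,
            })
            last[key] = len(fused) - 1
    return fused

# Hypothetical alerts: a NIDS and a database audit log see the same attack.
alerts = [
    {"time": 0, "src": "10.0.0.5", "dst": "db01",
     "category": "sql-injection", "sensor": "nids"},
    {"time": 12, "src": "10.0.0.5", "dst": "db01",
     "category": "sql-injection", "sensor": "db-audit"},
    {"time": 500, "src": "10.0.0.5", "dst": "db01",
     "category": "sql-injection", "sensor": "nids"},
]
fused = fuse_alerts(alerts)
```

An alert corroborated by several sensor types (here both `nids` and `db-audit`) is exactly the kind of cross-sensor evidence that homogeneous correlation misses.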

    Expressive policy based authorization model for resource-constrained device sensors.

    Chapters II, III and IV are subject to confidentiality by the author. Upcoming smart scenarios enabled by the Internet of Things (IoT) envision smart objects that expose services able to adapt to user behavior or to be managed with the goal of achieving higher productivity, often in multi-stakeholder applications. In such environments, smart things are cheap sensors (and actuators) and, therefore, constrained devices. However, they are also critical components because of the importance of the information they provide. Given that, strong security in general, and access control in particular, is a must. However, the tightness, feasibility, and usability of existing access control models do not cope well with the principle of least privilege; they lack both expressiveness and the ability to update the policy enforced in the sensors. In fact, (1) traditional access control solutions are not feasible in all constrained devices due to their big impact on performance, although they provide the highest effectiveness by means of tightness and flexibility. (2) Recent access control solutions designed for constrained devices can be implemented only in not-so-constrained ones and lack policy expressiveness in the local authorization enforcement. (3) Access control solutions currently feasible in the most severely constrained devices have been based on authentication and very coarse-grained, static policies; they scale badly, and a feasible policy-based access control solution aware of the local context of sensors is lacking. Therefore, there is a need for a suitable End-to-End (E2E) access control model providing fine-grained authorization services in service-oriented open scenarios, where operation and management access is dynamic by nature, integrating massively deployed, constrained but manageable sensors.
Precisely, the main contribution of this thesis is the specification of such a highly expressive E2E access control model suitable for all sensors, including the most severely constrained ones. Concretely, the proposed E2E access control model rests on three main foundations. (1) A hybrid architecture, which combines advantages of both centralized and distributed architectures to enable multi-step authorization. Fine granularity of the enforcement is enabled by (2) an efficient policy language and codification, specifically defined to gain expressiveness in the authorization policies and to ensure viability in very constrained devices. The policy language definition makes it possible both to make granting decisions based on local context conditions and to react to requests by executing additional tasks defined as obligations. The policy is evaluated and enforced not only during the establishment of the security association but also afterward, while that security association is in use. Moreover, this novel model also provides control over access behavior, since iterative re-evaluation of the policy is enabled during each individual resource access. Finally, (3) the establishment of an E2E security association between two mutually authenticated peers through a security protocol named Hidra. The Hidra protocol, based on symmetric-key cryptography, relies on the hybrid three-party architecture to enable multi-step authorization as well as the instant provisioning of a dynamic security policy in the sensors. Hidra also enables delegated accounting and audit trails. The proposed access control features cope with tightness, feasibility, and both dimensions of usability, scalability and manageability, which are the key unsolved challenges in the foreseen open and dynamic scenarios enabled by the IoT.
Regarding efficiency, the high compression factor of the proposed policy codification and the optimized Hidra security protocol, relying on a symmetric cryptographic scheme, enable feasibility, as demonstrated by the validation assessment. Specifically, the security evaluation and both the analytical and experimental performance evaluations demonstrate the feasibility and adequacy of the proposed protocol and access control model. Concretely, the security validation consists of assessing that the Hidra security protocol meets the security goals of mutual strong authentication, fine-grained authorization, confidentiality and integrity of secret data, and accounting. The security analysis of Hidra conveys, on the one hand, how the design aspects of the message exchange contribute to resilience against potential attacks. On the other hand, a formal security validation supported by a software tool named AVISPA ensures the absence of flaws and the correctness of the design of Hidra. The performance validation is based on an analytical performance evaluation and a test-bed implementation of the proposed access control model for the most severely constrained devices. The key performance factor is the length of the policy instance, since it impacts proportionally the critical parameters of delay, energy consumption, and memory footprint, and therefore feasibility. Attending to the obtained performance measures, it can be concluded that the proposed policy language keeps this balance, since it enables expressive policy instances while always remaining within limited length values.
Additionally, the proposed policy codification notably improves the performance of the protocol, since it yields the best policy length compression factor compared with currently existing and adopted standards. Therefore, the assessed access control model is the first approach to bring to severely constrained devices a level of expressiveness for enforcement and accounting similar to that of the current Internet. The positive performance evaluation confirms the feasibility and suitability of this access control model, which notably raises the security level of severely constrained devices for the incoming smart scenarios. Additionally, there is no comparable impact assessment of policy expressiveness for any other access control model; the presented analysis models and results may thus serve as a reference for further analysis and benchmarking.
Many of the devices we use today have embedded microprocessors that take measurements of the process they act on and react according to some logic. For this, both sensors and actuators are used (hereafter, as is accepted in the community, we will call both of them sensors). Until now, the most widespread types of connection have been individual or via local networks: sensors working together, influencing each other or under the command of a central server, have been used to enable and improve an organization's processes. The Internet of Things (IoT) connects sensor-equipped devices through the Internet and enables broader and more effective processes. Smart city, smart grid, smart factory, and other smart ecosystems aim to enable new applications and increase impact by exploiting current and upcoming communication technologies. These ecosystems are thus broad, organizations from different domains take part in them, and the number of devices with dedicated sensors is enormous.
Sensors are therefore special-purpose, cheap, and small, and their main use so far has been to measure some physical magnitude and send the measurements to a centralized server: they measure what happens around them and transmit the data to a given server periodically or when a threshold condition is met. The server applies the logic, and the system as a whole behaves intelligently. In this setting, the concern of securing communications between previously known entities, even across the Internet, is reasonably well solved today. But advanced smart ecosystems also foresee another role for sensors: they offer services for interacting with them directly. Within an organization's processes, in collaboration with organizations of other origins, there are two main uses for this. On the one hand, a user taking part in the process (who need not be the owner) may interact with the environment and need the devices to adapt to their particular situation. On the other hand, the technicians who look after the operation and maintenance of the sensors may need services for configuring them. Since the locations of sensors and users are unspecified, such interactions are in many cases foreseen to take place over the Internet and to be direct (end-to-end): many small sensors scattered around enable the system's intelligence and offer tiny services for direct interaction. Direct service is simpler and more effective, but it also brings challenges. Indeed, sensors are so small that they cannot run today's standard protocols and mechanisms, so special-purpose protocols from the network layer up to the application layer are being created. Unfortunately, these protocols aim to be lightweight and do not analyze and implement security as they should.
Special-purpose access control models do exist, but due to the scarcity of resources they are neither tight nor manageable. Moreover, according to Gartner, the main obstacle currently limiting investment in advanced applications is distrust of security. And this is the challenge this thesis has addressed: with sensors so small and resources so scarce (10 kB of RAM, 100 kB of Flash, and batteries, in the smallest sensors), and with the Internet so vast and risky, a new tight, lightweight, and manageable end-to-end access control model that increases security is specified and its usability evaluated.
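The expressive local-policy idea can be illustrated with a toy evaluator (purely illustrative; the thesis defines its own compact policy language and binary codification): a rule matches a request when its action and resource match and all of its local context conditions hold, and a matching rule can carry obligations to execute alongside the granting decision.

```python
def evaluate(policy, request, context):
    """Return (decision, obligations) for a request under local context.

    First matching rule wins; default is deny with no obligations.
    """
    for rule in policy["rules"]:
        if rule["action"] != request["action"]:
            continue
        if rule["resource"] != request["resource"]:
            continue
        # All context conditions of the rule must hold on this sensor.
        if all(context.get(k) == v for k, v in rule.get("if", {}).items()):
            return rule["effect"], rule.get("obligations", [])
    return "deny", []

# A hypothetical policy instance provisioned to a sensor.
policy = {"rules": [
    {"action": "read", "resource": "temperature",
     "if": {"role": "maintainer"},
     "effect": "permit", "obligations": ["log-access"]},
]}
```

Because the policy is plain data, it can be re-provisioned dynamically and re-evaluated on every resource access, mirroring the iterative re-evaluation described above.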

    Design rules and guidelines for generic condition-based maintenance software's Graphic User Interface

    The task of selecting and developing a method of Human Computer Interaction (HCI) for a Condition Based Maintenance (CBM) system is investigated in this thesis. Efficiently and accurately communicating machinery health information extracted from Condition Monitoring (CM) equipment, to aid and assist plant and machinery maintenance decisions, is the crux of the problem being researched. Challenges facing this research include: the multitude of different CM techniques, developed for measuring different component and machinery condition parameters; the multitude of different methods of HCI; and the multitude of different ways of communicating machinery health conditions to CBM practitioners. Each challenge will be considered whilst pursuing the objective of identifying a generic set of design and development principles applicable to the design and development of a CBM system's Human Machine Interface (HMI). [Continues.]

    Mobiilse värkvõrgu protsessihaldus

    The Internet of Things (IoT) promotes solutions such as the smart city, where everyday objects connect with information systems and with each other. One example is a road condition monitoring system, where connected vehicles, such as buses, capture video, which is then processed to detect potholes and snow build-up. Building such a solution typically involves establishing a complex centralised system that relies on constant connectivity to all involved devices to make decisions, such as which vehicles to involve in the process. The centralised approach may become a bottleneck as the number of IoT devices keeps growing. Designing, automating, managing, and monitoring such processes can be greatly supported using the standards and software systems provided by the field of Business Process Management (BPM). However, BPM techniques are not directly applicable to new computing paradigms, such as Fog Computing and Edge Computing, on which the future of the IoT relies. Here, a lot of decision-making and processing is moved from central data centres to devices in the network edge, near the end users and IoT sensors. For example, video could be processed in mini-datacentres deployed throughout the city, e.g., at bus stops. This load distribution reduces the risk of the ever-growing number of IoT devices overloading the data centre. This thesis studies how to reorganise process execution in this decentralised fashion, where processes must dynamically adapt to the volatile edge environment filled with moving devices. Namely, connectivity is intermittent, so decision-making and planning need to involve factors such as the movement trajectories of mobile devices. We examined this issue in simulations and with a prototype for Android smartphones. We also present the STEP-ONE toolset, allowing researchers to conveniently simulate and analyse these issues in different realistic scenarios, such as those in a smart city. https://www.ester.ee/record=b552551
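One recurring decision in such a volatile edge environment is which node should receive a task when connectivity windows are predicted from movement trajectories. The following is a toy heuristic for illustration only (not the thesis' algorithm): assign the task to the node whose predicted contact window still covers the task's estimated runtime, preferring the longest remaining contact.

```python
def pick_node(nodes, task_runtime, now):
    """Pick the edge node with the longest remaining contact window that
    still covers `task_runtime` seconds; return None if none qualifies.

    `nodes`: dicts with a name and contact_until, the time (predicted
    from the device's trajectory) until which the node stays reachable.
    """
    best = None
    for node in nodes:
        remaining = node["contact_until"] - now
        if remaining >= task_runtime:
            if best is None or remaining > best["contact_until"] - now:
                best = node
    return best["name"] if best else None

# Hypothetical mini-datacentres a moving bus will pass.
nodes = [
    {"name": "bus-stop-7", "contact_until": 130.0},
    {"name": "bus-stop-9", "contact_until": 95.0},
]
```

Returning None models the intermittent-connectivity case, where the process engine must defer or re-plan the task rather than dispatch it.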