
    Bidirectional distributed data aggregation

    Master's dissertation in Informatics Engineering. Transforming data between two different domains is a typical problem in software engineering. Ideally such transformations should be bidirectional, so that changes in either domain can be propagated to the other. Many of the existing bidirectional transformation frameworks are instantiations of the so-called lenses [1], proposed as a solution to the well-known view-update problem: if we construct a view that abstracts information in a source domain, how can changes in the view be propagated back to the source? The goal of a distributed data aggregation algorithm is precisely to compute, in one or more network nodes, a local view of a given global property of interest. As expected, such algorithms react to updates in the distributed input values, but so far no mechanisms have been proposed to bidirectionalize them, so that updates in the computed view can be reflected back to the inputs. The goal of this thesis is precisely to research the viability of such bidirectionalization in a distributed setting.
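
    To make the lens vocabulary concrete, the following minimal sketch (illustrative only, not from the thesis; all names are hypothetical) models a lens as a get/put pair and applies it to a toy aggregation whose view is the sum of the source values, with put spreading a view update proportionally over the inputs:

        from dataclasses import dataclass
        from typing import Callable, List

        @dataclass
        class Lens:
            # get abstracts a source into a view; put reflects an updated
            # view back into the source -- the essence of the view-update
            # problem mentioned in the abstract.
            get: Callable[[List[float]], float]
            put: Callable[[List[float], float], List[float]]

        def sum_get(src: List[float]) -> float:
            return sum(src)

        def sum_put(src: List[float], new_view: float) -> List[float]:
            # One of many possible update policies: spread the change in
            # the sum proportionally over the inputs (equally if all zero).
            old = sum(src)
            if old == 0:
                return [new_view / len(src)] * len(src)
            return [x * new_view / old for x in src]

        sum_lens = Lens(sum_get, sum_put)

        src = [1.0, 2.0, 3.0]
        assert sum_lens.put(src, sum_lens.get(src)) == src     # GetPut law
        assert sum_lens.get(sum_lens.put(src, 12.0)) == 12.0   # PutGet law

    The two asserts check the classic round-tripping laws; the question the thesis raises is how such a put can be realized when the source values live on different network nodes.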

    Power quality and electromagnetic compatibility: special report, session 2

    The scope of Session 2 (S2) has been defined as follows by the Session Advisory Group and the Technical Committee: Power Quality (PQ), together with the more general concept of electromagnetic compatibility (EMC) and some related safety problems in electricity distribution systems. Special focus is put on voltage continuity (supply reliability, problem of outages) and voltage quality (voltage level, flicker, unbalance, harmonics). This session will also look at electromagnetic compatibility (mains frequency to 150 kHz), electromagnetic interference, and electric and magnetic field issues. Also addressed in this session are electrical safety and immunity concerns (lightning issues; step, touch, and transferred voltages). The aim of this special report is to present a synthesis of the present concerns in PQ&EMC, based on all selected papers of Session 2 and related papers from other sessions (152 papers in total). The report is divided into the following four blocks: Block 1: Electric and Magnetic Fields, EMC, Earthing Systems; Block 2: Harmonics; Block 3: Voltage Variation; Block 4: Power Quality Monitoring. Two round tables will be organised: Power Quality and EMC in the Future Grid (CIGRE/CIRED WG C4.24, RT 13), and Reliability Benchmarking: why should we do it, and what should be done in the future? (RT 15)

    Experimental analysis of computer system dependability

    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
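
    Of the statistical techniques the survey introduces, importance sampling benefits most from a concrete example. The sketch below (an illustration under our own assumptions, not code from the paper) estimates a rare failure probability P(X > t) for an exponential lifetime X, first naively and then by sampling from a heavier-tailed proposal and reweighting by the likelihood ratio:

        import math
        import random

        def failure_prob_naive(t: float, n: int = 100_000) -> float:
            # Plain Monte Carlo estimate of P(X > t) for X ~ Exp(1): for
            # large t essentially no sample ever hits the rare event.
            hits = sum(1 for _ in range(n) if random.expovariate(1.0) > t)
            return hits / n

        def failure_prob_is(t: float, n: int = 100_000,
                            rate: float = 0.1) -> float:
            # Importance sampling: draw from a heavier-tailed Exp(rate)
            # and reweight each hit by the likelihood ratio
            #   f(x)/g(x) = exp(-x) / (rate * exp(-rate * x)).
            total = 0.0
            for _ in range(n):
                x = random.expovariate(rate)   # proposal distribution g
                if x > t:                      # rare-event indicator
                    total += math.exp(-x) / (rate * math.exp(-rate * x))
            return total / n

        t = 15.0                               # true P(X > t) = e**-15 ~ 3.1e-7
        print(failure_prob_naive(t))           # almost always prints 0.0
        print(failure_prob_is(t))              # close to 3.1e-7

    Plain Monte Carlo needs on the order of 1/p samples to observe an event of probability p even once; the reweighted estimator concentrates samples where the event occurs, which is why the technique accelerates fault simulation.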

    Adaptive fault detection and condition monitoring of induction motor

    Master's thesis, Master of Engineering

    Development of a new automatic picking technique for seismic refraction data

    Accurate picking of first-arrival times plays an important role in many seismic studies, particularly in seismic tomography and in the monitoring of reservoirs or aquifers. A new adaptive algorithm has been developed, based on combining three picking methods (Multi-Nested Windows, Higher Order Statistics, and Akaike Information Criterion). It exploits the benefits of integrating three properties (energy, gaussianity, and stationarity) that reveal the presence of first arrivals. Since estimating time uncertainties is of crucial importance for seismic tomography, the developed algorithm also automatically provides the associated errors of the picked arrival times. The comparison of the resulting arrival times with those picked manually, and with other automatic picking algorithms, demonstrates the reliable performance of this algorithm. It is a nearly parameter-free algorithm, which is straightforward to implement and demands low computational resources. However, a high noise level in the seismic records degrades its efficiency. To improve the signal-to-noise ratio of the first arrivals, and thereby increase their detectability, double stacking in the time domain has been proposed. This approach is based on the key principle of the local similarity of stacked traces. The results demonstrate the feasibility of applying the double stacking before the automatic picking.
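
    Of the three combined pickers, the Akaike Information Criterion one is the easiest to show compactly. The sketch below (a hypothetical single-trace implementation for illustration, not the thesis code) treats the trace as noise followed by signal and picks the split index k that minimises AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:])):

        import numpy as np

        def aic_pick(trace: np.ndarray) -> int:
            # The onset is modelled as the index k that best splits the
            # trace into two stationary segments (noise before, signal
            # after); the pick is the k minimising AIC.
            n = len(trace)
            aic = np.full(n, np.inf)
            for k in range(2, n - 2):          # >= 2 samples per side
                v1, v2 = np.var(trace[:k]), np.var(trace[k:])
                if v1 > 0 and v2 > 0:
                    aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
            return int(np.argmin(aic))

        # Synthetic trace: weak noise, then a stronger arrival at sample 300.
        rng = np.random.default_rng(0)
        trace = rng.normal(0.0, 0.1, 600)
        trace[300:] += rng.normal(0.0, 1.0, 300)
        print(aic_pick(trace))                 # close to 300

    In the thesis's setting such a pick would be fused with the energy-based (Multi-Nested Windows) and gaussianity-based (Higher Order Statistics) picks, and the spread between the three is one natural source for the reported time uncertainties.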

    Energy adaptive buildings: from sensor data to being aware of users


    Middleware for transparent TCP connection migration: masking faulty TCP-based services

    Master's thesis in Information and Communication Technology, 2004, Høgskolen i Agder (Agder University College), Grimstad. Mission-critical TCP-based services create a demand for robust and fault-tolerant TCP communication. Sense Intellifield monitors drill operations on offshore rig sites; its critical TCP-based services need to be available 24 hours a day, 7 days a week, and the service providers need to tolerate server failure. The motivation of this thesis is how to make TCP robust and fault tolerant without modifying existing infrastructure: existing client/server applications, services, TCP stacks, kernels, or operating systems. We present a new middleware approach, the first of its kind, that allows TCP-based services to survive server failure by migrating TCP connections from failed servers to replicated surviving servers. The approach is based on a proxy technique that by itself would require modifications to existing infrastructure; our middleware approach, by contrast, is simple, practical, and can be built into existing infrastructure without modifying it. A middleware approach has never before been used to implement the proxy-based technique. Experiments validating the functionality and measuring the performance of the middleware prototype were conducted. The results show that our technique adds significant robustness and fault tolerance to TCP without modifying existing infrastructure. One consequence of using a middleware to make TCP communication robust and fault tolerant is added latency; another is that TCP communication can survive server failure and mask it. Companies providing robust and fault-tolerant TCP are no longer dependent on third-party hardware and/or software, and can gain economic advantages by implementing our solution. A main focus of this report is to present a prototype that demonstrates our technique and middleware approach. We present the relevant background theory that led to the design and architecture of the middleware, and finally we conduct experiments to assess the feasibility and performance of the prototype, followed by a discussion and conclusion.
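
    To give a feel for the proxy idea, here is a deliberately simplified sketch under our own assumptions (not the thesis prototype: it fails over only when a connection is opened, whereas the thesis migrates live connections transparently). A user-space relay hides which server replica actually serves a client; all addresses are placeholders:

        import socket
        import threading

        PRIMARY = ("127.0.0.1", 9001)   # hypothetical replicated service
        BACKUP  = ("127.0.0.1", 9002)

        def connect_upstream() -> socket.socket:
            # Try the primary first, then fail over to the backup replica.
            for addr in (PRIMARY, BACKUP):
                try:
                    return socket.create_connection(addr, timeout=2)
                except OSError:
                    continue
            raise RuntimeError("no server replica reachable")

        def pump(src: socket.socket, dst: socket.socket) -> None:
            # Forward bytes one way until either side closes.
            try:
                while data := src.recv(4096):
                    dst.sendall(data)
            except OSError:
                pass

        def handle(client: socket.socket) -> None:
            # The client holds a single TCP connection to the proxy;
            # which replica serves it stays invisible to the client.
            with client, connect_upstream() as server:
                threading.Thread(target=pump, args=(server, client),
                                 daemon=True).start()
                pump(client, server)

        with socket.create_server(("127.0.0.1", 9000)) as proxy:
            while True:
                conn, _ = proxy.accept()
                threading.Thread(target=handle, args=(conn,),
                                 daemon=True).start()

    Masking a failure mid-connection additionally requires replicating per-connection state (sequence numbers, buffered bytes) so a surviving server can resume the byte stream; that is the harder part the thesis middleware addresses.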

    Computer security through hardware-intrinsic authentication

    Advisors: Guido Costa Souza de Araújo, Mario Lúcio Côrtes, and Diego de Freitas Aranha. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação. This work presents Computer Security by Hardware-Intrinsic Authentication (CSHIA), a secure computer architecture for embedded systems that aims at providing authenticity and integrity for code and data. The work encompassed three phases: Design, Implementation, and Security Evaluation. In design, we laid out the basic ideas behind CSHIA, namely how integrity and authenticity are provided through the use of Physical Unclonable Functions (PUFs), and we proposed an algorithm to extract cryptographic keys from the intrinsic memories of processors. In implementation, we made CSHIA's design more flexible, allowing different configurations without compromising security. Then, we evaluated CSHIA's performance and overheads, such as area, energy, and memory, for multiple configurations. Finally, we evaluated the security of PUFs, which led us to develop a new side-channel-based attack that enabled us to circumvent PUFs' uniqueness property through their architectural elements. Doctorate in Computer Science. Funding: FAPESP grants 2015/06829-2 and 2016/25532-3; CNPq grant 147614/2014-7.
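
    To illustrate the flavor of key extraction from intrinsic memory (a hypothetical sketch of a generic memory-PUF scheme, not the algorithm proposed in the thesis), enrollment can select the cells whose power-up values are most stable, and reconstruction can majority-vote fresh readouts of those cells before hashing them into a key:

        import hashlib
        import numpy as np

        def select_stable_cells(enroll_reads: np.ndarray,
                                keep: int = 256) -> np.ndarray:
            # Enrollment: rows are noisy power-up readouts of the same
            # memory, columns are cells. Keep the indices of the cells
            # that agree most across readouts; the indices are public
            # helper data and reveal nothing about the bit values.
            ones = enroll_reads.mean(axis=0)   # fraction of 1s per cell
            stability = np.abs(ones - 0.5)     # 0.5 = coin-flip cell
            return np.argsort(stability)[-keep:]

        def extract_key(reads: np.ndarray, stable: np.ndarray) -> bytes:
            # Reconstruction: majority-vote a few fresh readouts of the
            # stable cells to suppress residual noise, then hash the
            # bitstring into a fixed-size cryptographic key.
            bits = (reads[:, stable].mean(axis=0) > 0.5).astype(np.uint8)
            return hashlib.sha256(bits.tobytes()).digest()

        # Simulated memory PUF: each cell has a manufacturing bias, and
        # every readout of cell i is 1 with probability bias[i].
        rng = np.random.default_rng(1)
        bias = rng.uniform(0.0, 1.0, 4096)
        read = lambda n: (rng.uniform(0.0, 1.0, (n, 4096)) < bias).astype(np.uint8)

        stable = select_stable_cells(read(64))  # one-time enrollment
        key1 = extract_key(read(15), stable)    # device boot #1
        key2 = extract_key(read(15), stable)    # device boot #2
        print(key1 == key2)                     # True with high probability

    Real designs replace the bare majority vote with a fuzzy extractor (an error-correcting code plus hashing) so that a single noisy readout suffices.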