33 research outputs found

    Method and system for environmentally adaptive fault tolerant computing

    A method and system for adaptive fault-tolerant computing. The method includes measuring an environmental condition representative of an environment and determining the on-board processing system's sensitivity to the measured condition. Based in part on the measured environmental condition, it is determined whether to reconfigure the fault tolerance of the on-board processing system, and the fault tolerance may be reconfigured accordingly.
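    The measure-decide-reconfigure loop described above is simple enough to sketch. The following minimal Python sketch is an illustration only; the names (read_dosimeter, set_redundancy, RADIATION_THRESHOLD) and the radiation example are assumptions, not taken from the patent.

    ```python
    # Hypothetical sketch of the adaptive fault tolerance loop.
    RADIATION_THRESHOLD = 50.0  # assumed dose rate above which the processor is sensitive

    def read_dosimeter() -> float:
        """Stand-in for measuring an environmental condition."""
        return 12.3

    def set_redundancy(mode: str) -> None:
        """Stand-in for reconfiguring the on-board fault tolerance."""
        print(f"fault tolerance reconfigured to {mode}")

    def adapt_fault_tolerance() -> None:
        dose_rate = read_dosimeter()          # 1. measure the environment
        if dose_rate > RADIATION_THRESHOLD:   # 2. decide based on the measurement
            set_redundancy("TMR")             # 3. e.g., switch to triple modular redundancy
        else:
            set_redundancy("duplex")          #    lighter mode in a benign environment

    adapt_fault_tolerance()
    ```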

    An Experimental Evaluation of the REE SIFT Environment for Spaceborne Applications

    Coordinated Science Laboratory was formerly known as Control Systems Laboratory.

    Design and Evaluation of Preemptive Control Signature Checking for Distributed Applications

    Coordinated Science Laboratory was formerly known as Control Systems Laboratory.

    Design and implementation of resilient systems using a component-based approach

    Evolution during service life is mandatory, particularly for long-lived systems. Dependable systems, which continuously deliver trustworthy services, must evolve to accommodate changes such as new fault tolerance requirements or variations in available resources. The addition of this evolutionary dimension to dependability leads to the notion of resilient computing. Among the various aspects of resilience, we focus on adaptivity. Dependability relies on fault-tolerant computing at runtime, applications being augmented with fault tolerance mechanisms (FTMs). As such, on-line adaptation of FTMs is a key challenge towards resilience. In related work, on-line adaptation of FTMs is most often performed in a preprogrammed manner or consists of tuning some parameters; besides, FTMs are replaced monolithically, so all the envisaged FTMs must be known at design time and deployed from the beginning. However, change occurs along multiple dimensions, and developing a system for the worst-case scenario is impossible. Based on runtime observations, new FTMs can be developed off-line but integrated on-line. We denote this ability as agile adaptation, as opposed to preprogrammed adaptation. In this thesis, we present an approach for developing flexible fault-tolerant systems in which FTMs can be adapted at runtime in an agile manner through fine-grained modifications that minimize the impact on the initial architecture. We first propose a classification of a set of existing FTMs based on criteria such as fault model, application characteristics, and necessary resources. Next, we analyze these FTMs and extract a generic execution scheme which pinpoints the parts they have in common and the features that vary between them. Then, we demonstrate the use of state-of-the-art tools and concepts from software engineering, such as component-based software engineering and reflective component-based middleware, to develop a library of fine-grained adaptive FTMs. We evaluate the agility of the approach and illustrate its usability through two examples of integrating the library: first, in a design-driven development process for applications in pervasive computing and, second, in a toolkit for developing applications for wireless sensor networks (WSNs).
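    The generic execution scheme with common parts and variability points lends itself to a small illustration. The sketch below is not the thesis's actual component middleware; it merely shows, under assumed names (FaultToleranceMechanism, before/after hooks), how a mechanism's skeleton can stay fixed while individual variable features are swapped at runtime in a fine-grained way.

    ```python
    # Minimal sketch of a fault tolerance mechanism whose common protocol
    # skeleton is fixed while its variability points can be replaced on-line.
    from typing import Callable

    class FaultToleranceMechanism:
        def __init__(self, before: Callable[[str], str], after: Callable[[str], str]):
            # 'before' and 'after' are the variable features; 'serve' is the common part.
            self.before = before
            self.after = after

        def serve(self, request: str) -> str:
            req = self.before(request)   # e.g., duplicate the request, take a checkpoint
            reply = self.process(req)    # application processing (common part)
            return self.after(reply)     # e.g., vote, compare, forward state to a backup

        def process(self, request: str) -> str:
            return f"reply({request})"

    # Primary-backup-style variable features (illustrative).
    ftm = FaultToleranceMechanism(
        before=lambda r: r,                        # nothing to do on the request side
        after=lambda rep: rep + " +checkpointed",  # ship state to the backup
    )
    print(ftm.serve("req1"))

    # Agile adaptation: swap one variability point at runtime without
    # replacing the whole mechanism monolithically.
    ftm.after = lambda rep: rep + " +voted"
    print(ftm.serve("req2"))
    ```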

    The Fight Master, Winter 1990, Vol. 13 Issue 1

    Perceptions of Urban Secondary Science Teachers Regarding Social Learning Professional Development

    Traditional classroom environments may not motivate students to learn and may lack interactive connections between educators and learners in the classroom. The problem addressed in this research study is the lack of understanding of science teachers' use and perception of innovative social learning strategies implemented in urban classrooms. The purpose of this research study was to establish urban science teachers' perceptions regarding social learning strategies within their classrooms. The conceptual framework of Hall and Hord's levels of use was used. The research questions addressed in this study focused on the perceptions and experiences of secondary science teachers in a large, urban school system. A qualitative case study design was used with face-to-face interviews, reflective journals, and lesson plans based on the social learning professional development. The inclusion criteria encompassed those teachers who attended the professional development regarding social learning, were still employed by this school system, and had used the social learning strategies, resulting in 8 participants. Open coding was used to highlight data and mark sections of the text in codes or labels. The findings demonstrated which social learning strategies the participants found most successful. Teachers stated that students gravitated towards the opportunity to be a part of the learning process. They also realized that social learning is a valuable way to give students interdependence, social skills, ways to solve problems in a real-world manner, and higher-level thinking skills. This study may provide positive social change by improving the understanding of the concerns of educators, enabling facilitators to address these concerns to improve future professional development, as well as by improving individual teacher pedagogy.

    Similarity Measure Based on Entropy and Census and Multi-Resolution Disparity Estimation Technique for Stereo Matching

    Stereo matching is one of the most active research areas in computer vision. It aims to obtain 3D information by extracting correct correspondences between two images captured from different points of view. Stereo matching research has two parts: similarity measures between corresponding points and optimization techniques for dense disparity estimation. The crux of the stereo matching problem, from the similarity-measure perspective, is how to deal with the inherent ambiguity that results from the ambiguous local appearance of image points. Similarity measures in stereo matching are classified as feature-based, intensity-based, or non-parametric, and most measures in the literature are based on comparing pixel intensities. When images are taken under different illumination conditions or with different sensors, corresponding pixels are unlikely to have the same intensity, which creates false correspondences if matching relies on intensity alone. Illumination variations between input images in particular can seriously degrade the performance of stereo matching algorithms. In this situation, mutual-information-based methods are powerful; however, they remain ambiguous or erroneous under local illumination variations between images. Similarity measures that are robust to these radiometric variations are therefore indispensable for stereo matching. Optimization methods in stereo matching fall into two categories, local and global, and most state-of-the-art algorithms use global optimization. Global methods can greatly suppress the matching ambiguities caused by factors such as occluded and textureless regions, but they are usually computationally expensive due to the slowly converging optimization process. In this paper, we propose a stereo matching similarity measure based on entropy and the Census transform, together with an optimization technique that uses dynamic programming to estimate disparity efficiently in a multi-resolution framework. The proposed similarity measure combines entropy, a Haar wavelet feature vector, and a modified Census transform. In general, the mutual information similarity measure based on the entropy of the stereo images and the disparity map is popular and powerful, being robust to complex intensity transformations. However, it remains ambiguous or erroneous under local radiometric variations, since it only accounts for global variation between images and contains no spatial information. Haar wavelet responses express the frequency properties of image regions and are robust to various intensity changes and bias, so entropy is combined with a Haar wavelet feature vector as a geometric measure. The modified Census transform is used as an additional spatial similarity measure. The Census transform is a well-known non-parametric measure that performs well in textureless and disparity-discontinuity regions and is robust in noisy environments. The proposed combination of entropy, the Haar wavelet feature vector, and the modified Census transform is invariant to local radiometric variations as well as global illumination changes, so it can find correspondences in images that undergo local as well as global radiometric variations. The proposed optimization method is a new disparity estimation technique based on dynamic programming.
    Dynamic programming with 8-direction energy aggregation is applied to estimate an accurate disparity map. Using 8-direction aggregation, accurate disparities can be found at disparity discontinuities, and the streaking artifacts typical of scanline dynamic programming are suppressed. Finally, a multi-resolution scheme is proposed to increase the efficiency of disparity estimation. A Gaussian pyramid, which prevents aliasing at the low-resolution levels of the image pyramid, is used, and matching is performed at every level to find accurate disparities. This makes matching efficient while producing an accurate disparity map. The proposed method is validated with experimental results on stereo images.
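    Of the three ingredients of the proposed measure, the Census transform is the easiest to sketch. The Python fragment below illustrates a mean-based ("modified") Census code and its Hamming matching cost; the window size and the exact comparison rule are assumptions for illustration, not the paper's precise formulation.

    ```python
    # Illustrative sketch of a modified Census transform and its matching cost.
    import numpy as np

    def census_transform(img: np.ndarray, y: int, x: int, r: int = 1) -> int:
        """Encode a (2r+1)x(2r+1) window as a bit string: 1 where a pixel
        exceeds the window mean (the 'modified' variant), else 0."""
        window = img[y - r:y + r + 1, x - r:x + r + 1]
        mean = window.mean()
        bits = 0
        for v in window.flat:
            bits = (bits << 1) | int(v > mean)
        return bits

    def hamming_cost(a: int, b: int) -> int:
        """Matching cost between two Census codes: number of differing bits.
        Only the intensity ordering inside the window matters, which makes
        the code robust to monotonic radiometric changes."""
        return bin(a ^ b).count("1")

    left = np.random.randint(0, 255, (5, 5))
    right = (0.7 * left + 20).astype(int)  # simulated gain/bias change
    # Census codes of corresponding pixels agree despite the radiometric change.
    print(hamming_cost(census_transform(left, 2, 2), census_transform(right, 2, 2)))
    ```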

    Hardware Error Detection Using AN-Codes

    Due to continuously decreasing feature sizes and the increasing complexity of integrated circuits, commercial off-the-shelf (COTS) hardware is becoming less and less reliable. However, dedicated reliable hardware is expensive and usually slower than commodity hardware. Thus, economic pressure will most likely result in the use of unreliable COTS hardware in safety-critical systems, which in turn creates the need for software-implemented solutions for handling the execution errors this unreliable hardware causes. In this thesis, we provide techniques for detecting hardware errors that disturb the execution of a program. The detection facilitates handling of these errors, for example by retry or graceful degradation. We realize error detection by transforming unsafe programs, which are not guaranteed to detect execution errors, into safe programs that detect execution errors with high probability. To this end, we use arithmetic AN-, ANB-, ANBD-, and ANBDmem-codes. These codes detect errors that modify data during storage or transport as well as errors that disturb computations, and the detection they provide is independent of the hardware used. We present the following novel encoding approaches: Software Encoded Processing (SEP), which transforms an unsafe binary into a safe execution at runtime by applying an ANB-code, and Compiler Encoded Processing (CEP), which applies encoding at compile time and provides different levels of safety by using different arithmetic codes. In contrast to existing encoding solutions, SEP and CEP can encode applications whose data and control flow are not completely predictable at compile time. For encoding, SEP and CEP use the set of encoded operations also presented in this thesis. To the best of our knowledge, we are the first to present the encoding of a complete RISC instruction set, including Boolean and bitwise logical operations, casts, unaligned loads and stores, shifts, and arithmetic operations. Our evaluations show that encoding with SEP and CEP significantly reduces the amount of erroneous output caused by hardware errors and that, in contrast to replication-based approaches, arithmetic encoding facilitates the detection of permanent hardware errors. This increased reliability does not come for free; unexpectedly, however, the runtime costs of the different arithmetic codes supported by CEP, compared to redundancy, increase only linearly, while the gained safety increases exponentially.
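    Plain AN-encoding, the simplest code in the family above, admits a short worked example. The sketch below uses an illustrative constant A = 61; it is not the thesis's implementation, which chooses A to maximize error-detection guarantees and extends the code with B and D signatures.

    ```python
    # Worked sketch of plain AN-encoding: valid code words are exact multiples of A.
    A = 61  # illustrative code constant

    def encode(x: int) -> int:
        return A * x

    def check(xc: int) -> bool:
        """A code word corrupted by a random bit flip is, with high
        probability, no longer divisible by A."""
        return xc % A == 0

    def decode(xc: int) -> int:
        assert check(xc), "hardware error detected"
        return xc // A

    # Encoded addition: the sum of two multiples of A is again a multiple of A,
    # so this operation can run directly on code words.
    a, b = encode(20), encode(22)
    s = a + b
    print(decode(s))        # 42

    # Simulate a transient hardware fault: flip one bit of the code word.
    s_faulty = s ^ (1 << 7)
    print(check(s_faulty))  # False -- the error is detected
    ```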