    Concurrent Error Detection Methods for Asynchronous Burst-Mode Machines

    Asynchronous controllers exhibit various characteristics that limit the effectiveness and applicability of the Concurrent Error Detection (CED) methods developed for their synchronous counterparts. Asynchronous Burst-Mode Machines (ABMMs), for example, do not have a global clock to synchronize the ABMM with the additional circuitry that is typically used by synchronous CED methods (for example, duplication). Therefore, performing effective CED in ABMMs requires a synchronization method that appropriately enables the checker (for example, a comparator) in order to avoid false alarms. Also, ABMMs contain redundant logic, which guarantees the hazard-free operation required for correct interaction between the circuit and its environment. Redundant logic, however, allows some single-event transients to manifest themselves only as hazards and not as logic discrepancies. Therefore, performing effective CED in ABMMs requires the ability to detect hazards, with which synchronous CED methods are not concerned. In this work, we first devise hardware solutions for performing checking synchronization and hazard detection. We then demonstrate how these solutions enable the development of three complete CED methods for ABMMs. The first method (Duplication-based CED) is an adaptation of the well-known duplication method within the context of ABMMs. The second method (Transition-Triggered CED) is a variation of duplication wherein the implementation cost is reduced by allowing hazards in the duplicate circuit. In contrast to these two methods, which are nonintrusive, the third method (Berger code-based CED) is intrusive, since it requires re-encoding of the ABMM with check symbols based on the Berger code. Although this intrusiveness may slightly impact performance, Berger code-based CED incurs the lowest area overhead among the three methods, as indicated by experimental results.
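    For concreteness, here is a minimal Python sketch of the coding idea behind the third method (the paper implements this in hardware; the function names and the 8-bit example are illustrative). A Berger check symbol counts the zero bits of the information word, so a unidirectional error can never leave the word and its stored symbol in agreement.

```python
def berger_check_symbol(word, width):
    """Berger check symbol: the number of 0 bits in the information
    word. A unidirectional error (all flips 0->1, or all 1->0) moves
    the word's zero count and the stored symbol out of agreement, so
    recomputing and comparing detects it.
    """
    ones = bin(word & ((1 << width) - 1)).count("1")
    return width - ones

def check(word, width, symbol):
    """Concurrent check: recompute the symbol and compare."""
    return berger_check_symbol(word, width) == symbol

w = 0b1011_0110                      # 8-bit information word, 3 zeros
s = berger_check_symbol(w, 8)
corrupted = w & ~(1 << 2)            # inject a single 1->0 flip
assert check(w, 8, s) and not check(corrupted, 8, s)
```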

    Contributions and developments on nonintrusive load monitoring

    Energy efficiency is a key subject on the present world agenda, not only because of greenhouse gas emissions, which contribute to global warming, but also because of possible supply interruptions. In Brazil, energy wastage in the residential market is estimated to be around 15%. Previous studies have indicated that the largest savings were achieved with appliance-specific electricity consumption feedback, which caused behavioral changes and encouraged consumers to pursue energy conservation. Nonintrusive Load Monitoring (NILM) is a relatively new term. It aims to disaggregate global consumption at the appliance level using only a single point of measurement. Various methods have been suggested to infer when appliances are turned on and off from the analysis of aggregated current and voltage waveforms. Within this context, we provide a methodology for NILM to determine which sets of electrical features and feature extraction rates, obtained from aggregated household data, are essential to preserve equivalent levels of accuracy, thus reducing the amount of data that needs to be transferred to, and stored on, cloud servers. Different machine learning techniques are applied to characterize and solve the problem. As an addendum to this thesis, a Brazilian appliance dataset, sampled from real appliances, was developed for future NILM developments and research. Beyond that, a low-cost NILM smart meter was developed to encourage consumers to adopt more sustainable consumption habits.
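    As a rough illustration of the kind of event detection that NILM methods build on, the sketch below flags candidate on/off events as large step changes in an aggregate power trace. The threshold and the toy signal are illustrative, not taken from the thesis.

```python
import numpy as np

def detect_events(power_w, threshold_w=50.0):
    """Flag candidate appliance on/off events as step changes in an
    aggregate active-power trace larger than `threshold_w` watts.
    Returns (sample index, step size) pairs; positive steps suggest
    an appliance turning on, negative steps one turning off.
    """
    deltas = np.diff(power_w)
    idx = np.flatnonzero(np.abs(deltas) > threshold_w)
    return [(int(i) + 1, float(deltas[i])) for i in idx]

# Toy trace: ~120 W baseline, a ~900 W appliance cycles on and off.
trace = np.array([120, 121, 119, 1020, 1022, 1019, 118, 120], dtype=float)
print(detect_events(trace))   # [(3, 901.0), (6, -901.0)]
```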

    Master of Science

    Thesis. Concurrent programs are extremely important for efficiently programming future HPC systems. Large scientific programs may employ multiple processes or threads and run on HPC systems for days. Reliability is an essential requirement of existing concurrent programs; therefore, verification of concurrent programs becomes increasingly important. Today we face two significant challenges in developing concurrent-program verification tools. The first is scalability: since new types of concurrent programs keep being created, verification tools need to scale to handle all these new types of programs. The second is providing a formal coverage guarantee: dynamic verification tools always face a huge schedule space. Both capabilities must exist for testing programs that follow multiple concurrency models. Most current dynamic verification tools can explore either thread-level or process-level schedules, but not both; consequently, they fail to verify hybrid programs. Exploring mixed process- and thread-level schedules is not an ideal solution because the state space grows exponentially at both levels, and it is hard to traverse these mixed schedules systematically. Therefore, our approach is to determinize all concurrent APIs except one, whose schedules are then explored. To improve search efficiency, we propose a random-walk-based heuristic algorithm, as sketched below. We studied many concurrent programs and identified common structures among them. Based on the existence of these structures, we can make dynamic verification tools focus on specific regions and bypass regions of less interest. We propose random sampling of executions in the regions of less interest.
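    A minimal sketch of the random-walk idea, assuming instrumented scheduling points; the generator-based interface below is a stand-in for the tool's actual API, not its real implementation.

```python
import random

def random_walk_schedule(threads, seed=None):
    """Explore one random interleaving: at every step, pick one live
    thread uniformly at random and run it to its next scheduling
    point. Each element of `threads` is a generator whose yields
    mark scheduling points (a stand-in for instrumented API calls).
    """
    rng = random.Random(seed)
    live = dict(enumerate(threads))
    trace = []
    while live:
        tid = rng.choice(sorted(live))      # random walk over enabled threads
        try:
            trace.append((tid, next(live[tid])))
        except StopIteration:
            del live[tid]                   # thread finished
    return trace

def worker(name):
    yield f"{name}: acquire lock"
    yield f"{name}: release lock"

# Each call samples one schedule; repeated sampling covers more of the space.
for s in range(3):
    print(random_walk_schedule([worker("A"), worker("B")], seed=s))
```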

    Estimating Level of Engagement from Ocular Landmarks

    E-learning offers many advantages, such as being economical, flexible, and customizable, but it also has challenging aspects, such as a lack of social interaction, which results in contemplation and a sense of remoteness. To overcome these and sustain learners’ motivation, various stimuli can be incorporated. Nevertheless, such adjustments initially require an assessment of the engagement level. In this respect, we propose estimating the engagement level from facial landmarks, exploiting the facts that (i) perceptual decoupling is promoted by blinking during mentally demanding tasks; (ii) eye strain increases blinking rate, which also scales with task disengagement; (iii) eye aspect ratio is in close connection with attentional state; and (iv) users’ head position is correlated with their level of involvement. Building empirical models of these actions, we devise a probabilistic estimation framework. Our results indicate that high and low levels of engagement are identified with considerable accuracy, whereas medium levels are inherently more challenging, which is also confirmed by the inter-rater agreement of expert coders.
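    The eye-aspect-ratio cue in (iii) is commonly computed from six eye landmarks; the sketch below uses that standard formulation together with a simple threshold-crossing blink counter. The landmark ordering and the 0.2 threshold are common defaults, not necessarily the paper's exact model.

```python
import math

def eye_aspect_ratio(p):
    """Eye aspect ratio (EAR) over six eye landmarks p[0..5]:
    corner, upper lid (x2), corner, lower lid (x2). EAR falls
    toward 0 as the eye closes, so thresholding it gives a
    simple blink detector.
    """
    d = math.dist
    return (d(p[1], p[5]) + d(p[2], p[4])) / (2.0 * d(p[0], p[3]))

def count_blinks(ear_series, closed_thresh=0.2):
    """Count open->closed transitions in a per-frame EAR series."""
    blinks, was_closed = 0, False
    for ear in ear_series:
        closed = ear < closed_thresh
        if closed and not was_closed:
            blinks += 1
        was_closed = closed
    return blinks

# Toy series: eye open (~0.3), one blink (~0.1), open again.
print(count_blinks([0.31, 0.30, 0.09, 0.08, 0.29, 0.30]))  # 1
```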

    Memory safety and untrusted extensions for TinyOS

    Journal Article. Sensor network applications should be reliable. However, TinyOS, the dominant sensor net OS, lacks basic building blocks for reliable software systems: memory protection, isolation, and safe termination. These features are typically found in general-purpose operating systems but are believed to be too expensive for tiny embedded systems with a few kilobytes of RAM. We dispel this notion and show that CCured, a safe dialect of C, can be leveraged to provide memory safety for largely unmodified TinyOS applications. We build upon safety to implement two very different environments for TinyOS applications. The first, Safe TinyOS, provides a minimal kernel for safely executing trusted applications. Safe execution traps and identifies bugs that would otherwise have silently corrupted RAM. The second environment, UTOS, implements a user-kernel boundary that supports isolation and safe termination of untrusted code. Existing TinyOS components can often be ported to UTOS with little effort. To create our environments, we substantially augmented the CCured toolchain to emit code that is safe under interrupt-driven concurrency, to reduce storage requirements by compressing error messages, to refactor direct hardware access into calls to trusted helper functions, and to make safe programs more efficient using whole-program optimization. A surprising result of our work is that a safe, optimized TinyOS program can be faster than the original unsafe, unoptimized application.
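    As a toy analogue of the memory safety CCured provides, the sketch below mimics a "fat" pointer that carries its bounds and checks every dereference. CCured itself instruments C programs, so this Python model is purely illustrative of the trap-instead-of-corrupt behavior.

```python
class SafePtr:
    """A 'fat' pointer that carries its base buffer and index; every
    dereference is bounds-checked, so an out-of-bounds access traps
    instead of silently corrupting memory.
    """
    def __init__(self, buf, index=0):
        self.buf, self.index = buf, index

    def __add__(self, k):
        # Pointer arithmetic preserves the bounds metadata.
        return SafePtr(self.buf, self.index + k)

    def deref(self):
        if not 0 <= self.index < len(self.buf):
            raise MemoryError(f"out-of-bounds access at index {self.index}")
        return self.buf[self.index]

p = SafePtr(bytearray(b"abc"))
print((p + 2).deref())      # in bounds: returns ord('c')
try:
    (p + 3).deref()         # one past the end: trapped
except MemoryError as e:
    print(e)
```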

    Robust and reliable hardware accelerator design through high-level synthesis

    System-on-chip design is becoming increasingly complex as technology scaling enables more and more functionality on a chip. This scaling-driven complexity has resulted in a variety of reliability and validation challenges including logic bugs, hot spots, wear-out, and soft errors. To make matters worse, as we reach the limits of Dennard scaling, efforts to improve system performance and energy efficiency have resulted in the integration of a wide variety of complex hardware accelerators in SoCs. Thus the challenge is to design complex, custom hardware that is efficient but also correct and reliable. High-level synthesis shows promise to address the problem of complex hardware design by providing a bridge from the high-productivity software domain to the hardware design process. Much research has been done on high-level synthesis efficiency optimizations. This dissertation shows that high-level synthesis also has the power to address validation and reliability challenges through three automated solutions targeting three key stages in the hardware design and use cycle: pre-silicon debugging, post-silicon validation, and post-deployment error detection. Our solution for rapid pre-silicon debugging of accelerator designs is hybrid tracing: comparing a datapath-level trace of hardware execution with a reference software implementation at a fine temporal and spatial granularity to detect logic bugs. An integrated backtrace process delivers source-code meaning to the hardware designer, pinpointing the location of bug activation and providing a strong hint for potential bug fixes. Experimental results show that we are able to detect and aid in localization of logic bugs from both C/C++ specifications as well as the high-level synthesis engine itself. A variation of this solution tailored for rapid post-silicon validation of accelerator designs is hybrid hashing: inserting signature generation logic in a hardware design to create a heavily compressed signature stream that captures the internal behavior of the design at a fine temporal and spatial granularity for comparison with a reference set of signatures generated by high-level simulation to detect bugs. Using hybrid hashing, we demonstrate an improvement in error detection latency (time elapsed from when a bug is activated to when it manifests as an observable failure) of two orders of magnitude and a threefold improvement in bug coverage compared to traditional post-silicon validation techniques. Hybrid hashing also uncovered previously unknown bugs in the CHStone benchmark suite, which is widely used by the HLS community. Hybrid hashing incurs less than 10% area overhead for the accelerator it validates with negligible performance impact, and we also introduce techniques to minimize any possible intrusiveness introduced by hybrid hashing. Finally, our solution for post-deployment error detection is modulo-3 shadow datapaths: performing lightweight shadow computations in modulo-3 space for each main computation. We leverage the binding and scheduling flexibility of high-level synthesis to detect control errors through diverse binding and minimize area cost through intelligent checkpoint scheduling and modulo-3 reducer sharing. We introduce logic and dataflow optimizations to further reduce cost. We evaluated our technique with 12 high-level synthesis benchmarks from the arithmetic-oriented PolyBench benchmark suite using FPGA-emulated netlist-level error injection. We observe coverages of 99.1% for stuck-at faults, 99.5% for soft errors, and 99.6% for timing errors with a 25.7% area cost and negligible performance impact. Leveraging a mean error detection latency of 12.75 cycles (4150× faster than an end-result check) for soft errors, we also explore a rollback recovery method with an additional area cost of 28.0%, observing a 175× increase in reliability against soft errors. While the area cost of our modulo shadow datapaths is much better than traditional modular redundancy approaches, we want to maximize the applicability of our approach. To this end, we examine gate-level architectural design for modulo-arithmetic functional units. We introduce new low-cost gate-level architectures for all four key functional units in a shadow datapath: (1) a modulo reduction algorithm that generates architectures consisting entirely of full-adder standard cells; (2) minimum-area modulo adder and subtractor architectures; (3) an array-based modulo multiplier design; and (4) a modulo equality comparator that handles the residue encoding produced by the above. We compare our new functional units to the previous state-of-the-art approach, observing a 12.5% reduction in area and a 47.1% reduction in delay for a 32-bit mod-3 reducer; that our reducer costs, which tend to dominate shadow datapath costs, do not increase with larger modulo bases; and that for modulo-15 and above, all of our modulo functional units have better area and delay than their previous counterparts. We also demonstrate the practicality of our approach by designing a custom shadow datapath for error detection of a multiply-accumulate functional unit, which has an area overhead of only 12% for a 32-bit main datapath and 2-bit modulo-3 shadow datapath. Taking our reliability solution further, we look at the bigger picture of modulo shadow datapaths combined with other solutions at different abstraction layers, seeking to answer the following question: Given all of the existing reliability improvement techniques for application-specific hardware accelerators, what techniques or combinations of techniques are the most cost-effective? To answer this question, we consider a soft error fault model and empirically evaluate cross-layer combinations of ABFT, EDDI, and modulo shadow datapaths in the context of high-level synthesis; parity in logic synthesis; and flip-flop hardening techniques at the physical design level. We measure the reliability benefit and area, energy, and performance cost of each technique individually and for interesting technique combinations through FPGA-emulated fault injection and physical place-and-route. Our results show that a combination of parity and flip-flop hardening is the most cost-effective in general, with an average 1.3% area cost and 5.7% energy cost for a 50× improvement in reliability. The addition of modulo-3 shadow datapaths to this combination provides some additional benefit for some applications, even without considering its combinational logic, stuck-at fault, and timing error protection benefits. We also observe new efficiency challenges for ABFT and EDDI when used for hardware accelerators.
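    To make the residue-checking principle concrete, the sketch below models a modulo-3 shadow check for one multiply-accumulate step in software; the dissertation realizes this as hardware, and the names and widths here are illustrative.

```python
def checked_mac(acc, a, b, acc_r, a_r, b_r):
    """One multiply-accumulate step with a modulo-3 shadow: the 2-bit
    residues follow the same dataflow as the main values, and the
    identity (x*y + z) % 3 == ((x%3)*(y%3) + z%3) % 3 lets a cheap
    comparator flag datapath errors.
    """
    acc = acc + a * b                  # main datapath (e.g., 32-bit)
    acc_r = (acc_r + a_r * b_r) % 3    # shadow datapath (2-bit residues)
    if acc % 3 != acc_r:               # modulo equality comparator
        raise RuntimeError("shadow datapath mismatch: error detected")
    return acc, acc_r

# Fault-free step: residues stay in agreement.
acc, acc_r = checked_mac(10, 6, 7, 10 % 3, 6 % 3, 7 % 3)
# Injecting a single bit flip into `acc` breaks the agreement, since
# no power of two is congruent to 0 mod 3.
try:
    checked_mac(acc ^ (1 << 4), 1, 1, acc_r, 1, 1)
except RuntimeError as e:
    print(e)
```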

    Monitoring fatigue and drowsiness in motor vehicle occupants using electrocardiogram and heart rate - A systematic review

    Background: Fatigue is a complex state that can result in decreased alertness, often accompanied by drowsiness. Driving fatigue has become a significant contributor to traffic accidents globally, highlighting the need for effective monitoring techniques. Various technologies exist to enhance driver safety and minimize accident risks, such as fatigue detection systems that can alert drivers as drowsiness sets in. In particular, measuring heart rate patterns may offer valuable insights into the occupant's physiological condition and level of alertness, and may allow them to understand their fatigue levels. This review aims to establish the current state of the art of monitoring strategies for vehicle occupants, specifically focusing on fatigue assessed by heart rate and heart rate variability. Methods: We performed a systematic literature search in the Web of Science, SCOPUS, and PubMed databases, using the terms vehicle, driver, physiologic monitoring, fatigue, sleep, electrocardiogram, heart rate, and heart rate variability. We examined articles published between 1 January 2018 and 31 January 2023. Results: A total of 371 papers were identified, of which 71 were included in this study. Among the included papers, 57 utilized the electrocardiogram (ECG) as the acquired signal for heart rate (HR) measures, with most ECG readings obtained through contact sensors (n=41), followed by non-intrusive wearable sensors (n=11). Regarding validation, 23 papers do not report any validation, while the majority rely on subjective self-reported fatigue ratings (n=27) and video-based observer ratings (n=11). Of the included papers, only 14 comprise a fatigue and drowsiness estimation system. Some report acceptable performance, but small sample sizes limit the reach of any conclusions. Conclusions: This review highlights the potential of HR analysis and non-intrusive instrumentation for continuous monitoring of driver status and detection of sleepiness. One major issue is the lack of sufficient validation and estimation methods for fatigue, which keeps current methods from supporting proactive alarm systems. This area shows great promise but is still far from being reliably implemented.
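    For reference, the basic time-domain HR/HRV measures that such studies derive from ECG R-R intervals can be computed as below; SDNN and RMSSD are the standard definitions, while any drowsiness thresholds a particular detector applies are study-specific.

```python
import statistics as st

def hrv_metrics(rr_ms):
    """Time-domain measures from R-R intervals in milliseconds:
    mean heart rate, SDNN (standard deviation of intervals), and
    RMSSD (root mean square of successive differences).
    """
    hr_bpm = 60000.0 / st.mean(rr_ms)
    sdnn = st.pstdev(rr_ms)
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = st.mean(d * d for d in diffs) ** 0.5
    return hr_bpm, sdnn, rmssd

# Toy R-R series as produced by an R-peak detector (ms).
hr, sdnn, rmssd = hrv_metrics([812, 845, 798, 860, 825])
print(f"HR={hr:.1f} bpm, SDNN={sdnn:.1f} ms, RMSSD={rmssd:.1f} ms")
```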

    NASA Tech Briefs, January 2004

    Topics covered include: Multisensor Instrument for Real-Time Biological Monitoring; Sensor for Monitoring Nanodevice-Fabrication Plasmas; Backed Bending Actuator; Compact Optoelectronic Compass; Micro Sun Sensor for Spacecraft; Passive IFF: Autonomous Nonintrusive Rapid Identification of Friendly Assets; Finned-Ladder Slow-Wave Circuit for a TWT; Directional Radio-Frequency Identification Tag Reader; Integrated Solar-Energy-Harvesting and -Storage Device; Event-Driven Random-Access-Windowing CCD Imaging System; Stroboscope Controller for Imaging Helicopter Rotors; Software for Checking State-charts; Program Predicts Broadband Noise from a Turbofan Engine; Protocol for a Delay-Tolerant Data-Communication Network; Software Implements a Space-Mission File-Transfer Protocol; Making Carbon-Nanotube Arrays Using Block Copolymers: Part 2; Modular Rake of Pitot Probes; Preloading To Accelerate Slow-Crack-Growth Testing; Miniature Blimps for Surveillance and Collection of Samples; Hybrid Automotive Engine Using Ethanol-Burning Miller Cycle; Fabricating Blazed Diffraction Gratings by X-Ray Lithography; Freeze-Tolerant Condensers; The StarLight Space Interferometer; Champagne Heat Pump; Controllable Sonar Lenses and Prisms Based on ERFs; Measuring Gravitation Using Polarization Spectroscopy; Serial-Turbo-Trellis-Coded Modulation with Rate-1 Inner Code; Enhanced Software for Scheduling Space-Shuttle Processing; Bayesian-Augmented Identification of Stars in a Narrow View; Spacecraft Orbits for Earth/Mars-Lander Radio Relay; and Self-Inflatable/Self-Rigidizable Reflectarray Antenna.