
    Joining and aggregating datasets using CouchDB

    Data mining typically requires operations that cut across entity boundaries and are awkward to implement in document-oriented databases. CouchDB, for example, models entities as documents with highly isolated boundaries, across which joins cannot be performed directly. This project shows how joins and aggregation can be achieved across entity boundaries in such systems, as encountered, for example, in the pre-processing and exploration stages of educational data mining. A software stack is presented as a means to this end: first, datasets are processed via ETL operations, then MapReduce is used to create indices of ordered and aggregated data. Finally, a CouchDB list function is used to iterate through these indices, perform joins, and compute aggregated values such as variance and correlations on the joined datasets. The case study shows that the proposed approach to cross-document joins and aggregation is effective and scalable. In addition, it was found that for the 2014-2016 UCT cohorts, NBT scores correlate better with final grades for the CSC1015F course than do Grade 12 results for English, Science and Mathematics.
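
    To make the join-then-aggregate step concrete, the following Python sketch joins two hypothetical CouchDB view indices on a shared student key and computes a Pearson correlation over the joined values, roughly mirroring what the CouchDB list function in the stack above does while walking its indices. The database name, view names, and field layout are illustrative assumptions, not taken from the project.

```python
# Minimal sketch: join two CouchDB view indices on a shared key and
# compute a Pearson correlation over the joined values.
# Assumptions (not from the project): a local CouchDB at localhost:5984,
# a database "edm", and design-document views "nbt_by_student" and
# "grade_by_student" that each emit (student_id, score) pairs.
import math
import requests

COUCH = "http://localhost:5984/edm/_design/marks/_view"

def fetch_view(view: str) -> dict:
    """Return {key: value} for every row emitted by a MapReduce view."""
    rows = requests.get(f"{COUCH}/{view}").json()["rows"]
    return {row["key"]: row["value"] for row in rows}

def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Join the two indices on student_id (keys present in both views only),
# which is what the list function does when it iterates the ordered index.
nbt = fetch_view("nbt_by_student")
grades = fetch_view("grade_by_student")
common = sorted(nbt.keys() & grades.keys())
r = pearson([nbt[k] for k in common], [grades[k] for k in common])
print(f"NBT vs CSC1015F final grade: r = {r:.3f} over {len(common)} students")
```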

    Ultra low-power fault-tolerant SRAM design in 90nm CMOS technology

    With the growth of mobile, biomedical and space applications, digital systems with low power consumption are required. As a main part of digital systems, low-power memories are especially desired. Reducing the power supply voltage to the sub-threshold region is one of the most effective approaches for ultra-low-power applications. However, the reduced Static Noise Margin (SNM) of Static Random Access Memory (SRAM) poses great challenges for sub-threshold SRAM design. The conventional 6-transistor SRAM cell does not function properly in the sub-threshold supply voltage range because it does not have enough noise margin for reliable operation. In order to achieve ultra-low power at sub-threshold operation, previous research has demonstrated that a read/write decoupled scheme is a good solution to the reduced-SNM problem. A Dual Interlocked Storage Cell (DICE) based SRAM cell was proposed to eliminate the drawback of the conventional DICE cell during read operation. This cell can mitigate single-event effects, improve stability and maintain the low-power characteristic of sub-threshold SRAM. In order to make the proposed SRAM cell work under different power supply voltages from 0.3 V to 0.6 V, an improved replica sense scheme was applied to produce a reference control signal with which the optimal read time could be achieved. In this thesis, a 2K × 8 bits SRAM test chip was designed, simulated and fabricated in the 90nm CMOS technology provided by ST Microelectronics. Simulation results suggest that the operating frequency at VDD = 0.3 V is up to 4.7 MHz with a power dissipation of 6.0 µW, while it is 45.5 MHz at VDD = 0.6 V dissipating 140 µW. However, the area occupied by a single cell is larger than that of a conventional SRAM cell due to the additional transistors used. The main contribution of this thesis project is a new design that could simultaneously address the ultra-low-power and radiation-tolerance problems in large-capacity memory design.
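
    As a back-of-envelope reading of the quoted operating points, power divided by frequency gives the energy spent per clock cycle. The sketch below uses only the figures from the abstract; interpreting the result as energy per access is an assumption, not a reported result of the thesis.

```python
# Back-of-envelope check of the quoted operating points: energy per clock
# cycle is power divided by frequency. Figures are taken from the abstract;
# reading them as energy per access is an assumption, not a thesis result.
points = {
    "VDD = 0.3 V": {"power_w": 6.0e-6, "freq_hz": 4.7e6},
    "VDD = 0.6 V": {"power_w": 140e-6, "freq_hz": 45.5e6},
}

for name, p in points.items():
    energy_pj = p["power_w"] / p["freq_hz"] * 1e12  # joules -> picojoules
    print(f"{name}: {energy_pj:.2f} pJ per cycle")

# Approximate output: 1.28 pJ per cycle at 0.3 V vs. 3.08 pJ at 0.6 V,
# illustrating why sub-threshold operation lowers energy per operation
# even though the achievable clock frequency drops.
```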

    Reliability-yield allocation for semiconductor integrated circuits: modeling and optimization

    This research develops yield and reliability models for fault-tolerant semiconductor integrated circuits, along with optimization algorithms that can be applied directly to these models. Since defects cause failures in microelectronic systems, accurate yield and reliability models that account for these defects, as well as optimization techniques for determining efficient defect-tolerant schemes, are essential in semiconductor manufacturing and nanomanufacturing to ensure manufacturability and productivity. The defect-based yield model simultaneously considers various types of failures, fault-tolerant schemes such as hierarchical redundancy and error-correcting codes, and burn-in effects. The reliability model accounts for carried-over single-cell failures together with the failure rate of the semiconductor integrated circuits, under the assumption of an error-correcting-code policy. The redundancy allocation problem, which seeks an optimal allocation of redundancy that maximizes system reliability, is one of the representative problems in reliability optimization. The problem is typically formulated as a nonconvex integer nonlinear programming problem that is nonseparable and coherent. Two iterative heuristics, a tree heuristic and a scanning heuristic, together with their variants, are studied to obtain local optima, and a branch-and-bound algorithm is proposed to find the global optimum for redundancy allocation problems. The proposed algorithms employ a multiple-search-path strategy to improve efficiency. Experimental results indicate that these algorithms are superior to existing algorithms in terms of computation time and solution quality. An example of memory semiconductor integrated circuits is presented to show the applicability of both the yield and reliability models and the optimization algorithms to fault-tolerant semiconductor integrated circuits.
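
    For illustration, the sketch below solves a toy instance of the generic series-system redundancy allocation problem by depth-first branch and bound with simple pruning. The component data and the series-parallel reliability formula are assumptions made for the example; they do not reproduce the thesis's defect-based yield and reliability models or its specific heuristics.

```python
# Minimal branch-and-bound sketch for a generic redundancy allocation
# problem: maximise series-system reliability subject to a cost budget.
# Component data and the series-parallel model below are illustrative
# assumptions; the thesis uses its own defect-based yield/reliability models.

# (component reliability, cost per redundant copy), one entry per subsystem
COMPONENTS = [(0.90, 3.0), (0.85, 4.0), (0.95, 2.0)]
BUDGET = 25.0
MAX_COPIES = 4  # copies allowed per subsystem, 1..MAX_COPIES

def subsystem_rel(r: float, n: int) -> float:
    """Reliability of n parallel copies of a component with reliability r."""
    return 1.0 - (1.0 - r) ** n

def solve(idx=0, cost=0.0, rel=1.0, alloc=()):
    """Depth-first branch and bound over copy counts for each subsystem."""
    global best_rel, best_alloc
    if cost > BUDGET or rel <= best_rel:   # prune: over budget or dominated
        return
    if idx == len(COMPONENTS):
        best_rel, best_alloc = rel, alloc
        return
    r, c = COMPONENTS[idx]
    for n in range(1, MAX_COPIES + 1):
        solve(idx + 1, cost + n * c, rel * subsystem_rel(r, n), alloc + (n,))

best_rel, best_alloc = 0.0, None
solve()
print(f"best allocation {best_alloc}, system reliability {best_rel:.4f}")
```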

    Army-NASA aircrew/aircraft integration program (A3I) software detailed design document, phase 3

    The capabilities and design approach of the MIDAS (Man-machine Integration Design and Analysis System) computer-aided engineering (CAE) workstation under development by the Army-NASA Aircrew/Aircraft Integration Program are detailed. This workstation uses graphic, symbolic, and numeric prototyping tools and human performance models as part of an integrated design/analysis environment for crewstation human engineering. The workstation is being developed incrementally; the requirements and design for Phase 3 (Dec. 1987 to Jun. 1989) are described here. Software tools/models developed or significantly modified during this phase included: an interactive 3-D graphic cockpit design editor; multiple-perspective graphic views to observe simulation scenarios; symbolic methods to model the mission decomposition, equipment functions, pilot tasking and loading, as well as to control the simulation; a 3-D dynamic anthropometric model; an intermachine communications package; and a training assessment component. These components were successfully used during Phase 3 to demonstrate the complex interactions and human engineering findings involved with a proposed cockpit communications design change in a simulated AH-64A Apache helicopter mission, with results that map to empirical data from a similar study and an AH-1 Cobra flight test.

    Hardware / Software Architectural and Technological Exploration for Energy-Efficient and Reliable Biomedical Devices

    Nowadays, the ubiquity of smart appliances in our everyday lives is increasingly strengthening the links between humans and machines. Beyond making our lives easier and more convenient, smart devices now play an important role in personalized healthcare delivery. This technological breakthrough is particularly relevant in a world where population aging and unhealthy habits have made non-communicable diseases the leading cause of death worldwide, according to international public health organizations. In this context, smart health monitoring systems, termed Wireless Body Sensor Nodes (WBSNs), represent a paradigm shift in the healthcare landscape by greatly lowering the cost of long-term monitoring of chronic diseases and improving patients' lifestyles. WBSNs autonomously acquire biological signals and embed on-node Digital Signal Processing (DSP) capabilities to deliver clinically accurate health diagnoses in real time, even outside a hospital environment. Energy efficiency and reliability are fundamental requirements for WBSNs, since they must operate for extended periods of time while relying on compact batteries. These constraints, in turn, demand carefully designed hardware and software architectures for hosting the execution of complex biomedical applications. In this thesis, I develop and explore novel solutions at the architectural and technological levels of the integrated circuit design domain to enhance the energy efficiency and reliability of current WBSNs. Firstly, following a top-down approach driven by the characteristics of biomedical algorithms, I perform an architectural exploration of a heterogeneous and reconfigurable computing platform devoted to bio-signal analysis. By interfacing with a shared Coarse-Grained Reconfigurable Array (CGRA) accelerator, this domain-specific platform can achieve higher performance and energy savings beyond the capabilities offered by a baseline multi-processor system. More precisely, I propose three CGRA architectures, each contributing differently to maximizing application parallelization. The proposed Single-, Multi- and Interleaved-Datapath CGRA designs allow the developed platform to achieve substantial energy savings of up to 37% when executing complex biomedical applications, with respect to a multi-core-only platform. Secondly, I investigate how modeling technology reliability issues in logic and memory components can be exploited to adequately adjust the frequency and supply voltage of a circuit, with the aim of optimizing its computing performance and energy efficiency. To this end, I propose a novel framework for analyzing the workload-dependent impact of Bias Temperature Instability (BTI) on the quality of biomedical application results. Remarkably, the framework is able to determine the range of safe circuit operating frequencies without introducing worst-case guard bands. Experiments highlight the possibility of safely raising the frequency up to 101% above the maximum obtained with classical static timing analysis. Finally, through the study of several well-known biomedical algorithms, I propose an approach that saves energy by dynamically and unequally protecting an under-powered data memory, in a new way compared to regular error protection schemes. This solution relies on the Dynamic eRror compEnsation And Masking (DREAM) technique, which reduces the energy consumed by traditional error correction codes by approximately 21%.
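
    As a toy illustration of the guard-band argument, the sketch below contrasts a static worst-case frequency choice with a workload-aware one: the highest safe frequency is the one whose clock period still covers the critical-path delay after BTI-induced degradation. All numbers and the linear degradation model are invented for the example; they do not reproduce the thesis's BTI analysis framework or its 101% result.

```python
# Toy illustration of workload-aware frequency selection under BTI ageing.
# All numbers and the linear delay-degradation model are assumptions made
# for the example; the thesis uses its own BTI impact-analysis framework.

NOMINAL_DELAY_NS = 8.0         # fresh critical-path delay (assumed)
WORST_CASE_DEGRADATION = 0.25  # static-timing guard band: +25% delay (assumed)

def max_safe_freq(delay_ns: float) -> float:
    """Highest clock frequency (MHz) whose period covers the path delay."""
    return 1e3 / delay_ns

# Classical approach: guard-band for the worst possible ageing.
f_static = max_safe_freq(NOMINAL_DELAY_NS * (1 + WORST_CASE_DEGRADATION))

# Workload-dependent approach: use the degradation actually induced by the
# biomedical workload's signal statistics (assumed much milder here).
workload_degradation = 0.04
f_workload = max_safe_freq(NOMINAL_DELAY_NS * (1 + workload_degradation))

print(f"static worst-case frequency : {f_static:6.1f} MHz")
print(f"workload-aware frequency    : {f_workload:6.1f} MHz "
      f"(+{100 * (f_workload / f_static - 1):.0f}%)")
```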

    Hard-Real-Time Computing Performance in a Cloud Environment

    The United States Department of Defense (DoD) is rapidly working with the DoD Services to move from multi-year (e.g., 7 to 10 years) traditional acquisition programs to a commercial industry-based approach for software development. While commercial technologies and approaches provide an opportunity for rapid fielding of mission capabilities to pace threats, the suitability of commercial technologies to meet hard-real-time requirements within a surface combat system is unclear. This research establishes technical data to validate the effectiveness and suitability of current commercial technologies to meet the hard-real-time demands of a DoD combat management system. Moreland Jr. (2013) conducted similar research; however, microservices, containers, and container orchestration technologies were not on the DoD radar at the time. Updated knowledge in this area will inform future DoD roadmaps and investments. A mission-based approach using Mission Engineering will be used to set the context for the applied research. A hypothetical yet operationally relevant Strait Transit scenario has been established to provide context for defining the experimental parameters used in assessing the hypothesis. System models are federated to form a system-of-systems architecture, and a cloud computing environment is used to collect data for quantitative analysis.

    Auto-tuning compiler options for HPC
