
    Autonomous Recovery Of Reconfigurable Logic Devices Using Priority Escalation Of Slack

    Field Programmable Gate Array (FPGA) devices offer a suitable platform for survivable hardware architectures in mission-critical systems. In this dissertation, active dynamic redundancy-based fault-handling techniques are proposed which exploit the dynamic partial reconfiguration capability of SRAM-based FPGAs. Self-adaptation is realized by employing reconfiguration in the detection, diagnosis, and recovery phases. To extend these concepts to semiconductor aging and process variation in the deep submicron era, resilient adaptable processing systems are sought to maintain quality and throughput requirements despite the vulnerabilities of the underlying computational devices. A new approach to autonomous fault-handling which addresses these goals is developed using only a uniplex hardware arrangement. It operates by observing a health metric to achieve Fault Demotion using Reconfigurable Slack (FaDReS). Here an autonomous fault isolation scheme is employed which neither requires test vectors nor suspends the computational throughput, but instead observes the value of a health metric based on runtime input. The deterministic flow of the fault isolation scheme guarantees success in a bounded number of reconfigurations of the FPGA fabric. FaDReS is then extended to the Priority Using Resource Escalation (PURE) online redundancy scheme, which considers fault-isolation latency and throughput trade-offs under a dynamic spare arrangement. While deep-submicron designs introduce new challenges, the use of adaptive techniques is seen to provide several promising avenues for improving resilience. The scheme developed is demonstrated by hardware design of various signal processing circuits and their implementation on a Xilinx Virtex-4 FPGA device. These include a Discrete Cosine Transform (DCT) core, Motion Estimation (ME) engine, Finite Impulse Response (FIR) filter, Support Vector Machine (SVM), and Advanced Encryption Standard (AES) blocks, in addition to MCNC benchmark circuits. A significant reduction in power consumption is achieved, ranging from 83% for low motion-activity scenes to 12.5% for high motion-activity video scenes in a novel ME engine configuration. For a typical benchmark video sequence, PURE is shown to maintain a PSNR baseline near 32 dB. The diagnosability, reconfiguration latency, and resource overhead of each approach are analyzed. Compared to previous alternatives, PURE maintains a PSNR within a difference of 4.02 dB to 6.67 dB from the fault-free baseline by escalating healthy resources to higher-priority signal processing functions. The results indicate the benefits of priority-aware resiliency over conventional redundancy approaches in terms of fault recovery, power consumption, and resource-area requirements. Together, these provide a broad range of strategies to achieve autonomous recovery of reconfigurable logic devices under a variety of constraints, operating conditions, and optimization criteria.
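
    The bounded-reconfiguration guarantee lends itself to a divide-and-conquer illustration. The sketch below is a hypothetical rendering of the idea, not the dissertation's implementation: `relocate_to_spare` and `health_metric` stand in for platform-specific partial reconfiguration and runtime quality measurement (e.g., PSNR of the processed stream), and a binary search over suspect processing elements bounds the number of reconfigurations by ceil(log2(N)).

```python
# Hypothetical sketch of health-metric-driven fault isolation in the
# spirit of FaDReS (not the dissertation's code).

def isolate_faulty_pe(pe_ids, relocate_to_spare, health_metric, threshold):
    """Binary-search the suspect processing elements for the faulty one.

    Each probe relocates half of the suspects onto known-good spare
    fabric (one partial reconfiguration) and re-reads the health metric;
    the number of reconfigurations is bounded by ceil(log2(len(pe_ids))).
    """
    suspects = list(pe_ids)
    reconfigs = 0
    while len(suspects) > 1:
        half = suspects[: len(suspects) // 2]
        relocate_to_spare(half)            # one partial reconfiguration
        reconfigs += 1
        if health_metric(half) >= threshold:
            suspects = half                # quality recovered: fault was here
        else:
            suspects = suspects[len(suspects) // 2:]
    return suspects[0], reconfigs

# Simulated fabric with PE 5 faulty: relocating its function onto good
# spare fabric restores quality, so the metric recovers above threshold.
faulty = 5
print(isolate_faulty_pe(
    range(8),
    relocate_to_spare=lambda half: None,
    health_metric=lambda half: 35.0 if faulty in half else 20.0,
    threshold=30.0))   # -> (5, 3): found after ceil(log2(8)) = 3 probes
```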

    Data compression techniques applied to high resolution high frame rate video technology

    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of methods of video data compression described in the open literature was conducted. The survey examines compression methods employing digital computing. The results of the survey are presented; they include a description of each method and an assessment of image degradation and video data parameters. An assessment is made of present and near-term future technology for the implementation of video data compression in high-speed imaging systems. Results of the assessment are discussed and summarized. The results of a study of a baseline HHVT video system, and of approaches for implementing video data compression, are presented. Case studies of three microgravity experiments are presented, and specific compression techniques and implementations are recommended.

    Self-adaptivity of applications on network on chip multiprocessors: the case of fault-tolerant Kahn process networks

    Technology scaling, accompanied by higher operating frequencies and the ability to integrate more functionality in the same chip, has been the driving force behind delivering higher performance computing systems at lower costs. Embedded computing systems, which have been riding the same wave of success, have evolved into complex architectures encompassing a high number of cores interconnected by an on-chip network (usually identified as a Multiprocessor System-on-Chip). However, these trends are hindered by issues that arise as technology scaling continues towards deep submicron scales. Firstly, the growing complexity of these systems and the variability introduced by process technologies make it ever harder to perform a thorough optimization of the system at design time. Secondly, designers are faced with a reliability wall that emerges as age-related degradation reduces the lifetime of transistors, and as the probability of defects escaping post-manufacturing testing increases. In this thesis, we take on these challenges within the context of streaming applications running in network-on-chip based parallel (not necessarily homogeneous) systems-on-chip that adopt the no-remote-memory-access model. In particular, this thesis tackles two main problems: (1) fault-aware online task remapping, and (2) application-level self-adaptation for quality management. For the former, by viewing fault tolerance as a self-adaptation aspect, we adopt a cross-layer approach that aims at graceful performance degradation by addressing permanent faults in processing elements mostly at system level, in particular by exploiting the redundancy available in multi-core platforms. We propose an optimal solution based on an integer linear programming formulation (suitable for design-time adoption) as well as heuristic-based solutions to be used at run-time. We assess the impact of our approach on lifetime reliability. We propose two recovery schemes, based on a checkpoint-and-rollback and a roll-forward technique. For the latter, we propose two variants of a monitor-controller-adapter loop that adapts application-level parameters to meet performance goals. We demonstrate not only that fault tolerance and self-adaptivity can be achieved in embedded platforms, but also that it can be done without incurring large overheads. In addressing these problems, we present techniques which have been realized (depending on their characteristics) in the form of a design tool, a run-time library, or a hardware core to be added to the basic architecture.
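
    As a concrete illustration of the run-time heuristic flavor described above (not the thesis's actual algorithm, whose ILP formulation and heuristics are more elaborate), the sketch below greedily remaps the tasks of a failed processing element onto the least-loaded healthy ones. All names, the load model, and the capacity check are assumptions for this example; communication costs are ignored.

```python
# Illustrative greedy run-time remapping heuristic (assumed model).

def remap_tasks(mapping, task_loads, pe_loads, pe_capacity, failed_pe):
    """Move every task mapped on `failed_pe` to the least-loaded healthy PE.

    mapping:     dict task -> PE (updated in place and returned)
    task_loads:  dict task -> utilization demand
    pe_loads:    dict PE -> current utilization
    pe_capacity: dict PE -> maximum utilization
    """
    victims = [t for t, pe in mapping.items() if pe == failed_pe]
    healthy = {pe: u for pe, u in pe_loads.items() if pe != failed_pe}
    # Place the heaviest tasks first (first-fit-decreasing flavor).
    for task in sorted(victims, key=lambda t: -task_loads[t]):
        pe = min(healthy, key=healthy.get)
        if healthy[pe] + task_loads[task] > pe_capacity[pe]:
            # Graceful degradation would drop or slow low-priority tasks here.
            raise RuntimeError(f"no feasible placement for task {task!r}")
        mapping[task] = pe
        healthy[pe] += task_loads[task]
    return mapping

# Example: tasks A and B must leave the failed PE "pe0".
print(remap_tasks({"A": "pe0", "B": "pe0", "C": "pe1"},
                  {"A": 0.4, "B": 0.2, "C": 0.5},
                  {"pe0": 0.6, "pe1": 0.5, "pe2": 0.1},
                  {"pe0": 1.0, "pe1": 1.0, "pe2": 1.0},
                  "pe0"))   # -> {'A': 'pe2', 'B': 'pe1', 'C': 'pe1'}
```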

    Micropipeline controller design and verification with applications in signal processing


    NONCONTACT DIFFUSE CORRELATION TOMOGRAPHY OF BREAST TUMOR

    Since aggressive cancers are frequently hypermetabolic with angiogenic vessels, quantification of blood flow (BF) can be vital for cancer diagnosis. Our laboratory has developed a noncontact diffuse correlation tomography (ncDCT) system for 3-D imaging of BF distribution in deep tissues (up to centimeters). The ncDCT system employs two sets of optical lenses to project source and detector fibers, respectively, onto the tissue surface, and applies a finite element framework to model light transport in complex tissue geometries. This thesis reports our first step in adapting the ncDCT system for 3-D imaging of BF contrasts in human breast tumors. A commercial 3-D camera was used to obtain the breast surface geometry, which was then converted to a solid volume mesh. An ncDCT probe scanned over a region of interest on the breast mesh surface, and the measured boundary data were used for 3-D image reconstruction of the BF distribution. This technique was tested with computer simulations and in 28 patients with breast tumors. Results from computer simulations suggest that relatively high accuracy can be achieved when the entire tumor is within the sensitive region of diffuse light. Image reconstruction with a priori knowledge of the tumor volume and location can significantly improve the accuracy in recovering tumor BF contrasts. In vivo ncDCT imaging results from the majority of breast tumors showed higher BF contrasts in the tumor regions compared to the surrounding tissues. Reconstructed tumor depths and dimensions matched ultrasound imaging results when the tumors were within the sensitive region of light propagation. The results demonstrate that the ncDCT system has the potential to image BF distributions in soft and vulnerable tissues without distorting tissue hemodynamics. In addition to this primary study, detector fibers with different modes (i.e., single-mode, few-mode, multimode) for photon collection were experimentally explored to improve the signal-to-noise ratio of diffuse correlation spectroscopy flow-oximeter measurements.
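
    At the heart of DCS/DCT flow measurement is the decay rate of the measured intensity autocorrelation g2(tau): faster decay indicates faster flow. The toy extraction below uses the Siegert relation with a single-exponential approximation g2 = 1 + beta*exp(-2*Gamma*tau); the real ncDCT pipeline instead fits a correlation diffusion model on the finite element mesh, so this simplification is an assumption made for illustration only.

```python
import numpy as np

def flow_index(tau, g2):
    """Recover the decay rate Gamma (proportional to a BF index) from g2(tau)."""
    beta = g2[0] - 1.0                               # coherence factor as tau -> 0
    g1_sq = np.clip((g2 - 1.0) / beta, 1e-12, None)  # Siegert: g2 - 1 = beta*|g1|^2
    slope = np.polyfit(tau, np.log(g1_sq), 1)[0]     # log-linear fit: slope = -2*Gamma
    return -slope / 2.0

# Synthetic check: a decay rate of 2.0e4 1/s should be recovered.
tau = np.logspace(-6, -4, 50)
g2 = 1.0 + 0.5 * np.exp(-2.0 * 2.0e4 * tau)
print(flow_index(tau, g2))   # ~2.0e4
```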

    NONINVASIVE MULTIMODAL DIFFUSE OPTICAL IMAGING OF VULNERABLE TISSUE HEMODYNAMICS

    Measurement of tissue hemodynamics provides vital information for the assessment of tissue viability. This thesis reports three noninvasive near-infrared diffuse optical systems for spectroscopic measurements and tomographic imaging of tissue hemodynamics in vulnerable tissues, with the goal of disease diagnosis and treatment monitoring. A hybrid near-infrared spectroscopy/diffuse correlation spectroscopy (NIRS/DCS) instrument with a contact fiber-optic probe was developed and utilized for simultaneous and continuous monitoring of blood flow (BF), blood oxygenation, and oxidative metabolism in exercising gastrocnemius. Results measured by the hybrid NIRS/DCS instrument in 37 subjects (mean age: 67 ± 6) indicated that vitamin D supplementation plus aerobic training improved muscle metabolic function in the older population. To reduce the interference and potential infection risk on vulnerable tissues caused by contact measurement, a noncontact diffuse correlation spectroscopy/tomography (ncDCS/ncDCT) system was then developed. The ncDCS/ncDCT system employed optical lenses to project a limited number of sources and detectors onto the tissue surface. A motor-driven noncontact probe scanned over a region of interest to collect boundary data for three-dimensional (3D) tomographic imaging of the blood flow distribution. The ncDCS was tested for BF measurements in mastectomy skin flaps. Nineteen patients who underwent mastectomy and implant-based breast reconstruction were measured before and immediately after mastectomy. The BF index after mastectomy in each patient was normalized to its baseline value before surgery to obtain relative BF (rBF). Since rBF values in the patients with necrosis (n = 4) were significantly lower than in those without necrosis (n = 15), rBF levels can be used to predict mastectomy skin flap necrosis. The ncDCT was tested for 3D imaging of BF distributions in the chronic wounds of 5 patients. Spatial variations in BF contrasts over the wounded tissues were observed, indicating the capability of ncDCT in detecting tissue hemodynamic heterogeneities. To improve temporal/spatial resolution and avoid motion artifacts due to the long mechanical scanning of ncDCT, an electron-multiplying charge-coupled device based noncontact speckle contrast diffuse correlation tomography (scDCT) system was developed. Validation of scDCT was done by imaging both high and low BF contrasts in tissue-like phantoms and human forearms. In a wound imaging study using scDCT, significantly lower BF values were observed in the burned areas/volumes compared to surrounding normal tissues in two patients with burns. One limitation of this study was the potential influence of other unknown tissue optical properties, such as the tissue absorption coefficient (µa), on BF measurements. A new algorithm was then developed to extract both µa and BF using light intensities and speckle contrasts measured by scDCT at multiple source-detector distances. The new algorithm was validated using tissue-like liquid phantoms with varied values of µa and BF index. In vivo validation and application of the innovative scDCT technique with the new algorithm is the subject of future work.
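
    The rBF normalization used for the mastectomy study is simple to state precisely, as in the sketch below. This is an illustration only; the 0.5 cutoff is a hypothetical threshold chosen for the example, not the study's criterion.

```python
def relative_bf(bf_post, bf_baseline):
    """rBF: post-mastectomy BF index normalized to the pre-surgery baseline."""
    return bf_post / bf_baseline   # rBF = 1.0 means no change from baseline

def flag_at_risk(rbf, threshold=0.5):
    """Flag a flap as at risk of necrosis when perfusion drops below the cutoff."""
    return rbf < threshold

# A flap whose flow fell to one third of its own baseline would be flagged:
print(flag_at_risk(relative_bf(bf_post=0.8e-8, bf_baseline=2.4e-8)))  # True
```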

    MULTIMODAL NONCONTACT DIFFUSE OPTICAL REFLECTANCE IMAGING OF BLOOD FLOW AND FLUORESCENCE CONTRASTS

    In this study we design a succession of three increasingly adept diffuse optical devices for the simultaneous 3D imaging of blood flow and fluorescence contrasts in relatively deep tissues. Together, these metrics can provide future insights into the relationship between blood flow distributions and fluorescent or fluorescently tagged agents. A noncontact diffuse correlation tomography (ncDCT) device was first developed to recover flow by mechanically scanning a lens-based apparatus across the sample; the novel flow reconstruction technique and the measurement of boundary curvature were advanced in tandem. CCD-camera detection with a high sampling density and flow recovery by speckle contrast followed in the next instrument, termed speckle contrast diffuse correlation tomography (scDCT). In scDCT, an optical switch sequenced coherent near-infrared light into contact-based source fibers around the sample surface. A fully noncontact reflectance-mode device finalized the improvements by combining noncontact scDCT (nc_scDCT) and diffuse fluorescence tomography (DFT) techniques. In the combined device, a galvo-mirror directed polarized light to the sample surface, and filters and a cross polarizer in stackable tubes enabled extraction of flow indices, absorption coefficients, and fluorescence concentrations (indocyanine green, ICG). The scDCT instrumentation was validated through detection of a cubical solid tissue-like phantom heterogeneity beneath a liquid phantom (background) surface, where the recovered center and dimensions agreed with the known values. The combined nc_scDCT/DFT identified both a cubical solid phantom and a tube of stepwise-varying ICG concentration (absorption and fluorescence contrast). The tube imaged by nc_scDCT/DFT exhibited the expected trends in absorption and fluorescence, and the tube shape, orientation, and localization were recovered in general agreement with actuality. The flow heterogeneity localization was successfully extracted, and its average relative flow values were in agreement with previous studies. Increasing ICG concentrations induced notable disturbances in the tube region (≥ 0.25 μM/1 μM for 785 nm/830 nm), suggesting that the growing absorption (a 320% increase at 785 nm) introduced errors. We observe that 830 nm sits lower in the ICG absorption spectrum, and the flow measured at that wavelength was less influenced than at 785 nm. From these results we anticipate that the best practice in future studies will be to utilize a laser source with a wavelength in a low region of the ICG absorption spectrum (e.g., 830 nm), or to monitor flow only prior to ICG injection or after clearance. In addition, ncDCT was initially tested in a mouse tumor model to examine tumor size and averaged flow changes over a four-day interval. The next steps in advancing the combined device include the straightforward automation of data acquisition and filter rotation, and application to in vivo tumor studies. These animal/clinical models may seek information such as simultaneous detection of tumor flow, fluorescence, and absorption contrasts, or analysis of the relationship between variably sized fluorescently tagged nanoparticles, their tumor deposition, and flow distributions.
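
    scDCT's key primitive is the local speckle contrast K = sigma/mean computed over small pixel windows of a raw speckle image; at a fixed exposure time, lower K indicates faster flow. The sketch below is a generic illustration of that computation, not this study's code; the 7x7 window is an assumed (common) choice.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(image, win=7):
    """Return the local speckle contrast map K of a 2-D speckle image."""
    img = image.astype(float)
    mean = uniform_filter(img, size=win, mode="reflect")        # local mean
    mean_sq = uniform_filter(img**2, size=win, mode="reflect")  # local E[x^2]
    var = np.clip(mean_sq - mean**2, 0.0, None)                 # local variance
    return np.sqrt(var) / np.clip(mean, 1e-12, None)

# Fully developed speckle (exponential intensity statistics) gives K near 1:
rng = np.random.default_rng(0)
print(speckle_contrast(rng.exponential(100.0, (64, 64))).mean())
```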

    SOFTWARE DIAGNOSIS USING COMPRESSED SIGNATURE SEQUENCES

    Software monitoring and debugging can be efficiently supported by one of the concurrent error detection methods: the application of watchdog processors. A watchdog processor, as a co-processor, receives and evaluates signatures assigned to the states of the program execution. After checking, it stores the run-time sequence of signatures, which identify the statements of the program. In this way, a trace of the statements executed before the error is available. The signature buffer can be utilized efficiently if the signature sequence is compressed. In this paper, two real-time compression methods are presented and compared. The first one uses predefined dictionaries, while the other utilizes the structural information encoded in the signatures.
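
    To illustrate the predefined-dictionary approach, the toy sketch below replaces known signature subsequences (which, in practice, would be derived from the program's control flow) with short codes and escapes everything else. The dictionary contents and escape marker are invented for this example and do not reproduce the paper's encoding.

```python
# Toy predefined-dictionary compression of a run-time signature sequence.

DICT = {("s1", "s2", "s3"): 0, ("s4", "s5"): 1}  # hypothetical entries
ESC = 0xFF                                       # escape marker for literals

def compress(signatures):
    out, i = [], 0
    while i < len(signatures):
        for seq, code in DICT.items():           # a real encoder would pick
            if tuple(signatures[i:i + len(seq)]) == seq:  # the longest match
                out.append(code)
                i += len(seq)
                break
        else:
            out.extend([ESC, signatures[i]])     # no match: emit literal
            i += 1
    return out

print(compress(["s1", "s2", "s3", "s9", "s4", "s5"]))  # [0, 255, 's9', 1]
```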