
    Engineering Approaches for Improving Cortical Interfacing and Algorithms for the Evaluation of Treatment Resistant Epilepsy

    abstract: Epilepsy is a group of disorders that cause seizures in approximately 2.2 million people in the United States. Over 30% of these patients have epilepsies that do not respond to treatment with anti-epileptic drugs. For this population, focal resection surgery could offer long-term seizure freedom. Surgery candidates undergo a myriad of tests and monitoring to determine where and when seizures occur. The “gold standard” method for focus identification involves the placement of electrocorticography (ECoG) grids in the subdural space, followed by continual monitoring and visual inspection of the patient’s cortical activity. This process, however, is highly subjective and uses dated technology. Multiple studies were performed to investigate how the evaluation process could benefit from an algorithmic adjunct using current ECoG technology, and how the use of new microECoG technology could further improve the process. Computational algorithms can quickly and objectively find signal characteristics that may not be detectable by visual inspection, but many assume the data are stationary and/or linear, which biological data are not. An empirical mode decomposition (EMD) based algorithm was developed to detect potential seizures and tested on data collected from eight patients undergoing monitoring for focal resection surgery. EMD is data-driven and requires neither linearity nor stationarity. The results suggest that a data-driven algorithm suited to biological signals could serve as a useful tool to objectively identify changes in cortical activity associated with seizures. Next, the use of microECoG technology was investigated. Though both ECoG and microECoG grids are composed of electrodes resting on the surface of the cortex, changing the diameter of the electrodes creates non-trivial changes in the physics of the electrode-tissue interface that must be accounted for. Experimenting with different recording configurations showed that proper grounding, referencing, and amplification are critical to obtaining high-quality neural signals from microECoG grids. Finally, the relationship between data collected from the cortical surface with micro- and macroelectrodes was studied. Simultaneous recordings with the two electrode types showed differences in power spectra suggesting that macroelectrodes capture activity, possibly from deep structures, that is not accessible to microelectrodes.
    Doctoral Dissertation, Bioengineering, 201
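    The abstract does not give the detector's actual parameters, so the Python sketch below is only a minimal illustration of the general EMD-based approach it describes: decompose a channel, track the energy of the fast oscillatory modes, and flag windows that deviate from baseline. The choice of the first three IMFs, the window length, and the MAD-based threshold are assumptions for illustration, not the author's algorithm.

    ```python
    import numpy as np
    from scipy.signal import hilbert
    from PyEMD import EMD  # pip install EMD-signal

    def detect_candidate_seizures(ecog, fs, win_s=2.0, k=5.0):
        """Flag time windows whose high-frequency IMF energy exceeds baseline.

        ecog : 1-D array, one ECoG channel
        fs   : sampling rate in Hz
        """
        # Data-driven decomposition: no linearity or stationarity assumed.
        imfs = EMD().emd(np.asarray(ecog, dtype=float))
        # Assumption: the first few IMFs carry the seizure-band oscillations.
        hf = imfs[:3].sum(axis=0)
        envelope = np.abs(hilbert(hf))          # instantaneous amplitude
        win = int(win_s * fs)
        n = len(envelope) // win
        energy = (envelope[: n * win] ** 2).reshape(n, win).mean(axis=1)
        # Robust threshold: median + k * median absolute deviation.
        baseline = np.median(energy)
        mad = np.median(np.abs(energy - baseline)) + 1e-12
        flagged = np.flatnonzero(energy > baseline + k * mad)
        return [(i * win_s, (i + 1) * win_s) for i in flagged]
    ```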

    Design of a High-Speed Architecture for Stabilization of Video Captured Under Non-Uniform Lighting Conditions

    Video captured under shaky conditions exhibits unwanted vibrations. A robust algorithm that stabilizes the video by compensating for vibrations arising from the physical motion of the camera is presented in this dissertation. A very high-performance hardware architecture on Field Programmable Gate Array (FPGA) technology is also developed to implement the stabilization system. Stabilization of video sequences captured under non-uniform lighting conditions begins with a nonlinear enhancement process. This improves the visibility of scenes captured by physical sensing devices with limited dynamic range, a limitation that causes the saturated region of the image to shadow out the rest of the scene. It is therefore desirable to recover a more uniformly lit scene in which the shadows are eliminated to a certain extent. Stabilization of video requires the estimation of global motion parameters. By obtaining reliable background motion, the video can be spatially transformed to align with a reference sequence, thereby eliminating the unintended motion of the camera. A reflectance-illuminance model for video enhancement is used in this research work to improve the visibility and quality of the scene. With fast color-space conversion, the computational complexity is reduced to a minimum. The basic video stabilization model is formulated and configured for hardware implementation. The model involves the evaluation of reliable features for tracking, motion estimation, and an affine transformation to map the display coordinates of the stabilized sequence. Multiplications, divisions, and exponentiations are replaced by simple arithmetic and logic operations using improved log-domain computations in the hardware modules. On Xilinx's Virtex II 2V8000-5 FPGA platform, the prototype system consumes 59% of the logic slices, 30% of the flip-flops, 34% of the lookup tables, and 35% of the embedded RAMs, along with two ZBT frame buffers. The system is capable of rendering 180.9 million pixels per second (mpps) and consumes approximately 30.6 watts of power at 1.5 volts. With 1024×1024 frames, this throughput is equivalent to 172 frames per second (fps). Future work will optimize the performance-resource trade-off to meet application-specific needs, and will extend the model to the extraction and tracking of moving objects, since the model inherently encapsulates the attributes of spatial distortion and motion prediction needed to reduce complexity. With these parameters to narrow the processing range, it is possible to achieve a minimum of 20 fps on desktop computers with Intel Core 2 Duo or Quad Core CPUs and 2 GB of DDR2 memory, without dedicated hardware.
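    As a software-level illustration of the core stabilization loop described above (selecting reliable features, estimating global background motion, and applying a compensating affine transform), the following Python/OpenCV sketch aligns a frame to a reference frame. It stands in for the dissertation's FPGA modules; the function name and parameter values are assumptions for illustration.

    ```python
    import cv2

    def stabilize_frame(ref_gray, cur_gray, cur_bgr):
        """Warp the current frame so its background aligns with the reference."""
        # Select reliable features to track in the reference frame.
        pts = cv2.goodFeaturesToTrack(ref_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=8)
        # Track them into the current frame (pyramidal Lucas-Kanade).
        nxt, status, _err = cv2.calcOpticalFlowPyrLK(ref_gray, cur_gray, pts, None)
        good = status.ravel() == 1
        # Robustly estimate the global (background) motion as a partial affine
        # model; RANSAC rejects features on independently moving objects.
        M, _inliers = cv2.estimateAffinePartial2D(nxt[good], pts[good],
                                                  method=cv2.RANSAC)
        h, w = ref_gray.shape
        # Apply the compensating affine transform to cancel camera motion.
        return cv2.warpAffine(cur_bgr, M, (w, h))
    ```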

    Helicopter human factors research

    Helicopter flight is among the most demanding of all human-machine integrations. The inherent manual-control complexities of rotorcraft are made even more challenging by the small margin for error created in certain operations, such as nap-of-the-Earth (NOE) flight, by the proximity of the terrain. Accident data recount numerous examples of unintended conflict between helicopters and terrain and attest to the perceptual and control difficulties associated with low-altitude flight tasks. Ames Research Center, in cooperation with the U.S. Army Aeroflightdynamics Directorate, has initiated an ambitious research program aimed at increasing safety margins for both civilian and military rotorcraft operations. The program is broad, fundamental, and focused on the development of scientific understanding and technological countermeasures. Research being conducted in several areas is reviewed: workload assessment, prediction, and measure validation; development of advanced displays and effective pilot/automation interfaces; identification of visual cues necessary for low-level, low-visibility flight and modeling of visual flight-path control; and pilot training.

    Motion Segmentation Aided Super Resolution Image Reconstruction

    This dissertation addresses Super Resolution (SR) Image Reconstruction with a focus on motion segmentation. The main thrust is Information Complexity guided Gaussian Mixture Models (GMMs) for Statistical Background Modeling. In the process of developing our framework, we also focus on two other topics: motion-trajectory estimation for global and local scene-change detection, and image reconstruction to obtain high resolution (HR) representations of the moving regions. Such a framework supports dynamic scene understanding and the recognition of individuals and threats in image sequences recorded with either stationary or non-stationary camera systems. We introduce a new technique called Information Complexity guided Statistical Background Modeling, in which we employ GMMs that are optimal with respect to information-complexity criteria. Moving objects are segmented out through background subtraction using the computed background model. This technique produces results superior to competing background modeling strategies. State-of-the-art SR Image Reconstruction studies combine the information from a set of slightly different low resolution (LR) images of a static scene to construct an HR representation. The crucial challenge not handled in these studies is accumulating the corresponding information from highly displaced moving objects. To address this, a framework for SR Image Reconstruction of moving objects with such large displacements is developed. Our assumption is that LR images differ from one another due to the local motion of the objects and the global motion of the scene imposed by a non-stationary imaging system. Contrary to traditional SR approaches, we proceed in several steps: suppression of the global motion; motion segmentation, accompanied by background subtraction, to extract the moving objects; suppression of the local motion of the segmented regions; and super-resolving the accumulated information coming from the moving objects rather than the whole scene. The result is a reliable offline SR Image Reconstruction tool that handles several types of dynamic scene change, compensates for the effects of the camera system, and provides data redundancy by removing the background. The framework proved superior to state-of-the-art algorithms, which make no significant effort toward dynamic scene representation for non-stationary camera systems.
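    As an illustration of GMM-based background subtraction for motion segmentation, the Python sketch below uses OpenCV's MOG2 subtractor, a per-pixel Gaussian mixture model with its own adaptive component selection, as a readily available stand-in for the dissertation's information-complexity-guided mixtures; the function name and parameter values are assumptions, not the author's implementation.

    ```python
    import cv2

    def segment_moving_objects(video_path):
        """Return per-frame foreground masks from a per-pixel GMM background model."""
        # MOG2 maintains a Gaussian mixture per pixel and adapts it over time.
        subtractor = cv2.createBackgroundSubtractorMOG2(history=300,
                                                        varThreshold=16,
                                                        detectShadows=False)
        cap = cv2.VideoCapture(video_path)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        masks = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            fg = subtractor.apply(frame)          # update model, classify pixels
            fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)  # remove speckle
            masks.append(fg)                      # binary mask of moving pixels
        cap.release()
        return masks
    ```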