7,114 research outputs found

    Dynamic Thermal Imaging for Intraoperative Monitoring of Neuronal Activity and Cortical Perfusion

    Get PDF
    Neurosurgery is a demanding medical discipline that requires a complex interplay of several neuroimaging techniques. This allows structural as well as functional information to be recovered and visualized for the surgeon. In the case of tumor resections, this approach allows a more fine-grained differentiation of healthy and pathological tissue, which positively influences the postoperative outcome as well as the patient's quality of life. In this work, we discuss several approaches to establish thermal imaging as a novel neuroimaging technique, primarily to visualize neural activity and the perfusion state in the case of ischaemic stroke. Both applications require novel methods for data preprocessing, visualization, pattern recognition, and regression analysis of intraoperative thermal imaging. Online multimodal integration of preoperative and intraoperative data is accomplished by a 2D-3D image registration and image fusion framework with an average accuracy of 2.46 mm. In navigated surgeries, the proposed framework provides all the tools necessary to project intraoperative 2D imaging data onto preoperative 3D volumetric datasets such as 3D MR or CT imaging. Additionally, a fast machine learning framework for the recognition of cortical NaCl rinsings is discussed throughout this thesis. It enables a standardized quantification of tissue perfusion by means of an approximated heating model. Classifying the parameters of these models yields a map of connected areas, and we show that these areas correlate with the demarcation caused by an ischaemic stroke as segmented in postoperative CT datasets. Finally, a semiparametric regression model has been developed for intraoperative monitoring of neural activity in the somatosensory cortex via somatosensory evoked potentials. These results were correlated with the neural activity measured by optical imaging. We found that thermal imaging yields comparable results, yet does not share the limitations of optical imaging. In this thesis we emphasize that thermal imaging is a novel and valid tool for both intraoperative functional and structural neuroimaging.
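    As a rough illustration of the perfusion-quantification idea above, the following Python sketch fits a simple exponential re-warming model to a single pixel's temperature curve recorded after a cold saline rinse and returns its parameters as features for a subsequent classifier. The model form, parameter names, and data are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch (not the thesis code): fit an exponential re-warming model to a
# per-pixel thermal time series after a cold NaCl rinse; the fitted parameters
# can then serve as features for a perfusion classifier.
import numpy as np
from scipy.optimize import curve_fit

def heating_model(t, T_inf, dT, tau):
    """Exponential recovery towards the baseline temperature T_inf."""
    return T_inf - dT * np.exp(-t / tau)

def fit_pixel(t, temps):
    """Return (T_inf, dT, tau) for one pixel's temperature curve."""
    p0 = (temps[-1], temps[-1] - temps[0], 10.0)  # rough initial guess
    params, _ = curve_fit(heating_model, t, temps, p0=p0, maxfev=5000)
    return params

# Synthetic example: well-perfused tissue re-warms faster (small tau).
t = np.linspace(0, 60, 120)                      # seconds after the rinse
temps = heating_model(t, 36.5, 4.0, 8.0) + 0.05 * np.random.randn(t.size)
T_inf, dT, tau = fit_pixel(t, temps)
print(f"baseline={T_inf:.2f} degC, drop={dT:.2f} K, tau={tau:.1f} s")
```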

    Mass fluxes and isofluxes of methane (CH4) at a New Hampshire fen measured by a continuous wave quantum cascade laser spectrometer

    Get PDF
    We have developed a mid-infrared continuous-wave quantum cascade laser direct-absorption spectrometer (QCLS) capable of high-frequency (≥1 Hz) measurements of the 12CH4 and 13CH4 isotopologues of methane (CH4) with in situ 1-s RMS δ13C precision of 1.5‰ and Allan-minimum precision of 0.2‰. We deployed this QCLS in a well-studied New Hampshire fen to compare measurements of CH4 isoflux by eddy covariance (EC) to Keeling regressions of data from automated flux chamber sampling. Mean CH4 fluxes of 6.5 ± 0.7 mg CH4 m−2 hr−1 over two days of EC sampling in July 2009 were indistinguishable from mean autochamber CH4 fluxes (6.6 ± 0.8 mg CH4 m−2 hr−1) over the same period. The mean δ13C composition of emitted CH4 calculated using EC isoflux methods was −71 ± 8‰ (95% C.I.), while Keeling regressions of 332 chamber closing events over 8 days yielded a corresponding value of −64.5 ± 0.8‰. Ebullitive fluxes, representing ∼10% of total CH4 fluxes at this site, were on average 1.2‰ enriched in 13C compared to diffusive fluxes. CH4 isoflux time series have the potential to improve process-based understanding of methanogenesis, fully characterize source isotopic distributions, and serve as additional constraints for both regional and global CH4 modeling analyses.
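    The Keeling-regression step described above can be illustrated with a minimal sketch: for each chamber closure, measured δ13C is regressed against 1/[CH4], and the intercept estimates the isotopic signature of the emitted CH4. The two-member mixing data below are synthetic placeholders, not values from the study.

```python
# Minimal Keeling-regression sketch: the intercept of delta-13C versus 1/[CH4]
# estimates the isotopic signature of the CH4 source. Illustrative data only.
import numpy as np

def keeling_intercept(ch4_ppm, delta13c_permil):
    """Ordinary least-squares fit of delta13C against 1/[CH4]; returns the intercept."""
    x = 1.0 / np.asarray(ch4_ppm)
    y = np.asarray(delta13c_permil)
    slope, intercept = np.polyfit(x, y, 1)
    return intercept

# Synthetic chamber closure: background air mixing with a -65 permil source.
bg_ch4, bg_d13c, src_d13c = 1.9, -47.0, -65.0
added = np.linspace(0.1, 4.0, 30)                  # CH4 added by the fen (ppm)
ch4 = bg_ch4 + added
d13c = (bg_ch4 * bg_d13c + added * src_d13c) / ch4  # two-member mixing
print(f"estimated source delta13C: {keeling_intercept(ch4, d13c):.1f} permil")
```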

    Marshall Space Flight Center Research and Technology Report 2019

    Get PDF
    Today, our calling to explore is greater than ever before, and here at Marshall Space Flight Center we make human deep space exploration possible. A key goal for Artemis is demonstrating and perfecting on the Moon the capabilities and technologies needed for humans to get to Mars. This year's report features 10 of the Agency's 16 Technology Areas, and I am proud of Marshall's role in creating solutions for so many of these daunting technical challenges. Many of these projects will lead to a sustainable in-space architecture for human space exploration that will allow us to travel to the Moon, on to Mars, and beyond. Others are developing new scientific instruments capable of providing an unprecedented glimpse into our universe. NASA has led the charge in space exploration for more than six decades, and through the Artemis program we will help build on our work in low Earth orbit and pave the way to the Moon and Mars. At Marshall, we leverage the skills and interest of the international community to conduct scientific research, develop and demonstrate technology, and train international crews to operate farther from Earth for longer periods of time than ever before: first at the lunar surface, then on to our next giant leap, human exploration of Mars. While each project in this report seeks to advance new technology and challenge conventions, it is important to recognize the diversity of activities and people supporting our mission. This report not only showcases the Center's capabilities and our partnerships, it also highlights the progress our people have achieved in the past year. These scientists, researchers, and innovators are why Marshall and NASA will continue to be leaders in innovation, exploration, and discovery for years to come.

    Vision Sensors and Edge Detection

    Get PDF
    The Vision Sensors and Edge Detection book reflects a selection of recent developments within the area of vision sensors and edge detection. There are two sections in this book. The first section presents vision sensors with applications to panoramic vision sensors, wireless vision sensors, and automated vision sensor inspection, and the second presents image processing techniques such as image measurements, image transformations, filtering, and parallel computing.
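    For illustration, a minimal Sobel edge-detection sketch in the spirit of the image-processing chapters is given below; it is not taken from the book itself.

```python
# Minimal Sobel edge-detection sketch: gradient magnitude from horizontal and
# vertical Sobel responses. Purely illustrative example.
import numpy as np
from scipy.ndimage import sobel

def edge_magnitude(image):
    """Return the Sobel gradient magnitude of a 2D grayscale image."""
    gx = sobel(image.astype(float), axis=1)
    gy = sobel(image.astype(float), axis=0)
    return np.hypot(gx, gy)

# Synthetic test image: a bright square on a dark background.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
edges = edge_magnitude(img)
print("strongest edge response:", edges.max())
```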

    A Study on Recent Developments and Issues with Obstacle Detection Systems for Automated Vehicles

    Get PDF
    This paper reviews current developments and discusses some critical issues with obstacle detection systems for automated vehicles. The concept of autonomous driving is a key driver of future mobility. Obstacle detection systems play a crucial role in implementing and deploying autonomous driving on our roads and city streets. The current review looks at the technology and existing systems for obstacle detection. Specifically, we look at the performance of LIDAR, RADAR, vision cameras, ultrasonic sensors, and infrared (IR) sensors and review their capabilities and behaviour in a number of different situations: during daytime, at night, in extreme weather conditions, in urban areas, in the presence of smooth surfaces, in situations where emergency service vehicles need to be detected and recognised, and in situations where potholes need to be observed and measured. It is suggested that combining different technologies for obstacle detection gives a more accurate representation of the driving environment. In particular, for obstacle detection in extreme weather conditions (rain, snow, fog) and in some specific urban situations (shadows, reflections, potholes, insufficient illumination), the current solutions, although already quite advanced, do not appear sophisticated enough to guarantee 100% precision and accuracy, and considerable further effort is therefore needed.
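    The sensor-combination argument above can be sketched as a simple late-fusion rule in which each sensor's detection confidence is weighted by how much it can be trusted under the current conditions. The sensor names, weights, and data below are illustrative assumptions, not values from the paper.

```python
# Illustrative late-fusion sketch: combine per-sensor obstacle confidences with
# condition-dependent weights (e.g. heavy rain degrades camera and LIDAR).
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str
    confidence: float   # 0..1 from the sensor-specific detector

# Hypothetical per-sensor trust weights for heavy rain.
WEIGHTS_RAIN = {"radar": 0.5, "lidar": 0.2, "camera": 0.2, "ultrasonic": 0.1}

def fused_confidence(detections, weights):
    """Weighted average of sensor confidences; missing sensors contribute zero."""
    total_w = sum(weights.values())
    score = sum(weights.get(d.sensor, 0.0) * d.confidence for d in detections)
    return score / total_w

dets = [Detection("radar", 0.9), Detection("lidar", 0.4), Detection("camera", 0.3)]
print(f"fused obstacle confidence in rain: {fused_confidence(dets, WEIGHTS_RAIN):.2f}")
```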

    Sea-Surface Object Detection Based on Electro-Optical Sensors: A Review

    Get PDF
    Sea-surface object detection is critical for the navigation safety of autonomous ships. Electro-optical (EO) sensors, such as video cameras, complement onboard radar in detecting small sea-surface obstacles. Traditionally, researchers have used horizon detection, background subtraction, and foreground segmentation techniques to detect sea-surface objects. Recently, deep learning-based object detection technologies have been gradually applied to sea-surface object detection. This article provides a comprehensive overview of sea-surface object-detection approaches in which the advantages and drawbacks of each technique are compared, covering four essential aspects: EO sensors and image types, traditional object-detection methods, deep learning methods, and maritime dataset collection. In particular, sea-surface object detection based on deep learning methods is thoroughly analyzed and compared, with highly influential public datasets introduced as benchmarks to verify the effectiveness of these approaches.
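    As a minimal illustration of one of the traditional techniques surveyed above, the sketch below runs OpenCV's MOG2 background subtractor on synthetic frames containing a small moving object; it is not tied to any of the datasets discussed in the article.

```python
# Minimal background-subtraction sketch using OpenCV's MOG2 model on synthetic
# grayscale frames; a small bright "object" drifts across the scene.
import numpy as np
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=50, varThreshold=16)

for i in range(60):
    # Noisy background; from frame 30 onwards a bright patch moves to the right.
    frame = (20 * np.random.rand(120, 160)).astype(np.uint8)
    if i >= 30:
        frame[40:50, 2 * i : 2 * i + 10] = 255
    mask = subtractor.apply(frame)

# Pixels flagged as foreground in the last frame approximate the moving object.
print("foreground pixels detected:", int(np.count_nonzero(mask)))
```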

    Grid Infrastructure for Satellite Data Processing in Ukraine

    Get PDF
    In this paper, conceptual foundations for the development of Grid systems aimed at satellite data processing are discussed. The state of the art in the development of such Grid systems is analyzed, and a model of a Grid system for satellite data processing is proposed. The experience obtained during the development of the Grid system for satellite data processing at the Space Research Institute of NASU-NSAU is also discussed.

    Sensing and Signal Processing in Smart Healthcare

    Get PDF
    In the last decade, we have witnessed the rapid development of electronic technologies that are transforming our daily lives. Such technologies are often integrated with various sensors that facilitate the collection of human motion and physiological data and are equipped with wireless communication modules such as Bluetooth, radio frequency identification, and near-field communication. In smart healthcare applications, designing ergonomic and intuitive human–computer interfaces is crucial because a system that is not easy to use creates a huge obstacle to adoption and may significantly reduce the efficacy of the solution. Signal and data processing is another important consideration in smart healthcare applications because it must ensure high accuracy with a high level of confidence in order for the applications to be useful to clinicians in making diagnosis and treatment decisions. This Special Issue is a collection of 10 articles selected from a total of 26 contributions. These contributions span the areas of signal processing and smart healthcare systems and were mostly contributed by authors from Europe, including Italy, Spain, France, Portugal, Romania, Sweden, and the Netherlands. Authors from China, Korea, Taiwan, Indonesia, and Ecuador are also included.

    PRAXIS: low thermal emission high efficiency OH suppressed fibre spectrograph

    Full text link
    PRAXIS is a second-generation instrument that follows on from GNOSIS, which was the first instrument to use fibre Bragg gratings for OH background suppression. The Bragg gratings reflect the NIR OH lines while being transparent to light between the lines. This gives a much higher signal-to-noise ratio at low resolution, and also at higher resolutions by removing the scattered wings of the OH lines. The specifications call for high throughput and very low thermal and detector noise so that PRAXIS will remain sky-noise limited. The optical train is made up of fore-optics, an IFU, a fibre bundle, the Bragg grating unit, a second fibre bundle, and a spectrograph. GNOSIS used the pre-existing IRIS2 spectrograph, while PRAXIS will use a new spectrograph specifically designed for fibre Bragg grating OH suppression and optimised for 1470 nm to 1700 nm (it can also be used in the 1090 nm to 1260 nm band by changing the grating and refocussing). This results in significantly higher transmission due to high-efficiency coatings, a VPH grating at a low incident angle, and low-absorption glasses. The detector noise will also be lower. Throughout the PRAXIS design, special care was taken at every step along the optical path to reduce thermal emission or stop it leaking into the system. This made the spectrograph design challenging because practical constraints required that the detector and spectrograph enclosures be physically separated by air at ambient temperature. At present, the instrument uses the GNOSIS fibre Bragg grating OH suppression unit. We intend to soon use a new OH suppression unit based on multicore fibre Bragg gratings, which will allow an increased field of view per fibre. Theoretical calculations show that the gain in interline sky background signal-to-noise ratio over GNOSIS may be as high as 9 with the GNOSIS OH suppression unit and 17 with the multicore fibre OH suppression unit.
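    The interline signal-to-noise argument above can be sketched with a simple photon-plus-detector noise budget: suppressing the OH lines and lowering thermal and detector noise both shrink the noise term between the lines. All count rates in the sketch below are made-up placeholders, not PRAXIS or GNOSIS numbers.

```python
# Back-of-the-envelope SNR sketch: Poisson noise from object, sky, and dark
# counts plus detector read noise. Placeholder rates, illustrative only.
import math

def snr(obj_rate, sky_rate, dark_rate, read_noise, t_exp):
    """Point-source SNR for one exposure with a simple noise budget."""
    signal = obj_rate * t_exp
    noise = math.sqrt(signal + (sky_rate + dark_rate) * t_exp + read_noise**2)
    return signal / noise

t = 900.0  # seconds
unsuppressed = snr(obj_rate=0.5, sky_rate=200.0, dark_rate=0.1,  read_noise=15.0, t_exp=t)
suppressed   = snr(obj_rate=0.5, sky_rate=1.0,   dark_rate=0.01, read_noise=3.0,  t_exp=t)
print(f"interline SNR gain from suppression: {suppressed / unsuppressed:.1f}x")
```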

    Infrared Image Super-Resolution: Systematic Review, and Future Trends

    Full text link
    Image Super-Resolution (SR) is essential for a wide range of computer vision and image processing tasks. Investigating infrared (IR) image (or thermal image) super-resolution is a continuing concern within the development of deep learning. This survey aims to provide a comprehensive perspective on IR image super-resolution, including its applications, the dilemmas of hardware imaging systems, and a taxonomy of image processing methodologies. In addition, the datasets and evaluation metrics used in IR image super-resolution tasks are discussed. Furthermore, the deficiencies in current technologies and promising directions for the community to explore are highlighted. To cope with the rapid development in this field, we intend to regularly update the relevant work at https://github.com/yongsongH/Infrared_Image_SR_Survey.
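    As a small illustration of the evaluation-metric side mentioned above, the sketch below computes PSNR between a reference image and a reconstruction; it is a generic metric implementation, not code from the survey.

```python
# Minimal PSNR sketch, a standard super-resolution evaluation metric.
import numpy as np

def psnr(reference, reconstructed, max_value=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    mse = np.mean((reference.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value**2 / mse)

hr = np.random.randint(0, 256, (64, 64)).astype(np.uint8)      # "ground truth"
sr = np.clip(hr + np.random.normal(0, 5, hr.shape), 0, 255)    # noisy reconstruction
print(f"PSNR: {psnr(hr, sr):.1f} dB")
```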