
    Evaluation of Intra- and Interscanner Reliability of MRI Protocols for Spinal Cord Gray Matter and Total Cross-Sectional Area Measurements.

    Background: In vivo quantification of spinal cord atrophy in neurological diseases using MRI has attracted increasing attention.
    Purpose: To compare, across different platforms, the most promising imaging techniques for assessing human spinal cord atrophy.
    Study Type: Test/retest multiscanner study.
    Subjects: Twelve healthy volunteers.
    Field Strength/Sequence: Three different 3T scanner platforms (Siemens, Philips, and GE); optimized phase-sensitive inversion recovery (PSIR), T1-weighted (T1-w), and T2*-weighted (T2*-w) protocols.
    Assessment: On all images acquired, two operators assessed the contrast-to-noise ratio (CNR) between gray matter (GM) and white matter (WM), and between WM and cerebrospinal fluid (CSF); one experienced operator measured total cross-sectional area (TCA) and GM area using JIM and the Spinal Cord Toolbox (SCT).
    Statistical Tests: Coefficient of variation (COV); intraclass correlation coefficient (ICC); mixed-effect models; analysis of variance (t-tests).
    Results: For all scanners, GM/WM CNR was higher for PSIR than for T2*-w (P < 0.0001), and WM/CSF CNR was highest for T1-w (P < 0.0001). For TCA, median COVs using JIM were smaller than 1.5% with ICC > 0.95, while median COVs using SCT were in the range 2.2-2.75% with ICC 0.79-0.95. For GM, despite some failures of the automatic segmentation, median COVs using SCT on T2*-w were smaller than those using manual JIM segmentations on PSIR. In the mixed-effect models, the subject was always the main contributor to the variance of area measurements, and the scanner often contributed to TCA variance (P < 0.05). Using JIM, TCA measurements on T2*-w differed from those on PSIR (P = 0.0021) and T1-w (P = 0.0018), while using SCT no notable differences were found between T1-w and T2*-w (P = 0.18). JIM- and SCT-derived TCA did not differ on T1-w (P = 0.66), but did differ on T2*-w (P < 0.0001). GM area derived using SCT/T2*-w versus JIM/PSIR differed (P < 0.0001).
    Data Conclusion: The present work sets reference values for the magnitude of the contribution of different effects to intra- and interscanner variability of cord area measurements.
    Level of Evidence: 1. Technical Efficacy: Stage 4. J. Magn. Reson. Imaging 2019;49:1078-1090.
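    The test/retest coefficient of variation reported above can be sketched as below; the function is a generic illustration of the statistic, and the sample values are hypothetical, not taken from the study:

```python
import statistics

def cov_percent(measurements):
    """Coefficient of variation in percent: sample standard deviation
    of repeated measurements divided by their mean."""
    return 100 * statistics.stdev(measurements) / statistics.mean(measurements)

# Hypothetical TCA values (mm^2) for one subject scanned on three platforms
tca_repeats = [78.4, 79.1, 77.9]
print(f"COV = {cov_percent(tca_repeats):.2f}%")
```

    A low COV (here well under 1.5%) indicates that repeated scans of the same subject yield nearly identical area measurements.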

    Collision warning design in automotive head-up displays

    Abstract. In the last few years, the automotive industry has experienced rapid growth in on-board hardware and the underlying electronics. The industry benefits from both Human Machine Interface (HMI) research and modern technology. Advanced Driver Assistance Systems (ADAS) have many applications, and their positive impact on drivers is considerable. Forward Collision Warning (FCW) is one such ADAS application. Over the last decades, different approaches and tools have been used to implement FCW systems, and current Augmented Reality (AR) applications are feasible to integrate into modern cars. In this thesis work, we introduce three different FCW designs: static, animated, and 3D-animated warnings. We test the proposed designs in three different environments: day, night, and rain. The static and animated designs achieve a minimum response time of 0.486 s, whereas the 3D-animated warning achieves 1.153 s.

    Analysis of cardiac magnetic resonance images : towards quantification in clinical practice


    FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation

    FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in motio by vehicle(s) in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s). While a vehicle is in flight in the FlightGoggles virtual-reality environment, exteroceptive sensors are rendered synthetically in real time, while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The FlightGoggles framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test for selecting the nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest.
    Comment: The initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. The revision includes a description of new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API.

    Spinal cord grey matter segmentation challenge

    An important image processing step in spinal cord magnetic resonance imaging is the ability to reliably and accurately segment grey and white matter for tissue-specific analysis. Several semi- or fully-automated segmentation methods for cervical cord cross-sectional area measurement perform close or equal to manual segmentation. However, grey matter segmentation remains challenging due to its small cross-sectional size and variable shape, and active research is being conducted in this field by several groups around the world. Therefore, a grey matter spinal cord segmentation challenge was organised to test the capabilities of various methods on the same multi-centre, multi-vendor dataset, acquired with distinct 3D gradient-echo sequences. The challenge aimed to characterise the state of the art in the field and to identify opportunities for future improvement. Six spinal cord grey matter segmentation methods, developed independently by research groups across the world, were compared against manual segmentation outcomes, the present gold standard. All algorithms provided good overall results for detecting the grey matter butterfly, albeit with variable performance on certain quality-of-segmentation metrics. The data have been made publicly available and the challenge website remains open to new submissions. No modifications were introduced to any of the presented methods as a result of this challenge for the purposes of this publication.
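    Quality-of-segmentation metrics for challenges like this commonly include the Dice similarity coefficient between an automatic mask and the manual gold standard. A minimal sketch of that metric, not taken from the challenge's evaluation code; the example masks are hypothetical:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two binary masks given as sets of voxel
    coordinates: 2*|A & B| / (|A| + |B|); 1.0 means identical masks."""
    if not mask_a and not mask_b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

# Hypothetical grey matter masks: automatic vs. manual segmentation
auto = {(0, 1), (0, 2), (1, 1), (1, 2), (2, 1)}
manual = {(0, 1), (0, 2), (1, 1), (1, 2), (2, 2)}
print(dice_coefficient(auto, manual))  # 0.8
```

    Representing masks as coordinate sets keeps the sketch simple; in practice segmentations are stored as image arrays and the same formula is applied to their nonzero voxels.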

    iDriving: Toward Safe and Efficient Infrastructure-directed Autonomous Driving

    Autonomous driving will become pervasive in the coming decades. iDriving improves the safety of autonomous driving at intersections and increases efficiency by improving traffic throughput. In iDriving, roadside infrastructure remotely drives an autonomous vehicle through an intersection by offloading perception and planning from the vehicle to the infrastructure. To achieve this, iDriving must be able to process voluminous sensor data at full frame rate with a tail latency of less than 100 ms, without sacrificing accuracy. We describe algorithms and optimizations that enable it to achieve this goal using an accurate and lightweight perception component that reasons on composite views derived from overlapping sensors, and a planner that jointly plans trajectories for multiple vehicles. In our evaluations, iDriving always ensures safe passage of vehicles, while autonomous driving alone can do so only 27% of the time. iDriving also results in 5x lower wait times than other approaches because it enables traffic-light-free intersections.

    NASA Automated Rendezvous and Capture Review. Executive summary

    In support of the Cargo Transfer Vehicle (CTV) Definition Studies in FY-92, the Advanced Program Development division of the Office of Space Flight at NASA Headquarters conducted an evaluation and review of United States capabilities and the state of the art in Automated Rendezvous and Capture (AR&C). The review was held in Williamsburg, Virginia, on 19-21 Nov. 1991 and included over 120 attendees from U.S. government organizations, industry, and universities. One hundred abstracts were submitted to the organizing committee for consideration, and forty-two were selected for presentation. The review was structured around five technical sessions, with the forty-two papers addressing topics in the five categories below: (1) hardware systems and components; (2) software systems; (3) integrated systems; (4) operations; and (5) supporting infrastructure.