
    High-Resolution Quantitative Cone-Beam Computed Tomography: Systems, Modeling, and Analysis for Improved Musculoskeletal Imaging

    This dissertation applies accurate models of imaging physics, new high-resolution imaging hardware, and novel image analysis techniques to benefit quantitative applications of x-ray CT in the in vivo assessment of bone health. We pursue three aims: 1. characterization of macroscopic joint space morphology, 2. estimation of bone mineral density (BMD), and 3. visualization of bone microstructure. This work contributes to the development of extremity cone-beam CT (CBCT), a compact system for musculoskeletal (MSK) imaging.

    Joint space morphology is characterized by a model that draws an analogy between the bones of a joint and the plates of a capacitor. Virtual electric field lines connecting the two surfaces of the joint are computed as a surrogate measure of joint space width, creating a rich, non-degenerate, adaptive map of the joint space. We showed that, using such maps, a classifier can outperform radiologist measurements at identifying osteoarthritic patients in a set of CBCT scans.

    Quantitative BMD accuracy is achieved by combining a polyenergetic model-based iterative reconstruction (MBIR) method with fast Monte Carlo (MC) scatter estimation. On a benchtop system emulating extremity CBCT, we validated BMD accuracy and reproducibility through a series of phantom studies involving inserts of known mineral concentrations and a cadaver specimen.

    High-resolution imaging is achieved using a complementary metal-oxide-semiconductor (CMOS) x-ray detector featuring small pixel size and low readout noise. A cascaded systems model was used to perform task-based optimization of the detector scintillator thickness under nominal extremity CBCT imaging conditions, and we validated the performance of a prototype scanner incorporating the optimization result. Strong correlation was found between bone microstructure metrics obtained from the prototype scanner and the µCT gold standard for trabecular bone samples from a cadaver ulna. Additionally, we devised a multiresolution reconstruction scheme that allows fast MBIR to be applied to large, high-resolution projection data: to model the full scanned volume in the reconstruction forward model, regions outside a finely sampled region of interest (ROI) are downsampled, reducing runtime and memory requirements while maintaining image quality in the ROI.
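    As a rough illustration of the capacitor analogy described above, the sketch below solves Laplace's equation on a 2-D joint-space mask and traces a virtual field line along the potential gradient; the arc length of the traced line serves as a local joint-space-width surrogate. This is a minimal sketch under assumed inputs (binary masks for the joint space and the two bone surfaces), not the dissertation's actual implementation; all function names and parameters are illustrative.

```python
import numpy as np

def laplace_potential(mask, surf_a, surf_b, n_iter=500):
    """Jacobi iterations for Laplace's equation on a 2-D joint-space mask.

    mask   : bool array, True inside the joint space
    surf_a : bool array, pixels on the first bone surface (potential 0)
    surf_b : bool array, pixels on the second bone surface (potential 1)
    """
    phi = np.zeros(mask.shape, dtype=float)
    phi[surf_b] = 1.0
    interior = mask & ~surf_a & ~surf_b
    for _ in range(n_iter):
        # Jacobi update: average of the four neighbours
        avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                      np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        phi = np.where(interior, avg, phi)
        phi[surf_a], phi[surf_b] = 0.0, 1.0   # re-impose boundary conditions
    return phi

def field_line_length(phi, seed, step=0.25, max_steps=4000):
    """Trace a virtual field line from a seed on surface A along grad(phi);
    its arc length approximates the local joint space width."""
    gy, gx = np.gradient(phi)
    p, length = np.asarray(seed, dtype=float), 0.0
    for _ in range(max_steps):
        iy, ix = int(round(p[0])), int(round(p[1]))
        if not (0 <= iy < phi.shape[0] and 0 <= ix < phi.shape[1]):
            break
        if phi[iy, ix] >= 0.999:              # reached the opposite surface
            break
        g = np.array([gy[iy, ix], gx[iy, ix]])
        n = np.linalg.norm(g)
        if n < 1e-8:
            break
        p = p + step * g / n
        length += step
    return length
```

    Repeating the trace from every seed point on one surface yields the kind of dense, non-degenerate joint-space-width map the abstract refers to.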

    Design, implementation and evaluation of automated surveillance systems

    Pattern recognition has reached a level of sophistication that allows us to recognize different kinds of events, including hazards, and act accordingly to minimize the impact of a difficult situation and handle it in the best possible way. However, we believe that even more efficient applications can still be achieved with more accurate algorithms. Our application aims to incorporate the new programming paradigm of neural networks. Our initial idea was to explore the alternative offered by the new convolutional neural networks, where example videos showed the high detection and identification rates that, for instance, YOLOv2 could deliver. After comparing their characteristics, we found that YOLOv3 offered a good balance between accuracy and speed, as discussed later. Because of the low detection rate, we use Kalman filters to assist with the re-identification of people and objects. In this project we also survey the video-surveillance alternatives offered by companies in the sector and the kinds of products they provide, and we review the work of research groups at other universities that is most similar to our objective. We therefore use this neural network to detect events such as abandoned backpacks and to display traffic density at specific locations, and we use a more traditional method, optical flow, to detect abnormal behaviour in a crowd.

    Automatic surveillance systems are becoming more and more sophisticated as the computing power available to them increases. The aim of this project is to take advantage of these tools and, with the new classification and detection technology brought by neural networks, develop a surveillance application that can recognize certain behaviours: detection of lost backpacks and suitcases, detection of abnormal crowd activity, and a heatmap of density of occupation. The program was developed in Python, with YOLO and OpenCV forming the spine of the project. Testing showed that, because of the constraints on detecting small objects, the project does not yet perform well enough for real deployment, but it still shows potential for detecting lost backpacks in certain videos from the GBA dataset [1] and the PETS2006 dataset [2]. Abnormal crowd-activity detection uses a simple algorithm that appears to perform well, detecting the anomalies in all of the test data used, generated by the University of Minnesota [3]. Finally, the heatmap correctly displays the projection of people on the ground for five seconds, as intended. This software is meant to form the core of a future application with additional modules able to perform fully automated surveillance tasks and gather useful information; these advances and future proposals are explained in this report. Máster Universitario en Ingeniería Industrial (M141
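    As a rough sketch of the detection stage described above, the snippet below runs YOLOv3 through OpenCV's DNN module and keeps only the classes relevant to abandoned-object detection (person, backpack, suitcase). The config/weight file names and the COCO class indices are assumptions, and this is illustrative code rather than the project's actual implementation.

```python
import cv2
import numpy as np

# Placeholder paths to the standard Darknet YOLOv3 config and weights
CFG, WEIGHTS = "yolov3.cfg", "yolov3.weights"
PERSON, BACKPACK, SUITCASE = 0, 24, 28      # assumed COCO class indices

net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
out_layers = net.getUnconnectedOutLayersNames()

def detect(frame, conf_thresh=0.5, nms_thresh=0.4):
    """Run YOLOv3 on one frame; return a list of (class_id, score, box)."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    boxes, scores, class_ids = [], [], []
    for out in net.forward(out_layers):
        for det in out:                      # det = [cx, cy, bw, bh, obj, 80 class scores]
            class_scores = det[5:]
            cid = int(np.argmax(class_scores))
            conf = float(class_scores[cid])
            if conf < conf_thresh or cid not in (PERSON, BACKPACK, SUITCASE):
                continue
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            scores.append(conf)
            class_ids.append(cid)
    keep = cv2.dnn.NMSBoxes(boxes, scores, conf_thresh, nms_thresh)
    return [(class_ids[i], scores[i], boxes[i]) for i in np.array(keep).flatten()]
```

    Per-frame detections of this kind could then be fed to a Kalman-filter tracker for re-identification and to the abandonment/heatmap logic described above.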

    Microscopy Conference 2017 (MC 2017) - Proceedings

    This document contains the abstracts of the contributions of all participants in the microscopy conference "MC 2017", held from 21 to 25 August 2017 in Lausanne.

    Multimodal assessment of emotional responses by physiological monitoring: novel auditory and visual elicitation strategies in traditional and virtual reality environments

    This doctoral thesis explores novel strategies to quantify emotions and listening effort through the monitoring of physiological signals. Emotions are a complex aspect of the human experience, playing a crucial role in our survival and adaptation to the environment. The study of emotions supports important applications, such as human-computer and human-robot interaction or the clinical assessment and treatment of mental health conditions such as depression, anxiety, stress, chronic anger, and mood disorders. Listening effort is also an important area of study, as it provides insight into listeners' challenges that are usually not captured by traditional audiometric measures. The research is divided into three lines of work, each with a unique emphasis on the methods of emotion elicitation and the stimuli that are most effective in producing emotional responses, with a specific focus on auditory stimuli. The research led to the creation of three experimental protocols, as well as the use of an available online protocol, for studying emotional responses with monitoring of both peripheral and central physiological signals, such as skin conductance, respiration, pupil dilation, electrocardiogram, blood volume pulse, and electroencephalography.

    An experimental protocol was created for the study of listening effort using a speech-in-noise test designed to be short and not to induce fatigue. The results revealed that listening effort is a complex phenomenon that cannot be studied with a univariate approach, necessitating multiple physiological markers to capture its different physiological dimensions. Specifically, the findings demonstrate a strong association between the level of auditory exertion and the amount of attention and involvement directed towards stimuli that are readily comprehensible compared with those that demand greater exertion.

    Continuing in the auditory domain, peripheral physiological signals were studied in order to discriminate four emotions elicited in a subject who listened to music for 21 days, using a previously designed and publicly available protocol. Surprisingly, the processed signals clearly separated the four emotions at the physiological level, demonstrating that music, which is not studied extensively in the literature, can be an effective stimulus for eliciting emotions. Following these results, a flat-screen protocol was created to compare physiological responses to purely visual, purely auditory, and combined audiovisual emotional stimuli. The results show that auditory stimuli are more effective in separating emotions at the physiological level, and subjects were found to be much more attentive during the audio-only phase.

    In order to overcome the limitations of emotional protocols carried out in a laboratory environment, which may elicit fewer emotions because it is an unnatural setting for the subjects under study, a final emotion-elicitation protocol was created using virtual reality. Scenes resembling reality were created to elicit four distinct emotions, and at the physiological level this environment proved more effective in eliciting them. To our knowledge, this is the first protocol specifically designed for virtual reality that elicits diverse emotions. Furthermore, even in terms of classification, virtual reality was shown to be superior to traditional flat-screen protocols, opening the door to the use of virtual reality for studying conditions related to emotional control.
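    As an illustration of the kind of emotion classification mentioned above, the sketch below trains a simple classifier on per-epoch physiological features (for example, skin conductance level, heart rate, respiration rate, and pupil diameter) to separate four emotion labels. The feature set, labels, and model are assumptions chosen for illustration and do not reproduce the thesis's actual pipeline.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder data: one row per stimulus epoch, one column per physiological
# feature; y holds one of the four elicited emotion labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 4))                 # illustrative feature matrix
y = rng.integers(0, 4, size=80)              # illustrative emotion labels

# Standardize features, then fit an RBF-kernel SVM and estimate accuracy
# with 5-fold cross-validation (subject-wise splits would be preferable
# in a real multi-subject study).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```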