364 research outputs found

    Relative Neighbourhood Networks for Archaeological Analysis


    Radar and RGB-depth sensors for fall detection: a review

    This paper reviews recent works in the literature on the use of systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends in this field. Systems that reliably detect fall events and promptly alert carers and first responders have gained significant interest in the past few years, in order to address the societal issue of an increasing number of elderly people living alone, with the associated risk of falls and their consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors stems from their capability to enable contactless and non-intrusive monitoring, which is an advantage for practical deployment and for users' acceptance and compliance compared with other sensor technologies, such as video cameras or wearables. Furthermore, the possibility of combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from the multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors that this paper discusses.
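
    The review's point about fusing the two sensing modalities can be illustrated with a decision-level (late) fusion step. The sketch below is a hypothetical illustration, not a system from the paper: it assumes each sensor pipeline already outputs a per-frame fall probability, and the weights and threshold are arbitrary.

        # Minimal sketch of decision-level fusion of two hypothetical fall-detection
        # pipelines (radar and RGB-D). Probabilities, weights, and the alert
        # threshold are illustrative assumptions, not values from the paper.
        def fuse_fall_scores(p_radar, p_rgbd, w_radar=0.5, w_rgbd=0.5, threshold=0.7):
            """Weighted average of per-sensor fall probabilities."""
            fused = w_radar * p_radar + w_rgbd * p_rgbd
            return fused >= threshold

        # Example: radar is fairly confident, RGB-D less so.
        if fuse_fall_scores(p_radar=0.85, p_rgbd=0.60):
            print("Fall detected: alert carers / first responders")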

    Real-time RGB-Depth perception of humans for robots and camera networks

    This thesis deals with robot and camera network perception using RGB-Depth data. The goal is to provide efficient and robust algorithms for interacting with humans. For this reason, special care has been devoted to designing algorithms which can run in real time on consumer computers and embedded cards. The main contribution of this thesis is 3D pose estimation of the human body. We propose two novel algorithms which take advantage of the data stream of an RGB-D camera network, outperforming the state of the art in both single-view and multi-view tests. While the first algorithm works on point cloud data, which is feasible even without external light, the second one performs better, since it deals with multiple persons with negligible overhead and does not rely on synchronization between the different cameras in the network. The second contribution regards long-term people re-identification in camera networks. This is particularly challenging because we cannot rely on appearance cues if we want to re-identify people across different days. We address this problem by proposing a face-recognition framework based on a Convolutional Neural Network and a Bayes inference system to re-assign the correct ID and person name to each new track. The third contribution is about Ambient Assisted Living. We propose a prototype of an assistive robot which periodically patrols a known environment, reporting unusual events such as people fallen on the ground. To this end, we developed a fast and robust approach which also works in dim scenes and is validated using a new publicly available RGB-D dataset recorded on board our open-source robot prototype. As a further contribution of this work, in order to boost research on these topics and to provide the best benefit to the robotics and computer vision community, we released most of the software implementations of the novel algorithms described in this work under open-source licenses.
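
    The re-identification contribution pairs a face-recognition CNN with a Bayes inference system that re-assigns an identity to each new track. As a rough illustration of such an update step (not the thesis implementation), the sketch below assumes the recognition model provides a likelihood for each enrolled identity per observed face and accumulates them into a posterior; the names and scores are made up.

        import numpy as np

        # Minimal sketch of Bayesian identity re-assignment, assuming a face-recognition
        # model returns P(observation | identity) for every enrolled person.
        # Enrolled names and likelihood values below are purely illustrative.
        def update_identity_posterior(prior, likelihood):
            posterior = likelihood * prior          # Bayes rule: posterior is proportional to likelihood * prior
            return posterior / posterior.sum()

        names = ["alice", "bob", "carol"]
        posterior = np.full(len(names), 1.0 / len(names))   # uniform prior over enrolled IDs

        # Three consecutive face observations from the same track.
        for likelihood in (np.array([0.6, 0.3, 0.1]),
                           np.array([0.7, 0.2, 0.1]),
                           np.array([0.5, 0.4, 0.1])):
            posterior = update_identity_posterior(posterior, likelihood)

        print("re-assigned ID:", names[int(np.argmax(posterior))])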

    Doctor of Philosophy

    High arterial tortuosity, or twistedness, is a sign of many vascular diseases. Some ocular diseases are clinically diagnosed in part by assessment of increased tortuosity of ocular blood vessels. Increased arterial tortuosity is seen in other vascular diseases but is not commonly used for clinical diagnosis. This study develops the use of existing magnetic resonance angiography (MRA) image data to study arterial tortuosity in a range of arteries of hypertensive and intracranial aneurysm patients. The accuracy of several centerline extraction algorithms based on Dijkstra's algorithm was measured in numeric phantoms. The stability of the algorithms was measured in brain arteries. A centerline extraction algorithm was selected based on its accuracy. A centerline tortuosity metric was developed using a curve of tortuosity scores. This tortuosity metric was tested on phantoms and compared to observer-based tortuosity rankings on a test data set. The tortuosity metric was then used to measure the tortuosity of brain arteries from intracranial aneurysm and hypertension patients and to compare it with negative controls. A Dijkstra-based centerline extraction algorithm employing a distance-from-edge weighted center-of-mass (DFE-COM) cost function of the segmented arteries was selected based on generating 15/16 anatomically correct centerlines in a looping artery, compared to 15/16 for the center-of-mass (COM) cost function and 7/16 for the inverse modified distance-from-edge cost function. The DFE-COM cost function had a lower root mean square error in a lopsided phantom (0.413) than the COM cost function (0.879). The tortuosity metric successfully ordered electronic phantoms of arteries by tortuosity. The tortuosity metric detected an increase in arterial tortuosity in hypertensive patients in 13/13 (10/13 significant at α = 0.05). The metric detected increased tortuosity in a subset of the aneurysm patients with Loeys-Dietz syndrome (LDS) in 7/7 (three significant at α = 0.001). The tortuosity measurement combination of the centerline algorithm and the distance factor metric tortuosity curve was able to detect increases in arterial tortuosity in hypertensive and LDS patients. Therefore, the methods validated here can be used to study arterial tortuosity in other hypertensive population samples and in genetic subsets related to LDS.
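
    The abstract refers to a distance factor metric computed along extracted centerlines. As a hedged sketch of that idea (the dissertation builds a whole curve of tortuosity scores, not the single number shown here), the snippet below computes arc length divided by end-to-end chord length for a centerline given as an array of 3D points; the synthetic centerline is made up for illustration.

        import numpy as np

        # Minimal sketch of the distance factor metric for an extracted centerline,
        # represented as an (N, 3) array of points: total arc length divided by the
        # straight-line distance between the endpoints (1.0 for a straight vessel).
        def distance_factor_metric(centerline):
            segment_lengths = np.linalg.norm(np.diff(centerline, axis=0), axis=1)
            arc_length = segment_lengths.sum()
            chord_length = np.linalg.norm(centerline[-1] - centerline[0])
            return arc_length / chord_length

        # Example: a gently curving synthetic "artery".
        t = np.linspace(0.0, np.pi, 100)
        centerline = np.stack([t, np.sin(t), np.zeros_like(t)], axis=1)
        print(round(distance_factor_metric(centerline), 3))   # > 1, i.e. mildly tortuous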

    Comparative Genomics and Phylogenomic Analysis of the Genus Salinivibrio

    In the genomic era, phylogenetic relationships among prokaryotes can be inferred from the core orthologous genes (OGs) or proteins in order to elucidate their evolutionary history, and current taxonomy should benefit from that. The genus Salinivibrio belongs to the family Vibrionaceae and currently includes only five halophilic species, despite the fact that new strains are very frequently isolated from hypersaline environments. Species belonging to this genus have undergone several reclassifications and, moreover, there are many strains of Salinivibrio with available genomes which have not been affiliated with the existing species or have been wrongly designated. Therefore, a phylogenetic study using the available genomic information is necessary to clarify the relationships of existing strains within this genus and to review their taxonomic affiliation. For that purpose, we have also sequenced the first complete genome of a Salinivibrio species, Salinivibrio kushneri AL184T, which was employed as a reference to order the contigs of the draft genomes of the type strains of the current species of this genus, as well as to perform a comparative analysis with all the other available Salinivibrio sp. genomes. The genome of S. kushneri AL184T was assembled into two circular chromosomes (with sizes of 2.84 Mb and 0.60 Mb, respectively), as typically occurs in members of the family Vibrionaceae, with nine complete ribosomal operons, which might explain the fast growth rate of salinivibrios cultured under laboratory conditions. Synteny analysis among the type strains of the genus revealed a high level of genomic conservation in both chromosomes, which allows us to hypothesize a slow speciation process or homogenization events taking place in this group of microorganisms, to be tested experimentally in the future. Phylogenomic and orthologous average nucleotide identity (OrthoANI)/average amino acid identity (AAI) analyses also evidenced the elevated level of genetic relatedness among members of this genus and allowed us to group all the Salinivibrio strains with available genomes into seven separate species. A genome-scale attribute study of the salinivibrios identified traits related to the polar flagellum, facultatively anaerobic growth, and osmotic response, in accordance with the phenotypic features described for species of this genus. Funding: Spanish Ministry of Economy and Competitiveness, Project CGL2017-83385-P; Junta de Andalucía (España), BIO-213; University of Seville, VIPPIT-US-201
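
    The species-level grouping reported above relies on pairwise OrthoANI/AAI values. A minimal sketch of that kind of step is shown below: it clusters genomes whose pairwise ANI exceeds the commonly used ~95% species threshold, assuming the ANI table has already been computed; the strain names and values are hypothetical, not data from the study.

        # Minimal sketch: grouping genomes into species-level clusters from a
        # precomputed pairwise ANI table using a ~95% threshold. Strain names and
        # ANI values are hypothetical placeholders, not results from the paper.
        ani = {
            ("strain_A", "strain_B"): 98.7,
            ("strain_A", "strain_C"): 88.2,
            ("strain_B", "strain_C"): 88.5,
        }
        strains = ["strain_A", "strain_B", "strain_C"]
        THRESHOLD = 95.0

        def same_species(a, b):
            return ani.get((a, b), ani.get((b, a), 0.0)) >= THRESHOLD

        clusters = []
        for strain in strains:
            for cluster in clusters:
                if any(same_species(strain, member) for member in cluster):
                    cluster.append(strain)
                    break
            else:
                clusters.append([strain])

        print(clusters)   # [['strain_A', 'strain_B'], ['strain_C']]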

    Segmentation and skeletonization techniques for cardiovascular image analysis


    COINSTAC: A Privacy Enabled Model and Prototype for Leveraging and Processing Decentralized Brain Imaging Data

    The field of neuroimaging has embraced the need for sharing and collaboration. Data sharing mandates from public funding agencies and major journal publishers have spurred the development of data repositories and neuroinformatics consortia. However, efficient and effective data sharing still faces several hurdles. For example, open data sharing is on the rise but is not suitable for sensitive data that are not easily shared, such as genetics. Current approaches can be cumbersome (such as negotiating multiple data sharing agreements), and there are also significant data transfer, organization, and computational challenges. Centralized repositories only partially address the issues. We propose a dynamic, decentralized platform for large-scale analyses called the Collaborative Informatics and Neuroimaging Suite Toolkit for Anonymous Computation (COINSTAC). The COINSTAC solution can include data missing from central repositories, allows pooling of both open and "closed" repositories by developing privacy-preserving versions of widely used algorithms, and incorporates the tools within an easy-to-use platform enabling distributed computation. We present an initial prototype system, which we demonstrate on two multi-site data sets without aggregating the data. In addition, by iterating across sites, the COINSTAC model enables meta-analytic solutions to converge to "pooled-data" solutions (i.e., as if the entire data set were in hand). More advanced approaches such as feature generation, matrix factorization models, and preprocessing can be incorporated into such a model. In sum, COINSTAC enables access to the many currently unavailable data sets, a user-friendly privacy-enabled interface for decentralized analysis, and a powerful solution that complements existing data sharing solutions.
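
    To make the "iterating across sites" idea concrete, the sketch below shows one generic decentralized scheme: gradient descent for least-squares regression where each site computes a gradient on its local data and only gradients (never raw records) are shared with an aggregator. This is an illustrative assumption in the spirit of the paper, not COINSTAC's actual protocol, API, or algorithms.

        import numpy as np

        # Minimal sketch of decentralized least-squares regression: raw data stays at
        # each "site"; only local gradients are sent to the aggregator each iteration.
        # Data, learning rate, and iteration count are illustrative assumptions.
        rng = np.random.default_rng(0)
        true_w = np.array([2.0, -1.0])

        def make_site(n_subjects):
            X = rng.normal(size=(n_subjects, 2))
            y = X @ true_w + 0.1 * rng.normal(size=n_subjects)
            return X, y

        sites = [make_site(50), make_site(80)]          # two sites, data never pooled
        w = np.zeros(2)
        learning_rate = 0.1

        for _ in range(200):                            # iterate across sites
            grads = [X.T @ (X @ w - y) / len(y) for X, y in sites]   # local gradients
            w -= learning_rate * np.average(grads, axis=0, weights=[len(y) for _, y in sites])

        print(w.round(2))   # close to the pooled-data least-squares solution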

    Enriching information extraction pipelines in clinical decision support systems

    Official Doctoral Programme in Information and Communication Technologies (Programa Oficial de Doutoramento en Tecnoloxías da Información e as Comunicacións). 5032V01. Multicentre health studies are important to increase the impact of medical research findings due to the number of subjects that they are able to engage. To simplify the execution of these studies, the data-sharing process should be effortless, for instance through the use of interoperable databases. However, achieving this interoperability is still an ongoing research topic, namely due to data governance and privacy issues. In the first stage of this work, we propose several methodologies to optimise the harmonisation pipelines of health databases. This work focused on harmonising heterogeneous data sources into a standard data schema, namely the OMOP CDM, which has been developed and promoted by the OHDSI community. We validated our proposal using data sets of Alzheimer's disease patients from distinct institutions. In the following stage, aiming to enrich the information stored in OMOP CDM databases, we investigated solutions to extract clinical concepts from unstructured narratives, using information retrieval and natural language processing techniques. The validation was performed through datasets provided in scientific challenges, namely the National NLP Clinical Challenges (n2c2). In the final stage, we aimed to simplify the protocol execution of multicentre studies by proposing novel solutions for profiling, publishing, and facilitating the discovery of databases. Some of the developed solutions are currently being used in three European projects aiming to create federated networks of health databases across Europe.
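
    The concept-extraction stage described above maps mentions in unstructured clinical narratives to standard vocabulary concepts. The sketch below is a deliberately simple dictionary lookup, assuming a tiny hand-made lexicon with placeholder concept codes; the thesis itself uses far richer information retrieval and NLP techniques, and the terms and codes here are hypothetical.

        import re

        # Minimal sketch of dictionary-based clinical concept extraction from free text.
        # The lexicon terms and concept codes are hypothetical placeholders, not OMOP
        # vocabulary entries or components of the thesis pipeline.
        LEXICON = {
            "alzheimer's disease": "C0001",
            "hypertension": "C0002",
            "metformin": "C0003",
        }

        def extract_concepts(note):
            note_lower = note.lower()
            found = []
            for term, concept_code in LEXICON.items():
                if re.search(r"\b" + re.escape(term) + r"\b", note_lower):
                    found.append((term, concept_code))
            return found

        note = "Patient with Alzheimer's disease and hypertension, started on metformin."
        print(extract_concepts(note))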