
    Delaunay Deformable Models: Topology-Adaptive Meshes Based on the Restricted Delaunay Triangulation

    In this paper, we propose a robust and efficient Lagrangian approach, which we call Delaunay Deformable Models, for modeling moving surfaces undergoing large deformations and topology changes. Our work uses the concept of restricted Delaunay triangulation, borrowed from computational geometry. In our approach, the interface is represented by a triangular mesh embedded in the Delaunay tetrahedralization of interface points. The mesh is iteratively updated by computing the restricted Delaunay triangulation of the deformed objects. Our method has many advantages over popular Eulerian techniques such as the level set method and over hybrid Eulerian-Lagrangian techniques such as the particle level set method: localization accuracy, adaptive resolution, ability to track properties associated with the interface, and seamless handling of triple junctions. Our work brings a rigorous and efficient alternative to existing topology-adaptive mesh techniques such as T-snakes.
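The deform-then-remesh loop described in the abstract can be sketched with off-the-shelf tools. The code below is a toy stand-in, not the authors' implementation: it uses `scipy.spatial.Delaunay` for the tetrahedralization and, as a simplification, extracts the boundary faces (those belonging to exactly one tetrahedron) rather than the true restricted Delaunay surface.

```python
import numpy as np
from scipy.spatial import Delaunay

def boundary_triangles(points):
    """Tetrahedralize 3-D interface points and return the triangles that
    lie on the boundary (faces belonging to exactly one tetrahedron).
    This is a simplified stand-in for the restricted Delaunay surface."""
    tet = Delaunay(points)
    face_count = {}
    for simplex in tet.simplices:                  # 4 vertex indices per tet
        for face in ((0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)):
            key = tuple(sorted(simplex[list(face)]))
            face_count[key] = face_count.get(key, 0) + 1
    return [f for f, c in face_count.items() if c == 1]

def deform_and_remesh(points, displacement):
    """One Lagrangian update step: move the vertices, then rebuild the
    connectivity from scratch -- topology changes are absorbed simply by
    re-triangulating the deformed point set."""
    return boundary_triangles(points + displacement)

# A noisy sphere-like interface (radial jitter avoids degenerate
# cospherical input to the Delaunay code).
rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
pts *= rng.uniform(0.8, 1.2, size=(200, 1))
tris = deform_and_remesh(pts, 0.01 * pts)          # small outward inflation
```

Because the extracted boundary is a closed triangulated surface, the number of triangles is even (3F = 2E on a closed 2-manifold), which gives a cheap sanity check after every remeshing step.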

    Combining Multimodal Biomarkers to Guide Deep Brain Stimulation Programming in Parkinson Disease

    BACKGROUND Deep brain stimulation (DBS) programming of multicontact DBS leads relies on a very time-consuming manual screening procedure, and strategies to speed up this process are needed. Beta activity in subthalamic nucleus (STN) local field potentials (LFP) has been suggested as a promising marker to index optimal stimulation contacts in patients with Parkinson disease. OBJECTIVE In this study, we investigate the advantage of algorithmic selection and combination of multiple resting- and movement-state features from STN LFPs and imaging markers to predict three relevant clinical DBS parameters (clinical efficacy, therapeutic window, side-effect threshold). MATERIALS AND METHODS STN LFPs were recorded at rest and during voluntary movements from multicontact DBS leads in 27 hemispheres. Resting- and movement-state features from multiple frequency bands (alpha, low beta, high beta, gamma, fast gamma, high-frequency oscillations [HFO]) were used to predict the clinical outcome parameters. Subanalyses included an anatomical stimulation sweet spot as an additional feature. RESULTS Both resting- and movement-state features contributed to the prediction, with resting (fast) gamma activity, resting/movement-modulated beta activity, and movement-modulated HFO being most predictive. With the proposed algorithm, the best stimulation contact for the three clinical outcome parameters can be identified with a probability of almost 90% after considering half of the DBS lead contacts, and the algorithm outperforms the use of beta activity as a single marker. The combination of electrophysiological and imaging markers can further improve the prediction. CONCLUSION LFP-guided DBS programming based on algorithmic selection and combination of multiple electrophysiological and imaging markers can be an efficient approach to improving the clinical routine and outcomes of DBS patients.
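The contact-ranking idea can be illustrated with a minimal sketch, assuming the paper's approach reduces to scoring contacts by a weighted combination of normalized band-power features. The feature names, weights, and data below are hypothetical, not taken from the study.

```python
import numpy as np

# Hypothetical resting/movement band-power features per contact
# (rows = contacts, columns = features); all values are illustrative.
FEATURES = ["rest_fast_gamma", "rest_beta", "move_beta_mod", "move_hfo_mod"]

def rank_contacts(feature_matrix, weights):
    """Score each lead contact by a weighted sum of z-scored features and
    return contact indices ordered from most to least promising."""
    fm = np.asarray(feature_matrix, dtype=float)
    # z-score each feature across contacts so the weights are comparable
    fm = (fm - fm.mean(axis=0)) / (fm.std(axis=0) + 1e-12)
    scores = fm @ np.asarray(weights, dtype=float)
    return list(np.argsort(-scores))               # descending score

def hit_within_top_half(ranking, best_contact):
    """Was the clinically best contact among the top half of the ranking?
    (Mirrors the paper's 'half of the DBS lead contacts' criterion.)"""
    return ranking.index(best_contact) < len(ranking) // 2

# Toy example: 8 contacts, 4 features; contact 2 dominates every feature.
rng = np.random.default_rng(1)
X = rng.random((8, len(FEATURES)))
X[2] += 2.0                                        # make contact 2 stand out
order = rank_contacts(X, weights=[1.0, 1.0, 1.0, 1.0])
```

In practice the weights would be fit to clinical outcome data rather than set by hand; the sketch only shows the selection-and-combination data flow.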

    Tuberculosis bacteria detection and counting in fluorescence microscopy images using a multi-stage deep learning pipeline

    The manual observation of sputum smears by fluorescence microscopy for the diagnosis and treatment monitoring of patients with tuberculosis (TB) is a laborious and subjective task. In this work, we introduce an automatic pipeline which employs a novel deep learning-based approach to rapidly detect Mycobacterium tuberculosis (Mtb) organisms in sputum samples and thus quantify the burden of the disease. Fluorescence microscopy images are used as input to a series of networks, which ultimately produce a final count of the bacteria present more quickly and consistently than manual analysis by healthcare workers. The pipeline consists of four stages: annotation by cycle-consistent generative adversarial networks (GANs), extraction of salient image patches, classification of the extracted patches, and finally, regression to yield the final bacteria count. We empirically evaluate the individual stages of the pipeline as well as perform a unified evaluation on previously unseen data that were given ground-truth labels by an experienced microscopist. We show that with no human intervention, the pipeline can provide the bacterial count for a sample of images with an error of less than 5%.
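The four-stage data flow can be sketched as below. This is only a structural skeleton: the GAN annotator, patch classifier, and count regressor are replaced by trivial intensity-threshold stand-ins, so the code shows how stages hand data to each other, not the actual networks.

```python
import numpy as np

def extract_patches(image, patch=8, thresh=0.5):
    """Stage 2 stand-in: slide a non-overlapping window over the image and
    keep patches whose peak intensity exceeds a threshold (salient candidates)."""
    patches = []
    h, w = image.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = image[y:y + patch, x:x + patch]
            if p.max() > thresh:
                patches.append(p)
    return patches

def classify_patch(patch, thresh=0.6):
    """Stage 3 stand-in: call a patch 'bacillus' if its mean intensity is high."""
    return patch.mean() > thresh

def count_bacteria(image):
    """Stage 4 stand-in: count the positive patches (a real system would
    instead regress a per-patch count, since patches can contain clumps)."""
    return sum(classify_patch(p) for p in extract_patches(image))

# Synthetic fluorescence image: dark background with two bright 8x8 blobs.
img = np.zeros((32, 32))
img[0:8, 0:8] = 0.9
img[16:24, 16:24] = 0.8
```

Running `count_bacteria(img)` on this synthetic image yields 2, one per bright blob.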

    Responsible and Regulatory Conform Machine Learning for Medicine: A Survey of Challenges and Solutions

    Machine learning is expected to fuel significant improvements in medical care. To ensure that fundamental principles such as beneficence, respect for human autonomy, prevention of harm, justice, privacy, and transparency are respected, medical machine learning systems must be developed responsibly. Many high-level declarations of ethical principles have been put forth for this purpose, but there is a severe lack of technical guidelines explicating the practical consequences for medical machine learning. Similarly, there is currently considerable uncertainty regarding the exact regulatory requirements placed upon medical machine learning systems. This survey provides an overview of the technical and procedural challenges involved in creating medical machine learning systems responsibly and in conformity with existing regulations, as well as possible solutions to address these challenges. First, a brief review of existing regulations affecting medical machine learning is provided, showing that properties such as safety, robustness, reliability, privacy, security, transparency, explainability, and nondiscrimination are all demanded already by existing law and regulations - albeit, in many cases, to an uncertain degree. Next, the key technical obstacles to achieving these desirable properties are discussed, as well as important techniques to overcome these obstacles in the medical context. We note that distribution shift, spurious correlations, model underspecification, uncertainty quantification, and data scarcity represent severe challenges in the medical context. Promising solution approaches include the use of large and representative datasets and federated learning as a means to that end, the careful exploitation of domain knowledge, the use of inherently transparent models, comprehensive out-of-distribution model testing and verification, as well as algorithmic impact assessments.
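One of the challenges named above, distribution shift, can be screened for with very simple statistics. The sketch below is an illustrative minimal check, not a method from the survey: it flags a feature when the standardized mean difference between the development cohort and deployment data exceeds a hand-picked threshold.

```python
import numpy as np

def shift_score(train, deploy):
    """Per-feature standardized mean difference between training data and
    deployment data; large values flag a possible distribution shift."""
    train, deploy = np.asarray(train, float), np.asarray(deploy, float)
    pooled_std = np.sqrt((train.var(axis=0) + deploy.var(axis=0)) / 2) + 1e-12
    return np.abs(train.mean(axis=0) - deploy.mean(axis=0)) / pooled_std

def flags_shift(train, deploy, threshold=0.5):
    """True if any feature's shift score exceeds the (illustrative) threshold."""
    return bool((shift_score(train, deploy) > threshold).any())

rng = np.random.default_rng(2)
reference = rng.normal(0.0, 1.0, size=(1000, 3))   # development cohort
same_site = rng.normal(0.0, 1.0, size=(1000, 3))   # same distribution
new_site  = rng.normal(2.0, 1.0, size=(1000, 3))   # shifted feature means
```

A deployment check of this kind only catches marginal mean shifts; covariate shifts that preserve feature means would need richer two-sample tests.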

    Models and Algorithms for Ultra-Wideband Localization in Single- and Multi-Robot Systems

    Location is a piece of information that empowers almost any type of application. In contrast to the outdoors, where global navigation satellite systems provide geo-spatial positioning, there are still millions of square meters of indoor space that are unaccounted for by location sensing technology. Moreover, predictions show that people’s activities are likely to shift more and more towards urban and indoor environments – the United Nations predicts that by 2020, over 80% of the world’s population will live in cities. Meanwhile, indoor localization is a problem that is not simply solved: people, indoor furnishings, walls and building structures – in the eyes of a positioning sensor, these are all obstacles that create a very challenging environment. Many sensory modalities have difficulty in overcoming such harsh conditions when used alone. For this reason, and also because we aim for a portable, miniaturizable, cost-effective solution with centimeter-level accuracy, we choose to solve the indoor localization problem with a hybrid approach that consists of two complementary components: ultra-wideband localization and collaborative localization. In pursuit of the final, hybrid product, our research leads us to ask what benefits collaborative localization can provide to ultra-wideband localization – and vice versa. The road down this path includes diving into these orthogonal sub-domains of indoor localization to produce two independent localization solutions, before finally combining them to conclude our work. As for all systems that can be quantitatively examined, we recognize that the quality of our final product is defined by the rigor of our evaluation process. Thus, a core element of our work is the experimental setup, which we design in a modular fashion and which we complexify incrementally according to the various stages of our studies.
With the goal of implementing an evaluation system that is systematic, repeatable, and controllable, our approach is centered around the mobile robot. We harness this platform to emulate mobile targets and track it in real time with a highly reliable ground-truth positioning system. Furthermore, we take advantage of the miniature size of our mobile platform and include multiple entities to form a multi-robot system. This augmented setup then allows us to use the same experimental rigor to evaluate our collaborative localization strategies. Finally, we exploit the consistency of our experiments to perform cross-comparisons of the various results throughout the presented work. Ultra-wideband counts among the most interesting technologies for absolute indoor localization known to date. Owing to its fine delay resolution and its ability to penetrate through various materials, ultra-wideband provides a potentially high ranging accuracy, even in cluttered, non-line-of-sight environments. However, despite its desirable traits, the resolution of non-line-of-sight signals remains a hard problem. In other words, if a non-line-of-sight signal is not recognized as such, it leads to significant errors in the position estimate. Our work improves upon the state of the art by addressing the peculiarities of ultra-wideband signal propagation with models that capture the spatiality as well as the multimodal nature of the error statistics. Simultaneously, we take care to develop an underlying error model that is compact and that can be calibrated by means of efficient algorithms. In order to facilitate the usage of our multimodal error model, we use a localization algorithm that is based on particle filters. Our collaborative localization strategy distinguishes itself from prior work by emphasizing cost-efficiency, full decentralization, and scalability. The localization method is based on relative positioning and uses two quantities: relative range and relative bearing.
We develop a relative robot detection model that integrates these measurements and is embedded in our particle filter based localization framework. In addition to the robot detection model, we consider an algorithmic component, namely a reciprocal particle sampling routine, which is designed to facilitate the convergence of a robot’s position estimate. Finally, in order to reduce the complexity of our collaborative localization algorithm, and in order to reduce the amount of positioning data to be communicated between the robots, we develop a particle clustering method, which is used in conjunction with our robot detection model. The final stage of our research investigates the combined roles of collaborative localization and ultra-wideband localization. Numerous experiments validate our overall localization strategy and show that the performance can be significantly improved when using two complementary sensory modalities. Since the fusion of ultra-wideband positioning sensors with exteroceptive sensors has hardly been considered so far, our studies present pioneering work in this domain. Several insights indicate that collaboration – even if through noisy sensors – is a useful tool to reduce localization errors. In particular, we show that our collaboration strategy can provide the means to minimize the localization error, given that the collaborative design parameters are optimally tuned. Our final results show median localization errors below 10 cm in cluttered environments.
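The core of the positioning approach, a particle filter driven by range measurements, can be sketched in a few lines. The version below is a generic bootstrap filter with a Gaussian range-error model, anchor positions, and noise parameters that are all illustrative; the thesis's multimodal, spatially varying error model and the collaborative extensions are deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
ANCHORS = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
RANGE_STD = 0.3     # illustrative Gaussian stand-in for the UWB error model

def pf_localize(ranges_per_step, n_particles=2000):
    """Estimate a static 2-D position from successive range measurements to
    the fixed ANCHORS using a bootstrap particle filter."""
    particles = rng.uniform(0.0, 10.0, size=(n_particles, 2))
    for ranges in ranges_per_step:
        # predicted anchor distances for every particle: (n_particles, 4)
        pred = np.linalg.norm(particles[:, None, :] - ANCHORS[None, :, :], axis=2)
        loglik = -0.5 * (((pred - ranges) / RANGE_STD) ** 2).sum(axis=1)
        w = np.exp(loglik - loglik.max())
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)   # resample
        # small jitter keeps particle diversity after resampling
        particles = particles[idx] + rng.normal(0, 0.05, (n_particles, 2))
    return particles.mean(axis=0)

true_pos = np.array([3.0, 7.0])
meas = [np.linalg.norm(true_pos - ANCHORS, axis=1)
        + rng.normal(0, RANGE_STD, len(ANCHORS)) for _ in range(20)]
est = pf_localize(meas)
```

A particle representation is what makes the thesis's multimodal error model easy to plug in: the Gaussian log-likelihood above would simply be replaced by the calibrated mixture density.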

    Improvement Schemes for Indoor Mobile Location Estimation: A Survey

    Location estimation is significant in mobile and ubiquitous computing systems. The complexity and smaller scale of the indoor environment have a great impact on location estimation. The key to location estimation lies in the representation and fusion of uncertain information from multiple sources. The improvement of location estimation is a complicated and comprehensive issue. A lot of research has been done to address this issue. However, existing research typically focuses on certain aspects of the problem and specific methods. This paper reviews mainstream schemes for improving indoor location estimation from multiple levels and perspectives, combining existing work and our own experience. Initially, we analyze the error sources of common indoor localization techniques and provide a multilayered conceptual framework of improvement schemes for location estimation. This is followed by a discussion of probabilistic methods for location estimation, including Bayes filters, Kalman filters, extended Kalman filters, sigma-point Kalman filters, particle filters, and hidden Markov models. Then, we investigate hybrid localization methods, including multimodal fingerprinting, triangulation fusing multiple measurements, combination of wireless positioning with pedestrian dead reckoning (PDR), and cooperative localization. Next, we focus on location determination approaches that fuse spatial contexts, namely, map matching, landmark fusion, and spatial model-aided methods. Finally, we present directions for future research.
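Of the probabilistic methods surveyed, the Kalman filter is the simplest to show concretely. The sketch below is a minimal 1-D example with a constant-position state model and illustrative noise parameters; it is meant only to make the predict/update cycle tangible, not to represent any specific system from the survey.

```python
import numpy as np

def kalman_1d(measurements, q=0.01, r=1.0, x0=0.0, p0=10.0):
    """Minimal 1-D Kalman filter: constant-position state model with
    process-noise variance q and measurement-noise variance r.
    Returns the filtered estimate after each measurement."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                 # predict: state is assumed static
        k = p / (p + r)           # Kalman gain balances prior vs measurement
        x = x + k * (z - x)       # update with the measurement residual
        p = (1 - k) * p           # posterior variance shrinks
        estimates.append(x)
    return estimates

rng = np.random.default_rng(4)
true_value = 5.0
noisy = true_value + rng.normal(0, 1.0, 50)   # e.g. noisy range readings
filtered = kalman_1d(noisy)
```

The same predict/update structure generalizes to the extended, sigma-point, and particle variants listed in the survey, which differ mainly in how they handle nonlinearity and non-Gaussian noise.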

    Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence

    This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1A2C1011198), the Institute for Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) under the ICT Creative Consilience Program (IITP-2021-2020-0-01821), and the AI Platform to Fully Adapt and Reflect Privacy-Policy Changes (No. 2022-0-00688).
    Artificial intelligence (AI) is currently being utilized in a wide range of sophisticated applications, but the outcomes of many AI models are challenging to comprehend and trust due to their black-box nature. It is usually essential to understand the reasoning behind an AI model's decision-making. Thus, the need for eXplainable AI (XAI) methods for improving trust in AI models has arisen. XAI has become a popular research subject within the AI field in recent years. Existing survey papers have tackled the concepts of XAI, its general terms, and post-hoc explainability methods, but there have not been any reviews that have looked at the assessment methods, available tools, XAI datasets, and other related aspects. Therefore, in this comprehensive study, we provide readers with an overview of the current research and trends in this rapidly emerging area, with a case study example. The study starts by explaining the background of XAI and common definitions, and by summarizing recently proposed techniques in XAI for supervised machine learning. The review divides XAI techniques into four axes using a hierarchical categorization system: (i) data explainability, (ii) model explainability, (iii) post-hoc explainability, and (iv) assessment of explanations. We also introduce available evaluation metrics as well as open-source packages and datasets, with future research directions. Then, the significance of explainability in terms of legal demands, user viewpoints, and application orientation is outlined, termed XAI concerns.
This paper advocates for tailoring explanation content to specific user types. An examination of XAI techniques and evaluation was conducted by looking at 410 critical articles, published between January 2016 and October 2022, in reputed journals, using a wide range of research databases as a source of information. The article is aimed at XAI researchers who are interested in making their AI models more trustworthy, as well as at researchers from other disciplines who are looking for effective XAI methods to complete tasks with confidence while communicating meaning from data.
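One of the post-hoc explainability techniques covered by surveys of this kind, permutation feature importance, is simple enough to sketch directly. The example below uses a toy rule-based "model" and random data, both hypothetical, to show the mechanic: shuffle one feature column at a time and measure how much the model's accuracy drops.

```python
import numpy as np

rng = np.random.default_rng(5)

def permutation_importance(model, X, y, n_repeats=10):
    """Post-hoc explainability: the drop in accuracy when each feature
    column is shuffled -- larger drops mean more important features."""
    base = (model(X) == y).mean()
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])   # break feature j only
            drops.append(base - (model(Xp) == y).mean())
        importances.append(float(np.mean(drops)))
    return importances

# Toy setup: the label (and the 'model') depend only on feature 0.
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)
model = lambda data: (data[:, 0] > 0).astype(int)
imp = permutation_importance(model, X, y)
```

As expected, only feature 0 receives a large importance score; the two irrelevant features score near zero. Being model-agnostic, the same routine applies unchanged to any black-box classifier exposing a predict function.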