13 research outputs found

    Fast, Robust, Quantizable Approximate Consensus

    Get PDF
    We introduce a new class of distributed algorithms for the approximate consensus problem in dynamic rooted networks, which we call amortized averaging algorithms. They are derived from ordinary averaging algorithms by adding a value-gathering phase before each value update. This results in a drastic drop in decision times, from exponential in the number n of processes to polynomial under the assumption that each process knows n. In particular, the amortized midpoint algorithm is the first algorithm that achieves a linear decision time in dynamic rooted networks, with an optimal contraction rate of 1/2 at each update step. We then show robustness of the amortized midpoint algorithm under violation of the network assumptions: it gracefully degrades if communication graphs are non-rooted from time to time, or if processes operate under a wrong estimate of the number of processes. Finally, we prove that the amortized midpoint algorithm behaves well if processes can store and send only quantized values, rendering it well-suited for the design of dynamic networked systems. As a corollary, we obtain that the 2-set consensus problem is solvable in linear time in any dynamic rooted network model.
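
    To make the amortization idea concrete, here is a toy Python simulation of a midpoint rule with a value-gathering phase. It is a sketch under simplifying assumptions (phases of n-1 rounds, rooted star graphs chosen at random each round), not the authors' exact protocol.

```python
import random

def amortized_midpoint(values, phases):
    """Toy simulation of an amortized midpoint algorithm.

    Each phase: (1) gather values for n-1 rounds over dynamic rooted
    graphs (here: a star centered at a random root each round),
    (2) update to the midpoint of the extremes gathered.
    A simplified illustration, not the paper's exact protocol.
    """
    n = len(values)
    vals = list(values)
    for _ in range(phases):
        seen = [{i: vals[i]} for i in range(n)]  # values gathered per process
        for _ in range(n - 1):                   # value-gathering rounds
            root = random.randrange(n)           # rooted graph: star at root
            for i in range(n):
                if i != root:
                    seen[i].update(seen[root])   # root's info reaches everyone
        # Midpoint update: since all processes share the last root's values,
        # the diameter shrinks by at least 1/2 per phase in rooted networks.
        vals = [(min(s.values()) + max(s.values())) / 2 for s in seen]
    return vals

vals = amortized_midpoint([0.0, 3.0, 7.0, 10.0], phases=5)
print(vals, max(vals) - min(vals))  # final diameter <= 10 / 2**5
```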

    Tight Bounds for Asymptotic and Approximate Consensus

    Get PDF
    We study the performance of asymptotic and approximate consensus algorithms under harsh environmental conditions. The asymptotic consensus problem requires a set of agents to repeatedly set their outputs such that the outputs converge to a common value within the convex hull of the initial values. This problem, and the related approximate consensus problem, are fundamental building blocks in distributed systems where exact consensus among agents is not required or not possible, e.g., man-made distributed control systems, and have applications in the analysis of natural distributed systems, such as flocking and opinion dynamics. We prove tight lower bounds on the contraction rates of asymptotic consensus algorithms in dynamic networks, from which we deduce bounds on the time complexity of approximate consensus algorithms. In particular, the obtained bounds show optimality of the asymptotic and approximate consensus algorithms presented in [Charron-Bost et al., ICALP'16] for certain dynamic networks, including the weakest dynamic network model in which asymptotic and approximate consensus are solvable. As a corollary we also obtain asymptotically tight bounds for asymptotic consensus in the classical asynchronous model with crashes. Central to our lower bound proofs is an extended notion of valency: the set of reachable limits of an asymptotic consensus algorithm starting from a given configuration. We further relate topological properties of valencies to the solvability of exact consensus, shedding some light on the relation between these three fundamental problems in dynamic networks.
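
    The step from contraction rates to time complexity is worth making explicit; the following folklore calculation (stated here for context, not quoted from the paper) shows how a bound on the per-step contraction factor c translates into a decision-time bound.

```latex
% If each update step contracts the diameter of the value set by a factor c < 1,
% then after t steps, starting from initial diameter \Delta:
\[ \delta(t) \;\le\; c^{t}\,\Delta . \]
% Hence \varepsilon-agreement, \delta(t) \le \varepsilon, is reached once
\[ t \;\ge\; \frac{\ln(\Delta/\varepsilon)}{\ln(1/c)} , \]
% so a lower bound on the achievable contraction rate c yields a lower bound
% on the decision time of any approximate consensus algorithm.
```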

    Two-agent approximate agreement from an epistemic logic perspective

    Get PDF
    We investigate the two-agent approximate agreement problem in a dynamic network whose topology may change unpredictably, and in which consensus is not solvable. It is known that the number of rounds necessary and sufficient to guarantee that the two agents output values 1/3^k away from each other is k. We distil ideas from previous papers to provide a self-contained, elementary introduction that explains this result from the epistemic logic perspective.
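
    For intuition, the contraction behind such bounds can be simulated directly. The sketch below uses a standard rule from the approximate agreement literature, in a model where the adversary delivers at least one of the two messages each round; it is an illustration, not necessarily the algorithm as presented in the paper.

```python
import random

def round_update(x, y):
    """One round of two-agent approximate agreement.

    The adversary delivers at least one of the two messages. A receiving
    agent moves two-thirds of the way toward the value it received; a
    non-receiving agent stays put. In all three delivery patterns the
    distance |x - y| contracts by exactly 1/3 (a standard rule from the
    literature; the paper's epistemic presentation may differ).
    """
    pattern = random.choice(["a->b", "b->a", "both"])
    new_x = (x + 2 * y) / 3 if pattern in ("b->a", "both") else x
    new_y = (y + 2 * x) / 3 if pattern in ("a->b", "both") else y
    return new_x, new_y

x, y = 0.0, 1.0
for k in range(1, 6):
    x, y = round_update(x, y)
    print(k, abs(x - y))  # ~ 1/3**k regardless of the adversary's choices
```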

    Deep Risk Prediction and Embedding of Patient Data: Application to Acute Gastrointestinal Bleeding

    Get PDF
    Acute gastrointestinal bleeding is a common and costly condition, accounting for over 2.2 million hospital days and 19.2 billion dollars of medical charges annually. Risk stratification is a critical part of the initial assessment of patients with acute gastrointestinal bleeding. Although all national and international guidelines recommend the use of risk-assessment scoring systems, they are not commonly used in practice, have sub-optimal performance, may be applied incorrectly, and are not easily updated. With the advent of widespread electronic health record adoption, longitudinal clinical data captured during the clinical encounter are now available. However, these data are often noisy, sparse, and heterogeneous. Unsupervised machine learning algorithms may be able to identify structure within electronic health record data while accounting for key issues with the data generation process: measurements missing not at random and information captured in unstructured clinical note text. Deep learning tools can create electronic health record-based models that perform better than clinical risk scores for gastrointestinal bleeding and are well-suited to learning from new data. Furthermore, these models can be used to predict risk trajectories over time, leveraging the longitudinal nature of the electronic health record. The foundation of creating relevant tools is the definition of a relevant outcome measure; in acute gastrointestinal bleeding, a composite outcome of red blood cell transfusion, hemostatic intervention, and all-cause 30-day mortality is a relevant, actionable outcome that reflects the need for hospital-based intervention. However, epidemiological trends may affect the relevance and effectiveness of the outcome measure when applied across multiple settings and patient populations. Understanding the trends in practice, potential areas of disparities, and the value proposition of risk stratification for patients presenting to the Emergency Department with acute gastrointestinal bleeding is important for deciding how best to implement a robust, generalizable risk stratification tool. Key findings include a decrease in the rate of red blood cell transfusion since 2014 and disparities in access to upper endoscopy for patients with upper gastrointestinal bleeding by race/ethnicity across urban and rural hospitals. Projected accumulated savings from consistent implementation of risk stratification tools for upper gastrointestinal bleeding total approximately $1 billion five years after implementation. Most current risk scores were designed for use based on the location of the bleeding source: upper or lower gastrointestinal tract. However, the location of the bleeding source is not always clear at presentation. I develop and validate electronic health record-based deep learning and machine learning tools for patients presenting with symptoms of acute gastrointestinal bleeding (e.g., hematemesis, melena, hematochezia), which is more relevant and useful in clinical practice. I show that they outperform leading clinical risk scores for upper and lower gastrointestinal bleeding, the Glasgow Blatchford Score and the Oakland score. While the best performing gradient boosted decision tree model has overall performance equivalent to the fully connected feedforward neural network model, at the very low risk threshold of 99% sensitivity the deep learning model identifies more very low risk patients.
    Using another deep learning model that can capture longitudinal risk, the long short-term memory recurrent neural network, the need for transfusion of red blood cells can be predicted at every 4-hour interval in the first 24 hours of intensive care unit stay for high-risk patients with acute gastrointestinal bleeding. Finally, for implementation it is important to find patients with symptoms of acute gastrointestinal bleeding in real time and to characterize patients by risk using available data in the electronic health record. A decision rule-based electronic health record phenotype has performance, as measured by positive predictive value, equivalent to deep learning and natural language processing-based models, and after live implementation appears to have increased the use of the Acute Gastrointestinal Bleeding Clinical Care pathway. Patients with acute gastrointestinal bleeding can be differentiated from patients with other groups of disease concepts by directly mapping unstructured clinical text to a common ontology, treating the vector of concepts as signals on a knowledge graph, and comparing patients using unbalanced diffusion earth mover's distances on the graph. For electronic health record data with values missing not at random, MURAL, an unsupervised random forest-based method, handles missing values and generates visualizations that characterize patients with gastrointestinal bleeding. This thesis forms a basis for understanding the potential of machine learning and deep learning tools to characterize risk for patients with acute gastrointestinal bleeding. In the future, these tools may be critical in implementing integrated risk assessment to keep low-risk patients out of the hospital and to guide resuscitation and timely endoscopic procedures for patients at higher risk of clinical decompensation.
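
    The very-low-risk cutoff used above is a simple operating-point choice: take the largest probability threshold that still keeps sensitivity at 99%, and label patients scoring below it as very low risk. A minimal numpy sketch of that calculation (illustrative only; the data and function names are hypothetical, not the thesis code):

```python
import numpy as np

def very_low_risk_threshold(y_true, y_prob, sensitivity=0.99):
    """Largest threshold t such that flagging prob >= t as high risk still
    captures >= `sensitivity` of true positives; patients with prob < t
    are labeled very low risk. Illustrative sketch, not the thesis code."""
    pos = np.sort(y_prob[y_true == 1])
    # Allow at most (1 - sensitivity) of positives to fall below t:
    k = int(np.floor((1 - sensitivity) * len(pos)))
    return pos[k]

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)                              # hypothetical outcomes
y_prob = np.clip(y_true * 0.6 + rng.normal(0.3, 0.2, 1000), 0, 1)  # hypothetical scores
t = very_low_risk_threshold(y_true, y_prob)
print(f"threshold={t:.3f}, very-low-risk patients={(y_prob < t).sum()}")
```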

    Deep Intellectual Property: A Survey

    Full text link
    With their widespread application in industrial manufacturing and commercial services, well-trained deep neural networks (DNNs) are becoming increasingly valuable and crucial assets due to their tremendous training cost and excellent generalization performance. These trained models can be utilized by users without much expert knowledge, benefiting from the emerging "Machine Learning as a Service" (MLaaS) paradigm. However, this paradigm also exposes the expensive models to various potential threats such as model stealing and abuse. As an urgent response to these threats, Deep Intellectual Property (DeepIP) protection, covering private training data, painstakingly tuned hyperparameters, and costly learned model weights, has become a consensus of both industry and academia. To this end, numerous approaches have been proposed in recent years, especially to prevent or discover model stealing and unauthorized redistribution. Given this period of rapid evolution, the goal of this paper is to provide a comprehensive survey of the recent achievements in this field. More than 190 research contributions are included in this survey, covering many aspects of Deep IP protection: challenges/threats, invasive solutions (watermarking), non-invasive solutions (fingerprinting), evaluation metrics, and performance. We finish the survey by identifying promising directions for future research.
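
    As an illustration of the watermarking family surveyed here, ownership is typically verified by querying a suspect model on a secret trigger set and checking that its predictions match the watermark labels far more often than chance. The Python sketch below is generic (the models and trigger set are stand-ins, not a specific scheme from the survey):

```python
import random

def verify_watermark(model, trigger_set, match_threshold=0.9):
    """Trigger-set ownership check: a watermarked model should reproduce
    the secret labels on the trigger inputs, while an independent model
    agrees only at chance level. Generic sketch of the idea."""
    matches = sum(model(x) == y for x, y in trigger_set)
    rate = matches / len(trigger_set)
    return rate, rate >= match_threshold

# Stand-in "models": 10-class classifiers as plain functions.
secret = [(i, random.randrange(10)) for i in range(100)]  # secret trigger set
lookup = dict(secret)
watermarked = lambda x: lookup[x]              # memorized the trigger labels
independent = lambda x: random.randrange(10)   # ~10% agreement by chance

print(verify_watermark(watermarked, secret))   # (1.0, True)
print(verify_watermark(independent, secret))   # (~0.1, False)
```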

    Characterization of Dedicated PET Equipment with Non-Conventional Geometry

    Full text link
    Since their introduction in the 1950s, tomographic images have become very valuable in the medical field, helping both in diagnosis and in the treatment of a variety of illnesses. In molecular imaging, Positron Emission Tomography (PET) provides accurate information on the interaction of radiotracers with patient tissue, and this information can be combined with anatomical images from CT (Computed Tomography) or MR (Magnetic Resonance) scanners. To improve PET performance, such as spatial resolution and sensitivity, whole-body (WB) PET scanners with large axial coverage have recently been proposed. However, the system cost increases accordingly, which makes their installation difficult in many hospitals and research centers. Organ-dedicated PET scanners, as an alternative to such large systems, use fewer detectors, so their price is considerably lower. The goal of this kind of system is to boost PET performance by placing the detectors as close as possible to the patient, optimizing the design for a specific organ instead of a large volume. Another advantage of these scanners is their portability. In this thesis we have worked on the design and validation of two organ-dedicated PET scanners with different geometries and technologies, as well as on a novel pre-clinical PET scanner.
    The first scanner resulted from a national project called PROSPET, and was designed and optimized to image the prostate area, motivated by the high incidence rate of prostate cancer: 17% of the male population will suffer prostate cancer. Its detector modules are composed of a monolithic LYSO scintillation block coupled to a photosensor array based on silicon photomultipliers (SiPMs). The first configuration consisted of two panels. However, patient results were not satisfactory owing to the lack of angular information and the poor detector time resolution. The system was therefore rebuilt in a ring configuration with a reduced diameter in comparison with WB-PET scanners; high sensitivity and spatial resolution were found, as well as good image quality using phantoms. The second PET scanner, called CardioPET, also arose from a national grant, and was implemented to visualize the heart area while the patient is under pharmacological stress conditions. The two-panel geometry was also implemented for this system, but using pixelated crystals, thereby improving the detector time resolution and allowing the use of time-of-flight (TOF) reconstruction algorithms. Two panels were mounted and tested with both simulated and experimental data, with good results. Furthermore, patient motion was registered with the help of an external optical camera and ARUCO markers so that movement-correction techniques could be applied; these algorithms were tested and showed good performance.
    The last device developed in this PhD thesis was designed to optimize the classical ring PET configuration as much as possible. To do so, the gaps between the detector modules of a small-animal PET were eliminated by building a single detector with a cylindrical scintillator shape. The goal is to improve sensitivity, since no events are lost in the gaps, and also to boost spatial resolution, since there are no edges. Two prototypes were tested with simulations and validated experimentally: the first was built with planar outer faces, whereas the second was fully cylindrical. In both designs, effects originating from the detector curvature were observed and successfully corrected during calibration.
    This thesis was supported by an FPI grant under reference 2017-08582 in the PhD program "Programa de Doctorado en Tecnologías para la Salud y el Bienestar" of the Universitat Politècnica de València. The grant was supported by the Consejo Superior de Investigaciones Científicas together with the Agencia Estatal de Investigación and the Fondo Social Europeo. Cañizares Ledo, G. (2022). Characterization of Dedicated PET Equipment with Non-Conventional Geometry [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/184977
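
    The move from panels without precise timing to pixelated crystals with TOF capability has a simple physical rationale: the arrival-time difference Δt of the two annihilation photons localizes the event along the line of response as Δx = c·Δt/2. A short Python illustration (the coincidence time resolutions are example values, not measurements from this thesis):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_localization_fwhm(crt_ps):
    """FWHM of the event position along the line of response for a
    given coincidence time resolution (CRT): dx = c * dt / 2."""
    return C * (crt_ps * 1e-12) / 2  # meters

for crt in (600, 400, 200):  # example CRTs in picoseconds
    print(f"CRT {crt} ps -> {tof_localization_fwhm(crt) * 100:.1f} cm along the LOR")
```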

    Analogue Gravity

    Full text link

    The Fifteenth Marcel Grossmann Meeting

    Get PDF
    The three volumes of the proceedings of MG15 give a broad view of all aspects of gravitational physics and astrophysics, from mathematical issues to recent observations and experiments. The scientific program of the meeting included 40 morning plenary talks over 6 days, 5 evening popular talks and nearly 100 parallel sessions on 71 topics spread over 4 afternoons. These proceedings are a representative sample of the very many oral and poster presentations made at the meeting. Part A contains plenary and review articles and the contributions from some parallel sessions, while Parts B and C consist of those from the remaining parallel sessions. The contents range from the mathematical foundations of classical and quantum gravitational theories, including recent developments in string theory, to precision tests of general relativity, including progress towards the detection of gravitational waves, and from supernova cosmology to relativistic astrophysics, covering topics such as gamma ray bursts, black hole physics both in our galaxy and in active galactic nuclei in other galaxies, and neutron star, pulsar and white dwarf astrophysics. Parallel sessions touch on dark matter, neutrinos, X-ray sources, astrophysical black holes, neutron stars, white dwarfs, binary systems, radiative transfer, accretion disks, quasars, gamma ray bursts, supernovas, alternative gravitational theories, perturbations of collapsed objects, analog models, black hole thermodynamics, numerical relativity, gravitational lensing, large scale structure, observational cosmology, early universe models and cosmic microwave background anisotropies, inhomogeneous cosmology, inflation, global structure, singularities, chaos, Einstein-Maxwell systems, wormholes, exact solutions of Einstein's equations, gravitational waves, gravitational wave detectors and data analysis, precision gravitational measurements, quantum gravity and loop quantum gravity, quantum cosmology, strings and branes, self-gravitating systems, gamma ray astronomy, cosmic rays and the history of general relativity.