64 research outputs found

    Give more data, awareness and control to individual citizens, and they will help COVID-19 containment.

    The rapid dynamics of COVID-19 call for quick and effective tracking of virus transmission chains and early detection of outbreaks, especially in the "phase 2" of the pandemic, when lockdown and other restrictive measures are progressively withdrawn, in order to avoid or minimize contagion resurgence. For this purpose, contact-tracing apps are being proposed for large-scale adoption by many countries. A centralized approach, where all data sensed by the app are sent to a nation-wide server, raises concerns about citizens' privacy and needlessly strong digital surveillance, underscoring the need to minimize personal data collection and to avoid location tracking. We advocate the conceptual advantage of a decentralized approach, where both contact and location data are collected exclusively in individual citizens' "personal data stores", to be shared separately and selectively (e.g., with a backend system, but possibly also with other citizens), voluntarily, only when the citizen has tested positive for COVID-19, and with a privacy-preserving level of granularity. This approach better protects the personal sphere of citizens and affords multiple benefits: it allows detailed information gathering for infected people in a privacy-preserving fashion, which in turn enables both contact tracing and the early detection of outbreak hotspots at a finer geographic scale. The decentralized approach is also scalable to large populations, in that only the data of positive patients need be handled at a central level. Our recommendation is two-fold. First, to extend existing decentralized architectures with a light touch, in order to manage the collection of location data locally on the device and allow the user to share spatio-temporal aggregates (if and when they want, and for specific aims) with health authorities, for instance. Second, to pursue over the longer term the realization of a Personal Data Store vision, giving users the opportunity to contribute to the collective good to the degree they want, enhancing self-awareness, and cultivating collective efforts for rebuilding society.
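    The decentralized pattern the abstract describes — raw contact and location data kept on-device, with only coarse spatio-temporal aggregates shared, and only voluntarily after a positive test — can be sketched roughly as follows. This is a minimal illustration under assumptions of our own: the class, method names, and the grid-cell aggregation are hypothetical and do not come from any proposed app.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class PersonalDataStore:
    """Illustrative on-device store: raw fixes never leave the phone
    unless the user has tested positive AND explicitly consents."""
    locations: list = field(default_factory=list)  # raw (lat, lon, hour) fixes

    def record_location(self, lat, lon, hour):
        self.locations.append((lat, lon, hour))

    def share_aggregates(self, tested_positive, consent, cell_deg=0.01):
        """Return coarse spatio-temporal counts, or None if sharing is
        not both warranted (positive test) and voluntary (consent)."""
        if not (tested_positive and consent):
            return None
        # Snap each fix to a coarse grid cell + hour, then count visits:
        # only these privacy-preserving aggregates would be shared.
        return dict(Counter(
            (round(lat / cell_deg) * cell_deg,
             round(lon / cell_deg) * cell_deg,
             hour)
            for lat, lon, hour in self.locations
        ))

store = PersonalDataStore()
store.record_location(45.0701, 7.6869, 14)
store.record_location(45.0703, 7.6871, 14)   # same coarse cell, same hour
aggregates = store.share_aggregates(tested_positive=True, consent=True)
```

    The key design point mirrored here is that the granularity parameter and the consent check sit on the citizen's side, not the server's.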

    Prometheus unbound or Paradise regained: the concept of causality in the contemporary AI-data science debate

    This essay highlights some aspects, core themes and controversies regarding causality from a historical-philosophical perspective, with special attention to their role in the AI-data science debate. Firstly, it outlines the contours of this debate and subsequently addresses the aporia of causality in statistics, AI, and the philosophy of science. In view of the prevalent crisis, some key themes and controversies are identified, and a frame of reference is proposed that may clarify historical controversies and the current state of “agreeing to disagree” in science and philosophy. Secondly, the essay highlights the historical scope of the concept and outlines some early perspectives and “key moments” that involved major conceptual shifts. Thirdly, the essay outlines the rise of statistics and its role in attempting to defuse the crisis by entering a sort of progressive liaison with causality. Finally, it is shown how research in AI has further shaped the concept, and how and why causality is about to play a crucial role in the current quest for responsible, explainable and transparent AI and data science.

    Deep reinforcement learning for predictive aircraft maintenance using probabilistic Remaining-Useful-Life prognostics

    The increasing availability of sensor monitoring data has stimulated the development of Remaining-Useful-Life (RUL) prognostics and maintenance planning models. However, existing studies focus either on RUL prognostics only, or propose maintenance planning based on simple assumptions about degradation trends. We propose a framework to integrate data-driven probabilistic RUL prognostics into predictive maintenance planning. We estimate the distribution of RUL using Convolutional Neural Networks with Monte Carlo dropout. These prognostics are updated over time, as more measurements become available. We further pose the maintenance planning problem as a Deep Reinforcement Learning (DRL) problem, where maintenance actions are triggered based on the estimates of the RUL distribution. We illustrate our framework for the maintenance of aircraft turbofan engines. Using our DRL approach, the total maintenance cost is reduced by 29.3% compared to the case when engines are replaced at the mean-estimated-RUL. In addition, 95.6% of unscheduled maintenance is prevented, and the wasted life of the engines is limited to only 12.81 cycles. Overall, we propose a roadmap for predictive maintenance from sensor measurements, to data-driven probabilistic RUL prognostics, to maintenance planning.
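    The two-stage pipeline the abstract outlines — stochastic forward passes with Monte Carlo dropout to approximate the RUL distribution, and a policy that acts on that distribution rather than a point estimate — might be sketched as below. This is a toy stand-in under assumptions of our own: the linear "network", the weights, and the quantile-threshold rule are illustrative, not the paper's CNN or its learned DRL policy.

```python
import numpy as np

def mc_dropout_rul_samples(features, weights, n_passes=200, p_drop=0.5, rng=None):
    """Toy Monte Carlo dropout: each forward pass applies a fresh random
    dropout mask to a linear model, yielding one RUL sample; the set of
    samples approximates the RUL distribution (illustrative stand-in for
    a CNN with dropout kept active at inference time)."""
    if rng is None:
        rng = np.random.default_rng(0)
    samples = []
    for _ in range(n_passes):
        mask = rng.random(weights.shape) > p_drop          # drop units at random
        kept = weights * mask / (1.0 - p_drop)             # rescale survivors
        samples.append(float(features @ kept))
    return np.array(samples)

def maintenance_action(rul_samples, safety_quantile=0.05, threshold=10.0):
    """Trigger replacement when a conservative (5th-percentile) RUL
    estimate falls below a threshold -- a simple hand-written policy
    standing in for the learned DRL policy."""
    conservative_rul = np.quantile(rul_samples, safety_quantile)
    return "replace" if conservative_rul < threshold else "continue"

# Hypothetical degradation features and weights for one engine snapshot.
features = np.array([0.5, 1.0, 0.2])
weights = np.array([20.0, 15.0, 30.0])
samples = mc_dropout_rul_samples(features, weights, rng=np.random.default_rng(42))
action = maintenance_action(samples)
```

    Acting on a low quantile of the sampled distribution, rather than its mean, is what lets such a policy trade wasted engine life against the risk of unscheduled maintenance.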

    Characterising Seismic Data

    When a seismologist analyses a new seismogram, it is often useful to have access to a set of similar seismograms, for example when she tries to determine the event, if any, that caused the particular readings on her seismogram. So, the question is: when are two seismograms similar? To define such a notion of similarity, we first preprocess the seismogram by a wavelet decomposition, followed by a discretisation of the wavelet coefficients. Next we introduce a new type of pattern on the resulting set of aligned symbolic time series. These patterns, called block patterns, satisfy an Apriori property and can thus be found with a levelwise search. Next we use MDL to define when a set of such patterns is characteristic for the data. We introduce the MuLTi-Krimp algorithm to find such code sets. In experiments we show that these code sets are both good at distinguishing between dissimilar seismograms and good at recognising similar seismograms. Moreover, we show how such a code set can be used to generate a synthetic seismogram that shows what all seismograms in a cluster have in common.
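    The preprocessing step described here — a wavelet decomposition followed by discretisation of the coefficients into aligned symbolic series — can be sketched as follows. The Haar wavelet and equal-width binning are illustrative assumptions; the paper's actual wavelet and discretisation scheme may differ.

```python
import numpy as np

def haar_decompose(signal, levels=3):
    """Orthonormal 1-D Haar wavelet decomposition: returns the detail
    coefficients per level plus the final approximation (signal length
    must be divisible by 2**levels)."""
    coeffs = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        coeffs.append((even - odd) / np.sqrt(2))   # detail at this level
        approx = (even + odd) / np.sqrt(2)         # coarser approximation
    coeffs.append(approx)
    return coeffs

def discretise(coeffs, n_bins=4):
    """Map each coefficient to one of n_bins symbols by equal-width
    binning over all coefficients, giving the aligned symbolic series
    that block patterns would be mined from."""
    flat = np.concatenate(coeffs)
    edges = np.linspace(flat.min(), flat.max(), n_bins + 1)[1:-1]
    return [np.digitize(c, edges) for c in coeffs]

seismogram = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])  # toy trace
coeffs = haar_decompose(seismogram)
symbols = discretise(coeffs)
```

    Because the transform is orthonormal, the coefficients carry exactly the signal's energy; the lossy step is the discretisation, whose bin count trades symbolic-series compactness against fidelity.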