
    Distributed averaging for accuracy prediction in networked systems

    Full text link
    Distributed averaging is among the most relevant cooperative control problems, with applications in sensor and robotic networks, distributed signal processing, data fusion, and load balancing. Consensus and gossip algorithms have been investigated and successfully deployed in multi-agent systems to perform distributed averaging in synchronous and asynchronous settings. This study proposes a heuristic approach to estimate the convergence rate of averaging algorithms in a distributed manner, relying on the computation and propagation of local graph metrics and requiring only simple local processing and lightweight message passing. The protocol enables nodes to predict the time (or the number of interactions) needed to estimate the global average with the desired accuracy. Consequently, nodes can make informed decisions on their use of measured and estimated data while gaining awareness of the global structure of the network, as well as of their role in it. The study presents relevant applications to outlier identification and to performance evaluation in switching topologies.
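
    The flavour of the asynchronous averaging algorithms mentioned above can be conveyed with a small simulation. The sketch below runs a randomized pairwise gossip protocol on a toy topology until the local values are close to the global average; the graph, the initial values, and the stopping threshold are illustrative assumptions and do not reproduce the convergence-rate predictor proposed in the paper.

        import random

        # Illustrative undirected topology: node -> set of neighbours (assumed, not from the paper).
        neighbours = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2, 4}, 4: {3}}
        values = {0: 10.0, 1: 4.0, 2: 7.0, 3: 1.0, 4: 8.0}   # local measurements
        target = sum(values.values()) / len(values)           # true global average (6.0)

        def gossip_round(values, neighbours, rng):
            """One asynchronous interaction: a random edge averages its two endpoints."""
            i = rng.choice(list(values))
            j = rng.choice(sorted(neighbours[i]))
            values[i] = values[j] = (values[i] + values[j]) / 2.0

        rng = random.Random(0)
        interactions = 0
        while max(abs(v - target) for v in values.values()) > 1e-3:
            gossip_round(values, neighbours, rng)
            interactions += 1
        print(f"within 1e-3 of the global average after {interactions} pairwise interactions")

    Note that the stopping test above uses the true average, which is only available in simulation; deployed nodes cannot check it directly, which is precisely the gap a distributed convergence-rate prediction is meant to fill.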

    A Lossy Compression Technique Enabling Duplication-Aware Sequence Alignment

    Get PDF
    In spite of the recognized importance of tandem duplications in genome evolution, commonly adopted sequence comparison algorithms do not take into account complex mutation events involving more than one residue at a time, since such events are not compliant with the underlying assumption of statistical independence of adjacent residues. As a consequence, the presence of tandem repeats in the sequences under comparison may impair the biological significance of the resulting alignment. Although solutions have been proposed, repeat-aware sequence alignment is still considered an open problem, and new efficient and effective methods have been advocated. The present paper describes an alternative lossy compression scheme for genomic sequences which iteratively collapses repeats of increasing length. The resulting approximate representations contain no tandem duplications, while retaining enough information to make their comparison even more significant than the edit distance between the original sequences. This allows traditional alignment algorithms to be applied directly to the compressed sequences. Results confirm the validity of the proposed approach for the problem of duplication-aware sequence alignment.
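
    The idea of iteratively collapsing repeats of increasing length can be sketched in a few lines of Python. The greedy scheme below is only an illustration under assumed choices (maximum unit length, left-to-right scanning) and is not the compression scheme defined in the paper.

        def collapse_tandem_repeats(seq, max_unit_len=3):
            """Greedily drop adjacent copies of repeat units of length 1..max_unit_len
            in a single left-to-right pass per unit length (lossy, illustrative).

            Example: 'AAAGAGAGTTT' -> 'AGT' (the 'A'/'T' runs and the 'AG' tandem
            repeat are each collapsed to a single copy).
            """
            for unit_len in range(1, max_unit_len + 1):
                out, i = [], 0
                while i < len(seq):
                    unit = seq[i:i + unit_len]
                    out.append(unit)
                    i += unit_len
                    while seq[i:i + unit_len] == unit:   # skip adjacent copies
                        i += unit_len
                seq = "".join(out)
            return seq

        print(collapse_tandem_repeats("AAAGAGAGTTT"))   # -> AGT

    A traditional aligner can then be run on the collapsed strings, which is the use the abstract describes.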

    A Monte Carlo Method for Assessing the Quality of Duplication-Aware Alignment Algorithms

    Get PDF
    The increasing availability of high-throughput sequencing technologies poses several challenges concerning the analysis of genomic data. Within this context, duplication-aware sequence alignment taking into account complex mutation events is regarded as an important problem, particularly in light of recent evolutionary bioinformatics research that highlighted the role of tandem duplications as one of the most important mutation events. Traditional sequence comparison algorithms do not take these events into account, resulting in poor alignments in terms of biological significance, mainly because of their assumption of statistical independence among contiguous residues. Several duplication-aware algorithms have been proposed in recent years, which differ either in the type of duplications they consider or in the methods adopted to identify and compare them. However, no solution clearly outperforms the others, and no methods exist for assessing the reliability of the resulting alignments. This paper proposes a Monte Carlo method for assessing the quality of duplication-aware alignment algorithms and for driving the choice of the most appropriate alignment technique to be used in a specific context.
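
    One common way to turn alignment scores into a quality measure is a Monte Carlo significance test, sketched below with a placeholder scoring function; the shuffling scheme, the scorer, and the number of trials are assumptions for illustration and do not describe the assessment method proposed in the paper.

        import random

        def alignment_score(a, b):
            """Placeholder scorer (longest common subsequence length); any
            duplication-aware aligner could be plugged in instead."""
            dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
            for i, ca in enumerate(a, 1):
                for j, cb in enumerate(b, 1):
                    dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
            return dp[-1][-1]

        def monte_carlo_pvalue(a, b, trials=500, seed=0):
            """Empirical p-value: fraction of randomly shuffled pairs scoring at
            least as high as the real pair (with add-one correction)."""
            rng = random.Random(seed)
            observed = alignment_score(a, b)
            hits = 0
            for _ in range(trials):
                sa, sb = list(a), list(b)
                rng.shuffle(sa)
                rng.shuffle(sb)
                if alignment_score("".join(sa), "".join(sb)) >= observed:
                    hits += 1
            return (hits + 1) / (trials + 1)

        print(monte_carlo_pvalue("ACGTACGTACGT", "ACGTTACGGACT"))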

    Supporting Preemptive Multitasking in Wireless Sensor Networks

    Get PDF
    Supporting the concurrent execution of multiple tasks on lightweight sensor nodes could enable the deployment of independent applications on a shared wireless sensor network, thus saving cost and time by exploiting infrastructures which are typically underutilized if dedicated to a single task. Existing approaches to wireless sensor network programming provide limited support for concurrency at the cost of reducing the generality and expressiveness of the adopted language. This paper presents a Java-compatible platform for wireless sensor networks which provides thorough support for preemptive multitasking while allowing programmers to write their applications in Java. The proposed approach has been implemented and tested on top of VirtualSense, an ultra-low-power wireless sensor mote providing a Java-compatible runtime environment. Performance and scalability of the solution are discussed in light of extensive experiments performed on representative benchmarks.
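
    As a purely conceptual analogue of preemptive multitasking on a shared node, the sketch below runs two independent periodic sensing tasks as standard Python threads; it illustrates the programming model only and is not the Java-compatible runtime or the VirtualSense mote discussed here.

        import threading
        import time

        def sensing_task(name, period_s, iterations):
            """Independent periodic task; the scheduler preempts it transparently."""
            for i in range(iterations):
                print(f"[{name}] sample {i}")
                time.sleep(period_s)   # yields the CPU, as a blocking sensing task would

        # Two independent applications deployed on the same (simulated) node.
        tasks = [
            threading.Thread(target=sensing_task, args=("temperature", 0.10, 3)),
            threading.Thread(target=sensing_task, args=("vibration", 0.15, 3)),
        ]
        for t in tasks:
            t.start()
        for t in tasks:
            t.join()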

    A Statistical Geometry Approach to Distance Estimation in Wireless Sensor Networks

    Get PDF
    Algorithmic approaches to the estimation of pairwise distances between the nodes of a wireless sensor network are highly attractive to provide information for routing and localization without requiring specific hardware to be added to cost/resource-constrained nodes. This paper exploits statistical geometry to derive robust estimators of the pairwise Euclidean distances from topological information typically available in any network. Extensive Monte Carlo experiments conducted on synthetic benchmarks demonstrate the improved quality of the proposed estimators with respect to the state of the art.
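
    To give a concrete feel for how topology can reveal distance, the sketch below uses one classical statistical-geometry relation: for nodes deployed uniformly with a common radio range r, the expected fraction of neighbours shared by two connected nodes equals the normalised area of the lens-shaped overlap of their coverage disks, which can be inverted numerically. This is an assumed, illustrative estimator, not necessarily the one derived in the paper.

        import math

        def lens_overlap_fraction(d, r):
            """Area of intersection of two disks of radius r at distance d, divided by pi*r^2."""
            if d >= 2 * r:
                return 0.0
            area = 2 * r * r * math.acos(d / (2 * r)) - (d / 2) * math.sqrt(4 * r * r - d * d)
            return area / (math.pi * r * r)

        def estimate_distance(shared_fraction, r, tol=1e-6):
            """Invert the (monotonically decreasing) overlap fraction by bisection."""
            lo, hi = 0.0, 2 * r
            while hi - lo > tol:
                mid = (lo + hi) / 2
                if lens_overlap_fraction(mid, r) > shared_fraction:
                    lo = mid          # overlap too large -> nodes must be farther apart
                else:
                    hi = mid
            return (lo + hi) / 2

        # Example: 40% of each node's neighbours are shared, radio range r = 10 m.
        print(f"estimated distance ~ {estimate_distance(0.40, 10.0):.2f} m")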

    Large-scale assessment of mobile crowdsensed data: a case study

    Get PDF
    Mobile crowdsensing (MCS) is a well-established paradigm that leverages mobile devices’ ubiquitous nature and processing capabilities for large-scale data collection to monitor phenomena of common interest. Crowd-powered data collection is significantly faster and more cost-effective than traditional methods. However, it poses challenges in assessing the accuracy and extracting information from large volumes of user-generated data. SmartRoadSense (SRS) is an MCS technology that utilises sensors embedded in mobile phones to monitor the quality of road surfaces by computing a crowdsensed road roughness index (referred to as PPE). The present work performs statistical modelling of PPE to analyse its distribution across the road network and elucidate how it can be efficiently analysed and interpreted. Joint statistical analysis of open datasets is then carried out to investigate the effect of both internal and external road features on PPE. Several road properties are found to affect PPE as predicted, providing evidence that SRS can be effectively applied to assess road quality conditions. Finally, the effect of road category and speed limit on the mean and standard deviation of PPE is evaluated, incorporating previous results on the relationship between vehicle speed and PPE. These results enable more effective and confident use of the SRS platform and its data to help inform road construction and renovation decisions, especially where a lack of resources limits the use of conventional approaches. The work also exemplifies how crowdsensing technologies can benefit from open data integration and highlights the importance of making coherent, comprehensive, and well-structured open datasets available to the public.
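
    The kind of joint analysis described above can be sketched as a simple grouped summary of the roughness index. The column names and values below are invented for illustration and are not the SmartRoadSense schema or data.

        import pandas as pd

        # Hypothetical crowdsensed records; column names are assumptions, not the SRS schema.
        df = pd.DataFrame({
            "ppe":         [0.8, 1.4, 0.6, 2.1, 1.9, 0.7, 2.4, 1.1],
            "road_class":  ["motorway", "motorway", "motorway", "local",
                            "local", "motorway", "local", "local"],
            "speed_limit": [130, 130, 110, 50, 50, 110, 30, 50],
        })

        # Mean and standard deviation of PPE per (road class, speed limit) group.
        summary = (
            df.groupby(["road_class", "speed_limit"])["ppe"]
              .agg(["mean", "std", "count"])
              .reset_index()
        )
        print(summary)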

    Bootstrap Based Uncertainty Propagation for Data Quality Estimation in Crowdsensing Systems

    Get PDF
    The diffusion of mobile devices equipped with sensing, computation, and communication capabilities is opening unprecedented possibilities for high-resolution, spatio-temporal mapping of several phenomena. This novel data generation, collection, and processing paradigm, termed crowdsensing, relies on complex, distributed cyber-physical systems. Collective data gathering from heterogeneous, spatially distributed devices inherently raises the question of how to manage different quality levels of contributed data. In order to extract meaningful information, it is therefore desirable to introduce effective methods for evaluating data quality. In this paper, we propose an approach aimed at systematic accuracy estimation of quantities provided by end-user devices of a crowd-based sensing system. This is obtained by combining statistical bootstrap with uncertainty propagation techniques, leading to a consistent and technically sound methodology. Uncertainty propagation provides a formal framework for combining the uncertainties resulting from the different quantities influencing a given measurement activity. Statistical bootstrap enables the characterization of the sampling distribution of a given statistic without any prior assumption on the type of statistical distributions behind the data generation process. The proposed approach is evaluated on synthetic benchmarks and on a real-world case study. Cross-validation experiments show that confidence intervals computed with the presented technique deviate by at most 1.5% from the interval widths computed with controlled standard Monte Carlo methods, under a wide range of operating conditions. In general, experimental results confirm the suitability and validity of the introduced methodology.
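
    The bootstrap step at the core of the approach can be illustrated in a few lines; the resampling count, confidence level, and readings below are illustrative assumptions, and the full methodology additionally combines such intervals with formal uncertainty-propagation rules.

        import random
        import statistics

        def bootstrap_ci(samples, stat=statistics.mean, resamples=5000, alpha=0.05, seed=0):
            """Percentile bootstrap confidence interval for an arbitrary statistic,
            with no assumption on the underlying distribution of the data."""
            rng = random.Random(seed)
            n = len(samples)
            estimates = sorted(
                stat([rng.choice(samples) for _ in range(n)]) for _ in range(resamples)
            )
            lo = estimates[int((alpha / 2) * resamples)]
            hi = estimates[int((1 - alpha / 2) * resamples) - 1]
            return lo, hi

        # Hypothetical contributed measurements from one sensed quantity.
        readings = [1.02, 0.97, 1.10, 1.05, 0.93, 1.20, 1.01, 0.99, 1.08, 1.04]
        print("95% CI for the mean:", bootstrap_ci(readings))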

    Decentralising the Internet of Medical Things with Distributed Ledger Technologies and Off-Chain Storages: a Proof of Concept

    Get PDF
    Privacy concerns limit the Internet of Medical Things, even though shared medical information would enable new medical studies, the formulation of new treatments, and the delivery of new digital health technologies. Solving the sharing issue would have a triple impact: handling sensitive information easily, contributing to international medical advancements, and enabling personalised care. A possible solution could be to decentralise the notion of privacy, distributing it directly to users. Solutions enabling this vision are closely linked to Distributed Ledger Technologies, whose immutability and transparency make privacy-compliant solutions possible in contexts where privacy is the primary requirement. This work lays the foundations for a system that can provide adequate security in terms of privacy while allowing the sharing of information between participants. We introduce an Internet of Medical Things application use case called “Balance”, networks of trusted peers called “Halo” that manage access to sensitive data, and Smart Contracts that safeguard third-party rights over the data. This architecture should enable the theoretical vision of privacy-based healthcare solutions running in a decentralised manner.
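
    To make the off-chain/on-chain split concrete, the sketch below keeps a medical record off-chain and anchors only its hash and an access policy in an in-memory ledger stand-in; no real distributed-ledger or smart-contract API is used, and the “Halo” access list is modelled simply as a set of authorized peers.

        import hashlib
        import json

        off_chain_storage = {}   # stand-in for (encrypted) off-chain storage
        ledger = []              # stand-in for the distributed ledger

        def store_record(patient_id, record, authorized_peers):
            """Keep the sensitive payload off-chain; anchor only its hash and the
            access policy (the patient's "Halo" of trusted peers) on the ledger."""
            payload = json.dumps(record, sort_keys=True).encode()
            digest = hashlib.sha256(payload).hexdigest()
            off_chain_storage[digest] = payload
            ledger.append({"patient": patient_id, "hash": digest,
                           "authorized": set(authorized_peers)})
            return digest

        def read_record(digest, peer):
            """Smart-contract-like check: release data only to authorized peers
            and verify off-chain integrity against the anchored hash."""
            entry = next(e for e in ledger if e["hash"] == digest)
            if peer not in entry["authorized"]:
                raise PermissionError(f"{peer} is not in the patient's Halo")
            payload = off_chain_storage[digest]
            assert hashlib.sha256(payload).hexdigest() == digest, "tampered payload"
            return json.loads(payload)

        h = store_record("patient-42", {"hr": 61, "spo2": 98}, {"cardiology-node"})
        print(read_record(h, "cardiology-node"))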

    Introducing User Feedback-based Counterfactual Explanations (UFCE)

    Full text link
    Machine learning models are widely used in real-world applications. However, their complexity often makes it challenging to interpret the rationale behind their decisions. Counterfactual explanations (CEs) have emerged as a viable solution for generating comprehensible explanations in eXplainable Artificial Intelligence (XAI). A CE provides actionable information to users on how to achieve the desired outcome with minimal modifications to the input. However, current CE algorithms usually operate within the entire feature space when optimizing changes to overturn an undesired outcome, overlooking the identification of key contributors to the outcome and disregarding the practicality of the suggested changes. In this study, we introduce a novel methodology named User Feedback-based Counterfactual Explanation (UFCE), which addresses these limitations and aims to bolster confidence in the provided explanations. UFCE allows the inclusion of user constraints to determine the smallest modifications in a subset of actionable features while considering feature dependence, and evaluates the practicality of the suggested changes using benchmark evaluation metrics. We conducted three experiments with five datasets, demonstrating that UFCE outperforms two well-known CE methods in terms of proximity, sparsity, and feasibility. Reported results indicate that user constraints influence the generation of feasible CEs.
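
    A minimal sketch of a constrained counterfactual search in the spirit described above is given below: only user-selected actionable features are perturbed, within user-supplied bounds, until a toy classifier flips its prediction. The model, features, and grid search are illustrative assumptions and do not implement the UFCE algorithm.

        import itertools

        def predict(x):
            """Toy 'loan approval' model; stands in for any black-box classifier."""
            return int(0.5 * x["income"] + 2.0 * x["savings"] - x["debt"] > 50)

        def counterfactual(x, actionable, bounds, steps=20):
            """Grid-search the smallest change (L1 norm) over the user-chosen
            actionable features that flips the prediction to the desired class."""
            best, best_cost = None, float("inf")
            grids = []
            for f in actionable:
                lo, hi = bounds[f]
                grids.append([lo + (hi - lo) * i / steps for i in range(steps + 1)])
            for combo in itertools.product(*grids):
                candidate = dict(x)
                candidate.update(dict(zip(actionable, combo)))
                if predict(candidate) == 1:
                    cost = sum(abs(candidate[f] - x[f]) for f in actionable)
                    if cost < best_cost:
                        best, best_cost = candidate, cost
            return best

        x = {"income": 60, "savings": 5, "debt": 10}            # rejected by predict(x)
        user_bounds = {"income": (60, 90), "savings": (5, 20)}  # user-feasible ranges
        print(counterfactual(x, ["income", "savings"], user_bounds))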