292 research outputs found

    Minimizing the average distance to a closest leaf in a phylogenetic tree

    When performing an analysis on a collection of molecular sequences, it can be convenient to reduce the number of sequences under consideration while maintaining some characteristic of a larger collection of sequences. For example, one may wish to select a subset of high-quality sequences that represent the diversity of a larger collection of sequences. One may also wish to specialize a large database of characterized "reference sequences" to a smaller subset that is as close as possible on average to a collection of "query sequences" of interest. Such a representative subset can be useful whenever one wishes to find a set of reference sequences that is appropriate to use for comparative analysis of environmentally-derived sequences, such as for selecting "reference tree" sequences for phylogenetic placement of metagenomic reads. In this paper we formalize these problems in terms of the minimization of the Average Distance to the Closest Leaf (ADCL) and investigate algorithms to perform the relevant minimization. We show that the greedy algorithm is not effective, show that a variant of the Partitioning Among Medoids (PAM) heuristic gets stuck in local minima, and develop an exact dynamic programming approach. Using this exact program we note that the performance of PAM appears to be good for simulated trees, and is faster than the exact algorithm for small trees. On the other hand, the exact program gives solutions for all numbers of leaves less than or equal to the given desired number of leaves, while PAM only gives a solution for the pre-specified number of leaves. Via application to real data, we show that the ADCL criterion chooses chimeric sequences less often than random subsets, while the maximization of phylogenetic diversity chooses them more often than random. These algorithms have been implemented in publicly available software.
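    The ADCL objective itself is simple to state: the mean, over the query leaves, of the distance to the nearest leaf in the chosen subset. A minimal Python sketch follows, with a hypothetical four-leaf distance matrix; brute-force subset enumeration stands in for the paper's exact dynamic program, which this sketch does not attempt to reproduce.

```python
from itertools import combinations

def adcl(dist, chosen, queries):
    """Average Distance to the Closest Leaf: for each query leaf, take the
    distance to the nearest chosen leaf, then average over all queries."""
    return sum(min(dist[q][c] for c in chosen) for q in queries) / len(queries)

def best_subset(dist, leaves, queries, k):
    """Minimize ADCL over all size-k subsets by exhaustive search
    (illustration only; infeasible for large trees)."""
    return min(combinations(leaves, k), key=lambda s: adcl(dist, s, queries))

# Hypothetical pairwise tree distances between leaves A-D.
dist = {
    "A": {"A": 0, "B": 2, "C": 5, "D": 6},
    "B": {"A": 2, "B": 0, "C": 5, "D": 6},
    "C": {"A": 5, "B": 5, "C": 0, "D": 3},
    "D": {"A": 6, "B": 6, "C": 3, "D": 0},
}
print(best_subset(dist, list(dist), list(dist), 2))
```

    Here the queries are simply all leaves; in the reference-versus-query setting described above, `queries` would be the query sequences and `leaves` the candidate references.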

    Multistep-Ahead Neural-Network Predictors for Network Traffic Reduction in Distributed Interactive Applications

    Predictive contract mechanisms such as dead reckoning are widely employed to support scalable remote entity modeling in distributed interactive applications (DIAs). By employing a form of controlled inconsistency, a reduction in network traffic is achieved. However, by relying on the distribution of instantaneous derivative information, dead reckoning trades remote extrapolation accuracy for low computational complexity and ease of implementation. In this article, we present a novel extension of dead reckoning, termed neuro-reckoning, that seeks to replace the use of instantaneous velocity information with predictive velocity information in order to improve the accuracy of entity position extrapolation at remote hosts. Under our proposed neuro-reckoning approach, each controlling host employs a bank of neural network predictors trained to estimate future changes in entity velocity up to and including some maximum prediction horizon. The effect of each estimated change in velocity on the current entity position is simulated to produce an estimate for the likely position of the entity over some short time-span. Upon detecting an error threshold violation, the controlling host transmits a predictive velocity vector that extrapolates through the estimated position, as opposed to transmitting the instantaneous velocity vector. Such an approach succeeds in reducing the spatial error associated with remote extrapolation of entity state. Consequently, a further reduction in network traffic can be achieved. Simulation results obtained with several human users in a highly interactive DIA indicate significant potential for improved scalability when compared to the use of IEEE DIS standard dead reckoning. Our proposed neuro-reckoning framework exhibits low computational resource overhead for real-time use and can be seamlessly integrated into many existing dead reckoning mechanisms.
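    In outline, neuro-reckoning replaces the instantaneous velocity in the update packet with a vector aimed through a predicted future position. A minimal sketch of those two pieces is below; the bank of trained predictors is abstracted away as a supplied position estimate, and all names are illustrative rather than taken from the article.

```python
def predictive_velocity(pos, est_future_pos, horizon_dt):
    """Velocity vector that extrapolates through the predicted future
    position (est_future_pos would come from the predictor bank; here it
    is passed in directly as a stand-in)."""
    return [(f - p) / horizon_dt for p, f in zip(pos, est_future_pos)]

def needs_update(true_pos, extrap_pos, threshold):
    """Standard dead-reckoning test: send a state update only when the
    remote extrapolation error exceeds the agreed threshold."""
    err = sum((t - e) ** 2 for t, e in zip(true_pos, extrap_pos)) ** 0.5
    return err > threshold

# Aim the transmitted velocity through a position predicted 0.5 s ahead.
v = predictive_velocity([0.0, 0.0], [2.0, 1.0], 0.5)   # [4.0, 2.0]
```

    The remote host applies exactly the same extrapolation rule as in ordinary dead reckoning; only the velocity it receives is different.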

    Web-based sensor streaming wearable for respiratory monitoring applications.

    This paper presents a system for the remote monitoring of individuals' respiration that can detect respiration rate and mode of breathing and identify coughing events. It comprises a series of polymer fabric sensors incorporated into a sports vest, a wearable data acquisition platform and a novel rich internet application (RIA) which together enable remote real-time monitoring of untethered wearable systems for respiratory rehabilitation. This system will, for the first time, allow therapists to monitor and guide the respiratory efforts of patients in real-time through a web browser. Changes in abdomen expansion and contraction associated with respiration are detected by the fabric sensors and transmitted wirelessly via a Bluetooth-based solution to a standard computer. The respiratory signals are visualized locally through the RIA and subsequently published to a sensor streaming cloud-based server. A web-based signal streaming protocol makes the signals available as real-time streams to authorized subscribers over standard browsers. We demonstrate real-time streaming of a six-sensor shirt rendered remotely at 40 samples/s per sensor with perceptually acceptable latency (<0.5 s) over realistic network conditions.
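    The browser-facing streaming layer can be pictured as a paced generator of JSON packets, one per sampling tick, each carrying a timestamp and one reading per fabric sensor. The sketch below is an assumption-laden illustration, not the paper's protocol: transport, authentication and the cloud server are omitted, and the reader callback is hypothetical.

```python
import json

def stream_packets(read_sample, rate_hz=40, channels=6, duration_s=1.0):
    """Yield one JSON packet per sampling tick: a timestamp plus one
    reading per sensor channel (real-time pacing and the network
    transport are left out of this sketch)."""
    period = 1.0 / rate_hz
    for i in range(int(rate_hz * duration_s)):
        sample = {"t": round(i * period, 6),
                  "ch": [read_sample(c, i) for c in range(channels)]}
        yield json.dumps(sample)

# Hypothetical reader: channel index plus a slow ramp per tick.
packets = list(stream_packets(lambda c, i: c + i * 0.01))
```

    At 40 samples/s per sensor and six sensors, one second of streaming produces 40 such packets of six readings each.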

    Using Neural Networks to Reduce Entity State Updates in Distributed Interactive Applications

    Dead reckoning is the most commonly used predictive contract mechanism for the reduction of network traffic in Distributed Interactive Applications (DIAs). However, this technique often ignores available contextual information that may be influential to the state of an entity, sacrificing remote predictive accuracy in favour of low computational complexity. In this paper, we present a novel extension of dead reckoning by employing neural networks to take into account expected future entity behaviour during the transmission of entity state updates (ESUs) for remote entity modeling in DIAs. This proposed method succeeds in reducing network traffic through a decrease in the frequency of ESU transmission required to maintain consistency. Validation is achieved through simulation in a highly interactive DIA, and results indicate significant potential for improved scalability when compared to the use of the IEEE DIS Standard dead reckoning technique. The new method exhibits relatively low computational overhead and seamless integration with current dead reckoning schemes.
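    The claimed traffic reduction can be made concrete by replaying a trajectory and counting how many ESUs a given error threshold forces. In this sketch the neural predictor is replaced by a hand-written look-ahead velocity, and motion is one-dimensional; everything here is illustrative rather than from the paper.

```python
def count_esus(trajectory, velocity_of, threshold, dt=0.1):
    """Count entity state updates: one is sent whenever the remote
    dead-reckoned position drifts past the error threshold."""
    sent = 1                                   # initial state update
    base_pos, base_vel, t0 = trajectory[0], velocity_of(0), 0.0
    for step in range(1, len(trajectory)):
        t = step * dt
        extrap = base_pos + base_vel * (t - t0)  # remote host's estimate
        if abs(trajectory[step] - extrap) > threshold:
            sent += 1                           # resynchronize the remote host
            base_pos, base_vel, t0 = trajectory[step], velocity_of(step), t
    return sent

# Accelerating 1-D motion: position t^2 sampled every 0.1 s for 2 s.
traj = [(i * 0.1) ** 2 for i in range(21)]
instantaneous = lambda s: 2 * s * 0.1   # exact derivative at the time of sending
look_ahead = lambda s: 0.2 * s + 0.3    # velocity matching the position 3 ticks ahead
```

    On this trajectory the look-ahead velocity keeps the remote extrapolation inside the threshold for longer, so fewer updates are sent, which is the effect the paper pursues with trained predictors rather than a hand-written formula.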

    Formalizing a Framework for Dynamic Hybrid Strategy Models in Distributed Interactive Applications

    Predictive contract mechanisms such as dead reckoning are widely employed to support scalable remote entity modelling in Distributed Interactive Applications (DIAs). By employing a form of controlled inconsistency, a reduction in network traffic is achieved. Previously, we have proposed the Dynamic Hybrid Strategy Model (DHSM) as an extension to the concept of dead reckoning that adaptively selects extrapolation models based on the use of local performance criteria. In this paper, we formalize the notion of the DHSM as a generalized framework for network traffic reduction in DIAs, alongside a set of consistency metrics for use as local performance criteria.

    Dynamic Hybrid Strategy Models for Networked Multiplayer Games

    Two of the primary factors in the development of networked multiplayer computer games are network latency and network bandwidth. Reducing the effects of network latency helps maintain game-state fidelity, while reducing network bandwidth usage increases the scalability of the game to support more players. The current technique to address these issues is to have each player locally simulate remote objects (e.g. other players). This is known as dead reckoning. Provided the local simulations are accurate to within a given tolerance, dead reckoning reduces the amount of information required to be transmitted between players. This paper presents an extension to the recently proposed Hybrid Strategy Model (HSM) technique, known as the Dynamic Hybrid Strategy Model (DHSM). By dynamically switching between models of user behaviour, the DHSM attempts to improve the prediction capability of the local simulations, allowing them to stay within a given tolerance for a longer period of time. This can lead to further reductions in the amount of information required to be transmitted. Presented results for the case of a simple first-person shooter (FPS) game demonstrate the validity of the DHSM approach over dead reckoning, leading to a reduction in the number of state update packets sent and indicating significant potential for network traffic reduction in various multiplayer games/simulations.
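    Model switching of this kind can be sketched as scoring each candidate extrapolation model against recently observed positions and keeping the best performer. The model names, the scoring window and the two toy behaviour models below are all illustrative assumptions, not the paper's actual strategy set.

```python
def pick_model(models, history, window=5):
    """DHSM-style selection sketch: score each candidate predictor by its
    mean absolute error over the last `window` observations and return
    the name of the best-performing one."""
    def score(predict):
        errs = [abs(predict(history[:i]) - history[i])
                for i in range(len(history) - window, len(history))]
        return sum(errs) / len(errs)
    return min(models, key=lambda name: score(models[name]))

# Two toy behaviour models: hold the last position, or extrapolate linearly.
models = {
    "static": lambda past: past[-1],
    "linear": lambda past: past[-1] + (past[-1] - past[-2]),
}
```

    On steadily moving input the linear model wins; on a stationary entity the static model does at least as well, so the switch falls back to it. This is the local-performance-criteria idea in miniature.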

    A Realistic Distributed Interactive Application Testbed for Static and Dynamic Entity State Data Acquisition

    Scalability is an important issue for Distributed Interactive Application (DIA) designers. In order to achieve it, it is important to minimise the network traffic required to maintain the DIA. A commonly used technique to reduce network traffic is short-term entity dynamics extrapolation. However, this technique makes no use of a priori information regarding entity dynamics. We have been developing methods to employ this information through a number of techniques, primarily statistical in nature, which have shown great promise in constrained experimental environments. The main tenet of our approach is that user behaviour in real DIAs follows patterns, and through acquisition, analysis and exploitation of these patterns, a reduction in network traffic can be achieved. In this paper, we report on our development of a realistic DIA based on an industry standard SDK in which we have implemented data acquisition routines that allow us to do this. Results are presented for trial runs using the system. These results clearly exhibit patterns of user behaviour consistent with our previous research and suggest that the exploitation of this knowledge can help reduce network traffic.

    Locally performed postoperative circulating tumour DNA testing performed during routine clinical care to predict recurrence of colorectal cancer

    Background: Identifying patients at high risk for colorectal cancer recurrence is essential for improving prognosis. In the postoperative period, circulating tumour DNA (ctDNA) has been demonstrated as a significant prognostic indicator of recurrence. These results have been obtained under the strict rigours of clinical trials, but not validated in a real-world setting using in-house testing. We report the outcomes of locally performed postoperative ctDNA testing conducted during routine clinical care and the association with the recurrence of colorectal cancer. Methods: We recruited 36 consecutive patients with newly diagnosed colorectal cancer between 2018 and 2020. Postoperative plasma samples were collected at the first outpatient review following resection. Tumour-informed ctDNA analysis was performed using droplet digital polymerase chain reaction or targeted next-generation sequencing. Results: At the time of surgery, there were 24 patients (66.7%) with localized cancer, nine (25%) with nodal spread, and three (8.3%) with metastatic disease. The median time from surgery to plasma sample donation was 22 days (IQR 20–28 days). At least one somatic mutation was identified in primary tumour tissue for 28 (77.8%) patients. Postoperative ctDNA was detected in five patients (13.9%). The median duration of follow-up was 32.0 months (IQR 27.2–38.1 months). Two patients (5.56%) developed metastatic recurrence; however, neither had detectable postoperative ctDNA. There were no instances of loco-regional recurrence. Conclusion: Postoperative ctDNA testing can be performed locally; however, this study did not reproduce the adverse association between detectable postoperative ctDNA and the development of colorectal cancer recurrence seen in clinical trials.