
    Advances on Concept Drift Detection in Regression Tasks using Social Networks Theory

    Full text link
    Mining data streams is one of the main areas of study in machine learning due to its applications across many knowledge domains. One of the major challenges in mining data streams is concept drift, which requires the learner to discard the current concept and adapt to a new one. Ensemble-based drift detection algorithms have been applied successfully to classification tasks, but they usually maintain a fixed-size ensemble of learners, running the risk of needlessly spending processing time and memory. In this paper we present improvements to the Scale-free Network Regressor (SFNR), a dynamic ensemble-based method for regression that employs social networks theory. To detect concept drifts, SFNR uses the Adaptive Windowing (ADWIN) algorithm. Results show improvements in accuracy, especially in concept drift situations, and better performance compared to other state-of-the-art algorithms on both real and synthetic data.
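
    As a rough, hypothetical illustration of how an adaptive-window drift detector plugs into a streaming regressor, the sketch below compares the error means of the older and newer halves of a window and drops stale data when they diverge; the real ADWIN algorithm uses a Hoeffding-style bound over all split points, and all names and thresholds here are assumptions, not code from the paper.

```python
from collections import deque
from statistics import mean

class SimpleAdaptiveWindow:
    """Simplified ADWIN-style detector: keeps a window of recent prediction
    errors and flags drift when the means of the older and newer halves differ
    by more than a fixed threshold (real ADWIN uses a statistical bound)."""

    def __init__(self, max_size=200, threshold=0.5):
        self.window = deque(maxlen=max_size)
        self.threshold = threshold

    def update(self, error):
        self.window.append(error)
        n = len(self.window)
        if n < 20:
            return False
        half = n // 2
        old, new = list(self.window)[:half], list(self.window)[half:]
        if abs(mean(old) - mean(new)) > self.threshold:
            # Drift detected: discard the outdated half of the window.
            for _ in range(half):
                self.window.popleft()
            return True
        return False

# Usage: feed absolute regression errors from the current learner.
detector = SimpleAdaptiveWindow()
for t, err in enumerate([0.1] * 100 + [1.2] * 100):  # synthetic error stream
    if detector.update(err):
        print(f"drift signalled at step {t}")
```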

    Optimal universal quantum circuits for unitary complex conjugation

    Full text link
    Let $U_d$ be a unitary operator representing an arbitrary $d$-dimensional unitary quantum operation. This work presents optimal quantum circuits for transforming a number $k$ of calls of $U_d$ into its complex conjugate $\bar{U}_d$. Our circuits admit a parallel implementation and are proven to be optimal for any $k$ and $d$ with an average fidelity of $\left\langle F \right\rangle = \frac{k+1}{d(d-k)}$. Optimality is shown for average fidelity, robustness to noise, and other standard figures of merit. This extends previous works which considered the scenario of a single call ($k=1$) of the operation $U_d$, and the special case of $k=d-1$ calls. We then show that our results encompass optimal transformations from $k$ calls of $U_d$ to $f(U_d)$ for any arbitrary homomorphism $f$ from the group of $d$-dimensional unitary operators to itself, since complex conjugation is the only non-trivial automorphism of the group of unitary operators. Finally, we apply our optimal complex conjugation implementation to design a probabilistic circuit for reversing arbitrary quantum evolutions. Comment: 19 pages, 5 figures. Improved presentation, typos corrected, and some proofs are now clearer. Closer to the published version.
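
    For intuition, the fidelity formula gives $\left\langle F \right\rangle = 1$ at $k = 1$, $d = 2$: a single call suffices to conjugate a qubit unitary exactly, consistent with the identity $\bar{U} = Y U Y^{\dagger}$ (up to a global phase) for $2 \times 2$ unitaries. The snippet below is an illustrative numerical check of that identity (NumPy assumed; not code from the paper).

```python
import numpy as np

# For d = 2 the formula <F> = (k+1)/(d(d-k)) equals 1 at k = 1, i.e. one call
# suffices.  This matches the identity conj(U) = Y U Y^dagger up to a global
# phase, verified here for a Haar-random 2x2 unitary.

rng = np.random.default_rng(0)

def haar_unitary(d):
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix column phases

Y = np.array([[0, -1j], [1j, 0]])
U = haar_unitary(2)
candidate = Y @ U @ Y.conj().T

# Two unitaries are equal up to a global phase iff |tr(A^dagger B)| = d.
overlap = abs(np.trace(candidate.conj().T @ np.conj(U)))
print(np.isclose(overlap, 2.0))  # True
```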

    Bayesian networks for disease diagnosis: What are they, who has used them and how?

    Full text link
    A Bayesian network (BN) is a probabilistic graphical model based on Bayes' theorem, used to show dependencies or cause-and-effect relationships between variables. BNs are widely applied in diagnostic processes since they allow the incorporation of medical knowledge into the model while expressing uncertainty in terms of probability. This systematic review presents the state of the art in the applications of BNs in medicine in general and in the diagnosis and prognosis of diseases in particular. Indexed articles from the last 40 years were included. The studies generally used the typical measures of diagnostic and prognostic accuracy: sensitivity, specificity, accuracy, precision, and the area under the ROC curve. Overall, we found that disease diagnosis and prognosis based on BNs can be successfully used to model complex medical problems that require reasoning under conditions of uncertainty. Comment: 22 pages, 5 figures, 1 table, Student PhD first paper.
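
    As a minimal illustration of the Bayes'-theorem update underlying a diagnostic BN node (with hypothetical numbers, not taken from any reviewed study), the sketch below computes the posterior probability of disease given a positive test from prevalence, sensitivity and specificity.

```python
# Posterior probability of disease given a positive test, from Bayes' theorem.
# All numbers are illustrative assumptions.

def posterior_given_positive(prevalence, sensitivity, specificity):
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# e.g. 1% prevalence, 90% sensitivity, 95% specificity
print(round(posterior_given_positive(0.01, 0.90, 0.95), 3))  # ~0.154
```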

    Event-based tracking of human hands

    Full text link
    This paper proposes a novel method for tracking human hands using data from an event camera. The event camera detects changes in brightness, capturing motion with low latency, no motion blur, low power consumption and high dynamic range. Captured frames are analysed using lightweight algorithms reporting 3D hand position data. The chosen pick-and-place scenario serves as an example input for collaborative human-robot interactions and for obstacle avoidance in human-robot safety applications. Event data are pre-processed into intensity frames. The regions of interest (ROI) are defined through object-edge event activity, reducing noise. ROI features are extracted for use in depth perception. Event-based tracking of human hands is shown to be feasible in real time and at a low computational cost. The proposed ROI-finding method reduces noise from intensity images, achieving up to 89% data reduction relative to the original while preserving the features. The depth estimation error relative to ground truth (measured with wearables), evaluated using dynamic time warping with a single event camera, ranges from 15 to 30 millimetres, depending on the plane in which it is measured. The result is tracking of human hands in 3D space from a single event camera's data, with lightweight algorithms defining ROI features.
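
    A hypothetical sketch of the pre-processing described above: events are accumulated into an intensity frame, and an ROI is kept only where event activity is high, discarding low-activity (noisy) pixels. The resolution, thresholds and synthetic event batch are assumptions, not values from the paper.

```python
import numpy as np

def events_to_frame(xs, ys, shape=(260, 346)):
    frame = np.zeros(shape, dtype=np.float32)
    np.add.at(frame, (ys, xs), 1.0)          # count events per pixel
    return frame

def activity_roi(frame, min_count=3):
    ys, xs = np.nonzero(frame >= min_count)  # pixels with enough event activity
    if len(xs) == 0:
        return None
    return xs.min(), ys.min(), xs.max(), ys.max()  # bounding box of the ROI

# Fake event batch clustered around a "hand" at (x=100, y=150)
rng = np.random.default_rng(1)
xs = np.clip(rng.normal(100, 5, 500).astype(int), 0, 345)
ys = np.clip(rng.normal(150, 5, 500).astype(int), 0, 259)
frame = events_to_frame(xs, ys)
print(activity_roi(frame))
```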

    Concept Graph Neural Networks for Surgical Video Understanding

    Full text link
    We constantly integrate our knowledge and understanding of the world to enhance our interpretation of what we see. This ability is crucial in application domains which entail reasoning about multiple entities and concepts, such as AI-augmented surgery. In this paper, we propose a novel way of integrating conceptual knowledge into temporal analysis tasks via temporal concept graph networks. In the proposed networks, a global knowledge graph is incorporated into the temporal analysis of surgical instances, learning the meaning of concepts and relations as they apply to the data. We demonstrate our results on surgical video data for tasks such as verification of the critical view of safety and estimation of the Parkland grading scale. The results show that our method improves recognition and detection on complex benchmarks and enables other analytic applications of interest.
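
    As a rough, hypothetical sketch of the basic operation such a network would repeat per video frame, the snippet below performs one message-passing step over a tiny concept graph; the concept names, adjacency and dimensions are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

# One message-passing step: each concept node aggregates its neighbours'
# features (mean over the adjacency), applies a linear map, then a ReLU.
concepts = ["cystic_duct", "cystic_artery", "critical_view"]
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=np.float32)                 # concept-graph adjacency
H = np.random.default_rng(0).normal(size=(3, 8)).astype(np.float32)  # node features
W = np.random.default_rng(1).normal(size=(8, 8)).astype(np.float32)  # weights

deg = A.sum(axis=1, keepdims=True)
H_next = np.maximum((A / deg) @ H @ W, 0.0)
print(H_next.shape)  # (3, 8) updated concept embeddings
```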

    Self-Supervised Learning to Prove Equivalence Between Straight-Line Programs via Rewrite Rules

    Full text link
    We target the problem of automatically synthesizing proofs of semantic equivalence between two programs made of sequences of statements. We represent programs using abstract syntax trees (ASTs), where a given set of semantics-preserving rewrite rules can be applied to a specific AST pattern to generate a transformed and semantically equivalent program. In our system, two programs are equivalent if there exists a sequence of applications of these rewrite rules that leads to rewriting one program into the other. We propose a neural network architecture based on a transformer model to generate proofs of equivalence between program pairs. The system outputs a sequence of rewrites, and the validity of the sequence is checked simply by verifying that it can be applied. If no valid sequence is produced by the neural network, the system reports the programs as non-equivalent, ensuring by design that no programs may be incorrectly reported as equivalent. Our system is fully implemented for a given grammar which can represent straight-line programs with function calls and multiple types. To efficiently train the system to generate such sequences, we develop an original incremental training technique, named self-supervised sample selection. We extensively study the effectiveness of this novel training approach on proofs of increasing complexity and length. Our system, S4Eq, achieves 97% proof success on a curated dataset of 10,000 pairs of equivalent programs. Comment: 30 pages including appendix.
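
    The guarantee that no programs can be incorrectly reported as equivalent comes from the checker, not the network: a proposed rewrite sequence is accepted only if every rewrite applies and the result equals the target program. The sketch below illustrates that verification step with toy string-level rewrite rules; the rule set and programs are hypothetical, not the paper's grammar.

```python
# Toy rewrites as (pattern, replacement) over whole statements, for illustration.
RULES = {
    "mul_comm": ("a = x * y", "a = y * x"),
    "add_zero": ("b = a + 0", "b = a"),
}

def apply_rule(program, rule_name):
    pattern, replacement = RULES[rule_name]
    if pattern not in program:
        return None                      # rewrite does not apply -> proof invalid
    return [replacement if stmt == pattern else stmt for stmt in program]

def check_proof(source, target, rewrite_sequence):
    current = list(source)
    for rule in rewrite_sequence:
        current = apply_rule(current, rule)
        if current is None:
            return False
    return current == target             # must land exactly on the target program

src = ["a = x * y", "b = a + 0"]
tgt = ["a = y * x", "b = a"]
print(check_proof(src, tgt, ["mul_comm", "add_zero"]))  # True
print(check_proof(src, tgt, ["add_zero"]))              # False (incomplete proof)
```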

    Ausubel's meaningful learning re-visited

    Get PDF
    This review provides a critique of David Ausubel’s theory of meaningful learning and the use of advance organizers in teaching. It takes into account the developments in cognition and neuroscience which have taken place in the 50 or so years since he advanced his ideas, developments which challenge our understanding of cognitive structure and the recall of prior learning. These include (i) how effective questioning to ascertain previous knowledge necessitates in-depth Socratic dialogue; (ii) how many findings in cognition and neuroscience indicate that memory may be non-representational, thereby affecting our interpretation of student recollections; (iii) the now recognised dynamism of memory; (iv) the usefulness of regarding concepts as abilities, simulators and skills; (v) acknowledging conscious and unconscious memory and imagery; (vi) how conceptual change involves conceptual coexistence and revision; (vii) noting that linguistic and neural pathways develop as a result of experience and neural selection; and (viii) recommending that wider concepts of scaffolding should be adopted, particularly given the increasing focus on collaborative learning in a technological world.

    Grasping nothing: a study of minimal ontologies and the sense of music

    Get PDF
    If music were to have a proper sense – one in which it is truly given – one might reasonably place this in sound and aurality. I contend, however, that no such sense exists; rather, the sense of music takes place, and it does so with the impossible. To this end, this thesis – which is a work of philosophy and music – advances an ontology of the impossible (i.e., it thinks the being of what, properly speaking, can have no being) and considers its implications for music, articulating how ontological aporias – of the event, of thinking the absolute, and of sovereignty’s dismemberment – imply senses of music that are anterior to sound. John Cage’s Silent Prayer, a nonwork he never composed, compels a rethinking of silence on the basis of its contradictory status of existence; Florian Hecker et al.’s Speculative Solution offers a basis for thinking absolute music anew to the precise extent that it is a discourse of meaninglessness; and Manfred Werder’s [yearn] pieces exhibit exemplarily that music’s sense depends on the possibility of its counterfeiting. Insomuch as these accounts produce musical senses that take the place of sound, they are also understood to be performances of these pieces. Here, then, thought is music’s organon and its instrument.

    A Hierarchical Hybrid Learning Framework for Multi-agent Trajectory Prediction

    Full text link
    Accurate and robust trajectory prediction of neighboring agents is critical for autonomous vehicles traversing complex scenes. Most methods proposed in recent years are deep learning-based due to their strength in encoding complex interactions. However, implausible predictions are often generated, since these methods rely heavily on past observations and cannot effectively capture transient and contingency interactions from sparse samples. In this paper, we propose a hierarchical hybrid framework of deep learning (DL) and reinforcement learning (RL) for multi-agent trajectory prediction, to cope with the challenge of predicting motions shaped by multi-scale interactions. In the DL stage, the traffic scene is divided into multiple intermediate-scale heterogeneous graphs, based on which Transformer-style GNNs are adopted to encode heterogeneous interactions at the intermediate and global levels. In the RL stage, we divide the traffic scene into local sub-scenes using the key future points predicted in the DL stage. To emulate the motion planning procedure and thereby produce trajectory predictions, a Transformer-based Proximal Policy Optimization (PPO) scheme incorporating a vehicle kinematics model is devised to plan motions under the dominant influence of microscopic interactions. A multi-objective reward is designed to balance agent-centric accuracy and scene-wise compatibility. Experimental results show that our proposal matches the state of the art on the Argoverse forecasting benchmark. The visualized results also reveal that the hierarchical learning framework captures multi-scale interactions and improves the feasibility and compliance of the predicted trajectories.
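
    As a hypothetical sketch of the kind of multi-objective reward described above, the snippet below combines an agent-centric accuracy term (negative displacement from the ground-truth step) with a scene-wise compatibility penalty for getting too close to other agents; the weights and safety radius are assumptions, not the paper's values.

```python
import numpy as np

def step_reward(pred_pos, gt_pos, other_positions,
                w_acc=1.0, w_compat=0.5, safety_radius=2.0):
    accuracy = -np.linalg.norm(pred_pos - gt_pos)                # accuracy term
    dists = np.linalg.norm(other_positions - pred_pos, axis=1)
    violation = np.clip(safety_radius - dists, 0.0, None).sum()  # proximity penalty
    return w_acc * accuracy - w_compat * violation

pred = np.array([10.0, 2.0])
gt = np.array([10.5, 2.0])
others = np.array([[11.0, 2.5], [30.0, 0.0]])
print(step_reward(pred, gt, others))
```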

    Model Diagnostics meets Forecast Evaluation: Goodness-of-Fit, Calibration, and Related Topics

    Get PDF
    Principled forecast evaluation and model diagnostics are vital in fitting probabilistic models and forecasting outcomes of interest. A common principle is that fitted or predicted distributions ought to be calibrated, ideally in the sense that the outcome is indistinguishable from a random draw from the posited distribution. Much of this thesis is centered on calibration properties of various types of forecasts. In the first part of the thesis, a simple algorithm for exact multinomial goodness-of-fit tests is proposed. The algorithm computes exact $p$-values based on various test statistics, such as the log-likelihood ratio and Pearson's chi-square. A thorough analysis shows improvement on extant methods. However, the runtime of the algorithm grows exponentially in the number of categories and hence its use is limited. In the second part, a framework rooted in probability theory is developed, which gives rise to hierarchies of calibration, and applies to both predictive distributions and stand-alone point forecasts. Based on a general notion of conditional T-calibration, the thesis introduces population versions of T-reliability diagrams and revisits a score decomposition into measures of miscalibration, discrimination, and uncertainty. Stable and efficient estimators of T-reliability diagrams and score components arise via nonparametric isotonic regression and the pool-adjacent-violators algorithm. For in-sample model diagnostics, a universal coefficient of determination is introduced that nests and reinterprets the classical $R^2$ in least squares regression. In the third part, probabilistic top lists are proposed as a novel type of prediction in classification, which bridges the gap between single-class predictions and predictive distributions. The probabilistic top list functional is elicited by strictly consistent evaluation metrics, based on symmetric proper scoring rules, which admit comparison of various types of predictions.
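
    As an illustrative sketch (not the thesis code) of the isotonic-regression-based score decomposition mentioned above, applied to mean forecasts under squared error: recalibrate the forecasts with the pool-adjacent-violators algorithm (here via scikit-learn's IsotonicRegression), then split the mean score into miscalibration (MCB), discrimination (DSC) and uncertainty (UNC), with mean score = MCB - DSC + UNC.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 2000)                     # forecasts
y = x ** 2 + rng.normal(0, 0.05, 2000)          # outcomes (forecasts are miscalibrated)

iso = IsotonicRegression(out_of_bounds="clip")
x_hat = iso.fit_transform(x, y)                 # PAV-recalibrated forecasts

mse = lambda f, o: np.mean((f - o) ** 2)
score = mse(x, y)
mcb = score - mse(x_hat, y)                     # miscalibration
unc = mse(np.full_like(y, y.mean()), y)         # uncertainty
dsc = unc - mse(x_hat, y)                       # discrimination

print(round(score, 4), round(mcb - dsc + unc, 4))  # identical by construction
```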