
    Localizing the Common Action Among a Few Videos

    This paper strives to localize the temporal extent of an action in a long untrimmed video. Where existing work leverages many examples with their start, their end, and/or the class of the action during training, we propose few-shot common action localization. The start and end of an action in a long untrimmed video are determined based on just a handful of trimmed video examples containing the same action, without knowing their common class label. To address this task, we introduce a new 3D convolutional network architecture able to align representations from the support videos with the relevant query video segments. The network contains: (i) a mutual enhancement module to simultaneously complement the representation of the few trimmed support videos and the untrimmed query video; (ii) a progressive alignment module that iteratively fuses the support videos into the query branch; and (iii) a pairwise matching module to weigh the importance of different support videos. Evaluation of few-shot common action localization in untrimmed videos containing a single or multiple action instances demonstrates the effectiveness and general applicability of our proposal. Comment: ECCV 202

    Graph Layouts by t‐SNE

    We propose a new graph layout method based on a modification of the t-distributed Stochastic Neighbor Embedding (t-SNE) dimensionality reduction technique. Although t-SNE is one of the best techniques for visualizing high-dimensional data as 2D scatterplots, it has not been used in the context of classical graph layout. Our method, tsNET, represents a graph by its distance matrix, which together with a modified t-SNE cost function results in desirable layouts. We evaluate our method by a formal comparison with state-of-the-art methods, both visually and via established quality metrics on a comprehensive benchmark containing real-world and synthetic graphs. As evidenced by the quality metrics and visual inspection, tsNET produces excellent layouts.
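    The core idea above can be sketched in a few lines (a minimal illustration, not the authors' tsNET implementation: the modified cost function and optimization schedule are omitted, and off-the-shelf t-SNE is used instead), assuming networkx and scikit-learn:

```python
# Sketch: represent the graph by its shortest-path distance matrix,
# then embed it into 2D with t-SNE using those distances directly.
import networkx as nx
import numpy as np
from sklearn.manifold import TSNE

def tsne_layout(graph, seed=0):
    nodes = list(graph.nodes())
    # All-pairs shortest-path lengths act as the "high-dimensional" distances.
    dist = dict(nx.all_pairs_shortest_path_length(graph))
    d = np.array([[dist[u][v] for v in nodes] for u in nodes], dtype=float)
    # t-SNE with a precomputed distance matrix yields 2D node positions
    # (init must be "random" when the metric is precomputed).
    pos = TSNE(n_components=2, metric="precomputed", init="random",
               random_state=seed).fit_transform(d)
    return {n: pos[i] for i, n in enumerate(nodes)}

layout = tsne_layout(nx.grid_2d_graph(8, 8))
```

    The actual paper replaces the plain t-SNE objective with a graph-specific cost function; this sketch only shows the distance-matrix-plus-embedding pipeline.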

    A network analysis to identify pathophysiological pathways distinguishing ischaemic from non-ischaemic heart failure

    Aims Heart failure (HF) is frequently caused by an ischaemic event (e.g. myocardial infarction) but might also be caused by a primary disease of the myocardium (cardiomyopathy). In order to identify targeted therapies specific for either ischaemic or non‐ischaemic HF, it is important to better understand differences in underlying molecular mechanisms. Methods and results We performed a physical protein–protein interaction network analysis to identify pathophysiological pathways distinguishing ischaemic from non‐ischaemic HF. First, differentially expressed plasma protein biomarkers were identified in 1160 patients enrolled in the BIOSTAT‐CHF study, 715 of whom had ischaemic HF and 445 had non‐ischaemic HF. Second, we constructed an enriched physical protein–protein interaction network, followed by a pathway over‐representation analysis. Finally, we identified key network proteins. Data were validated in an independent HF cohort comprising 765 ischaemic and 100 non‐ischaemic HF patients. We found 21/92 proteins to be up‐regulated and 2/92 down‐regulated in ischaemic relative to non‐ischaemic HF patients. An enriched network of 18 proteins specific for ischaemic heart disease yielded six pathways, which are related to inflammation, endothelial dysfunction, superoxide production, coagulation, and atherosclerosis. We identified five key network proteins: acid phosphatase 5, epidermal growth factor receptor, insulin‐like growth factor binding protein‐1, plasminogen activator urokinase receptor, and secreted phosphoprotein 1. Similar results were observed in the independent validation cohort. Conclusions Pathophysiological pathways distinguishing patients with ischaemic HF from those with non‐ischaemic HF were related to inflammation, endothelial dysfunction, superoxide production, coagulation, and atherosclerosis. The five key pathway proteins identified are potential treatment targets specifically for patients with ischaemic HF.

    Generic 3D Representation via Pose Estimation and Matching

    Though a large body of computer vision research has investigated developing generic semantic representations, efforts towards developing a similar representation for 3D have been limited. In this paper, we learn a generic 3D representation through solving a set of foundational proxy 3D tasks: object-centric camera pose estimation and wide baseline feature matching. Our method is based upon the premise that by providing supervision over a set of carefully selected foundational tasks, generalization to novel tasks and abstraction capabilities can be achieved. We empirically show that the internal representation of a multi-task ConvNet trained to solve the above core problems generalizes to novel 3D tasks (e.g., scene layout estimation, object pose estimation, surface normal estimation) without the need for fine-tuning and shows traits of abstraction abilities (e.g., cross-modality pose estimation). In the context of the core supervised tasks, we demonstrate that our representation achieves state-of-the-art wide baseline feature matching results without requiring a priori rectification (unlike SIFT and the majority of learned features). We also show 6DOF camera pose estimation given a pair of local image patches. The accuracy of both supervised tasks is comparable to that of humans. Finally, we contribute a large-scale dataset composed of object-centric street view scenes along with point correspondences and camera pose information, and conclude with a discussion on the learned representation and open research questions. Comment: Published in ECCV16. See the project website http://3drepresentation.stanford.edu/ and dataset website https://github.com/amir32002/3D_Street_Vie

    Application of Machine Learning Techniques to Parameter Selection for Flight Risk Identification

    In recent years, the use of data mining and machine learning techniques for safety analysis, incident and accident investigation, and fault detection has gained traction among the aviation community. Flight data collected from recording devices contains a large number of heterogeneous parameters, sometimes reaching up to thousands on modern commercial aircraft. More data is being collected continuously, which adds to the ever-increasing pool of data available for safety analysis. However, among the data collected, not all parameters are important from a risk and safety analysis perspective. Similarly, in order to be useful for modern analysis techniques such as machine learning, using thousands of parameters collected at a high frequency might not be computationally tractable. As such, an intelligent and repeatable methodology to select a reduced set of significant parameters is required to allow safety analysts to focus on the right parameters for risk identification. In this paper, a step-by-step methodology is proposed to down-select a reduced set of parameters that can be used for safety analysis. First, correlation analysis is conducted to remove highly correlated, duplicate, or redundant parameters from the data set. Second, a pre-processing step removes metadata and empty parameters. This step also considers requirements imposed by regulatory bodies such as the Federal Aviation Administration and subject matter experts to further trim the list of parameters. Third, a clustering algorithm is used to group similar flights and identify abnormal operations and anomalies. A retrospective analysis is conducted on the clusters to identify their characteristics and impact on flight safety. Finally, analysis of variance techniques are used to identify which parameters were significant in the formation of the clusters. Visualization dashboards were created to analyze the cluster characteristics and parameter significance.
    This methodology is employed on data from the approach phase of a representative single-aisle aircraft to demonstrate its application and robustness across heterogeneous data sets. It is envisioned that this methodology can be further extended to other phases of flight and aircraft.
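    The down-selection steps described above can be sketched on synthetic data (the column names, thresholds, and cluster count here are illustrative assumptions, not values from the paper): correlation pruning, flight clustering, then ANOVA ranking of the surviving parameters.

```python
# Sketch of the parameter down-selection pipeline on synthetic data:
# 1) drop highly correlated / duplicate parameters,
# 2) cluster flights into groups of similar operations,
# 3) rank surviving parameters by ANOVA F-score across clusters.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.feature_selection import f_classif

rng = np.random.default_rng(0)
flights = pd.DataFrame(rng.normal(size=(200, 6)),
                       columns=[f"param_{i}" for i in range(6)])
flights["param_5"] = flights["param_0"] * 1.01  # near-duplicate channel

# Step 1: correlation pruning (keep one column from each correlated pair).
corr = flights.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
drop = [c for c in upper.columns if (upper[c] > 0.95).any()]
reduced = flights.drop(columns=drop)

# Step 2: cluster flights.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(reduced)

# Step 3: ANOVA F-test -- which parameters drove cluster formation?
f_scores, _ = f_classif(reduced, labels)
ranking = sorted(zip(reduced.columns, f_scores), key=lambda t: -t[1])
```

    The paper's pipeline additionally removes metadata/empty parameters and applies regulatory and expert constraints; those steps are data-specific and omitted here.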

    Determinants and clinical outcome of uptitration of ACE-inhibitor and beta-blocker in patients with heart failure:a prospective European study

    Introduction: Despite clear guideline recommendations, most patients with heart failure and reduced ejection fraction (HFrEF) do not attain guideline-recommended target doses. We aimed to investigate the characteristics and treatment-indication-bias-corrected clinical outcomes of patients with HFrEF who did not reach recommended treatment doses of ACE-inhibitors/angiotensin receptor blockers (ARBs) and/or beta-blockers. Methods and results: BIOSTAT-CHF was specifically designed to study uptitration of ACE-inhibitors/ARBs and/or beta-blockers in 2516 heart failure patients from 69 centres in 11 European countries, who were selected if they were suboptimally treated while initiation or uptitration was anticipated and encouraged. Patients who died during the uptitration period (n = 151) and patients with a LVEF > 40% (n = 242) were excluded. Median follow-up was 21 months. We studied 2100 HFrEF patients (76% male; mean age 68 ± 12 years), of whom 22% achieved the recommended treatment dose for ACE-inhibitor/ARB and 12% for beta-blocker. There were marked differences between European countries. Reaching <50% of the recommended ACE-inhibitor/ARB and beta-blocker dose was associated with an increased risk of death and/or heart failure hospitalization. Patients reaching 50–99% of the recommended ACE-inhibitor/ARB and/or beta-blocker dose had comparable risk of death and/or heart failure hospitalization to those reaching ≥100%. Patients not reaching the recommended dose because of symptoms, side effects, or non-cardiac organ dysfunction had the highest mortality rate (for ACE-inhibitor/ARB: HR 1.72; 95% CI 1.43–2.01; for beta-blocker: HR 1.70; 95% CI 1.36–2.05). Conclusion: Patients with HFrEF who were treated with less than 50% of the recommended dose of ACE-inhibitors/ARBs and beta-blockers seemed to have a greater risk of death and/or heart failure hospitalization compared with patients reaching ≥100%.

    Understanding Aesthetic Evaluation using Deep Learning

    A bottleneck in any evolutionary art system is aesthetic evaluation. Many different methods have been proposed to automate the evaluation of aesthetics, including measures of symmetry, coherence, complexity, contrast and grouping. The interactive genetic algorithm (IGA) relies on human-in-the-loop, subjective evaluation of aesthetics, but limits possibilities for large-scale search due to user fatigue and small population sizes. In this paper we look at how recent advances in deep learning can assist in automating personal aesthetic judgement. Using a leading artist's computer art dataset, we use dimensionality reduction methods to visualise both genotype and phenotype space in order to support the exploration of new territory in any generative system. Convolutional neural networks trained on the user's prior aesthetic evaluations are used to suggest new possibilities similar to, or interpolating between, known high-quality genotype–phenotype mappings.

    Categorical Dimensions of Human Odor Descriptor Space Revealed by Non-Negative Matrix Factorization

    In contrast to most other sensory modalities, the basic perceptual dimensions of olfaction remain unclear. Here, we use non-negative matrix factorization (NMF) – a dimensionality reduction technique – to uncover structure in a panel of odor profiles, with each odor defined as a point in multi-dimensional descriptor space. The properties of NMF are favorable for the analysis of such lexical and perceptual data, and lead to a high-dimensional account of odor space. We further provide evidence that odor dimensions apply categorically. That is, odor space is not occupied homogeneously, but rather in a discrete and intrinsically clustered manner. We discuss the potential implications of these results for the neural coding of odors, as well as for developing classifiers on larger datasets that may be useful for predicting perceptual qualities from chemical structures.
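    The decomposition described above can be sketched as follows (a minimal illustration on random data; the matrix shape, component count, and the "dominant dimension" assignment are assumptions, not the paper's dataset or settings):

```python
# Minimal NMF sketch on a toy odor-by-descriptor matrix.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
# Rows: odors; columns: perceptual descriptors (non-negative ratings).
profiles = rng.random((50, 20))

model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=1)
W = model.fit_transform(profiles)  # odor loadings on each latent dimension
H = model.components_              # descriptor makeup of each dimension

# Categorical reading: assign each odor to its dominant dimension.
dominant = W.argmax(axis=1)
```

    Non-negativity is what makes the factors interpretable as additive perceptual parts, which is why NMF rather than PCA suits lexical rating data like this.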

    Visual analytics for collaborative human-machine confidence in human-centric active learning tasks

    Active machine learning is a human-centric paradigm that leverages a small labelled dataset to build an initial weak classifier, which can then be improved over time through human-machine collaboration. As new unlabelled samples are observed, the machine can either provide a prediction, or query a human ‘oracle’ when the machine is not confident in its prediction. Of course, just as the machine may lack confidence, the same can also be true of a human ‘oracle’: humans are not all-knowing, untiring oracles. A human’s ability to provide an accurate and confident response will often vary between queries, according to the duration of the current interaction, their level of engagement with the system, and the difficulty of the labelling task. This poses an important question of how uncertainty can be expressed and accounted for in a human-machine collaboration. In short, how can we facilitate a mutually transparent collaboration between two uncertain actors - a person and a machine - that leads to an improved outcome? In this work, we demonstrate the benefit of human-machine collaboration within the process of active learning, where limited data samples are available or where labelling costs are high. To achieve this, we developed a visual analytics tool for active learning that promotes transparency, inspection, understanding, and trust of the learning process through human-machine collaboration. Confidence is fundamental to the tool: both parties can report their level of confidence during active learning tasks, and these reports are used to inform learning. Human confidence in labels can be accounted for by the machine, the machine can query for samples based on confidence measures, and the machine can report the confidence of its current predictions to the human, furthering the trust and transparency between the collaborative parties.
    In particular, we find that this can improve the robustness of the classifier when incorrect sample labels are provided due to low confidence or fatigue. Reported confidences can also better inform human-machine sample selection in collaborative sampling. Our experimentation compares the impact of different selection strategies for acquiring samples: machine-driven, human-driven, and collaborative selection. We demonstrate how a collaborative approach can improve trust in the model robustness, achieving high accuracy and low user correction with only limited data sample selections.
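    The confidence-gated query step described above can be sketched as follows (a hedged illustration, not the authors' tool: the thresholds, the stand-in oracle, and the acceptance rule are assumptions). The machine predicts when confident and defers to the human, whose own reported confidence gates whether the label is accepted.

```python
# Sketch: confidence-gated querying in an active-learning loop.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, random_state=0)
labelled = np.arange(20)                        # small initial labelled pool
clf = LogisticRegression().fit(X[labelled], y[labelled])

MACHINE_THRESHOLD = 0.8   # illustrative value
queried = []
for i in range(20, 300):
    machine_conf = clf.predict_proba(X[i:i + 1]).max()
    if machine_conf < MACHINE_THRESHOLD:
        # Query the human oracle; they report a label plus a confidence.
        human_label, human_conf = y[i], 0.9      # stand-in oracle
        if human_conf > 0.5:                     # accept only confident labels
            queried.append(i)

print(f"queried {len(queried)} of 280 unlabelled samples")
```

    In the real system both confidences also flow back into the visual analytics interface; this sketch only shows the two-sided gating logic.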

    Telomere length is independently associated with all-cause mortality in chronic heart failure

    Objective: Patients with heart failure have shorter mean leucocyte telomere length (LTL), a marker of biological age, compared with healthy subjects, but it is unclear whether this is of prognostic significance. We therefore sought to determine whether LTL is associated with outcomes in patients with heart failure. Methods: We measured LTL in patients with heart failure from the BIOSTAT-CHF Index (n=2260) and BIOSTAT-CHF Tayside (n=1413) cohorts. Cox proportional hazards analyses were performed individually in each cohort and the estimates combined using meta-analysis. Our co-primary endpoints were all-cause mortality and heart failure hospitalisation. Results: In age-adjusted and sex-adjusted analyses, shorter LTL was associated with higher all-cause mortality in both cohorts individually and when combined (meta-analysis HR (per SD decrease in LTL)=1.16 (95% CI 1.08 to 1.24); p=2.66×10−5), an effect equivalent to that of being four years older. The association remained significant after adjustment for the BIOSTAT-CHF clinical risk score to account for known prognostic factors (HR=1.12 (95% CI 1.05 to 1.20); p=1.04×10−3). Shorter LTL was associated with both cardiovascular (HR=1.09 (95% CI 1.00 to 1.19); p=0.047) and non-cardiovascular deaths (HR=1.18 (95% CI 1.05 to 1.32); p=4.80×10−3). There was no association between LTL and heart failure hospitalisation (HR=0.99 (95% CI 0.92 to 1.07); p=0.855). Conclusion: In patients with heart failure, shorter mean LTL is independently associated with all-cause mortality.
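    The step of combining per-cohort Cox estimates can be sketched with a standard fixed-effect, inverse-variance meta-analysis on the log hazard-ratio scale (the two HR/CI inputs below are illustrative placeholders, not the per-cohort values from the paper, and the paper does not state which meta-analysis model it used):

```python
# Fixed-effect inverse-variance meta-analysis of hazard ratios.
import math

def combine_hrs(estimates):
    """estimates: list of (HR, ci_lower, ci_upper) at 95% confidence."""
    weights, log_hrs = [], []
    for hr, lo, hi in estimates:
        # SE of log-HR recovered from the 95% CI width.
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        weights.append(1 / se ** 2)
        log_hrs.append(math.log(hr))
    pooled = sum(w * b for w, b in zip(weights, log_hrs)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * pooled_se),
            math.exp(pooled + 1.96 * pooled_se))

# Illustrative cohort-level inputs (placeholders, not the paper's values).
hr, lo, hi = combine_hrs([(1.15, 1.05, 1.26), (1.18, 1.06, 1.31)])
```

    Each cohort contributes in proportion to the precision of its estimate, which is why the pooled CI is narrower than either input.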