
    Tensor Matched Subspace Detection

    The problem of testing whether a signal lies within a given subspace, also known as matched subspace detection, has been well studied when the signal is represented as a vector. However, vector-based matched subspace detection methods cannot be applied when signals are naturally represented as multi-dimensional data arrays or tensors. Tensor subspaces and orthogonal projections onto these subspaces are well defined in the recently proposed transform-based tensor model, which motivates us to investigate the matched subspace detection problem in the high-dimensional case. In this paper, we propose an approach for tensor matched subspace detection based on the transform-based tensor model with tubal-sampling and elementwise-sampling, respectively. First, we construct estimators based on tubal-sampling and elementwise-sampling to estimate the energy of a signal outside a given subspace of a third-order tensor, and we give probability bounds showing that our estimators work effectively when the sample size exceeds a constant. Second, we give detectors for both noiseless and noisy data, together with the corresponding detection performance analyses. Finally, based on the discrete Fourier transform (DFT) and discrete cosine transform (DCT), the performance of our estimators and detectors is evaluated by several simulations, and the simulation results verify the effectiveness of our approach.
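    As a rough illustration of the estimation step described above, the sketch below estimates the energy of a signal outside a subspace from a subset of its entries in the simpler vector setting, not the tubal/elementwise tensor sampling of the paper; the function name, the n/m rescaling, and the toy dimensions are illustrative assumptions rather than details taken from the paper.

        import numpy as np

        def residual_energy_from_samples(x, U, sample_idx):
            """Estimate ||x - P_U x||^2 from a subset of entries of x.

            x          : (n,) signal vector
            U          : (n, r) orthonormal basis of the candidate subspace
            sample_idx : indices of the observed entries (|sample_idx| > r)
            """
            xs = x[sample_idx]              # observed entries
            Us = U[sample_idx, :]           # corresponding rows of the basis
            # Least-squares projection of the sampled entries onto span(Us)
            coeffs, *_ = np.linalg.lstsq(Us, xs, rcond=None)
            residual = xs - Us @ coeffs
            # Rescale by n/m so the estimate is comparable to the full-data energy
            n, m = x.shape[0], len(sample_idx)
            return (n / m) * np.sum(residual ** 2)

        # Toy check: a signal inside the subspace gives a (near) zero estimate
        rng = np.random.default_rng(0)
        n, r, m = 200, 5, 40
        U, _ = np.linalg.qr(rng.standard_normal((n, r)))
        x_in = U @ rng.standard_normal(r)
        idx = rng.choice(n, size=m, replace=False)
        print(residual_energy_from_samples(x_in, U, idx))                            # ~0
        print(residual_energy_from_samples(x_in + rng.standard_normal(n), U, idx))   # > 0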

    Application of Compressive Sensing Techniques in Distributed Sensor Networks: A Survey

    In this survey paper, our goal is to discuss recent advances in compressive sensing (CS) based solutions for wireless sensor networks (WSNs), including the main ongoing/recent research efforts, challenges, and research trends in this area. In WSNs, CS-based techniques are well motivated not only by the sparsity prior observed in different forms but also by the requirement of efficient in-network processing in terms of transmit power and communication bandwidth, even with nonsparse signals. In order to apply CS efficiently in a variety of WSN applications, several factors must be considered beyond the standard CS framework. We start the discussion with a brief introduction to the theory of CS and then describe the motivational factors behind the potential use of CS in WSN applications. Then, we identify three main areas along which the standard CS framework is extended so that CS can be efficiently applied to solve a variety of problems specific to WSNs. In particular, we emphasize the significance of extending the CS framework to (i) take communication constraints into account while designing projection matrices and reconstruction algorithms for signal reconstruction in centralized as well as decentralized settings, (ii) solve a variety of inference problems, such as detection, classification, and parameter estimation, with compressed data without signal reconstruction, and (iii) take practical communication aspects, such as measurement quantization, physical-layer secrecy constraints, and imperfect channel conditions, into account. Finally, open research issues and challenges are discussed in order to provide perspectives for future research directions.
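    For readers unfamiliar with the standard CS framework that the survey starts from, the following minimal sketch shows a sparse signal being compressed by a random projection and recovered with orthogonal matching pursuit. It is a generic textbook-style example; the function name, sensing matrix, and dimensions are illustrative and not taken from the survey.

        import numpy as np

        def omp(Phi, y, k):
            """Recover a k-sparse signal from y = Phi @ x via orthogonal matching pursuit."""
            m, n = Phi.shape
            support, residual = [], y.copy()
            for _ in range(k):
                # Pick the column most correlated with the current residual
                j = int(np.argmax(np.abs(Phi.T @ residual)))
                if j not in support:
                    support.append(j)
                # Re-fit the signal on the chosen support by least squares
                coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
                residual = y - Phi[:, support] @ coef
            x_hat = np.zeros(n)
            x_hat[support] = coef
            return x_hat

        rng = np.random.default_rng(1)
        n, m, k = 256, 64, 5                              # ambient dim, measurements, sparsity
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix
        y = Phi @ x                                       # compressed measurements
        print(np.linalg.norm(omp(Phi, y, k) - x))         # small reconstruction error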

    Domain Adaptation from Synthesis to Reality in Single-model Detector for Video Smoke Detection

    This paper proposes a method for video smoke detection using synthetic smoke samples. Virtual data can automatically provide precise and richly annotated samples. However, the learning of smoke representations is hurt by the appearance gap between real and synthetic smoke samples. Existing research mainly works on adaptation applied to samples extracted from the original annotated samples, treating object detection and domain adaptation as two independent parts. To train a strong detector with rich synthetic samples, we construct the adaptation on the detection layer of state-of-the-art single-model detectors (SSD and MS-CNN). The training procedure is end-to-end: classification, localization, and adaptation are combined in the learning. The performance of the proposed model surpasses the original baseline in our experiments. Meanwhile, our results show that detectors based on adversarial adaptation are superior to detectors based on discrepancy adaptation. Code will be made publicly available at http://smoke.ustc.edu.cn. Moreover, domain adaptation for a two-stage detector is described in Appendix A. Comment: The manuscript approved by all authors is our original work and has been submitted to Pattern Recognition for peer review previously. There are 4532 words, 6 figures and 1 table in this manuscript.
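    Adversarial adaptation of the kind mentioned above is commonly implemented with a gradient reversal layer feeding a domain classifier. The sketch below is a generic PyTorch version of that idea attached to a pooled detection-layer feature map; it is not the authors' SSD/MS-CNN adaptation head, and the class names, layer sizes, and feature shapes are assumptions.

        import torch
        import torch.nn as nn

        class GradReverse(torch.autograd.Function):
            """Identity in the forward pass; flips the gradient sign in the backward pass."""
            @staticmethod
            def forward(ctx, x, lam):
                ctx.lam = lam
                return x.view_as(x)
            @staticmethod
            def backward(ctx, grad_output):
                return -ctx.lam * grad_output, None

        class DomainClassifier(nn.Module):
            """Predicts synthetic vs. real from a pooled detection-layer feature map."""
            def __init__(self, channels):
                super().__init__()
                self.net = nn.Sequential(nn.Linear(channels, 128), nn.ReLU(), nn.Linear(128, 2))
            def forward(self, feat, lam=1.0):
                pooled = feat.mean(dim=(2, 3))            # (N, C) global average pooling
                return self.net(GradReverse.apply(pooled, lam))

        # Toy usage: features from any detector backbone; labels 0 = synthetic, 1 = real
        feat = torch.randn(8, 256, 19, 19, requires_grad=True)
        dom_labels = torch.randint(0, 2, (8,))
        logits = DomainClassifier(256)(feat, lam=0.5)
        loss = nn.functional.cross_entropy(logits, dom_labels)
        loss.backward()    # detector features receive sign-reversed domain gradients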

    Visual Subpopulation Discovery and Validation in Cohort Study Data

    Epidemiology aims at identifying subpopulations of cohort participants that share common characteristics (e.g. alcohol consumption) to explain risk factors of diseases in cohort study data. These data contain information about the participants' health status gathered from questionnaires, medical examinations, and image acquisition. Due to the growing volume and heterogeneity of epidemiological data, the discovery of meaningful subpopulations is challenging. Subspace clustering can be leveraged to find subpopulations in large and heterogeneous cohort study datasets. In our collaboration with epidemiologists, we realized their need for a tool to validate discovered subpopulations. For this purpose, identified subpopulations should be searched for in independent cohorts to check whether the findings apply there as well. In this paper we describe our interactive Visual Analytics framework S-ADVIsED for SubpopulAtion Discovery and Validation In Epidemiological Data. S-ADVIsED enables epidemiologists to explore and validate findings derived from subspace clustering. We provide a coordinated multiple-view system, which includes a summary view of all subpopulations, detail views, and statistical information. Users can assess the quality of subspace clusters by considering different criteria via visualization. Furthermore, intervals for variables involved in a subspace cluster can be adjusted; this extension was suggested by epidemiologists. We investigated the replication of a selected subpopulation with multiple variables in another population by considering different measurements. As a specific result, we observed that study participants exhibiting high liver fat accumulation deviate strongly from other subpopulations and from the total study population with respect to age, body mass index, thyroid volume, and thyroid-stimulating hormone. Comment: 12 pages. This work was originally reported in "EuroVis Workshop on Visual Analytics".
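    A minimal sketch of the replication check described above, assuming the subpopulation is defined by adjustable variable intervals and compared against the rest of an independent cohort; the column names, interval values, and helper function are hypothetical and not part of S-ADVIsED.

        import pandas as pd

        def replicate_subpopulation(cohort, intervals, outcomes):
            """Select participants falling inside all variable intervals and compare
            their outcome statistics with the remaining cohort."""
            mask = pd.Series(True, index=cohort.index)
            for var, (lo, hi) in intervals.items():
                mask &= cohort[var].between(lo, hi)
            inside, outside = cohort[mask], cohort[~mask]
            return pd.DataFrame({
                "subpopulation_mean": inside[outcomes].mean(),
                "rest_mean": outside[outcomes].mean(),
                "n_subpopulation": len(inside),
            })

        # Hypothetical columns from an independent cohort
        cohort = pd.DataFrame({
            "liver_fat": [2.1, 18.5, 22.0, 3.4, 25.1],
            "age": [45, 62, 66, 38, 70],
            "bmi": [24.1, 31.0, 33.2, 22.5, 30.4],
        })
        print(replicate_subpopulation(cohort, {"liver_fat": (15, 100)}, ["age", "bmi"]))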

    GLIMPS: A Greedy Mixed Integer Approach for Super Robust Matched Subspace Detection

    Due to the diverse nature of data acquisition and modern applications, many contemporary problems involve a high-dimensional datum x ∈ R^d whose entries often lie in a union of subspaces, and the goal is to find out which entries of x match a particular subspace U, a problem classically called matched subspace detection. Entries that match one subspace are considered inliers with respect to that subspace, while all other entries are considered outliers. The proportion of outliers relative to each subspace varies with how many coordinates come from each subspace. This problem is combinatorial and NP-hard in nature and has been studied intensively in recent years. Existing approaches can solve the problem when outliers are sparse. However, if outliers are abundant, or in other words if x contains coordinates from a fair number of subspaces, the problem cannot be solved with acceptable accuracy or within a reasonable amount of time. This paper proposes a two-stage approach called Greedy Linear Integer Mixed Programmed Selector (GLIMPS) for this abundant-outliers setting, which combines a greedy algorithm with a mixed integer formulation and can tolerate over 80% outliers, outperforming the state of the art. Comment: 8 pages, 5 figures, 57th Allerton Conference
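    As a hedged sketch of the flavor of a greedy first stage (the mixed integer stage is not reproduced here, and this is not GLIMPS itself), the code below grows a set of coordinates of x that are best explained by a candidate subspace, under the simplifying assumption that inliers produce small least-squares residuals on the selected rows. On its own such a naive greedy growth does not tolerate the abundant-outlier regime the paper targets; the function name and toy data are illustrative.

        import numpy as np

        def greedy_inlier_selection(x, U, n_keep):
            """Greedily pick coordinates of x best explained by the subspace span(U).

            At each step, add the coordinate whose inclusion increases the
            least-squares residual on the selected rows the least."""
            selected, remaining = [], list(range(len(x)))
            for _ in range(n_keep):
                best_j, best_res = None, np.inf
                for j in remaining:
                    idx = selected + [j]
                    Us, xs = U[idx, :], x[idx]
                    coef, *_ = np.linalg.lstsq(Us, xs, rcond=None)
                    res = np.sum((xs - Us @ coef) ** 2)
                    if res < best_res:
                        best_j, best_res = j, res
                selected.append(best_j)
                remaining.remove(best_j)
            return sorted(selected)

        rng = np.random.default_rng(2)
        U, _ = np.linalg.qr(rng.standard_normal((50, 3)))
        x = U @ rng.standard_normal(3)
        x[rng.choice(50, 20, replace=False)] += 5 * rng.standard_normal(20)   # corrupted coordinates
        print(greedy_inlier_selection(x, U, n_keep=10))   # coordinates the greedy stage keeps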

    Superimposition-guided Facial Reconstruction from Skull

    We develop a new algorithm to perform facial reconstruction from a given skull. This technique has forensic applications in helping to identify skeletal remains when other information is unavailable. Unlike most existing strategies that directly reconstruct the face from the skull, we utilize a database of portrait photos to create many face candidates, perform a superimposition to obtain a well-matched face, and then revise it according to the superimposition. To support this pipeline, we build an effective autoencoder for image-based facial reconstruction and a generative model for constrained face inpainting. Our experiments demonstrate that the proposed pipeline is stable and accurate. Comment: 14 pages; 14 figures

    Better Feature Tracking Through Subspace Constraints

    Feature tracking in video is a crucial task in computer vision. Usually, the tracking problem is handled one feature at a time, using a single-feature tracker such as the Kanade-Lucas-Tomasi algorithm or one of its derivatives. While this approach works quite well when dealing with high-quality video and "strong" features, it often falters when faced with dark and noisy video containing low-quality features. We present a framework for jointly tracking a set of features, which enables sharing information between the different features in the scene. We show that our method can be employed to track features for both rigid and nonrigid motions (possibly of a few moving bodies), even when some features are occluded. Furthermore, it can be used to significantly improve tracking results in poorly lit scenes (where there is a mix of good and bad features). Our approach does not require direct modeling of the structure or motion of the scene, and runs in real time on a single CPU core. Comment: 8 pages, 2 figures. CVPR 201
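    A minimal sketch of the kind of subspace constraint that joint tracking can exploit, assuming an affine-camera rigid-motion model in which stacked feature tracks form an (approximately) rank-4 matrix. The code simply projects a noisy track matrix onto its best low-rank approximation and is not the authors' joint tracker; names and dimensions are illustrative.

        import numpy as np

        def enforce_trajectory_subspace(W, rank=4):
            """Project a 2F x P matrix of stacked feature tracks onto its best
            rank-`rank` approximation (the subspace constraint for rigid motion)."""
            U, s, Vt = np.linalg.svd(W, full_matrices=False)
            s[rank:] = 0.0
            return (U * s) @ Vt

        # Toy usage: noisy tracks of P points over F frames under a rank-4 trajectory model
        rng = np.random.default_rng(3)
        F, P = 30, 50
        clean = rng.standard_normal((2 * F, 4)) @ rng.standard_normal((4, P))
        noisy = clean + 0.1 * rng.standard_normal((2 * F, P))
        denoised = enforce_trajectory_subspace(noisy, rank=4)
        print(np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean))   # True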

    EigenEvent: An Algorithm for Event Detection from Complex Data Streams in Syndromic Surveillance

    Syndromic surveillance systems continuously monitor multiple pre-diagnostic daily streams of indicators from different regions with the aim of early detection of disease outbreaks. The main objective of these systems is to detect outbreaks hours or days before clinical and laboratory confirmation. The data generated by these systems are usually multivariate and seasonal, with spatial and temporal dimensions. The algorithm What's Strange About Recent Events (WSARE) is the state-of-the-art method for such problems. It exhaustively searches for contrast sets in the multivariate data and signals an alarm when it finds statistically significant rules. This bottom-up approach achieves a much lower detection delay than existing top-down approaches. However, WSARE is very sensitive to small-scale changes and consequently has a relatively high rate of false alarms. We propose a new approach called EigenEvent that is neither fully top-down nor bottom-up. Instead of a top-down or bottom-up search, this method tracks changes in the data correlation structure via eigenspace techniques. This enables us to detect both overall changes (via eigenvalues) and dimension-level changes (via eigenvectors). Experimental results on a hundred sets of benchmark data reveal that EigenEvent presents better overall performance than the state of the art, in particular in terms of the false alarm rate. Comment: To appear in Intelligent Data Analysis Journal, vol. 19(3), 201
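    The eigenspace tracking idea can be sketched as follows: compare the leading eigenvalue and eigenvector of the correlation matrix of a recent window against a baseline window, flagging overall changes via the eigenvalue shift and stream-level changes via the eigenvector angle. This is a simplified illustration, not the EigenEvent algorithm; the window sizes, function names, and synthetic streams are assumptions.

        import numpy as np

        def eigen_change_scores(baseline, window):
            """Compare the correlation eigenstructure of a recent window with a baseline.

            Returns (eigenvalue shift, leading-eigenvector angle in radians): a large
            first value suggests an overall change, a large second value a change
            concentrated in particular data streams."""
            def leading_eig(X):
                C = np.corrcoef(X, rowvar=False)
                vals, vecs = np.linalg.eigh(C)
                return vals[-1], vecs[:, -1]
            lam_b, v_b = leading_eig(baseline)
            lam_w, v_w = leading_eig(window)
            angle = np.arccos(np.clip(abs(v_b @ v_w), -1.0, 1.0))
            return abs(lam_w - lam_b), angle

        rng = np.random.default_rng(4)
        common = rng.standard_normal((200, 1))
        baseline = common + 0.3 * rng.standard_normal((200, 8))   # 8 correlated indicator streams
        outbreak = baseline.copy()
        outbreak[-50:, 2] = 2 * rng.standard_normal(50)            # one stream decouples
        print(eigen_change_scores(baseline[:150], baseline[-50:]))   # small scores
        print(eigen_change_scores(baseline[:150], outbreak[-50:]))   # larger scores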

    Outlier Detection from Network Data with Subnetwork Interpretation

    Detecting a small number of outliers from a set of data observations is always challenging. The problem is more difficult in the setting of multiple network samples, where computing the anomalous degree of a network sample is generally not sufficient. In fact, explaining why the network is exceptional, expressed in the form of a subnetwork, is equally important. In this paper, we develop a novel algorithm to address these two key problems. We treat each network sample as a potential outlier and identify subnetworks that most discriminate it from nearby regular samples. The algorithm is developed in the framework of network regression combined with constraints on both network topology and an L1-norm shrinkage to perform subnetwork discovery. Our method thus goes beyond subspace/subgraph discovery, and we show that it converges to a global optimum. Evaluation on various real-world network datasets demonstrates that our algorithm not only outperforms baselines in both the network and high-dimensional settings, but also discovers highly relevant and interpretable local subnetworks, further enhancing our understanding of anomalous networks.
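    A much-simplified sketch of the idea of explaining an outlying network by a sparse set of edges: score each edge by the candidate's deviation from nearby regular samples and apply an L1-style soft threshold so that only a small explaining subnetwork survives. This drops the paper's network regression and topology constraints; the function name, threshold, and synthetic data are assumptions.

        import numpy as np

        def explaining_subnetwork(candidate, neighbors, shrink=5.0):
            """Score each edge by how far the candidate deviates from nearby regular
            samples, then soft-threshold the scores (an L1-style shrinkage) so that
            only a sparse explaining subnetwork remains."""
            mu = neighbors.mean(axis=0)
            sigma = neighbors.std(axis=0) + 1e-8
            z = (candidate - mu) / sigma                        # per-edge deviation scores
            shrunk = np.sign(z) * np.maximum(np.abs(z) - shrink, 0.0)
            return np.flatnonzero(shrunk)                       # edges of the explaining subnetwork

        rng = np.random.default_rng(5)
        n_edges = 300
        neighbors = rng.normal(0.0, 0.1, size=(20, n_edges))    # flattened regular network samples
        candidate = rng.normal(0.0, 0.1, size=n_edges)
        candidate[[10, 42, 77]] += 2.0                          # edges that make the sample anomalous
        print(explaining_subnetwork(candidate, neighbors))      # typically [10 42 77]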

    Kronecker PCA Based Robust SAR STAP

    In this work, the detection of moving targets in multiantenna SAR is considered. As a high-resolution radar imaging modality, SAR detects and identifies stationary targets very well, giving it an advantage over classical GMTI radars. Moving-target detection is more challenging due to the "burying" of moving targets in the clutter, and is often achieved using space-time adaptive processing (STAP), based on learning filters from the spatio-temporal clutter covariance, to remove the stationary clutter and enhance the moving targets. In this work, it is noted that, in addition to the oft-noted low-rank structure, the clutter covariance also naturally takes the form of a space vs. time Kronecker product with low-rank factors. A low-rank KronPCA covariance estimation algorithm is proposed to exploit this structure, and a separable clutter cancellation filter based on the Kronecker covariance estimate is proposed. Together, these provide orders-of-magnitude reductions in the number of training samples required, as well as improved robustness to corruption of the training data, e.g. due to outliers and moving targets. Theoretical properties of the proposed estimation algorithm are derived, and the significant reductions in training complexity are established under the spherically invariant random vector (SIRV) model. Finally, an extension of this approach incorporating multipass data (change detection) is presented. Simulation results and experiments using the real Gotcha SAR GMTI challenge dataset are presented that confirm the advantages of our approach relative to existing techniques. Comment: Tech report. Shorter version submitted to IEEE AE
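    To make the separable structure concrete, the sketch below forms a crude Kronecker (space vs. time) estimate of the clutter covariance from training snapshots and computes the corresponding adaptive filter weights w ∝ R⁻¹s. It is not the paper's low-rank KronPCA estimator; the single-pass factor estimates, diagonal loading, toy steering vector, and function name are assumptions.

        import numpy as np

        def separable_stap_weights(snapshots, steering, eps=1e-3):
            """Build a separable (Kronecker) estimate of the space-time clutter
            covariance from training snapshots and return the adaptive filter
            weights. Each snapshot is an (n_time, n_space) complex matrix."""
            n_t, n_s = snapshots[0].shape
            T = sum(X @ X.conj().T for X in snapshots) / (len(snapshots) * n_s)   # temporal factor
            S = sum(X.conj().T @ X for X in snapshots) / (len(snapshots) * n_t)   # spatial factor
            R = np.kron(T, S) + eps * np.eye(n_t * n_s)                           # separable covariance
            w = np.linalg.solve(R, steering)                                      # R^{-1} s
            return w / (steering.conj() @ w)                                      # MVDR-style normalization

        rng = np.random.default_rng(6)
        n_t, n_s = 8, 4
        snapshots = [rng.standard_normal((n_t, n_s)) + 1j * rng.standard_normal((n_t, n_s))
                     for _ in range(20)]
        steering = np.ones(n_t * n_s, dtype=complex)        # toy space-time steering vector
        w = separable_stap_weights(snapshots, steering)
        print(w.shape)                                       # (32,)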