
    Constrained Overcomplete Analysis Operator Learning for Cosparse Signal Modelling

    We consider the problem of learning a low-dimensional signal model from a collection of training samples. The mainstream approach would be to learn an overcomplete dictionary to provide good approximations of the training samples using sparse synthesis coefficients. This famous sparse model has a less well known counterpart, in analysis form, called the cosparse analysis model. In this new model, signals are characterised by their parsimony in a transformed domain, using an overcomplete (linear) analysis operator. We propose to learn an analysis operator from a training corpus using a constrained optimisation framework based on L1 optimisation; the constraint is introduced to exclude trivial solutions. Although there is no definitive answer as to which constraint is most relevant, we investigate some conventional constraints from the model adaptation field and adopt the uniformly normalised tight frame (UNTF) for this purpose. We then derive a practical learning algorithm based on projected subgradients and the Douglas-Rachford splitting technique, and demonstrate its ability to robustly recover a ground-truth analysis operator when provided with a clean training set of sufficient size. We also find an analysis operator for images using some noisy cosparse signals, which is a more realistic experiment. As the derived optimisation problem is not a convex program, such variational methods often find only a local minimum. Some local optimality conditions are derived for two different settings, providing preliminary theoretical support for the well-posedness of the learning problem under appropriate conditions. (Comment: 29 pages, 13 figures, accepted for publication in TS.)
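    The cosparse analysis idea above can be sketched in a few lines. The sketch below (illustrative dimensions and step size, not the paper's code) counts the co-support of a signal under an operator Omega and takes one projected-subgradient step on the L1 objective; for brevity it projects only onto unit-norm rows, omitting the tight-frame half of the UNTF constraint.

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, n = 8, 12, 50  # signal dim, operator rows (overcomplete: p > d), samples

Omega = rng.standard_normal((p, d))
Omega /= np.linalg.norm(Omega, axis=1, keepdims=True)  # unit-norm rows
X = rng.standard_normal((d, n))                        # training corpus

def cosparsity(Omega, x, tol=1e-6):
    """Size of the co-support: number of (near-)zero entries of Omega @ x."""
    return int(np.sum(np.abs(Omega @ x) < tol))

def subgradient_step(Omega, X, lr=1e-3):
    """One projected-subgradient step on ||Omega X||_1.
    sign(.) is a subgradient of the L1 norm; we then project back onto
    unit-norm rows (the 'uniformly normalised' half of the UNTF
    constraint -- the tight-frame projection is omitted for brevity)."""
    G = np.sign(Omega @ X) @ X.T   # subgradient w.r.t. Omega
    Omega = Omega - lr * G
    return Omega / np.linalg.norm(Omega, axis=1, keepdims=True)

Omega2 = subgradient_step(Omega, X)
```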

    TREAT: Terse Rapid Edge-Anchored Tracklets

    Fast computation, efficient memory storage, and performance on par with standard state-of-the-art descriptors make binary descriptors a convenient tool for many computer vision applications. However, their development has mostly been tailored to static images. To address this limitation, we introduce TREAT (Terse Rapid Edge-Anchored Tracklets), a new binary detector and descriptor based on tracklets. It harnesses moving edge maps to perform efficient feature detection, tracking, and description at low computational cost. Experimental results on three public datasets demonstrate improved performance over other popular binary features. These experiments also provide a basis for benchmarking the performance of binary descriptors in video-based applications.
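    Part of what makes binary descriptors cheap to match is that their distance is an XOR followed by a popcount. A minimal illustration (generic Hamming matching, not TREAT's actual code):

```python
import numpy as np

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Hamming distance between two binary descriptors stored as uint8
    bytes (XOR then popcount) -- the cheap matching metric that makes
    binary features attractive."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

d1 = np.array([0b10110010, 0b00001111], dtype=np.uint8)
d2 = np.array([0b10110000, 0b10001111], dtype=np.uint8)
print(hamming(d1, d2))  # -> 2 (two differing bits)
```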

    Gaussian normalization: handling burstiness in visual data

    This paper addresses histogram burstiness, defined as the tendency of histograms to feature peaks out of proportion with their general distribution. After highlighting the impact of this growing issue on computer vision problems and the need to preserve the distribution information, we introduce a new normalization based on a Gaussian fit with a pre-defined variance for each datum that suppresses bursts without adversely affecting the distribution. Experimental results on four public datasets show that our normalization scheme provides a staggering performance boost compared to other normalizations, even allowing Gaussian-normalized Bag-of-Words to perform similarly to intra-normalized Fisher vectors.
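    To see what "suppressing a burst" means, the toy example below compares plain L2 normalization with classic signed-power normalization, a common burstiness remedy; the paper's Gaussian-fit scheme differs in detail and is not reproduced here.

```python
import numpy as np

h = np.array([120., 3., 2., 1., 1., 1.])   # one bin 'bursts' above the rest

def l2_normalize(v):
    return v / np.linalg.norm(v)

def power_normalize(v, alpha=0.5):
    """Classic burstiness remedy: signed power (e.g. square root) before L2.
    Shown only to illustrate burst suppression; the paper's Gaussian-fit
    normalization is a different scheme."""
    return l2_normalize(np.sign(v) * np.abs(v) ** alpha)

print(l2_normalize(h)[0])      # burst dominates the L2-normalized vector (~0.999)
print(power_normalize(h)[0])   # burst is tamed (~0.97)
```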

    First-Principles Structural, Mechanical, and Thermodynamic Calculations of the Negative Thermal Expansion Compound Zr2(WO4)(PO4)2

    The negative thermal expansion (NTE) material Zr2(WO4)(PO4)2 has been investigated for the first time within the framework of density functional perturbation theory (DFPT). The structural, mechanical, and thermodynamic properties of this material have been predicted using the Perdew-Burke-Ernzerhof for solids (PBEsol) exchange-correlation functional, which showed superior accuracy over standard functionals in previous computational studies of the NTE material α-ZrW2O8. The bulk modulus calculated for Zr2(WO4)(PO4)2 using the Vinet equation of state at room temperature is K0 = 63.6 GPa, in close agreement with the experimental estimate of 61.3(8) GPa at T = 296 K. The computed mean linear coefficient of thermal expansion is −3.1 × 10⁻⁶ K⁻¹ in the temperature range ∼0–70 K, in line with X-ray diffraction measurements. The mean Grüneisen parameter controlling the thermal expansion of Zr2(WO4)(PO4)2 is negative below 205 K, with a minimum of −2.1 at 10 K. The calculated standard molar heat capacity and entropy are CP0 = 287.6 and S0 = 321.9 J·mol⁻¹·K⁻¹, respectively. The results reported in this study demonstrate the accuracy of DFPT/PBEsol for assessing or predicting the relationship between structural and thermomechanical properties of NTE materials.
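    The Vinet equation of state used for the bulk modulus has the closed form P(V) = 3 K0 (1 − x) x⁻² exp[(3/2)(K0′ − 1)(1 − x)] with x = (V/V0)^(1/3). A minimal sketch (K0 taken from the abstract; V0 and K0′ below are illustrative placeholders, not the paper's values):

```python
import math

def vinet_pressure(V, V0, K0, K0p):
    """Vinet equation of state: pressure (GPa) at volume V, given the
    zero-pressure volume V0, bulk modulus K0 (GPa), and its pressure
    derivative K0p."""
    x = (V / V0) ** (1.0 / 3.0)
    return 3.0 * K0 * (1.0 - x) / x**2 * math.exp(1.5 * (K0p - 1.0) * (1.0 - x))

K0 = 63.6        # GPa, from the abstract
K0p, V0 = 4.0, 1.0   # illustrative placeholder values
print(vinet_pressure(V0, V0, K0, K0p))                   # 0.0 at zero compression
print(vinet_pressure(0.97 * V0, V0, K0, K0p))            # ~2 GPa at 3% compression
```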

    Multifidelity Uncertainty Propagation via Adaptive Surrogates in Coupled Multidisciplinary Systems

    Fixed-point iteration is a common strategy for handling interdisciplinary coupling within a feedback-coupled multidisciplinary analysis. For each coupled analysis, this requires a large number of disciplinary high-fidelity simulations to resolve the interactions between different disciplines. When embedded within an uncertainty analysis loop (e.g., with Monte Carlo sampling over uncertain parameters), the number of high-fidelity disciplinary simulations quickly becomes prohibitive, because each sample requires a fixed-point iteration and the uncertainty analysis typically involves thousands or even millions of samples. This paper develops a method for uncertainty quantification in feedback-coupled systems that leverages adaptive surrogates to reduce the number of cases for which fixed-point iteration is needed. The multifidelity coupled uncertainty propagation method is an iterative process that uses surrogates to approximate the coupling variables and adaptive sampling strategies to refine the surrogates. The adaptive sampling strategies explored in this work are residual error, information gain, and weighted information gain. The surrogate models are adapted in a way that does not compromise the accuracy of the uncertainty analysis relative to the original coupled high-fidelity problem, as shown through a rigorous convergence analysis. (United States Army Research Office, Multidisciplinary University Research Initiative, Award FA9550-15-1-0038.)
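    The cost structure described above is easy to see in a toy example: two hypothetical scalar "disciplines" coupled through feedback, with every Monte Carlo sample paying for a full fixed-point solve. This is the baseline the paper's adaptive surrogates aim to cheapen; the models below are illustrative, not the paper's.

```python
import numpy as np

# Two toy 'disciplines' coupled through scalars y1, y2 (hypothetical models).
def disc1(z, y2): return 0.5 * z + 0.3 * y2
def disc2(z, y1): return np.cos(y1) - 0.1 * z

def coupled_analysis(z, tol=1e-10, max_iter=100):
    """Resolve the feedback coupling by fixed-point (Gauss-Seidel) iteration."""
    y1, y2 = 0.0, 0.0
    for _ in range(max_iter):
        y1_new = disc1(z, y2)
        y2_new = disc2(z, y1_new)
        if abs(y1_new - y1) + abs(y2_new - y2) < tol:
            return y1_new, y2_new
        y1, y2 = y1_new, y2_new
    return y1, y2

# Plain Monte Carlo UQ: every sample pays for a full fixed-point solve --
# exactly the per-sample cost adaptive surrogates are meant to avoid.
rng = np.random.default_rng(0)
samples = [coupled_analysis(z)[0] for z in rng.standard_normal(1000)]
print(np.mean(samples))
```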

    Retrofitting and Greening Existing Buildings: Strategies for Energy Conservation,Resource Management and Sustainability of the Built Environment in Nigeria

    Energy consumption in residential buildings is a growing phenomenon in the built environment. It has become imperative to react to the state of rapidly dwindling natural resources, environmental pressures, and climate change, which pose a fundamental threat to economic systems and human survival in Nigeria and globally. Until fairly recently, green considerations for existing residential buildings have received little attention. In Nigeria, thousands of low-income households spend large shares of their earnings on energy bills while receiving fewer energy-driven services for their appliances and utilities. This paper explores possible alternatives for reducing dependence on the national energy supply, with greater environmental benefits, through sustainable retrofit and resource-efficiency interventions for low-income houses. The objective is to address issues relating to energy generation, conservation, and other associated resource management, with a view to developing a low-carbon and more eco-friendly built environment. It is expected that the outcome of this paper will make an important contribution in the form of recommendations for future policies and programmes regarding the retrofitting of existing residential houses and the construction of new ones in Nigeria. It concludes that if policies and regulatory mechanisms are put in place for greening low-income housing in Nigeria, they could deliver a pathway to improving the energy efficiency of the existing building sector.

    Action recognition in video using a spatial-temporal graph-based feature representation

    We propose a video-graph-based human action recognition framework. Given an input video sequence, we extract spatio-temporal local features and construct a video graph that incorporates appearance and motion constraints to reflect the spatio-temporal dependencies among the features. In particular, we extend the popular DBSCAN density-based clustering algorithm to form an intuitive video graph. During training, we estimate a linear SVM classifier using the standard Bag-of-Words method. During classification, we apply Graph-Cut optimization to find the most frequent action label in the constructed graph and assign this label to the test video sequence. The proposed approach achieves state-of-the-art performance on standard human action recognition benchmarks, namely the KTH and UCF-Sports datasets, and competitive results on the Hollywood (HOHA) dataset.
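    A crude sketch of the graph-construction step (illustrative data and thresholds; the paper's DBSCAN extension and Graph-Cut labelling are replaced here by a simple neighbourhood rule and a majority vote):

```python
import numpy as np
from collections import Counter

# Hypothetical spatio-temporal features: (x, y, t) plus a per-feature
# action-label vote (e.g. from a nearest-codeword lookup); names illustrative.
feats = np.array([[10, 10, 0], [12, 11, 1], [11, 13, 2], [80, 80, 0], [82, 79, 1]])
votes = ["wave", "wave", "wave", "run", "wave"]

def video_graph_edges(feats, r_space=5.0, r_time=2):
    """Connect features that are close in both space and time -- a crude
    stand-in for a density-based (DBSCAN-style) video graph."""
    edges = []
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            ds = np.hypot(*(feats[i, :2] - feats[j, :2]))
            if ds <= r_space and abs(int(feats[i, 2]) - int(feats[j, 2])) <= r_time:
                edges.append((i, j))
    return edges

# Majority vote over the graph's nodes is the simplest stand-in for
# 'most frequent action label' (the paper uses Graph-Cut optimization).
label = Counter(votes).most_common(1)[0][0]
print(video_graph_edges(feats), label)
```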
