
    Semi-supervised tensor-based graph embedding learning and its application to visual discriminant tracking

    An appearance model adaptable to changes in object appearance is critical in visual object tracking. In this paper, we treat an image patch as a 2-order tensor, which preserves the original image structure. We design two graphs to characterize the intrinsic local geometrical structure of the tensor samples of the object and the background. Graph embedding is used to reduce the dimensions of the tensors while preserving the structure of the graphs. Then, a discriminant embedding space is constructed. We prove two propositions for finding the transformation matrices that map the original tensor samples into the tensor-based graph embedding space. To encode more discriminant information in the embedding space, we propose a transfer-learning-based semi-supervised strategy that iteratively adjusts the embedding space, into which discriminative information obtained at earlier times is transferred. We apply the proposed semi-supervised tensor-based graph embedding learning algorithm to visual tracking. The new tracking algorithm captures an object’s appearance characteristics during tracking and uses a particle filter to estimate the optimal object state. Experimental results on the CVPR 2013 benchmark dataset demonstrate the effectiveness of the proposed tracking algorithm.
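    The core idea of treating a patch as a 2-order tensor can be illustrated with a bilinear projection. The sketch below uses random orthonormal placeholder matrices `U1`, `U2` (hypothetical stand-ins; the paper derives the actual transformation matrices from its two propositions via graph embedding):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # A 2-order tensor sample: a 32x32 grayscale image patch.
    patch = rng.standard_normal((32, 32))

    # Placeholder transformation matrices with orthonormal columns (via QR).
    # In the paper these would be the discriminant matrices from the
    # tensor-based graph embedding, not random projections.
    U1, _ = np.linalg.qr(rng.standard_normal((32, 8)))
    U2, _ = np.linalg.qr(rng.standard_normal((32, 8)))

    # Bilinear projection: the 32x32 patch maps to an 8x8 embedded tensor,
    # preserving the two-mode image structure instead of flattening it.
    embedded = U1.T @ patch @ U2
    print(embedded.shape)  # (8, 8)
    ```

    Note that, unlike vectorizing the patch into a 1024-dimensional vector, the two small matrices act on rows and columns separately, which is what lets the embedding respect the original image structure.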

    Beyond One-hot Encoding: lower dimensional target embedding

    Target encoding plays a central role when learning Convolutional Neural Networks. In this realm, one-hot encoding is the most prevalent strategy due to its simplicity. However, this widespread encoding scheme assumes a flat label space, ignoring the rich relationships among labels that can be exploited during training. In large-scale datasets, data does not span the full label space, but instead lies on a low-dimensional output manifold. Following this observation, we embed the targets into a low-dimensional space, drastically improving convergence speed while preserving accuracy. Our contribution is twofold: (i) we show that random projections of the label space are a valid tool for finding such lower-dimensional embeddings, dramatically boosting convergence rates at zero computational cost; and (ii) we propose a normalized eigenrepresentation of the class manifold that encodes the targets with minimal information loss, improving the accuracy of random-projection encoding while enjoying the same convergence rates. Experiments on CIFAR-100, CUB200-2011, ImageNet, and MIT Places demonstrate that the proposed approach drastically improves convergence speed while reaching very competitive accuracy rates. (Published at Image and Vision Computing.)
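    Contribution (i) can be sketched in a few lines: replace each 100-dimensional one-hot target with a random low-dimensional vector and decode predictions by nearest class embedding. This is a minimal toy illustration (the dimensions and decoding rule here are illustrative choices, not the paper's exact setup):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    num_classes, dim = 100, 64  # e.g. CIFAR-100 labels into a 64-d target space

    # Random projection: class c is encoded as column c (unit-normalized).
    P = rng.standard_normal((dim, num_classes))
    P /= np.linalg.norm(P, axis=0)

    # A network would regress to P[:, y] instead of a 100-d one-hot vector;
    # predictions are decoded by the most similar class embedding.
    def decode(pred: np.ndarray) -> int:
        return int(np.argmax(P.T @ pred))

    # Random unit vectors in 64 dimensions are near-orthogonal with high
    # probability, so each class embedding decodes back to its own class.
    ok = all(decode(P[:, c]) == c for c in range(num_classes))
    print(ok)  # True
    ```

    The projection costs one matrix lookup per label, which is why the abstract can claim zero computational cost for the embedding itself.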

    Occlusion and Slice-Based Volume Rendering Augmentation for PET-CT

    Dual-modality positron emission tomography and computed tomography (PET-CT) depicts pathophysiological function with PET in an anatomical context provided by CT. Three-dimensional volume rendering approaches enable visualization of a two-dimensional slice of interest (SOI) from PET combined with direct volume rendering (DVR) from CT. However, because DVR depicts the whole volume, it may occlude a region of interest, such as a tumor in the SOI. Volume clipping can eliminate this occlusion by cutting away parts of the volume, but it requires intensive user involvement to decide on the appropriate clipping depth. Currently available transfer functions can make the regions of interest visible, but this often requires complex parameter tuning coupled with pre-processing of the data to define the regions. Hence, we propose a new visualization algorithm in which an SOI from PET is augmented by volumetric contextual information from a DVR of the counterpart CT, so that the obtrusiveness of the CT in the SOI is minimized. Our approach automatically calculates an augmentation depth parameter from the occlusion information derived from the CT voxels in front of the PET SOI. The depth parameter is then used to generate an opacity weight function that controls the amount of contextual information visible from the DVR. We outline the improvements of our visualization approach over other slice-based approaches and over our previous work. We present a preliminary clinical evaluation of our visualization in a series of PET-CT studies from patients with non-small cell lung cancer.
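    The idea of deriving a depth parameter from accumulated occlusion can be sketched with front-to-back opacity compositing along one ray. This is a simplified illustration under assumed conventions (the `budget` threshold and the hard cutoff in `weighted_alphas` are hypothetical choices, not the paper's actual opacity weight function):

    ```python
    import numpy as np

    def augmentation_depth(alphas: np.ndarray, budget: float = 0.5) -> int:
        """Voxel index along a ray at which accumulated CT opacity
        (front-to-back compositing) first reaches `budget`."""
        transmitted = np.cumprod(1.0 - alphas)   # light surviving each voxel
        accumulated = 1.0 - transmitted          # opacity seen so far
        hit = np.nonzero(accumulated >= budget)[0]
        return int(hit[0]) if hit.size else len(alphas)

    def weighted_alphas(alphas: np.ndarray, budget: float = 0.5) -> np.ndarray:
        """Suppress CT opacities past the augmentation depth so the context
        in front of the PET slice never fully occludes it (hard cutoff here;
        a smooth weight function would taper instead)."""
        d = augmentation_depth(alphas, budget)
        weights = np.ones_like(alphas)
        weights[d:] = 0.0
        return alphas * weights

    # CT opacities of five voxels in front of the PET SOI along one ray.
    ray = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
    print(augmentation_depth(ray))  # 3
    ```

    Accumulated opacity after the first three voxels is 1 - 0.9*0.8*0.7 = 0.496, just under the budget; the fourth voxel pushes it past 0.5, so depth 3 is where augmentation stops.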

    Regulating Ex Post: How Law Can Address the Inevitability of Financial Failure

    Unlike many other areas of regulation, financial regulation operates in the context of a complex interdependent system. The interconnections among firms, markets, and legal rules have implications for financial regulatory policy, especially the choice between ex ante regulation aimed at preventing financial failure and ex post regulation aimed at responding to that failure. Regulatory theory has paid relatively little attention to this distinction. Were regulation to consist solely of duty-imposing norms, such neglect might be defensible. In the context of a system, however, regulation can also take the form of interventions aimed at mitigating the potentially systemic consequences of a financial failure. We show that this dual role of financial regulation implies that ex ante regulation and ex post regulation should be balanced in setting financial regulatory policy, and we offer guidelines for achieving that balance.

    Community-Level Anomaly Detection for Anti-Money Laundering

    Anomaly detection in networks often boils down to identifying an underlying graph structure on which the abnormal occurrence rests. Financial fraud schemes are one such example, where more or less intricate schemes are employed to elude transaction security protocols. We investigate the problem of learning graph structure representations using adaptations of dictionary learning aimed at encoding connectivity patterns. In particular, we adapt dictionary learning strategies to the specificity of network topologies and propose new methods that impose Laplacian structure on the dictionaries themselves. In one adaptation, we focus on classifying topologies by working directly on the graph Laplacian and cast the learning problem to accommodate its 2D structure. We tackle the same problem by learning dictionaries consisting of vectorized atomic Laplacians, and provide a block coordinate descent scheme to solve the new dictionary learning formulation. Imposing Laplacian structure on the dictionaries is also proposed as an adaptation of the Single Block Orthogonal learning method. Results on synthetic graph datasets comprising different graph topologies confirm the potential of dictionaries to directly represent graph structure information.
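    The notion of a dictionary of vectorized Laplacian atoms can be illustrated with a drastically simplified nearest-atom matcher. This sketch hand-builds one atom per topology and classifies a query graph by correlation; it is not the paper's block coordinate descent learning scheme, and the two example topologies are assumptions for the demo:

    ```python
    import numpy as np

    def laplacian(adj: np.ndarray) -> np.ndarray:
        """Combinatorial graph Laplacian L = D - A."""
        return np.diag(adj.sum(axis=1)) - adj

    def path_graph(n: int) -> np.ndarray:
        A = np.zeros((n, n))
        idx = np.arange(n - 1)
        A[idx, idx + 1] = A[idx + 1, idx] = 1.0
        return A

    def star_graph(n: int) -> np.ndarray:
        A = np.zeros((n, n))
        A[0, 1:] = A[1:, 0] = 1.0
        return A

    n = 6
    # Dictionary: one vectorized, unit-normalized Laplacian atom per topology.
    atoms = {"path": laplacian(path_graph(n)), "star": laplacian(star_graph(n))}
    D = np.stack([L.ravel() / np.linalg.norm(L) for L in atoms.values()], axis=1)

    def classify(L: np.ndarray) -> str:
        """Label a query Laplacian by its most correlated dictionary atom."""
        x = L.ravel() / np.linalg.norm(L)
        return list(atoms)[int(np.argmax(D.T @ x))]

    print(classify(laplacian(star_graph(n))))  # star
    ```

    Because every atom is itself a valid Laplacian (symmetric, zero row sums), the dictionary directly encodes graph structure, which is the property the learned formulations in the abstract enforce.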

    A Better Calculus for Regulators: From Cost-Benefit Analysis to the Social Welfare Function

    The “social welfare function” (SWF) is a powerful tool that originates in theoretical welfare economics and has wide application in economic scholarship, for example in optimal tax theory and environmental economics. This Article provides a comprehensive introduction to the SWF framework. It then shows how the SWF framework can be used as the basis for regulatory policy analysis, and why it improves upon cost-benefit analysis (CBA). Two types of SWFs are especially plausible: the utilitarian SWF, which sums individual well-being numbers, and the prioritarian SWF, which gives extra weight to the well-being of the worse off. Either one is an improvement over CBA, which uses a monetary metric to quantify well-being and is thereby distorted by the declining marginal utility of money. The Article employs a simulation model based on the U.S. population survival curve and income distribution to illustrate, in detail, how the two SWFs differ from CBA in selecting risk-regulation policies.
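    The difference between the two SWFs can be shown with a toy calculation (a hypothetical five-person society and an Atkinson-style concave transform; this is an illustration of the general framework, not the Article's simulation model):

    ```python
    import numpy as np

    # Hypothetical well-being numbers for a five-person society.
    w = np.array([10.0, 20.0, 40.0, 80.0, 160.0])

    def utilitarian(wellbeing: np.ndarray) -> float:
        """Utilitarian SWF: simple sum of individual well-being."""
        return float(wellbeing.sum())

    def prioritarian(wellbeing: np.ndarray, gamma: float = 2.0) -> float:
        """Prioritarian SWF via an Atkinson-style concave transform,
        which gives extra weight to the worse off."""
        if gamma == 1.0:
            return float(np.sum(np.log(wellbeing)))
        return float(np.sum(wellbeing ** (1.0 - gamma) / (1.0 - gamma)))

    # Policy A gives 5 units to the poorest person; Policy B gives the same
    # 5 units to the richest person.
    A = w.copy(); A[0] += 5
    B = w.copy(); B[-1] += 5

    print(utilitarian(A) == utilitarian(B))   # True: indifferent
    print(prioritarian(A) > prioritarian(B))  # True: prefers Policy A
    ```

    The utilitarian SWF is indifferent between the two policies because the total gain is identical, while the concave transform makes the prioritarian SWF strictly prefer directing the gain to the worse off, which is exactly the distributional sensitivity CBA's monetary metric lacks.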