
    Trajectory Analysis for Sport and Video Surveillance

    In video surveillance and sports analysis applications, object trajectories offer the possibility of extracting rich information on the underlying behavior of the moving targets. To this end, we introduce an extension of Point Distribution Models (PDM) to analyze object motion in its spatial, temporal and spatiotemporal dimensions. These trajectory models represent object paths as an average trajectory and a set of deformation modes in the spatial, temporal and spatiotemporal domains. Thus any given motion can be expressed in terms of its modes, which in turn can be ascribed to a particular behavior. The proposed analysis tool has been tested on motion data extracted from a vision system that was tracking radio-guided cars running inside a circuit. This affords an easier interpretation of results, because the shortest lap provides a reference behavior. Besides presenting an actual analysis, we discuss how to normalize the trajectories so that the analysis is meaningful.
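The core of a Point Distribution Model is a mean shape plus principal deformation modes obtained from the covariance of aligned shape vectors. A minimal sketch of the spatial variant for trajectories, assuming each trajectory has already been resampled to a common number of points (all function names here are illustrative, not from the paper):

```python
import numpy as np

def trajectory_pdm(trajectories, n_modes=3):
    """Fit a Point Distribution Model to a set of trajectories.

    Each trajectory is an (n_points, 2) array of (x, y) samples, already
    resampled/normalized to a common length. Returns the mean trajectory
    vector and the leading deformation modes.
    """
    # Flatten each trajectory into a shape vector (x1, y1, ..., xN, yN).
    X = np.stack([t.reshape(-1) for t in trajectories])  # (n_traj, 2*n_points)
    mean = X.mean(axis=0)
    # Eigen-decomposition of the covariance yields the deformation modes.
    cov = np.cov(X - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # largest variance first
    modes = eigvecs[:, order[:n_modes]].T      # (n_modes, 2*n_points)
    return mean, modes

def project(trajectory, mean, modes):
    """Express a trajectory by its mode coefficients b = P (x - mean)."""
    return modes @ (trajectory.reshape(-1) - mean)
```

A given motion is then characterized by its coefficient vector `b`, which is what would be ascribed to a particular behavior.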

    Theory and simulation of quantum photovoltaic devices based on the non-equilibrium Green's function formalism

    This article reviews the application of the non-equilibrium Green's function formalism to the simulation of novel photovoltaic devices utilizing quantum confinement effects in low-dimensional absorber structures. It covers well-known aspects of the fundamental NEGF theory for a system of interacting electrons, photons and phonons with relevance for the simulation of optoelectronic devices, and at the same time introduces new approaches to the theoretical description of the elementary processes of photovoltaic device operation, such as photogeneration via coherent excitonic absorption, phonon-mediated indirect optical transitions or non-radiative recombination via defect states. While the description of the theoretical framework is kept as general as possible, two specific prototypical quantum photovoltaic devices, a single quantum well photodiode and a silicon-oxide based superlattice absorber, are used to illustrate the kind of unique insight that numerical simulations based on the theory are able to provide.

    Measurements of the branching fractions of B+→pp̄K+ decays

    The branching fractions of the decay B+ → pp̄K+ for different intermediate states are measured using data, corresponding to an integrated luminosity of 1.0 fb−1, collected by the LHCb experiment. The total branching fraction, its charmless component (Mpp̄ < 2.85 GeV/c²) and the branching fractions via the resonant cc̄ states ηc(1S) and ψ(2S), relative to the decay via a J/ψ intermediate state, are [Equation not available: see fulltext.] Upper limits on the B+ branching fractions into the ηc(2S) meson and into the charmonium-like states X(3872) and X(3915) are also obtained.

    Observation of the decay Bc→J/ψK+K−π+

    The decay Bc→J/ψK+K−π+ is observed for the first time, using proton-proton collisions collected with the LHCb detector corresponding to an integrated luminosity of 3 fb−1. A signal yield of 78 ± 14 decays is reported with a significance of 6.2 standard deviations. The ratio of the branching fraction of Bc→J/ψK+K−π+ decays to that of Bc→J/ψπ+ decays is measured to be 0.53 ± 0.10 ± 0.05, where the first uncertainty is statistical and the second is systematic.

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory form a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low-complexity. These priors encompass, as popular examples, sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward the understanding of the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of ℓ²-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solve the corresponding large-scale regularized optimization problem.
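The forward-backward proximal splitting scheme mentioned in (iii) can be sketched for the simplest low-complexity prior, the ℓ¹ (sparsity) norm, where the backward step reduces to soft thresholding. This is a generic textbook instance, not code from the chapter:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1 (componentwise soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward_l1(A, y, lam, n_iter=200):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by forward-backward splitting.

    Forward step: explicit gradient descent on the smooth data-fidelity term.
    Backward step: proximal map of the l1 regularizer (soft threshold).
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L with L the gradient Lipschitz const.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                          # forward step
        x = soft_threshold(x - step * grad, step * lam)   # backward (prox) step
    return x
```

Swapping `soft_threshold` for the proximal operator of group sparsity, total variation or the nuclear norm gives the corresponding scheme for the other regularizers discussed in the review.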

    Measurements of the B+, B0, B0s meson and Λ0b baryon lifetimes

    Measurements of b-hadron lifetimes are reported using pp collision data, corresponding to an integrated luminosity of 1.0 fb−1, collected by the LHCb detector at a centre-of-mass energy of 7 TeV. Using the exclusive decays B+→J/ψK+, B0→J/ψK*(892)0, B0→J/ψK0S, Λ0b→J/ψΛ and B0s→J/ψφ, the average decay times in these modes are measured to be τ(B+→J/ψK+) = 1.637 ± 0.004 ± 0.003 ps, τ(B0→J/ψK*(892)0) = 1.524 ± 0.006 ± 0.004 ps, τ(B0→J/ψK0S) = 1.499 ± 0.013 ± 0.005 ps, τ(Λ0b→J/ψΛ) = 1.415 ± 0.027 ± 0.006 ps and τ(B0s→J/ψφ) = 1.480 ± 0.011 ± 0.005 ps, where the first uncertainty is statistical and the second is systematic. These represent the most precise lifetime measurements in these decay modes. In addition, ratios of these lifetimes, and the ratio of the decay-width difference, ΔΓd, to the average width, Γd, in the B0 system, ΔΓd/Γd = −0.044 ± 0.025 ± 0.011, are reported. All quantities are found to be consistent with Standard Model expectations.

    The SURE-LET Approach to Image Denoising

    We propose a new approach to image denoising, based on the image-domain minimization of an estimate of the mean squared error—Stein's unbiased risk estimate (SURE). Unlike most existing denoising algorithms, using the SURE makes it needless to hypothesize a statistical model for the noiseless image. A key point of our approach is that, although the (nonlinear) processing is performed in a transformed domain—typically, an undecimated discrete wavelet transform, but we also address nonorthonormal transforms—this minimization is performed in the image domain. Indeed, we demonstrate that, when the transform is a “tight” frame (an undecimated wavelet transform using orthonormal filters), separate subband minimization yields substantially worse results. In order for our approach to be viable, we add another principle, that the denoising process can be expressed as a linear combination of elementary denoising processes—linear expansion of thresholds (LET). Armed with the SURE and LET principles, we show that a denoising algorithm merely amounts to solving a linear system of equations, which is obviously fast and efficient. Quite remarkably, the very competitive results obtained by performing a simple threshold (image-domain SURE optimized) on the undecimated Haar wavelet coefficients show that the SURE-LET principle has a huge potential.
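The key idea—estimating the MSE without a model of the clean image—is easiest to see in the classic one-dimensional case of SURE for a soft-threshold denoiser with known Gaussian noise level σ. A minimal sketch (simpler than the paper's transform-domain LET construction; names are illustrative):

```python
import numpy as np

def sure_soft_threshold(y, t, sigma):
    """Stein's unbiased risk estimate of the MSE of soft-thresholding y at t.

    SURE(t) = ||x_hat - y||^2 + 2*sigma^2*div - N*sigma^2, where div, the
    divergence of the soft-threshold map, counts the coefficients kept.
    Only the noisy data y enter: no model of the clean signal is needed.
    """
    residual_sq = np.minimum(np.abs(y), t) ** 2  # ||x_hat - y||^2, termwise
    div = np.count_nonzero(np.abs(y) > t)        # divergence of the estimator
    return residual_sq.sum() + 2 * sigma**2 * div - y.size * sigma**2

def best_threshold(y, sigma, candidates):
    """Pick the threshold minimizing SURE over a candidate list."""
    return min(candidates, key=lambda t: sure_soft_threshold(y, t, sigma))
```

In SURE-LET the denoiser is instead a linear combination of such elementary thresholding processes, so minimizing SURE over the combination weights reduces to the linear system mentioned above.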

    Observation of the decay B+c→B0sπ+

    The result of a search for the decay B+c→B0sπ+ is presented, using the B0s→Ds−π+ and B0s→J/ψφ channels. The analysis is based on a data sample of pp collisions collected with the LHCb detector, corresponding to an integrated luminosity of 1 fb−1 taken at a center-of-mass energy of 7 TeV, and 2 fb−1 taken at 8 TeV. The decay B+c→B0sπ+ is observed with a significance in excess of 5 standard deviations independently in both decay channels. The measured product of the ratio of cross sections and branching fraction is [σ(B+c)/σ(B0s)] × B(B+c→B0sπ+) = [2.37 ± 0.31 (stat) ± 0.11 (syst) +0.17/−0.13 (τB+c)] × 10−3, in the pseudorapidity range 2 < η(B) < 5, where the first uncertainty is statistical, the second is systematic, and the third is due to the uncertainty on the B+c lifetime. This is the first observation of a B meson decaying to another B meson via the weak interaction.

    Study of B0(s)→K0Sh+h′− decays with first observation of B0s→K0SK±π∓ and B0s→K0Sπ+π−

    A search for charmless three-body decays of B0 and B0s mesons with a K0S meson in the final state is performed using pp collision data, corresponding to an integrated luminosity of 1.0 fb−1, collected at a centre-of-mass energy of 7 TeV by the LHCb experiment. Branching fractions of the B0(s)→K0Sh+h′− decay modes (h(′) = π, K), relative to the well-measured B0→K0Sπ+π− decay, are obtained. First observation of the decay modes B0s→K0SK±π∓ and B0s→K0Sπ+π− and confirmation of the decay B0→K0SK±π∓ are reported. The following relative branching fraction measurements or limits are obtained:
    B(B0→K0SK±π∓)/B(B0→K0Sπ+π−) = 0.128 ± 0.017 (stat.) ± 0.009 (syst.),
    B(B0→K0SK+K−)/B(B0→K0Sπ+π−) = 0.385 ± 0.031 (stat.) ± 0.023 (syst.),
    B(B0s→K0Sπ+π−)/B(B0→K0Sπ+π−) = 0.29 ± 0.06 (stat.) ± 0.03 (syst.) ± 0.02 (fs/fd),
    B(B0s→K0SK±π∓)/B(B0→K0Sπ+π−) = 1.48 ± 0.12 (stat.) ± 0.08 (syst.) ± 0.12 (fs/fd),
    B(B0s→K0SK+K−)/B(B0→K0Sπ+π−) ∈ [0.004, 0.068] at 90% CL.

    Opposite-side flavour tagging of B mesons at the LHCb experiment

    The calibration and performance of the opposite-side flavour tagging algorithms used for the measurements of time-dependent asymmetries at the LHCb experiment are described. The algorithms have been developed using simulated events and optimized and calibrated with B+→J/ψK+, B0→J/ψK∗0 and B0→D∗−μ+νμ decay modes with 0.37 fb−1 of data collected in pp collisions at √s = 7 TeV during the 2011 physics run. The opposite-side tagging power is determined in the B+→J/ψK+ channel to be (2.10 ± 0.08 ± 0.24)%, where the first uncertainty is statistical and the second is systematic.
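The quoted tagging power is the standard effective tagging efficiency εeff = εtag(1 − 2ω)², combining the fraction of tagged candidates εtag with the mistag probability ω. A small sketch using this standard definition (the example numbers are hypothetical, chosen only to give a value of the same order as the measurement):

```python
def tagging_power(eff_tag, mistag):
    """Effective tagging efficiency eps_eff = eps_tag * (1 - 2*omega)^2.

    eff_tag: fraction of candidates with a tagging decision (eps_tag).
    mistag:  probability omega that the decision is wrong (0 <= omega <= 0.5).
    A tagged sample of N events has the statistical power of an ideal,
    perfectly tagged sample of eps_eff * N events.
    """
    dilution = 1.0 - 2.0 * mistag  # D = 1 - 2*omega
    return eff_tag * dilution ** 2

# Hypothetical inputs: 33% tagging efficiency with a 37% mistag rate
# yield a tagging power of roughly 2%.
```

This is why a modest mistag rate is so costly: the dilution enters squared, so halving (1 − 2ω) quarters the effective statistics.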