12 research outputs found

    Texture as pixel feature for video object segmentation

    As texture represents one of the key perceptual attributes of any object, integrating textural information into existing video object segmentation frameworks affords the potential to achieve semantically improved performance. While object segmentation is fundamentally pixel-based classification, texture is normally defined for the entire image, which raises the question of how best to directly specify and characterise texture as a pixel feature. This paper introduces a generic strategy for representing textural information so it can be seamlessly incorporated as a pixel feature into any video object segmentation paradigm. Both numerical and perceptual results on various test sequences reveal considerable improvement in object segmentation performance when textural information is embedded.
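    The abstract does not specify the texture descriptor itself, so the following is only a minimal sketch of the general idea: compute a texture measure at every pixel (here, local standard deviation in a sliding window) and stack it with the intensity so that any pixel-wise segmentation classifier can consume it. The function names and the window size are illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def pixel_texture_feature(gray, window=7):
    """Per-pixel texture as local standard deviation in a sliding window.

    `gray` is a 2-D float array (one video frame, luminance only).
    This is a generic stand-in for the paper's texture descriptor,
    which the abstract does not specify.
    """
    mean = uniform_filter(gray, size=window)
    mean_sq = uniform_filter(gray * gray, size=window)
    var = np.clip(mean_sq - mean * mean, 0.0, None)
    return np.sqrt(var)

def build_pixel_features(gray):
    """Stack intensity and texture into one feature vector per pixel,
    ready to feed any pixel-wise segmentation classifier."""
    texture = pixel_texture_feature(gray)
    return np.stack([gray, texture], axis=-1)  # shape (H, W, 2)
```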

    A Bezier curve-based generic shape encoder

    Existing Bezier curve-based shape description techniques primarily focus on determining a set of pertinent Control Points (CP) to represent a particular shape contour. While many different approaches have been proposed, none adequately considers domain-specific information about the shape contour, such as its gradualness and sharpness, in the CP generation process, which can result in large distortions in the object's shape representation. This paper introduces a novel Bezier Curve-based Generic Shape Encoder (BCGSE) that partitions an object contour into contiguous segments based upon its cornerity, before generating the CP for each segment using relevant shape curvature information. In addition, while CP encoding has generally been ignored, BCGSE embeds an efficient vertex-based encoding strategy that exploits the latent equidistance between consecutive CP. A nonlinear optimisation technique is also presented to enable the encoder to automatically adapt to bit-rate constraints. The performance of the BCGSE framework has been rigorously tested on a variety of diverse arbitrary shapes from both a distortion and a requisite bit-rate perspective, with qualitative and quantitative results corroborating its superiority over existing shape descriptors.
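    As a rough illustration of the two steps named in the abstract (partitioning a contour at corner points, then choosing control points per segment), the sketch below uses the turning angle at each contour vertex as a stand-in for the cornerity measure and a naive four-point control-point choice in place of the paper's curvature-driven CP generation; all function names and thresholds are assumptions, not the BCGSE algorithm itself.

```python
import numpy as np

def turning_angles(contour):
    """Absolute turning angle at each vertex of a closed contour
    (N x 2 array); used here as a simple proxy for 'cornerity'."""
    prev = np.roll(contour, 1, axis=0)
    nxt = np.roll(contour, -1, axis=0)
    v1 = contour - prev
    v2 = nxt - contour
    a1 = np.arctan2(v1[:, 1], v1[:, 0])
    a2 = np.arctan2(v2[:, 1], v2[:, 0])
    return np.abs((a2 - a1 + np.pi) % (2 * np.pi) - np.pi)

def split_at_corners(contour, angle_thresh=np.pi / 4):
    """Partition the closed contour into contiguous segments at corners."""
    corners = np.where(turning_angles(contour) > angle_thresh)[0]
    if len(corners) < 2:
        return [contour]
    segments = []
    for i, start in enumerate(corners):
        end = corners[(i + 1) % len(corners)]
        if end > start:
            segments.append(contour[start:end + 1])
        else:  # wrap around the closed contour
            segments.append(np.vstack([contour[start:], contour[:end + 1]]))
    return segments

def cubic_bezier_cp(segment):
    """Crude control-point choice for one segment: endpoints plus the
    one-third and two-thirds points along the polyline (a placeholder
    for the paper's curvature-driven CP generation)."""
    idx = np.linspace(0, len(segment) - 1, 4).round().astype(int)
    return segment[idx]
```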

    Automatic video object segmentation from VOP

    The video coding standard MPEG-4 enables content-based functionalities through a prior decomposition of sequences into video object planes (VOPs), so that each VOP represents a semantic object; the extraction of semantic objects is therefore an essential step. The standard provides various coding tools: shape coding, motion estimation and compensation, texture coding, multifunctional coding, error resilience, sprite coding and scalability. Object extraction is performed using diverse techniques, such as pixel-based, region-based, boundary-based, morphological, Bayesian and model-based segmentation. However, most of these techniques are either manual or semi-automatic. Semi-automatic methods require user assistance and struggle with the following issues: the background varies due to camera motion, lighting conditions change slightly, new objects can appear at any time, objects may remain in the scene for a long time, many objects may be present in a scene, and occlusion is possible. It is therefore very important to develop automatic techniques that are robust and fast, that combine low-level automatic feature segmentation with interactive methods for defining and tracking high-level semantic video objects, and that address the above constraints. In this paper we propose how this might be done, using the following modules: sprite generation from the background together with a statistical feature analysis approach for object definition, and region-based motion estimation for tracking; a sketch of the first module is given below.
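    The following is a minimal sketch of the background-sprite module as it is commonly realised, not necessarily as the paper implements it: the sprite is approximated by the per-pixel temporal median of the frame stack, and object pixels are then defined by a simple statistical test on the deviation from that sprite. The threshold k and the MAD-based scale estimate are illustrative assumptions.

```python
import numpy as np

def background_sprite(frames):
    """Estimate a static background sprite as the per-pixel temporal
    median of a stack of grayscale frames (T x H x W array).
    A simple stand-in for the sprite-generation module; the paper's
    exact procedure is not given in the abstract."""
    return np.median(frames, axis=0)

def object_mask(frame, sprite, k=2.5):
    """Flag pixels whose deviation from the sprite exceeds k times a
    robust estimate of the background scatter, giving a basic
    statistical object definition."""
    diff = np.abs(frame - sprite)
    sigma = 1.4826 * np.median(np.abs(diff - np.median(diff)))  # MAD scale
    return diff > k * max(sigma, 1e-6)
```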

    Video coding for mobile communications

    With the significant influence and increasing requirements of visual mobile communications in our everyday lives, low bit-rate video coding to handle the stringent bandwidth limitations of mobile networks has become a major research topic. Because processing power and battery resources are inherently constrained, and signals have to be transmitted over error-prone mobile channels, coders are required to be both low in complexity and robustly error resilient. To support multilevel users, any encoded bit-stream should also be both scalable and embedded. This chapter presents a review of appropriate image and video coding techniques for mobile communication applications and aims to provide an appreciation of the rich and far-reaching advancements taking place in this exciting field, while concomitantly outlining both the physical significance of popular image and video coding quality metrics and some of the research challenges that remain to be resolved.

    Measurement of single-diffractive dijet production in proton–proton collisions at √s = 8 TeV with the CMS and TOTEM experiments

    Measurements are presented of the single-diffractive dijet cross section and the diffractive cross section as a function of the proton fractional momentum loss ξ and the four-momentum transfer squared t. Both processes pp→pX and pp→Xp, i.e. with the proton scattering to either side of the interaction point, are measured, where X includes at least two jets; the results of the two processes are averaged. The analyses are based on data collected simultaneously with the CMS and TOTEM detectors at the LHC in proton–proton collisions at √s = 8 TeV during a dedicated run with β* = 90 m at low instantaneous luminosity and correspond to an integrated luminosity of 37.5 nb⁻¹. The single-diffractive dijet cross section σ_jj^pX, in the kinematic region ξ < 0.1, 0.03 < |t| < 1 GeV², with at least two jets with transverse momentum pT > 40 GeV and pseudorapidity |η| < 4.4, is 21.7 ± 0.9 (stat) +3.0/−3.3 (syst) ± 0.9 (lumi) nb. The ratio of the single-diffractive to inclusive dijet yields, normalised per unit of ξ, is presented as a function of x, the longitudinal momentum fraction of the proton carried by the struck parton. The ratio in the kinematic region defined above, for x values in the range −2.9 ≤ log₁₀ x ≤ −1.6, is R = (σ_jj^pX/Δξ)/σ_jj = 0.025 ± 0.001 (stat) ± 0.003 (syst), where σ_jj^pX and σ_jj are the single-diffractive and inclusive dijet cross sections, respectively. The results are compared with predictions from models of diffractive and nondiffractive interactions. Monte Carlo predictions based on the HERA diffractive parton distribution functions agree well with the data when corrected for the effect of soft rescattering between the spectator partons. © 2020, CERN for the benefit of the CMS and TOTEM collaborations

    Measurements of triple-differential cross sections for inclusive isolated-photon+jet events in pp collisions at √s = 8 TeV

    Measurements are presented of the triple-differential cross section for inclusive isolated-photon+jet events in pp collisions at √s = 8 TeV as a function of photon transverse momentum (pTγ), photon pseudorapidity (ηγ), and jet pseudorapidity (ηjet). The data correspond to an integrated luminosity of 19.7 fb⁻¹ and probe a broad range of the available phase space, for |ηγ| < 1.44 and 1.57 < |ηγ| < 2.50, |ηjet| < 2.5, 40 < pTγ < 1000 GeV, and jet transverse momentum pTjet > 25 GeV. The measurements are compared to next-to-leading order perturbative quantum chromodynamics calculations, which reproduce the data within uncertainties. © 2019, CERN for the benefit of the CMS collaboration

    Measurements with silicon photomultipliers of dose-rate effects in the radiation damage of plastic scintillator tiles in the CMS hadron endcap calorimeter

    Measurements are presented of the reduction of signal output due to radiation damage for two types of plastic scintillator tiles used in the hadron endcap (HE) calorimeter of the CMS detector. The tiles were exposed to particles produced in proton-proton (pp) collisions at the CERN LHC with a center-of-mass energy of 13 TeV, corresponding to a delivered luminosity of 50 fb⁻¹. The measurements are based on readout channels of the HE that were instrumented with silicon photomultipliers, and are derived using data from several sources: a laser calibration system, a movable radioactive source, and hadrons and muons produced in pp collisions. Results from several irradiation campaigns using ⁶⁰Co sources are also discussed. The damage is presented as a function of dose rate. Within the range of these measurements, for a fixed dose the damage increases with decreasing dose rate.