
    Fuzzy logic based approach for object feature tracking

    This thesis introduces a novel technique for feature tracking in sequences of greyscale images based on fuzzy logic. A versatile and modular methodology for feature tracking using fuzzy sets and inference engines is presented, together with an extension of this methodology for the correct tracking of multiple features. To perform feature tracking, three membership functions are initially defined: one related to the distinctive property of the feature to be tracked, one reflecting the assumption that the feature moves smoothly between consecutive images of the sequence, and one concerning its expected future location. Applying these functions to the image pixels yields the corresponding fuzzy sets, which are then mathematically manipulated to serve as input to an inference engine. Situations such as occlusion or detection failure are overcome using positions estimated from a motion model and a state vector of the feature. This methodology was first applied to track a single feature identified by the user. Several performance tests were conducted on sequences of both synthetic and real images, and the experimental results are presented, analysed and discussed. Although this methodology could be applied directly to multiple-feature tracking, an extension was developed for that purpose. In this new method, the processing sequence of the features is dynamic and hierarchical: dynamic because the sequence can change over time, and hierarchical because features with higher priority are processed first. The process thus gives preference to features whose location is easier to predict over features whose behaviour is less predictable. When a feature's priority value becomes too low, it is no longer tracked by the algorithm.
To assess the performance of this new approach, sequences of images in which several user-specified features are to be tracked were used. In the final part of this work, the conclusions drawn from it are presented, together with some guidelines for future research.
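The two mechanisms the abstract describes, fuzzy pixel classification and priority-ordered multi-feature processing, can be sketched compactly. The following is a minimal illustrative Python sketch, not the thesis's implementation: three Gaussian membership functions (intensity similarity, closeness to the previous position, closeness to the predicted position) are applied to every pixel and combined with a min operator as a simple fuzzy-AND rule, and features are then processed in descending priority order, dropping those below a cutoff. All function names, parameter values and the Gaussian form are assumptions.

```python
import heapq

import numpy as np

def gaussian_mf(x, mu, sigma):
    # Gaussian membership function, valued in (0, 1]
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def track_step(image, template_intensity, prev_pos, predicted_pos,
               sigma_i=20.0, sigma_d=5.0, sigma_p=8.0):
    """One tracking step: fuzzify every pixel with three membership
    functions and combine them with a min-based fuzzy AND."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # 1) distinctive property: similarity to the feature's intensity
    m_feature = gaussian_mf(image.astype(float), template_intensity, sigma_i)
    # 2) smooth motion: closeness to the previous position
    m_smooth = gaussian_mf(np.hypot(ys - prev_pos[0], xs - prev_pos[1]), 0.0, sigma_d)
    # 3) expected future location: closeness to the predicted position
    m_predict = gaussian_mf(np.hypot(ys - predicted_pos[0], xs - predicted_pos[1]), 0.0, sigma_p)
    combined = np.minimum(np.minimum(m_feature, m_smooth), m_predict)
    # the pixel with the highest combined membership degree wins
    return np.unravel_index(np.argmax(combined), combined.shape)

def processing_order(features, min_priority=0.1):
    """Order features by descending priority, dropping those whose
    priority has fallen below the (illustrative) cutoff."""
    heap = [(-f["priority"], f["name"]) for f in features
            if f["priority"] >= min_priority]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]
```

A feature is then declared found at the winning pixel, and its motion model and priority are updated before the next frame.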

    HST Imaging of the Host Galaxies of High Redshift Radio-Loud Quasars

    We present rest-frame UV and Ly-alpha images of spatially-resolved structures around five high-redshift radio-loud quasars obtained with the WFPC2 camera on the Hubble Space Telescope. We find that all five quasars are extended and that this "fuzz" contains ~5-40% of the total continuum flux and 15-65% of the Ly-alpha flux within a radius of about 1.5 arcsec. The rest-frame UV luminosities of the hosts are log lambda P_lambda = 11.9 to 12.5 solar luminosities (assuming no internal dust extinction), comparable to those of luminous radio galaxies at similar redshifts and a factor of 10 higher than both radio-quiet field galaxies at z~2-3 and the most UV-luminous low-redshift starburst galaxies. The Ly-alpha luminosities of the hosts are (in the log) approximately 44.3-44.9 erg/s, which are also similar to those of luminous high-redshift radio galaxies and considerably larger than the Ly-alpha luminosities of high-redshift field galaxies. Generating the Ly-alpha luminosities of the hosts would require roughly a few percent of the total observed ionizing luminosity of the quasar. We find good alignment between the extended Ly-alpha and the radio sources, strong evidence for jet-cloud interactions in two cases, again resembling radio galaxies, and what is possibly the most luminous radio-UV synchrotron jet in one of the hosts at z=2.110.Comment: 36 pages (latex, aas macros), 3 figures (3 gif and 10 postscript files), accepted for publication in the Astrophysical Journal Supplement Series

    Orbital and stochastic far-UV variability in the nova-like system V3885 Sgr

    Highly time-resolved, time-tagged FUSE satellite spectroscopic data are analysed to establish the far-ultraviolet (FUV) absorption-line characteristics of the nova-like cataclysmic variable binary V3885 Sgr. We determine the temporal behaviour of low- (Ly_beta, CIII, NIII) and high-ionisation (SIV, PV, OVI) species, and highlight corresponding orbital-phase-modulated changes in these lines. On average the absorption troughs are blueshifted due to a low-velocity disc-wind outflow. Very rapid (~ 5 min) fluctuations in the absorption lines are isolated, which are indicative of stochastic density changes. Doppler tomograms of the FUV lines are calculated, which provide evidence for structures where a gas stream interacts with the accretion disc. We conclude that the line depth and velocity changes as a function of orbital phase are consistent with an asymmetry that has its origin in a line-emitting, localised disc-stream interaction region.Comment: Accepted for publication in MNRAS
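Orbital-phase-modulated analysis of this kind starts by folding observation times on the binary's period. A minimal sketch (the epoch and period arguments are placeholders, not V3885 Sgr's actual ephemeris):

```python
def orbital_phase(times, t0, period):
    """Fold observation times on the orbital period, returning phases
    in [0, 1). t0 is a reference epoch (e.g. inferior conjunction)."""
    return [((t - t0) / period) % 1.0 for t in times]
```

Line-depth and velocity measurements binned by these phases then reveal the modulation described above.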

    Object-based 2D-to-3D video conversion for effective stereoscopic content generation in 3D-TV applications

    Three-dimensional television (3D-TV) has gained increasing popularity in the broadcasting domain, as it enables enhanced viewing experiences in comparison to conventional two-dimensional (2D) TV. However, its application has been constrained by the lack of essential content, i.e., stereoscopic videos. To alleviate this content shortage, an economical and practical solution is to reuse the huge media resources available in monoscopic 2D and convert them to stereoscopic 3D. Although stereoscopic video can be generated from monoscopic sequences using depth measurements extracted from cues like focus blur, motion and size, the quality of the resulting video may be poor, as such measurements are usually arbitrarily defined and appear inconsistent with the real scenes. To help solve this problem, a novel method for object-based stereoscopic video generation is proposed which features i) optical-flow based occlusion reasoning in determining depth ordinal, ii) object segmentation using improved region-growing from masks of determined depth layers, and iii) a hybrid depth estimation scheme using content-based matching (inside a small library of true stereo image pairs) and depth-ordinal based regularization. Comprehensive experiments have validated the effectiveness of our proposed 2D-to-3D conversion method in generating stereoscopic videos of consistent depth measurements for 3D-TV applications.
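The final rendering step of such a 2D-to-3D pipeline can be illustrated with a toy depth-image-based rendering routine. This is a hedged sketch, not the paper's method: it assumes a normalized depth map (0 = far, 1 = near) and does none of the paper's occlusion reasoning or hole filling.

```python
import numpy as np

def render_stereo_pair(image, depth, max_disparity=8):
    """Render a left/right view pair from a 2D frame and a normalized
    per-pixel depth map by shifting each pixel horizontally by a
    disparity proportional to its depth. Pixels are written far-to-near
    so nearer surfaces correctly occlude farther ones."""
    h, w = image.shape
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    disparity = (depth * max_disparity / 2).astype(int)
    for idx in np.argsort(depth, axis=None):  # far pixels first
        y, x = divmod(int(idx), w)
        d = disparity[y, x]
        if 0 <= x + d < w:
            left[y, x + d] = image[y, x]   # near pixels shift right in the left view
        if 0 <= x - d < w:
            right[y, x - d] = image[y, x]  # and left in the right view
    return left, right
```

Viewing the left/right pair stereoscopically makes nearer pixels, which received larger disparities, appear to pop out of the screen.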

    Parallelizing tracking algorithms

    In several applications, the trajectory of an entity, a feature or an object has to be tracked over a sequence of image frames. When the processing must be performed in real time, important constraints lead to the parallelization of tracking algorithms. This paper presents the results of a concrete implementation, which deals with the particular case of simple objects moving within the field of view of the vision element (a video camera). The steps involved in developing the solution are detailed, especially in relation to their parallelization using a heterogeneous computer network and MPI (Message Passing Interface) support. Finally, the behaviour of the different algorithms is analysed and the results are assessed, which makes it possible to determine the efficiency of the parallelization and the conditions under which this solution turns out to be the best one.Facultad de InformĂĄtica
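The work division behind such a parallelization can be sketched in a few lines. This is an illustrative partitioning helper, not the paper's code: the kind of strip decomposition a master process might compute before scattering search regions to workers over MPI.

```python
def partition_region(width, height, n_workers):
    """Split a frame into horizontal strips, one per worker; remainder
    rows go to the last strip. Each strip is (x0, y0, x1, y1)."""
    base = height // n_workers
    strips = []
    for rank in range(n_workers):
        y0 = rank * base
        y1 = height if rank == n_workers - 1 else y0 + base
        strips.append((0, y0, width, y1))
    return strips
```

Each worker would then run the tracking kernel on its strip and return candidate positions to the master for merging.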

    A sub-mW IoT-endnode for always-on visual monitoring and smart triggering

    This work presents a fully-programmable Internet of Things (IoT) visual sensing node that targets sub-mW power consumption in always-on monitoring scenarios. The system features a spatial-contrast 128x64 binary pixel imager with focal-plane processing. The sensor, when working in its lowest power mode (10 ÎŒW at 10 fps), provides as output the number of changed pixels. Based on this information, a dedicated camera interface, implemented on a low-power FPGA, wakes up an ultra-low-power parallel processing unit to extract context-aware visual information. We evaluate the smart sensor on three always-on visual triggering application scenarios. Triggering accuracy comparable to RGB image sensors is achieved at nominal lighting conditions, while consuming an average power between 193 ÎŒW and 277 ÎŒW, depending on context activity. The digital sub-system is extremely flexible, thanks to a fully-programmable digital signal processing engine, but still achieves 19x lower power consumption compared to MCU-based cameras with significantly lower on-board computing capabilities.Comment: 11 pages, 9 figures, submitted to IEEE IoT Journal
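The smart-triggering logic reduces to a simple rule: the imager reports the number of changed binary pixels, and the processing unit is woken only when that count crosses a threshold. A minimal sketch (the threshold value is illustrative, not the paper's):

```python
import numpy as np

def should_wake(prev_frame, frame, threshold=50):
    """Compare two binary frames and decide whether to wake the
    processing unit. Returns (changed_pixel_count, wake_decision)."""
    changed = int(np.count_nonzero(prev_frame != frame))
    return changed, changed >= threshold
```

Keeping this comparison in the imager and FPGA, rather than on a CPU, is what lets the node stay in the sub-mW regime while idle.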

    The VIMOS Integral Field Unit: data reduction methods and quality assessment

    With new-generation spectrographs, integral field spectroscopy is becoming a widely used observational technique. The Integral Field Unit of the VIsible Multi-Object Spectrograph on the ESO-VLT samples a field as large as 54" x 54", covered by 6400 fibers coupled with micro-lenses. We present here the methods of the data-processing software developed to extract the astrophysical signal of faint sources from VIMOS IFU observations. We focus on the treatment of the fiber-to-fiber relative transmission and the sky subtraction, and on the dedicated tasks we have built to address the peculiarities and unprecedented complexity of the dataset. We review the automated process we have developed under the VIPGI data organization and reduction environment (Scodeggio et al. 2005), along with the quality control performed to validate the process. The VIPGI-IFU data-processing environment has been available to the scientific community to process VIMOS-IFU data since November 2003.Comment: 19 pages, 10 figures and 1 table. Accepted for publication in PASP
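The two reduction steps highlighted above, fiber-to-fiber relative transmission correction and sky subtraction, can be illustrated with a toy routine. This is a hedged sketch, not VIPGI's algorithm: it assumes each fiber's median flux traces its relative transmission (reasonable only when sky dominates the signal) and builds the sky spectrum as a median over sky-dedicated fibers.

```python
import numpy as np

def correct_fibers(spectra, sky_fiber_mask):
    """spectra: (n_fibers, n_pixels) array. Equalise fiber-to-fiber
    relative transmission using each fiber's median flux, then subtract
    a median sky spectrum built from the sky-dedicated fibers."""
    spectra = np.asarray(spectra, dtype=float)
    transmission = np.median(spectra, axis=1, keepdims=True)
    transmission /= np.median(transmission)   # normalise to unit typical response
    flat = spectra / transmission             # transmission-corrected spectra
    sky = np.median(flat[sky_fiber_mask], axis=0)
    return flat - sky                         # sky-subtracted spectra
```

For fibers containing only sky, this should return spectra consistent with zero, which is a useful quality-control check.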

    Vision-Based Production of Personalized Video

    In this paper we present a novel vision-based system for the automated production of personalized video souvenirs for visitors in leisure and cultural heritage venues. Visitors are visually identified and tracked through a camera network, and the system produces a personalized DVD souvenir at the end of a visitor's stay, allowing visitors to relive their experiences. We analyse how visitors are identified by fusing facial and body features, how they are tracked, how the tracker recovers from failures due to occlusions, and how the final product is annotated and compiled. Our experiments demonstrate the feasibility of the proposed approach.
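Identification by fusing facial and body features can be sketched as simple score-level fusion. This is an illustrative sketch, not the paper's method; the weights, threshold and function names are assumptions.

```python
def fuse_scores(face_score, body_score, w_face=0.7):
    """Combine face- and body-feature match scores (each in [0, 1])
    into one identity score by weighted averaging (weights illustrative)."""
    return w_face * face_score + (1 - w_face) * body_score

def identify(candidates, face_scores, body_scores, min_score=0.5):
    """Return the best-matching visitor id, or None if every fused
    score falls below the acceptance threshold."""
    best_id, best = None, min_score
    for vid in candidates:
        s = fuse_scores(face_scores[vid], body_scores[vid])
        if s > best:
            best_id, best = vid, s
    return best_id
```

Weighting the face cue more heavily reflects the usual assumption that facial features are the more discriminative of the two when visible.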

    A Primal-Dual Framework for Real-Time Dense RGB-D Scene Flow

    This paper presents the first method to compute dense scene flow in real time for RGB-D cameras. It is based on a variational formulation in which brightness constancy and geometric consistency are imposed. Taking advantage of the depth data provided by RGB-D cameras, regularization of the flow field is imposed on the 3D surface (or set of surfaces) of the observed scene rather than on the image plane, leading to more geometrically consistent results. The minimization problem is efficiently solved by a primal-dual algorithm implemented on a GPU, achieving previously unseen temporal performance. Several tests have been conducted to compare our approach with a state-of-the-art work (RGB-D flow), with quantitative and qualitative results evaluated. Moreover, an additional set of experiments has been carried out to show the applicability of our work to motion estimation in real time. The results demonstrate the accuracy of our approach, which outperforms RGB-D flow and is able to estimate heterogeneous and non-rigid motions at a high frame rate.Universidad de MĂĄlaga. Campus de Excelencia Internacional AndalucĂ­a Tech. Research supported by the Spanish Government under project DPI1011-25483 and the Spanish grant program FPI-MICINN 2012
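The primal-dual machinery can be seen on a toy problem. Below is a minimal Chambolle-Pock-style iteration for the 1-D ROF model (total-variation denoising), only an analogue of the paper's scene-flow functional, chosen because it exhibits the same structure: dual ascent with projection, a proximal primal step, and over-relaxation. All step sizes and the model itself are illustrative.

```python
import numpy as np

def rof_denoise_1d(f, lam=8.0, n_iter=200, tau=0.25, sigma=0.25):
    """Primal-dual iteration for min_u TV(u) + (lam/2)*||u - f||^2,
    with K the forward-difference operator (sigma*tau*||K||^2 < 1)."""
    f = np.asarray(f, dtype=float)
    u = f.copy()
    u_bar = u.copy()
    p = np.zeros(len(f) - 1)  # dual variable living on finite differences
    for _ in range(n_iter):
        # dual ascent then projection onto the TV dual ball |p| <= 1
        p = np.clip(p + sigma * np.diff(u_bar), -1.0, 1.0)
        # divergence of p, i.e. -K^T p, with Neumann-style boundaries
        div_p = np.concatenate(([p[0]], np.diff(p), [-p[-1]]))
        u_old = u
        # proximal step for the quadratic data term
        u = (u + tau * div_p + tau * lam * f) / (1.0 + tau * lam)
        u_bar = 2 * u - u_old  # over-relaxation
    return u
```

The paper's solver replaces this scalar TV term with regularization over the observed 3D surface and runs the analogous updates per pixel on a GPU.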
    • 
