
    Review of Person Re-identification Techniques

    Person re-identification across different surveillance cameras with disjoint fields of view has become one of the most interesting and challenging subjects in the area of intelligent video surveillance. Although several methods have been developed and proposed, certain limitations and unresolved issues remain. In all of the existing re-identification approaches, feature vectors are extracted from segmented still images or video frames, and different similarity or dissimilarity measures are applied to these vectors. Some methods use simple constant metrics, whereas others learn optimised metrics from data. Some build models based on local colour or texture information, and others build models based on the gait of people. In general, the main objective of all these approaches is to achieve higher recognition accuracy at lower computational cost. This study summarises several developments in the recent literature and discusses the various available methods used in person re-identification; specifically, their advantages and disadvantages are described and compared.
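
    As a rough illustration of the metric choices the survey discusses, the sketch below contrasts a simple constant metric (plain Euclidean distance) with a learned Mahalanobis-style metric parameterised by a matrix M. The feature vectors and M here are toy placeholders, not values taken from any surveyed method.

    import numpy as np

    def euclidean_distance(a, b):
        # Simple constant metric: straight Euclidean distance between feature vectors.
        return np.linalg.norm(a - b)

    def mahalanobis_distance(a, b, M):
        # Learned metric: distance parameterised by a positive semi-definite
        # matrix M, as produced by metric-learning approaches.
        d = a - b
        return np.sqrt(d @ M @ d)

    # Toy appearance descriptors (e.g., colour/texture histogram bins).
    gallery = np.array([0.20, 0.50, 0.30])
    probe = np.array([0.25, 0.45, 0.30])

    M = np.eye(3)  # identity recovers the Euclidean metric; metric learning would fit M from data
    print(euclidean_distance(gallery, probe))
    print(mahalanobis_distance(gallery, probe, M))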

    The analysis of polar clouds from AVHRR satellite data using pattern recognition techniques

    The cloud cover in a set of summertime and wintertime AVHRR data from the Arctic and Antarctic regions was analyzed using a pattern recognition algorithm. The data were collected by the NOAA-7 satellite on 6 to 13 Jan. and 1 to 7 Jul. 1984 between 60 deg and 90 deg north and south latitude in 5 spectral channels, at the Global Area Coverage (GAC) resolution of approximately 4 km. These data constituted a Polar Cloud Pilot Data Set, which was analyzed by a number of research groups as part of a polar cloud algorithm intercomparison study. This study was intended to determine whether the additional information contained in the AVHRR channels (beyond the standard visible and infrared bands on geostationary satellites) could be effectively utilized in cloud algorithms to resolve some of the cloud detection problems caused by low visible and thermal contrasts in the polar regions. The analysis described makes use of a pattern recognition algorithm which estimates the surface and cloud classification, cloud fraction, and surface and cloudy visible (channel 1) albedo and infrared (channel 4) brightness temperatures on a 2.5 x 2.5 deg latitude-longitude grid. In each grid box, several spectral and textural features were computed from the calibrated pixel values in the multispectral imagery and then used to classify the region into one of eighteen surface and/or cloud types using the maximum likelihood decision rule. A slightly different version of the algorithm was used for each season and hemisphere because of differences in categories and because of the lack of visible imagery during winter. The classification of the scene is used to specify the optimal AVHRR channel for separating clear and cloudy pixels using a hybrid histogram-spatial coherence method. This method estimates values for cloud fraction, clear and cloudy albedos, and brightness temperatures in each grid box. The choice of a class-dependent AVHRR channel allows for better separation of clear and cloudy pixels than does a global choice of a visible and/or infrared threshold. The classification also prevents erroneous estimates of large fractional cloudiness in areas of cloud-free snow and sea ice. The hybrid histogram-spatial coherence technique and the advantages of first classifying a scene in the polar regions are detailed. The complete Polar Cloud Pilot Data Set was analyzed, and the results are presented and discussed.
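
    To make the decision rule concrete, the minimal sketch below applies a Gaussian maximum likelihood classifier to a two-feature grid box (channel-1 albedo and channel-4 brightness temperature). The three classes and their statistics are invented stand-ins for the eighteen surface and/or cloud types used in the study.

    import numpy as np

    # Hypothetical per-class Gaussian models (mean, covariance), as would be
    # fitted from labelled training grid boxes.
    classes = {
        "open_water": (np.array([0.06, 265.0]), np.diag([0.01, 9.0])),
        "sea_ice": (np.array([0.60, 255.0]), np.diag([0.02, 16.0])),
        "cloud": (np.array([0.45, 240.0]), np.diag([0.04, 25.0])),
    }

    def classify_ml(x):
        # Maximum likelihood decision rule: pick the class whose Gaussian
        # log-density is largest for the feature vector x.
        best, best_ll = None, -np.inf
        for name, (mu, cov) in classes.items():
            d = x - mu
            ll = -0.5 * (d @ np.linalg.inv(cov) @ d) - 0.5 * np.log(np.linalg.det(cov))
            if ll > best_ll:
                best, best_ll = name, ll
        return best

    print(classify_ml(np.array([0.50, 242.0])))  # -> "cloud"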

    Face recognition using principal component analysis

    Current face recognition methods use linear techniques to extract features, so potentially valuable nonlinear features are lost. Using a kernel to extract nonlinear features should lead to better feature extraction and, therefore, lower error rates. Kernel Principal Component Analysis (KPCA) will be used as the method for nonlinear feature extraction, and will be compared with well-known linear methods such as correlation, Eigenfaces, and Fisherfaces.
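
    As a minimal sketch of the comparison described above, the snippet below runs scikit-learn's linear PCA (the basis of Eigenfaces) and KernelPCA with an RBF kernel on random stand-in data; the face images and parameter choices are assumptions, not those of the thesis.

    import numpy as np
    from sklearn.decomposition import PCA, KernelPCA

    # Stand-in for a face dataset: each row is a flattened grayscale image.
    rng = np.random.default_rng(0)
    X = rng.random((40, 64 * 64))

    # Linear PCA keeps only linear structure in the images.
    linear_features = PCA(n_components=10).fit_transform(X)

    # Kernel PCA performs PCA in an implicit nonlinear feature space, so
    # nonlinear structure can be captured as well.
    kpca = KernelPCA(n_components=10, kernel="rbf", gamma=1e-4)
    nonlinear_features = kpca.fit_transform(X)

    print(linear_features.shape, nonlinear_features.shape)  # (40, 10) (40, 10)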

    Digits Recognition on Medical Device

    With the rapid development of mobile health, mechanisms for automatic data input are becoming increasingly important for mobile health apps. In these apps, users are often required to input data frequently, especially numbers, from medical devices such as glucometers and blood pressure meters. However, these simple tasks are tedious and prone to error. Even though some Bluetooth devices can make these input operations easier, they are not popular enough, being expensive and requiring complicated protocol support. Therefore, we propose an automatic procedure to recognize the digits on the screen of medical devices with smartphone cameras. The whole procedure includes several “standard” components in computer vision: image enhancement, region-of-interest detection, and text recognition. Previous work exists for each component, but it has various weaknesses that lead to a low recognition rate. We propose several novel enhancements in each component. Experimental results show that our enhanced procedure raises the recognition rate from 6.2%, obtained by applying optical character recognition directly, to 62.1%. This procedure can be adopted (with human verification) to recognize the digits on the screen of medical devices with smartphone cameras.
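
    The three-stage pipeline named above can be sketched with off-the-shelf OpenCV and Tesseract components. This is only a baseline under stated assumptions: the paper's own enhancements are not reproduced, and the file name, CLAHE parameters, and largest-bright-region heuristic are illustrative choices.

    import cv2
    import pytesseract

    def read_device_digits(path):
        # Stage order follows the abstract: enhancement -> ROI detection -> text recognition.
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)

        # 1. Image enhancement: boost local contrast of the device screen.
        enhanced = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)

        # 2. Region-of-interest detection: crude heuristic that takes the
        #    largest bright region as the display area.
        _, binary = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        roi = enhanced[y:y + h, x:x + w]

        # 3. Text recognition: OCR restricted to digits on a single line.
        return pytesseract.image_to_string(
            roi, config="--psm 7 -c tessedit_char_whitelist=0123456789")

    print(read_device_digits("glucometer.jpg"))  # hypothetical input image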

    HQR-Scheme: A High Quality and Resilient Virtual Primary Key Generation Approach for Watermarking Relational Data

    Most watermarking techniques designed to protect relational data use the Primary Key (PK) of relations to perform watermark synchronization. Although this offers high confidence in watermark detection, these approaches become useless if the PK is erased or updated. A typical example is when an attacker wishes to use a stolen relation, unlinked from the rest of the database. In that case, the original values of the PK lose relevance, since they are no longer employed to check referential integrity. The attacker can then erase or replace the PK, compromising watermark detection without modifying the rest of the data at all. To avoid the problems caused by PK dependency, some schemes have been proposed that generate Virtual Primary Keys (VPKs) to be used instead. Nevertheless, the quality of a watermark synchronized using VPKs is compromised by the presence of duplicate values in the set of VPKs and by the fragility of VPK schemes against the elimination of attributes. In this paper, we introduce metrics that allow the quality of the VPKs generated by any scheme to be measured precisely without performing the watermark embedding, so that time is not wasted when a low-quality VPK set is detected. We also analyze the main aspects of designing an ideal VPK scheme, seeking the generation of high-quality VPK sets while adding robustness to the process. Finally, a new scheme is presented, along with experiments carried out to validate it and compare its results with those of the other schemes proposed in the literature.
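
    To make the VPK idea concrete, here is a generic construction (not the HQR-Scheme itself): hash the leading characters of a few attribute values together with a secret key, so that tuples can be re-identified for watermark synchronization even after the real PK has been removed. The names and parameters are hypothetical.

    import hashlib

    def virtual_primary_key(tuple_values, secret_key, n_attrs=2):
        # Generic VPK construction: combine the "most significant" part of the
        # first n_attrs attribute values with a secret key and hash the result.
        selected = [str(v)[:4] for v in tuple_values[:n_attrs]]
        digest = hashlib.sha256((secret_key + "|".join(selected)).encode()).hexdigest()
        return digest[:16]

    row = ("Alice", 37, 54200.0)  # a tuple from a stolen relation, PK removed
    print(virtual_primary_key(row, secret_key="K3y"))

    Note that this toy version exhibits exactly the weaknesses discussed above: two tuples that agree on the leading characters of the selected attributes produce duplicate VPKs, and eliminating one of the selected attributes changes every key.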