
    Geometrical Calibration of X-Ray Imaging With RGB Cameras for 3D Reconstruction

    We present a methodology to recover the geometrical calibration of conventional X-ray settings with the help of an ordinary video camera and visible fiducials that are present in the scene. After calibration, equivalent points of interest can be easily identified with the help of the epipolar geometry. The same procedure also allows the measurement of real anatomic lengths and angles and obtains accurate 3D locations from image points. Our approach completely eliminates the need for X-ray-opaque reference marks (and the necessary supporting frames), which can sometimes be invasive for the patient, occlude the radiographic picture, and end up projected outside the imaging sensor area in oblique protocols. Two possible frameworks are envisioned: an X-ray anode that shifts spatially around the patient/object, and a patient that moves/rotates while the imaging system remains fixed. As a proof of concept, experiments with a device under test (DUT), an anthropomorphic phantom, and a real brachytherapy session have been carried out. The results show that it is possible to identify common points with a proper level of accuracy and retrieve three-dimensional locations, lengths, and shapes with a millimetric level of precision. The presented approach is simple, compatible with both current and legacy widespread diagnostic X-ray imaging deployments, and can represent a good and inexpensive alternative to other radiological modalities such as CT.

    This work was carried out with the support of Information Storage S.L., University of Valencia (grant #CPI-15-170), CSD2007-00042 Consolider Ingenio CPAN (grant #CPAN13-TR01), and the Spanish Ministry of Industry, Energy and Tourism (grant #TSI-100101-2013-019).

    Albiol Colomer, F.; Corbi, A.; Albiol Colomer, A. (2016). Geometrical Calibration of X-Ray Imaging With RGB Cameras for 3D Reconstruction. IEEE Transactions on Medical Imaging. 35(8):1952-1961. https://doi.org/10.1109/TMI.2016.2540929
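    The 3D reconstruction step described above combines two calibrated views through their projection matrices. The sketch below shows a generic linear triangulation of one point from two such views; the intrinsic matrix, poses, and point coordinates are synthetic assumptions for illustration and not the paper's actual setup.

```python
# Minimal triangulation sketch, assuming the calibration stage already produced the
# 3x4 projection matrices P1 and P2 of two X-ray exposures. All numbers are synthetic.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT-style) triangulation of one point seen in two views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous solution = right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic check: a point at (10, 20, 300) mm seen by two poses sharing intrinsics K.
K = np.array([[1500.0, 0, 512], [0, 1500.0, 512], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-50.0], [0.0], [0.0]])])
X_true = np.array([10.0, 20.0, 300.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))   # approximately [10. 20. 300.]
```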

    Evaluation of modern camera calibration techniques for conventional diagnostic X-ray imaging settings

    [EN] We explore three different alternatives for obtaining intrinsic and extrinsic parameters in conventional diagnostic X-ray frameworks: the direct linear transform (DLT), the Zhang method, and the Tsai approach. We analyze and describe the computational, operational, and mathematical background differences between these algorithms when they are applied to ordinary radiograph acquisition. For our study, we developed an initial 3D calibration frame with tin cross-shaped fiducials at specific locations. The three studied methods enable the derivation of projection matrices from 3D-to-2D point correspondences. We propose a set of metrics to compare the efficiency of each technique. One of these metrics consists of the calculation of the detector pixel density, which can also be included as part of the quality control sequence in general X-ray settings. The results show a clear superiority of the DLT approach, both in accuracy and operational suitability. We paid special attention to the Zhang calibration method. Although this technique has been extensively implemented in the field of computer vision, it has rarely been tested in depth in common radiograph production scenarios. Zhang's approach can operate on much simpler and more affordable 2D calibration frames, which were also tested in our research. We experimentally confirm that even three or four plane-image correspondences achieve accurate focal lengths.

    This work was carried out with the support of Information Storage S.L., University of Valencia (Grant #CPI-15170), CSD2007-00042 Consolider Ingenio CPAN (Grant #CPAN13TR01), the Spanish Ministry of Industry, Energy and Tourism (Grant #TSI-100101-2013-019), IFIC (Severo Ochoa Centre of Excellence #SEV-2014-0398), and Dr. Bellot's medical clinic.

    Albiol Colomer, F.; Corbi, A.; Albiol Colomer, A. (2017). Evaluation of modern camera calibration techniques for conventional diagnostic X-ray imaging settings. Radiological Physics and Technology. 10(1):68-81. https://doi.org/10.1007/s12194-016-0369-y
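    Since the DLT emerged as the most accurate option in this comparison, a generic version of that step is sketched below: a 3x4 projection matrix is estimated from 3D-to-2D fiducial correspondences and checked against a known synthetic matrix. The fiducial layout, intrinsics, and function names are illustrative assumptions rather than the calibration frame used in the study.

```python
# DLT calibration sketch, assuming 3D fiducial positions and their detected 2D
# projections are available. The synthetic intrinsics and fiducial layout are
# placeholders for a real calibration frame.
import numpy as np

def dlt_projection_matrix(points_3d, points_2d):
    """Estimate a 3x4 projection matrix P from >= 6 world-to-image correspondences."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

# Synthetic check: project random fiducials with a known P and recover it.
rng = np.random.default_rng(0)
K = np.array([[1400.0, 0, 600], [0, 1400.0, 600], [0, 0, 1]])
P_true = K @ np.hstack([np.eye(3), np.array([[0.0], [0.0], [1000.0]])])
pts3d = rng.uniform(-100, 100, size=(8, 3))
proj = (P_true @ np.hstack([pts3d, np.ones((8, 1))]).T).T
pts2d = proj[:, :2] / proj[:, 2:]
P_est = dlt_projection_matrix(pts3d, pts2d)
P_est *= P_true[2, 3] / P_est[2, 3]            # remove the arbitrary overall scale
print(np.allclose(P_est, P_true, atol=1e-6))   # True
```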

    3D measurements in conventional X-ray imaging with RGB-D sensors

    [EN] A method for deriving 3D internal information in conventional X-ray settings is presented. It is based on the combination of a pair of radiographs from a patient, and it avoids the use of X-ray-opaque fiducials and external reference structures. To achieve this goal, we augment an ordinary X-ray device with a consumer RGB-D camera. The patient's rotation around the craniocaudal axis is tracked relative to this camera thanks to the depth information provided and the application of a modern surface-mapping algorithm. The measured spatial information is then translated to the reference frame of the X-ray imaging system. By using the intrinsic parameters of the diagnostic equipment, epipolar geometry, and X-ray images of the patient at different angles, 3D internal positions can be obtained. Both the RGB-D and X-ray instruments are first geometrically calibrated to find their joint spatial transformation. The proposed method is applied to three rotating phantoms. The first two consist of an anthropomorphic head and a torso, which are filled with spherical lead bearings at precise locations. The third one is made of simple foam and has metal needles of several known lengths embedded in it. The results show that it is possible to resolve anatomical positions and lengths with a millimetric level of precision. With the proposed approach, internal 3D reconstructed coordinates and distances can be provided to the physician. It also contributes to reducing the invasiveness of ordinary X-ray environments and can replace other types of clinical explorations that are mainly aimed at measuring or geometrically relating elements that are present inside the patient's body.

    The authors would like to thank the Radiation Oncology Department of the Physics Section at La Fe Hospital for the anthropomorphic phantom used in this work and Jose Manuel Monserrate (Instituto de Física Corpuscular) for his contribution to the development of the calibration frame shown in Fig. 3. This research has the support of Information Storage S.L., University of Valencia (grant CPI-15-170), CSD-2007-00042 Consolider Ingenio CPAN (grant CPAN-13TR01), IFIC (Severo Ochoa Centre of Excellence SEV20140398), as well as the support of the Spanish Ministry of Industry, Energy, and Tourism (grant TSI1001012013019).

    Albiol Colomer, F.; Corbi, A.; Albiol Colomer, A. (2017). 3D measurements in conventional X-ray imaging with RGB-D sensors. Medical Engineering & Physics. 42:73-79. https://doi.org/10.1016/j.medengphy.2017.01.024
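    A key step above is translating the motion tracked by the RGB-D camera into the reference frame of the X-ray system through their joint calibration. A minimal sketch of that frame change is given below, assuming a hypothetical 4x4 transform T_xray_from_rgbd is already known; all numbers are illustrative only.

```python
# Frame-change sketch, assuming the joint RGB-D / X-ray calibration already produced
# a 4x4 transform T_xray_from_rgbd. The patient rotation measured by the RGB-D sensor
# is re-expressed in the X-ray frame by conjugation. All numbers are illustrative.
import numpy as np

def rigid(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

# Patient rotation about the craniocaudal (here: z) axis, tracked by the RGB-D camera.
theta = np.deg2rad(20.0)
R_patient = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                      [np.sin(theta),  np.cos(theta), 0.0],
                      [0.0,            0.0,           1.0]])
M_rgbd = rigid(R_patient, np.zeros(3))

# Assumed joint calibration: X-ray coordinates = T_xray_from_rgbd @ RGB-D coordinates.
T_xray_from_rgbd = rigid(np.eye(3), np.array([150.0, -20.0, 800.0]))

# The same patient motion expressed in the X-ray reference frame.
M_xray = T_xray_from_rgbd @ M_rgbd @ np.linalg.inv(T_xray_from_rgbd)
print(np.round(M_xray, 3))
```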

    Fast 3D Rotation Estimation of Fruits Using Spheroid Models

    [EN] Automated fruit inspection using cameras involves the analysis of a collection of views of the same fruit obtained by rotating the fruit while it is transported. Conventionally, each view is analyzed independently. However, in order to obtain a global score of the fruit quality, it is necessary to match the defects between adjacent views to prevent counting them more than once and to ensure that the whole surface has been examined. To accomplish this goal, this paper estimates the 3D rotation undergone by the fruit using a single camera. A 3D model of the fruit geometry is needed to estimate the rotation. This paper proposes to model the fruit shape as a 3D spheroid. The spheroid size and pose in each view are estimated from the silhouettes of all views. Once the geometric model has been fitted, a single 3D rotation for each view transition is estimated. Once all rotations have been estimated, it is possible to use them to propagate defects to neighboring views or even to build a topographic map of the whole fruit surface, thus opening the possibility of analyzing a single image (the map) instead of a collection of individual views. A large effort was made to make this method as fast as possible. Execution times are under 0.5 ms to estimate each 3D rotation on a standard Intel i7 CPU using a single core.

    Albiol Colomer, AJ.; Albiol Colomer, A.; Sánchez De Merás, C. (2021). Fast 3D Rotation Estimation of Fruits Using Spheroid Models. Sensors. 21(6):1-24. https://doi.org/10.3390/s21062232
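    The silhouette-based fitting mentioned above can be pictured as fitting an ellipse (the projection of the spheroid) to each view's silhouette. The sketch below shows one possible version of that step with OpenCV; the Otsu threshold, dark-background assumption, and synthetic test image are illustrative choices, not the paper's pipeline.

```python
# Silhouette-to-ellipse sketch (OpenCV 4.x API). The dark background and Otsu
# thresholding are simplifying assumptions; the synthetic image stands in for a
# real camera view of a fruit.
import cv2
import numpy as np

def fit_silhouette_ellipse(view_bgr):
    """Return ((cx, cy), (axis1, axis2), angle_deg) of the largest bright blob."""
    gray = cv2.cvtColor(view_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    largest = max(contours, key=cv2.contourArea)
    return cv2.fitEllipse(largest)        # needs at least 5 contour points

# Synthetic test view: a bright tilted ellipse ("fruit") on a dark background.
view = np.zeros((480, 640, 3), np.uint8)
cv2.ellipse(view, (320, 240), (180, 120), 30, 0, 360, (0, 160, 255), -1)
centre, axes, angle = fit_silhouette_ellipse(view)
print(centre, axes, angle)   # centre ~ (320, 240); full axis lengths ~ 240 and 360
```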

    Precise eye localization using HOG descriptors

    In this paper, we present a novel algorithm for precise eye detection. First, a couple of AdaBoost classifiers trained with Haar-like features are used to preselect possible eye locations. Then, a Support Vector Machine that uses Histogram of Oriented Gradients (HOG) descriptors is used to obtain the best pair of eyes among all possible combinations of preselected eyes. Finally, we compare the eye detection results with three state-of-the-art works and a commercial software package. The results show that our algorithm achieves the highest accuracy on the FERET and FRGCv1 databases, which is the most complete comparison presented so far.

    This work has been partially supported by the grant TEC2009-09146 of the Spanish Government.

    Monzó Ferrer, D.; Albiol Colomer, A.; Sastre, J.; Albiol Colomer, AJ. (2011). Precise eye localization using HOG descriptors. Machine Vision and Applications. 22(3):471-480. https://doi.org/10.1007/s00138-010-0273-0
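    To make the HOG-plus-SVM pairing described above concrete, the following sketch computes HOG descriptors on candidate eye patches and scores them with a linear SVM. The patch size, HOG parameters, and the toy random training set are assumptions for illustration and not the configuration used in the paper.

```python
# HOG + linear SVM sketch. Patch size, HOG parameters, and the toy random training
# data are placeholders; real use would train on labelled eye / non-eye crops coming
# from the AdaBoost preselection stage.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

PATCH = (32, 32)   # assumed candidate patch size

def hog_descriptor(patch):
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

# Toy training set: random arrays standing in for real eye / non-eye patches.
rng = np.random.default_rng(1)
eyes = [rng.random(PATCH) for _ in range(20)]
not_eyes = [rng.random(PATCH) * 0.2 for _ in range(20)]
X = np.array([hog_descriptor(p) for p in eyes + not_eyes])
y = np.array([1] * len(eyes) + [0] * len(not_eyes))
clf = SVC(kernel="linear").fit(X, y)

# Score a new candidate patch produced by the preselection stage (not shown).
candidate = rng.random(PATCH)
print("candidate SVM score:", clf.decision_function([hog_descriptor(candidate)])[0])
```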

    Design of a Remote Signal Processing Student Lab

    [EN] We describe our experience of introducing digital signal processing (DSP) concepts via a software-defined radio project using a very inexpensive TV USB capture dongle. Through a series of weekly lab exercises, the students learned and applied DSP concepts to design a completely digital FM receiver. The proposed lab experience introduced concepts such as sampling, IQ signal representation, sample rate conversion, filter design, filter delays, and more, all with an attractive learn-by-doing approach. The course was first offered in Fall 2014 and has been repeated with growing success ever since. Our experience can serve as a proof of concept of the possibility of carrying out, in a massive open online course-like fashion, certain engineering labs that require inexpensive and readily available hardware components.

    This work was supported by the Universidad Internacional de la Rioja through the Research Institute for Innovation and Technology in Education.

    Albiol Colomer, A.; Corbi, A.; Burgos, D. (2017). Design of a Remote Signal Processing Student Lab. IEEE Access. 5:16068-16076. https://doi.org/10.1109/ACCESS.2017.2736165
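    One of the central lab exercises is the FM discriminator that turns IQ samples into audio. A compact sketch of that idea follows, using a synthetic FM signal in place of the samples captured with the TV USB dongle; the sample rate and deviation are illustrative values.

```python
# FM discriminator sketch: recover audio from complex IQ samples by differentiating
# the instantaneous phase. The synthetic tone replaces samples from the TV dongle.
import numpy as np

fs = 250_000                                   # assumed IQ sample rate (Hz)
t = np.arange(0, 0.05, 1 / fs)
audio = np.sin(2 * np.pi * 1_000 * t)          # 1 kHz test tone
k_f = 75_000                                   # assumed frequency deviation (Hz)
phase = 2 * np.pi * k_f * np.cumsum(audio) / fs
iq = np.exp(1j * phase)                        # baseband FM signal

demod = np.angle(iq[1:] * np.conj(iq[:-1]))    # phase difference between samples
recovered = demod * fs / (2 * np.pi * k_f)     # rescale back to audio amplitude
print(np.allclose(recovered, audio[1:], atol=1e-6))   # True: the tone is recovered
```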

    A portable geometry-independent tomographic system for gamma-ray, a next generation of nuclear waste characterization

    One of the main activities of the nuclear industry is the characterisation of radioactive waste based on the detection of gamma radiation. Large volumes of radioactive waste are classified according to their average activity, but the radioactivity often exceeds the maximum allowed by regulators in specific parts of the bulk. In addition, the detection of the radiation is currently based on static detection systems where the geometry of the bulk is fixed and well known. Furthermore, these systems are not portable and depend on the transport of waste to the places where the detection systems are located. However, there are situations where the geometry varies and where moving waste is complex. This is especially true in compromised situations.

    We present a new model for nuclear waste management based on a portable and geometry-independent tomographic system for three-dimensional image reconstruction in gamma radiation detection. The system relies on the combination of a gamma radiation camera and a visible camera that allows radioactivity to be visualised using augmented reality and computer vision techniques. This novel tomographic system has the potential to be a disruptive innovation in the nuclear industry for nuclear waste management.
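    The augmented-reality visualisation described above amounts to blending a gamma-activity map over the visible-camera image once the two cameras are jointly calibrated. The sketch below shows a hypothetical overlay step; the resolution, colour map, and synthetic data are assumptions, and the actual system's reconstruction and registration are not reproduced here.

```python
# Hypothetical overlay of a coarse gamma-activity map on the visible-camera image.
# Alignment between the two cameras (their joint calibration) is assumed already done.
import cv2
import numpy as np

def overlay_activity(visible_bgr, activity, alpha=0.5):
    """Blend a low-resolution activity map (float array) over a BGR image."""
    act = cv2.resize(activity, visible_bgr.shape[1::-1], interpolation=cv2.INTER_CUBIC)
    act = cv2.normalize(act, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    heat = cv2.applyColorMap(act, cv2.COLORMAP_JET)
    return cv2.addWeighted(visible_bgr, 1 - alpha, heat, alpha, 0)

# Synthetic example: a single hot spot in an 8x8 gamma image over a grey scene.
scene = np.full((480, 640, 3), 90, np.uint8)
gamma = np.zeros((8, 8), np.float32)
gamma[3, 5] = 1.0                      # localized source
cv2.imwrite("overlay.png", overlay_activity(scene, gamma))
```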

    Single Fusion Image from Collections of Fruit Views for Defect Detection and Classification

    [EN] Quality assessment is one of the most common processes in the agri-food industry. Typically, this task involves the analysis of multiple views of the fruit. Generally speaking, analyzing these single views is a highly time-consuming operation. Moreover, there is usually significant overlap between consecutive views, so it might be necessary to provide a mechanism to cope with the redundancy and prevent multiple counting of defect points. This paper presents a method to create surface maps of fruit from collections of views obtained while the piece is rotating. This single image map combines the information contained in the views, thus reducing the number of analysis operations and avoiding possible miscounts in the number of defects. After assigning each piece a simple geometrical model, 3D rotation between consecutive views is estimated only from the captured images, without any further need for sensors or information about the conveyor. The fact that rotation is estimated directly from the views makes this novel methodology readily usable in high-throughput industrial inspection machines without any special hardware modification. As proof of this technique's usefulness, an application is shown where maps have been used as input to a CNN to classify oranges into different categories.

    Albiol Colomer, AJ.; Sánchez De-Merás, CJ.; Albiol Colomer, A.; Hinojosa, S. (2022). Single Fusion Image from Collections of Fruit Views for Defect Detection and Classification. Sensors. 22(14):1-14. https://doi.org/10.3390/s22145452
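    As a rough illustration of the final classification stage mentioned above, the sketch below defines a small CNN that takes a fused surface map and predicts a quality category. The architecture, map resolution, and number of classes are assumptions, not the network used in the paper.

```python
# Small CNN sketch (PyTorch) classifying fused surface maps into quality categories.
# Architecture, input size, and number of classes are assumptions for illustration.
import torch
import torch.nn as nn

class MapClassifier(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                  # x: (batch, 3, H, W) fused maps
        return self.classifier(self.features(x).flatten(1))

model = MapClassifier()
fused_maps = torch.rand(2, 3, 128, 256)    # two dummy latitude x longitude maps
print(model(fused_maps).shape)             # torch.Size([2, 4])
```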

    Detection of Parked Vehicles using Spatio-temporal Maps

    This paper presents a video-based approach to detect the presence of parked vehicles in street lanes. Potential applications include the detection of illegally and double-parked vehicles in urban scenarios and incident detection on roads. The technique extracts information from low-level feature points (Harris corners) to create spatiotemporal maps that describe what is happening in the scene. The method neither relies on background subtraction nor performs any form of object tracking. The system has been evaluated using private and public data sets and has proven to be robust against common difficulties found in closed-circuit television video, such as varying illumination, camera vibration, momentary occlusion by other vehicles, and high noise levels.

    This work was supported by the Spanish Government project Movilidad y automocion en Redes de Transporte Avanzadas (MARTA) under the Consorcios Estrategicos Nacionales de Investigacion Tecnologica (CENIT) program and by the Comision Interministerial de Ciencia y Tecnologia (CICYT) under Contract TEC2009-09146.

    Albiol Colomer, AJ.; Sanchis Pastor, L.; Albiol Colomer, A.; Mossi García, JM. (2011). Detection of Parked Vehicles using Spatio-temporal Maps. IEEE Transactions on Intelligent Transportation Systems. 12(4):1277-1291. https://doi.org/10.1109/TITS.2011.2156791
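    The low-level cue behind the spatiotemporal maps is the density of Harris corners inside a monitored lane region over time. The sketch below counts such corners per frame in a region of interest and flags a persistent presence; the region, thresholds, and decision rule are illustrative assumptions and do not reproduce the paper's map construction.

```python
# Harris-corner counting sketch over a parking region of interest. The ROI, the
# thresholds, and the "streak" decision rule are illustrative assumptions.
import cv2
import numpy as np

def corner_count(gray_roi, max_corners=200, quality=0.01):
    corners = cv2.goodFeaturesToTrack(gray_roi, max_corners, quality,
                                      minDistance=5, useHarrisDetector=True)
    return 0 if corners is None else len(corners)

def parked_vehicle_profile(frames, roi, min_corners=30, min_frames=50):
    """frames: iterable of BGR images; roi: (x, y, w, h).
    Returns per-frame corner counts and a crude 'parked' flag that fires when the
    count stays above min_corners for min_frames consecutive frames."""
    x, y, w, h = roi
    counts, streak = [], 0
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)[y:y + h, x:x + w]
        c = corner_count(gray)
        counts.append(c)
        streak = streak + 1 if c >= min_corners else 0
    return counts, streak >= min_frames

# Usage with synthetic frames standing in for CCTV video.
rng = np.random.default_rng(2)
frames = [rng.integers(0, 255, (480, 640, 3), dtype=np.uint8) for _ in range(10)]
print(parked_vehicle_profile(frames, roi=(100, 200, 200, 80)))
```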

    Using latent features for short-term person re-identification with RGB-D cameras

    This paper presents a system for people re-identification in uncontrolled scenarios using RGB-depth cameras. Compared to conventional RGB cameras, the use of depth information greatly simplifies the tasks of segmentation and tracking. In a previous work, we proposed a similar architecture where people were characterized using color-based descriptors that we named bodyprints. In this work, we propose the use of latent feature models to extract more relevant information from the bodyprint descriptors by reducing their dimensionality. Latent features can also cope with missing data in case of occlusions. Different probabilistic latent feature models, such as probabilistic principal component analysis and factor analysis, are compared in the paper. The main difference between the models is how the observation noise is handled in each case. Re-identification experiments have been conducted in a real store where people behaved naturally. The results show that the use of the latent features significantly improves the re-identification rates compared to state-of-the-art works.

    The work presented in this paper has been funded by the Spanish Ministry of Science and Technology under the CICYT contract TEVISMART, TEC2009-09146.

    Oliver Moll, J.; Albiol Colomer, A.; Albiol Colomer, AJ.; Mossi García, JM. (2016). Using latent features for short-term person re-identification with RGB-D cameras. Pattern Analysis and Applications. 19(2):549-561. https://doi.org/10.1007/s10044-015-0489-8
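    The latent-feature idea above can be illustrated by projecting bodyprint descriptors to a low-dimensional space and matching by distance there. The sketch below uses scikit-learn's PCA as a stand-in for the probabilistic models compared in the paper; descriptor size, latent dimension, and the random data are assumptions.

```python
# Latent-feature matching sketch: scikit-learn PCA stands in for the probabilistic
# PCA / factor analysis models compared in the paper. All data here is random.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
gallery = rng.random((50, 300))          # 50 stored bodyprints (assumed 300-D)
probe = gallery[17] + 0.05 * rng.standard_normal(300)   # noisy re-observation of person 17

pca = PCA(n_components=20).fit(gallery)  # learn the latent space from the gallery
gallery_z = pca.transform(gallery)
probe_z = pca.transform(probe[None, :])

dists = np.linalg.norm(gallery_z - probe_z, axis=1)
print("best match:", np.argmin(dists))   # expected: 17
```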