110 research outputs found
Geometrical Calibration of X-Ray Imaging With RGB Cameras for 3D Reconstruction
We present a methodology to recover the geometrical calibration of conventional X-ray settings with the help of an ordinary video camera and visible fiducials present in the scene. After calibration, equivalent points of interest can be easily identified with the help of epipolar geometry. The same procedure also allows the measurement of real anatomical lengths and angles and yields accurate 3D locations from image points. Our approach completely eliminates the need for X-ray-opaque reference marks (and the frames needed to support them), which can be invasive for the patient, occlude the radiographic picture, and end up projected outside the imaging sensor area in oblique protocols. Two possible frameworks are envisioned: an X-ray anode that shifts spatially around the patient/object, and a patient who moves/rotates while the imaging system remains fixed. As a proof of concept, experiments with a device under test (DUT), an anthropomorphic phantom, and a real brachytherapy session were carried out. The results show that it is possible to identify common points with a proper level of accuracy and retrieve three-dimensional locations, lengths, and shapes with millimetric precision.
The presented approach is simple and compatible with both current and legacy diagnostic X-ray deployments, and it can represent an inexpensive alternative to other radiological modalities such as CT.

This work was carried out with the support of Information Storage S.L., University of Valencia (grant #CPI-15-170), CSD2007-00042 Consolider Ingenio CPAN (grant #CPAN13-TR01), and the Spanish Ministry of Industry, Energy and Tourism (grant TSI-100101-2013-019).

Albiol Colomer, F.; Corbi, A.; Albiol Colomer, A. (2016). Geometrical Calibration of X-Ray Imaging With RGB Cameras for 3D Reconstruction. IEEE Transactions on Medical Imaging. 35(8):1952-1961. https://doi.org/10.1109/TMI.2016.2540929
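The abstract's key step, recovering a 3D location from two calibrated X-ray views via epipolar geometry, can be sketched with classical linear triangulation. This is an illustration of the general technique, not the paper's implementation; the projection matrices and point below are synthetic.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates
    of the same point in each image. Each view contributes two rows of
    a homogeneous system A X = 0, solved by SVD."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenise

# Synthetic check: two poses observing a known 3D point.
K = np.diag([1000.0, 1000.0, 1.0])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
R = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0]])          # 90 degrees about the Y axis
t = np.array([[0.0], [0.0], [2.0]])
P2 = K @ np.hstack([R, t])
X_true = np.array([0.1, -0.2, 3.0])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_hat, X_true, atol=1e-6))  # True
```

With noise-free correspondences the recovered point is exact up to floating-point error; with real detections, the same linear solve gives the least-squares estimate.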
Evaluation of modern camera calibration techniques for conventional diagnostic X-ray imaging settings
We explore three alternatives for obtaining intrinsic and extrinsic parameters in conventional diagnostic X-ray frameworks: the direct linear transform (DLT), the Zhang method, and the Tsai approach. We analyze and describe the computational, operational, and mathematical differences between these algorithms when they are applied to ordinary radiograph acquisition. For our study, we developed an initial 3D calibration frame with tin cross-shaped fiducials at specific locations. The three studied methods enable the derivation of projection matrices from 3D-to-2D point correspondences. We propose a set of metrics to compare the efficiency of each technique. One of these metrics is the detector pixel density, which can also be included as part of the quality-control sequence in general X-ray settings. The results show a clear superiority of the DLT approach, both in accuracy and in operational suitability. We paid special attention to the Zhang calibration method. Although this technique has been extensively implemented in the field of computer vision, it has rarely been tested in depth in common radiograph-production scenarios. Zhang's approach can operate on much simpler and more affordable 2D calibration frames, which were also tested in our research. We experimentally confirm that even three or four plane-image correspondences achieve accurate focal lengths.

This work was carried out with the support of Information Storage S.L., University of Valencia (Grant #CPI-15-170), CSD2007-00042 Consolider Ingenio CPAN (Grant #CPAN13-TR01), the Spanish Ministry of Industry, Energy and Tourism (Grant #TSI-100101-2013-019), IFIC (Severo Ochoa Centre of Excellence #SEV-2014-0398), and Dr. Bellot's medical clinic.

Albiol Colomer, F.; Corbi, A.; Albiol Colomer, A. (2017). Evaluation of modern camera calibration techniques for conventional diagnostic X-ray imaging settings. Radiological Physics and Technology. 10(1):68-81.
https://doi.org/10.1007/s12194-016-0369-y
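The DLT method that the study found superior fits a 3x4 projection matrix directly from 3D-to-2D point correspondences. A minimal sketch of that idea, with a synthetic camera rather than the paper's calibration frame:

```python
import numpy as np

def dlt_projection_matrix(X3d, x2d):
    """Estimate a 3x4 projection matrix P from n >= 6 non-coplanar 3D
    points X3d (n, 3) and their 2D images x2d (n, 2). Each
    correspondence yields two rows of a homogeneous system A p = 0,
    solved by SVD; P is recovered up to scale."""
    rows = []
    for (X, Y, Z), (u, v) in zip(X3d, x2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 4)

# Synthetic check: recover a known projection matrix from 8 points.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P_true = K @ np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])
rng = np.random.default_rng(0)
X3d = rng.uniform(-1.0, 1.0, size=(8, 3))
proj = (P_true @ np.hstack([X3d, np.ones((8, 1))]).T).T
x2d = proj[:, :2] / proj[:, 2:3]
P_hat = dlt_projection_matrix(X3d, x2d)
P_hat *= P_true[2, 3] / P_hat[2, 3]           # fix the arbitrary scale
print(np.allclose(P_hat, P_true, atol=1e-6))  # True
```

Intrinsic and extrinsic parameters can then be separated from P by an RQ decomposition of its left 3x3 block; the Zhang and Tsai methods reach the same quantities by different routes.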
3D measurements in conventional X-ray imaging with RGB-D sensors
A method for deriving 3D internal information in conventional X-ray settings is presented. It is based on the combination of a pair of radiographs from a patient and avoids the use of X-ray-opaque fiducials and external reference structures. To achieve this goal, we augment an ordinary X-ray device with a consumer RGB-D camera. The patient's rotation around the craniocaudal axis is tracked relative to this camera thanks to the depth information provided and the application of a modern surface-mapping algorithm. The measured spatial information is then translated to the reference frame of the X-ray imaging system. By using the intrinsic parameters of the diagnostic equipment, epipolar geometry, and X-ray images of the patient at different angles, 3D internal positions can be obtained. Both the RGB-D and X-ray instruments are first geometrically calibrated to find their joint spatial transformation. The proposed method is applied to three rotating phantoms. The first two consist of an anthropomorphic head and a torso, both filled with spherical lead bearings at precise locations. The third is made of simple foam and has metal needles of several known lengths embedded in it. The results show that it is possible to resolve anatomical positions and lengths with millimetric precision. With the proposed approach, internal 3D reconstructed coordinates and distances can be provided to the physician. The method also reduces the invasiveness of ordinary X-ray environments and can replace other types of clinical exploration aimed mainly at measuring or geometrically relating elements inside the patient's body.

The authors would like to thank the Radiation Oncology Department of the Physics Section at La Fe Hospital for the anthropomorphic phantom used in this work and Jose Manuel Monserrate (Instituto de Física Corpuscular) for his contribution to the development of the calibration frame shown in Fig. 3. This research has the support of Information Storage S.L., University of Valencia (grant CPI-15-170), CSD2007-00042 Consolider Ingenio CPAN (grant CPAN13-TR01), IFIC (Severo Ochoa Centre of Excellence SEV-2014-0398), and the Spanish Ministry of Industry, Energy, and Tourism (grant TSI-100101-2013-019).

Albiol Colomer, F.; Corbi, A.; Albiol Colomer, A. (2017). 3D measurements in conventional X-ray imaging with RGB-D sensors. Medical Engineering & Physics. 42:73-79. https://doi.org/10.1016/j.medengphy.2017.01.024
Design of a Remote Signal Processing Student Lab
We describe our experience of introducing digital signal processing (DSP) concepts via a
software-defined radio project using a very inexpensive TV USB capture dongle. Through a series of weekly
lab exercises, the students learned and applied DSP concepts to design a completely digital FM receiver. The
proposed lab experience introduced concepts, such as sampling, IQ signal representation, sample rate
conversion, filter design, filter delays, and more, all with an attractive learn-by-doing approach. The first
offering of this course took place in Fall 2014, and it has been repeated with
growing success ever since. Our experience can serve as a proof of concept of the possibility of carrying out,
in a massive open online course-like fashion, certain engineering labs that require inexpensive and readily
available hardware components.

This work was supported by the Universidad Internacional de la Rioja through the Research Institute for Innovation and Technology in Education.

Albiol Colomer, A.; Corbi, A.; Burgos, D. (2017). Design of a Remote Signal Processing Student Lab. IEEE Access. 5:16068-16076. doi:10.1109/ACCESS.2017.2736165
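The heart of the all-digital FM receiver the students build is recovering the instantaneous frequency from the IQ samples, i.e. the phase difference between consecutive complex baseband samples. A minimal sketch of that step; the sample rate, tone, and deviation values are illustrative, not the course's actual parameters:

```python
import numpy as np

def fm_demodulate(iq):
    """FM discriminator for complex baseband (IQ) samples: the
    instantaneous frequency is the phase of x[n] * conj(x[n-1]),
    which avoids explicit phase unwrapping."""
    return np.angle(iq[1:] * np.conj(iq[:-1]))

# Synthetic check: modulate a 1 kHz tone, then demodulate it back.
fs = 240_000.0                              # sample rate (illustrative)
t = np.arange(4096) / fs
msg = np.sin(2 * np.pi * 1000.0 * t)        # message signal
kf = 2 * np.pi * 75_000.0 / fs              # phase increment per unit message
iq = np.exp(1j * kf * np.cumsum(msg))       # FM-modulated complex baseband
demod = fm_demodulate(iq) / kf              # rescale to message amplitude
print(np.allclose(demod, msg[1:], atol=1e-6))  # True
```

Because the per-sample phase step stays below pi here, `np.angle` recovers the message essentially exactly; a real receiver would precede this stage with sample-rate conversion and channel filtering, as the lab exercises describe.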
Fast 3D Rotation Estimation of Fruits Using Spheroid Models
Automated fruit inspection using cameras involves the analysis of a collection of views of the same fruit, obtained by rotating the fruit while it is transported. Conventionally, each view is analyzed independently. However, in order to obtain a global score of the fruit quality, it is necessary to match the defects between adjacent views, to prevent counting them more than once and to assert that the whole surface has been examined. To accomplish this goal, this paper estimates the 3D rotation undergone by the fruit using a single camera. A 3D model of the fruit geometry is needed to estimate the rotation; this paper proposes to model the fruit shape as a 3D spheroid. The spheroid size and pose in each view are estimated from the silhouettes of all views. Once the geometric model has been fitted, a single 3D rotation is estimated for each view transition. Once all rotations have been estimated, they can be used to propagate defects to neighboring views or even to build a topographic map of the whole fruit surface, opening the possibility of analyzing a single image (the map) instead of a collection of individual views. A large effort was made to make this method as fast as possible: execution times are under 0.5 ms per 3D rotation on a standard Intel i7 CPU using a single core.

Albiol Colomer, AJ.; Albiol Colomer, A.; Sánchez De Merás, C. (2021). Fast 3D Rotation Estimation of Fruits Using Spheroid Models. Sensors. 21(6):1-24. https://doi.org/10.3390/s21062232
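The abstract does not spell out how the per-transition rotation is solved. The classical least-squares answer for matched 3D surface points is the SVD-based Kabsch (orthogonal Procrustes) solution, sketched here as an illustration rather than the paper's exact estimator:

```python
import numpy as np

def estimate_rotation(A, B):
    """Kabsch algorithm: the rotation R minimising sum ||R a_i - b_i||^2
    for matched 3D point sets A, B of shape (n, 3)."""
    H = A.T @ B                               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    D = np.diag([1.0, 1.0, d])
    return Vt.T @ D @ U.T

# Synthetic check: rotate points on a unit sphere by a known rotation.
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 3))
A /= np.linalg.norm(A, axis=1, keepdims=True)   # surface points
theta = np.deg2rad(15.0)                        # 15 degrees about Z
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
B = A @ R_true.T
R_hat = estimate_rotation(A, B)
print(np.allclose(R_hat, R_true, atol=1e-10))  # True
```

Because the solve reduces to one SVD of a 3x3 matrix, it is extremely cheap, which is consistent with the sub-millisecond per-rotation timings the abstract reports.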
A portable geometry-independent tomographic system for gamma-ray, a next generation of nuclear waste characterization
One of the main activities of the nuclear industry is the characterisation of radioactive waste based on the detection of gamma radiation. Large volumes of radioactive waste are classified according to their average activity, but the radioactivity often exceeds the maximum allowed by regulators in specific parts of the bulk. In addition, detection is currently based on static systems where the geometry of the bulk is fixed and well known. Furthermore, these systems are not portable and depend on transporting the waste to the places where the detection systems are located. However, there are situations where the geometry varies and where moving the waste is complex; this is especially true in compromised situations.

We present a new model for nuclear waste management based on a portable, geometry-independent tomographic system for three-dimensional image reconstruction from gamma-radiation detection. The system relies on the combination of a gamma-radiation camera and a visible camera, which makes it possible to visualise radioactivity using augmented reality and computer vision techniques. This novel tomographic system has the potential to be a disruptive innovation in the nuclear industry for nuclear waste management.
Precise eye localization using HOG descriptors
In this paper, we present a novel algorithm for precise eye detection. First, a pair of AdaBoost classifiers trained with Haar-like features is used to preselect possible eye locations. Then, a Support Vector Machine (SVM) that uses Histogram of Oriented Gradients (HOG) descriptors is used to obtain the best pair of eyes among all combinations of preselected candidates. Finally, we compare the eye-detection results with three state-of-the-art works and a commercial software package, the most complete comparison presented so far. The results show that our algorithm achieves the highest accuracy on the FERET and FRGCv1 databases.

This work has been partially supported by grant TEC2009-09146 of the Spanish Government.

Monzó Ferrer, D.; Albiol Colomer, A.; Sastre, J.; Albiol Colomer, AJ. (2011). Precise eye localization using HOG descriptors. Machine Vision and Applications. 22(3):471-480. https://doi.org/10.1007/s00138-010-0273-0
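The HOG descriptor at the core of the candidate-scoring stage is a grid of gradient-orientation histograms. A deliberately simplified numpy sketch of that core (cell histograms only, without the overlapping block normalisation that full HOG, e.g. Dalal-Triggs, adds on top):

```python
import numpy as np

def hog_cells(img, cell=8, nbins=9):
    """Simplified HOG core: one histogram of unsigned gradient
    orientations per cell, weighted by gradient magnitude, for a
    grayscale image array. No block normalisation is applied."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)      # unsigned orientation
    h, w = img.shape
    ch, cw = h // cell, w // cell
    hist = np.zeros((ch, cw, nbins))
    bins = np.minimum((ang / np.pi * nbins).astype(int), nbins - 1)
    for i in range(ch * cell):
        for j in range(cw * cell):
            hist[i // cell, j // cell, bins[i, j]] += mag[i, j]
    return hist.reshape(ch * cw * nbins)         # descriptor vector

# A vertical step edge concentrates energy in one orientation bin.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
desc = hog_cells(img, cell=8, nbins=9)
print(desc.shape)  # (36,)
```

A linear SVM trained on such vectors from positive (eye) and negative patches then scores each candidate location, as the pipeline above describes.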
Control activo de ruido en conductos [Active noise control in ducts]
Peer Reviewed
Single Fusion Image from Collections of Fruit Views for Defect Detection and Classification
Quality assessment is one of the most common processes in the agri-food industry. Typically, this task involves the analysis of multiple views of each fruit, which is a highly time-consuming operation. Moreover, there is usually significant overlap between consecutive views, so a mechanism is needed to cope with the redundancy and prevent multiple counting of defect points.
This paper presents a method to create surface maps of fruit from collections of views obtained when the piece is rotating. This single image map combines the information contained in the views, thus reducing the number of analysis operations and avoiding possible miscounts in the number of defects.
After assigning each piece a simple geometrical model, 3D rotation between consecutive views is estimated only from the captured images, without any further need for sensors or information about the conveyor.
The fact that rotation is estimated directly from the views makes this novel methodology readily usable in high throughput industrial inspection machines without any special hardware modification.
As proof of this technique's usefulness, an application is shown where the maps have been used as input to a CNN to classify oranges into different categories.

Albiol Colomer, AJ.; Sánchez De-Merás, CJ.; Albiol Colomer, A.; Hinojosa, S. (2022). Single Fusion Image from Collections of Fruit Views for Defect Detection and Classification. Sensors. 22(14):1-14. https://doi.org/10.3390/s22145452
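Once the per-view rotations are known, every surface point can be expressed in a single model frame and assigned fixed coordinates on the fusion map. One common way to realise such a map for a (near-)spherical model is an equirectangular unwrap of longitude/latitude; this is an illustration of the idea, not necessarily the paper's exact parametrisation:

```python
import numpy as np

def sphere_to_map(points, width=360, height=180):
    """Map unit-sphere surface points (n, 3) to integer pixel
    coordinates of an equirectangular surface map (longitude on the
    horizontal axis, latitude on the vertical axis)."""
    x, y, z = points.T
    lon = np.arctan2(y, x)                  # [-pi, pi]
    lat = np.arcsin(np.clip(z, -1, 1))      # [-pi/2, pi/2]
    u = ((lon + np.pi) / (2 * np.pi) * width).astype(int) % width
    v = ((np.pi / 2 - lat) / np.pi * height).astype(int).clip(0, height - 1)
    return np.stack([u, v], axis=1)

# A defect keeps the same map pixel across views once its coordinates
# are rotated back to the model frame with the estimated rotation.
p_view1 = np.array([[1.0, 0.0, 0.0]])       # defect direction, view 1
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])             # estimated view-1 -> view-2 rotation
p_view2 = p_view1 @ R.T                     # same defect seen in view 2
print(np.array_equal(sphere_to_map(p_view1),
                     sphere_to_map(p_view2 @ R)))  # True (rotation undone)
```

Accumulating pixel intensities (or defect labels) from all views into this shared grid yields the single fusion image that the CNN then classifies.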
- …