9 research outputs found

    Concurrent Validity of the Inertial Measurement Unit Vmaxpro in Vertical Jump Estimation

    Get PDF
    The aim of this study was to evaluate whether the inertial measurement unit (IMU) Vmaxpro is a valid device to estimate vertical jump height (VJH) when compared to a motion capture system (MoCAP). Thirteen highly trained female volleyball players participated in this study, which consisted of three sessions. After a familiarization session, the two remaining sessions comprised a warm-up followed by ten countermovement jumps, resting two min between attempts. Jump height was measured simultaneously by Vmaxpro, using take-off velocity, and by MoCAP, using the vertical excursion of the center of mass. Results show significant differences in jump height between devices (10.52 cm; p < 0.001; ES = 0.9), a very strong Spearman’s correlation (rs = 0.84; p < 0.001), and a weak concordance correlation coefficient (CCC = 0.22; ρ = 0.861; Cb = 0.26). Regression analysis reveals very high correlations, a high systematic error (8.46 cm), and a nonproportional random error (SEE = 1.67 cm). Bland–Altman plots show a systematic error (10.6 cm), with 97.3% of the data lying within the limits of agreement. In conclusion, Vmaxpro can be considered a valid device for the estimation of VJH, being a cheaper, portable, and manageable alternative to MoCAP. However, the magnitude of the systematic error discourages using Vmaxpro and MoCAP data interchangeably unless the corresponding device-specific fitting equation is applied. This work was supported by Generalitat Valenciana (grant number GV/2021/098).
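    The take-off-velocity method mentioned above relies on the ballistic relation between vertical take-off velocity and jump height, h = v²/(2g). A minimal Python sketch of that relation (an illustration only, not the Vmaxpro firmware or the authors' code):

        # Jump height from vertical take-off velocity, assuming ballistic flight
        # of the centre of mass after take-off (illustrative sketch only).
        G = 9.81  # gravitational acceleration, m/s^2

        def jump_height_from_takeoff_velocity(v_takeoff: float) -> float:
            """Return jump height in metres given take-off velocity in m/s."""
            return v_takeoff ** 2 / (2.0 * G)

        # Example: a take-off velocity of 2.4 m/s gives roughly 0.29 m (29 cm).
        print(round(jump_height_from_takeoff_velocity(2.4), 3))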

    Reliability of My Jump 2 Derived from Crouching and Standing Observation Heights

    Get PDF
    The crouching or prone-on-the-ground observation heights suggested by the My Jump app are not practical in some settings, so users usually hold smartphones in a standing posture. This study aimed to analyze the reliability of My Jump 2 from the standardized and standing positions. Two identical smartphones recorded 195 countermovement jump executions from 39 active adult athletes at heights of 30 and 90 cm, which were randomly assessed by three experienced observers. The between-observer reliability was high for both observation heights separately (ICC ~ 0.99; SEM ~ 0.6 cm; CV ~ 1.3%), with low systematic (0.1 cm) and random (±1.7 cm) errors. The within-observer reliability for the three observers comparing the standardized and standing positions was also high (ICC ~ 0.99; SEM ~ 0.7 cm; CV ~ 1.4%), showing errors of 0.3 ± 1.9 cm. Observer 2 was the least accurate of the three, although reliability remained similar to the levels of agreement found in the literature. The mean observations at each height likewise showed high reliability (ICC = 0.993; SEM = 0.51 cm; CV = 1.05%; errors of 0.32 ± 1.4 cm). Therefore, reliability in the standing position did not change with respect to the standardized position, so standing observation can be regarded as an alternative way of using My Jump 2, with added practical benefits. This research was funded by Generalitat Valenciana, grant number GV/2021/098.
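    The bias, random error, SEM and CV figures quoted above can be obtained from paired observations of the same jumps. A minimal sketch of one common way to compute them (an illustration only, under the assumption that SEM is taken as the typical error, i.e. the SD of the differences divided by √2; the paper's exact ICC-based analysis may differ):

        import numpy as np

        def agreement_stats(obs_a, obs_b):
            """Agreement statistics for two paired sets of jump heights (cm)."""
            a, b = np.asarray(obs_a, float), np.asarray(obs_b, float)
            diff = a - b
            bias = diff.mean()                            # systematic error
            random_error = 1.96 * diff.std(ddof=1)        # random error
            sem = diff.std(ddof=1) / np.sqrt(2)           # typical error (assumed SEM)
            cv = 100.0 * sem / np.concatenate([a, b]).mean()  # coefficient of variation, %
            return bias, random_error, sem, cv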

    Ground truth annotation of traffic video data

    Full text link
    This paper presents a software application to generate ground-truth data on video files from traffic surveillance cameras used for Intelligent Transportation Systems (ITS). The computer vision system to be evaluated counts the number of vehicles that cross a line per unit of time (the intensity), the average speed, and the occupancy. The main goal of the visual interface presented in this paper is to be easy to use without requiring any specific hardware: it is based on a standard laptop or desktop computer and a jog shuttle wheel. The setup is efficient and comfortable because one hand of the annotating person rests almost all the time on the space key of the keyboard while the other hand stays on the jog shuttle wheel. The mean time required to annotate a video file ranges from 1 to 5 times its duration (per lane), depending on the content. Compared to a general-purpose annotation tool, a time gain of about 7 times is achieved. This work was funded by the Spanish Government project MARTA under the CENIT program and CICYT contract TEC2009-09146. Mossi García, JM.; Albiol Colomer, AJ.; Albiol Colomer, A.; Oliver Moll, J. (2014). Ground truth annotation of traffic video data. Multimedia Tools and Applications. 1-14. https://doi.org/10.1007/s11042-013-1396-x
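    From the annotated crossing events, the three quantities the evaluated system reports can be derived directly. A minimal sketch of those definitions (the event fields are hypothetical, not the tool's actual data format):

        from dataclasses import dataclass

        @dataclass
        class Crossing:
            """One annotated event: a vehicle crossing the virtual counting line."""
            t: float            # crossing time, seconds from the start of the video
            speed_kmh: float    # estimated speed at the line, km/h
            occupancy_s: float  # time the vehicle keeps the line occupied, seconds

        def traffic_metrics(crossings, duration_s):
            """Intensity (vehicles/hour), mean speed (km/h) and occupancy (fraction of time)."""
            n = len(crossings)
            intensity = 3600.0 * n / duration_s
            mean_speed = sum(c.speed_kmh for c in crossings) / n if n else 0.0
            occupancy = sum(c.occupancy_s for c in crossings) / duration_s
            return intensity, mean_speed, occupancy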

    CNNs for automatic glaucoma assessment using fundus images: an extensive validation

    Get PDF
    Background: Most current algorithms for automatic glaucoma assessment using fundus images rely on handcrafted features based on segmentation, which are affected by the performance of the chosen segmentation method and the extracted features. Among other characteristics, convolutional neural networks (CNNs) are known for their ability to learn highly discriminative features from raw pixel intensities. Methods: In this paper, we employed five different ImageNet-trained models (VGG16, VGG19, InceptionV3, ResNet50 and Xception) for automatic glaucoma assessment using fundus images. Results from an extensive validation using cross-validation and cross-testing strategies were compared with previous works in the literature. Results: Using five public databases (1707 images), an average AUC of 0.9605 with a 95% confidence interval of 95.92–97.07%, an average specificity of 0.8580 and an average sensitivity of 0.9346 were obtained with the Xception architecture, significantly improving on the performance of other state-of-the-art works. Moreover, a new clinical database, ACRIMA, has been made publicly available, containing 705 labelled images. It is composed of 396 glaucomatous images and 309 normal images, making it the largest public database for glaucoma diagnosis. The high specificity and sensitivity obtained with the proposed approach are supported by an extensive validation using not only the cross-validation strategy but also cross-testing validation on, to the best of the authors’ knowledge, all publicly available glaucoma-labelled databases. Conclusions: These results suggest that using ImageNet-trained models is a robust alternative for automatic glaucoma screening systems. All images, CNN weights and software used to fine-tune and test the five CNNs are publicly available and could be used as a testbed for further comparisons.
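    The transfer-learning setup described above can be sketched in a few lines of Keras. This is an illustration of the general approach only, not the authors' released code; the input size, new classification head and training settings are assumptions:

        import tensorflow as tf
        from tensorflow.keras import layers, models
        from tensorflow.keras.applications import Xception

        # ImageNet-pretrained Xception used as a feature extractor (input size assumed).
        base = Xception(weights="imagenet", include_top=False, input_shape=(299, 299, 3))
        base.trainable = False  # first train only the new top; unfreeze later to fine-tune

        model = models.Sequential([
            base,
            layers.GlobalAveragePooling2D(),
            layers.Dropout(0.5),
            layers.Dense(1, activation="sigmoid"),  # glaucoma vs. normal
        ])
        model.compile(optimizer="adam",
                      loss="binary_crossentropy",
                      metrics=[tf.keras.metrics.AUC()])
        # model.fit(...) on the public fundus databases, then cross-test on the held-out ones.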

    Video-Based System for Automatic Measurement of Barbell Velocity in Back Squat

    Get PDF
    Velocity-based training is a contemporary method used by sports coaches to prescribe the optimal loading based on the velocity at which a load is lifted. The most widely used and accurate instruments to monitor velocity are linear position transducers. Alternatively, smartphone apps compute mean velocity after each execution by manual on-screen digitizing, introducing human error. In this paper, a video-based instrument delivering unattended, real-time measures of barbell velocity with a smartphone high-speed camera has been developed. A custom image-processing algorithm detects reference points of a multipower machine to auto-calibrate and automatically tracks barbell markers to give real-time kinematic parameters. Validity and reliability were studied by comparing the simultaneous measurement of 160 repetitions of back squat lifts executed by 20 athletes with the proposed instrument and a validated linear position transducer, used as the criterion. The video system produced practically identical range, velocity, force, and power outcomes to the criterion, with low and proportional systematic bias and random errors. Our results suggest that the developed video system is a valid, reliable, and trustworthy instrument for measuring velocity and derived variables accurately, with practical implications for use by coaches and practitioners. This work was supported by the Vice-rectorate program of Research and Knowledge transfer for the Promotion of R&D at the University of Alicante (Ref. GRE18-09).
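    Once the marker track and the pixel-to-metre calibration are available, mean lifting velocity follows from displacement over time. A minimal sketch of that step (an illustration only; the calibration factor and frame rate are assumptions, and the paper's actual algorithm also derives force and power):

        import numpy as np

        def mean_concentric_velocity(y_pixels, fps, metres_per_pixel):
            """Mean barbell velocity (m/s) over the tracked concentric (upward) phase.
            y_pixels: vertical marker position per frame (image coordinates, y grows downward)."""
            y_m = np.asarray(y_pixels, float) * metres_per_pixel
            displacement = y_m[0] - y_m[-1]      # upward displacement across the phase
            duration = (len(y_m) - 1) / fps      # elapsed time between first and last frame
            return displacement / duration

        # Example: 120 frames at 240 fps covering 0.40 m gives about 0.81 m/s.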

    Extracting Random Vibration Components from Global Motion Vectors

    No full text
    This paper deals with global motion detection applied to vibration restoration in image sequences. Image vibration is a typical degradation consisting of a random translation between consecutive frames. After testing different global motion detection methods, we chose the phase correlation technique as the global motion estimator, since it obtains the best estimate of the global motion vector (GMV) by processing image sequences in the frequency domain. Once the GMV has been determined, we propose an automatic camera-panning detection technique, which provides a good, automatic estimate of the image-sequence vibration.
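    Phase correlation estimates the translation between two frames from the normalised cross-power spectrum. A minimal NumPy sketch of the idea (an illustration only, not the authors' implementation; it returns integer-pixel shifts and ignores sub-pixel refinement):

        import numpy as np

        def phase_correlation_shift(frame_a, frame_b):
            """Estimate the (dy, dx) translation such that frame_b is roughly
            frame_a shifted by (dy, dx). Frames are 2-D grayscale arrays."""
            A = np.fft.fft2(frame_a)
            B = np.fft.fft2(frame_b)
            R = np.conj(A) * B
            R /= np.abs(R) + 1e-12                 # keep phase only
            corr = np.fft.ifft2(R).real            # correlation surface, peak at the shift
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            # wrap shifts larger than half the frame size into negative values
            if dy > frame_a.shape[0] // 2:
                dy -= frame_a.shape[0]
            if dx > frame_a.shape[1] // 2:
                dx -= frame_a.shape[1]
            return dy, dx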

    Deep-Learning-Based Classification of Rat OCT Images After Intravitreal Injection of ET-1 for Glaucoma Understanding

    No full text
    Optical coherence tomography (OCT) is a useful technique to monitor retinal damage. We present an automatic method to accurately classify rodent OCT images as healthy or pathological (before and after 14 days of intravitreal injection of Endothelin-1, respectively), making use of a fine-tuned DenseNet-201 architecture with a customized top model. We validated the performance of the method on 1912 OCT images, yielding promising results (AUC = 0.99 ± 0.01 in a P = 15 leave-P-out cross-validation). In addition, we compared the results of the fine-tuned network with those achieved by training the network from scratch, obtaining some interesting insights. The presented method is a step forward in understanding pathological rodent OCT retinal images, as there is currently no known discriminating characteristic that allows this type of image to be classified accurately. The result of this work is a very accurate and robust automatic method to distinguish between healthy retinas and those of a rodent model of glaucoma, which is the backbone of future work dealing with human OCT images. Animal experiment permission was granted by the Danish Animal Experimentation Council (license number: 2017-15-0201-01213). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research. This work was supported by the Project GALAHAD [H2020-ICT-2016-2017, 732613]. Fuentes-Hurtado, FJ.; Morales, S.; Mossi García, JM.; Naranjo Ornedo, V.; Fedulov, V.; Woldbye, D.; Klemp, K.... (2018). Deep-Learning-based Classification of Rat OCT images After Intravitreal Injection of ET-1 for Glaucoma Understanding. In Intelligent Data Engineering and Automated Learning – IDEAL 2018. Springer. 27-34. https://doi.org/10.1007/978-3-030-03493-1_4
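    The fine-tuning setup described above (a pretrained DenseNet-201 backbone with a customized top model) can be sketched in Keras as follows. This is an illustration only, not the authors' code; the top-model layers, input size and optimizer settings are assumptions:

        import tensorflow as tf
        from tensorflow.keras import layers, models
        from tensorflow.keras.applications import DenseNet201

        # ImageNet-pretrained DenseNet-201 backbone (input size assumed).
        backbone = DenseNet201(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
        backbone.trainable = True  # fine-tune the pretrained weights

        # Customized top model for the binary healthy/pathological decision (layers assumed).
        model = models.Sequential([
            backbone,
            layers.GlobalAveragePooling2D(),
            layers.Dense(256, activation="relu"),
            layers.Dropout(0.5),
            layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                      loss="binary_crossentropy",
                      metrics=[tf.keras.metrics.AUC()])
        # For the paper's comparison with training from scratch, use weights=None instead of "imagenet".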