
    Effectiveness of an automatic tracking software in underwater motion analysis

    Tracking markers placed on anatomical landmarks is common practice in sports science for the kinematic analyses that interest both athletes and coaches. Although different software programs have been developed to automatically track markers and/or features, none was specifically designed to analyze underwater motion. Hence, this study aimed to evaluate the effectiveness of software developed for automatic tracking of underwater movements (DVP), based on the Kanade-Lucas-Tomasi feature tracker. Twenty-one video recordings of different aquatic exercises (n = 2940 marker positions) were manually tracked to determine the markers' center coordinates. The videos were then automatically tracked using DVP and a commercially available software package (COM). Since tracking techniques may produce false targets, an operator was instructed to stop the automatic procedure and correct the position of the cursor whenever the distance between the calculated marker coordinate and the reference one exceeded 4 pixels. The proportion of manual interventions required by the software was used as a measure of the degree of automation. Overall, the manual-intervention rate was 10.4 percentage points lower for DVP (7.4%) than for COM (17.8%). When the exercise modes were examined separately, the percentage of manual interventions was 5.6 to 29.3 percentage points lower for DVP than for COM. Similar results were observed when analyzing the type of marker rather than the type of exercise, with 9.9 percentage points fewer manual interventions for DVP than for COM. In conclusion, the automatic tracking software presented here can be used as a valid and useful tool for underwater motion analysis. Key Points: The availability of effective software for automatic tracking would represent a significant advance for the practical use of kinematic analysis in swimming and other aquatic sports. An important feature of automatic tracking software is that it requires limited human intervention and supervision, allowing short processing times. When tracking underwater movements, the degree of automation is influenced by the algorithm's capability to overcome difficulties linked to the small target size, the low image quality, and the presence of background clutter. The newly developed feature-tracking algorithm showed good automatic tracking effectiveness in underwater motion analysis, with a significantly smaller percentage of required manual interventions compared with a commercial software package.
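
    DVP itself is not reproduced in this abstract; the following is a minimal sketch of Kanade-Lucas-Tomasi point tracking with OpenCV, the family of tracker that DVP is based on. The video file name and the seed marker coordinates are illustrative placeholders, not values from the study.

```python
# Minimal sketch of Kanade-Lucas-Tomasi point tracking with OpenCV.
# The video file name and the seed coordinates of the markers are
# illustrative placeholders; DVP's actual implementation is not reproduced.
import cv2
import numpy as np

cap = cv2.VideoCapture("underwater_trial.avi")            # hypothetical recording
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Manually digitised marker centres in the first frame (illustrative values).
points = np.array([[120.0, 340.0], [205.0, 310.0]], dtype=np.float32).reshape(-1, 1, 2)

lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    points, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None, **lk_params)
    # A supervising operator would correct any marker whose estimate drifts
    # more than 4 pixels from the reference centre, as described above.
    prev_gray = gray
cap.release()
```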

    Real Time Stereo Cameras System Calibration Tool and Attitude and Pose Computation with Low Cost Cameras

    Engineering for autonomous systems has many strands. The one this work falls within, artificial vision, has become of great interest in multiple contexts, with a particular focus on robotics. This work seeks to address and overcome some practical difficulties encountered when developing artificial-vision technologies: the calibration process and real-time pose computation for robots. It first provides tools for real-time intrinsic (3.2.1) and extrinsic (3.3) calibration of stereo camera systems, needed for the main goal of this work: real-time computation of the pose (position and orientation) of an active coloured target with stereo vision systems. Designed to be intuitive, easy to use, and able to run in real-time applications, the tools were developed for use with either low-cost, easy-to-acquire cameras or more complex, high-resolution stereo vision systems, and they compute all parameters inherent to such a system, namely the intrinsics of each camera and the extrinsic matrices relating the two cameras. The work is oriented towards underwater environments, which are highly dynamic and computationally more demanding because of particularities such as light reflections and poor visibility. The available calibration information, whether generated by this tool or loaded from other tools, allows a simple calibration of the environment colorspace and of the detection parameters for a specific target carrying active visual markers (4.1.1), which is useful in unstructured environments. With a calibrated system and environment, the pose of a target of interest can be detected and computed in real time; the combination of position and orientation (attitude) is referred to as the pose of an object. To assess performance and the quality of the information obtained, the tools are compared with existing ones.
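
    As a minimal sketch of the calibration and triangulation steps described above (not the tool's own code), OpenCV's calibration routines can be chained as follows; the checkerboard corner lists, image size and target detections passed to the function are placeholders.

```python
# Sketch of stereo calibration followed by triangulation of one detected target.
# All inputs (checkerboard detections, image size, target pixel coordinates)
# are illustrative placeholders supplied by the caller.
import cv2
import numpy as np

def calibrate_and_triangulate(obj_pts, img_pts_l, img_pts_r, image_size, uv_left, uv_right):
    """obj_pts: list of (N,3) board-corner coordinates; img_pts_l/img_pts_r: matching
    detections per camera; uv_left/uv_right: 2x1 pixel coordinates of the target."""
    # Per-camera intrinsics and distortion from the checkerboard views.
    _, K_l, d_l, _, _ = cv2.calibrateCamera(obj_pts, img_pts_l, image_size, None, None)
    _, K_r, d_r, _, _ = cv2.calibrateCamera(obj_pts, img_pts_r, image_size, None, None)
    # Extrinsics: rotation R and translation T taking left-camera points to the right camera.
    _, K_l, d_l, K_r, d_r, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, img_pts_l, img_pts_r, K_l, d_l, K_r, d_r, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    # Projection matrices with the left camera as the world origin.
    P_l = K_l @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_r = K_r @ np.hstack([R, T.reshape(3, 1)])
    pt_h = cv2.triangulatePoints(P_l, P_r, uv_left, uv_right)   # homogeneous 4x1
    return (pt_h[:3] / pt_h[3]).ravel()                         # target XYZ
```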

    Three-dimensional joint kinematics of swimming using body-worn inertial and magnetic sensors

    Wearable inertial and magnetic measurement units (IMMU) are an important tool for underwater motion analysis because they are swimmer-centric, require only a simple measurement set-up, and deliver performance results very quickly. To estimate 3D joint kinematics during motion, protocols have been developed to transpose the IMMU orientation estimates onto a biomechanical model. The aim of the thesis was to validate a protocol originally proposed to estimate upper-limb joint angles during one-degree-of-freedom movements in dry settings, modified here to perform 3D kinematic analysis of the shoulders, elbows and wrists during swimming. Eight high-level swimmers were assessed in the laboratory by means of an IMMU while simulating front crawl and breaststroke movements. A stereo-photogrammetric system (SPS) was used as reference. The joint angles (in degrees) of the shoulders (flexion-extension, abduction-adduction and internal-external rotation), the elbows (flexion-extension and pronation-supination), and the wrists (flexion-extension and radial-ulnar deviation) were estimated with the two systems and compared by means of root mean square errors (RMSE), relative RMSE, Pearson's product-moment correlation coefficient (R) and the coefficient of multiple correlation (CMC). The athletes were then assessed during pool swimming trials using the IMMU. Considering both swim styles and all modeled joint degrees of freedom, the comparison between the IMMU and the SPS showed median RMSE values lower than 8°, representing 10% of the overall joint range of motion, and high median values of CMC (0.97) and R (0.96). These findings suggest that the protocol estimated the 3D orientation of the shoulder, elbow and wrist joints during swimming with accuracy adequate for research purposes. In conclusion, the proposed method for evaluating 3D joint kinematics with an IMMU proved to be a useful tool for both sport and clinical contexts.
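
    A minimal sketch of the per-joint agreement metrics named above (RMSE, RMSE relative to the joint's range of motion, and Pearson's R), assuming two time-synchronised joint-angle arrays in degrees; the CMC computation is omitted here.

```python
# Agreement metrics between an IMMU-derived and a reference (SPS) joint-angle
# series. Inputs are assumed to be time-synchronised 1-D arrays in degrees.
import numpy as np

def agreement_metrics(imu_angle, sps_angle):
    imu_angle = np.asarray(imu_angle, dtype=float)
    sps_angle = np.asarray(sps_angle, dtype=float)
    err = imu_angle - sps_angle
    rmse = np.sqrt(np.mean(err ** 2))                # degrees
    rom = sps_angle.max() - sps_angle.min()          # reference range of motion
    rel_rmse = 100.0 * rmse / rom                    # RMSE as % of range of motion
    r = np.corrcoef(imu_angle, sps_angle)[0, 1]      # Pearson's R
    return rmse, rel_rmse, r
```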

    Study and Characterization of a Camera-based Distributed System for Large-Volume Dimensional Metrology Applications

    Large-Volume Dimensional Metrology (LVDM) deals with dimensional inspection of large objects, with dimensions on the order of tens up to hundreds of meters. Typical LVDM applications concern the assembly and disassembly of large objects in industrial engineering. Based on different technologies and measurement principles, a wealth of LVDM systems have been proposed and developed in the literature, for example optical systems such as the laser tracker and laser radar, and mechanical systems such as the gantry CMM and the multi-joint articulated-arm CMM. The main existing LVDM systems can be divided into two categories according to their hardware configuration: centralized systems and distributed systems. By definition, a centralized system is a stand-alone unit that works independently to provide measurements of a spatial point, while a distributed system consists of a set of sensors that work cooperatively to provide measurements of a spatial point; usually an individual sensor cannot measure the coordinates on its own. Representative distributed systems in the literature include iGPS and MScMS-II. The current trend in LVDM seems to be towards distributed systems, which demonstrate many advantages that distinguish them from conventional centralized systems.
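
    As a generic illustration of how a distributed system combines cooperative sensor readings into a single spatial point (this is not the iGPS or MScMS-II algorithm; sensor positions and ranges are placeholders), a point can be recovered from several range measurements by nonlinear least squares.

```python
# Generic multilateration sketch: each sensor i at position s_i reports a
# distance d_i to the point of interest, and the point is recovered by
# minimising the range residuals. Illustrative only.
import numpy as np
from scipy.optimize import least_squares

def locate_point(sensor_positions, distances, x0=None):
    sensor_positions = np.asarray(sensor_positions, dtype=float)   # (n, 3)
    distances = np.asarray(distances, dtype=float)                 # (n,)
    if x0 is None:
        x0 = sensor_positions.mean(axis=0)                         # initial guess
    residual = lambda p: np.linalg.norm(sensor_positions - p, axis=1) - distances
    return least_squares(residual, x0).x                           # estimated XYZ
```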

    Target Tracking Using Optical Markers for Remote Handling in ITER

    The thesis focuses on the development of a vision system to be used in the remote handling systems of the International Thermonuclear Experimental Reactor (ITER). It presents and discusses a realistic solution for estimating the pose of key operational targets while taking into account the specific needs and restrictions of the application. The contributions to the state of the art are on two main fronts: 1) the development of optical markers that can withstand the extreme conditions of the environment; 2) the development of a robust marker detection and identification framework that can be applied effectively to different use cases. The markers' locations and labels are used to compute the pose. In the first part of the work, a retro-reflective marker made of ITER-compliant materials, in particular fused silica and stainless steel, is designed. A methodology is proposed to optimize the markers' performance, and highly distinguishable markers are manufactured and tested. In the second part, a hybrid pipeline is proposed that detects uncoded markers in low-resolution images using classical methods and identifies them using a machine-learning approach. It is demonstrated that the proposed methodology generalizes effectively to different marker constellations and can successfully detect both retro-reflective markers and laser engravings. Lastly, a methodology is developed to evaluate the end-to-end accuracy of the proposed solution using feedback provided by an industrial robotic arm. Results are evaluated in a realistic test setup for two significantly different use cases and show that marker-based tracking is a viable solution for the problem at hand, providing superior performance to earlier stereo-matching-based approaches. The developed solutions could be applied to other use cases and applications.
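
    As a hedged sketch of the final pose-computation step, once markers have been detected and assigned labels they can be fed to a standard Perspective-n-Point solve; the thesis's own solver is not reproduced, and the marker constellation and camera intrinsics below are illustrative placeholders.

```python
# Pose from labelled marker detections via OpenCV solvePnP.
# Marker coordinates, pixel detections and intrinsics are illustrative.
import cv2
import numpy as np

object_pts = np.array([[0.0, 0.0, 0.0],        # 3-D marker constellation (metres)
                       [0.1, 0.0, 0.0],
                       [0.1, 0.1, 0.0],
                       [0.0, 0.1, 0.0]])
image_pts = np.array([[410.0, 255.0],          # matching labelled detections (pixels)
                      [505.0, 260.0],
                      [500.0, 350.0],
                      [405.0, 340.0]])
K = np.array([[800.0, 0.0, 320.0],             # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)  # no lens distortion assumed
R, _ = cv2.Rodrigues(rvec)                                     # target rotation matrix
```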

    Development of an Exteroceptive Sensor Suite on Unmanned Surface Vessels for Real-Time Classification of Navigational Markers

    This thesis presents the development of an exteroceptive sensor suite for real-time detection and classification of navigational markers on Unmanned Surface Vessels. Three sensors were used to complete this task: a 3D LIDAR and two visible-light cameras. First, all LIDAR points were transformed from the sensor's reference frame to the local frame using a Kalman filter to estimate the instantaneous vehicle pose. Next, objects were selected from the LIDAR data and classified using either a Multivariate Gaussian or a Parzen Window classifier. Both produce 96% accuracy or better; however, the Multivariate Gaussian classifier ran considerably faster than the Parzen classifier and was simpler to implement, and it was therefore chosen as the final classifier. Additionally, region-of-interest images based on the Multivariate Gaussian classification were extracted from the full camera images to improve knowledge of each marker. This sensor suite and set of algorithms underwent extensive testing on Embry-Riddle's Maritime RobotX and RoboBoat platforms and greatly improves the ability to quickly and accurately identify multiple navigational markers, which is paramount to the success of any Unmanned Surface Vessel.
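
    A minimal sketch of a multivariate Gaussian classifier of the kind described above, not the thesis's implementation: one Gaussian is fitted per marker class from labelled feature vectors, and new objects are labelled by the highest log-likelihood plus class prior.

```python
# Multivariate Gaussian classifier: fit per-class mean/covariance, predict by
# maximum (log prior + Gaussian log-likelihood). Feature vectors are assumed
# to be rows of X; labels are in y.
import numpy as np

class GaussianClassifier:
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.params_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            mean = Xc.mean(axis=0)
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])   # regularised
            self.params_[c] = (mean, np.linalg.inv(cov),
                               np.log(np.linalg.det(cov)), np.log(len(Xc) / len(X)))
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            mean, inv_cov, log_det, log_prior = self.params_[c]
            d = X - mean
            maha = np.einsum("ij,jk,ik->i", d, inv_cov, d)   # squared Mahalanobis distance
            scores.append(-0.5 * (maha + log_det) + log_prior)
        return self.classes_[np.argmax(np.stack(scores, axis=1), axis=1)]
```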

    Comparison Marker-Based and Markerless Motion Capture Systems in Gait Biomechanics During Running

    Background: Markerless (ML) motion capture systems have recently become available for biomechanics applications. Evidence has indicated the potential feasibility of using an ML system to analyze lower-extremity kinematics. However, no research has examined ML systems' estimation of lower-extremity joint moments and powers. Objectives: This study primarily aimed to compare lower-extremity joint moments and powers estimated by marker-based (MB) and ML motion capture systems during treadmill running. The secondary purpose was to investigate whether movement speed affects the ML system's performance. Methods: Sixteen volunteers ran on a treadmill for 120 s per trial at speeds of 2.24, 2.91, and 3.58 m/s. Kinematic data were recorded simultaneously by 8 infrared cameras and 8 high-resolution video cameras, and force data were recorded via an instrumented treadmill. Results: Compared to the MB system, the ML system estimated greater increases in hip and knee joint kinetics with faster speeds during the swing phase. Greater increases in ankle joint moments with speed estimated by the ML system were observed in early swing, whereas greater ankle joint powers occurred at initial stance. Conclusions: These observations indicate that inconsistent segment pose estimates (mainly the segment center of mass estimated by ML lying farther from the relevant distal joint center) may lead to systematic differences in joint moments and powers estimated by MB and ML systems. Despite the promising applications of the ML system in clinical settings, systematic ML overestimation requires extra attention.
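
    The joint powers compared here follow from the inverse-dynamics outputs; as a generic illustration (not the study's pipeline), instantaneous joint power is the dot product of the joint moment vector and the joint angular velocity expressed in the same frame. Inputs below are placeholders.

```python
# Joint power from inverse-dynamics outputs: P(t) = M(t) . omega(t).
# moment_nm and angular_velocity_rad_s are (n_frames, 3) arrays in a common frame.
import numpy as np

def joint_power(moment_nm, angular_velocity_rad_s):
    return np.einsum("ij,ij->i", np.asarray(moment_nm, float),
                     np.asarray(angular_velocity_rad_s, float))   # watts per frame
```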

    Advanced photon counting techniques for long-range depth imaging

    The Time-Correlated Single-Photon Counting (TCSPC) technique has emerged as a candidate approach for Light Detection and Ranging (LiDAR) and active depth imaging applications. The work of this Thesis concentrates on the development and investigation of functional TCSPC-based long-range scanning time-of-flight (TOF) depth imaging systems. Although these systems have several different configurations and functions, all can perform depth profiling of remote targets at low light levels and with good surface-to-surface depth resolution. Firstly, a Superconducting Nanowire Single-Photon Detector (SNSPD) and an InGaAs/InP Single-Photon Avalanche Diode (SPAD) module were employed to develop kilometre-range TOF depth imaging systems at wavelengths of ~1550 nm. Secondly, a TOF depth imaging system at a wavelength of 817 nm incorporating a Complementary Metal-Oxide-Semiconductor (CMOS) 32×32 Si-SPAD detector array was developed. This system was used with structured illumination to examine the potential for covert, eye-safe and high-speed depth imaging. To improve the light-coupling efficiency onto the detectors, the arrayed CMOS Si-SPAD detector chips were integrated with microlens arrays using flip-chip bonding technology, which improved the fill factor by up to a factor of 15. Thirdly, a multispectral TCSPC-based full-waveform LiDAR system was developed using a tunable broadband pulsed supercontinuum laser source providing simultaneous multispectral illumination at wavelengths of 531, 570, 670 and ~780 nm. The multispectral reflectance data acquired on a tree were used to determine physiological parameters relating to biomass and foliage photosynthetic efficiency as a function of the tree's depth profile. Fourthly, depth images were estimated using spatial correlation techniques in order to reduce the aggregate number of photons required for depth reconstruction with low error. A depth imaging system was characterised and re-configured to reduce the effects of scintillation due to atmospheric turbulence, and depth images were analysed in terms of spatial and depth resolution.
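
    As a rough illustration of the TCSPC ranging principle underlying these systems (the bin width, histogram length and instrumental-response input are assumptions, not parameters from the Thesis), photon arrival times can be binned into a timing histogram, cross-correlated with the instrumental response, and the correlation peak converted to range via d = c·t/2.

```python
# TCSPC depth estimation sketch: histogram photon arrival times, locate the
# return by cross-correlation with the instrumental response, convert the
# round-trip time of flight to a one-way range. Parameters are illustrative.
import numpy as np

C = 299_792_458.0            # speed of light, m/s
BIN_WIDTH = 2e-12            # 2 ps timing bins (illustrative)

def estimate_depth(arrival_times_s, instrument_response, n_bins=200_000):
    hist, _ = np.histogram(arrival_times_s, bins=n_bins,
                           range=(0.0, n_bins * BIN_WIDTH))
    corr = np.correlate(hist.astype(float), instrument_response, mode="same")
    peak_bin = int(np.argmax(corr))            # most likely return-time bin
    tof = peak_bin * BIN_WIDTH                 # round-trip time of flight, seconds
    return C * tof / 2.0                       # one-way range in metres
```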

    Evaluation of surface defect detection in reinforced concrete bridge decks using terrestrial LiDAR

    Routine bridge inspections require labor-intensive and highly subjective visual interpretation to determine bridge deck surface condition. Light Detection and Ranging (LiDAR), a relatively new class of survey instrument, has become a popular and increasingly used technology for providing as-built and inventory data in civil applications. While an increasing number of private and governmental agencies possess terrestrial and mobile LiDAR systems, understanding of the technology's capabilities and potential applications continues to evolve. LiDAR is a line-of-sight instrument and, as such, care must be taken when establishing scan locations and resolution so that data are captured at a resolution adequate for defining the features that contribute to the analysis of bridge deck surface condition. Information such as the location, area, and volume of spalling on deck surfaces, undersides, and support columns can be derived from properly collected LiDAR point clouds. These point clouds contain quantitative surface-condition information, enabling more accurate structural health monitoring. LiDAR scans were collected at three study bridges, each displaying a different degree of degradation. A variety of commercially available analysis tools and an independently developed algorithm written in ArcGIS Python (ArcPy) were used to locate and quantify surface defects, including the location, volume, and area of spalls. The results were displayed visually and numerically in a user-friendly web-based decision support tool integrating prior bridge condition metrics for comparison. LiDAR data processing procedures, along with the strengths and limitations of point clouds for defining features useful for assessing bridge deck condition, are discussed. Point-cloud density and incidence angle are two attributes that must be managed carefully to ensure the data collected are of high quality and useful for bridge condition evaluation. When collected properly, LiDAR data can be analyzed to provide a useful data set from which to derive bridge deck condition information.
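
    The ArcPy algorithm itself is not reproduced in this abstract; the following is a minimal sketch of one generic way to flag spalls in a deck point cloud and to approximate their area and volume. The depth threshold and grid cell size are illustrative assumptions, not values from the study.

```python
# Spall flagging sketch: fit a reference plane to the deck by least squares,
# flag points more than a threshold below it, and approximate spall area and
# volume on a rasterised grid. Threshold and cell size are illustrative.
import numpy as np

def flag_spalls(points, depth_threshold=0.01, cell_size=0.01):
    """points: (n, 3) deck points in metres; returns spall mask, area (m^2), volume (m^3)."""
    xy = np.c_[points[:, :2], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(xy, points[:, 2], rcond=None)   # plane z = a*x + b*y + c
    residual = points[:, 2] - xy @ coeffs                        # signed deviation from plane
    spall = residual < -depth_threshold                          # points below the deck surface
    if not spall.any():
        return spall, 0.0, 0.0
    cells = set(map(tuple, np.floor(points[spall, :2] / cell_size).astype(int)))
    area = len(cells) * cell_size ** 2                           # occupied plan area
    volume = float(-residual[spall].mean()) * area               # mean spall depth x plan area
    return spall, area, volume
```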