9 research outputs found

    Development of a non-invasive motion capture system for swimming biomechanics

    Sports researchers and coaches currently have no practical tool that can accurately and rapidly measure the 3D kinematics of swimmers. Established motion capture methods in biomechanics are not well suited for underwater use, because they i) are not accurate enough (like depth-based systems or the visual hull), ii) would impair the movement of swimmers (like sensor- and marker-based systems), or iii) are too time consuming (like manual digitisation). The ideal for swimming motion capture would be a markerless motion capture system that requires only a few cameras. Such a system would automatically extract silhouettes and 2D joint locations from the videos recorded by the cameras, and fit a generic 3D body model to these constraints. The main challenge in developing such a system for swimming motion capture lies in the development of algorithms for silhouette extraction and 2D pose detection (i.e., localisation of joints in image coordinates), which need to perform well on images of swimmers, a task at which currently available algorithms fail. The aim of this PhD was the development of such algorithms. Existing datasets do not contain images of swimmers, making it impossible to train algorithms that would perform well in this domain. Therefore, during the PhD two datasets of images of swimmers were constructed and hand-labelled: one, called Scylla, for silhouette extraction (3,100 images); and one, called Charybdis, for 2D pose detection (8,000 images). Scylla and Charybdis are the first datasets developed specifically for training algorithms to perform well on images of swimmers. Using these datasets, two algorithms were developed during this PhD: FISHnet, for silhouette extraction, and POSEidon, for 2D pose detection. The novelty of FISHnet (which outperformed state-of-the-art algorithms on Scylla) lies in its ability to predict outputs at the same resolution as its inputs, allowing it to reconstruct fine-grained silhouettes. The novelty of POSEidon lies in its unique structure, which allows it to directly regress the x and y coordinates of joints without needing heatmaps. POSEidon is almost as accurate as humans at locating the spinal joints of swimmers, which are essential constraints for fitting 3D models. Using these two algorithms, researchers will, in the future, be able to assemble a markerless motion capture system for swimming, which will contribute to improving our understanding of swimming biomechanics and provide coaches with a tool for monitoring the technique of swimmers.
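    The abstract gives no implementation details; the sketch below only illustrates the general idea of regressing joint coordinates directly from an image, without intermediate heatmaps. The backbone, layer sizes, joint count, and loss are assumptions for illustration and do not reproduce the actual POSEidon architecture.

```python
# Minimal sketch (not POSEidon itself) of direct 2D joint-coordinate regression.
# Backbone, layer sizes, joint count, and loss are illustrative assumptions.
import torch
import torch.nn as nn

class DirectPoseRegressor(nn.Module):
    def __init__(self, num_joints: int = 13):  # assumed number of swimmer joints
        super().__init__()
        self.num_joints = num_joints
        # Small convolutional feature extractor (placeholder backbone).
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Regress (x, y) for every joint directly, with no intermediate heatmaps.
        self.head = nn.Linear(128, num_joints * 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).flatten(1)
        return self.head(z).view(-1, self.num_joints, 2)  # normalised coordinates

# Toy usage: one 256x256 RGB frame regressed against hand-labelled joints.
model = DirectPoseRegressor()
frame = torch.rand(1, 3, 256, 256)
target = torch.rand(1, 13, 2)
loss = nn.functional.l1_loss(model(frame), target)
loss.backward()
```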

    Effect of Silhouette Accuracy on Visual Hull Quality


    FISHnet: Learning to Segment the Silhouettes of Swimmers

    We present a novel silhouette extraction algorithm designed for the binary segmentation of swimmers underwater. The intended use of this algorithm is within a 2D-to-3D pipeline for the markerless motion capture of swimmers, a task which has not been achieved satisfactorily, partly due to the absence of silhouette extraction methods that work well on images of swimmers. Our algorithm, FISHnet, was trained on the novel Scylla dataset, which contains 3,100 images (and corresponding hand-traced silhouettes) of swimmers underwater, and achieved a Dice score of 0.9712 on its test data. Our algorithm uses a U-Net-like architecture with VGG16 as a backbone. It introduces two novel modules: a modified version of the Semantic Embedding Branch module from ExFuse, which increases the complexity of the features learned by the layers of the encoder; and the Spatial Resolution Enhancer module, which increases the spatial resolution of the features of the decoder before they are skip-connected with the features of the encoder. The contribution of these two modules to the performance of our network was marginal, and we attribute this result to the limited amount of data on which our network was trained. Nevertheless, our model outperformed state-of-the-art silhouette extraction algorithms (namely DeepLabv3+) on Scylla, and it is the first algorithm developed specifically for the task of accurately segmenting the silhouettes of swimmers.
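    For reference, the reported 0.9712 is a Dice score over binary silhouettes. The sketch below shows one common way to compute it; the binarisation threshold and smoothing term are assumptions, not details taken from the paper.

```python
# Minimal sketch of a Dice score for binary silhouettes: 2|A∩B| / (|A| + |B|).
# The 0.5 threshold and the smoothing term eps are illustrative assumptions.
import torch

def dice_score(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> float:
    pred = (pred > 0.5).float()      # binarise the predicted mask
    target = (target > 0.5).float()  # binarise the hand-traced mask
    intersection = (pred * target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Toy usage with a random prediction against a random "hand-traced" mask.
pred_mask = torch.rand(480, 640)
true_mask = (torch.rand(480, 640) > 0.5).float()
print(f"Dice: {dice_score(pred_mask, true_mask):.4f}")
```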

    Analysis, Characterization, Prediction and Attribution of Extreme Atmospheric Events with Machine Learning: a Review

    Atmospheric Extreme Events (EEs) cause severe damage to human societies and ecosystems. The frequency and intensity of EEs and other associated events are increasing under current climate change and global warming. The accurate prediction, characterization, and attribution of atmospheric EEs is therefore a key research field, in which many groups are currently working by applying different methodologies and computational tools. Machine Learning (ML) methods have emerged in recent years as powerful techniques to tackle many of the problems related to atmospheric EEs. This paper reviews the ML algorithms applied to the analysis, characterization, prediction, and attribution of the most important atmospheric EEs. A summary of the most used ML techniques in this area, and a comprehensive critical review of the literature related to ML in EEs, are provided. A number of examples are discussed, and perspectives and outlooks on the field are drawn.
    Comment: 93 pages, 18 figures, under review

    Optimisation-based refinement of genesis indices for tropical cyclones

    Tropical cyclone genesis indices are valuable tools for studying the relationship between large-scale environmental fields and the genesis of tropical cyclones, supporting the identification of future trends in cyclone genesis. However, their formulations are generally derived from simple statistical models (e.g., multiple linear regression) and are not optimised globally. In this paper, we present a simple framework for optimising genesis indices given a user-specified trade-off between two performance metrics, which measure how well an index captures the spatial and interannual variability of tropical cyclone genesis. We apply the proposed framework to the popular Emanuel and Nolan Genesis Potential Index, yielding new, optimised formulas that correspond to different trade-offs between spatial and interannual variability. Results show that our refined indices can improve the performance of the Emanuel and Nolan index by up to 8% for spatial variability and 16%–22% for interannual variability; this improvement was found to be statistically significant (p < 0.01). Lastly, by analysing the formulas found, we give some insights into the role of the different inputs of the index in maximising one metric or the other.
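    The abstract does not describe the optimisation procedure; the sketch below only illustrates the general idea of tuning the exponents of a GPI-like index against a user-weighted blend of a spatial metric and an interannual metric. The index form, the metric definitions, the optimiser, and all variable names are assumptions for illustration, not the paper's framework.

```python
# Minimal sketch of optimising a GPI-like genesis index for a user-specified
# trade-off between spatial and interannual variability. Synthetic data and
# pattern-correlation metrics are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Synthetic monthly environmental fields with shape (years, months, lat, lon).
vort, humid, pot_int, shear = (rng.random((30, 12, 20, 40)) + 0.1 for _ in range(4))
obs_genesis = rng.random((30, 12, 20, 40))  # stand-in for observed genesis counts

def genesis_index(p):
    a, b, c, d = p
    # Assumed GPI-like product of powers of the environmental inputs.
    return vort**a * humid**b * pot_int**c * (1.0 + shear)**(-d)

def spatial_corr(idx):
    # Correlation between long-term mean maps (spatial variability metric).
    return np.corrcoef(idx.mean(axis=(0, 1)).ravel(),
                       obs_genesis.mean(axis=(0, 1)).ravel())[0, 1]

def interannual_corr(idx):
    # Correlation between globally summed annual series (interannual metric).
    return np.corrcoef(idx.sum(axis=(1, 2, 3)),
                       obs_genesis.sum(axis=(1, 2, 3)))[0, 1]

def objective(p, w=0.5):
    idx = genesis_index(p)
    # Maximise a weighted blend of the two metrics; w sets the trade-off.
    return -(w * spatial_corr(idx) + (1.0 - w) * interannual_corr(idx))

result = minimize(objective, x0=[1.5, 3.0, 3.0, 2.0], method="Nelder-Mead")
print("optimised exponents:", result.x)
```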

    Machine learning for distinguishing saudi children with and without autism via eye-tracking data

    Background: Despite the prevalence of Autism Spectrum Disorder (ASD) globally, there is a knowledge gap pertaining to autism in Arab nations. Recognizing the need for validated biomarkers for ASD, our study leverages eye-tracking technology to understand gaze patterns associated with ASD, focusing on joint attention (JA) and atypical gaze patterns during face perception. While previous studies typically evaluate a single eye-tracking metric, our research combines multiple metrics to capture the multidimensional nature of autism, focusing on dwell times on the eyes, the left facial side, and joint attention.
    Methods: We recorded data from 104 participants (41 neurotypical, mean age: 8.21 ± 4.12 years; 63 with ASD, mean age: 8 ± 3.89 years). The data collection consisted of a series of visual stimuli of cartoon faces of humans and animals, presented to the participants in a controlled environment. During each stimulus, the eye movements of the participants were recorded and analyzed, extracting metrics such as time to first fixation and dwell time. We then used these data to train a number of machine learning classification algorithms to determine whether these biomarkers can be used to diagnose ASD.
    Results: We found no significant difference in eye-dwell time between the autistic and control groups on human or animal eyes. However, autistic individuals focused less on the left side of both human and animal faces, indicating reduced left visual field (LVF) bias. They also showed slower response times and shorter dwell times on congruent objects during joint attention (JA) tasks, indicating diminished reflexive joint attention. No significant difference was found in time spent on incongruent objects during JA tasks. These results suggest potential eye-tracking biomarkers for autism. The best-performing algorithm was the random forest, which achieved accuracy = 0.76 ± 0.08, precision = 0.78 ± 0.13, recall = 0.84 ± 0.07, and F1 = 0.80 ± 0.09.
    Conclusions: Although the autism group displayed notable differences in reflexive joint attention and left visual field bias, dwell time on the eyes was not significantly different. Nevertheless, the machine learning model trained on these data proved effective at diagnosing ASD, showing the potential of these biomarkers. Our study shows promising results and opens up potential for further exploration in this under-researched geographical context.
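    As an illustration of the classification step, the sketch below trains a random forest on stand-in eye-tracking features and reports cross-validated accuracy, precision, recall, and F1 as mean ± standard deviation. The feature set, fold count, and hyperparameters are assumptions, not the study's protocol.

```python
# Minimal sketch of a random-forest classifier on eye-tracking features.
# Feature names, fold count, and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(0)
# Synthetic stand-in: 104 participants x 3 features
# (dwell time on eyes, dwell time on left facial side, JA response time).
X = rng.random((104, 3))
y = np.concatenate([np.zeros(41), np.ones(63)])  # 0 = neurotypical, 1 = ASD

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_validate(clf, X, y, cv=5,
                        scoring=["accuracy", "precision", "recall", "f1"])
for metric in ["accuracy", "precision", "recall", "f1"]:
    vals = scores[f"test_{metric}"]
    print(f"{metric}: {vals.mean():.2f} ± {vals.std():.2f}")
```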

    Tribenzylamine C−H Activation and Intermolecular Hydrogen Transfer Promoted by WCl6

    The 1:1 molar reaction of WCl6 with tribenzylamine (tba), in dichloromethane, selectively afforded the iminium salt [(PhCH2)2N=CHPh][WCl6], 1, and the ammonium salt [tbaH][WCl6], 2, in equimolar amounts. The products were fully characterized by means of spectroscopic and analytical methods and X-ray diffractometry. Density functional theory calculations were carried out with the aim of elucidating the mechanistic aspects of the reaction.