
    The efficacy of virtual reality in professional soccer

    Professional soccer clubs have taken an interest in virtual reality; however, only a paucity of evidence exists to support its use in the soccer training-ground environment. Further, several soccer virtual reality companies have begun providing solutions to teams, claiming to test specific characteristics of players, yet supportive evidence for certain measurement properties remains absent from the literature. The aim of this thesis was to explore the efficacy of virtual reality in the professional football training-ground environment. To do so, this thesis explored the fundamental measurement properties of soccer-specific virtual reality tests, along with the perceptions of the professional coaches, backroom staff, and players who could use virtual reality. The first research study (Chapter 3) aimed to quantify the learning effect during familiarisation trials of a soccer-specific virtual reality task. Thirty-four professional soccer players (age, stature, and body mass, mean (SD): 20 (3.4) years; 180 (7) cm; 79 (8) kg) participated in six trials of a virtual reality soccer passing task. The task required participants to receive and pass 30 virtual soccer balls into highlighted mini-goals that surrounded the participant. The number of successful passes was recorded in each trial. The one-sided Bayesian paired-samples t-test indicated very strong evidence in favour of the alternative hypothesis (H1) (BF10 = 46.5, d = 0.56 [95% CI = 0.2 to 0.92]) for improvements in total goals scored between trial 1: 13.6 (3.3) and trial 2: 16 (3.3).
Further, the Bayesian paired-samples equivalence t-tests indicated strong evidence in favour of H1 (BF10 = 10.2, d = 0.24 [95% CI = -0.09 to 0.57]) for equivalence between trial 4: 16.7 (3.7) and trial 5: 18.2 (4.7); extreme evidence in favour of H1 (BF10 = 132, d = -0.02 [95% CI = -0.34 to 0.30]) for equivalence between trials 5 and 6: 18.1 (3.5); and moderate evidence in favour of H1 (BF10 = 8.4, d = 0.26 [95% CI = -0.08 to 0.59]) for equivalence between trials 4 and 6. Sufficient evidence indicated that a learning effect took place between the first two trials, and that up to five trials might be necessary for performance to plateau in a specific virtual reality soccer passing task. The second research study (Chapter 4) aimed to assess the validity of a soccer passing task by comparing passing ability between virtual reality and real-world conditions. A previously validated soccer passing test was replicated in a virtual reality environment. Twenty-nine soccer players participated in the study, which required them to complete as many passes as possible between two rebound boards within 45 s. Counterbalancing determined the condition order; for each condition, participants completed four familiarisation trials and two recorded trials, with the best score used for analysis. Sense of presence and fidelity were also assessed via questionnaires to understand how representative the virtual environment was compared to the real world. Results showed a difference between conditions (EMM = -3.9, 95% HDI = -5.1 to -2.7), with the number of passes being greater in the real world (EMM = 19.7, 95% HDI = 18.6 to 20.7) than in virtual reality (EMM = 15.7, 95% HDI = 14.7 to 16.8). Further, several subjective differences in fidelity between the two conditions were reported, notably that controlling the ball was suggested to have been more difficult in virtual reality than in the real world.
The last research study (Chapter 5) aimed to compare and quantify the perceptions of virtual reality use in soccer, and to model behavioural intentions to use this technology. This study surveyed the perceptions of coaches, support staff, and players in relation to their knowledge, expectations, influences, and barriers regarding the use of virtual reality via an internet-based questionnaire. To model behavioural intention, modified questions and constructs from the Unified Theory of Acceptance and Use of Technology were used, and the model was analysed through partial least squares structural equation modelling. Respondents represented coaches and support staff (n = 134) and players (n = 64). All respondents generally agreed that virtual reality should be used to improve tactical awareness and cognition, with its use primarily in performance analysis and rehabilitation settings. Generally, coaches and support staff agreed that monetary cost, coach buy-in, and a limited evidence base were barriers towards its use. In a sub-sample of coaches and support staff without access to virtual reality (n = 123), performance expectancy was the strongest construct in explaining behavioural intention to use virtual reality, followed by the facilitating conditions (i.e., barriers) construct, which had a negative association with behavioural intention. This thesis aimed to explore the measurement properties of soccer-specific virtual reality tests, and the perceptions of staff and players who might use the technology. The key findings from exploring the measurement properties were (1) evidence of a learning curve, suggesting the need for multiple familiarisation trials before collecting data, and (2) a lack of evidence to support the validity of a virtual reality soccer passing test, as evidenced by a lack of agreement with a real-world equivalent. This finding raises questions about the suitability of virtual reality for measuring passing-related skill performance.
The key findings from investigating the perceptions of users included using the technology to improve cognition and tactical awareness, and using it in rehabilitation and performance analysis settings. Future intention to use was generally positive and driven by performance-related factors, yet several barriers exist that may prevent widespread use. In Chapter 7 of the thesis, a reflective account is presented for the reader, detailing some of the interactions with coaches, support staff, and players in relation to the personal, moral, and ethical challenges faced as a practitioner-researcher working and studying in a professional soccer club.
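The paired-samples effect sizes (Cohen's d) reported for the familiarisation trials can be sketched in a few lines. The scores below are illustrative stand-ins, not the thesis data, and a full Bayesian analysis would additionally compute Bayes factors with dedicated statistical software.

```python
import numpy as np

def paired_cohens_d(x, y):
    """Cohen's d for paired samples: mean of the differences
    divided by the standard deviation of the differences."""
    diff = np.asarray(y, dtype=float) - np.asarray(x, dtype=float)
    return diff.mean() / diff.std(ddof=1)

# Illustrative passing scores (not the thesis data): trial 1 vs trial 2
trial1 = np.array([12, 14, 13, 15, 11, 16, 13, 14])
trial2 = np.array([15, 16, 14, 18, 13, 19, 16, 15])
d = paired_cohens_d(trial1, trial2)  # positive d: trial 2 scores higher
```

A positive d here indicates improvement from trial 1 to trial 2, mirroring the learning effect described above.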

    Neural Architecture Search for Image Segmentation and Classification

    Deep learning (DL) is a class of machine learning algorithms that relies on deep neural networks (DNNs) for computations. Unlike traditional machine learning algorithms, DL can learn from raw data directly and effectively. Hence, DL has been successfully applied to tackle many real-world problems. When applying DL to a given problem, the primary task is designing the optimum DNN. This task relies heavily on human expertise, is time-consuming, and requires many trial-and-error experiments. This thesis aims to automate the laborious task of designing the optimum DNN by exploring the neural architecture search (NAS) approach. Here, we propose two new NAS algorithms for two real-world problems: pedestrian lane detection for assistive navigation and hyperspectral image segmentation for biosecurity scanning. Additionally, we introduce a new dataset-agnostic predictor of neural network performance, which can be used to speed up NAS algorithms that require the evaluation of candidate DNNs.
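As a rough illustration of the search loop that NAS automates, consider random search over a toy space. Everything here is hypothetical: the search space, and the `toy_score` function that stands in for the expensive train-and-validate step (or for a performance predictor like the one proposed in the thesis).

```python
import random

# Hypothetical toy search space: depth and width choices only
SEARCH_SPACE = {"layers": [2, 4, 6], "width": [16, 32, 64]}

def toy_score(arch):
    """Stand-in for training and validating a candidate DNN.
    Purely illustrative: rewards depth and width, penalises size."""
    return (arch["layers"] * 0.1 + arch["width"] * 0.01
            - arch["layers"] * arch["width"] * 0.001)

def random_search(n_trials, seed=0):
    """Sample architectures at random and keep the best one seen."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        s = toy_score(arch)
        if s > best_score:
            best_arch, best_score = arch, s
    return best_arch, best_score

best_arch, best_score = random_search(n_trials=20)
```

Real NAS algorithms replace both the sampler (with evolutionary or gradient-based strategies) and `toy_score` (with actual training or a learned predictor), but the loop structure is the same.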

    Subgroup discovery for structured target concepts

    The main object of study in this thesis is subgroup discovery, a theoretical framework for finding subgroups in data—i.e., named sub-populations—whose behaviour with respect to a specified target concept is exceptional when compared to the rest of the dataset. This is a powerful tool that conveys crucial information to a human audience, but despite past advances it has been limited to simple target concepts. In this work we propose algorithms that bring this framework to novel application domains. We introduce the concept of representative subgroups, which we use not only to ensure the fairness of a sub-population with regard to a sensitive trait, such as race or gender, but also to go beyond known trends in the data. For entities with additional relational information that can be encoded as a graph, we introduce a novel measure of robust connectedness which improves on established alternative measures of density; we then provide a method that uses this measure to discover which named sub-populations are better connected. Our contributions within subgroup discovery culminate in the introduction of kernelised subgroup discovery: a novel framework that enables the discovery of subgroups on i.i.d. target concepts with virtually any kind of structure. Importantly, our framework additionally provides a concrete and efficient tool that works out of the box without any modification, apart from specifying the Gramian of a positive definite kernel. For use within kernelised subgroup discovery, but also in any other kind of kernel method, we additionally introduce a novel random walk graph kernel. Our kernel allows fine-tuning of the alignment between the vertices of the two compared graphs during the counting of random walks, and we also propose meaningful structure-aware vertex labels to exploit this new capability.
With these contributions we thoroughly extend the applicability of subgroup discovery and ultimately re-define it as a kernel method.
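For readers unfamiliar with subgroup quality measures, a common choice for binary targets is Weighted Relative Accuracy (WRAcc), which trades off a subgroup's coverage against how much its target rate deviates from the overall rate. The sketch below is a generic illustration of that standard measure, not the kernelised framework proposed in the thesis.

```python
def wracc(subgroup_mask, target):
    """Weighted Relative Accuracy for a binary target:
    coverage * (target rate inside subgroup - overall target rate)."""
    n = len(target)
    n_s = sum(subgroup_mask)
    if n_s == 0:
        return 0.0
    p_overall = sum(target) / n
    p_sub = sum(t for m, t in zip(subgroup_mask, target) if m) / n_s
    return (n_s / n) * (p_sub - p_overall)

# Toy data: 4 of 8 records are positive overall; the subgroup
# covers 3 records, all of them positive
mask   = [1, 1, 1, 0, 0, 0, 0, 0]
target = [1, 1, 1, 1, 0, 0, 0, 0]
q = wracc(mask, target)  # (3/8) * (1.0 - 0.5) = 0.1875
```

Subgroup discovery algorithms search the space of candidate descriptions for subgroups maximising such a quality function; the kernelised framework above generalises the target side of this computation to structured targets.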

    Toward Efficient and Robust Computer Vision for Large-Scale Edge Applications

    The past decade has witnessed remarkable advancements in computer vision and deep learning algorithms, ushering in a transformative wave of large-scale edge applications across various industries. These image processing methods, however, still encounter numerous challenges when it comes to meeting real-world demands, especially in terms of accuracy and latency at scale. Indeed, striking a balance among efficiency, robustness, and scalability remains a common obstacle. This dissertation investigates these issues in the context of different computer vision tasks, including image classification, semantic segmentation, depth estimation, and object detection. We introduce novel solutions focusing on adjustable neural networks, joint multi-task architecture search, and generalized supervision interpolation. The first obstacle revolves around the ability to trade off between speed and accuracy in convolutional neural networks (CNNs) during inference on resource-constrained platforms. Despite their progress, CNNs are typically monolithic at runtime, which can present practical difficulties since computational budgets may vary over time. To address this, we introduce the Any-Width Network, an adjustable-width CNN architecture that utilizes a novel Triangular Convolution module to enable fine-grained control over speed and accuracy during inference. The second challenge concerns the computationally demanding nature of dense prediction tasks such as semantic segmentation and depth estimation. This issue becomes especially problematic for edge platforms with limited resources. To tackle this, we propose a novel and scalable framework named EDNAS. EDNAS leverages the synergistic relationship between Multi-Task Learning and hardware-aware Neural Architecture Search to significantly enhance on-device speed and accuracy of dense predictions. Finally, to improve the robustness of object detection, we introduce a novel data mixing augmentation.
While mixing techniques such as Mixup have proven successful in image classification, their application to object detection is non-trivial due to spatial misalignment, foreground/background distinction, and instance multiplicity. To address these issues, we propose a generalized data mixing principle, Supervision Interpolation, and its simple yet effective implementation, LossMix. By addressing these challenges, this dissertation aims to facilitate better efficiency, accuracy, and scalability of computer vision and deep learning algorithms and to contribute to the advancement of large-scale edge applications across different domains.
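Mixup, the classification-proven mixing technique mentioned above, can be sketched as follows. The arrays are toy stand-ins for images and one-hot labels, and this is the generic Mixup formulation (convex combination of inputs and labels with a Beta-distributed weight), not the LossMix method proposed in the dissertation.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mixup: convex combination of two examples and their one-hot
    labels, with mixing weight lambda ~ Beta(alpha, alpha)."""
    if rng is None:
        rng = np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1 - lam) * x2
    y = lam * y1 + (1 - lam) * y2
    return x, y, lam

# Two toy "images" with one-hot labels for a 3-class problem
x1, y1 = np.full((4, 4), 1.0), np.array([1.0, 0.0, 0.0])
x2, y2 = np.full((4, 4), 0.0), np.array([0.0, 1.0, 0.0])
x, y, lam = mixup(x1, y1, x2, y2)
```

For detection, naively mixing images this way breaks down because boxes from the two images do not align spatially, which is precisely the gap Supervision Interpolation is designed to close.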

    Improving CLIP Training with Language Rewrites

    Contrastive Language-Image Pre-training (CLIP) stands as one of the most effective and scalable methods for training transferable vision models using paired image and text data. CLIP models are trained using contrastive loss, which typically relies on data augmentations to prevent overfitting and shortcuts. However, in the CLIP training paradigm, data augmentations are exclusively applied to image inputs, while language inputs remain unchanged throughout the entire training process, limiting the exposure of diverse texts to the same image. In this paper, we introduce Language augmented CLIP (LaCLIP), a simple yet highly effective approach to enhance CLIP training through language rewrites. Leveraging the in-context learning capability of large language models, we rewrite the text descriptions associated with each image. These rewritten texts exhibit diversity in sentence structure and vocabulary while preserving the original key concepts and meanings. During training, LaCLIP randomly selects either the original texts or the rewritten versions as text augmentations for each image. Extensive experiments on CC3M, CC12M, RedCaps and LAION-400M datasets show that CLIP pre-training with language rewrites significantly improves the transfer performance without computation or memory overhead during training. Specifically for ImageNet zero-shot accuracy, LaCLIP outperforms CLIP by 8.2% on CC12M and 2.4% on LAION-400M. Code is available at https://github.com/LijieFan/LaCLIP
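The symmetric contrastive objective that CLIP-style training optimises can be sketched in NumPy. This is a generic InfoNCE formulation over a batch of paired embeddings, not the LaCLIP training code; in LaCLIP, the text embedding fed to this loss would come from either the original caption or one of its rewrites, chosen at random per image.

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over image/text embeddings.
    Matching pairs sit on the diagonal of the similarity matrix."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (batch, batch) similarities
    labels = np.arange(len(logits))

    def xent(lg):
        # numerically stable log-softmax cross-entropy on the diagonal
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average of image-to-text and text-to-image directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Perfectly aligned pairs (identical embeddings) yield a lower loss than mismatched ones, which is what drives the model to pull matching image/text pairs together.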

    Characterizing the Top Cycle via Strategyproofness

    Gibbard and Satterthwaite have shown that the only single-valued social choice functions (SCFs) that satisfy non-imposition (i.e., the function's range coincides with its codomain) and strategyproofness (i.e., voters are never better off by misrepresenting their preferences) are dictatorships. In this paper, we consider set-valued social choice correspondences (SCCs) that are strategyproof according to Fishburn's preference extension and, in particular, the top cycle, an attractive SCC that returns the maximal elements of the transitive closure of the weak majority relation. Our main theorem implies that, under mild conditions, the top cycle is the only non-imposing strategyproof SCC whose outcome only depends on the quantified pairwise comparisons between alternatives. This result effectively turns the Gibbard-Satterthwaite impossibility into a complete characterization of the top cycle by moving from SCFs to SCCs. It is obtained as a corollary of a more general characterization of strategyproof SCCs. Comment: This paper is published at Theoretical Economics: https://econtheory.org/ojs/index.php/te/article/view/512
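The top cycle as defined above is straightforward to compute from pairwise majority comparisons: form the weak majority relation, take its transitive closure, and return the maximal elements. A small sketch (the margins matrix and the toy profiles are illustrative):

```python
def top_cycle(margins):
    """Top cycle: maximal elements of the transitive closure of the
    weak majority relation. margins[i][j] = number of voters who
    prefer alternative i to alternative j."""
    n = len(margins)
    # weak majority relation: i >= j iff at least as many voters
    # prefer i to j as prefer j to i
    rel = [[margins[i][j] >= margins[j][i] for j in range(n)]
           for i in range(n)]
    # transitive closure (Floyd-Warshall style)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                rel[i][j] = rel[i][j] or (rel[i][k] and rel[k][j])
    # the relation is complete, so maximal elements dominate everything
    return [i for i in range(n) if all(rel[i][j] for j in range(n))]

# 3 alternatives, 5 voters: alternative 0 beats 1 and 2, 1 beats 2,
# so 0 is a Condorcet winner and the top cycle is {0}
margins = [[0, 3, 4],
           [2, 0, 3],
           [1, 2, 0]]
```

With a majority cycle (0 beats 1, 1 beats 2, 2 beats 0) the closure relates every pair in both directions, so the top cycle contains all three alternatives.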

    Context and uncertainty in decisions from experience

    From the moment we wake up each morning, we are faced with countless choices. Should we press snooze on our alarm? Have toast or cereal for breakfast? Bring an umbrella? Agree to work on that new project? Go to the gym or eat a whole pizza while watching Netflix? The challenge when studying decision-making is to collapse these diverse scenarios into feasible experimental methods. The standard theoretical approach is to represent options using outcomes and probabilities, and this has provided a rationale for studying decisions using gambling tasks. These tasks typically involve repeated choices between a single pair of options and outcomes that are determined probabilistically. Thus, the two sections in this thesis ask a simple question: are we missing something by using pairs of options that are divorced from the context in which we make choices outside the psychology laboratory? The first section focuses on the impact of extreme outcomes within a decision context. Chapter 2 addresses whether there is a rational explanation for why these outcomes appear in decisions from experience and numerous other cognitive domains. Chapters 3-5 describe six experiments that distinguish between plausible theories based on whether they measure extremity as categorical, ordinal, or continuous; whether extremity refers to the centre, the edges, or neighbouring outcomes; whether outcomes are represented as types or tokens; and whether extreme outcomes are defined using temporal or distributional characteristics. In the second section, we shift our focus to how people perceive uncertainty. We examine a distinction between uncertainty that is attributed to inadequate knowledge and uncertainty that is attributed to an inherently random process. Chapter 6 describes three experiments that examine whether allowing participants to map their uncertainty onto observable variability leads them to perceive it as potentially resolvable rather than purely stochastic.
We then examine how this influences whether they seek additional information. In summary, the experiments described in these two sections demonstrate the importance of context and uncertainty in understanding how we make decisions.

    Understanding youth in sport: A Foucauldian lens on power, discourse and knowledge in youth sports

    In my dissertation, I challenged several modes of thought about youth sports and their practices. I purposely chose to explore practices and activities that vary greatly: resistance to competitive sport and the oft-invisible processes that are part of unorganised and organised sports. Instead of drawing on commonly used post-positivist frameworks, I described an alternative way to study youth sport. In doing so, I gave participants a voice and contributed to the theorising of youth sport. I showed how variety in the use of theories, especially those that provide insight into the complex practices of youth sport, such as the application of tools based on a Foucauldian approach to reality, can be used to develop policy and change practices that currently may result in youths dropping out, being exploited or abused, or missing pleasure in participation. Because youth sport is considered to be an important contributor to the development of youths in society, research and theorising need to go beyond investigating outcomes and, instead, critically examine its complexities. The various studies in my dissertation revealed how a Foucauldian lens can be used to investigate youth sport practices and what this lens has to offer to scholars, administrators and policymakers to enhance their understanding of and ability to respond to issues in youth sport.

    Scalable Learning of Bayesian Networks Using Feedback Arc Set-Based Heuristics

    Bayesian networks form an important class of probabilistic graphical models. They consist of a structure (a directed acyclic graph) expressing conditional independencies among random variables, as well as parameters (local probability distributions). As such, Bayesian networks are generative models encoding joint probability distributions in a compact form. The main difficulty in learning a Bayesian network comes from the structure itself, owing to the combinatorial nature of the acyclicity property; it is well known and does not come as a surprise that the structure learning problem is NP-hard in general. Exact algorithms solving this problem exist: dynamic programming and integer linear programming are prime contenders when one seeks to recover the structure of small-to-medium sized Bayesian networks from data. On the other hand, heuristics such as hill climbing variants are commonly used when attempting to approximately learn the structure of larger networks with thousands of variables, although these heuristics typically lack theoretical guarantees and their performance in practice may become unreliable when dealing with large-scale learning. This thesis is concerned with the development of scalable methods tackling the Bayesian network structure learning problem, while attempting to maintain a level of theoretical control. This was achieved via the use of related combinatorial problems, namely the maximum acyclic subgraph problem and its dual, the minimum feedback arc set problem. Although these problems are NP-hard themselves, they exhibit significantly better tractability in practice. This thesis explores ways to map Bayesian network structure learning into maximum acyclic subgraph instances and extract approximate solutions for the first problem, based on the solutions obtained for the second. Our research suggests that although increased scalability can be achieved this way, maintaining theoretical understanding based on this approach is much more challenging. Furthermore, we found that learning the structure of Bayesian networks based on maximum acyclic subgraph/minimum feedback arc set may not be the go-to method in general, but we identified a setting - linear structural equation models - in which we could experimentally validate the benefits of this approach, leading to fast and scalable structure recovery with the ability to learn complex structures in a competitive way compared to state-of-the-art baselines.
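The duality between maximum acyclic subgraphs and feedback arc sets can be sketched simply: any vertex ordering splits the arcs into a forward part, which is necessarily acyclic, and a backward part, which forms a feedback arc set. The degree-based ordering below is a simple generic heuristic for illustration, not the thesis's method, and the toy digraph is hypothetical.

```python
def acyclic_by_ordering(arcs, order):
    """Split arcs by a vertex ordering: forward arcs form an acyclic
    subgraph; the remaining (backward) arcs form a feedback arc set."""
    pos = {v: i for i, v in enumerate(order)}
    forward = [(u, v) for (u, v) in arcs if pos[u] < pos[v]]
    backward = [(u, v) for (u, v) in arcs if pos[u] >= pos[v]]
    return forward, backward

def degree_order(arcs, vertices):
    """Simple heuristic ordering: sort by out-degree minus in-degree,
    highest first, so vertices that mostly point forward come early."""
    score = {v: 0 for v in vertices}
    for u, v in arcs:
        score[u] += 1
        score[v] -= 1
    return sorted(vertices, key=lambda v: -score[v])

# Toy digraph with one cycle: a->b->c->a, plus a shortcut a->c
arcs = [("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")]
order = degree_order(arcs, ["a", "b", "c"])
forward, backward = acyclic_by_ordering(arcs, order)
```

Since every forward arc strictly increases position in the ordering, the forward set can contain no cycle, which is what makes ordering-based heuristics a natural bridge to acyclic structure learning.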

    Mainstream Media Coverage (UK) of Esports Tournament the ‘ePremier League’ Finals 2019 and 2021: A Mixed-Methods Study

    This study investigates the factors limiting mainstream media coverage of esports in the UK, specifically focusing on EA Sports' FIFA Series. The research aims to assess the current landscape of esports journalism, mainstream media perception, familiarity with the term 'esports', content categorisation, coverage extent, live event viewership, and potential barriers and opportunities for increased exposure. Despite the growing academic interest in esports, there is a noticeable gap in research regarding mainstream media coverage of esports in UK newspaper and broadcast journalism, and regarding esports journalism itself. This project's critical analysis of esports journalism therefore offers a timely and original contribution to understanding the relationship between mainstream UK media and the niche esports broadcast/journalism sector, and the factors influencing the sector's limited exposure. Employing a mixed-methods approach, this study combines quantitative and qualitative data collection methods such as surveys, content analysis, and interviews. Focusing on mainstream media coverage of esports, the research utilises Rogers' Diffusion of Innovation Theory (2003) and Tidd and Bessant's 4Ps of Innovation Model (2021) to explore the potential for esports to become a sustainable sector in the UK's digital economy. The study concentrates on the UK tournament the 'ePremier League' 2019 and 2021 and its reception by UK mainstream media, examining the relationship between traditional and new media platforms. The findings reveal a lack of significant value for esports in UK mainstream media, distrust of mainstream media within the esports sector, and a discrepancy in the categorisation of esports content between mainstream and esports media. This study highlights the need for independent investigative reporting and improved understanding of the esports sector within mainstream media to foster its growth and acceptance.
The results hold considerable significance for various stakeholders, including publishers, policymakers, and analysts. For instance, the National Union of Journalists and the British Association of Journalists will find the insights on current journalistic practices valuable. Educational organisations such as the National Council for the Training of Journalists will appreciate findings regarding the importance of professional training for journalists. Likewise, mainstream broadcasters and esports media, including Sky Sports and Gfinity, will be interested in findings related to live streaming and broadcasting live esports events.