69 research outputs found

    Irish Machine Vision and Image Processing Conference Proceedings 2017


    A Measurement of 'Walking-the-Wall' Dynamics: An Observational Study Using Accelerometry and Sensors to Quantify Risk Associated with Vertical Wall Impact Attenuation in Trampoline Parks.

    This study applies a tri-axial accelerometer and gyroscope sensor worn by a trampolinist performing the walking-the-wall manoeuvre on a high-performance trampoline to determine the performer's dynamic conditions. The research found that rigid vertical walls allow the trampolinist to maintain greater control and spatial awareness than non-rigid vertical walls. With a non-rigid padded wall, the reaction force from the wall is a variable, unconstrained force that does not reliably provide the feedback the trampolinist needs to maintain balance through each climb up the wall and fall from height. The research therefore postulates that unattenuated vertical walls are safer than attenuated ones for walking-the-wall manoeuvres within trampoline park facilities: non-rigid walls produce higher g-force reaction feedback, which reduces the trampolinist's control and stability. This was verified by measuring g-force on a horizontal rigid surface versus a non-rigid surface, where the g-force feedback was 27% higher for the non-rigid surface. Control and stability are both critical while performing the complex walking-the-wall manoeuvre, and the trampolinist experienced a very high peak g-force of approximately 11.5 g at the bottom of the jump cycle. It was concluded that applying impact attenuation padding to vertical walls used for walking-the-wall and similar activities would increase the likelihood of injury; padding of these vertical surfaces is therefore not recommended.
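The peak value reported above comes from the resultant (vector-magnitude) acceleration of a tri-axial sensor. A minimal sketch of that computation, using hypothetical sample values chosen only for illustration (not the study's data):

```python
import math

def resultant_g(samples):
    """Resultant acceleration magnitude (in g) for each tri-axial (x, y, z) sample."""
    return [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]

def peak_g(samples):
    """Peak resultant g-force over a recorded jump cycle."""
    return max(resultant_g(samples))

# Hypothetical samples (in g) around the bottom of a jump cycle.
cycle = [(0.1, 0.2, 1.0), (0.5, 0.3, 6.0), (1.2, 0.8, 11.4), (0.4, 0.2, 3.0)]
print(round(peak_g(cycle), 2))  # -> 11.49
```

The same magnitude computation would apply whichever axis orientation the device uses, since the resultant is rotation-invariant.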

    spinfortec2022: Proceedings of the 14th Symposium of the Section Sport Informatics and Engineering of the German Association for Sport Science (dvs), Chemnitz, 29-30 September 2022

    This conference volume contains the contributions of all oral and poster presentations of the 14th Symposium of the Section Sport Informatics and Engineering of the German Association for Sport Science (dvs) at Chemnitz University of Technology (September 29-30, 2022). With the goal of advancing the research field of sports informatics and sports technology, nearly 20 four-page papers were submitted and presented in the sessions Information and Feedback Systems in Sport; Digital Movement: Data Acquisition, Analysis and Algorithms; and Sports Equipment Development: Materials, Construction, Testing.

    Development of a privacy-preserving computer vision method for automatic monitoring of physical activity in schools

    The electronic version of this thesis does not contain the publications. How can one observe people without seeing them? They say it is not polite to stare, and the right to privacy is considered a human right. Yet there is much in human behaviour that scientists would like to study through observation. For example, we want to know whether children will move more during recess if smartphones are banned at school. To find out, researchers would have to ask parents for consent to observe their children. Even assuming parents grant permission, classical observation would require an enormous amount of labour: several observers in the schoolhouse every day, over a sufficiently long period before and after the smartphone ban. With my doctoral thesis, I tried to solve both the privacy problem and the labour problem by replacing the human observer with artificial intelligence (AI). Modern machine learning methods allow training models that automatically detect objects and their properties in images or video. If we want an AI that recognises people in images, we need a machine learning dataset with pictures of people and pictures without people. If we want an AI that differentiates between low and high physical activity in video, we need a corresponding video dataset. For this thesis, I collected a dataset in which video of children's movement is synchronised with hip-worn accelerometers, in order to train a model that differentiates between lower and higher levels of movement intensity in video pixels.
In collaboration with the iCV lab at the Institute of Technology, we developed a prototype video-analysis sensor that can estimate, at real-time speed, the physical activity level of the people in the camera's field of view. Because the AI derives physical activity information from the video without recording the footage or showing it to anyone at all, it makes it possible to observe people without seeing them. The method is designed for measuring physical activity in school-based research, so privacy protection and research ethics were emphasised throughout its development. More broadly, the thesis illustrates the potential of computer vision technologies for processing visual information in urban spaces and workplaces, not only for measuring physical activity under strict research-ethics criteria. This warrants wider public discussion: under what conditions, if at all, is it OK to have a robot staring at you?
https://www.ester.ee/record=b555972
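The core idea, deriving a motion-intensity signal from video without retaining any footage, can be sketched with simple frame differencing. This is an illustrative stand-in, not the thesis's actual model (which was trained against accelerometer ground truth); the threshold and score definition here are hypothetical:

```python
import numpy as np

def activity_level(prev_frame, frame, threshold=15):
    """Fraction of pixels whose intensity changed by more than `threshold`
    between consecutive frames -- a crude motion-intensity proxy."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float((diff > threshold).mean())

def stream_activity(frames):
    """Emit one activity score per frame transition. Frames are processed
    pairwise and immediately discarded, so no footage is ever stored."""
    prev = None
    for frame in frames:
        if prev is not None:
            yield activity_level(prev, frame)
        prev = frame
```

The privacy property comes from the processing pipeline: only the scalar score leaves the sensor, never the pixels.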

    Vision-Based 2D and 3D Human Activity Recognition


    Aerospace Medicine and Biology: A continuing bibliography with indexes

    This bibliography lists 356 reports, articles and other documents introduced into the NASA scientific and technical information system in June 1982

    Human Computer Interaction and Emerging Technologies

    The INTERACT Conferences are an important platform for researchers and practitioners in the field of human-computer interaction (HCI) to showcase their work. They are organised biennially by the International Federation for Information Processing (IFIP) Technical Committee on Human–Computer Interaction (IFIP TC13), an international committee of 30 member national societies and nine Working Groups. INTERACT is truly international in its spirit and has attracted researchers from several countries and cultures. With an emphasis on inclusiveness, it works to lower the barriers that prevent people in developing countries from participating in conferences. As a multidisciplinary field, HCI requires interaction and discussion among diverse people with different interests and backgrounds. The 17th IFIP TC13 International Conference on Human-Computer Interaction (INTERACT 2019) took place during 2-6 September 2019 in Paphos, Cyprus. The conference was held at the Coral Beach Hotel Resort, and was co-sponsored by the Cyprus University of Technology and Tallinn University, in cooperation with ACM and ACM SIGCHI. This volume contains the Adjunct Proceedings to the 17th INTERACT Conference, comprising a series of selected papers from workshops, the Student Design Consortium and the Doctoral Consortium. The volume follows the INTERACT conference tradition of submitting adjunct papers after the main publication deadline, to be published by a University Press with a connection to the conference itself. In this case, both the Adjunct Proceedings Chair of the conference, Dr Usashi Chatterjee, and the lead Editor of this volume, Dr Fernando Loizides, work at Cardiff University which is the home of Cardiff University Press

    Creating a real-time movement sonification system for hemiparetic upper limb rehabilitation for survivors of stroke

    Upper limb paresis is a common problem for survivors of stroke, impeding their ability to live independently, and rehabilitation interventions to reduce impairment are highly sought after. Audio-based interventions such as movement sonification may improve rehabilitation outcomes in this application; however, they remain relatively unexplored given the potential of audio feedback to enhance motor skill learning. Movement sonification is the process of converting movement-associated data to the auditory domain, and is touted as a feasible and effective way for stroke survivors to obtain real-time audio feedback on their movements. To generate real-time audio feedback through movement sonification, a system is required to capture movements, process the data, extract the physical domain of interest, convert it to the auditory domain, and emit the generated audio. No commercial system currently performs this process for gross upper limb movements, so a system had to be created. To begin this process, a mapping review of movement sonification systems in the literature was completed: system components were identified, keyword-coded, and grouped to provide an overview of the components used within these systems. From these results, components for new movement sonification systems were chosen based on popularity and applicability, producing two systems: the 'Soniccup', which uses an Inertial Measurement Unit, and the 'KinectSon', which uses an Azure Kinect camera. Both systems were set up to translate position estimates into audio pitch as the output of the sonification process, and both were subsequently compared against a Vicon Nexus system to establish similarity of positional shape, and therefore similarity of audio output.
The results indicate that the Soniccup produced positional shape representative of the movement performed for movements lasting under one second, but performance degraded as movement duration increased. In addition, the Soniccup achieved these results with a system latency of approximately 230 ms, which is beyond the limit of real-time perception. The KinectSon system produced positional shape similar to the Vicon Nexus system for all movements, with a system latency of approximately 67 ms, which is within the limit of real-time perception. The KinectSon system was therefore judged a good candidate for generating real-time audio feedback, although further testing is required to establish the suitability of the generated audio feedback. To evaluate the feedback as part of usability testing, the KinectSon system was used in an agency study. Volunteers with and without upper-limb impairment performed reaching movements whilst using the system and reported the perceived association between the sound generated and the movements performed. For three of the four sonification conditions, a triangular-wave pitch modulation component was added to distort the sound. Participants associated their movements more strongly with the unmodulated sonification condition than with the modulated conditions, indicating that stroke survivors are able to use the KinectSon system and obtain a sense of agency whilst doing so.
The thesis concludes with a discussion of the findings of its contributing chapters, along with the implications, limitations, and identified future work, within the context of creating a suitable real-time movement sonification system for a large-scale study involving an upper limb rehabilitation intervention.
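The position-to-pitch mapping described above can be sketched as a simple linear scaling from a position range onto a frequency range. The function name, bounds, and pitch range here are hypothetical illustrations, not the actual Soniccup/KinectSon mapping:

```python
def position_to_pitch(y, y_min=0.0, y_max=1.0, f_low=220.0, f_high=880.0):
    """Linearly map a position estimate (e.g. hand height in metres) onto a
    pitch range in Hz, clamping positions outside the configured bounds."""
    t = (min(max(y, y_min), y_max) - y_min) / (y_max - y_min)
    return f_low + t * (f_high - f_low)

# Midpoint of the position range lands at the midpoint of the pitch range.
print(position_to_pitch(0.5))  # -> 550.0
```

In a real-time system this mapping would run once per tracked frame, so the end-to-end latency (capture, pose estimation, mapping, audio output) rather than the mapping itself is the limiting factor, consistent with the 230 ms vs 67 ms comparison reported above.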

    Pedagogy in performance: An investigation into decision training as a cognitive approach to circus training

    This research project represents the first formal research conducted into the potential application of Decision Training in an elite circus arts school environment. The research examines the effects of the introduction of Decision Training—a training model developed for sports applications—into the elite circus arts training program at the National Circus School (NCS), a key circus arts school in one of the world’s most vital circus domains, Montreal, Quebec, Canada. Decision Training, a cognitive-based training model, has been shown through extensive sports-based research to support the development of decision-making ability and self-regulatory learning behaviour, both of which are fundamental for the long-term retention and application of physical skills. A key research aim was to investigate whether Decision Training had the potential to enhance existing teaching practice at the NCS. This research investigates how this cognitive training model—developed for use in the world of competitive sports—functions in a performing arts context in which not only physical and technical skills are trained, but also elements connected with performance, such as aesthetic expression and the creation and development of new performance material. A qualitative action research methodology was employed, consisting of three reflection–action cycles with three case studies of student–teacher pairings. Data collection took place over an extended training period at the NCS from November 2011 to April 2012. Observation, interviews with teachers and students, and group discussions were used to collect data and to provide the impetus for the Decision Training interventions for the three action research cycles. This qualitative study reveals how teachers implemented the three-step Decision Training model and how students responded to these teaching interventions. 
This was done through an action research process investigating the lived experiences of the participants involved in each case study. The research findings indicate that incorporating a cognitive training method such as Decision Training into circus pedagogy has the potential benefit of giving students the means of acquiring important skills such as effective decision making in performance situations, and self-regulatory behaviour such as the ability to effectively self-assess their performance. Teachers have the potential to benefit by not having to be the sole providers of feedback or motivation, allowing the rapport between student and teacher to become collaborative and creative. The research findings show that the effectiveness of the Decision Training interventions was influenced by the different learning and teaching backgrounds and styles of the student–teacher pairings, and the different ways in which the teachers integrated Decision Training into their existing teaching practices. The research findings led to the proposal of an “integrated” pedagogical approach based on a combination of Decision Training and direct teaching. This “integrated” pedagogy would enable a teacher to use the cognitivist, student-centred learning approach of Decision Training to develop self-regulation and effective decision making in students, but switch to aspects of direct teaching at appropriate times: for instance, when a student needs to be directly aware of safety issues or has little foundational knowledge in a circus discipline; in the lead-up to a performance showing; or during the period in which a student is adjusting to the new cognitivist learning and teaching environment. Recommendations are made for the gradual phasing in of Decision Training into the main training program at the NCS, and implications for future research are discussed