765 research outputs found

    Assistive Technology and Biomechatronics Engineering

    This Special Issue will focus on assistive technology (AT) to address biomechanical and movement-control issues in individuals with impaired health, whether as a result of disability, disease, or injury. All over the world, technologies are being developed that make human life richer and more comfortable; however, some people are unable to benefit from these technologies. Research can include the development of new assistive technology to promote more effective movement, the use of existing technology to assess and treat movement disorders, the use and effectiveness of virtual rehabilitation, or theoretical issues, such as modeling, that underlie the biomechanics or motor control of movement disorders. This Special Issue will also cover Internet of Things (IoT) sensing technology and nursing-care robot applications that can be applied to new assistive technologies. IoT centers on data: gathering them efficiently and using them to enable intelligence, control, and new applications.

    Development and assessment of a hand assist device: GRIPIT

    Background Although various hand assist devices have been commercialized for people with paralysis, they are somewhat limited in terms of tool fixation and device attachment method. Hand exoskeleton robots allow users to grasp a wider range of tools but are heavy, complicated, and bulky owing to their numerous actuators and controllers. The GRIPIT hand assist device overcomes the limitations of both conventional devices and exoskeleton robots by providing improved tool fixation and device attachment in a lightweight, compact device. GRIPIT has been designed to assist the tripod grasp for people with spinal cord injury because this grasp posture is frequently used at school and in offices for activities such as writing and grasping small objects. Methods The main development objective of GRIPIT is to help users grasp tools with their own hand using a lightweight, compact assistive device that is manually operated via a single wire. GRIPIT consists of only a glove, a wire, and a small structure that maintains tendon tension to permit a stable grasp. The tendon routing points are designed to apply force to the thumb, index finger, and middle finger to form a tripod grasp. A tension-maintenance structure sustains the grasp posture with appropriate tension. Following device development, four people with spinal cord injury were recruited to compare the writing performance of GRIPIT with that of a conventional penholder and ordinary handwriting. Writing was chosen as the assessment task because it requires a tripod grasp, one of the main performance objectives of GRIPIT. Results A new assessment comprising six different writing tasks was devised to measure writing ability from various viewpoints using both qualitative and quantitative methods, whereas most conventional assessments rely on qualitative methods or simple time measurements alone.
Appearance, portability, difficulty of wearing, difficulty of grasping, writing sensation, fatigability, and legibility were measured to assess qualitative performance while writing various words and sentences. Results showed that GRIPIT is relatively complicated to wear and use compared with a conventional assist device but has advantages in writing sensation, fatigability, and legibility because it affords sufficient grasp force during writing. Two quantitative performance factors were assessed: accuracy of writing and solidity of writing. To assess accuracy, subjects were asked to draw various figures under given conditions; to assess solidity, pen-tip force and the angle variation of the pen were measured. Quantitative evaluation showed that GRIPIT helps users write accurately, without pen shake, even when high force is applied to the pen. Conclusions Qualitative and quantitative results were better when subjects used GRIPIT than when they used the conventional penholder, mainly because GRIPIT allowed them to exert a higher grasp force. Grasp force is important because people with paralysis cannot control their fingers and must therefore move their entire arm to write, whereas non-disabled people need only move their fingers. The tension-maintenance structure developed for GRIPIT provides appropriate grasp force and moment balance on the user's hand, whereas the other writing methods merely fix the pen using friction force or require the user's arm to generate grasp force.
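    The solidity measures mentioned above (pen-tip force and pen angle variation) could be summarized per writing task roughly as follows; this is a minimal illustrative sketch, and the specific statistics are assumptions, not the exact measures computed in the GRIPIT study.

```python
import math

def solidity_metrics(forces, angles):
    """Summarize pen-tip force and pen-angle variation for one writing task.

    forces: pen-tip force samples in newtons; angles: pen tilt samples in
    degrees. Returns (mean force, force std, angle range) -- hypothetical
    summary statistics for illustration only.
    """
    n = len(forces)
    mean_f = sum(forces) / n
    var_f = sum((f - mean_f) ** 2 for f in forces) / n  # population variance
    return mean_f, math.sqrt(var_f), max(angles) - min(angles)
```

    A steadier grasp would show up here as a lower force standard deviation and a smaller angle range across the task.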

    Grasp-sensitive surfaces

    Grasping objects with our hands allows us to skillfully move and manipulate them. Hand-held tools further extend our capabilities by adapting the precision, power, and shape of our hands to the task at hand. Some of these tools, such as mobile phones or computer mice, already incorporate information-processing capabilities. Many other tools may be augmented with small, energy-efficient digital sensors and processors. This allows graspable objects to learn about the user grasping them - and to support the user's goals. For example, the way we grasp a mobile phone might indicate whether we want to take a photo or call a friend with it - and thus serve as a shortcut to that action. A power drill might sense whether the user is grasping it firmly enough and refuse to turn on if this is not the case. And a computer mouse could distinguish between intentional and unintentional movement and ignore the latter. This dissertation gives an overview of grasp sensing for human-computer interaction, focusing on technologies for building grasp-sensitive surfaces and on challenges in designing grasp-sensitive user interfaces. It comprises three major contributions: a comprehensive review of existing research on human grasping and grasp sensing, a detailed description of three novel prototyping tools for grasp-sensitive surfaces, and a framework for analyzing and designing grasp interaction. For nearly a century, scientists have analyzed human grasping. My literature review gives an overview of definitions, classifications, and models of human grasping. A small number of studies have investigated grasping in everyday situations and found a much greater diversity of grasps than described by existing taxonomies. This diversity makes it difficult to directly associate certain grasps with users' goals. In order to structure related work and my own research, I formalize a generic workflow for grasp sensing.
It comprises *capturing* sensor values, *identifying* the associated grasp, and *interpreting* the meaning of the grasp. A comprehensive overview of related work shows that implementing grasp-sensitive surfaces is still hard, that researchers are often unaware of related work from other disciplines, and that intuitive grasp interaction has not yet received much attention. To address the first issue, I developed three novel sensor technologies designed for grasp-sensitive surfaces. Each mitigates one or more limitations of traditional sensing techniques: **HandSense** uses four strategically positioned capacitive sensors to detect and classify grasp patterns on mobile phones. The use of custom-built high-resolution sensors allows detecting proximity and avoids the need to cover the whole device surface with sensors. User tests showed a recognition rate of 81%, comparable to that of a system with 72 binary sensors. **FlyEye** uses optical fiber bundles connected to a camera to detect touch and proximity on arbitrarily shaped surfaces. It allows rapid prototyping of touch- and grasp-sensitive objects and requires only very limited electronics knowledge. For FlyEye I developed a *relative calibration* algorithm that determines the locations of groups of sensors whose arrangement is not known. **TDRtouch** extends Time Domain Reflectometry (TDR), a technique traditionally used for locating cable faults, to touch and grasp sensing. TDRtouch locates touches along a wire, allowing designers to rapidly prototype and implement modular, extremely thin, and flexible grasp-sensitive surfaces. I summarize how these technologies cater to different requirements and significantly expand the design space for grasp-sensitive objects. Furthermore, I discuss challenges in making sense of raw grasp information and categorize possible interactions.
Traditional application scenarios for grasp sensing use only the grasp sensor's data, and only for mode switching. I argue that data from grasp sensors is part of the general usage context and should only be used in combination with other context information. For analyzing and discussing the possible meanings of grasp types, I created the GRASP model. It describes five categories of influencing factors that determine how we grasp an object: *Goal* -- what we want to do with the object, *Relationship* -- what we know and feel about the object we want to grasp, *Anatomy* -- hand shape and learned movement patterns, *Setting* -- surrounding and environmental conditions, and *Properties* -- texture, shape, weight, and other intrinsics of the object. I conclude the dissertation with a discussion of upcoming challenges in grasp sensing and grasp interaction, and provide suggestions for implementing robust and usable grasp interaction.
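    The capture/identify/interpret workflow formalized above can be sketched in a few lines. The sensor layout, the grasp templates, the nearest-centroid matching, and the context gate are toy assumptions for illustration - not HandSense's actual classifier or the dissertation's implementation.

```python
# Toy capture -> identify -> interpret pipeline for a grasp-sensitive phone
# with four capacitive readings (hypothetical values throughout).

TEMPLATES = {                      # mean normalized readings per known grasp
    "phone_call": (0.9, 0.1, 0.8, 0.1),
    "photo":      (0.2, 0.9, 0.2, 0.9),
}

ACTIONS = {"phone_call": "open dialer", "photo": "launch camera"}

def capture(raw):
    """Normalize raw capacitive readings to [0, 1]."""
    peak = max(raw) or 1.0
    return tuple(v / peak for v in raw)

def identify(reading):
    """Nearest-centroid match of a reading against the known grasp templates."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(TEMPLATES, key=lambda g: dist(TEMPLATES[g], reading))

def interpret(grasp, context="idle"):
    """Map a grasp to a shortcut action; other context gates it (GRASP model)."""
    return ACTIONS[grasp] if context == "idle" else "no action"
```

    Note how `interpret` takes a context argument: per the argument above, grasp data alone should not trigger an action without the rest of the usage context.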

    Development and validation of haptic devices for studies on human grasp and rehabilitation

    This thesis aims to develop and validate a new set of devices for accurate investigation of human finger stiffness and force distribution in grasping tasks. The ambitious goal of this research is twofold: 1) to advance the state of the art on human strategies in manipulation tasks and provide tools to assess rehabilitation procedures, and 2) to investigate human strategies for impedance control that can be used for human-robot interaction and the control of myoelectric prostheses. The first part of this thesis describes two types of systems that enable a complete set of measurements of force distribution and contact point locations. More specifically, this part includes: (i) the design and validation of tripod-grasp devices with controllable stiffness at the contact, to be used also for rehabilitation purposes, and (ii) the validation of a multi-digit wearable sensor system. Results on device validation as well as illustrative measurement examples are reported and discussed. The effectiveness of these devices in grasp analysis was also experimentally demonstrated, and applications to neuroscientific studies are discussed. In the second part of this thesis, the tripod devices are exploited in two different studies to investigate stiffness regulation principles in humans. The first study provides evidence for the existence of coordinated stiffening patterns in the fingers of the human hand and establishes initial steps towards real-time, effective modelling of finger stiffness in the tripod grasp. This pattern further supports the evidence of synergistic control in human grasping. To achieve this goal, the endpoint stiffnesses of the thumb, index, and middle fingers of healthy subjects are experimentally identified and correlated with electromyography (EMG) signals recorded from a dominant antagonistic pair of forearm muscles.
Our findings suggest that the magnitude of the stiffness ellipses at the fingertips grows in a coordinated way following co-contraction of the forearm muscles. The second study presents experimental findings on how humans modulate hand stiffness while grasping objects of varying compliance. Subjects performed a grasp-and-lift task with a tripod-grasp object whose contact surfaces had variable compliance; EMG from the main finger flexor and extensor muscles was recorded along with force and torque data at the contact points. A significant increase in extensor muscle activity and co-contraction levels was observed with increasing compliance at the contact points. Overall, the results give solid evidence of the validity and utility of the proposed devices for investigating human grasp properties. The underlying motor control principles that humans exploit to achieve a reliable and robust grasp can potentially be integrated into the control framework of robotic or prosthetic hands to achieve similar interaction performance.
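    The kind of analysis described above - relating fingertip stiffness-ellipse magnitude to forearm co-contraction - can be sketched as follows. The baseline stiffness matrix, the normalized co-contraction values, and the ellipse-magnitude definition (geometric mean of the two principal stiffnesses) are illustrative assumptions, not the thesis's identified data or exact measure.

```python
import numpy as np

def ellipse_magnitude(K):
    """Size of a fingertip stiffness ellipse from a 2x2 endpoint stiffness
    matrix K, taken here as the geometric mean of the two principal
    stiffnesses (proportional to the ellipse area)."""
    eig = np.linalg.eigvalsh(0.5 * (K + K.T))  # symmetrize, then eigendecompose
    return float(np.sqrt(eig[0] * eig[1]))

# Toy data: co-contraction uniformly scales an assumed baseline stiffness
# matrix, so ellipse magnitude should correlate perfectly with co-contraction.
K = np.array([[200.0, 30.0], [30.0, 120.0]])    # N/m, illustrative values
cocontraction = np.array([0.1, 0.2, 0.4, 0.8])  # normalized EMG co-contraction
magnitudes = np.array([ellipse_magnitude(c * K) for c in cocontraction])
r = float(np.corrcoef(cocontraction, magnitudes)[0, 1])
```

    In the toy data the correlation `r` is exactly 1 by construction; experimentally identified stiffness and EMG would of course correlate less cleanly.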

    Design of a mechatronic system for postural control analysis

    The abstract is in the attachment.

    Sensitive and Makeable Computational Materials for the Creation of Smart Everyday Objects

    The vision of computational materials is to create smart everyday objects using materials that have sensing and computational capabilities embedded in them. However, today's development of computational materials is limited because their interfaces (i.e., sensors) are unable to support a wide range of human interactions or to withstand the fabrication methods of everyday objects (e.g., cutting and assembling). These barriers hinder citizens from creating smart everyday objects with computational materials on a large scale. To overcome these barriers, this dissertation presents approaches to developing computational materials that are 1) sensitive to a wide variety of user interactions, including explicit interactions (e.g., user inputs) and implicit interactions (e.g., user contexts), and 2) makeable under a wide range of fabrication operations, such as cutting and assembling. I exemplify these approaches through five research projects on two common materials, textile and wood. For each project, I explore how a material interface can be made to sense user inputs or activities, and how it can be optimized to balance sensitivity and fabrication complexity. I discuss the sensing algorithms and machine learning models that interpret the sensor data as high-level abstractions and interactions. I show practical applications of the developed computational materials, and I present evaluation studies that validate their performance and robustness. At the end of this dissertation, I summarize the contributions of my thesis and discuss future directions for the vision of computational materials.
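    Interpreting raw sensor data as a high-level interaction, as described above, can be illustrated with a toy pipeline for a hypothetical 3x3 textile pressure grid; the grid size, the two hand-crafted features, and the threshold rule are assumptions for illustration, not the thesis's actual sensing algorithms or models.

```python
# Toy interpretation of frames from a hypothetical 3x3 textile pressure grid:
# reduce raw snapshots to two features (peak pressure, centroid travel) and
# label the interaction.

def features(frames):
    """frames: list of 3x3 pressure snapshots. Returns (peak pressure,
    centroid travel), i.e. how hard and how far the contact moved."""
    def centroid(f):
        total = sum(sum(row) for row in f) or 1.0
        cx = sum(x * v for row in f for x, v in enumerate(row)) / total
        cy = sum(y * v for y, row in enumerate(f) for v in row) / total
        return cx, cy
    peak = max(v for f in frames for row in f for v in row)
    (x0, y0), (x1, y1) = centroid(frames[0]), centroid(frames[-1])
    travel = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    return peak, travel

def classify(frames, travel_threshold=1.0):
    """Label the interaction: contact that moves is a swipe, else a press."""
    peak, travel = features(frames)
    return "swipe" if travel > travel_threshold else "press"
```

    A learned model would replace the threshold rule, but the shape of the pipeline - raw frames, features, high-level label - stays the same.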

    Haptics: Science, Technology, Applications

    This open access book constitutes the proceedings of the 13th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2022, held in Hamburg, Germany, in May 2022. The 36 regular papers included in this book were carefully reviewed and selected from 129 submissions. They are organized in topical sections as follows: haptic science, haptic technology, and haptic applications.

    Safe Haptics-enabled Patient-Robot Interaction for Robotic and Telerobotic Rehabilitation of Neuromuscular Disorders: Control Design and Analysis

    Motivation: Current statistics show that the population of seniors and the incidence rate of age-related neuromuscular disorders are rapidly increasing worldwide. Improving medical care is likely to increase the survival rate but will result in even more patients in need of Assistive, Rehabilitation and Assessment (ARA) services for extended periods, which will place a significant burden on the world's healthcare systems. In many cases, the only alternative is limited and often delayed outpatient therapy. The situation will be worse for patients in remote areas. One potential solution is to develop technologies that provide efficient and safe means of in-hospital and in-home kinesthetic rehabilitation. In this regard, Haptics-enabled Interactive Robotic Neurorehabilitation (HIRN) systems have been developed. Existing Challenges: Although there are specific advantages with the use of HIRN technologies, there still exist several technical and control challenges, e.g., (a) absence of direct interactive physical interaction between therapists and patients; (b) questionable adaptability and flexibility considering the sensorimotor needs of patients; (c) limited accessibility in remote areas; and (d) guaranteeing patient-robot interaction safety while maximizing system transparency, especially when high control effort is needed for severely disabled patients, when the robot is to be used in a patient's home or when the patient experiences involuntary movements. These challenges have provided the motivation for this research. Research Statement: In this project, a novel haptics-enabled telerobotic rehabilitation framework is designed, analyzed and implemented that can be used as a new paradigm for delivering motor therapy which gives therapists direct kinesthetic supervision over the robotic rehabilitation procedure. The system also allows for kinesthetic remote and ultimately in-home rehabilitation.
To guarantee interaction safety while maximizing the performance of the system, a new framework for designing stabilizing controllers is developed, initially based on small-gain theory and then completed using strong passivity theory. The proposed control framework takes into account knowledge about the variable biomechanical capabilities of the patient's limb(s) in absorbing interaction forces and mechanical energy. The technique is generalized for use with classical rehabilitation robotic systems to realize patient-robot interaction safety while enhancing performance. In the next step, the proposed telerobotic system is studied as a modality of training for classical HIRN systems. The goal is to first model and then regenerate the prescribed kinesthetic supervision of an expert therapist. To broaden the population of patients who can use the technology and HIRN systems, a new control strategy is designed for patients experiencing involuntary movements. As the last step, the outcomes of the proposed theoretical and technological developments are translated to designing assistive mechatronic tools for patients with force and motion control deficits. This study shows that proper augmentation of haptic inputs can not only enhance the transparency and safety of robotic and telerobotic rehabilitation systems, but it can also assist patients with force and motion control deficiencies.
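    The passivity-based safety idea invoked above can be illustrated with a classic time-domain passivity observer/controller: monitor the energy exchanged at the haptic port, and inject damping only when the port is about to generate energy. This is a generic textbook sketch under an assumed one-port model, not the thesis's actual control law.

```python
# Time-domain passivity observer/controller sketch for a one-port haptic
# interface. forces/velocities are sampled at the port; dt is the sample time.

def passivity_controller(forces, velocities, dt):
    """Track the energy flowing into the port (E = sum of f*v*dt). Whenever E
    would go negative (the port turns active), add just enough damping on that
    sample to dissipate the surplus, keeping the interaction passive."""
    energy = 0.0
    damped_forces = []
    for f, v in zip(forces, velocities):
        energy += f * v * dt
        alpha = 0.0
        if energy < 0.0 and abs(v) > 1e-9:
            alpha = -energy / (v * v * dt)  # damping that absorbs the deficit
            energy = 0.0                    # deficit dissipated this sample
        damped_forces.append(f + alpha * v)
    return damped_forces, energy
```

    The appeal of this family of controllers for rehabilitation is exactly the trade-off named in the abstract: damping is injected only when passivity is threatened, so transparency is preserved the rest of the time.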

    Intracortical microstimulation of human somatosensory cortex as a source of cutaneous feedback

    The field of brain-computer interfaces (BCIs) has been making rapid advances in decoding brain activity into control signals capable of operating neural prosthetic devices, such as dexterous robotic arms and computer cursors. Potential users of neural prostheses, including people with amputations or spinal cord injuries, retain intact brain function that can be decoded using BCIs. Recent work has demonstrated simultaneous control over up to 10 degrees of freedom, but current paradigms lack a component crucial to normal motor control: somatosensory feedback. Currently, BCIs are controlled using visual feedback alone, which is important for making reaching movements and identifying target locations. However, as the actuators controlled by BCIs become more complex and include devices approximating the performance of human limbs, visual feedback becomes especially limiting, as it cannot convey information used during object manipulation, such as grip force. The objective of this work is to provide real-time, cutaneous, somatosensory feedback to users of dexterous prosthetic limbs under BCI control by applying intracortical microstimulation (ICMS) to primary somatosensory cortex (S1). Long-term microstimulation of the cortex with microelectrode arrays had never been attempted in a human prior to this work, and while this work is ultimately motivated by efforts to improve BCIs, the general approach also enables unprecedented access to the human cortex, enabling investigations of more basic scientific issues surrounding cutaneous perception, its conscious components, and its role in motor planning and control. To this end, two microelectrode arrays were placed in the somatosensory cortex of a human participant.
I first characterized the qualities of sensations evoked via ICMS, such as percept location, modality, intensity, and size, over a two-year study period. The sensations were found to be focal to a single digit and increased in intensity linearly with pulse-train amplitude, which suggests that ICMS will be a suitable means of relaying locations of object contact with single-digit precision, and that a range of grasp forces can be relayed for each location. Additionally, I found these qualities to be stable over the two-year period, suggesting that delivering ICMS was not damaging the electrode-tissue interface. ICMS was then used as a real-time feedback source during BCI control of a robotic limb in tasks ranging from simple force matching to functional reach, grasp, and carry tasks. Finally, we examined the relationship between pulse-train parameters and the conscious perception of sensations, an endeavor that until now could not have been undertaken. These results demonstrate that ICMS is a suitable means of relaying somatosensory feedback to BCI users. Adding somatosensory feedback has the potential to improve embodiment and control of the devices, bringing this technology closer to restoring upper-limb function.
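    Because percept intensity was found to scale linearly with pulse-train amplitude, grip force measured at the robotic hand can be encoded as a stimulation amplitude by a simple linear map with a safety clamp. The sketch below illustrates that idea only; the force range and amplitude bounds are placeholder numbers, not the study's calibrated stimulation parameters.

```python
def force_to_amplitude(force_n, f_max=10.0, amp_min=20.0, amp_max=80.0):
    """Map a measured grip force (N) linearly onto an ICMS pulse-train
    amplitude (uA), clamped to a fixed range. The linear form follows the
    reported linear intensity/amplitude relationship; all numeric values
    here are illustrative assumptions."""
    frac = min(max(force_n / f_max, 0.0), 1.0)  # clamp fraction to [0, 1]
    return amp_min + frac * (amp_max - amp_min)
```

    Clamping matters in practice: forces outside the calibrated range should saturate the percept rather than drive stimulation outside tested limits.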

    The development and assessment of novel non-invasive methods of measuring sleep in dairy cows : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Animal Science at Massey University, Manawatū, New Zealand

    One published article in Appendix C was removed for copyright reasons but may be accessed via its source: Hunter, L.B., O’Connor, C., Haskell, M.J., Langford, F.M., Webster, J.R., & Stafford, K.J. (2021, September). Lying posture does not accurately indicate sleep stage in dairy cows. Applied Animal Behaviour Science, 242, 105427. https://doi.org/10.1016/j.applanim.2021.105427
    Sleep is important for animal health and welfare, and many factors, for example the environment, illness, or stress, are likely to have an impact on cow sleep and consequently affect cow welfare. Polysomnography (PSG) is considered the gold standard for precise identification of sleep stages. It consists of electrophysiological recordings of brain activity, eye movements, and muscle activity, but it is costly and difficult to use with cows on farm. Accordingly, the study of sleep in cows may be limited by the impracticability of PSG. Alternative methods of assessing sleep have been developed for humans. Some such work has been conducted for cows, but it has yet to be validated against PSG. The main aim of this thesis was to investigate alternatives to PSG for accurately detecting sleep stages in dairy cows. Specifically, I aimed to develop a detailed 5-stage scoring system for assessing sleep stages from cow PSG, to investigate the suitability of lying postures and heart rate (HR) measures for assessing sleep stages, and to develop a model that accurately predicts sleep stages from non-invasive measures, validated against PSG. Two studies were conducted using 6 non-lactating dairy cows, housed indoors in Scotland and outdoors at pasture in New Zealand. PSG was recorded from each cow over a period of seven days. From these data, a 5-stage sleep-scoring scheme with good reliability was developed, identifying two stages of light sleep and two stages of deep sleep as well as awake and rumination stages.
Video was recorded during the sleep recordings, and the cows’ behaviour was analysed and compared with the sleep stages scored from the PSG. Some sleep stages were found to occur mainly in specific lying postures; however, overall, postures were not useful indicators of sleep stages. Heart rate (HR) and heart rate variability (HRV) were measured using a Polar HR monitor and a data-logging device. Differences in the HR and HRV measures were found between sleep stages, and the patterns of these changes were similar between the Scottish and NZ cows. Finally, machine learning models were developed using supervised learning methods to predict sleep stage from the HR and HRV measures as well as the surface EMG data recorded during PSG. The models learned to recognize and accurately predict sleep stages compared with the PSG scoring. This research demonstrates that non-invasive alternatives can identify sleep stages in dairy cows accurately compared with PSG. Further research is necessary with larger sample sizes and cows of various breeds and life stages; however, the success of the methods developed during this thesis demonstrates their suitability for the future measurement of sleep in cows and for the assessment of cow welfare.
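    The HR/HRV measures feeding the supervised models above are typically computed per epoch from inter-beat (RR) intervals. The sketch below uses standard HRV definitions (mean heart rate, SDNN, RMSSD); the thesis's full feature set and model choice are not reproduced here.

```python
import math

def hrv_features(rr_ms):
    """Per-epoch features from RR intervals (ms): mean heart rate (bpm),
    SDNN (overall variability, ms), and RMSSD (short-term variability, ms).
    Standard definitions, illustrative of inputs to a sleep-stage classifier."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    sdnn = math.sqrt(sum((r - mean_rr) ** 2 for r in rr_ms) / n)
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return 60000.0 / mean_rr, sdnn, rmssd
```

    A feature vector like this, computed for each scoring epoch, is what a supervised learner would be trained on against the PSG-scored stage labels.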