69 research outputs found

    Mobility classification of cattle with micro-Doppler radar

    Lameness in dairy cattle is a welfare concern that negatively impacts animal productivity and farmer profitability. Micro-Doppler radar sensing has previously been suggested as a potential system for automating lameness detection in ruminants. This thesis investigates the refinement of the proposed automated system by analysing and enhancing the repeatability and accuracy of the existing cattle mobility scoring method, which is used to provide labels for machine learning. The main aims of the thesis were (1) to quantify the performance of the micro-Doppler radar sensing method for the assessment of mobility, (2) to characterise and validate micro-Doppler radar signatures of dairy cattle with varying degrees of gait impairment, and (3) to develop machine learning algorithms that can infer the mobility status of the animals under test from their radar signatures and support automatic contactless classification. The first study investigated inter-assessor agreement using a 4-level system and modifications to it, as well as the impact of factors such as mobility scoring experience, confidence in scoring decisions, and video characteristics. The results revealed low levels of agreement between assessors' scores, with kappa values ranging from 0.16 to 0.53. However, after transforming and reducing the levels of the mobility scoring system, an improvement was observed, with kappa values ranging from 0.20 to 0.67. Subsequently, a longitudinal study was conducted using good-agreement scores as ground-truth labels in supervised machine-learning models. However, the accuracy of the algorithmic models was found to be insufficient, ranging from 0.57 to 0.63. To address this issue, different labelling systems and data pre-processing techniques were explored in a cross-sectional study. Nonetheless, inter-assessor agreement remained challenging, with an average kappa value of 0.37 (SD = 0.16), and high-accuracy algorithmic predictions remained elusive, with an average accuracy of 56.1% (SD = 16.58). Finally, the algorithms' performance was tested with high-confidence labels, which consisted of only scores 0 and 3 of the AHDB system. This testing resulted in good classification accuracy (0.82), specificity (0.79), and sensitivity (0.85). This led to the proposal of a new approach to producing labels, testing vantage-point changes, and improving the performance of machine learning models (average accuracy = 0.70, SD = 0.17; average sensitivity = 0.68, SD = 0.27; average specificity = 0.75, SD = 0.17). The research identified a challenge in creating high-confidence diagnostic labels for supervised machine-learning algorithms to automate the detection and classification of lameness in dairy cows. As a result, the original goals were partially revised, with the focus shifted to creating reliable labels that would perform well with radar data and machine learning; this was considered necessary for smooth system development and process automation. Nevertheless, we managed to quantify the performance of the micro-Doppler radar system, partially develop the supervised machine learning algorithms, compare levels of agreement among multiple assessors, evaluate the assessment tools, assess the mobility evaluation process and gather a valuable data set which can be used as a foundation for subsequent studies. Finally, the thesis suggests changes in the assessment process to improve the prediction accuracy of algorithms based on supervised machine learning with radar data.
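
    The agreement figures quoted above are Cohen's kappa statistics. As a minimal sketch of how such pairwise inter-assessor kappa values can be computed, scikit-learn's cohen_kappa_score can be used; the scores below are hypothetical 4-level mobility scores, not data from the thesis:

```python
# Minimal sketch: pairwise Cohen's kappa between mobility assessors.
# The scores below are hypothetical 4-level AHDB-style mobility scores (0-3),
# not data from the thesis.
from itertools import combinations

from sklearn.metrics import cohen_kappa_score

scores = {
    "assessor_A": [0, 1, 2, 3, 1, 0, 2, 1],
    "assessor_B": [0, 2, 2, 3, 1, 1, 2, 0],
    "assessor_C": [1, 1, 3, 3, 0, 1, 2, 1],
}

for a, b in combinations(scores, 2):
    kappa = cohen_kappa_score(scores[a], scores[b])
    print(f"{a} vs {b}: kappa = {kappa:.2f}")

# Collapsing the 4-level system to 2 levels (sound: 0-1, impaired: 2-3), as
# the thesis does when reducing scoring levels, tends to raise agreement:
binary = {k: [int(s >= 2) for s in v] for k, v in scores.items()}
for a, b in combinations(binary, 2):
    kappa = cohen_kappa_score(binary[a], binary[b])
    print(f"{a} vs {b} (binary): kappa = {kappa:.2f}")
```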

    A Simplified Pavement Condition Assessment and its Integration to a Pavement Management System

    Road networks are valuable assets that deteriorate over time and need to be preserved to an acceptable service level. Pavement management systems and pavement condition assessment have been implemented widely to routinely evaluate the condition of the road network and to make recommendations for maintenance and rehabilitation in due time and manner. The problem with current practices is that pavement evaluation requires qualified raters to carry out manual pavement condition surveys, which can be labor intensive and time consuming. Advances in computing capabilities, image processing and sensing technologies have permitted the development of vehicles equipped with such technologies to assess pavement condition. The problem with this is that the equipment is costly, and not all agencies can afford to purchase it. Recent researchers have developed smartphone applications to address this data collection problem, but these work only in restricted setups or require calibration. This dissertation developed a simple method to continually and accurately quantify the pavement condition of an entire road network by using technologies already embedded in new cars and smartphones, and by randomly collecting data from a population of road users. The method includes the development of a Ride Quality Index (RQI) and a methodology for analyzing the data under multi-factor uncertainty. It also derived a methodology for integrating the data collected through smartphone sensing into a pavement management system. The proposed methodology was validated with field studies and with the use of the Monte Carlo method to estimate RQI from different longitudinal profiles. The study suggested RQI thresholds for different road settings, and the minimum number of samples required for the analysis. The implementation of this approach could help agencies continually monitor the road network condition at minimal cost, thus saving millions of dollars compared to traditional condition surveys. This approach also has the potential to reliably assess pavement ride quality for very large networks in a matter of days.
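
    The abstract does not give the RQI formula, so the following is only an illustrative sketch of how a Monte Carlo analysis over noisy smartphone acceleration traces might estimate a ride-quality index with a confidence interval; the RMS-based index and the noise model are assumptions, not the dissertation's method:

```python
# Hypothetical sketch: Monte Carlo estimation of a Ride Quality Index (RQI)
# from crowd-sourced smartphone accelerometer traces. The RMS-acceleration
# index and the noise model below are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(42)

def simulated_trace(roughness, n=1000, sensor_noise=0.06):
    """Vertical acceleration (g) over one road segment: road-induced
    vibration plus sensor noise (commercial accelerometers can show
    noise on the order of 0.06 g)."""
    road = rng.normal(0.0, roughness, n)
    noise = rng.normal(0.0, sensor_noise, n)
    return road + noise

def rqi(trace):
    """Illustrative index: RMS vertical acceleration mapped to a 0-100
    scale, where smoother roads score higher (hypothetical mapping)."""
    rms = np.sqrt(np.mean(trace ** 2))
    return max(0.0, 100.0 * (1.0 - rms))

# Monte Carlo: many vehicles traverse the same segment; aggregate their RQIs.
samples = [rqi(simulated_trace(roughness=0.08)) for _ in range(5000)]
half_width = 1.96 * np.std(samples) / np.sqrt(len(samples))
print(f"mean RQI = {np.mean(samples):.1f}, 95% CI half-width = {half_width:.2f}")
```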

    The Data Science Design Manual


    Vol. 8, No. 1 (Full Issue)


    Detecting Periods of Eating in Everyday Life by Tracking Wrist Motion — What is a Meal?

    Eating is one of the most basic activities observed in sentient animals, a behavior so natural that humans often eat without giving the activity a second thought. Unfortunately, this often leads to consuming more calories than expended, which can cause weight gain - a leading cause of disease and death. This proposal describes research into methods to automatically detect periods of eating by tracking wrist motion so that calorie consumption can be tracked. We first briefly discuss how obesity is caused by an imbalance in calorie intake and expenditure. Calorie consumption and expenditure can be tracked manually using tools like paper diaries; however, it is well known that human bias can affect the accuracy of such tracking. Researchers in the emerging field of automated dietary monitoring (ADM) are attempting to track diet using electronic methods in an effort to mitigate this bias. We attempt to replicate a previous algorithm that detects eating by tracking wrist motion electronically. The previous algorithm was evaluated on data collected from 43 subjects using an iPhone as the sensor. Periods of time are segmented first, and then classified using a naive Bayesian classifier. For replication, we describe the collection of the Clemson all-day dataset (CAD), a free-living eating activity dataset containing 4,680 hours of wrist motion collected from 351 participants - the largest of its kind known to us. We learn that while different sensors are available to log wrist acceleration data, no unified convention exists, and this data must thus be transformed between conventions. We learn that the performance of the eating detection algorithm is affected by changes in the sensors used to track wrist motion, increased variability in behavior due to a larger participant pool, and the ratio of eating to non-eating in the dataset. We learn that commercially available acceleration sensors contain noise in their reported readings, which affects wrist tracking specifically because of the low magnitude of wrist acceleration. Commercial accelerometers can have noise up to 0.06 g, which is acceptable in applications like automobile crash testing or pedestrian indoor navigation, but not in ones using wrist motion. We quantify linear acceleration noise in our free-living dataset. We explain sources of noise, a method to mitigate it, and also evaluate the effect of this noise on the eating detection algorithm. By visualizing periods of eating in the collected dataset we learn that people often conduct secondary activities while eating, such as walking, watching television, working, and doing household chores. These secondary activities cause wrist motions that obfuscate the wrist motions associated with eating, which increases the difficulty of detecting periods of eating (meals). Subjects reported conducting secondary activities in 72% of meals. Analysis of wrist motion data revealed that the wrist was resting 12.8% of the time during self-reported meals, compared to only 6.8% of the time in a cafeteria dataset. Walking motion was found during 5.5% of the time during meals in free-living, compared to 0% in the cafeteria. Augmenting an eating detection classifier to include walking and resting detection improved the average per-person accuracy from 74% to 77% on our free-living dataset (t(353) = 7.86, p < 0.001). This suggests that future data collections for eating activity detection should also collect detailed ground truth on the secondary activities being conducted during eating.
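
    The noise-mitigation method used in the thesis is not specified in this abstract; a common approach for wrist-motion data, shown below as an assumed sketch, is zero-phase low-pass filtering, which exploits the fact that hand-to-mouth gestures are slow relative to the ~0.06 g sensor noise:

```python
# Sketch of one plausible noise-mitigation step for wrist acceleration data.
# The low-pass filter below is a common choice but is our assumption, not
# necessarily the method used in the work; the 15 Hz sampling rate is also
# an assumption.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 15.0  # sampling rate in Hz (assumed)

def low_pass(signal, cutoff_hz=1.0, fs=FS, order=4):
    """Zero-phase Butterworth low-pass; hand-to-mouth gestures take
    roughly 1-5 s each, so energy above ~1 Hz is mostly noise here."""
    b, a = butter(order, cutoff_hz / (fs / 2.0), btype="low")
    return filtfilt(b, a, signal)

# Simulated wrist-axis acceleration: slow eating gesture + 0.06 g sensor noise.
t = np.arange(0, 60, 1.0 / FS)
gesture = 0.1 * np.sin(2 * np.pi * 0.3 * t)        # ~3 s hand-to-mouth cycle
noisy = gesture + np.random.normal(0, 0.06, t.size)
clean = low_pass(noisy)
print(f"noise std before: {np.std(noisy - gesture):.3f} g, "
      f"after: {np.std(clean - gesture):.3f} g")
```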
Finally, learning from this data collection, we describe a convolutional neural network (CNN) to detect periods of eating by tracking wrist motion during everyday life. Eating uses hand-to-mouth gestures for ingestion, each of which lasts approximately 1-5 seconds. The novelty of our new approach is that we analyze a much longer window (0.5-15 min) that can contain other gestures related to eating, such as cutting or manipulating food, preparing foods for consumption, and resting between ingestion events. The context of these other gestures can improve the detection of periods of eating. We found that accuracy at detecting eating increased by 15% in longer windows compared to shorter windows. Overall results on CAD were 89% detection of meals with 1.7 false positives for every true positive (FP/TP), and a time-weighted accuracy of 80%.
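
    As a rough illustration of the window-level CNN idea described above, a 1-D convolutional classifier over tri-axial wrist motion might look like the sketch below; all layer sizes, the 15 Hz sampling rate, and the 6-minute window are our assumptions, not the thesis architecture:

```python
# Minimal sketch of a CNN that classifies a long window of tri-axial wrist
# motion as eating vs non-eating. Window length, sampling rate, and all
# layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

FS = 15            # Hz, assumed sampling rate
WINDOW_SEC = 360   # a 6-minute window, inside the 0.5-15 min range studied

class EatingCNN(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 16, kernel_size=15, stride=2), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, stride=2), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1),   # pool over the whole window
        )
        self.classifier = nn.Linear(32, 2)  # eating vs non-eating

    def forward(self, x):            # x: (batch, 3 axes, FS * WINDOW_SEC)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)

model = EatingCNN()
window = torch.randn(8, 3, FS * WINDOW_SEC)  # batch of random windows
print(model(window).shape)  # torch.Size([8, 2])
```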

    Abstracts on Radio Direction Finding (1899 - 1995)

    The files on this record represent the various databases that originally composed the CD-ROM issue of the "Abstracts on Radio Direction Finding" database, which is now part of the Dudley Knox Library's Abstracts and Selected Full Text Documents on Radio Direction Finding (1899 - 1995) Collection. (See Calhoun record https://calhoun.nps.edu/handle/10945/57364 for further information on this collection and the bibliography.) Because technological obsolescence prevents current and future audiences from accessing the bibliography, DKL exported and converted the various databases contained in the CD-ROM into the three files on this record. The contents of these files are: 1) RDFA_CompleteBibliography_xls.zip [RDFA_CompleteBibliography.xls: metadata for the complete bibliography, in Excel 97-2003 Workbook format; RDFA_Glossary.xls: glossary of terms, in Excel 97-2003 Workbook format; RDFA_Biographies.xls: biographies of leading figures, in Excel 97-2003 Workbook format]; 2) RDFA_CompleteBibliography_csv.zip [RDFA_CompleteBibliography.TXT: metadata for the complete bibliography, in CSV format; RDFA_Glossary.TXT: glossary of terms, in CSV format; RDFA_Biographies.TXT: biographies of leading figures, in CSV format]; 3) RDFA_CompleteBibliography.pdf: a human-readable display of the bibliographic data, as a means of double-checking any possible deviations due to conversion.
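
    For readers who want to work with the converted files, a minimal sketch of loading the CSV exports with pandas follows; the file names come from the record above, while the encoding is an assumption that may need adjusting for the actual exports:

```python
# Sketch: loading the converted bibliography metadata for analysis.
# File names come from the record; the encoding of the .TXT CSV exports
# is an assumption and may need adjusting.
import pandas as pd

bib = pd.read_csv("RDFA_CompleteBibliography.TXT", encoding="latin-1")
glossary = pd.read_csv("RDFA_Glossary.TXT", encoding="latin-1")
print(bib.shape, glossary.shape)
print(bib.columns.tolist()[:10])  # inspect the exported metadata fields
```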

    Body schema in adolescent idiopathic scoliosis

    This thesis documents the studies and analyses conducted as part of a research project whose principal aim was to evaluate the role of body schema in the development of adolescent idiopathic scoliosis (AIS). There were three main research questions: (1) do adolescents with AIS differ from non-scoliotic adolescents with regard to mechanisms that are thought to underpin body schema? (2) in adolescents with AIS, is there any relationship between the mechanisms thought to underpin body schema and the magnitude of spinal deformity? (3) is there any relationship between changes in body schema and progression of the spinal deformity in AIS over time? To answer these questions, a systematic review of neurophysiological deficits in AIS and a case-control study involving patients with AIS and non-scoliotic controls were performed, along with a series of correlational and longitudinal analyses. Fifty-eight participants with AIS (cases) were recruited along with 197 age- and sex-matched control participants from schools in Warwickshire, Oxfordshire, Leicestershire and Coventry. Measures of body schema as well as other self-report measures were collected at baseline for both groups. Cases were followed up at 6 and 12 months. Imaging data on spinal deformity were also collected for case participants. The results of the systematic review and case-control analysis indicated that people with AIS did not differ significantly from non-scoliotic controls with regard to measures of body schema. The correlational and longitudinal analyses confirmed the lack of association between these two sets of parameters, with no relationship between the magnitude of spinal deformity and body schema over a period of 12 months. Secondary analyses did reveal differences between case and control participants with regard to perceived spinal deformity, pain, self-image and, to a lesser extent, function. Correlational and longitudinal analyses revealed that these differences were not related to the magnitude of spinal deformity, and that perceptions of spinal deformity may be more important than the actual bony changes themselves.

    Addressing subjectivity in the classification of palaeoenvironmental remains with supervised deep learning convolutional neural networks

    Archaeological object identifications have traditionally been undertaken through a comparative methodology in which each artefact is identified through a subjective, interpretative act by a professional. For palaeoenvironmental remains, this comparative methodology is given boundaries by the use of reference materials and codified sets of rules, but subjectivity is nevertheless present. The problem with this traditional archaeological methodology is that a higher level of subjectivity in the identification of artefacts leads to inaccuracies, which in turn increase the potential for Type I and Type II errors in the testing of hypotheses. Reducing the subjectivity of archaeological identifications would improve the statistical power of archaeological analyses, which would subsequently lead to more impactful research. In this thesis, it is shown that the level of subjectivity in palaeoenvironmental research can be reduced by applying deep learning convolutional neural networks within an image recognition framework. The primary aim of the presented research is therefore to further the ongoing paradigm shift in archaeology towards model-based object identifications, particularly within the realm of palaeoenvironmental remains. Although this thesis focuses on the identification of pollen grains and animal bones, with the latter restricted to the astragalus of sheep and goats, there are wider implications for archaeology, as these methods can easily be extended beyond pollen and animal remains. The previously published POLEN23E dataset is used as the pilot study for applying deep learning to pollen grain classification. In contrast, an image dataset of modern bones was compiled for the classification of sheep and goat astragali, owing to a complete lack of available bone image datasets, and a double-blind study with inexperienced and experienced zooarchaeologists was performed to provide a benchmark against which image recognition models could be compared. In both classification tasks, the presented models outperform all previous formal modelling methods, and only the best human analysts match the performance of the deep learning model in the sheep and goat astragalus separation task. Throughout the thesis, there is a specific focus on increasing trust in the models through visualization of the models’ decision making, and avenues for improving Grad-CAM are explored. This thesis makes an explicit case for phasing out comparative methods in favour of a formal modelling framework within archaeology, especially in palaeoenvironmental object identification.
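
    Grad-CAM, the visualization technique whose improvements the thesis explores, can be sketched in a few lines of PyTorch; the ResNet-18 backbone and two-class head (e.g. sheep vs goat astragali) below are illustrative assumptions rather than the thesis models:

```python
# Minimal Grad-CAM sketch for inspecting what an image classifier attends
# to. The backbone, head, and random stand-in input are illustrative only.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(num_classes=2)  # assumed two-class astragalus model
model.eval()

activations, gradients = {}, {}
layer = model.layer4  # last conv stage: coarse but semantic feature maps

layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

image = torch.randn(1, 3, 224, 224)     # stand-in for a bone photograph
logits = model(image)
logits[0, logits.argmax()].backward()   # gradient of the predicted class

# Grad-CAM: weight each feature map by its average gradient, then ReLU.
weights = gradients["g"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalise to [0,1]
print(cam.shape)  # (1, 1, 224, 224) heatmap over the input image
```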

    An investigation into deviant morphology : issues in the implementation of a deep grammar for Indonesian

    This thesis investigates deviant morphology in Indonesian for the implementation of a deep grammar. In particular, we focus on the implementation of the verbal suffix -kan. This suffix has been described as having many functions, which alter the kinds of arguments and the number of arguments the verb takes (Dardjowidjojo 1971; Chung 1976; Arka 1993; Vamarasi 1999; Kroeger 2007; Son and Cole 2008). Deep grammars, or precision grammars (Butt et al. 1999a; Butt et al. 2003; Bender et al. 2011), have been shown to be useful for natural language processing (NLP) tasks such as machine translation and generation (Oepen et al. 2004; Cahill and Riester 2009; Graham 2011) and information extraction (MacKinlay et al. 2012), demonstrating the need for linguistically rich information to aid NLP tasks. Although these linguistically motivated grammars are invaluable resources to the NLP community, their biggest drawback is the time required for the manual creation and curation of the lexicon. Our work aims to expedite this process by applying methods to assign syntactic information to kan-affixed verbs automatically. The method we employ exploits the hypothesis that semantic similarity is tightly connected with syntactic behaviour (Levin 1993). Our endeavour to automatically acquire verbal information for an Indonesian deep grammar poses a number of linguistic challenges. First of all, Indonesian verbs exhibit voice marking that is characteristic of the subgrouping of its language family. In order to be able to characterise verbal behaviour in Indonesian, we first need to devise a detailed analysis of voice for implementation. Another challenge we face is the claim that all open-class words in Indonesian, at least as it is spoken in some varieties (Gil 1994; Gil 2010), cannot linguistically be analysed as being distinct from each other. That is, there is no distinction between nouns, verbs or adjectives in Indonesian, and all words from the open-class categories should be analysed uniformly. This poses difficulties for implementing a grammar in a linguistically motivated way, as well as for discovering the syntactic behaviour of verbs, if verbs cannot be distinguished from nouns. As part of our investigation we conduct experiments to verify the need to employ word-class categories, and we find that these are indeed linguistically motivated labels in Indonesian. Through our investigation into deviant morphological behaviour, we gain a better characterisation of the morphosyntactic effects of -kan, and we discover that, although Indonesian has been labelled as a language with no open word-class distinctions, word classes can be established as being linguistically motivated.
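
    The Levin-style hypothesis that semantically similar verbs behave alike syntactically can be illustrated with a toy nearest-neighbour sketch; the vectors and class labels below are invented for illustration, and real work would use distributional vectors derived from an Indonesian corpus:

```python
# Illustrative sketch: assign an unseen kan-affixed verb the syntactic
# class of its nearest semantic neighbours (Levin 1993 hypothesis).
# The toy vectors and class labels are invented for illustration.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical distributional vectors for verbs with known syntactic classes.
known_verbs = ["memberikan", "mengirimkan", "membesarkan", "menjatuhkan"]
X_known = np.array([
    [0.9, 0.1, 0.0],   # 'give'    -> benefactive/ditransitive class
    [0.8, 0.2, 0.1],   # 'send'    -> benefactive/ditransitive class
    [0.1, 0.9, 0.2],   # 'enlarge' -> causative class
    [0.2, 0.8, 0.1],   # 'drop'    -> causative class
])
y_known = ["ditransitive", "ditransitive", "causative", "causative"]

clf = KNeighborsClassifier(n_neighbors=3).fit(X_known, y_known)

# An unseen kan-verb whose vector sits near the causative cluster:
x_new = np.array([[0.15, 0.85, 0.15]])
print(clf.predict(x_new))  # -> ['causative']
```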