19,713 research outputs found

    Identifying and responding to people with mild learning disabilities in the probation service

    It has long been recognised that, like many other individuals, people with learning disabilities find their way into the criminal justice system. This fact is not disputed. What has been disputed, however, is the extent to which those with learning disabilities are represented within the various agencies of the criminal justice system and the ways in which the criminal justice system (and society) should address this. Recently, social and legislative confusion over the best way to deal with offenders with learning disabilities and mental health problems has meant that the waters have become even more muddied. Despite current government uncertainty concerning the best way to support offenders with learning disabilities, the probation service is likely to continue to play a key role in the supervision of such offenders. The three studies contained herein aim to clarify the extent to which those with learning disabilities are represented in the probation service, to examine the effectiveness of probation for them and to explore some of the ways in which probation could be adapted to fit their needs. Study 1 and Study 2 showed that around 10% of offenders on probation in Kent appeared to have an IQ below 75, putting them in the bottom 5% of the general population. Study 3 was designed to assess some of the support needs of those with learning disabilities in the probation service, finding that many of the materials used by the probation service are likely to be too complex for those with learning disabilities to use effectively. To address this, a model for service provision is tentatively suggested. This is based on the findings of the three studies and a pragmatic assessment of what the probation service is likely to be capable of achieving in the near future.

    Consent and the Construction of the Volunteer: Institutional Settings of Experimental Research on Human Beings in Britain during the Cold War

    This study challenges the primacy of consent in the history of human experimentation and argues that privileging the cultural frameworks adds nuance to our understanding of the construction of the volunteer in the period 1945 to 1970. Historians and bio-ethicists have argued that medical ethics codes have marked out the parameters of using people as subjects in medical scientific research and that the consent of the subjects was fundamental to their status as volunteers. However, the temporality of the creation of medical ethics codes means that they need to be understood within their historical context. That medical ethics codes arose from a specific historical context rather than a concerted and conscious determination to safeguard the well-being of subjects needs to be acknowledged. The British context of human experimentation is under-researched and there has been even less focus on the cultural frameworks within which experiments took place. This study demonstrates, through a close analysis of the Medical Research Council's Common Cold Research Unit (CCRU) and the government's military research facility, the Chemical Defence Experimental Establishment, Porton Down (Porton), that the `volunteer' in human experiments was a subjective entity whose identity was specific to the institution which recruited and made use of the subject. By examining representations of volunteers in the British press, the rhetoric of the government's collectivist agenda becomes evident and this fed into the institutional construction of the volunteer at the CCRU. In contrast, discussions between Porton scientists, staff members, and government officials demonstrate that the use of military personnel in secret chemical warfare experiments was far more complex. Conflicting interests of the military, the government and the scientific imperative affected how the military volunteer was perceived

    Image classification over unknown and anomalous domains

    A longstanding goal in computer vision research is to develop methods that are simultaneously applicable to a broad range of prediction problems. In contrast to this, models often perform best when they are specialized to some task or data type. This thesis investigates the challenges of learning models that generalize well over multiple unknown or anomalous modes and domains in data, and presents new solutions for learning robustly in this setting. Initial investigations focus on normalization for distributions that contain multiple sources (e.g. images in different styles like cartoons or photos). Experiments demonstrate the extent to which existing modules, batch normalization in particular, struggle with such heterogeneous data, and a new solution is proposed that can better handle data from multiple visual modes, using differing sample statistics for each. While ideas to counter the overspecialization of models have been formulated in sub-disciplines of transfer learning, e.g. multi-domain and multi-task learning, these usually rely on the existence of meta information, such as task or domain labels. Relaxing this assumption gives rise to a new transfer learning setting, called latent domain learning in this thesis, in which training and inference are carried out over data from multiple visual domains, without domain-level annotations. Customized solutions are required for this, as the performance of standard models degrades: a new data augmentation technique that interpolates between latent domains in an unsupervised way is presented, alongside a dedicated module that sparsely accounts for hidden domains in data, without requiring domain labels to do so. In addition, the thesis studies the problem of classifying previously unseen or anomalous modes in data, a fundamental problem in one-class learning, and anomaly detection in particular. 
While recent ideas have been focused on developing self-supervised solutions for the one-class setting, in this thesis new methods based on transfer learning are formulated. Extensive experimental evidence demonstrates that a transfer-based perspective benefits new problems that have recently been proposed in anomaly detection literature, in particular challenging semantic detection tasks
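The core idea behind normalizing heterogeneous data with per-mode statistics can be illustrated with a minimal sketch (hypothetical code, not the thesis's actual module): each visual mode in a batch is standardized with its own mean and variance, rather than the pooled statistics that standard batch normalization would compute over the whole batch.

```python
import numpy as np

def per_mode_batchnorm(x, modes, eps=1e-5):
    """Normalize each feature using statistics computed separately for
    each mode (e.g. photos vs. cartoons), instead of pooling all samples
    together as standard batch normalization does."""
    out = np.empty_like(x, dtype=float)
    for m in np.unique(modes):
        idx = modes == m
        mu = x[idx].mean(axis=0)       # per-mode mean
        var = x[idx].var(axis=0)       # per-mode variance
        out[idx] = (x[idx] - mu) / np.sqrt(var + eps)
    return out

# Two modes with very different statistics; pooled normalization would
# leave each mode offset from zero, per-mode normalization centres both.
x = np.array([[10.0], [12.0], [0.0], [2.0]])
modes = np.array([0, 0, 1, 1])
y = per_mode_batchnorm(x, modes)
```

In a trained network the per-mode statistics would be running estimates with learnable scale and shift parameters; the sketch only shows why separating the modes' statistics removes the cross-mode interference that pooled normalization introduces.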

    Data-to-text generation with neural planning

    In this thesis, we consider the task of data-to-text generation, which takes non-linguistic structures as input and produces textual output. The inputs can take the form of database tables, spreadsheets, charts, and so on. The main application of data-to-text generation is to present information in a textual format which makes it accessible to a layperson who may otherwise find it problematic to understand numerical figures. The task can also automate routine document generation jobs, thus improving human efficiency. We focus on generating long-form text, i.e., documents with multiple paragraphs. Recent approaches to data-to-text generation have adopted the very successful encoder-decoder architecture or its variants. These models generate fluent (but often imprecise) text and perform quite poorly at selecting appropriate content and ordering it coherently. This thesis focuses on overcoming these issues by integrating content planning with neural models. We hypothesize that data-to-text generation will benefit from explicit planning, which manifests itself in (a) micro planning, (b) latent entity planning, and (c) macro planning. Throughout this thesis, we assume the inputs to our generator are tables (with records) in the sports domain, and the outputs are summaries describing what happened in the game (e.g., who won/lost, ..., scored, etc.). We first describe our work on integrating fine-grained or micro plans with data-to-text generation. As part of this, we generate a micro plan highlighting which records should be mentioned and in which order, and then generate the document while taking the micro plan into account. We then show how data-to-text generation can benefit from higher-level latent entity planning. Here, we make use of entity-specific representations which are dynamically updated. The text is generated conditioned on entity representations and the records corresponding to the entities by using hierarchical attention at each time step.
    We then combine planning with the high-level organization of entities, events, and their interactions. Such coarse-grained macro plans are learnt from data and given as input to the generator. Finally, we present work on making macro plans latent while incrementally generating a document paragraph by paragraph. We infer latent plans sequentially with a structured variational model while interleaving the steps of planning and generation. Text is generated by conditioning on previous variational decisions and previously generated text. Overall, our results show that planning makes data-to-text generation more interpretable, improves the factuality and coherence of the generated documents, and reduces redundancy in the output document.
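The plan-then-realise decomposition described above can be made concrete with a toy sketch (illustrative only: the thesis's planners and generators are neural models, and the record fields below are invented). A content plan first selects which records to mention and in what order; a separate realisation step then verbalises the plan.

```python
def micro_plan(records, stats=("points", "rebounds"), top_n=2):
    """Toy content plan: decide which records to mention and in what
    order (here, highest scorers first), before any text is produced."""
    chosen = sorted(records, key=lambda r: -r["points"])[:top_n]
    return [(r["name"], stat, r[stat]) for r in chosen for stat in stats]

def realise(plan):
    """Surface realisation: verbalise each planned record as a clause."""
    return " ".join(f"{name} recorded {value} {stat}."
                    for name, stat, value in plan)

records = [
    {"name": "Smith", "points": 31, "rebounds": 5},
    {"name": "Jones", "points": 12, "rebounds": 11},
    {"name": "Brown", "points": 25, "rebounds": 7},
]
summary = realise(micro_plan(records))
```

The separation means content selection and ordering are decided explicitly and can be inspected, which is the property the neural planning models aim to retain while learning both stages from data.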

    Enhancing Parkinson’s Disease Prediction Using Machine Learning and Feature Selection Methods

    Several million people suffer from Parkinson’s disease globally. Parkinson’s affects about 1% of people over 60, and its symptoms increase with age. The voice may be affected, and patients experience abnormalities in speech that might not be noticed by listeners but which can be analyzed using recorded speech signals. With the huge advancements of technology, medical data have increased dramatically, and therefore there is a need to apply data mining and machine learning methods to extract new knowledge from these data. Several classification methods have been used to analyze medical data sets and diagnostic problems, such as Parkinson’s Disease (PD). In addition, to improve the performance of classification, feature selection methods have been extensively used in many fields. This paper aims to propose a comprehensive approach to enhance the prediction of PD using several machine learning methods combined with different feature selection methods, such as filter-based and wrapper-based ones. The dataset includes 240 records with 46 acoustic features extracted from 3 voice recording replications for 80 patients. The experimental results showed improvements when the wrapper-based feature selection method was used with the KNN classifier, reaching an accuracy of 88.33%. The best obtained results were compared with other studies, and it was found that this study provides comparable and, in some respects, superior results.
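A wrapper-based selector of the kind evaluated above can be sketched as follows (a minimal self-contained illustration with invented data, not the authors' pipeline): a greedy forward search adds, at each step, whichever feature most improves a k-nearest-neighbour classifier's leave-one-out accuracy.

```python
import numpy as np

def knn_loo_accuracy(X, y, k=3):
    """Leave-one-out accuracy of a k-nearest-neighbour classifier.
    `y` must be an integer class array."""
    n = len(y)
    correct = 0
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                       # exclude the held-out sample
        nn = np.argsort(d)[:k]
        votes = np.bincount(y[nn])
        correct += votes.argmax() == y[i]
    return correct / n

def forward_select(X, y, n_features, k=3):
    """Greedy wrapper selection: repeatedly add the feature that most
    improves the wrapped classifier's leave-one-out accuracy."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_features:
        scores = [(knn_loo_accuracy(X[:, selected + [f]], y, k), f)
                  for f in remaining]
        _, best_f = max(scores)
        selected.append(best_f)
        remaining.remove(best_f)
    return selected
```

Because the classifier itself scores each candidate subset, wrapper methods are more expensive than filter methods but directly optimize the quantity reported in the paper (classification accuracy).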

    The applied psychology of addictive orientations: studies in a 12-step treatment context.

    The clinical data for the studies were collected at The PROMIS Recovery Centre, a Minnesota Model treatment centre for addictions, which encourages the membership and use of the 12-step Anonymous Fellowships, and is abstinence based. The area of addiction is contextualised in a review chapter which focuses on research relating to the phenomenon of cross addiction. A study examining the concept of "addictive orientations" in male and female addicts is described, which develops a study conducted by Stephenson, Maggi, Lefever, & Morojele (1995). The present study found a four-factor solution which appeared to be subdivisions of the previously found Hedonism and Nurturance factors: Self-orientated nurturance (both food dimensions, shopping and caffeine), Other-orientated nurturance (both compulsive helping dimensions and work), Sensation-seeking hedonism (drugs, prescription drugs, nicotine and, marginally, alcohol), and Power-related hedonism (both relationship dimensions, sex and gambling). This concept of "addictive orientations" is further explored in a non-clinical population, where again a four-factor solution was found, very similar to that in the clinical population. This was thought to indicate that, in terms of addictive orientation, a pattern already exists in this non-clinical population and that consideration should be given to why this is the case. These orientations are examined in terms of gender differences. It is suggested that the differences between genders reflect power-related role relationships between the sexes. In order to further elaborate the significance and meaning behind these orientations, the next two chapters look at the contribution of personality variables and how addictive orientations relate to psychiatric symptomatology. Personality variables were differentially, and to a considerable extent predictably, involved with the four factors for both males and females. Conscientiousness was positively associated with "Other-orientated nurturance" and negatively associated with "Sensation-seeking hedonism" (particularly for men). Neuroticism had a particularly strong association with the "Self-orientated nurturance" factor in the female population. More than twice the symptomatology variance was explained by the factor scores for females than it was for males. The most important factorial predictors for psychiatric symptomatology were the "Power-related hedonism" factor for males and "Self-orientated nurturance" for females. The results are discussed from theoretical and treatment perspectives.

    Reforming the United Nations

    The thesis deals with the financial crisis that the United Nations faced starting in 1985, when the US Congress decided to withhold a significant part of the US contribution to the UN regular budget in order to force a greater say for the major contributors on budgetary issues, budgetary restraint and greater efficiency. The UN responded by the adoption of resolution 41/213 of 19 December 1986, which was based on the recommendations of a Group of High-level Intergovernmental Experts ("G-18") set up a year earlier. A new system was introduced regarding the formulation of the regular budget of the United Nations Organisation, and a broader process of reform was initiated, including a restructuring of the Secretariat and of the intergovernmental machinery in the economic and social fields. After an introductory chapter (Chapter I), the thesis examines the UN problems at the budgetary/financial and administrative/structural levels, the solutions proposed from within and without the United Nations established framework and the actual attempts at reform (Chapters II and III). The realisation that the implementation of reforms is rather disjointed and often unsuccessful (e.g. the failure to restructure the intergovernmental machinery) prompts a search for the deeper causes of the UN problems at the political level and the attitudes of the main actors, namely the USA, the USSR, some up-and-coming states, notably Japan, the Third World states and, finally, the UN Secretary-General and the Secretariat (Chapter IV). Although the financial crisis may have subsided since 1988 and the USA seem committed to paying up their dues, the deeper UN crisis of identity has not been resolved and is expected to resurface if no bold steps are taken. In that direction, some possible alternative courses for the UN in the future are discussed, drawing upon theory and practice (Chapter V).

    The Caribbean Syzygy: a study of the novels of Edgar Mittelholzer and Wilson Harris

    The problem of racial inheritance - the "search for identity" - is a recurring theme in the criticism of Caribbean literature. It is a preoccupation with Caribbean writers, affecting both subject matter and literary quality, as F.M. Birbalsingh, for example, has shown with reference to the novels of John Hearne and E.R. Braithwaite (Caribbean Quarterly, Vols. 14, December 1968, and 16, March 1970). This study of the work of Edgar Mittelholzer and Wilson Harris will attempt to show that there are important areas still to be explored relating Caribbean literature to its complex racial and cultural background. Both Mittelholzer and Harris deserve close, critical study in their own right; but a parallel examination reveals similarities and differences which bring into sharper focus wider concerns of Caribbean literature. The two important directions of West Indian writing are more clearly seen: the one, pioneered by Mittelholzer, in which the writer looks outward towards a "parent" culture, and the other looking inward, seeking in its own complex inheritance the raw material for new and original growth. Mittelholzer and Harris are both Guyanese of mixed racial stock, both deeply concerned with the psychological effects of this mixture, and both writers have a profound awareness of the Guyanese historical and cultural heritage. They also share a deep feeling for the Guyanese landscape, which appears in their work as a brooding presence affecting radically the lives of those who live within it. Mittelholzer's attitude to his mixed racial and cultural origins, however, produces in his work a schizophrenic imbalance, while Harris, by accepting racial and cultural complexity as a starting-point, initiates a uniquely creative and experimental art. Mittelholzer, in his approach to history, human character and landscape, remains a "coastal" writer never really concerned (as Harris is) with the deeper significance of the "Interior" and all that this implies, both in a geographical and psychological sense. The fact that Mittelholzer's work reflects a psychological imbalance induced by a preoccupation with racial identity has been demonstrated by Denis Williams in the 1968 Mittelholzer Lectures, and by Joyce Sparer in a series of articles in the Guyana Graphic. Mittelholzer's awareness of this imbalance, however, and his attempt to come to terms with it in his art remain to be examined and documented, as does Harris's attempt to create an "associative" art aimed at healing the breach in the individual consciousness of Caribbean Man. The aim of this study is to demonstrate that Mittelholzer and Harris, although antithetical in impact and style (each representing an approach to fiction directly opposed to the other) are, in fact, the opposite elements of a dichotomy. Their work illustrates the negative and positive aspects of the racial and cultural schizophrenia of the Caribbean, for both writers in their different ways are preoccupied with (and therefore have embodied in their work) the juxtaposition and contrasting of apparently irreconcilable emotional and intellectual qualities - the Caribbean Syzygy.

    Machine learning and large scale cancer omic data: decoding the biological mechanisms underpinning cancer

    Many of the mechanisms underpinning cancer risk and tumorigenesis are still not fully understood. However, the next-generation sequencing revolution and the rapid advances in big data analytics allow us to study cells and complex phenotypes at unprecedented depth and breadth. While experimental and clinical data are still fundamental to validate findings and confirm hypotheses, computational biology is key for the analysis of system- and population-level data for detection of hidden patterns and the generation of testable hypotheses. In this work, I tackle two main questions regarding cancer risk and tumorigenesis that require novel computational methods for the analysis of system-level omic data. First, I focused on how frequent, low-penetrance inherited variants modulate cancer risk in the broader population. Genome-Wide Association Studies (GWAS) have shown that Single Nucleotide Polymorphisms (SNPs) contribute to cancer risk with multiple subtle effects, but they are still failing to give further insight into their synergistic effects. I developed a novel hierarchical Bayesian regression model, BAGHERA, to estimate heritability at the gene level from GWAS summary statistics. I then used BAGHERA to analyse data from 38 malignancies in the UK Biobank. I showed that genes with high heritable risk are involved in key processes associated with cancer and often coincide with somatically mutated driver genes. Heritability analysis, like many other omics methods, studies the effects of DNA variants on single genes in isolation. However, we know that most biological processes require the interplay of multiple genes, and we often lack a broad perspective on them. For the second part of this thesis, I therefore worked on the integration of Protein-Protein Interaction (PPI) graphs and omics data, which bridges this gap and recapitulates these interactions at a system level.
    First, I developed a modular and scalable Python package, PyGNA, that enables robust statistical testing of genesets' topological properties. PyGNA complements the literature with a tool that can be routinely introduced in automated bioinformatics pipelines. With PyGNA I processed multiple genesets obtained from genomics and transcriptomics data. However, topological properties alone have proven insufficient to fully characterise complex phenotypes. Therefore, I focused on a model that combines topological and functional data to detect multiple communities associated with a phenotype. Detecting cancer-specific submodules is still an open problem, but it has the potential to elucidate mechanisms detectable only by integrating multi-omics data. Building on the recent advances in Graph Neural Networks (GNNs), I present a supervised geometric deep learning model that combines GNNs and Stochastic Block Models (SBMs). The model is able to learn multiple graph-aware representations, as multiple joint SBMs, of the attributed network, accounting for nodes participating in multiple processes. The simultaneous estimation of structure and function provides an interpretable picture of how genes interact in specific conditions and allows the detection of novel putative pathways associated with cancer.
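The flavour of topological geneset testing described above can be illustrated with a generic sketch (this is not PyGNA's API; the toy graph and gene identifiers are invented): a permutation test asks whether a gene set has more edges among its own members than size-matched random sets drawn from the same network.

```python
import random

def internal_edges(adj, genes):
    """Count network edges with both endpoints inside the gene set.
    `adj` maps each node to a list of its neighbours (symmetric)."""
    gs = set(genes)
    return sum(1 for u in gs for v in adj.get(u, ()) if v in gs and u < v)

def topology_pvalue(adj, geneset, n_perm=1000, seed=0):
    """Permutation test: is the gene set more internally connected than
    random node sets of the same size?"""
    rng = random.Random(seed)
    nodes = list(adj)
    observed = internal_edges(adj, geneset)
    hits = sum(
        internal_edges(adj, rng.sample(nodes, len(geneset))) >= observed
        for _ in range(n_perm)
    )
    return (hits + 1) / (n_perm + 1)   # smoothed empirical p-value

# Toy network: a triangle of interacting genes (0, 1, 2) attached to a
# sparse chain of unrelated genes (3..9).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
for a, b in [(3, 4), (4, 5), (5, 6), (6, 7), (7, 8), (8, 9)]:
    adj.setdefault(a, []).append(b)
    adj.setdefault(b, []).append(a)
p = topology_pvalue(adj, [0, 1, 2], n_perm=500, seed=1)
```

Real tools operate on PPI networks with thousands of nodes and use degree-aware null models, but the test's logic (observed statistic versus a permutation null) is the same.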

    Scalable software and models for large-scale extracellular recordings

    The brain represents information about the world through the electrical activity of populations of neurons. By placing an electrode near a neuron that is firing (spiking), it is possible to detect the resulting extracellular action potential (EAP) that is transmitted down an axon to other neurons. In this way, it is possible to monitor the communication of a group of neurons to uncover how they encode and transmit information. As the number of recorded neurons continues to increase, however, so do the data processing and analysis challenges. It is crucial that scalable software and analysis tools are developed and made available to the neuroscience community to keep up with the large amounts of data that are already being gathered. This thesis is composed of three pieces of work which I develop in order to better process and analyze large-scale extracellular recordings. My work spans all stages of extracellular analysis from the processing of raw electrical recordings to the development of statistical models to reveal underlying structure in neural population activity. In the first work, I focus on developing software to improve the comparison and adoption of different computational approaches for spike sorting. When analyzing neural recordings, most researchers are interested in the spiking activity of individual neurons, which must be extracted from the raw electrical traces through a process called spike sorting. Much development has been directed towards improving the performance and automation of spike sorting. This continuous development, while essential, has contributed to an over-saturation of new, incompatible tools that hinders rigorous benchmarking and complicates reproducible analysis. To address these limitations, I develop SpikeInterface, an open-source, Python framework designed to unify preexisting spike sorting technologies into a single toolkit and to facilitate straightforward benchmarking of different approaches. 
    With this framework, I demonstrate that modern, automated spike sorters have low agreement when analyzing the same dataset, i.e. they find different numbers of neurons with different activity profiles; this result holds true for a variety of simulated and real datasets. Also, I demonstrate that utilizing a consensus-based approach to spike sorting, where the outputs of multiple spike sorters are combined, can dramatically reduce the number of falsely detected neurons. In the second work, I focus on developing an unsupervised machine learning approach for determining the source location of individually detected spikes that are recorded by high-density microelectrode arrays. By localizing the source of individual spikes, my method is able to determine the approximate position of the recorded neurons in relation to the microelectrode array. To allow my model to work with large-scale datasets, I utilize deep neural networks, a family of machine learning algorithms that can be trained to approximate complicated functions in a scalable fashion. I evaluate my method on both simulated and real extracellular datasets, demonstrating that it is more accurate than other commonly used methods. Also, I show that location estimates for individual spikes can be utilized to improve the efficiency and accuracy of spike sorting. After training, my method allows for localization of one million spikes in approximately 37 seconds on a TITAN X GPU, enabling real-time analysis of massive extracellular datasets. In my third and final presented work, I focus on developing an unsupervised machine learning model that can uncover patterns of activity from neural populations associated with a behaviour being performed. Specifically, I introduce Targeted Neural Dynamical Modelling (TNDM), a statistical model that jointly models the neural activity and any external behavioural variables. TNDM decomposes neural dynamics (i.e.
temporal activity patterns) into behaviourally relevant and behaviourally irrelevant dynamics; the behaviourally relevant dynamics constitute all activity patterns required to generate the behaviour of interest while behaviourally irrelevant dynamics may be completely unrelated (e.g. other behavioural or brain states), or even related to behaviour execution (e.g. dynamics that are associated with behaviour generally but are not task specific). Again, I implement TNDM using a deep neural network to improve its scalability and expressivity. On synthetic data and on real recordings from the premotor (PMd) and primary motor cortex (M1) of a monkey performing a center-out reaching task, I show that TNDM is able to extract low-dimensional neural dynamics that are highly predictive of behaviour without sacrificing its fit to the neural data
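The consensus idea from the first work can be sketched generically (this is not SpikeInterface's actual API; the unit names, matching tolerance and agreement threshold below are invented for illustration): a unit is kept only if another sorter reports a spike train that matches it closely enough, which filters out spuriously detected neurons.

```python
import numpy as np

def agreement(a, b, delta=0.4):
    """Agreement between two spike trains (times in ms): spikes of `a`
    matched within `delta` of a spike in `b`, over the union of spikes."""
    matched = sum(np.min(np.abs(b - t)) <= delta for t in a)
    return matched / (len(a) + len(b) - matched)

def consensus_units(sorter_outputs, min_sorters=2, threshold=0.5):
    """Keep a unit only if at least `min_sorters` sorters (counting its
    own) report a spike train agreeing with it above `threshold`."""
    kept = []
    for i, units_i in enumerate(sorter_outputs):
        for name, train in units_i.items():
            support = 1 + sum(
                any(agreement(train, other) >= threshold
                    for other in units_j.values())
                for j, units_j in enumerate(sorter_outputs) if j != i
            )
            if support >= min_sorters:
                kept.append((i, name))
    return kept

# Sorter A and sorter B agree on one unit; B also reports a spurious one.
sorter_a = {"u1": np.array([1.0, 2.0, 3.0])}
sorter_b = {"u1": np.array([1.0, 2.1, 3.0]), "noise": np.array([10.0, 25.0])}
kept = consensus_units([sorter_a, sorter_b])
```

A real pipeline would additionally merge the matched copies of each unit into a single consensus train; the sketch only shows the agreement-based filtering step that removes falsely detected neurons.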