
    Consent and the Construction of the Volunteer: Institutional Settings of Experimental Research on Human Beings in Britain during the Cold War

    This study challenges the primacy of consent in the history of human experimentation and argues that privileging cultural frameworks adds nuance to our understanding of the construction of the volunteer in the period 1945 to 1970. Historians and bio-ethicists have argued that medical ethics codes marked out the parameters of using people as subjects in medical scientific research, and that the consent of the subjects was fundamental to their status as volunteers. However, medical ethics codes arose from specific historical contexts rather than from a concerted and conscious determination to safeguard the well-being of subjects, and they need to be understood within those contexts. The British context of human experimentation is under-researched, and there has been even less focus on the cultural frameworks within which experiments took place. This study demonstrates, through a close analysis of the Medical Research Council's Common Cold Research Unit (CCRU) and the government's military research facility, the Chemical Defence Experimental Establishment, Porton Down (Porton), that the 'volunteer' in human experiments was a subjective entity whose identity was specific to the institution which recruited and made use of the subject. By examining representations of volunteers in the British press, the rhetoric of the government's collectivist agenda becomes evident, and this fed into the institutional construction of the volunteer at the CCRU. In contrast, discussions between Porton scientists, staff members, and government officials demonstrate that the use of military personnel in secret chemical warfare experiments was far more complex: the conflicting interests of the military, the government, and the scientific imperative all affected how the military volunteer was perceived.

    Moduli Stabilisation and the Statistics of Low-Energy Physics in the String Landscape

    In this thesis we present a detailed analysis of the statistical properties of the type IIB flux landscape of string theory. We focus primarily on models constructed via the Large Volume Scenario (LVS) and KKLT and study the distribution of various phenomenologically relevant quantities. First, we compare our considerations with previous results and point out the importance of Kähler moduli stabilisation, which has been neglected in this context so far. We perform different moduli stabilisation procedures and compare the resulting distributions. To this end, we derive the expressions for the gravitino mass, various quantities related to axion physics and other phenomenologically interesting quantities in terms of the fundamental flux-dependent quantities g_s, W_0 and \mathfrak{n}, the parameter which specifies the nature of the non-perturbative effects. Exploiting our knowledge of the distribution of these fundamental parameters, we can derive a distribution for all the quantities we are interested in. For models that are stabilised via LVS we find a logarithmic distribution, whereas for KKLT and perturbatively stabilised models we find a power-law distribution. We continue by investigating the statistical significance of a newly found class of KKLT vacua and present a search algorithm for such constructions. We conclude by presenting an application of our findings. Given the mild preference for higher-scale supersymmetry breaking, we present a model of the early universe which allows for additional periods of early matter domination and ultimately leads to rather sharp predictions for the dark matter mass in this model. We find the dark matter mass to be in the very heavy range m_\chi ~ 10^{10}-10^{11} GeV.
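    The qualitative contrast between the two distributions can be illustrated with a toy Monte Carlo sketch. Everything below (the priors on g_s and W_0, the order-one constant, and the polynomial "KKLT-like" volume) is an illustrative assumption, not the flux distributions derived in the thesis; the point is only that an exponentially volume-sensitive scenario spreads the gravitino mass over far more decades.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Toy priors (assumptions for illustration only):
# g_s uniform in (0.01, 0.5], |W_0| uniform in (1e-3, 1].
g_s = rng.uniform(0.01, 0.5, N)
W0 = rng.uniform(1e-3, 1.0, N)

# LVS-like toy: the volume is exponentially large in 1/g_s, V ~ exp(a/g_s),
# so m_{3/2} ~ W_0 / V is smeared over many decades (a "logarithmic" spread).
a = 1.0  # order-one constant, assumed
m32_lvs = W0 / np.exp(a / g_s)

# KKLT-like toy: volume only polynomially sensitive to the inputs,
# giving a much narrower spread of scales.
m32_kklt = W0 / (1.0 / g_s) ** 3

decades = lambda m: np.log10(m.max()) - np.log10(m.min())
print(f"exponential-volume toy: m_3/2 spans ~{decades(m32_lvs):.0f} decades")
print(f"polynomial-volume toy:  m_3/2 spans ~{decades(m32_kklt):.0f} decades")
```

    The exponential sensitivity to 1/g_s is what turns a mild prior on g_s into a huge hierarchy of gravitino mass scales, which is the mechanism behind the distributional statements above.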

    Towards an Understanding of Effortful Fundraising Experiences: Using Interpretative Phenomenological Analysis in Fundraising Research

    Physical-activity oriented community fundraising has experienced exponential growth in popularity over the past 15 years. The aim of this study was to explore the value of effortful fundraising experiences, from the point of view of participants, and explore the impact that these experiences have on people’s lives. This study used an IPA approach to interview 23 individuals, recognising the role of participants as proxy (nonprofessional) fundraisers for charitable organisations, and the unique organisation-donor dynamic that this creates. It also brought together relevant psychological theory related to physical-activity fundraising experiences (through a narrative literature review) and used primary interview data to substantiate these. Effortful fundraising experiences are examined in detail to understand their significance to participants, and how such experiences influence their connection with a charity or cause. This was done with an idiographic focus at first, before examining convergences and divergences across the sample. This study found that effortful fundraising experiences can have a profound positive impact upon community fundraisers in both the short and the long term. Additionally, it found that these experiences can be opportunities for charitable organisations to create lasting, meaningful relationships with participants, and foster mutually beneficial lifetime relationships with them. Further research is needed to test specific psychological theory in this context, including self-esteem theory, self-determination theory, and the martyrdom effect (among others).

    A suite of quantum algorithms for the shortest vector problem

    Cryptography has come to be an essential part of the cybersecurity infrastructure that provides a safe environment for communications in an increasingly connected world. The advent of quantum computing poses a threat to the foundations of the current widely used cryptographic model, due to the breaking of most of the cryptographic algorithms used to provide confidentiality, authenticity, and more. Consequently, a new set of cryptographic protocols has been designed to be secure against quantum computers, collectively known as post-quantum cryptography (PQC). A forerunner among PQC is lattice-based cryptography, whose security relies upon the hardness of a number of closely related mathematical problems, one of which is known as the shortest vector problem (SVP). In this thesis I describe a suite of quantum algorithms that utilize the energy minimization principle to attack the shortest vector problem. The algorithms outlined span gate-model and continuous-time quantum computing, and explore methods of parameter optimization via variational methods, which are thought to be effective on near-term quantum computers. The performance of the algorithms is analyzed numerically, analytically, and on quantum hardware where possible. I explain how the results obtained in the pursuit of solving SVP apply more broadly to quantum algorithms seeking to solve general real-world problems; minimize the effect of noise on imperfect hardware; and improve the efficiency of parameter optimization.
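    The energy-minimization framing can be made concrete with a small classical sketch (the basis, dimension, and search radius below are illustrative assumptions, not taken from the thesis): the shortest vector of a lattice with basis B is a nonzero integer coefficient vector x minimizing the "energy" H(x) = x^T G x, where G = B^T B is the Gram matrix. This H is the cost function a quantum optimiser would be asked to minimize; here we simply brute-force it.

```python
import itertools
import numpy as np

# Toy 2D lattice basis (columns are the basis vectors); chosen for
# illustration only.
B = np.array([[3.0, 1.0],
              [1.0, 2.0]])
G = B.T @ B  # Gram matrix: ||B x||^2 = x^T G x

def energy(x):
    # The "energy" to be minimized over nonzero integer vectors x.
    x = np.asarray(x)
    return float(x @ G @ x)

R = 3  # coefficient search radius (assumption; real instances need care here)
best = min(
    (x for x in itertools.product(range(-R, R + 1), repeat=2) if any(x)),
    key=energy,
)
print("shortest vector coefficients:", best, "squared length:", energy(best))
```

    A quantum approach replaces the brute-force search with ground-state preparation for a Hamiltonian encoding H(x), with the integer coefficients mapped onto qubit registers; the cost landscape itself is unchanged.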

    Unraveling the effect of sex on human genetic architecture

    Sex is arguably the most important differentiating characteristic in most mammalian species, separating populations into different groups, with varying behaviors, morphologies, and physiologies based on their complement of sex chromosomes, amongst other factors. In humans, despite males and females sharing nearly identical genomes, there are differences between the sexes in complex traits and in the risk of a wide array of diseases. Sex provides the genome with a distinct hormonal milieu, differential gene expression, and environmental pressures arising from gendered societal roles. This raises the possibility of gene-by-sex (GxS) interactions that may contribute to some of the phenotypic differences observed. In recent years, there has been growing evidence of GxS, with common genetic variation presenting different effects on males and females. These studies have, however, been limited with regard to the number of traits studied and/or statistical power. Understanding sex differences in genetic architecture is of great importance, as this could lead to an improved understanding of potential differences in underlying biological pathways and disease etiology between the sexes, and in turn help inform personalised treatments and precision medicine. In this thesis we provide insights into both the scope and mechanism of GxS across the genomes of circa 450,000 individuals of European ancestry and 530 complex traits in the UK Biobank. We found small yet widespread differences in genetic architecture across traits through the calculation of sex-specific heritability, genetic correlations, and sex-stratified genome-wide association studies (GWAS). We further investigated whether sex-agnostic (non-stratified) efforts could potentially be missing information of interest, including sex-specific trait-relevant loci and increased phenotype prediction accuracies.
    Finally, we studied the potential functional role of sex differences in genetic architecture through sex-biased expression quantitative trait loci (eQTL) and gene-level analyses. Overall, this study marks a broad examination of the genetics of sex differences. Our findings parallel previous reports, suggesting the presence of sexual genetic heterogeneity across complex traits of generally modest magnitude. Furthermore, our results suggest the need to consider sex-stratified analyses in future studies in order to shed light on possible sex-specific molecular mechanisms.
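    One common way a sex-differentiated effect is tested at a single variant is a heterogeneity z-test comparing the male- and female-stratified GWAS estimates. The sketch below uses made-up summary statistics (not results from this thesis) and assumes independent male and female samples, the usual setup for sex-stratified analyses.

```python
import math

# Illustrative (made-up) sex-stratified summary statistics for one SNP:
# effect size and standard error from male-only and female-only GWAS.
beta_m, se_m = 0.08, 0.01
beta_f, se_f = 0.03, 0.01

# Heterogeneity z-test: under the null of no GxS interaction the two
# estimates share a mean, so their difference is approximately normal.
z = (beta_m - beta_f) / math.sqrt(se_m**2 + se_f**2)
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
print(f"z = {z:.2f}, two-sided p = {p:.2e}")
```

    Applied genome-wide, tests of this form are what distinguish genuine sexual genetic heterogeneity from estimation noise in the stratified scans.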

    The gut microbiome variability of a butterflyfish increases on severely degraded Caribbean reefs.

    Environmental degradation has the potential to alter key mutualisms that underlie the structure and function of ecological communities. How microbial communities associated with fishes vary across populations and in relation to habitat characteristics remains largely unknown, despite their fundamental roles in host nutrition and immunity. We find significant differences in the gut microbiome composition of a facultative coral-feeding butterflyfish (Chaetodon capistratus) across Caribbean reefs that differ markedly in live coral cover (∼0-30%). Fish gut microbiomes were significantly more variable at degraded reefs, a pattern driven by changes in the relative abundance of the most common taxa, potentially associated with stress. We also demonstrate that fish gut microbiomes on severely degraded reefs have a lower abundance of Endozoicomonas and a higher diversity of anaerobic fermentative bacteria, which may suggest a less coral-dominated diet. The observed shifts in fish gut bacterial communities across the habitat gradient extend to a small set of potentially beneficial host-associated bacteria (i.e., the core microbiome), suggesting that essential fish-microbiome interactions may be vulnerable to severe coral degradation.
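    Microbiome "variability" of this kind is typically quantified as beta-dispersion, e.g. the distance of each sample to its group centroid (as in a PERMDISP-style analysis). The sketch below uses simulated abundance tables and Bray-Curtis distances purely for illustration; the data, taxon counts, and parameters are assumptions, not the study's pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy relative-abundance tables (samples x taxa); purely illustrative.
# The "degraded" group is drawn from a less concentrated Dirichlet,
# i.e. compositions scatter more from sample to sample.
healthy = rng.dirichlet(alpha=[20, 10, 5, 5], size=30)
degraded = rng.dirichlet(alpha=[2, 1, 0.5, 0.5], size=30)

def bray_curtis(a, b):
    # Bray-Curtis dissimilarity between two relative-abundance vectors.
    return np.abs(a - b).sum() / (a + b).sum()

def dispersion(samples):
    # Mean distance of each sample to the group centroid: a simple
    # analogue of the PERMDISP beta-dispersion statistic.
    centroid = samples.mean(axis=0)
    return np.mean([bray_curtis(s, centroid) for s in samples])

print(f"healthy-reef dispersion:  {dispersion(healthy):.3f}")
print(f"degraded-reef dispersion: {dispersion(degraded):.3f}")
```

    A higher group dispersion, as in the simulated degraded group here, is the statistical signature behind statements like "gut microbiomes were significantly more variable at degraded reefs".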

    Hunting Wildlife in the Tropics and Subtropics

    The hunting of wild animals for their meat has been a crucial activity in the evolution of humans. It continues to be an essential source of food and a generator of income for millions of Indigenous and rural communities worldwide. Conservationists rightly fear that excessive hunting of many animal species will cause their demise, as has already happened throughout the Anthropocene. Many species of large mammals and birds have been decimated or annihilated due to overhunting by humans. If such pressures continue, many other species will meet the same fate. Equally, if those who depend on wildlife resources are to continue using them, sustainable practices must be implemented. These communities need to remain, or become, custodians of the wildlife resources within their lands, for their own well-being as well as for biodiversity in general. This title is also available via Open Access on Cambridge Core.

    Managing global virtual teams in the London FinTech industry

    Today, the number of organisations adopting virtual working arrangements has exploded, and the London FinTech industry is no exception. In recent years, FinTech companies have increasingly developed virtual teams as a means of connecting and engaging geographically dispersed workers, lowering costs, and enabling greater speed and adaptability. As the first study in the United Kingdom of global virtual team management in the FinTech industry, this DBA research seeks answers to the question, “What makes for the successful management of a global virtual team in the London FinTech industry?”. The Straussian grounded-theory method was chosen because this qualitative approach lets participants have their own voice, offers some flexibility, and allows the researcher to hold preconceived ideas about the research undertaking. The research makes the case for appreciating the voice of people with lived experience. Ten London-based FinTech managers with considerable experience running virtual teams agreed to take part in this study. These managers had spent time working at large, household-name firms with significant global reach: one had recently become founder and CEO of his own firm, taking on clients and hiring contract staff from around the world; at least eight of the others were senior ‘Heads’ of various technology teams; and one was a Managing Director at a ‘Big Four’ consultancy. They had all spent years (and many still do) running geographically distributed teams with members as far away as Pacific Asia, and they were all keen to discuss that breadth of experience and the challenges they faced. Results from these in-depth interviews suggested that there are myriad reasons for forming a global virtual team, from providing 24-hour, follow-the-sun service to locating the most cost-effective resources with the highest skills.
    The interviews also confirmed that there are unique challenges to virtual management and that new techniques are required to help virtual managers navigate them. Managing a global virtual team requires much more than the traditional management competencies. Based on discussions with the respondents, a set of practical recommendations for global virtual team management was developed, covering a wide range of issues related to recruitment and selection, team building, developing standard operating procedures, communication, motivation, performance management, and building trust.

    Regularized interior point methods for convex programming

    Interior point methods (IPMs) constitute one of the most important classes of optimization methods, due to their unparalleled robustness, as well as their generality. It is well known that a very large class of convex optimization problems can be solved by means of IPMs, in a polynomial number of iterations. As a result, IPMs are being used to solve problems arising in a plethora of fields, ranging from physics, engineering, and mathematics, to the social sciences, to name just a few. Nevertheless, there remain certain numerical issues that have not yet been addressed. More specifically, the main drawback of IPMs is that the linear algebra task involved is inherently ill-conditioned. At every iteration of the method, one has to solve a (possibly large-scale) linear system of equations (also known as the Newton system), the conditioning of which deteriorates as the IPM converges to an optimal solution. If these linear systems are of very large dimension, prohibiting the use of direct factorization, then iterative schemes may have to be employed. Such schemes are significantly affected by the inherent ill-conditioning within IPMs. One common approach for improving the aforementioned numerical issues, is to employ regularized IPM variants. Such methods tend to be more robust and numerically stable in practice. Over the last two decades, the theory behind regularization has been significantly advanced. In particular, it is well known that regularized IPM variants can be interpreted as hybrid approaches combining IPMs with the proximal point method. However, it remained unknown whether regularized IPMs retain the polynomial complexity of their non-regularized counterparts. Furthermore, the very important issue of tuning the regularization parameters appropriately, which is also crucial in augmented Lagrangian methods, was not addressed. 
    In this thesis, we focus on addressing the previous open questions, as well as on creating robust implementations that solve various convex optimization problems. We discuss in detail the effect of regularization, and derive two different regularization strategies; one based on the proximal method of multipliers, and another one based on a Bregman proximal point method. The latter tends to be more efficient, while the former is more robust and has better convergence guarantees. In addition, we discuss the use of iterative linear algebra within the presented algorithms, by proposing some general-purpose preconditioning strategies (used to accelerate the iterative schemes) that take advantage of the regularized nature of the systems being solved. In Chapter 2 we present a dynamic non-diagonal regularization for IPMs. The non-diagonal aspect of this regularization is implicit, since all the off-diagonal elements of the regularization matrices are cancelled out by those elements present in the Newton system, which do not contribute important information in the computation of the Newton direction. Such a regularization, which can be interpreted as the application of a Bregman proximal point method, has multiple goals. The obvious one is to improve the spectral properties of the Newton system solved at each IPM iteration. On the other hand, the regularization matrices introduce sparsity to the aforementioned linear system, allowing for more efficient factorizations. We propose a rule for tuning the regularization dynamically based on the properties of the problem, such that sufficiently large eigenvalues of the non-regularized system are perturbed insignificantly. This alleviates the need to find specific regularization values through experimentation, which is the most common approach in the literature. We provide perturbation bounds for the eigenvalues of the non-regularized system matrix, and then discuss the spectral properties of the regularized matrix.
    Finally, we demonstrate the efficiency of the method applied to solve standard small- and medium-scale linear and convex quadratic programming test problems. In Chapter 3 we combine an IPM with the proximal method of multipliers (PMM). The resulting algorithm (IP-PMM) is interpreted as a primal-dual regularized IPM, suitable for solving linearly constrained convex quadratic programming problems. We apply a few iterations of the interior point method to each sub-problem of the proximal method of multipliers. Once a satisfactory solution of the PMM sub-problem is found, we update the PMM parameters, form a new IPM neighbourhood, and repeat this process. Given this framework, we prove polynomial complexity of the algorithm, under standard assumptions. To our knowledge, this is the first polynomial complexity result for a primal-dual regularized IPM. The algorithm is guided by the use of a single penalty parameter; that of the logarithmic barrier. In other words, we show that IP-PMM inherits the polynomial complexity of IPMs, as well as the strong convexity of the PMM sub-problems. The updates of the penalty parameter are controlled by the IPM, and hence are well tuned, and do not depend on the problem solved. Furthermore, we study the behavior of the method when it is applied to an infeasible problem, and identify a necessary condition for infeasibility. The latter is used to construct an infeasibility detection mechanism. Subsequently, we provide a robust implementation of the presented algorithm and test it over a set of small- to large-scale linear and convex quadratic programming problems, demonstrating the benefits of using regularization in IPMs as well as the reliability of the approach. In Chapter 4 we extend IP-PMM to the case of linear semi-definite programming (SDP) problems. In particular, we prove polynomial complexity of the algorithm, under mild assumptions, and without requiring exact computations for the Newton directions.
    We furthermore provide a necessary condition for lack of strong duality, which can be used as a basis for constructing detection mechanisms for identifying pathological cases within IP-PMM. In Chapter 5 we present general-purpose preconditioners for regularized Newton systems arising within regularized interior point methods. We discuss positive definite preconditioners, suitable for iterative schemes like the conjugate gradient (CG), or the minimal residual (MINRES) method. We study the spectral properties of the preconditioned systems, and discuss the use of each presented approach, depending on the properties of the problem under consideration. All preconditioning strategies are numerically tested on various medium- to large-scale problems coming from standard test sets, as well as problems arising from partial differential equation (PDE) optimization. In Chapter 6 we apply specialized regularized IPM variants to problems arising from portfolio optimization, machine learning, image processing, and statistics. Such problems are usually solved by specialized first-order approaches. The efficiency of the proposed regularized IPM variants is confirmed by comparing them against problem-specific state-of-the-art first-order alternatives given in the literature. Finally, in Chapter 7 we present some conclusions as well as open questions, and possible future research directions.
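    The conditioning problem that motivates primal-dual regularization can be seen directly on a toy Newton (KKT) system. The sketch below uses random data and arbitrary parameter values (assumptions for illustration, not the thesis's test problems or tuning rule): the barrier term Theta^{-1} splits toward 0 and infinity near optimality, a redundant constraint row makes the unregularized system singular, and the regularized matrix [[H + rho*I, A^T], [A, -delta*I]] restores a bounded condition number.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 4

# Random convex QP data (illustrative only).
M = rng.standard_normal((n, n))
Q = M @ M.T + np.eye(n)           # positive definite Hessian
A = rng.standard_normal((m, n))
A[-1] = A[0]                      # redundant constraint row -> singular KKT

# Late-IPM barrier contribution: entries of Theta^{-1} split toward 0 and
# infinity, the usual source of ill-conditioning near an optimal solution.
theta_inv = np.diag(np.r_[np.full(n // 2, 1e6), np.full(n - n // 2, 1e-6)])

def kkt(rho, delta):
    # Primal-dual regularized KKT matrix; rho = delta = 0 recovers the
    # unregularized Newton system.
    return np.block([[Q + theta_inv + rho * np.eye(n), A.T],
                     [A, -delta * np.eye(m)]])

cond_unreg = np.linalg.cond(kkt(0.0, 0.0))
cond_reg = np.linalg.cond(kkt(1e-4, 1e-4))
print(f"unregularized cond: {cond_unreg:.2e}")
print(f"regularized cond:   {cond_reg:.2e}")
```

    With rho, delta > 0 the matrix is symmetric quasidefinite, so its eigenvalues are bounded away from zero, which is exactly the spectral improvement that makes factorizations and Krylov solvers reliable inside regularized IPMs.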