4,284 research outputs found

    Congenital diaphragmatic hernia:A critical appraisal of perinatal care

    A congenital diaphragmatic hernia (CDH) is a rare birth defect characterised by incomplete closure of the diaphragm. After birth, CDH is associated with significant neonatal morbidity and mortality due to a combination of pulmonary hypoplasia, pulmonary hypertension, and cardiac dysfunction. Despite improvements in clinical care, around 30% of these infants do not survive. The research projects reported in this thesis provide a critical appraisal of important aspects of perinatal care for infants with CDH.

    Endogenous measures for contextualising large-scale social phenomena: a corpus-based method for mediated public discourse

    This work presents an interdisciplinary methodology for developing endogenous measures of group membership through analysis of pervasive linguistic patterns in public discourse. Focusing on political discourse, this work critiques the conventional approach to the study of political participation, which is premised on decontextualised, exogenous measures to characterise groups. Considering the theoretical and empirical weaknesses of decontextualised approaches to large-scale social phenomena, this work suggests that contextualisation using endogenous measures might provide a complementary perspective to mitigate such weaknesses. This work develops a sociomaterial perspective on political participation in mediated discourse as affiliatory action performed through language. While the affiliatory function of language is often performed consciously (such as statements of identity), this work is concerned with unconscious features (such as patterns in lexis and grammar). This work argues that pervasive patterns in such features that emerge through socialisation are resistant to change and manipulation, and might therefore serve as endogenous measures of sociopolitical contexts, and thus of groups. In terms of method, the work takes a corpus-based approach to the analysis of data from the Twitter messaging service, whereby patterns in users’ speech are examined statistically in order to trace potential community membership. The method is applied in the US state of Michigan during the second half of 2018, with the midterm (i.e. non-Presidential) elections in the United States falling on 6 November. The corpus is assembled from the original posts of 5,889 users, who are nominally geolocalised to 417 municipalities. These users are clustered according to pervasive language features. Comparing the linguistic clusters according to the municipalities they represent reveals regular sociodemographic differentials across clusters. This is understood as an indication of social structure, suggesting that endogenous measures derived from pervasive patterns in language may indeed offer a complementary, contextualised perspective on large-scale social phenomena.
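
    As an illustrative aside, the following is a minimal Python sketch of the kind of clustering step this method implies: per-user relative frequencies of pervasive (function-word) features, followed by clustering. The feature list, data shapes, and use of scikit-learn's KMeans are assumptions for illustration, not the thesis's actual pipeline.

        # Sketch only: cluster users by pervasive language features.
        # Assumes scikit-learn is available; the feature set is illustrative.
        from collections import Counter
        from sklearn.feature_extraction import DictVectorizer
        from sklearn.preprocessing import normalize
        from sklearn.cluster import KMeans

        FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it", "i", "you"]

        def feature_vector(posts):
            """Relative frequencies of function-word features across a user's posts."""
            tokens = [tok.lower() for post in posts for tok in post.split()]
            total = max(len(tokens), 1)
            counts = Counter(tokens)
            return {w: counts[w] / total for w in FUNCTION_WORDS}

        def cluster_users(user_posts, n_clusters=5):
            """user_posts: {user_id: [post, ...]} -> {user_id: cluster_label}."""
            vec = DictVectorizer(sparse=False)
            X = vec.fit_transform([feature_vector(p) for p in user_posts.values()])
            X = normalize(X)  # length-normalise so prolific users do not dominate
            labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
            return dict(zip(user_posts.keys(), labels))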

    Spatial and temporal hierarchical decomposition methods for the optimal power flow problem

    The subject of this thesis is the development of spatial and temporal decomposition methods for the optimal power flow (OPF) problem, such as in transmission-distribution network topologies. In this context, we propose novel decomposition interfaces and effective methodology for both the spatial and temporal dimensions, applicable to linear and non-linear representations of the OPF problem. These two decomposition strategies are combined with a Benders-based algorithm and have advantages in model building time, memory management and solving time. For example, in the 2880-period linear problems, the decomposition finds optimal solutions up to 50 times faster and allows even larger instances to be solved; and in multi-period non-linear problems with 48 periods, close-to-optimal feasible solutions are found 7 times faster. With these decompositions, detailed networks can be optimized in coordination, effectively exploiting the value of the time-linked elements in both transmission and distribution levels while speeding up the solution process, preserving privacy, and adding flexibility when dealing with different models at each level. In the non-linear methodology, significant challenges, such as active set determination, instability and non-convex overestimations, may hinder its effectiveness; these are addressed, making the proposed methodology more robust and stable. A test network was constructed by combining standard publicly available networks, resulting in nearly 1000 buses and lines, with up to 8760 connected periods; several interfaces were presented depending on the problem type and its topology, using a modified Benders algorithm. Insight was given into why a Benders-based decomposition was used for this type of problem instead of a common alternative: ADMM. The methodology is useful mainly in two sets of applications: when highly detailed long-term linear operational problems need to be solved, such as in planning frameworks where the operational problems solved assume no prior knowledge; and in full AC-OPF problems where prior information from historic solutions can be used to speed up convergence.
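
    As an illustrative aside, the following is a minimal sketch of the Benders-style master/subproblem iteration that this kind of decomposition rests on, applied to a toy two-variable LP rather than an OPF model. The problem data and the use of SciPy's linprog (with its HiGHS backend for dual values) are assumptions for illustration only.

        # Sketch only: classic Benders loop on a toy LP, standing in for the
        # transmission (master) / distribution (subproblem) split.
        #   min  x + 2*y   s.t.  x + y >= 3,  x >= 0,  y >= 0
        # x is the master (complicating) variable, y lives in the subproblem.
        from scipy.optimize import linprog

        def solve_subproblem(x_fixed):
            """min 2*y s.t. y >= 3 - x_fixed, y >= 0; returns (value, cut multiplier)."""
            res = linprog(c=[2.0], A_ub=[[-1.0]], b_ub=[-(3.0 - x_fixed)], bounds=[(0, None)])
            return res.fun, -res.ineqlin.marginals[0]  # flip sign to the >=-form dual

        def solve_master(cuts):
            """min x + theta s.t. theta >= pi*(3 - x) for every recorded dual pi."""
            A = [[-pi, -1.0] for pi in cuts]           # -pi*x - theta <= -3*pi
            b = [-3.0 * pi for pi in cuts]
            res = linprog(c=[1.0, 1.0], A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
            return res.x[0], res.x[1]

        x, cuts, upper = 0.0, [], float("inf")
        for _ in range(20):
            sub_val, pi = solve_subproblem(x)
            upper = min(upper, x + sub_val)            # value of a feasible solution
            cuts.append(pi)
            x, theta = solve_master(cuts)
            lower = x + theta                          # bound from the master relaxation
            if upper - lower < 1e-6:
                break
        print(f"x = {x:.3f}, objective = {upper:.3f}")  # expect x = 3, objective = 3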

    Designing similarity functions

    The concept of similarity is important in many areas of cognitive science, computer science, and statistics. In machine learning, functions that measure similarity between two instances form the core of instance-based classifiers. Past similarity measures have been primarily based on simple Euclidean distance. As machine learning has matured, it has become obvious that a simple numeric instance representation is insufficient for most domains. Similarity functions for symbolic attributes have been developed, and simple methods for combining these functions with numeric similarity functions were devised. This sequence of events has revealed three important issues, which this thesis addresses. The first issue is concerned with combining multiple measures of similarity. There is no equivalence between units of numeric similarity and units of symbolic similarity. Existing similarity functions for numeric and symbolic attributes have no common foundation, and so various schemes have been devised to avoid biasing the overall similarity towards one type of attribute. The similarity function design framework proposed by this thesis produces probability distributions that describe the likelihood of transforming between two attribute values. Because common units of probability are employed, similarities may be combined using standard methods. It is empirically shown that the resulting similarity functions treat different attribute types coherently. The second issue relates to the instance representation itself. The current choice of numeric and symbolic attribute types is insufficient for many domains, in which more complicated representations are required. For example, a domain may require varying numbers of features, or features with structural information. The framework proposed by this thesis is sufficiently general to permit virtually any type of instance representation: all that is required is that a set of basic transformations that operate on the instances be defined. To illustrate the framework’s applicability to different instance representations, several example similarity functions are developed. The third, and perhaps most important, issue concerns the ability to incorporate domain knowledge within similarity functions. Domain information plays an important part in choosing an instance representation. However, even given an adequate instance representation, domain information is often lost. For example, numeric features that are modulo (such as the time of day) can be perfectly represented as a numeric attribute, but simple linear similarity functions ignore the modulo nature of the attribute. Similarly, symbolic attributes may have inter-symbol relationships that should be captured in the similarity function. The design framework proposed by this thesis allows domain information to be captured in the similarity function, both in the transformation model and in the probability assigned to basic transformations. Empirical results indicate that such domain information improves classifier performance, particularly when training data is limited.
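
    As an illustrative aside, the following is a small Python sketch of the general idea of putting heterogeneous attribute similarities on a common probability-like scale and then combining them, including a modulo (time-of-day) attribute. The particular functional forms and the confusion table are assumptions for illustration, not the framework developed in the thesis.

        # Sketch only: per-attribute "transformation probability" scores combined
        # multiplicatively. The functional forms below are illustrative assumptions.
        import math

        def numeric_similarity(a, b, scale):
            """Score that decays with the scaled numeric distance |a - b|."""
            return math.exp(-abs(a - b) / scale)

        def circular_similarity(a, b, period, scale):
            """Same idea for modulo attributes (e.g. time of day), via circular distance."""
            d = abs(a - b) % period
            return math.exp(-min(d, period - d) / scale)

        def symbolic_similarity(a, b, confusion):
            """Probability of transforming symbol a into symbol b (1.0 for identity)."""
            return 1.0 if a == b else confusion.get((a, b), confusion.get((b, a), 0.0))

        def instance_similarity(x, y):
            """Combine per-attribute scores as if they were independent probabilities."""
            scores = [
                numeric_similarity(x["age"], y["age"], scale=10.0),
                circular_similarity(x["hour"], y["hour"], period=24.0, scale=3.0),
                symbolic_similarity(x["weather"], y["weather"],
                                    confusion={("rain", "drizzle"): 0.7}),
            ]
            result = 1.0
            for s in scores:
                result *= s
            return result

        a = {"age": 30, "hour": 23.0, "weather": "rain"}
        b = {"age": 34, "hour": 1.0, "weather": "drizzle"}
        print(instance_similarity(a, b))  # 23:00 and 01:00 are treated as close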

    Reformulating aircraft routing algorithms to reduce fuel burn and thus CO2 emissions

    During the UN Climate Change Conference (COP26), in November 2021, the international aviation community agreed to advance actions to reduce CO2 emissions. Adopting more fuel-efficient routes, now that full global satellite coverage is available, could achieve this quickly and economically. Here, flights between New York and London from 1st December 2019 to 29th February 2020 are considered. Trajectories through wind fields from a global atmospheric re-analysis dataset are found using optimal control theory. Initially, time-minimal routes are obtained by applying Pontryagin’s Minimum Principle. Minimum-time air distances are compared with actual Air Traffic Management tracks. Potential air distance savings range from 0.7 to 16.4%, depending on direction and track efficiency. To gauge the potential for longer duration time-minimal round trips in the future, due to climate change, trajectories are considered for historic and future time periods, using an ensemble of climate models. Next, fixed-time, fuel-minimal routes are sought. Fuel consumption is modelled with a new physics-driven fuel burn function, which is aircraft model specific. Control variables of position-dependent aircraft headings and airspeeds or just headings are used. The importance of airspeed in finding trajectories is established, by comparing fuel burn found from a global search of optimised results for the discretised approximation of each formulation. Finally, dynamic programming is applied to find free-time, fuel-optimal routes. Results show that significant fuel reductions are possible, compared with estimates of fuel use from actual flights, without significant changes to flight duration. Fuel use for winter 2019–2020 could have been reduced by 4.6% eastbound and 3.9% westbound on flights between Heathrow and John F Kennedy Airports. This equates to a 16.6 million kg reduction in CO2 emissions. Thus large reductions in fuel consumption and emissions are possible immediately, without waiting decades for incremental improvements in fuel efficiency through technological advances.
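
    As an illustrative aside, the following is a toy Python dynamic-programming sketch of free routing through a zonal wind field, in the spirit of the free-time optimisation mentioned above. The grid, the synthetic jet-stream wind, and the constant-airspeed cost model are assumptions for illustration, not the thesis's formulation.

        # Sketch only: backward dynamic programme over a lattice of flight "lanes".
        # Wind field, grid, and airspeed below are illustrative assumptions.
        import math

        AIRSPEED = 250.0             # assumed constant true airspeed, m/s
        DX, DY = 50_000.0, 50_000.0  # grid spacing, m
        N_STAGES, N_LANES = 100, 21  # longitudinal stages x lateral lanes

        def tailwind(j):
            """Synthetic jet-stream-like zonal wind, strongest near the middle lanes (m/s)."""
            return 60.0 * math.exp(-((j - N_LANES // 2) / 4.0) ** 2)

        def segment_time(j, k):
            """Time to fly one stage forward, from lane j to lane k (s)."""
            dist = math.hypot(DX, (k - j) * DY)
            ground_speed = AIRSPEED + tailwind(j) * DX / dist  # along-track wind component
            return dist / ground_speed

        # best[j] = minimal remaining flight time from lane j; terminal cost is zero.
        best = [0.0] * N_LANES
        for _ in range(N_STAGES):
            best = [min(segment_time(j, k) + best[k]
                        for k in range(max(0, j - 1), min(N_LANES, j + 2)))
                    for j in range(N_LANES)]

        print(f"minimal crossing time from the central lane: {best[N_LANES // 2] / 3600:.2f} h")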

    The molecular athlete: exercise physiology from mechanisms to medals

    Human skeletal muscle demonstrates remarkable plasticity, adapting to numerous external stimuli including the habitual level of contractile loading. Accordingly, muscle function and exercise capacity encompass a broad spectrum, from inactive individuals with low levels of endurance and strength, to elite athletes who produce prodigious performances underpinned by pleiotropic training-induced muscular adaptations. Our current understanding of the signal integration, interpretation and output coordination of the cellular and molecular mechanisms that govern muscle plasticity across this continuum is incomplete. As such, training methods and their application to elite athletes largely rely on a "trial and error" approach, with the experience and practices of successful coaches and athletes often providing the bases for "post hoc" scientific enquiry and research. This review provides a synopsis of the morphological and functional changes, along with the molecular mechanisms, underlying exercise adaptation to endurance- and resistance-based training. These traits are placed in the context of innate genetic and inter-individual differences in exercise capacity and performance, with special consideration given to the ageing athlete. Collectively, we provide a comprehensive overview of skeletal muscle plasticity in response to different modes of exercise, and how such adaptations translate from "molecules to medals".
