
    Understanding and improving the applicability of randomised controlled trials: subgroup reporting and the statistical calibration of trials to real-world populations

    Context and objective: Randomised controlled trials (hereafter, trials) are widely regarded as the gold standard for evaluating treatment efficacy in medical interventions. They employ strict study designs, rigorous eligibility criteria, standardised protocols, and close participant monitoring under controlled conditions, all of which contribute to high internal validity. However, these stringent criteria and procedures may limit the generalisability of trial findings to real-world settings, which often involve diverse patient populations, such as patients with multimorbidity and frailty. Consequently, there is growing interest in the applicability of trials to real-world clinical practice. In this thesis I 1) evaluate how well major trials report on variation in treatment effects and 2) examine the use of trial calibration methods to test trial applicability.

    Methods: 1) A comprehensive and consistent description of subgroup reporting was produced to support the exploration of subgroup effects and treatment heterogeneity, and thus informed decision-making for tailored subgroup populations in routine practice. The study evaluated 2,235 trials from ClinicalTrials.gov involving multiple chronic medical conditions, assessing the presence of subgroup reporting in the corresponding publications and extracting subgroup terms. These terms were then standardised and summarised using Medical Subject Headings and WHO Anatomical Therapeutic Chemical codes. Logistic and Poisson regression models were used to identify independent predictors of subgroup reporting patterns. 2) Two calibration models, a regression-based model and inverse odds of sampling weights (IOSW), were implemented. These models were used to apply the findings of two influential heart failure (HF) trials, COMET and DIG, to a real-world HF registry in Scotland comprising 8,012 HF patients, mainly with reduced ejection fraction, using individual participant data (IPD) from both sources. Calibration was also conducted within subgroups of the real-world Scottish HF registry (the lowest- and highest-risk groups) for exploratory analyses. The study compared baseline characteristics and calibrated and uncalibrated results between the trials and the registry, and assessed the impact of calibration on the results, focusing on overall effects and precision.

    Results: Among the 2,235 eligible trials, 48% (1,082 trials) reported overall results and 23% (524 trials) reported subgroups. Age (51%), gender (45%), racial group (28%) and geographical location (17%) were the most frequently reported subgroups among the 524 trials. Characteristics related to the index condition (severity, duration, type, etc.) were somewhat commonly reported, whereas metrics of comorbidity, frailty and mental health were rarely reported. Follow-up time, enrolment size, trial start year and specific index conditions (e.g., hypercholesterolemia, hypertension) were significant predictors of any subgroup reporting in adjusted models, while funding source and number of arms were not associated with subgroup reporting. The trial calibration study showed that registry patients were, on average, older, had poorer renal function and received higher doses of loop diuretics than trial participants. The key findings of both HF trials remained consistent after calibration to the registry, with a tolerable loss of precision (wider confidence intervals) for the effect estimates. Treatment-effect estimates were also similar when the trials were calibrated to high-risk and low-risk registry patients, albeit with a greater reduction in precision.

    Conclusion: Variation in subgroup reporting across trials limits the feasibility of evaluating subgroup effects and examining heterogeneity of treatment effects. Where IPD, or suitable summary data as an alternative, are available from both the trials and a registry, trial applicability can be assessed by performing calibration.
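
    As a rough illustration of the IOSW idea described above, the sketch below reweights trial participants by the inverse odds of trial membership estimated from a logistic model, then refits the treatment-effect model with those weights. It is a minimal sketch with synthetic data, not the thesis code: the covariate (age), the outcome and all column names are hypothetical, and in practice confidence intervals would usually be obtained by bootstrapping.

```python
# Minimal IOSW calibration sketch on synthetic data (illustrative only).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
trial = pd.DataFrame({"age": rng.normal(65, 8, 500),
                      "treatment": rng.integers(0, 2, 500)})
trial["event"] = rng.binomial(1, 0.3 - 0.05 * trial["treatment"], 500)
registry = pd.DataFrame({"age": rng.normal(75, 10, 2000)})

# 1) Model membership in the trial (S = 1) vs. the registry (S = 0).
combined = pd.concat([trial[["age"]].assign(S=1),
                      registry[["age"]].assign(S=0)], ignore_index=True)
sampling_model = sm.Logit(combined["S"],
                          sm.add_constant(combined[["age"]])).fit(disp=0)

# 2) Inverse odds weights for trial participants: P(S=0 | X) / P(S=1 | X).
p_in_trial = sampling_model.predict(sm.add_constant(trial[["age"]]))
iosw = (1 - p_in_trial) / p_in_trial

# 3) Re-estimate the treatment effect in the trial, reweighted to the registry.
#    Confidence intervals would normally be obtained by bootstrapping.
calibrated = sm.GLM(trial["event"], sm.add_constant(trial[["treatment"]]),
                    family=sm.families.Binomial(),
                    freq_weights=np.asarray(iosw)).fit()
print(calibrated.params)
```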

    Flood dynamics derived from video remote sensing

    Flooding is by far the most pervasive natural hazard, with the human impacts of floods expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a pivotal opportunity to enhance the predictive capacity of these models. Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated using observed data to obtain meaningful and actionable model predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast, high-resolution video datasets. The parallel evolution of computing capabilities, coupled with advances in artificial intelligence, is enabling the processing of data at unprecedented scales and complexities, allowing meaningful insights to be gleaned from datasets that can be integrated with hydraulic models.

    The aims of the research presented in this thesis were twofold. The first was to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second was to estimate river discharge using satellite video combined with high-resolution topographic data.

    In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a two-dimensional hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterising two-dimensional hydraulic models. This finding inspired the subsequent chapter, in which river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model. Harnessing the ability of deep neural networks to learn complex features and deliver accurate and contextually informed flood segmentation, the potential value of satellite video for validating two-dimensional hydraulic model simulations is demonstrated. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographic data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which are then used to estimate river discharge.

    Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice. The incorporation of LSPIV techniques into flood modelling workflows signifies a methodological progression, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications for flood modelling science.
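
    As a simple illustration of the image velocimetry principle underpinning LSPIV, the sketch below estimates the displacement of an interrogation window between two consecutive frames from the peak of their cross-correlation and converts it to a surface speed. It uses synthetic frames, and the ground sampling distance and frame interval are assumed values; this is not the processing chain used in the thesis.

```python
# Cross-correlation step at the heart of image velocimetry (illustrative sketch).
import numpy as np
from scipy.signal import fftconvolve

def window_displacement(win_a, win_b):
    """Estimate the pixel displacement of win_b relative to win_a."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = fftconvolve(b, a[::-1, ::-1], mode="same")   # cross-correlation
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    centre = np.array(corr.shape) // 2
    return np.array(peak) - centre                      # (dy, dx) in pixels

# Synthetic example: a texture pattern shifted by (3, 5) pixels between frames.
rng = np.random.default_rng(1)
frame1 = rng.random((64, 64))
frame2 = np.roll(frame1, shift=(3, 5), axis=(0, 1))

dy, dx = window_displacement(frame1, frame2)
gsd, dt = 0.5, 1.0        # assumed ground sampling distance [m/px] and frame interval [s]
speed = np.hypot(dy, dx) * gsd / dt
print(f"displacement = ({dy}, {dx}) px, surface speed = {speed:.2f} m/s")
```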

    Synoptic weather patterns conducive to compound extreme rainfall–wave events in the NW Mediterranean

    The NW Mediterranean coast is highly susceptible to the impacts of extreme rainstorms and coastal storms, which often lead to flash floods, coastal erosion, and flooding across a highly urbanised territory. Often, these storms occur simultaneously, resulting in compound events that intensify local impacts when they happen in the same location or spread impacts across the territory when they occur in different areas. These multivariate and spatially compound events present significant challenges for risk management, potentially overwhelming emergency services. In this study, we analysed the prevailing atmospheric conditions during various types of extreme episodes, aiming to create the first classification of synoptic weather patterns (SWPs) conducive to compound events involving heavy rainfall and storm waves in the Spanish NW Mediterranean. To achieve this, we developed a methodological framework that combines an objective synoptic classification method based on principal component analysis and k-means clustering with a Bayesian network. This methodology was applied to a dataset comprising 562 storm events recorded over 30 years, including 112 compound events. First, we used the framework to determine the optimal combination of domain size, classification variables, and number of clusters based on the synoptic skill to replicate local-scale values of daily rainfall and significant wave height. Subsequently, we identified SWPs associated with extreme compound events, which are often characterised by upper-level lows and trough structures in conjunction with Mediterranean cyclones, resulting in severe to extreme coastal storms combined with convective systems. The obtained classification demonstrated strong skill, with scores exceeding 0.4 when considering factors like seasonality or the North Atlantic Oscillation. These findings contribute to a broader understanding of compound terrestrial–maritime extreme events in the study area and have the potential to aid in the development of effective risk management strategies.
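
    For readers unfamiliar with objective synoptic classification, the sketch below shows the PCA-plus-k-means step on synthetic daily fields. The domain size, number of retained components and number of clusters are placeholders for the values the study selects by synoptic skill, and the Bayesian-network component is not shown.

```python
# Objective synoptic classification sketch: PCA + k-means on synthetic fields.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
n_days, n_gridpoints = 562, 40 * 60        # e.g. daily fields on a 40 x 60 grid
fields = rng.normal(size=(n_days, n_gridpoints))

# 1) Standardise each grid point and reduce dimensionality with PCA,
#    keeping enough components to explain most of the variance.
anomalies = (fields - fields.mean(axis=0)) / fields.std(axis=0)
pca = PCA(n_components=0.90)               # retain 90% of the variance
scores = pca.fit_transform(anomalies)

# 2) Cluster the PC scores; each cluster centroid is one synoptic weather pattern.
n_swp = 9                                  # number of clusters, chosen here arbitrarily
kmeans = KMeans(n_clusters=n_swp, n_init=10, random_state=0).fit(scores)
swp_labels = kmeans.labels_                # SWP assigned to each day

# 3) Storm events can then be linked to the SWP of their date, and the
#    classification skill evaluated against local rainfall and wave height.
print(np.bincount(swp_labels))
```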

    Metro systems: Construction, operation and impacts

    Peer reviewed. Publisher PDF.

    Review of A. Colin Cameron and Pravin K. Trivedi’s Microeconometrics Using Stata, Second Edition

    This is the author accepted manuscript. The final version is available from SAGE Publications via the DOI in this record. In this article, I review Microeconometrics Using Stata, Second Edition, by A. Colin Cameron and Pravin K. Trivedi (2022, Stata Press).

    Probabilistic finite element-based reliability of corroded pipelines with interacting corrosion cluster defects

    Open Access via the Elsevier agreement. The first author would like to thank the Ghana National Petroleum Corporation (GNPC) Foundation for funding the PhD studies at the University of Aberdeen, United Kingdom. The first author also acknowledges the research support from the Net Zero Technology Centre and the University of Aberdeen through their partnership in the UK National Decommissioning Centre. Peer reviewed. Publisher PDF.

    A Look at Financial Dependencies by Means of Econophysics and Financial Economics

    This is a review of financial dependencies that merges efforts in econophysics and financial economics over the last few years. We focus on the most relevant contributions to the analysis of dependencies in asset markets, especially correlational studies, which in our opinion are beneficial for researchers in both fields. In econophysics, these dependencies can be modeled to describe financial markets as evolving complex networks. In particular, we show that a useful way to describe dependencies is by means of information filtering networks, which are able to retrieve relevant and meaningful information from complex financial data sets. In financial economics, these dependencies can describe asset comovement and spillovers. In particular, several models are presented that show how network and factor-model approaches are related to the modeling of multivariate volatility and asset returns, respectively. Finally, we sketch out how these studies can inspire future research and how they can help researchers in both fields find a better and stronger common language.
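
    As a concrete example of an information filtering network, the sketch below builds a Mantegna-style minimum spanning tree from a correlation-based distance between synthetic asset returns. It illustrates the general technique rather than any specific analysis reviewed in the article; the tickers and return series are made up.

```python
# Minimum spanning tree from a correlation-based distance (illustrative sketch).
import numpy as np
import networkx as nx

rng = np.random.default_rng(7)
tickers = ["A", "B", "C", "D", "E"]
common = rng.normal(size=500)                       # a shared "market" factor
returns = {t: 0.5 * common + rng.normal(size=500) for t in tickers}
R = np.column_stack([returns[t] for t in tickers])

# Correlation matrix and the associated metric distance d_ij = sqrt(2 * (1 - rho_ij)).
rho = np.corrcoef(R, rowvar=False)
dist = np.sqrt(2.0 * (1.0 - rho))

# Build a complete weighted graph and keep only its minimum spanning tree:
# the MST retains the N-1 most informative links out of N(N-1)/2 correlations.
G = nx.Graph()
for i in range(len(tickers)):
    for j in range(i + 1, len(tickers)):
        G.add_edge(tickers[i], tickers[j], weight=dist[i, j])
mst = nx.minimum_spanning_tree(G)
print(sorted(mst.edges(data="weight")))
```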

    Differential Co-Abundance Network Analyses for Microbiome Data Adjusted for Clinical Covariates Using Jackknife Pseudo-Values

    A recent breakthrough in differential network (DN) analysis of microbiome data has been realized with the advent of next-generation sequencing technologies. DN analysis disentangles the microbial co-abundance among taxa by comparing network properties between two or more graphs under different biological conditions. However, existing methods for DN analysis of microbiome data do not adjust for other clinical differences between subjects. We propose a Statistical Approach via Pseudo-value Information and Estimation for Differential Network Analysis (SOHPIE-DNA) that incorporates additional covariates such as continuous age and categorical BMI. SOHPIE-DNA is a regression technique adopting jackknife pseudo-values that can be implemented readily for the analysis. We demonstrate through simulations that SOHPIE-DNA consistently reaches higher recall and F1-score, while maintaining similar precision and accuracy to existing methods (NetCoMi and MDiNE). Lastly, we apply SOHPIE-DNA to two real datasets, from the American Gut Project and the Diet Exchange Study, to showcase its utility. The analysis of the Diet Exchange Study also shows that SOHPIE-DNA can be used to examine the temporal change in the connectivity of taxa with the inclusion of additional covariates. As a result, our method has found taxa that are related to the prevention of intestinal inflammation and to the severity of fatigue in advanced metastatic cancer patients.
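
    The sketch below illustrates the general jackknife pseudo-value idea that SOHPIE-DNA builds on, in a deliberately simplified form: a per-taxon network statistic (here, degree centrality in a thresholded co-abundance correlation network) is recomputed with each subject left out, converted to pseudo-values, and regressed on group plus a clinical covariate. It is not the SOHPIE-DNA estimator itself; the data, threshold and statistic are illustrative.

```python
# Jackknife pseudo-value regression sketch for one taxon (illustrative only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_subjects, n_taxa, taxon = 60, 10, 0
abundance = rng.normal(size=(n_subjects, n_taxa))      # e.g. CLR-transformed counts
group = rng.integers(0, 2, n_subjects)                 # condition indicator
age = rng.normal(50, 10, n_subjects)                   # an extra clinical covariate

def degree_centrality(data, taxon, threshold=0.2):
    """Number of taxa whose co-abundance correlation with `taxon` exceeds a cutoff."""
    rho = np.corrcoef(data, rowvar=False)
    return np.sum(np.abs(np.delete(rho[taxon], taxon)) > threshold)

theta_full = degree_centrality(abundance, taxon)
pseudo = np.empty(n_subjects)
for i in range(n_subjects):
    theta_loo = degree_centrality(np.delete(abundance, i, axis=0), taxon)
    pseudo[i] = n_subjects * theta_full - (n_subjects - 1) * theta_loo

# Regress the pseudo-values on condition and the covariate; the group coefficient
# indicates differential connectivity of this taxon after adjustment.
X = sm.add_constant(np.column_stack([group, age]))
fit = sm.OLS(pseudo, X).fit()
print(fit.params)
```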

    Network communications flexibly predict visual contents that enhance representations for faster visual categorization

    Models of visual cognition generally assume that brain networks predict the contents of a stimulus to facilitate its subsequent categorization. However, understanding prediction and categorization at a network level has remained challenging, partly because we need to reverse engineer their information processing mechanisms from the dynamic neural signals. Here, we used connectivity measures that can isolate the communications of a specific content to reconstruct these network mechanisms in each individual participant (N=11, both sexes). Each was cued to the spatial location (left vs. right) and contents (Low vs. High Spatial Frequency, LSF vs. HSF) of a predicted Gabor stimulus that they then categorized. Using each participant’s concurrently measured MEG, we reconstructed networks that predict and categorize LSF vs. HSF contents for behavior. We found that predicted contents flexibly propagate top-down from temporal to lateralized occipital cortex, depending on task demands, under supervisory control of prefrontal cortex. When they reach lateralized occipital cortex, predictions enhance the bottom-up LSF vs. HSF representations of the stimulus, all the way from occipital-ventral-parietal to pre-motor cortex, in turn producing faster categorization behavior. Importantly, content communications are subsets (i.e., 55-75%) of the signal-to-signal communications typically measured between brain regions. Hence, our study isolates functional networks that process the information of cognitive functions.

    Novel Neural Network Applications to Mode Choice in Transportation: Estimating Value of Travel Time and Modelling Psycho-Attitudinal Factors

    Whenever researchers wish to study the behaviour of individuals choosing among a set of alternatives, they usually rely on models based on random utility theory, which postulates that individuals modify their behaviour so as to maximise their utility. These models, often identified as discrete choice models (DCMs), usually require the definition of the utilities for each alternative, by first identifying the variables influencing the decisions. Traditionally, DCMs focused on observable variables and treated users as optimising agents with predetermined needs. However, such an approach is at odds with results from the social sciences showing that choice behaviour can be influenced by psychological factors such as attitudes and preferences. Recently there have been formulations of DCMs which include latent constructs for capturing the impact of subjective factors; these are called hybrid choice models or integrated choice and latent variable (ICLV) models. However, DCMs are not exempt from issues, such as the fact that researchers have to choose the variables to include and their relations in order to define the utilities. This is probably one of the reasons that has recently led to an influx of studies using machine learning (ML) methods to study mode choice, in which researchers have sought alternative ways to analyse travellers' choice behaviour. An ML algorithm is any generic method that uses the data itself to build a model, improving its performance the more it is allowed to learn. This means it does not require any a priori input or hypotheses on the structure and nature of the relationships between the variables used as inputs. ML models are usually considered black-box methods, but whenever researchers have needed interpretable ML results, they have sought alternative ways to use ML methods, for example by building them with a priori knowledge that induces specific constraints. Some researchers have also transformed the outputs of ML algorithms so that they can be interpreted from an economic point of view, or have built hybrid ML-DCM models.

    The objective of this thesis is to investigate the benefits and disadvantages of adopting either DCMs or ML methods to study mode choice in transportation. The strongest feature of DCMs is that they produce precise and descriptive results, allowing a thorough interpretation of their outputs. On the other hand, ML models offer a substantial benefit by being truly data-driven and thus learning most relations from the data itself.

    As a first contribution, we tested an alternative method for calculating the value of travel time (VTT) from the results of ML algorithms. VTT is an informative parameter because the time consumed in travelling normally represents an undesirable factor, so individuals are usually willing to exchange money to reduce their travel times. The proposed method is independent of the mode-choice function, so it can be applied equally to econometric models and ML methods, provided they allow the estimation of individual-level probabilities. Another contribution of this thesis is a neural network (NN) for the estimation of choice models with latent variables as an alternative to DCMs. This arose from the desire to include in ML models not only level-of-service variables of the alternatives and socio-economic attributes of the individuals, but also psycho-attitudinal indicators, to better describe the influence of psychological factors on choice behaviour. The models were estimated using two different datasets. Since NN results depend on the values of the hyper-parameters and on the initialisation, several NNs were estimated with different hyper-parameters to find the optimal values, which were then used to verify the stability of the results under different initialisations.
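
    To make the VTT idea concrete, the sketch below recovers a value of travel time from any model that returns individual-level choice probabilities, as the ratio of finite-difference derivatives of the choice probability with respect to travel time and travel cost. A plain multinomial logit with assumed coefficients stands in for the choice model; the thesis's own formulation may differ in detail.

```python
# VTT from individual-level choice probabilities via finite differences (sketch).
import numpy as np

beta_time, beta_cost = -0.08, -0.4          # assumed utility coefficients (per min, per EUR)

def choice_prob(time, cost, alt=0):
    """Probability of choosing alternative `alt` given times/costs of all alternatives."""
    v = beta_time * np.asarray(time) + beta_cost * np.asarray(cost)
    p = np.exp(v - v.max())
    return (p / p.sum())[alt]

def numerical_vtt(time, cost, alt=0, eps=1e-4):
    """VTT as the ratio of finite-difference derivatives dP/dtime over dP/dcost."""
    dt = np.zeros_like(time, dtype=float); dt[alt] = eps
    dc = np.zeros_like(cost, dtype=float); dc[alt] = eps
    dP_dtime = (choice_prob(time + dt, cost, alt) - choice_prob(time, cost, alt)) / eps
    dP_dcost = (choice_prob(time, cost + dc, alt) - choice_prob(time, cost, alt)) / eps
    return dP_dtime / dP_dcost              # EUR per minute; multiply by 60 for EUR/hour

times = np.array([25.0, 40.0, 55.0])        # hypothetical travel times (minutes)
costs = np.array([4.0, 2.0, 1.5])           # hypothetical out-of-pocket costs (EUR)
print(f"VTT = {60 * numerical_vtt(times, costs):.2f} EUR/hour")
```

    For a logit with linear-in-parameters utility this ratio reduces to beta_time / beta_cost, but the finite-difference form only requires predicted probabilities, so the same calculation can be applied to an ML classifier.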