
    Longitudinal association between different levels of alcohol consumption and a new onset of depression and generalized anxiety disorder: Results from an international study in primary care

    BACKGROUND: Several studies that have examined the full range of alcohol consumption have pointed to a possible non-linear association between alcohol use and the common mental disorders. Most of these studies are cross-sectional and assessed psychiatric morbidity using non-specific instruments. Our aim was to investigate the longitudinal association between varying levels of alcohol consumption at baseline and the new onset of depression and generalized anxiety disorder (GAD) in a large international primary care sample. METHODS: The sample consisted of 3201 primary care attenders from 14 countries, recruited in the context of the WHO Collaborative Study of Psychological Problems in General Health Care. Alcohol use at baseline was assessed using the AUDIT, and the mental disorders were assessed with the Composite International Diagnostic Interview. RESULTS: Light to moderate alcohol consumption at baseline was associated with a lower incidence of depression and GAD compared to abstinence, while excessive alcohol consumption was associated with a higher incidence of depression but not GAD. This non-linear association was not substantially affected after adjustment for a range of possible confounding variables. CONCLUSION: Any causal interpretation of this association is difficult in the context of an observational study, and further combined and consistent evidence from different sources is needed.

    Sampling‐based methods for uncertainty propagation in flood modeling under multiple uncertain inputs: finding out the most efficient choice

    In probabilistic flood modeling, uncertainty manifests in the frequency of occurrence, or histograms, of quantities of interest, including the flood extent and hazard rating (HR). Such modeling at the field scale requires identifying a more efficient alternative to the standard Monte Carlo (SMC) method that can reproduce comparable output probability distributions, including detailed histograms of the quantities of interest, with a considerably reduced sample size. Latin hypercube sampling (LHS) is the most evaluated alternative for fluvial floods but yields no considerable sample-size reduction. Potentially better alternatives include adaptive stratified sampling (ASS), quasi-Monte Carlo (QMC), and Haar wavelet expansion (HWE), which have not yet been evaluated for probabilistic flood modeling. To fill this gap, LHS, ASS, QMC, and HWE are compared to quantify the sample-size reduction each achieves in reproducing detailed output histograms (for flood extent, and average and maximum HR) while keeping the difference from the reference SMC prediction below 10%. The comparison is done for two test cases with two (inflow discharge and Manning's coefficient) and three (further including the ground elevation) input random variables, and a real case with five input random variables. With two input random variables, all four alternatives yield sample-size reductions, with QMC and HWE considerably outperforming the others; with three or more input random variables, HWE becomes inflexible and LHS underperforms. Still, QMC is a better choice than ASS for boosting sample-size reduction in the real case and should be preferred in probabilistic flood modeling. Accompanying research codes are openly available online.
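The sampling schemes compared above are standard techniques; as a concrete illustration, here is a minimal stdlib-Python sketch of Latin hypercube sampling next to standard Monte Carlo. The input ranges for discharge and Manning's coefficient below are hypothetical, not taken from the study:

```python
import random

def smc_samples(n, dims, rng):
    """Standard Monte Carlo: n independent uniform points in [0, 1)^dims."""
    return [[rng.random() for _ in range(dims)] for _ in range(n)]

def lhs_samples(n, dims, rng):
    """Latin hypercube: per dimension, each of the n strata of [0, 1)
    contains exactly one sample, so every marginal is evenly covered."""
    cols = []
    for _ in range(dims):
        col = [(i + rng.random()) / n for i in range(n)]  # one point per stratum
        rng.shuffle(col)  # decouple the strata pairing across dimensions
        cols.append(col)
    return [list(point) for point in zip(*cols)]

def scale(u, lo, hi):
    """Map a unit sample onto a physical input range."""
    return lo + u * (hi - lo)

# Hypothetical ranges: inflow discharge 50-500 m^3/s, Manning's n 0.02-0.08
rng = random.Random(42)
design = lhs_samples(8, 2, rng)
inputs = [(scale(u, 50, 500), scale(v, 0.02, 0.08)) for u, v in design]
```

With the same sample count, the LHS design guarantees one point per marginal stratum, which is why it typically needs fewer samples than SMC to stabilize output histograms.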

    Experimental validation of a one-dimensional computational model for flood wave calculation

    The main objective of this article is to validate a numerical model for the computational calculation of flood waves in river channels, in a one-dimensional approximation. Results are compared against the experimental data of a laboratory test, and against a test case proposed and solved by other authors. Villanueva, I.; García, P.; Zorraquino, V. (1999). Validación experimental de un modelo computacional unidimensional para el cálculo de ondas de avenida. Ingeniería del Agua. 6(1):55-62. https://doi.org/10.4995/ia.1999.2777

    A convolutional neural network for fast upsampling of undersampled tomograms in X-ray CT time-series using a representative highly sampled tomogram

    We designed a convolutional neural network to quickly and accurately upscale the sinograms of X-ray tomograms captured with a low number of projections, effectively increasing the number of projections. This is particularly useful for tomograms that are part of a time-series: in order to capture fast-occurring temporal events, tomograms have to be collected quickly, requiring a low number of projections. The upscaling process is facilitated using a single tomogram with a high number of projections for training, which is usually captured at the end or the beginning of the time-series, when capturing the tomogram quickly is no longer needed.

    X-ray computed tomography and, specifically, time-resolved volumetric tomography data collections (4D datasets) routinely produce terabytes of data, which need to be effectively processed after capture. This is often complicated by the high rate of data collection required to capture, at sufficient time-resolution, the events of interest in a time-series, compelling researchers to perform data collections with a low number of projections for each tomogram in order to achieve the desired 'frame rate'. It is common practice to collect a representative tomogram with many projections, after or before the time-critical portion of the experiment, without detrimentally affecting the time-series, to aid the analysis process. In this paper we use this highly sampled data to aid feature detection in the rapidly collected tomograms by assisting with the upsampling of their projections, which is equivalent to upscaling the θ-axis of the sinograms. We propose a super-resolution approach based on deep learning (termed an upscaling Deep Neural Network, or UDNN) that aims to upscale the sinogram space of individual tomograms in a 4D dataset of a sample, using behaviour learnt from a dataset containing a high number of projections, taken of the same sample at the beginning or the end of the data collection. The prior provided by the highly sampled tomogram allows the application of an upscaling process with better accuracy than existing interpolation techniques. This upscaling subsequently permits an increase in the quality of the tomogram's reconstruction, especially in situations that require the capture of only a limited number of projections, as is the case in high-frequency time-series capture. The increase in quality can prove very helpful for researchers, as downstream it enables, for example, easier segmentation of the tomograms in areas of interest. The method itself comprises a convolutional neural network (CNN) which, through training, learns an end-to-end mapping between sinograms with low and high numbers of projections. Since datasets can differ greatly between experiments, our approach specifically develops a lightweight network that can easily and quickly be retrained for different types of samples. As part of the evaluation of our technique we present results with different hyperparameter settings, and have tested our method on both synthetic and real-world data. In addition, we have released accompanying real-world experimental datasets in the form of two 80 GB tomograms depicting a metallic pin that undergoes corruption from a droplet of saltwater, and have also produced and released a new engineering-based phantom dataset, inspired by the experimental datasets.
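For context, the interpolation baseline that such a network is measured against can be sketched in a few lines: a toy linear interpolation along the θ-axis of a sinogram, where rows are projection angles. The array shapes and values below are purely illustrative, not from the released datasets:

```python
def upsample_theta(sinogram):
    """Double the projection count by linear interpolation along the theta-axis:
    insert the average of each pair of adjacent projection rows."""
    out = []
    for a, b in zip(sinogram, sinogram[1:]):
        out.append(a)
        out.append([(x + y) / 2 for x, y in zip(a, b)])
    out.append(sinogram[-1])
    return out

# Toy sinogram: 3 projection angles x 4 detector pixels
sino = [[0, 1, 2, 3],
        [4, 5, 6, 7],
        [8, 9, 10, 11]]
up = upsample_theta(sino)  # 5 projections: originals interleaved with averages
```

A learned upscaler can exploit structure shared across the sample's tomograms, which simple averaging between neighbouring angles cannot.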

    Temporal refinement of 3D CNN semantic segmentations on 4D time-series of undersampled tomograms using hidden Markov models

    Recently, several convolutional neural networks have been proposed not only for 2D images, but also for 3D and 4D volume segmentation. Nevertheless, due to the large size of the latter, acquiring a sufficient amount of training annotations is much more strenuous than for 2D images. For 4D time-series tomograms, this is usually handled by segmenting the constituent tomograms independently through time with 3D convolutional neural networks. Inter-volume information is therefore not utilized, potentially leading to temporal incoherence. In this paper, we attempt to resolve this by proposing two hidden Markov model variants that refine the 4D segmentation labels made by 3D convolutional neural networks working on each time point. Our models utilize not only inter-volume information, but also the prediction confidence generated by the 3D segmentation convolutional neural networks themselves. To the best of our knowledge, this is the first attempt to refine 4D segmentations made by 3D convolutional neural networks using hidden Markov models. In our experiments we test our models qualitatively, quantitatively, and behaviourally, using prespecified segmentations. We demonstrate our models in the domain of time-series tomograms, which are typically undersampled to allow more frequent capture, making segmentation particularly challenging. Finally, our dataset and code are publicly available.
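As a rough illustration of the underlying idea (not the paper's exact model), a single voxel's label sequence can be refined by Viterbi decoding over a "sticky" HMM whose emission scores are the CNN's per-class confidences; the transition matrix and probabilities below are hypothetical:

```python
import math

def viterbi_smooth(confidences, stay=0.9):
    """Refine one voxel's label sequence with a sticky HMM.
    confidences[t][c] is the CNN's probability for class c at time t;
    a transition probability `stay` of keeping the label penalizes flicker."""
    n = len(confidences[0])
    trans = [[math.log(stay) if i == j else math.log((1 - stay) / (n - 1))
              for j in range(n)] for i in range(n)]
    score = [math.log(c) for c in confidences[0]]  # uniform prior, first emission
    back = []
    for conf in confidences[1:]:
        new_score, ptr = [], []
        for j in range(n):
            best = max(range(n), key=lambda i: score[i] + trans[i][j])
            new_score.append(score[best] + trans[best][j] + math.log(conf[j]))
            ptr.append(best)
        score = new_score
        back.append(ptr)
    # Backtrack the most probable label path
    state = max(range(n), key=lambda j: score[j])
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return path[::-1]

# A voxel whose frame-by-frame argmax flickers 0,0,1,0,0 at low confidence
conf = [[0.9, 0.1], [0.8, 0.2], [0.45, 0.55], [0.8, 0.2], [0.9, 0.1]]
smoothed = viterbi_smooth(conf)  # [0, 0, 0, 0, 0]
```

Frame-wise argmax yields [0, 0, 1, 0, 0]; the HMM path suppresses the low-confidence flicker at t = 2 because switching labels twice costs more than the small emission advantage of class 1.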

    A stacked dense denoising–segmentation network for undersampled tomograms and knowledge transfer using synthetic tomograms

    Over recent years, many approaches have been proposed for the denoising or semantic segmentation of X-ray computed tomography (CT) scans. In most cases, high-quality CT reconstructions are used; however, such reconstructions are not always available. When the X-ray exposure time has to be limited, undersampled tomograms (in terms of their component projections) are obtained. This low number of projections yields low-quality reconstructions that are difficult to segment. Here, we consider CT time-series (i.e. 4D data), where the limited time for capturing fast-occurring temporal events means the time-series tomograms are necessarily undersampled. Fortunately, in these collections it is common practice to obtain representative highly sampled tomograms before or after the time-critical portion of the experiment. In this paper, we propose an end-to-end network that can learn to denoise and segment the time-series' undersampled CTs by training with the earlier highly sampled representative CTs. Our single network offers both desired outputs while only training once, with the denoised output improving the accuracy of the final segmentation. Our method is able to outperform state-of-the-art methods in the task of semantic segmentation and offers comparable results with regard to denoising. Additionally, we propose a knowledge-transfer scheme using synthetic tomograms. This not only allows accurate segmentation and denoising using less real-world data, but also increases segmentation accuracy. Finally, we make our datasets, as well as the code, publicly available.
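The claim that a denoising stage improves the downstream segmentation can be illustrated with a deliberately simple stand-in for the network: a mean filter feeding a threshold segmenter on a toy 1D signal. The real method is a learned stacked dense network, and the numbers below are invented:

```python
def denoise(signal, k=1):
    """Mean filter of window 2k+1: a crude stand-in for the denoising stage."""
    n = len(signal)
    return [sum(signal[max(0, i - k):i + k + 1]) /
            len(signal[max(0, i - k):i + k + 1]) for i in range(n)]

def segment(signal, thresh=0.5):
    """Threshold segmenter: a crude stand-in for the segmentation stage."""
    return [1 if x > thresh else 0 for x in signal]

def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

# Toy 1D profile: a bright object (1s) on a dark background (0s), with noise
truth = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0]
noisy = [0.0, 0.9, 0.1, 0.0, 0.2, 0.1, 0.8, 1.0, 0.4, 0.9, 1.1, 0.2, 0.0, 0.1]

direct  = segment(noisy)           # segment the raw signal
stacked = segment(denoise(noisy))  # denoise first, then segment
```

On this toy signal the direct segmentation mislabels the isolated noise spike and the dip inside the object, while the stacked pipeline recovers the ground truth exactly, mirroring the paper's observation that the denoised output improves the final segmentation.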

    When should customers control service delivery? Implications for service design

    What do a Mongolian stir-fry restaurant and a medical lab providing home-testing solutions have in common? They are both innovative services that base their success on customers controlling part of the service delivery. These providers allow service tasks to be performed by the customers as a means of shaping the overall experience, and not strictly as a means of "outsourcing" the service. Motivated by such practices, we explore whether and how providers should allocate control of the different tasks of their service to the customers. We model services as multi-step processes, with each step affecting the customers' experience at other steps. At certain steps the provider may hold an "expert" role and be more capable of performing them than the customers, whereas at other steps she holds an "administrative" role and is less capable of performing them than the customers. We distinguish between routine services, where the service outcome must conform to standardized specifications, and non-routine services, where the value of the service outcome relies on subjective dimensions. We show that the optimal design is determined by an economically intuitive rule whereby the provider controls steps based on the marginal benefit she can derive compared to self-service. For routine services, this rule translates to managing "blocks" of steps, because the provider benefits from containing the volatility of the experiences across the service even when this implies providing service steps with a negative marginal benefit, i.e., steps which she is less capable of performing than the customers. In non-routine services, by contrast, providers should focus on the value advantage they can ensure through a "core provision", even if this implies forgoing control of steps they are more capable of performing than the customers and from which they can derive a positive marginal benefit. This implies that in non-routine services the provider exercises more control up to a certain process length; beyond that, she delegates more steps to the customers. When customers differ in their abilities to perform the different steps, the provider may offer a service line. Service lines facilitate better segmentation than a single service offering, but their economic benefit exhibits an inverted "U-shaped" relationship with the number of steps a service comprises. Finally, we find that competition between two providers who differ in their capabilities to perform a service results in service-design differentiation, where the more capable provider offers a higher-end "focused service" against a lower-end "super-service" offered by the less capable provider.
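One toy formalization of the routine-service "block" rule (an assumption for illustration, not the paper's model): given per-step marginal benefits of provider control over self-service, the best single contiguous block can be found with Kadane's maximum-subarray algorithm, and that block may contain steps with negative marginal benefit:

```python
def best_block(marginal_benefits):
    """Contiguous block of steps the provider should control, maximizing the
    summed marginal benefit over self-service (Kadane's algorithm).
    Returns (start, end, total); controlling nothing yields (0, 0, 0)."""
    best_sum, best_span = 0, (0, 0)
    cur_sum, cur_start = 0, 0
    for i, b in enumerate(marginal_benefits):
        if cur_sum <= 0:
            cur_sum, cur_start = b, i  # restart the candidate block here
        else:
            cur_sum += b               # extend the current candidate block
        if cur_sum > best_sum:
            best_sum, best_span = cur_sum, (cur_start, i + 1)
    start, end = best_span
    return start, end, best_sum

# Five steps; the provider is worse than the customers at step 2 (benefit -1),
# yet the optimal block [1, 4) keeps it: splitting the block would forgo more.
benefits = [-2, 3, -1, 4, -3]
block = best_block(benefits)  # (1, 4, 6)
```

The example mirrors the abstract's point: the provider retains a step with negative marginal benefit because the surrounding block is worth more intact than split.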