
    Physics-related epistemic uncertainties in proton depth dose simulation

    A set of physics models and parameters pertaining to the simulation of proton energy deposition in matter is evaluated in the energy range up to approximately 65 MeV, based on their implementations in the Geant4 toolkit. The analysis assesses several features of the models and the impact of their associated epistemic uncertainties, i.e. uncertainties due to lack of knowledge, on the simulation results. Possible systematic effects deriving from uncertainties of this kind are highlighted; their relevance to the application environment and to different experimental requirements is discussed, with emphasis on the simulation of radiotherapy set-ups. By documenting quantitatively the features of a wide set of simulation models and the related intrinsic uncertainties affecting the simulation results, this analysis provides guidance regarding the use of the concerned simulation tools in experimental applications; it also provides indications for further experimental measurements addressing the sources of such uncertainties.
    Comment: To be published in IEEE Trans. Nucl. Sci.
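
    The kind of epistemic spread the abstract describes can be summarised simply: given depth-dose curves produced by alternative physics model configurations, the pointwise spread between models and the shift of the Bragg peak position quantify the systematic effect of the model choice. The sketch below is a minimal illustration, not code from the study; the model names and the Gaussian stand-in curves are hypothetical placeholders for real Geant4 output.

```python
# Illustrative sketch: quantifying the epistemic spread among depth-dose
# curves produced by alternative physics models. The Gaussian "Bragg-like"
# curves are hypothetical placeholders for real Geant4 output.
import numpy as np

depth = np.linspace(0.0, 40.0, 401)            # depth in water [mm]

def mock_depth_dose(peak_mm, width_mm):
    """Stand-in for a simulated depth-dose curve (arbitrary units)."""
    return np.exp(-0.5 * ((depth - peak_mm) / width_mm) ** 2)

# One curve per alternative model implementation (hypothetical values).
models = {
    "model_A": mock_depth_dose(30.0, 1.5),
    "model_B": mock_depth_dose(30.4, 1.4),
    "model_C": mock_depth_dose(29.7, 1.6),
}

curves = np.stack(list(models.values()))
envelope = curves.max(axis=0) - curves.min(axis=0)   # pointwise spread
peaks = depth[curves.argmax(axis=1)]                 # Bragg peak per model

print("max pointwise model-to-model spread:", envelope.max())
print("Bragg peak positions [mm]:", dict(zip(models, peaks)))
```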

    Design implications of the new harmonised probabilistic damage stability regulations

    In anticipation of the forthcoming new harmonised regulations for damage stability, SOLAS Chapter II-1, proposed at IMO MSC 80 and due for enforcement in 2009, a number of ship owners, and consequently yards and classification societies, are venturing to exploit the new degrees of freedom afforded by the probabilistic concept of ship subdivision. In this process, designers are finding it rather difficult to move away from the prescriptive mindset that has been deeply ingrained in their way of conceptualising, creating and completing a ship design. Total freedom, it appears, is hard to cope with, and a helping hand is needed to guide them in crossing the line from prescriptive to goal-setting design. This will be facilitated considerably by an improved understanding of what this concept entails, of its limitations, and of its range of applicability. This paper represents an attempt in this direction, based on the results of a research study, financed by the Maritime and Coastguard Agency in the UK, to assess the design implications of the new harmonised rules on passenger and cargo ships.
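
    At its core, the probabilistic concept compares an attained subdivision index A, a probability-weighted sum of survivability over damage cases, against a required index R. A minimal sketch follows; the damage-case probabilities, survivability factors, and the value of R are hypothetical placeholders, not values from the SOLAS II-1 formulations.

```python
# Minimal sketch of the probabilistic subdivision concept: the attained
# index A sums, over damage cases i, the probability p_i of that damage
# times the probability s_i of surviving it; the design passes when
# A >= R. All numbers below are hypothetical.
damage_cases = [
    # (p_i: probability of the damage case, s_i: survivability factor)
    (0.12, 1.00),
    (0.08, 0.95),
    (0.05, 0.60),
    (0.02, 0.00),
]

A = sum(p * s for p, s in damage_cases)   # attained subdivision index
R = 0.25                                  # required index (hypothetical)

print(f"A = {A:.3f}, R = {R:.3f}, compliant: {A >= R}")
```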

    TEE, a simple estimator for the precision of eclipse and transit minimum times

    Context: Transit or eclipse timing variations have proven to be a valuable tool in exoplanet research. However, no simple way to estimate the potential precision of such timing measures has been presented yet, nor are guidelines available regarding the relation between timing errors and sampling rate. Aims: A `timing error estimator' (TEE) equation is presented that requires only basic transit parameters as input. With the TEE, it is straightforward to estimate timing precisions both for actual data and for future instruments, such as the TESS and PLATO space missions. Methods: A derivation of the timing error based on a trapezoidal transit shape is given. We also verify the TEE on realistically modeled transits using Monte Carlo simulations and determine its validity range, exploring in particular the interplay between ingress/egress times and sampling rates. Results: The simulations show that the TEE gives timing errors very close to the correct value, as long as the temporal sampling is faster than the transit ingress/egress durations and transits with very low S/N are avoided. Conclusions: The TEE is a useful tool for estimating eclipse or transit timing errors in actual and future data sets. In combination with an equation to estimate period errors (Deeg 2015), predictions for the ephemeris precision of long-coverage observations are possible as well. The tests of the TEE's validity range also led to implications for instrument design: temporal sampling has to be faster than transit ingress or egress durations, or a loss in timing precision will occur. An application to the TESS mission shows that transits close to its detection limit will have timing uncertainties that exceed 1 hour within a few months after their acquisition. Prompt follow-up observations will be needed to avoid losing their ephemeris.
    Comment: Accepted by A&A. Version 2 with updated timing uncertainties of the TESS mission due to the correction of a figure in Sullivan et al. 201
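
    The Monte Carlo verification described in the Methods can be sketched as follows: inject a trapezoidal transit into white-noise photometry, refit the mid-time, and take the scatter of the recovered mid-times as the empirical timing error. All parameter values below are illustrative choices, not those of the paper, and the trapezoid helper is a simplified stand-in for the realistically modeled transits used there.

```python
# Hedged sketch of a Monte Carlo timing-precision check: fit the
# mid-time of noisy trapezoidal transits and measure its scatter.
# All parameter values are illustrative, not those of the paper.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

def trapezoid(t, t0, depth=0.01, t_total=0.12, t_ingress=0.015):
    """Trapezoidal transit model in relative flux (1 = out of transit)."""
    x = np.abs(t - t0)
    flat, edge = t_total / 2 - t_ingress, t_total / 2
    f = np.ones_like(t)
    f[x <= flat] -= depth
    ramp = (x > flat) & (x < edge)
    f[ramp] -= depth * (edge - x[ramp]) / t_ingress
    return f

t = np.arange(-0.15, 0.15, 0.001)          # time [days], ~86 s sampling
sigma = 0.002                               # per-point noise level

recovered = []
for _ in range(200):
    flux = trapezoid(t, 0.0) + rng.normal(0.0, sigma, t.size)
    chi2 = lambda t0: np.sum((flux - trapezoid(t, t0)) ** 2)
    recovered.append(minimize_scalar(chi2, bounds=(-0.02, 0.02),
                                     method="bounded").x)

print("MC timing scatter [days]:", np.std(recovered))
```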

    Determination of stress concentration factors in offshore wind welded structures through a hybrid experimental and numerical approach

    Offshore wind turbine (OWT) monopile support structures generally consist of steel cans connected together through circumferential welding joints. One critical factor for evaluating the localised increase in stresses is the stress concentration factor (SCF), which depends on the welding quality. The complex welding profiles in OWT monopiles make the accurate calculation of SCFs quite challenging. In this work, an innovative approach for the calculation of SCFs in offshore welded structures is proposed, based on combined 3D (three-dimensional) laser scanning technology (LST) and 3D finite element analysis (FEA). The precise geometry of the welded specimens is captured using 3D LST and then imported into a finite element software to perform 3D FEA modelling to accurately calculate SCFs. A 2D (two-dimensional) FEA model of a typical offshore welded structure with ideal geometry is also developed in this work. In addition to numerically calculating SCFs, the 2D FEA model is further combined with the non-linear RSM (response surface method) to derive analytical equations expressing SCFs of offshore welded structures in terms of key welding parameters. Both LST-FEA3D and RSM-FEA2D models are applied to calculate SCFs in large-scale S-N fatigue welded specimens. The results indicate that the LST-FEA3D approach is capable of capturing the variation of SCFs along the width of the welded specimens and identifying the critical points where fatigue cracks are most likely to initiate, and that the RSM-FEA2D model is valuable and efficient in deriving analytical parametric equations for SCFs.
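
    The RSM step can be illustrated with a small sketch: fit a quadratic response surface SCF(theta, r) over two weld parameters from a design-of-experiments table of FEA results, then use the fitted polynomial as the parametric SCF equation. The sample points, SCF values, and the choice of flank angle and toe radius as the two parameters are hypothetical, standing in for the paper's 2D FEA outputs and key welding parameters.

```python
# Illustrative RSM sketch: least-squares fit of a quadratic response
# surface SCF(theta, r) to a hypothetical design-of-experiments table.
import numpy as np

# Hypothetical DoE: (flank angle [deg], toe radius [mm], SCF from FEA)
samples = np.array([
    [20.0, 1.0, 1.8],
    [20.0, 2.0, 1.6],
    [20.0, 3.0, 1.4],
    [30.0, 1.0, 2.0],
    [30.0, 2.0, 1.8],
    [30.0, 3.0, 1.5],
    [40.0, 1.0, 2.3],
    [40.0, 2.0, 1.9],
    [40.0, 3.0, 1.7],
])
theta, r, scf = samples.T

# Quadratic basis: 1, theta, r, theta*r, theta^2, r^2
X = np.column_stack([np.ones_like(theta), theta, r,
                     theta * r, theta ** 2, r ** 2])
coef, *_ = np.linalg.lstsq(X, scf, rcond=None)

def scf_surface(theta, r):
    """Analytical parametric SCF equation fitted by least squares."""
    return coef @ [1.0, theta, r, theta * r, theta ** 2, r ** 2]

print("predicted SCF at (35 deg, 1.5 mm):", round(scf_surface(35.0, 1.5), 3))
```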

    A fisheries acoustic multi-frequency indicator to inform on large scale spatial patterns of aquatic pelagic ecosystems

    Fisheries acoustic instruments provide information on four major groups in aquatic ecosystems: fish with and without swim bladders (tertiary and quaternary consumers), fluid-like zooplankton (secondary consumers), and small gas-bearing organisms such as larval fish and phytoplankton (predominantly primary producers). We argue that this information can be used to describe the spatial structure of organism groups in pelagic ecosystems. The proposal we make is based on a multi-frequency indicator that synthesises in a single metric the shape of the acoustic frequency response of different organism groups, i.e. the dependence of the received acoustic backscattered energy on the emitting echosounder frequency. We demonstrate the development and interpretation of the multi-frequency indicator using simulated data. We then calculate the indicator for acoustic water-column survey data from the Bay of Biscay and use it to create reference maps of the spatial structure of the four scattering groups as well as of their small-scale spatial variability. These maps provide baselines for monitoring future changes in the structure of the pelagic ecosystem.
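
    One simple way to compress a frequency response into a single number, in the spirit of such an indicator, is to take the slope of volume backscattering strength against log-frequency across standard echosounder channels; broadly, different scattering groups produce different slope signs. The sketch below is a generic illustration, not the paper's indicator: the Sv spectra, group labels, and the slope-based rule of thumb are hypothetical.

```python
# Generic sketch of a multi-frequency shape metric: the dB-per-octave
# slope of Sv across standard echosounder frequencies. The Sv values
# and classification rule of thumb are hypothetical.
import numpy as np

freqs_khz = np.array([18.0, 38.0, 70.0, 120.0, 200.0])

def response_slope(sv_db):
    """Slope of Sv (dB) vs log2(frequency): positive slopes suggest
    fluid-like zooplankton, strongly negative slopes gas-bearing
    scatterers (hypothetical rule of thumb)."""
    slope, _ = np.polyfit(np.log2(freqs_khz), sv_db, 1)
    return slope

# One hypothetical Sv spectrum per scattering group [dB re 1 m^-1]
cells = {
    "swimbladder_fish": [-62.0, -60.0, -63.0, -65.0, -67.0],
    "fluidlike_zooplankton": [-85.0, -80.0, -75.0, -71.0, -68.0],
}
for name, sv in cells.items():
    print(f"{name}: slope = {response_slope(np.array(sv)):+.2f} dB/octave")
```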

    A comparison of homonym meaning frequency estimates derived from movie and television subtitles, free association, and explicit ratings

    Most words are ambiguous, with interpretation dependent on context. Advancing theories of ambiguity resolution is important for any general theory of language processing, and for resolving inconsistencies in observed ambiguity effects across experimental tasks. Focusing on homonyms (words such as bank with unrelated meanings EDGE OF A RIVER vs. FINANCIAL INSTITUTION), the present work advances theories and methods for estimating the relative frequency of their meanings, a factor that shapes observed ambiguity effects. We develop a new method for estimating meaning frequency, based on the meaning of a homonym evoked in lines of movie and television subtitles according to human raters. We also replicate and extend a measure of meaning frequency derived from the classification of free associates. We evaluate the internal consistency of these measures, compare them to published estimates based on explicit ratings of each meaning's frequency, and compare each set of norms in predicting performance in lexical and semantic decision mega-studies. All measures have high internal consistency and show agreement, but each is also associated with unique variance, which may be explained by integrating cognitive theories of memory with the demands of different experimental methodologies. To derive frequency estimates, we collected manual classifications of 533 homonyms over 50,000 lines of subtitles, and of 357 homonyms across over 5,000 homonym-associate pairs. This database, publicly available at www.blairarmstrong.net/homonymnorms/, constitutes a novel resource for computational cognitive modeling and computational linguistics, and we offer suggestions on good practices for its use in training and testing models on labeled data.
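
    The core norming computation is straightforward to sketch: convert per-line meaning classifications into relative meaning frequencies, and check internal consistency by comparing estimates from split halves of the data. The labels below are hypothetical ratings for "bank", not taken from the published norms.

```python
# Minimal sketch: per-line meaning classifications -> relative meaning
# frequencies, plus a split-half consistency check. Labels hypothetical.
from collections import Counter
import random

# Hypothetical rater classifications of subtitle lines for "bank"
labels = ["river"] * 30 + ["financial"] * 70
random.seed(0)
random.shuffle(labels)

def meaning_freqs(labels):
    counts = Counter(labels)
    total = sum(counts.values())
    return {m: n / total for m, n in counts.items()}

print("relative meaning frequency:", meaning_freqs(labels))

# Split-half consistency: frequency of the dominant meaning per half
half = len(labels) // 2
f1 = meaning_freqs(labels[:half]).get("financial", 0.0)
f2 = meaning_freqs(labels[half:]).get("financial", 0.0)
print(f"split-half estimates: {f1:.2f} vs {f2:.2f}")
```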

    Refactoring, reengineering and evolution: paths to Geant4 uncertainty quantification and performance improvement

    Ongoing investigations into the improvement of Geant4 accuracy and computational performance resulting from refactoring and reengineering parts of the code are discussed. Issues in refactoring that are specific to the domain of physics simulation are identified and their impact is elucidated. Preliminary quantitative results are reported.
    Comment: To be published in the Proc. CHEP (Computing in High Energy Physics) 201

    Message-Passing Inference on a Factor Graph for Collaborative Filtering

    This paper introduces a novel message-passing (MP) framework for the collaborative filtering (CF) problem associated with recommender systems. We model the movie-rating prediction problem popularized by the Netflix Prize using a probabilistic factor graph model, and study the model by deriving generalization error bounds in terms of the training error. Based on the model, we develop a new MP algorithm, termed IMP, for learning it. To show the superiority of the IMP algorithm, we compare it with the closely related expectation-maximization (EM) based algorithm and a number of other matrix completion algorithms. Our simulation results on Netflix data show that, while the methods perform similarly with large amounts of data, the IMP algorithm is superior for small amounts of data. This mitigates the cold-start problem of CF systems in practice. Another advantage of the IMP algorithm is that it can be analyzed using the technique of density evolution (DE) that was originally developed for MP decoding of error-correcting codes.
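
    The flavour of message passing on such a bipartite rating graph can be sketched generically: treat each observed rating as a factor linking a latent user-group variable and a latent movie-group variable, run loopy sum-product sweeps, and predict unseen ratings from the resulting beliefs. This is a generic belief-propagation illustration under an assumed clustering-style model, not the paper's IMP algorithm; the rating table, conditional probabilities, and group count are all hypothetical.

```python
# Generic loopy BP sketch on a bipartite user-movie factor graph.
# Model, ratings, and conditional table are hypothetical.
import numpy as np

K = 2                                  # latent groups per side (assumed)

# Hypothetical observed ratings {(user, movie): rating in {0, 1}}
ratings = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 2): 0,
           (2, 1): 0, (2, 2): 0, (3, 2): 0}

# Hypothetical P(rating | user group g, movie group h), shape [g, h, r]
p_rate = np.array([[[0.1, 0.9], [0.5, 0.5]],
                   [[0.5, 0.5], [0.9, 0.1]]])

# Factor-to-variable messages, one pair per observed rating (edge)
to_user = {e: np.full(K, 1.0 / K) for e in ratings}
to_movie = {e: np.full(K, 1.0 / K) for e in ratings}

def belief(edges, incoming, exclude=None):
    """Combine factor messages into a variable belief, optionally
    leaving one edge out (the standard loopy-BP cavity message)."""
    msg = np.ones(K)
    for e in edges:
        if e != exclude:
            msg *= incoming[e]
    return msg / msg.sum()

for _ in range(20):                    # loopy BP sweeps
    for (u, m), r in ratings.items():
        mu_movie = belief([e for e in ratings if e[1] == m], to_movie, (u, m))
        mu_user = belief([e for e in ratings if e[0] == u], to_user, (u, m))
        to_user[(u, m)] = p_rate[:, :, r] @ mu_movie    # message about g_u
        to_movie[(u, m)] = p_rate[:, :, r].T @ mu_user  # message about h_m

# Predict the unseen rating of user 3 on movie 0 from the beliefs
b_u = belief([e for e in ratings if e[0] == 3], to_user)
b_m = belief([e for e in ratings if e[1] == 0], to_movie)
probs = np.array([b_u @ p_rate[:, :, r] @ b_m for r in (0, 1)])
print("P(rating = 1 | user 3, movie 0) ~", probs[1] / probs.sum())
```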