
    Light composite Higgs boson from the normalized Bethe-Salpeter equation

    Scalar composite boson masses have been computed in QCD and technicolor theories with the help of the homogeneous Bethe-Salpeter equation (BSE), resulting in a scalar mass that is twice the dynamically generated fermion or technifermion mass (m_dyn). We show that in the case of walking (or quasi-conformal) technicolor theories, where the momentum dependence of m_dyn may be quite different from the one predicted by the standard operator product expansion, this result is incomplete and we must consider the effect of the normalization condition of the BSE to determine the scalar masses. We compute the composite Higgs boson mass for several groups with technifermions in the fundamental and higher-dimensional representations and comment on the experimental constraints on these theories, which indicate that models based on walking theories with fermions in the fundamental representation may, within the limitations of our approach, have masses quite near the current direct exclusion limit. Comment: 9 pages, 4 figures, minor corrections, to appear in Physical Review
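
    To make the contrast drawn in the abstract more concrete, the sketch below writes the two asymptotic behaviours of the dynamical mass usually quoted in this context, in a standard convention with a mass anomalous dimension gamma_m; this notation is an assumption for illustration and is not taken from the paper itself.

```latex
% Illustrative asymptotics only, in a standard convention (mass anomalous
% dimension \gamma_m); these are not equations taken from the paper itself.
\Sigma(p^2) \;\sim\; \frac{m_{dyn}^{3}}{p^{2}} \quad \text{(standard OPE)},
\qquad
\Sigma(p^2) \;\sim\; m_{dyn}\!\left(\frac{m_{dyn}^{2}}{p^{2}}\right)^{\!1-\gamma_m/2}
\;\xrightarrow{\;\gamma_m \to 1\;}\; \frac{m_{dyn}^{2}}{p}.
```

    The slower fall-off in the walking regime is, as the abstract argues, what makes the BSE normalization condition matter and shifts the naive estimate of a scalar mass near 2 m_dyn.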

    The equivalence of two tax processes

    We introduce two models of taxation, the latent and natural tax processes, which have both been used to represent loss-carry-forward taxation on the capital of an insurance company. In the natural tax process, the tax rate is a function of the current level of capital, whereas in the latent tax process, the tax rate is a function of the capital that would have resulted if no tax had been paid. Whereas up to now these two types of tax processes have been treated separately, we show that, in fact, they are essentially equivalent. This allows a unified treatment, translating results from one model to the other. Significantly, we solve the question of existence and uniqueness for the natural tax process, which is defined via an integral equation. Our results clarify the existing literature on processes with tax.
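
    As a rough illustration of the two constructions, here is a minimal discrete-time Monte Carlo sketch, not the paper's exact formulation (which works with the integral equation directly): the tax-rate function gamma, the compound-Poisson claim model and all parameter values below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma(level):
    """Hypothetical tax-rate function of a capital level (illustration only)."""
    return 0.2 if level < 15.0 else 0.4

def taxed_capital(x0=10.0, premium=1.0, claim_rate=0.5, mean_claim=1.5,
                  horizon=200.0, dt=0.01, natural=True):
    """Coarse discrete-time sketch of a loss-carry-forward tax process.

    Tax is only deducted while the pre-tax process X sits at its running
    maximum (i.e. while new capital records are being set).  In the 'natural'
    variant the rate depends on the current taxed capital U; in the 'latent'
    variant it depends on the running maximum of the untaxed process X.
    """
    x = xbar = u = x0                 # pre-tax process, its running max, taxed process
    for _ in range(int(horizon / dt)):
        dx = premium * dt
        if rng.random() < claim_rate * dt:        # compound-Poisson claim arrival
            dx -= rng.exponential(mean_claim)
        x += dx
        if x > xbar:                              # new record: tax the record increment
            rate = gamma(u) if natural else gamma(xbar)
            u += (x - xbar) * (1.0 - rate)
            xbar = x
        else:                                     # below the maximum: no tax deducted
            u += dx
        if u < 0:                                 # ruin of the taxed process
            return u
    return u

print(taxed_capital(natural=True), taxed_capital(natural=False))
```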

    Accuracy and computational efficiency of 2D urban surface flood modelling based on cellular automata

    There is an emerging abundance of freely available high-resolution (one metre or less) LIDAR data due to the advent of remote sensing, which enables wider applications of detailed flood risk modelling and analysis. Digital terrain surface data often come in raster form, i.e., a square regular grid, and often require conversion into a specific computational mesh for two-dimensional (2D) flood modelling that adopts triangular irregular meshes. 2D modelling of flood water movement through urban areas requires resolution of complex flow paths around buildings, which demands both high accuracy and computational efficiency. Water distribution and wastewater systems in the UK contain over 700,000 km of water distribution and sewer pipes, which represents a large risk exposure from flooding caused by sewer surcharging or distribution pipe breaks. This makes it important for utilities to understand and predict where clean or dirty water flows will be directed when they leave the system. To establish risk assessments, many thousands of simulations may be required, calling for the most computationally efficient models possible.
    Cellular Automata (CA) represent a method of running simulations based on a regular square grid, thus saving the set-up time of configuring the terrain data into an irregular triangular mesh. They also offer a more uniform memory access pattern for very fast modern, highly parallel hardware, such as general-purpose graphics processing units (GPGPUs). In this paper the performance of CADDIES, a CA platform, and its associated flood modelling software caFloodPro, using a square regular grid and a Von Neumann neighbourhood, is compared to industry-standard software using triangular irregular meshes at similar resolutions. A minimum time step is used to control the computational complexity of the algorithm, which creates a trade-off between simulation processing speed and the accuracy resulting from the limitations used within the local rule to cope with relatively large time steps. This study shows that CA-based methods on regular square grids offer processing speed increases of 5-20 times over the industry-standard software using irregular triangular meshes, while maintaining 98-99% flooding extent accuracy.
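
    The sketch below illustrates the kind of local rule such square-grid CA flood models apply, moving water to the four Von Neumann neighbours in proportion to the head difference. It is a toy rule with assumed parameters and periodic boundaries, not the actual CADDIES/caFloodPro transfer rule or its adaptive time-step limiter.

```python
import numpy as np

def ca_flood_step(depth, elevation, dt, alpha=0.5):
    """One update of a toy cellular-automaton flood rule on a square grid.

    Water moves to the four Von Neumann neighbours in proportion to the
    positive head difference (terrain elevation + water depth), with periodic
    boundaries for simplicity.  This only illustrates the style of local rule;
    it is not the CADDIES/caFloodPro rule itself.
    """
    head = elevation + depth
    new_depth = depth.copy()
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):     # Von Neumann neighbourhood
        neighbour_head = np.roll(head, (dy, dx), axis=(0, 1))
        drop = np.clip(head - neighbour_head, 0.0, None)  # only downhill transfers
        # volume sent this step, capped so a cell never sends more than it holds
        sent = np.minimum(alpha * drop * dt, depth / 4.0)
        new_depth -= sent
        new_depth += np.roll(sent, (-dy, -dx), axis=(0, 1))
    return new_depth

# Tiny usage example: a column of water spreading over flat terrain.
depth = np.zeros((50, 50)); depth[25, 25] = 1.0
terrain = np.zeros((50, 50))
for _ in range(100):
    depth = ca_flood_step(depth, terrain, dt=0.1)
```

    Because every cell applies the same rule to a fixed, regular neighbourhood, the update is trivially data-parallel, which is what makes this formulation attractive for GPGPU hardware.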

    ESO Imaging Survey: Optical follow-up of 12 selected XMM-Newton fields

    (Abridged) This paper presents the data recently released for the XMM-Newton/WFI survey carried out as part of the ESO Imaging Survey (EIS) project. The aim of this survey is to provide optical imaging follow-up data in BVRI for identification of serendipitously detected X-ray sources in selected XMM-Newton fields. In this paper, fully calibrated individual and stacked images of 12 fields as well as science-grade catalogs for the 8 fields located at high galactic latitude are presented. The data cover an area of ~3 square degrees in each of the four passbands. The median limiting magnitudes (AB system, 2" aperture, 5σ detection limit) are 25.20, 24.92, 24.66, and 24.39 mag for B-, V-, R-, and I-band, respectively. These survey products, together with their logs, are available to the community for science exploitation in conjunction with their X-ray counterparts. Preliminary results from the X-ray/optical cross-correlation analysis show that about 61% of the detected X-ray point sources in deep XMM-Newton exposures have at least one optical counterpart within a 2" radius down to R ≈ 25 mag, 50% of which are so faint as to require VLT observations, thereby meeting one of the top requirements of the survey, namely to produce large samples for spectroscopic follow-up with the VLT, whereas only 15% of the objects have counterparts down to the DSS limiting magnitude. Comment: 24 pages, 10 figures, accepted for publication in Astronomy and Astrophysics. Accompanying data releases available at http://archive.eso.org/archive/public_datasets.html (WFI images), http://www.eso.org/science/eis/surveys/release_65000025_XMM.html (optical catalogs), http://www.aip.de/groups/xray/XMM_EIS/ (X-ray data). Full resolution version available at http://www.astro.uni-bonn.de/~dietrich/publications/3785.ps.g
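
    The 2" cross-correlation described above is essentially a nearest-neighbour catalogue match on the sky. A minimal sketch of such a match using astropy is shown below; the paper does not specify its tooling, and the function name and argument layout here are illustrative assumptions.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def optical_counterparts(xray_ra, xray_dec, opt_ra, opt_dec, radius_arcsec=2.0):
    """Match each X-ray source to its nearest optical source on the sky.

    Inputs are RA/Dec arrays in degrees.  Returns, per X-ray source, the index
    of the nearest optical source and whether it lies within the matching
    radius (2 arcsec in the analysis summarised above).
    """
    xray = SkyCoord(ra=np.asarray(xray_ra) * u.deg, dec=np.asarray(xray_dec) * u.deg)
    optical = SkyCoord(ra=np.asarray(opt_ra) * u.deg, dec=np.asarray(opt_dec) * u.deg)
    idx, sep2d, _ = xray.match_to_catalog_sky(optical)
    return idx, sep2d < radius_arcsec * u.arcsec
```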

    On the well-posedness of the stochastic Allen-Cahn equation in two dimensions

    White noise-driven nonlinear stochastic partial differential equations (SPDEs) of parabolic type are frequently used to model physical and biological systems in space dimensions d = 1, 2, 3. Whereas existence and uniqueness of weak solutions to these equations are well established in one dimension, the situation is different for d ≥ 2. Despite their popularity in the applied sciences, higher dimensional versions of these SPDE models are generally assumed to be ill-posed by the mathematics community. We study this discrepancy on the specific example of the two dimensional Allen-Cahn equation driven by additive white noise. Since it is unclear how to define the notion of a weak solution to this equation, we regularize the noise and introduce a family of approximations. Based on heuristic arguments and numerical experiments, we conjecture that these approximations exhibit divergent behavior in the continuum limit. The results strongly suggest that a series of published numerical studies are problematic: shrinking the mesh size in these simulations does not lead to the recovery of a physically meaningful limit. Comment: 21 pages, 4 figures; accepted by Journal of Computational Physics (Dec 2011)
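
    For orientation, the sketch below sets up one member of such a family of approximations: an explicit finite-difference discretisation of the 2D Allen-Cahn equation with grid-scale-regularised additive white noise. The scheme, domain and parameter values are illustrative assumptions and do not reproduce the authors' experiments; the continuum-limit question is what happens as N is refined.

```python
import numpy as np

def allen_cahn_2d(N=64, T=0.05, dt=1e-5, sigma=1.0, seed=0):
    """Explicit finite-difference sketch of 2D Allen-Cahn with additive noise.

    du/dt = Laplacian(u) + u - u**3 + sigma * (space-time white noise),
    on the unit square with periodic boundaries.  The noise is regularised at
    the grid scale, so each N gives one member of a family of approximations;
    refining N is the kind of continuum-limit experiment discussed above.
    """
    rng = np.random.default_rng(seed)
    h = 1.0 / N                         # mesh size (dt must stay below h**2 / 4)
    u = np.zeros((N, N))
    for _ in range(int(T / dt)):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / h**2
        noise = rng.standard_normal((N, N)) * np.sqrt(dt) / h
        u = u + dt * (lap + u - u**3) + sigma * noise
    return u
```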

    An abnormality in glucocorticoid receptor expression differentiates steroid responders from nonresponders in keloid disease

    Background: Glucocorticoids (GCs) are first-line treatment for keloid disease (KD) but are limited by a high incidence of resistance, recurrence and undesirable side-effects. Identifying patient responsiveness early could guide therapy. Methods: Nineteen patients with KD were recruited at week 0 (before treatment) and received intralesional steroids. At weeks 0, 2 and 4, noninvasive imaging and biopsies were performed. Responsiveness was determined by clinical response and a significant reduction in vascular perfusion following steroid treatment, using full-field laser perfusion imaging (FLPI). Responsiveness was also evaluated using (i) spectrophotometric intracutaneous analysis to quantify changes in collagen and melanin and (ii) histology to identify changes in epidermal thickness and glycosaminoglycan (GAG) expression. Biopsies were used to quantify changes in glucocorticoid receptor (GR) expression using quantitative reverse transcriptase polymerase chain reaction, immunoblotting and immunohistochemistry. Results: At week 2, FLPI was used to separate patients into steroid-responsive (n = 12) and nonresponsive (n = 7) groups. All patients demonstrated a significant decrease in GAG at week 2 (P < 0.05). At week 4, responsive patients exhibited a significant reduction in melanin, GAG and epidermal thickness (all P < 0.05) and a continued reduction in perfusion (P < 0.001) compared with nonresponders. Steroid-responsive patients had increased GR expression at baseline and showed autoregulation of GR compared with nonresponders, who showed no change in GR transcription or protein. Conclusions: This is the first demonstration that keloid response to steroids can be measured objectively using noninvasive imaging. FLPI is a potentially reliable tool to stratify KD responsiveness. Altered GR expression may be the mechanism gating therapeutic response.

    Reconstructing discards profiles of unreported catches

    In Portugal it has been estimated that unreported catches represent one third of total catches. Herein, information on landings and total unreported catches (discards) by commercial métier was disaggregated into high taxonomic detail using published scientific studies. Fish accounted for 93.5% (115493 t) of overall unreported catches per year, followed by cephalopods (2345 t, 1.9%) and crustaceans (1754 t, 1.4%). Sharks accounted for 1.3% of total unreported catches in weight (1638 t/y). Unreported taxa consisted mostly of commercially landed fish species: Scomber colias, Boops boops, Trachurus picturatus, T. trachurus, Merluccius merluccius, Sardina pilchardus, Liza aurata and Micromesistius poutassou, which together accounted for 70% of the unreported discarded catches. The number of unreported/discarded species was highest in artisanal fisheries, followed by trawl and purse seine. In artisanal fisheries, L. aurata, S. colias, S. pilchardus, Trachinus draco and B. boops accounted for 76.4% of the unreported discards. B. boops, S. colias and S. pilchardus were also among the most discarded purse seine species, together with Belone belone accounting for 79% of the unreported catches. In trawl fisheries, T. picturatus (16%), M. merluccius (13%), S. colias (13%) and M. poutassou (13%) accounted for 55% of the trawl discarded unreported catches. The discarded species that contribute most to overall unreported catches are those that are most frequently landed and that contribute most to overall landings in weight. Funding: SFRH/BD/104209/2014 and SFRH/BPD/108949/2015. This work received national funds through the Foundation for Science and Technology (FCT) through project UID/Multi/04326/2013. Karim Erzini was supported by funding from the European Commission's Horizon 2020 Research and Innovation Programme under Grant Agreement No. 634495 for the project Science, Technology, and Society Initiative to minimize Unwanted Catches in European Fisheries (MINOUW).
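
    The species-level shares reported above come down to grouping catch records by métier and species and normalising by the group totals. A minimal sketch of that aggregation is shown below; the column names and toy values are hypothetical and do not come from the paper's data.

```python
import pandas as pd

# Hypothetical toy records (metier, species, unreported catch in tonnes);
# the real disaggregation uses published landings and discard studies.
records = pd.DataFrame({
    "metier":  ["artisanal", "artisanal", "trawl", "trawl", "purse_seine"],
    "species": ["Liza aurata", "Boops boops", "Merluccius merluccius",
                "Trachurus picturatus", "Sardina pilchardus"],
    "unreported_t": [310.0, 120.0, 210.0, 260.0, 180.0],
})

# Share of each species within its metier, plus an overall ranking across metiers.
records["share_in_metier_pct"] = (
    100.0 * records["unreported_t"]
    / records.groupby("metier")["unreported_t"].transform("sum")
)
overall = (records.groupby("species")["unreported_t"].sum()
                  .sort_values(ascending=False))
print(records)
print(overall)
```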

    Time-to-birth prediction models and the influence of expert opinions

    Preterm birth is the leading cause of death among children under five years old. The pathophysiology and etiology of preterm labor are not yet fully understood. This causes a large number of unnecessary hospitalizations due to high-sensitivity clinical policies, which has a significant psychological and economic impact. In this study, we present a predictive model, based on a new dataset containing information on 1,243 admissions, that predicts whether a patient will give birth within a given time after admission. Such a model could provide support in the clinical decision-making process. Predictions for birth within 48 h or 7 days after admission yield an Area Under the Receiver Operating Characteristic Curve (AUC) of 0.72 for both tasks. Furthermore, we show that by incorporating predictions made by experts at admission, which introduces a potential bias, the prediction effectiveness increases to an AUC score of 0.83 and 0.81 for these respective tasks.
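
    The evaluation setup described above (a binary "birth within 48 h" classifier scored by AUC, with and without the expert's admission estimate as an extra feature) can be sketched as below. The data are synthetic stand-ins and the classifier choice is an assumption; the paper's actual features and model are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: the real admission dataset is not reproduced here,
# and the paper does not prescribe this particular classifier or feature set.
rng = np.random.default_rng(0)
n = 1243
X_clinical = rng.normal(size=(n, 10))                                    # placeholder admission features
y = (X_clinical[:, 0] + rng.normal(scale=1.5, size=n) > 0).astype(int)   # "birth within 48 h" label
expert = np.clip(y + rng.normal(scale=0.6, size=n), 0.0, 1.0)            # expert estimate at admission

def auc_for(features):
    X_tr, X_te, y_tr, y_te = train_test_split(features, y, test_size=0.3, random_state=0)
    model = GradientBoostingClassifier().fit(X_tr, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

print("clinical features only:", round(auc_for(X_clinical), 2))
print("clinical + expert opinion:", round(auc_for(np.column_stack([X_clinical, expert])), 2))
```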