
    Pushed beyond the brink: Allee effects, environmental stochasticity, and extinction

    A demographic Allee effect occurs when individual fitness, at low densities, increases with population density. Coupled with environmental fluctuations in demographic rates, Allee effects can have subtle effects on population persistence and extinction. To understand the interplay between these deterministic and stochastic forces, we analyze discrete-time single-species models allowing for general forms of density-dependent feedbacks and stochastic fluctuations in demographic rates. Our analysis provides criteria for stochastic persistence, asymptotic extinction, and conditional persistence. Stochastic persistence requires that the geometric mean of fitness at low densities be greater than one. When this geometric mean is less than one, asymptotic extinction occurs with high probability whenever the initial population density is low. If, in addition, the population experiences only positive density-dependent feedbacks, conditional persistence occurs provided the geometric mean of fitness at high population densities is greater than one. However, if the population experiences both positive and negative density-dependent feedbacks, conditional persistence is possible only if fluctuations in demographic rates are sufficiently small. Applying our results to stochastic models of mate limitation, we illustrate the counter-intuitive result that environmental fluctuations can increase the probability of persistence when populations start at low densities, and decrease the likelihood of persistence when they start at high densities. Conversely, for stochastic models accounting for predator saturation and negative density dependence, environmental stochasticity can result in asymptotic extinction at intermediate predation rates despite conditional persistence occurring at higher predation rates. Comment: 19 pages, 3 figures
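As a toy illustration of these criteria (our own minimal construction, not the authors' model), the sketch below iterates a mate-limitation map N_{t+1} = λ_t·N_t²/(A + N_t) with log-normally fluctuating λ_t. Per-capita fitness λ_t·N/(A + N) vanishes at low density, so the low-density geometric mean of fitness is below one and persistence is conditional on the initial density; all parameter values are illustrative.

```python
import math
import random

def mate_limited_step(n, lam, a=1.0):
    # Mate-limitation map: per-capita fitness lam * n / (a + n)
    # increases with density n (a demographic Allee effect).
    return n * lam * n / (a + n)

def simulate(n0, lam_geomean=2.0, sigma=0.0, a=1.0, steps=200, seed=0, floor=1e-6):
    """Iterate the map with log-normal fluctuations in lam whose geometric
    mean is lam_geomean; densities below `floor` count as extinct."""
    rng = random.Random(seed)
    n = n0
    for _ in range(steps):
        lam = lam_geomean * math.exp(rng.gauss(0.0, sigma))
        n = mate_limited_step(n, lam, a)
        if n < floor:
            return 0.0
        if n > 1e9:  # clearly above the Allee threshold; count as persisting
            return n
    return n

def persistence_prob(n0, sigma, trials=200, **kw):
    # Fraction of independent runs that have not gone extinct.
    return sum(simulate(n0, sigma=sigma, seed=s, **kw) > 0
               for s in range(trials)) / trials
```

For sigma = 0 this map has an unstable Allee threshold at n = 1 (with lam_geomean = 2, a = 1), so deterministic runs started below it die out; with sigma > 0, runs started below the threshold occasionally get pushed across it and persist, matching the counter-intuitive effect of fluctuations described above.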

    Reporting the discharge medication in the discharge letter: an explorative survey of family doctors; meeting abstract

    Background and Aim: In Germany, the discharge medication is usually reported to the general practitioner (GP) first by an initial short report (SR)/notification handed over to the patient, and later by a more detailed discharge letter (DL) from the hospital. Material and Method: We asked N=536 GPs (from Frankfurt/Main and Luebeck) about the typical format in which local hospitals report their patients' discharge medication. The questionnaire comprised 26 items covering (1) the designation of the medication (brand name, generic name) in SR and DL, (2) further specifications, e.g. possibilities of generic substitution or monitoring of sensitive medications, (3) reasons why GPs do not follow the hospitals' recommendations, and (4) possibilities for improving medication-related communication between GPs and hospitals. Results: 39% of the GPs responded sufficiently to the questionnaire. The majority of the GPs (82%) stated that the SR gives only brand names (often or always) and provides neither the generic name nor any further information on generic substitution (seldom or never). 65% of the responders stated that even the DL gives only brand names. Only 41% of the responders stated that further treatment-relevant specifications are given (often or always). 95% responded that new medications or changes to the patient's usual medication are seldom or never explained in the DL, and that GPs are not explicitly informed about relevant medication changes. 58% of the responders cited economic reasons, e.g. generic substitution, for re-adjusting the discharge medication. The majority of responders (83%) rated pre-discharge information about the medication (e.g. via fax) as useful or very useful, and 54% favoured a hotline to a relevant contact person in the hospital for emerging treatment problems. 67% of the responders were in favour of regular meetings between GPs and hospital doctors on current pharmacotherapy.
Conclusion: Our survey points to marked deficiencies in the reporting of discharge medication to GPs. Conflict of interest: None

    Resolving stellar populations with crowded field 3D spectroscopy

    (Abridged) We describe a new method to extract spectra of stars from observations of crowded stellar fields with integral field spectroscopy (IFS). Our approach extends the well-established concept of crowded-field photometry in images into the domain of 3-dimensional spectroscopic datacubes. The main features of our algorithm are: (1) We assume that a high-fidelity input source catalogue already exists, so that no sophisticated source detection needs to be performed on the IFS data. (2) Source positions and properties of the point spread function (PSF) vary smoothly between spectral layers of the datacube, and these variations can be described by simple fitting functions. (3) The shape of the PSF can be adequately described by an analytical function. Even without isolated PSF calibrator stars, we can therefore estimate the PSF by a model fit to the full ensemble of stars visible within the field of view. (4) By using sparse matrices to describe the sources, the problem of extracting the spectra of many stars simultaneously becomes computationally tractable. We present extensive performance and validation tests of our algorithm using realistic simulated datacubes that closely reproduce actual IFS observations of the central regions of Galactic globular clusters, and investigate the quality of the extracted spectra under the effects of crowding. The main effect of blending between two nearby stars is a decrease in the S/N of their spectra. The effect increases with the crowding of the field such that the maximum number of stars with useful spectra is always ~0.2 per spatial resolution element. This balance breaks down when a total source density of ~1 significantly detected star per resolution element is exceeded. We close with an outlook, applying our method to a simulated globular cluster observation with the upcoming MUSE instrument at the ESO-VLT. Comment: accepted for publication in A&A, 19 pages, 19 figures
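To make the per-layer extraction idea concrete, here is a deliberately minimal sketch (our own construction, not the authors' code) reduced to one spatial dimension and two blended stars with a fixed Gaussian PSF: each spectral layer is a linear superposition of the stellar PSF profiles, so the per-layer fluxes follow from 2x2 normal equations. A real implementation works in 2-D, refits the smoothly varying PSF per layer, and uses sparse matrices to handle many stars at once.

```python
import math

def gaussian_psf(pixels, center, fwhm):
    """Unit-normalized 1-D Gaussian PSF sampled on integer pixels, a toy
    stand-in for the analytic PSF model fitted to the ensemble of stars."""
    sigma = fwhm / 2.3548
    vals = [math.exp(-0.5 * ((p - center) / sigma) ** 2) for p in pixels]
    total = sum(vals)
    return [v / total for v in vals]

def extract_layer(layer, psfs):
    """Least-squares fluxes of two blended sources in one spectral layer,
    solving the 2x2 normal equations A^T A x = A^T b by hand."""
    p1, p2 = psfs
    a11 = sum(v * v for v in p1)
    a12 = sum(u * v for u, v in zip(p1, p2))
    a22 = sum(v * v for v in p2)
    b1 = sum(u * v for u, v in zip(p1, layer))
    b2 = sum(u * v for u, v in zip(p2, layer))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

def extract_spectra(cube, psfs):
    # Loop over spectral layers; the PSF is held fixed here, whereas in
    # practice it varies smoothly with wavelength and is refit per layer.
    return [extract_layer(layer, psfs) for layer in cube]
```

Because the toy data lie exactly in the span of the two PSFs, the fit recovers the input spectra exactly; with noise added, the blending between nearby stars degrades the S/N of the recovered spectra, as described above.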

    Stabilizing Training of Generative Adversarial Networks through Regularization

    Deep generative models based on Generative Adversarial Networks (GANs) have demonstrated impressive sample quality, but in order to work they require a careful choice of architecture, parameter initialization, and selection of hyper-parameters. This fragility is in part due to a dimensional mismatch or non-overlapping support between the model distribution and the data distribution, causing their density ratio and the associated f-divergence to be undefined. We overcome this fundamental limitation and propose a new regularization approach with low computational cost that yields a stable GAN training procedure. We demonstrate the effectiveness of this regularizer across several architectures trained on common benchmark image-generation tasks. Our regularization turns GAN models into reliable building blocks for deep learning.
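The shape of a gradient-norm regularizer of this kind can be seen in a deliberately tiny sketch (ours, not the paper's code; the exact weighting of the penalty is our assumption): for a 1-D linear discriminator f(x) = w·x + b the input gradient is simply w, so the regularized discriminator loss, with the squared gradient norm weighted by (1 − σ(f))² on data and σ(f)² on generated samples, has a closed form.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def regularized_d_loss(w, b, real_xs, fake_xs, gamma):
    """Binary cross-entropy discriminator loss plus a weighted
    gradient-norm penalty, for a toy 1-D linear discriminator
    f(x) = w*x + b (so grad_x f = w everywhere)."""
    loss_real = penalty_real = 0.0
    for x in real_xs:
        d = sigmoid(w * x + b)
        loss_real += -math.log(d)
        # Penalty on real samples is weighted by (1 - sigma(f))^2.
        penalty_real += (1.0 - d) ** 2 * w * w
    loss_fake = penalty_fake = 0.0
    for x in fake_xs:
        d = sigmoid(w * x + b)
        loss_fake += -math.log(1.0 - d)
        # Penalty on generated samples is weighted by sigma(f)^2.
        penalty_fake += d ** 2 * w * w
    loss = loss_real / len(real_xs) + loss_fake / len(fake_xs)
    omega = penalty_real / len(real_xs) + penalty_fake / len(fake_xs)
    return loss + 0.5 * gamma * omega
```

The penalty discourages large input gradients exactly where the discriminator is still uncertain, which is what smooths training even when the two distributions have disjoint support; in a real GAN the gradient norm is computed by automatic differentiation rather than analytically.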

    How to deploy security mechanisms online (consistently)

    To mitigate a myriad of Web attacks, modern browsers support client-side security policies shipped through HTTP response headers. To enforce these policies, the operator can set response headers that the server then communicates to the client. We have shown that one of those, namely the Content Security Policy (CSP), requires massive engineering effort to be deployed in a non-trivially bypassable way. Thus, many policies deployed on Web sites are misconfigured. Due to the capability of CSP to also defend against framing-based attacks, it has a functionality-wise overlap with the X-Frame-Options header. We have shown that this overlap leads to inconsistent behavior of browsers, but also to inconsistent deployment on real-world Web applications. Overloaded defense mechanisms are not the only ones prone to security inconsistencies: we have shown that, due to the structure of the Web itself, misconfigured origin servers or geolocation-based CDN caches can cause unwanted security inconsistencies. Given the high number of misconfigurations of CSP, we also took a closer look at the deployment process of the mechanism. By conducting a semi-structured interview, including a coding task, we were able to shed light on motivations, strategies, and roadblocks of CSP deployment. However, due to the wide usage of CSP, drastic changes are generally considered impractical. Therefore, we also evaluated whether one of the newest Web security features, namely Trusted Types, can be improved.
To mitigate a multitude of attacks on the Web, modern browsers support client-side security mechanisms that are delivered via so-called HTTP response headers. To apply these security features, the operator of a Web site sets such a header, which the server then delivers to the client. We have shown that configuring one of these mechanisms, the Content Security Policy (CSP), requires an enormous engineering effort so that it cannot be bypassed in a trivial way. Hence this feature is misconfigured on many Web sites, including top sites. Owing to the ability of CSP to also fend off framing-based attacks, its functionality moreover overlaps with that of the X-Frame-Options header. We have shown that this leads to inconsistent behavior of browsers, but also to inconsistent deployment in real-world Web applications. Overloaded defense mechanisms are not the only ones susceptible to security inconsistencies. We have shown that, owing to the structure of the Web itself, misconfigured origin servers or CDN caches that depend on geographic location can cause unwanted security inconsistencies. So as not to disregard the high number of misconfigured CSP headers, we also took a closer look at the process of creating a CSP header. With the help of a semi-structured interview, which also included a programming task, we were able to shed light on the motivations, strategies, and obstacles in deploying CSP. However, owing to the wide adoption of CSP, drastic changes are generally considered impractical. We therefore also examined whether one of the newest, and hence little-used, Web security mechanisms, namely Trusted Types, is likewise in need of improvement.
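The functional overlap between CSP's frame-ancestors directive and the X-Frame-Options header can be sketched with a small hypothetical helper (ours, not from the thesis): both mechanisms can express "deny" and "same origin", but only CSP can express an arbitrary allow-list, which is one root of the deployment inconsistencies discussed above.

```python
def framing_headers(allowed_framers):
    """Return response headers expressing one framing policy in both
    mechanisms where possible. X-Frame-Options knows only DENY and
    SAMEORIGIN; CSP frame-ancestors also accepts arbitrary allow-lists."""
    if not allowed_framers:
        return {"X-Frame-Options": "DENY",
                "Content-Security-Policy": "frame-ancestors 'none'"}
    if allowed_framers == ["'self'"]:
        return {"X-Frame-Options": "SAMEORIGIN",
                "Content-Security-Policy": "frame-ancestors 'self'"}
    # Arbitrary allow-lists exist only in CSP; a legacy browser that
    # ignores CSP then applies no framing restriction at all -- one
    # source of the inconsistent browser behavior noted above.
    return {"Content-Security-Policy":
            "frame-ancestors " + " ".join(allowed_framers)}
```

Sending both headers for the expressible cases keeps old browsers protected, while modern browsers let the CSP directive take precedence.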

    Peer Data Management

    Peer Data Management (PDM) deals with the management of structured data in unstructured peer-to-peer (P2P) networks. Each peer can store data locally and define relationships between its data and the data provided by other peers. Queries posed to any of the peers are then answered by also considering the information implied by those mappings. The overall goal of PDM is to provide semantically well-founded integration and exchange of heterogeneous and distributed data sources. Unlike traditional data integration systems, peer data management systems (PDMSs) thereby allow for full autonomy of each member and need no central coordinator. The promise of such systems is to provide flexible data integration and exchange at low setup and maintenance costs. However, building such systems raises many challenges. Besides the obvious scalability problem, choosing an appropriate semantics that can deal with arbitrary, even cyclic topologies, data inconsistencies, or updates while at the same time allowing for tractable reasoning has been an area of active research in the last decade. In this survey we provide an overview of the different approaches suggested in the literature to tackle these problems, focusing on appropriate semantics for query answering and data exchange rather than on implementation-specific problems.
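A toy sketch of mapping-based query answering in a PDMS (our own simplification; real systems use far richer mapping and repair semantics): each peer holds local tuples, a mapping states that tuples of one peer's relation also count as tuples of another peer's relation, and a visited set keeps query answering terminating even on cyclic topologies.

```python
def answer(peer, relation, peers, mappings, _seen=None):
    """Collect a peer's local tuples for `relation` plus all tuples implied
    by mappings. `peers` maps peer -> {relation -> set of tuples};
    `mappings` holds (src_peer, src_rel, dst_peer, dst_rel) entries meaning
    that tuples of src_rel at src_peer also hold as dst_rel at dst_peer."""
    if _seen is None:
        _seen = set()
    if (peer, relation) in _seen:
        # Already expanding this relation: stop, so cycles terminate.
        return set()
    _seen.add((peer, relation))
    result = set(peers.get(peer, {}).get(relation, set()))
    for src, src_rel, dst, dst_rel in mappings:
        if dst == peer and dst_rel == relation:
            result |= answer(src, src_rel, peers, mappings, _seen)
    return result
```

Even with a cyclic mapping topology (A's relation feeds B's and vice versa), the visited set makes the recursion finite, mirroring the tractability concern for cyclic topologies raised in the survey.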