
    Design and Evaluation of an Anti-Phishing Artifact Based on Useful Transparency

    Background: Various security interventions exist to support users in detecting phishing emails, including providing the URL in a tooltip or the status bar. Aim: To design and evaluate an anti-phishing artifact based on the Useful Transparency theory. Method: We used the design science research approach for the entire process. For the evaluation, we ran a between-subjects study with 109 participants from the UK to determine the anti-phishing artifact's effectiveness in helping users distinguish between phishing and legitimate emails. Results: Compared against the state-of-the-art security intervention (displaying the URL in the status bar), our anti-phishing artifact increased detection significantly: phishing detection rose from 50% to 72%. Conclusion: Although further studies are required, the evaluation demonstrates that the Useful Transparency theory can yield promising security interventions. Thus, it may be worth considering for other security interventions, too.
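    The reported jump from 50% to 72% detection can be sanity-checked with a standard two-proportion z-test. The sketch below is illustrative only: the abstract gives the total sample (109) but not the per-group sizes, so an even split between conditions is assumed here.

```python
from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic for p2 - p1."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Hypothetical group sizes (the abstract reports 109 participants
# total but not the split): 54 in the control condition, 55 with
# the artifact; 50% vs 72% detection rates as reported.
z = two_proportion_z(round(0.50 * 54), 54, round(0.72 * 55), 55)
print(round(z, 2))
```

    Under this assumed split, z ≈ 2.4 exceeds the 1.96 threshold at the 5% level, consistent with the abstract's claim of a significant increase.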

    Manufacturing and logistics use cases on combining discrete event simulation and process mining

    This paper provides an overview of two approaches to combining process mining and discrete-event simulation in manufacturing and logistics for mutual benefit. One approach focuses on generating simulation models based on process models, while the other utilizes process models to validate existing simulation models. Both approaches are explained at a conceptual and technical level. In addition, two real-world use cases, one from intralogistics and one from battery cell assembly, are presented. Initial experiences and lessons learned from applying the approaches to the use cases are discussed. The findings illustrate the potential and the limitations of the explored combinations.

    Parameterization of Gamma, e^+/- and Neutrino Spectra Produced by p-p Interaction in Astronomical Environment

    We present the yields and spectra of stable secondary particles (gamma, e^+/-, nu_e, nubar_e, nu_mu, and nubar_mu) from p-p interactions as parameterized formulae to facilitate calculations involving them in astronomical environments. The formulae are derived from the up-to-date p-p interaction model by [Kamae05], which incorporates the logarithmically rising inelastic cross-section, the diffraction dissociation process, and Feynman scaling violation. To improve fidelity to experimental data at lower energies, two baryon resonance contributions have been added: one representing Delta(1232) and the other multiple resonances around 1600 MeV/c^2. The parameterized formulae predict that all secondary particle spectra are harder by about 0.05 in power-law index than that of the incident protons, and that their inclusive cross-sections are larger than those predicted by p-p interaction models based on Feynman scaling.
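    The quoted hardening of about 0.05 in spectral index has a simple numeric reading: if the incident protons follow E^-s, the secondaries follow roughly E^-(s-0.05), so the secondary-to-primary ratio scales as E^0.05. A minimal illustration (the index s = 2.7 below is an assumed value for illustration, not taken from the paper):

```python
# If primaries go as E**-s and secondaries as E**-(s - 0.05),
# their ratio is E**0.05 -- a slow rise with energy.
s = 2.7                          # assumed proton power-law index
energies = [1.0, 10.0, 100.0]    # arbitrary energy units
ratios = [E ** (-(s - 0.05)) / E ** (-s) for E in energies]
print([round(r, 3) for r in ratios])
```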

    A SAT-Based Encoding of the One-Pass and Tree-Shaped Tableau System for LTL

    A new one-pass and tree-shaped tableau system for LTL satisfiability checking has been recently proposed, where each branch can be explored independently from the others and, furthermore, directly corresponds to a potential model of the formula. Despite its simplicity, it has proved effective in practice. In this paper, we provide a SAT-based encoding of such a tableau system, based on the technique of bounded satisfiability checking. Starting with a single-node tableau, i.e., depth k of the tree-shaped tableau equal to zero, we proceed in an incremental fashion. At each iteration, the tableau rules are encoded in a Boolean formula representing all branches of the tableau up to the current depth k. A typical downside of such bounded techniques is the effort needed to understand when to stop incrementing the bound in order to guarantee the completeness of the procedure. In contrast, termination and completeness of the proposed algorithm are guaranteed without computing any upper bound on the length of candidate models, thanks to the Boolean encoding of the PRUNE rule of the original tableau system. We conclude the paper by describing a tool that implements our procedure and comparing its performance with other state-of-the-art LTL solvers.
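    The incremental scheme described above can be sketched as a deepening loop: at each bound k, one Boolean query asks whether an open tableau branch of depth k exists (a candidate model), and a second, via the PRUNE-rule encoding, asks whether every branch is already closed, which gives termination without any precomputed bound on model length. The stub below captures only this loop structure; the two queries are hypothetical callbacks standing in for real SAT-solver calls, not the authors' encoding.

```python
def bounded_tableau_search(open_branch_at, all_pruned_at, k_max=1000):
    """Iterative-deepening skeleton: increase the depth k until either
    an open branch (a candidate model) appears or the PRUNE encoding
    reports that every branch is closed."""
    for k in range(k_max + 1):
        if open_branch_at(k):      # SAT query: open branch of depth k?
            return ("SAT", k)
        if all_pruned_at(k):       # PRUNE query: all branches closed?
            return ("UNSAT", k)
    return ("UNKNOWN", k_max)

# Toy run: pretend the first open branch appears at depth 3.
result = bounded_tableau_search(lambda k: k >= 3, lambda k: False)
print(result)
```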

    Investigation of the Stationary and Transient A1·− Radical in Trp → Phe Mutants of Photosystem I

    Photosystem I (PS I) contains two symmetric branches of electron transfer cofactors. In both the A- and B-branches, the phylloquinone in the A1 site is π-stacked with a tryptophan residue and is H-bonded to the backbone nitrogen of a leucine residue. In this work, we use optical and electron paramagnetic resonance (EPR) spectroscopies to investigate cyanobacterial PS I complexes, where these tryptophan residues are changed to phenylalanine. The time-resolved optical data show that backward electron transfer from the terminal electron acceptors to P700·+ is affected in the A- and B-branch mutants, both at ambient and cryogenic temperatures. These results suggest that the quinones in both branches take part in electron transport at all temperatures. The electron-nuclear double resonance (ENDOR) spectra of the spin-correlated radical pair P700·+A1·− and the photoaccumulated radical anion A1·−, recorded at cryogenic temperature, allowed the identification of characteristic resonances belonging to protons of the methyl group, some of the ring protons and the proton hydrogen-bonded to phylloquinone in the wild type and both mutants. Significant changes in PS I isolated from the A-branch mutant are detected, while PS I isolated from the B-branch mutant shows the spectral characteristics of wild-type PS I. A possible short-lived B-branch radical pair cannot be detected by EPR due to the available time resolution; therefore, only the A-branch quinone is observed under conditions typically employed for EPR and ENDOR spectroscopies

    Supporting Pharmacovigilance Signal Validation and Prioritization with Analyses of Routinely Collected Health Data: Lessons Learned from an EHDEN Network Study

    Introduction: Individual case reports are the main asset in pharmacovigilance signal management. Signal validation is the first stage after signal detection and aims to determine if there is sufficient evidence to justify further assessment. Throughout signal management, a prioritization of signals is continually made. Routinely collected health data can provide relevant contextual information but are primarily used at a later stage in pharmacoepidemiological studies to assess communicated signals. Objective: The aim of this study was to examine the feasibility and utility of analysing routine health data from a multinational distributed network to support signal validation and prioritization and to reflect on key user requirements for these analyses to become an integral part of this process. Methods: Statistical signal detection was performed in VigiBase, the WHO global database of individual case safety reports, targeting generic manufacturer drugs and 16 prespecified adverse events. During a 5-day study-a-thon, signal validation and prioritization were performed using information from VigiBase, regulatory documents and the scientific literature alongside descriptive analyses of routine health data from 10 partners of the European Health Data and Evidence Network (EHDEN). Databases included in the study were from the UK, Spain, Norway, the Netherlands and Serbia, capturing records from primary care and/or hospitals. Results: Ninety-five statistical signals were subjected to signal validation, of which eight were considered for descriptive analyses in the routine health data. Design, execution and interpretation of results from these analyses took up to a few hours for each signal (of which 15–60 minutes were for execution) and informed decisions for five out of eight signals. 
The impact of insights from the routine health data varied and included possible alternative explanations, potential public health and clinical impact, and the feasibility of follow-up pharmacoepidemiological studies. Three signals were selected for signal assessment; two of these decisions were supported by insights from the routine health data. Standardization of analytical code, availability of adverse event phenotypes including bridges between different source vocabularies, and governance around the access and use of routine health data were identified as important aspects for future development. Conclusions: Analyses of routine health data from a distributed network to support signal validation and prioritization are feasible within the given time limits and can inform decision making. The cost–benefit of integrating these analyses at this stage of signal management requires further research.

    The relationship between globalisation and income inequality: evidence from developing and transition economies

    Ever-deepening income inequality is a pressing global problem and the subject of continual discussion among policymakers and academics alike. Even advanced economies have not escaped the problems of growing income inequality, but in developing and transition countries the problem is even more widespread and serious. The range of problems brought about by deepening income inequality in these countries is broad and diverse, which is why income inequality is considered likely to cause significant damage worldwide within the next decade. Both in the academic literature and in the political arena, there is debate over whether the source of income inequality lies in domestic policy or rather in the natural developments that globalisation has brought about in the world. Accordingly, the aim of this bachelor's thesis was to examine the relationship between the economic dimension of globalisation and income inequality in developing and transition economies. Based on existing theoretical approaches and the results of earlier empirical studies, the thesis posed the hypothesis that the more globalised an economy, the lower the income inequality in developing and transition countries. To test this hypothesis, the author carried out both correlation analysis and modelling by means of multiple regression analysis. The sample comprised 53 developing and transition countries from 12 different regions. The results showed that open trade and foreign direct investment, which characterise the process of economic globalisation, are not factors that generate income inequality in developing and transition countries; on the contrary, they can be regarded as influences that reduce income inequality. The research results are useful for better understanding the nature of the complex problem of income inequality in developing and transition countries. The results could also contribute to the design of effective policy in developing and transition countries, with the aim of supporting more successful management of the income inequality problem.
    http://www.ester.ee/record=b5144161*es

    Persistently modified h-channels after complex febrile seizures convert the seizure-induced enhancement of inhibition to hyperexcitability.

    Febrile seizures are the most common type of developmental seizures, affecting up to 5% of children. Experimental complex febrile seizures involving the immature rat hippocampus led to a persistent lowering of seizure threshold despite an upregulation of inhibition. Here we provide a mechanistic resolution to this paradox by showing that, in the hippocampus of rats that had febrile seizures, the long-lasting enhancement of the widely expressed intrinsic membrane conductance Ih converts the potentiated synaptic inhibition to hyperexcitability in a frequency-dependent manner. The altered gain of this molecular inhibition-excitation converter reveals a new mechanism for controlling the balance of excitation-inhibition in the limbic system. In addition, here we show for the first time that h-channels are modified in a human neurological disease paradigm

    The Added Value of Large-Eddy and Storm-Resolving Models for Simulating Clouds and Precipitation

    More than one hundred days were simulated over very large domains with fine (0.156 km to 2.5 km) grid spacing for realistic conditions to test the hypothesis that storm (kilometer) and large-eddy (hectometer) resolving simulations would provide an improved representation of clouds and precipitation in atmospheric simulations. At scales that resolve convective storms (storm-resolving for short), the vertical velocity variance becomes resolved and a better physical basis is achieved for representing clouds and precipitation. As in past studies, we found an improved representation of precipitation at kilometer scales, as compared to models with parameterized convection. The main precipitation features (location, diurnal cycle and spatial propagation) are well captured already at kilometer scales, and refining resolution to hectometer scales does not substantially change the simulations in these respects. It does, however, lead to a reduction in precipitation on the time-scales considered – most notably over the ocean in the tropics. Changes in the distribution of precipitation, with less frequent extremes, are also found in simulations incorporating hectometer scales. Hectometer scales appear to be more important for the representation of clouds, making it possible to capture many important aspects of the cloud field, from the vertical distribution of cloud cover to the distribution of cloud sizes and the diel (daily) cycle. Qualitative improvements, particularly in the ability to differentiate cumulus from stratiform clouds, are seen when the grid spacing is reduced from kilometer to hectometer scales. At the hectometer scale new challenges arise, but the similarity of observed and simulated scales, and the more direct connection between the circulation and the unconstrained degrees of freedom, make these challenges less daunting.
This quality, combined with already improved simulation as compared to more parameterized models, underpins our conviction that the use and further development of storm-resolving models offers exciting opportunities for advancing understanding of climate and climate change

    A process model of the formation of spatial presence experiences

    In order to bridge interdisciplinary differences in Presence research and to establish connections between Presence and “older” concepts of psychology and communication, a theoretical model of the formation of Spatial Presence is proposed. It is applicable to the exposure to different media and intended to unify the existing efforts to develop a theory of Presence. The model includes assumptions about attention allocation, mental models, and involvement, and considers the role of media factors and user characteristics as well, thus incorporating much previous work. It is argued that a commonly accepted model of Spatial Presence is the only solution to secure further progress within the international, interdisciplinary and multiple-paradigm community of Presence research