
    Trade persistence and trader identity - evidence from the demise of the Hanseatic League

    How do trade networks persist following disruptions of political networks? We study different types of persistence following the decline of the Hanseatic League, using a panel of 21,590 city-level trade flows over 190 years, covering 1,425 cities. We use the Sound Toll data, collected by the Danish crown until 1857, which registered every ship entering or leaving the Baltic Sea and forms one of the most granular and extensive trade datasets available. We measure trade flows by counting the number of ships sailing on a particular route in a given year and estimate gravity equations using PPML with an appropriate set of fixed effects. Bilateral gravity estimates show that trade among former Hansa cities persisted for only about 30 years after the League's dissolution in 1669, and this persistence is not robust across regression specifications. However, when we incorporate the flag under which a ship sails and consider trilateral trade (where an observation is a combination of origin, destination, and flag), we find that trade persistently exceeds the gravity benchmark: Hansa cities continued to trade more with each other, but only on ships that were owned in another former Hansa city and thus sailed under a Hansa flag. Similar effects hold for trade between former Hansa cities and their trading posts abroad, again only conditional on the ship sailing under a former Hanseatic flag. Trade flows between the same pair of origin and destination cities, but under a different flag, do not show this persistence. Our main result is that the identity of traders persists longer and more strongly than the other forms of trading relationships we can measure. Apart from these new quantitative and qualitative insights on the persistence of trade flows, our paper is also of historical interest, as it provides new and detailed information on the speed of decline of trade among members of the Hanseatic League.
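    As a rough illustration of the estimation approach described above, a bilateral PPML gravity specification with origin-year and destination-year fixed effects might take the form sketched below; the exact specification and variable names are assumptions, not taken from the paper.

```latex
% Illustrative PPML gravity specification (assumed form, not the paper's exact model):
% ships_{odt}  = number of ships sailing from origin o to destination d in year t
% \delta_{ot}, \delta_{dt} = origin-year and destination-year fixed effects
% Hansa_{od}   = 1 if both cities were members of the Hanseatic League
% Post_t       = 1 for years after the League's dissolution in 1669
\[
  E\bigl[\mathit{ships}_{odt}\bigr]
    = \exp\!\bigl(\delta_{ot} + \delta_{dt}
      + \gamma \ln \mathit{dist}_{od}
      + \beta \,\mathit{Hansa}_{od} \times \mathit{Post}_t\bigr)
\]
```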

    Counteracting phishing through HCI

    Computer security is a very technical topic that is in many cases hard to grasp for the average user. Especially when using the Internet, the largest network connecting computers globally, security and safety are important. In many cases they can be achieved without the user's active participation: securely storing user and customer data on Internet servers is the task of the respective company or service provider. There are, however, also many cases where the user is involved in the security process, especially when he or she is intentionally attacked. Socially engineered phishing attacks are such a security issue, where users are directly targeted to reveal private data and credentials to an unauthorized attacker. These types of attacks are the main focus of the research presented within my thesis. I look at how such attacks can be counteracted, by detecting them in the first place but also by communicating the detection results to the user. In prior research and development, these two areas have most often been treated separately, and new security measures were developed without taking the final step of interacting with the user into account. This interaction mainly means presenting the detection results and receiving final decisions from the user. As an overarching goal within this thesis, I consider these two aspects together, treating overall protection as the sum of detection and "user intervention". Across nine research projects on phishing protection, this thesis answers ten research questions in the areas of creating new phishing detectors (phishing detection) and providing usable user feedback for such systems (user intervention). The ten research questions cover five topics in each area, ranging from the definition of the respective topic, through ways to measure and improve it, to reasoning about what ultimately makes sense. The research questions were chosen to cover the breadth of both areas and the interplay between them. They are mostly answered by developing and evaluating different prototypes built within the projects, which cover a range of human-centered detection properties and assess how well these are suited for phishing detection. I also examine different possibilities for user intervention (e.g., what should a warning look like? should it be blocking, non-blocking, or perhaps something else?). As a major contribution, I finally present a model that combines phishing detection and user intervention and propose development and evaluation recommendations for similar systems. The research results show that when developing security detectors whose results are relevant for end users, such a detector can only be successful if the final user feedback has already been taken into account during the development process.

    Predatory journals: Perception, impact and use of Beall’s list by the scientific community–A bibliometric big data study

    Beall's list is widely used to identify potentially predatory journals. With this study, we aim to investigate the impact of Beall's list on the perception of listed journals as well as on the publication and citation behavior of the scientific community. We performed comprehensive bibliometric analyses of data extracted from the ISSN database, PubMed, PubMed Central (PMC), Crossref, Scopus and Web of Science. Citation analysis was performed using data extracted from the Crossref Cited-by database. At the time of analysis, Beall's list consisted of 1,289 standalone journals and 1,162 publishers, which together correspond to 21,735 individual journals. Of these, 3,206 (38.8%) were located in the United States, 2,484 (30.0%) in India, and 585 (7.1%) in the United Kingdom. The majority of journals were listed in the ISSN database (n = 8,266), Crossref (n = 5,155), PubMed (n = 1,139), Scopus (n = 570), DOAJ (n = 224), PMC (n = 135) or Web of Science (n = 50). The number of articles published by journals on Beall's list as well as on the DOAJ continuously increased from 2011 to 2017. In 2018, the number of articles published by journals on Beall's list decreased. Journals on Beall's list were cited more often when listed in Web of Science (OR = 10.7, 95% CI 5.5 to 21.5) and PMC (OR = 9.4, 95% CI 6.3 to 14.1). It seems that the importance of Beall's list for the scientific community is overestimated. In contrast, journals are more likely to be selected for publication or citation when indexed by commonly used and renowned databases. Thus, the providers of these databases must be aware of their impact and verify that good publication practice standards are being applied by the journals listed.
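    For reference, and not quoted from the study itself, odds ratios with 95% confidence intervals of the kind reported above are conventionally derived from a 2x2 contingency table (cited vs. not cited, indexed vs. not indexed), for example via Woolf's log method:

```latex
% Standard odds ratio with a Woolf (log) confidence interval for a 2x2 table;
% a, b, c, d are the four cell counts and 1.96 is the z-value for a 95% interval.
\[
  \mathrm{OR} = \frac{a\,d}{b\,c}, \qquad
  \mathrm{CI}_{95\%} = \exp\!\Bigl(\ln \mathrm{OR} \pm 1.96
    \sqrt{\tfrac{1}{a} + \tfrac{1}{b} + \tfrac{1}{c} + \tfrac{1}{d}}\Bigr)
\]
```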

    Effects of pressure-controlled intermittent coronary sinus occlusion on regional ischemic myocardial function

    Pressure-controlled intermittent coronary sinus occlusion has been reported to reduce infarct size in dogs with coronary artery occlusion, possibly because of increased ischemic zone perfusion and washout of toxic metabolites. The influence of this intervention on regional myocardial function was investigated in open and closed chest dogs. In six open chest dogs with severe stenosis of the left anterior descending coronary artery and subsequent total occlusion, a 10 minute application of intermittent coronary sinus occlusion increased ischemic myocardial segment shortening from 5.5 ± 1.2 to 8.2 ± 2.6% (NS) and from −0.1 ± 2.1 to 2.3 ± 1.2% (NS), respectively. In eight closed chest anesthetized dogs, intermittent coronary sinus occlusion was applied for 2.5 hours between 30 minutes and 3 hours of intravascular balloon occlusion of the proximal left anterior descending coronary artery. Standardized two-dimensional echocardiographic measurements of left ventricular function were performed to derive systolic sectional and segmental fractional area changes in five short-axis cross sections of the left ventricle. Fractional area change in all the severely ischemic segments (< 5% systolic wall thickening) was −4.0 ± 4.7% at 30 minutes after occlusion, and increased with subsequent 60 and 150 minutes of treatment to 13.1 ± 3.3 and 7.0 ± 3.3%, respectively (p < 0.05). At the most extensively involved low papillary muscle level of the ventricle, regional ischemic fractional area change was increased by intermittent coronary sinus occlusion between 30 and 180 minutes of coronary occlusion from −0.4 ± 0.1 to 14.4 ± 4% (p < 0.05), whereas a further deterioration was noted in untreated dogs with coronary occlusion. Continuous arterial and coronary venous blood density measurements were performed in seven open chest dogs to determine the influence of pressure-controlled intermittent coronary sinus occlusion on ischemic myocardial washout. The arteriovenous density gradient was 0.16 ± 0.05 g/liter during coronary artery occlusion, and decreased to 0.05 ± 0.08 g/liter (p < 0.05) as a result of the intervention, suggesting a significant fluid washout from the myocardium. It is concluded that pressure-controlled intermittent coronary sinus occlusion provides recovery of cardiac function and that this benefit might be associated with enhanced ischemic zone washout.
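    As background (a standard echocardiographic definition, not quoted from the study), the fractional area change used above is conventionally computed from the end-diastolic and end-systolic cross-sectional areas of the left ventricle:

```latex
% Conventional definition of fractional area change (FAC) from 2D echocardiography;
% A_ED and A_ES denote end-diastolic and end-systolic cross-sectional areas.
\[
  \mathrm{FAC} = \frac{A_{ED} - A_{ES}}{A_{ED}} \times 100\%
\]
```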

    Reducing the Pill Burden: Immunosuppressant Adherence and Safety after Conversion from a Twice-Daily (IR-Tac) to a Novel Once-Daily (LCP-Tac) Tacrolimus Formulation in 161 Liver Transplant Patients

    Non-adherence to immunosuppressant therapy reduces long-term graft and patient survival after solid organ transplantation. The objective of this 24-month prospective study was to determine adherence, efficacy and safety after conversion of stable liver transplant (LT) recipients from a standard twice-daily immediate release Tacrolimus (IR-Tac) to a novel once-daily Life Cycle Pharma Tacrolimus (LCP-Tac) formulation. We converted a total of 161 LT patients at baseline, collecting Tacrolimus trough levels, laboratory values, physical examination data and the BAASIS© questionnaire for self-reported adherence to immunosuppression at regular intervals. With 134 participants completing the study period (17% dropouts), overall adherence as measured by the BAASIS© increased by 57% from baseline to month 24 (51% vs. 80%). Patients who required only a morning dose of their concomitant medications reported the largest improvement in adherence after conversion. The intra-patient variability (IPV) of consecutive Tacrolimus trough levels after conversion did not change significantly compared to pre-conversion levels. Despite reducing the daily dose by 30% at baseline, as recommended by the manufacturer, Tac trough levels remained stable, reflected by an increase in the concentration-dose (C/D) ratio. No episodes of graft rejection or loss occurred. Our data suggest that the use of LCP-Tac in liver transplant patients is safe and can increase adherence to immunosuppression compared to conventional IR-Tac.
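    For context (standard definitions in the transplant literature, not quoted from the study), the concentration-dose ratio and intra-patient variability referred to above are usually computed as follows, with IPV expressed as the coefficient of variation of consecutive trough levels:

```latex
% Common definitions: the C/D ratio relates the tacrolimus trough level to the total daily dose;
% IPV is the coefficient of variation of consecutive trough levels C_1 ... C_n.
\[
  C/D\ \mathrm{ratio} = \frac{\text{trough concentration}\ [\mathrm{ng/mL}]}{\text{total daily dose}\ [\mathrm{mg}]},
  \qquad
  \mathrm{IPV} = \frac{\mathrm{SD}(C_1,\dots,C_n)}{\overline{C}} \times 100\%
\]
```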

    Use and accuracy of decision support systems using artificial intelligence for tumor diseases: a systematic review and meta-analysis

    Background: For therapy planning in cancer patients, multidisciplinary team meetings (MDMs) are mandatory. Due to the high number of cases being discussed and the significant workload of clinicians, a Clinical Decision Support System (CDSS) may improve the clinical workflow. Methods: This review and meta-analysis aims to provide an overview of the systems utilized and to evaluate the concordance between CDSS and MDM recommendations. Results: A total of 31 studies were identified for final analysis. Analysis across different cancers shows a concordance rate (CR) of 72.7% for stage I-II and 73.4% for III-IV. For breast carcinoma, the CR for stage I-II was 72.8% and for III-IV 84.1%, P ≤ 0.00001. The CR for colorectal carcinoma was 63% for stage I-II and 67% for III-IV, for gastric carcinoma 55% and 45%, and for lung carcinoma 85% and 83%, respectively, all P > 0.05. Analysis of SCLC and NSCLC yields a CR of 94.3% and 82.7%, P = 0.004, and for adenocarcinoma and squamous cell carcinoma in lung cancer a CR of 90% and 86%, P = 0.02. Conclusion: CDSS has already been implemented in clinical practice, and while the findings suggest that its use is feasible for some cancers, further research is needed to fully evaluate its effectiveness.
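    As a minimal sketch of how such a concordance rate can be computed (an assumption about the general approach, not the review's actual analysis code), one might compare CDSS recommendations with MDM decisions case by case:

```python
# Minimal sketch: concordance rate between CDSS recommendations and MDM decisions.
# The example lists below are hypothetical; a real analysis would load case-level data.

def concordance_rate(cdss_recs, mdm_decisions):
    """Share of cases (in %) where the CDSS recommendation matches the MDM decision."""
    if len(cdss_recs) != len(mdm_decisions) or not cdss_recs:
        raise ValueError("inputs must be non-empty and of equal length")
    matches = sum(a == b for a, b in zip(cdss_recs, mdm_decisions))
    return 100.0 * matches / len(cdss_recs)

# Hypothetical example: 4 of 5 recommendations agree -> 80% concordance
cdss = ["chemo", "surgery", "radiation", "chemo", "surgery"]
mdm  = ["chemo", "surgery", "radiation", "surgery", "surgery"]
print(f"Concordance rate: {concordance_rate(cdss, mdm):.1f}%")
```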