The Lessons of 9/11: A Failure of Intelligence, Not Immigration Law
In the hours following the deadly terrorist attacks of September 11, 2001, the United States government took the extraordinary step of sealing U.S. borders to traffic and trade by grounding all aircraft flying into or out of the country and imposing a lock-down on the networks of transportation and commerce that are the lifeblood of our economy and society. Given the uncertainty over what might happen next, these emergency procedures were a necessary and appropriate short-term response to the attacks. In the long run, however, a siege mentality and the construction of a fortress America are ineffective and unrealistic responses to the dangers we face. If we are to succeed in reducing our vulnerability to further terrorist attacks, we must focus our attention and resources on the gaps in intelligence gathering and information sharing that allowed nineteen terrorists to enter the United States. National security is most effectively enhanced by improving the mechanisms for identifying actual terrorists, not by implementing harsher immigration laws or blindly treating all foreigners as potential terrorists. Policies and practices that fail to properly distinguish between terrorists and legitimate foreign travelers are ineffective security tools that waste limited resources, damage the U.S. economy, alienate those groups whose cooperation the U.S. government needs to prevent terrorism, and foster a false sense of security by promoting the illusion that we are reducing the threat of terrorism.
Untersuchungen zu Client-seitigem Cross-Site Scripting (Investigations of Client-Side Cross-Site Scripting)
The Web's functionality has shifted from purely server-side code to rich client-side applications, which allow the user to interact with a site without the need for a full page load. While server-side attacks, such as SQL or command injection, still pose a threat, this change in the Web's model also increases the impact of vulnerabilities that target the client. The most prominent representative of such client-side attacks is Cross-Site Scripting, in which the attacker's goal is to inject code of their choosing into the application, which is subsequently executed by the victims' browsers in the context of the vulnerable site.
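As a minimal illustrative sketch (our own example, not taken from the thesis), the following client-side code is vulnerable to DOM-based Cross-Site Scripting: attacker-controllable data from the URL fragment flows into innerHTML without sanitization.

```typescript
// Hypothetical vulnerable client-side code: the URL fragment (an
// attacker-controllable source) flows unsanitized into innerHTML (a sink).
// Loading https://example.org/#<img src=x onerror=alert(1)> would execute
// the injected handler in the context of the vulnerable site.
function showGreeting(): void {
  const name = decodeURIComponent(location.hash.slice(1)); // tainted source
  const box = document.getElementById("greeting");
  if (box !== null) {
    box.innerHTML = `Hello, ${name}!`; // sink: markup is parsed and executed
  }
}

// A safe variant assigns the data as text, so it is never parsed as HTML:
function showGreetingSafe(): void {
  const name = decodeURIComponent(location.hash.slice(1));
  const box = document.getElementById("greeting");
  if (box !== null) {
    box.textContent = `Hello, ${name}!`; // treated as plain text
  }
}
```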
This thesis provides insights into different aspects of Cross-Site Scripting. First, we show that the concept of password managers, which aim to allow users to choose more secure passwords and, thus, increase the overall security of user accounts, is susceptible to attacks that abuse Cross-Site Scripting flaws. In our analysis, we found that almost all built-in password managers can be leveraged by a Cross-Site Scripting attacker to steal stored credentials. Based on our observations, we present a secure concept for password managers, which does not insert password data into the document, so that it remains inaccessible to injected JavaScript code. We evaluate our approach from a functional and security standpoint and find that our proposal provides additional security benefits while not causing any incompatibilities.
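One way such a design could be realized is sketched below; this is our own assumption-laden illustration of the idea the abstract describes, not the thesis implementation. The page only ever sees an opaque placeholder, and the real credential is substituted into the outgoing request outside the reach of page JavaScript.

```typescript
// Sketch: a password manager that never exposes the real password to the DOM.
const vault = new Map<string, string>(); // placeholder -> real password

function autofill(field: HTMLInputElement, realPassword: string): void {
  const placeholder = crypto.randomUUID(); // opaque nonce visible to the page
  vault.set(placeholder, realPassword);
  field.value = placeholder; // injected JS can only ever read the nonce
}

// Conceptually runs in the browser's network layer, not in page context:
// the placeholder is swapped for the real secret just before the request
// leaves the browser.
function rewriteRequestBody(body: string): string {
  let rewritten = body;
  for (const [placeholder, secret] of vault) {
    rewritten = rewritten.replaceAll(placeholder, secret);
  }
  return rewritten;
}
```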
Our work then focuses on a sub-class of Cross-Site Scripting, namely Client-Side Cross-Site Scripting. We design, implement and execute a large-scale study into the prevalence of this class of flaws. To do so, we implement a taint-aware browsing engine and an exploit generator capable of precisely producing exploits based on our gathered data on suspicious, tainted flows. Our subsequent study of the Alexa top 5,000 domains shows that almost one in ten of these domains carries at least one Client-Side Cross-Site Scripting vulnerability.
We follow up on these flaws by analyzing the gathered flow data in depth in search of the root causes of this class of vulnerability. To do so, we first discuss the complexities inherent to JavaScript and define a set of metrics to measure this complexity. We then classify the vulnerable snippets of code we discovered according to these metrics and present the key insights gained from our analysis. In doing so, we find that the reasons for such flaws are manifold, ranging from simple unawareness on the part of developers to incompatibilities between otherwise safe first- and third-party code.
In addition, we investigate the capabilities of the state of the art in client-side Cross-Site Scripting filtering, the XSS Auditor, finding several conceptual issues that allow an attacker to subvert its protection. We show that the Auditor can be bypassed on over 80% of the vulnerable domains in our data set, highlighting that it is ill-equipped to stop Client-Side Cross-Site Scripting. Motivated by our findings, we present a concept for a filter targeting Client-Side Cross-Site Scripting, combining taint tracking in the browser with taint-aware HTML and JavaScript parsers, allowing us to robustly protect users from such attacks at an acceptable performance cost and with a low false-positive rate.
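To make the filtering idea concrete, here is a deliberately simplified sketch of our own (a real engine tracks taint at much finer granularity inside the browser itself): values derived from attacker-controllable sources carry a taint flag that propagates through string operations, and a taint-aware sink refuses to parse markup that originates from tainted data.

```typescript
// Toy illustration of taint-aware sink checking (not the thesis prototype).
interface TaintedString {
  value: string;
  tainted: boolean;
}

function fromLocationHash(): TaintedString {
  // Everything read from location.hash is attacker-controllable.
  return { value: decodeURIComponent(location.hash.slice(1)), tainted: true };
}

function literal(value: string): TaintedString {
  return { value, tainted: false };
}

function concat(a: TaintedString, b: TaintedString): TaintedString {
  // Taint propagates through string operations.
  return { value: a.value + b.value, tainted: a.tainted || b.tainted };
}

// Taint-aware sink: reject tainted data that would create new HTML elements.
function safeSetInnerHTML(el: HTMLElement, s: TaintedString): void {
  if (s.tainted && /<\s*\w+/.test(s.value)) {
    throw new Error("Blocked: tainted markup reached an HTML sink");
  }
  el.innerHTML = s.value;
}
```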
Multisecond ligand dissociation dynamics from atomistic simulations
Coarse-graining of fully atomistic molecular dynamics simulations is a long-standing goal in order to allow the description of processes occurring on biologically relevant timescales. For example, the prediction of pathways, rates and rate-limiting steps in protein-ligand unbinding is crucial for modern drug discovery. To achieve the enhanced sampling, we first perform dissipation-corrected targeted molecular dynamics simulations, which yield free energy and friction profiles of the molecular process under consideration. In a second step, we use these fields to perform temperature-boosted Langevin simulations which account for the desired molecular kinetics occurring on multisecond timescales and beyond. Adopting the dissociation of solvated sodium chloride as well as trypsin-benzamidine and Hsp90-inhibitor protein-ligand complexes as test problems, we are able to reproduce rates from molecular dynamics simulation and experiments within a factor of 2-20, and dissociation constants within a factor of 1-4. Analysis of the friction profiles reveals that binding and unbinding dynamics are mediated by changes of the surrounding hydration shells in all investigated systems.
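Read as a formula, the second step amounts to integrating a one-dimensional Langevin equation along the reaction coordinate, driven by the free energy and friction profiles obtained in the first step. The form below is a standard textbook assumption on our part, not an equation quoted from the paper:

```latex
% One-dimensional Langevin equation along a reaction coordinate x, with
% free energy G(x), position-dependent friction \Gamma(x), and Gaussian
% white noise \xi(t) satisfying the fluctuation-dissipation theorem.
\begin{align}
  m\,\ddot{x} &= -\frac{\mathrm{d}G(x)}{\mathrm{d}x} - \Gamma(x)\,\dot{x} + \xi(t),\\
  \langle \xi(t)\,\xi(t') \rangle &= 2\,k_{\mathrm{B}}\,T\,\Gamma(x)\,\delta(t-t').
\end{align}
```

On this reading, raising the simulation temperature accelerates barrier crossings, and the boosted rates can then be extrapolated back to the temperature of interest; that is what makes multisecond kinetics computationally accessible.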
Comment: This unedited earlier version of the manuscript may be downloaded for personal use only. The final manuscript was published in Nature Communications 11, 2918 (2020) as an open-access publication and is available at https://www.nature.com/articles/s41467-020-16655-
Kizzle: A Signature Compiler for Detecting Exploit Kits
In recent years, the drive-by malware space has undergone significant consolidation. Today, the most common source of drive-by downloads is so-called exploit kits (EKs). This paper presents Kizzle, the first prevention technique specifically designed for finding exploit kits.
Our analysis shows that while the JavaScript delivered by kits varies greatly, the unpacked code varies much less, due to the kit authors' code reuse between versions. Ironically, this well-regarded software engineering practice allows us to build a scalable and precise detector that is able to quickly respond to superficial but frequent changes in EKs.
Kizzle is able to generate anti-virus signatures for detecting EKs, which compare favorably to manually created ones. Kizzle is highly responsive and can generate new signatures within hours. Our experiments show that Kizzle produces high-accuracy signatures. When evaluated over a four-week period, false-positive rates for Kizzle are under 0.03%, while the false-negative rates are under 5%.
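To give a flavor of why reused unpacked code admits automatic signatures, here is a deliberately simplified sketch of our own (not Kizzle's published algorithm): tokenize each unpacked sample in a cluster and keep the longest token run shared by all of them as a candidate signature.

```typescript
// Toy signature extraction: reused code across kit versions yields long
// token runs common to all samples, which can serve as a signature.
function tokenize(code: string): string[] {
  return code.match(/\w+|[^\s\w]/g) ?? [];
}

function hasRun(tokens: string[], run: string[]): boolean {
  outer: for (let i = 0; i + run.length <= tokens.length; i++) {
    for (let j = 0; j < run.length; j++) {
      if (tokens[i + j] !== run[j]) continue outer;
    }
    return true;
  }
  return false;
}

// Longest token run of the first sample that appears in every other sample.
function extractSignature(samples: string[]): string[] {
  if (samples.length === 0) return [];
  const [first, ...rest] = samples.map(tokenize);
  let best: string[] = [];
  for (let start = 0; start < first.length; start++) {
    for (let len = best.length + 1; start + len <= first.length; len++) {
      const run = first.slice(start, start + len);
      if (rest.every((t) => hasRun(t, run))) best = run;
      else break; // a longer run at this start contains the failing prefix
    }
  }
  return best;
}
```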
Long term cognitive outcomes of early term (37-38 weeks) and late preterm (34-36 weeks) births: a systematic review
Background: There is a paucity of evidence regarding long-term outcomes of late preterm (34-36 weeks) and early term (37-38 weeks) delivery. The objective of this systematic review was to assess long-term cognitive outcomes of children born at these gestations. Methods: Four electronic databases (Medline, Embase, clinicaltrials.gov and PsycINFO) were searched. The last search was run on 5th August 2016. Studies were included if they reported gestational age, IQ measure and the ages assessed. The protocol was registered with the International Prospective Register of Systematic Reviews (PROSPERO record CRD42015015472). Two independent reviewers assessed the studies. Data were abstracted and critical appraisal of eligible papers was performed. Results: Of 11,905 potential articles, seven studies reporting on 41,344 children were included. For early term births, four studies (n = 35,711) consistently showed an increase in cognitive scores for infants born at full term (39-41 weeks) compared to those born at early term (37-38 weeks), with increases for each week of term (a difference of around 3 IQ points between 37 and 40 weeks), despite differences in age of testing and method of IQ/cognitive testing. Four studies (n = 5,644) reporting childhood cognitive outcomes of late preterm births (34-36 weeks) also differed in study design (cohort and case control), age of testing and method of IQ testing, and found no differences in outcomes between late preterm and term births, although risk of bias was high in the included studies. Conclusion: Children born at 39-41 weeks have higher cognitive outcome scores than those born at early term (37-38 weeks). This should be considered when discussing timing of delivery. For children born late preterm, the data are scarce and, when compared to full term (37-42 weeks), showed no difference in IQ scores.
Hemodialysis and Peritoneal Dialysis in Germany from a Health Economic View: A Propensity Score Matched Analysis.
BACKGROUND
Hemodialysis (HD) and peritoneal dialysis (PD) are deemed medically equivalent for the therapy of end-stage renal disease (ESRD) and are reimbursed by the German statutory health insurance (SHI). However, although the home dialysis modality PD is associated with higher patient autonomy than HD, PD uptake in Germany is low for unknown reasons. Hence, we compared HD with PD regarding health economic outcomes, particularly costs, as potentially relevant factors for the predominance of HD.
METHODS
Claims data from two German health insurance funds were analysed in a retrospective cohort study regarding the prevalence of HD and PD in 2013-2016. Propensity score matching created comparable HD and PD groups (n = 436 each). Direct annual health care costs were compared. A sensitivity analysis included a comparison of different matching techniques and consideration of transportation costs. Additionally, hospitalisation and survival were investigated using Poisson regression and Kaplan-Meier curves.
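As a sketch of the matching step (our own illustration, assuming propensity scores have already been estimated, e.g. via logistic regression on baseline covariates), 1:1 nearest-neighbour matching without replacement within a caliper could look like this:

```typescript
// Toy 1:1 nearest-neighbour propensity score matching without replacement.
interface Patient {
  id: string;
  score: number; // estimated propensity of receiving PD rather than HD
}

function matchPairs(
  pd: Patient[],
  hd: Patient[],
  caliper: number,
): Array<[Patient, Patient]> {
  const pairs: Array<[Patient, Patient]> = [];
  const available = [...hd];
  for (const treated of pd) {
    let bestIdx = -1;
    let bestDist = Infinity;
    available.forEach((control, idx) => {
      const dist = Math.abs(control.score - treated.score);
      if (dist < bestDist && dist <= caliper) {
        bestDist = dist;
        bestIdx = idx;
      }
    });
    if (bestIdx >= 0) {
      pairs.push([treated, available[bestIdx]]);
      available.splice(bestIdx, 1); // matching without replacement
    }
  }
  return pairs;
}
```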
RESULTS
Total direct annual average costs were higher for HD (€47,501) than for PD (€46,235), but not significantly (p = 0.557). The additional consideration of transportation costs revealed an annual cost advantage of €7000 for PD. HD and PD differed non-significantly in terms of hospitalisation and survival rates (p = 0.610/p = 0.207).
CONCLUSIONS
PD has a slight, non-significant cost advantage over HD, especially when transportation costs are considered.
Nanoscale Imaging Reveals a Tetraspanin-CD9 Coordinated Elevation of Endothelial ICAM-1 Clusters
Endothelial barriers have a central role in inflammation as they allow or deny the passage of leukocytes from the vasculature into the tissue. To bind leukocytes, endothelial cells form adhesive clusters containing tetraspanins and ICAM-1, so-called endothelial adhesive platforms (EAPs). Upon leukocyte binding, EAPs evolve into docking structures that emanate from the endothelial surface while engulfing the leukocyte. Here, we show that TNF-α is sufficient to induce apical protrusions in the absence of leukocytes. Using advanced quantitation of atomic force microscopy (AFM) recordings, we found these structures to protrude 160 ± 80 nm above the endothelial surface. Confocal immunofluorescence microscopy showed them to be positive for ICAM-1, JAM-A, tetraspanin CD9 and F-actin. Microvilli formation was inhibited in the absence of CD9. Our findings indicate that stimulation with TNF-α induces nanoscale changes in endothelial surface architecture and that, via a tetraspanin CD9-dependent mechanism, the EAPs rise above the surface to facilitate leukocyte capture.
Manipulation of Fgf and Bmp signaling in teleost fishes suggests potential pathways for the evolutionary origin of multicuspid teeth
Teeth with two or more cusps have arisen independently from an ancestral unicuspid condition in a variety of vertebrate lineages, including sharks, teleost fishes, amphibians, lizards, and mammals. One potential explanation for the repeated origins of multicuspid teeth is the existence of multiple adaptive pathways leading to them, as suggested by their different uses in these lineages. Another is that the addition of cusps required only minor changes in genetic pathways regulating tooth development. Here we provide support for the latter hypothesis by demonstrating that manipulation of the levels of Fibroblast growth factor (Fgf) or Bone morphogenetic protein (Bmp) signaling produces bicuspid teeth in the zebrafish (Danio rerio), a species lacking multicuspid teeth in its ancestry. The generality of these results for teleosts is suggested by the conversion of unicuspid pharyngeal teeth into bicuspid teeth by similar manipulations of the Mexican tetra (Astyanax mexicanus). That these manipulations also produced supernumerary teeth in both species supports previous suggestions of similarities in the molecular control of tooth and cusp number. We conclude that despite their apparent complexity, the evolutionary origin of multicuspid teeth is positively constrained, likely requiring only slight modifications of a pre-existing mechanism for patterning the number and spacing of individual teeth.