Free and Open Source Software in Municipal Procurement: The Challenges and Benefits of Cooperation
The use of free and open source software by municipal governments is the exception rather than the rule. This is due to a variety of factors, including the failure of many municipal procurement policies to take into account the benefits of free software, free software vendors' second-to-market status, and a lack of established free and open source software vendors in niche markets. With feasible policy shifts to improve city operations, including building upon open standards and engaging with free software communities, municipalities may be able to better leverage free and open source software and fully realize the advantages that stem from open software development.
PatchRNN: A Deep Learning-Based System for Security Patch Identification
With the increasing usage of open-source software (OSS) components, vulnerabilities embedded within them are propagated to a huge number of underlying applications. In practice, the timely application of security patches in downstream software is challenging, mainly because such patches do not explicitly indicate their security impact in the documentation, which makes them difficult for software maintainers and users to recognize. However, attackers can still identify these "secret" security patches by analyzing the source code, and can then generate corresponding exploits to compromise not only unpatched versions of the current software, but also other similar software packages that may contain the same vulnerability due to code cloning or similar design/implementation logic. Therefore, it is critical to identify these secret security patches to enable timely fixes. To this end, we propose a deep learning-based defense system called PatchRNN to automatically identify secret security patches in OSS. Besides considering descriptive keywords in the commit message (i.e., at the text level), we leverage both syntactic and semantic features at the source-code level. To evaluate the performance of our system, we apply it to a large-scale real-world patch dataset and conduct a case study on a popular open-source web server, NGINX. Experimental results show that PatchRNN can successfully detect secret security patches with a low false positive rate.
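The two-level feature design described above (commit-message text plus source-code features) lends itself to a twin-branch neural classifier. The following is a minimal sketch of that idea, not the authors' PatchRNN implementation; the module structure, dimensions, and tokenisation are assumptions made purely for illustration.

```python
# Minimal sketch of a two-branch recurrent classifier for "secret" security
# patches. This is NOT the PatchRNN implementation; vocabulary size, layer
# dimensions, and the tokenisation scheme are placeholder assumptions.
import torch
import torch.nn as nn

class TwinBranchPatchClassifier(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # One recurrent encoder for commit-message tokens (text level) ...
        self.msg_rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # ... and one for tokenised code changes (source-code level).
        self.code_rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, 2)  # security patch vs. not

    def forward(self, msg_tokens, code_tokens):
        _, msg_h = self.msg_rnn(self.embed(msg_tokens))
        _, code_h = self.code_rnn(self.embed(code_tokens))
        features = torch.cat([msg_h[-1], code_h[-1]], dim=-1)
        return self.classifier(features)

# Usage: logits = model(msg_batch, code_batch); a softmax over the two logits
# gives the probability that a commit is an undocumented security patch.
```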
Talos: Neutralizing Vulnerabilities with Security Workarounds for Rapid Response
Considerable delays often exist between the discovery of a vulnerability and the issue of a patch. One way to mitigate this window of vulnerability is to use a configuration workaround, which prevents the vulnerable code from being executed at the cost of some lost functionality, but only if one is available. Since program configurations are not specifically designed to mitigate software vulnerabilities, we find that they cover only 25.2% of vulnerabilities.

To reduce exposure during patch delays and address the limitations of configuration workarounds, we propose Security Workarounds for Rapid Response (SWRRs), which are designed to neutralize security vulnerabilities in a timely, secure, and unobtrusive manner. Like configuration workarounds, SWRRs neutralize vulnerabilities by preventing vulnerable code from being executed at the cost of some lost functionality. The key difference is that SWRRs use existing error-handling code within programs, which enables them to be mechanically inserted with minimal knowledge of the program and minimal developer effort. This allows SWRRs to achieve high coverage while still being fast and easy to deploy.

We have designed and implemented Talos, a system that mechanically instruments SWRRs into a given program, and evaluate it on five popular Linux server programs. We run exploits against 11 real-world software vulnerabilities and show that SWRRs neutralize the vulnerabilities in all cases. Quantitative measurements on 320 SWRRs indicate that SWRRs instrumented by Talos can neutralize 75.1% of all potential vulnerabilities and incur a loss of functionality similar to configuration workarounds in 71.3% of those cases. Our overall conclusion is that automatically generated SWRRs can safely mitigate 2.1x more vulnerabilities than configuration workarounds, while only incurring a comparable loss of functionality.

Comment: Published in Proceedings of the 37th IEEE Symposium on Security and Privacy (Oakland 2016).
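The mechanism behind SWRRs, diverting execution into a function's existing error-handling path behind a runtime switch, can be illustrated compactly. Talos instruments C programs at the source level; the Python analogue below is only a sketch of the concept under that reading, and the function and option names are invented for illustration.

```python
# Illustrative analogue of a Security Workaround for Rapid Response (SWRR):
# when the workaround is enabled, the vulnerable routine is never executed and
# the caller instead sees the result that the existing error path would
# produce. The names below are hypothetical, not taken from Talos.
import os
import functools

def swrr(error_result, option_name):
    """Gate a function behind a runtime option, reusing its error path."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if os.environ.get(option_name) == "1":
                # Workaround active: skip the vulnerable code entirely and
                # return the value callers already handle as an error.
                return error_result
            return func(*args, **kwargs)
        return wrapper
    return decorator

@swrr(error_result=None, option_name="SWRR_DISABLE_PARSE_RANGE_HEADER")
def parse_range_header(header_value):
    # Hypothetical vulnerable routine; callers already treat None as failure.
    ...
```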
OSS architecture for mixed-criticality systems – a dual view from a software and system engineering perspective
Computer-based automation in industrial appliances has led to a growing number of logically dependent, but physically separated, embedded control units per appliance. Many of these components are safety-critical systems and require adherence to safety standards, which is at odds with the relentless demand for features in those appliances. Features lead to a growing number of control units per appliance and to an increasing complexity of the overall software stack, which is unfavourable for safety certification. Modern CPUs provide means to revise the traditional separation-of-concerns design primitive: the consolidation of systems, which yields new engineering challenges that concern the entire software and system stack.
Multi-core CPUs favour the economic consolidation of formerly separated systems onto a single, efficient hardware unit. Nonetheless, the system architecture must provide means to guarantee freedom from interference between domains of different criticality. System consolidation demands architectural and engineering strategies to fulfil requirements (e.g., real-time or certifiability criteria) in safety-critical environments.
In parallel, there is an ongoing trend to substitute ordinary proprietary base platform software components with mature OSS variants for economic and engineering reasons. However, there are fundamental differences in the processual properties of OSS and proprietary development processes. Using OSS in safety-critical systems therefore requires development-process assessment techniques, grounded in empirical software engineering methods, to build an evidence-based foundation for certification efforts.
In this thesis, I approach the problem from both sides: the software engineering and the system engineering perspective. In the first part of this thesis, I focus on the assessment of OSS components: I develop software engineering techniques that allow quantifying characteristics of distributed OSS development processes. I show that ex-post analyses of software development processes can serve as a foundation for certification efforts, as required for safety-critical systems.
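To make the notion of quantifying a distributed OSS development process concrete, the sketch below derives a few simple ex-post indicators from a repository's git history (number of authors, concentration of authorship, share of commits carrying review trailers). These indicators are illustrative placeholders, not the metrics developed in the thesis.

```python
# Sketch: ex-post quantification of simple development-process characteristics
# from a git repository. The chosen indicators are illustrative placeholders,
# not the assessment techniques developed in the thesis.
import subprocess
from collections import Counter

def process_indicators(repo_path):
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=%ae|%B%x00"],
        capture_output=True, text=True, check=True,
    ).stdout
    commits = [c for c in log.split("\x00") if c.strip()]
    if not commits:
        return {}
    authors = Counter(c.split("|", 1)[0].strip() for c in commits)
    reviewed = sum("Reviewed-by:" in c for c in commits)
    return {
        "commits": len(commits),
        "distinct_authors": len(authors),
        "top_author_share": max(authors.values()) / len(commits),
        "reviewed_share": reviewed / len(commits),
    }

# Example: process_indicators("/path/to/oss/repo")
```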
In the second part of this thesis, I present a system architecture based on OSS components that allows for the consolidation of mixed-criticality systems on a single platform. To this end, I exploit the virtualisation extensions of modern CPUs to strictly isolate domains of different criticality. The proposed architecture is designed to eradicate any remaining hypervisor activity at run time in order to preserve the real-time capabilities of the hardware by design, while guaranteeing strict isolation across domains.
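The architecture presupposes that the target CPUs expose hardware virtualisation extensions (Intel VT-x or AMD-V) that a static-partitioning hypervisor can use to isolate criticality domains. A quick probe for that prerequisite on a Linux host, purely illustrative and not part of the proposed architecture, might look like this:

```python
# Quick probe (Linux, x86): does the CPU advertise hardware virtualisation
# extensions (Intel VT-x -> "vmx", AMD-V -> "svm")? These extensions are the
# prerequisite for hypervisor-enforced isolation of criticality domains as
# described above. The probe is illustrative only.
def has_virtualisation_extensions(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return bool(flags & {"vmx", "svm"})
    return False

if __name__ == "__main__":
    print("virtualisation extensions available:",
          has_virtualisation_extensions())
```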
Digital Architecture as Crime Control
This paper explains how theories of realspace architecture inform the prevention of computer crime. Despite the prevalence of the metaphor, architects in realspace and cyberspace have not talked to one another. There is a dearth of literature about digital architecture and crime altogether, and the realspace architectural literature on crime prevention is often far too soft for many software engineers. This paper will suggest the broad brushstrokes of potential design solutions to cybercrime, and in the course of so doing, will pose severe criticisms of the White House's recent proposals on cybersecurity.
The paper begins by introducing four concepts of realspace crime prevention through architecture. Design should: (1) create opportunities for natural surveillance, meaning visibility and susceptibility to monitoring by residents, neighbors, and bystanders; (2) instill a sense of territoriality so that residents develop proprietary attitudes and outsiders feel deterred from entering a private space; (3) build communities and avoid social isolation; and (4) protect targets of crime. There are digital analogues to each goal. Natural-surveillance principles suggest new virtues of open-source platforms, such as Linux, and territoriality outlines a strong case for moving away from digital anonymity towards pseudonymity. The goal of building communities will similarly expose some new advantages for the original, and now eroding, end-to-end design of the Internet. An understanding of architecture and target prevention will illuminate why firewalls at end points will more effectively guarantee security than will attempts to bundle security into the architecture of the Net. And, in total, these architectural lessons will help us chart an alternative course to the federal government's tepid approach to computer crime. By leaving the bulk of crime prevention to market forces, the government will encourage private barricades to develop, the equivalent of digital gated communities, with terrible consequences for the Net in general and interconnectivity in particular.
Between Hype and Understatement: Reassessing Cyber Risks as a Security Strategy
Most of the actions that fall under the trilogy of cyber crime, terrorism, and war exploit pre-existing weaknesses in the underlying technology. Because the vulnerabilities that exist in the network are not themselves illegal, they tend to be overlooked in the debate on cyber security. A UK report on the cost of cyber crime illustrates this approach. Its authors chose to exclude from their analysis the costs incurred in anticipation of cyber crime, such as insurance costs and the costs of purchasing anti-virus software, on the basis that "these are likely to be factored into normal day-to-day expenditures for the Government, businesses, and individuals." This article contends that if these costs had been quantified and integrated into the cost of cyber crime, the analysis would have revealed that what matters is not so much cyber crime, but the fertile terrain of vulnerabilities that unleash a range of possibilities to whomever wishes to exploit them. By downplaying the vulnerabilities, the threats represented by cyber war, cyber terrorism, and cyber crime are conversely inflated. Therefore, reassessing risk as a strategy for security in cyberspace must include acknowledgment of understated vulnerabilities, as well as better distributed knowledge about the nature and character of the overhyped threats of cyber crime, cyber terrorism, and cyber war.