    Web-Based Roadway Geometry Design Software for Transportation Education

    Traditionally, students use pencil and ruler to lay out lines and curves over contour maps for roadway geometry design. The design process requires numerous calculations of stopping sight distance, minimum turning radius, and curve alignment to ensure safety, minimize economic and environmental impacts, and reduce construction costs. Students usually perform these iterative computations manually to meet the given design criteria and environmental constraints. This traditional process is cumbersome and time consuming, and it keeps students from taking a broader perspective on the overall roadway design process. An Internet-based roadway design tool (ROAD: Roadway Online Application for Design) was developed to enhance the learning experience of transportation engineering students. The tool allows students to design efficiently and to modify a roadway design easily under given economic and environmental parameters. At final design, the software can generate a 3D roadway geometry model that lets students immerse themselves in the driver's seat and drive through the designed roadway at the maximum design speed. The tool was deployed and tested in an undergraduate class in the Department of Civil Engineering at the University of Minnesota in spring 2006. Feedback collected from instructors and students will guide further enhancements of the software.
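    The kind of computation ROAD automates can be illustrated with the standard AASHTO-style formulas for stopping sight distance and minimum horizontal curve radius. Below is a minimal Python sketch in metric units; the parameter defaults are common textbook assumptions, not values taken from the ROAD software.

    # Illustrative roadway-design checks (metric); all defaults are assumptions.
    def stopping_sight_distance(v_kmh, t_reaction=2.5, decel=3.4):
        """SSD (m) = perception-reaction distance + braking distance.
        v_kmh: design speed (km/h); t_reaction: reaction time (s);
        decel: comfortable deceleration (m/s^2)."""
        reaction = 0.278 * v_kmh * t_reaction          # 0.278 converts km/h to m/s
        braking = v_kmh ** 2 / (254 * (decel / 9.81))  # deceleration as fraction of g
        return reaction + braking

    def min_curve_radius(v_kmh, e=0.06, f=0.12):
        """Minimum horizontal curve radius (m) for superelevation rate e
        and side-friction factor f."""
        return v_kmh ** 2 / (127 * (e + f))

    for v in (60, 80, 100):  # candidate design speeds, km/h
        print(f"{v} km/h: SSD ~ {stopping_sight_distance(v):.0f} m, "
              f"R_min ~ {min_curve_radius(v):.0f} m")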

    An Analysis of How Many Undiscovered Vulnerabilities Remain in Information Systems

    Vulnerability management strategy, from both organizational and public policy perspectives, hinges on an understanding of the supply of undiscovered vulnerabilities. If the number of undiscovered vulnerabilities is small enough, a reasonable investment strategy is to focus on finding and removing the remaining ones. If the number is, and will continue to be, large, a better strategy is to focus on quick patch dissemination and on engineering resilient systems. This paper examines the paradigm that the number of undiscovered vulnerabilities is manageably small through the lens of mathematical concepts from the theory of computing. From this perspective, we find little support for the paradigm of limited undiscovered vulnerabilities. We then briefly argue that these theory-based conclusions apply to practical computers in use today. We find no reason to believe that the supply of undiscovered vulnerabilities is anything other than effectively unlimited in practice, and we examine the possible economic impacts should this be the case. Based on our analysis, we recommend that vulnerability management strategy adopt an approach favoring quick patch dissemination and resilient system engineering, while continuing good software engineering practices to reduce (but never eliminate) vulnerabilities in information systems.
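    The theory-of-computing angle can be illustrated with the classical reduction from the halting problem (a sketch of the standard construction, not the paper's own proof): a sound and complete detector for "does this program reach a vulnerable operation?" cannot exist.

    # Sketch of the standard undecidability argument; `detects_vulnerability`
    # is hypothetical -- the reduction shows why it cannot exist.
    def detects_vulnerability(program_src: str) -> bool:
        raise NotImplementedError("no sound and complete analyzer can exist")

    def build_gadget(program_src: str, input_data: str) -> str:
        """Source of a program whose only vulnerable sink is reachable
        exactly when `program_src` halts on `input_data`."""
        return (
            f"simulate({program_src!r}, {input_data!r})  # loops iff P loops\n"
            "vulnerable_sink()  # reached iff the simulation above halts\n"
        )

    def halts(program_src: str, input_data: str) -> bool:
        # If the analyzer existed, this one-liner would decide the halting
        # problem -- a contradiction.
        return detects_vulnerability(build_gadget(program_src, input_data))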

    Optimal scheduling of reliability development activities

    Probabilistic Safety Assessment and Management is a collection of papers presented at the PSAM 7 - ESREL '04 Conference in June 2004. The joint conference provided a forum for presenting the latest developments in the methodology and application of probabilistic and reliability methods across industries. Innovations in methodology as well as practical applications in probabilistic safety assessment and reliability analysis are presented in this six-volume set. The aim of these applications is to optimise technological systems and processes from the perspective of risk-informed safety management while also taking economic and environmental aspects into account. The joint conference fostered communication, the sharing of experience, and the integration of approaches, not only among the various industries but on a truly global basis, by bringing together leading experts from all over the world. Over the last four decades, researchers have worked continuously to give modern societies a systematic, self-consistent, and coherent framework for making decisions on at least one class of risks: those stemming from modern technological applications. Most of the effort has gone into developing methods and techniques for assessing the dependability of technological systems and for estimating levels of safety and the associated risks. A wide spectrum of engineering, natural, and economic sciences has been involved in this assessment effort. The developments have moved beyond research endeavours; they have been applied in real socio-technical environments and have become established, while modern technology continues to present new challenges and raise new questions. Consequently, Probabilistic Safety Assessment and Management covers both well-established practices and open issues in the fields addressed by the conference, identifying areas where maturity has been reached and those where more development is needed. The papers reflect a wide variety of disciplines, such as principles and theory of reliability and risk analysis, systems modelling and simulation, consequence assessment, human and organisational factors, structural reliability methods, software reliability and safety, insights and lessons from risk studies, and management/decision making. A diverse range of application areas is represented, including aviation and space, chemical processing, civil engineering, energy, environment, information technology, legal, manufacturing, health care, defence, transportation, and waste management.
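    As one concrete example of the assessment methods such volumes survey, a fault tree reduces a top-event probability to Boolean combinations of basic-event probabilities. The sketch below assumes independent basic events and made-up failure rates; it is illustrative and not drawn from any conference paper.

    # Minimal fault-tree evaluation with independent basic events.
    def and_gate(*probs):
        """All inputs must fail: P = product of P_i."""
        p = 1.0
        for q in probs:
            p *= q
        return p

    def or_gate(*probs):
        """Any input failing suffices: P = 1 - product of (1 - P_i)."""
        p = 1.0
        for q in probs:
            p *= 1.0 - q
        return 1.0 - p

    # Hypothetical system: pump fails OR both redundant valves fail.
    p_pump, p_valve = 1e-3, 5e-3
    p_top = or_gate(p_pump, and_gate(p_valve, p_valve))
    print(f"top-event probability ~ {p_top:.3e}")  # ~ 1.025e-03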

    Towards the Development of a Framework for Socially Responsible Software by Analyzing Social Media Big Data on Cloud Through Ontological Engineering

    A socially responsible internet is the need of the hour, given its huge potential and its role in educating and transforming society. Social computing is emerging as an important area in the development of the next-generation web. With the proliferation of social networking applications, vast amounts of data are available on the cloud, which may be analyzed to gain useful insight into the behavioral and linguistic patterns of different cultural and socio-economic groups, further classified by gender, age, and similar attributes. The idea is to arrive at an appropriate framework for socially responsible software artifacts. These artifacts will monitor online social network data and analyze it, from the perspective of socially responsible behavior, using ontological engineering concepts. Identification of socially responsible agents is one such example, though based on a different approach. Further examples may be drawn from the literature on microblog analytics, the social semantic web, upper ontologies for the social web, and social-network-sourced big data analytics. The present work proposes to focus on analysis and monitoring of socially responsible behavior in social media big data and to develop an upper-level ontology as the framework and tool for such analytics.
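    To make the ontological-engineering idea concrete, the sketch below expresses a few upper-level classes and one annotated post as RDF triples using Python's rdflib. The namespace and all class and property names are hypothetical illustrations, not the ontology the authors propose.

    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    SRS = Namespace("http://example.org/srs#")  # hypothetical namespace
    g = Graph()
    g.bind("srs", SRS)

    # Upper-level classes for socially responsible behavior analysis.
    for cls in ("SocialAgent", "Post", "BehaviorIndicator", "DemographicGroup"):
        g.add((SRS[cls], RDF.type, RDFS.Class))
    g.add((SRS.ResponsiblePost, RDFS.subClassOf, SRS.Post))

    # One monitored post, its author, and a behavioral annotation.
    g.add((SRS.post42, RDF.type, SRS.Post))
    g.add((SRS.post42, SRS.authoredBy, SRS.agent7))
    g.add((SRS.post42, SRS.hasIndicator, SRS.CivilLanguage))
    g.add((SRS.post42, RDFS.label, Literal("sample monitored post")))

    print(g.serialize(format="turtle"))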

    OSS architecture for mixed-criticality systems – a dual view from a software and system engineering perspective

    Computer-based automation in industrial appliances has led to a growing number of logically dependent, but physically separated, embedded control units per appliance. Many of these components are safety-critical systems and must adhere to safety standards, which conflicts with the relentless demand for features in those appliances. Features lead to a growing number of control units per appliance and to increasing complexity of the overall software stack, both unfavourable for safety certification. Modern CPUs provide means to revisit the traditional separation-of-concerns design primitive through system consolidation, which yields new engineering challenges concerning the entire software and system stack. Multi-core CPUs favour the economic consolidation of formerly separated systems onto a single efficient hardware unit. Nonetheless, the system architecture must guarantee freedom from interference between domains of different criticality. System consolidation demands architectural and engineering strategies to fulfil requirements (e.g., real-time or certifiability criteria) in safety-critical environments. In parallel, there is an ongoing trend to substitute mature OSS variants for ordinary proprietary base-platform software components, for economic and engineering reasons. There are fundamental differences in the processual properties of OSS and proprietary development processes. Using OSS in safety-critical systems therefore requires development-process assessment techniques, grounded in empirical software engineering methods, that build an evidence-based foundation for certification efforts. In this thesis, I approach the problem from both sides: the software engineering and the system engineering perspective. In the first part, I focus on the assessment of OSS components: I develop software engineering techniques that quantify characteristics of distributed OSS development processes, and I show that ex-post analyses of software development processes can serve as a foundation for the certification efforts required for safety-critical systems. In the second part, I present a system architecture based on OSS components that allows consolidation of mixed-criticality systems on a single platform. To this end, I exploit the virtualisation extensions of modern CPUs to strictly isolate domains of different criticality. The proposed architecture eliminates any remaining hypervisor activity in order to preserve the real-time capabilities of the hardware by design, while guaranteeing strict isolation across domains.
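    A minimal example of the kind of ex-post process analysis described in the first part is quantifying how concentrated authorship is in a project's git history. The metric and invocation below are illustrative assumptions, not the thesis's actual analysis tooling.

    import subprocess
    from collections import Counter

    def author_concentration(repo_path="."):
        """Share of commits contributed by the busiest 10% of authors."""
        emails = subprocess.run(
            ["git", "-C", repo_path, "log", "--pretty=%ae"],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()
        counts = Counter(emails)
        top_n = max(1, len(counts) // 10)
        top = sum(c for _, c in counts.most_common(top_n))
        return top / sum(counts.values())

    print(f"top-10% authors wrote {author_concentration():.1%} of commits")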

    Multi-perspective requirements engineering for networked business systems: a framework for pattern composition

    How business and software analysts explore, document, and negotiate requirements for enterprise systems is critical to the benefits their organizations will eventually derive. In this paper, we present a framework for the analysis and redesign of networked business systems. It is based on libraries of patterns derived from existing Internet businesses. The framework includes three perspectives: Economic value, Business processes, and Application communication, each of which applies a goal-oriented method to compose patterns. By means of consistency relationships between perspectives, we demonstrate the usefulness of the patterns as a lightweight approach to the exploration of business ideas.
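    One consistency relationship between perspectives can be sketched as a simple cross-model check: every value exchange in the economic-value perspective should be realized by some pattern in the business-process perspective. The class and field names below are hypothetical, not the paper's metamodel.

    from dataclasses import dataclass, field

    @dataclass
    class ValueExchange:        # economic-value perspective
        name: str

    @dataclass
    class ProcessPattern:       # business-process perspective
        name: str
        realizes: set[str] = field(default_factory=set)  # exchange names

    def unrealized_exchanges(exchanges, patterns):
        """Value exchanges no process pattern realizes (inconsistencies)."""
        realized = set().union(*(p.realizes for p in patterns))
        return [e.name for e in exchanges if e.name not in realized]

    exchanges = [ValueExchange("payment"), ValueExchange("delivery")]
    patterns = [ProcessPattern("order-to-cash", realizes={"payment"})]
    print(unrealized_exchanges(exchanges, patterns))  # ['delivery']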

    Requirements: The Key to Sustainability

    Software's critical role in society demands a paradigm shift in the software engineering mindset, and this shift begins with requirements engineering. This article is part of a special issue on the Future of Software Engineering.