
    Revision of Security Risk-oriented Patterns for Distributed Systems

    Security risk management is an important part of software development. Given that the majority of modern organizations rely heavily on information systems, security plays a large part in ensuring the smooth operation of business processes. Many people rely on e-services offered by banks and medical establishments. Inadequate security measures in information systems can have unwanted effects both on an organization's reputation and on people's lives.
    Security concerns usually need to be addressed throughout the development and lifetime of a software system. The literature reports, however, that security is often considered only during the implementation and maintenance stages of software development. Since security risk mitigation usually results in changes to an information system's specification, security analysis is best done at an early phase of the development process. This allows inadequate system designs to be excluded early and helps prevent the need for fundamental, expensive design changes later in the development process.
    In this thesis, we target the secure system development problem by suggesting the application of security risk-oriented patterns. These patterns help find security risk occurrences in business processes and present mitigations for these risks. They provide business analysts with means to elicit and introduce security requirements to business processes, while reducing the effort needed for risk analysis. We align the security risk-oriented patterns with threat patterns for distributed systems, which allows us to refine the collection of existing patterns and to introduce additional patterns that mitigate security risks in processes of distributed systems.
    The applicability of these security risk-oriented patterns is validated on business processes from an aviation turnaround system. The validation results show that security risk-oriented patterns can be used to mitigate security risks in distributed systems.
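
    As a rough illustration of the idea, the sketch below shows one way a security risk-oriented pattern could be represented and matched against the activities of a business process model. The data structures, field names, and example pattern are hypothetical stand-ins; the thesis defines its patterns on business process models, not in code.

```python
# Hypothetical sketch: a risk-oriented pattern pairs a threat with a
# mitigation and names the kind of activity in which the risk occurs.
from dataclasses import dataclass

@dataclass
class RiskPattern:
    name: str
    threat: str                 # the security risk the pattern describes
    vulnerable_activity: str    # kind of activity the risk occurs in
    mitigation: str             # countermeasure the pattern proposes

@dataclass
class Activity:
    label: str
    kind: str                   # e.g. "data-transfer", "authentication"

PATTERNS = [
    RiskPattern(
        name="Unsecured data transmission",
        threat="Interception of data sent between distributed nodes",
        vulnerable_activity="data-transfer",
        mitigation="Encrypt the channel and authenticate both endpoints",
    ),
]

def analyse(process: list[Activity]) -> list[tuple[Activity, RiskPattern]]:
    """Report every (activity, pattern) pair where a known risk applies."""
    return [(a, p) for a in process for p in PATTERNS
            if a.kind == p.vulnerable_activity]

# Illustrative business process fragment (invented for this sketch):
turnaround = [Activity("Send load sheet to ground crew", "data-transfer")]
for activity, pattern in analyse(turnaround):
    print(f"{activity.label}: {pattern.threat} -> {pattern.mitigation}")
```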

    Security of Electrical, Optical and Wireless On-Chip Interconnects: A Survey

    The advancement of manufacturing technologies has enabled the integration of more intellectual property (IP) cores on the same system-on-chip (SoC). A scalable, high-throughput on-chip communication architecture has become a vital component of today's SoCs. Diverse technologies such as electrical, wireless, optical, and hybrid are available for on-chip communication, with different architectures supporting them. Security of the on-chip communication is crucial, because exploiting any vulnerability in it would be a goldmine for an attacker. In this survey, we provide a comprehensive review of threat models, attacks, and countermeasures over diverse on-chip communication technologies as well as sophisticated architectures.

    Replication of non-deterministic objects

    This thesis discusses replication of non-deterministic objects in distributed systems to achieve fault tolerance against crash failures. The objects replicated are the virtual nodes of a distributed application. Replication is viewed as an issue to be dealt with only during the configuration of a distributed application, one that should not affect the development of the application itself. Hence, replication of virtual nodes should be transparent to the application. Like all measures to achieve fault tolerance, replication introduces redundancy into the system. Not surprisingly, the main difficulty is guaranteeing the consistency of all replicas, such that they behave in the same way as if the object were not replicated (replication transparency). This is further complicated when active objects (like virtual nodes) are replicated and these objects can themselves be clients of still further objects in the distributed application.
    The problems of replicating active non-deterministic objects are analyzed in the context of distributed Ada 95 applications. The ISO standard for Ada 95 defines a model for distributed execution based on remote procedure calls (RPC). Virtual nodes in Ada 95 use this as their sole communication paradigm, but they may contain tasks that execute activities concurrently, making the execution potentially non-deterministic due to implicit timing dependencies. Such non-determinism cannot be avoided by choosing deterministic tasking policies. I present two different approaches to maintaining replica consistency despite this non-determinism.
    In the first approach, I consider the run-time support of Ada 95 as a black box (except for the part handling remote communications). This corresponds to a non-deterministic computation model. I show that replication of non-deterministic virtual nodes requires that remote procedure calls be implemented as nested transactions. Unfortunately, the effects of failures are not local to the replicas of a virtual node: when a failure occurs, nested remote calls made to other virtual nodes must be undone. Also, using transactional semantics for RPCs necessitates a compromise regarding transparency: the application must identify global state, for it cannot be determined reliably in an automatic way. Further study reveals that this approach cannot be implemented transparently at all, because the consistency criterion of Ada 95 (linearizability) is much weaker than that of transactions (serializability). Executing remote procedure calls as transactions may thus lead to incompatibilities with the semantics of the programming language. If remotely called subprograms on a replicated virtual node perform partial operations, i.e., entry calls on global protected objects, deadlocks that cannot be broken can occur in certain cases. Such deadlocks do not occur when the virtual node is not replicated. The transactional semantics of RPCs must therefore be exposed to the application.
    The second approach is based on a piecewise deterministic computation model, i.e., the execution of a virtual node is seen as a sequence of deterministic state intervals; whenever a non-deterministic event occurs, a new state interval is started. I study replica organization under this computation model (semi-active replication). In this model, all non-deterministic decisions are made on one distinguished replica (the leader), while all other replicas (the followers) are forced to follow the same sequence of non-deterministic events. I show that it suffices to synchronize the followers with the leader upon each observable event, i.e., when the leader sends a message to some other virtual node; it is not necessary to synchronize upon each and every non-deterministic event, which would incur a prohibitively high overhead. Non-deterministic events occurring on the leader between observable events are logged and sent to the followers just before the leader executes an observable event. Consequently, the followers are guaranteed to reach the same state as the leader, and the effects of failures remain mostly local to the replicas.
    A prototype implementation called RAPIDS (Replicated Ada Partitions In Distributed Systems) serves as a proof of concept for this second approach, demonstrating its feasibility. RAPIDS is an Ada 95 implementation of a replication manager for semi-active replication for the GNAT development system. It is entirely contained within the run-time support and hence largely transparent to the application.
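
    A minimal sketch of the synchronization rule described above, in Python rather than Ada 95: the leader logs each non-deterministic decision and ships the accumulated log to its followers only when an observable event (an outgoing message) is about to occur. The classes and the transport are hypothetical stand-ins, not the RAPIDS design.

```python
# Sketch of semi-active replication: followers replay the leader's logged
# non-deterministic events, synchronized only at observable events.

class Follower:
    def __init__(self):
        self.state = []

    def replay(self, events):
        # Re-apply the leader's decisions to reach the same state interval.
        self.state.extend(events)

class Node:
    def deliver(self, message):
        print("received:", message)

class Leader:
    def __init__(self, followers):
        self.followers = followers
        self.event_log = []      # non-deterministic events since the last sync

    def nondeterministic_event(self, event):
        # E.g. the outcome of a task-scheduling decision: record it, no sync yet.
        self.event_log.append(event)

    def send_message(self, node, message):
        # Observable event: bring the followers up to date *before* the
        # message becomes visible to the rest of the system.
        for follower in self.followers:
            follower.replay(self.event_log)
        self.event_log.clear()
        node.deliver(message)

leader = Leader([Follower(), Follower()])
leader.nondeterministic_event("task A scheduled before task B")
leader.send_message(Node(), "RPC result")   # followers synchronized here
```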

    Routing on the Channel Dependency Graph: A New Approach to Deadlock-Free, Destination-Based, High-Performance Routing for Lossless Interconnection Networks

    In the pursuit of ever-increasing compute power, and with Moore's law slowly coming to an end, high-performance computing has started to scale out to larger systems. Alongside the increasing system size, the interconnection network is growing to accommodate and connect tens of thousands of compute nodes. These networks have a large influence on the total cost, application performance, energy consumption, and overall system efficiency of the supercomputer. Unfortunately, state-of-the-art routing algorithms, which define the packet paths through the network, do not utilize this important resource efficiently. Topology-aware routing algorithms are becoming increasingly inapplicable due to irregular topologies, which are either irregular by design or, more often, the result of hardware failures. Exchanging faulty network components potentially requires whole-system downtime, further increasing the cost of a failure. This management approach becomes more and more impractical due to the scale of today's networks and the accompanying steady decrease of the mean time between failures. Alternative methods of operating and maintaining these high-performance interconnects, in terms of both hardware and software management, are necessary to mitigate the negative effects experienced by scientific applications executed on the supercomputer. However, existing topology-agnostic routing algorithms either suffer from poor load balancing or are not bounded in the number of virtual channels needed to resolve deadlocks in the routing tables.
    The fail-in-place strategy, a well-established method in storage systems of repairing only critical component failures, is a feasible solution for current and future HPC interconnects as well as for other large-scale installations such as data center networks. However, an appropriate combination of topology and routing algorithm is required to minimize the throughput degradation for the entire system. This thesis contributes a network simulation toolchain to facilitate the process of finding a suitable combination, either during system design or while the system is in operation. On top of this foundation, a key contribution is a novel scheduling-aware routing, which reduces fault-induced throughput degradation while improving overall network utilization. The scheduling-aware routing performs frequent property-preserving routing updates to optimize the path balancing for simultaneously running batch jobs.
    The increased deployment of lossless interconnection networks, in conjunction with fail-in-place modes of operation and topology-agnostic, scheduling-aware routing algorithms, necessitates new solutions to the routing-deadlock problem. This thesis therefore further advances the state of the art by introducing a novel concept of routing on the channel dependency graph, which allows the design of a universally applicable destination-based routing capable of optimizing the path balancing without exceeding a given number of virtual channels, a common hardware limitation. This disruptive innovation enables implicit deadlock avoidance during path calculation, instead of solving both problems separately as all previous solutions do.
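
    For background, the classical deadlock-freedom condition (due to Dally and Seitz) underlying routing on the channel dependency graph: a deterministic routing is deadlock-free if the graph whose vertices are channels, with an edge (c1, c2) whenever some packet may hold c1 while requesting c2, is acyclic. A minimal cycle check over such a graph, with an invented toy example, might look like this:

```python
# Detect a cycle in a channel dependency graph (CDG) by depth-first search.
# cdg maps each channel to the channels it may depend on.

def has_cycle(cdg):
    WHITE, GREY, BLACK = 0, 1, 2            # unvisited / on stack / done
    colour = {c: WHITE for c in cdg}

    def visit(c):
        colour[c] = GREY
        for nxt in cdg.get(c, ()):
            if colour.get(nxt, WHITE) == GREY:   # back edge -> cycle
                return True
            if colour.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        colour[c] = BLACK
        return False

    return any(colour[c] == WHITE and visit(c) for c in list(cdg))

# A unidirectional ring is the textbook deadlock-prone example: each channel
# depends on the next, closing a cycle, so this routing needs breaking up
# (e.g. across virtual channels) to be deadlock-free.
ring = {"c0": ["c1"], "c1": ["c2"], "c2": ["c0"]}
assert has_cycle(ring)
```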

    A Web-Based Collaborative Multimedia Presentation Document System

    With the distributed and rapidly increasing volume of data and the expeditious development of modern web browsers, the browser has become a legitimate vehicle for remote interactive multimedia presentation and collaboration, especially for geographically dispersed teams. To our knowledge, although a large number of applications have been developed for these purposes, prior work has several drawbacks, including the lack of interactive control of presentation flows, of general-purpose collaboration support for multimedia, and of efficient, precise replay of presentations. To fill these research gaps, in this dissertation we propose a web-based collaborative multimedia presentation document system, which models a presentation as media resources together with a stream of media events attached to the associated media objects. It represents presentation flows and collaboration actions as events, implements temporal and spatial scheduling of multimedia objects, and supports real-time interactive control of the predefined schedules. Because all events are represented by simple messages with an object-prioritized approach, our platform can also support fine-grained, precise replay of presentations. Hundreds of kilobytes can be enough to store the events of a collaborative presentation session for accurate replay, compared with the hundreds of megabytes required by screen-recording tools with a pixel-based replay mechanism.
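
    A minimal sketch of the event-stream model described above: each presentation or collaboration action is a small timestamped message attached to a media object, so a session can be replayed by re-applying the messages in order. The field names and example actions are illustrative, not the system's actual message format.

```python
# Sketch: a presentation session as a log of small event messages, replayed
# by re-applying them in timestamp order. Compact messages (not pixels) are
# what keep the stored session small.
import json
import time

def make_event(obj_id, action, **payload):
    return {"t": time.time(), "obj": obj_id, "action": action, "args": payload}

session_log = [
    make_event("video-1", "play", position=0.0),
    make_event("slide-3", "annotate", shape="arrow", x=120, y=80),
]

def replay(log, apply):
    # A real replayer would also honour the relative timing between events;
    # this sketch only preserves their order.
    for event in sorted(log, key=lambda e: e["t"]):
        apply(event)

replay(session_log, lambda e: print(json.dumps(e)))
```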

    A testbed for embedded systems

    Testing and debugging are often the most difficult phases of software development. This is especially true of embedded systems, which are usually concurrent, have real-time performance and correctness constraints, and execute in the field in an environment that may not permit internal scrutiny of the software's behaviour. Although good software engineering practices help, they will never eliminate the need for testing and debugging, because failings in the specification and design are often only discovered through testing, and understanding these failings, and how to correct them, comes from debugging. These observations suggest that embedded software should be designed in a way that makes testing and debugging easier, and that tools supporting these activities are required. Due to the often hostile environment in which the finished embedded system will function, it is necessary to have a platform that allows the software to be developed and tested "in vitro". The Testbed system achieves these goals by providing dynamic modification and process migration facilities for use during development, as well as powerful monitoring and background debugging support. These facilities are built on a basic run-time harness supporting an event-driven programming model with a global communication mechanism, a model well suited to the reactive nature of embedded systems. The main research contributions of this work are in the areas of finding deadlock-free, path-optimal routings for networks and of dynamic modification with automated conversion of data that may include pointers.
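
    A minimal sketch of an event-driven model with a global communication mechanism, of the general kind the testbed's run-time harness provides: handlers subscribe to named events, and any component can publish to the shared queue. Nothing here is the Testbed's actual API; the names and the toy overtemperature example are invented.

```python
# Sketch: reactive, event-driven harness with a single global event queue.
from collections import defaultdict, deque

class Harness:
    def __init__(self):
        self.handlers = defaultdict(list)   # event name -> subscribed handlers
        self.queue = deque()

    def subscribe(self, event, handler):
        self.handlers[event].append(handler)

    def publish(self, event, data=None):
        # Global communication: any component may post to the shared queue.
        self.queue.append((event, data))

    def run(self):
        while self.queue:
            event, data = self.queue.popleft()
            for handler in self.handlers[event]:
                handler(data)               # reactive: handlers run per event

h = Harness()
h.subscribe("sensor-reading",
            lambda v: h.publish("alarm", v) if v > 90 else None)
h.subscribe("alarm", lambda v: print("overtemperature:", v))
h.publish("sensor-reading", 95)
h.run()
```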

    Safe and automatic live update

    Tanenbaum, A.S. [Promotor]

    NPS AUV Integrated Simulation

    The development and testing of Autonomous Underwater Vehicle (AUV) hardware and software is greatly complicated by vehicle inaccessibility during operation. Integrated simulation remotely links vehicle components and support equipment with graphics simulation workstations, allowing complete real-time, pre-mission, pseudo-mission and post-mission visualization and analysis in the lab environment. Integrated simulator testing of software and hardware is a broad and versatile method that supports rapid and robust diagnosis and correction of system faults. This method is demonstrated using the Naval Postgraduate School (NPS) AUV. High-resolution three-dimensional graphics workstations can provide real-time representations of vehicle dynamics, control system behavior, mission execution, sensor processing and object classification. Integrated simulation is also useful for the development of the variety of sophisticated artificial intelligence applications needed by an AUV; examples include sonar classification using an expert system and path planning using a circle world model. The flexibility and versatility provided by this approach enable visualization and analysis of all aspects of AUV development. Integrated simulator networking is recommended as a fundamental requirement for AUV research and deployment.
    http://archive.org/details/npsauvintegrated00brut
    Lieutenant Commander, United States Navy
    Approved for public release; distribution is unlimited.
