2,754 research outputs found

    Synchronization and fault-masking in redundant real-time systems

    A real-time computer may fail either because of massive component failures or because it does not respond quickly enough to satisfy real-time requirements. An increase in redundancy, the conventional means of improving reliability, can mitigate the former but can in some cases considerably degrade the latter, owing to the overhead of redundancy management, namely the time delay introduced by synchronization and voting/interactive consistency techniques. We consider the reliability implications of synchronization and voting/interactive consistency algorithms in N-modular clusters, with all studies carried out in the context of real-time applications. As a demonstrative example, we have analyzed results from experiments conducted at the NASA Airlab on the Software Implemented Fault Tolerance (SIFT) computer. This analysis indicates that in most real-time applications it is better to employ hardware synchronization instead of software synchronization, and not to allow reconfiguration.
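
    To make the fault-masking side of this trade-off concrete, here is a minimal sketch of majority voting over replica outputs in an N-modular redundant cluster; the time spent gathering and comparing these outputs is precisely the redundancy-management overhead discussed above. The function and values are illustrative assumptions, not taken from SIFT.

```python
# Minimal sketch of majority voting in an N-modular redundant (NMR) cluster.
# Names and values are hypothetical; the SIFT voting procedure is not shown.
from collections import Counter

def vote(replica_outputs):
    """Return the majority value among replica outputs, or None if no
    majority exists (i.e., the fault could not be masked)."""
    counts = Counter(replica_outputs)
    value, count = counts.most_common(1)[0]
    return value if count > len(replica_outputs) // 2 else None

# A triplex (N = 3) cluster masks one faulty replica:
print(vote([42, 42, 17]))   # -> 42 (single fault masked)
print(vote([42, 17, 99]))   # -> None (no majority; uncorrectable)
```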

    Application of High-precision Timing Systems to Distributed Survey Systems

    In any hydrographic survey system that consists of more than one computer, one of the most difficult integration problems is ensuring that all components maintain a coherent sense of time. Since virtually all modern survey systems are of this type, timekeeping and synchronized timestamping of data as it is created are of significant concern. This paper describes a method for resolving this problem based on the IEEE 1588 Precision Time Protocol (PTP) implemented in hardware devices, layered with custom software called the Software Grandmaster (SWGM) algorithm. This combination of hardware and software maintains a coherent sense of time between multiple Ethernet-connected computers, in the best case to within on the order of 100 ns (RMS) of the timebase established by the local GPS-receiver clock. We illustrate the performance of this technique in a practical survey system using a Reson 7P sonar processor connected to a Reson 7125 Multibeam Echosounder (MBES), integrated with an Applanix POS/MV 320 V4 and a conventional data capture computer. Using the timing capabilities of the PTP hardware implementations, we show that the timepieces achieve mean (hardware-based) synchronization and timestamping within 100-150 ns (RMS), and that data created at the Reson 7P without hardware timestamps has a latency variability of 28 µs (RMS) due to software constraints within the capture system. This compares to 288 ms (RMS) using Reson's standard hybrid hardware/software solution, and 13.6 ms (RMS) using a conventional single-oscillator timestamping model.
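
    As background for how PTP hardware timestamps translate into an offset estimate, the sketch below walks through the standard IEEE 1588 two-way exchange arithmetic. It shows only the textbook offset/delay computation; the SWGM algorithm itself is not reproduced here.

```python
# Sketch of the IEEE 1588 (PTP) offset/delay arithmetic that hardware
# timestamping makes accurate; timestamps are in nanoseconds.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: Sync sent by master, t2: Sync received by slave,
    t3: Delay_Req sent by slave, t4: Delay_Req received by master.
    Assumes a symmetric path; asymmetry appears directly as offset error."""
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way mean path delay
    return offset, delay

# Example: slave runs 150 ns ahead of the master over a 1 us path.
print(ptp_offset_and_delay(1_000, 2_150, 3_000, 3_850))  # -> (150.0, 1000.0)
```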

    A Semantic-Based Middleware for Multimedia Collaborative Applications

    The growth of the Internet and the performance increase of desktop computers have enabled large-scale distributed multimedia applications. These applications are expected to grow in demand and services, and their traffic volume will dominate. Real-time delivery, scalability, and heterogeneity are some of the requirements that have motivated a revision of the traditional Internet services, operating system structures, and the software systems supporting application development. This work proposes a Java-based lightweight middleware for the development of large-scale multimedia applications. The middleware offers four services for multimedia applications. First, it provides two scalable lightweight protocols for floor control. One follows a centralized model that integrates easily with centralized resources such as a shared tool, and the other is a distributed protocol targeted at distributed resources such as audio. Scalability is achieved by periodically multicasting a heartbeat that conveys state information used by clients to request the resource via temporary TCP connections. Second, it supports intra- and inter-stream synchronization algorithms and policies. We introduce the concept of a virtual observer, which perceives the session as being in the same room as a sender. We avoid the need for globally synchronized clocks by introducing the concept of a user's multimedia presence, which defines a new manner of combining streams coming from multiple sites. It includes a novel algorithm for estimation and removal of clock skew. In addition, it supports event-driven asynchronous message reception, quality-of-service measures, and traffic rate control. Finally, the middleware provides support for data sharing via a resilient and scalable protocol for transmission of images that can dynamically change in content and size. The effectiveness of the middleware components is shown with the implementation of Odust, a prototypical sharing-tool application built on top of the middleware.
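
    Since the clock-skew algorithm above is described as novel and is not detailed in the abstract, the following is only a generic sketch of skew estimation from sender/receiver timestamp pairs using an ordinary least-squares fit; it illustrates the kind of estimate such middleware needs, assuming the fitted slope captures the relative clock rate.

```python
# Generic sketch of clock-skew estimation from timestamp pairs via a
# least-squares line fit; not the middleware's own (novel) algorithm.

def estimate_skew(send_times, recv_times):
    """Fit recv = a + b*send; (b - 1) is the relative clock skew, so
    dividing receiver intervals by b removes it."""
    n = len(send_times)
    mean_s = sum(send_times) / n
    mean_r = sum(recv_times) / n
    cov = sum((s - mean_s) * (r - mean_r)
              for s, r in zip(send_times, recv_times))
    var = sum((s - mean_s) ** 2 for s in send_times)
    b = cov / var
    return b - 1.0  # e.g. 1e-4 means the receiver clock runs 100 ppm fast

send = [0.0, 1.0, 2.0, 3.0]
recv = [0.50, 1.5001, 2.5002, 3.5003]  # 0.5 s offset plus 100 ppm skew
print(f"skew = {estimate_skew(send, recv):.6f}")  # -> skew = 0.000100
```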

    Performance and policy dimensions in internet routing

    The Internet Routing Project, referred to in this report as the 'Highball Project', has been investigating architectures suitable for networks spanning large geographic areas and capable of very high data rates. The Highball network architecture is based on a high-speed crossbar switch, one of which is attached to each node, and an adaptive, distributed TDMA scheduling algorithm. The scheduling algorithm controls the instantaneous configuration and dwell time of the switches. In order to send a single-burst or multi-burst packet, a reservation request is sent to all nodes. The scheduling algorithm then configures the switches immediately prior to the arrival of each burst, so it can be relayed immediately without requiring local storage. Reservations and housekeeping information are sent using a special broadcast-spanning-tree schedule. Progress to date in the Highball Project includes the design and testing of a suite of scheduling algorithms, construction of software reservation/scheduling simulators, and construction of a strawman hardware and software implementation. A prototype switch controller and timestamp generator have been completed and are in test. Detailed documentation on the algorithms, protocols, and experiments conducted is given in various published reports and papers. Abstracts of this literature are included in the bibliography at the end of this report, which serves as an extended executive summary.
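
    As a rough illustration of reservation-driven TDMA scheduling of this kind, the sketch below greedily places broadcast reservation requests into a conflict-free burst schedule; every node running the same deterministic computation on the same reservation list arrives at the same switch timings. This is a simplified stand-in, not one of the Highball scheduling algorithms.

```python
# Toy greedy placement of reservation requests into a TDMA burst schedule
# for a crossbar network. Simplified illustration only.

def schedule(reservations):
    """reservations: list of (src, dst, duration).
    Returns (src, dst, start, end) tuples. A burst starts once both the
    source transmitter and destination receiver are free, so it can be
    relayed through the switches with no local storage."""
    tx_free, rx_free, plan = {}, {}, []
    for src, dst, dur in reservations:
        start = max(tx_free.get(src, 0), rx_free.get(dst, 0))
        tx_free[src] = rx_free[dst] = start + dur
        plan.append((src, dst, start, start + dur))
    return plan

for burst in schedule([("A", "B", 5), ("C", "B", 3), ("A", "C", 2)]):
    print(burst)
# ('A','B',0,5); ('C','B',5,8) waits for B; ('A','C',5,7) waits for A
```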

    Time Synchronization System: Investigation and Implementation Proposal


    Nanosecond-Level Resilient GNSS-Based Time Synchronization in Telecommunication Networks Through WR-PTP HA

    In recent years, the push for accurate and reliable time synchronization has gained momentum in critical infrastructures, especially in telecommunication networks, driven by the demands of 5G New Radio and next-generation technologies that rely on submicrosecond timing accuracy for radio access network (RAN) nodes. Traditionally, atomic clocks paired with global navigation satellite system (GNSS) timing receivers have served as grandmaster clocks, supported by dedicated network timing protocols. However, this approach struggles to scale with the increasing number of RAN intermediate nodes. To address scalability and high-accuracy synchronization, a more cost-effective and widely deployable solution is needed. Standalone GNSS timing receivers leverage ubiquitous satellite signals to offer stable timing, but the consequent proliferation of GNSS antennas can expose networks to radio-frequency attacks. Our research introduces a solution for resilient GNSS-based network synchronization that combines the White Rabbit Precision Time Protocol (WR-PTP) with backup-timing-source logic that takes over during timing-disruptive attacks against GNSS. It has been rigorously tested against common jamming, meaconing, and spoofing attacks, consistently maintaining 2 ns relative synchronization accuracy between nodes, all without the need for an atomic clock.
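
    The abstract does not spell out the backup logic, so the following is a loose, hypothetical sketch of how a node might fall back from a disrupted GNSS reference to the WR-PTP network reference; the field names, validity flags, and step threshold are all assumptions made for illustration, not details from the paper.

```python
# Hypothetical backup-timing-source selection: when the GNSS receiver's
# output looks disrupted (invalid fix or an implausible time step, as under
# jamming/meaconing/spoofing), hold over on the WR-PTP network reference.
# Field names and the threshold are illustrative assumptions.

STEP_LIMIT_NS = 100  # illustrative plausibility bound on epoch-to-epoch steps

def select_reference(gnss, wr_ptp):
    """gnss: dict with 'valid' and 'step_ns'; wr_ptp: dict with 'valid'."""
    gnss_ok = gnss["valid"] and abs(gnss["step_ns"]) < STEP_LIMIT_NS
    if gnss_ok:
        return "GNSS"
    return "WR-PTP" if wr_ptp["valid"] else "HOLDOVER"

print(select_reference({"valid": True,  "step_ns": 4},   {"valid": True}))  # GNSS
print(select_reference({"valid": True,  "step_ns": 900}, {"valid": True}))  # WR-PTP (spoof-like step)
print(select_reference({"valid": False, "step_ns": 0},   {"valid": True}))  # WR-PTP (jammed)
```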

    Goodbye, ALOHA!

    The vision of the Internet of Things (IoT) to interconnect and Internet-connect everyday people, objects, and machines poses new challenges in the design of wireless communication networks. The design of medium access control (MAC) protocols has traditionally been an intense area of research due to their high impact on the overall performance of wireless communications. The majority of research activities in this field deal with variations of protocols based in some way on ALOHA, either with or without listen-before-talk, i.e., carrier-sense multiple access. These protocols operate well under low traffic loads and a low number of simultaneous devices, but they suffer from congestion as the traffic load and the number of devices increase. For this reason, unless revisited, the MAC layer can become a bottleneck for the success of the IoT. In this paper, we provide an overview of the existing MAC solutions for the IoT, describing current limitations and envisioned challenges for the near future. Motivated by those, we identify a family of simple algorithms based on distributed queueing (DQ), which can operate for an infinite number of devices generating any traffic load and pattern. A description of the DQ mechanism is provided, and the most relevant existing studies of DQ applied in different scenarios are described. In addition, we provide a novel performance evaluation of DQ when applied to the IoT. Finally, a description of the very first demo of DQ for use in the IoT is also included.
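
    To give a feel for the DQ mechanism named above, here is a toy single-round sketch: devices send short access requests in a few contention minislots, collided groups queue for an orderly retry, and successful ones queue for collision-free data transmission. It is a simplification of DQ's rules, with illustrative names throughout.

```python
# Toy round of distributed queueing (DQ): devices contend in m minislots;
# collided groups join the collision-resolution queue (CRQ) to retry in
# order, while successful requests join the data transmission queue (DTQ)
# and later transmit collision-free. Simplified illustration only.
import random
from collections import defaultdict

def dq_round(devices, m=3):
    slots = defaultdict(list)
    for dev in devices:
        slots[random.randrange(m)].append(dev)  # pick a contention minislot
    dtq = [grp[0] for grp in slots.values() if len(grp) == 1]  # successes
    crq = [grp for grp in slots.values() if len(grp) > 1]      # collisions
    return dtq, crq

random.seed(1)
dtq, crq = dq_round([f"dev{i}" for i in range(8)])
print("DTQ (transmit next):", dtq)
print("CRQ groups (retry):", crq)
```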

    Improving the measurement of system time on remote hosts

    The tools and techniques of digital forensics are useful for investigating system failures, gathering evidence of illegal activities, and analyzing computer systems after cyber attacks. Constructing an accurate timeline of digital events is essential to forensic analysis, and developing a correlation between a computer’s system time and a standard time such as Coordinated Universal Time (UTC) is key to building such a timeline. In addition to local temporal data, such as file MAC (Modified, Accessed, and Changed/Created) times and event logs, a computer may hold timestamps from other machines, such as email headers, HTTP cookies, and downloaded files. To fully understand the sequence of events on a single computer, investigators need dependable tools for building clock models of all other computers that have contributed to its timestamps. Building clock models involves measuring the system times on remote hosts and correlating them to the time on the local machine. Sending ICMP or IP timestamp requests and analyzing the responses is one way to take this measurement. The Linux program clockdiff uses this method, but it is slow and sometimes inaccurate. Using a series of 50 packets, clockdiff takes an average of 11 seconds to measure one target. Also, clockdiff assumes that the time difference between the local and target hosts is never greater than 12 hours. When it receives a timestamp showing a greater difference, it manipulates this value without alerting the user, reporting a result that can make the target appear more tightly synchronized with the local host than it actually is. Thus, clockdiff is not the best choice for forensic investigators. As a better alternative, we have designed and implemented a program called clockvar, which also uses ICMP and IP timestamp messages. We show by experiment that clockvar maintains precision when the system times on the local and target hosts differ by twelve to twenty-four hours, and we demonstrate that clockvar is capable of making measurements up to 1400 times faster than clockdiff.
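
    To illustrate the 12-hour pitfall described above: ICMP timestamps count milliseconds since midnight UTC and wrap daily, so a measurement tool must decide how to interpret the raw difference. The sketch below contrasts a clockdiff-style fold into a ±12 h window, which silently misreports larger offsets, with keeping the raw modular difference; the function names are illustrative, not clockvar's actual code.

```python
# ICMP timestamps are milliseconds since midnight UTC, so remote-minus-local
# differences are only defined modulo one day. Function names illustrative.

MS_PER_DAY = 24 * 60 * 60 * 1000

def folded_diff(remote_ms, local_ms):
    """clockdiff-style: fold into [-12 h, +12 h), hiding larger offsets."""
    d = (remote_ms - local_ms) % MS_PER_DAY
    return d - MS_PER_DAY if d >= MS_PER_DAY // 2 else d

def raw_diff(remote_ms, local_ms):
    """Keep the full 0..24 h difference explicit instead of guessing."""
    return (remote_ms - local_ms) % MS_PER_DAY

local = 1 * 60 * 60 * 1000    # 01:00 on the local host
remote = 19 * 60 * 60 * 1000  # 19:00 on the target: 18 h ahead
print(folded_diff(remote, local) / 3.6e6)  # -6.0 h: deceptively close
print(raw_diff(remote, local) / 3.6e6)     # 18.0 h: actual difference
```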