
    Time and Frequency Transfer in a Coherent Multistatic Radar using a White Rabbit Network

    Networks of coherent multistatic radars require accurate and stable time and frequency transfer (TFT) for range and Doppler estimation. TFT techniques based on global navigation satellite systems (GNSS) have been favoured for several reasons, such as enabling node mobility through wireless operation, geospatial referencing, and atomic-clock-level time and frequency stability. However, such systems are liable to GNSS denial, where the GNSS carrier is temporarily or permanently removed. A denial-resilient system should consider alternative TFT techniques, such as the White Rabbit (WR) project. WR is an Ethernet-based protocol that can synchronise thousands of nodes on a fibre-optic network with sub-nanosecond accuracy and picoseconds of jitter. This thesis evaluates WR as the TFT network for a coherent multistatic pulse-Doppler radar, NeXtRAD. To test the hypothesis that WR is suitable for TFT in a coherent multistatic radar, the time and frequency performance of a WR network was evaluated under laboratory conditions, comparing the results against a network of multi-channel GPS-disciplined oscillators (GPSDOs). A WR-disciplined oscillator (WRDO) is introduced, which has the short-term stability of an ovenised crystal oscillator (OCXO) and the long-term stability of the WR network. The radar references were measured using a dual mixer time difference (DMTD) technique, which allows the phase to be measured with femtosecond-level resolution. All references achieved the stringent time and frequency requirements for short-term coherent bistatic operation; however, the GPSDOs and WRDOs had the best short-term frequency stability. The GPSDOs showed the largest long-term phase drift, with a peak-to-peak time error of 9.6 ns, whilst the WRDOs were typically stable to within 0.4 ns but encountered transient phase excursions of up to 1.5 ns. The TFT networks were then used on the NeXtRAD radar, where a lighthouse, Roman Rock, served as a static target to evaluate the time and frequency performance of the references on a real system. The results agree well with the laboratory measurements; WR can therefore be used for TFT in coherent radar.
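
    As a concrete illustration of how such reference comparisons are commonly summarised, the sketch below computes the peak-to-peak time error and the overlapping Allan deviation from a time-error series. It is a minimal sketch only: the 1 Hz sampling, the noise level, and the NumPy implementation are illustrative assumptions, not the thesis's actual DMTD processing chain.

```python
# Minimal sketch: summarising a time-transfer measurement as peak-to-peak
# time error and overlapping Allan deviation (ADEV). The time-error series
# `x` (seconds, sampled every tau0 seconds) stands in for DMTD output;
# the array contents here are illustrative only.
import numpy as np

def peak_to_peak(x):
    """Peak-to-peak time error of a time-error series (seconds)."""
    return np.max(x) - np.min(x)

def overlapping_adev(x, tau0, m):
    """Overlapping Allan deviation at averaging time m * tau0.

    Uses the standard time-error formulation:
    sigma_y^2(tau) = < (x[i+2m] - 2 x[i+m] + x[i])^2 > / (2 tau^2)
    """
    x = np.asarray(x, dtype=float)
    tau = m * tau0
    d = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
    return np.sqrt(np.mean(d ** 2) / (2.0 * tau ** 2))

# Illustrative use: 1 Hz samples of time error with white phase noise.
rng = np.random.default_rng(0)
x = 1e-10 * rng.standard_normal(10_000)   # ~0.1 ns RMS, hypothetical
print(f"peak-to-peak: {peak_to_peak(x) * 1e9:.2f} ns")
for m in (1, 10, 100):
    print(f"ADEV(tau={m} s): {overlapping_adev(x, 1.0, m):.3e}")
```

    On real measurements, `x` would be the DMTD phase difference converted to seconds, and sweeping the averaging factor `m` exposes the crossover between an OCXO's short-term stability and a WR network's long-term steering.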

    Resilient architecture (preliminary version)

    The main objectives of WP2 are to define a resilient architecture and to develop a range of middleware solutions (i.e. algorithms, protocols, services) for resilience, to be applied in the design of highly available, reliable and trustworthy networking solutions. This is the first deliverable within this work package: a preliminary version of the resilient architecture. The deliverable builds on previous results from WP1, the definition of a set of applications and use cases, and provides a perspective on the middleware services that are considered fundamental to address the dependability requirements of those applications. It then describes the architectural organisation of these services according to a number of factors, such as their purpose, their function within the communication stack, or their criticality/specificity for resilience. WP2 proposes an architecture that differentiates between two classes of services: a class including timeliness and trustworthiness oracles, and a class of so-called complex services. The resulting architecture is referred to as a "hybrid architecture"; it is motivated and discussed in this document, and the services considered within each of the service classes are described. This sets the background for the work to be carried out in the scope of tasks 2.2 and 2.3 of the work package. Finally, the deliverable also considers high-level interfacing aspects by discussing the possibility of using existing Service Availability Forum standard interfaces within HIDENETS, in particular the extensions to those interfaces that may be necessary to accommodate specific HIDENETS services suited for the ad-hoc domain.

    A spatio-temporal model to reveal oscillator phenotypes in molecular clocks: Parameter estimation elucidates circadian gene transcription dynamics in single-cells.

    We propose a stochastic distributed delay model, together with a Markov random field prior and a measurement model for bioluminescence reporting, to analyse spatio-temporal gene expression in intact networks of cells. The model describes the oscillating time evolution of molecular mRNA counts through a negative transcriptional-translational feedback loop encoded in a chemical Langevin equation with a probabilistic delay distribution. The model is extended spatially by means of a multiplicative random effects model with a first-order Markov random field prior distribution. Our methodology effectively separates intrinsic molecular noise, measurement noise, and the extrinsic noise and phenotypic variation driving cell heterogeneity, while remaining amenable to parameter identification and inference. Based on the single-cell model, we propose a novel computational stability analysis that allows us to infer two key characteristics: the robustness of the oscillations, i.e. whether the reaction network exhibits sustained or damped oscillations, and the profile of the regulation, i.e. whether the inhibition occurs over time in a more distributed or a more direct manner, which affects the cells' ability to phase-shift to new schedules. We show how insight into the spatio-temporal characteristics of the circadian feedback loop in the suprachiasmatic nucleus (SCN) can be gained by applying the methodology to bioluminescence-reported expression of the circadian core clock gene Cry1 across mouse SCN tissue. We find that while (almost) all SCN neurons exhibit robust cell-autonomous oscillations, the parameters associated with the regulatory transcription profile divide the tissue spatially: the central region's oscillations are resilient to perturbation, in the sense that they maintain a high degree of synchronicity, whereas the dorsal region appears to phase-shift in a more diversified way in response to large perturbations and thus could be more amenable to entrainment.
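
    To make the model class concrete, the following minimal sketch integrates a chemical Langevin equation for an mRNA count with negative feedback acting through a distributed delay, using Euler-Maruyama. The Hill repression form, the gamma-shaped delay kernel, and all parameter values are illustrative assumptions, not the fitted single-cell model.

```python
# Minimal sketch of the model class described above: a chemical Langevin
# equation for mRNA copy number M with delayed negative (Hill-type)
# feedback, integrated by Euler-Maruyama. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)

dt = 0.01          # time step (hours)
T = 200.0          # total simulated time (hours)
n = int(T / dt)
alpha, K, h = 100.0, 50.0, 4.0   # max production, repression threshold, Hill coeff.
delta = 0.3                      # degradation rate (1/h)

# Distributed delay: discretised gamma-shaped kernel with mean ~6 h.
lags = np.arange(1, int(12.0 / dt))
kernel = (lags * dt) ** 2 * np.exp(-lags * dt / 2.0)
kernel /= kernel.sum()

M = np.zeros(n)
M[0] = 20.0
for t in range(1, n):
    # Delayed state: kernel-weighted history (truncated near the start).
    past = M[max(0, t - len(lags)):t][::-1]      # most recent value first
    M_del = np.dot(kernel[:len(past)], past)
    prod = alpha * K ** h / (K ** h + M_del ** h)  # repressed transcription
    deg = delta * M[t - 1]
    drift = prod - deg
    # Langevin noise: each reaction channel contributes sqrt(rate) noise.
    diff = np.sqrt((prod + deg) * dt) * rng.standard_normal()
    M[t] = max(M[t - 1] + drift * dt + diff, 0.0)

print(f"mean copy number: {M.mean():.1f}, std: {M.std():.1f}")
```

    Whether such a trace shows sustained or damped oscillations depends on the delay profile and the Hill coefficient, which is exactly the kind of phenotype the paper's stability analysis is designed to infer.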

    AN INVESTIGATION INTO AN EXPERT SYSTEM FOR TELECOMMUNICATION NETWORK DESIGN

    Many telephone companies, especially in Eastern Europe and the 'third world', are developing new telephone networks. In such situations the network design engineer needs computer-based tools that not only supplement his own knowledge but also help him to cope with situations where not all the information necessary for the design is available. Traditional network design tools are often somewhat removed from the practical world for which they were developed: they frequently ignore the significant uncertain and statistical nature of the input data, they use data taken from a fixed point in time to solve a time-variable problem, and their cost formulae tend to be averages per line or port rather than figures for the specific case. Indeed, data is often not available or simply unreliable. The engineer has to rely on rules of thumb honed over many years of experience in designing networks and must be able to cope with missing data. The complexity of telecommunication networks and the rarity of specialists in this area often make the network design process very difficult for a company. It is therefore an important area for the application of expert systems. Designs resulting from the use of expert systems will have a measure of uncertainty in their solution, and adequate account must be taken of the risk involved in implementing their design recommendations. The thesis reviews the status of expert systems as used for telecommunication network design. It further shows that such an expert system needs to decompose a large network problem into its component parts, use different modules to solve them, and then combine the results to create a total solution, and it shows how the various sub-problems are integrated to solve the general network design problem. The thesis then presents details of such an expert system and the databases necessary for network design: three new algorithms are introduced for traffic analysis, node location and network design, and these produce results that correlate closely with designs taken from the BT Consultancy archives. It was initially supposed that an efficient combination of existing techniques for dealing with uncertainty within expert systems would suffice as the basis of the new system. It soon became apparent, however, that to allow for the differing attributes of facts, rules and data, and the varying degrees of importance or rank within each area, a new and radically different method would be needed. Having investigated the existing uncertainty problem, it is believed that a new, more rational method has been found. The work has involved the invention of the 'Uncertainty Window' technique and its testing on various aspects of network design, including demand forecasting, network dimensioning, and node and link system sizing, using a selection of networks that have been designed by BT Consultancy staff. From the results of the analysis, modifications to the technique have been incorporated with the aim of optimising the heuristics and procedures, so that the structure gives an accurate solution as early as possible. The essence of the process is one of associating the uncertainty windows with their relevant rules, data and facts, which provides the network designer with an insight into the uncertainties that have helped produce the overall system design: it indicates which sources of uncertainty and which assumptions were critical, for further investigation to improve the confidence of the overall design. The windowing technique works by virtue of its ability to retain the composition of the uncertainty and its associated values, assumptions, etc., and allows better solutions to be attained. (British Telecommunications plc)
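
    The abstract does not give the formal definition of an uncertainty window, so the following is only a hypothetical, interval-arithmetic reading of the idea: a window pairs a numeric range with the facts and assumptions that produced it, so a derived design figure carries both its spread and its provenance. All names and the example figures below are invented for illustration.

```python
# Hypothetical sketch only: the 'Uncertainty Window' technique is the
# thesis's own invention and its formulation is not given in the abstract.
# As a stand-in, a window is an interval plus a record of its sources, so
# a derived figure carries both its spread and the assumptions behind it.
from dataclasses import dataclass

@dataclass
class Window:
    low: float
    high: float
    sources: tuple          # which facts/assumptions produced this window

    def scale(self, k, note):
        return Window(self.low * k, self.high * k, self.sources + (note,))

    def add(self, other):
        return Window(self.low + other.low, self.high + other.high,
                      self.sources + other.sources)

# Illustrative use: a demand forecast with +/-20% uncertainty, times a
# per-subscriber traffic figure that is itself only roughly known.
demand = Window(8_000, 12_000, ("forecast: 10k lines +/-20%",))
erlangs = demand.scale(0.07, "assumed 0.07 E per line")  # busy-hour traffic
print(f"traffic window: {erlangs.low:.0f}-{erlangs.high:.0f} E")
print("derived from:", erlangs.sources)
```

    The point of retaining `sources` is the one the abstract makes: the designer can see which assumptions dominate the spread of the final figure and investigate those first.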

    Towards low power radio localisation

    This work investigates the use of super-resolution algorithms for precision localisation and long-term tracking of small subjects, such as rodents. An overview is given of a variety of positioning techniques in use today, namely received signal strength, time of arrival, time difference of arrival and direction of arrival (DoA). Based on the analysis, it is concluded that direction-finding signal-subspace techniques are the most appropriate for the purposes of our system. The details of the software defined radio (SDR) antenna array testbed development, build, characterisation and performance evaluation are presented, and the results of direction finding experiments in a screened anechoic chamber emulating open-space propagation are discussed. It is shown that such a testbed is capable of locating sources in the vicinity of the array with high precision. By employing spread spectrum techniques, it can estimate the DoAs of more simultaneously transmitting sources than there are antennas in the array, and it readily accommodates very low power sources. The overall constraints on the system are that the operational range must be around 50-100 m, the transmitter must be small in both volume and weight, and it has to remain operational over an extended period of around one year. The implications are that very small antennas and batteries must be used, which are usually accompanied by very low transmission efficiencies and tiny capacities, respectively. Based on the above, the use of ultra-low power oscillator transmitters as first-cut prototypes of the tag is proposed. It is shown that the Clapp, Colpitts, Pierce and cross-coupled architectures are adequate. A thorough analysis of these topologies is provided, with full details of tag and antenna co-design. Finally, the performance of these architectures is evaluated through simulations with respect to power output, overall efficiency and phase noise.
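
    One standard member of the signal-subspace family referred to above is MUSIC; the following minimal sketch runs it on simulated snapshots from a uniform linear array. The element count, spacing, SNR and source angles are illustrative assumptions, not the testbed's parameters.

```python
# Minimal sketch of subspace-based direction finding: MUSIC on a uniform
# linear array (ULA) with two simulated sources. Parameters illustrative.
import numpy as np

rng = np.random.default_rng(2)
M, d = 8, 0.5                       # 8 elements, half-wavelength spacing
true_doas = np.deg2rad([-20.0, 35.0])
N = 500                             # snapshots

def steering(theta):
    """ULA steering vector for direction theta (radians)."""
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))

A = np.column_stack([steering(t) for t in true_doas])
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise

R = X @ X.conj().T / N              # sample covariance matrix
eigval, eigvec = np.linalg.eigh(R)  # eigenvalues in ascending order
En = eigvec[:, :M - 2]              # noise subspace (2 sources assumed)

# MUSIC pseudo-spectrum: peaks where steering vectors are orthogonal
# to the noise subspace.
grid = np.deg2rad(np.linspace(-90, 90, 1801))
spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                     for t in grid])

# Crude peak picking: take the two largest local maxima.
local_max = (spectrum[1:-1] > spectrum[:-2]) & (spectrum[1:-1] > spectrum[2:])
cand = np.where(local_max)[0] + 1
top2 = cand[np.argsort(spectrum[cand])[-2:]]
print("estimated DoAs (deg):", np.sort(np.rad2deg(grid[top2])).round(1))
```

    The abstract's more-sources-than-antennas claim would rely on despreading each transmitter's code before covariance estimation, so each despread stream effectively sees a reduced source set; the sketch above covers only the plain array-processing case.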

    Supervisory Wireless Control for Critical Industrial Applications


    Constructing fail-controlled nodes for distributed systems: a software approach

    Get PDF
    PhD thesis. Designing and implementing distributed systems which continue to provide specified services in the presence of processing-site and communication failures is a difficult task. To facilitate their development, distributed systems have been built on the assumption that their underlying hardware components are fail-controlled, i.e. present a well-defined failure mode. However, if conventional hardware cannot provide the assumed failure mode, there is a need to build processing sites, or nodes, and a communication infrastructure that present the assumed fail-controlled behaviour. Coupling a number of redundant processors within a replicated node is a well-known way of constructing fail-controlled nodes. Computation is replicated and executed simultaneously at each processor, and by applying suitable validation techniques (e.g. majority voting, comparison) to the outputs generated by the processors, outputs from faulty processors can be prevented from appearing at the application level. One way of constructing replicated nodes is to introduce hardwired mechanisms that couple the replicated processors with specialised validation hardware circuits: processors are tightly synchronised at the clock-cycle level and have their outputs validated by reliable validation hardware. Another approach is to use software mechanisms to perform the synchronisation of processors and the validation of outputs. The main advantage of hardware-based nodes is the minimal performance overhead incurred; however, the introduction of special circuits can increase the complexity of the design tremendously, and every new microprocessor architecture requires considerable redesign effort. Software-based nodes do not present these problems; on the other hand, they introduce much larger performance overheads. In this thesis we investigate alternative ways of constructing efficient fail-controlled, software-based replicated nodes. In particular, we present much more efficient order protocols, which are necessary for the implementation of these nodes. Our protocols, unlike others published to date, do not require the processors' physical clocks to be explicitly synchronised. The main contribution of this thesis is the precise definition of the semantics of a software-based fail-silent node, along with its efficient design, implementation and performance evaluation. (The Brazilian National Research Council, CNPq/Brasil)
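
    As a minimal sketch of the software validation step mentioned above (majority voting over replica outputs), the following shows how a value from a faulty processor is masked, and how, when no majority exists, the node stays silent in the fail-silent style. The message format is an illustrative assumption.

```python
# Minimal sketch of output validation by majority voting, so that a value
# from a faulty replica never reaches the application level.
from collections import Counter

def vote(outputs):
    """Return the majority output of the replicas, or None if no majority.

    `outputs` maps replica id -> output value for one computation step.
    With 2f+1 replicas this masks up to f value faults; returning None
    models the fail-silent choice of suppressing a disputed output.
    """
    counts = Counter(outputs.values())
    value, n = counts.most_common(1)[0]
    return value if n > len(outputs) // 2 else None

# Illustrative use: replica 'c' produced a wrong value and is outvoted.
print(vote({"a": 42, "b": 42, "c": 7}))   # -> 42
print(vote({"a": 1, "b": 2, "c": 3}))     # -> None (suppress the output)
```

    Voting like this presupposes that the replicas process the same inputs in the same order, which is exactly why the order protocols contributed by the thesis are needed.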

    Cyber-Physical Systems of Systems: Foundations – A Conceptual Model and Some Derivations: The AMADEOS Legacy

    Get PDF
    Computer Systems Organization and Communication Networks; Software Engineering; Complex Systems; Information Systems Applications (incl. Internet); Computer Applications

    Content-Aware Multimedia Communications

    The demand for the fast, economical and reliable dissemination of multimedia information is growing steadily within our society. While people and the economy increasingly rely on communication technologies, engineers still struggle with their growing complexity. Complexity in multimedia communication originates from several sources, the most prominent being the unreliability of packet networks such as the Internet. Recent advances in scheduling and error-control mechanisms for streaming protocols have shown that the quality and robustness of multimedia delivery can be improved significantly when protocols are aware of the content they deliver. However, the proposed mechanisms require close cooperation between transport systems and application layers, which increases the overall system complexity. Current approaches also require expensive metrics and focus only on specific encoding formats; a general and efficient model has so far been missing. This thesis presents efficient and format-independent solutions to support cross-layer coordination in system architectures. In particular, the first contribution of this work is a generic dependency model that enables transport layers to access content-specific properties of media streams, such as the dependencies between data units and their importance. The second contribution is the design of a programming model for streaming communication and its implementation as a middleware architecture. The programming model hides the complexity of protocol stacks behind simple programming abstractions but exposes cross-layer control and monitoring options to application programmers. For example, our interfaces allow programmers to choose appropriate failure semantics at design time while refining error protection and the visibility of low-level errors at run-time. Using several examples, we show how our middleware simplifies the integration of stream-based communication into large-scale application architectures. An important result of this work is that, despite cross-layer cooperation, neither application nor transport protocol designers experience an increase in complexity. Application programmers can even reuse existing streaming protocols, which effectively increases system robustness.

    Our society's demand for cost-effective and reliable communication is growing steadily. While we make ourselves ever more dependent on modern communication technologies, the engineers of these technologies must both satisfy the demand for the rapid introduction of new products and master the growing complexity of their systems. The transmission of multimedia content such as video and audio data in particular is not trivial. One of the most prominent reasons for this is the unreliability of today's networks, such as the Internet: packet losses and fluctuating delays can massively impair presentation quality. As recent developments in the field of streaming protocols show, however, the quality and robustness of a transmission can be controlled efficiently if the streaming protocols exploit information about the content of the transported data. Existing approaches to describing the content of multimedia data streams, though, are mostly specialised to individual compression schemes and use computationally intensive metrics, which clearly reduces their practical value. Moreover, the exchange of information requires close cooperation between applications and transport layers. Since the interfaces of current system architectures are not prepared for this, either the interfaces must be extended or alternative architectural concepts must be created; the danger of both variants, however, is that the complexity of a system may thereby increase further. The central goal of this dissertation is therefore to achieve cross-layer coordination while simultaneously reducing complexity. Here the work makes two contributions to the current state of research. First, it defines a universal model for describing content attributes, such as importances and dependency relationships within a data stream; transport layers can use this knowledge for efficient error control. Second, the work describes the Noja programming model for multimedia middleware. Noja defines abstractions for the transmission and control of multimedia streams that enable the coordination of streaming protocols with applications; for example, programmers can select suitable failure semantics and communication topologies and then refine and control the concrete error protection at run-time.
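
    To make the generic dependency model concrete, here is a minimal sketch in which each data unit carries an importance value and the identifiers of the units it depends on, and the transport layer uses this graph to decide what is still worth delivering after a loss. The names, the GOP-like example and the decodability rule are illustrative assumptions, not the thesis's concrete model or the Noja API.

```python
# Minimal sketch of a format-independent dependency model: data units
# carry an importance and a list of units they depend on, and a transport
# layer can check what remains decodable once a unit is lost.
from dataclasses import dataclass, field

@dataclass
class DataUnit:
    uid: int
    importance: float                           # e.g. derived from frame type
    deps: list = field(default_factory=list)    # uids this unit needs

def decodable(unit, received, units):
    """A unit is useful only if its whole dependency chain was delivered."""
    return all(d in received and decodable(units[d], received, units)
               for d in unit.deps)

# Illustrative GOP-like stream: I-frame 0, P-frames 1-2, B-frame 3.
units = {0: DataUnit(0, 1.0),
         1: DataUnit(1, 0.6, [0]),
         2: DataUnit(2, 0.6, [1]),
         3: DataUnit(3, 0.2, [1, 2])}

received = {0, 2, 3}                            # P-frame 1 was lost
useful = [u.uid for u in units.values()
          if u.uid in received and decodable(u, received, units)]
print("still decodable:", useful)               # -> [0]: 2 and 3 needed 1
```

    A transport layer with this information can, for instance, skip retransmitting units whose dependencies are already unrecoverable and spend the bandwidth on the most important units that are still decodable.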