
    System-on-chip Computing and Interconnection Architectures for Telecommunications and Signal Processing

    This dissertation proposes novel architectures and design techniques targeting SoC building blocks for telecommunications and signal processing applications. Hardware implementation of Low-Density Parity-Check (LDPC) decoders is approached at both the algorithmic and the architectural level. LDPC codes are a promising coding scheme for future communication standards thanks to their outstanding error correction performance. This work proposes a methodology for analyzing the effects of finite precision arithmetic on error correction performance and hardware complexity, and the methodology is employed throughout the co-design of the decoder. First, a low-complexity check node based on the P-output decoding principle is designed and characterized on a CMOS standard-cell library. Results demonstrate an implementation loss below 0.2 dB down to a BER of 10^{-8} and complexity savings of up to 59% with respect to other recent works in the literature. High-throughput and low-latency issues are addressed with modified single-phase decoding schedules. A new "memory-aware" schedule is proposed that requires as little as 20% of the memory of traditional two-phase flooding decoding; in addition, throughput is doubled and logic complexity is reduced by 12%. These advantages are traded off against error correction performance, making the solution attractive only for long codes, such as those adopted in the DVB-S2 standard. The "layered decoding" principle is extended to codes not specifically conceived for this technique. The proposed architectures exhibit complexity savings in the order of 40% for both area and power consumption, while the implementation loss is smaller than 0.05 dB.
    Most modern communication standards employ Orthogonal Frequency Division Multiplexing (OFDM) as part of their physical layer. The core of OFDM is the Fast Fourier Transform (FFT) and its inverse, in charge of symbol (de)modulation. Requirements on throughput and energy efficiency call for hardware implementation of the FFT, while its ubiquity suggests the design of parametric, re-configurable and re-usable IP hardware macrocells. In this context, this thesis describes an FFT/IFFT core compiler particularly suited for the implementation of OFDM communication systems. The tool employs an accuracy-driven configuration engine which automatically profiles the internal arithmetic and generates a core with minimum operand bit-widths and thus minimum circuit complexity. The engine performs a closed-loop optimization over three different internal arithmetic models (fixed-point, block floating-point and convergent block floating-point), using the numerical accuracy budget given by the user as a reference point. The flexibility and re-usability of the proposed macrocell are illustrated through several case studies which encompass all current state-of-the-art OFDM communication standards (WLAN, WMAN, xDSL, DVB-T/H, DAB and UWB). Implementation results are presented for two deep sub-micron standard-cell libraries (65 and 90 nm) and commercially available FPGA devices. Compared with other FFT core compilers, the proposed environment produces macrocells with lower circuit complexity and the same system-level performance (throughput, transform size and numerical accuracy).
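    The accuracy-driven configuration engine is described only at the level of its closed-loop behaviour, so the sketch below is one plausible reading of that loop rather than the tool itself: a bit-true fixed-point FFT model (simplified here to plain fixed-point, without the block floating-point variants) is simulated against a floating-point reference, and the word length is grown until the user's accuracy budget is met. The quantisation model, the 60 dB default budget and all function names are assumptions made for the example.

```python
import numpy as np

def quantize(x, frac_bits):
    """Round values to a fixed-point grid with the given number of fractional bits."""
    step = 2.0 ** -frac_bits
    return np.round(x / step) * step

def fxp_fft(x, frac_bits):
    """Toy radix-2 FFT whose intermediate values are re-quantized at every stage,
    mimicking a fixed-point datapath (illustrative model only)."""
    a = quantize(np.asarray(x, dtype=complex), frac_bits)
    n = len(a)
    stages = int(np.log2(n))
    # Bit-reversal permutation of the input samples
    idx = [int(format(i, f"0{stages}b")[::-1], 2) for i in range(n)]
    a = a[idx]
    half = 1
    for _ in range(stages):
        w = np.exp(-2j * np.pi * np.arange(half) / (2 * half))
        for start in range(0, n, 2 * half):
            top = a[start:start + half].copy()
            bot = a[start + half:start + 2 * half] * w
            a[start:start + half] = quantize(top + bot, frac_bits)
            a[start + half:start + 2 * half] = quantize(top - bot, frac_bits)
        half *= 2
    return a

def min_bits_for_budget(signal, snr_budget_db=60.0, max_bits=24):
    """Smallest fractional word length whose output SNR meets the accuracy budget."""
    ref = np.fft.fft(signal)
    for bits in range(4, max_bits + 1):
        err = fxp_fft(signal, bits) - ref
        snr = 10 * np.log10(np.sum(np.abs(ref) ** 2) / np.sum(np.abs(err) ** 2))
        if snr >= snr_budget_db:
            return bits, snr
    return max_bits, snr

tone = np.exp(2j * np.pi * 5 * np.arange(64) / 64)   # single-tone test vector
print(min_bits_for_budget(tone))
```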
    The final part of this dissertation focuses on the Network-on-Chip (NoC) design paradigm, whose goal is to build scalable communication infrastructures connecting hundreds of cores. A low-complexity link architecture for mesochronous on-chip communication is discussed. The link relaxes the skew constraints in clock tree synthesis, enabling frequency speed-up, power consumption reduction and faster back-end turnaround. The proposed architecture reaches a maximum clock frequency of 1 GHz on a 65 nm low-leakage CMOS standard-cell library. In a complex test case with a full-blown NoC infrastructure, the link overhead is only 3% of chip area and 0.5% of leakage power consumption. Finally, a new methodology, named metacoding, is proposed. Metacoding generates correct-by-construction, technology-independent RTL codebases for NoC building blocks. The RTL coding phase is abstracted and modeled with an object-oriented framework, integrated within a commercial tool for IP packaging (Synopsys CoreTools suite). Compared with traditional coding styles based on pre-processor directives, metacoding produces 65% smaller codebases and reduces the number of configurations to verify by up to three orders of magnitude.
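    The abstract gives only a high-level view of metacoding, so the snippet below is a hypothetical illustration of generating directive-free RTL from an object-oriented description. The RouterMeta class, its parameters and the emitted VHDL entity are invented for the example and do not reflect the actual CoreTools integration.

```python
class RouterMeta:
    """Hypothetical object-oriented description of a NoC router; emits plain,
    technology-independent RTL with every configuration choice already resolved."""
    def __init__(self, name, ports, flit_width):
        self.name, self.ports, self.flit_width = name, ports, flit_width

    def emit_vhdl(self):
        w = self.flit_width - 1
        # One input and one output data port per direction, no pre-processor directives
        decls = "\n".join(
            f"    {p}_in  : in  std_logic_vector({w} downto 0);\n"
            f"    {p}_out : out std_logic_vector({w} downto 0);"
            for p in self.ports
        ).rstrip(";")
        return (
            "library ieee;\nuse ieee.std_logic_1164.all;\n\n"
            f"entity {self.name} is\n  port (\n{decls}\n  );\nend {self.name};\n"
        )

print(RouterMeta("router_5p", ["north", "south", "east", "west", "local"], 32).emit_vhdl())
```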

    Assessing and augmenting SCADA cyber security: a survey of techniques

    SCADA systems monitor and control critical infrastructures of national importance such as power generation and distribution, water supply, transportation networks, and manufacturing facilities. The pervasiveness, miniaturisation and declining cost of internet connectivity have transformed these systems from strictly isolated into highly interconnected networks. This connectivity brings immense benefits such as reliability, scalability and remote access, but at the same time exposes an otherwise isolated and secure system to global cyber security threats. This inevitable transformation to highly connected systems therefore necessitates effective security safeguards, as any compromise or downtime of SCADA systems can have severe economic, safety and security ramifications. One way to ensure vital asset protection is to adopt the viewpoint of an attacker in order to determine weaknesses and loopholes in defences. Such a mindset helps to identify and fix potential breaches before they are exploited. This paper surveys tools and techniques for uncovering SCADA system vulnerabilities. A comprehensive review of the selected approaches is provided, along with their applicability.

    A novel approach to emergency management of wireless telecommunication system

    Survivability concerns the continuity of service when components of a system are damaged. The concept is especially useful in the emergency management of a system, as emergencies often involve accidents or disasters that damage the system to some degree. The overall objective of this thesis is to develop a quantitative approach to the emergency management of a wireless cellular telecommunication system in light of its service continuity in emergency situations, namely the survivability of the system. A particular wireless cellular telecommunication system, WCDMA, is taken as an example to ground this research.
    The thesis proposes an ontology-based paradigm for service management in which the management system contains three models: (1) the work domain model, (2) the dynamic model, and (3) the reconfiguration model. A powerful work domain modeling tool called Function-Behavior-Structure (FBS) is employed to develop the work domain model of the WCDMA system. Petri net theory, along with its formalization, is applied to develop the dynamic model of the WCDMA system. A concept from engineering design, the general and specific function concept, is applied to develop a new approach to system reconfiguration for high survivability. These models are implemented along with a user interface that can be used by emergency management personnel, and a demonstration of the effectiveness of the approach is included.
    This thesis study makes three main contributions. First, the proposed approach can be added to contemporary telecommunication management systems. Second, the Petri net model of the WCDMA system is more comprehensive than any dynamic model of telecommunication systems in the literature, and it can be extended to other telecommunication systems. Third, the proposed system reconfiguration approach, based on the general and specific function concept, offers a unique way to ensure the survivability of any service provider system.
    In conclusion, the ontology-based paradigm for service system management provides a total solution to service continuity and its emergency management. This paradigm makes the complex mathematical modeling of the system transparent to the manager or managerial personnel and provides a feasible scenario for human-in-the-loop management.
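    To make the role of the dynamic model concrete, the following is a minimal, invented Petri net fragment simulated in Python; the places, transitions and markings are illustrative only and are far simpler than the WCDMA model developed in the thesis.

```python
# Minimal Petri net simulator: places hold tokens, and a transition may fire
# when all of its input places are marked, consuming and producing tokens.
marking = {"cell_ok": 1, "call_request": 1, "call_active": 0, "cell_failed": 0}

transitions = {
    "setup_call":   {"inputs": ["cell_ok", "call_request"],
                     "outputs": ["call_active", "cell_ok"]},   # self-loop keeps the cell up
    "cell_failure": {"inputs": ["cell_ok"], "outputs": ["cell_failed"]},
}

def enabled(t):
    return all(marking[p] > 0 for p in transitions[t]["inputs"])

def fire(t):
    assert enabled(t), f"{t} is not enabled"
    for p in transitions[t]["inputs"]:
        marking[p] -= 1
    for p in transitions[t]["outputs"]:
        marking[p] += 1

fire("setup_call")
print(marking)   # call_request consumed, call_active marked, cell still ok
```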

    The DS-Pnet modeling formalism for cyber-physical system development

    This work presents the DS-Pnet modeling formalism (Dataflow, Signals and Petri nets), designed for the development of cyber-physical systems. It combines the characteristics of Petri nets and dataflows to support the modeling of mixed systems containing both reactive parts and data-processing operations. Inheriting the features of the parent IOPT Petri net class, including an external interface composed of input and output signals and events, the addition of dataflow operations brings enhanced modeling capabilities to specify mathematical data transformations and to graphically express the dependencies between signals. Data-centric systems that do not require reactive controllers are designed using pure dataflow models. Component-based model composition enables reusing existing components, creating libraries of previously tested components, and hierarchically decomposing complex systems into smaller sub-systems. A precise execution semantics was defined, considering the relationship between dataflow and Petri net nodes and providing an abstraction for the interface between reactive controllers and input and output signals, including analog sensors and actuators. The new formalism is supported by the IOPT-Flow Web-based tool framework, which offers tools to design and edit models, simulate model execution in the Web browser, and perform model checking, plus automatic software/hardware code generation tools to implement controllers running on embedded devices (C, VHDL and JavaScript). A new communication protocol was created to permit the automatic implementation of distributed cyber-physical systems composed of networks of remote components communicating over the Internet. The editor tool connects directly to remote embedded devices running DS-Pnet models and may import remote components into new models, simplifying the creation of distributed cyber-physical applications, where the communication between distributed components is specified simply by drawing arcs. Several application examples were designed to validate the proposed formalism and the associated framework, ranging from hardware solutions and industrial applications to distributed software applications.
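    As a rough sketch of what mixing a dataflow operation with a reactive, Petri-net-like controller can look like (the node names, signals and threshold below are invented for the example and are not part of IOPT-Flow):

```python
# Toy mix of a dataflow node and a reactive transition, in the spirit of
# combining data transformations with Petri-net control (illustrative only).
def dataflow_average(samples):
    """Dataflow node: a pure mathematical transformation of an input signal."""
    return sum(samples) / len(samples)

state = {"idle": 1, "cooling": 0}          # Petri-net-style marking

def step(temperature_samples, threshold=30.0):
    """Reactive part: the 'start_cooling' transition fires when the dataflow
    result exceeds the threshold and the 'idle' place is marked."""
    avg = dataflow_average(temperature_samples)
    if state["idle"] and avg > threshold:
        state["idle"] -= 1
        state["cooling"] += 1
    return {"fan_on": bool(state["cooling"]), "avg": avg}

print(step([28.0, 31.5, 33.0]))            # fan turns on once the average exceeds 30
```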

    Reviewing SWAP


    Petri nets based components within globally asynchronous locally synchronous systems

    Dissertation presented at the Faculdade de Ciências e Tecnologias da Universidade Nova de Lisboa to obtain the degree of Master in Electrical and Computer Engineering (Engenharia Electrotécnica e Computadores).
    The main goal is to develop a solution for the interconnection of the components constituting a GALS (Globally Asynchronous, Locally Synchronous) system. The components are implemented in parallel and are obtained by partitioning a model expressed as a Petri net (PN), performed using the PN editor SNOOPY-IOPT in conjunction with the Split tool and the tools that automatically generate VHDL code from the PNML representations of the models resulting from the partition (these tools were developed within the FORDESIGN project and are available at http://www.uninova.pt/FORDESIGN). Typical solutions for ensuring proper communication between the components of the GALS system are analyzed, and an appropriate solution for interconnecting the components associated with the PN sub-models is characterized and developed. The final goal (not attained with this thesis) would be a tool that generates the code of the interconnection solution from the associated components for a specific application. The proposed component interconnection solution was coded in VHDL, and the implementation platforms used for testing include the Xilinx Spartan-3 and Virtex-II FPGAs.
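    For readers unfamiliar with the interconnection problem, a common way to pass data safely between two locally synchronous components is a request/acknowledge handshake. The Python generator below is only a behavioural sketch of a four-phase handshake under that assumption; the actual solution developed in the dissertation is implemented in VHDL and may differ.

```python
# Behavioural sketch of a four-phase request/acknowledge handshake between a
# sender and a receiver in different clock domains (illustrative only).
def sender(data_items):
    """Yields (req, data) pairs; waits for the acknowledge between items."""
    for item in data_items:
        ack = yield (1, item)          # raise request with data
        while not ack:                 # hold request until acknowledge is seen
            ack = yield (1, item)
        ack = yield (0, None)          # drop request
        while ack:                     # wait for acknowledge to drop
            ack = yield (0, None)

def run_handshake(items):
    received, ack = [], 0
    s = sender(items)
    req, data = s.send(None)
    for _ in range(10 * len(items) + 10):      # bounded simulation loop
        if req and not ack:
            received.append(data)              # receiver latches the data
            ack = 1
        elif not req and ack:
            ack = 0
        try:
            req, data = s.send(ack)
        except StopIteration:
            break
    return received

print(run_handshake(["A", "B", "C"]))   # -> ['A', 'B', 'C']
```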

    Process control and configuration of a reconfigurable production system using a multi-agent software system

    Thesis (M. Tech. (Information Technology)) -- Central University of Technology, Free State, 2011.
    Traditional designs for component-handling platforms are rigidly linked to the product being produced. Control and monitoring of these platforms is performed by various proprietary hardware controllers containing the control logic for the production process. Should the configuration of the component-handling platform change, the controllers need to be taken offline and reprogrammed to take the changes into account. The current thinking in component-handling system design is the notion of re-configurability: with minimum or no downtime the system can be adapted to produce another product type or to overcome a device failure. The re-configurable component-handling platform is built up from groups of independent devices. These groups, or cells, are each responsible for some aspect of the overall production process. By moving or swapping different versions of these cells within the component-handling platform, re-configurability is achieved. Such a dynamic system requires a flexible communications platform and a high-level software control architecture to accommodate the reconfigurable nature of the system. This work presents the design and testing of the core of a re-configurable production control software platform, in which multiple software components work together to control and monitor a re-configurable component-handling platform. The design and implementation of a production database, a production ontology, a communications architecture and the core multi-agent control application linking all these components together are presented.
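    As a loose illustration of how a multi-agent control layer can make cell swapping transparent (the agent classes, cell names and operations below are invented for the example, not taken from the implemented system):

```python
# Minimal multi-agent coordination sketch: a coordinator asks which cell agent
# can perform an operation and dispatches the job to it (illustrative only).
class CellAgent:
    def __init__(self, name, operations):
        self.name, self.operations = name, set(operations)

    def can_do(self, operation):
        return operation in self.operations

    def perform(self, operation, part_id):
        return f"{self.name}: {operation} done on part {part_id}"

class Coordinator:
    def __init__(self, agents):
        self.agents = list(agents)          # cells can be added or removed at runtime

    def dispatch(self, operation, part_id):
        for agent in self.agents:
            if agent.can_do(operation):
                return agent.perform(operation, part_id)
        return f"no cell available for '{operation}'"

cells = [CellAgent("feeder_cell", ["feed"]), CellAgent("gluing_cell", ["glue"])]
line = Coordinator(cells)
print(line.dispatch("glue", 42))
# Re-configuration: swap in a new cell that also offers 'inspect'
line.agents.append(CellAgent("camera_cell", ["inspect"]))
print(line.dispatch("inspect", 42))
```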