
    REAL-TIME ADAPTIVE PULSE COMPRESSION ON RECONFIGURABLE, SYSTEM-ON-CHIP (SOC) PLATFORMS

    New radar applications must execute complex algorithms and process large quantities of data to generate useful information for users. This situation has motivated the search for better processing solutions, including low-power high-performance processors, efficient algorithms, and high-speed interfaces. In this work, a hardware implementation of adaptive pulse compression algorithms for real-time transceiver optimization is presented, based on a System-on-Chip architecture for reconfigurable hardware devices. This study also evaluates the performance of dedicated coprocessors as hardware accelerator units to speed up and improve computation-intensive tasks such as matrix multiplication and matrix inversion, which are essential for solving the covariance matrix. The tradeoffs between latency and hardware utilization are also presented. Moreover, the system architecture takes advantage of the embedded processor, which is interconnected with the logic resources through high-performance buses, to perform floating-point operations, control the processing blocks, and communicate with an external PC through a customized software interface. The overall system functionality is demonstrated and tested for real-time operation using a Ku-band testbed together with a low-cost channel emulator for different types of waveforms.
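    As a rough illustration of the adaptive step described above, the following NumPy sketch computes an MMSE-style pulse-compression filter per range cell, in the spirit of reiterative MMSE (RMMSE) approaches: a covariance matrix is assembled from shifted copies of the waveform weighted by neighboring power estimates, then solved to form the filter. The function name, the simplified update, and the noise parameter are illustrative assumptions, not the paper's exact algorithm or its fixed-point hardware mapping.

```python
import numpy as np

def adaptive_pulse_compression(y, s, noise_var=1e-3):
    """One RMMSE-style adaptive pass over a received record y for a
    transmitted waveform s (both complex baseband).  Illustrative only."""
    y = np.asarray(y, dtype=complex)
    s = np.asarray(s, dtype=complex)
    N = len(s)
    # Initial estimate: conventional matched-filter output per range cell
    x = np.correlate(y, s, mode="full")[N - 1:]
    rho = np.abs(x) ** 2                 # power estimate per range cell
    x_hat = x.copy()
    for k in range(len(y) - N + 1):
        # Covariance of the N received samples covering cell k: noise plus
        # contributions from neighbouring cells via shifted waveform copies
        C = noise_var * np.eye(N, dtype=complex)
        for m in range(-N + 1, N):
            j = k + m
            if 0 <= j < len(x):
                s_m = np.zeros(N, dtype=complex)
                if m >= 0:
                    s_m[m:] = s[:N - m]
                else:
                    s_m[:N + m] = s[-m:]
                C += rho[j] * np.outer(s_m, s_m.conj())
        # MMSE filter for cell k; this matrix solve is the costly step the
        # abstract accelerates with matrix-multiply/invert coprocessors
        w = rho[k] * np.linalg.solve(C, s)
        x_hat[k] = w.conj() @ y[k:k + N]
    return x_hat
```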

    SatCat5: A Low-Power, Mixed-Media Ethernet Network for Smallsats

    In any satellite, internal bus and payload systems must exchange a variety of command, control, telemetry, and mission data. In too many cases, the resulting network is an ad-hoc proliferation of complex, dissimilar protocols with incomplete system-to-system connectivity. While standards like CAN, MIL-STD-1553, and SpaceWire mitigate this problem, none can simultaneously meet the needs for high throughput and low power consumption. We present a new solution that uses Ethernet framing and addressing to unify a mixed-media network. Low-speed nodes (0.1-10 Mbps) use simple interfaces such as SPI and UART to communicate with extremely low power and minimal complexity. High-speed nodes use so-called “media-independent” interfaces such as RMII, RGMII, and SGMII to communicate at rates up to 1000 Mbps and enable connection to traditional COTS network equipment. All nodes are interconnected into a single smallsat area network by a Layer-2 network switch that supports all of these mixed-media interfaces on one network. The result is fast, easy, and flexible communication between any two subsystems. SatCat5 is presented as a free and open-source reference implementation of this mixed-media network switch, with a power consumption of 0.2-0.7 W depending on network activity. Further discussion covers example protocols that can be used on such networks, leveraging IPv4 when suitable but also enabling full-featured communication without the need for a complex protocol stack.
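    To make the framing concrete, here is a minimal sketch of what a low-speed node might send: a standard Ethernet II frame (destination MAC, source MAC, EtherType, payload, CRC-32 FCS) delimited for a UART byte stream with SLIP-style escaping. The EtherType and addresses are arbitrary illustrations; SatCat5 defines its own port protocols, so treat this only as a generic sketch of Ethernet framing over a serial interface.

```python
import struct
import zlib

SLIP_END, SLIP_ESC, SLIP_ESC_END, SLIP_ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def eth_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
    """Ethernet II frame: 6B dst MAC, 6B src MAC, 2B EtherType, payload, FCS."""
    body = dst + src + struct.pack("!H", ethertype) + payload
    fcs = struct.pack("<I", zlib.crc32(body))  # Ethernet FCS: CRC-32, LSB first
    return body + fcs

def slip_encode(frame: bytes) -> bytes:
    """Delimit a frame for a UART byte stream (SLIP-style escaping)."""
    out = bytearray([SLIP_END])
    for b in frame:
        if b == SLIP_END:
            out += bytes([SLIP_ESC, SLIP_ESC_END])
        elif b == SLIP_ESC:
            out += bytes([SLIP_ESC, SLIP_ESC_ESC])
        else:
            out.append(b)
    out.append(SLIP_END)
    return bytes(out)

# A low-speed node sends one telemetry frame to the switch over UART
# (locally administered MACs and the EtherType are illustrative):
frame = eth_frame(b"\x02\x00\x00\x00\x00\x02", b"\x02\x00\x00\x00\x00\x01",
                  0x5C00, b"telemetry packet")
uart_bytes = slip_encode(frame)
```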

    A new vision of software defined radio: from academic experimentation to industrial exploitation

    The broad objective of this study is to examine the role of Software Defined Radio (SDR) in the industrial field; essentially, it examines the changes that have to be made to move this technology into the commercial domain. It is important to predict the impact of introducing Software Defined Radio on the telecommunications industry, because it represents a future that is fast approaching. The project starts with the evolution of mobile telecommunications systems throughout history. Following this, Software Defined Radio is defined and its main features, such as its architecture, are discussed. Moreover, the study aims to predict the changes that the telecommunications industry might undergo with the introduction of SDR, and some future structural and organizational variations are suggested. Additionally, the positive and negative aspects of introducing SDR into the commercial domain are discussed from different points of view and, finally, the future SDR mobile phone is described, with its possible hardware and software.

    Comparative Benchmarking Analysis of Next-Generation Space Processors

    Researchers, corporations, and government entities are seeking to deploy increasingly compute-intensive workloads on space platforms. This need is driving the development of two new radiation-hardened, multi-core space processors, the BAE Systems RAD5545(TM) processor and the Boeing High-Performance Spaceflight Computing (HPSC) processor. As these systems are in the development phase as of this writing, the Freescale P5020DS and P5040DS systems, based on the same PowerPC e5500 architecture as the RAD5545 processor, and the Hardkernel ODROID-C2, sharing the same ARM Cortex-A53 core as the HPSC processor, were selected as facsimiles for evaluation. Several OpenMP-parallelized applications, including a color search, a Sobel filter, a Mandelbrot set generator, a hyperspectral-imaging target classifier, and an image thumbnailer, were benchmarked on these processing platforms. Performance and energy-consumption results on these facsimiles were scaled to the forecast frequencies of the radiation-hardened devices in development. In these studies, the RAD5545 achieved the highest and most consistent parallel efficiency, up to 99%. The HPSC processor achieved lower execution times, averaging about half that of the RAD5545 processor, with lower energy consumption. The evaluated applications achieved a speedup of 3.9 times across four cores. The frequency-scaling methods were validated by comparing the scaled estimates with measurements from an underclocked facsimile, which showed an average accuracy of 97% between estimated and measured results. These performance outcomes help to quantify the capabilities of both the RAD5545 and HPSC processors for on-board parallel processing of computationally demanding applications in future space missions.
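    The scaling arithmetic the study relies on can be stated compactly. The sketch below shows the simplest linear frequency-scaling model (execution time inversely proportional to clock rate) together with the usual speedup and parallel-efficiency definitions; all numbers in the example are illustrative, not the paper's measurements.

```python
def scale_time(t_meas: float, f_meas_hz: float, f_target_hz: float) -> float:
    """Linear frequency scaling: execution time assumed inversely
    proportional to clock frequency (the simplest scaling model)."""
    return t_meas * (f_meas_hz / f_target_hz)

def parallel_metrics(t_serial: float, t_parallel: float, cores: int):
    """Classic definitions: speedup = T1/Tp, efficiency = speedup/p."""
    speedup = t_serial / t_parallel
    return speedup, speedup / cores

# Illustrative numbers only: a kernel measured on a 1.5 GHz facsimile,
# scaled to a hypothetical 800 MHz flight clock, and a 4-core run
# showing a ~3.9x speedup at ~98% parallel efficiency.
t_flight = scale_time(0.42, 1.5e9, 0.8e9)
speedup, efficiency = parallel_metrics(t_serial=1.60, t_parallel=0.41, cores=4)
```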

    Enabling Runtime Self-Coordination of Reconfigurable Embedded Smart Cameras in Distributed Networks

    Smart camera networks are real-time distributed embedded systems that perform computer vision using multiple cameras. This approach is a confluence of four major disciplines (computer vision, image sensors, embedded computing, and sensor networks) and has been the subject of intensive work in the past decades. Recent advances in computer vision and network communication, and the rapid growth of high-performance computing, especially on reconfigurable devices, have enabled the design of more robust smart camera systems. Despite these advances, the effectiveness of current networked vision systems (compared to their operating costs) is still disappointing, the main reasons being the poor coordination among camera entities at runtime and the lack of a clear formalism to dynamically capture and address the self-organization problem without relying on human intervention. In this dissertation, we investigate the use of a declarative modeling approach for capturing runtime self-coordination. We combine modeling approaches borrowed from logic programming, computer vision techniques, and high-performance computing in the design of an autonomous and cooperative smart camera. We propose a compact modeling approach based on Answer Set Programming for the architecture synthesis of a system-on-reconfigurable-chip camera that supports runtime cooperative work and collaboration with other camera nodes in a distributed network. Additionally, we propose a declarative approach for modeling runtime camera self-coordination for distributed object tracking, in which moving targets are handed over in a distributed manner and recovered in case of node failure.
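    The dissertation models coordination declaratively in Answer Set Programming; as a purely procedural illustration of the handover rule it describes (the camera with the best view of a target tracks it, and a failed node's target is re-elected), consider the following sketch. All names and the random visibility stand-in are assumptions for illustration only.

```python
import random

class CameraNode:
    """Toy camera node; visibility() stands in for a detector score."""
    def __init__(self, cam_id):
        self.cam_id, self.target, self.alive = cam_id, None, True

    def visibility(self, target):
        # Stand-in for a real detection-confidence computation
        return random.random() if self.alive else 0.0

def handover_step(nodes, target):
    """The node with the best current view keeps the target; the others
    release it.  The same rule runs identically on every node."""
    best = max(nodes, key=lambda n: n.visibility(target))
    for n in nodes:
        n.target = target if n is best else None

def recover(nodes, target):
    # Node-failure recovery: if no live node holds the target, re-elect
    if not any(n.target == target and n.alive for n in nodes):
        handover_step([n for n in nodes if n.alive], target)

cams = [CameraNode(i) for i in range(3)]
handover_step(cams, "person-7")
cams[0].alive = False          # simulate a node failure
recover(cams, "person-7")      # a surviving node resumes the track
```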

    Development of FPGA-Based High-Speed Serial Links for High Energy Physics Experiments

    High Energy Physics (HEP) experiments generate high volumes of data which need to be transferred over long distances; reliable, high-speed links are therefore necessary for data readout. Over the years, serial links (especially optical ones) have been preferred over parallel ones because of their extremely high bandwidth, and high-speed serial links are now commonly used in the Trigger and Data Acquisition (TDAQ) systems of HEP experiments, not only for data transfer but also for the distribution of trigger and control signals. Examples of their wide use can be found at CERN, where each of the four big experiments mounted on the Large Hadron Collider (LHC) uses a huge number of serial links in its readout system. Also at the LHC, the Timing, Trigger and Control (TTC) system, which broadcasts timing signals from the LHC machine to the experiments, uses optical serial links to distribute signals over kilometers of distance (the LHC ring is 27 km in circumference). For the LHC upgrades, physical-layer components and protocol chips (ASICs) have also been designed and are now under development: the Versatile Link and the GBT protocol (and ASICs), whose distinguishing feature is their radiation hardness. This PhD project responds to these needs of HEP experiments by developing:
- a high-speed, self-adapting serial link, which can easily be used in different application fields;
- the serial interface of a readout board in the end-cap region of the ATLAS experiment at the LHC;
- the interface board for the barrel readout system of the ATLAS experiment.
The last two projects both required the development of fixed-latency, high-speed serial links. To take advantage of the flexibility, re-programmability, and system integration of SRAM-based Field Programmable Gate Array (FPGA) devices, their embedded serializer-deserializer (SERDES) modules were chosen for the development of the links. As a drawback, however, FPGA-embedded SERDESes are typically designed for applications that do not require deterministic latency, so an accurate study of their architecture was necessary in order to find a configuration and a clocking scheme that guarantee a deterministic transmission delay in data transfers. The frequency-agile, auto-adaptive serial link can analyze the incoming data stream, by scanning the Unit Interval, to find the highest transmission line rate compatible with a given tolerated Bit Error Ratio (BER). It uses a new feature (RX eye-margin analysis) of the receive side of the high-speed transceivers (GTX/GTH) in Xilinx 7-series FPGAs to measure and display the receiver eye margin after the equalizer. When the eye-scan functionality is running, an additional sampler is activated in the GTX. It acquires a new sample (the Offset Sample) with programmable horizontal and vertical offsets from the data sample point (the Data Sample) used in standard operation. An eye-scan measurement run acquires a large number of Data Samples (from tens of thousands up to 10^14 or more) and counts the number of times the Offset Sample differs from the Data Sample; this number is called the Error Count. The BER at a specific vertical and horizontal offset is the ratio between the Error Count and the Sample Count. By repeating the measurement for each horizontal and vertical offset in the Unit Interval (or in a part of it), a 2-D BER map can be produced, usually called a Statistical Eye.
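    A minimal software rendering of the eye-scan procedure just described: sweep the horizontal and vertical offsets across the Unit Interval, read back an Error Count for a fixed Sample Count at each point, and form the BER map. The hardware access is mocked by a callable; on a real 7-series GTX this step would program the eye-scan (ES_*) attributes through the DRP, which is not shown here.

```python
import numpy as np

def eye_scan(read_offset_errors, h_range, v_range, sample_count=1_000_000):
    """Builds the 2-D 'Statistical Eye': BER at each (horizontal, vertical)
    offset in the Unit Interval.  read_offset_errors(h, v, n) mocks the
    transceiver: it returns the Error Count, i.e. how many of n samples at
    offset (h, v) disagreed with the Data Sample."""
    ber = np.empty((len(v_range), len(h_range)))
    for i, v in enumerate(v_range):
        for j, h in enumerate(h_range):
            error_count = read_offset_errors(h, v, sample_count)
            ber[i, j] = error_count / sample_count  # BER = errors / samples
    return ber

# Synthetic stand-in for the hardware, shaped like a roughly open eye:
def fake_hw(h, v, n, ui=32.0, amp=127.0):
    p = min(1.0, (abs(h) / ui) ** 4 + (abs(v) / amp) ** 4)
    return np.random.binomial(n, p)

stat_eye = eye_scan(fake_hw, h_range=range(-16, 17), v_range=range(-96, 97, 8))
```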
The auto-adaptive serial link is designed around an FPGA-embedded microprocessor, which drives the programmable ports of the GTX to perform a 2-D eye scan and takes care of reconfiguring the GTX parameters in order to fully exploit the available link bandwidth. Xilinx provides a standalone tool that performs the eye-scan analysis on the receiver side of the GTX/GTH transceiver, using the MicroBlaze Micro Controller System macro; the toolkit also includes the eye-scan algorithm (as C code). Moreover, Xilinx supplies the hardware source files for the implementation of a link based on the XAUI protocol, in which the GTXs are arranged in a loopback configuration. The original contribution of this work consists in the design, build-up, and optimization of a full architecture on top of the basic Xilinx tool, which:
- drives the programmable ports of the GTX in order to modify the line rate of the link;
- runs consecutive eye scans at various line rates;
- analyses the results of the different scans in order to find the maximum line rate sustainable by the link;
- manages the synchronization between the transmitter and the receiver of the link, which is needed at each line-rate change.
The application can be deployed as a monitoring tool in HEP experiments, to remotely monitor a transmission system or detect issues in the physical layer of a serial link. Candidate applications include the many experiments at the LHC at CERN, which make intensive use of different serial links, both for the transmission of TTC signals and for trigger and data readout. Besides, this solution can easily be adapted to widely different frameworks, as it can be used on top of any existing link: it makes no specific assumption about the link's specification or protocol.
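    The decision loop built on top of those scans can be sketched as follows. The helper callables (reconfiguration, resynchronization, eye-scan run) stand in for the GTX-specific logic and are assumptions, not a Xilinx API; a production version would also score the open-eye area rather than apply a single threshold test.

```python
import numpy as np

def pick_line_rate(rates_gbps, reconfigure, resync, run_eye_scan,
                   ber_target=1e-12):
    """Adaptation loop: try line rates from fastest to slowest and keep the
    first one whose eye scan meets the tolerated BER.  The three callables
    stand in for transceiver reconfiguration, TX/RX resynchronization, and
    an eye-scan run returning a 2-D BER map (numpy array)."""
    for rate in sorted(rates_gbps, reverse=True):
        reconfigure(rate)       # rewrite transceiver PLL/divider settings
        if not resync():        # TX and RX must re-align after a rate change
            continue
        ber_map = run_eye_scan()
        # Simplistic acceptance: some region of the eye meets the target
        if np.any(ber_map <= ber_target):
            return rate
    return None                 # no sustainable rate found

# e.g. pick_line_rate([6.25, 5.0, 3.125, 1.25], set_rate, sync_link, eye_scan)
# where the three helpers are hypothetical wrappers around the fabric logic.
```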
The other two serial interfaces developed in this project are in the framework of the ATLAS experiment. ATLAS is one of the four detectors installed on the LHC proton-proton collider built at CERN. It was designed to collide two opposing particle beams at an energy of 14 TeV and to reach a luminosity of 10^34 cm^-2 s^-1. In order to reach the design parameters, the LHC system will be upgraded in several phases, and to take advantage of the improved LHC operation, the ATLAS detector must be upgraded on the same schedule. The main focus of the Phase-I ATLAS upgrade (to be completed by 2018) is the Level-1 trigger, where upgrades are planned for both the muon and the calorimeter trigger systems. In particular, for the end-cap region of the muon spectrometer, the installation of a new set of precision tracking and trigger detectors, called the New Small Wheels (NSW), was approved. The NSW will be instrumented with micro-mesh gaseous structure (MM) detectors and small-strip Thin Gap Chambers (sTGC). These detectors will address two issues of particular importance at high luminosity: the high rate of fake high-pT Level-1 muon triggers, and the high L1 muon rate at the current momentum threshold. With the introduction of new detectors, new electronics must be developed, in particular new trigger electronics for both the MM and the sTGC. I was involved in the development of the serial interface of the FPGA-based sTGC trigger board, which uses information from the coarse sTGC readout pads. The sTGC pad trigger board receives serial data from 24 front-end chips at 4.8 Gb/s. On the board, the data are deserialised, aligned, and analysed by the trigger algorithm. The trigger logic processes the data and chooses two candidates at each bunch crossing; the result is then serialised and used for selective fine-grained strip readout. I developed the pad trigger board interface logic. The data format from the front-end chips has been agreed upon and defines the requirements on the receiver and decoding logic. There are 24 output lines and the data are 8B/10B encoded. While the receiver uses the Xilinx Kintex-7 GTX transceivers, the output lines are driven by double-data-rate (DDR) shift registers at 640 Mb/s. A fixed latency in the sTGC trigger chain was guaranteed through the implementation and configuration of all serialisers and deserialisers. In order to test the project, I also developed a simple microprocessor-based protocol for accessing the board via a terminal (RS-232). A demonstrator board is now being developed. Another Phase-I Level-1 trigger upgrade is a new Muon to Central Trigger Processor Interface (MUCTPI). The MUCTPI receives muon-candidate information from each of the muon detectors, selects muon candidates, and sends them to the Central Trigger Processor (CTP). In the first runs of ATLAS, the L1 barrel trigger candidate data were transferred to the MUCTPI via copper cables; to cope with the trigger upgrade, serial optical links are necessary. The optical links will provide a much higher bandwidth (up to 6.4 Gb/s), which will be used to transfer additional information from the sector-logic modules, for example data for more than two muon candidates. They will also provide a lower transmission latency. I developed the interface board between the new MUCTPI and the Resistive Plate Chambers (RPC) muon trigger, using the Xilinx Artix-7 FPGA GTP transceivers. I carried out the feasibility study of the new serial optical transmitter and designed the logic for the new data format. In this case too, fixed latency was a requirement to be fulfilled.
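    As a toy illustration of the pad-trigger selection step described above (two candidates kept per bunch crossing), the sketch below ranks pad coincidences and keeps the best two. The quality field is an assumed stand-in for the real ranking criterion, which is not specified in the abstract.

```python
def select_candidates(coincidences, bc_id, max_out=2):
    """Keep the best max_out pad coincidences for one bunch crossing;
    the winners steer the selective fine-grained strip readout."""
    ranked = sorted(coincidences, key=lambda c: c["quality"], reverse=True)
    return [{"bcid": bc_id, **c} for c in ranked[:max_out]]

# Three pad coincidences arrive in one bunch crossing; two survive:
hits = [{"sector": 3, "quality": 7},
        {"sector": 5, "quality": 9},
        {"sector": 4, "quality": 2}]
winners = select_candidates(hits, bc_id=0x1A2)   # sectors 5 and 3
```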

    Robust design of deep-submicron digital circuits

    With the increasing probability of faults in digital circuits, systems developed for critical environments such as nuclear power plants, aircraft, and space applications must be certified according to industrial standards. This thesis is the result of a CIFRE cooperation between Électricité de France (EDF) R&D and Télécom ParisTech. EDF is one of the largest energy producers in the world and operates numerous nuclear power plants. The control-command systems used in these plants are based on electronic devices, which must be certified according to industrial standards such as IEC 62566, IEC 60987, and IEC 61513 because of the criticality of the nuclear environment. In particular, the use of programmable devices such as FPGAs can be considered a challenge, since the functionality of the device is defined by the designer only after its physical design. The work presented in this thesis concerns the design of new methods for analyzing, as well as improving, the reliability of digital circuits.

    The design of circuits to operate in critical environments, such as those used in control-command systems of nuclear power plants, is becoming a great challenge with technology scaling. These circuits have to pass a number of tests and analysis procedures in order to be qualified for operation. In the case of nuclear power plants, safety is a very high-priority constraint, and circuits designed to operate in such critical environments must comply with several technical standards, such as IEC 62566, IEC 60987, and IEC 61513. In these standards, reliability is a main consideration, and methods to analyze and improve circuit reliability are highly required. The present dissertation introduces methods to analyze and improve the reliability of circuits in order to facilitate their qualification according to the aforementioned technical standards. Concerning reliability analysis, we first present a fault-injection-based tool used to assess the reliability of digital circuits. Next, we introduce a method to evaluate the reliability of circuits that takes into account the ability of a given application to tolerate errors. Concerning reliability-improvement techniques, two different strategies to selectively harden a circuit are first proposed. Finally, a method to automatically partition a TMR design based on a given reliability requirement is introduced.
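    As a minimal illustration of the fault-injection approach mentioned above, the sketch below injects one random single-signal fault per trial into a tiny gate-level model (a full adder) and estimates reliability as the fraction of trials whose outputs match the fault-free golden model. The circuit and signal names are illustrative, not the dissertation's actual tool.

```python
import random

def circuit(a, b, c, faults=frozenset()):
    """Tiny gate-level model (full adder).  A name in `faults` inverts that
    internal signal, in the style of a bit-flip or stuck fault."""
    def f(name, value):
        return value ^ (name in faults)
    s1   = f("s1", a ^ b)
    sum_ = f("sum", s1 ^ c)
    c1   = f("c1", a & b)
    c2   = f("c2", s1 & c)
    cout = f("cout", c1 | c2)
    return sum_, cout

def reliability(n_trials=10_000,
                signals=("s1", "sum", "c1", "c2", "cout")):
    """Monte Carlo fault injection: one random single-signal fault per
    trial, compared against the golden (fault-free) output."""
    ok = 0
    for _ in range(n_trials):
        a, b, c = (random.randint(0, 1) for _ in range(3))
        fault = frozenset({random.choice(signals)})
        ok += circuit(a, b, c, fault) == circuit(a, b, c)
    return ok / n_trials

print(f"single-fault reliability: {reliability():.3f}")
```

    Note that some injected faults are logically masked (for example, flipping c1 while c2 is already 1 leaves cout unchanged), which is exactly the effect a fault-injection campaign quantifies.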