
    Optimization of star research algorithm for esmo star tracker

    This paper explains in detail the design and development of a star research algorithm embedded on a star tracker by the ISAE/SUPAERO team. The algorithm is inspired by musical recognition techniques. The work is carried out as part of the ESMO (European Student Moon Orbiter) project by teams of students and professors from ISAE/SUPAERO (Institut Supérieur de l'Aéronautique et de l'Espace). To date, the system engineering studies have been completed, and the work presented here concerns the algorithmic and embedded software development. The physical architecture of the sensor relies on the APS 750 developed by the CIMI laboratory of ISAE/SUPAERO. First, a star research algorithm based on the image acquired in lost-in-space mode (one of the star tracker operational modes) is presented; it correlates digital signatures (hashes) with those stored in a database, in the manner of musical recognition. The musical recognition principle is based on fingerprinting, i.e. the extraction of points of interest from the studied signal. In the musical context, the signal spectrogram is used to identify these points; applying the technique to the image processing domain requires an equivalent of the spectrogram. The points of interest form a hash and are used to search efficiently within a database sorted beforehand for comparison. The main goals of this research algorithm are to minimise the number of computation steps, so that information is delivered at a higher frequency, and to increase the robustness of the computation against the different possible disturbances.
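    As a concrete illustration of the fingerprinting idea described above, the sketch below builds hash keys from quantized pairwise angular separations between the brightest detected star centroids and matches them against a pre-sorted catalogue by voting. The function names, the quantization step and the pair-distance hash are illustrative assumptions, not the ESMO team's actual scheme.

```python
# Minimal sketch (assumed scheme, not the ISAE/SUPAERO implementation):
# star centroids play the role of spectrogram peaks, and quantized pair
# separations play the role of audio fingerprint hashes.
import numpy as np

def build_hashes(centroids, focal_length_px, n_brightest=10):
    """Hash keys from quantized pairwise angular separations of bright stars."""
    pts = np.asarray(centroids, dtype=float)[:n_brightest]
    hashes = []
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            # Small-angle approximation: pixel separation / focal length (px).
            sep = np.linalg.norm(pts[i] - pts[j]) / focal_length_px
            hashes.append(round(float(sep), 4))  # quantized hash key
    return hashes

def identify(image_hashes, catalogue):
    """Vote for catalogue star pairs whose key matches an image hash.

    `catalogue` maps a quantized angular separation to the list of star-pair
    ids sharing that key, i.e. the database indexed by hash for fast lookup."""
    votes = {}
    for key in image_hashes:
        for pair_id in catalogue.get(key, []):
            votes[pair_id] = votes.get(pair_id, 0) + 1
    # The most-voted catalogue pairs constrain the lost-in-space attitude.
    return sorted(votes.items(), key=lambda kv: kv[1], reverse=True)
```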

    CABE : a cloud-based acoustic beamforming emulator for FPGA-based sound source localization

    Microphone arrays are gaining in popularity thanks to the availability of low-cost microphones. Applications including sonar, binaural hearing aid devices, acoustic indoor localization techniques and speech recognition are proposed by several research groups and companies. In most of the available implementations, the microphones are assumed to offer an ideal response in a given frequency domain. Several toolboxes and software packages can be used to obtain the theoretical response of a microphone array with a given beamforming algorithm. However, a tool facilitating the design of a microphone array that takes its non-ideal characteristics into account could not be found. Moreover, generating packages that facilitate implementation on Field Programmable Gate Arrays has, to our knowledge, not been carried out yet. Visualizing the responses in 2D and 3D also poses an engineering challenge. To alleviate these shortcomings, a scalable Cloud-based Acoustic Beamforming Emulator (CABE) is proposed. The non-ideal characteristics of the microphones are considered during the computations, and the results are validated with acoustic data captured from the microphones. It is also possible to generate hardware description language packages containing delay tables that facilitate the implementation of Delay-and-Sum beamformers in embedded hardware. Truncation error analysis can also be carried out for fixed-point signal processing, and the effects of disabling a given group of microphones within the array can be calculated. Results and packages can be visualized with a dedicated client application. Users can create and configure several parameters of an emulation, including sound source placement, the shape of the microphone array and the required signal processing flow. Depending on the user configuration, 2D and 3D graphs showing the beamforming results, waterfall diagrams and performance metrics can be generated by the client application. The emulations are also validated with data captured from existing microphone arrays.
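    The sketch below illustrates the kind of delay table a Delay-and-Sum emulator such as CABE could export for embedded hardware: far-field integer sample delays per microphone and steering angle, plus the corresponding delay-and-sum. The array geometry, sample rate and far-field assumption are illustrative choices, not details taken from the paper.

```python
# Hedged sketch of a Delay-and-Sum delay table; geometry and rates are assumed.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def delay_table(mic_xy, angles_deg, fs):
    """Integer sample delays per microphone and steering angle (far field)."""
    angles = np.deg2rad(np.asarray(angles_deg, dtype=float))
    # Unit vectors pointing toward each steering direction.
    steer = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (A, 2)
    # Propagation delay of each mic relative to the array origin, in samples.
    tau = (mic_xy @ steer.T) / SPEED_OF_SOUND * fs               # (M, A)
    # Shift so all delays are non-negative integers, as needed for HDL tables.
    return np.round(tau - tau.min(axis=0)).astype(int)

def delay_and_sum(frames, delays_for_angle):
    """Sum the microphone signals after applying per-mic integer delays."""
    out_len = frames.shape[1] - delays_for_angle.max()
    return sum(frames[m, d:d + out_len] for m, d in enumerate(delays_for_angle))

# Example: a 4-microphone line array with 4 cm pitch, 48 kHz sampling,
# steered from 0 to 180 degrees in 5-degree steps (assumed configuration).
mics = np.array([[0.00, 0.0], [0.04, 0.0], [0.08, 0.0], [0.12, 0.0]])
table = delay_table(mics, angles_deg=np.arange(0, 181, 5), fs=48_000)
```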

    Design and Implementation of a FPGA and DSP Based MIMO Radar Imaging System

    The work presented in this paper is aimed at the implementation of a real-time multiple-input multiple-output (MIMO) imaging radar used for area surveillance. In this radar, the equivalent virtual array method and a time-division technique are applied to synthesize 16 virtual elements from the MIMO antenna array. The chirp signal generator is based on a combination of a direct digital synthesizer (DDS) and a phase-locked loop (PLL). A signal conditioning circuit is used to deal with the coupling effect within the array. The signal processing platform is based on an efficient field-programmable gate array (FPGA) and digital signal processor (DSP) pipeline on which a robust beamforming imaging algorithm runs. The radar system was evaluated through a real field experiment. The imaging capability and real-time performance shown in the results demonstrate the practical feasibility of the implementation.
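    The equivalent virtual array idea can be sketched as follows: with time-division transmission, each transmit-receive pair behaves like one virtual element located at the sum of the Tx and Rx positions, so e.g. a 4 Tx x 4 Rx array yields 16 virtual elements, matching the count above. The element spacings and the narrowband steering vector below are assumptions for illustration, not the paper's array design.

```python
# Illustrative virtual-array construction; spacings are assumed, not the paper's.
import numpy as np

wavelength = 1.0                       # work in units of the carrier wavelength
tx = np.arange(4) * 2.0 * wavelength   # sparse transmit array (assumed spacing)
rx = np.arange(4) * 0.5 * wavelength   # dense receive array (assumed spacing)

# Virtual element positions: every Tx position added to every Rx position.
virtual = (tx[:, None] + rx[None, :]).ravel()
print(np.sort(virtual))                # 16 virtual phase centres, larger aperture

def steering_vector(positions, angle_deg):
    """Narrowband steering vector of the virtual array toward angle_deg."""
    k = 2 * np.pi / wavelength
    return np.exp(1j * k * positions * np.sin(np.deg2rad(angle_deg)))
```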

    Towards Low Latency and Resource-Efficient FPGA Implementations of the MUSIC Algorithm for Direction of Arrival Estimation

    The estimation of the Direction of Arrival (DoA) is one of the most critical parameters for target recognition, identification and classification. MUltiple SIgnal Classification (MUSIC) is a powerful technique for DoA estimation. The algorithm requires complex mathematical operations such as the computation of the covariance matrix of the input signals, eigenvalue decomposition and a signal peak search. All these signal processing operations make a real-time and resource-efficient implementation of the MUSIC algorithm on Field Programmable Gate Arrays (FPGAs) a challenge. In this paper, a novel design approach is proposed for the FPGA implementation of the MUSIC algorithm. This approach enables a significant reduction in both FPGA resources and latency. In more detail, the proposed design enables the estimation of the DoA in real-time scenarios in 2 μs with 30% to 50% fewer resources compared to existing techniques. The work of Pedro Reviriego was supported in part by the Architecting Intelligent Cost-effective Central Offices to enable 5G Tactile Internet (ACHILLES) project through the Spanish Ministry of Economy and Competitivity under Project PID2019-104207RB-I00, in part by the Madrid Government (Comunidad de Madrid, Spain) through the Multiannual Agreement with Universidad Carlos III de Madrid (UC3M) in the line of Excellence of University Professors under Grant EPUC3M21, and in part in the context of the V Plan Regional de Investigación Científica e Innovación Tecnológica (V PRICIT, Regional Program of Research and Technological Innovation).
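    For reference, the floating-point sketch below walks through the three MUSIC steps named in the abstract (sample covariance, eigendecomposition into a noise subspace, pseudo-spectrum peak search) for an assumed uniform linear array; an FPGA implementation such as the one proposed would replace these steps with fixed-point, pipelined equivalents.

```python
# Reference MUSIC pipeline in floating point; the ULA geometry and half-
# wavelength spacing are assumptions, not details of the proposed design.
import numpy as np

def music_spectrum(snapshots, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """snapshots: (n_antennas, n_snapshots) complex baseband samples."""
    m = snapshots.shape[0]
    # 1. Sample covariance matrix of the array outputs.
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    # 2. Eigendecomposition; the m - n_sources smallest eigenvectors span the
    #    noise subspace (np.linalg.eigh sorts eigenvalues in ascending order).
    _, vecs = np.linalg.eigh(R)
    En = vecs[:, : m - n_sources]
    # 3. Peak search: the pseudo-spectrum peaks where the steering vector is
    #    (nearly) orthogonal to the noise subspace.
    spectrum = []
    for theta in angles:
        a = np.exp(-2j * np.pi * d * np.arange(m) * np.sin(np.deg2rad(theta)))
        spectrum.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return angles, np.asarray(spectrum)

# Usage: angles, spec = music_spectrum(received_snapshots, n_sources=2),
# followed by a peak search over `spec` to read off the DoA estimates.
```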

    Maximizing CNN Accelerator Efficiency Through Resource Partitioning

    Convolutional neural networks (CNNs) are revolutionizing machine learning, but they present significant computational challenges. Recently, many FPGA-based accelerators have been proposed to improve the performance and efficiency of CNNs. Current approaches construct a single processor that computes the CNN layers one at a time; the processor is optimized to maximize the throughput at which the collection of layers is computed. However, this approach leads to inefficient designs because the same processor structure is used to compute CNN layers of radically varying dimensions. We present a new CNN accelerator paradigm and an accompanying automated design methodology that partitions the available FPGA resources into multiple processors, each of which is tailored for a different subset of the CNN convolutional layers. Using the same FPGA resources as a single large processor, multiple smaller specialized processors increase computational efficiency and lead to a higher overall throughput. Our design methodology achieves 3.8x higher throughput than the state-of-the-art approach when evaluating the popular AlexNet CNN on a Xilinx Virtex-7 FPGA. For the more recent SqueezeNet and GoogLeNet, the speedups are 2.2x and 2.0x, respectively.
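    A back-of-the-envelope model makes the partitioning argument concrete: a single processor sized for all layers pays the efficiency penalty of its worst-matched layer on every image, while per-subset processors form a pipeline that runs at the rate of its slowest, well-matched stage. The layer work figures, efficiencies and DSP budget below are made up for illustration and are not taken from the AlexNet evaluation.

```python
# Toy throughput model of resource partitioning; all numbers are illustrative.

def single_processor_throughput(layer_work, layer_eff, total_dsp):
    """Relative throughput when one processor computes all layers in sequence.

    layer_work[i]: MACs per image for layer group i (made-up numbers).
    layer_eff[i]:  fraction of the DSPs the monolithic design keeps busy on
                   that group (mismatch between layer shape and processor)."""
    time_per_image = sum(w / (total_dsp * e) for w, e in zip(layer_work, layer_eff))
    return 1.0 / time_per_image

def partitioned_throughput(layer_work, dsp_split, eff=0.95):
    """Relative throughput when each group gets its own well-matched processor.

    The partitions form a pipeline, so steady-state throughput is set by the
    slowest stage rather than by the sum of all stage times."""
    stage_time = [w / (dsp * eff) for w, dsp in zip(layer_work, dsp_split)]
    return 1.0 / max(stage_time)

# Illustrative comparison: three layer groups, 1000 DSP slices in total.
work = [300e6, 500e6, 200e6]
print(single_processor_throughput(work, layer_eff=[0.4, 0.9, 0.5], total_dsp=1000))
print(partitioned_throughput(work, dsp_split=[300, 500, 200]))
```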