
    An optimal architecture for a DDC

    Digital down conversion (DDC) is an algorithm used to reduce the number of samples per second by selecting a limited frequency band out of a stream of samples. A possible DDC algorithm consists of two simple cascaded integrator-comb (CIC) filters and a finite impulse response (FIR) filter, preceded by a modulator that is controlled by a numerically controlled oscillator (NCO). Implementations of the algorithm have been made for five architectures: two application specific integrated circuits (ASICs), a general purpose processor (GPP), a field programmable gate array (FPGA), and the Montium tile processor (TP). All architectures are functionally capable of performing the algorithm; they differ in performance, flexibility, and energy consumption. In this paper, we compare the energy consumption of the architectures when performing the DDC algorithm. The ASIC is the best solution if digital down conversion is required constantly. When digital down conversion is needed only part of the time, the Altera Cyclone II is the best solution due to its smaller technology size. In the remaining time, the reconfigurable architectures can be reconfigured for other tasks of today's multimedia devices.
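    To make the signal chain concrete, here is a minimal NumPy sketch of the structure the abstract describes: an NCO-driven complex mixer followed by a single-stage CIC decimator and an FIR clean-up filter. The sample rate, decimation factor, filter length, and cutoff below are illustrative assumptions, not the parameters evaluated in the paper.

```python
import numpy as np

def ddc(x, fs, f_center, decim, fir_taps=63):
    """Sketch of a digital down converter:
    NCO mixer -> single-stage CIC decimator -> FIR clean-up filter.
    (Illustrative stage counts and filter lengths, not the paper's.)"""
    n = np.arange(len(x))
    # NCO: complex local oscillator shifts the selected band to baseband
    mixed = x * np.exp(-2j * np.pi * f_center / fs * n)

    # CIC, one stage: integrate at the high rate, decimate, then comb
    integ = np.cumsum(mixed)
    dec = integ[::decim]
    cic = np.diff(dec, prepend=0.0) / decim  # divide out the CIC DC gain

    # FIR low-pass (Hamming-windowed sinc) at the reduced rate
    cutoff = 0.4  # fraction of the decimated Nyquist rate (assumption)
    k = np.arange(fir_taps) - (fir_taps - 1) / 2
    h = np.sinc(cutoff * k) * np.hamming(fir_taps)
    h /= h.sum()
    return np.convolve(cic, h, mode="same")

# Toy usage: select the band around 1 MHz out of a 10 MHz sample stream
fs = 10e6
t = np.arange(100_000) / fs
x = np.cos(2 * np.pi * 1e6 * t) + 0.3 * np.cos(2 * np.pi * 3e6 * t)
baseband = ddc(x, fs, f_center=1e6, decim=50)  # output rate: 200 kHz
```

    A production DDC would typically cascade several CIC stages and compensate their passband droop in the FIR stage; that division of labour between cheap high-rate stages and one precise low-rate stage is what lets the algorithm map onto ASICs, FPGAs, and coarse-grained arrays such as the Montium.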

    Deep Divergence-Based Approach to Clustering

    A promising direction in deep learning research is to learn representations and simultaneously discover cluster structure in unlabeled data by optimizing a discriminative loss function. As opposed to supervised deep learning, this line of research is in its infancy, and how to design and optimize suitable loss functions to train deep neural networks for clustering is still an open question. Our contribution to this emerging field is a new deep clustering network that leverages the discriminative power of information-theoretic divergence measures, which have been shown to be effective in traditional clustering. We propose a novel loss function that incorporates geometric regularization constraints, thus avoiding degenerate structures in the resulting clustering partition. Experiments on synthetic benchmarks and real datasets show that the proposed network achieves competitive performance with respect to other state-of-the-art methods, scales well to large datasets, and does not require pre-training steps.
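    As a rough illustration of the idea, the sketch below implements one plausible divergence-based clustering loss in PyTorch: soft cluster assignments are compared through a kernel-based Cauchy-Schwarz similarity (minimizing the similarity maximizes the divergence between cluster distributions), and a geometric term pulls assignments toward distinct simplex corners to discourage degenerate partitions. The function name, kernel choice, and exact form of both terms are assumptions in the spirit of the abstract, not the paper's precise loss.

```python
import torch
import torch.nn.functional as F

def divergence_clustering_loss(features, logits, sigma=1.0):
    """Illustrative divergence-based clustering loss (not the paper's
    exact formulation): Cauchy-Schwarz similarity between soft cluster
    assignments, estimated with a Gaussian kernel over the features,
    plus a simplex-corner regularizer against degenerate solutions."""
    A = F.softmax(logits, dim=1)                # (n, k) soft assignments
    d2 = torch.cdist(features, features) ** 2   # pairwise squared distances
    K = torch.exp(-d2 / (2 * sigma ** 2))       # Gaussian kernel matrix

    def cs_offdiag(M):
        # Mean off-diagonal Cauchy-Schwarz similarity between the columns
        # of M under the kernel; a small value means separated clusters.
        G = M.t() @ K @ M
        d = torch.sqrt(torch.clamp(G.diag(), min=1e-9))
        C = G / (d[:, None] * d[None, :])
        k = M.shape[1]
        return (C.sum() - C.diag().sum()) / (k * (k - 1))

    separation = cs_offdiag(A)
    # Geometric regularizer: closeness of each assignment vector to each
    # simplex corner; separating these keeps clusters from collapsing.
    corners = torch.eye(A.shape[1], device=A.device)
    closeness = torch.exp(-torch.cdist(A, corners) ** 2)
    return separation + cs_offdiag(closeness)

# Toy usage: an encoder and a clustering head trained end to end
encoder = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU())
head = torch.nn.Linear(32, 3)
x = torch.randn(256, 10)
feats = encoder(x)
loss = divergence_clustering_loss(feats, head(feats))
loss.backward()
```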

    LOAD DISTRIBUTION AND RESOURCE SHARING

    This paper discusses the system structure and design philosophy of large-scale control systems. The design philosophy, the main theme of this article, is "load distribution and resource sharing", but the following items will also be discussed:
    - the three-level hierarchical control system philosophy;
    - coupling and optimal load sharing among SCC/DDC computers;
    - sharing of the process resources among computers.

    First experiences with the ATLAS Pixel Detector Control System at the Combined Test Beam 2004

    Detector control systems (DCS) include the readout, control and supervision of hardware devices, as well as the monitoring of external systems such as the cooling system, and the processing of control data. The implementation of such a system in the final experiment also has to provide communication with the trigger and data acquisition system (TDAQ). In addition, conditions data describing the status of the pixel detector modules and their environment must be logged and stored in a common LHC-wide database system. At the combined test beam, all ATLAS subdetectors were operated together for the first time over an extended period. To ensure the functionality of the pixel detector, a control system was set up. We describe the architecture chosen for the pixel detector control system, the interfaces to hardware devices, the interfaces to the users, and the performance of our system. The embedding of the DCS in the common infrastructure of the combined test beam and its communication with surrounding systems will be discussed in some detail.
    Comment: 6 pages, 9 figures, Pixel 2005 proceedings preprint

    HILT : High-Level Thesaurus Project. Phase IV and Embedding Project Extension : Final Report

    Higher Education (HE) and Further Education (FE) users of the JISC IE need to find appropriate learning, research and information resources by subject search and browse, yet most national and institutional service providers - usually for very good local reasons - use different subject schemes to describe their resources. Ensuring that users can search across these schemes is a major challenge facing the JISC domain (and, indeed, other domains beyond JISC). Encouraging the use of standard terminologies in some services (institutional repositories, for example) is a related challenge. Under the auspices of the HILT project, JISC has been investigating mechanisms to assist the community with this problem through a JISC Shared Infrastructure Service that would help optimise the value obtained from expenditure on content and services by facilitating subject-search-based resource sharing to benefit users in the learning and research communities. The project has been through a number of phases, with work from earlier phases reported both in published work elsewhere and in project reports (see the project website: http://hilt.cdlr.strath.ac.uk/). HILT Phase IV had two elements: the core project, whose focus was 'to research, investigate and develop pilot solutions for problems pertaining to cross-searching multi-subject scheme information environments, as well as providing a variety of other terminological searching aids', and a short extension to encompass the pilot embedding of routines to interact with HILT M2M services in the user interfaces of various information services serving the JISC community. Both elements contributed to the developments summarised in this report.