
    Rank Conditioned Rank Selection Filters for Signal Restoration

    A class of nonlinear filters called rank conditioned rank selection (RCRS) filters is developed and analyzed in this paper. The RCRS filters are developed within the general framework of rank selection (RS) filters, which are filters constrained to output an order statistic from the observation set. Many previously proposed rank-order-based filters can be formulated as RS filters; the only difference between such filters is in the information used to decide which order statistic to output. The information used by RCRS filters is the ranks of selected input samples, hence the name rank conditioned rank selection filters. The number of input sample ranks used is referred to as the order of the RCRS filter. The order can range from zero to the number of samples in the observation window, giving the filters valuable flexibility. Low-order filters can give good performance and are relatively simple to optimize and implement. If improved performance is demanded, the order can be increased, but at the expense of filter simplicity. In this paper, many statistical and deterministic properties of the RCRS filters are presented, along with a procedure for optimizing over the class of RCRS filters. Finally, extensive computer simulation results illustrate the performance of RCRS filters in comparison with other techniques in image restoration applications.
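The selection mechanism the abstract describes can be sketched in a few lines. The sketch below is an illustrative first-order RCRS-style filter (not the paper's optimized formulation): it ranks the samples in the observation window, takes the rank of the center sample as the conditioning information, and uses a lookup table to map that rank to the order statistic that is output. The table names (`median_table`, `identity_table`) are hypothetical.

```python
def rank_selection_filter(window, select):
    """Rank selection (RS) filter: output one order statistic of the window.

    `select` maps the rank of the center sample (the conditioning
    information of a first-order RCRS filter) to the index of the
    order statistic to output. Ties are broken by the lowest rank.
    """
    s = sorted(window)                 # order statistics of the window
    center = window[len(window) // 2]  # sample the filter is centered on
    r = s.index(center)                # 0-based rank of the center sample
    return s[select[r]]

n = 5
# A table that always selects the middle order statistic reduces to the
# median filter (a zero-order RS filter in the abstract's terminology).
median_table = [n // 2] * n
# The identity table outputs the center sample itself, i.e. no filtering.
identity_table = list(range(n))
```

With `window = [3, 1, 9, 2, 7]`, the median table outputs 3, while the identity table passes the center sample 9 through unchanged; richer tables trade off between these detail-preserving and noise-suppressing extremes.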

    A New Automatic Method to Identify Galaxy Mergers I. Description and Application to the STAGES Survey

    We present an automatic method to identify galaxy mergers using the morphological information contained in the residual images of galaxies after the subtraction of a Sersic model. The removal of the bulk signal from the host galaxy light is done with the aim of detecting the fainter minor mergers. The specific morphological parameters used in the merger diagnostic suggested here are the Residual Flux Fraction and the asymmetry of the residuals. The new diagnostic has been calibrated and optimized so that the resulting merger sample is very complete; however, the contamination by non-mergers is also high. If the same optimization method is adopted for combinations of other structural parameters such as the CAS system, the merger indicator we introduce yields merger samples of equal or higher statistical quality than the samples obtained through the use of other structural parameters. We explore the ability of the method presented here to select minor mergers by identifying a sample of visually classified mergers that would not have been picked up by the CAS system when using its usual limits. Given the low prevalence of mergers among the general population of galaxies and the optimization used here, we find that the merger diagnostic introduced in this work is best used as a negative merger test, i.e., it is very effective at selecting non-merging galaxies. As with all the currently available automatic methods, the sample of merger candidates selected is contaminated by non-mergers, and further steps are needed to produce a clean sample. This merger diagnostic has been developed using the HST/ACS F606W images of the A901/02 cluster (z=0.165) obtained by the STAGES team. In particular, we have focused on a mass- and magnitude-limited sample (log M/M_{O} > 9.0, R_{Vega} < 23.5 mag) which includes 905 cluster galaxies and 655 field galaxies of all morphological types. Comment: 25 pages, 14 figures, 4 tables. To appear in MNRA
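The two residual statistics named in the abstract can be sketched numerically. The sketch below is a simplified illustration under stated assumptions, not the STAGES team's published definitions: the published Residual Flux Fraction also corrects for the expected contribution of background noise, and the published asymmetry uses a rotation about an optimized center; both are omitted here for brevity.

```python
import numpy as np

def residual_flux_fraction(image, model):
    """Simplified Residual Flux Fraction: the fraction of the galaxy's
    flux left in the residual after subtracting a smooth (e.g. Sersic)
    model. A well-fit, undisturbed galaxy gives a value near zero."""
    residual = image - model
    return np.abs(residual).sum() / image.sum()

def residual_asymmetry(residual):
    """Asymmetry of the residual under a 180-degree rotation about the
    image center; merger debris is typically asymmetric, so disturbed
    residuals give larger values."""
    rotated = np.rot90(residual, 2)
    return np.abs(residual - rotated).sum() / (2.0 * np.abs(residual).sum())
```

A perfect model subtraction yields a Residual Flux Fraction of zero, and a rotationally symmetric residual yields zero asymmetry; a merger diagnostic of this kind flags galaxies where both statistics exceed calibrated thresholds.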

    HMC-Based Accelerator Design For Compressed Deep Neural Networks

    Deep Neural Networks (DNNs) offer remarkable classification and regression performance in many high-dimensional problems and have been widely utilized in real-world cognitive applications. However, the high computational cost of DNNs greatly hinders their deployment in resource-constrained applications, real-time systems, and edge computing platforms. Moreover, the energy consumption and performance cost of moving data between the memory hierarchy and the computational units are higher than those of the computation itself. To overcome this memory bottleneck, accelerator designs improve data locality and temporal data reuse. In a further attempt to improve data locality, memory manufacturers have invented 3D-stacked memory, in which multiple layers of memory arrays are stacked on top of each other. Inheriting the concept of Processing-In-Memory (PIM), some 3D-stacked memory architectures also include a logic layer that integrates general-purpose computational logic directly within main memory to take advantage of the high internal bandwidth during computation. In this dissertation, we investigate hardware/software co-design for a neural network accelerator. Specifically, we introduce a two-phase filter pruning framework for model compression and an accelerator tailored for efficient DNN execution on HMC, which can dynamically offload primitives and functions to the PIM logic layer through a latency-aware scheduling controller. In our compression framework, we formulate the filter pruning process as an optimization problem and propose a filter selection criterion measured by conditional entropy. The key idea of our approach is to establish a quantitative connection between filters and model accuracy. We define this connection as the conditional entropy over the filters in a convolutional layer, i.e., the distribution of entropy conditioned on the network loss. Based on this definition, the pruning efficiencies of global and layer-wise pruning strategies are compared, and a two-phase pruning method is proposed. The proposed pruning method achieves an 88% reduction in filters and a 46% reduction in inference time on VGG16 within 2% accuracy degradation.
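The pruning criterion can be illustrated with a small sketch. This is an assumed simplification, not the dissertation's method: the dissertation conditions the entropy on the network loss, whereas the proxy below scores each filter by the plain empirical entropy of its activation distribution and keeps the highest-entropy filters. All function names are hypothetical.

```python
import numpy as np

def filter_entropy_scores(activations, bins=16):
    """Score each filter by the empirical entropy of its activations.

    activations: array of shape (samples, filters). A filter whose
    response barely varies across inputs carries little information
    and gets a score near zero, making it a pruning candidate.
    """
    scores = []
    for f in range(activations.shape[1]):
        hist, _ = np.histogram(activations[:, f], bins=bins)
        p = hist / hist.sum()      # empirical probability per bin
        p = p[p > 0]               # drop empty bins (0 * log 0 := 0)
        scores.append(-(p * np.log(p)).sum())
    return np.array(scores)

def prune_filters(activations, keep_ratio=0.5):
    """Return indices of the filters to keep (highest entropy first)."""
    scores = filter_entropy_scores(activations)
    k = max(1, int(len(scores) * keep_ratio))
    return np.argsort(scores)[::-1][:k]
```

Under this proxy, a dead filter (constant activation) scores zero and is pruned first; a global strategy would rank all filters in the network by one score list, while a layer-wise strategy would apply `prune_filters` per layer with its own `keep_ratio`.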

    Customer intimacy analytics: leveraging operational data to assess customer knowledge and relationships and to measure their business impact

    The ability to capture customer needs and to tailor the provided solutions accordingly, also defined as customer intimacy, has become a significant success factor in the B2B space, in particular for increasingly "servitizing" businesses. This book elaborates on the CI Analytics solution for assessing and monitoring the impact of customer intimacy strategies by leveraging business analytics and social network analysis technology. This solution thereby effectively complements existing CRM solutions.

    Predictive Maintenance Support System in Industry 4.0 Scenario

    The fourth industrial revolution being witnessed nowadays, also known as Industry 4.0, is heavily related to the digitization of manufacturing systems and the integration of different technologies to optimize manufacturing. By combining data acquisition using specific sensors with machine learning algorithms that analyze this data and predict a failure before it happens, Predictive Maintenance is a critical tool for reducing downtime due to unpredicted stoppages caused by malfunctions. Based on the reality of the Commercial Specialty Tires factory at Continental Mabor - Indústria de Pneus, S.A., the present work describes several problems faced regarding equipment maintenance. Taking advantage of the information gathered from studying the processes incorporated in the factory, a solution model for applying predictive maintenance to these processes is designed. The model is divided into two primary layers: hardware and software. Concerning hardware, sensors and their respective applications are delineated. In terms of software, data analysis techniques, namely machine learning algorithms, are described so that the collected data can be studied to detect possible failures.
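The two-layer model described above, sensor data acquisition feeding a machine learning classifier, can be sketched minimally. The sketch below is an illustrative pipeline under stated assumptions, not the thesis's actual system: it extracts simple window features (mean, standard deviation) from a sensor stream and classifies each window with a nearest-centroid rule standing in for the unspecified ML algorithms. All names and the two-class labeling are hypothetical.

```python
import numpy as np

def window_features(signal, size):
    """Hardware-layer side: split a sensor signal into fixed-size windows
    and extract simple features (mean, std) per window."""
    windows = [signal[i:i + size]
               for i in range(0, len(signal) - size + 1, size)]
    return np.array([[w.mean(), w.std()] for w in windows])

class NearestCentroid:
    """Software-layer side: a minimal stand-in classifier that labels a
    feature vector by its nearest class centroid (e.g. 0 = healthy,
    1 = failing)."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0)
                                    for c in self.classes_])
        return self

    def predict(self, X):
        # Euclidean distance from each sample to each class centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :],
                           axis=2)
        return self.classes_[d.argmin(axis=1)]
```

In a deployed system, the "failing" training examples would come from sensor readings recorded before historical malfunctions, so the classifier can raise a maintenance alert before the stoppage occurs.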