
    Performance evaluation using bootstrapping DEA techniques: Evidence from industry ratio analysis

    In the Data Envelopment Analysis (DEA) context, financial data and ratios have been used to produce a unified performance measure. However, several scholars have indicated that the inclusion of financial ratios creates biased efficiency estimates, with implications for the performance evaluation of firms and industries. Several DEA formulations and techniques deal with this problem, including sensitivity analysis, prior-ratio analysis and DEA output-input ratio analysis for assessing the efficiency and ranking of the examined units. In addition to these computational approaches, this paper applies bootstrap techniques to overcome these problems. Moreover, it provides an application evaluating the performance of 23 Greek manufacturing sectors with the use of financial data. The results reveal that the efficiencies obtained in the first stage of our sensitivity analysis are biased. However, after applying the bootstrap techniques, the sensitivity analysis reveals that the efficiency scores have been significantly improved.
    Keywords: Performance measurement; Data Envelopment Analysis; Financial ratios; Bootstrap; Bias correction
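
    As a rough illustration of the bias-correction idea, the sketch below computes input-oriented CCR DEA scores by linear programming and applies a naive resampling bootstrap. The helper names (dea_efficiency, bootstrap_bias_corrected) and the toy data are hypothetical, and the paper's actual procedure (e.g. a smoothed bootstrap in the Simar-Wilson spirit) involves steps not shown here.

        # Minimal sketch, assuming hypothetical helpers: CCR DEA by linear programming
        # plus a naive resampling bootstrap for bias correction (not the paper's exact method).
        import numpy as np
        from scipy.optimize import linprog

        def dea_efficiency(X, Y, k):
            """Input-oriented CCR efficiency of unit k. X: (m, n) inputs, Y: (s, n) outputs."""
            m, n = X.shape
            s = Y.shape[0]
            c = np.zeros(n + 1)
            c[0] = 1.0                          # minimise theta
            A_ub = np.zeros((m + s, n + 1))
            b_ub = np.zeros(m + s)
            A_ub[:m, 0] = -X[:, k]              # X @ lam <= theta * x_k
            A_ub[:m, 1:] = X
            A_ub[m:, 1:] = -Y                   # Y @ lam >= y_k
            b_ub[m:] = -Y[:, k]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
            return res.x[0]

        def bootstrap_bias_corrected(X, Y, n_boot=200, seed=0):
            """Naive bootstrap: bias-correct each unit's efficiency score."""
            rng = np.random.default_rng(seed)
            n = X.shape[1]
            theta_hat = np.array([dea_efficiency(X, Y, k) for k in range(n)])
            boot = np.empty((n_boot, n))
            for b in range(n_boot):
                idx = rng.integers(0, n, size=n)            # resample the reference set
                for k in range(n):
                    Xb = np.hstack([X[:, idx], X[:, [k]]])  # keep unit k in the sample
                    Yb = np.hstack([Y[:, idx], Y[:, [k]]])
                    boot[b, k] = dea_efficiency(Xb, Yb, Xb.shape[1] - 1)
            bias = boot.mean(axis=0) - theta_hat
            return theta_hat - bias                         # bias-corrected scores

        # Toy usage: 23 units, two ratio inputs, one output (random placeholder data)
        rng = np.random.default_rng(1)
        X = rng.uniform(1, 10, size=(2, 23))
        Y = rng.uniform(1, 10, size=(1, 23))
        print(bootstrap_bias_corrected(X, Y, n_boot=50))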

    Complexity evaluation for the implementation of a pre-FFT equalizer in an OFDM receiver


    clcNet: Improving the Efficiency of Convolutional Neural Network using Channel Local Convolutions

    Depthwise convolution and grouped convolution have been successfully applied to improve the efficiency of convolutional neural networks (CNNs). We suggest that these models can be considered as special cases of a generalized convolution operation, named channel local convolution (CLC), where an output channel is computed using a subset of the input channels. This definition entails computation dependency relations between input and output channels, which can be represented by a channel dependency graph (CDG). By modifying the CDG of grouped convolution, a new CLC kernel named interlaced grouped convolution (IGC) is created. Stacking IGC and GC kernels results in a convolution block (named a CLC block) for approximating regular convolution. Using the CDG as an analysis tool, we derive the rule for setting the meta-parameters of IGC and GC and the framework for minimizing the computational cost. A new CNN model named clcNet is then constructed using CLC blocks; it shows significantly higher computational efficiency and fewer parameters compared to state-of-the-art networks when tested on the ImageNet-1K dataset. Source code is available at https://github.com/dqzhang17/clcnet.torch
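
    The sketch below is a hedged illustration, not the authors' released code at the URL above. It takes one possible reading of IGC as a grouped convolution whose output channels are interlaced across groups (a channel shuffle), so that a following contiguous grouped convolution mixes channels from every group. The group counts and the full-dependency check are illustrative assumptions, not the meta-parameter rule derived in the paper.

        # Hedged sketch of an IGC/GC stack (assumed reading, not the paper's exact construction)
        import torch
        import torch.nn as nn

        class InterlacedGroupedConv2d(nn.Module):
            """Grouped conv whose output channels are interlaced across groups
            (realised here as a grouped conv followed by a channel shuffle)."""
            def __init__(self, in_ch, out_ch, groups, kernel_size=3, padding=1):
                super().__init__()
                assert out_ch % groups == 0
                self.groups = groups
                self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding,
                                      groups=groups, bias=False)

            def forward(self, x):
                y = self.conv(x)
                n, c, h, w = y.shape
                # interlace: output order becomes (0, c/g, 2c/g, ..., 1, 1+c/g, ...)
                return y.reshape(n, self.groups, c // self.groups, h, w) \
                        .transpose(1, 2).reshape(n, c, h, w)

        class CLCBlock(nn.Module):
            """IGC (3x3) followed by GC (1x1); with GC group size >= IGC groups, every
            output channel ends up depending on every input channel."""
            def __init__(self, channels, groups_igc=4, groups_gc=8):
                super().__init__()
                assert channels // groups_gc >= groups_igc    # assumed full-dependency condition
                self.igc = InterlacedGroupedConv2d(channels, channels, groups_igc)
                self.bn1 = nn.BatchNorm2d(channels)
                self.gc = nn.Conv2d(channels, channels, 1, groups=groups_gc, bias=False)
                self.bn2 = nn.BatchNorm2d(channels)
                self.act = nn.ReLU(inplace=True)

            def forward(self, x):
                x = self.act(self.bn1(self.igc(x)))
                return self.act(self.bn2(self.gc(x)))

        # Toy usage
        block = CLCBlock(channels=32)
        print(block(torch.randn(1, 32, 56, 56)).shape)        # torch.Size([1, 32, 56, 56])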

    Evaluation of protein surface roughness index using its heat denatured aggregates

    Recent research on the potential of different protein-surface-describing parameters to predict protein surface properties has gained significance for its possible implication in extracting clues about a protein's functional site. In this direction, the Surface Roughness Index, a surface topological parameter, showed its potential to predict the SCOP family of a protein. The present work builds on that foundation: a semi-empirical method for evaluating the Surface Roughness Index directly from heat-denatured protein aggregates (HDPA) was designed and demonstrated successfully. The steps consist of extracting a feature, the Intensity Level Multifractal Dimension (ILMFD), from microscopic images of HDPA, followed by mapping the ILMFD onto the Surface Roughness Index (SRI) through a recurrent backpropagation network (RBPN). Finally, the SRI for a particular protein was predicted by clustering the decisions obtained from feeding multiple data into the RBPN, in order to capture the general tendency of the decisions and to discard noisy data. The cluster centre of the largest cluster was found to be the best match for the Surface Roughness Index of each protein in our study. The semi-empirical approach adopted in this paper shows a way to evaluate a protein's surface property without depending on its already-determined structure.
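
    To make the feature-extraction step concrete, the sketch below computes a box-counting fractal dimension for each intensity band of a grayscale image. This is only a loose, hypothetical stand-in for the ILMFD feature; the paper's exact ILMFD definition, the recurrent backpropagation network and the decision clustering are not reproduced here.

        # Hypothetical stand-in for an intensity-level fractal feature (not the paper's ILMFD)
        import numpy as np

        def box_count(mask, size):
            """Number of size x size boxes containing at least one foreground pixel."""
            h, w = mask.shape
            hb, wb = h // size * size, w // size * size
            blocks = mask[:hb, :wb].reshape(hb // size, size, wb // size, size)
            return np.count_nonzero(blocks.any(axis=(1, 3)))

        def intensity_level_fractal_dims(img, n_levels=8):
            """Fractal dimension (slope of log N vs log 1/size) for each intensity band."""
            edges = np.linspace(img.min(), img.max(), n_levels + 1)
            sizes = np.array([2, 4, 8, 16])
            dims = []
            for lo, hi in zip(edges[:-1], edges[1:]):
                mask = (img >= lo) & (img < hi)
                counts = np.array([max(box_count(mask, s), 1) for s in sizes])
                slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
                dims.append(slope)
            return np.array(dims)               # one feature per intensity level

        # Toy usage with a synthetic 128x128 "aggregate" image
        rng = np.random.default_rng(0)
        img = rng.random((128, 128))
        print(intensity_level_fractal_dims(img))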

    Improving the efficiency of the detection of gravitational wave signals from inspiraling compact binaries: Chebyshev interpolation

    Inspiraling compact binaries are promising sources of gravitational waves for ground- and space-based laser interferometric detectors. The time-dependent signature of these sources in the detectors is a well-characterized function of a relatively small number of parameters; thus, the favored analysis technique makes use of matched filtering and maximum likelihood methods. Current analysis methodology samples the matched filter output at parameter values chosen so that the correlation between successive samples is 97%. Here we describe a straightforward and practical way of using interpolation to take advantage of the correlation between the matched filter output associated with nearby points in the parameter space, significantly reducing the number of matched filter evaluations without sacrificing the efficiency with which real signals are recognized. Because the computational cost of the analysis is driven almost exclusively by the matched filter evaluations, this translates directly into an increase in computational efficiency, which in turn translates into an increase in the size of the parameter space that can be analyzed and, thus, the science that can be accomplished with the data. As a demonstration we compare the present "dense sampling" analysis methodology with our proposed "interpolation" methodology, restricted to one dimension of the multi-dimensional analysis problem. We find that the interpolated search reduces by 25% the number of filter evaluations required by the dense search with 97% correlation to achieve the same detection efficiency at a given expected false alarm probability. Generalizing to the higher-dimensional parameter space of a generic binary, including spins, suggests an order-of-magnitude increase in computational efficiency.
    Comment: 23 pages, 5 figures, submitted to Phys. Rev.
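
    A hedged one-dimensional illustration of the interpolation idea follows: the matched-filter output over a single template parameter is evaluated only at Chebyshev nodes and reconstructed elsewhere by Chebyshev interpolation. The snr_of_template function is a toy stand-in, not the paper's detector or waveform model.

        # Hedged 1-D sketch: sparse Chebyshev-node sampling of a toy matched-filter output
        import numpy as np
        from numpy.polynomial import chebyshev as C

        def snr_of_template(tau):
            """Toy stand-in for the matched-filter output over one template parameter."""
            return np.exp(-0.5 * ((tau - 0.37) / 0.1) ** 2)   # peak near tau = 0.37

        lo, hi = 0.0, 1.0
        n_nodes = 33                                          # few "expensive" evaluations
        k = np.arange(n_nodes)
        # Chebyshev nodes of the first kind, mapped from [-1, 1] to [lo, hi]
        nodes = 0.5 * (lo + hi) + 0.5 * (hi - lo) * np.cos((2 * k + 1) * np.pi / (2 * n_nodes))
        samples = snr_of_template(nodes)

        # Fit a Chebyshev series through the node samples (domain mapped back to [-1, 1])
        coeffs = C.chebfit(2 * (nodes - lo) / (hi - lo) - 1, samples, deg=n_nodes - 1)

        # Cheap dense reconstruction, compared against the true (toy) filter output
        taus = np.linspace(lo, hi, 1001)
        approx = C.chebval(2 * (taus - lo) / (hi - lo) - 1, coeffs)
        print("max interpolation error:", np.max(np.abs(approx - snr_of_template(taus))))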

    GNA: new framework for statistical data analysis

    We report on the status of GNA, a new framework for fitting large-scale physical models. GNA utilizes the data-flow concept, within which a model is represented by a directed acyclic graph. Each node is an operation on an array (matrix multiplication, derivative or cross-section calculation, etc.). The framework enables the user to create flexible and efficient large-scale lazily evaluated models, handle large numbers of parameters, propagate parameters' uncertainties while taking into account possible correlations between them, fit models, and perform statistical analysis. The main goal of the paper is to give an overview of the main concepts and methods as well as the reasons behind their design. Detailed technical information is to be published in further works.
    Comment: 9 pages, 3 figures, CHEP 2018, submitted to EPJ Web of Conferences
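
    A minimal sketch of the data-flow concept described above, not GNA's actual API: graph nodes wrap an operation on arrays, evaluate lazily, cache their result, and are invalidated when an upstream parameter changes.

        # Minimal sketch of a lazily evaluated DAG with parameter invalidation (not GNA's API)
        import numpy as np

        class Node:
            def __init__(self, op, *inputs):
                self.op, self.inputs, self._cache = op, list(inputs), None
                self._dependents = []
                for node in self.inputs:
                    node._dependents.append(self)

            def invalidate(self):                  # propagate "dirty" state downstream
                self._cache = None
                for node in self._dependents:
                    node.invalidate()

            def value(self):                       # lazy evaluation with caching
                if self._cache is None:
                    self._cache = self.op(*[n.value() for n in self.inputs])
                return self._cache

        class Parameter(Node):
            def __init__(self, value):
                super().__init__(op=None)
                self._val = np.asarray(value, dtype=float)

            def set(self, value):
                self._val = np.asarray(value, dtype=float)
                self.invalidate()

            def value(self):
                return self._val

        # Toy model: chi^2 between data and a scaled prediction, recomputed only on change
        norm = Parameter(1.0)
        prediction = Node(lambda n: n * np.array([1.0, 2.0, 3.0]), norm)
        data = Parameter([1.1, 1.9, 3.2])
        chi2 = Node(lambda p, d: float(np.sum((p - d) ** 2)), prediction, data)

        print(chi2.value())        # evaluated and cached
        norm.set(1.05)             # invalidates prediction and chi2, but not data
        print(chi2.value())        # recomputed lazily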