
    Unsupervised edge map scoring: a statistical complexity approach

    We propose a new Statistical Complexity Measure (SCM) to qualify edge maps without Ground Truth (GT) knowledge. The measure is the product of two indices: an \emph{Equilibrium} index $\mathcal{E}$, obtained by projecting the edge map onto a family of edge patterns, and an \emph{Entropy} index $\mathcal{H}$, defined as a function of the Kolmogorov-Smirnov (KS) statistic. The measure supports two kinds of performance characterization: (i)~the evaluation of a single algorithm (intra-technique process) in order to identify its best parameters, and (ii)~the comparison of different algorithms (inter-technique process) in order to rank them by quality. Results on images from the South Florida and Berkeley databases show that our approach significantly improves over Pratt's Figure of Merit (PFoM), the standard reference-based edge map evaluation measure, because it takes more features into account in its evaluation.
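    The abstract only states that the score is the product of an Equilibrium index and a KS-based Entropy index, so the following is a minimal sketch under assumptions of our own: the Equilibrium index is taken as the normalized entropy of 3x3 edge-pattern frequencies, and the Entropy index as 1 minus a Kolmogorov-Smirnov statistic against a reference distribution. Function names and both definitions are illustrative, not the paper's.

```python
# Hedged sketch of a reference-free edge-map score built as the product of two
# indices; the specific definitions below are assumptions, not the paper's.
import numpy as np
from scipy.stats import ks_2samp

def equilibrium_index(edge_map):
    """Histogram 3x3 binary windows (512 possible patterns) and measure how
    evenly the edge map uses them (normalized entropy in [0, 1])."""
    em = (np.asarray(edge_map) > 0).astype(int)
    h, w = em.shape
    codes = [int("".join(map(str, em[i:i+3, j:j+3].ravel())), 2)
             for i in range(h - 2) for j in range(w - 2)]
    freq = np.bincount(codes, minlength=512) / len(codes)
    nz = freq[freq > 0]
    return float(-(nz * np.log(nz)).sum() / np.log(512))

def entropy_index(edge_strengths, reference_strengths):
    """Turn the KS distance between the edge-strength distribution and a
    reference distribution into a [0, 1] index (assumption: H = 1 - KS)."""
    res = ks_2samp(edge_strengths, reference_strengths)
    return 1.0 - float(res.statistic)

def scm_score(edge_map, edge_strengths, reference_strengths):
    """SCM as the product of the two indices, as described in the abstract."""
    return (equilibrium_index(edge_map)
            * entropy_index(edge_strengths, reference_strengths))
```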

    Modeling the correlations of crude oil properties based on sensitivity based linear learning method

    This paper presents a new model for predicting pressure–volume–temperature (PVT) properties of crude oil systems using the sensitivity based linear learning method (SBLLM). PVT properties are central to reservoir engineering computations. The accurate determination of these properties, such as bubble-point pressure and oil formation volume factor, is important in the primary and subsequent development of an oil field. Earlier models suffer from several limitations, notably instability and inconsistency in their predictions. In this paper, an SBLLM prediction model for PVT properties is developed on three distinct databases, and its forecasting performance is compared with a neural network and the three common empirical correlations using several evaluation criteria and quality measures. In the formulation used, sensitivity analysis is coupled with a linear training algorithm for each of the two layers, which ensures that the learning curve stabilizes quickly and behaves homogeneously throughout the entire process. In this way, the model can fit PVT properties faster with high stability and consistency. Empirical results from simulations demonstrate that the proposed SBLLM model produces good generalization performance, with the high stability and consistency required of prediction models in reservoir characterization and modeling.
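    As a rough illustration of the "linear training per layer" idea (not the paper's exact SBLLM formulation), the sketch below fixes a random hidden layer and obtains the output layer by a closed-form least-squares solve, which gives a stable, non-iterative fit; the feature and target names are placeholders for PVT data.

```python
# Simplified two-layer regressor: hidden layer is a fixed random projection,
# output layer is solved linearly. This is an illustrative stand-in, not SBLLM.
import numpy as np

rng = np.random.default_rng(0)

# Placeholder training data: inputs could stand for solution gas-oil ratio,
# gas gravity, oil gravity, and temperature; the target for bubble-point pressure.
X = rng.normal(size=(200, 4))
y = X @ np.array([1.5, -2.0, 0.7, 3.0]) + 0.1 * rng.normal(size=200)

# Layer 1: random projection followed by a smooth nonlinearity.
W1 = rng.normal(size=(4, 20))
b1 = rng.normal(size=20)
H = np.tanh(X @ W1 + b1)

# Layer 2: solve the linear system H @ w2 ~= y in the least-squares sense.
w2, *_ = np.linalg.lstsq(np.column_stack([H, np.ones(len(H))]), y, rcond=None)

def predict(X_new):
    Hn = np.tanh(X_new @ W1 + b1)
    return np.column_stack([Hn, np.ones(len(Hn))]) @ w2

print("train RMSE:", np.sqrt(np.mean((predict(X) - y) ** 2)))
```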

    Heterogeneous data source integration for smart grid ecosystems based on metadata mining

    The arrival of new technologies related to smart grids and the resulting ecosystem of applications and management systems pose many new problems. The databases of the traditional grid and the various initiatives related to new technologies have given rise to many different management systems with several formats and different architectures. A heterogeneous data source integration system is necessary to update these systems for the new smart grid reality. Additionally, it is necessary to take advantage of the information smart grids provide. In this paper, the authors propose a heterogeneous data source integration system based on IEC standards and metadata mining. Additionally, an automatic data mining framework is applied to model the integrated information.
    Ministerio de Economía y Competitividad TEC2013-40767-
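    Purely as an illustration of metadata-driven integration (the paper's framework and its IEC mappings are not reproduced here), the sketch below normalizes records from two hypothetical source systems into one common schema before any mining step; all field and source names are invented.

```python
# Illustrative sketch: map heterogeneous source records onto a common schema.
from typing import Dict, List

COMMON_SCHEMA = ("meter_id", "timestamp", "active_power_kw")

# Per-source mapping from native field names to the common schema (hypothetical).
FIELD_MAPS: Dict[str, Dict[str, str]] = {
    "legacy_scada": {"id": "meter_id", "ts": "timestamp", "p_kw": "active_power_kw"},
    "smart_meter_db": {"device": "meter_id", "time": "timestamp", "power": "active_power_kw"},
}

def normalize(source: str, record: Dict[str, object]) -> Dict[str, object]:
    """Rename a source record's fields into the common schema, dropping extras."""
    mapping = FIELD_MAPS[source]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

rows: List[Dict[str, object]] = [
    normalize("legacy_scada", {"id": "M-17", "ts": "2015-06-01T12:00", "p_kw": 3.2}),
    normalize("smart_meter_db", {"device": "M-17", "time": "2015-06-01T12:15", "power": 2.9}),
]
print(rows)
```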

    A characterization of the scientific impact of Brazilian institutions

    In this paper we studied the research activity of Brazilian Institutions for all sciences, and also their performance in the area of physics, between 1945 and December 2008. All the data come from the Web of Science database for this period. The analysis of the experimental data shows that, within a nonextensive thermostatistical formalism, the Tsallis $q$-exponential distribution $N(c)$ can constitute a new characterization of the research impact of Brazilian Institutions. The data examined in the present survey can be fitted successfully by a universal curve, namely $N(c) \propto 1/[1+(q-1)\,c/T]^{\frac{1}{q-1}}$ with $q \simeq 4/3$ for {\it all} the available citations $c$, $T$ being an "effective temperature". The present analysis ultimately suggests that, via the "effective temperature" $T$, we can provide a new performance metric for the impact level of the research activity in Brazil, taking into account the number of publications and their citations. This new metric thus reflects both the "quantity" (number of publications) and the "quality" (number of citations) of different Brazilian Institutions. In addition, we analyzed the research performance of Brazil over time, for instance from 1945 to 1985, then during the periods 1986-1990, 1991-1995, and so on until the present. Finally, this work presents a new methodology that can be used to analyze and compare institutions within a given country.
    Comment: 7 pages, 5 figures
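    As a small worked example of the quoted curve, the sketch below fits $N(c) \propto [1+(q-1)\,c/T]^{-1/(q-1)}$ to a citation histogram with a standard least-squares routine; the citation counts are synthetic placeholders, not the paper's Web of Science data.

```python
# Fit the Tsallis q-exponential quoted in the abstract to a citation histogram.
import numpy as np
from scipy.optimize import curve_fit

def q_exponential(c, A, q, T):
    """A * [1 + (q-1) c / T]^(-1/(q-1))."""
    return A * (1.0 + (q - 1.0) * c / T) ** (-1.0 / (q - 1.0))

# Synthetic example: heavy-tailed citation counts per paper.
rng = np.random.default_rng(1)
citations = rng.pareto(2.0, size=5000).astype(int)
values, counts = np.unique(citations, return_counts=True)

# Fit A, q, T; the abstract reports q ~ 4/3, with T acting as an
# "effective temperature" that sets the scale of the distribution.
(A, q, T), _ = curve_fit(q_exponential, values, counts,
                         p0=(counts[0], 1.3, 1.0), maxfev=10000)
print(f"fitted q = {q:.2f}, effective temperature T = {T:.2f}")
```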