
    Azimuthal Anisotropy in High Energy Nuclear Collision - An Approach based on Complex Network Analysis

    Recently, a complex-network method, the visibility graph, has been applied to confirm the scale-freeness and the presence of fractal properties in the process of multiplicity fluctuation. Analysis of data from experiments on hadron-nucleus and nucleus-nucleus interactions yields values of the Power of Scale-freeness of Visibility Graph (PSVG) parameter extracted from the visibility graphs. Here, relativistic nucleus-nucleus interaction data have been analysed to detect azimuthal anisotropy by extending the visibility-graph method and extracting the average clustering coefficient, one of the important topological parameters, from the graph. Azimuthal distributions corresponding to different pseudorapidity regions around the central pseudorapidity value are analysed using this parameter. We attempt to correlate the conventional physical significance of this coefficient in complex-network systems with basic notions of particle-production phenomenology, such as clustering and correlation. Earlier methods for detecting anisotropy in azimuthal distributions were mostly based on the analysis of statistical fluctuations. In this work, we attempt to extract deterministic information on the anisotropy of the azimuthal distribution by precise determination of a topological parameter from a complex-network perspective.
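
    A minimal sketch of the construction this abstract names, assuming the standard natural-visibility criterion; the function name and the synthetic series stand in for the authors' experimental data and are not their code.

```python
# Natural visibility graph of a time series, plus the average clustering
# coefficient the abstract extracts. Data here is illustrative only.
import networkx as nx
import numpy as np

def natural_visibility_graph(series):
    """Connect points (a, y_a) and (b, y_b) when every sample between
    them lies strictly below the straight line joining them."""
    n = len(series)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for a in range(n):
        for b in range(a + 1, n):
            ya, yb = series[a], series[b]
            # Visibility holds if all intermediate samples stay below the chord.
            if all(series[c] < yb + (ya - yb) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                g.add_edge(a, b)
    return g

# Example: a random series stands in for an azimuthal multiplicity distribution.
rng = np.random.default_rng(0)
phi_distribution = rng.normal(size=200)  # hypothetical, not real event data
graph = natural_visibility_graph(phi_distribution)
print("average clustering coefficient:", nx.average_clustering(graph))
```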

    Time series classification based on fractal properties

    The article considers the task of classifying fractal time series with meta-algorithms based on decision trees. Binomial multiplicative stochastic cascades are used as input time series. A comparative analysis of classification approaches based on different features is carried out. The results indicate the advantage of the machine-learning methods over traditional estimation of the degree of self-similarity. Comment: 4 pages, 2 figures, 3 equations, 1 table
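
    The abstract gives no implementation details, so the following is a hedged sketch of the pipeline it outlines: binomial multiplicative cascades as inputs, simple multiscale features, and a decision-tree ensemble. The feature set and every parameter value are assumptions, not the authors' choices.

```python
# Generate two classes of binomial multiplicative cascades and classify
# them with a random forest (a decision-tree meta-algorithm).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def binomial_cascade(p, levels, rng):
    """Stochastic binomial cascade: each weight splits its mass into
    fractions p and 1-p, assigned to the two halves in random order."""
    w = np.array([1.0])
    for _ in range(levels):
        left = np.where(rng.random(len(w)) < 0.5, p, 1 - p)
        w = np.column_stack((w * left, w * (1 - left))).ravel()
    return w

rng = np.random.default_rng(1)
X, y = [], []
for label, p in enumerate((0.6, 0.7)):  # two illustrative cascade classes
    for _ in range(100):
        s = binomial_cascade(p, levels=10, rng=rng)
        # Crude multiscale features: log-variance of the aggregated series.
        feats = [np.log(np.var(s.reshape(-1, 2 ** k).sum(axis=1)))
                 for k in range(5)]
        X.append(feats)
        y.append(label)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```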

    Estimation of intrinsic dimension via clustering

    The problem of estimating the intrinsic dimension of a set of points in high-dimensional space is a critical issue for a wide range of disciplines, including genomics, finance, and networking. Current estimation techniques are dependent on either the ambient or the intrinsic dimension in terms of computational complexity, which may cause these methods to become intractable for large data sets. In this paper, we present a clustering-based methodology that exploits the inherent self-similarity of data to efficiently estimate the intrinsic dimension of a set of points. When the data satisfy a specified general clustering condition, we prove that the estimated dimension approaches the true Hausdorff dimension. Experiments show that the clustering-based approach allows for more efficient and accurate intrinsic dimension estimation than all prior techniques, even when the data do not conform to an obvious self-similar structure. Finally, we present empirical results showing that the clustering-based estimation allows for a natural partitioning of data points that lie on separate manifolds of varying intrinsic dimension.
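
    The clustering condition itself is not stated in the abstract, so the sketch below is not the authors' algorithm; it illustrates one well-known way clustering exposes intrinsic dimension, via the quantization-error scaling of k-means, where distortion shrinks roughly as k^(-2/d) for data of intrinsic dimension d.

```python
# Estimate intrinsic dimension from the slope of log-distortion vs log-k.
import numpy as np
from sklearn.cluster import KMeans

def clustering_dimension(points, ks=(8, 16, 32, 64)):
    logs_k, logs_dist = [], []
    for k in ks:
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(points)
        distortion = km.inertia_ / len(points)  # mean squared quantization error
        logs_k.append(np.log(k))
        logs_dist.append(np.log(distortion))
    slope = np.polyfit(logs_k, logs_dist, 1)[0]  # slope ~ -2/d
    return -2.0 / slope

# Example: a 1-D manifold (a noisy circle) embedded in 10-D space.
rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, size=(2000, 1))
circle = np.hstack([np.cos(theta), np.sin(theta)])
embedded = np.hstack([circle, 1e-3 * rng.normal(size=(2000, 8))])
print("estimated intrinsic dimension:", clustering_dimension(embedded))
```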

    Multi-layer model for the web graph

    This paper studies stochastic graph models of the web graph. We present a new model that describes the web graph as an ensemble of different regions generated by independent stochastic processes (in the spirit of a recent paper by Dill et al. [VLDB 2001]). Models such as the Copying Model [17] and the Evolving Networks Model [3] are simulated and compared on several relevant measures, such as degree and clique distributions.
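
    As a concrete illustration of the first of the simulated models, here is a minimal sketch of the Copying Model [17]: each new page links by copying a random prototype's out-links with probability 1 - alpha, or to uniformly random pages with probability alpha. The parameter values and the seed-graph choice are illustrative assumptions.

```python
# Copying Model for web-graph generation; produces a heavy-tailed
# in-degree distribution characteristic of the web graph.
import random

def copying_model(n_nodes, out_degree=5, alpha=0.3, seed=0):
    rng = random.Random(seed)
    # Seed graph: a small clique so early prototypes already have out-links.
    links = {v: [u for u in range(out_degree + 1) if u != v]
             for v in range(out_degree + 1)}
    for v in range(out_degree + 1, n_nodes):
        prototype = rng.randrange(v)  # node whose links may be copied
        links[v] = [rng.randrange(v) if rng.random() < alpha
                    else links[prototype][i]
                    for i in range(out_degree)]
    return links

graph = copying_model(10_000)
in_degree = {}
for targets in graph.values():
    for t in targets:
        in_degree[t] = in_degree.get(t, 0) + 1
print("max in-degree:", max(in_degree.values()))  # a few hub pages dominate
```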

    The Data Big Bang and the Expanding Digital Universe: High-Dimensional, Complex and Massive Data Sets in an Inflationary Epoch

    Recent and forthcoming advances in instrumentation, and giant new surveys, are creating astronomical data sets that are not amenable to the methods of analysis familiar to astronomers. Traditional methods are often inadequate not merely because of the size in bytes of the data sets, but also because of the complexity of modern data sets. Mathematical limitations of familiar algorithms and techniques in dealing with such data sets create a critical need for new paradigms for the representation, analysis and scientific visualization (as opposed to illustrative visualization) of heterogeneous, multiresolution data across application domains. Some of the problems presented by the new data sets have been addressed by other disciplines such as applied mathematics, statistics and machine learning and have been utilized by other sciences such as space-based geosciences. Unfortunately, valuable results pertaining to these problems are mostly to be found only in publications outside of astronomy. Here we offer brief overviews of a number of concepts, techniques and developments, some "old" and some new. These are generally unknown to most of the astronomical community, but are vital to the analysis and visualization of complex datasets and images. In order for astronomers to take advantage of the richness and complexity of the new era of data, and to be able to identify, adopt, and apply new solutions, the astronomical community needs a certain degree of awareness and understanding of the new concepts. One of the goals of this paper is to help bridge the gap between applied mathematics, artificial intelligence and computer science on the one side and astronomy on the other. Comment: 24 pages, 8 figures, 1 table. Accepted for publication in Advances in Astronomy, special issue "Robotic Astronomy".

    Mining for Culture: Reaching Out of Range

    The goal of this paper is to present a tool that will sustain the development of culturally relevant computing artifacts by providing an effective means of detecting cultural identities and cultures of participation. Culturally relevant designs rely heavily on how culture impacts design, and though the guidelines for producing culturally relevant objects provide a mechanism for incorporating culture in the design, an effective method is still required for garnering and identifying those cultures in a way that reflects a holistic view of the target audience. This tool presents culturally relevant design as a process of communicating with key audiences, thus bridging people and technology in a way that once seemed out of range.

    The Methods to Improve Quality of Service by Accounting Secure Parameters

    A solution to the problem of ensuring quality of service, providing a greater number of services with higher efficiency while taking network security into account, is proposed. In this paper, experiments were conducted to analyze the effect of self-similarity and attacks on quality-of-service parameters. A method of buffering and channel-capacity control and a method of calculating routing cost in the network are proposed, both of which take into account the multifractality of traffic and the probability of detecting attacks in telecommunication networks. Both proposed methods account for the given restrictions on delay time and the number of lost packets for every quality-of-service traffic type. During simulation, the parameters of the transmitted traffic (self-similarity, intensity) and of the network (current channel load, node buffer size) were varied, and the maximum allowable network load was determined. The results of the analysis show that overload during traffic transmission over a switched channel is associated with the multifractal characteristics of the traffic and the presence of attacks. It was shown that the proposed methods can reduce data loss and improve the efficiency of network resources. Comment: 10 pages, 1 figure, 1 equation, 1 table. arXiv admin note: text overlap with arXiv:1904.0520
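
    As a hedged illustration of the traffic self-similarity these methods account for, the sketch below estimates the Hurst exponent with the standard aggregated-variance method on a synthetic trace; it is not the authors' buffering or routing method, and the trace is not their measured traffic.

```python
# Aggregated-variance Hurst estimate: for self-similar traffic the
# variance of m-aggregated blocks scales as m^(2H - 2).
import numpy as np

def hurst_aggregated_variance(trace, scales=(1, 2, 4, 8, 16, 32, 64)):
    logs_m, logs_var = [], []
    for m in scales:
        n = len(trace) // m
        blocks = trace[:n * m].reshape(n, m).mean(axis=1)  # m-aggregation
        logs_m.append(np.log(m))
        logs_var.append(np.log(np.var(blocks)))
    slope = np.polyfit(logs_m, logs_var, 1)[0]  # slope = 2H - 2
    return 1 + slope / 2

# Example: i.i.d. packet counts should give H near 0.5 (no long-range
# dependence); measured network traffic typically gives H well above 0.5.
rng = np.random.default_rng(3)
packets_per_slot = rng.poisson(lam=100, size=2 ** 16)
print("estimated Hurst exponent:", hurst_aggregated_variance(packets_per_slot))
```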

    Improving multivariate data streams clustering.

    Clustering data streams is an important task in data mining research. Recently, some algorithms have been proposed to cluster data streams as a whole, but only a few of them deal with multivariate data streams. Even so, these algorithms merely aggregate the attributes without touching upon the correlation among them. In order to overcome this issue, we propose a new framework to cluster multivariate data streams based on their evolving behavior over time, exploring the correlations among their attributes by computing the fractal dimension. Experimental results with climate data streams show that the quality and compactness of the clusters can be improved compared with the competing method, leading to the conclusion that attribute correlations cannot be put aside. In fact, the clusters' compactness is 7 to 25 times better using our method. Our framework also proves to be a useful tool to assist meteorologists in understanding climate behavior over a period of time. Published in the Proceedings of the 16th International Conference on Computational Science, San Diego, 2016.
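
    The abstract does not specify which fractal dimension the framework computes, so the sketch below uses the box-counting correlation dimension (D2) of a single multivariate stream window; the window, grid levels, and data are illustrative assumptions, not the authors' implementation.

```python
# Correlation dimension D2 of a multivariate window via box-counting:
# the correlation sum S2(r) = sum_i p_i^2 scales as r^D2.
import numpy as np

def correlation_dimension(window, levels=range(2, 8)):
    # Normalize each attribute to [0, 1]; epsilon guards constant columns.
    w = (window - window.min(axis=0)) / (np.ptp(window, axis=0) + 1e-12)
    logs_r, logs_s = [], []
    for level in levels:
        cells = np.floor(w * (2 ** level - 1e-9)).astype(int)
        _, counts = np.unique(cells, axis=0, return_counts=True)
        p = counts / len(w)
        logs_r.append(np.log(2.0 ** -level))  # cell side length
        logs_s.append(np.log(np.sum(p ** 2)))  # correlation sum
    return np.polyfit(logs_r, logs_s, 1)[0]

# Example: three correlated attributes lying near a 1-D curve, so the
# fractal dimension is far below the ambient dimension of 3.
rng = np.random.default_rng(4)
t = rng.uniform(size=(5000, 1))
stream_window = np.hstack([t, np.sin(3 * t), t ** 2]) + 1e-3 * rng.normal(size=(5000, 3))
print("fractal dimension D2:", correlation_dimension(stream_window))
```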