11 research outputs found

    Distribution and infection of triatomines (Hemiptera: Reduviidae) by Trypanosoma cruzi in the state of Michoacán, Mexico

    Full text link
    An entomological study of triatomine species was carried out to assess their prevalence in 10 localities of the state of Michoacán, Mexico. Entomological indices were calculated to estimate the risk of vector-borne transmission of Trypanosoma cruzi to the human population in this area. Four triatomine species (Triatoma barberi, Triatoma dimidiata, Meccus pallidipennis and Meccus longipennis) were collected from the study area. This is the first report of M. longipennis and T. dimidiata in Michoacán. M. pallidipennis was significantly (p < 0.05) more abundant than any of the other species collected in the study area. Infection indices were greater than 50% for each of the four collected triatomine species. Significantly more triatomines were collected from intradomiciliary areas than from peridomiciliary or sylvatic areas. Infestation, crowding and density indices were low, whereas colonisation indices were high in five localities. The current vectorial conditions in the study area require continuous entomological and serological surveillance to diminish the risk of T. cruzi transmission to human populations.
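
    The abstract reports infestation, colonisation, density and crowding indices without giving formulas. The sketch below uses the commonly cited definitions of these entomological indices; the study's exact formulas may differ, and the example figures are hypothetical.

```python
def entomological_indices(houses_inspected, houses_infested,
                          houses_with_nymphs, triatomines_collected):
    """Commonly used entomological indices (percentages where noted);
    the exact definitions applied in the study may differ."""
    return {
        "infestation (%)": 100.0 * houses_infested / houses_inspected,
        "colonisation (%)": 100.0 * houses_with_nymphs / max(houses_infested, 1),
        "density": triatomines_collected / houses_inspected,
        "crowding": triatomines_collected / max(houses_infested, 1),
    }

# Hypothetical example for one locality.
indices = entomological_indices(houses_inspected=120, houses_infested=18,
                                houses_with_nymphs=9, triatomines_collected=75)
```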

    Multi-resolution time series discord discovery

    No full text
    Discord discovery is a recent approach for anomaly detection in time series that has attracted much research because of the wide variety of real-world applications in monitoring systems. However, finding anomalies at different levels of resolution has received little attention in this line of research. In this paper, we introduce a multi-resolution representation based on local trends and mean values of the time series. We require the level of resolution as a parameter, but it can be computed automatically if we consider the maximum resolution of the time series. To provide a useful representation for discord discovery, we propose dissimilarity measures for achieving highly effective results, and a symbolic representation based on the SAX technique for efficient searches using a multi-resolution indexing scheme. We evaluate our method over a diversity of data domains, achieving better performance compared with some of the best-known classic techniques.
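
    A minimal sketch of the representation as described in the abstract: at each resolution level the series is split into equal segments, and each segment is summarized by its mean value and a least-squares local trend. The power-of-two segmentation, the function names and the per-level Euclidean dissimilarity are illustrative assumptions, not the authors' exact formulation; the maximum useful level is roughly log2 of the series length.

```python
import numpy as np

def multires_trend_mean(series, levels):
    """At level k, split the series into up to 2**k equal segments and
    summarize each segment by (mean value, least-squares slope)."""
    series = np.asarray(series, dtype=float)
    representation = {}
    for k in range(levels + 1):
        n_segments = min(2 ** k, len(series))
        pairs = []
        for seg in np.array_split(series, n_segments):
            x = np.arange(len(seg))
            slope = np.polyfit(x, seg, 1)[0] if len(seg) > 1 else 0.0
            pairs.append((seg.mean(), slope))
        representation[k] = pairs
    return representation

def dissimilarity(rep_a, rep_b, level):
    """Euclidean-style distance over the (mean, trend) pairs of one level."""
    a = np.array(rep_a[level])
    b = np.array(rep_b[level])
    return float(np.sqrt(((a - b) ** 2).sum()))
```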

    A Multi-resolution Approximation for Time Series

    No full text
    Time series are a common and well-known way of describing temporal data. However, most state-of-the-art techniques for analysing time series have focused on generating a representation for a single level of resolution. To analyse a time series at several levels of resolution, one would need to compute a different representation for each resolution level. We introduce a multi-resolution representation for time series based on local trends and mean values. We require the level of resolution as a parameter, but it can be computed automatically if we consider the maximum resolution of the time series. Our technique represents a time series using trend-value pairs for each segment belonging to a resolution level. To provide a useful representation for data mining tasks, we also propose dissimilarity measures and a symbolic representation based on the SAX technique for efficient similarity search using a multi-resolution indexing scheme. We evaluate our method on classification and discord discovery tasks over a diversity of data domains, achieving better performance in terms of efficiency and effectiveness compared with some of the best-known classic techniques. Indeed, for some of the experiments, the time series mining algorithms using our multi-resolution representation were an order of magnitude faster, in terms of distance computations, than the state of the art.
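
    As a companion to the trend-value representation above, the sketch below shows a SAX-style discretization of per-segment values, the kind of symbolic representation the abstract refers to for indexed similarity search. The alphabet size, the helper name and the use of standard-normal breakpoints follow the generic SAX technique and are assumptions, not a reproduction of the paper's scheme.

```python
import numpy as np
from statistics import NormalDist

def sax_symbols(values, alphabet_size=4):
    """Z-normalize the segment values and map them to symbols using
    equiprobable breakpoints of the standard normal distribution."""
    values = np.asarray(values, dtype=float)
    z = (values - values.mean()) / (values.std() + 1e-12)
    breakpoints = [NormalDist().inv_cdf(i / alphabet_size)
                   for i in range(1, alphabet_size)]
    return ''.join(chr(ord('a') + int(np.searchsorted(breakpoints, v))) for v in z)

# Usage: symbolize the per-segment means of one resolution level so that
# candidates can be pruned with a word index before any exact distance is computed.
word = sax_symbols([0.1, 1.3, -0.7, 0.2], alphabet_size=4)
```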

    An Optimal Set of Indices for Dynamic Combinations of Metric Spaces

    No full text
    A recent trend to improve the effectiveness of similarity queries in multimedia databases is based on dynamic combinations of metric spaces. The efficiency issue when using these dynamic combinations is still an open problem, especially in the case of binary weights. Our solution resorts to the use of a set of indices. We describe a binary linear program that finds the optimal set of indices given space constraints. Because binary linear programming is NP-hard in general, we also develop greedy algorithms that find good sets of indices quickly. The solutions returned by the approximation algorithms are very close to the optimal value for the instances where the optimum can be calculated.
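
    The abstract does not spell out the greedy procedure, so the following is only a minimal knapsack-style sketch of how a set of indices could be chosen greedily under a space budget; the candidate tuples, the benefit-per-space ranking and the example values are all hypothetical.

```python
def greedy_index_selection(candidates, space_budget):
    """Greedy stand-in for the binary linear program: repeatedly pick the
    candidate index with the best benefit-per-space ratio that still fits.
    `candidates` is a list of (name, benefit, size) tuples, where benefit
    would estimate the query cost saved by materializing that index."""
    chosen, used = [], 0
    for name, benefit, size in sorted(candidates,
                                      key=lambda c: c[1] / c[2],
                                      reverse=True):
        if used + size <= space_budget:
            chosen.append(name)
            used += size
    return chosen

# Hypothetical example: three single-space indices and one combined index.
selection = greedy_index_selection(
    [("idx_color", 10.0, 4), ("idx_texture", 7.0, 3),
     ("idx_shape", 5.0, 2), ("idx_color+texture", 14.0, 6)],
    space_budget=8)
```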

    A survey on frameworks used for robustness analysis on interdependent networks

    No full text
    The analysis of network robustness tackles the problem of studying how a complex network behaves under adverse scenarios, such as failures or attacks. In particular, the analysis of interdependent networks' robustness focuses on the specific case of the robustness of interacting networks and their emerging behaviours. This survey systematically reviews the literature on frameworks for analysing the robustness of interdependent networks published between 2005 and 2017. The review shows that there exists a broad range of interdependent network models, robustness metrics, and studies that can be used to understand the behaviour of different systems under failure or attack. Regarding models, we found a focus on systems where a node in one layer interacts with exactly one node in another layer. In studies, we observed a focus on network percolation, while among the metrics we observed a focus on measures that count network elements. Finally, for the networks used to test the frameworks, the focus was on synthetic models rather than on real network systems. This review suggests opportunities in network research, such as the study of robustness of interdependent networks with multiple interactions and/or spatially embedded networks, and the use of interdependent network models in realistic network scenarios.
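
    For readers unfamiliar with the percolation studies the survey highlights, below is a minimal sketch of the classic one-to-one interdependent cascade (in the spirit of Buldyrev-style models), assuming networkx and synthetic Erdős–Rényi layers. It illustrates the modelling style surveyed and is not a framework taken from the reviewed literature.

```python
import random
import networkx as nx

def cascade_survivors(layer_a, layer_b, fraction_removed):
    """One-to-one interdependent cascade: node i in layer A depends on node i
    in layer B and vice versa.  After an initial random failure in A, a node
    keeps functioning only while it lies in the giant component of its own
    layer and its partner is still alive."""
    alive = set(layer_a.nodes())
    attacked = set(random.sample(sorted(alive), int(fraction_removed * len(alive))))
    alive -= attacked
    while True:
        giant_a = max(nx.connected_components(layer_a.subgraph(alive)),
                      default=set(), key=len)
        giant_b = max(nx.connected_components(layer_b.subgraph(alive)),
                      default=set(), key=len)
        next_alive = set(giant_a) & set(giant_b)
        if next_alive == alive:
            return alive
        alive = next_alive

# Usage on two synthetic layers over the same node set.
A = nx.erdos_renyi_graph(1000, 0.004, seed=1)
B = nx.erdos_renyi_graph(1000, 0.004, seed=2)
survivors = cascade_survivors(A, B, fraction_removed=0.3)
```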

    IMGpedia: A linked dataset with content-based analysis of wikimedia images

    No full text
    IMGpedia is a large-scale linked dataset that incorporates visual information of the images from the Wikimedia Commons dataset: it brings together descriptors of the visual content of 15 million images, 450 million visual-similarity relations between those images, links to image metadata from DBpedia Commons, and links to the DBpedia resources associated with individual images. In this paper we describe the creation of the IMGpedia dataset, provide an overview of its schema and statistics of its contents, offer example queries that combine semantic and visual information of images, and discuss other envisaged use-cases for the dataset.
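
    The abstract mentions example queries that combine semantic and visual information. The sketch below only illustrates how such a query might be issued from Python with SPARQLWrapper; the endpoint URL, the imo: prefix and the predicate names are assumptions made for illustration and may not match IMGpedia's actual vocabulary.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical sketch: find images visually similar to images associated with
# a given DBpedia resource.  Endpoint URL and predicate names are assumptions.
ENDPOINT = "http://imgpedia.dcc.uchile.cl/sparql"  # assumed endpoint

query = """
PREFIX imo: <http://imgpedia.dcc.uchile.cl/ontology#>   # assumed prefix
SELECT ?similar WHERE {
  ?img imo:associatedWith <http://dbpedia.org/resource/Santiago> .  # assumed predicate
  ?img imo:similar ?similar .                                       # assumed predicate
} LIMIT 10
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
```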

    An efficient algorithm for approximated self-similarity joins in metric spaces

    No full text
    Similarity join is a key operation in metric databases: it retrieves all pairs of elements that are similar. Solving this problem usually requires comparing every pair of objects in the datasets, even when indexing and ad hoc algorithms are used. We propose a simple and efficient algorithm for computing the approximated k nearest neighbor self-similarity join. This algorithm computes Θ(n^(3/2)) distances, and it is empirically shown to reach a precision of 46% on real-world datasets. We provide a comparison with other common techniques such as Quickjoin and Locality-Sensitive Hashing and argue that our proposal has better execution time and average precision.
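
    The Θ(n^(3/2)) bound suggests a grouping-based scheme: split the n objects into about √n groups of about √n elements each and join every group exhaustively. The sketch below implements that idea with random groups; the paper's actual grouping strategy may differ.

```python
import math
import random
import numpy as np

def approx_knn_self_join(points, k, dist=lambda a, b: float(np.linalg.norm(a - b))):
    """Approximate kNN self-join: random groups of size ~sqrt(n), each joined
    exhaustively, for a total of Theta(n^(3/2)) distance computations."""
    n = len(points)
    group_size = max(k + 1, math.isqrt(n))
    order = list(range(n))
    random.shuffle(order)
    result = {}
    for start in range(0, n, group_size):
        group = order[start:start + group_size]
        for i in group:
            neighbours = sorted((dist(points[i], points[j]), j) for j in group if j != i)
            result[i] = [j for _, j in neighbours[:k]]
    return result
```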

    Scalable 3D shape retrieval using local features and the signature quadratic form distance

    No full text
    We present a scalable and unsupervised approach for content-based retrieval on 3D model collections. Our goal is to represent a 3D shape as a set of discriminative local features, which is important for maintaining robustness against deformations such as non-rigid transformations and partial data. However, this representation raises the problem of how to compare two 3D models represented by feature sets. To solve this problem, we apply the signature quadratic form distance (SQFD), which is suitable for comparing feature sets. Using SQFD, the matching between two 3D objects involves only their representations, so it is easy to add new models to the collection. A key characteristic of the feature signatures required by the SQFD is that the final object representation can be obtained easily in an unsupervised manner. Additionally, as the SQFD is an expensive distance function, to make the system scalable we present a novel technique to reduce the number of features by detecting clusters of key points on a 3D model. Thus, with smaller feature sets, the distance calculation is more efficient. Our experiments on a large-scale dataset show that our proposed matching algorithm not only performs efficiently, but is also more effective than state-of-the-art matching algorithms for 3D models.
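
    The SQFD itself is a published formula: concatenate the weights of one signature with the negated weights of the other, build a similarity matrix over all centroids of both signatures, and take the square root of the resulting quadratic form. A small sketch follows; the Gaussian kernel and the alpha value are common illustrative choices rather than the parameters used in the paper.

```python
import numpy as np

def sqfd(sig_a, sig_b, alpha=0.9):
    """Signature quadratic form distance between two feature signatures,
    each given as a list of (centroid, weight) pairs."""
    centroids = np.array([c for c, _ in sig_a] + [c for c, _ in sig_b], dtype=float)
    weights = np.array([w for _, w in sig_a] + [-w for _, w in sig_b], dtype=float)
    # Pairwise squared distances between all centroids of both signatures.
    diff = centroids[:, None, :] - centroids[None, :, :]
    sq_dists = (diff ** 2).sum(axis=-1)
    similarity = np.exp(-alpha * sq_dists)   # Gaussian similarity kernel (illustrative)
    value = weights @ similarity @ weights
    return float(np.sqrt(max(value, 0.0)))
```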

    A benchmark of simulated range images for partial shape retrieval

    No full text
    In this paper, we address the evaluation of algorithms for partial shape retrieval using a large-scale simulated benchmark of partial views, which are used as queries. Since scanning real objects is a time-consuming task, we create a simulation that generates a set of views from a target model at different levels of complexity (amount of missing data). In total, our benchmark contains 7,200 partial views. Furthermore, we propose the use of weighted effectiveness measures based on the complexity of a query. With these characteristics, we aim at jointly evaluating the effectiveness, efficiency and robustness of existing algorithms.
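
    The abstract proposes weighting effectiveness by query complexity but does not give the formula. The sketch below shows one plausible reading, a complexity-weighted average of per-query retrieval scores; the proportional weights and the example values are assumptions.

```python
def weighted_effectiveness(per_query_scores, query_complexity):
    """Complexity-weighted effectiveness: each query's retrieval score
    (e.g. average precision) is weighted by the complexity of the partial
    view used as the query, so harder queries count for more."""
    total_weight = sum(query_complexity.values())
    return sum(score * query_complexity[q]
               for q, score in per_query_scores.items()) / total_weight

# Hypothetical usage: three queries at increasing levels of missing data.
score = weighted_effectiveness(
    {"q1": 0.9, "q2": 0.7, "q3": 0.4},
    {"q1": 1, "q2": 2, "q3": 3})
```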

    Combining pixel domain and compressed domain index for sketch based image retrieval

    No full text
    Sketch-based image retrieval (SBIR) lets one express a precise visual query by simple and widespread means. In SBIR approaches, the challenge lies in representing the image dataset features in a structure that allows images to be retrieved efficiently and effectively in a scalable system. We put forward a sketch-based image retrieval solution where sketches and natural image contours are represented and compared both in the wavelet compressed domain and in the pixel domain. The query is performed efficiently in the wavelet domain, while effectiveness refinements are achieved using the pixel domain to verify the spatial consistency between the sketch strokes and the natural image contours. We also present an efficient scheme of inverted lists for sketch-based image retrieval using the wavelet compressed domain. Our indexing proposal has two main advantages: the amount of data needed to compute the query is smaller than in the traditional method, and it achieves better effectiveness.
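
    A minimal two-stage sketch of the filter-and-refine idea described above, assuming PyWavelets and SciPy: a coarse Haar approximation of the contour image serves as the compressed-domain key for cheap filtering, and a pixel-domain consistency check re-scores the surviving candidates. The function names and the distance-transform-based score are illustrative assumptions, not the paper's exact measures.

```python
import numpy as np
import pywt
from scipy.ndimage import distance_transform_edt

def wavelet_key(edge_map, level=3):
    """Filtering step: a coarse Haar approximation of the contour image acts
    as a compact compressed-domain descriptor that can be stored in inverted
    lists and compared cheaply."""
    coeffs = pywt.wavedec2(edge_map.astype(float), 'haar', level=level)
    return coeffs[0].ravel()   # coarse approximation coefficients only

def pixel_domain_score(sketch_edges, image_edges, tolerance=2):
    """Refinement step: spatial-consistency check in the pixel domain, scoring
    the fraction of sketch stroke pixels that fall near an image contour."""
    dist_to_contour = distance_transform_edt(image_edges == 0)
    stroke = sketch_edges > 0
    return float((dist_to_contour[stroke] <= tolerance).mean())

# Usage idea: rank candidates by the cheap wavelet key first, then re-rank the
# resulting short list with the pixel-domain consistency score.
```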