12 research outputs found

    An efficient time optimized scheme for progressive analytics in big data

    Big data analytics is a key research subject for future data-driven decision-making applications. Due to the large amounts of data involved, progressive analytics can provide an efficient way of querying big data clusters, where each cluster contains only a piece of the examined data. Continuous queries over these data sources require intelligent mechanisms to deliver the final outcome (query response) in minimum time and with maximum performance. A Query Controller (QC) is responsible for managing continuous/sequential queries and returning the final outcome to users or applications. In this paper, we propose a mechanism that can be adopted by the QC. The proposed mechanism manages partial results retrieved by a number of processors, each responsible for a single cluster; each processor executes a query over a specific cluster of data. Our mechanism adopts two sequential decision-making models for handling the incoming partial results: the first is a finite-horizon, time-optimized model and the second an infinite-horizon, optimally scheduled model. We provide mathematical formulations for solving the discussed problem and present simulation results. Through a large number of experiments, we reveal the advantages of the proposed models and give numerical results comparing them with a deterministic model. These results indicate that the proposed models can efficiently reduce the time required to return the final outcome to the user/application while keeping the quality of the aggregated result at a high level.
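    To make the finite-horizon idea concrete, below is a minimal Python sketch of a stopping rule a QC might apply: wait for more partial results only while the expected quality gain of one more step exceeds the delay cost. The horizon, gain, cost, and arrival model are illustrative assumptions, not the paper's formulation.

```python
def should_stop(received: int, total: int, step: int, horizon: int,
                gain_per_result: float, cost_per_step: float) -> bool:
    """Return True if the QC should return the aggregated result now."""
    if step >= horizon or received == total:
        return True  # horizon reached or all processors have reported
    # Expected arrivals in the next step, assuming (hypothetically) that the
    # outstanding results arrive uniformly over the remaining steps.
    remaining = total - received
    expected_arrivals = remaining / (horizon - step)
    return gain_per_result * expected_arrivals <= cost_per_step

# Example: 10 processors, a horizon of 20 steps, one result arriving per step.
step = received = 0
while not should_stop(received, 10, step, 20,
                      gain_per_result=1.0, cost_per_step=0.4):
    step += 1
    received += 1  # stand-in for an arriving partial result
print(f"QC returns after step {step} with {received}/10 partial results")
```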

    Reinforcement machine learning for predictive analytics in smart cities

    The digitization of our lives causes a shift in data production as well as in the required data management. Numerous nodes are capable of producing huge volumes of data in our everyday activities. Sensors, personal smart devices, and the Internet of Things (IoT) paradigm lead to a vast infrastructure that covers all aspects of activities in modern societies. In most cases, the critical issue for public authorities (usually local ones, such as municipalities) is the efficient management of data towards the support of novel services. The reason is that analytics provided on top of the collected data can help deliver new applications that facilitate citizens' lives. However, the provision of analytics demands intelligent techniques for the underlying data management. The best-known technique is the separation of huge volumes of data into a number of parts and their parallel management, to limit the time required for the delivery of analytics. Analytics requests in the form of queries can then be executed to derive the knowledge needed to support intelligent applications. In this paper, we define the concept of a Query Controller (QC) that receives queries for analytics and assigns each of them to a processor placed in front of each data partition. We discuss an intelligent process for query assignments that adopts Machine Learning (ML). We adopt two learning schemes, i.e., Reinforcement Learning (RL) and clustering. We report on the comparison of the two schemes and elaborate on their combination. Our aim is to provide an efficient framework to support the decision making of the QC, which should swiftly select the appropriate processor for each query. We provide mathematical formulations for the discussed problem and present simulation results. Through a comprehensive experimental evaluation, we reveal the advantages of the proposed models and describe the outcomes while comparing them with a deterministic framework.
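    As a rough illustration of the RL scheme's flavor, the following sketch uses an epsilon-greedy, bandit-style learner in which the QC keeps a running value estimate per processor and routes each query to the processor expected to respond fastest. The class name, reward definition, and latency model are assumptions for illustration, not the paper's algorithm.

```python
import random

class QueryController:
    """Epsilon-greedy assignment of queries to processors (toy sketch)."""

    def __init__(self, n_processors: int, epsilon: float = 0.1):
        self.values = [0.0] * n_processors   # estimated reward per processor
        self.counts = [0] * n_processors
        self.epsilon = epsilon

    def assign(self) -> int:
        if random.random() < self.epsilon:            # explore occasionally
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda p: self.values[p])

    def update(self, processor: int, latency: float) -> None:
        # Incremental mean update with reward = negative observed latency.
        self.counts[processor] += 1
        step = 1.0 / self.counts[processor]
        self.values[processor] += step * (-latency - self.values[processor])

qc = QueryController(n_processors=4)
true_speed = [0.2, 0.5, 0.9, 0.4]  # hypothetical mean latencies per processor
for _ in range(1000):
    p = qc.assign()
    qc.update(p, random.expovariate(1.0 / true_speed[p]))
print("learned best processor:", max(range(4), key=lambda p: qc.values[p]))
```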

    Handling location uncertainty in probabilistic location-dependent queries

    Location-based services have motivated intensive research in the field of mobile computing, and particularly on location-dependent queries. Existing approaches usually assume that location data are expressed at a fine geographic precision (physical coordinates such as GPS). However, many positioning mechanisms are subject to an inherent imprecision (e.g., the cell-id mechanism used in cellular networks can only determine the cell where a certain moving object is located). Moreover, even a GPS location can be subject to error or be obfuscated for privacy reasons. Thus, moving objects can be considered to be associated not with an exact location, but with an uncertainty area where they may be located. In this paper, we analyze the problem introduced by the imprecision of the location data available in the data sources by modeling them using uncertainty areas. To do so, we propose a higher-level representation of locations which includes uncertainty, formalizing the concept of uncertainty location granule. This allows us to consider probabilistic location-dependent queries, among which we focus on probabilistic inside (range) constraints. The adopted model allows us to develop a systematic and efficient approach for processing this kind of query. An experimental evaluation shows that these probabilistic queries can be supported efficiently.
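    The probabilistic inside constraint can be illustrated with a simple Monte Carlo estimate: assuming (hypothetically) a uniform distribution over a circular uncertainty area, the sketch below estimates the probability that the object lies inside a rectangular range. The paper's granule-based processing is more systematic; this only conveys the underlying probability computation.

```python
import math
import random

def prob_inside(cx, cy, radius, rect, samples=10_000):
    """Estimate P(object in rect) for a uniform circular uncertainty area."""
    xmin, ymin, xmax, ymax = rect
    hits = 0
    for _ in range(samples):
        # Uniform sample inside the circle (sqrt keeps the density uniform).
        r = radius * math.sqrt(random.random())
        theta = 2 * math.pi * random.random()
        x, y = cx + r * math.cos(theta), cy + r * math.sin(theta)
        if xmin <= x <= xmax and ymin <= y <= ymax:
            hits += 1
    return hits / samples

# Object centered at (0, 0) with 100 m uncertainty; range = [0, 200] x [-50, 50].
print(f"P(inside) ~ {prob_inside(0, 0, 100, (0, -50, 200, 50)):.3f}")
```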

    Continuous processing of location-dependent queries in indoor environments

    This article develops a representation of spatial data for indoor environments that takes user-centered contextual dimensions into account and addresses the challenges of mobile data management. A hierarchical, context-aware indoor data model is proposed. This hierarchical design supports adaptive and efficient processing of location-dependent queries. A continuous query language is developed and illustrated with example queries. This modeling approach is complemented by the development of algorithms for the continuous processing of hierarchical path-finding queries and range queries over indoor moving objects. An experimental study of the developed solutions was conducted to evaluate their performance and scalability with respect to the intrinsic properties of the proposed solutions.
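    A toy sketch of the hierarchical idea follows: a coarse floor-level graph is searched first to prune the room-level path search. The graph topology and names are invented for illustration and do not reproduce the article's algorithms.

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Plain BFS returning one shortest hop path, or None."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Room-level graph, partitioned by floor; "stairs" bridge the two floors.
rooms = {
    "f1.lobby": ["f1.hall"], "f1.hall": ["f1.lobby", "stairs"],
    "stairs": ["f1.hall", "f2.hall"], "f2.hall": ["stairs", "f2.office"],
    "f2.office": ["f2.hall"],
}
floors = {"floor1": ["floor2"], "floor2": ["floor1"]}  # coarse level

# Coarse check first: is floor2 reachable from floor1 at all?
if bfs_path(floors, "floor1", "floor2"):
    print(bfs_path(rooms, "f1.lobby", "f2.office"))
```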

    Scalable Processing of Multiple Aggregate Continuous Queries

    Data Stream Management Systems (DSMSs) were developed to be at the heart of every monitoring application. Monitoring applications typically register hundreds of Continuous Queries (CQs) in DSMSs in order to continuously process unbounded data streams to detect events of interest. DSMSs must be designed to efficiently handle unbounded streams with large volumes of data and large numbers of CQs, i.e., to exhibit scalability. This need for scalability means that the underlying processing techniques a DSMS adopts should be optimized for high throughput (i.e., tuple output rate). Towards this, two main approaches have been proposed in the literature: (1) Multiple Query Optimization (MQO) and (2) Scheduling. In this dissertation we focus on optimizing the processing of multiple Aggregate Continuous Queries (ACQs), given their high processing cost and popularity in all monitoring applications. Specifically, we explore shared processing of ACQs and introduce the concept of 'Weaveability' as an indicator of the potential gains of sharing the processing of ACQs. We develop Weave Share, a multiple-ACQ optimizer that considers the different uncorrelated factors of the processing cost, such as the input rate and the ACQs' specifications. In order to fully reap the benefits of the new weave-based optimization techniques, we conceptualize a new underlying aggregate operator implementation and realize it in the TriOps framework. TriOps enables adaptive sharing of multiple ACQs that have different window specifications, predicates, and group-by attributes. The properties of the proposed techniques are studied analytically, and their performance advantages are experimentally evaluated using simulation and in the context of the AQSIOS DSMS prototype.
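    The following sketch conveys the intuition behind shared ACQ processing; it is not the Weave Share or TriOps implementation. Raw tuples are folded once into per-pane partial sums, and two SUM queries with different window sizes are both answered from those shared partials instead of each scanning the raw stream.

```python
from collections import deque

PANE = 5  # seconds per pane; both windows must be multiples of the pane size

def window_sums(pane_sums, window_panes):
    """Slide a window over shared pane partials and emit one SUM per slide."""
    win, out = deque(), []
    for s in pane_sums:
        win.append(s)
        if len(win) > window_panes:
            win.popleft()
        if len(win) == window_panes:
            out.append(sum(win))
    return out

# Shared sub-aggregation: raw tuples are folded into pane partials only once.
raw = [(t, 1.0) for t in range(60)]            # (timestamp, value) tuples
pane_sums = [0.0] * (60 // PANE)
for t, v in raw:
    pane_sums[t // PANE] += v

print(window_sums(pane_sums, 10 // PANE))      # ACQ 1: 10 s window
print(window_sums(pane_sums, 30 // PANE))      # ACQ 2: 30 s window
```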

    Semantic Keyword-based Search on Heterogeneous Information Systems

    In recent years, with the spread and use of the Internet, the volume of information available to users has grown exponentially. Moreover, access to this information has been boosted by the levels of connectivity we currently enjoy thanks to new-generation mobile phones and wireless networks (e.g., 3G, Wi-Fi). However, with current access methods, this excess of information is as harmful as its absence, since users have no time to process it all. Furthermore, this information sits behind information systems of a very heterogeneous nature (e.g., Web search engines, Linked Data sources, etc.), and users must know these systems to exploit their capabilities fully. This diversity becomes even more apparent if we consider any information service as a potential information source for the user (e.g., location-based services, databases exposed via Web Services, etc.). Given this level of heterogeneity, the integration of these systems must be performed externally, hiding their complexity from the user and providing mechanisms to express queries in a simple way. In this regard, keyword-based interfaces have become popular thanks to their simplicity and their adoption by the most widely used Web search engines. However, the simplicity that is their greatest virtue is also their greatest flaw, as it introduces ambiguity into the queries: queries expressed as sets of keywords are inherently ambiguous, being a projection of the actual question the user wants to ask. In this thesis, we address the problem of integrating heterogeneous information systems under a search guided by the semantics of the keywords, and we present QueryGen, a prototype of our solution. In this semantic search we advocate establishing the query the user had in mind when writing the keywords, expressed in a formal query language to avoid possible ambiguities. The integration of the underlying systems is achieved by defining their query languages and their execution models. In particular, our system:
    - Discovers the meaning of the keywords by consulting a dynamic set of ontologies, and disambiguates them taking their context (the rest of the keywords) into account, since each keyword influences the meaning of the rest of the input. During this process, meanings that are sufficiently similar are merged, and the system proposes the most likely ones given the user's input. The semantic information obtained in this process is integrated and used in later stages to obtain the correct interpretation of the set of keywords.
    - Finds all the possible questions that can be formed with the keywords, since the same set of keywords can represent different queries even when the meaning of each individual keyword is known. This translation from keywords to questions is performed using formal query languages to avoid possible ambiguities and express the query precisely. Our system avoids generating semantically incorrect or duplicated questions with the help of a Description Logics reasoner, and it reacts to insufficient input (e.g., omitted words) by adding virtual terms, which internally represent words that the user had in mind but omitted when writing the query.
    - Finally, once the user has validated the query, accesses the registered information systems that can answer it and retrieves the answer according to the semantics of the query. To this end, our system implements a modular architecture that allows new systems to be added on the fly, provided their specification is supplied (supported query languages, data models and formats, etc.).
    Working with heterogeneous information systems, particularly systems related to Mobile Computing, has allowed the contributions of this thesis to extend beyond the field of semantic search. In this respect, we have studied the semantics of location-based queries, and especially the influence of the semantics of locations on their processing and interpretation. In particular, two ontological models are proposed to capture the semantic relationships between locations and to extend the expressiveness of location-based queries. During the development of this thesis, situated between the Semantic Web and Mobile Computing fields, a new line of research on the modeling of volatile knowledge has been opened, and the feasibility of running Description Logics reasoners on Android-based devices has been studied. Finally, our work on semantic keyword-based search has been extended to conversational agents, enabling them to exploit the various semantic data sources currently available under the Linked Data principles.
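    As a toy illustration of context-driven keyword disambiguation (a simplified overlap score, far cruder than QueryGen's ontology-backed process), the sketch below picks, for each keyword, the candidate sense whose gloss overlaps most with the remaining keywords; the sense inventories are invented.

```python
def disambiguate(keywords, senses):
    """senses: {keyword: {sense_name: set_of_gloss_terms}} -> chosen senses."""
    chosen = {}
    for kw in keywords:
        context = {w for w in keywords if w != kw}
        # Pick the sense whose gloss shares the most terms with the context.
        chosen[kw] = max(
            senses[kw].items(),
            key=lambda item: len(item[1] & context),
        )[0]
    return chosen

senses = {  # hypothetical ontology fragments
    "bank": {"bank#finance": {"money", "account"},
             "bank#river": {"water", "shore"}},
    "account": {"account#finance": {"bank", "money"},
                "account#story": {"narrative"}},
}
print(disambiguate(["bank", "account"], senses))
```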

    Towards an Efficient, Scalable Stream Query Operator Framework for Representing and Analyzing Continuous Fields

    Advancements in sensor technology have made it less expensive to deploy massive numbers of sensors to observe continuous geographic phenomena at high sample rates and to stream live sensor observations. This fact has raised new challenges, since sensor streams have pushed the limits of traditional geo-sensor data management technology. Data Stream Engines (DSEs) provide facilities for near real-time processing of streams; however, algorithms for representing and analyzing Spatio-Temporal (ST) phenomena are limited. This dissertation investigates near real-time representation and analysis of continuous ST phenomena, observed by large numbers of mobile, asynchronously sampling sensors, using a DSE, and proposes two novel stream query operator frameworks. First, the ST Interpolation Stream Query Operator Framework (STI-SQO framework) continuously transforms sensor streams into rasters using a novel set of stream query operators that perform ST-IDW interpolation. A key component of the STI-SQO framework is the 3D, main-memory-based ST Grid Index, which enables high-performance ST insertion and deletion of massive numbers of sensor observations through Isotropic Time Cell and Time Block-based partitioning. The ST Grid Index facilitates fast ST search for samples using ST shell-based neighborhood search templates, namely the Cylindrical Shell Template and the Nested Shell Template. Furthermore, the framework contains the stream-based ST-IDW algorithms ST Shell and ST ak-Shell for high-performance, parallel grid cell interpolation. Second, the proposed ST Predicate Stream Query Operator Framework (STP-SQO framework) efficiently evaluates value predicates over ST streams of continuous phenomena. The framework contains several stream-based predicate evaluation algorithms, including Region-Growing, Tile-based, and Phenomenon-Aware algorithms, that target predicate evaluation to regions with seed points and minimize the number of raster cells that are interpolated when evaluating value predicates. The performance of the proposed frameworks was assessed with regard to prediction accuracy of output results and runtime. The STI-SQO framework achieved a processing throughput of 250,000 observations in 2.5 s with a Normalized Root Mean Square Error under 0.19 using a 500×500 grid. The STP-SQO framework processed over 250,000 observations in under 0.25 s for predicate results covering less than 40% of the observation area, and the Scan Line Region Growing algorithm was consistently the fastest algorithm tested.
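    A compact sketch of the ST-IDW idea for a single grid cell appears below: each sample is weighted by the inverse of a combined space-time distance. The way time is scaled into spatial units and the parameter names are illustrative assumptions, not the STI-SQO operators.

```python
import math

def st_idw(samples, qx, qy, qt, power=2.0, time_scale=10.0):
    """samples: iterable of (x, y, t, value); returns interpolated value."""
    num = den = 0.0
    for x, y, t, v in samples:
        # The time difference is scaled into spatial units before combining.
        d = math.sqrt((x - qx) ** 2 + (y - qy) ** 2
                      + (time_scale * (t - qt)) ** 2)
        if d == 0:
            return v                      # exact hit: no interpolation needed
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

obs = [(0, 0, 0.0, 20.0), (10, 0, 0.1, 24.0), (0, 10, 0.2, 22.0)]
print(f"estimate at (5, 5, t=0.1): {st_idw(obs, 5, 5, 0.1):.2f}")
```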

    Large-Scale Indexing, Discovery, and Ranking for the Internet of Things (IoT)

    Network-enabled sensing and actuation devices are key enablers for connecting real-world objects to the cyber world. The Internet of Things (IoT) consists of the network-enabled devices and communication technologies that allow connectivity and integration of physical objects (Things) into the digital world (Internet). Enormous amounts of dynamic IoT data are collected from Internet-connected devices. IoT data are usually multivariate streams that are heterogeneous, sporadic, multi-modal, and spatio-temporal. IoT data can be disseminated at different granularities and have diverse structures, types, and qualities. Dealing with the data deluge from heterogeneous IoT resources and services imposes new challenges on the indexing, discovery, and ranking mechanisms needed to build applications that require on-line access and retrieval of ad-hoc IoT data. However, the existing IoT data indexing and discovery approaches are complex or centralised, which hinders their scalability. The primary objective of this article is to provide a holistic overview of the state of the art on indexing, discovery, and ranking of IoT data. The article aims to pave the way for researchers to design, develop, implement, and evaluate techniques and approaches for on-line, large-scale, distributed IoT applications and services.

    Statistical Models for Querying and Managing Time-Series Data

    Get PDF
    In recent years we have been experiencing a dramatic increase in the amount of available time-series data. Primary sources of time-series data are sensor networks, medical monitoring, financial applications, news feeds, and social networking applications. The availability of large amounts of time-series data calls for scalable data management techniques that enable efficient querying and analysis of such data in real-time and archival settings. Often the time-series data generated from sensors (environmental, RFID, GPS, etc.) are imprecise and uncertain in nature. Thus, it is necessary to characterize this uncertainty to produce clean answers. In this thesis we propose methods that address these important issues pertaining to time-series data. Particularly, this thesis is centered around the following three topics:

    Computing Statistical Measures on Large Time-Series Datasets. Computing statistical measures for large databases of time series is a fundamental primitive for querying and mining time-series data [31, 81, 97, 111, 132, 137]. This primitive is gaining importance with the increasing number and rapid growth of time-series databases. In Chapter 3, we introduce the Affinity framework for efficient computation of statistical measures by exploiting the concept of affine relationships [113, 114]. Affine relationships can be used to infer a large number of statistical measures for time series from other, related time series, instead of computing them directly, thus reducing the overall computational cost significantly. Moreover, the Affinity framework proposes a unified approach for computing several statistical measures at once.

    Creating Probabilistic Databases from Imprecise Data. A large amount of time-series data produced in the real world has an inherent element of uncertainty, arising from the various sources of imprecision affecting its sources (e.g., sensor data, GPS trajectories, environmental monitoring data, etc.). The primary sources of imprecision in such data are imprecise sensors, limited communication bandwidth, sensor failures, etc. Recently there has been an exponential rise in the number of such imprecise sensors, which has led to an explosion of imprecise data. Standard database techniques cannot be used to provide clean and consistent answers in such scenarios. Therefore, probabilistic databases that factor in the inherent uncertainty and produce clean answers are required. An important assumption while using probabilistic databases is that each data point has a probability distribution associated with it. This is not true in practice: the distributions are absent. As a solution to this fundamental limitation, in Chapter 4 we propose methods for inferring such probability distributions and using them for efficiently creating probabilistic databases [116].

    Managing Participatory Sensing Data. Community-driven participatory sensing is a rapidly evolving paradigm in mobile geo-sensor networks. Here, sensors of various sorts (e.g., multi-sensor units monitoring air quality, cell phones, thermal watches, thermometers in vehicles, etc.) are carried by the community (public vehicles, private vehicles, or individuals) during their daily activities, collecting various types of data about their surroundings. Data generated by these devices are large in quantity and geographically and temporally skewed. Therefore, systems designed for managing such data should be aware of these unique data characteristics. In Chapter 5, we propose the ConDense (Community-driven Sensing of the Environment) framework for managing and querying community-sensed data [5, 19, 115]. ConDense exploits the spatial smoothness of environmental parameters (such as ambient pollution [5] or radiation [2]) to construct statistical models of the data. Since the number of constructed models is significantly smaller than the original data, we show that using our approach leads to a dramatic increase in query processing efficiency [19, 115] and significantly reduces memory usage.
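    The core idea behind affine relationships can be shown in a few lines: if a series y is known to satisfy y = a*x + b, then mean(y) = a*mean(x) + b and std(y) = |a|*std(x), so y's statistics follow from x's without scanning y. This is only the underlying identity, not the Affinity framework itself.

```python
import statistics

x = [float(i) for i in range(1, 101)]
a, b = 2.5, -7.0
y = [a * xi + b for xi in x]   # y is affinely related to x

# Infer y's statistics from x's, without touching y's samples.
inferred_mean = a * statistics.mean(x) + b
inferred_std = abs(a) * statistics.pstdev(x)

assert abs(inferred_mean - statistics.mean(y)) < 1e-9
assert abs(inferred_std - statistics.pstdev(y)) < 1e-9
print(f"mean(y) = {inferred_mean:.2f}, std(y) = {inferred_std:.2f}")
```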