544 research outputs found

    Semantically-aware data discovery and placement in collaborative computing environments

    As the size of scientific datasets and the demand for interdisciplinary collaboration grow in modern science, better ways of discovering and placing datasets generated across multiple disciplines must be developed to facilitate interdisciplinary scientific research. For discovering relevant data within large-scale interdisciplinary datasets, the development and integration of cross-domain metadata is critical, since metadata serves as the key guideline for organizing data. To develop and integrate cross-domain metadata management systems in an interdisciplinary collaborative computing environment, three key issues need to be addressed: the development of a cross-domain metadata schema; the implementation of a metadata management system based on this schema; and the integration of the metadata system into existing distributed computing infrastructure. Current research on metadata management in distributed computing environments largely focuses on relatively simple schemas that lack the descriptive power to adequately address the semantic heterogeneity often found in interdisciplinary science, and it does not give adequate consideration to scalability in large-scale data management. Another key issue in data management is data placement: because of the increasing size of scientific datasets, the overhead incurred by transferring data among different nodes grows into a significant factor limiting overall performance. Currently, few data placement strategies take semantic information about data content into consideration. In this dissertation, we propose a cross-domain metadata system in a collaborative distributed computing environment and identify and evaluate the key factors and processes involved in a successful cross-domain metadata system, with the goal of facilitating data discovery in collaborative environments. This will allow researchers and users to conduct interdisciplinary science on large-scale datasets, making interdisciplinary datasets easier to access, reducing barriers to collaboration, and reducing the cost of developing similar systems in the future. We also investigate data placement strategies that combine semantic information about the hardware and network environment with domain information in the form of semantic metadata, so that semantic locality can be exploited in data placement and the overhead of accessing large-scale interdisciplinary datasets can potentially be reduced.
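
    To make the idea of semantic-locality-aware placement concrete, the following is a minimal sketch, not the dissertation's implementation: it scores candidate storage nodes by how much cross-domain metadata a new dataset shares with data already resident on each node, penalized by transfer cost. All names (Node, place_dataset, the weighting factor alpha) are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        resident_terms: set      # cross-domain metadata terms of data already stored here
        transfer_cost: float     # relative cost of moving data to this node

    def semantic_affinity(dataset_terms: set, node: Node) -> float:
        """Jaccard overlap between the dataset's metadata and the node's resident data."""
        union = dataset_terms | node.resident_terms
        return len(dataset_terms & node.resident_terms) / len(union) if union else 0.0

    def place_dataset(dataset_terms: set, nodes: list, alpha: float = 0.7) -> Node:
        """Score = alpha * semantic affinity - (1 - alpha) * normalized transfer cost."""
        max_cost = max(n.transfer_cost for n in nodes) or 1.0
        return max(
            nodes,
            key=lambda n: alpha * semantic_affinity(dataset_terms, n)
                          - (1 - alpha) * (n.transfer_cost / max_cost),
        )

    # Example: a climate/hydrology dataset lands near semantically related data.
    nodes = [
        Node("storage-a", {"climate", "precipitation", "gridded"}, transfer_cost=2.0),
        Node("storage-b", {"genomics", "sequencing"}, transfer_cost=1.0),
    ]
    best = place_dataset({"climate", "runoff", "precipitation"}, nodes)
    print(best.name)  # storage-a: higher semantic locality outweighs its transfer cost

    In this sketch, semantic locality dominates when alpha is large; a real system would derive the metadata terms from the cross-domain schema rather than from hand-written sets.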

    Using Ontologies and Intelligent Systems for Traffic Accident Assistance in Vehicular Environments

    Although safety measures in transportation systems keep improving, the steady growth in the number of vehicles on city streets and roads worldwide undoubtedly increases the probability of an accident occurring. In such situations, the response time of the emergency services is crucial, since it has been shown that the shorter the time between the accident and the hospital care of the injured, the higher their chances of survival. Vehicular networks enable communication among vehicles, as well as between vehicles and the infrastructure [4], which gives rise to a plethora of new applications and services in the vehicular environment. Focusing on road-safety applications, through this kind of communication vehicles could report an accident to the other vehicles (thus avoiding chain collisions) and to the emergency services (providing fast and precise information, which would undoubtedly facilitate rescue tasks). Important aspects to determine are what information should be sent, who will be able to receive it, and how to act once it has been received. Vehicles currently carry a set of sensors that allow them to obtain information about themselves (speed, position, status of the safety systems, number of occupants, etc.) and about their environment (weather, road surface condition, lighting, etc.). In case of an accident, all of this information can be structured and sent to the emergency services so that they can adapt the rescue to the specific characteristics and severity of the accident and act accordingly. On the other hand, for the information sent by the crashed vehicles to reach the emergency services correctly, an infrastructure capable of covering all the vehicles traveling in a given area is required. Since installing and maintaining such an infrastructure entails a high cost, it is advisable to propose, implement, and evaluate techniques that provide coverage for all vehicles while reducing the total cost of the infrastructure. Finally, once the information has been received by the authorities, an effective action plan that allows the rapid rescue of the injured must be drawn up. It should be borne in mind that, when a traffic accident occurs, the time it takes the emergency services to reach the accident site can make the difference between the injured surviving or dying. It is also important to know whether the street or road on which the crashed vehicles were traveling is no longer passable for other vehicles and, in that case, to activate the mechanisms needed to avoid the associated traffic jams. This thesis aims to manage these adverse situations appropriately, distributing traffic intelligently to reduce the arrival time of the emergency services at the accident site while also avoiding possible traffic jams. Barrachina Villalba, J. (2014). Using Ontologies and Intelligent Systems for Traffic Accident Assistance in Vehicular Environments [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/39004
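
    As an illustration of the kind of structured accident report a vehicle's sensors could produce for the emergency services, here is a minimal sketch; the field names and the severity heuristic are assumptions for illustration, not the thesis's actual message format.

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class AccidentReport:
        vehicle_id: str
        latitude: float
        longitude: float
        speed_at_impact_kmh: float
        occupants: int
        airbags_deployed: bool
        road_condition: str        # e.g. "dry", "wet", "icy"

        def severity_hint(self) -> str:
            """Rough triage hint so responders can scale the rescue (assumed heuristic)."""
            if self.airbags_deployed and self.speed_at_impact_kmh > 80:
                return "high"
            if self.airbags_deployed or self.speed_at_impact_kmh > 40:
                return "medium"
            return "low"

        def to_message(self) -> str:
            """Serialize the report for transmission over the vehicular network."""
            payload = asdict(self)
            payload["severity"] = self.severity_hint()
            return json.dumps(payload)

    # Example: a two-occupant vehicle crashes on a wet road at 95 km/h.
    report = AccidentReport("VEH-1234", 39.48, -0.34, 95.0, 2, True, "wet")
    print(report.to_message())  # JSON the emergency services could parse on reception

    A report like this, relayed through the vehicular infrastructure, gives responders the position, occupancy, and an initial severity estimate before they reach the scene.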

    Edge Intelligence : Empowering Intelligence to the Edge of Network

    Edge intelligence refers to a set of connected systems and devices for data collection, caching, processing, and analysis in proximity to where the data are captured, based on artificial intelligence. Edge intelligence aims at enhancing data processing and protecting the privacy and security of the data and users. Although it emerged only recently, around 2011, this field of research has shown explosive growth over the past five years. In this article, we present a thorough and comprehensive survey of the literature surrounding edge intelligence. We first identify four fundamental components of edge intelligence, i.e., edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then aim for a systematic classification of the state of the solutions by examining research results and observations for each of the four components and present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate, compare, and analyze the literature from the perspectives of adopted techniques, objectives, performance, advantages and drawbacks, and so on. This article provides a comprehensive survey of edge intelligence and its application areas. In addition, we summarize the development of the emerging research fields and the current state of the art, and discuss the important open issues and possible theoretical and technical directions.
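
    To give a feel for one of the four components, edge offloading, the following is a minimal sketch of a latency-driven offloading decision: run a task on the device or ship its input to a nearby edge server. It is an assumption for illustration, not a scheme from the survey, and all numbers are hypothetical.

    def estimate_local_latency(workload_flops: float, device_flops_per_s: float) -> float:
        """Time to execute the task on the device itself."""
        return workload_flops / device_flops_per_s

    def estimate_offload_latency(input_bytes: float, uplink_bytes_per_s: float,
                                 workload_flops: float, edge_flops_per_s: float) -> float:
        """Time to upload the input plus time to execute on the edge server."""
        return input_bytes / uplink_bytes_per_s + workload_flops / edge_flops_per_s

    def should_offload(workload_flops, input_bytes, device_flops_per_s,
                       edge_flops_per_s, uplink_bytes_per_s) -> bool:
        local = estimate_local_latency(workload_flops, device_flops_per_s)
        remote = estimate_offload_latency(input_bytes, uplink_bytes_per_s,
                                          workload_flops, edge_flops_per_s)
        return remote < local

    # Example: a 2 GFLOP inference task with a 0.5 MB input.
    print(should_offload(2e9, 5e5,
                         device_flops_per_s=1e9,     # constrained end device
                         edge_flops_per_s=2e10,      # nearby edge server
                         uplink_bytes_per_s=1.25e6)) # ~10 Mbit/s uplink
    # True: upload (0.4 s) plus edge compute (0.1 s) beats 2 s of local compute.

    Real offloading schemes surveyed in this area also weigh energy, server load, and privacy, but the latency trade-off above is the core of the decision.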
