11 research outputs found

    Langage de mashup pour l'intégration autonomique de services de données dans des environnements dynamiques

    No full text
    The integration of information coming from data services or data sources in virtual communities is user-centred, i.e., associated with visualization issues determined by user needs. A virtual community can be seen as a cyberspace that every user can customize by specifying the information he or she is interested in and the way it should be retrieved and presented, respecting specific security and QoS properties. Such requirements must be defined or inferred and then interpreted to build customized visualizations. There is currently no simple declarative mashup language for retrieving, integrating, and visualizing data produced by data services according to spatio-temporal specifications. The purpose of this thesis is to develop such a language. This work is carried out within the framework of the REDSHINE project (red-shine.imag.fr), supported by a Grenoble INP "Bonus Qualité Recherche" grant.
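    The abstract does not show the thesis's actual language, so the following is only a minimal sketch of what a declarative mashup spec with a spatio-temporal window might look like: all field names, the service name, and the interpreter are illustrative assumptions.

    ```python
    from datetime import datetime

    # Hypothetical declarative mashup spec -- names and structure are made up,
    # not taken from the thesis.
    spec = {
        "source": "air_quality_service",      # data service to query (assumed name)
        "select": ["station", "pm10"],        # fields to keep
        "where": {                            # spatio-temporal window
            "bbox": (45.1, 5.6, 45.3, 5.8),   # lat/lon bounding box
            "after": datetime(2024, 1, 1),
        },
        "render": "map",                      # visualization hint
    }

    def run_mashup(spec, records):
        """Interpret the spec against in-memory records (stand-in for a data service)."""
        lat0, lon0, lat1, lon1 = spec["where"]["bbox"]
        out = []
        for r in records:
            if not (lat0 <= r["lat"] <= lat1 and lon0 <= r["lon"] <= lon1):
                continue                       # outside the spatial window
            if r["time"] <= spec["where"]["after"]:
                continue                       # outside the temporal window
            out.append({k: r[k] for k in spec["select"]})
        return out

    records = [
        {"station": "A", "pm10": 31, "lat": 45.2, "lon": 5.7, "time": datetime(2024, 3, 1)},
        {"station": "B", "pm10": 18, "lat": 44.0, "lon": 5.0, "time": datetime(2024, 3, 1)},
    ]
    print(run_mashup(spec, records))  # station B falls outside the bounding box
    ```

    A real language would add security and QoS clauses on top of such a selection core, as the abstract describes.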

    Una propuesta para asistir a la Co-evolución de Mashup cuando las APIs web evolucionan

    Get PDF
    As web application programming interfaces (APIs) evolve, previously established contracts change and can affect the behavior, operation, and/or execution of consumer applications such as mashups. In these cases, the applications need to be repaired to keep working, a process called co-evolution. Identifying and locating the operations affected by the evolution of web APIs, and estimating the impact these changes generate, are necessary tasks that help the developer update the code. This work presents a proposal to assist the co-evolution of mashups. Specifically, from a graph of mashup operations, we identify and locate the operations affected by changes in the web APIs. We also propose a set of simple metrics for estimating the impact of these changes on the mashup. The operations graph and the metrics assist web developers in co-evolution tasks. The proposal was applied to two mashups currently available on the web; the preliminary results show that it is applicable.
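    The abstract does not give the exact graph model or metric definitions, so the following is a sketch under stated assumptions: operations are nodes, an edge means "output feeds into", affected operations are those reachable from the one hit by an API change, and the impact metric is the fraction of operations affected.

    ```python
    from collections import deque

    # Illustrative mashup operations graph; names are made up.
    graph = {
        "fetch_weather": ["merge"],    # calls an external web API
        "fetch_traffic": ["merge"],    # calls another web API
        "merge": ["render_map"],
        "render_map": [],
    }

    def affected_operations(graph, changed_op):
        """All operations reachable from the one hit by an API change (BFS)."""
        seen, queue = {changed_op}, deque([changed_op])
        while queue:
            for nxt in graph[queue.popleft()]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    def impact_ratio(graph, changed_op):
        """A simple metric: fraction of all operations affected by the change."""
        return len(affected_operations(graph, changed_op)) / len(graph)

    print(affected_operations(graph, "fetch_weather"))  # {'fetch_weather', 'merge', 'render_map'}
    print(impact_ratio(graph, "fetch_weather"))         # 0.75
    ```

    A change in the weather API propagates to everything downstream of `fetch_weather`, which is exactly the set of operations the developer must inspect.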

    Automated Bidding in Computing Service Markets. Strategies, Architectures, Protocols

    Get PDF
    This dissertation contributes to research on computational mechanism design by providing novel theoretical and software models: a bidding strategy called Q-Strategy, which automates bidding processes in imperfect-information markets; a software framework for realizing agents and bidding strategies, called BidGenerator; and a communication protocol called MX/CS for expressing and exchanging economic and technical information in a market-based scheduling system.
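    The abstract names the Q-Strategy but gives none of its formulas, so the following is only a generic tabular Q-learning bidder, not the dissertation's method: it shows how an agent can learn a bid in an imperfect-information market where the clearing rule (here a hidden reserve price) is unknown to it. All prices and parameters are made up.

    ```python
    import random

    random.seed(0)
    BIDS = [1.0, 2.0, 3.0]     # discretized bid prices (assumed)
    VALUE = 2.5                # agent's private valuation of the service
    RESERVE = 1.5              # market's hidden reserve price, unknown to the bidder
    Q = {b: 0.0 for b in BIDS} # single-state Q-table over bid levels
    alpha, epsilon = 0.1, 0.2

    for _ in range(2000):
        # epsilon-greedy: mostly exploit the best-known bid, sometimes explore
        bid = random.choice(BIDS) if random.random() < epsilon else max(Q, key=Q.get)
        reward = (VALUE - bid) if bid >= RESERVE else 0.0  # surplus if the bid clears
        Q[bid] += alpha * (reward - Q[bid])                # stateless Q update

    print(max(Q, key=Q.get))  # 2.0: the cheapest bid that clears the hidden reserve
    ```

    The agent never observes the reserve price directly; it converges on the bid with the best expected surplus purely from reward feedback, which is the essence of automating bidding under imperfect information.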

    Dados abertos @ UA

    Get PDF
    Master's in Computer and Telematics Engineering. Nowadays we live in an information society: any person or entity can reach information sources through countless communication channels. As technology evolved, the Internet became one of the most important means of spreading knowledge, giving access to numerous resources. With this progress, new concepts about information sharing emerged, open data being one of the most important. The open data concept refers to data made publicly available by organizations and institutions. The movement is gaining ground in governmental bodies, which publish their datasets and databases on web portals. Crossing data and implementing mashups to create new knowledge brings clear social and economic benefits. Publishing data on the Internet should follow industry standards and protocols; in this context, web services are an important method of transmitting data and information, largely because of their wide use by many applications and entities. Alongside public information there are sources whose content is private. Here the OAuth protocol stands as a technological solution for sharing private content after due authorization by the content owner. The objective of this dissertation is to apply these concepts at the University of Aveiro. First, the existing web services were surveyed and categorized, along with information sources with the potential to support a new service. After implementing the identified services, a web portal was developed to aggregate the various services of the University of Aveiro, PT Inovação, and SAPO. Finally, an OAuth server was implemented to provide an authorization mechanism for accessing web services whose information is considered sensitive or private.
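    The abstract does not say which OAuth version the server implements, so the following sketches only the HMAC-SHA1 request signing of OAuth 1.0a (RFC 5849), the variant common at the time: the client builds a signature base string from the request and signs it with its secrets. The endpoint URL, keys, and tokens are made-up placeholders.

    ```python
    import base64
    import hashlib
    import hmac
    from urllib.parse import quote

    def sign_request(method, url, params, consumer_secret, token_secret):
        """Compute an OAuth 1.0a HMAC-SHA1 signature for a request."""
        # 1. Percent-encode and sort all parameters into a canonical string.
        norm = "&".join(f"{quote(k, safe='')}={quote(v, safe='')}"
                        for k, v in sorted(params.items()))
        # 2. Signature base string: METHOD&url&params, each component encoded.
        base = "&".join(quote(s, safe="") for s in (method.upper(), url, norm))
        # 3. Signing key: consumer secret and token secret joined by '&'.
        key = f"{quote(consumer_secret, safe='')}&{quote(token_secret, safe='')}"
        digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
        return base64.b64encode(digest).decode()

    sig = sign_request(
        "GET", "https://api.example.org/private/grades",   # hypothetical endpoint
        {"oauth_consumer_key": "app123", "oauth_nonce": "abc",
         "oauth_timestamp": "1300000000", "oauth_token": "tok",
         "oauth_signature_method": "HMAC-SHA1", "oauth_version": "1.0"},
        consumer_secret="cs", token_secret="ts")
    print(sig)  # deterministic Base64 signature for these inputs
    ```

    The server recomputes the same signature from the received request and grants access only if the two match, which is how private web services can be exposed without ever transmitting the secrets themselves.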

    Estudio, análisis y desarrollo de una red de distribución de contenido y su algoritmo de redirección de usuarios para servicios web y streaming

    Full text link
    This thesis was produced within the research line on content-distribution mechanisms in IP networks, which has been active in several research projects, in the course "Mecanismos de Distribución de Contenidos en Redes IP" of the "Telecomunicaciones" doctoral programme taught by the Department of Communications of the UPV, and currently in the Master's in Communication Technologies, Systems and Networks. The growth of the Internet is well known, both in number of clients and in generated traffic, and it brings clients a multimedia interface where data, voice, video, music, and more can converge. While this represents a business opportunity along many dimensions, scalability must be addressed seriously: the average performance of a system should not degrade as the number of clients or the volume of requested information grows. The study and analysis of web and streaming content distribution using CDNs is the object of this project. The approach is generalist, setting aside network-layer solutions such as IP multicast, as well as resource reservation, since they are not natively available in the Internet infrastructure. This leads to the application layer as the coordinating framework for content distribution. Among such networks, also called overlay networks, a Content Delivery Network (CDN) was chosen. These application-level networks are highly scalable and allow full control over the resources and functionality of every element of their architecture.
    This makes it possible to evaluate the performance of a CDN distributing multimedia content in terms of required bandwidth, response time observed by clients, perceived quality, distribution mechanisms, time to live when caching, and so on. CDNs were born at the end of the nineties with the main goal of eliminating or attenuating the flash-crowd effect caused by a massive influx of clients. Today these networks direct most of their efforts to delivering streaming media over the Internet. For a detailed analysis, this thesis proposes an initial simplified CDN model, both theoretical and practical. On the theoretical side, a mathematical model allows a CDN to be evaluated analytically. This model becomes considerably more complex as new functionality is introduced, so a simulation model is designed and developed that both checks the validity of the mathematical framework and establishes a comparison baseline for the practical implementation of the CDN, carried out in the final phase of the thesis. The results obtained thus cover theory, simulation, and practice. Molina Moreno, B. (2013). Estudio, análisis y desarrollo de una red de distribución de contenido y su algoritmo de redirección de usuarios para servicios web y streaming [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31637
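    The thesis centres on a user-redirection algorithm for the CDN, but the abstract does not give its formula. As a hedged illustration of the idea, the sketch below scores each surrogate server by a weighted combination of network distance and current load and redirects the client to the cheapest one; the server names, figures, and weights are all invented.

    ```python
    # Hypothetical surrogate servers with measured round-trip time and load.
    SURROGATES = [
        {"name": "valencia", "rtt_ms": 12, "load": 0.80},
        {"name": "madrid",   "rtt_ms": 25, "load": 0.30},
        {"name": "paris",    "rtt_ms": 60, "load": 0.10},
    ]

    def redirect(surrogates, w_rtt=1.0, w_load=50.0):
        """Pick the surrogate minimizing a weighted cost of latency and load."""
        def cost(s):
            return w_rtt * s["rtt_ms"] + w_load * s["load"]
        return min(surrogates, key=cost)["name"]

    print(redirect(SURROGATES))  # 'madrid': slightly farther than valencia, far less loaded
    ```

    Tuning the weights trades client response time against load balancing across the CDN, which is precisely the kind of behaviour the thesis evaluates analytically, by simulation, and in practice.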

    Geospatial Computing: Architectures and Algorithms for Mapping Applications

    Get PDF
    Beginning with the MapTube website (1), launched in 2007 for crowd-sourcing maps, this project investigates approaches to exploratory Geographic Information Systems (GIS) using web-based mapping, or 'web GIS'. Users can log in to upload their own maps and overlay different layers of GIS data sets. This work looks into the theory behind how web-based mapping systems function and whether their performance can be modelled and predicted. One of the important questions when dealing with different geospatial data sets is how they relate to one another. Internet data stores provide another source of information, which can be exploited if more generic geospatial data-mining techniques are developed. The identification of similarities between thousands of maps is a GIS technique that can give structure to the overall fabric of the data, once the problems of scalability and of comparison between different geographies are solved. After nine years of running MapTube to crowd-source data, a natural progression is from the visualisation of individual maps to wider questions about what additional knowledge can be discovered from the collected data. In the new 'data science' age, real-time data sets pose a fresh challenge for web-based mapping applications. Mapping real-time geospatial systems is technically demanding, but has the potential to show inter-dependencies as they emerge in the time series. Combined geospatial and temporal data mining of real-time sources can provide archives of transport and environmental data from which to accurately model the systems under investigation. Using techniques from machine learning, models can be built directly from the real-time data stream and then used for analysis and experimentation, being derived directly from city data. This leads to an analysis of the behaviours of the interacting systems. (1) The MapTube website: http://www.maptube.org
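    The text raises the question of identifying similarities between maps once they share a common geography. A minimal sketch of one standard approach, not necessarily the project's own method: align two choropleth maps on their shared area codes and compare the aligned values with Pearson correlation. The area codes and values below are made up.

    ```python
    from math import sqrt

    def map_similarity(map_a, map_b):
        """Pearson correlation of two maps over their shared area codes."""
        keys = sorted(set(map_a) & set(map_b))   # common geography only
        xs = [map_a[k] for k in keys]
        ys = [map_b[k] for k in keys]
        n = len(keys)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Two hypothetical thematic maps keyed by area code.
    density = {"E09000001": 28, "E09000002": 41, "E09000003": 39}
    car_use = {"E09000001": 12, "E09000002": 33, "E09000003": 30}
    print(round(map_similarity(density, car_use), 3))  # close to 1: similar spatial pattern
    ```

    Restricting to the intersection of area codes is what makes maps on different geographies comparable at all; scaling this pairwise comparison to thousands of maps is the harder problem the text points to.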

    Copyright in music- an analysis of emerging legal trends.

    Get PDF
    Technological innovations have always influenced the ways in which music is made and consumed. Now that music has entered the digital realm, a new revolution is underway. The near-perfect duplication afforded by digital technology, in conspiracy with the ease of exchange provided by the Internet, threatens to render copyright law a redundant relic, while at the same time changing the way millions across the globe listen to music. A new music culture has been born, driven by technological advances and hindered only by copyright law. The combination of the Internet and digital technology presents copyright law with what has been described as a digital dilemma. The availability of digital music in the form of MP3s has allowed songs and albums to be compressed into manageable file sizes while maintaining very high audio fidelity. Millions of individuals across the world have created MP3s by 'ripping' music albums into digital files of this format and made them available to others in cyberspace. Millions of others have searched for and downloaded these tracks without having to go to a bricks-and-mortar retail establishment and purchase them on CD, and have shared them widely through online transfers and by burning them onto recordable CDs. Digital copy of Ph.D. thesis, University of Kashmir.

    Drones and Geographical Information Technologies in Agroecology and Organic Farming

    Get PDF
    Although organic farming and agroecology are not normally associated with the use of new technologies, with their rapid growth new technologies are being adopted to mitigate the environmental impacts of intensive production based on external material and energy inputs. GPS, satellite imagery, GIS, and drones help conventional farming supply water, pesticides, and fertilizers with precision; prescription maps define the right place and moment for interventions by machinery fleets. Yield remains the key objective, with a more efficient use of resources aiming at economic and environmental sustainability: technological smart farming allows extractive agriculture to enter the sustainability era. Societies that practice agroecology through the development of human-environmental co-evolutionary systems represent a solid model of sustainability. These systems are characterized by high-quality agroecosystems and landscapes, social inclusion, and viable economies. This book explores the challenges posed by the new geographic information technologies in agroecology and organic farming. It discusses the differences between technology-laden conventional farming systems and the role of technologies in strengthening the potential of agroecology. The first part reviews the new tools that geographic information technologies offer to farmers and the public. The second part provides case studies of the most promising applications of these technologies in organic farming and agroecology: the diffusion of hyperspectral imagery, the role of positioning systems, and the integration of drones with satellite imagery. The third part explores the role of agroecology using a multiscale approach from the farm to the landscape level, examining the potential of geodesign to promote alliances between farmers and the public and to strengthen food networks, whether through proximity urban farming or by asserting land rights in remote areas in the spirit of the agroecological transition.
The Open Access version of this book, available at www.taylorfrancis.com, has been made available under a Creative Commons 4.0 license

    Banking theory based distributed resource management and scheduling for hybrid cloud computing

    Get PDF
    Cloud computing is a computing model in which the network offers a dynamically scalable service based on virtualized resources. The resources in a cloud environment are heterogeneous and geographically distributed, and the user does not need to know how the cloud infrastructure is managed. From the viewpoint of cloud computing, all hardware, software, and networks are resources, all dynamically scalable on demand; a complete service can be offered even when the service resources are geographically distributed, and the user pays only for what they use (pay-per-use). Meanwhile, the transaction environment decides how resource usage and cost are managed, because all transactions must follow the rules of the market. Managing and scheduling resources effectively is therefore a central part of cloud computing, and setting up a new framework that offers a reliable, safe, and executable service is a key issue. The approach herein is a new contribution to cloud computing: it proposes a hybrid cloud computing model based on banking theory to manage transactions among all participants in the hybrid cloud environment, and a "Cloud Bank" framework to support the related issues. The contributions are as follows: 1. The thesis presents an optimal deposit-loan ratio theory to adjust pricing between resource provider and resource consumer, realizing both benefit maximization and cloud-service optimization for all participants. 2. It offers a new pricing schema, using a centralized synchronous algorithm and a distributed price-adjustment algorithm, to control all lifecycles and dynamically price all resources. 3. Commercial banks normally apply four factors to mitigate and predict risk: Probability of Default, Loss Given Default, Exposure at Default, and Maturity.
    This thesis applies a Probability of Default model of credit risk to forecast the safe supply of a resource; a logistic regression model is used to control some factors in resource allocation, and multivariate statistical analysis is used to predict risk. 4. The Cloud Bank model applies an improved Pareto-optimality algorithm to build its own scheduling system. 5. To achieve the above, the thesis proposes a new QoS-based SLA-CBSAL to describe all physical resources and the processing of threads. To support the related algorithms and theories, the CloudSim simulation toolkit is used to test some of the Cloud Bank management strategies and algorithms. The experiments show that the Cloud Bank model is a possible new solution for hybrid cloud computing. Future work will focus on building a real hybrid cloud, simulating actual user behaviour in a real environment, and continuing to improve the feasibility and effectiveness of the project. For risk mitigation and prediction, risks can be divided into four categories: credit risk, liquidity risk, operational risk, and other risks. Although this thesis studies credit and liquidity risk, operational and other risks exist in a real trading environment; only by extending the analysis and strategy to all risk types can the Cloud Bank be considered relatively complete.
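    The credit-risk idea the thesis borrows from banking can be sketched in a few lines: a logistic model maps a resource provider's features to a Probability of Default (PD), which combines with Loss Given Default (LGD) and Exposure at Default (EAD) into the standard expected loss EL = PD × LGD × EAD. The coefficients and figures below are illustrative assumptions, not values from the thesis.

    ```python
    from math import exp

    def probability_of_default(utilization, failures, b0=-4.0, b1=3.0, b2=1.5):
        """Logistic-regression PD from two hypothetical provider features."""
        z = b0 + b1 * utilization + b2 * failures  # linear score
        return 1.0 / (1.0 + exp(-z))               # logistic link maps score to [0, 1]

    def expected_loss(pd, lgd, ead):
        """Standard banking formula: EL = PD * LGD * EAD."""
        return pd * lgd * ead

    # A heavily utilized provider with one recent failure (made-up inputs).
    pd = probability_of_default(utilization=0.9, failures=1)
    print(round(pd, 3))                                    # ~0.55
    print(round(expected_loss(pd, lgd=0.6, ead=100.0), 2)) # expected resource-units lost
    ```

    In the Cloud Bank setting, EAD would be the resource exposure committed to a provider; the scheduler can then prefer allocations with lower expected loss, which is how the credit-risk model forecasts a safe supply of resources.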