
    Web Application Programming Interfaces (APIs): general-purpose standards, terms and European Commission initiatives

    From their inception, digital technologies have had a huge impact on our everyday life. In both the private and the public sectors, they have contributed to, or at times driven, change in organisational structures, ways of working, and how products and services are shaped and shared. Governments and public administration units, driven by the digital evolution of information and communications technology (ICT), are evolving from traditional workflow-based public service provision to digital equivalents (e-government), with more innovative forms of government and administration seeking the engagement of citizens and the private sector to co-create final services through user-centric approaches. Application Programming Interfaces (APIs), among the most relevant ICT solutions, have contributed to this notable shift in the adoption of technology, especially when used over the web. They have affected the global economy of the private sector and are contributing to the digital transformation of governments. To explore this in more detail, the European Commission recently started the APIs4DGov study. One of the outputs of the study is an analysis of the API technological landscape, including its related standards and technical specifications for general-purpose use. The goal of the analysis presented in this brief report is to support the definition of stable APIs for digital government services adopted by governments or single public administration units. Such adoption would avoid the need to develop ad hoc solutions that could have limited scalability or potential for reuse. Instead, the work suggests that we should consider a number of existing standards provided by standardisation bodies or, at least, technical specifications written by well-recognised consortia, vendors or users. The aim of this report is also to support API stakeholders in the identification and selection of such solutions. To do this, it first gives a series of definitions to help the reader understand some basic concepts, as well as related standards and technical specifications. Then, it presents the description and classification (by resource representation, security, usability, test, performance and licence) of the standards and technical specifications collected. A shortlist of these documents (based on their utilisation, maintenance and stability) is also proposed, together with a brief description of each of them. Finally, the report provides a useful glossary with definitions of the relevant terms we have collected so far within the APIs4DGov study.
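
    As a hedged illustration of two of the classification axes mentioned above, resource representation and security, the following minimal Python sketch calls a hypothetical government open-data API over HTTPS, authenticates with a bearer token, and consumes a JSON resource representation. The endpoint, token, and field names are assumptions made for illustration only and are not taken from the APIs4DGov study.

# Minimal sketch: consuming a hypothetical government REST API.
# Illustrates JSON resource representation and token-based security;
# the URL, token, and response fields are assumed for illustration.
import requests

API_BASE = "https://api.example-gov.eu/v1"   # hypothetical endpoint
TOKEN = "replace-with-a-real-access-token"   # e.g. issued via OAuth 2.0

def list_services(limit=10):
    """Fetch a page of public-service descriptions as JSON resources."""
    response = requests.get(
        f"{API_BASE}/public-services",
        headers={
            "Authorization": f"Bearer {TOKEN}",   # security: bearer token
            "Accept": "application/json",         # resource representation
        },
        params={"limit": limit},
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("items", [])

if __name__ == "__main__":
    for service in list_services(limit=5):
        print(service.get("id"), "-", service.get("title"))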

    Enriching unstructured media content about events to enable semi-automated summaries, compilations, and improved search by leveraging social networks

    (i) Mobile devices and social networks are omnipresent. Mobile devices such as smartphones, tablets, or digital cameras, together with social networks, enable people to create, share, and consume enormous amounts of media items like videos or photos, both on the road and at home. Such mobile devices, almost by definition, accompany their owners wherever they go. In consequence, mobile devices are omnipresent at all sorts of events to capture noteworthy moments. Typical events include keynote speeches at conferences, music concerts in stadiums, or even natural catastrophes like earthquakes that affect whole areas or countries. At such events, part of the event-related media items are published on social networks, either as the event happens (given a stable network connection) or afterwards, once a stable network connection has been established again.

    (ii) Finding representative media items for an event is hard. Common media item search operations, for example searching for the official video clip of a certain hit record on an online video platform, can in the simplest case be achieved based on potentially shallow human-generated metadata or on more profound content analysis techniques like optical character recognition, automatic speech recognition, or acoustic fingerprinting. More advanced scenarios, however, like retrieving all (or just the most representative) media items that were created at a given event with the objective of creating event summaries or media item compilations covering the event in question, are hard, if not impossible, to fulfill at large scale. The main research question of this thesis can be formulated as follows.

    (iii) Research question: "Can user-customizable media galleries that summarize given events be created solely based on textual and multimedia data from social networks?"

    (iv) Contributions. In the context of this thesis, we have developed and evaluated a novel interactive application and related methods for media item enrichment that leverage social networks, the Web of Data, techniques known from Content-based Image Retrieval (CBIR) and Content-based Video Retrieval (CBVR), and fine-grained media item addressing schemes like Media Fragments URIs to provide a scalable and near-realtime solution realizing the above-mentioned scenario of event summarization and media item compilation.

    (v) Methodology. For any event with given event title(s), (potentially vague) event location(s), and (arbitrarily fine-grained) event date(s), our approach can be divided into the following six steps; a schematic code sketch follows the list.
    1) Via the textual search APIs (Application Programming Interfaces) of different social networks, we retrieve a list of potentially event-relevant microposts that either contain media items directly or provide links to media items on external media item hosting platforms.
    2) Using third-party Natural Language Processing (NLP) tools, we recognize and disambiguate named entities in microposts to predetermine their relevance.
    3) We extract the binary media item data from social networks or media item hosting platforms and relate it to the originating microposts.
    4) Using CBIR and CBVR techniques, we first deduplicate exact-duplicate and near-duplicate media items and then cluster similar media items.
    5) We rank the deduplicated and clustered list of media items and their related microposts according to well-defined ranking criteria.
    6) In order to generate interactive and user-customizable media galleries that visually and aurally summarize the event in question, we compile the top-n ranked media items and microposts in aesthetically pleasing and functional ways.
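
    The following Python skeleton is a heavily simplified, hypothetical rendering of that six-step pipeline, not the thesis implementation: all function bodies are stubs over toy data, and names such as search_microposts or rank_items are invented for illustration.

# Hypothetical, heavily simplified sketch of the six-step pipeline above.
# All data sources are faked; real social network APIs, NLP services,
# and CBIR/CBVR components would replace the stub functions.
from collections import defaultdict

def search_microposts(event_title):
    """Step 1 (stub): pretend to query social network search APIs."""
    return [
        {"text": f"Great keynote at {event_title}!", "media_url": "http://example.org/a.jpg"},
        {"text": f"Crowd shot, {event_title}", "media_url": "http://example.org/a_copy.jpg"},
        {"text": "Unrelated lunch photo", "media_url": "http://example.org/b.jpg"},
    ]

def is_event_relevant(post, event_title):
    """Step 2 (stub): stand-in for named entity recognition and disambiguation."""
    return event_title.lower() in post["text"].lower()

def fingerprint(media_url):
    """Steps 3-4 (stub): stand-in for downloading the media item and computing
    a perceptual fingerprint used for near-duplicate detection."""
    return media_url.split("/")[-1].split(".")[0].split("_")[0]  # toy 'visual hash'

def rank_items(clusters):
    """Step 5 (stub): rank clusters, here simply by cluster size."""
    return sorted(clusters.values(), key=len, reverse=True)

def compile_gallery(ranked, top_n=2):
    """Step 6 (stub): keep one representative item per top-ranked cluster."""
    return [cluster[0] for cluster in ranked[:top_n]]

def summarize_event(event_title):
    posts = [p for p in search_microposts(event_title)        # step 1
             if is_event_relevant(p, event_title)]            # step 2
    clusters = defaultdict(list)
    for post in posts:                                        # steps 3-4
        clusters[fingerprint(post["media_url"])].append(post)
    ranked = rank_items(clusters)                             # step 5
    return compile_gallery(ranked)                            # step 6

if __name__ == "__main__":
    for item in summarize_event("ACME Conference 2024"):
        print(item["media_url"], "-", item["text"])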

    iSemServ: a framework for engineering intelligent semantic services

    The need for modern enterprises and Web users to simply and rapidly develop and deliver platform-independent services to be accessed over the Web by the global community is growing. This is self-evident when one considers the omnipresence of electronic services (e-services) on the Web. Accordingly, the Service-Oriented Architecture (SOA) is commonly considered one of the de facto standards for the provisioning of heterogeneous business functionalities on the Web. As the basis for SOA, Web Services (WS) are commonly preferred, particularly because of their ability to facilitate the integration of heterogeneous systems. However, WS focus only on syntactic descriptions of the functional and behavioural aspects of services. This makes it a challenge for services to be automatically discovered, selected, composed, invoked, and executed without any human intervention. Consequently, Semantic Web Services (SWS) are emerging to deal with such a challenge. SWS represent the convergence of Semantic Web (SW) and WS concepts, in order to enable Web services that can be automatically processed and understood by machines operating with limited or no user intervention. At present, research efforts within the SWS domain are mainly concentrated on semantic service automation aspects, such as discovery, matching, selection, composition, invocation, and execution. Moreover, extensive research has been conducted on the conceptual models and formal languages used in constructing semantic services. However, in terms of the engineering of semantic services, a number of challenges are still prevalent, as demonstrated by the lack of development and use of semantic services in real-world settings. This lack could be attributed to a number of challenges, such as complex semantic service enabling technologies, which lead to a steep learning curve for service developers; the lack of unified service platforms for guiding and supporting simple and rapid engineering of semantic services; and the limited integration of semantic technologies with mature service-oriented technologies. In addition, a combination of isolated software tools is normally used to engineer semantic services. This could, however, lead to undesirable consequences, such as prolonged service development times, high service development costs, lack of service re-use, and lack of semantics interoperability, reliability, and re-usability. Furthermore, available software platforms do not support the creation of semantic services that are intelligent beyond the application of semantic descriptions, as envisaged for the next generation of services, where the connection of knowledge is of core importance. In addressing some of the challenges highlighted, this research study adopted a qualitative research approach with the main focus on conceptual modelling. The main contribution of this study is thus a framework called iSemServ to simplify and accelerate the process of engineering intelligent semantic services. The framework has been modelled and developed based on the principles of simplicity, rapidity, and intelligence.
The key contributions of the proposed framework are: (1) an end-to-end and unified approach to engineering intelligent semantic services, enabling service engineers to use one platform to realize all the modules comprising such services; (2) a model-driven approach that enables both average and expert service engineers to focus on developing intelligent semantic services in a structured, extensible, and platform-independent manner, thereby increasing developers' productivity and minimizing development and maintenance costs; (3) complexity hiding through template- and rule-based automatic code generators, supporting different service architectural styles and semantic models; and (4) intelligence wrapping of services at message and knowledge levels, for the purposes of automatically processing semantic service requests and responses and reasoning over domain ontologies and semantic descriptions while keeping user intervention to a minimum. The framework was designed by following a model-driven approach and implemented using the Eclipse platform. It was evaluated using practical use case scenarios, comparative analysis, and performance and scalability experiments. In conclusion, the iSemServ framework is considered appropriate for dealing with the complexities and restrictions involved in engineering intelligent semantic services, especially because the amount of time required to generate intelligent semantic services using the proposed framework is smaller than the time a service engineer would need to manually create all the different artefacts comprising an intelligent semantic service. Keywords: Intelligent semantic services, Web services, Ontologies, Intelligent agents, Service engineering, Model-driven techniques, iSemServ framework.
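
    As a hedged illustration of contribution (3), template-based code generation, the following Python sketch fills a service-stub template from a simple, hypothetical service model; the template, the model fields, and the generated class are invented for illustration and are not taken from the iSemServ implementation.

# Illustrative sketch of template-based code generation (contribution 3).
# The service model and the template are hypothetical, not iSemServ's own.
from string import Template

SERVICE_TEMPLATE = Template('''\
class ${class_name}Service:
    """Auto-generated stub for the '${service_name}' semantic service."""

    ONTOLOGY_IRI = "${ontology_iri}"

    def ${operation}(self, request):
        # TODO: add domain logic; semantic annotations point to ONTOLOGY_IRI
        return {"status": "not implemented", "operation": "${operation}"}
''')

def generate_service(model: dict) -> str:
    """Turn a minimal service model (a dict) into Python source code."""
    return SERVICE_TEMPLATE.substitute(model)

if __name__ == "__main__":
    example_model = {
        "class_name": "WeatherLookup",
        "service_name": "weather-lookup",
        "ontology_iri": "http://example.org/ontology/weather#",
        "operation": "get_forecast",
    }
    print(generate_service(example_model))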

    Service-Oriented Architecture for Patient-Centric eHealth Solutions

    The world had a shortage of about 7.2 million healthcare workers in 2013, and the figure is estimated to grow to 12.9 million by 2035, according to the World Health Organization (WHO). At the same time, the median age of the world's population was predicted to increase from 26.6 years in 2000 to 37.3 years in 2050, and then to 45.6 years in 2100, further escalating the need for new and efficient healthcare solutions. Telehealth, telecare, and Ambient Assisted Living (AAL) solutions promise to make healthcare services more sustainable and to enable patients to live more independently, and with a higher quality of life, at home. Smart homes will host intelligent, connected devices that integrate with the Internet of Things (IoT) to form the basis of new and advanced healthcare systems. However, a number of challenges need to be addressed before this vision can be actualised, including flexible integration, rapid service development and deployment, mobility, unified abstraction, scalability and high availability, and security and privacy. This thesis presents an integration architecture based on Service-Oriented Architecture (SOA) that enables novel healthcare services to be developed rapidly by utilising the capabilities of various devices in the patients' surroundings. Special attention is given to a service broker component, the Information Integration Platform (IIP), which has been developed to bridge communications between everyday objects and Internet-based services following Enterprise Service Bus (ESB) principles. It exposes its functionalities through a set of RESTful Web services and maintains a unified information model that various applications can access in a uniform way. The IIP breaks the traditional vertical "silo" approach to integration and handles the information dissemination task between information providers and consumers by adopting a publish/subscribe messaging pattern. The feasibility of the IIP solution is evaluated both through prototyping and through testing representative healthcare services on the platform, e.g., remote health monitoring and emergency alarms. Experiments conducted on the IIP reveal how performance is affected by the needs for security, privacy, high availability, and scalability.
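
    To make the publish/subscribe pattern mentioned above concrete, here is a minimal, in-memory Python sketch of a broker dispatching sensor readings to subscribers; it illustrates the general pattern under assumed topic and payload names and is not the IIP's actual API.

# Minimal in-memory publish/subscribe sketch (illustrative only; the IIP
# itself is an ESB-style broker exposed via RESTful Web services).
from collections import defaultdict
from typing import Callable, Dict, List

class Broker:
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[dict], None]) -> None:
        """Register a consumer callback for a topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: dict) -> None:
        """Deliver a message from an information provider to all consumers."""
        for callback in self._subscribers[topic]:
            callback(message)

if __name__ == "__main__":
    broker = Broker()
    # Hypothetical topic and payload for a remote health monitoring service.
    broker.subscribe("patient/42/heart-rate",
                     lambda msg: print("alarm check:", msg))
    broker.publish("patient/42/heart-rate", {"bpm": 118, "unit": "beats/min"})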

    Domain-specific summarisation of Life-Science e-experiments from provenance traces

    Translational research in Life-Science nowadays leverages e-Science platforms to analyze and produce huge amounts of data. With the unprecedented growth of Life-Science data repositories, identifying relevant data for analysis becomes increasingly difficult. The instrumentation of e-Science platforms with provenance tracking techniques provides useful information from a data analysis process design or debugging perspective. However, raw provenance traces are too massive and too generic to facilitate the scientific interpretation of data. In this paper, we propose an integrated approach in which Life-Science knowledge is (i) captured through domain ontologies and linked to Life-Science data analysis tools, and (ii) propagated through rules to the produced data, in order to constitute human-tractable experiment summaries. Our approach has been implemented in the Virtual Imaging Platform (VIP), and experimental results show the feasibility of producing a few domain-specific statements, which opens up new data sharing and repurposing opportunities in line with Linked Data initiatives.
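
    The rule-based propagation of domain annotations over a provenance trace can be sketched in Python with rdflib as follows; the ontology terms, the single propagation rule, and the toy trace are assumptions made for illustration and do not reflect the actual VIP vocabulary.

# Illustrative sketch: propagating a domain annotation along a provenance
# edge with one hand-written rule. Vocabulary and data are hypothetical.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/vip-demo#")

g = Graph()
# Toy provenance trace: an output image was generated by a simulation run.
g.add((EX.outputImage1, EX.wasGeneratedBy, EX.simulatorRun7))
# Domain knowledge attached to the tool: this run produces MRI simulations.
g.add((EX.simulatorRun7, EX.producesModality, EX.MRI))

# Rule (illustrative): if ?data wasGeneratedBy ?run and ?run producesModality ?m,
# then annotate ?data with that modality.
inferred = []
for data, _, run in g.triples((None, EX.wasGeneratedBy, None)):
    for _, _, modality in g.triples((run, EX.producesModality, None)):
        inferred.append((data, EX.hasModality, modality))
for triple in inferred:
    g.add(triple)

# The derived statements form a compact, domain-specific experiment summary.
for s, p, o in g.triples((None, EX.hasModality, None)):
    print(s, "hasModality", o)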

    Linked Data Entity Summarization

    On the Web, the amount of structured and Linked Data about entities is constantly growing. Descriptions of single entities often include thousands of statements, and it becomes difficult to comprehend the data unless a selection of the most relevant facts is provided. This doctoral thesis addresses the problem of Linked Data entity summarization. The contributions comprise two entity summarization approaches, a common API for entity summarization, and an approach for entity data fusion.
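
    As a hedged, minimal illustration of the summarization task (a simple baseline, not one of the thesis's approaches), the Python sketch below ranks an entity's statements by how rare each predicate is across a toy dataset and keeps the top-k; the dataset and weighting are invented.

# Toy entity summarization baseline: prefer statements whose predicates are
# globally rare (more distinctive). Purely illustrative; not the thesis method.
from collections import Counter

# A tiny, hypothetical set of (subject, predicate, object) statements.
statements = [
    ("dbr:Karlsruhe", "rdf:type", "dbo:City"),
    ("dbr:Karlsruhe", "dbo:country", "dbr:Germany"),
    ("dbr:Karlsruhe", "dbo:foundingDate", "1715"),
    ("dbr:Berlin", "rdf:type", "dbo:City"),
    ("dbr:Berlin", "dbo:country", "dbr:Germany"),
]

def summarize(entity, k=2):
    """Return the k statements about `entity` with the rarest predicates."""
    predicate_freq = Counter(p for _, p, _ in statements)
    facts = [(s, p, o) for s, p, o in statements if s == entity]
    return sorted(facts, key=lambda t: predicate_freq[t[1]])[:k]

if __name__ == "__main__":
    for fact in summarize("dbr:Karlsruhe"):
        print(fact)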

    IntegraDos: facilitating the adoption of the Internet of Things through the integration of technologies

    The Internet of Things (IoT) was a new concept introduced by K. Ashton in 1999 to refer to an identifiable set of objects connected through RFID. Today, the IoT is a ubiquitous technology present in a large number of areas, such as the monitoring of critical infrastructures, traceability systems, and assisted healthcare systems. The IoT is increasingly present in our daily lives, covering a wide range of possibilities aimed at optimising the processes and addressing the problems that society faces. This is why the IoT is a promising technology that is continuously evolving thanks to ongoing research and the large number of devices, systems and components that emerge every day. However, the devices involved in the IoT are usually embedded devices with storage and processing limitations, as well as memory and power constraints. Moreover, the number of objects or devices connected to the Internet is forecast to grow strongly in the coming years, with an expectation of 500 billion connected objects by 2030. Therefore, to accommodate global IoT deployments, and to overcome the existing limitations, it is necessary to involve new systems and paradigms that facilitate the adoption of this field. The main objective of this doctoral thesis, known as IntegraDos, is to facilitate the adoption of the IoT through its integration with a series of technologies. On the one hand, it addresses how the management of sensors and actuators on physical devices can be facilitated without having to access and program the development boards. On the other hand, a system for programming portable, adaptable, personalised IoT applications decoupled from the devices has been defined. In addition, the components for an integration of the IoT and cloud computing have been analysed, resulting in the Lambda-CoAP architecture. Finally, the challenges of integrating the IoT and Blockchain have been analysed, together with an evaluation of the ability of IoT devices to incorporate Blockchain nodes. The contributions of this doctoral thesis help bring the adoption of the IoT closer to society and, therefore, contribute to the expansion of this prominent technology. Thesis defence date: 17 December 2018.
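
    A minimal, hypothetical Python sketch of the kind of device-decoupled abstraction described above (managing sensors and actuators without programming each board directly) might look as follows; the class names and the fake drivers are assumptions made for illustration and are not part of IntegraDos.

# Hypothetical sketch: an application written against abstract Sensor/Actuator
# interfaces stays portable across boards; only the drivers change per device.
from abc import ABC, abstractmethod

class Sensor(ABC):
    @abstractmethod
    def read(self) -> float: ...

class Actuator(ABC):
    @abstractmethod
    def write(self, value: bool) -> None: ...

class FakeTemperatureSensor(Sensor):
    """Stand-in for a board-specific driver (e.g. an I2C temperature chip)."""
    def read(self) -> float:
        return 28.5  # fixed reading for the demo

class FakeRelay(Actuator):
    """Stand-in for a board-specific relay/GPIO driver."""
    def write(self, value: bool) -> None:
        print("relay", "ON" if value else "OFF")

def cooling_app(sensor: Sensor, fan: Actuator, threshold: float = 27.0) -> None:
    """Portable application logic: it never touches board-specific code."""
    fan.write(sensor.read() > threshold)

if __name__ == "__main__":
    cooling_app(FakeTemperatureSensor(), FakeRelay())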

    Model-based Specification of RESTful SOA on the Basis of Flexible SOM Business Process Models

    Strong dynamics and a continuously increasing complexity characterize a company's environment today. In such an environment, the rapid adaptation of the production and delivery of goods and services is a necessary consequence to ensure the competitiveness, and thereby the survival, of a company. A key success factor for the evolutionary adaptation of a business system is the flexibility of its business processes. In the past, however, flexible business processes generally led to a reduced level of automation in the supporting application systems, and consequently to inconsistencies in the business information system. The provision of appropriate solutions for the quick development of application systems and their alignment to changing business requirements is a central task of the system development discipline. Current concepts, tools and IT architectures do not give a methodically adequate answer to the question of a holistic and systematic design and maintenance of application systems and their consistent alignment with flexible business processes. As an answer to this question, this work designs the SOM-R methodology, a model-based development method based on the Semantic Object Model (SOM) for the holistic development and maintenance of RESTful SOA on the basis of flexible SOM business process models. By applying the architectural style REST to service-oriented architectures (SOA), the RESTful SOA is designed as the target software architecture for flexibly adaptable application systems. The first main contribution of this research is a methodically consistent way of bridging the gap between the business process layer and the software-technical layers of the RESTful SOA. Defining a common conceptual system and architectural framework enables the model-based mapping of concepts of SOM business process models to the specification of resources and other modules of the application system. Modeling the structure and behavior of business processes with SOM is an important prerequisite for this. The second main contribution of this work is a model-based approach to supporting the maintenance of business information systems. To this end, the SOM-R methodology is extended with a procedure model and with approaches for analyzing the effects of structural changes and for deriving assistance information that supports application system maintenance. The tool-supported provision of this information guides the system developer in adapting a RESTful SOA, or rather the corresponding model systems, to changes in flexible SOM business process models. A case study demonstrates and explains the practical application of the SOM-R methodology.
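
    To give a rough, hypothetical flavour of what a resource derived from a business process model could look like as a RESTful service, the following Python/Flask sketch exposes an invented "order" business object as a REST resource; it is not generated by, and is not part of, the SOM-R tooling.

# Hypothetical sketch: an 'order' business object exposed as a REST resource.
# In the SOM-R setting such resources would be derived from SOM business
# process models; here the data and routes are simply made up.
from flask import Flask, jsonify

app = Flask(__name__)

ORDERS = {1: {"id": 1, "customer": "ACME GmbH", "state": "received"}}

@app.route("/orders/<int:order_id>", methods=["GET"])
def get_order(order_id):
    """Resource representation of a single order."""
    order = ORDERS.get(order_id)
    return (jsonify(order), 200) if order else (jsonify(error="not found"), 404)

@app.route("/orders/<int:order_id>/confirmation", methods=["POST"])
def confirm_order(order_id):
    """State transition modelled as a sub-resource (REST-style, no RPC verb)."""
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify(error="not found"), 404
    order["state"] = "confirmed"
    return jsonify(order), 200

if __name__ == "__main__":
    app.run(port=5000)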

    Micro-intelligence for the IoT: logic-based models and technologies

    Computing is moving towards pervasive, ubiquitous environments in which devices, software agents and services are all expected to seamlessly integrate and cooperate in support of human objectives. An important next step for pervasive computing is the integration of intelligent agents that employ knowledge and reasoning to understand the local context and share this information in support of intelligent applications and interfaces. Such scenarios, characterised by "computation everywhere around us", require on the one hand software components with intelligent behaviour in terms of objectives and context, and on the other their integration so as to produce social intelligence. Logic Programming (LP) has been recognised as a natural paradigm for addressing the needs of distributed intelligence. Yet, the development of novel architectures, in particular in the context of the Internet of Things (IoT), and the emergence of new domains and potential applications are creating new research opportunities where LP could be exploited, when suitably coupled with agent technologies and methods, so that it can fully develop its potential in the new context. In particular, LP and its extensions can act as micro-intelligence sources for the IoT world, both at the individual and the social level, provided that they are reconsidered within a renewed architectural vision. Such micro-intelligence sources could deal with the local knowledge of the devices, taking into account the domain specificity of each environment. The goal of this thesis is to re-contextualise LP and its extensions in these new domains as a source of micro-intelligence for the IoT world, envisioning a large number of small computational units distributed and situated in the environment, thus promoting the local exploitation of symbolic languages with inference capabilities. The topic is explored in depth, and the effectiveness of novel LP models and architectures, and of the corresponding technology, in expressing the concept of micro-intelligence is tested.
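
    The flavour of logic-based micro-intelligence on a single device can be hinted at with the following sketch; the thesis works with LP languages and architectures, whereas this is merely a tiny forward-chaining loop written in Python over invented device-local facts and rules.

# Tiny forward-chaining sketch: device-local facts plus a few symbolic rules
# yield new knowledge on the device itself. Facts and rules are invented.

facts = {("temperature", "high"), ("window", "closed")}

# Each rule: if all premises hold, add the conclusion.
rules = [
    ({("temperature", "high"), ("window", "closed")}, ("room", "stuffy")),
    ({("room", "stuffy")}, ("action", "open_window")),
]

def forward_chain(facts, rules):
    """Apply rules until no new fact can be derived (a fixpoint)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

if __name__ == "__main__":
    for fact in sorted(forward_chain(facts, rules)):
        print(fact)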

    Modeling and Selection of Software Service Variants

    Providers and consumers have to deal with variants, meaning alternative instances of a service's design, implementation, deployment, or operation, when developing or delivering software services. This work presents service feature modeling to deal with the associated challenges, comprising a language to represent software service variants and a set of methods for modeling and subsequent variant selection. The work's evaluation includes a proof-of-concept (POC) implementation and two real-life use cases.
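
    As a purely illustrative sketch of variant selection over modelled variants (invented variants, attributes, and weights; not the service feature modeling method itself), a simple weighted scoring in Python might look like this.

# Toy variant selection: score service variants by weighted attributes and
# pick the best. Variants, attributes, and weights are all invented.

variants = {
    "basic":    {"price": 10, "availability": 0.95, "encryption": 0},
    "standard": {"price": 25, "availability": 0.99, "encryption": 1},
    "premium":  {"price": 60, "availability": 0.999, "encryption": 1},
}

# Consumer preferences: positive weight = more is better, negative = less is better.
weights = {"price": -1.0, "availability": 200.0, "encryption": 15.0}

def score(attributes):
    return sum(weights[name] * value for name, value in attributes.items())

def select_variant(variants):
    """Return the name of the highest-scoring variant."""
    return max(variants, key=lambda name: score(variants[name]))

if __name__ == "__main__":
    for name, attrs in variants.items():
        print(f"{name}: score = {score(attrs):.1f}")
    print("selected:", select_variant(variants))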