25 research outputs found

    Adaptive and Reactive Rich Internet Applications

    In this thesis we present the client-side approach of Adaptive and Reactive Rich Internet Applications as the main result of our research into how to bring in-time adaptivity to Rich Internet Applications. Our approach leverages previous work on adaptive hypermedia, event processing and other research disciplines. We present a holistic framework covering the design-time as well as the run-time aspects of Adaptive and Reactive Rich Internet Applications, focusing especially on the run-time aspects.
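    The run-time adaptivity described above is typically realized with event-condition-action rules from the event processing literature. As a purely illustrative sketch (not the framework presented in the thesis; all names are hypothetical), the following Python fragment shows a rule engine that reacts to client-side events and adapts application state:

        # Minimal event-condition-action (ECA) sketch: rules react to runtime
        # events and adapt application state. Names are illustrative only.
        from dataclasses import dataclass, field
        from typing import Callable, List

        @dataclass
        class Rule:
            event_type: str                      # event this rule listens for
            condition: Callable[[dict], bool]    # guard over the event payload
            action: Callable[[dict], None]       # adaptation to perform

        @dataclass
        class AdaptationEngine:
            rules: List[Rule] = field(default_factory=list)

            def publish(self, event_type: str, payload: dict) -> None:
                # Evaluate every matching rule; fire its action if the guard holds.
                for rule in self.rules:
                    if rule.event_type == event_type and rule.condition(payload):
                        rule.action(payload)

        # Example: simplify the UI when the user appears to struggle.
        ui = {"detail_level": "full"}
        engine = AdaptationEngine()
        engine.rules.append(Rule(
            event_type="task_failed",
            condition=lambda e: e.get("attempts", 0) >= 3,
            action=lambda e: ui.update(detail_level="simplified"),
        ))
        engine.publish("task_failed", {"attempts": 3})
        print(ui)  # {'detail_level': 'simplified'}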

    Monitoraggio strutturale e ambientale con il Web delle cose (Structural and Environmental Monitoring with the Web of Things)

    Structural health and environmental monitoring have recently benefited from advances in the digital industry. Thanks to the emergence of the Internet of Things (IoT) paradigm, monitoring systems are gaining functionality while reducing development costs. However, they are affected by strong fragmentation in the solutions proposed and the technologies employed. This stalls the overall benefits of adopting IoT frameworks or IoT devices, since it limits the reusability and portability of the chosen platform. As in other IoT contexts, the structural health and environmental monitoring domain suffers from the negative effects of what is known as the interoperability problem. Recently, the World Wide Web Consortium (W3C) has joined the race to define a standard for the IoT that unifies different solutions under a single paradigm. This new shift in the industry is called the Web of Things, or WoT for short. Together with other W3C Semantic Web technologies, the Web of Things unifies different protocols and data models thanks to a descriptive, machine-understandable document called the Thing Description. This work explores how this new paradigm can improve the quality and portability of structural health and environmental monitoring applications. The goal is to provide a monitoring infrastructure based solely on WoT and Semantic Web technologies. The architecture is then tested and applied in two concrete use cases taken from the industrial structural monitoring and smart farming domains. Finally, this thesis proposes a layered structure for organizing the knowledge design of the two applications and evaluates the results obtained.
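    For readers unfamiliar with the W3C Thing Description mentioned above: it is a machine-understandable (JSON-LD) document listing a device's affordances and how to reach them, so that consumers need no device-specific code. Below is a minimal illustrative sketch in Python, assuming a hypothetical vibration sensor exposed over HTTP (the URL, property name and unit are invented for illustration):

        # Minimal illustrative W3C WoT Thing Description for a hypothetical sensor,
        # held as a Python dict; the endpoint URL is an assumption.
        import json
        import urllib.request

        thing_description = {
            "@context": "https://www.w3.org/2019/wot/td/v1",
            "title": "BridgeVibrationSensor",
            "securityDefinitions": {"nosec_sc": {"scheme": "nosec"}},
            "security": ["nosec_sc"],
            "properties": {
                "acceleration": {
                    "type": "number",
                    "unit": "m/s2",
                    "readOnly": True,
                    "forms": [{
                        "href": "http://sensor.example.com/properties/acceleration",
                        "contentType": "application/json",
                    }],
                }
            },
        }

        def read_property(td: dict, name: str) -> float:
            """Follow the first HTTP form of a property affordance and read its value."""
            href = td["properties"][name]["forms"][0]["href"]
            with urllib.request.urlopen(href) as resp:
                return float(json.load(resp))

        # A consumer only needs the Thing Description, not device-specific code:
        # value = read_property(thing_description, "acceleration")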

    Semantic Web and the Web of Things: concept, platform and applications

    The ubiquitous presence of devices with computational resources and connectivity is fostering the diffusion of the Internet of Things (IoT), where smart objects interoperate and react to the available information, providing services to the users. The pervasiveness of the IoT across many different areas proves the worldwide interest of researchers from academia and industry. This research has led to new technologies and protocols addressing the needs of emerging scenarios, making it difficult to develop interoperable applications. The Web of Things was born to address this problem through the standard protocols responsible for the success of the Web, but an even greater contribution can be provided by the standards of the Semantic Web. Semantic Web protocols grant unambiguous identification of resources and represent data in a way that makes information machine-understandable and computable, and that lets information from different sources be easily aggregated. Semantic Web technologies are therefore interoperability enablers for the IoT. This thesis investigates how to employ Semantic Web protocols in the IoT to realize the Semantic Web of Things (SWoT) vision of an interoperable network of applications. Part I introduces the IoT. Part II investigates the algorithms that efficiently support the publish/subscribe paradigm in semantic brokers for the SWoT and their implementation in Smart-M3 and SEPA; the preliminary work toward the first benchmark for SWoT applications is also presented. Part IV describes the research activity aimed at applying the developed semantic infrastructures in real-life scenarios (electro-mobility, home automation, semantic audio and the Internet of Musical Things). Part V presents the conclusions. A lack of effective ways to explore and debug Semantic Web datasets emerged during these activities; Part III therefore describes a second line of research aimed at devising a novel way to visualize semantic datasets, based on graphs and the new concept of Semantic Planes.
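    The publish/subscribe support mentioned for Part II revolves around deciding which standing subscriptions are affected by an incoming update. The following Python sketch illustrates that idea with plain triple patterns; it is not the Smart-M3 or SEPA API, and all identifiers are invented:

        # Content-based matching of triple-pattern subscriptions, illustrating the
        # publish/subscribe idea behind semantic brokers; not the SEPA/Smart-M3 API.
        from typing import Callable, List, Optional, Tuple

        Triple = Tuple[str, str, str]
        Pattern = Tuple[Optional[str], Optional[str], Optional[str]]  # None = wildcard

        class SemanticBroker:
            def __init__(self) -> None:
                self.store: List[Triple] = []
                self.subscriptions: List[Tuple[Pattern, Callable[[Triple], None]]] = []

            def subscribe(self, pattern: Pattern, callback: Callable[[Triple], None]) -> None:
                self.subscriptions.append((pattern, callback))

            def publish(self, triple: Triple) -> None:
                # Store the triple, then notify only the subscriptions it matches.
                self.store.append(triple)
                for pattern, callback in self.subscriptions:
                    if all(p is None or p == t for p, t in zip(pattern, triple)):
                        callback(triple)

        broker = SemanticBroker()
        broker.subscribe(("lamp1", "hasState", None),
                         lambda t: print("lamp1 changed state:", t[2]))
        broker.publish(("lamp1", "hasState", "on"))      # notifies the subscriber
        broker.publish(("sensor7", "hasValue", "21.5"))  # no matching subscription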

    Transformation Tool Contest 2010, 1-2 July 2010, Malaga, Spain


    Knowledge-base and techniques for effective service-oriented programming & management of hybrid processes

    Recent advances in Web 2.0, SOA, crowd-sourcing, social and collaboration technologies, as well as cloud computing, have truly transformed the Internet into a global development and deployment platform. As a result, developers have been presented with ubiquitous access to countless Web services, resources and tools. However, while enabling tremendous automation and reuse opportunities, new productivity challenges have also emerged: the exploitation of services and resources still requires skilled programmers and a development-centric approach; it is thus inevitably susceptible to the same repetitive, error-prone and time-consuming integration work each time a developer integrates a new API. Business Process Management, on the other hand, was proposed to support service-based integration. It provided the benefits of automation and modelling, which appealed to non-technical domain experts. The problem, however, is that it proves too rigid for unstructured processes. Thus, without this level of support, building new applications either requires extensive manual programming or resorting to homebrew solutions. Alternatively, with the proliferation of SaaS, various such tools could be used for independent portions of the overall process, although this either presupposes conforming to the in-built process or results in "shadow processes" via the use of e-mail or the like in order to exchange information and share decisions. There has therefore been an inevitable gap in technological support between structured and unstructured processes. To address these challenges, this thesis deals with transitioning process support from structured to unstructured. We have been motivated to harness the foundational capabilities of BPM and apply them to unstructured processes. We propose to achieve this by, first, addressing the productivity challenges of Web-service integration, simplifying this process while encouraging an incremental curation and collective reuse approach. We then extend this to propose an innovative Hybrid-Process Management Platform that holistically combines structured, semi-structured and unstructured activities, based on a unified task model that encapsulates a spectrum of process specificity. We have thus aimed to bridge the current technology gap. The approach presented has been exposed as service-based libraries and tools, and we have devised several use-case scenarios and conducted user studies in order to evaluate the overall effectiveness of our proposed work.
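    As a rough illustration of what a unified task model spanning structured and unstructured work can look like (hypothetical names, not the platform described above), the sketch below lets the same task type carry an explicit successor for structured flows while ad-hoc tasks simply omit it:

        # Illustrative unified task model: the same Task type covers structured steps
        # (with an explicit successor) and ad-hoc, unstructured work items.
        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class Task:
            name: str
            assignee: Optional[str] = None
            done: bool = False
            successor: Optional["Task"] = None      # set for structured flow, None for ad-hoc
            artifacts: List[str] = field(default_factory=list)

            def complete(self) -> Optional["Task"]:
                """Mark the task done and return the next structured step, if any."""
                self.done = True
                return self.successor

        review = Task("Review report", assignee="alice")
        draft = Task("Draft report", assignee="bob", successor=review)
        discuss = Task("Discuss scope with client")    # unstructured: no prescribed successor

        next_task = draft.complete()
        print(next_task.name if next_task else "no prescribed next step")
        print(discuss.complete())                      # ad-hoc task ends the flow: prints None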

    Conservative and traceable executions of heterogeneous model management workflows

    One challenge of developing large-scale systems is knowing how artefacts are interrelated across tools and languages, especially when traceability is mandated, e.g., by certifying authorities. Another challenge is the interoperability of all required tools, so that the software can be built, tested, and deployed efficiently as it evolves. Build systems have grown in popularity as they facilitate these activities. To cope with the complexities of the development process, engineers can adopt model-driven practices that allow them to raise the system abstraction level by modelling its domain, therefore reducing the accidental complexity that comes from, e.g., writing boilerplate code. However, model-driven practices come with challenges such as integrating heterogeneous model management tasks, e.g., validation, and modelling technologies, e.g., Simulink (a proprietary modelling environment for dynamic systems). While there are tools that support the execution of model-driven workflows, some support only specific modelling technologies, lack the generation of traceability information, or do not offer cutting-edge build-system features such as conservative execution, i.e., where only tasks affected by changes to resources are executed. In this work we propose ModelFlow, a workflow language and interpreter able to specify and execute model management workflows conservatively and to produce traceability information as a side product. In addition, ModelFlow reduces the overhead of model loading and disposal operations by allowing model management tasks to share already loaded models during workflow execution. Our evaluation shows that ModelFlow can perform conservative executions which improve execution times in some scenarios. ModelFlow is designed to support the execution of model management tasks targeting various modelling frameworks and can be used with models from heterogeneous technologies. In addition to EMF models, ModelFlow can also handle Simulink models through a driver developed in the context of this thesis, which was used to support one case study.
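    To make the notion of conservative execution concrete, the following sketch, which is not ModelFlow itself and uses invented file names, re-runs a task only when the checksum of one of its input resources has changed since the previous run:

        # Sketch of conservative (incremental) workflow execution: a task runs only
        # if an input resource changed since the previous execution.
        import hashlib
        import json
        import os
        from typing import Callable, Dict, List

        STATE_FILE = ".workflow_state.json"

        def checksum(path: str) -> str:
            with open(path, "rb") as f:
                return hashlib.sha256(f.read()).hexdigest()

        def run_conservatively(tasks: Dict[str, Callable[[], None]],
                               inputs: Dict[str, List[str]]) -> None:
            previous: Dict[str, str] = {}
            if os.path.exists(STATE_FILE):
                with open(STATE_FILE) as f:
                    previous = json.load(f)
            current: Dict[str, str] = {}
            for name, action in tasks.items():
                sums = {p: checksum(p) for p in inputs[name]}
                current.update(sums)
                if any(previous.get(p) != s for p, s in sums.items()):
                    print(f"running {name} (inputs changed)")
                    action()
                else:
                    print(f"skipping {name} (up to date)")
            with open(STATE_FILE, "w") as f:
                json.dump(current, f)

        # Example: a validation task depending on a single (hypothetical) model file.
        # run_conservatively({"validate": lambda: print("validating model")},
        #                    {"validate": ["system.model"]})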

    Parallel and Distributed Execution of Model Management Programs

    The engineering process of complex systems involves many stakeholders and development artefacts. Model-Driven Engineering (MDE) is an approach to development which aims to help curtail and better manage this complexity by raising the level of abstraction. In MDE, models are first-class artefacts in the development process. Such models can be used to describe artefacts of arbitrary complexity at various levels of abstraction, according to the requirements of their prospective stakeholders. These models come in various sizes and formats and can be thought of more broadly as structured data. Since models are the primary artefacts in MDE, and the goal is to enhance the efficiency of the development process, powerful tools are required to work with such models at an appropriate level of abstraction. Model management tasks – such as querying, validation, comparison, transformation and text generation – are often performed using dedicated languages, with declarative constructs used to improve expressiveness. Despite their semantically constrained nature, the execution engines of these languages rarely capitalize on the optimization opportunities afforded to them. Therefore, working with very large models often leads to poor performance when using MDE tools compared to general-purpose programming languages, which has a detrimental effect on productivity. Given the stagnant single-threaded performance of modern CPUs along with the ubiquity of distributed computing, parallelization of these model management programs is necessary to address some of the scalability concerns surrounding MDE. This thesis demonstrates efficient parallel and distributed execution algorithms for model validation, querying and text generation and evaluates their effectiveness. By fully utilizing the CPUs on 26 hexa-core systems, we were able to improve the performance of a complex model validation language by 122x compared to its existing sequential implementation. Up to 11x speedup was achieved with 16 cores for model query and model-to-text transformation tasks.
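    One simple way to picture the data-parallel evaluation of validation constraints is to split model elements across a pool of workers, as in this minimal Python sketch (illustrative only; it does not reflect the engines evaluated in the thesis):

        # Minimal data-parallel validation sketch: constraints are checked over
        # chunks of model elements in a thread pool.
        from concurrent.futures import ThreadPoolExecutor
        from typing import Callable, Dict, List

        Constraint = Callable[[dict], bool]

        def validate_chunk(elements: List[dict], constraints: List[Constraint]) -> List[str]:
            problems = []
            for element in elements:
                for check in constraints:
                    if not check(element):
                        problems.append(f"{element['id']} violates {check.__name__}")
            return problems

        def parallel_validate(model: List[dict], constraints: List[Constraint],
                              workers: int = 4) -> List[str]:
            size = max(1, len(model) // workers)
            chunks = [model[i:i + size] for i in range(0, len(model), size)]
            with ThreadPoolExecutor(max_workers=workers) as pool:
                results = pool.map(lambda c: validate_chunk(c, constraints), chunks)
            return [p for chunk in results for p in chunk]

        def has_name(element: dict) -> bool:
            return bool(element.get("name"))

        # Toy model: every tenth element is missing its name and should be reported.
        model = [{"id": f"e{i}", "name": "" if i % 10 == 0 else f"n{i}"} for i in range(100)]
        print(parallel_validate(model, [has_name]))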

    First CLIPS Conference Proceedings, volume 2

    The topics of volume 2 of the First CLIPS Conference are associated with the following applications: quality control; intelligent databases and networks; Space Station Freedom; Space Shuttle and satellite; user interfaces; artificial neural systems and fuzzy logic; parallel and distributed processing; enhancements to CLIPS; aerospace; simulation and defense; advisory systems and tutors; and intelligent control.

    Knowledge Modelling and Learning through Cognitive Networks

    One of the most promising developments in modelling knowledge is cognitive network science, which aims to investigate cognitive phenomena driven by the networked, associative organization of knowledge. For example, investigating the structure of semantic memory via semantic networks has illuminated how memory recall patterns influence phenomena such as creativity, memory search, learning, and more generally, knowledge acquisition, exploration, and exploitation. In parallel, neural network models for artificial intelligence (AI) are also becoming more widespread as inferential models for understanding which features drive language-related phenomena such as meaning reconstruction, stance detection, and emotional profiling. Whereas cognitive networks map explicitly which entities engage in associative relationships, neural networks perform an implicit mapping of correlations in cognitive data as weights obtained after training over labelled data, whose interpretation is not immediately evident to the experimenter. This book aims to bring together quantitative, innovative research that focuses on modelling knowledge through cognitive and neural networks to gain insight into mechanisms driving cognitive processes related to knowledge structuring, exploration, and learning. The book comprises a variety of publication types, including reviews and theoretical papers, empirical research, computational modelling, and big data analysis. All papers here share a commonality: they demonstrate how the application of network science and AI can extend and broaden cognitive science in ways that traditional approaches cannot.
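    To give a flavour of the network view of semantic memory described above, the sketch below builds a toy association network with the networkx library (assumed installed; the associations are invented) and queries it in the spirit of memory-search models:

        # Toy semantic (association) network and a shortest association path between
        # two concepts, in the spirit of network models of semantic memory.
        import networkx as nx

        g = nx.Graph()
        g.add_edges_from([
            ("dog", "cat"), ("dog", "bone"), ("cat", "milk"),
            ("milk", "cow"), ("cow", "grass"), ("bone", "calcium"),
        ])

        # Degree as a crude proxy for how central a concept is in the network.
        print(sorted(g.degree, key=lambda kv: kv[1], reverse=True)[:3])

        # A shortest association path models one possible memory-search route.
        print(nx.shortest_path(g, "dog", "grass"))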