
    A gap analysis of Internet-of-Things platforms

    We are experiencing an abundance of Internet-of-Things (IoT) middleware solutions that provide connectivity for sensors and actuators to the Internet. To gain widespread adoption, these middleware solutions, referred to as platforms, have to meet the expectations of different players in the IoT ecosystem, including device providers, application developers, and end-users, among others. In this article, we evaluate a representative sample of these platforms, both proprietary and open-source, on the basis of their ability to meet the expectations of different IoT users. The evaluation is thus focused more on how ready and usable these platforms are for IoT ecosystem players than on the peculiarities of the underlying technological layers. The evaluation is carried out as a gap analysis of the current IoT landscape with respect to (i) the support for heterogeneous sensing and actuating technologies, (ii) data ownership and its implications for security and privacy, (iii) data processing and data sharing capabilities, (iv) the support offered to application developers, (v) the completeness of an IoT ecosystem, and (vi) the availability of dedicated IoT marketplaces. The gap analysis aims to highlight the deficiencies of today's solutions in order to improve their integration into tomorrow's ecosystems. To strengthen the findings of our analysis, we conducted a survey among the partners of the Finnish IoT program, which counts over 350 experts, to identify the most critical issues for the development of future IoT platforms. Based on the results of our analysis and our survey, we conclude this article with a list of recommendations for extending these IoT platforms in order to fill in the gaps.
    Comment: 15 pages, 4 figures, 3 tables. Accepted for publication in Computer Communications, special issue on the Internet of Things: research challenges and solutions.

    Information visualisation and data analysis using web mash-up systems

    A thesis submitted in partial fulfilment for the degree of Doctor of Philosophy. The arrival of e-commerce systems has contributed greatly to the economy and has played a vital role in collecting a huge amount of transactional data. The production of such colossal volumes of data makes it increasingly difficult to analyse business and consumer behaviour. Enterprise 2.0 has the ability to store and create an enormous amount of transactional data; the purpose for which the data was collected can quite easily become disassociated from it, as the essential information goes unnoticed in large and complex data sets. Information overload is a major contributor to this dilemma. In the current environment, where hardware systems can store such large volumes of data and software systems can produce substantial amounts of it, data exploration problems are on the rise. The problem is not the production or storage of data but the effectiveness of the systems and techniques with which essential information can be retrieved from complex data sets in a comprehensive and logical way as questions are asked of the data. With existing information retrieval systems and visualisation tools, the more specific the question asked, the more definitive and unambiguous the visualised results that can be attained; but for complex and large data sets there are no elementary or simple questions. Therefore, a sound information visualisation model and system is required to analyse complex data sets through data analysis and information visualisation, making it possible for decision makers to identify the expected and discover the unexpected. To address complex data problems, a comprehensive and robust visualisation model and system is introduced. The visualisation model consists of four major layers: (i) acquisition and data analysis, (ii) data representation, (iii) user and computer interaction, and (iv) results repositories. There are major contributions in all four layers, particularly in data acquisition and data representation. Multiple-attribute and dimensional data visualisation techniques are identified in Enterprise 2.0 and Web 2.0 environments. Transactional tagging and linked data are explored, a novel contribution in information visualisation. The visualisation model and system is first realised as a tangible software system, which is then validated on different, large data sets in three experiments. The first experiment is based on the large Royal Mail postcode data set. The second experiment is based on a large transactional data set in an enterprise environment, while the same data set is also processed in a non-enterprise environment. The system interaction facilitated through new mashup techniques enables users to interact more fluently with the data and the representation layer. The results are exported into various reusable formats and retrieved for further comparison and analysis purposes. The information visualisation model introduced in this research is a compact process for data sets of any size and type, which is a major contribution in information visualisation and data analysis. Advanced data representation techniques are employed using various web mashup technologies. New visualisation techniques have emerged from the research, such as transactional tagging visualisation and linked data visualisation. The information visualisation model and system is extremely useful in addressing complex data problems, with strategies that are easy to interact with and integrate.
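    To make the four-layer model concrete, the sketch below expresses the layers as a minimal data pipeline. It is an illustration under our own assumptions rather than the thesis's implementation; the function and class names (acquire_and_analyse, represent, interact, ResultsRepository) and the transactional-tag example are hypothetical.

# Hypothetical sketch of the four layers described in the abstract; not the thesis's code.
from dataclasses import dataclass, field


@dataclass
class ResultsRepository:
    """Layer (iv): stores visualisation results in reusable formats."""
    results: list = field(default_factory=list)

    def export(self, result: dict, fmt: str = "json") -> None:
        self.results.append({"format": fmt, "result": result})


def acquire_and_analyse(raw_records: list[dict]) -> list[dict]:
    """Layer (i): acquisition and data analysis, e.g. keeping only tagged transactions."""
    return [r for r in raw_records if r.get("tag")]


def represent(analysed: list[dict]) -> dict:
    """Layer (ii): data representation, e.g. counts per tag for a tag-based view."""
    counts: dict = {}
    for record in analysed:
        counts[record["tag"]] = counts.get(record["tag"], 0) + 1
    return {"type": "transactional-tag-view", "counts": counts}


def interact(view: dict, selected_tag: str) -> dict:
    """Layer (iii): user and computer interaction, e.g. drilling into one tag."""
    return {"tag": selected_tag, "count": view["counts"].get(selected_tag, 0)}


if __name__ == "__main__":
    repo = ResultsRepository()
    data = [{"tag": "postcode:EC1"}, {"tag": "postcode:EC1"}, {"tag": "postcode:SW1"}]
    view = represent(acquire_and_analyse(data))
    repo.export(interact(view, "postcode:EC1"))
    print(repo.results)  # [{'format': 'json', 'result': {'tag': 'postcode:EC1', 'count': 2}}]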

    A Usability Evaluation Framework for Mashup Makers

    Currently, more than two billion people access the Web for various purposes. The majority are people without a programming or modelling background. Some of these people (called end-users) also like to create their own Web applications to meet their daily needs. Mashup Makers are tools for creating such end-user Web applications. As such, Mashup Makers could become the dominant environment for end-user development of Web applications. Existing Mashup Makers promise that creating a Web Mashup is very easy and just a matter of a few mouse clicks. However, there is no evidence that this is indeed the case. On the contrary, research has already revealed usability problems with Mashup Makers. Therefore, this thesis concentrates on the usability of Mashup Makers as development environments for Web applications for end-users. Usability is a key issue for the success of software artifacts, especially if the artifacts are intended for non-technical users. We therefore target a consolidated approach, model, and framework for evaluating the usability of Mashup Makers for end-users. Such a framework will not only allow evaluating the usability of existing Mashup Makers, but will also identify the key issues concerning usability (i.e., usability impact factors) that developers of Mashup Makers and of other future end-user development tools can take into consideration when developing new tools.

    M2* - mobility to anywhere, an IoT aggregation service platform

    This work addresses the problem of creating a central integrated platform to collect and manipulate mobility data and sensor data, turning them into useful information for users in their mobility process. It is an academic work towards a framework for the mobility process, in which such manipulation can produce useful information for users, for public transportation operators and authorities, and about real-time energy and water consumption.
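    As a rough illustration of the kind of aggregation the platform describes, the sketch below joins mobility events with sensor readings per stop. It is a minimal sketch under our own assumptions; the data structures and field names are hypothetical, not part of the M2* platform.

# Hypothetical sketch of aggregating mobility and sensor data; not the M2* API.
from dataclasses import dataclass


@dataclass
class MobilityEvent:
    user_id: str
    stop_id: str
    timestamp: float  # seconds since epoch


@dataclass
class SensorReading:
    stop_id: str
    kind: str        # e.g. "occupancy", "energy_kwh", "water_litres"
    value: float
    timestamp: float


def aggregate_for_user(user_id: str,
                       events: list[MobilityEvent],
                       readings: list[SensorReading]) -> list[dict]:
    """Attach the latest sensor reading at each stop the user passes through."""
    info = []
    for event in (e for e in events if e.user_id == user_id):
        at_stop = [r for r in readings
                   if r.stop_id == event.stop_id and r.timestamp <= event.timestamp]
        latest = max(at_stop, key=lambda r: r.timestamp, default=None)
        info.append({"stop": event.stop_id,
                     "sensor": None if latest is None else {latest.kind: latest.value}})
    return info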

    Technology Integration around the Geographic Information: A State of the Art

    One of the elements that have popularized and facilitated the use of geographical information in a variety of computational applications has been the use of Web maps; this has opened new research challenges on different subjects, from locating places and people, to the study of social behavior, to analyzing the hidden structure of the terms used in natural language queries for locating a place. However, the use of geographic information in a technological setting is not new; rather, it has been part of an ongoing process of development and technological integration. This paper presents a state-of-the-art review of the application of geographic information under different approaches: its use in location-based services, collaborative user participation, context awareness, its use in the Semantic Web, and the challenges of its use in natural language queries. Finally, a prototype that integrates most of these areas is presented.

    Towards a Cyber-Physical Manufacturing Cloud through Operable Digital Twins and Virtual Production Lines

    In the last decade, the paradigm of Cyber-Physical Systems (CPS) has integrated industrial manufacturing systems with Cloud Computing technologies for Cloud Manufacturing. Up to 2015, many CPS-based manufacturing systems collected real-time machining data to perform remote monitoring, prognostics and health management, and predictive maintenance. However, these CPS-integrated and network-ready machines were not directly connected to the elements of Cloud Manufacturing and required a human in the loop. Addressing this gap, in 2017 we introduced a new paradigm, the Cyber-Physical Manufacturing Cloud (CPMC), which bridges the gap between physical machines and virtual space. CPMC virtualizes machine tools in the cloud through web services for direct monitoring and operation over the Internet. Fundamentally, CPMC differs from contemporary manufacturing paradigms. For instance, CPMC virtualizes machining tools in the cloud using remote services and establishes direct Internet-based communication, which is overlooked in existing Cloud Manufacturing systems. Another contemporary paradigm, cyber-physical production systems, enables networked access to machining tools; CPMC, in contrast, virtualizes manufacturing resources in the cloud and both monitors and operates them over the Internet. This dissertation defines the fundamental concepts of CPMC and expands its horizon to different aspects of cloud-based virtual manufacturing, such as Digital Twins and Virtual Production Lines. The Digital Twin (DT) is another concept, evolving since 2002, that creates as-is replicas of machining tools in cyber space. Up to 2018, many researchers proposed state-of-the-art DTs that focused only on monitoring production lifecycle management through simulations and data-driven analytics, but they overlooked executing manufacturing processes through DTs from virtual space. This dissertation identifies that DTs can be made more productive if they engage directly in the execution of manufacturing operations in addition to monitoring. Towards this novel approach, this dissertation proposes a new operable DT model of CPMC that inherits the features of direct monitoring and operation from the cloud. This research envisages and opens the door to future manufacturing systems in which resources are developed as cloud-based DTs for remote and distributed manufacturing. The proposed concepts and visions of DTs have spawned the following fundamental research. In 2019, this dissertation proposed a novel concept of DT-based Virtual Production Lines (VPLs) in CPMC. It presents a design of a service-oriented architecture of DTs that virtualizes physical manufacturing resources in CPMC. The proposed DT architecture offers a more compact and integral service-oriented virtual representation of manufacturing resources. To re-configure a VPL, one requirement is to establish DT-to-DT collaborations in manufacturing clouds, which mirrors concurrent resource-to-resource collaborations on shop floors. Satisfying this requirement, this research designs a novel framework to easily re-configure, monitor, and operate VPLs using the DTs of CPMC. CPMC publishes individual web services for machining tools, which is a traditional approach in the domain of service computing, but this approach overcrowds service registry databases. In 2020, this dissertation introduced OpenDT, a novel service publication and discovery approach that publishes DTs with collections of services. Experimental results show easier discovery and remote access of DTs while re-configuring VPLs. The research proposed in this dissertation has received numerous citations from both industry and academia, demonstrating the impact of its contributions.
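    The sketch below illustrates, under our own assumptions, the two ideas the dissertation emphasises: a digital twin that is operable (it can execute operations as well as report telemetry) and an OpenDT-style registry that publishes one entry per twin with its collection of services rather than one entry per web service. Class, method, and service names are hypothetical, not the dissertation's API.

# Hypothetical sketch of an operable digital twin and an OpenDT-style registry;
# not the dissertation's code.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class OperableDigitalTwin:
    machine_id: str
    # Named operations the physical machine exposes, e.g. "start_spindle".
    operations: dict[str, Callable[[], str]] = field(default_factory=dict)
    telemetry: dict[str, float] = field(default_factory=dict)

    def monitor(self) -> dict[str, float]:
        """Monitoring side of the twin: return the latest machining telemetry."""
        return dict(self.telemetry)

    def operate(self, operation: str) -> str:
        """Operable side of the twin: execute a named operation on the machine."""
        if operation not in self.operations:
            raise KeyError(f"{self.machine_id} does not expose '{operation}'")
        return self.operations[operation]()


@dataclass
class DTRegistry:
    """OpenDT-style registry: one entry per twin, listing its service collection."""
    entries: dict[str, list[str]] = field(default_factory=dict)

    def publish(self, twin: OperableDigitalTwin) -> None:
        self.entries[twin.machine_id] = ["monitor", *twin.operations]

    def discover(self, operation: str) -> list[str]:
        """Find all twins whose service collection includes the given operation."""
        return [mid for mid, services in self.entries.items() if operation in services]


if __name__ == "__main__":
    mill = OperableDigitalTwin("mill-01", operations={"start_spindle": lambda: "spindle on"})
    mill.telemetry["spindle_rpm"] = 0.0
    registry = DTRegistry()
    registry.publish(mill)
    print(registry.discover("start_spindle"))   # ['mill-01']
    print(mill.operate("start_spindle"))        # 'spindle on'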

    Web archives: the future

    This report is structured first to engage in some speculative thought about the possible futures of the web, as an exercise in prompting us to think about what we need to do now in order to make sure that we can reliably and fruitfully use archives of the web in the future. Next, we turn to considering the methods and tools being used to research the live web, as a pointer to the types of things that can be developed to help understand the archived web. Then, we turn to a series of topics and questions that researchers want or may want to address using the archived web. In this final section, we identify some of the challenges individuals, organizations, and international bodies can target to increase our ability to explore these topics and answer these questions. We end the report with some conclusions based on what we have learned from this exercise.