    Edgar: Offloading Function Execution to the Ultimate Edge: Technical Report

    Web applications are on the rise and are rapidly evolving into mature replacements for their native counterparts, a trend driven mainly by their platform independence and instant deployability. As web applications grow more complex, scalability and responsiveness remain key challenges that are typically addressed by rather costly approaches such as cloud computing. In this paper, we present Edgar, a novel middleware for web applications that enables client-side execution of code that would usually require server-side deployment due to missing trust in clients. Following the Function-as-a-Service paradigm, applications consist of functions that can be distributed to browsers. Other nearby browsers can discover these functions and invoke them directly on a peer-to-peer basis. Client-side resources are thus used to provision the web application, lowering costs for service providers. Premium services, such as freedom from ads, can be offered to incentivise users to contribute their resources. In case of resource shortages or unresponsive clients, execution falls back to a cloud-based infrastructure. Edgar combines WebAssembly for executing workloads written in different languages at near-native speed, WebRTC for browser-to-browser communication, and Intel SGX to establish trust in other browsers' computations. We evaluate Edgar by implementing a digital assistant as well as a recommendation system. Our evaluation shows that Edgar generates lower costs than traditional deployments, scales linearly with increasing client numbers, and handles unresponsive clients well.
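
    The paper publishes no API, but its peer-first invocation with cloud fallback is a simple pattern. A minimal Python sketch, with entirely hypothetical names, might look like this:

        # Hypothetical sketch of Edgar-style peer-first invocation with cloud fallback.
        # No names here come from the paper; they only illustrate the pattern.
        class FunctionRegistry:
            """Tracks which nearby peers currently offer which function."""
            def __init__(self):
                self.peers = {}  # function name -> list of peer callables

            def discover(self, fn_name):
                return self.peers.get(fn_name, [])

        def invoke(registry, fn_name, cloud_endpoint, payload, timeout_s=1.0):
            """Try nearby peers first; fall back to the cloud if all are unresponsive."""
            for peer in registry.discover(fn_name):
                try:
                    return peer(payload, timeout=timeout_s)  # e.g. a WebRTC data-channel call
                except TimeoutError:
                    continue  # unresponsive peer: try the next one
            return cloud_endpoint(payload)  # guaranteed cloud fallback

        def flaky_peer(payload, timeout):
            raise TimeoutError  # simulates an unresponsive browser

        reg = FunctionRegistry()
        reg.peers["recommend"] = [flaky_peer]
        print(invoke(reg, "recommend", lambda p: f"cloud:{p}", "user42"))  # -> cloud:user42

    In a real deployment the peer call would run over a WebRTC data channel against a WebAssembly function, and the caller would verify an Intel SGX attestation before trusting the result.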

    Cloud-computing strategies for sustainable ICT utilization : a decision-making framework for non-expert Smart Building managers

    Virtualization of processing power, storage, and networking applications via cloud computing allows Smart Buildings to operate heavy-demand computing resources off-premises. While this approach reduces in-house costs and energy use, recent case studies have highlighted the complexity of the decision-making processes associated with implementing cloud computing, a complexity caused by the rapid evolution of these technologies without a standardized approach among the organizations offering cloud-computing provision commercially. This study defines the term Smart Building as an ICT environment in which a degree of system integration is accomplished. Non-expert managers are highlighted as key users of the project's outcomes, given the diverse nature of Smart Buildings' operational objectives. This research evaluates different ICT management methods to effectively support decisions made by non-expert clients deploying different models of cloud-computing services in their Smart Building ICT environments. The objective is to reduce the need for costly third-party ICT consultancy providers, so that non-experts can focus on their Smart Buildings' core competencies rather than on the complex, expensive, and energy-consuming processes of ICT management. The gap identified by this research leaves non-expert managers poorly placed to make effective decisions regarding cloud-computing cost estimation, deployment assessment, associated power consumption, and management flexibility in their Smart Building ICT environments. The project analyses cloud-computing decision-making concepts with reference to different Smart Building ICT attributes. In particular, it follows a structured programme of data collection comprising semi-structured interviews, cost simulations, and risk-analysis surveys. The main output is a theoretical management framework for non-expert decision-makers across variously operated Smart Buildings. Furthermore, a decision-support tool is designed to enable non-expert managers to identify the extent of virtualization potential by evaluating different implementation options, correlated with contract limitations, security challenges, system integration levels, sustainability, and long-term costs. These requirements are explored against cloud demand changes observed over specified periods. Dependencies were found to vary greatly with organizational aspects such as performance, size, and workload. The study argues that constructing long-term, sustainable, and cost-efficient strategies for any cloud deployment depends on thoroughly identifying which services are required off- and on-premises. It points out that most of today's heavily burdened Smart Buildings outsource these services to costly independent suppliers, causing unnecessary management complexity, additional cost, and system incompatibility. The main conclusions argue that cloud-computing costs differ depending on Smart Building attributes and ICT requirements, and that although cloud services are in most cases more convenient and cost-effective in the early stages of deployment and migration, they can become costly later if not planned carefully using cost-estimation service patterns. The results of the study can be exploited to enhance core competencies within Smart Buildings in order to maximize growth and attract new business opportunities.
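
    The conclusion that cloud services are often cheaper early in a deployment but can become costly later is easiest to see as a cumulative-cost comparison. A minimal sketch, with invented figures used purely for illustration:

        # Illustrative only: invented figures comparing cumulative cloud vs on-premises cost.
        ON_PREM_UPFRONT = 120_000  # hypothetical hardware and installation cost
        ON_PREM_MONTHLY = 1_500    # hypothetical power, cooling, and maintenance
        CLOUD_MONTHLY = 4_000      # hypothetical subscription for equivalent capacity

        def cumulative_costs(months):
            on_prem = ON_PREM_UPFRONT + ON_PREM_MONTHLY * months
            cloud = CLOUD_MONTHLY * months
            return on_prem, cloud

        for months in (12, 36, 60):
            on_prem, cloud = cumulative_costs(months)
            cheaper = "cloud" if cloud < on_prem else "on-premises"
            print(f"{months} months: on-prem {on_prem:,} vs cloud {cloud:,} -> {cheaper}")

    With these numbers the cloud is cheaper for roughly the first four years, after which on-premises wins; exposing exactly this kind of break-even is what a cost-estimation service pattern is for.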

    Cloud computing based bushfire prediction for cyber-physical emergency applications

    In the past few years, several studies have proposed reducing the impact of bushfires by mapping their occurrence and spread. Most of these prediction and mapping tools and models were designed to run either on a single local machine or on a high-performance cluster, neither of which can scale with users' needs. Installing and configuring these tools and models can itself be a tedious and time-consuming process, making them unsuitable for time-constrained cyber-physical emergency systems. In this research, to improve the efficiency of the fire prediction process and make this service available to many users in a scalable and cost-effective manner, we propose a scalable Cloud-based bushfire prediction framework that allows forecasting of the probability of fire occurrence in different regions of interest. The framework automates the selection of particular bushfire models for specific regions and the scheduling of users' requests within their specified deadlines. The evaluation results show that our Cloud-based bushfire prediction system can scale resources and meet user requirements. © 2017 Elsevier B.V.
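
    The abstract does not detail the deadline-driven scheduler. As a minimal sketch of the underlying idea, assuming (hypothetically) that a prediction job parallelises evenly across VMs, the number of VMs needed to meet a deadline can be estimated as follows:

        import math

        def vms_needed(region_cells, cells_per_vm_hour, hours_to_deadline):
            """Estimate the VMs required for a bushfire-prediction job to meet its deadline.

            All parameters are hypothetical; the abstract does not specify the model.
            Assumes the workload parallelises evenly across VMs.
            """
            total_vm_hours = region_cells / cells_per_vm_hour
            return max(1, math.ceil(total_vm_hours / hours_to_deadline))

        # Example: a 2,000,000-cell region, 250,000 cells per VM-hour, 2-hour deadline.
        print(vms_needed(2_000_000, 250_000, 2))  # -> 4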

    Ortsbezogene Anwendungen und Dienste: 9. Fachgespräch der GI/ITG-Fachgruppe Kommunikation und Verteilte Systeme ; 13. & 14. September 2012

    The location of a mobile user is an important piece of information for applications in mobile computing, wearable computing, and ubiquitous computing. If a mobile device is able to determine the current position of its user, an application can take this information into account; such applications are generally referred to as location-based applications. Closely related to location-based applications are location-based services, for example services that deliver information about the user's current location. Such services are now deployed commercially and allow a traveller, for instance, to find a hotel, a petrol station, or a pharmacy in the vicinity. Not least because of the introduction of LTE, location-based applications are expected to hold great potential for the future. The annual technical meeting "Ortsbezogene Anwendungen und Dienste" (Location-based Applications and Services) of the GI/ITG specialist group on Communication and Distributed Systems aims to discuss current developments in this field among a broad group of participants from industry and academia. These proceedings summarise the results of the ninth meeting.
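
    As a concrete illustration of the core query behind such services, finding the nearest point of interest to the user's position, a minimal sketch with invented POI data:

        # Haversine great-circle distance; the POI data below is invented.
        from math import asin, cos, radians, sin, sqrt

        def haversine_km(lat1, lon1, lat2, lon2):
            """Great-circle distance between two WGS84 coordinates, in kilometres."""
            dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
            a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
            return 2 * 6371 * asin(sqrt(a))

        pois = [  # hypothetical points of interest: (name, lat, lon)
            ("Hotel", 48.7758, 9.1829),
            ("Petrol station", 48.7800, 9.1700),
            ("Pharmacy", 48.7700, 9.1900),
        ]

        user_lat, user_lon = 48.7767, 9.1775  # position reported by the device
        nearest = min(pois, key=lambda p: haversine_km(user_lat, user_lon, p[1], p[2]))
        print(nearest[0])  # -> Hotel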

    Data Spaces

    This open access book aims to help data space designers understand what is required to create a successful data space. It explores cutting-edge theory, technologies, methodologies, and best practices for data spaces for both industrial and personal data, and provides the reader with a basis for understanding the design, deployment, and future directions of data spaces. The book captures early lessons and experience in creating data spaces, arranging the contributions into three parts covering design, deployment, and future directions respectively. The first part explores the design space of data spaces; its chapters detail organisational design for data spaces, data platforms, data governance, federated learning, personal data sharing, data marketplaces, and hybrid artificial intelligence for data spaces. The second part describes the use of data spaces within real-world deployments; its chapters are co-authored with industry experts and include case studies of data spaces in sectors including Industry 4.0, food safety, FinTech, health care, and energy. The third and final part details future directions for data spaces, including challenges and opportunities for common European data spaces and privacy-preserving techniques for trustworthy data sharing. The book is of interest to two primary audiences: first, researchers interested in data management and data sharing, and second, practitioners and industry experts engaged in data-driven systems where the sharing and exchange of data within an ecosystem are critical.

    Performance modelling and optimization for video-analytic algorithms in a cloud-like environment using machine learning

    CCTV cameras produce a large amount of video surveillance data per day, and analysing it requires significant computing resources that often need to be scalable. The emergence of the Hadoop distributed processing framework has had a significant impact on various data-intensive applications, as distributed processing increases the processing capability of the applications it serves. Hadoop is an open-source implementation of the MapReduce programming model. It automates the creation of tasks for each function, distributes data, parallelises execution, and handles machine failures, relieving users of the complexity of managing the underlying processing so they can focus on building their applications. In a practical deployment, the challenge of a Hadoop-based architecture is that it requires several scalable machines for effective processing, which in turn adds hardware investment cost to the infrastructure. Although a cloud infrastructure offers scalable and elastic utilisation of resources, where users can scale the number of Virtual Machines (VMs) up or down as required, a user such as a CCTV system operator intending to use a public cloud would wish to know what cloud resources (i.e. how many VMs) need to be deployed so that the processing can be done in the fastest manner (or within a known time constraint) and at the lowest cost. Often such resources will also have to satisfy practical, procedural, and legal requirements. The capability to model a distributed processing architecture whose resource requirements can be effectively and optimally predicted would thus be a useful tool. The literature offers no clear and comprehensive modelling framework that provides proactive resource-allocation mechanisms to satisfy a user's target requirements, especially for a processing-intensive application such as video analytics. In this thesis, with the aim of closing the above research gap, the research first examines the current legal practices and requirements of implementing a video surveillance system within a distributed processing and data storage environment, since the legal validity of data gathered or processed within such a system is vital for its applicability in such domains. Subsequently, the thesis presents a comprehensive framework for the performance modelling and optimisation of resource allocation when deploying a scalable distributed video analytic application in a Hadoop-based framework running on a virtualised cluster of machines. The proposed modelling framework investigates several machine learning algorithms, namely decision trees (M5P, RepTree), Linear Regression, the Multi-Layer Perceptron (MLP), and the Ensemble Classifier Bagging model, to model and predict the execution time of video analytic jobs based on infrastructure-level as well as job-level parameters. Further, to allocate resources under constraints so as to obtain optimal performance in terms of job execution time, a Genetic Algorithm (GA) based optimisation technique is proposed. Experimental results demonstrate the proposed framework's capability to predict the job execution time of a given video analytic task from infrastructure- and input-data-related parameters, and its ability to determine the minimum job execution time given constraints on these parameters. Given the above, the thesis contributes to the state of the art in distributed video analytics design, implementation, performance analysis, and optimisation.
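
    To illustrate the execution-time-prediction step, the following sketch trains one of the model families named above (bagged decision trees) with scikit-learn on synthetic data; the two features and all figures are invented stand-ins for the thesis's infrastructure-level and job-level parameters:

        # Synthetic stand-in for the thesis's experiments; data and features are invented.
        import numpy as np
        from sklearn.ensemble import BaggingRegressor
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(0)
        n = 200
        vms = rng.integers(1, 16, n)          # hypothetical infrastructure parameter: VM count
        video_gb = rng.uniform(0.5, 50.0, n)  # hypothetical job parameter: input size in GB
        X = np.column_stack([vms, video_gb])
        y = 60 * video_gb / vms + rng.normal(0, 5, n)  # synthetic execution time in seconds

        model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50)
        model.fit(X, y)
        print(model.predict([[8, 20.0]]))  # predicted runtime for an 8-VM, 20 GB job

    A GA-based optimiser, as proposed in the thesis, would then search such a predictor for the parameter combination minimising predicted execution time under the user's cost constraints.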

    Mobile phone technology as an aid to contemporary transport questions in walkability, in the context of developing countries

    The emerging global middle class, which is expected to double by 2050, desires more walkable, liveable neighbourhoods, and as distances between work and other amenities increase, cities are becoming less monocentric and more polycentric. African cities could be described as walking cities, based on the number of people who walk to their destinations rather than using other means of mobility, yet they are often not walkable. Walking is by far the most popular form of transportation in Africa's rapidly urbanising cities, although often out of necessity rather than choice. Facilitating this primary mode, while curbing the growth of less sustainable mobility options, requires special attention to the safety and convenience of walking in a Global South context. To further promote walking as a sustainable mobility option, there is therefore a need to assess the current state of its supporting infrastructure and begin giving it higher priority, focus, and emphasis. Mobile phones have emerged as a useful alternative tool for collecting this data and auditing the state of walkability in cities. They reduce the inaccuracies and inefficiencies of recall-based methods because smartphone sensors such as GPS provide positions accurate to within about 5 m, offering superior accuracy and precision compared with traditional methods. The data is also spatial in nature, allowing for a range of possible applications and use cases. Traditional inventory approaches to walkability often reveal the perceived walkability and accessibility of only a subset of journeys. Crowdsourcing the perceived walkability and accessibility of points of interest in African cities could address this, although aspects such as ease of use and road safety must also be considered. A tool that crowdsources individual pedestrian experiences and the availability and state of pedestrian infrastructure and amenities, using state-of-the-art smartphone technology, would over time also yield complete surveys of the walking environment, provided such a tool is popular and safe. This research illustrates how mobile phone applications currently on the market can be improved to offer more functionality that draws on multiple sensory modalities for enhanced visual appeal, ease of use, and aesthetics. The overarching aim of this research is therefore to develop the framework for, and test, a pilot-version mobile phone-based data collection tool that incorporates emerging technologies for collecting data on walkability. The project assesses the effectiveness of the mobile application and tests the technical capabilities of the system to see how it operates within existing infrastructure. It further investigates the use of mobile phone technology in collecting user perceptions of walkability, and the limitations of current transportation-based mobile applications, with the aim of developing an application that improves on current offerings in the market. The prototype application will be tested and later piloted in different locations around the globe. Past studies have focused primarily on the development of transport-based mobile phone applications with basic features and limited functionality. Although limited progress has been made in integrating emerging technologies such as Augmented Reality (AR), Machine Learning (ML), and Big Data analytics into mobile phone applications, what is missing from these past examples is a comprehensive and structured application in the transportation sphere. The full research therefore offers a broader understanding of the information gathered from these smart devices, and of how that large volume of varied data can be interpreted better and more quickly to discover trends and patterns and to aid decision-making and planning. This research project attempts to fill this gap and to bring new insights, thereby advancing the field of transportation data collection audits, with particular emphasis on walkability audits. In this regard, it seeks to provide insights into how such a tool could be applied in assessing and promoting walkability as a sustainable and equitable mobility option. To persuade policy-makers, analysts, and practitioners in urban transport planning and provision to pay closer attention to making better, more walkable places, appealing to them from an efficiency and business perspective is vital. The crowdsourced data is of great interest to industry practitioners, local governments, and research communities as Big Data, and to urban communities and civil society as an input to their advocacy activities. The general findings of this research show clear evidence that transport-based mobile phone applications currently available on the market are increasingly outdated and are not keeping up with new and emerging technologies and innovations. It is also evident from the results that mobile smartphones have revolutionised the collection of transport-related information, hence the need for new initiatives to take advantage of this emerging opportunity. The implication of these findings is that more attention needs to be paid to this niche going forward. The project recommends further studies, particularly on which technologies and functionalities can realistically be incorporated into mobile phone applications in the near future, as well as on improving the hardware specifications of mobile devices to support these emerging technologies while keeping device costs as low as possible.
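
    To make the proposed crowdsourcing concrete, one possible shape for an audit record and a naive aggregate walkability score is sketched below; every field name and the scoring rule are invented for illustration:

        # Invented record shape and scoring rule for a crowdsourced walkability audit.
        from dataclasses import dataclass
        from statistics import mean

        @dataclass
        class AuditPoint:
            lat: float          # GPS latitude (roughly 5 m accuracy on modern phones)
            lon: float          # GPS longitude
            has_sidewalk: bool  # observed pedestrian infrastructure
            feels_safe: int     # user rating from 1 (unsafe) to 5 (safe)
            obstructed: bool    # e.g. parked cars blocking the path

        def walkability_score(points):
            """Naive score in [0, 1]: infrastructure, safety, and clearance weighted equally."""
            infra = mean(1.0 if p.has_sidewalk else 0.0 for p in points)
            safety = mean((p.feels_safe - 1) / 4 for p in points)
            clear = mean(0.0 if p.obstructed else 1.0 for p in points)
            return (infra + safety + clear) / 3

        points = [AuditPoint(-1.2921, 36.8219, True, 4, False),
                  AuditPoint(-1.2925, 36.8222, False, 2, True)]
        print(round(walkability_score(points), 2))  # -> 0.5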

    Web Data Extraction, Applications and Techniques: A Survey

    Web Data Extraction is an important problem that has been studied by means of different scientific tools and in a broad range of applications. Many approaches to extracting data from the Web have been designed to solve specific problems and operate in ad-hoc domains. Other approaches instead heavily reuse techniques and algorithms developed in the field of Information Extraction. This survey aims at providing a structured and comprehensive overview of the literature in the field of Web Data Extraction. We provide a simple classification framework in which existing Web Data Extraction applications are grouped into two main classes, namely applications at the Enterprise level and at the Social Web level. At the Enterprise level, Web Data Extraction techniques emerge as a key tool for performing data analysis in Business and Competitive Intelligence systems as well as for business process re-engineering. At the Social Web level, Web Data Extraction techniques make it possible to gather the large amounts of structured data continuously generated and disseminated by Web 2.0, Social Media, and Online Social Network users, offering unprecedented opportunities to analyze human behavior at a very large scale. We also discuss the potential for cross-fertilization, i.e., the possibility of reusing Web Data Extraction techniques originally designed for a given domain in other domains.
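
    As a minimal, self-contained illustration of the wrapper-style techniques the survey covers, the following sketch extracts repeated records from HTML using only the Python standard library; the markup is invented:

        # Wrapper-style extraction of repeated records; uses only the standard library.
        from html.parser import HTMLParser

        class PriceExtractor(HTMLParser):
            """Collect the text of every element whose class attribute is 'price'."""
            def __init__(self):
                super().__init__()
                self.in_price = False
                self.prices = []

            def handle_starttag(self, tag, attrs):
                if ("class", "price") in attrs:
                    self.in_price = True

            def handle_data(self, data):
                if self.in_price:
                    self.prices.append(data.strip())
                    self.in_price = False

        parser = PriceExtractor()
        parser.feed('<ul><li class="price">9.99</li><li class="price">4.50</li></ul>')
        print(parser.prices)  # -> ['9.99', '4.50']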

    Web-based Stereoscopic Collaboration for Medical Visualization

    Medical volume visualization is a valuable tool for examining volume data in medical practice and teaching. Interactive, stereoscopic, collaborative rendering in real time is necessary to understand the data completely and in detail. Because of high hardware requirements, however, such visualization of high-resolution data is possible almost exclusively on dedicated visualization systems. Remote visualization is used to make it available elsewhere, but this almost always requires complex software deployments, which hinders universal ad-hoc use. This leads to the following hypothesis: a high-performance remote visualization system specialized for stereoscopy and ease of use can support interactive, stereoscopic, and collaborative medical volume visualization. The recent literature on remote visualization describes applications that require nothing more than a plain web browser. However, these systems place no particular emphasis on performant usability for every participant, nor do they provide the functionality needed to drive multiple stereoscopic presentation systems. The familiarity, ease of use, and wide availability of web browsers raise the following specific question: can we develop a system that supports all of these aspects yet requires only a plain web browser, without additional software, as the client? A proof of concept was conducted to verify the hypothesis, comprising the development of a prototype, its practical application, and the measurement and comparison of its performance. The resulting prototype (CoWebViz) is one of the first browser-based systems to provide fluid, interactive remote visualization in real time without additional software. Tests and comparisons show that the approach performs better than other, similar systems tested. The simultaneous use of different stereoscopic presentation systems with such a simple remote visualization system is currently unique. Its use for the normally very resource-intensive stereoscopic and collaborative teaching of anatomy, with intercontinental participants, demonstrates the feasibility and the simplifying character of the approach. Feasibility was also shown by its successful use in other application areas, such as grid computing and surgery.
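
    The abstract does not disclose CoWebViz's exact protocol. As a hedged sketch of one common way to push server-rendered frames to a plain browser, the following serves an MJPEG stream over multipart/x-mixed-replace; the frame source is a placeholder:

        # One generic way to stream server-rendered frames to a plain browser;
        # not CoWebViz's actual protocol. The frame source is a placeholder.
        from http.server import BaseHTTPRequestHandler, HTTPServer

        def next_frame() -> bytes:
            """Placeholder: a real server would return the latest rendered JPEG frame."""
            return b"placeholder-jpeg-bytes"  # illustration only, not a valid JPEG

        class MJPEGHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                self.send_response(200)
                self.send_header("Content-Type", "multipart/x-mixed-replace; boundary=frame")
                self.end_headers()
                for _ in range(3):  # a real stream loops for the session's lifetime
                    self.wfile.write(b"--frame\r\nContent-Type: image/jpeg\r\n\r\n")
                    self.wfile.write(next_frame() + b"\r\n")

        # HTTPServer(("", 8080), MJPEGHandler).serve_forever()  # browser shows <img src="/">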