414 research outputs found

    Smart hospital emergency system via mobile-based requesting services

    In recent years, the UK’s emergency call and response services have come under considerable strain: roughly 9 million emergency calls (from both landlines and mobiles) were made in 2014 alone. Coupled with an increasing population and cuts in government funding, this has resulted in fewer emergency response vehicles being available and longer response times. In this paper, we highlight the main challenges facing emergency services and give an overview of previous solutions. We then propose a new system called the Smart Hospital Emergency System (SHES). The main aim of SHES is to save lives by improving communication between patients and emergency services. By utilising current technologies and algorithms, SHES aims to increase emergency communication throughput while reducing the load on emergency call systems and making the emergency response process more efficient. Drawing on health data held on a personal smartphone together with internally tracked sensor data (GPS, accelerometer, gyroscope, etc.), SHES processes this data efficiently and securely and communicates automatically with emergency services, ultimately reducing communication bottlenecks. Live video streaming over real-time video communication protocols is also a focus of SHES, improving initial communication between emergency services and patients. A prototype of the system has been developed and evaluated in a preliminary usability, reliability, and communication performance study.
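
    The abstract does not give implementation details; as a rough illustration of the kind of client-side data gathering it describes, the following TypeScript sketch reads geolocation and motion-sensor data with standard browser APIs and posts them to a hypothetical emergency endpoint. The endpoint URL and payload shape are assumptions for illustration, not part of SHES.

        // Hypothetical sketch: gather location and motion data in a smartphone
        // browser and forward it to an (assumed) emergency-service endpoint.
        // The URL and payload format are illustrative only.

        interface EmergencyReport {
          latitude: number;
          longitude: number;
          accuracy: number;        // metres, as reported by the Geolocation API
          acceleration: { x: number; y: number; z: number } | null;
          timestamp: number;
        }

        let lastMotion: DeviceMotionEvent["acceleration"] = null;

        // Standard browser event; fires with accelerometer readings when available.
        window.addEventListener("devicemotion", (e) => {
          lastMotion = e.acceleration;
        });

        function sendEmergencyReport(): void {
          navigator.geolocation.getCurrentPosition(async (pos) => {
            const report: EmergencyReport = {
              latitude: pos.coords.latitude,
              longitude: pos.coords.longitude,
              accuracy: pos.coords.accuracy,
              acceleration: lastMotion
                ? { x: lastMotion.x ?? 0, y: lastMotion.y ?? 0, z: lastMotion.z ?? 0 }
                : null,
              timestamp: Date.now(),
            };
            // "https://emergency.example/api/report" is a placeholder endpoint.
            await fetch("https://emergency.example/api/report", {
              method: "POST",
              headers: { "Content-Type": "application/json" },
              body: JSON.stringify(report),
            });
          });
        }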

    Orchestrating Service Migration for Low Power MEC-Enabled IoT Devices

    Multi-Access Edge Computing (MEC) is a key enabling technology for Fifth Generation (5G) mobile networks. MEC provides distributed cloud computing capabilities and an information technology service environment for applications and services at the edge of mobile networks. This architectural modification reduces congestion and latency and improves the performance of edge-colocated applications and devices. In this paper, we demonstrate how reactive service migration can be orchestrated for low-power MEC-enabled Internet of Things (IoT) devices, using open-source Kubernetes as the container orchestration system. Our demo is based on a traditional client-server system running from the user equipment (UE) over Long Term Evolution (LTE) to the MEC server; as the use-case scenario, we post-process live video received over web real-time communication (WebRTC). We then integrate Kubernetes orchestration with S1 handovers, demonstrating a MEC-based software-defined network (SDN) in which edge applications can reactively follow the UE within the radio access network (RAN), keeping latency low. The collected data is used to analyse the benefits of the low-power MEC-enabled IoT device scheme, in which the end-to-end (E2E) latency and power requirements of the UE are improved. We further discuss the challenges of implementing such schemes and future research directions.
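
    The orchestration mechanics are not spelled out in the abstract; as a minimal sketch of reactive migration with Kubernetes, the TypeScript snippet below patches a deployment's nodeSelector through the Kubernetes REST API so the container is rescheduled onto the edge node associated with the UE's new cell after a handover. The API-server URL, token, deployment name, and node name are assumptions, not the authors' configuration.

        // Hypothetical sketch: after an S1 handover, re-pin the edge service to the
        // MEC host that now serves the UE by patching the deployment's nodeSelector.
        // API server address, token, and resource names are placeholders.

        const K8S_API = "https://k8s-api.example:6443";   // assumed API server address
        const TOKEN = "<service-account-token>";          // assumed bearer token

        async function migrateService(targetEdgeNode: string): Promise<void> {
          // Strategic-merge patch: only the fields being changed are listed.
          const patch = {
            spec: {
              template: {
                spec: {
                  nodeSelector: { "kubernetes.io/hostname": targetEdgeNode },
                },
              },
            },
          };

          // "webrtc-postproc" is a hypothetical deployment name for the edge service.
          const res = await fetch(
            `${K8S_API}/apis/apps/v1/namespaces/default/deployments/webrtc-postproc`,
            {
              method: "PATCH",
              headers: {
                Authorization: `Bearer ${TOKEN}`,
                "Content-Type": "application/strategic-merge-patch+json",
              },
              body: JSON.stringify(patch),
            },
          );
          if (!res.ok) {
            throw new Error(`Migration patch failed: ${res.status}`);
          }
        }

        // Example: called by a handover hook once the UE attaches to a new eNodeB.
        // migrateService("mec-node-02");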

    Video Streaming to Empowered Video Walls

    Video walls are useful for displaying large-format video content. Empowered video walls combine display functionality with computing power; such video walls can display large scientific visualizations. If they can also display high-resolution video streamed over a network, they could enable remote collaboration over scientific data. We propose several methods for network streaming of high-resolution video content to a major type of empowered video wall, the SAGE2 system. For all methods, we evaluate their performance and discuss their scalability and properties. The results should be applicable to other web-based empowered video walls as well.
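
    The abstract does not specify which streaming methods are compared; purely as a generic illustration of what a web-based video-wall client might do (this is not the SAGE2 application API), the sketch below receives a WebRTC video track and renders it onto a canvas tile.

        // Hypothetical sketch: a browser-based video-wall tile that receives a
        // WebRTC video track and draws it to a canvas, using only standard web APIs.

        function attachIncomingVideo(pc: RTCPeerConnection, canvas: HTMLCanvasElement): void {
          const video = document.createElement("video");
          video.autoplay = true;
          video.muted = true;

          pc.ontrack = (event) => {
            // Attach the first incoming stream to a hidden <video> element.
            video.srcObject = event.streams[0];
          };

          const ctx = canvas.getContext("2d");
          const draw = () => {
            if (ctx && video.readyState >= HTMLMediaElement.HAVE_CURRENT_DATA) {
              // A real wall tile would draw only its own crop of the frame;
              // here the whole frame is scaled to the canvas for simplicity.
              ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
            }
            requestAnimationFrame(draw);
          };
          requestAnimationFrame(draw);
        }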

    Casting Virtual Reality (VR) Content Over A Network

    A user of a virtual reality (VR) application uses a head-mounted display (HMD) to view VR content, such as graphics rendered by the VR application in the HMD. While viewing, the VR user can cast the viewed VR content to friends or contacts, enabling them to view the same content. The sharing takes place over a network and works regardless of the locations, devices, or types of networks being used. Audio from the VR user and/or from the VR application is also sent to the other users, e.g., via voice over internet protocol (VoIP). Further, audio from the other users can be sent (also via VoIP) to the VR user and to one another, thereby providing multi-way communication.
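
    The disclosure describes the idea at a high level only; a minimal sketch of one way to cast rendered VR content plus microphone audio over a peer connection, using standard WebRTC APIs rather than the actual implementation, might look like the following. Signalling and the remote receiver are omitted.

        // Hypothetical sketch: capture the canvas the VR app renders into, add the
        // user's microphone, and send both over a WebRTC peer connection so a
        // remote contact can watch and listen.

        async function castVrContent(renderCanvas: HTMLCanvasElement): Promise<RTCPeerConnection> {
          const pc = new RTCPeerConnection();

          // Capture the rendered VR frames as a 30 fps video track.
          const videoStream = renderCanvas.captureStream(30);
          videoStream.getVideoTracks().forEach((t) => pc.addTrack(t, videoStream));

          // Add the VR user's microphone for multi-way voice (VoIP-style) audio.
          const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
          mic.getAudioTracks().forEach((t) => pc.addTrack(t, mic));

          // The offer/answer exchange would go through an application signalling channel.
          const offer = await pc.createOffer();
          await pc.setLocalDescription(offer);
          return pc;
        }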

    Assemble.live: Designing for Schisms in Large Groups in Audio/Video Calls

    Although new communication technologies have compressed the space and latency between participants, leading to new forms of computer-mediated interaction that scale with the number of participants [Klein, 1999], there still exist no audio/video calling solutions that can accommodate the kind of group conversation that takes place in groups of four or more. Groups of this size frequently schism, forming two or more sub-conversations, each with its own independently operating turn-taking system [Egbert, 1997]. This paper proposes that traditional audio/video calling fails to accommodate schisms because a) there is no way to signal intended recipiency, b) there is only one, largely blocking, audio channel, and c) leaving and joining audio/video calls is too cumbersome to support schisming. We develop a solution, assemble.live, that enables users to move throughout a virtual room and is designed to let multiple sub-conversations emerge. From a few recorded sessions of use, it is clear that while this does allow multiple conversations to emerge, its affordances for signalling intended recipiency are insufficient.
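
    The paper's implementation is not detailed in the abstract; one common way to let sub-conversations emerge in a virtual room is distance-based attenuation of each participant's audio, sketched below with the Web Audio API. The falloff model and distance thresholds are assumptions for illustration, not assemble.live's actual design.

        // Hypothetical sketch: attenuate each remote participant's audio by the
        // distance between avatars in a virtual room, so nearby users are loud and
        // distant users fade out, allowing sub-conversations (schisms) to form.

        const audioCtx = new AudioContext();

        interface Participant {
          x: number;
          y: number;
          gain: GainNode;
        }

        function addParticipant(stream: MediaStream, x: number, y: number): Participant {
          const source = audioCtx.createMediaStreamSource(stream);
          const gain = audioCtx.createGain();
          source.connect(gain).connect(audioCtx.destination);
          return { x, y, gain };
        }

        // Assumed falloff: full volume within 100 px, silent beyond 600 px.
        function updateVolume(listenerX: number, listenerY: number, p: Participant): void {
          const d = Math.hypot(p.x - listenerX, p.y - listenerY);
          const volume = d <= 100 ? 1 : d >= 600 ? 0 : 1 - (d - 100) / 500;
          p.gain.gain.setTargetAtTime(volume, audioCtx.currentTime, 0.05);
        }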