
    Application of multiple-wireless to a visual localisation system for emergency services

    In this paper we discuss the application of multiple-wireless technology to a practical context-enhanced service system called ViewNet. ViewNet develops technologies to support enhanced coordination and cooperation between operational teams in the emergency services and the police. Distributed localisation of users and mapping of environments, implemented over a secure wireless network, enables teams of operatives to search and map an incident area rapidly and in full coordination with each other and with a control centre. Sensing is based on fusing absolute positioning systems (UWB and GPS) with relative localisation and mapping from on-body or handheld vision and inertial sensors. This paper focuses on the case for multiple-wireless capabilities in such a system and the benefits they can provide. We describe our work on developing a software API that supports both WLAN and TETRA in ViewNet and provides a basis for incorporating future wireless technologies.
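
    The paper itself defines the API; as a rough illustration of what a multiple-wireless abstraction can look like, the sketch below routes a payload over the first available bearer in preference order. All class and method names (WirelessBearer, MultiBearerLink, and so on) are invented for this sketch, not taken from ViewNet.

    ```python
    # Hypothetical sketch of a multiple-wireless abstraction; all names here
    # are illustrative, not from the ViewNet paper.
    from abc import ABC, abstractmethod


    class WirelessBearer(ABC):
        """Common interface over heterogeneous bearers (WLAN, TETRA, ...)."""

        @abstractmethod
        def available(self) -> bool: ...

        @abstractmethod
        def send(self, payload: bytes) -> None: ...


    class WlanBearer(WirelessBearer):
        def available(self) -> bool:
            return True  # stub: would probe the WLAN interface

        def send(self, payload: bytes) -> None:
            print(f"WLAN tx: {len(payload)} bytes")


    class TetraBearer(WirelessBearer):
        def available(self) -> bool:
            return True  # stub: would query the TETRA radio

        def send(self, payload: bytes) -> None:
            print(f"TETRA tx: {len(payload)} bytes")


    class MultiBearerLink:
        """Sends over the first available bearer, in preference order."""

        def __init__(self, bearers: list[WirelessBearer]) -> None:
            self.bearers = bearers

        def send(self, payload: bytes) -> None:
            for bearer in self.bearers:
                if bearer.available():
                    bearer.send(payload)
                    return
            raise RuntimeError("no bearer available")


    link = MultiBearerLink([WlanBearer(), TetraBearer()])
    link.send(b"position update")
    ```

    Keeping WLAN and TETRA behind one interface is what allows future bearers to be added without touching callers, which is the extensibility benefit the abstract claims.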

    RFID Localisation For Internet Of Things Smart Homes: A Survey

    The Internet of Things (IoT) enables numerous business opportunities in fields as diverse as e-health, smart cities, and smart homes, among many others. The IoT incorporates multiple long-range, short-range, and personal-area wireless networks and technologies into the design of IoT applications. Localisation in indoor positioning systems plays an important role in the IoT. Location-based IoT applications range from tracking objects and people in real time, asset management, agriculture, and assisted monitoring technologies for healthcare, to smart homes. Radio-frequency indoor positioning systems such as Radio Frequency Identification (RFID) are a key enabling technology for the IoT due to their cost-effectiveness, high read rates, automatic identification and, importantly, their energy efficiency. This paper reviews the state-of-the-art RFID technologies in IoT smart home applications. It presents several comparative studies of RFID-based projects in smart homes and discusses the applications, techniques, algorithms, and challenges of adopting RFID technologies in IoT smart home systems.
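
    As a concrete taste of the algorithms such a survey covers, here is a minimal sketch of LANDMARC-style localisation, a classic RFID technique that estimates a tracked tag's position from the k reference tags nearest to it in signal (RSSI) space. All positions and RSSI values below are made up.

    ```python
    # Minimal sketch of LANDMARC-style RFID localisation: estimate a tracked
    # tag's position as the weighted average of its k nearest reference tags
    # in RSSI space. All data below are illustrative only.
    import math

    # (x, y) positions of reference tags with known locations
    ref_positions = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]
    # RSSI of each reference tag as seen by each of three readers
    ref_rssi = [[-40, -55, -60], [-55, -42, -58],
                [-58, -60, -41], [-62, -52, -50]]
    # RSSI of the tracked tag at the same readers
    tag_rssi = [-45, -53, -57]

    def signal_distance(a, b):
        """Euclidean distance in signal (RSSI) space."""
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    k = 3
    dists = [signal_distance(tag_rssi, r) for r in ref_rssi]
    nearest = sorted(range(len(dists)), key=lambda i: dists[i])[:k]

    # Weight each neighbour by inverse squared signal distance
    weights = [1.0 / (dists[i] ** 2 + 1e-9) for i in nearest]
    total = sum(weights)
    x = sum(w * ref_positions[i][0] for w, i in zip(weights, nearest)) / total
    y = sum(w * ref_positions[i][1] for w, i in zip(weights, nearest)) / total
    print(f"estimated position: ({x:.2f}, {y:.2f})")
    ```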

    The LAB@FUTURE Project - Moving Towards the Future of E-Learning

    This paper presents Lab@Future, an advanced e-learning platform that uses novel Information and Communication Technologies to support and expand laboratory teaching practices. For this purpose, Lab@Future uses real and computer-generated objects that are interfaced using mechatronic systems, augmented reality, mobile technologies and 3D multi-user environments. The main aim is to develop and demonstrate technological support for practical experiments in the following subjects: Fluid Dynamics - Science subject in Germany, Geometry - Mathematics subject in Austria, and History and Environmental Awareness - Arts and Humanities subjects in Greece and Slovenia. In order to pedagogically enhance the design and functional aspects of this e-learning technology, we are investigating the dialogical operationalisation of learning theories so as to leverage our understanding of teaching and learning practices in the targeted context of deployment.

    Sensor System for Rescue Robots

    A majority of rescue worker fatalities are a result of on-scene responses. Existing technologies help first responders in no-light scenarios, and there are even robots that can navigate radioactive areas. However, none can both be deployed quickly and enter hard-to-reach or unsafe areas in an emergency event such as an earthquake or a storm that damages a structure. In this project we created a sensor platform system to augment existing robotic solutions so that rescue workers can search for people in danger while avoiding preventable injury or death and saving time and resources. Our results showed that we were able to build a 2D map of the room, updated as the robot moves, on a display, while also showing a live thermal image of the scene in front of the system. The system is also capable of taking a digital picture upon a triggering event and displaying it on the computer screen. We found that data transfer plays a major role in making different programs such as Arduino and Processing interact with each other, and this needs to be accounted for when improving the project. In particular, the system is currently wired and would need to deliver data wirelessly to be of practical use. Furthermore, we made an initial exploration of SLAM techniques; if the platform were to become autonomous, further research into these algorithms would make that autonomy feasible.
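
    The abstract singles out data transfer between programs (Arduino and Processing) as a pain point. A common remedy is to frame sensor readings over the serial link with a start byte, a length, and a checksum; the sketch below shows a host-side reader in Python (with pyserial) rather than Processing, and the packet layout is invented for illustration.

    ```python
    # Illustrative host-side reader for framed sensor packets from a
    # microcontroller. Packet layout (made up for this sketch): 0xAA start
    # byte, 1-byte payload length, payload, 1-byte XOR checksum.
    # Requires the pyserial package.
    import serial

    def read_packet(port: serial.Serial) -> bytes | None:
        """Block until a well-formed packet arrives; return its payload."""
        while port.read(1) != b"\xaa":   # resynchronise on the start byte
            pass
        length = port.read(1)[0]
        payload = port.read(length)
        checksum = port.read(1)[0]
        xor = 0
        for b in payload:
            xor ^= b
        return payload if xor == checksum else None  # drop corrupted packets

    if __name__ == "__main__":
        with serial.Serial("/dev/ttyUSB0", 115200) as port:  # placeholder port
            while True:
                payload = read_packet(port)
                if payload is not None:
                    print("sensor frame:", payload.hex())
    ```

    Resynchronising on the start byte is what keeps the host and microcontroller streams aligned after a dropped byte, which is the kind of failure the project ran into.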

    Vehicle Accident Alert and Locator (VAAL)

    An emergency is a deviation from planned or expected behaviour, or a course of events that endangers or adversely affects people, property, or the environment. This paper reports a complete research work on an automobile accident emergency alert system. The authors programmed a GPS/GSM module incorporating a crash detector to report automatically, via the GSM communication platform (using SMS messaging), to the nearest agencies such as police posts, hospitals and fire services, giving the exact position of the point where the crash occurred. This allows early response and rescue of accident victims, saving lives and property. The paper reports the experimental results and gives appropriate conclusions and recommendations.
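
    The reporting path described above (crash detector to SMS carrying the GPS position) is typically driven with the standard GSM AT commands AT+CMGF and AT+CMGS. Below is a sketch of that step in Python with pyserial; the serial port name, phone number and coordinates are placeholders, and the real system runs on the module's own firmware rather than a host PC.

    ```python
    # Illustrative sketch of the alert path: on crash detection, send an SMS
    # containing the GPS fix through a GSM modem using standard AT commands.
    # Port name, phone number and coordinates are placeholders.
    import time
    import serial

    def send_alert_sms(port_name: str, number: str, lat: float, lon: float) -> None:
        with serial.Serial(port_name, 9600, timeout=2) as modem:
            modem.write(b"AT+CMGF=1\r")            # select SMS text mode
            time.sleep(0.5)
            modem.write(f'AT+CMGS="{number}"\r'.encode())
            time.sleep(0.5)
            body = f"CRASH DETECTED at lat={lat:.5f}, lon={lon:.5f}"
            modem.write(body.encode() + b"\x1a")   # Ctrl-Z terminates the message
            time.sleep(2)
            print(modem.read_all().decode(errors="replace"))

    # e.g. triggered by the crash detector with the latest GPS fix:
    # send_alert_sms("/dev/ttyUSB0", "+1234567890", 6.45407, 3.39467)
    ```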

    Biosignal and context monitoring: Distributed multimedia applications of body area networks in healthcare

    We are investigating the use of Body Area Networks (BANs), wearable sensors and wireless communications for the measurement, processing, transmission, interpretation and display of biosignals. The goal is to provide telemonitoring and teletreatment services for patients. The remote health professional can view a multimedia display which includes graphical and numerical representations of patients' biosignals. The addition of feedback control enables teletreatment services; teletreatment can be delivered to the patient via multiple modalities including tactile, text, auditory and visual. We describe the health BAN, a generic mobile health service platform, and two context-aware applications. The epilepsy application illustrates processing and interpretation of multi-source, multimedia BAN data. The chronic pain application illustrates multi-modal feedback and treatment, with patients able to view their own biosignals on their handheld device.
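
    As a toy illustration of the feedback-control idea (biosignal in, treatment cue out), the sketch below flags heart-rate samples that leave a target band. The thresholds and sample stream are invented and bear no relation to the clinical logic of the actual applications.

    ```python
    # Toy illustration of biosignal monitoring with feedback: stream samples,
    # flag values outside a target band, and emit a feedback cue. The band
    # and the sample stream are invented for this sketch.
    from dataclasses import dataclass

    @dataclass
    class Sample:
        t: float           # seconds since session start
        heart_rate: float  # beats per minute

    LOW, HIGH = 50.0, 120.0  # illustrative alert band

    def feedback(sample: Sample) -> str | None:
        if sample.heart_rate > HIGH:
            return f"t={sample.t:.0f}s: HR {sample.heart_rate:.0f} high, cue rest"
        if sample.heart_rate < LOW:
            return f"t={sample.t:.0f}s: HR {sample.heart_rate:.0f} low, alert carer"
        return None

    stream = [Sample(0, 72), Sample(10, 128), Sample(20, 95), Sample(30, 44)]
    for s in stream:
        cue = feedback(s)
        if cue:
            print(cue)
    ```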

    A mosaic of eyes

    Autonomous navigation is a traditional research topic in intelligent robotics and vehicles: a robot must perceive its environment through onboard sensors such as cameras or laser scanners in order to drive to its goal. Most research to date has focused on developing a large and smart brain to give robots autonomous capability. There are three fundamental questions to be answered by an autonomous mobile robot: 1) Where am I going? 2) Where am I? 3) How do I get there? To answer these basic questions, a robot requires a massive spatial memory and considerable computational resources to accomplish perception, localization, path planning, and control. It is not yet possible to deliver the centralized intelligence required for real-life applications such as autonomous ground vehicles and wheelchairs in care centers. In fact, most autonomous robots try to mimic how humans navigate, interpreting images taken by cameras and then making decisions accordingly. They may encounter the following difficulties.

    Semantic multimedia remote display for mobile thin clients

    Current remote display technologies for mobile thin clients convert practically all types of graphical content into sequences of images rendered by the client. Consequently, important information concerning the content semantics is lost. The present paper goes beyond this bottleneck by developing a semantic multimedia remote display. The principle consists of representing the graphical content as a real-time interactive multimedia scene graph. The underlying architecture features novel components for scene-graph creation and management, as well as for user-interactivity handling. The experimental setup considers the Linux X Window System and the BiFS/LASeR multimedia scene technologies on the server and client sides, respectively. The implemented solution was benchmarked against currently deployed solutions (VNC and Microsoft RDP) on text-editing and WWW-browsing applications. The quantitative assessments demonstrate: (1) visual quality expressed by seven objective metrics, e.g., PSNR values between 30 and 42 dB and SSIM values larger than 0.9999; (2) downlink bandwidth gain factors ranging from 2 to 60; (3) real-time user event management expressed by network round-trip-time reduction by factors of 4-6 and by uplink bandwidth gain factors from 3 to 10; (4) feasible CPU activity, higher than in the RDP case but reduced by a factor of 1.5 with respect to VNC-HEXTILE.
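
    For reference, PSNR figures like the 30 to 42 dB quoted above come from the standard definition: peak signal power over mean squared error, in decibels. The snippet below computes it, using random images as stand-ins for a reference frame and its remote rendering.

    ```python
    # PSNR as used in the benchmark above: 10*log10(peak^2 / MSE). The two
    # images are random stand-ins for a reference frame and its rendering.
    import numpy as np

    def psnr(reference: np.ndarray, rendered: np.ndarray, peak: float = 255.0) -> float:
        diff = reference.astype(np.float64) - rendered.astype(np.float64)
        mse = np.mean(diff ** 2)
        if mse == 0:
            return float("inf")  # identical images
        return 10.0 * np.log10(peak ** 2 / mse)

    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (480, 640), dtype=np.uint8)
    noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
    print(f"PSNR: {psnr(ref, noisy):.1f} dB")
    ```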

    A component-based approach towards mobile distributed and collaborative PTAM

    Having numerous sensors on board, smartphones have rapidly become a very attractive platform for augmented reality applications. Although the computational resources of mobile devices are growing, they still cannot match commonly available desktop hardware, which results in downscaled versions of well-known computer vision techniques that sacrifice accuracy for speed. We propose a component-based approach towards mobile augmented reality applications, where components can be configured and distributed at runtime, resulting in a performance increase by offloading CPU-intensive tasks to a server in the network. By sharing distributed components between multiple users, collaborative AR applications can easily be developed. In this poster, we present a component-based implementation of the Parallel Tracking And Mapping (PTAM) algorithm, enabling components to be distributed to achieve a mobile, distributed version of the original PTAM algorithm, as well as a collaborative scenario.
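
    The poster does not show the component interface, so the sketch below only illustrates the general idea: the same processing step can run in-process or be offloaded at runtime depending on conditions. All names are invented; the real PTAM components are far more involved.

    ```python
    # Illustrative sketch of runtime component distribution: the same mapping
    # step can run on the device or be offloaded to a server. Class and
    # method names are invented, not from the poster.
    from abc import ABC, abstractmethod

    class Component(ABC):
        @abstractmethod
        def process(self, frame: bytes) -> dict: ...

    class LocalMapper(Component):
        def process(self, frame: bytes) -> dict:
            # placeholder for the CPU-intensive tracking/mapping step
            return {"pose": (0.0, 0.0, 0.0), "where": "device"}

    class RemoteMapper(Component):
        def __init__(self, host: str) -> None:
            self.host = host

        def process(self, frame: bytes) -> dict:
            # placeholder for serialising the frame and calling the server
            return {"pose": (0.0, 0.0, 0.0), "where": self.host}

    def pick_mapper(battery_low: bool, server: str | None) -> Component:
        """Runtime configuration: offload when a server is reachable and needed."""
        if battery_low and server:
            return RemoteMapper(server)
        return LocalMapper()

    mapper = pick_mapper(battery_low=True, server="10.0.0.2")
    print(mapper.process(b"\x00" * 16))
    ```

    Sharing a RemoteMapper-like component between clients is also how the collaborative scenario falls out of the same abstraction: several users' components can be bound to one server-side map.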
