410 research outputs found

    Movement Analytics: Current Status, Application to Manufacturing, and Future Prospects from an AI Perspective

    Data-driven decision making is becoming an integral part of manufacturing companies. Data is collected and commonly used to improve efficiency and produce high-quality items for customers. IoT-based and other forms of object tracking are emerging tools for collecting movement data of objects/entities (e.g. human workers, moving vehicles, trolleys, etc.) over space and time. Movement data can provide valuable insights, such as process bottlenecks, resource utilization and effective working time, that can be used for decision making and improving efficiency. Turning movement data into valuable information for industrial management and decision making requires analysis methods; we refer to this process as movement analytics. The purpose of this document is to review the current state of work on movement analytics, both in manufacturing and more broadly. We survey relevant work from both a theoretical perspective and an application perspective. From the theoretical perspective, we put an emphasis on useful methods from two research areas: machine learning and logic-based knowledge representation. We also review their combinations in view of movement analytics, and we discuss promising areas for future development and application. Furthermore, we touch on constraint optimization. From an application perspective, we review applications of these methods to movement analytics in a general sense and across various industries. We also describe currently available commercial off-the-shelf products for tracking in manufacturing, and we give an overview of the main concepts of digital twins and their applications.
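    As a toy illustration of the kind of analysis the survey has in mind, the sketch below derives per-zone dwell times (a proxy for effective working time and bottleneck detection) from a timestamped movement trace; the trace format and entity names are hypothetical, not taken from the paper:

    ```python
    from collections import defaultdict

    def zone_dwell_times(events):
        """Total time each entity spends in each zone.

        `events` is a list of (entity, zone, timestamp) records sorted by
        timestamp; an entity is assumed to stay in its last reported zone
        until its next event.
        """
        last = {}                    # entity -> (zone, timestamp of entry)
        dwell = defaultdict(float)   # (entity, zone) -> accumulated seconds
        for entity, zone, t in events:
            if entity in last:
                prev_zone, prev_t = last[entity]
                dwell[(entity, prev_zone)] += t - prev_t
            last[entity] = (zone, t)
        return dict(dwell)

    trace = [
        ("trolley-1", "assembly", 0.0),
        ("trolley-1", "paint", 120.0),
        ("trolley-1", "dispatch", 180.0),
    ]
    print(zone_dwell_times(trace))
    # {('trolley-1', 'assembly'): 120.0, ('trolley-1', 'paint'): 60.0}
    ```

    Aggregating such dwell times per zone across all entities is one simple way the abstract's "process bottlenecks" could be surfaced.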

    Cleansing Indoor RFID Tracking Data


    Context-based scene recognition from visual data in smart homes: an Information Fusion approach

    Ambient Intelligence (AmI) aims at the development of computational systems that process data acquired by sensors embedded in the environment to support users in everyday tasks. Visual sensors, however, have been scarcely used in this kind of application, even though they provide very valuable information about scene objects: position, speed, color, texture, etc. In this paper, we propose a cognitive framework for the implementation of AmI applications based on visual sensor networks. The framework, inspired by the Information Fusion paradigm, combines a priori context knowledge represented with ontologies with real-time single-camera data to support logic-based high-level local interpretation of the current situation. In addition, the system is able to automatically generate feedback recommendations to adjust data acquisition procedures. Information about recognized situations is eventually collected by a central node to obtain an overall description of the scene and consequently trigger AmI services. We show the extensible and adaptable nature of the approach with a prototype system in a smart home scenario. This research activity is supported in part by Projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, CAM CONTEXTS (S2009/TIC-1485) and DPS2008-07029-C02-02.
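    A minimal sketch of what logic-based local interpretation over a priori context and per-camera observations could look like; the rooms, objects and rules below are illustrative stand-ins, not the paper's actual ontology:

    ```python
    # A priori context knowledge: which objects each room is known to contain.
    CONTEXT = {"kitchen": {"stove", "kettle"}, "living_room": {"sofa", "tv"}}

    # Rules: (room, objects that must be observed, inferred situation).
    RULES = [
        ("kitchen", {"person", "stove"}, "cooking"),
        ("living_room", {"person", "tv"}, "watching_tv"),
    ]

    def interpret(room, observed):
        """Situations whose required objects are all observed by the camera
        and are consistent with the room's a priori context (people move
        between rooms, so 'person' is exempt from the context check)."""
        def context_ok(required):
            return (required - {"person"}) <= CONTEXT.get(room, set())
        return [
            situation
            for r, required, situation in RULES
            if r == room and required <= observed and context_ok(required)
        ]

    print(interpret("kitchen", {"person", "stove"}))  # ['cooking']
    ```

    A central node could then fuse such per-camera situation labels into an overall scene description, as the abstract describes.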

    Home-Explorer: Ontology-Based Physical Artifact Search and Hidden Object Detection System


    Indoor navigation for the visually impaired: enhancements through utilisation of the Internet of Things and deep learning

    Wayfinding and navigation are essential aspects of independent living that heavily rely on the sense of vision. Walking in a complex building requires knowing one's exact location in order to find a suitable path to the desired destination, avoiding obstacles and monitoring orientation and movement along the route. People who do not have access to sight-dependent information, such as that provided by signage, maps and environmental cues, can encounter challenges in achieving these tasks independently. They can rely on assistance from others or maintain their independence by using assistive technologies and the resources provided by smart environments. Over the last few years, several solutions have adapted technological innovations to tackle navigation in indoor environments. However, there remains a significant lack of a complete solution to meet the navigation requirements of visually impaired (VI) people. A single technology cannot address all the navigation difficulties faced. A hybrid solution using Internet of Things (IoT) devices and deep learning techniques to discern the patterns of an indoor environment may help VI people gain the confidence to travel independently. This thesis aims to improve the independence and enhance the journey of VI people in an indoor setting with the proposed framework, using a smartphone. The thesis proposes a novel framework, Indoor-Nav, to provide a VI-friendly path that avoids obstacles and to predict the user's position. The components include Ortho-PATH, Blue Dot for VI People (BVIP), and a deep learning-based indoor positioning model. The work establishes a novel collision-free pathfinding algorithm, Ortho-PATH, to generate a VI-friendly path by sensing a grid-based indoor space. Further, to ensure correct movement, BVIP uses beacons and a smartphone to monitor the movements and relative position of the moving user. In dark areas without external devices, the research tests the feasibility of using sensory information from a smartphone with a pre-trained regression-based deep learning model to predict the user's absolute position. The work accomplishes a diverse range of simulations and experiments to confirm the performance and effectiveness of the proposed framework and its components. The results show that Ortho-PATH is the first pathfinding algorithm of its kind to provide a novel path reflecting the needs of VI people. The approach designs a path alongside walls, avoiding obstacles, and this research benchmarks the approach against other popular pathfinding algorithms. Further, this research develops a smartphone-based application to test the trajectories of a moving user in an indoor environment.
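    The thesis does not reproduce Ortho-PATH's code here; as a hedged illustration of a collision-free, wall-following grid search of the kind described, a Dijkstra variant that makes wall-adjacent cells cheaper (so the cheapest route runs alongside walls) might look roughly like this:

    ```python
    import heapq

    def wall_hugging_path(grid, start, goal):
        """Dijkstra over a 0/1 occupancy grid (1 = wall/obstacle).  Free cells
        adjacent to a wall or to the grid boundary cost 1, open-space cells
        cost 3, so the cheapest route tends to hug walls while still avoiding
        obstacles.  Returns the path as a list of (row, col) cells, or None."""
        rows, cols = len(grid), len(grid[0])
        steps = ((1, 0), (-1, 0), (0, 1), (0, -1))

        def near_wall(r, c):
            # Out-of-bounds neighbours count as walls (the building boundary).
            return any(
                not (0 <= r + dr < rows and 0 <= c + dc < cols)
                or grid[r + dr][c + dc] == 1
                for dr, dc in steps
            )

        frontier = [(0, start, [start])]
        seen = set()
        while frontier:
            dist, cell, path = heapq.heappop(frontier)
            if cell == goal:
                return path
            if cell in seen:
                continue
            seen.add(cell)
            r, c = cell
            for dr, dc in steps:
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    cost = 1 if near_wall(nr, nc) else 3
                    heapq.heappush(frontier, (dist + cost, (nr, nc), path + [(nr, nc)]))
        return None

    grid = [[0] * 4 for _ in range(4)]   # a 4x4 open room; the boundary acts as a wall
    path = wall_hugging_path(grid, (0, 0), (3, 3))
    print(path)  # a 7-cell route that stays along the boundary
    ```

    The cost weighting (1 vs. 3) is an assumed parameter; the essential idea is only that open-space cells are penalised relative to wall-adjacent ones.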

    Location estimation in smart homes setting with RFID systems

    Indoor localisation technologies are a core component of Smart Homes. Many applications within Smart Homes benefit from localisation technologies to determine the locations of things, objects and people. With its attractive characteristics, Radio Frequency Identification (RFID) has become one of the enabling technologies of the Internet of Things (IoT), connecting objects and things wirelessly. RFID is a promising technology for indoor positioning that not only uniquely identifies entities but also locates RFID tags affixed to objects or subjects, in both stationary and real-time settings. The rapid advancement of RFID-based systems has sparked the interest of Smart Home researchers in employing RFID technologies to assist with optimising (non-)pervasive healthcare systems in automated homes. In this research, localisation techniques and enabling positioning sensors are investigated. Passive RFID sensors are used to localise passive tags affixed to Smart Home objects and to track the movement of individuals in stationary and real-time settings. In this study, we develop an affordable passive localisation platform using inexpensive passive RFID sensors. To fulfil this aim, a passive localisation framework using minimum tracking resources (RFID sensors) has been designed. A localisation prototype and application that examine the RFID tags affixed to objects were then developed to evaluate the proposed localisation framework. Localising algorithms were utilised to enhance the accuracy of localising one particular passive tag affixed to target objects. This thesis uses a general enough approach that it could be applied more widely to other applications in addition to Health Smart Homes. A passive RFID localising framework is designed and developed through systematic procedures. A localising platform is built to test the proposed framework, along with an RFID tracking application developed in the Java programming language and further data analysis in MATLAB. This project applies localisation procedures and evaluates them experimentally. The experimental study positively confirms that the proposed localisation framework is capable of enhancing the accuracy of the location of the tracked individual. The low-cost design uses only one passive RFID target tag, one RFID reader and three to four antennas.
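    As an illustration of the kind of low-cost, few-antenna localisation described (not the thesis's actual algorithm), a log-distance path-loss model combined with a weighted centroid over three antenna positions could be sketched as follows; the `tx_power` and path-loss exponent `n` are assumed calibration values:

    ```python
    def rssi_to_distance(rssi, tx_power=-30.0, n=2.0):
        """Log-distance path-loss model: d = 10 ** ((tx_power - rssi) / (10 n)),
        where tx_power is the RSSI measured at 1 m and n is the path-loss
        exponent (both assumed, environment-dependent calibration values)."""
        return 10 ** ((tx_power - rssi) / (10 * n))

    def weighted_centroid(antennas, rssis):
        """Estimate the tag position as the centroid of the antenna positions,
        weighted by inverse estimated distance (closer antennas count more)."""
        weights = [1.0 / rssi_to_distance(r) for r in rssis]
        total = sum(weights)
        x = sum(w * ax for w, (ax, ay) in zip(weights, antennas)) / total
        y = sum(w * ay for w, (ax, ay) in zip(weights, antennas)) / total
        return x, y

    antennas = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]   # three-antenna setup
    print(weighted_centroid(antennas, [-50.0, -60.0, -60.0]))
    ```

    With equal readings at all antennas the estimate degenerates to the plain centroid; stronger readings pull the estimate toward the corresponding antenna.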

    A fuzzy logic approach to localisation in wireless local area networks

    This thesis examines the use and value of fuzzy sets, fuzzy logic and fuzzy inference in wireless positioning systems and solutions. Various fuzzy-related techniques and methodologies are reviewed and investigated, including a comprehensive review of fuzzy-based positioning and localisation systems. The thesis is aimed at the development of a novel positioning technique which enhances the well-known k-nearest-neighbour (kNN) and fingerprinting algorithms with received signal strength (RSS) measurements. A fuzzy inference system is put forward for the generation of weightings for selected nearest neighbours and the elimination of outliers. In this study, Monte Carlo simulations of the proposed multivariable fuzzy localisation (MVFL) system showed a significant improvement in the root mean square error (RMSE) of position estimation compared with well-known localisation algorithms. The simulation outcomes were confirmed empirically in laboratory tests under various scenarios. The proposed technique uses available indoor wireless local area network (WLAN) infrastructure and requires no additional hardware, no modification to the network, and no active user participation. The thesis aims to benefit practitioners and academic researchers in positioning systems.
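    A hedged sketch of fuzzy-weighted kNN fingerprinting in the spirit of the proposed MVFL system; the Gaussian-style membership function, its `spread` parameter and the fingerprint data are illustrative assumptions, not the thesis's actual design:

    ```python
    import math

    def fuzzy_weight(d, spread=10.0):
        """Fuzzy membership over signal-space distance: fingerprints close to
        the observation get weight near 1, distant ones decay toward 0
        (effectively down-weighting outliers)."""
        return math.exp(-((d / spread) ** 2))

    def fuzzy_knn_locate(fingerprints, rss, k=3):
        """fingerprints: list of ((x, y), rss_vector) calibration records;
        rss: the observed RSS vector.  Returns a weighted position estimate
        over the k nearest fingerprints in signal space."""
        scored = sorted((math.dist(f, rss), pos) for pos, f in fingerprints)[:k]
        weights = [fuzzy_weight(d) for d, _ in scored]
        total = sum(weights) or 1.0
        x = sum(w * p[0] for w, (_, p) in zip(weights, scored)) / total
        y = sum(w * p[1] for w, (_, p) in zip(weights, scored)) / total
        return x, y

    db = [((0.0, 0.0), [-40.0, -70.0]), ((10.0, 0.0), [-70.0, -40.0])]
    print(fuzzy_knn_locate(db, [-41.0, -69.0], k=2))  # close to (0.0, 0.0)
    ```

    Compared with plain kNN averaging, the fuzzy weights let a very close fingerprint dominate while a distant (outlier) neighbour contributes almost nothing.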

    Activity, context, and plan recognition with computational causal behavior models

    The objective of this thesis is to answer the question "how to achieve efficient sensor-based reconstruction of causal structures of human behaviour in order to provide assistance?". To answer this question, the concept of Computational Causal Behaviour Models (CCBMs) is introduced. CCBM allows the specification of human behaviour by means of preconditions and effects and employs Bayesian filtering techniques to reconstruct action sequences from noisy and ambiguous sensor data. Furthermore, a novel approximate inference algorithm, the Marginal Filter, is introduced.
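    As a simplified illustration of the idea (not the CCBM toolkit or the Marginal Filter itself), a discrete Bayesian filter whose state transitions are constrained by action preconditions/effects and reweighted by a noisy sensor can be sketched as follows; the actions, successor structure and probabilities are invented for the example:

    ```python
    ACTIONS = ["idle", "fill_kettle", "boil_water", "pour_tea"]

    # Precondition/effect structure: each action's effects enable only
    # certain successors (an action may also persist across time steps).
    SUCCESSORS = {
        "idle": ["idle", "fill_kettle"],
        "fill_kettle": ["fill_kettle", "boil_water"],
        "boil_water": ["boil_water", "pour_tea"],
        "pour_tea": ["pour_tea", "idle"],
    }

    def sensor_likelihood(action, obs):
        """P(obs | action): a water-flow sensor fires mostly while filling."""
        if obs == "water_flow":
            return 0.8 if action == "fill_kettle" else 0.05
        return 0.2 if action == "fill_kettle" else 0.95

    def bayes_step(belief, obs):
        # Predict: spread probability over precondition-compatible successors.
        predicted = {a: 0.0 for a in ACTIONS}
        for a, p in belief.items():
            for nxt in SUCCESSORS[a]:
                predicted[nxt] += p / len(SUCCESSORS[a])
        # Update: weight by the sensor likelihood, then normalise.
        posterior = {a: p * sensor_likelihood(a, obs) for a, p in predicted.items()}
        z = sum(posterior.values())
        return {a: p / z for a, p in posterior.items()}

    belief = {a: 1.0 / len(ACTIONS) for a in ACTIONS}
    for obs in ["water_flow", "water_flow", "none"]:
        belief = bayes_step(belief, obs)
    print(max(belief, key=belief.get))  # boil_water
    ```

    After two water-flow readings followed by silence, the belief mass has moved along the causal chain from filling to boiling, which is the kind of action-sequence reconstruction from ambiguous data the abstract describes.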