
    Evaluating Machine Learning Techniques for Smart Home Device Classification

    Smart devices in the Internet of Things (IoT) have transformed the management of personal and industrial spaces. Leveraging inexpensive computing, smart devices enable remote sensing and automated control over a diverse range of processes. Even as IoT devices provide numerous benefits, it is vital that their emerging security implications are studied. IoT device design typically focuses on cost efficiency and time to market, leading to limited built-in encryption, questionable supply chains, and poor data security. In a 2017 report, the United States Government Accountability Office recommended that the Department of Defense investigate the risks IoT devices pose to operations security, information leakage, and endangerment of senior leaders [1]. Recent research has shown that it is possible to model a subject’s pattern-of-life through data leakage from Bluetooth Low Energy (BLE) and Wi-Fi smart home devices [2]. A key step in establishing pattern-of-life is the identification of the device types within the smart home. Device type is defined as the functional purpose of the IoT device, e.g., camera, lock, and plug. This research hypothesizes that machine learning algorithms can be used to accurately perform classification of smart home devices. To test this hypothesis, a Smart Home Environment (SHE) is built using a variety of commercially available BLE and Wi-Fi devices. SHE produces actual smart device traffic that is used to create a dataset for machine learning classification. Six device types are included in SHE: door sensors, locks, and temperature sensors using BLE, and smart bulbs, cameras, and smart plugs using Wi-Fi. In addition, a device classification pipeline (DCP) is designed to collect and preprocess the wireless traffic, extract features, and produce tuned models for testing. K-nearest neighbors (KNN), linear discriminant analysis (LDA), and random forests (RF) classifiers are built and tuned for experimental testing.
During this experiment, the classifiers are tested on their ability to distinguish device types in a multiclass classification scheme. Classifier performance is evaluated using the Matthews correlation coefficient (MCC), mean recall, and mean precision metrics. Using all available features, the classifier with the best overall performance is the KNN classifier. The KNN classifier was able to identify BLE device types with an MCC of 0.55, a mean precision of 54%, and a mean recall of 64%, and Wi-Fi device types with an MCC of 0.71, a mean precision of 81%, and a mean recall of 81%. Experimental results support the hypothesis that machine learning can classify IoT device types with a high level of performance, but more work is necessary to build a more robust classifier.
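As an illustrative sketch only (not the paper's DCP implementation), the KNN step above amounts to a majority vote among the nearest training samples in feature space. The feature vectors and device labels below are hypothetical stand-ins for traffic features such as mean packet size and packet rate:

```python
from collections import Counter
import math

def knn_predict(train, labels, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (Euclidean distance)."""
    nearest = sorted(range(len(train)),
                     key=lambda i: math.dist(train[i], x))[:k]
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy feature vectors (mean packet size, packets/sec) -- illustrative only.
train = [(60, 5), (62, 6), (400, 30), (390, 28), (120, 1), (118, 2)]
labels = ["lock", "lock", "camera", "camera", "plug", "plug"]

print(knn_predict(train, labels, (61, 5)))   # -> lock
print(knn_predict(train, labels, (395, 29))) # -> camera
```

In practice the paper's pipeline would also tune k and evaluate the resulting predictions with MCC, mean precision, and mean recall over the multiclass confusion matrix.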

    Movement Detection with Event-Based Cameras: Comparison with Frame-Based Cameras in Robot Object Tracking Using Powerlink Communication

    Event-based cameras are not common in industrial applications despite the fact that they can add multiple advantages for applications with moving objects. In comparison with frame-based cameras, the amount of generated data is very low while keeping the main information in the scene. For an industrial environment with interconnected systems, data reduction becomes very important to avoid network congestion and provide faster response times. However, the use of new sensors such as event-based cameras is not common since they do not usually provide connectivity to industrial buses. This work develops a network node based on a Field Programmable Gate Array (FPGA), including data acquisition and position tracking for an event-based camera. It also includes spurious-event reduction and filtering algorithms while keeping the main features in the scene. The FPGA node also includes the network protocol stack to provide standard communication with other nodes. The POWERLINK (IEC 61158) industrial network is used to connect the FPGA to a controller driving a self-developed two-axis servo-controlled robot. The inverse kinematics model for the robot is included in the controller. To complete the system and provide a comparison, a traditional frame-based camera is also connected to the controller. Response time and robustness to lighting conditions are tested. Results show that, using the event-based camera, the robot can follow the object using fast image recognition, achieving up to 85% data reduction, an average of 99 ms faster position detection, and less dispersion in position detection (4.96 mm vs. 17.74 mm in the Y-axis, and 2.18 mm vs. 8.26 mm in the X-axis) than the frame-based camera, showing that event-based cameras are more stable under light changes. Additionally, event-based cameras offer intrinsic advantages due to the low computational complexity required: small size, low power, reduced data, and low cost. 
Thus, it is demonstrated how the development of new equipment and algorithms can be efficiently integrated into an industrial system, merging commercial industrial equipment with new devices.
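The spurious-event filtering mentioned above is commonly done with a spatiotemporal neighbourhood test: an event is kept only if another event fired nearby shortly before or after, since sensor noise tends to be isolated. A minimal software sketch of that idea, assuming events are (x, y, timestamp-in-µs) tuples (the paper implements its filtering in FPGA hardware, not like this):

```python
def filter_spurious(events, r=1, dt=1000):
    """Keep only events that have at least one spatiotemporal neighbour:
    another event within r pixels and dt microseconds."""
    kept = []
    for i, (x, y, t) in enumerate(events):
        for j, (x2, y2, t2) in enumerate(events):
            if i != j and abs(x - x2) <= r and abs(y - y2) <= r \
                    and abs(t - t2) <= dt:
                kept.append((x, y, t))
                break  # one neighbour is enough
    return kept

events = [(10, 10, 100), (11, 10, 300), (50, 80, 5000)]  # last one is isolated noise
print(filter_spurious(events))  # -> [(10, 10, 100), (11, 10, 300)]
```

This naive version is O(n^2); a hardware implementation would instead keep a per-pixel timestamp map so each incoming event is checked in constant time.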

    Non-Invasive Data Acquisition and IoT Solution for Human Vital Signs Monitoring: Applications, Limitations and Future Prospects

    The rapid development of technology has brought about a revolution in healthcare, stimulating a wide range of smart and autonomous applications in homes, clinics, surgeries, and hospitals. Smart healthcare opens the opportunity for a qualitative advance in the relations between healthcare providers and end-users, for example by enabling doctors to diagnose remotely while optimizing the accuracy of the diagnosis and maximizing the benefits of treatment through close patient monitoring. This paper presents a comprehensive review of non-invasive vital data acquisition and the Internet of Things in healthcare informatics; it reports the challenges in healthcare informatics and suggests future work that would lead to solutions addressing the open challenges in IoT and non-invasive vital data acquisition. In particular, the conducted review has revealed that there has been a daunting challenge in the development of multi-frequency vital IoT systems, and addressing this issue will help enable the vital IoT node to be reachable by the broker across multiple area ranges. Furthermore, the utilization of multi-camera systems has proven its high potential to increase the accuracy of vital data acquisition, but the implementation of such systems has not been fully developed, with unfilled gaps to be bridged. Moreover, the application of deep learning to the real-time analysis of vital data on the node/edge side will enable optimal, instant offline decision making. Finally, the synergistic integration of reliable power management and energy harvesting systems into non-invasive data acquisition has been omitted so far, and the successful implementation of such systems will lead to a smart, robust, sustainable, and self-powered healthcare system.

    Human Pose Detection for Robotic-Assisted and Rehabilitation Environments

    Assistance and rehabilitation robotic platforms must have precise sensory systems for human–robot interaction. Therefore, human pose estimation is a current topic of research, especially for the safety of human–robot collaboration and the evaluation of human biomarkers. Within this field of research, the evaluation of the low-cost, marker-less human pose estimators OpenPose and Detectron 2 has received much attention for their diversity of applications, such as surveillance, sports, videogames, and assessment in human motor rehabilitation. This work aimed to evaluate and compare the angles in the elbow and shoulder joints estimated by OpenPose and Detectron 2 during four typical upper-limb rehabilitation exercises: elbow side flexion, elbow flexion, shoulder extension, and shoulder abduction. A setup of two Kinect 2 RGBD cameras was used to obtain the ground truth of the joint and skeleton estimations during the different exercises. Finally, we provided a numerical comparison (RMSE and MAE) among the angle measurements obtained with OpenPose, Detectron 2, and the ground truth. The results showed how OpenPose outperforms Detectron 2 in these types of applications. Óscar G. Hernández holds a grant from the Spanish Fundación Carolina, the University of Alicante, and the National Autonomous University of Honduras.
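The RMSE and MAE comparison described above reduces to two standard formulas over paired angle series. A minimal sketch with hypothetical elbow-angle values (not data from the paper), where the reference series stands in for the Kinect 2 ground truth:

```python
import math

def mae(est, ref):
    """Mean absolute error between estimated and reference angle series."""
    return sum(abs(a - b) for a, b in zip(est, ref)) / len(ref)

def rmse(est, ref):
    """Root-mean-square error: penalizes large deviations more than MAE."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(est, ref)) / len(ref))

# Hypothetical elbow-flexion angles (degrees): estimator output vs. ground truth.
est = [30.0, 45.5, 61.0, 89.0]
ref = [31.0, 45.0, 60.0, 90.0]
print(round(mae(est, ref), 3))   # -> 0.875
print(round(rmse(est, ref), 3))  # -> 0.901
```

RMSE >= MAE always holds; a large gap between the two indicates occasional large joint-angle errors rather than a uniform bias.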

    Visual Servoing in Robotics

    Visual servoing is a well-known approach to guide robots using visual information. Image processing, robotics, and control theory are combined in order to control the motion of a robot depending on the visual information extracted from the images captured by one or several cameras. On the vision side, a number of issues are currently being addressed by ongoing research, such as the use of different types of image features (or different types of cameras, such as RGBD cameras), image processing at high velocity, and convergence properties. As shown in this book, the use of new control schemes allows the system to behave more robustly, efficiently, or compliantly, with fewer delays. Related issues such as optimal and robust approaches, direct control, path tracking, and sensor fusion are also addressed. Additionally, visual servoing systems are currently being applied in a number of different domains. This book considers various aspects of visual servoing systems, such as the design of new strategies for their application to parallel robots, mobile manipulators, and teleoperation, and the application of this type of control system in new areas.