
    Smart Cameras with onboard Signcryption for Securing IoT Applications

    Cameras are expected to become key sensor devices for various Internet of Things (IoT) applications. Since cameras often capture highly sensitive information, security is a major concern. Our approach to data security for smart cameras is rooted in protecting the captured images by signcryption based on elliptic curve cryptography (ECC). Signcryption achieves resource efficiency by performing data signing and encryption in a single step. By running the signcryption on the sensing unit, we can relax some security assumptions for the camera host unit, which typically runs a complex software stack. We introduce our system architecture motivated by a typical case study for camera-based IoT applications, evaluate security properties, and present performance results of an ARM-based implementation.
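    To make the protection step concrete, the following sketch approximates it with standard primitives from the Python cryptography package: an ECDSA signature plus an ECDH-derived AES-GCM encryption. True signcryption fuses both operations into a single primitive, so this is only an illustrative stand-in; the key names and helper function are assumptions, not the paper's implementation.

```python
# Hedged sketch: sign-then-encrypt with ECC primitives, standing in for true
# signcryption. Uses only documented calls from the "cryptography" package;
# protect_frame() and its argument names are illustrative assumptions.
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def protect_frame(sensor_key: ec.EllipticCurvePrivateKey,
                  host_pub: ec.EllipticCurvePublicKey,
                  frame: bytes) -> tuple[bytes, bytes]:
    """Sign a captured frame on the sensing unit and encrypt it for the host."""
    # 1) Sign the frame with the sensing unit's ECC key (ECDSA).
    signature = sensor_key.sign(frame, ec.ECDSA(hashes.SHA256()))
    # 2) Derive a symmetric key from an ECDH exchange with the host's public key.
    shared = sensor_key.exchange(ec.ECDH(), host_pub)
    aes_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"frame-protection").derive(shared)
    # 3) Encrypt signature + frame with AES-GCM.
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, signature + frame, None)
    return nonce, ciphertext

# Example key setup (in practice, keys would be provisioned to the devices).
sensor_key = ec.generate_private_key(ec.SECP256R1())
host_key = ec.generate_private_key(ec.SECP256R1())
nonce, blob = protect_frame(sensor_key, host_key.public_key(), b"jpeg bytes...")
```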

    Left/Right Hand Segmentation in Egocentric Videos

    Wearable cameras allow people to record their daily activities from a user-centered (First Person Vision) perspective. Due to their favorable location, wearable cameras frequently capture the hands of the user and may thus represent a promising user-machine interaction tool for different applications. Existing First Person Vision methods handle hand segmentation as a background-foreground problem, ignoring two important facts: i) hands are not a single "skin-like" moving element, but a pair of interacting cooperative entities; ii) close hand interactions may lead to hand-to-hand occlusions and, as a consequence, create a single hand-like segment. These facts complicate a proper understanding of hand movements and interactions. Our approach extends traditional background-foreground strategies by including a hand-identification step (left-right) based on a Maxwell distribution of angle and position. Hand-to-hand occlusions are addressed by exploiting temporal superpixels. The experimental results show that, in addition to a reliable left/right hand segmentation, our approach considerably improves traditional background-foreground hand segmentation.
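    As a rough illustration of the hand-identification step, the sketch below scores a detected hand-like segment against per-hand Maxwell distributions over its angle and horizontal position, using scipy.stats.maxwell. The parameter values and the independence assumption between the two cues are illustrative guesses, not values from the paper.

```python
# Hedged sketch: left/right assignment of a hand segment via Maxwell likelihoods.
# PARAMS and identify_hand() are hypothetical; real parameters would be fitted
# on labeled training segments (e.g. with scipy.stats.maxwell.fit).
from scipy.stats import maxwell

# Hypothetical per-hand Maxwell (loc, scale) for angle (rad) and normalized x.
PARAMS = {
    "left":  {"angle": (0.0, 0.8), "xpos": (0.0, 0.3)},
    "right": {"angle": (0.0, 0.8), "xpos": (0.4, 0.3)},
}

def identify_hand(angle_rad: float, x_norm: float) -> str:
    """Return the more likely hand label for one segment (toy example)."""
    scores = {}
    for label, p in PARAMS.items():
        la, sa = p["angle"]
        lx, sx = p["xpos"]
        # Independence assumption: multiply the two Maxwell likelihoods.
        scores[label] = (maxwell.pdf(angle_rad, loc=la, scale=sa) *
                         maxwell.pdf(x_norm, loc=lx, scale=sx))
    return max(scores, key=scores.get)

print(identify_hand(angle_rad=0.9, x_norm=0.8))  # likely "right"
```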

    Advantages of dynamic analysis in HOG-PCA feature space for video moving object classification

    Classification of moving objects for video surveillance applications remains a challenging problem due to inherently changing video conditions such as lighting or resolution. This paper proposes a new approach to vehicle/pedestrian classification based on a static kNN classifier, a dynamic Hidden Markov Model (HMM)-based classifier, and a fusion rule that combines their two outputs. The main novelty consists in studying the dynamic aspects of the moving objects by analysing the trajectories that their features follow in the HOG-PCA feature space, instead of the classical trajectory analysis based on frame coordinates. The complete hybrid system was tested on the VIRAT database and works in real time, yielding up to 100% peak accuracy on the tested video sequences.
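    A minimal sketch of such a hybrid pipeline, assuming scikit-image HOG descriptors, a scikit-learn PCA and kNN, and hmmlearn Gaussian HMMs trained per class; the class names, data shapes, and fusion weight are illustrative assumptions rather than the paper's exact configuration.

```python
# Hedged sketch: static kNN vote on per-frame HOG-PCA descriptors, dynamic HMM
# score on the descriptor trajectory, and a simple weighted fusion of the two.
# pca, knn and the two HMMs are assumed to be pre-fitted on training tracks.
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from hmmlearn.hmm import GaussianHMM

def hog_pca_trajectory(crops, pca: PCA):
    """Project each grayscale object crop into the HOG-PCA feature space."""
    descriptors = np.array([hog(c, pixels_per_cell=(8, 8)) for c in crops])
    return pca.transform(descriptors)

def classify_track(crops, pca, knn: KNeighborsClassifier,
                   hmm_vehicle: GaussianHMM, hmm_pedestrian: GaussianHMM,
                   alpha=0.5):
    traj = hog_pca_trajectory(crops, pca)
    # Static decision: mean per-frame kNN probability of the "vehicle" class.
    vehicle_col = list(knn.classes_).index("vehicle")
    p_static = knn.predict_proba(traj)[:, vehicle_col].mean()
    # Dynamic decision: which class HMM explains the whole trajectory better.
    lls = np.array([hmm_vehicle.score(traj), hmm_pedestrian.score(traj)])
    probs = np.exp(lls - lls.max())
    p_dynamic = (probs / probs.sum())[0]
    # Fusion rule: weighted combination of the two outputs (alpha is a guess).
    score = alpha * p_static + (1 - alpha) * p_dynamic
    return "vehicle" if score > 0.5 else "pedestrian"
```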

    Unsupervised Understanding of Location and Illumination Changes in Egocentric Videos

    Wearable cameras stand out as one of the most promising devices for the upcoming years, and as a consequence, the demand for computer algorithms that automatically understand the videos recorded with them is increasing quickly. Automatic understanding of these videos is not an easy task, and their mobile nature implies important challenges, such as changing light conditions and the unrestricted locations recorded. This paper proposes an unsupervised strategy based on global features and manifold learning to endow wearable cameras with contextual information regarding the light conditions and the location captured. Results show that non-linear manifold methods can capture contextual patterns from global features without requiring large computational resources. As an application case, the proposed strategy is used as a switching mechanism to improve hand detection in egocentric videos.
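    One possible reading of this strategy, sketched below under stated assumptions: a global colour histogram per frame, a non-linear Isomap embedding, and k-means clustering of the embedding into contexts. The feature choice, the specific manifold method, and the number of contexts are guesses for illustration only.

```python
# Hedged sketch: unsupervised discovery of location/illumination contexts from
# global per-frame features via a non-linear manifold embedding plus clustering.
# global_feature() and discover_contexts() are illustrative, not the paper's code.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.cluster import KMeans

def global_feature(frame_rgb, bins=8):
    """Joint RGB histogram as a simple global descriptor (frame: HxWx3 uint8)."""
    hist, _ = np.histogramdd(frame_rgb.reshape(-1, 3),
                             bins=(bins, bins, bins), range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def discover_contexts(frames, n_contexts=4):
    X = np.array([global_feature(f) for f in frames])
    embedding = Isomap(n_components=2).fit_transform(X)   # non-linear manifold
    labels = KMeans(n_clusters=n_contexts, n_init=10).fit_predict(embedding)
    return labels  # per-frame context id, usable as a switching signal
```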

    A bio-inspired logical process for saliency detections in cognitive crowd monitoring

    It is well known from physiological studies that the attention level of adult individuals rapidly decreases after five to twenty minutes [1]. Attention retention for a surveillance operator is a crucial aspect of video surveillance applications and can have a significant impact on identifying relevant events, especially in crowded situations. In this field, advanced mechanisms for selecting and extracting saliency information can improve the performance of autonomous video surveillance systems and increase the effectiveness of human operator support. In particular, crowd monitoring is a central aspect of many practical applications for managing and preventing emergencies due to panic and overcrowding.

    Online pedestrian group walking event detection using spectral analysis of motion similarity graph

    A method for the online identification of groups of moving objects in video is proposed in this paper. At each frame, the method identifies groups of tracked objects with similar local instantaneous motion patterns using spectral clustering on a motion similarity graph. The output of the algorithm is then used to detect the event of more than two objects moving together, as required by the PETS2015 challenge. The performance of the algorithm is evaluated on the PETS2015 dataset.
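    The per-frame grouping step could look like the following sketch, which builds a motion similarity graph from pairwise position and velocity differences and partitions it with scikit-learn's SpectralClustering; the Gaussian kernel, its bandwidths, and the group-size threshold are illustrative assumptions.

```python
# Hedged sketch: spectral clustering on a per-frame motion similarity graph.
# group_objects() and its kernel parameters are illustrative, not the paper's code.
import numpy as np
from sklearn.cluster import SpectralClustering

def group_objects(positions, velocities, n_groups=2, sigma_p=50.0, sigma_v=2.0):
    """positions, velocities: (N, 2) arrays for the N tracked objects in a frame."""
    dp = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    dv = np.linalg.norm(velocities[:, None] - velocities[None, :], axis=-1)
    # Edge weight: high when objects are close AND move similarly.
    affinity = np.exp(-(dp / sigma_p) ** 2 - (dv / sigma_v) ** 2)
    labels = SpectralClustering(n_clusters=n_groups,
                                affinity="precomputed").fit_predict(affinity)
    # Report a "group walking" event for clusters with more than two members.
    return [np.where(labels == g)[0] for g in range(n_groups)
            if np.sum(labels == g) > 2]
```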

    Active Inference for Sum Rate Maximization in UAV-Assisted Cognitive NOMA Networks

    Given the surge in wireless data traffic driven by the emerging Internet of Things (IoT), unmanned aerial vehicles (UAVs), cognitive radio (CR), and non-orthogonal multiple access (NOMA) have been recognized as promising techniques to overcome massive connectivity issues. As a result, there is an increasing need to intelligently improve the channel capacity of future wireless networks. Motivated by active inference from cognitive neuroscience, this paper investigates joint subchannel and power allocation for an uplink UAV-assisted cognitive NOMA network. Maximizing the sum rate is often a highly challenging optimization problem due to dynamic network conditions and power constraints. To address this challenge, we propose an active inference-based algorithm. We transform the sum rate maximization problem into abnormality minimization by utilizing a generalized state-space model to characterize the time-changing network environment. The problem is then solved using an Active Generalized Dynamic Bayesian Network (Active-GDBN). The proposed framework consists of an offline perception stage, in which a UAV employs a hierarchical GDBN structure to learn an optimal generative model of discrete subchannels and continuous power allocation. In the online active inference stage, the UAV dynamically selects discrete subchannels and continuous power to maximize the sum rate of secondary users. By leveraging the errors in each episode, the UAV can adapt its resource allocation policies and belief updating to improve its performance over time. Simulation results demonstrate the effectiveness of our proposed algorithm in terms of cumulative sum rate compared to benchmark schemes. Comment: This paper has been accepted for the 2023 IEEE 9th World Forum on Internet of Things (IEEE WFIoT2023).
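    For reference, the quantity being maximized can be sketched as the uplink NOMA sum rate on a single subchannel with successive interference cancellation (SIC) at the UAV receiver. The bandwidth, noise power, and decoding-order convention below are illustrative assumptions, not the paper's exact system model.

```python
# Hedged sketch: uplink NOMA sum rate on one subchannel with SIC at the receiver.
# noma_sum_rate() and its default constants are illustrative assumptions.
import numpy as np

def noma_sum_rate(powers, gains, bandwidth_hz=1e6, noise_w=1e-13):
    """powers, gains: per-user transmit power (W) and channel gain on one subchannel."""
    powers, gains = np.asarray(powers, float), np.asarray(gains, float)
    rx = powers * gains                          # received powers at the UAV
    order = np.argsort(rx)[::-1]                 # SIC: decode the strongest user first
    total = 0.0
    for i, u in enumerate(order):
        interference = rx[order[i + 1:]].sum()   # users not yet decoded
        sinr = rx[u] / (interference + noise_w)
        total += bandwidth_hz * np.log2(1 + sinr)
    return total

# Example: two secondary users sharing one subchannel.
print(noma_sum_rate(powers=[0.1, 0.2], gains=[1e-9, 3e-10]))
```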

    Towards a unified framework for hand-based methods in First Person Vision

    First Person Vision (Egocentric) video analysis stands nowadays as one of the emerging fields in computer vision. The spread of wearable devices that record exactly what the user is looking at seems ineluctable, and the opportunities and challenges carried by this kind of device are broad. In particular, for the first time a device is intimate enough with the user to record the movements of his or her hands, making hand-based applications one of the most explored areas in First Person Vision. This paper explores the most popular processing steps used to develop hand-based applications and proposes a hierarchical structure that optimally switches between levels to reduce the computational cost of the system and improve its performance.
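    One way to picture such hierarchical switching is a cascade in which cheap levels gate expensive ones, so heavier hand-analysis steps only run when the previous level fires. The level names, stub functions, and ordering below are hypothetical placeholders, not the paper's actual pipeline.

```python
# Hedged sketch: a level cascade where each stage can short-circuit the rest.
# Level, process_frame() and the stub stages are illustrative placeholders.
from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass
class Level:
    name: str
    run: Callable[[Any], Optional[Any]]    # returns None to stop the cascade

def process_frame(frame, levels):
    """Run the levels in order; each level consumes the previous level's output."""
    data = frame
    for level in levels:
        data = level.run(data)
        if data is None:                    # e.g. "no hands found": skip the rest
            return None
    return data

# Hypothetical stub stages standing in for real detectors/segmenters.
def detect_hands(frame):       return True
def segment_hands(frame):      return {"mask": frame}
def identify_left_right(seg):  return {"left": seg, "right": seg}

levels = [
    Level("hand-detection",      lambda f: f if detect_hands(f) else None),
    Level("hand-segmentation",   segment_hands),
    Level("hand-identification", identify_left_right),
]
result = process_frame("frame-pixels", levels)
```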