
    An original framework for understanding human actions and body language by using deep neural networks

    The evolution of both Computer Vision (CV) and Artificial Neural Networks (ANNs) has allowed the development of efficient automatic systems for the analysis of people's behaviour. By studying hand movements it is possible to recognize gestures, often used by people to communicate information in a non-verbal way. These gestures can also be used to control or interact with devices without physically touching them. In particular, sign language and semaphoric hand gestures are the two foremost areas of interest due to their importance in Human-Human Communication (HHC) and Human-Computer Interaction (HCI), respectively. The processing of body movements, in turn, plays a key role in the action recognition and affective computing fields: the former is essential to understand how people act in an environment, while the latter tries to interpret people's emotions based on their poses and movements. Both are essential tasks in many computer vision applications, including event recognition and video surveillance. In this Ph.D. thesis, an original framework for understanding actions and body language is presented. The framework is composed of three main modules: in the first, a method based on Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) is proposed for the recognition of sign language and semaphoric hand gestures; the second module presents a solution based on 2D skeletons and two-branch stacked LSTM-RNNs for action recognition in video sequences; finally, the last module provides a solution for basic non-acted emotion recognition using 3D skeletons and Deep Neural Networks (DNNs). The performance of LSTM-RNNs is explored in depth, due to their ability to model the long-term contextual information of temporal sequences, which makes them suitable for analysing body movements. All the modules were tested on challenging datasets, well known in the state of the art, showing remarkable results compared to current literature methods.
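    The abstract describes the second module only at a high level; the following is a minimal, hypothetical sketch of a two-branch stacked LSTM over 2D skeleton sequences, written in PyTorch. The branch split (raw joint positions vs. frame-to-frame motion), layer sizes, joint count, and class count are illustrative assumptions, not the thesis' exact design.

        import torch
        import torch.nn as nn

        class TwoBranchSkeletonLSTM(nn.Module):
            # Hypothetical sketch: one stacked LSTM branch reads raw 2D joint
            # positions, the other reads frame-to-frame motion; the final hidden
            # states are concatenated and classified into action labels.
            def __init__(self, num_joints=18, hidden=128, num_classes=10):
                super().__init__()
                in_dim = num_joints * 2  # (x, y) per joint, flattened per frame
                self.pos_branch = nn.LSTM(in_dim, hidden, num_layers=2, batch_first=True)
                self.mot_branch = nn.LSTM(in_dim, hidden, num_layers=2, batch_first=True)
                self.classifier = nn.Linear(2 * hidden, num_classes)

            def forward(self, joints):  # joints: (batch, frames, num_joints*2)
                motion = joints[:, 1:] - joints[:, :-1]  # temporal differences
                _, (h_pos, _) = self.pos_branch(joints)
                _, (h_mot, _) = self.mot_branch(motion)
                feats = torch.cat([h_pos[-1], h_mot[-1]], dim=1)
                return self.classifier(feats)  # per-action logits

    For example, logits = TwoBranchSkeletonLSTM()(torch.randn(4, 30, 36)) classifies a batch of four 30-frame sequences of 18 two-dimensional joints.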

    Correlation-based Cross-layer Communication in Wireless Sensor Networks

    Wireless sensor networks (WSNs) are event-based systems that rely on the collective effort of densely deployed sensor nodes continuously observing a physical phenomenon. The spatio-temporal correlation between sensor observations and the advantages of cross-layer design are significant and unique to WSNs. Due to the high density of the network topology, sensor observations are highly correlated in the space domain. Furthermore, the nature of the energy-radiating physical phenomenon creates temporal correlation between consecutive observations of a sensor node. This unique characteristic of WSNs can be exploited through a cross-layer design of communication functionalities to improve the energy efficiency of the network. In this thesis, several key elements are investigated to capture and exploit this correlation for the realization of advanced, efficient communication protocols. A theoretical framework is developed to capture the spatial and temporal correlations in WSNs and to enable the development of efficient communication protocols. Based on this framework, the spatial Correlation-based Collaborative Medium Access Control (CC-MAC) protocol is described, which exploits the spatial correlation in the WSN to achieve efficient medium access. Furthermore, the cross-layer module (XLM), which melts common protocol-layer functionalities into a single module for resource-constrained sensor nodes, is developed. A cross-layer analysis of error control in WSNs is then presented to enable a comprehensive comparison of error control schemes. Finally, the cross-layer packet size optimization framework is described. (Ph.D. thesis. Committee Chair: Ian F. Akyildiz; Committee Members: Douglas M. Blough, Mostafa Ammar, Raghupathy Sivakumar, Ye (Geoffrey) Li.)
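    As a concrete illustration of how such correlations can be exploited, the sketch below assumes a simple exponential decay of correlation with distance and time (one common choice of covariance model in this literature) and a CC-MAC-style rule in which a node suppresses its own report when a sufficiently correlated neighbor is already transmitting. Parameter values and the suppression rule are assumptions for illustration, not the thesis' exact formulation.

        import math

        def spatial_corr(d, theta_s=50.0):
            # Assumed exponential model: correlation between observations of two
            # nodes separated by distance d (meters) decays with scale theta_s.
            return math.exp(-d / theta_s)

        def temporal_corr(dt, theta_t=5.0):
            # Correlation between readings of the same node taken dt seconds apart.
            return math.exp(-dt / theta_t)

        def should_report(my_pos, active_reporters, r_corr=30.0):
            # CC-MAC-style spatial filtering sketch: skip contending for the medium
            # if an already-reporting node lies within the correlation radius,
            # since a highly correlated observation adds little new information.
            return all(math.dist(my_pos, p) > r_corr for p in active_reporters)

    Reducing the number of contending, highly correlated reporters is what lets such a protocol save energy without materially degrading estimation quality at the sink.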

    Investigation on Design and Development Methods for Internet of Things

    The thesis work focuses on development methodologies for the Internet of Things (IoT). A detailed literature survey is presented to discuss the various challenges in software development and in hardware design and deployment. The thesis deals with efficient development methodologies for the deployment of IoT systems, since efficient hardware and software development reduces the risk of system bugs and faults. The optimal placement of IoT devices is a major challenge for monitoring applications. Qualitative Spatial Reasoning (QSR) and Qualitative Temporal Reasoning (QTR) methodologies are proposed to build software systems. The proposed hybrid methodology combines the features of QSR, QTR, and traditional data-based methodologies; it is intended to build software systems and direct them toward the specific goal of obtaining the outputs inherent to the process. The hybrid methodology is tool-supported, detailed, and integrated, fits the general proposal, and mirrors the structure of spatio-temporal reasoning goals. Object-oriented IoT device placement is the major goal of the proposed work. Segmentation and object detection are used to divide the region into sub-regions, and coverage and connectivity are maintained by optimally placing the IoT devices using the RCC8 and TPCC algorithms. Over the years, IoT has offered different solutions in all kinds of areas and contexts, and the diversity of these challenges makes it hard to grasp the underlying principles of the different solutions and to design an appropriate custom implementation in the IoT space. One of the major objectives of the proposed thesis work is therefore to study numerous production-ready IoT offerings, extract recurring proven solution principles, and classify them into spatial patterns. Goal refinement is employed so that complex challenges are solved by breaking them down into simple, achievable sub-goals. The work deals with the major sub-goals, e.g., efficient coverage of the field, connectivity of the IoT devices, spatio-temporal aggregation of the data, and estimation of spatially connected regions of event detection; methods are proposed to achieve each sub-goal for all the different types of spatial patterns. The spatial patterns developed can be used in ongoing and future research on the IoT to understand its principles, which will in turn promote the better development of existing and new IoT devices. The next objective is to utilize the IoT network for enterprise architecture (EA)-based IoT applications. EA defines the structure and operation of an organization to determine the most effective way for it to achieve its objectives. Digital transformation of EA is achieved through analysis, planning, design, and implementation, which translates enterprise goals into an IoT-enabled enterprise design; a blueprint is necessary for readying the IT resources that support business services and processes. A systematic approach is proposed for the planning and development of EA for IoT applications. The Enterprise Interface (EI) layer is proposed to efficiently categorize the data based on local and global factors; the clustered data is then utilized by the end-users. A novel four-tier structure is proposed for Enterprise Applications. The challenges are analyzed and contextualized, and solutions and recommendations are offered.
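    The abstract does not spell out the placement algorithm, so the following is a hypothetical greedy sketch of coverage-driven device placement over a segmented region: points of interest must fall within the sensing radius of some device, and candidate positions are picked by marginal coverage gain. The greedy rule, radius, and data layout are illustrative assumptions standing in for the RCC8/TPCC-based procedure.

        import math

        def greedy_place(points, candidates, r=25.0):
            # Greedy placement sketch: repeatedly pick the candidate position that
            # covers the most still-uncovered points of interest, until everything
            # reachable is covered.
            uncovered, placed = set(points), []
            while uncovered:
                best = max(candidates,
                           key=lambda c: sum(math.dist(p, c) <= r for p in uncovered))
                gained = {p for p in uncovered if math.dist(p, best) <= r}
                if not gained:
                    break  # remaining points cannot be covered from any candidate
                placed.append(best)
                uncovered -= gained
            return placed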
The last objective of the thesis work is to develop an energy-efficient data consistency method. Data consistency is a challenge in the design of the energy-efficient medium access control protocols used in IoT, and the proposed method makes the protocol suitable for low, medium, and high data rate applications. An energy-efficient data consistency protocol is proposed together with data aggregation; the protocol efficiently utilizes the data rate while saving energy. An optimal sampling rate selection method is introduced to maintain the data consistency of continuous and periodic monitoring nodes in an energy-efficient manner. In the starting phase, nodes are classified into event-driven and continuous monitoring nodes using a machine-learning-based logistic classification method. The sampling rate of continuous monitoring nodes is optimized during the setup phase using an optimized sampling-rate data aggregation algorithm. Furthermore, an energy-efficient time division multiple access (EETDMA) protocol is used for the continuous monitoring of IoT devices, and an energy-efficient bit-map-assisted (EEBMA) protocol is proposed for the event-driven nodes.
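    A minimal sketch of the setup-phase node classification, assuming scikit-learn's logistic regression and invented per-node features (mean sampling interval, reading variance, fraction of threshold-crossing samples); the training data below is synthetic and purely illustrative.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Rows: one node each; label 0 = continuous monitoring, 1 = event-driven.
        X = np.array([[1.0, 0.02, 0.01],
                      [1.2, 0.03, 0.00],
                      [9.5, 4.10, 0.42],
                      [8.7, 3.80, 0.55]])
        y = np.array([0, 0, 1, 1])

        clf = LogisticRegression().fit(X, y)
        print(clf.predict([[7.9, 3.5, 0.38]]))  # -> [1]: treated as an event node

    Nodes labeled continuous would then go through the sampling-rate optimization and the EETDMA schedule, while event-driven nodes use the EEBMA protocol.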

    Machine Learning-based Orchestration Solutions for Future Slicing-Enabled Mobile Networks

    The fifth generation of mobile networks (5G) will incorporate novel technologies such as network programmability and virtualization, enabled by the Software-Defined Networking (SDN) and Network Function Virtualization (NFV) paradigms, which have recently attracted major interest from both academic and industrial stakeholders. Building on these concepts, Network Slicing has emerged as the main driver of a novel business model in which mobile operators may open, i.e., "slice", their infrastructure to new business players and offer independent, isolated and self-contained sets of network functions and physical/virtual resources tailored to specific service requirements. While Network Slicing has the potential to increase the revenue sources of service providers, it involves a number of technical challenges that must be carefully addressed. End-to-end (E2E) network slices encompass time and spectrum resources in the radio access network (RAN), transport resources on the fronthauling/backhauling links, and computing and storage resources at core and edge data centers. Additionally, the heterogeneity of vertical service requirements (e.g., high throughput, low latency, high reliability) exacerbates the need for novel orchestration solutions able to manage end-to-end network slice resources across different domains while satisfying stringent service level agreements and specific traffic requirements. An end-to-end network slicing orchestration solution shall i) admit network slice requests such that the overall system revenues are maximized, ii) provide the required resources across different network domains to fulfill the Service Level Agreements (SLAs), and iii) dynamically adapt the resource allocation based on the real-time traffic load, end-users' mobility and instantaneous wireless channel statistics. Certainly, a mobile network represents a fast-changing scenario characterized by complex spatio-temporal relationships connecting end-users' traffic demand with social activities and the economy. Legacy models that aim at providing dynamic resource allocation based on traditional traffic demand forecasting techniques fail to capture these important aspects. To close this gap, machine learning-aided solutions are quickly emerging as promising technologies to sustain, in a scalable manner, the set of operations required by the network slicing context. How to implement such resource allocation schemes among slices, while making the most efficient use of the networking resources composing the mobile infrastructure, is the key problem underlying the network slicing paradigm and is addressed in this thesis.
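    In its simplest form, the admission step i) above is a knapsack-type problem; the sketch below is a greedy stand-in (admit slices by revenue per unit of demanded resource while capacity lasts), not the orchestration algorithm developed in the thesis. The request tuples and the single-resource capacity are illustrative assumptions.

        def admit_slices(requests, capacity):
            # Greedy admission sketch: sort slice requests by revenue per unit of
            # demanded resource, then admit while aggregate capacity allows.
            admitted, used = [], 0.0
            for name, revenue, demand in sorted(requests,
                                                key=lambda r: r[1] / r[2],
                                                reverse=True):
                if used + demand <= capacity:
                    admitted.append(name)
                    used += demand
            return admitted

        # e.g. admit_slices([("eMBB", 10, 5), ("URLLC", 8, 2), ("mMTC", 3, 4)], 8)
        # -> ["URLLC", "eMBB"]; the mMTC request is rejected for lack of capacity.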

    Correlated multi-streaming in distributed interactive multimedia systems

    Distributed Interactive Multimedia Environments (DIMEs) enable geographically distributed people to interact with each other in a joint media-rich virtual environment for a wide range of activities, such as art performance, medical consultation, and sport training. The real-time collaboration is made possible by exchanging a set of multi-modal sensory streams over the network in real time. The characterization and evaluation of such multi-stream interactive environments is challenging because the traditional Quality of Service (QoS) metrics (e.g., delay, jitter) are limited to a per-stream basis. In this work, we present a novel "Bundle of Streams" concept to define correlated multi-streams in DIMEs and present new cyber-physical, spatio-temporal QoS metrics to measure QoS over bundles of streams. We realize the Bundle of Streams concept by presenting a novel paradigm of Bundle Streaming as a Service (SAS). We propose and develop the SAS Kernel, a generic, distributed, modular and highly flexible streaming kernel realizing the SAS concept. We validate the Bundle of Streams model by comparing the QoS performance of bundles of streams over different transport protocols in a 3D tele-immersive testbed. Further experiments demonstrate that the SAS Kernel incurs low overhead in delay, CPU, and bandwidth demands.
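    A per-stream jitter metric cannot see misalignment across the correlated streams of a bundle; a bundle-level skew metric can. The sketch below computes, for each frame index, the spread between the earliest and latest arrival across the bundle's streams; the data layout is an assumption for illustration, not the SAS Kernel's API.

        def bundle_skew(arrival_ms):
            # arrival_ms[s][i]: arrival time (ms) of frame i on stream s.
            # Returns, per frame index, the spread between the earliest and
            # latest arrival across all streams in the bundle.
            return [max(frame) - min(frame) for frame in zip(*arrival_ms)]

        # Two correlated streams, e.g. video and haptics frames of one session:
        print(bundle_skew([[0, 100, 200], [10, 140, 210]]))  # -> [10, 40, 10]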

    Quality of Information in Mobile Crowdsensing: Survey and Research Challenges

    Smartphones have become the most pervasive devices in people's lives, and are clearly transforming the way we live and perceive technology. Today's smartphones benefit from almost ubiquitous Internet connectivity and come equipped with a plethora of inexpensive yet powerful embedded sensors, such as accelerometer, gyroscope, microphone, and camera. This unique combination has enabled revolutionary applications based on the mobile crowdsensing paradigm, such as real-time road traffic monitoring, air and noise pollution monitoring, crime control, and wildlife monitoring, to name a few. Unlike prior sensing paradigms, humans are now the primary actors of the sensing process, since they are fundamental in retrieving reliable and up-to-date information about the event being monitored. As humans may behave unreliably or maliciously, assessing and guaranteeing Quality of Information (QoI) becomes more important than ever. In this paper, we provide a new framework for defining and enforcing the QoI in mobile crowdsensing, and analyze in depth the current state of the art on the topic. We also outline novel research challenges, along with possible directions for future work. (To appear in ACM Transactions on Sensor Networks (TOSN).)
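    As a toy illustration of scoring a single crowdsensed report, the sketch below combines contributor trust, data freshness, and sensor accuracy into a score in [0, 1]. The multiplicative combination and parameter names are assumptions for illustration; the paper's framework defines QoI along its own attribute axes.

        def qoi_score(trust, age_s, accuracy, max_age_s=300.0):
            # trust and accuracy in [0, 1]; age_s is the report's age in seconds.
            # Freshness decays linearly, reaching zero at max_age_s.
            freshness = max(0.0, 1.0 - age_s / max_age_s)
            return trust * freshness * accuracy

        # A trusted, fresh, accurate report scores high; a stale one scores zero.
        print(qoi_score(0.9, 30.0, 0.95))   # -> ~0.77
        print(qoi_score(0.9, 400.0, 0.95))  # -> 0.0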

    Cooperative Spectrum Sensing in Cognitive Radio Networks Using Multidimensional Correlations

    In this paper, a multidimensional-correlation-based sensing scheduling algorithm, (CORN)², is developed for cognitive radio networks to minimize energy consumption. A sensing quality metric is defined as a measure of the correctness of spectral availability information, based on the fact that spectrum sensing information at a given point in space and time can represent spectrum information at a different point in space and time. The scheduling algorithm is shown to achieve a cost of sensing (e.g., energy consumption, sensing duration) arbitrarily close to the possible minimum while meeting the sensing quality requirements. To this end, (CORN)² utilizes a novel sensing deficiency virtual queue concept and exploits the correlation between the spectrum measurements of a particular secondary user and those of its collaborating neighbors. The proposed algorithm is proved to achieve, in a distributed fashion, a solution arbitrarily close to optimal under certain easily satisfied assumptions. Furthermore, a distributed Selective-(CORN)² (S-(CORN)²) algorithm is introduced by extending the distributed algorithm to allow secondary users to select collaboration neighbors in densely populated cognitive radio networks. In addition to the theoretically proven performance guarantees, the algorithms are evaluated through simulations.
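    The sensing deficiency virtual queue follows standard Lyapunov drift-plus-penalty machinery: the backlog grows with any shortfall of achieved sensing quality against the requirement, and keeping the queue stable enforces the long-term quality constraint. The sketch below shows the generic form of this machinery; the paper's exact update and decision rule may differ, and V, the cost/quality trade-off knob, is an illustrative parameter.

        def update_deficiency_queue(q, required_quality, achieved_quality):
            # Virtual-queue update: backlog accumulates the per-slot quality
            # shortfall and drains when quality is over-fulfilled.
            return max(q + required_quality - achieved_quality, 0.0)

        def should_sense(q, sensing_cost, quality_gain, V=10.0):
            # Drift-plus-penalty style decision: sense when the queue-weighted
            # quality gain outweighs the V-weighted sensing cost; larger V
            # pushes the long-run cost closer to the minimum.
            return q * quality_gain >= V * sensing_cost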

    Distributed information extraction from large-scale wireless sensor networks
