11 research outputs found

    Applying Deep Bidirectional LSTM and Mixture Density Network for Basketball Trajectory Prediction

    Full text link
    Data analytics helps basketball teams create tactics, but manual data collection and analysis are costly and inefficient. We therefore applied a deep bidirectional long short-term memory (BLSTM) and mixture density network (MDN) approach. The model not only predicts basketball trajectories from real data but can also generate new trajectory samples, making it a useful application to help coaches and players decide when and where to shoot. Its structure is particularly suitable for time-series problems: a BLSTM receives forward and backward information at the same time, and stacking multiple BLSTMs further increases the model's learning capacity. Combined with the BLSTMs, an MDN generates a multi-modal distribution over outputs, so the proposed model can, in principle, represent arbitrary conditional probability distributions of the output variables. We tested the model in two experiments on three-pointer datasets from NBA SportVU data. In the hit-or-miss classification experiment, the proposed model outperformed other models in convergence speed and accuracy. In the trajectory generation experiment, eight model-generated trajectories at a given time closely matched real trajectories.
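    To make the architecture concrete, below is a minimal PyTorch sketch of a stacked BLSTM feeding an MDN head, assuming 3D trajectory points, two BLSTM layers, and five Gaussian mixture components; these sizes are illustrative choices, not the paper's reported configuration.

    ```python
    import torch
    import torch.nn as nn

    class BLSTMMDN(nn.Module):
        def __init__(self, in_dim=3, hidden=64, layers=2, n_mix=5, out_dim=3):
            super().__init__()
            # Stacked bidirectional LSTM: each step sees forward and backward context.
            self.blstm = nn.LSTM(in_dim, hidden, num_layers=layers,
                                 bidirectional=True, batch_first=True)
            # MDN head: mixture weights, means, and standard deviations per component.
            self.pi = nn.Linear(2 * hidden, n_mix)
            self.mu = nn.Linear(2 * hidden, n_mix * out_dim)
            self.sigma = nn.Linear(2 * hidden, n_mix * out_dim)
            self.n_mix, self.out_dim = n_mix, out_dim

        def forward(self, x):
            h, _ = self.blstm(x)                          # (B, T, 2*hidden)
            pi = torch.softmax(self.pi(h), dim=-1)        # mixture weights sum to 1
            mu = self.mu(h).view(*h.shape[:2], self.n_mix, self.out_dim)
            sigma = torch.exp(self.sigma(h)).view_as(mu)  # positive std devs
            return pi, mu, sigma

    def mdn_nll(pi, mu, sigma, y):
        """Negative log-likelihood of targets y under the predicted mixture."""
        y = y.unsqueeze(2)                                # (B, T, 1, out_dim)
        comp = torch.distributions.Normal(mu, sigma)
        log_prob = comp.log_prob(y).sum(-1)               # per-component log-density
        return -torch.logsumexp(torch.log(pi + 1e-8) + log_prob, dim=-1).mean()
    ```

    Because the head parameterizes a full mixture rather than a point estimate, sampling from the trained mixture is what allows the model to generate new trajectory samples in addition to predicting them.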

    Door and window detection in 3D point cloud of indoor scenes

    Get PDF
    This paper proposes a 3D-2D-3D algorithm for door and window detection in 3D indoor point-cloud environments. First, a virtual camera is set up in the middle of the 3D environment and a set of pictures is taken from different angles by rotating the camera, generating the corresponding 2D images. Next, these images are used to detect and identify the positions of doors and windows in the space. The 2D detections are then mapped back to the original 3D point cloud to obtain the points carrying the door and window position information. Finally, by processing contour lines and crossing points, the door and window features are refined using this position information. Experimental results show that this "global-local" approach is efficient at detecting and identifying the locations of doors and windows in 3D point-cloud environments.
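    A minimal numpy sketch of the 3D-2D-3D mapping step follows: project point-cloud points through a virtual pinhole camera, then recover the 3D points whose projections fall inside a 2D detection box. The intrinsics and the detection box are illustrative assumptions; the paper's rendering pipeline and 2D detector are not reproduced here.

    ```python
    import numpy as np

    def project(points, K, R, t):
        """Project Nx3 world points into pixel coordinates; returns pixels and depth."""
        cam = points @ R.T + t            # world -> camera frame
        z = cam[:, 2]
        uv = cam @ K.T                    # apply intrinsics
        uv = uv[:, :2] / z[:, None]       # perspective divide
        return uv, z

    def box_to_points(points, uv, z, box):
        """Return the 3D points whose projections land inside a 2D box (u0, v0, u1, v1)."""
        u0, v0, u1, v1 = box
        mask = (z > 0) & (uv[:, 0] >= u0) & (uv[:, 0] <= u1) \
                       & (uv[:, 1] >= v0) & (uv[:, 1] <= v1)
        return points[mask]

    # Usage with a virtual camera placed at the room centre (all values assumed):
    K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
    R, t = np.eye(3), np.zeros(3)
    cloud = np.random.rand(10000, 3) * 5          # stand-in for the scanned room
    uv, z = project(cloud, K, R, t)
    door_points = box_to_points(cloud, uv, z, box=(200, 100, 400, 460))
    ```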

    Door Detection in 3D Colored Laser Scans for Autonomous Indoor Navigation

    Get PDF

    Deep learning model for doors detection: a contribution for context-awareness recognition of patients with Parkinson’s disease

    Get PDF
    Freezing of gait (FoG) is one of the most disabling motor symptoms in Parkinson’s disease (PD). It is described as a symptom in which walking is interrupted by a brief, episodic absence, or marked reduction, of forward progression despite the intention to continue walking. Although the causes of FoG are multifaceted, episodes often occur in response to environmental triggers, such as turning or passing through narrow spaces like a doorway. The symptom can often be overcome using external sensory cues, so recognizing such environments has become a pertinent issue for the PD-affected community. This study aimed to implement a real-time deep-learning-based door detection model to be integrated into a wearable biofeedback device for delivering on-demand proprioceptive cues. Transfer learning was used to train a MobileNet-SSD in the TensorFlow environment. The model was then converted to a faster, lighter model using TensorFlow Lite and integrated into a Raspberry Pi. The model achieved a considerable precision of 97.2%, a recall of 78.9%, and a good F1-score of 0.869. In real-time testing with the wearable device, the model proved fast enough (~2.87 fps) to detect doors accurately in real-life scenarios. Future work will integrate sensory cues with the developed model in the wearable biofeedback device, aiming to validate the final solution with end users.
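    Below is a minimal sketch of running a converted SSD door detector with the TensorFlow Lite interpreter on a Raspberry Pi. The model filename, input size, and the SSD output ordering are assumptions based on common MobileNet-SSD TFLite exports, not details confirmed by the paper.

    ```python
    import numpy as np
    from tflite_runtime.interpreter import Interpreter  # or tf.lite.Interpreter

    interpreter = Interpreter(model_path="door_detector.tflite")  # hypothetical file
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    outs = interpreter.get_output_details()

    def detect_doors(frame, threshold=0.5):
        """frame: HxWx3 uint8 image already resized to the model's input size."""
        interpreter.set_tensor(inp["index"], np.expand_dims(frame, 0))
        interpreter.invoke()
        # Typical TFLite SSD postprocess outputs: boxes, classes, scores, count;
        # the exact ordering can vary between exports, so check get_output_details().
        boxes = interpreter.get_tensor(outs[0]["index"])[0]   # [ymin, xmin, ymax, xmax]
        scores = interpreter.get_tensor(outs[2]["index"])[0]
        return [b for b, s in zip(boxes, scores) if s >= threshold]
    ```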

    Development and Adaptation of Robotic Vision in the Real-World: the Challenge of Door Detection

    Full text link
    Mobile service robots are increasingly prevalent in human-centric, real-world domains, operating autonomously in unconstrained indoor environments. In this context, robotic vision plays a central role in enabling service robots to perceive high-level environmental features from visual observations. Although data-driven approaches based on deep learning push the boundaries of vision systems, applying these techniques to real-world robotic scenarios presents unique methodological challenges: traditional models fail to capture the challenging perception constraints typical of service robots and must be adapted to the specific environment where the robot ultimately operates. We propose a method that leverages photorealistic simulations, balancing data quality against acquisition cost, to synthesize visual datasets from the robot's perspective for training deep architectures. We then show the benefits of qualifying a general detector for the target domain in which the robot is deployed, along with the trade-off between the effort of obtaining new examples from that setting and the resulting performance gain. In our extensive experimental campaign, we focus on the door detection task (recognizing the presence and traversability of doorways), which in dynamic settings is useful for inferring the topology of the map. Our findings are validated in a real-world robot deployment, comparing prominent deep-learning models and demonstrating the effectiveness of our approach in practical settings.
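    A minimal sketch of the qualification idea: start from a general pre-trained detector and fine-tune it on a handful of examples from the deployment environment. This uses a torchvision Faster R-CNN with a two-class head (background + door) as a stand-in; the paper's own models, simulator, and dataset are not reproduced, and the dummy training example is purely illustrative.

    ```python
    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    # Load a detector pre-trained on COCO and swap the head for two classes
    # (background + door).
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

    # Stand-in for a handful of annotated frames from the deployment environment.
    images = [torch.rand(3, 480, 640)]
    targets = [{"boxes": torch.tensor([[100.0, 50.0, 300.0, 460.0]]),
                "labels": torch.tensor([1])}]

    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    model.train()
    for _ in range(10):  # a few fine-tuning steps on the target-domain examples
        loss_dict = model(images, targets)   # returns a dict of detection losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    ```

    In practice, the number of target-domain examples in the loop above is exactly the knob behind the effort-versus-performance trade-off the abstract describes.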

    Indoor Localization and Mapping Using Deep Learning Networks

    Get PDF
    Over the past several decades, robots have been used extensively in environments that pose high risk to human operators and in jobs that are repetitive and monotonous. In recent years, robot autonomy has been exploited to extend their use to several non-trivial tasks such as space exploration, underwater exploration, and investigating hazardous environments. Such tasks require robots to function in unstructured environments that can change dynamically. Successful use of robots in these tasks requires them to determine their precise location, obtain maps and other information about their environment, navigate autonomously, and operate intelligently in the unknown environment. The process of determining the location of the robot while generating a map of its environment is termed in the literature Simultaneous Localization and Mapping (SLAM). Light Detection and Ranging (LiDAR) sensors, Sound Navigation and Ranging (SONAR) sensors, and depth cameras are typically used to generate a representation of the environment during the SLAM process. However, real-time localization and map generation remain challenging tasks, so there is a need for techniques that speed up approximate localization and mapping while using fewer computational resources. This thesis presents an alternative method, based on deep learning and computer vision algorithms, for generating approximate localization information for mobile robots from monocular camera images. The approximate localization can subsequently be used to develop coarse maps where a priori information is not available. Experiments were conducted to verify the ability of the proposed technique to determine the approximate location of the robot, denoted qualitatively in terms of the building, the floor of the building, and the interior corridor in which the robot is located. ArUco markers were used to determine the quantitative location of the robot, and the use of this approximate location to determine the locations of key features in the robot's vicinity was also studied. The results reported in this thesis demonstrate that low-cost, low-resolution techniques can be combined with deep learning to obtain the approximate localization of an autonomous robot, and that such approximate information can be used to determine coarse position information of key features in the vicinity. It is anticipated that this approach can be extended to develop low-resolution maps of the environment that are suitable for autonomous robot navigation.
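    A minimal OpenCV sketch of the quantitative-localization step described above: detect an ArUco marker in a monocular frame and estimate the camera pose relative to it via solvePnP. The camera intrinsics and the 15 cm marker size are illustrative assumptions, and the classic cv2.aruco functional API is used (newer OpenCV releases expose the same detection step through cv2.aruco.ArucoDetector); requires opencv-contrib-python.

    ```python
    import cv2
    import numpy as np

    # Intrinsics of the (assumed, undistorted) monocular camera.
    camera_matrix = np.array([[600.0, 0.0, 320.0],
                              [0.0, 600.0, 240.0],
                              [0.0, 0.0, 1.0]])
    dist_coeffs = np.zeros(5)
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

    def locate_from_marker(frame, marker_len=0.15):
        """Return (rvec, tvec) of the camera relative to the first marker, or None."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
        if ids is None:
            return None
        # Marker corners in its own plane, matching detectMarkers' corner order
        # (top-left, top-right, bottom-right, bottom-left).
        s = marker_len / 2.0
        object_pts = np.array([[-s, s, 0], [s, s, 0],
                               [s, -s, 0], [-s, -s, 0]], dtype=np.float32)
        ok, rvec, tvec = cv2.solvePnP(object_pts, corners[0][0],
                                      camera_matrix, dist_coeffs)
        return (rvec, tvec) if ok else None
    ```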