
    Indoor Distance Measurement System COPS (COVID-19 Prevention System)

    With the rapid spread of coronavirus disease 2019 (COVID-19), measures are needed to monitor social distancing and prevent further infection. In this paper, we propose a system that detects social-distancing violations in indoor environments and, when an infected person is present, identifies their movement path and the objects they contacted. The system detects objects in frames of closed-circuit television video using You Only Look Once (YOLOv4) and assigns and tracks object IDs using DeepSORT, a multiple-object-tracking algorithm. Next, the coordinates of each detected object are transformed by warping the region designated from a top-down perspective in the original frame. The transformed coordinates are matched to a floor map to measure the distance between objects and detect social-distancing violations. If an infected person is present, objects that cross the infected person's movement path or violate social distancing with respect to that person are identified using the ID assigned to each object. The proposed system can help prevent the rapid spread of infection by monitoring social distancing and by detecting and tracking objects according to the presence of infected persons.
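    The coordinate-transformation-and-distance step described above can be sketched as follows. The homography `H`, the foot points, and the 2 m threshold are illustrative assumptions, not values from the paper:

    ```python
    import numpy as np

    def to_floor_coords(points_px, H):
        """Project image points (N, 2) to floor-plan coordinates via a 3x3 homography H."""
        pts = np.hstack([points_px, np.ones((len(points_px), 1))])  # homogeneous coords
        mapped = pts @ H.T
        return mapped[:, :2] / mapped[:, 2:3]                       # back to Cartesian

    def close_pairs(points_m, threshold=2.0):
        """Return index pairs of objects closer than `threshold` meters apart."""
        pairs = []
        for i in range(len(points_m)):
            for j in range(i + 1, len(points_m)):
                if np.linalg.norm(points_m[i] - points_m[j]) < threshold:
                    pairs.append((i, j))
        return pairs

    # Identity homography for illustration: pixel coordinates taken as meters.
    H = np.eye(3)
    feet = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])  # bounding-box foot points
    print(close_pairs(to_floor_coords(feet, H)))  # [(0, 1)]
    ```

    In practice the homography would be estimated from four reference points marked on the floor and their known map positions, and the foot point of each tracked bounding box would be projected per frame.
    
    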

    TN-GAN-Based Pet Behavior Prediction through Multiple-Dimension Time-Series Augmentation

    Behavioral prediction modeling applies statistical techniques to classify, recognize, and predict behavior from various data. However, behavioral prediction suffers from performance deterioration and data bias. This study proposes behavioral prediction using text-to-numeric generative adversarial network (TN-GAN)-based multidimensional time-series augmentation to minimize the data bias problem. The prediction dataset consisted of nine-axis sensor data (accelerometer, gyroscope, and geomagnetic sensors). An ODROID N2+, a wearable pet device, collected the data and stored them on a web server. Outliers were removed using the interquartile range, and the data were processed into sequences used as input to the predictive model. Sensor values were normalized with the z-score, and cubic spline interpolation was performed to fill in missing values. Data were collected from 10 dogs to identify nine behaviors. The behavioral prediction model used a hybrid convolutional neural network to extract features and applied long short-term memory techniques to capture time-series characteristics. The actual and predicted values were compared using performance evaluation indices. The results of this study can assist in recognizing and predicting behavior and in detecting abnormal behavior, capacities that can be applied to various pet-monitoring systems.
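    The preprocessing pipeline described above (IQR-based outlier removal, cubic-spline imputation, z-score normalization) can be sketched for a single sensor channel. The 1.5×IQR fence and the sample signal are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    def preprocess(signal):
        """IQR outlier masking, cubic-spline imputation, then z-score normalization."""
        x = np.asarray(signal, dtype=float)
        q1, q3 = np.percentile(x, [25, 75])
        iqr = q3 - q1
        outlier = (x < q1 - 1.5 * iqr) | (x > q3 + 1.5 * iqr)
        x[outlier] = np.nan                              # treat outliers as missing
        valid = ~np.isnan(x)
        spline = CubicSpline(np.flatnonzero(valid), x[valid])
        x[~valid] = spline(np.flatnonzero(~valid))       # impute missing samples
        return (x - x.mean()) / x.std()                  # z-score normalization

    cleaned = preprocess([1, 2, 3, 4, 100, 6, 7, 8, 9, 10])  # the 100 spike is imputed
    ```

    The cleaned channels would then be stacked and cut into fixed-length windows to form the input sequences of the predictive model.
    
    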

    Dog Behavior Recognition Based on Multimodal Data from a Camera and Wearable Device

    Although various studies on monitoring dog behavior have been conducted, methods that can minimize or compensate for data noise are required. This paper proposes multimodal dog-behavior recognition that fuses video and sensor data from a camera and a wearable device. The video data capture the area in which the dogs move and are used to detect them; the sensor data capture the dogs' movement and are used to extract features relevant to behavior recognition. Seven behavior types were recognized, and the outputs of the two modalities were combined by a deep-learning-based fusion model to recognize each dog's behavior. Experiments showed that, among Faster R-CNN, YOLOv3, and YOLOv4, the object detection rate and behavior recognition accuracy were highest with YOLOv4. In addition, the sensor data performed best when all statistical features were selected. Finally, multimodal fusion models outperformed single-modality models, and the CNN-LSTM-based model performed best. The presented method can be applied to dog treatment or health monitoring, and it is expected to provide a simple way to estimate activity levels.
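    The "all statistical features" finding above suggests per-axis summary statistics computed over fixed-length sensor windows. A minimal sketch, with an assumed feature set (mean, standard deviation, min, max, RMS) and an assumed window shape of 50 samples × 9 axes:

    ```python
    import numpy as np

    def window_features(window):
        """Statistical features per sensor axis for one window of shape (T, C)."""
        w = np.asarray(window, dtype=float)
        feats = [
            w.mean(axis=0),                 # mean per axis
            w.std(axis=0),                  # standard deviation per axis
            w.min(axis=0),                  # minimum per axis
            w.max(axis=0),                  # maximum per axis
            np.sqrt((w ** 2).mean(axis=0)), # root mean square per axis
        ]
        return np.concatenate(feats)

    window = np.random.default_rng(0).normal(size=(50, 9))  # 9-axis IMU window
    print(window_features(window).shape)  # (45,)
    ```

    The resulting feature vector per window would feed the sensor branch of the fusion model, alongside the video branch's detections.
    
    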

    MovieDIRec: Drafted-Input-Based Recommendation System for Movies

    In a DNN-based recommendation system, the selection and design of the model input are critical to accuracy and to capturing complex user preferences. Because what the layers learn depends on the input, the more closely the input is related to the model's goal, the less unnecessary information the model must learn. In this context, this paper defines the term Drafted-Input: input data that have been selected and processed to meet the goals of the system, and that are updated to continuously reflect user preferences alongside the learning of the model parameters. The effects of properly designed and generated inputs on accuracy and usability are verified using the proposed system. Furthermore, the proposed method is compared with state-of-the-art systems that use simple embedding data and user–item interactions as input, and a model suitable for a practical client–server environment is also proposed.

    Acoustic Sensor-Based Multiple Object Tracking with Visual Information Association

    Object tracking by an acoustic sensor based on particle filtering is extended to the tracking of multiple objects. To overcome the acoustic sensor's inherent limitations in simultaneous multiple-object tracking, support from a visual sensor is considered. Cooperation from the visual sensor, however, should be minimized, because its operation requires far more computational resources than acoustic-sensor-based estimation, especially when the visual sensor is not dedicated to object tracking but deployed for other applications. The acoustic sensor performs the main tracking task, and the visual sensor supports it only when the acoustic sensor encounters difficulty. Several particle-filtering techniques are used for multiple-object tracking by the acoustic sensor, and the acoustic sensor's limitations are discussed to identify when visual-sensor cooperation is needed. The performance of triggering-based cooperation with two visual sensors is evaluated and compared with periodic cooperation in a real environment.