
    Change Detection in Multivariate Datastreams: Likelihood and Detectability Loss

    We address the problem of detecting changes in multivariate datastreams, and we investigate the intrinsic difficulty that change-detection methods face when the data dimension scales. In particular, we consider a general approach where changes are detected by comparing the distribution of the log-likelihood of the datastream over different time windows. Although this approach forms the core of several change-detection methods, its effectiveness when the data dimension scales has never been investigated, and this is the goal of our paper. We show that the magnitude of the change can be naturally measured by the symmetric Kullback-Leibler divergence between the pre- and post-change distributions, and that the detectability of a change of a given magnitude worsens as the data dimension increases. This problem, which we refer to as detectability loss, is due to the linear relationship between the variance of the log-likelihood and the data dimension. We analytically derive the detectability loss on Gaussian-distributed datastreams, and empirically demonstrate that the problem also holds on real-world datasets and that it can be harmful even at low data dimensions (say, 10).
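    As a rough illustration of the mechanism behind detectability loss (a sketch under assumed standard-Gaussian data, not code from the paper), the snippet below estimates the variance of the log-likelihood as the dimension d grows. It increases linearly with d (Var[log p(X)] = d/2 for a standard Gaussian), so a change of fixed symmetric Kullback-Leibler magnitude is progressively diluted in the log-likelihood windows.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative sketch (not the authors' code): the variance of the
# log-likelihood of d-dimensional standard-Gaussian data grows
# linearly with d, which is the mechanism behind detectability loss.
rng = np.random.default_rng(0)
n = 20_000

for d in (1, 2, 5, 10, 50, 100):
    mean = np.zeros(d)
    cov = np.eye(d)
    x = rng.multivariate_normal(mean, cov, size=n)
    loglik = multivariate_normal(mean, cov).logpdf(x)
    # For a standard Gaussian, Var[log p(X)] = d / 2.
    print(f"d={d:3d}  empirical Var[log-lik]={loglik.var():7.2f}  theory={d/2:6.1f}")
```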

    Generative Adversarial Networks (GANs): Challenges, Solutions, and Future Directions

    Generative Adversarial Networks (GANs) are a novel class of deep generative models that has recently gained significant attention. GANs implicitly learn complex, high-dimensional distributions over images, audio, and other data. However, training GANs poses major challenges, namely mode collapse, non-convergence, and instability, caused by inappropriate network-architecture design, choice of objective function, and selection of optimization algorithm. To address these challenges, several solutions for better design and optimization of GANs have recently been investigated, based on re-engineered network architectures, new objective functions, and alternative optimization algorithms. To the best of our knowledge, no existing survey has focused specifically on the broad and systematic development of these solutions. In this study, we perform a comprehensive survey of advances in GAN design and optimization solutions proposed to handle these challenges. We first identify key research issues within each design and optimization technique, and then propose a new taxonomy that structures the solutions by these issues. Following the taxonomy, we provide a detailed discussion of the GAN variants proposed within each solution and of their relationships. Finally, based on the insights gained, we present promising research directions in this rapidly growing field.
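    To make the design levers discussed in the survey concrete, the following is a minimal, generic sketch of a standard non-saturating GAN training step (assumed PyTorch; the network sizes and hyperparameters are illustrative, not taken from the paper). The architecture, loss, and optimizer choices shown here are exactly the components that the surveyed solutions re-engineer to mitigate mode collapse, non-convergence, and instability.

```python
import torch
import torch.nn as nn

# Minimal generic GAN step (illustrative only): the loss and optimizer
# choices below are the design levers that GAN variants rework to fight
# mode collapse, non-convergence, and instability.
latent_dim, data_dim = 64, 784
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

def train_step(real):                      # real: (batch, data_dim)
    batch = real.size(0)
    z = torch.randn(batch, latent_dim)

    # Discriminator update: push real samples toward 1, fakes toward 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(batch, 1)) + \
             bce(D(G(z).detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # Generator update (non-saturating loss): make D label fakes as real.
    opt_g.zero_grad()
    g_loss = bce(D(G(z)), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```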

    Drone Tracking with Drone using Deep Learning

    With the development of technology, studies in fields such as artificial intelligence, computer vision, and deep learning are increasing day by day. In line with these developments, object-tracking and object-detection studies have spread across many areas. In this article, a study is presented in which two drones, a leader and a follower, are simulated together with deep learning algorithms. The aim of the study is to perform autonomous drone-to-drone tracking. Two approaches are developed and tested in the simulation environment. The first approach enables the leader drone to detect the target drone using object-tracking algorithms; the YOLOv5 deep learning algorithm is preferred for object detection, and a dataset of approximately 2,500 images was created for its training. Trained on this dataset, the YOLOv5 object-detection algorithm reached a success rate of approximately 93%. The second approach uses the object-tracking algorithm we developed; training was carried out in the simulator built in the MATLAB environment. The results are presented in detail in the following sections. The article also reviews artificial neural networks and object-tracking methods used in the literature.
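    As a hedged sketch of the detection step the first approach describes (not the authors' code; the weight and image paths are hypothetical placeholders), a YOLOv5 model fine-tuned on a custom drone dataset can be loaded through the public Ultralytics hub entry point and run on a simulator frame:

```python
import torch

# Illustrative only: load a YOLOv5 model fine-tuned on a custom drone
# dataset (the weight and image paths below are hypothetical placeholders).
model = torch.hub.load("ultralytics/yolov5", "custom", path="runs/train/exp/weights/best.pt")
model.conf = 0.5                      # confidence threshold for detections

results = model("frame.jpg")          # run detection on one simulator frame
detections = results.xyxy[0]          # tensor rows: [x1, y1, x2, y2, confidence, class]

for *box, conf, cls in detections.tolist():
    print(f"drone candidate at {box} (conf={conf:.2f})")
```

    In a tracking loop, the highest-confidence box would then be handed to the follower drone's controller as the current estimate of the target's position in the image.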