    Performance Analysis of Tracking on Mobile Devices using Local Binary Descriptors

    With the growing ubiquity of mobile devices, users are turning to their smartphones and tablets to perform more complex tasks than ever before. Computer vision tasks on mobile devices must be performed despite constraints on CPU performance, memory, and power consumption. One such task is object tracking, an important area of computer vision. The computational complexity of tracking algorithms makes them ideal candidates for optimization on mobile platforms. This thesis presents a mobile implementation of real-time object tracking. Currently, few tracking approaches take into consideration the resource constraints of mobile devices. Optimizing performance for mobile devices can result in better and more efficient tracking approaches for mobile applications such as augmented reality. These performance benefits aim to increase the frame rate at which an object is tracked and to reduce power consumption during tracking. For this thesis, we utilize binary descriptors, such as Binary Robust Independent Elementary Features (BRIEF), Oriented FAST and Rotated BRIEF (ORB), Binary Robust Invariant Scalable Keypoints (BRISK), and Fast Retina Keypoint (FREAK). The tracking performance of these descriptors is benchmarked on mobile devices. We consider an object tracking approach based on a dictionary of templates that involves generating keypoints of a detected object and of candidate regions in subsequent frames. Descriptor matching between candidate regions in a new frame and the dictionary of templates identifies the location of the tracked object. These comparisons are often computationally intensive and require a great deal of memory and processing time. Google's Android operating system is used to implement the tracking application on a Samsung Galaxy series phone and tablet. Control of the Android camera is largely done through OpenCV's Android SDK. Power consumption is measured using the PowerTutor Android application. Other performance characteristics, such as processing time, are gathered using the Dalvik Debug Monitor Server (DDMS) tool included in the Android SDK. These metrics are used to evaluate the tracker's performance on mobile devices.
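
    As a rough illustration of the descriptor-matching step described above, the following Python/OpenCV sketch detects ORB keypoints in a template and in a candidate region and matches their binary descriptors with a Hamming-distance brute-force matcher. This is a minimal sketch, not the thesis's Android/Java implementation; the image file names and the distance threshold are placeholders.

```python
import cv2

# Minimal sketch: match binary descriptors between a stored template and a
# candidate region, as in the dictionary-of-templates approach described
# above. File names and the threshold of 64 are placeholders.
orb = cv2.ORB_create(nfeatures=500)

template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)
candidate = cv2.imread("candidate.png", cv2.IMREAD_GRAYSCALE)

kp_t, des_t = orb.detectAndCompute(template, None)
kp_c, des_c = orb.detectAndCompute(candidate, None)

# Hamming distance is the natural metric for binary descriptors such as
# BRIEF, ORB, BRISK, and FREAK.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_t, des_c), key=lambda m: m.distance)

# A simple score: the fraction of template keypoints with a close match.
good = [m for m in matches if m.distance < 64]
print(f"{len(good)}/{len(kp_t)} template keypoints matched")
```

    On a phone, the same matching would run per candidate region per frame, which is why the choice of descriptor drives both frame rate and power draw.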

    Safe code transformations for speculative execution in real-time systems

    Although compiler optimization techniques are standard and successful in non-real-time systems, if naively applied they can destroy safety guarantees and deadlines in hard real-time systems. For this reason, real-time systems developers have tended to avoid automatic compiler optimization of their code. However, real-time applications in several areas have been growing substantially in size and complexity in recent years. This size and complexity makes it impossible for real-time programmers to write optimal code, and consequently indicates a need for compiler optimization. Recently, researchers have developed or modified analyses and transformations to improve performance without degrading worst-case execution times. Moreover, these optimization techniques can sometimes transform programs which may not meet constraints/deadlines, or which result in timeouts, into deadline-satisfying programs. One such technique, speculative execution, also used for example in parallel computing and databases, can enhance performance by executing parts of the code whose execution may or may not be needed. In some cases, rollback is necessary if the computation turns out to be invalid. However, speculative execution must be applied carefully to real-time systems so that the worst-case execution path is not extended. Deterministic worst-case execution for satisfying hard real-time constraints, and speculative execution with rollback for improving average-case throughput, appear to lie on opposite ends of a spectrum of performance requirements and strategies. Nonetheless, this thesis shows that there are situations in which speculative execution can improve the performance of a hard real-time system, either by enhancing average performance while not affecting the worst case, or by actually decreasing the worst-case execution time. The thesis proposes a set of compiler transformation rules to identify opportunities for speculative execution and to transform the code. Proofs of semantic correctness and timeliness preservation are provided to verify the safety of applying the transformation rules to real-time systems. Moreover, an extensive experiment using simulation of randomly generated real-time programs has been conducted to evaluate the applicability and profitability of speculative execution. The simulation results indicate that speculative execution improves average execution time and program timeliness. Finally, a prototype implementation is described in which these transformations can be evaluated for realistic applications.
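
    As a toy illustration of the idea only (not the thesis's compiler-level transformation rules), the Python sketch below contrasts an original conditional computation with a speculatively hoisted version; the names slow_lookup and guard are invented for the example, and "rollback" here simply means discarding a side-effect-free result.

```python
def slow_lookup(x):
    # Stand-in for a computation whose result may or may not be needed.
    return sum(i * i for i in range(x)) % 97

def guard(x):
    # Stand-in for a condition that only becomes known later.
    return x % 2 == 0

def original(x):
    # Original form: the expensive work sits on the conditional path.
    if guard(x):
        return slow_lookup(x)
    return 0

def transformed(x):
    # Speculative form: the work is hoisted ahead of the guard so it can
    # overlap otherwise idle time; if the guard fails, the speculative
    # result is simply dropped (a rollback with no side effects).
    speculative = slow_lookup(x)
    if guard(x):
        return speculative
    return 0  # rollback: discard the speculative result

# The transformation preserves semantics.
assert all(original(x) == transformed(x) for x in range(20))
```

    Such a transformation is only profitable when the speculative work fills otherwise idle time and, as the thesis emphasizes, does not lengthen the worst-case execution path.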

    Federated Learning of Artificial Neural Networks

    Training today's most widely applicable machine learning (ML) models, and artificial neural networks in particular, requires extremely large amounts of data and significant computational capacity. Federated Learning (FL) research focuses on the collaborative training of ML models on today's heterogeneous, geographically highly distributed information infrastructure. FL thus aims to spread the computational load of training across the participants (nodes), processing data where it is generated, while training itself proceeds by periodically collecting the updates computed on the nodes, aggregating them, and redistributing the updated model. In our view, research on FL follows three main directions: (1) The first direction addresses the application of the generally accepted federated training method, Federated Averaging (FedAvg), in realistic settings, i.e., how the necessary communication and computational capacity can be provided. (2) The second direction focuses on problems that arise when the FedAvg algorithm is applied, such as the declining overall accuracy of the model and the potentially inadequate performance of the shared model for individual end users. (3) The third heavily researched topic examines how the participants' confidential data can be protected as strongly as possible. In this dissertation I present our work on improving the federated training of artificial neural networks along these directions, which we consider the most important. The presented methods mitigate the individual problems based on the following ideas: (1) a peer-to-peer reformulation of the FedAvg algorithm; (2) the application of optimization methods based on past states; and (3) the application of nature-inspired optimization methods that do not require gradients.
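
    For orientation, the following numpy sketch shows one FedAvg communication round on flattened model weights. It is a minimal sketch, not the dissertation's implementation; local_update is a hypothetical stand-in for local training on each node's private data.

```python
import numpy as np

def local_update(weights, rng):
    # Placeholder for local training: perturb the current global weights.
    return weights - 0.1 * rng.normal(size=weights.shape)

def fedavg_round(global_weights, node_sizes, rng):
    # Each node trains locally, then the server averages the resulting
    # weights, weighting each node by the size of its local data set.
    local_weights = [local_update(global_weights, rng) for _ in node_sizes]
    total = sum(node_sizes)
    return sum(n / total * w for n, w in zip(node_sizes, local_weights))

rng = np.random.default_rng(0)
w = np.zeros(10)
for _ in range(5):                      # five communication rounds
    w = fedavg_round(w, node_sizes=[100, 200, 50], rng=rng)
print(w)
```

    The peer-to-peer variant mentioned above would replace the central averaging step with exchanges among the nodes themselves.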

    Personality Identification from Social Media Using Deep Learning: A Review

    Social media helps in sharing ideas and information among people scattered around the world and thus helps in creating communities, groups, and virtual networks. Identification of personality is significant in many types of applications, such as detecting the mental state or character of a person, predicting job satisfaction and professional and personal relationship success, and building recommendation systems. Personality is also an important factor in determining individual variation in thoughts, feelings, and conduct. According to the 2018 Global social media research survey, there are approximately 3.196 billion social media users worldwide, a number estimated to grow rapidly with the spread of mobile smart devices and advances in technology. Support vector machines (SVM), Naive Bayes (NB), multilayer perceptron neural networks, and convolutional neural networks (CNN) are some of the machine learning techniques used for personality identification in the reviewed literature. This paper presents various studies conducted on identifying the personality of social media users with the help of machine learning approaches, and reviews recent studies that aim to predict the personality of online social media (OSM) users.
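
    For orientation only, the sketch below shows the kind of SVM-style text classifier that several of the reviewed studies build on, using scikit-learn; the posts and personality-style labels are invented for illustration and do not come from the reviewed work.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy data: short posts paired with invented personality-style labels.
posts = [
    "I love meeting new people and going to parties",
    "I prefer quiet evenings at home with a good book",
    "Organizing my week in advance keeps me calm",
    "I tend to act on impulse and decide later",
]
labels = ["extravert", "introvert", "conscientious", "spontaneous"]

# TF-IDF features feed a linear SVM, a common baseline in this literature.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(posts, labels)
print(model.predict(["I planned every detail of the trip months ago"]))
```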

    Inclusion results for convolution submethods

    If B is a summability matrix, then the submethod Bλ is the matrix obtained by deleting a set of rows from the matrix B. Comparisons between Euler-Knopp submethods and the Borel summability method are made. Also, an equivalence result for convolution submethods is established. This result necessarily applies to the submethods of the Euler-Knopp, Taylor, Meyer-König, and Borel matrix summability methods.
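
    For readers unfamiliar with the notation, the LaTeX snippet below records the standard definitions of the Euler-Knopp matrix and of a submethod; these are textbook conventions, not statements taken from the paper itself.

```latex
% Euler-Knopp matrix E_r (standard definition, 0 < r <= 1):
\[
  (E_r)_{nk} = \binom{n}{k}\, r^{k} (1-r)^{\,n-k}, \qquad 0 \le k \le n,
\]
% and (E_r)_{nk} = 0 for k > n.
%
% Submethod B_\lambda of a summability matrix B = (b_{nk}), where
% \lambda(1) < \lambda(2) < \cdots is the increasing sequence of retained
% row indices:
\[
  (B_\lambda)_{nk} = b_{\lambda(n),\,k},
\]
% i.e. B_\lambda keeps only the rows of B indexed by \lambda and deletes
% the rest.
```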