
    Toward a flexible facial analysis framework in OpenISS for visual effects

    Facial analysis, including tasks such as face detection, facial landmark detection, and facial expression recognition, is a significant research domain in computer vision for visual effects. It can be used in various domains such as facial feature mapping for movie animation, biometrics/face recognition for security systems, and driver fatigue monitoring for transportation safety assistance. Most applications involve basic face and landmark detection as preliminary analysis steps before proceeding to further specialized processing. As the technology develops, plenty of implementations and resources for each task are available to researchers, but the key properties missing among them all are flexibility and usability. Integrating functionality components involves complex configuration at each connection point, which is typically problematic and offers poor reusability and adjustability. This lack of support for integrating different functionality components greatly increases the research effort and cost for individual researchers, which leads us to the idea of providing a framework solution that addresses the issue once and for all. To address this problem, we propose a user-friendly and highly expandable facial analysis framework solution. It contains a core that provides fundamental services for the framework, and a facial analysis module composed of implementations for the facial analysis tasks. We evaluate our framework solution and achieve our goal of instantiating a specialized facial analysis framework that performs face detection, facial landmark detection, and facial expression recognition. As a whole, this framework solution solves the industry problem of lacking an execution platform for integrated facial analysis implementations and fills a gap in the visual effects industry.
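
    A minimal sketch of the kind of component integration such a framework targets: a shared task interface and a configurable pipeline that chains face detection, landmark detection, and expression recognition. The class and method names below are illustrative assumptions, not the actual OpenISS API, and the components return placeholder results rather than calling real detectors.

```python
# Hypothetical sketch of a pluggable facial analysis pipeline; the class and
# method names are illustrative and not the actual OpenISS framework API.
from abc import ABC, abstractmethod
from typing import Any, Dict, List


class FacialAnalysisTask(ABC):
    """Common interface so detection, landmark, and expression components
    can be chained without bespoke glue code between each pair."""

    @abstractmethod
    def run(self, frame: Any, context: Dict[str, Any]) -> Dict[str, Any]:
        ...


class FaceDetector(FacialAnalysisTask):
    def run(self, frame, context):
        # A real component would call an actual detector backend;
        # here we just record a placeholder bounding box.
        context["faces"] = [{"bbox": (0, 0, 100, 100)}]
        return context


class LandmarkDetector(FacialAnalysisTask):
    def run(self, frame, context):
        for face in context.get("faces", []):
            face["landmarks"] = [(10, 20), (30, 20)]  # placeholder points
        return context


class ExpressionRecognizer(FacialAnalysisTask):
    def run(self, frame, context):
        for face in context.get("faces", []):
            face["expression"] = "neutral"  # placeholder label
        return context


class Pipeline:
    """Framework 'core': owns the task list and passes a shared context
    along, so components stay reusable and their order stays adjustable."""

    def __init__(self, tasks: List[FacialAnalysisTask]):
        self.tasks = tasks

    def process(self, frame):
        context: Dict[str, Any] = {}
        for task in self.tasks:
            context = task.run(frame, context)
        return context


if __name__ == "__main__":
    pipeline = Pipeline([FaceDetector(), LandmarkDetector(), ExpressionRecognizer()])
    print(pipeline.process(frame=None))
```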

    ADDRESSING THREE PROBLEMS IN EMBEDDED SYSTEMS VIA COMPRESSIVE SENSING BASED METHODS

    Compressive sensing is a mathematical theory concerning the exact or approximate recovery of sparse/compressible vectors using the minimum number of measurements, called projections. Its theory covers topics such as l1 optimisation, dimensionality reduction, information-preserving projection matrices, random projection matrices and others. In this thesis we extend and use the theory of compressive sensing to address the challenges of limited computation power and energy supply in embedded systems. Three different problems are addressed. The first problem is to improve the efficiency of data gathering in wireless sensor networks. Many wireless sensor networks exhibit heterogeneity because of their environment. We leverage this heterogeneity and extend the theory of compressive sensing to cover non-uniform sampling, from which we derive a new data collection protocol. We show that this protocol can realise a more accurate temporal-spatial profile for a given level of energy consumption. The second problem is to realise real-time background subtraction in embedded cameras. Background subtraction algorithms are normally computationally expensive because they use complex models to deal with subtle changes in the background; therefore existing background subtraction algorithms cannot provide real-time performance on embedded cameras, which have limited processing power. By leveraging information-preserving projection matrices, we derive a new background subtraction algorithm which is 4.6 times faster and more accurate than existing methods. We demonstrate that our background subtraction algorithm can realise real-time background subtraction and tracking in an embedded camera network. The third problem is to enable efficient and accurate face recognition on smartphones. The state-of-the-art face recognition algorithm is inspired by compressive sensing and is based on l1 optimisation; it also uses random projection matrices for dimensionality reduction. A key problem with random projection matrices is that they give highly variable recognition accuracy. We propose an algorithm to optimise the projection matrix and remove this performance variability. This means we can use fewer projections to achieve the same accuracy, which translates to a smaller l1 optimisation problem and reduces the computation time needed on smartphones, which have limited computation power. We demonstrate the performance of our proposed method on smartphones.
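
    A minimal sketch of the compressive sensing recovery step this work builds on: a k-sparse vector is measured through a random Gaussian projection matrix and recovered by l1 minimisation (basis pursuit), posed here as a linear program. The dimensions and the use of SciPy's generic LP solver are illustrative assumptions; this is the textbook formulation, not the thesis's optimised projection design.

```python
# Generic basis-pursuit illustration: recover a sparse x from m < n random
# projections y = A x by solving  min ||x||_1  s.t.  A x = y.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 50, 20, 3                       # signal length, projections, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)  # k-sparse signal

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random Gaussian projection matrix
y = A @ x_true                            # compressed measurements

# Split x = u - v with u, v >= 0 so that ||x||_1 = sum(u + v) and the
# problem becomes a standard linear program.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(x_hat - x_true))
```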

    Physical effects of nano particles and polymer on vesicles

    Master's thesis (Master of Engineering)

    Analysis of Cross-border Mergers and Acquisitions Activities in Emerging Markets: Case Studies based on Automotive Industry of Geely and Tata Motors

    Over the last several decades, an increasing number of firms from emerging markets have joined the trend of cross-border M&As by buying up high-equity brands in the West, with China and India among the most active participants. Cases from emerging-market automakers are examined in particular to provide new insight beyond previous research, which has largely been agnostic to industry groups. This research examines and explores three questions: firstly, the proactive and reactive motives behind the cross-border M&As of emerging-market firms; secondly, the challenges during the post-acquisition stage of the M&A deals; thirdly, the rationale behind EMEs' choice of cross-border M&As rather than equity joint ventures as the entry mode of internationalisation. Analysis of the case studies of Geely's acquisition of Volvo and Tata Motors' acquisition of JLR shows that EMEs proactively conduct M&As to gain advanced technology and to realise quick access to rich global distribution channels, whereas the reactive motives differ between the Chinese and Indian firms due to institution-specific factors. During the post-acquisition stage, challenges of cultural discrepancy may lead to difficulties in staff management, and supervision from the government can also strongly influence performance. Furthermore, the benefits of a quick global presence, the chance to wipe out a competitor, and the right to gain full control over activities are among the crucial factors that push EMEs to adopt M&As instead of EJVs as the main entry mode of internationalisation.

    Securing Cyber-Physical Social Interactions on Wrist-worn Devices

    Since ancient Greece, handshaking has been commonly practiced between two people as a friendly gesture to express trust and respect, or to form a mutual agreement. In this article, we show that such physical contact can be used to bootstrap secure cyber contact between the smart devices worn by users. The key observation is that during handshaking, although they belong to two different users, the two hands involved in the shaking event are often rigidly connected, and therefore exhibit very similar motion patterns. We propose a novel key generation system, which harvests motion data during user handshaking from wrist-worn smart devices such as smartwatches or fitness bands, and exploits the matching motion patterns to generate symmetric keys on both parties. The generated keys can then be used to establish a secure communication channel for exchanging data between devices. This provides a much more natural and user-friendly alternative for many applications, e.g., exchanging/sharing contact details, friending on social networks, or even making payments, since it neither involves extra bespoke hardware nor requires the users to perform pre-defined gestures. We implement the proposed key generation system on off-the-shelf smartwatches, and extensive evaluation shows that it can reliably generate 128-bit symmetric keys after just around 1 s of handshaking (with a success rate >99%), and is resilient to different types of attacks including impersonate mimicking attacks, impersonate passive attacks, and eavesdropping attacks. Specifically, for real-time impersonate mimicking attacks, the Equal Error Rate (EER) in our experiments is only 1.6% on average. We also show that the proposed key generation system can be extremely lightweight and is able to run in situ on resource-constrained smartwatches without incurring excessive resource consumption.
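
    A minimal sketch of the underlying idea, assuming a simple window-mean quantiser: each device independently turns its own accelerometer trace of the shared handshake into a bit string and hashes it into a 128-bit key. The quantisation, window size, and hashing step are illustrative assumptions, not the paper's protocol, which also needs reconciliation of occasional mismatched bits (omitted here).

```python
# Illustrative motion-to-key sketch; not the actual protocol from the paper.
import hashlib
import numpy as np


def motion_to_bits(accel: np.ndarray, window: int = 10) -> str:
    """Quantise the acceleration magnitude: one bit per window, set to 1 if
    the window mean exceeds the overall median of the trace."""
    mag = np.linalg.norm(accel, axis=1)
    median = np.median(mag)
    n_windows = len(mag) // window
    windows = mag[: n_windows * window].reshape(n_windows, window)
    return "".join("1" if w.mean() > median else "0" for w in windows)


def bits_to_key(bits: str) -> bytes:
    """Derive a 128-bit key from the agreed bit string."""
    return hashlib.sha256(bits.encode()).digest()[:16]


# Simulate ~1 s of 3-axis accelerometer data at 100 Hz on two rigidly
# connected wrists: the same underlying motion plus independent sensor noise.
rng = np.random.default_rng(1)
motion = rng.normal(size=(100, 3))
trace_a = motion + 0.01 * rng.normal(size=motion.shape)
trace_b = motion + 0.01 * rng.normal(size=motion.shape)

bits_a, bits_b = motion_to_bits(trace_a), motion_to_bits(trace_b)
print("bit agreement:", sum(a == b for a, b in zip(bits_a, bits_b)), "/", len(bits_a))
if bits_a == bits_b:
    print("shared 128-bit key:", bits_to_key(bits_a).hex())
```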

    Shake-n-Shack: Enabling secure data exchange between smart wearables via handshakes

    Since ancient Greece, handshaking has been commonly practiced between two people as a friendly gesture to express trust and respect, or to form a mutual agreement. In this paper, we show that such physical contact can be used to bootstrap secure cyber contact between the smart devices worn by users. The key observation is that during handshaking, although they belong to two different users, the two hands involved in the shaking event are often rigidly connected, and therefore exhibit very similar motion patterns. We propose a novel Shake-n-Shack system, which harvests motion data during user handshaking from wrist-worn smart devices such as smartwatches or fitness bands, and exploits the matching motion patterns to generate symmetric keys on both parties. The generated keys can then be used to establish a secure communication channel for exchanging data between devices. This provides a much more natural and user-friendly alternative for many applications, e.g., exchanging/sharing contact details, friending on social networks, or even making payments, since it neither involves extra bespoke hardware nor requires the users to perform pre-defined gestures. We implement the proposed Shake-n-Shack system on off-the-shelf smartwatches, and extensive evaluation shows that it can reliably generate 128-bit symmetric keys after just around 1 s of handshaking (with a success rate >99%), and is resilient to real-time mimicking attacks: in our experiments the Equal Error Rate (EER) is only 1.6% on average. We also show that the proposed Shake-n-Shack system can be extremely lightweight, and is able to run in situ on resource-constrained smartwatches without incurring excessive resource consumption.
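
    A minimal sketch of the matching side that the mimicking-attack evaluation rests on, again under an illustrative window-mean quantisation: a rigidly coupled wrist reproduces almost all of the verifying device's bits, while a visually mimicked handshake reproduces far fewer, so a simple bit-agreement threshold separates the two. The 0.9 threshold, noise levels, and quantiser are assumptions, not the paper's parameters.

```python
# Illustrative acceptance decision based on bit agreement; not the paper's
# actual matching procedure or thresholds.
import numpy as np

rng = np.random.default_rng(2)


def quantise(mag: np.ndarray, window: int = 10) -> np.ndarray:
    """One bit per window: 1 if the window mean exceeds the trace median."""
    n = len(mag) // window
    means = mag[: n * window].reshape(n, window).mean(axis=1)
    return (means > np.median(mag)).astype(int)


motion = rng.normal(size=400)                   # verifier's own handshake trace
genuine = motion + 0.01 * rng.normal(size=400)  # rigidly coupled partner wrist
attacker = motion + 1.0 * rng.normal(size=400)  # visually mimicked motion

bits_ref = quantise(np.abs(motion))
for name, trace in [("genuine partner", genuine), ("mimicking attacker", attacker)]:
    agreement = float(np.mean(quantise(np.abs(trace)) == bits_ref))
    verdict = "accept" if agreement >= 0.9 else "reject"
    print(f"{name}: bit agreement {agreement:.2f} -> {verdict}")
```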

    VibHead: An Authentication Scheme for Smart Headsets through Vibration

    As Virtual Reality (VR) and Augmented Reality (AR) systems have rapidly penetrated our daily life in recent years, the security and privacy issues of VR/AR applications have attracted considerable attention. Most VR/AR systems adopt head-mounted devices (i.e., smart headsets) to interact with users, and the devices usually store the users' private data. Hence, authentication schemes are desired for head-mounted devices. Traditional knowledge-based authentication schemes for general personal devices have been proven vulnerable to shoulder-surfing attacks, especially considering that the headsets may block the users' sight. Although the robustness of knowledge-based authentication can be improved by designing complicated secret codes in virtual space, this approach compromises usability. Another choice is to leverage the users' biometrics; however, this either relies on highly advanced equipment that may not always be available in commercial headsets or introduces a heavy cognitive load for users. In this paper, we propose a vibration-based authentication scheme, VibHead, for smart headsets. Since the propagation of vibration signals through human heads presents unique patterns for different individuals, VibHead employs a CNN-based model to classify registered legitimate users based on the features extracted from the vibration signals. We also design a two-step authentication scheme where the above user classifiers are utilized to distinguish the legitimate user from illegitimate ones. We implement VibHead on a Microsoft HoloLens equipped with a linear motor and an IMU sensor, which are commonly used in off-the-shelf personal smart devices. According to the results of our extensive experiments, with short vibration signals (≤1 s), VibHead has outstanding authentication accuracy; both FAR and FRR are around 5%.
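
    A minimal sketch of the kind of model and decision rule described here: a small 1D CNN classifies a short vibration recording among registered users, and a confidence threshold then rejects probes that match no registered user (the two-step idea). The architecture, input length, and threshold are illustrative assumptions rather than VibHead's exact design, and the model below is untrained.

```python
# Illustrative 1D-CNN classifier plus confidence-threshold rejection;
# not VibHead's actual architecture or parameters.
import torch
import torch.nn as nn

N_USERS, SIGNAL_LEN = 4, 400          # registered users; ~1 s of IMU samples


class VibrationCNN(nn.Module):
    def __init__(self, n_users: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * (SIGNAL_LEN // 16), n_users)

    def forward(self, x):                      # x: (batch, 1, SIGNAL_LEN)
        z = self.features(x).flatten(1)
        return self.classifier(z)


def authenticate(model, signal, threshold=0.9):
    """Step 1: classify among registered users. Step 2: accept only if the
    softmax confidence clears the threshold, otherwise treat as illegitimate."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(signal.unsqueeze(0)), dim=1)[0]
    conf, user = probs.max(dim=0)
    return (int(user), float(conf)) if conf >= threshold else (None, float(conf))


if __name__ == "__main__":
    model = VibrationCNN(N_USERS)              # untrained, for illustration only
    probe = torch.randn(1, SIGNAL_LEN)         # a synthetic vibration segment
    print(authenticate(model, probe))
```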