8 research outputs found

    Logic Synthesis for Cellular Architecture FPGA using BDD

    No full text
    In this paper, an efficient approach to the synthesis of CA (Cellular Architecture)-type FPGAs is presented. To exploit the array structure of cells in CA-type FPGAs, logic expressions called Maitra terms, which can be mapped directly to the cell arrays, are generated. In this approach, a BDD is modified so that each node has an additional branch that is the exclusive-OR of the node's two branches. Once the modified BDD is obtained, a single traversal of the BDD is sufficient to generate the required Maitra terms. Since a BDD can be traversed in O(n) steps, where n is the number of nodes in the BDD, Maitra terms are generated very efficiently. This also removes the need to generate minimal SOP or ESOP expressions, which can be costly in some cases. The experiments show that the proposed method generates better results than existing methods.
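    The core data structure can be pictured as an ordinary decision tree whose nodes carry a third child equal to the XOR of the other two. The sketch below is a minimal illustration of that idea under assumed data structures (a plain truth-table recursion rather than a reduced, shared BDD, with illustrative names such as `Node` and `build`); it is not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Node:
    var: int        # splitting variable index
    low: object     # cofactor with var = 0 (a Node or a constant 0/1)
    high: object    # cofactor with var = 1
    xor: object     # pointwise XOR of the two cofactors (the extra branch)

def build(tt, var=0):
    """Build the modified decision tree from a truth table of length 2**n
    (variable `var` corresponds to the most significant index bit)."""
    if len(tt) == 1:
        return tt[0]                              # constant leaf
    half = len(tt) // 2
    f0, f1 = tt[:half], tt[half:]                 # Shannon cofactors
    fx = tuple(a ^ b for a, b in zip(f0, f1))     # extra XOR branch
    return Node(var, build(f0, var + 1), build(f1, var + 1), build(fx, var + 1))

# Example: f(a, b) = a XOR b  ->  truth table (0, 1, 1, 0)
root = build((0, 1, 1, 0))
```

    In the paper the structure is a shared BDD, so each distinct subfunction is built once and a traversal of the diagram emits the Maitra terms; the tree above only illustrates the extra branch.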

    ETDD-based Synthesis of Term-Based FPGAs for Incompletely Specified Boolean Functions

    No full text
    Complex terms are logic expressions that can be mapped directly to the cell arrays of cellular architecture devices such as Atmel 6000 series FPGAs. This paper presents an approach to generating complex terms for incompletely specified Boolean functions using ETDDs (EXOR Ternary Decision Diagrams). The three decompositions inherent in ETDDs, Shannon, positive Davio, and negative Davio, are employed to generate complex terms. While traversing an ETDD is simple and efficient for completely specified functions, manipulating ETDDs with don't-care terms becomes very complex because the three decompositions require different evaluations of the function. The changes made to the function by don't cares under each decomposition are analyzed, and an approximation algorithm is presented together with its application to the synthesis of complex terms.
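    For reference, the three decompositions named in the abstract are the standard textbook expansions; the notation below (cofactors f_0 = f|_{x=0}, f_1 = f|_{x=1}, and f_2 = f_0 ⊕ f_1) is ours, not taken from the paper.

```latex
% Standard Shannon / positive Davio / negative Davio expansions
\begin{align}
  f &= \bar{x}\, f_0 \oplus x\, f_1 && \text{(Shannon)} \\
  f &= f_0 \oplus x\, f_2           && \text{(positive Davio)} \\
  f &= f_1 \oplus \bar{x}\, f_2     && \text{(negative Davio)}
\end{align}
```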

    A robust algorithm for text region detection in natural scene images

    No full text

    Distance Error Correction in Time-of-Flight Cameras Using Asynchronous Integration Time

    No full text
    A distance map captured using a time-of-flight (ToF) depth sensor has fundamental problems, such as ambiguous depth information on shiny or dark surfaces, optical noise, and mismatched boundaries. Severe depth errors occur on shiny and dark surfaces owing to excess reflection and excess absorption of light, respectively. Dealing with this problem has been a challenge due to the inherent hardware limitations of ToF, which measures distance using the number of reflected photons. This study proposes a distance error correction method that uses three ToF sensors set to different integration times to address the ambiguity in depth information. First, the three ToF depth sensors are installed horizontally and set to different integration times to capture distance maps. Error regions are then estimated from the amplitude maps based on the amount of received light, and the estimated error regions are refined by exploiting accurate depth information from the neighboring depth sensors that use different integration times. Moreover, we propose a new optical noise reduction filter that considers the distribution of depth information biased toward one side. Experimental results verify that the proposed method overcomes the drawbacks of ToF cameras and provides enhanced distance maps.
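    A rough way to picture the correction step is per-pixel selection among the three sensors based on amplitude validity: pixels saturated at a long integration time (shiny surfaces) or underexposed at a short one (dark surfaces) are filled from a sensor whose amplitude is in range. The NumPy sketch below is a simplified illustration with assumed names and thresholds (`fuse_depth`, `amp_lo`, `amp_hi`); it ignores the registration between sensors and the noise filter, which the paper handles separately.

```python
import numpy as np

def fuse_depth(depths, amplitudes, amp_lo=50.0, amp_hi=2000.0):
    """For each pixel, keep the depth from the sensor whose amplitude lies in a
    valid range. `depths` and `amplitudes` are lists of HxW arrays from the
    three sensors, assumed already registered to a common view."""
    depths = np.stack(depths)           # (3, H, W)
    amplitudes = np.stack(amplitudes)   # (3, H, W)
    valid = (amplitudes > amp_lo) & (amplitudes < amp_hi)
    # Prefer the sensor whose amplitude is closest to the middle of the valid range
    score = np.where(valid, -np.abs(amplitudes - (amp_lo + amp_hi) / 2), -np.inf)
    best = np.argmax(score, axis=0)     # (H, W) index of the chosen sensor
    return np.take_along_axis(depths, best[None], axis=0)[0]

# Example with synthetic 4x4 maps from three sensors; only sensor 1 has valid amplitude
d = [np.full((4, 4), v) for v in (1.0, 1.1, 0.9)]
a = [np.full((4, 4), v) for v in (10.0, 800.0, 3000.0)]
print(fuse_depth(d, a))   # all pixels come from sensor 1 (depth 1.1)
```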

    Cross-Modality Learning by Exploring Modality Interactions for Emotion Reasoning

    No full text
    Even without hearing or seeing individuals, humans are able to infer subtle emotions from a range of indicators and surroundings. However, existing research on emotion recognition mostly focuses on recognizing the emotions of speakers when all modalities are available. In real-world situations, emotion reasoning is an interesting field for inferring human emotions from a person's surroundings when neither the face nor the voice can be observed. Therefore, in this paper, we propose a novel multimodal approach based on attention mechanisms for predicting emotion when one or more modalities are missing. Specifically, we employ self-attention on each unimodal representation to extract the dominant features and utilize compounded paired-modality attention (CPMA) among sets of modalities to identify the context of the considered individual, such as the interplay of modalities, and to capture people's interactions in the video. The proposed model is trained on the Multimodal Emotion Reasoning (MEmoR) dataset, which includes multimedia inputs such as visual, audio, text, and personality. The proposed model achieves a weighted F1-score of 50.63% for the primary emotion group and 42.7% for the fine-grained one. According to the results, our proposed model outperforms conventional approaches in emotion reasoning.
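    To make the attention structure concrete, the sketch below gives one plausible reading of the described pipeline in PyTorch: per-modality self-attention followed by attention over pairs of modalities, pooled into a single emotion prediction. The module and parameter names (`PairedModalityAttention`, `dim`, `heads`, the 14-class head) are assumptions for illustration, not the paper's CPMA implementation.

```python
import torch
import torch.nn as nn

class PairedModalityAttention(nn.Module):
    """Hedged sketch: refine each modality with self-attention, then fuse every
    ordered pair of modalities with cross-attention and pool the results."""

    def __init__(self, dim=256, heads=4, n_modalities=4, n_classes=14):
        super().__init__()
        self.self_attn = nn.ModuleList(
            [nn.MultiheadAttention(dim, heads, batch_first=True) for _ in range(n_modalities)]
        )
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, modalities):
        # modalities: list of (batch, seq_len_i, dim) tensors, e.g. visual, audio, text, personality
        refined = [attn(x, x, x)[0] for attn, x in zip(self.self_attn, modalities)]
        fused = []
        for i, q in enumerate(refined):
            for j, kv in enumerate(refined):
                if i != j:                                   # one attention per modality pair
                    fused.append(self.cross_attn(q, kv, kv)[0].mean(dim=1))
        pooled = torch.stack(fused).mean(dim=0)              # (batch, dim)
        return self.classifier(pooled)

# Example: three observed modalities and one missing (zeroed out)
model = PairedModalityAttention()
mods = [torch.randn(2, 8, 256), torch.randn(2, 8, 256),
        torch.zeros(2, 8, 256), torch.randn(2, 8, 256)]
logits = model(mods)   # (2, 14)
```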

    Real-Time Hand Gesture Spotting and Recognition Using RGB-D Camera and 3D Convolutional Neural Network

    No full text
    Using hand gestures is a natural method of interaction between humans and computers. We use gestures to express meaning and thoughts in our everyday conversations. Gesture-based interfaces are used in many applications across a variety of fields, such as smartphones, televisions (TVs), and video gaming. With advancements in technology, hand gesture recognition is becoming an increasingly promising and attractive technique in human–computer interaction. In this paper, we propose a novel method for real-time fingertip detection and hand gesture recognition using an RGB-D camera and a 3D convolutional neural network (3DCNN). The system accurately and robustly extracts fingertip locations and recognizes gestures in real time. We demonstrate the accuracy and robustness of the interface by evaluating hand gesture recognition across a variety of gestures. In addition, we develop a tool for manipulating computer programs to show the potential of hand gesture recognition. The experimental results show that our system recognizes hand gestures with high accuracy. This is thus considered a good approach to a gesture-based interface for human–computer interaction in the future.
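    As an illustration of the recognition backbone, the sketch below is a small 3D CNN over RGB-D clips in PyTorch. The channel counts, clip size, and class count are assumptions; the paper's fingertip detection stage and its exact network are not reproduced here.

```python
import torch
import torch.nn as nn

class Gesture3DCNN(nn.Module):
    """Hedged sketch of a 3D CNN gesture classifier over short RGB-D clips."""

    def __init__(self, n_classes=10, in_channels=4):   # 4 channels = RGB + depth
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),                    # global spatio-temporal pooling
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, clip):
        # clip: (batch, channels, frames, height, width), e.g. (1, 4, 16, 112, 112)
        x = self.features(clip).flatten(1)
        return self.classifier(x)

# Example: classify one 16-frame RGB-D clip
logits = Gesture3DCNN()(torch.randn(1, 4, 16, 112, 112))   # (1, 10)
```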