
    Brand Tracking on Social Media: The Role of Country of Origin Perceptions

    Marketers are now almost a decade into using social media as another outlet for developing brand relationships with consumers. Yet an understanding of how consumers interact with brands online is still in its infancy. This paper compares the social media and brand-tracking habits of consumers in three parts of the world: Asia, the Middle East and the USA. In addition, the study attempts to explain what motivates consumers to follow brands on social media, focusing on the role of products’ country of origin in explaining the relationship. The results show that US consumers spent the most time on social media and tracked the most brands, while Thai respondents did the least of both. Four dimensions of social media brand tracking were identified, and their ratings were compared across groups. Significant differences among groups were found for one of the four factors, ‘brand experience’, with US consumers reporting significantly more positive ‘brand experiences’ than Thai consumers, and Egyptian consumers falling somewhere in between. The results also indicate that the country of product origin can have some effect on brand tracking.

    Neural Responding Machine for Short-Text Conversation

    We propose Neural Responding Machine (NRM), a neural network-based response generator for Short-Text Conversation. NRM takes the general encoder-decoder framework: it formalizes the generation of a response as a decoding process based on the latent representation of the input text, while both encoding and decoding are realized with recurrent neural networks (RNNs). The NRM is trained with a large amount of one-round conversation data collected from a microblogging service. An empirical study shows that NRM can generate grammatically correct and content-wise appropriate responses to over 75% of the input texts, outperforming state-of-the-art methods in the same setting, including retrieval-based and SMT-based models. Comment: accepted as a full paper at ACL 2015
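The encoder-decoder idea described in the abstract can be illustrated with a minimal sketch: a vanilla RNN encodes the post into one latent vector, and a second RNN decodes a reply conditioned on that vector at every step. The toy vocabulary, dimensions, and randomly initialised weights below are illustrative assumptions, not the paper's actual model or training setup.

```python
# Minimal encoder-decoder sketch in plain NumPy (untrained toy weights).
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["<eos>", "hello", "how", "are", "you", "fine"]
V, H = len(VOCAB), 8

E = rng.normal(0, 0.1, (V, H))      # word embeddings
W_enc = rng.normal(0, 0.1, (H, H))  # encoder recurrence
W_dec = rng.normal(0, 0.1, (H, H))  # decoder recurrence
W_out = rng.normal(0, 0.1, (H, V))  # hidden state -> vocabulary logits

def encode(tokens):
    """Run an RNN over the post; the final hidden state is the latent
    representation of the whole input text."""
    h = np.zeros(H)
    for t in tokens:
        h = np.tanh(E[VOCAB.index(t)] + W_enc @ h)
    return h

def decode(h, max_len=5):
    """Greedy decoding: every step sees the previous token's embedding,
    the decoder state, and the latent vector h."""
    out, x, prev = [], np.zeros(H), 0  # start from the <eos> token
    for _ in range(max_len):
        x = np.tanh(E[prev] + W_dec @ x + h)
        prev = int(np.argmax(x @ W_out))
        if VOCAB[prev] == "<eos>":
            break
        out.append(VOCAB[prev])
    return out

reply = decode(encode(["how", "are", "you"]))
```

With trained parameters the same loop would produce fluent replies; here it only demonstrates the data flow from input text to latent vector to generated tokens.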

    Fuzzy logic based intention recognition in STS processes

    This paper presents a fuzzy logic based classifier that is able to recognise human users' intention of standing up from their behaviour, in terms of the force they apply to the ground. The research reported focused on the selection of meaningful input data for the classifier and on the determination of the fuzzy sets that best represent the intention information hidden in the force data. The classifier is a component of a robot chair which provides users with stand-up assistance based on the intention recognised by the classifier.
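The core mechanism can be sketched as fuzzification of the force signal followed by rule evaluation and defuzzification. The membership breakpoints, rule weights, and decision threshold below are illustrative assumptions, not the paper's tuned values.

```python
# Toy fuzzy classifier: triangular membership functions over a normalised
# ground force (0..1) and a three-rule base mapped to an intention score.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def intention_score(force):
    """Fuzzify the force, apply the rules, defuzzify to a 0..1 score."""
    low = tri(force, -0.01, 0.0, 0.4)
    medium = tri(force, 0.2, 0.5, 0.8)
    high = tri(force, 0.6, 1.0, 1.01)
    # Rules: LOW force -> seated (0.0), MEDIUM -> rising (0.5),
    # HIGH -> standing up (1.0); weighted-average defuzzification.
    return (0.0 * low + 0.5 * medium + 1.0 * high) / (low + medium + high)

def recognise(force, threshold=0.6):
    """True when the defuzzified score crosses the decision threshold."""
    return intention_score(force) >= threshold
```

In the actual system the fuzzy sets would be derived from recorded force data rather than fixed by hand, but the classify-by-membership structure is the same.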

    Development of a robotic platform for maize functional genomics research

    The food supply requirement of a growing global population leads to an increasing demand for agricultural crops. Without enlarging the current cultivated area, the only way to meet this demand is to improve the yield per acre. Improved production practices, fertilization, and the choice of productive crop varieties are feasible approaches. Picking beneficial genotypes is a genetic optimization problem, so a biological tool is needed to study the function of crop genes and, in particular, to identify genes important for agronomic traits. Virus-induced gene silencing (VIGS) can be used as such a tool by knocking down the expression of genes to test their functions. The use of VIGS and other functional genomics approaches in corn plants has increased the need to rapidly associate genes with traits. A significant amount of observation, comparison, and data analysis is required for such corn genetic studies, so an autonomous maize functional genomics system with the capacity to collect data, measure parameters, and identify virus-infected plants is needed. This research project established a system combining sensors with customized algorithms that can distinguish a virus-infected plant and measure parameters of maize plants. An industrial robot arm was used to collect data from multiple views with 3D sensors. Hand-eye calibration between a 2D color camera and the robot arm was performed to transform camera coordinates into arm-based coordinates. TCP socket-based software written in Visual C++ was developed on both the robot arm side and the PC side to provide bidirectional real-time communication. A 3D time-of-flight (ToF) camera was used to reconstruct the corn plant model. The point clouds of corn plants, captured from different views, were merged into one representation through a homogeneous transformation matrix.
    A pass-through filter and a statistical outlier removal filter from the Point Cloud Library were used to remove background and random noise. An algorithm for leaf and stem segmentation based on the morphological characteristics of corn plants was developed. A least-squares method was used to fit the skeletons of leaves for the computation of parameters such as leaf length and leaf number. After locating the leaf center, the arm positions the 2D camera for color imaging. Color-based segmentation was applied to pick a rectangular area of interest on the leaf image. The algorithm computing Gray-Level Co-occurrence Matrix (GLCM) values of the leaf image was implemented using the OpenCV library. After training, Bayes classification was used to identify infected corn plant leaves based on GLCM values. The system user interface is capable of generating data collection commands, 3D reconstruction, parameter table output, color image acquisition control, specific leaf probing, and infected corn leaf diagnosis. The application was developed in a Qt cross-platform environment with multithreading between tasks, making the interface user-friendly and efficient.
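The GLCM texture step in the pipeline above can be sketched directly: co-occurrences of quantized gray levels at a fixed pixel offset are counted, normalized, and summarized by a texture feature. The quantization level, the (1, 0) offset, and the contrast feature shown here are common choices, not necessarily the thesis's exact configuration.

```python
# Sketch of GLCM texture extraction for leaf images, in plain NumPy.
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Count co-occurrences of quantized gray levels at offset (dx, dy),
    then normalize the counts into a joint probability matrix."""
    q = (img.astype(float) / 256 * levels).astype(int)  # 0..255 -> 0..levels-1
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    """Haralick contrast: large for textures with abrupt gray-level changes,
    zero for perfectly uniform regions."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())
```

A feature such as this contrast value (typically alongside several other GLCM statistics) would then be fed to the Bayes classifier to separate infected from healthy leaves.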

    Multimodal Convolutional Neural Networks for Matching Image and Sentence

    In this paper, we propose multimodal convolutional neural networks (m-CNNs) for matching image and sentence. Our m-CNN provides an end-to-end framework with convolutional architectures to exploit image representation, word composition, and the matching relations between the two modalities. More specifically, it consists of one image CNN encoding the image content and one matching CNN learning the joint representation of image and sentence. The matching CNN composes words into different semantic fragments and learns the inter-modal relations between the image and the composed fragments at different levels, thus fully exploiting the matching relations between image and sentence. Experimental results on benchmark databases for bidirectional image and sentence retrieval demonstrate that the proposed m-CNNs can effectively capture the information necessary for image and sentence matching. Specifically, our proposed m-CNNs achieve state-of-the-art performance for bidirectional image and sentence retrieval on the Flickr30K and Microsoft COCO databases. Comment: Accepted by ICCV 2015
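The bidirectional retrieval evaluation described above boils down to scoring every image against every sentence and ranking each modality by that score. The sketch below uses random placeholder embeddings and a cosine similarity score; in the paper the score comes from the matching CNN rather than a fixed similarity, so this only illustrates the retrieval protocol.

```python
# Bidirectional image-sentence retrieval from cross-modal match scores.
import numpy as np

rng = np.random.default_rng(1)
images = rng.normal(size=(3, 16))     # placeholder image embeddings
sentences = rng.normal(size=(5, 16))  # placeholder sentence embeddings

def match_scores(a, b):
    """Cosine similarity between every row of a and every row of b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def retrieve(query_idx, scores):
    """Rank items of the other modality for one query, best match first."""
    return list(np.argsort(-scores[query_idx]))

s = match_scores(images, sentences)   # image -> sentence scores, shape (3, 5)
ranking = retrieve(0, s)              # sentence indices for image 0
```

Sentence-to-image retrieval is the same operation on the transposed score matrix, which is what makes the evaluation "bidirectional".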

    The Development of an assistive chair for elderly with sit to stand problems

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy.
    Standing up from a seated position, known as the sit-to-stand (STS) movement, is one of the most frequently performed activities of daily living (ADLs). However, elderly people often encounter STS difficulties owing to their declining motor functions and sensory capacity for postural control. The motivation for this work is rooted in the observation that STS assistive devices currently on the market lack genuine interaction with elderly users. Prior to the software implementation, a robot chair platform with an integrated sensing footmat was developed around the STS biomechanics of the elderly. The work places its main emphasis on recognising personalised behavioural patterns in elderly users' STS movements, namely STS intentions and personalised STS feature prediction. The former is known as intention recognition while the latter is defined as assistance prediction, both achieved with machine learning techniques. The proposed intention recognition performs well in multiple-subject scenarios with different postures involved, thanks to its competence in handling these uncertainties. To provide the assistance needed by the elderly user, a time series prediction model is presented, aiming to estimate the personalised ground reaction force (GRF) curve over time that characterises a successful movement. This enables the computation of deficits between the predicted oncoming GRF curve and the personalised one. Prediction multiple steps ahead into the future is also implemented so that the actuation completion time in reality is taken into account.
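The multiple-steps-ahead idea in the abstract can be sketched with a simple autoregressive model: fit a linear predictor on the observed GRF samples, then roll it forward, feeding each prediction back in as input. The AR order, horizon, and the synthetic sine-shaped "GRF" curve are illustrative assumptions standing in for the thesis's actual time series model and recorded force data.

```python
# Multi-step-ahead prediction of a GRF-like curve with a least-squares AR model.
import numpy as np

def fit_ar(series, order=3):
    """Least-squares fit of y_t = w . [y_{t-order}, ..., y_{t-1}]."""
    X = np.array([series[i:i + order] for i in range(len(series) - order)])
    y = series[order:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def predict_ahead(series, w, steps):
    """Roll the model forward, feeding predictions back in as inputs."""
    hist = list(series[-len(w):])
    out = []
    for _ in range(steps):
        nxt = float(np.dot(w, hist))
        out.append(nxt)
        hist = hist[1:] + [nxt]
    return out

t = np.linspace(0, np.pi, 50)
grf = np.sin(t)                       # stand-in for a normalised GRF curve
w = fit_ar(grf)
future = predict_ahead(grf, w, steps=5)
```

Comparing `future` against the personalised reference curve would yield the deficit signal the abstract describes, and predicting several steps ahead leaves time for the chair's actuators to respond.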