
    DYNAMIC CONSUMER DECISION MAKING PROCESS IN E-COMMERCE

    This dissertation studies the dynamic decision making process in E-commerce. In the first essay, we use eye tracking to investigate how consumers make information acquisition decisions on attribute-by-product matrices in online choice environments such as comparison websites. A Hierarchical Hidden Markov Model is used to describe this process. The model consists of three connected hierarchical layers: a lower layer that describes the eye movements, a middle layer that identifies product- and attribute-based information acquisition modes, and an upper layer that flexibly captures switching between these modes over time. Findings of a controlled experiment show that low-level properties of the eye and the visual brain play an important role in dynamic information acquisition. Consumers switch frequently between the two acquisition modes, and a higher switching frequency increases decision time and reduces the ease of decision making. These results have implications for web design and online retailing, and may open new directions for research and theories of online choice.

    The second essay investigates how usage experience with different types of decision aids contributes to the evolution of online shopping behavior over time. In the context of online grocery stores, we categorize four types of commonly available decision aids: those 1) for nutritional needs, 2) for brand preference, 3) for economic needs, and 4) personalized shopping lists. We construct a Non-homogeneous Hidden Markov Model of category purchase incidence and purchase quantity, in which parameters are allowed to vary over time across hidden states as driven by usage experience with the different decision aids. The dataset was collected during the period when the retailer first launched its web business, which makes it particularly well suited to studying the evolution of online purchase behavior. We estimate the model for the spaghetti sauce and liquid detergent categories. Results indicate that the four types of decision aids influence the evolution of purchase behavior differently. Findings from this study enrich the understanding of how purchase behavior may evolve over time in online stores and provide valuable insights for online retailers to improve the design of their store environments.
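
    The three-layer structure described in the first essay can be illustrated with a small generative simulation. The Python sketch below is not the authors' model or estimation code; the transition probabilities, matrix size, and the product/attribute saccade rules are hypothetical, chosen only to show how an upper-layer mode-switching process can drive lower-layer eye movements on an attribute-by-product matrix.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    # Upper/middle layers: Markov switching between two hypothetical
    # information acquisition modes (probabilities are illustrative only).
    modes = ["product_based", "attribute_based"]
    mode_transition = np.array([[0.8, 0.2],
                                [0.3, 0.7]])

    # Lower layer: eye movements on a small attribute-by-product matrix.
    n_products, n_attributes = 3, 4

    def next_fixation(fix, mode):
        # In product-based mode the gaze mostly stays on the same product
        # and moves across attributes; in attribute-based mode it mostly
        # stays on the same attribute and moves across products.
        p, a = fix
        if mode == 0 and rng.random() < 0.8:
            a = int(rng.integers(n_attributes))
        elif mode == 0:
            p = int(rng.integers(n_products))
        elif rng.random() < 0.8:
            p = int(rng.integers(n_products))
        else:
            a = int(rng.integers(n_attributes))
        return (p, a)

    def simulate(n_fixations=10):
        mode = int(rng.integers(2))
        fix = (int(rng.integers(n_products)), int(rng.integers(n_attributes)))
        path = []
        for _ in range(n_fixations):
            path.append((modes[mode], fix))
            mode = int(rng.choice(2, p=mode_transition[mode]))  # mode switching
            fix = next_fixation(fix, mode)                      # eye movement
        return path

    for mode_name, (p, a) in simulate():
        print(f"{mode_name:16s} product={p} attribute={a}")

    Counting the mode changes in such a simulated fixation path gives the kind of switching-frequency quantity the essay relates to decision time and ease of decision making.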

    Towards gestural understanding for intelligent robots

    Fritsch JN. Towards gestural understanding for intelligent robots. Bielefeld: Universität Bielefeld; 2012.

    A strong driving force of scientific progress in the technical sciences is the quest for systems that assist humans in their daily life and make their life easier and more enjoyable. Nowadays smartphones are probably the most typical instances of such systems. Another class of systems that is getting increasing attention is intelligent robots. Instead of offering a smartphone touch screen to select actions, these systems are intended to offer a more natural human-machine interface to their users. Out of the large range of actions performed by humans, gestures performed with the hands play a very important role, especially when humans interact with their direct surroundings, e.g., pointing to an object or manipulating it. Consequently, a robot has to understand such gestures to offer an intuitive interface. Gestural understanding is, therefore, a key capability on the way to intelligent robots.

    This book deals with vision-based approaches for gestural understanding. Over the past two decades, this has been an intensive field of research which has resulted in a variety of algorithms to analyze human hand motions. Following a categorization of different gesture types and a review of other sensing techniques, the design of vision systems that achieve hand gesture understanding for intelligent robots is analyzed. For each of the individual algorithmic steps (hand detection, hand tracking, and trajectory-based gesture recognition), a separate chapter introduces common techniques and algorithms and provides example methods. The resulting recognition algorithms consider gestures in isolation and are often not sufficient for interacting with a robot, which can only understand such gestures when incorporating context, e.g., which object was pointed at or manipulated. Going beyond purely trajectory-based gesture recognition by incorporating context is an important prerequisite for gesture understanding and is addressed explicitly in a separate chapter of this book. Two types of context, user-provided context and situational context, are distinguished, and existing approaches to incorporating each for gestural understanding are reviewed. Example approaches for both context types provide deeper algorithmic insight into this field of research. An overview of recent robots capable of gesture recognition and understanding summarizes the currently realized quality of human-robot interaction. The approaches for gesture understanding covered in this book are manually designed, while humans learn to recognize gestures automatically while growing up. The last chapter completes the book by highlighting promising research aimed at analyzing developmental learning in children in order to mimic this capability in technical systems, as this research direction may be highly influential for future gesture understanding systems.
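
    As a rough structural illustration of the processing chain outlined above (hand detection, hand tracking, trajectory-based recognition, and context-based interpretation), the following Python sketch uses placeholder stages only. The function and class names, the toy "net rightward motion means pointing" rule, and the object lookup are hypothetical and merely stand in for the actual detectors, trackers, and classifiers the book discusses.

    from dataclasses import dataclass

    @dataclass
    class HandObservation:
        frame: int
        x: float
        y: float

    def detect_hand(frame_index):
        # Placeholder detector: a real system would locate the hand in the
        # camera image (e.g., via skin-color or appearance-based detection).
        return HandObservation(frame=frame_index, x=float(frame_index), y=0.0)

    def track(observations):
        # Placeholder tracker: a real system would associate and smooth
        # detections over time into a continuous hand trajectory.
        return [(o.x, o.y) for o in observations]

    def recognize(trajectory):
        # Placeholder trajectory classifier: a real system would model the
        # motion sequence (e.g., with an HMM). Here, net rightward motion
        # is simply labelled "pointing".
        return "pointing" if trajectory[-1][0] > trajectory[0][0] else "unknown"

    def understand(gesture, scene_objects):
        # Situational context: resolve a recognized pointing gesture to an
        # object in the scene (trivially, the first object in view).
        if gesture == "pointing" and scene_objects:
            return f"pointing at {scene_objects[0]}"
        return gesture

    observations = [detect_hand(i) for i in range(10)]
    print(understand(recognize(track(observations)), ["cup", "book"]))

    The last stage is where the book's central point shows up: the trajectory classifier alone only yields a gesture label, and only the added scene context turns it into something a robot can act on.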