
    Preservation and Dissolution of the Target Firm's Embedded Ties in Acquisitions

    Our study builds on extant theory on embeddedness to examine the process of preservation and dissolution of the target firm's embedded ties in acquisitions. We identify four critical areas - communication, idiosyncratic investments, interpersonal relations, and personnel turnover - where managerial decisions taken during the acquisition process affect the components of the target firm's embedded ties: trust, joint problem-solving, and the exchange of fine-grained information. Whether an embedded tie is preserved or dissolved ultimately depends on two tie-specific contingencies: the balance of power between the target firm and the embedded partner, and interpretive processes at the interface between the two. Our findings have implications for the study of the dissolution of market ties, as they point to the different roles played by social and institutional forces, power asymmetries, and competition in the dynamics of embedded ones. Finally, we encourage theory development in acquisition studies by positing the importance of interpretive processes and, more broadly, relational elements that span the boundaries of the parent-target dyad.
    Keywords: Acquisitions; Embedded tie dynamics; Embedded ties

    Screening Mothers: Representations of motherhood in Australian films from 1900 to 1988.

    Although the position of mothers has changed considerably since the beginning of the twentieth century, an idealised notion of motherhood persists. The cinema provides a source of information about attitudes towards mothering in Australian society that is not diminished by the fact that mothers are often marginal to the narrative. While the study recognises that cinematic images are not unconditionally authoritative, it rests on the belief that films have some capacity to reflect and influence society. The films are placed in an historical context with regard to social change in Australian society, so that the images can be understood within the context of the time of the making and viewing of the films. The depictions of the mother are scrutinised with regard to her appearance, her attitude, her relationships with others, and the expectations, whether explicit or implicit, of her role. Of particular significance is what happens to her during the film and whether she is punished or rewarded for her behaviour. The conclusions reached after analysis are used to challenge those ideas which assume that portrayals of motherhood are unchangeable and timeless. The study examines Australian feature films from 1900 to 1988. To augment its historical focus, it uses sociological, psychoanalytical and feminist theoretical writing with special relevance for motherhood and mothering practice. Looking at areas of importance to mothers, it comprises an exploration of what makes a mother good or bad; the significance of the birth of female and male children; the relationship of mothers to daughters; the mother's sexuality; and the metaphor of the missing mother. It shows that images of motherhood on screen are organised according to political, social and economic requirements in the community. Further, films frequently show mothers in traditional roles which are useful for maintaining notions of patriarchal privilege in society. The analysis exposes stereotypical depictions of motherhood which are often inaccurate, unfair and oppressive to women.

    Unsupervised text segmentation predicts eye fixations during reading

    Words typically form the basis of psycholinguistic and computational linguistic studies of sentence processing. However, recent evidence shows that the basic units during reading, i.e., the items in the mental lexicon, are not always words, but can also be sub-word and supra-word units. To recognize these units, human readers require a cognitive mechanism to learn and detect them. In this paper, we assume that eye fixations during reading reveal the locations of the cognitive units, and that the cognitive units are analogous to the text units discovered by unsupervised segmentation models. We predict eye fixations by model-segmented units on both English and Dutch text. The results show that the model-segmented units predict eye fixations better than word units. This finding suggests that the predictive performance of model-segmented units indicates their plausibility as cognitive units. The Less-is-Better (LiB) model, which finds the units that minimize both long-term and working memory load, offers advantages in both prediction score and efficiency among the alternative models. Our results also suggest that modeling the least-effort principle for the management of long-term and working memory can lead to inferring cognitive units. Overall, the study supports the theory that the mental lexicon stores not only words but also smaller and larger units, suggests that fixation locations during reading depend on these units, and shows that unsupervised segmentation models can discover these units.

    Less is Better: A cognitively inspired unsupervised model for language segmentation

    Language users process utterances by segmenting them into many cognitive units, which vary in their sizes and linguistic levels. Although we can perform such unitization/segmentation easily, its cognitive mechanism is still not clear. This paper proposes an unsupervised model, Less-is-Better (LiB), to simulate the human cognitive process with respect to language unitization/segmentation. LiB follows the principle of least effort and aims to build a lexicon which simultaneously minimizes the number of unit tokens (alleviating the effort of analysis) and the number of unit types (alleviating the effort of storage) on any given corpus. LiB's workflow is inspired by empirical cognitive phenomena. This design makes the mechanism of LiB cognitively plausible and its computational requirements light-weight. The lexicon generated by LiB performs best among different types of lexicons (e.g. ground-truth words) from both an information-theoretical view and a cognitive view, which suggests that the LiB lexicon may be a plausible proxy of the mental lexicon.
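The least-effort objective described in this abstract - jointly minimizing the number of unit tokens (analysis effort) and unit types (storage effort) - can be illustrated with a toy sketch. This is not the paper's actual LiB algorithm (which learns its lexicon iteratively); the greedy longest-match segmenter and the two lexicons below are invented purely for illustration.

```python
def segment(text, lexicon):
    # Toy greedy longest-match segmentation: prefer the longest
    # substring present in the lexicon, falling back to single characters.
    units, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in lexicon or j == i + 1:
                units.append(text[i:j])
                i = j
                break
    return units

def cost(corpus, lexicon):
    # Least-effort objective in the spirit of LiB: fewer unit tokens
    # (processing effort) plus fewer unit types (storage effort).
    tokens = [u for s in corpus for u in segment(s, lexicon)]
    return len(tokens) + len(set(tokens))

corpus = ["thecat", "thedog"]
# A lexicon with reusable multi-character units scores lower (better)
# than a character-only lexicon under this joint objective.
good = cost(corpus, {"the", "cat", "dog"})
chars_only = cost(corpus, set())
```

Under this toy objective, the multi-character lexicon wins because "the" is reused across both strings, reducing both token and type counts at once.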

    Next Steps in Signaling (NSIS): Framework


    Optimal decentralized Kalman filter

    The Kalman filter is a powerful state estimation algorithm which combines noise models, a process model and measurements to obtain an accurate estimate of the states of a process. Implementing the conventional Kalman filter algorithm requires a central processor that harvests measurements from all the sensors in the field. Centralized algorithms have drawbacks such as limited reliability and robustness and a high computational load, which motivates non-centralized alternatives. This study takes optimality in the decentralized Kalman filter (DKF) as its focus and derives the optimal decentralized Kalman filter (ODKF) algorithm, for the case where the network topology is provided to every node in the network, by introducing global Kalman equations. ODKF sets a lower bound on the estimation error, in the least-squares sense, for the DKF.
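For reference, one predict/update cycle of the conventional (centralized) Kalman filter that the abstract takes as its starting point can be sketched as follows. This is the standard textbook formulation for a linear system x_k = A x_{k-1} + w, z_k = H x_k + v with process noise covariance Q and measurement noise covariance R; it is not the decentralized ODKF algorithm derived in the paper, and all variable names are the usual textbook symbols, not the paper's.

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict: propagate the state estimate and its covariance.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update: fuse the measurement z via the Kalman gain.
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Scalar example: a noisy measurement pulls the estimate toward it
# while shrinking the estimation covariance.
x1, P1 = kalman_step(np.array([0.0]), np.array([[1.0]]), np.array([1.0]),
                     np.eye(1), np.eye(1), np.array([[0.01]]), np.array([[0.1]]))
```

In a decentralized setting, each node would run a local version of this cycle and exchange information with neighbors; the paper's contribution is showing how to make that scheme match the centralized optimum when every node knows the network topology.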

    Meta-Learning for Phonemic Annotation of Corpora

    We apply rule induction, classifier combination and meta-learning (stacked classifiers) to the problem of bootstrapping high-accuracy automatic annotation of corpora with pronunciation information. The task we address in this paper consists of generating phonemic representations reflecting the Flemish and Dutch pronunciations of a word on the basis of its orthographic representation (which in turn is based on the actual speech recordings). We compare several possible approaches to the text-to-pronunciation mapping task: memory-based learning, transformation-based learning, rule induction, maximum entropy modeling, combination of classifiers in stacked learning, and stacking of meta-learners. We are interested both in optimal accuracy and in obtaining insight into the linguistic regularities involved. As far as accuracy is concerned, an already high accuracy level for single classifiers (93% for Celex and 86% for Fonilex at word level) is boosted significantly with additional error reductions of 31% and 38% respectively using combination of classifiers, and a further 5% using combination of meta-learners, bringing overall word-level accuracy to 96% for the Dutch variant and 92% for the Flemish variant.
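The stacking scheme this abstract describes - base classifiers whose outputs feed a higher-level combiner - can be sketched in miniature. The base classifiers and the majority-vote combiner below are invented stand-ins; the paper's actual systems (memory-based learning, transformation-based learning, rule induction, maximum entropy modeling) and its trained meta-learners are far richer than this.

```python
def stack_predict(base_classifiers, meta_classifier, x):
    # Level 0: every base classifier produces its own prediction for x.
    level0 = [clf(x) for clf in base_classifiers]
    # Level 1: the meta-classifier maps the vector of base predictions
    # to a final label; in real stacking it is trained on held-out data.
    return meta_classifier(level0)

def majority(preds):
    # Trivial combiner used here in place of a trained meta-learner.
    return max(set(preds), key=preds.count)

# Toy base classifiers that disagree: the combiner resolves the conflict.
base = [lambda x: "a", lambda x: "b", lambda x: "a"]
label = stack_predict(base, majority, "some-grapheme-context")
```

The error reductions reported in the abstract come precisely from this kind of combination: individually imperfect classifiers make partially uncorrelated errors, which a combiner or trained meta-learner can cancel out.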