
    The Long-Term Erosion of Repeat-Purchase Loyalty

    The study investigates the long-term erosion of repeat-purchase loyalty among consumers who buy brands in a one-year base period, using a five-year consumer panel of continuous reporters. We identify brand buyers in a base year, then calculate the proportion that fail to buy the brand in later years, analysing the top 20 brands in 10 consumer goods categories. We find pronounced erosion in repeat-buying over the long term: the proportion of buyers from a base year who fail to buy the brand in a later year increases steadily, from 57% in year 2 to 71.5% by year 5. Moreover, we identify brand and marketing mix factors linked to this over-time customer loss, or erosion. The study provides evidence that consumers’ propensity to buy particular brands changes over a period of years, even though those brands continue to exhibit stable market share. This evidence supports a different interpretation from the literature to date, which has viewed purchase propensities as fixed. The study finds that store brands and niche brands exhibit lower levels of erosion in their buyer base; that a broad range is associated with lower erosion; and that high price promotion incidence is associated with lower erosion for manufacturer brands. Loyalty erosion has been reported before (Ehrenberg, 1988; East & Hammond, 1996), but only over short periods. This study examines the phenomenon over five years, confirms that the rate of erosion diminishes over time, and shows that it is related to category and brand characteristics as well as marketing mix decisions.
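The erosion measure described above (the proportion of base-year brand buyers who fail to repurchase in a later year) can be sketched as a simple set computation. The panel structure, household identifiers and values below are illustrative, not the study's data.

```python
# Sketch of the erosion measure: share of base-year brand buyers
# who fail to buy the brand again in a later year.
# Households and purchase records here are illustrative.

def erosion_rate(base_year_buyers, later_year_buyers):
    """Proportion of base-year buyers absent from the later year."""
    lapsed = base_year_buyers - later_year_buyers  # set difference
    return len(lapsed) / len(base_year_buyers)

base = {"h1", "h2", "h3", "h4"}   # households buying the brand in year 1
year2 = {"h1", "h3", "h9"}        # households buying the brand in year 2
print(erosion_rate(base, year2))  # 2 of 4 base buyers lapsed -> 0.5
```

Applied per brand and per later year, this yields the over-time erosion series the study reports (e.g. 57% by year 2, 71.5% by year 5).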

    Voting and the Cardinal Aggregation of Judgments

    The paper elaborates the idea that voting is an instance of the aggregation of judgments, a more general concept than the aggregation of preferences. To aggregate judgments one must first measure them. I show that such aggregation has been unproblematic whenever it has been based on an independent and unrestricted scale. The scales analyzed in voting theory are either context dependent or subject to unreasonable restrictions, and this is the real source of the diverse 'paradoxes of voting', which would better be termed 'voting pathologies'. The theory leads me to advocate what I term evaluative voting. It can also be called utilitarian voting, as it is based on having voters express their cardinal preferences: the alternative that maximizes the sum wins. This proposal operationalizes, in an election context, the abstract cardinal theories of collective choice due to Fleming and Harsanyi. On pragmatic grounds, I argue for a three-valued scale for general elections.
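The evaluative voting rule described here can be sketched in a few lines: each voter grades every alternative on a cardinal scale, and the alternative with the highest sum wins. The ballots, candidate names and the particular three-valued scale {0, 1, 2} below are illustrative assumptions.

```python
# Evaluative (utilitarian) voting: each voter grades every alternative
# on a small cardinal scale; the alternative with the highest total wins.
# Ballots and candidate names are illustrative.

def evaluative_winner(ballots):
    totals = {}
    for ballot in ballots:  # ballot: {candidate: grade}
        for candidate, grade in ballot.items():
            totals[candidate] = totals.get(candidate, 0) + grade
    return max(totals, key=totals.get)

# A three-valued scale {0, 1, 2}, of the kind argued for in elections:
ballots = [
    {"A": 2, "B": 1, "C": 0},
    {"A": 0, "B": 2, "C": 1},
    {"A": 1, "B": 2, "C": 0},
]
print(evaluative_winner(ballots))  # B has the highest sum (5 vs 3 vs 1)
```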

    Combining classifiers for robust PICO element detection

    Background: Formulating a clinical information need in terms of the four atomic parts, Population/Problem, Intervention, Comparison and Outcome (known as PICO elements), facilitates searching for a precise answer within a large medical citation database. However, using PICO-defined items in the information retrieval process requires a search engine that can detect and index PICO elements in the collection so that the system can retrieve relevant documents. Methods: In this study, we tested multiple supervised classification algorithms and their combinations for detecting PICO elements within medical abstracts. Using the structural descriptors that are embedded in some medical abstracts, we automatically gathered large training/testing data sets for each PICO element. Results: Combining multiple classifiers using a weighted linear combination of their prediction scores achieves promising results, with an f-measure score of 86.3% for P, 67% for I and 56.6% for O. Conclusions: Our experiments on the identification of PICO elements showed that the task is very challenging. Nevertheless, the performance achieved by our identification method is competitive with previously published results and shows that the task can be achieved with high accuracy for the P element, but lower accuracy for the I and O elements.
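The combination scheme named in the Results, a weighted linear combination of the classifiers' prediction scores, can be sketched as follows. The weights and scores are illustrative, not the study's fitted values.

```python
# Weighted linear combination of classifier prediction scores,
# the combination scheme described in the abstract.
# Weights and per-sentence scores here are illustrative.

def combine_scores(score_vectors, weights):
    """Combine per-classifier score vectors into one score per instance."""
    n = len(score_vectors[0])
    combined = [0.0] * n
    for scores, w in zip(score_vectors, weights):
        for i, s in enumerate(scores):
            combined[i] += w * s
    return combined

# Two classifiers scoring three abstract sentences for one PICO element:
clf1 = [0.9, 0.2, 0.6]
clf2 = [0.7, 0.4, 0.1]
print(combine_scores([clf1, clf2], weights=[0.6, 0.4]))
```

A threshold (or an argmax across the P, I, C, O element scores) would then turn the combined scores into element labels.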

    Population-based studies of myocardial hypertrophy: high resolution cardiovascular magnetic resonance atlases improve statistical power

    BACKGROUND: Cardiac phenotypes, such as left ventricular (LV) mass, demonstrate high heritability, although most genes associated with these complex traits remain unidentified. Genome-wide association studies (GWAS) have relied on conventional 2D cardiovascular magnetic resonance (CMR) as the gold standard for phenotyping. However, this technique is insensitive to the regional variations in wall thickness that are often associated with left ventricular hypertrophy, and it requires large cohorts to reach significance. Here we test whether automated cardiac phenotyping using high spatial resolution CMR atlases can achieve improved precision for mapping wall thickness in healthy populations, and whether smaller sample sizes are required compared to conventional methods. METHODS: LV short-axis cine images were acquired in 138 healthy volunteers using standard 2D imaging and 3D high spatial resolution CMR. A multi-atlas technique was used to segment and co-register each image. The agreement between methods for end-diastolic volume and mass was assessed using Bland-Altman analysis in 20 subjects. The 3D and 2D segmentations of the LV were compared to manual labeling by the proportion of concordant voxels (Dice coefficient) and the distances separating corresponding points. Parametric and nonparametric data were analysed with paired t-tests and Wilcoxon signed-rank tests respectively. Voxelwise power calculations used the interstudy variances of wall thickness. RESULTS: The 3D volumetric measurements showed no bias compared to 2D imaging. The segmented 3D images were more accurate than 2D images for defining the epicardium (Dice: 0.95 vs 0.93, P < 0.001; mean error 1.3 mm vs 2.2 mm, P < 0.001) and endocardium (Dice: 0.95 vs 0.93, P < 0.001; mean error 1.1 mm vs 2.0 mm, P < 0.001). The 3D technique resulted in significant differences in wall thickness assessment at the base, septum and apex of the LV compared to 2D (P < 0.001). Fewer subjects were required for 3D imaging to detect a 1 mm difference in wall thickness (72 vs 56, P < 0.001). CONCLUSIONS: High spatial resolution CMR with automated phenotyping provides greater power for mapping wall thickness than conventional 2D imaging, and enables a reduction in the sample size required for studies of environmental and genetic determinants of LV wall thickness.
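The sample-size comparison in the Results follows from a power calculation of the kind described in the Methods: the subjects needed to detect a given wall-thickness difference scale with the interstudy variance. A minimal sketch using the standard normal-approximation formula for a paired design; the interstudy standard deviations below are illustrative assumptions, not the study's measured variances.

```python
# Sketch of a sample-size calculation from interstudy variance:
# fewer subjects are needed when measurement variability is lower.
# Standard deviations here are illustrative, not the study's values.
import math

def subjects_needed(sd_interstudy, delta_mm):
    """Subjects needed to detect `delta_mm` with two-sided alpha = 0.05
    and 80% power (normal-approximation, paired design)."""
    z_a = 1.959964  # z for two-sided alpha = 0.05
    z_b = 0.841621  # z for 80% power
    n = ((z_a + z_b) ** 2) * sd_interstudy ** 2 / delta_mm ** 2
    return math.ceil(n)

# A lower interstudy standard deviation (as with the 3D atlas technique)
# means fewer subjects for the same 1 mm wall-thickness difference:
print(subjects_needed(sd_interstudy=2.0, delta_mm=1.0))  # 32
print(subjects_needed(sd_interstudy=1.5, delta_mm=1.0))  # 18
```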

    Agent-based Social Psychology: from Neurocognitive Processes to Social Data

    Moral Foundation Theory states that groups of different observers may rely on partially dissimilar sets of moral foundations, thereby reaching different moral valuations. The use of functional imaging techniques has revealed a spectrum of cognitive styles, with respect to the differential handling of novel or corroborating information, that is correlated with political affiliation. Here we characterize the collective behavior of an agent-based model whose inter-individual interactions, due to information exchange in the form of opinions, are in qualitative agreement with experimental neuroscience data. The main conclusion connects the existence of diversity in cognitive strategies with the statistics of the sets of moral foundations, and suggests that this connection arises from interactions between agents. Thus a simple interacting-agent model, whose interactions are in accord with empirical data on conformity and learning processes, presents statistical signatures consistent with the moral judgment patterns of conservatives and liberals as obtained by survey studies of social psychology. Comment: 11 pages, 4 figures, 2 C codes; to appear in Advances in Complex Systems.
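A minimal sketch of the kind of opinion-exchange update such a model uses: an agent moves its opinion toward a neighbour's, weighting corroborating and novel (disagreeing) information differently to represent distinct cognitive styles. The update rule and all parameters are illustrative assumptions, not the paper's actual model.

```python
# Illustrative opinion-exchange update: corroborating and novel
# (disagreeing) signals are weighted differently, modelling a
# spectrum of cognitive styles. All parameters are assumptions.
import random

def update_opinion(own, neighbour, novelty_weight, learning_rate=0.1):
    """Move `own` toward `neighbour`; disagreeing signals are scaled
    by `novelty_weight` (< 1 discounts novel information)."""
    agrees = own * neighbour > 0
    weight = 1.0 if agrees else novelty_weight
    return own + learning_rate * weight * (neighbour - own)

random.seed(0)
opinions = [random.uniform(-1, 1) for _ in range(5)]
for _ in range(100):  # repeated pairwise information exchange
    i, j = random.sample(range(5), 2)
    opinions[i] = update_opinion(opinions[i], opinions[j], novelty_weight=0.5)
print(opinions)  # opinions drift toward consensus but stay in [-1, 1]
```

The statistics of many such agents' final opinions are what the paper compares against survey data on moral foundations.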

    AT innovation ecosystem design – a Kenyan case study

    Innovations within the AT space frequently fail to get to market, and therefore to the people who could benefit from the products. The Scoping Report which underpins the AT2030 programme identified the need to test and develop “what works” for AT innovation, to ensure new products, services and approaches are able to scale and reach people, especially people living in low- and middle-income countries. This paper sets out the initial thinking for an East Africa Innovation Ecosystem. We present the emerging thinking from initial scoping exercises and product trials which have helped to shape the newly launched Innovate Now ecosystem. We outline the ecosystem, including its core elements: the accelerator programmes and Live Labs. Live Labs will allow for rapid innovation testing and user feedback, increasing user involvement in the design and development process and reducing the time to market. The Innovate Now ecosystem is growing and is being led by AMREF. Successful graduates of Innovate Now will be connected into the Innovation Scale Fund, which will be launched by AT2030 next year (2020).

    Using second harmonic generation to predict patient outcome in solid tumors

    Background: Over-treatment of estrogen receptor positive (ER+), lymph node-negative (LNN) breast cancer patients with chemotherapy is a pressing clinical problem that can be addressed by improving techniques to predict tumor metastatic potential. Here we demonstrate that analysis of second harmonic generation (SHG) emission direction in primary tumor biopsies can provide prognostic information about the metastatic outcome of ER+, LNN breast cancer, as well as stage I colorectal adenocarcinoma. Methods: SHG is an optical signal produced by fibrillar collagen. The ratio of the forward-to-backward emitted SHG signals (F/B) is sensitive to changes in the structure of individual collagen fibers. F/B from excised primary tumor tissue was measured in a retrospective study of LNN breast cancer patients who had received no adjuvant systemic therapy, and related to metastasis-free survival (MFS) and overall survival (OS) rates. In addition, F/B was studied for its association with the length of progression-free survival (PFS) in a subgroup of ER+ patients who received tamoxifen as first-line treatment for recurrent disease, and for its relation with OS in stage I colorectal and stage I lung adenocarcinoma patients. Results: In 125 ER+, but not in 96 ER-negative (ER-), LNN breast cancer patients, an increased F/B was significantly associated with a favorable MFS and OS (log rank trend for MFS: p = 0.004 and for OS: p = 0.03). On the other hand, an increased F/B was associated with shorter PFS in 60 ER+ recurrent breast cancer patients treated with tamoxifen (log rank trend p = 0.02). In stage I colorectal adenocarcinoma, an increased F/B was significantly related to poor OS (log rank trend p = 0.03); however, this relationship was not statistically significant in stage I lung adenocarcinoma. Conclusion: Within ER+, LNN breast cancer specimens, the F/B can stratify patients based upon their potential for tumor aggressiveness. This offers a “matrix-focused” method to predict metastatic outcome that is complementary to genomic “cell-focused” methods. In combination, this and other methods may contribute to improved metastatic prediction, and hence may help to reduce patient over-treatment.
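The F/B measure itself is a simple ratio, and stratifying a cohort by it (for example at the median) can be sketched as follows. The intensity values and the median split are illustrative assumptions, not the study's measurement or cutoff procedure.

```python
# Sketch of the F/B measure: ratio of forward- to backward-emitted
# SHG intensity, then a simple high/low stratification of patients.
# Intensities and the median cutoff here are illustrative.

def f_over_b(forward_intensity, backward_intensity):
    """Forward-to-backward SHG emission ratio for one specimen."""
    return forward_intensity / backward_intensity

def stratify(ratios):
    """Split patients into high/low F/B groups at the (upper) median."""
    cutoff = sorted(ratios)[len(ratios) // 2]
    return ["high" if r >= cutoff else "low" for r in ratios]

# Three hypothetical specimens (forward, backward) intensity pairs:
ratios = [f_over_b(f, b) for f, b in [(8.0, 2.0), (3.0, 2.0), (5.0, 2.5)]]
print(ratios)            # [4.0, 1.5, 2.0]
print(stratify(ratios))  # ['high', 'low', 'high']
```

Survival curves (MFS, OS, PFS) would then be compared between the high and low F/B groups, as in the log-rank analyses reported above.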

    A “Learning Revolution”? Investigating Pedagogic Practices around Interactive Whiteboards in British Primary Classrooms

    Interactive whiteboards have been rapidly introduced into all primary schools under UK Government initiatives. These large, touch-sensitive screens, which control a computer connected to a digital projector, seem to be the first type of educational technology particularly suited for whole-class teaching and learning. Strong claims are made for their value by manufacturers and policy makers, but there has been little research on how, if at all, they influence established pedagogic practices, communicative processes and educational goals. This study has been designed to examine this issue, using observations in primary (elementary) school classrooms. It is funded by the UK Economic and Social Research Council and builds on the authors’ previous research on ICT in educational dialogues and collaborative activities.

    Benevolent characteristics promote cooperative behaviour among humans

    Cooperation is fundamental to the evolution of human society. We regularly observe cooperative behaviour in everyday life and in controlled experiments with anonymous people, even though standard economic models predict that they should deviate from the collective interest and act so as to maximise their own individual payoff. However, there is typically heterogeneity across subjects: some may cooperate, while others may not. Since individual factors promoting cooperation could be used by institutions to indirectly prime cooperation, this heterogeneity raises the important question of who these cooperators are. We have conducted a series of experiments to study whether benevolence, defined as a unilateral act of paying a cost to increase the welfare of someone else beyond one's own, is related to cooperation in a subsequent one-shot anonymous Prisoner's dilemma. Contrary to the predictions of the widely used inequity aversion models, we find that benevolence does exist and that a large majority of people behave this way. We also find benevolence to be correlated with cooperative behaviour. Finally, we show a causal link between benevolence and cooperation: priming people to think positively about benevolent behaviour makes them significantly more cooperative than priming them to think malevolently. Thus, benevolent people exist and cooperate more.
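The one-shot Prisoner's dilemma used in such experiments has the standard payoff structure T > R > P > S, under which defection dominates individually although mutual cooperation pays more. A minimal sketch with illustrative payoff values:

```python
# Standard one-shot Prisoner's dilemma payoff structure (T > R > P > S).
# The particular payoff values are illustrative, not the experiment's.

PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation (R, reward)
    ("C", "D"): 0,  # cooperating against a defector (S, sucker's payoff)
    ("D", "C"): 5,  # defecting against a cooperator (T, temptation)
    ("D", "D"): 1,  # mutual defection (P, punishment)
}

def payoff(me, other):
    return PAYOFFS[(me, other)]

# Defection dominates individually, yet mutual cooperation pays more:
print(payoff("D", "C") > payoff("C", "C"))  # True: T beats R
print(payoff("C", "C") > payoff("D", "D"))  # True: R beats P
```

This structure is why observed cooperation contradicts the standard payoff-maximising prediction mentioned above.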