
    From Brand Loyalty to E-Loyalty: a Conceptual Framework

    The concept of brand loyalty has been extensively discussed in the traditional marketing literature, with the main emphasis on two dimensions: behavioral and attitudinal loyalty. Oliver (1997) extended the conceptual framework of brand loyalty to cover the full spectrum of loyalty, based on a hierarchy-of-effects model with cognitive, affective, conative (behavioral intent) and action (repeat purchase behavior) dimensions. The concept of e-loyalty extends traditional brand loyalty to online consumer behavior. Although the underlying theoretical foundations of traditional brand loyalty and the newly defined phenomenon of e-loyalty are broadly similar, e-loyalty manifests in unique ways in Internet-based marketing and buyer behavior. Schultz and Bailey (2000) describe customer/brand loyalty in cyberspace as an evolution from a product-driven, marketer-controlled concept toward a distribution-driven, consumer-controlled and technology-facilitated one. E-loyalty also has several parallels to the "store loyalty" concept (Corstjens and Lal, 2000), such as building repeat store-visiting behavior over and above the purchase of established brand-name items in the store. As extensively discussed in Schefter and Reichheld (2000), e-loyalty is all about quality customer support, on-time delivery, compelling product presentations, convenient and reasonably priced shipping and handling, and clear and trustworthy privacy policies. This paper presents an integrated framework of e-loyalty and its underlying drivers in terms of (a) Website & Technology, (b) Customer Service & Logistics, (c) Trust & Security, (d) Product & Price and (e) Brand Building Activities. The role of these factors in building customer loyalty is discussed with examples of current practice. Managerial and future research implications of the proposed framework are also presented.

    Three-dimensional modeling of the human eye based on magnetic resonance imaging

    PURPOSE. A methodology for noninvasively characterizing the three-dimensional (3-D) shape of the complete human eye is not currently available for research into ocular diseases that have a structural substrate, such as myopia. A novel application of a magnetic resonance imaging (MRI) acquisition and analysis technique is presented that, for the first time, allows the 3-D shape of the eye to be investigated fully. METHODS. The technique involves the acquisition of a T2-weighted MRI, which is optimized to reveal the fluid-filled chambers of the eye. Automatic segmentation and meshing algorithms generate a 3-D surface model, which can be shaded with morphologic parameters such as distance from the posterior corneal pole and deviation from sphericity. Full details of the method are illustrated with data from 14 eyes of seven individuals. The spatial accuracy of the calculated models is demonstrated by comparing the MRI-derived axial lengths with values measured in the same eyes using interferometry. RESULTS. The color-coded eye models showed substantial variation in the absolute size of the 14 eyes. Variations in the sphericity of the eyes were also evident: some appeared approximately spherical, whereas others were clearly oblate and one was slightly prolate. Nasal-temporal asymmetries were noted in some subjects. CONCLUSIONS. The MRI acquisition and analysis technique allows a novel way of examining 3-D ocular shape. The ability to stratify and analyze eye shape, ocular volume, and sphericity will further extend the understanding of which specific biometric parameters predispose emmetropic children to subsequently develop myopia. Copyright © Association for Research in Vision and Ophthalmology.
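
    To make the "deviation from sphericity" shading concrete, here is a minimal numpy sketch that fits a sphere to surface-model vertices by linear least squares and reports each vertex's radial deviation. The vertex array, file name, and workflow are hypothetical illustrations, not the authors' pipeline.

    # A minimal sketch (not the authors' pipeline): fit a sphere to the
    # vertices of an MRI-derived eye surface mesh and measure how far
    # each vertex deviates from perfect sphericity.
    import numpy as np

    def fit_sphere(vertices: np.ndarray):
        """Least-squares sphere fit: |v - c|^2 = r^2 rearranges to a
        linear system in (c, r^2 - |c|^2)."""
        A = np.hstack([2 * vertices, np.ones((len(vertices), 1))])
        b = (vertices ** 2).sum(axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        center = sol[:3]
        radius = np.sqrt(sol[3] + center @ center)
        return center, radius

    # Hypothetical input: an (N, 3) array of mesh vertex coordinates in
    # mm, e.g. from segmenting and meshing the T2-weighted volume.
    vertices = np.load("eye_mesh_vertices.npy")  # placeholder file name

    center, radius = fit_sphere(vertices)
    deviation = np.linalg.norm(vertices - center, axis=1) - radius
    print(f"best-fit radius: {radius:.2f} mm")
    print(f"deviation from sphericity: min {deviation.min():.2f} mm, "
          f"max {deviation.max():.2f} mm")  # could color-code the mesh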

    Three-dimensional magnetic resonance imaging of the phakic crystalline lens during accommodation

    To quantify changes in crystalline lens curvature, thickness, equatorial diameter, surface area, and volume during accommodation using a novel two-dimensional magnetic resonance imaging (MRI) paradigm to generate a complete three-dimensional crystalline lens surface model.
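
    Given a closed, consistently oriented triangulated surface model of the kind described, the volume and surface-area metrics can be computed with the standard signed-tetrahedron and triangle-area formulas. A minimal numpy sketch, with hypothetical file names standing in for the real mesh data (this is not the paper's method):

    # A minimal sketch, not the paper's method: compute volume and
    # surface area of a closed triangulated lens surface model.
    import numpy as np

    def mesh_volume_and_area(verts: np.ndarray, faces: np.ndarray):
        """verts: (N, 3) vertex coordinates; faces: (M, 3) vertex
        indices. Volume is the signed-tetrahedron sum (divergence
        theorem); the mesh must be closed and consistently oriented."""
        a, b, c = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
        cross = np.cross(b - a, c - a)
        volume = np.abs((a * cross).sum() / 6.0)  # sum of a.(b-a)x(c-a)/6
        area = 0.5 * np.linalg.norm(cross, axis=1).sum()
        return volume, area

    verts = np.load("lens_vertices.npy")  # hypothetical (N, 3) floats
    faces = np.load("lens_faces.npy")     # hypothetical (M, 3) ints
    vol, area = mesh_volume_and_area(verts, faces)
    print(f"volume: {vol:.1f} mm^3, surface area: {area:.1f} mm^2")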

    Counter Turing Test CT^2: AI-Generated Text Detection is Not as Easy as You May Think -- Introducing AI Detectability Index

    With the rise of the prolific ChatGPT, the risks and consequences of AI-generated text have increased alarmingly. To address the inevitable question of ownership attribution for AI-generated artifacts, the US Copyright Office released a statement stating that 'If a work's traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it'. Furthermore, both the US and the EU governments have recently drafted their initial proposals regarding the regulatory framework for AI. Given this cynosural spotlight on generative AI, AI-generated text detection (AGTD) has emerged as a topic that has already received immediate attention in research, with some initial methods having been proposed, soon followed by the emergence of techniques to bypass detection. This paper introduces the Counter Turing Test (CT^2), a benchmark consisting of techniques aimed at offering a comprehensive evaluation of the robustness of existing AGTD techniques. Our empirical findings unequivocally highlight the fragility of the AGTD methods under scrutiny. Amidst the extensive deliberations on policy-making for regulating AI development, it is of utmost importance to assess the detectability of content generated by LLMs. Thus, to establish a quantifiable spectrum facilitating the evaluation and ranking of LLMs according to their detectability levels, we propose the AI Detectability Index (ADI). We conduct a thorough examination of 15 contemporary LLMs, empirically demonstrating that larger LLMs tend to have a higher ADI, indicating they are less detectable than smaller LLMs. We firmly believe that ADI holds significant value as a tool for the wider NLP community, with the potential to serve as a rubric in AI-related policy-making.
    Comment: EMNLP 2023 Main
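
    The abstract does not give the ADI formula, so the sketch below is a purely illustrative stand-in: it scores each model by one minus its mean detector flag rate, so a higher index means the model's text is harder to detect, loosely matching the stated direction that larger LLMs score higher. All model and detector names and rates are invented.

    # Illustrative stand-in only -- NOT the paper's ADI formula.
    from statistics import mean

    # Hypothetical flag rates: detections[model][detector] = fraction of
    # the model's generations that the detector labels machine-written.
    detections = {
        "small-llm": {"det_a": 0.92, "det_b": 0.88},
        "large-llm": {"det_a": 0.61, "det_b": 0.55},
    }

    def detectability_index(rates):
        # Higher index = flagged less often = harder to detect.
        return 1.0 - mean(rates.values())

    for model, rates in sorted(detections.items(),
                               key=lambda kv: detectability_index(kv[1])):
        print(f"{model}: index {detectability_index(rates):.2f}")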

    Decreased Gray Matter Concentration in the Lateral Geniculate Nuclei in Human Amblyopes

    Purpose. In a group of humans with strabismic amblyopia, the relationship was examined between the structure and function of different brain regions. Three questions were addressed: (1) Is the lateral geniculate nucleus (LGN) in humans with amblyopia structurally as well as functionally abnormal? (2) Do structural anomalies in the visual cortex correlate with the previously reported cortical functional losses? (3) Is there a link between the functional anomalies in the visual cortex and any structural anomalies in the geniculate? Methods. Structure was compared using voxel-based morphometry (VBM) and function by functional magnetic resonance imaging (fMRI). Results. The results showed that the geniculate is structurally abnormal in humans with strabismic amblyopia. Conclusions. These findings add further weight to the role of the LGN in the cortical deficits exhibited in human strabismic amblyopes.
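
    At its core, VBM reduces to a voxelwise group comparison of gray matter maps. A minimal scipy sketch of that comparison, with random arrays standing in for preprocessed gray matter concentration maps (real VBM adds spatial normalization, smoothing, and multiple-comparison correction):

    # A minimal VBM-style sketch: voxelwise two-sample t-test comparing
    # gray matter concentration maps between amblyopes and controls.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Stand-in data: (subjects, x, y, z) gray matter concentration maps.
    amblyopes = rng.normal(0.50, 0.05, size=(10, 16, 16, 16))
    controls = rng.normal(0.52, 0.05, size=(12, 16, 16, 16))

    t, p = stats.ttest_ind(amblyopes, controls, axis=0)
    significant = p < 0.001  # uncorrected threshold, illustration only
    print(f"{significant.sum()} voxels below p < 0.001 (uncorrected)")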

    The Simon and Simon-Mars Tensors for Stationary Einstein-Maxwell Fields

    Modulo conventional scale factors, the Simon and Simon-Mars tensors are defined for stationary vacuum spacetimes so that their equality follows from the Bianchi identities of the second kind. In the nonvacuum case one can absorb additional source terms into a redefinition of the Simon tensor so that this equality is maintained. Among the electrovacuum class of solutions of the Einstein-Maxwell equations, the expression for the Simon tensor in the Kerr-Newman-Taub-NUT spacetime in terms of the Ernst potential is formally the same as in the vacuum case (modulo a scale factor), and its vanishing guarantees the simultaneous alignment of the principal null directions of the Weyl tensor, the Papapetrou field associated with the timelike Killing vector field, the electromagnetic field of the spacetime, and even the Killing-Yano tensor.
    Comment: 12 pages, LaTeX IOP article class, no figures
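
    For orientation, a sketch in textbook notation of the stationary-spacetime ingredients behind the Ernst potential mentioned above. As the abstract itself notes, conventions and scale factors vary between authors, so these are standard forms, not necessarily the paper's exact definitions:

    % norm and twist one-form of the timelike Killing vector \xi^\mu
    f = -\xi_\mu \xi^\mu , \qquad
    \omega_\mu = \epsilon_{\mu\nu\rho\sigma}\, \xi^\nu \nabla^\rho \xi^\sigma ,
    % vacuum Ernst potential, with twist potential \omega obtained from \omega_\mu
    \mathcal{E} = f + i\,\omega ,
    % a common Einstein-Maxwell generalization, \Phi the electromagnetic Ernst potential
    \mathcal{E} = f - \bar{\Phi}\Phi + i\,\omega .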

    A bottom-up design framework for CAD tools to support design for additive manufacturing

    Additive manufacturing (AM) technology is enabling a platform to produce parts with enhanced shape complexity, and design engineers are exploiting this capability to produce high-performance functional parts. The current top-down approach to design for AM requires the designer to develop a design model in CAD software and then use optimization tools to adapt the design for the AM technology; however, this approach neglects a number of desired criteria. This paper proposes an alternative bottom-up design framework for a new type of CAD tool, which combines the knowledge required to design a part with evolutionary programming in order to design parts specifically for the AM platform.
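
    As a loose illustration of the evolutionary-programming idea, the sketch below runs a generic genetic loop over a voxel-occupancy "part". The representation, fitness function, and parameters are invented for illustration; this is not the paper's CAD tool, whose fitness would encode real AM criteria such as stiffness or support-free printability.

    # Generic evolutionary loop illustrating the bottom-up idea only.
    import random

    TARGET_FILL = 0.4  # hypothetical material budget
    N_VOXELS = 64

    def fitness(part):
        # Reward matching the target fill fraction (toy objective).
        return -abs(sum(part) / N_VOXELS - TARGET_FILL)

    def mutate(part, rate=0.05):
        # Flip each voxel with a small probability.
        return [v ^ (random.random() < rate) for v in part]

    population = [[random.randint(0, 1) for _ in range(N_VOXELS)]
                  for _ in range(20)]
    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        parents = population[:5]  # truncation selection
        population = parents + [mutate(random.choice(parents))
                                for _ in range(15)]

    best = max(population, key=fitness)
    print(f"best fill fraction: {sum(best) / N_VOXELS:.2f}")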

    Narrow band imaging and serology in the assessment of premalignant gastric pathology

    Background: Patient outcomes in gastric adenocarcinoma are poor due to late diagnosis. Detecting and treating the disease at the premalignant stage has the potential to improve this. Helicobacter pylori is also a strong risk factor for this disease.
    Aims: The primary aims were to assess the diagnostic accuracy of magnified narrow band imaging (NBI-Z) endoscopy and serology in detecting normal mucosa, H. pylori gastritis and gastric atrophy. Secondary aims were to compare the diagnostic accuracies of two classification systems using both NBI-Z and white light endoscopy with magnification (WLE-Z) and to evaluate inter-observer agreement.
    Methods: Patients were prospectively recruited. Images of gastric mucosa were stored with histology and serum for IgG H. pylori and pepsinogen (PG) I/II ELISAs. Blinded expert endoscopists agreed on the mucosal pattern. Mucosal images and serological markers were compared with histology. Kappa statistics determined inter-observer variability for randomly allocated images among four experts and four non-experts.
    Results: 116 patients were prospectively recruited. The diagnostic accuracy of NBI-Z for determining normal gastric mucosa was 0.87 (95% CI 0.82–0.92), H. pylori gastritis 0.65 (95% CI 0.55–0.75) and gastric atrophy 0.88 (95% CI 0.81–0.94). NBI-Z was superior to serology at detecting gastric atrophy: NBI-Z gastric atrophy 0.88 (95% CI 0.81–0.94) vs PGI/II ratio
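
    The accuracy figures with 95% confidence intervals read like areas under the ROC curve, and the agreement analysis uses kappa statistics. A minimal scikit-learn sketch of how such numbers could be computed; the labels and scores below are toy values, not the study data:

    # Toy sketch (not the study data): AUC for an index test against
    # histology, plus Cohen's kappa for two observers' image calls.
    import numpy as np
    from sklearn.metrics import roc_auc_score, cohen_kappa_score

    histology = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # 1 = gastric atrophy
    nbi_score = np.array([0.9, 0.2, 0.8, 0.7, 0.4, 0.1, 0.6, 0.3])
    print(f"NBI-Z AUC: {roc_auc_score(histology, nbi_score):.2f}")

    obs_a = ["atrophy", "normal", "atrophy", "gastritis"]
    obs_b = ["atrophy", "normal", "gastritis", "gastritis"]
    print(f"inter-observer kappa: {cohen_kappa_score(obs_a, obs_b):.2f}")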

    Deep Learning for Distinguishing Normal versus Abnormal Chest Radiographs and Generalization to Unseen Diseases

    Chest radiography (CXR) is the most widely used thoracic clinical imaging modality and is crucial for guiding the management of cardiothoracic conditions. The detection of specific CXR findings has been the main focus of several artificial intelligence (AI) systems. However, the wide range of possible CXR abnormalities makes it impractical to build specific systems to detect every possible condition. In this work, we developed and evaluated an AI system to classify CXRs as normal or abnormal. For development, we used a de-identified dataset of 248,445 patients from a multi-city hospital network in India. To assess generalizability, we evaluated our system using 6 international datasets from India, China, and the United States. Of these datasets, 4 focused on diseases that the AI was not trained to detect: 2 datasets with tuberculosis and 2 datasets with coronavirus disease 2019. Our results suggest that the AI system generalizes to new patient populations and abnormalities. In a simulated workflow where the AI system prioritized abnormal cases, the turnaround time for abnormal cases was reduced by 7-28%. These results represent an important step towards evaluating whether AI can be safely used to flag cases in a general setting where previously unseen abnormalities exist.
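
    The reported turnaround-time gain comes from reordering the reading queue so flagged cases are read first. A toy simulation of that effect, with invented arrival mix and a fixed per-case reading time (not the paper's workflow model):

    # Toy simulation of AI triage: abnormal CXRs jump the reading queue.
    import random

    random.seed(0)
    cases = [{"abnormal": random.random() < 0.3} for _ in range(1000)]
    READ_MINUTES = 5  # assumed fixed reading time per case

    def mean_abnormal_turnaround(queue):
        elapsed, total, n = 0, 0, 0
        for case in queue:
            elapsed += READ_MINUTES
            if case["abnormal"]:
                total, n = total + elapsed, n + 1
        return total / n

    fifo = mean_abnormal_turnaround(cases)
    # Stable sort: abnormal cases first, arrival order kept within groups.
    triaged = mean_abnormal_turnaround(
        sorted(cases, key=lambda c: not c["abnormal"]))
    print(f"FIFO: {fifo:.0f} min, triaged: {triaged:.0f} min, "
          f"reduction: {100 * (1 - triaged / fifo):.0f}%")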