10 research outputs found

    Faculty Research Incentives and Business School Health: A New Perspective from and for Marketing

    Grounded in sociological agency theory, the authors study the role of the faculty research incentive system in the academic research conducted at business schools and in business school health. The authors surveyed 234 marketing professors and completed 22 interviews with 14 (associate) deans and 8 external institution stakeholders. They find that research quantity contributes to the research health of the school, but not to other aspects of business school health. The r-quality of research (i.e., rigor) contributes more strongly to the research health of the school than research quantity does. The q-quality of research (i.e., practical importance) does not contribute to the research health of the school but contributes positively to teaching health and several other dimensions of business school health. Faculty research incentives are misaligned: (1) when monitoring research faculty, the number of publications receives too much weight, while creativity, literacy, relevance, and awards receive too little weight; and (2) on average, faculty feel that they are insufficiently compensated for their research, while (associate) deans feel that faculty are compensated too much for it. These incentive misalignments are largest in the schools that perform worst on research quality (both r- and q-quality). The authors explore how business schools and faculty can remedy these misalignments.

    High Reliability in Medication Administration

    This research was designed to test the theory of Robust Process Improvement (RPI) as it has been applied to the problem of medication error at Navy medical treatment facilities (MTFs). Medication error is the greatest cause of patient injury in America. In an effort to duplicate the success of High Reliability Organizations (HROs), leaders of the Joint Commission advocated the application of Lean Six Sigma (LSS) as the key element of RPI and the best way to increase safety and improve the quality of healthcare. The specific problem was that very little empirical evidence existed to support the theory. The research question asked whether the application of LSS could reduce medication error rates. To answer that question, the researcher used a quantitative pre-post design that measured the number of medication-related Patient Safety Reports (PSRs) before and after the LSS studies performed at Navy MTFs. Navy Medicine was used as a test bed because it has developed a formidable LSS program. The researcher examined all Navy LSS studies directed toward reducing medication error; there were five such studies, conducted at three different MTFs. The research hypothesis H1 stated that the medication PSR rate prior to an LSS study would be greater than the PSR rate after the study. Combined, the five studies showed a total reduction in PSRs from 462 to 407, but the reduction in PSR rate was not statistically significant. One of the LSS studies did show a statistically significant reduction in the PSR rate. Although the results did not give a decisive answer to the research question, they did provide credible evidence that LSS, if applied correctly, may reduce medication error. The findings also offered insight into how the principles of HRO should be intertwined with the interventions of process improvement initiatives to create more long-term success.

    EXPLAINABLE FEATURE- AND DECISION-LEVEL FUSION

    Information fusion is the process of aggregating knowledge from multiple data sources to produce more consistent, accurate, and useful information than any one individual source can provide. In general, there are three primary sources of data/information: humans, algorithms, and sensors. Typically, objective data---e.g., measurements---arise from sensors. Using these data sources, applications such as computer vision and remote sensing have long been applying fusion at different levels (signal, feature, decision, etc.). Furthermore, daily advances in engineering technologies like smart cars, which operate in complex and dynamic environments using multiple sensors, are raising both the demand for and the complexity of fusion. There is a great need to discover new theories to combine and analyze heterogeneous data arising from one or more sources. The work collected in this dissertation addresses the problem of feature- and decision-level fusion. Specifically, this work focuses on fuzzy Choquet integral (ChI)-based data fusion methods. Most mathematical approaches to data fusion have focused on combining inputs under the assumption that they are independent. However, there are often rich interactions (e.g., correlations) between inputs that should be exploited. The ChI is a powerful aggregation tool that is capable of modeling these interactions. Consider the fusion of m sources, for which there are 2^m unique subsets (interactions); the ChI is capable of learning the worth of each of these possible source subsets. However, the complexity of fuzzy integral-based methods grows quickly, as the number of trainable parameters for the fusion of m sources scales as 2^m. Hence, a large amount of training data is required to avoid over-fitting. This work addresses the over-fitting problem of ChI-based data fusion with novel regularization strategies. These regularization strategies alleviate over-fitting when training with limited data and also enable the user to consciously push the learned methods to take on a predefined, or perhaps known, structure. In addition, the existing methods for training the ChI for decision- and feature-level data fusion involve quadratic programming (QP). The QP-based approach for learning ChI-based data fusion solutions has high space complexity, which has limited the practical application of ChI-based data fusion methods to six or fewer input sources. To address the space complexity issue, this work introduces an online training algorithm for learning the ChI. The online method is an iterative gradient descent approach that processes one observation at a time, enabling ChI-based data fusion on higher-dimensional data sets. In many real-world data fusion applications, it is imperative to have an explanation or interpretation. This may include information on what was learned, what the worth of individual sources is, why a decision was reached, what evidence and process(es) were used, and how much confidence the system has in its decision. However, most existing machine learning solutions for data fusion, e.g., deep learning, are black boxes. In this work, we designed methods and metrics that help answer these questions of interpretation, and we also developed visualization methods that help users better understand the machine learning solution and its behavior for different instances of data.
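
    To make the aggregation concrete, below is a minimal sketch of the standard discrete Choquet integral with respect to a fuzzy measure defined on all 2^m source subsets; the measure values and inputs are invented for illustration and do not reproduce the learned or regularized solutions from this dissertation.

        import numpy as np

        def choquet_integral(h, g):
            """Discrete Choquet integral of inputs h (one value per source) with
            respect to a fuzzy measure g that maps frozensets of source indices
            to worths, with g(empty set) = 0 and g(set of all sources) = 1."""
            order = np.argsort(h)[::-1]                  # visit sources from largest to smallest input
            total, prev_worth = 0.0, 0.0
            coalition = set()
            for idx in order:
                coalition.add(int(idx))
                worth = g[frozenset(coalition)]          # worth of the growing coalition
                total += h[idx] * (worth - prev_worth)   # weight input by the coalition's marginal worth
                prev_worth = worth
            return total

        # Toy fuzzy measure for m = 3 sources: 2**3 = 8 subsets (worths made up for illustration).
        g = {frozenset(): 0.0,
             frozenset({0}): 0.4, frozenset({1}): 0.3, frozenset({2}): 0.2,
             frozenset({0, 1}): 0.8, frozenset({0, 2}): 0.6, frozenset({1, 2}): 0.5,
             frozenset({0, 1, 2}): 1.0}

        print(choquet_integral(np.array([0.9, 0.4, 0.6]), g))   # 0.64, between the min and max inputs

    For m sources the measure holds 2^m worths (2^m - 2 of them free once the boundary conditions are fixed), which is exactly why the regularization and online gradient descent training discussed above matter as m grows.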

    Measurement of direct response advertising in the financial services industry: a new metrics model

    Direct response advertising in the financial services industry in South Africa has become one of the most important tactics companies utilise to build and maintain market share. Ensuring that these advertising campaigns yield optimal return on investment is the responsibility of marketing departments and their partners in the marketing and sales processes, such as the creative and media agencies, the distribution force, and the client service area that supports the client value proposition. The marketing executive is therefore accountable for the planning, budgeting and execution of direct response campaigns, which need to deliver sufficient results to support the company’s overall business objectives. The challenge all marketers face is the lack of a proven, structured and scientific methodology to facilitate this planning, budgeting and execution process. It has long been a general view in the marketing fraternity that it is extremely difficult, if not impossible, to combine creative output measures, which are subjective in nature, with cost, sales and profit measures, which are objective in nature. This study aims to create a structured approach to marketing strategising and planning by developing a marketing metrics model that enables the marketing practitioner to budget according to the output needed to achieve the overarching business objectives of sales, cost management and profit. The model therefore unpacks the business drivers in detail, but through a marketing-effort lens, linking the various factors underlying successful marketing output to the bigger business objectives. This is done by incorporating both objective variables (verifiable data, such as cost per sale) and subjective variables (qualitative factors, such as creative quality) into a single model, which enables the marketing practitioner to identify areas of underperformance that can then be managed, tweaked or discontinued in order to optimise marketing return on investment. Although many marketing metrics models and variables exist, there is a gap in the combination of objective and subjective factors in a single model, such as the proposed model, which gives the marketer a single tool to plan, analyse and manage output in relation to pre-determined performance benchmarks.
    Business Management; DCOM (Business Management)
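
    As a purely generic illustration (not the metrics model developed in this study), one way to combine objective and subjective campaign measures into a single comparable figure is to index each measure against a pre-determined benchmark and then apply weights; every variable name, weight and benchmark value below is hypothetical.

        # Hypothetical, generic sketch of a weighted objective/subjective composite score;
        # it is not the metrics model proposed in this study.
        benchmarks = {"cost_per_sale": 250.0,    # target cost per sale (assumed currency units)
                      "response_rate": 0.020,    # target campaign response rate (assumed)
                      "creative_quality": 7.0}   # target creative panel score out of 10 (assumed)
        weights = {"cost_per_sale": 0.4, "response_rate": 0.4, "creative_quality": 0.2}
        campaign = {"cost_per_sale": 310.0, "response_rate": 0.024, "creative_quality": 6.5}

        def index_vs_benchmark(actual, target, lower_is_better=False):
            """Performance relative to benchmark, where 1.0 means exactly on target."""
            return target / actual if lower_is_better else actual / target

        scores = {
            "cost_per_sale": index_vs_benchmark(campaign["cost_per_sale"],
                                                benchmarks["cost_per_sale"], lower_is_better=True),
            "response_rate": index_vs_benchmark(campaign["response_rate"], benchmarks["response_rate"]),
            "creative_quality": index_vs_benchmark(campaign["creative_quality"], benchmarks["creative_quality"]),
        }
        composite = sum(weights[k] * scores[k] for k in weights)
        print({k: round(v, 2) for k, v in scores.items()}, "composite:", round(composite, 2))

    Component scores below 1.0 flag areas of underperformance that can then be managed, adjusted or discontinued, mirroring the diagnostic use of the model described above.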

    Discrete Geometry and Convexity in Honour of Imre Bárány

    This special volume collects contributions from the speakers of the Discrete Geometry and Convexity conference, held in Budapest, June 19–23, 2017. The aim of the conference was to celebrate the 70th birthday and the scientific achievements of Professor Imre Bárány, a pioneering researcher in discrete and convex geometry, topological methods, and combinatorics. The extended abstracts presented here are written by prominent mathematicians whose work has special connections to that of Professor Bárány. Topics covered include discrete and combinatorial geometry, convex geometry and general convexity, and topological and combinatorial methods. The research papers are presented here in two sections. After this preface and a short overview of Imre Bárány’s work, the main part consists of 20 short but very high-level surveys and/or original results (or at least extended abstracts of them) by the invited speakers. The second part contains 13 short summaries of further contributed talks. We would like to dedicate this volume to Imre, our great teacher, inspiring colleague, and warm-hearted friend.

    Multikonferenz Wirtschaftsinformatik (MKWI) 2016: Technische Universität Ilmenau, 09. - 11. März 2016; Band II

    Overview of the sub-conferences in Volume II • eHealth as a Service – Innovations for Prevention, Care and Research • Use of Enterprise Software in Teaching • Energy Informatics, Renewable Energies and New Mobility • Hedonic Information Systems • ICT-Supported Corporate Environmental and Sustainability Management • Information Systems in the Financial Sector • IT and Software Product Management in Internet-of-Things-Based Infrastructures • IT Consulting in the Context of Digital Transformation • IT Security for Critical Infrastructures • Modelling of Business Information Systems – Conceptual Models in the Age of the Digitized Economy (d!conomy) • Prescriptive Analytics in I

    Metrics--When and Why Nonaveraging Statistics Work

    Good metrics are well-defined formulae (often involving averaging) that transform multiple measures of raw numerical performance (e.g., dollar sales, referrals, number of customers) into informative summary statistics (e.g., average share of wallet, average customer tenure). Across myriad uses (benchmarking, monitoring, allocating resources, diagnosing problems, serving as explanatory variables), most require metrics that summarize information from multiple observations. On this criterion, we show empirically (with data on people) that although averaging has remarkable theoretical properties, supposedly inferior nonaveraging metrics (e.g., maximum, variance) are often better. We explain theoretically (with exact proofs) and numerically (with simulations) when and why. For example, when the environment causes a correlation between observed sample sizes (e.g., number of past purchases, projects, observations) and latent underlying parameters (e.g., the likelihood of favorable outcomes), the maximum statistic is a better metric than the mean. We refer to this environmental effect as the Muth effect; it occurs when rational markets provide more opportunities (i.e., more observations) to individuals and organizations with greater innate ability. Moreover, when environments are adverse (e.g., failure-rich), nonaveraging metrics correctly overweight favorable outcomes. We refer to this environmental effect as the Anna Karenina effect; it occurs when less-favorable outcomes convey less information. These environmental effects affect metric construction, selection, and employment.
    Keywords: metrics, metric selection, metric evaluation, summary statistics, environmental effects, natural correlations, forecasting, benchmarking, monitoring, statistical biases, choosing explanatory variables
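
    As a purely hypothetical illustration (not taken from the paper), the sketch below mimics the flavor of the Muth effect: when the number of observations an individual generates is positively correlated with that individual's latent ability, the sample maximum can track the latent parameter more closely than the sample mean. All distributions and parameter values here are invented.

        import numpy as np

        rng = np.random.default_rng(0)

        N = 20000                              # hypothetical individuals
        p = rng.uniform(0.02, 0.30, N)         # latent probability of a favorable outcome

        # Assumed Muth-effect mechanism: greater latent ability brings more opportunities,
        # so the observed sample size n is positively correlated with the latent p.
        n = 1 + rng.poisson(10 * p)

        means = np.empty(N)                    # averaging metric per individual
        maxes = np.empty(N)                    # nonaveraging metric per individual
        for i in range(N):
            outcomes = rng.binomial(1, p[i], n[i])   # n_i Bernoulli(p_i) outcomes
            means[i] = outcomes.mean()
            maxes[i] = outcomes.max()                # "was any favorable outcome observed?"

        print("corr(latent p, sample mean):", round(np.corrcoef(p, means)[0, 1], 2))
        print("corr(latent p, sample max): ", round(np.corrcoef(p, maxes)[0, 1], 2))
        # Under these invented parameters, the maximum typically correlates more strongly with
        # the latent parameter: the sample size n itself carries information about p, and the
        # maximum exploits that correlation while the unbiased mean ignores it.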