Interpersonal Bundling
This paper studies a model of interpersonal bundling, in which a monopolist offers a good for sale under a regular price and a group purchase discount if the number of consumers in a group—the bundle size—belongs to some menu of intervals. We find that this is often a profitable selling strategy in response to demand uncertainty, and it can achieve the highest profit among all possible selling mechanisms. We explain how the profitability of interpersonal bundling with a minimum or maximum group size may depend on the nature of uncertainty and on parameters of the market environment, and we discuss strategic issues related to the optimal design and implementation of these bundling schemes. Our analysis sheds light on popular marketing practices such as group purchase discounts, and it offers insights into potential new marketing innovations.
Experience Goods and Consumer Search
We introduce a search model where products differ in variety and unobserved quality (`experience goods'), and firms can establish quality reputation. We show that the inability of consumers to observe quality before purchase significantly changes how search frictions affect market performance. In equilibrium, higher search costs hinder consumers' search for better-matched variety and increase price, but can boost firms' investment in product quality. Under plausible conditions, both consumer and total welfare initially increase in search cost, whereas both would monotonically decrease if quality were observable. We apply the analysis to online markets, where low search costs coexist with low-quality products.
Statistical modeling and statistical learning for disease prediction and classification
This dissertation studies prediction and classification models for disease risk through semiparametric modeling and statistical learning. It consists of three parts. In the first part, we propose several survival models to analyze the Cooperative Huntington's Observational Research Trial (COHORT) study data accounting for the missing mutation status in relatives of study participants (Kieburtz and Huntington Study Group, 1996a). Huntington's disease (HD) is a progressive neurodegenerative disorder caused by an expansion of cytosine-adenine-guanine (CAG) repeats at the IT15 gene. A CAG repeat number greater than or equal to 36 is defined as carrying the mutation and carriers will eventually show symptoms if not censored by other events. There is an inverse relationship between the age-at-onset of HD and the CAG repeat length; the greater the CAG expansion, the earlier the age-at-onset. Accurate estimation of age-at-onset based on CAG repeat length is important for genetic counseling and the design of clinical trials for HD. Participants in COHORT (denoted as probands) undergo a genetic test and their CAG repeat number is determined. Family members of the probands do not undergo the genetic test and their HD onset information is provided by probands. Several methods are proposed in the literature to model the age-specific cumulative distribution function (CDF) of HD onset as a function of the CAG repeat length. However, none of the existing methods can be directly used to analyze COHORT proband and family data because family members' mutation status is not always known. In this work, we treat the presence or absence of an expanded CAG repeat in first-degree family members as missing data and use the expectation-maximization (EM) algorithm to carry out the maximum likelihood estimation of the COHORT proband and family data jointly.
We perform simulation studies to examine finite sample performance of the proposed methods and apply these methods to estimate the CDF of HD age-at-onset from the COHORT proband and family combined data. Our results show a slightly lower estimated cumulative risk of HD with the combined data compared to using proband data alone.
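The EM treatment of missing carrier status can be illustrated with a deliberately simplified sketch: carrier status is a latent two-component mixture label, and EM alternates between posterior carrier probabilities (E-step) and weighted maximum-likelihood updates (M-step). Everything below (Gaussian onset distributions, the 50/50 mixing rate, all parameter values) is invented for illustration and is not the COHORT model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: ages at onset from a mix of carriers (early onset) and
# non-carriers (late onset), with carrier status unobserved.
n = 2000
carrier = rng.random(n) < 0.5            # latent truth, never seen by EM
onset = np.where(carrier,
                 rng.normal(45, 8, n),   # carriers: earlier onset
                 rng.normal(75, 8, n))   # non-carriers: later onset

def em_mixture(x, iters=200):
    """EM for a two-component normal mixture; the component label
    plays the role of the missing carrier status."""
    pi = 0.5
    mu = np.array([np.min(x), np.max(x)], dtype=float)
    sd = np.array([np.std(x), np.std(x)])
    for _ in range(iters):
        # E-step: posterior probability of being a carrier (component 0)
        d0 = pi * np.exp(-0.5 * ((x - mu[0]) / sd[0]) ** 2) / sd[0]
        d1 = (1 - pi) * np.exp(-0.5 * ((x - mu[1]) / sd[1]) ** 2) / sd[1]
        w = d0 / (d0 + d1)
        # M-step: weighted maximum-likelihood parameter updates
        pi = w.mean()
        mu = np.array([np.average(x, weights=w),
                       np.average(x, weights=1 - w)])
        sd = np.array([np.sqrt(np.average((x - mu[0]) ** 2, weights=w)),
                       np.sqrt(np.average((x - mu[1]) ** 2, weights=1 - w))])
    return pi, mu, sd

pi, mu, sd = em_mixture(onset)  # recovers the mixing rate and both means
```

The same alternation carries over to the dissertation's setting, where the E-step would use kinship-based carrier probabilities and the M-step would update the age-at-onset CDF.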
We then extend the approach to predict the cumulative risk of disease accommodating predictors with time-varying effects and outcomes subject to censoring. We model the time-specific effect through a nonparametric varying-coefficient function and handle censoring through self-consistency equations that redistribute the probability mass of censored outcomes to the right. The computational procedure is extremely convenient and can be implemented by standard software. We prove large sample properties of the proposed estimator and evaluate its finite sample performance through simulation studies. We apply the method to estimate the cumulative risk of developing HD from the mutation carriers in COHORT data and illustrate an inverse relationship between the cumulative risk of HD and the length of CAG repeats at the IT15 gene.
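The redistribute-to-the-right handling of censoring can be sketched directly: each censored observation passes its probability mass equally to all observations strictly to its right, and the resulting mass function reproduces the Kaplan-Meier estimate. The toy data below are hypothetical.

```python
import numpy as np

def redistribute_to_the_right(times, events):
    """Self-consistency / redistribute-to-the-right: each censored
    observation (event == 0) passes its probability mass equally to
    all observations to its right. Returns sorted times, indicators,
    and the per-observation probability mass."""
    order = np.argsort(times, kind="stable")
    t = np.asarray(times, dtype=float)[order]
    d = np.asarray(events)[order]
    n = len(t)
    mass = np.full(n, 1.0 / n)
    for i in range(n):
        if d[i] == 0 and i < n - 1:          # censored: push mass right
            mass[i + 1:] += mass[i] / (n - 1 - i)
            mass[i] = 0.0
    return t, d, mass

# Hypothetical follow-up times; 0 marks a censored observation.
times = [2, 3, 5, 6, 8, 9]
events = [1, 0, 1, 1, 0, 1]
t, d, m = redistribute_to_the_right(times, events)
# The cumulative mass at each event time equals the Kaplan-Meier CDF.
```

Summing the mass over event times gives exactly the Kaplan-Meier cumulative risk, which is why the self-consistency equations are computationally convenient.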
In the second part of the dissertation, we develop methods to accurately predict whether pre-symptomatic individuals are at risk of a disease based on their various marker profiles, which offers an opportunity for early intervention well before definitive clinical diagnosis. For many diseases, existing clinical literature may suggest the risk of disease varies with some markers of biological and etiological importance, for example age. To identify effective prediction rules using nonparametric decision functions, standard statistical learning approaches treat markers with clear biological importance (e.g., age) and other markers without prior knowledge on disease etiology interchangeably as input variables. Therefore, these approaches may be inadequate in singling out and preserving the effects from the biologically important variables, especially in the presence of potential noise markers. Using age as an example of a salient marker to receive special care in the analysis, we propose a local smoothing large margin classifier implemented with support vector machine to construct effective age-dependent classification rules. The method adaptively adjusts age effect and separately tunes age and other markers to achieve optimal performance. We derive the asymptotic risk bound of the local smoothing support vector machine, and perform extensive simulation studies to compare with standard approaches. We apply the proposed method to two studies of premanifest HD subjects and controls to construct age-sensitive predictive scores for the risk of HD and risk of receiving HD diagnosis during the study period.
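A minimal stand-in for the age-sensitive classification idea (not the dissertation's support vector machine implementation): weight observations by a Gaussian kernel in age around a target age and fit a locally weighted linear rule, so the implied decision threshold for the other marker varies with age. The data-generating process below is invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: one marker x whose true decision threshold drifts with age.
n = 4000
age = rng.uniform(20, 60, n)
x = rng.uniform(0, 2, n)
y = (x > age / 50).astype(float)          # true threshold: age / 50

def local_rule(a0, bandwidth=5.0):
    """Locally age-weighted linear fit: observations near the target
    age a0 receive higher kernel weight, yielding an age-dependent
    classification threshold for the marker x."""
    w = np.exp(-0.5 * ((age - a0) / bandwidth) ** 2)
    X = np.column_stack([x, np.ones(n)])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    slope, intercept = beta
    return (0.5 - intercept) / slope      # implied decision threshold

t25 = local_rule(25.0)   # near the true local threshold 25/50 = 0.5
t50 = local_rule(50.0)   # near the true local threshold 50/50 = 1.0
```

A pooled classifier that ignores age would fit a single threshold for all subjects; the local weighting is what lets the rule adapt, which is the point of singling age out from the other markers.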
In the third part of the dissertation, we develop a novel statistical learning method for longitudinal data. Predicting disease risk and progression is one of the main goals in many clinical studies. Cohort studies on the natural history and etiology of chronic diseases span years and data are collected at multiple visits. Although kernel-based statistical learning methods are proven to be powerful for a wide range of disease prediction problems, these methods are only well studied for independent data but not for longitudinal data. It is thus important to develop time-sensitive prediction rules that make use of the longitudinal nature of the data. We develop a statistical learning method for longitudinal data by introducing subject-specific long-term and short-term latent effects through designed kernels to account for within-subject correlation of longitudinal measurements. Since the presence of multiple sources of data is increasingly common, we embed our method in a multiple kernel learning framework and propose a regularized multiple kernel statistical learning with random effects to construct effective nonparametric prediction rules. Our method allows easy integration of various heterogeneous data sources and takes advantage of correlation among longitudinal measures to increase prediction power. We use different kernels for each data source, taking advantage of the distinctive features of each data modality, and then optimally combine data across modalities. We apply the developed methods to two large epidemiological studies, one on Huntington's disease and the other on Alzheimer's disease (Alzheimer's Disease Neuroimaging Initiative, ADNI), where we explore a unique opportunity to combine imaging and genetic data to predict the conversion from mild cognitive impairment to dementia, and show a substantial gain in performance while accounting for the longitudinal feature of the data.
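The multiple-kernel idea of combining heterogeneous sources can be sketched as follows, with kernel ridge regression standing in for the dissertation's regularized learner and a grid search over the combination weight standing in for the full MKL optimization. The two "modalities" and all parameters are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hypothetical data sources (e.g. imaging and genetic features)
# that each carry part of the signal.
n = 200
imaging = rng.normal(size=(n, 5))
genetic = rng.normal(size=(n, 8))
y = imaging[:, 0] + 0.5 * genetic[:, 0] + 0.1 * rng.normal(size=n)

def rbf(A, gamma):
    """Gaussian (RBF) Gram matrix for one data source."""
    sq = ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# One kernel per modality, combined with a convex weight eta.
K1, K2 = rbf(imaging, 0.1), rbf(genetic, 0.1)

def krr_predict(eta, lam=0.1, ntr=150):
    """Kernel ridge regression on the combined kernel; trains on the
    first ntr subjects and predicts the held-out remainder."""
    K = eta * K1 + (1 - eta) * K2
    alpha = np.linalg.solve(K[:ntr, :ntr] + lam * np.eye(ntr), y[:ntr])
    return K[ntr:, :ntr] @ alpha

# Crude stand-in for learning the kernel weights: pick eta on a grid
# by held-out error instead of solving the joint MKL problem.
grid = np.linspace(0, 1, 11)
test_mse = [np.mean((krr_predict(e) - y[150:]) ** 2) for e in grid]
best_eta = grid[int(np.argmin(test_mse))]
```

Because both modalities contribute to the outcome, an intermediate combination weight typically beats relying on either kernel alone, which is the intuition behind learning the weights jointly with the predictor.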
Support Vector Hazards Machine: A Counting Process Framework for Learning Risk Scores for Censored Outcomes
Learning risk scores to predict dichotomous or continuous outcomes using machine learning approaches has been studied extensively. However, how to learn risk scores for time-to-event outcomes subject to right censoring has received little attention until recently. Existing approaches rely on inverse probability weighting or rank-based regression, which may be inefficient. In this paper, we develop a new support vector hazards machine (SVHM) approach to predict censored outcomes. Our method is based on predicting the counting process associated with the time-to-event outcomes among subjects at risk via a series of support vector machines. Introducing counting processes to represent time-to-event data leads to a connection between support vector machines in supervised learning and hazards regression in standard survival analysis. To account for different at-risk populations at observed event times, a time-varying offset is used in estimating risk scores. The resulting optimization is a convex quadratic programming problem that can easily incorporate non-linearity using the kernel trick. We demonstrate an interesting link from the profiled empirical risk function of SVHM to the Cox partial likelihood. We then formally show that SVHM is optimal in discriminating the covariate-specific hazard function from the population-average hazard function, and establish the consistency and learning rate of the predicted risk using the estimated risk scores. Simulation studies show improved prediction accuracy of the event times using SVHM compared to existing machine learning methods and conventional approaches. Finally, we analyze two real-world biomedical studies where we use clinical markers and neuroimaging biomarkers to predict age-at-onset of a disease, and demonstrate the superiority of SVHM in distinguishing high-risk versus low-risk subjects.
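The counting-process construction behind SVHM can be made concrete: at each distinct observed event time, every subject still at risk contributes one classification sample, labeled +1 if its event occurs at that time and -1 otherwise, with the event time retained as the index for the time-varying offset. The sketch below shows only this data expansion on hypothetical data, not the SVM fitting itself.

```python
import numpy as np

# Toy right-censored data: follow-up time and event indicator (1 = event).
time = np.array([2.0, 3.0, 3.0, 5.0, 6.0])
event = np.array([1, 0, 1, 1, 0])
X = np.arange(5.0).reshape(-1, 1)     # one covariate per subject

def counting_process_samples(time, event, X):
    """Expand survival data into risk-set classification samples:
    at each distinct event time t, every subject still at risk
    contributes one sample, labeled +1 if its event occurs at t
    and -1 otherwise."""
    rows, labels, offsets = [], [], []
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        rows.append(X[at_risk])
        labels.append(np.where((time == t) & (event == 1), 1, -1)[at_risk])
        offsets.append(np.full(int(at_risk.sum()), t))  # offset index
    return np.vstack(rows), np.concatenate(labels), np.concatenate(offsets)

Xs, ys, ts = counting_process_samples(time, event, X)
# 5 subjects at risk at t=2, 4 at t=3, 2 at t=5: 11 samples, 3 positives.
```

Each event time thus defines one binary classification problem over its risk set, which is what allows a series of support vector machines to stand in for hazards regression.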
Determinants of the Profitability of Chinese Commercial Banks
With the further opening of Chinese capital markets to foreign banks, Chinese commercial banks face intense international competition in the domestic market. Profitability is the most important factor for a commercial bank's survival and development, so the determinants of Chinese commercial banks' profitability are of great significance. After reviewing the theoretical literature on commercial bank profitability, this paper first introduces the historical background and structure of the Chinese banking sector, and then systematically analyzes the factors influencing profitability from two perspectives: banks' own characteristics and external macro-environmental factors. These factors are examined through normative analysis and hypothesis development, providing the theoretical basis for selecting specific variables for the empirical tests. We then build an evaluation index system for commercial bank profitability: profitability (ROAE), asset quality (LLPNIR), capital adequacy (ETA), liquidity (LADSTF, NLTA), cost efficiency (CIR), bank size (TA), credit risk (ILGL), and macroeconomic conditions (GDP growth, inflation). On this basis, using panel data on 203 major commercial banks from 2013 to 2018, we apply the two-step Generalized Method of Moments (GMM) estimator to measure the decisive factors affecting banks' profitability.
Keywords: Chinese commercial banks, profitability determinants, two-step GMM
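The two-step GMM estimator can be illustrated on a deliberately simple cross-sectional instrumental-variables model rather than the paper's dynamic bank panel: step 1 uses the weighting matrix (Z'Z/n)^{-1} (equivalent to 2SLS), and step 2 reweights by the inverse of the estimated moment covariance. All data below are simulated.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated data with one endogenous regressor and two instruments.
n, beta_true = 5000, 0.7
Z = rng.normal(size=(n, 2))                      # instruments
v = rng.normal(size=n)
u = 0.8 * v + rng.normal(size=n)                 # error correlated with v
x = Z @ np.array([1.0, 0.5]) + v                 # endogenous regressor
y = beta_true * x + u
X = x.reshape(-1, 1)

def two_step_gmm(y, X, Z):
    """Two-step GMM for the moment condition E[z (y - x b)] = 0.
    Step 1: weighting matrix (Z'Z/n)^{-1} (this is 2SLS).
    Step 2: reweight by the inverse estimated moment covariance."""
    A = X.T @ Z
    W1 = np.linalg.inv(Z.T @ Z / len(y))
    b1 = np.linalg.solve(A @ W1 @ A.T, A @ W1 @ Z.T @ y)
    resid = y - X @ b1
    S = (Z * resid[:, None] ** 2).T @ Z / len(y)  # robust moment covariance
    W2 = np.linalg.inv(S)
    b2 = np.linalg.solve(A @ W2 @ A.T, A @ W2 @ Z.T @ y)
    return b2

beta_hat = two_step_gmm(y, X, Z)  # close to beta_true, unlike naive OLS
```

The second-step reweighting is what makes two-step GMM efficient under heteroskedasticity; in the paper's dynamic panel setting, lagged values serve as the instruments Z.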
DreamEdit: Subject-driven Image Editing
Subject-driven image generation aims at generating images containing customized subjects, which has recently drawn enormous attention from the research community. However, previous works cannot precisely control the background and position of the target subject. In this work, we aspire to fill the void and propose two novel subject-driven sub-tasks, i.e., Subject Replacement and Subject Addition. The new tasks are challenging in multiple aspects: replacing a subject with a customized one can change its shape, texture, and color, while adding a target subject to a designated position in a provided scene necessitates a context-aware posture. To conquer these two novel tasks, we first manually curate a new dataset, DreamEditBench, containing 22 different types of subjects and 440 source images with different difficulty levels. We plan to host DreamEditBench as a platform and hire trained evaluators for standard human evaluation. We also devise an innovative method, DreamEditor, to resolve these tasks by performing iterative generation, which enables a smooth adaptation to the customized subject. In this project, we conduct automatic and human evaluations to understand the performance of DreamEditor and baselines on DreamEditBench. For Subject Replacement, we find that existing models are sensitive to the shape and color of the original subject; the failure rate increases dramatically when the source and target subjects are highly different. For Subject Addition, we find that existing models cannot blend the customized subjects into the background smoothly, leading to noticeable artifacts in the generated image. We hope DreamEditBench can become a standard platform enabling future investigations toward building more controllable subject-driven image editing. Our project homepage is https://dreameditbenchteam.github.io/
Entry and Welfare in Search Markets
The effects of entry on consumer and total welfare are studied in a model of consumer search. Potential entrants differ in quality, with high-quality sellers being more likely to meet consumer needs. Contrary to the standard view in economics that more entry benefits consumers, we find that consumer welfare has an inverted-U relationship with entry cost, and free entry is excessive for both consumer and total welfare when entry cost is relatively low. We explain why these results may arise naturally in search markets due to the variety and quality effects of entry, and discuss their business and policy implications.