
    Consumer-Producer Interaction: A Strategic Analysis of the Market for Customized Products

    This paper focuses on the process by which consumers and producers interact to create better value for consumers. This happens in many situations but is arguably most prominent in mass-customization, an area that has recently gained considerable popularity among manufacturers (Business Week, March 20, 2000). In terms of communications, such interaction entails a shift from the one-way communication (usually from seller to buyer) of traditional markets to two-way communication. Specifically, potential producers need to elicit preference (and other) information from consumers. They then have to provide a product that correctly incorporates such information. This raises many strategic issues. In particular, we are interested in answering the following questions: (1) What is the 'economic value' of consumers' information? (2) Are there any strategic implications for producers if they depend on consumer input and have to pay for consumers' information? (3) In what way does pricing for customized products differ from pricing for similar standardized products? (4) Is the strategic relationship between consumers and producers different in the market for customized goods as compared to more traditional markets? The main contribution of this paper is to bring into focus the issues surrounding mass-customization via an analysis of consumer-producer interaction, which is the facilitating process. This paper is the first attempt in marketing to analytically model this emerging area and should be of interest to academics. Practitioners should be interested in the marketing and strategic perspective on mass-customization that this paper adopts. The trade press has approached mass-customization from a manufacturing/production cost angle, while its marketing implications have largely been left open (Wind and Rangaswamy, 2000). 
To answer the above questions we build a game-theoretic model that analyses the interaction between consumers and producers in an agency-theoretic framework. The main features of our model are the following. Consumers vary in their desire for customization, with some consumers having a higher need for, and willingness to pay for, customized goods. Producers vary in their ability to 'successfully customize' according to consumer specifications. Producers first solicit consumers' suggestions/preferences and attempt to screen for consumers who are willing to pay for customized products (stage 1: the 'information market'). They then try to provide a product that correctly incorporates consumers' input and set prices for such customized products (stage 2: the 'product market'). The main question for consumers at this stage is whether the producer has been able to successfully incorporate the input they gave in the first stage. We start with the monopoly case to isolate the strategic issues in consumer-producer interaction and later incorporate competition between firms. In the latter case, both the information market (where firms compete for consumers' information) and the product market (where firms compete to sell the final product) come into their own and interact in interesting ways. We find that, in equilibrium, firms will pay consumers for their information in the first stage. Intuitively, consumers provide costly input, but any promise by the firm to provide surplus through a lower product price in the second stage lacks credibility. Moreover, the producer's payment can act as a signal of high quality for the skillful customizer trying to separate from a 'ghost firm' that cannot customize well. Under monopoly, the price of customized products is the same as that of non-customized products, contrary to common wisdom as reflected in the trade press (Anderson, 1997). 
Thus, our analyses could explain why some manufacturers find that they cannot charge a premium for customized products (Wind and Rangaswamy, 2000). We find that equilibrium prices of customized products are at the high end of the price range for similar non-customized products, consistent with casual observation. Under duopoly, when firms compete for consumers' information, the prices of customized products are in fact lower than the prices of non-customized products. This counter-intuitive result occurs because firms try to avoid being held up by consumers who may withhold purchase after first getting the firm to produce a very individually tailored product that the firm might not be able to sell to other consumers. Since first-stage competition for information gives consumers a high price for their information, it increases their incentive to hold up the firm. The firm therefore has to charge a lower price to induce consumers to purchase the product. Finally, we show that, in the market for customized goods (stage 2), consumers can be better off with less competition between firms. When firms compete in the product market in the second stage, they earn lower equilibrium profits. They therefore compensate consumers less for their information in the first stage, and this may yield consumers less overall utility. This finding could be of interest to manufacturers who increasingly attempt to build deep, long-lasting ties with consumers. Often such ties are perceived as conflicting with consumers' desire to retain the flexibility to compare and opt for the offerings of different producers. Our results suggest that such a misalignment of interests need not exist, at least in the market for customized goods.
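The hold-up logic above can be illustrated with a toy calculation. This is not the paper's model: the valuation v, the information payment w, and the surplus share alpha are all hypothetical, chosen only to show why a larger stage-1 payment forces a lower stage-2 price.

```python
# Toy illustration (hypothetical numbers, not the paper's model) of the
# hold-up effect: once the consumer has been paid w for her information,
# she can credibly demand extra surplus that grows with w before agreeing
# to buy, so the firm's highest acceptable price falls as w rises.

def stage2_price(v, w, alpha=0.5):
    """Highest price the consumer still accepts, given valuation v and an
    assumed hold-up surplus demand of alpha * w."""
    return v - alpha * w

p_low_w = stage2_price(v=2.0, w=0.2)   # small information payment
p_high_w = stage2_price(v=2.0, w=1.0)  # intense stage-1 competition
print(p_low_w, p_high_w)  # 1.9 1.5: a higher w forces a lower product price
```

Under these assumptions the customized product ends up cheaper precisely when stage-1 competition for information is fiercest, mirroring the duopoly result in the abstract.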

    Optimizing a dynamic fossil fuel CO2 emission model with CTDAS (CarbonTracker Data Assimilation Shell, v1.0) for an urban area using atmospheric observations of CO2, CO, NOx, and SO2

    We present a modelling framework for fossil fuel CO2 emissions in an urban environment, which allows constraints from emission inventories to be combined with atmospheric observations of CO2 and its co-emitted species CO, NOx, and SO2. Rather than a static assignment of average emission rates to each unit area of the urban domain, the fossil fuel emissions we use are dynamic: they vary in time and space in relation to data that describe or approximate the activity within a sector, such as traffic density, power demand, 2 m temperature (as a proxy for heating demand), and sunlight and wind speed (as proxies for renewable energy supply). Through inverse modelling, we optimize the relationships between these activity data and the resulting emissions of all species within the dynamic fossil fuel emission model, based on atmospheric mole fraction observations. The advantage of this novel approach is that the optimized parameters (emission factors and emission ratios, N = 44) in this dynamic emission model (a) vary much less over space and time, (b) allow for a physical interpretation of mean and uncertainty, and (c) have better defined uncertainties and covariance structure. This makes them more suited to extrapolate, optimize, and interpret than the gridded emissions themselves. The merits of this approach are investigated using a pseudo-observation-based ensemble Kalman filter inversion set-up for the Dutch Rijnmond area at 1 km × 1 km resolution. We find that the fossil fuel emission model approximates the gridded emissions well (annual mean differences < 2 %, hourly temporal r² = 0.21–0.95), while reported errors in the underlying parameters allow a full covariance structure to be created readily. Propagating this error structure into atmospheric mole fractions shows a strong dominance of a few large sectors and a few dominant uncertainties, most notably the emission ratios of the various gases considered. 
If the prior emission ratios are either sufficiently well known or well constrained from a dense observation network, we find that including observations of co-emitted species improves our ability to estimate emissions per sector relative to using CO2 mole fractions only. Nevertheless, the total CO2 emissions can be well constrained with CO2 as the only tracer in the inversion. Because some sectors are sampled only sparsely over a day, we find that propagating solutions from day to day leads to the largest uncertainty reduction and the smallest CO2 residuals over the 14 consecutive days considered. Although we can technically estimate the temporal distribution of some emission categories, like shipping, separately from their total magnitude, the controlling parameters are difficult to distinguish. Overall, we conclude that our new system looks promising for application in verification studies, provided that reliable urban atmospheric transport fields and reasonable a priori emission ratios for CO2 and its co-emitted species can be produced.
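The core idea of the dynamic emission model, sector emissions as emission factors times activity proxies, with co-emitted species tied to CO2 through emission ratios, can be sketched as follows. The activity curves, sector names, and parameter values here are invented for illustration; the real system optimizes 44 such parameters against mole fraction observations.

```python
import numpy as np

# Sketch of the dynamic-emission-model idea: emissions vary hourly with
# activity proxies, and only the scalar parameters (emission factors and
# emission ratios) would be optimized by the inversion. All numbers are
# hypothetical.

hours = np.arange(24)

# Hypothetical normalised activity proxies for two sectors.
traffic = 0.5 + 0.5 * np.sin(np.pi * (hours - 8) / 24) ** 2        # road traffic
heating = np.clip(18.0 - (10.0 + 8.0 * np.sin(np.pi * hours / 12)), 0.0, None)

# Parameters the inversion would adjust (illustrative values only).
ef_traffic = 2.0      # CO2 emission factor for the traffic sector
ef_heating = 1.5      # CO2 emission factor for the heating sector
r_co_traffic = 0.01   # CO:CO2 emission ratio for traffic

def dynamic_emissions():
    """Hourly CO2 and CO emissions implied by the current parameters."""
    co2 = ef_traffic * traffic + ef_heating * heating
    co = r_co_traffic * ef_traffic * traffic
    return co2, co

co2, co = dynamic_emissions()
print(co2.shape, bool((co2 > 0).all()))  # 24 hourly values, all positive
```

Because the state vector is these few scalars rather than a gridded field, the posterior covariance stays small and physically interpretable, which is the advantage claimed in the abstract.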

    Evolving Synaptic Plasticity with an Evolutionary Cellular Development Model

    Since synaptic plasticity is regarded as a potential mechanism for memory formation and learning, there is growing interest in the study of its underlying mechanisms. Recently several evolutionary models of cellular development have been presented, but none have been shown to be able to evolve a range of biological synaptic plasticity regimes. In this paper we present a biologically plausible evolutionary cellular development model and test its ability to evolve different biological synaptic plasticity regimes. The core of the model is a genomic and proteomic regulation network which controls cells and their neurites in a 2D environment. The model has previously been shown to successfully evolve behaving organisms, enable gene-related phenomena, and produce biological neural mechanisms such as temporal representations. Several experiments are described in which the model evolves different synaptic plasticity regimes using a direct fitness function. Other experiments examine the ability of the model to evolve simple plasticity regimes in a task-based fitness function environment. These results suggest that such evolutionary cellular development models have the potential to be used as a research tool for investigating the evolutionary aspects of synaptic plasticity and, at the same time, can serve as the basis for novel artificial computational systems.
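In the wider evolved-plasticity literature (not necessarily inside this model's genomic/proteomic network), "plasticity regimes" are often parameterised by a generalised Hebbian rule whose coefficients the genome encodes, so that evolution can search over qualitatively different regimes. A minimal sketch of that parameterisation:

```python
# Generalised Hebbian rule often used to encode plasticity regimes:
#   dw = eta * (A*pre*post + B*pre + C*post + D)
# The genome is the coefficient tuple (eta, A, B, C, D); different
# settings give Hebbian, anti-Hebbian, or purely pre/postsynaptic rules.
# This is an illustrative parameterisation, not this paper's mechanism.

def plasticity_update(w, pre, post, genome):
    """One synaptic weight update under a genome (eta, A, B, C, D)."""
    eta, A, B, C, D = genome
    return w + eta * (A * pre * post + B * pre + C * post + D)

hebbian = (0.1, 1.0, 0.0, 0.0, 0.0)        # strengthen on co-activity
anti_hebbian = (0.1, -1.0, 0.0, 0.0, 0.0)  # weaken on co-activity

w0 = 0.5
print(plasticity_update(w0, 1.0, 1.0, hebbian))       # 0.6
print(plasticity_update(w0, 1.0, 1.0, anti_hebbian))  # 0.4
```

A direct fitness function, in this framing, would score how closely the evolved update matches a target regime, while a task-based fitness would score the behaving organism and let plasticity emerge indirectly.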

    Can we use atmospheric CO<sub>2</sub> measurements to verify emission trends reported by cities? Lessons from a 6-year atmospheric inversion over Paris

    CO2 emissions reported by city inventories usually lag real time by a year or more and are prone to large uncertainties. This study responds to the growing need for timely and precise estimation of urban CO2 emissions to support present and future mitigation measures and policies. We focus on the Paris metropolitan area, the largest urban region in the European Union and the city with the densest atmospheric CO2 observation network in Europe. We performed long-term atmospheric inversions to quantify the citywide CO2 emissions, i.e., fossil fuel as well as biogenic sources and sinks, over 6 years (2016–2021) using a Bayesian inverse modeling system. Our inversion framework benefits from a novel near-real-time hourly fossil fuel CO2 emission inventory (Origins.earth) at 1 km spatial resolution. In addition to the mid-afternoon observations, we attempt to assimilate morning CO2 concentrations based on the ability of the Weather Research and Forecasting model with Chemistry (WRF-Chem) transport model to simulate atmospheric boundary layer dynamics constrained by observed layer heights. Our results show a long-term decreasing trend of around 2 % ± 0.6 % per year in annual CO2 emissions over the Paris region. The impact of the COVID-19 pandemic led to a 13 % ± 1 % reduction in annual fossil fuel CO2 emissions in 2020 with respect to 2019. Subsequently, annual emissions increased by 5.2 % ± 14.2 %, from 32.6 ± 2.2 Mt CO2 in 2020 to 34.3 ± 2.3 Mt CO2 in 2021. Based on a combination of up-to-date inventories, high-resolution atmospheric modeling and high-precision observations, our current capacity can deliver near-real-time CO2 emission estimates at the city scale in less than a month, and the results agree within 10 % with independent estimates from multiple city-scale inventories.
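As a quick sanity check, the 2020 to 2021 relative increase quoted above follows directly from the annual totals in the abstract:

```python
# Recompute the reported 2020 -> 2021 relative change from the annual
# emission totals given in the abstract (Mt CO2).
e_2020, e_2021 = 32.6, 34.3
increase_pct = (e_2021 - e_2020) / e_2020 * 100.0
print(round(increase_pct, 1))  # 5.2, matching the reported 5.2 %
```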