    Accelerated Asymptotics for Diffusion Model Estimation

    We propose a semiparametric estimation procedure for scalar homogeneous stochastic differential equations. We specify a parametric class for the underlying diffusion process and identify the parameters of interest by minimizing criteria given by the integrated squared difference between kernel estimates of the drift and diffusion functions and their parametric counterparts. The nonparametric estimates are simplified versions of those in Bandi and Phillips (1998). A complete asymptotic theory for the semiparametric estimates is developed. The limit theory relies on infill and long-span asymptotics, and the asymptotic distributions are shown to depend on the chronological local time of the underlying diffusion process. The estimation method and asymptotic results apply to both stationary and nonstationary processes. As is standard with semiparametric approaches in other contexts, faster convergence rates are attained than is possible in the fully functional case. From a purely technical point of view, this work merges two strands of the recent econometrics literature, namely the estimation of nonlinear models of integrated time series [Park and Phillips (1999, 2000)] and the functional identification of diffusions under minimal assumptions on the dynamics of the underlying process [Florens-Zmirou (1993), Jacod (1997), Bandi and Phillips (1998) and Bandi (1999)]. In effect, the 'minimum distance' type of estimation presented in this paper can be interpreted as extremum estimation for potentially nonstationary and nonlinear continuous-time models.
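    In schematic form, the minimum-distance criterion for the drift can be written as follows (a sketch with an unspecified weight function w; the paper's exact weighting and integration range may differ):

        \hat{\theta} = \arg\min_{\theta \in \Theta} \int \left( \hat{\mu}(x) - \mu(x;\theta) \right)^{2} w(x) \, dx

    where \hat{\mu} is the kernel estimate of the drift and \mu(\cdot;\theta) its parametric counterpart; an analogous criterion applies to the diffusion function.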

    How Active is Active Learning: Value Function Method Versus an Approximation Method

    In a previous paper, Amman et al. (Macroecon Dyn, 2018) compare the two dominant approaches for solving models with optimal experimentation (also called active learning), i.e. the value function method and the approximation method. By using the same model and dataset as in Beck and Wieland (J Econ Dyn Control 26:1359–1377, 2002), they find that the approximation method produces solutions close to those generated by the value function approach, and identify some elements of the model specification which affect the difference between the two solutions. They conclude that the differences are small when the effects of learning are limited. However, the dataset used in the experiment describes a situation where the controller is dealing with a nonstationary process and there is no penalty on the control. The goal of this paper is to see if their conclusions hold in the more commonly studied case of a controller facing a stationary process and a positive penalty on the control.
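    As a stylized sketch of the class of problems being compared (not the authors' code; the model form follows the Beck-Wieland setup described above, and all numerical values are illustrative assumptions), consider a controller choosing u_t in y_t = alpha + beta*u_t + eps_t, where beta is unknown and learned by Bayesian updating, under the one-period loss y_t^2 + lambda*u_t^2 with a positive penalty lambda on the control:

        import numpy as np

        rng = np.random.default_rng(0)
        alpha, beta_true, sigma = 1.0, -0.5, 0.2   # illustrative parameter values
        lam = 0.1                                  # positive penalty on the control
        b, P = -0.3, 1.0                           # prior mean and variance for beta

        loss = 0.0
        for t in range(50):
            # certainty-equivalent (passive-learning) rule minimizing (alpha + b*u)**2 + lam*u**2
            u = -alpha * b / (b**2 + lam)
            y = alpha + beta_true * u + sigma * rng.standard_normal()
            # Kalman/Bayes update of the belief about beta (stationary beta, so no drift term)
            K = P * u / (u**2 * P + sigma**2)
            b += K * (y - (alpha + b * u))
            P -= K * u * P
            loss += y**2 + lam * u**2
        print(f"posterior mean of beta: {b:.3f}, cumulative loss: {loss:.1f}")

    Roughly speaking, a value function (actively experimenting) controller would perturb u away from this rule whenever the expected informational gain from reducing P outweighs the immediate loss, while the approximation method adds a correction for that gain to the certainty-equivalent rule.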

    Estimation of aggregated modal split models

    In spite of the fact that disaggregate modelling has undergone considerable development in the last twenty years, many studies are still based on aggregate modelling. In France, for example, aggregate models are still in much more common use than disaggregate models, even for modal split. The estimation of aggregate models therefore remains an important issue.
    In France, for most studies it is possible to use behavioural data from household surveys, which are conducted every ten years in most French conurbations. These household surveys provide data on the socioeconomic characteristics both of individuals and of the households to which they belong, as well as data on modal choice for all the trips made the day before the survey. The sampling rate is generally 1% of the population, which gives about 50,000 trips for a conurbation of 1 million inhabitants. However, matrices that contain several hundred rows and columns are frequently used. We therefore have to construct several modal matrices that contain more than 10,000 cells (in the case of a small matrix with only 100 rows) from fewer than 50,000 trips (to take the above example). Obviously, the matrices will contain a large number of empty cells and the precision of almost all the cells will be very low. It is consequently not possible to estimate the model at this level of zoning.
    The solution which is generally chosen is to aggregate zones. This must comply with two contradictory objectives:
    - the number of zones must be as small as possible in order to increase the number of surveyed trips that can be used during estimation, and hence the accuracy of the O-D matrices for trips conducted on each mode;
    - the zones must be as small as possible in order to produce accurate data for the explanatory variables, such as the generalized cost for each of the transport modes considered. When the size of the zones increases, it is more difficult to evaluate the access and egress times for public transport, and there are several alternative routes with different travel times between each origin zone and each destination zone. More uncertainty is therefore associated with the generalized cost that represents the quality of service available between the two zones. The generally adopted solution is to produce a weighted average of all the generalized costs computed from the most disaggregated matrix. However, there is no guarantee that this weighted mean will be accurate for the origin-destination pair in question.
    When the best compromise has been made, some of the matrix cells are generally empty or suffer from an insufficient level of precision. To deal with this problem we generally keep only the cells for which the data is sufficiently precise, by selecting those cells in which the number of surveyed trips exceeds a certain threshold. However, this process involves rejecting part of the data, which then cannot be used for estimation purposes. When a fairly large number of zones is used, the origin-destination pairs which are selected for the estimation of the model mainly involve trips that are performed in the centre of the conurbation, or radial trips between the centre and the suburbs. These origin-destination pairs are also those for which public transport's share is generally the highest. The result is to reduce the variance of the data and therefore the quality of the estimation.
    To cope with this problem we propose a different aggregation process which makes it possible to retain all the trips and use a more disaggregate zoning system.
    The principle of the method is very simple. We apply the method to the model most commonly used for modal split, namely the logit model. When there are only two modes of transport, the share of each mode is obtained directly from the difference in utility between the two modes through the logit function. We can therefore aggregate the origin-destination pairs for which the difference between the utilities of the two modes is very small, in order to obtain enough surveyed trips to ensure sufficient data accuracy. This process is justified by the fact that the data used to calculate the utility of each mode is generally as accurate, or even more accurate, at a more disaggregate level of zoning. The difficulty with this method is that the utility function coefficients have to be estimated at the same time as the logit model, so an iterative process is necessary. The steps of the method are summarised below:
    - selection of initial values for the utility function coefficients of the two transport modes in order to initialize the iteration process; these values can, for example, be obtained from a previous study or from a calibration performed according to the classical method described in Section 1.2;
    - the utility of each mode is computed on the basis of the above coefficients, followed by the difference in utility for each O-D pair, in the smallest-scale zoning system for which explanatory variables with an adequate level of accuracy are available (therefore with very limited zonal aggregation, or even none at all);
    - the O-D pairs are ranked in order of increasing utility difference;
    - the O-D pairs are then aggregated on the basis of closeness of utility difference. The method involves taking the O-D pair with the smallest utility difference and combining it with the next O-D pair (in order of increasing utility difference). This process continues until the number of surveyed trips in the grouping exceeds a threshold value chosen on the basis of the level of accuracy required for trip flow estimation. When this threshold is reached, construction of the second grouping commences, and so on until each O-D pair has been assigned to a group;
    - for each new class of O-D pairs, the values of the explanatory variables which make up the utility functions are computed as the weighted average of the values for each O-D pair in the class;
    - a new estimation of the utility function coefficients is performed.
    This process is repeated until the values of the utility function coefficients converge. We have tested this method for the Lyon conurbation with data from the most recent household travel survey, conducted in 1995/96. We have carried out a variety of tests in order to identify the best application of the method and to test the stability of the results. This method appears always to produce better results than the more traditional method based on zoning aggregation. The paper presents both the methodology and the results obtained from the different aggregation methods. In particular, we analyse how the choice of zoning system affects the results of the estimation.
    Keywords: aggregate modelling; modal choice; zoning system; urban mobility; conurbation (Lyon, France); estimation method
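    A minimal sketch of this iterative loop for the binary logit case is given below (variable names, the trip threshold, and the use of scipy's optimizer are illustrative assumptions, not the paper's implementation):

        import numpy as np
        from scipy.optimize import minimize

        def fit_logit(X, shares, weights):
            # estimate beta in P(mode 1) = 1 / (1 + exp(-(X @ beta))) from aggregated
            # mode shares by maximizing the trip-weighted binomial log-likelihood
            def nll(beta):
                p = np.clip(1.0 / (1.0 + np.exp(-(X @ beta))), 1e-9, 1 - 1e-9)
                return -np.sum(weights * (shares * np.log(p) + (1 - shares) * np.log(1 - p)))
            return minimize(nll, np.zeros(X.shape[1]), method="BFGS").x

        def estimate_by_utility_aggregation(X_od, trips_m1, trips_tot, beta0,
                                            threshold=50, max_iter=20, tol=1e-6):
            # X_od: explanatory variables entering the utility difference, one row per O-D pair
            # trips_m1 / trips_tot: surveyed trips on mode 1 and in total, per O-D pair
            beta = np.asarray(beta0, dtype=float)
            for _ in range(max_iter):
                order = np.argsort(X_od @ beta)        # rank pairs by utility difference
                groups, current, count = [], [], 0
                for i in order:                        # group pairs until the trip threshold is met
                    current.append(i)
                    count += trips_tot[i]
                    if count >= threshold:
                        groups.append(current)
                        current, count = [], 0
                if current:                            # fold any remainder into the last group
                    if groups:
                        groups[-1].extend(current)
                    else:
                        groups.append(current)
                # trip-weighted averages of the explanatory variables, observed shares,
                # and trip counts for each class of O-D pairs
                Xg = np.array([np.average(X_od[g], axis=0, weights=trips_tot[g]) for g in groups])
                sg = np.array([trips_m1[g].sum() / trips_tot[g].sum() for g in groups])
                wg = np.array([trips_tot[g].sum() for g in groups])
                beta_new = fit_logit(Xg, sg, wg)       # re-estimate the coefficients
                if np.max(np.abs(beta_new - beta)) < tol:
                    return beta_new                    # coefficients have converged
                beta = beta_new
            return beta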

    How active is active learning: value function method vs an approximation method

    In a previous paper, Amman and Tucci (2018) compare the two dominant approaches for solving models with optimal experimentation (also called active learning), i.e. the value function method and the approximation method. By using the same model and dataset as in Beck and Wieland (2002), they find that the approximation method produces solutions close to those generated by the value function approach, and identify some elements of the model specification which affect the difference between the two solutions. They conclude that the differences are small when the effects of learning are limited. However, the dataset used in the experiment describes a situation where the controller is dealing with a nonstationary process and there is no penalty on the control. The goal of this paper is to see if their conclusions hold in the more commonly studied case of a controller facing a stationary process and a positive penalty on the control.

    Modeling grassland productivity through remote sensing products

    Mixed grasslands in southern Canada serve a variety of economic, environmental and ecological purposes. Numerical modeling has become a major method used to identify potential grassland ecosystem responses to environmental changes and human activities. In recent years, the focus has been on process models because of their high accuracy and their ability to describe the interactions among different environmental components and ecological processes. At present, two commonly used process models (CENTURY and BIOME-BGC) have significantly improved our understanding of the possible consequences and responses of terrestrial ecosystems under different environmental conditions. However, these models are limited by their reliance on site-based parameters and by the differing assumptions they adopt about the interactions between plants, environmental conditions and human activities when simulating such complex phenomena. In light of this shortfall, the overall objective of this research is to integrate remote sensing products into an ecosystem process model in order to simulate productivity for the mixed grassland ecosystem at the landscape level. The data used include four years of field measurements and diverse satellite data (Système Pour l'Observation de la Terre (SPOT) 4 and 5, Landsat TM and ETM, and Advanced Very High Resolution Radiometer (AVHRR) imagery). Using wavelet analyses, the study first detects that the dominant spatial scale is controlled by topography, and thus determines that 20-30 m is the optimum resolution to capture the vegetation spatial variation for the study area. Second, the performances of the RDVI (Renormalized Difference Vegetation Index), ATSAVI (Adjusted Transformed Soil-Adjusted Vegetation Index), and MCARI2 (Modified Chlorophyll Absorption Ratio Index 2) are slightly better than those of the other VIs in the ratio-based, soil-line-related, and chlorophyll-corrected groups, respectively. By incorporating the CAI (Cellulose Absorption Index) as a litter factor in ATSAVI, a new VI is developed (L-ATSAVI), which improves LAI estimation capability by about 10%. Third, vegetation maps are derived from a SPOT 4 image based on the significant relationship between LAI and ATSAVI, to aid spatial modeling. Fourth, an object-oriented classifier is determined to be the best approach, providing the ecosystem models with an accurate land cover map. Fifth, phenology parameters are identified for the study area using 22 years of AVHRR data, providing input variables for spatial modeling. Finally, the performance of popular ecosystem models in simulating grassland vegetation productivity is evaluated using site-based field data, AVHRR NDVI data, and climate data. A new model framework, which integrates remote sensing data with the site-based BIOME-BGC model, is developed for the mixed grassland prairie. The developed remote sensing-based process model is able to simulate ecosystem processes at the landscape level and can simulate the productivity distribution with 71% accuracy for 2005.
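    For reference, a short sketch of two of the vegetation indices named above, computed from red and near-infrared reflectances (the ATSAVI soil-line slope a, intercept b, and adjustment X = 0.08 follow the commonly quoted formulation; the parameter values used in the thesis, and the formula of the new L-ATSAVI, are not reproduced here):

        import numpy as np

        def rdvi(nir, red):
            # Renormalized Difference Vegetation Index
            return (nir - red) / np.sqrt(nir + red)

        def atsavi(nir, red, a=1.2, b=0.04, X=0.08):
            # Adjusted Transformed Soil-Adjusted Vegetation Index, with soil-line
            # slope a and intercept b (default values are illustrative assumptions)
            return a * (nir - a * red - b) / (a * nir + red - a * b + X * (1 + a**2))

        # example: reflectances for a moderately vegetated pixel
        print(rdvi(0.45, 0.08), atsavi(0.45, 0.08))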

    Knowledge transfer in website design: exploring the processes and benefits of design collaboration for non-creative Micros

    This thesis explores the interaction between Micros (<10 employees) from non-creative sectors and website designers ("Creatives") that occurred when creating a website of a higher order than a basic template site. The research used the Straussian Grounded Theory Method with a longitudinal design, in order to identify what knowledge transferred to the Micros during the collaboration, how it transferred, what factors affected the transfer, and the outcomes of the transfer, including behavioural additionality. To identify whether the research could be extended beyond this, five other design areas were also examined, as well as five Small to Medium Enterprises (SMEs) engaged in website and branding projects. The findings were that, at the start of the design process, many Micros could not articulate their customer knowledge and had poor marketing and visual language skills; this knowledge is core to web design, as it enables targeted communication to customers through images. Despite these gaps, most Micros still tried to lead the process. To overcome this disconnect, the majority of the designers used a knowledge transfer strategy termed in this thesis ‘Bi-Modal Knowledge Transfer’, in which the Creative was aware of the transfer but the Micro was unaware; it was used both for drawing out customer knowledge from the Micro and for transferring visual language skills to the Micro. Two models were developed to represent this process. Two further models were created to map changes in the knowledge landscapes of customer knowledge and visual language: the Knowledge Placement Model and the Visual Language Scale. The Knowledge Placement Model was used to map the placement of customer knowledge within the consciousness, extending the known Automatic-Unconscious-Conscious model by adding two more locations, Peripheral Consciousness and Occasional Consciousness. Peripheral Consciousness is where potential knowledge is held but not used. Occasional Consciousness is where potential knowledge is held but used only for specific tasks. The Visual Language Scale was created to measure visual language ability, ranging from visually responsive, where the participant only responds personally to visual symbols, to visually multi-lingual, where the participant can use visual symbols to communicate with multiple thought-worlds. With successful Bi-Modal Knowledge Transfer, the outcome included not only an effective website but also changes in the knowledge landscape of the Micros and ongoing behavioural changes, especially in marketing. These effects were not seen in the other design projects, and in only two of the SME projects. The key factors for this difference between SMEs and Micros appeared to be an expectation of knowledge by the Creatives and a failure by the SMEs to transfer knowledge within the company.

    The Impact of Social Media Marketing Components on the Online Consumer Buying Behavior: A Comparative Study between Greek and Finnish consumers

    Due to the massive explosion of technology and the Internet boom, every individual can connect, share information and shape relationships. What contributes to this innovative boost, and at the same time creates an effective environment in which a person can function cooperatively with other people, is social media. Through electronic word-of-mouth (e-WOM) and online advertisement, social media brings a new and powerful perspective to shaping consumers' attitudes and behaviours. This presents marketers with the opportunity to affect consumers' purchase decisions through online marketing and social media. Social media marketing offers a connection between the product or service and the consumer, while establishing an environment in which every individual can become part of an influential "social chain-interaction". In this thesis, the effect of the social media marketing mechanisms, namely e-WOM and online advertisement, on the online buying behaviour of Greek and Finnish consumers is examined. A non-probability sampling technique and the convenience sampling method were applied, together with a minimum sample size calculation conducted with the Monte Carlo simulation "n* (n-Star)" method, in order to establish an appropriate sampling strategy and sample size. Two separate online questionnaires with the same questions were distributed across Greek and Finnish participants with one distinct shared characteristic: they all had an "active social media life". To identify the impact of the social media marketing components (e-WOM and online advertisement) on the Greek and Finnish consumers' online buying behaviour, I first go through a detailed data analysis, transformation, and variable selection process. After that stage, two separate multiple regression models are applied to identify the differences between the Greek and Finnish consumers' online behaviour. The results suggest that, although both e-WOM and online advertisement significantly affect both Greek and Finnish online consumer behaviour, e-WOM's impact is far more significant than that of online advertising. These findings indicate that by reading online reviews and watching online advertisements on social media websites, Greek and Finnish consumers could learn the value of their purchase intentions. Thus, e-WOM communication and online advertisement can be classified as powerful tools of motivation.
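    A minimal sketch of the per-country regression step (column names and the simulated data are hypothetical; the thesis's actual variable construction and diagnostics are more involved):

        import numpy as np

        def fit_ols(X, y):
            # ordinary least squares with an intercept; returns [const, b_ewom, b_ads]
            X1 = np.column_stack([np.ones(len(X)), X])
            beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
            return beta

        # simulated stand-in for one country's survey scores (columns: e-WOM, online ads)
        rng = np.random.default_rng(1)
        X_demo = rng.normal(size=(100, 2))
        y_demo = 0.6 * X_demo[:, 0] + 0.3 * X_demo[:, 1] + rng.normal(scale=0.5, size=100)
        print(fit_ols(X_demo, y_demo))  # fit once per sample (Greek, Finnish) and compare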

    Investigation of single drop particle scavenging using an ultrasonically levitated drop

    Airborne particulates, known as aerosols, produced by both natural and anthropogenic means, have significant health and environmental impacts. Understanding the production and removal of these particles is therefore of critical importance. The main thrust of this thesis research is improving the understanding of the removal of particulates via interaction with falling liquid drops, known as wet deposition. This process occurs naturally in rain and can be imposed in industrial applications with wet scrubbers. Improved models for wet scavenging therefore have applications in both climatology and pollution control. To perform this study, the performance of existing models for wet deposition was first investigated. Models for drop scavenging of aerosols via inertial impaction proposed by Slinn and by Calvert were compared with published experimental measurements. A parametric study was performed on the residuals of the model predictions from the measurements to identify dimensionless groups, not included in these models, which might increase model performance. The study found that two dimensionless groups, the relative Stokes number, Stk_r, and the drop Reynolds number, Re, are both well correlated with the residuals of these models. Both are included in modified versions of the two models to provide better performance. That these two dimensionless groups improve model performance suggests that an inertial mechanism and an advective mechanism not accounted for in the existing models play some role in aerosol scavenging in the inertial regime. These findings were experimentally investigated to identify these mechanisms more specifically. To do this, single-drop particle scavenging was experimentally measured using an ultrasonic levitation technique. This technique enabled measurements of the scavenging efficiency, E, for individual drops, and allowed the drop axis ratio, α, drop shape oscillations, and Re to be controlled independently of drop diameter. This allowed more controlled manipulation of the drop wakes in both attached and vortex-shedding regimes. Non-evaporating drops were used, which resulted in essentially zero temperature and vapor concentration differences between the drop surface and the surrounding air, virtually eliminating the possibility of confounding phoretic effects. Plots of E versus the Stokes number, Stk, were found to depend on α. These plots became independent of α when Stk was calculated using the Sauter mean diameter (as opposed to the equivolume diameter). Furthermore, E was shown to be insensitive to both Re and drop shape oscillations, suggesting that wake effects do not have a measurable impact on E. A method was then developed to relate models of E for spherical drops (the shape assumed in existing scavenging model predictions) to E for arbitrarily deformed drops, such as those occurring in rain. Of note, these are the first measurements of droplet scavenging obtained using ultrasonic levitation. Finally, as drop scavenging is heavily dependent on particle size, a novel technique was identified and explored for improving aerosol sizing measurements. To do this, experiments were carried out in an impactor in which the distance between the impactor nozzle and the impactor plate was small, much less than the typically used separation of one nozzle diameter. The aerosol deposition patterns in this impactor were investigated for aerosols in the 3 µm to 15 µm diameter range.
Ring-shaped deposition patterns were observed in which the internal diameter and thickness of the rings were a function of the particle diameter. Specifically, the inner diameter and ring thickness were correlated with the Stokes number, Stk: the ring diameter decreased with Stk, and the ring thickness increased with Stk. At Stk ∼ 0.4 the ring closed up, leaving a mostly uniform disk deposition pattern. These ring patterns do not appear to correspond to patterns previously described in the literature, and an order-of-magnitude analysis shows that this is an inertially dominated process. Though this method was not used for particle sizing in this thesis research, it is possible that further development of this approach will result in a more advanced particle sizing tool for aerosol science research.
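    For context, the dimensionless groups discussed above can be sketched as follows: a particle Stokes number built on a drop length scale (here the Sauter mean diameter, following the finding above), together with Slinn's semi-empirical inertial-impaction efficiency as commonly quoted in the literature. The exact constants and conventions (the factor in the relaxation time, the Reynolds number definition) vary between authors and are assumptions here.

        import numpy as np

        def stokes_number(rho_p, d_p, U, mu, D):
            # particle relaxation time tau = rho_p * d_p**2 / (18 * mu), normalized by
            # the flow time scale D / U of a drop of diameter D falling at speed U
            tau = rho_p * d_p**2 / (18.0 * mu)
            return tau * U / D

        def slinn_impaction_efficiency(Stk, Re):
            # Slinn's inertial-impaction term E = ((Stk - S*) / (Stk - S* + 2/3))**1.5,
            # with critical value S* = (1.2 + ln(1 + Re)/12) / (1 + ln(1 + Re))
            S_star = (1.2 + np.log1p(Re) / 12.0) / (1.0 + np.log1p(Re))
            x = np.maximum(Stk - S_star, 0.0)  # no impaction below the critical Stokes number
            return (x / (x + 2.0 / 3.0)) ** 1.5

        # example: a 10 micron particle and a 1 mm drop falling at 4 m/s in air
        Stk = stokes_number(rho_p=1000.0, d_p=10e-6, U=4.0, mu=1.8e-5, D=1e-3)
        print(Stk, slinn_impaction_efficiency(Stk, Re=250.0))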