
    Mine evaluation optimisation

    The definition of a mineral resource during exploration is a fundamental part of lease evaluation, which establishes the fair market value of the entire asset being explored in the open market. Since exact prediction of grades between sampled points is not possible with conventional methods, predicted grades will nearly always differ from actual grades to some degree. These errors affect the evaluation of resources, and so impact the characterisation of risk, financial projections and decisions about whether to proceed with further phases. Knowledge of the minerals below the surface, even when based on extensive geophysical analysis and drilling, is often too fragmentary to indicate with assurance where to drill, how deep to drill and what can be expected; in effect, the exploration team knows only the density of the rock and the grade along the core. The purpose of this study is to improve resource evaluation at the exploration stage by increasing prediction accuracy and providing an alternative assessment of the spatial characteristics of gold mineralisation. There is significant industrial interest in alternatives that may speed up the drilling phase, identify anomalies and worthwhile targets, and help establish fair market value. Recent developments in nonconvex optimisation and high-dimensional statistics suggest that engineering problems such as predicting gold variability at the exploration stage can be addressed with clusterwise linear regression and penalised maximum likelihood regression techniques. This thesis attempts to model the distribution of mineralisation in the underlying geology using clusterwise linear regression and the convex Least Absolute Shrinkage and Selection Operator (LASSO). The two optimisation techniques compute predictive solutions within a domain using physical data taken directly from drillholes. These decision-support techniques strike a useful compromise between traditional and recently introduced methods in optimisation and regression analysis, developed to improve exploration targeting and to predict gold occurrences at previously unsampled locations. Doctor of Philosophy
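    The thesis code is not reproduced here, but as a rough illustration of the LASSO side of the approach, the sketch below fits a cross-validated LASSO model to synthetic drillhole features (coordinates, depth, rock density) and scores grade predictions at held-out locations. The data and feature names are invented for illustration; this is not the author's implementation.

```python
# Minimal sketch (not the author's code): LASSO regression for predicting
# gold grade at unsampled locations from hypothetical drillhole features.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical drillhole samples: easting, northing, depth, rock density.
n = 500
X = np.column_stack([
    rng.uniform(0, 1000, n),   # easting (m)
    rng.uniform(0, 1000, n),   # northing (m)
    rng.uniform(0, 300, n),    # depth (m)
    rng.normal(2.7, 0.2, n),   # rock density (g/cm^3)
])
# Synthetic gold grade (g/t) with noise, purely for illustration.
y = 0.002 * X[:, 2] + 1.5 * (X[:, 3] - 2.7) + rng.normal(0, 0.1, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# LassoCV selects the shrinkage penalty by cross-validation.
model = LassoCV(cv=5).fit(X_train, y_train)
print("selected alpha:", model.alpha_)
print("held-out R^2:", model.score(X_test, y_test))
```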

    Segmentation of mesoscale ocean surface dynamics using satellite SST and SSH observations

    Multi-satellite measurements of altimeter-derived Sea Surface Height (SSH) and Sea Surface Temperature (SST) provide a wealth of information about ocean circulation, especially mesoscale ocean dynamics, which may involve strong spatio-temporal relationships between SSH and SST fields. Within an observation-driven framework, we investigate the extent to which mesoscale ocean dynamics may be decomposed into a mixture of dynamical modes, characterized by different local regressions between SSH and SST fields. Formally, we develop a novel latent class regression model to identify dynamical modes from joint SSH and SST observation series. Applied to the highly dynamical Agulhas region, we demonstrate and discuss the geophysical relevance of the proposed mixture model to achieve a spatio-temporal segmentation of the upper ocean dynamics.
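    As a rough illustration of the latent class regression idea (a mixture of local linear regressions fitted by an EM-style loop), the sketch below alternates soft mode assignment with weighted least-squares fits on synthetic one-dimensional data. It stands in for the general technique only; the authors' model and observation data are not reproduced.

```python
# Minimal EM sketch for a mixture of K linear regressions (latent class
# regression), illustrating the idea of dynamical modes with different
# local regressions. Synthetic 1-D data, not real SSH/SST observations.
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic regimes with different slopes.
n = 400
x = rng.uniform(-1, 1, n)
z = rng.integers(0, 2, n)                       # true (hidden) mode
y = np.where(z == 0, 2.0 * x + 0.5, -1.0 * x) + rng.normal(0, 0.1, n)
X = np.column_stack([np.ones(n), x])            # design matrix with intercept

K = 2
beta = rng.normal(size=(K, 2))                  # per-mode regression coefficients
sigma2 = np.ones(K)                             # per-mode noise variances
pi = np.full(K, 1.0 / K)                        # mode proportions

for _ in range(100):
    # E-step: responsibilities of each mode for each observation.
    resid = y[:, None] - X @ beta.T
    logp = np.log(pi) - 0.5 * np.log(2 * np.pi * sigma2) - resid**2 / (2 * sigma2)
    logp -= logp.max(axis=1, keepdims=True)
    r = np.exp(logp)
    r /= r.sum(axis=1, keepdims=True)

    # M-step: weighted least squares and variance update per mode.
    for k in range(K):
        w = r[:, k]
        W = np.diag(w)
        beta[k] = np.linalg.solve(X.T @ W @ X, X.T @ (w * y))
        sigma2[k] = np.sum(w * (y - X @ beta[k])**2) / w.sum()
    pi = r.mean(axis=0)

print("estimated slopes:", beta[:, 1])
```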

    Three essays in quantitative marketing.

    by Ka-Kit Tse. Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. Includes bibliographical references. Contents:
    Chapter 1: Overall Review
    Chapter 2: Essay one - A Mathematical Programming Approach to Clusterwise Regression Model and its Extensions (abstract; introduction; a mathematical programming formulation of the clusterwise regression model: the generalized clusterwise regression model, the clusterwise regression model (Spath, 1979), a nonparametric clusterwise regression model, a mixture approach to the clusterwise regression model, an illustrative application; mathematical programming formulation of clusterwise discriminant analysis; conclusion; appendix; references; tables)
    Chapter 3: Essay two - A Mathematical Programming Approach to Clusterwise Rank Order Logit Model (abstract; introduction; clusterwise rank order logit model; numerical illustration; conclusion; references; tables)
    Chapter 4: Essay three - A Mathematical Programming Approach to Metric Unidimensional Scaling (abstract; introduction; nonlinear programming formulation; numerical examples; possible extensions; conclusion and extensions; references; tables)
    Chapter 5: Research Project in Progress (Project 1 -- An Integrated Approach to Taste Test Experiment Within the Prospect Theory Framework: experiment procedure, experimental result; Project 2 -- An Integrated Approach to Multi-Dimensional Scaling Problem: introduction, experiment procedure, questionnaire, experimental result)

    Modelling Daily Rainfall Amount in Pekanbaru City using Gamma and Some Extended Gamma Distribution

    Abstract - Modelling rainfall is very important for managing natural resources and dealing with the impacts of climate change. We modelled daily rainfall recorded in Pekanbaru City from 1999 to 2008. The main goal of this study is to find the distribution that best fits the daily rainfall amounts using the maximum likelihood approach. For this purpose, the Gamma distribution and some extended Gamma distributions are fitted and tested to determine the best model for daily rainfall in Pekanbaru City. The extended Gamma distributions are mixtures of two and three Gamma distributions, namely the Rani, Shanker and Sujatha distributions. The maximum likelihood method is used to estimate the parameters of each distribution considered in this study. Distributions are compared by graphical inspection of the probability density function (pdf) and by the numerical criteria Akaike's information criterion (AIC) and Bayesian information criterion (BIC). In most cases, graphical inspection gave the same result, while the AIC and BIC values differed; the best fit was taken to be the distribution with the lowest AIC and BIC. In general, the Gamma distribution was selected as the best model.
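    A minimal sketch of the model-selection workflow described: fit candidate distributions to (synthetic) daily rainfall by maximum likelihood and compare them with AIC and BIC. The Rani, Shanker and Sujatha distributions have no off-the-shelf SciPy implementation, so a Weibull fit stands in as the second candidate here; the rainfall values are synthetic stand-ins, not the Pekanbaru record.

```python
# Minimal sketch: fit distributions to daily rainfall amounts by maximum
# likelihood and compare candidate fits with AIC/BIC.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
rain = rng.gamma(shape=0.8, scale=12.0, size=3650)   # synthetic wet-day rainfall (mm)

def aic_bic(logl, k, n):
    """Akaike and Bayesian information criteria from a log-likelihood."""
    return 2 * k - 2 * logl, k * np.log(n) - 2 * logl

# Gamma fit (location fixed at zero, as rainfall is non-negative).
shape, loc, scale = stats.gamma.fit(rain, floc=0)
logl = np.sum(stats.gamma.logpdf(rain, shape, loc=loc, scale=scale))
aic, bic = aic_bic(logl, k=2, n=len(rain))
print(f"gamma:   shape={shape:.3f} scale={scale:.3f} AIC={aic:.1f} BIC={bic:.1f}")

# A competing fit (e.g. Weibull) for comparison; lower AIC/BIC wins.
c, loc_w, scale_w = stats.weibull_min.fit(rain, floc=0)
logl_w = np.sum(stats.weibull_min.logpdf(rain, c, loc=loc_w, scale=scale_w))
aic_w, bic_w = aic_bic(logl_w, k=2, n=len(rain))
print(f"weibull: AIC={aic_w:.1f} BIC={bic_w:.1f}")
```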

    Representing complex data using localized principal components with application to astronomical data

    Often the relation between the variables constituting a multivariate data space might be characterized by one or more of the terms "nonlinear", "branched", "disconnected", "bended", "curved", "heterogeneous" or, more generally, "complex". In these cases, simple principal component analysis (PCA) as a tool for dimension reduction can fail badly. Of the many alternative approaches proposed so far, local approximations of PCA are among the most promising. This paper gives a short review of localized versions of PCA, focusing on local principal curves and local partitioning algorithms. Furthermore, we discuss projections other than the local principal components. When performing local dimension reduction for regression or classification problems it is important to focus not only on the manifold structure of the covariates, but also on the response variable(s). Local principal components only achieve the former, whereas localized regression approaches concentrate on the latter. Local projection directions derived from the partial least squares (PLS) algorithm offer an interesting trade-off between these two objectives. We apply these methods to several real data sets. In particular, we consider simulated astrophysical data from the future Galactic survey mission Gaia. (Comment: 25 pages. In "Principal Manifolds for Data Visualization and Dimension Reduction", A. Gorban, B. Kegl, D. Wunsch, and A. Zinovyev (eds), Lecture Notes in Computational Science and Engineering, Springer, 2007, pp. 180-204, http://www.springer.com/dal/home/generic/search/results?SGWID=1-40109-22-173750210-)
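    As a rough illustration of the local-partitioning flavour of localized PCA, the sketch below splits a curved synthetic data set into cells with k-means and fits one principal direction per cell. The local principal curve algorithms reviewed in the paper are more elaborate and are not reproduced here; the data and cell count are invented.

```python
# Minimal sketch of one "local partitioning" flavour of localized PCA:
# partition the data with k-means, then fit a separate PCA in each cell.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)

# Synthetic curved ("complex") 2-D structure where a single global PCA fails.
t = rng.uniform(0, np.pi, 600)
X = np.column_stack([np.cos(t), np.sin(t)]) + rng.normal(0, 0.05, (600, 2))

# Partition into local cells, then one principal direction per cell.
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X)
for k in range(6):
    cell = X[labels == k]
    pca = PCA(n_components=1).fit(cell)
    print(f"cell {k}: local direction {pca.components_[0]}, "
          f"explained variance ratio {pca.explained_variance_ratio_[0]:.2f}")
```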

    A Business Intelligence Framework for Network-level Traffic Safety Analyses

    Currently, there are both methodological and practical barriers that together preclude a substantial use of theoretically sound approaches, such as the ones recommended by the Highway Safety Manual (HSM), for traffic safety management. Although the state-of-the-art provides theoretically sound approaches such as the Empirical Bayes method, various important capabilities are still missing. Methodological barriers include, among others, (i) lack of a theoretically sound approach for corridor-level network screening, (ii) lack of a comprehensive approach for the estimation of Safety Performance Functions (SPFs) based on a simultaneous consideration of both crash patterns and associated explanatory variables, and (iii) lack of theoretically sound methods to forecast crash patterns at the regional level. In addition, the use of existing theoretically sound approaches such as the ones recommended by the HSM is associated with important practical barriers, including 1) significant data integration requirements, 2) the need for a special schema to enable analysis with specialized software, 3) time-consuming and labour-intensive processes, 4) the need for substantial technical knowledge, 5) limited visualization capabilities, and 6) the need for coordination across various data owners. Considering the above barriers, most practitioners use theoretically unsound methodologies to perform traffic safety analyses for highway safety improvement programs. This research proposes a single comprehensive framework to address all the above barriers and enable the use of theoretically sound methodologies for network-wide traffic safety analyses. The proposed framework provides access through a single platform, Business Intelligence (BI), to theoretically sound methods and associated algorithms, data management and integration tools, and visualization capabilities. That is, the proposed BI framework provides methods and mechanisms to integrate and process data, generate advanced and theoretically sound analytics, and visualize results through intuitive and interactive web-based dashboards and maps. The proposed BI framework integrates data using an Extract-Load-Transform process and creates a traffic safety data warehouse. Algorithms are implemented to use the data warehouse for network screening analysis of roadway segments, intersections, ramps, and corridors. The methodology proposed and implemented here for corridor-level network screening represents an important expansion of the existing methods recommended by the HSM. Corridor-level network screening is important for decision makers because it enables them to rank corridors rather than individual sites, so as to provide homogeneous infrastructure and minimize changes within relatively short distances. Improvements are recommended for long sections of roadway that could include multiple sites with the potential for safety improvements. Existing corridor screening methodologies use observed crash frequency as a performance measure, which does not account for regression-to-the-mean bias. The proposed methodology uses expected crash frequency as a performance measure and searches corridors using a sliding-window mechanism that addresses crash location reporting errors by considering the same section of roadway multiple times using overlapping windows. The proposed BI framework also includes a comprehensive methodology for the estimation of SPFs that simultaneously considers local crash patterns and site characteristics.
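    A minimal sketch of the sliding-window idea for corridor screening, assuming expected (EB-adjusted) crash frequencies per segment are already available: overlapping windows are scored and corridors are ranked by their highest-scoring window. The window length and the numbers are invented; this is not the framework's implementation.

```python
# Minimal sketch of sliding-window screening along a corridor: overlapping
# windows of expected crash frequency are scored, and corridors are ranked
# by their worst (highest-scoring) window.
import numpy as np

def screen_corridor(expected_crashes, window=5, step=1):
    """Return the highest sum of expected crashes over overlapping windows."""
    scores = [expected_crashes[i:i + window].sum()
              for i in range(0, len(expected_crashes) - window + 1, step)]
    return max(scores)

# Hypothetical expected (EB-adjusted) crash frequencies per segment.
corridors = {
    "Corridor A": np.array([1.2, 0.8, 3.5, 4.1, 3.9, 0.7, 0.5, 0.6]),
    "Corridor B": np.array([1.0, 1.1, 0.9, 1.2, 1.0, 1.1, 1.0, 0.9]),
}

ranking = sorted(corridors, key=lambda c: screen_corridor(corridors[c]), reverse=True)
for name in ranking:
    print(name, screen_corridor(corridors[name]))
```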
The current state-of-the-art uses predefined crash site types to create single clusters of data and generate regression models, SPFs, for the estimation of predicted crash frequency. It is highly unlikely for all crash sites within a single predefined cluster/type to have similar crash patterns and associated explanatory characteristics. That is, there could be sites within a cluster/type with different crash patterns and explanatory characteristics. Hence, assigning a single predefined SPF to all sites within a type is not necessarily the best approach to minimize the estimation error. To address this issue, a mathematical program was formulated to simultaneously determine cluster memberships for crash sites and the corresponding SPFs. Cluster memberships are determined using both crash patterns and associated explanatory variables. A solution algorithm coupling simulated annealing and maximum log-likelihood estimation was implemented and tested. Results indicated that multiple SPFs for a crash and/or facility type can maximize the probability of observing the available data, increasing accuracy and reliability. The SPFs estimated with the proposed approach were implemented within the BI framework for network screening. The results illustrate that the improvement in crash prediction provided by these SPFs translates into superior rankings for sites and corridors with the potential for safety improvements. A performance-based safety program requires the forecasting, at the regional level, of safety performance measures and the establishment of targets to reduce fatalities and serious injuries. This is in contrast to the analysis required for traffic safety management, where forecasts are required at the site or corridor level. For regional-level forecasting, theoretically unsound methods such as extrapolation or simple moving-average models have historically been used. To address this issue, this study proposed deterministic and stochastic time series models to forecast performance measures for performance-based safety programs. Results indicated that a stochastic time series model, a seasonal autoregressive integrated moving average (SARIMA) model, provides the required statistically sound forecasts. In summary, the fundamental contributions of this research include: (i) a theoretically sound methodology for corridor-level network screening, (ii) a comprehensive methodology for the estimation of local SPFs considering simultaneously crash patterns and associated explanatory variables, and (iii) a theoretically sound methodology to forecast performance measures and set realistic targets for performance-based safety programs. In addition, this study implemented and tested the above contributions along with existing algorithms for traffic safety network screening within a single BI platform. The result is a single web-based BI framework that enables integration and management of source data, generation of theoretically sound analyses, and visualization through intuitive dashboards, drilldown menus, and interactive maps.
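A rough sketch of the coupling between simulated annealing and maximum likelihood estimation for clusterwise SPFs, on synthetic data with invented explanatory variables (log AADT and segment length): cluster labels are perturbed one site at a time, per-cluster Poisson regressions are refitted, and moves are accepted by a Metropolis rule. This illustrates the idea only, not the thesis algorithm as implemented.

```python
# Rough sketch (not the thesis implementation): clusterwise SPF estimation
# coupling simulated annealing over site-to-cluster labels with per-cluster
# Poisson regressions. Features and the annealing schedule are invented.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(4)

# Synthetic sites drawn from two latent groups that follow different SPFs.
n, K = 300, 2
X = np.column_stack([rng.normal(9, 1, n),        # log AADT
                     rng.uniform(0.1, 2.0, n)])  # segment length (miles)
group = rng.integers(0, K, n)
mu_true = np.exp(np.where(group == 0, -6 + 0.7 * X[:, 0], -4 + 0.4 * X[:, 0])
                 + 0.5 * X[:, 1])
y = rng.poisson(mu_true)

def loglik(labels):
    """Total Poisson log-likelihood (up to a constant) of per-cluster SPFs."""
    total = 0.0
    for k in range(K):
        idx = labels == k
        if idx.sum() < 10:                        # avoid degenerate clusters
            return -np.inf
        mu = PoissonRegressor(max_iter=300).fit(X[idx], y[idx]).predict(X[idx])
        total += np.sum(y[idx] * np.log(mu) - mu)
    return total

labels = rng.integers(0, K, n)                    # random initial memberships
current = loglik(labels)
temperature = 1.0
for _ in range(1000):
    proposal = labels.copy()
    proposal[rng.integers(n)] = rng.integers(K)   # move one site to a cluster
    cand = loglik(proposal)
    if cand > current or rng.random() < np.exp((cand - current) / temperature):
        labels, current = proposal, cand
    temperature *= 0.995                          # cooling schedule

print("final log-likelihood (up to a constant):", current)
```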