Optimal design of an unsupervised adaptive classifier with unknown priors
An adaptive detection scheme for M hypotheses was analyzed. It was assumed that the probability density function under each hypothesis was known, and that the prior probabilities of the M hypotheses were unknown and sequentially estimated. Each observation vector was classified using the current estimate of the prior probabilities. Using a set of nonlinear transformations, and applying stochastic approximation theory, an optimally converging adaptive detection and estimation scheme was designed. The optimality of the scheme lies in the fact that convergence to the true prior probabilities is ensured, and that the asymptotic error variance is minimum, for the class of nonlinear transformations considered. An expression for the asymptotic error variance of the scheme was also obtained.
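The adaptive loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's scheme: the Gaussian densities, true priors, and the simple running-average (stochastic-approximation) update of the prior estimate are all assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-hypothesis setup: H0 ~ N(-1, 1), H1 ~ N(+1, 1).
# The true priors are unknown to the scheme and must be learned.
true_q = np.array([0.3, 0.7])
means = np.array([-1.0, 1.0])

q = np.array([0.5, 0.5])  # initial prior estimate
for n in range(1, 5001):
    k = rng.choice(2, p=true_q)                     # true hypothesis
    x = rng.normal(means[k], 1.0)                   # observation
    f = np.exp(-0.5 * (x - means) ** 2) / np.sqrt(2 * np.pi)  # known densities f_k(x)
    decision = np.argmax(q * f)                     # Bayes rule with current priors
    posterior = q * f / (q * f).sum()
    q = q + (posterior - q) / (n + 1)               # stochastic-approximation update

print(np.round(q, 2))  # approaches the true priors [0.3, 0.7]
```

Averaging posteriors in this way converges to the point where the expected posterior equals the prior estimate, which is the true prior; the paper's contribution is choosing the nonlinear transformation so that this convergence is optimal in asymptotic variance.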
Quantifying the health burden misclassification from the use of different PM2.5 exposure tier models: A case study of London
Exposure to PM2.5 has been associated with increased mortality in urban areas. Hence, reducing the uncertainty in human exposure assessments is essential for more accurate health burden estimates. Here we quantify the misclassification that occurs when using different exposure approaches to predict the mortality burden of a population, using London as a case study. We develop a framework for quantifying the misclassification of the total mortality burden attributable to exposure to fine particulate matter (PM2.5) in four major microenvironments (MEs) (dwellings, aboveground transportation, London Underground (LU) and outdoors) in the Greater London Area (GLA) in 2017. We demonstrate that differences exist between five exposure Tier-models of incrementally increasing complexity, moving from static to more dynamic approaches. BenMap-CE, the open-source software developed by the U.S. Environmental Protection Agency, is used to spatially distribute the ambient concentration by interpolating the monitoring data to the unmonitored areas and ultimately estimate the change in mortality at a fine resolution. Our results showed that using the outdoor concentration as a surrogate for total population exposure, while ignoring the different exposure concentrations that occur indoors and the time spent in transit, would lead to a misclassification of 1,174 predicted mortalities in the GLA. Indoor exposure to PM2.5 is the largest contributor to total population exposure, accounting for 80% of total mortality, followed by the London Underground, which contributes 15% even though Londoners spend on average only 0.4% of their time there. We generally confirmed that increasing the complexity and incorporating important microenvironments, such as the highly polluted LU, could significantly reduce the misclassification in health burden assessments.
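The static-versus-dynamic contrast at the core of the framework reduces to a time-weighted average over microenvironments. The sketch below uses illustrative concentrations and time fractions, not the study's data; only the structure of the calculation follows the abstract.

```python
# Hypothetical PM2.5 concentrations (µg/m³) and daily time fractions
# for the four microenvironments; values are illustrative only.
me = {
    "dwellings":   {"pm25": 12.0,  "time": 0.600},
    "transport":   {"pm25": 18.0,  "time": 0.050},
    "underground": {"pm25": 300.0, "time": 0.004},
    "outdoors":    {"pm25": 11.0,  "time": 0.346},
}

# Static (Tier-1-like) exposure: outdoor concentration as a surrogate.
static_exposure = me["outdoors"]["pm25"]

# Dynamic exposure: time-weighted average across microenvironments.
dynamic_exposure = sum(v["pm25"] * v["time"] for v in me.values())

print(f"static:  {static_exposure:.1f} µg/m³")
print(f"dynamic: {dynamic_exposure:.1f} µg/m³")
```

Even a tiny time fraction in a highly polluted ME (such as the Underground here) shifts the dynamic estimate noticeably away from the static surrogate, which is the misclassification the study quantifies.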
A game theoretic approach to robust filtering
A game-theoretic approach to the filtering or smoothing problem is presented. A family of stationary information-carrying processes and generalized models for the noise channel and the filter are considered. Sufficient conditions for the existence of saddle-point type solutions are stated. In addition, the solution is found for a special case of noise channel, a family of information-carrying processes, and a nonlinear filter.
Recursive estimation of prior probabilities using the mixture approach
The problem of estimating the prior probabilities q_k of a mixture of known density functions f_k(X), based on a sequence of N statistically independent observations, is considered. It is shown that for very mild restrictions on f_k(X), the maximum likelihood estimate of Q is asymptotically efficient. A recursive algorithm for estimating Q is proposed, analyzed, and optimized. For the M = 2 case, the recursive algorithm can achieve the same performance as the maximum likelihood estimate. For M > 2, slightly inferior performance is the price for having a recursive algorithm. However, the loss is computable and tolerable.
Virtual power plants with electric vehicles
The benefits of integrating aggregated Electric Vehicles (EV) within the Virtual Power Plant (VPP) concept are addressed. Two types of EV aggregators are identified: i) the Electric Vehicle Residential Aggregator (EVRA), which is responsible for the management of dispersed and clustered EVs in a residential area, and ii) the Electric Vehicle Commercial Aggregator (EVCA), which is responsible for the management of EVs clustered in a single car park. A case study of a workplace EVCA is presented, providing insight into its operation and service capabilities.
Automating virtual power plant decision making with fuzzy logic and human psychology
This paper presents a Virtual Power Plant (VPP) decision-making approach which uses fuzzy logic and a novel "insecurity" metric based on human psychology. The VPP approach is modelled as a multi-agent system, which aims to minimize carbon emissions and/or energy cost, using an aggregation structure similar to energy or carbon markets. The "insecurity factor" reflects the operational flexibility of micro-generators, translated to a numerical value through fuzzy logic. The system was able to create a functional internal VPP market, where the micro-generators traded autonomously according to external price signals while taking into account their own needs and limitations, as well as short-term forecasts.
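A fuzzy mapping from flexibility indicators to a single numeric score, as the "insecurity factor" describes, can be sketched as below. The inputs, membership functions, and rule weights are all hypothetical illustrations, not the paper's actual fuzzy system.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def insecurity(storage_margin, forecast_error):
    """Map two flexibility indicators (both in [0, 1]) to an
    'insecurity' score in [0, 1]: a low storage margin and a high
    forecast error both raise insecurity.  Weights are illustrative."""
    low_margin = tri(storage_margin, -0.01, 0.0, 0.5)   # fires when margin is small
    high_error = tri(forecast_error, 0.5, 1.0, 1.01)    # fires when error is large
    # Weighted aggregation of the fired rules, clipped to [0, 1].
    return min(1.0, 0.6 * low_margin + 0.4 * high_error)

print(insecurity(0.1, 0.8))   # tight margin, poor forecast -> high insecurity
```

An agent could then bid more conservatively in the internal VPP market as its insecurity score rises, which is the behavioural coupling the abstract describes.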