Application of AI in Modeling of Real System in Chemistry
In recent years, the discharge of synthetic dye waste from various industries has caused serious aquatic and environmental pollution, a global problem of great concern. The prediction of dye removal therefore plays an important role in wastewater management and the conservation of nature. Artificial intelligence methods are popular owing to their ease of use and high accuracy. This chapter presents a detailed review of artificial intelligence-based dye-removal prediction methods, particularly multiple linear regression (MLR), artificial neural networks (ANNs), and least squares support vector machines (LS-SVM). The chapter also focuses on ensemble prediction models (EPMs) for dye-removal prediction; EPMs improve prediction accuracy by integrating several individual prediction models. The principles, advantages, disadvantages, and applications of these artificial intelligence-based methods are explained, and future directions for research on AI-based dye-removal prediction are discussed.
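The abstract above names MLR, an ANN, and an SVM regressor as base learners combined into an ensemble. A minimal sketch of that idea, with entirely synthetic data and hypothetical input features (pH, adsorbent dose, contact time), averaging the three base models via scikit-learn's `VotingRegressor`:

```python
# Hedged sketch of an ensemble prediction model (EPM) for dye-removal
# prediction. The data, feature names, and hyperparameters are invented;
# LinearRegression, MLPRegressor, and SVR stand in for MLR, ANN, and LS-SVM.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.ensemble import VotingRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical features: pH, adsorbent dose (g/L), contact time (min);
# target: percent dye removed (a made-up linear relationship plus noise).
X = rng.uniform([2.0, 0.1, 5.0], [10.0, 2.0, 120.0], size=(200, 3))
y = 40 + 3 * X[:, 0] + 10 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 2, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ensemble = VotingRegressor([
    ("mlr", LinearRegression()),
    ("ann", make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16,),
                                       max_iter=2000, random_state=0))),
    ("svm", make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))),
])
ensemble.fit(X_tr, y_tr)
print(round(ensemble.score(X_te, y_te), 3))  # R^2 of the averaged prediction
```

Averaging the three predictions dampens the errors of any single base learner, which is the accuracy-improving mechanism the abstract attributes to EPMs.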
Procuring load curtailment from local customers under uncertainty
J.M. was supported by EPSRC grant no. EP/K00557X/2, A.M. was partially supported by EPSRC grant EP/P003818/1, and J.V. by a President’s PhD Scholarship from Imperial College London.
Quality Sensitive Price Competition in Spectrum Oligopoly: Part 1
We investigate a spectrum oligopoly market where primaries lease their channels to secondaries in return for financial remuneration. The transmission quality of a channel evolves randomly. Each primary must select the price it will quote without knowing the transmission qualities of its competitors' channels. Each secondary buys a channel depending on the price and the transmission quality the channel offers. We formulate the price selection problem as a non-cooperative game with the primaries as players. In the one-shot game, we show that there exists a unique symmetric Nash equilibrium (NE) strategy profile and explicitly compute it. Our analysis reveals that under the NE strategy profile a primary prices its channel to render the high-quality channel more preferable to the secondary; this negates the popular belief that prices ought to be selected to render channels equally preferable to the secondary regardless of their qualities. We show the loss of revenue in the asymptotic limit due to the non-cooperation of the primaries. In the repeated version of the game, we characterize a subgame perfect NE in which a primary can attain a payoff arbitrarily close to the payoff it would obtain if the primaries cooperated. Comment: Accepted for publication in IEEE/ACM Transactions on Networking. 41 pages, single-column format. The conference version is available at arXiv:1305.335
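The paper's one-shot game involves random channel qualities and a mixed symmetric NE, which an abstract cannot convey in full. As a much-simplified, purely illustrative analogue of non-cooperative price selection, one can check pure-strategy symmetric equilibria of a Bertrand-style game on a discrete price grid (all numbers and rules here are assumptions, not the paper's model):

```python
# Toy pure-strategy analogue of a one-shot price-selection game: two
# symmetric primaries quote prices on a discrete grid; the secondary buys
# the cheaper channel and ties split demand. This Bertrand-style sketch is
# for illustration only; the actual model in the paper adds random channel
# qualities and characterizes a mixed symmetric NE.
PRICES = [0, 1, 2, 3, 4, 5]  # hypothetical price grid, zero marginal cost

def payoff(p_own, p_other):
    if p_own < p_other:
        return p_own          # undercut: win the secondary at your price
    if p_own == p_other:
        return p_own / 2      # tie: demand is split
    return 0                  # overpriced: no sale

def is_symmetric_ne(p):
    # (p, p) is a symmetric NE if no unilateral deviation raises payoff.
    return all(payoff(d, p) <= payoff(p, p) for d in PRICES)

print([p for p in PRICES if is_symmetric_ne(p)])  # → [0, 1, 2]
```

Even this toy shows the structure of the analysis: equilibrium is verified by checking every unilateral deviation against the candidate symmetric profile.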
How to Host a Data Competition: Statistical Advice for Design and Analysis of a Data Competition
Data competitions rely on real-time leaderboards to rank competitor entries
and stimulate algorithm improvement. While such competitions have become quite
popular and prevalent, particularly in supervised learning formats, their
implementations by the host are highly variable. Without careful planning, a
supervised learning competition is vulnerable to overfitting, where the winning
solutions are so closely tuned to the particular set of provided data that they
cannot generalize to the underlying problem of interest to the host. This paper
outlines some important considerations for strategically designing relevant and
informative data sets to maximize the learning outcome from hosting a
competition based on our experience. It also describes a post-competition
analysis that enables robust and efficient assessment of the strengths and
weaknesses of solutions from different competitors, as well as greater
understanding of the regions of the input space that are well-solved. The
post-competition analysis, which complements the leaderboard, uses exploratory
data analysis and generalized linear models (GLMs). The GLMs not only expand
the range of results we can explore, they also provide more detailed analysis
of individual sub-questions including similarities and differences between
algorithms across different types of scenarios, universally easy or hard
regions of the input space, and different learning objectives. When coupled
with a strategically planned data generation approach, the methods provide
richer and more informative summaries to enhance the interpretation of results
beyond just the rankings on the leaderboard. The methods are illustrated with a
recently completed competition to evaluate algorithms capable of detecting,
identifying, and locating radioactive materials in an urban environment. Comment: 36 pages
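The post-competition analysis described above fits GLMs to competitor outcomes to separate algorithm effects from properties of the input space. A minimal sketch of that idea with simulated data (the algorithm names, skill values, and "difficulty" covariate are all invented; a logistic regression, i.e. a binomial GLM, stands in for the paper's GLMs):

```python
# Hedged sketch of a GLM-based post-competition analysis: model each entry's
# success probability as a function of algorithm identity and a scenario
# covariate, then inspect the fitted effects. All data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
algos = np.repeat(["A", "B", "C"], 200)      # three hypothetical competitors
difficulty = rng.uniform(0, 1, 600)          # per-scenario difficulty in [0, 1]
skill = {"A": 2.0, "B": 1.0, "C": 0.0}       # latent skills used to simulate
logit = np.array([skill[a] for a in algos]) - 3 * difficulty
success = rng.uniform(size=600) < 1 / (1 + np.exp(-logit))

# Design matrix: dummy codes for algorithms B and C (A is the baseline),
# plus the difficulty covariate.
X = np.column_stack([(algos == "B").astype(float),
                     (algos == "C").astype(float),
                     difficulty])
glm = LogisticRegression().fit(X, success)
# Negative difficulty coefficient => harder regions are solved less often;
# algorithm coefficients compare B and C against baseline A.
print(glm.coef_.round(2))
```

This is exactly the kind of sub-question the abstract mentions: the same fit yields algorithm comparisons and identifies universally hard regions of the input space, complementing the raw leaderboard ranking.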
The Two Kinds of Free Energy and the Bayesian Revolution
The concept of free energy has its origins in 19th century thermodynamics,
but has recently found its way into the behavioral and neural sciences, where
it has been promoted for its wide applicability and has even been suggested as
a fundamental principle of understanding intelligent behavior and brain
function. We argue that there are essentially two different notions of free
energy in current models of intelligent agency, both of which can be considered
applications of Bayesian inference to the problem of action selection: one that
appears when trading off accuracy and uncertainty based on a general maximum
entropy principle, and one that formulates action selection in terms of
minimizing an error measure that quantifies deviations of beliefs and policies
from given reference models. The first approach provides a normative rule for
action selection in the face of model uncertainty or when information
processing capabilities are limited. The second approach directly aims to
formulate the action selection problem as an inference problem in the context
of Bayesian brain theories, also known as Active Inference in the literature.
We elucidate the main ideas and discuss critical technical and conceptual
issues revolving around these two notions of free energy that both claim to
apply at all levels of decision-making, from the high-level deliberation of
reasoning down to the low-level information processing of perception.
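The first notion described above, a trade-off between expected cost and entropy, has a well-known closed-form solution: the variational free energy F(p) = E_p[cost] − T·H(p) over a distribution p on actions is minimized by a Boltzmann (softmax) distribution. A numerical sketch with illustrative costs and temperature (neither taken from the article):

```python
# Hedged sketch of the "maximum entropy" notion of free energy: trading off
# expected cost against the entropy of the action distribution. Costs and
# the temperature T below are arbitrary illustrative choices.
import numpy as np

costs = np.array([1.0, 2.0, 4.0])   # hypothetical per-action costs
T = 0.5                              # temperature: weight on the entropy term

def free_energy(p):
    entropy = -np.sum(p * np.log(p))
    return np.dot(p, costs) - T * entropy

# Closed-form minimizer: softmax of negative costs scaled by 1/T.
p_star = np.exp(-costs / T)
p_star /= p_star.sum()

# Sanity check: F is convex on the simplex, so random perturbations of the
# softmax solution can never lower the free energy.
rng = np.random.default_rng(0)
for _ in range(100):
    d = rng.normal(size=3)
    d -= d.mean()                    # stay on the probability simplex
    p = np.clip(p_star + 1e-3 * d, 1e-9, None)
    p /= p.sum()
    assert free_energy(p) >= free_energy(p_star) - 1e-12
print(p_star.round(3))
```

At low temperature the policy concentrates on the cheapest action; at high temperature it approaches uniform, which is the accuracy-versus-uncertainty trade-off the abstract refers to.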
Technology and the environment: an evolutionary approach to sustainable technological change
(WP 02/04) The results of our model show that it would be advisable to undertake policies expressly aimed at the process of sustainable technological change, as a complement to conventional equilibrium-oriented environmental policies. In short, the main objectives of this paper are to understand more fully the dynamics of the process of technological change and its role in sustainable development, and to assess the implications of this dynamic approach for techno-environmental policy. To achieve these goals we have developed an agent-based model, using distributed artificial intelligence concepts drawn from the general methodology of social simulation. Keywords: Agent-based models, Evolutionary models, Lock-in, Standardization, Technology diffusion, Sustainability
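The abstract's keywords mention lock-in and technology diffusion, the classic phenomena that agent-based models of technological change capture. The abstract does not give the model itself, but a standard Arthur-style increasing-returns simulation sketches the mechanism (all parameters below are assumptions for illustration):

```python
# Hedged agent-based sketch of technological lock-in: agents sequentially
# adopt one of two technologies, each adoption raises that technology's
# payoff (increasing returns), so early random fluctuations can lock the
# market in. This is a generic illustration, not the paper's model.
import math
import random

def simulate(n_agents=500, returns=0.1, seed=None):
    rng = random.Random(seed)
    adopters = [0, 0]  # counts for technologies A and B
    for _ in range(n_agents):
        # Payoff gap = increasing returns times the adoption lead of A.
        gap = returns * (adopters[0] - adopters[1])
        p_a = 1 / (1 + math.exp(-gap))  # logit choice favors the leader
        adopters[0 if rng.random() < p_a else 1] += 1
    return adopters

# Share of the winning technology across 20 independent runs.
shares = [max(simulate(seed=s)) / 500 for s in range(20)]
print(round(sum(shares) / 20, 2))  # typically well above 0.5: lock-in
```

The positive feedback loop means the market rarely splits evenly, which is why such evolutionary models argue that policy should target the technological-change process itself rather than only the end-state equilibrium.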