On-line Non-stationary Inventory Control using Champion Competition
The commonly adopted assumption of stationary demand fails to reflect the
fluctuating demands observed in practice and weakens solution effectiveness. We
consider an On-line Non-stationary Inventory Control Problem (ONICP), in which
no specific assumption is imposed on demands and their probability
distributions are allowed to vary over periods and correlate with each other.
The non-stationary nature of demands invalidates the optimality of static
(s,S) policies and the applicability of their corresponding algorithms. The ONICP
becomes computationally intractable by using general Simulation-based
Optimization (SO) methods, especially under an on-line decision-making
environment with no luxury of time and computing resources to afford the huge
computational burden. We develop a new SO method, termed "Champion Competition"
(CC), which provides a different framework and bypasses the time-consuming
sample average routine adopted in general SO methods. An alternate type of
optimal solution, termed "Champion Solution", is pursued in the CC framework,
which coincides with the traditional optimality sense under certain conditions and
serves as a near-optimal solution for general cases. The CC can reduce the
complexity of general SO methods by orders of magnitude in solving a class of
SO problems, including the ONICP. A polynomial algorithm, termed "Renewal Cycle
Algorithm" (RCA), is further developed to fulfill an important procedure of the
CC framework in solving this ONICP. Numerical examples are included to
demonstrate the performance of the CC framework with the RCA embedded.
Comment: I just identified a flaw in the paper. It may take me some time to
fix it. I would like to withdraw the article and update it once I have
finished. Thank you for your kind support.
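The static (s,S) policy the abstract refers to is the classical baseline: order up to level S whenever inventory falls to or below the reorder point s. A minimal illustrative sketch (the numbers are assumptions, not from the paper):

```python
# Sketch of a static (s, S) inventory policy -- the classical rule whose
# optimality the abstract says breaks down under non-stationary demand.
# The parameter values below are illustrative assumptions.

def s_S_order(inventory_level, s=20, S=100):
    """Order up to S whenever inventory is at or below the reorder point s."""
    if inventory_level <= s:
        return S - inventory_level  # replenish back up to S
    return 0  # above the reorder point: place no order

# Inventory has dropped to 15 units (<= s = 20), so order 85 units.
print(s_S_order(15))  # 85
print(s_S_order(50))  # 0
```

Under non-stationary, correlated demand the fixed pair (s, S) is no longer optimal, which is what motivates the paper's on-line Champion Competition framework.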
Can intelligence explode?
The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. It took many decades for these ideas to spread from science fiction to popular science magazines and finally to attract the attention of serious philosophers. David Chalmers' (JCS, 2010) article is the first comprehensive philosophical analysis of the singularity in a respected philosophy journal. The motivation of my article is to augment Chalmers' and to discuss some issues not addressed by him, in particular what it could mean for intelligence to explode. In the course of this analysis, I will provide a more careful treatment of what intelligence actually is, separate speed explosion from intelligence explosion, compare what super-intelligent participants and classical human observers might experience and do, discuss immediate implications for the diversity and value of life, consider possible bounds on intelligence, and contemplate intelligences right at the singularity.
Exploring Convolutional Networks for End-to-End Visual Servoing
Present image-based visual servoing approaches rely on extracting
hand-crafted visual features from an image. Choosing the right set of features is
important as it directly affects the performance of any approach. Motivated by
recent breakthroughs in performance of data driven methods on recognition and
localization tasks, we aim to learn visual feature representations suitable for
servoing tasks in unstructured and unknown environments. In this paper, we
present an end-to-end learning based approach for visual servoing in diverse
scenes where the knowledge of camera parameters and scene geometry is not
available a priori. This is achieved by training a convolutional neural network
over color images with synchronised camera poses. Through experiments performed
in simulation and on a quadrotor, we demonstrate the efficacy and robustness of
our approach for a wide range of camera poses in both indoor as well as outdoor
environments.
Comment: IEEE ICRA 201
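For context, the hand-crafted-feature baseline the abstract contrasts with is the classical image-based visual servoing (IBVS) law: drive the camera with a velocity proportional to the pseudo-inverse of an interaction matrix applied to the feature error. A hedged sketch (the interaction matrix and feature values here are placeholder assumptions):

```python
import numpy as np

# Sketch of the classical IBVS control law: v = -lambda * pinv(L) @ e,
# where e is the error between current and desired hand-crafted image
# features and L is their interaction (image Jacobian) matrix.
# The matrix and feature values below are illustrative placeholders.

def ibvs_velocity(features, desired, L, lam=0.5):
    error = features - desired                # feature-space error
    return -lam * np.linalg.pinv(L) @ error   # 6-DoF camera velocity twist

# Two tracked image points -> a 4-D feature vector; L is an assumed 4x6 matrix.
features = np.array([0.10, 0.05, -0.08, 0.02])
desired = np.zeros(4)
L = np.random.default_rng(0).standard_normal((4, 6))
v = ibvs_velocity(features, desired, L)
print(v.shape)  # (6,)
```

The paper's end-to-end approach replaces the hand-designed features and interaction matrix with a convolutional network trained on images with synchronised camera poses, so no camera parameters or scene geometry are needed a priori.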
The AGI Containment Problem
There is considerable uncertainty about what properties, capabilities and
motivations future AGIs will have. In some plausible scenarios, AGIs may pose
security risks arising from accidents and defects. In order to mitigate these
risks, prudent early AGI research teams will perform significant testing on
their creations before use. Unfortunately, if an AGI has human-level or greater
intelligence, testing itself may not be safe; some natural AGI goal systems
create emergent incentives for AGIs to tamper with their test environments,
make copies of themselves on the internet, or convince developers and operators
to do dangerous things. In this paper, we survey the AGI containment problem -
the question of how to build a container in which tests can be conducted safely
and reliably, even on AGIs with unknown motivations and capabilities that could
be dangerous. We identify requirements for AGI containers, available
mechanisms, and weaknesses that need to be addressed.