Reasoning About Liquids via Closed-Loop Simulation
Simulators are powerful tools for reasoning about a robot's interactions with
its environment. However, when simulations diverge from reality, that reasoning
becomes less useful. In this paper, we show how to close the loop between
liquid simulation and real-time perception. We use observations of liquids to
correct errors when tracking the liquid's state in a simulator. Our results
show that closed-loop simulation is an effective way to prevent large
divergence between the simulated and real liquid states. As a direct
consequence of this, our method can enable reasoning about liquids that would
otherwise be infeasible due to large divergences, such as reasoning about
occluded liquid.

Comment: Robotics: Science & Systems (RSS), July 12-16, 2017, Cambridge, MA, USA
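The core idea of the abstract — using observations to correct a simulator's tracked state so it does not drift from reality — can be illustrated with a minimal sketch. This is not the paper's actual liquid tracker; it uses a made-up 1-D "liquid level" state, an assumed unmodeled leak as the source of simulator error, and a simple observation-blending gain.

```python
# Hedged sketch of closed-loop state correction: an open-loop simulator
# accumulates error from an unmodeled leak, while the closed-loop version
# nudges its state toward observations each step, bounding the divergence.
# All dynamics, parameters, and names here are illustrative assumptions.

def simulate_step(level, inflow, leak=0.02):
    """Open-loop dynamics with a small unmodeled leak error."""
    return level + inflow - leak

def correct(level, observed, gain=0.5):
    """Blend the simulated state toward the observation."""
    return level + gain * (observed - level)

def closed_loop(levels_true, inflow=0.1, gain=0.5):
    """Track a sequence of true levels; return per-step tracking errors."""
    est = levels_true[0]
    errors = []
    for true_level in levels_true[1:]:
        est = simulate_step(est, inflow)    # predict with the simulator
        observed = true_level               # noiseless observation, for clarity
        est = correct(est, observed, gain)  # correct toward the observation
        errors.append(abs(est - true_level))
    return errors
```

With this toy model, the open-loop simulator's error grows linearly with the number of steps, while the closed-loop error settles at a small constant — the same qualitative behavior the abstract claims for the real system.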
Are Elephants Bigger than Butterflies? Reasoning about Sizes of Objects
Human vision greatly benefits from the information about sizes of objects.
The role of size in several visual reasoning tasks has been thoroughly explored
in human perception and cognition. However, the impact of the information about
sizes of objects is yet to be determined in AI. We postulate that this is
mainly attributed to the lack of a comprehensive repository of size
information. In this paper, we introduce a method to automatically infer object
sizes, leveraging visual and textual information from the Web. By maximizing the
joint likelihood of textual and visual observations, our method learns reliable
relative size estimates, with no explicit human supervision. We introduce the
relative size dataset and show that our method outperforms competitive textual
and visual baselines in reasoning about size comparisons.

Comment: To appear in AAAI 201
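The abstract's method — maximizing a joint likelihood of textual and visual observations to recover relative sizes — can be sketched in simplified form. Assuming Gaussian noise on observed log size ratios, maximum likelihood reduces to weighted least squares over log sizes; the objects, observations, and weights below are invented for illustration and are not the paper's data or model.

```python
# Hedged sketch: each observation says log(size_i) - log(size_j) is roughly
# some value, with a weight (precision) reflecting the channel's reliability
# (e.g. textual vs visual). Maximizing the joint Gaussian likelihood is the
# same as minimizing the weighted squared error, done here by gradient descent.

def fit_log_sizes(observations, n, iters=2000, lr=0.05):
    """observations: list of (i, j, log_ratio, weight) meaning
    log(size_i) - log(size_j) ≈ log_ratio with precision `weight`.
    Returns log sizes with object 0 anchored at 0 (scale is unidentifiable)."""
    x = [0.0] * n
    for _ in range(iters):
        grad = [0.0] * n
        for i, j, r, w in observations:
            err = (x[i] - x[j]) - r
            grad[i] += w * err
            grad[j] -= w * err
        for k in range(1, n):  # keep x[0] fixed as the reference object
            x[k] -= lr * grad[k]
    return x
```

Pooling noisy pairwise estimates this way yields a consistent global size ordering (e.g. butterfly < dog < elephant in a toy instance) even when individual observations disagree.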
Does Suppositional Reasoning Solve the Bootstrapping Problem?
In a 2002 article Stewart Cohen advances the “bootstrapping problem” for what he calls “basic justification theories,” and in a 2010 follow-up he offers a solution to the problem, exploiting the idea that suppositional reasoning may be used with defeasible as well as with deductive inference rules. To curtail the form of bootstrapping permitted by basic justification theories, Cohen insists that subjects must know their perceptual faculties are reliable before perception can give them knowledge. But how is such knowledge of reliability to be acquired if not through perception itself? Cohen proposes that such knowledge may be acquired a priori through suppositional reasoning. I argue that his strategy runs afoul of a plausible view about how epistemic principles function; in brief, I argue that one must actually satisfy the antecedent of an epistemic principle, not merely suppose that one does, to acquire any justification by its means – even justification for a merely conditional proposition.
Some notes on an extended query language for FSM
FSM is a database model recently proposed by the authors. FSM uses the basic concepts of
classification, generalization, aggregation and association that are common in semantic modelling, and
supports the fuzziness of the real world at the attribute, entity, class, and intra- and inter-class relation levels. It
thus provides tools to formalize and conceptualize the real world in a manner adapted to human perception of, and
reasoning about, it. In this paper we briefly review the basic concepts of FSM and provide some notes on an
extended query language adapted to it.
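The abstract does not specify FSM's query syntax, so the following is only a generic sketch of the kind of fuzzy selection such a query language supports: tuples are matched against a fuzzy predicate via a membership function, and a query keeps those whose membership degree clears a threshold (an alpha-cut). The predicate "tall", its trapezoidal membership function, and the threshold are all made-up examples.

```python
# Illustrative sketch of a fuzzy selection, not FSM's actual query language.

def tall(height_cm):
    """Made-up trapezoidal membership function for the fuzzy term 'tall'."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 185:
        return 1.0
    return (height_cm - 160) / 25  # linear ramp between 160 cm and 185 cm

def fuzzy_select(rows, membership, alpha):
    """Keep rows whose membership degree is at least alpha (an alpha-cut),
    returning each row paired with its degree of match."""
    return [(r, membership(r)) for r in rows if membership(r) >= alpha]
```

A crisp query would return a plain yes/no per row; the fuzzy version instead grades each row, which is the kind of real-world vagueness the abstract says FSM is designed to capture.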