Risk Assessment Algorithms Based On Recursive Neural Networks
The assessment of high-risk situations at road intersections has recently
emerged as an important research topic within the context of the
automotive industry. In this paper we shall introduce a novel approach to
compute risk functions by using a combination of a highly non-linear processing
model in conjunction with a powerful information encoding procedure.
Specifically, the elements of information either static or dynamic that appear
in a road intersection scene are encoded by using directed positional acyclic
labeled graphs. The risk assessment problem is then reformulated in terms of an
inductive learning task carried out by a recursive neural network. Recursive
neural networks are connectionist models capable of solving supervised and
non-supervised learning problems represented by directed ordered acyclic
graphs. The potential of this novel approach is demonstrated through
predefined scenarios. The major difference between our approach and others
is that it learns the structure of the risk. Furthermore, the combination of
a rich information encoding procedure with a generalized model of dynamical
recurrent networks permits, as we shall demonstrate, a sophisticated
processing of information that we believe is a first step toward building
future advanced intersection safety systems.
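As a hedged illustration of the idea (not the paper's model; the scene graph, label vectors, and dimensions below are invented for the sketch), a recursive network can fold a directed acyclic labeled graph bottom-up and map the root state to a risk score:

```python
# Hypothetical sketch of a recursive network over a labeled DAG.
# The toy scene, label vectors, and dimensions are illustrative
# assumptions, not the paper's actual encoding.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # hidden state size (assumed)

# Toy intersection scene: node -> (label vector, children).
graph = {
    "car":        (rng.normal(size=DIM), []),
    "pedestrian": (rng.normal(size=DIM), []),
    "crossing":   (rng.normal(size=DIM), ["car", "pedestrian"]),
    "scene":      (rng.normal(size=DIM), ["crossing"]),
}

W_label = 0.1 * rng.normal(size=(DIM, DIM))
W_child = 0.1 * rng.normal(size=(DIM, DIM))
w_out = 0.1 * rng.normal(size=DIM)

def encode(node, memo=None):
    """Bottom-up state: children are encoded first, then combined."""
    memo = {} if memo is None else memo
    if node not in memo:
        label, children = graph[node]
        child_sum = sum((encode(c, memo) for c in children),
                        np.zeros(DIM))
        memo[node] = np.tanh(W_label @ label + W_child @ child_sum)
    return memo[node]

def risk(root):
    """Squash the root state into a risk value in (0, 1)."""
    return float(1.0 / (1.0 + np.exp(-w_out @ encode(root))))

print(f"risk(scene) = {risk('scene'):.3f}")
```

In a trained system the weight matrices would be learned from labeled scenes; here they are random, so only the shape of the computation is meaningful.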
Geospatial Narratives and their Spatio-Temporal Dynamics: Commonsense Reasoning for High-level Analyses in Geographic Information Systems
The modelling, analysis, and visualisation of dynamic geospatial phenomena
have been identified as a key developmental challenge for next-generation
Geographic Information Systems (GIS). In this context, the envisaged
paradigmatic extensions to contemporary foundational GIS technology raise
fundamental questions concerning the ontological, formal representational, and
(analytical) computational methods that would underlie their spatial
information theoretic underpinnings.
We present the conceptual overview and architecture for the development of
high-level semantic and qualitative analytical capabilities for dynamic
geospatial domains. Building on formal methods in the areas of commonsense
reasoning, qualitative reasoning, spatial and temporal representation and
reasoning, reasoning about actions and change, and computational models of
narrative, we identify concrete theoretical and practical challenges that
accrue in the context of formal reasoning about `space, events, actions, and
change'. With this as a basis, and within the backdrop of an illustrated
scenario involving the spatio-temporal dynamics of urban narratives, we address
specific problems and solution techniques chiefly involving `qualitative
abstraction', `data integration and spatial consistency', and `practical
geospatial abduction'. From a broad topical viewpoint, we propose that
next-generation dynamic GIS technology demands a transdisciplinary scientific
perspective that brings together Geography, Artificial Intelligence, and
Cognitive Science.
Keywords: artificial intelligence; cognitive systems; human-computer
interaction; geographic information systems; spatio-temporal dynamics;
computational models of narrative; geospatial analysis; geospatial modelling;
ontology; qualitative spatial modelling and reasoning; spatial assistance
systems
Comment: ISPRS International Journal of Geo-Information (ISSN 2220-9964);
Special Issue on: Geospatial Monitoring and Modelling of Environmental
Change. IJGI. Editor: Duccio Rocchini. (Pre-print of article in press.)
Toward a computational theory for motion understanding: The expert animators model
Artificial intelligence researchers claim to understand some aspect of human intelligence when their model is able to emulate it. In the context of computer graphics, the ability to go from motion representation to convincing animation should accordingly be treated not simply as a trick for computer graphics programmers but as an important epistemological and methodological goal. In this paper we investigate a unifying model for animating a group of articulated bodies, such as humans and robots, in a three-dimensional environment. The proposed model is considered in the framework of knowledge representation and processing, with special reference to motion knowledge. The model is meant to help set the basis for a computational theory of motion understanding applied to articulated bodies.
Visual location awareness for mobile robots using feature-based vision
Department Head: L. Darrell Whitley. 2010 Spring. Includes bibliographical references (pages 48-50).
This thesis presents an evaluation of the feature-based visual recognition paradigm for the task of mobile robot localization. Although many works describe feature-based visual robot localization, they often do so using complex methods for map-building and position estimation which obscure the underlying vision system's performance. One of the main contributions of this work is the development of an evaluation algorithm employing simple models for location awareness, with a focus on evaluating the underlying vision system. While SeeAsYou is used as a prototypical vision system for evaluation, the algorithm is designed so that it can be used with other feature-based vision systems as well. The main result is that feature-based recognition with SeeAsYou provides some information but is not strong enough to reliably achieve location awareness without temporal context. Adding a simple temporal model, however, suggests more reliable localization performance.
Topology Reasoning for Driving Scenes
Understanding the road genome is essential to realize autonomous driving.
This challenging problem has two aspects - the connection relationships
among lanes, and the assignment relationships between lanes and traffic
elements - for which a comprehensive topology reasoning method is still
lacking. On one hand, previous map learning techniques struggle to derive
lane connectivity with segmentation or laneline paradigms, while prior lane
topology-oriented approaches focus on centerline detection and neglect
interaction modeling. On the other hand, the traffic-element-to-lane assignment
problem is limited in the image domain, leaving how to construct the
correspondence from two views an unexplored challenge. To address these issues,
we present TopoNet, the first end-to-end framework capable of abstracting
traffic knowledge beyond conventional perception tasks. To capture the driving
scene topology, we introduce three key designs: (1) an embedding module to
incorporate semantic knowledge from 2D elements into a unified feature space;
(2) a curated scene graph neural network to model relationships and enable
feature interaction inside the network; (3) instead of transmitting messages
arbitrarily, a scene knowledge graph is devised to differentiate prior
knowledge from various types of the road genome. We evaluate TopoNet on the
challenging scene understanding benchmark, OpenLane-V2, where our approach
outperforms all previous works by a large margin on all perceptual and
topological metrics. The code will be released soon.
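To make the "feature interaction inside the network" idea concrete, here is a hedged sketch of one message-passing step over a lane-connectivity graph, in the spirit of a scene graph neural network. TopoNet's real architecture, features, and relation types are not reproduced; all names, shapes, and the adjacency below are illustrative assumptions.

```python
# One simplified GNN layer over a directed lane-connectivity graph.
# Everything here (lane count, feature dim, adjacency) is a stand-in.
import numpy as np

rng = np.random.default_rng(1)
N, D = 4, 6  # number of lanes and feature dimension (assumed)
feats = rng.normal(size=(N, D))

# Directed lane connectivity: adj[i, j] = 1 if lane i feeds into lane j.
adj = np.array([[0, 1, 0, 0],
                [0, 0, 1, 1],
                [0, 0, 0, 0],
                [0, 0, 0, 0]], dtype=float)

W_self = 0.1 * rng.normal(size=(D, D))
W_msg = 0.1 * rng.normal(size=(D, D))

def propagate(x, a):
    """Each lane aggregates its predecessors' features, then a shared
    nonlinearity updates every lane state."""
    messages = a.T @ x  # row j = sum of features of lanes feeding lane j
    return np.tanh(x @ W_self.T + messages @ W_msg.T)

updated = propagate(feats, adj)
print(updated.shape)  # (4, 6)
```

Lanes with no predecessors (here, lane 0) receive a zero message and are updated from their own features alone, which is why differentiating edge types - as the abstract's scene knowledge graph does - matters in practice.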
Topological structures of complex belief systems
The concepts of substantive beliefs and derived beliefs are defined, with a set of substantive beliefs S treated as an open set, together with the neighborhood of a substantive belief. A semantic operation of conjunction is defined with the structure of an Abelian group. Mathematical structures such as a poset of beliefs and a join-semilattice of beliefs exist. A metric space of beliefs and a distance of belief depending on the believer are defined, along with the concepts of closed and open balls. S′ is defined as a subgroup of the metric space of beliefs Σ, and S′ is a totally bounded set. The term s (substantive belief) is defined in terms of the closure of S′. It is deduced that Σ is paracompact by Stone's theorem. The pseudometric space of beliefs is defined to show how the metric of the nonbelieving subject yields a topological space, a nonmaterial abstract ideal space formed in the mind of the believing subject, fulfilling the Kuratowski closure axioms. To establish patterns of materialization of beliefs, we consider that these have defined mathematical structures. This allows us to better understand cultural processes of text, architecture, norms, and education, which are forms of the materialization of an ideology. This materialization is the conversion, by means of certain mathematical correspondences, of an abstract set whose elements are beliefs or ideas into an impure set whose elements are material or energetic. Text is a materialization of ideology.
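For reference, the Kuratowski closure axioms invoked above require, for a closure operator cl on subsets of a set X and all A, B ⊆ X:

```latex
% Kuratowski closure axioms for an operator cl on subsets of X
\begin{align*}
\mathrm{cl}(\varnothing) &= \varnothing \\
A &\subseteq \mathrm{cl}(A) \\
\mathrm{cl}(A \cup B) &= \mathrm{cl}(A) \cup \mathrm{cl}(B) \\
\mathrm{cl}(\mathrm{cl}(A)) &= \mathrm{cl}(A)
\end{align*}
```

An operator satisfying these four conditions determines a topology whose closed sets are exactly the fixed points of cl.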
Image Understanding and Robotics Research at Columbia University
Over the past year, the research investigations of the Vision/Robotics Laboratory at Columbia University have reflected the interests of its four faculty members, two staff programmers, and 16 Ph.D. students. Several of the projects involve other faculty members in the department or the university, or researchers at AT&T, IBM, or Philips. We list below a summary of our interests and results, together with the principal researchers associated with them. Since it is difficult to separate those aspects of robotic research that are purely visual from those that are vision-like (for example, tactile sensing) or vision-related (for example, integrated vision-robotic systems), we have listed all robotic research that is not purely manipulative. The majority of our current investigations are deepenings of work reported last year; this was the second year of both our basic Image Understanding contract and our Strategic Computing contract. Therefore, the form of this year's report closely resembles last year's. Although there are a few new initiatives, mainly we report the new results we have obtained in the same five basic research areas. Much of this work is summarized on a video tape that is available on request. We also note two service contributions this past year. The Special Issue on Computer Vision of the Proceedings of the IEEE, August, 1988, was co-edited by one of us (John Kender [27]). And, the upcoming IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June, 1989, is co-program chaired by one of us (John Kender [23]).
ViNG: Learning Open-World Navigation with Visual Goals
We propose a learning-based navigation system for reaching visually indicated
goals and demonstrate this system on a real mobile robot platform. Learning
provides an appealing alternative to conventional methods for robotic
navigation: instead of reasoning about environments in terms of geometry and
maps, learning can enable a robot to learn about navigational affordances,
understand what types of obstacles are traversable (e.g., tall grass) or not
(e.g., walls), and generalize over patterns in the environment. However, unlike
conventional planning algorithms, it is harder to change the goal for a learned
policy during deployment. We propose a method for learning to navigate towards
a goal image of the desired destination. By combining a learned policy with a
topological graph constructed out of previously observed data, our system can
determine how to reach this visually indicated goal even in the presence of
variable appearance and lighting. Three key insights, waypoint proposal, graph
pruning and negative mining, enable our method to learn to navigate in
real-world environments using only offline data, a setting where prior methods
struggle. We instantiate our method on a real outdoor ground robot and show
that our system, which we call ViNG, outperforms previously-proposed methods
for goal-conditioned reinforcement learning, including other methods that
incorporate reinforcement learning and search. We also study how ViNG
generalizes to unseen environments and evaluate its ability to adapt to such an
environment with growing experience. Finally, we demonstrate ViNG on a number
of real-world applications, such as last-mile delivery and warehouse
inspection. We encourage the reader to visit the project website for videos of
our experiments and demonstrations: sites.google.com/view/ving-robot.
Comment: Presented at International Conference on Robotics and Automation
(ICRA) 202
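The topological-graph planning idea behind goal-image navigation can be sketched as follows. Nodes stand for previously observed images, edge weights stand for a learned model's predicted traversal cost between them, and the robot plans a shortest path to the node matching the goal image. The graph and costs below are invented stand-ins, not ViNG's learned estimates.

```python
# Dijkstra over a toy topological graph of past observations.
# Node ids and edge costs are illustrative assumptions.
import heapq

# observation id -> {neighbor: predicted traversal cost}
graph = {
    "a": {"b": 1.0, "c": 4.0},
    "b": {"c": 1.5, "d": 5.0},
    "c": {"d": 1.0},
    "d": {},
}

def shortest_path(graph, start, goal):
    """Return (cost, waypoint sequence) from start to goal."""
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            if nxt not in seen:
                heapq.heappush(frontier, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

cost, path = shortest_path(graph, "a", "d")
print(cost, path)  # 3.5 ['a', 'b', 'c', 'd']
```

The waypoints returned by the planner would then be handed, one at a time, to the learned goal-conditioned policy for low-level control.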