Dimensions of Neural-symbolic Integration - A Structured Survey
Research on integrated neural-symbolic systems has made significant progress
in the recent past. In particular, the understanding of ways to deal with
symbolic knowledge within connectionist systems (also called artificial neural
networks) has reached a critical mass which enables the community to strive for
applicable implementations and use cases. Recent work has covered a great
variety of logics used in artificial intelligence and provides a multitude of
techniques for dealing with them within the context of artificial neural
networks. We present a comprehensive survey of the field of neural-symbolic
integration, including a new classification of systems according to their
architectures and abilities.
Comment: 28 pages
Institutionalising Ontology-Based Semantic Integration
We address what is still a scarcity of general mathematical foundations for ontology-based semantic integration underlying current knowledge engineering methodologies in decentralised and distributed environments. After recalling the first-order ontology-based approach to semantic integration and a formalisation of ontological commitment, we propose a general theory that uses a syntax- and interpretation-independent formulation of language, ontology, and ontological commitment in terms of institutions. We claim that our formalisation generalises the intuitive notion of ontology-based semantic integration while retaining its basic insight, and we apply it for eliciting and hence comparing various increasingly complex notions of semantic integration and ontological commitment based on differing understandings of semantics.
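For reference, the institutions invoked in this abstract (in the sense of Goguen and Burstall) can be sketched as follows; this is the standard textbook formulation, not notation taken from the paper itself:

```latex
% An institution consists of a category Sign of signatures, a sentence
% functor Sen : Sign -> Set, a model functor Mod : Sign^op -> Cat, and a
% satisfaction relation |=_Sigma for each signature Sigma, such that for
% every signature morphism phi : Sigma -> Sigma', every Sigma'-model M',
% and every Sigma-sentence e, the satisfaction condition holds:
\[
  M' \models_{\Sigma'} \mathrm{Sen}(\varphi)(e)
  \quad\Longleftrightarrow\quad
  \mathrm{Mod}(\varphi)(M') \models_{\Sigma} e .
\]
```

The satisfaction condition is what makes the formulation syntax- and interpretation-independent: truth is invariant under change of signature.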
Isabelle/PIDE as Platform for Educational Tools
The Isabelle/PIDE platform addresses the question whether proof assistants of
the LCF family are suitable as technological basis for educational tools. The
traditionally strong logical foundations of systems like HOL, Coq, or Isabelle
have so far been counter-balanced by somewhat inaccessible interaction via the
TTY (or minor variations like the well-known Proof General / Emacs interface).
Thus the fundamental question of math education tools with fully-formal
background theories has often been answered negatively due to accidental
weaknesses of existing proof engines.
The idea of "PIDE" (which means "Prover IDE") is to integrate existing
provers like Isabelle into a larger environment that facilitates access by
end-users and other tools. We use Scala to expose the proof engine in ML to the
JVM world, where many user-interfaces, editor frameworks, and educational tools
already exist. This shall ultimately lead to combined mathematical assistants,
where the logical engine is in the background, without obstructing the view on
applications of formal methods, formalized mathematics, and math education in
particular.
Comment: In Proceedings THedu'11, arXiv:1202.453
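The document-oriented interaction model described above can be caricatured in a few lines; this sketch is illustrative only and bears no relation to the actual Isabelle/Scala API:

```python
# Toy caricature of PIDE's document model (NOT the real Isabelle/Scala API):
# the editor submits whole document versions; results for unchanged commands
# are reused, so only edited commands are re-checked, unlike a TTY dialogue.

def recheck(old_version, new_version, old_markup):
    """Return markup for new_version, reusing cached results where possible."""
    markup = {}
    for i, cmd in enumerate(new_version):
        if i < len(old_version) and old_version[i] == cmd:
            markup[i] = old_markup[i]          # unchanged command: reuse result
        else:
            markup[i] = "checked: " + cmd      # stand-in for real proof checking
    return markup

v1 = ["lemma add_comm", "by simp"]
m1 = recheck([], v1, {})
v2 = ["lemma add_comm", "by auto"]             # the user edits only the proof
m2 = recheck(v1, v2, m1)
assert m2[0] == m1[0] and m2[1] == "checked: by auto"
```

In the real system the checking is asynchronous and the markup is rich (errors, warnings, semantic highlighting), but the version-and-reuse shape is the same.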
Neurons and symbols: a manifesto
We discuss the purpose of neural-symbolic integration including its principles, mechanisms and applications. We outline a cognitive computational model for neural-symbolic integration, position the model in the broader context of multi-agent systems, machine learning and automated reasoning, and list some of the challenges for the area of
neural-symbolic computation to achieve the promise of effective integration of robust learning and expressive reasoning under uncertainty.
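One concrete instance of such integration is the classic translation of propositional rules into threshold units; the following toy sketch (names, weights, and thresholds are illustrative, not taken from any particular system) encodes a single conjunctive rule as a neuron:

```python
# Toy sketch of the "core method" style of neural-symbolic translation:
# a propositional rule such as A <- B & C is realised as a single
# threshold unit computing conjunction over binary inputs.

def make_rule_neuron(n_premises, weight=1.0):
    """Return a unit that fires iff all n_premises inputs fire (logical AND)."""
    threshold = weight * n_premises - 0.5
    def neuron(inputs):
        return 1 if sum(weight * x for x in inputs) > threshold else 0
    return neuron

# Rule: wet <- rain & outside
wet = make_rule_neuron(2)
assert wet([1, 1]) == 1   # both premises hold, so the head fires
assert wet([1, 0]) == 0   # one premise fails, so the head does not
```

Disjunction is handled analogously with a lower threshold, and stacking such units yields a network computing the immediate-consequence operator of a propositional program, which can then be refined by gradient-based learning.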
A Foundational View on Integration Problems
The integration of reasoning and computation services across system and
language boundaries is a challenging problem of computer science. In this
paper, we use integration for the scenario where we have two systems that we
integrate by moving problems and solutions between them. While this scenario is
often approached from an engineering perspective, we take a foundational view.
Based on the generic declarative language MMT, we develop a theoretical
framework for system integration using theories and partial theory morphisms.
Because MMT permits representations of the meta-logical foundations themselves,
this includes integration across logics. We discuss safe and unsafe integration
schemes and devise a general form of safe integration.
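The safe/unsafe distinction can be illustrated with a toy model (not the actual MMT implementation): a partial morphism translates a problem only when every symbol the problem uses is mapped, and refuses otherwise.

```python
# Toy model of "safe" integration via a partial theory morphism: a problem
# is moved from a source theory to a target theory only if every symbol it
# uses lies in the morphism's domain. Theory and symbol names are invented.

def translate(term, morphism):
    """Translate a term (a symbol name or nested tuple of terms) along a
    partial morphism given as a dict; return None if any symbol is unmapped."""
    if isinstance(term, str):
        return morphism.get(term)  # None signals an unsafe translation
    parts = [translate(t, morphism) for t in term]
    return None if None in parts else tuple(parts)

# Morphism from a theory of natural-number addition into integer arithmetic.
m = {"plus": "int_add", "zero": "int_zero"}
assert translate(("plus", "zero", "zero"), m) == ("int_add", "int_zero", "int_zero")
assert translate(("times", "zero", "zero"), m) is None  # 'times' unmapped: unsafe
```

An unsafe scheme would instead translate the mapped symbols and leave the rest uninterpreted, trading soundness guarantees for coverage.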
On the mathematical and foundational significance of the uncountable
We study the logical and computational properties of basic theorems of
uncountable mathematics, including the Cousin and Lindel\"of lemma published in
1895 and 1903. Historically, these lemmas were among the first formulations of
open-cover compactness and the Lindel\"of property, respectively. These notions
are of great conceptual importance: the former is commonly viewed as a way of
treating uncountable sets as 'almost finite', while the
latter allows one to treat uncountable sets as 'almost
countable'. This reduction of the uncountable to the finite/countable turns out
to have a considerable logical and computational cost: we show that the
aforementioned lemmas, and many related theorems, are extremely hard to prove,
while the associated sub-covers are extremely hard to compute. Indeed, in terms
of the standard scale (based on comprehension axioms), a proof of these lemmas
requires at least the full extent of second-order arithmetic, a system
originating from Hilbert-Bernays' Grundlagen der Mathematik. This observation
has far-reaching implications for the Grundlagen's spiritual successor, the
program of Reverse Mathematics, and the associated G\"odel hierarchy. We also
show that the Cousin lemma is essential for the development of the gauge
integral, a generalisation of the Lebesgue and improper Riemann integrals that
also uniquely provides a direct formalisation of Feynman's path integral.
Comment: 35 pages with one figure. The content of this version extends the
published version in that Sections 3.3.4 and 3.4 below are new. Small
corrections/additions have also been made to reflect new developments
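For orientation, the Cousin lemma discussed above admits a compact statement; the formulation below is a standard one, with B(t, δ(t)) denoting the open interval of radius δ(t) around t:

```latex
% Cousin's lemma: every gauge delta on the unit interval admits a finite
% collection of tags whose delta-neighbourhoods cover the interval.
\[
  \forall\, \delta : [0,1] \to \mathbb{R}^{+}\;
  \exists\, t_{0}, \dots, t_{k} \in [0,1]\;
  \Big( [0,1] \subseteq \bigcup_{i \le k} B\big(t_{i}, \delta(t_{i})\big) \Big)
\]
```

The gauge integral arises by requiring Riemann-sum tags to be δ-fine in exactly this sense, which is why the lemma is essential to its development.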
Designing Improved Sediment Transport Visualizations
Monitoring, or more commonly, modeling of sediment transport in the coastal environment is a critical task with relevance to coastline stability, beach erosion, tracking environmental contaminants, and safety of navigation. Increased intensity and regularity of storms such as Superstorm Sandy heighten the importance of our understanding of sediment transport processes. A weakness of current modeling capabilities is the difficulty of easily visualizing the result in an intuitive manner. Many of the available visualization software packages display only a single variable at once, usually as a two-dimensional, plan-view cross-section. With such limited display capabilities, sophisticated 3D models are undermined in both the interpretation of results and dissemination of information to the public. Here we explore a subset of existing modeling capabilities (specifically, modeling scour around man-made structures) and visualization solutions, examine their shortcomings and present a design for a 4D visualization for sediment transport studies that is based on perceptually-focused data visualization research and recent and ongoing developments in multivariate displays. Vector and scalar fields are co-displayed, yet kept independently identifiable utilizing human perception's separation of color, texture, and motion. Bathymetry, sediment grain-size distribution, and forcing hydrodynamics are a subset of the variables investigated for simultaneous representation. Direct interaction with field data is tested to support rapid validation of sediment transport model results. Our goal is a tight integration of both simulated data and real world observations to support analysis and simulation of the impact of major sediment transport events such as hurricanes. We unite modeled results and field observations within a geodatabase designed as an application schema of the Arc Marine Data Model.
Our real-world focus is on the Redbird Artificial Reef Site, roughly 18 nautical miles offshore of Delaware Bay, Delaware, where repeated surveys have identified active scour and bedform migration in 27 m water depth amongst the more than 900 deliberately sunken subway cars and vessels. Coincidentally collected high-resolution multibeam bathymetry, backscatter, and side-scan sonar data from surface and autonomous underwater vehicle (AUV) systems along with complementary sub-bottom, grab sample, bottom imagery, and wave and current (via ADCP) datasets provide the basis for analysis. This site is particularly attractive due to overlap with the Delaware Bay Operational Forecast System (DBOFS), a model that provides historical and forecast oceanographic data that can be tested in hindcast against significant changes observed at the site during Superstorm Sandy and in predicting future changes through small-scale modeling around the individual reef objects.
Reclaiming human machine nature
Extending and modifying its domain of life through artifact production is
one of the main characteristics of humankind. From the first hominids, who
used a wooden stick or a stone to extend their upper limbs and augment the
strength of their gestures, to today's systems engineers, who use
technologies to augment human cognition, perception, and action, extending
the capabilities of the human body remains a major issue. For more than
fifty years, cybernetics, computer science, and cognitive science have
imposed a single reductionist model of human-machine systems: cognitive
systems. Inspired by philosophy, behaviorist psychology, and the
information-processing metaphor, the cognitive-system paradigm imposes a
functional view and a functional analysis on the human-systems design
process. Under that design approach, the human has been reduced to
metaphysical and functional properties in a new dualism, and the
requirements of the human body have been left to physical ergonomics or
"physiology". With multidisciplinary convergence, the issues of
"human-machine" systems and "human artifacts" are evolving. The loss of
biological and social boundaries between human organisms and interactive,
informational physical artifacts calls into question current engineering
methods and the ergonomic design of cognitive systems. New developments in
human-machine systems for intensive care, human space activities, and
bio-engineering systems require grounding human-systems design in a renewed
epistemological framework for future models of human systems and
evidence-based "bio-engineering". In that context, reclaiming human
factors, the augmented human, and human-machine nature is a necessity.
Comment: Published in HCI International 2014, Heraklion, Greece (2014)