Ontology modeling and object modeling in software engineering
A data model is a plan for building a database, comparable to an architect's building plans. Two major methodologies are used to create a data model: the Entity-Relationship (ER) approach and the object model. This paper discusses only the object-model approach. The goal of a data model is to ensure that all data objects required by the database are completely and accurately represented. Ontologies describe the objects of interest in a universe of discourse. The objective of this paper is to compare object models with ontology models. There are similarities between objects in object models and concepts, sometimes called classes, in ontologies, and an ontology can help in building an object model. The object model is the centre of data modeling; an ontology, on the other hand, is built around concepts, which form the basis of a knowledge base. Because ontologies are closely related to modern object-oriented software design, it is a natural step to adapt existing object-oriented software development methodologies to the task of ontology development. Selected approaches originating from research in artificial intelligence, knowledge representation, and object modeling are presented in this paper. Some of the issues discussed concern the connection between the two; others address their similarities and differences directly. The paper also surveys the available tools, methods, procedures, languages, and reusability mechanisms that show the cooperation between object modeling and ontologies.
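The contrast the abstract draws between object models and ontologies can be made concrete with a minimal sketch. The names (`Person`, `worksFor`, `Agent`) are illustrative assumptions, not taken from the paper: the same concept is expressed once as an object-model class (structure plus behaviour) and once as ontology-style subject-predicate-object triples (explicit concepts and relations).

```python
# Hedged sketch: one concept rendered in both styles. All identifiers here
# (Person, Agent, worksFor, Organization) are hypothetical examples.

class Person:
    """Object-model view: a class bundles attributes and behaviour."""
    def __init__(self, name: str, employer: str):
        self.name = name
        self.employer = employer

# Ontology view: the same knowledge as explicit triples that a reasoner
# could query, independent of any one program's class hierarchy.
triples = {
    ("Person", "is_a", "Agent"),
    ("Person", "has_property", "name"),
    ("worksFor", "domain", "Person"),
    ("worksFor", "range", "Organization"),
}

p = Person("Ada", "Acme")
print(p.name)                                   # instance attribute access
print(("Person", "is_a", "Agent") in triples)   # membership query on the ontology
```

The object model fixes one application's representation; the triple set makes the conceptualisation itself shareable, which is why the abstract argues an ontology can guide object-model construction.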
Shapes: Seeing and doing with shape grammars
This paper describes the visual interface of a configurable and extensible system to support generic work with shape grammars. Shape grammars allow the implementation of computational mechanisms to analyze and synthesize designs of visual languages and have been used to represent the knowledge behind the creative work of architects, designers and artists. Such grammars are inherently visual. The system described, a kind of universal machine for shape grammars, allows users to build their own shape grammars and experiment with them. The system has been the focus of our past work; it mixes technological and artistic aspects and has a specific computational architecture comprising a symbolic and a visual interface. The latter is the subject of this paper.
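Shape grammars rewrite geometry, which cannot be shown in a few lines of text, but the core generate-by-rule-application loop the abstract alludes to can be sketched with a symbolic analogue. This is an assumption-laden stand-in (an L-system-style parallel rewriter over characters), not the system's actual mechanism:

```python
# Illustrative analogue only: real shape grammars match and replace subshapes
# under transformations; here, characters stand in for shapes and the rule
# table below is hypothetical.
rules = {"A": "AB", "B": "A"}

def derive(start: str, steps: int) -> str:
    """Apply every matching rule in parallel for `steps` rounds."""
    s = start
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

print(derive("A", 3))  # A -> AB -> ABA -> ABAAB
```

The "universal machine" framing in the abstract corresponds to keeping `rules` user-supplied data rather than hard-coding them, so users can define and experiment with their own grammars.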
Geospatial Narratives and their Spatio-Temporal Dynamics: Commonsense Reasoning for High-level Analyses in Geographic Information Systems
The modelling, analysis, and visualisation of dynamic geospatial phenomena
has been identified as a key developmental challenge for next-generation
Geographic Information Systems (GIS). In this context, the envisaged
paradigmatic extensions to contemporary foundational GIS technology raise
fundamental questions concerning the ontological, formal representational, and
(analytical) computational methods that would underlie their spatial
information theoretic underpinnings.
We present the conceptual overview and architecture for the development of
high-level semantic and qualitative analytical capabilities for dynamic
geospatial domains. Building on formal methods in the areas of commonsense
reasoning, qualitative reasoning, spatial and temporal representation and
reasoning, reasoning about actions and change, and computational models of
narrative, we identify concrete theoretical and practical challenges that
accrue in the context of formal reasoning about `space, events, actions, and
change'. With this as a basis, and within the backdrop of an illustrated
scenario involving the spatio-temporal dynamics of urban narratives, we address
specific problems and solution techniques chiefly involving `qualitative
abstraction', `data integration and spatial consistency', and `practical
geospatial abduction'. From a broad topical viewpoint, we propose that
next-generation dynamic GIS technology demands a transdisciplinary scientific
perspective that brings together Geography, Artificial Intelligence, and
Cognitive Science.
Keywords: artificial intelligence; cognitive systems; human-computer
interaction; geographic information systems; spatio-temporal dynamics;
computational models of narrative; geospatial analysis; geospatial modelling;
ontology; qualitative spatial modelling and reasoning; spatial assistance
systems
Comment: ISPRS International Journal of Geo-Information (ISSN 2220-9964);
Special Issue on: Geospatial Monitoring and Modelling of Environmental
Change. IJGI. Editor: Duccio Rocchini. (pre-print of article in press)
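One flavour of the qualitative spatio-temporal reasoning the abstract builds on can be illustrated with Allen-style interval relations, where metric timestamps are abstracted into a small vocabulary of qualitative relations. This is a generic sketch of the idea, not code from the paper, and it classifies only a few of Allen's thirteen relations:

```python
# Hedged sketch: qualitative abstraction of two time intervals a=(start, end),
# b=(start, end) into a named Allen-style relation. Only a subset of the
# thirteen relations is distinguished here.
def allen_relation(a, b):
    (a_s, a_e), (b_s, b_e) = a, b
    if a_e < b_s:
        return "before"
    if a_s > b_e:
        return "after"
    if a_e == b_s:
        return "meets"
    if a_s == b_s and a_e == b_e:
        return "equal"
    return "overlaps-or-other"

print(allen_relation((1, 3), (3, 5)))  # "meets"
print(allen_relation((1, 2), (4, 6)))  # "before"
```

A GIS narrative layer can then reason over these symbolic relations ("the protest *meets* the road closure") instead of raw coordinates and timestamps, which is what makes high-level analyses and geospatial abduction tractable.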
Veracity and velocity of social media content during breaking news: analysis of November 2015 Paris shootings
Social media sources are becoming increasingly important in journalism. Under breaking-news deadlines, semi-automated support for identification and verification of content is critical. We describe a large-scale content-level analysis of over 6 million Twitter, YouTube and Instagram records covering the first 6 hours of the November 2015 Paris shootings. We ground our analysis by tracing how 5 ground-truth images used in actual news reports went viral. We look at the velocity of newsworthy content and its veracity with regard to trusted-source attribution. We also examine temporal segmentation combined with statistical frequency counters to identify likely eyewitness content for input to real-time breaking-content feeds. Our results suggest attribution to trusted sources might be a good indicator of content veracity, and that temporal segmentation coupled with frequency-based statistical metrics could be used to highlight eyewitness content in real time if applied with some additional text filters.
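The "temporal segmentation combined with statistical frequency counters" step can be sketched in a few lines: bucket posts into fixed time windows and count term frequencies per window, so that terms spiking in a window surface as candidate eyewitness signals. The window size and toy posts below are illustrative assumptions, not the paper's parameters or data:

```python
# Hedged sketch of temporal segmentation + frequency counting.
# WINDOW and the sample posts are hypothetical, not from the study.
from collections import Counter

WINDOW = 300  # seconds per bucket (5 minutes; an assumption)

posts = [  # (unix_timestamp, text) stand-ins for Twitter/Instagram records
    (0,   "explosion near stadium"),
    (120, "explosion heard stadium"),
    (400, "police at stadium"),
]

buckets: dict[int, Counter] = {}
for ts, text in posts:
    buckets.setdefault(ts // WINDOW, Counter()).update(text.split())

for window, counts in sorted(buckets.items()):
    print(window, counts.most_common(2))
```

In the paper's pipeline this per-window ranking would be combined with trusted-source attribution and additional text filters before anything reaches a live breaking-news feed.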
Super Logic Programs
The Autoepistemic Logic of Knowledge and Belief (AELB) is a powerful
nonmonotonic formalism introduced by Teodor Przymusinski in 1994. In this paper,
we specialize it to a class of theories called `super logic programs'. We argue
that these programs form a natural generalization of standard logic programs.
In particular, they allow disjunctions and default negation of arbitrary
positive objective formulas.
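A tiny example makes "disjunctions and default negation" concrete. The sketch below is an illustrative encoding, not the paper's formalism or its static semantics: rules are triples (head disjuncts, positive body, default-negated body), and the check implements only the basic satisfaction condition that a rule whose body holds must have some head atom true.

```python
# Hedged sketch: a toy disjunctive program with default negation.
# Rule format: (head_atoms, positive_body_atoms, negated_body_atoms).
rules = [
    ({"a", "b"}, set(), {"c"}),   # a ; b :- not c.
    ({"c"}, {"d"}, set()),        # c :- d.
]

def satisfies(model: set, rules) -> bool:
    """True iff every rule whose body holds in `model` has a head atom in `model`.

    This is plain rule satisfaction, far weaker than the static semantics
    the paper characterizes; it only illustrates the rule shapes involved.
    """
    for head, pos, neg in rules:
        body_holds = pos <= model and not (neg & model)
        if body_holds and not (head & model):
            return False
    return True

print(satisfies({"a"}, rules))   # first rule fires (c absent) and a is in the model
print(satisfies(set(), rules))   # first rule fires but no head atom is true
```

Standard logic programs restrict heads to single atoms and bodies to atom-level negation; super logic programs lift both restrictions, which is the generalization the abstract claims.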
Our main results are two new and powerful characterizations of the static
semantics of these programs, one syntactic, and one model-theoretic. The
syntactic fixed point characterization is much simpler than the fixed point
construction of the static semantics for arbitrary AELB theories. The
model-theoretic characterization via Kripke models allows one to construct
finite representations of the inherently infinite static expansions.
Both characterizations can be used as the basis of algorithms for query
answering under the static semantics. We describe a query-answering interpreter
for super programs which we developed based on the model-theoretic
characterization and which is available on the web.
Comment: 47 pages; revised version of the paper submitted 10/200