
    A comprehensive geospatial knowledge discovery framework for spatial association rule mining

    Continuous advances in modern data collection techniques give spatial scientists access to massive, high-resolution spatial and spatio-temporal data. There is thus an urgent need for effective and efficient methods to find unknown and useful information embedded in datasets of unprecedented size (e.g., millions of observations), high dimensionality (e.g., hundreds of variables), and complexity (e.g., heterogeneous data sources, space–time dynamics, multivariate connections, and explicit and implicit spatial relations and interactions). Responding to this development, this research focuses on applying association rule (AR) mining to the geospatial knowledge discovery process. Prior attempts have sidestepped the complexity of the spatial dependence structure embedded in the studied phenomenon, which makes the direct adoption of association rule mining in spatial analysis problematic. A very similar predicament afflicts spatial regression analysis when a spatial weight matrix is assigned a priori, without validation on the specific domain of application. Moreover, a dependable geospatial knowledge discovery process requires algorithms that support automatic, robust, and accurate procedures for evaluating mined results, an issue that has received little attention in the context of spatial association rule mining. To remedy these deficiencies, the foremost goal of this research is to construct a comprehensive geospatial knowledge discovery framework using spatial association rule mining for the detection of spatial patterns embedded in geospatial databases, and to demonstrate its application in the domain of crime analysis. It is the first attempt at delivering a complete geospatial knowledge discovery framework using spatial association rule mining.
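
    To make the core mining step concrete, the following is a minimal sketch, not the dissertation's framework: each spatial unit is described by categorical items, some of which encode spatial relations such as near(bar), and one-item rules are kept when they clear simple support and confidence thresholds. All item names and threshold values are illustrative assumptions.

    # Minimal sketch of spatial association rule mining over "transactions",
    # where each spatial unit is described by categorical items, some encoding
    # spatial relations (e.g. near(bar)). Data and thresholds are illustrative.
    from itertools import combinations

    transactions = [
        {"near(bar)", "low_income", "high_burglary"},
        {"near(bar)", "high_burglary"},
        {"near(park)", "low_income"},
        {"near(bar)", "low_income", "high_burglary"},
        {"near(park)", "high_burglary"},
    ]

    def support(itemset, transactions):
        # Fraction of spatial units containing every item in the itemset.
        return sum(itemset <= t for t in transactions) / len(transactions)

    def rules(transactions, min_support=0.4, min_confidence=0.7):
        # Enumerate 1-item => 1-item rules meeting both thresholds.
        items = sorted(set().union(*transactions))
        for a, b in combinations(items, 2):
            for lhs, rhs in ((a, b), (b, a)):
                s = support({lhs, rhs}, transactions)
                if s >= min_support:
                    conf = s / support({lhs}, transactions)
                    if conf >= min_confidence:
                        yield lhs, rhs, s, conf

    for lhs, rhs, s, conf in rules(transactions):
        print(f"{lhs} => {rhs}  support={s:.2f}  confidence={conf:.2f}")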

    Massively parallel reasoning in transitive relationship hierarchies

    This research focuses on building a parallel knowledge representation and reasoning system with the aim of making progress toward human-like intelligence. To achieve human-like intelligence, it is necessary to model human reasoning processes by programs. Knowledge in the real world is huge in size, complex in structure, and constantly changing, even in limited domains. Unfortunately, reasoning algorithms are very often intractable, which means that they are too slow for practical applications. One technique for dealing with this problem is to design special-purpose reasoners. Many past AI systems have worked well for limited problem sizes, but attempts to extend them to realistic subsets of world knowledge have led to difficulties; even special-purpose reasoners are not immune to this impasse. In this work, to overcome this problem, we combine special-purpose reasoners with massive parallelism. We have developed and implemented a massively parallel transitive closure reasoner, called Hydra, that can dynamically assimilate any transitive, binary relation and efficiently answer queries using the transitive closure of all those relations. Within certain limitations, we achieve constant-time responses for transitive closure queries. Hydra can dynamically insert new concepts or new links into a knowledge base for realistic problem sizes. Approaching human-like reasoning capabilities requires dynamic updates of the transitive relation hierarchies; our incremental, massively parallel update algorithms achieve almost constant-time updates of large knowledge bases. Hydra expands the boundaries of knowledge representation and reasoning in a number of directions: (1) Hydra improves the representational power of current systems. We have developed a set-based representation for class hierarchies that makes it easy to represent class hierarchies on arrays of processors. Furthermore, we have developed and implemented two methods for mapping this set-based representation onto the processor space of a Connection Machine. These two representations, the Grid Representation and the Double Strand Representation, successively improve transitive closure reasoning in terms of speed and processor utilization. (2) Hydra allows fast retrieval and dynamic update of a large knowledge base. New fast update algorithms are formulated to dynamically insert new concepts or new relations into a knowledge base of thousands of nodes. (3) Hydra provides reasoning based on mixed hierarchical representations. We have designed representational tools and massively parallel reasoning algorithms to model reasoning in combined IS-A, Part-of, and Contained-in hierarchies. (4) Hydra's reasoning facilities have been successfully applied to the Medical Entities Dictionary, a large medical vocabulary of Columbia Presbyterian Medical Center. As a result of (1)-(3), Hydra is more general than many current special-purpose reasoners, faster than currently existing general-purpose reasoners, and its knowledge base can be updated dynamically.
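
    The Grid and Double Strand representations target the processor arrays of a Connection Machine; the sketch below is only a serial analogue of the underlying idea, namely keeping per-node ancestor sets so that a transitive-closure query is a single set lookup and an edge insertion propagates incrementally. The class names are illustrative assumptions, not drawn from Hydra itself.

    # Serial sketch of incremental transitive-closure maintenance over an
    # IS-A hierarchy: queries become constant-time set membership tests.
    from collections import defaultdict

    class TransitiveHierarchy:
        def __init__(self):
            self.ancestors = defaultdict(set)   # node -> all transitive ancestors
            self.children = defaultdict(set)    # node -> direct children

        def add_link(self, child, parent):
            # Incrementally insert "child IS-A parent" and update the closure.
            self.children[parent].add(child)
            self._propagate(child, {parent} | self.ancestors[parent])

        def _propagate(self, node, new_ancestors):
            added = new_ancestors - self.ancestors[node]
            if not added:
                return
            self.ancestors[node] |= added
            for c in self.children[node]:
                self._propagate(c, added)

        def is_a(self, node, ancestor):
            # Transitive-closure query: a single set lookup.
            return ancestor in self.ancestors[node]

    h = TransitiveHierarchy()
    h.add_link("dog", "mammal")
    h.add_link("mammal", "animal")
    print(h.is_a("dog", "animal"))   # True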

    Information resources management, 1984-1989: A bibliography with indexes

    This bibliography contains 768 annotated references to reports and journal articles entered into the NASA scientific and technical information database from 1984 to 1989.

    Proceedings of the Second Joint Technology Workshop on Neural Networks and Fuzzy Logic, volume 2

    Documented here are papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by NASA and the University of Texas, Houston. Topics addressed included adaptive systems, learning algorithms, network architectures, vision, robotics, neurobiological connections, speech recognition and synthesis, fuzzy set theory and applications, control and dynamics processing, space applications, fuzzy logic and neural network computers, approximate reasoning, and multiobject decision making.

    Ontology-based Access Control in Open Scenarios: Applications to Social Networks and the Cloud

    Thanks to the advent of the Internet, it is now possible to easily share vast amounts of electronic information and computer resources (which include hardware, computer services, etc.) in open distributed environments. These environments serve as a common platform for heterogeneous users (e.g., corporations, individuals) by hosting customized user applications and systems, providing ubiquitous access to the shared resources, and requiring less administrative effort; as a result, they enable users and companies to increase their productivity significantly. Unfortunately, sharing resources in open environments has significantly increased the privacy threats to users. Shared electronic data may be exploited by third parties, such as data brokers, which may aggregate, infer, and redistribute (sensitive) personal features, thus potentially impairing the privacy of individuals. A way to palliate this problem consists in controlling the access of users to the potentially sensitive resources. Specifically, access control management regulates access to the shared resources according to the credentials of the users, the type of resource, and the privacy preferences of the resource/data owners. The efficient management of access control is crucial in large and dynamic environments such as the ones described above. Moreover, in order to propose a feasible and scalable solution, we need to avoid the manual management of rules and constraints (on which most available solutions rely), since it constitutes a serious burden for users and administrators. Finally, access control management should be intuitive for end users, who usually lack technical expertise and may find access control mechanisms difficult to understand and rigid to apply due to their complex configuration settings.
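
    As a toy sketch of the decision just described, access can be granted when the requester's relationship to the resource owner is at least as close as the owner's privacy preference for that resource type demands. The relation labels, preference levels, and resource types below are illustrative assumptions, not the thesis's ontology-based mechanism.

    # Toy access-control decision: compare the requester's closeness to the
    # owner against the owner's per-resource privacy preference.
    def allowed(requester_relation, owner_preference):
        closeness = {"stranger": 0, "friend": 1, "family": 2, "owner": 3}
        required = {"public": 0, "friends": 1, "family": 2, "private": 3}
        return closeness[requester_relation] >= required[owner_preference]

    # The owner tags each resource type with a privacy preference.
    preferences = {"photo": "friends", "medical_record": "private"}

    print(allowed("friend", preferences["photo"]))            # True
    print(allowed("friend", preferences["medical_record"]))   # False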

    Automated Injection of Curated Knowledge Into Real-Time Clinical Systems: CDS Architecture for the 21st Century

    Clinical Decision Support (CDS) is primarily associated with alerts, reminders, order entry, rule-based invocation, diagnostic aids, and on-demand information retrieval. While valuable, these foci have been in production use for decades and do not provide a broader, interoperable means of plugging structured clinical knowledge into live electronic health record (EHR) ecosystems for the purpose of orchestrating the user experiences of patients and clinicians. To date, the gap between knowledge representation and user-facing EHR integration has been considered an "implementation concern" requiring unscalable manual human effort and governance coordination. Drafting a questionnaire engineered to meet the HL7 CDS Knowledge Artifact specification, for example, carries no reasonable expectation that it can be imported and deployed into a live system without significant burden. Dramatic reduction of the time and effort gap in the research and application cycle could be revolutionary. Doing so, however, requires both a floor-to-ceiling precoordination of functional boundaries in the knowledge management lifecycle and a formalization of the human processes by which this occurs. This research introduces ARTAKA (Architecture for Real-Time Application of Knowledge Artifacts) as a concrete floor-to-ceiling technological blueprint for both provider health IT (HIT) and vendor organizations to incrementally introduce value into existing systems dynamically. This is made possible by the service-ization of curated knowledge artifacts, which are then injected into a highly scalable backend infrastructure by automated orchestration through public marketplaces. Supplementary examples of client app integration are also provided. Compilation of knowledge into platform-specific form is left flexible, insofar as implementations comply with ARTAKA's Context Event Service (CES) communication and Health Services Platform (HSP) Marketplace service packaging standards. Towards the goal of interoperable human processes, ARTAKA's treatment of knowledge artifacts as a specialized form of software allows knowledge engineering to operate as a type of software engineering practice. Thus, nearly a century of software development processes, tools, policies, and lessons offer immediate benefit, in some cases with remarkable parity. Analyses of experimentation are provided, with guidelines on how selected aspects of software development life cycles (SDLCs) apply to knowledge artifact development in an ARTAKA environment. Portions of this culminating document have been further initiated with Standards Developing Organizations (SDOs) intended to ultimately produce normative standards, as have active relationships with other bodies. Doctoral Dissertation, Biomedical Informatics, 201
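
    Purely as an illustration of treating a curated knowledge artifact as deployable software, the sketch below packages a questionnaire as a versioned unit with a trigger event. The field names and the artifact shown are hypothetical and do not follow the HL7 CDS Knowledge Artifact, CES, or HSP Marketplace specifications.

    # Hypothetical sketch: a knowledge artifact as a versioned, machine-deployable
    # unit that a runtime could match against clinical context events.
    from dataclasses import dataclass, field

    @dataclass
    class KnowledgeArtifact:
        artifact_id: str
        version: str
        trigger_event: str            # context event that would invoke it
        payload: dict                 # e.g. a structured questionnaire definition
        metadata: dict = field(default_factory=dict)

        def matches(self, event_name: str) -> bool:
            # Would this artifact be invoked for the given context event?
            return self.trigger_event == event_name

    questionnaire = KnowledgeArtifact(
        artifact_id="fall-risk-screen",
        version="1.0.0",
        trigger_event="patient-admitted",
        payload={"questions": ["Has the patient fallen in the last 12 months?"]},
    )
    print(questionnaire.matches("patient-admitted"))  # True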

    Machine Learning

    Machine learning can be defined in various ways, but it broadly refers to a scientific domain concerned with the design and development of theoretical and implementation tools that allow building systems with some human-like intelligent behavior. More specifically, machine learning addresses the ability of such systems to improve automatically through experience.
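
    A minimal sketch of "improving automatically through experience": a trivial learner (an incrementally updated mean) whose prediction error shrinks as more examples arrive. The setup and numbers are arbitrary assumptions used only for illustration.

    # A one-parameter "learner" that improves with experience: its estimate of
    # an unknown quantity gets closer to the truth as it sees more noisy examples.
    import random

    random.seed(0)
    true_value = 4.2
    estimate, n = 0.0, 0

    for step in range(1, 501):
        observation = true_value + random.gauss(0, 1)   # one noisy experience
        n += 1
        estimate += (observation - estimate) / n        # incremental update
        if step in (1, 10, 100, 500):
            print(f"after {step:3d} examples: estimate={estimate:.3f}, "
                  f"error={abs(estimate - true_value):.3f}")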

    A cognitive model of fiction writing.

    Models of the writing process are used to design software tools for writers who work with computers. This thesis is concerned with the construction of a model of fiction writing. The first stage in this construction is to review existing models of writing. Models of writing used in software design and writing research include behavioural, cognitive and linguistic varieties. The arguments of this thesis are, firstly, that current models do not provide an adequate basis for designing software tools for fiction writers, and secondly, that research into writing is often based on questionable assumptions concerning language and linguistics, the interpretation of empirical research, and the development of cognitive models. It is argued that Saussure's linguistics provides an alternative basis for developing a model of fiction writing, and that Barthes' method of textual analysis provides insight into the ways in which readers and writers create meanings. The result of reviewing current models of writing is a basic model of writing consisting of a cycle of three activities: thinking, writing, and reading. The next stage is to develop this basic model into a model of fiction writing by using narratology, textual analysis, and cognitive psychology to identify the kinds of thinking processes that create fictional texts. Remembering and imagining events and scenes are identified as basic processes in fiction writing; in cognitive terms, events are verbal representations, while scenes are visual representations. Syntax is identified as another distinct object of thought, to which the processes of remembering and imagining also apply. Genette's notion of focus in his analysis of text types is used to describe the role of characters in the writer's imagination: focusing the imagination is a process in which a writer imagines she is someone else, and it is shown how this process applies to events, scenes, and syntax. It is argued that a writer's story memory influences his remembering and imagining; Todorov's work on symbolism is used to argue that interpretation plays the role in fiction writing of binding together these two processes. The role of naming in reading and its relation to problem solving is compared with its role in writing, and names or signifiers are added to the objects of thought in fiction writing. It is argued that problem solving in fiction writing is sometimes concerned with creating problems or mysteries for the reader, and it is shown how this process applies to events, scenes, signifiers, and syntax. All these findings are presented in the form of a cognitive model of fiction writing. The question of testing is discussed, and the use of the model in designing software tools is illustrated by the description of a hypertextual aid for fiction writers.
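
    As a schematic illustration only (an assumption about how a writing tool might encode the model, not the thesis's own formalisation), the components summarised above can be laid out as objects of thought, thinking processes, and the basic thinking-writing-reading cycle.

    # Hypothetical encoding of the model's components for a writing tool.
    BASIC_CYCLE = ["thinking", "writing", "reading"]
    OBJECTS_OF_THOUGHT = ["event", "scene", "syntax", "signifier"]

    # Which processes apply to which objects of thought, per the summary above.
    APPLIES_TO = {
        "remembering":    {"event", "scene", "syntax"},
        "imagining":      {"event", "scene", "syntax"},
        "focusing":       {"event", "scene", "syntax"},
        "problem-making": {"event", "scene", "signifier", "syntax"},
    }

    def processes_for(obj):
        # List the thinking processes that operate on a given object of thought.
        return [p for p, objs in APPLIES_TO.items() if obj in objs]

    print(processes_for("syntax"))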

    SLEMS: a knowledge based approach to soil loss estimation and modelling

    Thesis (M.Sc.E.), University of New Brunswick, 199

    Foundations of Multi-Paradigm Modelling for Cyber-Physical Systems

    This open access book coherently gathers well-founded information on the fundamentals of and formalisms for modelling cyber-physical systems (CPS). Highlighting the cross-disciplinary nature of CPS modelling, it also serves as a bridge for anyone entering CPS from related areas of computer science or engineering. Truly complex, engineered systems that integrate physical, software, and network aspects, known as cyber-physical systems, are now on the rise. However, there is no unifying theory, nor are there systematic design methods, techniques, or tools for these systems. Individual (mechanical, electrical, network, or software) engineering disciplines offer only partial solutions. A technique known as Multi-Paradigm Modelling has recently emerged, suggesting that every part and aspect of a system be modelled explicitly, at the most appropriate level(s) of abstraction, using the most appropriate modelling formalism(s), and that the results then be woven together to form a representation of the system. If properly applied, it enables, among other global aspects, performance analysis, exhaustive simulation, and verification. This book is the first systematic attempt to bring together these formalisms for anyone starting out in the field of CPS who seeks solid modelling foundations and a comprehensive introduction to the distinct existing techniques that are multi-paradigmatic. Though chiefly intended for master's and postgraduate-level students in computer science and engineering, it can also be used as a reference text for practitioners.
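
    A minimal illustration of the multi-paradigm idea (not taken from the book): a continuous plant modelled as a differential equation and a discrete two-state controller, woven together in one co-simulation loop. All parameters are arbitrary assumptions.

    # Two formalisms woven together: a continuous plant (room temperature as an
    # ODE, integrated with Euler steps) and a discrete controller (a thermostat).
    def simulate(t_end=60.0, dt=0.1):
        temp = 15.0                 # continuous state (deg C)
        heater_on = False           # discrete controller state
        ambient, heat_rate, loss = 10.0, 0.8, 0.05

        t = 0.0
        while t < t_end:
            # Discrete formalism: threshold-based mode switching.
            if temp < 19.0:
                heater_on = True
            elif temp > 21.0:
                heater_on = False
            # Continuous formalism: dT/dt = -loss*(T - ambient) + heating
            dtemp = -loss * (temp - ambient) + (heat_rate if heater_on else 0.0)
            temp += dtemp * dt
            t += dt
        return temp, heater_on

    print(simulate())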