
    Principles and Concepts of Agent-Based Modelling for Developing Geospatial Simulations

    The aim of this paper is to outline fundamental concepts and principles of the Agent-Based Modelling (ABM) paradigm, with particular reference to the development of geospatial simulations. The paper begins with a brief definition of modelling, followed by a classification of model types and a comment regarding a shift (in certain circumstances) towards modelling systems at the individual level. Automata approaches (e.g. Cellular Automata, CA, and ABM) have been particularly popular, with ABM moving to the fore. A definition of agents and agent-based models is given, identifying their advantages and disadvantages, especially in relation to geospatial modelling. The potential use of agent-based models is discussed, and how-to instructions for developing an agent-based model are provided. Types of simulation/modelling systems available for ABM are defined, supplemented with criteria to consider before choosing a particular system for a modelling endeavour. Information pertaining to a selection of simulation/modelling systems (Swarm, MASON, Repast, StarLogo, NetLogo, OBEUS, AgentSheets and AnyLogic) is provided, categorised by their licensing policy (open source, shareware/freeware and proprietary systems). The evaluation (i.e. verification, calibration, validation and analysis) of agent-based models and their output is examined, and noteworthy applications are discussed.

    Geographical Information Systems (GIS) are a particularly useful medium for representing model input and output of a geospatial nature. However, GIS are not well suited to dynamic modelling (e.g. ABM); in particular, problems of representing time and change within GIS are highlighted. Consequently, this paper explores the opportunity of linking (through coupling or integration/embedding) a GIS with a simulation/modelling system purposely built, and therefore better suited, to supporting the requirements of ABM. The paper concludes with a synthesis of the preceding discussion.
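    The how-to flavour of the paper invites a concrete illustration. Below is a minimal sketch of the ABM pattern in plain Python (the paper discusses purpose-built toolkits such as Swarm, Repast and NetLogo; this standalone sketch is only illustrative): autonomous agents with local state and a behaviour rule, stepped over discrete time in a gridded environment, producing spatial output that could be handed to a GIS.

```python
# A minimal, illustrative agent-based model: agents perform a bounded
# random walk on a square grid over discrete time steps.
import random

GRID = 20          # square environment, GRID x GRID cells

class Agent:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def step(self):
        """Behaviour rule: move to a random neighbouring cell, staying in bounds."""
        self.x = max(0, min(GRID - 1, self.x + random.choice((-1, 0, 1))))
        self.y = max(0, min(GRID - 1, self.y + random.choice((-1, 0, 1))))

agents = [Agent(random.randrange(GRID), random.randrange(GRID))
          for _ in range(50)]

for tick in range(100):          # the scheduler: discrete time steps
    for agent in agents:
        agent.step()

# Output with a geospatial flavour: cell occupancy counts, which could be
# exported as a GIS layer (the coupling/integration the paper discusses).
occupancy = {}
for a in agents:
    occupancy[(a.x, a.y)] = occupancy.get((a.x, a.y), 0) + 1
print(sorted(occupancy.items())[:5])
```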

    Automating Security Analysis: Symbolic Equivalence of Constraint Systems

    We consider security properties of cryptographic protocols that are either trace properties (such as confidentiality or authenticity) or equivalence properties (such as anonymity or strong secrecy). Infinite sets of possible traces are represented symbolically using deducibility constraints. We give a new algorithm that decides trace equivalence for traces represented by such constraints, in the case of signatures, symmetric encryption and asymmetric encryption. Our algorithm is implemented and performs well on typical benchmarks. It is the first implemented algorithm that decides symbolic trace equivalence.
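    To make the notion of deducibility concrete, here is a small, illustrative Python sketch of message deducibility under pairing and symmetric encryption. It is a simplified "analyse then synthesise" check with atomic keys, not the paper's constraint-solving algorithm, and the term encoding is an assumption made for illustration.

```python
# Terms are atoms (strings) or tuples: ("pair", a, b) for pairing,
# ("senc", m, k) for symmetric encryption of m under key k.

def analyze(knowledge):
    """Close a set of messages under projection and decryption."""
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(known):
            if isinstance(t, tuple) and t[0] == "pair":
                for part in t[1:]:                 # project both components
                    if part not in known:
                        known.add(part)
                        changed = True
            elif isinstance(t, tuple) and t[0] == "senc":
                msg, key = t[1], t[2]
                # Simplification: decrypt only when the key itself is known.
                if key in known and msg not in known:
                    known.add(msg)
                    changed = True
    return known

def deducible(target, knowledge):
    """Can `target` be built by pairing/encrypting from analysed knowledge?"""
    known = analyze(knowledge)
    def synth(t):
        if t in known:
            return True
        if isinstance(t, tuple) and t[0] in ("pair", "senc"):
            return all(synth(arg) for arg in t[1:])
        return False
    return synth(target)

# Example: the attacker sees an encrypted message and, later, its key.
knowledge = [("senc", ("pair", "secret", "nonce"), "k"), "k"]
print(deducible("secret", knowledge))   # True: decrypt, then project
print(deducible("other", knowledge))    # False
```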

    From calculations to reasoning: history, trends, and the potential of Computational Ethnography and Computational Social Anthropology

    The domains of 'computational social anthropology' and 'computational ethnography' refer to the computational processing or computational modelling of data for anthropological or ethnographic research. In this context, the article surveys the use of computational methods in the production and representation of knowledge. The ultimate goal of the study is to highlight the significance of modelling ethnographic data and anthropological knowledge by harnessing the potential of the semantic web. The first objective was to review the use of computational methods in anthropological research over the last 25 years; the second was to explore the potential of the semantic web, focusing on existing technologies for ontological representation. To these ends, the study examines the use of computers in anthropology for data processing and for data modelling in support of more effective processing. The survey reveals an ongoing transition from the instrumentalisation of computers as tools for calculation to the implementation of information-science methodologies for analysis, deduction, knowledge representation, and reasoning as part of the research process in social anthropology. Finally, it is highlighted that the ecosystem of the semantic web does not subserve quantification and metrics but introduces a new conceptualisation for addressing research questions in anthropology.
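    As a concrete illustration of the ontological representation the article points to, the sketch below encodes an ethnographic observation as RDF triples and queries it with SPARQL. The rdflib library and the ethno ontology names are assumptions chosen for illustration, not the article's actual stack.

```python
# Representing an ethnographic observation as semantic-web triples, so it
# can be queried and reasoned over rather than merely counted.
from rdflib import Graph, Literal, Namespace, RDF

ETHNO = Namespace("http://example.org/ethno#")   # hypothetical ontology
g = Graph()
g.bind("ethno", ETHNO)

# An interview event linking an informant, a described practice, and a note.
g.add((ETHNO.interview42, RDF.type, ETHNO.Interview))
g.add((ETHNO.interview42, ETHNO.informant, ETHNO.personA))
g.add((ETHNO.interview42, ETHNO.describesPractice, ETHNO.giftExchange))
g.add((ETHNO.interview42, ETHNO.fieldNote,
       Literal("Reciprocity framed as kinship duty")))

# SPARQL lets the researcher ask relational questions of the data.
q = """SELECT ?practice WHERE {
           ?i a ethno:Interview ;
              ethno:describesPractice ?practice .
       }"""
for row in g.query(q, initNs={"ethno": ETHNO}):
    print(row.practice)
```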

    Designing as Construction of Representations: A Dynamic Viewpoint in Cognitive Design Research

    This article presents a cognitively oriented viewpoint on design. It focuses on cognitive, dynamic aspects of real design, i.e., the actual cognitive activity implemented by designers during their work on professional design projects. Rather than conceiving designing as problem solving - Simon's symbolic information processing (SIP) approach - or as a reflective practice or some other form of situated activity - the situativity (SIT) approach - we consider that, from a cognitive viewpoint, designing is most appropriately characterised as a construction of representations. After a critical discussion of the SIP and SIT approaches to design, we present our viewpoint. This presentation concerns the evolving nature of representations regarding levels of abstraction and degrees of precision, the function of external representations, and specific qualities of representation in collective design. Designing is described at three levels: the organisation of the activity, its strategies, and its design-representation construction activities (different ways to generate, transform, and evaluate representations). Even if we adopt a "generic design" stance, we claim that design can take different forms depending on the nature of the artefact, and we propose some candidate dimensions that allow a distinction to be made between these forms of design. We discuss the potential specificity of HCI design, and the lack of cognitive design research concerned with the quality of design. We close our discussion of representational structures and activities with an outline of some directions regarding their functional linkages.

    A multimodal perspective on modality in the English language classroom

    This thesis is a study of an engage, study, activate (ESA) lesson teaching modals of present deduction. The lesson is taken from a published English language teaching course book and is typical of the way modal forms are presented to teach epistemic modality in many commercially produced English language teaching course books. I argue that, for cognitive, social, linguistic and procedural reasons, the linguistic forms and structures presented in the lesson are not straightforwardly transferred to the activate stage of the lesson. Using insights from spoken language corpora, I carry out a comparative analysis of the modal forms presented in the course book. I then explore the notion of 'context' and, drawing on systemic functional grammar, discuss how modal forms function in discourse to realise interpersonal relations. Moving my research to the English language classroom, I collect ethnographic classroom data and, using social semiotic multimodality as an analytical framework, explore learner interaction to uncover the communicative resources learners use to express epistemic modality in a discussion activity from the same lesson. My analysis reveals that the modal structures in the course book differ to some extent from those found in spoken language corpora. It shows that the course book offers no instruction on the interpersonal dimension of modality, and thus on how speakers use signals of modality to position themselves interpersonally vis-à-vis their interlocutors. The data collected from the English language class reveal that during the lesson learners communicate modality through modes of communication such as eye gaze, gesture and posture, in addition to spoken language. Again drawing on systemic functional grammar, I explain how these modes have the potential to express interpersonal meaning, and thus highlight that meaning is communicated through modal ensembles. Based on these findings, I propose a number of teaching strategies to raise awareness of the interpersonal function of modality in multimodal discourse, and argue for the use of language corpora to better inform teaching materials on selections of modality.

    A Framework for Genetic Algorithms Based on Hadoop

    Genetic Algorithms (GAs) are powerful metaheuristic techniques widely used in real-world applications. The sequential execution of GAs requires considerable computational power, in both time and resources. Nevertheless, GAs are naturally parallel, and access to a parallel platform such as the Cloud is easy and cheap. Apache Hadoop is one of the common services that can be used for parallel applications. However, using Hadoop to develop a parallel version of GAs is not simple without confronting its inner workings. Even though some sequential frameworks for GAs already exist, there is no framework supporting the development of GA applications that can be executed in parallel. This paper describes a framework for parallel GAs on the Hadoop platform, following the MapReduce paradigm. The main purpose of the framework is to allow the user to focus on the aspects of the GA that are specific to the problem to be addressed, while being sure that this task will be correctly executed on the Cloud with good performance. The framework has also been exploited to develop an application for the Feature Subset Selection problem. A preliminary analysis of the performance of the developed GA application, carried out using three datasets, shows very promising results.
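    The MapReduce decomposition the paper follows can be sketched in plain Python: the map phase evaluates individuals' fitness independently (the part Hadoop would parallelise), and the reduce phase performs selection, crossover and mutation. This is a conceptual sketch, not the paper's framework; the ones-counting fitness and the operator parameters are illustrative choices.

```python
# One GA generation expressed as a map step and a reduce step.
import random

def mapper(individual):
    """Map phase: evaluate one individual's fitness (here, ones-counting)."""
    return (individual, sum(individual))

def reducer(scored, pop_size):
    """Reduce phase: tournament selection, one-point crossover, mutation."""
    def pick():
        a, b = random.sample(scored, 2)
        return a[0] if a[1] >= b[1] else b[0]
    children = []
    while len(children) < pop_size:
        p1, p2 = pick(), pick()
        cut = random.randrange(1, len(p1))
        child = p1[:cut] + p2[cut:]                 # one-point crossover
        if random.random() < 0.05:                  # occasional bit-flip
            i = random.randrange(len(child))
            child[i] = 1 - child[i]
        children.append(child)
    return children

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(50):
    scored = [mapper(ind) for ind in population]    # parallel on Hadoop
    population = reducer(scored, len(population))
print(max(sum(ind) for ind in population))          # best fitness found
```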

    Data Profiling to Reveal Meaningful Structures for Standardization

    Today many organisations and enterprises use data from several sources, either for strategic decision making or for other business goals such as data integration. Data quality problems are always a hindrance to the effective and efficient utilization of such data. Tools have been built to clean and standardize data; however, there is a need to pre-process this data by applying techniques and processes from statistical semantics, NLP, and lexical analysis. Data profiling employs these techniques to discover and reveal commonalities and differences in the inherent data structures, present ideas for the creation of a unified data model, and provide metrics for data standardization and verification. The IBM WebSphere tool was used to pre-process datasets/records through the design and implementation of rule sets developed in QualityStage and tasks created in DataStage. The data profiling process generated a set of statistics (frequencies), token/phrase relationships (RFDs, GRFDs), and other findings in the dataset that provided an overall view of the data source's inherent properties and structures. Examining the data (identifying violations of the normal forms and other data commonalities) and collecting the desired information provided useful statistics for data standardization and verification by enabling disambiguation and classification of the data.
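    One profiling step the abstract describes, token and pattern frequency statistics, can be sketched as follows. This is an illustrative stand-in for the QualityStage/DataStage rule sets, with hypothetical address records; it shows how token and pattern frequencies hint at a data source's inherent structures.

```python
# Tokenize free-text records and build frequency statistics: frequent
# tokens suggest standard forms, recurring patterns reveal structures.
import re
from collections import Counter

records = [
    "12 Main St. Apt 4",
    "Main Street 12",
    "P.O. Box 99",
]

def tokenize(record):
    """Split a record into word/number tokens and classify each one."""
    tokens = re.findall(r"[A-Za-z]+|\d+", record)
    return [(t, "NUM" if t.isdigit() else "WORD") for t in tokens]

token_freq = Counter()
pattern_freq = Counter()
for rec in records:
    toks = tokenize(rec)
    token_freq.update(t.lower() for t, _ in toks)
    pattern_freq.update([" ".join(kind for _, kind in toks)])

print(token_freq.most_common(3))   # frequent tokens across the source
print(pattern_freq)                # recurring token patterns per record
```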