
    Principles and Concepts of Agent-Based Modelling for Developing Geospatial Simulations

    The aim of this paper is to outline fundamental concepts and principles of the Agent-Based Modelling (ABM) paradigm, with particular reference to the development of geospatial simulations. The paper begins with a brief definition of modelling, followed by a classification of model types, and a comment regarding a shift (in certain circumstances) towards modelling systems at the individual level. In particular, automata approaches (e.g. Cellular Automata, CA, and ABM) have been particularly popular, with ABM moving to the fore. A definition of agents and agent-based models is given, identifying their advantages and disadvantages, especially in relation to geospatial modelling. The potential uses of agent-based models are discussed, and how-to instructions for developing an agent-based model are provided. Types of simulation / modelling systems available for ABM are defined, supplemented with criteria to consider before choosing a particular system for a modelling endeavour. Information pertaining to a selection of simulation / modelling systems (Swarm, MASON, Repast, StarLogo, NetLogo, OBEUS, AgentSheets and AnyLogic) is provided, categorised by their licensing policy (open source, shareware / freeware and proprietary systems). The evaluation (i.e. verification, calibration, validation and analysis) of agent-based models and their output is examined, and noteworthy applications are discussed. Geographical Information Systems (GIS) are a particularly useful medium for representing model input and output of a geospatial nature. However, GIS are not well suited to dynamic modelling (e.g. ABM); in particular, problems of representing time and change within GIS are highlighted. Consequently, this paper explores the opportunity of linking (through coupling or integration / embedding) a GIS with a purpose-built simulation / modelling system that is better suited to supporting the requirements of ABM. This paper concludes with a synthesis of the discussion that has preceded.
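    To make the ABM paradigm concrete, the following is a minimal, illustrative sketch (not taken from the paper) of an agent-based model in Python: agents occupy cells of a grid and perform a random walk each tick, the kind of individual-level update loop that systems such as NetLogo or Repast manage for the modeller. All class and parameter names are hypothetical.

```python
import random

class Agent:
    """A minimal agent: a grid position plus a simple behavioural rule."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def step(self, width, height):
        # Random walk: move one cell in a random direction, staying inside the grid.
        dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        self.x = max(0, min(width - 1, self.x + dx))
        self.y = max(0, min(height - 1, self.y + dy))

class World:
    """A discrete grid environment; one tick advances every agent once."""
    def __init__(self, width, height, n_agents):
        self.width, self.height = width, height
        self.agents = [Agent(random.randrange(width), random.randrange(height))
                       for _ in range(n_agents)]

    def tick(self):
        for agent in self.agents:
            agent.step(self.width, self.height)

world = World(width=50, height=50, n_agents=100)
for _ in range(10):   # run ten simulation steps
    world.tick()
```

    In a geospatial setting, the grid cells would typically be loaded from GIS layers and the agents' positions written back out for visualisation, which is where the coupling or embedding discussed above comes in.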

    MaxHopCount: A New Drop Policy to Optimize Messages Delivery Rate in Delay Tolerant Networks

    Communication has become a necessity, not only between every point on the Earth but also beyond it: areas with difficult topography, highlands, underwater regions, and even spacecraft around other planets. However, the classic wired Internet cannot be deployed in such areas; hence, researchers have turned to wireless networks. The big challenge for wireless networking nowadays is keeping nodes connected under difficult conditions such as intermittent connectivity, power failure, and the many obstacles faced by interplanetary networks. In these challenging circumstances a new networking model has arisen: Delay Tolerant Networking, which is based on the Store-Carry-and-Forward mechanism. A node may keep a message in its buffer for long periods of time, until a delivery or forwarding opportunity arises, and then transmit it to other nodes. One of the big issues confronting this mechanism is congestion of node buffers, due to the large number of messages and the limited buffer size. Researchers have therefore proposed buffer management algorithms, called drop policies, to deal with the buffer overload problem. In the present work we propose a new drop policy, which we have compared to existing policies under different conditions and with different routing protocols; it consistently shows good results in terms of the number of delivered messages, network overhead, and average latency.
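    The abstract names the MaxHopCount policy but does not spell out its mechanics; a plausible reading is that, when the buffer overflows, the message that has already traversed the most hops is dropped first, on the assumption that it has had the most forwarding opportunities. The sketch below illustrates that idea in Python; the `Message` and `Buffer` types and the size accounting are assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Message:
    msg_id: str
    size: int
    hop_count: int = 0   # incremented each time the message is forwarded to another node

class Buffer:
    """Store-Carry-and-Forward buffer with a MaxHopCount-style drop policy (sketch)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.messages = []

    def used(self):
        return sum(m.size for m in self.messages)

    def insert(self, incoming):
        # While there is not enough room, drop the buffered message with the
        # highest hop count; it has presumably had the most delivery chances.
        while self.messages and self.used() + incoming.size > self.capacity:
            victim = max(self.messages, key=lambda m: m.hop_count)
            self.messages.remove(victim)
        if self.used() + incoming.size <= self.capacity:
            self.messages.append(incoming)
```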

    Agent Based Modeling in Land-Use and Land-Cover Change Studies

    Agent based models (ABM) for land use and cover change (LUCC) hold the promise of providing new insight into the processes and patterns of human and biophysical interactions in ways that have never been explored. Advances in computer technology make it possible to run an almost infinite number of simulations with multiple heterogeneously shaped actors that reciprocally interact via vertical and horizontal power lines on various levels. Based upon an extensive literature review, the basic components for such exercises are explored and discussed. This resulted in a systematic representation of these components, consisting of: (1) spatial static input data, (2) actor and actor-group static input data, (3) spatial dynamic input data, (4) actor and actor-group dynamic input data, (5) the model and its governing rules, (6) spatial static output, (7) actor and actor-group static output, (8) dynamic output of actor behaviour changes, (9) dynamic output of actor-group behaviour changes, (10) dynamic output of spatial patterns, and (11) dynamic output of temporal patterns. This representation proves to be epistemologically useful in the analysis of the relationships between the ABM LUCC components. In this paper, the representation is also used to enumerate the strengths and limitations of agent based modelling in LUCC.
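    Purely as an illustration of how the eleven components listed above might be organised in code (this structure is not part of the paper), a single container can hold the static and dynamic, spatial and actor-related inputs and outputs:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class AbmLuccComponents:
    """Illustrative grouping of the eleven ABM-LUCC components enumerated above."""
    spatial_static_input: Dict[str, Any] = field(default_factory=dict)                 # (1)
    actor_static_input: Dict[str, Any] = field(default_factory=dict)                   # (2)
    spatial_dynamic_input: List[Dict[str, Any]] = field(default_factory=list)          # (3)
    actor_dynamic_input: List[Dict[str, Any]] = field(default_factory=list)            # (4)
    behaviour_rules: List[str] = field(default_factory=list)                           # (5)
    spatial_static_output: Dict[str, Any] = field(default_factory=dict)                # (6)
    actor_static_output: Dict[str, Any] = field(default_factory=dict)                  # (7)
    actor_behaviour_changes: List[Dict[str, Any]] = field(default_factory=list)        # (8)
    actor_group_behaviour_changes: List[Dict[str, Any]] = field(default_factory=list)  # (9)
    spatial_patterns: List[Dict[str, Any]] = field(default_factory=list)               # (10)
    temporal_patterns: List[Dict[str, Any]] = field(default_factory=list)              # (11)
```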

    Semantic Blockchain to Improve Scalability in the Internet of Things

    Generally scarce computational and memory resources are a well-known problem for the IoT, whose intrinsic volatility makes complex applications unfeasible. Noteworthy efforts at overcoming this unpredictability (particularly at large scale) are those integrating Knowledge Representation technologies to build the so-called Semantic Web of Things (SWoT). In spite of the advanced discovery features this allows, transactions in the SWoT still suffer from the lack of viable trust management strategies. Given its intrinsic characteristics, blockchain technology appears interesting from this perspective: a semantic resource/service discovery layer built upon a basic blockchain infrastructure gains consensus-based validation. This paper proposes a novel Service-Oriented Architecture (SOA) based on a semantic blockchain for registration, discovery, selection and payment. Such operations are implemented as smart contracts, allowing distributed execution and trust. Early experiments reported here assess the sustainability of the proposal.
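    As a rough sketch of the registration, discovery and selection operations described above (written as plain Python rather than an actual smart contract, with payment and consensus omitted), a semantic registry could look like this; all names and the naive term-matching are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ServiceRecord:
    provider: str
    description: str   # semantic annotation, e.g. ontology terms as text
    price: int

class ServiceRegistry:
    """Sketch of register/discover/select; on a blockchain these would be smart-contract calls."""
    def __init__(self):
        self.services: Dict[str, ServiceRecord] = {}

    def register(self, service_id: str, record: ServiceRecord) -> None:
        self.services[service_id] = record

    def discover(self, required_terms: List[str]) -> List[str]:
        # Naive semantic matching: keep services whose description mentions every required term.
        return [sid for sid, rec in self.services.items()
                if all(term in rec.description for term in required_terms)]

    def select(self, candidates: List[str]) -> str:
        # Trivial selection rule: pick the cheapest matching service.
        return min(candidates, key=lambda sid: self.services[sid].price)
```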

    Web-based strategies in the manufacturing industry

    The explosive growth of Internet-based architectures is allowing efficient access to information resources over geographically dispersed areas. This fact is exerting a major influence on current manufacturing practices. Business activities involving customers, partners, employees and suppliers are being rapidly and efficiently integrated through networked information management environments. Therefore, efforts are required to take advantage of distributed infrastructures that can satisfy information integration and collaborative work strategies in corporate environments. In this research, Internet-based distributed solutions focused on the manufacturing industry are proposed. Three different systems have been developed for the tooling sector, specifically for the company Seco Tools UK Ltd (industrial collaborator). They are summarised as follows. SELTOOL is a Web-based open tool selection system involving the analysis of technical criteria to establish appropriate selection of inserts, toolholders and cutting data for turning, threading and grooving operations. It has been oriented to worldwide Seco customers. SELTOOL provides an interactive, cross-linked way of searching for tooling parameters, rather than the conventional representation schemes provided by catalogues. Mechanisms were developed to filter, convert and migrate data from different formats to the database (SQL-based) used by SELTOOL. TTS (Tool Trials System) is a Web-based system developed by the author and two other researchers to support Seco sales engineers and technical staff, who perform tooling trials in geographically dispersed machining centres and benefit from sharing the data and results generated by these tests. Through TTS, tooling engineers (authorised users) can submit and retrieve highly specific technical tooling data for both milling and turning operations. Moreover, tooling engineers can avoid executing new tool trials when another engineer has previously carried out the same trials in a physically distant place, by consulting the recorded results. The system incorporates encrypted security features suitable for restricted use on the World Wide Web. An urgent need exists for tools that make sense of raw data, extracting useful knowledge from the increasingly large collections of data now being constructed and made available through networked information environments. This explosive growth in the availability of information is overwhelming the capability of traditional information management systems to provide efficient ways of detecting anomalies and significant patterns in large sets of data. Inexorably, the tooling industry is generating valuable experimental data; it is a promising and largely unexplored sector regarding the application of knowledge-capturing systems. Hence, to address this issue, a knowledge discovery system called DISKOVER was developed. DISKOVER is an integrated Java application consisting of five data mining modules that can be operated through the Internet. Kluster and Q-Fast are two of these modules, entirely developed by the author. Fuzzy-K has been developed by the author in collaboration with another research student in the group at Durham. The final two modules (R-Set and MQG) have been developed by another member of the Durham group. To develop Kluster, a complete clustering methodology was proposed. Kluster is a clustering application able to combine the analysis of quantitative as well as categorical data (conceptual clustering) to establish data classification processes. This module incorporates two original contributions: consistent indicators to measure the quality of the final classification, and the application of optimisation methods to the final groups obtained. Kluster gives users the possibility of introducing case studies to generate cutting parameters for particular input requirements. Fuzzy-K is an application that has the advantages of hierarchical clustering while applying fuzzy membership functions to support the generation of similarity measures. The implementation of fuzzy membership functions helped to optimise the grouping of categorical data containing missing or imprecise values. As the tooling database is accessed through the Internet, which is a relatively slow access platform, it was decided to rely on faster information retrieval mechanisms. Q-Fast is an SQL-based exploratory data analysis (EDA) application implemented for this purpose.
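    The abstract does not give Kluster's actual algorithm, but a common way to combine quantitative and categorical attributes in clustering is a Gower-style mixed distance: normalised absolute differences for numeric fields and simple mismatch for categorical ones. The sketch below illustrates that general idea; the field names and value ranges are hypothetical tooling attributes, not Seco data.

```python
def mixed_distance(a, b, numeric_keys, categorical_keys, ranges):
    """Gower-style distance over mixed numeric/categorical records (illustrative only).
    `ranges` maps each numeric key to its observed value range, so numeric
    differences are normalised to [0, 1] before being averaged with mismatches."""
    total, count = 0.0, 0
    for key in numeric_keys:
        total += abs(a[key] - b[key]) / ranges[key]
        count += 1
    for key in categorical_keys:
        total += 0.0 if a[key] == b[key] else 1.0
        count += 1
    return total / count

# Two hypothetical turning-operation records.
rec1 = {"cutting_speed": 180, "feed": 0.25, "material": "steel"}
rec2 = {"cutting_speed": 220, "feed": 0.30, "material": "cast iron"}
d = mixed_distance(rec1, rec2,
                   numeric_keys=["cutting_speed", "feed"],
                   categorical_keys=["material"],
                   ranges={"cutting_speed": 300.0, "feed": 1.0})
```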

    Science and Ideology in Economic, Political, and Social Thought

    This paper has two sources. One is my own research in three broad areas: business cycles, economic measurement and social choice. In all of these fields I attempted to apply the basic precepts of the scientific method as it is understood in the natural sciences. I found that my effort at using natural science methods in economics was met with little understanding and often considerable hostility. I found economics to be driven less by common sense and empirical evidence than by various ideologies that exhibited either a political or a methodological bias, or both. This brings me to the second source: several books have appeared recently that describe in historical terms the ideological forces that have shaped either the direct areas in which I worked or a broader background. These books taught me that the ideological forces in the social sciences are even stronger than I had imagined on the basis of my own experiences. The scientific method is the antipode to ideology. I feel that the scientific work I have done on specific, long-standing and fundamental problems in economics and political science has given me additional insights into the destructive role of ideology, beyond the history-of-thought orientation of the works I will be discussing.

    Intelligent Management of Virtualised Computer Based Workloads and Systems

    Managing the complexity within virtualised IT infrastructure platforms is a common problem for many organisations today. Computer systems are now often highly consolidated into a relatively small physical footprint compared with the decades prior to the late 2000s, so much thought, planning and control is necessary to operate such systems effectively within the enterprise computing space. With the development of private, hybrid and public cloud utility computing this has become even more relevant. This work examines how such cloud systems use virtualisation technology and embedded software to leverage these advantages, and it takes the fresh approach of developing an intelligent decision engine (expert system). Its aim is to help reduce the complexity of managing virtualised computer-based platforms through tight integration and high levels of automation that minimise human inputs and errors and enforce standards and consistency, in order to achieve better management and control. The thesis investigates whether an expert system known as the Intelligent Decision Engine (IDE) could aid the management of virtualised computer-based platforms. Through a series of mixed quantitative and qualitative experiments in the areas of research, the initial findings and evaluation are presented in detail, using repeatable and observable processes, with detailed analysis of the recorded outputs. The results of the investigation establish the advantages of using the IDE (expert system) to achieve the goal of reducing the complexity of managing virtualised computer-based platforms. In each detailed area examined, it is demonstrated how a global management approach, in combination with VM provisioning, migration, failover, and system resource controls, can create a powerful autonomous system.
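    The abstract does not describe the IDE's internals, but an expert system of this kind is typically a set of condition-action rules evaluated against platform metrics. The following is a deliberately simple, hypothetical sketch of that pattern; the rule thresholds, host fields and proposed actions are illustrative, not the actual IDE.

```python
# Each rule inspects one host's metrics and may propose a management action.

def rule_high_cpu(host):
    if host["cpu_util"] > 0.85:
        return f"migrate one VM away from {host['name']}"
    return None

def rule_low_memory(host):
    if host["free_mem_gb"] < 2:
        return f"provision more memory or rebalance VMs on {host['name']}"
    return None

RULES = [rule_high_cpu, rule_low_memory]

def decide(hosts):
    """Apply every rule to every host and collect the proposed actions."""
    actions = []
    for host in hosts:
        for rule in RULES:
            action = rule(host)
            if action:
                actions.append(action)
    return actions

print(decide([{"name": "host-01", "cpu_util": 0.92, "free_mem_gb": 8},
              {"name": "host-02", "cpu_util": 0.40, "free_mem_gb": 1}]))
```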

    Design Methodology for Self-organized Mobile Networks Based

    The methodology proposed in this article enables the systematic design of routing algorithms based on biclustering schemes, making it possible to respond with timely techniques, with clustering heuristics proposed by a researcher, and with a routing approach focused on the choice of clusterhead nodes. This process uses heuristics aimed at improving the different communication costs within surface groups called biclusters. The methodology brings together a variety of clustering techniques and heuristics that have been addressed in routing algorithms, although not all possible alternatives and their different assessments have been explored. Therefore, design research on routing algorithms based on biclustering schemes, guided by this methodology, will allow new concepts of evolutionary routing, together with the ability to adapt to the topological changes that occur in self-organized data networks.
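    The article does not specify the clusterhead heuristic itself, so the sketch below shows one generic possibility only: a greedy choice that repeatedly picks the uncovered node with the most uncovered neighbours, a common starting point for clusterhead selection in self-organized networks. The topology and function names are illustrative.

```python
def choose_clusterheads(adjacency):
    """Greedy clusterhead selection (illustrative, not the article's method):
    repeatedly pick the uncovered node covering the most uncovered neighbours."""
    uncovered = set(adjacency)
    heads = []
    while uncovered:
        head = max(uncovered, key=lambda n: len(set(adjacency[n]) & uncovered))
        heads.append(head)
        uncovered -= {head, *adjacency[head]}
    return heads

# Hypothetical topology: node -> list of neighbouring nodes.
topology = {"a": ["b", "c"], "b": ["a", "c", "d"], "c": ["a", "b"],
            "d": ["b", "e"], "e": ["d"]}
print(choose_clusterheads(topology))   # a small set of heads covering every node
```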