
    Moving from Data-Constrained to Data-Enabled Research: Experiences and Challenges in Collecting, Validating and Analyzing Large-Scale e-Commerce Data

    Widespread e-commerce activity on the Internet has led to new opportunities to collect vast amounts of micro-level market and nonmarket data. In this paper we share our experiences in collecting, validating, storing and analyzing large Internet-based data sets in the areas of online auctions, music file sharing and online retailer pricing. We demonstrate how such data can advance knowledge by facilitating sharper and more extensive tests of existing theories and by offering observational underpinnings for the development of new theories. Just as experimental economics pushed the frontiers of economic thought by enabling the testing of numerous theories of economic behavior in the environment of a controlled laboratory, we believe that observing real-world agents participating in market and nonmarket activity on the Internet, often over extended periods of time, can lead us to develop and test a variety of new theories. Internet data gathering is not controlled experimentation: we cannot randomly assign participants to treatments or determine event orderings. It does, however, offer potentially large data sets with repeated observation of individual choices and actions, and automated data collection holds promise for greatly reduced cost per observation. Our methods rely on technological advances in automated data collection agents. Significant challenges remain in developing appropriate sampling techniques, integrating data from heterogeneous sources in a variety of formats, constructing generalizable processes and understanding legal constraints. Despite these challenges, the early evidence from those who have harvested and analyzed large amounts of e-commerce data points toward a significant leap in our ability to understand the functioning of electronic commerce.
    Published at http://dx.doi.org/10.1214/088342306000000231 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
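    The automated data collection agents mentioned above are, in essence, polling scrapers that repeatedly snapshot market activity. As a minimal sketch of the technique, assuming a hypothetical JSON endpoint and field names (id, price, bid_count) rather than the authors' actual tooling:

```python
# Minimal data-collection agent sketch: poll a listings endpoint and
# append timestamped observations to a local store. The endpoint URL,
# JSON field names, and polling interval are illustrative assumptions.
import json
import sqlite3
import time
import urllib.request

db = sqlite3.connect("auctions.db")
db.execute("""CREATE TABLE IF NOT EXISTS observations
              (item_id TEXT, price REAL, bids INTEGER, observed_at REAL)""")

def poll_once(endpoint: str) -> None:
    """Fetch one page of listings and record one observation per item."""
    with urllib.request.urlopen(endpoint, timeout=30) as resp:
        listings = json.load(resp)  # assumes a JSON array of listing objects
    rows = [(item["id"], item["price"], item["bid_count"], time.time())
            for item in listings]
    db.executemany("INSERT INTO observations VALUES (?, ?, ?, ?)", rows)
    db.commit()

# Repeated observation over time, as the paper emphasizes:
# while True:
#     poll_once("https://example.com/api/auctions")  # placeholder endpoint
#     time.sleep(3600)                               # hourly snapshot
```

    Timestamping every snapshot is what turns a scraper into a panel of repeated observations of individual choices, at a marginal cost per observation close to zero.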

    A Survey of Agent-Based Modeling Practices (January 1998 to July 2008)

    Agent-Based Modeling (ABM) began gaining popularity in the 1990s and represents a departure from more classical simulation approaches. This departure, its recent development and its increasing application by non-traditional simulation disciplines indicate the need to continuously assess the current state of ABM and identify opportunities for improvement. To begin to satisfy this need, we surveyed and collected data from 279 articles, drawn from 92 unique publication outlets, in which the authors had constructed and analyzed an agent-based model. From this large data set we establish the current practice of ABM in terms of year of publication, field of study, simulation software used, purpose of the simulation, acceptable validation criteria, validation techniques and completeness of the simulation description. Based on current practice we discuss six improvements needed to advance ABM as an analysis tool: the development of ABM-specific tools that are independent of software; the development of ABM as an independent discipline with a common language that extends across domains; the establishment of expectations for ABMs that match their intended purposes; the requirement of complete descriptions of the simulation so others can independently replicate the results; the requirement that all models be completely validated; and the development and application of statistical and non-statistical validation techniques specifically for ABM.
    Keywords: Agent-Based Modeling, Survey, Current Practices, Simulation Validation, Simulation Purpose
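    Two of the improvements the survey calls for, complete model descriptions and routine validation, can be illustrated even on a toy model. The following is a generic, deliberately minimal ABM skeleton in that spirit; it is an illustration of structure and of a basic validation check, not any model from the surveyed articles:

```python
# Generic ABM skeleton: agents, an interaction rule, a seeded run loop,
# and a minimal validation check on an aggregate invariant.
import random

class Agent:
    def __init__(self, wealth: float = 1.0):
        self.wealth = wealth

    def interact(self, other: "Agent") -> None:
        # Simple exchange rule: pass one unit of wealth if able.
        if self.wealth >= 1.0:
            self.wealth -= 1.0
            other.wealth += 1.0

def run(n_agents: int = 100, steps: int = 10_000, seed: int = 42) -> list:
    random.seed(seed)  # a fixed seed is part of a replicable description
    agents = [Agent() for _ in range(n_agents)]
    for _ in range(steps):
        a, b = random.sample(agents, 2)
        a.interact(b)
    return sorted(agent.wealth for agent in agents)

wealth = run()
# Minimal validation: the exchange rule must conserve total wealth.
assert abs(sum(wealth) - 100.0) < 1e-9
```

    Stating the interaction rule, parameters and seed this explicitly is precisely what enables the independent replication the survey finds lacking.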

    Exploring foundations for using simulations in IS research

    Simulation has been adopted in many disciplines as a means for understanding the behavior of a system by imitating it through an artificial object that exhibits nearly identical behavior. Although simulation approaches have been widely adopted for theory building in disciplines such as engineering, computer science, management, and the social sciences, their potential in the IS field is often overlooked. The aim of this paper is to understand how different simulation approaches are successfully used in IS research, thereby providing hypotheses from which methodological guidelines for subsequent studies can be derived. A survey of 46 pieces of IS research provides the grounding for defining a taxonomy of simulation approaches and for identifying possible application patterns linking simulation approaches to their theory contributions, research domains and information views.

    Exploring foundations for using simulations in IS research

    Simulation has been adopted in many disciplines as a means for understanding the behavior of a system by imitating it through an artificial object that exhibits nearly identical behavior. Although simulation approaches have been widely adopted for theory building in disciplines such as engineering, computer science, management, and the social sciences, their potential in the IS field is often overlooked. The aim of this paper is to understand how different simulation approaches are used in IS research, thereby providing insights and methodological recommendations for future studies. A literature review of simulation studies published in top-tier IS journals leads to the definition of three classes of simulations, namely the self-organizing, the elementary, and the situated. A set of stylized facts is identified for characterizing the ways in which the premise, the inference, and the contribution are presented in IS simulation studies. As a result, this study provides guidance to future simulation researchers in designing and presenting their findings.

    The Semantic Grid: A future e-Science infrastructure

    e-Science offers a promising vision of how computer and communication technology can support and enhance the scientific process. It does this by enabling scientists to generate, analyse, share and discuss their insights, experiments and results in an effective manner. The underlying computer infrastructure that provides these facilities is commonly referred to as the Grid. At this time, a number of grid applications are being developed, and a whole raft of computer technologies provides fragments of the necessary functionality. However, there is currently a major gap between these endeavours and the vision of e-Science, in which there is a high degree of easy-to-use and seamless automation and in which flexible collaborations and computations take place on a global scale. To bridge this practice–aspiration divide, this paper presents a research agenda whose aim is to move from the current state of the art in e-Science infrastructure to the future infrastructure that is needed to support the full richness of the e-Science vision. Here the future e-Science research infrastructure is termed the Semantic Grid (the relationship of the Semantic Grid to the Grid is meant to mirror that of the Semantic Web to the Web). In particular, we present a conceptual architecture for the Semantic Grid. This architecture adopts a service-oriented perspective in which distinct stakeholders in the scientific process, represented as software agents, provide services to one another, under various service level agreements, in various forms of marketplace. We then focus predominantly on the issues concerned with the way knowledge is acquired and used in such environments, since we believe this is the key differentiator between current grid endeavours and those envisioned for the Semantic Grid.
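    The service-oriented perspective can be made concrete with a toy marketplace in which agents advertise services under simple service-level terms and a requester is matched to the cheapest conforming provider. This is a sketch of the general idea only; the class names and SLA fields below are illustrative assumptions, not the paper's architecture:

```python
# Toy service marketplace: providers advertise offers carrying a
# service-level term (maximum latency) and a price; matching picks the
# cheapest offer that satisfies the requested level. Names illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Offer:
    provider: str
    service: str
    max_latency_s: float  # stand-in for a negotiated service-level term
    price: float

class Marketplace:
    def __init__(self) -> None:
        self.offers: list = []

    def advertise(self, offer: Offer) -> None:
        self.offers.append(offer)

    def match(self, service: str, latency_budget_s: float) -> Optional[Offer]:
        # Cheapest offer that meets the requested service level, if any.
        candidates = [o for o in self.offers
                      if o.service == service and o.max_latency_s <= latency_budget_s]
        return min(candidates, key=lambda o: o.price, default=None)

market = Marketplace()
market.advertise(Offer("analysis-agent", "sequence-alignment", 60.0, 2.5))
market.advertise(Offer("archive-agent", "sequence-alignment", 10.0, 4.0))
print(market.match("sequence-alignment", latency_budget_s=30.0))  # archive-agent
```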

    Using Qualitative Evidence to Enhance an Agent-Based Modelling System for Studying Land Use Change

    This paper describes and evaluates a process of using qualitative field research data to extend the pre-existing FEARLUS agent-based modelling system by enriching its ontological capabilities, but without a deep level of stakeholder involvement in designing the model itself. Use of qualitative research in agent-based models typically involves protracted and expensive interaction with stakeholders; consequently, gathering the valuable insights that qualitative methods could provide is not always feasible. At the same time, many researchers advocate building completely new models for each scenario to be studied, forgoing one of the supposed advantages of the object-oriented programming languages in which many such systems are built: code reuse. The process described here uses coded interviews to identify themes suggesting changes to an existing model, the assumptions behind which are then checked with respondents. We find this increases the confidence with which the extended model can be applied to the case study, with a relatively small commitment required on the part of respondents.
    Keywords: Agent-Based Modelling, Land Use/Cover Change, Qualitative Research, Interdisciplinary Research

    Reputation systems and recruitment in online labor markets: insights from an agent-based model

    Online labor markets (freelance marketplaces where digital labor is distributed via a web-based platform) commonly use reputation systems to overcome uncertainties in the hiring process that can arise from a lack of objective information about employees' abilities. Research shows, however, that reputation systems tend to create winner-takes-all dynamics, in which differences in candidates' reputations become disconnected from differences in their objective abilities. In this paper, we use an empirically validated agent-based computational model to investigate the extent to which reputation systems can create segmented hiring patterns that are biased toward freelancers with good reputations. We explore how jobs and earnings become distributed on a stylized platform under different contextual conditions of information asymmetry. Our results suggest that information asymmetry influences the extent to which reputation systems may lead to inequality between freelancers, but, contrary to our expectations, lower levels of information asymmetry can facilitate higher inequality in outcomes.
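    The winner-takes-all dynamic the model investigates can be reproduced in a few lines: if employers observe ratings but not ability, early rating gaps compound into segmented hiring. The sketch below assumes extreme information asymmetry and a simple choice rule; it illustrates the mechanism, not the authors' validated model:

```python
# Reputation feedback sketch: employers hire the top-rated candidate,
# ratings accumulate with successful jobs, and job counts concentrate.
import random

def simulate(n_freelancers: int = 50, n_jobs: int = 2000, seed: int = 7) -> list:
    rng = random.Random(seed)
    ability = [rng.random() for _ in range(n_freelancers)]  # hidden from employers
    rating = [0.0] * n_freelancers                          # public reputation
    jobs_won = [0] * n_freelancers
    for _ in range(n_jobs):
        # High information asymmetry: employers see ratings only and
        # choose among the top-rated candidates, ties broken at random.
        best = max(rating)
        chosen = rng.choice([i for i, r in enumerate(rating) if r == best])
        jobs_won[chosen] += 1
        # Job outcome depends on true ability; success raises the rating.
        if rng.random() < ability[chosen]:
            rating[chosen] += 1.0
    return jobs_won

won = simulate()
print("share of jobs won by the top 10% of freelancers:",
      round(sum(sorted(won)[-5:]) / sum(won), 2))
```

    Varying how much of the ability signal employers observe directly would be the natural lever for exploring the different information-asymmetry conditions the paper describes.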

    Artificial Markets under a Complexity Perspective

    The focus of this study is to build, from the ‘bottom up’, a market populated with artificially intelligent adaptive agents, based on the institutional arrangement of the Colombian Foreign Exchange Market (1994-1999), in order to determine the simple agent designs, rules and interactions that are sufficient to create interesting behaviours at the macroscopic level: emergent patterns that replicate the properties of the time series from the case study. Tools from artificial intelligence research, such as genetic algorithms and fuzzy logic, form the basis of the agents' mental models, which in turn are used for forecasting, quoting and learning in a double auction market. Sets of fuzzy logic rules yield adequate, approximately continuous risk and utility preferences without the need to fix their mathematical form ex ante. The artificial market generates the statistical properties of financial time series, as well as some additional non-linearity linked to the existence of a crawling band. Moreover, the behaviour of the simulated exchange rate is consistent with currency band theory. Agents' learning favours forecasting rules based on regulatory signals over rules based on fundamental information. Intra-day volatility is strongly linked to the rate of arrival and size of real-sector trades, and is also a function of the frequency of learning and search specialisation: when a moderately low frequency of learning is used, volatility increases.
    Keywords: adaptive agents, artificial markets, constrained generating procedures, fuzzy logic, genetic algorithms. JEL classification: G1; G12; G39.
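    The appeal of fuzzy rules here is that preferences and quoting behaviour need no fixed functional form: linguistic rules with overlapping membership functions are combined by weighted averaging. As a toy fragment in that spirit (the linguistic terms, membership shapes and rule outputs are illustrative assumptions, not the study's rule base):

```python
# Fuzzy quoting fragment: triangular membership functions fire a small
# rule set mapping a band-position signal to a quote adjustment, which
# is defuzzified by a weighted average over the fired rules.
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership: 0 at a and c, peaking at 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def quote_adjustment(band_position: float) -> float:
    """Map position within the crawling band (0 = lower edge, 1 = upper edge)."""
    rules = [
        (tri(band_position, -0.1, 0.0, 0.4), -0.5),  # near lower edge: quote down
        (tri(band_position,  0.2, 0.5, 0.8),  0.0),  # mid-band: hold
        (tri(band_position,  0.6, 1.0, 1.1),  0.5),  # near upper edge: quote up
    ]
    fired = sum(weight for weight, _ in rules)
    return sum(w * out for w, out in rules) / fired if fired else 0.0

print(quote_adjustment(0.1))  # close to the lower band edge: negative shift
```

    In the study itself such rule sets sit inside each agent's mental model and are tuned by a genetic algorithm rather than hand-written, which is what lets risk and utility preferences emerge without an ex ante functional form.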