
    Co-construction of adaptive public policies using SmartGov

    Designing a public urban policy is a demanding process that requires both time and money, with no guarantee of its effectiveness. It involves knowledge about the purpose of urban design, the behavior of users, and mobility needs. We believe that in the near future, decision makers will have to react to and adapt public policies more frequently, based on the huge amount of available data and on feedback from both target users and stakeholders. In this paper, we propose a generic agent-based architecture to model and simulate urban policies, which could facilitate the co-design and assessment of public policies in a specific environment. Two agent-based models are coupled through a micro-macro dynamic loop, and they can be adapted either by the system, using reinforcement learning, or by the stakeholders, using simulation results. A generic formalism is elaborated to represent urban policies, which can be instantiated in a co-design approach between the policy maker and our system.

    An experiment is conducted on an urban mobility policy related to the configuration of a parking pricing system in a downtown area. The agents' behavior and environment are designed to be as realistic as possible, based on a real-world source of modeling. However, our architecture has been designed to be generic, exploiting infrastructure data from any city through available community data (OpenStreetMap).

    The parking pricing scenario shows that the system learns post-policy behaviors and can propose adjustments (e.g., which specific actions to apply, and when and how to apply them) to better meet the stakeholders' objectives (e.g., maximizing parking revenue). The policy maker can then choose to validate the proposed policies, or modify them for additional simulations.
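    To illustrate the kind of micro-macro loop with reinforcement-learning adjustment that the abstract describes, the following is a minimal sketch, not the authors' SmartGov implementation: driver agents (micro level) decide whether to park given a price, aggregate occupancy and revenue form the macro indicators, and a simple single-state Q-learning step adjusts the price toward the stakeholder objective of maximizing revenue. All names and parameters (Driver, PRICE_LEVELS, the budget distribution, learning rates) are illustrative assumptions.

```python
# Minimal illustrative sketch of a micro-macro loop with RL-adjusted parking
# pricing. This is NOT the SmartGov code; every name and parameter below is
# a hypothetical stand-in for the mechanism described in the abstract.
import random

PRICE_LEVELS = [1.0, 1.5, 2.0, 2.5, 3.0]   # candidate hourly prices (assumed)

class Driver:
    """Micro level: a driver parks only if the price fits its (random) budget."""
    def __init__(self):
        self.budget = random.uniform(0.5, 3.5)

    def parks(self, price):
        return price <= self.budget

def macro_step(drivers, price):
    """Macro level: aggregate individual decisions into occupancy and revenue."""
    parked = sum(d.parks(price) for d in drivers)
    occupancy = parked / len(drivers)
    revenue = parked * price
    return occupancy, revenue

def learn_price(n_drivers=500, episodes=2000, epsilon=0.1, alpha=0.1):
    """Single-state, epsilon-greedy Q-learning over discrete price levels,
    rewarding parking revenue (one of the stakeholder objectives)."""
    q = [0.0] * len(PRICE_LEVELS)
    for _ in range(episodes):
        drivers = [Driver() for _ in range(n_drivers)]
        if random.random() < epsilon:
            action = random.randrange(len(PRICE_LEVELS))      # explore
        else:
            action = max(range(len(PRICE_LEVELS)), key=lambda i: q[i])  # exploit
        _, revenue = macro_step(drivers, PRICE_LEVELS[action])
        q[action] += alpha * (revenue - q[action])            # incremental update
    best = max(range(len(PRICE_LEVELS)), key=lambda i: q[i])
    return PRICE_LEVELS[best], q

if __name__ == "__main__":
    price, q_values = learn_price()
    print(f"suggested hourly price: {price:.2f}")
    print("estimated revenue per price level:", [round(v, 1) for v in q_values])
```

    In the approach the abstract outlines, such a learned adjustment would be presented to the policy maker as a proposal; the policy maker can then validate it or modify the policy and rerun the simulation.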