
    Selection of an Investment Project Using Simulation Modelling

    The article considers the selection of an investment project using simulation (imitation) modelling. The history of the development of simulation modelling is reviewed, and its theoretical and methodological foundations are presented. A simulation experiment is carried out using the programming facilities of MS Excel, and a comparative analysis of the modelling results is performed.
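    As a rough illustration of the kind of simulation experiment the abstract describes, the sketch below runs a Monte Carlo comparison of two investment projects in Python rather than MS Excel; the outlays, cash-flow distributions, and discount rate are invented for illustration and are not taken from the paper.

    # Monte Carlo sketch of investment-project comparison (hypothetical figures).
    import random
    import statistics

    def simulate_npv(outlay, mean_cf, sd_cf, years, rate, runs=10_000):
        """Simulate NPVs with normally distributed annual cash flows."""
        npvs = []
        for _ in range(runs):
            npv = -outlay
            for t in range(1, years + 1):
                npv += random.gauss(mean_cf, sd_cf) / (1 + rate) ** t
            npvs.append(npv)
        return npvs

    # Two hypothetical projects: A is riskier but has higher expected cash flows.
    project_a = simulate_npv(outlay=100_000, mean_cf=32_000, sd_cf=8_000, years=5, rate=0.10)
    project_b = simulate_npv(outlay=100_000, mean_cf=30_000, sd_cf=3_000, years=5, rate=0.10)

    for name, npvs in (("A", project_a), ("B", project_b)):
        print(f"Project {name}: mean NPV {statistics.mean(npvs):,.0f}, "
              f"std {statistics.stdev(npvs):,.0f}, "
              f"P(NPV < 0) {sum(n < 0 for n in npvs) / len(npvs):.2%}")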

    Weighted Goal Programming and Penalty Functions: Whole-farm Planning Approach Under Risk

    The paper presents a multiple-criteria approach to dealing with risk in farmers' decisions. The decision-making process is organised within a spreadsheet tool and is supported by deterministic and stochastic mathematical programming techniques applying an optimisation concept. The process is conceptually divided into seven autonomous, mutually linked modules. Besides the common maximisation of expected income through linear programming, the tool also enables reconstruction of the current production practice. Income risk modelling is based on portfolio theory resting on the expected value-variance (E,V) paradigm, so the modules dealing with risk are supported with quadratic and constrained quadratic programming. A non-parametric approach is used to estimate the decision maker's risk attitude, measured with the coefficient of risk aversion needed to maximise the certainty equivalent for the analysed farms. The multiple-criteria paradigm is based on a goal programming approach, and the contribution focuses on the benefits and possible drawbacks of supporting weighted goal programming with penalty functions. Application of the tool is illustrated with three dairy farm cases. The results confirm the advantage of using a penalty function system: it proves a useful approach for fine-tuning the model, enabling imitation of the farmer's behaviour, which, owing to his or her conservative nature, is not perfectly rational. The results also confirm the hypothesis that single-criterion decision making based on maximisation of expected income may be biased and does not necessarily lead to the best achievable option for the analysed farm.
    Keywords: goal programming, risk modelling, risk aversion, production planning, Risk and Uncertainty
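    The weighted goal programming with penalty functions mentioned above can be sketched as a small linear programme. The example below, using scipy.optimize.linprog, penalises the income shortfall in two segments with an increasing marginal weight; the two farm activities, goal targets, weights, and penalty interval are hypothetical, and the paper's quadratic (E,V) risk modules are not reproduced.

    # Weighted goal programming with a two-segment penalty on the income
    # shortfall (all coefficients are illustrative, not from the paper).
    from scipy.optimize import linprog

    # Variable order:
    # x1, x2          -> levels of two farm activities
    # s_inc1, s_inc2  -> income shortfall, first and second penalty segment
    # e_inc           -> income surplus (not penalised)
    # s_lab, e_lab    -> labour under-use / over-use (over-use penalised)
    weights = [0, 0, 1.0, 3.0, 0, 0, 2.0]   # steeper weight on the second shortfall segment

    # Goal equations with deviation variables:
    # 300*x1 + 200*x2 + s_inc1 + s_inc2 - e_inc = 80_000  (income target)
    #   4*x1 +   2*x2 + s_lab           - e_lab =    800  (labour availability)
    A_eq = [[300, 200, 1, 1, -1, 0, 0],
            [4,   2,   0, 0,  0, 1, -1]]
    b_eq = [80_000, 800]

    # Hard resource constraint: x1 + x2 <= 250 (e.g. available land).
    A_ub = [[1, 1, 0, 0, 0, 0, 0]]
    b_ub = [250]

    # The first shortfall segment is capped; beyond it the steeper weight applies.
    bounds = [(0, None)] * 7
    bounds[2] = (0, 5_000)

    res = linprog(weights, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print("activity levels:", res.x[:2], "penalised deviation:", res.fun)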

    An Agent-based Modelling Framework for Driving Policy Learning in Connected and Autonomous Vehicles

    Due to the complexity of the natural world, a programmer cannot foresee all possible situations a connected and autonomous vehicle (CAV) will face during its operation; hence, CAVs will need to learn to make decisions autonomously. Through sensing of its surroundings and information exchanged with other vehicles and road infrastructure, a CAV will have access to large amounts of useful data. While different control algorithms have been proposed for CAVs, the benefits brought about by the connectedness of autonomous vehicles to other vehicles and to the infrastructure, and its implications for policy learning, have not been investigated in the literature. This paper investigates a data-driven driving-policy learning framework through an agent-based modelling approach. The contributions of the paper are two-fold. First, a dynamic programming framework is proposed for in-vehicle policy learning with and without connectivity to neighboring vehicles; the simulation results indicate that while a CAV can learn to make autonomous decisions, vehicle-to-vehicle (V2V) communication of information improves this capability. Second, to overcome the limitations of sensing in a CAV, the paper proposes a novel concept for infrastructure-led policy learning and communication with autonomous vehicles: road-side infrastructure senses and captures successful vehicle maneuvers, learns an optimal policy from those temporal sequences, and communicates the policy to a CAV as it approaches the road-side unit. A deep imitation learning methodology is proposed to develop this infrastructure-led policy learning framework.
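    As a toy illustration of the in-vehicle dynamic programming idea, the sketch below runs value iteration on a two-state lane-keeping decision problem in Python; the states, transition probabilities, and rewards are invented for illustration and are not the MDP, simulator, or deep imitation learning components used in the paper.

    # Value iteration on a tiny, hypothetical driving MDP.
    # P[state][action] = list of (probability, next_state, reward) triples.
    P = {
        "clear": {
            "keep_lane":   [(0.8, "clear", 1.0), (0.2, "obstacle_ahead", 0.0)],
            "change_lane": [(0.9, "clear", 0.5), (0.1, "obstacle_ahead", 0.0)],
        },
        "obstacle_ahead": {
            "keep_lane":   [(0.6, "collision", -10.0), (0.4, "clear", 0.0)],
            "change_lane": [(0.85, "clear", 0.5), (0.15, "collision", -10.0)],
        },
        "collision": {},  # terminal state
    }
    GAMMA = 0.95

    def value_iteration(P, gamma=GAMMA, tol=1e-6):
        """Compute state values by repeated Bellman backups."""
        V = {s: 0.0 for s in P}
        while True:
            delta = 0.0
            for s, actions in P.items():
                if not actions:          # terminal: value stays 0
                    continue
                best = max(sum(p * (r + gamma * V[s2]) for p, s2, r in outs)
                           for outs in actions.values())
                delta, V[s] = max(delta, abs(best - V[s])), best
            if delta < tol:
                return V

    def greedy_policy(P, V, gamma=GAMMA):
        """Extract the action with the highest expected value in each state."""
        return {s: max(acts, key=lambda a: sum(p * (r + gamma * V[s2])
                                               for p, s2, r in acts[a]))
                for s, acts in P.items() if acts}

    V = value_iteration(P)
    print(greedy_policy(P, V))  # {'clear': 'keep_lane', 'obstacle_ahead': 'change_lane'}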