
    Quantitative Genetics and Functional-Structural Plant Growth Models: Simulation of Quantitative Trait Loci Detection for Model Parameters and Application to Potential Yield Optimization

    Background and Aims: Prediction of phenotypic traits from new genotypes under untested environmental conditions is crucial for simulating breeding strategies to improve target traits. Although the plant response to environmental stresses is characterized by both architectural and functional plasticity, recent attempts to integrate biological knowledge into genetics models have mainly concerned specific physiological processes or crop models without architecture, and may therefore prove limited when studying genotype × environment interactions. Consequently, this paper presents a simulation study introducing genetics into a functional-structural growth model, which gives access to more fundamental traits for quantitative trait loci (QTL) detection and thus to promising tools for yield optimization. Methods: The GreenLab model was selected as a reasonable choice to link growth model parameters to QTL. Virtual genes and virtual chromosomes were defined to build a simple genetic model that drove the settings of the species-specific parameters of the model. The QTL Cartographer software was used to study QTL detection of simulated plant traits. A genetic algorithm was implemented to define the ideotype for yield maximization based on the model parameters and the associated allelic combination. Key Results and Conclusions: By keeping the environmental factors constant and using a virtual population with a large number of individuals generated by a Mendelian genetic model, results for an ideal case could be simulated. Virtual QTL detection was compared for phenotypic traits (such as cob weight) and for traits that were model parameters, and was found to be more accurate in the latter case. The practical interest of this approach is illustrated by calculating the parameters (and the corresponding genotype) associated with yield optimization of a GreenLab maize model. The paper discusses the potential of GreenLab to represent environment × genotype interactions, in particular through its main state variable, the ratio of biomass supply over demand.
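
The ideotype search described above amounts to optimizing over allelic combinations with the growth model acting as the fitness function. Below is a minimal sketch of that idea, assuming a toy additive stand-in for the GreenLab simulation; the `simulated_yield` function, the binary locus encoding, and all rates are illustrative, not the paper's actual model or settings:

```python
import random

# Hypothetical stand-in for the GreenLab simulation: maps an allelic
# combination to a yield value via a derived model parameter.
def simulated_yield(alleles):
    # Toy additive genetic model: each locus contributes equally.
    param = sum(alleles) / len(alleles)
    return param * (2.0 - param)

def genetic_algorithm(n_loci=10, pop_size=50, generations=100,
                      mutation_rate=0.05):
    # Individuals are binary allelic combinations across virtual loci.
    pop = [[random.randint(0, 1) for _ in range(n_loci)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=simulated_yield, reverse=True)
        survivors = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_loci)      # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_loci):                # per-locus mutation
                if random.random() < mutation_rate:
                    child[i] = 1 - child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=simulated_yield)

best = genetic_algorithm()
print("ideotype alleles:", best, "yield:", simulated_yield(best))
```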

    A Simulated Annealing Method to Cover Dynamic Load Balancing in Grid Environment

    High-performance scheduling is critical to achieving application performance on the computational grid. New scheduling algorithms are in demand to address the new concerns arising in the grid environment. One of the main phases of scheduling on a grid is load balancing; a high-performance method for the load balancing problem is therefore essential to obtaining satisfactory high-performance scheduling. This paper presents SAGE, a new high-performance method that addresses the dynamic load balancing problem by means of a simulated annealing algorithm. Even though this problem has been addressed with several different approaches, only one of these methods is based on a simulated annealing algorithm. Preliminary results show that SAGE not only finds a good solution to the problem (effectiveness) but also does so in a reasonable amount of time (efficiency).
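
For readers unfamiliar with the technique, here is a minimal sketch of simulated annealing applied to load balancing: tasks are randomly reassigned between nodes, improving moves are always accepted, and worsening moves are accepted with a temperature-dependent probability. The cost function and cooling schedule are illustrative assumptions, not SAGE's actual design:

```python
import math
import random

def imbalance(assignment, task_sizes, n_nodes):
    # Cost: standard deviation of per-node load.
    loads = [0.0] * n_nodes
    for task, node in enumerate(assignment):
        loads[node] += task_sizes[task]
    mean = sum(loads) / n_nodes
    return math.sqrt(sum((l - mean) ** 2 for l in loads) / n_nodes)

def anneal(task_sizes, n_nodes, t0=10.0, cooling=0.995, steps=20000):
    assignment = [random.randrange(n_nodes) for _ in task_sizes]
    cost = imbalance(assignment, task_sizes, n_nodes)
    t = t0
    for _ in range(steps):
        task = random.randrange(len(task_sizes))   # move: migrate one task
        old = assignment[task]
        assignment[task] = random.randrange(n_nodes)
        new_cost = imbalance(assignment, task_sizes, n_nodes)
        # Accept improvements always; accept worse moves with probability
        # exp(-delta / t) so the search can escape local optima.
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
        else:
            assignment[task] = old                 # revert the move
        t *= cooling                               # geometric cooling
    return assignment, cost

tasks = [random.uniform(1, 10) for _ in range(40)]
assignment, final_cost = anneal(tasks, n_nodes=4)
print("final imbalance:", round(final_cost, 3))
```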

    Dynamic Load Balancing and Self-load Migration with Delay Queue in DVE

    Distributed virtual environments (DVEs) have been gaining attention recently, due to the growing popularity of the internet and social networking sites. As the number of concurrent users of a distributed virtual environment increases, a critical issue arises: how the workload can be balanced across several servers to maintain real-time performance. A variety of load balancing methods has been proposed recently, but they either aim to produce high-quality load balancing results and become too slow, or emphasize efficiency and produce less effective load balancing results. In this work, a new approach based on a front load balancer is proposed to address this issue. A heat diffusion method is used to develop a load balancing scheme, after which the front load balancer improves the dynamic load balancing of the servers using a delay queue. A number of experiments are performed to evaluate the efficiency of the proposed technique. The experimental results show that the proposed technique is effective in reducing server overloading while at the same time remaining efficient. DOI: 10.17762/ijritcc2321-8169.15077
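
As a rough illustration of the heat diffusion principle mentioned above (the delay queue and front load balancer are omitted here), each server can repeatedly exchange a fraction of its load difference with each neighbour, so load spreads like heat until the servers converge to a uniform level. A minimal sketch, with an assumed diffusion coefficient and a toy ring topology:

```python
# First-order heat-diffusion load balancing on a server graph.
def diffuse(loads, neighbours, alpha=0.25, iterations=50):
    for _ in range(iterations):
        flows = [0.0] * len(loads)
        for i, adj in enumerate(neighbours):
            for j in adj:
                # Flow is proportional to the load gradient i -> j;
                # the symmetric term on j's side conserves total load.
                flows[i] -= alpha * (loads[i] - loads[j])
        loads = [l + f for l, f in zip(loads, flows)]
    return loads

# Four servers in a ring; server 0 starts overloaded.
ring = [[1, 3], [0, 2], [1, 3], [0, 2]]
print(diffuse([100.0, 0.0, 0.0, 0.0], ring))  # converges toward 25 each
```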

    A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection

    A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and VIP can be used to guide steering. The model quantitatively simulates human psychophysical data about visually guided steering, obstacle avoidance, and route selection. Air Force Office of Scientific Research (F4960-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (NSF SBE-0354378); Office of Naval Research (N00014-01-1-0624)
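
The attractor/repeller interaction can be caricatured at the behavioural level as a differential equation on heading: the goal direction attracts, and each obstacle repels with an influence that decays with angular distance. The sketch below is in the spirit of such steering-dynamics models, not the paper's cortical (MT/MST/VIP) circuitry; the gains and decay constant are illustrative:

```python
import math

def heading_rate(phi, goal_dir, obstacle_dirs, k_g=2.0, k_o=1.5, c=2.0):
    # Goal attracts: linear restoring term toward the goal direction.
    rate = -k_g * (phi - goal_dir)
    for psi in obstacle_dirs:
        # Obstacle repels: push away from psi, decaying exponentially
        # with angular separation so distant obstacles barely matter.
        rate += k_o * (phi - psi) * math.exp(-c * abs(phi - psi))
    return rate

def simulate(phi=0.0, goal_dir=0.5, obstacle_dirs=(0.3,), dt=0.01, steps=500):
    # Forward-Euler integration of the heading dynamics.
    for _ in range(steps):
        phi += dt * heading_rate(phi, goal_dir, obstacle_dirs)
    return phi

print("final heading (rad):", round(simulate(), 3))
```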

    A fuzzified systematic adjustment of the robotic Darwinian PSO

    The Darwinian Particle Swarm Optimization (DPSO) is an evolutionary algorithm that extends Particle Swarm Optimization with natural selection to enhance the ability to escape from sub-optimal solutions. An extension of the DPSO to multi-robot applications has recently been proposed and denoted the Robotic Darwinian PSO (RDPSO), which benefits from the dynamic partitioning of the whole population of robots, thereby decreasing the amount of information exchange required among robots. This paper further extends the previously proposed algorithm by adapting the behavior of robots based on a set of context-based evaluation metrics. Those metrics are then used as inputs of a fuzzy system so as to systematically adjust the RDPSO parameters (i.e., the outputs of the fuzzy system), thus improving its convergence rate and reducing its susceptibility to obstacles and communication constraints. The adapted RDPSO is evaluated in groups of physical robots, and further explored using larger populations of simulated mobile robots within a larger scenario.
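
To make the "fuzzy system adjusts the parameters" idea concrete, here is a minimal sketch: a single context metric (normalised swarm progress) is fuzzified into "poor"/"good" memberships, and an inertia-like PSO parameter is produced by a weighted average of rule outputs (Sugeno-style defuzzification). The membership shapes, rule outputs, and the choice of metric are illustrative assumptions, not the RDPSO paper's actual rule base:

```python
def triangular(x, a, b, c):
    # Triangular membership function peaking at b, zero outside (a, c).
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def adjust_inertia(progress):
    mu_poor = triangular(progress, -0.5, 0.0, 0.6)   # little progress
    mu_good = triangular(progress, 0.4, 1.0, 1.5)    # steady progress
    # Rules: poor progress -> explore (high inertia, 0.9);
    #        good progress -> exploit (low inertia, 0.4).
    total = mu_poor + mu_good
    if total == 0.0:
        return 0.7                                   # neutral default
    return (0.9 * mu_poor + 0.4 * mu_good) / total

for p in (0.1, 0.5, 0.9):
    print(f"progress={p:.1f} -> inertia={adjust_inertia(p):.2f}")
```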

    Self-organising multi-agent control for distribution networks with distributed energy resources

    Recent years have seen an increase in the connection of dispersed distributed energy resources (DERs) and advanced control and operational components to the distribution network. These DERs can come in various forms, including distributed generation (DG), electric vehicles (EVs), energy storage, etc. The conditions of these DERs can be variable and unpredictably intermittent. The integration of these distributed components adds more complexity and uncertainty to the operation of future power networks, such as voltage, frequency, and active/reactive power control. The stochastic and distributed nature of DGs and the difficulty in predicting EV charging patterns present problems for the control and management of the distribution network, adding challenges to the planning and operation of such systems. Traditional methods for dealing with network problems such as voltage and power control could therefore be inadequate. In addition, conventional optimisation techniques will be difficult to apply successfully and will be accompanied by a large computational load. There is therefore a need for new control techniques that break the problem into smaller subsets and use a multi-agent system (MAS) to implement distributed solutions. These groups of agents coordinate amongst themselves to regulate local resources and voltage levels in a distributed and adaptive manner, considering the varying conditions of the network. This thesis investigates the use of self-organising systems, presenting suitable approaches and identifying the challenges of implementing such techniques. It presents the development of fully functioning self-organising multi-agent control algorithms that can perform as effectively as full optimisation techniques, and demonstrates these new control algorithms on models of large and complex networks with DERs. Simulation results validate the autonomy of the system to control the voltage independently using only local DERs, and prove the robustness and adaptability of the system by maintaining stable voltage control in response to network conditions over time.
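
A minimal sketch of the kind of local, agent-based voltage regulation described above: each agent measures only its own bus voltage and nudges its DER setpoint toward nominal with a proportional rule. The linear voltage sensitivity is a toy stand-in for a real power-flow model, and the gains, deadband, and limits are illustrative, not the thesis's actual algorithms:

```python
NOMINAL = 1.0          # per-unit target voltage
DEADBAND = 0.01        # no action within +/- 1% of nominal

class DERAgent:
    def __init__(self, sensitivity=0.05, gain=0.5, q_max=1.0):
        self.sensitivity = sensitivity   # delta-V per unit of output q
        self.gain = gain
        self.q = 0.0                     # current DER setpoint
        self.q_max = q_max

    def control_step(self, local_voltage):
        error = NOMINAL - local_voltage
        if abs(error) > DEADBAND:
            # Proportional correction, clamped to the DER's capability.
            self.q += self.gain * error / self.sensitivity
            self.q = max(-self.q_max, min(self.q_max, self.q))
        return self.q

# One agent pulling a sagging bus back toward nominal.
agent = DERAgent()
voltage = 0.95
for step in range(5):
    q = agent.control_step(voltage)
    voltage = 0.95 + agent.sensitivity * q   # toy network response
    print(f"step {step}: q={q:+.2f}, V={voltage:.3f}")
```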

    Hierarchical feature extraction from spatiotemporal data for cyber-physical system analytics

    With the advent of ubiquitous sensing, robust communication, and advanced computation, data-driven modeling is becoming increasingly popular for many engineering problems. Eliminating the difficulties of physics-based modeling and avoiding simplifying assumptions and ad hoc empirical models are significant among the many advantages of data-driven approaches, especially for large-scale complex systems. While classical statistics and signal processing algorithms have been widely used by the engineering community, advanced machine learning techniques have not been sufficiently explored in this regard. This study summarizes various categories of machine learning tools that have been applied, or may be candidates, for addressing engineering problems. While there is an increasing number of machine learning algorithms, the main steps involved in applying such techniques consist of data collection and pre-processing, feature extraction, model training, and inference for decision-making. To support decision-making processes in many applications, hierarchical feature extraction is key. Among various feature extraction principles, recent studies emphasize hierarchical approaches that extract salient features at multiple abstraction levels from data. In this context, the focus of the dissertation is on developing hierarchical feature extraction algorithms within the framework of machine learning in order to solve challenging cyber-physical problems in various domains, such as electromechanical systems and agricultural systems. Furthermore, the feature extraction techniques are described for the spatial, temporal, and spatiotemporal data types collected from the systems. The wide applicability of such features in solving selected real-life domain problems is demonstrated throughout this study.
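
The "multiple abstraction levels" idea can be illustrated on a simple time series: low-level features are computed over short windows, and a second level then summarises those window features over the whole signal. The window size and the chosen statistics below are illustrative, not the dissertation's actual feature set:

```python
import statistics

def window_features(signal, width=10):
    # Level 1: per-window mean and spread over short segments.
    feats = []
    for start in range(0, len(signal) - width + 1, width):
        w = signal[start:start + width]
        feats.append((statistics.mean(w), statistics.pstdev(w)))
    return feats

def hierarchical_features(signal, width=10):
    level1 = window_features(signal, width)
    means = [m for m, _ in level1]
    spreads = [s for _, s in level1]
    # Level 2: statistics over the level-1 features, capturing slower
    # trends and the variability of the local variability.
    return {
        "trend": means[-1] - means[0],
        "mean_spread": statistics.mean(spreads),
        "spread_of_means": statistics.pstdev(means),
    }

signal = [i * 0.1 + (i % 7) * 0.05 for i in range(100)]
print(hierarchical_features(signal))
```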

    Improved Cloud resource allocation: how INDIGO-Datacloud is overcoming the current limitations in Cloud schedulers

    Paper presented at: 22nd International Conference on Computing in High Energy and Nuclear Physics (CHEP2016), 10–14 October 2016, San Francisco. Performing efficient resource provisioning is a fundamental aspect for any resource provider. Local Resource Management Systems (LRMS) have been used in data centers for decades in order to obtain the best usage of the resources, providing fair usage and partitioning of the resources among users. In contrast, current cloud schedulers are normally based on the immediate allocation of resources on a first-come, first-served basis, meaning that a request will fail if there are no resources available (e.g. OpenStack), or it will be trivially queued in order of entry time (e.g. OpenNebula). Moreover, these scheduling strategies are based on a static partitioning of the resources, meaning that existing quotas cannot be exceeded even if there are idle resources allocated to other projects. This is a consequence of the fact that cloud instances are not associated with a maximum execution time, and it leads to a situation where the resources are under-utilized. These facts have been identified by the INDIGO-DataCloud project as being too simplistic for accommodating scientific workloads in an efficient way, leading to an underutilization of the resources, an undesirable situation in scientific data centers. In this work, we present the work done in the scheduling area during the first year of the INDIGO project and the foreseen evolutions. The authors want to acknowledge the support of the INDIGO-DataCloud project (grant number 653549), funded by the European Commission's Horizon 2020 Framework Programme. Peer Reviewed
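
The contrast the abstract draws between first-come-first-served allocation and LRMS-style fair usage can be sketched as a queue that orders pending requests by each project's past usage relative to its entitled share, rather than by arrival time. This is an illustrative toy, not INDIGO-DataCloud's actual scheduler; the class and its policy are assumptions:

```python
import heapq

class FairShareQueue:
    def __init__(self, shares):
        self.shares = shares          # project -> entitled fraction
        self.usage = {p: 0.0 for p in shares}
        self.heap = []
        self.counter = 0              # tie-breaker: arrival order

    def submit(self, project, cost):
        # Priority: usage normalised by share (lower is served first),
        # so under-served projects jump ahead of heavy consumers.
        priority = self.usage[project] / self.shares[project]
        heapq.heappush(self.heap, (priority, self.counter, project, cost))
        self.counter += 1

    def dispatch(self):
        _, _, project, cost = heapq.heappop(self.heap)
        self.usage[project] += cost   # charge the project for the run
        return project

q = FairShareQueue({"heavy": 0.5, "light": 0.5})
q.usage["heavy"] = 10.0               # "heavy" has already consumed a lot
q.submit("heavy", 1.0)                # arrives first...
q.submit("light", 1.0)
print(q.dispatch())                   # ...but "light" is served first
```

Under pure first-come-first-served ordering, "heavy" would be dispatched first regardless of its accumulated usage; the fair-share priority inverts that outcome.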