
    Conversion to Organic Production Software (OrgPlan, OF0159)

    This is the final report for Defra Project OF0159. The Organic Conversion Planner (OrgPlan) is a computer program for farmers and advisers that reduces the time needed to plan a conversion to organic farming. Conversion planning can help to identify whether organic management is suited to the farm and to anticipate potential problems during the conversion period itself. It involves an assessment of the current situation of the farm, on the basis of which proposals for an organic 'target (endpoint)' can be developed. These include proposed rotation(s) and cropping and stocking plans for the specific farm situation, and the proposals need to be tested for their technical and financial feasibility, including their impact on forage supply, nutrient requirements and financial budgets. In a final step, a more detailed strategy for getting from the current situation to the target situation is worked out. On the basis of such a plan a farmer can make an informed choice about the feasibility of a conversion, and planning can help to reduce the risk of conversion. General whole-farm planning methods can be broadly split into budgeting and optimisation methods. The former uses input and output data from existing enterprises or standard data, whereas the latter uses mathematical models to determine the choice of enterprises that maximises a key indicator, e.g. profit. OrgPlan uses the budgeting approach, building on experience with budgeting software for organic conversion from mainly German-speaking countries. It overcomes a number of key limitations of spreadsheet-based budgeting approaches in relation to access to standard enterprise data, additional support tools (e.g. a rotation planner) and ease of use. The software is structured into three major sections. In Central Resources, basic standard data and farm profiles are entered, viewed and adjusted, and rotations can be planned.
Access is also provided to the advisory section, containing documents about organic production standards, organic management notes and a software help file; these can also be accessed from other sections of the software. In the Scenario Planning section, new files for a scenario are created, where a scenario refers to a period of several years of a farm during conversion and/or under organic management. Cropping and livestock plans are generated, and a first assessment of the scenario in terms of key farm management indicators, nutrient budgets and forage budgets is provided. After adding whole-farm financial data, the results are transferred into the Report Builder, where profit-and-loss and cash-flow forecasts for the scenario can be generated. Reports can be viewed on screen, printed (HTML format) or exported for further analysis in other packages (spreadsheets). A key aim in developing the software was to reduce the time input needed for conversion planning. The software is Windows-based and follows the layout of the EMA software (developed by UH). It was programmed in Microsoft (MS) Visual Basic, using MS Access databases for data storage. It draws on the results of several Defra-funded research projects and has relevance to the Organic Conversion Information Service (OCIS). A series of nine basic steps is needed to use the software to plan a conversion: viewing and modifying standard enterprise data, viewing and modifying rotations, creating a farm profile, creating and planning a conversion scenario, getting first feedback on the scenario, adding whole-farm financial data, planning new investment during the scenario period, and viewing and printing reports and/or exporting data for further analysis in other packages. The basic planning tool has been released as part of the EMA 2002 software (EMA Plan).
Because of the sensitive nature of the financial calculations that are the main feature of OrgPlan, further field testing of the programme in conjunction with the Organic Standard Data Collection is envisaged in the autumn of 2002 with experienced Organic Farming Consultants.
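The budgeting approach described above, combining standard enterprise data with a multi-year cropping plan to produce whole-farm financial indicators, can be sketched in a few lines. This is an illustrative outline only: the enterprise names, gross margins and scenario data are invented, and OrgPlan itself is written in Visual Basic with MS Access, not Python.

```python
# Illustrative standard enterprise data (per-hectare gross margins, GBP);
# all enterprise names and values here are invented for the sketch.
standard_data = {
    "winter wheat (conventional)": 420,
    "winter wheat (in-conversion)": 310,
    "grass/clover ley (organic)": 180,
    "spring oats (organic)": 350,
}

# A three-year conversion scenario: a cropping plan per year,
# expressed as {enterprise: hectares}.
scenario = [
    {"winter wheat (conventional)": 40, "grass/clover ley (organic)": 10},
    {"winter wheat (in-conversion)": 25, "grass/clover ley (organic)": 25},
    {"spring oats (organic)": 30, "grass/clover ley (organic)": 20},
]

def year_gross_margin(cropping):
    # Whole-farm gross margin = sum of (area x standard gross margin).
    return sum(standard_data[crop] * ha for crop, ha in cropping.items())

for year, cropping in enumerate(scenario, start=1):
    print(f"Year {year}: GBP {year_gross_margin(cropping):,}")
```

The characteristic dip in margin during the in-conversion years is exactly the kind of effect a conversion plan is meant to expose before a farmer commits.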

    Data Driven Surrogate Based Optimization in the Problem Solving Environment WBCSim

    Large-scale, multidisciplinary engineering designs are difficult because of the complexity and dimensionality of these problems. Direct coupling between the analysis codes and the optimization routines can be prohibitively time consuming due to the complexity of the underlying simulation codes. One way of tackling this problem is to construct computationally cheaper approximations of the expensive simulations that mimic the behavior of the simulation model as closely as possible. This paper presents a data-driven, surrogate-based optimization algorithm that uses a trust-region-based sequential approximate optimization (SAO) framework and a statistical sampling approach based on design of experiments (DOE) arrays. The algorithm is implemented using techniques from two packages, SURFPACK and SHEPPACK, that provide a collection of approximation algorithms to build the surrogates, and three different DOE techniques, full factorial (FF), Latin hypercube sampling (LHS), and central composite design (CCD), are used to train the surrogates. The results are compared with the optimization results obtained by directly coupling an optimizer with the simulation code. The biggest concern in using an SAO framework based on statistical sampling is the generation of the required database: as the number of design variables grows, the computational cost of generating the database grows rapidly. A data-driven approach is proposed to tackle this situation, where the key idea is to run the expensive simulation if and only if a nearby data point does not exist in the cumulatively growing database. Over time the database matures and is enriched as more and more optimizations are performed. Results show that the proposed methodology dramatically reduces the total number of calls to the expensive simulation during the optimization process.
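The core database-reuse idea, calling the expensive simulation only when no sufficiently close point already exists in the growing database, can be sketched as follows. This is a minimal illustration under assumed details: the distance tolerance, the linear-scan lookup, and the toy quadratic "simulation" are all inventions here, not the paper's implementation.

```python
import math

class SimulationCache:
    """Grow a database of (design point, result) pairs and call the
    expensive simulation only when no stored point lies within `tol`."""

    def __init__(self, simulate, tol=1e-2):
        self.simulate = simulate   # expensive black-box function
        self.tol = tol             # "nearby" radius in design space
        self.database = []         # cumulatively growing (point, value) pairs
        self.calls = 0             # count of true simulation runs

    def evaluate(self, point):
        # Reuse a stored result if a nearby point already exists.
        for stored, value in self.database:
            if math.dist(stored, point) <= self.tol:
                return value
        # Otherwise run the expensive simulation and enrich the database.
        value = self.simulate(point)
        self.calls += 1
        self.database.append((tuple(point), value))
        return value

# Toy "expensive" simulation: a quadratic bowl.
def expensive_sim(x):
    return sum(v * v for v in x)

cache = SimulationCache(expensive_sim, tol=0.05)
cache.evaluate((0.50, 0.50))   # first call: runs the simulation
cache.evaluate((0.51, 0.49))   # within tol: served from the database
print(cache.calls)             # only one true simulation run so far
```

As more optimizations are performed against the same cache, the database matures and an ever larger fraction of evaluate() calls is answered without touching the simulation, which is the effect the paper reports.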

    HandyBroker - An intelligent product-brokering agent for M-commerce applications with user preference tracking

    One of the potential applications for agent-based systems is m-commerce. Much research has been done on making such systems intelligent enough to personalize their services for users. In most systems, user-supplied keywords are used to help generate profiles for users. In this paper, an evolutionary, ontology-based product-brokering agent is designed for m-commerce applications. It uses an evaluation function to represent a user's preference instead of the usual keyword-based profile. By using genetic algorithms, the agent tracks the user's preferences for a particular product by tuning parameters inside its evaluation function. A prototype called "HandyBroker" has been implemented in Java, and the results obtained from our experiments look promising for m-commerce use.
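The idea of a genetic algorithm tuning evaluation-function parameters to match observed user preferences might look roughly like the sketch below. The product attributes, ratings and GA settings are invented for illustration, and HandyBroker itself is implemented in Java, so this is only a schematic stand-in for the paper's approach.

```python
import random

random.seed(42)

# Hypothetical products as (price score, battery score) attribute pairs,
# plus the user's observed ratings; all data here is illustrative only.
products = [(0.9, 0.2), (0.4, 0.8), (0.6, 0.6), (0.1, 0.9)]
user_ratings = [0.34, 0.72, 0.60, 0.74]

def evaluate(weights, product):
    # Linear evaluation function: weighted sum of attribute scores.
    return sum(w * a for w, a in zip(weights, product))

def fitness(weights):
    # Negative squared error between predictions and observed ratings.
    return -sum((evaluate(weights, p) - r) ** 2
                for p, r in zip(products, user_ratings))

def mutate(weights, step=0.05):
    # Small random perturbation of each weight, clipped to [0, 1].
    return tuple(min(1.0, max(0.0, w + random.uniform(-step, step)))
                 for w in weights)

# Minimal elitist evolutionary loop standing in for the paper's GA.
population = [(random.random(), random.random()) for _ in range(20)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(population, key=fitness)
print(best)  # the tuned weights approximate the user's implicit preference
```

As the user rates more products, re-running the loop nudges the weights, which is the "preference tracking" behaviour the paper describes.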

    Neuro-fuzzy knowledge processing in intelligent learning environments for improved student diagnosis

    In this paper, a neural network implementation of a fuzzy logic-based model of the diagnostic process is proposed as a means to achieve accurate student diagnosis and updates of the student model in Intelligent Learning Environments. The neuro-fuzzy synergy allows the diagnostic model to some extent to "imitate" teachers in diagnosing students' characteristics, and equips the intelligent learning environment with reasoning capabilities that can be further used to drive pedagogical decisions depending on the student's learning style. The neuro-fuzzy implementation helps to encode both structured and non-structured teachers' knowledge: when teachers' reasoning is available and well defined, it can be encoded in the form of fuzzy rules; when teachers' reasoning is not well defined but is available through practical examples illustrating their experience, the networks can be trained to represent this experience. The proposed approach has been tested in diagnosing aspects of students' learning style in a discovery-learning environment that aims to help students construct the concepts of vectors in physics and mathematics. The diagnostic outcomes of the model have been compared against the recommendations of a group of five experienced teachers and the results produced by two alternative soft-computing methods. The results of our pilot study show that the neuro-fuzzy model successfully manages the inherent uncertainty of the diagnostic process, especially for marginal cases, i.e. where it is very difficult, even for human tutors, to diagnose and accurately evaluate students by directly synthesizing subjective and sometimes conflicting judgments.
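The "structured knowledge as fuzzy rules" half of this approach can be illustrated with a minimal sketch. The membership functions, the two rules and the defuzzification scheme below are invented for illustration; the paper's actual model is a trained neuro-fuzzy network, not these hand-written rules.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def diagnose(accuracy, speed):
    """Two illustrative fuzzy rules combined by min (AND) firing strengths:
       R1: IF accuracy is high AND speed is high THEN mastery is high
       R2: IF accuracy is low               THEN mastery is low
    Returns a crisp mastery score via a weighted average of rule outputs."""
    acc_high = tri(accuracy, 0.4, 1.0, 1.6)   # peaks at perfect accuracy
    acc_low = tri(accuracy, -0.6, 0.0, 0.6)   # peaks at zero accuracy
    spd_high = tri(speed, 0.4, 1.0, 1.6)

    r1 = min(acc_high, spd_high)   # firing strength of rule 1
    r2 = acc_low                   # firing strength of rule 2
    # Defuzzify: rule consequents anchored at mastery 1.0 (high), 0.0 (low).
    total = r1 + r2
    return (r1 * 1.0 + r2 * 0.0) / total if total else 0.5

print(diagnose(0.9, 0.8))  # strong student: score near 1
print(diagnose(0.2, 0.5))  # weak student: score near 0
```

In the neuro-fuzzy version, the shapes of these membership functions are exactly what the network learns from teachers' worked examples instead of being fixed by hand.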

    Data intensive scientific analysis with grid computing

    At the end of September 2009, a new Italian GPS receiver for radio occultation was launched from the Satish Dhawan Space Center (Sriharikota, India) on the Indian Remote Sensing OCEANSAT-2 satellite. The Italian Space Agency has assembled a group of Italian universities and research centers to implement the overall radio occultation processing chain. After a brief description of the adopted algorithms, which can be used to characterize temperature, pressure and humidity, the contribution focuses on a method for automatically processing these data, based on the use of a distributed architecture. This paper presents a possible application of grid computing to scientific research.

    Understand Your Chains: Towards Performance Profile-based Network Service Management

    Allocating resources to virtualized network functions and services to meet service level agreements is a challenging task for NFV management and orchestration systems. This becomes even more challenging when agile development methodologies, like DevOps, are applied. In such scenarios, management and orchestration systems are continuously facing new versions of functions and services, which makes it hard to decide how much resources have to be allocated to them to provide the expected service performance. One solution for this problem is to support resource allocation decisions with performance behavior information obtained by profiling techniques applied to such network functions and services. In this position paper, we analyze and discuss the components needed to generate such performance behavior information within the NFV DevOps workflow. We also outline research questions that identify open issues and missing pieces for a fully integrated NFV profiling solution. Further, we introduce a novel profiling mechanism that is able to profile virtualized network functions and entire network service chains under different resource constraints before they are deployed on production infrastructure.

    Comment: Submitted to and accepted by the European Workshop on Software Defined Networks (EWSDN) 201
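The profiling workflow sketched in the abstract, measuring a function's performance under a range of resource constraints and using the resulting profile to guide allocation, can be outlined as follows. The throughput model, resource points and SLA threshold are toy assumptions, not the paper's mechanism.

```python
def measured_throughput(cpu_share):
    """Stand-in for a real measurement of a deployed VNF; here a toy
    model in requests/s with diminishing returns above 0.5 CPU."""
    return min(1000 * cpu_share, 400 + 200 * cpu_share)

def build_profile(resource_points):
    """Run the 'measurement' at each candidate resource configuration
    before production deployment, yielding a performance profile."""
    return {cpu: measured_throughput(cpu) for cpu in resource_points}

def cheapest_allocation(profile, required_throughput):
    # Smallest CPU share whose profiled throughput meets the SLA target,
    # or None if no profiled configuration is sufficient.
    feasible = [cpu for cpu, tput in sorted(profile.items())
                if tput >= required_throughput]
    return feasible[0] if feasible else None

profile = build_profile([0.25, 0.5, 1.0, 2.0])
print(cheapest_allocation(profile, 500))  # smallest share meeting 500 req/s
```

In a DevOps pipeline, build_profile would be re-run for every new version of a function or chain, so the orchestrator's allocation decisions always rest on measurements of the version actually being deployed.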