
    Understanding soil fertility in organically farmed systems (OF0164)

    This is the final report of the Defra project OF0164. Organic farming aims to create an economically and environmentally sustainable agriculture, with the emphasis placed on self-sustaining biological systems rather than external inputs. Building soil fertility is central to this ethos. ‘Soil fertility’ can be considered as a measure of the soil’s ability to sustain satisfactory crop growth, both in the short and longer term. It is determined by a set of interactions between the soil’s physical environment, chemical environment and biological activity. The aim of this project was, therefore, to provide a better scientific understanding of soil fertility under organic farming. The approach was to undertake a comprehensive literature review at the start of the project to assess and synthesise available information. Studies were then designed to address specific questions identified from the literature review. The literature review was written during the first year of the project. In addition to submitting written copies to Defra, the chapters were posted on a project website: www.adas.co.uk/soilfertility. The review was based around key questions:
    • What are the soil organic matter characteristics and the roles of different fractions of the soil organic matter?
    • Do organically managed soils have higher levels of soil organic matter (SOM), with a resultant improvement in soil properties?
    • Is the soil biology different in organically managed soils, in terms of size, biodiversity and activity?
    • Do organically managed soils have a greater inherent capacity to supply plant nutrients?
    • What are the nutrient pools and their sizes?
    • What are the processes and rates of nutrient transfer in relation to nutrient demand?
    • What are the environmental consequences of organic management?
    The project also included a large amount of practical work. This necessarily covered a wide range of topics, which were examined in a series of separate studies:
    • Soil microbiology: a series of measurements focusing on two sites, undertaken by the University of Wales Bangor (UWB)
    • Field campaigns in autumn 1999 and spring/summer 2000: separate field sampling campaigns focusing especially on nutrient pools, undertaken by HDRA, ADAS and IGER
    • Incubation studies: a series of three separate experiments to look in more detail at N dynamics, managed by ADAS, with support from IGER and HDRA
    From the literature review and the practical work, the following was concluded. Organic matter is linked intrinsically to soil fertility, because it is important in maintaining good soil physical conditions (e.g. soil structure, aeration and water-holding capacity), which contribute to soil fertility. Organic matter also contains most of the soil reserve of N and large proportions of other nutrients such as P and sulphur. Field management data gathered from farmers showed, however, that organic matter returns are not necessarily larger in organic systems. Many non-organically farmed soils receive regular manure applications, and the generally higher-yielding crops on conventional farms may return larger crop residues. Conversely, many organic fields receive little or no manure, relying on the fertility-building ley phase for organic matter input. This observation is important: management practices within organic and non-organic systems are diverse, and all have consequences for soil fertility.
The Executive Summary at the start of the main attached report has additional sections on Soil Structure, Soil Biology, and Nutrient Cycling, with greater detail on comparisons of organic and conventional management and the consequences for soil fertility.

    An ontology enhanced parallel SVM for scalable spam filter training

    This is the post-print version of the final paper published in Neurocomputing. The published article is available from the link below. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. Copyright © 2013 Elsevier B.V. Spam, under a variety of shapes and forms, continues to inflict increasing damage. Varying approaches, including Support Vector Machine (SVM) techniques, have been proposed for spam filter training and classification. However, SVM training is a computationally intensive process. This paper presents a MapReduce based parallel SVM algorithm for scalable spam filter training. By distributing, processing and optimizing the subsets of the training data across multiple participating computer nodes, the parallel SVM reduces the training time significantly. Ontology semantics are employed to minimize the impact of accuracy degradation when distributing the training data among a number of SVM classifiers. Experimental results show that ontology based augmentation improves the accuracy level of the parallel SVM beyond the original sequential counterpart.
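A minimal sketch of the general data-parallel SVM training idea described above, in which each map task fits an SVM on one partition and a reduce step retrains on the pooled support vectors. This assumes Python with scikit-learn and the standard multiprocessing module; the naive round-robin split below stands in for the paper's ontology-based distribution of the training data, so it illustrates the technique rather than the authors' exact algorithm.

```python
# Sketch of cascade-style, data-parallel SVM training (assumptions: scikit-learn,
# a naive round-robin partition instead of the paper's ontology-based split).
import numpy as np
from multiprocessing import Pool
from sklearn.svm import SVC

def train_partition(args):
    """Map step: fit an SVM on one partition and return its support vectors."""
    X_part, y_part = args
    clf = SVC(kernel="linear", C=1.0).fit(X_part, y_part)
    return X_part[clf.support_], y_part[clf.support_]

def parallel_svm(X, y, n_partitions=4):
    """Reduce step: pool the support vectors from every partition and retrain."""
    parts = [(X[i::n_partitions], y[i::n_partitions]) for i in range(n_partitions)]
    with Pool(n_partitions) as pool:
        results = pool.map(train_partition, parts)
    X_sv = np.vstack([xs for xs, _ in results])
    y_sv = np.concatenate([ys for _, ys in results])
    return SVC(kernel="linear", C=1.0).fit(X_sv, y_sv)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 20))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in for spam/ham labels
    print("training accuracy:", parallel_svm(X, y).score(X, y))
```

Training each partition independently is what cuts the wall-clock time; retraining on the union of support vectors is one common way to limit the accuracy loss that partitioning introduces.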

    The Allocation of Software Development Resources In ‘Open Source’ Production Mode

    This paper aims to develop a stochastic simulation structure capable of describing the decentralized, micro-level decisions that allocate programming resources both within and among open source/free software (OS/FS) projects, and that thereby generate an array of OS/FS system products, each of which possesses particular qualitative attributes. The core, or behavioral kernel, of the simulation tool presented here represents the effects of the reputational reward structure of OS/FS communities (as characterized by Raymond 1998) as the key mechanism governing the probabilistic allocation of agents’ individual contributions among the constituent components of an evolving software system. In this regard, our approach follows the institutional analysis approach associated with studies of academic researchers in “open science” communities. For the purposes of this first step, the focus of the analysis is confined to showing the ways in which the specific norms of the reward system and organizational rules can shape emergent properties of successive releases of code for a given project, such as its range of functions and reliability. The global performance of the OS/FS mode, in matching the functional and other characteristics of the variety of software systems that are produced with the needs of users in various sectors of the economy and polity, is obviously a matter of considerable importance that will bear upon the long-term viability and growth of this mode of organizing production and distribution. Our larger objective, therefore, is to arrive at a parsimonious characterization of the workings of OS/FS communities engaged across a number of projects, and of their collective productive performance in dimensions that are amenable to “social welfare” evaluation. Seeking that goal will pose further new and interesting problems for study, a number of which are identified in the essay’s conclusion. Yet it is argued that these too will be found to be tractable within the framework provided by refining and elaborating on the core (“proof of concept”) model presented in this paper.
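As a purely illustrative sketch of the kind of behavioral kernel described above, the following toy simulation (plain Python) lets agents allocate contributions to modules with probability proportional to each module's accumulated visibility, an assumed stand-in for the reputational reward signal; the reward rule and all parameters are illustrative assumptions, not the paper's calibrated model.

```python
# Toy behavioral kernel: each agent contributes to one module per period,
# choosing modules with probability proportional to accumulated "visibility"
# (an assumed stand-in for the reputational reward signal).
import random

def simulate(n_modules=5, n_agents=50, n_periods=100, seed=1):
    random.seed(seed)
    visibility = [1.0] * n_modules             # initial visibility of each module
    for _ in range(n_periods):
        for _ in range(n_agents):
            total = sum(visibility)
            weights = [v / total for v in visibility]
            module = random.choices(range(n_modules), weights=weights)[0]
            visibility[module] += 1.0          # a contribution raises future visibility
    return visibility

if __name__ == "__main__":
    print(sorted(simulate(), reverse=True))
```

Even this toy version exhibits a rich-get-richer concentration of effort on a few modules, illustrating how a reputational reward rule can shape emergent properties of the evolving code base.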

    Identification of microservices from monolithic applications through topic modelling

    Master’s dissertation in Informatics Engineering. Microservices emerged as one of the most popular architectural patterns in recent years, given the increased need to scale, grow and make software projects more flexible, accompanied by the growth in cloud computing and DevOps. Many software applications are being submitted to a process of migration from a monolithic architecture to a more modular, scalable and flexible architecture of microservices. This process is slow and, depending on the project’s complexity, may take months or even years to complete. This dissertation proposes a new approach to microservices identification that resorts to topic modelling in order to identify services according to domain terms. This approach, in combination with clustering techniques, produces a set of services based on the original software. The proposed methodology is implemented as an open-source tool for exploration of monolithic architectures and identification of microservices. An extensive quantitative analysis using state-of-the-art metrics on independence of functionality and modularity of services was conducted on 200 open-source projects collected from GitHub. Cohesion at message and domain level metrics showed medians of roughly 0.6. Interfaces per service exhibited a median of 1.5 with a compact interquartile range. Structural and conceptual modularity revealed medians of 0.2 and 0.4 respectively. Further analysis to understand whether the methodology works better for smaller or larger projects revealed overall stability and similar performance across metrics. These first results are positive, with the overall metric values indicating a beneficial identification of services.
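A minimal sketch of the pipeline the abstract outlines, assuming Python with scikit-learn: each monolith class is described by a pseudo-document of domain terms, a topic model yields a topic mixture per class, and clustering those mixtures yields candidate services. The toy class names, documents and parameter choices are assumptions for illustration, not the dissertation's actual tool.

```python
# Toy pipeline: domain-term documents per class -> topic mixtures -> clusters,
# where each cluster is a candidate microservice (class names, documents and
# parameters are illustrative assumptions).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans

class_docs = {
    "OrderController":  "order checkout payment invoice customer",
    "PaymentService":   "payment invoice refund transaction gateway",
    "CatalogService":   "product catalog price stock category",
    "InventoryManager": "stock warehouse inventory product shipment",
    "UserService":      "customer account login profile email",
}

names = list(class_docs)
term_counts = CountVectorizer().fit_transform(class_docs.values())

# Topic mixture per class, then clustering of classes into candidate services.
topic_mix = LatentDirichletAllocation(n_components=3, random_state=0).fit_transform(term_counts)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(topic_mix)

for service in sorted(set(labels)):
    members = [name for name, label in zip(names, labels) if label == service]
    print(f"candidate service {service}: {members}")
```

Cohesion and modularity metrics of the kind reported above would then be computed over each resulting cluster and its interfaces.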

    Dynamic Influence Networks for Rule-based Models

    We introduce the Dynamic Influence Network (DIN), a novel visual analytics technique for representing and analyzing rule-based models of protein-protein interaction networks. Rule-based modeling has proved instrumental in developing biological models that are concise, comprehensible, easily extensible, and that mitigate the combinatorial complexity of multi-state and multi-component biological molecules. Our technique visualizes the dynamics of these rules as they evolve over time. Using the data produced by KaSim, an open source stochastic simulator of rule-based models written in the Kappa language, DINs provide a node-link diagram that represents the influence that each rule has on the other rules. That is, rather than representing individual biological components or types, we instead represent the rules about them (as nodes) and the current influence of these rules (as links). Using our interactive DIN-Viz software tool, researchers are able to query this dynamic network to find meaningful patterns about biological processes, and to identify salient aspects of complex rule-based models. To evaluate the effectiveness of our approach, we investigate a simulation of a circadian clock model that illustrates the oscillatory behavior of the KaiC protein phosphorylation cycle. Comment: Accepted to TVCG, in press.
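A minimal sketch of the data structure behind such a network, in plain Python: nodes are rules, directed links carry an influence weight, and one weighted graph is produced per simulation time window so the network can be replayed over time. The influence proxy used here (how often a firing of one rule immediately precedes a firing of another within a window) is an assumption chosen for illustration, not the measure DIN-Viz computes from KaSim output.

```python
# One influence network per time window: {window: {(rule_i, rule_j): weight}},
# where the weight counts how often a firing of rule_i immediately precedes a
# firing of rule_j (an assumed proxy for influence, for illustration only).
from collections import defaultdict

def influence_networks(trace, window):
    """trace: list of (time, rule_name) firing events sorted by time."""
    networks = defaultdict(lambda: defaultdict(float))
    for (t0, r0), (t1, r1) in zip(trace, trace[1:]):
        networks[int(t0 // window)][(r0, r1)] += 1.0
    for edges in networks.values():            # normalise weights within each window
        total = sum(edges.values())
        for edge in edges:
            edges[edge] /= total
    return networks

if __name__ == "__main__":
    # Toy firing trace standing in for a KaSim run of a phosphorylation cycle.
    trace = [(0.1, "bind"), (0.4, "phosphorylate"), (0.9, "unbind"),
             (1.2, "bind"), (1.6, "phosphorylate"), (1.8, "dephosphorylate")]
    for w, edges in sorted(influence_networks(trace, window=1.0).items()):
        print(f"window {w}: {dict(edges)}")
```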

    A Requirements-Based Exploration of Open-Source Software Development Projects – Towards a Natural Language Processing Software Analysis Framework

    Open source projects do have requirements; they are, however, mostly informal text descriptions found in requests, forums, and other correspondence. Understanding such requirements provides insight into the nature of open source projects. Unfortunately, manual analysis of natural language requirements is time-consuming and, for large projects, error-prone. Automated analysis of natural language requirements, even partial, will be of great benefit. Towards that end, I describe the design and validation of an automated natural language requirements classifier for open source software development projects. I compare two strategies for recognizing requirements in open forums of software features. The results suggest that classifying text at the forum post aggregation and sentence aggregation levels may be effective. Initial results suggest that it can reduce the effort required to analyze requirements of open source software development projects. Software development organizations and communities currently employ a large number of software development techniques and methodologies. This complexity is compounded by a wide range of software project types and development environments. The resulting lack of consistency in the software development domain leads to one important challenge that researchers encounter while exploring this area: specificity. Specificity makes it difficult to maintain a consistent unit of measure or analysis approach while exploring a wide variety of software development projects and environments. The problem of specificity is most prominently exhibited in an area of software development characterized by dynamic evolution, a unique development environment, and a relatively young history of research when compared to traditional software development: the open-source domain. While performing research on open source and the associated communities of developers, one can notice the same challenge of specificity in requirements engineering research as in the case of closed-source software development. Whether research is aimed at performing longitudinal or cross-sectional analyses, or attempts to link requirements to other aspects of software development projects and their management, specificity calls for a flexible analysis tool capable of adapting to the needs and specifics of the explored context. This dissertation covers the design, implementation, and evaluation of a model, a method, and a software tool comprising a flexible software development analysis framework. These design artifacts use a rule-based natural language processing approach and are built to meet the specifics of a requirements-based analysis of software development projects in the open-source domain. This research follows the principles of design science research as defined by Hevner et al. and includes stages of problem awareness, suggestion, development, evaluation, and results and conclusion (Hevner et al. 2004; Vaishnavi and Kuechler 2007). The long-term goal of the research stream stemming from this dissertation is to propose a flexible, customizable, requirements-based natural language processing software analysis framework which can be adapted to meet the research needs of multiple different types of domains or different categories of analyses.
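A minimal, purely illustrative sketch of a rule-based requirements recognizer of the kind described above, in plain Python: lexical and modal-verb patterns flag candidate requirement sentences in forum text, and the labels can be aggregated at the sentence level or the post level. The patterns are assumptions for illustration, not the framework's actual rule set.

```python
# Lexical / modal-verb patterns that often signal a requirement in open forums
# (illustrative rule set, not the framework's actual rules).
import re

REQUIREMENT_PATTERNS = [
    r"\b(shall|should|must)\b",
    r"\bneeds? to\b",
    r"\bwould be (nice|great|useful) (if|to)\b",
    r"\bplease (add|support|allow)\b",
]

def is_requirement_sentence(sentence):
    return any(re.search(p, sentence, re.IGNORECASE) for p in REQUIREMENT_PATTERNS)

def classify_post(post):
    """Sentence-level labels plus a post-level label (any requirement sentence)."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", post.strip()) if s]
    labels = [(s, is_requirement_sentence(s)) for s in sentences]
    return labels, any(flag for _, flag in labels)

if __name__ == "__main__":
    post = ("The installer crashes on Windows. The tool should support "
            "incremental builds. Would be great if the CLI allowed a config file.")
    sentence_labels, post_label = classify_post(post)
    for sentence, flag in sentence_labels:
        print(("REQ  " if flag else "     ") + sentence)
    print("post contains requirements:", post_label)
```

Comparing the sentence-level and post-level labels against a manually coded sample is one straightforward way to evaluate the two aggregation strategies the abstract mentions.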

    Meeting the challenge of zero carbon homes : a multi-disciplinary review of the literature and assessment of key barriers and enablers

    Within the built environment sector, there is increasing pressure on professionals to consider the impact of development upon the environment. These pressures are rooted in sustainability, and particularly climate change. But what is meant by sustainability? It is a term whose meaning is often discussed, the most common definition being taken from the Brundtland report: “sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs” (World Commission on Environment and Development, 1987). In the built environment, the sustainability issues within the environmental, social and economic spheres are often expressed through design considerations of energy, water and waste. Given the Stern Report’s economic and political case for action with respect to climate change (Stern, 2006) and the IPCC’s Fourth Assessment Report’s confirmation of the urgency of the climate change issue and its root causes (IPCC, 2007), the need for action to mitigate the effects of climate change is currently high on the political agenda. The excess in carbon dioxide concentration over the natural level has been attributed to anthropogenic sources, most particularly the burning of carbon-based fossil fuels. Over 40% of Europe’s energy and 40% of Europe’s carbon dioxide emissions arise from the use of energy in buildings. Energy use in buildings is primarily for space heating, water heating, lighting and appliances. Professionals in the built environment can therefore play a significant role in meeting targets for mitigating the effects of climate change. The UK Government recently published the Code for Sustainable Homes (DCLG, 2006). Within this is the objective of developing zero carbon domestic new-build dwellings by 2016. It is the domestic zero carbon homes agenda which is the focus of this report. The report is the culmination of a research project, funded by Northumbria University and conducted from February 2008 to July 2008, involving researchers from the Sustainable Cities Research Institute (within the School of the Built Environment) and academics, also from within the School. The aim of the project was to examine, in a systematic and holistic way, the critical issues, drivers and barriers to building and adapting houses to meet zero carbon targets. The project involved a wide range of subject specialisms within the built environment and took a multi-disciplinary approach. Practitioner contribution was enabled through a workshop. The focus of this work was to review the academic literature on the built environment sector and its capabilities to meet zero carbon housing targets. It was not possible to undertake a detailed review of energy efficiency or micro-generation technologies; the research focused instead on four areas: policy, behaviour, supply chain and technology. What follows are the key findings of the review work undertaken. Chapter One presents the findings of the policy and regulation review. In Chapter Two the review of behavioural aspects of energy use in buildings is presented. Chapter Three presents the findings of the review of supply chain issues. Chapter Four presents the findings of the technology review, which focuses on phase change materials. A summary of the key barriers and enablers, and areas for future research work, concludes this report in Chapter Five.
Research is always a work in progress, and therefore comments on this document are most welcome, as are offers of collaboration towards solutions. The School of the Built Environment at Northumbria University strives to embed its research in practical applications and solutions to the need for a low carbon economy.