
    Feasibility Study: Vertical Farm EDEN

    Hundreds of millions of people around the world do not have access to sufficient food. With the global population continuing to increase, global food output will need to rise drastically to meet demand. At the same time, the amount of land suitable for agriculture is finite, so it is not possible to meet the growing demand simply by increasing the use of land. Thus, to be able to feed the entire global population, and to continue to do so in the future, it will be necessary to drastically increase the food output per land area. One idea recently discussed in the scientific community is Vertical Farming (VF), which cultivates food crops on vertically stacked levels in (high-rise) buildings. The Vertical Farm, it is argued, would allow for more food production in a smaller area. Additionally, a Vertical Farm could be situated almost anywhere (e.g. taiga or desert regions, cities), which would make it possible to reduce the amount of transportation needed to deliver the crops to supermarkets. The technologies required for the Vertical Farm are well known and already in use in conventional terrestrial greenhouses, as well as in designs of bioregenerative Life Support Systems for space missions. However, the economic feasibility of the Vertical Farm, which will determine whether this concept is developed or not, has not yet been adequately assessed. Through a Concurrent Engineering (CE) process, the DLR Institute for Space Systems (RY) in Bremen aims to apply its know-how of Controlled Environment Agriculture (CEA) technologies in space systems to provide valuable spin-off projects on Earth and to deliver the first engineering study of a Vertical Farm that assesses its economic feasibility.

    Implementation and Evaluation of Algorithmic Skeletons: Parallelisation of Computer Algebra Algorithms

    This thesis presents design and implementation approaches for parallel algorithms of computer algebra. We use algorithmic skeletons as well as further approaches, such as data parallel arithmetic and actors. We have implemented skeletons for divide and conquer algorithms and for some special parallel loops that we call ‘repeated computation with a possibility of premature termination’. We introduce a rational data parallel arithmetic. We focus on parallel symbolic computation algorithms; for these algorithms our arithmetic provides a generic parallelisation approach. The implementation is carried out in Eden, a parallel functional programming language based on Haskell. This choice enables us to encode both the skeletons and the programs in the same language. Moreover, it allows us to refrain from using two different languages, one for the implementation and one for the interface, in our implementation of computer algebra algorithms.

    Further, this thesis presents methods for the evaluation and estimation of parallel execution times. We partition the parallel execution time into two components. One of them accounts for the quality of the parallelisation; we call it the ‘parallel penalty’. The other is the sequential execution time. For the estimation, we predict both components separately, using statistical methods. This enables very confident estimations while using drastically fewer measurement points than other methods. We have applied both our evaluation and estimation approaches to the parallel programs presented in this thesis. We have also used existing estimation methods.

    We developed divide and conquer skeletons for the implementation of fast parallel multiplication. We have implemented the Karatsuba algorithm, Strassen’s matrix multiplication algorithm and the fast Fourier transform. The latter was used to implement polynomial convolution, which leads to a further fast multiplication algorithm. Specifically for our implementation of Strassen’s algorithm we designed and implemented a divide and conquer skeleton based on actors. For the parallel fast Fourier transform we not only used new divide and conquer skeletons but also developed a map-and-transpose skeleton, which enables good parallelisation of the Fourier transform. The parallelisation of Karatsuba multiplication shows very good performance. We have analysed the parallel penalty of our programs and compared it to the serial fraction, an approach known from the literature. We also performed execution time estimations of our divide and conquer programs.

    This thesis also presents a parallel map+reduce skeleton scheme. It allows us to combine the usual parallel map skeletons, such as parMap, farm and workpool, with a premature termination property. We use this to implement the so-called ‘parallel repeated computation’, a special form of a speculative parallel loop. We have implemented two probabilistic primality tests, the Rabin–Miller test and the Jacobi sum test, and parallelised both with our approach. We analysed the task distribution and determined fitting configurations of the Jacobi sum test. We have shown formally that the Jacobi sum test can be implemented in parallel. Subsequently, we parallelised it, analysed the load balancing issues, and produced an optimisation. The latter enabled a good implementation, as verified using the parallel penalty. We have also estimated the performance of the tests for further input sizes and numbers of processing elements.
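    As a rough illustration of the divide and conquer skeletons discussed above, here is a minimal sequential sketch in plain Haskell. This is not the thesis’s Eden API; the name divConq and its parameter order are illustrative. In Eden, the recursive map would be replaced by parallel process instantiation:

        -- A minimal sequential sketch of a divide-and-conquer skeleton;
        -- in Eden the recursive 'map' would spawn parallel processes.
        divConq :: (a -> Bool)      -- is the instance trivial?
                -> (a -> b)         -- solve a trivial instance directly
                -> (a -> [a])       -- divide into subproblems
                -> (a -> [b] -> b)  -- combine subresults
                -> a -> b
        divConq trivial solve divide combine = go
          where
            go x | trivial x = solve x
                 | otherwise = combine x (map go (divide x))

    Karatsuba multiplication, Strassen’s algorithm and the FFT all fit this shape: each divides its input into subproblems, solves them recursively, and combines the subresults.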
    Parallelisation of the Jacobi sum test and our generic parallelisation scheme for the repeated computation are our original contributions. The data parallel arithmetic was defined not only for integers, which is already known, but also for rationals. We handle the common factors of the numerator or denominator of the fraction with the modulus in a novel manner; this is required to obtain a true multiple-residue arithmetic, a novel result of our research. Using these mathematical advances, we have parallelised determinant computation using Gauß elimination. As before, we performed a task distribution analysis and an estimation of the parallel execution time of our implementation. A similar computation in Maple emphasised the potential of our approach. Data parallel arithmetic enables the parallelisation of entire classes of computer algebra algorithms. Summarising, this thesis presents and thoroughly evaluates new and existing design decisions for high-level parallelisations of computer algebra algorithms.
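    To make the residue representation of rationals concrete, the following is a hedged sketch of the standard construction only: it assumes the denominator shares no factor with the modulus, whereas handling such common factors is precisely the thesis’s novel contribution and is not reproduced here. Function names are illustrative:

        -- Sketch: a rational a/b is represented modulo m as a * b^(-1) mod m.
        -- Assumes gcd(b, m) == 1; the thesis's novel treatment of common
        -- factors between b and m is NOT reproduced here.
        toResidue :: Integer -> Integer -> Integer -> Integer
        toResidue a b m = ((a `mod` m) * modInverse b m) `mod` m

        -- Modular inverse via the extended Euclidean algorithm.
        modInverse :: Integer -> Integer -> Integer
        modInverse b m = x `mod` m
          where
            (_, x, _) = egcd (b `mod` m) m
            egcd 0 n = (n, 0, 1)
            egcd k n = let (g, s, t) = egcd (n `mod` k) k
                       in (g, t - (n `div` k) * s, s)

    With rationals mapped to residues for several moduli at once, arithmetic on each residue proceeds independently, which is what makes the approach data parallel.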

    Architecture aware parallel programming in Glasgow parallel Haskell (GPH)

    General purpose computing architectures are evolving quickly to become manycore and hierarchical: i.e. a core can communicate more quickly locally than globally. To be effective on such architectures, programming models must be aware of the communications hierarchy. This thesis investigates a programming model that aims to share the responsibility for task placement, load balance, thread creation, and synchronisation between the application developer and the runtime system. The main contribution of this thesis is the development of four new architecture-aware constructs for Glasgow parallel Haskell (GpH) that exploit information about task size and aim to reduce communication for small tasks, preserve data locality, or distribute large units of work. We define a semantics for the constructs that specifies the sets of PEs each construct identifies, and we check four properties of the semantics using QuickCheck. We report a preliminary investigation of architecture-aware programming models that abstract over the new constructs; in particular, we propose architecture-aware evaluation strategies and skeletons. We investigate three common paradigms, namely data parallelism, divide-and-conquer and nested parallelism, on hierarchical architectures with up to 224 cores. The results show that the architecture-aware programming model consistently delivers better speedup and scalability than existing constructs, together with a dramatic reduction in execution time variability. We present a comparison of functional multicore technologies that reports some of the first multicore results for Feedback Directed Implicit Parallelism (FDIP) and for the semi-explicit parallel languages GpH and Eden. The comparison reflects the growing maturity of the field by systematically evaluating four parallel Haskell implementations on a common multicore architecture, contrasting the programming effort each language requires with the parallel performance delivered. Finally, we investigate the minimum thread granularity required to achieve satisfactory performance for three parallel functional language implementations on a multicore platform. The results show that GHC-GUM requires a larger thread granularity than Eden and GHC-SMP, and that the required thread granularity rises as the number of cores rises.
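    For readers unfamiliar with GpH’s semi-explicit style, the following minimal data-parallel example uses the standard Control.Parallel.Strategies module from the parallel package. The thesis’s architecture-aware constructs are not shown here; they additionally control where in the communication hierarchy such tasks are placed:

        import Control.Parallel.Strategies (parMap, rdeepseq)

        -- Plain (architecture-oblivious) data parallelism in GpH style:
        -- evaluate each list element to normal form in parallel.
        parSquares :: [Int] -> [Int]
        parSquares = parMap rdeepseq (\x -> x * x)

        main :: IO ()
        main = print (sum (parSquares [1 .. 1000]))

    With plain parMap the runtime is free to place the sparked tasks anywhere; the point of the architecture-aware constructs is to constrain that placement using knowledge of task size and locality.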

    Enhancing in vitro biocompatibility and corrosion protection of organic-inorganic hybrid sol-gel films with nanocrystalline hydroxyapatite

    Application of novel organic-inorganic hybrid sol-gel coatings containing dispersed hydroxyapatite (HAp) particles improves the biocompatibility and the normal human osteoblast (NHOst) response, in terms of osteoblast viability and adhesion, of a Ti6Al4V alloy routinely used in medical implants. The incorporation of HAp particles additionally results in more effective barrier properties and improved corrosion protection of the Ti6Al4V alloy, through a higher degree of cross-linking in the organopolysiloxane matrix and enhanced film thickness.

    Examining the strategy development process through the lens of complex adaptive systems theory

    The development of strategy remains a debate for academics and a concern for practitioners. Published research has focused on producing models for strategy development and on studying how strategy is developed in organisations. The Operational Research literature has highlighted the importance of considering complexity within strategic decision making, but little has been done to link strategy development with complexity theories, despite organisations and organisational environments becoming increasingly complex. We review the dominant streams of strategy development and complexity theories. Our theoretical investigation results in the first conceptual framework linking an established Strategic Operational Research model, the Strategy Development Process model, with complexity via Complex Adaptive Systems theory. We present preliminary findings from applying this conceptual framework to a longitudinal, in-depth case study in order to demonstrate the advantages of using this integrated conceptual model. Our research shows that the proposed conceptual model provides rich data and allows for a more holistic examination of the strategy development process. © 2012 Operational Research Society Ltd. All rights reserved.

    The Continuing of Organicism: An Enviro-organic Form Integrating to the Built Environment

    Humans have engaged nature as an ideal paradigm of form and function since time immemorial. Within the organic paradigm, architecture may be seen to constitute an organic relationship with nature in any climatic, cultural and social condition. Though often rejected in canonical modern architecture, organic forms have been manifested in various forms and with different purposes. Recently, some modern organic movements have emerged, such as those following principles of biomorphic form and biomimicry. Unfortunately, these movements often fail to fully embrace organicism in the totality and depth of their relationship to the natural. Following D'Arcy Thompson's On Growth and Form, this research aims at uncovering the key attributes of natural form in order to allow the design of enviro-organic form. Such form is defined as one that opens to the natural world, facilitating the making of architecture that sustains human life and nature today and in the future. To carry this out, the research offers graphic and analytic tools that aid understanding of what organic architecture is and of how we can undertake a design process leading to enviro-organic form. The research concentrates on the analogies between architectural form and natural forms. The outcomes are explained, to paraphrase D'Arcy Thompson, by the “equilibrium resulting from the interaction or balance of forces.” Natural forms result from the fitness of the resolution of inside and outside living forces. Similarly, architectural organic form, as embodied in indigenous or vernacular architecture, results from integrating environmental and socio-cultural forces. Because architecture must adapt to cultural and social changes, human built environments are argued to be functionally more complex than those made by animals, as seen for example in a bird's nest, spider's web, or ant-hill. Since vernacular architecture is largely shaped by instinct, and in response to specific local place and culture, vernacular forms are not typically suited to being applied directly to the needs of contemporary culture. Geometry is proposed as the medium for the historical examination of the incidental analogy between nature and organic architecture, for the rational fitness of integrating natural principles with architectural disciplines, and for the selective transformation of enviro-organic forms that promise to more fully integrate the works of humans into the natural environment.

    Electrochemical and Microstructural Analysis of Solid Oxide Fuel Cell Electrodes

    Fuel cells offer several advantages over conventional methods of power generation, such as substantially higher conversion efficiency, modular construction, minimal siting restrictions, and much lower production of pollutants. The solid oxide fuel cell (SOFC), in principle, can utilize all kinds of combustion fuels, including coal-derived syngas (CSG). The U.S. Department of Energy is currently working on coupling coal gasification and SOFCs to form Integrated Gasification Fuel Cell (IGFC) systems. Such IGFC systems will enable the clean, efficient and cost-effective use of coal, the nation's most abundant fossil fuel.

    However, several issues need to be addressed before SOFCs can be commercialized. The anode of the SOFC can interact with trace impurities in CSG, such as arsenic, phosphorus and chlorine, which leads to severe degradation of cell performance. The operating temperature of current SOFCs is also high (>800 °C), which increases system cost. Hence, further study of both the anode and the cathode is necessary in order to develop high-performance, long-lifetime SOFCs for IGFC power plants.

    One of the aims of this project is to investigate the degradation mechanisms of the Ni-YSZ (yttria-stabilized zirconia) anode in PH3-containing coal syngas. Microstructural changes in the materials and the degradation of electrochemical performance were studied simultaneously. Key factors such as the operating temperature, the impurity concentration and the cell operating conditions were investigated, which is essential to understanding the degradation behavior of SOFCs. It was found that Ni phosphate is the product of the reaction between Ni and PH3, leading to the loss of both the electrochemical activity and the electronic conductivity of the anode. In addition, surface reconstruction is ascribed to Ni-P diffusion to the anode surface. Impedance spectra are fitted with equivalent circuits to interpret the physical and chemical processes during degradation. The impedance analysis shows that the mass diffusion resistance increases faster than the charge transfer resistance. Anode degradation is accelerated by increases in the operating temperature, the PH3 concentration and the electrical bias.

    On the cathode side, novel metal oxide nanofibers have been developed as SOFC cathode materials using the electrospinning method in order to enhance the electrochemical performance of SOFCs. A high-performance cathode has been developed by infiltrating lanthanum strontium manganite (LSM) into a porous yttria-stabilized zirconia (YSZ) nanofiber backbone, which shows superior oxygen reduction activity compared to a conventional powder cathode. A single SOFC unit with LSCF nanofibers as the cathode can reach a power density of 1.07 W/cm2 at 750 °C. Such a cathode architecture could bring the operating temperature of SOFCs down to the intermediate range, significantly reducing the cost of the whole SOFC system.
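    To illustrate the kind of equivalent-circuit analysis mentioned above, the following sketch computes the impedance of a generic Rs + (Rct parallel with Cdl) circuit. The topology and the parameter values are illustrative assumptions, not the fitted model or data from this work:

        import Data.Complex (Complex ((:+)), magnitude)

        -- Impedance of a simple Rs + (Rct || Cdl) equivalent circuit:
        -- Z(w) = Rs + Rct / (1 + j*w*Rct*Cdl). Values below are placeholders.
        impedance :: Double -> Double -> Double -> Double -> Complex Double
        impedance rs rct cdl w = (rs :+ 0) + (rct :+ 0) / (1 :+ (w * rct * cdl))

        main :: IO ()
        main = mapM_ (print . magnitude . impedance 0.1 0.5 1.0e-3)
                     [1, 10, 100, 1000]

    Fitting such a model to measured spectra is what allows the charge transfer resistance (Rct) and diffusion-related contributions to be tracked separately as degradation proceeds.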

    Pronghorn procurement on the northern plains: a case for small-scale hunting

    In general, when an archaeologist addresses the issue of faunal procurement on the Plains, especially the northern Plains, the model used entails the communal hunting of bison. The non-communal procurement of a secondary prey species is frequently overlooked by Plains archaeologists. It is the intent of this thesis to present a pronghorn procurement strategy that aligns with the current archaeological evidence gathered from across the northern Plains. Based on the abundance of Wyoming and Great Basin communal pronghorn procurement features, along with a single northern Plains trapping structure, the procurement of pronghorn is often regarded as a communal undertaking. However, a review of the site literature reveals that archaeological pronghorn remains are present in small quantities in numerous habitation sites situated throughout their prehistoric range. In addition, evidence for pronghorn kill sites on the northern Plains is minimal at present. This leaves one to ponder the question: why are small quantities of pronghorn remains present in campsites across the northern Plains?

    The first part of this thesis addresses the above question through an examination of the unique behavioural and morphological characteristics of the pronghorn, as well as of bow and arrow technology. This is undertaken in order to demonstrate the suitability of both the pronghorn and the aboriginal hunting technology to small-scale procurement. In addition, ethnographic, historic and archaeological data concerning pronghorn procurement on the northern Plains are presented in a framework that allows for a revision of prevailing models of this activity. Small-scale and communal procurement are also analyzed within the theoretical framework of optimal foraging theory. This provides evidence that the small-scale hunting of pronghorn was an efficient hunting strategy, and it is therefore reasonable to assume that it was practiced prehistorically.

    The remainder of this thesis addresses a secondary, yet relevant, question involving the lack of visibility of pronghorn remains in the archaeological record. If pronghorn were an obtainable and useful secondary resource, then why are such small quantities of bone present at archaeological sites situated within ideal pronghorn habitat? This question is explored within the context of bone survivorship, with both cultural and non-cultural reasons for the differential preservation of pronghorn remains outlined. Specifically, carnivore attrition, weathering and trampling are explored as possible non-cultural agents that affect the archaeological visibility of pronghorn assemblages. Cultural processes, including primary and secondary butchering and processing strategies as well as carcass transportation decisions, are also investigated. In addition, the pronghorn assemblages from EbPi-75 and D1Ou-72 are statistically tested to determine whether bone density correlates with element frequency.

    Finally, the two recently excavated northern Plains pronghorn assemblages from EbPi-75 and D1Ou-72 are analyzed and compared to the existing body of archaeological research from the northern Plains, High Plains, and the Wyoming Basin. From this comparison, and from the thesis research in general, a new model for pronghorn procurement is developed that better suits the northern Plains archaeological record to date.
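    The abstract does not name the density-frequency statistic used, but a rank correlation of the kind commonly applied in zooarchaeology can be sketched as follows. Spearman's rho and the tie-free assumption are our illustrative choices, not necessarily the test the thesis applied:

        import Data.List (sortBy)
        import Data.Ord (comparing)

        -- Ranks of a tie-free sample, returned in the original data order.
        ranks :: [Double] -> [Double]
        ranks xs = map snd (sortBy (comparing fst) (zip order [1 ..]))
          where
            order = map fst (sortBy (comparing snd) (zip [0 :: Int ..] xs))

        -- Spearman's rho for tie-free data: 1 - 6 * sum d^2 / (n (n^2 - 1)).
        spearman :: [Double] -> [Double] -> Double
        spearman xs ys = 1 - 6 * sum (map (^ 2) ds) / (n * (n * n - 1))
          where
            ds = zipWith (-) (ranks xs) (ranks ys)
            n  = fromIntegral (length xs)

    A strong positive correlation between bone mineral density and element frequency would suggest that density-mediated attrition, rather than human behaviour alone, shaped the surviving assemblage.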