
    Parallel processing and expert systems

    Whether it be monitoring the thermal subsystem of Space Station Freedom or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are: (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small; (2) dividing the problem-solving process into relatively independent partitions is difficult; and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.
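
    The remedy the survey points to, data parallelism, amounts to matching rule conditions against many facts at once. The Python sketch below illustrates the idea by farming partitions of a fact base out to worker processes; the one-rule knowledge base, fact format, and worker count are hypothetical and not taken from the surveyed systems.

```python
# Illustrative sketch of data parallelism in a rule-based system: one rule's
# condition is matched against partitions of the fact base concurrently.
# The rule, fact format, and worker count are hypothetical.
from concurrent.futures import ProcessPoolExecutor

FACTS = [{"sensor": f"T{i}", "temp": 20 + i % 15} for i in range(10_000)]

def match_partition(partition):
    """Fire a simple over-temperature rule against one slice of the facts."""
    return [fact["sensor"] for fact in partition if fact["temp"] > 30]

def parallel_match(facts, workers=4):
    size = len(facts) // workers
    chunks = [facts[i * size:(i + 1) * size] for i in range(workers)]
    chunks[-1].extend(facts[workers * size:])   # sweep up the remainder
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(match_partition, chunks)
    return [sensor for part in results for sensor in part]

if __name__ == "__main__":
    print(len(parallel_match(FACTS)), "matches")
```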

    The Honeycomb Architecture: Prototype Analysis and Design

    Due to the inherent potential of parallel processing, a lot of attention has focused on massively parallel computer architectures. To a large extent, the performance of a massively parallel architecture is a function of the flexibility of its communication network. The ability to configure the topology of the machine determines the ease with which problems are mapped onto the architecture. If the machine is sufficiently flexible, the architecture can be configured to match the natural structure of a wide range of problems. There are essentially four unique types of massively parallel architectures:
    1. Cellular arrays
    2. Lattice architectures [21, 30]
    3. Connection architectures [19]
    4. Honeycomb architectures [24]
    All four architectures are classified as SIMD. Each, however, offers a slightly different solution to the mapping problem. The first three approaches are characterized by easily distinguishable processor, communication, and memory components. In contrast, the Honeycomb architecture contains multipurpose processing/communication/memory cells. Each cell can function as a simple CPU, a memory cell, or an element of a communication bus.
    The conventional approach to massive parallelism is the cellular array. It typically consists of an array of processing elements arranged in a mesh pattern with hard-wired connections between neighboring processors. Due to their fixed topology, cellular arrays impose severe limitations upon interprocessor communication. The lattice architecture is a somewhat more flexible approach: a lattice of processing elements is embedded in an array of simple switching elements that form a programmable interconnection network. A lattice architecture can be configured in a number of different topologies, but it is still only a partial solution to the mapping problem. The connection architecture offers a comprehensive solution to the mapping problem. It consists of a cellular array integrated into a packet-switched communication network that provides transparent communication between all processing elements. Note that the communication network is physically abstracted from the processor array, allowing the processors to evolve independently of the network.
    The Honeycomb architecture offers a unique solution to the mapping problem. It consists of an array of identical processing/communication/memory cells, each of which can function as a processor cell, a communication cell, or a memory cell. Collections of Honeycomb cells can be grouped into multi-cell CPUs, multi-cell memories, or multi-cell CPU-memory systems; multi-cell CPU-memory systems are hereafter referred to as processing clusters. The topology of the Honeycomb is determined at compilation time: during a preprocessing phase, the Honeycomb is adjusted to the desired topology. The Honeycomb cell is extremely simple, capable of only simple arithmetic and logic operations, and this simplicity is the key to the Honeycomb concept. As indicated in [24], there are two main research avenues to pursue in furthering the Honeycomb concept:
    1. Analyzing the design of a uniform Honeycomb cell
    2. Mapping algorithms onto the Honeycomb architecture
    This technical report concentrates on the first issue; while alluded to throughout the report, the second issue is not addressed in any detail.
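
    As a rough illustration of the configurable-cell idea, the Python sketch below models a small patch of identical cells whose role (processor, memory, or bus) is fixed during a preprocessing phase before any computation runs. The role names, grid size, and layout are invented for illustration and do not come from the report's actual cell design.

```python
# Sketch of Honeycomb-style cells whose role (processor, memory, or element
# of a communication bus) is fixed during a preprocessing phase.
from enum import Enum

class Role(Enum):
    PROCESSOR = "P"
    MEMORY = "M"
    BUS = "B"

class Cell:
    def __init__(self):
        self.role = None            # undetermined until preprocessing
        self.value = 0              # used when configured as MEMORY

    def configure(self, role):
        self.role = role

# Preprocessing phase: carve a 3x3 patch into a processing cluster with one
# CPU cell, a column of memory cells, and bus cells connecting the two.
grid = [[Cell() for _ in range(3)] for _ in range(3)]
layout = [[Role.MEMORY, Role.BUS, Role.PROCESSOR],
          [Role.MEMORY, Role.BUS, Role.BUS],
          [Role.MEMORY, Role.BUS, Role.BUS]]
for row, roles in zip(grid, layout):
    for cell, role in zip(row, roles):
        cell.configure(role)

print("\n".join(" ".join(cell.role.value for cell in row) for row in grid))
```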

    Computational Strategies for Faster Combustion Simulations with Detailed Chemistry

    Combustion of fossil fuels is still the biggest source of power generation in the world. However, pollutants released to the atmosphere from combustion pose a risk to human health and the environment. Hence it is desirable to design a combustor that produces the maximum useful thermal power output while keeping concentrations of harmful emissions such as CO, P.M., NOx, and SOx low. In the past, combustor design was aided by the compilation of large sets of experimental data and the development of empirical correlations, which is an expensive process. Nowadays numerical simulations have become an important tool in the research and design of combustors: they allow the study of combustion systems under hazardous conditions and beyond their performance limits, and they are usually inexpensive and fast compared to experiments. The main bottleneck in combustion simulations is the accurate prediction of the concentrations of the many species involved. Current computational fluid dynamics (CFD) simulations commonly use simplified versions of the chemical reaction mechanisms, but these simplified chemical models trade accuracy for computational time.
    In the present study the virtues of the chemical reactor network (CRN) approach are investigated and a new integration method is proposed to accelerate the calculation of species concentrations using reduced and detailed chemical mechanisms. The CRN approach enabled the implementation of a detailed methane-air chemical mechanism that incorporates 53 chemical species and 325 reactions. The CRN approach was applied to two combustor configurations: a premixed methane-air swirl burner and a non-premixed methane-air swirl burner. The CRN was built using results from CFD simulations that were obtained using simplified chemical mechanisms with just one or two reactions. Numerical predictions of the premixed combustor behavior obtained using CRN simulations were compared with other CFD simulations that used mechanisms with more reactions and chemical species. The CRN results closely matched the CFD simulations with larger chemical mechanisms: the maximum relative difference in the predicted concentrations of the major species (i.e., O2, CO2, H2O, and N2) was 2.82%. The calculation time was also greatly reduced; at best, a CRN simulation took only one seventh of the computational time of the corresponding CFD simulation. The CRN simulations of the non-premixed burner were also compared with experiments. Predicted spatial profiles of velocity, temperature, and species mass fractions were compared with measurements. The velocity and some mass fraction profiles matched the experimental measurements near the dump plane, but downstream of the dump plane the temperature was overpredicted by up to 250 K. Due to this temperature overprediction, the nitric oxide (NO) concentration was overpredicted by 30 ppm; the relative difference of the predicted NO at the outlet of the combustor is 150% when compared with the experimental value.
    Further, a novel integration method named the log-time integration method (LTIM) was developed to calculate the solution of the ideal reactors used in the CRN simulations. The method transforms the time variable to logarithmic space and uses variable time steps. The LTIM approach was applied to the solution of a perfectly stirred reactor (PSR) using a detailed chemical mechanism. PSR-LTIM results were compared with the commercial PSR code available in the CHEMKIN software package: the maximum relative difference in the concentrations of the species of interest was only 1%, and the computational times were comparable (5.3 s for PSR-LTIM versus 3 s for CHEMKIN). Compared to higher-order integration methods available in the literature, LTIM produced satisfactory results with less CPU time, taking one fifth of the computational time of a higher-order integration method. The LTIM was also applied to the solution of a premixed one-dimensional methane-air flame (FLAME-LTIM), using a mechanism that incorporates nine chemical species and five global reactions. Calculated temperature and mass fraction profiles closely matched the results obtained using the equivalent commercial code CHEMKIN PREMIX: the relative temperature difference at the outlet of the domain was 0.5% and the maximum difference in species concentrations at the outlet was 13.2%.
    The outcome of the present research can be used to perform rapid design analysis of gas turbines and similar combustors to achieve low levels of emissions.
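
    To make the log-time idea concrete, the sketch below applies the substitution tau = ln(t) to a first-order decay equation: uniform steps in tau become exponentially growing steps in t, which suits kinetics that change rapidly at early times and slowly later. This is a minimal sketch of the transformation only; the thesis's LTIM formulation, step-size control, and reactor chemistry are not reproduced here.

```python
# Minimal sketch of integrating dy/dt = f(y) in log time: with tau = ln(t),
# the chain rule gives dy/dtau = t * f(y), so uniform steps in tau are
# exponentially growing (variable) steps in t. Demonstrated with explicit
# Euler on first-order decay dy/dt = -k*y; the rate constant is illustrative.
import math

def log_time_euler(f, y0, t0, t_end, n_steps):
    tau, tau_end = math.log(t0), math.log(t_end)
    dtau = (tau_end - tau) / n_steps
    y = y0
    for _ in range(n_steps):
        t = math.exp(tau)
        y += dtau * t * f(y)    # dy/dtau = (dt/dtau) * dy/dt = t * f(y)
        tau += dtau
    return y

k = 10.0
y_num = log_time_euler(lambda y: -k * y, y0=1.0, t0=1e-8, t_end=1.0,
                       n_steps=20_000)
print(y_num, math.exp(-k))      # numerical result vs. exact exp(-k*t_end)
```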

    Mathematical Modeling Of Pre And Post Combustion Processes In Coal Power Plant

    Coal is a brownish-black sedimentary rock with organic and inorganic constituents, and it has been a vital energy resource for humans for millennia. Coal accounts for approximately one quarter of the world's energy consumption, with 65% of this energy utilized by residential consumers and 35% by industrial consumers. Coal-fired power stations provide 42% of the U.S. electricity supply. The United States holds 96% of the coal reserves in the North America region, of which 26% are known to be suitable for commercial usage. The coal combusted in these power generating facilities requires certain pre-combustion processing, while the by-products of coal combustion go through certain post-combustion processing. The application of hydrometallurgical extraction of Rare Earth Elements (REE) from North Dakota lignite coal feedstock can assist coal value amplification. Extraction of REE from lignite coals liberates REEs and critical minerals (CMs) that are vital to the electronics, power storage, aviation, and magnet industries. The REE extraction process also reduces the sulfur content of ND lignite coal, along with ash components that foul heat exchange surfaces, and can have benefits for post-combustion scrubbing units. When coal is combusted, the exhaust gases contain carbon dioxide (CO2), sulfur dioxide (SO2), oxides of nitrogen (NOx), water (H2O), and nitrogen (N2). Carbon dioxide comprises approximately 8-10 vol% of the flue gas and is reported to contribute to the greenhouse effect, a primary reason for climate change. Carbon Capture and Storage (CCS) involves the use of liquid or solid absorbents to separate CO2 from the combustion flue gas. Little data is available in the literature on gas-liquid interfacial area correlations for second generation solvents, such as MonoEthanolAmine (MEA), in structured packing absorber columns consisting of thin corrugated metal plates or gauzes designed to force fluids along complicated paths. While mathematical model development for existing post-combustion carbon capture (PCCC) technologies, such as carbon capture simulation using computational fluid dynamics (CFD) to predict mass transfer coefficients, is well developed, models describing the behavior of third generation solvents are lacking.
    Two main research opportunities exist: (i) due to the complex chemistry of coal, there is a need for a modeling tool that can account for the coal composition and the complex hydrometallurgical extraction processes to assist in designing and sizing pre-combustion REE extraction plants; and (ii) CFD models are required that can capture the mass transfer coefficients of third generation CO2 solvents in structured packing. Two primary hypotheses have been developed to address these opportunities: (1) process modeling of the hydrometallurgical extraction of REE provides a theory-based understanding that is complementary to experimental validation and, with the help of chemical kinetics and the percentage of carboxylation existing in feedstocks, can forecast the efficiency and leachability of other feedstocks; and (2) a detailed Volume of Fluid (VOF) simulation of the coupled mass and momentum transfer problem in the small intricate regions of corrugated structured packing panels placed at a 45° angle can be used to predict mass transfer coefficients for third generation solvents, using the open-source C++ based numerical framework OpenFOAM (Open Field Operation And Manipulation).
    The hydrometallurgical process model is developed using METSIM, a leading hydrometallurgical process modeling software tool. The steady-state process model provides an overview of REE production along with equipment inventory sizing. The model also has functions to define the percentage of organic carboxylic acid bonds present in the coal, since prior research has identified that the primary association of REE in lignite coal is as weakly-bonded complexes of carboxyl groups, which are the targets of the extraction technology. The CFD modeling work is expected to determine critical mass transfer coefficients for CO2 capture using structured packing columns. Further, the developed CFD model will be validated against experimental data from various industrial and literature sources.
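
    For a sense of the quantity the CFD work targets, the sketch below estimates a liquid-side mass transfer coefficient from Higbie penetration theory, approximating the contact time as corrugation length over liquid film velocity. This is a textbook estimate under assumed values, not the thesis's VOF/OpenFOAM result.

```python
# Back-of-envelope estimate of a liquid-side mass transfer coefficient using
# Higbie penetration theory: k_L = 2 * sqrt(D / (pi * t_c)). The geometry and
# velocity below are assumed values for a structured packing, not measured data.
import math

D_CO2 = 1.9e-9       # m^2/s, CO2 diffusivity in aqueous amine (typical order)
corrugation = 0.02   # m, characteristic corrugation length (assumed)
u_film = 0.1         # m/s, liquid film velocity (assumed)

t_c = corrugation / u_film                    # s, gas-liquid contact time
k_L = 2.0 * math.sqrt(D_CO2 / (math.pi * t_c))
print(f"k_L ~ {k_L:.2e} m/s")                 # on the order of 1e-4 m/s
```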

    Forge: Thermoelectric Cookstove

    Our interdisciplinary team, known as Forge, has built a cookstove that not only serves as a portable cookstove but also includes a port for charging devices such as a phone using thermoelectrics. The product has been designed for developing areas in Nicaragua where power is inaccessible and a multi-purpose cookstove/phone charger could be of use. The cookstove features a cylindrical combustion chamber that can be used for gasification, a burning process in which smoke from the fire is also burned, creating higher temperatures and a cleaner burn. The combustion chamber is insulated using refractory cement, which drops the temperature from about 700 °C inside the chamber to 200 °C outside the chamber. The cookstove outputs heat at a rate of 4.6-6.6 kW. Thermoelectric modules attached to the outside of the cookstove exploit the Seebeck effect to convert excess heat into electrical energy. Ideally, the energy would be transferred to the phone at 5 volts and 0.5-0.6 amps, and some of the electrical energy would be used to power a cooling fan to help the stove function properly. The final temperatures recorded ranged from around 400 °C to 700 °C in the combustion chamber and around 500 °C for the cooking surface. Gasification occurred successfully during this stage, and the smoke was visibly burned off. The electrical output was less successful, yielding only around 0.08 V from the thermoelectric generators due to the lack of air flow within the electrical housing and poor electrical connections. The stove does achieve its primary functionality: it is more than capable of boiling water, something that presently available cookstoves in Nicaragua cannot do consistently.
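
    For a sense of scale of the Seebeck conversion described above, the sketch below estimates a module's open-circuit voltage, V_oc = N * S * dT, and its matched-load power. The pair count, Seebeck coefficient, and internal resistance are typical values for a commercial Bi2Te3 module, assumed for illustration rather than taken from Forge's hardware.

```python
# Seebeck-effect estimate for a thermoelectric module: open-circuit voltage
# V_oc = N * S * dT and power into a matched load P = V_oc**2 / (4 * R_int).
# All module parameters are assumed, not Forge's measured specifications.
N = 127        # thermocouple pairs in a common Bi2Te3 module
S = 400e-6     # V/K, effective Seebeck coefficient per pair (assumed)
R_int = 3.0    # ohm, module internal resistance (assumed)

def te_output(dT):
    v_oc = N * S * dT                  # open-circuit voltage, V
    p_max = v_oc ** 2 / (4 * R_int)    # matched-load power, W
    return v_oc, p_max

for dT in (10, 100, 200):              # temperature difference across module, K
    v, p = te_output(dT)
    print(f"dT = {dT:3d} K: V_oc = {v:5.2f} V, P_max = {p:5.2f} W")
```

    On these assumed numbers, the reported 0.08 V would correspond to an effective temperature difference of only a degree or two across the module, consistent with the air flow and contact problems the team identified.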

    Commercialization of gallium nitride nanorod arrays on silicon for solid-state lighting

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering, 2008. Includes bibliographical references (p. 37-40).
    One important component of energy usage is lighting, which is currently dominated by incandescent and fluorescent lamps. However, due to potentially higher efficiencies and thus greater energy savings, solid-state lighting (SSL) is seriously being considered as a replacement. Currently, state-of-the-art white LEDs are made up of thin films of GaN and InGaN grown on sapphire substrates. A new LED structure is proposed, in which GaN nanorod arrays are grown on silicon substrates. This structure could be fabricated using the ordered arrangement of pores in anodized aluminum oxide (AAO) as a template for growth of the nanorod array. AAO is selected for its high porosity and the simple controllability of its pore size and separation, which can in turn produce high-density monocrystalline nanorod arrays with adjustable rod size and separation. LEDs based on rod arrays enjoy several advantages: lower cost, better yield and reliability, and higher efficiencies. Two more LED designs, in addition to the current state-of-the-art GaN LED and the proposed structure, are included for comparison. The proposed LED structure design is found to be the best after considering cost and efficiency. For commercialization of this new LED design, the market penetration plan is to form a partnership with one of the major players in the current white LED industry. This has the advantage of requiring minimal capital investment, and the product could be sold under an established brand. A simplified projection of earnings is calculated to illustrate the sustainability of this business plan.
    by Qixun Wee. M.Eng.
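
    The simplified earnings projection mentioned above lends itself to a short calculation. The sketch below shows the general shape such a projection might take under a royalty-bearing partnership; every figure (unit volume, price, cost, royalty rate, growth) is invented for illustration and none comes from the thesis.

```python
# Hypothetical earnings projection of the kind the thesis describes: yearly
# gross margin after a partner royalty, with growing unit sales. All numbers
# are invented placeholders, not figures from the business plan.
def project_earnings(units, price, unit_cost, royalty_rate, growth, years):
    for year in range(1, years + 1):
        revenue = units * price
        margin = revenue - units * unit_cost - revenue * royalty_rate
        print(f"year {year}: revenue ${revenue:,.0f}, margin ${margin:,.0f}")
        units = int(units * (1 + growth))

project_earnings(units=100_000, price=2.50, unit_cost=1.40,
                 royalty_rate=0.05, growth=0.30, years=5)
```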

    Experimental and numerical simulation of hydraulic fracturing

    Thesis (M.S.) University of Alaska Fairbanks, 2017.
    Hydraulic fracturing (HF) has many applications in different fields, such as stimulation of oil and gas reservoirs, in situ stress measurements, stress relief for tunneling projects, and underground mining applications such as block caving. In the HF process, high pressure fluid is injected into a well to generate fractures in tight rock formations. The technique is particularly suitable for developing hydrocarbon energy resources in tight rock formations, such as shale, with very low permeability. An experimental setup was designed and developed to simulate the HF process in the laboratory. Cubic plaster specimens were molded and HF experiments were conducted on the simulated plaster models. Five laboratory tests were performed on cubic specimens under different stress conditions. Because the uniaxial compressive strength of the plaster was about 1600 psi, the applied vertical stress in all experiments was 1000 psi to avoid breaking the specimens before injection of fluid. The differential horizontal stress varied from 100 to 500 psi; these stress levels correspond to shallow formations in a real environment. It was observed that increasing the differential horizontal stress by 100 psi decreases the minimum pressure required to initiate HF by about 100 psi. These results were in agreement with the 2D failure criterion of HF. All in all, the small-scale HF experiments were conducted successfully in the rock mechanics lab. It was observed that vertical hydraulic fractures propagate along the direction of maximum horizontal stress, in agreement with HF propagation theory. Three-dimensional (3D) numerical models were developed and computer simulations were conducted with ABAQUS, a commercially available finite element analysis (FEA) software package. The numerical simulation results compared favorably with those from the laboratory experiments, and verification and analysis were carried out. Since the results obtained from the numerical model agreed with the experimental results and verified the correctness of the model, further investigation was carried out with the developed computer models. Several scenarios with different vertical stresses and different levels of horizontal stress were simulated. The statistical software R was used to generate a 3D failure criterion for HF in shallow formations.... It can be stated that in shallow formations, vertical stress has the least effect among the stress components on the minimum pressure required to initiate HF.
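
    The 2D criterion the experiments agree with behaves like the classical Hubbert-Willis breakdown relation, sketched below: each 100 psi increase in the differential horizontal stress lowers the computed initiation pressure by 100 psi, matching the observation above. The tensile strength and stress values are illustrative, and the thesis's fitted 3D criterion is not reproduced here.

```python
# Classical 2D breakdown-pressure relation (Hubbert-Willis) for a vertical
# fracture from a borehole: P_b = 3*sigma_h - sigma_H + T - P_p. Values are
# illustrative; the thesis's own 3D criterion is not reproduced here.
def breakdown_pressure(sigma_h, sigma_H, tensile_strength, pore_pressure=0.0):
    """Minimum injection pressure (psi) to initiate a vertical fracture."""
    return 3.0 * sigma_h - sigma_H + tensile_strength - pore_pressure

sigma_h = 1000.0     # psi, minimum horizontal stress (illustrative)
T = 300.0            # psi, tensile strength of the plaster (assumed)
for dH in (100, 200, 300, 400, 500):     # differential horizontal stress, psi
    p_b = breakdown_pressure(sigma_h, sigma_h + dH, T)
    print(f"dH = {dH} psi -> P_b = {p_b:.0f} psi")
```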