Research in the design of high-performance reconfigurable systems
An initial design for the Bit Processor (BP), referred to in prior reports as the Processing Element (PE), has been completed. Eight BPs, together with their supporting random-access memory, a 64K x 9 ROM to perform addition, routing logic, and some additional logic, constitute the components of a single stage. An initial stage design is given. Stages may be combined to perform high-speed fixed- or floating-point arithmetic. Stages can be configured into a range of arithmetic modules that includes bit-serial one- or two-dimensional arrays; one- or two-dimensional arrays of fixed- or floating-point processors; and specialized uniprocessors, such as long-word arithmetic units. One to eight BPs represent a likely initial chip level. The stage would then correspond to a first-level pluggable module. As both this project and VLSI CAD/CAM progress, however, it is expected that the chip level will migrate upward to the stage and, perhaps, ultimately to the box level. The BP RAM, consisting of two banks, holds only operands and indices. Programs reside at the box (high-level function) and system levels. At the system level, initial effort has been concentrated on specifying the tools needed to evaluate design alternatives.
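The 64K x 9 ROM that performs addition can be illustrated with a small sketch. The addressing scheme here is our reading, not a detail given in the report: concatenating two 8-bit operands forms a 16-bit address into 65,536 nine-bit words, each holding the 8-bit sum plus a carry-out bit.

```python
# Sketch of addition by ROM lookup. The address layout is an
# assumption (not stated in the report): the two 8-bit operands are
# concatenated into a 16-bit address, and each 9-bit word stores the
# 8-bit sum plus a carry-out bit.
ROM = [(addr >> 8) + (addr & 0xFF) for addr in range(1 << 16)]

def rom_add(a: int, b: int) -> tuple[int, int]:
    """Return (8-bit sum, carry-out) for 0 <= a, b <= 255."""
    word = ROM[(a << 8) | b]       # a single lookup replaces an adder
    return word & 0xFF, word >> 8
```

A table of this shape trades 64 KB of storage for combinational adder logic, which is consistent with the stage's use of a dedicated addition ROM alongside the BPs.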
Overview and Summary of the Third AIAA High Lift Prediction Workshop
The third AIAA CFD High-Lift Prediction Workshop was held in Denver, Colorado, in June 2017. The goals of the workshop continued in the tradition of the first and second high-lift workshops: to assess the numerical prediction capability of current-generation computational fluid dynamics (CFD) technology for swept, medium/high-aspect-ratio wings in landing/takeoff (high-lift) configurations. This workshop analyzed the flow over two different configurations: a clean high-lift version of the NASA Common Research Model, and the JAXA Standard Model. The former was a CFD-only study, as experimental data were not available prior to the workshop. The latter was a nacelle/pylon installation study that included comparison with experimental wind tunnel data. The workshop also included a 2-D turbulence model verification exercise. Thirty-five participants submitted a total of 79 data sets of CFD results. A variety of grid systems (both structured and unstructured) as well as different flow simulation methodologies (including Reynolds-averaged Navier-Stokes and Lattice-Boltzmann) were used. This paper analyzes the combined results from all workshop participants. A statistical summary of the CFD results is also included.
Conversations on Method: Deconstructing Policy through the Researcher Reflective Journal
In this article the authors argue that the researcher reflective journal is a critical interpretive tool for conducting educational policy analysis. The idea for this research grew from the experiences of a doctoral candidate (Ruth) in pursuit of a policy-focused dissertation and a series of ongoing conversations with her qualitative methodologist (Valerie). The paper takes the form of a dialogue on policy analysis and the various uses of the journal, including found data poetry and photographic representations of the self as a research instrument, which may expand the findings and increase options for data presentation. Sections of the paper include a discussion of journal writing as a creative process, the reflective role of the researcher when examining policies, and the challenges of constructing a well-designed methodological framework.
New Product Innovation with Multiple Features and Technology Constraints
We model a firm's decisions about product innovation, focusing on the extent to which features should be improved or changed in the succession of models that comprise a life cycle. We show that the structure of the internal and external environment in which a firm operates suggests when to innovate to the technology frontier. The criterion is maximization of the expected present value of products during the life cycle. Computational studies complement the theoretical results and lead to insights about when to bundle innovations across features. The formalization was influenced by extensive interviews with managers in a high-technology firm that dominates its industry.
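The expected-present-value criterion can be made concrete with a toy sketch. All numbers, names, and the two-strategy framing below are hypothetical, not the authors' model: it simply compares the expected discounted value of steady incremental releases against a riskier jump to the technology frontier.

```python
# Toy sketch of the expected-present-value criterion (hypothetical
# numbers and strategies, not the authors' model). Each model release
# at period t yields an expected cash flow, discounted to the start
# of the life cycle.
def expected_present_value(cash_flows, success_probs, rate=0.10):
    return sum(p * c / (1 + rate) ** t
               for t, (c, p) in enumerate(zip(cash_flows, success_probs)))

# Incremental improvement: steady, near-certain payoffs each release.
incremental = expected_present_value([10, 10, 10], [0.9, 0.9, 0.9])
# Frontier jump: nothing at first, then larger but riskier payoffs.
frontier = expected_present_value([0, 25, 25], [1.0, 0.6, 0.6])
```

With these made-up figures the frontier jump wins; changing the discount rate or success probabilities flips the comparison, which is the kind of environment-dependent trade-off the abstract describes.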
An immersed discontinuous Galerkin method for compressible Navier-Stokes equations on unstructured meshes
We introduce an immersed high-order discontinuous Galerkin method for solving
the compressible Navier-Stokes equations on non-boundary-fitted meshes. The
flow equations are discretised with a mixed discontinuous Galerkin formulation
and are advanced in time with an explicit time marching scheme. The
discretisation meshes may contain simplicial (triangular or tetrahedral)
elements of different sizes and need not be structured. On the discretisation
mesh the fluid domain boundary is represented with an implicit signed distance
function. The cut-elements partially covered by the solid domain are integrated
after tessellation with the marching triangles or marching tetrahedra algorithms. Two
alternative techniques are introduced to overcome the excessive stable time
step restrictions imposed by cut-elements. In the first approach the cut-basis
functions are replaced with the extrapolated basis functions from the nearest
largest element. In the second approach the cut-basis functions are simply
scaled proportionally to the fraction of the cut-element covered by the solid.
To achieve high-order accuracy additional nodes are introduced on the element
faces abutting the solid boundary. Subsequently, the faces are curved by
projecting the introduced nodes to the boundary. The proposed approach is
verified and validated with several two- and three-dimensional subsonic and
hypersonic low Reynolds number flow applications, including the flow over a
cylinder, a space capsule and an aerospace vehicle.
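The time-step difficulty that motivates the two stabilization techniques can be seen in a minimal 1-D sketch of our own (not from the paper): when a cut element's mass matrix is integrated only over its fluid part, the smallest eigenvalue collapses as the fluid fraction shrinks, which is what drives the explicit stable time step toward zero.

```python
import numpy as np

# 1-D illustration (ours, not from the paper) of why small cut
# elements restrict the explicit time step: with the linear basis
# {1, x} on a unit element whose fluid part is [0, a], the mass
# matrix is integrated only over [0, a], and its smallest eigenvalue
# decays like O(a^3) as a -> 0.
def cut_mass_matrix(a: float) -> np.ndarray:
    return np.array([[a,          a ** 2 / 2],
                     [a ** 2 / 2, a ** 3 / 3]])

for a in (1.0, 0.1, 0.01):
    lam_min = np.linalg.eigvalsh(cut_mass_matrix(a))[0]
    print(f"fluid fraction {a:5.2f}: smallest mass eigenvalue {lam_min:.2e}")
```

Both remedies in the abstract attack exactly this collapse: extrapolating basis functions from a large neighbour removes the tiny-support basis altogether, while scaling the cut basis by the covered fraction rebalances the mass matrix.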
Geostatistical modeling of the spatial variability of arsenic in groundwater of southeast Michigan
Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/94929/1/wrcr10188.pd
OVERFLOW Contribution to HiLiftPW-3
We plan to perform the following sets of computations. For all our contributions (except where stated): Code: OVERFLOW; Turbulence model: SAnegRCQCR2000.
1. Results will be submitted for both the full chord flap gap (Case 1a) and the partially sealed chord flap gap (Case 1c):
   1. Grid refinement study;
   2. Grids: structured overset grids supplied by the HiLiftPW committee;
   3. Connectivity: Domain Connectivity Framework (DCF).
2. Results will be submitted for the JAXA Standard Model, nacelle/pylon off (Case 2a) and nacelle/pylon on (Case 2c):
   1. Alpha study;
   2. Grids: structured overset grids supplied by the HiLiftPW committee;
   3. Connectivity: Pegasus 5 (Peg5).
3. A study of the effects of different connectivity paradigms:
   1. DCF vs. Peg5 for the HLCRM cases;
   2. DCF vs. C3P (NASA Ames) vs. Peg5 for the JSM cases;
   3. The JSM grids will be the focus, where we hope to see trends with reference to the wind tunnel data.
4. Adaption cases will be attempted (and submitted where appropriate):
   1. Cases 1c and 1d: HLCRM;
   2. Cases 2c and 2d: JSM;
   3. Grids: near-body grids provided by the committee, Cartesian off-body grids;
   4. AMR near-body and off-body adaption.
5. Case 3 turbulence model verification study:
   1. Grid: series of the 3 finest grids as defined at http://turbmodels.larc.nasa.gov/airfoilwakeverif.html;
   2. Turbulence models: SAneg and SAneg RCQCR2000.
OVERFLOW 2.2 is a Reynolds-averaged Navier-Stokes (RANS) code developed by NASA.
Visualization and exploratory analysis of epidemiologic data using a novel space time information system
Abstract
Background
Recent years have seen an expansion in the use of Geographic Information Systems (GIS) in environmental health research. In this field GIS can be used to detect disease clustering, to analyze access to hospital emergency care, to predict environmental outbreaks, and to estimate exposure to toxic compounds. Despite these advances the inability of GIS to properly handle temporal information is increasingly recognised as a significant constraint. The effective representation and visualization of both spatial and temporal dimensions therefore is expected to significantly enhance our ability to undertake environmental health research using time-referenced geospatial data. Especially for diseases with long latency periods (such as cancer) the ability to represent, quantify and model individual exposure through time is a critical component of risk estimation. In response to this need a STIS – a Space Time Information System has been developed to visualize and analyze objects simultaneously through space and time.
Results
In this paper we present a "first use" of a STIS in a case-control study of the relationship between arsenic exposure and bladder cancer in south eastern Michigan. Individual arsenic exposure is reconstructed by incorporating spatiotemporal data including residential mobility and drinking water habits. The unique contribution of the STIS is its ability to visualize and analyze residential histories over different temporal scales. Participant information is viewed and statistically analyzed using dynamic views in which values of an attribute change through time. These views include tables, graphs (such as histograms and scatterplots), and maps. In addition, these views can be linked and synchronized for complex data exploration using cartographic brushing, statistical brushing, and animation.
Conclusion
The STIS provides new and powerful ways to visualize and analyze how individual exposure and associated environmental variables change through time. We expect to see innovative space-time methods being utilized in future environmental health research now that the successful "first use" of a STIS in exposure reconstruction has been accomplished.
http://deepblue.lib.umich.edu/bitstream/2027.42/112824/1/12942_2004_Article_41.pd
Upgrading from Gaussian Processes to Student's-T Processes
Gaussian process priors are commonly used in aerospace design for performing
Bayesian optimization. Nonetheless, Gaussian processes suffer two significant
drawbacks: outliers are a priori assumed unlikely, and the posterior variance
conditioned on observed data depends only on the locations of those data, not
the associated sample values. Student's-T processes are a generalization of
Gaussian processes, founded on the Student's-T distribution instead of the
Gaussian distribution. Student's-T processes maintain the primary advantages of
Gaussian processes (kernel function, analytic update rule) with additional
benefits beyond Gaussian processes. The Student's-T distribution has higher
kurtosis than a Gaussian distribution and so outliers are much more likely, and
the posterior variance increases or decreases depending on the variance of
observed data sample values. Here, we describe Student's-T processes, and
discuss their advantages in the context of aerospace optimization. We show how
to construct a Student's-T process using a kernel function and how to update
the process given new samples. We provide a clear derivation of
optimization-relevant quantities such as expected improvement, and contrast
with the related computations for Gaussian processes. Finally, we compare the
performance of Student's-T processes against Gaussian processes on canonical test
problems in Bayesian optimization, and apply the Student's-T process to the
optimization of an aerostructural design problem.
Comment: 2018 AIAA Non-Deterministic Approaches Conference
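The data-dependent posterior variance highlighted above can be sketched with the standard multivariate Student-t conditioning rule (our notation and toy RBF kernel, not the paper's code): conditioning on n observations raises the degrees of freedom from nu to nu + n, and a data term beta = y^T K^-1 y rescales the Gaussian-style posterior covariance.

```python
import numpy as np

# Sketch of a zero-mean Student's-T process posterior update, using
# the standard multivariate Student-t conditioning rule. The RBF
# kernel and all names are illustrative choices, not the paper's code.
def rbf(A, B, length=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

def tp_posterior(X, y, X_star, nu=5.0):
    n = len(X)
    K = rbf(X, X) + 1e-8 * np.eye(n)      # jitter for conditioning
    Ks = rbf(X, X_star)
    Kss = rbf(X_star, X_star)
    alpha = np.linalg.solve(K, y)
    mean = Ks.T @ alpha
    beta = float(y @ alpha)               # depends on observed VALUES
    scale = (nu + beta - 2.0) / (nu + n - 2.0)
    cov = scale * (Kss - Ks.T @ np.linalg.solve(K, Ks))
    return mean, cov, nu + n              # dof grow with each sample

X = np.array([[0.0], [1.0]])
X_star = np.array([[0.5]])
# Same sample locations, different spread of sample values:
_, cov_small, _ = tp_posterior(X, np.array([0.1, -0.1]), X_star)
_, cov_large, _ = tp_posterior(X, np.array([2.0, -2.0]), X_star)
```

A Gaussian process would return identical posterior variances for the two data sets, since only the locations enter its covariance; here the larger-spread observations inflate beta and hence the predictive variance, which is precisely the advantage the abstract describes.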
Navier-Stokes Analysis of a High Wing Transport High-Lift Configuration with Externally Blown Flaps
Insights and lessons learned from the aerodynamic analysis of the High Wing Transport (HWT) high-lift configuration are presented. Three-dimensional Navier-Stokes CFD simulations using the OVERFLOW flow solver are compared with high Reynolds number test data obtained in the NASA Ames 12-Foot Pressure Wind Tunnel (PWT) facility. Computational analysis of the baseline HWT high-lift configuration with and without Externally Blown Flap (EBF) jet effects is highlighted. Several additional aerodynamic investigations, such as nacelle strake effectiveness and wake vortex studies, are presented. Technical capabilities and shortcomings of the computational method are discussed and summarized.