Optimal prefix codes for pairs of geometrically-distributed random variables
Optimal prefix codes are studied for pairs of independent, integer-valued
symbols emitted by a source with a geometric probability distribution of
parameter $q$, $0<q<1$. By encoding pairs of symbols, it is possible to
reduce the redundancy penalty of symbol-by-symbol encoding while preserving
the simplicity of the encoding and decoding procedures typical of Golomb codes
and their variants. It is shown that optimal codes for these so-called
two-dimensional geometric distributions are \emph{singular}, in the sense that
a prefix code that is optimal for one value of the parameter cannot be
optimal for any other value of $q$. This is in sharp contrast to the
one-dimensional case, where codes are optimal for positive-length intervals of
the parameter $q$. Thus, in the two-dimensional case, it is infeasible to give
a compact characterization of optimal codes for all values of the parameter
$q$, as was done in the one-dimensional case. Instead, optimal codes are
characterized for two discrete families of values of $q$ that together provide
good coverage of the unit interval. The described codes produce the expected
reduction in redundancy with respect to the one-dimensional case, while
maintaining low-complexity coding operations.
Comment: To appear in IEEE Transactions on Information Theory
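The Golomb codes the paper builds on admit a very compact encoder. As a point of reference for the one-dimensional case only (this is not the paper's two-dimensional construction), a minimal sketch:

```python
def golomb_encode(n, m):
    """Golomb code of non-negative integer n with parameter m:
    unary quotient followed by a truncated-binary remainder."""
    q, r = divmod(n, m)
    bits = "1" * q + "0"                  # unary part: q ones, then a zero
    b = m.bit_length()                    # ceil(log2(m)) when m is not a power of two
    if m & (m - 1) == 0:                  # m a power of two: plain Rice code
        if m > 1:
            bits += format(r, f"0{b - 1}b")
    else:
        cutoff = (1 << b) - m             # the first `cutoff` remainders get one bit fewer
        if r < cutoff:
            bits += format(r, f"0{b - 1}b")
        else:
            bits += format(r + cutoff, f"0{b}b")
    return bits

print(golomb_encode(4, 3))   # "1010": quotient 1 -> "10", remainder 1 -> "10"
```

For a geometric source, the optimal one-dimensional prefix code is a Golomb code whose parameter $m$ grows as $q$ approaches 1; the two-dimensional codes studied in the paper generalize this setting to symbol pairs.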
Analytical techniques: A compilation
A compilation, containing articles on a number of analytical techniques for quality control engineers and laboratory workers, is presented. Data cover techniques for testing electronic, mechanical, and optical systems, nondestructive testing techniques, and gas analysis techniques.
The 1974 NASA-ASEE summer faculty fellowship aeronautics and space research program
Research activities by participants in the fellowship program are documented, and include such topics as: (1) multispectral imagery for detecting southern pine beetle infestations; (2) trajectory optimization techniques for low thrust vehicles; (3) concentration characteristics of a Fresnel solar strip reflection concentrator; (4) calibration and reduction of video camera data; (5) fracture mechanics of Cer-Vit glass-ceramic; (6) space shuttle external propellant tank prelaunch heat transfer; (7) holographic interferometric fringes; and (8) atmospheric wind and stress profiles in a two-dimensional internal boundary layer.
A Data Mining Methodology for Vehicle Crashworthiness Design
This study develops a systematic design methodology based on data mining theory for decision-making in the development of crashworthy vehicles. The new data mining methodology allows the exploration of a large crash simulation dataset to discover the underlying relationships among vehicle crash responses and design variables at multiple levels and to derive design rules based on the whole-vehicle safety requirements to make decisions about component-level and subcomponent-level design. The method can resolve a major issue with existing design approaches related to vehicle crashworthiness: that is, limited abilities to explore information from large datasets, which may hamper decision-making in the design processes.
At the component level, two structural design approaches were implemented for detailed component design with the data mining method: namely, a dimension-based approach and a node-based approach to handle structures with regular and irregular shapes, respectively. These two approaches were used to design a thin-walled vehicular structure, the S-shaped beam, against crash loading. A large number of design alternatives were created, and their responses under loading were evaluated by finite element simulations. The design variables and computed responses formed a large design dataset. This dataset was then mined to build a decision tree. Based on the decision tree, the interrelationships among the design parameters were revealed, and design rules were generated to produce a set of good designs. After the data mining, the critical design parameters were identified and the design space was reduced, which can simplify the design process.
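The rule-extraction step described above can be illustrated with a toy version. The sketch below (the dataset, variable names, and thresholds are invented for illustration, not taken from the study) finds the single-variable split with the highest information gain, which is the basic operation a decision tree repeats recursively to turn a design dataset into rules:

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def best_split(X, y, feature_names):
    """Return (gain, feature, threshold) for the best single binary split."""
    base = entropy(y)
    best = None
    for j, name in enumerate(feature_names):
        values = sorted(set(row[j] for row in X))
        for lo, hi in zip(values, values[1:]):
            t = (lo + hi) / 2             # candidate threshold between adjacent values
            left = [y[i] for i, row in enumerate(X) if row[j] <= t]
            right = [y[i] for i, row in enumerate(X) if row[j] > t]
            gain = base - (len(left) / len(y)) * entropy(left) \
                        - (len(right) / len(y)) * entropy(right)
            if best is None or gain > best[0]:
                best = (gain, name, t)
    return best

# invented S-beam designs: (wall thickness mm, section width mm) -> crash criterion
X = [(1.0, 40), (1.2, 40), (1.4, 50), (1.6, 50), (1.8, 60), (2.0, 60)]
y = ["fail", "fail", "fail", "pass", "pass", "pass"]
gain, var, t = best_split(X, y, ["thickness", "width"])
print(f"design rule: {var} > {t} -> pass (information gain {gain:.2f})")
```

On this toy dataset the split recovers a human-readable rule of the kind the methodology derives (here, a thickness threshold); the study applies the same idea to large finite-element datasets with many variables and levels.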
To partially replace the expensive finite element simulations, a surrogate model was used to model the relationships between design variables and response. Four machine learning algorithms, which can be used for surrogate model development, were compared. Based on the results, Gaussian process regression was determined to be the most suitable technique in the present scenario, and an optimization process was developed to tune the algorithm’s hyperparameters, which govern the model structure and training process.
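A Gaussian process surrogate of the kind described can be sketched in a few lines. The kernel choice, fixed length scale, and data below are illustrative assumptions; in practice the hyperparameters would be tuned as the study describes (e.g. by maximizing the marginal likelihood):

```python
import math

def rbf(xa, xb, length_scale=0.5):
    """Squared-exponential (RBF) kernel for scalar inputs."""
    return math.exp(-0.5 * ((xa - xb) / length_scale) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_mean(x_train, y_train, x_test, noise=1e-6):
    """Posterior mean of a zero-mean GP with RBF kernel at the test points."""
    K = [[rbf(xi, xj) + (noise if i == j else 0.0) for j, xj in enumerate(x_train)]
         for i, xi in enumerate(x_train)]
    alpha = solve(K, list(y_train))
    return [sum(rbf(xt, xi) * a for xi, a in zip(x_train, alpha)) for xt in x_test]

# hypothetical surrogate: peak crash force (kN) vs. S-beam wall thickness (mm)
thickness = [1.0, 1.5, 2.0, 2.5]
peak_force = [10.0, 14.0, 19.0, 25.0]
print(gp_mean(thickness, peak_force, [1.5])[0])   # ~14, reproduces a training point
```

The surrogate interpolates the (hypothetical) simulation results almost exactly at the training designs and predicts smoothly between them, which is what makes it a cheap stand-in for finite element runs.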
To account for engineering uncertainty in the data mining method, a new decision tree for uncertain data was proposed based on the joint probability in uncertain spaces, and it was implemented to again design the S-beam structure. The findings show that the new decision tree can produce effective decision-making rules for engineering design under uncertainty.
To evaluate the new approaches developed in this work, a comprehensive case study was conducted by designing a vehicle system against the frontal crash. A publicly available vehicle model was simplified and validated. Using the newly developed approaches, new component designs in this vehicle were generated and integrated back into the vehicle model so their crash behavior could be simulated. Based on the simulation results, one can conclude that the designs with the new method can outperform the original design in terms of measures of mass, intrusion and peak acceleration. Therefore, the performance of the new design methodology has been confirmed.
The current study demonstrates that the new data mining method can be used in vehicle crashworthiness design, and it has the potential to be applied to other complex engineering systems with a large amount of design data.
Introduction to Optical/IR Interferometry: history and basic principles
The present notes refer to a lecture delivered on 27 September 2017 in
Roscoff during the 2017 Evry Schatzman School. It concerns a general
introduction to optical/IR interferometry, including a brief history, a
presentation of the basic principles, some important theorems, and relevant
applications. The layout of these lecture notes is as follows. After a short
introduction, we proceed with some reminders concerning the representation of a
field of electromagnetic radiation. We then present a short history of
interferometry, from the first experiments of Fizeau and Stéphan to modern
optical interferometers. We then discuss the notions of light coherence,
including the van Cittert-Zernike theorem, and describe the principle of
interferometry using two telescopes. We present some examples of modern
interferometers and typical results obtained with them. Finally, we address
three important theorems: the fundamental theorem, the convolution theorem, and
the Wiener-Khinchin theorem, which provide better insight into the field
of optical/IR interferometry.
Comment: 45 pages, based on a lecture given at the 2017 edition of the Evry
Schatzman school, dedicated to the high-angular resolution imaging of stars
and their direct environment. Videos: "Introduction to Optical/Infrared
Interferometry", IUCAA, Pune, India, 2018, 10 hours of lectures:
https://orbi.uliege.be/handle/2268/223150 &
https://www.youtube.com/playlist?list=PLgbVWzVdxoUtI_223J1QEwIeKrzWMVMC
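For reference, the Wiener-Khinchin theorem invoked in these notes connects the temporal coherence of the field to its spectrum: the power spectral density of a stationary random field is the Fourier transform of its autocorrelation function,
$$ S(\nu) = \int_{-\infty}^{+\infty} \Gamma(\tau)\, e^{-2\pi i \nu \tau}\, \mathrm{d}\tau, $$
where $\Gamma(\tau) = \langle E(t)\, E^{*}(t+\tau) \rangle$ is the autocorrelation of the field $E$. The van Cittert-Zernike theorem plays the analogous role in the spatial domain, linking the measured fringe visibility to the source brightness distribution. (The symbols here follow common usage and are not necessarily the notation of the lecture notes themselves.)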
Defect and thickness inspection system for cast thin films using machine vision and full-field transmission densitometry
Quick mass production of homogeneous thin film material is required in the paper, plastic, fabric, and thin film industries. Due to the high feed rates and small thicknesses, machine vision and other nondestructive evaluation techniques are used to ensure consistent, defect-free material by continuously assessing post-production quality. One of the fastest-growing inspection areas is 0.5-500 micrometer-thick films, which are used for semiconductor wafers, amorphous photovoltaics, optical films, plastics, and organic and inorganic membranes. As a demonstration application, a prototype roll-feed imaging system has been designed to inspect high-temperature polymer electrolyte membrane (PEM), used for fuel cells, after it is die cast onto a moving transparent substrate. The inspection system continuously detects thin film defects and classifies them with a neural network into categories of holes, bubbles, thinning, and gels, with a 1.2% false alarm rate, 7.1% escape rate, and classification accuracy of 96.1%. In slot die casting processes, defect types are indicative of an imbalance between the mass flow rate and web speed; so, based on the classified defects, the inspection system informs the operator of corrective adjustments to these manufacturing parameters. Thickness uniformity is also critical to membrane functionality, so a real-time, full-field transmission densitometer has been created to measure the bi-directional thickness profile of the semi-transparent PEM between 25 and 400 micrometers. The local thickness of the 75 mm x 100 mm imaged area is determined by converting the optical density of the sample to thickness with the Beer-Lambert law. The PEM extinction coefficient is determined to be 1.4 D/mm, and the average thickness error is found to be 4.7%.
Finally, the defect inspection and thickness profilometry systems are compiled into a specially-designed graphical user interface for intuitive real-time operation and visualization.
M.S. Committee Chair: Tequila Harris; Committee Member: Levent Degertekin; Committee Member: Wayne Dale
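The Beer-Lambert conversion described above is a one-liner. The sketch below uses the 1.4 D/mm extinction coefficient reported in the abstract; the intensity values are illustrative, not measured data:

```python
import math

def optical_density(transmitted, incident):
    """Optical density from a transmission measurement: OD = -log10(I / I0)."""
    return -math.log10(transmitted / incident)

def thickness_mm(od, extinction=1.4):
    """Invert Beer-Lambert, OD = extinction * thickness, with the reported 1.4 D/mm."""
    return od / extinction

od = optical_density(72.4, 100.0)   # illustrative intensities -> OD of about 0.14 D
print(thickness_mm(od))             # ~0.1 mm, i.e. a 100-micrometer film
```

Per-pixel application of this conversion over the imaged area is what turns a transmission image into the full-field thickness profile the densitometer reports.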
A hardware approach to neural networks: silicon retina
The primary goal of this thesis was to emulate the function of the biological eye in silicon. In both neural and silicon technologies, the active devices occupy approximately 2 percent of the space; wire fills the entire remaining space. The silicon retina was modeled on the distal portion of the vertebrate retina. This chip generates, in real time, outputs that correspond directly to signals observed at the corresponding levels of biological retinas. The design uses the principle of signal aggregation and demonstrates a tolerance for device imperfection that is characteristic of a collective system. The digital computer is extremely effective at producing precise answers to well-defined questions; the nervous system accepts fuzzy, poorly conditioned input, performs a computation that is ill-defined, and produces approximate output.
Optics and Fluid Dynamics Department annual progress report for 2001
The department performs research within three scientific programmes: (1) laser systems and optical materials, (2) optical diagnostics and information processing, and (3) plasma and fluid dynamics. The department has core competences in optical sensors, optical materials, optical storage, biooptics, numerical modelling and information processing, non-linear dynamics, and fusion plasma physics. The research is supported by several EU programmes, including EURATOM, by Danish research councils, and by industry. A summary of the activities in 2001 is presented. ISBN 87-550-2993-0 (Internet).