Analysis of Critical Factors for Automatic Measurement of OEE
The increasing digitalization of industry provides means to automatically acquire and analyze manufacturing data. As a consequence, companies are investing in Manufacturing Execution Systems (MES), where the measurement of Overall Equipment Effectiveness (OEE) is often a central part and an important reason for the investment. The purpose of this study is to identify critical factors and potential pitfalls when operating automatic measurement of OEE. This is accomplished by analyzing raw data used for OEE calculation, acquired from a large data set covering 23 different companies and 884 machines. The average OEE was calculated at 65%. Almost half of the recorded OEE losses could not be classified, since the loss categories were either lacking or had poor descriptions. In addition, 90% of the stop time that was classified could be directly related to supporting activities performed by operators and not to the automatic process itself. The findings and recommendations of this study can be applied to fully utilize the potential of automatic data acquisition systems and to derive accurate OEE measures that can be used to improve manufacturing performance.
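The raw data analyzed in the study feeds the standard OEE decomposition into availability, performance and quality. A minimal sketch of that calculation (the function name, variable names and worked figures are illustrative, not taken from the study's data set):

```python
def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    """Overall Equipment Effectiveness = Availability x Performance x Quality."""
    availability = run_time / planned_time                      # uptime share
    performance = (ideal_cycle_time * total_count) / run_time   # speed share
    quality = good_count / total_count                          # yield share
    return availability * performance * quality

# Illustrative shift: 480 min planned, 360 min running,
# 1 min ideal cycle, 300 parts produced, 285 of them good.
print(oee(480, 360, 1.0, 300, 285))
```

Note how a loss in any one factor pulls the whole product down, which is why unclassified stop time (the pitfall the study highlights) directly undermines the measure.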
A model for linking shop floor improvements to manufacturing cost and profitability
Manufacturing units in the so-called high-cost countries are struggling under fierce competition on the global market. In order to survive, a factory needs to generate profit for its owners. Profitability can be reached in many ways apart from lowering employees' salaries: it can be improved through increased profit margins (sales in relation to costs) or through an increased capital turnover rate. Finding ways to free capacity and to improve flexibility in order to increase sales is often more interesting to manufacturing companies than cutting direct salary costs. A model for analysing the profitability of a manufacturing unit is proposed. It is founded on a production system analysis and combines in-depth production engineering analysis with economic accounting analysis of the factory. The manual work tasks are of special interest, and the productivity of selected bottleneck work areas is analysed thoroughly. The model is intended for use by two industrial analysts during a one-week study. Simulation of different improvement scenarios is carried out and presented to the factory management at the end of the profitability study. A software implementation is required in order to generate the model, collect data and run simulations within the intended time. The implementation is made in spreadsheet software using Visual Basic to program the interfaces and automatic functions. The primary area of application is the electronics industry in Sweden, where the model is used in a research project to strengthen the competitiveness of that industry.
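The two profitability levers the abstract names, profit margin and capital turnover rate, combine multiplicatively in the classic DuPont fashion. A hedged sketch of that relationship (the function and the figures are illustrative, not the model's actual accounting):

```python
def return_on_capital(sales, costs, capital_employed):
    """DuPont-style decomposition: profit margin x capital turnover rate."""
    profit_margin = (sales - costs) / sales        # profit per unit of sales
    capital_turnover = sales / capital_employed    # sales per unit of capital
    return profit_margin * capital_turnover

# Illustrative unit: 100 in sales, 90 in costs, 50 in capital employed.
# Freeing capacity to raise sales improves BOTH factors at once,
# which is why it often beats cutting direct salary costs.
print(return_on_capital(100.0, 90.0, 50.0))
```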
High Angular Resolution Stellar Imaging with Occultations from the Cassini Spacecraft II: Kronocyclic Tomography
We present an advance in the use of Cassini observations of stellar occultations by the rings of Saturn for stellar studies. Stewart et al. (2013) demonstrated the potential use of such observations for measuring stellar angular diameters. Here, we use these same observations, and tomographic imaging reconstruction techniques, to produce two-dimensional images of complex stellar systems. We detail the determination of the basic observational reference frame. A technique for recovering model-independent brightness profiles for data from each occulting edge is discussed, along with the tomographic combination of these profiles to build an image of the source star. Finally, we demonstrate the technique with recovered images of the α Centauri binary system and the circumstellar environment of the evolved late-type giant star, Mira.
Comment: 8 pages, 8 figures. Accepted by MNRAS.
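The tomographic combination of one-dimensional edge profiles can be illustrated with a plain (unfiltered) back-projection: each profile is smeared back across the image plane along its scan direction and the results are summed over angles. This is a simplification of the paper's reconstruction, with illustrative grid size and profiles:

```python
import numpy as np

def back_project(profiles, angles, n=64):
    """Sum 1-D brightness profiles smeared along their scan directions.

    profiles: list of 1-D arrays of length n (one per occulting edge).
    angles:   scan direction of each profile, in radians.
    """
    ys, xs = np.mgrid[0:n, 0:n] - (n - 1) / 2.0   # pixel coordinates, centred
    image = np.zeros((n, n))
    for profile, theta in zip(profiles, angles):
        # Signed distance of each pixel from the profile's zero position.
        t = xs * np.cos(theta) + ys * np.sin(theta)
        idx = np.clip(np.round(t + n // 2).astype(int), 0, n - 1)
        image += profile[idx]
    return image / len(angles)

# Illustrative use: a point-source profile (a single bright sample) seen
# from several angles back-projects to a bright spot near the image centre.
delta = np.zeros(64)
delta[32] = 1.0
img = back_project([delta] * 8, np.linspace(0.0, np.pi, 8, endpoint=False))
```

Real pipelines apply a ramp filter to each profile before back-projection to sharpen the reconstruction; the unfiltered form above keeps the geometry visible.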
Sequential Monte Carlo Instant Radiosity
The focus of this thesis is to accelerate the synthesis of physically accurate images using computers. Such images are generated by simulating how light flows in the scene using unbiased Monte Carlo algorithms. To date, the efficiency of these algorithms has been too low for real-time rendering of error-free images. This limits the applicability of physically accurate image synthesis in interactive contexts, such as pre-visualization or video games.
We focus on the well-known Instant Radiosity algorithm by Keller [1997], which approximates the indirect light field using virtual point lights (VPLs). This approximation is unbiased and has the characteristic that the error is spread out over large areas in the image. This low-frequency noise manifests as an unwanted 'flickering' effect in image sequences if not kept temporally coherent. Currently, the limited VPL budget imposed by running the algorithm at interactive rates results in images which may noticeably differ from the ground truth.
We introduce two new algorithms that alleviate these issues. The first, clustered hierarchical importance sampling, reduces the overall error by increasing the VPL budget without incurring a significant performance cost. It uses an unbiased Monte Carlo estimator to estimate the sensor response caused by all VPLs. We reduce the variance of this estimator with an efficient hierarchical importance sampling method. The second, sequential Monte Carlo Instant Radiosity, generates the VPLs using heuristic sampling and employs non-parametric density estimation to resolve their probability densities. As a result the algorithm is able to reduce the number of VPLs that move between frames, while also placing them in regions where they bring light to the image. This increases the quality of the individual frames while keeping the noise temporally coherent — and less noticeable — between frames.
When combined, the two algorithms form a rendering system that performs favourably against traditional path tracing methods, both in terms of performance and quality. Unlike prior VPL-based methods, our system does not suffer from the objectionable lack of temporal coherence in highly occluded scenes.
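The VPL idea underlying the thesis can be sketched as follows. This toy version ignores visibility and surface orientation, and the jittered placement stands in for tracing light paths from the emitter; it is an illustration of the estimator's shape, not the thesis's algorithm:

```python
import math
import random

def make_vpls(n, light_pos, light_power):
    """Distribute the light's power over n virtual point lights jittered
    around the emitter (a stand-in for tracing paths from the light)."""
    vpls = []
    for _ in range(n):
        pos = tuple(c + random.uniform(-0.1, 0.1) for c in light_pos)
        vpls.append((pos, light_power / n))   # each VPL carries power / n
    return vpls

def irradiance(point, vpls):
    """Estimate irradiance at a point by summing VPL contributions.

    Toy estimator: inverse-square falloff only; a full version would also
    include visibility (shadow rays) and cosine terms at both ends.
    """
    total = 0.0
    for pos, power in vpls:
        d2 = sum((p - q) ** 2 for p, q in zip(point, pos))
        total += power / (4.0 * math.pi * d2)
    return total
```

The thesis's contributions then address where those VPLs are placed (importance sampling toward regions that light the image) and how their positions evolve between frames (sequential Monte Carlo) so the residual error stays temporally coherent.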
Object-oriented Modeling of Manufacturing Resources Using Work Study Inputs
Resources are the core of manufacturing models. They provide information about the people and equipment that perform activities on the shop floor. Comprehensive representations of equipment are common, but human resources are often defined to a very limited extent. This paper presents how work study data can be applied as input to detailed modeling of human manufacturing resources. The purpose is to provide a valid representation of manual work tasks at the shop floor level. If implemented in manufacturing models, this representation will contribute to improved planning, control and execution of production. It also facilitates and encourages production improvement initiatives.
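An object-oriented human-resource model fed by work study inputs might look like the following sketch. The class names, the allowance figure and the attributes are illustrative assumptions, not the paper's schema:

```python
from dataclasses import dataclass, field

@dataclass
class WorkTask:
    """A manual task with a time standard from a work study."""
    name: str
    standard_time_s: float   # measured/rated time from the work study
    allowance: float = 0.10  # personal, fatigue and delay allowance (assumed)

    @property
    def allowed_time_s(self) -> float:
        return self.standard_time_s * (1.0 + self.allowance)

@dataclass
class HumanResource:
    """An operator modeled with the same rigor as equipment resources."""
    name: str
    skills: list = field(default_factory=list)
    tasks: list = field(default_factory=list)

    def workload_s(self) -> float:
        """Total allowed time of all assigned manual tasks."""
        return sum(t.allowed_time_s for t in self.tasks)

# Illustrative use: an operator with two work-study-timed tasks.
operator = HumanResource(
    "operator_1",
    skills=["assembly", "inspection"],
    tasks=[WorkTask("assemble housing", 60.0), WorkTask("inspect unit", 30.0)],
)
```

Representing operators this way lets a simulation or MES reason about manual workload with the same fidelity it already has for machines.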
MobileNeRF: Exploiting the Polygon Rasterization Pipeline for Efficient Neural Field Rendering on Mobile Architectures
Neural Radiance Fields (NeRFs) have demonstrated amazing ability to synthesize images of 3D scenes from novel views. However, they rely upon specialized volumetric rendering algorithms based on ray marching that are mismatched to the capabilities of widely deployed graphics hardware. This paper introduces a new NeRF representation based on textured polygons that can synthesize novel images efficiently with standard rendering pipelines. The NeRF is represented as a set of polygons with textures representing binary opacities and feature vectors. Traditional rendering of the polygons with a z-buffer yields an image with features at every pixel, which are interpreted by a small, view-dependent MLP running in a fragment shader to produce a final pixel color. This approach enables NeRFs to be rendered with the traditional polygon rasterization pipeline, which provides massive pixel-level parallelism, achieving interactive frame rates on a wide range of compute platforms, including mobile phones.
Comment: CVPR 2023. Project page: https://mobile-nerf.github.io, code: https://github.com/google-research/jax3d/tree/main/jax3d/projects/mobilener
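The deferred-shading step described above, a z-buffered feature image fed through a small view-dependent MLP, can be sketched as follows. Random (untrained) weights stand in for the learned fragment-shader MLP, and the sizes are illustrative, not MobileNeRF's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, F = 4, 4, 8   # tiny "framebuffer": H x W pixels, F features per pixel

# Stand-in for the feature image the polygon rasterizer would produce.
features = rng.standard_normal((H, W, F))
view_dir = np.array([0.0, 0.0, 1.0])   # per-frame camera viewing direction

# Hypothetical weights of the small view-dependent MLP (one hidden layer).
W1 = rng.standard_normal((F + 3, 16)) * 0.1
W2 = rng.standard_normal((16, 3)) * 0.1

def shade(feature_px, view_dir):
    """Per-pixel 'fragment shader': features + view direction -> RGB."""
    x = np.concatenate([feature_px, view_dir])
    h = np.maximum(W1.T @ x, 0.0)            # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(W2.T @ h)))  # sigmoid -> RGB in (0, 1)

image = np.array([[shade(features[i, j], view_dir)
                   for j in range(W)] for i in range(H)])
```

The key point the sketch makes concrete: the expensive part (one tiny MLP per pixel) is embarrassingly parallel, exactly what GPU fragment shaders are built for.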
Vox-E: Text-guided Voxel Editing of 3D Objects
Large scale text-guided diffusion models have garnered significant attention due to their ability to synthesize diverse images that convey complex visual concepts. This generative power has more recently been leveraged to perform text-to-3D synthesis. In this work, we present a technique that harnesses the power of latent diffusion models for editing existing 3D objects. Our method takes oriented 2D images of a 3D object as input and learns a grid-based volumetric representation of it. To guide the volumetric representation to conform to a target text prompt, we follow unconditional text-to-3D methods and optimize a Score Distillation Sampling (SDS) loss. However, we observe that combining this diffusion-guided loss with an image-based regularization loss that encourages the representation not to deviate too strongly from the input object is challenging, as it requires achieving two conflicting goals while viewing only structure-and-appearance coupled 2D projections. Thus, we introduce a novel volumetric regularization loss that operates directly in 3D space, utilizing the explicit nature of our 3D representation to enforce correlation between the global structure of the original and edited object. Furthermore, we present a technique that optimizes cross-attention volumetric grids to refine the spatial extent of the edits. Extensive experiments and comparisons demonstrate the effectiveness of our approach in creating a myriad of edits which cannot be achieved by prior works.
Comment: Project webpage: https://tau-vailab.github.io/Vox-E
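The SDS loss the abstract refers to can be sketched in a much-simplified form: render the representation, add noise at a diffusion timestep, and use the denoiser's prediction error as a gradient proxy. Everything here is a stand-in (the renderer, denoiser, noise schedule and weighting are toys, and real implementations backpropagate through the rendering rather than returning an image-space gradient):

```python
import numpy as np

def sds_grad(voxels, render, denoise, text_emb, t, rng):
    """One highly simplified Score Distillation Sampling step.

    render:  maps the 3D representation to an image (stand-in).
    denoise: text-conditioned noise predictor eps_hat(x_t, text, t) (stand-in).
    t:       diffusion timestep in (0, 1); toy schedule alpha = 1 - t.
    Returns an image-space gradient proxy w(t) * (eps_hat - eps).
    """
    x = render(voxels)
    eps = rng.standard_normal(x.shape)        # injected Gaussian noise
    alpha = 1.0 - t                           # toy noise schedule
    x_noisy = np.sqrt(alpha) * x + np.sqrt(1.0 - alpha) * eps
    eps_pred = denoise(x_noisy, text_emb, t)  # what the prompt "wants"
    w = 1.0 - alpha                           # toy timestep weighting
    return w * (eps_pred - eps)
```

Vox-E's contribution sits on top of this: because the guidance gradient and an image-space regularizer pull in conflicting directions, the regularization is moved into 3D space where the voxel grid's explicit structure can be compared directly.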
Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields
Neural Radiance Field training can be accelerated through the use of grid-based representations in NeRF's learned mapping from spatial coordinates to colors and volumetric density. However, these grid-based approaches lack an explicit understanding of scale and therefore often introduce aliasing, usually in the form of jaggies or missing scene content. Anti-aliasing has previously been addressed by mip-NeRF 360, which reasons about sub-volumes along a cone rather than points along a ray, but this approach is not natively compatible with current grid-based techniques. We show how ideas from rendering and signal processing can be used to construct a technique that combines mip-NeRF 360 and grid-based models such as Instant NGP to yield error rates that are 8% - 76% lower than either prior technique, and that trains 22x faster than mip-NeRF 360.
Comment: Project page: https://jonbarron.info/zipnerf
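The cone-versus-point distinction can be illustrated by averaging grid lookups over jittered samples inside a conical frustum instead of querying a single point. This is a sketch of the general idea, not Zip-NeRF's actual multisampling and feature-downweighting scheme:

```python
import numpy as np

def multisample_cone(grid_lookup, origin, direction, t0, t1, radius, n=8,
                     rng=None):
    """Average a grid over n jittered samples in a conical frustum.

    grid_lookup: function from a 3D point to a grid value (stand-in for a
                 hash-grid or voxel query).
    radius:      cone radius per unit distance; the footprint grows with t,
                 which is what gives the lookup a notion of scale.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    ts = rng.uniform(t0, t1, n)                     # depths along the ray
    offsets = rng.standard_normal((n, 3))
    offsets -= np.outer(offsets @ direction, direction)  # drop axial part
    points = (origin
              + np.outer(ts, direction)             # on-axis positions
              + radius * ts[:, None] * offsets)     # lateral spread ~ t
    return np.mean([grid_lookup(p) for p in points], axis=0)
```

Because the sample footprint widens with distance, the averaged lookup acts as a crude low-pass filter over the grid, suppressing the jaggies that a single point query at the wrong scale would alias into.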