    The Core Collapse Supernova Rate from the SDSS-II Supernova Survey

    We use the Sloan Digital Sky Survey II Supernova Survey (SDSS-II SNS) data to measure the volumetric core collapse supernova (CCSN) rate in the redshift range 0.03 < z < 0.09. Using a sample of 89 CCSNe we find a volume-averaged rate of (1.06 +/- 0.19) x 10^-4 / (yr Mpc^3) at a mean redshift of 0.072 +/- 0.009. We measure the CCSN luminosity function from the data and consider the implications for the star formation history. Comment: Minor corrections to references and affiliations to conform with published version.
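    As a schematic illustration of how a volumetric rate of this kind relates to the quoted numbers (and not the survey's actual efficiency-weighted analysis), a simple estimator divides the detection count by the product of detection efficiency, surveyed comoving volume and rest-frame control time; the effective exposure implied by the figures above is then a back-of-envelope consistency check.

```latex
% Schematic volumetric-rate estimator (illustrative only; the SDSS-II SNS
% analysis uses its own efficiency-weighted treatment of each supernova).
%   N_CCSN  : number of detected core collapse supernovae
%   epsilon : effective detection efficiency of the survey
%   V       : comoving volume surveyed (Mpc^3)
%   T       : rest-frame control time (yr)
r_V \;=\; \frac{N_{\mathrm{CCSN}}}{\epsilon\, V\, T}
\qquad\Longrightarrow\qquad
\epsilon\, V\, T \;=\; \frac{N_{\mathrm{CCSN}}}{r_V}
\;\approx\; \frac{89}{1.06\times10^{-4}\ \mathrm{yr^{-1}\,Mpc^{-3}}}
\;\approx\; 8\times10^{5}\ \mathrm{yr\,Mpc^{3}}.
```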

    A STUDY OF MACHINE VISION IN THE AUTOMOTIVE INDUSTRY

    With the growth of industrial automation, it has become increasingly important to validate the quality of every manufactured part during production. Until now, human visual inspection aided by hard tooling or machines has been the primary means to this end, but the speed of today's production lines, the complexity of production equipment and the high quality standards to which parts must adhere frequently make the traditional methods of industrial inspection and control impractical, if not impossible. Consequently, new solutions have been developed for the monitoring and control of industrial processes in real time. One such technology is machine vision. After many years of research and development, computerised vision systems are now leaving the laboratory and are being used successfully in the factory environment. As a sensing technique they are both robust and competitively priced, which has opened up a whole new sector for automation. Machine vision systems are becoming an important, integral part of the automotive manufacturing process, with applications ranging from inspection, classification, robot guidance and assembly verification through to process monitoring and control. Although the number of systems in current use is still relatively small, there can be no doubt, given the issues at stake, that the automotive industry will once again lead the way with the implementation of machine vision, just as it has done with robotic technology. This thesis considers machine vision and, in particular, its deployment within the automotive industry. The work is presented for the prospective end-user rather than the designer of such systems: it provides sufficient background on the subject to separate the promises of machine vision from reality and to permit informed decisions about machine vision applications. The initial part of the dissertation focuses on the strategic issues affecting the selection of machine vision at the planning stage, such as the factors that justify investment, the capability of the technology and the types of problem associated with this relatively new but complex science. Although it is widely accepted that no two industrial machine vision systems are identical, the basic fundamentals that underpin the structure of the technology in its application are presented. This includes a structured description of typical hardware components, such as camera technology and lighting systems, that form an integral part of an industrial system, together with a discussion of the criteria for their selection. To complement this work, a further section is devoted to the bewildering array of vision software analysis techniques currently available; the various techniques applied to images in order to extract and interpret the data they contain are described and explored in detail. Applications of machine vision fall into two main categories, namely robot guidance and inspection, each with many further subgroups. Within this context, the latter part of the thesis presents a structured review of several industrial case studies from the automotive industry, which illustrate that machine vision is capable of providing real-time solutions to manufacturing problems.

    In conclusion, although machine vision systems proven for industrial use remain of limited availability, successful implementation is not guaranteed even where they exist: the technology imposes technical limitations and introduces new human-engineering considerations. Success depends on understanding the application and the implications of its technical requirements for both the "staging" and the image-processing power required of the machine vision system. The thesis has shown that the most significant elements of a successful application are indeed the staging: the lighting, optics, component design and so on. In the case studies investigated, optimised staging reduced the computing power needed in the machine vision system; greater computing power not only requires more time but is generally more expensive. The experience gained from this project has demonstrated that machine vision is a realistic alternative means of capturing data in real time, since even within its current limitations the technology is well suited to delivering the quality function within the manufacturing process.
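    As a concrete illustration of the kind of image-analysis step surveyed above, the sketch below segments a part from a uniformly lit background and checks its area against a tolerance. It is a minimal example assuming OpenCV in Python; the file name, threshold strategy and area limits are placeholders rather than anything taken from the thesis case studies.

```python
# Minimal sketch of a classic machine-vision inspection step: segment a part
# from a uniformly lit background and check its area against a tolerance.
# Illustrative only -- file name, threshold strategy and area limits are
# hypothetical placeholders, not values from the thesis case studies.
import cv2

MIN_AREA, MAX_AREA = 5_000, 50_000   # hypothetical pass/fail limits in pixels

def inspect(image_path: str) -> bool:
    """Return True if exactly one part of acceptable size is found."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Otsu's method picks the threshold automatically -- good "staging"
    # (even lighting) is what makes such a simple step reliable.
    _, mask = cv2.threshold(blurred, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # [-2] keeps this working on both OpenCV 3.x and 4.x return conventions.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    parts = [c for c in contours if MIN_AREA <= cv2.contourArea(c) <= MAX_AREA]
    return len(parts) == 1

if __name__ == "__main__":
    print("PASS" if inspect("part.png") else "FAIL")
```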

    Development of an automatic discharge system for small filter presses


    Active Stereo Vision for 3D Profile Measurement


    3D Laser Scanner Development and Analysis


    COLOR MULTIPLEXED SINGLE PATTERN SLI

    Structured light pattern projection techniques are well-known methods of accurately capturing 3-dimensional information about a target surface. Traditional structured light methods require several different patterns to recover depth without ambiguity or albedo sensitivity, and are corrupted by object movement during the projection/capture process. This thesis presents and discusses a color multiplexed structured light technique for recovering object shape from a single image, making it insensitive to object motion. The method uses a single pattern whose RGB channels are each encoded with a unique subpattern. The pattern is projected onto the target and the reflected image is captured with a high-resolution color digital camera. The image is then separated into its individual color channels and analyzed for 3-D depth reconstruction using phase decoding and unwrapping algorithms, thereby establishing the viability of the color multiplexed single pattern technique. Compared with traditional methods (such as PMP or laser scanning), only one image, a one-shot measurement, is required to obtain the 3-D depth information of the object; the approach also requires less expensive hardware and normalizes albedo sensitivity and surface color reflectance variations. A cosine manifold and a flat surface are measured with sufficient accuracy, demonstrating the feasibility of a real-time system.
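    One common way to realize such a single-shot color-multiplexed pattern is to carry three phase-shifted sinusoidal fringes in the R, G and B channels and recover the wrapped phase per pixel with the standard three-step formula. The sketch below, in Python with NumPy, is a generic illustration under that assumption, not the exact sub-pattern encoding or decoding pipeline used in the thesis.

```python
# Sketch of one possible color-multiplexed SLI scheme: three 120-degree
# phase-shifted sinusoidal fringes carried in the R, G and B channels of a
# single projected pattern, decoded per pixel from one captured image.
import numpy as np

def make_pattern(width=640, height=480, periods=16):
    """Build an RGB pattern whose channels are phase-shifted fringes."""
    x = np.arange(width)
    phase = 2 * np.pi * periods * x / width
    shifts = (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)      # R, G, B shifts
    channels = [0.5 + 0.5 * np.cos(phase + s) for s in shifts]
    return np.stack([np.tile(c, (height, 1)) for c in channels], axis=-1)

def decode_phase(captured):
    """Recover wrapped, then unwrapped, phase from one captured RGB image."""
    r, g, b = captured[..., 0], captured[..., 1], captured[..., 2]
    # Standard 3-step phase-shifting formula applied across color channels;
    # it also cancels the common offset/albedo term of each pixel.
    wrapped = np.arctan2(np.sqrt(3.0) * (r - b), 2.0 * g - r - b)
    return np.unwrap(wrapped, axis=1)

if __name__ == "__main__":
    pattern = make_pattern()
    phase = decode_phase(pattern)       # flat target -> linear phase ramp
    print(phase.shape, phase.min(), phase.max())
```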

    Non-destructive technologies for fruit and vegetable size determination - a review

    Here, we review different methods for non-destructive horticultural produce size determination, focusing on electronic technologies capable of measuring fruit volume. The usefulness of produce size estimation is justified and a comprehensive classification system for the existing electronic techniques to determine dimensional size is proposed. The different systems identified are compared in terms of their versatility, precision and throughput. There is general agreement that online measurement of axes, perimeter and projected area has now been achieved. Nevertheless, rapid and accurate volume determination of irregularly shaped produce, as needed for density sorting, has only become available in the past few years. An important application of density measurement is soluble solids content (SSC) sorting. If the range of SSC in the batch is narrow and a large number of classes are desired, accurate volume determination becomes important. A good alternative for fruit three-dimensional surface reconstruction, from which volume and surface area can be computed, is the combination of height profiles from a range sensor with a two-dimensional object image boundary from a solid-state camera (brightness image) or from the range sensor itself (intensity image). However, one of the most promising technologies in this field is 3-D multispectral scanning, which combines multispectral data with 3-D surface reconstruction.
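    As a minimal sketch of the height-profile approach described above, the example below integrates a range-sensor height map over the pixels inside the object's 2-D boundary to estimate volume. The synthetic hemisphere, grid spacing and units are illustrative assumptions, not data from any of the reviewed systems.

```python
# Sketch of the height-profile approach to volume estimation: integrate a
# range-sensor height map over the pixels inside the object's 2-D boundary.
# The synthetic "fruit" and pixel size below are placeholders.
import numpy as np

def volume_from_height_map(height_mm, mask, pixel_area_mm2):
    """Approximate volume (mm^3) as the sum of height * pixel footprint inside the mask."""
    return float(np.sum(height_mm[mask]) * pixel_area_mm2)

if __name__ == "__main__":
    # Synthetic hemisphere of radius 40 mm sampled on a 0.5 mm grid.
    r_mm, step = 40.0, 0.5
    coords = np.arange(-50, 50, step)
    xx, yy = np.meshgrid(coords, coords)
    inside = xx**2 + yy**2 <= r_mm**2              # object boundary mask
    heights = np.zeros_like(xx)
    heights[inside] = np.sqrt(r_mm**2 - (xx**2 + yy**2)[inside])
    vol = volume_from_height_map(heights, inside, step * step)
    print(f"estimated {vol:.0f} mm^3 vs analytic {2/3*np.pi*r_mm**3:.0f} mm^3")
```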

    The SDSS-III Baryon Oscillation Spectroscopic Survey: Quasar Target Selection for Data Release Nine

    The SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS), a five-year spectroscopic survey of 10,000 deg^2, achieved first light in late 2009. One of the key goals of BOSS is to measure the signature of baryon acoustic oscillations in the distribution of Ly-alpha absorption from the spectra of a sample of ~150,000 z > 2.2 quasars. Along with measuring the angular diameter distance at z ≈ 2.5, BOSS will provide the first direct measurement of the expansion rate of the Universe at z > 2. One of the biggest challenges in achieving this goal is an efficient target selection algorithm for quasars over 2.2 < z < 3.5, where their colors overlap those of stars. During the first year of the BOSS survey, quasar target selection methods were developed and tested to meet the requirement of delivering at least 15 quasars deg^-2 in this redshift range, out of 40 targets deg^-2. To achieve these surface densities, the magnitude limit of the quasar targets was set at g <= 22.0 or r <= 21.85. While detection of the BAO signature in the Ly-alpha absorption in quasar spectra does not require a uniform target selection, many other astrophysical studies do. We therefore defined a uniformly-selected subsample of 20 targets deg^-2, for which the selection efficiency is just over 50%. This "CORE" subsample will be fixed for Years Two through Five of the survey. In this paper we describe the evolution and implementation of the BOSS quasar target selection algorithms during the first two years of BOSS operations. We analyze the spectra obtained during the first year; 11,263 new z > 2.2 quasars were spectroscopically confirmed by BOSS. Our current algorithms select an average of 15 z > 2.2 quasars deg^-2 from 40 targets deg^-2 using single-epoch SDSS imaging. Multi-epoch optical data and data at other wavelengths can further improve the efficiency and completeness of BOSS quasar target selection. [Abridged] Comment: 33 pages, 26 figures, 12 tables and a whole bunch of quasars. Submitted to Ap
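    For illustration only, the sketch below applies just the magnitude limit quoted above (g <= 22.0 or r <= 21.85) to a synthetic candidate catalogue. The actual BOSS quasar selection additionally uses color loci, likelihood and machine-learning methods and prior data, none of which is reproduced here; the catalogue and numbers are placeholders.

```python
# Only the magnitude limit quoted in the abstract is applied here; the real
# BOSS target selection is far richer.  Synthetic magnitudes for illustration.
import numpy as np

G_LIMIT, R_LIMIT = 22.0, 21.85

def passes_magnitude_limit(g_mag: np.ndarray, r_mag: np.ndarray) -> np.ndarray:
    """Boolean mask of candidates satisfying the quoted BOSS magnitude limit."""
    return (g_mag <= G_LIMIT) | (r_mag <= R_LIMIT)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    g = rng.uniform(19.0, 23.0, 10_000)      # synthetic g-band magnitudes
    r = g - rng.uniform(0.0, 0.5, 10_000)    # synthetic g-r colors
    kept = passes_magnitude_limit(g, r)
    print(f"{kept.mean():.1%} of synthetic candidates pass the magnitude cut")
```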

    Amorphous silicon 3D sensors applied to object detection

    Nowadays, 3D scanning cameras and microscopes on the market use digital or discrete sensors, such as CCDs or CMOS imagers, for object detection applications. However, these combined systems are not fast enough for some application scenarios, since they require large data processing resources and can be cumbersome. There is therefore a clear interest in exploring the possibilities and performance of analogue sensors, such as arrays of position sensitive detectors (PSDs), with the final goal of integrating them in 3D scanning cameras or microscopes for object detection purposes. The work performed in this thesis deals with the implementation of prototype systems to explore object detection using amorphous silicon position sensors of 32 and 128 lines, produced in the clean room at CENIMAT-CEMOP. During the first phase of this work, the starting point was the fabrication of the sensors and the study of their static and dynamic characteristics, as well as their signal conditioning, in relation to the existing scientific and technological knowledge. Subsequently, suitable data acquisition and signal processing electronics were assembled. Various prototypes were developed for the 32- and 128-line PSD array sensors, and appropriate optical solutions were integrated with the constructed prototypes, allowing the required experiments to be carried out and the results presented in this thesis to be achieved. All control, data acquisition and 3D rendering software was implemented for these systems, and all of these components were combined to form several integrated systems for the 32- and 128-line PSD 3D sensors.

    The performance of the 32-line PSD array sensor and system was evaluated for machine vision applications, such as 3D object rendering, and for microscopy applications, such as micro-object movement detection; trials were also performed with the 128-line PSD sensor systems. Sensor channel non-linearities of approximately 4 to 7% were obtained. The overall results show the possibility of using a linear array of 32/128 1D line sensors based on amorphous silicon technology to render 3D profiles of objects, and the system and setup presented allow 3D rendering at high speeds and high frame rates. The minimum detail or gap that can be detected by the sensor system is approximately 350 μm with the current setup. It is also possible to render an object in 3D within a scanning angle range of 15° to 85° and to identify its real height as a function of the scanning angle and the image displacement distance on the sensor. Both simple and less simple objects, such as a rubber and a plastic fork, can be rendered in 3D properly and accurately, even at high resolution, using this sensor and system platform. The n-i-p structure sensor system can detect primary and even derived colors of objects by proper adjustment of the system's integration time and by combining white, red, green and blue (RGB) light sources; a mean colorimetric error of 25.7 was obtained. It is also possible to detect the movement of micrometre-scale objects using the 32-line PSD sensor system. This kind of setup makes it possible to detect whether a micro-object is moving, what its dimensions are and what its position is in two dimensions, even at high speeds. The results show a non-linearity of about 3% and a spatial resolution of < 2 µm.
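    As a generic sketch of the two relations a PSD-based triangulation scanner of this kind relies on, the example below computes the spot position on a 1-D position sensitive detector from its two terminal photocurrents and then an object height from the measured image displacement and the scanning angle. The geometry, symbols and numbers are illustrative placeholders, not the calibration or signal chain used in the thesis.

```python
# Generic PSD triangulation relations: (1) spot position on a 1-D position
# sensitive detector from its two terminal photocurrents, and (2) object
# height from image displacement and scanning angle (simple model only).
import math

def psd_spot_position(i1: float, i2: float, length_mm: float) -> float:
    """Spot position along a 1-D PSD of given active length (0 at the centre)."""
    return 0.5 * length_mm * (i2 - i1) / (i1 + i2)

def height_from_displacement(disp_mm: float, scan_angle_deg: float,
                             magnification: float = 1.0) -> float:
    """Object height from image displacement and scanning angle (placeholder geometry)."""
    return disp_mm / (magnification * math.tan(math.radians(scan_angle_deg)))

if __name__ == "__main__":
    pos = psd_spot_position(i1=1.2e-6, i2=1.8e-6, length_mm=30.0)  # currents in A
    print(f"spot at {pos:+.2f} mm from PSD centre")
    print(f"height ~ {height_from_displacement(pos, scan_angle_deg=45.0):.2f} mm")
```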