
    Dynamic Weight Estimation of Non-Singulated Objects

    Weight estimation is common practice throughout many industries, though it typically requires that the objects to be weighed remain motionless. It is often beneficial to let objects move freely through a process, so that no time is lost stopping and rerouting each object to a weight sensor. This is the basis of dynamic weighing, where the object being measured remains in motion relative to the weighing sensor. Typically, this has been achieved with signal processing techniques that produce favourable results for single, isolated objects. The challenge arises when multiple objects are grouped and moving together; that is, they are non-singulated and cannot be weighed separately. This work reports the development of an In-Motion Weight Sensor array, a new dynamic weighing system with a new real-time signal processing method for estimating the weights of multiple, non-singulated objects. The array system employs a recursive least squares estimation algorithm that combines weight sensor data with the locations of the boxes travelling through the array to attribute fractions of each box's load to the appropriate individual sensors. To demonstrate the performance of the proposed system, a full-scale experimental setup was built and tested. Statistical analysis of the weight estimates for a variety of groups of objects shows that the system can keep measurement error within 10% for the majority of non-singulated cases. It is most effective for non-rigid boxes that also fall within the mid-range of package size and weight, around 0.05 m² and 1–3 kg, respectively. Changes to the mechanical design can substantially improve accuracy and precision, and recommendations for these alterations are given in the conclusion.
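The recursive least squares idea the abstract describes can be sketched as follows. This is a minimal illustrative model, not the paper's implementation: the two-box scenario, the forgetting factor, the noiseless readings, and the uniformly random load fractions are all assumptions made here for demonstration; in the real system the fractions would come from tracking box positions over the sensor array.

```python
import numpy as np

def rls_update(w, P, phi, y, lam=0.99):
    """One recursive least squares step.

    w   : current weight estimates for the boxes in the group (kg)
    P   : inverse-correlation matrix of the estimator
    phi : fraction of each box's load borne by this sensor right now
          (derived from tracked box positions in the real system)
    y   : this sensor's reading (kg)
    lam : forgetting factor; values < 1 discount old measurements
    """
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)        # gain vector
    w = w + k * (y - phi @ w)            # correct estimate by the residual
    P = (P - np.outer(k, Pphi)) / lam    # update inverse correlation
    return w, P

# Illustrative simulation with hypothetical numbers:
rng = np.random.default_rng(0)
true_w = np.array([1.5, 2.5])            # two boxes travelling together, kg
w, P = np.zeros(2), np.eye(2) * 1e3      # vague prior: large initial P
for _ in range(200):
    phi = rng.uniform(0.0, 1.0, size=2)  # load fractions on one sensor
    y = phi @ true_w                     # noiseless sensor reading
    w, P = rls_update(w, P, phi, y)
```

After a couple of hundred noiseless measurements the estimates converge to the true per-box weights; with real sensor noise the same update averages it out over time.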

    An innovative vision system for industrial applications

    Unpublished doctoral thesis, read at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Tecnología Electrónica y de las Comunicaciones. Date of defense: 20-11-2015.

    Although computer vision systems play an important role in our society, their structure does not follow any standard. Implementing computer vision applications requires high-performance platforms, such as GPUs or FPGAs, and image sensors far more specialized than those of consumer electronics. Today, each manufacturer and research lab develops its own vision platform independently, without any inter-compatibility. This thesis introduces a new computer vision platform that can be used in a wide spectrum of applications. The characteristics of the platform were defined after the implementation of three different computer vision applications, based on a SOC, an FPGA and a GPU, respectively. As a result, a new modular platform has been defined with the following interchangeable elements: sensor, on-the-fly image processing pipeline, main processing unit, hardware acceleration unit and computer vision software stack. The thesis also presents an FPGA-synthesizable algorithm for performing geometric transformations on the fly, with a latency under 90 horizontal lines.

    All the software elements of this platform are released under Open Source licences; over the course of this thesis, more than 200 patches were contributed to and accepted into different Open Source projects, such as the Linux kernel, the Yocto Project and U-Boot, among others, promoting the ecosystem required to build a community around this work. The platform has been validated in an industrial product, the Qtechnology QT5022, and used in diverse industrial applications, demonstrating that a generic computer vision platform makes it possible to reuse elements and compare results objectively.
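The sub-frame latency claimed for the geometric transformation hinges on one property: if the transform's vertical reach is bounded by N lines, the hardware only needs an N-line buffer and can start emitting output before the frame has finished arriving. The sketch below models that idea in Python with a toy vertical shear; it is an illustrative software model of the latency bound, not the thesis's actual FPGA design, and the shear pattern is an assumption chosen for simplicity.

```python
def stream_shear(rows, max_disp):
    """Streaming vertical shear: output pixel (r, c) reads input
    row r + (c % (max_disp + 1)).  Because the vertical reach is
    bounded by max_disp, a window of max_disp + 1 input lines
    suffices and the pipeline latency is max_disp lines rather
    than a full frame -- the same kind of bound that yields a
    sub-90-line latency in hardware (toy model, not the FPGA design).
    """
    buf = {}                 # line number -> row data (bounded window)
    out_row = 0
    for r, row in enumerate(rows):     # rows arrive one per line time
        buf[r] = row
        # output row `out_row` is ready once input row
        # out_row + max_disp has arrived
        while out_row + max_disp <= r:
            width = len(buf[out_row])
            yield [buf[out_row + (c % (max_disp + 1))][c]
                   for c in range(width)]
            del buf[out_row]           # no future output reads this line
            out_row += 1

# Toy 6x4 "image" whose pixel values encode (row, column):
frame = [[r * 10 + c for c in range(4)] for r in range(6)]
warped = list(stream_shear(frame, max_disp=2))
```

At any instant the buffer holds at most three lines, regardless of image height, which is exactly what makes a hardware implementation with small on-chip line buffers feasible.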