    Advanced Line-Follower Robot

    In this research, an Advanced Line-Follower Robot (ALFR) was designed and built. The ALFR mainly consists of a sensor array (QTR-8A), high-performance microchips (TMS320F28335, TMS320F28069), and two motors (BLY172S-24V-4000). The ALFR keeps the basic function of a Line-Follower Robot (LFR) but applies more advanced control methods, such as Proportional Integral Derivative (PID) control, Active Disturbance Rejection Control (ADRC), and Iterative Learning Control (ILC). PID and ADRC have been tested on the ALFR. The ALFR control problems and the results are discussed in this thesis, and suggestions are provided for research on unsolved problems. In particular, mathematical models of the ALFR have been established for both position and speed control, and solutions based on PID, ADRC, and ILC are proposed and tested in simulation. The main objective of this thesis is realized in combining methods from control theory with practical constraints when formulating and solving problems in a physical process.
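
    The PID position loop described above can be sketched as follows. The gains, sample time, base speed, and line-offset reading are illustrative assumptions, not values from the thesis.

```python
class PID:
    """Discrete PID controller for the line-position error.

    Gains and sample time are illustrative, not taken from the thesis.
    """
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        # Accumulate the integral term and estimate the derivative
        # by backward difference over one sample period.
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# The correction is added to one wheel speed and subtracted from the
# other, steering the robot back toward the line.
pid = PID(kp=1.2, ki=0.05, kd=0.3, dt=0.01)
base_speed = 100.0
error = 0.4  # normalized line offset from the QTR-8A sensor array
u = pid.update(error)
left, right = base_speed + u, base_speed - u
```

    In a real loop the error would be recomputed from the sensor array on every sample, and the wheel speeds clamped to the motor driver's range.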

    Information System Prototyping of Strawberry Maturity Stages using Arduino Uno and TCS3200

    The strawberry is one of the subtropical fruit commodities grown in Indonesia. It has high economic value and attractive appeal as a red, fresh, sweet fruit. The color of a strawberry indicates its maturity stage, and the maturity stage affects postharvest quality. This research aimed to design and implement a tool that determines the maturity stage of strawberries using an Arduino Uno and a TCS3200 color sensor and displays the information on the web. The method included needs analysis and workflow analysis, and the design consisted of hardware design and network design. Testing covered the TCS3200 sensor, the LCD, the servo motor, and the web interface. The results of this study show that when a strawberry was declared mature by the TCS3200 color-detection sensor, the LCD displayed the corresponding text and the servo opened. The web system displayed the total information on the sorted strawberries.
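
    The maturity decision can be sketched as a simple ratio threshold on the TCS3200 color channels. The threshold and the sample readings below are illustrative assumptions, not the values used in the study.

```python
def classify_maturity(red, green, blue):
    """Classify strawberry maturity from raw color-channel readings.

    The 0.5 red-ratio threshold is an illustrative assumption.
    Returns 'mature' when red dominates the normalized color mix.
    """
    total = red + green + blue
    if total == 0:
        return "unknown"
    red_ratio = red / total
    return "mature" if red_ratio > 0.5 else "immature"

# A mostly-red reading is classified as mature:
classify_maturity(180, 60, 40)  # -> 'mature'
```

    On the actual hardware, the channel readings would come from the TCS3200's frequency output with the appropriate color filter selected, and a "mature" result would drive the LCD text and servo described in the abstract.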

    Extended dynamic range from a combined linear-logarithmic CMOS image sensor


    Design and development of autonomous robotic fish for object detection and tracking

    In this article, an autonomous robotic fish is designed for underwater operations such as object detection and tracking, along with collision avoidance. The computer-aided design model of the prototype robotic fish is created in SolidWorks® and exported as a stereolithography (STL) file to a MakerBot 3D printer, which manufactures the parts of the robotic fish from polylactic acid thermoplastic polymer. Precise maneuverability of the robotic fish is achieved by propulsion of a caudal fin, whose oscillation is controlled by a servomotor. A combination of visual and ultrasonic sensors is used to track the position and distance of the desired object with respect to the fish and to avoid obstacles. The robotic fish can detect an object up to a distance of 90 cm under normal exposure conditions. A computational fluid dynamics analysis is conducted to analyze the hydrodynamics (water flow rate and pressure) around the hull of the robotic fish and the drag force acting on it. A series of experimental results has shown the effectiveness of the designed underwater robotic fish.

    Autonomous Close Formation Flight of Small UAVs Using Vision-Based Localization

    As Unmanned Aerial Vehicles (UAVs) are integrated into the national airspace to comply with the 2012 Federal Aviation Administration Reauthorization Act, new civilian uses for robotic aircraft will emerge in addition to the more obvious military applications. One particular area of interest for UAV development is the autonomous cooperative control of multiple UAVs. In this thesis, a decentralized leader-follower control strategy is designed, implemented, and tested from the follower's perspective using vision-based localization. The tasks of localization and control were carried out with separate processing hardware dedicated to each task. First, software was written to estimate the relative state of a lead UAV in real time from video captured by a camera on board the following UAV. The software, written using OpenCV computer vision libraries and executed on an embedded single-board computer, uses the Efficient Perspective-n-Point algorithm to compute the 3-D pose from a set of 2-D image points. High-intensity red light-emitting diodes (LEDs) were affixed to specific locations on the lead aircraft's airframe to simplify the task of extracting the 2-D image points from video. Next, the following vehicle was controlled by modifying a commercially available, open-source, waypoint-guided autopilot to navigate using the relative state vector provided by the vision software. A custom Hardware-In-the-Loop (HIL) simulation station was set up and used to derive the required localization update rate for various flight patterns and levels of atmospheric turbulence. HIL simulation showed that it should be possible to maintain formation, with a vehicle separation of 50 ± 6 feet and localization estimates updated at 10 Hz, for a range of flight conditions. Finally, the system was implemented on low-cost remote-controlled aircraft and flight-tested, demonstrating formation convergence to 65.5 ± 15 feet of separation.
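
    The pose-from-LED-points step can be illustrated with a simplified Direct Linear Transform; this is a stand-in for the Efficient Perspective-n-Point algorithm the thesis actually uses, and the camera intrinsics and LED coordinates below are synthetic assumptions.

```python
import numpy as np

def dlt_pose(object_pts, image_pts, K):
    """Estimate camera pose from 3D-2D correspondences via the Direct
    Linear Transform (needs >= 6 non-coplanar points). A simplified
    stand-in for the EPnP step described above.
    """
    # Each correspondence contributes two linear equations in the 12
    # entries of the 3x4 projection matrix P.
    A = []
    for (X, Y, Z), (u, v) in zip(object_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)          # projection matrix, up to scale
    M = np.linalg.inv(K) @ P          # [R | t], up to scale
    M /= np.linalg.norm(M[:, 0])      # rotation columns have unit norm
    if M[2, 3] < 0:                   # enforce positive depth
        M = -M
    return M[:, :3], M[:, 3]          # rotation, translation

# Synthetic check: LEDs at assumed airframe positions, camera 5 m away.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1.0]])
leds = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                 [1, 1, 0.5], [0.5, 0.2, 0.8]])
t_true = np.array([0.2, -0.1, 5.0])
proj = (K @ (leds + t_true).T).T      # identity rotation, then translate
pixels = proj[:, :2] / proj[:, 2:3]
R, t = dlt_pose(leds, pixels, K)      # recovers R ~ I, t ~ t_true
```

    In the thesis's pipeline the 2-D points come from detecting the red LEDs in each video frame, and the recovered relative state is then fed to the follower's autopilot.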

    High-speed global shutter CMOS machine vision sensor with high dynamic range image acquisition and embedded intelligence

    High-speed imagers are required for industrial applications, traffic monitoring, robotics and unmanned vehicles, moviemaking, etc. Many of these applications also call for large spatial resolution, high sensitivity, and the ability to capture images with large intra-frame dynamic range. This paper reports an intelligent digital CIS image sensor with 5.2 Mpixels that delivers 12-bit fully corrected images at 250 fps. The new sensor embeds on-chip digital processing circuitry for a large variety of functions, including windowing; pixel binning; sub-sampling; combined windowing-binning-subsampling modes; fixed-pattern noise correction; fine gain and offset control; and color processing. These and other CIS functions are programmable through a simple four-wire serial port interface. Ministerio de Ciencia e Innovación IPT-2011-1625-43000

    Bio-Inspired Multi-Spectral Image Sensor and Augmented Reality Display for Near-Infrared Fluorescence Image-Guided Surgery

    Background: Cancer remains a major public health problem worldwide and poses a huge economic burden. Near-infrared (NIR) fluorescence image-guided surgery (IGS) uses molecular markers and imaging instruments to identify and locate tumors during surgical resection. Unfortunately, current state-of-the-art NIR fluorescence imaging systems are bulky and costly, and they lack both fluorescence sensitivity under surgical illumination and co-registration accuracy between multimodal images. Additionally, the monitor-based display units are disruptive to the surgical workflow and are suboptimal at indicating the 3-dimensional position of labeled tumors. These major obstacles have prevented the wide acceptance of NIR fluorescence imaging as the standard of care for cancer surgery. The goal of this dissertation is to enhance cancer treatment by developing novel image sensors and presenting the information to the physician on a holographic augmented reality (AR) display in intraoperative settings. Method: By mimicking the visual system of the Morpho butterfly, several single-chip, color-NIR fluorescence image sensors and systems were developed with CMOS technologies and pixelated interference filters. Using a holographic AR goggle platform, an NIR fluorescence IGS display system was developed. Optoelectronic evaluation was performed on the prototypes to assess the performance of each component, and small and large animal models were used to verify the overall effectiveness of the integrated systems at cancer detection. Result: The single-chip bio-inspired multispectral logarithmic image sensor I developed outperforms state-of-the-art NIR fluorescence imaging instruments on the main performance indicators. The image sensors achieve up to 140 dB dynamic range. The sensitivity under surgical illumination reaches 6108 V/(mW/cm2), up to 25 times higher than existing instruments, and the signal-to-noise ratio is up to 56 dB, 11 dB greater. 
    These enable high-sensitivity fluorescence imaging under surgical illumination. The pixelated interference filters enable temperature-independent co-registration accuracy between multimodal images. Pre-clinical trials with small animal models demonstrate that the sensor can achieve up to 95% sensitivity and 94% specificity with tumor-targeted NIR molecular probes. The holographic AR goggle provides the physician with a non-disruptive 3-dimensional display in the clinical setup. This is the first display system that co-registers a virtual image with the human eye and allows video-rate image transmission. The imaging system was tested in the veterinary operating room on canine patients with naturally occurring cancers. In addition, a time-domain pulse-width-modulation address-event-representation multispectral image sensor and a handheld multispectral camera prototype were developed. Conclusion: The major problems of current state-of-the-art NIR fluorescence imaging systems are successfully solved. With enhanced performance and user experience, the bio-inspired sensors and augmented reality display system give medical care providers much-needed technology to enable more accurate value-based healthcare.

    Proposed architectures and circuits for improving the dynamic range of vision-systems-on-chip designed in deep-submicron CMOS technologies

    The work presented in this thesis proposes new techniques for extending the dynamic range of electronic image sensors. We have directed our study toward providing this functionality on a single chip, that is, without any external hardware or software support, forming what is known as a Vision System on Chip (VSoC). The dynamic range of an electronic image sensor is defined as the ratio between the maximum and minimum measurable illumination. Two options exist to improve it: reduce the minimum measurable light by lowering the sensor's noise, or increase the maximum measurable light by extending the sensor's saturation limit. Chronologically, our first approach to improving the dynamic range was noise reduction. Several options can improve the system's noise figure of merit: reducing noise by using a CIS technology, or using dedicated circuits such as calibration or auto-zeroing. However, circuit techniques bring limitations that can only be resolved with non-standard technologies specially designed for this purpose. The CIS technology used targets improvements in the quality and capabilities of the photosensing process, such as sensitivity, noise, and color imaging. To study the technology's characteristics in more detail, a test chip was designed, which allowed the best options for future pixels to be identified. Nevertheless, despite generally satisfactory behavior, the dynamic-range measurements showed that improvement through CIS technology alone is very limited; the improved dark current of the sensor is not sufficient for our purpose. For further dynamic-range improvement, circuits must be included inside the pixel. 
    However, CIS technologies usually allow nothing more than NMOS transistors next to the photosensor, which is a serious restriction on the usable circuitry. As a result, the design of a dynamic-range-enhanced image sensor in CIS technology was set aside in favor of a standard technology, which gives more flexibility to pixel design. In standard technologies, substantial functionality can be introduced with in-pixel circuits, enabling advanced techniques for extending the saturation limit of image sensors. Two options arise for this goal: linear or compressive acquisition. Linear acquisition generates a large amount of data per pixel. For example, if the scene dynamic range is 120 dB, at least 20 bits/pixel are needed for a binary representation of that range, since log2(10^(120/20)) = 19.93. This would require substantial resources to process such a quantity of data, and large bandwidth to move it to the processing circuitry. To avoid these problems, high-dynamic-range image sensors usually opt for compressive acquisition of light. This implies two tasks: image capture and image compression. Capture takes place at the pixel level, in the photosensing device, while compression can be performed at the pixel level, at the system level, or by external post-processing. On the post-processing side, there is a research field that studies compressing high-dynamic-range scenes while preserving detail, producing a result suited to human perception on conventional low-dynamic-range monitors. This is called Tone Mapping, and it usually employs only 8 bits/pixel for image representation, since this is the standard for low-dynamic-range images. 
    Compressive-acquisition pixels, for their part, apply a compression that does not depend on the high-dynamic-range scene being captured, which implies either low compression or a loss of detail and contrast. To avoid these drawbacks, this work presents a compressive-acquisition pixel that applies a tone-mapping technique, allowing the capture of already-compressed images in a way optimized to preserve detail and contrast while producing a greatly reduced amount of data. Tone-mapping techniques are normally run as software post-processing on a computer, over uncompressed captured images containing a large amount of data; they have traditionally belonged to the field of computer graphics because of the large computational effort they require. However, we have developed a new tone-mapping algorithm specially adapted to exploit in-pixel circuits, requiring little computation outside the pixel array, which enables a vision system on a single chip. The new tone-mapping algorithm, a mathematical concept that can be simulated in software, has also been implemented on a chip. This hardware implementation required adaptations and advanced design techniques, which constitute in themselves another contribution of this work. Moreover, owing to the new functionality, modifications of the usual image characterization and capture methods were developed.
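
    The bit-depth argument in the abstract above (20 bits/pixel for a 120 dB scene) generalizes to any dynamic range. A minimal sketch, using the 20·log10 convention for light-intensity ratios from the text:

```python
import math

def bits_for_dynamic_range(db):
    """Bits needed for a linear binary encoding of a given dynamic
    range in decibels (20*log10 convention for intensity ratios)."""
    ratio = 10 ** (db / 20)          # max/min measurable illumination
    return math.ceil(math.log2(ratio))

bits_for_dynamic_range(120)  # -> 20 (log2(10^6) = 19.93, rounded up)
```

    The same formula gives 24 bits for a 140 dB scene, which is why high-dynamic-range sensors favor compressive acquisition over linear encoding.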

    Multi-robot Tethering Using Camera

    An autonomous multi-robot or swarm-robot system can perform various cooperative missions such as search and rescue, exploration of unknown or partially known areas, transportation, surveillance, defense, and firefighting. However, multi-robot applications often require a synchronized robotic configuration, a reliable communication system, and various sensors installed on each robot. This approach results in system complexity and a very high development cost.