    Vision Sensors and Edge Detection

    The Vision Sensors and Edge Detection book reflects a selection of recent developments in the area of vision sensors and edge detection. The book has two sections: the first presents vision sensors, with applications to panoramic vision sensors, wireless vision sensors, and automated vision sensor inspection; the second covers image processing techniques such as image measurements, image transformations, filtering, and parallel computing.

    Implementation of a distributed real-time video panorama pipeline for creating high quality virtual views

    Today, we are continuously looking for more immersive video systems. Such systems, however, require more content, which can be costly to produce. A full panorama covering the regions of interest can contain all the required information, but can be difficult to view in its entirety. In this thesis, we discuss a method for creating virtual views from a cylindrical panorama, allowing multiple users to create individual virtual cameras from the same panorama video. We discuss how this method can be used for video delivery, but the emphasis is on the creation of the initial panorama, which must be produced in real-time and with very high quality. We design and implement a prototype recording pipeline, installed at a soccer stadium as part of the Bagadus project, capable of producing 4K panorama videos from five HD cameras in real-time, with possibilities for further upscaling. We explain how the cylindrical panorama can be created with minimal computational cost and without visible seams. The cameras of our prototype system record video in the incomplete Bayer format, in which each pixel samples only one colour channel, and we investigate which debayering algorithms are best suited for recording multiple high-resolution video streams in real-time.
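    The warping step itself is standard: each panorama column corresponds to an azimuth on a cylinder, and each cylinder point is back-projected into one of the source cameras through that camera's intrinsics and orientation. The following sketch illustrates that mapping only; it is not the Bagadus code, the function name and parameters are hypothetical, and a real pipeline would add lens-distortion correction, seam selection, and blending.

        import numpy as np

        def cylinder_to_camera_maps(pan_w, pan_h, hfov, vfov, K, R):
            """Per-pixel lookup maps from a cylindrical panorama into one
            source camera (e.g. for cv2.remap). Illustrative sketch."""
            # Panorama column -> azimuth on the cylinder, row -> height.
            theta = (np.arange(pan_w) / pan_w - 0.5) * hfov
            h = (np.arange(pan_h) / pan_h - 0.5) * 2.0 * np.tan(vfov / 2.0)
            theta, h = np.meshgrid(theta, h)

            # Viewing ray through each point of a unit-radius cylinder.
            rays = np.stack([np.sin(theta), h, np.cos(theta)], axis=-1)

            cam = rays @ R.T   # rotate rays into the camera frame
            pix = cam @ K.T    # pinhole projection with intrinsics K
            with np.errstate(divide="ignore", invalid="ignore"):
                map_x = pix[..., 0] / pix[..., 2]
                map_y = pix[..., 1] / pix[..., 2]
            # Rays pointing away from this camera are invalid for it.
            map_x[cam[..., 2] <= 0] = -1
            map_y[cam[..., 2] <= 0] = -1
            return map_x.astype(np.float32), map_y.astype(np.float32)

    Because the camera rig is fixed, such maps can be precomputed once per camera, which is what keeps the per-frame stitching cost low enough for real-time operation.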

    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, like hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is receiving renewed attention to overcome the remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance, and the multiple disciplines it involves, a reference book on mediasync has become necessary, and this book fills that gap. In particular, it addresses key aspects of, and reviews the most relevant contributions within, the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and to approach the challenges behind ensuring the best mediated experiences by providing adequate synchronization between the media elements that constitute those experiences.

    NASA Tech Briefs, September 2012

    Topics covered include: Beat-to-Beat Blood Pressure Monitor; Measurement Techniques for Clock Jitter; Lightweight, Miniature Inertial Measurement System; Optical Density Analysis of X-Rays Utilizing Calibration Tooling to Estimate Thickness of Parts; Fuel Cell/Electrochemical Cell Voltage Monitor; Anomaly Detection Techniques with Real Test Data from a Spinning Turbine Engine-Like Rotor; Measuring Air Leaks into the Vacuum Space of Large Liquid Hydrogen Tanks; Antenna Calibration and Measurement Equipment; Glass Solder Approach for Robust, Low-Loss, Fiber-to-Waveguide Coupling; Lightweight Metal Matrix Composite Segmented for Manufacturing High-Precision Mirrors; Plasma Treatment to Remove Carbon from Indium UV Filters; Telerobotics Workstation (TRWS) for Deep Space Habitats; Single-Pole Double-Throw MMIC Switches for a Microwave Radiometer; On Shaft Data Acquisition System (OSDAS); ASIC Readout Circuit Architecture for Large Geiger Photodiode Arrays; Flexible Architecture for FPGAs in Embedded Systems; Polyurea-Based Aerogel Monoliths and Composites; Resin-Impregnated Carbon Ablator: A New Ablative Material for Hyperbolic Entry Speeds; Self-Cleaning Particulate Prefilter Media; Modular, Rapid Propellant Loading System/Cryogenic Testbed; Compact, Low-Force, Low-Noise Linear Actuator; Loop Heat Pipe with Thermal Control Valve as a Variable Thermal Link; Process for Measuring Over-Center Distances; Hands-Free Transcranial Color Doppler Probe; Improving Balance Function Using Low Levels of Electrical Stimulation of the Balance Organs; Developing Physiologic Models for Emergency Medical Procedures Under Microgravity; PMA-Linked Fluorescence for Rapid Detection of Viable Bacterial Endospores; Portable Intravenous Fluid Production Device for Ground Use; Adaptation of a Filter Assembly to Assess Microbial Bioburden of Pressurant Within a Propulsion System; Multiplexed Force and Deflection Sensing Shell Membranes for Robotic Manipulators; Whispering Gallery Mode Optomechanical Resonator; Vision-Aided Autonomous Landing and Ingress of Micro Aerial Vehicles; Self-Sealing Wet Chemistry Cell for Field Analysis; General MACOS Interface for Modeling and Analysis for Controlled Optical Systems; Mars Technology Rover with Arm-Mounted Percussive Coring Tool, Microimager, and Sample-Handling Encapsulation Containerization Subsystem; Fault-Tolerant, Real-Time, Multi-Core Computer System; Water Detection Based on Object Reflections; SATPLOT for Analysis of SECCHI Heliospheric Imager Data; Plug-in Plan Tool v3.0.3.1; Frequency Correction for MIRO Chirp Transformation Spectroscopy Spectrum; Nonlinear Estimation Approach to Real-Time Georegistration from Aerial Images; Optimal Force Control of Vibro-Impact Systems for Autonomous Drilling Applications; Low-Cost Telemetry System for Small/Micro Satellites; Operator Interface and Control Software for the Reconfigurable Surface System Tri-ATHLETE; and Algorithms for Determining Physical Responses of Structures Under Load

    Interactive mixed reality rendering in a distributed ray tracing framework

    The recent availability of interactive ray tracing opened the way for new applications and for improving existing ones in terms of quality. Since today's CPUs are still too slow for this purpose, the necessary computing power is obtained by connecting a number of machines and using distributed algorithms. Mixed reality rendering - the art of convincingly combining real and virtual parts into a new composite scene - needs a powerful rendering method to obtain photorealistic results. The ray tracing algorithm provides an excellent basis for photorealistic rendering, as well as advantages over other methods, so it is worth exploring its abilities for interactive mixed reality rendering. This thesis shows the applicability of interactive ray tracing for mixed reality (MR) and augmented reality (AR) applications on the basis of the OpenRT framework. Two extensions to the OpenRT system are introduced and serve as basic building blocks: streaming video textures and in-shader AR view compositing. Streaming video textures allow the real world to be included in interactive applications in the form of imagery. The AR view compositing mechanism is needed to fully exploit the advantages of modular shading in a ray tracer. A number of example applications from the entire spectrum of the Milgram Reality-Virtuality continuum illustrate the practical implications. An implementation of a classic AR scenario, inserting a virtual object into live video, shows how a differential rendering method can be used in combination with a custom-built real-time lightprobe device to capture the incident light and include it in the rendering process, achieving convincing shading and shadows. Another field of mixed reality rendering is the insertion of real actors into a virtual scene in real-time; two methods - video billboards and live 3D visual hull reconstruction - are discussed. The implementation of live mixed reality systems rests on a number of technologies besides rendering, and a comprehensive understanding of the related methods and hardware is necessary. Large parts of this thesis hence deal with technical implementations and design alternatives. A final summary discusses the benefits and drawbacks of interactive ray tracing for mixed reality rendering.
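    The differential rendering step mentioned above (in the spirit of Debevec's technique) reduces to a short compositing rule: render the modelled local scene twice, with and without the virtual object, and add the difference to the live camera frame so that synthetic shadows and colour bleeding carry over onto real surfaces. A minimal sketch with hypothetical inputs follows; it is not the OpenRT in-shader compositing itself.

        import numpy as np

        def differential_composite(camera, with_obj, without_obj, obj_mask):
            """camera, with_obj, without_obj: float images in [0, 1];
            obj_mask: 1.0 where the virtual object is directly visible."""
            # Outside the object, apply only the change it causes
            # (shadows, interreflections) to the real image.
            diff = camera + (with_obj - without_obj)
            # On the object itself, show the rendering directly.
            out = obj_mask * with_obj + (1.0 - obj_mask) * diff
            return np.clip(out, 0.0, 1.0)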

    Bio-Inspired Motion Vision for Aerial Course Control


    NASA Tech Briefs, August 2011

    Topics covered include: Miniature, Variable-Speed Control Moment Gyroscope; NBL Pistol Grip Tool for Underwater Training of Astronauts; HEXPANDO Expanding Head for Fastener-Retention Hexagonal Wrench; Diagonal-Axes Stage for Pointing an Optical Communications Transceiver; Improvements in Speed and Functionality of a 670-GHz Imaging Radar; IONAC-Lite; Large Ka-Band Slot Array for Digital Beam-Forming Applications; Development of a 150-GHz MMIC Module Prototype for Large-Scale CMB Radiation; Coupling Between Waveguide-Fed Slot Arrays; PCB-Based Break-Out Box; Multiple-Beam Detection of Fast Transient Radio Sources; Router Agent Technology for Policy-Based Network Management; Remote Asynchronous Message Service Gateway; Automatic Tie Pointer for In-Situ Pointing Correction; Jitter Correction; MSLICE Sequencing; EOS MLS Level 2 Data Processing Software Version 3; DspaceOgre 3D Graphics Visualization Tool; Metallization for Yb14MnSb11-Based Thermoelectric Materials; Solvent/Non-Solvent Sintering To Make Microsphere Scaffolds; Enhanced Fuel-Optimal Trajectory-Generation Algorithm for Planetary Pinpoint Landing; Self-Cleaning Coatings and Materials for Decontaminating Field-Deployable Land and Water-Based Optical Systems; Separation of Single-Walled Carbon Nanotubes with DEP-FFF; Li Anode Technology for Improved Performance; Post-Fragmentation Whole Genome Amplification-Based Method; Microwave Tissue Soldering for Immediate Wound Closure; Principles, Techniques, and Applications of Tissue Microfluidics; Robotic Scaffolds for Tissue Engineering and Organ Growth; Stress-Driven Selection of Novel Phenotypes; Method for Accurately Calibrating a Spectrometer Using Broadband Light; Catalytic Microtube Rocket Igniter; Stage Cylindrical Immersive Display; Vacuum Camera Cooler; Atomic Oxygen Fluence Monitor; Thermal Management Tools for Propulsion System Trade Studies and Analysis; Introduction to Physical Intelligence; Technique for Solving Electrically Small to Large Structures for Broadband Applications; Accelerated Adaptive MGS Phase Retrieval; Large Eddy Simulation Study for Fluid Disintegration and Mixing; Tropospheric Correction for InSAR Using Interpolated ECMWF Data and GPS Zenith Total Delay; Technique for Calculating Solution Derivatives With Respect to Geometry Parameters in a CFD Code; Acute Radiation Risk and BRYNTRN Organ Dose Projection Graphical User Interface; Probabilistic Path Planning of Montgolfier Balloons in Strong, Uncertain Wind Fields; Flight Simulation of ARES in the Mars Environment; Low-Outgassing Photogrammetry Targets for Use in Outer Space; Planning the FUSE Mission Using the SOVA Algorithm; Monitoring Spacecraft Telemetry Via Optical or RF Link; and Robust Thermal Control of Propulsion Lines for Space Missions

    Proof-of-concept of a single-point Time-of-Flight LiDAR system and guidelines towards integrated high-accuracy timing, advanced polarization sensing and scanning with a MEMS micromirror

    Integrated Master's dissertation in Engineering Physics (specialization area: Devices, Microsystems and Nanotechnologies).
    The core focus of the work reported herein is the realization of a functional Light Detection and Ranging (LiDAR) sensor that validates the direct Time-of-Flight (ToF) ranging concept, together with the acquisition of critical knowledge about the pivotal aspects limiting the sensor's performance, as groundwork for a realistic sensor targeted at automotive applications. The ToF LiDAR system is implemented through an architecture encompassing both optical and electronic functions, and is subsequently characterized under a sequence of test procedures usually applied in the benchmarking of LiDAR sensors. The design employs a hybrid edge-emitting laser diode (pulsed at 6 kHz, 46 ns temporal FWHM, 7 ns rise time; 919 nm wavelength with 5 nm FWHM), a PIN photodiode to detect the back-reflected radiation, a transamplification stage, and two Time-to-Digital Converters (TDCs) with leading-edge discrimination electronics to mark the transit time between emission and detection events. Furthermore, a flexible modular design is adopted, using two separate Printed Circuit Boards (PCBs) comprising the transmitter (TX) and the receiver (RX), i.e. detection and signal processing. The overall output beam divergence is 0.4º×1º, and an optical peak power of 60 W (87% overall throughput) is realized. The sensor is tested indoors from 0.56 to 4.42 meters, with the distance estimated directly from the pulse transit time. The precision within these working distances ranges from 4 cm to 7 cm, reflected in a Signal-to-Noise Ratio (SNR) between 12 dB and 18 dB. The design requires a calibration procedure to correct systematic errors in the range measurements, induced by two sources: a timing offset due to architecture-inherent differences in the optoelectronic paths, and a supplementary intensity-dependent bias introduced by the discrimination technique, denoted time-walk. The calibrated system achieves a mean accuracy of 1 cm. Two distinct target materials, a metallic automotive paint and a diffuse material, are used for characterization and performance evaluation; this selection is representative of two extremes of actual LiDAR applications. The optical and electronic characterization is thoroughly detailed, including the recognition of good agreement between empirical observations and simulations in ZEMAX, for the optical design, and in a SPICE software, for the electrical subsystem. The foremost limitation of the implemented design is identified as an outcome of the leading-edge discrimination. A Constant Fraction Discriminator addressing sub-millimetric accuracy is proposed to replace the previous signal-processing element; this modification is mandatory to virtually eliminate the aforementioned systematic, intensity-dependent bias in range sensing. A further crucial addition is a scanning mechanism to supply the Field-of-View (FOV) required for automotive usage; the opto-electromechanical guidelines to interface a MEMS micromirror scanner with the LiDAR sensor, achieving a 46º×17º FOV, are furnished.
    Ultimately, a proof of principle for the use of polarization in material classification for advanced processing is carried out, aiming to complement the ToF measurements. The original design is modified to include a variable wave retarder, allowing the simultaneous detection of orthogonal linear polarization states using a single detector. Material classification with polarization sensing is tested with the previously referred materials, culminating in 87% and 11% degrees of linear polarization retention from the metallic paint and the diffuse material, respectively, computed via the Stokes parameters. The procedure was independently validated under the same conditions with a micro-polarizer camera (92% and 13% polarization retention).
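    Two of the figures quoted above follow from compact formulas: range is obtained from the pulse transit time (the light travels out and back, so d = c*t/2, with a calibrated offset absorbing the systematic timing bias), and the degree of linear polarization follows from the first three Stokes parameters. The sketch below illustrates both; the four-analyzer Stokes estimate shown is the textbook formulation and stands in for, rather than reproduces, the thesis's single-detector, variable-retarder scheme.

        C = 299_792_458.0  # speed of light in m/s

        def tof_range(transit_time_s, offset_s=0.0):
            """Direct time-of-flight range estimate: d = c * t / 2."""
            return C * (transit_time_s - offset_s) / 2.0

        def degree_of_linear_polarization(i0, i90, i45, i135):
            """DoLP from intensities behind four linear analyzers."""
            s0 = i0 + i90      # total intensity
            s1 = i0 - i90      # horizontal vs. vertical component
            s2 = i45 - i135    # +45 deg vs. -45 deg component
            return (s1**2 + s2**2) ** 0.5 / s0

        # Example: a 30 ns round trip corresponds to about 4.5 m.
        print(tof_range(30e-9))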

    Using reconstructed visual reality in ant navigation research

    Insects have low-resolution eyes and a tiny brain, yet they continuously solve very complex navigational problems; an ability that underpins fundamental biological processes such as pollination and parental care. Understanding the methods they employ would have a profound impact on the fields of machine vision and robotics. As our knowledge of insect navigation grows, our physical, physiological and neural models get more complex and detailed, and testing these models requires increasingly sophisticated experiments. Evolution has optimised the animals to operate in their natural environment. To probe the fine details of the methods they utilise, we need to use natural visual scenery which, for experimental purposes, we must be able to manipulate arbitrarily. Performing physiological experiments on insects outside the laboratory is not practical, and our ability to modify the natural scenery for outdoor behavioural experiments is very limited. The solution is reconstructed visual reality: a projector that can present the visual aspect of the natural environment to the animal with high fidelity, taking the peculiarities of insect vision into account. While projectors have been used in insect research before, during my candidature I designed and built a projector specifically tuned to insect vision. To allow the ant to experience a full panoramic view, the projector completely surrounds her. The device (Antarium) is a polyhedral approximation of a sphere. It contains 20 thousand pixels made of light-emitting diodes (LEDs) that match the spectral sensitivity of Myrmecia. Insects have a much higher fusion frequency limit than humans; therefore the device has a very high flicker frequency (9 kHz) as well as a high frame rate (190 fps). In the Antarium the animal is placed on a trackball in the centre of the projector. To test the trackball and to collect reference data, outdoor experiments were performed in which ants were captured, tethered, and placed on the trackball. The apparatus with the ant on it was then placed at certain locations relative to the nest and the foraging tree, and the movements of the animal on the ball were recorded and analysed. The outdoor experiments proved that the trackball was well suited for our ants, and also provided the baseline behaviour reference for the subsequent Antarium experiments. To assess the Antarium, the natural habitat of the experimental animals was recreated as a 3-dimensional model. That model was then projected for the ants and their movements on the trackball were recorded, just like in the outdoor experiments. Initial feasibility tests were performed by projecting a static image matching what the animals experienced during the outdoor experiments. To assess whether the ant was orienting herself relative to the scene, we rotated the projected scene around her and monitored her response. Statistical methods were used to compare the outdoor and in-Antarium behaviour. The results proved that the concept was solid, but they also uncovered several shortcomings of the Antarium. Nevertheless, even with its limitations, the Antarium was used to perform experiments that would be very hard to do in a real environment. In one experiment the foraging tree was repositioned in, or deleted from, the scene to see whether the animals go to where the tree is or to where, by their knowledge, it should be. The results suggest the latter, but the absence or altered location of the foraging tree certainly had a significant effect on the animals.
    In another experiment the scene, including the sky, was re-coloured to see whether colour plays a significant role in navigation. The results indicate that even a very small amount of UV information improves the navigation of the animals to a statistically significant degree. To rectify the device limitations discovered during the experiments, a new, improved projector was designed and is currently being built.
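    Comparing the outdoor and in-Antarium headings statistically calls for circular statistics, since headings wrap around at 360 degrees. The sketch below shows the core of such a test (the function names are hypothetical, not the authors' analysis code): if the ant orients to the projected scene, rotating the scene by an angle phi should shift her mean heading by roughly phi.

        import numpy as np

        def circular_mean(angles_rad):
            """Mean direction of a set of headings, in radians."""
            return np.arctan2(np.mean(np.sin(angles_rad)),
                              np.mean(np.cos(angles_rad)))

        def heading_shift(before, after):
            """Signed change in mean heading, wrapped into [-pi, pi)."""
            d = circular_mean(after) - circular_mean(before)
            return (d + np.pi) % (2 * np.pi) - np.pi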