7 research outputs found

    Computer Vision Control for Phased Array Beam Steering

    This work demonstrates a proof of concept for a wireless access point that uses image identification and tracking algorithms to automate the electronic control of a phased antenna array. Phased arrays change the direction of their radiation electronically by adjusting the phase of the signal applied to the individual antenna elements of the array. This ability can improve a user’s connectivity to a wireless network by directing radiation from an access point to a user, provided that the user’s location is known. Open source image processing and machine learning libraries provided a basis for developing a Python program that determines the position of a target using a single camera. This program uses the position information acquired from the camera to calculate the phases required to steer the radiation of the array to the target. The Python program sends the required phases to another piece of software that controls the phases of the phased array. This software adjusts the phases of the antenna elements and steers the main beam. Experiments were conducted to evaluate the identification, tracking, and control capabilities of the system. Finally, a full system demonstration was performed to benchmark the wireless performance, study the trade-offs in performance for complexity, and compare the connectivity to the current standard in multi-antenna access points.
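The per-element phase computation described in this abstract can be sketched for a uniform linear array. This is a generic illustration, not the paper's actual code: the function name, the spacing parameter `d_over_lambda`, and the sign convention (beam angle measured from broadside) are all assumptions.

```python
import math

def steering_phases(n_elements, d_over_lambda, theta_deg):
    """Phase (radians, wrapped to [0, 2*pi)) for each element of a uniform
    linear array so that the main beam points theta_deg from broadside.

    d_over_lambda: element spacing as a fraction of the carrier wavelength.
    """
    theta = math.radians(theta_deg)
    # Progressive phase shift between adjacent elements.
    delta = -2.0 * math.pi * d_over_lambda * math.sin(theta)
    return [(n * delta) % (2.0 * math.pi) for n in range(n_elements)]
```

For broadside steering (0 degrees) every element gets the same phase; steering off-axis produces a linear phase progression that the array-control software would apply to the element phase shifters.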

    New Robust Obstacle Detection System Using Color Stereo Vision

    Intelligent transportation systems (ITS) are divided into intelligent infrastructure systems and intelligent vehicle systems. Intelligent vehicle systems are typically classified into three categories, namely 1) Collision Avoidance Systems; 2) Driver Assistance Systems; and 3) Collision Notification Systems. Obstacle detection is one of the crucial tasks for Collision Avoidance Systems and Driver Assistance Systems. Obstacle detection systems use vehicle-mounted sensors to detect obstructions, such as other vehicles, bicyclists, pedestrians, road debris, or animals, in a vehicle's path and alert the driver. Obstacle detection systems are proposed to help drivers see farther and therefore have more time to react to road hazards. These systems also give drivers a larger visibility area when visibility is reduced, such as at night or in fog, snow, or rain. Obstacle detection systems process data acquired from one or several sensors: radar Kruse et al. (2004), lidar Gao & Coifman (2006), monocular vision Lombardi & Zavidovique (2004), stereo vision Franke (2000), Bensrhair et al. (2002), Cabani et al. (2006b), Kogler et al. (2006), Woodfill et al. (2007), and vision fused with active sensors Gern et al. (2000), Steux et al. (2002), Mobus & Kolbe (2004), Zhu et al. (2006), Alessandretti et al. (2007), Cheng et al. (2007). It is clear now that most obstacle detection systems cannot work without vision. Typically, vision-based systems consist of cameras that provide gray-level images. When visibility conditions are reduced (night, fog, twilight, tunnel, snow, rain), vision systems are almost blind, and obstacle detection systems become less robust and reliable. To deal with the problem of reduced visibility conditions, infrared or color cameras can be used. Thermal imaging cameras were initially used by the military. Over the last few years, these systems have become accessible to the commercial market, and can be found in select 2006 BMW cars. 
For example, vehicle headlight systems provide between 75 and 140 meters of moderate illumination; at 90 kilometers per hour this means less than 4 seconds to react to hazards. With PathFindIR PathFindIR (n.d.) (a commercial system), a driver can have more than 15 seconds. Other systems still in the research stage assist drivers in detecting pedestrians Xu & Fujimura (2002), Broggi et al. (2004), Bertozzi et al. (2007). Color is appropriate to various visibility conditions and various environments. In Betke et al. (2000) and Betke & Nguyen (1998), Betke et al. have demonstrated that the tracking o
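The reaction-time figures quoted above follow from a one-line distance-over-speed computation: at 90 km/h a vehicle covers 25 m/s, so 75 m of illumination leaves 3 s, within the "less than 4 seconds" claimed. A trivial helper (the function name is ours, not from the chapter):

```python
def reaction_time_s(visible_distance_m, speed_kmh):
    """Seconds available to react to a hazard first visible at the given distance."""
    speed_ms = speed_kmh / 3.6  # convert km/h to m/s
    return visible_distance_m / speed_ms
```

By the same computation, a 15-second margin at 90 km/h corresponds to detecting hazards roughly 375 m ahead, which is the kind of range the thermal system is credited with.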

    Design And Implementation Of An Omnidirectional Mobile Robot Platform

    Thesis (M.Sc.) -- İstanbul Technical University, Institute of Science and Technology, 2008. In this study, an omnidirectional mobile robot with sufficient processing power, sensory units, and communication facilities for use as an application development platform for a wide range of academic research in robotics was designed and implemented. The base plane of the robot is attached onto two differential-drive platforms, giving four degrees of freedom to the base. This makes the robot able to move in any direction with proper control of the differential-drive platforms, giving the property of omnidirectionality. A method to reduce odometric errors and make accurate odometry-based positioning possible was also presented, which exploits geometric advantages particular to the robot's mechanical design. The hardware on the moving base consists of batteries, a camera moving in three axes, a dual-core DSP system, a Linux-based control card, wireless network and video connections, a graphical LCD, and a laser pointer moving in two axes. An algorithm, developed specifically for this work, that uses the laser and the camera to obtain three-dimensional distance measurements was also derived.
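The odometry-based positioning mentioned in this abstract builds on standard differential-drive dead reckoning. A minimal sketch of the textbook pose update for one such platform follows; this is a generic formulation, not the thesis's own error-reduction method, and the function name and parameters are illustrative.

```python
import math

def odometry_step(x, y, heading, d_left, d_right, wheel_base):
    """Dead-reckoning pose update for one differential-drive platform.

    d_left, d_right: distances travelled by the left and right wheels (m).
    wheel_base: lateral distance between the two wheels (m).
    Returns the updated (x, y, heading) pose.
    """
    d_center = (d_left + d_right) / 2.0          # forward motion of the midpoint
    d_theta = (d_right - d_left) / wheel_base    # change in heading
    # Integrate along the arc using the mid-step heading.
    x += d_center * math.cos(heading + d_theta / 2.0)
    y += d_center * math.sin(heading + d_theta / 2.0)
    return x, y, heading + d_theta
```

With two such platforms under one base, the robot's 4-DOF omnidirectional motion comes from commanding each platform's wheel pair independently; accumulated errors in these per-step updates are exactly what the thesis's geometric method aims to reduce.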