20,293 research outputs found

    Embedded video stabilization system on field programmable gate array for unmanned aerial vehicle

    Unmanned Aerial Vehicles (UAVs) equipped with lightweight, low-cost cameras have grown in popularity and enable new applications of UAV technology. However, video retrieved from small UAVs is typically of low quality due to high-frequency jitter. This thesis presents the development of a video stabilization algorithm implemented on a Field Programmable Gate Array (FPGA). The algorithm consists of three main processes that minimize the jitter: motion estimation, motion stabilization and motion compensation. Motion estimation uses block matching and Random Sample Consensus (RANSAC) to estimate the affine matrix that describes the motion between two consecutive frames. Then, parameter extraction, motion smoothing and motion vector correction, which form the motion stabilization stage, remove unwanted camera movement. Finally, motion compensation stabilizes two consecutive frames based on the filtered motion vectors. To preserve ground station mobility, the algorithm must be processed onboard the UAV in real time. The inherently parallel nature of video stabilization makes it well suited to FPGA implementation for real-time operation. The system is implemented on an Altera DE2-115 FPGA board. Fully dedicated hardware cores, without a Nios II soft processor, are designed in a stream-oriented architecture to accelerate the computation. Furthermore, a parallelized architecture consisting of block matching and highly parameterizable RANSAC processor modules shows that the proposed system achieves processing of up to 30 frames per second and a stabilization improvement of up to 1.78 in Interframe Transformation Fidelity. Hence, it is concluded that the proposed system is suitable for real-time video stabilization in UAV applications.
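    The Python sketch below illustrates, in software, the three-stage pipeline described in this abstract: block-matching motion estimation, RANSAC fitting of an affine model, and smoothing of the estimated motion. It is a simplified, hypothetical analogue of the thesis's dedicated FPGA cores; function names and parameters are illustrative assumptions, not taken from the thesis.

    import numpy as np

    def block_matching(prev, curr, block=16, search=4):
        """Exhaustive SAD block matching; returns matched (prev, curr) point pairs."""
        pairs = []
        h, w = prev.shape
        for y in range(search, h - block - search, block):
            for x in range(search, w - block - search, block):
                ref = prev[y:y + block, x:x + block].astype(int)
                best, best_dxy = np.inf, (0, 0)
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        cand = curr[y + dy:y + dy + block, x + dx:x + dx + block].astype(int)
                        sad = np.abs(ref - cand).sum()
                        if sad < best:
                            best, best_dxy = sad, (dx, dy)
                pairs.append(((x, y), (x + best_dxy[0], y + best_dxy[1])))
        return pairs

    def fit_affine(pairs):
        """Least-squares 2x3 affine matrix mapping previous-frame points to current-frame points."""
        src = np.array([p for p, _ in pairs], float)
        dst = np.array([q for _, q in pairs], float)
        A = np.hstack([src, np.ones((len(src), 1))])
        M, *_ = np.linalg.lstsq(A, dst, rcond=None)
        return M.T

    def ransac_affine(pairs, iters=200, thresh=1.5):
        """Fit affine models on random 3-point samples and keep the one with the most inliers."""
        rng = np.random.default_rng(0)
        src = np.array([p for p, _ in pairs], float)
        dst = np.array([q for _, q in pairs], float)
        A = np.hstack([src, np.ones((len(src), 1))])
        best_inliers = []
        for _ in range(iters):
            sample = [pairs[i] for i in rng.choice(len(pairs), 3, replace=False)]
            M = fit_affine(sample)
            err = np.linalg.norm(A @ M.T - dst, axis=1)
            inliers = [pr for pr, e in zip(pairs, err) if e < thresh]
            if len(inliers) > len(best_inliers):
                best_inliers = inliers
        return fit_affine(best_inliers if len(best_inliers) >= 3 else pairs)

    def smooth_trajectory(params, radius=5):
        """Moving-average smoothing of per-frame motion parameters (e.g. dx, dy, angle)."""
        kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
        return np.column_stack([np.convolve(params[:, i], kernel, mode='same')
                                for i in range(params.shape[1])])

    Motion compensation would then warp each frame by the difference between its accumulated motion and the smoothed trajectory; that warping step is omitted here for brevity.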

    A cloud robotics architecture for an emergency management and monitoring service in a smart city environment

    Cloud robotics is revolutionizing not only the robotics industry but also the ICT world, giving robots more storage and computing capacity and opening new scenarios that blend the physical and digital worlds. In this vision, new IT architectures are required to manage robots, retrieve data from them and create services through which users can interact with them. In this paper a possible implementation of a cloud robotics architecture for the interaction between users and UAVs is described. Using UAVs as monitoring agents, a service for fighting crime in urban environments is proposed, taking one step forward towards the idea of the smart city.

    Suboptimal eye movements for seeing fine details.

    Human eyes are never stable, even during attempts to maintain gaze on a visual target. Given the transient response characteristics of retinal ganglion cells, a certain amount of eye motion is required to encode information efficiently and to prevent neural adaptation. However, excessive motion of the eyes leads to insufficient exposure to the stimuli, which creates blur and reduces visual acuity. Normal miniature eye movements fall between these extremes, but it is unclear whether they are optimally tuned for seeing fine spatial details. We used a state-of-the-art retinal imaging technique with eye tracking to address this question. We sought to determine the optimal gain (stimulus/eye motion ratio) that corresponds to maximum performance in an orientation-discrimination task performed at the fovea. We found that miniature eye movements are tuned but may not be optimal for seeing fine spatial details.
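    To make the gain manipulation concrete, the small Python sketch below shows how a gaze-contingent display could displace the stimulus by a fraction (the gain) of the measured eye movement. The update rule and variable names are illustrative assumptions for exposition, not the study's actual code.

    def update_stimulus(eye_xy, eye_ref_xy, stim_ref_xy, gain):
        """Shift the stimulus by a fraction (gain) of the measured eye displacement."""
        dx = eye_xy[0] - eye_ref_xy[0]   # horizontal eye displacement since trial onset
        dy = eye_xy[1] - eye_ref_xy[1]   # vertical eye displacement since trial onset
        # gain = 0: stimulus fixed on the display (normal retinal slip);
        # gain = 1: stimulus follows the eye exactly (image stabilized on the retina).
        return (stim_ref_xy[0] + gain * dx, stim_ref_xy[1] + gain * dy)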

    Rethinking affordance

    Critical survey essay retheorising the concept of 'affordance' in the context of digital media. Lead article in a special issue on the topic, co-edited by the authors for the journal Media Theory.

    Understanding educational change: Agency-structure dynamics in a novel design and making environment

    This study investigates agency-structure dynamics in students' and teachers' social activity in a novel design and making environment in the context of the Finnish school system, which has recently undergone major curricular reform. Understanding that agency is an important mediator of educational change, we ask the following questions: How are agency-structure dynamics manifested in the social activity of students and their teachers in a novel design and making environment? How do agency-structure dynamics create possibilities and obstacles for educational change? The data comprise 65 hours of video recordings and field notes of the social activity of students aged 9-12 years (N = 94) and their teachers, collected over a period of one semester. Our study shows how the introduction of the novel learning environment created a boundary space in which traditional teacher-centered activity patterns interacted and came into tension with student-centered modes of teaching and learning. Our study reveals three distinctive agency-structure dynamics that illuminate how the agentive actions of both teachers and students at times stabilized existing teacher-centered practices and at other times ruptured and broke away from existing patterns, thus giving rise to possibilities for educational change.

    Design of a High-Speed Architecture for Stabilization of Video Captured Under Non-Uniform Lighting Conditions

    Video captured under shaky conditions suffers from vibrations. A robust algorithm that stabilizes the video by compensating for vibrations arising from the physical setting of the camera is presented in this dissertation. A very high-performance hardware architecture on Field Programmable Gate Array (FPGA) technology is also developed for the implementation of the stabilization system. Stabilization of video sequences captured under non-uniform lighting conditions begins with a nonlinear enhancement process. This improves the visibility of scenes captured by physical sensing devices, which have limited dynamic range; this limitation causes saturated regions of the image to shadow out the rest of the scene, so it is desirable to recover a more uniform scene that eliminates the shadows to a certain extent. Stabilization of video requires the estimation of global motion parameters. By obtaining reliable background motion, the video can be spatially transformed to the reference sequence, thereby eliminating the unintended motion of the camera. A reflectance-illuminance model for video enhancement is used in this research to improve the visibility and quality of the scene. With fast color space conversion, the computational complexity is reduced to a minimum. The basic video stabilization model is formulated and configured for hardware implementation. Such a model involves the evaluation of reliable features for tracking, motion estimation, and an affine transformation to map the display coordinates of the stabilized sequence. The multiplications, divisions and exponentiations are replaced by simple arithmetic and logic operations using improved log-domain computations in the hardware modules. On Xilinx's Virtex II 2V8000-5 FPGA platform, the prototype system consumes 59% of the logic slices, 30% of the flip-flops, 34% of the lookup tables, 35% of the embedded RAMs and two ZBT frame buffers. The system is capable of rendering 180.9 million pixels per second (mpps) and consumes approximately 30.6 watts of power at 1.5 volts. With a 1024×1024 frame, this throughput is equivalent to 172 frames per second (fps). Future work will optimize the performance-resource trade-off to meet the specific needs of applications and extend the model to the extraction and tracking of moving objects, as the model inherently encapsulates the attributes of spatial distortion and motion prediction to reduce complexity. With these parameters narrowing down the processing range, it is possible to achieve a minimum of 20 fps on desktop computers with Intel Core 2 Duo or Quad Core CPUs and 2 GB of DDR2 memory without dedicated hardware.
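    The idea of replacing multiplications, divisions and exponentiations with log-domain operations can be illustrated with a small software sketch. The following Python function is a hypothetical homomorphic-filtering analogue of the reflectance-illuminance enhancement, written under assumed parameter choices; it is not the dissertation's hardware design, but it shows how division becomes subtraction and power laws become multiplications once the data are moved into the log domain.

    import numpy as np

    def enhance_log_domain(luminance, alpha=1.0, beta=0.6, k=15, eps=1e-3):
        """Toy reflectance-illuminance enhancement computed in the log domain."""
        log_l = np.log2(luminance.astype(float) + eps)        # move to log domain
        # Estimate log-illuminance as a local box average of log-luminance.
        pad = k // 2
        padded = np.pad(log_l, pad, mode='edge')
        log_illum = np.zeros_like(log_l)
        for dy in range(k):
            for dx in range(k):
                log_illum += padded[dy:dy + log_l.shape[0], dx:dx + log_l.shape[1]]
        log_illum /= k * k
        # In the log domain the division L / illuminance becomes a subtraction, and
        # the power laws reflectance**alpha and illuminance**beta become multiplications.
        log_reflectance = log_l - log_illum
        return np.exp2(alpha * log_reflectance + beta * log_illum)  # back to linear domain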

    A solar magnetic and velocity field measurement system for Spacelab 2: The Solar Optical Universal Polarimeter (SOUP)

    The Solar Optical Universal Polarimeter (SOUP) flew on the shuttle mission Spacelab 2 (STS-51F) in August 1985 and collected historic solar observations. SOUP is the only solar telescope on either a spacecraft or a balloon that has delivered long sequences of diffraction-limited images. These movies led to several discoveries about the solar atmosphere, which were published in scientific journals. After Spacelab 2, reflights were planned on the shuttle Sunlab mission, which was cancelled after the Challenger disaster, and on balloon flights, which were also cancelled for funding reasons. In the meantime, the instrument was used in a productive program of ground-based observing, which collected excellent scientific data and served as instrument tests. Given here is an overview of the history of the SOUP program, the scientific discoveries, and the instrument design and performance.