24 research outputs found

    Improving the Organization of Work of Managers and Specialists of an Enterprise (the Case of the «Гомельское производственное управление» Branch of РПУП «Гомельоблгаз»)

    In this paper we propose a new approach to real-time view-based pose recognition and interpolation. Pose recognition is particularly useful for identifying camera views in databases, video sequences, video streams, and live recordings. All of these applications require a fast pose recognition process, in many cases at video real-time rates. It should further be possible to extend the database with new material, i.e., to update the recognition system online. The method that we propose is based on P-channels, a special kind of information representation which combines advantages of histograms and local linear models. Our approach is motivated by its similarity to information representation in biological systems, but its main advantage is its robustness against common distortions such as clutter and occlusion. The recognition algorithm consists of three steps: (1) low-level image features for color and local orientation are extracted at each point of the image; (2) these features are encoded into P-channels by combining similar features within local image regions; (3) the query P-channels are compared to a set of prototype P-channels in a database using a least-squares approach. The algorithm is applied in two scene registration experiments with fisheye camera data, one for pose interpolation from synthetic images and one for finding the nearest view in a set of real images. The method compares favorably to SIFT-based methods, in particular concerning interpolation. The method can be used for initializing pose-tracking systems, either when starting the tracking or when the tracking has failed and the system needs to re-initialize. Due to its real-time performance, the method can also be embedded directly into the tracking system, allowing a sensor fusion unit to choose dynamically between frame-by-frame tracking and pose recognition.
    Original Publication: Michael Felsberg and Johan Hedborg, Real-Time View-Based Pose Recognition and Interpolation for Tracking Initialization, 2007, Journal of Real-Time Image Processing, (2), 2-3, 103-115. http://dx.doi.org/10.1007/s11554-007-0044-y Copyright: Springer Science+Business Media
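    As a rough sketch of step (3), the snippet below matches a query descriptor against a set of prototype descriptors by least squares and interpolates the associated poses from the resulting weights. All names are illustrative; the P-channel encoding itself is not reproduced here.

```python
import numpy as np

def match_and_interpolate(query, prototypes, poses):
    """Least-squares matching of a query descriptor against prototypes.

    query      : (d,)  descriptor of the current frame
    prototypes : (n, d) one stored descriptor per prototype view
    poses      : (n, k) pose parameters of the prototype views

    Illustrative sketch only; the P-channel encoding is not shown.
    """
    # Solve prototypes.T @ w ~ query for the mixing weights w.
    w, *_ = np.linalg.lstsq(prototypes.T, query, rcond=None)
    nearest = int(np.argmax(w))               # most strongly weighted view
    w = np.clip(w, 0.0, None)                 # keep only positive weights
    interpolated_pose = (w / (w.sum() + 1e-12)) @ poses
    return nearest, interpolated_pose
```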

    Structure and motion estimation from rolling shutter video

    The majority of consumer-quality cameras sold today have CMOS sensors with rolling shutters. In a rolling shutter camera, images are read out row by row, and thus each row is exposed during a different time interval. A rolling-shutter exposure causes geometric image distortions when either the camera or the scene is moving, and this causes state-of-the-art structure and motion algorithms to fail. We demonstrate a novel method for solving the structure and motion problem for rolling-shutter video. The method relies on exploiting the continuity of the camera motion, both between frames and across a frame. We demonstrate the effectiveness of our method by controlled experiments on real video sequences. We show, both visually and quantitatively, that our method outperforms standard structure and motion, and is more accurate and efficient than a two-step approach that first rectifies the images and then applies conventional structure and motion.
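    A minimal sketch of the underlying idea, assuming a known pose at the start and at the end of a frame's readout (the paper's actual parameterisation and optimisation are not shown): every image row gets its own camera pose by interpolating position linearly and rotation spherically over the readout time.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    dot = np.dot(q0, q1)
    if dot < 0.0:            # take the shorter path around the sphere
        q1, dot = -q1, -dot
    if dot > 0.9995:         # nearly identical: fall back to linear blend
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def row_pose(row, n_rows, p_start, p_end, q_start, q_end):
    """Camera pose at the time image row `row` was exposed, assuming the
    readout spans the whole frame and the motion is smooth. Illustrative
    only; not the estimator from the paper."""
    t = row / float(n_rows - 1)
    position = (1 - t) * p_start + t * p_end      # linear in position
    rotation = slerp(q_start, q_end, t)           # spherical in rotation
    return position, rotation
```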

    Motion and Structure Estimation From Video

    Digital-camera-equipped cell phones were introduced in Japan in 2001; they quickly became popular, and by 2003 they outsold the entire stand-alone digital camera market. In 2010 sales passed one billion units and the market is still growing. Another trend is the rising popularity of smartphones, which has led to a rapid development of the processing power on a phone, and many units sold today bear a close resemblance to a personal computer. The combination of a powerful processor and a camera that is easily carried in your pocket opens up a large field of interesting computer vision applications.
    The core contribution of this thesis is the development of methods that allow an imaging device such as the cell phone camera to estimate its own motion and to capture the observed scene structure. One of the main focuses of this thesis is real-time performance, where a real-time constraint does not only result in shorter processing times, but also allows for user interaction. In computer vision, structure from motion refers to the process of estimating camera motion and 3D structure by exploring the motion in the image plane caused by the moving camera.
    This thesis presents several methods for estimating camera motion. Given the assumption that a set of images has known camera poses associated with them, we train a system to solve the camera pose very fast for a new image. For the cases where no a priori information is available, a fast minimal case solver is developed. The solver uses five points in two camera views to estimate the cameras' relative position and orientation. This type of minimal case solver is usually used within a RANSAC framework. In order to increase accuracy and performance, a refinement to the random sampling strategy of RANSAC is proposed. It is shown that the new scheme doubles the performance for the five-point solver used on video data. For larger systems of cameras a new Bundle Adjustment method is developed which is able to handle video from cell phones.
    Demands for reduction in size, power consumption and price have led to a redesign of the image sensor. As a consequence the sensors have changed from a global shutter to a rolling shutter, where a rolling shutter image is acquired row by row. Classical structure from motion methods are modeled on the assumption of a global shutter, and a rolling shutter can severely degrade their performance. One of the main contributions of this thesis is a new Bundle Adjustment method for cameras with a rolling shutter. The method accurately models the camera motion during image exposure with an interpolation scheme for both position and orientation.
    The developed methods are not restricted to cell phones only, but are rather applicable to any type of mobile platform that is equipped with cameras, such as an autonomous car or a robot. The domestic robot comes in many flavors, everything from vacuum cleaners to service and pet robots. A robot equipped with a camera that is capable of estimating its own motion while sensing its environment, like the human eye, can provide an effective means of navigation for the robot. Many of the presented methods are well suited for robots, where low latency and real-time constraints are crucial in order to allow them to interact with their environment. Virtual Photo Set (VPS)
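    The five-point solver itself is beyond the scope of an abstract, but the RANSAC loop it is typically embedded in can be sketched as follows. `five_point_solver` and `sampson_error` are placeholders for routines not shown here, and plain uniform sampling is used rather than the refined sampling strategy proposed in the thesis.

```python
import numpy as np

def ransac_essential(x1, x2, five_point_solver, sampson_error,
                     threshold=1e-3, iterations=500, rng=None):
    """Generic RANSAC loop around a minimal five-point solver.

    x1, x2 : (n, 2) corresponding image points in two calibrated views.
    five_point_solver(p1, p2) -> list of candidate essential matrices.
    sampson_error(E, x1, x2)  -> (n,) residuals.
    Both callables are placeholders; only the sampling logic is shown.
    """
    rng = rng or np.random.default_rng()
    best_E, best_inliers = None, np.zeros(len(x1), dtype=bool)
    for _ in range(iterations):
        idx = rng.choice(len(x1), size=5, replace=False)   # minimal sample
        for E in five_point_solver(x1[idx], x2[idx]):
            inliers = sampson_error(E, x1, x2) < threshold
            if inliers.sum() > best_inliers.sum():
                best_E, best_inliers = E, inliers
    return best_E, best_inliers
```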

    Practical Implementation of Dust Control

    One of the most significant health hazards in the construction industry is dust, especially quartz dust. Preventing it in the built environment is vital for maintaining a healthy and safe construction site. For this reason it has been addressed both through legislation and through guidance issued by the authorities. The official regulations are, however, general in nature and difficult to interpret from the construction site's point of view. The aim of this thesis is therefore to study and offer practical means, methods and tools for implementing dust control, so that it can be carried out simply and effectively. The thesis reviews the most important points of the legislation and the instructions issued by the authorities, and on this basis guidelines have been drawn up for the different work phases. The main emphasis of the thesis is on the instruction cards for controlling quartz dust created by the Finnish Institute of Occupational Health (Työterveyslaitos), together with literature sources. Tools and protective measures have been surveyed from various equipment manufacturers, retailers and machine rental companies. In addition, example models of a site dust-control layout plan and of task-specific instruction cards have been produced for site planning. As measurement methods, the condition monitors used at NCC's Herttuankulma sites in Turku are presented, for obtaining real-time and reliable results on dust control.

    GPGPU: Image Processing on Graphics Cards

    GPGPU is a collective term for research involving general computation on graphics cards. A modern graphics card typically provides more than ten times the computational power of an ordinary PC processor. This is a result of the high demands for speed and image quality in computer games. This thesis investigates the possibility of exploiting this computational power for image processing purposes. Three well-known methods were implemented on a graphics card: FFT (Fast Fourier Transform), KLT (Kanade-Lucas-Tomasi point tracking) and the generation of scale pyramids. All algorithms were successfully implemented, and they are three to ten times faster than the corresponding optimized CPU implementations.
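    Of the three methods, the scale pyramid is the simplest to state. The sketch below is a CPU reference for the same computation; the thesis implements it on the graphics card, which is not shown here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_pyramid(image, levels=4, sigma=1.0):
    """CPU reference for a Gaussian scale pyramid: each level is a blurred,
    half-resolution copy of the previous one. Illustrative only; the thesis
    runs the equivalent computation on the GPU."""
    pyramid = [image.astype(np.float32)]
    for _ in range(levels - 1):
        blurred = gaussian_filter(pyramid[-1], sigma)
        pyramid.append(blurred[::2, ::2])      # subsample by a factor of 2
    return pyramid
```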

    Pose Estimation and Structure Analysis of Image Sequences

    Autonomous navigation for ground vehicles has many challenges. Autonomous systems must be able to self-localise, avoid obstacles and determine navigable surfaces. This thesis studies several aspects of autonomous navigation with a particular emphasis on vision, motivated by it being a primary component for navigation in many high-level biological organisms.
    The key problem of self-localisation or pose estimation can be solved through analysis of the changes in appearance of rigid objects observed from different view points. We therefore describe a system for structure and motion estimation for real-time navigation and obstacle avoidance. With the explicit assumption of a calibrated camera, we have studied several schemes for increasing the accuracy and speed of the estimation. The basis of most structure and motion pose estimation algorithms is a good point tracker. However, point tracking is computationally expensive and can occupy a large portion of the CPU resources. In this thesis we show how a point tracker can be implemented efficiently on the graphics processor, which results in faster tracking of points and leaves the CPU available to carry out additional processing tasks.
    In addition we propose a novel view interpolation approach that can be used effectively for pose estimation given previously seen views. In this way, a vehicle will be able to estimate its location by interpolating previously seen data. Navigation and obstacle avoidance may be carried out efficiently using structure and motion, but only within a limited range from the camera. In order to increase this effective range, additional information needs to be incorporated, more specifically the location of objects in the image. For this, we propose a real-time object recognition method based on P-channel matching, which may be used to improve navigation accuracy at distances where structure estimation is unreliable. Diplec
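    The view-interpolation idea can be illustrated roughly as follows: the pose of a new view is estimated as a distance-weighted average of the poses of the most similar stored views. Descriptor and pose representations here are placeholders, not the P-channel machinery used in the thesis.

```python
import numpy as np

def interpolate_pose(query_desc, stored_descs, stored_poses, k=3):
    """Estimate the pose of a new view from the k most similar stored views.

    query_desc   : (d,)   descriptor of the current view
    stored_descs : (n, d) descriptors of previously seen views
    stored_poses : (n, m) pose parameters of those views

    Illustrative placeholder for the view-interpolation scheme; the thesis
    uses P-channel representations and a least-squares formulation.
    """
    dists = np.linalg.norm(stored_descs - query_desc, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-9)    # closer views count more
    weights /= weights.sum()
    return weights @ stored_poses[nearest]
```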

    Command and Control of Special Operations

    Special operations forces have, for more than half a century, grown in importance as a complement to the traditional combat forces. During this period relatively little research has been conducted on how the command (planning, preparation and execution) of special operations at the tactical level should be carried out. In 1993 William H. McRaven published "The Theory of Special Operations", in which he created a theory for special operations at the tactical level. This theory has had a great impact within the special operations forces of the Western world and is to this day the only, and largely unchallenged, theory describing how special operations in the form of advanced combat operations should be designed. The theory is based on special operations carried out between 1940 and 1976. Developments in the world since 1976 have naturally also affected the special operations forces, which today are largely employed differently, have access to new technology and in several respects apply different tactics. The adversary has also changed and nowadays often consists of non-state organisations with an entirely different behaviour and attitude towards applicable laws and conventions.
    The purpose of this study is to examine the conditions for special operations forces to use McRaven's theory to achieve success in today's special operations. We answer this through three case studies of special operations carried out in 1993, 2000 and 2011. The cases examined differ in several respects, for example the executing nation, the purpose, the type of adversary and the tasks. The results of the study show that McRaven's theory is still valid to a high degree; applying the theory therefore provides good conditions for success in today's special operations as well. We are convinced that the thesis can be useful and interesting for the Swedish Armed Forces (Försvarsmakten), and above all the special operations forces, regarding how McRaven's theory can be applied today.

    KLT Tracking Implementation on the GPU

    The GPU is the main processing unit on a graphics card. A modern GPU typically provides more than ten times the computational power of an ordinary PC processor. This is a result of the high demands for speed and image quality in computer games. This paper investigates the possibility of exploiting this computational power for tracking points in image sequences. Point tracking is used in many computer vision tasks, such as tracking moving objects, structure from motion, face tracking, etc. The algorithm was successfully implemented on the GPU and a large speed-up was achieved.
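    For reference, one translation-only Lucas-Kanade (KLT) update for a single point can be written compactly as below. This is a CPU sketch; the paper's contribution is running many such updates in parallel on the GPU, which is not shown.

```python
import numpy as np

def klt_step(I, J, gx, gy, x, y, win=7):
    """One translation-only Lucas-Kanade update for a point at (x, y).

    I, J   : previous and current grayscale frames (float arrays)
    gx, gy : spatial gradients of I
    Returns the displacement (dx, dy) that moves the window in I towards
    its best match in J. Assumes the window lies inside the image and is
    textured enough for the 2x2 system to be invertible.
    """
    r = win // 2
    ys, xs = slice(y - r, y + r + 1), slice(x - r, x + r + 1)
    Ix, Iy = gx[ys, xs].ravel(), gy[ys, xs].ravel()
    It = (J[ys, xs] - I[ys, xs]).ravel()        # temporal difference
    # Normal equations of the Lucas-Kanade system  G d = e.
    G = np.array([[Ix @ Ix, Ix @ Iy],
                  [Ix @ Iy, Iy @ Iy]])
    e = -np.array([Ix @ It, Iy @ It])
    dx, dy = np.linalg.solve(G, e)
    return dx, dy
```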

    Resource management analysis at the prehospital emergency care unit in north-western Skåne

    The purpose of this study is to investigate the preparedness at the prehospital emergency care unit in north-western Skåne. Measuring preparedness is important to ensure that the ability to respond to emergency calls is satisfactory. To do this for north-western Skåne, historical data from 2015 was extracted from SOS Alarm's database. It was used to calculate preparedness using workload and coverage as measurements. The workload was calculated by taking the busy periods and comparing them to the ambulances' working times. The coverage was calculated by defining neighbouring stations to cover for each station and then finding the number of hours when there was no ambulance at either station. These calculations show that two of the six stations in north-western Skåne are in need of improvement. To increase the preparedness to a good level, resources will have to be added at the stations in question. These resources would be new ambulances. There is a possibility to relocate ambulances from stations within the district, but that would lead to worsened preparedness for the stations from which these ambulances were taken.
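    The two measures can be computed directly from interval data. The sketch below is a simplified illustration; the variable names are made up and do not follow the SOS Alarm database schema.

```python
def workload(busy_intervals, shift_hours):
    """Fraction of the total ambulance working time spent on calls.

    busy_intervals : list of (start_hour, end_hour) busy periods
    shift_hours    : total scheduled ambulance hours in the same period
    Simplified sketch; not tied to any real database schema.
    """
    busy = sum(end - start for start, end in busy_intervals)
    return busy / shift_hours

def uncovered_hours(station_free_hours, neighbour_free_hours, horizon):
    """Number of hours during which neither the station nor its designated
    neighbour has an idle ambulance available.

    *_free_hours : sets of hour indices with at least one idle ambulance
    horizon      : total number of hours considered
    """
    covered = station_free_hours | neighbour_free_hours
    return horizon - len(covered)
```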