
    Text Detection In Indonesian Identity Card Based On Maximally Stable Extremal Regions

    Most Indonesian organizations, governmental or otherwise, require their members to provide their identity card (E-KTP) as a legal document, collected into a database. These image collections are typically used for manual verification. Because each person acquires the document image with their own device, the images vary in acquisition angle. This causes problems for text recognition by OCR software, particularly in the text-detection stage, where orientation and noise degrade accuracy. Such cases make text detection more complex than a simple vertical projection profile of black pixels can handle. This research proposes a method to improve text detection in identity documents by first correcting the orientation and then forming text regions from MSER regions. The orientation is corrected using lines produced by the Progressive Probabilistic Hough Transform; MSER then yields all candidate regions, and horizontal RLSA acts as a connector between those candidates. The orientation-correction strategy achieves an average margin of error of 0.377° (in a 360° system), and the text detection method reaches 84.49% accuracy under the best conditions.
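The horizontal RLSA "connector" step and the Hough-line orientation fix described above can be sketched in a few lines. This is a minimal, illustrative sketch, not the paper's implementation; the gap threshold and function names are placeholders.

```python
import math

def horizontal_rlsa(row, threshold):
    """Run-Length Smoothing Algorithm on one binary row: fill runs of
    background pixels (0) no longer than `threshold` that lie between
    two foreground pixels (1), merging nearby character regions."""
    out = list(row)
    last_one = None  # index of the most recent foreground pixel
    for i, v in enumerate(row):
        if v == 1:
            if last_one is not None and 0 < i - last_one - 1 <= threshold:
                for j in range(last_one + 1, i):
                    out[j] = 1
            last_one = i
    return out

def skew_angle(x1, y1, x2, y2):
    """Angle (degrees) of a detected Hough line segment; rotating the
    image by the negative of this angle deskews the document."""
    return math.degrees(math.atan2(y2 - y1, x2 - x1))
```

A usage example: a row `[1,0,0,1,...]` with `threshold=2` merges the first gap into a single run, while longer gaps stay separate.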

    Parking lot monitoring system using an autonomous quadrotor UAV

    The main goal of this thesis is to develop a drone-based parking lot monitoring system using low-cost hardware and open-source software. Similar to wall-mounted surveillance cameras, a drone-based system can monitor parking lots without affecting the flow of traffic, while also offering the mobility of patrol vehicles. The Parrot AR Drone 2.0 is the quadrotor drone used in this work due to its modularity and cost efficiency. Video and navigation data (including GPS) are communicated to a host computer over a Wi-Fi connection. The host computer analyzes the navigation data using a custom flight control loop to determine the control commands sent to the drone. A new license plate recognition pipeline identifies vehicle license plates in video received from the drone.
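The custom flight control loop is not specified in the abstract; a common minimal choice is a per-axis proportional (P) controller that maps position error to a clamped actuator command. The gain and limit below are illustrative assumptions, not values from the thesis.

```python
def p_control(position, target, gain=0.5, limit=1.0):
    """One axis of a proportional controller: command = gain * error,
    clamped to the actuator range [-limit, limit]. For a quadrotor this
    would be run once per axis (e.g. pitch from x-error, roll from
    y-error) on every navdata update."""
    error = target - position
    command = gain * error
    return max(-limit, min(limit, command))
```

For example, an error of 1.0 m yields a command of 0.5, while a 10 m error saturates at the clamp value 1.0.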

    Assessment of Driver's Attention to Traffic Signs through Analysis of Gaze and Driving Sequences

    A driver's behavior is one of the most significant factors in Advanced Driver Assistance Systems. One area that has received little study is how observant drivers are in seeing and recognizing traffic signs. In this contribution, we present a system that uses the location where a driver is looking (the point of gaze) to determine whether the driver has seen a sign. Our system detects and classifies traffic signs inside the driver's attentional visual field to identify whether the driver has seen them. From the quantitative information this stage provides, our system can determine how observant of traffic signs a driver is. For detection, we combine the Maximally Stable Extremal Regions algorithm with color information, using a binary linear Support Vector Machine classifier and Histogram of Oriented Gradients features. In the classification stage, we use a multi-class Support Vector Machine classifier, again with Histogram of Oriented Gradients features. Beyond detecting and recognizing traffic signs, our system determines whether a sign lies inside the driver's attentional visual field: if it does, the driver has kept their gaze on the sign and seen it; if it does not, the driver did not look at the sign and the sign was missed.
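The attentional-visual-field test described above reduces, in its simplest form, to a point-in-circle check: the sign counts as seen if its centre falls within some radius of the gaze point. This circular-field model and the radius are simplifying assumptions for illustration; the paper's actual field model is not given here.

```python
import math

def sign_in_visual_field(gaze, sign_center, radius):
    """True if the detected sign's centre (x, y) lies within `radius`
    pixels of the point of gaze, i.e. inside a circular attentional
    visual field centred on the gaze point."""
    dx = sign_center[0] - gaze[0]
    dy = sign_center[1] - gaze[1]
    return math.hypot(dx, dy) <= radius
```

In practice the gaze point comes from the eye tracker and the sign centre from the detection stage, both in the same image coordinate frame.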

    Text localization and recognition in natural scene images

    Text localization and recognition (text spotting) in natural scene images is an interesting task with many practical applications. Algorithms for text spotting may be used to help visually impaired subjects navigate unknown environments; to build autonomous driving systems that automatically avoid collisions with pedestrians or automatically identify speed limits and warn the driver about possible infractions; and to ease or solve some tedious and repetitive data-entry tasks that are still carried out manually by humans. While Optical Character Recognition (OCR) from scanned documents is a solved problem, the same cannot be said for text spotting in natural images. In fact, this latter class of images contains plenty of difficult situations that text-spotting algorithms need to deal with in order to reach acceptable recognition rates. During my PhD research I focused on the development of novel systems for text localization and recognition in natural scene images. The two main works from these three years of PhD study are presented in this thesis: (i) in the first work I propose a hybrid system that exploits the key ideas of region-based and connected-components (CC)-based text localization approaches to localize uncommon fonts and writings in natural images; (ii) in the second work I describe a novel deep-based system that exploits Convolutional Neural Networks and enhanced stable CCs to achieve good text spotting results on challenging data sets. During the development of both methods, my focus has always been on maintaining an acceptable computational complexity and high reproducibility of the achieved results.
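Connected-components analysis, one of the two key ingredients of the hybrid localizer, can be sketched as a breadth-first labelling of a binary mask. This is the generic textbook version for illustration, not the thesis's enhanced stable CC extraction.

```python
from collections import deque

def label_components(grid):
    """4-connected component labelling of a binary grid (list of rows of
    0/1). Returns (number of components, label grid), where each
    foreground pixel gets the id of its component (1, 2, ...)."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and labels[r][c] == 0:
                count += 1                      # start a new component
                labels[r][c] = count
                queue = deque([(r, c)])
                while queue:                    # flood-fill its pixels
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return count, labels
```

In a text localizer, each labelled component would then be filtered by geometric and stability criteria before grouping into candidate words.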

    Design and development of DrawBot using image processing

    Extracting text from an image and reproducing it can often be a laborious task. We took it upon ourselves to solve this problem. Our work aims to design a robot that can perceive an image shown to it and reproduce it on any given area as directed. The robot first takes an input image and performs image-processing operations to improve its readability. The text in the image is then recognized by the program. Points for each letter are extracted, inverse kinematics is computed for each point in MATLAB/Simulink, and the angles by which the servo motors should move are found and stored on the Arduino. Using these angles, the control algorithm runs on the Arduino and the letters are drawn.
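The inverse-kinematics step, done in MATLAB/Simulink in the original work, has a closed-form solution for a planar two-link arm. The sketch below assumes a two-joint geometry (the robot's actual arm is not specified) and includes a forward-kinematics function to verify the solution.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Elbow-down inverse kinematics for a planar two-link arm with
    link lengths l1, l2. Returns joint angles (theta1, theta2) in
    radians, or None if (x, y) is outside the arm's reach."""
    d2 = x * x + y * y
    cos_t2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_t2 <= 1.0:
        return None                        # target unreachable
    t2 = math.acos(cos_t2)                 # elbow angle
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                        l1 + l2 * math.cos(t2))
    return t1, t2

def forward(t1, t2, l1, l2):
    """Forward kinematics: end-effector position for given joint angles.
    Used here to check that the IK solution lands on the target."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y
```

Running the solver for each pen-stroke point and converting the resulting angles to servo commands is the role the Arduino plays in the system described above.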

    Real-time object detection using monocular vision for low-cost automotive sensing systems

    This work addresses the problem of real-time object detection in automotive environments using monocular vision. The focus is on real-time feature detection, tracking, depth estimation using monocular vision and finally, object detection by fusing visual saliency and depth information. Firstly, a novel feature detection approach is proposed for extracting stable and dense features even in images with very low signal-to-noise ratio. This methodology is based on image gradients, which are redefined to take account of noise as part of their mathematical model. Each gradient is based on a vector connecting a negative to a positive intensity centroid, where both centroids are symmetric about the centre of the area for which the gradient is calculated. Multiple gradient vectors define a feature with its strength being proportional to the underlying gradient vector magnitude. The evaluation of the Dense Gradient Features (DeGraF) shows superior performance over other contemporary detectors in terms of keypoint density, tracking accuracy, illumination invariance, rotation invariance, noise resistance and detection time. The DeGraF features form the basis for two new approaches that perform dense 3D reconstruction from a single vehicle-mounted camera. The first approach tracks DeGraF features in real-time while performing image stabilisation with minimal computational cost. This means that despite camera vibration the algorithm can accurately predict the real-world coordinates of each image pixel in real-time by comparing each motion-vector to the ego-motion vector of the vehicle. The performance of this approach has been compared to different 3D reconstruction methods in order to determine their accuracy, depth-map density, noise-resistance and computational complexity. The second approach proposes the use of local frequency analysis of gradient features for estimating relative depth. 
This novel method is based on the fact that DeGraF gradients can accurately measure local image variance with subpixel accuracy. It is shown that the local frequency by which the centroid oscillates around the gradient window centre is proportional to the depth of each gradient centroid in the real world. The lower computational complexity of this methodology comes at the expense of depth map accuracy as the camera velocity increases, but it is at least five times faster than the other evaluated approaches. This work also proposes a novel technique for deriving visual saliency maps by using Division of Gaussians (DIVoG). In this context, saliency maps express how different each image pixel is from its surrounding pixels across multiple pyramid levels. This approach is shown to be both fast and accurate when evaluated against other state-of-the-art approaches. Subsequently, the saliency information is combined with depth information to identify salient regions close to the host vehicle. The fused map allows faster detection of high-risk areas where obstacles are likely to exist. As a result, existing object detection algorithms, such as the Histogram of Oriented Gradients (HOG), can execute at least five times faster. In conclusion, through a step-wise approach, computationally expensive algorithms have been optimised or replaced by novel methodologies to produce a fast object detection system that is aligned to the requirements of the automotive domain.
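The gradient-from-centroids idea behind DeGraF can be illustrated, in simplified form, with an intensity-weighted centroid: its offset from the patch centre points towards the brighter side of the patch, giving a coarse gradient direction. This is an illustration of the idea only, not the authors' exact model, which uses paired positive and negative centroids symmetric about the window centre.

```python
def intensity_centroid(patch):
    """Intensity-weighted centroid (cx, cy) of a 2-D patch given as a
    list of rows of non-negative intensities. The offset of the
    centroid from the geometric centre points towards brighter pixels,
    like a coarse gradient direction."""
    total = sum(sum(row) for row in patch)
    if total == 0:
        return (0.0, 0.0)                  # flat patch: no gradient
    cy = sum(y * sum(row) for y, row in enumerate(patch)) / total
    cx = sum(x * v for row in patch for x, v in enumerate(row)) / total
    return (cx, cy)
```

For a vertical step edge with the bright column on the right, the centroid shifts towards larger x while staying vertically centred, as expected for a horizontal gradient.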

    Detection and identification of registration and fishing gear in vessels

    Illegal, unreported and unregulated (IUU) fishing is a global menace to both marine ecosystems and sustainable fisheries. IUU products often come from fisheries lacking conservation and management measures, which allows bycatch limits to be violated or catches to go unreported. To counteract this issue, some countries have adopted vessel monitoring systems (VMS) to track and monitor the activities of fishing vessels. The VMS approach is not flawless, and as such there are still known cases of IUU fishing. The present work is part of the PT2020 SeeItAll project of the company Xsealence and was included in INOV's tasks, in which a monitoring system using video cameras in ports (the non-boarded system) was developed to detect vessel registrations; this system records the time at which a vessel enters or leaves the port. A second system (the boarded system) works with a camera placed on each vessel: a machine-learning algorithm, together with a CCTV system, detects and records fishing activities for later comparison with the vessel's fishing report.