11 research outputs found
Review of Traffic Sign Detection and Recognition Techniques
Text, as one of the most influential inventions of humankind, has played an important role in human life since ancient times. The rich and precise information embodied in text is very useful in a wide range of vision-based applications; therefore, text detection and recognition in natural scenes have become important and active research topics in computer vision and document analysis. Traffic sign detection and recognition is a field of applied computer vision research concerned with the automatic detection and classification, or recognition, of traffic signs in scene images acquired from a moving vehicle. Driving is a task based on visual information processing. Traffic signs define a visual language interpreted by drivers. They convey much information necessary for effective driving: they describe the current traffic situation, define the right of way, and prohibit or permit certain directions. This paper discusses various detection and recognition schemes.
All you need is a second look: Towards Tighter Arbitrary shape text detection
Deep learning-based scene text detection methods have progressed
substantially over the past years. However, there remain several problems to be
solved. Generally, long curve text instances tend to be fragmented because of
the limited receptive field size of CNN. Besides, simple representations using
rectangle or quadrangle bounding boxes fall short when dealing with more
challenging arbitrary-shaped texts. In addition, the scale of text instances
varies greatly, which makes accurate prediction with a single segmentation
network difficult. To address these problems, we propose
a two-stage segmentation based arbitrary text detector named \textit{NASK}
(\textbf{N}eed \textbf{A} \textbf{S}econd loo\textbf{K}). Specifically,
\textit{NASK} consists of a Text Instance Segmentation network named
\textit{TIS} (the first stage), a Text RoI Pooling module and a Fiducial pOint
eXpression module termed \textit{FOX} (the second stage). Firstly,
\textit{TIS} conducts instance segmentation to obtain rectangle text proposals
with a proposed Group Spatial and Channel Attention module (\textit{GSCA}) to
augment the feature expression. Then, Text RoI Pooling transforms these
rectangles to the fixed size. Finally, \textit{FOX} is introduced to
reconstruct text instances with a tighter representation using the
predicted geometrical attributes including text center line, text line
orientation, character scale and character orientation. Experimental results on
two public benchmarks including \textit{Total-Text} and \textit{SCUT-CTW1500}
have demonstrated that the proposed \textit{NASK} achieves state-of-the-art
results.
Comment: 5 pages, 6 figures
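The Text RoI Pooling step described above, which maps variable-sized rectangular proposals onto a fixed size, can be sketched in plain NumPy. The pooling output size, the box format, and the use of max pooling here are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def roi_pool_fixed(feature_map, box, out_h=8, out_w=32):
    """Max-pool a rectangular RoI of a 2-D feature map down to a fixed
    (out_h, out_w) grid, as in generic RoI pooling. `box` is
    (x0, y0, x1, y1) in feature-map coordinates (illustrative)."""
    x0, y0, x1, y1 = box
    roi = feature_map[y0:y1, x0:x1]
    h, w = roi.shape
    out = np.empty((out_h, out_w), dtype=roi.dtype)
    # Split the RoI into an out_h x out_w grid of bins and take the
    # maximum inside each bin; the max() guard keeps every bin
    # at least one pixel wide.
    ys = np.linspace(0, h, out_h + 1).astype(int)
    xs = np.linspace(0, w, out_w + 1).astype(int)
    for i in range(out_h):
        for j in range(out_w):
            bin_ = roi[ys[i]:max(ys[i + 1], ys[i] + 1),
                       xs[j]:max(xs[j + 1], xs[j] + 1)]
            out[i, j] = bin_.max()
    return out

fm = np.arange(40 * 60, dtype=np.float32).reshape(40, 60)
pooled = roi_pool_fixed(fm, (5, 3, 50, 20))
print(pooled.shape)  # (8, 32)
```

Whatever the proposal's size, the output grid is constant, which is what lets the second-stage module operate on a fixed-shape input.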
All You Need Is Boundary: Toward Arbitrary-Shaped Text Spotting
Recently, end-to-end text spotting, which aims to detect and recognize text
in cluttered images simultaneously, has attracted growing interest in computer
vision. Different from existing approaches that formulate text
detection as bounding box extraction or instance segmentation, we localize a
set of points on the boundary of each text instance. With the representation of
such boundary points, we establish a simple yet effective scheme for end-to-end
text spotting, which can read the text of arbitrary shapes. Experiments on
three challenging datasets, including ICDAR2015, TotalText and COCO-Text,
demonstrate that the proposed method consistently surpasses the
state-of-the-art in both scene text detection and end-to-end text recognition
tasks.
Comment: Accepted to AAAI 2020
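The boundary representation described above can be illustrated by resampling a text polygon into a fixed number of equally spaced points along its perimeter; the point count and the sampling scheme are assumptions for illustration, not the paper's exact design:

```python
import math

def sample_boundary_points(polygon, n=14):
    """Resample a closed polygon (list of (x, y) vertices) into n
    points spaced equally along its perimeter -- a fixed-length
    boundary-point representation (illustrative only)."""
    # Pair each vertex with the next one, wrapping around to close
    # the polygon, and measure every edge.
    edges = list(zip(polygon, polygon[1:] + polygon[:1]))
    lengths = [math.dist(a, b) for a, b in edges]
    perimeter = sum(lengths)
    step = perimeter / n
    points, travelled, target = [], 0.0, 0.0
    for (a, b), length in zip(edges, lengths):
        # Emit every sample whose arc-length position falls on this edge.
        while target < travelled + length and len(points) < n:
            t = (target - travelled) / length
            points.append((a[0] + t * (b[0] - a[0]),
                           a[1] + t * (b[1] - a[1])))
            target += step
        travelled += length
    return points

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
pts = sample_boundary_points(square, n=8)
print(len(pts))  # 8
```

A fixed-length point set like this is what makes the boundary easy to regress with a network head and easy to feed to a recognizer, regardless of how curved the underlying text is.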
Mask TextSpotter: An End-to-End Trainable Neural Network for Spotting Text with Arbitrary Shapes
Recently, models based on deep neural networks have dominated the fields of
scene text detection and recognition. In this paper, we investigate the problem
of scene text spotting, which aims at simultaneous text detection and
recognition in natural images. An end-to-end trainable neural network model for
scene text spotting is proposed. The proposed model, named Mask TextSpotter,
is inspired by the newly published work Mask R-CNN. Different from previous
methods that also accomplish text spotting with end-to-end trainable deep
neural networks, Mask TextSpotter takes advantage of a simple and smooth
end-to-end learning procedure, in which precise text detection and recognition
are acquired via semantic segmentation. Moreover, it is superior to previous
methods in handling text instances of irregular shapes, for example, curved
text. Experiments on ICDAR2013, ICDAR2015 and Total-Text demonstrate that the
proposed method achieves state-of-the-art results in both scene text detection
and end-to-end text recognition tasks.
Comment: To appear in ECCV 2018
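Reading text instances out of a segmentation map, as the abstract above describes, can be illustrated minimally by extracting one axis-aligned box per connected component of a binary mask. This pure-NumPy flood-fill sketch is a stand-in for the idea, not the paper's actual read-out:

```python
import numpy as np

def masks_to_boxes(mask):
    """Turn a binary segmentation mask into axis-aligned boxes
    (x0, y0, x1, y1), one per 4-connected component -- a minimal
    stand-in for reading instances out of a segmentation map."""
    mask = mask.astype(bool).copy()
    boxes = []
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if not mask[i, j]:
                continue
            # Flood-fill this component, tracking its extent.
            stack, y0, x0, y1, x1 = [(i, j)], i, j, i, j
            mask[i, j] = False
            while stack:
                y, x = stack.pop()
                y0, x0 = min(y0, y), min(x0, x)
                y1, x1 = max(y1, y), max(x1, x)
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx]:
                        mask[ny, nx] = False
                        stack.append((ny, nx))
            boxes.append((x0, y0, x1, y1))
    return boxes

m = np.zeros((6, 10), dtype=np.uint8)
m[1:3, 1:4] = 1   # first text blob
m[4:6, 6:9] = 1   # second text blob
print(masks_to_boxes(m))  # [(1, 1, 3, 2), (6, 4, 8, 5)]
```

Because each instance is recovered from pixels rather than from a rectangle predicted up front, irregular shapes such as curved text survive this read-out intact, which is the advantage the abstract claims.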
Design and Implementation of a Raspberry Pi-Based Direction Sign Recognition System Using the OCR (Optical Character Recognition) Method
Direction signs are one of the road facilities that give drivers and other
road users guidance or information about the direction to take or the location
of the destination city, complete with its name and the direction in which it
lies. Direction signs are needed so that drivers can stay focused on the road
while driving. However, drivers often pass traffic signs without reading the
message they carry, so a system is needed that can process images of direction
signs and deliver their information to the driver as speech, letting the
driver keep their eyes on the road. In this research, therefore, a Raspberry
Pi-based direction sign recognition system was built using the OCR (Optical
Character Recognition) method. The system design starts with the hardware, a
Raspberry Pi and a camera. The software is written in the Python programming
language using the OpenCV library. The system first filters out all colors
other than green, because direction signs are green; it then searches the
image for rectangular regions, and finally processes the letter characters and
arrow directions. The last design stage converts the letters detected and
recognized in the previous steps into speech. After the design was completed,
the system was implemented, then tested and analyzed. The system was tested by
detecting letter characters and arrow directions and converting them into
speech. The minimum time to turn an image into speech was 4.7 seconds, the
maximum 8.02 seconds, and the average 6.402 seconds. Based on the results, it
can be concluded that the Optical Character Recognition method is able to
recognize the detected images in the training data, which speeds up the
system's recognition of letter characters in the detected images.
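The first stage of the pipeline described above, isolating the green sign, can be sketched in plain NumPy with a simple channel-dominance threshold. The actual system uses OpenCV on a Raspberry Pi, and the threshold values here are guesses for illustration, not those used in the study:

```python
import numpy as np

def green_mask(rgb):
    """Keep only pixels where green clearly dominates red and blue --
    a crude stand-in for the colour-filtering stage that isolates
    green direction signs (thresholds are illustrative guesses)."""
    # Work in a signed integer type so the comparisons cannot
    # overflow uint8 arithmetic.
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (g > 80) & (g > r + 30) & (g > b + 30)

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3] = (20, 150, 30)   # a green sign-coloured patch
img[0, 0] = (200, 200, 200)     # a grey background pixel
mask = green_mask(img)
print(int(mask.sum()))  # 4
```

After masking, the study's remaining stages would operate only on the surviving region: locate a rectangular contour, run OCR on the characters and arrows inside it, and synthesize the result as speech.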