35 research outputs found
Rectification of document and scene text images based on alignment properties
Thesis (Ph.D.) -- Seoul National University Graduate School, Department of Electrical and Computer Engineering, 2017. 8.
The optical character recognition (OCR) of text images captured by cameras plays an important role in scene understanding.
However, OCR on camera-captured images is still considered a challenging problem, even after successful text detection (localization).
This is mainly due to the geometric distortions caused by page curve and perspective view, so rectification has become an essential pre-processing step for recognition.
Accordingly, many text image rectification methods have been proposed that recover a fronto-parallel view from a single distorted image.
Recently, many researchers have focused on the properties of the well-rectified text.
In this respect, this dissertation presents novel alignment properties for text image rectification, which are encoded into the proposed cost functions.
By minimizing the cost functions, the transformation parameters for rectification are obtained.
In detail, they are applied to three topics: document image dewarping, scene text rectification, and curved surface dewarping in real scene.
First, a document image dewarping method is proposed based on the alignments of text-lines and line segments.
Conventional text-line based document dewarping methods have problems when handling complex layouts and/or very few text-lines; the latter case usually arises when photos, graphics, and/or tables occupy a large portion of the input.
Hence, for the robust document dewarping, the proposed method uses line segments in the image in addition to the aligned text-lines.
Based on the assumption and observation that all transformed line segments remain straight (line-to-line mapping), and that many of them are horizontally or vertically aligned in well-rectified images, the proposed method encodes these properties into the cost function in addition to the text-line based cost.
By minimizing the function, the proposed method obtains the transformation parameters for page curve, camera pose, and focal length, which are used for document image rectification. Considering that line segment directions often contain outliers and that some text-lines are mis-detected, the overall algorithm is designed in an iterative manner: at each step, the proposed method removes the text-lines and line segments that are not well aligned, and then minimizes the cost function with the updated information.
Experimental results show that the proposed method is robust to the variety of page layouts.
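The fit-then-prune loop described above can be sketched in a simplified form. The snippet below substitutes a single straight-line alignment term for the dissertation's full cost function (page curve, camera pose, focal length); the function names and the residual threshold are illustrative, not the dissertation's code.

```python
import numpy as np

def fit_line(points):
    # Least-squares fit of y = a*x + b to the current inlier set.
    a, b = np.polyfit(points[:, 0], points[:, 1], 1)
    return a, b

def iterative_outlier_removal(points, threshold=1.0):
    """Alternate between fitting the model and discarding the worst
    mis-aligned observation, mirroring the iterative scheme above."""
    inliers = points
    while True:
        a, b = fit_line(inliers)
        residuals = np.abs(inliers[:, 1] - (a * inliers[:, 0] + b))
        worst = int(residuals.argmax())
        if residuals[worst] < threshold or len(inliers) <= 2:
            return (a, b), inliers
        inliers = np.delete(inliers, worst, axis=0)
```

In the dissertation's setting the "model" is the full set of rectification parameters and the residuals are the text-line and line-segment alignment costs, but the alternation between minimization and outlier removal has this shape.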
This dissertation also presents a method for scene text rectification. Conventional scene text rectification methods have mainly exploited glyph properties: the characters of many languages have horizontal/vertical strokes and some symmetric shapes.
However, since these methods consider only the shape properties of individual characters, without considering the alignments between characters, they work well only on images with a single character and yield mis-aligned results on images with multiple characters.
In order to alleviate this problem, the proposed method explicitly imposes alignment constraints on rectified results. To be precise, character alignments as well as glyph properties are encoded in the proposed cost function, and the transformation parameters are obtained by minimizing the function.
Also, in order to encode the alignments of characters into the cost function, the proposed method separates the text into individual characters using a projection profile method before optimizing the cost function. Then, top and bottom lines are estimated using least-squares line fitting with RANSAC. The overall algorithm performs character segmentation, line fitting, and rectification iteratively.
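The top/bottom line estimation step can be illustrated with a generic RANSAC line fit over character extremity points. This is a textbook sketch under the usual RANSAC assumptions (sample two points, count the consensus set, refit on the best set); the iteration count and inlier tolerance are illustrative.

```python
import random
import numpy as np

def ransac_line(points, n_iter=200, tol=1.0, seed=0):
    """RANSAC fit of y = a*x + b: repeatedly hypothesize a line from two
    random points, keep the hypothesis with the largest consensus set,
    then refit by least squares on that set."""
    rng = random.Random(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.sample(range(len(points)), 2)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue  # skip degenerate vertical samples
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = np.abs(points[:, 1] - (a * points[:, 0] + b)) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final least-squares refit on the consensus set.
    a, b = np.polyfit(points[best_inliers, 0], points[best_inliers, 1], 1)
    return a, b, best_inliers
```

Running this once on the top extremities and once on the bottom extremities of the segmented characters yields the two alignment lines used by the cost function.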
Since the cost function is non-convex and involves many variables, this dissertation also develops an optimization scheme based on the Augmented Lagrange Multiplier method.
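The Augmented Lagrange Multiplier approach has the standard general form below; the dissertation's specific cost terms and constraints are not reproduced here, so read f and c generically. For a cost f(p) over the transformation parameters p subject to constraints c(p) = 0, one minimizes

```latex
\mathcal{L}_{\rho}(\mathbf{p}, \boldsymbol{\lambda})
  = f(\mathbf{p})
  + \boldsymbol{\lambda}^{\top} c(\mathbf{p})
  + \frac{\rho}{2} \left\lVert c(\mathbf{p}) \right\rVert_2^2,
\qquad
\boldsymbol{\lambda} \leftarrow \boldsymbol{\lambda} + \rho\, c(\mathbf{p})
```

alternating minimization over p with the multiplier update for λ, where ρ is a penalty weight. The quadratic penalty keeps each subproblem well-conditioned even though f itself is non-convex.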
This dissertation evaluates the proposed method on real and synthetic text images and experimental results show that the proposed method achieves higher OCR accuracy than the conventional approach and also yields visually pleasing results.
Finally, the proposed method is extended to curved surface dewarping in real scenes.
In real scenes, there are many circular objects, such as medicine bottles or beverage cans, whose curved surfaces can be modeled as Generalized Cylindrical Surfaces (GCS). These curved surfaces contain significant amounts of text and figures; however, their text has an irregular structure compared to documents. Therefore, conventional dewarping methods based on the properties of well-rectified text have problems rectifying them.
Based on the observation that many curved surfaces include well-aligned line segments (e.g., object boundary lines or barcodes), the proposed method rectifies the curved surfaces by exploiting the proposed line segment terms.
Experimental results on a range of images with curved surfaces of circular objects show that the proposed method performs rectification robustly.

1 Introduction
1.1 Document image dewarping
1.2 Scene text rectification
1.3 Curved surface dewarping in real scene
1.4 Contents
2 Related work
2.1 Document image dewarping
2.1.1 Dewarping methods using additional information
2.1.2 Text-line based dewarping methods
2.2 Scene text rectification
2.3 Curved surface dewarping in real scene
3 Document image dewarping
3.1 Proposed cost function
3.1.1 Parametric model of dewarping process
3.1.2 Cost function design
3.1.3 Line segment properties and cost function
3.2 Outlier removal and optimization
3.2.1 Jacobian matrix of the proposed cost function
3.3 Document region detection and dewarping
3.4 Experimental results
3.4.1 Experimental results on text-abundant document images
3.4.2 Experimental results on non-conventional document images
3.5 Summary
4 Scene text rectification
4.1 Proposed cost function for rectification
4.1.1 Cost function design
4.1.2 Character alignment properties and alignment terms
4.2 Overall algorithm
4.2.1 Initialization
4.2.2 Character segmentation
4.2.3 Estimation of the alignment parameters
4.2.4 Cost function optimization for rectification
4.3 Experimental results
4.4 Summary
5 Curved surface dewarping in real scene
5.1 Proposed curved surface dewarping method
5.1.1 Pre-processing
5.2 Experimental results
5.3 Summary
6 Conclusions
Bibliography
Abstract (Korean)
Learning to Read by Spelling: Towards Unsupervised Text Recognition
This work presents a method for visual text recognition without using any
paired supervisory data. We formulate the text recognition task as one of
aligning the conditional distribution of strings predicted from given text
images, with lexically valid strings sampled from target corpora. This enables fully automated, unsupervised learning from just line-level text images and unpaired text-string samples, obviating the need for large aligned datasets. We present detailed analysis of various aspects of the proposed
method, namely - (1) impact of the length of training sequences on convergence,
(2) relation between character frequencies and the order in which they are
learnt, (3) generalisation ability of our recognition network to inputs of
arbitrary lengths, and (4) impact of varying the text corpus on recognition
accuracy. Finally, we demonstrate excellent text recognition accuracy on both
synthetically generated text images, and scanned images of real printed books,
using no labelled training examples.
Bridging the Gap Between People, Mobile Devices, and the Physical World
Human-computer interaction (HCI) is being revolutionized by computational design and artificial intelligence. As the diversity of user interfaces shifts from personal desktops to mobile and wearable devices, yesterday's tools and interfaces are insufficient to meet the demands of tomorrow's devices. This dissertation describes my research on leveraging different physical channels (e.g., vibration, light, capacitance) to enable novel interaction opportunities. We first introduce FontCode, an information embedding technique for text documents. Given a text document with specific fonts, our method can embed user-specified information (e.g., URLs, metadata, etc.) in the text by perturbing the glyphs of text characters while preserving the text content. The embedded information can later be retrieved using a smartphone in real time. Then, we present Vidgets, a family of mechanical widgets, specifically push buttons and rotary knobs, that augment mobile devices with tangible user interfaces. When these widgets are attached to a mobile device and a user interacts with them, the nonlinear mechanical response of the widgets shifts the device slightly and quickly. This subtle motion can then be detected by the Inertial Measurement Units (IMUs) commonly installed on mobile devices.
Next, we propose BackTrack, a trackpad placed on the back of a smartphone to track finegrained finger motions. Our system has a small form factor, with all the circuits encapsulated in a thin layer attached to a phone case. It can be used with any off-the-shelf smartphone, requiring no power supply or modification of the operating systems. BackTrack simply extends the finger tracking area of the front screen, without interrupting the use of the front screen.
Lastly, we demonstrate MoiréBoard, a new camera tracking method that leverages a seemingly irrelevant visual phenomenon, the moiré effect. Based on a systematic analysis of the moiré effect under camera projection, MoiréBoard requires neither power nor camera calibration. It can easily be made at low cost (e.g., through 3D printing) and is ready to use with any stock mobile device with a camera. Its tracking algorithm is computationally efficient and can run at a high frame rate. It is not only simple to implement, but also tracks devices with high accuracy, comparable to state-of-the-art commercial VR tracking systems.
Real-Time On-Site OpenGL-Based Object Speed Measuring Using Constant Sequential Image
This thesis presents a method that can detect moving objects and measure their speed of movement using a constant-rate series of sequential images, such as video recordings. It uses the industry-standard, vendor-neutral OpenGL ES, so it can be implemented on any platform with OpenGL ES support. It can run on low-end embedded systems because it builds on simple foundations and a few assumptions that lower the overall implementation complexity in OpenGL ES. It also does not require any special peripheral devices, so existing infrastructure can be used with minimal modification, further lowering the cost of the system.
The sequential images are streamed from an I/O device via the CPU into the GPU, where a custom shader detects changing pixels between frames to find potential moving objects. The GPU shader then measures the pixel displacement of each object and maps it to a physical distance. These results are then sent back to the CPU for further processing.
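The per-pixel stage of this pipeline can be mimicked on the CPU with NumPy. This is a hypothetical analogue of the thesis's OpenGL ES shader logic, not its actual code; the difference threshold and the metres-per-pixel calibration constant are illustrative.

```python
import numpy as np

def moving_pixels(prev_frame, cur_frame, diff_thresh=25):
    """Mark pixels whose intensity changed by more than a threshold
    between consecutive frames (the shader's change-detection step)."""
    diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > diff_thresh

def speed_from_displacement(pixel_shift, metres_per_pixel, fps):
    """Map a per-frame pixel displacement to a real-world speed,
    given a (hypothetical) calibration constant and the frame rate."""
    return pixel_shift * metres_per_pixel * fps  # metres per second
```

On the GPU the same comparison runs per fragment, and the constant frame rate is what lets a pixel displacement between consecutive frames be converted directly into a speed.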
The algorithm was tested on two real-world traffic videos (720p at 10 FPS) and successfully extracted the speed data of road vehicles in view on a low-end embedded system (Raspberry Pi 4).
Camera positioning for 3D panoramic image rendering
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London. Virtual camera realisation and the proposition of a trapezoidal camera architecture are the two broad contributions of this thesis. Firstly, multiple cameras and their arrangement constitute a critical component affecting the integrity of visual content acquisition for multi-view video. Currently, linear, convergence, and divergence arrays are the prominent camera topologies adopted. However, the large number of cameras required and their synchronisation are two of the prominent challenges usually encountered. The use of virtual cameras can significantly reduce the number of physical cameras used with respect to any of the known camera structures, hence reducing some of the other implementation issues. This thesis explores the use of image-based rendering, with and without geometry, in implementations leading to the realisation of virtual cameras. The virtual camera implementation was carried out from the perspective of a depth map (geometry) and the use of multiple image samples (no geometry). Prior to the virtual camera realisation, the generation of depth maps was investigated using region match measures widely known for solving the image point correspondence problem. The constructed depth maps were compared with ones generated using the dynamic programming approach. In both the geometry and no-geometry approaches, the virtual cameras lead to the rendering of views from a textured depth map, the construction of a 3D panoramic image of a scene by stitching multiple image samples and performing superposition on them, and the computation of a virtual scene from a stereo pair of panoramic images. The quality of these rendered images was assessed through objective or subjective analysis in the Imatest software. Furthermore, metric reconstruction of a scene was performed by re-projection of pixel points from multiple image samples with a single centre of projection, using the sparse bundle adjustment algorithm. The statistical summary obtained after applying this algorithm provides a gauge for the efficiency of the optimisation step. The optimised data was then visualised in the Meshlab software environment, providing the reconstructed scene. Secondly, with any of the well-established camera arrangements, all cameras are usually constrained to the same horizontal plane. Therefore, occlusion becomes an extremely challenging problem, and a robust camera set-up is required to reliably resolve the hidden parts of scene objects. To adequately meet the visibility condition for scene objects, given that occlusion of the same scene objects can occur, a multi-plane camera structure is highly desirable. Therefore, this thesis also explores a trapezoidal camera structure for image acquisition. The approach here is to assess the feasibility and potential of several physical cameras of the same model being sparsely arranged on the edges of an efficient trapezoid graph. This is implemented in both Matlab and Maya. The depth maps rendered in Matlab are of better quality.
Facial expression recognition in the wild: from individual to group
The progress in computing technology has increased the demand for smart systems capable of understanding human affect and emotional manifestations. One of the crucial factors in designing systems equipped with such intelligence is having accurate automatic Facial Expression Recognition (FER) methods. In computer vision, automatic facial expression analysis has been an active field of research for over two decades, yet many questions remain unanswered. The research presented in this thesis attempts to address some of the key issues of FER in challenging conditions: 1) creating a facial expressions database representing real-world conditions; 2) devising Head Pose Normalisation (HPN) methods that are independent of facial part locations; 3) creating automatic methods for analysing the mood of a group of people. The central hypothesis of the thesis is that extracting close-to-real-world data from movies and performing facial expression analysis on them is a stepping stone towards moving the analysis of faces to real-world, unconstrained conditions. A temporal facial expressions database, Acted Facial Expressions in the Wild (AFEW), is proposed. The database is constructed and labelled using a semi-automatic process based on closed-caption subtitle keyword search. Currently, AFEW is the largest facial expressions database representing challenging conditions available to the research community. To provide a common platform for researchers to evaluate and extend their state-of-the-art FER methods, the first Emotion Recognition in the Wild (EmotiW) challenge based on AFEW is proposed. An image-based facial expressions database, Static Facial Expressions in the Wild (SFEW), extracted from AFEW is also proposed. Furthermore, the thesis focuses on HPN for real-world images. Earlier methods were based on fiducial points.
However, as fiducial point detection is an open problem for real-world images, HPN can be error-prone. An HPN method based on response maps generated from part-detectors is proposed. The proposed shape-constrained method does not require fiducial points or head pose information, which makes it suitable for real-world images. Data from movies and the internet, representing real-world conditions, poses another major challenge: the presence of multiple subjects. This defines another focus of this thesis, where a novel approach for modeling the perceived mood of a group of people in an image is presented. A new database is constructed from Flickr based on keywords related to social events. Three models are proposed: an averaging-based Group Expression Model (GEM), a Weighted Group Expression Model (GEM_w), and an Augmented Group Expression Model (GEM_LDA). GEM_w is based on social contextual attributes, which are used as weights on each person's contribution towards the overall group's mood. GEM_LDA is based on a topic model and feature augmentation. The proposed framework is applied to group candid shot selection and event summarisation. The Structural SIMilarity (SSIM) index metric is explored for finding similar facial expressions. The framework is also applied to creating image albums based on facial expressions and to finding corresponding expressions for training facial performance transfer algorithms.
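The relation between the averaging-based GEM and the weighted GEM_w can be sketched in a few lines. The scalar per-face scores and weights below are illustrative stand-ins for the thesis's expression intensities and social-context attributes (e.g., face size, distance from the group centre).

```python
def group_mood(face_scores, weights=None):
    """Aggregate per-face expression scores into a group-level mood.
    With no weights this is the plain averaging model (GEM); with
    context-derived weights it becomes the weighted model (GEM_w)."""
    if weights is None:                       # GEM: uniform contribution
        weights = [1.0] * len(face_scores)
    total = sum(weights)
    return sum(s * w for s, w in zip(face_scores, weights)) / total
```

The weighted variant lets a large, central face dominate the estimate, which is the intuition behind using social contextual attributes as weights.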
What remains is the book: The idea of the book in and around electronic space
The purpose of this study is to question the idea of the book in general and how this idea is transforming in electronic space, understood as a space of flows as distinct from a space of places (Castells, 1989, p. 349). In order to question the idea of the book in electronic space, we must begin at its ending, or more specifically, at a point in the histories of the book that is widely understood as the closing of a parenthesis that began with the invention of the printing press and runs up to the end of print, spanning some 500 years, beginning halfway through the 15th century in Western Europe.
Advanced document data extraction techniques to improve supply chain performance
In this thesis, a novel machine learning technique to extract text-based information from scanned images has been developed. This information extraction is performed in the context of scanned invoices and bills used in financial transactions. These financial transactions contain a considerable amount of data that must be extracted, refined, and stored digitally before it can be used for analysis. Converting this data into a digital format is often a time-consuming process. Automation and data optimisation show promise as methods for reducing the time required and the cost of Supply Chain Management (SCM) processes, especially Supplier Invoice Management (SIM), Financial Supply Chain Management (FSCM), and Supply Chain procurement processes. This thesis uses a cross-disciplinary approach involving Computer Science and Operational Management to explore the benefit of automated invoice data extraction in business and its impact on SCM. The study adopts a multi-method approach based on empirical research, surveys, and interviews performed on selected companies. The expert system developed in this thesis focuses on two distinct areas of research: Text/Object Detection and Text Extraction. For Text/Object Detection, the Faster R-CNN model was analysed. While this model yields outstanding results in terms of object detection, it is limited by poor performance when image quality is low. The Generative Adversarial Network (GAN) model is proposed in response to this limitation. The GAN model consists of a generator network, implemented with the help of the Faster R-CNN model, and a discriminator that relies on PatchGAN. The output of the GAN model is text data with bounding boxes.
For text extraction from the bounding boxes, a novel data extraction framework was designed, consisting of various processes including XML processing for existing OCR engines, bounding box pre-processing, text clean-up, OCR error correction, spell checking, type checking, pattern-based matching, and finally a learning mechanism for automating future data extraction. Any fields the system can extract successfully are provided in key-value format. The efficiency of the proposed system was validated using existing datasets such as SROIE and VATI. Real-time data was validated using invoices collected by two companies that provide invoice automation services in various countries. Currently, these scanned invoices are sent to an OCR system such as OmniPage, Tesseract, or ABBYY FRE to extract text blocks, and later a rule-based engine is used to extract relevant data. While the system's methodology is robust, the companies surveyed were not satisfied with its accuracy and sought new, optimised solutions. To confirm the results, the engines were used to return XML-based files with text and metadata identified. The output XML data was then fed into the new system for information extraction. This system uses the existing OCR engine and a novel, self-adaptive, learning-based OCR engine. This new engine is based on the GAN model for better text identification. Experiments were conducted on various invoice formats to further test and refine its extraction capabilities. For cost optimisation and the analysis of spend classification, additional data were provided by another company in London that holds expertise in reducing its clients' procurement costs. This data was fed into the system to obtain a deeper level of spend classification and categorisation.
This helped the company to reduce its reliance on human effort and allowed for greater efficiency compared with performing similar tasks manually using Excel sheets and Business Intelligence (BI) tools. The intention behind the development of this novel methodology was twofold: first, to develop and test a novel solution that does not depend on any specific OCR technology; second, to increase the information extraction accuracy over that of existing methodologies. Finally, the thesis evaluates the real-world need for the system and the impact it would have on SCM. The newly developed method is generic and can extract text from any given invoice, making it a valuable tool for optimising SCM. In addition, the system uses a template-matching approach to ensure the quality of the extracted information.
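The pattern-based matching stage that turns OCR text into key-value fields can be illustrated with a toy regex-based extractor. The field names and patterns below are illustrative, not the ones used in the thesis.

```python
import re

def extract_fields(ocr_text):
    """Pull a few key fields out of OCR'd invoice text with regular
    expressions and return them in key-value format (the thesis's
    pattern-based matching step, greatly simplified)."""
    patterns = {
        "invoice_number": r"Invoice\s*(?:No\.?|#)\s*[:\-]?\s*(\w+)",
        "date": r"Date\s*[:\-]?\s*(\d{2}/\d{2}/\d{4})",
        "total": r"Total\s*[:\-]?\s*\$?([\d,]+\.\d{2})",
    }
    fields = {}
    for key, pattern in patterns.items():
        match = re.search(pattern, ocr_text, flags=re.IGNORECASE)
        if match:
            fields[key] = match.group(1)
    return fields
```

A production system layers the spell check, type check, and learning mechanism described above on top of this, so that fields which fail the patterns can still be recovered and fed back into training.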
Reading poetry and dreams in the wake of Freud
Adapting the question at the end of Keats's 'Ode to a Nightingale', this thesis argues that reading poetic texts involves a form of suspension between waking and sleeping. Poems are not the product of an
empirical dreamer, but psychoanalytic understandings of dream-work help to provide an account of certain poetic effects. Poetic texts resemble dreams in that both induce identificatory desires within, while
simultaneously estranging, the reading process. In establishing a theoretical connection between poetic texts and dream-work, the discussion raises issues concerning death, memory and the body.
The introduction relates Freudian and post-Freudian articulations of dream-work to the language of poetry, and addresses the problem of attributing desire "in" a literary text. Interweaving the work of Borch-Jacobsen, Derrida and Blanchot, the discussion proposes a different space of poetry. By reconfiguring the subject-of-desire and the structure of poetic address, the thesis argues that poetic "dreams"
characterize points in texts which radically question the identity and position of the reader.
Several main chapters focus on texts - poems by Frost and Keats, and Freud's reading of literary dreams - in which distinctions between waking and sleeping, familiarity and strangeness, order and confusion are profoundly disturbed. The latter part of the thesis concentrates on a textual "unconscious" that insists undecidably between the cultural and the individual. Poems by Eliot, Tennyson, Arnold and Walcott are shown to figure strange dreams and enact displacements that blur the
categories of public and private. Throughout, the study confronts the
recurrent interpretive problem of reading "inside" and "outside" textual
dreams.
This thesis offers an original perspective on reading poetry in conjunction with psychoanalysis, in that it challenges traditional assumptions about phantasy and poetry dependent upon a subject constituted in advance of a poetic event or scene of phantasy. It brings poetry into systematic relation with Freud's work on dreams and
consistently identifies conceptual and performative links between psychoanalysis and literature in later modernity.
Big Data Computing for Geospatial Applications
The convergence of big data and geospatial computing has brought forth challenges and opportunities to Geographic Information Science with regard to geospatial data management, processing, analysis, modeling, and visualization. This book highlights recent advancements in integrating new computing approaches, spatial methods, and data management strategies to tackle geospatial big data challenges, and meanwhile demonstrates opportunities for using big data for geospatial applications. Crucial to the advancements highlighted in this book is the integration of computational thinking and spatial thinking and the transformation of abstract ideas and models into concrete data structures and algorithms.