Reconstructing vectorised photographic images
We address the problem of representing captured images in the continuous mathematical space more usually associated with certain forms of drawn ('vector') images. Such an image is resolution-independent, so it can serve as a master for rendering resolution-specific formats. We briefly describe the main features of a vectorising codec for photographic images, whose significance is that drawing programs can access images and image components as first-class vector objects. This paper focuses on the problem of rendering from the isochromic contour form of a vectorised image and demonstrates a new fill algorithm that could also be used in drawing more generally. The fill method is described in terms of level-set diffusion equations for clarity. Finally, we show that image warping is both simplified and enhanced in this form, and that we can demonstrate true histogram equalisation with genuinely rectangular histograms.
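The paper's vector-domain equalisation operates on isochromic contours and is not reproduced here. As a point of comparison, this is a minimal sketch of conventional raster histogram equalisation, whose output histogram is only approximately flat on discrete images (the limitation the continuous representation is claimed to remove):

```python
import numpy as np

def equalise(img: np.ndarray, levels: int = 256) -> np.ndarray:
    """Classic raster histogram equalisation via the cumulative histogram.

    On a discrete image the result is only approximately uniform; the
    vectorised (continuous) representation in the paper is claimed to
    yield genuinely rectangular histograms.
    """
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each grey level through the normalised CDF to flatten the histogram.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1)).astype(img.dtype)
    return lut[img]
```

The rounding step is exactly where the discrete approximation enters: a continuous-domain image can spread intensity mass exactly, whereas the lookup table can only move whole histogram bins.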
Intelligent laser scanning for computer aided manufacture.
Reverse engineering requires the acquisition of large amounts of data describing the surface of an object, sufficient to replicate that object accurately using appropriate fabrication techniques. This is important within a wide range of commercial and scientific fields where CAD models may be unavailable for parts that must be duplicated or modified, or where a physical model is used as a prototype. The three-dimensional digitisation of objects is an essential first step in reverse engineering. Optical triangulation laser sensors are among the most popular non-contact methods used in the data acquisition process today, providing the means for high-resolution scanning of complex objects. Multiple scans are usually required to capture the full 3D profile of an object. A number of factors, including scan resolution, the system optics, and the precision of the mechanical parts comprising the system, may affect the accuracy of the process. A single-perspective optical triangulation sensor provides an inexpensive method for the acquisition of 3D range image data.
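The sensor geometry described, a single camera with a laterally offset laser source, reduces in the plane of the laser to a ray intersection. A minimal sketch under that assumption (the symbols `u` for the normalised image coordinate and `baseline` for the camera-laser offset are illustrative, not the thesis's notation):

```python
import math

def triangulate(u: float, baseline: float, laser_angle: float) -> tuple[float, float]:
    """Intersect the camera ray with the laser ray in 2D.

    Camera at the origin looking along +z; a pixel maps to the ray
    t * (u, 1), with u = x / f the normalised image coordinate.
    The laser sits at (baseline, 0) and fires at `laser_angle`
    radians from the z axis, i.e. along (-sin(a), cos(a)).
    """
    dx, dz = -math.sin(laser_angle), math.cos(laser_angle)
    # Solve t1*(u, 1) = (baseline, 0) + t2*(dx, dz); the z-equation gives
    # t1 = t2*dz, and substituting into the x-equation yields t2.
    t2 = baseline / (dz * u - dx)
    return (u * dz * t2, dz * t2)
```

The abstract's accuracy factors map directly onto this formula: errors in `u` (resolution, optics) and in `baseline`/`laser_angle` (mechanical precision) propagate into the recovered range.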
Arbitrary topology meshes in geometric design and vector graphics
Meshes are a powerful means to represent objects and shapes in both 2D and 3D, but mesh-based techniques typically work only in certain regular settings, which restricts their usage. Meshes with an arbitrary topology have many interesting applications in geometric design and (vector) graphics, and can give designers more freedom in designing complex objects. In the first part of the thesis we look at how these meshes can be used in computer-aided design to represent objects composed of multiple regular meshes constructed together. We then extend the B-spline surface technique from the regular setting to extraordinary regions in meshes, so that multisided B-spline patches are created. In addition, we show how to render multisided objects efficiently using the GPU and tessellation. In the second part of the thesis we look at how gradient mesh vector graphics primitives can be combined with procedural noise functions to create expressive but sparsely defined vector graphics images. We also look at how the gradient mesh can be extended to arbitrary-topology variants, comparing existing work with two new formulations of a polygonal gradient mesh. Finally, we show how to turn any image into a vector graphics image efficiently. This vectorisation process automatically extracts important image features and constructs a mesh around them. The automatic pipeline is very efficient and even facilitates interactive image vectorisation.
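The extension to extraordinary regions builds on the standard B-spline machinery of the regular setting. As background only (this is not the thesis's multisided construction), the degree-p basis functions follow the Cox-de Boor recursion:

```python
def bspline_basis(i: int, p: int, t: float, knots: list[float]) -> float:
    """Cox-de Boor recursion for the degree-p B-spline basis N_{i,p}(t)."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    # Guard against zero-length knot spans (repeated knots): the term vanishes.
    if knots[i + p] != knots[i]:
        left = (t - knots[i]) / (knots[i + p] - knots[i]) \
            * bspline_basis(i, p - 1, t, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) \
            * bspline_basis(i + 1, p - 1, t, knots)
    return left + right
```

On a regular mesh every interior point is covered by basis functions that sum to one (partition of unity); it is exactly this property that fails to extend naively at extraordinary vertices, motivating multisided patches.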
3D silhouette rendering algorithms using vectorisation technique from Kedah topography map
Many real-world applications today lean toward non-photorealistic rather than photorealistic rendering. Silhouette rendering algorithms are an important technique for creating non-photorealistic images, and have been used successfully in applications such as game engines, shape communication, cartoon rendering, and 3D terrain visualisation. This paper explores how silhouette rendering algorithms can be applied to data extracted from the Kedah topography map. Contour data from the topography map are converted from raster to vector form (vectorisation) to create grid-terrain Digital Elevation Model (DEM) data; vectorisation software was used to produce these data. The data are then converted into a format suitable for existing 3D silhouette software. The results are terrain images of Sik District, Kedah, that are closer to human-drawn illustration and have an artistic style.
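The paper does not detail its silhouette extraction step; the standard object-space test, sketched below under that assumption, marks a mesh edge as a silhouette when its two adjacent faces point to opposite sides of the view direction:

```python
import numpy as np

def silhouette_edges(verts, faces, view_dir):
    """Return edges whose two adjacent triangles have opposite-signed
    normal . view -- the classic object-space silhouette test in NPR."""
    verts = np.asarray(verts, dtype=float)
    normals = {}
    edge_faces = {}
    for f, (a, b, c) in enumerate(faces):
        # Face normal via the cross product of two triangle edges.
        normals[f] = np.cross(verts[b] - verts[a], verts[c] - verts[a])
        for e in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(f)
    view = np.asarray(view_dir, dtype=float)
    return [e for e, fs in edge_faces.items()
            if len(fs) == 2 and (normals[fs[0]] @ view) * (normals[fs[1]] @ view) < 0]
```

Applied to a DEM triangulation, the returned edges trace the ridge lines and horizon contours that give the hand-drawn look the paper describes.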
Classification of consumer goods into 5-digit COICOP 2018 codes
The survey of consumer expenditure is a national survey conducted by Statistics Norway (SSB) with the purpose of collecting detailed data about Norwegian households' annual consumption of different goods and services. The survey has, up until its most recent publication in 2012, relied on employees at SSB to manually categorise all registered expenditures into COICOP (Classification of Individual Consumption by Purpose) item codes to produce consumption statistics. This has involved large workloads and high implementation costs, and because of this, SSB wants to modernise and improve the efficiency of the survey for its next planned implementation in 2022.
This study is the result of a 3-month collaboration with SSB to explore the application of supervised machine learning for classification of consumer goods to 5-digit COICOP codes. The purpose of this study has been to explore the potential of using machine learning to automate parts of the survey of consumer expenditure.
This thesis demonstrates how different data sets from separate sources can be combined into a COICOP training data set that can be used to develop and evaluate COICOP classification models. Furthermore, this study explores how these models can be incorporated into a 'human-in-the-loop'-based classification system to facilitate automatic classification of consumer goods while also maintaining sufficient levels of data quality.
The findings indicate that supervised machine learning is a suitable method for classifying consumer goods into 5-digit COICOP codes. Additionally, the results show that the models' prediction probabilities are good indicators of where misclassifications occur. Together, these findings show promising potential for the implementation of a 'human-in-the-loop'-based classification system for reliable classification of consumer goods. At the same time, the findings uncover important limitations in the data used in this thesis, as the models were trained on data that the survey of consumer expenditure will not be based on. This thesis used the data sets that were available, and these were not necessarily the most relevant. Therefore, the developed models are not expected to provide immediate value to the objectives of SSB without first being trained on more relevant data.
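The 'human-in-the-loop' routing the thesis describes, auto-accepting confident predictions and sending the rest to manual coding, can be sketched as a simple threshold on the model's top-class probability. The threshold value and the tuple layout below are illustrative, not taken from the thesis:

```python
def route_predictions(items, threshold=0.9):
    """Split classified expenditures into auto-accepted and manual-review
    queues based on the model's top-class probability.

    `items` is an iterable of (item_id, predicted_coicop, probability)
    tuples; the 0.9 default threshold is a placeholder that would be
    tuned against the required data-quality level.
    """
    auto, manual = [], []
    for item_id, code, prob in items:
        (auto if prob >= threshold else manual).append((item_id, code))
    return auto, manual
```

The thesis's finding that prediction probabilities indicate where misclassifications occur is exactly what makes this split meaningful: lowering the threshold automates more items at the cost of more undetected errors.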
Photorealistic retrieval of occluded facial information using a performance-driven face model
Facial occlusions can cause both human observers and computer algorithms to fail in a variety of important tasks, such as facial action analysis and expression classification. This is because the missing information is not reconstructed accurately enough for the purpose of the task in hand. Most current computer methods used to tackle this problem implement complex three-dimensional polygonal face models that are generally time-consuming to produce and unsuitable for photorealistic reconstruction of missing facial features and behaviour.
In this thesis, an image-based approach is adopted to solve the occlusion problem. A dynamic computer model of the face is used to retrieve the occluded facial information from the driver faces. The model consists of a set of orthogonal basis actions obtained by applying principal component analysis (PCA) to image changes and motion fields extracted from a sequence of natural facial motion (Cowe 2003). Examples of occlusion-affected facial behaviour can then be projected onto the model to compute coefficients of the basis actions and thus produce photorealistic performance-driven animations.
Visual inspection shows that the PCA face model recovers aspects of expressions in those areas occluded in the driver sequence, but the expression is generally muted. To investigate this finding further, a database of test sequences affected by a considerable set of artificial and natural occlusions is created, and a number of suitable metrics are developed to measure the accuracy of the reconstructions. Regions of the face that are most important for performance-driven mimicry, and that seem to carry the best information about global facial configurations, are revealed using Bubbles, in effect identifying the facial areas that are most sensitive to occlusions.
Recovery of occluded facial information is enhanced by applying an appropriate scaling factor to the respective coefficients of the basis actions obtained by PCA. This method improves the reconstruction of the facial actions emanating from the occluded areas of the face. However, because PCA produces bases that encode composite, correlated actions, such an enhancement also tends to affect actions in non-occluded areas of the face. To avoid this, more localised controls for facial actions are produced using independent component analysis (ICA). Simple projection of the data onto an ICA model is not viable because the extracted bases are non-orthogonal. Thus occlusion-affected mimicry is first generated using the PCA model and then enhanced by manipulating the independent components subsequently extracted from the mimicry. This combination of methods yields significant improvements and results in photorealistic reconstructions of occluded facial actions.
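The core projection-and-rescaling step of the thesis can be sketched with plain linear algebra. This assumes, as PCA typically yields, an orthonormal basis stored row-wise; the single `scale` parameter stands in for the per-basis-action factors the thesis applies:

```python
import numpy as np

def project_and_rescale(sample, mean, basis, scale=1.0):
    """Project an occluded frame onto an orthonormal PCA basis of facial
    actions, rescale the coefficients, and reconstruct the frame.

    `basis` holds one orthonormal component per row; `scale` > 1
    counteracts the muting of recovered expressions noted above.
    """
    coeffs = basis @ (sample - mean)          # basis-action coefficients
    return mean + basis.T @ (scale * coeffs)  # (amplified) reconstruction
```

The thesis's observation that this amplification leaks into non-occluded regions follows directly from the code: `scale` multiplies whole basis actions, and each PCA action mixes correlated motion across the face, which is why ICA is brought in for more localised control.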
Massively Parallel Algorithms for Loading Data on Modern Hardware
While systems face an ever-growing amount of data that must be ingested, queried, and analysed, processors are seeing only moderate improvements in sequential processing performance. This thesis addresses the fundamental shift towards increasingly parallel processors and contributes multiple massively parallel algorithms to accelerate different stages of the ingestion pipeline, such as data parsing and sorting.
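The thesis targets massively parallel hardware; as an illustration of the two-phase structure common to parallel sorts (sort chunks concurrently, then merge the sorted runs), here is a Python sketch. It shows only the structure: Python's GIL limits the actual speedup, and the thesis's GPU/SIMD algorithms are not reproduced here.

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

def parallel_sort(data, workers=4):
    """Sort chunks concurrently, then k-way merge the sorted runs --
    the same two-phase shape many parallel sorting algorithms use."""
    if not data:
        return []
    size = -(-len(data) // workers)  # ceiling division for chunk size
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        runs = list(pool.map(sorted, chunks))  # phase 1: independent sorts
    return list(heapq.merge(*runs))            # phase 2: k-way merge
```

Phase 1 is embarrassingly parallel; phase 2 is the sequential bottleneck that massively parallel formulations also have to distribute.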