Interdisciplinarity in the Age of the Triple Helix: a Film Practitioner's Perspective
This integrative chapter contextualises my research, including articles I have published as well as one of the creative artefacts developed from it, the feature film The Knife That Killed Me. I review my work considering the ways in which technology, industry methods and academic practice have evolved, as well as how attitudes to interdisciplinarity have changed, linking these to Etzkowitz and Leydesdorff's "Triple Helix" model (1995). I explore my own experiences and observations of the opportunities and challenges posed by the intersection of different stakeholder needs and expectations, from both industry and academic perspectives, and argue that my work provides novel examples of the applicability of the "Triple Helix" to the creative industries. The chapter concludes with a reflection on the evolution and direction of my work, the relevance of the "Triple Helix" to creative practice, and ways in which this relationship could be investigated further.
Women in the History of Science
Women in the History of Science brings together primary sources that highlight women's involvement in scientific knowledge production around the world. Drawing on texts, images and objects, each primary source is accompanied by an explanatory text, questions to prompt discussion, and a bibliography to aid further research. Arranged by time period, covering 1200 BCE to the twenty-first century, and across 12 inclusive and far-reaching themes, this book is an invaluable companion to students and lecturers alike in exploring women's history in the fields of science, technology, mathematics, medicine and culture.
While women are too often excluded from traditional narratives of the history of science, this book centres on the voices and experiences of women across a range of domains of knowledge. By questioning our understanding of what science is, where it happens, and who produces scientific knowledge, this book is an aid to liberating the curriculum within schools and universities.
Synthetic image generation and the use of virtual environments for image enhancement tasks
Deep learning networks are often difficult to train if there are insufficient image samples, and gathering real-world images tailored to a specific task requires substantial effort. This dissertation explores techniques for synthetic image generation and virtual environments for various image enhancement/correction/restoration tasks, specifically distortion correction, dehazing, shadow removal, and intrinsic image decomposition. First, given various image formation equations, such as those used in distortion correction and dehazing, synthetic image samples can be produced, provided that the equation is well-posed. Second, using virtual environments to train various image models is applicable for simulating real-world effects that are otherwise difficult to gather or replicate, such as haze and shadows. Given synthetic images, one cannot train a network directly on them, as there is a possible gap between the synthetic and real domains. We have devised several techniques for generating synthetic images and formulated domain adaptation methods whereby our trained deep-learning networks perform competitively in distortion correction, dehazing, and shadow removal. Additional studies and directions are provided for the intrinsic image decomposition problem and the exploration of procedural content generation, where a virtual Philippine city was created as an initial prototype.
Keywords: image generation, image correction, image dehazing, shadow removal, intrinsic image decomposition, computer graphics, rendering, machine learning, neural networks, domain adaptation, procedural content generation
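As an illustration of the first point, the standard atmospheric scattering model used in dehazing work, I(x) = J(x)t(x) + A(1 - t(x)) with transmission t(x) = exp(-beta * d(x)), lets one render hazy training samples from clear images and depth maps. The sketch below is a minimal NumPy version; the function name and the toy inputs are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

def synthesize_haze(clear, depth, beta=1.0, airlight=0.8):
    """Render a hazy image from a clear image and a depth map using the
    standard atmospheric scattering model: I = J*t + A*(1 - t),
    with transmission t = exp(-beta * depth)."""
    t = np.exp(-beta * depth)[..., None]  # per-pixel transmission, broadcast over channels
    return clear * t + airlight * (1.0 - t)

# Toy example: a flat grey image whose right half is twice as far away,
# so it should appear more washed out (closer to the airlight value).
clear = np.full((4, 4, 3), 0.5)
depth = np.concatenate([np.ones((4, 2)), 2 * np.ones((4, 2))], axis=1)
hazy = synthesize_haze(clear, depth)
```

Pairs of (clear, hazy) images generated this way can supervise a dehazing network, subject to the synthetic-to-real domain gap the abstract describes.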
Machine learning for the automation and optimisation of optical coordinate measurement
Camera-based methods for optical coordinate metrology are growing in popularity due to their non-contact probing technique, fast data acquisition time, high point density and high surface coverage. However, these optical approaches are often highly user dependent, rely heavily on accurate system characterisation, and can be slow in processing the raw data acquired during measurement. Machine learning approaches have the potential to remedy the shortcomings of such optical coordinate measurement systems. The aim of this thesis is to remove dependence on the user entirely by enabling full automation and optimisation of optical coordinate measurements for the first time. A novel software pipeline is proposed, built, and evaluated which enables automated and optimised measurements to be conducted; no such automated and optimised system for performing optical coordinate measurements currently exists. The pipeline can be roughly summarised as follows:
intelligent characterisation -> view planning -> object pose estimation -> automated data acquisition -> optimised reconstruction.
Several novel methods were developed to enable the embodiment of this pipeline. Chapter 4 presents an intelligent camera characterisation (the process of determining a mathematical model of the optical system), performed using a hybrid approach wherein an EfficientNet convolutional neural network provides sub-pixel corrections to feature locations provided by the popular OpenCV library. The proposed characterisation scheme is shown to robustly refine the characterisation result, as quantified by a 50 % reduction in the mean residual magnitude. The camera characterisation is performed before measurements are taken and the results are fed as an input to the pipeline. Chapter 5 presents a novel genetic optimisation approach to create an imaging strategy, i.e. the positions from which data should be captured relative to the part's specific geometry. This approach exploits the computer-aided design (CAD) data of a given part, ensuring any measurement is optimal for a specific target geometry. This view planning approach is shown to give reconstructions with closer agreement to tactile coordinate measurement machine (CMM) results from 18 images than unoptimised measurements using 60 images. The view planning algorithm assumes the part is perfectly placed in the centre of the measurement volume, so it is first adjusted for an arbitrary placement of the part before being used for data acquisition. Chapter 6 presents a generative model for the creation of surface texture data, allowing the generation of synthetic but realistic datasets for the training of statistical models. The surface texture generated by the proposed model is shown to be quantitatively representative of real focus variation microscope measurements. The model developed in this chapter is used to produce large synthetic but realistic datasets for the training of further statistical models.
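The genetic view-planning step can be pictured with a toy sketch: encode a view plan as a set of candidate camera positions, score it by how much of the part's surface the selected views jointly see, and evolve the population by selection and mutation. This is a generic illustration under assumed names (`evolve_view_plan`, a precomputed patch-visibility table), not the thesis's actual CAD-driven algorithm.

```python
import random

def evolve_view_plan(visibility, n_views, generations=200, pop_size=30, seed=0):
    """Toy genetic optimiser for view planning: choose n_views camera
    positions that together see as many surface patches as possible.
    `visibility[i]` is the set of patch indices visible from candidate view i."""
    rng = random.Random(seed)
    candidates = list(range(len(visibility)))

    def fitness(plan):
        seen = set()
        for v in plan:
            seen |= visibility[v]
        return len(seen)  # number of distinct patches covered

    # Initial random population of view plans (each a list of candidate indices).
    pop = [rng.sample(candidates, n_views) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # keep the fitter half (elitism)
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(n_views)] = rng.choice(candidates)  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy problem: 6 candidate views over 8 surface patches.
vis = [{0, 1}, {1, 2}, {2, 3}, {4, 5}, {5, 6}, {6, 7}]
best = evolve_view_plan(vis, n_views=4)
```

In the thesis the fitness would instead be derived from the part's CAD geometry and measurement quality, but the select-mutate loop has the same shape.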
Chapter 7 presents an autonomous background removal approach which removes superfluous data from images captured during a measurement. Using images processed by this algorithm to reconstruct a 3D measurement of an object is shown to be effective in reducing data processing times and improving measurement results. Applying the proposed background removal to images before reconstruction is shown to yield up to a 41 % reduction in data processing times, a reduction in superfluous background points of up to 98 %, an increase in point density on the object surface of up to 10 %, and improved agreement with the CMM, as measured by both a reduction in outliers and a reduction in the standard deviation of point-to-mesh distances of up to 51 microns. The background removal algorithm is used both to improve the final reconstruction and within stereo pose estimation. Finally, in Chapter 8, two methods (one monocular and one stereo) for establishing the initial pose of the part to be measured relative to the measurement volume are presented. This is an important step in enabling automation, as it allows the user to place the object at an arbitrary location in the measurement volume and the pipeline to adjust the imaging strategy to account for this placement, enabling the optimised view plan to be carried out without the need for special part fixturing. It is shown that the monocular method can locate a part to within an average of 13 mm and the stereo method to within an average of 0.44 mm, as evaluated on 240 test images. Pose estimation is used to provide a correction to the view plan for an arbitrary part placement without the need for specialised fixturing or fiducial marking.
This pipeline enables an inexperienced user to place a part anywhere in the measurement volume of a system and, from the part's associated CAD data, the system will perform an optimal measurement without the need for any user input. Each new method developed as part of this pipeline has been validated against real experimental data from current measurement systems and shown to be effective.
In future work, given in Section 9.1, a possible hardware integration of the methods developed in this thesis is presented, although the creation of this hardware is beyond the scope of the thesis.
Privaatsust säilitava raalnägemise meetodi arendamine kehalise aktiivsuse automaatseks jälgimiseks koolis (Development of a privacy-preserving computer vision method for automatic monitoring of physical activity in schools)
VÀitekirja elektrooniline versioon ei sisalda publikatsiooneKuidas vaadelda inimesi ilma neid nÀgemata?
Ăeldakse, et ei ole viisakas jĂ”llitada. Ăigus privaatsusele on lausa inimĂ”igus. Siiski on inimkĂ€itumises palju sellist, mida teadlased tahaksid uurida inimesi vaadeldes. NĂ€iteks tahame teada, kas lapsed hakkavad vahetunnis rohkem liikuma, kui koolis keelatakse nutitelefonid? Selle vĂ€lja selgitamiseks peaks teadlane kĂŒsima lapsevanematelt nĂ”usolekut vĂ”sukeste vaatlemiseks. Eeldusel, et lapsevanemad annavad loa, oleks klassikaliseks vaatluseks vaja tohutult palju tööjĂ”udu â mitu vaatlejat koolimajas iga pĂ€ev piisavalt pikal perioodil enne ja pĂ€rast nutitelefoni keelu kehtestamist. Doktoritööga pĂŒĂŒdsin lahendada korraga privaatsuse probleemi ja tööjĂ”u probleemi, asendades inimvaatleja tehisaruga.
Kaasaegsed masinĂ”ppe meetodid vĂ”imaldavad luua mudeleid, mis tuvastavad automaatselt pildil vĂ”i videos kujutatud objekte ja nende omadusi. Kui tahame tehisaru, mis tunneb pildil Ă€ra inimese, tuleb moodustada masinĂ”ppe andmestik, kus on pilte inimestest ja pilte ilma inimesteta. Kui tahame tehisaru, mis eristaks videos madalat ja kĂ”rget kehalist aktiivsust, on vaja vastavat videoandmestikku. Doktoritöös kogusingi andmestiku, kus video laste liikumisest on sĂŒnkroniseeritud puusal kantavate aktseleromeetritega, et treenida mudel, mis eristaks videopikslites madalamat ja kĂ”rgemat liikumise intensiivsust. Koostöös Tehonoloogiainstituudi iCV laboriga arendasime vĂ€lja videoanalĂŒĂŒsi sensori prototĂŒĂŒbi, mis suudab reaalaja kiirusel hinnata kaamera vaatevĂ€ljas olevate inimeste kehalise aktiivsuse taset. Just see, et tehisaru suudab tuletada videost kehalise aktiivsuse informatsiooni ilma neid videokaadreid salvestamata ega inimestele ĂŒldsegi nĂ€itamata, vĂ”imaldab vaadelda inimesi ilma neid nĂ€gemata.
VĂ€ljatöötatud meetod on mĂ”eldud kehalise aktiivsuse mÔÔtmiseks koolipĂ”histes teadusuuringutes ning seetĂ”ttu on arenduses rĂ”hutatud privaatsuse kaitsmist ja teaduseetikat. Laiemalt vaadates illustreerib doktoritöö aga raalnĂ€gemistehnoloogiate potentsiaali töötlemaks visuaalset infot linnaruumis ja töökohtadel ning mitte ainult kehalise aktiivsuse mÔÔtmiseks kĂ”rgete teaduseetika kriteerimitega. Siin ongi koht avalikuks aruteluks â millistel tingimustel vĂ”i kas ĂŒldse on OK, kui sind jĂ”llitab robot?
How to observe people without seeing them?
They say it's not polite to stare. The right to privacy is considered a human right. However, there is much in human behavior that scientists would like to study via observation. For example, we want to know whether children start moving more during recess if smartphones are banned at school. To find out, scientists would have to ask parents for consent to carry out the observation. Assuming parents grant permission, a huge amount of labor would be needed for classical observation: several observers in the schoolhouse every day for a sufficiently long period before and after the smartphone ban. With my doctoral thesis, I tried to solve both the problem of privacy and the problem of labor by replacing the human observer with artificial intelligence (AI).
Modern machine learning methods allow training models that automatically detect objects and their properties in images or video. If we want an AI that recognizes people in images, we need to form a machine learning dataset with pictures of people and pictures without people. If we want an AI that differentiates between low and high physical activity in video, we need a corresponding video dataset. In my doctoral thesis, I collected a dataset in which video of children's movement is synchronized with hip-worn accelerometers, in order to train a model that differentiates between lower and higher levels of physical activity in video. In collaboration with the iCV lab at the Institute of Technology, we developed a prototype video analysis sensor that can estimate, at real-time speed, the level of physical activity of people in the camera's field of view. The fact that AI can derive physical activity information from video without recording the footage or showing it to anyone at all makes it possible to observe people without seeing them.
The method is designed for measuring physical activity in school-based research and therefore highly prioritizes privacy protection and research ethics. More broadly, the thesis illustrates the potential of computer vision technologies for processing visual information in urban spaces and workplaces, and not only for measuring physical activity under high ethical standards. This warrants wider public discussion: under what conditions, or whether at all, is it OK to have a robot staring at you?
https://www.ester.ee/record=b555972
Chapter 34 - Biocompatibility of nanocellulose: Emerging biomedical applications
Nanocellulose has already proved to be a highly relevant material for biomedical applications, owing to its outstanding mechanical properties and, more importantly, its biocompatibility. Nevertheless, despite intensive prior research, a notable number of emerging applications are still being developed. Interestingly, this drive is not based solely on the features of nanocellulose but also depends heavily on sustainability. The three core nanocelluloses are cellulose nanocrystals (CNCs), cellulose nanofibrils (CNFs), and bacterial nanocellulose (BNC). All of these types display highly interesting biomedical properties per se, after modification, and when used in composite formulations. Novel applications that use nanocellulose include well-known areas, namely wound dressings, implants, indwelling medical devices, scaffolds, and novel printed scaffolds. Their cytotoxicity and biocompatibility, assessed using recent methodologies, are thoroughly analyzed to reinforce their near-future applicability. Of the pristine core nanocelluloses, none displays cytotoxicity. However, CNF has the highest potential to fail long-term biocompatibility, since it tends to trigger inflammation. On the other hand, never-dried BNC displays remarkable biocompatibility. Nevertheless, all nanocelluloses clearly represent flag bearers of future superior biomaterials, elite materials in the urgent replacement of our petrochemical dependence.
Multiple View Texture Mapping: A Rendering Approach Designed for Driving Simulation
Simulation provides a safe and controlled environment ideal for human testing [49, 142, 120]. Simulation of real environments has reached new heights in terms of photo-realism; often, a team of professional graphical artists would have to be hired to compete with modern commercial simulators. Meanwhile, machine vision methods are currently being developed that attempt to automatically provide geometrically consistent and photo-realistic 3D models of real scenes [189, 139, 115, 19, 140, 111, 132]. Often the only requirement is a set of images of that scene. A road engineer wishing to simulate the environment of a real road for driving experiments could potentially use these tools.
This thesis develops a driving simulator that uses machine vision methods to reconstruct a real road automatically. A computer graphics method called projective texture mapping is applied to enhance the photo-realism of the 3D models [144, 43]. This essentially creates a virtual projector in the 3D environment to automatically assign image coordinates to a 3D model. These principles are demonstrated using custom shaders developed for an OpenGL rendering pipeline. Projective texture mapping presents a list of challenges to overcome; these include reverse projection and projection onto surfaces not immediately in front of the projector [53]. A significant challenge was the removal of dynamic foreground objects. 3D reconstruction systems create 3D models based on static objects captured in images, and dynamic objects are rarely reconstructed, so projective texture mapping of images that include these dynamic objects can result in visual artefacts. A workflow is developed to resolve this, resulting in videos and 3D reconstructions of streets with no moving vehicles in the scene.
The final simulator using 3D reconstruction and projective texture mapping is then developed. The rendering camera is given a motion model to enable human interaction. The final system is presented and experimentally tested, and potential future work is discussed.
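The core of projective texture mapping can be sketched outside a shader: a virtual projector's combined projection-view matrix maps a world-space point to clip space, and the perspective divide plus a bias/scale gives the texture coordinate to sample. The NumPy sketch below is illustrative (the thesis implements this in OpenGL shaders, and the function name and toy projector are assumptions):

```python
import numpy as np

def projective_tex_coords(points_world, proj_view):
    """Map 3D world points into a virtual projector's image, yielding
    texture coordinates in [0, 1] (the core of projective texture mapping).
    `proj_view` is the projector's 4x4 projection @ view matrix."""
    n = points_world.shape[0]
    homo = np.hstack([points_world, np.ones((n, 1))])  # to homogeneous coordinates
    clip = homo @ proj_view.T                          # project into clip space
    ndc = clip[:, :2] / clip[:, 3:4]                   # perspective divide -> [-1, 1]
    return 0.5 * ndc + 0.5                             # bias/scale to [0, 1] uv

# Toy orthographic "projector" looking down -z over a 2x2 window.
P = np.diag([1.0, 1.0, -1.0, 1.0])
uv = projective_tex_coords(np.array([[0.0, 0.0, -1.0], [1.0, 1.0, -1.0]]), P)
```

The reverse-projection problem mentioned above shows up here as points with non-positive clip-space w, which must be culled before the divide.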
Exploring the relationship between perceptual-cognitive function and driver safety: prediction and transfer
Driving continues to be the world's dominant form of transportation and the number of vehicles on the road is only projected to increase in the coming decades. At the same time, the demographic shift currently occurring in the industrialized world implies that the proportion of older adult drivers on the road is set to increase substantially. With age come wide-ranging changes in physical, sensory and cognitive systems, resulting in functional changes that can be subtle or profound. We are only beginning to understand how both normal and pathological variability in these functional measures affect driving performance and safety.
Developing a reliable, evidence-based tool to distinguish safe from unsafe drivers continues to be a major preoccupation for gerontology, accidentology, and clinical researchers alike. Accumulating evidence now suggests that there is an important link between specific cognitive abilities, such as speed-of-processing and attention, and driving performance. Continuing to explore this relationship in order to perhaps one day develop such a tool is an important endeavour. Another implication of the relationship between cognitive abilities and driving performance is that interventions designed to improve or sustain these abilities might conceivably enhance or maintain individuals' driving safety and comfort in the short- and long-term.
The purpose of this thesis is threefold. First, it develops and validates a novel methodology for assessing both young adult and older adult driving performance using custom driving simulator scenarios. Second, it pushes the state of our knowledge of how cognitive abilities relate to driving performance by demonstrating that performance on an integrative test of dynamic attention and speed-of-processing, i.e., 3-dimensional multiple object tracking (3D-MOT), predicts how drivers of different age groups perform. Finally, it offers evidence to suggest that training 3D-MOT actually enhances attentional function and speed-of-processing by transferring to performance on an unrelated test of these abilities and, ultimately, that this improvement might translate to improved driving performance.
Advanced Motion Models for Rigid and Deformable Registration in Image-Guided Interventions
Image-guided surgery (IGS) has been a major area of interest in recent decades that continues to transform surgical interventions and enable safer, less invasive procedures. In the preoperative context, diagnostic imaging, including computed tomography (CT) and magnetic resonance (MR) imaging, offers a basis for surgical planning (e.g., definition of the target, adjacent anatomy, and the surgical path or trajectory to the target). At the intraoperative stage, such preoperative images and the associated planning information are registered to intraoperative coordinates via a navigation system to enable visualization of (tracked) instrumentation relative to preoperative images. A major limitation of such an approach is that motions during surgery, whether rigid motions of bones manipulated during orthopaedic surgery or brain soft-tissue deformation in neurosurgery, are not captured, diminishing the accuracy of navigation systems.
This dissertation seeks to use intraoperative images (e.g., x-ray fluoroscopy and cone-beam CT) to provide more up-to-date anatomical context that properly reflects the state of the patient during interventions to improve the performance of IGS. Advanced motion models for inter-modality image registration are developed to improve the accuracy of both preoperative planning and intraoperative guidance for applications in orthopaedic pelvic trauma surgery and minimally invasive intracranial neurosurgery. Image registration algorithms are developed with increasing complexity of motion that can be accommodated (single-body rigid, multi-body rigid, and deformable) and increasing complexity of registration models (statistical models, physics-based models, and deep learning-based models).
For orthopaedic pelvic trauma surgery, the dissertation includes work encompassing: (i) a series of statistical models to capture shape and pose variations of one or more pelvic bones, together with an atlas of trajectory annotations; (ii) frameworks for automatic segmentation via registration of the statistical models to preoperative CT and for planning of fixation trajectories and dislocation/fracture reduction; and (iii) 3D-2D guidance using intraoperative fluoroscopy. For intracranial neurosurgery, the dissertation includes three inter-modality deformable registrations using physics-based Demons and deep learning models for CT-guided and CBCT-guided procedures.
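For context on the simplest case in the registration hierarchy (single-body rigid), the classical Kabsch/Procrustes solution recovers a least-squares rigid transform between corresponding point sets. This is a generic textbook sketch, far simpler than the statistical, physics-based, and learning-based models the dissertation develops:

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) aligning src -> dst point sets
    with known correspondences, so that dst ~= src @ R.T + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Recover a known 90-degree rotation about z plus a translation.
src = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
Rz = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
dst = src @ Rz.T + np.array([1.0, 2, 3])
R, t = kabsch(src, dst)
```

The multi-body rigid and deformable registrations in the dissertation generalise this idea to several independently moving bones and to dense deformation fields, and the 3D-2D case replaces point correspondences with image-intensity similarity.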