73 research outputs found

    En Introduksjon til Kunstig Syn i Autonom Kjøring

    Get PDF
    Autonomous driving is one of the rising technologies in today’s society. A wide range of applications therefore use this technology for the benefits it yields. For instance, an autonomous driving robot will free up the labor force and increase productivity in industries that require rapid transportation. However, gaining these benefits requires the development of reliable and accurate software and algorithms to be implemented in these autonomous driving systems. As this field has grown over the years, different companies have implemented this technology with great success. Thus, the increased focus on autonomous driving technology makes this a relevant topic to perform research on.

    As developing an autonomous driving system is a demanding topic, this project focuses solely on how computer vision can be used in autonomous driving systems. First and foremost, computer-vision-based autonomous driving software is developed. The software is first implemented on a small premade book-sized vehicle. This system is then used to test the software’s functionality. Autonomous driving functions that perform satisfactorily on the small test vehicle are also tested on a larger vehicle to see if the software works for other systems. Furthermore, the developed software is limited to some autonomous driving actions. These include actions such as stopping when a hindrance or a stop sign is detected, driving on a simple road, and parking. Although these are only a few autonomous driving actions, they are fundamental operations that can already make the autonomous driving system applicable to different use cases.

    Different computer vision methods for object detection have been implemented for detecting different types of objects, such as hindrances and signs, to determine the vehicle’s environment. The software also uses a line detection method for detecting road and parking lines, which are used for centering and parking the vehicle. Moreover, a bird’s-eye view of the physical world is created from the camera output to be used as an environment map for planning the optimal path in different scenarios. Finally, these implementations are combined to build the driving logic of the vehicle, making it able to perform the driving actions mentioned in the previous paragraph.

    When the developed software was used for the hindrance detection task, the results showed that although actual hindrances were detected, there were also scenarios where blockades were detected even though none were present. On the other hand, the developed function of stopping when a stop sign is detected was highly accurate and reliable, as it performed as expected. With regard to the remaining two implemented actions, centering and parking the vehicle, the system struggled to achieve a promising result. Despite this, the physical validation tests without the use of a vehicle model showed positive outcomes, although with minor deviations from the desired result. Overall, the software showed potential to be developed even further to be applicable in more demanding scenarios; however, the current issues must be addressed first.
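    As an illustration of the kind of bird’s-eye-view mapping and road-line detection the abstract describes, the sketch below warps a forward-facing camera frame into a top-down view and then looks for line segments with OpenCV. It is only a minimal example: the image file, source-corner coordinates, and edge/Hough thresholds are placeholder assumptions, not values taken from the thesis.

```python
# Minimal sketch (assumed setup, not the thesis implementation): perspective
# warp to a bird's-eye view, then Canny + probabilistic Hough line detection.
import cv2
import numpy as np

def birdseye_view(frame, src_corners, output_size=(400, 600)):
    """Warp a road image to a top-down view via a perspective transform."""
    w, h = output_size
    dst_corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(np.float32(src_corners), dst_corners)
    return cv2.warpPerspective(frame, M, (w, h))

def detect_lane_lines(top_down):
    """Detect road/parking line segments with Canny edges + Hough transform."""
    gray = cv2.cvtColor(top_down, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    return cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                           threshold=40, minLineLength=40, maxLineGap=20)

if __name__ == "__main__":
    frame = cv2.imread("road.jpg")                              # placeholder input
    corners = [(220, 300), (420, 300), (640, 480), (0, 480)]    # hand-calibrated
    top_down = birdseye_view(frame, corners)
    lines = detect_lane_lines(top_down)
    print(0 if lines is None else len(lines), "line segments found")
```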

    Deep Segmentation of the Mandibular Canal: a New 3D Annotated Dataset of CBCT Volumes

    Get PDF
    Inferior Alveolar Nerve (IAN) canal detection has been the focus of multiple recent works in dentistry and maxillofacial imaging. Deep learning-based techniques have achieved interesting results in this research field, although the small size of 3D maxillofacial datasets has strongly limited the performance of these algorithms. Researchers have been forced to build their own private datasets, thus precluding any opportunity for reproducing results and fairly comparing proposals. This work describes a novel, large, and publicly available mandibular Cone Beam Computed Tomography (CBCT) dataset, with 2D and 3D manual annotations provided by expert clinicians. Leveraging this dataset and employing deep learning techniques, we are able to improve the state of the art on 3D mandibular canal segmentation. The source code that allows all the reported experiments to be exactly reproduced is released as an open-source project along with this article.
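    For readers unfamiliar with the kind of pipeline the abstract alludes to, the following is a deliberately tiny sketch of voxel-wise 3D segmentation in PyTorch: a small 3D convolutional network combined with a soft Dice loss, a common choice for thin, sparse structures such as the mandibular canal. The architecture and loss here are generic placeholders and are not taken from the paper.

```python
# Generic 3D segmentation sketch (assumed, not the authors' architecture).
import torch
import torch.nn as nn

class TinySeg3D(nn.Module):
    """Very small 3D convolutional network producing per-voxel canal logits."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, 1, 1),            # one logit per voxel
        )

    def forward(self, x):
        return self.net(x)

def dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss over the whole volume."""
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum()
    return 1 - (2 * inter + eps) / (probs.sum() + target.sum() + eps)

# Toy forward/backward pass on a random 64^3 volume with a sparse fake mask.
model = TinySeg3D()
volume = torch.randn(1, 1, 64, 64, 64)            # (batch, channel, D, H, W)
mask = (torch.rand(1, 1, 64, 64, 64) > 0.98).float()
loss = dice_loss(model(volume), mask)
loss.backward()
print(float(loss))
```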

    Pseudo-Saliency for Human Gaze Simulation

    Get PDF
    Understanding and modeling human vision is an endeavor which can be, and has been, approached from multiple disciplines. Saliency prediction is a subdomain of computer vision which tries to predict the human eye movements made during either guided or free viewing of static images. In the context of simulation and animation, vision is often also modeled for the purposes of realistic and reactive autonomous agents. These models often focus more on plausible gaze movements of the eyes and head, and are less concerned with scene understanding through visual stimuli. Bringing techniques and knowledge over from computer vision into simulated virtual humans requires a methodology for generating saliency maps. Traditional saliency models are ill-suited for this due to their large computational cost as well as the lack of control inherent in most deep-network-based models. The primary contribution of this thesis is a proposed model for generating pseudo-saliency maps for virtual characters, Parametric Saliency Maps (PSM). This parametric model calculates saliency as a weighted combination of 7 factors selected from the saliency and attention literature. The experiments conducted show that the model is expressive enough to mimic results from state-of-the-art saliency models to a high degree of similarity, while being extraordinarily cheap to compute by virtue of being computed in the simulation’s graphics processing pipeline. As a secondary contribution, two models are proposed for saliency-driven gaze control. These models are expressive and present novel approaches for controlling the gaze of a virtual character using only visual saliency maps as input.
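    The core idea of a parametric saliency map, saliency computed as a weighted combination of factor maps, can be sketched in a few lines. The factor names and weights below are illustrative assumptions; the thesis combines seven factors drawn from the saliency and attention literature, which are not reproduced here.

```python
# Sketch of a weighted-combination saliency map (illustrative factors/weights).
import numpy as np

def parametric_saliency(factors, weights):
    """Weighted sum of min-max-normalized factor maps, rescaled to [0, 1]."""
    saliency = np.zeros_like(next(iter(factors.values())), dtype=float)
    for name, factor in factors.items():
        rng = factor.max() - factor.min()
        normalized = (factor - factor.min()) / rng if rng > 0 else factor * 0.0
        saliency += weights.get(name, 0.0) * normalized
    top = saliency.max()
    return saliency / top if top > 0 else saliency

# Toy example with three made-up factors on a 90x160 "screen".
h, w = 90, 160
factors = {
    "motion":    np.random.rand(h, w),
    "proximity": np.random.rand(h, w),
    "contrast":  np.random.rand(h, w),
}
weights = {"motion": 0.5, "proximity": 0.3, "contrast": 0.2}
saliency_map = parametric_saliency(factors, weights)
gaze_target = np.unravel_index(saliency_map.argmax(), saliency_map.shape)
print("most salient pixel:", gaze_target)
```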

    XR, music and neurodiversity: design and application of new mixed reality technologies that facilitate musical intervention for children with autism spectrum conditions

    Get PDF
    This thesis, accompanied by the practice outputs, investigates sensory integration, social interaction and creativity through a newly developed VR musical interface designed exclusively for children with a high-functioning autism spectrum condition (ASC). The results aim to contribute to the limited expanse of literature and research surrounding Virtual Reality (VR) musical interventions and Immersive Virtual Environments (IVEs) designed to support individuals with neurodevelopmental conditions. The author has developed bespoke hardware, software and a new methodology to conduct field investigations. These outputs include a Virtual Immersive Musical Reality Intervention (ViMRI) protocol, a Supplemental Personalised, immersive Musical Experience (SPiME) programme, the ‘Assisted Real-time Three-dimensional Immersive Musical Intervention System’ (ARTIMIS) and a bespoke (and fully configurable) ‘Creative immersive interactive Musical Software’ application (CiiMS). The outputs are each implemented within a series of institutional investigations of 18 autistic child participants. Four groups are evaluated using newly developed virtual assessment and scoring mechanisms devised exclusively from long-established rating scales. Key quantitative indicators from the datasets demonstrate consistent findings and significant improvements in individual preferences (likes), fear-reduction efficacy, and social interaction. Six individual case studies present positive qualitative results demonstrating improved decision-making and sensorimotor processing. The preliminary research trials further indicate that using this virtual-reality music technology system and the newly developed protocols produces notable improvements for participants with an ASC. More significantly, there is evidence that the supplemental technology facilitates a reduction in psychological anxiety and improvements in dexterity. The virtual music composition and improvisation system presented here requires further extensive testing in different spheres for proof of concept.

    Schwerpunkt Entwerfen

    Get PDF
    Entwerfen (designing) is an extremely vague term. Depending on the context, it can mean drawing, planning, modelling, projecting or representing, as well as inventing, developing, conceiving, composing and the like. When architects speak of the Entwurf (design), they usually use the word in a sense that goes back to the art-theoretical discourse which emerged in sixteenth-century Florence: design as disegno. Accordingly, in the hermeneutics of art, Entwerfen could ultimately become synonymous with the ‘artistic creative process’ itself. In designing, one supposedly grasps the mental faculties and processes at work in the artistic subject. This tradition is deliberately not taken up here. To describe designing as a cultural technique in its historical contingency, it must be moved away from the anthropocentric origin to which the Florentine art-theoretical discourse assigned it. Instead of understanding designing as a fundamental act of artistic creation and withdrawing it from history as an anthropological constant, precisely this conception should be examined as the historical result of discursive, technical and institutional practices.

    The appendicular skeleton variability of the Sauropoda Titanosauria from the Upper Cretaceous of Lo Hueco (Cuenca, Spain)

    Full text link
    Unpublished doctoral thesis, read at the Universidad Autónoma de Madrid, Facultad de Ciencias, Departamento de Biología, on 24-01-2020. Full-text access to this thesis is embargoed until 24-07-2021.

    In this dissertation, new data on the appendicular skeleton of the titanosaurs from the Campanian-Maastrichtian fossil site of Lo Hueco (Cuenca, Spain) are presented. This fossil site has yielded an abundant sample of specimens referable to titanosaur sauropods, with several partially articulated individuals and tens of isolated specimens. A high morphological variability in each type of appendicular element, as well as the presence of several small-sized specimens, has been identified in this sample. Until now, only a single titanosaur form exclusive to the site has been described, Lohuecotitan pandafilandi. However, studies of the abundant isolated specimens from the fossil site have allowed two main tooth morphotypes, two types of braincase, three morphotypes in the axial skeleton of the dorsal region, and four morphotypes among the caudal vertebrae to be identified. The current study explores the high variability found in the sample of appendicular elements. To this end, a series of analytical techniques related to machine learning and 3D geometric morphometrics are used with the objective of identifying possible morphotypes that help explain the morphological variance. A 3D digitization workflow for the specimens is proposed, including the virtual restoration of fragmentary elements and their incorporation into statistical analyses. Using these techniques, two main appendicular morphotypes have been identified. Based on these morphotypes, the intraspecific variability within each has been quantified, ontogenetic sequences have been identified, and the variability related to transformations during titanosaur ontogenetic development has been characterized. Some evidence indicates that the two titanosaur morphotypes from Lo Hueco could have belonged to two different guilds with two different types of feeding-niche exploitation. In the current study, the implications of several morphological differences between both main morphotypes are discussed under the hypothesis of differences in ecomorphological specialization. A statistical proxy model was created to test the relationship between overall appendicular morphology and the ecomorphological specialization related to the height of the feeding envelope among neosauropods. The results allow the two main morphotypes to be related to two different feeding-niche exploitation strategies, congruent with previous analyses of the cranial material. The observed intraspecific variability in each morphotype allows its impact on morphological character scoring to be determined. In this dissertation, several ontogenetic sequences have been identified for each type of element analyzed. The ontogenetic sequences of transformations in these titanosaurs are comprehensively described for the first time, as well as the ontogenetic stage and relative timing at which these changes occur, with implications for character scoring.

    This thesis was carried out thanks to the predoctoral contract grant (Ayuda para Contratos Predoctorales para la Formación de Doctores) BES-2013-065509 - Ministerio de Economía y Competitividad. This doctoral grant is associated with research project CGL2012-35199 - Ministerio de Economía y Competitividad.
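    As a rough illustration of how 3D geometric morphometrics and machine learning can be combined to look for morphotypes, the sketch below aligns landmark configurations with a Procrustes fit, projects them into a low-dimensional shape space with PCA, and clusters them with k-means. The landmark data, the alignment to a single reference specimen, and the choice of two clusters are illustrative assumptions, not the dissertation’s actual pipeline.

```python
# Illustrative morphotype-clustering sketch (random landmarks, assumed pipeline).
import numpy as np
from scipy.spatial import procrustes
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_specimens, n_landmarks = 30, 20
specimens = rng.normal(size=(n_specimens, n_landmarks, 3))   # fake 3D landmarks

# Align every specimen to the first one (a crude stand-in for generalized
# Procrustes analysis) and keep the standardized, aligned coordinates.
reference = specimens[0]
aligned = np.stack([procrustes(reference, s)[1] for s in specimens])

# Project the flattened shapes into a low-dimensional shape space.
shape_space = PCA(n_components=5).fit_transform(aligned.reshape(n_specimens, -1))

# Ask for two clusters, mirroring the two main morphotypes reported above.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(shape_space)
print("specimens per candidate morphotype:", np.bincount(labels))
```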