13,630 research outputs found

    CLIP-S⁴: Language-Guided Self-Supervised Semantic Segmentation

    Full text link
    Existing semantic segmentation approaches are often limited by costly pixel-wise annotations and predefined classes. In this work, we present CLIP-S⁴, which leverages self-supervised pixel representation learning and vision-language models to enable various semantic segmentation tasks (e.g., unsupervised, transfer learning, language-driven segmentation) without any human annotations or unknown-class information. We first learn pixel embeddings with pixel-segment contrastive learning from different augmented views of images. To further improve the pixel embeddings and enable language-driven semantic segmentation, we design two types of consistency guided by vision-language models: 1) embedding consistency, aligning our pixel embeddings to the joint feature space of a pre-trained vision-language model, CLIP; and 2) semantic consistency, forcing our model to make the same predictions as CLIP over a set of carefully designed target classes with both known and unknown prototypes. Thus, CLIP-S⁴ enables a new task of class-free semantic segmentation where no unknown-class information is needed during training. As a result, our approach shows consistent and substantial performance improvement over four popular benchmarks compared with the state-of-the-art unsupervised and language-driven semantic segmentation methods. More importantly, our method outperforms these methods on unknown class recognition by a large margin. Comment: The IEEE/CVF Conference on Computer Vision and Pattern Recognition 202
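    The two consistency terms described in the abstract lend themselves to a compact sketch. The PyTorch code below illustrates the idea only and is not the authors' implementation: the tensor shapes, the prototype matrix, and the temperature `tau` are all assumptions.

```python
import torch
import torch.nn.functional as F

def embedding_consistency(pixel_emb, clip_emb):
    """Align (N, D) pixel embeddings with CLIP's joint feature space (cosine distance)."""
    pixel_emb = F.normalize(pixel_emb, dim=-1)
    clip_emb = F.normalize(clip_emb, dim=-1)
    return (1.0 - (pixel_emb * clip_emb).sum(dim=-1)).mean()

def semantic_consistency(pixel_emb, clip_emb, prototypes, tau=0.07):
    """Match the model's distribution over known + unknown class prototypes to CLIP's."""
    prototypes = F.normalize(prototypes, dim=-1)
    logits = F.normalize(pixel_emb, dim=-1) @ prototypes.t() / tau
    with torch.no_grad():  # CLIP's predictions act as soft targets
        targets = (F.normalize(clip_emb, dim=-1) @ prototypes.t() / tau).softmax(dim=-1)
    return F.cross_entropy(logits, targets)
```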

    Event-based tracking of human hands

    Full text link
    This paper proposes a novel method for tracking human hands using data from an event camera. The event camera detects changes in brightness, measuring motion with low latency, no motion blur, low power consumption, and high dynamic range. Captured frames are analysed using lightweight algorithms that report 3D hand position data. The chosen pick-and-place scenario serves as an example input for collaborative human-robot interactions and for obstacle avoidance in human-robot safety applications. Event data are pre-processed into intensity frames. The regions of interest (ROI) are defined through object-edge event activity, reducing noise. ROI features are extracted for use in depth perception. Event-based tracking of human hands is demonstrated to be feasible, in real time and at low computational cost. The proposed ROI-finding method reduces noise from intensity images, achieving up to 89% data reduction relative to the original while preserving the features. The depth estimation error relative to ground truth (measured with wearables), assessed using dynamic time warping with a single event camera, is 15 to 30 millimetres, depending on the plane in which it is measured. In summary, the work demonstrates tracking of human hands in 3D space using data from a single event camera and lightweight algorithms to define ROI features.
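    A minimal sketch of the pre-processing described above: accumulating events into a pseudo-intensity frame and keeping only blocks with sufficient edge activity as ROI. The event-array layout, window length, and thresholds are assumptions, not the paper's values.

```python
import numpy as np

def events_to_frame(events, shape, t0=0.0, dt=0.01):
    """Accumulate events (structured array with fields x, y, t) from the
    time window [t0, t0 + dt) into a pseudo-intensity frame."""
    frame = np.zeros(shape, dtype=np.float32)
    sel = (events["t"] >= t0) & (events["t"] < t0 + dt)
    np.add.at(frame, (events["y"][sel], events["x"][sel]), 1.0)
    return frame

def find_roi(frame, block=16, min_activity=5.0):
    """Mark blocks whose edge-event activity exceeds a threshold; the rest is noise."""
    roi = np.zeros(frame.shape, dtype=bool)
    h, w = frame.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            if frame[i:i + block, j:j + block].sum() > min_activity:
                roi[i:i + block, j:j + block] = True
    return roi
```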

    Audio-Visual Automatic Speech Recognition Towards Education for Disabilities

    Get PDF
    Education is a fundamental right that enriches everyone's life. However, physically challenged people are often excluded from the general and advanced education system. An Audio-Visual Automatic Speech Recognition (AV-ASR) based system is useful for improving the education of physically challenged people by providing hands-free computing: they can communicate with the learning system through AV-ASR. However, it is challenging to trace the lips correctly for the visual modality. This paper therefore addresses appearance-based visual features along with a co-occurrence statistical measure for visual speech recognition. Local Binary Pattern-Three Orthogonal Planes (LBP-TOP) and the Grey-Level Co-occurrence Matrix (GLCM) are proposed for extracting visual speech information. The experimental results show that the proposed system achieves 76.60% accuracy for visual speech recognition and 96.00% accuracy for audio speech recognition.
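    A rough sketch of the proposed visual features, using scikit-image. For brevity it takes one centre slice per orthogonal plane, whereas full LBP-TOP aggregates histograms over all slices of each plane; all parameter values are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

def lbp_top_histograms(volume, P=8, R=1):
    """LBP histograms on the XY, XT and YT planes of a (T, H, W) mouth-ROI volume.
    Centre slices only; full LBP-TOP averages over every slice of each plane."""
    t, h, w = volume.shape
    planes = (volume[t // 2], volume[:, h // 2, :], volume[:, :, w // 2])
    hists = []
    for plane in planes:
        lbp = local_binary_pattern(plane, P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        hists.append(hist)
    return np.concatenate(hists)

def glcm_features(image):
    """Co-occurrence statistics of an 8-bit greyscale mouth ROI."""
    glcm = graycomatrix(image, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.array([graycoprops(glcm, p).mean() for p in props])
```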

    SOFIA and ALMA Investigate Magnetic Fields and Gas Structures in Massive Star Formation: The Case of the Masquerading Monster in BYF 73

    Full text link
    We present SOFIA+ALMA continuum and spectral-line polarisation data on the massive molecular cloud BYF 73, revealing important details about the magnetic field morphology, gas structures, and energetics in this unusual massive star formation laboratory. The 154 μm HAWC+ polarisation map finds a highly organised magnetic field in the densest, inner 0.55 × 0.40 pc portion of the cloud, compared to an unremarkable morphology in the cloud's outer layers. The 3 mm continuum ALMA polarisation data reveal several more structures in the inner domain, including a pc-long, ~500 M⊙ "Streamer" around the central massive protostellar object MIR 2, with magnetic fields mostly parallel to the east-west Streamer but oriented north-south across MIR 2. The magnetic field orientation changes from mostly parallel to the column density structures to mostly perpendicular, at thresholds N_crit = 6.6 × 10²⁶ m⁻², n_crit = 2.5 × 10¹¹ m⁻³, and B_crit = 42 ± 7 nT. ALMA also mapped Goldreich-Kylafis polarisation in ¹²CO across the cloud, which traces, in both total intensity and polarised flux, a powerful bipolar outflow from MIR 2 that interacts strongly with the Streamer. The magnetic field is also strongly aligned along the outflow direction; energetically, it may dominate the outflow near MIR 2, comprising rare evidence for a magnetocentrifugal origin to such outflows. A portion of the Streamer may be in Keplerian rotation around MIR 2, implying a gravitating mass of 1350 ± 50 M⊙ for the protostar+disk+envelope; alternatively, these kinematics can be explained by gas in free fall towards a 950 ± 35 M⊙ object. The high accretion rate onto MIR 2 apparently occurs through the Streamer/disk, and could account for ~33% of MIR 2's total luminosity via gravitational energy release. Comment: 33 pages, 32 figures, accepted by ApJ. Line-Integral Convolution (LIC) images and movie versions of Figures 3b, 7, and 29 are available at https://gemelli.spacescience.org/~pbarnes/research/champ/papers
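    The two mass estimates quoted above follow from simple kinematic relations: Keplerian rotation gives M = v²r/G, while free fall gives v = √(2GM/r), i.e. M = v²r/(2G). A minimal illustration with astropy; the numeric inputs are placeholders, not the measured kinematics of BYF 73.

```python
import astropy.units as u
from astropy.constants import G

def keplerian_mass(v, r):
    """Enclosed mass implied by circular (Keplerian) rotation: M = v^2 r / G."""
    return (v**2 * r / G).to(u.Msun)

def freefall_mass(v, r):
    """Mass implied if the same speed is free-fall motion: M = v^2 r / (2 G)."""
    return (v**2 * r / (2 * G)).to(u.Msun)

# Placeholder numbers for illustration only:
print(keplerian_mass(2.0 * u.km / u.s, 0.1 * u.pc))
print(freefall_mass(2.0 * u.km / u.s, 0.1 * u.pc))
```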

    Review of Methodologies to Assess Bridge Safety During and After Floods

    Get PDF
    This report summarizes a review of technologies used to monitor bridge scour, with an emphasis on techniques appropriate for testing during and immediately after design flood conditions. The goal of this study is to identify potential technologies and strategies for the Illinois Department of Transportation that may be used to enhance the reliability of bridge safety monitoring during floods from local to state levels. The research team conducted a literature review of technologies that have been explored by state departments of transportation (DOTs) and national agencies, as well as state-of-the-art technologies that have not been extensively employed by DOTs. This review included informational interviews with representatives from DOTs and relevant industry organizations. Recommendations include considering (1) acquisition of tethered kneeboard or surf ski-mounted single-beam sonars for rapid deployment by local agencies, (2) acquisition of remote-controlled vessels mounted with single-beam and side-scan sonars for statewide deployment, (3) development of large-scale particle image velocimetry systems using remote-controlled drones for stream velocity and direction measurement during floods, (4) physical modeling to develop Illinois-specific hydrodynamic loading coefficients for Illinois bridges during flood conditions, and (5) development of holistic risk-based bridge assessment tools that incorporate structural, geotechnical, hydraulic, and scour measurements to provide rapid feedback for bridge closure decisions. IDOT-R27-SP50
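    Recommendation (3) refers to large-scale particle image velocimetry (LSPIV), which estimates surface flow velocity from the displacement between successive frames. A minimal OpenCV sketch using phase correlation on a single interrogation window; function and parameter names are illustrative, and a real system would process many windows and correct for drone motion.

```python
import cv2
import numpy as np

def surface_velocity(frame_a, frame_b, dt, metres_per_pixel):
    """Estimate the dominant surface displacement between two video frames
    via phase correlation, and convert it to a velocity in m/s."""
    a = np.float32(cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY))
    b = np.float32(cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY))
    (dx, dy), _response = cv2.phaseCorrelate(a, b)
    return np.hypot(dx, dy) * metres_per_pixel / dt
```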

    A Model for Automated Support for the Recognition, Extraction, Customization, and Reconstruction of Static Charts

    Get PDF
    Data charts are widely used in our daily lives, being present in regular media such as newspapers, magazines, web pages, books, and many others. A well-constructed data chart leads to an intuitive understanding of its underlying data; likewise, when data charts embody poor design choices, a redesign of these representations might be needed. However, in most cases these charts are shown as static images, which means that the original data are not usually available. Automatic methods could therefore be applied to extract the underlying data from chart images to allow such changes. The task of recognizing charts and extracting data from them is complex, largely due to the variety of chart types and their visual characteristics. Computer vision techniques for image classification and object detection are widely used for the problem of recognizing charts, but only on images without any disturbance. Other features of real-world images that can make this task difficult, such as photo distortion, noise, and misalignment, are not addressed in most works in the literature. Two computer vision techniques that can assist this task, and that have been little explored in this context, are perspective detection and correction. These methods transform a distorted and noisy chart into a clean chart whose type is ready for data extraction or other uses. The task of reconstructing data is straightforward: as long as the data are available, the visualization can be reconstructed. Reconstructing it in the same context, however, is complex. Using a visualization grammar for this scenario is a key component, as these grammars usually have extensions for interaction, chart layers, and multiple views without requiring extra development effort. This work presents a model for automated support for the custom recognition and reconstruction of charts in images. The model automatically performs the process steps, such as reverse engineering (turning a static chart back into its data table for later reconstruction), while allowing the user to make modifications in case of uncertainties. This work also features a model-based architecture along with prototypes for various use cases. Validation is performed step by step, with methods inspired by the literature. Three use cases provide proof of concept and validation of the model: the first features chart recognition methods focused on real-world documents; the second focuses on the vocalization of charts, using a visualization grammar to reconstruct a chart in audio format; and the third presents an Augmented Reality application that recognizes and reconstructs charts in the same context (a piece of paper), overlaying the new chart and interaction widgets. The results showed that, with slight changes, chart recognition and reconstruction methods are now ready for real-world charts when taking time, accuracy, and precision into consideration.
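    Of the techniques highlighted above, perspective correction is the most mechanical step. A minimal OpenCV sketch, assuming the four chart corners have already been detected by an earlier stage:

```python
import cv2
import numpy as np

def rectify_chart(image, corners):
    """Warp a photographed chart to a fronto-parallel view.
    `corners`: four (x, y) points ordered top-left, top-right,
    bottom-right, bottom-left, e.g. from a corner detector."""
    tl, tr, br, bl = [np.asarray(c, dtype=np.float32) for c in corners]
    width = int(max(np.linalg.norm(br - bl), np.linalg.norm(tr - tl)))
    height = int(max(np.linalg.norm(tr - br), np.linalg.norm(tl - bl)))
    dst = np.array([[0, 0], [width - 1, 0],
                    [width - 1, height - 1], [0, height - 1]], dtype=np.float32)
    m = cv2.getPerspectiveTransform(np.stack([tl, tr, br, bl]), dst)
    return cv2.warpPerspective(image, m, (width, height))
```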

    The Role of Transient Vibration of the Skull on Concussion

    Get PDF
    Concussion is a traumatic brain injury, usually caused by a direct or indirect blow to the head, that affects brain function. The maximum mechanical impedance of the brain tissue occurs at 450 ± 50 Hz and may be affected by the skull's resonant frequencies. After an impact to the head, vibration resonance of the skull damages the underlying cortex. The skull deforms and vibrates like a bell for 3 to 5 milliseconds, bruising the cortex. Furthermore, the deceleration forces the frontal and temporal cortex against the skull, eliminating a layer of cerebrospinal fluid. When the skull vibrates, the force spreads directly to the cortex, with no layer of cerebrospinal fluid to reflect the wave or cushion its force. To date, there has been little research investigating the effect of transient vibration of the skull. Therefore, the overall goal of the proposed research is to gain a better understanding of the role of transient vibration of the skull in concussion. This goal will be achieved by addressing three research objectives. First, an automatic MRI skull and brain segmentation technique is developed. Due to bone's weak magnetic resonance signal, MRI scans struggle to differentiate bone tissue from other structures. One of the most important components for successful segmentation is high-quality ground truth labels. We therefore introduce a deep learning framework for skull segmentation in which the ground truth labels are created from CT imaging using the standard tessellation language (STL). Furthermore, since the brain region will be important for future work, we explore a new initialization concept for the convolutional neural network (CNN) based on orthogonal moments to improve brain segmentation in MRI. Second, a novel 2D and 3D automatic method to align the facial skeleton is introduced. An important aspect of further impact analysis is the ability to precisely simulate the same point of impact on multiple bone models. To perform this task, the skull must be precisely aligned in all anatomical planes. We therefore introduce a 2D/3D technique to align the facial skeleton that was initially developed for automatically calculating the craniofacial symmetry midline. In the 2D version, the concept of using cephalometric landmarks and manual image-grid alignment to construct the training dataset was introduced. This concept was then extended to a 3D version, where the coronal and transverse planes are aligned using a CNN approach. As the alignment in the sagittal plane is still undefined, a new alignment based on these techniques will be created to align the sagittal plane using the Frankfort plane as a framework. Finally, the resonant frequencies of multiple skulls are assessed to determine how the skull's resonant frequency vibrations propagate into the brain tissue. After applying material properties and a mesh to the skull, modal analysis is performed to assess the skull's natural frequencies. Theories will then be raised regarding the relation between skull geometry, such as shape and thickness, and vibration-related brain tissue injury, which may result in concussive injury.
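    For the third objective, modal analysis reduces to the generalized eigenvalue problem K·x = ω²·M·x on the assembled stiffness and mass matrices. A toy SciPy sketch on a two-degree-of-freedom system; the matrices are placeholders, not skull data.

```python
import numpy as np
from scipy.linalg import eigh

def natural_frequencies(K, M):
    """Undamped natural frequencies (Hz) from the generalized eigenvalue
    problem K x = omega^2 M x."""
    eigvals = eigh(K, M, eigvals_only=True)
    omega = np.sqrt(np.clip(eigvals, 0.0, None))
    return omega / (2.0 * np.pi)

# Toy two-degree-of-freedom system (placeholder values, not skull data):
K = np.array([[2.0, -1.0], [-1.0, 1.0]]) * 1.0e5  # stiffness, N/m
M = np.diag([1.0, 1.0])                           # mass, kg
print(natural_frequencies(K, M))
```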

    Reliability of Underfill and Lifetime Estimation of Microelectronic Assemblies

    Get PDF
    Abstract: In order to protect the interconnections in flip-chip packages, an underfill material layer is used to fill the volume and provide mechanical support between the silicon chip and the substrate. Due to the chip-corner geometry and the mismatch in coefficients of thermal expansion (CTE), the underfill suffers from a stress concentration at the chip corners whenever the temperature is lower than the curing temperature. This stress concentration leads to subsequent mechanical failures in flip-chip packages, such as chip-underfill interfacial delamination and underfill cracking. Local stresses and strains are the most important parameters for understanding the mechanism of underfill failure. The industry therefore currently relies on the finite element method (FEM) to calculate the stress components, but FEM results may not be accurate enough compared with the actual stresses in the underfill, and the simulations require careful consideration of important geometrical details and material properties. This thesis proposes a modeling approach that can accurately estimate underfill delamination areas and crack trajectories, with the following three objectives. The first objective was to develop an experimental technique capable of measuring underfill deformations around the chip-corner region. This technique combined confocal microscopy and the digital image correlation (DIC) method to enable three-dimensional strain measurements at different temperatures, and was named the confocal-DIC technique. It was first validated by a theoretical analysis of thermal strains. In a test component similar to a flip-chip package, the strain distribution obtained by the FEM model was in good agreement with the results measured by the confocal-DIC technique, with relative errors of less than 20% at the chip corners. The second objective was to measure the strain near a crack in the underfill. Artificial cracks with lengths of 160 μm and 640 μm were fabricated from the chip corner along the 45° diagonal direction. The confocal-DIC-measured maximum hoop strains and first principal strains were located at the crack-front area for both the 160 μm and 640 μm cracks. A crack model was developed using the extended finite element method (XFEM), and the strain distribution in the simulation showed the same trend as the experimental results. The distribution of hoop strains was in good agreement with the measured values when the model element size was smaller than 22 μm, small enough to capture the strong strain gradient near the crack tip. The third objective was to propose a modeling approach for underfill delamination and cracking that includes the effects of manufacturing variables. A deep thermal cycling test was performed on 13 test cells to obtain the reference chip-underfill delamination areas and crack profiles. An artificial neural network (ANN) was trained to relate the effects of the manufacturing variables to the number of cycles to first delamination for each cell. The predicted numbers of cycles for all 6 cells in the test dataset fell within the intervals of the experimental observations. Delamination growth was simulated with FEM by evaluating the strain energy amplitude at the interface elements between the chip and the underfill. For 5 of the 6 validation cells, the delamination growth model was consistent with the experimental observations. The cracks in the bulk underfill were modelled by XFEM without predefined paths. The directions of the edge cracks were in good agreement with the experimental observations, with an error of less than 2.5°. This approach met the goal of the thesis: estimating initial underfill delamination, delamination areas, and crack paths in actual industrial flip-chip assemblies.
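    The ANN step described above maps manufacturing variables to cycles-to-first-delamination. A minimal scikit-learn sketch with synthetic stand-in data; the real feature set, network size, and training details are not specified in the abstract, so everything below is an assumption that merely mirrors the 13-cell / 6-test-cell split.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.uniform(size=(13, 4))           # manufacturing variables per cell (synthetic)
y = rng.uniform(500, 3000, size=13)     # cycles to first delamination (synthetic)

# Mirror the abstract's split: train on 7 cells, hold out 6 for testing.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0),
)
model.fit(X[:7], y[:7])
print(model.predict(X[7:]))
```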

    Map Typography: Improving Legibility in Topographic Maps through Type Design

    Get PDF
    This thesis examines the legibility of type on maps and aims to find ways to improve it through type design. As type is often an integral part of maps, something that helps the map user navigate, understand, and perceive a wide range of information effectively, type design and legibility must be regarded as important design elements. However, even though cartography and typography have extensive theoretical bases, legibility has not been comprehensively researched in a cartographic context. By combining type design theory and scientific legibility studies with cartographic theory, the legibility of type on maps can be improved. The topic is first studied through an extensive literature review covering existing concepts and theories of cartography, cartographic typography, and typography. Once a competent knowledge basis of these concepts and theories has been acquired, the findings are utilised in the design component. The design component is a type family designed specifically for use with topographic maps. It consists of two elements: a project description that follows the design process of the type family, relating design choices to the theoretical findings and perspectives presented in the literature review, and the finished type family. To conclude the design component, several visual studies are made, both to compare the type family with other relevant typefaces and to validate its possible functionality in the chosen cartographic application (a topographic map). A broad understanding of the topics of the literature review was formed. Cartographic theory describes the overall nature of maps and specifies the various map elements and their intended uses. Cartographic typography deepens the understanding of type on maps: it highlights the specific needs that must be taken into consideration, demonstrates the diversity of typographic situations that might occur, and presents a large set of guidelines to help the mapmaker achieve better results. Typography and type design focus on the micro-level of type: how minor design choices affect the whole and, through legibility studies, validate certain views and bring new topics into consideration. By combining theoretical literature from these domains, this thesis helps form the foundation for an improved framework for type design for (topographic) maps. Furthermore, these domains give clear suggestions on how the legibility of type on topographic maps can be improved: legibility of type in this context comprises multiple components that must be taken into consideration and applied to the processes of mapmaking and type design.