
    Assessing Online-form Complexity for the Development of Assistive Technologies for Older Adults

    Although cognitive decline occurs as a natural part of the ageing process, the majority of online-forms do not cater specifically for the needs of older adult users. As a consequence, online-forms pose significant usability challenges to this target user group. The Delivering Inclusive Access to Disabled and Elderly Members of the community (DIADEM) project aims to develop a web browser plug-in that adapts existing online-form content so that it is more accessible and usable for older adults with cognitive decline. In order to identify requirements for developing the DIADEM application, it is necessary to observe users interacting with online-forms and to identify the usability challenges that arise. However, the format and functionality of online-form content presented on the web vary greatly. Identifying a representative sample of online-forms that can be presented to users within a trials setting to elicit key usability challenges has proved to be a non-trivial task. Consequently, we have developed a set of Bespoke Online-form Selection (BOFS) criteria that help identify appropriate and representative candidate online-forms for use within the user trials setting to formulate initial requirements for developing the DIADEM application. In the context of the DIADEM project, BOFS has proved to be a valuable tool and has been used to successfully identify online-forms for our user trials. This paper presents the BOFS criteria, shows how they align with the cognitive declines typically presented by the older adult user group, and demonstrates how BOFS has been of value within the context of the DIADEM project.
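
    The abstract does not reproduce the criteria themselves, but a minimal sketch of how criteria-based form selection of this kind might work is shown below; the criterion names, weights, and candidate forms are hypothetical illustrations, not the published BOFS criteria.

```python
# Hypothetical illustration of criteria-based online-form selection in the spirit of BOFS.
# The criterion names, weights, and ratings below are invented for this sketch and are
# not the published BOFS criteria.

CRITERIA_WEIGHTS = {
    "number_of_fields": 1.0,       # longer forms demand more working memory
    "uses_jargon": 2.0,            # unfamiliar terminology increases cognitive load
    "multi_step_navigation": 1.5,  # multi-page forms require recall between steps
    "time_limited": 2.0,           # timeouts penalise slower interaction speeds
}

def score_form(form_ratings: dict) -> float:
    """Weighted sum of per-criterion ratings (0 = challenge absent, 1 = present)."""
    return sum(CRITERIA_WEIGHTS[c] * form_ratings.get(c, 0) for c in CRITERIA_WEIGHTS)

candidates = {
    "council_tax_form": {"number_of_fields": 1, "uses_jargon": 1, "multi_step_navigation": 0, "time_limited": 0},
    "online_booking":   {"number_of_fields": 0, "uses_jargon": 0, "multi_step_navigation": 1, "time_limited": 1},
}

# Rank candidate forms by how many representative usability challenges they expose.
for name, ratings in sorted(candidates.items(), key=lambda kv: -score_form(kv[1])):
    print(f"{name}: {score_form(ratings):.1f}")
```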

    Automating the Reconstruction of Neuron Morphological Models: the Rivulet Algorithm Suite

    The automatic reconstruction of single neuron cells is essential to enable large-scale data-driven investigations in computational neuroscience. The problem remains an open challenge due to various imaging artefacts caused by the fundamental limits of light microscopic imaging. Few previous methods have been able to generate satisfactory neuron reconstruction models automatically without human intervention. The manual tracing of neuron models is labour-intensive and time-consuming, making the collection of large-scale neuron morphology databases one of the major bottlenecks in morphological neuroscience. This thesis presents a suite of algorithms developed to address the challenge of automatically reconstructing neuron morphological models with minimal human intervention. We first propose the Rivulet algorithm, which iteratively backtracks the neuron fibres from their terminus points to the soma centre. By refining many details of the Rivulet algorithm, we then propose the Rivulet2 algorithm, which not only eliminates several hyper-parameters but also improves robustness against noisy images. A soma surface reconstruction method is also proposed to make the neuron models biologically plausible around the soma body. The tracing algorithms, including Rivulet and Rivulet2, normally need one or more hyper-parameters for segmenting the neuron body out of the noisy background. To make this pipeline fully automatic, we propose to use a 2.5D neural network to enhance the curvilinear structures of the neuron fibres. The trained networks can quickly highlight the fibres of interest and suppress noise points in the background for the neuron tracing algorithms. We evaluated the proposed methods on the data released by both the DIADEM and the BigNeuron challenges. The experimental results show that our proposed tracing algorithms achieve state-of-the-art results.
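
    As a rough illustration of the backtracking idea described above (tracing from a terminus back towards the soma along a distance map), here is a toy 2D sketch; the BFS distance map, marching rule, and example mask are simplifications and not the published Rivulet or Rivulet2 implementation.

```python
# Toy 2D sketch of Rivulet-style backtracking: march from a terminus back to the soma
# by descending a geodesic distance map computed inside a segmented mask.
# Simplified illustration only, not the published Rivulet/Rivulet2 code.
import numpy as np
from collections import deque

def geodesic_distance(mask: np.ndarray, soma: tuple) -> np.ndarray:
    """BFS distance from the soma, restricted to foreground pixels."""
    dist = np.full(mask.shape, np.inf)
    dist[soma] = 0
    queue = deque([soma])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] \
                    and mask[ny, nx] and dist[ny, nx] == np.inf:
                dist[ny, nx] = dist[y, x] + 1
                queue.append((ny, nx))
    return dist

def backtrack(dist: np.ndarray, terminus: tuple) -> list:
    """Follow steepest descent of the distance map from a terminus to the soma."""
    path, cur = [terminus], terminus
    while dist[cur] > 0:
        y, x = cur
        neighbours = [(y + dy, x + dx) for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        cur = min((n for n in neighbours
                   if 0 <= n[0] < dist.shape[0] and 0 <= n[1] < dist.shape[1]
                   and dist[n] < np.inf),
                  key=lambda n: dist[n])
        path.append(cur)
    return path

# Tiny L-shaped "neuron": soma at one end, terminus at the other.
mask = np.zeros((7, 7), dtype=bool)
mask[3, 1:6] = True   # horizontal branch
mask[1:4, 5] = True   # vertical branch
dist = geodesic_distance(mask, soma=(3, 1))
print(backtrack(dist, terminus=(1, 5)))
```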

    Model and Appearance Based Analysis of Neuronal Morphology from Different Microscopy Imaging Modalities

    Neuronal morphology analysis is key to understanding how a brain works. This process requires a neuron imaging system with single-cell resolution; however, no such system is feasible for the human brain. Fortunately, knowledge can be inferred from the model organism Drosophila melanogaster and transferred to the human system. This dissertation explores the morphology analysis of Drosophila larvae at single-cell resolution in static images and image sequences, as well as across multiple microscopy imaging modalities. Our contributions cover both computational methods for morphology quantification and analyses of the influence of anatomical aspects. We develop novel model- and appearance-based methods for morphology quantification and illustrate their significance in three neuroscience studies. Modeling the structure and dynamics of neuronal circuits creates understanding of how connectivity patterns are formed within a motor circuit and of whether the connectivity map of neurons can be deduced from estimates of neuronal morphology. To address this problem, we study both boundary-based and centerline-based approaches for neuron reconstruction in static volumes. Neuronal mechanisms are related to morphology dynamics, so the patterns of neuronal morphology change are analyzed along with other aspects. In this case, the relationship between neuronal activity and morphology dynamics is explored to analyze locomotion procedures. Our tracking method models the morphology dynamics in calcium image sequences designed for detecting neuronal activity. It follows a local-to-global design to handle calcium imaging issues and neuronal movement characteristics. Lastly, modeling the link between structural and functional development depicts the correlation between neuron growth and protein interactions. This requires morphology analysis across different imaging modalities. It can be solved using part-wise volume segmentation with artificial templates, a standardized representation of neurons. Our method follows a global-to-local approach to solve both part-wise segmentation and registration across modalities. Our methods address common issues in automated morphology analysis, from extracting morphological features to tracking neurons, as well as mapping neurons across imaging modalities. The quantitative analysis delivered by our techniques enables a number of new applications and visualizations for advancing the investigation of phenomena in the nervous system.
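
    As a generic illustration of what a centerline-based reconstruction step looks like in practice, the sketch below extracts a medial axis from a toy binary mask with scikit-image; it stands in for the idea only and is not the dissertation's own pipeline.

```python
# Generic centerline extraction from a binary neurite mask using skeletonization.
# Illustrates the centerline-based idea only; this is not the dissertation's method.
import numpy as np
from skimage.morphology import skeletonize

# Toy binary mask standing in for a segmented neurite (a thick diagonal stripe).
mask = np.zeros((20, 20), dtype=bool)
for i in range(3, 17):
    mask[i, i - 2:i + 2] = True

centerline = skeletonize(mask)      # 1-pixel-wide medial axis of the mask
points = np.argwhere(centerline)    # (row, col) coordinates along the centerline
print(f"{points.shape[0]} centerline points, e.g. {points[:3].tolist()}")
```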

    BlogForever D2.6: Data Extraction Methodology

    This report outlines an inquiry into the area of web data extraction, conducted within the context of blog preservation. The report reviews theoretical advances and practical developments for implementing data extraction. The inquiry is extended through an experiment that demonstrates the effectiveness and feasibility of implementing some of the suggested approaches. More specifically, the report discusses an approach based on unsupervised machine learning that employs the RSS feeds and HTML representations of blogs. It outlines the possibilities of extracting the semantics available in blogs and demonstrates the benefits of exploiting available standards such as microformats and microdata. The report then proposes a methodology for extracting and processing blog data to further inform the design and development of the BlogForever platform.
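
    A minimal sketch of the RSS-guided extraction idea follows: a post title known from the feed is located in the blog's HTML, and the path to the matching node is recorded as a candidate extraction rule. The HTML snippet, feed title, and rule format are invented for illustration and do not reproduce the BlogForever implementation.

```python
# Minimal sketch: use a post title known from the blog's RSS feed to locate the matching
# HTML node, then record that node's path as a candidate extraction rule.
# The HTML, feed title, and rule format are invented for illustration.
from bs4 import BeautifulSoup

html = """
<html><body>
  <div class="header">My Blog</div>
  <div class="post"><h2 class="title">Hello archiving</h2>
    <div class="content">Post body text...</div></div>
</body></html>
"""
feed_title = "Hello archiving"  # would normally come from the blog's RSS feed

soup = BeautifulSoup(html, "html.parser")
node = soup.find(string=lambda s: s and feed_title in s).parent  # element holding the title

def css_path(el):
    """Build a simple selector (tag.class > ...) from the document root down to the element."""
    parts = []
    while el is not None and el.name:
        cls = "." + el["class"][0] if el.get("class") else ""
        parts.append(el.name + cls)
        el = el.parent
    return " > ".join(reversed(parts))

print(css_path(node))   # e.g. "[document] > html > body > div.post > h2.title"
```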

    Inferring Geodesic Cerebrovascular Graphs: Image Processing, Topological Alignment and Biomarkers Extraction

    A vectorial representation of the vascular network that embodies quantitative features - location, direction, scale, and bifurcations - has many potential neuro-vascular applications. Patient-specific models support computer-assisted surgical procedures in neurovascular interventions, while analyses of multiple subjects are essential for the group-level studies on which clinical prediction and therapeutic inference ultimately depend. This first motivated the development of a variety of methods to segment the cerebrovascular system. Nonetheless, a number of limitations, ranging from data-driven inhomogeneities and anatomical intra- and inter-subject variability to the lack of exhaustive ground truth, the need for operator-dependent processing pipelines, and the highly non-linear vascular domain, still make the automatic inference of the cerebrovascular topology an open problem. In this thesis, the topology of brain vessels is inferred by focusing on their connectedness. With a novel framework, the brain vasculature is recovered from 3D angiographies by solving a connectivity-optimised anisotropic level-set over a voxel-wise tensor field representing the orientation of the underlying vasculature. Assuming that vessels join along minimal paths, a connectivity paradigm is formulated to automatically determine the vascular topology as an over-connected geodesic graph. Ultimately, deep-brain vascular structures are extracted with geodesic minimum spanning trees. The inferred topologies are then aligned with similar ones for labelling and for propagating information over a non-linear vectorial domain, where the branching pattern of a set of vessels transcends a subject-specific quantized grid. Using a multi-source embedding of a vascular graph, the pairwise registration of topologies is performed with state-of-the-art graph matching techniques employed in computer vision. Functional biomarkers are determined over the neurovascular graphs with two complementary approaches. Efficient approximations of blood flow and pressure drop account for autoregulation and compensation mechanisms across the whole network in the presence of perturbations, using lumped-parameter analogue equivalents derived from clinical angiographies. Also, a localised NURBS-based parametrisation of bifurcations is introduced to model fluid-solid interactions by means of hemodynamic simulations within an isogeometric analysis framework, where both the geometry and the solution profile at the interface share the same homogeneous domain. Experimental results on synthetic and clinical angiographies validated the proposed formulations. Perspectives and future work are discussed for the group-wise alignment of cerebrovascular topologies over a population, towards defining cerebrovascular atlases, and for further topological optimisation strategies and risk prediction models for therapeutic inference. Most of the algorithms presented in this work are available as part of the open-source package VTrails.
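
    As a small illustration of the final graph-extraction step mentioned above (keeping a minimum spanning tree over candidate geodesic connections), the sketch below applies SciPy's MST routine to a toy distance matrix; the values are invented and this is not the VTrails implementation.

```python
# Small sketch of the spanning-tree step: given pairwise geodesic path lengths between
# detected vascular key points, keep only a minimum spanning tree as the vessel topology.
# The distance values are toy numbers, not clinical data.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

# Symmetric matrix of geodesic distances between 4 hypothetical bifurcation/end points;
# a zero off-diagonal entry means "no candidate connection considered".
geodesic = np.array([
    [0.0, 2.0, 9.0, 0.0],
    [2.0, 0.0, 3.0, 7.0],
    [9.0, 3.0, 0.0, 4.0],
    [0.0, 7.0, 4.0, 0.0],
])

mst = minimum_spanning_tree(csr_matrix(geodesic))
rows, cols = mst.nonzero()
for r, c in zip(rows, cols):
    print(f"keep edge {r}-{c} (geodesic length {mst[r, c]:.1f})")
```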

    Decoding ancient Egyptian diadems: symbolism and iconography as a means of interpreting feminine identity

    Distinctive ancient Egyptian headdresses made from precious or semi-precious materials date to prehistoric times, indicating a growing sense of individuality and hierarchy. Women's headdresses were indicators of rulership, divinity, social status, cultic affiliation and wealth. Visual evidence indicates that female identity was emphasised by external and outward appearance, and headdresses in the form of diadems followed recognised stylistic dictates throughout the Dynastic Period. The floral and faunal motifs used in their embellishment were believed to have protective amuletic and magical powers. Although a considerable amount of investigation has been undertaken into the materials and techniques used in the manufacture of diadems, the symbolism and iconography of these gendered artefacts as a means of interpreting visual messages and self-expression have largely been unexplored. The study has been limited to well-provenanced, extant Old, Middle and New Kingdom diadems housed in various museums worldwide.

    Intelligent Content Acquisition in Web Archiving

    Web sites are dynamic by nature, with content and structure changing over time; many pages on the Web are produced by content management systems (CMSs). Tools currently used by Web archivists to preserve the content of the Web blindly crawl and store Web pages, disregarding the CMS the site is based on and whatever structured content is contained in Web pages. We first present an application-aware helper (AAH) that fits into an archiving crawl processing chain to perform intelligent and adaptive crawling of Web applications, given a knowledge base of common CMSs. The AAH has been integrated into two Web crawlers in the framework of the ARCOMEM project: the proprietary crawler of the Internet Memory Foundation and a customized version of Heritrix. We then propose an efficient unsupervised Web crawling system, ACEBot (Adaptive Crawler Bot for data Extraction), a structure-driven crawler that exploits the inner structure of pages and guides the crawling process based on the importance of their content. ACEBot works in two phases: in the offline phase, it constructs a dynamic site map (limiting the number of URLs retrieved) and learns a traversal strategy based on the importance of navigation patterns (selecting those leading to valuable content); in the online phase, ACEBot performs massive downloading following the chosen navigation patterns. The AAH and ACEBot make 7 and 5 times fewer HTTP requests, respectively, than a generic crawler, without compromising effectiveness. We finally propose OWET (Open Web Extraction Toolkit), a free platform for semi-supervised data extraction. OWET allows a user to extract the data hidden behind Web forms.
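
    A schematic sketch of the structure-driven idea behind ACEBot's offline phase is shown below: sampled URLs are clustered into navigation patterns and ranked by how much textual content their pages carry. The URL patterns, content sizes, and scoring are invented examples, not the actual ACEBot algorithm.

```python
# Schematic sketch of a structure-driven offline phase: cluster sampled URLs into
# navigation patterns and rank patterns by the textual content of their sample pages.
# URLs, content sizes, and the pattern abstraction are invented for illustration.
from collections import defaultdict
from urllib.parse import urlparse

# (url, amount of main-text content found in a sampled download, in characters)
samples = [
    ("http://example.org/post/1",     5400),
    ("http://example.org/post/2",     6100),
    ("http://example.org/tag/python",  300),
    ("http://example.org/tag/web",     250),
    ("http://example.org/about",       800),
]

def pattern(url: str) -> str:
    """Abstract a URL into a navigation pattern by keeping its first path segment."""
    first = urlparse(url).path.strip("/").split("/")[0]
    return f"/{first}/*"

scores = defaultdict(list)
for url, text_len in samples:
    scores[pattern(url)].append(text_len)

# Average content size per pattern; a real crawler would threshold or rank these to
# decide which navigation patterns to follow in the online (bulk download) phase.
for p, sizes in sorted(scores.items(), key=lambda kv: -sum(kv[1]) / len(kv[1])):
    print(f"{p}: mean content {sum(sizes) / len(sizes):.0f} chars over {len(sizes)} pages")
```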
