146 research outputs found

    Perceptual recognition of familiar objects in different orientations

    Recent approaches to object recognition have suggested that representations are view-dependent rather than object-centred, as was previously asserted by Marr (Marr and Nishihara, 1978). The exact nature of these view-centred representations, however, does not agree across the different theories. Palmer suggested that a single canonical view represents an object in memory (Palmer et al., 1981), whereas other studies have shown that each object may have more than one view-point representation (Tarr and Pinker, 1989). A set of experiments was run to determine the nature of the visual representation in memory of rigid, familiar objects presented foveally and in peripheral vision. In the initial set of experiments, recognition times were measured for a selection of common, elongated objects rotated in increments of 30° about three different axes and their combinations. Significant main effects of orientation were found in all experiments. This effect was attributed to the delay in recognising objects when foreshortened. Objects with strong gravitational uprights yielded the same orientation effects as objects without gravitational uprights. Recognition times for objects rotated around the picture plane were found to be independent of orientation. The results were not dependent on practice with the objects. There was no benefit found for shaded objects over silhouetted objects. The findings were highly consistent across the experiments. Four experiments were also carried out which tested the detectability of objects presented foveally among a set of similar objects. The subjects viewed an object picture (target) surrounded by eight search pictures arranged in a circular array. The task was to locate the picture-match of the target object (which was sometimes absent) as fast as possible. All of the objects had prominent elongated axes and were viewed perpendicular to this axis.
When the object was present in the search array, it could appear in one of five orientations: in its original orientation, rotated in the picture plane by 30° or 60°, or rotated in depth by 30° or 60°. Highly consistent results were found across the four experiments. Objects rotated in depth by 60° took longer to find and were less likely to be found in the first saccade than objects in all other orientations. These findings were independent of the type of display (i.e. randomly rotated distractors or aligned distractors) and of the task (matching to a picture or to a name of an object). It was concluded that there was no evidence that an abstract 3-dimensional representation was used in searching for an object. The results from these experiments are compatible with the notion of multiple-view representations of objects in memory. There was no evidence that objects were stored as single, object-centred representations. Representations are initially based on the familiar views of an object, but with practice on other views, those views which hold the maximum information about the object are stored. Novel views are transformed to match these stored views, and different candidates for the transformation process are discussed.

    Psychometric Analysis of the Medical Terminology 350 Final Test Using Item Analysis and KR20

    Meaningful quantitative research studies require the use of instruments that have acceptable validity and reliability. The purpose of this study was to determine the reliability and validity of the Medical Terminology 350 Final Test (MT350) in a population of secondary health science students. The MT350 is an assessment instrument that measures participants’ recall of medical terminology meanings and is currently used to assess learning in health science education. A review of the literature revealed a lack of psychometric analysis of the commonly used MT350. Past practice has suggested that instruction using mnemonics can favorably influence medical vocabulary retention, but the absence of valid and reliable assessment instruments has prevented proper research of the practice. Archival and anonymous data from completed MT350 results were used. Participants in this study consisted of secondary health science students from Tennessee and Missouri, with a total sample size of 102 students. Internal consistency was determined through reliability analysis using KR20. Content validity was established through a review by 10 content experts from the fields of health and education. The experts were asked to rate each of the 350 items on a 3-point Likert-type scale (3 = essential, 2 = useful but not essential, 1 = not necessary). It was concluded that the MT350 is a reliable measure of medical terminology retention.
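    KR20 itself is a simple closed-form statistic over dichotomously scored items. As an illustrative sketch only (invented data, not the study's code or results), it could be computed as:

```python
# KR-20 internal-consistency coefficient for dichotomous (0/1) item scores.
# Rows are examinees, columns are items. Illustrative only.
def kr20(scores):
    n = len(scores)                          # number of examinees
    k = len(scores[0])                       # number of items
    totals = [sum(row) for row in scores]    # total score per examinee
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n   # population variance
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in scores) / n            # proportion correct on item j
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_t)
```

    Like Cronbach's alpha, of which KR20 is the special case for dichotomous items, higher values indicate greater internal consistency.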

    HyTEXPROS: a hypermedia information retrieval system

    A hypermedia information retrieval system combines the specific capabilities of hypermedia systems with information retrieval operations, providing a new kind of information management tool. It offers end-users the possibility of navigating, browsing and searching a large collection of documents to satisfy an information need. TEXPROS is an intelligent document processing and retrieval system that supports storing, extracting, classifying, categorizing, retrieving and browsing enterprise information, and is therefore well suited to hypermedia information retrieval techniques. In this dissertation, we extend TEXPROS to a hypermedia information retrieval system called HyTEXPROS, with hypertext functionalities such as nodes, typed and weighted links, anchors, guided tours, network overviews, bookmarks, annotations and comments, and an external linkbase. HyTEXPROS describes the whole information base, including the metadata and the original documents, as network nodes connected by links. Through hypertext functionalities, a user can dynamically construct an information path by browsing through pieces of the information base. By adding hypertext functionalities to TEXPROS, HyTEXPROS changes its working domain from a personal document processing domain to a personal library domain, accompanied by citation techniques to process original documents. A four-level conceptual architecture is presented as the system architecture of HyTEXPROS; this architecture is also referred to as the reference model of HyTEXPROS. A detailed description of HyTEXPROS, using first-order logic calculus, is also proposed, and an early version of a prototype is briefly described.
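    The typed, weighted links described above can be sketched as a minimal data structure. The class and method names below are invented for illustration and are not HyTEXPROS's actual API:

```python
# Minimal sketch of a hyperbase with typed, weighted links between nodes.
# Hypothetical names; not the HyTEXPROS implementation.
class Hyperbase:
    def __init__(self):
        self.links = []   # each link is (source, destination, link_type, weight)

    def add_link(self, src, dst, ltype, weight=1.0):
        self.links.append((src, dst, ltype, weight))

    def neighbours(self, node, ltype=None):
        # Follow outgoing links, optionally filtered by link type.
        return [d for s, d, t, w in self.links
                if s == node and (ltype is None or t == ltype)]
```

    A guided tour or information path would then be a stored sequence of such link traversals.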

    Incentive spirometry prescription and inspiratory capacity recovery guideline for the early period after open heart surgery

    Incentive spirometry (IS) is often used as lung expansion therapy for increasing postoperative IS inspiratory capacity (ISIC) in open heart surgery (OHS) patients. However, there is currently a lack of guidelines and prescriptions on how this therapy should be administered for these patients. Although there is some information on several patient- and surgery-related factors associated with ISIC volumes after OHS, the role of IS performance variables such as IS inspiration volume (ISv) and IS inspiration frequency (ISf) has not been investigated. In order to formulate evidence-based IS therapy guidelines and prescriptions, this study investigated factors, including ISv and ISf, to identify predictors of ISIC recovery in a cohort of OHS patients in Hospital Sultanah Aminah, Johor Bahru (HSAJB). The study collected objective and precise IS performance data from 95 OHS patients using a newly developed and validated multisensor data collection device (ISDCD) over five consecutive postoperative days (POD). Data analysis identified ISv as the sole predictor of ISIC recovery, explaining 23%, 24%, 17% and 25% of the variance in ISIC recovery on POD2, POD3, POD4 and POD5 respectively. Three pathways for postoperative ISIC recovery were also identified: patients on the fastest pathway had the highest ISIC recovery rate of 19% per POD, followed by 16% for the middle pathway and 12% for the slowest. The findings facilitated the formulation of evidence-based IS therapy prescriptions and ISIC recovery guidelines from POD1 to POD4. However, these findings need further verification through research involving comprehensive and objective evaluation of IS performance using appropriate technology devices.
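    The "variance explained" figures correspond to R² from regressing ISIC recovery on the predictor. As a sketch with invented numbers (not the study's data), the proportion of variance a single predictor such as ISv explains is the squared correlation:

```python
# R-squared for a single-predictor regression: the squared Pearson correlation
# between predictor x and outcome y. Illustrative only; data are invented.
def r_squared(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))   # covariance sum
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)
```

    An R² of 0.23 would then read as ISv accounting for 23% of the variance in ISIC recovery on that postoperative day.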

    Binary Neural Networks for Memory-Efficient and Effective Visual Place Recognition in Changing Environments

    Visual place recognition (VPR) is a robot’s ability to determine whether a place was visited before using visual data. While conventional handcrafted methods for VPR fail under extreme environmental appearance changes, those based on convolutional neural networks (CNNs) achieve state-of-the-art performance but entail heavy runtime processing and model sizes that demand a large amount of memory. Hence, CNN-based approaches are unsuitable for resource-constrained platforms such as small robots and drones. In this article, we take a multistep approach of decreasing the precision of model parameters, combining it with network depth reduction and fewer neurons in the classifier stage, to propose a new class of highly compact models that drastically reduces the memory requirements and computational effort while maintaining state-of-the-art VPR performance. To the best of our knowledge, this is the first attempt to propose binary neural networks for solving the VPR problem effectively under changing conditions and with significantly reduced resource requirements. Our best-performing binary neural network, dubbed FloppyNet, achieves VPR performance comparable to its full-precision and deeper counterparts while consuming 99% less memory and increasing inference speed sevenfold.
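    The core idea behind binary networks can be sketched briefly: weights and activations are constrained to {-1, +1}, so each parameter needs one bit instead of 32, and dot products reduce to XNOR-and-popcount operations on hardware. The toy functions below illustrate the principle only and are not FloppyNet's actual code:

```python
# Binarize a real-valued vector to {-1, +1} (sign binarization).
def binarize(v):
    return [1 if x >= 0 else -1 for x in v]

# Dot product of two binary vectors. With values in {-1, +1} this equals
# (#matches - #mismatches), which hardware computes as XNOR followed by popcount.
def bin_dot(a, b):
    return sum(x * y for x, y in zip(a, b))
```

    Going from 32-bit to 1-bit parameters alone saves roughly 97% of weight memory; combined with the depth and classifier reductions described above, this is consistent with the 99% figure reported.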

    Ontology-guided construction of temporalized relational data models

    Within an organization, as well as between organizations, many stakeholders must make decisions based on their vision of the organization concerned, its environment, and the interactions between the two. In most cases, the data are fragmented across several uncoordinated sources, which makes it difficult, in particular, to trace their chronological evolution. These sources are heterogeneous in their structure, in the semantics of the data they contain, in the computer technologies that manipulate them, and in the governance rules that control them. In this context, a Learning Health System aims to unify health care, biomedical research and knowledge transfer by providing tools and services that improve collaboration among stakeholders, the underlying goal being to provide better, personalized services to the individual. The implementation of such a system requires a common data model with consistent semantics, structure, and temporal traceability that ensures data integrity.
Traditional data-model design methods are based on practice rules that are often imprecise, ad hoc, and not automatable, so extracting the data of interest requires substantial human resources. The reconciliation and aggregation of sources must constantly be redone, because not all needs are known in advance, needs vary as processes evolve, and the data are often incomplete. Achieving interoperability therefore requires an automated data-model construction method that jointly maintains the raw source data and their semantics. This thesis presents a method that, once a knowledge model is chosen, builds a data model according to fundamental criteria derived from an ontological model and a temporal relational model based on interval logic. The method is semi-automated by a prototype, OntoRelα. On the one hand, using ontologies to define the semantics of data is an effective way to ensure better semantic interoperability, since an ontology can express, in an automatically exploitable form, the logical axioms that describe data and their links. On the other hand, using a temporalized relational model standardizes the structure of the data model and integrates temporal constraints as well as domain constraints defined in the ontologies.
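    As a rough illustration of the idea (invented names and a deliberately simplified mapping, not OntoRelα's actual output), an ontology class with data properties could be rendered as a relational table carrying interval-style validity columns:

```python
# Sketch: map an ontology class and its data properties to a temporalized
# relational table. The valid_from/valid_to pair encodes the validity interval.
# Class, property, and column names here are hypothetical examples.
def class_to_table(cls, data_props):
    cols = [f"{name} {sql_type}" for name, sql_type in data_props]
    cols += ["valid_from DATE", "valid_to DATE"]   # interval-based temporal columns
    return f"CREATE TABLE {cls} (id INT, " + ", ".join(cols) + ");"
```

    A real mapping would also have to translate object properties into foreign keys and ontology axioms into integrity constraints, which is where most of the method's complexity lies.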

    Small nets and short paths: optimising neural computation


    Investigating quantum many-body systems with tensor networks, machine learning and quantum computers

    We perform quantum simulation on classical and quantum computers and set up a machine learning framework in which we can map out phase diagrams of known and unknown quantum many-body systems in an unsupervised fashion. The classical simulations are done with state-of-the-art tensor network methods in one and two spatial dimensions. For one-dimensional systems, we utilize matrix product states (MPS), which have many practical advantages and can be optimized using the efficient density matrix renormalization group (DMRG) algorithm. The data for two-dimensional systems are obtained from projected entangled pair states (PEPS) optimized via imaginary time evolution. Data in the form of observables, entanglement spectra, or parts of the state vectors from these simulations are then fed into a deep learning (DL) pipeline, where we perform anomaly detection to map out the phase diagram. We extend this notion to quantum computers and introduce quantum variational anomaly detection. Here, we first simulate the ground state and then process it in a quantum machine learning (QML) manner. Both simulation and QML routines are performed on the same device, which we demonstrate both in classical simulation and on a physical quantum computer hosted by IBM.
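    To give a flavour of the MPS formalism used for the one-dimensional simulations (a textbook toy, not the thesis code): the amplitude of a computational-basis state is the trace of a product of per-site matrices. The bond-dimension-2 matrices below encode the unnormalized GHZ state, whose only nonzero amplitudes are on all-zeros and all-ones configurations:

```python
# Toy MPS amplitude: amplitude(b1..bn) = Tr( A[b1] @ A[b2] @ ... @ A[bn] ).
# 2x2 matrices, i.e. bond dimension 2; pure-Python matrix products for clarity.
def mps_amplitude(site_tensors, bits):
    M = [[1, 0], [0, 1]]                     # start from the identity
    for b in bits:
        A = site_tensors[b]
        M = [[sum(M[i][k] * A[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
    return M[0][0] + M[1][1]                 # trace closes the matrix product

# Standard MPS tensors for the (unnormalized) GHZ state.
GHZ = {0: [[1, 0], [0, 0]], 1: [[0, 0], [0, 1]]}
```

    DMRG then optimizes such site tensors variationally; this snippet only shows how an MPS stores amplitudes compactly.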

    Diffusion MRI tractography for oncological neurosurgery planning: Clinical research prototype
