
    Cortical Dynamics of Contextually-Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    How do humans use predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, a certain combination of objects can define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. A neural model, ARTSCENE Search, is developed to illustrate the neural mechanisms of such memory-based contextual learning and guidance, and to explain challenging behavioral data on positive/negative, spatial/object, and local/distant global cueing effects during visual search. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined by enhancing target-like objects in space as a scene is scanned with saccadic eye movements. The model clarifies the functional roles of neuroanatomical, neurophysiological, and neuroimaging data in visual search for a desired goal object. In particular, the model simulates the interactive dynamics of spatial and object contextual cueing in the cortical What and Where streams, starting from early visual areas through the medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortical cells (area 46) prime possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist represented in parahippocampal cortex, whereas model ventral prefrontal cortical cells (area 47/12) prime possible target object representations in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex.
The model hereby predicts how the cortical What and Where streams cooperate during scene perception, learning, and memory to accumulate evidence over time to drive efficient visual search of familiar scenes. CELEST, an NSF Science of Learning Center (SBE-0354378); SyNAPSE program of Defense Advanced Research Projects Agency (HR0011-09-3-0001, HR0011-09-C-0011)

    Utilizing RxNorm to Support Practical Computing Applications: Capturing Medication History in Live Electronic Health Records

    RxNorm was utilized as the basis for direct capture of medication history data in a live EHR system deployed in a large, multi-state outpatient behavioral healthcare provider in the United States serving over 75,000 distinct patients each year across 130 clinical locations. This tool incorporated auto-complete search functionality for medications and proper dosage identification assistance. The overarching goal was to understand if and how standardized terminologies like RxNorm can be used to support practical computing applications in live EHR systems. We describe the stages of implementation, approaches used to adapt RxNorm's data structure for the intended EHR application, and the challenges faced. We evaluate the implementation using a four-factor framework addressing flexibility, speed, data integrity, and medication coverage. RxNorm proved to be functional for the intended application, given appropriate adaptations to address the high-speed input/output (I/O) requirements of a live EHR and the flexibility required for data entry in multiple potential clinical scenarios. Future research around search optimization for medication entry, user profiling, and linking RxNorm to drug classification schemes holds great potential for improving the user experience and utility of medication data in EHRs. Comment: Appendix (including SQL/DDL Code) available by author request. Keywords: RxNorm; Electronic Health Record; Medication History; Interoperability; Unified Medical Language System; Search Optimization
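    The auto-complete behavior described in the abstract can be sketched as a simple prefix filter over a medication vocabulary. This is an illustrative sketch only: the medication names below are generic examples and are not drawn from RxNorm's actual tables or API.

    ```python
    def autocomplete(prefix, names):
        """Return medication names matching the typed prefix, case-insensitively."""
        p = prefix.lower()
        return sorted(n for n in names if n.lower().startswith(p))

    # Hypothetical in-memory vocabulary; a live EHR would query an indexed
    # RxNorm-derived table instead to meet the I/O requirements noted above.
    meds = ["Sertraline", "Fluoxetine", "Risperidone", "Quetiapine", "Sertaconazole"]

    print(autocomplete("ser", meds))  # ['Sertaconazole', 'Sertraline']
    ```

    In practice the search-optimization questions the authors raise (indexing strategy, ranking of candidates, user profiling) sit exactly at this lookup step.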

    Understanding the Impact of the New Aesthetics and New Media Works on Future Curatorial Resource Responsibilities for Research Collections

    The author examines the emerging impact of the works of the “New Aesthetic,” along with other works that have their genesis in the rapid technological changes of the last fifty-plus years. Consideration is given to the history of digital audio/visual works that will eventually be held by repositories of cultural heritage and how this history has, or has not, been documented. These creations have developed out of an environment of networked, shared, re-usable and re-purposed data. The article briefly examines how these works are utilized while looking at the future impact of the growing creation and use of complex, compound multimedia digital research and cultural collections as evidenced by augmented and virtual reality environments such as smartphone apps and Second Life.

    Search Costs and Risky Investment in Quality

    One striking development associated with the explosion of e-commerce is the increased transparency of sellers' quality history. In this paper we analyze how this affects firms' incentives to invest in quality when the outcome of investment is uncertain. We identify two conflicting effects. On the one hand, reducing the consumer's cost of search for quality exacerbates the negative effects of past poor performance. This increases incentives to invest, leading to higher quality. On the other hand, the fact that a firm, despite its best efforts, may fail to live up to consumers' more demanding expectations, makes investment less attractive. This discourages investment, leading to lower quality. We show that reducing the search cost leads to higher quality if the initial level of the search cost is sufficiently high but may lead to lower quality if the initial level of the search cost is sufficiently low. Keywords: search, internet search, quality, risky investment

    Structuring a Modern Web Service for Users and Search Engines

    This thesis studies how search engines should be taken into account when developing an early-stage web service. The thesis introduces practical methods for improving the presentation of a web service’s pages in search engine results, and a general overview of the working principles of search engines is provided. The thesis also goes over the history of web search engines in general before covering the history of Google in more detail. The practical part of this thesis is conducted as a case study, in which search-engine-visibility improvements are made to the Skole application. Skole is a very early-stage web service where higher education students can share knowledge and discuss their studies. The case study aims to determine what kinds of improvements can be made, how to make them, and what difference they make in practice. The case study also seeks to find out how important it is to follow the guidelines of search engine providers when developing this kind of very early-stage web service. The results of the case study are mainly measured by documenting how the application’s pages show up in Google before and after the changes have been made. Aggregated data from analytics providers is also used to see how the improvements have affected usage of the application.

    Establishing National Innovation Foundation: How Does a Tail Wag the Dog?

    A simple search on the web for unaided technological innovations by common people from the unorganized sector will reveal the paucity of information worldwide. It is this gap that the Honey Bee Network, started at IIMA about two decades ago, has tried to fill. In this paper, a very brief history of the steps taken to establish the National Innovation Foundation (NIF) is given; a detailed history remains to be written. Now that NIF is to become an autonomous institute of the Department of Science and Technology, its role within India and outside needs to be redefined. How a small academic initiative has spawned multiple institutional innovations is a subject that deserves further study.

    Logic programming in the context of multiparadigm programming: the Oz experience

    Oz is a multiparadigm language that supports logic programming as one of its major paradigms. A multiparadigm language is designed to support different programming paradigms (logic, functional, constraint, object-oriented, sequential, concurrent, etc.) with equal ease. This article has two goals: to give a tutorial of logic programming in Oz and to show how logic programming fits naturally into the wider context of multiparadigm programming. Our experience shows that there are two classes of problems, which we call algorithmic and search problems, for which logic programming can help formulate practical solutions. Algorithmic problems have known efficient algorithms. Search problems do not have known efficient algorithms but can be solved with search. The Oz support for logic programming targets these two problem classes specifically, using the concepts needed for each. This is in contrast to the Prolog approach, which targets both classes with one set of concepts, resulting in less than optimal support for each class. To explain the essential difference between algorithmic and search programs, we define the Oz execution model. This model subsumes both concurrent logic programming (committed-choice-style) and search-based logic programming (Prolog-style). Instead of Horn clause syntax, Oz has a simple, fully compositional, higher-order syntax that accommodates the abilities of the language. We conclude with lessons learned from this work, a brief history of Oz, and many entry points into the Oz literature. Comment: 48 pages, to appear in the journal "Theory and Practice of Logic Programming"
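    The abstract's distinction between the two problem classes can be illustrated (here in Python rather than Oz) by placing a known-efficient algorithm next to a backtracking search over choice points. The graph-coloring example is a generic illustration chosen for brevity, not code from the paper:

    ```python
    # Algorithmic problem: a known efficient algorithm exists, e.g. Euclid's gcd.
    def gcd(a, b):
        while b:
            a, b = b, a % b
        return a

    # Search problem: no known efficient algorithm in general; solved by
    # exploring choice points with backtracking, e.g. k-coloring a graph.
    def color(graph, k, assignment=None):
        assignment = dict(assignment or {})
        node = next((n for n in graph if n not in assignment), None)
        if node is None:
            return assignment                      # every node colored: success
        for c in range(k):                         # choice point: try each color
            if all(assignment.get(nb) != c for nb in graph[node]):
                result = color(graph, k, {**assignment, node: c})
                if result is not None:
                    return result
        return None                                # dead end: backtrack

    triangle = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
    print(gcd(48, 18))          # 6
    print(color(triangle, 3))   # a valid 3-coloring; 2 colors would fail
    ```

    In Oz, the search case would instead be expressed with choice points handled by the language's built-in search engines, which is the support the article describes.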