
    Services for annotation of biomedical text

    Motivation: Text mining in the biomedical domain has in recent years focused on the development of tools for recognizing named entities and extracting relations. Such research resulted from the need for these tools as basic components of more advanced solutions. Named entity recognition, entity mention normalization, and relationship extraction have now reached a stage where they perform comparably to human annotators (considering inter-annotator agreement, measured in many studies to be around 90%). Many tools have been made available, through web interfaces or as downloadable software, using non-standardized formats for input and output. To advance progress in text mining, solutions are needed that both provide and combine the results of 'basic' information retrieval and extraction tools. Results: Our groups at Technical University Dresden, Humboldt-Universität zu Berlin, and Arizona State University developed systems for named entity recognition, normalization, and relationship extraction. As evaluated during and after the BioCreative 2 challenge, recognition of proteins achieves 86% F-measure, normalization of gene mentions 85%, and extraction of protein-protein interactions including mapping to UniProt 25%. Conclusions: We consider the BioCreative meta-service an ideal framework for making information extraction tools available to a variety of users: researchers from the biomedical domain, database curators, and researchers in text mining who can use the services as input for subsequent analyses. At the time of writing this abstract, twelve groups provide their tools as services to the BCMS server. We currently participate with tools for recognizing names of genes/proteins and species, normalizing gene mentions to EntrezGene, protein mentions to UniProt, and species mentions to NCBI Taxonomy, as well as classifying abstracts for protein-protein interactions. Availability: For more information, please refer to http://alibaba.informatik.hu-berlin.de/bcms/. BCMS is available at http://bcms.bioinfo.cnio.es/
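
    The F-measure figures quoted above are the harmonic mean of precision and recall. As a quick illustration (not part of the paper's tooling, and with invented counts), it can be computed from raw counts like this:

```python
# Illustrative only: F-measure (F1) as the harmonic mean of precision and
# recall, the metric behind the 86%/85%/25% figures quoted above.
def f_measure(tp: int, fp: int, fn: int) -> float:
    """Compute F1 from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: 860 correct protein mentions, 140 spurious, 134 missed.
print(round(f_measure(860, 140, 134), 2))  # -> 0.86
```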

    Sign Language Recognition using Machine Learning

    Deaf and mute people communicate with others and within their own groups by using sign language. Computer recognition of sign language begins with the acquisition of sign gestures and continues until text or speech is produced. Sign gestures are of two types, static and dynamic; both kinds of recognition system are important, although static gesture recognition is easier than dynamic gesture recognition. This survey details the steps of sign language recognition, examining data acquisition, preprocessing, transformation, feature extraction, classification, and results. Some recommendations for furthering this field of study are also given.
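
    As a rough sketch of the pipeline stages the survey enumerates (preprocessing/transformation, feature extraction, classification), a minimal static-gesture classifier might look like the following; the flattened-pixel features, the SVM classifier, and all dimensions are illustrative assumptions, not the survey's recommendations:

```python
# Minimal sketch of a static-gesture recognition pipeline: flattened image
# pixels as features, an SVM as the classifier. All choices are illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data: 200 grayscale 32x32 gesture images, 5 gesture classes.
rng = np.random.default_rng(0)
X = rng.random((200, 32 * 32))        # output of preprocessing/transformation
y = rng.integers(0, 5, size=200)      # gesture class labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))  # classification stage
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```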

    Demographic characteristics of depressed patients referred to the psychiatric clinic of Ardabil Fatemi hospital

    Depression is the most common psychological disorder, present in 35–40 percent of people who see a doctor. Given the effect of demographic factors on the prevalence and incidence of depressive disorders, identifying high-risk groups for depressive disease is certainly useful. The aim of this study was to determine the demographic characteristics of depressed patients referred to the psychiatric clinic of Fatemi hospital in Ardabil. In this retrospective descriptive study, 120 records of patients with depressive disorder referred to the psychiatric clinic of Ardabil Fatemi hospital were examined; using a flowsheet chart, the patients' demographic characteristics were extracted from their records and analyzed with SPSS software, using frequency tables. The findings indicated that among the depressed patients, 51.66% were female, 80% were married, 59.16% had 1–4 children, 36.66% had education below diploma level, 38.33% were housewives, 81.66% lived in a city, 68.34% were aged 24–45, and 43.34% had an anxious personality before onset of the disease. Recognition of these demographic characteristics helps in identifying high-risk groups for depressive disorder and, as a result, in its prevention.

    Data-Driven Grasp Synthesis - A Survey

    We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar, or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on approaches based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of similarity matching to a set of previously encountered objects. Finally, for approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations.
    Comment: 20 pages, 30 figures, submitted to IEEE Transactions on Robotics
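
    As a toy illustration of the similarity-matching idea used for familiar objects (the survey covers many concrete variants; the descriptors and grasps below are invented), a nearest-neighbor lookup against previously encountered objects might look like:

```python
# Toy sketch of similarity matching for "familiar" objects: find the closest
# previously encountered object descriptor and reuse its stored grasp.
import numpy as np

known_descriptors = np.array([   # feature vectors of encountered objects
    [0.9, 0.1, 0.3],             # e.g. a mug-like shape
    [0.2, 0.8, 0.5],             # e.g. a box-like shape
])
known_grasps = ["handle grasp", "top-down pinch"]

def match_grasp(descriptor: np.ndarray) -> str:
    """Return the grasp of the most similar known object (Euclidean distance)."""
    distances = np.linalg.norm(known_descriptors - descriptor, axis=1)
    return known_grasps[int(np.argmin(distances))]

print(match_grasp(np.array([0.85, 0.15, 0.35])))  # -> "handle grasp"
```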

    Off-line Arabic Handwriting Recognition System Using Fast Wavelet Transform

    In this research, an off-line handwriting recognition system for the Arabic alphabet is introduced. The system contains three main stages: preprocessing, segmentation, and recognition. In the preprocessing stage, the Radon transform was used in the design of algorithms for page, line, and word skew correction, as well as for word slant correction. In the segmentation stage, a Hough transform approach was used for line extraction. For line-to-word and word-to-character segmentation, a statistical method based on a mathematical representation of the binary images of lines and words was used. Unlike most current handwriting recognition systems, ours simulates the human mechanism of image recognition, in which images are encoded and saved in memory as groups according to their similarity to each other. Characters are decomposed into coefficient vectors using the fast wavelet transform; the vectors representing a character's different possible shapes are then saved as groups, with one representative for each group. Recognition is achieved by comparing the vector of the character to be recognized against the group representatives. Experiments showed that the proposed system is able to achieve the recognition task with 90.26% accuracy. The system needs at most 3.41 seconds to recognize a single character in a text of 15 lines, where each line has 10 words on average.
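
    A minimal sketch of the decompose-and-compare step described above, assuming the PyWavelets library, a Haar wavelet, and Euclidean distance (the paper's exact wavelet, decomposition depth, and distance measure are not specified here):

```python
# Sketch of wavelet-based character matching: decompose a character image with
# a 2-D fast wavelet transform, then pick the nearest group representative.
# Wavelet family, level, and distance metric are assumptions for illustration.
import numpy as np
import pywt

def character_vector(image: np.ndarray, level: int = 2) -> np.ndarray:
    """Flatten the 2-D fast wavelet decomposition of a character image."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet="haar", level=level)
    arr, _slices = pywt.coeffs_to_array(coeffs)   # pack coefficients into one array
    return arr.ravel()

def recognize(image: np.ndarray, representatives: dict[str, np.ndarray]) -> str:
    """Return the label of the nearest group representative."""
    v = character_vector(image)
    return min(representatives, key=lambda lbl: np.linalg.norm(representatives[lbl] - v))

# Hypothetical usage: 32x32 character images, one representative per shape group.
reps = {"alef": character_vector(np.eye(32)),
        "ba": character_vector(np.tril(np.ones((32, 32))))}
print(recognize(np.eye(32), reps))  # -> "alef"
```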

    You smell different! Temperature interferes with intracolonial recognition in Odontomachus brunneus

    Intracolonial recognition among social insects is performed mainly by means of cuticular hydrocarbons (CHCs), which provide chemical communication, although their primary function is the avoidance of desiccation. Therefore, the ability to adjust to climatic variation may be related to the composition of CHCs. The hypothesis adopted in this work was that workers of the ant Odontomachus brunneus, when exposed to higher or lower average temperatures, change their CHC composition as a readjustment to the new conditions, and that this, in turn, leads to a change in intraspecific recognition capacity. To test this hypothesis, colonies of O. brunneus reared in the laboratory were subdivided into four groups: two groups were maintained at 25 °C with no further conditions imposed, in order to assess the effect of isolation itself, while one group was kept at a high temperature and another at a low temperature. Subsequently, encounters were induced between individuals from the 25 °C groups and individuals from the high and low temperature groups, followed by the extraction of CHCs from each individual. The results indicated significant differences in recognition time and CHC composition between the high/low temperature groups and those kept at 25 °C. Antennation time during nestmate encounters was significantly longer for the groups submitted to the temperature treatments (high and low) than for those kept at 25 °C, suggesting recognition difficulty. In order to adjust to changing temperature conditions, O. brunneus undergoes changes in CHC composition and in intraspecific recognition capacity.

    Gesture Recognition in Robotic Surgery: a Review

    OBJECTIVE: Surgical activity recognition is a fundamental step in computer-assisted interventions. This paper reviews the state of the art in methods for automatic recognition of fine-grained gestures in robotic surgery, focusing on recent data-driven approaches, and outlines the open questions and future research directions. METHODS: An article search was performed on 5 bibliographic databases with combinations of the following search terms: robotic, robot-assisted, JIGSAWS, surgery, surgical, gesture, fine-grained, surgeme, action, trajectory, segmentation, recognition, parsing. Selected articles were classified based on the level of supervision required for training and divided into different groups representing major frameworks for time series analysis and data modelling. RESULTS: A total of 52 articles were reviewed. The research field is showing rapid expansion, with the majority of articles published in the last 4 years. Deep-learning-based temporal models with discriminative feature extraction and multi-modal data integration have demonstrated promising results on small surgical datasets. Currently, unsupervised methods perform significantly less well than supervised approaches. CONCLUSION: The development of large and diverse open-source datasets of annotated demonstrations is essential for the development and validation of robust solutions for surgical gesture recognition. While new strategies for discriminative feature extraction and knowledge transfer, or unsupervised and semi-supervised approaches, can mitigate the need for data and labels, they have not yet been demonstrated to achieve comparable performance. Important future research directions include the detection and forecasting of gesture-specific errors and anomalies. SIGNIFICANCE: This paper is a comprehensive and structured analysis of surgical gesture recognition methods, aiming to summarize the status of this rapidly evolving field.
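
    The deep-learning-based temporal models highlighted in the review typically label each time step of a kinematic or video feature sequence with a gesture class. A minimal sketch of such a model, assuming PyTorch and arbitrarily chosen sizes (76 features matches the JIGSAWS kinematic signals, but nothing here is any reviewed method's implementation):

```python
# Minimal sketch of a temporal gesture-recognition model: a GRU labels every
# time step of a feature sequence with one of several surgical gestures.
# Feature size, hidden size, and gesture count are illustrative assumptions.
import torch
import torch.nn as nn

class GestureGRU(nn.Module):
    def __init__(self, n_features: int = 76, n_gestures: int = 10, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_gestures)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features) -> per-time-step gesture logits
        out, _ = self.gru(x)
        return self.head(out)

model = GestureGRU()
kinematics = torch.randn(2, 300, 76)   # 2 trials, 300 time steps of features
print(model(kinematics).shape)         # torch.Size([2, 300, 10])
```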

    Social experience does not abolish cultural diversity in eye movements.

    Adults from Eastern (e.g., China) and Western (e.g., USA) cultural groups display pronounced differences in a range of visual processing tasks. For example, the eye movement strategies used for information extraction during a variety of face processing tasks (e.g., identification and categorization of facial expressions of emotion) differ across cultural groups. Many of the differences reported in previous studies have been taken to show that culture itself shapes the way we process visual information, yet this has never been directly investigated. In the current study, we assessed the relative contributions of genetic and cultural factors by testing face processing in a population of British Born Chinese adults using face recognition and expression classification tasks. Contrary to predictions made by the cultural differences framework, the majority of British Born Chinese adults deployed "Eastern" eye movement strategies, while approximately 25% of participants displayed "Western" strategies. Furthermore, the cultural eye movement strategies used by individuals were consistent across the recognition and expression tasks. These findings suggest that "culture" alone cannot straightforwardly account for diversity in eye movement patterns. Instead, a more complex understanding is required of how the environment and individual experiences can influence the mechanisms that govern visual processing.

    Connected Component Algorithm for Gesture Recognition

    This paper presents a head and hand gesture recognition system for Human Computer Interaction (HCI). Head and hand gestures are an important modality for human computer interaction, and a vision-based recognition system can give computers the capability of understanding and responding to them. The aim of this paper is to propose a real-time vision system for application within a multimedia interaction environment. The recognition system consists of four modules: capturing the image, image extraction, pattern matching, and command determination. When hand and head gestures are shown in front of the camera, the hardware performs the corresponding action: gestures are matched against a stored database using pattern matching, and depending on the matched gesture the hardware is moved in the left, right, forward, or backward direction. An algorithm for optimizing connected components in gesture recognition is proposed, which makes use of segmentation in two images. The connected component algorithm scans an image and groups its pixels into components based on pixel connectivity, i.e. all pixels in a connected component share similar intensity values and are in some way connected with each other. Once all groups have been determined, each pixel is labeled with a color according to the component it was assigned to.
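
    For illustration (this is not the paper's implementation), a basic flood-fill version of connected-component labelling on a binary image can be sketched as:

```python
# Illustrative connected-component labelling on a binary image using an
# iterative flood fill with 4-connectivity.
from collections import deque

def label_components(image: list[list[int]]) -> list[list[int]]:
    """Assign a distinct positive label to each 4-connected group of 1-pixels."""
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] == 1 and labels[r][c] == 0:
                current += 1                      # new component found
                queue = deque([(r, c)])
                labels[r][c] = current
                while queue:                      # flood-fill its pixels
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels

print(label_components([[1, 1, 0],
                        [0, 0, 0],
                        [0, 1, 1]]))  # -> [[1, 1, 0], [0, 0, 0], [0, 2, 2]]
```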

    A Contextualized Real-Time Multimodal Emotion Recognition for Conversational Agents using Graph Convolutional Networks in Reinforcement Learning

    Owing to recent developments in Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs), conversational agents are becoming increasingly popular and accepted. They provide a human touch by interacting in ways familiar to us and by providing support as virtual companions. It is therefore important to understand the user's emotions in order to respond considerately. Compared to the standard problem of emotion recognition, conversational agents face an additional constraint in that recognition must be real-time. Studies of model architectures using audio, visual, and textual modalities have mainly focused on emotion classification using full video sequences that do not provide online features. In this work, we present a novel paradigm for contextualized Emotion Recognition using a Graph Convolutional Network with Reinforcement Learning (conER-GRL). Conversations are partitioned into smaller groups of utterances for effective extraction of contextual information. The system uses Gated Recurrent Units (GRUs) to extract multimodal features from these groups of utterances. More importantly, Graph Convolutional Networks (GCNs) and Reinforcement Learning (RL) agents are cascade-trained to capture the complex dependencies of emotion features in interactive scenarios. Comparing the results of the conER-GRL model with other state-of-the-art models on the benchmark dataset IEMOCAP demonstrates the advantageous capabilities of the conER-GRL architecture in recognizing emotions in real-time from multimodal conversational signals.
    Comment: 5 pages (4 main + 1 reference), 2 figures. Submitted to IEEE FG202
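
    As a rough sketch of the first stage described above, partitioning a conversation into groups of utterances and encoding each group with a GRU (the embedding size, hidden size, and group size are illustrative assumptions, not the paper's settings):

```python
# Sketch of the utterance-grouping + GRU feature-extraction stage: split a
# conversation into fixed-size groups of utterance embeddings and encode each
# group into one contextual feature vector. All dimensions are illustrative.
import torch
import torch.nn as nn

def group_utterances(utterances: torch.Tensor, group_size: int = 4) -> list[torch.Tensor]:
    """Partition (n_utterances, dim) into consecutive groups of up to group_size."""
    return list(torch.split(utterances, group_size))

encoder = nn.GRU(input_size=128, hidden_size=64, batch_first=True)

conversation = torch.randn(10, 128)      # 10 utterance embeddings (hypothetical)
for group in group_utterances(conversation):
    _, h = encoder(group.unsqueeze(0))   # encode one group of utterances
    context = h.squeeze(0)               # (1, 64) contextual feature vector
    print(context.shape)
```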