
    Use of morphological and kinematic descriptors in the automatic identification of laboratory animal behaviors

    Doctoral thesis - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia Elétrica, Florianópolis, 2011. Animal behavior is a biological signal little explored by the signal processing and computational intelligence disciplines within Biomedical Engineering. Neuroscience uses recordings and quantifications of animal behavior to examine the neural mechanisms of behavioral control. These recordings are usually made by a human observer and are therefore subject to interpretation biases (e.g., fatigue, experience, and ambiguities between categories). This work examined the use of image descriptors (morphological, such as area and length; and kinematic, such as distance travelled and angular variation) as inputs to artificial neural networks (ANNs) for the automatic identification of laboratory animal behaviors. The descriptors were extracted from behaviors of Wistar rats in open-field arenas (locomotion: LOC, immobility: IMO, grooming: LIC, rearing: EXP), treated with caffeine (2 or 6 mg/kg) or with its vehicle (saline), using ethography and tracking software developed during this thesis (ETHOWATCHER®). Multilayer perceptron (MLP) networks were employed and evaluated with multiple diagnostic performance indices (AUC, Kappa). The descriptors were first screened for their relevance in discriminating between behaviors using the Kruskal-Wallis statistical test. In vehicle-treated animals, the MLPs identified 97.3 ± 2.0% of IMO cases (AUC, mean ± standard deviation), 95.6 ± 8.0% of LOC, 94.6 ± 3.0% of EXP, and 83.6 ± 16.0% of LIC. In caffeine-treated animals the results were 85.2 ± 1.8% for IMO, 83.5 ± 0.9% for LOC, 67.0 ± 2.0% for EXP, and 78.0 ± 10.0% for LIC. The results indicate that MLPs using the kinematic and morphological descriptors identify the investigated behaviors with variable success. The statistically significant differences between the performance of classifiers using relevant parameters and those using irrelevant ones validated the use of the Kruskal-Wallis test for selecting descriptors suited to behavioral identification. The reduced MLP performance on behaviors of animals treated with a sub-effective caffeine dose (0.2 mg/kg) suggests that the procedures used here can detect variations in the morphological and kinematic patterns of behaviors (Mann-Whitney, p < 0.05) that are not detectable by the usual procedures of behavioral analysis. Although lower, the MLP performance was still higher than that measured for observers new to behavioral scoring of a treatment-naive rat (Kappa: 35.48%), showing the feasibility of using these ANNs to assess changes in behavioral patterns
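
    A minimal sketch of the descriptor-screening and classification pipeline described above, using scikit-learn and SciPy. The data, descriptor count, and network size are placeholders rather than the thesis configuration, and the behavior codes in the comments are only illustrative labels.

```python
# Sketch (not the thesis code): Kruskal-Wallis descriptor screening followed by
# an MLP classifier evaluated with one-vs-rest AUC, as outlined in the abstract.
import numpy as np
from scipy.stats import kruskal
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical data: rows are video frames, columns are morphological/kinematic
# descriptors (area, length, distance travelled, angular variation, ...).
X = rng.normal(size=(1000, 6))
y = rng.integers(0, 4, size=1000)          # 0=LOC, 1=IMO, 2=LIC, 3=EXP (illustrative)

# 1) Keep only descriptors that differ significantly across behaviors.
relevant = []
for j in range(X.shape[1]):
    groups = [X[y == c, j] for c in np.unique(y)]
    _, p = kruskal(*groups)
    if p < 0.05:
        relevant.append(j)
X_sel = X[:, relevant] if relevant else X

# 2) Train a multilayer perceptron and report one-vs-rest AUC per behavior.
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
mlp.fit(scaler.transform(X_tr), y_tr)
proba = mlp.predict_proba(scaler.transform(X_te))
for c in np.unique(y_te):
    print(f"class {c}: AUC = {roc_auc_score((y_te == c).astype(int), proba[:, c]):.3f}")
```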

    Toward an Unsupervised Colorization Framework for Historical Land Use Classification

    We present an unsupervised colorization framework to improve both the visualization and the automatic land use classification of historical aerial images. We introduce a novel algorithm built upon a cyclic generative adversarial neural network and a texture replacement method to homogeneously and automatically colorize unpaired VHR images. We apply our framework to historical aerial images acquired in France between 1970 and 1990. We demonstrate that our approach helps to disentangle hard-to-classify land use classes and hence improves the overall land use classification
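
    A toy PyTorch sketch of the cycle-consistency idea that underlies unpaired grayscale-to-color translation. The tiny generators, patch sizes, and loss are invented for illustration; the adversarial and texture-replacement terms described in the abstract are omitted.

```python
# Cycle-consistency sketch for unpaired colorization (grayscale <-> color).
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy convolutional generator mapping one image domain to the other."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_ch, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

to_color = TinyGenerator(1, 3)   # grayscale -> color
to_gray = TinyGenerator(3, 1)    # color -> grayscale
l1 = nn.L1Loss()

gray = torch.rand(2, 1, 64, 64)   # unpaired historical (grayscale) patches
color = torch.rand(2, 3, 64, 64)  # unpaired modern (color) patches

# Translating to the other domain and back should recover the input; this is
# what allows training without paired grayscale/color images.
loss_cycle = l1(to_gray(to_color(gray)), gray) + l1(to_color(to_gray(color)), color)
loss_cycle.backward()
```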

    Combining Multiple Sensors for Event Detection of Older People

    We herein present a hierarchical model-based framework for event detection using multiple sensors. Event models combine a priori knowledge of the scene (3D geometric and semantic information, such as contextual zones and equipment) with moving objects (e.g., a person) detected by a video monitoring system. The event models follow a generic ontology based on natural language, which allows domain experts to adapt them easily. The framework's novelty lies in combining multiple sensors at the decision (event) level and handling their conflicts using a probabilistic approach. Event conflict handling consists of computing the reliability of each sensor before fusion, using an alternative combination rule for Dempster-Shafer theory. The framework is evaluated on multi-sensor recordings of instrumental activities of daily living (e.g., watching TV, writing a check, preparing tea, organizing the weekly intake of prescribed medication) by participants in a clinical trial for an Alzheimer's disease study. Two fusion cases are presented: the combination of events (or activities) from heterogeneous sensors (an ambient RGB camera and a wearable inertial sensor) in a deterministic fashion, and the combination of conflicting events from video cameras with partially overlapping fields of view (an RGB camera and an RGB-D camera, Kinect). Results show that the framework improves the event detection rate in both cases
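
    A small illustration of sensor reliability discounting followed by Dempster's classical rule of combination over singleton events. The abstract mentions an alternative combination rule, so this sketch only conveys the general mechanism; the event names, masses, and reliabilities are hypothetical.

```python
# Reliability discounting + Dempster's rule for two sensors reporting on the
# same frame of singleton events plus "theta" (total ignorance).
def discount(m, reliability):
    """Shafer discounting: scale masses by sensor reliability and move the
    remaining mass to total ignorance ("theta")."""
    out = {k: reliability * v for k, v in m.items() if k != "theta"}
    out["theta"] = reliability * m.get("theta", 0.0) + (1.0 - reliability)
    return out

def combine(m1, m2):
    """Dempster's rule: intersect focal elements, renormalize by 1 - conflict."""
    fused, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            if a == "theta":
                key = b
            elif b == "theta" or a == b:
                key = a
            else:                       # disjoint singletons -> conflicting mass
                conflict += va * vb
                continue
            fused[key] = fused.get(key, 0.0) + va * vb
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

# Hypothetical masses from a video camera and a wearable inertial sensor.
video = discount({"watching_tv": 0.7, "writing": 0.2, "theta": 0.1}, reliability=0.9)
inertial = discount({"watching_tv": 0.5, "writing": 0.4, "theta": 0.1}, reliability=0.6)
print(combine(video, inertial))
```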

    A Multi-Sensor Approach for Activity Recognition in Older Patients

    Existing surveillance systems for older people's activity analysis focus on video and sensor analysis (e.g., accelerometers, pressure, infrared) applied to frailty assessment, fall detection, and the automatic identification of self-maintenance activities (e.g., dressing, self-feeding) at home. This paper proposes a multi-sensor surveillance system (accelerometers and video camera) for the automatic detection of instrumental activities of daily living (IADL, e.g., preparing coffee, making a phone call) in a lab-based clinical protocol. IADLs refer to activities more complex than self-maintenance, whose decline in performance has been highlighted as an indicator of early symptoms of dementia. Ambient video analysis is used to describe the older person's activity in the scene, and a wearable accelerometer device is used to complement visual information for body posture identification (e.g., standing, sitting). A generic constraint-based ontology language is used to model IADL events from sensor readings and semantic information about the scene (e.g., presence in goal-oriented zones of the environment, temporal relationships between events, estimated postures). The proposed surveillance system is tested with 9 participants (healthy: 4, MCI: 5) in an observation room equipped with home appliances at the Memory Center of Nice Hospital. Experiments are recorded using a 2D video camera (8 fps) and an accelerometer device (MotionPod®). The multi-sensor approach achieves an average sensitivity of 93.51% and an average precision of 63.61%, while the vision-based approach has a sensitivity of 77.23% and a precision of 57.65%. The results show an improvement of the multi-sensor approach over the vision-based one for IADL detection. Future work will focus on using the system to evaluate the differences between the activity profiles of healthy participants and early- to mild-stage Alzheimer's patients
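
    A toy, hand-written stand-in for a constraint-based event model: an IADL interval is reported when zone, posture, and duration constraints all hold. The zone names, postures, and the duration threshold are assumptions for illustration, not the protocol's actual models.

```python
# Toy sketch: detect a "preparing coffee"-style IADL from fused zone + posture
# observations by checking constraints over a sliding interval.
from dataclasses import dataclass

@dataclass
class Observation:
    t: float          # seconds since the start of the recording
    zone: str         # semantic zone reported by the video tracker
    posture: str      # posture reported by the accelerometer (standing, sitting, ...)

def detect_prepare_coffee(track, min_duration=20.0):
    """Return (start, end) intervals where the person stands in the kitchen zone
    for at least `min_duration` seconds."""
    events, start = [], None
    for obs in track:
        ok = obs.zone == "kitchen" and obs.posture == "standing"
        if ok and start is None:
            start = obs.t
        elif not ok and start is not None:
            if obs.t - start >= min_duration:
                events.append((start, obs.t))
            start = None
    if start is not None and track and track[-1].t - start >= min_duration:
        events.append((start, track[-1].t))
    return events

# Synthetic track: in the kitchen and standing between t=5 and t=40.
track = [Observation(t, "kitchen" if 5 <= t <= 40 else "sofa",
                     "standing" if 5 <= t <= 40 else "sitting") for t in range(60)]
print(detect_prepare_coffee(track))   # [(5, 41)]
```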

    SWEET-HOME ICT technologies for the assessment of elderly subjects

    Functional assessments are designed to ascertain a person's ability to perform activities of daily living (ADL) and provide valuable diagnostic as well as care-planning information. Currently, the gold standard for the assessment of functional ability involves clinical rating scales; however, scales are often limited in their ability to provide objective and sensitive information. In contrast, information and communication technologies (ICT) may overcome these limitations by capturing more fully the functional, as well as behavioural and cognitive, disturbances associated with Alzheimer's disease (AD)

    Dry matter availability in the different phytophysiognomies of the Pantanal, Nhecolândia sub-region, MS.

    The experiment measured the altimetry of three phytophysiognomies (lowlands, "campo limpo" open grassland, and "campo cerrado" savanna), characterized and evaluated dry matter production, and carried out floristic surveys (dry and flood seasons) in these areas, in the Nhecolândia sub-region (MS), at Fazenda Nhumirim, in the Pantanal region

    BEHAVE - Behavioral analysis of visual events for assisted living scenarios

    This paper proposes BEHAVE, a person-centered pipeline for probabilistic event recognition. The pipeline first detects the set of people in a video frame, then searches for correspondences between people in the current and previous frames (i.e., people tracking). Finally, event recognition is carried out for each person using probabilistic logic models (PLMs, ProbLog2 language). PLMs represent interactions among people, home appliances, and semantic regions. They also make it possible to assess the probability of an event given noisy observations of the real world. BEHAVE was evaluated on the task of online (non-clipped videos) and open-set event recognition (i.e., target events plus a "none" class) on video recordings of seniors carrying out daily tasks. Results show that BEHAVE improves event recognition accuracy by handling missed and partially satisfied logic models. Future work will investigate how to extend PLMs to represent temporal relations among events
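
    A minimal sketch of the probabilistic-logic idea, assuming the problog Python package is available. The zones, objects, probabilities, and the "prepare_tea" event are invented for illustration and are not the paper's models.

```python
# Sketch: a tiny ProbLog-style model where noisy detector outputs are
# probabilistic facts and an event is a logical rule over them.
from problog.program import PrologString
from problog import get_evaluatable

model = PrologString("""
% Noisy observations from the detectors (probabilities are made up).
0.8::in_zone(person1, kitchen).
0.6::close_to(person1, kettle).

% Event model: the activity holds when its supporting observations hold.
prepare_tea(P) :- in_zone(P, kitchen), close_to(P, kettle).

query(prepare_tea(person1)).
""")

# Probability of the event given the noisy evidence (0.8 * 0.6 = 0.48 here).
print(get_evaluatable().create_from(model).evaluate())
```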

    Semi-supervised understanding of complex activities from temporal concepts

    Methods for action recognition have evolved considerably over the past years and can now automatically learn and recognize short-term actions with satisfactory accuracy. Nonetheless, the recognition of complex activities (compositions of actions and scene objects) is still an open problem due to the complex temporal and composite structure of this category of events. Existing methods either focus on simple activities or oversimplify the modeling of complex activities by targeting only whole-part relations between their sub-parts (e.g., actions). In this paper, we propose a semi-supervised approach that learns complex activities from the temporal patterns of concept compositions (e.g., "slicing-tomato" before "pouring into-pan"). We demonstrate that our method outperforms prior work on the task of automatic modeling and recognition of complex activities learned from the interactions of 218 distinct concepts
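
    A toy sketch of one way temporal concept compositions could be turned into features: counting how often one concept occurs before another within a video. The concept names and timestamps are illustrative, not taken from the paper's dataset.

```python
# Bag-of-temporal-patterns sketch: count "A before B" pairs in one video.
from collections import Counter
from itertools import combinations

def before_pattern_features(timeline):
    """timeline: list of (timestamp, concept) pairs observed in one video."""
    ordered = sorted(timeline)
    feats = Counter()
    for (ta, a), (tb, b) in combinations(ordered, 2):
        if ta < tb and a != b:
            feats[(a, "before", b)] += 1   # A occurs strictly before B
    return feats

video = [(3.0, "slicing-tomato"), (9.5, "pouring into-pan"), (15.0, "stirring")]
print(before_pattern_features(video))
# Counter({('slicing-tomato', 'before', 'pouring into-pan'): 1, ...})
```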