
    Automated identification and behaviour classification for modelling social dynamics in group-housed mice

    Mice are often used in biology as exploratory models of human conditions, due to their similar genetics and physiology. Unfortunately, research on behaviour has traditionally been limited to studying individuals in isolated environments and over short periods of time. This can miss critical time-effects, and, since mice are social creatures, bias results. This work addresses this gap in research by developing tools to analyse the individual behaviour of group-housed mice in the home-cage over several days and with minimal disruption. Using data provided by the Mary Lyon Centre at MRC Harwell we designed an end-to-end system that (a) tracks and identifies mice in a cage, (b) infers their behaviour, and subsequently (c) models the group dynamics as functions of individual activities. In support of the above, we also curated and made available a large dataset of mouse localisation and behaviour classifications (IMADGE), as well as two smaller annotated datasets for training/evaluating the identification (TIDe) and behaviour inference (ABODe) systems. This research constitutes the first of its kind in terms of the scale and challenges addressed. The data source (side-view single-channel video with clutter and no identification markers for mice) presents challenging conditions for analysis, but has the potential to give richer information while using industry standard housing. A Tracking and Identification module was developed to automatically detect, track and identify the (visually similar) mice in the cluttered home-cage using only single-channel IR video and coarse position from RFID readings. Existing detectors and trackers were combined with a novel Integer Linear Programming formulation to assign anonymous tracks to mouse identities. This utilised a probabilistic weight model of affinity between detections and RFID pickups. 
    The next task was the implementation of the Activity Labelling module, which classifies the behaviour of each mouse and handles occlusion to avoid giving unreliable classifications when the mice cannot be observed. Two key aspects of this were (a) careful feature selection, and (b) judicious balancing of the errors of the system in line with the repercussions for our setup. Given these sequences of individual behaviours, we analysed the interaction dynamics between mice in the same cage by collapsing the group behaviour into a sequence of interpretable latent regimes, using both static and temporal (Markov) models. Using a permutation matrix, we were able to automatically assign mice to roles in the HMM, fit a global model to a group of cages, and analyse abnormalities in data from a different demographic.
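The track-to-identity assignment described above can be illustrated with a simplified sketch: when each anonymous track must receive exactly one mouse identity, maximising total log-affinity reduces to a linear assignment problem (the thesis's Integer Linear Programming formulation handles richer constraints). The affinity matrix below is hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical affinity matrix: affinity[t, m] is the probabilistic weight
# of agreement between anonymous track t and the RFID pickups of mouse m.
affinity = np.array([
    [0.90, 0.05, 0.05],
    [0.10, 0.80, 0.10],
    [0.20, 0.15, 0.65],
])

# Maximising total log-affinity under a one-to-one constraint is a special
# case of the ILP, solvable directly with the Hungarian method.
cost = -np.log(affinity)
tracks, identities = linear_sum_assignment(cost)
assignment = dict(zip(tracks.tolist(), identities.tolist()))
print(assignment)  # {0: 0, 1: 1, 2: 2}
```

The general ILP additionally allows tracks with no RFID support to remain unassigned, which the one-to-one reduction above cannot express.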

    Fundus blue and near-infrared autofluorescence imaging in patients with autosomal recessive Stargardt disease, choroideremia, PROM1-macular dystrophy and ocular albinism

    The electronic version of this dissertation does not contain the publications. Inherited retinal diseases are the leading cause of visual impairment among the working-age population in developed countries. Because of their genetic and phenotypic heterogeneity, diagnosing and understanding the pathogenesis of inherited retinal diseases has been challenging. Retinal imaging studies, which are noninvasive, are an invaluable source of information. Fundus autofluorescence (FAF) imaging utilizes the natural fluorophores of the fundus to create an image of the retina: lipofuscin is the primary source of short-wavelength autofluorescence (SW-AF) and melanin of near-infrared autofluorescence (NIR-AF). The amount and distribution of these fluorophores change in different disease processes, and the changes are detectable in FAF images. In this study we analyzed SW-AF and NIR-AF images in cases of genetically confirmed recessive Stargardt disease (STGD1), choroideremia, PROM1-macular dystrophy and ocular albinism. The aim was to qualitatively describe FAF in conditions with varying levels of lipofuscin or melanin, as well as to quantify FAF signal intensities.
    We also aimed to find new clinical applications for autofluorescence imaging in evaluating inherited retinal disease. We confirmed that melanin is the major source of the NIR-AF signal by analyzing ocular albinism carriers and mouse models with varying fundus pigmentation, but we also found that the presence of melanin can modulate SW-AF signal strength. As a novel finding, we confirmed that lipofuscin contributes to NIR-AF signal intensity in cases with excessive bisretinoid lipofuscin levels, as seen in STGD1. The analysis of choroideremia and STGD1 patients showed that retinal pigment epithelium atrophy causes loss of signal in both SW-AF and NIR-AF, but NIR-AF may be more sensitive in detecting early cell degeneration. Quantifying autofluorescence signal intensity helps to further understand disease processes, as it is an indirect measure of retinal fluorophore levels. We showed that PROM1-macular dystrophy does not present with elevated SW-AF, indicating that excessive lipofuscin accumulation is likely not part of its disease mechanism. That knowledge is valuable in differentiating it from the phenotypically similar STGD1 and in developing therapeutic approaches. Lipofuscin and melanin are both valuable retinal biomarkers for evaluating retinal health using non-invasive autofluorescence imaging.
    https://www.ester.ee/record=b555738

    Human Activity Recognition and Fall Detection Using Unobtrusive Technologies

    As the population ages, health issues like injurious falls demand more attention. Wearable devices can be used to detect falls; however, despite their commercial success, most wearable devices are obtrusive, and patients generally do not like, or may forget, to wear them. In this thesis, a monitoring system consisting of two 24×32 thermal array sensors and a millimetre-wave (mmWave) radar sensor was developed to unobtrusively detect locations and recognise human activities such as sitting, standing, walking, lying, and falling. Data were collected by observing healthy young volunteers simulate ten different scenarios. The optimal installation position of the sensors was initially unknown, so the sensors were mounted on a side wall, in a corner, and on the ceiling of the experimental room to allow performance comparison between these placements. Every thermal frame was converted into an image, and features were either manually extracted or automatically extracted by convolutional neural networks (CNNs). Applying a CNN model to the infrared stereo dataset to recognise five activities (falling plus lying on the floor, lying in bed, sitting on a chair, sitting in bed, standing plus walking), the overall average accuracy and F1-score were 97.6% and 0.935, respectively. The scores for detecting falling plus lying on the floor versus the remaining activities were 97.9% and 0.945, respectively. When using radar technology, the generated point clouds were converted into an occupancy grid, and either a CNN model was used to automatically extract features or a set of features was manually extracted. Applying several classifiers to the manually extracted features to detect falling plus lying on the floor versus the remaining activities, the Random Forest (RF) classifier achieved the best results in the overhead position (an accuracy of 92.2%, a recall of 0.881, a precision of 0.805, and an F1-score of 0.841).
    Using automatically extracted features, the CNN model achieved the best results in the overhead position (an accuracy of 92.3%, a recall of 0.891, a precision of 0.801, and an F1-score of 0.844), slightly outperforming the RF method. Data fusion was performed at the feature level, combining the infrared and radar modalities; however, the benefit was not significant. The proposed system was cost-, processing-time-, and space-efficient. With further development, it could be deployed as a real-time fall-detection system in aged-care facilities or in the homes of older people.
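The conversion of mmWave radar point clouds into an occupancy grid, as described above, can be sketched as follows; the grid extent and cell size here are illustrative assumptions, not the thesis's parameters.

```python
import numpy as np

def occupancy_grid(points, x_range=(-3.0, 3.0), y_range=(0.0, 6.0), cell=0.25):
    """Quantise (x, y) radar points (in metres) into a 2D occupancy grid.

    The ranges and 0.25 m cell size are illustrative choices only.
    """
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((ny, nx), dtype=np.int32)
    # Map each point to its cell index, discarding out-of-range points.
    ix = np.floor((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = np.floor((points[:, 1] - y_range[0]) / cell).astype(int)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    # Accumulate point counts per cell (handles repeated indices correctly).
    np.add.at(grid, (iy[keep], ix[keep]), 1)
    return grid

pts = np.array([[0.1, 1.0], [0.12, 1.05], [2.0, 4.0]])
g = occupancy_grid(pts)
print(g.sum())  # 3
```

The resulting grid can be fed to a CNN like any single-channel image, which is what makes this representation convenient for the classification stage.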

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of applications and in mathematics, and is available in open access. The collected contributions of this volume have either been published or presented after the dissemination of the fourth volume in 2015 in international conferences, seminars, workshops and journals, or they are new. The contributions of each part of this volume are chronologically ordered. The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignments in the fusion of sources of evidence, with their Matlab codes.
    Because more applications of DSmT have emerged in the years since the appearance of the fourth volume in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision-making, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification. Finally, the third part presents contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, the negator of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
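The PCR5 rule mentioned above can be illustrated in the simplest dichotomous setting: a conjunctive consensus is computed first, then each partial conflict m1(X)·m2(Y) with X∩Y = ∅ is redistributed back to X and Y proportionally to the masses that produced it. A minimal sketch for a two-element frame {A, B} with union 'AuB' (dichotomous case only, not the book's general-frame implementations):

```python
def pcr5_dichotomous(m1, m2):
    """PCR5 fusion of two basic belief assignments on the frame {A, B}.

    m1, m2: dicts with keys 'A', 'B', 'AuB', each summing to 1.
    """
    # Conjunctive consensus over the non-conflicting intersections.
    m = {
        'A': m1['A'] * (m2['A'] + m2['AuB']) + m1['AuB'] * m2['A'],
        'B': m1['B'] * (m2['B'] + m2['AuB']) + m1['AuB'] * m2['B'],
        'AuB': m1['AuB'] * m2['AuB'],
    }
    # Redistribute each partial conflict m1(x)*m2(y), x∩y = ∅,
    # back to x and y proportionally to the originating masses.
    for x, y in (('A', 'B'), ('B', 'A')):
        c = m1[x] * m2[y]
        if c > 0:
            m[x] += m1[x] ** 2 * m2[y] / (m1[x] + m2[y])
            m[y] += m2[y] ** 2 * m1[x] / (m1[x] + m2[y])
    return m

fused = pcr5_dichotomous({'A': 0.6, 'B': 0.3, 'AuB': 0.1},
                         {'A': 0.5, 'B': 0.4, 'AuB': 0.1})
print(round(sum(fused.values()), 10))  # 1.0 (mass is conserved)
```

Unlike Dempster's rule, no global normalisation is needed: the proportional redistribution conserves total mass by construction.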

    Parallel processing applied to object detection with a Jetson TX2 embedded system.

    Video streams from panoramic cameras represent a powerful tool for automated surveillance systems, but naïve implementations typically require very intensive computational loads when applying deep learning models for automated detection and tracking of objects of interest, since these models require relatively high resolution to perform object detection reliably. In this paper, we report a host of improvements to our previous state-of-the-art software system for reliably detecting and tracking objects in video streams from panoramic cameras, resulting in an increase in the processing framerate on a Jetson TX2 board with respect to our previous results. Depending on the number of processes and the load profile, we observe up to a five-fold increase in the framerate.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
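The multi-process arrangement described above can be sketched with Python's standard multiprocessing pool, splitting a panoramic frame into tiles and distributing them across workers; the per-tile detector here is a trivial stand-in for the paper's actual deep-learning model.

```python
from multiprocessing import Pool

def detect(tile):
    """Hypothetical per-tile detector: a real worker would run a DNN here.

    For illustration we just sum the pixel values of the tile.
    """
    return sum(sum(row) for row in tile)

def process_frame(tiles, workers=4):
    # Distribute the frame's tiles across a pool of worker processes;
    # results come back in tile order.
    with Pool(workers) as pool:
        return pool.map(detect, tiles)

if __name__ == "__main__":
    frame_tiles = [[[1, 0], [0, 1]], [[1, 1], [1, 1]]]
    print(process_frame(frame_tiles))  # [2, 4]
```

On a board like the TX2, throughput gains from this pattern depend on how many tiles keep the GPU and CPU cores busy simultaneously, which matches the paper's observation that the speed-up varies with process count and load profile.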

    Spatial frequency domain imaging towards improved detection of gastrointestinal cancers

    Early detection and treatment of gastrointestinal cancers has been shown to drastically improve patients' survival rates. However, wide population-based screening for gastrointestinal cancers is not feasible due to its high cost, risk of potential complications, and time-consuming nature. This thesis proposes the development of a cost-effective, minimally invasive device that returns quantitative tissue information for gastrointestinal cancer detection in vivo using spatial frequency domain imaging (SFDI). SFDI is a non-invasive imaging technique that can return close-to-real-time maps of absorption and reduced scattering coefficients by projecting a 2D sinusoidal pattern onto a sample of interest. First, a low-cost, conventional bench-top system was constructed to characterise tissue-mimicking phantoms. Phantoms were fabricated with specific absorption and reduced scattering coefficients, mimicking the variation in optical properties typically seen in healthy, cancerous, and pre-cancerous oesophageal tissue. The system retrieves absorption and reduced scattering coefficients with 19% and 11% error, respectively. However, this bench-top system relies on a bulky projector and is therefore not feasible for in-vivo imaging. For SFDI systems to be feasible for in-vivo imaging, they must be miniaturised, which requires accounting for conditions such as varying illumination, lighting, and system geometry. Therefore, to aid the miniaturisation of the bench-top system, an SFDI system was simulated in the open-source ray-tracing software Blender, where these conditions can be reproduced. A material of tunable absorption and scattering properties was characterised so that its specific absorption and reduced scattering coefficients were known.
    The simulated system can detect the optical properties of typical gastrointestinal conditions in an up-close, planar geometry, as well as in the non-planar geometry of a tube simulating a lumen. Optical property imaging in the non-planar, tubular geometry used a novel illumination pattern developed for this work. Finally, using the knowledge gained from the simulation model, the bench-top system was miniaturised to a 3 mm diameter prototype, with the bulky projector replaced by a novel fiber array producing the necessary interfering fringe patterns. The system imaged phantoms simulating typical gastrointestinal conditions at two wavelengths (515 and 660 nm), measuring absorption and reduced scattering coefficients to within 15% and 6% of the bench-top system's values for the fabricated phantoms. It is proposed that this system may be used for cost-effective, minimally invasive, quantitative imaging of the gastrointestinal tract in vivo, providing enhanced contrast for difficult-to-detect cancers.
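The core SFDI computation, recovering the modulated (AC) and planar (DC) amplitudes from three phase-shifted sinusoidal projections, is commonly written as M_AC = (√2/3)·√[(I₁−I₂)² + (I₂−I₃)² + (I₃−I₁)²]. This is the textbook demodulation step, not necessarily the thesis's exact pipeline:

```python
import numpy as np

def demodulate(i1, i2, i3):
    """Three-phase SFDI demodulation (phase shifts 0, 2π/3, 4π/3).

    Returns the AC (modulated) and DC (planar) amplitude images.
    """
    ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2
    )
    dc = (i1 + i2 + i3) / 3.0
    return ac, dc

# Synthetic single-pixel check: I_k = DC + AC * cos(phase0 + k * 2π/3),
# with DC = 0.5, AC = 0.2, and an arbitrary spatial phase of 0.7 rad.
phases = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
frames = 0.5 + 0.2 * np.cos(0.7 + phases)
ac, dc = demodulate(*frames)
print(round(float(ac), 6), round(float(dc), 6))  # 0.2 0.5
```

The demodulated AC amplitude at each spatial frequency is what gets mapped, via a light-transport model, to the absorption and reduced scattering coefficients quoted in the abstract.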

    Deep learning in crowd counting: A survey

    Counting high-density objects quickly and accurately is a popular area of research, and crowd counting in particular has significant social and economic value and is a major focus in artificial intelligence. Despite many advancements in this field, many of them are not widely known, especially in terms of research data. The authors proposed a three-tier standardised dataset taxonomy (TSDT), which divides datasets into small-scale, large-scale and hyper-scale according to different application scenarios. This taxonomy can help researchers make more efficient use of datasets and improve the performance of AI algorithms in specific fields. Additionally, the authors proposed a new evaluation index for the clarity of a dataset: average pixel occupied by each object (APO). This index is better suited than image resolution for evaluating dataset clarity in object counting tasks. Moreover, the authors classified crowd counting methods from a data-driven perspective into multi-scale networks, single-column networks, multi-column networks, multi-task networks, attention networks and weakly supervised networks, and introduced the classic crowd counting methods of each class. The authors classified the existing 36 datasets according to the TSDT, discussed and evaluated these datasets, and evaluated the performance of more than 100 methods from the past five years on popular datasets at different levels. Recently, progress in research on small-scale datasets has slowed, with few new datasets and algorithms appearing, while studies focused on large- or hyper-scale datasets appear to be reaching a saturation point; the combined use of multiple approaches has begun to be a major research direction. The authors discussed the theoretical and practical challenges of crowd counting from the perspective of data, algorithms and computing resources.
    The field of crowd counting is moving towards combining multiple methods and requires fresh, targeted datasets. Despite advancements, the field still faces challenges such as handling real-world scenarios and processing large crowds in real time. Researchers are exploring transfer learning to overcome the limitations of small datasets. The development of effective algorithms for crowd counting remains a challenging and important task in computer vision and AI, with many opportunities for future research.
    Funding: BHF, AA/18/3/34220; Hope Foundation for Cancer Research, RM60G0680; GCRF, P202PF11; Sino-UK Industrial Fund, RP202G0289; LIAS, P202ED10, P202RE969; Data Science Enhancement Fund, P202RE237; Sino-UK Education Fund, OP202006; Fight for Sight, 24NN201; Royal Society International Exchanges Cost Share Award, RP202G0230; MRC, MC_PC_17171; BBSRC, RM32G0178B
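Under the straightforward reading of its name, the APO index described above is the total number of image pixels divided by the number of annotated objects; this interpretation is an assumption for illustration, not necessarily the authors' exact formula.

```python
def average_pixels_per_object(width, height, object_count):
    """Illustrative APO: image pixels divided by the annotated object count.

    Lower values mean each object occupies fewer pixels, i.e. the dataset
    is 'harder' regardless of its nominal resolution.
    """
    return (width * height) / object_count

# A hypothetical 1920x1080 frame annotated with 600 people:
apo = average_pixels_per_object(1920, 1080, 600)
print(apo)  # 3456.0
```

An index of this form explains the survey's point that resolution alone is misleading: a high-resolution image of a very dense crowd can still give each head only a handful of pixels.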

    Application of improved you only look once model in road traffic monitoring system

    The present research focuses on developing an intelligent traffic management solution for tracking vehicles on roads. Our proposed work centres on an improved You Only Look Once (YOLOv4) traffic monitoring system that uses the CSPDarknet53 architecture as its foundation. The Deep SORT learning methodology for multi-target vehicle detection from traffic video is also part of our study. We have included components such as the Kalman filter, which estimates the state of moving targets so they can be tracked, and the Hungarian algorithm, which matches detections to existing tracks across frames. We use an enhanced object detection network design and new data augmentation techniques with YOLOv4, which ultimately aids traffic monitoring. Until recently, object detection models could either run quickly or be accurate, but rarely both; YOLOv4 was a significant improvement, achieving strong detection performance at a very high number of frames per second (FPS). The current study develops an intelligent video-surveillance-based vehicle tracking system that tracks vehicles using a neural network, image-based tracking, and YOLOv4. Real video sequences of road traffic are used to test the effectiveness of the suggested method. Through simulations, it is demonstrated that the suggested technique significantly increases graphics processing unit (GPU) speed and FPS compared to baseline algorithms.
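The Kalman filter component of such a tracker can be sketched for a single coordinate with a constant-velocity model; Deep SORT's actual filter tracks an 8-dimensional bounding-box state, so this is a simplified illustration with assumed noise parameters.

```python
import numpy as np

# Constant-velocity Kalman filter for one coordinate of a bounding-box
# centre. State = [position, velocity]; we observe position only.
F = np.array([[1.0, 1.0], [0.0, 1.0]])  # state transition over one frame
H = np.array([[1.0, 0.0]])              # measurement model
Q = np.eye(2) * 1e-2                    # process noise (assumed)
R = np.array([[1e-1]])                  # measurement noise (assumed)

x = np.array([[0.0], [1.0]])            # initial state: pos 0, vel 1 px/frame
P = np.eye(2)                           # initial state covariance

def step(x, P, z):
    # Predict the state forward one frame.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured position z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Noisy detections of a target moving ~1 px/frame.
for z in [1.1, 2.0, 2.9, 4.1]:
    x, P = step(x, P, z)
print(round(float(x[0, 0]), 2))  # position estimate close to 4 after 4 frames
```

In a Deep SORT-style pipeline, the predicted positions from filters like this one form one side of the cost matrix that the Hungarian algorithm matches against the frame's new YOLO detections.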