5,907 research outputs found

    Audio-visual multi-modality driven hybrid feature learning model for crowd analysis and classification

    The rapid emergence of advanced software systems, low-cost hardware and decentralized cloud computing technologies has broadened the horizon for vision-based surveillance, monitoring and control. However, complex and inferior feature learning over visual artefacts or video streams, especially under extreme conditions, constrains the majority of existing vision-based crowd analysis and classification systems. Retrieving event-sensitive or crowd-type-sensitive spatio-temporal features for the different crowd types under extreme conditions is a highly complex task. Consequently, it results in lower accuracy and hence low reliability, which limits existing methods for real-time crowd analysis. Despite numerous efforts in vision-based approaches, the lack of acoustic cues often creates ambiguity in crowd classification. On the other hand, the strategic amalgamation of audio-visual features can enable accurate and reliable crowd analysis and classification. Motivated by this, in this research a novel audio-visual multi-modality driven hybrid feature learning model is developed for crowd analysis and classification. In this work, a hybrid feature extraction model was applied to extract deep spatio-temporal features by using the Gray-Level Co-occurrence Matrix (GLCM) and an AlexNet transfer learning model. After extracting the different GLCM features and AlexNet deep features, horizontal concatenation was performed to fuse the different feature sets. Similarly, for acoustic feature extraction, the audio samples (from the input video) were processed for static (fixed-size) sampling, pre-emphasis, block framing and Hann windowing, followed by extraction of acoustic features such as GTCC, GTCC-Delta, GTCC-Delta-Delta, MFCC, Spectral Entropy, Spectral Flux, Spectral Slope and Harmonics to Noise Ratio (HNR). Finally, the extracted audio-visual features were fused to yield a composite multi-modal feature set, which was classified using a random forest ensemble classifier. The multi-class classification yields a crowd-classification accuracy of 98.26%, precision of 98.89%, sensitivity of 94.82%, specificity of 95.57%, and an F-Measure of 98.84%. The robustness of the proposed multi-modality-based crowd analysis model confirms its suitability for real-world crowd detection and classification tasks.
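    As a rough, hedged sketch of the fusion-and-classification stage described above (not the authors' actual code): assuming the GLCM, AlexNet and acoustic feature matrices have already been extracted, the horizontal concatenation and random forest steps could look like the following. All feature dimensions, labels and data here are illustrative placeholders, with scikit-learn's RandomForestClassifier standing in for the ensemble classifier.

```python
# Hedged sketch: column-wise fusion of pre-extracted visual and acoustic features,
# then random forest classification. Shapes, labels and data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

n_clips = 500                                    # hypothetical number of labelled clips
glcm_feats = np.random.rand(n_clips, 22)         # GLCM texture statistics per clip
alexnet_feats = np.random.rand(n_clips, 4096)    # AlexNet deep features per clip
audio_feats = np.random.rand(n_clips, 60)        # GTCC/MFCC/spectral descriptors per clip
labels = np.random.randint(0, 4, size=n_clips)   # e.g. four crowd types

# Horizontal concatenation fuses the per-clip feature sets into one composite vector.
fused = np.hstack([glcm_feats, alexnet_feats, audio_feats])

X_train, X_test, y_train, y_test = train_test_split(fused, labels, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```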

    The State of the Art in Deep Learning Applications, Challenges, and Future Prospects: A Comprehensive Review of Flood Forecasting and Management

    Floods are a devastating natural calamity that may seriously harm both infrastructure and people. Accurate flood forecasts and control are essential to lessen these effects and safeguard populations. By utilizing its capacity to handle massive amounts of data and provide accurate forecasts, deep learning has emerged as a potent tool for improving flood prediction and control. The current state of deep learning applications in flood forecasting and management is thoroughly reviewed in this work. The review discusses a variety of subjects, such as the data sources utilized, the deep learning models used, and the assessment measures adopted to judge their efficacy. It assesses current approaches critically and points out their advantages and disadvantages. The article also examines challenges with data accessibility, the interpretability of deep learning models, and ethical considerations in flood prediction. It also describes potential directions for deep-learning research to enhance flood forecasts and control. Incorporating uncertainty estimates into forecasts, integrating multiple data sources, developing hybrid models that mix deep learning with other methodologies, and enhancing the interpretability of deep learning models are a few of these. These research goals can help deep learning models become more precise and effective, resulting in better flood control plans and forecasts. Overall, this review is a useful resource for academics and professionals working on the topic of flood forecasting and management. By reviewing the current state of the art, emphasizing difficulties, and outlining potential areas for future study, it lays a solid basis. Communities may better prepare for and lessen the destructive effects of floods by implementing cutting-edge deep learning algorithms, thereby protecting people and infrastructure.
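    One of the future directions listed above, attaching uncertainty estimates to deep-learning flood forecasts, can be illustrated with a minimal, hedged sketch (not taken from the review): a small LSTM forecaster with Monte Carlo dropout in PyTorch, where the input variables, shapes and dropout rate are all assumptions.

```python
# Hedged sketch: LSTM streamflow forecaster with Monte Carlo dropout, illustrating
# one way to attach an uncertainty estimate to each flood forecast. All shapes and
# inputs (rainfall, water level, temperature) are illustrative assumptions.
import torch
import torch.nn as nn

class FloodLSTM(nn.Module):
    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.drop = nn.Dropout(0.2)          # kept active at inference for MC dropout
        self.head = nn.Linear(hidden, 1)     # next-step discharge

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(self.drop(out[:, -1, :]))

model = FloodLSTM()
x = torch.randn(8, 48, 3)   # 8 catchments, 48 hourly steps, 3 input variables

# Monte Carlo dropout: leave dropout on and average several stochastic forward passes.
model.train()
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(50)])
forecast, uncertainty = samples.mean(dim=0), samples.std(dim=0)
print(forecast.shape, uncertainty.shape)
```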

    Study of neural circuits using multielectrode arrays in movement disorders

    Bachelor's thesis in Biomedical Engineering (Treballs Finals de Grau d'Enginyeria Biomèdica). Facultat de Medicina i Ciències de la Salut, Universitat de Barcelona. Academic year 2022-2023. Tutor/Director: Rodríguez Allué, Manuel José. Neurodegenerative movement-related disorders are characterized by a progressive degeneration and loss of neurons, which lead to motor control impairment. Although the precise mechanisms underlying these conditions are still unknown, an increasing number of studies point towards the analysis of neural networks and functional connectivity to unravel novel insights. The main objective of this work is to understand cellular mechanisms related to dysregulated motor control symptoms in movement disorders, such as Chorea-Acanthocytosis (ChAc), by employing multielectrode arrays (MEAs) to analyze the electrical activity of neuronal networks in mouse models. We found no notable differences in cell viability between neurons with and without knockdown of VPS13A, the only gene known to be implicated in the disease, suggesting that the absence of VPS13A in neurons may be partially compensated by other proteins. The MEA setup used to capture the electrical activity from primary neuronal cultures is described in detail, pointing out its specific characteristics. Finally, we present the alternative backup approach implemented to overcome the challenges faced during the research process and to explore advanced algorithms for signal processing and analysis. In this report, we present a thorough account of the conception and implementation of our research, outlining the multiple limitations encountered over the course of the project. We provide a detailed analysis of the project's economic and technical feasibility, as well as a comprehensive overview of the ethical and legal aspects considered during its execution.
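    The abstract does not detail the signal-processing algorithms explored; purely as an illustration of a common first step in MEA analysis, the hedged sketch below performs threshold-based spike detection on a single synthetic electrode trace. The sampling rate, threshold rule and data are assumptions, not the thesis's actual pipeline.

```python
# Hedged illustration: simple threshold-based spike detection on one MEA channel.
# Sampling rate, threshold rule (5 x noise estimate) and the synthetic trace are
# assumptions for demonstration only.
import numpy as np

fs = 10_000                                   # samples per second (assumed)
t = np.arange(0, 2.0, 1 / fs)
signal = 5e-6 * np.random.randn(t.size)       # ~5 uV background noise
signal[::4000] -= 60e-6                       # inject artificial negative spikes

# Robust noise estimate (median absolute deviation) and detection threshold.
noise_sigma = np.median(np.abs(signal)) / 0.6745
threshold = -5 * noise_sigma

# Indices where the trace first crosses below the threshold.
below = signal < threshold
crossings = np.flatnonzero(below & ~np.roll(below, 1))
spike_times = crossings / fs
print(f"detected {spike_times.size} spikes, firing rate ~{spike_times.size / t[-1]:.1f} Hz")
```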

    Evaluation of different segmentation-based approaches for skin disorders from dermoscopic images

    Bachelor's thesis in Biomedical Engineering (Treballs Finals de Grau d'Enginyeria Biomèdica). Facultat de Medicina i Ciències de la Salut, Universitat de Barcelona. Academic year 2022-2023. Tutors/Directors: Sala Llonch, Roser; Mata Miquel, Christian; Munuera, Josep. Skin cancer is the most common type of cancer in the world, and its incidence has been increasing over the past decades. Even with the most complex and advanced technologies, current image acquisition systems do not permit reliable identification of the skin lesion by visual examination, owing to the challenging structure of the malignancy. This motivates the implementation of automatic skin lesion segmentation methods to assist physicians' diagnoses when determining the lesion's region and to serve as a preliminary step for the classification of the skin lesion. Accurate and precise segmentation is crucial for rigorous screening and monitoring of the disease's progression. To address this concern, the present project carries out a state-of-the-art review of the most predominant conventional segmentation models for skin lesion segmentation, alongside a market analysis. With the rise of automatic segmentation tools, a wide range of algorithms is currently in use, but many drawbacks arise when employing them for dermatological disorders due to the high-level presence of artefacts in the acquired images. In light of the above, three segmentation techniques have been selected for this work: the level set method, an algorithm combining the GrabCut and k-means methods, and an automatic intensity-based algorithm developed by the Hospital Sant Joan de Déu de Barcelona research group. In addition, their performance is validated with a view to further implementation in clinical training. The proposals, together with the obtained outcomes, have been developed using a publicly available skin lesion image database.
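    As an illustration of the second of the three selected techniques (the GrabCut plus k-means combination), a minimal, hedged sketch with OpenCV and scikit-learn follows; the image file, initial rectangle, cluster count and the "darker cluster is the lesion" heuristic are assumptions rather than the project's actual implementation.

```python
# Hedged sketch: GrabCut foreground extraction followed by k-means clustering of
# the foreground pixels, one possible way to combine the two methods.
# File name, initial rectangle and k=2 are illustrative assumptions.
import cv2
import numpy as np
from sklearn.cluster import KMeans

img = cv2.imread("dermoscopy_sample.jpg")          # hypothetical dermoscopic image
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)
h, w = img.shape[:2]
rect = (int(0.1 * w), int(0.1 * h), int(0.8 * w), int(0.8 * h))  # rough lesion box

# GrabCut: iterative graph-cut segmentation initialized from the rectangle.
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)
fg = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))

# k-means on the colours of the probable-foreground pixels splits lesion vs. skin.
pixels = img[fg].reshape(-1, 3).astype(np.float32)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)

# Keep the darker cluster as the lesion (a common, but assumed, heuristic).
lesion_cluster = np.argmin([pixels[labels == k].mean() for k in (0, 1)])
lesion_mask = np.zeros((h, w), np.uint8)
lesion_mask[fg] = (labels == lesion_cluster).astype(np.uint8) * 255
cv2.imwrite("lesion_mask.png", lesion_mask)
```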

    Novel 129Xe Magnetic Resonance Imaging and Spectroscopy Measurements of Pulmonary Gas-Exchange

    Gas exchange is the primary function of the lungs and involves removing carbon dioxide from the body and exchanging it within the alveoli for inhaled oxygen. Several different pulmonary, cardiac and cardiovascular abnormalities have negative effects on pulmonary gas exchange. Unfortunately, clinical tests do not always pinpoint the problem; sensitive and specific measurements are needed to probe the individual components participating in gas exchange for a better understanding of pathophysiology, disease progression and response to therapy. In vivo Xenon-129 gas-exchange magnetic resonance imaging (129Xe gas-exchange MRI) has the potential to overcome these challenges. When participants inhale hyperpolarized 129Xe gas, it has different MR spectral properties as a gas, as it diffuses through the alveolar membrane and as it binds to red blood cells. 129Xe MR spectroscopy and imaging provide a way to tease out the different anatomic components of gas exchange simultaneously and provide spatial information about where abnormalities may occur. In this thesis, I developed and applied 129Xe MR spectroscopy and imaging to measure gas exchange in the lungs alongside other clinical and imaging measurements. I measured 129Xe gas exchange in asymptomatic congenital heart disease and in prospective, controlled studies of long COVID. I also developed mathematical tools to model 129Xe MR signals during acquisition and reconstruction. The insights gained from my work underscore the potential of 129Xe gas-exchange MRI biomarkers towards a better understanding of cardiopulmonary disease. My work also provides a way to generate a deeper imaging and physiologic understanding of gas exchange in vivo in healthy participants and patients with chronic lung and heart disease.
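    As a simplified, hedged illustration of the kind of spectral modelling described above (not the thesis's actual tools): the gas-phase and dissolved-phase 129Xe resonances are commonly modelled as separate peaks and their fitted areas compared. The sketch below fits three Lorentzian peaks to a synthetic magnitude spectrum, with the approximate chemical shifts (about 0, 197 and 218 ppm for gas, membrane and RBC) taken as assumed literature values.

```python
# Hedged sketch: fit three Lorentzian peaks (gas, membrane, RBC) to a magnitude
# 129Xe spectrum and report the RBC:membrane ratio. The spectrum is synthetic and
# the chemical shifts are approximate literature values, used here as assumptions.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, centre, width):
    return amp * width**2 / ((x - centre) ** 2 + width**2)

def three_peaks(x, a1, c1, w1, a2, c2, w2, a3, c3, w3):
    return (lorentzian(x, a1, c1, w1) + lorentzian(x, a2, c2, w2)
            + lorentzian(x, a3, c3, w3))

ppm = np.linspace(-20, 250, 2000)
truth = three_peaks(ppm, 10, 0, 2, 3, 197, 6, 2, 218, 7)
spectrum = truth + 0.05 * np.random.randn(ppm.size)   # synthetic noisy spectrum

p0 = [8, 0, 3, 2, 197, 5, 2, 218, 5]                  # initial guesses near known shifts
popt, _ = curve_fit(three_peaks, ppm, spectrum, p0=p0)

# A Lorentzian's area is proportional to amplitude * width (the pi factor cancels in the ratio).
membrane_area, rbc_area = popt[3] * popt[5], popt[6] * popt[8]
print(f"RBC:membrane ratio ~ {rbc_area / membrane_area:.2f}")
```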

    Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions

    In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to being able to ensure that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers to take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. Several new design tools were developed and refined to support the design of MPDSMs under fracture conditions, including a mapping method for the FDM manufacturability constraints, three major literature reviews, the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials, some new experimental equipment, and the refinement of a fast and simple g-code generator based on commercially available software. The refined design method and rules were experimentally validated using a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide was developed from the results of this project for practicing engineers who are experts in neither advanced solid mechanics nor process-tailored materials.
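    The dissertation mentions the refinement of a fast and simple g-code generator; purely as an illustration of the idea (not the author's tool), the hedged sketch below emits G-code for a single-layer rectangular raster of FDM beads, with bead width, layer height, feed rate and extrusion factor all assumed.

```python
# Hedged sketch: emit G-code for a single-layer rectangular raster of FDM beads.
# Bead width, layer height, feed rate and the extrusion-per-mm factor are
# illustrative assumptions, not values from the dissertation.
def raster_layer_gcode(width_mm=20.0, height_mm=10.0, bead_width=0.4,
                       feed_mm_min=1800, extrude_per_mm=0.033, z=0.2):
    lines = [f"G1 Z{z:.2f} F{feed_mm_min}"]
    e = 0.0                                   # cumulative extrusion
    y = 0.0
    left_to_right = True
    while y <= height_mm + 1e-9:
        x_start, x_end = (0.0, width_mm) if left_to_right else (width_mm, 0.0)
        lines.append(f"G0 X{x_start:.3f} Y{y:.3f}")           # travel move, no extrusion
        e += width_mm * extrude_per_mm
        lines.append(f"G1 X{x_end:.3f} Y{y:.3f} E{e:.4f}")    # deposit one bead
        y += bead_width
        left_to_right = not left_to_right
    return "\n".join(lines)

if __name__ == "__main__":
    print(raster_layer_gcode())
```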

    Inferior Alveolar Canal Automatic Detection with Deep Learning CNNs on CBCTs: Development of a Novel Model and Release of Open-Source Dataset and Algorithm

    Featured Application: Convolutional neural networks can accurately identify the Inferior Alveolar Canal, rapidly generating precise 3D data. The datasets and source code used in this paper are publicly available, allowing the reproducibility of the experiments performed. Introduction: The need for accurate three-dimensional data of anatomical structures is increasing in the surgical field. The development of convolutional neural networks (CNNs) has been helping to fill this gap by providing efficient tools to clinicians. Nonetheless, the lack of fully accessible datasets and open-source algorithms is slowing the improvements in this field. In this paper, we focus on the fully automatic segmentation of the Inferior Alveolar Canal (IAC), which is of immense interest in dental and maxillo-facial surgery. Conventionally, only a bidimensional annotation of the IAC is used in common clinical practice. A reliable convolutional neural network (CNN) might be time-saving in daily practice and improve the quality of assistance. Materials and methods: Cone Beam Computed Tomography (CBCT) volumes obtained from a single radiological center using the same machine were gathered and annotated. The course of the IAC was annotated on the CBCT volumes. A secondary dataset with sparse annotations and a primary dataset with both dense and sparse annotations were generated. Three separate experiments were conducted in order to evaluate the CNN. The IoU and Dice scores of every experiment were recorded as the primary endpoint, while the time needed to achieve the annotation was assessed as the secondary endpoint. Results: A total of 347 CBCT volumes were collected, then divided into primary and secondary datasets. Among the three experiments, an IoU score of 0.64 and a Dice score of 0.79 were obtained thanks to the pre-training of the CNN on the secondary dataset and the creation of a novel deep label propagation model, followed by proper training on the primary dataset. To the best of our knowledge, these results are the best ever published in the segmentation of the IAC. The datasets are publicly available and the algorithm is published as open-source software. On average, the CNN could produce a 3D annotation of the IAC in 6.33 s, compared to 87.3 s needed by the radiology technician to produce a bidimensional annotation. Conclusions: To summarize, the following achievements have been reached. A new state of the art in terms of Dice score was achieved, exceeding the threshold of 0.75 commonly considered for use in clinical practice. The CNN could fully automatically produce accurate three-dimensional segmentation of the IAC in a rapid setting, compared to the bidimensional annotations commonly used in clinical practice and generated in a time-consuming manner. We introduced our innovative deep label propagation method to optimize the performance of the CNN in the segmentation of the IAC. For the first time in this field, the datasets and the source code used were publicly released, granting reproducibility of the experiments and helping in the improvement of IAC segmentation.
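    For reference, the two evaluation metrics reported above (IoU and Dice) can be computed from binary segmentation masks as in the minimal sketch below; the volumes used here are random placeholders, not the study's CBCT data.

```python
# Hedged sketch: IoU and Dice between a predicted and a ground-truth binary mask,
# the two metrics reported in the study. The volumes below are random placeholders.
import numpy as np

def iou_and_dice(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = intersection / union if union else 1.0
    denom = pred.sum() + gt.sum()
    dice = 2 * intersection / denom if denom else 1.0
    return iou, dice

pred = np.random.rand(64, 64, 64) > 0.5   # placeholder predicted 3D mask
gt = np.random.rand(64, 64, 64) > 0.5     # placeholder ground-truth 3D mask
print("IoU = %.3f, Dice = %.3f" % iou_and_dice(pred, gt))
```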

    Coworking through the Pandemic: Flexibly Yours

    Coworking can be defined as a paid-for service (usually) providing shared workspace and amenities to users. When the pandemic hit, owing to the business model's in-person foundations of physical proximity and shared amenities, the coworking industry was expected to be seriously impacted. Yet, as the pandemic has played out, coworking businesses have become uniquely positioned in this uncertain and changing workscape. This dissertation presents one of the first academic explorations into how independent coworking businesses fared in the initial year of the pandemic. Specifically, the research explores the following questions: 1. How did independent coworking businesses manage and adapt to the pandemic? 2. What is virtual coworking and what are the experiences of workers in these virtual coworking spaces? 3. How does coworking flexibility affect social support and connection? Using a critically interpretive poststructural approach, this ethnography included virtual fieldwork and interviews. Sixty hours of virtual participant observation and 30 loosely structured interviews were conducted with coworking stakeholders (i.e., owner-operators, managers, and users) over videoconferencing platforms. Secondary data included written fieldnotes and coworking documents. Results capture the strategies used by coworking business owner-operators and managers to sustain their businesses and the attendant relationships with coworking users, irrespective of whether or not a physical location could be provided under pandemic lockdowns. Given the expansion of coworking businesses into virtual service offerings, a key contribution of my research is the finding that co-location in a physical coworking space is not necessary to cultivate vibes and a sense of community. By removing the physical infrastructure of coworking, the virtual coworking product in which I participated points to both a reinforcement of and an emphasis on the centrality of social connection, support, and community. By de-centering the priority of physical co-location, I conceptualize coworking businesses as commodified support infrastructures: affective atmospheres produced through the entanglement of human bodies, other living things, objects, and technologies in a space. In viewing coworking businesses as fluid affective atmospheres of support, my research adds to the emerging coworking scholarship that attends to the atmospheric qualities of coworking, the role of affective labour, and the possibilities of encounters and interactions as bodies, objects, and technologies interconnect. My results reinforce the deep ambivalence of coworking, capturing tensions between productivity and sociality, and a blurring of boundaries between professional and private, and between work and leisure. The analysis also suggests that the inherent flexibility, informality, turnover, and autonomy in coworking practices can make creating stable social connections and support difficult. Finally, the COVID-19 crisis brought to light how coworking lies primarily outside the scope of current employment legislation, which includes occupational health and safety, employment standards, and workers' compensation. In the absence of well-defined policy directions, coworking business owner-operators and managers made individualized decisions, thereby ultimately downloading further risk and responsibility onto their coworking users.

    Self-supervised learning techniques for monitoring industrial spaces

    Master's dissertation in Mathematics and Computation (Mestrado em Matemática e Computação). This document is a Master's Thesis with the title "Self-Supervised Learning Techniques for Monitoring Industrial Spaces" and was carried out in a business environment at Neadvance - Machine Vision S.A. together with the University of Minho. This dissertation arises from a major project that consists of developing a platform to monitor specific operations in an industrial space, named SMARTICS (Plataforma tecnológica para monitorização inteligente de espaços industriais abertos). This project contained a research component to explore a different learning paradigm and its methods - self-supervised learning, which was the focus and main contribution of this work. Supervised learning has reached a bottleneck, as it requires expensive and time-consuming annotations. In real problems, such as industrial spaces, it is not always possible to acquire a large number of images. Self-supervised learning addresses these issues by extracting information from the data itself and has achieved good performance on large-scale datasets. This work provides a comprehensive literature review of the self-supervised learning framework and some of its methods. It also applies a method to solve a classification task that resembles a problem in an industrial space and evaluates its performance.
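    The abstract does not name the self-supervised method that was applied; as an illustration only, the hedged sketch below implements one common pretext task, rotation prediction, in PyTorch, where the tiny backbone, image size and training loop are assumptions rather than the dissertation's setup.

```python
# Hedged sketch: rotation-prediction pretext task, one common self-supervised setup.
# The small CNN backbone and random images are illustrative assumptions, not the
# method actually used in the dissertation.
import torch
import torch.nn as nn

def make_rotation_batch(images):
    """Rotate each image by 0/90/180/270 degrees; the rotation index is the label."""
    rotated, labels = [], []
    for k in range(4):
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

backbone = nn.Sequential(                      # small stand-in encoder
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(32, 4)                        # predicts which rotation was applied

opt = torch.optim.Adam(list(backbone.parameters()) + list(head.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(10):                         # toy loop on random unlabeled images
    images = torch.rand(8, 3, 64, 64)
    x, y = make_rotation_batch(images)
    loss = loss_fn(head(backbone(x)), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
# After pretraining, the `backbone` features can be reused for the downstream classifier.
```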

    k-Means
