5,488 research outputs found

    Audio-visual multi-modality driven hybrid feature learning model for crowd analysis and classification

    The rapid emergence of advanced software systems, low-cost hardware and decentralized cloud computing technologies has broadened the horizon for vision-based surveillance, monitoring and control. However, complex and inferior feature learning over visual artefacts or video streams, especially under extreme conditions, confines the majority of existing vision-based crowd analysis and classification systems. Retrieving event-sensitive or crowd-type-sensitive spatio-temporal features for different crowd types under extreme conditions is a highly complex task; consequently, it yields lower accuracy and reliability, which limits existing methods for real-time crowd analysis. Despite numerous efforts in vision-based approaches, the lack of acoustic cues often creates ambiguity in crowd classification. On the other hand, the strategic amalgamation of audio-visual features can enable accurate and reliable crowd analysis and classification. Motivated by this, this research develops a novel audio-visual multi-modality-driven hybrid feature learning model for crowd analysis and classification. A hybrid feature extraction model was applied to extract deep spatio-temporal features using the Gray-Level Co-occurrence Matrix (GLCM) and the AlexNet transferable learning model. After extracting the different GLCM features and AlexNet deep features, horizontal concatenation was performed to fuse the different feature sets. Similarly, for acoustic feature extraction, the audio samples (from the input video) were processed with static (fixed-size) sampling, pre-emphasis, block framing and Hann windowing, followed by extraction of acoustic features such as GTCC, GTCC-Delta, GTCC-Delta-Delta, MFCC, Spectral Entropy, Spectral Flux, Spectral Slope and Harmonics-to-Noise Ratio (HNR). 
Finally, the extracted audio-visual features were fused to yield a composite multi-modal feature set, which is processed for classification using the random forest ensemble classifier. The multi-class classification yields a crowd-classification accuracy of 98.26%, precision of 98.89%, sensitivity of 94.82%, specificity of 95.57%, and F-measure of 98.84%. The robustness of the proposed multi-modality-based crowd analysis model confirms its suitability for real-world crowd detection and classification tasks.
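    As a rough illustration of the fusion pipeline described above, the sketch below computes two classic GLCM statistics on a toy grayscale patch and horizontally concatenates them with stand-in deep and acoustic feature vectors. This is a minimal pure-Python sketch, not the authors' implementation; the deep and acoustic vectors are invented placeholders.

```python
# Toy GLCM features + horizontal feature fusion (illustrative only).

def glcm(image, levels):
    """Normalized gray-level co-occurrence matrix for horizontal neighbours."""
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    total = sum(sum(r) for r in m) or 1
    return [[v / total for v in r] for r in m]

def glcm_features(p):
    """Two classic GLCM statistics: contrast and energy."""
    n = len(p)
    contrast = sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
    energy = sum(v * v for r in p for v in r)
    return [contrast, energy]

def fuse(*feature_sets):
    """Horizontal concatenation of feature sets into one vector."""
    return [v for fs in feature_sets for v in fs]

image = [[0, 0, 1], [1, 2, 2], [0, 1, 2]]   # toy 3-level grayscale patch
p = glcm(image, levels=3)
visual = glcm_features(p)
deep = [0.3, 0.7]      # stand-in for AlexNet deep features
acoustic = [0.1]       # stand-in for GTCC/MFCC-style features
fused = fuse(visual, deep, acoustic)
print(len(fused))  # 5
```

In the real system, the fused vector would then be fed to the random forest classifier.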

    The State of the Art in Deep Learning Applications, Challenges, and Future Prospects::A Comprehensive Review of Flood Forecasting and Management

    Floods are a devastating natural calamity that can seriously harm both infrastructure and people. Accurate flood forecasting and control are essential to lessen these effects and safeguard populations. By utilizing its capacity to handle massive amounts of data and provide accurate forecasts, deep learning has emerged as a potent tool for improving flood prediction and control. This work thoroughly reviews the current state of deep learning applications in flood forecasting and management. The review covers a variety of subjects, such as the data sources utilized, the deep learning models used, and the assessment measures adopted to judge their efficacy. It critically assesses current approaches and points out their advantages and disadvantages. The article also examines challenges with data accessibility, the interpretability of deep learning models, and ethical considerations in flood prediction. It further describes potential directions for deep-learning research to enhance flood prediction and control. These include incorporating uncertainty estimates into forecasts, integrating multiple data sources, developing hybrid models that mix deep learning with other methodologies, and enhancing the interpretability of deep learning models. Pursuing these research goals can make deep learning models more precise and effective, resulting in better flood control plans and forecasts. Overall, this review is a useful resource for academics and professionals working on flood forecasting and management: by reviewing the current state of the art, emphasizing difficulties, and outlining potential areas for future study, it lays a solid basis. By implementing cutting-edge deep learning algorithms, communities can better prepare for and lessen the destructive effects of floods, thereby protecting people and infrastructure.

    Evaluation of different segmentation-based approaches for skin disorders from dermoscopic images

    Treballs Finals de Grau d'Enginyeria Biomèdica. Facultat de Medicina i Ciències de la Salut. Universitat de Barcelona. Curs: 2022-2023. Tutor/Director: Sala Llonch, Roser; Mata Miquel, Christian; Munuera, Josep. Skin cancer is the most common type of cancer in the world, and its incidence has been increasing over the past decades. Even with the most complex and advanced technologies, current image acquisition systems do not permit reliable identification of the skin lesion by visual examination, due to the challenging structure of the malignancy. This motivates the implementation of automatic skin lesion segmentation methods, both to assist physicians' diagnosis when determining the lesion's region and to serve as a preliminary step for classification of the skin lesion. Accurate and precise segmentation is crucial for rigorous screening and monitoring of the disease's progression. To address this concern, the present project provides a state-of-the-art review of the most predominant conventional segmentation models for skin lesion segmentation, alongside a market analysis. With the rise of automatic segmentation tools, a wide number of algorithms are currently in use, but many have drawbacks when applied to dermatological disorders due to the high-level presence of artefacts in the acquired images. In light of the above, three segmentation techniques have been selected for this work: the level set method, an algorithm combining GrabCut and k-means, and an intensity-based automatic algorithm developed by the Hospital Sant Joan de Déu de Barcelona research group. In addition, their performance is validated with a view to further implementation in clinical training. The proposals, together with the obtained outcomes, were accomplished by means of a publicly available skin lesion image database.
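    As a toy illustration of the k-means component of the reviewed GrabCut + k-means approach, the sketch below clusters pixel intensities into two groups and labels the darker cluster as lesion. It is a minimal sketch with invented data, not the project's implementation.

```python
# Toy 1-D k-means (k = 2) intensity segmentation (illustrative only).

def kmeans_1d(values, iters=20):
    """Two-cluster 1-D k-means over intensity values."""
    centers = [min(values), max(values)]  # simple two-center initialization
    for _ in range(iters):
        clusters = ([], [])
        for v in values:
            nearer = 0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
            clusters[nearer].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

def segment(pixels):
    """Label each pixel 1 (darker cluster = lesion) or 0 (background)."""
    flat = [v for row in pixels for v in row]
    c = kmeans_1d(flat)
    lesion = 0 if c[0] < c[1] else 1
    return [[1 if abs(v - c[lesion]) <= abs(v - c[1 - lesion]) else 0
             for v in row] for row in pixels]

patch = [[200, 190, 60], [210, 50, 40], [205, 195, 55]]  # toy intensities
mask = segment(patch)
print(mask)  # [[0, 0, 1], [0, 1, 1], [0, 0, 1]]
```

A real pipeline would apply this to full dermoscopic images and refine the mask with GrabCut's graph-cut step.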

    Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions

    In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to being able to ensure that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers to take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. 
Several new design tools were developed to support the design of MPDSMs under fracture conditions, including a mapping method for the FDM manufacturability constraints, three major literature reviews, the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials, some new experimental equipment, and the refinement of a fast and simple G-code generator based on commercially-available software. The refined design method and rules were experimentally validated using a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide was developed from the results of this project for practicing engineers who are not experts in advanced solid mechanics or process-tailored materials.
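    The kind of G-code generation mentioned above can be sketched minimally: the toy function below emits a serpentine raster of deposition traces for one layer. All dimensions, feed rates, and extrusion values are illustrative assumptions, not the dissertation's generator.

```python
# Toy serpentine-raster G-code generator for one FDM layer (illustrative).

def raster_layer(width, height, bead, z, feed=1800):
    """Return G-code moves covering a width x height region with traces."""
    lines = [f"G1 Z{z:.2f} F{feed}"]          # move to layer height
    y, direction = 0.0, 1
    while y <= height:
        x0, x1 = (0.0, width) if direction > 0 else (width, 0.0)
        lines.append(f"G0 X{x0:.2f} Y{y:.2f}")               # travel move
        lines.append(f"G1 X{x1:.2f} Y{y:.2f} E{width:.2f}")  # deposit trace
        y += bead          # step over by one bead width
        direction *= -1    # alternate direction each pass
    return lines

gcode = raster_layer(width=10, height=2, bead=1, z=0.2)
print(len(gcode))  # 7: one Z move plus two moves per trace (3 traces)
```

Designing the element layout, as the dissertation does, amounts to choosing these trace paths deliberately rather than with a fixed raster.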

    Knowledge-based Modelling of Additive Manufacturing for Sustainability Performance Analysis and Decision Making

    Additive manufacturing (AM) has been considered viable for complex geometries, topology optimized parts, and parts that are otherwise difficult to produce using conventional manufacturing processes. 
Despite the advantages, one of the prevalent challenges in AM has been the poor capability of producing functional parts at production volumes that are competitive with traditional manufacturing. Modelling and simulation are powerful tools that can help shorten the design-build-test cycle by enabling rapid analysis of various product designs and process scenarios. Nevertheless, the capabilities and limitations of traditional and advanced manufacturing technologies do define the bounds for new product development. Thus, it is important that designers have access to methods and tools that enable them to model and simulate product performance and associated manufacturing process performance to realize functional high-value products. The motivation for this dissertation research stems from the ongoing development of a novel high temperature superconducting (HTS) magnet assembly, which operates in a cryogenic environment. Its complexity requires the convergence of multidisciplinary expertise during design and prototyping. The research applies knowledge-based modelling to aid manufacturing process analysis and decision making in the design of mechanical components of the HTS magnet. Further, it explores the feasibility of using AM in the production of the HTS magnet assembly. The developed approach uses product-process integrated modelling based on physical experiments to generate quantitative and qualitative information that defines process-structure-property-performance interactions for given material-process combinations. The resulting interactions are then integrated into a graph-based model that can aid design space exploration and assist early design and manufacturing decision-making. To do so, test components are fabricated using two metal AM processes: wire and arc additive manufacturing and selective laser melting. 
Metal alloys commonly used in structural applications (stainless steel, mild steel, high-strength low-alloy steel, aluminium, and copper alloys) are tested for their mechanical, thermal, and electrical properties. In addition, microstructural characterization of the alloys is performed to further understand the impact of manufacturing process parameters on material properties. The integrated modelling approach combines the collected experimental data, existing analytical and empirical relationships, and other data-driven models (e.g., finite element models, machine learning models) in the form of a decision support system that enables optimal selection of material, manufacturing technology, process parameters, and other control variables for attaining the desired structure, properties, and performance of the final printed component. The manufacturing decision making is performed through implementation of a probabilistic model, i.e., a Bayesian network, which is robust, modular, and can be adapted to other manufacturing systems and product designs. The ability of the model to improve the throughput and quality of additive manufacturing processes will boost sustainable manufacturing goals.
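    As a minimal sketch of the decision-support idea, the snippet below scores material/process combinations with a conditional-probability table and picks the combination maximizing the probability of good print quality. The table values are invented for illustration; a real Bayesian network would model many more variables and their dependencies.

```python
# Toy decision-support scoring over material/process combinations
# (all probability values below are invented for illustration).

# P(quality = good | material, process), illustrative values only
p_good = {
    ("stainless steel", "WAAM"): 0.70,
    ("stainless steel", "SLM"): 0.85,
    ("aluminium", "WAAM"): 0.60,
    ("aluminium", "SLM"): 0.75,
}

def best_choice(table):
    """Return the (material, process) pair maximizing P(good quality)."""
    return max(table, key=table.get)

choice = best_choice(p_good)
print(choice)  # ('stainless steel', 'SLM')
```

In the dissertation's setting, such conditional probabilities would be learned from the experimental property data rather than tabulated by hand.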

    Intelligent computing : the latest advances, challenges and future

    Computing is a critical driving force in the development of human civilization. In recent years, we have witnessed the emergence of intelligent computing, a new computing paradigm that is reshaping traditional computing and promoting the digital revolution in the era of big data, artificial intelligence and the internet of things, with new computing theories, architectures, methods, systems, and applications. Intelligent computing has greatly broadened the scope of computing, extending it from traditional computing on data to increasingly diverse computing paradigms such as perceptual intelligence, cognitive intelligence, autonomous intelligence, and human-computer fusion intelligence. Intelligence and computing have long followed different paths of evolution and development but have become increasingly intertwined in recent years: intelligent computing is not only intelligence-oriented but also intelligence-driven. Such cross-fertilization has prompted the emergence and rapid advancement of intelligent computing.

    Bio-inspired Optimization: Algorithm, Analysis and Scope of Application

    In the last few years, bio-inspired optimization techniques have been widely adopted in fields such as computer science, mathematics, and biology in order to optimize solutions. Bio-inspired optimization problems are usually nonlinear and subject to multiple nonlinear constraints. To overcome the shortcomings of traditional optimization algorithms, recent work has tended to apply bio-inspired optimization algorithms, which represent a promising approach for solving complex optimization problems. This work comprises a state-of-the-art survey, gap analysis, and applications of ten recent bio-inspired algorithms, namely: Particle Swarm Optimization (PSO), Genetic Bee Colony (GBC) Algorithm, Fish Swarm Algorithm (FSA), Cat Swarm Optimization (CSO), Whale Optimization Algorithm (WOA), Artificial Algae Algorithm (AAA), Elephant Search Algorithm (ESA), Cuckoo Search Optimization Algorithm (CSOA), Moth Flame Optimization (MFO), and Grey Wolf Optimization (GWO). Previous related works collected from the Scopus database are presented. We also explore some key issues in optimization and some applications for further research, and provide in-depth discussions of the essence of these algorithms, their connections to self-organization, and their applications in different research areas. As a result, the proposed analysis of these algorithms leads to some key problems that have to be addressed in the future.
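    As a concrete example of the first surveyed algorithm, the sketch below is a compact, generic Particle Swarm Optimization (PSO) minimizing the sphere function. Parameter values are common textbook defaults, not taken from the reviewed works.

```python
# Compact generic PSO minimizing the sphere function (illustrative defaults).
import random

def pso(f, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]        # personal best positions
    gbest = min(pbest, key=f)[:]       # global best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):        # update personal best
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):     # update global best
                    gbest = pbest[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso(sphere)
print(sphere(best))  # very close to 0
```

The other nine surveyed algorithms follow the same population-based template with different update rules.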

    A Data-driven Approach to Large Knowledge Graph Matching

    In the last decade, a remarkable number of open Knowledge Graphs (KGs) were developed, such as DBpedia, NELL, and YAGO. While some of these KGs are curated via crowdsourcing platforms, others are semi-automatically constructed. This has resulted in a significant degree of semantic heterogeneity and overlapping facts. KGs are highly complementary; thus, mapping them can benefit intelligent applications that require integrating different KGs, such as recommendation systems, query answering, and semantic web navigation. Although the problem of ontology matching has been investigated and a significant number of systems have been developed, the challenges of mapping large-scale KGs remain significant. KG matching has been a topic of interest in the Semantic Web community since it was introduced to the Ontology Alignment Evaluation Initiative (OAEI) in 2018. Nonetheless, a major limitation of the current benchmarks is their lack of representation of real-world KGs. This work highlights a number of limitations of current matching methods: (i) they are highly dependent on string-based similarity measures, and (ii) they are primarily built to handle well-formed ontologies. These features make them unsuitable for large, (semi/fully) automatically constructed KGs with hundreds of classes and millions of instances. Another limitation of current work is the lack of benchmark datasets that represent the challenging task of matching real-world KGs. This work addresses that limitation by first introducing two gold standard datasets for matching the schemas of large, automatically constructed, less-well-structured KGs based on common KGs such as NELL, DBpedia, and Wikidata. We believe that the datasets we make public in this work constitute the largest domain-independent benchmarks for matching KG classes. 
As many state-of-the-art methods are not suitable for matching large-scale and cross-domain KGs that often suffer from highly imbalanced class distributions, recent studies have revisited instance-based matching techniques for this task. This is because such large KGs often lack a well-defined structure and descriptive metadata about their classes, but contain numerous class instances. Therefore, inspired by the role of instances in KGs, we propose a hybrid matching approach. Our method combines an instance-based matcher, which casts the schema-matching process as a text classification task by exploiting instances of KG classes, with a string-based matcher. Our method is domain-independent and able to handle KG classes with imbalanced populations. Further, we show that incorporating an instance-based approach with an appropriate data balancing strategy yields significant improvements in matching large, common KG classes.
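    The instance-based matching idea, casting schema matching as classification over class instances, can be sketched in miniature as below. The bag-of-words overlap scorer and the KG class samples are invented stand-ins for the paper's model and data.

```python
# Toy instance-based class matching between two KGs (illustrative only).
from collections import Counter

def bow(texts):
    """Bag-of-words counter over a list of instance labels."""
    return Counter(w for t in texts for w in t.lower().split())

def similarity(a, b):
    """Simple overlap score between two bag-of-words counters."""
    return sum((a & b).values())

kg_a = {  # source KG: class -> sample instance labels (invented)
    "City": ["paris france", "tokyo japan", "berlin germany"],
    "Person": ["marie curie", "alan turing", "ada lovelace"],
}
kg_b = {  # target KG: class -> sample instance labels (invented)
    "Settlement": ["paris", "berlin", "kyoto japan"],
    "Scientist": ["curie", "turing"],
}

centroids = {c: bow(t) for c, t in kg_a.items()}
matches = {c: max(centroids, key=lambda a: similarity(centroids[a], bow(t)))
           for c, t in kg_b.items()}
print(matches)  # {'Settlement': 'City', 'Scientist': 'Person'}
```

The paper's actual matcher replaces this overlap scorer with a trained text classifier and adds a string-based matcher and class balancing.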

    Traffic Prediction using Artificial Intelligence: Review of Recent Advances and Emerging Opportunities

    Traffic prediction plays a crucial role in alleviating traffic congestion, which represents a critical problem globally, resulting in negative consequences such as lost hours of additional travel time and increased fuel consumption. Integrating emerging technologies into transportation systems provides opportunities for improving traffic prediction significantly and brings about new research problems. In order to lay the foundation for understanding the open research challenges in traffic prediction, this survey aims to provide a comprehensive overview of traffic prediction methodologies. Specifically, we focus on the recent advances and emerging research opportunities in Artificial Intelligence (AI)-based traffic prediction methods, due to their recent success and potential in traffic prediction, with an emphasis on multivariate traffic time series modeling. We first provide a list and explanation of the various data types and resources used in the literature. Next, the essential data preprocessing methods within the traffic prediction context are categorized, and the prediction methods and applications are subsequently summarized. Lastly, we present primary research challenges in traffic prediction and discuss some directions for future research.
    Comment: Published in Transportation Research Part C: Emerging Technologies (TR_C), Volume 145, 202
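    As a minimal illustration of the time-series framing used throughout this literature, the sketch below fits an AR(1) model y_t = a*y_{t-1} + b to a toy series of traffic counts by closed-form least squares and forecasts one step ahead. The data and model choice are illustrative and far simpler than the AI methods surveyed.

```python
# Toy AR(1) traffic-count forecast by closed-form least squares
# (data invented for illustration).

def fit_ar1(series):
    """Least-squares fit of y_t = a * y_{t-1} + b."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    b = my - a * mx
    return a, b

counts = [100, 110, 121, 133.1, 146.41]  # toy hourly vehicle counts
a, b = fit_ar1(counts)
forecast = a * counts[-1] + b            # one-step-ahead prediction
print(round(forecast, 2))  # ~161.05
```

Multivariate AI methods generalize this idea to many correlated sensors and nonlinear dynamics.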

    Using video games to study the acquisition and performance of psychomotor skills

    Understanding how humans learn complex skills is a fundamental aim of cognitive science. Digital games offer promising opportunities to study cognitive factors associated with skill acquisition and performance, as they motivate longitudinal engagement and produce rich, multivariate data sets. By applying multivariate analysis techniques to data arising from gameplay, this thesis extends the literature on cognition as it pertains to psychomotor skill. We describe three studies conducted in this regard. In the first study, we analyzed the relationship between the temporal distribution of play instances and performance in a commercial digital game (League of Legends). Using clustering techniques and big data, we demonstrated that players who cram gameplay into short time frames ultimately perform worse than those who space the same number of games over longer periods. In the second study, we examined an experimental data set of participants who played Meta-T, a laboratory version of Tetris. Using Principal Components Analysis and regression techniques, we identified cognitive-behavioural markers of performance, such as action latency and motor coordination. We also applied Hidden Markov models (HMMs) to time series of these markers, showing that moment-to-moment dynamics in performance can be segmented into behavioural states related to latent psychological states. In the third study, we investigated the neural correlates of behavioural states during performance. Using simultaneous MEG and behavioural recordings of participants playing Tetris, we segmented time series of neural activity based on time stamps of behavioural epochs derived from HMMs. We compared behavioural epochs based on neural markers, showing that cognitive states derived from multivariate behavioural data correlate with neural activity in the alpha band. Taken together, this thesis advances our understanding of using digital game data to study cognition and learning. It demonstrates the feasibility of recording high-density neuroimaging data during complex behavioural tasks and obtaining reliable measures of internal neuronal states during complex behaviour.