
    A review on deep-learning-based cyberbullying detection

    Bullying is undesirable behavior by others that harms an individual physically, mentally, or socially. Cyberbullying, also known as online bullying, is a virtual form of bullying or harassment (e.g., textual or image-based). Detecting it is a pressing need, as its prevalence is continually growing and it contributes to mental health issues. Conventional machine learning models were previously used to identify cyberbullying. However, current research demonstrates that deep learning surpasses traditional machine learning algorithms at this task for several reasons, including its ability to handle extensive data, classify text and images efficiently, and extract features automatically through hidden layers. This paper reviews the existing surveys and identifies the gaps in those studies. We also present a deep-learning-based defense ecosystem for cyberbullying detection, covering data representation techniques and different deep-learning-based models and frameworks. We critically analyze the existing deep-learning-based cyberbullying detection techniques and identify their significant contributions and the future research directions they propose. We also summarize the datasets in use, together with the deep-learning architecture applied to and the tasks accomplished on each. Finally, we present several challenges faced by existing researchers and the open issues to be addressed in future work.
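
    As a concrete illustration of the kind of model the review covers, the following is a minimal PyTorch sketch of a deep-learning text classifier for cyberbullying detection: an embedding layer feeding an LSTM whose final hidden state drives a sigmoid output. The vocabulary size, sequence length, and the random batch standing in for a labeled corpus are illustrative placeholders, not details taken from the paper.

    import torch
    import torch.nn as nn

    class BullyingClassifier(nn.Module):
        """Embedding -> LSTM -> sigmoid head for binary bullying detection."""
        def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, 1)

        def forward(self, token_ids):  # token_ids: (batch, seq_len) int tensor
            _, (h_n, _) = self.lstm(self.embed(token_ids))
            return torch.sigmoid(self.head(h_n[-1]))  # (batch, 1) probability

    model = BullyingClassifier()
    loss_fn = nn.BCELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One illustrative training step on a random batch standing in for a corpus.
    tokens = torch.randint(1, 20000, (32, 50))     # 32 comments, 50 tokens each
    labels = torch.randint(0, 2, (32, 1)).float()  # 1 = bullying, 0 = benign
    optimizer.zero_grad()
    loss = loss_fn(model(tokens), labels)
    loss.backward()
    optimizer.step()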

    An Analytic Training Approach for Recognition in Still Images and Videos

    This dissertation proposes a general framework to efficiently identify objects of interest (OI) in still images, with an extension to human action recognition in videos. The frameworks used to process still images and videos share the same architecture but differ in their content representations. Initially, global-level analysis is employed to extract distinctive feature sets from the input data. For this global analysis, bidirectional two-dimensional principal component analysis (2D-PCA) is employed to preserve the correlation among neighborhood pixels. Furthermore, to cope with the inherent limitations of the holistic approach, local information is introduced into the framework. The local information of an OI is identified using the FERNS and affine-SIFT (ASIFT) approaches for spatial and temporal datasets, respectively. To keep only supportive local information, feature detection is followed by an effective pruning strategy that divides the features into inliers and outliers; a cluster of inliers represents local features that exhibit stable behavior and geometric consistency. Incremental learning is a significant but often overlooked problem in action recognition. The final part of this dissertation proposes a new action recognition algorithm based on sequential learning and an adaptive representation of the human body using Pyramid of Histogram of Oriented Gradients (PHOG) features. The changing shape and appearance of human body parts are tracked under a weak appearance-constancy assumption, and the constantly changing shape of an OI is maximally covered by small blocks that approximate the body contour of the segmented foreground object. In addition, the analytically determined learning phase guarantees a lower computational burden for classification. The dissertation also explores recognizing an action causally from a minimum number of video frames: PHOG features adaptively extracted from individual frames allow an incoming action video to be recognized from a small group of frames, eliminating the need for a large look-ahead.
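
    As an illustration of the global-analysis stage, the following is a minimal numpy sketch of bidirectional 2D-PCA under common formulations of that technique: scatter matrices are accumulated over the image matrices in both the column and row directions, and each image is projected onto the leading eigenvectors of both, preserving neighborhood-pixel correlation. The image sizes and numbers of retained components are illustrative placeholders, not values from the dissertation.

    import numpy as np

    def bidirectional_2dpca(images, k_cols=10, k_rows=10):
        """images: (n, h, w) stack; returns row/column bases and projections."""
        mean = images.mean(axis=0)
        centered = images - mean
        # Column-direction scatter: average of A^T A over the centered images.
        g_col = np.einsum('nij,nik->jk', centered, centered) / len(images)
        # Row-direction scatter: average of A A^T over the centered images.
        g_row = np.einsum('nij,nkj->ik', centered, centered) / len(images)
        # Leading eigenvectors (eigh returns eigenvalues in ascending order).
        x = np.linalg.eigh(g_col)[1][:, -k_cols:]   # (w, k_cols)
        z = np.linalg.eigh(g_row)[1][:, -k_rows:]   # (h, k_rows)
        # Bidirectional projection Z^T A X gives a compact feature matrix.
        features = np.einsum('hk,nhw,wc->nkc', z, centered, x)
        return z, x, features

    images = np.random.rand(100, 64, 64)  # stand-in for the training frames
    z, x, feats = bidirectional_2dpca(images)
    print(feats.shape)  # (100, 10, 10) compact feature matrices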

    GEOBIA 2016: Solutions and Synergies, 14-16 September 2016, University of Twente, Faculty of Geo-Information and Earth Observation (ITC): open access e-book


    AUTOMATED FEATURE EXTRACTION AND CONTENT-BASED RETRIEVAL OF PATHOLOGY MICROSCOPIC IMAGES USING K-MEANS CLUSTERING AND CODE RUN-LENGTH PROBABILITY DISTRIBUTION

    The dissertation starts with an extensive literature survey on current issues in content-based image retrieval (CBIR) research and the state-of-the-art theories, methodologies, and implementations, covering topics such as general information retrieval theory, imaging, image feature identification and extraction, feature indexing and multimedia database search, user-system interaction, relevance feedback, and performance evaluation. A general CBIR framework is proposed with three layers: image document space, feature space, and concept space. The framework emphasizes that while the projection from the image document space to the feature space is algorithmic and unrestricted, the connection between the feature space and the concept space is based on statistics instead of semantics; the scheme therefore favors image features that do not rely on excessive assumptions about image content. As an attempt to design a new CBIR methodology following this framework, k-means clustering color quantization is applied to pathology microscopic images, followed by code run-length probability distribution feature extraction. Kullback-Leibler divergence is used as the distance measure for feature comparison, and for content-based retrieval the distance between two images is defined as a function of all individual features. The process is highly automated, and the system works effectively across different tissues without human intervention. Possible improvements and future directions are discussed.
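
    The pipeline lends itself to a short sketch. The following is a minimal Python illustration of the described stages, with k-means (here via scikit-learn) quantizing colors, run lengths of the resulting code sequence collected into a smoothed probability distribution, and Kullback-Leibler divergence comparing two images. The cluster count, maximum run length, and random stand-in images are illustrative placeholders, not values from the dissertation.

    import numpy as np
    from sklearn.cluster import KMeans

    def runlength_feature(image, k=8, max_run=32):
        """k-means colour quantization followed by a run-length distribution."""
        pixels = image.reshape(-1, 3).astype(float)
        codes = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
        # Lengths of maximal runs of identical codes, scanned in raster order.
        boundaries = np.flatnonzero(np.diff(codes)) + 1
        runs = np.diff(np.concatenate(([0], boundaries, [len(codes)])))
        hist = np.bincount(np.clip(runs, 1, max_run), minlength=max_run + 1)[1:]
        return (hist + 1e-9) / (hist + 1e-9).sum()  # smoothed probability vector

    def kl_divergence(p, q):
        """Kullback-Leibler divergence used as the inter-image distance."""
        return float(np.sum(p * np.log(p / q)))

    a = np.random.randint(0, 256, (64, 64, 3))  # stand-ins for two slide images
    b = np.random.randint(0, 256, (64, 64, 3))
    print(kl_divergence(runlength_feature(a), runlength_feature(b)))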

    Adaptive edge restoration using an approach inspired by performance prediction

    In remote sensing, edge maps can serve, among other things, for geometric restitution, for the detection of linear features, and for segmentation. These maps are created relatively early in the image-processing chain, so an accurate edge map is needed to ensure the quality of subsequent operations. Our research question is whether the time lost in choosing an algorithm and its parameters can be reduced by automatically correcting the edge map. We therefore concentrate on developing an adaptive edge detection/restoration method. Our method is inspired by a technique for predicting the performance of low-level algorithms: it integrates a neural-network stage into a "classical" edge detection method. More precisely, we propose combining the performance map with the gradient map to allow more accurate decisions. This study produced software built around a neural network trained to predict the presence of edges; the network improves the decisions of edge detectors by reducing the number of false-alarm and missed-edge pixels. The first step of this work was a performance evaluation method for edge maps. Once that choice was made, maps could be compared with one another, making it easier to determine the best edge detection for each image. A literature review carried out in parallel led to the choice of a group of promising indicators for edge restoration, which served to calibrate and train a neural network to model edges. The information provided by this network was then combined, by arithmetic multiplication, with the gradient-magnitude maps of "classical" detectors to produce new gradient-magnitude maps; thresholding these yields "optimized" edge maps. On the airborne images of the South Florida dataset, the median F-measure for the Sobel algorithm rises from 51.3% before fusion to 56.4% after; the median F-measure is 56.3% for the improved Kirsch algorithm and 56.3% for the improved Frei-Chen. For Sobel with adaptive thresholding, the median F-measure is 52.3% before fusion and 57.2% after. For comparison, the median F-measure of the Moon detector, mathematically optimal for "ramp" edges, is 53.3%, and that of the Canny algorithm is 61.1%. The applicability of our algorithm is limited to images whose signal-to-noise ratio after filtering is at least 20. On the ground-level photographs of the South Florida dataset, the results are comparable to those obtained on the airborne images; on the Berkeley dataset, however, the results were inconclusive. On an IKONOS sub-image of the Université de Sherbrooke campus, the F-measure for the Sobel algorithm is 45.7% ±0.9% before fusion and 50.8% after. On an IKONOS sub-image of the Canadian Space Agency, the F-measure for Sobel with adaptive thresholding is 35.4% ±0.9% before fusion and 42.2% after; on the same image, the Argyle algorithm (Canny without post-processing) has an F-measure of 35.1% ±0.9% before fusion and 39.5% after. Our work enriched Chalmond's bank of indicators, making preprocessing possible before the gradient map is thresholded. At each step we propose a choice of parameters allowing the method to be used effectively. The corrected edges are thinner, more complete, and better localized than the original ones. A sensitivity study was carried out to better understand the contribution of each indicator. The effectiveness of the developed tool is comparable to that of other edge detection methods, making it an attractive choice for edge detection; the quality differences observed between our method and Canny's appear to be due to the use, or not, of post-processing. The software developed makes the methodology reusable and operationalizes the proposed method; the possibility of reusing the filter without retraining, and the simplicity of its parameter setting, address the need to reduce the time spent using the software.
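
    The fusion step described above reduces to an element-wise product followed by thresholding, sketched minimally below. The performance map, which in the thesis comes from the trained neural network, is stood in for here by a placeholder array; the Sobel gradient computation and the threshold value are illustrative assumptions.

    import numpy as np
    from scipy import ndimage

    def fused_edge_map(image, performance_map, threshold=0.2):
        """Multiply gradient magnitude by predicted edge presence, then threshold."""
        gx = ndimage.sobel(image, axis=1)
        gy = ndimage.sobel(image, axis=0)
        gradient = np.hypot(gx, gy)
        gradient /= gradient.max() + 1e-9   # normalise to [0, 1]
        fused = gradient * performance_map  # arithmetic multiplication (fusion)
        return fused > threshold            # binary "optimized" edge map

    image = np.random.rand(256, 256)  # stand-in for an input image
    perf = np.random.rand(256, 256)   # placeholder for the network's prediction
    edges = fused_edge_map(image, perf)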

    Generative Mesh Modeling

    Generative modeling is an alternative approach to the description of three-dimensional shape. The basic idea is to represent a model not, as usual, by an agglomeration of geometric primitives (triangles, point clouds, NURBS patches), but by functions. The paradigm change from objects to geometry-generating operations allows for a procedural representation of procedural shapes, such as most man-made objects: instead of storing only the result of a 3D construction, the construction process itself is stored in the model file. The generative approach opens truly new perspectives in many ways, among others for 3D knowledge management. It permits, for instance, resorting to a repository of already-solved modeling problems in order to re-use this knowledge in different, slightly varied situations. The construction knowledge can be collected in digital libraries containing domain-specific parametric modeling tools. A concrete realization of this approach is a new general description language for 3D models, the "Generative Modeling Language" (GML). As a Turing-complete "shape programming language", it is a true generalization of existing primitive-based 3D model formats. Together with its runtime engine, the GML permits storing highly complex 3D models in a compact form, evaluating the description within fractions of a second, adaptively tessellating and interactively displaying the model, and even changing the model's high-level parameters at runtime.
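
    The paradigm change is easy to illustrate. The following sketch is in Python rather than GML (whose PostScript-style syntax is not reproduced here): the stored "model" is a parameterised construction procedure, so changing a high-level parameter simply re-runs the construction and yields a new mesh. All names and parameters are illustrative, not taken from the GML libraries.

    import math

    def ngon(n, radius):
        """Construction operation: a regular n-gon in the z=0 plane."""
        return [(radius * math.cos(2 * math.pi * i / n),
                 radius * math.sin(2 * math.pi * i / n), 0.0) for i in range(n)]

    def extrude(polygon, height):
        """Construction operation: prism from a polygon (side faces only)."""
        top = [(x, y, z + height) for x, y, z in polygon]
        verts = polygon + top
        n = len(polygon)
        faces = [(i, (i + 1) % n, n + (i + 1) % n, n + i) for i in range(n)]
        return verts, faces

    def column(sides=16, radius=1.0, height=5.0):
        """The stored 'model' is this procedure, not its triangle output."""
        return extrude(ngon(sides, radius), height)

    verts, faces = column()             # evaluate with default parameters
    verts2, faces2 = column(sides=64)   # change a high-level parameter at runtime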

    The Adaptive City


    The 45th Australasian Universities Building Education Association Conference: Global Challenges in a Disrupted World: Smart, Sustainable and Resilient Approaches in the Built Environment, Conference Proceedings, 23 - 25 November 2022, Western Sydney University, Kingswood Campus, Sydney, Australia

    This is the proceedings of the 45th Australasian Universities Building Education Association (AUBEA) conference, hosted by Western Sydney University in November 2022. The conference is organised by the School of Engineering, Design, and Built Environment in collaboration with the Centre for Smart Modern Construction, Western Sydney University. This year's conference theme is "Global Challenges in a Disrupted World: Smart, Sustainable and Resilient Approaches in the Built Environment", and over a hundred double-blind peer-reviewed papers are expected to be published in the proceedings.