
    A spatio-temporal learning approach for crowd activity modelling to detect anomalies

    With security and surveillance gaining paramount importance in recent years, it has become important to reliably automate some surveillance tasks for monitoring crowded areas. Automation also supports human operators who are overwhelmed by the large number of security screens they must monitor. Crowd events such as excess usage throughout the day, sudden peaks in crowd volume, and chaotic motion (obvious to spot) all emerge over time, requiring constant monitoring to stay informed of the event build-up. To ease this task, the computer vision community has been addressing surveillance tasks using image processing and machine learning techniques. Tasks currently being addressed include crowd density estimation or people counting, crowd detection, and abnormal crowd event detection. Most work has focused on crowd detection and estimation, with the focus slowly shifting to crowd event learning for abnormality detection. This thesis addresses crowd abnormality detection; however, by way of the modelling approach used, the tasks of crowd detection and estimation are implicitly handled as well. The existing approaches in the literature have a number of drawbacks that keep them from being scalable to arbitrary public scenes. Most works use simple scene settings where motion occurs wholly in the near-field or far-field of the camera view. Thus, with assumptions on the expected location of person motion, small blobs are arbitrarily filtered out as noise when they may be legitimate motion in the far-field. Such an approach makes it difficult to deal with complex scenes where entry/exit points occur in the centre of the scene, or with multiple pathways running from the near-field to the far-field of the camera view that produce blobs of differing sizes. Further, most authors assume the number of directions people's motion should exhibit rather than discovering what these may be.
Approaches with such assumptions lose accuracy when dealing with, say, a railway platform, which exhibits a number of motion directions: two-way, one-way, dispersive, etc. Finally, very few contributions use time as a video feature to model the human intuition of time-of-day abnormalities; that is, certain motion patterns may be abnormal if they have not been seen at a given time of day. Most works use time merely as an extra qualifier to spatial data for trajectory definition. In this thesis most of these drawbacks are addressed in the modelling of crowd activity. Firstly, no assumptions are made on scene structure or the blob sizes resulting from it. The optical flow algorithm used is robust, and even the noise present (which is in fact unwanted motion of swaying hands and legs, as opposed to that of the torso) is fairly consistent and can therefore be factored into the modelling. Blobs, no matter their size, are not discarded, as they may be legitimate emerging motion in the far-field. The modelling also deals with paths extending from the far-field to the near-field of the camera view and segments these such that each segment contains self-comparable fields of motion. A normalisation factor for comparisons across near- and far-field motion would imply prior knowledge of the scene; as the system is intended for generic public locations with varying scene structures, normalisation is not used, and yet the near- and far-field motion changes are accounted for. Secondly, this thesis describes a system that learns the true distribution of motion along the detected paths and maintains it, without generalising the direction distributions in a way that would cause loss of precision.
No impositions are made on expected motion: if the underlying motion is well defined (one-way or two-way), it is represented as a well-defined distribution, and as a mixture of directions if the underlying motion presents itself as such. Finally, time as a video feature allows activity to reinforce itself on a daily basis, such that motion patterns for a given time and space define themselves through reinforcement, which acts as the model used for abnormality detection in time and space (spatio-temporal). The system has been tested with real-world datasets with varying fields of camera view. Testing has shown no false negatives, very few false positives, and good detection of crowd abnormalities with respect to the ground truths of the datasets used.
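The time-of-day reinforcement idea described above can be sketched in a few lines. The following toy model is not the thesis's implementation; the class name, bin count, and threshold are all invented for illustration. It keeps a direction histogram per scene cell and hour, and flags motion whose direction has rarely been reinforced for that time and place:

```python
import math
from collections import defaultdict

class SpatioTemporalModel:
    """Toy spatio-temporal activity model: per-(cell, hour) direction
    histograms are reinforced by observations; motion whose direction
    bin has rarely been seen for that cell and hour is flagged."""

    def __init__(self, n_bins=8, threshold=0.05):
        self.n_bins = n_bins
        self.threshold = threshold
        # (cell, hour) -> histogram of observed motion directions
        self.counts = defaultdict(lambda: [0] * n_bins)

    def _bin(self, angle):
        # Map an angle in radians to one of n_bins direction bins.
        return int((angle % (2 * math.pi)) / (2 * math.pi) * self.n_bins)

    def reinforce(self, cell, hour, angle):
        self.counts[(cell, hour)][self._bin(angle)] += 1

    def is_abnormal(self, cell, hour, angle):
        hist = self.counts[(cell, hour)]
        total = sum(hist)
        if total == 0:
            return True  # nothing ever observed here at this hour
        return hist[self._bin(angle)] / total < self.threshold

model = SpatioTemporalModel()
for _ in range(100):                    # commuters flow "east" at 8am
    model.reinforce(cell=(3, 4), hour=8, angle=0.0)
print(model.is_abnormal((3, 4), 8, 0.0))      # familiar direction -> False
print(model.is_abnormal((3, 4), 8, math.pi))  # reverse flow at 8am -> True
```

Because nothing is imposed on the direction distribution, a cell that genuinely sees two-way flow would simply accumulate two populated bins.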

    Stand for Something or Fall for Everything: Predict Misinformation Spread with Stance-Aware Graph Neural Networks

    Although the pervasive spread of misinformation on social media platforms has become a pressing challenge, existing platform interventions have shown limited success in curbing its dissemination. In this study, we propose a stance-aware graph neural network (stance-aware GNN) that leverages users’ stances to proactively predict misinformation spread. As different user stances can form unique echo chambers, we customize four information passing paths in the stance-aware GNN, while trainable attention weights provide explainability by highlighting each structure’s importance. Evaluated on a real-world dataset, the stance-aware GNN outperforms benchmarks by 32.65% and exceeds advanced GNNs without user stance by over 4.69%. Furthermore, the attention weights indicate that users’ opposition stances have a higher impact on their neighbors’ behaviors than supportive ones, functioning as social correction that halts misinformation propagation. Overall, our study provides an effective predictive model for platforms to combat misinformation and highlights the impact of user stances on misinformation propagation.
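As a rough illustration of stance-conditioned message passing (this is not the paper's architecture; the node features, edges, and weights below are invented for the sketch), each edge can be routed through one of four stance-pair paths, with the message scaled by that path's attention weight:

```python
# Toy stance-aware propagation step: every edge belongs to one of four
# paths determined by the (sender, receiver) stance pair, and each path
# carries its own attention weight. All values here are illustrative.

def stance_aware_pass(features, edges, stances, attention):
    """One propagation step: each node adds in neighbour features,
    scaled by the attention weight of the stance-pair path."""
    updated = dict(features)  # synchronous update from old features
    for src, dst in edges:
        path = f"{stances[src]}->{stances[dst]}"
        updated[dst] += attention[path] * features[src]
    return updated

features = {"a": 1.0, "b": 1.0, "c": 0.0}
stances = {"a": "oppose", "b": "support", "c": "support"}
edges = [("a", "c"), ("b", "c")]
# Echoing the paper's finding that opposition stances weigh more:
attention = {"support->support": 0.25, "support->oppose": 0.25,
             "oppose->support": 0.5, "oppose->oppose": 0.25}
out = stance_aware_pass(features, edges, stances, attention)
print(out["c"])  # 0.5 from "a" (oppose path) + 0.25 from "b" = 0.75
```

In a trained GNN these path weights would be learned, which is what lets them double as an explanation of which echo-chamber structure drives the prediction.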

    Wastes from Industrialized Nations: A Socio-economic Inquiry on E-waste Management for the Recycling Sector in Nigeria

    An “assessment of waste electrical and electronic equipment (WEEE or e-waste) management strategies in Southeastern Nigeria” was conducted with a view to suggesting appropriate, implementable measures. This submission presents a key outcome of a socioeconomic study on the factors influencing the paths of e-waste generation and control, suggesting innovative measures and market potential for firms in the recycling sector. The study highlighted strategic features in line with the socioeconomic assessment of e-waste management, and the potential for innovation in e-waste recycling was discussed in line with elements of sustainability. Data were collected by questionnaire administration, with purposive selection of local government areas from five mutually exclusive states, and were analyzed using descriptive statistics. Results revealed that the factors limiting e-waste management include cheap pricing, availability, quality, and the perceived superiority of obsolete e-devices over newer EEE. Sustainable benchmarks for evaluating and adopting e-waste recycling technologies were recommended.

    Soft Computing approaches in ocean wave height prediction for marine energy applications

    The aim of this thesis is to investigate the use of Soft Computing (SC) techniques applied to wave energy. Among all available marine energies, wave energy exhibits the greatest future potential because, besides being technically efficient, it does not cause significant environmental problems. Its practical importance rests on two facts: 1) it is roughly 1000 times denser than wind energy, and 2) many ocean regions with abundant wave resources lie close to populated areas that demand electricity. The drawback is that waves are harder to characterize than tides owing to their stochastic nature. SC techniques achieve results similar to, and even better than, those of other statistical methods for short-term estimates (up to 24 h), with the added advantage of requiring far less computational effort than numerical-physical methods. This is one of the reasons we decided to explore SC techniques for wave energy; the other lies in the fact that its intermittency can affect how the generated electricity is integrated into the power grid. These two reasons prompted us to explore the feasibility of new SC approaches along two novel research lines. The first is a new approach combining a Genetic Algorithm (GA) with an Extreme Learning Machine (ELM), applied to reconstructing the significant wave height at a buoy whose data have been lost (for example, in a storm) using data from nearby buoys. Our GA-ELM algorithm selects a reduced set of wave parameters that maximizes the reconstruction of the significant wave height at the buoy with missing data from the data of neighbouring buoys.
The method and results of this research have been published in: Alexandre, E., Cuadra, L., Nieto-Borge, J. C., Candil-García, G., Del Pino, M., & Salcedo-Sanz, S. (2015). A hybrid genetic algorithm—extreme learning machine approach for accurate significant wave height reconstruction. Ocean Modelling, 92, 115-123. The second contribution combines concepts from SC, Smart Grids (SGs), and Complex Networks (CNs). It is motivated by two important, mutually interrelated aspects: 1) how wave energy converters (WECs) are electrically interconnected to form a farm, and 2) how that farm is connected to the onshore power grid. Both relate to the random, intermittent character of wave-generated electricity. To integrate it better without affecting grid stability, the Smart Wave Farm (SWF) concept should be adopted. Like an SG, an SWF uses sensors and algorithms to forecast waves and to control the production and/or storage of the generated electricity and how it is injected into the grid. In our approach, an SWF and its grid connection can be viewed as an SG, which in turn can be modelled as a complex network. With this formulation, which generalizes to any network of renewable generators and nodes that consume and/or store energy, we have proposed an evolutionary algorithm that optimizes the robustness of such an SG, modelled as a complex network, against random failures or abnormal operating conditions. The model and results have been published in: Cuadra, L., Pino, M. D., Nieto-Borge, J. C., & Salcedo-Sanz, S. (2017). Optimizing the Structure of Distribution Smart Grids with Renewable Generation against Abnormal Conditions: A Complex Networks Approach with Evolutionary Algorithms. Energies, 10(8), 1097.
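The GA-driven selection of buoy wave parameters can be sketched as follows. This toy version uses synthetic data, and a trivial averaging predictor stands in for the ELM of Alexandre et al. (2015); it only illustrates the idea of evolving a binary mask over neighbouring-buoy features:

```python
import random

random.seed(0)

def fitness(mask, samples):
    """Negative mean absolute error of predicting the target buoy's
    wave height as the mean of the selected neighbour features."""
    if not any(mask):
        return float("-inf")
    err = 0.0
    for feats, target in samples:
        sel = [f for f, m in zip(feats, mask) if m]
        err += abs(sum(sel) / len(sel) - target)
    return -err / len(samples)

def evolve(samples, n_feats=5, pop=20, gens=40):
    """Elitist GA over binary feature masks: keep the best half,
    refill with one-point crossover children plus rare mutation."""
    population = [[random.randint(0, 1) for _ in range(n_feats)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda m: fitness(m, samples), reverse=True)
        parents = population[: pop // 2]
        children = []
        for _ in range(pop - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_feats)
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:       # mutation: flip one bit
                i = random.randrange(n_feats)
                child[i] ^= 1
            children.append(child)
        population = parents + children
    return max(population, key=lambda m: fitness(m, samples))

# Synthetic "buoys": the target height equals the mean of features 0 and 2,
# so the ideal mask selects exactly those two neighbour parameters.
data = []
for _ in range(50):
    feats = [random.uniform(0.0, 4.0) for _ in range(5)]
    data.append((feats, (feats[0] + feats[2]) / 2))
best = evolve(data)
print(best)  # mask of selected features, e.g. [1, 0, 1, 0, 0]
```

In the published GA-ELM, the inner predictor is an ELM trained on the selected parameters, and the fitness is the reconstruction accuracy at the buoy with missing data.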

    VIDEO FOREGROUND LOCALIZATION FROM TRADITIONAL METHODS TO DEEP LEARNING

    These days, the detection of Visual Attention Regions (VAR), such as moving objects, has become an integral part of many computer vision applications, viz. pattern recognition, object detection and classification, video surveillance, autonomous driving, human-machine interaction (HMI), and so forth. Moving object identification using bounding boxes has matured to the level of localizing objects along their rigid borders, a process called foreground localization (FGL). Over the decades, many image segmentation methodologies have been well studied, devised, and extended to suit video FGL. Despite that, the problem of video foreground (FG) segmentation remains an intriguing yet appealing task due to its ill-posed nature and myriad applications. Maintaining spatial and temporal coherence, particularly at object boundaries, remains challenging and computationally burdensome. It gets even harder when the background is dynamic, with swaying tree branches or a shimmering water body, under illumination variations and shadows cast by the moving objects, or when the video sequences have jittery frames caused by vibrating or unstable camera mounts on a surveillance post or moving robot. At the same time, in the analysis of traffic flow or human activity, the performance of an intelligent system substantially depends on the robustness of its localization of the VAR, i.e., the FG. To this end, the natural question arises: what is the best way to deal with these challenges? Thus, the goal of this thesis is to investigate plausible real-time, performant implementations, from traditional approaches to modern-day deep learning (DL) models, for FGL applicable to many video content-aware applications (VCAA). It focuses mainly on improving existing methodologies by harnessing multimodal spatial and temporal cues for delineated FGL.
The first part of the dissertation is dedicated to enhancing conventional sample-based and Gaussian mixture model (GMM)-based video FGL using a probability mass function (PMF), temporal median filtering, fusion of CIEDE2000 color similarity, color distortion, and illumination measures, and selection of an appropriate adaptive threshold to extract the FG pixels. Subjective and objective evaluations show the improvements over a number of similar conventional methods. The second part of the thesis focuses on exploiting and improving deep convolutional neural networks (DCNN) for the problem mentioned earlier. Consequently, three models akin to encoder-decoder (EnDec) networks are implemented with various innovative strategies to improve the quality of the FG segmentation, including double encoding with slow decoding feature learning, multi-view receptive field feature fusion, and the incorporation of spatiotemporal cues through long short-term memory (LSTM) units in both the subsampling and upsampling subnetworks. Experimental studies are carried out thoroughly, from baselines to challenging video sequences, to prove the effectiveness of the proposed DCNNs. The analysis demonstrates the architectural efficiency of the proposed models over other methods, while quantitative and qualitative experiments show their competitive performance compared to the state-of-the-art.
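For context, the conventional per-pixel background modelling that the first part builds on can be sketched as a single running Gaussian per pixel. This is a deliberate simplification of the GMM pipelines; the CIEDE2000 colour fusion and learned adaptive thresholds mentioned above are not reproduced, and the parameter values are illustrative:

```python
# Minimal per-pixel running-Gaussian background model on grayscale
# frames: a pixel is foreground if it deviates from its modelled mean
# by more than k standard deviations; background pixels then update
# the model with learning rate alpha.

def make_model(width, height, init=0.0):
    return {"mean": [[init] * width for _ in range(height)],
            "var":  [[25.0] * width for _ in range(height)]}

def update_and_segment(model, frame, alpha=0.05, k=2.5):
    """Return a binary FG mask for `frame` and adapt the model."""
    mask = []
    for y, row in enumerate(frame):
        mask_row = []
        for x, value in enumerate(row):
            mu = model["mean"][y][x]
            var = model["var"][y][x]
            d = value - mu
            fg = 1 if d * d > (k * k) * var else 0
            mask_row.append(fg)
            if not fg:  # adapt only where the pixel looks like background
                model["mean"][y][x] = mu + alpha * d
                model["var"][y][x] = (1 - alpha) * var + alpha * d * d
        mask.append(mask_row)
    return mask

model = make_model(3, 1, init=10.0)
print(update_and_segment(model, [[10.0, 11.0, 200.0]]))  # [[0, 0, 1]]
```

A GMM generalizes this to several Gaussians per pixel so that multi-modal backgrounds (swaying branches, shimmering water) are not constantly flagged as foreground.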

    Validated force-based modeling of pedestrian dynamics

    This dissertation investigates force-based modeling of pedestrian dynamics. With the quantitative validation of mathematical models in focus, principal questions are addressed throughout this work: Is it feasible to describe pedestrian dynamics solely with equations of motion derived from Newtonian dynamics? On the road to answering this question, we investigate the consequences and side effects of completing a force-based model with additional rules and of imposing restrictions on the state variables. Another important issue is the representation of modeled pedestrians: does the geometrical shape of a two-dimensional projection of the human body matter when modeling pedestrian movement? If yes, which form is most suitable? This point is investigated in the second part while introducing a new force-based model. Moreover, we highlight a frequently underestimated aspect of force-based modeling: to what extent does the steering of pedestrians influence their dynamics? In the third part we introduce four possible strategies for defining the desired direction of each pedestrian when moving in a facility. Finally, the effects of the aforementioned approaches are discussed by means of numerical tests in different geometries with one set of model parameters, and the validation of the developed model is examined by comparing simulation results with empirical data.
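The Newtonian starting point of such force-based models can be illustrated with a minimal social-force-style update: a driving force relaxing the pedestrian toward a desired velocity, plus an exponential repulsion from a neighbour, integrated with explicit Euler. The parameter values below are illustrative, not the calibrated ones from the dissertation:

```python
import math

def step(pos, vel, desired_dir, neighbour, dt=0.1,
         v0=1.34, tau=0.5, A=2.0, B=0.3):
    """One Euler step of a toy force-based pedestrian model (unit mass)."""
    # Driving force: relax toward the desired velocity v0*e within time tau.
    fx = (v0 * desired_dir[0] - vel[0]) / tau
    fy = (v0 * desired_dir[1] - vel[1]) / tau
    # Repulsive force decaying exponentially with distance to a neighbour.
    dx, dy = pos[0] - neighbour[0], pos[1] - neighbour[1]
    dist = math.hypot(dx, dy)
    if dist > 0:
        rep = A * math.exp(-dist / B)
        fx += rep * dx / dist
        fy += rep * dy / dist
    # Integrate the equations of motion.
    vel = (vel[0] + fx * dt, vel[1] + fy * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel

pos, vel = (0.0, 0.0), (0.0, 0.0)
for _ in range(50):
    pos, vel = step(pos, vel, desired_dir=(1.0, 0.0), neighbour=(5.0, 0.5))
print(round(vel[0], 2))  # approaches the desired speed v0 = 1.34
```

The dissertation's questions bite exactly here: whether such pure equations of motion suffice, or whether extra rules, body-shape representations, and steering strategies for `desired_dir` are needed to reproduce empirical data.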

    Ambidexterity: a possible balance to manage Complexity

    The present article originates from the effort to answer the following question: is it possible for an organizational structure to steer between organizational routines and Black Swans (Taleb, 2007)? Unexpected, unique, and low-frequency events are “unknown variables” that, despite the planning and precautions deployed, catch an organization off-guard and might have catastrophic consequences. Unexpected events impact organizations, undermining their knowledge and redefining the list of competences an organization needs in order to be competitive. The main goal of the present article is to shed light on the role of, and the challenges faced by, firms in the defining moments of adapting their organizational assets, namely their structure. The rational pattern of adaptation is exemplified by the use of ambidextrous organizational structures, which focus on activities that can be defined as exploration and exploitation. Within the analysis of “the science of complexity”, parallels, paradoxes, and metaphors representing a synthesis of a largely shared doctrine are investigated: firms need to utilize known variables, and sometimes unknown ones, that are inevitably complex, in order to find the right fit, react swiftly to change, compete successfully, and obtain results.

    Selective Subtraction: An Extension of Background Subtraction

    Background subtraction, or scene modeling, techniques model the background of the scene using the stationarity property and classify the scene into two classes, foreground and background. In doing so, most moving objects become foreground indiscriminately, except perhaps for some waving tree leaves, water ripples, or a water fountain, which are typically learned as part of the background using a large training set of video data. Traditional techniques exhibit a number of limitations, including the inability to model partial background or subtract partial foreground, inflexibility of the model being used, the need for large training data, and computational inefficiency. In this thesis, we present our work to address each of these limitations and propose algorithms in two major areas of research within background subtraction, namely single-view and multi-view based techniques. We first propose the use of both spatial and temporal properties to model a dynamic scene and show how the Mapping Convergence framework within Support Vector Mapping Convergence (SVMC) can be used to minimize training data. We also introduce a novel concept of background as the objects other than the foreground, which may include moving objects in the scene that cannot be learned from a training set because they occur only irregularly and sporadically, e.g. a walking person. We propose a selective subtraction method as an alternative to standard background subtraction and show that a reference plane in a scene viewed by two cameras can be used as the decision boundary between foreground and background. In our definition, the foreground may actually occur behind a moving object. Our novel use of projective depth as a decision boundary allows us to extend the traditional definition of background subtraction and propose a much more powerful framework. Furthermore, we show that the reference plane can be selected very flexibly, using, for example, the actual moving objects in the scene if needed.
We present a diverse set of examples to show that: (i) the technique performs better than standard background subtraction techniques without the need for training, camera calibration, disparity map estimation, or special camera configurations; (ii) it is potentially more powerful than standard methods because its flexibility makes it possible to select in real time what to filter out as background, regardless of whether the object is moving or not, or whether it is a rare event or a frequent one; (iii) the technique can be used in a variety of situations, including images captured with stationary or hand-held cameras, and for both indoor and outdoor scenes. We provide extensive results to show the effectiveness of the proposed framework in a variety of very challenging environments.
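The decision-boundary idea can be illustrated with a toy sketch: rather than consulting a statistical background model, each scene point is labelled by which side of the chosen reference plane it lies on. Real selective subtraction recovers this side information as projective depth from two camera views; the plane and points below are invented to keep the example self-contained:

```python
# Toy "reference plane as decision boundary" classifier: foreground is
# everything on the camera side of the plane, background is everything
# behind it, moving or not. Plane is given as (a, b, c, d) in
# ax + by + cz + d = 0 form.

def plane_side(point, plane):
    """Signed distance-like value of a 3D point relative to the plane."""
    a, b, c, d = plane
    x, y, z = point
    return a * x + b * y + c * z + d

def selective_subtract(points, plane):
    """Label points by the side of the reference plane they fall on."""
    return ["foreground" if plane_side(p, plane) > 0 else "background"
            for p in points]

# Reference plane z = 5, oriented so the camera side (z < 5) is positive:
plane = (0.0, 0.0, -1.0, 5.0)
points = [(0.0, 0.0, 2.0),   # pedestrian in front of the plane
          (1.0, 0.0, 8.0)]   # moving object behind the plane
print(selective_subtract(points, plane))  # ['foreground', 'background']
```

Note how the second point would be foreground under standard background subtraction simply because it moves; here it is background by choice of plane, which is the flexibility point (ii) above refers to.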

    A statistical framework for embodied music cognition
