85 research outputs found

    Cyberspace and Real-World Behavioral Relationships: Towards the Application of Internet Search Queries to Identify Individuals At-risk for Suicide

    Get PDF
    The Internet has become an integral and pervasive aspect of society. Not surprisingly, the growth of e-commerce has led to focused research on identifying relationships between user behavior in cyberspace and the real world: retailers track the items customers view and purchase in order to recommend additional products and to better direct advertising. As the relationship between online search patterns and real-world behavior becomes better understood, the practice is likely to expand to other applications. Indeed, Google Flu Trends has implemented an algorithm that accurately charts the relationship between the number of people searching for flu-related topics on the Internet and the number of people who actually have flu symptoms in that region. Because the results are available in real time, studies show Google Flu Trends estimates typically run about two weeks ahead of reports from the Centers for Disease Control and Prevention. The Air Force has devoted considerable resources to suicide awareness and prevention. Despite these efforts, suicide rates have remained largely unaffected. The Air Force Suicide Prevention Program assists family, friends, and co-workers of airmen in recognizing and discussing behavioral changes with at-risk individuals. Based on other successes in correlating behaviors in cyberspace and the real world, is it possible to leverage online activities to help identify individuals who exhibit suicidal or depression-related symptoms? This research explores the notion of using Internet search queries to classify individuals with common search patterns. Text mining was performed on user search histories for a one-month period from nine Air Force installations. The search histories were clustered based on search term probabilities, providing the ability to identify relationships between individuals searching for common terms. Analysis was then performed to identify relationships between individuals searching for key terms associated with suicide, anxiety, and post-traumatic stress
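    The clustering idea described above can be sketched in miniature: represent each user's search history as a term-probability vector and measure its similarity to a profile built from key terms. The histories, seed terms, and threshold below are hypothetical illustrations, not the study's actual data or method.

```python
from collections import Counter
import math

# Hypothetical per-user search histories (tokenised query terms).
histories = {
    "user_a": ["flu", "symptoms", "insomnia", "anxiety"],
    "user_b": ["insomnia", "anxiety", "ptsd", "help"],
    "user_c": ["recipes", "football", "scores"],
}

RISK_TERMS = {"anxiety", "ptsd", "insomnia"}  # illustrative seed terms only

def term_probabilities(terms):
    """Normalise raw term counts into a probability distribution."""
    counts = Counter(terms)
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

def cosine(p, q):
    """Cosine similarity between two sparse probability vectors."""
    dot = sum(p[t] * q.get(t, 0.0) for t in p)
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

probs = {u: term_probabilities(ts) for u, ts in histories.items()}

# Flag users whose term distributions are close to the key-term profile.
profile = term_probabilities(list(RISK_TERMS))
flagged = [u for u, p in probs.items() if cosine(p, profile) > 0.5]
```

    In a real pipeline the profile and threshold would come from the clustering itself rather than a hand-picked term list.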

    Toward human-like pathfinding with hierarchical approaches and the GPS of the brain theory

    Get PDF
    Pathfinding for autonomous agents and robots has traditionally been driven by finding optimal paths, where optimality typically means finding the shortest path between two points in a given environment. However, optimality may not always be strictly necessary. For example, in video games, paths for non-player characters (NPCs) must often be computed under strict time constraints to guarantee real-time simulation. In those cases, performance matters more than finding the shortest path, especially because a sub-optimal path can often be just as convincing from the player's point of view. When simulating virtual humanoids, pathfinding has also been used with the same goal: finding the shortest path. However, humans very rarely follow precise shortest paths, so other aspects of human decision making and path planning strategies should be incorporated into current simulation models. In this thesis we first focus on improving performance to handle as many virtual agents as possible, and then draw on neuroscience research to propose pathfinding algorithms that attempt to mimic humans in a more realistic manner. In the case of simulating NPCs for video games, one of the main challenges is to compute paths as efficiently as possible for groups of agents. As both the size of the environments and the number of autonomous agents increase, it becomes harder to obtain results in real time under the constraints of memory and computing resources. For this purpose we explored hierarchical approaches for two reasons: (1) they have shown important performance improvements for regular grids and other abstract problems, and (2) humans tend to plan trajectories following a top-down abstraction, focusing first on high-level locations and then refining the path as they move between them.
    Therefore, we believe that hierarchical approaches combine the best of our two goals: improving performance for multi-agent pathfinding and achieving more human-like pathfinding. Hierarchical approaches such as HNA* (Hierarchical A* for Navigation Meshes) can compute paths more efficiently, although only for certain configurations of the hierarchy. For other configurations, the method suffers from a bottleneck in the step that connects the start and goal positions with the hierarchy, which can drop performance drastically. In this thesis we present different approaches to remove the HNA* bottleneck and thus obtain a performance boost for all hierarchical configurations. The first method relies on additional memory storage, and the second uses parallelism on the GPU. Our comparative evaluation shows that both approaches offer speed-ups as high as 9x over A*, with no limitations based on hierarchical configuration. We then further exploit the potential of CUDA parallelism to extend our implementation of HNA* to multi-agent pathfinding. Our method can compute paths for over 500K agents simultaneously in real time, with speed-ups above 15x over a parallel multi-agent implementation using A*. Finally, we study neuroscience research on the way humans build mental maps, in order to propose novel algorithms that take those findings into account when simulating virtual humans. We propose a novel pathfinding algorithm inspired by neuroscience research on how the brain learns and builds cognitive maps. Our method represents the space as a hexagonal grid, based on the "GPS of the brain" theory, and fires memory cells as counters.
    Our pathfinder then combines a method for exploring unknown environments while building such a cognitive map with an A* search whose modified heuristic takes the cognitive map into account
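    The final idea can be illustrated with a minimal sketch: A* over a hexagonal grid (axial coordinates) whose heuristic discounts cells with high visit counters, loosely mimicking the memory-cell counters described above. The grid encoding, discount factor, and familiarity map are assumptions for illustration, not the thesis's actual algorithm.

```python
import heapq

# Axial hex coordinates; distance via the standard cube-coordinate metric.
def hex_distance(a, b):
    aq, ar = a
    bq, br = b
    return (abs(aq - bq) + abs(ar - br) + abs(aq + ar - bq - br)) // 2

HEX_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def astar_hex(start, goal, blocked=frozenset(), familiarity=None):
    """A* over a hex grid. `familiarity` maps cells to visit counters
    (a stand-in for the memory-cell counters); familiar cells get a
    small heuristic discount, biasing paths toward known ground."""
    familiarity = familiarity or {}

    def h(c):
        return hex_distance(c, goal) - 0.1 * min(familiarity.get(c, 0), 5)

    open_set = [(h(start), 0, start, [start])]
    best_g = {}
    while open_set:
        f, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        if best_g.get(cell, float("inf")) <= g:
            continue  # already expanded with an equal or better cost
        best_g[cell] = g
        for dq, dr in HEX_DIRS:
            nxt = (cell[0] + dq, cell[1] + dr)
            if nxt not in blocked:
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

# Route around two blocked cells on the direct line from (0,0) to (3,0).
path = astar_hex((0, 0), (3, 0), blocked={(1, 0), (1, -1)})
```

    With an empty familiarity map the heuristic reduces to plain hex distance, so the sketch behaves as standard A*.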

    Evolutionary Computation for Digital Artefact Design

    Get PDF
    This thesis presents novel systems for the automatic and semi-automatic design of digital artefacts. Currently, users wanting to create digital models, such as three-dimensional (3D) digital landscapes and website colour schemes, need to possess significant expertise, as the tools involved demand a high level of knowledge and skill. By developing an intuitive algorithmic process, founded on evolutionary computation (EC), this research enables non-specialist human designers to create digital assets more efficiently. This is achieved by replacing design activities that require significant manual input with algorithmic functions, thereby greatly improving the efficiency and accessibility of the practices involved. This research initially focuses on the generation of 3D landscapes; the latter part concentrates on the identification of text and background colour combinations more amenable to the reading process, particularly for readers with vision impairments. Choosing an ideal combination of colours requires knowledge of the cognitive and psychological processes involved. Designers need to be aware of colour contrast ratios, brightness, and variations, which would require a series of aesthetic measurements if tested manually. In an effort to provide a colour design facility, this research offers algorithms that can generate colour schemes, based on the aforementioned principles, which can be used to derive an optimum scheme for a website. This research demonstrates a novel interactive genetic algorithm (IGA), coupled with the use of computational aesthetics, suitable for use in the evolution of terrain generation and digital landscape design. It also provides a tool for automatically creating EC-driven colour palettes for web design via evolutionary searches. Experimental trials use the EC framework developed in this research, applying both the IGA technique and the computational aesthetic measures.
    Results indicate that end-users can build any target digital landscape design with fewer inputs and greater ease, and, if required, can also automate the whole process to evolve aesthetically pleasing landscape designs. The results obtained for website colour schemes show that end-users can quickly develop a colour scheme without fine-tuning colour combinations, and that the resulting schemes compete in quality with colour schemes designed by professional website developers
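    A minimal sketch of the colour-scheme idea: a toy genetic algorithm that evolves foreground/background pairs toward a high WCAG contrast ratio, one of the contrast measures such a system could optimise. The population size, mutation scheme, and fitness choice are illustrative assumptions, not the thesis's actual framework; the luminance and contrast formulas follow the WCAG 2.x definitions.

```python
import random

def _lin(c):
    """sRGB channel (0-255) to linear value, per the WCAG 2.x definition."""
    c = c / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    r, g, b = (_lin(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast(fg, bg):
    """WCAG contrast ratio, from 1:1 (identical) to 21:1 (black on white)."""
    lo, hi = sorted([luminance(fg), luminance(bg)])
    return (hi + 0.05) / (lo + 0.05)

def mutate(colour, rate=0.3):
    """Randomly nudge some channels, clamped to the 0-255 range."""
    return tuple(min(255, max(0, c + random.randint(-40, 40)))
                 if random.random() < rate else c for c in colour)

def evolve(generations=60, pop_size=20, seed=1):
    """Toy GA: evolve (foreground, background) pairs toward high contrast."""
    random.seed(seed)
    rand = lambda: tuple(random.randrange(256) for _ in range(3))
    pop = [(rand(), rand()) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: contrast(*p), reverse=True)
        elite = pop[: pop_size // 2]          # keep the best half
        pop = elite + [(mutate(fg), mutate(bg)) for fg, bg in elite]
    return pop[0]

best = evolve()
```

    A real IGA would also fold human preference votes into the fitness alongside such computational aesthetic measures.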

    Pattern Recognition

    Get PDF
    A wealth of advanced pattern recognition algorithms is emerging at the interface between technologies for effective visual features and the human-brain cognition process. Effective visual features are made possible through rapid developments in appropriate sensor equipment, novel filter designs, and viable information processing architectures, while the understanding of the human-brain cognition process broadens the ways in which computers can perform pattern recognition tasks. The present book collects representative research from around the globe focusing on low-level vision, filter design, features and image descriptors, data mining and analysis, and biologically inspired algorithms. The 27 chapters covered in this book disclose recent advances and new ideas in promoting the techniques, technology and applications of pattern recognition

    Novel computational techniques for mapping and classifying Next-Generation Sequencing data

    Get PDF
    Since their emergence around 2006, Next-Generation Sequencing (NGS) technologies have been revolutionizing biological and medical research. Quickly obtaining an extensive amount of short or long DNA sequence reads from almost any biological sample enables detecting genomic variants, revealing the composition of species in a metagenome, deciphering cancer biology, decoding the evolution of living or extinct species, and understanding human migration patterns and human history in general. The pace at which the throughput of sequencing technologies is increasing surpasses the growth of storage and computing capacities, which creates new computational challenges in NGS data processing. In this thesis, we present novel computational techniques for read mapping and taxonomic classification. With more than a hundred published mappers, read mapping might be considered fully solved. However, the vast majority of mappers follow the same paradigm, and little attention has been paid to non-standard mapping approaches. Here, we propose so-called dynamic mapping, which we show to significantly improve the resulting alignments compared to traditional mapping approaches. Dynamic mapping exploits the information from previously computed alignments to improve the mapping of subsequent reads. We provide the first comprehensive overview of this method and demonstrate its qualities using Dynamic Mapping Simulator, a pipeline that compares various dynamic mapping scenarios to static mapping and iterative referencing. An important component of a dynamic mapper is an online consensus caller, i.e., a program that collects alignment statistics and guides updates of the reference in an online fashion. We provide Ococo, the first online consensus caller, which maintains statistics for individual genomic positions using compact bit counters.
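    The idea of compact per-position counters can be sketched as follows. The 4-bit saturating counters and the packing scheme are illustrative assumptions, not Ococo's actual memory layout: four nucleotide counters share one 16-bit word per genomic position.

```python
# Hypothetical compact counters: per genomic position, pack four nucleotide
# counters (A, C, G, T) into one 16-bit word, 4 bits each, saturating at 15.
BITS = 4
MASK = (1 << BITS) - 1

def update(counters, pos, nucleotide):
    """Increment the counter of `nucleotide` at `pos`, saturating at MASK."""
    shift = "ACGT".index(nucleotide) * BITS
    count = (counters[pos] >> shift) & MASK
    if count < MASK:
        counters[pos] += 1 << shift
    return counters

def consensus(counters, pos):
    """Return the nucleotide with the highest counter at `pos`."""
    counts = [(counters[pos] >> i * BITS) & MASK for i in range(4)]
    return "ACGT"[max(range(4), key=counts.__getitem__)]

genome_len = 5
counters = [0] * genome_len
for nuc in "AACAG":          # stream of aligned bases covering position 2
    update(counters, 2, nuc)
call = consensus(counters, 2)
```

    Saturating small counters trade exact counts for a fixed, tiny memory footprint per position, which is what makes streaming consensus calling cheap.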
    Beyond its application to dynamic mapping, Ococo can be employed as an online SNP caller in various analysis pipelines, enabling SNP calling from a stream without saving the alignments to disk. Metagenomic classification of NGS reads is the other major topic studied in the thesis. Given a database with thousands of reference genomes placed on a taxonomic tree, the task is to rapidly assign a huge number of NGS reads to tree nodes, and possibly estimate the relative abundance of the species involved. In this thesis, we propose improved computational techniques for this task. In a series of experiments, we show that spaced seeds consistently improve classification accuracy. We provide Seed-Kraken, a spaced-seed extension of Kraken, the most popular classifier at present. Furthermore, we suggest ProPhyle, a new indexing strategy based on a BWT-index, which yields a much smaller and more informative index than Kraken's. Finally, we provide a modified version of BWA that improves the BWT-index for quick k-mer look-up
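    The spaced-seed idea can be sketched in a few lines: a binary mask selects which positions of each window participate in matching, making lookups robust to mismatches at the "don't care" positions. The seed pattern, toy references, and voting rule below are illustrative assumptions, not Seed-Kraken's actual index.

```python
# A spaced seed is a mask over k positions; '1' = match position, '0' = don't care.
SEED = "1101011"  # illustrative 7-position seed of weight 5

def spaced_keys(sequence, seed=SEED):
    """Extract the spaced-seed key for every window of the sequence:
    keep only the characters under the '1' positions of the mask."""
    span = len(seed)
    keys = []
    for i in range(len(sequence) - span + 1):
        window = sequence[i : i + span]
        keys.append("".join(c for c, m in zip(window, seed) if m == "1"))
    return keys

def classify(read, index):
    """Assign the read to the taxon whose reference shares the most keys."""
    votes = {}
    for key in spaced_keys(read):
        for taxon in index.get(key, ()):
            votes[taxon] = votes.get(taxon, 0) + 1
    return max(votes, key=votes.get) if votes else None

# Build a toy index from two hypothetical reference sequences.
refs = {"taxonA": "ACGTACGTACGT", "taxonB": "TTGGCCAATTGG"}
index = {}
for taxon, ref in refs.items():
    for key in spaced_keys(ref):
        index.setdefault(key, set()).add(taxon)

hit = classify("ACGTACGTAC", index)
```

    A contiguous k-mer index is the special case where the seed is all ones; spreading the ones out is what improves sensitivity to mismatches.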

    Automatic identification of mechanical parts for robotic disassembly using deep neural network techniques

    Get PDF
    This work addressed the automatic visual identification of mechanical objects from 3D camera scans, and is part of a wider project on automatic disassembly for remanufacturing. The main challenge of the task was the intrinsic uncertainty about the state of end-of-life products, which required a highly robust identification system. The use of point cloud models also implied the need to deal with significant computational overheads. The state-of-the-art PointNet deep neural network was chosen as the classifier due to its learning capabilities, its suitability for processing 3D models, and its ability to recognise objects irrespective of their pose. To obviate the need to collect a large set of training models, PointNet was trained using examples generated from 3D CAD models and used on scans of real objects. Different tests were carried out to assess PointNet's ability to deal with imprecise sensor readings and partial views. Because pandemic restrictions limited access to the lab, it was not possible to collect a sufficiently systematic set of scans of physical objects; various tests were thus carried out using combinations of CAD models of mechanical and everyday objects, primitive geometric shapes, and real scans of everyday objects from popular machine vision benchmarks. The investigation confirmed PointNet's ability to recognise complex mechanical objects and irregular everyday shapes with good accuracy, generalising the results of learning from geometric shapes and CAD models. The performance of PointNet was not significantly affected by the use of partial views of the objects, a very common case in industrial applications. PointNet showed some limitations when tasked with recognising noisy scenes, and a practical solution was suggested to minimise this problem. To reduce the computational complexity of training a deep architecture on large data sets of 3D scenes, a predator-prey coevolutionary scheme was devised.
    The proposed algorithm evolves subsets of the training set, selecting for these subsets the most difficult examples. The remaining training samples are discarded by the evolutionary procedure, which thus reduces the number of examples presented to the classifier. The experimental results showed that this economy of training samples reduces the execution time of the learning procedure without affecting the network's recognition accuracy. This simplification of the learning procedure is of general importance for the whole deep learning field, since practical implementations are often hindered by the complexity of the training process
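    The subset-evolution idea can be sketched with a toy model and loss function. The sampling scheme, the fixed linear "model", and the squared-error loss below are illustrative assumptions, not the thesis's actual predator-prey algorithm: a subset evolves so that only the examples hardest for the current model survive.

```python
import random

def select_hard_subset(examples, loss_fn, subset_size, rounds=5, seed=0):
    """Toy predator-prey-style selection: a population of training examples
    evolves so that the hardest ones (highest loss) survive; easy examples
    are discarded, shrinking what the classifier must be trained on."""
    rng = random.Random(seed)
    subset = rng.sample(examples, subset_size)
    for _ in range(rounds):
        # Challenge the subset with fresh candidates from the full pool...
        candidates = subset + rng.sample(examples, 3 * subset_size)
        # ...and keep only the hardest examples for the next round.
        candidates.sort(key=loss_fn, reverse=True)
        subset = candidates[:subset_size]
    return subset

# Hypothetical pool: (feature, label) pairs; a fixed linear model scores them.
pool = [(x / 100.0, int(x >= 50)) for x in range(100)]
model = lambda x: x  # predicted probability of label 1

def loss(example):
    x, y = example
    return (model(x) - y) ** 2  # squared error: largest near the boundary

hard = select_hard_subset(pool, loss, subset_size=10)
```

    With this loss, the surviving examples cluster near the decision boundary, which is exactly where the extra training signal lies.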

    Cogitator : a parallel, fuzzy, database-driven expert system

    Get PDF
    The quest to build anthropomorphic machines has led researchers to focus on knowledge and its manipulation. The expert system was proposed as a solution, and worked well in small, well-understood domains. However, these initial attempts highlighted the tedious process of building systems that display intelligence, the most notable obstacle being the Knowledge Acquisition Bottleneck. Attempts to circumvent this problem have led researchers to propose the use of machine learning and databases as sources of knowledge, and attempts to utilise databases in this way have led to the development of Database-Driven Expert Systems. Furthermore, it has been ascertained that a requisite for intelligent systems is powerful computation. In response to these problems and proposals, a new type of database-driven expert system, Cogitator, is proposed. It is shown to circumvent the Knowledge Acquisition Bottleneck and to possess many other advantages over both traditional expert systems and connectionist systems, whilst suffering no serious disadvantages
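    As a flavour of how fuzzy rules over database attributes can be evaluated, here is a minimal sketch using triangular membership functions. The attributes, fuzzy sets, and rule are hypothetical illustrations of the general technique, not drawn from Cogitator itself.

```python
# Triangular membership function: degree of membership in a fuzzy set.
def tri(x, lo, peak, hi):
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (peak - lo) if x <= peak else (hi - x) / (hi - peak)

# Fuzzy sets over an illustrative "temperature" attribute of a database record.
is_warm = lambda t: tri(t, 15, 25, 35)
is_hot = lambda t: tri(t, 25, 40, 55)

# A fuzzy rule fires to the degree its antecedent holds (min = fuzzy AND);
# conclusions carry that degree as a confidence rather than a yes/no answer.
def infer(temp, humidity):
    comfortable = min(is_warm(temp), tri(humidity, 20, 45, 70))
    return {"comfortable": comfortable, "heat_risk": is_hot(temp)}

verdict = infer(temp=27, humidity=50)
```

    Because every record contributes a graded degree rather than a hard match, rules like these can be evaluated directly against database rows in parallel.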

    A Template Matching Table for Speeding Up Game Tree Searches for Hex

    No full text
    Transposition tables have long been a viable tool in the pruning mechanisms of game-tree search algorithms. In such applications, a transposition table can reduce a game-tree to a game-graph with unique board positions at the nodes. This paper proposes a transposition table extension, called a template matching table, where templates that prove winning positions are used to map features of board positions to board values. This paper demonstrates that a game-tree search for the game of Hex can have a more effective pruning mechanism using a template matching table than it does using a transposition table
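    The transposition-table mechanism described above can be sketched with Zobrist hashing, a standard technique for hashing board positions (assumed here for illustration): positions reached by different move orders produce the same key and therefore share one cached evaluation.

```python
import random

# Zobrist hashing: one random bitstring per (cell, player); a position's key
# is the XOR of the bitstrings of its occupied cells. Transpositions (the
# same position reached by different move orders) hash to the same key.
SIZE = 5  # illustrative 5x5 Hex board
random.seed(42)
ZOBRIST = {(cell, player): random.getrandbits(64)
           for cell in range(SIZE * SIZE) for player in (1, 2)}

def position_key(stones):
    """`stones` maps cell index -> player; XOR is order-independent."""
    key = 0
    for cell, player in stones.items():
        key ^= ZOBRIST[(cell, player)]
    return key

table = {}  # transposition table: key -> cached board value

def lookup_or_evaluate(stones, evaluate):
    key = position_key(stones)
    if key not in table:
        table[key] = evaluate(stones)  # the expensive search runs only once
    return table[key]

# Two move orders reaching the same position share one cached evaluation.
a = {0: 1, 6: 2, 12: 1}
b = {12: 1, 0: 1, 6: 2}
calls = []
value = lambda s: calls.append(1) or len(s)  # stand-in for a tree search
v1 = lookup_or_evaluate(a, value)
v2 = lookup_or_evaluate(b, value)
```

    A template matching table extends this by letting a stored template stand in for whole families of positions rather than a single exact key.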

    Computer Science Principles with C++

    Get PDF
    This textbook is intended to be used for a first course in computer science, such as the College Board’s Advanced Placement course known as AP Computer Science Principles (CSP). This book includes all the topics on the CSP exam, plus some additional topics. It takes a breadth-first approach, with an emphasis on the principles which form the foundation for hardware and software. No prior experience with programming should be required to use this book. This version of the book uses the C++ programming language