
    AI: Inventing a new kind of machine.

    A means-ends approach to engineering an artificial intelligence machine now suggests that we focus on the differences between human capabilities and the best computer programs. These differences suggest two basic limitations in the "symbolic" approach. First, human memory is much more than a storehouse where structures are put away, indexed, and rotely retrieved. Second, human reasoning involves more than searching, matching, and recombining previously stored descriptions of situations and action plans. Indeed, these hypotheses are related: Remembering and reasoning both involve reconceptualization. This short paper outlines recent work in situated cognition, robotics, and neural networks that suggests we frame the problem of AI in terms of inventing a new kind of machine.

    AIDED REMOTE RENDERING IN A 5G-AS-A-SERVICE MOBILE NETWORK ENVIRONMENT

    Artificial Intelligence (AI)/Machine Learning (ML)-based mobile applications are becoming increasingly computation-, memory-, and power-intensive. In addition, end devices usually have stringent energy, compute, and memory limitations that prevent running complete offline AI/ML inference on-board. Many AI/ML applications currently offload inference processing from mobile devices to internet data centers (IDC). Techniques presented herein provide for discovering the closest offload server so that the user equipment (UE) can offload some of the rendering work to that server.
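    The abstract does not detail the discovery mechanism itself. As a rough illustration only, the sketch below assumes the UE already knows a list of candidate offload servers with measured round-trip latencies and estimated remote rendering times, and simply picks the lowest-cost server; all names, fields, and numbers here are hypothetical, not from the source.

```python
"""Illustrative sketch: pick the closest offload server by measured cost.

Assumptions (not from the abstract): candidate servers and their round-trip
latencies are already known, e.g. from periodic probing; names and numbers
below are hypothetical.
"""

from dataclasses import dataclass


@dataclass
class OffloadServer:
    name: str
    rtt_ms: float          # measured round-trip time from the UE
    est_render_ms: float   # estimated remote rendering time per frame


def choose_offload_server(servers, local_render_ms):
    """Return the best server, or None if local rendering is faster."""
    if not servers:
        return None
    # "Closest" here means lowest end-to-end cost: network RTT + remote render time.
    best = min(servers, key=lambda s: s.rtt_ms + s.est_render_ms)
    if best.rtt_ms + best.est_render_ms < local_render_ms:
        return best
    return None


if __name__ == "__main__":
    candidates = [
        OffloadServer("edge-a", rtt_ms=8.0, est_render_ms=12.0),
        OffloadServer("edge-b", rtt_ms=25.0, est_render_ms=10.0),
        OffloadServer("idc-1", rtt_ms=60.0, est_render_ms=6.0),
    ]
    choice = choose_offload_server(candidates, local_render_ms=35.0)
    print("offload to:", choice.name if choice else "render locally")
```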

    COMBINED ARTIFICIAL INTELLIGENCE BEHAVIOUR SYSTEMS IN SERIOUS GAMING

    This thesis proposes a novel methodology for creating Artificial Agents with semi-realistic behaviour, where such behaviour is defined as overcoming common limitations of mainstream behaviour systems, such as rapidly switching between actions or ignoring “obvious” event priorities. Behaviour in these Agents is not fully realistic, as some limitations remain: Agents have “perfect” knowledge of the surrounding environment and are unable to transfer knowledge to other Agents (no communication). The novel methodology is achieved by hybridising existing Artificial Intelligence (AI) behaviour systems. In most artificial agents (Agents), behaviour is created using a single behaviour system, whereas this work combines several systems in a novel way to overcome the limitations of each. A further proposal is the separation of behavioural concerns into the behaviour systems best suited to their needs, together with a biologically inspired memory system that further aids the production of semi-realistic behaviour. Current behaviour systems are often inherently limited, and this work shows that by combining systems that complement each other, these limitations can be overcome without the need for workarounds. The work examines Belief-Desire-Intention (BDI) systems and Finite State Machines (FSMs) in detail and explores how these methodologies can complement each other when combined appropriately. By combining these systems, a hybrid system is proposed that is both fast to react and simple to maintain: behaviours are separated into fast-reaction (instinctual) and slow-reaction (behavioural) categories and assigned to the most appropriate system. Computational intelligence learning techniques such as Artificial Neural Networks have been intentionally avoided, as these techniques commonly present their data as a “black box”, whereas this work aims to make knowledge explicitly available to the user. A biologically inspired memory system is further proposed in order to generate additional behaviours in Artificial Agents, such as behaviour related to forgetfulness. The work explores how humans can quickly recall information while still being able to store millions of pieces of information, and how this can be achieved in an artificial system.
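    As a purely illustrative companion to the abstract (the thesis' actual design is not reproduced here), the following sketch shows the general shape of such a hybrid: a finite-state layer that reacts instantly to urgent events, and a simple BDI-style layer that deliberates over goals when nothing urgent is happening. Every class, event, and utility function below is a hypothetical stand-in.

```python
"""Minimal sketch of a hybrid behaviour system (not the thesis implementation):
a reactive finite-state layer for fast, instinctual responses and a simple
BDI-style layer for slower, goal-driven behaviour. All names are hypothetical.
"""


class ReactiveFSM:
    """Fast layer: maps urgent percepts directly to actions."""

    TRANSITIONS = {
        ("idle", "enemy_close"): ("fleeing", "run_away"),
        ("fleeing", "enemy_gone"): ("idle", None),
    }

    def __init__(self):
        self.state = "idle"

    def react(self, event):
        key = (self.state, event)
        if key in self.TRANSITIONS:
            self.state, action = self.TRANSITIONS[key]
            return action
        return None


class BDILayer:
    """Slow layer: picks an intention from desires given current beliefs."""

    def __init__(self):
        self.beliefs = {"hunger": 0.2, "fatigue": 0.1}
        self.desires = {"eat": lambda b: b["hunger"], "rest": lambda b: b["fatigue"]}

    def deliberate(self):
        # Intention = the desire with the highest utility under current beliefs.
        intention = max(self.desires, key=lambda d: self.desires[d](self.beliefs))
        return f"pursue_{intention}"


class HybridAgent:
    def __init__(self):
        self.fsm = ReactiveFSM()
        self.bdi = BDILayer()

    def tick(self, event=None):
        # Instinctual reactions take priority over deliberation.
        action = self.fsm.react(event) if event else None
        return action or self.bdi.deliberate()


agent = HybridAgent()
print(agent.tick("enemy_close"))  # run_away (reactive layer wins)
print(agent.tick())               # pursue_eat (deliberative layer)
```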

    Sequence learning in Associative Neuronal-Astrocytic Network

    The neuronal paradigm of studying the brain has left us with limitations in both our understanding of how neurons process information to achieve biological intelligence and how such knowledge may be translated into artificial intelligence and its most brain-derived branch, neuromorphic computing. Overturning our fundamental assumptions of how the brain works, the recent exploration of astrocytes is revealing that these long-neglected brain cells dynamically regulate learning by interacting with neuronal activity at the synaptic level. Following recent experimental evidence, we designed an associative, Hopfield-type, neuronal-astrocytic network and analyzed the dynamics of the interaction between neurons and astrocytes. We show that astrocytes are sufficient to trigger transitions between learned memories in the neuronal component of the network. Further, we mathematically derive the timing of the transitions, which is governed by the dynamics of the calcium-dependent slow currents in the astrocytic processes. Overall, we provide a brain-morphic mechanism for sequence learning that is inspired by, and aligns with, recent experimental findings. To evaluate our model, we emulated astrocytic atrophy and showed that memory recall becomes significantly impaired once a critical fraction of affected astrocytes is reached. This brain-inspired and brain-validated approach supports our ongoing efforts to incorporate non-neuronal computing elements in neuromorphic information processing.
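    The paper derives the transition timing from calcium-dependent astrocytic slow currents; those equations are not reproduced here. The sketch below only illustrates the general idea with a classic stand-in mechanism: a Hopfield network whose stored memories are destabilised in sequence by a slow, low-pass-filtered current acting through asymmetric weights. The update rule and the parameters (N, tau, lam) are illustrative choices, not the authors' model.

```python
"""Illustrative sketch only: sequential recall in a Hopfield-type network driven
by a slow current, in the spirit of the neuronal-astrocytic model but NOT the
authors' equations. The slow, astrocyte-like variable is a low-pass filter of
neuronal activity that feeds asymmetric "sequence" weights.
"""

import numpy as np

rng = np.random.default_rng(0)

N, K = 200, 3                      # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(K, N))

# Symmetric Hebbian weights store the memories; asymmetric weights encode the
# sequence p0 -> p1 -> p2 -> p0 and only act through the slow current.
W_sym = (patterns.T @ patterns) / N
np.fill_diagonal(W_sym, 0.0)
W_seq = (np.roll(patterns, -1, axis=0).T @ patterns) / N

tau, lam, steps = 10.0, 2.0, 60    # slow-current time constant, coupling, iterations

x = patterns[0].astype(float)      # start in memory 0
s = np.zeros(N)                    # slow, astrocyte-like current state

for t in range(steps):
    s += (x - s) / tau                         # slow current tracks neuronal activity
    h = W_sym @ x + lam * (W_seq @ s)          # fast recall + slow sequence drive
    x = np.where(h >= 0, 1.0, -1.0)
    overlaps = patterns @ x / N                # similarity to each stored memory
    if t % 5 == 0:
        print(f"t={t:2d}  recalled memory = {np.argmax(overlaps)}  overlaps={np.round(overlaps, 2)}")
```

    Running the loop shows the network dwelling in one memory while the slow current builds up, then hopping to the next stored pattern, which is the qualitative behaviour the abstract describes.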

    Sparse Training Theory for Scalable and Efficient Agents

    A fundamental task for artificial intelligence is learning. Deep Neural Networks have proven able to cope with all major learning paradigms, i.e. supervised, unsupervised, and reinforcement learning. Nevertheless, traditional deep learning approaches rely on cloud computing facilities and do not scale well to autonomous agents with low computational resources. Even in the cloud, they suffer from computational and memory limitations and cannot adequately model large physical worlds for agents, which would require networks with billions of neurons. These issues have been addressed in the last few years by the emerging topic of sparse training, in which sparse networks are trained from scratch. This paper discusses the state of the art in sparse training, its challenges and limitations, and introduces a couple of new theoretical research directions which have the potential to alleviate these limitations and push deep learning scalability well beyond its current boundaries. Finally, the impact of these theoretical advancements on complex multi-agent settings is discussed from a real-world perspective, using a smart grid case study.
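    As a rough, self-contained illustration of what training a sparse network from scratch involves (not the specific algorithms surveyed in the paper), the sketch below keeps a fixed budget of active weights in a toy linear layer and periodically prunes the weakest connections by magnitude while regrowing the same number at random locations; all sizes and schedules are arbitrary.

```python
"""Minimal sketch of sparse training from scratch (illustrative only): a single
linear layer keeps a fixed number of active weights; every few SGD steps the
weakest active connections are pruned by magnitude and the same number of new
connections is regrown at random locations.
"""

import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, density, zeta = 64, 16, 0.1, 0.3   # 10% dense, replace 30% of links per update

# Random sparse topology and weights.
n_active = int(density * d_in * d_out)
mask = np.zeros(d_in * d_out, dtype=bool)
mask[rng.choice(d_in * d_out, n_active, replace=False)] = True
mask = mask.reshape(d_in, d_out)
W = rng.normal(0, 0.1, (d_in, d_out)) * mask

W_true = rng.normal(0, 1, (d_in, d_out))        # toy regression target


def prune_and_regrow(W, mask, zeta, rng):
    """Drop the zeta fraction of active weights with the smallest magnitude,
    then activate the same number of currently inactive connections at random."""
    active = np.flatnonzero(mask)
    k = int(zeta * active.size)
    drop = active[np.argsort(np.abs(W.ravel()[active]))[:k]]
    mask.ravel()[drop] = False
    W.ravel()[drop] = 0.0
    inactive = np.flatnonzero(~mask.ravel())
    grow = rng.choice(inactive, k, replace=False)
    mask.ravel()[grow] = True
    W.ravel()[grow] = rng.normal(0, 0.1, k)      # fresh weights for new connections
    return W, mask


lr = 0.05
for step in range(300):
    X = rng.normal(size=(32, d_in))
    Y = X @ W_true
    grad = X.T @ (X @ W - Y) / len(X)
    W -= lr * grad * mask                        # only active weights are updated
    if step % 50 == 49:
        W, mask = prune_and_regrow(W, mask, zeta, rng)
        loss = float(np.mean((X @ W - Y) ** 2))
        print(f"step {step+1}: active={int(mask.sum())}, loss={loss:.3f}")
```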

    The Legacy of Sycamore Gap: The Potential of Photogrammetric AI for Reverse Engineering Lost Heritage with Crowdsourced Data

    © Author(s) 2024. The orientation of crowdsourced and multi-temporal image datasets presents a challenging task for traditional photogrammetry. Indeed, traditional image matching approaches often struggle to find accurate and reliable tie points in images that appear significantly different from one another. In this paper, in order to preserve the memory of the Sycamore Gap tree, a symbol of Hadrian's Wall that was felled in an act of vandalism in September 2023, deep-learning-based features trained specifically on challenging image datasets were employed to overcome the limitations of traditional matching approaches. We demonstrate how unordered crowdsourced images and UAV videos can be oriented and used for 3D reconstruction purposes, together with a recently acquired terrestrial laser scanner point cloud for scaling and referencing. This allows the memory of the Sycamore Gap tree to live on and exhibits the potential of photogrammetric AI (Artificial Intelligence) for reverse engineering lost heritage.
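    The learned-feature matching and structure-from-motion steps of the paper's pipeline depend on a full photogrammetric toolchain and cannot be condensed meaningfully here. The sketch below illustrates only the final "scaling and referencing" idea under simplified assumptions: estimating a similarity transform (scale, rotation, translation) that aligns a photogrammetric model to a terrestrial laser scanner reference from corresponding points, using Umeyama's method on synthetic data.

```python
"""Illustrative sketch (not the paper's workflow): scale and reference a
photogrammetric model against a terrestrial laser scanner (TLS) point cloud by
estimating a similarity transform from corresponding points (Umeyama's method).
The correspondences here are synthetic; in practice they would come from
matched targets or cloud-to-cloud registration.
"""

import numpy as np


def umeyama_similarity(src, dst):
    """Return scale s, rotation R, translation t such that dst ≈ s * R @ src + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                      # keep a proper rotation (det = +1)
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic "SfM" points in an arbitrary, unscaled frame ...
    sfm_pts = rng.uniform(-1, 1, (100, 3))
    # ... and the same points in the metric TLS frame (known transform + noise).
    true_s, angle = 7.3, np.deg2rad(30)
    true_R = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0, 0, 1]])
    tls_pts = true_s * sfm_pts @ true_R.T + np.array([10.0, -5.0, 2.0])
    tls_pts += rng.normal(0, 0.01, tls_pts.shape)

    s, R, t = umeyama_similarity(sfm_pts, tls_pts)
    aligned = s * sfm_pts @ R.T + t
    print(f"estimated scale: {s:.3f} (true {true_s})")
    print(f"RMS alignment error: {np.sqrt(np.mean((aligned - tls_pts) ** 2)):.4f} m")
```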