
    IEEE Access special section editorial: Artificial intelligence enabled networking

    With today’s computer networks becoming increasingly dynamic, heterogeneous, and complex, there is great interest in deploying artificial intelligence (AI) based techniques for the optimization and management of computer networks. AI techniques, which subsume multidisciplinary methods from machine learning, optimization theory, game theory, control theory, and meta-heuristics, have long been applied to optimize computer networks in many diverse settings. Such an approach is gaining increased traction with the emergence of novel networking paradigms that promise to simplify network management (e.g., cloud computing, network functions virtualization, and software-defined networking) and provide intelligent services (e.g., future 5G mobile networks). Looking ahead, greater integration of AI into networking architectures can help develop a future vision of cognitive networks that exhibit network-wide intelligent behavior to solve problems of network heterogeneity, performance, and quality of service (QoS).

    High-Performance Deep learning to Detection and Tracking Tomato Plant Leaf Predict Disease and Expert Systems

    Nowadays, technology and computer science are rapidly developing many tools and algorithms, especially in the field of artificial intelligence. Machine learning drives the development of new methodologies and models and has become a major area of application for artificial intelligence. In addition to conventional neural network architectures, deep learning refers to the use of artificial neural network architectures that include multiple processing layers.
    In this paper, convolutional neural network (CNN) models were designed to detect (diagnose) plant disorders by applying deep learning methods to samples of healthy and unhealthy plant images. The models were trained on an open data set containing 18,000 images of ten different plants, including healthy plants. Several model architectures were trained, achieving a best performance of 97 percent in detecting the respective [plant, disease] pairs. This is a very useful early-warning technique, and its substantially high performance rate means the method can be further improved to support an automated plant disease detection system working in actual farm conditions.
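    The abstract does not include the model itself; as a rough illustration only, a minimal Keras sketch of a CNN image classifier in the spirit of the described setup might look as follows. The dataset directory, image size, class count, and hyperparameters here are assumptions, not details taken from the paper.

    # Minimal Keras sketch of a CNN classifier over [plant, disease] pair
    # labels, loosely following the setup described above. The dataset
    # directory ("plant_images"), image size, class count, and epochs are
    # illustrative assumptions.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 10              # assumed number of [plant, disease] labels
    IMG_SIZE = (128, 128)

    # Assumed directory layout: plant_images/<class_name>/<image>.jpg
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "plant_images", image_size=IMG_SIZE, batch_size=32)

    model = models.Sequential([
        layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_ds, epochs=10)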

    Novel deep learning architectures for marine and aquaculture applications

    Alzayat Saleh's research applied artificial intelligence and machine learning to autonomously recognise fish and their morphological features from digital images. He created new deep learning architectures that solve various computer vision problems specific to the marine and aquaculture context, and found that these techniques can facilitate aquaculture management and environmental protection. Fisheries and conservation agencies can use his results to develop better monitoring strategies and sustainable fishing practices.

    ARTIFICIAL INTELLIGENCE IN BLOCKCHAIN-PROVIDE DIGITAL TECHNOLOGY

    Artificial intelligence technologies are rapidly developing today and form an important branch of computer science. Artificial intelligence is at the heart of research and development of theory, methods, technologies, and applications for modeling and expanding human intelligence. Artificial intelligence technology has three key aspects, namely data, algorithms, and computing power, in the sense that training an algorithm to produce a classification model requires significant data, and the learning process requires substantial computing capabilities. In the age of big data, information can come from a variety of sources (such as sensor systems, Internet of Things (IoT) devices and systems, and social media platforms) and/or belong to different stakeholders. This leads to a number of problems. One of the key problems is isolated data islands, where data from a single source or stakeholder is not available to other parties for training an artificial intelligence model, or where it is financially difficult or impractical to collect a large amount of distributed data for centralized processing and training. There is also the risk of a single point of failure in centralized architectures, which can lead to data intrusion. In addition, data from different sources may be unstructured and differ in quality, and it may be difficult to determine the source and validity of the data. There is also a risk of invalid or malicious data. All these restrictions may affect prediction accuracy. In practice, artificial intelligence models are created, trained, and used by various parties. The learning process is not transparent to users, and users may not fully trust the model they are using. Moreover, as artificial intelligence algorithms become more complex, it is difficult for people to understand how the training result is obtained. For these reasons, there has recently been a tendency to move away from centralized approaches to artificial intelligence toward decentralized ones.
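    The "isolated data islands" problem described above is one motivation for decentralized training schemes such as federated learning, where parties exchange model updates rather than raw data. The following is a minimal federated-averaging sketch under that assumption; the linear model, synthetic client data, and hyperparameters are illustrative and not taken from the article.

    # Minimal federated-averaging (FedAvg) sketch illustrating decentralized
    # training over isolated "data islands". The linear model, synthetic
    # client datasets, and hyperparameters are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])

    def make_client(n):
        # Each client holds private data that never leaves its island.
        X = rng.normal(size=(n, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=n)
        return X, y

    clients = [make_client(50) for _ in range(5)]
    global_w = np.zeros(2)

    for rnd in range(20):                      # communication rounds
        local_ws = []
        for X, y in clients:
            w = global_w.copy()
            for _ in range(10):                # local gradient steps
                grad = 2 * X.T @ (X @ w - y) / len(y)
                w -= 0.05 * grad
            local_ws.append(w)
        # The server averages model updates, never the raw data.
        global_w = np.mean(local_ws, axis=0)

    print("estimated weights:", global_w)      # approaches [2.0, -1.0]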

    Artificial intelligence in nanotechnology

    This is the author’s version of a work that was accepted for publication in Nanotechnology. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Nanotechnology 24.45 (2013): 452002.
    During the last decade there has been an increasing use of artificial intelligence tools in nanotechnology research. In this paper we review some of these efforts in the context of interpreting scanning probe microscopy, the study of biological nanosystems, the classification of material properties at the nanoscale, theoretical approaches and simulations in nanoscience, and generally in the design of nanodevices. Current trends and future perspectives in the development of nanocomputing hardware that can boost artificial intelligence-based applications are also discussed. Convergence between artificial intelligence and nanotechnology can shape the path for many technological developments in the field of information sciences that will rely on new computer architectures and data representations, hybrid technologies that use biological entities and nanotechnological devices, bioengineering, neuroscience, and a large variety of related disciplines.

    Three Highly Parallel Computer Architectures and Their Suitability for Three Representative Artificial Intelligence Problems

    Virtually all current Artificial Intelligence (AI) applications are designed to run on sequential (von Neumann) computer architectures. As a result, current systems do not scale up: as knowledge is added to these systems, a point is reached where their performance quickly degrades. The performance of a von Neumann machine is limited by the bandwidth between memory and processor (the von Neumann bottleneck). The bottleneck is avoided by distributing the processing power across the memory of the computer; in this scheme the memory becomes the processor (a "smart memory"). This paper highlights the relationship between three representative AI application domains, namely knowledge representation, rule-based expert systems, and vision, and their parallel hardware realizations. Three machines, covering a wide range of fundamental properties of parallel processors, namely module granularity, concurrency control, and communication geometry, are reviewed: the Connection Machine (a fine-grained SIMD hypercube), DADO (a medium-grained MIMD/SIMD/MSIMD tree machine), and the Butterfly (a coarse-grained MIMD Butterfly-switch machine).
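    As a toy software analogue of the "smart memory" idea, the same operation can be distributed across workers that each own a slice of the data, rather than streaming all data through one processor. The sketch below is only an illustration of that data-parallel pattern; the workload (a simple rule applied to a list of facts) is an assumption, not one of the paper's benchmarks.

    # Toy data-parallel illustration: partition the "memory" across
    # processing elements and apply the same rule to every slice,
    # SIMD-style. The workload is an illustrative assumption.
    from multiprocessing import Pool

    def apply_rule(facts):
        # Each processing element runs the same rule on its local data.
        return [f * 2 + 1 for f in facts]

    if __name__ == "__main__":
        facts = list(range(1_000_000))
        n_workers = 4
        chunk = len(facts) // n_workers
        slices = [facts[i * chunk:(i + 1) * chunk] for i in range(n_workers)]
        with Pool(n_workers) as pool:
            results = pool.map(apply_rule, slices)
        merged = [x for part in results for x in part]
        print(merged[:5])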