2,575 research outputs found

    Influence of artificial intelligence on public employment and its impact on politics: A systematic literature review

    Goal: Public administration is constantly changing in response to new challenges, including the implementation of new technologies such as robotics and artificial intelligence (AI). This new dynamic has caught the attention of political leaders, who are looking for ways to restrain or regulate AI in public services, and of scholars, who are raising legitimate concerns about its impact on public employment. In light of the above, the aim of this research is to analyze the influence of AI on public employment and the ways in which politics is reacting. Design / Methodology / Approach: We performed a systematic literature review to disclose the state of the art and to identify avenues for future research. Results: The results indicate that public services require four kinds of intelligence – mechanical, analytical, intuitive, and empathetic – albeit to a much lesser extent than private services. Limitations of the investigation: This systematic review provides a snapshot of the influence of AI on public employment. Our research therefore does not cover the whole body of knowledge, but it presents a holistic understanding of the phenomenon. Practical implications: As private companies are typically more advanced in the implementation of AI technologies, the for-profit sector may provide significant contributions to the way states can leverage public services through the deployment of AI technologies. Originality / Value: This article highlights the need for states to create the necessary conditions to legislate and regulate key technological advances, which, in our opinion, has been done, but at a very slow pace.

    Artificial cognitive architecture with self-learning and self-optimization capabilities. Case studies in micromachining processes

    Unpublished doctoral thesis presented at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Date of defense: 22-09-201

    Improved gravitational search algorithm for proportional integral derivative controller tuning in process control system

    The Proportional-Integral-Derivative (PID) controller is one of the most widely used controllers in industry due to the reliability and simplicity of its structure. Despite this simple structure, however, tuning a PID controller for nonlinear, high-order, and complex plants is difficult and faces many challenges. Conventional methods such as Ziegler-Nichols are still used for PID tuning despite their lack of accuracy. Researchers around the world are now shifting their attention from conventional methods to optimisation-based methods, which over the last five years have become among the most popular approaches for PID controller tuning. Optimisation techniques such as the Genetic Algorithm (GA), Particle Swarm Optimisation (PSO), and the Gravitational Search Algorithm (GSA) are widely used for PID controller applications. Although GSA is effective for PID controller tuning compared with GA and PSO, there is still room to improve its performance. This research introduces two additions to GSA to enhance PID parameter tuning: Linear Weight Summation (LWS) and tuning of the alpha parameter range. The performance of the optimisation-based PID controllers is measured using transient response specifications (i.e. rise time, settling time, and percentage overshoot). With these two approaches, results show that the Improved Gravitational Search Algorithm (IGSA) based PID controller produced 20% to 30% faster rise and settling times and 25% to 35% smaller percentage overshoot than GA-PID and PSO-PID. In the real implementation analysis, the IGSA-based PID controller also produced faster settling times and lower percentage overshoot than the other optimisation-based PID controllers. A good controller is one that produces a stable dynamic system; by producing a good transient response, the IGSA-based PID controller therefore provides more stable dynamic system performance than the other controllers.
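    The abstract does not give the IGSA update equations, so the sketch below only illustrates the general pattern of optimisation-based PID tuning it describes: a weighted-sum transient-response cost (standing in for the paper's Linear Weight Summation) minimised by a search over (Kp, Ki, Kd). The second-order plant, the weights, and the plain random-search optimiser are all assumptions made for illustration, not the paper's actual IGSA.

```python
# Illustrative sketch only: a weighted-sum cost for PID tuning, optimised here by
# plain random search rather than the paper's Improved Gravitational Search Algorithm.
# The second-order plant and all constants are hypothetical.
import random

def step_response(kp, ki, kd, dt=0.01, t_end=5.0):
    """Simulate a unit step response of a toy second-order plant under discrete PID."""
    wn, zeta = 2.0, 0.3                    # hypothetical plant parameters
    y, ydot, integ, prev_err = 0.0, 0.0, 0.0, 1.0
    ys = []
    for _ in range(int(t_end / dt)):
        err = 1.0 - y                      # unit step setpoint
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        yddot = wn * wn * (u - y) - 2.0 * zeta * wn * ydot
        ydot += yddot * dt
        y += ydot * dt
        ys.append(y)
    return ys, dt

def transient_cost(ys, dt, w=(1.0, 1.0, 1.0)):
    """Weighted sum of rise time, settling time, and percentage overshoot."""
    rise = next((i * dt for i, y in enumerate(ys) if y >= 0.9), len(ys) * dt)
    overshoot = max(0.0, (max(ys) - 1.0) * 100.0)
    settle = len(ys) * dt
    for i in range(len(ys) - 1, -1, -1):
        if abs(ys[i] - 1.0) > 0.02:        # 2% settling band
            settle = (i + 1) * dt
            break
    return w[0] * rise + w[1] * settle + w[2] * overshoot

def tune_pid(iterations=500, seed=0):
    """Random search over (Kp, Ki, Kd); the paper would run IGSA at this point."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iterations):
        gains = [rng.uniform(0.0, 20.0) for _ in range(3)]
        ys, dt = step_response(*gains)
        cost = transient_cost(ys, dt)
        if cost < best_cost:
            best, best_cost = gains, cost
    return best, best_cost

if __name__ == "__main__":
    gains, cost = tune_pid()
    print("best (Kp, Ki, Kd):", [round(g, 2) for g in gains], "cost:", round(cost, 3))
```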

    BlockGrid: A Blockchain-Mediated Cyber-Physical Instructional Platform

    Includes supplementary material, which may be found at https://calhoun.nps.edu/handle/10945/66767. Blockchain technology has garnered significant attention for its disruptive potential in several domains of national security interest. For the United States government to meet the challenge of incorporating blockchain technology into its IT infrastructure and cyber warfare strategy, personnel must be educated about blockchain technology and its applications. This thesis presents both the design and a prototype implementation of a blockchain-mediated cyber-physical system called a BlockGrid. The system consists of a cluster of microcomputers that form a simple smart grid controlled by smart contracts on a private blockchain. The microcomputers act as private blockchain nodes and are programmed to activate microcomputer-attached circuits in response to smart-contract transactions. LEDs serve as visible circuit elements that indicate the blockchain's activity and allow the technology to be demonstrated to observers. Innovations in networking configuration and physical layout make the prototype highly portable and pre-configured for use upon assembly. Implementation options allow BlockGrid to be used in a variety of instructional settings, increasing its potential benefit to educators. Civilian, CyberCorps: Scholarship for Service. Approved for public release; distribution is unlimited.
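    The thesis itself details the BlockGrid design; purely as a sketch of the general pattern it describes (a node reacting to smart-contract transactions by driving an attached circuit), the following self-contained Python fragment uses stand-in stubs for both the private blockchain and the LED driver. Every class and method name here is hypothetical and is not drawn from the thesis's implementation.

```python
# Stand-in sketch of "node mirrors smart-contract state onto attached circuits".
# Both the ledger and the LED are stubs; the BlockGrid prototype uses a real
# private blockchain and GPIO-attached LEDs on microcomputers.
import time
from dataclasses import dataclass

@dataclass
class ContractEvent:
    """A hypothetical event emitted by a smart contract on the private chain."""
    block_number: int
    circuit_id: str      # which microcomputer-attached circuit to drive
    state: bool          # desired on/off state

class StubLedger:
    """Stands in for a private blockchain node's event feed."""
    def __init__(self):
        self._events = [
            ContractEvent(1, "led-0", True),
            ContractEvent(2, "led-1", True),
            ContractEvent(3, "led-0", False),
        ]
    def poll_new_events(self, since_block):
        return [e for e in self._events if e.block_number > since_block]

class StubLed:
    """Stands in for a GPIO-driven LED on the microcomputer."""
    def __init__(self, name):
        self.name = name
    def set(self, on):
        print(f"[{self.name}] {'ON' if on else 'OFF'}")

def run_node(ledger, leds, poll_interval=0.1, max_polls=5):
    """Poll the ledger and mirror contract state onto the attached circuits."""
    last_block = 0
    for _ in range(max_polls):
        for event in ledger.poll_new_events(last_block):
            if event.circuit_id in leds:
                leds[event.circuit_id].set(event.state)
            last_block = max(last_block, event.block_number)
        time.sleep(poll_interval)

if __name__ == "__main__":
    run_node(StubLedger(), {"led-0": StubLed("led-0"), "led-1": StubLed("led-1")})
```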

    Analog Spiking Neuromorphic Circuits and Systems for Brain- and Nanotechnology-Inspired Cognitive Computing

    Human society now faces grand challenges in satisfying the growing demand for computing power while keeping energy consumption sustainable. With CMOS technology scaling reaching its end, innovations are required to tackle these challenges in a radically different way. Inspired by the emerging understanding of computation in the brain and by nanotechnology-enabled, biologically plausible synaptic plasticity, neuromorphic computing architectures are being investigated. A neuromorphic chip that combines CMOS analog spiking neurons with nanoscale resistive random-access memory (RRAM) used as electronic synapses can provide massive neural network parallelism, high density, and online learning capability, and hence paves the way towards a promising solution for future energy-efficient real-time computing systems. However, existing silicon neuron approaches are designed to faithfully reproduce biological neuron dynamics and are therefore incompatible with RRAM synapses, or they require extensive peripheral circuitry to modulate a synapse and are thus deficient in learning capability. As a result, they eliminate most of the density advantage gained by adopting nanoscale devices and fail to realize a functional computing system. This dissertation describes novel hardware architectures and neuron circuit designs that synergistically assemble the fundamental and significant elements for brain-inspired computing. Versatile CMOS spiking neurons that combine integrate-and-fire operation, the capability to drive dense passive RRAM synapses, dynamic biasing for adaptive power consumption, in situ spike-timing-dependent plasticity (STDP), and competitive learning in compact integrated circuit modules are presented. Real-world pattern learning and recognition tasks using the proposed architecture were demonstrated with circuit-level simulations. A test chip was implemented and fabricated to verify the proposed CMOS neuron and hardware architecture, and the subsequent chip measurement results successfully proved the idea. The work described in this dissertation realizes a key building block for large-scale integration of spiking neural network hardware and thus serves as a stepping stone towards next-generation energy-efficient brain-inspired cognitive computing systems.
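    The dissertation describes analog CMOS/RRAM circuits; as a purely behavioural, software-level illustration of the two mechanisms named in the abstract (integrate-and-fire dynamics and pairwise STDP), here is a small numerical sketch. All constants and the input model are made up for illustration and do not model the chip.

```python
# Behavioural sketch of a leaky integrate-and-fire neuron with pairwise STDP.
# Constants are invented; this does not model the analog circuits in the dissertation.
import random
from math import exp

DT = 1.0            # ms per simulation step
TAU_M = 20.0        # membrane time constant (ms)
V_THRESH = 1.0      # firing threshold
TAU_STDP = 20.0     # STDP time constant (ms)
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes

def stdp_window(delta_t):
    """exp(-dt/tau) pairing window used by pairwise STDP."""
    return exp(-delta_t / TAU_STDP) if delta_t >= 0 else 0.0

def simulate(n_inputs=10, steps=500, seed=1):
    rng = random.Random(seed)
    w = [0.5] * n_inputs                 # synaptic weights (conceptually, RRAM conductances)
    v = 0.0                              # membrane potential
    last_pre = [-1e9] * n_inputs         # last presynaptic spike time per input
    last_post = -1e9                     # last postsynaptic spike time
    for step in range(steps):
        t = step * DT
        pre_spikes = [rng.random() < 0.05 for _ in range(n_inputs)]   # random input spikes
        # Integrate-and-fire: leak plus weighted input spikes.
        v += (-v / TAU_M) * DT + sum(wi for wi, s in zip(w, pre_spikes) if s)
        for i, spiked in enumerate(pre_spikes):
            if spiked:
                last_pre[i] = t
                # Pre-after-post ordering: depress the synapse.
                w[i] = max(0.0, w[i] - A_MINUS * stdp_window(t - last_post))
        if v >= V_THRESH:                # postsynaptic spike: reset and potentiate
            v = 0.0
            last_post = t
            for i in range(n_inputs):
                # Post-after-pre ordering: potentiate the synapse.
                w[i] = min(1.0, w[i] + A_PLUS * stdp_window(t - last_pre[i]))
    return w

if __name__ == "__main__":
    print([round(wi, 3) for wi in simulate()])
```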

    Navigating Generative Artificial Intelligence Promises and Perils for Knowledge and Creative Work

    Generative artificial intelligence (GenAI) is rapidly becoming a viable tool to enhance productivity and act as a catalyst for innovation across various sectors. Its ability to perform tasks that have traditionally required human judgment and creativity is transforming knowledge and creative work. Yet it also raises concerns and implications that could reshape the very landscape of knowledge and creative work. In this editorial, we undertake an in-depth examination of both the opportunities and challenges presented by GenAI for future IS research

    Context Aware Computing for The Internet of Things: A Survey

    As we move towards the Internet of Things (IoT), the number of sensors deployed around the world is growing at a rapid pace. Market research has shown significant growth in sensor deployments over the past decade and predicts that the growth rate will increase further in the future. These sensors continuously generate enormous amounts of data. However, in order to add value to raw sensor data, we need to understand it. The collection, modelling, reasoning, and distribution of context in relation to sensor data play a critical role in this challenge. Context-aware computing has proven successful in understanding sensor data. In this paper, we survey context awareness from an IoT perspective. We first present the necessary background by introducing the IoT paradigm and context-aware fundamentals. We then provide an in-depth analysis of the context life cycle. We evaluate a subset of 50 projects, which represent the majority of research and commercial solutions proposed in the field of context-aware computing over the last decade (2001-2011), based on our own taxonomy. Finally, based on our evaluation, we highlight the lessons to be learnt from the past and possible directions for future research. The survey addresses a broad range of techniques, methods, models, functionalities, systems, applications, and middleware solutions related to context awareness and the IoT. Our goal is not only to analyse, compare, and consolidate past research work but also to appreciate its findings and discuss their applicability to the IoT. Comment: IEEE Communications Surveys & Tutorials Journal, 201
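    The survey organises context-aware computing around a context life cycle (acquisition, modelling, reasoning, distribution). The toy sketch below only illustrates that pipeline shape; the sensor readings, thresholds, and names are hypothetical and are not drawn from any system evaluated in the survey.

```python
# Toy illustration of the context life cycle discussed in the survey:
# acquisition -> modelling -> reasoning -> distribution. All names and thresholds
# are hypothetical.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ContextFact:
    entity: str      # e.g. "room-101"
    attribute: str   # e.g. "temperature"
    value: float
    unit: str

def acquire() -> List[ContextFact]:
    """Acquisition: pull raw readings from (here, hard-coded) sensors."""
    return [ContextFact("room-101", "temperature", 29.5, "C"),
            ContextFact("room-101", "occupancy", 3, "persons")]

def model(facts: List[ContextFact]) -> dict:
    """Modelling: organise raw data as entity -> attribute key-value context."""
    ctx = {}
    for f in facts:
        ctx.setdefault(f.entity, {})[f.attribute] = (f.value, f.unit)
    return ctx

def reason(ctx: dict) -> List[str]:
    """Reasoning: derive higher-level situations from the modelled context."""
    situations = []
    for entity, attrs in ctx.items():
        temp, _ = attrs.get("temperature", (None, None))
        occ, _ = attrs.get("occupancy", (0, None))
        if temp is not None and temp > 28 and occ > 0:
            situations.append(f"{entity}: occupied and too warm -> start cooling")
    return situations

def distribute(situations: List[str], subscribers: List[Callable[[str], None]]) -> None:
    """Distribution: push derived context to interested consumers."""
    for s in situations:
        for notify in subscribers:
            notify(s)

if __name__ == "__main__":
    distribute(reason(model(acquire())), [print])
```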

    Proceedings of the First PhD Symposium on Sustainable Ultrascale Computing Systems (NESUS PhD 2016)

    Proceedings of the First PhD Symposium on Sustainable Ultrascale Computing Systems (NESUS PhD 2016), Timisoara, Romania, February 8-11, 2016. The PhD Symposium was a very good opportunity for young researchers to share information and knowledge, to present their current research, and to discuss topics with other students in order to look for synergies and common research topics. The idea was very successful, and the assessment made by the PhD students was very positive. It also helped to achieve one of the major goals of the NESUS Action: to establish an open European research network targeting sustainable solutions for ultrascale computing, aiming at cross-fertilization among HPC, large-scale distributed systems, and big data management and training, and helping to bring together disparate researchers working across different areas by providing a meeting ground where they can exchange ideas, identify synergies, and pursue common activities in research topics such as sustainable software solutions (applications and the system software stack), data management, energy efficiency, and resilience. European Cooperation in Science and Technology. COST

    Autonomous Recovery Of Reconfigurable Logic Devices Using Priority Escalation Of Slack

    Field Programmable Gate Array (FPGA) devices offer a suitable platform for survivable hardware architectures in mission-critical systems. In this dissertation, active dynamic redundancy-based fault-handling techniques are proposed which exploit the dynamic partial reconfiguration capability of SRAM-based FPGAs. Self-adaptation is realized by employing reconfiguration in the detection, diagnosis, and recovery phases. To extend these concepts to semiconductor aging and process variation in the deep-submicron era, resilient adaptable processing systems are sought that maintain quality and throughput requirements despite the vulnerabilities of the underlying computational devices. A new approach to autonomous fault handling which addresses these goals is developed using only a uniplex hardware arrangement. It operates by observing a health metric to achieve Fault Demotion using Reconfigurable Slack (FaDReS). Here, an autonomous fault-isolation scheme is employed which neither requires test vectors nor suspends the computational throughput, but instead observes the value of a health metric based on runtime input. The deterministic flow of the fault-isolation scheme guarantees success in a bounded number of reconfigurations of the FPGA fabric. FaDReS is then extended to the Priority Using Resource Escalation (PURE) online redundancy scheme, which considers fault-isolation latency and throughput trade-offs under a dynamic spare arrangement. While deep-submicron designs introduce new challenges, the use of adaptive techniques is seen to provide several promising avenues for improving resilience. The scheme developed is demonstrated through the hardware design of various signal processing circuits and their implementation on a Xilinx Virtex-4 FPGA device. These include a Discrete Cosine Transform (DCT) core, a Motion Estimation (ME) engine, a Finite Impulse Response (FIR) filter, a Support Vector Machine (SVM), and Advanced Encryption Standard (AES) blocks, in addition to MCNC benchmark circuits. A significant reduction in power consumption is achieved, ranging from 83% for low-motion-activity scenes to 12.5% for high-motion-activity video scenes in a novel ME engine configuration. For a typical benchmark video sequence, PURE is shown to maintain a PSNR baseline near 32 dB. The diagnosability, reconfiguration latency, and resource overhead of each approach are analyzed. Compared to previous alternatives, PURE maintains a PSNR within 4.02 dB to 6.67 dB of the fault-free baseline by escalating healthy resources to higher-priority signal processing functions. The results indicate the benefits of priority-aware resiliency over conventional redundancy approaches in terms of fault recovery, power consumption, and resource-area requirements. Together, these provide a broad range of strategies to achieve autonomous recovery of reconfigurable logic devices under a variety of constraints, operating conditions, and optimization criteria.
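    The dissertation gives the full FaDReS/PURE flow; the fragment below only sketches the high-level idea of escalating healthy resources to higher-priority functions once a health metric flags degradation. The resource model, the health values, and the health floor are invented for illustration and are not the dissertation's actual scheme.

```python
# High-level sketch of the priority-escalation idea behind PURE: when resources
# degrade, assign the healthiest remaining resources to the highest-priority
# functions. The health metric and resource model are invented here.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    health: float        # runtime health metric in [0, 1] (hypothetical scale)

@dataclass
class Function:
    name: str
    priority: int        # higher value = more important (e.g. DCT core vs. spare)

def escalate(resources, functions, health_floor=0.5):
    """Map the healthiest usable resources to the highest-priority functions."""
    usable = sorted((r for r in resources if r.health >= health_floor),
                    key=lambda r: r.health, reverse=True)
    ordered = sorted(functions, key=lambda f: f.priority, reverse=True)
    mapping = {}
    for func, res in zip(ordered, usable):
        mapping[func.name] = res.name      # would trigger a partial reconfiguration
    for func in ordered[len(usable):]:
        mapping[func.name] = None          # demoted: no healthy slack remains for it
    return mapping

if __name__ == "__main__":
    resources = [Resource("slice-A", 0.95), Resource("slice-B", 0.40),
                 Resource("slice-C", 0.80)]
    functions = [Function("DCT", 3), Function("FIR", 2), Function("spare", 1)]
    print(escalate(resources, functions))
```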