Intelligent systems in manufacturing: current developments and future prospects
Global competition and rapidly changing customer requirements are demanding increasing changes in manufacturing environments. Enterprises are required to constantly redesign their products and continuously reconfigure their manufacturing systems. Traditional approaches to manufacturing systems do not fully satisfy this new situation. Many authors have proposed that artificial intelligence will bring the flexibility and efficiency needed by manufacturing systems. This paper is a review of artificial intelligence techniques used in manufacturing systems. The paper first defines the components of a simplified intelligent manufacturing system (IMS) and the different Artificial Intelligence (AI) techniques to be considered, and then shows how these AI techniques are used for the components of an IMS.
Impact Assessment of Hypothesized Cyberattacks on Interconnected Bulk Power Systems
The first-ever Ukraine cyberattack on a power grid proved how devastating an intrusion into critical cyber assets can be. With administrative privileges for accessing substation networks and local control centers, one intelligent form of coordinated cyberattack is to execute a series of disruptive switching operations on multiple substations using compromised supervisory control and data acquisition (SCADA) systems. These actions can have significant impacts on an interconnected power grid. Unlike previous power blackouts, such high-impact initiating events can aggravate operating conditions, initiating instability that may lead to system-wide cascading failure. A systematic evaluation of "nightmare" scenarios is highly desirable for asset owners to manage and prioritize maintenance of, and investment in, the protection of their cyberinfrastructure. This survey paper is a conceptual expansion of the real-time monitoring, anomaly detection, impact analyses, and mitigation (RAIM) framework that emphasizes the resulting impacts on both the steady-state and dynamic aspects of power system stability. Hypothetically, we associate the combinatorial analyses of the steady state of substation/component outages with the dynamics of the sequential switching orders as part of the permutation. The expanded framework includes (1) critical/noncritical combination verification, (2) cascade confirmation, and (3) combination re-evaluation. This paper ends with a discussion of the open issues for metrics and future design pertaining to the impact quantification of cyber-related contingencies.
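The combinatorial screening step this abstract outlines can be sketched roughly as follows. This is a minimal illustration, not the paper's actual method: the substation names, the `is_critical` rule (a stand-in for a real steady-state power-flow check), and the expansion of each critical combination into sequential switching orders are all assumptions made for demonstration.

```python
from itertools import combinations, permutations

# Illustrative substation set; real studies would use a grid model.
substations = ["S1", "S2", "S3", "S4"]

def is_critical(combo, critical_pairs=frozenset({("S1", "S3")})):
    # Placeholder for the steady-state criticality check: flag the
    # combination if it contains a known critical pair of outages.
    return any(set(pair) <= set(combo) for pair in critical_pairs)

def screen_combinations(subs, k):
    # Step (1): verify which k-outage combinations are critical/noncritical.
    critical, noncritical = [], []
    for combo in combinations(subs, k):
        (critical if is_critical(combo) else noncritical).append(combo)
    return critical, noncritical

def switching_orders(combo):
    # Step (2) input: each critical combination expands into all
    # sequential switching orders (permutations) for dynamic analysis.
    return list(permutations(combo))

critical, noncritical = screen_combinations(substations, 2)
```

In a full workflow, only the critical combinations would proceed to cascade confirmation and re-evaluation, keeping the permutation space tractable.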
Performance measurement: challenges for tomorrow
This paper demonstrates that the context within which performance measurement is used is changing. The key questions posed are: Is performance measurement ready for the emerging context? What are the gaps in our knowledge? And which lines of enquiry do we need to pursue? A literature synthesis conducted by a team of multidisciplinary researchers charts the evolution of the performance-measurement literature and identifies that the literature largely follows the emerging business and global trends. The ensuing discussion introduces the currently emerging and predicted future trends and explores how current knowledge on performance measurement may deal with the emerging context. This results in the identification of specific challenges for performance measurement within a holistic systems-based framework. The principal limitation of the paper is that it covers a broad literature base without in-depth analysis of a particular aspect of performance measurement. However, this weakness is also the strength of the paper. What is perhaps most significant is that there is a need to rethink how we research the field of performance measurement by taking a holistic systems-based approach, recognizing the integrated and concurrent nature of the challenges that practitioners, and consequently the field, face.
Quality assessment technique for ubiquitous software and middleware
The new paradigm of computing or information systems is ubiquitous computing systems. The technology-oriented issues of ubiquitous computing systems have made researchers pay much attention to feasibility studies of the technologies rather than to building quality-assurance indices or guidelines. In this context, measuring quality is the key to developing high-quality ubiquitous computing products. For this reason, various quality models have been defined, adopted and enhanced over the years; for example, one recognised standard quality model (ISO/IEC 9126) is the result of a consensus on a software quality model with three levels: characteristics, sub-characteristics, and metrics. However, it is very unlikely that this scheme will be directly applicable to ubiquitous computing environments, which differ considerably from conventional software, prompting serious concern about reformulating existing methods and, especially, about elaborating new assessment techniques for ubiquitous computing environments. This paper selects appropriate quality characteristics for the ubiquitous computing environment, which can be used as the quality target for both ubiquitous computing product evaluation processes and development processes. Further, each of the quality characteristics has been expanded with evaluation questions and metrics, in some cases with measures. In addition, this quality model has been applied to an industrial setting of the ubiquitous computing environment. These applications have revealed that while the approach is sound, some parts remain to be developed further.
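The three-level scheme the abstract attributes to ISO/IEC 9126 (characteristics, sub-characteristics, metrics) can be represented as a simple hierarchy. The sketch below is an assumption-laden illustration: the sample entries, scores, and the averaging rule for aggregation are invented for demonstration and are not the paper's actual model.

```python
from dataclasses import dataclass, field

@dataclass
class SubCharacteristic:
    name: str
    # Metric name -> normalised score in [0, 1] (illustrative scoring).
    metrics: dict = field(default_factory=dict)

    def score(self):
        # Aggregate metrics by a simple mean (an assumed rule).
        return sum(self.metrics.values()) / len(self.metrics) if self.metrics else 0.0

@dataclass
class Characteristic:
    name: str
    subs: list = field(default_factory=list)

    def score(self):
        return sum(s.score() for s in self.subs) / len(self.subs) if self.subs else 0.0

# Hypothetical fragment of a quality model for a ubiquitous product.
reliability = Characteristic("reliability", [
    SubCharacteristic("fault tolerance", {"failure recovery rate": 0.8}),
    SubCharacteristic("maturity", {"defect density": 0.6}),
])
```

A real assessment would attach the paper's evaluation questions to each sub-characteristic and weight metrics rather than averaging them uniformly.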
Assessing and augmenting SCADA cyber security: a survey of techniques
SCADA systems monitor and control critical infrastructures of national importance such as power generation and distribution, water supply, transportation networks, and manufacturing facilities. The pervasiveness, miniaturisation and declining costs of internet connectivity have transformed these systems from strictly isolated to highly interconnected networks. The connectivity provides immense benefits such as reliability, scalability and remote connectivity, but at the same time exposes an otherwise isolated and secure system to global cyber security threats. This inevitable transformation to highly connected systems thus necessitates effective security safeguards, as any compromise or downtime of SCADA systems can have severe economic, safety and security ramifications. One way to ensure vital asset protection is to adopt a viewpoint similar to an attacker's to determine weaknesses and loopholes in defences. Such a mindset helps to identify and fix potential breaches before their exploitation. This paper surveys tools and techniques to uncover SCADA system vulnerabilities. A comprehensive review of the selected approaches is provided along with their applicability.
Memory-full context-aware predictive mobility management in dual connectivity 5G networks
Network densification with small cell deployment is being considered as one of the dominant themes in the fifth generation (5G) cellular system. Despite the capacity gains, such deployment scenarios raise several challenges from a mobility management perspective. The small cell size, which implies a short cell residence time, will increase the handover (HO) rate dramatically. Consequently, HO latency will become a critical consideration in the 5G era. The latter requires an intelligent, fast and lightweight HO procedure with minimal signalling overhead. In this direction, we propose a memory-full context-aware HO scheme with mobility prediction to achieve the aforementioned objectives. We consider a dual connectivity radio access network architecture with logical separation between control and data planes because it offers relaxed constraints in implementing the predictive approaches. The proposed scheme predicts future HO events along with the expected HO time by combining radio-frequency performance with physical proximity and the user context in terms of speed, direction and HO history. To minimise the processing and storage requirements whilst improving the prediction performance, a user-specific prediction triggering threshold is proposed. The prediction outcome is utilised to perform advance HO signalling whilst suspending the periodic transmission of measurement reports. Analytical and simulation results show that the proposed scheme provides promising gains over the conventional approach.
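The user-specific triggering idea above can be sketched as a simple gating function: prediction (and the associated measurement reporting) runs only when a per-user mobility score crosses that user's threshold. This is a hedged illustration under assumed inputs; the score's components (speed, recent HO count) follow the abstract's context variables, but the weights and threshold rule are invented here.

```python
def mobility_score(speed_mps, recent_handovers, w_speed=0.1, w_ho=0.3):
    # Assumed linear score: faster users and users with more recent
    # handovers are more likely to hand over again soon.
    return w_speed * speed_mps + w_ho * recent_handovers

def should_predict(speed_mps, recent_handovers, user_threshold):
    # Gate the predictor: users below their personalised threshold skip
    # prediction, saving processing, storage, and report signalling.
    return mobility_score(speed_mps, recent_handovers) >= user_threshold
```

For example, a vehicular user (15 m/s, 3 recent handovers) would trigger prediction against a threshold of 2.0, while a pedestrian (1.4 m/s, no recent handovers) would not.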
A brief network analysis of Artificial Intelligence publication
In this paper, we present an illustration of the history of Artificial Intelligence (AI) with a statistical analysis of publications since 1940. We collected and mined the IEEE publication database to analyse the geographical and chronological variation in the activity of AI research. The connections between different institutes are shown. The results show that the leading communities of AI research are mainly in the USA, China, Europe and Japan. The key institutes, authors and research hotspots are revealed. It is found that the research institutes in fields like Data Mining, Computer Vision, Pattern Recognition and some other fields of Machine Learning are quite consistent, implying a strong interaction between the communities of each field. It is also shown that research in Electronic Engineering and industrial or commercial applications is very active in California, and that Japan publishes many papers in robotics. Due to the limitations of the data source, the results might be overly influenced by the raw number of published articles, which we mitigate by applying network keynode analysis to the research community instead of merely counting publications. Comment: 18 pages, 7 figures
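The keynode weighting the abstract alludes to can be approximated by ranking nodes by centrality in the collaboration network rather than by raw publication counts. The sketch below uses plain degree centrality as a stand-in for whatever keynode measure the paper actually applies, and the institute names and edges are made-up examples.

```python
from collections import defaultdict

def degree_centrality(edges):
    # Count each node's collaborations, then normalise by the maximum
    # possible degree (n - 1), as in standard degree centrality.
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    n = len(degree)
    return {node: d / (n - 1) for node, d in degree.items()}

# Hypothetical co-authorship edges between institutes.
collaborations = [
    ("MIT", "Tsinghua"), ("MIT", "Stanford"),
    ("Stanford", "Tsinghua"), ("MIT", "Tokyo"),
]
centrality = degree_centrality(collaborations)
```

Ranking by `centrality` rather than publication volume surfaces well-connected institutes even when their output is modest.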