
    Improving the quality of the industrial enterprise management based on the network-centric approach

    The article examines a network-centric approach to industrial enterprise management aimed at improving the efficiency and effectiveness of production plan implementation and maximizing responsiveness to customers. Network-centric management means decentralized management of enterprise groups, where a group is a set of enterprise divisions that jointly solve a particular case arising in the production process. Network-centric management involves greater delegation of authority to the lower elements of the enterprise's organizational structure. The industrial enterprise is treated as a large, complex production system that functions and is controlled under several types of uncertainty: information support uncertainty and goal (multicriteria) uncertainty. Information support uncertainty arises because a complex system always operates with incomplete and fuzzy information; goal, or multicriteria, uncertainty is caused by the large number of goals established for the production system. The network-centric management task for the production system is formulated, and the authors offer a mathematical model for optimal planning of customer order production with the participation of the main enterprise divisions. Methods for formalizing the various types of uncertainty in production planning tasks are considered on the basis of fuzzy set theory. An enterprise command center is proposed as an effective tool for management decision-making by divisions. The article demonstrates that decentralized group management methods can improve the efficiency and effectiveness of production plan implementation through the self-organization mechanisms of enterprise divisions. The work was prepared with financial support from the Russian Ministry of Education and Science (Contract No. 02.G25.31.0068 of 23.05.2013, as part of the measures to implement Decision of the Russian Government No. 218).
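
    As a hedged illustration of how fuzzy set theory can formalize uncertainty in a production planning task, the sketch below scores candidate plans against fuzzy ("soft") due dates; the triangular membership function, the max-min aggregation rule, and all numeric values are illustrative assumptions rather than the article's actual model.

```python
# Illustrative sketch: formalizing an uncertain ("soft") due date as a fuzzy set
# and scoring candidate production plans by their worst-case satisfaction degree.
# The numbers and the max-min aggregation rule are assumptions, not the article's model.

def triangular(x, a, b, c):
    """Membership of x in a triangular fuzzy number (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Fuzzy due dates per order: fully acceptable at b, tolerated between a and c (days).
fuzzy_due_dates = {"order_1": (8, 10, 14), "order_2": (5, 7, 9)}

# Candidate plans: predicted completion day per order.
plans = {
    "plan_A": {"order_1": 11, "order_2": 8},
    "plan_B": {"order_1": 9,  "order_2": 10},
}

def plan_satisfaction(plan):
    # Max-min principle: a plan is only as good as its least-satisfied order.
    return min(triangular(day, *fuzzy_due_dates[o]) for o, day in plan.items())

best = max(plans, key=lambda p: plan_satisfaction(plans[p]))
print(best, {p: round(plan_satisfaction(plans[p]), 2) for p in plans})
```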

    An immune algorithm based fuzzy predictive modeling mechanism using variable length coding and multi-objective optimization allied to engineering materials processing

    In this paper, a systematic multi-objective fuzzy modeling approach is proposed, which can be regarded as a three-stage modeling procedure. In the first stage, an evolutionary clustering algorithm is developed to extract an initial fuzzy rule base from the data. Based on this model, a back-propagation algorithm with momentum terms is used to refine the initial fuzzy model. The refined model is then used to seed the initial population of an immune-inspired multi-objective optimization algorithm in the third stage to obtain a set of fuzzy models with improved transparency. To tackle the problem of simultaneously optimizing the structure and parameters, a variable-length coding scheme is adopted to improve the efficiency of the search. The proposed modeling approach is applied to a real data set from the steel industry. Results show that the proposed approach is capable of eliciting not only accurate but also transparent fuzzy models.
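
    The following sketch illustrates the variable-length coding idea: each individual encodes a fuzzy rule base whose number of rules can change under mutation, so structure and parameters evolve together. The Gaussian-rule representation, the operators, and all parameter choices are assumptions for illustration, not the paper's exact scheme.

```python
# Sketch of a variable-length chromosome for a fuzzy rule base, in the spirit of
# jointly evolving structure (number of rules) and parameters (centres/widths).
# Representation and operators are illustrative assumptions, not the paper's exact scheme.
import random

def random_rule(n_inputs):
    # One Gaussian-membership rule: a (centre, width) pair per input plus a crisp consequent.
    return {
        "centres": [random.uniform(0, 1) for _ in range(n_inputs)],
        "widths":  [random.uniform(0.05, 0.5) for _ in range(n_inputs)],
        "consequent": random.uniform(0, 1),
    }

def random_model(n_inputs, max_rules=8):
    # Variable length: different individuals carry different numbers of rules.
    return [random_rule(n_inputs) for _ in range(random.randint(2, max_rules))]

def mutate(model, n_inputs, p_structural=0.2):
    model = [dict(rule) for rule in model]
    roll = random.random()
    if roll < p_structural and len(model) > 2:
        model.pop(random.randrange(len(model)))      # structural move: drop a rule
    elif roll < 2 * p_structural:
        model.append(random_rule(n_inputs))          # structural move: add a rule
    else:                                            # parametric move: perturb one centre
        rule = random.choice(model)
        i = random.randrange(n_inputs)
        rule["centres"] = list(rule["centres"])
        rule["centres"][i] += random.gauss(0, 0.05)
    return model

population = [random_model(n_inputs=3) for _ in range(10)]
child = mutate(population[0], n_inputs=3)
print([len(m) for m in population], "->", len(child))  # rule counts differ across individuals
```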

    Computational Intelligence Inspired Data Delivery for Vehicle-to-Roadside Communications

    We propose a vehicle-to-roadside communication protocol based on distributed clustering, where a coalitional game approach is used to stimulate the vehicles to join a cluster, and a fuzzy logic algorithm is employed to generate stable clusters by considering multiple metrics: vehicle velocity, moving pattern, and signal quality between vehicles. A reinforcement learning algorithm with game-theory-based reward allocation is employed to guide each vehicle to select the route that maximizes overall network performance. The protocol is integrated with a multi-hop data delivery virtualization scheme that works on top of the transport layer and provides high performance for multi-hop end-to-end data transmissions. We conduct realistic computer simulations to show the performance advantage of the protocol over other approaches.
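
    As a rough illustration of the fuzzy-logic clustering step, the sketch below fuses relative velocity and link quality into a cluster-stability score with two Mamdani-style rules; the membership shapes, the rule set, and the numeric ranges are assumptions, not the protocol's actual fuzzy system.

```python
# Sketch: fusing velocity similarity and link quality into a cluster-stability score
# with a tiny Mamdani-style rule set. Membership shapes and rules are assumptions,
# not the protocol's actual fuzzy system.

def low(x, lo, hi):
    """Degree to which x is 'low' on [lo, hi] (linear ramp down, clamped to [0, 1])."""
    return max(0.0, min(1.0, (hi - x) / (hi - lo)))

def high(x, lo, hi):
    return 1.0 - low(x, lo, hi)

def cluster_suitability(delta_v_mps, rssi_dbm):
    # Rule 1: small relative velocity AND good signal -> highly suitable.
    r1 = min(low(delta_v_mps, 0, 15), high(rssi_dbm, -90, -50))
    # Rule 2: large relative velocity OR weak signal -> unsuitable.
    r2 = max(high(delta_v_mps, 0, 15), low(rssi_dbm, -90, -50))
    # Weighted defuzzification onto [0, 1].
    return (r1 * 1.0 + r2 * 0.0) / (r1 + r2) if (r1 + r2) else 0.0

print(cluster_suitability(delta_v_mps=3, rssi_dbm=-60))   # stable neighbour
print(cluster_suitability(delta_v_mps=12, rssi_dbm=-85))  # unstable neighbour
```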

    Predictive intelligence to the edge through approximate collaborative context reasoning

    We focus on Internet of Things (IoT) environments where a network of sensing and computing devices is responsible for locally processing contextual data, reasoning, and collaboratively inferring the appearance of a specific phenomenon (event). Pushing processing and knowledge inference to the edge of the IoT network allows the complexity of the event reasoning process to be distributed into many manageable pieces and to be physically located at the source of the contextual information. This enables a huge amount of rich data streams to be processed in real time that would be prohibitively complex and costly to deliver on a traditional centralized Cloud system. We propose a lightweight, energy-efficient, distributed, adaptive, multiple-context-perspective event reasoning model under uncertainty on each IoT device (sensor/actuator). Each device senses and processes context data and infers events based on different local context perspectives: (i) expert knowledge on event representation, (ii) outliers inference, and (iii) deviation from locally predicted context. This novel approximate reasoning paradigm is achieved through a contextualized, collaborative belief-driven clustering process, where clusters of devices are formed according to their belief in the presence of events. Our distributed and federated intelligence model efficiently identifies any localized abnormality in the contextual data by aggregating local degrees of belief, and it updates and adjusts its knowledge in light of contextual data outliers and novelty detection. We provide a comprehensive experimental and comparative assessment of our model over real contextual data against other localized and centralized event detection models, and show the benefits stemming from its adoption, achieving up to three orders of magnitude less energy consumption and a high quality of inference.
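
    The sketch below gives one hedged reading of the collaborative belief-driven reasoning: each device fuses its three local perspectives into a degree of belief, and a cluster aggregates member beliefs against a threshold to declare an event. The weights, the mean aggregation, and the threshold are illustrative assumptions, not the paper's reasoning model.

```python
# Sketch: each device fuses three local "context perspectives" into a degree of belief,
# and a cluster aggregates member beliefs to decide on an event. The weights, the mean
# aggregation, and the threshold are illustrative assumptions, not the paper's model.

def local_belief(expert_score, outlier_score, prediction_deviation,
                 weights=(0.5, 0.3, 0.2)):
    """Weighted fusion of the three local perspectives, each already scaled to [0, 1]."""
    w_expert, w_outlier, w_deviation = weights
    return (w_expert * expert_score
            + w_outlier * outlier_score
            + w_deviation * prediction_deviation)

def cluster_decision(member_beliefs, threshold=0.5):
    """Declare the event if the aggregated (mean) cluster belief exceeds the threshold."""
    aggregated = sum(member_beliefs) / len(member_beliefs)
    return aggregated, aggregated >= threshold

beliefs = [
    local_belief(0.9, 0.8, 0.7),   # device close to the phenomenon
    local_belief(0.7, 0.6, 0.5),
    local_belief(0.2, 0.1, 0.3),   # device far from the phenomenon
]
print(cluster_decision(beliefs))   # mean belief ~0.55 -> event declared
```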

    Mining and visualizing uncertain data objects and named data networking traffics by fuzzy self-organizing map

    Uncertainty is widespread in real-world data. Uncertain data, in computer science, are typically found in the area of sensor networks, where sensors sense the environment with a certain error. Mining and visualizing uncertain data is one of the new challenges facing uncertain databases. This paper presents a new intelligent hybrid algorithm that applies fuzzy set theory in the context of the Self-Organizing Map to mine and visualize uncertain objects. The algorithm is tested on several benchmark problems and on uncertain traffic in Named Data Networking (NDN). Experimental results indicate that the proposed algorithm is precise and effective in terms of the applied performance criteria.
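
    A minimal sketch of the hybrid idea, under the assumption that fuzziness enters through membership-weighted codebook updates: every SOM neuron is pulled toward each sample in proportion to a fuzzy membership instead of a crisp winner-take-all assignment. The inverse-distance membership and the fuzzifier value are assumptions, not the paper's exact algorithm.

```python
# Sketch: one training pass of a "fuzzified" SOM in which every neuron is updated in
# proportion to a fuzzy membership of the input, instead of a crisp winner-take-all
# assignment. The inverse-distance membership and fuzzifier m are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, dim = 4, 2
weights = rng.random((n_neurons, dim))          # codebook vectors
data = rng.random((50, dim))                    # uncertain objects' expected positions

m = 2.0            # fuzzifier (as in fuzzy c-means)
lr = 0.3           # learning rate

for x in data:
    d = np.linalg.norm(weights - x, axis=1) + 1e-9
    # Fuzzy memberships: closer neurons get higher membership, all memberships sum to 1.
    u = (1.0 / d) ** (2.0 / (m - 1.0))
    u /= u.sum()
    # Every neuron moves toward x, weighted by its membership.
    weights += lr * u[:, None] * (x - weights)

print(np.round(weights, 3))
```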

    Data granulation by the principles of uncertainty

    Research in granular modeling has produced a variety of mathematical models, such as intervals, (higher-order) fuzzy sets, rough sets, and shadowed sets, which are all suitable for characterizing so-called information granules. Modeling the uncertainty of the input data is recognized as a crucial aspect of information granulation. Moreover, uncertainty is a well-studied concept in many mathematical settings, such as probability theory, fuzzy set theory, and possibility theory. This fact suggests that an appropriate quantification of the uncertainty expressed by the information granule model could be used to define an invariant property, to be exploited in practical situations of information granulation. In this perspective, a procedure of information granulation is effective if the uncertainty conveyed by the synthesized information granule is in a monotonically increasing relation with the uncertainty of the input data. In this paper, we present a data granulation framework that elaborates on the principles of uncertainty introduced by Klir. Since uncertainty is a mesoscopic descriptor of systems and data, these principles can be applied regardless of the input data type and the specific mathematical setting adopted for the information granules. The proposed framework is conceived (i) to offer a guideline for the synthesis of information granules and (ii) to build a groundwork for comparing and quantitatively judging different data granulation procedures. To provide a suitable case study, we introduce a new data granulation technique based on the minimum sum of distances, which is designed to generate type-2 fuzzy sets. We analyze the procedure by performing different experiments on two distinct data types: feature vectors and labeled graphs. Results show that the uncertainty of the input data is suitably conveyed by the generated type-2 fuzzy set models. Comment: 16 pages, 9 figures, 52 references.
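
    As a hedged illustration of granulation around the minimum sum of distances, the sketch below picks the medoid of a set of feature vectors and wraps the distances to it in a pair of membership bounds, giving an interval type-2 style footprint of uncertainty; the exponential memberships and scaling factors are assumptions, not the paper's procedure.

```python
# Sketch: granulating a set of feature vectors around its medoid (the element with the
# minimum sum of distances) and wrapping the distances in an interval type-2 membership.
# The exponential primary membership and the footprint-of-uncertainty construction are
# illustrative assumptions, not the paper's actual technique.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(loc=0.0, scale=1.0, size=(30, 3))      # a cluster of feature vectors

# Medoid: minimises the sum of distances to all other elements.
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
medoid = X[D.sum(axis=1).argmin()]

# Primary membership decays with distance from the medoid.
d = np.linalg.norm(X - medoid, axis=1)
scale = d.mean() + 1e-9
upper = np.exp(-d / (1.5 * scale))   # upper membership function (wider)
lower = np.exp(-d / (0.5 * scale))   # lower membership function (narrower)

# The gap between the two bounds is the footprint of uncertainty of the granule.
print("medoid:", np.round(medoid, 2))
print("mean footprint of uncertainty:", round(float((upper - lower).mean()), 3))
```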

    Comparison of different strategies of utilizing fuzzy clustering in structure identification

    Fuzzy systems approximate highly nonlinear systems by means of fuzzy "if-then" rules. In the literature, various algorithms have been proposed for mining such rules, and they commonly utilize fuzzy clustering in structure identification. Basically, there are three different approaches in which one can utilize fuzzy clustering: the first one is based on input-space clustering, the second one considers clustering realized in the output space, while the third one is concerned with clustering realized in the combined input-output space. In this study, we analyze these three approaches. We discuss each of the algorithms in great detail and offer a thorough comparative analysis. Finally, we compare the performances of these algorithms on a medical diagnosis classification problem, namely the Aachen Aphasia Test. The experiment and the results provide valuable insight into the merits and shortcomings of these three clustering approaches.
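
    The sketch below makes the three strategies concrete on a toy data set: the same fuzzy c-means routine is run on the input space, the output space, and the combined input-output space. The tiny textbook fuzzy c-means implementation and the synthetic data are assumptions for illustration, not the specific algorithms compared in the paper.

```python
# Sketch: the three ways of driving structure identification with fuzzy clustering,
# applied to the same data set: cluster the inputs X, the outputs y, or the joint
# space [X | y]. The fuzzy c-means below is a generic textbook version.
import numpy as np

def fuzzy_c_means(Z, c=3, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(Z), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centres = (Um.T @ Z) / Um.sum(axis=0)[:, None]          # weighted prototypes
        d = np.linalg.norm(Z[:, None, :] - centres[None, :, :], axis=-1) + 1e-9
        U = (1.0 / d) ** (2.0 / (m - 1.0))                       # update memberships
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

rng = np.random.default_rng(42)
X = rng.random((100, 2))                      # inputs
y = (np.sin(3 * X[:, 0]) + X[:, 1])[:, None]  # outputs of some nonlinear system

for name, Z in [("input space", X),
                ("output space", y),
                ("combined input-output space", np.hstack([X, y]))]:
    centres, _ = fuzzy_c_means(Z, c=3)
    print(name, "-> cluster centres of shape", centres.shape)
```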

    On the Potential of Generic Modeling for VANET Data Aggregation Protocols

    In-network data aggregation is a promising communication mechanism to reduce the bandwidth requirements of applications in vehicular ad-hoc networks (VANETs). Many aggregation schemes have been proposed, often with varying features. Most aggregation schemes are tailored to specific application scenarios and to specific aggregation operations; comparative evaluation of different aggregation schemes is therefore difficult. An application-centric view of aggregation also fails to tap into the potential of cross-application aggregation. Generic modeling may help to unlock this potential. We outline a generic modeling approach to enable improved comparability of aggregation schemes and to facilitate their joint optimization across different VANET applications. This work presents the requirements and general concept of such a generic modeling approach and identifies open challenges.

    Interoperable services based on activity monitoring in ambient assisted living environments

    Ambient Assisted Living (AAL) is considered the main technological solution that will enable the aged and people in recovery to maintain their independence, and a consequently high quality of life, for a longer period of time than would otherwise be the case. This goal is achieved by monitoring human activities and deploying the appropriate collection of services to set environmental features and satisfy user preferences in a given context. However, both human monitoring and service deployment are particularly hard to accomplish due to the uncertainty and ambiguity characterising human actions and the heterogeneity of the hardware devices composing an AAL system. This research addresses both of the aforementioned challenges by introducing 1) an innovative system, based on a Self-Organising Feature Map (SOFM), for automatically classifying the resting location of a moving object in an indoor environment, and 2) a strategy able to generate context-aware Fuzzy Markup Language (FML) services in order to maximize user comfort and the hardware interoperability level. The overall system runs on a distributed embedded platform with a specialised ceiling-mounted video sensor for intelligent activity monitoring. The system has the ability to learn resting locations, to measure overall activity levels, to detect specific events such as potential falls, and to deploy the right sequence of fuzzy services, modelled through FML, for supporting people in that particular context. Experimental results show less than 20% classification error in monitoring human activities and providing the right set of services, demonstrating the robustness of our approach over others in the literature, with minimal power consumption.
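
    As a hedged sketch of SOFM-based learning of resting locations, the code below trains a small one-dimensional map on a synthetic stream of 2D positions, so that the map's nodes drift toward frequently occupied spots; the synthetic trace, map size, and training schedule are assumptions, not the deployed system.

```python
# Sketch: a small self-organising map learning "resting locations" from a stream of
# 2D positions produced by a ceiling-mounted sensor. The synthetic positions, map size
# and training schedule are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(3)
# Synthetic trace: the person dwells mostly around three spots (e.g. sofa, desk, bed).
spots = np.array([[1.0, 1.0], [4.0, 1.5], [2.5, 4.0]])
positions = spots[rng.integers(0, 3, 500)] + rng.normal(0, 0.15, (500, 2))

nodes = rng.uniform(0, 5, (6, 2))            # 1-D SOM with 6 nodes in room coordinates
for t, x in enumerate(positions):
    lr = 0.5 * (1 - t / len(positions))      # decaying learning rate
    radius = max(1, int(3 * (1 - t / len(positions))))
    bmu = np.argmin(np.linalg.norm(nodes - x, axis=1))
    for i in range(len(nodes)):
        if abs(i - bmu) <= radius:           # neighbourhood on the 1-D node chain
            nodes[i] += lr * np.exp(-abs(i - bmu)) * (x - nodes[i])

# After training, nodes cluster around the frequently occupied (resting) locations.
print(np.round(nodes, 2))
```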

    Toward a Taxonomy and Computational Models of Abnormalities in Images

    The human visual system can spot an abnormal image and reason about what makes it strange. This task has not received enough attention in computer vision. In this paper we study various types of atypicalities in images in a more comprehensive way than has been done before. We propose a new dataset of abnormal images showing a wide range of atypicalities. We design human subject experiments to discover a coarse taxonomy of the reasons for abnormality. Our experiments reveal three major categories of abnormality: object-centric, scene-centric, and contextual. Based on this taxonomy, we propose a comprehensive computational model that can predict all different types of abnormality in images and outperform prior art in abnormality recognition. Comment: To appear in the Thirtieth AAAI Conference on Artificial Intelligence (AAAI 2016).