
    Defining the selective mechanism of problem solving in a distributed system.

    Distribution and parallelism are historically important approaches to implementing artificial intelligence systems. Research in distributed problem solving considers solving a particular problem by sharing it across a number of cooperatively acting processing agents. Communicating problem solvers can cooperate by exchanging partial solutions to converge on global results. The purpose of this research programme is to contribute to the field of Artificial Intelligence by developing a knowledge representation language. The project has attempted to create a computational model, grounded in an underlying theory of cognition, to address the problem of finding clusters of relevant problem-solving agents that provide appropriate partial solutions which, when put together, form the overall solution to a given complex problem. To validate this approach to problem solving, a model of a distributed production system has been created. A model of a supporting parallel architecture for the proposed distributed production problem-solving system (DPSS) is described, along with the mechanism for inference processing. The architecture should offer sufficient computing power to cope with the larger search space required by the knowledge representation and to support the faster processing methods required. The inference engine mechanism, which combines the task-sharing and result-sharing perspectives, is divided into three phases: initialising, clustering and integrating. New clusters are assembled using genetic operators, based on a fitness measure derived to balance communication and computation across the clusters; the algorithm is also guided by expert knowledge. A cost model for fitness values has been used, parameterised by computation ratio and communication performance. Following the establishment of this knowledge representation scheme and the identification of a supporting parallel architecture, a simulation of the array of PEs has been developed to emulate the behaviour of such a system. The thesis reports findings from a series of tests used to assess its potential gains. The performance of the DPSS has been evaluated by measuring the gain in execution speed in a parallel environment compared with serial processing. The evaluation of test results shows the validity of the proposed approach for constructing large knowledge-based systems.
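    The clustering phase lends itself to a compact illustration. The sketch below is a loose rendering rather than the thesis's actual DPSS code: candidate agent clusters are encoded as bitmasks and evolved with genetic operators against a fitness that trades computation against communication. All costs, parameters (including the 0.5 computation ratio) and the truncation-selection scheme are assumptions.

```python
import random

random.seed(0)

N_AGENTS = 8
# Hypothetical per-agent computation costs and pairwise communication costs.
comp = [random.uniform(1, 5) for _ in range(N_AGENTS)]
comm = [[random.uniform(0, 1) for _ in range(N_AGENTS)] for _ in range(N_AGENTS)]

def fitness(mask, ratio=0.5):
    """Lower is better: weighted sum of computation and communication load.
    `ratio` plays the role of the thesis's computation-ratio parameter."""
    members = [i for i in range(N_AGENTS) if mask >> i & 1]
    if not members:
        return float("inf")
    computation = sum(comp[i] for i in members)
    communication = sum(comm[i][j] for i in members for j in members if i != j)
    return ratio * computation + (1 - ratio) * communication

def crossover(a, b):
    point = random.randrange(1, N_AGENTS)        # one-point crossover on bitmasks
    lo = (1 << point) - 1
    return (a & lo) | (b & ~lo)

def mutate(mask, rate=0.1):
    for i in range(N_AGENTS):                    # flip each agent's membership
        if random.random() < rate:
            mask ^= 1 << i
    return mask

population = [random.randrange(1, 1 << N_AGENTS) for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness)
    parents = population[:10]                    # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = min(population, key=fitness)
print("best cluster:", [i for i in range(N_AGENTS) if best >> i & 1],
      "fitness:", round(fitness(best), 3))
```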

    Analysis of a collaborative scheduling model applied in a job shop manufacturing environment

    Collaborative Manufacturing Scheduling (CMS) is not yet a thoroughly explored decision-making practice, despite its potential, in the digital era, for combining the efforts of a set of entities, either persons or machines, that jointly cooperate to solve a more or less complex scheduling problem, namely one occurring in job shop manufacturing environments. In this paper, an interoperable scheduling system integrating a proposed scheduling model, along with varying kinds of solving algorithms, is put forward and analyzed through an industrial case study. The case study was decomposed into three application scenarios, enabling evaluation of the proposed scheduling model when prioritizing an internal performance measure (makespan) or an external one (number of tardy jobs), along with a third scenario assigning the same importance, or weight, to both kinds of performance measures. The results obtained show that the weighted application scenario reached a more balanced, and thus potentially more attractive, global solution to the scheduling problem considered, through the combination of different kinds of scheduling algorithms for the resolution of each underlying sub-problem according to the proposed scheduling model. Moreover, the decomposition of a larger, more complex scheduling problem into simpler sub-problems makes them easier to solve with the different solving algorithms available, while also yielding a wider range of alternative schedules to be explored and evaluated, thus enriching the scheduling problem-solving process. Future application to other types of manufacturing environments, namely those occurring in the context of extended, networked, distributed or virtual production systems integrating an increased and variable set of collaborating entities or factories, is also suggested. The project is funded by the FCT (Fundação para a Ciência e Tecnologia) through the R&D Units Project Scope: UIDB/00319/2020, and EXPL/EME-SIS/1224/2021
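    To make the weighting idea concrete, here is a minimal sketch (not the paper's interoperable system) that scores a job sequence on the two performance measures discussed, makespan and number of tardy jobs, under the three application scenarios. Job data, machine count and the normalisation choice are invented.

```python
import heapq

def evaluate(sequence, m=2, w_int=0.5, w_ext=0.5):
    """Dispatch (processing_time, due_date) jobs in order onto m identical
    machines; combine normalised makespan and tardy-job count by weight."""
    machines = [0.0] * m                      # next-free time per machine
    heapq.heapify(machines)
    tardy, makespan = 0, 0.0
    for proc, due in sequence:
        start = heapq.heappop(machines)       # earliest-free machine
        finish = start + proc
        heapq.heappush(machines, finish)
        tardy += finish > due
        makespan = max(makespan, finish)
    # Normalised, weighted combination (normalisation is an illustrative choice).
    return (w_int * makespan / sum(p for p, _ in sequence)
            + w_ext * tardy / len(sequence))

jobs = [(3, 4), (2, 9), (5, 6), (1, 12)]
print(evaluate(jobs))                         # balanced scenario
print(evaluate(jobs, w_int=1, w_ext=0))       # internal (makespan) priority
print(evaluate(jobs, w_int=0, w_ext=1))       # external (tardy jobs) priority
```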

    THE DEVELOPMENT OF A PREDICTIVE PROBABILITY MODEL FOR EFFECTIVE CONTINUOUS LEARNING AND IMPROVEMENT

    It is important for organizations to understand the factors responsible for establishing sustainable continuous improvement (CI) capabilities. This study uses learning curves as the basis for examining the learning obtained by team members doing work with and without the application of fundamental aspects of the Toyota Production System. The results are used to develop an effective model to guide organizational activities towards achieving the ability to continuously improve in a sustainable fashion. This research examines the effect of standardization and waste-elimination activities, supported by systematic problem solving, on team member learning at the work interface and on system performance. The results indicate that applying Standard Work principles and eliminating formally defined waste using the systematic 8-step problem-solving process positively impacts team member learning and performance, providing the foundation for continuous improvement. Compared to their untreated counterparts, treated teams exhibited increased, more uniformly distributed, and more sustained learning rates, as well as improved productivity as defined by decreased total throughput time and wait time. This was accompanied by reduced defect rates and a significant decrease in mental and physical team member burden. A major outcome of this research is a predictive probability model to guide sustainable CI development, using a simplified assessment tool aimed at identifying the essential organizational states required to support sustainable CI development
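    The study's analytical basis, learning curves, can be sketched with the classic Wright power law y = a·x^(−b), under which unit time falls by a fixed percentage with every doubling of cumulative repetitions. The fit below is a generic illustration on made-up cycle times, not the study's data or its exact model.

```python
import math

def fit_learning_curve(times):
    """Least-squares fit of log(y) = log(a) - b*log(x) for y = a * x**(-b)."""
    xs = [math.log(i + 1) for i in range(len(times))]
    ys = [math.log(t) for t in times]
    n = len(times)
    mx, my = sum(xs) / n, sum(ys) / n
    b = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my + b * mx)
    return a, b

# Hypothetical cycle times for successive repetitions of a task.
cycle_times = [10.0, 8.4, 7.6, 7.1, 6.7, 6.4, 6.2, 6.0]
a, b = fit_learning_curve(cycle_times)
# 2**(-b) is the fraction of time remaining after each doubling of output.
print(f"a={a:.2f}, b={b:.3f}, learning rate={2**-b:.1%} per doubling")
```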

    Optimal energy management system based on stochastic approach for a home microgrid with integrated responsive load demand and energy storage

    In recent years, increasing interest in developing small-scale, fully integrated energy resources in distributed power networks has led to the emergence of smart Microgrids (MG), in particular for distributed renewable energy resources integrated with wind turbine, photovoltaic and energy storage assets. In this paper, sustainable day-ahead scheduling of a grid-connected home-type Microgrid (H-MG) with integrated non-dispatchable/dispatchable distributed energy resources and responsive load demand is investigated, in particular to study simultaneously present uncontrollable and controllable production resources alongside responsive and non-responsive loads. An efficient energy management system (EMS) optimization algorithm based on mixed-integer linear programming (termed EMS-MILP), implemented in GAMS, is proposed for power production optimization with minimum hourly power system operational cost and sustainable electricity generation within an H-MG. The day-ahead scheduling of electric power and energy systems with shared renewable resources is also modelled as a MILP problem for solving the hourly economic-dispatch-constrained unit commitment, to demonstrate the ability of the EMS-MILP algorithm for an H-MG under realistic technical constraints when connected to the upstream grid. Numerical simulations highlight the effectiveness of the proposed algorithmic optimization capabilities for sustainable operation of smart H-MGs connected to a variety of global loads and resources to achieve the best power economization. Results demonstrate the effectiveness of the proposed algorithm and show a reduction in generated power cost of almost 21% in comparison with a conventional EMS
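    A minimal rendering of a day-ahead H-MG schedule as a MILP may help fix ideas. The paper's EMS-MILP is implemented in GAMS; the sketch below substitutes PuLP, and every tariff, profile, efficiency and limit is an invented placeholder.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

T = range(24)
price = [0.10] * 7 + [0.20] * 12 + [0.10] * 5                # $/kWh tariff
pv    = [0]*6 + [1, 2, 3, 4, 5, 5, 5, 4, 3, 2, 1] + [0]*7    # kW solar
load  = [2.0] * 24                                           # kW demand

prob = LpProblem("h_mg_day_ahead", LpMinimize)
grid = [LpVariable(f"grid_{t}", lowBound=0) for t in T]             # kW bought
chg  = [LpVariable(f"chg_{t}", lowBound=0, upBound=2) for t in T]   # charging
dis  = [LpVariable(f"dis_{t}", lowBound=0, upBound=2) for t in T]   # discharging
exc  = [LpVariable(f"exc_{t}", lowBound=0) for t in T]              # curtailed PV
soc  = [LpVariable(f"soc_{t}", lowBound=0, upBound=8) for t in T]   # kWh stored

prob += lpSum(price[t] * grid[t] for t in T)                 # minimise cost
for t in T:
    prob += grid[t] + pv[t] + dis[t] == load[t] + chg[t] + exc[t]  # balance
    prev = soc[t - 1] if t else 4                            # start half full
    prob += soc[t] == prev + 0.95 * chg[t] - dis[t] / 0.95   # storage dynamics
    mode = LpVariable(f"mode_{t}", cat="Binary")             # the integer part:
    prob += chg[t] <= 2 * mode                               # forbid charging and
    prob += dis[t] <= 2 * (1 - mode)                         # discharging at once

prob.solve()
print("daily grid cost: $", round(prob.objective.value(), 2))
```

    The binary charge/discharge mode is what makes this a MILP rather than a plain LP; a realistic EMS would add unit-commitment variables for dispatchable generators in the same way.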

    Facilitating Creative Exploratory Search with Multiple Networked Audio Devices Using HappyBrackets

    We present an audio-focused creative coding toolkit for deploying music programs to remote networked devices. It is designed to support efficient creative exploratory search in the context of the Internet of Things (IoT), where one or more devices must be configured, programmed and interact over a network, with applications in digital musical instruments, networked music performance and other digital experiences. Users can easily monitor and hack what multiple devices are doing on the fly, enhancing their ability to perform “exploratory search” in a creative workflow. We present two creative case studies using the system: the creation of a dance performance and the creation of a distributed musical installation. Analysing different activities within the production process, with a particular focus on the trade-off between more creative exploratory tasks and more standard configuring and problem-solving tasks, we show how the system supports creative exploratory search for multiple networked devices
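    As a generic illustration of the on-the-fly monitoring idea (this is not the HappyBrackets API), each device could broadcast a JSON heartbeat over UDP so a laptop on the same network can watch what every device is doing:

```python
import json, socket, time

PORT = 9999   # arbitrary choice for this sketch

def heartbeat(device_id, status, sock):
    """Broadcast one status message from a device."""
    msg = json.dumps({"id": device_id, "status": status, "t": time.time()})
    sock.sendto(msg.encode(), ("255.255.255.255", PORT))

def monitor():
    """Run on the laptop: print every heartbeat heard on the network."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    while True:
        data, addr = sock.recvfrom(4096)
        info = json.loads(data)
        print(f"{addr[0]}  {info['id']}: {info['status']}")

# On each device (hypothetical usage):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
# heartbeat("pi-kitchen", "playing drone @ 110 Hz", sock)
```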

    Developing Methods and Algorithms for Cloud Computing Management Systems in Industrial Polymer Synthesis Processes

    To date, the resources and computational capacity of companies have been insufficient to evaluate the technological properties of emerging products using mathematical modelling tools. Often, several calculations have to be performed with different initial data. A remote computing system using a high-performance cluster can overcome this challenge. This study aims to develop unified methods and algorithms for a remote computing management system for modelling polymer synthesis processes at a continuous production scale. The mathematical description of the problem-solving algorithms is based on a kinetic approach to process investigation. A conceptual scheme for the proposed service can be built as a multi-level architecture with distributed layers for data storage and computation. This approach provides the basis for a unified database of laboratory and computational experiments to address promising problems in the use of neural network technologies in chemical kinetics. The methods and algorithms embedded in the system eliminate the need for model description. The operation of the system was tested by simulating the simultaneous submission and computation of 15 to 30 tasks for an industrially significant polymer production process. Analysis of the time required showed a nearly 10-fold increase in the rate of operation when managing a set of similar tasks. The analysis shows that the described formulation and solution of problems is more time-efficient and provides better production modes. Doi: 10.28991/esj-2021-01324
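    The batch-management pattern the paper measures, launching 15 to 30 similar modelling tasks at once, can be sketched with a process pool. The kinetic "simulation" below is a placeholder, not the authors' polymer model or service API.

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def simulate(task):
    """Stand-in for integrating a kinetic model at the given conditions."""
    temp, initiator = task
    value = sum((temp * initiator) ** 0.5 / (i + 1) for i in range(10**6))
    return temp, initiator, value

# 20 tasks: a grid of hypothetical temperatures and initiator concentrations.
tasks = [(350 + 5 * i, 0.01 * j) for i in range(5) for j in range(1, 5)]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        futures = {pool.submit(simulate, t): t for t in tasks}
        for f in as_completed(futures):           # collect results as they finish
            temp, initiator, result = f.result()
            print(f"T={temp} K, [I]={initiator:.2f}: {result:.3f}")
```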

    A Mathematical Framework of Human Thought Process: Rectifying Software Construction Inefficiency and Identifying Characteristic Efficiencies of Networked Systems Via Problem-solution Cycle

    Problem: The lack of a theory explaining the human thought process latently affects the general perception of problem-solving activities. The present study theorizes the human thought process (HTP) to ascertain, in general, the effect of problem-solving inadequacy on efficiency.

    Method: To theorize the human thought process, basic human problem-solving activities were investigated through the lens of the problem-solution cycle (PSC). The scope of the PSC investigation focused on the inefficiency problem in software construction and the latent characteristic efficiencies of a similar networked system. To analyze these PSC activities, three mathematical quotients and a messaging wavefunction model, similar to Schrödinger's electronic wavefunction model, were derived for four intrinsic brain traits: intelligence, imagination, creativity and language. These were substantiated using appropriate empirical verifications. First, statistical analysis of the intelligence, imagination and creativity quotients was done using empirical data with global statistical views from: 1. the Standish Group International's 1994–2004 CHAOS report survey of software development project successes and failures; 2. 2000–2009 Global Creativity Index (GCI) data based on the 3Ts of economic development (technology, talent and tolerance indices) from 82 nations; 3. other varied localized success surveys from 1994–2009 and failure surveys from 1998–2010. These statistical analyses used a spliced decision Sperner system (SDSS) to show that the averages of all empirical scientific data on the successes and failures of software production within the specified periods are in excellent agreement with the theoretically derived values. Further, the catalytic effect of creativity (thought catalysis) in the human thought process is outlined and shown to agree with newly discovered branch-like nerve cells in the brains of mice (similar to the human brain). Second, the networked communication activities of the language trait during the PSC were scrutinized statistically using journal-journal citation data from 13 randomly selected major 1984 chemistry journals. With the aid of the aforementioned messaging wave formulation, computer simulations of message-phase "thermograms" and "chromatograms" were generated to provide messaging line spectra reflecting the behavioral messaging activities of the messaging network under study.

    Results: Theoretical computations stipulated that 66.67% of efficiency was due to interactions of the intelligence, imagination and creativity traits (multi-computational skills) and 33.33% was due to networked linkages of the language trait (aggregated language skills). The worldwide software production and economic data used were normally distributed with a significance level α of 0.005; there thus existed a permissible error of 1% attributed to the significance level of the normally distributed data. Of the brain-trait quotient statistics, the imagination quotient (IMGQ) score was 52.53% from the 1994–2004 CHAOS data analysis and 54.55% from the 2010 GCI data; their average reasonably approximates the 50th percentile of the cumulative distribution of problem-solving skills. The creativity quotient score was 0.99% from the 1994–2004 CHAOS data and 1.17% from the 2010 GCI data, averaging to nearly 1%. The chance of creativity and intelligence working together as joint problem-solving skills was consistently found to average 11.32% (1994–2004 CHAOS: 10.95%; 2010 GCI: 11.68%). The empirical data analysis also showed that the language inefficiency of thought flow η′(τ) was 35.0977% for the 1994–2004 CHAOS data and 34.9482% for the 2010 GCI data, averaging around 35%. On the success and failure of software production, statistical analysis of empirical data showed a 63.2% average efficiency for successful software production (1994–2012) and a 33.94% average inefficiency for failed software production (1998–2010). On the whole, software production projects had a bound efficiency approach level (BEAL) of 94.8%. In the messaging wave analysis of 13 journal-to-journal citations, the messaging phase-space graphs indicated a fundamental frequency (probable minimum message state) of 11.

    Conclusions: By comparison, using the cutoff level of printed editions of Journal Citation Reports to substitute for missing data values is inappropriate; values from the optimizing methods, however, harmonized with the fundamental frequency inferred from message wave analysis using informatics wave equation analysis (IWEA). Owing to its evenly spaced chronological data snapshots, the SDSS technique inherently diminishes the difficulty of handling large data volumes (big data) for analysis. From the CHAOS and GCI data analysis, the averaged CRTQ scores indicate that only about 1 percent of the entire human race, on average, can be considered exceptionally creative. However, in the art of software production, the siphoning effect of the existing latent language inefficiency suffocates the process of solution creation to an efficiency bound of 66.67%. With a BEAL value of 94.8% and a basic human error of 5.2%, it can reasonably be said that software production projects have delivered efficiently within the existing latent inefficiency. Consequently, by inference from the average language inefficiency of thought flow, an average language efficiency of 65% exists in the process of software production worldwide. This correlates very strongly with the existing average software production efficiency of 63.2%, around which the software crisis has stagnated since the inception of software creation. The persistent dismal performance of software production is attributable to the existing central focus on using a multiplicity of programming languages. Acting as an "efficiency buffer", the latter minimizes changes to efficiency in software production, thereby limiting software production efficiency theoretically to 66.67%. From both theoretical and empirical perspectives, this latently shrouds software production in a deficit maximum attainable efficiency (DMAE). The software crisis can only be improved drastically through policy-driven adoption of a universal standard supporting a very minimal number of programming languages. On average, the proposed universal standardization could save the world an estimated 6 trillion US dollars per year, which is currently lost through the existing inefficient software industry
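    The headline averages quoted in the Results can be checked directly from the per-dataset figures given there:

```python
# Averages of the CHAOS (1994-2004) and GCI (2010) figures quoted above.
imgq = (52.53 + 54.55) / 2            # 53.54 -> near the 50th-percentile claim
crtq = (0.99 + 1.17) / 2              # 1.08  -> the "near 1%" creativity figure
joint = (10.95 + 11.68) / 2           # 11.315 -> reported as 11.32%
lang_ineff = (35.0977 + 34.9482) / 2  # 35.02  -> the "around 35%" figure
print(imgq, crtq, joint, lang_ineff)
```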

    A distributed knowledge-based approach to flexible automation : the contract-net framework

    Includes bibliographical references (p. 26-29)

    Transportation Management in a Distributed Logistic Consumption System Under Uncertainty Conditions

    The problem of supply management in a supplier-to-consumer logistics transport system has been formulated and solved. The novelty of the formulation lies in the integrated accounting of costs in the logistic system, which simultaneously takes into account the cost of transporting products from suppliers to consumers as well as each consumer's costs of storing unsold product and losses due to possible shortages. The resulting optimization problem is no longer a standard linear programming problem. In addition, the work assumes that the solution should be sought taking into account the fact that the initial data of the problem are not deterministic. Traditional methods of describing the uncertainty of the source data are analyzed. It is concluded that, given the rapidly changing conditions of the delivery process in a distributed supplier-to-consumer system, it is advisable to move from a probability-theoretic representation of the source data to their description in terms of fuzzy mathematics. In particular, the fuzzy values of the demand for the delivered product for each consumer are defined by their membership functions. The distribution of supplies in the system is described by solving a mathematical programming problem with a nonlinear objective function and a set of linear constraints of the transport type. In forming the criterion, a technology is used to transform the membership functions of the fuzzy parameters of the problem into their probability-theoretic counterparts: density distributions of demand values. The task is reduced to finding, for each consumer, the quantity of ordered product that minimizes the average total cost of storing unsold product plus losses from shortage. The initial problem reduces to solving a set of integral equations, solved in general numerically. It is shown that in particular cases important for practice this solution is achieved analytically. The paper notes the insufficient adequacy of the traditionally used mathematical models for describing the fuzzy parameters of the problem, in particular the demand. Statistical processing of real demand data shows that the parameters of the membership functions of the corresponding fuzzy numbers are themselves fuzzy numbers. Acceptable mathematical models of the corresponding fuzzy numbers are formulated in terms of bifuzzy mathematics, and the relations describing the membership functions of the bifuzzy numbers are given. A formula is obtained for calculating the total losses from storage and shortage, taking into account the bifuzziness of demand. In this case, the initial task reduces to finding the distribution of supplies at which the maximum value of the total losses does not exceed a permissible value
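    Once the fuzzy demand has been transformed to a probability density, the single-consumer subproblem described, minimising expected storage cost of unsold product plus shortage losses, is the classic newsvendor model, whose analytic solution is the critical-fractile formula F(q*) = s/(s + h) for unit holding cost h and unit shortage loss s. A sketch with illustrative parameters (not the paper's data):

```python
from statistics import NormalDist

h, s = 2.0, 5.0                        # unit holding cost, unit shortage loss
demand = NormalDist(mu=100, sigma=15)  # demand density for one consumer

# Minimising E[h*max(q-D,0) + s*max(D-q,0)] gives F(q*) = s/(s+h).
q_star = demand.inv_cdf(s / (s + h))   # optimal order quantity
print(f"order {q_star:.1f} units (critical fractile {s/(s+h):.3f})")
```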