10 research outputs found
North American Fuzzy Logic Processing Society (NAFIPS 1992), volume 2
This document contains papers presented at the NAFIPS '92 North American Fuzzy Information Processing Society Conference. More than 75 papers were presented at this Conference, which was sponsored by NAFIPS in cooperation with NASA, the Instituto Tecnologico de Morelia, the Indian Society for Fuzzy Mathematics and Information Processing (ISFUMIP), the Instituto Tecnologico de Estudios Superiores de Monterrey (ITESM), the International Fuzzy Systems Association (IFSA), the Japan Society for Fuzzy Theory and Systems, and the Microelectronics and Computer Technology Corporation (MCC). Fuzzy set theory has led to a large number of diverse applications. Recently, interesting applications have been developed which involve the integration of fuzzy systems with adaptive processes such as neural networks and genetic algorithms. NAFIPS '92 was directed toward the advancement, commercialization, and engineering development of these technologies.
Some problems in the computation of sociolinguistic data
PhD Thesis
The research described in this thesis is concerned with some of the
problems encountered in the processing of sociolinguistic data.
Different methodologies are seen as different sets of strategies for
coping with the problems which arise from investigations of sociolinguistic
variability within any speech community.
One early approach to the analysis of sociolinguistic variation
(that of Labov: 1963, 1966) is discussed, and some of the difficulties
raised by this approach are indicated. One investigation of sociolinguistic
variability in a British urban setting (Trudgill: 1974) is also described
(Trudgill's study is based on Labov's (1966) general methodology).
The Tyneside Linguistic Survey (T.L.S.) is offered as an alternative
approach, which overcomes some of the problems inherent in Labov's methods.
The Department of Education and Science, Newcastle University
A framework for managing global risk factors affecting construction cost performance
Poor cost performance of construction projects has been a major concern for both
contractors and clients. The effective management of risk is thus critical to the success of any construction project and the importance of risk management has grown as projects have become more complex and competition has increased. Contractors have
traditionally used financial mark-ups to cover the risk associated with construction
projects, but as competition has increased and margins have tightened they can no longer rely on this strategy and must improve their ability to manage risk. Furthermore, the construction industry has witnessed significant changes, particularly in procurement
methods with clients allocating greater risks to contractors.
Evidence shows that there is a gap between existing risk management techniques and
tools, mainly built on normative statistical decision theory, and their practical application
by construction contractors. The main reason behind the lack of use is that risk decision
making within construction organisations is heavily based upon experience, intuition and
judgement and not on mathematical models.
This thesis presents a model for managing global risk factors affecting the cost
performance of construction projects. The model has been developed using a behavioural
decision approach, fuzzy logic, and artificial intelligence techniques. The
methodology adopted to conduct the research involved a thorough literature survey on
risk management, informal and formal discussions with construction practitioners to
assess the extent of the problem, a questionnaire survey to evaluate the importance of
global risk factors and, finally, repertory grid interviews aimed at eliciting relevant
knowledge. There are several approaches to categorising risks permeating construction projects. This
research groups risks into three main categories, namely organisation-specific, global and
Acts of God. It focuses on global risk factors because they are ill-defined, less
understood by contractors, and difficult to model, assess and manage, although they have
a huge impact on cost performance. Generally, contractors, especially in developing
countries, have insufficient experience and knowledge to manage them effectively. The
research identified the following groups of global risk factors as having a significant impact
on cost performance: estimator-related, project-related, fraudulent-practice-related,
competition-related, construction-related, economy-related and politics-related factors.
The model was tested for validity through a panel of validators (experts) and cross-sectional
case studies, and the general conclusion was that it could provide valuable
assistance in the management of global risk factors since it is effective, efficient, flexible
and user-friendly. The findings stress the need to depart from traditional approaches and
to explore new directions in order to equip contractors with effective risk management
tools.
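The abstract does not specify the thesis's actual fuzzy formulation; purely as an illustrative sketch, the 0-10 rating scale, the set boundaries, and the centroid values below are assumptions. A global risk factor's rating can be mapped to fuzzy severity sets and defuzzified to a crisp score:

```python
def trimf(x, a, b, c):
    """Triangular membership function peaking at b; zero outside [a, c]."""
    if x < a or x > c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a) if b > a else 1.0
    return (c - x) / (c - b) if c > b else 1.0

# Hypothetical rating of a single global risk factor on a 0-10 scale
rating = 6.5
mu = {
    "low":    trimf(rating, 0.0, 0.0, 5.0),
    "medium": trimf(rating, 2.5, 5.0, 7.5),
    "high":   trimf(rating, 5.0, 10.0, 10.0),
}
# Simple height defuzzification: membership-weighted average of set centroids
centroids = {"low": 1.67, "medium": 5.0, "high": 8.33}
severity = sum(mu[s] * centroids[s] for s in mu) / sum(mu.values())
print(mu, round(severity, 2))
```

In a full model, ratings for each factor group (estimator-related, project-related, and so on) would feed a rule base; this sketch shows only the membership and defuzzification steps.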
Comments on the cybernetics of stability and regulation in social systems
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The methods and principles of cybernetics are applied to a discussion of stability and regulation in social systems from a global viewpoint. The fundamental but still classical notion of stability as applied to homeostatic and ultrastable systems is discussed, with particular reference to a specific well-studied example of a closed social group (the Tsembaga of New Guinea, studied by Roy Rappaport).
The discussion extends to the problem of evolution in large systems and the question of regulating evolution is addressed without special qualifications. A more comprehensive idea of stability is introduced as the argument turns to the problem of evolution for viability in general.
Concepts pertaining to the problem of evolution are exemplified by a computer simulation model of an abstractly defined ecosystem in which various dynamic processes occur allowing the study of adaptive and evolutionary behaviour. In particular, the role of coalition formation and cooperative behaviour is stressed as a key factor in the evolution of complexity.
The model consists of a population of several species of dimensionless automata inhabiting a geometrically defined environment in which a commodity essential for metabolic requirements (food) appears. Automata can sense properties of their environment, move about it, compete for food, reproduce or combine into coalitions thus forming new and more complex species. Each species is associated with a specific genotype from which the species’ behavioural characteristics (its phenotype) are derived. Complexity and survival efficiency of species increases through coalition formation, an event which occurs when automata are faced with an “undecidable” situation that is resolvable only by forming a new and more complex organization.
Exogenous manipulation of the food distribution pattern and other critical factors produces different environmental conditions resulting in different behaviour patterns of automata and in different evolutionary “pathways.”
Eve-1, the computer program developed to implement this model, accepts a high-level command language which allows for the setting of parameters, definition of initial configurations, and control of output formats. Results of simulation are produced graphically and include various pertinent tables. The program was given a modular hierarchical structure which allows easy generation of new versions incorporating different sets of rules.
The model strives to capture the essence of the evolution of complexity viewed as a general process rather than to describe the evolution of a particular “real” system. In this respect it is not context-specific, and the behaviours observable in different runs can receive various interpretations depending on specific identifications. Of these, biological, ecological, and sociological interpretations are the most obvious, and the latter, in particular, is stressed.
J. M. Kaplan Fund in New York
A role for introspection in AI research
The main thesis is that introspection is recommended for the development of anthropic AI.
Human-like AI, distinct from rational AI, would suit robots for care for the elderly and for other tasks that require interaction with naïve humans. “Anthropic AI” is a sub-type of human-like AI, aiming for the pre-cultured, universal intelligence that is available to healthy humans regardless of time and civilisation. This is contrasted with western, modern, well-trained and adult intelligence that is often the focus of AI. Anthropic AI would pick up local cultures and habits, ignoring optimality. Introspection is recommended for the AI developer, as a source of ideas for designing an artificial mind, in the context of technology rather than science. Existing notions of introspection are analysed, and the aspiration for “clean” or “good” introspection is exposed as a mirage. Nonetheless, introspection is shown to be a legitimate source of ideas for AI using considerations of the contexts of discovery vs. justification. Moreover, introspection is shown to be a positively plausible basis for ideas for AI since if a teacher uses introspection to extract mental skills from themselves to transmit them to a student, an AI developer can also use introspection to uncover the human skills that they want to transfer to a computer. Methods and pitfalls of this approach are detailed, including the common error of polluting one's introspection with highly-educated notions such as mathematical methods.
Examples are coded and run, showing promising learning behaviour. This is interpreted as a compromise between Classic AI and Dreyfus's tradition. So far AI practitioners have largely ignored the subjective, while the phenomenologists have not written code; this thesis bridges that gap. One of the examples is shown to have Gadamerian characteristics, as recommended by Winograd and Flores (1986). This also serves as a response to Dreyfus's more recent publications critiquing AI (Dreyfus, 2007, 2012).
Collected Papers (on Physics, Artificial Intelligence, Health Issues, Decision Making, Economics, Statistics), Volume XI
This eleventh volume of Collected Papers includes 90 papers comprising 988 pages on Physics, Artificial Intelligence, Health Issues, Decision Making, Economics, Statistics, written between 2001 and 2022 by the author alone or in collaboration with the following 84 co-authors (alphabetically ordered) from 19 countries: Abhijit Saha, Abu Sufian, Jack Allen, Shahbaz Ali, Ali Safaa Sadiq, Aliya Fahmi, Atiqa Fakhar, Atiqa Firdous, Sukanto Bhattacharya, Robert N. Boyd, Victor Chang, Victor Christianto, V. Christy, Dao The Son, Debjit Dutta, Azeddine Elhassouny, Fazal Ghani, Fazli Amin, Anirudha Ghosha, Nasruddin Hassan, Hoang Viet Long, Jhulaneswar Baidya, Jin Kim, Jun Ye, Darjan Karabašević, Vasilios N. Katsikis, Ieva Meidutė-Kavaliauskienė, F. Kaymarm, Nour Eldeen M. Khalifa, Madad Khan, Qaisar Khan, M. Khoshnevisan, Kifayat Ullah, Volodymyr Krasnoholovets, Mukesh Kumar, Le Hoang Son, Luong Thi Hong Lan, Tahir Mahmood, Mahmoud Ismail, Mohamed Abdel-Basset, Siti Nurul Fitriah Mohamad, Mohamed Loey, Mai Mohamed, K. Mohana, Kalyan Mondal, Muhammad Gulfam, Muhammad Khalid Mahmood, Muhammad Jamil, Muhammad Yaqub Khan, Muhammad Riaz, Nguyen Dinh Hoa, Cu Nguyen Giap, Nguyen Tho Thong, Peide Liu, Pham Huy Thong, Gabrijela Popović, Surapati Pramanik, Dmitri Rabounski, Roslan Hasni, Rumi Roy, Tapan Kumar Roy, Said Broumi, Saleem Abdullah, Muzafer Saračević, Ganeshsree Selvachandran, Shariful Alam, Shyamal Dalapati, Housila P. Singh, R. Singh, Rajesh Singh, Predrag S. Stanimirović, Kasan Susilo, Dragiša Stanujkić, Alexandra Şandru, Ovidiu Ilie Şandru, Zenonas Turskis, Yunita Umniyati, Alptekin Ulutaș, Maikel Yelandi Leyva Vázquez, Binyamin Yusoff, Edmundas Kazimieras Zavadskas, Zhao Loon Wang.
Second International Conference on Sustainable Futures: Environmental, Technological, Social and Economic Matters (ICSF 2021). Kryvyi Rih, Ukraine, May 19-21, 2021
Second International Conference on Sustainable Futures: Environmental, Technological, Social and Economic Matters (ICSF 2021). Kryvyi Rih, Ukraine, May 19-21, 2021.
Bioinspired metaheuristic algorithms for global optimization
This paper presents a concise comparison study of newly developed bioinspired algorithms for global optimization problems. Three different metaheuristic techniques, namely Accelerated Particle Swarm Optimization (APSO), the Firefly Algorithm (FA), and the Grey Wolf Optimizer (GWO), are investigated and implemented in the Matlab environment. These methods are compared on four unimodal and multimodal nonlinear functions in order to find global optimum values. Computational results indicate that GWO outperforms the other intelligent techniques, and that all the aforementioned algorithms can be successfully used for the optimization of continuous functions.
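The paper's Matlab implementations are not reproduced here; the following is a minimal Python sketch of the Grey Wolf Optimizer applied to the unimodal sphere benchmark. The population size, iteration count, and bounds are illustrative choices, not the paper's settings:

```python
import numpy as np

def sphere(x):
    """Unimodal benchmark: global minimum 0 at the origin."""
    return np.sum(x ** 2)

def gwo(f, dim=5, n_wolves=20, iters=200, lb=-10.0, ub=10.0, seed=0):
    """Minimal Grey Wolf Optimizer: wolves move toward the three best
    solutions (alpha, beta, delta) with a coefficient decaying from 2 to 0."""
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        fitness = np.array([f(w) for w in wolves])
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[:3]]   # fancy indexing copies rows
        a = 2.0 * (1 - t / iters)                # exploration -> exploitation
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a
                C = 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += leader - A * D
            wolves[i] = np.clip(new_pos / 3.0, lb, ub)
    fitness = np.array([f(w) for w in wolves])
    return wolves[np.argmin(fitness)], fitness.min()

best_x, best_f = gwo(sphere)
print(best_f)  # near zero for a successful run
```

The three-leader average is the core of GWO; APSO and FA differ only in how candidate positions are updated, so the same benchmark harness can compare all three.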
Experimental Evaluation of Growing and Pruning Hyper Basis Function Neural Networks Trained with Extended Information Filter
In this paper we test the Extended Information Filter (EIF) for sequential training of Hyper Basis Function neural networks with growing and pruning ability (HBF-GP). The HBF neuron allows different scaling of input dimensions to provide a better generalization property when dealing with complex nonlinear problems in engineering practice. The main intuition behind the HBF is a generalization of the Gaussian type of neuron that applies a Mahalanobis-like distance as the distance metric between an input training sample and the prototype vector. We exploit the concept of a neuron's significance and allow growing and pruning of HBF neurons during the sequential learning process. From an engineer's perspective, the EIF is attractive for training neural networks because it allows a designer to have scarce initial knowledge of the system/problem. An extensive experimental study shows that an HBF neural network trained with the EIF achieves the same prediction error and compactness of network topology as one trained with the EKF, but without the need to know the initial state uncertainty, which is its main advantage over the EKF.
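As a sketch of the neuron type described above: the diagonal per-dimension scaling below is an assumption made for illustration; the paper's full HBF formulation may use a complete scaling matrix in the Mahalanobis-like distance.

```python
import numpy as np

def hbf_activation(x, center, scales):
    """Hyper basis function neuron: Gaussian of a Mahalanobis-like distance
    with a separate (here diagonal) scale for each input dimension."""
    d2 = np.sum(((x - center) / scales) ** 2)  # weighted squared distance
    return np.exp(-d2)

c = np.array([0.0, 0.0])
# Unequal scales: the neuron responds more broadly along the second dimension
a = hbf_activation(np.array([1.0, 2.0]), c, np.array([1.0, 4.0]))
print(a)  # strictly between 0 and 1 away from the center
```

An ordinary radial basis function neuron is the special case where all scales are equal; letting each dimension have its own scale is what gives the HBF its extra generalization flexibility.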