Constraint analysis for aircraft landing in distributed crewing contexts
The aim of this paper is to analyze the human-factors-related and methodological constraints that prevent the distributed crewing, or single-pilot, operational concept from being pushed forward in commercial aviation. First, it has been argued that alternatives to current commercial flight operations are not necessarily constrained by technology, but by the human factors characteristics of the socio-technical systems enabling these operations. In this paper, we present a constraint analysis of the landing phase of flight (both manual and automatic) using Cognitive Work Analysis (CWA). Because CWA links constraints related to the human and non-human elements of the system and their interactions, it supports exploring systemic design solutions for distributed crewing operations. We argue that automatic landing calls for designing for distributed situational awareness, whereas manual landing calls for designing novel human roles in the overall system. Second, the distributed crewing concept is being researched by several research groups simultaneously and with various methodologies, including expert interviews, semi-structured task analysis, experiments, and policy and historical analysis. In the second half of the paper we argue that successfully progressing towards distributed crewing will require collaboration between research groups and the integration of findings obtained with mixed methods. We explore strategies for mixed-method integration in the context of designing distributed crewing operations.
ToyArchitecture: Unsupervised Learning of Interpretable Models of the World
Research in Artificial Intelligence (AI) has focused mostly on two extremes:
either on small improvements in narrow AI domains, or on universal theoretical
frameworks which are usually uncomputable, incompatible with theories of
biological intelligence, or lack practical implementations. The goal of this
work is to combine the main advantages of the two: to follow a big picture
view, while providing a particular theory and its implementation. In contrast
with purely theoretical approaches, the resulting architecture should be usable
in realistic settings, but also form the core of a framework containing all the
basic mechanisms, into which it should be easier to integrate additional
required functionality.
In this paper, we present a novel, purposely simple, and interpretable
hierarchical architecture which combines multiple different mechanisms into one
system: unsupervised learning of a model of the world, learning the influence
of one's own actions on the world, model-based reinforcement learning,
hierarchical planning and plan execution, and symbolic/sub-symbolic integration
in general. The learned model is stored in the form of hierarchical
representations with the following properties: 1) they are increasingly more
abstract, but can retain details when needed, and 2) they are easy to
manipulate in their local and symbolic-like form, thus also allowing one to
observe the learning process at each level of abstraction. On all levels of the
system, the representation of the data can be interpreted in both a symbolic
and a sub-symbolic manner. This enables the architecture to learn efficiently
using sub-symbolic methods and to employ symbolic inference.
What is Computational Intelligence and where is it going?
What is Computational Intelligence (CI) and what are its relations with Artificial Intelligence (AI)? A brief survey of the scope of CI journals and of books with "computational intelligence" in their title shows that at present it is an umbrella for three core technologies (neural, fuzzy and evolutionary), their applications, and selected fashionable pattern recognition methods. At present CI has no comprehensive foundations and is more a bag of tricks than a solid branch of science. A change of focus from methods to challenging problems is advocated, with CI defined as the part of computer and engineering sciences devoted to the solution of non-algorithmizable problems. In this view AI is a part of CI focused on problems related to higher cognitive functions, while the rest of the CI community works on problems related to perception and control, or lower cognitive functions. Grand challenges on both sides of this spectrum are addressed.
When decision support systems fail: insights for strategic information systems from Formula 1
Decision support systems (DSS) are sophisticated tools that increasingly take advantage of big data and are used to design and implement individual- and organization-level strategic decisions. Yet, when organizations rely excessively on their potential, the outcome may be decision-making failure, particularly when such tools are applied under high-pressure and turbulent conditions. Partial understanding and unidimensional interpretation can prevent learning from failure. Building on a practice perspective, we study an iconic case of strategic failure in Formula 1 racing. Our approach, which integrates the decision maker as well as the organizational and material context, identifies three interrelated sources of strategic failure that are worth investigating for decision-makers using DSS and big data: (1) the situated nature and affordances of decision-making; (2) the distributed nature of cognition in decision-making; and (3) the performativity of the DSS. We outline specific research questions and their implications for firm performance and competitive advantage. Finally, we advance an agenda that can help close timely gaps in strategic IS research.
Collaborative assessment of information provider's reliability and expertise using subjective logic
Q&A social media have gained a lot of attention in recent years. People rely on these sites to obtain information due to a number of advantages they offer over conventional sources of knowledge (e.g., asynchronous and convenient access). However, for the same question one may find highly contradictory answers, causing ambiguity with respect to the correct information. This can be attributed to the presence of unreliable and/or non-expert users. These two attributes (reliability and expertise) significantly affect the quality of the answer/information provided. We present a novel approach for estimating these user characteristics that relies on human cognitive traits. In brief, we propose that each user monitor the activity of her peers (on the basis of responses to questions asked by her) and observe their compliance with predefined cognitive models. These observations lead to local assessments that can be further fused to obtain a reliability and expertise consensus for every other user in the social network (SN). For the aggregation part we use subjective logic. To the best of our knowledge this is the first study of this kind in the context of Q&A SNs. Our proposed approach is highly distributed; each user can individually estimate the expertise and the reliability of her peers using her direct interactions with them and our framework. The online SN (OSN), which can be considered a distributed database, performs continuous data aggregation for users' expertise and reliability assessment in order to reach a consensus. We emulate a Q&A SN to examine various performance aspects of our algorithm (e.g., convergence time, responsiveness, etc.). Our evaluations indicate that it can accurately assess the reliability and the expertise of a user with a small number of samples and can successfully react to changes in the latter's behavior, provided that the cognitive traits hold in practice. © 2011 ICST
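The fusion step described above can be illustrated with subjective logic's standard cumulative fusion operator. This is a minimal sketch under the simplifying assumption that an opinion is a (belief, disbelief, uncertainty) triple with a shared base rate; the paper's exact operator and cognitive-model details may differ.

```python
def cumulative_fuse(op1, op2):
    """Cumulative fusion of two independent subjective-logic opinions.

    Each opinion is a (belief, disbelief, uncertainty) triple with
    belief + disbelief + uncertainty = 1; a common base rate is assumed.
    """
    b1, d1, u1 = op1
    b2, d2, u2 = op2
    k = u1 + u2 - u1 * u2
    if k == 0.0:  # both opinions dogmatic (u1 = u2 = 0): average as the limit case
        return ((b1 + b2) / 2, (d1 + d2) / 2, 0.0)
    return ((b1 * u2 + b2 * u1) / k,   # fused belief
            (d1 * u2 + d2 * u1) / k,   # fused disbelief
            (u1 * u2) / k)             # fused uncertainty, smaller than either input

# Fusing two independent local assessments of the same peer:
fused = cumulative_fuse((0.8, 0.1, 0.1), (0.6, 0.2, 0.2))
```

Fusing many observers' local assessments in this way is how a network-wide consensus on a user's reliability or expertise can be accumulated: each fusion keeps the triple normalized while shrinking the uncertainty component.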
Multi-agent knowledge integration mechanism using particle swarm optimization
This is the post-print version of the final paper published in Technological Forecasting and Social Change. The published article is available from the link below. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. Copyright © 2011 Elsevier B.V. Unstructured group decision-making is burdened with several central difficulties: unifying the knowledge of multiple experts in an unbiased manner, and computational inefficiencies. In addition, a proper means of storing such unified knowledge for later use has not yet been established. Storage difficulties stem from the integration of the logic underlying multiple experts' decision-making processes and the structured quantification of the impact of each opinion on the final product. To address these difficulties, this paper proposes a novel approach called the multiple agent-based knowledge integration mechanism (MAKIM), in which a fuzzy cognitive map (FCM) is used as a knowledge representation and storage vehicle. In this approach, we use particle swarm optimization (PSO) to adjust causal relationships and causality coefficients from the perspective of global optimization. Once an optimized FCM is constructed, an agent-based model (ABM) is applied to the inference of the FCM to solve real-world problems. The final aggregate knowledge is stored in FCM form and is used to produce proper inference results for other target problems. To test the validity of our approach, we applied MAKIM to a real-world group decision-making problem, an IT project risk assessment, and found MAKIM to be statistically robust.
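The FCM inference that the agents drive can be sketched as below. This is a generic sigmoid-threshold FCM update, not necessarily the paper's exact formulation, and the weight matrix `W` here is an illustrative stand-in for the PSO-optimized causality coefficients.

```python
import numpy as np

def fcm_infer(W, state, steps=200, lam=1.0):
    """Iterate a fuzzy cognitive map to an (approximate) steady state.

    W[i, j] is the causal weight of concept i on concept j; each update
    applies A(t+1) = sigmoid(A(t) + A(t) @ W), squashing activations
    into (0, 1).
    """
    a = np.asarray(state, dtype=float)
    W = np.asarray(W, dtype=float)
    for _ in range(steps):
        a = 1.0 / (1.0 + np.exp(-lam * (a + a @ W)))
    return a

# Two concepts: concept 0 promotes concept 1, concept 1 inhibits concept 0.
W = [[0.0, 0.5], [-0.3, 0.0]]
steady = fcm_infer(W, [1.0, 0.0])
```

In a PSO-driven setup, each particle would encode a candidate `W`, with fitness measured by how closely the map's steady state reproduces the experts' judgments; the snippet above is only the inner inference loop.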
Towards engineering ontologies for cognitive profiling of agents on the semantic web
Research shows that most agent-based collaborations suffer from a lack of flexibility. This is because most agent-based applications assume pre-defined knowledge of agents' capabilities and/or neglect basic cognitive and interactional requirements in multi-agent collaboration. The contribution of this paper is that it brings in cognitive models (inspired by the cognitive sciences and HCI) and proposes architectural and knowledge-based requirements for agents to structure ontological models for cognitive profiling. This increases cognitive awareness between agents, which in turn promotes flexibility, reusability and predictability of agent behavior, thus contributing towards minimizing the cognitive overload incurred on humans. The semantic web is used as an action-mediating space, where a shared knowledge base in the form of ontological models provides affordances for improving cognitive awareness.
Data Envelopment Analysis (DEA) approach to the efficiency of the transport manufacturing industry in Malaysia
The objective of this study was to measure the technical efficiency score of the transport manufacturing industry in Malaysia using data envelopment analysis (DEA) from 2005 to 2010. The efficiency score analysis used only two inputs, i.e., capital and labor, and one output, i.e., total sales. The results show that the average efficiency score of the Banker, Charnes, Cooper - Variable Returns to Scale (BCC-VRS) model is higher than that of the Charnes, Cooper, Rhodes - Constant Returns to Scale (CCR-CRS) model. Based on the BCC-VRS model, the average efficiency score was at a moderate level, and only four sub-industries recorded an average efficiency score of more than 0.50 during the period of study. This result suggests that the transport manufacturing industry needs to increase investment, especially in human capital such as employee training, increase communication expenditure such as ICT, and carry out joint ventures as well as research and development activities to enhance industry efficiency.
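The CCR-CRS envelopment model underlying scores like these can be sketched as a small linear program. This is a textbook input-oriented formulation under assumed data shapes, not the study's own computation; `ccr_efficiency` and the toy data are illustrative, and the BCC-VRS variant would add the convexity constraint that the lambdas sum to one.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR-CRS efficiency of decision-making unit o.

    X is (n_units, n_inputs), Y is (n_units, n_outputs). The LP minimizes
    theta such that a nonnegative combination of peer units uses at most
    theta times unit o's inputs while producing at least its outputs.
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n = X.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                    # variables: [theta, lam_1..lam_n]
    A_ub, b_ub = [], []
    for i in range(X.shape[1]):                   # sum_j lam_j * x_ji <= theta * x_oi
        A_ub.append(np.r_[-X[o, i], X[:, i]])
        b_ub.append(0.0)
    for r in range(Y.shape[1]):                   # sum_j lam_j * y_jr >= y_or
        A_ub.append(np.r_[0.0, -Y[:, r]])
        b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] + [(0.0, None)] * n, method="highs")
    return res.x[0]

# Toy data: unit 1 uses twice the input of unit 0 for the same output,
# so unit 0 is efficient (score 1.0) and unit 1 scores 0.5.
X = [[2.0], [4.0]]   # single input for brevity (stand-in for capital/labor)
Y = [[1.0], [1.0]]   # stand-in for total sales
```

An efficiency score below 1.0 means the unit could, in principle, produce its current output with proportionally less input than it actually uses.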