A New Constructivist AI: From Manual Methods to Self-Constructive Systems
The development of artificial intelligence (AI) systems has to date been largely one of manual labor. This constructionist approach to AI has resulted in systems with limited-domain application and severe performance brittleness. No AI architecture to date incorporates, in a single system, the many features that make natural intelligence general-purpose, including system-wide attention, analogy-making, system-wide learning, and various other complex transversal functions. Going beyond current AI systems will require significantly more complex system architecture than attempted to date. The heavy reliance on direct human specification and intervention in constructionist AI brings severe theoretical and practical limitations to any system built that way.
One way to address the challenge of artificial general intelligence (AGI) is replacing a top-down architectural design approach with methods that allow the system to manage its own growth. This calls for a fundamental shift from hand-crafting to self-organizing architectures and self-generated code – what we call a constructivist AI approach, in reference to the self-constructive principles on which it must be based. Methodologies employed for constructivist AI will be very different from today's software development methods; instead of relying on direct design of mental functions and their implementation in a cognitive architecture, they must address the principles – the "seeds" – from which a cognitive architecture can automatically grow. In this paper I describe the argument in detail and examine some of the implications of this impending paradigm shift.
Reclaiming human machine nature
Extending and modifying its domain of life through artifact production is one of the main characteristics of humankind. From the first hominids, who used a wooden stick or a stone to extend their upper limbs and augment the strength of their gestures, to today's systems engineers, who use technologies to augment human cognition, perception, and action, extending the capabilities of the human body remains a central issue. For more than fifty years, cybernetics, computer science, and cognitive science have imposed a single reductionist model of human-machine systems: cognitive systems. Inspired by philosophy, behaviorist psychology, and the information-processing metaphor, the cognitive-system paradigm requires a functional view and a functional analysis in the human-systems design process. Under that design approach, humans have been reduced to their metaphysical and functional properties in a new dualism, and the requirements of the human body have been left to physical ergonomics or "physiology". With multidisciplinary convergence, the issues of "human-machine" systems and "human artifacts" are evolving. The loss of biological and social boundaries between human organisms and interactive, informational physical artifacts calls into question current engineering methods and the ergonomic design of cognitive systems. New developments in human-machine systems for intensive care, human space activities, or bio-engineering systems require grounding human-systems design on a renewed epistemological framework for future models of human systems and evidence-based "bio-engineering". In that context, reclaiming human factors, the augmented human, and human-machine nature is a necessity.
Comment: Published in HCI International 2014, Heraklion, Greece (2014)
Designing as Construction of Representations: A Dynamic Viewpoint in Cognitive Design Research
This article presents a cognitively oriented viewpoint on design. It focuses
on cognitive, dynamic aspects of real design, i.e., the actual cognitive
activity implemented by designers during their work on professional design
projects. Rather than conceiving designing as problem solving - Simon's
symbolic information processing (SIP) approach - or as a reflective practice or
some other form of situated activity - the situativity (SIT) approach - we
consider that, from a cognitive viewpoint, designing is most appropriately
characterised as a construction of representations. After a critical discussion
of the SIP and SIT approaches to design, we present our viewpoint. This
presentation concerns the evolving nature of representations regarding levels
of abstraction and degrees of precision, the function of external
representations, and specific qualities of representation in collective design.
Designing is described at three levels: the organisation of the activity, its
strategies, and its design-representation construction activities (different
ways to generate, transform, and evaluate representations). Even if we adopt a
"generic design" stance, we claim that design can take different forms
depending on the nature of the artefact, and we propose some candidates for
dimensions that allow a distinction to be made between these forms of design.
We discuss the potential specificity of HCI design, and the lack of cognitive
design research occupied with the quality of design. We close our discussion of
representational structures and activities by an outline of some directions
regarding their functional linkages.
Autonomic computing architecture for SCADA cyber security
Cognitive computing relates to intelligent computing platforms that are based on the disciplines of artificial intelligence, machine learning, and other innovative technologies. These technologies can be used to design systems that mimic the human brain to learn about their environment and can autonomously predict an impending anomalous situation. IBM first used the term 'Autonomic Computing' in 2001 to combat the looming complexity crisis (Ganek and Corbi, 2003). The concept has been inspired by the human biological autonomic system. An autonomic system is self-healing, self-regulating, self-optimising and self-protecting (Ganek and Corbi, 2003). Therefore, the system should be able to protect itself against both malicious attacks and unintended mistakes by the operator.
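The self-healing and self-protecting behaviour described above is usually organised as a monitor-analyse-plan-execute loop over shared knowledge (the MAPE-K pattern from the autonomic computing literature). The sketch below is illustrative only: the sensor baseline, tolerance, and action names are assumptions, not part of any real SCADA product.

```python
# A minimal sketch of an autonomic (MAPE-K style) control loop over a
# hypothetical SCADA sensor feed; names and thresholds are illustrative.
from dataclasses import dataclass, field

@dataclass
class AutonomicManager:
    """Monitor-Analyse-Plan-Execute loop over shared knowledge."""
    baseline: float                       # expected sensor reading (knowledge)
    tolerance: float = 0.2                # allowed relative deviation
    log: list = field(default_factory=list)

    def monitor(self, reading: float) -> float:
        self.log.append(reading)          # collect raw telemetry
        return reading

    def analyse(self, reading: float) -> bool:
        # flag an anomaly when the reading drifts outside the tolerance band
        return abs(reading - self.baseline) > self.tolerance * self.baseline

    def plan(self, anomalous: bool) -> str:
        return "isolate_and_alert" if anomalous else "continue"

    def execute(self, action: str) -> str:
        # self-protecting step: a real system would quarantine the device here
        return action

    def step(self, reading: float) -> str:
        return self.execute(self.plan(self.analyse(self.monitor(reading))))

mgr = AutonomicManager(baseline=100.0)
print(mgr.step(102.0))   # within tolerance: loop continues
print(mgr.step(150.0))   # 50% deviation: self-protecting action fires
```

The point of the pattern is that detection and response close the loop without operator intervention, which is what distinguishes an autonomic system from a conventional alarm pipeline.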
Data Modeling with Large Random Matrices in a Cognitive Radio Network Testbed: Initial Experimental Demonstrations with 70 Nodes
This short paper reports initial experimental demonstrations of the theoretical framework: the massive amount of data in a large-scale cognitive radio network can be naturally modeled as (large) random matrices. In particular, using experimental data we demonstrate that the empirical spectral distribution of the large sample covariance matrix (a Hermitian random matrix) agrees with its theoretical distribution (the Marchenko-Pastur law). On the other hand, the eigenvalues of the large data matrix (a non-Hermitian random matrix) are experimentally found to follow the single ring law, a theoretical result discovered relatively recently. To the best of our knowledge, this paper is the first such attempt, in the context of large-scale wireless networks, to compare theoretical predictions with experimental findings.
Comment: 4 pages, 11 figures
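The Marchenko-Pastur claim can be checked numerically even without testbed data: for a p x n matrix of i.i.d. unit-variance entries, the eigenvalues of the sample covariance matrix concentrate on the support [(1 - sqrt(c))^2, (1 + sqrt(c))^2] with c = p/n. The sketch below uses synthetic Gaussian data (not the authors' 70-node measurements) purely to illustrate the law.

```python
# Numerical sketch of the Marchenko-Pastur support, using synthetic data.
# p = 70 mimics the 70-node setting; the data itself is an assumption.
import numpy as np

rng = np.random.default_rng(0)
p, n = 70, 700                      # dimension and sample count; c = p/n = 0.1
X = rng.standard_normal((p, n))     # i.i.d. unit-variance entries
S = X @ X.T / n                     # sample covariance matrix (Hermitian)
eigs = np.linalg.eigvalsh(S)        # real eigenvalues of the Hermitian matrix

c = p / n
lo, hi = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
print(f"empirical range [{eigs.min():.3f}, {eigs.max():.3f}], "
      f"MP support [{lo:.3f}, {hi:.3f}]")
```

Up to finite-size fluctuations, the empirical extremes land near the theoretical edges; deviations of measured network data from this baseline are exactly what the paper's framework is designed to detect.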
Negative Results in Computer Vision: A Perspective
A negative result occurs when the outcome of an experiment or a model is not
what is expected, or when a hypothesis does not hold. Although often overlooked
by the scientific community, negative results are results, and they carry value.
While this topic has been extensively discussed in other fields such as social
sciences and biosciences, less attention has been paid to it in the computer
vision community. The unique characteristics of computer vision, particularly
its experimental aspect, call for a special treatment of this matter. In this
paper, I will address what makes negative results important, how they should be
disseminated and incentivized, and what lessons can be learned from cognitive
vision research in this regard. Further, I will discuss issues such as computer
vision and human vision interaction, experimental design and statistical
hypothesis testing, explanatory versus predictive modeling, performance
evaluation, and model comparison, as well as computer vision research culture.
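Statistical hypothesis testing is central to reporting a negative result honestly: "model A is not significantly better than model B" is itself a finding. The sketch below is illustrative only; the accuracy numbers and the two-sample permutation test are assumptions standing in for a real evaluation protocol.

```python
# Illustrative two-sample permutation test whose outcome is a "negative
# result": the observed accuracy gap is not significant. Data is synthetic.
import random

random.seed(1)
model_a = [0.71, 0.69, 0.73, 0.70, 0.72]   # accuracy over 5 random seeds
model_b = [0.72, 0.70, 0.71, 0.69, 0.74]   # a rival model, same protocol

def perm_test(a, b, n_perm=10_000):
    """Fraction of label permutations with a mean gap >= the observed one."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)             # break the model-A/model-B labels
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            hits += 1
    return hits / n_perm

p_value = perm_test(model_a, model_b)
print(f"p = {p_value:.3f}")   # a large p-value: no evidence of a difference
```

Reporting such a p-value alongside the comparison, rather than only leaderboard deltas, is one concrete way the computer vision community could make negative results publishable and reproducible.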