Adjectival Retention in Persuasive Texts by Students of Universitas Negeri Makassar
This research contributes to the study of language development, especially adjectival retention as a basic form of resilience in text. Research on adjectives has been conducted by Kennedy (2013), Schiff et al. (2014), Sharma et al. (2015), Sato et al. (2016), and Cutillas and Tolchinsky (2017); in contrast to these previous studies, the present study focuses on adjectival retention. In addition, the research findings provide an understanding of the use of adjectives and the meaningfulness they carry in a sentence. This is qualitative research using a content analysis method. The data consist of persuasive essays written by 83 students of the Indonesian Language and Literature Education Program in the Faculty of Languages and Literature at the State University of Makassar, submitted from March to September 2019. While the research data were the adjectives in the submitted essays, the research instrument was the researchers themselves. Data were collected by assigning the students to write persuasive essays. Data analysis consisted of identifying, reducing, presenting, and verifying the data, and drawing conclusions about adjectival retention. The findings indicate the following cases of adjectival retention: (a) weakening of positive adjectival retention; (b) intensive retention occurring not only with the precursor that precedes it, but also with the constituent following it; (c) elative retention appearing only after the use of verbs in the sentence; (d) excessive adjectival retention having a formation that occurs due to a conjunction stating the cause; (e) retention of argumentative adjectives placing emphasis on the core part of the sentence; (f) retention of attenuate adjectives reinforcing one another rather than weakening the other parts. Overall, 42% of the essays contained adjectival retention and 58% did not.
SERKET: An Architecture for Connecting Stochastic Models to Realize a Large-Scale Cognitive Model
To realize human-like robot intelligence, a large-scale cognitive architecture is required for robots to understand their environment through the variety of sensors with which they are equipped. In this paper, we propose a novel framework named Serket that enables a large-scale generative model to be constructed, and its parameters to be inferred, easily by connecting sub-modules, allowing robots to acquire various capabilities through interaction with their environments and with others. We consider that large-scale cognitive models can be constructed by connecting smaller fundamental models hierarchically while maintaining their programmatic independence. However, connected modules depend on each other, and their parameters must be optimized as a whole. Conventionally, the equations for parameter estimation have to be derived and implemented separately for each model, which becomes increasingly difficult for larger-scale models. To solve these problems, we propose a method for parameter estimation that communicates only minimal parameters between modules while maintaining their programmatic independence. Serket thus makes it easy to construct large-scale models and estimate their parameters by connecting modules. Experimental results demonstrate that a model can be constructed by connecting modules, that its parameters can be optimized as a whole, and that the results are comparable with those of the original models we previously proposed.
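The module-communication idea can be illustrated with a toy sketch. This is not the Serket implementation; the module design, message form, and update rules below are our own simplifications. Two clustering modules stay programmatically independent and are connected only by exchanging minimal messages, here soft cluster assignments, so each module updates its own parameters locally while the whole is optimized jointly.

```python
import numpy as np

class GaussianClusterModule:
    """Toy module: soft k-means over whatever input it receives."""
    def __init__(self, n_clusters, rng):
        self.k = n_clusters
        self.rng = rng
        self.centers = None

    def update(self, x, msg=None):
        n, _ = x.shape
        if self.centers is None:
            self.centers = x[self.rng.choice(n, self.k, replace=False)]
        dist = ((x[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        logits = -dist
        # Bias responsibilities with the message from the connected
        # module -- the "minimal parameters" being communicated.
        if msg is not None:
            logits = logits + np.log(msg + 1e-9)
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        self.centers = (p.T @ x) / p.sum(axis=0)[:, None]
        return p  # the message passed on to the other module

rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(3, 0.1, (20, 2))])
m1 = GaussianClusterModule(2, rng)   # clusters raw observations
m2 = GaussianClusterModule(2, rng)   # clusters m1's assignment messages
msg = None
for _ in range(10):
    msg1 = m1.update(x, msg)   # module 1 updates locally, emits a message
    msg = m2.update(msg1)      # module 2 updates from the message alone
print(np.allclose(msg.sum(axis=1), 1.0))  # messages stay proper distributions
```

Neither module sees the other's internals; only the assignment probabilities cross the interface, which is what keeps the modules swappable.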
Invariant Feature Mappings for Generalizing Affordance Understanding Using Regularized Metric Learning
This paper presents an approach for learning invariant features for object affordance understanding. One of the major problems for a robotic agent acquiring a deeper understanding of affordances is finding sensory-grounded semantics. Being able to understand what in the representation of an object makes it afford an action opens up more efficient manipulation, interchange of objects that may not be visually similar, transfer learning, and robot-to-human communication. Our approach uses a metric learning algorithm that learns a feature transform encouraging objects that afford the same action to be close in the feature space. We regularize the learning so that irrelevant features are penalized, allowing the agent to link what in the sensory input caused the object to afford the action. From this, we show how the agent can abstract the affordance and reason about the similarity between different affordances.
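A minimal sketch of the regularization idea, not the paper's algorithm: we learn a diagonal Mahalanobis metric with an L1 penalty, so weights on affordance-irrelevant features are driven to zero while same-affordance pairs are pulled together and different-affordance pairs are pushed past a margin. The data, feature layout, and hyperparameters here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic objects: feature 0 encodes the affordance-relevant property
# (say, flatness for "stackable"); features 1..3 are irrelevant noise.
n = 60
labels = rng.integers(0, 2, n)            # does the object afford the action?
x = rng.normal(0, 1, (n, 4))
x[:, 0] = labels + rng.normal(0, 0.1, n)  # the one informative feature

w = np.ones(4)            # diagonal metric weights, dist^2 = w . (xi - xj)^2
lam, lr = 0.05, 0.01
for _ in range(300):
    i, j = rng.integers(0, n, 2)
    if i == j:
        continue
    d2 = (x[i] - x[j]) ** 2
    dist = w @ d2
    if labels[i] == labels[j]:
        grad = d2              # pull same-affordance pairs together
    elif dist < 1.0:
        grad = -d2             # push different-affordance pairs past a margin
    else:
        grad = np.zeros(4)
    w -= lr * (grad + lam * np.sign(w))  # L1 term penalizes irrelevant dims
    w = np.clip(w, 0.0, None)            # keep the metric valid (PSD)

print(w)  # the affordance-relevant feature retains the largest weight
```

The surviving nonzero weights are exactly the agent's answer to "what in the sensory input made the object afford the action".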
Learning Hierarchical Compositional Task Definitions through Online Situated Interactive Language Instruction
Artificial agents, from robots to personal assistants, have become competent workers in many settings and embodiments, but for the most part they are limited to performing the capabilities and tasks with which they were initially programmed. Learning in these settings has predominantly focused on improving the agent's performance on a task, not on learning the actual definition of a task. The primary method for imbuing an agent with a task definition has been programming by humans, who have detailed knowledge of the task, domain, and agent architecture. In contrast, humans quickly learn new tasks from scratch, often from instruction by another human. If we desire AI agents to be flexible and dynamically extendable, they will need to emulate these learning capabilities, and not be stuck with the limitation that task definitions must be acquired through programming.
This dissertation explores the problem of how an Interactive Task Learning (ITL) agent can learn the complete definition or formulation of novel tasks rapidly through online natural language instruction from a human instructor. Recent advances in natural language processing, memory systems, computer vision, spatial reasoning, robotics, and cognitive architectures make the time ripe to study how knowledge can be automatically acquired, represented, transferred, and operationalized. We present a learning approach embodied in an ITL agent that interactively learns the meaning of task concepts, the goals, actions, failure conditions, and task-specific terms, for 60 games and puzzles. In our approach, the agent learns hierarchical symbolic representations of task knowledge that enable it to transfer and compose knowledge, analyze and debug multiple interpretations, and communicate with the teacher to resolve ambiguity. Our results show that the agent can correctly generalize, disambiguate, and transfer concepts across variations of language descriptions and world representations, even with distractors present.
PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/153434/1/jrkirk_1.pd
The learning of adjectives and nouns from affordance and appearance features
We study how a robot can link concepts represented by adjectives and nouns in language with its own sensorimotor interactions. Specifically, an iCub humanoid robot interacts with a group of objects using a repertoire of manipulation behaviors. The objects are labeled using a set of adjectives and nouns. The effects induced on the objects are labeled as affordances, and classifiers are learned to predict the affordances from the appearance of an object. We evaluate three different models for learning adjectives and nouns using features obtained from the appearance and affordances of an object, through cross-validated training as well as through testing on novel objects. The results indicate that shape-related adjectives are best learned using features related to affordances, whereas nouns are best learned using appearance features. Analysis of feature relevancy shows that affordance features are more relevant for adjectives, and appearance features are more relevant for nouns. We show that adjective predictions can be used to solve the odd-one-out task on a number of examples. Finally, we link our results with studies from psychology, neuroscience, and linguistics that point to the differences between the development and representation of adjectives and nouns in humans.
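The odd-one-out task mentioned above can be sketched in a few lines. This is an illustrative toy, not the paper's setup: the adjectives, objects, and scores are hypothetical stand-ins for what the learned classifiers would predict. Each object is represented by its predicted adjective scores, and the odd one out is the object farthest from the group mean in that space.

```python
import numpy as np

# Hypothetical predicted adjective scores (round, flat, tall) per object,
# standing in for the output of the learned adjective classifiers.
objects = {
    "ball":   np.array([0.9, 0.1, 0.2]),
    "plate":  np.array([0.2, 0.9, 0.1]),  # flat, unlike the others
    "apple":  np.array([0.8, 0.2, 0.3]),
    "orange": np.array([0.9, 0.1, 0.1]),
}
names = list(objects)
scores = np.stack([objects[n] for n in names])
mean = scores.mean(axis=0)
# Odd one out = object whose adjective profile deviates most from the mean.
odd = names[np.argmax(np.linalg.norm(scores - mean, axis=1))]
print(odd)  # -> plate
```

Because the comparison happens in adjective space rather than raw pixel space, visually dissimilar objects (an apple and an orange) still group together when they share the relevant properties.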