Video content delivery for the ESL classroom with vodcasting technology
In this paper I will explain how video content can be delivered to the ESL classroom via a technology known as vodcasting, and explore the implications of this capability. The ability to deliver video to the ESL classroom can profoundly change the learning process. It must be emphasized, however, that the ability to deliver video does not necessarily enhance the learning experience. Content material needs to be appropriate and delivered in a manner that leads toward mastery of the required language skills. To meet that goal, I will explain how material can be organized into “knowledge units”, as defined by B.F. Skinner in his work on programmed learning techniques. Using these knowledge units we can progress beyond the linguistic competence emphasized in traditional classrooms and work toward achieving true communicative competence.
The American psychologist B.F. Skinner believed people are best able to learn when the cognitive domain, or target material, is divided into knowledge units he called “learning frames”. He defined a learning frame as a limited set of new facts coupled with an incomplete statement or question the learner was required to complete based on information provided from within the frame itself, or from previous frames. Skinner’s “programmed learning” approach required that frames be ordered so that knowledge units required for subsequent frames were mastered before they were needed. Learning was made possible through a series of very small and rigidly ordered steps directed toward mastery of a series of learning frames and the inferences that could be associated with the facts contained
within those learning frames. The step-by-step approach advocated by Skinner provided reinforcement for correct responses, and kept the student focused on the material being studied. Skinner was especially critical of traditional education’s inability to provide sufficient
reinforcement for the material being studied. “Perhaps,” said Skinner, “the most serious criticism of the current classroom is the relative infrequency of reinforcement” (Skinner, 1962, p. 25). Skinner believed reinforcement was crucial to the learning process because it was only through repetition and reinforcement that a behavior, or acquired skill, could be maintained in strength. Skills not used frequently are easily lost, as language teachers and students can attest. The concept of programmed learning based on learning frames and the sequential mastery of material became extremely influential in textbook development in the 1960s, even though the practice of computerized programmed learning itself was limited by access to the rather expensive computers of the time. Ironically, interest in programmed learning techniques seems to have waned just as the development of personal computers made it truly possible to implement the practices Skinner had advocated.
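Skinner's frame structure is mechanical enough to sketch in code. The following is a minimal illustration, assuming a learning frame is modeled as a small set of new facts plus an incomplete statement the learner must complete, with frames presented in rigid order and repeated until answered correctly; the frame contents and function names are illustrative, not drawn from Skinner's materials.

```python
# A minimal sketch of Skinner-style programmed learning. A frame pairs
# a limited set of new facts with a cloze prompt whose answer is
# recoverable from the current or earlier frames.

from dataclasses import dataclass

@dataclass
class LearningFrame:
    facts: list       # limited set of new facts
    prompt: str       # incomplete statement or question
    answer: str       # expected completion

def run_program(frames, respond):
    """Present frames in rigid order; repeat each until mastered.

    `respond` maps (facts, prompt) -> the learner's completion; a
    correct response is the reinforcement that lets the learner advance.
    """
    attempts = []
    for frame in frames:
        while True:
            reply = respond(frame.facts, frame.prompt)
            attempts.append((frame.prompt, reply))
            if reply.strip().lower() == frame.answer.lower():
                break  # reinforced: move on to the next small step
    return attempts

frames = [
    LearningFrame(["'gato' is Spanish for 'cat'"],
                  "The Spanish word for cat is ____", "gato"),
    LearningFrame(["'el' marks a masculine noun"],
                  "'The cat' with its article is el ____", "gato"),
]

# A learner who reads the answer out of the facts supplied so far:
log = run_program(frames, lambda facts, prompt: "gato")
print(len(log))  # two frames, one attempt each
```

The ordering constraint Skinner required (each frame's prerequisites mastered before they are needed) is what the rigid `for` loop encodes: there is no way to reach a frame without completing all earlier ones.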
Incorporating Language-Driven Appearance Knowledge Units with Visual Cues in Pedestrian Detection
Large language models (LLMs) have shown their capability in understanding
contextual and semantic information regarding appearance knowledge of
instances. In this paper, we introduce a novel approach to utilize the strength
of an LLM in understanding contextual appearance variations and to leverage its
knowledge into a vision model (here, pedestrian detection). While pedestrian
detection is considered one of the crucial tasks directly related to our safety
(e.g., intelligent driving systems), it is challenging because of varying
appearances and poses in diverse scenes. Therefore, we propose to formulate
language-driven appearance knowledge units and incorporate them with visual
cues in pedestrian detection. To this end, we establish a description corpus
that includes numerous narratives describing various appearances of
pedestrians and others. By feeding them through an LLM, we extract appearance
knowledge sets that contain the representations of appearance variations. After
that, we perform a task-prompting process to obtain appearance knowledge units
which are representative appearance knowledge guided to be relevant to a
downstream pedestrian detection task. Finally, we provide plentiful appearance
information by integrating the language-driven knowledge units with visual
cues. Through comprehensive experiments with various pedestrian detectors, we
verify the effectiveness of our method showing noticeable performance gains and
achieving state-of-the-art detection performance.
Comment: 11 pages, 4 figures, 9 tables
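The pipeline the abstract describes (description corpus → LLM representations → task-prompted selection of knowledge units → fusion with visual cues) can be sketched schematically. In the sketch below the LLM is replaced by a deterministic stub embedder, and the selection and fusion steps are simple cosine-ranking and concatenation; all names and the fusion scheme are assumptions for illustration, not the paper's actual method.

```python
# Schematic sketch of a language-driven appearance-knowledge pipeline.
# The "LLM" here is a stub that produces a pseudo-embedding per text.

import math
import random

DIM = 8

def llm_embed(text):
    """Stand-in for an LLM encoder: pseudo-embedding, stable within a run."""
    rng = random.Random(hash(text) % (2 ** 32))
    return [rng.uniform(-1, 1) for _ in range(DIM)]

# Description corpus: narratives describing appearances of pedestrians
# and other road users (illustrative examples).
corpus = [
    "a pedestrian in a dark coat partially occluded by a parked car",
    "a cyclist wearing a reflective vest at night",
    "a person crossing the street carrying an umbrella",
]

# 1) Appearance knowledge sets: representations of appearance variations.
knowledge_sets = [llm_embed(t) for t in corpus]

# 2) Task prompting: keep the units most relevant to the detection task.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

task_vec = llm_embed("detect pedestrians in street scenes")
scored = sorted(knowledge_sets, key=lambda k: cosine(task_vec, k), reverse=True)
knowledge_units = scored[:2]  # representative, task-relevant units

# 3) Fuse the language-driven units with visual cues (here: concatenate
#    a mean knowledge vector onto a dummy visual feature).
visual_feature = [0.5] * DIM
mean_unit = [sum(vals) / len(knowledge_units) for vals in zip(*knowledge_units)]
fused = visual_feature + mean_unit
print(len(fused))  # 16-dimensional fused representation
```

In a real detector the fusion step would happen inside the network (e.g., by attention or feature concatenation at some backbone stage), but the data flow is the same: text corpus in, task-filtered knowledge units out, joined with visual features.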
A Qualia-based description of specialized knowledge units in the lexical-constructional model
EcoLexicon is a frame-based knowledge base on the environment.
The information it contains is coherently structured
within a prototypical domain event, the Environmental Event
(EE). At an intra- and intercategorial level, a closed inventory
of relations has been defined that relates concepts to each
other as well as to the EE. It will be the basis for a formal
domain ontology which will serve computational purposes,
enhance searches and allow for automatic information extraction.
Theoretical premises from Frame-Based Terminology,
the Generative Lexicon and the Lexical-Constructional Model
provide a streamlined formalism that brings us one step closer
to a formal ontology.
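The "closed inventory of relations" that structures EcoLexicon concepts can be illustrated with a small data structure. The relation names and example concepts below are illustrative assumptions, not actual EcoLexicon content; the point is only that a closed inventory rejects ad-hoc relations and supports the searches and automatic extraction the abstract mentions.

```python
# A minimal sketch of a frame-based concept network with a closed
# relation inventory, in the spirit of the abstract.

RELATIONS = frozenset({
    "is_a", "part_of", "made_of", "causes", "affects", "located_at",
})  # closed inventory: any other relation name is rejected

ontology = {}  # (source_concept, relation) -> set of target concepts

def add_relation(source, relation, target):
    """Link two concepts; only relations from the closed inventory."""
    if relation not in RELATIONS:
        raise ValueError(f"unknown relation: {relation}")
    ontology.setdefault((source, relation), set()).add(target)

add_relation("erosion", "causes", "sediment transport")
add_relation("erosion", "affects", "coastline")
add_relation("groyne", "is_a", "coastal defence structure")

def query(source, relation):
    """Simple lookup supporting searches / information extraction."""
    return sorted(ontology.get((source, relation), set()))

print(query("erosion", "causes"))
```

Because the inventory is closed, every edge in the network has a known semantics, which is exactly what makes the structure exportable to a formal ontology language later.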
Multi-Stakeholder Processes and Innovation Systems towards Science for impact
Multi-stakeholder processes (MSPs) have become an important phenomenon in the work of many of the Science Groups and knowledge units of Wageningen UR. To realise ‘science for impact’, it is increasingly recognized that stakeholder engagement is a critical element. Much remains to be understood about the role and effectiveness of MSPs in a wider context of politics, governance and societal change. There is clearly value to be gained from Wageningen UR-wide sharing and critical reflection processes. The CD&IC programme, Wageningen International, hosted a Critical Reflection Day, building on existing and past initiatives such as Own Experiences, the Transition Lab and the deepening of Communities of Practice of action learning and ‘Telen met Toekomst’. The Critical Reflection Day was part of the three-week international course on ‘Facilitating Multi-stakeholder Processes and Social Learning’, attended by some 30 participants from all over the world, who facilitated and actively took part in the Critical Reflection Day.
Easy over Hard: A Case Study on Deep Learning
While deep learning is an exciting new technique, the benefits of this method
need to be assessed with respect to its computational cost. This is
particularly important for deep learning since these learners need hours (to
weeks) to train the model. Such long training times limit the ability of (a) a
researcher to test the stability of their conclusions via repeated runs with
different random seeds; and (b) other researchers to repeat, improve, or even
refute that original work.
For example, recently, deep learning was used to find which questions in the
Stack Overflow programmer discussion forum can be linked together. That deep
learning system took 14 hours to execute. We show here that applying a very
simple optimizer called DE to fine-tune an SVM can achieve similar (and
sometimes better) results. The DE approach terminated in 10 minutes, i.e., 84
times faster than the deep learning method.
We offer these results as a cautionary tale to the software analytics
community and suggest that not every new innovation should be applied without
critical analysis. If researchers deploy some new and expensive process, that
work should be baselined against some simpler and faster alternatives.
Comment: 12 pages, 6 figures, accepted at FSE201
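The "very simple optimizer called DE" is differential evolution: mutate candidates by scaled differences of other candidates, cross over with the current one, and keep the trial only if it scores at least as well. A minimal sketch follows; to stay self-contained, the SVM being tuned is replaced by a toy objective over two hyperparameters (optimum at C=1.0, gamma=0.1), and the DE constants are common defaults, not the paper's settings.

```python
# Minimal differential evolution (DE/rand/1/bin) tuning a toy
# two-hyperparameter objective standing in for SVM fine-tuning.

import random

random.seed(42)

def objective(params):
    """Stand-in for cross-validated SVM error; lower is better."""
    C, gamma = params
    return (C - 1.0) ** 2 + (gamma - 0.1) ** 2

BOUNDS = [(0.01, 10.0), (0.0001, 1.0)]  # (C, gamma) search ranges
NP, F, CR, GENS = 20, 0.5, 0.9, 60      # population, scale, crossover, generations

def clip(x, lo, hi):
    return max(lo, min(hi, x))

pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(NP)]

for _ in range(GENS):
    for i in range(NP):
        # Three distinct candidates other than the current one.
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        jrand = random.randrange(len(BOUNDS))  # force one mutated gene
        trial = []
        for j, (lo, hi) in enumerate(BOUNDS):
            if random.random() < CR or j == jrand:
                trial.append(clip(a[j] + F * (b[j] - c[j]), lo, hi))
            else:
                trial.append(pop[i][j])
        if objective(trial) <= objective(pop[i]):
            pop[i] = trial  # greedy selection keeps the better candidate

best = min(pop, key=objective)
print(best, objective(best))
```

The entire tuner is a few dozen lines with no gradients and no GPU, which is the abstract's point: such a baseline should be run before committing to hours or weeks of deep-learning training.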