BIOMECHANICAL ANALYSIS OF PERFORMANCE IMPROVEMENT IN AIMING MOTOR TASKS
INTRODUCTION: The improvement of performance is closely related to specific modifications of the kinematics and the myoelectrical parameters of each aiming motor skill. The purpose of this study was to determine the modifications of selected kinematic and myoelectrical parameters which resulted in the improvement of aiming performance.
METHODS: Seventy volunteers practiced a novel throwing skill that involved throwing a ball using elbow flexion. Kinematics were computed through film analysis at 80 Hz. Furthermore, the surface electromyograms of four muscles in the elbow region were analyzed to determine the changes in the timing and intensity of muscle activation that may account for improved performance. Correspondence analysis was employed for the evaluation of the data.
RESULTS: The results revealed that performance enhancement was related to a decrease in the duration and displacement of the movement. Practice also resulted in significant modifications of the electrical activity of the muscles. The number of active muscles diminished and the agonist activity was reduced. The antagonist activity increased, but after practice it appeared significantly later relative to the beginning of the movement.
CONCLUSION: Practice brought about specific modifications in the muscular contribution to the throwing task, by means of a reduction in the electrical activity of the primary agonist muscles during the movement. These modifications gave rise to specific alterations in the physical aspects of the skill, which directly resulted in improved performance.
Securing Synchronous Flooding Communications: An Atomic-SDN Implementation
Synchronous Flooding (SF) protocols can enhance the wireless connectivity
between Internet of Things (IoT) devices. However, existing SF solutions fail
to introduce sufficient security measures due to strict time synchronisation
requirements, making them vulnerable to malicious actions. Our paper presents a
design paradigm for encrypted SF communications. We describe a mechanism for
synchronising encryption parameters in a network-wide fashion. Our solution
operates with minimal overhead and without compromising communication
reliability. Evaluating our paradigm on a real-world, large-scale IoT testbed,
we have proven that a communication layer impervious to a range of attacks is
established without sacrificing the network performance.
Comment: Accepted for Publication to EWSN 202
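One way to synchronise encryption parameters network-wide without extra key-exchange traffic is to derive them deterministically from state that Synchronous Flooding already keeps aligned, such as the flood/epoch counter. The sketch below illustrates that general idea; it is an assumption for illustration, not the paper's actual scheme, and the pre-shared `network_key` and epoch encoding are hypothetical choices.

```python
import hmac
import hashlib

def epoch_key(network_key: bytes, epoch: int) -> bytes:
    """Derive the encryption key for one synchronisation epoch.

    Every node holds the same pre-shared network key and, thanks to the
    SF time synchronisation, agrees on the current epoch counter, so all
    nodes rotate to the same key in lock-step without exchanging messages.
    """
    return hmac.new(network_key, epoch.to_bytes(8, "big"), hashlib.sha256).digest()
```

Because the derivation is a one-way function of the epoch, compromising one epoch key does not reveal the pre-shared network key or other epochs' keys.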
Impact of Guard Time Length on IEEE 802.15.4e TSCH Energy Consumption
The IEEE 802.15.4-2015 standard defines a number of Medium Access Control (MAC) layer protocols for low-power wireless communications in the IoT. Originally defined in the IEEE 802.15.4e amendment, TSCH (Time Slotted Channel Hopping) is among the proposed mechanisms. TSCH is a scheme aiming to guarantee network reliability by keeping nodes time-synchronised at the MAC layer. In order to ensure successful communication between a sender and a receiver, the latter starts listening shortly before the expected time of a MAC layer frame's arrival. The offset between the time a node starts listening and the estimated time of frame arrival is called guard time, and it aims to reduce the probability of missed frames due to clock drift. In this poster, we investigate the effect of the guard time duration on energy consumption. We identify that, when using the 6TiSCH minimal schedule, the most significant cause of energy consumption is idle listening during guard time. Therefore, the energy-efficiency of TSCH can be significantly improved by guard time optimisation. Our performance evaluation results, conducted using the Contiki operating system, show that an efficient configuration of guard time may reduce energy consumption by up to 30%, without compromising network reliability.
Guard time optimisation and adaptation for energy efficient multi-hop TSCH networks
In the IEEE 802.15.4-2015 standard, Time Slotted Channel Hopping (TSCH) aims to guarantee high-level network reliability by keeping nodes time-synchronised. In order to ensure successful communication between a sender and a receiver, the latter starts listening shortly before the expected time of a MAC layer frame's arrival. The offset between the time a node starts listening and the estimated time of frame arrival is called guard time, and it aims to reduce the probability of missed frames due to clock drift. In this paper, we investigate the impact of the guard time on network performance. We identify that, when using the 6TiSCH minimal schedule, the most significant cause of energy consumption is idle listening during guard time. Therefore, we first perform mathematical modelling of a TSCH link to identify the guard time that maximises the energy-efficiency of the TSCH network in a single-hop topology. We then continue in a multi-hop network, where we empirically adapt the guard time locally at each node depending on its distance, in terms of hops, from the sink. Our performance evaluation results, conducted using the Contiki OS, demonstrate that the proposed decentralised guard time adaptation can reduce the energy consumption by up to 40%, without compromising network reliability.
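The intuition behind hop-dependent guard time adaptation can be sketched as follows: clock offset accumulates between resynchronisations, and nodes farther from the sink see larger accumulated offsets, so the guard time can be sized per hop rather than fixed network-wide. This is a minimal illustrative sketch, not the paper's model; the drift rate, resynchronisation period, base margin, and the linear scaling with hop count are all assumptions.

```python
# Assumed parameters (for illustration only).
CLOCK_DRIFT_PPM = 40              # combined worst-case drift of two crystals (ppm)
RESYNC_PERIOD_US = 10_000_000     # interval between resynchronisations: 10 s
BASE_GUARD_US = 400               # fixed margin, e.g. for radio turnaround

def max_drift_us(resync_period_us, drift_ppm=CLOCK_DRIFT_PPM):
    """Worst-case clock offset accumulated between two resynchronisations."""
    return resync_period_us * drift_ppm / 1_000_000

def guard_time_us(hops_from_sink):
    """Guard time sized to still cover accumulated drift at a given hop depth.

    The receiver must open its radio early enough that the frame arrives
    within the listening window even under worst-case drift in either
    direction, hence the factor of 2; deeper nodes get a larger allowance.
    """
    drift = max_drift_us(RESYNC_PERIOD_US)
    return BASE_GUARD_US + 2 * drift * hops_from_sink
```

A shorter guard time directly cuts idle listening (the dominant energy cost identified above), which is why shaving the allowance at nodes close to the sink saves energy without risking missed frames.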
Which Examples to Annotate for In-Context Learning? Towards Effective and Efficient Selection
Large Language Models (LLMs) can adapt to new tasks via in-context learning
(ICL). ICL is efficient as it does not require any parameter updates to the
trained LLM, but only few annotated examples as input for the LLM. In this
work, we investigate an active learning approach for ICL, where there is a
limited budget for annotating examples. We propose a model-adaptive
optimization-free algorithm, termed AdaICL, which identifies examples that the
model is uncertain about, and performs semantic diversity-based example
selection. Diversity-based sampling improves overall effectiveness, while
uncertainty sampling improves budget efficiency and helps the LLM learn new
information. Moreover, AdaICL poses its sampling strategy as a Maximum Coverage
problem, that dynamically adapts based on the model's feedback and can be
approximately solved via greedy algorithms. Extensive experiments on nine
datasets and seven LLMs show that AdaICL improves performance by 4.4% accuracy
points over SOTA (7.7% relative improvement), is up to 3x more budget-efficient
than annotating uniformly at random, and outperforms SOTA with
2x fewer ICL examples.
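The Maximum Coverage framing above can be approximated with the standard greedy algorithm: repeatedly pick the candidate whose semantic neighbourhood covers the most still-uncovered uncertain examples. The sketch below shows that general technique, not AdaICL's actual implementation; the neighbourhood structure (e.g. k-NN in embedding space) and all names are assumptions.

```python
def greedy_max_coverage(candidates, neighbours, budget):
    """Select up to `budget` examples whose neighbourhoods cover as many
    uncertain examples as possible (greedy Maximum Coverage approximation).

    candidates: iterable of example ids the model is uncertain about
    neighbours: dict mapping an id to the set of ids it covers
    """
    covered, selected = set(), []
    remaining = set(candidates)
    for _ in range(budget):
        # Greedily take the candidate with the largest marginal coverage gain
        # (sorted() makes tie-breaking deterministic).
        best = max(sorted(remaining),
                   key=lambda c: len(neighbours[c] - covered),
                   default=None)
        if best is None or not (neighbours[best] - covered):
            break  # nothing left, or no candidate adds new coverage
        selected.append(best)
        covered |= neighbours[best]
        remaining.discard(best)
    return selected
```

The greedy rule enjoys the classic (1 - 1/e) approximation guarantee for Maximum Coverage, which is why it is a natural way to approximately solve the selection problem under a fixed annotation budget.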
Road safety training through a master course in Belarus
Road safety is a multidisciplinary and multivariate scientific field, where every proposed action and measure should be developed and supported through strategies in the areas of engineering, enforcement, education and emergency medical services, taking social and economic aspects into consideration as well. However, tools do not create the road safety future; trained professionals do. A robust educational curriculum is the only means to communicate the necessary insights and knowledge within the constantly evolving environment of road safety. The objective of this paper is a comprehensive proposal for the development and testing, in Belarus, of a master's course in road safety according to the Bologna process requirements. In the framework of this proposal, the requirements set, the master's curricula modules as well as the relevant expected learning outcomes are described.
Train Your Own GNN Teacher: Graph-Aware Distillation on Textual Graphs
How can we learn effective node representations on textual graphs? Graph
Neural Networks (GNNs) that use Language Models (LMs) to encode textual
information of graphs achieve state-of-the-art performance in many node
classification tasks. Yet, combining GNNs with LMs has not been widely explored
for practical deployments due to its scalability issues. In this work, we
tackle this challenge by developing a Graph-Aware Distillation framework (GRAD)
to encode graph structures into an LM for graph-free, fast inference. Different
from conventional knowledge distillation, GRAD jointly optimizes a GNN teacher
and a graph-free student over the graph's nodes via a shared LM. This
encourages the graph-free student to exploit graph information encoded by the
GNN teacher while at the same time, enables the GNN teacher to better leverage
textual information from unlabeled nodes. As a result, the teacher and the
student models learn from each other to improve their overall performance.
Experiments on eight node classification benchmarks in both transductive and
inductive settings showcase GRAD's superiority over existing distillation
approaches for textual graphs.
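The joint teacher-student objective described above can be sketched as a weighted sum of a supervised term on labelled nodes and a distillation term that pulls the graph-free student toward the GNN teacher on all nodes. This is a minimal pure-Python sketch of that general recipe, not GRAD's actual loss; the weighting `alpha`, the KL direction, and the use of -1 for unlabelled nodes are assumptions for illustration.

```python
import math

def softmax(zs):
    m = max(zs)
    es = [math.exp(z - m) for z in zs]
    s = sum(es)
    return [e / s for e in es]

def joint_distillation_loss(teacher_logits, student_logits, labels, alpha=0.5):
    """One row of logits per node; labels[i] is a class index, or -1 if unlabelled.

    Labelled nodes contribute a cross-entropy term on the teacher; every node
    contributes a KL term aligning the student's distribution to the teacher's.
    Because both models share the LM encoder, gradients from both terms also
    flow into the shared text representation.
    """
    ce_terms, kl_terms = [], []
    for t, s, y in zip(teacher_logits, student_logits, labels):
        p_t, p_s = softmax(t), softmax(s)
        if y >= 0:
            # Supervised term: teacher cross-entropy on labelled nodes.
            ce_terms.append(-math.log(p_t[y]))
        # Distillation term: student matches the teacher (KL divergence).
        kl_terms.append(sum(pt * (math.log(pt) - math.log(ps))
                            for pt, ps in zip(p_t, p_s)))
    ce = sum(ce_terms) / len(ce_terms) if ce_terms else 0.0
    kl = sum(kl_terms) / len(kl_terms)
    return alpha * ce + (1 - alpha) * kl
```

Training both terms jointly, rather than freezing a pre-trained teacher, is what lets the teacher also benefit from the student's signal on unlabelled nodes.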