Teachers and standardized assessments in mathematics: an affective perspective
Standardized assessments in mathematics have an increasing relevance in the educational debate and often heavily affect educational policies. Specifically, the framework and the items of standardized assessments suggest what is considered relevant as an outcome of mathematics education at a certain school level. The strength and the quality of the educational impact of standardized assessments seem to depend heavily on teachers' affective reactions to standardized assessment; however, studies focused on this issue are very rare: what are teachers' attitudes towards standardized assessments and their effects? In this frame, we carried out a large qualitative study to investigate teachers' attitudes in the Italian context.
Regular polysemy: A distributional semantic approach
Polysemy and homonymy are two different kinds of lexical ambiguity. The main difference between them is that polysemous words can share the same alternation - where an alternation is the set of senses a word can have - whereas homonymous words have idiosyncratic alternations. This means that, for instance, a word such as lamb, whose alternation is given by the senses food and animal, is a polysemous word, given that a number of other words share this very alternation food-animal, e.g. the word fish. On the other hand, a word such as ball, whose possible senses are artifact and event, is homonymous, given that no other words share the alternation artifact-event. Furthermore, polysemy highlights two different aspects of the same lexical item, whereas homonymy describes the fact that the same lexical unit is used to represent two completely unrelated word meanings.
These two kinds of lexical ambiguity have also been an issue in lexicography, given that there is no clear rule for distinguishing between polysemous and homonymous words. As a matter of principle, we would expect to have different lexical entries for homonymous words, but only one lexical entry with internal differentiation for polysemous words. An important work that needs to be mentioned here is the Generative Lexicon (GL; Pustejovsky, 1995), a theoretical framework for lexical semantics which focuses on the compositionality of word meanings. With regard to polysemy and homonymy, GL provides a clear explanation of how it is possible to understand the appropriate sense of a word in a specific sentence. This is done by looking at the context in which the word appears and, specifically, at the type of argument required by the predication.
These phenomena have also been of interest to computational linguists, insomuch as they have tried to implement models able to predict the alternations polysemous words can have. One of the most important works concerning this matter is the one by Boleda, Padó and Utt (2012), in which a model is proposed that is able to predict words having a particular alternation of senses. This means that, for instance, given an alternation such as food-animal, they can predict the words having that alternation. Another relevant work is by Rumshisky, Grinberg and Pustejovsky (2007), in which, using some syntactic information, they managed to detect the senses a polysemous word can have. For instance, given the polysemous word lunch, whose sense alternation is food-event, they first extracted all of the verbs whose object can be the word lunch. This led to the extraction of verbs requiring an argument expressing the sense of food (the verb cook can be extracted as a verb whose object can be lunch), and verbs requiring the argument of event (again, lunch can be the object of the verb to attend). Finally, they extracted all of the objects that those verbs can take (for instance, pasta can be the object of the verb cook, and conference can be the object of the verb to attend). By doing so, they arrive at the creation of two clusters, each of which represents words similar to one of the senses of the ambiguous word.
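To make this pipeline concrete, the following is a minimal, self-contained sketch of the verb-object clustering idea; the toy dependency pairs and the scikit-learn k-means step are our own illustrative choices, not the implementation used by Rumshisky and colleagues.

```python
# Minimal sketch of the verb-object clustering idea described above.
# The (verb, object) pairs are toy data standing in for dependency pairs
# extracted from a parsed corpus.
from collections import defaultdict

import numpy as np
from sklearn.cluster import KMeans

TARGET = "lunch"  # polysemous target with a food-event alternation

# toy dependency pairs: (verb, direct object)
pairs = [
    ("cook", "lunch"), ("cook", "pasta"), ("cook", "soup"),
    ("eat", "lunch"), ("eat", "pasta"), ("eat", "sandwich"),
    ("attend", "lunch"), ("attend", "conference"), ("attend", "meeting"),
    ("organise", "lunch"), ("organise", "conference"), ("organise", "party"),
]

# 1) verbs that take the target as direct object
verbs = sorted({v for v, o in pairs if o == TARGET})

# 2) other objects of those verbs, represented as vectors of
#    co-occurrence counts with the selected verbs
counts = defaultdict(lambda: np.zeros(len(verbs)))
for v, o in pairs:
    if v in verbs and o != TARGET:
        counts[o][verbs.index(v)] += 1

objects = sorted(counts)
X = np.vstack([counts[o] for o in objects])

# 3) cluster the co-occurring objects into two groups, one per putative sense
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for k in range(2):
    print(f"sense cluster {k}:", [o for o, l in zip(objects, labels) if l == k])
```

On this toy input the two clusters roughly separate food-like objects (pasta, soup, sandwich) from event-like objects (conference, meeting, party), which is the intuition behind the sense detection step.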
These two models are totally different in the way they are implemented, even though they are grounded in one of the most important theories used in computational semantics: the Distributional Hypothesis. This theory can be stated as “words with similar meanings tend to occur in similar contexts”. To implement this theory, it is necessary to describe the contexts in a computationally valid way, so that it is possible to obtain a degree of similarity between two words by only looking at their contexts. The mathematical object used is the vector, in which it is possible to store the frequency of a word in all its contexts. The model using vectors to describe the distributional properties of words is called a Vector Space Model, also known as a Distributional Model.
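As an illustration of the Distributional Hypothesis in this vector form, the sketch below builds a tiny co-occurrence-based vector space model and compares two words by the cosine of their context vectors; the corpus, window size and similarity measure are our own choices for the example, not details of the works discussed above.

```python
# Illustrative sketch of a small vector space (distributional) model:
# each word is represented by a vector of co-occurrence counts with the
# words appearing in a +/-2-word window around it.
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate the cheese",
    "the dog ate the bone",
]

window = 2
cooc = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                cooc[w][tokens[j]] += 1

def cosine(u: Counter, v: Counter) -> float:
    shared = set(u) & set(v)
    dot = sum(u[w] * v[w] for w in shared)
    norm = sqrt(sum(c * c for c in u.values())) * sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

# words occurring in similar contexts get a higher cosine similarity
print(cosine(cooc["cat"], cooc["dog"]))
print(cosine(cooc["cat"], cooc["cheese"]))
```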
In this work, our goal is to automatically detect the alternation a word has. To do so, we first considered the possibility of using the Sense Discrimination procedure proposed by Schütze. In this method, he proposes to create a Distributional Model and use it to create context vectors and sense vectors. A context vector is given by the sum of the vectors of the words found in a context in which an ambiguous word appears, so there will be as many context vectors as there are occurrences of the target word. Once we have the context vectors, it is possible to obtain the sense vectors by simply clustering them together. The idea is that two context vectors representing the same sense of the ambiguous word will be similar, and so clustered together. The centroid of each cluster, that is, the vector given by the sum of the context vectors clustered together, will be the sense vector. This means that there will be as many sense vectors as there are senses of an ambiguous word. Our idea was to use this work and go a step further towards the creation of the alternation, but this turned out not to be feasible for several reasons.
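A minimal sketch of this context-vector/sense-vector construction is shown below; the toy word vectors and contexts, and the use of scikit-learn k-means as the clustering step, are illustrative assumptions of ours rather than Schütze's original implementation.

```python
# Sketch of Schütze-style sense discrimination: a context vector is the
# sum of the (pre-computed) distributional vectors of the words around an
# occurrence of the ambiguous target; clustering the context vectors and
# taking each cluster centroid yields one sense vector per induced sense.
import numpy as np
from sklearn.cluster import KMeans

# toy distributional vectors (in practice these come from a large corpus)
word_vectors = {
    "ewe":    np.array([1.0, 0.1]), "wool":  np.array([0.9, 0.2]),
    "graze":  np.array([0.8, 0.0]), "roast": np.array([0.1, 0.9]),
    "dinner": np.array([0.0, 1.0]), "serve": np.array([0.2, 0.8]),
}

# occurrences of the ambiguous word "lamb", each given by its context words
contexts = [
    ["ewe", "graze"], ["wool", "ewe"],          # animal sense
    ["roast", "dinner"], ["serve", "dinner"],   # food sense
]

# one context vector per occurrence of the target word
context_vectors = np.vstack(
    [sum(word_vectors[w] for w in ctx) for ctx in contexts]
)

# cluster the context vectors; each cluster centroid is a sense vector
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(context_vectors)
sense_vectors = km.cluster_centers_
print(km.labels_)       # which occurrences were grouped together
print(sense_vectors)    # one vector per induced sense
```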
We have therefore developed a new method to create context vectors, based on the idea that the interpretation of an ambiguous word is determined by specific elements of the sentence in which it appears.
Our model is able to carry out two tasks: 1) it can predict the alternation of a regularly polysemous word; 2) it can distinguish whether the lexical ambiguity of a word is homonymy or regular polysemy.
High field MR microimaging investigation gives more insights on spongy bone characteristics
Spongy bone is a porous system characterized by a solid trabecular network immersed in bone marrow, which contains different relative percentages of water and fat. In our previous paper, we demonstrated, using calf bone samples, that water is more prevalent in the boundary zone while fats are arranged primarily in the central zone of each pore. Moreover, we showed that the water internal gradient (Gi) magnitude of the samples was directly proportional to their trabecular bone density. Using a 9.4 T MR micro-imaging system, here we evaluated the T2, T2*, apparent diffusion coefficient (ADC) and Gi parameters of in vitro calf samples in a spatially resolved modality, for both the water and fat components. Moreover, the relative percentages of water and fat were quantified from spectra. T2, T2* and ADC values are higher in the fat than in the water component. Moreover, the differential effects of fat and water diffusion result in different T2 and Gi behaviours. Our results suggest that, unlike the fat parameters, water T2*, ADC and Gi may be reliable markers to assess not only trabecular bone density but, more generally, the status of spongy bone.
Neurophysiological Profile of Antismoking Campaigns
Over the past few decades, antismoking public service announcements (PSAs) have been used by governments to promote healthy behaviours in citizens, for instance against drink-driving and against smoking. The effectiveness of such PSAs has been suggested especially for young persons. To date, PSA efficacy is still mainly assessed through traditional methods (questionnaires and metrics) and can be evaluated only after the PSAs' broadcasting, leading to a waste of economic resources and time in the case of ineffective PSAs. One possible countermeasure to such ineffective use of PSAs could be the evaluation of the cerebral reaction to the PSA in particular segments of the population (e.g., old, young, and heavy smokers). In addition, it is crucial to gather such cerebral activity in response to PSAs that have been assessed to be effective against smoking (Effective PSAs), comparing the results to the cerebral reactions to PSAs that have been certified to be not effective (Ineffective PSAs). The eventual differences between the cerebral responses towards the two PSA groups will provide crucial information about the possible outcome of new PSAs before their broadcasting. This study focused on the adult population, investigating the cerebral reaction to the viewing of different PSA images which have already been shown to be Effective or Ineffective for the promotion of antismoking behaviour. Results showed how variables such as gender and smoking habits can influence the perception of PSA images, and how different communication styles of antismoking campaigns could facilitate the comprehension of the PSA's message and thus enhance the related impact.
On Over-Squashing in Message Passing Neural Networks: The Impact of Width, Depth, and Topology
Message Passing Neural Networks (MPNNs) are instances of Graph Neural Networks that leverage the graph to send messages over the edges. This inductive bias leads to a phenomenon known as over-squashing, where a node feature is insensitive to information contained at distant nodes. Despite recent methods introduced to mitigate this issue, an understanding of the causes of over-squashing and of possible solutions is lacking. In this theoretical work, we prove that: (i) neural network width can mitigate over-squashing, but at the cost of making the whole network more sensitive; (ii) conversely, depth cannot help mitigate over-squashing: increasing the number of layers leads to over-squashing being dominated by vanishing gradients; (iii) the graph topology plays the greatest role, since over-squashing occurs between nodes at high commute (access) time. Our analysis provides a unified framework to study different recent methods introduced to cope with over-squashing and serves as a justification for a class of methods that fall under "graph rewiring".
Comment: Accepted to ICML 2023; 21 pages.
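For reference, a generic message-passing update and the Jacobian-based sensitivity measure commonly used to quantify over-squashing can be written as follows; the notation is ours, following the standard message-passing literature rather than the specific definitions of this paper.

```latex
% Update of node v at layer t (phi and psi are learnable functions and the
% big operator is a permutation-invariant aggregator over neighbours), and
% the sensitivity of v's final representation to the input feature of a
% distant node u, which over-squashing drives towards zero.
\[
  h_v^{(t+1)} \;=\; \phi\!\left(h_v^{(t)},\; \bigoplus_{u \in \mathcal{N}(v)} \psi\!\left(h_v^{(t)}, h_u^{(t)}\right)\right),
  \qquad
  \left\lVert \frac{\partial h_v^{(L)}}{\partial h_u^{(0)}} \right\rVert \;\approx\; 0
  \;\;\text{(over-squashing).}
\]
```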
Epidemiology of enuresis: a large number of children at risk of low regard
Aim: To describe the epidemiological aspects of nocturnal enuresis (NE). In this study we identify the prevalence and the familial conditions in a large, representative sample of children with monosymptomatic NE (MNE) and non-monosymptomatic NE (NMNE).
Material and methods: In this descriptive-analytic study, the Italian Society of Pediatrics (SIP) promoted a prevalence study of NE using a questionnaire that involved 320 primary care pediatricians from Northern, Central and Southern Italy, from January 2019 to July 2019, with a total of 130,000 children analyzed by a questionnaire covering the epidemiology and type of NE, family history, quality of sleep, eating and drinking habits, pharmacological and psychological/behavioural interventions, and family involvement.
Results: 270/320 (84.4%) pediatricians replied to our questionnaire. We enrolled a total of 9307/130,000 (7.2%) children with NE, aged between 5 and 14 years: 2141 diagnosed with MNE and 7176 qualified as NMNE. Poor quality of sleep was reported in 7064 patients; 90% of children did not follow dietary and drinking recommendations. Pediatricians reported that 54.1% of parents declared having a negative reaction towards their children because of the bedwetting. 71.4% of parents declared that they use or have used alternative therapies and do not prefer, at first, a pharmacological intervention.
Conclusion: The choice of treatment should include psychological/behavioural interventions in all cases to improve the therapeutic outcome. All primary care pediatricians should be aware of all aspects of NE in order to choose the best way to treat each patient.
Predicting value for incomplete recovery in Bell's palsy of facial nerve ultrasound versus nerve conduction study
Objective: This longitudinal study aims to assess the predictive value of facial nerve high-resolution ultrasound (HRUS) for incomplete clinical recovery in patients with Bell's palsy, the most common facial nerve disease. Methods: We prospectively enrolled 34 consecutive patients with Bell's palsy. All patients underwent neurophysiological testing (including a facial nerve conduction study) and HRUS evaluations 10-15 days (T1), one month (T2), and three months (T3) after the onset of Bell's palsy. Patients who did not experience complete recovery within three months were also evaluated after six months (T4). We then compared the accuracy of HRUS with that of the facial nerve conduction study in predicting incomplete clinical recovery at three and six months. Results: At T1, the facial nerve diameter, as assessed with HRUS, was larger on the affected side than on the normal side, particularly in patients with incomplete recovery at T2, T3 and T4. ROC curve analysis, however, showed that the facial nerve diameter at T1 had a lower predictive value than the facial nerve conduction study for an incomplete clinical recovery at three (T3) and six (T4) months. Still, the facial nerve diameter asymmetry, as assessed with HRUS, had a relatively high negative predictive value (thus indicating a strong association between a normal HRUS examination and a good prognosis). Conclusions: Although HRUS shows an abnormally increased facial nerve diameter in patients in the acute phase of Bell's palsy, the predictive value of this technique for incomplete clinical recovery at three and six months is lower than that of the nerve conduction study. Significance: Nerve ultrasound has a low predictive value for incomplete clinical recovery in patients with Bell's palsy.
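For reference, the negative predictive value mentioned above is the standard diagnostic quantity; the definition below is the textbook one (TN = true negatives, here patients with a normal HRUS examination who recovered completely; FN = false negatives, patients with a normal examination who did not), not a formula reported by the study itself.

```latex
\[
  \mathrm{NPV} \;=\; \frac{\mathrm{TN}}{\mathrm{TN} + \mathrm{FN}}
\]
```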
Overtraining syndrome, stress and nutrition in football amateur athletes
In competitive sports it is important to optimize and improve the recovery-stress state. We aimed to investigate overtraining syndrome in football by modulating competitive training and evaluating the nutritional status of young amateur soccer players, through monitoring of endurance and strength capacity in a sample of twenty athletes between the ages of 18 and 33 (mean 22 +/- 4.43 SD). Overtraining syndrome is a condition of physical, behavioural and emotional stress in sports and occurs when physical activity is so intense that it prevents the athlete from recovering correctly and fully eliminating the sense of fatigue. From September 2016 to April 2017, the athletes were monitored with anthropometric tests (BMI calculation), nutritional tests (dietary recall test) and sports tests (Cooper and Sargent tests) for the prevention of overtraining syndrome, with initial, intermediate and final measurements. During the observational period, each player performed normal athletic training sessions and took part in two additional monthly sessions, for a total of sixteen sessions, with free weights; after the intermediate assessment, the exercises were modified to reduce the overtraining phenomenon. Initial test results were positive for defenders and midfielders, while at the end of the study goalkeepers and forwards had significantly improved their performances. The total percentage increase of the sample is around +/- 4%, and the study confirmed that by modulating the intensity of training and controlling the athletes' diet, it is possible to reduce or eliminate overtraining effects.
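For reference, the BMI used in the anthropometric tests is the standard body mass index; the formula below is the textbook definition, not a detail taken from the study.

```latex
\[
  \mathrm{BMI} \;=\; \frac{\text{body mass (kg)}}{\text{height (m)}^{2}}
\]
```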
EEG-based cognitive control behaviour assessment: an ecological study with professional air traffic controllers
Several models defining different types of cognitive human behaviour are available. For this work, we selected the Skill, Rule and Knowledge (SRK) model proposed by Rasmussen in 1983. This model is currently broadly used in safety-critical domains, such as aviation. Nowadays, there are no tools able to assess at which level of cognitive control the operator is dealing with the considered task, that is, whether he/she is performing the task as an automated routine (skill level), as a procedure-based activity (rule level), or as a problem-solving process (knowledge level). Several studies have tried to model the SRK behaviours from a Human Factors perspective. Despite such studies, there is no evidence of such behaviours having been evaluated from a neurophysiological point of view, for example by considering brain activity variations across the different SRK levels. Therefore, the proposed study aimed to investigate the use of neurophysiological signals to assess cognitive control behaviours according to the SRK taxonomy. The results of the study, performed on 37 professional Air Traffic Controllers, demonstrated that specific brain features could characterize and discriminate the different SRK levels, therefore enabling an objective assessment of the degree of cognitive control behaviours in a realistic setting.
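A purely illustrative sketch of the kind of analysis described above is shown below: discriminating the three SRK cognitive-control levels from EEG-derived features (e.g., band power per channel) with a simple classifier and cross-validation. The synthetic features, feature dimensionality and choice of classifier are our own assumptions; the study's actual neurophysiological pipeline may differ substantially.

```python
# Illustrative only: classify three SRK levels from synthetic EEG-style
# features (e.g. band power per channel) with a linear classifier and
# report cross-validated accuracy.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials_per_level, n_features = 40, 16   # e.g. 16 band-power features

# synthetic feature matrix: three SRK levels with slightly shifted means
X = np.vstack([
    rng.normal(loc=shift, scale=1.0, size=(n_trials_per_level, n_features))
    for shift in (0.0, 0.4, 0.8)          # skill, rule, knowledge (toy shifts)
])
y = np.repeat([0, 1, 2], n_trials_per_level)   # SRK level labels

# cross-validated accuracy of a linear classifier on the features
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
print("mean CV accuracy:", scores.mean())
```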
- …