Investigation into the semantic density and semantic gravity wave profile of teachers when discussing electrophilic aromatic substitution (SEAr)
Language in chemistry is highly specialized and, for students, transitions in language complexity from high school to university can be extremely challenging. With an increasingly diverse cohort of students enrolled on UK chemistry degree programmes, better understanding the linguistic challenges students face is becoming a greater pedagogical priority. Spoken language plays a central role when learning chemistry, and any misunderstandings can lead to misconceptions that can impede students’ success in this demanding subject. This small-scale study sought to compare the complexity of spoken-language explanations of the same chemical process within UK secondary (high school) and university contexts. The study involved seven organic chemistry educators/teachers, four based in a UK university and three in a UK high school, discussing electrophilic aromatic substitution (SEAr) via a lecture or screencast. The participants’ spoken discourse was transcribed and coded according to the concepts of semantic gravity (the degree to which meaning relates to context) and semantic density (the degree to which meaning is condensed within symbols) drawn from Legitimation Code Theory, and then analyzed for semantic waves. When considering semantic gravity, there were some similarities and some differences. In all cases, semantic gravity was relatively weak, but participants based in a university environment generally tended to exhibit slightly weaker semantic gravity than their school-based counterparts. The school-based participants usually added further explanations to clarify what was meant during an explanation and exhibited semantic waves by unpacking and re-packing a concept, whereas the university-based participants tended to show a flatter semantic profile. Findings showed that across the levels of study investigated, semantic density was relatively strong, with a similar complexity of chemistry-specific vocabulary used by all seven participants, regardless of the audience. Findings have pedagogical implications and suggest that a larger-scale study of semantic waves in oral chemistry discourse could usefully inform specific-purposes language teaching.
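To illustrate the kind of semantic-wave analysis the abstract describes, the following minimal Python sketch maps coded transcript segments onto a numeric semantic-gravity profile and counts unpack/re-pack movements. The code labels (SG-- to SG++), the numeric scale, and the example SEAr segments are assumptions for illustration only, not the study's actual coding scheme.

# Ordinal scale: higher value = stronger semantic gravity (more context-bound meaning).
# Labels and values are illustrative assumptions, not taken from the study.
SG_SCALE = {"SG--": 1, "SG-": 2, "SG+": 3, "SG++": 4}

def wave_profile(coded_segments):
    """Return the numeric semantic-gravity profile for a coded transcript."""
    return [SG_SCALE[code] for _, code in coded_segments]

def count_waves(profile):
    """Count full 'unpack and re-pack' movements: a shift towards stronger
    gravity (unpacking) followed by a shift back towards weaker gravity."""
    waves, unpacking = 0, False
    for prev, curr in zip(profile, profile[1:]):
        if curr > prev:                      # moving towards stronger gravity (unpacking)
            unpacking = True
        elif curr < prev and unpacking:      # re-packing after an unpacking move
            waves += 1
            unpacking = False
    return waves

# Hypothetical coded explanation of SEAr
segments = [
    ("electrophilic aromatic substitution", "SG--"),
    ("the benzene ring attacks the electrophile", "SG-"),
    ("imagine the nitration of benzene in the lab", "SG++"),
    ("this restores aromaticity, a general principle", "SG--"),
]
profile = wave_profile(segments)
print(profile, "waves:", count_waves(profile))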
Quick, accurate, smart: 3D computer vision technology helps assessing confined animals' behaviour
Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software's accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non-human animal behaviour science. Further improvements and validation are needed, and future applications and limitations are discussed.
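As a rough illustration of clustering temporal movement patterns without a pre-set ethogram, the Python sketch below summarises fixed-length windows of a 3D (x, y, depth) trajectory and groups them with k-means. The window length, the speed/height features, and the use of k-means are assumptions for illustration, not the prototype software's actual pipeline.

import numpy as np
from sklearn.cluster import KMeans

def window_features(track, win=30):
    """Split a (T, 3) trajectory of (x, y, depth) positions into fixed windows
    and summarise each window as mean speed and mean depth."""
    feats = []
    for start in range(0, len(track) - win, win):
        w = track[start:start + win]
        speed = np.linalg.norm(np.diff(w[:, :2], axis=0), axis=1).mean()
        depth = w[:, 2].mean()
        feats.append([speed, depth])
    return np.array(feats)

# Synthetic random-walk trajectory standing in for a tracked dog
rng = np.random.default_rng(0)
track = np.cumsum(rng.normal(size=(600, 3)), axis=0)

X = window_features(track)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels)  # recurring cluster labels could later be named (e.g. resting, pacing)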
Recording behaviour of indoor-housed farm animals automatically using machine vision technology: a systematic review
Large-scale phenotyping of animal behaviour traits is time consuming and has led to increased demand for technologies that can automate these procedures. Automated tracking of animals has been successful in controlled laboratory settings, but recording from animals in large groups in highly variable farm settings presents challenges. The aim of this review is to provide a systematic overview of the advances that have occurred in automated, high throughput image detection of farm animal behavioural traits with welfare and production implications. Peer-reviewed publications written in English were reviewed systematically following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. After identification, screening, and assessment for eligibility, 108 publications met these specifications and were included for qualitative synthesis. Data collected from the papers included camera specifications, housing conditions, group size, algorithm details, procedures, and results. Most studies utilized standard digital colour video cameras for data collection, with increasing use of 3D cameras in papers published after 2013. Papers including pigs (across production stages) were the most common (n = 63). The most common behaviours recorded included activity level, area occupancy, aggression, gait scores, resource use, and posture. Our review revealed many overlaps in methods applied to analysing behaviour, and most studies started from scratch instead of building upon previous work. Training and validation sample sizes were generally small (mean±s.d. groups = 3.8±5.8), and data collection and testing took place in relatively controlled environments. To advance our ability to automatically phenotype behaviour, future research should build upon existing knowledge and validate technology under commercial settings, and publications should explicitly describe recording conditions in detail to allow studies to be reproduced.
Discerning pig screams in production environments
Pig vocalisations convey information about their current state of health and welfare. Continuously monitoring these vocalisations can provide useful information for the farmer. For instance, pig screams can indicate stressful situations. When monitoring screams, other sounds can interfere with scream detection, so distinguishing screams from other sounds is essential. The objective of this study was to understand which sound features define a scream. To this end, a method to detect screams based on sound features with physical meaning and explicit rules was developed, using 7 hours of labelled data from 24 pigs. The developed detection method attained 72% sensitivity, 91% specificity and 83% precision. As a result, the detection method showed that screams contain the following features discerning them from other sounds: a formant structure, adequate power, high-frequency content, sufficient variability and duration.
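In the spirit of the explicit, feature-based rules described above, here is a minimal Python sketch that flags a sound event as a scream when enough consecutive frames show adequate power and high-frequency content. The specific features, thresholds, frame length and frequency cut-off are illustrative assumptions, not the study's actual rule set.

import numpy as np

def frame_features(frame, sample_rate):
    """Compute simple, physically meaningful features for one audio frame."""
    power = float(np.mean(frame ** 2))
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    # Fraction of spectral energy above 1 kHz (assumed high-frequency cut-off)
    high = spectrum[freqs > 1000].sum() / (spectrum.sum() + 1e-12)
    return power, high

def is_scream(frames, sample_rate, min_power=0.01, min_high=0.4, min_frames=5):
    """Flag an event as a scream if enough consecutive frames show adequate
    power and high-frequency content (a crude duration rule)."""
    hits = 0
    for frame in frames:
        power, high = frame_features(frame, sample_rate)
        hits = hits + 1 if (power > min_power and high > min_high) else 0
        if hits >= min_frames:
            return True
    return False

# Synthetic 0.5 s event at 16 kHz, split into ~32 ms frames
sr = 16000
t = np.arange(int(0.5 * sr)) / sr
signal = 0.5 * np.sin(2 * np.pi * 2500 * t)  # energetic, high-pitched tone
frames = np.array_split(signal, len(signal) // 512)
print(is_scream(frames, sr))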
Automatic Sheep Weight Estimation Based on K-Means Clustering and Multiple Linear Regression
Development of automatic surveillance of animal behaviour and welfare using image analysis and machine learned segmentation technique.
In this paper, the feasibility of extracting the proportion of pigs located in different areas of a pig pen by advanced image analysis techniques is explored and discussed for possible applications. For example, pigs generally locate themselves in the wet dunging area at high ambient temperatures in order to avoid heat stress, as wetting the body surface is the major path to dissipate heat by evaporation. Thus, the proportion of pigs in the dunging area and resting area, respectively, could be used as an indicator of failure to control the climate in the pig environment, as pigs are not supposed to rest in the dunging area. The computer vision methodology utilizes a learning-based segmentation approach using several features extracted from the image. The learning-based approach applied builds on extended state-of-the-art features in combination with a structured prediction framework based on a logistic regression solver using elastic net regularization. In addition, the method is able to produce a probability per pixel rather than a hard decision. This overcomes some of the limitations found in a setup using greyscale information only. The pig pen is a difficult imaging environment because of challenging lighting conditions such as shadows, poor lighting and poor contrast between pig and background. In order to test practical conditions, a pen containing nine young pigs was filmed from a top-view perspective by an Axis M3006 camera at a resolution of 640×480 in three 10-min sessions under different lighting conditions. The results indicate that a learning-based method improves, in comparison with greyscale methods, the possibility to reliably identify the proportions of pigs in different areas of the pen. Pigs with changed behaviour (location) in the pen may indicate changed climate conditions. Changed individual behaviour may also indicate inferior health or acute illness.
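To make the per-pixel, elastic-net-regularised logistic regression idea concrete, here is a minimal Python sketch on synthetic data. The two pixel features (raw intensity and a 5x5 local mean), the hyperparameters, and the synthetic image are assumptions for illustration only, not the extended state-of-the-art features used in the paper.

import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.linear_model import LogisticRegression

def pixel_features(image):
    """Stack simple per-pixel features: raw intensity and a 5x5 local mean."""
    local_mean = uniform_filter(image, size=5)
    return np.stack([image.ravel(), local_mean.ravel()], axis=1)

# Synthetic training image and ground-truth mask standing in for 'pig' vs 'background'
rng = np.random.default_rng(0)
image = rng.random((64, 64))
mask = np.zeros((64, 64), dtype=int)
mask[20:40, 10:50] = 1
image[mask == 1] += 0.5  # make the 'pig' region brighter than the background

X, y = pixel_features(image), mask.ravel()
clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=1.0, max_iter=5000).fit(X, y)

# Per-pixel probability map rather than a hard decision, as described above
prob_map = clf.predict_proba(X)[:, 1].reshape(image.shape)
print(prob_map.mean())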
