509 research outputs found

    Library Impact Data Project: hit, miss or maybe

    Purpose: In February 2011 the University of Huddersfield, along with seven partners, was awarded JISC funding through the Activity Data programme to investigate the hypothesis that "there is a statistically significant correlation across a number of universities between library activity data and student attainment". The Library Impact Data Project aimed to analyse users' library activity and link it to final degree award. By identifying a positive correlation in this data, those subject areas or courses which exhibit high usage of library resources can be used as models of good practice. Design, methodology or approach: The overall approach of the project was to extract anonymised activity data from partners' systems and analyse the findings. For each student who graduated in the sample years, the following data was required: final grade achieved; number of books borrowed; number of times e-resources were accessed; number of times the student entered the library; and school or faculty. This data was then collated, normalised and analysed. In addition, all partners were asked to hold a number of focus groups in order to gather qualitative data from students on library usage, providing a holistic picture of how students engage with library resources. Findings: This paper reports on the findings of the project, which ran from February to July 2011, and considers whether the hypothesis was supported for the three indicators of library usage. Research or practical limitations or implications: The main aim of the project was to support the hypothesis. The project acknowledges, however, that the relationship between the two variables is not causal, and other factors will influence student attainment. Conclusions: The paper discusses the implications of the results and suggests further work that could build on the project's findings.
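The kind of correlation test described can be sketched as follows. All data, the ordinal grade encoding, and the choice of Spearman's rank correlation (a reasonable fit for ordinal degree classifications) are illustrative assumptions, not the project's exact method or figures:

```python
def rank(xs):
    """Assign average ranks (1-based), averaging over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average position of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical per-student records: degree class encoded ordinally
# (1 = third ... 4 = first) against one usage indicator (books borrowed).
grades = [4, 3, 3, 2, 1, 4, 2, 3, 1, 4]
books = [62, 41, 55, 18, 9, 70, 22, 35, 12, 58]
print(spearman(grades, books))
```

A positive coefficient here would indicate correlation only; as the abstract stresses, it would not establish that library usage causes higher attainment.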

    FERAL HOGS - BOON OR BURDEN

    Feral hogs (Sus scrofa L.) have long been considered a pest by most land managers because of the range and pasture damage that can result from their feeding habits. In recent years, however, feral hogs have become the most sought-after big game animal in California, second only to deer. Their great reproductive capacity, coupled with the ruggedness of their preferred habitat, has allowed the California State Fish and Game Department to set liberal seasons and bag limits. The freedom to work within the state's liberal framework has prompted some private land managers to look at controlled harvest programs with several objectives in mind. Using paid hunting as the main means of control, and thus providing additional revenue for the landowner, such programs would aim at keeping the herds within the carrying capacity of the range, so that minimal damage is done to the vegetation and soil, as well as keeping interspecific competition in check. Reviewed here is a description of how such a program is carried out on the Dye Creek Preserve.

    The Good, The Bad and The Ugly: Using APIs to develop reading list software at the University of Huddersfield

    Presentation given at the European Libraries Automation Group Conference 201

    ABAW: Valence-Arousal Estimation, Expression Recognition, Action Unit Detection & Multi-Task Learning Challenges

    This paper describes the third Affective Behavior Analysis in-the-wild (ABAW) Competition, held in conjunction with the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2022. The 3rd ABAW Competition is a continuation of the Competitions held at the ICCV 2021, IEEE FG 2020 and IEEE CVPR 2017 Conferences, and aims at automatically analyzing affect. This year the Competition encompasses four Challenges: i) uni-task Valence-Arousal Estimation, ii) uni-task Expression Classification, iii) uni-task Action Unit Detection, and iv) Multi-Task Learning. All the Challenges are based on a common benchmark database, Aff-Wild2, which is a large-scale in-the-wild database and the first one to be annotated in terms of valence-arousal, expressions and action units. In this paper, we present the four Challenges along with the utilized Competition corpora, outline the evaluation metrics, and present the baseline systems together with their obtained results.
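For the valence-arousal track, ABAW competitions have typically scored predictions with the Concordance Correlation Coefficient (CCC), which penalises both poor correlation and mean/scale bias. A minimal sketch of the standard formula (the abstract itself does not spell out the metric, so treat this as background):

```python
def ccc(x, y):
    """Concordance Correlation Coefficient between predictions x and labels y:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

Unlike plain Pearson correlation, a constant offset between predictions and labels lowers the CCC, so a perfectly correlated but shifted prediction does not score 1.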

    ClusterFace: Joint Clustering and Classification for Set-Based Face Recognition

    Deep learning technology has enabled successful modeling of complex facial features when high quality images are available. Nonetheless, accurate modeling and recognition of human faces in real world scenarios 'in the wild' or under adverse conditions remains an open problem. When unconstrained faces are mapped into deep features, variations such as illumination, pose, occlusion, etc., can create inconsistencies in the resultant feature space. Hence, deriving conclusions based on direct associations could lead to degraded performance. This raises the need for a basic feature space analysis prior to face recognition. This paper devises a joint clustering and classification scheme which learns deep face associations in an easy-to-hard way. Our method is based on hierarchical clustering, where the early iterations tend to preserve high reliability. The rationale of our method is that a reliable clustering result can provide insights on the distribution of the feature space, which can guide the classification that follows. Experimental evaluations on three tasks, face verification, face identification and rank-order search, demonstrate better or competitive performance compared to the state-of-the-art on all three tasks.
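The easy-to-hard idea, where a strict early grouping yields high-reliability clusters before harder cases are merged, can be illustrated with a deliberately simplified sketch. The greedy single-linkage grouping and the `threshold_clusters`/`tau` names below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def cosine_sim(F):
    """Pairwise cosine similarity of row-vector features."""
    F = F / np.linalg.norm(F, axis=1, keepdims=True)
    return F @ F.T

def threshold_clusters(F, tau):
    """Greedy single-linkage grouping: two faces join the same cluster when
    their cosine similarity exceeds tau. A strict tau yields small,
    high-reliability clusters (the 'easy' early stage); relaxing tau in later
    passes would merge the harder, more ambiguous cases."""
    S = cosine_sim(F)
    n = len(F)
    labels = list(range(n))           # each sample starts as its own cluster
    for i in range(n):
        for j in range(i + 1, n):
            if S[i, j] > tau:
                old, new = labels[j], labels[i]
                labels = [new if l == old else l for l in labels]
    return labels
```

A reliable first pass like this can then inform the classification stage, e.g. by treating each early cluster as a single identity hypothesis.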

    Deep Semantic Clustering by Partition Confidence Maximisation

    By simultaneously learning visual features and data grouping, deep clustering has shown impressive ability to deal with unsupervised learning for structure analysis of high-dimensional visual data. Existing deep clustering methods typically rely on local learning constraints based on inter-sample relations and/or self-estimated pseudo labels. This is susceptible to the inevitable errors distributed in the neighbourhoods and suffers from error propagation during training. In this work, we propose to solve this problem by learning the most confident clustering solution from all the possible separations, based on the observation that assigning samples from the same semantic categories into different clusters reduces both intra-cluster compactness and inter-cluster diversity, i.e. lowers partition confidence. Specifically, we introduce a novel deep clustering method named PartItion Confidence mAximisation (PICA). It is established on the idea of learning the most semantically plausible data separation, in which all clusters can be mapped to the ground-truth classes one-to-one, by maximising the 'global' partition confidence of the clustering solution. This is realised by introducing a differentiable partition uncertainty index and its stochastic approximation, as well as a principled objective loss function that minimises such an index, all of which together enable a direct adoption of conventional deep networks and mini-batch based model training. Extensive experiments on six widely-adopted clustering benchmarks demonstrate our model's performance superiority over a wide range of state-of-the-art approaches. The code is available online.
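The notion of a 'global' partition-confidence score can be pictured with a much-simplified sketch (this is not PICA's exact index or its stochastic approximation; the function name and the off-diagonal-cosine formulation are assumptions for illustration):

```python
import numpy as np

def partition_confidence(P):
    """Simplified global partition-confidence sketch. P is an (N, K) matrix
    of soft cluster assignments (rows sum to 1). Each column, L2-normalised,
    summarises one cluster over the batch; a confident partition gives nearly
    orthogonal columns, so the mean off-diagonal cosine similarity is low.
    Training would minimise this differentiable index; here we only evaluate it."""
    Q = P / np.linalg.norm(P, axis=0, keepdims=True)   # normalise columns
    S = Q.T @ Q                                        # (K, K) cluster-pair cosines
    K = S.shape[0]
    return (S.sum() - np.trace(S)) / (K * (K - 1))     # lower = more confident
```

A one-hot (fully confident) assignment matrix scores 0, while a uniform (maximally ambiguous) one scores 1, matching the intuition that splitting one semantic category across clusters lowers partition confidence.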

    SSDL: Self-Supervised Domain Learning for Improved Face Recognition

    Face recognition in unconstrained environments is challenging due to variations in illumination, quality of sensing, motion blur, etc. An individual's face appearance can vary drastically under different conditions, creating a gap between train (source) and varying test (target) data. The domain gap can cause decreased performance levels in direct knowledge transfer from source to target. Although fine-tuning with domain-specific data can be an effective solution, collecting and annotating data for all domains is extremely expensive. To this end, we propose a self-supervised domain learning (SSDL) scheme that trains on triplets mined from unlabelled data. A key factor in effective discriminative learning is selecting informative triplets. Building on the most confident predictions, we follow an "easy-to-hard" scheme of alternating triplet mining and self-learning. Comprehensive experiments on four different benchmarks show that SSDL generalizes well on different domains.
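Confidence-gated triplet mining of the kind described can be sketched as follows. The details (the `tau` threshold, hardest-positive/hardest-negative selection, and function names) are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def mine_triplets(features, pseudo_labels, confidence, tau=0.8):
    """Mine (anchor, positive, negative) triplets from unlabelled data using
    self-predicted pseudo-labels. Only samples whose pseudo-label confidence
    exceeds tau take part (the 'easy' pool); each confident anchor is paired
    with its hardest confident positive (farthest same-label sample) and its
    hardest confident negative (closest other-label sample)."""
    keep = np.where(confidence > tau)[0]
    triplets = []
    for a in keep:
        pos = [i for i in keep if i != a and pseudo_labels[i] == pseudo_labels[a]]
        neg = [i for i in keep if pseudo_labels[i] != pseudo_labels[a]]
        if not pos or not neg:
            continue
        d = np.linalg.norm(features - features[a], axis=1)  # distances to anchor
        triplets.append((a, max(pos, key=lambda i: d[i]), min(neg, key=lambda i: d[i])))
    return triplets
```

In an alternating scheme, the model would be updated on these triplets, confidences re-estimated, and tau relaxed so progressively harder samples enter the pool.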

    Image Search with Text Feedback by Visiolinguistic Attention Learning

    Image search with text feedback has promising impacts in various real-world applications, such as e-commerce and internet search. Given a reference image and text feedback from the user, the goal is to retrieve images that not only resemble the input image, but also change certain aspects in accordance with the given text. This is a challenging task as it requires the synergistic understanding of both image and text. In this work, we tackle this task with a novel Visiolinguistic Attention Learning (VAL) framework. Specifically, we propose a composite transformer that can be seamlessly plugged into a CNN to selectively preserve and transform the visual features conditioned on language semantics. By inserting multiple composite transformers at varying depths, VAL is incentivised to encapsulate multi-granular visiolinguistic information, thus yielding an expressive representation for effective image search. We conduct comprehensive evaluation on three datasets: Fashion200k, Shoes and FashionIQ. Extensive experiments show our model exceeds existing approaches on all datasets, demonstrating consistent superiority in coping with various kinds of text feedback, including attribute-like and natural language descriptions.
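The core idea of conditioning visual features on language semantics can be pictured with a deliberately minimal single attention step (this is an illustrative sketch, not the paper's composite transformer; the function names and the single-query formulation are assumptions):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def text_conditioned_attention(visual, text):
    """Minimal language-conditioned visual attention. visual: (HW, D)
    flattened CNN feature map; text: (D,) sentence embedding. The text acts
    as the query, attending over spatial locations to decide which visual
    features to emphasise, yielding a text-conditioned visual summary."""
    d = visual.shape[1]
    weights = softmax(visual @ text / np.sqrt(d))  # (HW,) spatial attention
    attended = weights @ visual                    # (D,) conditioned summary
    return attended, weights
```

A composite transformer would go further, using such attention to both preserve and transform features at several CNN depths, but the gating role of the text query is the same.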

    The effect of spectrogram reconstructions on automatic music transcription: an alternative approach to improve transcription accuracy

    Most of the state-of-the-art automatic music transcription (AMT) models break down the main transcription task into sub-tasks such as onset prediction and offset prediction and train them with onset and offset labels. These predictions are then concatenated together and used as the input to train another model with the pitch labels to obtain the final transcription. We attempt to use only the pitch labels (together with a spectrogram reconstruction loss) and explore how far this model can go without introducing supervised sub-tasks. In this paper, we do not aim at achieving state-of-the-art transcription accuracy; instead, we explore the effect that spectrogram reconstruction has on our AMT model. Our proposed model consists of two U-nets: the first U-net transcribes the spectrogram into a posteriorgram, and the second U-net transforms the posteriorgram back into a spectrogram. A reconstruction loss is applied between the original spectrogram and the reconstructed spectrogram to constrain the second U-net to focus only on reconstruction. We train our model on three different datasets: MAPS, MAESTRO, and MusicNet. Our experiments show that adding the reconstruction loss can generally improve the note-level transcription accuracy when compared to the same model without the reconstruction part. Moreover, it can also boost the frame-level precision to be higher than that of the state-of-the-art models. The feature maps learned by our U-net contain grid-like structures (not present in the baseline model), which implies that, with the presence of the reconstruction loss, the model is probably trying to count along both the time and frequency axes, resulting in a higher note-level transcription accuracy.
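The training objective described, pitch labels plus a reconstruction constraint and no onset/offset supervision, can be sketched roughly as below. Function names, the binary cross-entropy choice for the pitch term, and the equal weighting are assumptions for illustration:

```python
import numpy as np

def combined_loss(posteriorgram, pitch_labels, spec, recon_spec, alpha=1.0):
    """Sketch of a pitch-label loss plus spectrogram reconstruction loss.
    The first U-net's posteriorgram is scored against binary pitch labels
    with cross-entropy; the second U-net's reconstructed spectrogram is
    scored against the input spectrogram with an L2 loss, so only pitch
    labels and the reconstruction constraint supervise training."""
    eps = 1e-7
    p = np.clip(posteriorgram, eps, 1 - eps)          # numerical safety
    bce = -np.mean(pitch_labels * np.log(p) + (1 - pitch_labels) * np.log(1 - p))
    recon = np.mean((spec - recon_spec) ** 2)         # spectrogram L2 loss
    return bce + alpha * recon
```

Because the reconstruction term depends on the posteriorgram only through the second U-net, it acts as an extra constraint on what the transcription must encode, which is the effect the paper investigates.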