
    Attentive Single-Tasking of Multiple Tasks

    In this work we address task interference in universal networks by considering that a network is trained on multiple tasks, but performs one task at a time, an approach we refer to as "single-tasking multiple tasks". The network thus modifies its behaviour through task-dependent feature adaptation, or task attention. This gives the network the ability to accentuate the features that are adapted to a task, while shunning irrelevant ones. We further reduce task interference by forcing the task gradients to be statistically indistinguishable through adversarial training, ensuring that the common backbone architecture serving all tasks is not dominated by any of the task-specific gradients. Results on three multi-task dense labelling problems consistently show: (i) a large reduction in the number of parameters while preserving, or even improving, performance and (ii) a smooth trade-off between computation and multi-task accuracy. We provide our system's code and pre-trained models at http://vision.ee.ethz.ch/~kmaninis/astmt/. Comment: CVPR 2019 Camera Ready
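
    The two mechanisms described above, per-task attention over shared features and an adversary that makes the tasks indistinguishable in the shared backbone, can be sketched compactly. The PyTorch snippet below is an illustrative simplification rather than the released ASTMT code: module names are invented, the attention is a plain squeeze-and-excitation gate, and the adversary here discriminates tasks from backbone features through a gradient-reversal layer rather than operating on the raw task gradients.

```python
# Illustrative sketch of "single-tasking multiple tasks": a shared backbone,
# per-task SE-style attention that modulates shared features, and a task
# discriminator behind a gradient-reversal layer so the shared representation
# does not favour any single task. Not the authors' released implementation.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output  # flip the gradient sign for adversarial training

class TaskAttention(nn.Module):
    """Squeeze-and-excitation style gating, one instance per task (assumed)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
    def forward(self, feats):
        w = self.gate(feats).unsqueeze(-1).unsqueeze(-1)
        return feats * w  # accentuate task-relevant channels, shun the rest

class SingleTaskingNet(nn.Module):
    def __init__(self, backbone, channels, num_tasks, heads):
        super().__init__()
        self.backbone = backbone                          # shared encoder
        self.attn = nn.ModuleList(TaskAttention(channels) for _ in range(num_tasks))
        self.heads = nn.ModuleList(heads)                 # one decoder per task
        self.task_disc = nn.Linear(channels, num_tasks)   # adversary

    def forward(self, x, task_id):
        feats = self.backbone(x)
        task_feats = self.attn[task_id](feats)            # task-dependent adaptation
        out = self.heads[task_id](task_feats)
        # The adversary tries to guess the active task; the reversed gradient
        # pushes the shared features toward task-indistinguishability.
        disc_logits = self.task_disc(GradReverse.apply(feats).mean(dim=(2, 3)))
        return out, disc_logits
```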

    Improving One-class Recommendation with Multi-tasking on Various Preference Intensities

    In the one-class recommendation problem, recommendations must be made based on users' implicit feedback, which is inferred from their actions and inaction. Existing works obtain representations of users and items by encoding positive and negative interactions observed from training data. However, these efforts assume that all positive signals from implicit feedback reflect a fixed preference intensity, which is not realistic. Consequently, representations learned with these methods usually fail to capture informative entity features that reflect various preference intensities. In this paper, we propose a multi-tasking framework that takes the various preference intensities of each signal from implicit feedback into consideration. Representations of entities are required to satisfy the objective of each subtask simultaneously, making them more robust and generalizable. Furthermore, we incorporate attentive graph convolutional layers to explore high-order relationships in the user-item bipartite graph and dynamically capture the latent tendencies of users toward the items they interact with. Experimental results show that our method outperforms state-of-the-art methods by a large margin on three large-scale real-world benchmark datasets. Comment: RecSys 2020 (ACM Conference on Recommender Systems 2020)
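
    One way to read the multi-tasking idea is as several ranking objectives sharing one set of embeddings, each defined at a different preference-intensity level. The sketch below illustrates only that reading; the thresholds, function names, and BPR-style loss are assumptions, and the attentive graph convolutional layers from the paper are omitted.

```python
# Illustrative multi-task objective over preference-intensity levels (a sketch,
# not the paper's implementation): each subtask treats interactions whose
# implicit-feedback intensity reaches a different threshold as positives and
# contributes a BPR-style ranking loss on the shared embeddings.
import torch
import torch.nn.functional as F

def bpr_loss(user_emb, pos_item_emb, neg_item_emb):
    """Bayesian Personalized Ranking loss: positives should score higher."""
    pos = (user_emb * pos_item_emb).sum(-1)
    neg = (user_emb * neg_item_emb).sum(-1)
    return -F.logsigmoid(pos - neg).mean()

def multi_intensity_loss(user_emb, item_emb, interactions, thresholds=(1, 3, 5)):
    """interactions: list of (user_idx, pos_idx, neg_idx, intensity) tuples.
    Embeddings are shared across subtasks, so they must satisfy every level."""
    total = 0.0
    for t in thresholds:                       # one subtask per intensity level
        batch = [(u, p, n) for u, p, n, s in interactions if s >= t]
        if not batch:
            continue
        u, p, n = (torch.tensor(col) for col in zip(*batch))
        total = total + bpr_loss(user_emb[u], item_emb[p], item_emb[n])
    return total
```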

    Multi-tasking: The Relationship between Watching a Video and Memory

    This study investigates the relationship between media multitasking and memory among undergraduate students at a small, liberal arts college in Minnesota. The participants (N=20) were randomly assigned using block randomization to either the experimental group, which studied a list of 20 words while watching a video clip, or the control group, which only studied the set of 20 words. All of the participants were given 2 minutes to study the list of 20 words and then 2 minutes to write as many words as they could recall from memory. The participants who watched the video clip were asked to answer questions about the video to ensure the independent variable was manipulated correctly. I found that participants who media multitasked by watching a video had a more difficult time recalling words from the list compared to those in the control group (t(18) = -2.427, p = .026, mean difference = -4.20, standard error of the difference = 2.79). These results suggest that people who media multitask while they study may have a more difficult time recalling that information than those who do not multitask while studying.
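
    For readers unfamiliar with the reported statistic, the snippet below shows how an independent-samples t-test with 18 degrees of freedom (two groups of 10) is computed in SciPy. The recall scores are hypothetical and are not the study's data.

```python
# How an independent-samples t-test like the one reported above (t(18), two
# groups of 10) is computed; the scores below are made up for illustration.
from scipy import stats

video_group   = [8, 10, 7, 9, 6, 11, 8, 7, 9, 10]        # recalled while multitasking
control_group = [13, 12, 15, 11, 14, 12, 13, 16, 12, 14] # recalled without the video

t_stat, p_value = stats.ttest_ind(video_group, control_group)
df = len(video_group) + len(control_group) - 2
print(f"t({df}) = {t_stat:.3f}, p = {p_value:.3f}")
```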

    Improving Landmark Localization with Semi-Supervised Learning

    We present two techniques to improve landmark localization in images from partially annotated datasets. Our primary goal is to leverage the common situation where precise landmark locations are only provided for a small data subset, but where class labels for classification or regression tasks related to the landmarks are more abundantly available. First, we propose the framework of sequential multitasking and explore it here through an architecture for landmark localization where training with class labels acts as an auxiliary signal to guide the landmark localization on unlabeled data. A key aspect of our approach is that errors can be backpropagated through a complete landmark localization model. Second, we propose and explore an unsupervised learning technique for landmark localization based on having a model predict equivariant landmarks with respect to transformations applied to the image. We show that these techniques improve landmark prediction considerably and can learn effective detectors even when only a small fraction of the dataset has landmark labels. We present results on two toy datasets and four real datasets, with hands and faces, and report new state-of-the-art results on two datasets in the wild; e.g., with only 5% of labeled images we outperform the previous state-of-the-art trained on the AFLW dataset. Comment: Published as a conference paper in CVPR 2018
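
    The second technique hinges on an equivariance constraint: landmarks predicted on a transformed image should coincide with the transformed landmarks predicted on the original image. The sketch below illustrates one possible formulation of that constraint with affine warps in PyTorch; the function name, coordinate convention, and choice of an affine parameterisation are assumptions, not the paper's implementation.

```python
# Sketch of an equivariance loss for unsupervised landmark learning.
# model(x) is assumed to return landmark coordinates of shape (B, K, 2)
# in the normalised [-1, 1] convention used by affine_grid/grid_sample.
import torch
import torch.nn.functional as F

def equivariance_loss(model, images, theta):
    """images: (B, C, H, W); theta: (B, 2, 3) affine matrices."""
    grid = F.affine_grid(theta, list(images.shape), align_corners=False)
    warped = F.grid_sample(images, grid, align_corners=False)

    lm_orig = model(images)   # landmarks on the original image
    lm_warp = model(warped)   # landmarks on the transformed image

    # grid_sample places at warped location p the pixel taken from theta @ [p; 1]
    # in the original image, so mapping the warped-image landmarks through theta
    # should reproduce the original-image landmarks.
    A, b = theta[:, :, :2], theta[:, :, 2]
    mapped = torch.einsum('bij,bkj->bki', A, lm_warp) + b.unsqueeze(1)
    return F.mse_loss(mapped, lm_orig)
```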

    Interactive lectures: Clickers or personal devices?

    Audience response systems (‘clickers’) are frequently used to promote participation in large lecture classes, and evidence suggests that they convey a number of benefits to students, including improved academic performance and student satisfaction. The limitations of these systems (such as limited access and cost) can be overcome using students’ personal electronic devices, such as mobile phones, tablets and laptops, together with text message, web- or app-based polling systems. Using questionnaires, we compare student perceptions of clicker and smartphone based polling systems. We find that students prefer interactive lectures generally, but those who used their own devices preferred those lectures over lectures using clickers. However, device users were more likely to report using their devices for other purposes (checking email, social media, etc.) when they were available to answer polling questions. These students did not feel that this distracted them from the lecture; instead, concerns over the use of smartphones centred around increased battery usage and inclusivity for students without access to suitable technology. Our results suggest that students generally preferred to use their own devices over clickers, and that this may be a sensible way to overcome some of the limitations associated with clickers, although issues surrounding levels of distraction and the implications for retention and recall of information need further investigation.

    The Hazard Potential of Non-Driving-Related Tasks in Conditionally Automated Driving

    Today, humans and machines successfully interact in a multitude of scenarios. Facilitated by advancements in artificial intelligence, increasing driving automation may allow drivers to focus on non-driving-related tasks (NDRTs) during the automated ride. However, conditionally automated driving, as a transitional state between human-operated driving and fully automated driving, requires drivers to take over control of the vehicle whenever requested. Thus, the productive use of driving time might come at the cost of increased traffic safety risks due to insufficient and insecure human-vehicle interaction. This study explores the take-over performance and risk potential of different NDRTs (auditory task, visual task on a regular display, visual task with mixed reality hardware) while driving. Our study indicates the hazard potential of visual vs. auditory distraction and of multitasking vs. sequential tasking. Our findings contribute to understanding what influences the acceptance and adoption of automated driving and inform the design of safe vehicle-human take-overs.

    MT-SNN: Spiking Neural Network that Enables Single-Tasking of Multiple Tasks

    In this paper we explore the capabilities of spiking neural networks in solving multi-task classification problems using the approach of single-tasking of multiple tasks. We designed and implemented a multi-task spiking neural network (MT-SNN) that can learn two or more classification tasks while performing one task at a time. The task to perform is selected by modulating the firing threshold of the leaky integrate-and-fire neurons used in this work. The network is implemented using Intel's Lava platform for the Loihi2 neuromorphic chip. Tests are performed on dynamic multi-task classification for NMNIST data. The results show that MT-SNN effectively learns multiple tasks by modifying its dynamics, namely the spiking neurons' firing threshold. Comment: 4 pages, 2 figures
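
    The core mechanism, switching behaviour by changing the firing threshold of leaky integrate-and-fire neurons, can be illustrated without neuromorphic hardware. The sketch below is plain NumPy, not Lava or Loihi2 code, and the threshold values, decay constant, and layer interface are illustrative assumptions.

```python
# Sketch of task selection via the firing threshold of leaky integrate-and-fire
# (LIF) neurons: the same weights produce different spiking dynamics per task.
import numpy as np

TASK_THRESHOLDS = {0: 1.0, 1: 1.5}   # illustrative values, one threshold per task

def lif_layer(spikes_in, weights, task_id, decay=0.9):
    """spikes_in: (steps, n_in) binary spike trains; weights: (n_in, n_out)."""
    v_th = TASK_THRESHOLDS[task_id]             # task is selected by its threshold
    steps, n_out = spikes_in.shape[0], weights.shape[1]
    v = np.zeros(n_out)                         # membrane potentials
    spikes_out = np.zeros((steps, n_out))
    for t in range(steps):
        v = decay * v + spikes_in[t] @ weights  # leak, then integrate input current
        fired = v >= v_th                       # fire where the threshold is reached
        spikes_out[t] = fired
        v[fired] = 0.0                          # reset membrane after a spike
    return spikes_out
```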