138 research outputs found

    EgoTV: Egocentric Task Verification from Natural Language Task Descriptions

    To enable progress towards egocentric agents capable of understanding everyday tasks specified in natural language, we propose a benchmark and a synthetic dataset called Egocentric Task Verification (EgoTV). EgoTV contains multi-step tasks with multiple sub-task decompositions, state changes, object interactions, and sub-task ordering constraints, in addition to abstracted task descriptions that contain only partial details about ways to accomplish a task. We also propose a novel Neuro-Symbolic Grounding (NSG) approach to enable causal, temporal, and compositional reasoning over such tasks. We demonstrate NSG's capability for task tracking and verification on our EgoTV dataset and on a real-world dataset derived from CrossTask (CTV). Our contributions include the release of the EgoTV and CTV datasets, and the NSG model, for future research on egocentric assistive agents.
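
    As a concrete illustration of the kind of check such verification involves, the sketch below tests whether a predicted sequence of sub-tasks satisfies a set of ordering constraints; it is a minimal, hypothetical example, not the NSG model, and all task and sub-task names are invented.

```python
# Hypothetical illustration of sub-task ordering verification; not the authors' NSG model.

def satisfies_ordering(predicted_subtasks, ordering_constraints):
    """Check that every (before, after) constraint holds in the predicted sub-task sequence.

    predicted_subtasks: sub-task labels in the order they were detected in the video.
    ordering_constraints: iterable of (a, b) pairs meaning sub-task a must occur before b.
    """
    # Index of the first occurrence of each sub-task; missing sub-tasks fail verification.
    first_seen = {}
    for i, subtask in enumerate(predicted_subtasks):
        first_seen.setdefault(subtask, i)

    for before, after in ordering_constraints:
        if before not in first_seen or after not in first_seen:
            return False
        if first_seen[before] >= first_seen[after]:
            return False
    return True


if __name__ == "__main__":
    trace = ["pick_up_mug", "fill_mug", "place_mug"]
    constraints = [("pick_up_mug", "fill_mug"), ("fill_mug", "place_mug")]
    print(satisfies_ordering(trace, constraints))  # True
```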

    Semantic text similarity using autoencoders

    Word vectors have become a cornerstone of modern NLP. Researchers are taking embeddings ever further by learning to craft embedding vectors with task-specific semantics to power a wide array of applications. In this thesis we apply a simple feed-forward network and a stacked LSTM to a triplet dataset converted to sentence embeddings in order to evaluate paragraph semantic text similarity. We explore how to leverage existing state-of-the-art sentence embeddings for paragraph semantic text similarity and examine what information the sentence embeddings used actually hold.
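
    As a rough illustration of this recipe, the sketch below trains a small feed-forward projection on precomputed sentence embeddings with a triplet loss and scores paragraph similarity by cosine similarity; the embedding dimension, layer sizes, and margin are illustrative assumptions, not the thesis configuration.

```python
# Minimal sketch: score paragraph similarity from precomputed sentence embeddings
# with a small feed-forward projection trained on (anchor, positive, negative) triplets.
# Hypothetical dimensions and hyperparameters; not the thesis setup.
import torch
import torch.nn as nn

EMB_DIM = 512  # dimensionality of the pretrained sentence embeddings (assumed)

class Projection(nn.Module):
    def __init__(self, dim=EMB_DIM, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))

    def forward(self, paragraph_sentences):
        # paragraph_sentences: (num_sentences, EMB_DIM); mean-pool into a paragraph vector.
        return self.net(paragraph_sentences.mean(dim=0, keepdim=True))

model = Projection()
loss_fn = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One toy training step on random stand-in embeddings.
anchor, positive, negative = (torch.randn(5, EMB_DIM) for _ in range(3))
loss = loss_fn(model(anchor), model(positive), model(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()

# At inference time, cosine similarity between projected paragraph vectors scores similarity.
sim = torch.cosine_similarity(model(anchor), model(positive))
print(float(loss), float(sim))
```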

    Visual search and recognition for robot task execution and monitoring

    Visual search for relevant targets in the environment is a crucial robot skill. We propose a preliminary framework for the execution monitor of a robot task, which manages the robot's behavior when visually searching the environment for targets involved in the task. Visual search is also relevant for recovering from a failure. The framework exploits deep reinforcement learning to acquire a "common sense" scene structure and takes advantage of a deep convolutional network to detect objects and the relevant relations holding between them. The framework builds on these methods to introduce vision-based execution monitoring, which uses classical planning as a backbone for task execution. Experiments show that with the proposed vision-based execution monitor the robot can complete simple tasks and recover from failures autonomously.
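
    The toy sketch below illustrates the general shape of such a vision-based execution monitor (check an expected scene relation after each plan step and fall back to visual search on failure); all function and relation names are hypothetical stand-ins, not the paper's components.

```python
# Toy sketch of a vision-based execution monitor: after each plan step, check whether
# the expected spatial relation is observed in the detected scene; on failure, search
# for the missing object and re-check. All names are hypothetical stand-ins.

def monitor_task(plan, detections, search):
    """plan: list of (action, expected_relation) pairs, relation = (subject, rel, object).
    detections: mutable set of relations currently detected in the scene.
    search: callable that tries to (re)locate a target and update detections."""
    for action, expected in plan:
        print(f"execute: {action}")
        if expected in detections:
            continue                      # step verified by the scene detector
        print(f"monitor: {expected} not observed, searching for {expected[0]}")
        search(expected[0], detections)   # visual search to recover from the failure
        if expected not in detections:
            return False                  # recovery failed, abort the task
    return True


if __name__ == "__main__":
    scene = {("cup", "on", "table")}
    plan = [("pick cup", ("cup", "in", "gripper")), ("place cup", ("cup", "on", "shelf"))]
    # A trivial stand-in search that simulates finding the target and updating the scene.
    ok = monitor_task(plan, scene, lambda target, det: det.update({(target, "in", "gripper"),
                                                                   (target, "on", "shelf")}))
    print(ok)
```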

    Doctor of Philosophy

    Dataflow pipeline models are widely used in visualization systems. Despite recent advancements in parallel architecture, most systems still support only a single CPU or a small collection of CPUs such as an SMP workstation. Even for systems that are specifically tuned towards parallel visualization, their execution models only provide support for data-parallelism while ignoring task-parallelism and pipeline-parallelism. With the recent popularization of machines equipped with multicore CPUs and multi-GPU units, these visualization systems are undoubtedly falling further behind in reaching maximum efficiency. On the other hand, there exist several libraries that can schedule program executions on multiple CPUs and/or multiple GPUs. However, due to differences in executing a task graph versus a pipeline, along with their APIs being considerably low-level, it remains a challenge to integrate these run-time libraries into current visualization systems. Thus, there is a need for a redesigned dataflow architecture that fully supports and exploits the power of highly parallel machines in large-scale visualization. The new design must be able to schedule executions on heterogeneous platforms while at the same time supporting arbitrarily large datasets through the use of streaming data structures. The primary goal of this dissertation is to develop a parallel dataflow architecture for streaming large-scale visualizations. The framework includes support for platforms ranging from multicore processors to clusters consisting of thousands of CPUs and GPUs. We achieve this in our system by introducing the notion of Virtual Processing Elements and Task-Oriented Modules, along with a highly customizable scheduler that controls the assignment of tasks to elements dynamically. This creates an intuitive way to maintain multiple CPU/GPU kernels while still providing coherency and synchronization across module executions. We have implemented these techniques in HyperFlow, which consists of an API with all the basic dataflow constructs described in the dissertation, and a distributed run-time library that can be used to deploy those pipelines on multicore, multi-GPU, and cluster-based platforms.
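
    To make the scheduling idea more concrete, the sketch below dispatches task-oriented modules to virtual processing elements using a simple dynamic assignment; the class names echo the dissertation's terminology, but the implementation (thread-pool workers, round-robin policy) is an illustrative assumption, not the HyperFlow API.

```python
# Illustrative scheduler: dispatch task-oriented modules to virtual processing elements
# (here, worker threads standing in for CPU/GPU elements). Not the HyperFlow API.
from concurrent.futures import ThreadPoolExecutor
from queue import PriorityQueue

class VirtualProcessingElement:
    def __init__(self, name):
        self.name = name

    def run(self, module, data):
        print(f"{self.name} executing {module.__name__}")
        return module(data)

def schedule(modules, data, elements):
    """Greedy dynamic assignment: each ready module goes to the next element in turn."""
    ready = PriorityQueue()
    for priority, module in enumerate(modules):
        ready.put((priority, module))
    with ThreadPoolExecutor(max_workers=len(elements)) as pool:
        futures, i = [], 0
        while not ready.empty():
            _, module = ready.get()
            element = elements[i % len(elements)]  # round-robin stands in for a real policy
            futures.append(pool.submit(element.run, module, data))
            i += 1
        return [f.result() for f in futures]

if __name__ == "__main__":
    pipeline = [lambda d: [x * 2 for x in d], lambda d: sum(d)]
    print(schedule(pipeline, [1, 2, 3], [VirtualProcessingElement("VPE-CPU-0"),
                                         VirtualProcessingElement("VPE-GPU-0")]))
```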

    Adaptive Streaming of Massive 3D Models

    With advances in 3D model editing and 3D reconstruction techniques, more and more 3D models are becoming available and their quality is increasing. Moreover, support for 3D visualization on the web has become standardized in recent years. A major challenge is therefore to stream massive models remotely and to let users visualize and navigate these virtual environments. This thesis focuses on the streaming of and interaction with 3D content and makes three main contributions. First, we develop an interface for navigating a 3D scene with bookmarks -- small virtual objects added to the scene that the user can click to easily reach a recommended location. We describe a user study in which participants navigate 3D scenes with or without bookmarks. We show that users navigate (and complete a given task) faster when using bookmarks. However, this faster navigation has a drawback for streaming performance: a user who moves more quickly through a scene needs higher transmission capacity to enjoy the same quality of service. This drawback can be mitigated by the fact that bookmark positions are known in advance: by ordering the faces of the 3D model according to their visibility from a bookmark, we optimize transmission and thus reduce latency when users click on bookmarks. Second, we propose an adaptation of the DASH standard (Dynamic Adaptive Streaming over HTTP), widely used for video, to the streaming of textured 3D meshes. To do so, we partition the scene into a k-d tree where each cell corresponds to a DASH adaptation set. Each cell is further divided into DASH segments with a fixed number of faces, grouping faces of comparable areas. Each texture is indexed in its own adaptation set at different resolutions. All the metadata (the k-d tree cells, the texture resolutions, etc.) are referenced in an XML file used by DASH to index the content: the MPD (Media Presentation Description). Our framework thus inherits the scalability offered by DASH. We then propose algorithms that evaluate the utility of each data segment given the client's viewpoint, and streaming policies that decide which segments to download. Finally, we study streaming and 3D navigation on mobile devices. We integrate bookmarks into our 3D version of DASH and propose an improved version of our DASH client that benefits from bookmarks. A user study shows that with our bookmark-aware loading policy, bookmarks are more likely to be clicked, which improves both the quality of service and the quality of experience for users.
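
    As an illustration of viewpoint-driven segment selection of this kind, the sketch below ranks mesh segments by a simple utility (visible area over distance to the viewpoint) and greedily downloads the best segments within a byte budget; the utility formula, segment fields, and budget model are assumptions for illustration, not the policies evaluated in the thesis.

```python
# Illustrative greedy streaming policy: rank mesh segments by a viewpoint-dependent
# utility and download the best ones within a per-round byte budget.
# The utility formula and fields are assumptions, not the thesis's policies.
import math

def segment_utility(segment, viewpoint):
    dx, dy, dz = (segment["center"][i] - viewpoint[i] for i in range(3))
    distance = math.sqrt(dx * dx + dy * dy + dz * dz) + 1e-6
    return segment["visible_area"] / distance  # closer, larger segments are more useful

def choose_segments(segments, viewpoint, byte_budget):
    ranked = sorted(segments, key=lambda s: segment_utility(s, viewpoint), reverse=True)
    chosen, used = [], 0
    for seg in ranked:
        if used + seg["size_bytes"] <= byte_budget:
            chosen.append(seg["id"])
            used += seg["size_bytes"]
    return chosen

if __name__ == "__main__":
    segments = [
        {"id": "cell0/seg0", "center": (1, 0, 0), "visible_area": 4.0, "size_bytes": 200_000},
        {"id": "cell0/seg1", "center": (9, 0, 0), "visible_area": 6.0, "size_bytes": 300_000},
        {"id": "cell1/seg0", "center": (2, 1, 0), "visible_area": 1.0, "size_bytes": 100_000},
    ]
    print(choose_segments(segments, viewpoint=(0, 0, 0), byte_budget=400_000))
```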

    Improving Efficiency and Generalization of Visual Recognition

    Deep Neural Networks (DNNs) are heavy in terms of their number of parameters and computational cost. This leads to two major challenges: first, training and deployment of deep networks are expensive; second, without tremendous annotated training data, which are very costly to obtain, DNNs easily suffer from over-fitting and have poor generalization. We propose approaches to these two challenges in the context of specific computer vision problems to improve their efficiency and generalization. First, we study network pruning using neuron importance score propagation. To reduce the significant redundancy in DNNs, we formulate network pruning as a binary integer optimization problem which minimizes the reconstruction errors on the final responses produced by the network, and derive a closed-form solution to it for pruning neurons in earlier layers. Based on our theoretical analysis, we propose the Neuron Importance Score Propagation (NISP) algorithm to propagate the importance scores of final responses to every neuron in the network, then prune neurons in the entire network jointly. Second, we study visual relationship detection (VRD) with linguistic knowledge distillation. Since the semantic space of visual relationships is huge and training data is limited, especially for long-tail relationships that have few instances, detecting visual relationships from images is a challenging problem. To improve the predictive capability, especially generalization on unseen relationships, we utilize knowledge of linguistic statistics obtained from both training annotations (internal knowledge) and publicly available text, e.g., Wikipedia (external knowledge), to regularize visual model learning. Third, we study the role of context selection in object detection. We investigate the reasons why context in object detection has limited utility by isolating and evaluating the predictive power of different context cues under ideal conditions in which context is provided by an oracle. Based on this study, we propose a region-based context re-scoring method with dynamic context selection to remove noise and emphasize informative context. Fourth, we study efficient relevant motion event detection for large-scale home surveillance videos. To detect motion events of objects-of-interest in large-scale home surveillance videos, traditional methods based on object detection and tracking are extremely slow and require expensive GPU devices. To dramatically speed up relevant motion event detection and improve its performance, we propose a novel network for relevant motion event detection, ReMotENet, which is a unified, end-to-end, data-driven method using spatial-temporal attention-based 3D ConvNets to jointly model the appearance and motion of objects-of-interest in a video. In the last part, we address the recognition of agent-in-place actions, which are associated with the agents who perform them and the places where they occur, in the context of outdoor home surveillance. We introduce a representation of the geometry and topology of scene layouts so that a network can generalize from the layouts observed in the training set to unseen layouts in the test set. This Layout-Induced Video Representation (LIVR) abstracts away low-level appearance variance and encodes geometric and topological relationships of places in a specific scene layout.
    LIVR partitions the semantic features of a video clip into different places to force the network to learn place-based feature descriptions; to predict the confidence of each action, LIVR aggregates features from the place associated with an action and its adjacent places on the scene layout. We introduce the Agent-in-Place Action dataset to show that our method allows neural network models to generalize significantly better to unseen scenes.
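
    The importance-propagation idea behind NISP can be sketched in a few lines for fully connected layers: the importance of the final responses is propagated backwards through the absolute weight matrices so that earlier neurons feeding important outputs receive high scores. The toy layer sizes and the uniform final-response scores below are illustrative assumptions.

```python
# Sketch of neuron importance score propagation for a stack of fully connected layers.
# Toy sizes and uniform final-response scores; not the paper's experimental setup.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 3-layer network: weights[i] maps layer i activations to layer i+1.
weights = [rng.standard_normal((64, 128)),
           rng.standard_normal((32, 64)),
           rng.standard_normal((10, 32))]

# Importance of the final responses (e.g. from feature ranking); uniform here for illustration.
score = np.ones(10)

scores_per_layer = [score]
for W in reversed(weights):
    score = np.abs(W).T @ score          # propagate importance to the previous layer
    scores_per_layer.append(score)
scores_per_layer.reverse()               # scores_per_layer[i] now scores neurons of layer i

# Prune, e.g., the 50% least important neurons of the first hidden layer.
hidden_scores = scores_per_layer[1]
keep = np.argsort(hidden_scores)[len(hidden_scores) // 2:]
print(f"keeping {keep.size} of {hidden_scores.size} neurons in layer 1")
```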

    Towards Making JavaScript Applications Secure and Private

    JavaScript is a popular programming language widely used on both the browser and the server sides. Researchers have extensively studied different aspects of the security and privacy of JavaScript, for instance, vulnerability detection in server-side Node.js applications and browser-side fingerprinting techniques. Despite these research efforts, multiple challenges of JavaScript remain unsolved: on the server side, existing vulnerability detection approaches do not generalize to a wide range of popular vulnerabilities and their detection rate is not satisfactory; on the client side, service providers can only fingerprint users within a single browser but not across different browsers. In this dissertation, we propose a flow-, branch- and context-sensitive static analysis approach that generates a novel graph structure, named Object Dependence Graph (ODG), to address the server-side vulnerability detection challenges, and a cross-browser fingerprinting method that utilizes multiple novel OS- and hardware-level features to solve the client-side fingerprinting challenge. On the server side, ODG represents JavaScript objects as nodes and their relations with the Abstract Syntax Tree (AST) as edges, and allows users to detect multiple types of vulnerabilities both during and after ODG generation via graph queries. Our evaluation shows that for server-side vulnerability detection, our approach outperforms all state-of-the-art JavaScript vulnerability detection tools in terms of false-positive rate and false-negative rate. We apply our tool to detect six types of vulnerabilities in an NPM package dataset; it correctly reports 241 zero-day vulnerable packages, 81 of which have been assigned CVE identifiers. On the client side, our approach utilizes multiple novel OS- and hardware-level features, such as those from graphics cards and CPUs, to achieve better accuracy and stability. The evaluation shows that our approach can identify 99.24% of the browsers and 84.64% of the devices, as opposed to 90.83% and 68.98%, respectively, for the state-of-the-art approaches.
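
    To make the cross-browser idea concrete, the sketch below combines hardware- and OS-level feature values (which tend to agree across browsers on the same device) into a single identifier; the specific features and the plain hash are illustrative assumptions, not the paper's fingerprinting pipeline.

```python
# Illustrative device fingerprint: combine hardware/OS-level features that stay stable
# across browsers (unlike browser-specific features) into a single identifier.
# Feature names and the plain hash are assumptions for illustration only.
import hashlib

def device_fingerprint(features):
    """features: dict of hardware/OS-level measurements, e.g. GPU renderer string,
    number of logical cores, screen resolution, audio-stack constants."""
    canonical = "|".join(f"{k}={features[k]}" for k in sorted(features))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

if __name__ == "__main__":
    chrome_view = {"gpu_renderer": "ANGLE (NVIDIA GeForce GTX 1080)", "cores": 8,
                   "screen": "2560x1440", "audio_sample_rate": 44100}
    firefox_view = dict(chrome_view)  # hardware-level values agree across browsers
    print(device_fingerprint(chrome_view) == device_fingerprint(firefox_view))  # True
```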

    Just-in-time Analytics Over Heterogeneous Data and Hardware

    Industry and academia are continuously becoming more data-driven and data-intensive, relying on the analysis of a wide variety of datasets to gain insights. At the same time, data variety increases continuously across multiple axes. First, data comes in multiple formats, such as the binary tabular data of a DBMS, raw textual files, and domain-specific formats. Second, different datasets follow different data models, such as the relational and the hierarchical ones. Data location also varies: some datasets reside in a central "data lake", whereas others lie in remote data sources. In addition, users execute widely different analysis tasks over all these data types. Finally, the process of gathering and integrating diverse datasets introduces several inconsistencies and redundancies in the data, such as duplicate entries for the same real-world concept. In summary, heterogeneity significantly affects the way data analysis is performed. In this thesis, we aim for data virtualization: abstracting data out of its original form and manipulating it regardless of the way it is stored or structured, without a performance penalty. To achieve data virtualization, we design and implement systems that i) mask heterogeneity through the use of heterogeneity-aware, high-level building blocks and ii) offer fast responses through on-demand adaptation techniques. Regarding the high-level building blocks, we use a query language and algebra to handle multiple collection types, such as relations and hierarchies, express transformations between these collection types, and express complex data cleaning tasks over them. In addition, we design a location-aware compiler and optimizer that masks away the complexity of accessing multiple remote data sources. Regarding on-demand adaptation, we present a design that produces a new system per query. The design uses customization mechanisms that trigger runtime code generation to mimic the system most appropriate to answer a query fast: query operators are created based on the query workload and the underlying data models; the data access layer is created based on the underlying data formats. In addition, we exploit emerging hardware by customizing the system implementation based on the available heterogeneous processors (CPUs and GPGPUs), pairing each workload with its ideal processor type. The end result is a just-in-time database system that is specific to the query, data, workload, and hardware instance. This thesis redesigns the data management stack to natively cater for data heterogeneity and exploit hardware heterogeneity. Instead of centralizing all relevant datasets, converting them to a single representation, and loading them into a monolithic, static, suboptimal system, our design embraces heterogeneity. Overall, our design decouples the type of analysis performed from the original data layout; users can perform their analysis across data stores, data models, and data formats, and at the same time experience the performance of a custom system built on demand to serve their specific use case.
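
    The per-query specialization idea can be illustrated with a toy example: choose a data-access routine based on the underlying format and build a scan specialized to the query at run time. In the sketch below, Python closures stand in for the compiled code generation the thesis describes, and all names are illustrative.

```python
# Toy illustration of on-demand specialization: the access path is chosen per data format
# and a scan specialized to the query predicate is built at run time. Closures stand in
# for the runtime code generation described in the thesis; names are illustrative.
import csv, io, json

def make_access_layer(fmt):
    if fmt == "csv":
        return lambda raw: list(csv.DictReader(io.StringIO(raw)))
    if fmt == "json":
        return lambda raw: json.loads(raw)
    raise ValueError(f"unsupported format: {fmt}")

def compile_scan(fmt, predicate, projection):
    """Build a query-specific scan over a heterogeneous source, 'just in time'."""
    read = make_access_layer(fmt)
    def scan(raw):
        return [{col: row[col] for col in projection} for row in read(raw) if predicate(row)]
    return scan

if __name__ == "__main__":
    csv_data = "name,age\nada,36\nalan,41\n"
    json_data = '[{"name": "grace", "age": 85}]'
    scan_csv = compile_scan("csv", lambda r: int(r["age"]) > 40, ["name"])
    scan_json = compile_scan("json", lambda r: r["age"] > 40, ["name"])
    print(scan_csv(csv_data) + scan_json(json_data))  # [{'name': 'alan'}, {'name': 'grace'}]
```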