959 research outputs found

    Making intelligent systems team players: Case studies and design issues. Volume 1: Human-computer interaction design

    Initial results are reported from a multi-year, interdisciplinary effort to provide guidance and assistance for designers of intelligent systems and their user interfaces. The objective is to achieve more effective human-computer interaction (HCI) for systems with real-time fault management capabilities. Intelligent fault management systems within NASA were evaluated for insight into the design of systems with complex HCI. Preliminary results include: (1) a description of real-time fault management in aerospace domains; (2) recommendations and examples for improving intelligent system design and user interface design; (3) identification of issues requiring further research; and (4) recommendations for a development methodology integrating HCI design into intelligent system design.

    Can we Pretrain a SotA Legal Language Model on a Budget From Scratch?

    Even though many efficient transformers have been proposed, only a few such models are available for specialized domains. Additionally, since the pretraining process is extremely costly in general – but even more so as the sequence length increases – it is often only in reach of large research labs. One way of making pretraining cheaper is the Replaced Token Detection (RTD) task, which provides more signal during training than MLM, since the loss can be computed over all tokens. In this work, we train Longformer models with the efficient RTD task on long-context legal data to show that pretraining efficient LMs is possible using less than 12 GPU days. We evaluate the trained models on challenging summarization tasks requiring the model to summarize complex long texts. We find that both the small and base models outperform their baselines on the in-domain BillSum and out-of-domain PubMed tasks in their respective parameter ranges. We publish our models as a resource for researchers and practitioners.
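The signal argument behind RTD can be illustrated with a toy sketch (not the paper's code): RTD scores a binary "was this token replaced?" decision at every position, whereas MLM computes a loss only over the small masked subset. All array shapes and the masking ratio below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, seq_len = 2, 8

logits = rng.standard_normal((batch, seq_len))              # per-token discriminator scores
is_replaced = rng.integers(0, 2, (batch, seq_len))          # 1 = token swapped by the generator

def bce_with_logits(z, y):
    # Numerically stable binary cross-entropy, averaged over all entries
    return float(np.mean(np.maximum(z, 0) - z * y + np.log1p(np.exp(-np.abs(z)))))

# RTD: the loss is computed over all batch * seq_len positions
rtd_loss = bce_with_logits(logits, is_replaced)

# MLM analogue: a loss is available only at the masked positions
# (deterministic ~12% mask here for illustration; MLM typically masks ~15%)
mask = np.zeros((batch, seq_len), dtype=bool)
mask[:, 0] = True
mlm_positions = int(mask.sum())                             # far fewer positions contribute
mlm_loss = bce_with_logits(logits[mask], is_replaced[mask])
```

Per training step, RTD thus backpropagates a loss from every token rather than a small fraction, which is the source of the claimed pretraining efficiency.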

    A framework for creating natural language descriptions of video streams

    This contribution addresses the generation of natural language descriptions for important visual content present in video streams. The work starts with the implementation of conventional image processing techniques to extract high-level visual features such as humans and their activities. These features are converted into natural language descriptions using a template-based approach built on a context-free grammar, incorporating spatial and temporal information. The task is particularly challenging because the feature extraction processes are error-prone at various levels. In this paper we explore approaches to accommodating potentially missing information, thus creating a coherent description. Sample automatic annotations are created for video clips presenting humans' close-ups and actions, and the approach is analyzed qualitatively from various aspects. Additionally, a task-based scheme is introduced that provides a quantitative evaluation of the relevance of generated descriptions. Further, to show the framework's potential for extension, a scalability study is conducted using video categories that were not targeted during development.
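A minimal sketch of the template-filling idea with graceful handling of missing features might look as follows; the slot names and fallback wording are invented here, not taken from the paper:

```python
# Hypothetical template filler: high-level features (subject, action, place)
# slot into a fixed sentence pattern, and absent slots are simply dropped or
# replaced by a generic fallback, so the description stays coherent even when
# feature extraction fails at some level.

def describe(features: dict) -> str:
    subject = features.get("subject", "someone")   # fallback when detection fails
    action = features.get("action")
    place = features.get("place")
    parts = [subject]
    if action:
        parts.append(f"is {action}")
    if place:
        parts.append(f"in the {place}")
    return " ".join(parts) + "."

sentence = describe({"subject": "a man", "action": "waving"})
# A missing action or place simply shortens the sentence rather than breaking it.
```

A real system would derive these patterns from a context-free grammar and add spatial/temporal modifiers, but the drop-or-fallback behavior is the point being illustrated.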

    Developing a Creative Learning Format for Street Children

    As the number of street children grows globally, governments and UN agencies are looking at solutions to cater to their needs for shelter, health, and safety. Efforts to provide formal education to street children are not always successful. This project hypothesizes that equipping these children with essential knowledge in an informal but creative learning environment will allow them to live safe and productive lives. The prototype of the creative learning format is based on approaches and tools that have worked with children in other settings and has been modified for street children. The focus is on igniting the curiosity and creativity of street children to encourage learning. The key element of the model is the learning environment: a Mobile Learning system that brings learning to the location of the street children through a low-cost mobile unit with a trained operator. The setting of the prototype is Karachi, Pakistan, and the elements of the learning environment, the content, and the tools have been designed to suit this context.

    Facial Expression Recognition of Instructor Using Deep Features and Extreme Learning Machine

    Classroom communication involves the teacher's behavior and the students' responses. Extensive research has been done on the analysis of students' facial expressions, but the impact of the instructor's facial expressions is still an unexplored area of research. Facial expression recognition has the potential to predict the impact of a teacher's emotions in a classroom environment. Intelligent assessment of instructor behavior during lecture delivery not only might improve the learning environment but could also save the time and resources spent on manual assessment strategies. To address the issue of manual assessment, we propose an approach for recognizing an instructor's facial expressions within a classroom using a feedforward learning model. First, the face is detected from the acquired lecture videos and key frames are selected, discarding all redundant frames for effective high-level feature extraction. Then, deep features are extracted using multiple convolutional neural networks along with parameter tuning, and are fed to a classifier. For fast learning and good generalization, a regularized extreme learning machine (RELM) classifier is employed, which classifies five different expressions of the instructor within the classroom. Experiments are conducted on a newly created instructor facial expression dataset in classroom environments plus three benchmark facial datasets, i.e., the Cohn–Kanade dataset, the Japanese Female Facial Expression (JAFFE) dataset, and the Facial Expression Recognition 2013 (FER2013) dataset. Furthermore, the proposed method is compared with state-of-the-art techniques, traditional classifiers, and convolutional neural models. Experimental results indicate significant performance gains on metrics such as accuracy, F1-score, and recall.
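The "fast learning" claim for the RELM classifier comes from its closed-form training: input weights stay random and only the output weights are solved by ridge regression. A generic sketch of that procedure (feature sizes, the regularization constant, and function names are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def relm_fit(X, T, n_hidden=64, lam=1e-2):
    # Random, untrained input weights and biases (the ELM trick)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                      # hidden-layer activations
    # Closed-form ridge solution for the output weights:
    #   beta = (H^T H + lam * I)^{-1} H^T T
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def relm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy usage: 5 expression classes predicted from stand-in deep-feature vectors
X = rng.standard_normal((20, 16))               # placeholder for CNN features
y = rng.integers(0, 5, 20)
T = np.eye(5)[y]                                # one-hot targets
W, b, beta = relm_fit(X, T)
pred = relm_predict(X, W, b, beta)
```

Because training reduces to a single linear solve, there is no iterative backpropagation, which is what makes the classifier cheap compared with training the CNN end to end.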

    Rulemaking 2.0

    In response to President Obama's Memorandum on Transparency and Open Government, federal agencies are on the verge of a new generation in online rulemaking. However, unless we recognize the several barriers to making rulemaking a more broadly participatory process, and purposefully adapt Web 2.0 technologies and methods to lower those barriers, Rulemaking 2.0 is likely to disappoint agencies and open-government advocates alike. This article describes the design, operation, and initial results of Regulation Room, a pilot public rulemaking participation platform created by a cross-disciplinary group of Cornell researchers in collaboration with the Department of Transportation. Regulation Room uses selected live rulemakings to experiment with human and computer support for public comment. The ultimate project goal is to provide guidance on design, technological, and human intervention strategies, grounded in theory and tested in practice, for effective Rulemaking 2.0 systems. Early results give some cause for optimism about the open-government potential of Web 2.0-supported rulemaking. But significant challenges remain. Broader, better public participation is hampered by (1) ignorance of the rulemaking process; (2) unawareness that rulemakings of interest are going on; and (3) information overload from the length and complexity of rulemaking materials. No existing, commonly used Web services or applications are good analogies for what a Rulemaking 2.0 system must do to lower these barriers. To be effective, the system must not only provide the right mix of technology, content, and human assistance to support users in the unfamiliar environment of complex government policymaking; it must also spur them to revise their expectations about how they engage information on the Web and also, perhaps, about what is required for civic participation.
