2,753 research outputs found

    Developing a comprehensive framework for multimodal feature extraction

    Full text link
    Feature extraction is a critical component of many applied data science workflows. In recent years, rapid advances in artificial intelligence and machine learning have led to an explosion of feature extraction tools and services that allow data scientists to cheaply and effectively annotate their data along a vast array of dimensions, ranging from detecting faces in images to analyzing the sentiment expressed in coherent text. Unfortunately, the proliferation of powerful feature extraction services has been mirrored by a corresponding expansion in the number of distinct interfaces to those services. In a world where nearly every new service has its own API, documentation, and/or client library, data scientists who need to combine diverse features obtained from multiple sources are often forced to write and maintain ever more elaborate feature extraction pipelines. To address this challenge, we introduce a new open-source framework for comprehensive multimodal feature extraction. Pliers is an open-source Python package that supports standardized annotation of diverse data types (video, images, audio, and text), and is expressly designed with both ease-of-use and extensibility in mind. Users can apply a wide range of pre-existing feature extraction tools to their data in just a few lines of Python code, and can also easily add their own custom extractors by writing modular classes. A graph-based API enables rapid development of complex feature extraction pipelines that output results in a single, standardized format. We describe the package's architecture, detail its major advantages over previous feature extraction toolboxes, and use a sample application to a large functional MRI dataset to illustrate how pliers can significantly reduce the time and effort required to construct sophisticated feature extraction workflows while increasing code clarity and maintainability.
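    Pliers is a real Python package; the following is only a minimal sketch of the kind of graph-based pipeline the abstract describes, and the specific extractor classes (BrightnessExtractor, STFTAudioExtractor) and the exact Graph/run signatures are assumptions about the installed pliers release.

```python
# Minimal sketch of a graph-based pliers pipeline; the extractor names and the
# Graph/run signatures are assumptions about the installed pliers release.
from pliers.graph import Graph
from pliers.extractors import BrightnessExtractor, STFTAudioExtractor

# Each node is a feature extractor; pliers converts the input video into the
# stimulus types each extractor expects (image frames, audio track), provided
# the relevant optional dependencies are installed.
g = Graph(nodes=[BrightnessExtractor(), STFTAudioExtractor()])

# Results from all extractors come back merged into one standardized DataFrame.
results = g.run(['movie_clip.mp4'])
print(results.head())
```

    Custom extractors can be plugged into the same graph by writing a modular class, as the abstract notes.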

    LFO – A Graph-based Modular Approach to the Processing of Data Streams

    Get PDF
    This paper introduces lfo (for Low Frequency Operators), a graph-based JavaScript (ES2015) API for the online and offline processing (i.e. analysis and transformation) of data streams such as audio and motion sensor data. The library is open-source and entirely based on web standards. The project aims at creating an ecosystem of platform-independent stream operator modules, such as filters and extractors, as well as platform-specific source and sink modules, such as audio I/O, motion sensor inputs, and file access. The modular approach of the API allows the library to be used in virtually any JavaScript environment. A first set of operators as well as basic source and sink modules for web browsers and Node.js are included in the distribution of the library. The paper introduces the underlying concepts, describes the implementation of the API, and reports on benchmarks of a set of operators. It concludes with the presentation of a set of example applications.
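    The lfo API itself is JavaScript; purely to illustrate the source-operator-sink chaining the abstract describes, here is a short, hypothetical Python sketch of the same modular pattern (all class and method names are invented for illustration, not taken from lfo).

```python
# Hypothetical sketch of a source -> operator -> sink processing chain in the
# spirit of lfo; lfo itself is a JavaScript (ES2015) library.
class MovingAverage:
    """Stream operator: smooths each incoming frame value."""
    def __init__(self, order=5):
        self.order = order
        self.history = []
        self.next_module = None  # downstream module in the graph

    def connect(self, module):
        self.next_module = module
        return module

    def process(self, frame):
        self.history = (self.history + [frame])[-self.order:]
        out = sum(self.history) / len(self.history)
        if self.next_module:
            self.next_module.process(out)


class Logger:
    """Sink module: terminates the chain by printing each frame."""
    def process(self, frame):
        print(frame)


# Build a small processing graph and push a stream of sensor-like values.
head = MovingAverage(order=3)
head.connect(Logger())
for value in [0.0, 1.0, 2.0, 4.0, 8.0]:
    head.process(value)
```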

    A multimodal framework for interactive sonification and sound-based communication

    Get PDF

    Rapid Prototyping for Virtual Environments

    Get PDF
    Developing Virtual Environment (VE) applications is challenging because application developers must have expertise in the target VE technologies in addition to expertise in the problem domain. New VE technologies impose a significant learning curve on even the most experienced VE developer. The proposed solution relies on synthesis to automate the migration of a VE application to a new, unfamiliar VE platform/technology. To solve the problem, the Common Scene Definition Framework (CSDF) is developed, which serves as a superset/model representation of the target virtual world. Input modules populate the framework with the capabilities of the virtual world imported from the VRML 2.0 and X3D formats. The synthesis capability built into the framework synthesizes the virtual world into a subset of the VRML 2.0, VRML 1.0, X3D, Java3D, JavaFX, JavaME, and OpenGL technologies, which may reside on different platforms. Interfaces are designed to keep the framework extensible to different and new VE formats/technologies. The framework demonstrated the ability to quickly synthesize a working prototype of the input virtual environment in different VE formats.
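    Purely to make the import-into-a-common-model and synthesize-out workflow described above concrete, here is a hypothetical Python sketch of format-specific importers and synthesizers hanging off a shared scene representation; CSDF is not a public Python API, and every name below is invented for illustration.

```python
# Hypothetical sketch of the CSDF-style flow: import a world into a common
# model, then synthesize it out to another VE format. All names are invented.
from typing import Callable, Dict, List

class SceneModel:
    """Superset representation of the imported virtual world."""
    def __init__(self) -> None:
        self.nodes: List[dict] = []

importers: Dict[str, Callable[[str], SceneModel]] = {}
synthesizers: Dict[str, Callable[[SceneModel], str]] = {}

def register_importer(fmt: str):
    def wrap(fn):
        importers[fmt] = fn
        return fn
    return wrap

def register_synthesizer(fmt: str):
    def wrap(fn):
        synthesizers[fmt] = fn
        return fn
    return wrap

@register_importer("x3d")
def import_x3d(path: str) -> SceneModel:
    model = SceneModel()
    # ... parse the X3D file and populate the common model ...
    return model

@register_synthesizer("vrml2")
def synthesize_vrml2(model: SceneModel) -> str:
    # ... walk the common model and emit VRML 2.0 source ...
    return "#VRML V2.0 utf8\n"

# Migrating a world then reduces to: import into the common model, synthesize out.
world = importers["x3d"]("scene.x3d")
print(synthesizers["vrml2"](world))
```

    Keeping the importers and synthesizers behind a registry like this is one way to leave the framework open to new VE formats, which is the extensibility goal the abstract states.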

    Scientific and Theoretical Prerequisites for Improvement of Modern Pedagogical Technologies

    Get PDF
    It is established that pedagogy performs the same functions as any other scientific discipline: description, explanation, and prediction of the phenomena in the area of reality it studies. In the social and humanitarian sphere, however, it has its own characteristics. Pedagogical science cannot confine itself to an objective reflection of what it studies; it is also required to influence pedagogical reality and to transform and improve the pedagogical process. It therefore combines two functions: the scientific-theoretical and the constructive-technical. The scientific-theoretical function is a reflection of pedagogical reality as it is; the constructive-technical one is a regulative function that reflects pedagogical reality as it should be. The pedagogical process is closely connected with the application of teaching technologies, which presupposes the organizational arrangement of all dependencies of the learning process, the alignment of its stages, the identification of the conditions for their implementation, and the correlation of the methods, forms, measures, and means of training used in class with the capabilities of the teacher and students.

    Operational concepts for selected Sortie missions: Executive summary

    Get PDF
    An executive summary is presented of a Spacelab concept study conducted from August 1973 to June 1974. Background information and a summary of study conclusions are given. Specific data are reported for the quick-reaction carrier concept, software and mission integration, configuration management, documentation, equipment pool, and integration alternatives. A forecast of the impact of a second launch site, mission feasibility, and space availability for the Spacelab are also discussed.

    Integrated Control of Microfluidics – Application in Fluid Routing, Sensor Synchronization, and Real-Time Feedback Control

    Get PDF
    Microfluidic applications range from combinatorial chemical synthesis to high-throughput screening, with platforms integrating analog perfusion components, digitally controlled microvalves, and a range of sensors that demand a variety of communication protocols. A comprehensive solution for microfluidic control has to support an arbitrary combination of microfluidic components and to meet the demand for easy-to-operate systems arising from the growing community of unspecialized microfluidics users. It should also be an easily modified and extendable platform that offers adequate computational resources, preferably without the need for a local computer terminal, for increased mobility. Here we describe several implementations of microfluidic control technologies and propose a microprocessor-based unit that unifies them. Integrated control can streamline the generation of the complex perfusion sequences required by sensor-integrated microfluidic platforms that demand iterative operating procedures such as calibration, sensing, data acquisition, and decision making. It also enables the implementation of intricate optimization protocols, which often require significant computational resources. System integration is an imperative developmental milestone for the field of microfluidics, both in terms of the scalability of increasingly complex platforms that still lack standardization and in terms of the incorporation and adoption of emerging technologies in biomedical research. Here we describe a modular integration and synchronization of a complex multicomponent microfluidic platform.
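    As a rough illustration of the iterative calibration, sensing, and decision sequences such an integrated controller has to coordinate, here is a hypothetical Python sketch; the valve and sensor functions are placeholders invented for illustration, not an API from the paper.

```python
# Hypothetical sketch of a calibrate -> sense -> decide perfusion sequence for
# an integrated microfluidic controller. All functions are placeholders.
import random
import time

def set_valve(channel: int, is_open: bool) -> None:
    # Placeholder for a digital-output call to a microvalve driver.
    print(f"valve {channel} -> {'open' if is_open else 'closed'}")

def read_sensor() -> float:
    # Placeholder for an ADC or serial read from an integrated sensor.
    return random.uniform(0.0, 1.0)

def run_assay(threshold: float = 0.5, cycles: int = 3) -> None:
    # Calibration: flush the sensing chamber with buffer and record a baseline.
    set_valve(0, True)
    baseline = read_sensor()
    set_valve(0, False)

    for _ in range(cycles):
        # Sensing and data acquisition: route the sample over the sensor.
        set_valve(1, True)
        signal = read_sensor() - baseline
        set_valve(1, False)

        # Decision making: adapt the next perfusion step to the measurement.
        set_valve(2, signal > threshold)
        time.sleep(0.1)

run_assay()
```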

    Reference Avionics Architecture for Lunar Surface Systems

    Get PDF
    Developing and delivering infrastructure capable of supporting long-term manned operations on the lunar surface has been a primary objective of the Constellation Program in the Exploration Systems Mission Directorate. Several concepts have been developed for the development and deployment of lunar exploration vehicles and assets that provide critical functionality such as transportation, habitation, and communication, to name a few. Together, these systems perform complex safety-critical functions and depend largely on avionics for the control and behavior of system functions. These functions are implemented using interchangeable, modular avionics designed for lunar transit and lunar surface deployment. Systems are optimized for reuse and commonality of form and interface and can be configured via software or component integration for special-purpose applications. There are two core concepts in the reference avionics architecture described in this report. The first is the use of distributed, smart systems to manage complexity, simplify integration, and facilitate commonality. The second is the employment of extensive commonality between elements and subsystems. These two concepts are applied in developing reference designs for many lunar surface exploration vehicles and elements, and they recur as architectural patterns throughout the conceptual architectural framework. This report describes the use of these architectural patterns in a reference avionics architecture for lunar surface system elements.
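    To make the "common module, configured per element" pattern concrete, here is a small hypothetical Python sketch; none of the class names, element names, or configuration fields below come from the report.

```python
# Hypothetical sketch of commonality across surface elements: one reusable
# avionics module type, specialized per element purely through configuration.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class AvionicsModule:
    """A common, interchangeable avionics unit."""
    role: str                        # e.g. "power", "comm", "thermal"
    software_config: Dict[str, str]  # behavior selected via software, not hardware

    def status(self) -> str:
        return f"{self.role} module configured with {self.software_config}"

def build_element(name: str, roles: Dict[str, Dict[str, str]]) -> List[AvionicsModule]:
    # Each surface element (habitat, rover, ...) reuses the same module type
    # and differs only in how its modules are configured and composed.
    print(f"building {name}")
    return [AvionicsModule(role, cfg) for role, cfg in roles.items()]

habitat = build_element("habitat", {"power": {"bus": "120V"}, "comm": {"band": "Ka"}})
rover = build_element("rover", {"power": {"bus": "28V"}, "comm": {"band": "S"}})
for module in habitat + rover:
    print(module.status())
```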