Real-Time Video Processing Using Native Programming on Android Platform
As the smartphone industry grows rapidly, smartphone applications need to be faster and to run in real time. For this purpose, most smartphone platforms run programs in a native language, or with a compiler that produces native code for the hardware. On the Android platform, however, which is based on the Java language, most software algorithms run on Java, which consumes more time. In this paper, the performance of native programming and of high-level programming using Java is compared with respect to video processing speed. Eight image processing methods are applied to each frame of video captured from a smartphone running the Android platform. The efficiencies of the two applications, written in the different programming languages, are compared by observing their frame processing rates. The experimental results show that, of the eight image processing methods, six execute faster with native programming than with Java programming, with a total average ratio of 0.41. An application of native programming to real-time object detection is also presented in this paper. The result shows that with native programming on the Android platform, even a complicated object detection algorithm can run in real time.
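The frame-rate comparison described above can be sketched in outline. The following is a hypothetical Python stand-in (the paper's actual experiments use Java and Android native code, and the grayscale method here is only one illustrative per-frame operation):

```python
import time

def to_grayscale(frame):
    # Integer luma approximation (BT.601 weights scaled by 256),
    # applied per pixel, standing in for one per-frame method.
    return [[(77 * r + 150 * g + 29 * b) >> 8 for (r, g, b) in row]
            for row in frame]

def frames_per_second(method, frame, n_frames=50):
    # Apply the method repeatedly and report the achieved frame rate,
    # analogous to the paper's frame-processing-rate measurement.
    start = time.perf_counter()
    for _ in range(n_frames):
        method(frame)
    return n_frames / (time.perf_counter() - start)

# A tiny synthetic 32x32 RGB frame stands in for camera input.
frame = [[(x % 256, y % 256, (x + y) % 256) for x in range(32)]
         for y in range(32)]

gray = to_grayscale(frame)
fps = frames_per_second(to_grayscale, frame)
```

Timing two implementations of the same method this way (e.g. interpreted vs. native) and taking the ratio of their frame rates mirrors the comparison reported in the abstract.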
Conjunctive Visual and Auditory Development via Real-Time Dialogue
Human developmental learning is capable of
dealing with the dynamic visual world, speech-based
dialogue, and their complex real-time association.
However, the architecture that realizes
this for robotic cognitive development has
not been reported in the past. This paper takes
up this challenge. The proposed architecture does
not require a strict coupling between visual and
auditory stimuli. Two major operations contribute
to the “abstraction” process: multiscale temporal
priming and high-dimensional numeric abstraction
through internal responses with reduced variance.
As a basic principle of developmental learning,
the programmer does not know the nature
of the world events at the time of programming
and, thus, hand-designed task-specific representation
is not possible. We successfully tested the
architecture on the SAIL robot in an unprecedentedly challenging mode of multimodal interaction: real-time speech dialogue serves as the teaching source for simultaneous, incremental visual learning and language acquisition while the robot views a dynamic world containing a rotating object to which the dialogue refers.
Marker-Based Detection for Capturing Images (Deteksi Berbasis Marker Untuk Mengambil Gambar)
A marker is used as a reference point recorded by the camera in real time. Marker-based detection uses image processing to determine where a (virtual) object, which may be a 3D animation, is placed. The method uses ARToolKit to recognize the markers, where one marker is used to identify one object; OpenGL to draw and display the objects, which are rendered automatically and in real time; and OpenCV to take pictures, much as images are captured with a digital camera. This final project, titled marker-based detection for capturing images, is intended to introduce augmented reality programming in C++ as a supporting interface for capturing images. Keywords: Camera, Augmented Reality, C++, capture, OpenGL, OpenCV
Design Space Exploration of Object Caches with Cross-Profiling
To avoid data cache thrashing between heap-allocated data and other data areas, a distinct object cache has been proposed for embedded real-time Java processors. This object cache uses high associativity in order to statically track different object pointers for worst-case execution-time analysis. However, before implementing such an object cache, an empirical analysis of different organization forms is needed. We use a cross-profiling technique based on aspect-oriented programming in order to evaluate different object cache organizations with standard Java benchmarks. From the evaluation we conclude that field access exhibits some temporal locality, but almost no spatial locality. Therefore, filling long cache lines on a miss just introduces a high miss penalty without increasing the hit rate enough to make up for it. For an object cache, it is more efficient to fill individual words within the cache line on a miss.
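The trade-off between single-word and full-line fill can be illustrated with a toy cache simulation. This is a simplified sketch, not the paper's cross-profiling setup: the trace, object layout, and cache parameters below are invented for illustration.

```python
from collections import OrderedDict

def simulate(trace, words_per_fill, line_words=8, lines=32):
    # Fully associative cache of object-field words with LRU replacement.
    # On a miss, fetch `words_per_fill` consecutive words; count misses
    # and total words fetched (a proxy for the miss penalty).
    cache = OrderedDict()              # word address -> present
    capacity = lines * line_words
    misses = words_fetched = 0
    for addr in trace:
        if addr in cache:
            cache.move_to_end(addr)    # refresh LRU position
            continue
        misses += 1
        base = addr - addr % words_per_fill
        for w in range(base, base + words_per_fill):
            if len(cache) >= capacity:
                cache.popitem(last=False)   # evict LRU word
            cache[w] = True
        words_fetched += words_per_fill
    return misses, words_fetched

# Trace with temporal locality (the same fields are revisited) but
# little spatial locality: the two accessed fields of each object lie
# in different cache lines.
trace = [o * 64 + f for _ in range(20) for o in range(10) for f in (0, 9)]
word_fill = simulate(trace, words_per_fill=1)
line_fill = simulate(trace, words_per_fill=8)
```

With this trace, filling whole 8-word lines fetches eight times the data of single-word fill without reducing the miss count at all, matching the abstract's conclusion.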
Expert system decision support for low-cost launch vehicle operations
Progress in assessing the feasibility, benefits, and risks of AI expert systems applied to low-cost expendable launch vehicle systems is described. Part one identified potential application areas in vehicle operations and on-board functions, assessed measures of cost benefit, and identified key technologies to aid the implementation of decision support systems in this environment. Part two of the program began the development of prototypes to demonstrate real-time vehicle checkout with controller and diagnostic/analysis intelligent systems, and to gather true measures of cost savings vs. conventional software, verification and validation requirements, and maintainability improvement. The main objective of the expert advanced development projects was to provide a robust intelligent system for control/analysis that must be performed within a specified real-time window in order to meet the demands of the given application. The efforts to develop the two prototypes are described. Prime emphasis was on a controller expert system demonstrating real-time performance in a cryogenic propellant loading application, with safety validation of this system carried out experimentally using commercial off-the-shelf software tools and object-oriented programming techniques. This smart ground support equipment prototype is based in C with embedded expert system rules written in CLIPS. The relational database, Oracle, provides non-real-time data support. The second demonstration develops the vehicle/ground intelligent automation concept from phase one to show cooperation between multiple expert systems. This automated test conductor (ATC) prototype utilizes a knowledge-bus approach to intelligent information processing, using virtual sensors and blackboards to solve complex problems. It incorporates distributed processing of real-time data and object-oriented techniques for command, configuration control, and auto-code generation.
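The rule-based control described above relies on forward chaining, the inference style of CLIPS: rules fire whenever their conditions hold, asserting new facts until a fixed point is reached. The sketch below is a minimal Python illustration of that mechanism; the facts and rules are hypothetical, and the real system encodes its rules in CLIPS, not Python.

```python
def forward_chain(facts, rules):
    # Repeatedly fire any rule whose conditions are all satisfied,
    # adding its conclusion, until no new facts can be derived.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and set(conditions) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules in the spirit of a propellant-loading controller;
# not taken from the actual prototype.
rules = [
    (("tank_pressure_high", "vent_valve_closed"), "open_vent_valve"),
    (("open_vent_valve",), "log_safety_event"),
]
facts = forward_chain({"tank_pressure_high", "vent_valve_closed"}, rules)
```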
Experiments in cooperative manipulation: A system perspective
In addition to cooperative dynamic control, the system incorporates real-time vision feedback, a novel programming technique, and a graphical high-level user interface. By focusing on the vertical integration problem, not only are these subsystems examined, but also their interfaces and interactions. The control system implements a multi-level hierarchical structure; the techniques developed for operator input, strategic command, and cooperative dynamic control are presented. At the highest level, a mouse-based graphical user interface allows an operator to direct the activities of the system. Strategic command is provided by a table-driven finite state machine; this methodology provides a powerful yet flexible technique for managing the concurrent system interactions. The dynamic controller implements object impedance control, an extension of Neville Hogan's impedance control concept to cooperative arm manipulation of a single object. Experimental results are presented showing the system locating and identifying a moving object, catching it, and performing a simple cooperative assembly. Results from dynamic control experiments are also presented, showing the controller's excellent dynamic trajectory tracking performance while also permitting control of environmental contact force.
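A table-driven finite state machine of the kind used for strategic command keeps the transition logic in data rather than code, so new behaviors are added by editing the table. The states and events below are hypothetical, since the abstract does not list the system's actual states:

```python
# Hypothetical transition table for a strategic command layer:
# (current state, event) -> next state.
TABLE = {
    ("idle",       "object_seen"):  "tracking",
    ("tracking",   "within_reach"): "grasping",
    ("grasping",   "grasp_secure"): "assembling",
    ("assembling", "done"):         "idle",
}

def step(state, event):
    # Unlisted (state, event) pairs leave the state unchanged,
    # so spurious events are ignored safely.
    return TABLE.get((state, event), state)

state = "idle"
for event in ["object_seen", "within_reach", "grasp_secure", "done"]:
    state = step(state, event)
```

Because transitions are plain data, the table can be inspected, logged, or swapped at runtime, which is what makes the technique flexible for managing concurrent system interactions.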
OpenVX-Based Python Framework for Real-time Cross-Platform Acceleration of Embedded Computer Vision Applications
Embedded real-time vision applications are being rapidly deployed in a large realm of consumer electronics, ranging from automotive safety to surveillance systems. However, the relatively limited computational power of embedded platforms is considered a bottleneck for many vision applications, necessitating optimization. OpenVX is a standardized interface, released in late 2014, in an attempt to provide both system- and kernel-level optimization to vision applications. With OpenVX, vision processing is modeled with coarse-grained data flow graphs, which can be optimized and accelerated by the platform implementer. Current full implementations of OpenVX are given in the programming language C, which does not support advanced programming paradigms such as object-oriented, imperative, and functional programming, nor does it have runtime type checking. Here we present a Python-based full implementation of OpenVX, which eliminates much of the discrepancy between the object-oriented paradigm used by many modern applications and the native C implementations. Our open-source implementation can be used for rapid development of OpenVX applications on embedded platforms. Demonstrations include static and real-time image acquisition and processing using a Raspberry Pi and a GoPro camera. Code is given as supplementary information. The code project and a linked deployable virtual machine are located on GitHub: https://github.com/NBEL-lab/PythonOpenVX
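The coarse-grained data-flow-graph style can be sketched as follows. This miniature `Graph` class and its methods are hypothetical, written only to convey the idea; they are not the PythonOpenVX API:

```python
# A dataflow-graph sketch in the spirit of OpenVX: nodes are declared
# first, and the whole graph is executed as a unit, giving the platform
# implementer room to optimize before anything runs.
class Graph:
    def __init__(self):
        self.nodes = []            # (function, input key, output key)

    def add_node(self, fn, src, dst):
        self.nodes.append((fn, src, dst))

    def process(self, data):
        # Execute nodes in declaration order; a real implementation
        # could reorder, fuse, or offload them after verification.
        for fn, src, dst in self.nodes:
            data[dst] = fn(data[src])
        return data

g = Graph()
g.add_node(lambda img: [p // 2 for p in img], "input", "dimmed")
g.add_node(lambda img: [min(p, 100) for p in img], "dimmed", "output")
result = g.process({"input": [10, 150, 250]})
```

Because the pipeline is described declaratively before execution, the whole graph is visible to the optimizer at once, which is the key system-level advantage over calling kernels one by one.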
Object-oriented programming in C# with dynamic classification.
Object-oriented programming languages have gained popularity in recent years. However, some problems exist in object-oriented programming languages: they work well with static classification but do not support dynamic classification of objects. Static classification means an object always and only belongs to one class during its life span. In real-world applications, objects may belong to different classes, playing different roles at certain times during their lifetimes. Dynamic classification enables the classification of an object to change over time: objects can acquire and release class membership during runtime. In this thesis, many approaches to dynamic classification in different implementation languages are discussed. Based on a thorough review of these approaches, we give a new approach that combines the concepts of objects and roles and extends a class hierarchy with dynamic classification. The syntax of dynamic classification shows how to implement dynamic classification in an object-oriented programming language. Finally, we present a preprocessor by which C# code including the extended dynamic classification functions can be translated to standard C# code. Dept. of Computer Science. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2004 .W364. Source: Masters Abstracts International, Volume: 43-03, page: 0892. Adviser: Liwu Li. Thesis (M.Sc.)--University of Windsor (Canada), 2004
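The role-change idea behind dynamic classification can be illustrated in Python, which permits reassigning a live object's `__class__`. This is only an analogy for the mechanism: the thesis itself adds the capability to C# via a preprocessor, and the classes below are invented for illustration.

```python
class Person:
    def __init__(self, name):
        self.name = name

class Student(Person):
    def role(self):
        return f"{self.name} is a student"

class Employee(Person):
    def role(self):
        return f"{self.name} is an employee"

p = Student("Kim")
before = p.role()
# Dynamic reclassification: the same object releases membership in
# Student and acquires membership in Employee at runtime.
p.__class__ = Employee
after = p.role()
```

In a statically classified language, `p` would be a `Student` forever; here its identity and state survive while its class, and hence its behavior, changes.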
An Open Architecture Framework for Electronic Warfare Based Approach to HLA Federate Development
A variety of electronic warfare models are developed in the Electronic Warfare Research Center. An Open Architecture Framework for Electronic Warfare (OAFEw) has been developed for reusability of the various object models participating in electronic warfare simulation and for extensibility of the electronic warfare simulator. OAFEw is a kind of component-based software (SW) lifecycle management support framework, defined by six components and ten rules. The purpose of this study is to construct a Distributed Simulation Interface Model according to the rules of OAFEw and to create the Use Case Model of OAFEw Reference Conceptual Model version 1.0. This is embodied in the OAFEw-FOM (Federate Object Model) for High-Level Architecture (HLA) based distributed simulation. Accordingly, we design and implement an EW real-time distributed simulation that can work with models in C++ and the MATLAB API (Application Programming Interface). In addition, the OAFEw-FOM, an electronic component model, and a scenario for the electronic warfare domain were designed through simple scenarios for verification, and real-time distributed simulation between C++ and MATLAB was performed through the OAFEw-Distributed Simulation Interface.