
    Design and Validation of a Software Defined Radio Testbed for DVB-T Transmission

    This paper describes the design and validation of a Software Defined Radio (SDR) testbed that can be used for digital television transmission with the Digital Video Broadcasting - Terrestrial (DVB-T) standard. To generate a DVB-T-compliant signal with low computational complexity, we design an SDR architecture that is implemented in C/C++ and exploits multithreading and vectorized instructions. We then transmit the generated DVB-T signal in real time using a common PC equipped with a multicore central processing unit (CPU) and a commercially available SDR modem board. The proposed SDR architecture has been validated with fixed TV sets and portable receivers. Our results show that the proposed architecture for DVB-T transmission is a low-cost, low-complexity solution that, in the worst case, requires less than 22% CPU load and less than 170 MB of memory on a 3.0 GHz Core i7 processor. In addition, using the same SDR modem board, we design an offline software receiver that also performs time synchronization as well as carrier frequency offset estimation and compensation.
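    A minimal sketch of the kind of OFDM modulation pipeline the abstract describes, not the paper's C/C++ implementation: NumPy vectorization stands in for the paper's SIMD instructions, and a thread pool stands in for its custom multithreading. The 2K-mode constants are from the DVB-T standard; the carrier layout is simplified.

```python
# Hedged sketch (not the paper's code): a DVB-T-like OFDM modulator in
# 2K mode. NumPy's vectorized IFFT replaces hand-written SIMD; a thread
# pool replaces the paper's custom multithreading. Carrier placement and
# pilot handling are simplified for illustration.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

FFT_SIZE = 2048         # DVB-T 2K mode
ACTIVE_CARRIERS = 1705  # modulated carriers in 2K mode
GUARD = FFT_SIZE // 4   # 1/4 guard interval (one of the DVB-T options)

def ofdm_symbol(qam_cells: np.ndarray) -> np.ndarray:
    """Map one symbol's QAM cells onto carriers, IFFT, prepend the guard."""
    spectrum = np.zeros(FFT_SIZE, dtype=complex)
    spectrum[:ACTIVE_CARRIERS] = qam_cells        # simplified carrier layout
    time = np.fft.ifft(spectrum)                  # vectorized transform
    return np.concatenate([time[-GUARD:], time])  # cyclic-prefix guard

def modulate(symbols, workers=4):
    # Symbols are independent, so they parallelize across threads;
    # NumPy's FFT releases the GIL, so threads give real concurrency.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(ofdm_symbol, symbols))

rng = np.random.default_rng(0)
cells = [rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], ACTIVE_CARRIERS)
         for _ in range(8)]
out = modulate(cells)  # 8 guard-extended OFDM symbols
```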

    Analysis of Effects of Sensor Multithreading to Generate Local System Event Timelines

    In practice, organizations with their own information technology infrastructure normally log or otherwise monitor network information at boundary routers and similar log-capable network devices. However, not all organizations opt to log local system information, such as activity on an employee's organization-owned workstation. This research explores one approach to logging pertinent local system information using multithreading, free software designed for such logging purposes, and utilities that ship with the Microsoft Windows 7 operating system. The research focuses on file downloads on the local system and combines the aforementioned pieces of software into an event logging suite. The suite consists of four different sensors and uses multithreading in an attempt to capture as many pertinent events as possible, with the ultimate goal of capturing 100% of the events in chronological order of actual occurrence. Specifically, the suite increases the number of processes, and thus threads, that two of the four sensors (the Windows NETSTAT and tasklist utilities) execute, in order to determine the optimal settings for those sensors. To add realism to the experiments, this research implements three different system loads to simulate user activity while a scripted file-download scenario executes and the logging suite actively captures events. Ultimately, the accuracies of the NETSTAT and tasklist sensors across numerous tests show that while the sensors can capture above 85% of the expected number of events, neither is capable of consistently achieving this accuracy, even under a low system load.
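    The multithreaded-sensor idea can be sketched as follows, under the assumption (mine, not the paper's) that each sensor runs on its own thread, timestamps events into a shared queue, and a merge step orders the combined log chronologically. The polling loops below stand in for the Windows NETSTAT and tasklist utilities.

```python
# Hedged sketch of multithreaded sensors feeding one local timeline.
# Each sensor polls on its own thread and stamps records into a shared
# queue; after the threads quiesce, the records are merged into a single
# chronologically ordered event timeline.
import threading, queue, time

events = queue.Queue()

def sensor(name, samples, interval):
    """Poll `samples` times, pushing (timestamp, sensor, payload) records."""
    for i in range(samples):
        events.put((time.monotonic(), name, f"{name}-sample-{i}"))
        time.sleep(interval)

threads = [threading.Thread(target=sensor, args=("netstat", 5, 0.01)),
           threading.Thread(target=sensor, args=("tasklist", 5, 0.015))]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Drain the queue and sort into one chronological local-system timeline.
timeline = sorted(events.get() for _ in range(events.qsize()))
```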

    Incorporating haptic features into physics-based simulation

    In our graphics lab, we have developed many physics-based animations focusing on muscles, and we hope to create an interactive interface with tactile feedback so that users can not only see these physical features but also experience the forces along each muscle line. Users will be able to touch the surface of a muscle and feel its texture, and they will also be able to drag the muscle line and feel its tension and forces. This is especially important for co-contraction of two opposing muscles, since co-contractions do not produce any motion but instead change the stiffness of the joint. We therefore used the Geomagic Touch (tm) haptic device to generate the haptic feedback and the OpenHaptics toolkit for haptic programming.
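    The co-contraction point can be made concrete with a toy calculation (my simplification, not the lab's muscle model): model the two opposing muscles as antagonistic springs. Their torques cancel, so no motion results, but the joint stiffness is the sum of the two muscle stiffnesses, which is what the haptic device would render as increased resistance.

```python
# Illustrative toy model of co-contraction: equal activation of two
# opposing spring-like muscles. Net torque cancels (no motion), while
# joint stiffness is the SUM of both stiffnesses (a stiffer joint).
def joint_response(k_flexor, k_extensor, activation):
    net_torque = activation * (k_flexor - k_extensor)  # cancels when equal
    stiffness = activation * (k_flexor + k_extensor)   # always adds
    return net_torque, stiffness

torque, stiffness = joint_response(10.0, 10.0, activation=0.5)
# torque == 0.0 (no motion), stiffness == 10.0 (stiffer joint)
```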

    Unity GOAP Tool

    Unity is the most widely used real-time 3D (RT3D) engine in the world, with more than 4.5M registered developers. However, the Unity engine still does not have an officially supported tool for developing artificial intelligence (AI) with the goal-oriented action planning (GOAP) system. To address this gap, some developers have created their own tools for generating GOAP-based AI. Some of these tools were initially developed for a specific video game or AI-related project and were later published on a Git platform when that project was released; others are published directly on the Unity Asset Store. What all of these tools have in common is limited user interface (UI) interaction. In tools of this kind, a shortage of UI elements can be an important usability barrier: users need UI elements that let them modify and generate all kinds of planning data (conditions, actions, effects, ...) and UI elements that display a clear output of the resulting behavior. Otherwise, users will not have enough information about why the AI is not behaving as desired and will quickly stop using the tool. The objective of this project is to compete with the existing unofficial tools with a completely new one that lets users generate GOAP-based AI easily and quickly. This tool will provide a well-developed system of visual editors, with all the UI elements necessary to generate and modify everything related to AI planning, and friendly, clean code that is easy to work with. To achieve these objectives, we will use an agile methodology that lets us prototype and test all of the project's functionalities.
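    The GOAP core that such a tool builds its editors around can be sketched in a few lines (a generic illustration, not this project's C# code): world state as a set of facts, actions with preconditions and effects, and a breadth-first planner that searches for an action sequence reaching the goal. The action names below are invented examples.

```python
# Hedged sketch of a minimal GOAP planner. World state is a frozenset of
# facts; each action has preconditions (facts required) and effects
# (facts added); breadth-first search finds a shortest action sequence.
from collections import deque

ACTIONS = {  # name: (preconditions, effects) -- illustrative data only
    "chop_wood": (frozenset({"has_axe"}),  frozenset({"has_wood"})),
    "get_axe":   (frozenset(),             frozenset({"has_axe"})),
    "make_fire": (frozenset({"has_wood"}), frozenset({"has_fire"})),
}

def plan(state: frozenset, goal: frozenset):
    """Return a shortest list of action names taking `state` to `goal`."""
    frontier, seen = deque([(state, [])]), {state}
    while frontier:
        current, path = frontier.popleft()
        if goal <= current:          # all goal facts satisfied
            return path
        for name, (pre, eff) in ACTIONS.items():
            if pre <= current:       # action applicable in this state
                nxt = current | eff
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None                      # goal unreachable

steps = plan(frozenset(), frozenset({"has_fire"}))
# -> ['get_axe', 'chop_wood', 'make_fire']
```

    A visual GOAP editor essentially provides UI over the `ACTIONS` table (conditions, actions, effects) and displays the resulting `plan` output, which is exactly the feedback loop the abstract argues users need.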

    tsdownsample: high-performance time series downsampling for scalable visualization

    Interactive line chart visualizations greatly enhance the effective exploration of large time series. Although downsampling has emerged as a well-established approach to enable efficient interactive visualization of large datasets, it is not an inherent feature in most visualization tools. Furthermore, there is no library offering a convenient interface for high-performance implementations of prominent downsampling algorithms. To address these shortcomings, we present tsdownsample, an open-source Python package specifically designed for CPU-based, in-memory time series downsampling. Our library focuses on performance and convenient integration, offering optimized implementations of leading downsampling algorithms. We achieve this optimization by leveraging low-level SIMD instructions and multithreading capabilities in Rust. In particular, SIMD instructions were employed to optimize the argmin and argmax operations. This SIMD optimization, along with some algorithmic tricks, proved crucial in enhancing the performance of various downsampling algorithms. We evaluate the performance of tsdownsample and demonstrate its interoperability with an established visualization framework. Our performance benchmarks indicate that the algorithmic runtime of tsdownsample approximates the CPU's memory bandwidth. This work marks a significant advancement in bringing high-performance time series downsampling to the Python ecosystem, enabling scalable visualization. The open-source code can be found at https://github.com/predict-idlab/tsdownsample.
    Comment: Submitted to Software
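    The role of the argmin/argmax operations can be illustrated with a plain-NumPy sketch of MinMax downsampling (one of the prominent algorithms in this family; this is not tsdownsample's Rust implementation): split the series into bins and keep, per bin, the indices of the minimum and maximum samples, so the visual extremes survive downsampling.

```python
# Hedged sketch of MinMax downsampling. tsdownsample accelerates the
# per-bin argmin/argmax with SIMD in Rust; here plain NumPy shows the
# algorithm's structure.
import numpy as np

def minmax_downsample(y: np.ndarray, n_out: int) -> np.ndarray:
    """Return sorted indices of per-bin min/max samples (n_out even)."""
    bins = np.array_split(np.arange(len(y)), n_out // 2)
    keep = []
    for b in bins:
        keep.append(b[np.argmin(y[b])])  # the hot argmin operation
        keep.append(b[np.argmax(y[b])])  # the hot argmax operation
    return np.unique(keep)               # sorted, duplicates dropped

y = np.sin(np.linspace(0, 20, 10_000))
idx = minmax_downsample(y, 40)           # <= 40 points for plotting
```

    Because every bin contributes its extremes, the global minimum and maximum of the series are always among the kept samples, which is why the downsampled line chart preserves peaks that uniform subsampling would miss.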

    An Expressive Language and Efficient Execution System for Software Agents

    Software agents can be used to automate many of the tedious, time-consuming information processing tasks that humans currently have to complete manually. However, to do so, agent plans must be capable of representing the myriad of actions and control flows required to perform those tasks. In addition, since these tasks can require integrating multiple sources of remote information (typically a slow, I/O-bound process), it is desirable to make execution as efficient as possible. To address both of these needs, we present a flexible software agent plan language and a highly parallel execution system that enable the efficient execution of expressive agent plans. The plan language allows complex tasks to be more easily expressed by providing a variety of operators for flexibly processing the data as well as supporting subplans (for modularity) and recursion (for indeterminate looping). The executor is based on a streaming dataflow model of execution to maximize the amount of operator and data parallelism possible at runtime. We have implemented both the language and executor in a system called THESEUS. Our results from testing THESEUS show that streaming dataflow execution can yield significant speedups over both traditional serial (von Neumann) and non-streaming dataflow-style execution that existing software and robot agent execution systems currently support. In addition, we show how plans written in the language we present can represent certain types of subtasks that cannot be accomplished using the languages supported by network query engines. Finally, we demonstrate that the increased expressivity of our plan language does not hamper performance; specifically, we show how data can be integrated from multiple remote sources just as efficiently using our architecture as is possible with a state-of-the-art streaming-dataflow network query engine.
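    The streaming dataflow model can be sketched with generator-based operators (an illustration of the execution model, not THESEUS itself): each operator consumes tuples as they arrive instead of waiting for its whole input, so a slow, I/O-bound producer overlaps with downstream processing.

```python
# Hedged sketch of streaming dataflow: operators are generators chained
# into a pipeline, so each tuple flows through select/project as soon as
# the (slow) source emits it, rather than after the source finishes.
def fetch(urls):                          # stand-in for a slow remote source
    for u in urls:
        yield {"url": u, "size": len(u)}  # emit each tuple immediately

def select(rows, predicate):
    for row in rows:                      # fires per arriving tuple
        if predicate(row):
            yield row

def project(rows, key):
    for row in rows:
        yield row[key]

pipeline = project(select(fetch(["a.com", "bb.com", "ccc.com"]),
                          lambda r: r["size"] > 5),
                   "url")
result = list(pipeline)  # -> ['bb.com', 'ccc.com']
```

    A non-streaming executor would instead materialize the full output of `fetch` before `select` runs; with many slow remote sources, streaming lets those fetches and the downstream operators proceed in parallel.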

    Accelerating Data Loading in Deep Neural Network Training

    Data loading can dominate deep neural network training time on large-scale systems. We present a comprehensive study on accelerating data loading performance in large-scale distributed training. We first identify performance and scalability issues in current data loading implementations. We then propose optimizations to the data loader design that utilize CPU resources. We use an analytical model to characterize the impact of data loading on the overall training time and establish the performance trend as we scale up distributed training. Our model suggests that I/O rate limits the scalability of distributed training, which inspires us to design a locality-aware data loading method. By utilizing software caches, our method can drastically reduce the data loading communication volume in comparison with the original data loading implementation. Finally, we evaluate the proposed optimizations with various experiments. We achieved more than 30x speedup in data loading using 256 nodes with 1,024 learners.
    Comment: 11 pages, 12 figures, accepted for publication in IEEE International Conference on High Performance Computing, Data and Analytics (HiPC) 201
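    The software-cache idea can be sketched as follows (my illustration of the general technique, not the paper's implementation): each worker is assigned a fixed shard of the dataset for locality, and a per-worker cache means only the first epoch pays the I/O cost; later epochs are served from memory, cutting data loading traffic.

```python
# Hedged sketch of locality-aware, cached data loading. `read_fn` stands
# in for an expensive read (file system or network); `assigned_indices`
# is this worker's fixed shard. All names here are illustrative.
class CachingLoader:
    def __init__(self, read_fn, assigned_indices):
        self.read_fn = read_fn            # expensive I/O per sample
        self.assigned = assigned_indices  # locality: fixed shard per worker
        self.cache = {}                   # software cache of loaded samples
        self.io_reads = 0                 # counts actual I/O operations

    def get(self, idx):
        if idx not in self.cache:
            self.io_reads += 1            # only the first epoch pays I/O
            self.cache[idx] = self.read_fn(idx)
        return self.cache[idx]

    def epoch(self):
        return [self.get(i) for i in self.assigned]

loader = CachingLoader(read_fn=lambda i: i * i, assigned_indices=range(100))
for _ in range(5):                        # five epochs over the same shard
    data = loader.epoch()
# 500 sample accesses, but only 100 I/O reads (first epoch only)
```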