
    The walking robot project

    A walking robot was designed, analyzed, and tested as an intelligent, mobile, and terrain-adaptive system. The robot's design was an application of existing technologies. The design of the six legs modified and combined well-understood mechanisms and was optimized for performance, flexibility, and simplicity. The body design incorporated two tripods for walking stability and ease of turning. The electrical hardware design used modularity and distributed processing to drive the motors. The software design used feedback to coordinate the system and simple keystrokes to give commands. The walking machine can be easily adapted to hostile environments such as high-radiation zones and alien terrain. The primary goal of the leg design was to create a leg capable of supporting the robot's body and electrical hardware while walking or performing desired tasks, namely those required for planetary exploration. The leg designers' intent was to study the maximum amount of flexibility and maneuverability achievable by the simplest and lightest leg design. The main constraints for the leg design were leg kinematics, ease of assembly, degrees of freedom, number of motors, overall size, and weight.
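
    As a concrete illustration of the alternating-tripod gait and keystroke-driven command style described above, the following minimal Python sketch coordinates two tripods of legs; the leg grouping, command keys, and step phases are assumptions for illustration, not the project's actual control software.

```python
# Illustrative sketch of alternating-tripod gait coordination driven by
# single-keystroke commands. Leg numbering, command keys, and the step phases
# are assumptions for illustration, not the project's actual software.

TRIPOD_A = (0, 3, 4)   # assumed grouping: three legs forming one stable tripod
TRIPOD_B = (1, 2, 5)   # the remaining three legs

def step(stance_legs, swing_legs, direction):
    """One half gait cycle: stance legs push the body while swing legs reposition."""
    print(f"stance {stance_legs} push {direction}; swing {swing_legs} lift and advance")

def walk(command):
    """Map a keystroke to a full gait cycle, alternating the two tripods."""
    direction = {"w": "forward", "s": "backward", "a": "left", "d": "right"}.get(command)
    if direction is None:
        return
    step(TRIPOD_A, TRIPOD_B, direction)
    step(TRIPOD_B, TRIPOD_A, direction)

if __name__ == "__main__":
    for key in "wwad":
        walk(key)
```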

    CASCH: a tool for computer-aided scheduling

    A software tool called Computer-Aided Scheduling (CASCH), which provides a complete parallel programming environment for parallel processing on distributed-memory multiprocessors, is presented. A compiler automatically parallelizes sequential applications by converting them into parallel code. CASCH then optimizes the parallel code executed on a target machine through proper scheduling and mapping.
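
    The abstract does not spell out CASCH's scheduling algorithms; the sketch below only illustrates the general idea of scheduling a task graph onto processors with a simple list-scheduling heuristic (ignoring communication costs), using a made-up task graph.

```python
# Toy list-scheduling of a task DAG onto processors (illustrative only;
# not CASCH's actual algorithms). Each task is placed on the processor
# that lets it start earliest, respecting its predecessors' finish times.

tasks = {          # task -> (compute cost, predecessors); made-up example graph,
    "a": (2, []),  # listed in topological order
    "b": (3, ["a"]),
    "c": (4, ["a"]),
    "d": (2, ["b", "c"]),
}
num_procs = 2

finish = {}                      # task -> finish time
proc_free = [0.0] * num_procs    # earliest free time per processor
placement = {}                   # task -> processor index

for task, (cost, preds) in tasks.items():
    ready = max((finish[p] for p in preds), default=0.0)
    proc = min(range(num_procs), key=lambda p: max(proc_free[p], ready))
    start = max(proc_free[proc], ready)
    finish[task] = start + cost
    proc_free[proc] = finish[task]
    placement[task] = proc

print(placement, finish)
```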

    Approaches to Interpreter Composition

    In this paper, we compose six different Python and Prolog VMs into four pairwise compositions: one using C interpreters; one running on the JVM; one using meta-tracing interpreters; and one using a C interpreter and a meta-tracing interpreter. We show that programs which frequently cross the language barrier execute faster in a meta-tracing composition, and that meta-tracing imposes a significantly lower overhead on composed programs relative to mono-language programs.

    Analysis and evaluation of SafeDroid v2.0, a framework for detecting malicious Android applications

    Android smartphones have become a vital component of the daily routine of millions of people, running a plethora of applications available in the official and alternative marketplaces. Although there are many security mechanisms to scan and filter malicious applications, malware is still able to reach the devices of many end-users. In this paper, we introduce the SafeDroid v2.0 framework, a flexible, robust, and versatile open-source solution for statically analysing Android applications, based on machine learning techniques. The main goal of our work, besides the automated production of prediction and classification models with maximum accuracy and minimum error, is to offer an out-of-the-box framework that Android security researchers can employ to experiment efficiently and find effective solutions: the SafeDroid v2.0 framework makes it possible to test many different combinations of machine learning classifiers, with a high degree of freedom and flexibility in the choice of features to consider and of experimental settings such as dataset balance and dataset selection. The framework also provides a server for generating experiment reports and an Android application for verifying the produced models in real-life scenarios. An extensive campaign of experiments is also presented to show how competitive solutions can be found efficiently: the results confirm that SafeDroid v2.0 achieves very good performance, even with highly unbalanced input datasets, and always with very limited overhead.
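
    As a hedged illustration of the "many classifier combinations" workflow such a framework supports, the sketch below cross-validates a few scikit-learn classifiers on a synthetic, imbalanced dataset; SafeDroid's real feature vectors come from static analysis of APKs and are not reproduced here.

```python
# Illustrative comparison of several classifiers on binary (benign/malicious)
# feature vectors. The data is synthetic and imbalanced; SafeDroid's real
# features come from static analysis of Android applications.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Imbalanced synthetic dataset: roughly 10% "malicious" samples.
X, y = make_classification(n_samples=2000, n_features=50,
                           weights=[0.9, 0.1], random_state=0)

classifiers = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(),
    "random_forest": RandomForestClassifier(n_estimators=100),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f}")
```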

    High-reliability network-controlled device-to-device connections

    Fifth generation cellular networks aim to provide new types of services. Prominent amongst these are industrial automation and vehicle-to-vehicle communications. Such new use cases demand lower latencies and higher reliability, along with greater flexibility, than current and past generations of cellular technologies allow. Enabling these new service types requires the introduction of device-to-device (D2D) communications. This work investigated network-controlled D2D schemes wherein cellular base stations retain control over spectrum usage. D2D nodes assemble into clusters, and each cluster then organises itself as it sees fit within the constraints imposed by the cellular network. A review of proposed D2D control schemes was conducted to identify pertinent interference issues. Measurements were then devised to empirically collect quantitative data on the impact of this interference. The measurements were conducted using a software-defined radio (SDR) platform; an SDR-based system was selected to enable a low-cost and highly flexible iterative approach to development while still providing the accuracy of real-world measurement. D2D functionality was added to the chosen SDR system, with the essential parts of Long Term Evolution Release 8 implemented. Two series of measurements were performed. The first aimed to determine the adjacent-channel interference impact of a cellular user located near a D2D receiver. The second series collected data on the co-channel interference of spectrum re-use between a D2D link and a moving cellular transmitter. Based on these measurements, it was determined that D2D communication within a cellular system is feasible. Furthermore, the required frequency of channel state information reporting as a function of node velocity was determined.
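
    The measured reporting frequencies themselves are not reproduced here; as a rough illustration of why channel state information must be reported more often at higher node velocities, the sketch below applies the standard Doppler-spread/coherence-time rule of thumb under an assumed 2 GHz carrier.

```python
# Rough illustration of why CSI reporting must become more frequent as node
# velocity grows: the channel coherence time shrinks with the maximum Doppler
# shift. Textbook rule of thumb (Clarke model), not the thesis's measured result.
import math

C = 3.0e8           # speed of light, m/s
CARRIER_HZ = 2.0e9  # assumed carrier frequency

def coherence_time(velocity_mps, carrier_hz=CARRIER_HZ):
    doppler = velocity_mps * carrier_hz / C       # maximum Doppler shift, Hz
    return 9.0 / (16.0 * math.pi * doppler)       # approximate coherence time, s

for v_kmh in (3, 30, 120):
    t_c = coherence_time(v_kmh / 3.6)
    print(f"{v_kmh:4d} km/h: coherence time ~{t_c * 1e3:.1f} ms "
          f"-> report CSI at least every ~{t_c * 1e3 / 2:.1f} ms")
```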

    Description and Optimization of Abstract Machines in a Dialect of Prolog

    In order to achieve competitive performance, abstract machines for Prolog and related languages end up being large and intricate, and incorporate sophisticated optimizations, both at the design and at the implementation levels. At the same time, efficiency considerations make it necessary to use low-level languages in their implementation. This makes them laborious to code, optimize, and, especially, maintain and extend. Writing the abstract machine (and ancillary code) in a higher-level language can help tame this inherent complexity. We show how the semantics of most basic components of an efficient virtual machine for Prolog can be described using (a variant of) Prolog. These descriptions are then compiled to C and assembled to build a complete bytecode emulator. Thanks to the high level of the language used and its closeness to Prolog, the abstract machine description can be manipulated using standard Prolog compilation and optimization techniques with relative ease. We also show how, by applying program transformations selectively, we obtain abstract machine implementations whose performance can match and even exceed that of state-of-the-art, highly-tuned, hand-crafted emulators.
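
    The paper's emulator is generated from Prolog descriptions and compiled to C; the sketch below only conveys the general shape of a bytecode dispatch loop, in Python and with a made-up three-instruction set, not the Prolog abstract machine described in the paper.

```python
# Minimal bytecode dispatch loop illustrating the general shape of an
# abstract-machine emulator. The three-instruction set is made up; a real
# Prolog abstract machine is far richer.

def run(code):
    stack, pc = [], 0
    while pc < len(code):
        op, arg = code[pc]
        if op == "push":          # push a constant onto the operand stack
            stack.append(arg)
        elif op == "add":         # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "halt":        # stop and return the top of the stack
            return stack.pop()
        pc += 1

print(run([("push", 2), ("push", 3), ("add", None), ("halt", None)]))  # -> 5
```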

    Group implicit concurrent algorithms in nonlinear structural dynamics

    During the 1970s and 1980s, considerable effort was devoted to developing efficient and reliable time-stepping procedures for transient structural analysis. Mathematically, the equations governing this type of problem are generally stiff, i.e., they exhibit a wide spectrum in the linear range. The algorithms best suited to this type of application are those which accurately integrate the low-frequency content of the response without necessitating the resolution of the high-frequency modes. This means that the algorithms must be unconditionally stable, which in turn rules out explicit integration. The most exciting possibility in the algorithm development area in recent years has been the advent of parallel computers with multiprocessing capabilities. This work is therefore mainly concerned with the development of parallel algorithms in the area of structural dynamics. A primary objective is to devise unconditionally stable and accurate time-stepping procedures which lend themselves to an efficient implementation on concurrent machines. Some features of the new computer architectures are summarized. A brief survey of current efforts in the area is presented. A new class of concurrent procedures, Group Implicit (GI) algorithms, is introduced and analyzed. Numerical simulation shows that GI algorithms hold considerable promise for application in coarse-grain as well as medium-grain parallel computers.
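
    To make concrete why stiffness rules out explicit integration and motivates unconditionally stable schemes, the following textbook-style sketch compares forward (explicit) and backward (implicit) Euler on a stiff scalar test equation; it is not the Group Implicit algorithm itself.

```python
# Why stiffness rules out explicit integration: forward Euler blows up on
# y' = -1000 y unless the step is tiny, while backward Euler (implicit and
# unconditionally stable) stays bounded for any step size.
lam, dt, steps = -1000.0, 0.01, 5   # dt far above forward Euler's 2/|lam| limit

y_exp = y_imp = 1.0
for _ in range(steps):
    y_exp = y_exp + dt * lam * y_exp   # forward Euler:  y_{n+1} = (1 + dt*lam) * y_n
    y_imp = y_imp / (1.0 - dt * lam)   # backward Euler: y_{n+1} = y_n / (1 - dt*lam)

print(f"forward Euler:  {y_exp:.3e}")  # grows in magnitude (unstable)
print(f"backward Euler: {y_imp:.3e}")  # decays toward zero (stable)
```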

    Bi-temporal 3D active appearance models with applications to unsupervised ejection fraction estimation

    Rapid and unsupervised quantitative analysis is of utmost importance to ensure clinical acceptance of many examinations using cardiac magnetic resonance imaging (MRI). We present a framework that aims at fulfilling these goals for the application of left ventricular ejection fraction estimation in four-dimensional MRI. The theoretical foundation of our work is the generative two-dimensional Active Appearance Models by Cootes et al., here extended to bi-temporal, three-dimensional models. Further issues treated include correction of respiratory-induced slice displacements, systole detection, and a texture model pruning strategy. Cross-validation carried out on clinical-quality scans of twelve volunteers indicates that ejection fraction and cardiac blood pool volumes can be estimated automatically and rapidly with accuracy on par with typical inter-observer variability.
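
    The ejection fraction itself follows directly from the end-diastolic and end-systolic blood-pool volumes such a model estimates; a minimal computation with made-up example volumes is shown below.

```python
# Ejection fraction from end-diastolic (EDV) and end-systolic (ESV)
# left-ventricular blood-pool volumes: EF = (EDV - ESV) / EDV.
# The volumes below are made-up example values, not results from the paper.

def ejection_fraction(edv_ml, esv_ml):
    return (edv_ml - esv_ml) / edv_ml

print(f"EF = {ejection_fraction(120.0, 50.0) * 100:.1f}%")  # -> EF = 58.3%
```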

    Autotuning the Intel HLS Compiler using the Opentuner Framework

    High level synthesis (HLS) tools can be used to improve design flow and decrease verification times for field programmable gate array (FPGA) and application specific integrated circuit (ASIC) design. The Intel HLS Compiler is a high level synthesis tool that takes untimed C/C++ as input and generates production-quality register transfer level (RTL) code that is optimized for Intel FPGAs. The translation does, however, require multiple iterations and manual optimizations to obtain synthesized results comparable to those of a solution written in a hardware description language. The synthesis results can vary greatly based on coding style and optimization techniques, and fully optimizing the translation typically requires in-depth knowledge of FPGAs, which limits the audience of the tool. The extra abstraction that the C/C++ source code presents can also make it difficult to meet more specific design requirements, such as designs that must meet specific resource usage or performance metrics. To improve the quality of results generated by the Intel HLS Compiler without a manual iterative process that requires in-depth knowledge of FPGAs, this research proposes a method of automating some of the optimization techniques that improve the synthesized design through an autotuning process. The proposed approach utilizes the PyCParser library to parse C source files and the OpenTuner Framework to autotune the synthesis, providing a method that generates results better matched to the designer's requirements through lower FPGA resource usage or increased design performance. Such functionality is not currently available in Intel's commercial tools. The proposed approach was tested with the CHStone Benchmarking Suite of C programs as well as a standard digital signal processing finite impulse response filter. The results show that the commercial HLS tool can be autotuned automatically through placeholder injection, using a source-parsing tool for C code and the OpenTuner Framework to tune the results. For designs that are small and contain structures conducive to autotuning, the results indicate resource usage reductions and/or performance increases of up to 40% compared to the default Intel HLS Compiler results. The method developed in this research also allows additional design targets to be specified through the autotuner for consideration in the synthesized design, which can yield results that are better matched to a design's requirements.
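
    A minimal sketch of how an OpenTuner MeasurementInterface can drive such a compile-and-measure loop, loosely following OpenTuner's documented tutorial pattern; the tuned parameter, placeholder token, file names, i++ invocation, and objective are illustrative assumptions rather than the actual tool built in this work.

```python
#!/usr/bin/env python
# Sketch of an OpenTuner MeasurementInterface driving a compile-and-measure
# loop, loosely following OpenTuner's tutorial pattern. The tuned parameter,
# placeholder token, template file, i++ command line, and objective are all
# illustrative assumptions -- this is not the actual tool built in this work.
import opentuner
from opentuner import ConfigurationManipulator
from opentuner import IntegerParameter
from opentuner import MeasurementInterface
from opentuner import Result


class HlsPlaceholderTuner(MeasurementInterface):

    def manipulator(self):
        m = ConfigurationManipulator()
        # One assumed tunable: a loop-unroll factor injected at a placeholder.
        m.add_parameter(IntegerParameter('UNROLL_FACTOR', 1, 32))
        return m

    def run(self, desired_result, input, limit):
        cfg = desired_result.configuration.data
        # Substitute the chosen value where the source parser left a placeholder.
        with open('fir_template.c') as f:
            src = f.read().replace('/*UNROLL_FACTOR*/', str(cfg['UNROLL_FACTOR']))
        with open('fir_tuned.c', 'w') as f:
            f.write(src)
        # Assumed Intel HLS Compiler invocation; real flags depend on the setup.
        compile_result = self.call_program('i++ fir_tuned.c -o fir_prj')
        if compile_result['returncode'] != 0:
            return Result(time=float('inf'))
        # Stand-in objective: in practice a cost (resource usage, fmax, latency)
        # would be parsed from the generated reports instead of compile time.
        return Result(time=compile_result['time'])


if __name__ == '__main__':
    HlsPlaceholderTuner.main(opentuner.default_argparser().parse_args())
```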