
    Computational framework for remotely operable laboratories

    Get PDF
    Decision-makers envision a significant role for remotely operable laboratories in advancing research in structural engineering, as seen from the tremendous support for the Network for Earthquake Engineering Simulation (NEES) framework. This paper proposes a computational framework that uses LabVIEW and web technologies to enable observation and control of laboratory experiments via the internet. The framework, which is illustrated for a shaketable experiment, consists of two key hardware components: (1) a local network comprising an NI-PXI with hardware for measurement acquisition and shaketable control, along with a Windows-based PC that acquires images from a high-speed camera for video, and (2) a proxy server that controls access to the local network. The software for shaketable control and data/video acquisition is developed in the form of virtual instruments (VIs) using the LabVIEW development system. The proxy server employs a user-based authentication protocol to secure the experiment. The user runs Perl-based CGI scripts on the proxy server to schedule a future timeslot in which to control or observe the experiment and to gain access during that timeslot. The proxy server implements a single-controller, multiple-observer architecture, so that many users can simultaneously observe and download measurements while a single controller decides the waveform input to the shaketable. Provision is also made for users to simultaneously view real-time video of the experiment. Two different methods of communicating the video are studied; it is concluded that JPEG compression of the images acquired from the camera offers the best performance over a wide range of networks. The framework is accessible to any remote user whose computer has a high-speed internet connection and the LabVIEW run-time engine, which is available at no cost. Care is taken to ensure that the LabVIEW applications and the Perl scripts have few dependencies, for ease of portability to other experiments.
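
    The single-controller, multiple-observer policy described above can be sketched independently of LabVIEW. The following Python fragment is a minimal, hypothetical illustration (not the paper's Perl/CGI or LabVIEW code) of a proxy that grants shaketable control to at most one user at a time while any number of users may observe.

        # Minimal sketch of a single-controller, multiple-observer access policy.
        # Hypothetical illustration only; the paper implements this with Perl CGI
        # scripts and LabVIEW VIs on a proxy server.
        import threading


        class ExperimentProxy:
            def __init__(self):
                self._lock = threading.Lock()
                self._controller = None          # at most one controlling user
                self._observers = set()          # any number of observers

            def request_control(self, user):
                """Grant control if no one else holds it; otherwise refuse."""
                with self._lock:
                    if self._controller is None:
                        self._controller = user
                        return True
                    return self._controller == user

            def release_control(self, user):
                with self._lock:
                    if self._controller == user:
                        self._controller = None

            def observe(self, user):
                """Observers may always join; they receive measurements and video."""
                with self._lock:
                    self._observers.add(user)
                    return True

            def send_waveform(self, user, waveform):
                """Only the current controller may drive the shaketable input."""
                with self._lock:
                    if self._controller != user:
                        raise PermissionError("user is not the active controller")
                # forward `waveform` to the shaketable controller here
                return waveform


        # Example: one controller, two observers
        proxy = ExperimentProxy()
        assert proxy.request_control("alice")
        assert not proxy.request_control("bob")   # refused: alice holds control
        proxy.observe("bob")
        proxy.observe("carol")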

    A software approach to defeating side channels in last-level caches

    Full text link
    We present a software approach to mitigate access-driven side-channel attacks that leverage last-level caches (LLCs) shared across cores to leak information between security domains (e.g., tenants in a cloud). Our approach dynamically manages physical memory pages shared between security domains to disable sharing of LLC lines, thus preventing "Flush-Reload" side channels via LLCs. It also manages cacheability of memory pages to thwart cross-tenant "Prime-Probe" attacks in LLCs. We have implemented our approach as a memory management subsystem called CacheBar within the Linux kernel to intervene on such side channels across container boundaries, as containers are a common method for enforcing tenant isolation in Platform-as-a-Service (PaaS) clouds. Through formal verification, principled analysis, and empirical evaluation, we show that CacheBar achieves strong security with small performance overheads for PaaS workloads.
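
    The page-management idea can be illustrated with a toy model (not the CacheBar kernel code): when a physical page would be shared across security domains, give each domain its own private copy, so no LLC lines are shared and the Flush-Reload channel disappears. The names and data structures below are hypothetical.

        # Toy model of "unshare pages across security domains" -- a conceptual
        # illustration of the CacheBar idea, not the Linux kernel implementation.
        import itertools

        _next_frame = itertools.count(1000)   # fake physical frame allocator


        class PageManager:
            def __init__(self):
                self.mappings = {}   # (domain, virtual_page) -> physical frame
                self.owners = {}     # physical frame -> set of domains mapping it

            def map_page(self, domain, vpage, frame):
                """Map a page; if another domain already maps the same frame,
                transparently give this domain a private copy instead."""
                if self.owners.get(frame) and domain not in self.owners[frame]:
                    frame = self._copy(frame)          # break cross-domain sharing
                self.mappings[(domain, vpage)] = frame
                self.owners.setdefault(frame, set()).add(domain)
                return frame

            def _copy(self, frame):
                new_frame = next(_next_frame)
                # real code would copy the page contents here
                return new_frame


        mgr = PageManager()
        f1 = mgr.map_page("tenant_a", 0x40, frame=7)   # tenant A maps frame 7
        f2 = mgr.map_page("tenant_b", 0x40, frame=7)   # tenant B gets a private copy
        assert f1 != f2   # no physical frame (and thus no LLC line) is shared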

    SurveyMan: Programming and Automatically Debugging Surveys

    Full text link
    Surveys can be viewed as programs, complete with logic, control flow, and bugs. Word choice or the order in which questions are asked can unintentionally bias responses. Vague, confusing, or intrusive questions can cause respondents to abandon a survey. Surveys can also have runtime errors: inattentive respondents can taint results. This effect is especially problematic when deploying surveys in uncontrolled settings, such as on the web or via crowdsourcing platforms. Because the results of surveys drive business decisions and inform scientific conclusions, it is crucial to make sure they are correct. We present SurveyMan, a system for designing, deploying, and automatically debugging surveys. Survey authors write their surveys in a lightweight domain-specific language aimed at end users. SurveyMan statically analyzes the survey to provide feedback to survey authors before deployment. It then compiles the survey into JavaScript and deploys it either to the web or a crowdsourcing platform. SurveyMan's dynamic analyses automatically find survey bugs and control for the quality of responses. We evaluate SurveyMan's algorithms analytically and empirically, demonstrating its effectiveness with case studies of social science surveys conducted via Amazon's Mechanical Turk. Comment: submitted version; accepted to OOPSLA 2014.
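
    SurveyMan's DSL and statistical analyses are not reproduced here; purely as an illustration, the Python sketch below shows two of the ideas the abstract mentions: per-respondent randomization of question order (to control order bias) and a crude runtime check for inattentive respondents. All names and data are hypothetical.

        # Illustrative sketch only: randomizing question order per respondent and
        # flagging straight-lining respondents. Not SurveyMan's DSL or analyses.
        import random

        QUESTIONS = [
            {"id": "q1", "text": "How often do you exercise?"},
            {"id": "q2", "text": "How would you rate your health?"},
            {"id": "q3", "text": "How many hours do you sleep?"},
        ]


        def deal_survey(respondent_id):
            """Return the questions in a respondent-specific random order."""
            rng = random.Random(respondent_id)   # reproducible per respondent
            order = QUESTIONS[:]
            rng.shuffle(order)
            return order


        def looks_inattentive(answers):
            """Crude runtime check: a respondent who always picks the same option
            position is suspicious (SurveyMan uses stronger statistics)."""
            positions = [a["position"] for a in answers]
            return len(positions) >= 3 and len(set(positions)) == 1


        # Example usage
        for rid in ("r1", "r2"):
            print(rid, [q["id"] for q in deal_survey(rid)])
        print("straight-liner flagged:",
              looks_inattentive([{"position": 0}, {"position": 0}, {"position": 0}]))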

    QuickSNP: an automated web server for selection of tagSNPs

    Get PDF
    Although large-scale genetic association studies involving hundreds to thousands of SNPs have become feasible, the associated cost is substantial. Even with the increased efficiency introduced by the use of tagSNPs, researchers are often seeking ways to maximize resource utilization given a set of SNP-based gene-mapping goals. We have developed a web server named QuickSNP in order to provide cost-effective selection of SNPs, and to fill in some of the gaps in existing SNP selection tools. One useful feature of QuickSNP is the option to select only gene-centric SNPs from a chromosomal region in an automated fashion. Other useful features include automated selection of coding non-synonymous SNPs, SNP filtering based on inter-SNP distances and information regarding the availability of genotyping assays for SNPs and whether they are present on whole genome chips. The program produces user-friendly summary tables and results, and a link to a UCSC Genome Browser track illustrating the position of the selected tagSNPs in relation to genes and other genomic features. We hope the unique combination of features of this server will be useful for researchers aiming to select markers for their genotyping studies. The server is freely available and can be accessed at the URL http://bioinformoodics.jhmi.edu/quickSNP.pl
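
    QuickSNP itself is a Perl-based web server; as a rough illustration of the kind of filtering it automates, the Python sketch below keeps only SNPs that fall inside gene boundaries and then enforces a minimum inter-SNP distance with a greedy pass. The data, thresholds, and function names are hypothetical and are not QuickSNP's actual selection algorithm.

        # Rough illustration of gene-centric filtering plus an inter-SNP distance
        # filter; hypothetical data, not QuickSNP's actual selection algorithm.

        # SNPs as (rsid, position); genes as (name, start, end) on one chromosome
        snps = [("rs1", 1200), ("rs2", 1350), ("rs3", 5100), ("rs4", 5180), ("rs5", 9000)]
        genes = [("GENE_A", 1000, 2000), ("GENE_B", 5000, 6000)]


        def gene_centric(snps, genes):
            """Keep only SNPs that fall within any gene interval."""
            return [(rsid, pos) for rsid, pos in snps
                    if any(start <= pos <= end for _, start, end in genes)]


        def enforce_spacing(snps, min_distance=100):
            """Greedy pass: drop SNPs closer than min_distance to the last kept SNP."""
            kept = []
            for rsid, pos in sorted(snps, key=lambda s: s[1]):
                if not kept or pos - kept[-1][1] >= min_distance:
                    kept.append((rsid, pos))
            return kept


        selected = enforce_spacing(gene_centric(snps, genes), min_distance=100)
        print(selected)   # [('rs1', 1200), ('rs2', 1350), ('rs3', 5100)]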

    Website Development Agreements: A Guide to Planning & Drafting

    Get PDF

    A Conceptual Semi-Humanoid Wireless Robotic Lecturer for Distance Learning (DL)

    Get PDF
    Information and Communications Technology is causing a worldwide revolution in virtually all fields of human endeavor. The education sector is not left out, as the delivery of course content is no longer limited to traditional teacher-student classroom interaction but also takes place via electronic media. This paper presents a novel approach to e-Learning that leverages advancements in Machine-to-Machine (M2M) communications, Internet-of-Things (IoT), and robotics technologies to design and construct a semi-humanoid class-teaching robot, built around a plastic mannequin, that helps teachers, lecturers, and other educational personnel communicate effectively with students irrespective of their location and distance. The system is implemented through hardware (mannequin) and software designs. The authors developed a plastic mannequin with embedded electronic systems that works as a telepresence lecturer, eliminating the barriers of time and distance between a remote professional educator and the students. The device was tested and compared with existing remote teaching technologies, such as teleconferencing and telepresence with tablet screens, and was found to be more reliable, cheaper, and easier to use than the existing approaches. The paper therefore concludes that the semi-humanoid robotic lecturer is a disruptive innovation in the world of Distance Education Learning (DEL).

    Development of a framework for internet based education system

    Get PDF
    The development of this framework for Internet-based education demonstrates the use of Oracle tools for delivering education on the World Wide Web. It also shows that an efficient, dynamic education scenario requires a database-backed system with a proper retrieval mechanism. We designed the system on an Oracle backend with Designer/2000; Developer/2000 supported the design and development of the system, and Oracle Web Server 2.1 handled retrieval of the web pages. The entire design of the system and the reasoning behind it are documented. Appendix A provides a glossary of currently used terminology in the field of Internet-based systems, and Appendix B provides screen prints of the Designer/2000 design phases, the Developer/2000 graphical user interface screens, and the actual code involved in the design and development of the system. Because the entire system is based on the Oracle Repository, it is highly dynamic with respect to its data and can present course material to the student rapidly.
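
    The framework itself is built on Oracle tooling (Designer/2000, Developer/2000, Oracle Web Server 2.1); purely as a conceptual illustration of database-backed course delivery, the Python/SQLite sketch below generates a course page from rows stored in a database. Table and column names are hypothetical.

        # Conceptual illustration of database-backed course delivery; the thesis
        # itself uses Oracle (Designer/2000, Developer/2000, Oracle Web Server 2.1).
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE lessons (course TEXT, seq INTEGER, title TEXT, body TEXT)")
        conn.executemany(
            "INSERT INTO lessons VALUES (?, ?, ?, ?)",
            [("CS101", 1, "Introduction", "Welcome to the course."),
             ("CS101", 2, "HTML Basics", "A page is built from elements.")],
        )


        def render_course(course):
            """Build an HTML page for a course from the lesson rows in the database."""
            rows = conn.execute(
                "SELECT title, body FROM lessons WHERE course = ? ORDER BY seq", (course,)
            ).fetchall()
            items = "\n".join(f"<h2>{title}</h2><p>{body}</p>" for title, body in rows)
            return f"<html><body><h1>{course}</h1>\n{items}\n</body></html>"


        print(render_course("CS101"))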

    Towards Intelligent Data Acquisition Systems with Embedded Deep Learning on MPSoC

    Get PDF
    Large-scale scientific experiments rely on dedicated high-performance data-acquisition systems to sample, read out, analyse, and store experimental data. However, with rapid developments in detector technology across various fields, the number of channels and the data rates keep increasing. For trigger and control tasks, data acquisition systems need to satisfy real-time constraints, achieve low latency, and provide the possibility of integrating intelligent data processing. In recent years, machine learning approaches have been used successfully in many applications. This dissertation studies how machine learning techniques can be integrated directly into the data acquisition of large-scale experiments. A universal data acquisition platform for multiple data channels has been developed, and different machine learning implementation methods and applications have been realized on this system.

    On the hardware side, recent FPGAs provide not only high-performance parallel logic but also more and more additional features, such as ultra-fast transceivers and embedded ARM processors. TSMC's 16 nm FinFET Plus (16FF+) 3D transistor technology enables Xilinx to increase the performance-per-watt ratio of its Zynq UltraScale+ FPGA devices by 2 to 5 times compared with the previous generation. The selected main device, a ZU11EG, provides 32 GTH transceivers, each operating at up to 16.3 Gb/s, and 16 GTY transceivers, each operating at up to 32.75 Gb/s. These transceivers are routed to a x16-lane Gen 3/4 PCIe interface, a 12-lane full-duplex FireFly electrical/optical data link, and a VITA 57.4 FMC+ connector. The new Zynq UltraScale+ device provides at least three major advantages for advanced data acquisition systems: first, the 16 nm FinFET+ programmable logic (PL) provides high-speed readout capabilities through its high-speed transceivers; second, the built-in quad-core 64-bit ARM Cortex-A53 processor can host an embedded Linux system, so web servers, slow control, and monitoring applications can be realized in an embedded processor environment; third, the Zynq Multiprocessor System-on-Chip technology connects the programmable logic and the microprocessors. In this thesis, the benefits of such architectures for the integration of machine learning algorithms in data acquisition systems and control applications are demonstrated.

    On the algorithm side, there have been many achievements in machine learning over the last decades. Existing machine learning algorithms fall into several categories depending on how the learning phase is organized: supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. The most commonly used in scientific applications are supervised learning and reinforcement learning. Supervised learning learns from labelled inputs and outputs and produces a function that maps new inputs to the appropriate outputs; a common application is classification. These categories differ widely in their underlying mathematics, training, inference, and implementation. One natural hardware solution for accelerating such algorithms is Application-Specific Integrated Circuit (ASIC) Artificial Intelligence (AI) chips; a typical example is the Google Tensor Processing Unit (TPU), which covers training and inference for both supervised and reinforcement learning. A major issue is that such chips provide high compute power but not high data-transfer bandwidth.
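
    As a toy illustration of the supervised-learning workflow sketched above (offline floating-point training, lightweight fixed-point inference of the kind that suits programmable logic), the Python fragment below trains a linear classifier and then runs inference with 8-bit quantized weights. It is a generic sketch, not the thesis's FPGA design or toolchain.

        # Toy illustration of the supervised-learning split: train in floating
        # point offline, then infer with int8 quantized weights as fixed-point
        # logic would. Not the thesis's FPGA implementation.
        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic two-class data with a known separating direction
        X = rng.normal(size=(200, 4))
        y = (X @ np.array([1.0, -2.0, 0.5, 0.0]) > 0).astype(float)

        # --- Offline training: logistic regression by gradient descent ---
        w = np.zeros(4)
        for _ in range(500):
            p = 1.0 / (1.0 + np.exp(-(X @ w)))
            w -= 0.1 * X.T @ (p - y) / len(y)

        # --- Quantize the trained weights to int8 for fixed-point inference ---
        w_scale = np.abs(w).max() / 127.0
        w_q = np.round(w / w_scale).astype(np.int8)


        def infer_fixed_point(x, in_scale=0.05):
            """Integer multiply-accumulate followed by a rescaled threshold."""
            x_q = np.round(x / in_scale).astype(np.int32)
            acc = int(x_q @ w_q.astype(np.int32))    # integer MAC, FPGA-friendly
            return acc * in_scale * w_scale > 0      # rescale, then classify


        preds = np.array([infer_fixed_point(x) for x in X])
        print("agreement with the float model:", np.mean(preds == (X @ w > 0)))
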
    By comparison, the Xilinx UltraScale+ FPGA also provides raw compute power and efficiency for all data types, down to a single bit. From a deployment point of view, the training part of supervised learning is typically performed by a CPU/GPU/TPU on a fixed dataset. For reinforcement learning, the training phase is more complex: the algorithm needs to interact periodically with the controlled system and execute a Markov Decision Process (MDP). There is no static training dataset; the data are obtained in real time, and the time slot between steps depends on the dynamics of the controlled system. Inference is also bound to this sampling time, because the algorithm must interact with the environment and decide on the appropriate action in response, which imposes stricter timing demands. This thesis provides solutions for both the training and the inference of reinforcement learning. First the requirements are analysed and the algorithm is derived from scratch; training is then implemented on the processing system (PS) of the Zynq device, while inference on the FPGA side follows a solution similar to that used for supervised learning. The results for Policy Gradient show considerable improvement over a CPU/GPU-based machine learning framework, and Deep Deterministic Policy Gradient also improves in both training latency and stability. This implementation method provides a low-latency approach to the on-field training process of reinforcement learning.
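
    To make the interact-then-update training cycle concrete, the following is a minimal policy-gradient (REINFORCE) loop on a toy two-action problem. It is a generic textbook sketch with a made-up environment, far simpler than the PS-side training scheme the thesis implements, and only illustrates how data are gathered in real time as the policy is updated.

        # Minimal REINFORCE (policy gradient) loop on a toy problem, to make the
        # interact-then-update training cycle concrete. Generic sketch, not the
        # thesis's Zynq PS-side implementation.
        import numpy as np

        rng = np.random.default_rng(1)
        theta = np.zeros(2)          # logits for two actions


        def policy():
            """Softmax policy over the two actions."""
            e = np.exp(theta - theta.max())
            return e / e.sum()


        def step(action):
            """Toy environment: action 1 is rewarded more often than action 0."""
            return 1.0 if rng.random() < (0.8 if action == 1 else 0.2) else 0.0


        baseline = 0.0
        for episode in range(2000):
            p = policy()
            action = rng.choice(2, p=p)
            reward = step(action)                      # interact with the "plant"
            baseline += 0.01 * (reward - baseline)     # running-average baseline
            grad = -p
            grad[action] += 1.0                        # d log pi(a) / d theta for softmax
            theta += 0.1 * (reward - baseline) * grad  # REINFORCE update

        print("learned action probabilities:", policy())   # should favour action 1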