A hybrid radiation detector for simultaneous spatial and temporal dosimetry
In this feasibility study an organic plastic scintillator is calibrated against ionisation chamber measurements and then embedded in a polymer gel dosimeter to obtain a quasi-4D experimental measurement of a radiation field. The hybrid dosimeter was irradiated with a linear accelerator; temporal measurements of the dose rate were acquired by the scintillator and spatial measurements by the gel dosimeter. The two detectors are radiologically equivalent, and we show that neither detector perturbs the intensity of the radiation field measured by the other. By employing these detectors in concert, spatial and temporal variations in radiation intensity can be detected simultaneously, and gel dosimeters can be calibrated for absolute dose from a single irradiation.
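To make the calibration chain concrete, here is a minimal numerical sketch of the idea, with toy numbers and hypothetical variable names rather than the authors' actual data or pipeline: fit the scintillator response against ionisation-chamber doses, integrate the scintillator's temporal trace to an absolute dose at its position, and use that value to put the gel's relative 3-D map on an absolute scale.

```python
import numpy as np

# 1) Calibration: fit the scintillator response against ionisation-chamber
#    dose measurements (toy values, assumed linear response).
chamber_dose_gy = np.array([0.5, 1.0, 2.0, 4.0])        # reference dose (Gy)
scint_counts    = np.array([51.0, 99.8, 201.5, 398.9])  # scintillator signal

gain = np.polyfit(scint_counts, chamber_dose_gy, 1)[0]  # Gy per count

# 2) Hybrid measurement: the scintillator records count rate vs. time
#    (the temporal dose rate); the gel records a relative 3-D dose map.
time_s     = np.linspace(0.0, 60.0, 601)                        # 0.1 s steps
count_rate = np.where((time_s > 5) & (time_s < 55), 20.0, 0.0)  # counts/s
dose_rate  = gain * count_rate                                  # Gy/s

# 3) Integrating the temporal trace gives the absolute dose at the
#    scintillator position; rescaling the gel's relative map by it
#    calibrates the gel from this single irradiation.
dt            = time_s[1] - time_s[0]
total_dose_gy = float(np.sum(dose_rate) * dt)
gel_relative  = np.random.rand(32, 32, 32)      # stand-in for the gel readout
gel_absolute  = gel_relative * (total_dose_gy / gel_relative[16, 16, 16])
```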
Satune: Synthesizing Efficient SAT Encoders
Modern SAT solvers are extremely efficient at solving Boolean satisfiability problems, enabling a wide spectrum of techniques for checking, verifying, and validating real-world programs. What remains challenging, though, is how to encode a domain problem (e.g., model checking) into a SAT formula: the same problem can have multiple distinct encodings, which can yield performance results that are orders of magnitude apart, regardless of the underlying solver used. We develop Satune, a tool that automatically synthesizes SAT encoders for different problem domains. Satune employs a DSL that allows developers to express domain problems at a high level, and a search algorithm that can effectively find efficient encodings. The search is guided by observations made over example encodings and their performance in the domain, so Satune can quickly synthesize a high-performance encoder by incorporating patterns from examples that perform well. A thorough evaluation with JMCR, SyPet, Dirk, Hexiom, Sudoku, and KillerSudoku demonstrates that Satune can easily synthesize high-performance encoders for domains including model checking, synthesis, and games. These encoders generate constraint problems that are often several orders of magnitude faster to solve than those produced by the tools' original encodings.
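As a hedged illustration of why encoding choice matters (this is not Satune's DSL or search, just plain Python emitting DIMACS-style clauses), the same constraint, exactly one of x1..xN is true, admits a quadratic pairwise encoding and an O(N log N) encoding with auxiliary selector bits:

```python
import math

def exactly_one_pairwise(n):
    """O(n^2) clauses: at-least-one plus pairwise at-most-one."""
    clauses = [list(range(1, n + 1))]                    # x1 or ... or xn
    clauses += [[-i, -j] for i in range(1, n + 1)
                         for j in range(i + 1, n + 1)]   # not (xi and xj)
    return clauses

def exactly_one_binary(n):
    """O(n log n) clauses: xi forces selector bits to spell out i."""
    m = max(1, math.ceil(math.log2(n)))
    sel = lambda k: n + 1 + k                # selector variables n+1..n+m
    clauses = [list(range(1, n + 1))]        # at-least-one
    for i in range(n):
        for k in range(m):
            lit = sel(k) if (i >> k) & 1 else -sel(k)
            clauses.append([-(i + 1), lit])  # xi implies k-th bit of i
    return clauses

for n in (8, 64):
    print(n, len(exactly_one_pairwise(n)), len(exactly_one_binary(n)))
    # n=8: 29 vs 25 clauses; n=64: 2017 vs 385 clauses
```

Which variant solves faster depends on the instance and the solver, which is exactly the space a tool like Satune searches automatically.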
Verifying Correctness of Persistent Memory Programs
Persistent memory (PM) technologies offer performance close to DRAM together with persistence. Persistent memory enables programs to modify persistent data directly through normal load and store instructions, bypassing heavyweight OS system calls. Ensuring that these programs are crash-consistent (i.e., that they recover correctly from power failures) is a major challenge. Stores to persistent memory are not immediately made persistent: they initially reside in the processor cache and are only written to PM when a flush occurs, whether due to space constraints or explicit flush instructions. Testing crash consistency is harder for PM than for disks because PM's byte-addressability leads to significantly more possible states. Most existing state-of-the-art testing tools require heavy user annotations, report violations that may not correspond to actual bugs, do not test the recovery procedure, and rely on a test suite to cover all scenarios. This dissertation describes three testing tools for verifying the crash consistency of persistent memory programs:

1) Jaaru: a fully automated and ultra-efficient model checker for PM programs. Key to Jaaru's efficiency is a new technique based on constraint refinement that can reduce the number of executions that must be explored by many orders of magnitude. This exploration technique leverages commit stores, a common coding pattern, to reduce the model-checking complexity from exponential in the length of program executions to quadratic.
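The commit-store observation can be illustrated with a deliberately tiny persistency model (a sketch under simplified assumptions, not Jaaru's implementation): stores sit in the cache until flushed, any subset of unflushed stores may have been evicted on its own before a crash, and recovery reads data only when a commit flag persisted, which collapses many naively distinct crash states into a few equivalence classes:

```python
from itertools import chain, combinations

def crash_states(trace):
    """Naive enumeration of post-crash memory images (exponential)."""
    states = set()
    for cut in range(len(trace) + 1):          # crash after every prefix
        persisted, cached = {}, {}
        for op in trace[:cut]:
            if op[0] == "store":
                cached[op[1]] = op[2]
            elif op[1] in cached:              # flush of a cached line
                persisted[op[1]] = cached.pop(op[1])
        for evicted in chain.from_iterable(
                combinations(cached, r) for r in range(len(cached) + 1)):
            img = dict(persisted, **{a: cached[a] for a in evicted})
            states.add(frozenset(img.items()))
    return states

def recovery_view(img):
    """Recovery ignores everything unless the commit flag persisted."""
    d = dict(img)
    return tuple(sorted(d.items())) if d.get("commit") else ()

trace = [("store", "A", 1), ("store", "B", 2), ("flush", "A"),
         ("store", "commit", 1)]               # commit publishes A and B
naive = crash_states(trace)
print(len(naive), len({recovery_view(s) for s in naive}))   # 6 3
```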
2) PSan: a tool that introduces robustness as a sufficient correctness condition ensuring that program executions are free from bugs caused by missing flushes. PSan implements an algorithm for checking robustness, helping developers both identify silent data-corruption bugs and localize bugs in large traces to the problematic memory operations that are missing flush operations.
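A minimal sketch of the missing-flush check (a simplified persistency model, not PSan's actual robustness algorithm): scan the trace and report any store whose cache line is still unflushed when a fence is reached, since nothing then orders its persistence before later, dependent state:

```python
def find_missing_flushes(trace):
    pending = {}                     # cache line -> index of unflushed store
    bugs = []
    for i, op in enumerate(trace):
        if op[0] == "store":
            pending[op[1]] = i
        elif op[0] == "flush":
            pending.pop(op[1], None)      # store is now ordered by the fence
        elif op[0] == "fence":
            bugs += list(pending.items())  # still cached: persistence unordered
            pending.clear()                # report each missing flush once
    return bugs

trace = [("store", "A"), ("flush", "A"),
         ("store", "B"),                   # bug: B is never flushed
         ("fence",),
         ("store", "commit")]
print(find_missing_flushes(trace))         # [('B', 2)]
```

A real checker such as PSan also localizes which flush is missing relative to the ordering the program actually needs, rather than flagging every fence.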
3) Yashme: a tool that detects a novel class of crash-consistency bugs in persistent memory programs, which we call persistency races. Persistency races can cause non-atomic stores to become partially persistent; they arise from the interaction of standard compiler optimizations with persistent memory semantics. A major challenge is that, to detect a persistency race, the execution must crash in a very narrow window between the racing store and its corresponding cache flush, making naive techniques ineffective. Yashme overcomes this challenge with a novel technique for detecting races in executions that are prefixes of the pre-crash execution, enabling it to find persistency races even when the injected crash does not fall into that window.

These testing frameworks have found many bugs in well-tested applications ranging from persistent data structures to real-world frameworks. The bugs were reported to the developers of these frameworks; most have been confirmed, and the corresponding fixes are available in the projects' GitHub repositories.
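To make the persistency-race mechanism above concrete, a toy model (hypothetical example, not Yashme's detector) treats a wide store that the compiler splits into word-sized stores; every prefix of the pre-crash word stores is then a possible persistent-memory image, and enumerating prefixes finds the torn state without an injected crash having to land exactly inside the narrow window:

```python
def persisted_prefixes(word_stores):
    """Every prefix of the pre-crash word stores is a possible PM image."""
    return [dict(word_stores[:cut]) for cut in range(len(word_stores) + 1)]

# A 64-bit record update compiled into two 32-bit stores (hypothetical):
word_stores = [("rec.lo", 0xDEAD), ("rec.hi", 0xBEEF)]
for img in persisted_prefixes(word_stores):
    torn = len(img) == 1                 # only half of the record landed
    print(img, "<- TORN" if torn else "")
```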
Custom Processor Design Using NISC: A Case-Study on DCT algorithm
Designing Application-Specific Instruction-set Processors (ASIPs) usually requires designing a custom datapath and modifying the instruction set, instruction decoder, and compiler. A new alternative to ASIPs is No-Instruction-Set Computers (NISCs), which eliminate the instruction abstraction by compiling programs directly to a given datapath. The compiler analyzes the datapath and extracts the possible operations and data flows. The NISC approach simplifies and accelerates the task of custom processor design. In this paper, we present a case study of designing a custom datapath for a 2-D DCT algorithm. We applied several optimization techniques, such as software transformations, operation chaining, datapath pipelining, controller pipelining, and functional-unit customization, to improve the quality of the design. Most of these techniques are general and can be applied to other applications. Synthesizing our final custom datapath on a Xilinx FPGA shows a 7.14 times performance improvement, 1.64 times power reduction, 12.5 times energy savings, and more than 3 times area reduction compared to a soft-core MIPS implementation.
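The core NISC idea, control words that drive the datapath directly with no instruction decoder in between, can be sketched in a few lines (a toy cycle-level model with made-up control fields, not the NISC toolchain):

```python
# Each control word drives the datapath for one cycle: (alu_op, src1, src2, dst).
# There is no instruction set and no decoder between the word and the hardware.
regs = [0] * 4
ALU = {"add": lambda a, b: a + b,
       "mul": lambda a, b: a * b,
       "pass": lambda a, b: a}

control_memory = [
    ("pass", None, None, 0),   # r0 <- external input
    ("add",  0, 0, 1),         # r1 <- r0 + r0
    ("mul",  1, 0, 2),         # r2 <- r1 * r0
]

def run(ctrl_mem, ext_input):
    for op, s1, s2, dst in ctrl_mem:
        a = ext_input if s1 is None else regs[s1]
        b = 0 if s2 is None else regs[s2]
        regs[dst] = ALU[op](a, b)
    return regs

print(run(control_memory, 3))  # [3, 6, 18, 0]
```

Optimizations like operation chaining amount to widening a control word so that one cycle can route a unit's output straight into another unit instead of through the register file.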
FPGA-friendly code compression for horizontal microcoded custom IPs
Shrinking time-to-market and high demand for productivity have driven traditional hardware designers to design methodologies that start from high-level languages. However, meeting the timing constraints of automatically generated IPs is often a challenging and time-consuming task that must be repeated every time the specification is modified. To address this issue, a new generation of IP-design technologies has been developed that can generate custom datapaths as well as program existing ones. These technologies are often based on Horizontal Microcoded Architectures (HMAs). Large code size is a well-known problem in HMAs, referred to as the "code bloating" problem. In this paper, we study the code size of one of these HMA-based technologies, NISC. We show that NISC code size can be several times larger than that of a typical RISC processor, and we propose several low-overhead dictionary-based code compression techniques to reduce it. Our compression algorithm leverages knowledge of "don't care" values in the control words to better compress the contents of dictionary memories. Our experiments show that by selecting proper memory architectures, the code size of NISC can be reduced by 70% (i.e., 3.3 times) at the cost of only 9% performance degradation. We also show that some code compression techniques may increase the number of utilized block RAMs in FPGA-based implementations; to address this, we propose combining dictionaries and implementing them using embedded dual-port memories.
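The don't-care-aware dictionary idea can be sketched as follows (a simplified model, not the paper's exact algorithm): control words are bit strings in which 'x' marks a don't-care, and two words may share one dictionary entry whenever they agree on every bit that both care about:

```python
def compatible(w1, w2):
    """Words can share an entry if they agree wherever neither is 'x'."""
    return all(a == b or a == "x" or b == "x" for a, b in zip(w1, w2))

def merge(entry, word):
    """Resolve the entry's don't-care bits using the new word."""
    return "".join(b if a == "x" else a for a, b in zip(entry, word))

def build_dictionary(words):
    entries, index = [], []            # merged patterns, per-word entry id
    for w in words:
        for i, e in enumerate(entries):
            if compatible(w, e):
                entries[i] = merge(e, w)
                index.append(i)
                break
        else:
            entries.append(w)
            index.append(len(entries) - 1)
    return entries, index

words = ["10x1", "1001", "x1x0", "0110"]   # toy control words
print(build_dictionary(words))             # (['1001', '0110'], [0, 0, 1, 1])
```

Fewer dictionary entries mean a smaller dictionary memory and narrower index fields in the code stream, which is where the compression comes from.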
Nanoscience and nanotechnology research publications: a comparison between Australia and the rest of the world
Nanoscience and nanotechnology are multidisciplinary research areas. A good knowledge of their rapidly evolving nature is important for understanding research paths as well as national and global developments. Accordingly, this study compared nanoscience and nanotechnology research undertaken globally with that of Australia by analyzing research publications. Initially, four different bibliometric Boolean-based search methodologies were used to analyze publications in the Web of Science database (Thomson Reuters ISI Web of Knowledge): (a) lexical query, (b) search in nanoscience and nanotechnology journals, (c) a combination of lexical query and journal search, and (d) search in the ten nano-journals with the highest impact factors. The third methodology was found to be the most comprehensive and was therefore used to compare global and Australian nanoscience and nanotechnology publications for the period 1988-2000. Results showed that, depending on the search technique used, Australia ranks fourteenth to seventeenth internationally, with a higher than world-average number of nanoscience and nanotechnology publications. Over the last decade, Australia showed a relative growth rate in these publications of 16%, compared to 12% for the rest of the world. Researchers from China, the USA, and the UK are the main international collaborators with Australian researchers on nanoscience and nanotechnology publications.
An Algorithm to Avoid Power Command Jitter in Middleware-Based Distributed Embedded Systems
Middleware such as CORBA provides a software architecture that supports integrating legacy software components with new software in a way that is modular, scalable, and evolvable. However, these benefits come with high run-time overhead. In dynamic hard real-time distributed embedded systems, a central power manager usually calculates and issues all power commands, and must communicate with different modules to perform mode transitions transparently. Due to the inherent communication overhead of middleware-based embedded systems, issuing each power command imposes considerable overhead on the power manager. This overhead limits the rate at which power commands can be issued and may shift their schedule. This paper makes two contributions: first, it introduces the Power Command Jitter (PCJ) problem in middleware-based embedded systems; second, it proposes an effective Power Command Adjustment (PCA) algorithm that re-orders and reschedules power commands so that the correctness of the schedule is maintained while energy loss is minimized. Our experimental results on a commercial software-defined radio system (JTRS) show that PCJ can cause violations of real-time deadlines and unreliability of the system.
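A hedged sketch of the rescheduling idea (the PCA algorithm itself is not reproduced here; this is a toy model with a made-up fixed middleware cost per command): issue each command at the latest time that still lets it take effect by its nominal deadline, preserving the original order and showing how clustered commands accumulate jitter:

```python
OVERHEAD_MS = 4.0                       # assumed middleware cost per command

def reschedule(nominal_ms):
    """Issue each command as late as possible while keeping the order."""
    issue, cursor = [], 0.0             # cursor: when the manager is next free
    for target in sorted(nominal_ms):
        start = max(cursor, target - OVERHEAD_MS)
        issue.append(start)
        cursor = start + OVERHEAD_MS    # manager is busy for the round-trip
    return issue

nominal = [10.0, 12.0, 13.0, 30.0]      # when each mode switch should land
for target, start in zip(sorted(nominal), reschedule(nominal)):
    lag = start + OVERHEAD_MS - target
    print(f"target {target:5.1f} ms  issue {start:5.1f} ms  jitter {lag:+.1f} ms")
```

With the 4 ms overhead, the commands targeted at 12 ms and 13 ms cannot both land on time; a PCA-style pass must decide how to shift such clustered commands so that deadlines and energy loss are balanced.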