
    A mechanism for simplified scanner control with application to MRI-guided interventions

    Magnetic Resonance Imaging (MRI)-guided interventions, such as percutaneous biopsies of lesions or trajectory alignment with prospective stereotaxy, are conducted in real time using rapid image acquisition. A mechanism for passively localizing a device and calculating its orientation is desired to improve interventional outcomes in these situations. In this work, we propose and evaluate an image-based technique to determine the position and alignment of a linearly shaped interventional device within an ex-vivo tissue specimen. Low-resolution 3D orientation scan data is processed to produce a virtual line fit using principal component analysis. The line-fitting algorithm was incorporated into a biopsy needle tracking system implemented on an MR scanner operated using a footswitch. A GUI application was written to collect foot pedal input and display automated visualization of device placement inside the scanner room. Placement time trials (N=3) were conducted with this system using porcine muscle and phantom samples suspended in rigid frames with inserted gadolinium-enhanced targets. The mean targeting error across all directions was 3.6 mm for the phantom trials and 5.1 mm for the ex-vivo trials. The average entry-to-target time was 247 s. Device localization during trials was adequate to contain an 11-gauge titanium biopsy needle within a 10 mm visualization slice volume after 93.8% of alignments, over insertion lengths of 30 mm to 110 mm and insertion angles of 1.4° to 20° from the static magnetic field and frequency-encoding axes. Practical considerations were identified and occupational exposure measurements were collected as part of determining the system's overall feasibility.
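
    The virtual line fit described above can be obtained from a thresholded scan volume with a small amount of linear algebra: the first principal component of the device voxels' coordinates gives the needle axis, and their centroid anchors it. The sketch below illustrates that idea; the function name, the intensity threshold, and the synthetic test volume are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def fit_device_line(volume, threshold):
    """Fit a 3D line to voxels flagged as belonging to the device.

    volume    : 3D numpy array of image intensities (low-resolution scan)
    threshold : intensity cutoff; voxels below it are treated as the
                device's signal void
    Returns (centroid, direction) of the best-fit line.
    """
    # Collect (x, y, z) coordinates of candidate device voxels.
    coords = np.argwhere(volume < threshold).astype(float)

    # Centre the point cloud on its centroid.
    centroid = coords.mean(axis=0)
    centered = coords - centroid

    # The first right singular vector of the centered coordinates is the
    # first principal component, i.e. the dominant axis of the point cloud.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0] / np.linalg.norm(vt[0])
    return centroid, direction

# Example: a synthetic 64^3 volume containing a dark, roughly linear track.
vol = np.ones((64, 64, 64))
for t in range(10, 54):
    vol[t, t // 2 + 10, 32] = 0.0

c, d = fit_device_line(vol, threshold=0.5)
print("centroid:", c, "direction:", d)
```

    Taking the SVD of the centered coordinates is equivalent to PCA of the point cloud and avoids forming the covariance matrix explicitly.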

    Modeling and Experimental Techniques to Demonstrate Nanomanipulation With Optical Tweezers

    The development of truly three-dimensional nanodevices is currently impeded by the absence of effective prototyping tools at the nanoscale. Optical trapping is well established for flexible three-dimensional manipulation of components at the microscale. However, it has so far not been demonstrated to confine nanoparticles for long enough to be useful in nanoassembly applications. Therefore, as part of this work we demonstrate new techniques that successfully extend optical trapping to nanoscale manipulation. In order to extend optical trapping to the nanoscale, we must overcome certain challenges. For the same incident beam power, the optical trapping forces acting on a nanoparticle are very weak compared with the forces acting on microscale particles. Consequently, due to Brownian motion, the nanoparticle often exits the trap in a very short period of time. We improve the performance of optical traps at the nanoscale by using closed-loop control. Furthermore, we show through laboratory experiments that, using control systems, we are able to localize nanoparticles to the trap for long enough to be useful in nanoassembly applications, under conditions in which a static trap set to the same power as the controller is unable to confine a same-sized particle. Before controlled optical trapping can be demonstrated in the laboratory, key tools must first be developed. We implement Langevin dynamics simulations to model the interaction of nanoparticles with an optical trap. Physically accurate simulations provide a robust platform to test new methods for characterizing and improving the performance of optical tweezers at the nanoscale, but they depend on accurate trapping-force models. Therefore, we have also developed two new laboratory-based force measurement techniques that overcome the drawbacks of conventional force measurements, which do not accurately account for the weak interaction of nanoparticles with an optical trap. Finally, we use numerical simulations to develop new control algorithms that demonstrate significantly enhanced trapping of nanoparticles, and we implement these techniques in the laboratory. The algorithms and characterization tools developed as part of this work will allow the development of optical trapping instruments that can confine nanoparticles for longer than is currently possible at a given beam power. Furthermore, the low average power achieved by the controller makes this technique especially suitable for manipulating biological specimens, and it is also generally beneficial to nanoscale prototyping applications. Therefore, the capabilities developed as part of this work, and the technology that results from them, may enable the prototyping of the three-dimensional nanodevices critically required in many applications.
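
    As a rough illustration of the modeling approach mentioned above, the following sketch integrates an overdamped Langevin equation for a nanoparticle in a harmonic optical trap and adds a toy on/off feedback law that raises beam power only when the particle strays from the trap centre. All parameter values and the control law itself are assumptions for illustration; they are not the dissertation's controller or force model.

```python
import numpy as np

# Overdamped Langevin dynamics of a nanoparticle in a harmonic optical trap,
# with a simple on/off feedback law that uses high beam power (high stiffness)
# only outside a dead band around the trap centre.
kB, T = 1.380649e-23, 300.0          # Boltzmann constant [J/K], temperature [K]
radius = 50e-9                        # particle radius [m] (assumed)
eta = 8.9e-4                          # viscosity of water [Pa*s]
gamma = 6 * np.pi * eta * radius      # Stokes drag coefficient
k_low, k_high = 1e-7, 1e-6            # trap stiffness at low/high power [N/m] (assumed)
dt, steps = 1e-5, 200_000             # time step [s], number of steps

x, x_trap = 0.0, 0.0                  # particle and trap positions [m]
high_power_steps = 0
rng = np.random.default_rng(0)

for _ in range(steps):
    # Feedback: switch to high stiffness only when the particle drifts
    # more than 100 nm from the trap centre.
    k = k_high if abs(x - x_trap) > 100e-9 else k_low
    high_power_steps += (k == k_high)

    # Euler-Maruyama step: deterministic restoring force plus thermal noise.
    noise = np.sqrt(2 * kB * T / gamma * dt) * rng.standard_normal()
    x += -(k / gamma) * (x - x_trap) * dt + noise

print("fraction of time at high power:", high_power_steps / steps)
print("final displacement [nm]:", x * 1e9)
```

    Because the high-power state is used only a fraction of the time, the average beam power is lower than that of a static trap stiff enough to achieve comparable confinement, which is the qualitative benefit the abstract describes.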

    Adapting Computer Vision Models To Limitations On Input Dimensionality And Model Complexity

    In distributed systems where visual sensors communicate with remote predictive models, data traffic is limited by the capacity of the communication channels, and on-sensor hardware limits how much of the collected data can be processed prior to transmission. We study novel methods of adapting visual inference to limitations on model complexity and data availability at test time, wherever such limitations exist. The contributions detailed in this thesis consider both task-specific and task-generic approaches to reducing the data requirement for inference, and we evaluate our proposed methods on a wide range of computer vision tasks. This thesis makes four distinct contributions: (i) We investigate multi-class action classification via two-stream convolutional neural networks that directly ingest information extracted from compressed video bitstreams. We show that selective access to macroblock motion vector information provides a good low-dimensional approximation of the underlying optical flow in visual sequences. (ii) We devise a bitstream cropping method by which AVC/H.264 and H.265 bitstreams are reduced to the minimum set of elements necessary for optical flow extraction, while maintaining compliance with the codec standards. We additionally study the effect of codec rate-quality control on the sparsity of and noise in the optical flow derived from the resulting bitstreams, and we do so for multiple coding standards. (iii) We demonstrate variability in the amount of data required for action classification, and leverage this to reduce the dimensionality of input volumes by inferring the temporal extent required for accurate classification before processing by learnable machines. (iv) We extend the Mixtures-of-Experts (MoE) paradigm to adapt the data cost of inference for any set of constituent experts. We postulate that the minimum acceptable data cost of inference varies across input space partitions, and consider mixtures in which each expert is designed to meet a different set of constraints on input dimensionality. To take advantage of the flexibility of such mixtures in processing different input representations and modalities, we train biased gating functions such that experts requiring less information to make their inferences are favoured over others. Finally, we note that our proposed data-utility optimization solutions include a learnable component that considers specified priorities on the amount of information to be used prior to inference, and they can be realized for any combination of tasks, modalities, and constraints on available data.
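
    Contribution (iv) hinges on gating functions that are biased toward experts with a lower input-data cost. Below is a minimal sketch of one way such a bias could enter the gate, assuming a simple additive cost penalty inside the softmax; the penalty form, expert costs, and logits are illustrative assumptions, not the thesis's trained gate.

```python
import numpy as np

def biased_gate(logits, data_cost, bias_strength):
    """Softmax gate that favours experts needing less input data.

    logits        : raw gating scores per expert (from a learned gate)
    data_cost     : relative input-data cost of each expert (e.g. frames,
                    resolution, or bits required before it can infer)
    bias_strength : how strongly low-cost experts are preferred
    """
    biased = np.asarray(logits) - bias_strength * np.asarray(data_cost)
    e = np.exp(biased - biased.max())          # numerically stable softmax
    return e / e.sum()

# Three hypothetical experts: low-, medium-, and full-dimensionality input.
logits = np.array([1.2, 1.5, 1.6])   # gate slightly prefers the costliest expert
costs = np.array([0.1, 0.5, 1.0])    # normalised data cost per expert

print(biased_gate(logits, costs, bias_strength=0.0))  # unbiased gate
print(biased_gate(logits, costs, bias_strength=2.0))  # biased toward cheap experts
```

    With the bias switched off, the gate slightly prefers the most data-hungry expert; with the penalty applied, probability mass shifts toward the cheaper experts, which is the qualitative behaviour the contribution describes.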

    Scientific Programming and Computer Architecture

    A variety of programming models relevant to scientists explained, with an emphasis on how programming constructs map to parts of the computer. What makes computer programs fast or slow? To answer this question, we have to get behind the abstractions of programming languages and look at how a computer really works. This book examines and explains a variety of scientific programming models (programming models relevant to scientists) with an emphasis on how programming constructs map to different parts of the computer's architecture. Two themes emerge: program speed and program modularity. Throughout this book, the premise is to "get under the hood," and the discussion is tied to specific programs. The book digs into linkers, compilers, operating systems, and computer architecture to understand how the different parts of the computer interact with programs. It begins with a review of C/C++ and explanations of how libraries, linkers, and Makefiles work. Programming models covered include Pthreads, OpenMP, MPI, TCP/IP, and CUDA. The emphasis on how computers work leads the reader into computer architecture and occasionally into the operating system kernel. The operating system studied is Linux, the preferred platform for scientific computing. Linux is also open source, which allows users to peer into its inner workings. A brief appendix provides a useful table of machines used to time programs. The book's website (https://github.com/divakarvi/bk-spca) has all the programs described in the book as well as a link to the html text.

    Mobile Forensics – The File Format Handbook

    This open access book summarizes knowledge about several file systems and file formats commonly used in mobile devices. In addition to the fundamental description of the formats, there are hints about the forensic value of possible artefacts, along with an outline of tools that can decode the relevant data. The book is organized into two distinct parts. Part I describes several different file systems that are commonly used in mobile devices:
    · APFS is the file system used in all modern Apple devices, including iPhones, iPads, and even Apple computers such as the MacBook series.
    · Ext4 is very common in Android devices and is the successor of the Ext2 and Ext3 file systems that were commonly used on Linux-based computers.
    · The Flash-Friendly File System (F2FS) is a Linux file system designed explicitly for NAND flash memory, common in removable storage and mobile devices, which Samsung Electronics developed in 2012.
    · The QNX6 file system is present in smartphones delivered by Blackberry (e.g. devices running Blackberry 10) and in modern vehicle infotainment systems that use QNX as their operating system.
    Part II describes five different file formats that are commonly used on mobile devices:
    · SQLite is nearly omnipresent in mobile devices, with an overwhelming majority of all mobile applications storing their data in such databases.
    · The second leading file format in the mobile world is Property Lists, which are predominantly found on Apple devices.
    · Java Serialization is a popular technique for storing object states in the Java programming language. Mobile application (app) developers very often resort to this technique to make their application state persistent.
    · The Realm database format has emerged over recent years as a possible successor to the now ageing SQLite format and has begun to appear as part of some modern applications on mobile devices.
    · Protocol Buffers provide a format for serializing compiled data by turning it into bytes represented as decimal values, a technique commonly used in mobile devices.
    The aim of this book is to act as a knowledge base and reference guide for digital forensic practitioners who need knowledge about a specific file system or file format. It is also hoped to provide useful insight and knowledge for students or other aspiring professionals who want to work within the field of digital forensics. The book is written with the assumption that the reader has some existing knowledge and understanding of computers, mobile devices, file systems and file formats.
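
    Since SQLite databases are singled out above as the most common artefact source, a first-pass inspection of an extracted database is often the starting point of an examination. Below is a minimal sketch of such a pass in Python, assuming a hypothetical file path and using only the standard-library sqlite3 module; a real examination would also look at WAL files, freelists, and deleted records, which this snippet does not cover.

```python
import sqlite3

# Hypothetical path to a database recovered from a device image.
db_path = "extracted/app_data.db"

# Open read-only so the evidence file is not modified.
con = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
cur = con.cursor()

# List user tables recorded in the sqlite_master catalogue, with row counts.
cur.execute("SELECT name, sql FROM sqlite_master WHERE type = 'table'")
for name, create_sql in cur.fetchall():
    count = cur.execute(f'SELECT COUNT(*) FROM "{name}"').fetchone()[0]
    print(f"{name}: {count} rows")
    print(f"  schema: {create_sql}")

con.close()
```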

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. The Brunswick model was originally developed for face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals in addition to the spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model allows the quality of the interaction to be evaluated quantitatively, using statistical tools that measure how effective the recognition phase is. In this paper we cast this theory in the setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source low-cost robotic head platform, where gaze is the social signal considered.
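
    The quantitative evaluation referred to above amounts to statistically comparing the social signal an interactant emits with what the recognition phase decodes. Below is a hedged sketch of that kind of check, using recognition accuracy and a confusion matrix over coded gaze targets; the data and the choice of metrics are assumptions for illustration, not the paper's protocol.

```python
import numpy as np

# Coded gaze targets per time step: what was actually emitted versus what the
# robot's recognition phase decoded (toy data for illustration only).
emitted    = np.array([0, 0, 1, 2, 1, 1, 0, 2, 2, 1])   # ground-truth targets
recognized = np.array([0, 0, 1, 2, 0, 1, 0, 2, 1, 1])   # robot's estimates

# Overall effectiveness of the recognition phase.
accuracy = np.mean(emitted == recognized)
print(f"recognition accuracy: {accuracy:.2f}")

# Per-target confusion counts show where recognition fails.
n_targets = 3
confusion = np.zeros((n_targets, n_targets), dtype=int)
for true, est in zip(emitted, recognized):
    confusion[true, est] += 1
print(confusion)
```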

    Natural Language Processing: Emerging Neural Approaches and Applications

    This Special Issue highlights the most recent research being carried out in the NLP field and discusses related open issues, with a particular focus both on emerging approaches for language learning, understanding, production, and grounding, acquired interactively or autonomously from data in cognitive and neural systems, and on their potential or real applications in different domains.