
    MFC: An open-source high-order multi-component, multi-phase, and multi-scale compressible flow solver

    MFC is an open-source tool for solving multi-component, multi-phase, and bubbly compressible flows. It is capable of efficiently solving a wide range of flows, including droplet atomization, shock–bubble interaction, and bubble dynamics. We present the 5- and 6-equation thermodynamically-consistent diffuse-interface models we use to handle such flows, which are coupled to high-order interface-capturing methods, HLL-type Riemann solvers, and TVD time-integration schemes that are capable of simulating unsteady flows with strong shocks. The numerical methods are implemented in a flexible, modular framework that is amenable to future development. The methods we employ are validated via comparisons to experimental results for shock–bubble, shock–droplet, and shock–water-cylinder interaction problems and verified to be free of spurious oscillations for material-interface advection and gas–liquid Riemann problems. For smooth solutions, such as the advection of an isentropic vortex, the methods are verified to be high-order accurate. Illustrative examples involving shock–bubble-vessel-wall and acoustic–bubble-net interactions are used to demonstrate the full capabilities of MFC.
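
    As a rough illustration of the HLL-type Riemann flux mentioned above, the sketch below computes the HLL flux for the 1D Euler equations with simple Davis wave-speed estimates. The function names, the ideal-gas assumption, and the wave-speed choice are illustrative assumptions, not MFC's actual implementation.

```python
import numpy as np

GAMMA = 1.4  # assumed ideal-gas ratio of specific heats

def euler_flux(U):
    """Physical flux F(U) for the 1D Euler equations, U = [rho, rho*u, E]."""
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (E + p) * u])

def hll_flux(UL, UR):
    """HLL approximate Riemann flux with simple Davis wave-speed estimates."""
    def primitives(U):
        rho, mom, E = U
        u = mom / rho
        p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)
        return u, np.sqrt(GAMMA * p / rho)
    uL, aL = primitives(UL)
    uR, aR = primitives(UR)
    sL = min(uL - aL, uR - aR)   # leftmost wave-speed estimate
    sR = max(uL + aL, uR + aR)   # rightmost wave-speed estimate
    FL, FR = euler_flux(UL), euler_flux(UR)
    if sL >= 0.0:
        return FL
    if sR <= 0.0:
        return FR
    return (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)
```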

    Registration accuracy of the optical navigation system for image-guided surgery

    During the last decades, image-guided surgery has become an increasingly common approach in medical operations. It provides a new opportunity to perform surgical operations with higher accuracy and reliability than before. In image-guided surgery, a navigation system is used to track the instrument's location and orientation during the surgery. These navigation systems can track the instrument in several ways, the most common of which are optical, mechanical, and electromagnetic tracking. Navigation systems are used primarily in surgical operations in the head and spine area, so it is essential to know the registration accuracy, and thus the navigational accuracy, of the navigation system, and how different registration methods might affect them. In this research, the registration accuracy of an optical navigation system is investigated using a head phantom whose surface-hole coordinates are measured during navigation after different registration scenarios. Reference points are determined from computed tomography images of the head phantom. The absolute differences between the measured points and the corresponding reference points are calculated, and the results are illustrated using bar graphs and three-dimensional point clouds. MATLAB is used to analyze and present the results. The results show that registration accuracy, and thus navigation accuracy, is primarily affected by how the first three registration points are determined for the navigation system at the beginning of the registration. This should be considered in future applications where the navigation system is used in image-guided surgery.
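
    A minimal sketch of the kind of point-based rigid registration and per-point error computation described above, assuming corresponding fiducial points are available. The function names and the SVD-based (Procrustes/Kabsch) solution are illustrative, not the navigation system's actual algorithm.

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid (rotation + translation) fit of source points to target points.

    source, target: (N, 3) arrays of corresponding fiducial coordinates.
    Returns R (3x3) and t (3,) such that R @ source.T + t approximates target.
    """
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t

def point_errors(measured, reference, R, t):
    """Absolute distance of each registered measured point to its reference point."""
    registered = (R @ measured.T).T + t
    return np.linalg.norm(registered - reference, axis=1)
```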

    Real-Time 4D Ultrasound Reconstruction for Image-Guided Intracardiac Interventions

    Image-guided therapy addresses the lack of direct vision associated with minimally invasive interventions performed on the beating heart, but requires effective intraoperative imaging. Gated 4D ultrasound reconstruction using a tracked 2D probe generates a time-series of 3D images representing the beating heart over the cardiac cycle. These images have a relatively high spatial resolution and wide field of view, and ultrasound is easily integrated into the intraoperative environment. This thesis presents a real-time 4D ultrasound reconstruction system incorporated within an augmented reality environment for surgical guidance, whose incremental visualization reduces common acquisition errors. The resulting 4D ultrasound datasets are intended for visualization or registration to preoperative images. A human factors experiment demonstrates the advantages of real-time ultrasound reconstruction, and accuracy assessments performed both with a dynamic phantom and intraoperatively reveal RMS localization errors of 2.5-2.7 mm and 0.8 mm, respectively. Finally, clinical applicability is demonstrated by both porcine and patient imaging.
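
    A toy sketch of gated reconstruction as described above: tracked 2D frames are binned by cardiac phase and scattered into per-phase 3D volumes using their tracked poses. The names, the nearest-voxel insertion, and the assumption that pixel spacing equals voxel size are simplifications, not the thesis's actual reconstruction pipeline.

```python
import numpy as np

def reconstruct_gated_volumes(frames, poses, phases, n_bins, vol_shape, voxel_size):
    """Toy gated 4D reconstruction: scatter tracked 2D frames into per-phase 3D volumes.

    frames: list of (H, W) ultrasound images
    poses:  list of 4x4 image-to-world transforms from the tracker
    phases: cardiac phase in [0, 1) for each frame (e.g. from ECG gating)
    """
    volumes = np.zeros((n_bins,) + vol_shape)
    counts = np.zeros_like(volumes)
    for img, T, phase in zip(frames, poses, phases):
        b = int(phase * n_bins) % n_bins
        H, W = img.shape
        # pixel grid in image coordinates (mm), z = 0 in the image plane
        ys, xs = np.mgrid[0:H, 0:W]
        pts = np.stack([xs * voxel_size, ys * voxel_size,
                        np.zeros_like(xs), np.ones_like(xs)], axis=-1)
        world = pts.reshape(-1, 4).astype(float) @ T.T   # image -> world coordinates
        idx = np.round(world[:, :3] / voxel_size).astype(int)
        ok = np.all((idx >= 0) & (idx < np.array(vol_shape)), axis=1)
        i, j, k = idx[ok].T
        np.add.at(volumes[b], (i, j, k), img.reshape(-1)[ok])  # accumulate intensities
        np.add.at(counts[b], (i, j, k), 1)
    return volumes / np.maximum(counts, 1)                    # average overlapping samples
```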

    Towards Computer Aided Management of Kidney Disease

    Autosomal dominant polycystic kidney disease (ADPKD) is the fourth most common cause of kidney transplant worldwide, accounting for 7-10% of all cases. Although ADPKD usually progresses over many decades, accurate risk prediction is an important task. Identifying patients with progressive disease is vital to providing the new treatments being developed and enabling these patients to enter clinical trials for new therapies. Among other factors, total kidney volume (TKV) is a major biomarker predicting the progression of ADPKD. The Consortium for Radiologic Imaging Studies in Polycystic Kidney Disease (CRISP) has shown that TKV is an early and accurate measure of cystic burden and likely growth rate, and that it is strongly associated with loss of renal function. While ultrasound (US) has proven to be an excellent tool for diagnosing the disease, monitoring short-term changes with ultrasound has been shown to be inaccurate; this is attributed to high operator variability and poor reproducibility compared to tomographic modalities such as CT and MR (the gold standard). Ultrasound has emerged as one of the standout modalities for intra-procedural imaging, and methods for spatial localization afford the ability to track 2D ultrasound in the physical space in which it is used. In addition, the vast amount of recorded tomographic data can be used to generate statistical shape models that allow us to extract clinical value from archived image sets. Renal volumetry is of great interest in the management of chronic kidney disease (CKD). In this work, we have implemented a tracked ultrasound system and developed a statistical shape model of the kidney. We use the tracked ultrasound to acquire a stack of slices that captures the region of interest, in our case kidney phantoms, and reconstruct a 3D volume from the spatially localized 2D slices. Approximate shape data is then extracted from this 3D volume using manual segmentation of the organ, and a shape model is fit to this data. This generates an instance from the shape model that best represents the scanned phantom, and the volume calculation is performed on this instance. We observe that we can calculate the volume to within 10% error when compared to the gold-standard volume of the phantom.
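
    A minimal sketch of two of the pieces described above, assuming a PCA-based shape model and a closed triangular mesh: generating a shape-model instance from fitted coefficients and computing its volume. The function names and the divergence-theorem volume formula are illustrative, not the authors' implementation.

```python
import numpy as np

def ssm_instance(mean_shape, modes, coeffs):
    """Instance of a PCA statistical shape model: mean + linear combination of modes.

    mean_shape: (3N,) flattened mean point coordinates
    modes:      (3N, K) principal modes of variation
    coeffs:     (K,) shape coefficients fitted to the segmented data
    """
    return (mean_shape + modes @ coeffs).reshape(-1, 3)

def mesh_volume(vertices, faces):
    """Volume of a closed triangular mesh via the divergence theorem
    (sum of signed tetrahedron volumes against the origin)."""
    v0, v1, v2 = vertices[faces[:, 0]], vertices[faces[:, 1]], vertices[faces[:, 2]]
    return abs(np.einsum('ij,ij->i', v0, np.cross(v1, v2)).sum()) / 6.0
```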

    B2B2: LiDAR 2D Mapping Rover

    Autonomous machines are becoming more popular and useful, with even self-driving cars now a reality. Most of these machines navigate using cameras and LiDAR, which do not detect glass, so they give misleading results when objects and obstacles are transparent to the wavelengths of light used. This is problematic in modern building floor plans with glass walls. A solution is to build a ROS system that fuses ultrasonic sensors with LiDAR sensors so that a robot can navigate in a building with glass walls. Using both sensors, the final product is a robot that creates a 2D map using Simultaneous Localization and Mapping (SLAM) as well as other pertinent Robot Operating System (ROS) packages. This map enables any mobile robot to path-plan from point A to B on the resulting 2D floor plan, which incorporates glass and non-glass obstacles. This saves time and energy compared to a robot that moves from point A to B and has to continuously change paths in the presence of obstacles.
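
    A toy sketch of the sensor-fusion idea, assuming range-and-bearing readings from both sensors: returns from either the LiDAR or the ultrasonic sensors mark occupied cells in a 2D occupancy grid, so glass walls invisible to the LiDAR still enter the map. This is illustrative and much simpler than a full SLAM/ROS pipeline; all names are assumptions.

```python
import numpy as np

def fuse_into_grid(grid, robot_xy, robot_yaw, lidar_ranges, lidar_angles,
                   sonar_ranges, sonar_angles, resolution, max_range):
    """Toy sensor fusion: mark occupied cells from both LiDAR and ultrasonic returns.

    Cells hit by either sensor are marked occupied, so glass walls missed by the
    LiDAR but seen by the ultrasonic sensors still appear in the 2D map.
    grid: 2D int array (0 = free/unknown, 1 = occupied), origin at grid[0, 0].
    """
    for ranges, angles in ((lidar_ranges, lidar_angles), (sonar_ranges, sonar_angles)):
        for r, a in zip(ranges, angles):
            if not np.isfinite(r) or r >= max_range:
                continue  # no return within range: nothing to mark
            x = robot_xy[0] + r * np.cos(robot_yaw + a)
            y = robot_xy[1] + r * np.sin(robot_yaw + a)
            i, j = int(y / resolution), int(x / resolution)
            if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
                grid[i, j] = 1
    return grid
```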

    Efficient 2D SLAM for a Mobile Robot with a Downwards Facing Camera

    As digital cameras become cheaper and better, computers more powerful, and robots more abundant, the merging of these three technologies also becomes more common and capable. The combination of these techniques is often inspired by the human visual system and often strives to give machines the same capabilities that humans already have, such as object identification, navigation, limb coordination, and event detection. One particularly popular field is SLAM, or Simultaneous Localization and Mapping, which has high-profile applications in self-driving cars and delivery drones. This thesis proposes and describes an online SLAM algorithm for a specific scenario: a robot with a downwards-facing camera exploring a flat surface (e.g., a floor). The method is based on building homographies from robot odometry data, which are then used to rectify the images so that the tilt of the camera with respect to the floor is eliminated, thereby moving the problem from 3D to 2D. The 2D pose of the robot in the plane is estimated using registrations of SURF features, and a bundle adjustment algorithm is then used to consolidate the most recent measurements with the older ones in order to optimize the map. The algorithm is implemented and tested with an AR.Drone 2.0 quadcopter. The results are mixed, but hardware seems to be the limiting factor: the algorithm performs well and runs at 5-20 Hz on an i5 desktop computer, but the poor quality, high compression, and low resolution of the drone's bottom camera make the algorithm unstable, and this cannot be overcome even with several tiers of outlier filtering. For robots to be practical, they need a flexible understanding of their surroundings and their own position within them, but the methods available for this today are often very demanding. In this project, a simplified method for real-time mapping with a drone was developed. The algorithm addresses a simpler problem than the usual three-dimensional ones: instead of looking forward into the room, the drone looks downwards and tries to build a map by piecing together images of the floor. The method is efficient, but the quality of the drone camera used is too poor for the method to give reliable results.
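
    A minimal sketch of one step described above: estimating the 2D rigid pose change between consecutive rectified floor images from matched feature points (e.g., SURF correspondences), using a least-squares (Kabsch-style) fit. The names and the outlier-free assumption are illustrative, not the thesis's exact estimator.

```python
import numpy as np

def estimate_2d_pose(prev_pts, curr_pts):
    """Least-squares 2D rigid transform (rotation + translation) between matched
    keypoint sets from consecutive rectified floor images.

    prev_pts, curr_pts: (N, 2) arrays of matched feature coordinates.
    Returns (theta, t) such that R(theta) @ prev + t ~= curr.
    """
    pc = prev_pts.mean(axis=0)
    cc = curr_pts.mean(axis=0)
    H = (prev_pts - pc).T @ (curr_pts - cc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflection solutions
        R = Vt.T @ np.diag([1.0, -1.0]) @ U.T
    theta = np.arctan2(R[1, 0], R[0, 0])
    t = cc - R @ pc
    return theta, t
```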

    3-D reconstruction and morphological analysis of normal rectum

    There is no documentation of the 3-D dimensions of the normal rectum. We present a method for the architectural and volumetric analysis of the rectum which makes use of a computerized 3-D reconstruction and CT scans of cross-sections of the rectum. The technique is simple and fast, yet potentially reliable. An attempt has been made to calculate the volume, area, and average diameter of non-distended (normal) rectums. Once we can standardize the technique, we can study the dimensions of the rectum in adult males and females. This thesis constitutes a unified presentation of the essential aspects of the method used in this study. We plan to study the dimensions of distended (normal) rectums measured in single-column barium enema X-rays in two views. Once the technique is standardized, it can be applied to study the dimensions of diseased rectums.
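
    A minimal sketch of the volumetric calculation described above, assuming binary segmentations of the rectum on each CT cross-section: slice areas are summed and multiplied by the slice spacing (Cavalieri's principle), and an average diameter is derived assuming roughly circular cross-sections. The function names and these geometric assumptions are illustrative, not the thesis's exact procedure.

```python
import numpy as np

def volume_from_slices(masks, pixel_area_mm2, slice_spacing_mm):
    """Estimate organ volume from segmented CT cross-sections (Cavalieri's principle):
    the sum of slice areas times the slice spacing.

    masks: (S, H, W) boolean array, one binary segmentation per CT slice.
    Returns the volume in mm^3.
    """
    slice_areas = masks.reshape(masks.shape[0], -1).sum(axis=1) * pixel_area_mm2
    return slice_areas.sum() * slice_spacing_mm

def mean_diameter_from_areas(slice_areas_mm2):
    """Average diameter assuming roughly circular cross-sections: d = 2*sqrt(A/pi)."""
    return float(np.mean(2.0 * np.sqrt(np.asarray(slice_areas_mm2) / np.pi)))
```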

    Towards Image-Guided Pediatric Atrial Septal Defect Repair

    Congenital heart disease occurs in 107.6 out of 10,000 live births, with atrial septal defects (ASD) accounting for 10% of these conditions. Historically, ASDs were treated with open heart surgery using cardiopulmonary bypass, allowing a patch to be sewn over the defect. In 1976, King et al. demonstrated the use of a transcatheter occlusion procedure, thus reducing the invasiveness of ASD repair. Localization during these catheter-based procedures has traditionally relied on bi-plane fluoroscopy; more recently, trans-esophageal echocardiography (TEE) and intra-cardiac echocardiography (ICE) have been used to navigate these procedures. Although there is a high success rate using the transcatheter occlusion procedure, fluoroscopy poses a radiation dose risk to both patient and clinician. The impact of this dose on patients is important because many of those undergoing this procedure are children, who have an increased risk associated with radiation exposure. Their longer life expectancy than adults provides a larger window of opportunity for expressing the damaging effects of ionizing radiation. In addition, epidemiologic studies of exposed populations have demonstrated that children are considerably more sensitive to the carcinogenic effects of radiation. Image-guided surgery (IGS) uses pre-operative and intra-operative images to guide surgery or an interventional procedure. Central to every IGS system is a software application capable of processing and displaying patient images, registering multiple coordinate systems, and interfacing with a tool tracking system. We have developed a novel image-guided surgery framework called Kit for Navigation by Image Focused Exploration (KNIFE). This software system serves as the core technology of a system for reducing radiation exposure to pediatric patients. The bulk of the initial work in this research endeavour was the development of KNIFE, which went through many iterations before arriving at its current state according to the established feature requirements. Secondly, since this work involved the use of captured medical images in an IGS software suite, a brief analysis of the physics behind the images was conducted. Through this aspect of the work, the intrinsic parameters (principal point and focal point) of the fluoroscope were quantified using a 3D grid calibration phantom. A second grid phantom was traversed through the fluoroscopic imaging volume of image intensifier (II) and flat-panel based systems at 2 cm intervals, building a scatter field of the volume to demonstrate pincushion and 'S' distortion in the images. The effect of projection distortion on the images was assessed by measuring the fiducial registration error (FRE) of each point used in two different registration techniques, where both methods utilized ordinary Procrustes analysis but the second used a projection matrix built from the fluoroscope's calculated intrinsic parameters. A case study was performed to test whether the projection registration outperforms the rigid transform alone. Using the knowledge generated, we were able to successfully design and complete mock clinical procedures using cardiac phantom models. These mock trials initially used a single point to represent catheter location, but this was eventually replaced with a full shape model that offered numerous advantages. At the conclusion of this work, a novel protocol for conducting image-guided ASD procedures was developed. Future work would involve the construction of novel EM-tracked tools, phantom models for other vascular diseases, and finally clinical integration and use.
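
    A toy sketch relating the quantities discussed above, assuming a simple pinhole model of the fluoroscope: 3D grid points are projected using an assumed focal length and principal point, and the RMS fiducial registration error between registered and reference fiducials is computed. The model, names, and parameters are illustrative assumptions, not the calibration actually performed in this work.

```python
import numpy as np

def project_points(points_3d, focal_mm, principal_point_px, pixel_pitch_mm):
    """Ideal pinhole projection of 3D points into fluoroscope image coordinates.

    A toy stand-in for the intrinsic parameters discussed above: focal_mm plays the
    role of the focal length and principal_point_px the principal point.
    points_3d: (N, 3) points in the imager coordinate frame (z along the beam axis).
    """
    x = points_3d[:, 0] / points_3d[:, 2]
    y = points_3d[:, 1] / points_3d[:, 2]
    u = (focal_mm / pixel_pitch_mm) * x + principal_point_px[0]
    v = (focal_mm / pixel_pitch_mm) * y + principal_point_px[1]
    return np.stack([u, v], axis=1)

def fre_rms(registered, reference):
    """RMS fiducial registration error between registered and reference fiducials."""
    return float(np.sqrt(np.mean(np.sum((registered - reference) ** 2, axis=1))))
```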

    Vision Sensors and Edge Detection

    The Vision Sensors and Edge Detection book reflects a selection of recent developments within the area of vision sensors and edge detection. There are two sections in this book. The first section presents vision sensors, with applications to panoramic vision sensors, wireless vision sensors, and automated vision sensor inspection, and the second presents image processing techniques such as image measurements, image transformations, filtering, and parallel computing.