
    Image Processing Techniques for Bone Cell Analysis

    Poster abstract: Osteoblasts and osteoclasts are two types of bone cell responsible for bone formation and bone resorption, respectively. Both cell types are critical to maintaining, repairing, and remodeling the skeleton of the human body, and both are involved in skeletal diseases such as osteoporosis and osteoarthritis. To resorb bone matrix, pre-osteoclasts fuse into one large multinucleated mature osteoclast. The area of this large multinucleated cell is measured to represent the formation and activity of mature osteoclast cells, while the number of osteoblast cells is a key factor determining the rate of bone formation. Thus, the area of mature osteoclasts and the number of osteoblasts are two critical parameters for assessing the effect of a stimulus on bone remodeling. To automatically obtain the number of osteoblast cells and the area of osteoclast cells from bright-field images, an image analysis technique, implemented in OpenCV, was developed. After cells are stained and photographed, edge maps of the acquired images are computed using edge detection techniques such as the Canny edge detector. The scheme requires a threshold value from the user and employs it to determine an initial edge map, which is displayed to the user. If the user is not satisfied with the outcome, they can adjust the threshold value, and a new edge map is obtained. Once the edge maps are satisfactory, they are converted into segmentation masks; this step eliminates background noise while retaining the objects/cells of interest. The technique then employs the Hough Circle Transform to identify and count the osteoblast cells present in the image. For the osteoclast cells, the scheme permits the user to manually select specific cells in order to determine their size as a ratio of the total image size.
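    A minimal sketch of such a user-in-the-loop OpenCV pipeline is given below, assuming a grayscale bright-field image; the file name, parameter values, and helper functions are illustrative, not the authors' implementation.

    import cv2
    import numpy as np

    def interactive_edge_map(gray, threshold):
        """Canny edge map; the caller re-invokes with a new threshold until
        the user is satisfied (the scheme's interactive step)."""
        return cv2.Canny(gray, threshold, threshold * 2)

    def edges_to_mask(edges):
        """Convert an edge map into a segmentation mask, suppressing
        background noise while retaining closed cell outlines."""
        kernel = np.ones((3, 3), np.uint8)
        closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
        contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        mask = np.zeros_like(edges)
        cv2.drawContours(mask, contours, -1, 255, thickness=cv2.FILLED)
        return mask

    def count_osteoblasts(gray):
        """Count roughly circular cells with the Hough Circle Transform."""
        circles = cv2.HoughCircles(cv2.medianBlur(gray, 5), cv2.HOUGH_GRADIENT,
                                   dp=1.2, minDist=20, param1=100, param2=30,
                                   minRadius=5, maxRadius=40)
        return 0 if circles is None else circles.shape[1]

    gray = cv2.imread("osteoblasts.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    mask = edges_to_mask(interactive_edge_map(gray, threshold=80))
    area_ratio = cv2.countNonZero(mask) / mask.size  # cell area vs. total image size
    print("osteoblasts:", count_osteoblasts(gray), "area ratio:", area_ratio)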

    Parallelized Ray Casting Volume Rendering and 3D Segmentation with Combinatorial Map

    Indiana University-Purdue University Indianapolis (IUPUI). The rapid development of digital technology has enabled real-time volume rendering of scientific data, in particular large microscopy datasets. In general, volume rendering techniques project 3D discrete datasets onto 2D image planes, with the generated views being transparent and assigned colors that are not necessarily the "real" colors. Volume rendering first requires a processing step that assigns different colors and transparency coefficients to different regions; then, based on the viewer and the dataset location, the method determines the final image. Popular techniques include ray casting, splatting, shear warp, and texture-based volume rendering. Of particular interest is ray casting, as it permits the display of objects interior to a dataset and can render complex objects such as skeleton and muscle. However, ray casting requires large amounts of memory and suffers from long processing times. One way to address this is to parallelize its implementation on programmable graphics processing hardware. This thesis proposes a GPU-based ray casting algorithm that can render a 3D volume in real-time applications. In addition to implementing volume rendering on programmable graphics hardware to decrease execution times, 3D image segmentation techniques can also be utilized to increase execution speeds. In 3D image segmentation, the dataset is partitioned into smaller regions based on specific properties. By using a 3D segmentation method in volume rendering applications, users can extract individual objects from within the 3D dataset for rendering and further analysis. This thesis proposes a 3D segmentation algorithm based on the combinatorial map that can be parallelized on graphics processing units.
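    The sketch below illustrates the per-ray kernel that such a GPU ray caster parallelizes (one thread per pixel/ray): sampling the volume along a ray and compositing color and opacity front to back. It is a CPU-side illustration with a made-up transfer function and random volume, not the thesis code.

    import numpy as np

    def transfer_function(density):
        """Map a scalar sample to (color, alpha); a stand-in for the step that
        assigns colors and transparency coefficients to different regions."""
        alpha = np.clip(density, 0.0, 1.0) * 0.1
        color = np.array([density, 0.5 * density, 1.0 - density])
        return color, alpha

    def cast_ray(volume, origin, direction, step=0.5, max_steps=512):
        """Sample along one ray and composite front to back."""
        acc_color, acc_alpha = np.zeros(3), 0.0
        pos = origin.astype(float)
        for _ in range(max_steps):
            idx = np.round(pos).astype(int)
            if np.any(idx < 0) or np.any(idx >= volume.shape):
                break  # ray has left the volume
            color, alpha = transfer_function(volume[tuple(idx)])
            acc_color += (1.0 - acc_alpha) * alpha * color
            acc_alpha += (1.0 - acc_alpha) * alpha
            if acc_alpha > 0.99:  # early ray termination
                break
            pos += step * direction
        return acc_color

    volume = np.random.rand(64, 64, 64)  # hypothetical microscopy volume
    pixel = cast_ray(volume, origin=np.array([0.0, 32.0, 32.0]),
                     direction=np.array([1.0, 0.0, 0.0]))
    print(pixel)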

    CIC Rearrangement Sarcoma: A Case Report and Literature Review

    Background: CIC-rearranged sarcoma (capicua transcriptional repressor-rearranged sarcoma, CRS) is a rare type of undifferentiated small round-cell sarcoma. There are few reported cases of CRS; as of 2017, 115 cases had been reported abroad and 10 cases in China. Case summary: The patient is a 41-year-old male who presented with a mass in the left lumbar region of more than 1 month's duration. Tumor excision was performed at another hospital, and pathology indicated CRS. PET-CT showed changes in the left lumbar region, and postoperative tissue-repair changes were initially considered; however, combined with the medical history and imaging features, the clinical diagnosis was recurrence of the tumor in the left lumbar region. Postoperatively, the patient was transferred to the burn department for pedicled skin-flap repair. Conclusion: CRS is rare, and the prognosis of these patients is poor. Surgical resection of the lesion is the first choice for patients without metastasis.

    On Tightness of the Tsaknakis-Spirakis Algorithm for Approximate Nash Equilibrium

    The search for the minimum approximation ratio for Nash equilibria of bi-matrix games has driven a series of studies, starting with 3/4, followed by 1/2, 0.38, and 0.36, and culminating in the best known approximation ratio of 0.3393 by Tsaknakis and Spirakis (the TS algorithm for short). Efforts to improve this bound have been unsuccessful for the past 14 years. This work makes the first progress in showing that the bound of 0.3393 is indeed tight for the TS algorithm. Next, we characterize all possible tight game instances for the TS algorithm, which allows us to conduct extensive experiments to study the behavior of the TS algorithm and to compare it with other algorithms. We find that this lower bound is not robust for the TS algorithm, in that any perturbation of the initial point may move the solution away from the tight bound. Other approximation algorithms, such as Fictitious Play and Regret Matching, also find better approximate solutions on these instances. However, the new distributed algorithm for approximate Nash equilibrium by Czumaj et al. performs consistently at the same bound of 0.3393. This shows that the lower-bound instances generated against the TS algorithm can serve as a benchmark for the design and analysis of approximate Nash equilibrium algorithms.
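    For reference, the sketch below computes the quantity these approximation ratios bound: the maximum regret of a strategy profile (x, y) in a bi-matrix game (R, C), which is at most eps exactly when (x, y) is an eps-approximate Nash equilibrium. The game and profiles are toy examples, not the paper's tight instances.

    import numpy as np

    def approximation_quality(R, C, x, y):
        """Maximum regret of the two players at profile (x, y)."""
        regret_row = np.max(R @ y) - x @ R @ y   # best pure response minus actual payoff
        regret_col = np.max(x @ C) - x @ C @ y
        return max(regret_row, regret_col)

    # Matching pennies with payoffs scaled to [0, 1].
    R = np.array([[1.0, 0.0], [0.0, 1.0]])
    C = 1.0 - R
    y = np.array([0.5, 0.5])
    print(approximation_quality(R, C, np.array([0.5, 0.5]), y))  # 0.0: exact equilibrium
    print(approximation_quality(R, C, np.array([1.0, 0.0]), y))  # 0.5: column player has regret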

    ABSNFT: Securitization and Repurchase Scheme for Non-Fungible Tokens Based on Game Theoretical Analysis

    The Non-Fungible Token (NFT) is viewed as one of the important applications of blockchain technology. Although NFTs have a large market and multiple practical standards, the existing mechanisms of NFT markets have several limitations. This work proposes a novel securitization and repurchase scheme for NFTs to overcome these limitations. We first provide an Asset-Backed Securities (ABS) solution to address the non-fungibility of NFTs. Our securitization design aims to enhance the liquidity of NFTs and to enable Oracles and Automatic Market Makers (AMMs) for NFTs. We then propose a novel repurchase protocol that allows a participant owning a portion of an NFT to repurchase the other shares and obtain complete ownership. As participants may bid strategically during the acquisition process, the repurchase process is formulated as a Stackelberg game to explore the equilibrium prices. We also provide solutions to handle market difficulties such as budget constraints and lazy bidders. Comment: To appear in Financial Cryptography and Data Security 202
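    A toy illustration of the Stackelberg structure (not the paper's mechanism): a leader who wants full ownership posts a per-share repurchase price, each follower best-responds by selling iff the price meets their private valuation, and the leader anticipates this when choosing the price. All numbers are hypothetical.

    import numpy as np

    follower_values = np.array([0.8, 1.1, 1.5])  # hypothetical per-share valuations
    full_ownership_value = 6.0                   # leader's value for the whole NFT

    def leader_utility(price):
        """Backward induction: followers best-respond, leader anticipates it."""
        sellers = follower_values <= price
        if not sellers.all():           # repurchase fails without every share
            return 0.0
        return full_ownership_value - price * len(follower_values)

    prices = np.linspace(0.0, 3.0, 3001)
    best = max(prices, key=leader_utility)
    print(f"leader's price ~ {best:.2f}, utility {leader_utility(best):.2f}")

    # The leader's optimum is the lowest price at which every follower sells,
    # here the highest follower valuation; richer models (budgets, lazy bidders)
    # change this best response, which is what the paper's protocol addresses.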

    Transformation Model With Constraints for High Accuracy of 2D-3D Building Registration in Aerial Imagery

    This paper proposes a novel rigorous transformation model for 2D-3D registration to address the difficulty of obtaining a sufficient number of well-distributed ground control points (GCPs) in urban areas with tall buildings. The proposed model adds two types of geometric constraints, co-planarity and perpendicularity, to the conventional photogrammetric collinearity model. Both types of geometric information are obtained directly from building structures, from which the geometric constraints are automatically created and combined into the conventional transformation model. A test field located in downtown Denver, Colorado, is used to evaluate the accuracy and reliability of the proposed method, and a comparative analysis against the conventional method is conducted. Experimental results demonstrate that: (1) the theoretical accuracy of the solved registration parameters can reach 0.47 pixels, whereas the other methods reach only 1.23 and 1.09 pixels; (2) the RMS values of 2D-3D registration achieved by the proposed model are only two pixels along the x and y directions, much smaller than those of the conventional model, which are approximately 10 pixels. These results demonstrate that the proposed method significantly improves the accuracy of 2D-3D registration with far fewer GCPs in urban areas with tall buildings.
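    The sketch below illustrates the idea of augmenting collinearity residuals with building-geometry constraints in one least-squares system; the symbols, sign conventions, and residual forms are illustrative, not the paper's exact formulation.

    import numpy as np

    def collinearity_residual(params, X, xy_obs, f=1.0):
        """Reprojection residual of the collinearity equations for one GCP."""
        omega, phi, kappa, Xc, Yc, Zc = params
        Rx = np.array([[1, 0, 0],
                       [0, np.cos(omega), -np.sin(omega)],
                       [0, np.sin(omega),  np.cos(omega)]])
        Ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                       [0, 1, 0],
                       [-np.sin(phi), 0, np.cos(phi)]])
        Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                       [np.sin(kappa),  np.cos(kappa), 0],
                       [0, 0, 1]])
        d = (Rz @ Ry @ Rx) @ (X - np.array([Xc, Yc, Zc]))
        return -f * d[:2] / d[2] - xy_obs

    def perpendicularity_residual(e1, e2):
        """Two building edges known to be perpendicular: dot product -> 0."""
        return np.dot(e1 / np.linalg.norm(e1), e2 / np.linalg.norm(e2))

    def coplanarity_residual(p, plane_n, plane_d):
        """A facade point known to lie on a plane: n.p + d -> 0."""
        return plane_n @ p + plane_d

    # In a bundle-adjustment-style solve, all three residual types would be
    # stacked and minimized jointly, e.g. with scipy.optimize.least_squares,
    # so the geometric constraints substitute for some of the GCPs.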

    PRIOR: Prototype Representation Joint Learning from Medical Images and Reports

    Contrastive-learning-based vision-language joint pre-training has emerged as a successful representation learning strategy. In this paper, we present a prototype representation learning framework incorporating both global and local alignment between medical images and reports. In contrast to standard global multi-modality alignment methods, we employ a local alignment module for fine-grained representation. Furthermore, a cross-modality conditional reconstruction module is designed to interchange information across modalities during training by reconstructing masked images and reports. For reconstructing long reports, a sentence-wise prototype memory bank is constructed, enabling the network to focus on low-level localized visual features and high-level clinical linguistic features. Additionally, a non-auto-regressive generation paradigm is proposed for reconstructing non-sequential reports. Experimental results on five downstream tasks, including supervised classification, zero-shot classification, image-to-text retrieval, semantic segmentation, and object detection, show that the proposed method outperforms other state-of-the-art methods across multiple datasets and under different dataset size settings. The code is available at https://github.com/QtacierP/PRIOR. Comment: Accepted by ICCV 202
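    For orientation, the sketch below shows the symmetric image-report contrastive (InfoNCE) objective that such global alignment builds on; PRIOR adds local alignment and prototype-based reconstruction on top. Embedding shapes and the temperature value are illustrative.

    import torch
    import torch.nn.functional as F

    def info_nce(image_emb, text_emb, temperature=0.07):
        """Symmetric contrastive loss over a batch of paired image/report embeddings."""
        img = F.normalize(image_emb, dim=-1)
        txt = F.normalize(text_emb, dim=-1)
        logits = img @ txt.t() / temperature   # pairwise cosine similarities
        targets = torch.arange(img.size(0))    # i-th image matches i-th report
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))

    loss = info_nce(torch.randn(8, 512), torch.randn(8, 512))
    print(loss.item())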