
    A Brief Comparison Between Available Bio-printing Methods

    The scarcity of organs for transplant has led to long waiting lists of very sick patients. In drug development, the time required for human trials greatly increases the time to market, and drug companies are searching for alternative environments in which in-vivo conditions can be closely replicated. Both problems could be addressed by manufacturing artificial human tissue. Recently, researchers in tissue engineering have developed tissue generation methods based on 3-D printing to fabricate artificial human tissue. Broadly, these methods can be classified as laser-assisted and laser-free. The former offer very fine spatial resolution (tens of μm) but suffer from slow speed (< 10^2 drops per second). The latter have coarser spatial resolution (hundreds of μm) but are very fast (up to 5×10^3 drops per second). In this paper we review state-of-the-art methods in each of these classes and provide a comparison based on reported resolution, printing speed, cell density and cell viability.
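    As a rough sense of the throughput gap implied by the quoted drop rates, here is a back-of-the-envelope sketch; the number of drops per construct is an assumed figure for illustration only, and only the two rates come from the abstract above.

        # Back-of-the-envelope comparison of printing times implied by the quoted
        # drop rates. DROPS_PER_CONSTRUCT is an assumption made for illustration;
        # only the two rates are taken from the abstract.
        DROPS_PER_CONSTRUCT = 1_000_000   # assumed drop count for one construct

        rates = {
            "laser-assisted": 1e2,   # abstract: < 10^2 drops per second
            "laser-free": 5e3,       # abstract: up to 5 x 10^3 drops per second
        }

        for name, rate in rates.items():
            hours = DROPS_PER_CONSTRUCT / rate / 3600
            print(f"{name:>14}: ~{hours:.2f} hours per construct")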

    Inequality and Network Formation Games

    This paper addresses the matter of inequality in network formation games. We employ a quantity that we call the Nash Inequality Ratio (NIR), defined as the maximal ratio between the highest and lowest costs incurred by individual agents in a Nash equilibrium strategy, to characterize the extent to which inequality is possible in equilibrium. We give tight upper bounds on the NIR for the network formation games of Fabrikant et al. (PODC '03) and Ehsani et al. (SPAA '11). With respect to the relationship between equality and social efficiency, we show that, contrary to common expectations, efficiency does not necessarily come at the expense of increased inequality. Comment: 27 pages, 4 figures. Accepted to Internet Mathematics (2014).
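    To make the definition concrete, here is a minimal sketch that evaluates the highest-to-lowest cost ratio for a single given equilibrium; the function name and the example costs are hypothetical, and the paper's NIR additionally maximizes this ratio over all Nash equilibria, which the sketch does not attempt.

        # Sketch: ratio of highest to lowest per-agent cost in one Nash equilibrium.
        # The paper's NIR is the maximum of this quantity over all Nash equilibria;
        # here we only evaluate a single equilibrium supplied by the caller.
        def cost_ratio(costs):
            if not costs or min(costs) <= 0:
                raise ValueError("expected strictly positive per-agent costs")
            return max(costs) / min(costs)

        # Hypothetical equilibrium with four agents and these per-agent costs.
        equilibrium_costs = [2.0, 3.5, 3.5, 7.0]
        print(cost_ratio(equilibrium_costs))  # prints 3.5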

    A Framework for Megascale Agent Based Model Simulations on Graphics Processing Units

    Agent-based modeling is a technique for modeling dynamic systems from the bottom up. Individual elements of the system are represented computationally as agents, and system-level behaviors emerge from the micro-level interactions of the agents. Contemporary state-of-the-art agent-based modeling toolkits are essentially discrete-event simulators designed to execute serially on the Central Processing Unit (CPU). They simulate Agent-Based Models (ABMs) by executing agent actions one at a time. In addition to imposing an unnatural execution order, these toolkits have limited scalability. In this article, we investigate data-parallel computer architectures such as Graphics Processing Units (GPUs) to simulate large-scale ABMs. We have developed a series of efficient, data-parallel algorithms for handling environment updates, various agent interactions, agent death and replication, and gathering statistics. We present three fundamental innovations that provide unprecedented scalability. The first is a novel stochastic memory allocator which enables parallel agent replication in O(1) average time. The second is a technique for resolving precedence constraints for agent actions in parallel. The third is a method that uses specialized graphics hardware to gather and process statistical measures. These techniques have been implemented on a modern GPU, resulting in a substantial performance increase. We believe that our system is the first completely GPU-based agent simulation framework. Although GPUs are the focus of our current implementations, our techniques can easily be adapted to other data-parallel architectures. We have benchmarked our framework against contemporary toolkits using two popular ABMs, namely SugarScape and StupidModel. Keywords: GPGPU, agent-based modeling, data-parallel algorithms, stochastic simulations.
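    To illustrate the data-parallel style the abstract contrasts with one-agent-at-a-time discrete-event execution, here is a minimal CPU-side sketch in NumPy of a synchronous, whole-grid agent update; the grid size, update rule, and variable names are assumptions for illustration and do not reflect the authors' framework or API.

        # Sketch: a data-parallel agent update applied to every agent at once,
        # in contrast to serial, one-agent-at-a-time discrete-event execution.
        # All parameters and rules below are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        GRID = 256                               # side length of the environment
        sugar = rng.random((GRID, GRID))         # per-cell resource level
        alive = rng.random((GRID, GRID)) < 0.1   # cells currently holding an agent
        energy = np.where(alive, 1.0, 0.0)       # per-agent energy store

        def step(sugar, alive, energy):
            """One synchronous update of the whole population and environment."""
            harvested = np.where(alive, sugar, 0.0)   # all agents harvest in parallel
            energy = energy + harvested
            sugar = sugar - harvested
            energy = np.where(alive, energy - 0.05, 0.0)  # metabolism cost
            alive = alive & (energy > 0.0)                # starved agents die
            sugar = np.minimum(sugar + 0.01, 1.0)         # resource regrowth
            return sugar, alive, energy

        for _ in range(100):
            sugar, alive, energy = step(sugar, alive, energy)
        print("agents alive after 100 steps:", int(alive.sum()))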

    Sparse and Low Rank Decomposition Based Batch Image Alignment for Speckle Reduction of Retinal OCT Images

    Optical Coherence Tomography (OCT) is an emerging technique in the field of biomedical imaging, with applications in ophthalmology, dermatology, coronary imaging, etc. Due to the underlying physics, OCT images usually suffer from a granular pattern, called speckle noise, which hinders interpretation. Here, a sparse and low-rank decomposition based method is used for speckle reduction in retinal OCT images. The technique takes as input several B-scans of the same location. These images are then batch-aligned using a sparse and low-rank decomposition based technique. Finally, the denoised image is produced by median filtering of the low-rank component of the processed data. Simultaneous decomposition and alignment of the images results in better performance than the simple registration-based methods used in the literature for noise reduction of OCT images. Comment: Accepted for presentation at ISBI'1
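    To sketch the decomposition-plus-median part of this pipeline, here is a small NumPy example that stacks co-located scans into a matrix, splits it into low-rank and sparse components with a crude alternating thresholding scheme, and median-filters the low-rank part; the solver, thresholds, and toy data are assumptions for illustration, and the batch alignment step the abstract adds is omitted.

        # Sketch: low-rank + sparse split of a stack of co-located scans,
        # followed by a pixelwise median of the low-rank component.
        # This is a crude alternating-thresholding solver, not the paper's method,
        # and it skips the batch alignment step described in the abstract.
        import numpy as np

        def soft_threshold(x, tau):
            return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

        def low_rank_sparse_split(D, lam=0.05, n_iter=20):
            L = np.zeros_like(D)
            S = np.zeros_like(D)
            tau = lam * np.linalg.norm(D, ord=2)   # threshold scaled to the data
            for _ in range(n_iter):
                # Low-rank update: singular-value thresholding of D - S.
                U, sig, Vt = np.linalg.svd(D - S, full_matrices=False)
                L = (U * soft_threshold(sig, tau)) @ Vt
                # Sparse update: entrywise shrinkage of D - L.
                S = soft_threshold(D - L, lam)
            return L, S

        # Hypothetical stack of 8 noisy 64x64 scans of the same location.
        rng = np.random.default_rng(0)
        clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
        scans = clean[None] + 0.2 * rng.standard_normal((8, 64, 64))
        D = scans.reshape(8, -1)                   # one scan per matrix row

        L, S = low_rank_sparse_split(D)
        denoised = np.median(L.reshape(8, 64, 64), axis=0)
        print("residual noise std:", float(np.std(denoised - clean)))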