9 research outputs found

    Artificial Neural Network based Cancer Cell Classification

    This paper presents a system that performs automatic segmentation and cell characterization to predict, with high accuracy, the percentage of carcinoma (cancerous) cells in a given image. The system was designed and developed for the analysis of medical pathological images, based on a hybrid of syntactic and statistical approaches, using an Artificial Neural Network (ANN) as the classifier tool [2]. The system performs segmentation and classification much as the human vision system does [1] [9] [10] [12], which recognizes objects, perceives depth, and identifies different textures, curved surfaces, and surface inclination from texture information and brightness. In this paper, an attempt has been made to present an approach to soft-tissue characterization using texture-primitive features and segmentation with an ANN classifier tool. The present approach directly combines the second, third, and fourth steps into one algorithm. It is a semi-supervised approach in which supervision is involved only in defining the structure of the ANN; afterwards, the algorithm itself scans the whole image and performs segmentation and classification in unsupervised mode. Finally, the algorithm was applied to selected pathological images for segmentation and classification. The results agreed with manual segmentation and were clinically correlated [18] [21]. Keywords: Grayscale images, Histogram equalization, Gaussian filtering, Harris corner detector, Threshold, Seed point, Region growing segmentation, Tamura texture feature extraction, Artificial Neural Network (ANN), Artificial neuron, Synapses, Weights, Activation function, Learning function, Classification matrix
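The seed-point and region-growing steps named in the keywords can be illustrated with a minimal sketch. This is not the paper's algorithm, only the standard region-growing idea it builds on: starting from a seed pixel, absorb 4-connected neighbours whose intensity stays within a tolerance of the seed value. The image, seed, and tolerance below are made-up toy values.

```python
from collections import deque

def region_grow(image, seed, tol=10):
    """Grow a region from `seed` over a list-of-lists grayscale image,
    absorbing 4-connected neighbours within `tol` of the seed intensity."""
    h, w = len(image), len(image[0])
    sy, sx = seed
    seed_val = image[sy][sx]
    mask = [[False] * w for _ in range(h)]
    mask[sy][sx] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny][nx]
                    and abs(image[ny][nx] - seed_val) <= tol):
                mask[ny][nx] = True
                queue.append((ny, nx))
    return mask

# Toy example: a bright 4x4 square (intensity 200) on a dark background.
img = [[200 if 2 <= y < 6 and 2 <= x < 6 else 0 for x in range(8)]
       for y in range(8)]
region = region_grow(img, (3, 3))
print(sum(sum(row) for row in region))  # 16: exactly the bright square
```

In the paper's pipeline, texture features (e.g. Tamura features) computed over such regions would then feed the ANN classifier.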

    Consensus-less Security: A truly scalable distributed ledger

    No full text
    Distributed ledger technology was expected to spark a technical revolution similar to the internet revolution. Since the release of Bitcoin in 2008, many developments have significantly increased the performance of distributed ledger technology. Nevertheless, the first truly scalable ledger has yet to be deployed: all of them have issues with scaling in either throughput, the number of nodes that can validate transactions, or both. The concept behind a distributed ledger is that the integrity of the ledger is a shared responsibility. However, as soon as new technology emerges, misuse also surfaces, especially when financial gains are involved. The general solution for preventing such abuse in distributed ledger technology is global consensus: if the majority of a network is honest, and a majority vote is required on the validity of a transaction, no malicious transaction will succeed. A downside of requiring a majority vote is that every node eligible to vote must hold full knowledge of all previous transactions. This work argues that the requirement of global consensus is a major limiting factor for the scalability of current ledgers. The goal of this work is to design a scalable distributed ledger whose security does not rely on global consensus. It proposes a novel algorithm that guarantees security even under adversarial attack by up to a third of the network exhibiting Byzantine behavior. It does so using Trustchain, a pair-wise ledger designed by the Delft University of Technology, and the `Fair Witness Selection Protocol', a newly designed publicly verifiable witness selection algorithm with an indicated message and communication complexity of O(log*(n)). A mathematical lower bound is given on the security level of the algorithm, and its security is reduced to the security of the underlying hash function.
Several experiments were executed on the DAS-5 supercomputer to confirm the scalability of this work. These experiments show that the throughput of the network scales linearly; this has been tested with up to 2500 nodes (simultaneously acting as validators and clients). To the best of the author's knowledge, it is the only ledger that has no theoretical limits on the number of clients, the number of validators, or throughput. A peak throughput of 7025 tx/s was observed at a network size of 280 nodes. Furthermore, the total transaction time remained roughly constant at about 15 milliseconds regardless of network size.
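The core claim, that security reduces to the hash function, rests on the pair-wise chain structure: each block commits to its predecessor's hash, so tampering anywhere breaks every later link. The sketch below illustrates that idea only; the field names and encoding are invented for illustration and are not the actual Trustchain block format.

```python
import hashlib
import json

def block_hash(body):
    # Deterministic digest of a block body (illustrative JSON encoding).
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(owner, counterparty, payload, prev_hash):
    # A pair-wise (Trustchain-style) half-block: it extends the owner's own
    # chain via prev_hash and names the counterparty it transacts with.
    block = {"owner": owner, "counterparty": counterparty,
             "payload": payload, "prev": prev_hash}
    block["hash"] = block_hash(block)
    return block

def verify_chain(chain):
    # Integrity check: each block must commit to its predecessor's hash and
    # to its own body, so any tampering is detected.
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev"] != prev["hash"]:
            return False
        body = {k: v for k, v in cur.items() if k != "hash"}
        if cur["hash"] != block_hash(body):
            return False
    return True

genesis = make_block("alice", None, "genesis", "0" * 64)
b1 = make_block("alice", "bob", {"amount": 5}, genesis["hash"])
chain = [genesis, b1]
print(verify_chain(chain))   # True
b1["payload"] = {"amount": 500}   # tamper with a recorded transaction
print(verify_chain(chain))   # False: the stored hash no longer matches
```

The witness-selection protocol then only needs a small, verifiably chosen subset of nodes to check such chains, rather than a global majority vote.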

    On the Correctness of Program Execution when Cache Coherence is Maintained Locally at Data-Sharing Boundaries in Distributed Shared Memory Multiprocessors

    No full text
    Emerging multiprocessor architectures such as chip multiprocessors, embedded architectures, and massively parallel architectures demand faster, more efficient, and more scalable cache coherence schemes. In devising more cost-efficient schemes, formal insight into the system model is deemed useful. We, in this paper, build formalisms for execution in cache-based distributed shared-memory (DSM) multiprocessors obeying the Release Consistency model, and derive conditions for cache coherence. A cost-efficient cache coherence scheme without directories is designed. Our approach relies on processor-directed coherence actions, which are early in nature. The scheme exploits sharing information provided by a programmer-centric framework. Per-processor coherence buffers (CB) are employed to impose coherence on live shared variables between consecutive release points in the execution. Simulation of an 8-entry, 4-way associative CB-based system achieves a speedup of 1.07–4.31 over a full-map 3-hop directory scheme for six of the SPLASH-2 benchmarks.
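The coherence buffer's role can be sketched in software as a tiny set-associative structure. This is only a behavioral toy model under assumed semantics (track shared addresses written since the last release point; flush them all at a release), not the paper's hardware design; the entry counts mirror the 8-entry, 4-way configuration from the abstract.

```python
from collections import OrderedDict

class CoherenceBuffer:
    """Toy model of a small set-associative coherence buffer (CB): it tracks
    live shared addresses written since the last release point; flushing it
    at a release models early, processor-directed coherence actions."""

    def __init__(self, entries=8, ways=4):
        self.ways = ways
        # One OrderedDict per set, used as an LRU list of buffered addresses.
        self.sets = [OrderedDict() for _ in range(entries // ways)]

    def record_write(self, addr):
        s = self.sets[addr % len(self.sets)]
        if addr in s:
            s.move_to_end(addr)      # refresh LRU position
        else:
            if len(s) >= self.ways:  # set full: evict LRU (early write-back)
                s.popitem(last=False)
            s[addr] = True

    def release(self):
        # At a release point, every buffered address is made coherent
        # (written back / invalidated remotely) and the CB is cleared.
        flushed = [a for s in self.sets for a in s]
        for s in self.sets:
            s.clear()
        return flushed

cb = CoherenceBuffer()
for addr in (0x10, 0x14, 0x10, 0x18):
    cb.record_write(addr)
print(sorted(cb.release()))  # [16, 20, 24]: the three distinct addresses
```

Coherence work is thus bounded by the (small) buffer contents per release, instead of requiring directory lookups on every shared access.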

    Coherence Buffer: An Architectural Support for Imposing Early and Local Cache Coherence in Distributed Shared-Memory Multiprocessors

    No full text
    The cache coherence problem is pervasive, and a solution to this problem affects the memory performance and influences the amount of…

    Rice leaves disease classification using deep convolutional neural network

    No full text
    Rice diseases caused by fungus and bacteria, such as spot and sheath blight and leaf scald, affect crop yield. Farmers are limited in their ability to assess crop quality at large scale. Therefore, there is a need for an automatic leaf disease prediction tool to assist in applying corrective procedures. Deep learning models have excelled in several areas of computer vision. In this paper, a deep learning model based on a pre-trained CNN is customized by altering the model architecture and applying transfer learning methods; the resulting model, named PaddyLeaf15 CNN, is evaluated on a benchmark dataset from Kaggle. The results indicate that the proposed model outperforms VGG-16 and Inception V3 based models, with a highest model accuracy of 95%.
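Transfer learning, as used here, means freezing a pre-trained convolutional base and training only a new classification head on the features it extracts. The miniature below shows just that head-training step, with a logistic-regression head over hypothetical pooled features; the feature values and labels are invented, and the real work trains a softmax head over CNN features rather than this two-dimensional toy.

```python
import math

def train_head(features, labels, lr=0.5, epochs=200):
    """Transfer learning in miniature: the convolutional base is assumed
    frozen, and only this logistic-regression 'head' is trained (SGD on
    log-loss) over the base's extracted features."""
    dim = len(features[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            g = p - y                        # gradient of log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# Hypothetical pooled features: "healthy" (0) vs "diseased" (1) leaves.
feats = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]]
labels = [0, 0, 1, 1]
w, b = train_head(feats, labels)
print([predict(w, b, x) for x in feats])  # [0, 0, 1, 1]
```

Because only the head's parameters are updated, far less labeled data is needed than training a full CNN from scratch, which is the practical appeal for small agricultural datasets.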


    An NLP Based Approach for Extracting Intelligence from (Current Trends in Technology and Science, ISSN: 2279-0535, Volume III, Issue I)

    No full text
    Abstract: A vast amount of electronic information is available in the form of documents such as papers, emails, reports, and HTML pages. Sifting through such documents can yield very essential information, and an automated tool for identifying and extracting this kind of information would be of great use. This paper presents an automated approach for identifying a set of event patterns, called intelligent information, in natural language text.
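Pattern-based event extraction of this kind can be sketched with a small rule set: each pattern maps a surface form in raw text to a structured event tuple. The patterns and example sentence below are invented for illustration and are not the paper's actual rule set.

```python
import re

# Illustrative event patterns (hypothetical): label + regex with two
# capture groups for the actor and the object of the event.
PATTERNS = [
    ("ACQUISITION", re.compile(r"(\w+) acquired (\w+)")),
    ("HIRING",      re.compile(r"(\w+) hired (\w+)")),
]

def extract_events(text):
    """Scan free text and return (event_type, actor, object) triples."""
    events = []
    for label, pattern in PATTERNS:
        for m in pattern.finditer(text):
            events.append((label, m.group(1), m.group(2)))
    return events

doc = "Acme acquired Initech last year. Later, Initech hired Bob."
print(extract_events(doc))
# [('ACQUISITION', 'Acme', 'Initech'), ('HIRING', 'Initech', 'Bob')]
```

A full NLP pipeline would add tokenization, part-of-speech tagging, and named-entity recognition before such patterns are applied, but the pattern-matching core has this shape.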