
    Efficient hardware implementations of high throughput SHA-3 candidates Keccak, Luffa and Blue Midnight Wish for single- and multi-message hashing

    In November 2007, NIST announced that it would organize the SHA-3 competition to select a new cryptographic hash function family by 2012. In the selection process, the hardware performance of the candidates will play an important role. Our analysis of previously proposed hardware implementations shows that three SHA-3 candidate algorithms can provide superior performance in hardware: Keccak, Luffa and Blue Midnight Wish (BMW). In this paper, we provide efficient and fast hardware implementations of these three algorithms. Considering both single- and multi-message hashing applications, with an emphasis on both speed and efficiency, our work presents a more comprehensive analysis of their hardware performance by providing performance figures for different target devices. To the best of our knowledge, this is the first work that provides a comparative analysis of SHA-3 candidates in multi-message applications. We discover that the BMW algorithm can provide much higher throughput than previously reported when used in multi-message hashing. We also show that better utilization of resources can increase speed via different configurations. We implement our designs in Verilog HDL and map them to both ASIC and FPGA devices (Spartan-3, Virtex-2, and Virtex-4) for a better comparison with those in the literature. We report the total area, maximum frequency, maximum throughput and throughput/area of the designs for all target devices. Given that the selection process for SHA-3 is still open, our results will be instrumental in evaluating the hardware performance of the candidates.
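    The throughput and throughput/area figures reported in abstracts like this one are typically derived from the clock rate and the number of cycles a core needs per message block. A minimal sketch of that arithmetic; the block size, frequency, cycle count, and area below are hypothetical illustrations, not the paper's measurements:

```python
def throughput_bps(block_bits: int, f_max_hz: float, cycles_per_block: int) -> float:
    """Hash-core throughput in bits per second: bits absorbed per block,
    divided by the time one block takes at the maximum clock frequency."""
    return block_bits * f_max_hz / cycles_per_block

def efficiency(throughput: float, area_slices: int) -> float:
    """Throughput per unit area (here: per FPGA slice), the usual
    efficiency metric for comparing hardware hash implementations."""
    return throughput / area_slices

# Hypothetical figures: a 1024-bit block, 200 MHz clock, 24 cycles per block,
# and a 2000-slice design.
tp = throughput_bps(1024, 200e6, 24)
print(tp / 1e9, "Gbit/s")
print(efficiency(tp, 2000) / 1e6, "Mbit/s per slice")
```

Multi-message hashing changes the cycle accounting (independent messages can be interleaved to hide pipeline latency), which is why the same core can report different throughput in single- and multi-message mode.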

    Pairwise sequence alignment with block and character edit operations

    Pairwise sequence comparison is one of the most fundamental problems in string processing. The most common metric to quantify the similarity between sequences S and T is edit distance, d(S,T), which corresponds to the number of characters that need to be substituted, deleted from, or inserted into S to generate T. However, fewer edit operations may be sufficient for some string pairs if larger rearrangements are permitted. Block edit distance refers to such changes at the substring level (i.e., blocks), penalizing entire block removals, insertions, copies, and reversals with the same cost as single-character edits (Lopresti & Tomkins, 1997). Most studies of block edit distance to date have aimed only to characterize the distance itself, for applications in sequence nearest-neighbor search, without reporting the full alignment details. Although a few tools, such as GR-Aligner, try to solve block edit distance for genomic sequences, they have limited functionality and are no longer maintained. Here, we present SABER, an algorithm to solve block edit distance that supports block deletions, block moves, and block reversals in addition to the classical single-character edit operations. Our algorithm runs in O(m^2 · n · l_range) time for |S| = m, |T| = n, and a permitted block size range of l_range, and can report all breakpoints for the block operations. We also provide an implementation of SABER currently optimized for genomic sequences (i.e., generated by the DNA alphabet), although the algorithm can theoretically be used for any alphabet. SABER is available at http://github.com/BilkentCompGen/sabe
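    The classical single-character layer that block edit distance builds on is the standard edit distance d(S,T) described above. A minimal dynamic-programming sketch of that base case (the block operations SABER adds on top are omitted here):

```python
def edit_distance(s: str, t: str) -> int:
    """Classical Levenshtein edit distance: unit-cost substitutions,
    insertions, and deletions, computed by dynamic programming."""
    m, n = len(s), len(t)
    # d[i][j] = edit distance between s[:i] and t[:j]
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i          # delete all of s[:i]
    for j in range(n + 1):
        d[0][j] = j          # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # match / substitution
    return d[m][n]

print(edit_distance("ACGT", "AGT"))  # 1: a single deletion suffices
```

Block operations extend this recurrence with transitions that consume whole substrings at once, which is where the extra m and l_range factors in SABER's running time come from.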

    Distance Education Adaptation of Vocational High School Students within the Digital Divide

    The rapid development of information and communication technologies has led to rapid change in the education system. The most important alteration is the adoption of distance education methods in universities. In this context, this study was carried out to determine whether there is a digital divide among vocational high school students from different socioeconomic backgrounds who take their courses through distance education. The study was carried out with the participation of 891 first-year students of the Technical Programs Department of İzmir Vocational High School, Dokuz Eylül University. One-sample t-tests, ANOVA and spatial statistical methods (weighted average center, weighted standard distance and weighted standard deviation ellipse) were used in the analysis of the data. In evaluating the results, the students' gender and the cities from which they graduated were used as factors. Among students who own information technologies, men were found to use them more effectively than women. Additionally, the spatial statistics indicate no digital divide between regions of the country according to the city from which students graduated.
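    The weighted average center used in the spatial analysis above has a simple closed form: the x-coordinate is Σ wᵢxᵢ / Σ wᵢ, and likewise for y. A minimal sketch; the coordinates and weights below are made-up illustrations, not the study's data:

```python
def weighted_mean_center(points, weights):
    """Weighted average center of (x, y) points:
    (sum(w*x)/sum(w), sum(w*y)/sum(w))."""
    w_sum = sum(weights)
    x = sum(w * px for w, (px, _) in zip(weights, points)) / w_sum
    y = sum(w * py for w, (_, py) in zip(weights, points)) / w_sum
    return x, y

# Hypothetical (lon, lat) city coordinates weighted by student counts:
pts = [(27.1, 38.4), (29.0, 41.0), (32.9, 39.9)]
counts = [500, 250, 141]
print(weighted_mean_center(pts, counts))
```

The weighted standard distance and standard deviation ellipse are the corresponding second-moment statistics around this center, describing how dispersed and how directionally skewed the weighted point cloud is.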

    THE ACTIVE PARTICIPATION OF SUBJECT MATTER EXPERTS IN E-COURSE PRODUCTION: A CASE STUDY FROM ANADOLU UNIVERSITY OPEN EDUCATION SYSTEM

    A storyboard is used in the field of e-learning as a tool for subject matter experts and instructional designers to speak the same language while producing e-course materials. In this study, the roles and experiences of the subject matter experts, who are the keystones of the team that produces storyboards of e-learning materials within the Anadolu University Open Education System, were examined in an attempt to evaluate the e-course production system. To this end, a qualitative case study design was used, and interview data were collected from subject matter experts, technical production staff and decision-makers. Qualitative data analysis revealed that the roles and responsibilities of the subject matter experts should be better defined and more clearly structured, that they should be trained on certain aspects of storyboard development, and that their communication and collaboration with the production team need to be more regularly planned and continuously maintained.

    Hyperbolic Centroid Calculations for Text Classification

    A new development in NLP is the construction of hyperbolic word embeddings. As opposed to their Euclidean counterparts, hyperbolic embeddings are represented not by vectors, but by points in hyperbolic space. This makes the most common basic scheme for constructing document representations, namely the averaging of word vectors, meaningless in the hyperbolic setting. We reinterpret the vector mean as the centroid of the points represented by the vectors, and investigate various hyperbolic centroid schemes and their effectiveness at text classification.
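    One widely used hyperbolic centroid scheme is the Einstein midpoint in the Klein ball model, which weights each point by its Lorentz factor; it is shown here as an illustrative example of the kind of scheme the paper compares, not necessarily the one the authors favor:

```python
import math

def einstein_midpoint(points):
    """Einstein midpoint of points in the Klein ball model of hyperbolic
    space. Each point is a tuple with Euclidean norm < 1; points are
    weighted by their Lorentz factors gamma(x) = 1 / sqrt(1 - ||x||^2)."""
    gammas = [1.0 / math.sqrt(1.0 - sum(c * c for c in p)) for p in points]
    g_sum = sum(gammas)
    dim = len(points[0])
    return tuple(
        sum(g * p[k] for g, p in zip(gammas, points)) / g_sum
        for k in range(dim)
    )

print(einstein_midpoint([(0.5, 0.0), (-0.5, 0.0)]))  # symmetric pair -> origin
```

Because points nearer the boundary of the ball have larger Lorentz factors, the midpoint is pulled toward them relative to the plain Euclidean mean, which is exactly the behavior that makes naive vector averaging inappropriate for hyperbolic embeddings.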

    Integration of biological networks of different types and their comparison based on local topological feature vectors

    TÜBİTAK EEEAG Project, 01.12.2016. In this project, we developed a framework for the analysis of integrated genome-scale networks using directed graphlet signatures. In addition, we developed a novel graph layout algorithm specific for visualizing aligned networks. Analysis of integrated genome-scale networks is a challenging problem due to the heterogeneity of high-throughput data. There are several topological measures, such as graphlet counts, for the characterization of biological networks. In this project, we present methods for counting small sub-graph patterns in integrated genome-scale networks which are modeled as labeled multidigraphs. We have obtained physical, regulatory, and metabolic interactions between H. sapiens proteins from the Pathway Commons database. The integrated network is filtered for tissue/disease-specific proteins by using a large-scale human transcriptional profiling study, resulting in several tissue- and disease-specific sub-networks. We have applied and extended the idea of graphlet counting in undirected protein-protein interaction (PPI) networks to directed multi-labeled networks and represented each network as a vector of graphlet counts. Graphlet counts are assessed for statistical significance by comparison against a set of randomized networks. We present our results on the analysis of differential graphlets between different conditions and on the utility of graphlet count vectors for clustering multiple condition-specific networks. Our results show that there are numerous statistically significant graphlets in integrated biological networks and that the graphlet signature vector can be used as an effective representation of a multi-labeled network for clustering and systems-level analysis of tissue/disease-specific networks. In addition, the proposed graph layout algorithm can be used to visualize the similarities and differences between aligned regions of these networks.
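    A toy illustration of the graphlet-counting idea above: counting one small directed pattern, the feed-forward loop (a 3-node directed graphlet), and treating such counts as entries of a signature vector. This is a sketch of the general approach on a plain directed graph, not the project's labeled-multidigraph implementation:

```python
from itertools import permutations

def count_feed_forward_loops(edges):
    """Count feed-forward loops (a->b, a->c, b->c) in a directed graph
    given as a list of (source, target) edges."""
    edge_set = set(edges)
    nodes = {u for e in edges for u in e}
    count = 0
    # Enumerate ordered node triples; each FFL is counted exactly once
    # because the roles a (source), b (intermediate), c (sink) are fixed.
    for a, b, c in permutations(nodes, 3):
        if (a, b) in edge_set and (a, c) in edge_set and (b, c) in edge_set:
            count += 1
    return count

edges = [("x", "y"), ("x", "z"), ("y", "z")]
print(count_feed_forward_loops(edges))  # 1
```

A full graphlet signature repeats this count for every small pattern (and, in the project's setting, for every combination of edge labels), and the resulting vectors are then compared across condition-specific networks or against randomized networks for significance.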

    Using Machine Learning in Forestry

    Advanced technology has increased demands and needs for innovative approaches to apply traditional methods more economically, effectively, quickly and easily in forestry, as in other disciplines. Recently emerging terms such as forestry informatics, precision forestry, smart forestry, Forestry 4.0, climate-intelligent forestry, digital forestry and forestry big data have started to take their place on the agenda of the forestry discipline. As a result, significant increases are observed in the number of academic studies in which modern approaches such as machine learning and the recently emerged automated machine learning (AutoML) are integrated into decision-making processes in forestry. This study aims to further increase the comprehensibility of machine learning algorithms in the Turkish language, to make their use widespread, and to serve as a resource for researchers interested in their use in forestry. Thus, it aims to bring to the national literature a review article that reveals both how machine learning has been used in various forestry activities from past to present and its potential future uses.

    Wheel Hub Fatigue Performance under Non-constant Rotational Loading and Comparison to Eurocycle Test

    The Wheel Eurocycle (EC) loading condition can be adapted to the hub as a result of similar loading characteristics on the vehicle. A correlation is constructed between road load data (RLD) for specified vehicles and the EC test spectrum. To correlate EC and RLD, the test speed and the axial and lateral loads at EC are converted to a cyclic loading condition, and relevant loading scenarios are generated. The rotational effect is taken into account. Pseudo-damage results of the RLD and EC spectra are compared, and the expected fatigue lifetime of the hub is presented.
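    Pseudo-damage comparisons between load spectra, like the RLD-vs-EC comparison above, are commonly computed with Palmgren-Miner linear accumulation over a Basquin S-N curve: D = Σ nᵢ/Nᵢ with Nᵢ = C / Sᵢᵏ. A hedged sketch of that standard recipe; the material constants and spectra below are hypothetical, not the study's values, and the paper does not state which damage model it uses:

```python
def cycles_to_failure(stress, C=1e22, k=5.0):
    """Basquin-type S-N curve: N = C / S^k.
    C and k are hypothetical material constants for illustration."""
    return C / stress ** k

def pseudo_damage(spectrum, C=1e22, k=5.0):
    """Palmgren-Miner linear damage accumulation, D = sum(n_i / N_i),
    over (stress amplitude, cycle count) bins of a counted spectrum."""
    return sum(n / cycles_to_failure(s, C, k) for s, n in spectrum)

# Hypothetical rainflow-counted spectra: (stress amplitude, cycle count)
rld = [(100.0, 2e5), (150.0, 5e4), (200.0, 1e4)]
ec = [(180.0, 8e4)]
print(pseudo_damage(rld), pseudo_damage(ec))
```

Because the same S-N exponent is applied to both spectra, the ratio of the two pseudo-damage numbers indicates how well the EC test reproduces the road-load severity, independent of the (unknown) absolute fatigue strength.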

    A Deep Learning Model for Automated Segmentation of Fluorescence Cell Images

    Deep learning techniques bring together key advantages in biomedical image segmentation. They speed up the process, increase reproducibility, and reduce the workload in segmentation and classification. Deep learning techniques can be used for analysing cell concentration and cell viability, as well as the size and form of each cell. In this study, we develop a deep learning model for automated segmentation of fluorescence cell images and apply it to fluorescence images recorded with a home-built epi-fluorescence microscope. A deep neural network model based on the U-Net architecture was built using a publicly available dataset of cell nuclei images [1]. A model accuracy of 97.3% was reached at the end of model training. Fluorescence cell images acquired with our home-built microscope were then segmented using the developed model. 141 of 151 cells in 5 images were successfully segmented, a segmentation success rate of 93.4%. This deep learning model can be extended to the analysis of different cell types and cell viability.
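    Segmentation quality in studies like the one above is reported either as a per-cell success rate, as here, or as a pixel-overlap score such as the Dice coefficient. A minimal sketch of both; the Dice metric is a standard illustration, not necessarily the metric behind the paper's 97.3% accuracy figure:

```python
def dice(mask_a, mask_b):
    """Dice overlap between two binary masks (flat sequences of 0/1):
    2 * |A intersect B| / (|A| + |B|)."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    return 2 * inter / (sum(mask_a) + sum(mask_b))

# Toy 1-D masks standing in for flattened segmentation images:
pred  = [1, 1, 1, 0, 0]
truth = [1, 1, 0, 0, 0]
print(dice(pred, truth))

# Per-cell success rate reported above: 141 of 151 cells segmented.
print(round(100 * 141 / 151, 1))  # 93.4
```

Per-cell success counts whole objects found, while Dice penalizes boundary errors within each object, so the two numbers answer different questions about the same segmentation.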