
    Efficient Attacks on Homophonic Substitution Ciphers

    Substitution ciphers are one of the earliest types of ciphers. Examples of classic substitution ciphers include the well-known simple substitution and the less well-known homophonic substitution. Although simple substitution ciphers are indeed simple, both in terms of their use and the attacks against them, homophonic substitution ciphers are far more challenging to break. Even with modern computing technology, homophonic substitution ciphers remain a significant challenge. This project focuses on designing, implementing, and testing an efficient attack on homophonic substitution ciphers. We use an iterative approach that generalizes the fastest known attack on simple substitution ciphers and also employs a heuristic search technique for improved efficiency. We test our algorithm on a wide variety of homophonic substitution ciphers. Finally, we apply our technique to the “Zodiac 340” cipher, an unsolved ciphertext created in the 1970s by the infamous Zodiac killer.
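    The abstract does not include code, but the style of iterative attack it generalizes is easy to sketch. The following is a minimal hill-climbing solver for a *simple* substitution cipher, scoring candidate keys by digram statistics in the spirit of Jakobsen-style fast attacks; the toy ENGLISH_DIGRAMS table, the random initial key, and the scoring details are illustrative assumptions, not the authors' implementation.

```python
import random
import string

# Toy digram weights; a real attack would use log-statistics
# built from a large English corpus (illustrative assumption).
ENGLISH_DIGRAMS = {"TH": 3.2, "HE": 3.0, "IN": 2.4, "ER": 2.2, "AN": 2.1}

def score(text):
    """Score a plaintext candidate by summing digram weights."""
    return sum(ENGLISH_DIGRAMS.get(text[i:i + 2], 0.0)
               for i in range(len(text) - 1))

def decrypt(ciphertext, key):
    """Apply a simple-substitution key (cipher letter -> plain letter)."""
    return "".join(key.get(c, c) for c in ciphertext)

def hill_climb(ciphertext, iterations=10000):
    letters = list(string.ascii_uppercase)
    shuffled = letters[:]
    random.shuffle(shuffled)
    key = dict(zip(letters, shuffled))          # random initial key
    best = score(decrypt(ciphertext, key))
    for _ in range(iterations):
        a, b = random.sample(letters, 2)        # try swapping two mappings
        key[a], key[b] = key[b], key[a]
        s = score(decrypt(ciphertext, key))
        if s > best:
            best = s                            # keep an improving swap
        else:
            key[a], key[b] = key[b], key[a]     # revert a worsening swap
    return key, best
```

    A homophonic attack must additionally decide which of several ciphertext symbols map to the same plaintext letter, which is where the paper's heuristic search comes in.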

    Linked Data Quality Assessment and its Application to Societal Progress Measurement

    In recent years, the Linked Data (LD) paradigm has emerged as a simple mechanism for employing the Web as a medium for data and knowledge integration, where both documents and data are linked. Moreover, the semantics and structure of the underlying data are kept intact, making this the Semantic Web. LD essentially entails a set of best practices for publishing and connecting structured data on the Web, which allows publishing and exchanging information in an interoperable and reusable fashion. Many different communities on the Internet, such as geographic, media, life sciences and government, have already adopted these LD principles. This is confirmed by the dramatically growing Linked Data Web, where currently more than 50 billion facts are represented. With the emergence of the Web of Linked Data, several use cases become possible thanks to the rich and disparate data integrated into one global information space. Linked Data, in these cases, not only assists in building mashups by interlinking heterogeneous and dispersed data from multiple sources but also empowers the uncovering of meaningful and impactful relationships. These discoveries have paved the way for scientists to explore existing data and uncover meaningful outcomes that they might not have been aware of previously.

    In all these use cases utilizing LD, one crippling problem is the underlying data quality. Incomplete, inconsistent or inaccurate data gravely affects the end results, making them unreliable. Data quality is commonly conceived as fitness for use, be it for a certain application or use case; datasets that contain quality problems may still be useful for certain applications, depending on the use case at hand. Thus, LD consumption has to deal with the problem of getting the data into a state in which it can be exploited for real use cases. Insufficient data quality can be caused either by the LD publication process or can be intrinsic to the data source itself. A key challenge is to assess the quality of datasets published on the Web and to make this quality information explicit. Assessing data quality is a particular challenge in LD, as the underlying data stems from a set of multiple, autonomous and evolving data sources. Moreover, the dynamic nature of LD makes assessing quality crucial for measuring how accurately the real world is represented. On the document Web, data quality can only be indirectly or vaguely defined, but there is a need for more concrete and measurable data quality metrics for LD. Such metrics include correctness of facts with respect to the real world, adequacy of the semantic representation, quality of interlinks, interoperability, timeliness, and consistency with regard to implicit information.

    Even though data quality is an important concept in LD, few methodologies have been proposed to assess the quality of these datasets. Thus, in this thesis, we first unify 18 data quality dimensions and provide a total of 69 metrics for the assessment of LD. The first methodology relies on LD experts for the assessment. This assessment is performed with the help of the TripleCheckMate tool, which was developed specifically to assist LD experts in assessing the quality of a dataset, in this case DBpedia. The second methodology is a semi-automatic process, in which the first phase involves the detection of common quality problems through the automatic creation of an extended schema for DBpedia. The second phase involves the manual verification of the generated schema axioms. Thereafter, we employ the wisdom of the crowd, i.e. workers of online crowdsourcing platforms such as Amazon Mechanical Turk (MTurk), to assess the quality of DBpedia. We then compare the two approaches (the previous assessment by LD experts and the assessment by MTurk workers in this study) in order to measure the feasibility of each type of user-driven data quality assessment methodology. Additionally, we evaluate another semi-automated methodology for LD quality assessment, which also involves human judgement. In this semi-automated methodology, selected metrics are formally defined and implemented as part of a tool, namely R2RLint. The user is provided not only the results of the assessment but also the specific entities that cause the errors, which helps users understand the quality issues and fix them. Finally, we consider a domain-specific use case that consumes LD and relies on data quality. In particular, we identify four LD sources, assess their quality using the R2RLint tool and then utilize them in building the Health Economic Research (HER) Observatory. We show the advantages of this semi-automated assessment over the other types of quality assessment methodologies discussed earlier. The Observatory aims at evaluating the impact of research development on the economic and healthcare performance of each country per year. We illustrate the usefulness of LD in this use case and the importance of quality assessment for any data analysis.
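    The thesis defines its metrics formally; as a rough illustration of what a single automated LD quality metric can look like, the sketch below computes a simple labelling-completeness score over an RDF graph with Python's rdflib. The metric definition and the sample data are assumptions made for illustration, not one of the 69 metrics from the thesis.

```python
from rdflib import Graph, RDFS

# Tiny illustrative dataset; a real assessment would parse a dump
# or query a SPARQL endpoint of a dataset such as DBpedia.
SAMPLE = """
@prefix ex: <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
ex:Berlin rdfs:label "Berlin" .
ex:Paris  rdfs:label "Paris" .
ex:Lagos  ex:population 15000000 .
"""

def labelling_completeness(graph):
    """Fraction of subjects that carry an rdfs:label.

    One simple, automatable completeness metric in the spirit of
    dimension/metric catalogues for Linked Data quality.
    """
    subjects = set(graph.subjects())
    if not subjects:
        return 1.0
    labelled = {s for s in subjects if graph.value(s, RDFS.label) is not None}
    return len(labelled) / len(subjects)

g = Graph()
g.parse(data=SAMPLE, format="turtle")
print(f"labelling completeness: {labelling_completeness(g):.2f}")  # 0.67
```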

    Analysis and Design of Earthquake Resistant Masonry Buildings

    Masonry buildings are the most common type of construction used for housing around the world. Masonry buildings of brick and stone are superior with respect to durability, fire resistance, heat resistance and formative effects. Because of the easy availability of materials for masonry construction, economic reasons and the merits mentioned above, this type of construction is employed extensively in rural, urban and hilly regions, since it is flexible enough to accommodate itself to the prevailing environmental conditions. Although this type of construction is most often preferred and most frequently employed, it is not perfect with regard to seismic performance. Post-earthquake surveys have shown that masonry buildings are the most vulnerable to, and have suffered the greatest damage in, past earthquakes. A survey of the areas affected by past earthquakes (Bhuj 2001; Chamoli 1999; Jabalpur 1997; Killari 1993; Uttarkashi 1991 and Bihar-Nepal 1988) has clearly demonstrated that the major loss of life was due to the collapse of low-strength masonry buildings. Thus this type of construction is treated as non-engineered construction, and most earthquake casualties are due to the collapse of such buildings. Moreover, even after three decades of knowledge of earthquake engineering, no proper method has been developed for the seismic analysis and design of masonry buildings, nor is the topic fairly covered in the Indian curriculum, despite the fact that 90% of the population lives in masonry buildings. The present work is a step towards illustrating a procedure for the seismic analysis and design of a masonry building. The paper gives a detailed procedure for the seismic analysis and design of a three-storeyed masonry residential building. The procedure is divided into several distinct steps in order to build confidence that masonry buildings may also be designed as engineered construction.
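    The abstract does not reproduce the design steps, but a seismic analysis of an Indian masonry building of this kind would plausibly start from the equivalent static method of IS 1893; stated here as an assumption about the likely procedure, not a quotation from the paper, the design base shear is

    \[
      V_B = A_h\,W, \qquad A_h = \frac{Z}{2}\cdot\frac{I}{R}\cdot\frac{S_a}{g},
    \]

    where \(Z\) is the seismic zone factor, \(I\) the importance factor, \(R\) the response reduction factor, \(S_a/g\) the spectral acceleration coefficient, and \(W\) the seismic weight of the building. The base shear is then distributed over the storeys, and the resulting shear and bending stresses in the walls are checked against the permissible stresses for masonry.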

    Utility of serum lactate dehydrogenase in the diagnosis of megaloblastic anemia

    Background: Megaloblastic anemia corresponds to severe macrocytic anemia with hypersegmented neutrophils and very high serum lactate dehydrogenase (LDH). The present study was undertaken to evaluate the utility of serum LDH and chloroform-inhibited serum LDH in the diagnosis of megaloblastic anemia, and to observe whether this can be used to differentiate megaloblastic anemia from iron deficiency anemia and hemolytic anemia. Methods: The present study was carried out on 75 patients with anemia, categorised on bone marrow examination into megaloblastic and non-megaloblastic anemia, to evaluate the efficacy of total serum LDH levels and the LDH isoenzyme pattern in the diagnosis of megaloblastic anemia. Twenty-five healthy adults were taken as controls. Results: In megaloblastic anemia, the total serum LDH level was increased about nineteen-fold, and in hemolytic anemia about four-fold, compared to normal. On statistical analysis, this increase in total serum LDH in megaloblastic and hemolytic anemia compared to the control group was found to be significant. In the present study, a serum LDH level above 3000 IU/L was associated with megaloblastic anemia, and a serum LDH level below 900 IU/L was suggestive of iron deficiency anemia. Chloroform inhibition was less than 25% in megaloblastic anemia and more than 25% in hemolytic anemia, and these differences were statistically significant (t=9.62, df=49). Conclusions: Identification of the LDH isoenzyme pattern (LDH1>LDH2) by the chloroform inhibition test is an adjuvant to diagnosis where total serum LDH levels are between 451 and 3000 IU/L, and it can also differentiate megaloblastic anemia from hemolytic anemia.
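    Read as a decision rule, the reported cut-offs can be summarized in a few lines of code; the function below paraphrases the thresholds stated in the abstract, and the handling of borderline values is an illustrative assumption, not a validated clinical tool.

```python
def classify_anemia(total_ldh_iu_per_l, chloroform_inhibition_pct=None):
    """Triage anemia type from the study's reported cut-offs."""
    if total_ldh_iu_per_l > 3000:
        return "megaloblastic anemia"
    if total_ldh_iu_per_l < 900:
        return "iron deficiency anemia"
    # In the 451-3000 IU/L range the isoenzyme pattern, probed by
    # chloroform inhibition, adjudicates (per the abstract).
    if chloroform_inhibition_pct is None:
        return "indeterminate: run chloroform inhibition test"
    if chloroform_inhibition_pct < 25:
        return "megaloblastic anemia"
    return "hemolytic anemia"

print(classify_anemia(4200))        # megaloblastic anemia
print(classify_anemia(1500, 40))    # hemolytic anemia
```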

    Venturing the Definition of Green Energy Transition: A systematic literature review

    The issue of climate change has become increasingly noteworthy in past years, and the transition towards a renewable energy system is a priority in the transition to a sustainable society. In this document, we explore the definition of green energy transition, how it is reached, and what the driving factors for achieving it are. To answer this, we first conducted a literature review discovering definitions from different disciplines; second, we gathered the key factors that drive energy transition; finally, we analysed these factors within the context of European Union data. Preliminary results have shown that household net income and governmental legal action on environmental issues are potential candidates for predicting energy transition within countries. With this research, we intend to spark new research directions in order to reach a common social and scientific understanding of green energy transition.

    A prospective study of laparoscopic paravaginal repair of cystocele: our experience

    Background: Cystocele is diagnosed clinically by vaginal examination using the pelvic organ prolapse quantification (POP-Q) system of classification. Abdominal and laparoscopic approaches are now used because of the high failure rate of transvaginal repair. Laparoscopic repair involves approximation of the vaginal sub-epithelial tissue to Cooper’s ligament using non-absorbable sutures. Methods: This was a prospective observational study, from June 2016 to May 2020, of women with symptomatic cystocele of grade ≥2. All patients were assessed pre-operatively and post-operatively with quality-of-life questionnaires, the pelvic organ prolapse distress inventory-6 (POPDI-6) and the urinary distress inventory short form. Clinical examination was done with and without the Valsalva maneuver. The POP-Q classification was used for grading the prolapse. All patients were assessed for any voiding difficulty after surgery at one week, three months, six months and 12 months. Results: The median age of the patients was 55.5 years. 90.9% of patients presented with urinary symptoms, and 54.5% underwent hysterectomy. The mean blood loss was 55 cc. The anatomic cure rate for cystocele in our study was 100% at 1 week, 3 months and 6 months post-operatively. There was significant improvement in the quality-of-life scores. Overall, symptomatic relief was seen in 90.9% of patients at the first week, 95.4% at 3 months, and 95.4% at 6 and 12 months of follow-up. Urinary symptoms were relieved in all patients at the first follow-up after 7 days, and in 95.4% of patients at the 3-, 6- and 12-month follow-ups. Conclusions: Laparoscopic paravaginal cystocele repair is a safe, effective and easy procedure with good results. The procedure is easy to learn and master, with low recurrence rates.

    Isolation and Proteomic Characterization of Escherichia coli Persisters

    Bacteria are known to adapt to various unfavorable situations. The term ‘persister’ has emerged to signify their ability to survive under physiologically adverse environments. Persisters can even tolerate multiple lethal antibiotic treatments. They are of major clinical importance due to their involvement in many fatal chronic bacterial infections, such as tuberculosis, the lung infections associated with cystic fibrosis, urinary tract infections, and many more. Physiological dormancy is the main basis of their survival under antibiotic stress or host-induced environmental stress. Persisters serve as a ‘biological memory’ and are capable of resuming growth to repopulate when favorable conditions are restored. We know only a little about the diverse cascade of events taking place in persisters. Eradication of chronic infection requires elimination of persisters using a suitable anti-persister drug, so it is imperative to analyze their physiology thoroughly for better insight and to explore effective drug targets. Naturally occurring persisters are present at a very low frequency in any bacterial population, and they also lack a specific biomarker with which to selectively fish them out. To isolate native persisters from a heterogeneous population, we first needed to develop a highly specific and quantitative method that selectively identifies and separates them. Using E. coli as a model, we developed an innovative flow cytometry method to isolate native persisters from planktonic liquid cultures. Taking advantage of their non-dividing nature, we exploited their inability to dilute fluorescence: based on the pattern of fluorescence intensity, persisters were quantitatively detected and sorted by flow cytometry. We also performed microscopic analysis to address the ‘fluorescence dilution’ ability of normal dividing cells. Whole-proteome analysis of the sorted persisters using mass spectrometry revealed the tryptophanase enzyme as a key regulator of persister physiology. Further microscopic and flow cytometric analysis demonstrated the crucial role of tryptophanase in bacterial persistence; tryptophanase can therefore be proposed as an attractive target for the development of an effective anti-persister drug. Overall, our successful attempt to isolate and analyze persisters in a modern and more rigorous manner paid off by revealing a unique aspect of their physiology.
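    The ‘fluorescence dilution’ idea is easy to state quantitatively: a pre-loaded fluorescent reporter is roughly halved at each division, so non-dividing persisters stay bright while the growing population dims. The toy simulation below illustrates that gating logic; the numbers (initial intensity, division counts, gate threshold, persister frequency) are made up for illustration and are not the paper's protocol.

```python
import random

def fluorescence_after(divisions, initial=1000.0):
    """Reporter intensity is roughly halved at each cell division."""
    return initial / (2 ** divisions)

# Toy population: most cells divide several times, while a rare
# persister subpopulation stays dormant (0 divisions).
population = [random.randint(4, 8) for _ in range(9990)] + [0] * 10

intensities = [fluorescence_after(d) for d in population]

# Gate: cells retaining near-initial fluorescence are candidate persisters.
THRESHOLD = 500.0   # illustrative gate, i.e. at most one division
persisters = [i for i in intensities if i >= THRESHOLD]
print(f"sorted {len(persisters)} candidate persisters "
      f"out of {len(population)} cells")   # expect ~10
```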

    Ontology Mapping for Life Science Linked Data

    Bio2RDF is an open-source project that offers a large and connected knowledge graph of Life Science Linked Data. Each dataset is expressed using its own vocabulary, thereby hindering integration, search, querying, and browsing across similar or identical types of data. With growth and content changes in the source data, a manual approach to maintaining mappings has proven untenable. The aim of this work is to develop a (semi-)automated procedure for generating high-quality mappings between Bio2RDF and SIO using BioPortal ontologies. Our preliminary results demonstrate that our approach is promising, in that it can find new mappings using a transitive closure over ontology mappings. Further development of the methodology, coupled with improvements in the ontology, will offer a better-integrated view of the Life Science Linked Data.

    Keywords: biomedical data, life science, ontology, integration, semantic web, linked data, RDF, Bio2RDF, SIO

    Life Science Data Integration. The life sciences have a long history of producing open data. Unfortunately, these data are represented using a wide variety of incompatible formats (e.g. CSV, XML, custom flat files, etc.). Linked Data (LD) offers a new paradigm of using community standards to represent and provide data in a uniform and semantic manner.
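    A transitive-closure step over pairwise mappings is simple to sketch: if term A maps to B and B maps to C, infer that A maps to C. The few lines below illustrate that inference over a toy mapping set; the term identifiers and the decision to treat mappings as symmetric are illustrative assumptions, not the Bio2RDF pipeline.

```python
# Seed mappings, e.g. as harvested from BioPortal (toy identifiers).
mappings = {
    ("bio2rdf:Gene", "obo:SO_0000704"),
    ("obo:SO_0000704", "sio:SIO_010035"),
}

def transitive_closure(pairs):
    """Infer new equivalences: (a, b) and (b, c) imply (a, c).

    Treats mappings as symmetric, an assumption that only holds
    for equivalence-style mappings.
    """
    closure = set(pairs) | {(b, a) for a, b in pairs}
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and a != d and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

for a, b in sorted(transitive_closure(mappings)):
    if a == "bio2rdf:Gene":
        print(a, "->", b)   # includes the inferred sio:SIO_010035 mapping
```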