3,322 research outputs found

    A searching algorithm for text with mistakes

    Get PDF
    The paper presents a new text searching method, a modification of the Boyer–Moore algorithm, that enables a user to find the places in a text where a given substring occurs possibly with errors, i.e. the string in the text and the query may not coincide exactly but are nevertheless essentially the same. The idea is to divide the search into two phases: in the first phase a fuzzy variant of the Boyer–Moore algorithm is performed; in the second phase the Dice metric is applied. Compared with known methods that use a fixed bound on the number of mistakes, the suggested technique 1) does not precompute an auxiliary table of a size comparable to that of the original text, and 2) captures the semantics of erroneous text substrings more flexibly, even for a large number of mistakes. This extends the possibilities of the Boyer–Moore method by admitting more possible mistakes in the text while preserving its semantics. The suggested method also regulates the upper bound on the number of text mistakes more accurately, which distinguishes it from known methods whose fixed maximum number of mistakes does not depend on the text size. Moreover, in those methods the upper bound is defined as a Levenshtein distance, which is not suitable for evaluating the relevance of the found text to a query, whereas the Dice metric provides such a relevance measure. Indeed, if the maximum Levenshtein distance is 3, how can one judge whether this value is large or small enough to make the search results relevant? Consequently, the suggested method is more flexible and finds relevant answers even when the text contains many mistakes. The worst-case efficiency of the suggested method is O(nc), where the constant c is the largest allowable number of mistakes.
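    The abstract describes the two-phase scheme only in outline; the following minimal Python sketch illustrates the idea under stated assumptions. Phase 1 is approximated by a plain sliding mismatch count (the paper uses a fuzzy Boyer–Moore scan with its shift heuristics, which this sketch does not reproduce), and phase 2 ranks the surviving candidates by the Dice coefficient over character bigrams. All names and thresholds are illustrative.

        def bigrams(s):
            # Set of character bigrams of s.
            return {s[i:i + 2] for i in range(len(s) - 1)}

        def dice(a, b):
            # Dice coefficient between the bigram sets of a and b.
            ba, bb = bigrams(a), bigrams(b)
            if not ba and not bb:
                return 1.0
            return 2 * len(ba & bb) / (len(ba) + len(bb))

        def fuzzy_search(text, query, max_mismatches=3, min_dice=0.6):
            # Return (position, window, score) for windows passing both phases.
            m = len(query)
            hits = []
            for i in range(len(text) - m + 1):
                window = text[i:i + m]
                # Phase 1: cheap mismatch count prunes most positions.
                mismatches = sum(1 for a, b in zip(window, query) if a != b)
                if mismatches <= max_mismatches:
                    # Phase 2: the Dice score judges candidate relevance.
                    score = dice(window, query)
                    if score >= min_dice:
                        hits.append((i, window, score))
            return hits

        print(fuzzy_search("the quick brown fox", "quik", max_mismatches=2))

    Because the Dice score, unlike a fixed Levenshtein bound, is normalised to [0, 1], the min_dice threshold keeps the same meaning regardless of query length, which is the flexibility the abstract emphasises.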

    Evaluation of hydrometric network efficacy and user requirements in the Republic of Ireland via expert opinion and statistical analysis

    Get PDF
    Decreased funding and shifting governmental priorities have resulted in a contraction of hydrometric measurement in many regions over the past two decades. Moreover, concerns exist with respect to appropriate data usage and (transboundary) exchange, in addition to the compatibility and extent of existing hydrometric datasets. These issues are undoubtedly magnified by enhanced data demands and increased financial pressures on network managers, thus requiring new approaches to optimising the societal benefits and overall efficacy of hydrometric information for future socio-hydrological resilience. The current study employed a quantitative cross-sectional expert elicitation of 203 respondents to collate, analyse and assess hydrometric network users’ opinions, knowledge and experience. Current usage patterns, perceived network strengths, requirements, and limitations have been identified and discussed within the context of hydrometric resilience in a changing social, economic and natural environment. Findings indicate that small (<30 km2) catchment data are most frequently employed in the Republic of Ireland, particularly with respect to extreme event prediction and flood management. Similarly, small catchments and areas characterised by previous/recent flooding were prioritised for resilience management via network amendment. Over half of those surveyed (50.5%) reported the current network as inadequate for their professional requirements. Conversely, respondents indicated that network efficacy has improved (53.2%) or remained stable (26.6%) over the course of their professional careers; however, improvements (as defined by individual respondents, i.e. network density, data quality, data availability) have not occurred at a sufficient rate. User-defined efficacy (adequacy, resilience) was found to be a somewhat vague, multivariate concept with no individual predictor identified; however, general data quality, network density, and urban catchment data were the most significant issues among respondents. A significant majority (85.4%) of respondents indicated that future resilience would be best achieved via network density amendment, with over 60% favouring geographically and/or categorically focused network increases, as opposed to more general national increases

    Generalization of Retractable and Coretractable Modules

    Get PDF
    In this work, we extend the notion of retractability to s-retractability. An R-module is called s-retractable if  for all nonzero . Also, we extend coretractable modules to semi-coretractable modules. An R-module  is called semi-coretractable if  for all maximal essential submodules  of . We investigate these classes of modules and extend some of the main theorems on retractable and coretractable modules to s-retractable and semi-coretractable modules, respectively
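    The defining formulas in this abstract were lost in extraction and cannot be recovered from the listing alone. For orientation only, the standard notions that the paper generalizes read as follows (in LaTeX); the paper's own s-retractable and semi-coretractable conditions are not reconstructed here.

        % Standard definitions for an R-module M (background only, not the
        % paper's generalized conditions):
        M \text{ is retractable if } \operatorname{Hom}_R(M, N) \neq 0
            \text{ for every nonzero submodule } N \le M;
        M \text{ is coretractable if } \operatorname{Hom}_R(M/K, M) \neq 0
            \text{ for every proper submodule } K < M.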

    Repeated measurements of non-invasive fibrosis tests to monitor the progression of non-alcoholic fatty liver disease : A long-term follow-up study

    Get PDF
    Background and Aims The presence of advanced hepatic fibrosis is the prime marker for the prediction of liver-related complications in non-alcoholic fatty liver disease (NAFLD). Blood-based non-invasive tests (NITs) have been developed to evaluate fibrosis and identify patients at risk. Current guidelines propose monitoring the progression of NAFLD using repeated NITs at 2-3-year intervals. The aim of this study was to evaluate the association of changes in NITs measured at two time points with the progression of NAFLD. Methods We retrospectively included NAFLD patients with NIT measurements in whom the baseline hepatic fibrosis stage had been assessed by biopsy or transient elastography (TE). Subjects underwent follow-up visits at least 1 year from baseline to evaluate the progression of NAFLD. NAFLD progression was defined as the development of end-stage liver disease or fibrosis progression according to repeat biopsy or TE. The following NITs were calculated at baseline and follow-up: Fibrosis-4 (FIB-4), NAFLD fibrosis score (NFS), aspartate aminotransferase to platelet ratio index (APRI) and dynamic aspartate-to-alanine aminotransferase ratio (dAAR). Results One hundred and thirty-five patients were included with a mean follow-up of 12.6 ± 8.5 years. During follow-up, 41 patients (30%) were diagnosed with progressive NAFLD. Change in NIT scores during follow-up was significantly associated with disease progression for all NITs tested except for NFS. However, the diagnostic precision was suboptimal, with areas under the receiver operating characteristic curve of 0.56-0.64 and positive predictive values of 0.28-0.36 at a sensitivity fixed at 90%. Conclusions Change of FIB-4, NFS, APRI, and dAAR scores is only weakly associated with disease progression in NAFLD. Our findings do not support repeated measurements of these NITs for monitoring the course of NAFLD.
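    Two of the NITs compared above have simple closed-form definitions. As a point of reference, the sketch below implements the published FIB-4 and APRI formulas in Python; NFS and dAAR take more inputs and are omitted, and the example values, as well as the AST upper limit of normal, are illustrative assumptions.

        from math import sqrt

        def fib4(age_years, ast, alt, platelets):
            # FIB-4 index: (age x AST [U/L]) / (platelets [10^9/L] x sqrt(ALT [U/L])).
            return (age_years * ast) / (platelets * sqrt(alt))

        def apri(ast, platelets, ast_uln=40.0):
            # AST-to-platelet ratio index; ast_uln is the assay's upper
            # limit of normal for AST (40 U/L is a common assumption).
            return (ast / ast_uln) / platelets * 100

        # Monitoring as in the study: compare scores at two time points.
        baseline = fib4(52, 48, 60, 210)    # hypothetical patient values
        follow_up = fib4(55, 70, 55, 165)
        print(f"FIB-4 change: {follow_up - baseline:+.2f}")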

    The Methods to Improve Quality of Service by Accounting Secure Parameters

    Full text link
    A solution is proposed to the problem of ensuring quality of service and providing a greater number of services with higher efficiency while taking network security into account. In this paper, experiments were conducted to analyze the effect of self-similarity and attacks on the quality of service parameters. A method of buffering and channel capacity control and a method of calculating routing cost in the network were proposed, both taking into account the parameters of traffic multifractality and the probability of detecting attacks in telecommunications networks. Both proposed methods account for the given restrictions on the delay time and the number of lost packets for every type of quality-of-service traffic. During simulation, the parameters of the transmitted traffic (self-similarity, intensity) and of the network (current channel load, node buffer size) were varied, and the maximum allowable network load was determined. The results of the analysis show that the overload that occurs when transmitting traffic over a switched channel is associated with multifractal traffic characteristics and the presence of attacks. It was shown that the proposed methods can reduce data loss and improve the efficiency of network resources. Comment: 10 pages, 1 figure, 1 equation, 1 table. arXiv admin note: text overlap with arXiv:1904.0520
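    The abstract gives neither the buffering rule nor the routing-cost formula. Purely as a toy illustration of how attack probability can be folded into a routing metric, the sketch below weights an M/M/1-style delay term by the probability that a link carries attack traffic; the functional form, the attack_penalty weight, and all values are assumptions, not the authors' method.

        def link_cost(capacity_mbps, load_mbps, p_attack, attack_penalty=5.0):
            # Toy routing cost: the delay term grows as utilisation nears 1;
            # the multiplicative penalty grows with the estimated probability
            # that traffic on the link is attack traffic.
            if load_mbps >= capacity_mbps:
                return float("inf")             # saturated link: unusable
            delay_term = 1.0 / (capacity_mbps - load_mbps)
            return delay_term * (1.0 + attack_penalty * p_attack)

        # A loaded but clean link vs. a lighter link with suspected attacks.
        print(link_cost(100.0, 80.0, p_attack=0.0))   # ~0.050
        print(link_cost(100.0, 50.0, p_attack=0.4))   # ~0.060: risk outweighs headroom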

    Type of High Secondary School (Governmental Vs Private) and Type of High Secondary School Certificate (Sudanese Vs Arabian): Do They Affect Learning Style?

    Get PDF
    Background: People differ in the way they perceive, process, store, and recall what they are attempting to learn. This study aimed to assess the learning styles among preclinical 1st year medical students and the influence of the type of high secondary school (governmental vs. private) and type of high secondary school certificate (Sudanese vs. Arabian) on learning style. Materials and Methods: A cross sectional institutional-based study was conducted at Al Neelain University, Khartoum State, Sudan. First year students of the Medicine, Dentistry and Physiotherapy Faculties were enrolled. The VARK (Visual, Auditory, Read and write, and Kinesthetic) learning style hard copy questionnaire, © Copyright Version 7.8 (2014) held by VARK Learn Limited, Christchurch, New Zealand, was administered following permission. Data were analyzed using the Statistical Package for Social Sciences (SPSS) version 21. Results: Out of 320 students, 198 correctly completed the VARK questionnaire, with a mean age of 17.88 years (SD 1.52); 74.2% were female students. About 59.6% were from governmental schools and 79.4% of the studied students had Sudanese High Secondary Certificates. About 64.1% demonstrated a singular mode preference. Inferential statistics showed a statistically significant difference between the learning styles and the type of secondary school, whether governmental or private (P-value 0.005), while there was no statistically significant difference in relation to the type of high school certificate of the studied group (P-value 0.225). Conclusion: The type of secondary school, whether governmental or private, may affect the learning style of medical students, while the student's gender, type of college, or type of high school certificate (whether Sudanese or Arabian) does not. More and larger studies are encouraged. Keywords: Learning modalities, VARK questionnaire, Unimodal preference, Medical education, Sudan
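    The abstract reports a significant association between school type and learning style (P = 0.005) but does not name the test; for a contingency table of preference counts by school type, a chi-square test of independence is the conventional choice. The sketch below shows such an analysis in Python with made-up counts; the numbers are not from the study.

        from scipy.stats import chi2_contingency

        # Hypothetical counts of singular-mode preference by school type;
        # rows: governmental, private; columns: V, A, R, K. Not study data.
        table = [
            [22, 35, 18, 43],   # governmental schools
            [25, 12, 20, 23],   # private schools
        ]
        chi2, p, dof, expected = chi2_contingency(table)
        print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")

    A p-value below 0.05 from such a table would mirror the reported school-type effect, provided the usual expected-count assumptions of the test hold.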

    Doubt m-Polar Fuzzy Sets Based on BCK-Algebras

    Get PDF
    Doubt m-polar subalgebras (ideals) of BCK-algebras were introduced and some of their properties were investigated. Also, doubt m-polar positive implicative (commutative) ideals were defined and related results were proved
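    The brief abstract does not spell out the underlying notion. As background, the standard definition of an m-polar fuzzy set is sketched below in LaTeX, together with the "doubt" subalgebra inequality as it appears in the general literature; the componentwise doubt condition is an assumption for orientation, not a quotation from this paper.

        % Background: an m-polar fuzzy set on a universe X assigns m
        % membership degrees to each element,
        \hat{A} : X \longrightarrow [0,1]^m .
        % For a BCK-algebra (X, *, 0), the "doubt" variants reverse the
        % usual fuzzy-subalgebra inequality componentwise, i.e. for each
        % projection p_i (assumption from the general literature):
        p_i \circ \hat{A}(x * y) \le \max\bigl\{\, p_i \circ \hat{A}(x),\ p_i \circ \hat{A}(y) \,\bigr\}.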
