
    New normalized constant modulus algorithms with relaxation

    Get PDF
    Published version

    Illocutionary Force of Expression in Speech

    Get PDF
    The problem of indirect speech acts is closely connected with the problem of the illocutionary force of an expression: how can a hearer understand an indirect speech act when what is heard and understood means something more than what is literally said? Analysis of illocutionary force shows that the same speech act can be realized by several different sentences. The direct meaning of a sentence is an element of the language system, but because language is embedded in human activity, a speech act must be considered in a wider context. The main function of an interrogative sentence is to express a question.

    Characterization of un-17, chol-3 and chol-4, phospholipid biosynthetic mutants of Neurospora crassa

    Get PDF
    Four choline-requiring mutants of Neurospora crassa have been identified genetically. The chol-1 and chol-2 mutants are impaired in phosphatidylcholine biosynthesis. Phospholipid synthesis is also impaired in the inl and un-17 mutants; the inl mutant cannot synthesize inositol phospholipids. Defects in phospholipid biosynthesis are reported here for three previously uncharacterized mutants, un-17, chol-3 and chol-4.

    Land valuation using an innovative model combining machine learning and spatial context

    Get PDF
    Valuation predictions are used by buyers, sellers, regulators, and authorities to assess the fairness of the value being asked. Urbanization demands a modern and efficient land valuation system, since the conventional approach is costly, slow, and relatively subjective with respect to locational factors. This necessitates alternative methods that are faster, user-friendly, and digitally based, methods that use geographic information systems and strong analytical tools to produce reliable and accurate valuations. Location information in the form of spatial data is crucial because price can vary significantly with the neighborhood and context in which a parcel is located. This thesis proposes a model that combines machine learning and spatial context, integrating raster information derived from remote sensing and vector information from geospatial analytics to predict land values in the City of Springfield. These sources are used to investigate whether a joint model can improve value estimation, and the study also identifies the factors most influential in driving these models. A geodatabase was created by calculating proximity and accessibility to key locations, integrating socio-economic variables, and adding statistics on green-space density and a vegetation index derived from Sentinel-2 satellite data. The models were trained with supervised machine learning, using Greene County government appraisal values as ground truth, and the impact of each data type on price prediction was explored. Two types of modeling were conducted: initially, only spatial-context data were used to assess their predictive capability; subsequently, socio-economic variables were added to compare model performance. The results showed only a slight difference in performance between the random forest and gradient boosting algorithms, and between models trained on GIS-derived distance measures alone and those with socio-economic variables added. Furthermore, spatial autocorrelation analysis was conducted to investigate how the distribution of similar location-related attributes affects land value; this analysis also aimed to identify disparities in socio-economic structure and to measure their magnitude. Includes bibliographical references.
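
    As a hedged illustration of the modeling step described above, the sketch below compares a random forest and a gradient boosting regressor trained first on spatial-context features and then on spatial plus socio-economic features. It assumes scikit-learn; the file name, feature columns, split, and hyperparameters are illustrative assumptions, not the thesis code or data.

    # Minimal sketch: spatial-context features first, then spatial + socio-economic.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score, mean_absolute_error

    # Hypothetical geodatabase export: one row per parcel with GIS-derived distance
    # measures, a vegetation index, socio-economic variables, and the appraised value.
    parcels = pd.read_csv("parcels.csv")
    spatial_cols = ["dist_school_m", "dist_highway_m", "dist_cbd_m", "ndvi_mean"]
    socio_cols = ["median_income", "population_density"]
    target = "appraised_value"

    def fit_and_score(feature_cols):
        """Train both models on the given feature set and report hold-out metrics."""
        X_train, X_test, y_train, y_test = train_test_split(
            parcels[feature_cols], parcels[target], test_size=0.2, random_state=42
        )
        models = {
            "random forest": RandomForestRegressor(n_estimators=300, random_state=42),
            "gradient boosting": GradientBoostingRegressor(random_state=42),
        }
        for name, model in models.items():
            pred = model.fit(X_train, y_train).predict(X_test)
            print(f"{name}: R2={r2_score(y_test, pred):.3f}, "
                  f"MAE={mean_absolute_error(y_test, pred):,.0f}")

    # First run: spatial context only; second run: spatial + socio-economic variables.
    fit_and_score(spatial_cols)
    fit_and_score(spatial_cols + socio_cols)

    Feature importances from either fitted model (model.feature_importances_) would then indicate which distance measures or socio-economic variables drive the predictions, mirroring the factor analysis described in the abstract.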

    Fuzzy virtual ligands for virtual screening

    Get PDF
    A new method to bridge the gap between ligand and receptor-based methods in virtual screening (VS) is presented. We introduce a structure-derived virtual ligand (VL) model as an extension to a previously published pseudo-ligand technique [1]: LIQUID [2] fuzzy pharmacophore virtual screening is combined with grid-based protein binding site predictions of PocketPicker [3]. This approach might help reduce bias introduced by manual selection of binding site residues and introduces pocket shape information to the VL. It allows for a combination of several protein structure models into a single "fuzzy" VL representation, which can be used to scan screening compound collections for ligand structures with a similar potential pharmacophore. PocketPicker employs an elaborate grid-based scanning procedure to determine buried cavities and depressions on the protein's surface. Potential binding sites are represented by clusters of grid probes characterizing the shape and accessibility of a cavity. A rule-based system is then applied to project reverse pharmacophore types onto the grid probes of a selected pocket. The pocket pharmacophore types are assigned depending on the properties and geometry of the protein residues surrounding the pocket with regard to their relative position towards the grid probes. LIQUID is used to cluster representative pocket probes by their pharmacophore types, describing a fuzzy VL model. The VL is encoded in a correlation vector, which can then be compared to a database of pre-calculated ligand models. A retrospective screening using the fuzzy VL and several protein structures was evaluated by ten-fold cross-validation with ROC-AUC and BEDROC metrics, obtaining a significant enrichment of actives. Future work will be devoted to prospective screening using a novel protein target of Helicobacter pylori and compounds from commercial providers.
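
    The correlation-vector comparison at the core of this workflow can be sketched generically: typed 3D feature points are counted per type pair and distance bin, and the resulting vectors are ranked by similarity. The feature-type alphabet, 2 Å bins, Tanimoto similarity, and all coordinates below are illustrative assumptions, not the LIQUID or PocketPicker implementation.

    # Generic sketch of a distance-binned pharmacophore correlation vector and ranking.
    import numpy as np
    from itertools import combinations, combinations_with_replacement

    FEATURE_TYPES = sorted(["donor", "acceptor", "lipophilic"])  # assumed type alphabet
    TYPE_PAIRS = list(combinations_with_replacement(FEATURE_TYPES, 2))
    BIN_EDGES = np.arange(0.0, 14.0, 2.0)                        # 2 A bins up to 12 A (assumed)
    N_BINS = len(BIN_EDGES) - 1

    def correlation_vector(points):
        """points: list of (feature_type, xyz) -> pair counts per (type pair, distance bin)."""
        vec = np.zeros(len(TYPE_PAIRS) * N_BINS)
        for (t1, p1), (t2, p2) in combinations(points, 2):
            d = np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float))
            b = int(np.searchsorted(BIN_EDGES, d)) - 1
            if 0 <= b < N_BINS:
                vec[TYPE_PAIRS.index(tuple(sorted((t1, t2)))) * N_BINS + b] += 1
        return vec

    def tanimoto(a, b):
        """Continuous Tanimoto similarity between two correlation vectors."""
        num = float(np.dot(a, b))
        den = float(np.dot(a, a) + np.dot(b, b)) - num
        return num / den if den else 0.0

    # Hypothetical virtual-ligand probes and a tiny pre-calculated "database".
    vl = correlation_vector([("donor", (0, 0, 0)), ("acceptor", (3.1, 0, 0)),
                             ("lipophilic", (1.5, 4.0, 0))])
    database = {
        "ligand_A": correlation_vector([("donor", (0, 0, 0)), ("acceptor", (3.0, 0.5, 0))]),
        "ligand_B": correlation_vector([("lipophilic", (0, 0, 0)), ("lipophilic", (5.5, 0, 0))]),
    }
    for name, cv in sorted(database.items(), key=lambda kv: tanimoto(vl, kv[1]), reverse=True):
        print(name, round(tanimoto(vl, cv), 3))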

    A Weakest Chain Approach to Assessing the Overall Effectiveness of the 802.11 Wireless Network Security

    Full text link
    This study aims to assess wireless network security holistically and to determine the weakest link among the parts that make up the 'secure' aspect of wireless networks: security protocols, wireless technologies, and user habits. Security protocols are assessed by determining the time taken to break a specific protocol's encryption key, or to bypass access control, using brute-force attack techniques. Passphrase strengths as well as encryption key strengths ranging from 40 to 256 bits are evaluated. Different scenarios are created for passphrase generation, using different character sets and different numbers of characters, and each scenario is evaluated by the time taken to break that passphrase. At the end of the study, it is determined that the choice of passphrase is the weakest part of the entire 802.11 wireless security system. Comment: 8 pages, 3 tables
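
    The time-to-break evaluation can be illustrated with a back-of-the-envelope estimate: the worst-case brute-force effort is the character-set size raised to the passphrase length, divided by the guessing rate. The rate and scenarios below are assumptions for illustration, not the paper's measured figures or tooling.

    # Worst-case exhaustive-search time for a passphrase = charset_size ** length guesses.
    from string import ascii_lowercase, ascii_letters, digits, punctuation

    GUESSES_PER_SECOND = 1e9  # assumed attacker throughput; real rates depend on hardware and protocol

    scenarios = {
        "8 lowercase letters": (len(ascii_lowercase), 8),
        "8 letters + digits": (len(ascii_letters + digits), 8),
        "12 full printable ASCII": (len(ascii_letters + digits + punctuation), 12),
    }

    for name, (charset_size, length) in scenarios.items():
        keyspace = charset_size ** length
        seconds = keyspace / GUESSES_PER_SECOND
        years = seconds / (3600 * 24 * 365)
        print(f"{name}: {keyspace:.2e} candidates, ~{seconds:.2e} s ({years:.2e} years) to exhaust")

    By the same arithmetic, a 40-bit key (2**40, about 1.1e12 candidates) falls in roughly twenty minutes at this assumed rate, while 2**256 is far beyond reach, which is consistent with the study's conclusion that the passphrase, not the key length, is the weakest link.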

    Standardization of information systems development processes and banking industry adaptations

    Full text link
    This paper examines the current system development processes of three major Turkish banks in terms of compliance with internationally accepted system development and software engineering standards, in order to determine the banks' common process problems. After an in-depth investigation of system development and software engineering standards, the relevant process-based standards were selected. Questions covering the whole system development process were then prepared by applying the classical Waterfall life cycle model, each question drawing on guidance and suggestions from the international standards. To collect data, staff from the information technology departments of the three banks were interviewed. Results were aggregated by examining the current process status of the three banks together, and problematic issues were identified against the international system development standards. Comment: 12 pages; International Journal of Software Engineering & Applications (IJSEA), Vol.2, No.2, April 2011