3 research outputs found

    Analysis of the security of web resources based on the CVSS metric

    Based on an analysis of vulnerability data for web resources and the CVSS metric (Common Vulnerability Scoring System, a standard for calculating a numerical vulnerability score on a ten-point scale), the distribution of the average CVSS score for websites of the Republic of Belarus was studied. The hypothesis that the CVSS vulnerability scores follow a Poisson distribution was tested using the chi-square criterion. It was found that about 10% of the web resources in the original sample of 19,000 sites have a critical average vulnerability score. As part of this work, a universal system was developed for collecting technical information about active web resources on the Internet from public directories and registries. Dedicated search templates were developed using JavaScript regular expressions (RegExp) to detect the versions of the technologies used to build the websites. Based on these data, the percentage distributions of the technologies used and of top-level domains, as well as the geographical locations of the servers, were calculated. The proposed system can be adapted to the specific requirements of information security specialists conducting security audits of web resources.
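    The abstract describes a chi-square goodness-of-fit test of the hypothesis that average CVSS scores follow a Poisson law. A minimal sketch of that test in TypeScript follows; the per-bin counts are made-up placeholders (only the total of 19,000 comes from the abstract), and the binning and significance level are illustrative assumptions.

        // Sketch: chi-square goodness-of-fit test of a Poisson hypothesis for
        // rounded average CVSS scores (bins k = 0..10). Placeholder data only.

        function poissonPmf(k: number, lambda: number): number {
          // P(K = k) = e^(-lambda) * lambda^k / k!  (computed in log space)
          let logP = -lambda + k * Math.log(lambda);
          for (let i = 2; i <= k; i++) logP -= Math.log(i);
          return Math.exp(logP);
        }

        // observed[k] = number of sites whose rounded average CVSS score is k.
        // These counts are invented placeholders; only the total (19,000) is real.
        const observed = [120, 540, 1300, 2400, 3200, 3400, 2900, 2100, 1400, 900, 740];
        const n = observed.reduce((a, b) => a + b, 0); // 19000

        // Maximum-likelihood estimate of lambda is the sample mean.
        const lambda = observed.reduce((s, o, k) => s + k * o, 0) / n;

        // Chi-square statistic: sum over bins of (O - E)^2 / E, where the last
        // bin absorbs the Poisson tail P(K >= 10) so expected counts sum to n.
        let chi2 = 0;
        observed.forEach((o, k) => {
          const p = k === observed.length - 1
            ? 1 - Array.from({ length: k }, (_, i) => poissonPmf(i, lambda))
                       .reduce((a, b) => a + b, 0)
            : poissonPmf(k, lambda);
          chi2 += (o - n * p) ** 2 / (n * p);
        });

        // Degrees of freedom: bins - 1 - one estimated parameter (lambda).
        const df = observed.length - 2;
        console.log(`chi2 = ${chi2.toFixed(1)}, df = ${df}`);
        // Reject the Poisson hypothesis at the 5% level if chi2 exceeds the
        // critical value chi2_0.95(df), e.g. 16.92 for df = 9.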
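    The abstract also mentions RegExp-based search templates for detecting technology versions. A minimal sketch of that idea, again in TypeScript: the technology names, patterns, and the URL in the usage line are illustrative assumptions, not the templates developed in the paper.

        // Sketch: regex-based detection of technology versions from a fetched
        // page. Each hypothetical pattern captures a version string in group 1.

        type Pattern = { tech: string; regex: RegExp };

        const patterns: Pattern[] = [
          { tech: "WordPress", regex: /<meta name="generator" content="WordPress ([\d.]+)"/i },
          { tech: "jQuery", regex: /jquery[-.]([\d.]+)(?:\.min)?\.js/i },
          { tech: "PHP", regex: /x-powered-by:\s*PHP\/([\d.]+)/i },
        ];

        async function detectTechnologies(url: string): Promise<Record<string, string>> {
          const res = await fetch(url);
          // Scan both the response headers and the HTML body for fingerprints.
          const headers = [...res.headers.entries()].map(([k, v]) => `${k}: ${v}`).join("\n");
          const haystack = headers + "\n" + (await res.text());

          const found: Record<string, string> = {};
          for (const { tech, regex } of patterns) {
            const m = haystack.match(regex);
            if (m) found[tech] = m[1]; // captured version, e.g. "6.4.2"
          }
          return found;
        }

        // Usage: detectTechnologies("https://example.by").then(console.log);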

    A novel defense mechanism against web crawler intrusion

    Web robots, also known as crawlers or spiders, are used by search engines, hackers, and spammers to gather information about web pages. Timely detection and prevention of unwanted crawlers increase the privacy and security of websites. In this research, a novel method for identifying web crawlers is proposed to prevent unwanted crawlers from accessing websites. The proposed method uses a five-factor identification process to detect unwanted crawlers. The study provides pretest and posttest results, along with a systematic evaluation of web pages protected by the proposed identification technique versus web pages without it. A repeated-measures experiment was performed on two groups, each containing ninety web pages. The results of a logistic regression analysis of the treatment and control groups confirm that the novel five-factor identification process is an effective mechanism for blocking unwanted web crawlers. The study concludes that the proposed five-factor process is a highly effective technique, as demonstrated by its successful outcome.
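    The abstract does not disclose the five identification factors themselves. As a loose illustration of how a multi-signal check of this kind can be wired together, here is a minimal TypeScript sketch; the five signals and the voting threshold are common heuristics chosen for illustration, not the paper's factors.

        // Sketch: a multi-signal crawler check. The five signals below are
        // generic heuristics, NOT the five factors proposed in the paper.

        interface ClientInfo {
          userAgent: string;
          requestsPerMinute: number;
          fetchedRobotsTxt: boolean;   // humans rarely request robots.txt
          executesJavaScript: boolean; // many crawlers skip JS execution
          followedHiddenLink: boolean; // honeypot link invisible to human users
        }

        function looksLikeCrawler(c: ClientInfo): boolean {
          const signals = [
            /bot|crawler|spider/i.test(c.userAgent), // self-identifying agent
            c.requestsPerMinute > 60,                // abnormally high rate
            c.fetchedRobotsTxt,
            !c.executesJavaScript,
            c.followedHiddenLink,
          ];
          // Majority vote; the threshold of 3 is an illustrative choice.
          return signals.filter(Boolean).length >= 3;
        }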