
    A Brief History of Web Crawlers

    Full text link
    Web crawlers visit internet applications, collect data, and learn about new web pages from the pages they visit. Web crawlers have a long and interesting history. Early web crawlers collected statistics about the web. In addition to collecting statistics about the web and indexing applications for search engines, modern crawlers can be used to perform accessibility and vulnerability checks on an application. The rapid expansion of the web and the complexity added to web applications have made crawling a very challenging process. Throughout the history of web crawling, many researchers and industrial groups have addressed the different issues and challenges that web crawlers face, and different solutions have been proposed to reduce the time and cost of crawling. Performing an exhaustive crawl remains a challenging problem, and automatically capturing the model of a modern web application and extracting data from it is another open question. What follows is a brief history of the different techniques and algorithms used from the early days of crawling up to the present day. We introduce criteria to evaluate the relative performance of web crawlers, and based on these criteria we plot the evolution of web crawlers and compare their performance.
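    The basic crawl loop the abstract refers to can be illustrated with a short sketch: fetch a page, extract its links, and enqueue unvisited URLs, repeating until a budget is reached. This is only a minimal breadth-first illustration, not any of the surveyed crawlers; the seed URL, page limit, and politeness delay are assumptions.

        # Minimal breadth-first crawl sketch (illustrative only; not a surveyed crawler).
        import time
        from collections import deque
        from urllib.parse import urljoin, urlparse

        import requests
        from bs4 import BeautifulSoup

        def crawl(seed, max_pages=50, delay=1.0):
            seen = {seed}
            frontier = deque([seed])
            while frontier and len(seen) <= max_pages:
                url = frontier.popleft()
                try:
                    resp = requests.get(url, timeout=10)
                except requests.RequestException:
                    continue  # skip unreachable pages
                soup = BeautifulSoup(resp.text, "html.parser")
                # Learn about new pages from the links on the visited page.
                for a in soup.find_all("a", href=True):
                    link = urljoin(url, a["href"])
                    if urlparse(link).scheme in ("http", "https") and link not in seen:
                        seen.add(link)
                        frontier.append(link)
                time.sleep(delay)  # basic politeness between requests
            return seen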

    Gender and Risk of Congenital Hypothyroidism: A Systematic Review and Meta-Analysis

    No full text
    Background: Although numerous observational studies have investigated the association between gender and the risk of congenital hypothyroidism, the role of gender as a risk factor for congenital hypothyroidism remains unknown. This meta-analysis was conducted to summarize the epidemiologic evidence on the effect of gender on the occurrence of congenital hypothyroidism and to identify the sex ratio for congenital hypothyroidism. Materials and Methods: A comprehensive literature search of several electronic databases, including PubMed, Scopus, EMBASE, and Science Direct, was performed up to February 1st, 2017. All case-control studies (six studies with 3,254 subjects) and cross-sectional studies (eight studies with 8,258,745 subjects) addressing the association by odds ratio (OR) and 95% confidence interval (95% CI) were included. Moreover, eleven cross-sectional studies providing a sex ratio for congenital hypothyroidism were also included. The pooled Mantel-Haenszel OR (MH OR) with 95% CI was estimated using the random-effects method. Results: The overall summary results showed that female gender is associated with an increased risk of congenital hypothyroidism (pooled MH OR = 1.46; 95% CI: 1.10, 1.95). The pooled MH OR for case-control studies was 1.69 (95% CI: 1.35, 2.13), whereas the pooled MH OR for cross-sectional studies was 1.26 (95% CI: 1.00, 1.59). In addition, the pooled female-to-male sex ratio of congenital hypothyroidism incidence was 1.35 (95% CI: 0.99, 1.83). Conclusion: The results of this meta-analysis provide evidence of a higher risk of developing congenital hypothyroidism in girls. More epidemiological and clinical studies are needed to explore why girls are at increased risk of congenital hypothyroidism compared with boys.
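    As a rough illustration of the pooling step described above, the sketch below applies a generic inverse-variance DerSimonian-Laird random-effects combination of per-study odds ratios. This is a simplification of the Mantel-Haenszel weighting named in the abstract, and the per-study ORs and confidence intervals used are hypothetical placeholders, not data from the reviewed studies.

        # DerSimonian-Laird random-effects pooling of odds ratios (illustrative sketch).
        import math

        studies = [  # (OR, lower 95% CI, upper 95% CI) -- hypothetical values only
            (1.7, 1.2, 2.4),
            (1.3, 0.9, 1.9),
            (1.5, 1.1, 2.0),
        ]

        y = [math.log(or_) for or_, _, _ in studies]           # log odds ratios
        se = [(math.log(u) - math.log(l)) / (2 * 1.96)          # SE recovered from the CI width
              for _, l, u in studies]

        w = [1 / s**2 for s in se]                              # fixed-effect (inverse-variance) weights
        y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
        Q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
        k = len(studies)
        tau2 = max(0.0, (Q - (k - 1)) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))

        w_star = [1 / (s**2 + tau2) for s in se]                # random-effects weights
        y_pooled = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
        se_pooled = math.sqrt(1 / sum(w_star))

        print("pooled OR:", round(math.exp(y_pooled), 2),
              "95% CI:", round(math.exp(y_pooled - 1.96 * se_pooled), 2),
              "-", round(math.exp(y_pooled + 1.96 * se_pooled), 2))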

    Recovering user-interactions of Rich Internet Applications through replaying of HTTP traces

    No full text
    In this paper, we study the “session reconstruction” problem: the reconstruction of user interactions from the recorded request/response logs of a session. Reconstruction is especially useful when the only available information about a session is its HTTP trace, as could be the case during a forensic analysis of an attack on a website. Solutions to the reconstruction problem exist for “traditional” web applications, but these solutions cannot handle modern “Rich Internet Applications” (RIAs). Our solution is implemented in the context of RIAs in a tool called D-ForenRIA. The tool is made of a proxy and a set of browsers. The browsers are responsible for trying candidate actions on each DOM, and the proxy, which contains the observed HTTP trace, is responsible for responding to the browsers’ requests and validating attempted actions on each DOM. D-ForenRIA has a distributed architecture, uses a learning mechanism to guide the session reconstruction process efficiently, and can handle complex user inputs, client-side randomness, and, to some extent, actions that do not generate any HTTP traffic. In addition, concurrent reconstruction makes the system scalable for real-world use. The results of our evaluation on several RIAs show that D-ForenRIA can efficiently reconstruct user sessions in practice.
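    The proxy/browser division of labour described above can be sketched as a simple matching loop: candidate actions proposed on the current DOM are accepted when the request they would generate matches the next request in the recorded trace. The names used below (Request, Action, next_actions) are illustrative assumptions, not D-ForenRIA's actual interfaces.

        # Hypothetical sketch of the trace-matching idea from the abstract.
        from dataclasses import dataclass

        @dataclass
        class Request:
            method: str
            url: str

        @dataclass
        class Action:
            element: str        # e.g. a selector for the clicked element
            request: Request    # the request this action would generate

        def reconstruct(trace, initial_actions, next_actions):
            """trace: recorded requests, in order.
            initial_actions: candidate actions on the first DOM.
            next_actions(action): candidate actions on the DOM reached by `action`."""
            session = []
            candidates = initial_actions
            for recorded in trace:
                match = next((a for a in candidates
                              if a.request.method == recorded.method
                              and a.request.url == recorded.url), None)
                if match is None:
                    break  # e.g. an action with no HTTP traffic; needs other heuristics
                session.append(match.element)
                candidates = next_actions(match)  # move to the next DOM state
            return session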

    Reconstructing Interactions with Rich Internet Applications from HTTP Traces

    No full text
    This chapter describes the design and implementation of ForenRIA, a forensic tool for performing automated and complete reconstructions of user sessions with rich Internet applications using only the HTTP logs. ForenRIA recovers all the application states rendered by the browser, reconstructs screenshots of the states, and lists every action taken by the user, including recovering user inputs. Rich Internet applications are deployed widely, including on mobile systems. Recovering information from logs for these applications is significantly more challenging than for classical web applications because the HTTP traffic predominantly contains application data with no obvious clues about what the user did to trigger the traffic. ForenRIA is the first forensic tool that specifically targets rich Internet applications. Experiments demonstrate that the tool can successfully handle relatively complex rich Internet applications.

    Infectious and Parasitic Diseases of the Alimentary Tract

    No full text