
    A security analysis of automated Chinese Turing tests

    Text-based Captchas have been widely used to deter misuse of services on the Internet. However, many designs have been broken. It is intellectually interesting and practically relevant to look for alternative designs, which are currently a topic of active research. We motivate the study of Chinese Captchas as an interesting alternative design: counterintuitively, it is possible to design Chinese Captchas that are universally usable, even by those who have never studied the Chinese language. More importantly, we ask a fundamental question: is the segmentation-resistance principle established for Roman-character-based Captchas applicable to Chinese-based designs? With deep learning techniques, we offer the first evidence that computers do recognize individual Chinese characters well, regardless of distortion levels. This suggests that many real-world Chinese schemes are insecure, in contrast to common beliefs. Our result offers an essential guideline for the design of secure Chinese Captchas, and it is also applicable to Captchas using other large-alphabet languages such as Japanese.
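    As an illustration of the per-character attack the abstract alludes to, the following is a minimal sketch, assuming a small convolutional classifier over single distorted character crops; the network shape, input size, and the 3,755-character alphabet (level-1 GB2312) are assumptions, not the authors' code.

```python
# Minimal sketch: a CNN that classifies one distorted character image
# into one of NUM_CLASSES Chinese characters (alphabet size assumed).
import torch
import torch.nn as nn

NUM_CLASSES = 3755  # assumption: level-1 GB2312 character set

class CharCNN(nn.Module):
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(128 * 8 * 8, num_classes)

    def forward(self, x):  # x: (batch, 1, 64, 64) grayscale character crops
        return self.classifier(self.features(x).flatten(1))

model = CharCNN()
logits = model(torch.randn(4, 1, 64, 64))  # 4 dummy character images
print(logits.shape)  # torch.Size([4, 3755])
```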

    Mothers' Adaptation to Caring for a New Baby

    To date, most research on parents' adjustment after adding a new baby to their family unit has focused on mothers' initial transition to parenthood. This past research has examined changes in mothers' marital satisfaction and perceived well-being across the transition, and has compared their prenatal expectations to their postnatal experiences. This project assessed first-time and experienced mothers' stress and satisfaction associated with parenting, their adjustment to competing demands, and their perceived well-being longitudinally before and after the birth of a baby. Additionally, it assessed how maternal and child-related variables influenced the trajectory of mothers' postnatal adaptation. These variables included mothers' age, their education level, their prenatal expectations and postnatal experiences concerning shared infant care, their satisfaction with the division of infant caregiving, and their perceptions of their infant's temperament. Mothers (N = 136) completed an online survey during their third trimester and additional online surveys when their baby was approximately 2, 4, 6, and 8 weeks old.
    First-time mothers prenatally expected a more equal division of infant caregiving between themselves and their partners than did experienced mothers. Both first-time and experienced mothers reported less assistance from their partners than they had prenatally expected. Additionally, they experienced almost twice as many violated expectations as met expectations. Growth curve modeling revealed that a cubic function of time best fit the trajectory of mothers' postnatal parenting satisfaction. Mothers reported less parenting satisfaction at 4 weeks, compared to 2 and 6 weeks, and reported stability in their satisfaction between 6 and 8 weeks. A quadratic function of time best fit the trajectories of mothers' postnatal parenting stress and adjustment to the demands of their baby. Mothers reported more stress and difficulty adjusting to their baby's demands at 4 and 6 weeks, compared to 2 and 8 weeks. A linear function of time best fit the trajectories of mothers' adjustment to home demands, generalized state anxiety, and depressive symptoms. Mothers reported less difficulty meeting home demands, less generalized anxiety, and fewer depressive symptoms across the postnatal period. Mothers' violated expectations were associated with level differences in all aspects of mothers' postnatal adaptation except their adjustment to home demands. Specifically, more violated expectations, in number or in magnitude, were associated with poorer postnatal adaptation. Mothers' violated expectations were not associated with the slope of mothers' postnatal adaptation trajectories. Exploratory models revealed that other maternal and child-related variables also impacted the level and slope of mothers' postnatal adaptation.
    Overall, first-time and experienced mothers were more similar than different with regard to their postnatal adaptation. This study suggests that prior findings concerning adults' initial transition to parenthood may also apply to adults during each addition of a new baby to the family unit. Additionally, mothers who reported less of a mismatch between their expectations and experiences concerning shared infant care had fewer issues adapting during the postnatal period. Thus, methods to increase the assistance mothers receive from their partners should be sought. Limitations of this study and suggestions for future research are also discussed.
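    Growth-curve analyses of the kind described above are commonly fit as mixed-effects models with polynomial time terms. The following is a minimal sketch under assumed column names (`mother_id`, `weeks`, `satisfaction`) and a hypothetical data file; it is not the study's actual analysis code.

```python
# Hypothetical sketch: cubic growth curve for parenting satisfaction
# over postnatal weeks 2-8, with random intercepts per mother.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("postnatal_surveys.csv")  # assumed long-format data

model = smf.mixedlm(
    "satisfaction ~ weeks + I(weeks**2) + I(weeks**3)",  # cubic time trend
    data=df,
    groups=df["mother_id"],  # repeated measures nested within mothers
)
result = model.fit()
print(result.summary())
```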

    Enhancing Online Security with Image-based Captchas

    Given the data loss, productivity, and financial risks posed by security breaches, there is a great need to protect online systems from automated attacks. Completely Automated Public Turing Tests to Tell Computers and Humans Apart, known as CAPTCHAs, are commonly used as one layer in providing online security. These tests are intended to be easily solvable by legitimate human users while being challenging for automated attackers to complete successfully. Traditionally, CAPTCHAs have asked users to perform tasks based on text recognition or categorization of discrete images to prove whether or not they are legitimate human users. Over time, the efficacy of these CAPTCHAs has been eroded by improved optical character recognition, image classification, and machine learning techniques that can accurately solve many CAPTCHAs at rates approaching those of humans. These CAPTCHAs can also be difficult to complete using the touch-based input methods found on widely used tablets and smartphones.
    This research proposes the design of CAPTCHAs that address the shortcomings of existing implementations. These CAPTCHAs require users to perform different image-based tasks, including face detection, face recognition, multimodal biometrics recognition, and object recognition, to prove they are human. These are tasks that humans excel at but which remain difficult for computers to complete successfully. They can also be readily performed using click- or touch-based input methods, facilitating their use on both traditional computers and mobile devices.
    Several strategies are utilized by the CAPTCHAs developed in this research to enable high human success rates while ensuring negligible automated attack success rates. One such technique, used by fgCAPTCHA, employs image quality metrics and face detection algorithms to calculate fitness values representing the simulated performance of human users and automated attackers, respectively, at solving each generated CAPTCHA image. A genetic learning algorithm uses these fitness values to determine customized generation parameters for each CAPTCHA image. Other approaches, including gradient descent learning, artificial immune systems, and multi-stage performance-based filtering processes, are also proposed in this research to optimize the generated CAPTCHA images.
    An extensive RESTful web service-based evaluation platform was developed to facilitate the testing and analysis of the CAPTCHAs developed in this research. Users recorded over 180,000 attempts at solving these CAPTCHAs using a variety of devices. The results show the designs created in this research offer high human success rates, up to 94.6% in the case of aiCAPTCHA, while ensuring resilience against automated attacks.
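    The genetic search fgCAPTCHA is described as using can be sketched as below. This is a toy illustration, not fgCAPTCHA's implementation: the two `simulate_*` functions are placeholder stand-ins for the image quality metrics (human proxy) and face detection algorithms (attacker proxy) named in the abstract, and the parameter set is assumed.

```python
# Toy genetic loop: evolve CAPTCHA generation parameters so that the
# simulated human score stays high while the simulated attacker score
# stays low. All scoring functions are placeholder assumptions.
import random

def simulate_human(params):
    # Stub proxy: assume readability falls as distortion/noise increase.
    return 1.0 - 0.5 * params["distortion"] - 0.2 * params["noise"]

def simulate_attacker(params):
    # Stub proxy: assume automated detectors degrade faster than humans.
    return max(0.0, 1.0 - 1.5 * params["distortion"] - 0.8 * params["noise"])

def fitness(params):
    # High fitness: humans succeed, automated attackers fail.
    return simulate_human(params) - simulate_attacker(params)

def evolve(pop_size=20, generations=50, mutation_rate=0.2):
    pop = [{"distortion": random.random(), "noise": random.random()}
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = []
        for mom, dad in zip(parents, reversed(parents)):
            child = {k: random.choice((mom[k], dad[k])) for k in mom}
            if random.random() < mutation_rate:  # small random perturbation
                key = random.choice(list(child))
                child[key] = min(1.0, max(0.0, child[key] + random.uniform(-0.1, 0.1)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())  # best generation parameters found
```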

    Denial of Service in Web-Domains: Building Defenses Against Next-Generation Attack Behavior

    The existing state-of-the-art in the field of application layer Distributed Denial of Service (DDoS) protection is generally designed, and thus effective, only for static web domains. To the best of our knowledge, our work is the first that studies the problem of application layer DDoS defense in web domains of dynamic content and organization, and for next-generation bot behaviour. In the first part of this thesis, we focus on the following research tasks: 1) we identify the main weaknesses of the existing application-layer anti-DDoS solutions as proposed in the research literature and in industry, 2) we obtain a comprehensive picture of current-day as well as next-generation application-layer attack behaviour, and 3) we propose novel techniques, based on a multidisciplinary approach that combines offline machine learning algorithms and statistical analysis, for detection of suspicious web visitors in static web domains. Then, in the second part of the thesis, we propose and evaluate a novel anti-DDoS system that detects a broad range of application-layer DDoS attacks, both in static and dynamic web domains, through the use of advanced data mining techniques. The key advantage of our system relative to other systems that resort to challenge-response tests (such as CAPTCHAs) in combating malicious bots is that it minimizes the number of these tests presented to valid human visitors while succeeding in preventing most malicious attackers from accessing the web site. The results of the experimental evaluation of the proposed system demonstrate effective detection of current and future variants of application layer DDoS attacks.
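    The selective-challenge idea, showing CAPTCHAs only to visitors whose sessions look suspicious, can be sketched as follows. This is a hedged illustration, not the thesis's system: the session features, training data, and threshold are all assumptions.

```python
# Sketch: an offline-trained classifier scores each web session; only
# sessions scored as likely bots receive a challenge-response test.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-session features: [requests/min, mean think time (s),
# fraction of dynamic-content requests, unique pages visited]
X_train = np.array([
    [5, 8.0, 0.3, 4],    # human-like sessions...
    [4, 12.0, 0.5, 6],
    [300, 0.1, 0.9, 2],  # bot-like sessions...
    [250, 0.2, 0.8, 1],
])
y_train = np.array([0, 0, 1, 1])  # 0 = human, 1 = bot

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

def handle_session(features, challenge_threshold=0.7):
    p_bot = clf.predict_proba([features])[0][1]
    if p_bot >= challenge_threshold:
        return "serve CAPTCHA"   # challenge only suspicious visitors
    return "serve content"       # valid humans pass unchallenged

print(handle_session([6, 9.0, 0.4, 5]))  # -> 'serve content'
```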

    Artificial Intelligence in Computer Networks: Role of AI in Network Security

    Artificial Intelligence (AI) in computer networks has been emerging for the last decade, and revolutionary inventions have brought automation and digitalization to the fields of the Internet. Computer networks are laid out in layered topologies; with the help of AI, a virtual software layer has been added that runs predictive algorithms based on Artificial Neural Networks (ANNs), Machine Learning (ML), and Deep Learning (DL). This thesis analyzes the relationship between AI algorithms and the duplication of human cognitive behavior in emerging technologies. The advantages of AI in computer networks include automation, digitalization, the Internet of Things (IoT), and centralization of data; at the same time, the biggest disadvantage is the ethical violation of privacy and the security of data. The thesis further discusses how AI-driven security makes use of many protocols, including Next-Generation Firewalls, to prevent security violations. Software Network Analysis (SNA) and Software Defined Networks (SDN) are critical components of AI in computer networks.
    The thesis analyzes two main aspects: the role of AI in computer networks, and how AI helps secure computer networks against modern threats. Security has become one of the main concerns today: a production network receives on the order of thousands of attacks of different scales every day, and if proper network security measures are not configured and taken, a lot can be compromised. Network virtualization and cloud computing have seen exponential growth in the past few years, driven by the trend toward less human interaction and the automation of repetitive tasks. Data is now more important than it has been in decades, because everything is moving toward digitalization, and proper Information Security policies are derived and implemented all over the world to ensure the protection of data. Europe has its own General Data Protection Regulation (GDPR), which requires every company that deals with data to implement measures ensuring the data is protected, including the right network security measures so that only the right people have access to sensitive information. This thesis covers the overall impact of Artificial Intelligence on computer networks and network security.
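    As a concrete, if simplified, illustration of ML-assisted network security of the kind this survey discusses, unsupervised anomaly detection can flag traffic that deviates from a learned baseline. The flow features and thresholds below are assumptions, not taken from the thesis.

```python
# Sketch: train an anomaly detector on normal flow statistics, then
# flag flows that deviate sharply from the learned baseline.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical flow features: [packets/s, bytes/s, distinct dest ports]
rng = np.random.default_rng(0)
normal_traffic = rng.normal([50, 4e4, 3], [10, 5e3, 1], size=(500, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

suspect = np.array([[900, 2e6, 120]])  # burst hitting many ports
print(detector.predict(suspect))       # -1 flags an anomaly
```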

    A goal-oriented user interface for personalized semantic search

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, February 2006. Includes bibliographical references (v. 2, leaves 280-288).
    Users have high-level goals when they browse the Web or perform searches. However, the two primary user interfaces positioned between users and the Web, Web browsers and search engines, have very little interest in users' goals. Present-day Web browsers provide only a thin interface between users and the Web, and present-day search engines rely solely on keyword matching. This thesis leverages large knowledge bases of semantic information to provide users with a goal-oriented Web browsing experience. By understanding the meaning of Web pages and search queries, this thesis demonstrates how Web browsers and search engines can proactively suggest content and services to users that are both contextually relevant and personalized. This thesis presents (1) Creo, a Programming by Example system that allows users to teach their computers how to automate interactions with their favorite Web sites by providing a single demonstration, (2) Miro, a Data Detector that matches the content of a Web page to high-level user goals, and allows users to perform semantic searches, and (3) Adeo, an application that streamlines browsing the Web on mobile devices, allowing users to complete actions with a minimal amount of input and output. An evaluation with 34 subjects found that they were more effective at completing tasks when using these applications, and that the subjects would use these applications if they had access to them. Beyond these three user interfaces, this thesis also explores a number of underlying issues, including (1) automatically providing semantics to unstructured text, (2) building robust applications on top of messy knowledge bases, (3) leveraging surrounding context to disambiguate concepts that have multiple meanings, and (4) learning new knowledge by reading the Web.
    by Alexander James Faaborg. S.M.
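    Issue (3) above, using surrounding context to disambiguate concepts with multiple meanings, can be illustrated with a toy gloss-overlap (Lesk-style) sketch. The thesis itself works over large semantic knowledge bases; the glosses and scoring here are simplified assumptions, not its actual method.

```python
# Toy context-based disambiguation: pick the sense whose gloss words
# overlap most with the words surrounding the ambiguous term.
SENSES = {
    "apple": {
        "fruit": {"eat", "tree", "pie", "fresh", "juice"},
        "company": {"iphone", "mac", "store", "software", "computer"},
    }
}

def disambiguate(word, context_words):
    context = set(w.lower() for w in context_words)
    scores = {sense: len(gloss & context)
              for sense, gloss in SENSES[word].items()}
    return max(scores, key=scores.get)

print(disambiguate("apple", "I bought software at the apple store".split()))
# -> 'company' (overlap with 'software' and 'store')
```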

    E-Learning in Higher and Adult Education


    Increasing the robustness of networked systems

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Includes bibliographical references (p. 133-143).
    What popular news do you recall about networked systems? You've probably heard about the several-hour failure at Amazon's computing utility that knocked down many startups, or the attacks that made Estonian government web-sites inaccessible for several days, or you may have observed inexplicably slow responses or errors from your favorite web site. Needless to say, keeping networked systems robust to attacks and failures is an increasingly significant problem. Why is it hard to keep networked systems robust? We believe that uncontrollable inputs and complex dependencies are the two main reasons. The owner of a web-site has little control over when users arrive; the operator of an ISP has little say in when a fiber gets cut; and the administrator of a campus network is unlikely to know exactly which switches or file-servers may be causing a user's sluggish performance. Despite unpredictable or malicious inputs and complex dependencies, we would like a network to manage itself, i.e., diagnose its own faults and continue to maintain good performance. This dissertation presents a generic approach to harden networked systems by distinguishing between two scenarios. For systems that need to respond rapidly to unpredictable inputs, we design online solutions that re-optimize resource allocation as inputs change. For systems that need to diagnose the root cause of a problem in the presence of complex subsystem dependencies, we devise techniques to infer these dependencies from packet traces and build functional representations that facilitate reasoning about the most likely causes of faults. We present a few solutions, as examples of this approach, that tackle an important class of network failures. Specifically, we address (1) re-routing traffic around congestion when traffic spikes or links fail in internet service provider networks, (2) protecting websites from denial-of-service attacks that mimic legitimate users, and (3) diagnosing causes of performance problems in enterprise and campus-wide networks. Through a combination of implementations, simulations, and deployments, we show that our solutions advance the state-of-the-art.
    by Srikanth Kandula. Ph.D.
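    The dependency-inference step can be sketched with a simple co-occurrence heuristic: if a flow to service B reliably starts shortly after a flow to service A within the same client's trace, infer a likely A -> B dependency. The window, support threshold, and trace format below are illustrative assumptions, not the dissertation's actual algorithm or parameters.

```python
# Sketch: infer service dependencies from flow-start co-occurrence
# within a short time window, per client.
from collections import Counter
from itertools import combinations

# Hypothetical trace: (timestamp_seconds, client, service) flow starts
trace = [
    (0.00, "c1", "dns"), (0.02, "c1", "web"), (0.05, "c1", "db"),
    (1.00, "c2", "dns"), (1.03, "c2", "web"), (1.07, "c2", "db"),
    (2.00, "c3", "web"), (2.04, "c3", "db"),
]

def infer_dependencies(trace, window=0.1, min_support=2):
    pairs = Counter()
    for (t1, c1, s1), (t2, c2, s2) in combinations(sorted(trace), 2):
        # Count s1 -> s2 when the same client starts s2 soon after s1.
        if c1 == c2 and s1 != s2 and 0 < t2 - t1 <= window:
            pairs[(s1, s2)] += 1
    return [pair for pair, n in pairs.items() if n >= min_support]

print(infer_dependencies(trace))
# -> [('dns', 'web'), ('dns', 'db'), ('web', 'db')]
```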

    Contributions to the Detection of Distributed Denial-of-Service (DDoS) Attacks at the Application Layer

    Six aspects of DDoS attack detection were analyzed: techniques, variables, tools, implementation location, point in time, and detection accuracy. This analysis made it possible to contribute usefully to the design of a suitable strategy for neutralizing these attacks. In recent years, these attacks have shifted toward the application layer, a phenomenon due mainly to the large number of tools available for generating this type of attack. This work therefore also proposes a detection alternative based on the dynamism of the web user, evaluating user-dynamism features extracted from mouse and keyboard events. Finally, this work proposes a low-cost detection approach consisting of two steps: first, user features are extracted in real time while the user navigates the web application; second, each extracted feature is used by an ordering algorithm (O1) to differentiate a real user from a DDoS attack. Test results with the LOIC, OWASP, and GoldenEye attack tools show that the proposed method achieves a detection efficacy of 100% and that web-user dynamism features make it possible to differentiate between a real user and a bot.
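    The two-step idea can be sketched as below: extract simple mouse/keyboard dynamism features from captured events, then apply per-feature decision rules. The features and thresholds are illustrative assumptions; the abstract does not specify the O1 algorithm beyond its role as a per-feature decision step.

```python
# Sketch: humans show irregular input timing; scripted bots tend to
# replay events with near-zero variability.
import statistics

def extract_features(mouse_moves, key_times):
    # mouse_moves: [(t, x, y), ...]; key_times: [t, ...] keystroke instants
    dts = [b[0] - a[0] for a, b in zip(mouse_moves, mouse_moves[1:])]
    key_gaps = [b - a for a, b in zip(key_times, key_times[1:])]
    return {
        "mouse_dt_stdev": statistics.stdev(dts) if len(dts) > 1 else 0.0,
        "key_gap_stdev": statistics.stdev(key_gaps) if len(key_gaps) > 1 else 0.0,
    }

def is_human(features):
    # Illustrative thresholds: flag as bot when timing is too regular.
    return features["mouse_dt_stdev"] > 0.005 or features["key_gap_stdev"] > 0.02

bot = extract_features([(i * 0.01, i, i) for i in range(50)],  # metronomic
                       [i * 0.1 for i in range(20)])
print(is_human(bot))  # False: perfectly regular timing
```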

    Crowd sourcing for translation and software localization

    This work studies the capability of crowdsourcing for translation and software localization, and the quality obtained through a crowdsourcing methodology. The work was performed in collaboration with CA Labs, Europe, and is specifically focused on the design of a crowdsourcing platform able to guarantee high quality in translation and to comply with industrial aspects of translation. Moreover, a prototype of the designed platform has been developed and used to run experiments with a small, controlled crowd, to test the potential of translation done by a non-homogeneous group of users. The reasons, challenges, road map, and results obtained in this work are described in detail in this document.