53 research outputs found

    NEURAL NETWORK CAPTCHA CRACKER

    A CAPTCHA (an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart) is a type of challenge-response test used to determine whether or not a user providing the response is human. In this project, we used a deep neural network framework for CAPTCHA recognition. The core idea of the project is to learn a model that breaks image-based CAPTCHAs. Instead of the conventional approach of segmenting a CAPTCHA and recognizing the segments, we used convolutional and recurrent neural networks. Our models consist of two convolutional layers to learn image features and a recurrent layer to output the character sequence. We tried different configurations, including wide and narrow layers and deep and shallow networks. We synthetically generated a CAPTCHA dataset of varying complexity, using different generation libraries to avoid overfitting to any one of them. We trained on both fixed- and variable-length CAPTCHAs and achieved accuracies of 99.8% and 80%, respectively.
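
    As a concrete illustration of the architecture described above (two convolutional layers feeding a recurrent layer that emits the character sequence), here is a minimal PyTorch sketch. The layer widths, input resolution, 36-character alphabet, and fixed length of five characters are illustrative assumptions, not the project's exact configuration.

```python
import torch
import torch.nn as nn

class CaptchaNet(nn.Module):
    """Illustrative two-conv + recurrent CAPTCHA model (sizes assumed)."""
    def __init__(self, num_classes=36, captcha_len=5):
        super().__init__()
        # Two convolutional layers learn image features.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # A recurrent layer reads feature columns left to right and
        # emits one prediction per character position.
        self.rnn = nn.GRU(input_size=64 * 16, hidden_size=128, batch_first=True)
        self.classifier = nn.Linear(128, num_classes)
        self.captcha_len = captcha_len

    def forward(self, x):             # x: (batch, 1, 64, 160) grayscale
        f = self.features(x)          # -> (batch, 64, 16, 40)
        f = f.permute(0, 3, 1, 2)     # treat image width as the time axis
        f = f.flatten(2)              # -> (batch, 40, 64 * 16)
        out, _ = self.rnn(f)          # -> (batch, 40, 128)
        out = out[:, -self.captcha_len:, :]  # one step per character
        return self.classifier(out)   # -> (batch, captcha_len, num_classes)

model = CaptchaNet()
logits = model(torch.randn(8, 1, 64, 160))
print(logits.shape)                   # torch.Size([8, 5, 36])
```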

    Retrieving, annotating and recognizing human activities in web videos

    Recent efforts in computer vision tackle the problem of human activity understanding in video sequences. Traditionally, these algorithms require annotated video data to learn models. In this work, we introduce a novel data collection framework that takes advantage of the large amount of video data available on the web. We use this framework to retrieve videos of human activities and to build training and evaluation datasets for computer vision algorithms. We rely on Amazon Mechanical Turk workers to obtain high-accuracy annotations, and an agglomerative clustering technique makes it possible to achieve reliable and consistent annotations for the temporal localization of human activities in videos. Using two datasets, Olympic Sports and our novel Daily Human Activities dataset, we show that our collection and annotation framework produces robust annotations of human activities in large amounts of video data.
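
    A minimal sketch of how agglomerative clustering can consolidate noisy temporal annotations from multiple Mechanical Turk workers, in the spirit of the framework described above. The interval endpoints as features, the distance threshold, and the median merging are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def consolidate_annotations(intervals, max_gap=1.5):
    """intervals: list of (start, end) in seconds, one per worker answer.
    Returns one median (start, end) per cluster of agreeing workers."""
    X = np.asarray(intervals, dtype=float)
    clustering = AgglomerativeClustering(
        n_clusters=None,             # let the threshold decide cluster count
        distance_threshold=max_gap,  # merge intervals closer than max_gap
        linkage="average",
    ).fit(X)
    merged = []
    for label in np.unique(clustering.labels_):
        members = X[clustering.labels_ == label]
        merged.append(tuple(np.median(members, axis=0)))
    return sorted(merged)

# Four workers marked roughly the same activity; one was far off:
print(consolidate_annotations(
    [(12.0, 15.1), (11.8, 15.4), (12.3, 14.9), (40.2, 43.0), (12.1, 15.0)]))
# -> [(12.05, 15.05), (40.2, 43.0)]
```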

    A Design Thinking Framework for Human-Centric Explainable Artificial Intelligence in Time-Critical Systems

    Artificial Intelligence (AI) has seen a surge in popularity as increased computing power has made it more viable and useful. The increasing complexity of AI, however, can lead to difficulty in understanding or interpreting the results of AI procedures, which can in turn lead to incorrect predictions, classifications, or analyses of outcomes. The result of these problems can be over-reliance on AI, under-reliance on AI, or simply confusion as to what the results mean. Additionally, the complexity of AI models can obscure the algorithmic, data, and design biases to which all models are subject, which may exacerbate negative outcomes, particularly with respect to minority populations. Explainable AI (XAI) aims to mitigate these problems by providing information on the intent, performance, and reasoning process of the AI. Where time or cognitive resources are limited, the burden of this additional information can negatively impact performance. Ensuring XAI information is intuitive and relevant allows the user to quickly calibrate their trust in the AI, in turn improving trust in suggested task alternatives, reducing workload, and improving task performance. This study details a structured approach to the development of XAI in time-critical systems, based on a design thinking framework that preserves the agile, fast-iterative approach characteristic of design thinking and augments it with practical tools and guides. The framework focuses the empathy phase on a shared situational perspective and a deep understanding of both the users and the AI, provides a model with seven XAI levels and corresponding solution themes, and defines objective, physiological metrics for the concurrent assessment of trust and workload.

    Future of the Internet--and how to stop it

    vi, 342 p. : ill. ; 25 cm. Electronic book. On January 9, 2007, Steve Jobs introduced the iPhone to an eager audience crammed into San Francisco's Moscone Center. A beautiful and brilliantly engineered device, the iPhone blended three products into one: an iPod, with the highest-quality screen Apple had ever produced; a phone, with cleverly integrated functionality, such as voicemail that came wrapped as separately accessible messages; and a device to access the Internet, with a smart and elegant browser, and with built-in map, weather, stock, and e-mail capabilities. It was a technical and design triumph for Jobs, bringing the company into a market with an extraordinary potential for growth, and pushing the industry to a new level of competition in ways to connect us to each other and to the Web. Includes bibliographical references (p. 249-328) and index. Access restricted to members of the Consorcio de Bibliotecas Universitarias de Andalucía. Electronic reproduction: Palo Alto, Calif. : ebrary, 2009. Mode of access: World Wide Web. Contents: pt. 1. The rise and stall of the generative Net -- Battle of the boxes -- Battle of the networks -- Cybersecurity and the generative dilemma -- pt. 2. After the stall -- The generative pattern -- Tethered appliances, software as service, and perfect enforcement -- The lessons of Wikipedia -- pt. 3. Solutions -- Stopping the future of the Internet : stability on a generative Net -- Strategies for a generative future -- Meeting the risks of generativity : Privacy 2.0 -- Index.

    A goal-oriented user interface for personalized semantic search

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, February 2006. Includes bibliographical references (v. 2, leaves 280-288). Users have high-level goals when they browse the Web or perform searches. However, the two primary user interfaces positioned between users and the Web, Web browsers and search engines, have very little interest in users' goals. Present-day Web browsers provide only a thin interface between users and the Web, and present-day search engines rely solely on keyword matching. This thesis leverages large knowledge bases of semantic information to provide users with a goal-oriented Web browsing experience. By understanding the meaning of Web pages and search queries, this thesis demonstrates how Web browsers and search engines can proactively suggest content and services to users that are both contextually relevant and personalized. This thesis presents (1) Creo, a Programming by Example system that allows users to teach their computers how to automate interactions with their favorite Web sites by providing a single demonstration; (2) Miro, a Data Detector that matches the content of a Web page to high-level user goals and allows users to perform semantic searches; and (3) Adeo, an application that streamlines browsing the Web on mobile devices, allowing users to complete actions with a minimal amount of input and output. An evaluation with 34 subjects found that they were more effective at completing tasks when using these applications, and that the subjects would use these applications if they had access to them. Beyond these three user interfaces, this thesis also explores a number of underlying issues, including (1) automatically providing semantics to unstructured text, (2) building robust applications on top of messy knowledge bases, (3) leveraging surrounding context to disambiguate concepts that have multiple meanings, and (4) learning new knowledge by reading the Web. By Alexander James Faaborg. S.M.
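
    A toy sketch of the Data Detector idea behind Miro: scan page text for concepts from a knowledge base and map them to high-level user goals. The hand-written knowledge base, patterns, and goal names here are invented for illustration; the thesis builds on far larger semantic knowledge bases.

```python
import re

# concept -> (surface patterns, user goals the concept suggests);
# both sides are hypothetical stand-ins for a real knowledge base.
KNOWLEDGE_BASE = {
    "restaurant": (["restaurant", "bistro", "diner"],
                   ["reserve a table", "read reviews"]),
    "movie":      (["movie", "film", "showtime"],
                   ["buy tickets", "watch a trailer"]),
    "address":    ([r"\d+ \w+ (street|st|avenue|ave)"],
                   ["get directions"]),
}

def detect_goals(page_text):
    """Return (concept, goal) pairs suggested by concepts found in the page."""
    text = page_text.lower()
    goals = []
    for concept, (patterns, concept_goals) in KNOWLEDGE_BASE.items():
        if any(re.search(p, text) for p in patterns):
            goals.extend((concept, g) for g in concept_goals)
    return goals

print(detect_goals("Luigi's Bistro, 42 Main Street: tonight's showtimes..."))
# -> restaurant, movie, and address goals, all from one page
```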

    Security architecture for Fog-To-Cloud continuum system

    As the number of devices connected to the Internet grows rapidly, cloud computing alone cannot handle real-time processing. Fog computing therefore emerged to provide data processing, filtering, aggregation, storage, networking, and computation closer to users, offering real-time processing with lower latency than the cloud. However, fog computing did not emerge to compete with the cloud but to complement it, and so a hierarchical Fog-to-Cloud (F2C) continuum system was introduced. The F2C system enables collaboration between distributed fogs and the centralized cloud. One of the main challenges in F2C systems is security. The traditional cloud is not suitable as the security provider for an F2C system because it is a single point of failure, and the growing number of devices at the edge of the network raises scalability issues. Furthermore, traditional cloud security cannot be applied to fog devices because their computational power is lower than the cloud's. On the other hand, making fog nodes the security providers for the edge of the network raises Quality of Service (QoS) issues, because security algorithms consume a large share of a fog device's computational power. Some security solutions exist for fog computing, but they do not consider the hierarchical fog-to-cloud characteristics, which can lead to insecure collaboration between fog and cloud. In this thesis, the security considerations, attacks, challenges, requirements, and existing solutions are analyzed and reviewed in depth. Finally, a decoupled security architecture is proposed to provide the demanded security in a hierarchical and distributed fashion with less impact on QoS. Postprint (published version)
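
    A minimal sketch of what a decoupled, hierarchical security scheme for the F2C continuum could look like: a cloud-rooted authority delegates per-fog keys, and fog nodes then authenticate nearby edge devices locally, avoiding a cloud single point of failure while keeping the cryptographic load on constrained fog devices light. The HMAC-based tokens, class names, and delegation scheme are illustrative assumptions, not the thesis's actual architecture.

```python
import hashlib
import hmac
import os
import time

class CloudAuthority:
    """Root of trust: issues a delegation key to each fog node."""
    def __init__(self):
        self._root_key = os.urandom(32)

    def delegate(self, fog_id):
        # Derive a per-fog key so a compromised fog node exposes
        # neither the root key nor its siblings' keys.
        return hmac.new(self._root_key, fog_id.encode(), hashlib.sha256).digest()

class FogNode:
    """Authenticates edge devices without a round trip to the cloud."""
    def __init__(self, fog_id, delegated_key, ttl=300):
        self.fog_id, self._key, self.ttl = fog_id, delegated_key, ttl

    def issue_token(self, device_id):
        # Short-lived HMAC token: cheap enough for constrained hardware.
        expiry = int(time.time()) + self.ttl
        msg = f"{device_id}|{expiry}".encode()
        tag = hmac.new(self._key, msg, hashlib.sha256).hexdigest()
        return f"{device_id}|{expiry}|{tag}"

    def verify_token(self, token):
        device_id, expiry, tag = token.rsplit("|", 2)
        msg = f"{device_id}|{expiry}".encode()
        good = hmac.new(self._key, msg, hashlib.sha256).hexdigest()
        return hmac.compare_digest(tag, good) and time.time() < int(expiry)

cloud = CloudAuthority()
fog = FogNode("fog-barcelona-01", cloud.delegate("fog-barcelona-01"))
token = fog.issue_token("sensor-42")
print(fog.verify_token(token))  # True: verified at the edge, not in the cloud
```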

    Automated Reverse Engineering of Agent Behaviors
