
    Artificial Intelligence in a Main Warehouse in Panasonic: Los Indios, Texas

    The Panasonic Company warehouse is located in Los Indios, Texas. The warehouse is constrained by the great distance between company headquarters and the main warehouse that supplies the branches and major customers, so maintaining effective communication in the inventory area takes a considerable amount of time. In addition, an online review confirmed that the website is disabled, contradicting corporate policy. The thesis proposal is organized into four chapters: the Introduction, Statement of the Problem, and Purposes; Previous Studies and Definitions from the Literature; the Research Methodology and the resources for data collection; and the Results, the Proposal, and the Conclusions. The paper ends with a list of references to the substantial sources that supported the research.

    Report on Strategic Initiative to Provide Enhanced Intellectual Access to NYU-Curated Digital Collections

    This report addresses Goal no. 4 of the NYU Division of Libraries’ Strategic Plan 2013-2017, namely, “Establish processes and support structures that ensure we can select, acquire, preserve, and provide access to the full spectrum of research materials,” and specifically Initiative 4.3, “a plan to provide intellectual access to NYU-curated digital collections via the library’s primary discovery-and-access interfaces.” Since the Initiative’s inception in July 2013, participants have identified and prioritized eligible collections, collected user stories, prototyped the “Ichabod” tool for metadata aggregation and normalization, mapped metadata elements to a local Nyucore schema, and harvested the processed metadata into the development instance of BobCat. The Ichabod tool is based on Fedora, Hydra, Solr, and Blacklight. It was implemented using Agile methodology, involving developers from DLTS, KADD, and Web Services. The emerging code base, processes, and working relationships place NYU in a strong position to solve local discovery problems as well as innovate in the field of repository metadata management and enrichment.
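
    The mapping step lends itself to a small illustration. Below is a minimal sketch, in Python, of renaming harvested metadata elements to a local schema's element names; the source record, the target field names, and the map_to_nyucore helper are hypothetical, since the report does not enumerate Nyucore's elements.

```python
def map_to_nyucore(source_record: dict, mapping: dict) -> dict:
    """Rename harvested metadata elements to the local schema's element
    names, dropping anything the mapping does not cover."""
    return {target: source_record[src]
            for src, target in mapping.items() if src in source_record}

# A Dublin Core-ish record and a mapping into hypothetical Nyucore fields.
dc_record = {"dc:title": "Example Collection", "dc:creator": "NYU Libraries"}
dc_to_nyucore = {"dc:title": "title", "dc:creator": "creator"}
print(map_to_nyucore(dc_record, dc_to_nyucore))
# -> {'title': 'Example Collection', 'creator': 'NYU Libraries'}
```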

    GMODWeb: a web framework for the generic model organism database

    The Generic Model Organism Database (GMOD) initiative provides species-agnostic data models and software tools for representing curated model organism data. Here we describe GMODWeb, a GMOD project designed to speed the development of Model Organism Database (MOD) websites. Sites created with GMODWeb provide integration with other GMOD tools and allow users to browse and search through a variety of data types. GMODWeb was built using the open source Turnkey web framework and is available from http://turnkey.sourceforge.net.
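
    As a rough illustration of the species-agnostic idea, the sketch below serves any curated data type from a single generic route. Flask stands in for the web layer here purely as an assumption for illustration; this is not Turnkey's actual API, and the stub data is hypothetical.

```python
from flask import Flask, jsonify

app = Flask(__name__)
# Toy stand-in for a MOD datastore, keyed first by data type.
DATA = {"gene": {"dpp": {"name": "dpp", "organism": "Drosophila melanogaster"}}}

@app.route("/<data_type>/<ident>")
def show(data_type, ident):
    """One generic detail route serves every curated data type."""
    record = DATA.get(data_type, {}).get(ident)
    if record is None:
        return jsonify(error="not found"), 404
    return jsonify(record)

if __name__ == "__main__":
    app.run()
```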

    MT-WAVE: Profiling multi-tier web applications

    The web is evolving: what was once primarily used for sharing static content has now evolved into a platform for rich client-side applications. These applications do not run exclusively on the client; while the client is responsible for presentation and some processing, a significant amount of processing and persistence happens server-side. This has advantages and disadvantages. The biggest advantage is that the user’s data is accessible from anywhere: no matter which device you sign into a web application from, everything you’ve been working on is instantly accessible. The largest disadvantage is that large numbers of servers are required to support a growing user base; unlike traditional client applications, an organization building a web application needs to provision compute and storage resources for each expected user. This infrastructure is designed in tiers that are responsible for different aspects of the application, and these tiers may not even be run by the same organization. As these systems grow in complexity, it becomes progressively more challenging to identify and solve performance problems. While there are many measures of software system performance, web application users only care about response latency. This “fingertip-to-eyeball performance” is the only metric that users directly perceive: when a button is clicked in a web application, how long does it take for the desired action to complete? MT-WAVE is a system for solving fingertip-to-eyeball performance problems in web applications. The system is designed for multi-tier tracing: each piece of the application is instrumented, execution traces are collected, and the system merges these traces into a single coherent snapshot of system latency at every tier. To ensure that user-perceived latency is accurately captured, tracing begins in the web browser. The application developer then uses the MT-WAVE Visualization System to explore the execution traces, first identifying which tier contributes the most latency and then zooming in on the specific function calls in that tier to find optimization candidates. After fixing an identified problem, the system is used to verify that the changes had the intended effect. This optimization methodology and toolset are explained through a series of case studies that identify and solve performance problems in open-source and commercial applications. These case studies demonstrate both the utility of the MT-WAVE system and the unintuitive nature of system optimization.
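
    A toy version of the merging step may help: each tier emits spans tagged with a shared request identifier, and merging groups and orders them on one timeline so the dominant tier stands out. The span format and tier names below are hypothetical illustrations, not MT-WAVE's actual trace schema.

```python
from collections import defaultdict

# Each tier emits (request_id, tier, start_ms, end_ms) spans; the browser
# span brackets the full fingertip-to-eyeball time for the request.
spans = [
    ("req-1", "browser",   0.0, 340.0),  # click-to-render, measured client-side
    ("req-1", "webapp",   20.0, 310.0),  # server-side request handling
    ("req-1", "database", 60.0, 250.0),  # queries issued by the web tier
]

def merge(spans):
    """Group spans by request id and order each trace on a shared timeline."""
    traces = defaultdict(list)
    for req, tier, start, end in spans:
        traces[req].append((start, end, tier))
    return {req: sorted(trace) for req, trace in traces.items()}

for req, trace in merge(spans).items():
    print(req)
    for start, end, tier in trace:
        print(f"  {tier:<8} {end - start:6.1f} ms")
```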

    Data Model Verification via Theorem Proving

    Software applications have moved from desktop computers onto the web. This is not surprising, since web applications provide many advantages, such as ubiquitous access and distributed processing power. However, these benefits come at a cost. Web applications are complex distributed systems written in multiple languages. As such, they are prone to errors at any stage of development, and difficult to verify, or even test. Considering that web applications store and manage data for millions (even billions) of users, errors in web applications can have disastrous effects. In this dissertation, we present a method for verifying code that is used to access and modify data in web applications. We focus on applications that use frameworks such as Ruby on Rails, Django, or Spring. These frameworks are RESTful, enforce the Model-View-Controller architecture, and use Object Relational Mapping libraries to manipulate data. We developed a formal model for data stores and data store manipulation, including access control. We developed a translation of these models to formulas in First Order Logic (FOL) that allows for verification of data model invariants using off-the-shelf FOL theorem provers. In addition, we developed a method for extracting these models from existing applications implemented in Ruby on Rails. Our results demonstrate that our approach is applicable to real-world applications, that it is able to discover previously unknown bugs, and that it does so within minutes on commonly available hardware.
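
    To make the translation idea concrete, here is a minimal sketch using Z3's Python bindings in place of the off-the-shelf FOL provers the dissertation targets (an assumption for illustration; the toy schema, the owns relations, and the delete-user action are hypothetical). It encodes the invariant "every post has an owner" and asks whether an unclean user deletion can violate it.

```python
from z3 import (Solver, DeclareSort, Function, BoolSort, Const,
                ForAll, Exists, And, Not, sat)

# Two uninterpreted sorts standing in for the application's model classes.
User = DeclareSort("User")
Post = DeclareSort("Post")
# owns(u, p): user u owns post p before the action runs.
owns = Function("owns", User, Post, BoolSort())
# owns_after(u, p): ownership after deleting `victim` without cleanup.
owns_after = Function("owns_after", User, Post, BoolSort())

u = Const("u", User)
p = Const("p", Post)
victim = Const("victim", User)

s = Solver()
# Invariant holds before the action: every post has some owner.
s.add(ForAll([p], Exists([u], owns(u, p))))
# Action semantics: deleting `victim` removes only victim's ownership facts.
s.add(ForAll([u, p], owns_after(u, p) == And(owns(u, p), u != victim)))
# Negation of the invariant afterwards: some post is left ownerless.
s.add(Exists([p], ForAll([u], Not(owns_after(u, p)))))

# sat means a counterexample exists: the action can break the invariant.
print("invariant can be violated" if s.check() == sat else "invariant preserved")
```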

    Cloud Service Broker

    Master’s dissertation in Computer and Telematics Engineering. Throughout the history of computer systems, experts have been reshaping IT infrastructure to improve the efficiency of organizations by enabling shared access to computational resources. The advent of cloud computing sparked a new paradigm, providing better hosting and service delivery over the Internet. It offers advantages over traditional solutions by providing ubiquitous, scalable, and on-demand access to shared pools of computational resources. Over the course of the last years, new market players have been offering cloud services at competitive prices and under different Service Level Agreements. With the unprecedented and increasing adoption of cloud computing, cloud providers are on the lookout for the creation and offering of new, value-added services for their customers. Market competitiveness, numerous service options, and diverse business models led to gradual entropy: mismatched cloud terminology was introduced, and incompatible APIs locked users in to specific cloud service providers. Billing and charging became fragmented when consuming cloud services from multiple vendors. An entity recommending cloud providers and acting as an intermediary between the cloud consumer and providers would harmonize this interaction. This dissertation proposes and implements a Cloud Service Broker focused on assisting and encouraging developers to run their applications on the cloud. Developers describe their applications in a simple way, and an intelligent algorithm recommends cloud offerings that best suit the application requirements. In this way, users are aided in deploying, managing, monitoring, and migrating their applications in a cloud of clouds. A single API orchestrates the whole process in tandem with fully decoupled cloud managers. Users can also interact with the Cloud Service Broker through a Web portal, a command-line interface, and client libraries.
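
    A minimal sketch of the broker's recommendation step follows; the offering catalog, the field names, and the filter-then-rank policy are hypothetical illustrations, since the dissertation does not specify the algorithm's internals.

```python
from dataclasses import dataclass

@dataclass
class Offering:
    provider: str
    vcpus: int
    ram_gb: int
    price_per_hour: float

def recommend(offerings, min_vcpus, min_ram_gb, top_n=3):
    """Keep offerings that satisfy the application requirements,
    then rank the survivors by price (cheapest first)."""
    feasible = [o for o in offerings
                if o.vcpus >= min_vcpus and o.ram_gb >= min_ram_gb]
    return sorted(feasible, key=lambda o: o.price_per_hour)[:top_n]

catalog = [
    Offering("ProviderA", 2, 4, 0.05),
    Offering("ProviderB", 4, 8, 0.09),
    Offering("ProviderC", 4, 16, 0.12),
]
for o in recommend(catalog, min_vcpus=4, min_ram_gb=8):
    print(o.provider, o.price_per_hour)
```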

    A User-driven Annotation Framework for Scientific Data

    Annotations play an increasingly crucial role in scientific exploration and discovery as the amount of data and the level of collaboration among scientists increase. Many systems today focus on annotation management, querying, and propagation. Although all such systems are implemented to take user input (i.e., the annotations themselves), very few are user-driven, taking into account user preferences on how annotations should be propagated and applied over data. In this thesis, we propose to treat annotations as first-class citizens for scientific data by introducing a user-driven, view-based annotation framework. Under this framework, we address two critical questions. First, how do we support annotations that are scalable both from a system point of view and from a user point of view? Second, how do we support annotation queries, both from an annotator point of view and from a user point of view, in an efficient and accurate way? To address these challenges, we propose the VIew-based annotation Propagation (ViP) framework, which empowers users to express their preferences over the time semantics and the network semantics of annotations, and which defines three query types for annotations. To efficiently support this novel functionality, ViP utilizes database views and introduces new annotation caching techniques. The use of views also brings a more compact representation of annotations, making our system easier to scale. Through an extensive experimental study on a real system (with both synthetic and real data), we show that the ViP framework can seamlessly introduce user-driven annotation propagation semantics while significantly improving performance (in terms of query execution time) over the current state of the art.
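
    The view-based idea can be illustrated with a few lines of SQL wrapped in Python: annotations attached to base rows follow those rows into derived views. The schema and table names are hypothetical stand-ins, not ViP's implementation, and the sketch omits the time and network semantics and the caching that ViP adds.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE readings(id INTEGER PRIMARY KEY, sensor TEXT, value REAL);
CREATE TABLE annotations(reading_id INTEGER, note TEXT, created_at TEXT);
-- A derived view over the base data; annotations should follow rows into it.
CREATE VIEW high_readings AS
  SELECT id, sensor, value FROM readings WHERE value > 10;
""")
db.executemany("INSERT INTO readings VALUES (?,?,?)",
               [(1, "s1", 5.0), (2, "s2", 12.5), (3, "s3", 42.0)])
db.execute("INSERT INTO annotations VALUES (2, 'sensor recalibrated', '2024-01-01')")

# Query the view with annotations propagated onto the matching rows.
rows = db.execute("""
    SELECT h.sensor, h.value, a.note
    FROM high_readings h LEFT JOIN annotations a ON a.reading_id = h.id
""").fetchall()
for sensor, value, note in rows:
    print(sensor, value, note or "(no annotation)")
```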

    Intentio Ex Machina: Android Intent Access Control via an Extensible Application Hook

    Android’s intent framework facilitates binder-based interprocess communication (IPC) and encourages application developers to utilize IPC in their applications with a frequency unseen in traditional desktop environments. The increased volume of IPC present on Android devices, coupled with intents’ ability to implicitly find valid receivers for IPC, brings new challenges to the computing security landscape. This work proposes Intentio Ex Machina (IEM), an access control solution for Android intent IPC security. IEM separates the logic for performing access control from the point where intents are intercepted by placing an interface in the Android framework. This allows the access control logic to be placed inside a normal application and reached via the interface. The app, called a “user firewall”, can then receive intents as they enter the system and inspect them. Not only can the user firewall allow or block intents, it can even, within designed limitations, modify them. Since the user firewall runs as a normal user application, developers are free to create their own user firewall applications, which users can then download and enable. In this way, IEM creates a new genre of security application for Android systems, allowing for creative and interactive approaches to active IPC defense.
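
    A real user firewall is an Android application written against IEM's framework interface; the Python sketch below only illustrates the allow/block/modify decision flow, and every name in it is hypothetical rather than IEM's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    action: str
    sender: str
    extras: dict = field(default_factory=dict)

ALLOW, BLOCK = "allow", "block"

def inspect(intent: Intent):
    """Return (verdict, possibly-modified intent) for an intercepted intent."""
    # Block a sensitive action unless it comes from a trusted sender.
    if intent.action == "com.example.SEND_LOGS" and intent.sender != "com.example.trusted":
        return BLOCK, intent
    # Within designed limits, the firewall may also rewrite an intent:
    # here we strip a hypothetical debug extra before letting it through.
    intent.extras.pop("debug_token", None)
    return ALLOW, intent

verdict, out = inspect(Intent("com.example.SEND_LOGS", "com.evil.app",
                              {"debug_token": "x"}))
print(verdict)  # -> block
```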