1,938 research outputs found

    Cost and Response Time Simulation for Web-based Applications on Mobile Channels

    When considering the addition of a mobile presentation channel to an existing web-based application, a key question that has to be answered even before development begins is how the mobile channel's characteristics will impact the user experience and the cost of using the application. If either of these factors is outside acceptable limits, economic considerations may rule out adding the channel, even if it would be feasible from a purely technical perspective. Both factors depend considerably on two metrics: the time required to transmit data over the mobile network, and the volume transmitted. The PETTICOAT method presented in this paper uses the dialog flow model and web server log files of an existing application to identify typical interaction sequences and to compile volume statistics, which are then run through a tool that simulates the volume and time that would be incurred by executing the interaction sequences on a mobile channel. From the simulated volume and time data, we can then calculate the cost of accessing the application on a mobile channel.

    Performance tuning and cost discovery of mobile web-based applications

    When considering the addition of a mobile presentation channel to an existing web-based application, project managers should know how the mobile channel's characteristics will impact the user experience and the cost of using the application, even before development begins. The PETTICOAT (Performance Tuning and cost discovery of mobile web-based Applications) approach presented here provides decision-makers with indicators on the economic feasibility of mobile channel development. In a nutshell, it involves analysing interaction patterns on the existing stationary channel, identifying key business processes among them, measuring the time and data volume incurred in their execution, and then simulating how the same interaction patterns would run when subjected to the constraints of a mobile channel. As a result of the simulation, we gain time and volume projections for those interaction patterns that allow us to estimate the costs incurred by executing certain business processes on different mobile channels.

    Cost Simulation and Performance Optimization of Web-based Applications on Mobile Channels

    When considering the addition of a mobile presentation channel to an existing web-based application, a key question that has to be answered even before development begins is how the mobile channel's characteristics will impact the user experience and the cost of using the application. If either of these factors is outside acceptable limits, economic considerations may rule out adding the channel, even if it would be feasible from a purely technical perspective. Both factors depend considerably on two metrics: the time required to transmit data over the mobile network, and the volume transmitted. The PETTICOAT method presented in this paper uses the dialog flow model and web server log files of an existing application to identify typical interaction sequences and to compile volume statistics, which are then run through a tool that simulates the volume and time that would be incurred by executing the interaction sequences on a mobile channel. From the simulated volume and time data, we can then calculate the cost of accessing the application on a mobile channel, and derive suitable approaches for optimizing cost and response times.
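    The last step the abstract describes, turning simulated volume and time figures into a cost estimate per tariff model, can be sketched as follows. This is a minimal illustration only: the tariff structure, sequence names, and all numbers are invented assumptions, not values from the paper.

```python
# Sketch: deriving mobile-channel cost from simulated per-sequence
# volume and time data. All tariffs and figures are illustrative.

def channel_cost(volume_kb: float, time_s: float,
                 price_per_kb: float, price_per_min: float) -> float:
    """Cost of one interaction sequence under a mixed tariff:
    a volume-based component plus a connection-time component."""
    return volume_kb * price_per_kb + (time_s / 60.0) * price_per_min

# Simulated (hypothetical) figures: (KB transmitted, seconds elapsed).
sequences = {
    "login_and_browse": (120.0, 45.0),
    "place_order":      (310.0, 130.0),
}

# Compare a purely volume-based tariff with a purely time-based one.
for name, (kb, secs) in sequences.items():
    by_volume = channel_cost(kb, secs, price_per_kb=0.002, price_per_min=0.0)
    by_time   = channel_cost(kb, secs, price_per_kb=0.0,   price_per_min=0.15)
    print(f"{name}: volume-based {by_volume:.2f}, time-based {by_time:.2f}")
```

    Ranking the same interaction sequences under different tariff models is what makes the projected figures actionable for the channel decision.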

    Current Challenges and Visions in Music Recommender Systems Research

    Music recommender systems (MRS) have experienced a boom in recent years, thanks to the emergence and success of online streaming services, which nowadays put almost all of the world's music at the user's fingertips. While today's MRS considerably help users to find interesting music in these huge catalogs, MRS research is still facing substantial challenges. In particular, when it comes to building, incorporating, and evaluating recommendation strategies that integrate information beyond simple user–item interactions or content-based descriptors and dig deep into the very essence of listener needs, preferences, and intentions, MRS research becomes a big endeavor and related publications quite sparse. The purpose of this trends and survey article is twofold. First, we identify and shed light on what we believe are the most pressing challenges MRS research is facing, from both academic and industry perspectives. We review the state of the art towards solving these challenges and discuss its limitations. Second, we detail possible future directions and visions we contemplate for the further evolution of the field. The article should therefore serve two purposes: giving the interested reader an overview of current challenges in MRS research and providing guidance for young researchers by identifying interesting, yet under-researched, directions in the field.

    Teaching and Collecting Technical Standards: A Handbook for Librarians and Educators

    Technical standards are a vital source of information for providing guidelines during the design, manufacture, testing, and use of whole products, materials, and components. To prepare students, especially engineering students, for the workforce, universities are increasing the use of standards within the curriculum. Employers believe it is important for recent university graduates to be familiar with standards. Despite the critical role standards play within academia and the workforce, little information is available on the development of standards information literacy, which includes the ability to understand the standardization process; identify types of standards; and locate, evaluate, and use standards effectively. Libraries and librarians are a critical part of standards education, and much of the discussion has been focused on the curation of standards within libraries. However, librarians also have substantial experience in developing and teaching standards information literacy curriculum. With the need for universities to develop a workforce that is well-educated on the use of standards, librarians and course instructors can apply their experiences in information literacy toward teaching students the knowledge and skills regarding standards that they will need to be successful in their field. This title provides background information for librarians on technical standards as well as collection development best practices. It also creates a model for librarians and course instructors to use when building a standards information literacy curriculum.

    Suffolk University Graduate Academic Catalog, College of Arts and Sciences and Sawyer Business School, 2014-2015

    This catalog contains information for the graduate programs in the College of Arts and Sciences and the Sawyer Business School. The catalog is a PDF version of the Suffolk website, so many pages have repeated information and links in the document will not work. The catalog is keyword searchable by pressing Ctrl+F. A-Z course descriptions are also included here as separate PDF files with lists of CAS and SBS courses. Please contact the Archives if you need assistance navigating this catalog or finding information on degree requirements or course descriptions.

    Development of a Framework for Ontology Population Using Web Scraping in Mechatronics

    One of the major challenges in engineering contexts is the efficient collection, management, and sharing of data. To address this problem, semantic technologies and ontologies are potent assets, although some tasks, such as ontology population, usually demand high maintenance effort. This thesis proposes a framework to automate data collection from sparse web resources and insert it into an ontology. First, a product ontology is created based on the combination of several reference vocabularies, namely GoodRelations, the Basic Formal Ontology, the ECLASS standard, and an information model. Then, this study introduces a general procedure for developing a web scraping agent to collect data from the web. Subsequently, an algorithm based on lexical similarity measures is presented to map the collected data to the concepts of the ontology. Lastly, the collected data is inserted into the ontology. To validate the proposed solution, this thesis implements the previous steps to collect information about microcontrollers from three different websites. Finally, the thesis evaluates the use case results, draws conclusions, and suggests promising directions for future research.
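    The mapping step the abstract mentions, matching scraped field names to ontology concepts via lexical similarity, can be sketched roughly as below. This is an assumption-laden illustration: `difflib`'s ratio stands in for the thesis's similarity measures, and the concept names, field names, and threshold are invented.

```python
# Sketch: mapping scraped attribute names to ontology concepts with a
# lexical similarity measure (difflib used as a stand-in measure).
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Case-insensitive lexical similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def map_to_ontology(field: str, concepts: list, threshold: float = 0.6):
    """Return the best-matching concept, or None if no match clears
    the threshold (the datum would then need manual curation)."""
    best = max(concepts, key=lambda c: similarity(field, c))
    return best if similarity(field, best) >= threshold else None

# Invented example concepts for a microcontroller ontology.
concepts = ["clockFrequency", "flashMemorySize", "operatingVoltage"]
print(map_to_ontology("Clock frequency (MHz)", concepts))  # matches
print(map_to_ontology("Package", concepts))                # no match
```

    A threshold keeps low-confidence matches out of the ontology, which matters when the same physical property is labeled differently across the scraped websites.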

    Data Assets: Tokenization and Valuation

    Your data (new gold, new oil) is hugely valuable (est. $13T globally) but not a balance-sheet asset. Tokenization, used by banks for payments and settlement, lets you manage, value, and monetize your data. Data is the ultimate commodity. This position paper outlines our vision and a general framework for tokenizing data and managing data assets and data liquidity, to allow individuals and organizations in the public and private sectors to gain the economic value of data while facilitating its responsible and ethical use. We examine the challenges associated with developing and securing a data economy, as well as the potential applications and opportunities of the decentralised data-tokenized economy. We also discuss the ethical considerations needed to promote the responsible exchange and use of data to fuel innovation and progress.

    Program Transformations for Information Personalization

    Personalization constitutes the mechanisms necessary to automatically customize information content, structure, and presentation to the end user in order to reduce information overload. Unlike traditional approaches to personalization, the central theme of our approach is to model a website as a program and to conduct website transformation for personalization by program transformation (e.g., partial evaluation, program slicing). The goal of this paper is to study personalization through a program transformation lens and to develop a formal model, based on program transformations, for personalized interaction with hierarchical hypermedia. The specific research issues addressed involve identifying and developing program representations and transformations suitable for classes of hierarchical hypermedia and providing supplemental interactions for improving the personalized experience. The primary form of personalization discussed is out-of-turn interaction: a technique that empowers a user navigating a hierarchical website to postpone clicking on any of the hyperlinks presented on the current page and, instead, communicate the label of a hyperlink nested deeper in the hierarchy. When the user supplies out-of-turn input, we personalize the hierarchy to reflect the user's informational need. While viewing a website as a program and site transformation as program transformation is non-traditional, it offers a new way of thinking about personalized interaction, especially with hierarchical hypermedia. Our use of program transformations casts personalization in a formal setting and provides a systematic and implementation-neutral approach to designing systems. Moreover, this approach helped connect our work to human-computer dialog management and, in particular, mixed-initiative interaction. Putting personalized web interaction on a fundamentally different landscape gave birth to this new line of research. Relating concepts in the web domain (e.g., sites, interactions) to notions in the program-theoretic domain (e.g., programs, transformations) constitutes the creativity in this work.
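    The out-of-turn idea can be illustrated with a toy sketch: the site hierarchy is modeled as a nested dict, and supplying a label out of turn prunes the hierarchy to the paths containing it, in the spirit of partially evaluating the "site program" with respect to that input. The site content and the pruning rule here are invented for illustration, not the paper's formal model.

```python
# Sketch: out-of-turn input prunes a hierarchical site to the paths
# that lead to the supplied hyperlink label.

def prune(site: dict, label: str) -> dict:
    """Keep subtrees whose root matches `label`, or that contain a
    matching descendant; drop everything else."""
    result = {}
    for name, children in site.items():
        if name == label:
            result[name] = children          # matched: keep whole subtree
        else:
            sub = prune(children, label)
            if sub:
                result[name] = sub           # keep the path down to a match
    return result

# Invented hierarchy; leaves are empty dicts.
site = {
    "Vehicles": {"Sedans": {"Used": {}}, "Trucks": {"Used": {}, "New": {}}},
    "Financing": {"Loans": {}},
}

# The user says "Used" before choosing between Sedans and Trucks:
print(prune(site, "Used"))
```

    The personalized hierarchy retains both paths to "Used", so the user still chooses between sedans and trucks, but every hyperlink irrelevant to the stated need has been sliced away.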

    A specification and discovery environment for software component reuse in distributed software development

    Our work aims to develop an effective solution for the discovery and reuse of software components in existing, commonly used development environments. We propose an ontology for describing and discovering atomic software components. The description covers both the functional and the non-functional properties of software components, the latter expressed as QoS parameters. Our search process is based on a function that computes the semantic distance between a component's interface signature and the signature of a given query, enabling a meaningful comparison. We also use the notion of "subsumption" to compare the inputs and outputs of the query and of the components. After selecting the appropriate components, the non-functional properties are used as a distinguishing factor to refine the search result. If no atomic component is found, we propose an approach for discovering composite components, based on the shared ontology. To integrate the resulting component into the project under development, we developed an integration ontology and two services, "input/output convertor" and "output matching".
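    The matching idea in this abstract, signature comparison plus subsumption over input/output types, can be sketched in miniature. Everything below is an invented stand-in: a toy is-a hierarchy replaces the ontology, and the distance scoring is illustrative rather than the thesis's actual function.

```python
# Sketch: ranking components against a query signature using exact type
# matches plus a toy subsumption (is-a) relation.

# Toy is-a hierarchy: child type -> parent type (assumed, not from the work).
HIERARCHY = {"Int": "Number", "Float": "Number", "Number": "Any", "String": "Any"}

def subsumes(general: str, specific: str) -> bool:
    """True if `specific` is-a `general` in the toy hierarchy."""
    while specific is not None:
        if specific == general:
            return True
        specific = HIERARCHY.get(specific)
    return False

def signature_distance(query, component) -> float:
    """0.0 for an exact match; +0.5 per parameter matched only via
    subsumption; infinity if any parameter is incompatible."""
    (q_in, q_out), (c_in, c_out) = query, component
    if len(q_in) != len(c_in):
        return float("inf")
    dist = 0.0
    for q, c in zip(q_in + [q_out], c_in + [c_out]):
        if q == c:
            continue
        if subsumes(c, q) or subsumes(q, c):
            dist += 0.5
        else:
            return float("inf")
    return dist

# Invented query and component registry: (input types, output type).
query = (["Int", "Int"], "Float")
components = {
    "addInts":    (["Int", "Int"], "Int"),
    "addNumbers": (["Number", "Number"], "Number"),
    "concat":     (["String", "String"], "String"),
}
ranked = sorted(components, key=lambda n: signature_distance(query, components[n]))
print(ranked[0])
```

    In the described approach, ties among compatible candidates would then be broken by the non-functional QoS properties; a composite-component search is attempted only when no atomic candidate survives this filter.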