Measuring the Performance of Online Distributed Team Innovation (Learning) Services
Copyright © 2004 by the authors. Leifer et al.: Measuring the Performance of Online Distributed Team Innovation
This work measures the performance of an online distributed team collaboration network and its supporting technology while working with professional engineers in a continuing-education framework using advanced project-based learning curricula. Four insights lie at the core of the proposed work. These insights, each the product of one or more Ph.D. theses, tell us what kinds of activities make innovation effective. For over a decade the Mechanical Engineering Department at Stanford, through its Center for Design Research [4], has focused on creating environments and information technology to enhance peer-to-peer communication, co-generation, and sharing of design knowledge; to this end, empirical studies have been conducted in industry. Informed by these insights, we expect to intervene in ways that will increase performance and produce better learning experiences, better engineers, and better products. The formal tasks and procedures embodied in a product development process must be interpreted and contextualized by product development teams; otherwise, specific process suggestions fail to be tangible and their value goes unappreciated, as hard-working teammates perceive the advice as overhead. What is of value to the teams is to understand the intent of the process.
GUISET: A Conceptual Design of a Grid-Enabled Portal for E-Commerce On-Demand Services
Conventional grid-enabled portal designs have been largely influenced by the usual functional requirements, such as security requirements, grid resource requirements and job management requirements. However, the pay-as-you-use service provisioning model of utility computing platforms means that additional requirements must be considered in order to realize effective grid-enabled portal designs for such platforms. This work investigates the relevant additional requirements that must be considered when designing grid-enabled portals for utility computing contexts.
Based on a thorough review of the literature, we identified a number of these additional requirements and developed a grid-enabled portal prototype for the Grid-based Utility Infrastructure for SMME-enabling Technology (GUISET) initiative, a utility computing platform. The GUISET portal was designed to cater for both the traditional grid requirements and some of the additional requirements of utility computing contexts. Evaluation of the GUISET portal prototype against a set of benchmark requirements (standards) revealed that it fulfilled the minimum requirements to be suitable for the utility context.
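The pay-as-you-use model described above can be made concrete with a small sketch. The following Python snippet is purely illustrative: the account structure, tariff, and admission rule are invented, not taken from the GUISET design. It shows one such additional requirement, usage metering, layered on top of ordinary job admission:

```python
# Hypothetical sketch of a pay-as-you-use admission check that a utility
# computing portal might add on top of the usual grid security and job
# management requirements. All names and numbers are illustrative.

from dataclasses import dataclass

@dataclass
class Account:
    user: str
    balance: float          # prepaid credit in currency units

RATE_PER_CPU_HOUR = 0.05    # assumed tariff, not from the GUISET design

def admit_job(account: Account, cpu_hours: float) -> bool:
    """Admit a grid job only if the prepaid balance covers its estimated cost."""
    cost = cpu_hours * RATE_PER_CPU_HOUR
    if account.balance < cost:
        return False        # insufficient credit: reject before scheduling
    account.balance -= cost # meter usage up front
    return True

acct = Account(user="smme-client", balance=1.0)
print(admit_job(acct, 10))  # 10 CPU-hours cost 0.50 -> admitted
print(admit_job(acct, 20))  # 1.00 needed, only 0.50 left -> rejected
```

The point of the sketch is that billing state, absent from conventional portal designs, becomes part of the admission decision itself.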
Data Driven Decision Making
This project investigated data reporting in various fixed income information technology (FI IT) management systems at BNP Paribas to produce optimized project reports for effective decision making. Our team conducted a series of interviews to understand system usage and developed conclusions and recommendations to improve project reporting for data-driven decision making. A proposed reporting template was created and suggestions for future improvement were submitted to BNP Paribas FI IT.
Development of a centralized log management system
Logs are a crucial piece of any system and give a helpful insight into what it is
doing as well as what happened in case of failure. Every process running on a system
generates logs in some format. Generally, these logs are written to local storage
resources. As systems evolved, the number of logs to analyze increased and, as a consequence, the need arose for a standardized log format, minimizing dependencies and making the analysis process easier.
ams is a company that develops and creates sensor solutions. With twenty-two design centers and three manufacturing locations, the company serves over eight thousand clients worldwide. One design center is located in Funchal, and it includes a team of application engineers who design and develop software applications for internal clients. The application engineers' development process involves several applications and programs, each with its own logging system.
Log entries generated by different applications are kept in separate storage systems. If a developer or administrator wants to troubleshoot an issue that spans several applications, they have to visit the different database systems or locations, collect the logs, and correlate them across the several requests. This is a tiresome process, and if the environment is auto-scaled, troubleshooting such an issue becomes practically impossible.
This project aimed to solve these problems by creating a Centralized Log Management System capable of handling logs from a variety of sources and of providing services that help developers and administrators better understand the affected environments.
The deployed solution was developed using a set of different open-source
technologies, such as the Elastic Stack (Elasticsearch, Logstash and Kibana), Node.js,
GraphQL and Cassandra.
The present document describes the process and the decisions taken to arrive at the solution.
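As a rough illustration of the core idea above, a uniform log format feeding one central store, the following Python sketch emits structured entries and correlates them by request. The field names and the in-memory "central store" are assumptions for illustration; the actual system used the Elastic Stack and Cassandra.

```python
# Sketch: applications emit logs in one uniform JSON format so a central
# system can collect and correlate them across applications.
import json
from datetime import datetime, timezone

CENTRAL_STORE = []  # stand-in for a Logstash -> Elasticsearch pipeline

def emit(app: str, level: str, message: str, **context) -> str:
    """Serialize a log entry in a uniform format and ship it centrally."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "app": app,
        "level": level,
        "message": message,
        **context,
    }
    CENTRAL_STORE.append(entry)   # in reality: send to the log pipeline
    return json.dumps(entry)

def correlate(request_id: str) -> list:
    """Find entries from all applications belonging to one request."""
    return [e for e in CENTRAL_STORE if e.get("request_id") == request_id]

emit("billing", "INFO", "charge created", request_id="r-42")
emit("auth", "ERROR", "token expired", request_id="r-42")
print(len(correlate("r-42")))  # 2
```

Because every application shares the same schema, troubleshooting a cross-application issue becomes a single query instead of a tour of separate storage systems.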
A reproducible approach to equity backtesting
Research findings relating to anomalous equity returns should ideally be repeatable by others. Usually, only a small subset of the decisions made in a particular backtest workflow is released, which limits reproducibility. Data collection and cleaning, parameter setting, algorithm development and report generation are often done with manual point-and-click tools that do not log user actions. This problem is compounded by the fact that the trial-and-error approach of researchers increases the probability of backtest overfitting. Borrowing practices from the reproducible research community, we introduce a set of scripts that completely automate a portfolio-based, event-driven backtest. Based on free, open-source tools, these scripts completely capture the decisions made by a researcher, resulting in a distributable code package that allows easy reproduction of results.
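The fully scripted approach can be sketched as follows. The strategy, synthetic data, and parameters here are invented for illustration and are not from the scripts the abstract describes, but the point carries over: when every input is recorded, the inputs fully determine the output and anyone can re-run the backtest exactly.

```python
# Toy reproducible backtest: a fixed seed and logged parameters make the
# result exactly repeatable. Strategy and data are invented for the example.
import random

def run_backtest(seed: int, lookback: int, threshold: float) -> dict:
    """Event-driven toy backtest whose inputs fully determine its output."""
    rng = random.Random(seed)                 # fixed seed -> repeatable data
    prices = [100.0]
    for _ in range(250):                      # one year of synthetic prices
        prices.append(prices[-1] * (1 + rng.gauss(0, 0.01)))
    pnl = 0.0
    for t in range(lookback, len(prices) - 1):
        momentum = prices[t] / prices[t - lookback] - 1
        position = 1 if momentum > threshold else 0   # long or flat
        pnl += position * (prices[t + 1] - prices[t])
    # Record every decision needed to reproduce this run alongside the result.
    return {"seed": seed, "lookback": lookback,
            "threshold": threshold, "pnl": round(pnl, 4)}

first = run_backtest(seed=7, lookback=20, threshold=0.0)
again = run_backtest(seed=7, lookback=20, threshold=0.0)
print(first == again)  # True: identical inputs, identical result
```

Contrast this with a point-and-click workflow, where the lookback window or threshold chosen during trial and error would leave no trace in the released artifact.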
Temporal meta-model framework for Enterprise Information Systems (EIS) development
This thesis developed a Temporal Meta-Model Framework for semi-automated enterprise system development, which can help drastically reduce the time and cost to develop, deploy and maintain Enterprise Information Systems throughout their lifecycle. It proposes that analysis and requirements gathering can also accomplish the bulk of the design phase, with the results stored in a suitable model that is then capable of automated execution, given a set of specific runtime components.
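A minimal sketch of the model-execution idea might look like the following. The model schema here is invented for illustration and is much simpler than a temporal meta-model, but it shows the mechanism the abstract describes: requirements captured as a declarative model are executed directly by a generic runtime component instead of being hand-coded.

```python
# Hypothetical sketch: a declarative model produced during analysis is
# executed by a generic runtime, rather than translated into bespoke code.

model = {
    "entity": "Invoice",
    "fields": {"amount": float, "paid": bool},
    "rules": [
        # each rule: (field to set, predicate over the record)
        ("paid", lambda rec: rec["amount"] == 0.0),
    ],
}

def execute(model: dict, data: dict) -> dict:
    """Generic runtime: validate a record against the model, then apply rules."""
    rec = {}
    for name, typ in model["fields"].items():
        value = data.get(name)
        if not isinstance(value, typ):
            raise TypeError(f"{model['entity']}.{name} must be {typ.__name__}")
        rec[name] = value
    for field, predicate in model["rules"]:
        rec[field] = rec[field] or predicate(rec)
    return rec

print(execute(model, {"amount": 0.0, "paid": False}))
```

Changing the system then means changing the model, not redeploying code, which is where the claimed lifecycle savings would come from.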
Designing Data Spaces
This open access book provides a comprehensive view of data ecosystems and platform economics, from methodical and technological foundations up to reports from practical implementations and applications in various industries. To this end, the book is structured in four parts: Part I, "Foundations and Contexts", provides a general overview of building, running, and governing data spaces and an introduction to the IDS and GAIA-X projects. Part II, "Data Space Technologies", subsequently details various implementation aspects of IDS and GAIA-X, including e.g. data usage control, the usage of blockchain technologies, or semantic data integration and interoperability. Next, Part III describes various "Use Cases and Data Ecosystems" from application areas such as agriculture, healthcare, industry, energy, and mobility. Part IV eventually offers an overview of several "Solutions and Applications", e.g. including products and experiences from companies like Google, SAP, Huawei, T-Systems, Innopay and many more. Overall, the book provides professionals in industry with an encompassing overview of the technological and economic aspects of data spaces, based on the International Data Spaces and Gaia-X initiatives. It presents implementations and business cases and gives an outlook on future developments. In doing so, it aims to proliferate the vision of a social data market economy based on data spaces that embrace trust and data sovereignty.