Four Decades of Computing in Subnuclear Physics - from Bubble Chamber to LHC
This manuscript addresses selected aspects of computing for the
reconstruction and simulation of particle interactions in subnuclear physics.
Based on personal experience with experiments at DESY and at CERN, I cover the
evolution of computing hardware and software from the era of track chambers
where interactions were recorded on photographic film up to the LHC experiments
with their multi-million electronic channels.
Enabling application agility: software as a service, cloud computing and dynamic languages
The good news is that application developers are on the verge of being liberated from the tyranny of middleware. Next-generation IT will leverage a new computing platform which makes the development and delivery of applications significantly easier than it is today. This new platform consists of Cloud Computing, Software as a Service and Dynamic Languages. Cloud Computing [1] offers mainframe-class or better infrastructure through a small set of services delivered globally over the Internet.
Integrating legacy mainframe systems: architectural issues and solutions
For more than 30 years, mainframe computers have been the backbone of computing systems throughout the world. Even today it is estimated that some 80% of the world's data is held on such machines. However, new business requirements and pressure from evolving technologies, such as the Internet, are pushing these existing systems to their limits, and they are reaching breaking point. The banking and financial sectors in particular have relied on mainframes the longest to do their business, and as a result it is they that feel these pressures the most.
In recent years there have been various solutions for enabling a re-engineering of these legacy systems. It quickly became clear that to completely rewrite them was not possible so various integration strategies emerged.
Out of these new integration strategies, the CORBA standard by the Object Management Group emerged as the strongest, providing a standards-based solution that enabled mainframe applications to become peers in a distributed computing environment.
However, the requirements did not stop there. The mainframe systems were reliable, secure, scalable and fast, so any integration strategy had to ensure that the new distributed systems did not lose any of these benefits. Various patterns or general solutions to the problem of meeting these requirements have arisen and this research looks at applying some of these patterns to mainframe based CORBA applications.
The purpose of this research is to examine some of the issues involved in making mainframe-based legacy applications interoperate with newer object-oriented technologies.
Integration using Oracle SOA suite
Internship carried out at Wipro Retail. Integrated master's thesis. Informatics and Computing Engineering. Faculdade de Engenharia, Universidade do Porto. 200
MS thesis. Several methods exist for monitoring software development. Few formal evaluation methods have been applied to measure and improve clinical software application problems once the software has been implemented in the clinical setting. A standardized software problem classification system was developed and implemented at the University of Utah Health Sciences Center. External validity was measured by a survey of 14 University Healthcare Consortium (UHC) hospitals. Internal validation was accomplished by: an in-depth analysis of problem details; revision of the problem ticket format; verification from staff within the information systems department; and mapping of old problems to the new classification system. Cohen's Kappa statistic of agreement, used for reliability testing of the new classification system, revealed good agreement (Kappa = .6162) among HELP Desk agents in the consistency of classifying problem calls. A monthly quality improvement report template with the following categories was developed from the new classification system: top 25 problems; unplanned server downtimes; problem summaries; customer satisfaction survey results; top problem details; case analyses; and follow-up of case analyses. Continuous Quality Improvement (CQI) methodology was applied to problem reporting within the Office of Information Resources (OIR), and a web-based ticket entry system was implemented. The new system has resulted in the following benefits: reduction in problem resolution times by one third; improved problem ticket information; a shift of 2 FTEs from the call center to dispatch due to the increased efficiency of the HELP Desk; and a trend of improvement in customer satisfaction as measured by an online survey. The study provided an internal quality model for the OIR department and the UUHSC. The QM report template provided a method for tracking and trending software problems to use in conducting evaluation and quality improvement studies.
The template also provided data for analysis and improvement of customer satisfaction. The study has further potential as a model for information system departments at other health care institutions implementing quality improvement methods. There is potential for improvement in the information technology, social, organizational, and cultural aspects as key issues emerge over time. The data collected, and the many consequences of change, offer further avenues for study.
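The Cohen's Kappa figure reported above corrects raw inter-rater agreement for the agreement expected by chance. A minimal sketch of the computation, using hypothetical ticket categories and agent labels (the category names are illustrative only, not those of the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: kappa = (p_o - p_e) / (1 - p_e), where p_o is the
    observed agreement rate and p_e is the chance agreement implied by
    each rater's marginal category frequencies."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical help-desk agents classifying six problem tickets:
agent_1 = ["network", "application", "network", "hardware", "application", "network"]
agent_2 = ["network", "application", "hardware", "hardware", "application", "network"]
print(round(cohens_kappa(agent_1, agent_2), 4))  # 5/6 raw agreement -> 0.75
```

A kappa of .6162, as reported, falls in the range conventionally read as "good" agreement beyond chance.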
To Host a Legacy System to the Web
The dramatic improvement in global interconnectivity due to intranets, extranets and the Internet has led many enterprises to consider migrating legacy systems to web-based systems. While data remapping is relatively straightforward in most cases, greater challenges lie in adapting legacy application software. This research effort describes an experiment in which a legacy system is migrated to a web-client/server environment. First, this thesis reports on the difficulties and issues arising when porting a legacy system, International Invoice (IIMM), to a web-client/server environment. Next, this research analyzes the underlying issues and offers cautionary guidance to future migrators. Finally, this research effort builds a prototype of the legacy system in a web-client/server environment that demonstrates effective strategies for dealing with these issues.
Evaluating the adoption of enterprise application integration in multinational organisations
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. A review of normative literature in the field of Information Systems (IS) integration indicates that traditional approaches to applications integration have failed to result in flexible and maintainable IT infrastructures. In addressing this issue, a new technology called Enterprise Application Integration (EAI) has emerged, which addresses most integration problems and results in the development of reusable and manageable IT infrastructures. Enterprise application integration is a new research area with many research issues needing to be investigated. To this end, EAI adoption has not been sufficiently studied, and organisations and researchers need to understand and analyse EAI adoption. This work examines the introduction of enterprise application integration in multinational organisations and proposes a novel model for its adoption. The model is based on a comprehensive set of factors that influence the introduction of EAI in organisations. Since there is an absence of theoretical models for EAI adoption, the proposed model adapts factors that influence the adoption of other integration technologies such as Electronic Data Interchange (EDI). Additional factors, such as an evaluation framework that supports decision making, have been considered by the author as factors that influence EAI adoption. In moving from the conceptual to the empirical, the work is based on a qualitative case study approach to examine the concepts of the proposed model for the adoption of EAI. In doing so, two case studies were conducted at multinational organisations, presented and analysed. During the empirical research, however, complementary factors also emerged, which resulted in modifications being made to the previously presented conceptual model.
In interpreting the empirical data, it appears that ten main factors influence the adoption of EAI, namely: (a) benefits; (b) barriers; (c) costs; (d) internal pressures; (e) external pressures; (f) IT infrastructure; (g) IT sophistication; (h) an evaluation framework for the assessment of integration technologies; (i) an evaluation framework for the assessment of EAI packages; and,
(j) support. The proposed model makes a novel contribution at two levels. First, at the conceptual level, it incorporates factors identified separately in previous studies as influencing the adoption of other integration technologies. These factors are used for the development of a consistent model for the adoption and evaluation of EAI. Secondly, the concepts of the proposed model can be used for the adoption of inter-organisational information systems. The proposed model can be used as a decision-making tool to support management when taking decisions regarding the adoption of EAI. Additionally, it can be used by researchers to analyse and understand the adoption of application integration. This work is funded by the Brunel University Department of Information Systems and Computing.
Off-line computing for experimental high-energy physics
The needs of experimental high-energy physics for large-scale computing and data handling are explained in terms of the complexity of individual collisions and the need for high statistics to study quantum mechanical processes. The prevalence of university-dominated collaborations adds a requirement for high-performance wide-area networks. The data handling and computational needs of the different types of large experiment, now running or under construction, are evaluated. Software for experimental high-energy physics is reviewed briefly, with particular attention to the success of packages written within the discipline. It is argued that workstations and graphics are important in ensuring that analysis codes are correct, and the worldwide networks which support the involvement of remote physicists are described. Computing and data handling are reviewed, showing how workstations and RISC processors are rising in importance but have not supplanted traditional mainframe processing. Examples of computing systems constructed within high-energy physics are examined and evaluated.
Maine IT Workforce Skills Management : A study for the Maine State Department of Labor
Executive Summary:
From August 2010 to February 2011, personnel from Information and Innovation at the University of Southern Maine conducted a study of the IT skills needed, possessed and taught in Maine. The goal of this study was to provide fine-grained information to the Maine State Department of Labor to facilitate its workforce development activities.
This study concerns the skills sought after by employers, possessed by unemployed and employed workers, and taught in education and training establishments with a bricks-and-mortar presence in Maine. It relied on data created by third parties and by study personnel. Anecdotal evidence was gathered from meetings with local industry IT professionals as well. This study does not attempt to estimate the demand or supply of a given skill, but it does assess which skills are in greatest and least demand, which skills are in greatest and least supply, and which skills are taught more and less often. The results of data analysis are presented in a new measure, skill rank disparity, which exposes skill and training gaps and gluts.
This study provides certain insights into its results, observing individual cases of skills high in demand and low in supply, for example. Insights are also provided in terms of groups of skills that are often taught, often asked for, and whether these groups of skills are well-represented in the Maine IT workforce.
This study also provides specific and actionable recommendations.
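The summary above does not define the skill rank disparity measure precisely. One plausible formalization, under the assumption that it compares a skill's rank in employer demand against its rank in workforce supply, is sketched below (skill names and ranks are hypothetical, not the study's data):

```python
def skill_rank_disparity(demand_rank, supply_rank):
    """Supply rank minus demand rank per skill (rank 1 = strongest).

    A large positive value flags a gap (heavily demanded, scarcely
    supplied); a large negative value flags a glut (widely held or
    taught, but rarely asked for)."""
    return {skill: supply_rank[skill] - demand_rank[skill]
            for skill in demand_rank if skill in supply_rank}

# Hypothetical ranks for illustration only:
demand = {"Java": 1, "SQL": 2, "COBOL": 9}
supply = {"Java": 6, "SQL": 2, "COBOL": 3}
print(skill_rank_disparity(demand, supply))
# Java shows a gap (+5), SQL is balanced (0), COBOL a glut (-6).
```

Working on ranks rather than raw counts sidesteps the study's stated limitation that absolute demand and supply were not estimated.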