Discovering Business Models for Software Process Management - An Approach for Integrating Time and Resource Perspectives from Legacy Information Systems
Business Process Management (BPM) is becoming the modern core for supporting business in all types of organizations, and the software business is no exception. Software companies are often involved in large, complex collaborative projects carried out by many stakeholders. Each actor (customers, suppliers, or government agencies, among others) works with individual and shared processes, and everyone needs dynamic, evolving approaches for managing the software project lifecycle. Nevertheless, many companies still plan and control projects, manage enterprise content (Enterprise Content Management, ECM), and manage resources of all kinds (ERP) with systems that are outside the scope of BPM. These systems nonetheless contain scattered artifacts related to BPM perspectives: control and data flow, time, resource, and case, for example. Our aim is to obtain interoperable BPM models from these classical Legacy Information Systems (LIS). Model-Driven Engineering (MDE) allows moving from application code to models at higher levels of abstraction; in particular, there are standards and proposals for reverse engineering LIS. This paper illustrates LIS cases for software project planning and ECM, looking at the time and resource perspectives. To conclude, we propose an MDE-based approach for extracting business models in the context of software process management.
Ministerio de Economía y Competitividad TIN2013-46928-C3-3-
On the Modularization of ExplorViz towards Collaborative Open Source Development
Software systems evolve over their lifetime. Changing conditions such as new requirements or customer requests make it inevitable for developers to adjust the underlying code base. This holds especially in the context of open source software, where everybody can contribute, demands can change over time, and new user groups may be addressed. In particular, research software is often not structured with a maintainable and extensible architecture; in combination with obsolescent technologies, evolving such software is a challenging task for developers, especially when students are involved. In this paper, we report on the modularization process and architecture of our open source research project ExplorViz towards a microservice architecture, which facilitates a collaborative development process for both researchers and students. We describe the modularization measures and present how we solved occurring issues and enhanced our development process. Afterwards, we illustrate our modularization approach with our modernized, extensible software system architecture and highlight the improved collaborative development process. Finally, we present a proof-of-concept implementation featuring several developed extensions in terms of architecture and extensibility.
On the Effect of Semantically Enriched Context Models on Software Modularization
Many of the existing approaches for program comprehension rely on the
linguistic information found in source code, such as identifier names and
comments. Semantic clustering is one such technique for modularization of the
system that relies on the informal semantics of the program, encoded in the
vocabulary used in the source code. Treating the source code as a collection of
tokens loses the semantic information embedded within the identifiers. We try
to overcome this problem by introducing context models for source code
identifiers to obtain a semantic kernel, which can be used for both deriving
the topics that run through the system as well as their clustering. In the
first model, we abstract an identifier to its type representation and build on
this notion of context to construct contextual vector representation of the
source code. The second notion of context is defined based on the flow of data
between identifiers to represent a module as a dependency graph where the nodes
correspond to identifiers and the edges represent the data dependencies between
pairs of identifiers. We have applied our approach to 10 medium-sized open
source Java projects, and show that by introducing contexts for identifiers,
the quality of the modularization of the software systems is improved. Both of
the context models give results that are superior to the plain vector
representation of documents. In some cases, the authoritativeness of
decompositions is improved by 67%. Furthermore, a more detailed evaluation of
our approach on JEdit, an open source editor, demonstrates that inferred topics
through performing topic analysis on the contextual representations are more
meaningful compared to the plain representation of the documents. The proposed
approach in introducing a context model for source code identifiers paves the
way for building tools that support developers in program comprehension tasks
such as application and domain concept location, software modularization and
topic analysis.
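The first context model described above, abstracting an identifier to its type and comparing modules by the resulting contextual vectors, can be illustrated with a minimal sketch. The module names, identifiers, and types below are invented for illustration; the actual approach extracts identifiers from real Java source code and applies a semantic kernel, but the basic idea of a type-aware vector representation is the same.

```python
from collections import Counter

# Hypothetical mini-corpus: each "module" is a list of (identifier, type)
# pairs, standing in for identifiers extracted from source code.
modules = {
    "net": [("socket", "Socket"), ("port", "int"), ("host", "String")],
    "io":  [("reader", "Reader"), ("buffer", "byte[]"), ("port", "int")],
    "gui": [("window", "Frame"), ("button", "Button"), ("label", "String")],
}

def contextual_vector(pairs):
    # Abstract each identifier to its type, so "port" typed as int in two
    # different modules maps to the same dimension "port:int".
    return Counter(f"{name}:{typ}" for name, typ in pairs)

def cosine(a, b):
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = lambda v: sum(x * x for x in v.values()) ** 0.5
    return dot / (norm(a) * norm(b)) if dot else 0.0

vectors = {m: contextual_vector(p) for m, p in modules.items()}
sim_net_io = cosine(vectors["net"], vectors["io"])   # share port:int
sim_net_gui = cosine(vectors["net"], vectors["gui"]) # share nothing
print(sim_net_io > sim_net_gui)
```

A clustering step would then group modules by such similarities; the second context model would instead build a dependency graph whose edges follow data flow between identifiers.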
MANAGING KNOWLEDGE AND DATA FOR A BETTER DECISION IN PUBLIC ADMINISTRATION
In the current context, society is dominated by the rapid development of computer networks and by the integration, at the organizational level, of the services and facilities offered by the Internet. The success of an organization depends largely on the quality and quantity of the information it has available to make quick decisions that meet current needs. The need for a collaborative environment within the central administration leads to the unification of resources and instruments around the Center of Government, increasing both the quality and the efficiency of decision-making, above all by reducing the time spent on decisions and upgrading the decision-making act.
Keywords: administration, strategy, decision, complex systems, management, infrastructure, e-government, information society, government platform.
Computer-Aided Warehouse Engineering (CAWE): Leveraging MDA and ADM for the Development of Data Warehouses
During the last decade, data warehousing has reached a high level of maturity and is a well-accepted technology in decision support systems. Nevertheless, development and maintenance are still tedious tasks, since the systems grow over time and complex architectures have become established. The paper at hand adapts the concepts of Model Driven Architecture (MDA) and Architecture Driven Modernization (ADM) from the software engineering discipline to the data warehousing discipline. We survey the work already available, outline further research directions, and give hints for the implementation of Computer-Aided Warehouse Engineering systems.
Knowledge Management for Public Administrations: Technical Realizations of an Enterprise Attention Management System
Improving governments’ efficiency has gained great importance, especially in the current times of economic downturn, and E-Government constitutes the most contemporary techno-managerial proposition among the possible interventions. More specifically, the paper addresses the empowerment needed by Public Administration (PA) organizations. Anchored in the needs of three real-life cases, the paper describes the conception and realization of an IT artefact, together with its methodological underpinnings, aimed at improving information access and delivery and thus PAs’ decision-making capacity. Our proposition constitutes a novel approach for managing users’ attention in knowledge-intensive organizations, one that goes beyond informing a user about changes in relevant information towards proactively supporting the user in reacting to those changes. The approach is based on an expressive attention model, which is realized by combining ECA (Event-Condition-Action) rules with ontologies. The technical realizations described in the paper constitute the underlying infrastructure of an Enterprise Attention Management System.
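The ECA-rule mechanism underlying the attention model can be sketched in a few lines. The rule, event fields, and notification text below are hypothetical placeholders; the system described above additionally grounds conditions in ontologies, which this sketch omits.

```python
from dataclasses import dataclass
from typing import Callable

# A minimal ECA (Event-Condition-Action) rule: when an event of the given
# type arrives and the condition over its payload holds, the action fires.
@dataclass
class Rule:
    event_type: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], str]

rules = [
    Rule("document_changed",
         lambda e: e.get("relevance", 0) > 0.8,
         lambda e: f"notify {e['user']} to review {e['doc']}"),
]

def dispatch(event_type, payload):
    # Collect the actions of every rule triggered by this event.
    return [r.action(payload) for r in rules
            if r.event_type == event_type and r.condition(payload)]

actions = dispatch("document_changed",
                   {"user": "analyst", "doc": "budget.pdf", "relevance": 0.9})
print(actions)
```

An attention-management layer would route such actions to the right user proactively, rather than merely logging the change.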
Re-theorising the core: a ‘globalized’ business elite in Santiago, Chile
World systems theory continues to be a widely adopted approach in theorisations of the contemporary world economy. An important epistemological component of world systems theory is the metaphor of core-periphery. Recent work within the approach has sought to transcend earlier criticisms of regional conceptions of cores, peripheries and semi-peripheries through an increasing sensitivity to local differences and an increasing emphasis on Wallerstein's original idea of core-periphery as process, operating at all scales in the contemporary world system. However, this paper argues that the core-periphery metaphor currently used by world systems theorists is founded on a restrictively narrow spatial epistemology, one that implements the metaphor only as something which produces territorial outcomes in the physical world. This paper contends that recent work within the social sciences, concerned with the globalization debate and issues of spatial epistemology, should inform world systems theory in producing a reformulated spatial understanding of the core-periphery metaphor, embodying a wider conception of space that includes abstract social spaces. This argument is developed through the notion that the world economy must also be understood as having a ‘social core’: a transnational diasporic business elite exercising decision-making power over the capitalist world system. The contention is grounded in the presentation of research into a case study of such a ‘globalized’ business elite in the capital city of Chile, Santiago.
Reverse Engineering Heterogeneous Applications
Nowadays, a large majority of software systems are built using various technologies that in turn rely on different languages (e.g., Java, XML, SQL). We call such systems heterogeneous applications (HAs); by contrast, we call software systems written in a single language homogeneous applications. In HAs, the information regarding the structure and behaviour of the system is spread across various components and languages, and the interactions between different application elements can be hidden. In this context, applying existing reverse engineering and quality assurance techniques developed for homogeneous applications is not enough: these techniques were created to measure quality or provide information about one aspect of the system, and they cannot grasp the complexity of HAs. In this dissertation we present our approach to support the analysis and evolution of HAs based on: (1) a unified first-class description of HAs and (2) a meta-model that reifies the concept of horizontal and vertical dependencies between application elements at different levels of abstraction. We implemented our approach in two tools, MooseEE and Carrack. The former is an extension of the Moose platform for software and data analysis and contains our unified meta-model for HAs; the latter is an engine to infer derived dependencies that can support the analysis of associations among the heterogeneous elements composing HAs. We validate our approach and tools through case studies on industrial and open-source JEAs, which demonstrate how we can handle the complexity of such applications and solve problems deriving from their heterogeneous nature.
METHODOLOGY FOR MODELING COST AND SCHEDULE RISK ASSOCIATED WITH RESOURCE DECISIONS INVOLVING THE U.S. ARMY'S MODERNIZATION EFFORTS FOR 2035
Prioritization decisions using the Army Modernization and Analysis (AMA)-developed Trade-Space Decision Exploration System (TRADES) do not address programmatic variance related to cost and schedule growth. This study offers an improved methodology for modeling cost risk by employing sound cost estimation principles, distribution fitting, Monte Carlo simulation, and cost/benefit analysis to assist strategic decision makers and the acquisitions community. To that end, the approach follows a five-step methodology that (1) collects and screens cost data from the Cost Assessment Database Enterprise (CADE), (2) determines normalized cost growth factors, (3) identifies and constructs the appropriate distributions for modeling, (4) simulates cost variance across the entire program portfolio, and (5) recommends the contingency cash reserve associated with a decision maker’s confidence level. The result is a credible, repeatable, and effectual cost estimating methodology that promotes commodity-based models for predicting cost growth and measuring risk.
Major, United States Army
Approved for public release. Distribution is unlimited.
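Steps (3) through (5) of the methodology above can be sketched as a toy Monte Carlo simulation. The baseline costs and the lognormal cost-growth parameters below are invented placeholders, not CADE data; the real study fits distributions to screened historical growth factors.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical baseline cost estimates ($M) for a small program portfolio.
baselines = {"radar": 120.0, "vehicle": 300.0, "comms": 80.0}
# Assumed lognormal parameters for the cost-growth factor (final/initial).
growth_mu, growth_sigma = 0.10, 0.25

def simulate_portfolio(n_trials=20000):
    # Step (4): draw a growth factor per program and sum portfolio cost.
    totals = []
    for _ in range(n_trials):
        total = sum(cost * random.lognormvariate(growth_mu, growth_sigma)
                    for cost in baselines.values())
        totals.append(total)
    return sorted(totals)

def contingency(totals, confidence=0.8):
    # Step (5): reserve = chosen confidence percentile of simulated totals
    # minus the deterministic baseline sum.
    idx = int(confidence * len(totals)) - 1
    return totals[idx] - sum(baselines.values())

totals = simulate_portfolio()
reserve = contingency(totals, confidence=0.8)
print(round(reserve, 1))  # cash reserve needed to fund at 80% confidence
```

Raising the confidence level moves the percentile rightward in the simulated distribution and therefore grows the recommended reserve.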