33 research outputs found
A Reengineering Approach to Reconciling Requirements and Implementation for Context-Aware Web Services Systems
In modern software development, the gap between software requirements and implementation is not always reconciled. For context-aware, Web services-based systems in particular, reconciling this gap is even harder. The aim of this research is to explore how software reengineering can facilitate the reconciliation between requirements and implementation for such systems. The underlying research in this thesis comprises the following three components.
Firstly, a requirements recovery framework underpins the requirements elicitation approach within the proposed reengineering framework. This approach consists of three stages: 1) hypothesis generation, where a list of hypothesized source code information is generated; 2) segmentation, where the hypothesis list is grouped into segments; 3) concept binding, where the segments are turned into a list of concept bindings linking regions of source code.
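The three recovery stages can be sketched as a small pipeline. The following Python outline is purely illustrative: the keyword-matching heuristic, the data shapes, and all names are assumptions made for this sketch, not the thesis's actual algorithm.

```python
# Illustrative three-stage requirements-recovery pipeline.
# Keyword matching and all data shapes are assumed for illustration.

def generate_hypotheses(lines, vocabulary):
    """Stage 1: hypothesize which source lines mention domain concepts."""
    hypotheses = []
    for lineno, text in enumerate(lines, start=1):
        for concept in vocabulary:
            if concept in text.lower():
                hypotheses.append((lineno, concept))
    return hypotheses

def segment(hypotheses, max_gap=2):
    """Stage 2: group hypotheses whose source lines lie close together."""
    segments = []
    for hyp in hypotheses:
        if segments and hyp[0] - segments[-1][-1][0] <= max_gap:
            segments[-1].append(hyp)
        else:
            segments.append([hyp])
    return segments

def bind_concepts(segments):
    """Stage 3: bind each segment's concepts to a region of source code."""
    bindings = []
    for seg in segments:
        bindings.append({
            "concepts": {concept for _, concept in seg},
            "region": (seg[0][0], seg[-1][0]),  # (first line, last line)
        })
    return bindings

source = [
    "def locate(user): ...",
    "ctx = read_location_sensor()",
    "return render(page)",
    "",
    "def notify(ctx): ...",
    "if ctx.location_changed: push()",
]
hyps = generate_hypotheses(source, ["location", "ctx"])
bindings = bind_concepts(segment(hyps))
for b in bindings:
    print(sorted(b["concepts"]), b["region"])
```

Each binding links a set of hypothesized concepts to a contiguous region of code, which is the shape of output the concept-binding stage described above would produce.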
Secondly, a viewpoints-based context-aware service requirements model is derived to fully discover constraints, and a requirements evolution model is developed to maintain and specify the requirements evolution process in support of context-aware service evolution.
Finally, inspired by context-oriented programming (COP) concepts and approaches, ContXFS is implemented as a COP-inspired conceptual library in F#, which enables developers to facilitate dynamic context adaptation. This library, together with the context-aware requirements analyses, eases the development of such systems to a great extent, which in turn achieves reconciliation between requirements and implementation.
TRANSOM: An Efficient Fault-Tolerant System for Training LLMs
Large language models (LLMs) with hundreds of billions or trillions of parameters, exemplified by ChatGPT, have had a profound impact on various fields. However, training LLMs with super-large-scale parameters requires large high-performance GPU clusters and long training periods lasting for months. Due to the inevitable hardware and software failures in large-scale clusters, maintaining uninterrupted, long-duration training is extremely challenging. As a result, a substantial amount of training time is devoted to checkpoint saving and loading, task rescheduling and restarts, and manual anomaly checks, which greatly harms overall training efficiency. To address these issues, we propose TRANSOM, a novel fault-tolerant LLM training system. In this work, we design three key subsystems: the training pipeline automatic fault tolerance and recovery mechanism named Transom Operator and Launcher (TOL), the training task multi-dimensional metric automatic anomaly detection system named Transom Eagle Eye (TEE), and the training checkpoint asynchronous access automatic fault tolerance and recovery technology named Transom Checkpoint Engine (TCE). TOL manages the lifecycle of training tasks, while TEE is responsible for task monitoring and anomaly reporting. TEE detects training anomalies and reports them to TOL, which automatically applies the fault-tolerance strategy to eliminate abnormal nodes and restart the training task. The asynchronous checkpoint saving and loading functionality provided by TCE greatly shortens the fault-tolerance overhead. The experimental results indicate that TRANSOM significantly enhances the efficiency of large-scale LLM training on clusters. Specifically, the pre-training time for GPT3-175B has been reduced by 28%, while checkpoint saving and loading performance have improved by a factor of 20.
Comment: 14 pages, 9 figures
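The asynchronous checkpointing idea behind TCE (take a cheap in-memory snapshot on the training path, persist it in the background so the slow I/O does not stall training) can be sketched schematically. This is an illustration of the general technique, not TRANSOM's implementation; all names and figures are invented.

```python
# Schematic sketch of asynchronous checkpointing: the training loop takes
# a cheap in-memory snapshot, a background thread persists it to disk.
# Not TRANSOM's implementation; all names are invented for illustration.
import copy
import os
import pickle
import queue
import tempfile
import threading

ckpt_queue = queue.Queue()

def persist_worker(directory):
    """Background thread: serialize snapshots to disk as they arrive."""
    while True:
        item = ckpt_queue.get()
        if item is None:            # shutdown sentinel
            break
        step, snapshot = item
        path = os.path.join(directory, f"ckpt_{step}.pkl")
        with open(path, "wb") as f:
            pickle.dump(snapshot, f)

def train(steps, ckpt_every, directory):
    state = {"weights": [0.0] * 4, "step": 0}
    worker = threading.Thread(target=persist_worker, args=(directory,))
    worker.start()
    for step in range(1, steps + 1):
        state["weights"] = [w + 0.1 for w in state["weights"]]  # fake update
        state["step"] = step
        if step % ckpt_every == 0:
            # Synchronous part is only a memory copy; the expensive
            # file write happens on the worker thread.
            ckpt_queue.put((step, copy.deepcopy(state)))
    ckpt_queue.put(None)
    worker.join()
    return state

with tempfile.TemporaryDirectory() as d:
    final = train(steps=10, ckpt_every=5, directory=d)
    saved = sorted(os.listdir(d))
    print(final["step"], saved)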
Modeling 4.0: Conceptual Modeling in a Digital Era
Digitization provides entirely new affordances for our economies and societies. This leads to previously unseen design opportunities and complexities as systems and their boundaries are redefined, creating a need for appropriate methods to support design under these new conditions. Conceptual modeling is an established means for this, but it needs to be advanced to adequately capture the requirements of digitization. However, unlike the actual deployment of digital technologies in various industries, the domain of conceptual modeling itself has not yet undergone a comprehensive renewal in light of digitization. Therefore, inspired by the notion of Industry 4.0, an overarching concept for digital manufacturing, we propose in this commentary paper Modeling 4.0 as the notion for conceptual modeling mechanisms in a digital environment. In total, 12 mechanisms of conceptual modeling are distinguished, providing ample guidance for academics and professionals interested in ensuring that modeling techniques and methods continue to fit contemporary and emerging requirements.
Specification and Analysis of Resource Utilization Policies for Human-Intensive Systems
Contemporary systems often require the effective support of many types of resources, each governed by complex utilization policies. Sound management of these resources plays a key role in assuring that these systems achieve their key goals. To help system developers make sound resource management decisions, I provide a resource utilization policy specification and analysis framework for (1) specifying very diverse kinds of resources and their potentially complex resource utilization policies, (2) dynamically evaluating the policies’ effects on the outcomes achieved by systems utilizing the resources, and (3) formally verifying various kinds of properties of these systems.
Resource utilization policies range from simple, e.g., first-in-first-out, to extremely complex, responding to changes in system environment, state, and stimuli. Further, policies may at times conflict with each other, requiring conflict resolution strategies that add extra complexity. Prior specification approaches rely on relatively simple resource models that prevent the specification of complex utilization and conflict resolution policies. My approach (1) separates resource utilization policy concerns from resource characteristic and request specifications, (2) creates an expressive specification notation for constraint policies, and (3) creates a resource constraint conflict resolution capability. My approach enables creating specifications of policies that are sufficiently precise and detailed to support static and dynamic analyses of how these policies affect the properties of systems constrained or governed by these policies.
I provide a process- and resource-aware discrete-event simulator for simulating system executions that adhere to policies of resource utilization. The simulator integrates the existing JSim simulation engine with a separate resource management system. The separate architectural component makes it easy to keep track of resource utilization traces during a simulation run. My simulation framework facilitates considerable flexibility in the evaluation of diverse resource management decisions and powerful dynamic analyses.
Dynamic verification through simulation is inherently limited because of the impossibility of exhaustively simulating all scenarios. I complement this approach with static verification. Prior static resource analysis has supported the verification of only relatively simple resource utilization policies. My research utilizes powerful model checking techniques, building on the existing FLAVERS model checking tool, to verify properties of complex systems that are also verified to conform to complex resource utilization policies. My research demonstrates how to use systems such as FLAVERS to verify adherence to complex resource utilization policies as well as overall system properties, such as the absence of resource leaks and resource deadlocks.
I evaluated my approach working with a hospital emergency department domain expert, using detailed, expert-developed models of the processes and resource utilization policies of an emergency department. In doing this, my research demonstrates how my framework can be effective in guiding the domain expert towards making sound decisions about policies for the management of hospital resources, while also providing rigorously-based assurances that the guidance is reliable and well-founded.
My research makes the following contributions: (1) a specification language for resources and resource utilization policies for human-intensive systems, (2) a process- and resource-aware discrete-event simulation engine that creates simulations that adhere to the resource utilization policies, allowing for the dynamic evaluation of resource utilization policies, (3) a process- and resource-aware model checking technique that formally verifies system properties and adherence to resource utilization policies, and (4) validated and verified specifications of an emergency department healthcare system, demonstrating the utility of my approach.
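The separation the framework argues for (resource characteristics and requests kept apart from a pluggable utilization policy) can be illustrated with a toy allocator. The policy functions and request fields below are hypothetical examples, not the thesis's specification language.

```python
# Toy illustration of separating resource requests from a pluggable
# utilization policy: the allocator is fixed, the policy is swappable.
# Policy names and request fields are hypothetical examples.

def fifo_policy(pending):
    """Simple policy: serve requests in arrival order."""
    return min(pending, key=lambda r: r["arrival"])

def acuity_policy(pending):
    """Complex policy: most urgent first; arrival order breaks ties."""
    return min(pending, key=lambda r: (-r["urgency"], r["arrival"]))

def allocate(requests, n_resources, policy):
    """Grant up to n_resources requests, one at a time, per the policy."""
    pending, served = list(requests), []
    while pending and len(served) < n_resources:
        chosen = policy(pending)
        pending.remove(chosen)
        served.append(chosen["name"])
    return served

# Hypothetical emergency-department requests competing for two beds.
patients = [
    {"name": "walk-in",  "arrival": 1, "urgency": 1},
    {"name": "fracture", "arrival": 2, "urgency": 2},
    {"name": "cardiac",  "arrival": 3, "urgency": 5},
]
print(allocate(patients, 2, fifo_policy))
print(allocate(patients, 2, acuity_policy))
```

Because the allocator never inspects the policy's internals, swapping `fifo_policy` for `acuity_policy` changes the outcome without touching the resource or request specifications, which is the kind of dynamic policy evaluation the simulator above supports at scale.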
A semantic framework for unified cloud service search, recommendation, retrieval and management
Cloud computing (CC) is a revolutionary paradigm for consuming Information and Communication Technology (ICT) services. However, while trying to find the optimal services, many users often feel confused due to inadequate service information descriptions. Although some efforts have been made in the semantic modelling, retrieval and recommendation of cloud services, existing practices work effectively only for certain restricted scenarios, for example those dealing with basic, non-interactive service specifications. In the meantime, various service management tasks are usually performed individually for diverse cloud resources from distinct service providers. This results in significantly decreased effectiveness and efficiency of task implementation. Fundamentally, this is due to the lack of a generic service management interface that enables unified service access and manipulation regardless of provider or resource type. To address the above issues, the thesis proposes a semantic-driven framework, which integrates two main novel specification approaches: agility-oriented and fuzziness-embedded cloud service semantic specifications, and cloud service access and manipulation request operation specifications. These enable comprehensive service specification by capturing in-depth cloud concept details and their interactions, even across multiple service categories and abstraction levels. Utilising the specifications as a CC knowledge foundation, a unified service recommendation and management platform is implemented. Based on considerable experimental data collected on real-world cloud services, the approaches demonstrate marked effectiveness in service search, retrieval and recommendation tasks, whilst the platform shows outstanding performance for a wide range of service access, management and interaction tasks.
Furthermore, the framework includes two sets of innovative specification processing algorithms specifically designed to serve advanced CC tasks: while the fuzzy rating and ontology evolution algorithms establish a means of collaborative cloud service specification, the service orchestration reasoning algorithms reveal a promising means of dynamic service composition.
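As one illustration of how fuzziness-embedded ratings can be aggregated, triangular fuzzy numbers are a common device; the sketch below uses component-wise averaging and centroid defuzzification, a standard textbook technique rather than the thesis's specific fuzzy rating algorithm.

```python
# Illustrative aggregation of fuzzy service ratings using triangular
# fuzzy numbers (low, mode, high). A common textbook technique, not
# the thesis's specific fuzzy-rating algorithm.

def aggregate(ratings):
    """Average component-wise, then defuzzify by the triangle centroid."""
    n = len(ratings)
    low  = sum(r[0] for r in ratings) / n
    mode = sum(r[1] for r in ratings) / n
    high = sum(r[2] for r in ratings) / n
    crisp = (low + mode + high) / 3   # centroid of a triangular number
    return (low, mode, high), crisp

# Three users rate a service's reliability with fuzzy judgements
# (hypothetical data).
ratings = [(0.6, 0.7, 0.9), (0.5, 0.8, 1.0), (0.7, 0.8, 0.9)]
fuzzy, crisp = aggregate(ratings)
print(fuzzy, round(crisp, 3))
```

The crisp score can then feed an ordinary ranking step, while the fuzzy triple preserves the spread of user opinions for later collaborative refinement.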
Cloud enterprise resource planning development model based on software factory approach
Literature reviews revealed that Cloud Enterprise Resource Planning (Cloud ERP) is growing significantly, yet from software developers' perspective it suffers from high management complexity, high workload, inconsistent software quality, and knowledge retention problems. Previous research lacks a solution that holistically addresses all of these problem components. The software factory approach was adapted, along with relevant theories, to develop a model referred to as the Cloud ERP Factory Model (CEF Model), which is intended to pave the way to solving the above-mentioned problems. There are three specific objectives: (i) to develop the model by identifying its components and elements and compiling them into the CEF Model, (ii) to verify the technical feasibility of deploying the model, and (iii) to validate the model's usability in real Cloud ERP production case studies. The research employed the Design Science methodology with a mixed-method evaluation approach. The developed CEF Model consists of five components: Product Lines, Platform, Workflow, Product Control, and Knowledge Management, which can be used to set up a CEF environment that simulates a process-oriented software production environment with capacity and resource planning features. The model was validated through expert reviews, and the finalized model was verified to be technically feasible by a successful deployment into a selected commercial Cloud ERP production facility. Three commercial Cloud ERP deployment case studies were conducted using the prototype environment. Using the survey instruments developed, the results yielded a mean Likert score of 6.3 out of 7, reaffirming that the model is usable and that the research has met its objective of addressing the problem components. The model, along with its deployment verification processes, is the main research contribution. Both can also be used by software industry practitioners and academics as references in developing a robust Cloud ERP production facility.
Analysis and design of scalable software as a service architecture
Ankara: The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2015. Thesis (Master's), Bilkent University, 2015. Includes bibliographical references (leaves 104-109).
Different from traditional enterprise applications that rely on the infrastructure and services provided and controlled within an enterprise, cloud computing is based on services that are hosted by providers over the Internet. Here, services are fully managed by the provider, whereas consumers can acquire the required amount of service on demand, use applications without installation, and access their personal files through any computer with Internet access. Recently, a growing interest in cloud computing can be observed, thanks to significant developments in virtualization and distributed computing, as well as improved access to high-speed Internet and the need for economical optimization of resources.
An important category of cloud computing is the software as a service (SaaS) domain, in which software applications are provided over the cloud. In general, when describing SaaS, no specific application architecture is prescribed; rather, the general components and structure are defined. Based on the provided reference SaaS architecture, different application SaaS architectures can be derived, each of which will typically perform differently with respect to different quality factors.
An important quality factor in designing SaaS architectures is scalability.
Scalability is the ability of a system to handle a growing amount of work in a
capable manner or its ability to be enlarged to accommodate that growth. In this
thesis we provide a systematic modeling and design approach for designing
scalable SaaS architectures.
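Scalability as defined above (handling growing work, or growing to accommodate it) is commonly quantified via speedup and efficiency as instances are added; the following small calculation is purely illustrative, with invented throughput figures rather than data from the thesis.

```python
# Illustrative scalability metrics for a SaaS deployment: speedup and
# efficiency as instances are added. Throughput figures are invented.

def speedup(throughput, baseline):
    """How many times faster than the single-instance baseline."""
    return throughput / baseline

def efficiency(throughput, baseline, instances):
    """Fraction of ideal linear scaling actually achieved."""
    return speedup(throughput, baseline) / instances

# requests/second measured at 1, 2, 4, 8 instances (hypothetical data)
measured = {1: 100.0, 2: 190.0, 4: 340.0, 8: 560.0}
baseline = measured[1]
for n, t in measured.items():
    print(n, round(speedup(t, baseline), 2),
          round(efficiency(t, baseline, n), 2))
```

Falling efficiency at higher instance counts is exactly the kind of symptom the scalability aspects and design tactics discussed below are meant to diagnose and mitigate.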
To identify the aspects that impact the scalability of SaaS-based systems, we have
conducted a systematic literature review in which we have identified and analyzed
the relevant primary studies that discuss scalability of SaaS systems. Our study
has yielded the aspects that need to be considered when designing scalable
systems. Our research has continued in two subsequent directions. Firstly, we
have defined a UML profile for supporting the modeling of scalable SaaS
architectures. The profile has been defined in accordance with the existing
practices on defining and documenting profiles. Secondly, we provide the so-called architecture design perspective for designing scalable SaaS systems.
Architectural Perspectives are a collection of activities, tactics and guidelines to
modify a set of existing views, to document and analyze quality properties.
Architectural perspectives as such are basically guidelines that work on multiple
views together. So far architecture perspectives have been defined for several
quality factors such as for performance, reuse and security. However, an
architecture perspective dedicated for designing scalable SaaS systems has not
been defined explicitly. The architecture perspective that we have defined
considers the scalability aspects derived from the systematic literature review as
well as the architectural design tactics that represent important proven design rules
and practices. Further, the architecture perspective adopts the UML profile for
scalability that we have defined. The scalability perspective is illustrated for the
design of a SaaS architecture for a real industrial case study.
Özcan, Onur (M.S.)