    Pharmaceutical product modularization as a mass customization strategy to increase patient benefit cost-efficiently

    Customized pharmaceutical products aim to meet the individual needs of a patient in order to enhance the treatment outcome. The current pharmaceutical production paradigm is, however, dominated by mass production, where pharmaceutical products embrace a one-size-fits-all design with little possibility of optimizing treatment to patient needs. This production paradigm is neither designed nor intended for customized pharmaceutical products, and operating it for customized products is argued to be cost-inefficient. To address this challenge, this study proposes an approach to modular pharmaceutical product design. As a mass customization strategy, product modularization enables serving customers with customized products cost-efficiently. The proposed modular pharmaceutical products integrate three product design requirements originating from patient needs: a scalable dose strength, a flexible target release profile, and a scalable treatment size. An approach to assessing the value of these product designs is presented, comprising three benefit metrics, one per design requirement, and a cost metric assessing the cost of producing the modular product designs. Results suggest that pharmaceutical product modularization can, while keeping the number of produced components low, substantially increase the external product variety and hence enhance the treatment outcome for patients. Furthermore, results indicate that the benefit achieved for the patient through product modularization outweighs the additional costs arising during production. However, the modularization must be performed carefully to optimize the trade-off between the increased benefit and cost.
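
    To make the variety argument concrete, here is a small illustrative sketch, not taken from the study: it assumes three hypothetical dose-strength modules and counts how many distinct external dose strengths a patient-specific combination of at most four modules can realize.

```python
from itertools import combinations_with_replacement

# Hypothetical dose-strength modules (in mg); the study's actual module
# designs and parameters are not reproduced here.
modules = [1, 2, 5]
max_modules_per_dose = 4  # assumed upper bound on modules per dosage unit

external_doses = set()
for k in range(1, max_modules_per_dose + 1):
    for combo in combinations_with_replacement(modules, k):
        external_doses.add(sum(combo))

print(f"{len(modules)} internal components -> "
      f"{len(external_doses)} external dose strengths")
print(sorted(external_doses))
```

    Even this toy configuration turns three produced components into over a dozen distinct dose strengths, which is the kind of internal-to-external variety leverage the abstract describes.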

    Model Transformation Languages with Modular Information Hiding

    Model transformations, together with models, form the principal artifacts in model-driven software development. Industrial practitioners report that transformations on larger models quickly become large and complex themselves. To alleviate the entailed maintenance effort, this thesis presents a modularity concept with explicit interfaces, complemented by software visualization and clustering techniques. All three approaches are tailored to the specific needs of the transformation domain.
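
    As a rough illustration of modular information hiding in general, not of the thesis's actual concept, the following sketch models a transformation module whose explicit interface restricts which rules clients may invoke; all names are hypothetical.

```python
# Minimal sketch (assumed design): a transformation module exposes only
# the rules named in its explicit interface; all other rules stay hidden.
class TransformationModule:
    def __init__(self, name, exports):
        self._name = name
        self._exports = set(exports)  # the explicit interface
        self._rules = {}              # rule name -> callable on a model element

    def define(self, rule_name, fn):
        self._rules[rule_name] = fn

    def call(self, rule_name, element):
        if rule_name not in self._exports:
            raise AttributeError(
                f"{rule_name} is not part of {self._name}'s interface")
        return self._rules[rule_name](element)

m = TransformationModule("Class2Table", exports=["class_to_table"])
m.define("class_to_table", lambda c: {"table": c["name"].lower()})
m.define("helper", lambda c: c)  # internal rule, invisible to clients
print(m.call("class_to_table", {"name": "Person"}))
```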

    Application of service composition mechanisms to Future Networks architectures and Smart Grids

    This thesis revolves around the hypothesis that service composition methodologies and mechanisms can be applied to different fields in order to efficiently orchestrate flexible and context-aware communications and processes. More concretely, it focuses on two fields of application: context-aware media distribution, and Smart Grid services and infrastructure management, working towards the definition of a Software-Defined Utility (SDU), which proposes a new way of managing the Smart Grid through a software-based approach that enables a much more flexible operation of the power infrastructure. The thesis reviews the context, requirements and challenges of these fields, as well as the applicable service composition approaches. It places special emphasis on the combination of service composition with Future Network (FN) architectures, presenting a service-oriented FN proposal for creating context-aware, on-demand communication services. Service composition methodologies and mechanisms for operating over this architecture are also presented and subsequently proposed for use (in conjunction with the FN architecture or not) in the deployment of context-aware media distribution and Smart Grids. Finally, the research and development carried out in the field of Smart Grids is presented, proposing several parts of the SDU infrastructure, with examples of service composition applied to the design of dynamic and flexible security for smart metering and to the orchestration and management of services and data resources within the utility infrastructure.
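
    The following minimal sketch, with entirely hypothetical service names and context fields, illustrates the general idea of context-aware service composition: a chain of services is selected per request based on consumer context, in the spirit of the on-demand communication services described above.

```python
from typing import Callable, Dict, List

Service = Callable[[dict], dict]

# Hypothetical service registry; real compositions would involve network
# and grid services rather than these toy transformations.
REGISTRY: Dict[str, Service] = {
    "transcode_hd": lambda msg: {**msg, "quality": "hd"},
    "transcode_sd": lambda msg: {**msg, "quality": "sd"},
    "encrypt":      lambda msg: {**msg, "encrypted": True},
}

def compose_for(context: dict) -> List[Service]:
    """Pick a service chain based on the consumer's context."""
    chain = ["transcode_hd" if context.get("bandwidth_mbps", 0) > 5
             else "transcode_sd"]
    if context.get("secure"):
        chain.append("encrypt")
    return [REGISTRY[name] for name in chain]

request = {"content": "stream-42"}
for service in compose_for({"bandwidth_mbps": 2, "secure": True}):
    request = service(request)
print(request)  # {'content': 'stream-42', 'quality': 'sd', 'encrypted': True}
```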

    Migrating microservices to graph database

    Microservice architecture is a popular approach to structuring web backend services. Another emerging trend, after a period of hibernation, is utilizing modern graph database management systems for managing complex, richly connected data. The two approaches have rarely been used in tandem, as microservices emphasize modularization and decoupling of services, while graph data models favor data integration. In this study, literature on microservices and graph databases is reviewed and a synthesis between the two paradigms is presented. Based on the theoretical discussion, a software architecture combining the two elements is formulated and implemented for microservices serving content metadata at Yleisradio, the Finnish national broadcasting company. The architecture design follows the Design Science Research Process model. Finally, the renewed system is evaluated using quantitative and qualitative metrics. The performance of the system is measured using automated API queries and load tests, and the new system is compared to an earlier version based on a PostgreSQL database. The tests gave a slight indication that the renewed system performed better for complex queries, where a large number of relations are traversed, but worse in terms of throughput under heavy load. Based on these findings, a number of performance-enhancing optimizations to the system are introduced. Observations and perspectives were also gathered in a project retrospective session. It is concluded that the resulting architecture holds promise for managing relation-rich, complex data in a safe manner: the different domains of the knowledge graph are decoupled into distinct named graphs managed by different microservices.
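
    As a toy illustration of the decoupling described above, and not of Yleisradio's actual system, the sketch below keeps each domain in its own named graph (here just a set of triples owned by one hypothetical microservice) while queries traverse relations across graph boundaries.

```python
from collections import defaultdict

class NamedGraphStore:
    """Minimal sketch of named graphs: one graph per owning microservice."""

    def __init__(self):
        self.graphs = defaultdict(set)  # graph name -> set of (s, p, o) triples

    def add(self, graph, s, p, o):
        self.graphs[graph].add((s, p, o))

    def objects(self, s, p):
        """Traverse a relation across all named graphs."""
        return [o for g in self.graphs.values()
                for (s2, p2, o) in g if (s2, p2) == (s, p)]

store = NamedGraphStore()
# The hypothetical "programs" service writes only to its own named graph...
store.add("programs", "series:1", "hasEpisode", "episode:7")
# ...while a separate "subtitles" service manages another.
store.add("subtitles", "episode:7", "hasSubtitle", "subtitle:fi")

for ep in store.objects("series:1", "hasEpisode"):
    print(ep, store.objects(ep, "hasSubtitle"))
```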

    Mining, Understanding and Integrating User Preferences in Software Refactoring Using Computational Search, Machine Learning, and Dimensionality Reduction

    Search-Based Software Engineering (SBSE) is a software development practice that couches software engineering problems as optimization problems, using metaheuristic techniques to automate the search for near-optimal solutions. While SBSE has been successfully applied to a wide variety of software engineering problems, our understanding of the extent to which, and how, software engineering problems can be formulated as automated or semi-automated search is still lacking. Many software engineering solutions are highly subjective, which makes it difficult to formally define fitness functions to evaluate them. Current studies focus on guiding the search for optimal solutions rather than performing it, and the degree of interaction required with software engineers during the optimization process, and how to reduce it, remains unclear. In this work, we focus on search-based software maintenance and evolution problems, including software refactoring and software remodularization, to improve the quality of systems. We address the following challenges:
    • A major challenge in adapting a search-based technique to a software engineering problem is the definition of the fitness function. In most cases, fitness functions are ill-defined or subjective.
    • Most existing refactoring studies do not include the developer in the loop to analyze suggested refactoring solutions and give feedback during the optimization process. In addition, some quality metrics are expensive to compute, leading to expensive fitness functions. Moreover, while quality metrics evaluate the structural improvements of the refactored system, the semantic coherence of the design cannot be evaluated without user interaction.
    • Finally, several metrics can be dependent and correlated, so it may be possible to reduce the number of objectives/dimensions when addressing refactoring problems.
    To address these challenges, this work provides new techniques and tools that formulate software refactoring as a scalable, learning-based search problem. We propose novel interactive, learning-based techniques that use machine learning to incorporate developers' knowledge and preferences into the search, resulting in more efficient and cost-effective search-based refactoring recommendation systems. We also design and implement novel objective-reduction SBSE methodologies to support a scalable number of objectives. The proposed solutions were empirically evaluated in academic (open-source systems) and industrial settings.
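
    The objective-reduction idea can be illustrated with a small, self-contained sketch: given hypothetical metric values for candidate refactoring solutions, strongly correlated metrics (here with an assumed threshold of |r| >= 0.9) are greedily pruned, reducing the number of search objectives. The data and metric names are invented for illustration and do not come from the dissertation.

```python
import numpy as np

# Hypothetical metric values for candidate refactoring solutions
# (rows = solutions, columns = quality metrics).
rng = np.random.default_rng(0)
base = rng.random((50, 1))
metrics = np.hstack([base, base * 0.9 + 0.05, rng.random((50, 2))])
names = ["coupling", "instability", "cohesion", "complexity"]

corr = np.corrcoef(metrics, rowvar=False)

# Greedily keep a metric only if it is not strongly correlated
# (|r| >= 0.9, an assumed threshold) with one already kept.
kept = []
for i, name in enumerate(names):
    if all(abs(corr[i, j]) < 0.9 for j in kept):
        kept.append(i)
print("reduced objectives:", [names[i] for i in kept])
```

    In this toy data, "instability" is a linear function of "coupling" and is pruned, leaving three objectives for the search.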

    An Introduction to Software Ecosystems

    This chapter defines and presents different kinds of software ecosystems. The focus is on the development, tooling and analytics aspects of software ecosystems, i.e., communities of software developers and the interconnected software components (e.g., projects, libraries, packages, repositories, plug-ins, apps) they are developing and maintaining. The technical and social dependencies between these developers and software components form a socio-technical dependency network, and the dynamics of this network change over time. We classify and provide several examples of such ecosystems. The chapter also introduces and clarifies the relevant terms needed to understand and analyse these ecosystems, as well as the techniques and research methods that can be used to analyse different aspects of these ecosystems.
    Comment: Preprint of the chapter "An Introduction to Software Ecosystems" by Tom Mens and Coen De Roover, published in the book "Software Ecosystems: Tooling and Analytics" (eds. T. Mens, C. De Roover, A. Cleve), 2023, ISBN 978-3-031-36059-6, reproduced with permission of Springer. The final authenticated version of the book and this chapter is available online at: https://doi.org/10.1007/978-3-031-36060-
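
    A socio-technical dependency network of the kind described here can be sketched as a small directed graph; the following example uses networkx with invented developer and package names purely for illustration.

```python
# Minimal sketch (assumed data): developer and package nodes connected by
# social and technical dependency edges.
import networkx as nx

G = nx.DiGraph()
G.add_edge("dev:alice", "pkg:parser", relation="maintains")   # social -> technical
G.add_edge("dev:bob",   "pkg:parser", relation="contributes")
G.add_edge("pkg:app",   "pkg:parser", relation="depends_on")  # technical -> technical

# Who or what is affected if pkg:parser changes? Follow incoming edges.
print([n for n, _ in G.in_edges("pkg:parser")])
```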

    Scalable Automated Incrementalization for Real-Time Static Analyses

    This thesis proposes a framework for the easy development of static analyses whose results are incrementalized to provide instantaneous feedback in an integrated development environment (IDE). Today, IDEs feature many tools that build on static analyses to assess software quality and catch correctness problems. Yet, these tools often fail to provide instantaneous feedback and are thus restricted to nightly build processes. This precludes developers from fixing issues at their inception time, i.e., when the problem and the developed solution are both still fresh in mind. Incrementalization is a well-known technique for providing instantaneous feedback: it exploits the fact that developers make only small changes to the code, so analysis results can be re-computed quickly from those changes. Yet, incrementalization requires carefully crafted static analyses, which makes a manual approach unattractive. Automated incrementalization alleviates these problems and allows analysis writers to formulate their analyses as queries over the full data set, without worrying about the semantics of incremental changes. Existing approaches to automated incrementalization employ standard technologies, such as deductive databases, that provide declarative query languages yet require materializing the full dataset in main memory, i.e., the memory is permanently occupied by the data required for the analyses. Other standard technologies, such as relational databases, offer better scalability due to persistence, yet incur long transaction times for loading the data. Neither technology is a perfect match for integrating static analyses into an IDE, since the underlying data, i.e., the code base, is already persisted and managed by the IDE; transferring it into a database is redundant work.
    In this thesis, a novel approach is proposed that provides a declarative query language and automated incrementalization, yet retains in memory only the necessary minimum of data, i.e., only the data required for incrementalization. The approach allows static analyses to be declared as incrementally maintained views, where the underlying formalism for incrementalization is the relational algebra with extensions for object orientation and recursion. The algebra makes it possible to deduce which data constitutes the necessary minimum for incremental maintenance and indeed shows that many views are self-maintainable, i.e., require no materialized data at all. In addition, an optimization for the algebra is proposed that widens the range of self-maintainable views based on domain knowledge of the underlying data. The optimization works similarly to declaring primary keys in databases: it is declared on the schema of the data and defines which data is incrementally maintained in the same scope. The scope makes all analyses (views) that correlate only data within its boundaries self-maintainable. The approach is implemented as an embedded domain-specific language in a general-purpose programming language. The implementation can be understood as a database-like engine with an SQL-style query language and the execution semantics of the relational algebra; as such, it is a general-purpose query engine and can be used to incrementalize domains other than static analyses. To evaluate the approach, a large variety of static analyses were sampled from real-world tools and formulated as incrementally maintained views in the implemented engine.
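
    To give a flavor of self-maintainable views, here is a minimal sketch, not the thesis's actual engine: a selection view over analysis facts is maintained purely from insert/delete deltas, without access to the base relation.

```python
# Minimal sketch (assumed semantics): a selection view sigma_p(R) maintained
# incrementally. Selections are self-maintainable: an inserted or deleted
# tuple can be handled by testing the predicate alone, never consulting R.
class IncrementalSelection:
    def __init__(self, predicate):
        self.predicate = predicate
        self.view = set()  # only the materialized view contents are kept

    def on_insert(self, tup):
        if self.predicate(tup):
            self.view.add(tup)

    def on_delete(self, tup):
        self.view.discard(tup)

# View over hypothetical "method" facts: methods longer than 50 lines
# (a toy static analysis).
long_methods = IncrementalSelection(lambda m: m[1] > 50)
long_methods.on_insert(("Foo.bar", 120))
long_methods.on_insert(("Foo.baz", 10))
long_methods.on_delete(("Foo.bar", 120))
print(long_methods.view)  # set()
```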

    Investigating the Effects of a Virtual Process Environment on the Comprehension of Business Process Models

    Within the scope of Business Process Management and Modeling, gamification is used, inter alia, to promote process model comprehension and for motivational and educational purposes. In the context of gamification in Business Process Management, this master thesis investigates the effects of a virtual process environment on the cognitive load a process reader perceives while comprehending a process model. The comprehension of process models is essential for the proper modeling of business processes, and vice versa. In addition to previous research on gamification in the management and modeling of business processes, this master thesis takes concepts from cognitive research into account. A study with 72 participants was conducted online. The measures of interest were the cognitive load induced by the textual process description, by the process model, and by the process model extended with graphics extracted from the virtual process environment. A factorial design was established, as only the process model was extended with static pictures. The virtual process environment is realized through a video based on a 3D warehouse-scenario game. As a result, no significant difference in the perceived cognitive load of the process reader was found between the three process variants. In summary, after experiencing a virtual process environment, the cognitive load induced by the process documentations does not differ significantly. Further analysis showed that the process reader's confidence in the completeness and adequacy of the shown process documentation is associated with the process documentation variant: participants were more confident about the correctness of the process model extended with graphics.