
    Extracorporeal Shock Wave Treatment (ESWT) enhances the in vitro-induced differentiation of human tendon-derived stem/progenitor cells (hTSPCs)

    Extracorporeal shock wave therapy (ESWT) is a non-invasive and innovative technology for the management of specific tendinopathies. To elucidate the clinical benefits mediated by ESWT, cultures of human tendon-derived stem/progenitor cells (hTSPCs) were established from 5 healthy semitendinosus (ST) and 5 ruptured Achilles (AT) tendons. While hTSPCs from the two groups showed similar proliferation rates and stem cell surface marker profiles, we found that the clonogenic potential was maintained only in cells derived from healthy donors. Interestingly, ESWT significantly accelerated hTSPC differentiation, suggesting that the clinical benefits of ESWT may be ascribed to an increased efficiency of tendon repair after injury.

    Grand Challenges of Traceability: The Next Ten Years

    In 2007, the software and systems traceability community met at the first Natural Bridge symposium on the Grand Challenges of Traceability to establish and address research goals for achieving effective, trustworthy, and ubiquitous traceability. Ten years later, in 2017, the community came together to evaluate a decade of progress towards achieving these goals. These proceedings document some of that progress. They include a series of short position papers, representing current work in the community organized across four process axes of traceability practice. The sessions covered topics from Trace Strategizing, Trace Link Creation and Evolution, Trace Link Usage, real-world applications of Traceability, and Traceability Datasets and Benchmarks. Two breakout groups focused on the importance of creating and sharing traceability datasets within the research community, and discussed challenges related to the adoption of tracing techniques in industrial practice. Members of the research community are engaged in many active, ongoing, and impactful research projects. Our hope is that ten years from now we will be able to look back at a productive decade of research and claim that we have achieved the overarching Grand Challenge of Traceability, which seeks for traceability to be always present, built into the engineering process, and for it to have "effectively disappeared without a trace". We hope that others will see the potential that traceability has for empowering software and systems engineers to develop higher-quality products at increasing levels of complexity and scale, and that they will join the active community of software and systems traceability researchers as we move forward into the next decade of research.

    How open is open enough?: Melding proprietary and open source platform strategies

    Computer platforms provide an integrated architecture of hardware and software standards as a basis for developing complementary assets. The most successful platforms were owned by proprietary sponsors that controlled platform evolution and appropriated associated rewards. Responding to the Internet and open source systems, three traditional vendors of proprietary platforms experimented with hybrid strategies which attempted to combine the advantages of open source software while retaining control and differentiation. Such hybrid standards strategies reflect the competing imperatives for adoption and appropriability, and suggest the conditions under which such strategies may be preferable to either the purely open or purely proprietary alternatives.

    Consistent View-Based Management of Variability in Space and Time

    Systems evolve rapidly and exist in multiple variants in order to satisfy different and changing requirements. This leads to successive revisions (variability in time) and concurrently existing product variants (variability in space). Redundancies and dependencies between different products across multiple revisions, as well as heterogeneous types of artifacts, quickly lead to inconsistencies during the evolution of a variable system. Coping with this complexity and managing both dimensions of variability in a uniform and consistent way are essential challenges for the successful development of large and long-lived systems. Variability in space is primarily considered in software product line engineering, whereas variability in time is studied in software configuration management. Consistency preservation between heterogeneous artifact types and view-based software development are central research topics in model-driven software development. The isolation of these three adjacent disciplines has produced a multitude of approaches and tools from the different areas, which hampers the definition of a common understanding and carries the risk of redundant research and development. Tools from the different disciplines are often insufficiently integrated, resulting in a heterogeneous tool landscape and high manual effort during the evolution of a variable system, which in turn harms system quality and leads to higher maintenance costs. Based on the current state of research in these disciplines, this dissertation presents three core contributions to support coping with the complexity arising during the evolution of variable systems. The unified conceptual model documents and unifies concepts and relations for the simultaneous handling of variability in space and time, based on a large selection of approaches and tools from software product line engineering and software configuration management. Beyond merely combining existing concepts, the unified conceptual model describes new ways of relating the two dimensions of variability to each other. The unified operations use the unified conceptual model as their data structure and form the basis for the operational management of variability in space and time. The unified operations were designed based on an analysis of diverse approaches following different modalities and paradigms. While the unified operations cover the functionality of the analyzed tools, they enable the simultaneous handling of both dimensions of variability. The unified approach builds on the preceding contributions and extends them with consistency preservation. To this end, types of variability-specific inconsistencies that can occur during the evolution of variable heterogeneous systems were identified. The unified approach enables automated consistency preservation for a selected subset of the identified inconsistency types. Each core contribution was evaluated empirically. To evaluate the unified conceptual model and the unified operations, expert interviews were conducted, metrics for assessing the appropriateness of a unification were defined and applied, and exemplary applications were demonstrated. The functional suitability of the unified approach was evaluated by means of two real-world case studies: the widely used ArgoUML-SPL, which is based on ArgoUML, a UML modeling tool, and MobileMedia, a mobile application for media management. The unified approach is implemented using the Eclipse Modeling Framework (EMF) and the Vitruvius approach. The core contributions of this work extend the existing knowledge on the uniform management of variability in space and time and combine it with automated consistency preservation for variable systems consisting of heterogeneous artifact types.
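    To make the idea of a unified conceptual model more concrete, the following is a minimal Python sketch of how variability in space (features selecting product variants) and variability in time (revisions) could be captured in one data structure. All names (Feature, Revision, Fragment, UnifiedSystem, derive) are invented for illustration and are not taken from the dissertation or its EMF-based implementation.

    ```python
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Feature:          # variability in space
        name: str

    @dataclass(frozen=True)
    class Revision:         # variability in time
        number: int

    @dataclass
    class Fragment:
        """A piece of an artifact (code, diagram, ...) annotated with the
        features and revisions in which it is present."""
        content: str
        features: frozenset   # required features; empty set = always present
        revisions: frozenset  # revisions in which this fragment exists

    @dataclass
    class UnifiedSystem:
        fragments: list = field(default_factory=list)

        def derive(self, selected_features, revision):
            """Derive one product variant at one revision by filtering
            every fragment on both variability dimensions at once."""
            return [
                f.content
                for f in self.fragments
                if f.features <= selected_features and revision in f.revisions
            ]

    # Tiny usage example: a base fragment in both revisions, and an
    # optional "undo" feature introduced only in revision 2.
    r1, r2 = Revision(1), Revision(2)
    system = UnifiedSystem([
        Fragment("core code", frozenset(), frozenset({r1, r2})),
        Fragment("undo feature code", frozenset({Feature("undo")}), frozenset({r2})),
    ])
    print(system.derive(frozenset({Feature("undo")}), r2))  # both fragments
    print(system.derive(frozenset(), r1))                   # only core code
    ```

    Deriving a concrete product then amounts to filtering every artifact fragment on both dimensions simultaneously, which is the kind of combined space-and-time operation the dissertation's unified operations would need to cover.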

    GHOST: Building blocks for high performance sparse linear algebra on heterogeneous systems

    While many of the architectural details of future exascale-class high performance computer systems are still a matter of intense research, there appears to be a general consensus that they will be strongly heterogeneous, featuring "standard" as well as "accelerated" resources. Today, such resources are available as multicore processors, graphics processing units (GPUs), and other accelerators such as the Intel Xeon Phi. Any software infrastructure that claims usefulness for such environments must be able to meet their inherent challenges: massive multi-level parallelism, topology, asynchronicity, and abstraction. The "General, Hybrid, and Optimized Sparse Toolkit" (GHOST) is a collection of building blocks that targets algorithms dealing with sparse matrix representations on current and future large-scale systems. It implements the "MPI+X" paradigm, has a pure C interface, and provides hybrid-parallel numerical kernels, intelligent resource management, and truly heterogeneous parallelism for multicore CPUs, Nvidia GPUs, and the Intel Xeon Phi. We describe the details of its design with respect to the challenges posed by modern heterogeneous supercomputers and recent algorithmic developments. Implementation details which are indispensable for achieving high efficiency are pointed out and their necessity is justified by performance measurements or predictions based on performance models. The library code and several applications are available as open source. We also provide instructions on how to make use of GHOST in existing software packages, together with a case study which demonstrates the applicability and performance of GHOST as a component within a larger software stack.
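    GHOST's actual C API is not reproduced here; as a rough illustration of the kind of building block such a toolkit provides, the sketch below computes a sparse matrix-vector product y = A*x over the CRS (compressed row storage) format in Python. The real library implements kernels like this in C with MPI+X hybrid parallelism; this sketch only shows the data layout and the loop nest whose outer (row) level such kernels distribute across processes and threads.

    ```python
    import numpy as np

    def spmv_crs(row_ptr, col_idx, val, x):
        """y = A*x for a matrix stored in CRS: row_ptr delimits each
        row's nonzeros, col_idx/val hold their columns and values."""
        n_rows = len(row_ptr) - 1
        y = np.zeros(n_rows)
        for i in range(n_rows):                        # rows: parallelizable level
            for j in range(row_ptr[i], row_ptr[i + 1]):  # nonzeros of row i
                y[i] += val[j] * x[col_idx[j]]
        return y

    # 3x3 example:  [[4, 0, 1],
    #                [0, 2, 0],
    #                [3, 0, 5]]
    row_ptr = [0, 2, 3, 5]
    col_idx = [0, 2, 1, 0, 2]
    val     = [4.0, 1.0, 2.0, 3.0, 5.0]
    print(spmv_crs(row_ptr, col_idx, val, np.ones(3)))  # -> [5. 2. 8.]
    ```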

    Consistent View-Based Management of Variability in Space and Time

    Developing variable systems faces many challenges. Dependencies between interrelated artifacts within a product variant (such as code or diagrams), across product variants, and across their revisions quickly lead to inconsistencies during evolution. This work provides a unification of common concepts and operations for variability management, identifies variability-related inconsistencies, and presents an approach for view-based consistency preservation of variable systems.

    Towards Collaborative Scientific Workflow Management System

    The big data explosion has impacted several domains in recent years, from research areas to diverse business models. As this intensive amount of data opens up the possibility of interesting knowledge discoveries, diverse research domains have, over the past few years, shifted towards analyzing these massive amounts of data. Scientific Workflow Management Systems (SWfMSs) have gained much popularity in recent years for accelerating such data-intensive analyses, visualization, and discovery of important information. Data-intensive tasks are often significantly time-consuming and complex in nature, and SWfMSs are therefore designed to efficiently support the specification, modification, execution, failure handling, and monitoring of the tasks in a scientific workflow. Given the complexity, dimensionality, and volume of the data, effective analysis or management often becomes challenging for an individual and instead requires the collaboration of multiple scientists. Hence the notion of the 'Collaborative SWfMS' was coined, which has gained significant interest among researchers in recent years, as none of the existing SWfMSs directly supports real-time collaboration among scientists. For collaborative SWfMSs, consistency management in the face of conflicting concurrent operations by collaborators is a major challenge because of the highly interconnected document structure among the computational modules: any minor change in one part of the workflow can strongly impact other parts of the collaborative workflow through the data-link relations among them. In addition to consistency management, studies identify several other challenges that need to be addressed for a successful design of collaborative SWfMSs, such as sub-workflow composition and execution by different sub-groups, the relationship between scientific workflows and collaboration models, sub-workflow monitoring, and seamless integration and access control of the workflow components among collaborators. In this thesis, we propose a locking scheme to facilitate consistency management in collaborative SWfMSs. The proposed method works by locking workflow components at a granular attribute level, in addition to supporting locks on a targeted part of the collaborative workflow. We conducted several experiments to analyze the performance of the proposed method in comparison to related existing methods. Our studies show that the proposed method can reduce the average waiting time of a collaborator by up to 36% while increasing the average workflow update rate by up to 15% compared to existing descendant modular-level locking techniques for collaborative SWfMSs. We also propose a role-based access control technique for the management of collaborative SWfMSs, leveraging the Collaborative Interactive Application Methodology (CIAM) to investigate role-based access control in the context of collaborative SWfMSs. We present the proposed method with a use case from the plant phenotyping and genotyping research domain. Recent studies show that collaborative SWfMSs present distinct sets of opportunities and challenges. Based on our investigation of existing research on collaborative SWfMSs and the findings of our two prior studies, we propose an architecture for collaborative SWfMSs, and we present SciWorCS, a collaborative scientific workflow management system, as a proof of concept of the proposed architecture; to the best of our knowledge, it is the first of its kind. We present several real-world use cases of scientific workflows using SciWorCS. Finally, we conduct several user studies using SciWorCS, comprising different real-world scientific workflows (i.e., from myExperiment), to understand user behavior and styles of work in the context of collaborative SWfMSs. In addition to evaluating SciWorCS, the user studies reveal several interesting findings that can significantly contribute to the research domain, as none of the existing methods considered such empirical studies, relying instead only on computer-generated simulated studies for evaluation.
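    The abstract does not spell out the locking algorithm itself, so the following is only a minimal sketch of what attribute-granular locking could look like; the class, method, and attribute names are hypothetical and not SciWorCS's actual implementation.

    ```python
    import threading

    class AttributeLockTable:
        """Locks at the (module, attribute) granularity instead of locking
        a whole workflow module. All names here are illustrative."""

        def __init__(self):
            self._guard = threading.Lock()
            self._owners = {}  # (module_id, attribute) -> collaborator id

        def try_lock(self, collaborator, module_id, attribute):
            """Acquire one attribute of one module; returns False if
            another collaborator already holds it (re-entrant for the
            same owner)."""
            key = (module_id, attribute)
            with self._guard:
                owner = self._owners.get(key)
                if owner is None:
                    self._owners[key] = collaborator
                    return True
                return owner == collaborator

        def unlock(self, collaborator, module_id, attribute):
            key = (module_id, attribute)
            with self._guard:
                if self._owners.get(key) == collaborator:
                    del self._owners[key]

    table = AttributeLockTable()
    assert table.try_lock("alice", "align_reads", "num_threads")
    assert table.try_lock("bob", "align_reads", "reference_genome")  # no waiting
    assert not table.try_lock("bob", "align_reads", "num_threads")   # held by alice
    ```

    The benefit of the finer granularity is visible in the last two lines: two collaborators can edit different attributes of the same module concurrently, which is how such a scheme can reduce waiting time relative to module-level locking.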