202 research outputs found
Nonlinear momentum compaction and coherent synchrotron radiation at the Metrology Light Source
The subject of this thesis is the operation of an electron storage ring at low momentum compaction to generate short electron bunches as a source of coherent synchrotron radiation. For this purpose the Metrology Light Source is ideally suited, as it is the first light source designed with the ability to adjust the three leading orders of the momentum compaction factor by quadrupole, sextupole and octupole magnets. This opens new opportunities to shape the longitudinal phase space. The focus is on beam dynamics dominated by nonlinear momentum compaction, in particular the generation of a new bucket type, the "alpha-buckets", and possible applications. The relation between analytical theory, numerical simulations and experimental data is presented and discussed. In addition, the current limitation of the Metrology Light Source bunches due to the bursting instability is investigated. The majority of measurements were conducted at the Metrology Light Source, complemented by measurements at the BESSY II storage ring.
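As a reading aid (a sketch in standard accelerator-physics notation, not taken verbatim from the thesis): the "three leading orders of the momentum compaction factor" refer to the expansion of the relative orbit-length change with momentum deviation \delta = \Delta p / p_0,

    \frac{\Delta L}{L_0} = \alpha_0\,\delta + \alpha_1\,\delta^2 + \alpha_2\,\delta^3 + \mathcal{O}(\delta^4),

where quadrupoles, sextupoles and octupoles predominantly control \alpha_0, \alpha_1 and \alpha_2, respectively. The "alpha-buckets" are the additional stable fixed points of the longitudinal motion that appear when the momentum-dependent phase-slip factor \eta(\delta) \approx \alpha(\delta) - 1/\gamma^2 acquires further zero crossings in \delta.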
The drug development pipeline for glioblastoma: a cross sectional assessment of the FDA orphan drug product designation database
BACKGROUND: Glioblastoma (GBM) is the most common malignant brain tumour among adult patients and represents an almost universally fatal disease. Novel therapies for GBM are being developed under the orphan drug legislation, and knowledge of the molecular makeup of this disease has been increasing rapidly. However, the clinical outcomes in GBM patients with currently available therapies are still dismal. An insight into the current drug development pipeline for GBM is therefore of particular interest. OBJECTIVES: To provide a quantitative clinical-regulatory insight into the status of FDA orphan drug designations for compounds intended to treat GBM. METHODS: Quantitative cross-sectional analysis of the U.S. Food and Drug Administration Orphan Drug Product database between 1983 and 2020. STROBE criteria were respected. RESULTS: Four out of 161 (2.4%) orphan drug designations were approved for the treatment of GBM by the FDA between 1983 and 2020. Fourteen orphan drug designations were subsequently withdrawn for unknown reasons. The number of orphan drug designations per year shows a growing trend. In the last decade, the therapeutic mechanism of action of designated compounds intended to treat glioblastoma shifted from cytotoxic drugs (median year of designation 2008) to immunotherapeutic approaches and small molecules (median years of designation 2014 and 2015, respectively), suggesting an increased focus on precision in the therapeutic mechanism of action for compounds in the development pipeline. CONCLUSION: Although current pharmacological treatment options in GBM are sparse, the drug development pipeline is steadily growing. In particular, the surge of designated immunotherapies detected in recent years raises the hope that elaborate combination possibilities between classical therapeutic backbones (radiotherapy and chemotherapy) and novel, currently experimental therapeutics may help to provide better therapies for this deadly disease in the future.
Design and implementation of a single-source tumor documentation workflow at the Comprehensive Cancer Center Erlangen-Nürnberg
Background and goals: At 24.4 %, cancer is the second leading cause of death in Germany. Because patient care is characterized by long treatment periods as well as a large share of interdisciplinary and trans-sectoral treatment processes, well-structured tumor documentation is important. Oncology data is used not only within patient care, but also for quality assurance, cancer registration and research projects. With regard to documentation content and documentation procedures, these scenarios are not harmonized; instead, they are characterized by redundancies and multiple documentation. This work develops a concept for the electronic capture of oncology data during the clinical treatment process. Following the single-source concept, this data is documented once at its origin and reused afterwards for medical process management, quality assurance, cancer registration and research projects. Methods: The concept is developed in a three-step procedure. In a first step, stakeholders and trends in oncology care are identified based on a literature review as well as interviews with internal and external oncology experts. On this basis, a solution model is developed. Taking data privacy aspects into account, it is analyzed whether the available cancer registry datasets are suitable for clinically integrated single-source tumor documentation. Furthermore, existing clinical IT applications and documentation standards are combined into an IT architecture. This solution model is validated in 22 projects at the Comprehensive Cancer Center Erlangen-Nürnberg (CCC EN), and its feasibility is demonstrated in several case studies. Results: Establishing single-source tumor documentation is a protracted project, requiring the collaboration of doctors and nurses as well as employees of the cancer registry, the IT department and the quality management department. Within cancer care, it is important to demonstrate quality of care with facts and figures, to promote interdisciplinary as well as trans-sectoral collaboration, and to expand translational research projects. To meet these requirements and enable data reuse for medical process management, quality assurance, cancer registration and research, oncology data must be documented electronically during patient care. To enable integration into clinical processes, the available cancer registry datasets were split into documentation packages and matched to existing IT applications. Because these systems are linked by a suitable IT architecture, documented data can be exchanged and reused afterwards. The feasibility of this concept is demonstrated by three case studies: prostate carcinoma, psycho-oncology and melanoma. Practical conclusions: At the CCC EN, single-source tumor documentation has been integrated into the clinical care process, and multiple documentation has been reduced by reusing data. Medical staff must be relieved, and thereby motivated to document oncology data electronically, by new features that support them during patient care. Moreover, a committee with directive authority for hospital-internal standardization must define organizational processes, documentation standards and a uniform IT architecture. The exemplary implementation of single-source tumor documentation at the CCC EN was time- and resource-intensive. The effort could be significantly reduced by the following measures:
• Harmonizing quality assurance programs, certifications and oncology datasets at the national level.
• National committees using existing IT standards (HL7, CDA) to define the syntax and semantics of oncology data exchange.
• Providing more field reports from other hospitals, so that different tumor documentation strategies can be carved out, compared, and best practices identified.
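A minimal sketch of the core single-source idea described above, splitting a registry dataset into documentation packages assigned to the clinical IT application where the data originates; all package names, items and system names are hypothetical illustrations, not the CCC EN configuration:

from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class DocumentationPackage:
    name: str                       # e.g. "diagnosis", "surgery" (hypothetical)
    items: list[str]                # registry data items covered
    source_system: str              # clinical IT application capturing them
    reused_by: list[str] = field(default_factory=list)  # downstream consumers

packages = [
    DocumentationPackage(
        name="diagnosis",
        items=["ICD-O topography", "ICD-O morphology", "TNM"],
        source_system="hospital information system",
        reused_by=["cancer registry", "quality management", "research"],
    ),
    DocumentationPackage(
        name="psycho-oncology screening",
        items=["distress thermometer score"],
        source_system="psycho-oncology documentation module",
        reused_by=["cancer registry", "research"],
    ),
]

# Single-source principle: every registry item is captured by exactly one system
# during patient care and only reused (never re-entered) downstream.
capture_map = {item: p.source_system for p in packages for item in p.items}
for item, system in capture_map.items():
    print(f"{item!r} is documented once, in the {system}")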
Pressure for drug development in lysosomal storage disorders – a quantitative analysis thirty years beyond the US orphan drug act
Background: Lysosomal storage disorders are a heterogeneous group of approximately 50 monogenically inherited orphan conditions. A defect leads to the storage of complex molecules in the lysosome, and patients develop a complex multisystemic phenotype of high morbidity, often associated with premature death. More than 30 years ago, the Orphan Drug Act of 1983 was passed in the United States, legislation intended to facilitate the development of drugs for rare disorders. We assessed which lysosomal diseases attracted drug development pressure and what distinguished those with successful development and approvals from diseases not treated or without orphan drug designation. Methods: Analysis of the FDA database for orphan drug designations through descriptive and comparative statistics. Results: Between 1983 and 2013, fourteen drugs for seven conditions received FDA approval. Overall, orphan drug status was designated 70 times for 20 conditions. Approved therapies were enzyme replacement therapies (N = 10), substrate reduction therapies (N = 1), and small molecules facilitating lysosomal substrate transportation (N = 3). FDA approval was significantly associated with a disease prevalence higher than 0.5/100,000 (p = 0.00742) and with clinical development programs that did not require a primary neurological endpoint (p = 0.00059). Orphan drug status was designated for enzymes, modified enzymes, fusion proteins, chemical chaperones, small molecules leading to substrate reduction or facilitating subcellular substrate transport, stem cells, as well as gene therapies. Conclusions: Drug development focused on the more common diseases; primarily neurological diseases were neglected. Small clinical trials with either somatic or biomarker endpoints were successful. Enzyme replacement therapy was the most successful technology. Four factors played a key role in successful orphan drug development or orphan drug designations: 1) prevalence of disease, 2) endpoints, 3) regulatory precedent, and 4) technology platform. Successful development seeded further innovation.
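The comparative statistics reported above are association tests on 2x2 tables. A minimal sketch of that kind of analysis; the counts below are entirely hypothetical (the study's actual contingency tables are not reproduced here), chosen only to be consistent with 7 approved conditions out of 20:

# Hypothetical 2x2 association test: approval status vs. prevalence threshold.
# These counts are made up for illustration and are NOT the study's data.
from scipy.stats import fisher_exact

#                         approved   not approved
table = [[6, 3],        # prevalence  > 0.5/100,000
         [1, 10]]       # prevalence <= 0.5/100,000

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")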
Making Sense of the Sustainable Smart PSS Value Proposition
While academia attributes superior value potential to sustainable smart PSS (SSPSS), in practice they are not widely implemented. To address this gap, we analyze how the notion of SSPSS value is constructed through sensemaking. Adopting a case study approach, we explore differences in organizational sensemaking. Moreover, we analyze how the three functional roles “digital innovation and technology”, “sustainability”, and “market” involved in innovating SSPSS make sense of the value proposition. We conclude that value is subjective and the value proposition of SSPSS is multi-faceted. Each facet is constructed through the interaction of organizational, functional roles’, and individual sensemaking. At the organizational level, commitment, identity, and expectations influence the creation of shared meaning. At the functional role level, actors differ in their sensemaking based on the cognitive frames applied. At the individual level, subjective beliefs impact sensemaking. Hence, sensemaking is a multi-level process that raises the question of alignment.
Visualizing the functional architecture of the endocytic machinery.
Clathrin-mediated endocytosis is an essential process that forms vesicles from the plasma membrane. Although most of the protein components of the endocytic protein machinery have been thoroughly characterized, their organization at the endocytic site is poorly understood. We developed a fluorescence microscopy method to track the average positions of yeast endocytic proteins in relation to each other with a time precision below 1 s and with a spatial precision of ~10 nm. With these data, integrated with shapes of endocytic membrane intermediates and with superresolution imaging, we could visualize the dynamic architecture of the endocytic machinery. We showed how different coat proteins are distributed within the coat structure and how the assembly dynamics of N-BAR proteins relate to membrane shape changes. Moreover, we found that the region of actin polymerization is located at the base of the endocytic invagination, with the growing ends of filaments pointing toward the plasma membrane.
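Nanometer-precision "average position" tracking of this kind rests on the fact that the centroid of a diffraction-limited spot can be localized far more precisely than the optical resolution. A generic, minimal sketch of intensity-weighted centroid localization (not the authors' pipeline; the toy spot and names are illustrative):

# Intensity-weighted centroid of an image patch: localizes the mean position
# of a fluorescent spot with sub-pixel (here, effectively nanometer-scale)
# precision. Generic illustration only.
import numpy as np

def weighted_centroid(img: np.ndarray) -> tuple:
    """Return the intensity-weighted centroid (row, col) of an image patch."""
    img = img.astype(float)
    img -= img.min()                      # crude background subtraction
    total = img.sum()
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total

# Toy spot: 2D Gaussian centered at (10.3, 11.7) on a 21x21 pixel patch
rr, cc = np.indices((21, 21))
spot = np.exp(-((rr - 10.3) ** 2 + (cc - 11.7) ** 2) / (2 * 2.0 ** 2))
print(weighted_centroid(spot))            # ~ (10.3, 11.7): sub-pixel precision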
Comparison of Bayesian optimization and the reduction of resonance driving terms in the optimization of the dynamic aperture of the BESSY III MBA lattice
HZB is currently designing the lattice for BESSY III, the successor of the 1.7 GeV electron storage ring running in Berlin since 1998. HZB follows a deterministic lattice design strategy, where the natural substructures of a non-hybrid MBA lattice are optimized separately. The substructures consist of only a few parameters that can be derived from the strategic goals of the project. In the next step, the focusing and defocusing sextupole families are split up to optimize the longitudinal and the transverse apertures. The paper compares two approaches to selecting the optimal sextupole strengths. The first is multi-objective Bayesian optimization, where the dynamic aperture volume from tracking simulations is used as an objective to be maximized. The second approach does not involve tracking and minimizes the geometric and chromatic resonance driving terms. The comparison of the two results includes their quality in terms of the size of the achievable 3D dynamic aperture and the computational effort involved.
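To illustrate the first approach, here is a minimal single-objective Bayesian-optimization loop (Gaussian-process surrogate plus expected improvement). The objective is a toy stand-in for the tracked dynamic-aperture figure of merit; the paper's actual setup is multi-objective and all names below are illustrative:

# Minimal Bayesian-optimization sketch: maximize a black-box objective
# standing in for the dynamic-aperture volume as a function of two
# sextupole-family strengths. Illustrative only, not the HZB setup.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def objective(x):
    """Toy stand-in for a tracking-based dynamic-aperture figure of merit."""
    return -np.sum((x - 0.3) ** 2) + 0.01 * rng.normal()

bounds = np.array([[-1.0, 1.0], [-1.0, 1.0]])              # two sextupole knobs
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 2))   # initial samples
y = np.array([objective(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(20):
    gp.fit(X, y)
    # Expected improvement over the best observation, on random candidates
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(1024, 2))
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print("best strengths:", X[np.argmax(y)], "best value:", y.max())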
Sustainable smart product-service systems: a causal logic framework for impact design
Digital technologies can elevate product-service systems (PSS) to smart PSS, which focus on performance rather than ownership and are considered a means for dematerialization. However, transitioning to smart PSS does not guarantee sustainability. To understand the impact of smart PSS holistically, we take a two-pronged approach. First, we use the theory of change to conceptualize the causal link between sustainable smart PSS and their ultimate impact. We develop a three-step causal logic framework consisting of design, causation, and impact. Within this framework, we identify the business model properties of sustainable smart PSS as design characteristics and categorize the eventual impacts based on the triple bottom line. We introduce the term multi-causal pathway to describe the causation processes underlining the possibility of non-linearity and multi-causality. Second, we conduct a systematic literature review to investigate the mechanisms linking design and impact. Based on an analysis of 63 publications, we identify 17 specific mechanisms and group them into four types: information, resource, empowerment, and adverse mechanisms. Visualizing our results, we develop a morphological box as a toolkit for managers to develop their own impact-oriented logic model by identifying and activating the multi-causal pathway that fosters the desired sustainability effects. Moreover, discussing our framework, we develop research propositions and managerial questions for impact design. By linking the theory of change with the business model impact, we contribute toward a conceptual synthesis for understanding the impact of (sustainable) smart PSS.
Navigating the long tail - Towards practical guidance for researchers on how to select a repository for long tail data
With nearly 2000 entries in the Registry of Research Data Repositories (re3data.org, November 2017), researchers are confronted with a plethora of repositories for depositing research data. Given the diversity of these services, we have noticed that researchers find it challenging to make an informed decision, especially when they are dealing with data from the so-called “long tail” (small, diverse, individual, less standardized data). Although re3data.org provides a very comprehensive list of criteria (i.e. filters) to narrow down the number of choices, guidance is still needed, for example on evaluating the importance of a criterion (e.g. type of repository) or the impact of a certain choice (e.g. which PID?).
In this poster presentation, we take the perspective of the research data management helpdesk, a central service facility at the Friedrich-Schiller University in Jena (Germany), and investigate how we could address this selection challenge. The aim is to develop a practical guide for researchers from domains where there is no obvious choice or well-established repository available (i.e. the long tail) and where researchers rely on general-purpose repositories.
In a first step, we compared five generic repositories for long tail data (Figshare, Zenodo, Dryad, RADAR, Digital Library Thuringia) using the individual descriptions and properties on re3data.org. For some criteria, the information content in re3data.org was rather limited, so we also explored the individual websites of the repository providers. For example, the criterion “Quality Management” only states whether a repository provider does quality management, but not what exactly that means. Another example of rather sparse information is the level of data curation available and applied to the data in a certain repository. Such information would be helpful in the evaluation process.
In a second step, we took a number of real cases from our work at the helpdesk and investigated how well the researchers' intentions and expectations match the means and information available to evaluate a repository (both on re3data.org and on the repository website). This might be straightforward, for example, if the intention is to make data citable, where one needs to check whether a PID is provided. But it might be more difficult, for example, if a researcher would like to assess the visibility a dataset may gain from publishing with a certain repository. In this case, one should look at a number of properties (e.g. Metrics, Syndications, API, Licences) with rather technical information.
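A minimal sketch of the kind of criteria matching described above; the property keys and values are illustrative placeholders, not actual re3data.org records:

# Match researcher requirements against repository properties. The values
# below are placeholders for illustration, not real re3data.org data.
REPOSITORIES = {
    "Figshare": {"pid": "DOI", "open_access": True,  "curation": "basic"},
    "Zenodo":   {"pid": "DOI", "open_access": True,  "curation": "basic"},
    "Dryad":    {"pid": "DOI", "open_access": True,  "curation": "review"},
    "RADAR":    {"pid": "DOI", "open_access": False, "curation": "basic"},
}

def matching(requirements: dict) -> list:
    """Return repositories whose properties satisfy every requirement."""
    return [name for name, props in REPOSITORIES.items()
            if all(props.get(k) == v for k, v in requirements.items())]

# Example: a researcher who wants citable (DOI), openly accessible data
print(matching({"pid": "DOI", "open_access": True}))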
- …