
    Function of carboxy-terminal phosphorylation sites in the ligand-dependent internalization of the µ-opioid receptor

    Morphine remains the most frequently used analgesic in the treatment of severe pain. Its therapeutic benefit, however, is counteracted by the development of tolerance, and the molecular and cellular mechanisms leading to tolerance are still poorly understood. The target protein through which opioid analgesics such as morphine exert their effect is the µ-opioid receptor (MOR), a member of the G protein-coupled receptor family. This work was motivated by the observation that agonists which trigger internalization of the receptor after binding at the cellular level show a reduced tolerance potential in vivo. The aim was therefore to understand which molecular events at the receptor contribute to internalization. After agonist binding, the receptor undergoes a decisive modification: phosphorylation at several serine and threonine residues of the carboxy terminus. To identify the phosphorylation sites relevant for internalization, various receptor mutants were generated for in vitro experiments in HEK 293 cells, in which specific serines and threonines were replaced by alanine. Internalization experiments using immunocytochemical methods, qualitative fluorescence microscopy and quantitative ELISA assays, pointed to the potentially involved phosphorylation sites. Phospho-specific antibodies were raised against the residues in question. Western blot analysis with these antibodies showed that the receptor is phosphorylated in a ligand-dependent manner, and to varying degrees, at threonine 370, serine 375, and threonine 379. The phosphorylation of these residues, as well as the degree of phosphorylation, appears to determine whether the receptor can be internalized.

    Cardiogenic Shock Management and Research: Past, Present, and Future Outlook

    Although great strides have been made in the pathophysiological understanding, diagnosis, and management of cardiogenic shock (CS), morbidity and mortality in patients presenting with the condition remain high. Acute myocardial infarction (MI) is the commonest cause of CS; consequently, most existing literature concerns MI-associated CS. However, there are many more phenotypes of patients with acute heart failure. Medical treatment and mechanical circulatory support are well-established therapeutic options, but evidence for many current treatment regimens is limited. The issue is further complicated by the fact that implementing adequately powered, randomized controlled trials is challenging for many reasons. In this review, the authors discuss the history, landmark trials, current topics of medical therapy and mechanical circulatory support regimens, and future perspectives of CS management.

    Gotchas from mining bug reports

    Over the years, it has become common practice in empirical software engineering to mine data from version archives and bug databases to learn where bugs have been fixed in the past, or to build prediction models to find error-prone code in the future. However, most of these approaches rely on strong assumptions that need to be verified to ensure that the resulting models are accurate and reflect the intended property; otherwise, decisions based on such flawed models can have serious consequences.
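
    The linking step this abstract alludes to is commonly implemented as a heuristic that scans commit messages for bug identifiers. The sketch below is a minimal, assumption-laden illustration of that practice (the keyword pattern and commit data are hypothetical, not taken from the paper); it also hints at why such assumptions need verification: fixes whose messages lack a matching keyword are silently missed.

    ```python
    import re

    # Illustrative pattern for "fix"/"close" keywords followed by a bug ID.
    # Real projects vary widely; this regex is an assumption, not a standard.
    BUG_ID = re.compile(r"(?:fixe?[sd]?|closes?)\s+#?(\d+)", re.IGNORECASE)

    def link_fixes(commits):
        """Map bug IDs to the commits whose messages claim to fix them."""
        links = {}
        for sha, message in commits:
            for bug_id in BUG_ID.findall(message):
                links.setdefault(bug_id, []).append(sha)
        return links

    commits = [
        ("a1b2c3", "Fix #4711: guard against null session"),
        ("d4e5f6", "Refactor parser; closes 4712"),
        ("0a1b2c", "Hardening work related to bug 4713"),  # missed: no keyword match
    ]
    print(link_fixes(commits))  # {'4711': ['a1b2c3'], '4712': ['d4e5f6']}
    ```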

    The Impact of Tangled Code Changes on Defect Prediction Models

    When interacting with source control management systems, developers often commit unrelated or loosely related code changes in a single transaction. When analyzing version histories, such tangled changes make all changes to all modules appear related, possibly compromising the resulting analyses through noise and bias. In an investigation of five open-source Java projects, we found between 7% and 20% of all bug fixes to consist of multiple tangled changes. Using a multi-predictor approach to untangle changes, we show that on average at least 16.6% of all source files are incorrectly associated with bug reports. These incorrect bug-file associations do not seem to significantly impact models classifying source files as having at least one bug or no bugs. However, our experiments show that untangling tangled code changes can result in more accurate regression bug prediction models compared to models trained and tested on tangled bug datasets; in our experiments, the statistically significant accuracy improvements lie between 5% and 200%. We recommend better change organization to limit the impact of tangled changes.
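
    To make the noise concrete, the following hypothetical example (the commit and file names are invented here, not drawn from the paper's data, and this is not the paper's multi-predictor approach) shows how a single tangled fix inflates file-level bug associations:

    ```python
    # A tangled bug-fix commit touching one genuine fix plus unrelated files.
    tangled_commit = {
        "bug": "BUG-42",
        "files": ["Parser.java", "Logger.java", "README.md"],
    }
    true_fix_files = {"Parser.java"}  # ground truth, known only after untangling

    associated = set(tangled_commit["files"])
    noise = associated - true_fix_files
    print(f"{len(noise)} of {len(associated)} associated files are noise")
    # -> 2 of 3: Logger.java and README.md would be wrongly labeled
    #    defect-prone in a file-level bug prediction dataset.
    ```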

    Towards the next generation of bug tracking systems

    Developers typically rely on the information submitted by end users to resolve bugs. We conducted a survey on information needs and commonly faced problems with bug reporting among several hundred developers and users of the Apache, Eclipse, and Mozilla projects. In this paper, we present the results of a card sort on the 175 comments sent back to us by the survey respondents. The card sort revealed several hurdles involved in reporting and resolving bugs, which we present as a collection of recommendations for the design of new bug tracking systems. Such systems could provide contextual assistance, reminders to add information, and, most importantly, assistance in collecting and reporting crucial information to developers.

    Switching to Git: The Good, the Bad, and the Ugly

    Since its introduction 10 years ago, Git has taken the world of version control systems (VCS) by storm. Its success is partly due to creating opportunities for new usage patterns that empower developers to work more efficiently. However, the resulting change in both user behavior and the way Git stores changes impacts data mining and data analytics procedures [6], [13]. While some of these unique characteristics can be managed by adjusting mining and analytical techniques, others can lead to severe data loss and the inability to audit code changes, e.g., knowing the full history of changes to code related to security and privacy functionality. Thus, switching to Git comes with challenges for established development process analytics. This paper is based on our experience in attempting to provide continuous process analysis for Microsoft product teams that are switching to Git as their primary VCS. We illustrate how Git's concepts and usage patterns create a need to change well-established data analytic processes. The goal of this paper is to raise awareness of how certain Git operations may damage or even destroy information about historical code changes that is necessary for continuous development process analytics. To that end, we provide a list of common Git usage patterns with a description of how these operations impact data mining applications. Finally, we provide examples of how one may counteract the effects of such destructive operations in the future. We further provide a new algorithm to detect integration paths that is specific to distributed version control systems such as Git, which allows us to reconstruct the information that is crucial to most development process analytics.
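
    The paper's integration-path algorithm itself is not reproduced here, but the sketch below illustrates the kind of history traversal such analyses build on: walking the first-parent chain of a Git-like commit DAG and flagging merge commits as integration points. The toy commit graph is a made-up example, not data from the paper.

    ```python
    # Toy commit DAG: each commit maps to its parent list (first parent first).
    parents = {
        "m3": ["m2", "f2"],  # merge commit: feature branch integrated here
        "m2": ["m1"],
        "m1": ["m0"],
        "m0": [],
        "f2": ["f1"],
        "f1": ["m1"],
    }

    def first_parent_history(head):
        """Yield (commit, is_merge) along the first-parent chain."""
        commit = head
        while commit is not None:
            yield commit, len(parents[commit]) > 1
            commit = parents[commit][0] if parents[commit] else None

    for sha, is_merge in first_parent_history("m3"):
        print(sha, "<-- integration point" if is_merge else "")
    ```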