
    Generalists, Specialists, and the Best Experts: Where do Systems Thinkers Fit In?

    GENERALIST / SPECIALIST: A generalist is someone who has studied a little bit of everything, and in the end knows nothing well in particular. By contrast, a specialist is someone who has studied a single subject, and as a consequence does not even know his own subject, because every item of knowledge is related to other components of the whole system. The good scholar or scientist--like the good chef, manager, clinician, or orchestra conductor--is an expert in one field or craft, and knowledgeable in many. Like a mouse, he can explore the details of a terrain; and, like an owl, he can also soar to get a good view of the landscape--mice and all. He is capable of learning new subjects as needed, as well as placing every particular subject in a wide context and a long-term perspective. He is thus open to multiple inputs and capable of multiple outputs. In sum, the best expert is the specialist turned generalist. This holds in all fields of thought and action, particularly in philosophy. -- Mario Bunge, Philosophical Dictionary
    Bunge's definitions of the generalist, the specialist, and the best expert are thought-provoking (and may provoke other responses as well). Are systems practitioners, analysts, and theorists generalists, specialists, or the best experts? Because systems science concerns itself with general theories (e.g., graph theory, information theory, control theory, game theory) that can be applied to a wide range of problems, it appears to be a generalist field; but systems science has its own contributors, jargon, and history and is not widely studied (at least in the U.S.), and so it appears to be a specialist field as well. And yet many of the early contributors to the systems project, such as von Bertalanffy, Boulding, Wiener, and Ashby, did indeed fit Bunge's definition of the best expert, as all were specialists turned generalists. Since systems science is mostly taught at the graduate level, perhaps Bunge's position is an implicit assumption in the systems field.
    You may not agree with all of Bunge's assertions (or the conjecture above), but it is clear that the views of both the mice and the owls are needed for most (if not all) problems. Is the systems view that of the owls, or that of mice in owl clothing? The answer may be fuzzy and a good starting point for our discussion. Perhaps a more interesting question is this: How can we use systems thinking to improve our problem-solving abilities? A quick look at the jobs graduates of the PSU Systems Science Graduate Program have gone on to (https://www.pdx.edu/sysc/resources-jobs) makes it clear that systems principles are applicable in all kinds of fields. It is also clear that systems science can be useful for framing and solving global problems related to economics, energy, climate, and politics. So whether generalist or specialist--or whether one can meet the criteria Bunge requires of a best expert--what roles can a systems thinker fill?
    Here are a few questions to get the discussion going: Are you interested in being a general problem solver, or do you have a specific (i.e., specialized) problem you'd like to solve using systems thinking? Can you describe an instance when your knowledge of systems science gave you an insight you would not otherwise have had? What roles can systems theorists, analysts, and practitioners play in national and global debates? Do (or will) the public, politicians, and other experts accept systems thinkers as experts? Can (or do) systems practitioners and theorists act as liaisons between specialists, or between specialists and the public? Can you think of a field or a problem that is not being considered from a systems perspective but should be? (Extra credit) Can you think of any field in which systems science would not be useful?
    This discussion can also be an opportunity for new students to ask questions about the systems field and discuss what they hope to gain from systems science knowledge, and for other students, graduates, and faculty to share their insights and experiences about the systems field and what they have gained from their systems science knowledge.

    The Limits of Control, or How I Learned to Stop Worrying and Love Regulation (Discussion)

    When we want to solve a problem, we talk about how we might manage or regulate it—how we might control it. Control is a central concept in systems science, along with system, environment, utility, and information. With his information-theoretic Law of Requisite Variety, Ashby proved that to control a system we need as much variability in our regulator as we have in our system (“only variety can destroy variety”), something like a method of control for everything we want to control. For engineered systems, this appears to be the case (at least sometimes). But what about for social systems? Does a group of humans behave with the same level of variability as a machine? Not usually. And when control is applied to a human system, in the form of a new law or regulation, individuals within it may deliberately change their behavior. A machine's behavior may also change when a control is applied to it—think of how emissions equipment affects the performance of an automobile (less pollution, but less power too)—but the machine doesn't (typically) adapt. People do. Does this pose a difficulty if we want to employ Ashby's law to solve a control problem in a human system? Or could our ability to adapt provide an advantage?
    Ashby acknowledged that regulation is more difficult for very large systems, and many social systems are very large. With limited resources we may not be able to control for all the variety and possible disturbances in a very large system, and therefore we must make choices. We can leave a system unregulated; we can reduce the amount of the system we want to control; we can increase control over certain forms of variety and disturbances; or we can find constraint or structure in the system's variety and disturbances—in other words, create better, more accurate models of our system and its environment.
    Creating better models has always been a driving force in the development of systems science. Conant and Ashby proved that “every good regulator of a system must be a model of that system” in a paper of the same name. Intuitively this makes sense: if we have a better understanding of the system—a better model—we should be better able to control it. But how well are we able to model human systems? For example, how well do we model intersections? Think about your experience in a car or on a bike at a downtown intersection during rush hour. Now think about that same intersection from the perspective of a pedestrian late in the evening. Did the traffic signals control the intersection in an efficient manner under both conditions? What if we consider all the downtown intersections, or the entire Portland-area traffic system? What about even larger systems? How well can we model the U.S. health care system? What is the chance that in a few thousand pages of new controls a few of them will cause some unforeseen consequence? How well do we understand the economy? Enough to create a law limiting CEO compensation? Might just one seemingly straightforward control lead to something unforeseen?
    So what level of understanding must we have of a system, i.e., how well must we be able to model it, before we regulate it? We must still react to and manage, as best we can, a man-made or natural disaster, even when we may know very little about it at the start. Our ability to adapt is critical in these situations. But at the same time, with our ability to adapt we can also (with the proper resources) circumvent the intent of regulations or use regulations to protect or increase our influence: consider “loopholes” in the tax code, or legislation with which large corporations can easily comply but which causes great difficulties for smaller businesses.
    No matter what problem we have, it is important to understand what limits our ability to control and how controls may cause new and different problems; this will be the general focus of this seminar. A brief overview of Ashby's Law of Requisite Variety, along with a conceptual example, will be presented.
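
    As a quick illustration of the bound behind “only variety can destroy variety” (added here as an aside, not part of the seminar abstract above; the toy outcome table and variety sizes are arbitrary choices), the short Python sketch below enumerates every possible regulator policy for a small system and confirms that the surviving outcome variety can never be driven below the disturbance variety divided by the regulator variety.

    import itertools
    import math

    D = 8   # disturbance variety: number of distinct disturbances
    R = 2   # regulator variety: number of responses the regulator can make

    # Toy outcome table (an arbitrary choice for illustration). For any fixed
    # regulator move r, distinct disturbances give distinct outcomes, which is
    # the standing assumption behind Ashby's bound.
    def outcome(d, r):
        return (d + r) % D

    smallest = None
    for policy in itertools.product(range(R), repeat=D):   # every regulator policy
        outcomes = {outcome(d, policy[d]) for d in range(D)}
        if smallest is None or len(outcomes) < len(smallest):
            smallest = outcomes

    print("disturbance variety:", D)
    print("regulator variety:  ", R)
    print("smallest achievable outcome variety:", len(smallest))
    print("Ashby's lower bound, ceil(D / R):    ", math.ceil(D / R))

    With D = 8 and R = 2 the smallest achievable outcome variety is 4, matching the bound; only more regulator variety, or a better-structured outcome table (that is, a better model), can reduce it further.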

    Obstruction sets for classes of cubic graphs

    This dissertation establishes two theorems that characterize the sets of minimal obstructions for two classes of graphs. A minimal obstruction for a class of graphs is a graph that is not in the class but every graph that it properly contains, under some containment relation, is in the class. In Chapter 2, we provide a characterization of the class of cubic outer-planar graphs in terms of its minimal obstructions, which are also called cubic obstructions in this setting. To do this, we first show that all the obstructions containing loops can be obtained from the complete set of loopless obstructions via an easily specified operation. We subsequently prove that there are only two loopless obstructions and then generate the complete list of five obstructions. In Chapters 3 and 4, we provide a characterization for the more general class of outer-cylindrical graphs—those graphs that can be embedded in the plane so that there are two faces whose boundaries together contain all the vertices of the graph. In particular, in Chapter 3, we build upon the ideas of Chapter 2 by considering the operation used to generate all obstructions containing loops from those that are loopless and extend this operation to the class of outer-cylindrical graphs. We also provide a list of 26 loopless graphs and prove that each of these is a cubic obstruction for outer-cylindrical graphs. In Chapter 4, we prove that these 26 graphs are the only loopless cubic obstructions for outer-cylindrical graphs. Combining the results of Chapters 3 and 4, we then generate the complete list of 124 obstructions, which is provided in an appendix.
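
    As an aside for readers unfamiliar with the smaller of the two classes (this sketch is not taken from the dissertation), outer-planarity itself is easy to test computationally, using the classical fact that a graph is outer-planar exactly when adding one new vertex adjacent to every existing vertex leaves the graph planar. A minimal Python sketch, assuming the networkx library; the test graphs and the helper vertex name are arbitrary choices, and the dissertation's obstruction machinery is of course not reproduced by this check.

    # A minimal sketch, not taken from the dissertation: test outer-planarity by
    # reusing planarity testing on the graph plus one universal ("apex") vertex.
    import networkx as nx

    def is_outerplanar(G):
        H = G.copy()
        apex = "_apex"  # hypothetical helper vertex added only for the test
        H.add_edges_from((apex, v) for v in G.nodes)
        planar, _ = nx.check_planarity(H)
        return planar

    K4 = nx.complete_graph(4)   # cubic and planar, but not outer-planar
    C6 = nx.cycle_graph(6)      # outer-planar (used only as a contrast; not cubic)
    print(is_outerplanar(K4))   # False
    print(is_outerplanar(C6))   # True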

    Developing Intentional Hospitality In First Baptist Church Lenoir, North Carolina

    The project’s purpose is to reveal how hospitable First Baptist Church Lenoir, North Carolina, currently is, and how that can be improved by instilling a commitment to hospitality in both the Sunday morning volunteers and the church congregation. Mystery guests attended worship, and their initial experiences were assessed. The Hospitality Team underwent training, and an intensive Bible study was conducted for the Wednesday Night congregation. The mystery guests worshipped again, and their responses were assessed to see whether their worship experiences had improved. In conclusion, the hospitality training was successful, and the worship experiences of the mystery guests improved significantly.

    Formalizing the Panarchy Adaptive Cycle with the Cusp Catastrophe

    The panarchy adaptive cycle, a general model for change in natural and human systems, can be formalized by the cusp catastrophe of René Thom's topological theory. Both the adaptive cycle and the cusp catastrophe have been used to model ecological, economic, and social systems in which slow and small continuous changes in two control variables produce fast and large discontinuous changes in system behavior. The panarchy adaptive cycle, the more recent of the two models, has so far been used only for qualitative descriptions of the typical dynamics of such systems. The cusp catastrophe, while also often employed qualitatively, is a mathematical model capable of being used rigorously. If the control variables from the adaptive cycle are taken as parameters in the equation for the cusp catastrophe, a cycle very similar to the adaptive cycle can be constructed. Formalizing the panarchy adaptive cycle with the cusp catastrophe may provide direction for more rigorous applications of the adaptive cycle, thereby augmenting its usefulness in guiding sustainability efforts.
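
    As a rough numerical illustration of the idea (not the paper's actual parameterization; the parameter loop, the value of a, and the jump threshold are arbitrary choices), one can take the cusp equilibrium condition x**3 + a*x + b = 0, hold a in the bistable regime, drive b slowly up and back down, and watch the tracked state jump discontinuously at the two folds, the kind of fast, large change that the adaptive cycle describes qualitatively.

    # A rough illustration, not the paper's parameterization: track a stable
    # equilibrium of the cusp catastrophe, dV/dx = x**3 + a*x + b = 0, while the
    # control parameter b is driven slowly up and back down with a held in the
    # bistable regime. The tracked state jumps discontinuously at the two folds.
    import numpy as np

    def equilibria(a, b):
        """Real roots of x**3 + a*x + b = 0, sorted."""
        roots = np.roots([1.0, 0.0, a, b])
        return np.sort(roots[np.abs(roots.imag) < 1e-9].real)

    a = -1.5                                                 # arbitrary bistable value
    b_path = np.concatenate([np.linspace(-1.0, 1.0, 200),    # slow drive up ...
                             np.linspace(1.0, -1.0, 200)])   # ... and back down

    x = equilibria(a, b_path[0])[0]                          # start on the only sheet
    for b in b_path:
        eq = equilibria(a, b)
        new_x = eq[np.argmin(np.abs(eq - x))]                # follow the nearest branch
        if abs(new_x - x) > 0.5:                             # fold crossed: sudden jump
            print(f"jump at b = {b:+.2f}: x {x:+.2f} -> {new_x:+.2f}")
        x = new_x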

    Modeling reactivity to biological macromolecules with a deep multitask network

    Most small-molecule drug candidates fail before entering the market, frequently because of unexpected toxicity. Often, toxicity is detected only late in drug development, because many types of toxicities, especially idiosyncratic adverse drug reactions (IADRs), are particularly hard to predict and detect. Moreover, drug-induced liver injury (DILI) is the most frequent reason drugs are withdrawn from the market and causes 50% of acute liver failure cases in the United States. A common mechanism often underlies many types of drug toxicities, including both DILI and IADRs. Drugs are bioactivated by drug-metabolizing enzymes into reactive metabolites, which then conjugate to sites in proteins or DNA to form adducts. DNA adducts are often mutagenic and may alter the reading and copying of genes and their regulatory elements, causing gene dysregulation and even triggering cancer. Similarly, protein adducts can disrupt the proteins' normal biological functions and induce harmful immune responses. Unfortunately, reactive metabolites are not reliably detected by experiments, and it is also expensive to test drug candidates for the potential to form DNA or protein adducts during the early stages of drug development. In contrast, computational methods have the potential to quickly screen for covalent binding potential, thereby flagging problematic molecules and reducing the total number of necessary experiments. Here, we train a deep convolutional neural network (the XenoSite reactivity model) using literature data to accurately predict both the sites and the probability of reactivity of molecules with glutathione, cyanide, protein, and DNA. On the site level, cross-validated predictions had area under the curve (AUC) performances of 89.8% for DNA and 94.4% for protein. Furthermore, the model separated molecules electrophilically reactive with DNA and protein from nonreactive molecules with cross-validated AUC performances of 78.7% and 79.8%, respectively. At both the site and molecule levels, the model significantly outperformed reactivity indices derived from quantum simulations reported in the literature. Moreover, we developed and applied a selectivity score to assess preferential reactions with the macromolecules as opposed to the common screening traps. For the entire data set of 2803 molecules, this approach yielded totals of 257 (9.2%) and 227 (8.1%) molecules predicted to be reactive only with DNA and protein, respectively, and hence molecules that would be missed by standard reactivity screening experiments. Site-of-reactivity data are an underutilized resource that can be used not only to predict whether molecules are reactive but also to show where they might be modified to reduce toxicity while retaining efficacy. The XenoSite reactivity model is available at http://swami.wustl.edu/xenosite/p/reactivity
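
    For readers who want a concrete picture of a multitask setup, the Python sketch below shows a generic shared-trunk network with one output head per reactivity target (glutathione, cyanide, protein, DNA), operating on precomputed per-atom descriptor vectors. It only illustrates the multitask idea; it is not the XenoSite architecture, descriptors, or training procedure, and the layer sizes and class name are arbitrary.

    # A generic multitask sketch in PyTorch, assuming precomputed per-atom
    # descriptor vectors. Illustration of the shared-trunk idea only; not the
    # XenoSite architecture, descriptors, or training procedure.
    import torch
    import torch.nn as nn

    TASKS = ["glutathione", "cyanide", "protein", "DNA"]

    class MultitaskReactivityNet(nn.Module):
        def __init__(self, n_descriptors=64, hidden=128):   # sizes are arbitrary
            super().__init__()
            # Shared trunk: features common to all four reactivity tasks.
            self.trunk = nn.Sequential(
                nn.Linear(n_descriptors, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            # One head per task: probability that an atom is a site of reactivity.
            self.heads = nn.ModuleDict({t: nn.Linear(hidden, 1) for t in TASKS})

        def forward(self, atom_descriptors):                 # (n_atoms, n_descriptors)
            shared = self.trunk(atom_descriptors)
            return {t: torch.sigmoid(head(shared)).squeeze(-1)
                    for t, head in self.heads.items()}       # per-atom probabilities

    # Toy usage: a "molecule" of 12 atoms with random descriptors.
    model = MultitaskReactivityNet()
    scores = model(torch.randn(12, 64))
    print({task: tuple(s.shape) for task, s in scores.items()})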