88 research outputs found
Southern Columns Fall/Winter 1994
Features the article "Students Care": put together a sunny fall day, 535 student and faculty volunteers, and thirty or forty community projects.
What prospective young workers should know about labor relations according to the views of selected management executives.
Thesis (M.A.)--Boston University
Southern Columns Summer 1989
Features the article "Coping with the Challenge of CHANGE": a behind-the-scenes look at curriculum planning at Southern College.
Revisiting the Impact of Judicial Review on Agency Rulemakings: An Empirical Investigation
This study examines the entire set of air toxic emission regulations promulgated by the Environmental Protection Agency (EPA), with particular attention to those rules appealed to the court of appeals, and discovers significant disconnects between the popular understanding of judicial review and rulemaking reality. Of these air toxic rules, the courts were summoned to review only a small fraction (8%), despite evidence that many air toxic rules may have problems, at least from the public interest perspective. Moreover, although virtually all of the litigation brought by public interest groups against the EPA's air toxic rules was successful, the resulting victories have not yet had much impact in practice. For most of its vacated regulations, the EPA has either ignored or limited the courts' opinions and has not re-promulgated revised rules. Thus, while the tenor of the opinions seems to reaffirm the courts' role as guardian of the public interest, the actual impact of these opinions on agency practice may be less influential than one might expect. A concluding section takes the analysis one step further and explores the possibility that the net effect of judicial review may actually be more perverse. The ability of the dominant parties to threaten the agency with expensive and time-consuming litigation could provide these groups with legal leverage that, in the aggregate, serves to further undermine the agency's ability to act on behalf of the public interest.
The Kay Bailey Hutchison Center for Energy, Law, and Business
A novel interaction between dengue virus nonstructural protein 1 and the NS4A-2K-4B precursor is required for viral RNA replication but not for formation of the membranous replication organelle
Dengue virus (DENV) has emerged as a major human pathogen. Despite the serious socio-economic impact of DENV-associated diseases, no antiviral therapy is available. DENV replicates in the cytoplasm of infected cells and induces a membranous replication organelle, formed by invaginations of the endoplasmic reticulum membrane and designated vesicle packets (VPs). Nonstructural protein 1 (NS1) of DENV is a multifunctional protein. It is secreted from cells to counteract antiviral immune responses, but also critically contributes to the severe clinical manifestations of dengue. In addition, NS1 is indispensable for viral RNA replication, but the underlying molecular mechanism remains elusive. In this study, we employed a combination of genetic, biochemical and imaging approaches to dissect the determinants in NS1 contributing to its various functions in the viral replication cycle. Several important observations were made. First, we identified a cluster of amino acid residues in the exposed region of the β-ladder domain of NS1 that are essential for NS1 secretion. Second, we revealed a novel interaction of NS1 with the NS4A-2K-4B cleavage intermediate, but not with mature NS4A or NS4B. This interaction is required for RNA replication, with two residues within the connector region of the NS1 "Wing" domain being crucial for binding of the NS4A-2K-4B precursor. By using a polyprotein expression system allowing the formation of VPs in the absence of viral RNA replication, we show that the NS1-NS4A-2K-4B interaction is not required for VP formation, arguing that the association between these two proteins plays a more direct role in the RNA amplification process.
Third, through analysis of polyproteins containing deletions in NS1, and employing a trans-complementation assay, we show that both cis- and trans-acting elements within NS1 contribute to VP formation, with the capability of NS1 mutants to form VPs correlating with their capability to support RNA replication. In conclusion, these results reveal a direct role of NS1 in VP formation that is independent of RNA replication, and argue for a critical function of a previously unrecognized NS4A-2K-NS4B precursor that specifically interacts with NS1 and promotes viral RNA replication.
The Computational Attitude in Music Theory
Music studies' turn to computation during the twentieth century has engendered particular habits of thought about music, habits that remain in operation long after the music scholar has stepped away from the computer. The computational attitude is a way of thinking about music that is learned at the computer but can be applied away from it. It may be manifest in actual computer use, or in invocations of computationalism, a theory of mind whose influence on twentieth-century music theory is palpable. It may also be manifest in more informal discussions about music, which make liberal use of computational metaphors. In Chapter 1, I describe this attitude, the stakes for considering the computer as one of its instruments, and the kinds of historical sources and methodologies we might draw on to chart its ascendance. The remainder of this dissertation considers distinct and varied cases from the mid-twentieth century in which computers or computationalist musical ideas were used to pursue new musical objects, to quantify and classify musical scores as data, and to instantiate a generally music-structuralist mode of analysis.
I present an account of the decades-long effort to prepare an exhaustive and accurate catalog of the all-interval twelve-tone series (Chapter 2). This problem was first posed in the 1920s but was not solved until 1959, when the composer Hanns Jelinek collaborated with the computer engineer Heinz Zemanek to jointly develop and run a computer program. Recognizing the transformation wrought on modern statistics and communications technology by information theory, I revisit Abraham Moles's book Information Theory and Esthetic Perception (orig. 1958) and use its vocabulary to contextualize contemporary information-theoretic work on music, work that variously evokes the computational mind, by John R. Pierce and Mary Shannon, Wilhelm Fucks, and Henry Quastler (Chapter 3). I conclude with a detailed look into a score-segmentation algorithm of the influential American music theorist Allen Forte (Chapter 4). Forte was a skilled programmer who spent several years at MIT in the 1960s, with cutting-edge computers and the company of first-rank figures in the nascent fields of computer science and artificial intelligence. Each of the researchers whose work is treated in these case studies, at some stage in their relationship with music, adopted what I call the computational attitude to music, to varying degrees and for diverse ends. Among the many questions this dissertation seeks to answer: what was gained by adopting such an attitude? What was lost? Having understood these past explorations of the computational attitude to music, we are better positioned to ask the same questions of ourselves today.
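The catalog problem that Jelinek and Zemanek attacked by machine can be stated compactly: an all-interval twelve-tone series is an ordering of the twelve pitch classes whose eleven successive intervals, taken mod 12, are all distinct. As an illustration only, and not a reconstruction of their 1959 program, a minimal backtracking enumeration in Python (fixing the first pitch at 0 to discount the trivial transpositions):

```python
# Enumerate all-interval twelve-tone series by depth-first backtracking.
# A series is a permutation of the pitch classes 0..11 whose eleven
# successive intervals (mod 12) are pairwise distinct.
def all_interval_series():
    results = []

    def extend(row, used_pitches, used_intervals):
        if len(row) == 12:
            results.append(tuple(row))
            return
        for p in range(12):
            if p in used_pitches:
                continue
            iv = (p - row[-1]) % 12  # interval from the previous pitch
            if iv in used_intervals:
                continue
            row.append(p)
            used_pitches.add(p)
            used_intervals.add(iv)
            extend(row, used_pitches, used_intervals)
            # undo the choice and try the next candidate pitch
            row.pop()
            used_pitches.remove(p)
            used_intervals.remove(iv)

    # Fix the first pitch at 0: every series is a transposition of one of these.
    extend([0], {0}, set())
    return results
```

The pruning on both the pitch set and the interval set keeps the search far below the 11! orderings a brute-force scan would visit, which is the kind of economy the historical catalog effort also depended on.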
On pulsar radio emission
This work intends to contribute to the understanding of the radio emission of pulsars. Pulsars are neutron stars with a radius of about 10^6 cm and a mass of about one to three solar masses that rotate with periods between milliseconds and seconds. They exhibit tremendous magnetic fields of 10^8 to 10^13 Gauss. These fields facilitate the conversion of rotational energy mainly into dipole radiation, X-ray emission and the pulsar wind. Less than a thousandth of the total energy loss is emitted as radio emission. This contribution, however, is generated by a collective plasma radiation process that acts coherently on time scales of nanoseconds and below. Since the topic has been an active field of research for nearly half a century, we introduce the resulting theoretical concepts and ideas for an emission process and for the appearance of the so-called magnetosphere, the plasma-filled volume around a pulsar, in Chapter 1. We show that many basic questions have been answered satisfactorily. Questions concerning the emission process, however, remain uncertain. In particular, the exact energy source of the radio emission remains unclear. The early works of Goldreich and Julian [1969] and Ruderman and Sutherland [1975] predict high electric fields capable of driving a strong electric current. To supply the energy that powers the radio emission, rather mildly relativistic particle energies and a moderate current are favourable. How the system converts current into flow is unclear. In fact, the earlier theories are contradicted by recent simulations, which also do not predict a relativistic flow near the pulsar.
We examine the observed radiation and its form, especially in light of the models illustrated in Chapter 2. We note that the radio emission is generated on extremely short time scales, comparable to the inverse of the plasma frequency. We elaborate why this places high demands on the theoretical models, leaving in fact only one viable candidate process. We conclude that profound questions about the energy flow and energy source remain unanswered by current theory. Furthermore, the compression of the available energy in space and time to a few centimetres and nanoseconds remains unclear, especially given that only a small fraction of the theoretically available energy is converted.
Since the fluctuations relevant for the compression of the energy take place on intermediate scales of nanoseconds to micro- and milliseconds, it should be possible to detect them observationally. To facilitate this, we analyse the statistics of the receiver equation of radio radiation in Chapter 3, which is also relevant to other topics of pulsar research.
The results presented in Chapter 4 show that the Bayesian method we developed surpasses conventional methods for extracting parameters from observational data in both precision and accuracy. The method, for example, weights rotation-phase measurements differently than conventional techniques and assigns a more accurate error estimate to each single measurement. This is of great relevance to the search for gravitational waves with so-called pulsar timing arrays, since the validity of the total measurement depends substantially on understanding the accuracy assigned to the single observations.
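The abstract does not spell out the weighting scheme, but the qualitative behaviour it describes, giving precise measurements more influence and propagating a smaller combined error, is captured by the textbook inverse-variance rule: the combined estimate is sum(x_i/sigma_i^2) / sum(1/sigma_i^2), with variance 1/sum(1/sigma_i^2). A hypothetical minimal sketch, not the thesis's actual algorithm:

```python
import math

def inverse_variance_mean(values, sigmas):
    """Combine measurements, weighting each by 1/sigma^2.

    Returns the weighted mean and its formal uncertainty.
    """
    weights = [1.0 / s ** 2 for s in sigmas]
    total = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / total
    return mean, math.sqrt(1.0 / total)
```

A precise measurement (sigma = 0.1) then dominates a coarse one (sigma = 1.0) instead of entering with equal weight, which is the behaviour attributed above to the Bayesian analysis.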
However, the work on single-observation data with Bayesian techniques also exemplifies the numerical limits of this method. It is desirable to enable algorithms to include single-observation data in the analysis. We therefore developed a runtime library, presented in Chapter 5, that writes currently unneeded data out to hard disk and is thereby capable of managing huge data sets (substantial fractions of the hard disk space, not of the main memory). This library has been written in a generic form so that it can also be used in other data-intensive areas of research.
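The library itself is not reproduced here; as a stand-in illustration, the same out-of-core idea, holding only a bounded window of a large on-disk data set in memory at any time, can be sketched in a few lines of Python. The flat float64 file layout and the chunk size below are illustrative assumptions, not the thesis library's API:

```python
import array
import os
import tempfile

CHUNK_ITEMS = 4096  # float64 values held in memory at once

def chunked_mean(path):
    """Mean of a file of raw float64 values, reading one chunk at a time.

    Memory use is bounded by CHUNK_ITEMS regardless of the file size.
    """
    total = 0.0
    count = 0
    with open(path, "rb") as f:
        while True:
            buf = f.read(CHUNK_ITEMS * 8)  # 8 bytes per float64
            if not buf:
                break
            chunk = array.array("d")
            chunk.frombytes(buf)
            total += sum(chunk)
            count += len(chunk)
    return total / count

# Build a small disk-backed demo file of 10,000 values, all 2.5.
tmp = os.path.join(tempfile.mkdtemp(), "obs.dat")
with open(tmp, "wb") as f:
    f.write(array.array("d", [2.5] * 10_000).tobytes())
```

The same streaming pattern generalizes to any reduction that can be accumulated chunk by chunk, which is what makes such a library usable for data sets that fit on disk but not in main memory.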
While we thereby lay the foundations for evaluating fluctuation models against observational data, we approach the problem from theoretical grounds in Chapter 6. We propose that the energetic coupling of the radio emission could be of magnetic origin, as this is also a relevant mechanism in solar flare physics. We argue in a general way that the rotation of the pulsar pumps energy into the magnetic field for topological reasons. This energy can be released again by current decay. We show that the annihilation of electrons and positrons alone may suffice to generate radio emission on non-negligible energy scales. This mechanism does not depend on relativistic flow and thus does not suffer from the problem of requiring high kinetic particle energies. We conclude that the existing gaps in the theory of the radio emission process could be closed in the future if we analyse observational data with greater statistical accuracy and, especially, if we put more effort into understanding the problem of energy transport.
This thesis serves as an example that the scientific investigation of a very theoretical question, such as the origin of radio emission, can lead to results that may be used directly in other areas of research.
Privacy and Power: Computer Databases and Metaphors for Information Privacy
Journalists, politicians, jurists, and legal academics often describe the privacy problem created by the collection and use of personal information through computer databases and the Internet with the metaphor of Big Brother - the totalitarian government portrayed in George Orwell's Nineteen Eighty-Four. Professor Solove argues that this is the wrong metaphor. The Big Brother metaphor as well as much of the law that protects privacy emerges from a longstanding paradigm for conceptualizing privacy problems. Under this paradigm, privacy is invaded by uncovering one's hidden world, by surveillance, and by the disclosure of concealed information. The harm caused by such invasions consists of inhibition, self-censorship, embarrassment, and damage to one's reputation. Privacy law has developed with this paradigm in mind, and consequently, it has failed to adapt to grapple effectively with the database problem. Professor Solove argues that the Big Brother metaphor merely reinforces this paradigm and that the problem is better captured by Franz Kafka's The Trial. Understood with the Kafka metaphor, the problem is the powerlessness, vulnerability, and dehumanization created by the assembly of dossiers of personal information where individuals lack any meaningful form of participation in the collection and use of their information. Professor Solove illustrates that conceptualizing the problem with the Kafka metaphor has profound implications for the law of information privacy as well as which legal approaches are taken to solve the problem.
IdIOT: second-order cybernetics in the 'smart' home
This thesis brings second-order cybernetics into design research, in the context of the Internet of Things (IoT) and "smart" homes. My main proposition is to question and critically analyse the epistemology embedded in IoT technology in relation to human-centred activities.
I examine how human lives are represented within the quantified approaches inherent in current notions of "smart" technology, derived from Artificial Intelligence (AI), and characterise this as the Algorithmic Paradigm. I explore questions of how complex, lived, human experience is oversimplified in the IoT. By adopting an epistemology derived from second-order cybernetics, acknowledging the importance of the observer, combined with my "IdIoT Proposition", a way of "slowing down" research on a fast-paced topic, I explore designing reflectively.
The IdIoT is a methodological framework characterised by the process of slowing down and asking "What are we busy doing?" in order to become aware of algorithmic oversimplifications. This methodological approach provides self-awareness and self-reflection on "the way of knowing the world" to the researcher and to the participants, in the context of the Algorithmic Paradigm applied in the IoT.
Through a series of practice-based projects, I use the figure of the "SMART" fridge to examine the implications of the Algorithmic Paradigm in the "smart" home. The consideration that "smartness" is relational is investigated in Becoming Your "SMART" Fridge, in which I position myself as the algorithm behind a "smart" fridge, using quantitative and qualitative data to make "sense" and "nonsense" outcomes, and exploring householders' interpretations. In the "SMART" Fridge Session, I developed scripted dialogues characterised by active, reflective users, and assigned roles in which the "smartness" of the algorithms is explored via professional performances and fictitious roles taken on by members of the public. The findings reveal the value of second-order cybernetics, acknowledging an unpredictable observer and embracing "smart" as relational in interaction with IoT technology. They suggest that a shift in perspective is required to create more meaningful interactions with devices in the "smart" home, questioning the current technological path, challenging the dominant epistemology and proposing alternatives. My methodological approach demonstrates how design research and second-order considerations can work together, asking novel questions to inform disciplines with an interest in the IoT, both from a design perspective and in terms of broader implications for society. The work has value for design, HCI, Critical Algorithm Studies, and for technical developers involved in the creation of IoT systems.
- âŠ