Norm Flexibility and Private Initiative
We model an enforcement problem where firms can take a known and lawful action or seek a profitable innovation that may enhance or reduce welfare. The legislator sets fines calibrated to the harmfulness of unlawful actions. The range of fines defines norm flexibility. Expected sanctions guide firms’ choices among unlawful actions (marginal deterrence) and/or stunt their initiative altogether (average deterrence). With loyal enforcers, maximum norm flexibility is optimal, so as to exploit both marginal and average deterrence. With corrupt enforcers, instead, the legislator should prefer more rigid norms that prevent bribery and misreporting, at the cost of reducing marginal deterrence and stunting private initiative. The greater the potential corruption, the more rigid the optimal norms.

Keywords: norm design, initiative, enforcement, corruption
Incentives to Innovate and Social Harm: Laissez-Faire, Authorization or Penalties?
We analyze optimal policy design when firms' research activity may lead to socially harmful innovations. Public intervention, affecting the expected profitability of innovation, may both thwart the incentives to undertake research (average deterrence) and guide the use to which innovation is put (marginal deterrence). We show that public intervention should become increasingly stringent as the probability of social harm increases, switching first from laissez-faire to a penalty regime, then to a lenient authorization regime, and finally to a strict one. In contrast, absent innovative activity, regulation should rely only on authorizations, and laissez-faire is never optimal. Therefore, in innovative industries regulation should be softer.

Keywords: innovation, liability for harm, safety regulation, authorization
Wrenches in the works: drug discovery targeting the SCF ubiquitin ligase and APC/C complexes
Recently, the ubiquitin proteasome system (UPS) has matured as a drug discovery arena, largely on the strength of the proven clinical activity of the proteasome inhibitor Velcade in multiple myeloma. Ubiquitin ligases tag cellular proteins, such as oncogenes and tumor suppressors, with ubiquitin. Once tagged, these proteins are degraded by the proteasome. The specificity of this degradation system for particular substrates lies with the E3 component of the ubiquitin ligase system (ubiquitin is transferred from an E1 enzyme to an E2 enzyme and finally, thanks to an E3 enzyme, directly to a specific substrate). The clinical effectiveness of Velcade (as it theoretically should inhibit the output of all ubiquitin ligases active in the cell simultaneously) suggests that modulating specific ubiquitin ligases could result in an even better therapeutic ratio. At present, the only ubiquitin ligase leads that have been reported inhibit the degradation of p53 by Mdm2, but these have not yet been developed into clinical therapeutics. In this review, we discuss the biological rationale, assays, genomics, proteomics and three-dimensional structures pertaining to key targets within the UPS (SCF^Skp2 and APC/C) in order to assess their drug development potential.
Cell Division, a new open access online forum for and from the cell cycle community
Cell Division is a new, open access, peer-reviewed online journal that publishes cutting-edge articles, commentaries and reviews on all exciting aspects of cell cycle control in eukaryotes. A major goal of this new journal is to publish timely and significant studies on the aberrations of the cell cycle network that occur in cancer and other diseases.
Waterfall Traffic Classification: A Quick Approach to Optimizing Cascade Classifiers
Heterogeneous wireless communication networks, like 4G LTE, transport diverse kinds of IP traffic: voice, video, Internet data, and more. In order to effectively manage such networks, administrators need adequate tools, of which traffic classification is the basis for visualizing, shaping, and filtering the broad streams of IP packets observed nowadays. In this paper, we describe a modular, cascading traffic classification system—the Waterfall architecture—and we extensively describe a novel technique for its optimization—in terms of CPU time, number of errors, and percentage of unrecognized flows. We show how to significantly accelerate the process of exhaustive search for the best performing cascade. We employ five datasets of real Internet transmissions and seven traffic analysis methods to demonstrate that our proposal yields valid results and outperforms a greedy optimizer
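The cascade idea described above can be illustrated with a minimal sketch: classification modules are tried in order, each either labels a flow or passes it to the next module, and a cascade ordering is scored on CPU cost, errors, and unrecognized flows. The module names, costs, and flows below are hypothetical illustrations, not the authors' datasets or methods; the paper's contribution is accelerating exactly this kind of exhaustive search over orderings.

```python
from itertools import permutations

# Each module: (name, cpu_cost, classify_fn). classify_fn returns a label or None
# (None means "pass the flow down the cascade").
modules = [
    ("port_based", 1.0, lambda flow: "voice" if flow["dport"] == 5060 else None),
    ("payload_sig", 5.0, lambda flow: "video" if b"\x47" in flow["payload"] else None),
    ("stats_ml", 20.0, lambda flow: "data"),  # expensive fallback: always answers
]

# Toy labelled flows standing in for a real traffic dataset.
flows = [
    {"dport": 5060, "payload": b"", "label": "voice"},
    {"dport": 80, "payload": b"\x47\x00", "label": "video"},
    {"dport": 443, "payload": b"", "label": "data"},
]

def evaluate(cascade, flows):
    """Return (total_cpu_cost, errors, unknown) for one module ordering."""
    cost = errors = unknown = 0
    for flow in flows:
        for name, c, clf in cascade:
            cost += c                      # pay the module's cost whenever it runs
            label = clf(flow)
            if label is not None:          # module answered: stop the cascade
                errors += label != flow["label"]
                break
        else:
            unknown += 1                   # no module recognized the flow
    return cost, errors, unknown

# Exhaustive search over orderings (feasible for a handful of modules).
best = min(permutations(modules), key=lambda c: evaluate(c, flows))
print([name for name, _, _ in best], evaluate(best, flows))
```

Here the cheap port-based module is placed first, so most flows never reach the expensive fallback; a greedy optimizer could instead lock in a locally attractive but globally suboptimal ordering.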
The total capacity of customers in the MMPP/GI/∞ queueing system
In this paper, an infinite-server queueing system with random customer capacities is considered. The total capacity of customers in the system is analysed by means of the asymptotic analysis method under high-rate Markov Modulated Poisson Process (MMPP) arrivals. It is shown that the stationary probability distribution of the total customer capacity can be approximated by a Gaussian distribution. Parameters of the approximation are also derived in the paper.
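The Gaussian high-rate approximation can be checked numerically in the simpler M/GI/∞ special case (constant-rate Poisson arrivals rather than a full MMPP): in steady state the number of customers in the system is Poisson(λE[S]), each carries an i.i.d. random capacity, and for a high arrival rate the total capacity is approximately normal with mean λE[S]·E[C] and variance λE[S]·E[C²]. All parameter values below are illustrative, not taken from the paper.

```python
import math
import random

random.seed(7)

lam, mean_service = 200.0, 1.0   # arrival rate and E[S]
rho = lam * mean_service         # offered load: E[N] in steady state
mean_cap = 2.0                   # E[C], exponential customer capacity

def poisson(mu):
    """Knuth's multiplication method; fine for moderate mu."""
    limit, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def total_capacity():
    # M/GI/inf: number in system is Poisson(rho); sum their capacities.
    n = poisson(rho)
    return sum(random.expovariate(1.0 / mean_cap) for _ in range(n))

samples = [total_capacity() for _ in range(5000)]
m = sum(samples) / len(samples)
v = sum((x - m) ** 2 for x in samples) / len(samples)

# Theory: E[V] = rho*E[C] = 400, Var[V] = rho*E[C^2] = rho*2*mean_cap**2 = 1600.
print(m, v)
within_1sd = sum(abs(x - m) <= v ** 0.5 for x in samples) / len(samples)
print(within_1sd)   # ~0.68 if the distribution is approximately Gaussian
```

The empirical mean, variance, and one-sigma coverage land close to the Gaussian predictions; the paper derives the analogous approximation parameters when the arrival rate itself is modulated by a Markov chain.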
Wagging the Dogma: Tissue-Specific Cell Cycle Control in the Mouse Embryo
The family of cyclin-dependent kinases (Cdks) lies at the core of the machinery that drives the cell division cycle. Studies in cultured mammalian cells have provided insight into the cellular functions of many Cdks. Recent Cdk and cyclin knockouts in the mouse show that the functions of G1 cell cycle regulatory genes are often essential only in specific cell types, pointing to our limited understanding of tissue-specific expression, redundancy, and compensating mechanisms in the Cdk network.
The fluid flow approximation of the TCP Vegas and Reno congestion control mechanisms
TCP congestion control algorithms have been designed to improve Internet transmission performance and stability. In recent years the classic Tahoe/Reno/NewReno TCP congestion control, based on losses as congestion indicators, has been refined, and many new congestion control algorithms have been proposed. In this paper the performance of the standard TCP NewReno algorithm is compared to that of TCP Vegas, which tries to avoid congestion by reducing the congestion window (CWND) size before packets are lost. The article uses fluid flow approximation to investigate the influence of the two above-mentioned TCP congestion control mechanisms on CWND evolution, packet loss probability, and queue length and its variability. The results show that TCP Vegas is a fair algorithm; however, it has problems utilizing the available bandwidth.
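The fluid-flow idea can be sketched for the loss-based (Reno-family) case with the classic Misra/Gong/Towsley-style window ODE, dW/dt = 1/RTT − (W/2)(W/RTT)p: the window grows by one packet per RTT and is halved at the loss rate (W/RTT)·p. This is a minimal single-flow sketch with a constant loss probability, not the paper's full coupled model of window and queue dynamics; parameter values are illustrative. With constant p the window settles near sqrt(2/p), the well-known square-root law.

```python
rtt = 0.1       # round-trip time in seconds (assumed value)
p = 0.01        # constant packet loss probability (assumed value)
w, dt = 1.0, 0.001   # initial CWND (packets) and Euler step size

# Forward-Euler integration of dW/dt = 1/RTT - (W/2) * (W/RTT) * p
# until the window reaches its steady state.
for _ in range(200_000):     # 200 simulated seconds
    dw = 1.0 / rtt - (w / 2.0) * (w / rtt) * p
    w += dw * dt

print(round(w, 2), round((2.0 / p) ** 0.5, 2))   # both ≈ 14.14 packets
```

Setting dW/dt = 0 gives W² = 2/p, so the simulated window converges to sqrt(2/0.01) ≈ 14.14 packets; Vegas, by contrast, reacts to RTT growth rather than losses, which is what lets it back off before the queue overflows.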