
    Modular categories from finite crossed modules

    It is known that finite crossed modules provide premodular tensor categories. These categories are in fact modularizable. We construct the modularization and show that it is equivalent to the module category of a finite Drinfeld double. (Comment: 21 pages, typos corrected.)
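
    For context (a standard fact about premodular categories, not a detail taken from this paper), modularity is detected by invertibility of the S-matrix, and a modularization is designed to achieve exactly this condition:

        % S-matrix entries are quantum traces of double braidings of the
        % simple objects X_i of the premodular category \mathcal{C}:
        \[ \tilde S_{ij} = \operatorname{tr}_q\!\left( c_{X_j, X_i} \circ c_{X_i, X_j} \right) \]
        % \mathcal{C} is modular precisely when this matrix is invertible;
        % the modularization quotients out the transparent objects that
        % obstruct invertibility:
        \[ \mathcal{C} \text{ is modular} \iff \det\left( \tilde S_{ij} \right) \neq 0 \]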

    Outgrowing the Commerce Clause: Finding Endangered Species a Home in the Constitutional Framework

    This Comment examines the controversial relationship between the ESA and the Commerce Clause. Part I provides an overview of the Commerce Clause and the ESA. Part II reviews the evolution of the Commerce Clause and examines the capacity of the Constitution, in its current form, to support the ESA. Part III examines the likelihood of Supreme Court review of the ESA due to conflicting circuit court opinions and recent changes in the Supreme Court's composition. Part IV identifies several factors that endanger the ESA at the Supreme Court level. The Comment concludes that, despite several seemingly favorable factors, the Commerce Clause framework is still inadequate to support the ESA, which remains in danger of a constitutional attack at the Supreme Court level. Though our current constitutional framework leaves the ESA vulnerable to attack, the ESA should not suffer as a result of our court system's shortcomings. Therefore, Part V proposes solutions to this inadequacy, including a shared responsibility between state and federal entities, several legislative remedies, and a recommendation for the Supreme Court to adopt the Fifth Circuit's rationale. Among these solutions, the Comment ultimately favors the comprehensive scheme rationale as applied to the ESA. Though not perfect in all respects, it is the solution that would allow for the broadest protection for all endangered species, and it is therefore the most desirable among those in the environmental community.

    The new "p-n junction": Plasmonics enables photonic access to the nanoworld

    Since the development of the light microscope in the 16th century, optical device size and performance have been limited by diffraction. For this reason, today's optoelectronic devices are much larger than the smallest electronic devices. Achieving control of light-material interactions for photonic device applications at the nanoscale requires structures that guide electromagnetic energy with subwavelength-scale mode confinement. By converting the optical mode into nonradiating surface plasmons, electromagnetic energy can be guided in structures with lateral dimensions of less than 10% of the free-space wavelength. A variety of methods, including electron-beam lithography and self-assembly, have been used to construct both particle and planar plasmon waveguides. Recent experimental studies have confirmed the strongly coupled collective plasmonic modes of metallic nanostructures. In plasmon waveguides consisting of closely spaced silver rods, electromagnetic energy transport over distances of 0.5 µm has been observed. Moreover, numerical simulations suggest the possibility of multi-centimeter plasmon propagation in thin metallic stripes. Thus, there appears to be no fundamental scaling limit to the size and density of photonic devices, and ongoing work is aimed at identifying important device performance criteria in the subwavelength size regime. Ultimately, it may be possible to design an entire class of subwavelength-scale optoelectronic components (waveguides, sources, detectors, modulators) that could form the building blocks of an optical device technology, one scalable to molecular dimensions, with potential imaging, spectroscopy, and interconnection applications in computing, communications, and chemical/biological detection.
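
    As background (standard surface plasmon electromagnetics, not a result specific to this article), the trade-off between subwavelength confinement and propagation loss at a single metal/dielectric interface follows from the SPP dispersion relation:

        % Wave vector of a surface plasmon polariton at a planar interface
        % between a metal (complex permittivity \varepsilon_m) and a
        % dielectric (\varepsilon_d):
        \[ k_{\mathrm{SPP}} = \frac{\omega}{c} \sqrt{ \frac{\varepsilon_m \varepsilon_d}{\varepsilon_m + \varepsilon_d} } \]
        % Since \operatorname{Re}\varepsilon_m < -\varepsilon_d, we get
        % k_{\mathrm{SPP}} > \omega/c: the mode is bound more tightly than
        % free-space light. Its absorption-limited propagation length is
        \[ L_{\mathrm{SPP}} = \frac{1}{2 \operatorname{Im} k_{\mathrm{SPP}}} \]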

    Comparison of Gain-Like Properties of Eye Position Signals in Inferior Colliculus Versus Auditory Cortex of Primates

    We evaluated the extent to which the influence of eye position in the auditory pathway of primates can be described as a gain field. We compared single-unit activity in the inferior colliculus (IC), core auditory cortex (A1), and the caudomedial belt (CM) region of auditory cortex (AC), and found stronger evidence for gain field-like interactions in the IC than in AC. In the IC, eye position signals showed both multiplicative and additive interactions with auditory responses, whereas in AC the effects were not as well predicted by a gain field model.
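
    A generic form of the gain field model referred to here (an illustrative formulation; the paper's exact fitting procedure is not reproduced) combines a multiplicative and an additive eye position term:

        % r: firing rate; f(s): auditory response to sound s; e: eye position;
        % \gamma: multiplicative (gain) coefficient; \beta: additive coefficient.
        \[ r(s, e) = f(s)\,(1 + \gamma e) + \beta e + r_0 \]
        % A pure gain field corresponds to \beta = 0 (eye position rescales the
        % auditory response); a purely additive effect corresponds to \gamma = 0.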

    Multi-Unit Efficiency Assessment and Multidimensional Polygon Analysis in a Small, Full-Service Restaurant Chain

    Purpose: Restaurant revenue management practices and profit optimization techniques are evolving into more complex data analysis processes. The “big data” revolution has created a wealth of information on revenue, pricing, key operational performance indicators, and various productivity/efficiency variables. Advanced analyses that can identify these key factors across multiple operating units may be useful to restaurant managers unaccustomed to data analytics, or to those seeking a deeper understanding of unit-level business performance. The overarching goal of this study was to apply mixed research methods across conceptually dissimilar units of a multi-unit chain restaurant, enabling researchers to build on the resulting outcomes and restaurant operators to apply them to optimize unit-to-unit profit.
    Design/Methodology: A mixed research methodology was used to evaluate multidimensional operating efficiencies and labor productivity across multiple restaurant concepts. Data envelopment analysis (DEA), between-unit multidimensional analysis, and within-unit ratio analyses were used. While DEA served as the primary diagnostic tool to identify productivity/efficiency benchmarking factors, supplemental between- and within-unit measures provided more in-depth information on the effects of operating expense variables.
    Findings: Restaurant analytics that effectively measure input and output variables between and within multiple units promote a data-rich organizational culture, and this was certainly the case for the small multi-unit organization studied here. DEA diagnostic results, used to target further analysis at particular units, indicated that all units operate at maximum efficiency in generating sales given their respective numbers of seats and square footages. However, subsequent analyses revealed multiple problems in expense management. The same approach may help other operators optimize operations.
    Originality/value: The proposed model gives restaurant operators the opportunity to identify different operating expense variables and their impact on overall profitability. The polygon analysis in itself makes sensitivity analysis of operating variables against profit outcomes much easier. We recommend that other operators perform similar analyses in order to enhance operational productivity.
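
    To illustrate the kind of DEA benchmarking described above, here is a minimal sketch of the input-oriented CCR envelopment model solved as a linear program with SciPy. The unit data and the input/output choices (seats, square footage, sales) are hypothetical stand-ins, not the study's data or its exact model specification.

        import numpy as np
        from scipy.optimize import linprog

        def dea_ccr_efficiency(X, Y, unit):
            """Input-oriented CCR efficiency of one decision-making unit.

            X: (m, n) inputs and Y: (s, n) outputs for n units. Returns theta
            in (0, 1]; theta == 1 means the unit lies on the efficient frontier.
            """
            m, n = X.shape
            s = Y.shape[0]
            # Decision variables: [theta, lambda_1, ..., lambda_n]
            c = np.zeros(n + 1)
            c[0] = 1.0  # minimize theta
            # Inputs: sum_j lambda_j * x_ij - theta * x_i,unit <= 0
            A_in = np.hstack([-X[:, [unit]], X])
            b_in = np.zeros(m)
            # Outputs: sum_j lambda_j * y_rj >= y_r,unit  ->  -sum <= -y
            A_out = np.hstack([np.zeros((s, 1)), -Y])
            b_out = -Y[:, unit]
            res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.concatenate([b_in, b_out]),
                          bounds=[(0, None)] * (n + 1), method="highs")
            return res.fun

        # Hypothetical example: 4 restaurant units, inputs = (seats, sq. footage),
        # output = monthly sales (arbitrary units).
        X = np.array([[80, 120, 60, 100],
                      [2000, 3500, 1500, 2800]], dtype=float)
        Y = np.array([[300, 420, 250, 310]], dtype=float)
        for j in range(X.shape[1]):
            print(f"unit {j}: efficiency = {dea_ccr_efficiency(X, Y, j):.3f}")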

    Note on Inaugural Edition


    Methods for C-Arm Cone-Beam Computed Tomography for Weight-Bearing Knee Imaging

    Weight-bearing C-arm cone-beam computed tomography (CBCT) of the knees is an imaging technique that acquires information about the structures of the knee joint under natural loading conditions. An analysis of the knee joints under load can provide valuable information about disease progression in patients suffering from osteoarthritis (OA). OA is a degenerative disease that causes breakdown of soft tissues such as articular cartilage, leading to severe pain, especially during movement. Weight-bearing CBCT is advantageous compared with the standing 2-D radiographs traditionally used for OA diagnosis, as it provides 3-D information about the structures of the knees. However, this acquisition technique also poses challenges that are addressed in this thesis: detector saturation, motion artifacts owing to postural sway during the scan, and the analysis of the resulting reconstructions.

    During weight-bearing CBCT acquisition, the C-arm rotates on a horizontal trajectory around the knees. The high X-ray dose required in lateral views to penetrate both femurs leads to detector saturation in less dense tissue regions. To address this issue, an approach for preventing detector saturation in the analog domain is presented. It non-linearly transforms the intensities with an analog tone mapping operator (TMO) before digitization in order to increase the dynamic range of the detector. A second approach replaces saturated regions in the projection images with information obtained from an additional low-dose scan; a marker-based 3-D non-rigid alignment step makes this approach robust to subject motion between scans. Both approaches improve image quality in simulations, and a clinical evaluation confirms the feasibility of the second approach.

    The swaying motion of naturally standing subjects during weight-bearing CBCT acquisition leads to blurring and streaking artifacts in the reconstructions. To correct these artifacts, an inertial measurement unit (IMU)-based rigid and non-rigid motion compensation is developed and evaluated in a simulation study using the recorded motion of real standing subjects. The approaches improve image quality in optimal simulations, but a noisy-signal simulation reveals the limitations of the approach for application to real acquisitions. A subsequent phantom study shows that additional motion is induced by the vibration of the C-arm during the scan, which cannot be measured by the IMUs attached to the legs of the scanned subject.

    Afterwards, the thickness analysis of tibial cartilage is addressed. First, an analysis of manual segmentations of the tibial cartilage surface by multiple raters shows that low-pass filtered single-rater segmentations are more similar to the consensus of multiple raters than the original segmentations. As a fast and repeatable alternative to manual segmentation, an automatic convolutional neural network (CNN)-based approach for cartilage surface segmentation is developed. Since there is no standard measure of cartilage thickness in the literature, the results of four cartilage thickness measures are compared, revealing their similarities and differences. A subsequent evaluation of the change in cartilage thickness over time supports the expectation that lateral cartilage thickness decreases under load.

    This thesis provides valuable tools for the pipeline aimed at analyzing cartilage in OA.
    The presented methods contribute to improved data acquisition and processing in weight-bearing CBCT and pave the way for the evaluation of clinical data through a detailed and thorough analysis of all described processing steps.
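
    To make the dynamic-range idea concrete, here is a minimal digital sketch of a tone mapping operator applied to detector intensities. The thesis applies the compression in the analog domain before digitization; the logarithmic curve and parameter below are illustrative assumptions, not the operator actually used.

        import numpy as np

        def tone_map(intensity, alpha=50.0):
            """Logarithmic tone mapping: compresses a wide input range into [0, 1].

            intensity: linear detector signal normalized to [0, 1]; alpha controls
            how strongly bright (near-saturating) values are compressed.
            """
            return np.log1p(alpha * intensity) / np.log1p(alpha)

        # Simulated lateral-view signal: dense femur regions are dim, while soft
        # tissue and air regions would saturate a linear detector.
        raw = np.array([0.002, 0.01, 0.2, 0.9, 3.0, 8.0]) / 8.0  # normalize
        linear_8bit = np.clip(np.round(raw * 255), 0, 255)
        mapped_8bit = np.round(tone_map(raw) * 255)
        print(linear_8bit)  # low-intensity (dense-tissue) detail crushed to 0
        print(mapped_8bit)  # compressed curve preserves both ends of the range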