Exploring the Impact of Socio-Technical Core-Periphery Structures in Open Source Software Development
In this paper we apply the social network concept of core-periphery structure to the socio-technical structure of a software development team. We propose a socio-technical pattern that can be used to locate emerging coordination problems in Open Source projects. Using our tool and method, TESNA, we demonstrate how to monitor socio-technical core-periphery movement in Open Source projects. We then study the impact of different core-periphery movements on Open Source projects. We conclude that a steady shift towards the core is beneficial to the project, whereas shifts away from the core are detrimental. Furthermore, oscillatory shifts towards and away from the core can be read as an indication of the project's instability. Such an analysis can give developers valuable insight into the health of an Open Source project. Researchers can benefit from the pattern theory and from the method we use to study core-periphery movements.
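A minimal sketch of the kind of core-periphery tracking described above (this is not the authors' TESNA tool): given snapshots of a developer-collaboration graph, k-core decomposition is used here as a simple proxy for core-periphery membership, and per-developer changes between snapshots indicate movement towards or away from the core. The edge lists and developer names are hypothetical.

```python
# Track how close each developer sits to the "core" of a collaboration network
# over successive snapshots, using k-core numbers as a coreness proxy.
import networkx as nx

def coreness(edges):
    """Return each node's core number (higher = closer to the core)."""
    G = nx.Graph(edges)
    return nx.core_number(G)

# Two hypothetical monthly snapshots of who co-modified files with whom.
snapshot_t1 = [("alice", "bob"), ("alice", "carol"), ("bob", "carol"), ("dave", "alice")]
snapshot_t2 = [("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
               ("dave", "alice"), ("dave", "bob"), ("dave", "carol")]

before, after = coreness(snapshot_t1), coreness(snapshot_t2)
for dev in sorted(set(before) | set(after)):
    shift = after.get(dev, 0) - before.get(dev, 0)
    direction = "toward core" if shift > 0 else ("away from core" if shift < 0 else "stable")
    print(f"{dev:6s} {before.get(dev, 0)} -> {after.get(dev, 0)}  ({direction})")
```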
Classifying the unknown: discovering novel gravitational-wave detector glitches using similarity learning
The observation of gravitational waves from compact binary coalescences by
LIGO and Virgo has begun a new era in astronomy. A critical challenge in making
detections is determining whether loud transient features in the data are
caused by gravitational waves or by instrumental or environmental sources. The
citizen-science project Gravity Spy has been demonstrated as an efficient infrastructure for classifying known types of noise transients
(glitches) through a combination of data analysis performed by both citizen
volunteers and machine learning. We present the next iteration of this project,
using similarity indices to empower citizen scientists to create large data
sets of unknown transients, which can then be used to facilitate supervised
machine-learning characterization. This new evolution aims to alleviate a
persistent challenge that plagues both citizen-science and instrumental
detector work: the ability to build large samples of relatively rare events.
Using two families of transient noise that appeared unexpectedly during LIGO's
second observing run (O2), we demonstrate the impact that the similarity
indices could have had on finding these new glitch types in the Gravity Spy
program.
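A minimal sketch of a similarity search of the kind described above (this is not the Gravity Spy production code): given feature embeddings of glitch images produced by a trained similarity-learning model, archived glitches are ranked by cosine similarity to a volunteer-selected query. The array names, shapes, and data are assumptions.

```python
# Rank archived glitches by cosine similarity to a query glitch embedding.
import numpy as np

def rank_by_similarity(query_embedding, archive_embeddings, top_k=10):
    """Return indices and scores of the top_k archived glitches most similar to the query."""
    q = query_embedding / np.linalg.norm(query_embedding)
    a = archive_embeddings / np.linalg.norm(archive_embeddings, axis=1, keepdims=True)
    scores = a @ q                      # cosine similarity against every archived glitch
    order = np.argsort(scores)[::-1][:top_k]
    return order, scores[order]

# Hypothetical 128-dimensional embeddings for one query and 5000 archived glitches.
rng = np.random.default_rng(0)
query = rng.normal(size=128)
archive = rng.normal(size=(5000, 128))
indices, scores = rank_by_similarity(query, archive)
print(indices, scores)
```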
Gravity Spy: Integrating Advanced LIGO Detector Characterization, Machine Learning, and Citizen Science
(abridged for arXiv) With the first direct detection of gravitational waves,
the Advanced Laser Interferometer Gravitational-wave Observatory (LIGO) has
initiated a new field of astronomy by providing an alternate means of sensing
the universe. The extreme sensitivity required to make such detections is
achieved through exquisite isolation of all sensitive components of LIGO from
non-gravitational-wave disturbances. Nonetheless, LIGO is still susceptible to
a variety of instrumental and environmental sources of noise that contaminate
the data. Of particular concern are noise features known as glitches, which are transient and non-Gaussian in nature and occur at a high enough rate that accidental coincidence between the two LIGO detectors is non-negligible.
In this paper we describe an innovative project that combines crowdsourcing
with machine learning to aid in the challenging task of categorizing all of the
glitches recorded by the LIGO detectors. Through the Zooniverse platform, we
engage and recruit volunteers from the public to categorize images of glitches
into pre-identified morphological classes and to discover new classes that
appear as the detectors evolve. In addition, machine learning algorithms are
used to categorize images after being trained on human-classified examples of
the morphological classes. Leveraging the strengths of both classification
methods, we create a combined method with the aim of improving the efficiency
and accuracy of each individual classifier. The resulting classification and
characterization should help LIGO scientists to identify causes of glitches and
subsequently eliminate them from the data or the detector entirely, thereby
improving the rate and accuracy of gravitational-wave observations. We
demonstrate these methods using a small subset of data from LIGO's first
observing run.
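A minimal sketch of the supervised step described above, in which a machine learning model is trained on human-classified examples of the morphological classes (this is not the Gravity Spy pipeline): a small CNN backbone is fine-tuned on labelled glitch spectrogram images. The dataset path, folder layout, and number of classes are assumptions.

```python
# Train an image classifier on human-labelled glitch spectrograms.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 20  # assumed number of pre-identified morphological classes

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Assumes a folder-per-class layout of labelled spectrogram images.
train_set = datasets.ImageFolder("glitch_spectrograms/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

model = models.resnet18(weights=None)                     # small CNN backbone
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)   # new classification head
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```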
Multimodal probes: superresolution and transmission electron microscopy imaging of mitochondria, and oxygen mapping of cells, using small-molecule Ir(III) luminescent complexes
We describe an Ir(III)-based small-molecule, multimodal probe for use in both light and electron microscopy. Directly correlating data between light- and electron-microscopy-based imaging to investigate cellular processes at the ultrastructure level is a current challenge: it requires dyes that are brightly emissive for luminescence imaging and that scatter electrons to give contrast for electron microscopy, at a single working concentration suitable for both methods. Here we describe the use of Ir(III) complexes as probes that provide excellent image contrast and quality for both luminescence and electron microscopy imaging at the same working concentration. Significant contrast enhancement of cellular mitochondria was observed in transmission electron microscopy imaging, both with and without the use of typical contrast agents. The specificity for cellular mitochondria was also confirmed with MitoTracker using confocal and 3D-structured illumination microscopy. These phosphorescent dyes are part of a very exclusive group of transition-metal complexes that enable imaging beyond the diffraction limit. Triplet excited-state phosphorescence was also utilized to probe the O2 concentration at the mitochondria in vitro, using lifetime mapping techniques.
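Lifetime-based O2 mapping of this kind is conventionally interpreted through the Stern-Volmer quenching relation; the expression below states that standard relation as background, since the abstract does not give the calibration constants, which are assumed to be determined experimentally.

\[ \frac{\tau_0}{\tau} = 1 + K_{\mathrm{SV}}\,[\mathrm{O}_2] = 1 + k_q\,\tau_0\,[\mathrm{O}_2] \]

where \(\tau_0\) and \(\tau\) are the phosphorescence lifetimes in the absence and presence of oxygen, \(k_q\) is the bimolecular quenching constant, and \(K_{\mathrm{SV}}\) is the Stern-Volmer constant; measuring \(\tau\) pixel by pixel therefore yields a map of local O2 concentration.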
Artificial Intelligence Algorithms to Diagnose Glaucoma and Detect Glaucoma Progression: Translation to Clinical Practice
Purpose: This concise review aims to explore the potential for the clinical implementation of artificial intelligence (AI) strategies for detecting glaucoma and monitoring glaucoma progression. / Methods: Nonsystematic literature review using the search combinations "Artificial Intelligence," "Deep Learning," "Machine Learning," "Neural Networks," "Bayesian Networks," "Glaucoma Diagnosis," and "Glaucoma Progression." Information on sensitivity and specificity regarding glaucoma diagnosis and progression analysis, as well as methodological details, was extracted. / Results: Numerous AI strategies provide promising levels of specificity and sensitivity for the structural (e.g. optical coherence tomography [OCT] imaging, fundus photography) and functional (visual field [VF] testing) test modalities used for the detection of glaucoma. Area under the receiver operating characteristic curve (AROC) values of > 0.90 were achieved with every modality. Combining structural and functional inputs has been shown to further improve diagnostic ability. Regarding glaucoma progression, AI strategies can detect progression earlier than conventional methods, or potentially from a single VF test. / Conclusions: AI algorithms applied to fundus photographs for screening purposes may provide good results using a simple and widely accessible test. However, for patients who are likely to have glaucoma, more sophisticated methods should be used, including data from OCT and perimetry. Outputs may serve as an adjunct to assist clinical decision making, while also enhancing the efficiency, productivity, and quality of the delivery of glaucoma care. Patients with diagnosed glaucoma may benefit from future algorithms to evaluate their risk of progression. Challenges are yet to be overcome, including the external validity of AI strategies, a move from a "black box" toward "explainable AI," and likely regulatory hurdles. However, it is clear that AI can enhance the role of specialist clinicians and will inevitably shape the future of the delivery of glaucoma care to the next generation. / Translational Relevance: The promising levels of diagnostic accuracy reported by AI strategies across the modalities used in clinical practice for glaucoma detection can pave the way for the development of reliable models appropriate for translation into clinical practice. Future incorporation of AI into healthcare models may help address the current limitations of access and timely management of patients with glaucoma across the world.
The dimensions of software engineering success
Software engineering research and practice are hampered by the lack of a well-understood, top-level dependent variable. Recent initiatives on a General Theory of Software Engineering suggest a multifaceted variable: Software Engineering Success. However, its exact dimensions are unknown. This paper investigates the dimensions (not causes) of software engineering success. An interdisciplinary sample of 191 design professionals (68 in the software industry) was interviewed concerning their perceptions of success. Non-software designers (e.g. architects) were included to increase the breadth of ideas and facilitate comparative analysis. Transcripts were subjected to supervised, semi-automated semantic content analysis, including a comparison of software developers with other professionals. Findings suggest that participants view their work as time-constrained projects with explicit clients and other stakeholders. Success depends on stakeholder impacts (financial, social, physical, and emotional) and is understood through feedback. Concern with meeting explicit requirements is peculiar to software engineering, and design is not equated with aesthetics in many other fields. Software engineering success is a complex, multifaceted variable that cannot be sufficiently explained by traditional dimensions such as user satisfaction, profitability, or meeting requirements, budgets, and schedules. A proto-theory of success is proposed, which models success as the net impact on a particular stakeholder at a particular time. Stakeholder impacts are driven by project efficiency, artifact quality, and market performance. Success is not additive: for example, 'low' success for clients does not average with 'high' success for developers to make 'moderate' success overall; rather, a project may be simultaneously successful and unsuccessful from different perspectives.
Advancing Glitch Classification in Gravity Spy: Multi-view Fusion with Attention-based Machine Learning for Advanced LIGO's Fourth Observing Run
The first successful detection of gravitational waves by ground-based
observatories, such as the Laser Interferometer Gravitational-Wave Observatory
(LIGO), marked a revolutionary breakthrough in our comprehension of the
Universe. However, due to the unprecedented sensitivity required to make such
observations, gravitational-wave detectors also capture disruptive noise sources called glitches, which can mask or mimic gravitational-wave signals. To address this problem, a community-science project,
Gravity Spy, incorporates human insight and machine learning to classify
glitches in LIGO data. The machine learning classifier, integrated into the
project since 2017, has evolved over time to accommodate increasing numbers of
glitch classes. Despite its success, limitations have arisen in the ongoing
LIGO fourth observing run (O4) due to its architecture's simplicity, which led to poor generalization and an inability to handle multi-time window inputs effectively. We propose an advanced classifier for O4 glitches. Our
contributions include evaluating fusion strategies for multi-time window
inputs, using label smoothing to counter noisy labels, and enhancing
interpretability through attention module-generated weights. This development
seeks to enhance glitch classification, aiding in the ongoing exploration of
gravitational-wave phenomena.
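A minimal sketch of the two techniques named above, attention-based fusion of multi-time window inputs and label smoothing for noisy labels (this is not the Gravity Spy O4 classifier): each time-window "view" is encoded separately, an attention module produces interpretable per-view weights, and the weighted sum of view features feeds a classification head. Window counts, feature sizes, and the number of glitch classes are assumptions.

```python
# Attention-weighted fusion of per-time-window features, trained with label smoothing.
import torch
import torch.nn as nn

NUM_CLASSES = 23      # assumed number of glitch classes
FEATURE_DIM = 256     # assumed per-view feature size
NUM_VIEWS = 4         # e.g. spectrograms over four different time windows

class AttentionFusionClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # One small encoder per view (stand-in for a CNN backbone).
        self.encoders = nn.ModuleList([
            nn.Sequential(nn.Flatten(), nn.LazyLinear(FEATURE_DIM), nn.ReLU())
            for _ in range(NUM_VIEWS)
        ])
        self.attention = nn.Linear(FEATURE_DIM, 1)   # scores each view
        self.head = nn.Linear(FEATURE_DIM, NUM_CLASSES)

    def forward(self, views):
        feats = torch.stack([enc(v) for enc, v in zip(self.encoders, views)], dim=1)
        weights = torch.softmax(self.attention(feats), dim=1)   # interpretable per-view weights
        fused = (weights * feats).sum(dim=1)                    # attention-weighted fusion
        return self.head(fused), weights.squeeze(-1)

model = AttentionFusionClassifier()
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)  # softens the effect of noisy labels

# Hypothetical batch: 8 samples, each with 4 single-channel 140x170 spectrogram views.
views = [torch.randn(8, 1, 140, 170) for _ in range(NUM_VIEWS)]
labels = torch.randint(0, NUM_CLASSES, (8,))
logits, view_weights = model(views)
loss = criterion(logits, labels)
loss.backward()
print(loss.item(), view_weights[0])
```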
The Inter-organizational Business Case in ES Implementations: Exploring the Impact of Coordination Structures and Their Properties
Developing the business case (BC) for an inter-organizational network is a major challenge. Factors such as competition and differences in semantics between actors influence the stakeholders' willingness to share the information necessary for BC development. In this paper we develop an exploratory framework showing the effect that coordination structure and project scope have on the development of a shared BC. We define several coordination properties, such as competition, decision-making location, and decision power, that mitigate this effect. We apply the framework in a case study in which a BC is developed for an inter-organizational network. Our findings show that current BC development methods need to be restated and complemented by extra tools and interventions to support stakeholders in the specific inter-organizational setting.