Global morphogenetic flow is accurately predicted by the spatial distribution of myosin motors.
During embryogenesis, tissue layers undergo morphogenetic flow, rearranging and folding into specific shapes. While developmental biology has identified key genes and local cellular processes, the global coordination of tissue remodeling at the organ scale remains unclear. Here, we combine in toto light-sheet microscopy of the Drosophila embryo with quantitative analysis and physical modeling to relate cellular flow to the patterns of force generation during gastrulation. We find that the complex spatio-temporal flow pattern can be predicted from the measured meso-scale myosin density and anisotropy using a simple, effective viscous model of the tissue, achieving close to 90% accuracy with one time-dependent and two constant parameters. Our analysis uncovers the importance of a) spatial modulation of the myosin distribution on the scale of the embryo and b) the non-locality of its effect due to the mechanical interaction of cells, demonstrating the need for a global perspective in the study of morphogenetic flow.
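The abstract does not reproduce the model itself, but the kind of "effective viscous" description it refers to can be sketched schematically. In the sketch below the symbols and the precise form are assumptions made for illustration, not taken from the paper: the coarse-grained tissue velocity field \(\mathbf{v}\) obeys a Stokes-like force balance in which gradients of an active stress, set by the measured myosin tensor \(\mathbf{m}\) (density and anisotropy), drive the flow,

\[
\mu \nabla^{2}\mathbf{v} + (\mu + \lambda)\,\nabla(\nabla\cdot\mathbf{v}) = -\,\nabla\cdot\boldsymbol{\sigma}^{a},
\qquad
\boldsymbol{\sigma}^{a} = \alpha(t)\,\mathbf{m},
\]

with two constant viscosity-like coefficients \(\mu, \lambda\) and one time-dependent coupling \(\alpha(t)\), consistent with the parameter count quoted above. Because such a balance must be solved over the whole embryo surface, the flow at any point depends non-locally on the myosin pattern everywhere, which is the sense in which a global perspective is required.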
Investigating information systems with mixed-methods research
Mixed-methods research, which comprises both quantitative and qualitative components, is widely perceived as a means to overcome the inherent limitations of traditional single-method designs and is thus expected to yield richer and more holistic findings. Despite such distinctive benefits and continuous advocacy from Information Systems (IS) researchers, uptake of the mixed-methods approach in the IS field has not been high. This paper discusses some of the key reasons for this low application rate of mixed-methods designs in the IS field, ranging from confusion of the term with multiple-methods research to practical difficulties in design and implementation. Two previous IS studies are used as examples to illustrate the discussion. The paper concludes by recommending that, in order to apply mixed-methods designs successfully, IS researchers need to plan and consider thoroughly how the quantitative and qualitative components (from data collection to data analysis to reporting of findings) can be genuinely integrated and made to supplement one another, in relation to the predefined research questions and the specific research context.
Understanding the perception of very small software companies towards the adoption of process standards
This paper is concerned with understanding the issues that affect the adoption of software process standards by Very Small Entities (VSEs), their needs from process standards, and their willingness to engage with the new ISO/IEC 29110 standard in particular. To achieve this goal, a series of industry data collection studies was undertaken with a group of VSEs. A twin-track approach was adopted, combining qualitative data collection (interviews and focus groups) with quantitative data collection (a questionnaire); the data were analysed separately and the results then merged using the coding mechanisms of grounded theory. This paper serves as a roadmap both for researchers wishing to understand the issues of process standards adoption by very small companies and for the software process standards community.
Statistical methods for tissue array images - algorithmic scoring and co-training
Recent advances in tissue microarray technology have allowed immunohistochemistry to become a powerful medium-to-high-throughput analysis tool, particularly for the validation of diagnostic and prognostic biomarkers. However, as study size grows, the manual evaluation of these assays becomes a prohibitive limitation; it vastly reduces throughput and greatly increases variability and expense. We propose an algorithm - Tissue Array Co-Occurrence Matrix Analysis (TACOMA) - for quantifying cellular phenotypes based on textural regularity summarized by local inter-pixel relationships. The algorithm can be easily trained for any staining pattern, has no sensitive tuning parameters, and can report the salient pixels in an image that contribute to its score. Pathologists' input via informative training patches is an important aspect of the algorithm, allowing training for any specific marker or cell type. With co-training, the error rate of TACOMA can be reduced substantially for a very small training sample (e.g., of size 30). We give theoretical insights into the success of co-training via thinning of the feature set in a high-dimensional setting when there is "sufficient" redundancy among the features. TACOMA is flexible, transparent and provides a scoring process that can be evaluated with clarity and confidence. In a study based on an estrogen receptor (ER) marker, we show that TACOMA is comparable to, or outperforms, pathologists' performance in terms of accuracy and repeatability. Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org), http://dx.doi.org/10.1214/12-AOAS543.
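The abstract does not spell out how TACOMA constructs its features; purely as a generic illustration of the local inter-pixel co-occurrence statistics it refers to (the function names, quantization level and offsets below are assumptions for illustration, not the authors' implementation), a quantized gray-level co-occurrence matrix over an image patch can be computed as follows:

```python
import numpy as np

def cooccurrence_matrix(patch, levels=8, offset=(0, 1)):
    """Joint distribution of quantized gray levels for one pixel offset.

    patch  : 2D array of intensities (e.g. a training patch from a TMA image)
    levels : number of gray-level bins used for quantization
    offset : (row, col) displacement defining the pixel-pair relationship
    """
    # Quantize intensities into `levels` bins.
    edges = np.linspace(patch.min(), patch.max(), levels + 1)[1:-1]
    q = np.digitize(patch, edges)
    dr, dc = offset
    # Pair each pixel with its neighbour at the given offset.
    a = q[max(0, -dr):q.shape[0] - max(0, dr), max(0, -dc):q.shape[1] - max(0, dc)]
    b = q[max(0, dr):, max(0, dc):][:a.shape[0], :a.shape[1]]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)   # accumulate pair counts
    return m / m.sum()                        # normalize to a joint distribution

def texture_features(patch, levels=8, offsets=((0, 1), (1, 0), (1, 1), (1, -1))):
    """Concatenate co-occurrence matrices over several offsets into one feature vector."""
    return np.concatenate([cooccurrence_matrix(patch, levels, o).ravel() for o in offsets])
```

In a TACOMA-like workflow, feature vectors of this kind, extracted from pathologist-selected training patches, would then feed a supervised scorer, with co-training exploiting redundancy among the features when labeled patches are scarce.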
Exploiting Qualitative Information for Decision Support in Scenario Analysis
The development of scenario analysis (SA) to assist decision makers and stakeholders has been growing over the last few years, mainly through exploiting qualitative information provided by experts. In this study, we present SA based on the use of qualitative data for strategy planning. We discuss the potential of SA as a decision-support tool, provide a structured approach for the interpretation of SA data, and present an empirical validation of expert evaluations that can help to measure the consistency of the analysis. An application to a specific case study is provided, with reference to the European organic farming business.
Educational Technology Topic Guide
This guide aims to contribute to what we know about the relationship between educational technology (edtech) and educational outcomes by addressing the following overarching question: What is the evidence that the use of edtech, by teachers or students, impacts teaching and learning practices, or learning outcomes? It also offers recommendations to support advisors to strengthen the design, implementation and evaluation of programmes that use edtech.
We define edtech as the use of digital or electronic technologies and materials to support teaching and learning. Recognising that technology alone does not enhance learning, evaluations must also consider how programmes are designed and implemented, how teachers are supported, how communities are developed and how outcomes are measured (see http://tel.ac.uk/about-3/, 2014).
Effective edtech programmes are characterised by:
a clear and specific curriculum focus
the use of relevant curriculum materials
a focus on teacher development and pedagogy
evaluation mechanisms that go beyond outputs.
These findings come from a wide range of technology use including:
interactive radio instruction (IRI)
classroom audio or video resources accessed via teachers' mobile phones
student tablets and eReaders
computer-assisted learning (CAL) to supplement classroom teaching.
However, there are also examples of large-scale investment in edtech, particularly computers for student use, that produce limited educational outcomes. We need to know more about:
how to support teachers to develop appropriate, relevant practices using edtech
how such practices are enacted in schools, and what factors contribute to or work against successful outcomes.
Recommendations:
1. Edtech programmes should focus on enabling educational change, not delivering technology. In doing so, programmes should provide adequate support for teachers and aim to capture changes in teaching practice and learning outcomes in evaluation.
2. Advisors should support proposals that further develop successful practices or that address gaps in evidence and understanding.
3. Advisors should discourage proposals that have an emphasis on technology over education, weak programmatic support or poor evaluation.
4. In design and evaluation, value-for-money metrics and cost-effectiveness analyses should be carried out.
A methodology for analysing and evaluating narratives in annual reports: a comprehensive descriptive profile and metrics for disclosure quality attributes
There is a consensus that the business reporting model needs to expand to serve the changing information needs of the market and provide the information required for enhanced corporate transparency and accountability. Worldwide, regulators view narrative disclosures as the key to achieving the desired step-change in the quality of corporate reporting. In recent years, accounting researchers have increasingly focused their efforts on investigating disclosure and it is now recognised that there is an urgent need to develop disclosure metrics to facilitate research into voluntary disclosure and quality [Core, J. E. (2001). A review of the empirical disclosure literature. Journal of Accounting and Economics, 31(3), 441-456]. This paper responds to this call and contributes in two principal ways. First, the paper introduces to the academic literature a comprehensive four-dimensional framework for the holistic content analysis of accounting narratives and presents a computer-assisted methodology for implementing this framework. This procedure provides a rich descriptive profile of a company's narrative disclosures based on the coding of topic and three type attributes. Second, the paper explores the complex concept of quality, and the problematic nature of quality measurement. It makes a preliminary attempt to identify some of the attributes of quality (such as relative amount of disclosure and topic spread), suggests observable proxies for these and offers a tentative summary measure of disclosure quality.
System-level key performance indicators for building performance evaluation
Quantifying building energy performance through the development and use of key performance indicators (KPIs) is an essential step in achieving energy-saving goals in both new and existing buildings. Current methods used to evaluate improvements, however, are not well represented at the system level (e.g., lighting, plug loads, HVAC, service water heating). Instead, performance is typically measured only at the whole-building level (e.g., energy use intensity) or at the equipment level (e.g., chiller coefficient of performance (COP)), with limited insight for benchmarking and diagnosing deviations in the performance of the aggregated equipment that delivers a specific service to a building (e.g., space heating, lighting). The increasing installation of sensors and meters in buildings makes the evaluation of building performance at the system level more feasible through improved data collection. Leveraging this opportunity, this study introduces a set of system-level KPIs covering four major end-use systems in buildings - lighting, MELs (Miscellaneous Electric Loads, also known as plug loads), HVAC (heating, ventilation, and air-conditioning), and SWH (service water heating) - and their eleven subsystems. The system KPIs are formulated in a new context to represent various types of performance, including energy use, peak demand, load shape, occupant thermal comfort and visual comfort, ventilation, and water use. This paper also presents a database of system KPIs using EnergyPlus simulation results for 16 USDOE prototype commercial building models across four vintages and five climate zones. These system KPIs, although originally developed for office buildings, can be applied to other building types with some adjustment or extension. Potential applications of system KPIs for system performance benchmarking and diagnostics, code compliance, and measurement and verification are discussed.
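As a concrete, hypothetical illustration of what a system-level KPI looks like in practice (the KPI names, formulas and figures below are assumptions for illustration, not the definitions proposed in the study), two common lighting-system metrics can be computed from a year of hourly sub-metered data:

```python
import numpy as np

def lighting_system_kpis(hourly_kwh, floor_area_m2):
    """Two illustrative system-level KPIs for a lighting end use.

    hourly_kwh    : 1D array of hourly lighting electricity use over a year (kWh)
    floor_area_m2 : conditioned floor area served by the lighting system (m2)
    """
    hourly_kwh = np.asarray(hourly_kwh, dtype=float)
    eui = hourly_kwh.sum() / floor_area_m2                       # annual energy use intensity, kWh/m2
    peak_w_per_m2 = hourly_kwh.max() * 1000.0 / floor_area_m2    # peak demand, W/m2 (1 h intervals)
    return {"lighting_EUI_kWh_per_m2": eui, "lighting_peak_W_per_m2": peak_w_per_m2}

# Example: a flat 10 kW lighting load over 2,500 operating hours in a 2,000 m2 office
kwh = np.zeros(8760)
kwh[:2500] = 10.0
print(lighting_system_kpis(kwh, 2000.0))
# -> {'lighting_EUI_kWh_per_m2': 12.5, 'lighting_peak_W_per_m2': 5.0}
```

Equipment-level metrics such as a chiller COP, and whole-building metrics such as total energy use intensity, sit below and above this level of aggregation, respectively.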
Implementation of Internet based courses in computer information systems at Milwaukee Area Technical College
Includes bibliographical references
Compressive Sensing for Dynamic XRF Scanning
X-Ray Fluorescence (XRF) scanning is a widespread technique of high importance and impact, since it provides chemical composition maps crucial for several scientific investigations. There is a continuing demand for larger, faster and more highly resolved acquisitions in order to study complex structures. Some of the scientific applications that would benefit, such as wide-scale brain imaging, are prohibitively difficult due to time constraints. Typically, overall XRF imaging performance is improved through technological progress in XRF detectors and X-ray sources. This paper suggests an additional approach in which XRF scanning is performed in a sparse way, by skipping specific points or by dynamically varying the acquisition time or other scan settings in a conditional manner. This paves the way for Compressive Sensing in XRF scans, where data are acquired in a reduced manner, enabling challenging experiments that are currently not feasible with traditional scanning strategies. A series of different compressive sensing strategies for dynamic scans are presented here. A proof-of-principle experiment was performed at the TwinMic beamline of the Elettra synchrotron. The outcome demonstrates the potential of Compressive Sensing for dynamic scans, suggesting its use in challenging scientific experiments while proposing a technical solution for beamline acquisition software.
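The abstract does not describe the reconstruction step; purely as a generic illustration of the compressive sensing idea (this is not the beamline acquisition software, and the sampling fraction, sparsifying transform and thresholds below are assumptions), a map scanned at a random subset of points can be recovered by iterative soft thresholding under an assumed DCT-sparsity prior:

```python
import numpy as np
from scipy.fft import dctn, idctn

def recover_map(measured, mask, n_iter=100, lam_max=1.0):
    """Fill in a sparsely scanned map by iterative soft thresholding,
    assuming the underlying map is approximately sparse in the 2D DCT domain.

    measured : 2D array with measured counts at scanned pixels (zeros elsewhere)
    mask     : boolean 2D array, True where a pixel was actually scanned
    """
    x = measured.copy()
    for k in range(n_iter):
        x[mask] = measured[mask]                   # keep consistency with acquired data
        lam = lam_max * (1.0 - k / n_iter)         # decaying sparsity threshold
        c = dctn(x, norm="ortho")
        c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
        x = idctn(c, norm="ortho")
    x[mask] = measured[mask]
    return x

# Example: scan only 30% of a smooth synthetic 64x64 "elemental map" and recover the rest.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:64, 0:64]
truth = np.exp(-((xx - 40) ** 2 + (yy - 20) ** 2) / 200.0)   # one smooth emission spot
mask = rng.random(truth.shape) < 0.3
measured = np.where(mask, truth, 0.0)
estimate = recover_map(measured, mask)
print("relative error:", np.linalg.norm(estimate - truth) / np.linalg.norm(truth))
```

In a dynamic-scanning setting, the sampling mask would instead be built up adaptively during acquisition, in line with the conditional strategies mentioned above.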