
    Context-Aware Source Code Identifier Splitting and Expansion for Software Maintenance

    ABSTRACT Understanding source code is a necessary step for many program comprehension, reverse-engineering, or re-documentation tasks. In source code, textual information such as identifiers and comments represents an important source of information. The problem of extracting and analyzing the textual information in software artifacts was recognized by the software engineering research community only recently.
Information Retrieval (IR) methods were proposed to support program comprehension tasks, such as feature (or concept) location and traceability link recovery. However, to reap the full benefit of IR-based approaches, the language used across all software artifacts must be the same, because IR queries cannot return relevant documents if the query vocabulary contains words that are not in the source code vocabulary. Unfortunately, source code contains a significant proportion of vocabulary that is not made up of full (meaningful) words, e.g., abbreviations, acronyms, or concatenations of these. In effect, source code uses a different language than other software artifacts. This vocabulary mismatch breaks the implicit assumption of IR and Natural Language Processing (NLP) techniques that a single natural-language vocabulary is used across all artifacts. Therefore, vocabulary normalization is a challenging problem. Vocabulary normalization aligns the vocabulary found in the source code with that found in other software artifacts. Normalization must both split an identifier into its constituent parts and expand each part into a full dictionary word to match the vocabulary of other artifacts. In this thesis, we deal with the challenge of normalizing source code vocabulary by developing two novel context-aware approaches. We exploit context because our experimental studies have shown that it is relevant for source code vocabulary normalization. In fact, we conducted two user studies with 63 participants who were asked to split and expand a set of 50 identifiers from a corpus of open-source C programs, with different context levels available. In particular, we considered an internal context consisting of the content of the functions, source code files, and applications where the identifiers appear, and an external context involving external documentation. We reported evidence on the usefulness of contextual information for source code vocabulary normalization. We observed that source code files are more helpful than function source code alone, and that application-level contextual information does not help any further. The availability of external sources of information (e.g., a thesaurus of abbreviations and acronyms) only helps in some circumstances. The obtained results confirm the conjecture that contextual information is useful in program comprehension, including when developers split and expand identifiers to understand them. Thus, we propose a novel contextual approach for vocabulary normalization, TIDIER. TIDIER is inspired by speech recognition techniques and exploits contextual information in the form of specialized dictionaries (e.g., of acronyms, contractions, and domain-specific terms). TIDIER significantly outperforms the approaches that preceded it (i.e., CamelCase and Samurai). Specifically, with a program-level dictionary complemented with domain knowledge, TIDIER achieves 54% correct splits, compared to 30% with CamelCase and 31% with Samurai. Moreover, TIDIER correctly maps identifiers' terms to dictionary words with a precision of 48% for a set of 73 abbreviations. The main limitation of TIDIER is its cubic complexity, which led us to propose a faster but equally effective solution, TRIS. TRIS is inspired by TIDIER, but it deals with the vocabulary normalization problem differently.
It maps normalization to a graph optimization (minimization) problem: finding the optimal path (i.e., the optimal splitting-expansion) in an acyclic weighted graph. In addition, it uses the relative frequency of source code terms as a local context to determine the most likely identifier splitting-expansion. TRIS significantly outperforms CamelCase and Samurai in terms of splitting precision and recall, and TIDIER in terms of identifier expansion precision and recall, with a medium to large effect size, for C and C++ systems. In addition, TRIS shows an improvement of 4% in identifier splitting correctness over GenTest (a more recent splitter proposed after TIDIER), although this improvement is not statistically significant. TRIS uses a tree-based representation that makes it, in addition to being more accurate than other approaches, efficient in terms of computation time. Thus, TRIS quickly produces a single optimal split and expansion, using an identifier-processing algorithm whose complexity is quadratic in the length of the identifier to split and expand. We also investigate the impact of identifier splitting on two IR-based software maintenance tasks, namely, feature location and traceability recovery. Our study on feature location analyzes the effect of three identifier splitting strategies, CamelCase, Samurai, and an oracle, on two feature location techniques (FLTs). The first is based on IR alone, while the second combines IR and dynamic analysis (i.e., execution traces). The obtained results support our conjecture that when only textual information is available, an improved splitting technique can improve the effectiveness of feature location. The results also show that when both textual and execution information are used, any splitting algorithm will suffice, as the FLTs produced equivalent results. In other words, because dynamic information prunes the search space considerably, the benefit of an advanced splitting algorithm is comparably smaller than that of the dynamic information; hence the splitting algorithm has little impact on the final results. Overall, our findings outline the potential benefits of advanced preprocessing techniques, as they can be useful in situations where execution information cannot easily be collected. In addition, we study the impact of identifier splitting on two traceability recovery techniques, using the same three identifier splitting strategies as in our study on feature location. The first traceability recovery technique uses Latent Semantic Indexing (LSI), while the second is based on the Vector Space Model (VSM). The results indicate that advanced splitting techniques help increase the precision and recall of the studied traceability techniques, but only in some cases. In addition, our qualitative analysis shows that the improvement brought by such techniques depends on the quality of the studied data. Overall, this thesis contributes to the state of the art on identifier splitting and expansion, context, and their importance for program comprehension. In addition, it contributes to the fields of feature location and traceability recovery. The theoretical and practical findings of this thesis are useful for both practitioners and researchers. Our future research in program comprehension and software maintenance will extend our empirical evaluations to other software systems written in other programming languages.
In addition, we will apply our source code vocabulary normalization approaches to other program comprehension tasks (e.g., code summarization). We are also preparing a replication of our study on the effect of context on vocabulary normalization using eye tracking, to analyze the different strategies adopted by developers when exploring contextual information to split and expand identifiers. A second research direction that we are currently tackling concerns the impact of identifier style on software quality, studied by mining software repositories. In fact, we are currently inferring the identifier styles used by developers in open-source projects using a statistical model, namely a Hidden Markov Model (HMM). The aim is to show whether open-source developers adhere to the style of the projects they join and their naming conventions (if any), or whether they bring their own style. In addition, we want to analyze whether a specific identifier style (e.g., short abbreviations or acronyms) introduces bugs into the systems and whether it affects internal software quality metrics, in particular semantic coupling and cohesion.
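To make the contrast between splitting families concrete, the sketch below shows a CamelCase-style baseline splitter next to a frequency-weighted shortest-path split in the spirit of TRIS's graph formulation. It is a minimal illustration under assumed inputs (the tiny frequency table is hypothetical), not the thesis' TIDIER or TRIS implementation.

```python
# Two identifier-splitting strategies, sketched for illustration only.
import math
import re

def camel_case_split(identifier: str) -> list[str]:
    """Baseline splitter: break on underscores, digits, and case changes."""
    terms = []
    for part in re.split(r"[_0-9]+", identifier):
        terms += re.findall(r"[A-Z]+(?![a-z])|[A-Z][a-z]*|[a-z]+", part)
    return [t.lower() for t in terms if t]

# Hypothetical term frequencies standing in for the "local context"
# TRIS derives from the source code of the system under analysis.
FREQ = {"counter": 50, "count": 40, "er": 2, "pointer": 30, "file": 60}

def dp_split(s: str) -> list[str]:
    """TRIS-style idea: the split minimizing a cost, as a shortest path.

    Position i in the identifier is a graph node; an edge i -> j exists for
    every dictionary word s[i:j], weighted by -log(relative frequency).
    A dynamic program finds the minimum-cost path from 0 to len(s).
    """
    total = sum(FREQ.values())
    cost = [math.inf] * (len(s) + 1)   # best cost to reach each position
    back = [None] * (len(s) + 1)       # backpointer: (start, word)
    cost[0] = 0.0
    for i in range(len(s)):
        if cost[i] == math.inf:
            continue
        for j in range(i + 1, len(s) + 1):
            word = s[i:j].lower()
            if word in FREQ:
                c = cost[i] - math.log(FREQ[word] / total)
                if c < cost[j]:
                    cost[j], back[j] = c, (i, word)
    if back[-1] is None:
        return camel_case_split(s)     # fall back to the baseline
    words, j = [], len(s)
    while j > 0:
        i, word = back[j]
        words.append(word)
        j = i
    return words[::-1]

print(camel_case_split("fileCounter"))  # ['file', 'counter']
print(dp_split("filecounter"))          # 'counter' beats 'count'+'er' on cost
```

The frequency weighting is what lets the shortest-path split resolve cases the case-based baseline cannot, such as identifiers written entirely in lowercase.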

    Some Properties of the Speciation Model for Food-Web Structure - Mechanisms for Degree Distributions and Intervality

    We present a mathematical analysis of the speciation model for food-web structure, which in previous work was shown to yield a good description of empirical data on food-web topology. The degree distributions of the network are derived. Properties of the speciation model are compared to those of other models that successfully describe empirical data. It is argued that the speciation model unifies the underlying ideas of previous theories. In particular, it offers a mechanistic explanation for the success of the niche model of Williams and Martinez and for the frequent observation of intervality in empirical food webs.

    Evolutionary analysis across mammals reveals distinct classes of long non-coding RNAs

    BACKGROUND: Recent advances in transcriptome sequencing have enabled the discovery of thousands of long non-coding RNAs (lncRNAs) across many species. Though several lncRNAs have been shown to play important roles in diverse biological processes, the functions and mechanisms of most lncRNAs remain unknown. Two significant obstacles lie between transcriptome sequencing and functional characterization of lncRNAs: identifying truly non-coding genes from de novo reconstructed transcriptomes, and prioritizing the hundreds of resulting putative lncRNAs for downstream experimental interrogation. RESULTS: We present slncky, a lncRNA discovery tool that produces a high-quality set of lncRNAs from RNA-sequencing data and further uses evolutionary constraint to prioritize lncRNAs that are likely to be functionally important. Our automated filtering pipeline is comparable to manual curation efforts and more sensitive than previously published computational approaches. Furthermore, we developed a sensitive alignment pipeline for aligning lncRNA loci and propose new evolutionary metrics relevant for analyzing sequence and transcript evolution. Our analysis reveals that evolutionary selection acts in several distinct patterns and uncovers two notable classes of intergenic lncRNAs: one showing strong purifying selection on RNA sequence, and another where constraint is restricted to the regulation, but not the sequence, of the transcript. CONCLUSION: Our results highlight that lncRNAs are not a homogeneous class of molecules but rather a mixture of multiple functional classes with distinct biological mechanisms and/or roles. Our novel comparative methods for lncRNAs reveal 233 constrained lncRNAs out of the tens of thousands of currently annotated transcripts, which we make available through the slncky Evolution Browser.

    Population dynamic of the extinct European aurochs: genetic evidence of a north-south differentiation pattern and no evidence of post-glacial expansion

    BACKGROUND: The aurochs (Bos primigenius) was a large bovine that ranged over almost the entirety of the Eurasian continent and North Africa. It is the wild ancestor of modern cattle (Bos taurus), and went extinct in 1627, probably as a consequence of human hunting and the progressive reduction of its habitat. To investigate the genetic history of this species in detail and to compare population dynamics across different European areas, we analysed Bos primigenius remains from various sites across Italy. RESULTS: Fourteen samples provided ancient DNA fragments from the mitochondrial hypervariable region. Our data, jointly analysed with previously published sequences, support the view that Italian aurochsen were genetically similar to modern bovine breeds, but very different from northern/central European aurochsen. Bayesian analyses and coalescent simulations indicate that the genetic variation pattern in both Italian and northern/central European aurochsen is compatible with demographic stability after the last glaciation. We provide evidence that signatures of population expansion can erroneously arise in stable aurochs populations when the different ages of the samples are not taken into account. CONCLUSIONS: Distinct groups of aurochsen probably inhabited Italy and northern/central Europe after the last glaciation. In contrast, Italian and Fertile Crescent aurochsen likely shared several mtDNA sequences, now common in modern breeds. We argue that a certain level of genetic homogeneity characterized aurochs populations in Southern Europe and the Middle East, and that post-glacial recolonization of northern and central Europe advanced, without major demographic expansions, from eastern, not southern, refugia.

    Creating architecture for a digital information system leveraging virtual environments

    Abstract. The topic of the thesis was the creation of a proof-of-concept digital information system that utilizes virtual environments. The focus was on finding a working design that can then be expanded upon. The research was conducted using design science research, with the information system as the artifact. The research was conducted for Nokia Networks in Oulu, Finland, referred to in this document as “the target organization”. An information system is a collection of distributed computing components that come together to create value for an organization. Information system architecture is generally derived from enterprise architecture and consists of data, technical, and application architectures. Data architecture outlines the data that the system uses and the policies related to its usage, manipulation, and storage. Technical architecture relates to various technological areas, such as networking and protocols, as well as any environmental factors. Application architecture deconstructs the applications used in the operation of the information system. Virtual reality is an experience in which presence, autonomy, and interaction come together to create an immersive alternative to a regular display-based computer environment. The most typical form of virtual reality consists of a head-mounted device, controllers, and movement-tracking base stations. The user's head and body movements can be tracked, changing their position in the virtual environment. The proof-of-concept information system architecture used a multi-server solution, in which one central physical server hosted multiple virtual servers. The system consisted of a website, which served as the knowledge center and from which the client software could be downloaded. The client software was the authorization portal, which determined the virtual environments available to the user. The virtual reality application included functionalities that enable co-operative, virtualized use of various Nokia products in immersive environments. The system was tested in working situations, such as during exhibitions with customers. The proof-of-concept system fulfilled many of the functional requirements set for it, allowing for co-operation in virtual reality. Additionally, a rudimentary model for access control was available in the designed system. The shortcomings of the system were related to areas such as security and scaling, which can be further developed by introducing a cloud-hosted environment to the architecture.

    1.5. Enhancing Archaeological Data Collection and Student Learning with a Mobile Relational Database

    In 2011, the Proyecto de Investigación Arqueológico Regional Ancash (PIARA) inaugurated an archaeological field school that employed a comprehensive digital data collection protocol. Students learned to record data on iPads using our customized relational databases for excavation, human skeletal analysis, and artifact classification. The databases integrated digital media, such as vector drawings and annotated photos. In a final research project, the students used the tablet system to analyze excavation contexts and artifacts, visualize relationships between the data, conduct literature reviews, and present their findings. This chapter discusses how students develop a greater comprehension of archaeological concepts and stronger research skills when they collect and analyze data using a relational database. More precisely, it argues that the database develops more perceptive archaeologists who can immediately recognize and interpret relationships between archaeological materials, contexts, and features. The technology, then, not only aids in-field planning and interpretation, but also cultivates analytical thinking.

    Supporting the Maintenance of Identifier Names: A Holistic Approach to High-Quality Automated Identifier Naming

    A considerable part of source code is identifier names: unique lexical tokens that provide information about entities, and entity interactions, within the code. Identifier names provide human-readable descriptions of classes, functions, variables, etc. Poor or ambiguous identifier names (i.e., names that do not correctly describe the code behavior they are associated with) lead developers to spend more time working towards understanding the code's behavior. Bad naming can also have detrimental effects on tools that rely on natural-language clues, degrading the quality of their output and making them unreliable. Additionally, misinterpretations of the code caused by poor names can result in the injection of quality issues into the system under maintenance. Thus, improved identifier naming leads to more effective developers, higher-quality software, and more reliable software analysis tools. In this dissertation, I establish several novel concepts that help measure and improve the quality of identifiers. The output of this dissertation work is a set of identifier name appraisal and quality tools that integrate into the developer workflow. Through a sequence of empirical studies, I have formulated a series of heuristics and linguistic patterns to evaluate the quality of identifier names in code and provide naming structure recommendations. I envision, and am working towards, supporting developers in integrating the contributions discussed in this dissertation into their development workflow, to significantly improve the process of crafting and maintaining high-quality identifier names in source code.
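To make the flavor of such heuristics concrete, here is a minimal sketch of rule-based name checks. These particular rules are generic examples of the genre, not the dissertation's actual heuristics or linguistic patterns.

```python
# Illustrative identifier-naming heuristics; rules are hypothetical examples.
import re

BOOLEAN_PREFIXES = ("is", "has", "can", "should", "was", "will")

def split_terms(identifier: str) -> list[str]:
    """Split a camelCase / snake_case identifier into lowercase terms."""
    terms = []
    for p in identifier.split("_"):
        terms += re.findall(r"[A-Z]+(?![a-z])|[A-Z][a-z]*|[a-z]+|[0-9]+", p)
    return [t.lower() for t in terms if t]

def check_name(identifier: str, type_name: str) -> list[str]:
    """Return human-readable warnings for common naming smells."""
    terms = split_terms(identifier)
    warnings = []
    if len(identifier) <= 2:
        warnings.append("name is too short to describe its entity")
    if type_name == "boolean" and terms and terms[0] not in BOOLEAN_PREFIXES:
        warnings.append("boolean name should start with a predicate (is/has/...)")
    if type_name.endswith("[]") and terms and not terms[-1].endswith("s"):
        warnings.append("collection name should be plural")
    return warnings

print(check_name("enabled", "boolean"))  # flagged: no predicate prefix
print(check_name("employee", "int[]"))   # flagged: collection not plural
```

Checks of this kind are cheap enough to run inside an IDE or a code-review bot, which is what makes workflow integration plausible.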

    Pressure Losses Experienced by Liquid Flow through PDMS Microchannels with Abrupt Area Changes

    Given the mounting disagreement amongst researchers in the area of liquid flow behavior at the microscale over the past thirty years, this work presents a fundamental approach to analyzing the pressure losses experienced by the laminar flow of water (Re = 7 to Re = 130) through both rectangular straight-duct microchannels (of widths ranging from 50 to 130 micrometers) and microchannels with sudden expansions and contractions (with area ratios ranging from 0.4 to 1.0), all with a constant depth of 104 micrometers. The simplified Bernoulli equations for uniform, steady, incompressible, internal duct flow were used to compare flow through these microchannels to macroscale theory predictions for pressure drop. One major advantage of the channel design (and subsequent experimental set-up) was that pressure measurements could be taken locally, directly before and after the test section of interest, instead of globally, which requires extensive corrections to the pressure measurements before an accurate result can be obtained. Bernoulli's equation adjusted for major head losses (using Darcy friction factors) and minor head losses (using appropriate K values) was found to predict the flow behavior within the calculated theoretical uncertainty (~12%) for all 150+ microchannels tested, except for sizes that pushed the aspect-ratio limits of the manufacturing process capabilities (microchannels fabricated via soft lithography using PDMS). The analysis produced conclusive evidence that liquid flow through microchannels at these relative channel sizes and Reynolds numbers follows macroscale predictions without experiencing any of the anomalies reported in other microfluidics research. This work also perfected the delicate technique required to pierce through the PDMS material and into the microchannel inlets, exits, and pressure ports without damaging the microchannel. Finally, two verified explanations for why prior researchers have obtained poor agreement between macroscale theory predictions and tests at the microscale are the presence of bubbles in the microchannel test section (producing higher than expected pressure drops) and the occurrence of localized separation between the PDMS slabs and thus the microchannel itself (producing lower than expected pressure drops).
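For reference, the loss-adjusted Bernoulli balance described above has a standard textbook form; the following is a sketch in that conventional notation, not necessarily the thesis' exact symbols or corrections.

```latex
% Steady, incompressible, uniform internal duct flow between stations 1 and 2:
\frac{p_1}{\rho g} + \frac{V_1^2}{2g} + z_1
  = \frac{p_2}{\rho g} + \frac{V_2^2}{2g} + z_2 + h_L ,
\qquad
h_L = \underbrace{f \, \frac{L}{D_h} \, \frac{V^2}{2g}}_{\text{major (Darcy)}}
    + \underbrace{\sum_i K_i \, \frac{V_i^2}{2g}}_{\text{minor (area changes)}}
```

Here f is the Darcy friction factor, D_h the hydraulic diameter, and K_i the loss coefficients for fittings such as the sudden expansions and contractions studied above.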

    Effects of choral singing versus health education on cognitive decline and aging: a randomized controlled trial.

    We conducted a randomized controlled trial to examine choral singing's effect on cognitive decline in aging. Older Singaporeans who were at high risk of future dementia were recruited: 47 were assigned to a choral singing intervention (CSI) and 46 to a health education program (HEP). Participants attended weekly one-hour choral singing or weekly one-hour health education for two years. Change in cognitive function was measured by a composite cognitive test score (CCTS) derived from the raw scores of neuropsychological tests; biomarkers included brain magnetic resonance imaging, oxidative damage, and immunosenescence. The average age of the participants was 70 years, and 73/93 (78.5%) were female. The change in CCTS from baseline to 24 months was 0.05 among participants in the CSI group and -0.1 among participants in the HEP group. The between-group difference (0.15, p=0.042) became smaller (0.12, p=0.09) after adjusting for baseline CCTS. No between-group differences in biomarkers were observed. Our data support the role of choral singing in improving cognitive health in aging. The beneficial effect is at least comparable to that of health education in preventing cognitive decline in a community of elderly people. The biological mechanisms underlying the observed efficacy should be further studied.
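For readers unfamiliar with the baseline adjustment mentioned above, the sketch below shows how such an adjusted between-group comparison is typically computed (an ANCOVA-style regression). The column names and toy numbers are hypothetical, not the trial's data.

```python
# Baseline-adjusted between-group comparison, sketched with toy data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "change_ccts":   [0.1, -0.2, 0.3, 0.0, -0.1, 0.2],  # 24-month CCTS change
    "group":         ["CSI", "HEP", "CSI", "HEP", "CSI", "HEP"],
    "baseline_ccts": [-0.3, 0.1, 0.0, 0.4, -0.2, 0.3],
})

# Unadjusted: the group coefficient is the raw between-group difference.
unadjusted = smf.ols("change_ccts ~ C(group, Treatment('HEP'))", data=df).fit()

# Adjusted: adding baseline CCTS as a covariate is the step after which the
# abstract's 0.15 difference shrank to 0.12.
adjusted = smf.ols(
    "change_ccts ~ C(group, Treatment('HEP')) + baseline_ccts", data=df
).fit()
print(unadjusted.params)
print(adjusted.params, adjusted.pvalues)
```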