207 research outputs found

    Adult English Learners' Perceptions of their Pronunciation and Linguistic Self-Confidence

    Second language pronunciation research often focuses on intelligibility from the perspective of native speakers. However, few studies examine English learners' (ELs) perceptions of pronunciation, and few examine linguistic self-confidence (LSC). This study explores advanced-level adult ELs' perceptions of their own pronunciation and the relationship between those perceptions and LSC. Inspiration for this study comes from ELs in my classes and Tracy Derwing's 2003 study. This mixed-methods study used an initial questionnaire followed by individual interviews. Results suggest that adult ELs perceive English pronunciation as affecting quality of life in a variety of ways. Results also suggest that a relationship between ELs' perceptions of their own pronunciation and LSC exists, but its extent is unclear. LSC is a highly changeable construct affected by personal, cultural, and social elements, as well as by the context of the communicative situation.

    From Measure to Leisure: Extending Theory on Technology in the Workplace

    The values present both in modern organizations and in research on these organizations reflect the organizational culture that has developed gradually over time. For example, research on organizations regularly focuses on the aspects of work that can be most easily quantified, such as the hierarchy within the organization or the physical arrangement of the office. Less defined aspects of organizations, such as support for visibility and reflection, are more difficult to study and potentially less valued by the organizational culture. Similarly, the scientific management movement that emerged in the wake of the Industrial Revolution is a very visible example of the high value that has been assigned to quantifiable efficiency within the workplace itself. Though the scientific management movement was soon contradicted by findings that showed the importance of psychological factors such as individual recognition, the ultimate response within organizations was to quantify additional aspects of the work environment, with varying degrees of success. The values that give efficiency and quantification this prominence in the workplace and in organizational research also shape the design and use of computing technology in the workplace. Computing has become a significant element in the modern organization, but the accepted role for computing technologies is often restricted to the automation of analytic tasks formerly accomplished by workers. In this way, computing technology becomes a surrogate for a human brain, attempting to model the way a specific type of work has traditionally been done. The mental processes involved in work, however, are not simply analytical. David Levy (2005) contends that the excess of information available for analysis in contemporary work environments cannot be meaningfully processed without allowing workers time for reflection and contemplation. This time may help workers draw connections that are still difficult for computers, or it may provide workers with opportunities for collaboration and diversification. Elevating the importance of visibility and reflection within the workplace may have more success if undertaken in conjunction with the installation of technology designed for this purpose. Because current organizational studies typically omit activities with complex motivations, initial studies on the subject must gather data for the purpose of grounded (inductive) theory generation. The study described herein addresses traditional organizational research topics as well as the presence and use of non-task-based activities in the workplace. The study takes a broad look at a university department encompassing approximately 60 individuals, utilizing surveys and interviews to collect a variety of background information. As an additional intervention, a prototype technology device with ludic intentions was introduced to the department, and its use provided further insight into the role of technology in the workplace. Ultimately, a series of testable hypotheses is proposed to guide further research into visibility and reflection in the workplace.

    Knotty Articulations: Professors and Preservice Teachers on Teaching Literacy in Urban Schools

    In this qualitative study, we examined preservice teachers' articulations of what it meant to teach literacy in urban settings and the roles that we as university instructors played in their understandings of the terms urban, literacy, and teacher. We framed the study within extant studies of teacher education and research on metaphors. Data indicated that the participants metaphorically constructed literacy as an object that could be passed from teacher to student and that was often missing, hidden, or buried in urban settings. Implications of the study suggest that faculty members are one important influence among several in preservice teachers' development as professionals, and that the metaphors faculty use in teaching preservice teachers deserve careful consideration.

    Engaging Researchers in Data Dialogues: Designing Collaborative Programming to Promote Research Data Sharing

    A range of regulatory pressures emanating from funding agencies and scholarly journals increasingly encourages researchers to engage in formal data sharing practices. As academic libraries continue to refine their role in supporting researchers in this data sharing space, one particular challenge has been finding new ways to meaningfully engage with campus researchers. Libraries help shape norms and encourage data sharing through education and training, and there has been significant growth in the services these institutions are able to provide and in the ways library staff are able to collaborate and communicate with researchers. Evidence also suggests that within disciplines, normative pressures and expectations around professional conduct have a significant impact on data sharing behaviors (Kim and Adler 2015; Sigit Sayogo and Pardo 2013; Zenk-Moltgen et al. 2018). Duke University Libraries' Research Data Management program has recently centered part of its outreach strategy on leveraging peer networks and social modeling to encourage and normalize robust data sharing practices among campus researchers. The program has hosted two panel discussions on issues related to data management—specifically, data sharing and research reproducibility. This paper reflects on some lessons learned from these outreach efforts and outlines next steps.

    Fast Nonlinear Least Squares Optimization of Large-Scale Semi-Sparse Problems

    Many problems in computer graphics and vision can be formulated as a nonlinear least squares optimization problem, for which numerous off-the-shelf solvers are readily available. Depending on the structure of the problem, however, existing solvers may be more or less suitable, and in some cases the solution comes at the cost of lengthy convergence times. One such case is semi-sparse optimization problems, emerging for example in localized facial performance reconstruction, where the nonlinear least squares problem can be composed of hundreds of thousands of cost functions, each one involving many of the optimization parameters. While such problems can be solved with existing solvers, the computation time can severely hinder the applicability of these methods. We introduce a novel iterative solver for nonlinear least squares optimization of large-scale semi-sparse problems. We use the nonlinear Levenberg-Marquardt method to locally linearize the problem in parallel, based on its first-order approximation. Then, we decompose the linear problem into small blocks, using the local Schur complement, leading to a more compact linear system without loss of information. The resulting system is dense, but its size is small enough to be solved using a parallel direct method in a short amount of time. The main benefit of such an approach is that the overall optimization process is entirely parallel and scalable, making it suitable to be mapped onto graphics hardware (GPU). By using our minimizer, results are obtained up to one order of magnitude faster than with other existing solvers, without sacrificing the generality and accuracy of the model. We provide a detailed analysis of our approach and validate our results with the application of performance-based facial capture using a recently proposed anatomical local face deformation model.
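    The core Levenberg-Marquardt iteration the abstract builds on can be sketched in a few lines. The toy curve-fitting problem, function names, and damping schedule below are illustrative assumptions for demonstration only, not the paper's parallel GPU solver or its Schur-complement decomposition.

```python
# Minimal Levenberg-Marquardt sketch: at each step, solve the damped
# normal equations (J^T J + lam*I) dx = -J^T r from the first-order
# linearization, then adapt the damping based on whether the step helped.
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-3, iters=50):
    x = x0.copy()
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        H = J.T @ J + lam * np.eye(x.size)  # damped Gauss-Newton Hessian
        g = J.T @ r
        dx = np.linalg.solve(H, -g)
        x_new = x + dx
        if np.linalg.norm(residual(x_new)) < np.linalg.norm(r):
            x, lam = x_new, lam * 0.5       # accept step, reduce damping
        else:
            lam *= 2.0                      # reject step, increase damping
    return x

# Toy problem: fit y = a * exp(b * t) to synthetic data with a=2, b=1.5.
t = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t),
                                 p[0] * t * np.exp(p[1] * t)])
p = levenberg_marquardt(res, jac, np.array([1.0, 1.0]))
```

    In the semi-sparse setting the paper targets, the linear solve in the middle is the bottleneck, which is where block decomposition via the Schur complement comes in.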

    A Perceptual Shape Loss for Monocular 3D Face Reconstruction

    Monocular 3D face reconstruction is a widespread topic, and existing approaches tackle the problem either through fast neural network inference or offline iterative reconstruction of face geometry. In either case, carefully designed energy functions are minimized, commonly including loss terms like a photometric loss, a landmark reprojection loss, and others. In this work we propose a new loss function for monocular face capture, inspired by how humans would perceive the quality of a 3D face reconstruction given a particular image. It is widely known that shading provides a strong indicator for 3D shape in the human visual system. As such, our new 'perceptual' shape loss aims to judge the quality of a 3D face estimate using only shading cues. Our loss is implemented as a discriminator-style neural network that takes an input face image and a shaded render of the geometry estimate, and then predicts a score that perceptually evaluates how well the shaded render matches the given image. This 'critic' network operates on the RGB image and geometry render alone, without requiring an estimate of the albedo or illumination in the scene. Furthermore, our loss operates entirely in image space and is thus agnostic to mesh topology. We show how our new perceptual shape loss can be combined with traditional energy terms for monocular 3D face optimization and deep neural network regression, improving upon current state-of-the-art results.
    Accepted to PG 2023. Project page: https://studios.disneyresearch.com/2023/10/09/a-perceptual-shape-loss-for-monocular-3d-face-reconstruction/ Video: https://www.youtube.com/watch?v=RYdyoIZEuU

    Interactive Sculpting of Digital Faces Using an Anatomical Modeling Paradigm

    Digitally sculpting 3D human faces is a very challenging task. It typically requires either 1) highly skilled artists using complex software packages for high-quality results, or 2) highly constrained simple interfaces for consumer-level avatar creation, such as in game engines. We propose a novel interactive method for the creation of digital faces that is simple and intuitive to use, even for novice users, while consistently producing plausible 3D face geometry and allowing editing freedom beyond traditional video game avatar creation. At the core of our system lies a specialized anatomical local face model (ALM), which is constructed from a dataset of several hundred 3D face scans. User edits are propagated to constraints for an optimization of our data-driven ALM model, ensuring the resulting face remains plausible even for simple edits like clicking and dragging surface points. We show how several natural interaction methods can be implemented in our framework, including direct control of the surface; indirect control of semantic features like age, ethnicity, gender, and BMI; and indirect control through manipulating the underlying bony structures. The result is a simple new method for creating digital human faces, for artists and novice users alike. Our method is attractive for low-budget VFX and animation productions, and our anatomical modeling paradigm can complement traditional game engine avatar design packages.

    Design and update of a classification system: The UCSD map of science

    Global maps of science can be used as a reference system to chart career trajectories, the location of emerging research frontiers, or the expertise profiles of institutes or nations. This paper details the data preparation, analysis, and layout performed when designing and subsequently updating the UCSD map of science and classification system. The original classification and map use 7.2 million papers and their references from Elsevier's Scopus (about 15,000 source titles, 2001-2005) and Thomson Reuters' Web of Science (WoS) Science, Social Science, and Arts & Humanities Citation Indexes (about 9,000 source titles, 2001-2004), for about 16,000 unique source titles. The updated map and classification add six years (2005-2010) of WoS data and three years (2006-2008) from Scopus to the existing category structure, increasing the number of source titles to about 25,000. To our knowledge, this is the first time that a widely used map of science has been updated. A comparison of the original 5-year and the new 10-year maps and classification system shows (i) an increase of 9,409 in the total number of journals that can be mapped (social sciences had an 80% increase, humanities 119%, medical 32%, and natural sciences 74%), (ii) a simplification of the map by assigning all but five highly interdisciplinary journals to exactly one discipline, (iii) a more even distribution of journals over the 554 subdisciplines and 13 disciplines when calculating the coefficient of variation, and (iv) a better reflection of journal clusters when compared with paper-level citation data. When evaluated against a listing of desirable features for maps of science, the updated map shows higher mapping accuracy, easier understandability (as fewer journals are multiply classified), and higher usability for the generation of data overlays, among other improvements.
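    The coefficient of variation used in point (iii) above is simply the standard deviation of the journal counts divided by their mean; a lower value means journals are spread more evenly across subdisciplines. The counts below are made-up illustrative numbers, not the UCSD map's actual data.

```python
# Coefficient of variation (population std / mean) as a scale-free
# measure of how evenly journals are distributed across subdisciplines.
import statistics

def coefficient_of_variation(counts):
    return statistics.pstdev(counts) / statistics.mean(counts)

uneven = [200, 5, 5, 5, 5]     # most journals piled into one subdiscipline
even   = [44, 44, 44, 44, 44]  # same total, perfectly even distribution

print(coefficient_of_variation(uneven))  # large value: uneven spread
print(coefficient_of_variation(even))    # 0.0: perfectly even spread
```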

    Conversation with Daniel Goleman about the relationship between the person viewing art and the art itself

    Daniel Goleman, best known for his worldwide bestseller “Emotional Intelligence,” is most recently co-author of “Altered Traits: Science Reveals How Meditation Changes Your Mind, Brain and Body.” A meditator since his college days, Goleman has spent two years in India, first as a Harvard Predoctoral Traveling Fellow and then again on a Post-Doctoral Fellowship. Dr. Goleman’s first book, “The Meditative Mind: The Varieties of Meditative Experience,” was written on the basis of that research, offering an overview of various meditation paths. Goleman has moderated several Mind and Life dialogues between the Dalai Lama and scientists, on topics ranging from “Emotions and Health” to “Environment, Ethics and Interdependence.” Goleman’s 2014 book, “A Force for Good: The Dalai Lama's Vision for Our World,” combines the Dalai Lama’s key teachings, empirical evidence, and true accounts of people putting his lessons into practice, offering readers guidance for making the world a better place. Having worked with leaders, teachers, and groups around the globe, Daniel Goleman has transformed the way the world educates children, relates to family and friends, and conducts business.

    Draper Station Analysis Tool

    Draper Station Analysis Tool (DSAT) is a computer program, built on commercially available software, for simulating and analyzing complex dynamic systems. Heretofore used in designing and verifying guidance, navigation, and control systems of the International Space Station, DSAT has a modular architecture that lends itself to modification for application to spacecraft or terrestrial systems. DSAT consists of user-interface, data-structures, simulation-generation, analysis, plotting, documentation, and help components. DSAT automates the construction of simulations and the process of analysis. DSAT provides a graphical user interface (GUI), plus a Web-enabled interface, similar to the GUI, that enables a remotely located user to gain access to the full capabilities of DSAT via the Internet and Web browser software. Data structures are used to define the GUI, the Web-enabled interface, simulations, and analyses. Three data structures define the type of analysis to be performed: closed-loop simulation, frequency response, and/or stability margins. DSAT can be executed on almost any workstation, desktop, or laptop computer. DSAT provides better than an order of magnitude improvement in cost, schedule, and risk assessment for simulation-based design and verification of complex dynamic systems.