
    Designing for Our New Scale: A Provocation

    “Growth” is endemic to our economic system. Economies of scale, long held as a critical means to efficiency and profitability, have taken on new meanings: whereas they once relied on the skills of the mechanical engineer to realize large-scale manufacturing capabilities, our cultural shift towards the individual places the focus on customization of both product research and design. In the age of networked information and networked publics—built on the “solutions” of large, multinational companies like Cisco and EMC, as well as small, connected product and service startups—this paradoxical emphasis on broad-ranging customization is resolved through the automated tools at our disposal today: platforms, data, and algorithms, to name a few. Throughout the decades, however, scalability and growth have always been perceived as essential by investors and the public alike. As commercial practitioners, we are often tasked with the design and development of projects meant to reach a large audience—at the time of release or further down the line. For instance, a website for a large NGO must scale in terms of both content and reach, accommodating a broad swath of information types for a global audience. Increasingly, as the integration of social streams and other “open” sources of content becomes valued by clients, access to publicly available APIs requires accommodating the parameters set by those sources—often multi-billion-dollar corporations. To develop designs and technologies that scale, we build on or develop our own platforms. These platforms include those with which we are familiar and interact every day, such as YouTube (and its API), Twitter, and various content management systems such as SiteCore or WordPress. The way in which we are able to design for scale today is enabled by our ability to capture a tremendous (and often overwhelming) amount of data. “Big Data” has become common parlance. We use the data that we capture to make inferences about the users for whom we design, giving us the ability to scale solutions across geographies, demographics, and markets. Algorithms are pervasive in today’s experience of designing at scale, especially as the time and cognition required to process the volume of information with which we interact increase. Once the sources of our data and content are identified, we must process that information in order to present it back to our end user in a manner unique to our project. Infusing this data with value requires moving it through algorithms—ones that aggregate, analyze, modify, and more

    Designing systems for praxis and critical engagement in design education: the speculative design method and the revelation of theory

    To design systems that encourage learners to think systematically and consider the systems that exert power over their lives and the lives of others is to imbue them with the ability to free themselves from those powers. This paper seeks to present the perspectives of design educators working on implementing critical systemic thinking, hopefully inspiring awareness through practice and discourse in the domain of speculative and experimental design. “Whenever we need a revolution,” writes Neil Postman as he paraphrases Lawrence Cremin in Technopoly: The Surrender of Culture to Technology, “we get a new curriculum.” The design of learning experiences, whether for high school history classes, undergraduate design classes, or online learning communities, is the design of systems. The practice of designing learning technologies is also a practice of system design. Within the design of every system is embedded a particular ideology. Whether few or many, designers, writers, administrators, policy-makers, and other individuals and groups of people establish the systems that shape our lives. Acting on the authorship of these systems are personal and cultural forces that shape how the systems are designed and understood. Possibly the most important ideology that we can embed within the design of contemporary learning systems is that of critical discourse through praxis: a tangible criticality. The creation of this tangible criticality can be thought of as a visualization of a system: a way to see and understand the systems that influence our thinking. We facilitate the creation of artifacts and experiences that function as a part of larger systems. These artifacts include products, websites, maps, books, prototypes, and proposals for technologies or experiences that may not yet be able to exist, all the while co-opting the language of the mainstream—one that is, today, based on the same values as the forces of the technologies upon which they act: objectivity, fact, proof, benefit, and other seemingly “neutral” concepts. To be certain, this language does not exist in a vacuum. Rather, it is propagated through the channels and media with which producers sell their wares—product packaging, marketing materials, sales pitches, advertisements, and, most importantly, the programming of the product itself (be it analog or digital). At the helm of these channels sits the designer, trained to critically question the situation in order to design a final product or campaign that best addresses the needs of those seeking her expertise. As well-intentioned or conscientious as our designer may be, however, as Robin Greeley offers, there is no escaping “that intricate web of social structures and practices within which the designer’s conscious—and unconscious—decisions are made as to which set of forms will carry what significations.” In the closing chapter of his work, Postman further notes that, “to chart the ascent of man, we must join art and science.” The cross-pollination of systems thinking and design praxis to elucidate the forces acting on students’ worlds builds this bridge. By designing systems that encourage praxis and a critical engagement with the world, we prompt learners to reconsider the implications of the systems at work on them, those that facilitate and create the relationships driving our everyday experiences. These designed systems for praxis and critical inquiry blur the line between technological and experiential. We design curricula based on the speculative design method as a way to prompt learners to consider the trajectory on which our current society travels and to imagine and plot a point in the future through both design and writing. We also design learning technologies, some of which only become partially realized in the world, as a way to inform classroom pedagogy

    Investigation of the critical heat flux in a rod bundle configuration under low pressure conditions

    Diverse boiling phenomena occur during the operation of light-water reactors. Understanding them is necessary to guarantee safe service and to avoid unstable operating modes. For example, the behavior of the coolant can range from subcooled boiling during normal operation to critical boiling during a disturbance. In addition, boiling effects also appear in the secondary loop of the steam generator. The boiling process allows significantly higher heat transfer rates compared to single-phase convection, but this heat transport decreases abruptly when the limit of the critical heat flux (CHF) is reached. The occurrence of the boiling crisis generally leads to severe damage to facility components and has to be avoided during reactor operation. To date, there is no reliable method for predicting this phenomenon based on universally valid correlations. A prediction method based on the solution of the transport equations for the two-phase flow of water and steam would be of substantial benefit for reactor safety research. Many correlations exist, based on experimental observations or theoretical considerations, which try to explain the occurrence and development of the critical heat flux. Unfortunately, they cannot be combined into one complete model, as they predict contradictory effects or are built on different physical mechanisms. For example, the ‘Near Wall Bubble Crowding Model’ [Kandlikar, S. G., 2011] postulates that turbulent liquid transport to the wall decreases with increasing heat flux as bubbles concentrate near the wall, whereas the ‘Interfacial Lift-Off Model’ [Galloway, J., Mudawar, I., 1993] predicts pseudo-periodic ‘wetting fronts’ whose lift-off from the wall causes the agglomeration of steam that leads to the CHF. Using the COSMOS-L test facility, the IKET at KIT contributes to analyzing the different existing theories and examines specific phenomena such as flow patterns or void distribution in flow boiling
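
    As a point of reference for the kind of correlation discussed above (this particular relation is classical pool-boiling theory, not a result of the work summarized here), the Zuber correlation expresses the critical heat flux as

        q''_{\mathrm{CHF}} = C \, h_{fg} \, \rho_g^{1/2} \left[ \sigma \, g \, (\rho_l - \rho_g) \right]^{1/4}, \qquad C \approx 0.131,

    where h_{fg} is the latent heat of vaporization, ρ_l and ρ_g are the liquid and vapour densities, σ is the surface tension, and g is the gravitational acceleration. Flow-boiling CHF correlations for rod bundles additionally depend on mass flux, local quality, pressure, and geometry, which is one reason the existing models resist unification into a single predictive framework.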

    La influencia del tamaño de grupo en el rendimiento académico. Un estudio empírico

    The European Higher Education Area (EHEA) implies a new organization of the traditional university system, which has been characterized by teaching in overcrowded classrooms. Previous studies reveal various problems associated with large class groups. The objective of this work is to analyze the influence of group size on students' academic performance, taking into account the effect that students' university entrance grades may have on the results. To this end, a sample of 223 undergraduate teaching groups at the Escuela Técnica Superior de Ingeniería Industrial of the Universidad Politécnica de Cartagena is analyzed. The results indicate that both students with high entrance grades and students with low entrance grades obtain worse academic performance as group size increases

    Dynamic Resting-State Functional Connectivity in Major Depression

    Major depressive disorder (MDD) is characterized by abnormal resting-state functional connectivity (RSFC), especially in medial prefrontal cortical (MPFC) regions of the default network. However, prior research in MDD has not examined dynamic changes in functional connectivity as networks form, interact, and dissolve over time. We compared unmedicated individuals with MDD (n=100) to control participants (n=109) on dynamic RSFC (operationalized as the standard deviation of RSFC across a series of sliding windows) of an MPFC seed region during a resting-state functional magnetic resonance imaging scan. Among participants with MDD, we also investigated the relationship between symptom severity and dynamic RSFC. Secondary analyses probed the association between dynamic RSFC and rumination. Results showed that individuals with MDD were characterized by decreased dynamic (less variable) RSFC between MPFC and regions of parahippocampal gyrus within the default network, a pattern related to sustained positive connectivity between these regions across sliding windows. In contrast, the MDD group exhibited increased dynamic (more variable) RSFC between MPFC and regions of insula, and higher severity of depression was related to increased dynamic RSFC between MPFC and dorsolateral prefrontal cortex. These patterns of highly variable RSFC were related to a greater frequency of strong positive and negative correlations in activity across sliding windows. Secondary analyses indicated that increased dynamic RSFC between MPFC and insula was related to higher levels of recent rumination. These findings provide initial evidence that depression, and ruminative thinking in depression, are related to abnormal patterns of fluctuating communication among brain systems involved in regulating attention and self-referential thinking
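
    To make the sliding-window operationalization above concrete, the sketch below computes the standard deviation of windowed seed-to-target correlations for a single pair of time series. It is a minimal illustration, not the study's analysis pipeline; the function name, window length, and step size are illustrative assumptions.

        import numpy as np

        def dynamic_rsfc(seed_ts, target_ts, window_len=30, step=1):
            """Sliding-window functional connectivity variability.

            seed_ts, target_ts : 1-D arrays of equal length (one value per
            fMRI volume). Returns the windowed correlations and their SD,
            one common operationalization of "dynamic" RSFC.
            """
            n = len(seed_ts)
            corrs = []
            for start in range(0, n - window_len + 1, step):
                s = seed_ts[start:start + window_len]
                t = target_ts[start:start + window_len]
                corrs.append(np.corrcoef(s, t)[0, 1])
            corrs = np.asarray(corrs)
            return corrs, corrs.std(ddof=1)

        # Toy usage with synthetic data (not real fMRI):
        rng = np.random.default_rng(0)
        seed = rng.standard_normal(200)
        target = 0.5 * seed + rng.standard_normal(200)
        windowed_r, variability = dynamic_rsfc(seed, target)
        print(f"{len(windowed_r)} windows, SD of windowed r = {variability:.3f}")

    A low SD across windows corresponds to the "less variable" connectivity reported for the default-network regions, while a high SD corresponds to the "more variable" MPFC-insula pattern.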

    Absolute photoluminescence quantum yields of IR26 and IR-emissive Cd₁₋ₓHgₓTe and PbS quantum dots: method- and material-inherent challenges

    Bright emitters with photoluminescence in the spectral region of 800–1600 nm are increasingly important as optical reporters for molecular imaging, sensing, and telecommunication and as active components in electrooptical and photovoltaic devices. Their rational design is directly linked to suitable methods for the characterization of their signal-relevant properties, especially their photoluminescence quantum yield (Φf). Aiming at the development of bright semiconductor nanocrystals with emission >1000 nm, we designed a new NIR/IR integrating sphere setup for the wavelength region of 600–1600 nm. We assessed the performance of this setup by acquiring the corrected emission spectra and Φf of the organic dyes Itrybe, IR140, and IR26 and several infrared (IR)-emissive Cd₁₋ₓHgₓTe and PbS semiconductor nanocrystals and comparing them to data obtained with two independently calibrated fluorescence instruments, either absolutely or relative to previously evaluated reference dyes. Our results highlight special challenges of photoluminescence studies in the IR, ranging from solvent absorption to the lack of spectral and intensity standards, together with quantum dot-specific challenges like photobrightening and photodarkening and the size-dependent air stability and photostability of differently sized oleate-capped PbS colloids. These effects can be representative of lead chalcogenides. Moreover, we redetermined the Φf of IR26, the most frequently used IR reference dye, to 1.1 × 10⁻³ in 1,2-dichloroethane (DCE) with a thorough correction for sample reabsorption and solvent absorption. Our results indicate the need for a critical reevaluation of Φf values of IR-emissive nanomaterials and offer guidelines for improved Φf measurements
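
    For orientation, the relative method mentioned above (a standard fluorometry relation, not a formula specific to this paper) determines the quantum yield of a sample x against a reference of known Φf as

        \Phi_{f,x} = \Phi_{f,\mathrm{ref}} \cdot \frac{F_x}{F_\mathrm{ref}} \cdot \frac{f_\mathrm{ref}(\lambda_\mathrm{ex})}{f_x(\lambda_\mathrm{ex})} \cdot \frac{n_x^2}{n_\mathrm{ref}^2}, \qquad f(\lambda_\mathrm{ex}) = 1 - 10^{-A(\lambda_\mathrm{ex})},

    where F is the spectrally corrected, integrated emission, f the absorption factor at the excitation wavelength, A the absorbance, and n the refractive index of the respective solvent. The accuracy of this chain is only as good as the reference value, which is why a redetermined Φf for IR26 matters for the entire IR region.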

    Contact-controlled amoeboid motility induces dynamic cell trapping in 3D-microstructured surfaces.

    On flat substrates, several cell types exhibit amoeboid migration, which is characterized by restless stochastic successions of pseudopod protrusions. The orientation and frequency of new membrane protrusions characterize efficient search modes, which can respond to external chemical stimuli as observed during chemotaxis in amoebae. To quantify the influence of mechanical stimuli induced by surface topography on the migration modes of the amoeboid model organism Dictyostelium discoideum, we apply high-resolution motion analysis in microfabricated pillar arrays of defined density and geometry. Cell motion is analyzed with a two-state motility model, distinguishing directed cellular runs from phases of isotropic migration that are characterized by randomly oriented cellular protrusions. Cells lacking myosin II or cells deprived of microtubules show significantly different behavior with respect to migration velocities and migration angle distributions, without pronounced attraction to pillars. We conclude that microtubules enhance the cells' ability to react to external 3D structures. Our experiments on wild-type cells show that the switch from randomly formed pseudopods to a stabilized leading pseudopod is triggered by contact with surface structures. These alternating processes guide cells according to the available surface in their 3D environment, which we observed both dynamically and in steady-state situations. As a consequence, cells perform "home runs" in low-density pillar arrays, crawling from pillar to pillar with a characteristic dwell time of 75 s. At the boundary between a flat surface and a 3D-structured substrate, cells preferentially localize in contact with micropillars, owing to the additional surface available in the microstructured arrays. Such responses of cell motility to microstructures might open new possibilities for cell sorting on surface-structured arrays
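
    The two-state analysis above can be illustrated with a deliberately simplified sketch that labels each step of a tracked trajectory as part of a directed run or an isotropic phase based on directional persistence. This is a toy version under assumed thresholds, not the authors' model; the function name and threshold value are invented for illustration.

        import numpy as np

        def classify_phases(xy, persistence_threshold=0.7):
            """Toy two-state classification of a cell trajectory.

            xy : (N, 2) array of cell centroid positions at equal time
            intervals. A step is labelled a directed 'run' when the cosine of
            the turning angle relative to the previous step exceeds
            persistence_threshold; otherwise it is labelled 'isotropic'.
            Returns one label per interior step.
            """
            steps = np.diff(xy, axis=0)                       # displacement vectors
            norms = np.linalg.norm(steps, axis=1)
            unit = steps / np.maximum(norms[:, None], 1e-12)  # avoid division by zero
            cos_turn = np.sum(unit[1:] * unit[:-1], axis=1)   # cosine of turning angles
            return np.where(cos_turn > persistence_threshold, "run", "isotropic")

        # Toy usage with a synthetic trajectory (not measurement data):
        rng = np.random.default_rng(1)
        positions = np.cumsum(rng.standard_normal((100, 2)), axis=0)
        labels = classify_phases(positions)
        print("fraction of 'run' steps:", np.mean(labels == "run"))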

    Ecosystem restoration strengthens pollination network resilience and function.

    Land degradation results in declining biodiversity and the disruption of ecosystem functioning worldwide, particularly in the tropics. Vegetation restoration is a common tool used to mitigate these impacts and increasingly aims to restore ecosystem functions rather than species diversity. However, evidence from community experiments on the effect of restoration practices on ecosystem functions is scarce. Pollination is an important ecosystem function and the global decline in pollinators attenuates the resistance of natural areas and agro-environments to disturbances. Thus, the ability of pollination functions to resist or recover from disturbance (that is, the functional resilience) may be critical for ensuring a successful restoration process. Here we report the use of a community field experiment to investigate the effects of vegetation restoration, specifically the removal of exotic shrubs, on pollination. We analyse 64 plant-pollinator networks and the reproductive performance of the ten most abundant plant species across four restored and four unrestored, disturbed mountaintop communities. Ecosystem restoration resulted in a marked increase in pollinator species, visits to flowers and interaction diversity. Interactions in restored networks were more generalized than in unrestored networks, indicating a higher functional redundancy in restored communities. Shifts in interaction patterns had direct and positive effects on pollination, especially on the relative and total fruit production of native plants. Pollinator limitation was prevalent at unrestored sites only, where the proportion of flowers producing fruit increased with pollinator visitation, approaching the higher levels seen in restored plant communities. Our results show that vegetation restoration can improve pollination, suggesting that the degradation of ecosystem functions is at least partially reversible. The degree of recovery may depend on the state of degradation before restoration intervention and the proximity to pollinator source populations in the surrounding landscape. We demonstrate that network structure is a suitable indicator for pollination quality, highlighting the usefulness of interaction networks in environmental management
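
    For readers unfamiliar with the network descriptors mentioned above, the sketch below computes two common ones, connectance and the Shannon diversity of interactions, from a plant-pollinator visitation matrix. It is a generic illustration with made-up inputs, not the analysis pipeline used in this study.

        import numpy as np

        def network_summary(visits):
            """Simple descriptors of a plant-pollinator visitation matrix.

            visits : 2-D array (plants x pollinators) of observed visit counts.
            Returns connectance (realized fraction of possible links) and the
            Shannon diversity of interaction frequencies.
            """
            links = visits > 0
            connectance = links.sum() / links.size
            p = visits[visits > 0] / visits.sum()
            interaction_diversity = -(p * np.log(p)).sum()
            return connectance, interaction_diversity

        # Toy usage with a made-up 3-plant x 4-pollinator matrix (not study data):
        visits = np.array([[5, 0, 2, 0],
                           [1, 3, 0, 0],
                           [0, 0, 4, 6]])
        c, h = network_summary(visits)
        print(f"connectance = {c:.2f}, interaction diversity (Shannon) = {h:.2f}")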

    Recently photoassimilated carbon and fungus-delivered nitrogen are spatially correlated in the ectomycorrhizal tissue of Fagus sylvatica

    Ectomycorrhizal plants trade plant-assimilated carbon for soil nutrients with their fungal partners. The underlying mechanisms, however, are not fully understood. Here we investigate the exchange of carbon for nitrogen in the ectomycorrhizal symbiosis of Fagus sylvatica across different spatial scales, from the root system to the cellular level. We provided ¹⁵N-labelled nitrogen to mycorrhizal hyphae associated with one half of the root system of young beech trees, while exposing the plants to a ¹³CO₂ atmosphere. We analysed the short-term distribution of ¹³C and ¹⁵N in the root system with isotope-ratio mass spectrometry, and at the cellular scale within a mycorrhizal root tip with nanoscale secondary ion mass spectrometry (NanoSIMS). At the root-system scale, plants did not allocate more ¹³C to root parts that received more ¹⁵N. NanoSIMS imaging, however, revealed a highly heterogeneous yet significantly spatially correlated distribution of ¹³C and ¹⁵N at the cellular scale. Our results indicate that, on a coarse scale, plants do not allocate a larger proportion of photoassimilated C to root parts associated with N-delivering ectomycorrhizal fungi. Within the ectomycorrhizal tissue, however, recently plant-assimilated C and fungus-delivered N were spatially strongly coupled. Here, NanoSIMS visualisation provides an initial insight into the regulation of ectomycorrhizal C and N exchange at the microscale
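
    A minimal sketch of the kind of cell-scale analysis described above: given two co-registered enrichment maps, correlate them pixel by pixel. The function, the use of Pearson's r, and the optional mask are assumptions for illustration, not the imaging pipeline of the study.

        import numpy as np
        from scipy import stats

        def pixelwise_isotope_correlation(c13_map, n15_map, mask=None):
            """Correlate two co-registered isotope enrichment images pixel by pixel.

            c13_map, n15_map : 2-D arrays of per-pixel enrichment (e.g. 13C/12C
            and 15N/14N ratios from co-registered NanoSIMS planes). mask
            optionally restricts the analysis to pixels inside the tissue of
            interest. Returns Pearson's r and its p-value.
            """
            if mask is None:
                mask = np.ones(c13_map.shape, dtype=bool)
            x = c13_map[mask].ravel()
            y = n15_map[mask].ravel()
            return stats.pearsonr(x, y)

        # Toy usage with synthetic, partially correlated maps (not measurement data):
        rng = np.random.default_rng(2)
        c13 = rng.random((64, 64))
        n15 = 0.6 * c13 + 0.4 * rng.random((64, 64))
        r, p = pixelwise_isotope_correlation(c13, n15)
        print(f"Pearson r = {r:.2f}, p = {p:.1e}")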