Comparing the cognitive profile of the HCI professional and the HCI educator
Previous research into Human-Computer Interaction (HCI) education has focussed mainly on the curriculum, pedagogy, and the gap between education and practice; little is known about the cognitive profile of the HCI practitioner or educator, or how their individual differences affect practice in the field or the classroom. This research addresses that gap by investigating the cognitive style of HCI practitioners, educators, and those with both roles.
315 professionals responded to a global online survey which captured their individual cognitive style using two instruments: the Allinson and Hayes Cognitive Style Index (CSI), which tests whether the subject tends more towards the intuitivist or the analyst, and the Object-Spatial Imagery and Verbal Questionnaire (OSIVQ), which proposes a three-dimensional model of cognitive style – object imagers, who prefer to construct pictorial images; spatial imagers, who prefer schematic representations; and verbalizers, who prefer verbal-analytical tools. Together, these two instruments provide a profile that matches the skills required to work within the field of HCI. The respondents included practitioners in the field (N=179), educators (N=61), and some who were both practitioner and educator (N=75).
A one-way between-groups ANOVA and a MANOVA were performed to investigate differences between the professional roles on the CSI and OSIVQ profiles respectively, followed by Welch's t-test to compare their OSIVQ scores with the published normative values.
The ANOVA comparing the CSI scores for each of the groups revealed a statistically significant difference, F(2, 312) = 3.35, p < .05, and post-hoc comparisons using the Tukey HSD test indicated that the mean score for the educators was significantly different from that of the ‘both’ group. The practitioners did not differ significantly from either the educators or ‘both’. This may in part be explained by the fact that HCI is very often taught by an academic with a computer science background rather than an HCI specialist, but further investigation is needed in this area.
The MANOVA used the three constructs of the OSIVQ as dependent variables. No significant difference was found between the groups. However, the t-tests comparing the professional against the normative data revealed that whilst there was no significant difference between the object imager score of the HCI professional and the scientist, there was a difference between the spatial imager score of the HCI professional and the visual artist, perhaps again reflecting the computer science background of many professionals.
24 survey respondents have been interviewed and the resulting data will form the basis of a thematic analysis to extend the cognitive profile, and to identify the predominant technological frames of operation. Applying this concept of technological frames to the domain of HCI, will help to make sense of the adoption and application of knowledge, tools and techniques amongst this community.
In order for the curriculum to meet the needs of the market, the educator must understand the practitioner so as to produce graduates equipped for the role. Finally, as HCI is delivered in a multidisciplinary environment, should it not also be taught by a multidisciplinary team?
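The statistical pipeline this abstract describes (one-way ANOVA, Tukey HSD post-hoc comparisons, Welch's t-test against normative values) can be sketched in Python with SciPy. The scores below are synthetic placeholders, not the survey data; only the group sizes follow the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic CSI scores (invented for illustration); group sizes match the
# abstract: practitioners N=179, educators N=61, both roles N=75.
practitioners = rng.normal(45, 12, 179)
educators = rng.normal(52, 12, 61)
both_roles = rng.normal(43, 12, 75)

# One-way between-groups ANOVA on the CSI scores.
f_stat, p_value = stats.f_oneway(practitioners, educators, both_roles)

# Post-hoc pairwise comparisons with Tukey's HSD (recent SciPy versions).
tukey = stats.tukey_hsd(practitioners, educators, both_roles)

# Welch's t-test (unequal variances) against a placeholder normative sample.
normative = rng.normal(47, 12, 200)
t_stat, t_p = stats.ttest_ind(educators, normative, equal_var=False)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")
print(f"Welch: t = {t_stat:.2f}, p = {t_p:.3f}")
```

Welch's variant is selected via `equal_var=False`, which drops the pooled-variance assumption that an ordinary t-test would make against normative data from a different population.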
River water-level estimation using visual sensing
This paper reports our initial work on the extraction of environmental information from images sampled from a camera deployed to monitor a river environment. It demonstrates very promising results for the use of a visual sensor in a smart multi-modal sensor network.
Report of the sensor readout electronics panel
The findings of the Sensor Readout Electronics Panel are summarized in regard to technology assessment and recommended development plans. In addition to two specific readout issues, cryogenic readouts and sub-electron noise, the panel considered three advanced technology areas that impact the ability to achieve large format sensor arrays. These are mega-pixel focal plane packaging issues, focal plane to data processing module interfaces, and event driven readout architectures. Development in each of these five areas was judged to have significant impact in enabling the sensor performance desired for the Astrotech 21 mission set. Other readout issues, such as focal plane signal processing or other high volume data acquisition applications important for Eos-type mapping, were determined not to be relevant for astrophysics science goals.
Deployable Payloads with Starbug
We explore the range of wide-field multi-object instrument concepts taking advantage of the unique capabilities of the Starbug focal plane positioning concept. Advances to familiar instrument concepts, such as fiber positioners and deployable fiber-fed IFUs, are discussed along with image relays and deployable active sensors. We conceive deployable payloads as components of systems more traditionally regarded as part of telescope systems rather than instruments – such as adaptive optics and ADCs. Also presented are some of the opportunities offered by the truly unique capabilities of Starbug, such as microtracking to apply intra-field distortion correction during the course of an observation.
Comment: 12 pages, 8 figures, to be published in Proc. SPIE 6273, "Opto-Mechanical Technologies for Astronomy"
Advances on CMOS image sensors
This paper offers an introduction to the technological advances of image sensors designed using complementary metal–oxide–semiconductor (CMOS) processes over the last decades. We review some of those technological advances, examine potential disruptive growth directions for CMOS image sensors, and propose ways to achieve them. Those advances include breakthroughs in image quality, such as resolution, capture speed, light sensitivity and color detection, and advances in computational imaging. The current trend is to push the innovation efforts even further, as the market requires higher-resolution, higher-speed, lower-power and, above all, lower-cost sensors. Although CMOS image sensors are currently used in several different applications, from consumer to defense to medical diagnosis, product differentiation is becoming both a requirement and a difficult goal for any image sensor manufacturer. The unique properties of the CMOS process allow the integration of several signal processing techniques and are driving the impressive advancement of computational imaging. With this paper, we offer a comprehensive review of methods, techniques, designs and fabrication of CMOS image sensors that have impacted or might impact image sensor applications and markets.
CMOS-3D smart imager architectures for feature detection
This paper reports a multi-layered smart image sensor architecture for feature extraction based on detection of interest points. The architecture is conceived for 3-D integrated circuit technologies consisting of two layers (tiers) plus memory. The top tier includes sensing and processing circuitry aimed to perform Gaussian filtering and generate Gaussian pyramids in a fully concurrent way. The circuitry in this tier operates in the mixed-signal domain. It embeds in-pixel correlated double sampling, a switched-capacitor network for Gaussian pyramid generation, analog memories, and a comparator for in-pixel analog-to-digital conversion. This tier can be further split into two for improved resolution: one containing the sensors and another containing a capacitor per sensor plus the mixed-signal processing circuitry. The bottom tier embeds digital circuitry responsible for the calculation of the Harris, Hessian, and difference-of-Gaussian detectors. The overall system can hence be configured by the user to detect interest points using whichever of these three algorithms is better suited to the application at hand. The paper describes the different kinds of algorithms featured and the circuitry employed at the top and bottom tiers. The Gaussian pyramid is implemented with a switched-capacitor network in less than 50 μs, outperforming more conventional solutions.
Xunta de Galicia 10PXIB206037PR; Ministerio de Ciencia e Innovación TEC2009-12686, IPT-2011-1625-430000; Office of Naval Research N00014111031
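The Gaussian-pyramid and difference-of-Gaussian stages that the chip implements in mixed-signal hardware have a straightforward software analogue. The sketch below is purely illustrative: scipy's Gaussian filter stands in for the switched-capacitor network, and the sizes and sigmas are invented.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(image, levels=4, sigma=1.0):
    """Repeatedly blur and subsample by 2, building a scale-space pyramid."""
    pyramid = [image.astype(float)]
    for _ in range(levels - 1):
        blurred = gaussian_filter(pyramid[-1], sigma)
        pyramid.append(blurred[::2, ::2])  # 2x spatial subsampling
    return pyramid

def difference_of_gaussians(image, sigma1=1.0, sigma2=1.6):
    """Band-pass DoG response whose extrema serve as interest points."""
    image = image.astype(float)
    return gaussian_filter(image, sigma1) - gaussian_filter(image, sigma2)

img = np.random.default_rng(1).random((64, 64))
pyr = gaussian_pyramid(img)          # 64x64, 32x32, 16x16, 8x8 levels
dog = difference_of_gaussians(img)   # one of the three detector responses
```

The Harris and Hessian detectors the bottom tier also supports would replace `difference_of_gaussians` with corner measures built from image derivatives; the pyramid stage is shared by all three.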
A Bio-Inspired Vision Sensor With Dual Operation and Readout Modes
This paper presents a novel event-based vision sensor with two operation modes: intensity mode and spatial contrast detection. They can be combined with two different readout approaches: pulse density modulation and time-to-first-spike. The sensor is conceived to be a node of a smart camera network made up of several independent and autonomous nodes that send information to a central one. The user can toggle the operation and readout modes with two control bits. The sensor has low latency (below 1 ms under average illumination conditions), low power consumption (19 mA), and reduced data flow when detecting spatial contrast. A new approach to computing the spatial contrast, based on inter-pixel event communication and less prone to mismatch effects than diffusive networks, is proposed. The sensor was fabricated in the standard AMS4M2P 0.35-um process. A detailed system-level description and experimental results are provided.
Office of Naval Research (USA) N00014-14-1-0355; Ministerio de Economía y Competitividad TEC2012-38921-C02-02, P12-TIC-2338, IPT-2011-1625-43000
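The two readout schemes named in this abstract, time-to-first-spike and pulse density modulation, can be illustrated with a toy pixel model; this is an assumption for exposition, not the chip's actual circuit, and all thresholds and rates are invented.

```python
import numpy as np

def time_to_first_spike(intensity, threshold=1e-3, dt=1e-4, t_max=1e-2):
    """Integrate each pixel's photocurrent; brighter pixels cross the
    threshold, and hence emit their spike, earlier."""
    charge = np.zeros_like(intensity, dtype=float)
    t_spike = np.full(intensity.shape, np.inf)
    for k in range(int(t_max / dt)):
        charge += intensity * dt
        newly = (charge >= threshold) & np.isinf(t_spike)
        t_spike[newly] = (k + 1) * dt
    return t_spike

def pulse_density(intensity, window=1e-2, rate_per_unit=1e3):
    """Spike count over a fixed window, proportional to pixel intensity."""
    return np.floor(intensity * rate_per_unit * window).astype(int)

pix = np.array([[0.2, 0.8], [0.5, 1.0]])  # toy normalized intensities
print(time_to_first_spike(pix))  # brightest pixel (1.0) fires first
print(pulse_density(pix))
```

The contrast between the two schemes is latency versus precision: time-to-first-spike delivers the brightest pixels almost immediately, while pulse density trades a full integration window for a graded count.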
A 25 micron-thin microscope for imaging upconverting nanoparticles with NIR-I and NIR-II illumination.
Rationale: Intraoperative visualization in small surgical cavities and hard-to-access areas is an essential requirement for modern, minimally invasive surgeries and demands significant miniaturization. However, current optical imagers require multiple hard-to-miniaturize components, including lenses, filters and optical fibers. These components restrict both the form-factor and maneuverability of these imagers, and imagers largely remain stand-alone devices with centimeter-scale dimensions. Methods: We have engineered INSITE (Immunotargeted Nanoparticle Single-Chip Imaging Technology), which integrates the unique optical properties of lanthanide-based alloyed upconverting nanoparticles (aUCNPs) with the time-resolved imaging of a 25-micron-thin CMOS-based (complementary metal oxide semiconductor) imager. We have synthesized core/shell aUCNPs of different compositions and imaged their visible emission with INSITE under either NIR-I or NIR-II photoexcitation. We characterized aUCNP imaging with INSITE across varying aUCNP compositions and across 980 nm and 1550 nm excitation wavelengths. To demonstrate clinical experimental validity, we also conducted an intratumoral injection into LNCaP prostate tumors in a male nude mouse; the tumors were subsequently excised and imaged with INSITE. Results: Under the low illumination fluences compatible with live animal imaging, we measure aUCNP radiative lifetimes of 600 μs - 1.3 ms, which provides strong signal for time-resolved INSITE imaging. Core/shell NaEr0.6Yb0.4F4 aUCNPs show the highest INSITE signal when illuminated at either 980 nm or 1550 nm, with signal from NIR-I excitation about an order of magnitude brighter than from NIR-II excitation. The 55 μm spatial resolution achievable with this approach is demonstrated through imaging of aUCNPs in PDMS (polydimethylsiloxane) micro-wells, showing resolution of micrometer-scale targets with single-pixel precision. INSITE imaging of intratumoral NaEr0.8Yb0.2F4 aUCNPs shows a signal-to-background ratio of 9, limited only by photodiode dark current and electronic noise. Conclusion: This work demonstrates INSITE imaging of aUCNPs in tumors, achieving an imaging platform thinned to a 25 μm, planar form-factor, with both NIR-I and NIR-II excitation. Based on a highly parallelized array structure, INSITE is scalable, enabling direct coupling with a wide array of surgical and robotic tools for seamless integration with tissue actuation, resection or ablation.
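A toy decay model illustrates why time-resolved readout favours the long-lived aUCNP emission: with millisecond lifetimes (versus nanosecond-scale background fluorescence), opening the detector gate shortly after the excitation pulse ends captures nearly all the nanoparticle signal while the background has fully decayed. The lifetime, background constant and gate timings below are illustrative assumptions, chosen to be consistent with the reported 600 μs - 1.3 ms range.

```python
import numpy as np

def gated_signal(amplitude, tau, t_start, t_stop):
    """Integral of A*exp(-t/tau) between the detector gate edges."""
    return amplitude * tau * (np.exp(-t_start / tau) - np.exp(-t_stop / tau))

tau_ucnp = 1e-3        # ~1 ms aUCNP lifetime (within the reported range)
tau_bg = 5e-9          # assumed nanosecond-scale background fluorescence
gate = (10e-6, 2e-3)   # open the gate 10 us after excitation ends

ucnp = gated_signal(1.0, tau_ucnp, *gate)
background = gated_signal(1.0, tau_bg, *gate)
print(f"aUCNP: {ucnp:.2e}  background: {background:.2e}")
```

Even with equal initial amplitudes, the gated background integral underflows to zero while the aUCNP term retains most of its total emission, consistent with a readout limited mainly by dark current and electronic noise rather than fluorescence background.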