
    A Controlled Study of the Flipped Classroom With Numerical Methods for Engineers

    Recent advances in technology and ideology have unlocked entirely new directions for education research. Mounting pressure from rising tuition costs and free online course offerings is opening discussion and catalyzing change in the physical classroom. The flipped classroom is at the center of this discussion. It is a new pedagogical method that employs asynchronous video lectures, practice problems as homework, and active, group-based problem-solving activities in the classroom. It represents a unique combination of learning theories once thought to be incompatible: active, problem-based learning activities founded upon constructivist schema, and instructional lectures derived from direct instruction methods founded upon behaviorist principles. The primary reason for examining this teaching method is that it holds the promise of delivering the best of both worlds. A controlled study of a sophomore-level numerical methods course was conducted using video lectures and model-eliciting activities (MEAs) in one section (treatment) and traditional group lecture-based teaching in the other (comparison). The study compared knowledge-based outcomes on two dimensions: conceptual understanding and conventional problem-solving ability. Homework and unit exams were used to assess conventional problem-solving ability, while quizzes and a concept test were used to measure conceptual understanding. There was no difference between sections in conceptual understanding as measured by quiz and concept test scores. The difference between average exam scores was also not significant. However, homework scores were significantly lower, by 15.5 percentage points (out of 100), equivalent to an effect size of 0.70. This difference appears to arise because students in the MEA/video lecture section had a higher workload than students in the comparison section and consequently neglected some of the homework, which was not heavily weighted in the final course grade. A comparison of student evaluations across the sections revealed that perceptions were significantly lower for the MEA/video lecture section on 3 items (out of 18). Based on student feedback, it is recommended that future implementations ensure tighter integration between MEAs and other required course assignments, for example by using a larger number of shorter MEAs and introducing MEAs to students earlier.
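
    A standardized mean difference (Cohen's d) is one common way an effect size like the 0.70 reported above is computed: the gap between section means divided by the pooled standard deviation. A minimal sketch of that calculation, with hypothetical scores standing in for the two sections' homework data:

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Standardized mean difference using the pooled standard deviation."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    n_a, n_b = len(a), len(b)
    pooled_var = ((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1)) / (n_a + n_b - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical homework scores (out of 100) for a comparison and a treatment section.
comparison = [88, 92, 75, 90, 85, 95, 80, 78]
treatment  = [70, 74, 60, 85, 68, 72, 65, 79]
print(f"Cohen's d = {cohens_d(comparison, treatment):.2f}")
```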

    Appearance-based image splitting for HDR display systems

    High dynamic range displays that incorporate two optically coupled image planes have recently been developed. This dual image plane design requires that a given HDR input image be split into two complementary standard dynamic range components that drive the coupled systems, which raises the problem of image splitting. In this research, two types of HDR display system (hardcopy and softcopy) were constructed to facilitate the study of image splitting algorithms for building HDR displays. A new HDR image splitting algorithm incorporating the iCAM06 image appearance model is proposed, seeking to produce displayed HDR images with better image quality. The new algorithm has the potential to improve the perception of image detail, increase colorfulness, and make better use of the display gamut. Finally, the performance of the new iCAM06-based HDR image splitting algorithm is evaluated and compared with the widely used luminance square-root algorithm through psychophysical studies.
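
    The luminance square-root baseline mentioned above can be illustrated briefly: in a dual image plane design, each optically coupled plane can show roughly the square root of the normalized target luminance, so that their multiplicative combination approximates the HDR signal. A minimal numpy sketch under that assumption (the iCAM06-based algorithm itself is not reproduced here, and real systems typically also blur one layer and compensate on the other):

```python
import numpy as np

def sqrt_split(hdr_luminance):
    """Baseline square-root split for a dual image plane HDR display:
    both planes show sqrt(L), so their optical product reconstructs L."""
    L = np.clip(np.asarray(hdr_luminance, float), 0.0, None)
    L_norm = L / L.max()      # normalize so both layers stay in displayable range
    back = np.sqrt(L_norm)    # e.g. the back image plane (projector/backlight)
    front = np.sqrt(L_norm)   # e.g. the front transmissive plane (print/LCD panel)
    return back, front
```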

    The efficient use of data from different sources for production and application of digital elevation models

    The emphasis of the investigation reported in this thesis is on the use of digital elevation data of two resolutions originating from two different sources. The high resolution DEM was captured from aerial photographs (first source) at a scale of 1:30,000, and the low resolution DEM was captured from SPOT images (second source). It is well known that the resolution of DEM data depends a great deal on the scale of the images used. The technique for capturing the DEMs was static measurement of spot heights in a regular grid; the grid spacing of the high resolution DEM was 30 m and that of the low resolution DEM was 100 m. The aims of this thesis are as follows:
    1. To assess the feasibility of using SPOT stereodata as a source of height information and of merging it with data from aerial photography. This is carried out by comparing the elevation data derived from SPOT with the digital elevation data derived from aerial photography. From the comparison of these two sources of height information, results are derived which show the heighting accuracy levels that can realistically be achieved. A systematic error in the estimated average of the elevation differences was found, and many tests were carried out to find the reasons for its presence.
    2. To develop methods to manipulate the captured data.
    2.1. Gross error (blunder) detection. Blunders made during the data capture procedure affect the accuracy of the final product, so it is necessary to trap and remove them. A pointwise local self-checking blunder detection algorithm was developed to check the grid elevation data, particularly those derived from the second source (a minimal sketch of such a check follows this abstract).
    2.2. Data coordinate transformation. The data must be transformed into a common projection in order to be directly comparable. The projection and coordinate systems employed are studied in this project, and the errors caused by the transformations are estimated.
    2.3. Data merging. Data of different reliability have to be merged into a single data set. In this project, data from the two sources are merged in order to create a final product of known and uniform accuracy. The effect of the lower resolution source on the high resolution source was studied, in both dense and sparse form.
    2.4. Data structuring. The data are restructured into a format acceptable for DEM creation and display by the commercially available Laser-Scan package DTMCREATE.
    3. DEM production and contouring. To produce DEMs from the initial data and from the data derived by merging the two sources, and to assess the accuracy of the interpolation procedure by comparing the interpolated data with the high resolution DEM derived from aerial photography. Finally, to interpolate contours directly from the "raw" SPOT data and to compare them with those derived from the aerial photography, in order to establish the feasibility of using SPOT data for contouring in topographic maps.
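
    A minimal sketch of the kind of pointwise local self-check described in aim 2.1, assuming the DEM is held as a regular 2-D grid of elevations in metres (the neighbourhood size and threshold are hypothetical choices, not those of the thesis):

```python
import numpy as np

def detect_blunders(dem, threshold=15.0):
    """Flag grid points whose elevation differs from the mean of their
    8 neighbours by more than `threshold` metres (local self-check)."""
    dem = np.asarray(dem, float)
    padded = np.pad(dem, 1, mode="edge")
    # Mean of the 8 surrounding cells for every grid position.
    neighbours = sum(
        padded[1 + di: 1 + di + dem.shape[0], 1 + dj: 1 + dj + dem.shape[1]]
        for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)
    ) / 8.0
    return np.abs(dem - neighbours) > threshold  # boolean mask of suspected blunders
```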

    Navigating the roadblocks to spectral color reproduction: data-efficient multi-channel imaging and spectral color management

    Commercialization of spectral imaging for color reproduction will require the identification and traversal of roadblocks to its success. Among the drawbacks associated with spectral reproduction is a tremendous increase in data capture bandwidth and processing throughput. Approaches are proposed for attenuating these increases, based on data-efficient, adaptive multi-channel visible-spectrum capture and on low-dimensional spectral color management. First, concepts of adaptive spectral capture are explored. Current spectral imaging approaches require tens of camera channels, although previous research has shown that five to nine channels can be sufficient for scenes limited to pre-characterized spectra. New camera systems are proposed and evaluated that incorporate adaptive features, reducing capture demands to a similarly small number of channels with the advantage that a priori information about expected scenes is not needed at the time of system design. Second, proposals are made to address problems arising from the significant increase in dimensionality within the image processing stage of a spectral image workflow. An Interim Connection Space (ICS) is proposed as a reduced-dimensionality bottleneck in the processing workflow, allowing support of spectral color management. In combination, these investigations into data-efficient approaches improve two critical points in the spectral reproduction workflow: capture and processing. The progress reported here should help the color reproduction community appreciate that the route to data-efficient multi-channel visible-spectrum imaging is passable and can be considered for many imaging modalities.
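
    The thesis's Interim Connection Space is not specified in the abstract; as a hedged illustration of the general idea (carrying spectra through the workflow as a handful of coordinates rather than full per-band values), a principal-component sketch with hypothetical reflectance data:

```python
import numpy as np

# Hypothetical reflectance spectra: 500 samples x 31 bands (400-700 nm at 10 nm).
rng = np.random.default_rng(0)
spectra = np.clip(rng.normal(0.5, 0.15, (500, 31)), 0, 1)

# Low-dimensional "interim" representation from the leading principal components
# (a generic stand-in for an Interim Connection Space, not the thesis's design).
mean = spectra.mean(axis=0)
_, _, vt = np.linalg.svd(spectra - mean, full_matrices=False)
basis = vt[:6]                           # keep 6 coordinates instead of 31 bands

coords = (spectra - mean) @ basis.T      # forward transform (encode)
reconstructed = coords @ basis + mean    # inverse transform (decode)
rms_error = np.sqrt(((spectra - reconstructed) ** 2).mean())
print(f"RMS reconstruction error: {rms_error:.4f}")
```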

    Development of an interactive computer graphics system with application to data fitting

    The work reported in this thesis is organized into two parts. Part I presents a review of existing graphics facilities in terms of hardware and software (Chapter 2), interactive input techniques (Chapter 3), and the organization of graphics output processes and application data structures (Chapter 4). Part I concludes with a full account of the development and implementation of the basic graphics software package LIGHT. Part II contains a detailed discussion of the implementation of several application programs that employ the basic graphics software developed in Part I. The applications cover the following problem areas: (1) Interpolatory Data Fitting (IDF); (2) Interactive Contour Tracing (ICT); (3) Triangular Mesh Generation (TMG). Finally, full program listings of the basic software and the application modules are given in the Appendices accompanying this thesis.
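
    As a hedged, generic illustration of the sort of interpolatory data fitting the IDF application addresses (the thesis's own algorithms are not reproduced here), a cubic-spline sketch on hypothetical data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical data points to be fitted interactively.
x = np.array([0.0, 1.0, 2.5, 4.0, 5.5, 7.0])
y = np.array([1.0, 2.2, 0.8, 3.1, 2.4, 4.0])

spline = CubicSpline(x, y)                  # interpolant passes through every point
x_dense = np.linspace(x.min(), x.max(), 200)
y_dense = spline(x_dense)                   # values a display module could plot
```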

    Computer-Aided Geometry Modeling

    Techniques in computer-aided geometry modeling and their application are addressed. Mathematical modeling, solid geometry models, management of geometric data, development of geometry standards, and interactive and graphic procedures are discussed. The applications include aeronautical and aerospace structures design, fluid flow modeling, and gas turbine design.

    Earth resources: A continuing bibliography with indexes (issue 55)

    This bibliography lists 368 reports, articles, and other documents introduced into the NASA scientific and technical information system between July 1 and September 30, 1987. Emphasis is placed on the use of remote sensing and geographical instrumentation in spacecraft and aircraft to survey and inventory natural resources and urban areas. Subject matter is grouped according to agriculture and forestry, environmental changes and cultural resources, geodesy and cartography, geology and mineral resources, hydrology and water management, data processing and distribution systems, instrumentation and sensors, and economic analysis.

    Development of techniques to classify marine benthic habitats using hyperspectral imagery in oligotrophic, temperate waters

    There is an increasing need for more detailed knowledge about the spatial distribution and structure of shallow water benthic habitats for marine conservation and planning. This need, together with improvements in hyperspectral image sensors, provides an increased opportunity to develop new techniques to better utilise these data in marine mapping projects. The oligotrophic, optically shallow waters surrounding Rottnest Island, Western Australia, provide a unique opportunity to develop and apply such mapping techniques. The three flight lines of HyMap hyperspectral data flown for the Rottnest Island Reserve (RIR) in April 2004 were corrected for atmospheric effects, sunglint and the influence of the water column using the Modular Inversion and Processing System. A digital bathymetry model was created for the RIR using existing soundings data and used to derive a range of topographic variables (e.g. slope) and other spatially relevant environmental variables (e.g. exposure to waves) that could be used to improve the ecological description of the benthic habitats identified in the hyperspectral imagery. A hierarchical habitat classification scheme was developed for Rottnest Island based on the dominant habitat components, such as Ecklonia radiata or Posidonia sinuosa. A library of 296 spectral signatures at HyMap spectral resolution (~15 nm) was created from more than 6000 in situ measurements of the dominant habitat components and subjected to spectral separation analysis at all levels of the habitat classification scheme. A separation analysis technique was developed using a multivariate statistical optimisation approach that employed a genetic algorithm together with a range of spectral metrics to determine the optimum set of image bands for maximum separation at each classification level using the entire spectral library. The results showed that many of the dominant habitat components could be separated spectrally as pure spectra, although there were almost always some overlapping samples from most classes at each split in the scheme. This led to the development of a classification algorithm that accounted for these overlaps. The algorithm was tested using mixture analysis, which attempted to identify 10 000 synthetically mixed signatures with a known dominant component on each run. The algorithm was then applied directly to the water-corrected bottom reflectance data to classify the benthic habitats. At the broadest scale, bio-substrate regions were separated from bare substrates in the image with an overall accuracy of 95%, and, at the finest scale, bare substrates, Posidonia, Amphibolis, Ecklonia radiata, Sargassum species, algal turf and coral were separated with an accuracy of 70%. The application of these habitat maps to a number of marine planning and management scenarios, such as marine conservation and the placement of boat moorings at dive sites, was demonstrated.
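
    The genetic-algorithm band selection itself is not reproduced here; as a hedged sketch of the underlying separability idea, one simple metric such an optimiser could maximise over candidate band subsets is the minimum pairwise spectral angle between class mean signatures (function names and data below are hypothetical):

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two signatures."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos_theta = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def subset_separability(class_means, band_subset):
    """Minimum pairwise spectral angle between class means, restricted to a
    candidate band subset -- a simple fitness an optimiser could maximise."""
    restricted = [np.asarray(m)[list(band_subset)] for m in class_means]
    angles = [spectral_angle(restricted[i], restricted[j])
              for i in range(len(restricted)) for j in range(i + 1, len(restricted))]
    return min(angles)

# Hypothetical mean signatures for three habitat classes over 20 HyMap-like bands.
rng = np.random.default_rng(1)
means = [rng.uniform(0.05, 0.6, 20) for _ in range(3)]
print(subset_separability(means, band_subset=[2, 5, 9, 14]))
```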

    Colorimetric tolerances of various digital image displays

    Visual experiments on four displays (two LCDs, one CRT, and one hardcopy) were conducted to determine colorimetric tolerances of images systematically altered via three different transfer curves. The curves used were sigmoidal compression in L*, linear reduction in C*, and additive rotations in hue angle hab. More than 30 observers judged the detectability of these alterations on three pictorial images for each display. Standard probit analysis was then used to determine the detection thresholds for the alterations. It was found that the detection thresholds on the LCDs were similar to or lower than those on the CRT in this type of experiment. Summarizing pixel-by-pixel image differences using the 90th percentile ΔE*ab color difference was shown to be more consistent than similar measures in ΔE94 and a prototype ΔE2000. It was also shown that using the 90th percentile difference was more consistent than the average pixel-wise difference. Furthermore, S-CIELAB pre-filtering was shown to have little to no effect on the results of this experiment, since only global color changes were applied and no spatial alterations were used.
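
    A hedged sketch of summarising a pixel-by-pixel image difference by its 90th percentile ΔE*ab, assuming both images are already encoded in CIELAB (the ΔE94/ΔE2000 variants and S-CIELAB pre-filtering are omitted; array contents are hypothetical):

```python
import numpy as np

def percentile_delta_e_ab(lab_reference, lab_test, percentile=90):
    """Summarise pixel-wise CIELAB differences by a high percentile
    rather than the mean, as in the experiment described above."""
    diff = np.asarray(lab_reference, float) - np.asarray(lab_test, float)
    delta_e = np.sqrt((diff ** 2).sum(axis=-1))   # Euclidean distance in L*a*b*
    return np.percentile(delta_e, percentile)

# Hypothetical pair of small CIELAB images (H x W x 3).
rng = np.random.default_rng(2)
ref = rng.uniform([0, -40, -40], [100, 40, 40], (64, 64, 3))
test = ref + rng.normal(0, 1.5, ref.shape)        # simulated global alteration
print(f"90th percentile ΔE*ab: {percentile_delta_e_ab(ref, test):.2f}")
```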