    Investigation of Alignment between Goals of Schooling Relevant to Georgia and the Georgia Performance Standards

    Since the American Revolution, free public education has been a subject of political debate. The role such an institution should play in society has been fervently argued since the founding fathers sought to build a republic based on meritocracy. The problem this study addresses is the undefined relationship between the goals of schooling relevant to Georgia and the Georgia Performance Standards (GPS), a relationship that is a critical piece of creating a complete systemic view of public schooling in Georgia. The purpose of this study is to investigate the alignment between the GPS and schooling goals. The guiding question and sub-questions are: How well are the GPS, or the intended curriculum of Georgia schools, and each of the various stated goals of schooling aligned? How relevant are the eighth-grade GPS to the latent themes of each of the stated goals of schooling? How balanced are the latent themes of each of the stated goals of schooling in the eighth-grade GPS? Through a historical investigation of the literature and current policy, the author establishes the currently relevant goals of schooling, which serve as the latent goals for which the method seeks evidence within the Georgia Performance Standards. The study employs a quantitative content analysis of a significant section of the GPS, looking for themes associated with the various stated goals of schooling as indicated by the literature review. The manifest themes, developed from the latent goals of schooling, are incorporated as the dependent variables in the study, while the GPS serve as the independent variable. Neuendorf's (2001) framework for content analysis is used to develop a new method for investigating the goal-curriculum alignment relationship through new measures of Curricular Balance, Curricular Relevance, and Manifest Theme Presence. This study presents a new visual model, the Goal-Curriculum Alignment Measures (G-CAM) model, for comparing a curriculum's alignment to multiple goals of schooling. This study finds that the GPS are strongly aligned to the goals of Americanization, high student test scores, post-secondary enrollment, and national gain, while poorly aligned to democratic participation and social justice. Evidence for these conclusions is discussed and related to the current socio-political literature.
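
    The abstract does not spell out the formulas behind the G-CAM measures. As a minimal illustration only, the sketch below uses hypothetical theme names, coded data, and an entropy-based balance definition (assumptions, not the author's method) to show how coded theme counts could be turned into presence, relevance, and balance scores.

```python
from collections import Counter
from math import log

# Hypothetical coded data: each standard is tagged with the manifest themes
# a coder found in it. The tags and theme names are illustrative only.
coded_standards = [
    {"themes": ["test_scores", "post_secondary"]},
    {"themes": ["americanization"]},
    {"themes": ["test_scores"]},
    {"themes": []},  # a standard with no coded theme
]
all_themes = ["americanization", "test_scores", "post_secondary",
              "national_gain", "democratic_participation", "social_justice"]

counts = Counter(t for s in coded_standards for t in s["themes"])

# Presence: share of standards mentioning each theme at least once.
presence = {t: sum(t in s["themes"] for s in coded_standards) / len(coded_standards)
            for t in all_themes}

# Relevance (assumed definition): share of all theme codings attributable to each theme.
total = sum(counts.values()) or 1
relevance = {t: counts[t] / total for t in all_themes}

# Balance (assumed definition): normalized entropy of the theme distribution;
# 1.0 means codings are spread evenly across themes, 0.0 means one theme dominates.
probs = [counts[t] / total for t in all_themes if counts[t] > 0]
balance = -sum(p * log(p) for p in probs) / log(len(all_themes))

print(presence, relevance, round(balance, 3))
```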

    Multi-Modal Biometrics: Applications, Strategies and Operations

    The need for adequate attention to the security of lives and property cannot be over-emphasised. Existing approaches to security management by various agencies and sectors have focused on the use of possession-based (card, token) and knowledge-based (password, username) strategies, which are susceptible to forgetfulness, damage, loss, theft, forgery and other activities of fraudsters. The surest and most appropriate strategy for handling these challenges is the use of naturally endowed biometrics, which are human physiological and behavioural characteristics. This paper presents an overview of the use of biometrics for human verification and identification. The applications, methodologies, operations, integration, fusion and strategies for multi-modal biometric systems that provide more secure and reliable human identity management are also presented.
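
    The paper surveys fusion strategies rather than prescribing a single one. A common option is score-level fusion, in which normalized matching scores from individual modalities are combined by a weighted sum; the sketch below illustrates that general idea with assumed modality names, score ranges, weights, and decision threshold, and is not the specific scheme from the paper.

```python
# Minimal sketch of score-level fusion for a multi-modal biometric system.
# Modality names, score bounds, weights, and the threshold are illustrative assumptions.

def min_max_normalize(score: float, lo: float, hi: float) -> float:
    """Map a raw matcher score into [0, 1] using known score bounds."""
    return (score - lo) / (hi - lo) if hi > lo else 0.0

def fuse_scores(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted-sum fusion of normalized per-modality match scores."""
    total_weight = sum(weights.values())
    return sum(weights[m] * scores[m] for m in scores) / total_weight

# Example: raw scores from three matchers, each with its own score range.
raw = {"fingerprint": 78.0, "face": 0.62, "voice": 41.0}
bounds = {"fingerprint": (0.0, 100.0), "face": (0.0, 1.0), "voice": (0.0, 60.0)}
weights = {"fingerprint": 0.5, "face": 0.3, "voice": 0.2}

normalized = {m: min_max_normalize(raw[m], *bounds[m]) for m in raw}
fused = fuse_scores(normalized, weights)

THRESHOLD = 0.6  # assumed operating point
print("fused score:", round(fused, 3), "accept" if fused >= THRESHOLD else "reject")
```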

    Online Multi-Stage Deep Architectures for Feature Extraction and Object Recognition

    Multi-stage visual architectures have recently found success in achieving high classification accuracies over image datasets with large variations in pose, lighting, and scale. Inspired by techniques currently at the forefront of deep learning, such architectures are typically composed of one or more layers of preprocessing, feature encoding, and pooling to extract features from raw images. Training these components traditionally relies on large sets of patches extracted from a potentially large image dataset. In this context, high-dimensional feature-space representations are often helpful for obtaining the best classification performance and providing a higher degree of invariance to object transformations. Large datasets with high-dimensional features complicate the implementation of visual architectures in memory-constrained environments. This dissertation constructs online learning replacements for the components within a multi-stage architecture and demonstrates that the proposed replacements (namely fuzzy competitive clustering, an incremental covariance estimator, and a multi-layer neural network) can offer performance competitive with their offline batch counterparts while providing a reduced memory footprint. The online nature of this solution allows for the development of a method for adjusting parameters within the architecture via stochastic gradient descent. Testing over multiple datasets shows the potential benefits of this methodology when appropriate priors on the initial parameters are unknown. Alternatives to batch-based decompositions for a whitening preprocessing stage, which take advantage of natural image statistics and allow simple dictionary learners to work well in the problem domain, are also explored. Expansions of the architecture using additional pooling statistics and multiple layers are presented and indicate that larger codebook sizes are not the only step toward higher classification accuracies. Experimental results from these expansions further indicate the important role of sparsity and appropriate encodings within multi-stage visual feature extraction architectures.
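
    The dissertation's incremental covariance estimator is not detailed in the abstract. One standard online formulation is a Welford-style running update of the mean and covariance over streamed patches, which can then feed a whitening step without holding the full patch set in memory; the sketch below illustrates that idea under an assumed patch dimensionality and is not necessarily the estimator used in the dissertation.

```python
import numpy as np

class OnlineCovariance:
    """Welford-style running estimate of mean and covariance for streamed patch vectors."""

    def __init__(self, dim: int):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros((dim, dim))  # accumulated outer products of deviations

    def update(self, x: np.ndarray) -> None:
        """Incorporate one patch vector without storing previous patches."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += np.outer(delta, x - self.mean)

    def covariance(self) -> np.ndarray:
        return self.m2 / max(self.n - 1, 1)

# Example: stream random "patches" and derive a ZCA whitening transform from the estimate.
rng = np.random.default_rng(0)
est = OnlineCovariance(dim=16)  # assumed patch dimensionality
for _ in range(1000):
    est.update(rng.normal(size=16))

cov = est.covariance()
eigvals, eigvecs = np.linalg.eigh(cov)
zca = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + 1e-5)) @ eigvecs.T  # whitening matrix
```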

    Secure covert communications over streaming media using dynamic steganography

    Streaming technologies such as VoIP are widely embedded in commercial and industrial applications, so it is imperative to address data security issues before the problems become serious. This thesis describes a theoretical and experimental investigation of secure covert communications over streaming media using dynamic steganography. A covert VoIP communications system was developed in C++ to enable the implementation of the work carried out. A new information-theoretical model of secure covert communications over streaming media was constructed to depict the security scenarios in streaming media-based steganographic systems under passive attacks. The model involves a stochastic process that models an information source for covert VoIP communications and the theory of hypothesis testing that analyses the adversary's detection performance. The potential of hardware-based true random key generation and chaotic interval selection for innovative applications in covert VoIP communications was explored. The read time-stamp counter of the CPU was used as an entropy source to generate true random numbers as secret keys for streaming media steganography. A novel interval selection algorithm was devised to randomly choose data-embedding locations in VoIP streams using random sequences generated from a chaotic process. A dynamic key updating and transmission based steganographic algorithm, which integrates a one-way cryptographic accumulator into dynamic key exchange, was devised to provide secure key exchange for covert communications over streaming media. Analysis based on the discrete logarithm problem and steganalysis using the t-test indicated that the algorithm has the advantage of being the most solid method of key distribution over a public channel. The effectiveness of the new steganographic algorithm for covert communications over streaming media was examined by means of security analysis, steganalysis using non-parametric Mann-Whitney-Wilcoxon statistical testing, and performance and robustness measurements. The algorithm achieved an average data-embedding rate of 800 bps, comparable to other related algorithms. The results indicated that the algorithm has little or no impact on real-time VoIP communications in terms of speech quality (< 5% change in PESQ with hidden data), signal distortion (6% change in SNR after steganography) and imperceptibility, and that it is more secure and effective in addressing the security problems than other related algorithms.
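
    The thesis's interval selection algorithm and key exchange are not reproduced in the abstract. The sketch below only illustrates the general idea of deriving pseudo-random embedding positions within a VoIP frame from a chaotic sequence; the logistic map, frame length, position count, and seed handling are assumptions for illustration, not the author's scheme.

```python
# Illustrative sketch: chaotic selection of data-embedding positions in a VoIP frame.
# The logistic map, frame length, and number of positions are assumptions.

def logistic_map(x: float, r: float = 3.99) -> float:
    """One iteration of the logistic map, which behaves chaotically for r near 4."""
    return r * x * (1.0 - x)

def select_positions(seed: float, frame_len: int, count: int) -> list[int]:
    """Derive `count` distinct embedding positions in [0, frame_len) from a chaotic orbit."""
    positions, x = [], seed
    while len(positions) < count:
        x = logistic_map(x)
        pos = int(x * frame_len)
        if pos not in positions:
            positions.append(pos)
    return positions

# Example: pick 8 sample indices in a 160-sample frame (20 ms of 8 kHz audio),
# using a secret seed that would come from the key exchange in a real system.
secret_seed = 0.613  # hypothetical shared secret
positions = select_positions(secret_seed, frame_len=160, count=8)
print("embedding positions:", sorted(positions))
```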

    1994 Science Information Management and Data Compression Workshop

    This document is the proceedings from the 'Science Information Management and Data Compression Workshop,' which was held on September 26-27, 1994, at the NASA Goddard Space Flight Center, Greenbelt, Maryland. The Workshop explored promising computational approaches for handling the collection, ingestion, archival and retrieval of large quantities of data in future Earth and space science missions. It consisted of eleven presentations covering a range of information management and data compression approaches that are being or have been integrated into actual or prototypical Earth or space science data information systems, or that hold promise for such an application. The workshop was organized by James C. Tilton and Robert F. Cromp of the NASA Goddard Space Flight Center.

    Remediation in the hybrid media environment : Understanding countermedia in context

    We examine the position of five online-only 'countermedia' publications often publicly labelled as 'fake media' and use them to illustrate how the 'post-truth era' takes shape. Both academic and public discussions perceive countermedia as separate and distinct from established, professionally produced journalistic media outlets. We argue that the studied outlets are an integral part of the hybrid media environment. Our data show that countermedia mainly remediate content initially published by professional Finnish media. We also suggest that media references are used strategically to explicate a relationship with mainstream media, as there are different ways of remediating mainstream media content. This evidence contributes to the growing body of work criticising the usage of the 'fake media' concept and attempts to create a more nuanced understanding of countermedia's role in its contexts. Furthermore, we suggest that remediation as a lens may help scholars understand the integrated hybrid media environment.

    The Lines Between the Checkboxes : The Experiences of Racially Ambiguous People of Color

    The influences of race on people's lived experiences are vast and innumerable. Despite advancements in the multicultural counseling literature, the experiences of racially ambiguous people of color, or persons who do not align with preexisting ideas about race (Brown & Brown, 2004; James & Tucker, 2003; Young, Sanchez, & Wilton, 2013), are relatively unknown. Further, the racially ambiguous experience is often conflated with that of persons of mixed-race heritage (Young, Sanchez, & Wilton, 2013). The goal of this dissertation study was to understand the lived experiences of racially ambiguous people of color. Participants identifying as racially ambiguous were recruited to discuss their lived experiences. Grounded in Critical Race Theory (Crenshaw, Gotanda, Peller, & Thomas, 1995; Haskins & Singh, 2015), this phenomenological, qualitative study included two in-depth, semi-structured interviews. A cross-section of 14 participants with varying ages, genders, racial compositions, ethnicities, sexual orientations, and cultures engaged in this study. Findings suggest that the construct of racial ambiguity is not confined to persons of mixed-race heritage and that racially ambiguous people of color have a unique lived experience. Participants identified that being racially ambiguous resulted in a distinct understanding of race, varying interpersonal dynamics, and an emotional internal experience, affecting participants' sense of self, wellness, and belonging. Implications for counseling practice, counselor education and supervision, and future research are provided.