    KS3 and KS4 learners' use of Web 2.0 technologies in and out of school - summary

    This is a summary of the second report from research commissioned by Becta into Web 2.0 technologies for learning at Key Stages 3 and 4. The report describes findings from data collected through a guided survey of 2,611 Year 8 and Year 10 pupils and 60 focus groups held with approximately 300 learners. The analysis explores learners' use of Web 2.0 technologies, their motivations for using social networking sites, and the implications of these findings for teachers and providers.

    Living the Past in the Future

    Engene: A genetic algorithm classifier for content-based recommender systems that does not require continuous user feedback

    We present Engene, a genetic-algorithm-based classifier designed for use in content-based recommender systems. Once bootstrapped, Engene does not require any further human feedback. Although it is primarily used as an online classifier, in this paper we present its use as a one-class document batch classifier and compare its performance against that of a one-class k-NN classifier.
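
    As a rough illustration of the general approach (not the authors' implementation), the sketch below evolves keyword-weight chromosomes so that documents resembling a relevant seed set score more highly than others; the vocabulary size, fitness function, and GA parameters are all assumptions introduced for this example.

    ```python
    import random

    # Minimal sketch of a GA-based content classifier (not Engene itself).
    # A chromosome is a weight vector over a fixed vocabulary; fitness measures
    # how well the weighted keyword score separates relevant documents from others.

    VOCAB_SIZE = 50      # assumed vocabulary size
    POP_SIZE = 30
    GENERATIONS = 100

    def score(chromosome, doc_vector):
        """Weighted keyword score of a bag-of-words document vector."""
        return sum(w * c for w, c in zip(chromosome, doc_vector))

    def fitness(chromosome, relevant_docs, other_docs):
        """Higher when relevant documents score above the others."""
        pos = sum(score(chromosome, d) for d in relevant_docs) / len(relevant_docs)
        neg = sum(score(chromosome, d) for d in other_docs) / len(other_docs)
        return pos - neg

    def mutate(chromosome, rate=0.05):
        return [w + random.gauss(0, 0.1) if random.random() < rate else w
                for w in chromosome]

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def evolve(relevant_docs, other_docs):
        pop = [[random.random() for _ in range(VOCAB_SIZE)] for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            pop.sort(key=lambda c: fitness(c, relevant_docs, other_docs), reverse=True)
            elite = pop[:POP_SIZE // 2]
            pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                           for _ in range(POP_SIZE - len(elite))]
        return pop[0]   # the fittest chromosome acts as the content profile
    ```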

    Phage display-derived inhibitor of the essential cell wall biosynthesis enzyme MurF

    Background: To develop antibacterial agents having novel modes of action against bacterial cell wall biosynthesis, we targeted the essential MurF enzyme of the antibiotic-resistant pathogen Pseudomonas aeruginosa. MurF catalyzes the formation of a peptide bond between D-Alanyl-D-Alanine (D-Ala-D-Ala) and the cell wall precursor uridine 5'-diphosphoryl N-acetylmuramoyl-L-alanyl-D-glutamyl-meso-diaminopimelic acid (UDP-MurNAc-Ala-Glu-meso-A2pm), with the concomitant hydrolysis of ATP to ADP and inorganic phosphate, yielding UDP-N-acetylmuramyl-pentapeptide. As MurF acts on a dipeptide, we exploited a phage display approach to identify peptide ligands with high binding affinities for the enzyme. Results: Screening of a phage display 12-mer library using purified P. aeruginosa MurF led to the identification of the MurFp1 peptide. The MurF substrate UDP-MurNAc-Ala-Glu-meso-A2pm was synthesized and used to develop a sensitive spectrophotometric assay to quantify MurF kinetics and inhibition. MurFp1 acted as a weak, time-dependent inhibitor of MurF activity but was a potent inhibitor when MurF was pre-incubated with UDP-MurNAc-Ala-Glu-meso-A2pm or ATP. In contrast, adding the substrate D-Ala-D-Ala during the pre-incubation nullified the inhibition. The IC50 value of MurFp1 was evaluated at 250 μM, and the Ki was established at 420 μM, with mixed-type inhibition with respect to D-Ala-D-Ala. Conclusion: MurFp1 exerts its inhibitory action by interfering with the utilization of D-Ala-D-Ala by the MurF amide ligase enzyme. We propose that MurFp1 exploits UDP-MurNAc-Ala-Glu-meso-A2pm-induced structural changes for better interaction with the enzyme. We present the first peptide inhibitor of MurF, an enzyme that should be exploited as a target for antimicrobial drug development.
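
    To make the reported kinetics concrete, the sketch below evaluates the standard mixed-inhibition rate law, v = Vmax·[S] / (α·Km + α'·[S]) with α = 1 + [I]/Ki and α' = 1 + [I]/Ki'. Only the Ki of about 420 μM and the 250 μM inhibitor figure come from the abstract; Vmax, Km, Ki', and the substrate concentration are placeholder values.

    ```python
    def mixed_inhibition_rate(s, i, vmax, km, ki, ki_prime):
        """Michaelis-Menten rate under mixed-type inhibition.

        s, i          substrate and inhibitor concentrations (same units as km, ki)
        ki, ki_prime  inhibitor dissociation constants for the free enzyme and the ES complex
        """
        alpha = 1 + i / ki              # raises the apparent Km
        alpha_prime = 1 + i / ki_prime  # lowers the apparent Vmax
        return vmax * s / (alpha * km + alpha_prime * s)

    # Illustrative numbers only: Ki = 420 uM and [I] = 250 uM are taken from the
    # abstract; Vmax, Km, Ki' and [S] are hypothetical placeholders.
    v = mixed_inhibition_rate(s=100.0, i=250.0, vmax=1.0, km=50.0,
                              ki=420.0, ki_prime=420.0)
    print(f"relative rate with 250 uM MurFp1: {v:.3f}")
    ```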

    A Navigation System for the Visually Impaired: A Fusion of Vision and Depth Sensor

    For a number of years, scientists have been trying to develop aids that can make visually impaired people more independent and aware of their surroundings. Computer-based automatic navigation tools are one example, motivated by the increasing miniaturization of electronics and improvements in processing power and sensing capabilities. This paper presents a complete navigation system based on low-cost and physically unobtrusive sensors: a camera and an infrared depth sensor. The system combines corner features from camera images with depth values from the Kinect's infrared sensor: obstacles are found using corner detection, while the depth sensor provides the corresponding distance. The combination is both efficient and robust. The system not only identifies hurdles but also suggests a safe path (if available) to the left or right side and tells the user to stop, move left, or move right. The system has been tested in real time by both blindfolded and blind people at different indoor and outdoor locations, demonstrating that it operates adequately.
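
    The fusion idea can be sketched as follows. This is not the authors' code: the depth map is assumed to be registered to the camera image, and the 0.8 m threshold and the simple left/right rule are illustrative assumptions.

    ```python
    import cv2
    import numpy as np

    # Sketch of corner + depth fusion for obstacle avoidance (illustrative only).
    # Corners mark candidate obstacles; the depth map supplies their distance.

    STOP_DISTANCE_MM = 800   # assumed "too close" threshold (~0.8 m)

    def suggest_direction(gray_frame, depth_mm):
        """Return 'stop', 'move left', 'move right', or 'go straight'."""
        corners = cv2.goodFeaturesToTrack(gray_frame, maxCorners=200,
                                          qualityLevel=0.01, minDistance=10)
        if corners is None:
            return "go straight"

        h, w = gray_frame.shape
        near_left = near_right = 0
        for x, y in corners.reshape(-1, 2).astype(int):
            d = depth_mm[min(y, h - 1), min(x, w - 1)]
            if 0 < d < STOP_DISTANCE_MM:        # 0 means an invalid depth reading
                if x < w // 2:
                    near_left += 1
                else:
                    near_right += 1

        if near_left and near_right:
            return "stop"
        if near_right:
            return "move left"
        if near_left:
            return "move right"
        return "go straight"
    ```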

    Dual mode microwave microfluidic sensor for temperature variant liquid characterization

    A dual-mode microstrip microfluidic sensor was designed, built, and tested, which can measure a liquid's permittivity at 2.5 GHz and simultaneously compensate for temperature variations. The active liquid volume is small, only around 4.5 μL. The sensor comprises two quarter-ring microstrip resonators excited in parallel. The first is a microfluidic sensor whose resonant frequency and quality factor depend on the dielectric properties of a liquid sample. The second is used as a reference to adjust for changes in ambient temperature. To validate this method, two liquids (water and chloroform) were tested over a temperature range from 23 °C to 35 °C, with excellent compensation results.
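
    The compensation principle can be illustrated with a short sketch: both quarter-ring resonators drift with temperature, so subtracting the reference resonator's drift isolates the shift caused by the liquid. The nominal frequencies and the linear calibration mapping the compensated shift to permittivity are hypothetical placeholders, not values from the paper.

    ```python
    # Sketch of reference-based temperature compensation for the dual-resonator
    # idea described above. All constants are illustrative assumptions.

    F0_SENSE = 2.500e9   # sensing resonator frequency with an empty channel, Hz (assumed)
    F0_REF = 2.500e9     # reference resonator frequency at the calibration temperature, Hz (assumed)

    def compensated_shift(f_sense, f_reference):
        """Remove the common temperature drift seen by both resonators."""
        temperature_drift = f_reference - F0_REF
        return (f_sense - F0_SENSE) - temperature_drift

    def permittivity_from_shift(delta_f, a=-1.0e-8, b=1.0):
        """Placeholder linear calibration: eps_r = a * delta_f + b."""
        return a * delta_f + b

    # Example: both resonators drift by the same +2 MHz with temperature,
    # so the compensated shift reflects only the liquid under test.
    eps_r = permittivity_from_shift(compensated_shift(2.492e9, 2.502e9))
    ```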

    Rapid Online Analysis of Local Feature Detectors and Their Complementarity

    A vision system that can assess its own performance and take appropriate actions online to maximize its effectiveness would be a step towards achieving the long-cherished goal of imitating humans. This paper proposes a method for performing an online performance analysis of local feature detectors, the primary stage of many practical vision systems. It advocates the spatial distribution of local image features as a good performance indicator and presents a metric that can be calculated rapidly, concurs with human visual assessments, and is complementary to existing offline measures such as repeatability. The metric is shown to provide a measure of complementarity for combinations of detectors, correctly reflecting the underlying principles of individual detectors. Qualitative results on well-established datasets for several state-of-the-art detectors are presented based on the proposed measure. Using a hypothesis-testing approach and a newly acquired, larger image database, statistically significant performance differences are identified. Different detector pairs and triplets are examined quantitatively, and the results provide a useful guideline for combining detectors in applications that require a reasonable spatial distribution of image features. A principled framework for combining feature detectors in these applications is also presented. Timing results reveal the potential of the metric for online applications.
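
    The abstract does not give the metric itself, so the sketch below uses a common stand-in for "spatial distribution": the normalised entropy of keypoint counts over a coarse grid, where 1.0 means a perfectly even spread. The grid size and the entropy formulation are assumptions, not the paper's metric.

    ```python
    import numpy as np

    # Illustrative coverage-style measure of how evenly keypoints are spread
    # over an image, computed on a coarse grid (not the paper's metric).

    def spatial_distribution(keypoints, image_shape, grid=(8, 8)):
        """keypoints: iterable of (x, y); returns normalised entropy in [0, 1]."""
        h, w = image_shape[:2]
        counts = np.zeros(grid)
        for x, y in keypoints:
            gy = min(int(y * grid[0] / h), grid[0] - 1)
            gx = min(int(x * grid[1] / w), grid[1] - 1)
            counts[gy, gx] += 1
        p = counts.ravel() / max(counts.sum(), 1)
        p = p[p > 0]
        return float(-(p * np.log(p)).sum() / np.log(counts.size))
    ```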

    Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image consists only of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (by nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (of at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
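
    For reference, the recursive equations that the hardware algorithms decompose are the textbook row-sum/column-sum recursion s(x, y) = s(x - 1, y) + i(x, y) and ii(x, y) = ii(x, y - 1) + s(x, y). The sketch below implements this standard serial form (not the paper's row-parallel hardware design) together with the constant-time box sum that motivates the representation.

    ```python
    import numpy as np

    # Standard serial integral image recursion (textbook form, not the paper's
    # row-parallel hardware algorithms).

    def integral_image(img):
        ii = np.zeros(img.shape, dtype=np.uint64)
        for y in range(img.shape[0]):
            row_sum = 0
            for x in range(img.shape[1]):
                row_sum += int(img[y, x])                 # s(x, y)
                above = ii[y - 1, x] if y > 0 else 0      # ii(x, y - 1)
                ii[y, x] = above + row_sum
        return ii

    def box_sum(ii, x0, y0, x1, y1):
        """Sum of pixels in the inclusive rectangle (x0, y0)-(x1, y1) via four lookups."""
        a = ii[y0 - 1, x0 - 1] if x0 > 0 and y0 > 0 else 0
        b = ii[y0 - 1, x1] if y0 > 0 else 0
        c = ii[y1, x0 - 1] if x0 > 0 else 0
        return int(ii[y1, x1]) - int(b) - int(c) + int(a)
    ```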