We present our work on a supervised multi-label classification system for automatic, content-based exploration of large collections of video lectures. The system integrates emerging cognitive tools to extract features from video transcripts and from text embedded in visual frames, going beyond simple word frequencies. Moreover, it is highly customizable in terms of feature types and classification algorithms, so it can be easily tailored to different contexts and applications. Preliminary results show improvements in precision and recall and demonstrate the effectiveness, unique capabilities, and future challenges of this novel system.
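To make the described setup concrete, the sketch below shows one way such a pipeline could be assembled. It is a minimal illustration, not the authors' implementation: it assumes scikit-learn, uses TF-IDF as a stand-in for the richer transcript and frame-text features mentioned above, and uses hypothetical example lectures and topic labels. The vectorizer and classifier are deliberately swappable, mirroring the system's emphasis on configurable feature types and algorithms.

```python
# Minimal sketch of a multi-label lecture classifier (assumed scikit-learn API).
# Each lecture document combines transcript text with text extracted from its
# visual frames; labels are topic tags.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical training data for illustration only.
lectures = [
    "gradient descent convergence rate slide: loss curves",
    "sql joins indexing query optimizer slide: execution plan",
]
labels = [["machine-learning", "optimization"], ["databases"]]

# Encode the topic tags as a binary indicator matrix for multi-label learning.
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)

# TF-IDF features feed a one-vs-rest classifier; either component could be
# replaced to tailor the pipeline to a different context.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(lectures, Y)

# Predict topic tags for a new, unseen lecture.
predicted = model.predict(["nested loop join cost model slide: B-tree index"])
print(mlb.inverse_transform(predicted))
```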