In recent years, video game companies have recognized player engagement as a
major factor in user experience and enjoyment. This has encouraged greater
investment in new types of game controllers, such as the Wiimote, Rock Band
instruments and the Kinect. However, the native software of these controllers
was not originally designed to be used in other game applications.
This work addresses that issue with a middleware framework that maps body
poses or voice commands to actions in any game. This not only enables a more
natural and customized user experience but also defines an interoperable
virtual controller. In the current version of the framework, body poses and
voice commands are recognized through the Kinect's built-in cameras and
microphones, respectively. The acquired data is then translated into the
game's native interaction scheme in real time, using a lightweight method
based on spatial restrictions. The system can also use Nintendo's Wiimote as
an auxiliary, unobtrusive gamepad for commands that are impractical to perform
physically or verbally.
System validation was performed by analyzing the performance of
certain tasks and examining user reports. Both confirmed this approach as a
practical and appealing alternative to the game's native interaction scheme.
In sum, this framework provides a fully customizable and flexible
game-controlling tool, thereby broadening the market of game consumers.

Comment: WorldCIST'13 International Conference
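The "spatial restrictions" method mentioned above can be illustrated with a minimal sketch, assuming a pose is recognized when tracked skeleton joints satisfy simple geometric constraints relative to one another. All joint names, poses, and thresholds below are hypothetical illustrations, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class Joint:
    """A tracked skeleton joint in camera coordinates (metres)."""
    x: float
    y: float
    z: float

# Each pose is a set of spatial restrictions: predicates over joint positions.
# Pose names and thresholds are illustrative only.
POSES = {
    "jump": lambda s: s["right_hand"].y > s["head"].y,            # hand raised above head
    "lean_left": lambda s: s["spine"].x - s["hip_center"].x < -0.15,
}

def recognize(skeleton):
    """Return the names of all poses whose spatial restrictions hold."""
    return [name for name, restriction in POSES.items() if restriction(skeleton)]

# Example skeleton frame: the right hand is raised above the head.
frame = {
    "head": Joint(0.0, 1.6, 2.0),
    "right_hand": Joint(0.2, 1.8, 2.0),
    "spine": Joint(0.0, 1.0, 2.0),
    "hip_center": Joint(0.0, 0.9, 2.0),
}
print(recognize(frame))  # → ['jump']
```

A recognized pose name would then be forwarded by the middleware as the corresponding native game input (e.g. a key press), keeping the per-frame check lightweight since each restriction is a constant-time comparison.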