
Gesture Interaction Library for SAP Web/Desktop Applications


We are a team from Software Logistics, part of Lifecycle Management, and we bring you the next generation of input mechanisms: gesture and voice. The technology itself already exists, but two questions remain: can it be integrated into any currently running application, web or desktop? And if so, how easily can existing gestures be mapped to the actions and navigation of the application? Our answer is a gesture and voice library built on the Kinect sensor.

 

The Gesture SDK is a framework/plugin, easily consumed by web and desktop applications, that recognizes various body gestures. These gestures can be used in combination with speech commands to trigger web-based or OS-level actions.

 

The Server module connects to the Kinect device and obtains the coordinates of the detected human body at 30 frames per second. The server handles the following (a sketch of the broadcast loop follows the list):

 

  1. Skeleton/joint tracking of a human.
  2. Speech recognition using Kinect.
  3. Data broadcast using a socket listener.
  4. Configuration-based gesture and speech recognition.
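
To make the data-broadcast step concrete, here is a minimal sketch in TypeScript, using Node.js and the third-party "ws" package as a stand-in for the actual Kinect server. The port, the message shape, and the fake frame source are illustrative assumptions, not the documented protocol.

```typescript
import { WebSocketServer, WebSocket } from "ws";

interface Joint { name: string; x: number; y: number; z: number; }
interface SkeletonFrame { timestamp: number; joints: Joint[]; }

// Placeholder: the real server reads this from the Kinect SDK.
function readSkeletonFrame(): SkeletonFrame {
  return {
    timestamp: Date.now(),
    joints: [{ name: "HandRight", x: 0.1, y: 0.3, z: 1.2 }],
  };
}

const wss = new WebSocketServer({ port: 8181 });

// Broadcast a skeleton frame to every connected client ~30 times/sec.
setInterval(() => {
  const payload = JSON.stringify(readSkeletonFrame());
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) client.send(payload);
  }
}, 1000 / 30);
```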


[Image: gestureBlog1.png]

Once the SDK recognizes gestures, users and developers can bind the existing gestures to actions on the HTML page.

 

INTEGRATION INTO HTML APPLICATIONS

 

The Gesture SDK can easily be plugged into any HTML page. A WebSocket connection lets the application communicate with the server: the server identifies the gesture and communicates it to the application, which can then trigger actions based on it. A minimal client-side sketch follows.
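
This sketch assumes the server pushes recognized gesture names as JSON messages such as {"gesture": "SwipeLeft"}; the URL, port, gesture names, and bound actions are assumptions for illustration.

```typescript
type GestureMessage = { gesture: string };

// Map gesture names to page actions (here, simple browser navigation).
const actions: Record<string, () => void> = {
  SwipeLeft:  () => history.back(),
  SwipeRight: () => history.forward(),
};

const socket = new WebSocket("ws://localhost:8181");

socket.onmessage = (event: MessageEvent<string>) => {
  const msg: GestureMessage = JSON.parse(event.data);
  const action = actions[msg.gesture];
  if (action) action(); // trigger the bound page action
};
```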

 

[Image: gestureBlog2.png]

As long as a person's skeleton is within view of the Kinect device to perform a gesture, the application responds with the triggers mapped to that gesture.

 

INTEGRATION INTO WINDOWS OS APPLICATIONS

 

The server can trigger OS-level actions such as keyboard shortcuts and process-level actions. This ability to drive OS-level actions with gestures indirectly lets desktop applications be controlled by gestures as well.

 

So here, gestures and speech commands can be mapped to keyboard shortcuts, and those shortcuts are fired when the corresponding gesture is recognized. Similarly, we can trigger mouse movements using gesture or speech, or switch between foreground and background processes by interacting with the operating system through the captured gesture or speech. A sketch of such a mapping follows.
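
Here is a minimal sketch of that mapping in TypeScript, using the third-party robotjs package to send synthetic key presses from a Node.js process; the gesture names and the shortcut table are illustrative assumptions, not the library's actual configuration format.

```typescript
import robot from "robotjs";

// Gesture -> keyboard shortcut (a key plus modifiers), e.g. Alt+Tab
// to switch between foreground processes.
const shortcuts: Record<string, { key: string; modifiers: string[] }> = {
  SwipeRight:     { key: "tab", modifiers: ["alt"] }, // switch window
  RaiseBothHands: { key: "f4",  modifiers: ["alt"] }, // close window
};

// Called whenever the server reports a recognized gesture.
export function onGesture(gesture: string): void {
  const s = shortcuts[gesture];
  if (s) robot.keyTap(s.key, s.modifiers); // fire the mapped shortcut
}
```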


HOW IT IS ACHIEVED

 

The Kinect sensor is used to capture the skeleton, enabling detection of gesture motion. The skeleton data is transferred using socket programming.

 

The skeleton data consists of information about the joints of the body, which are tracked continuously. Similarly, Kinect can also help with audio detection, thanks to its noise-filtering capability.

 

The gestures detected are processed by the Gesture SDK, which identifies each gesture and sends it to the application. The application then triggers the action based on the gesture identified. A simplified sketch of such detection over the joint stream follows.
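
A real gesture SDK uses richer state machines than this, but a horizontal-swipe detector over the joint stream illustrates the idea; the joint name, distance threshold, and time window are assumptions for illustration.

```typescript
interface Joint { name: string; x: number; y: number; z: number; }
interface SkeletonFrame { timestamp: number; joints: Joint[]; }

const SWIPE_DISTANCE = 0.4;  // metres of horizontal hand travel
const SWIPE_WINDOW_MS = 500; // max time allowed for the swipe

let start: { x: number; t: number } | null = null;

// Feed frames in order; returns a gesture name once one is recognized.
export function detectSwipe(frame: SkeletonFrame): string | null {
  const hand = frame.joints.find(j => j.name === "HandRight");
  if (!hand) return null;
  if (start === null) {
    start = { x: hand.x, t: frame.timestamp };
    return null;
  }
  if (frame.timestamp - start.t > SWIPE_WINDOW_MS) {
    start = { x: hand.x, t: frame.timestamp }; // window expired, restart
    return null;
  }
  const dx = hand.x - start.x;
  if (Math.abs(dx) >= SWIPE_DISTANCE) {
    start = null; // reset after firing
    return dx < 0 ? "SwipeLeft" : "SwipeRight";
  }
  return null;
}
```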

