Virtual Mouse Control Using Webcam

Abstract

Human-Computer Interaction (HCI) is on the rise and evolving day by day. This paper proposes a vision-based cursor control system, a modern approach to HCI. The system uses a webcam and a color detection technique to implement it.

This system provides an alternative to conventional touch-screen systems. The aim of this project is to let the user control the computer using hand gestures as commands. Actions such as left-click, right-click, drag and scroll can be performed using these commands.

This project is developed using the Python programming language. It can be implemented with just a webcam, which is built into most systems, making it very cost-effective.

Introduction

Hand gestures are an effective means of communication among humans. In a Graphical User Interface (GUI), a gesture is analogous to a mouse, which tells the processor what operations to perform.

Hence gestures can be used in the development of HCI technology. A gesture is "a movement of part of the body, especially a hand or the head, to express an idea or meaning". A number of technologies exist to convert hand gestures into a machine-understandable form.

Two such recognized techniques are the Data-Glove method and the Vision-Based method. The Data-Glove method makes use of a glove with sensors embedded in it to translate finger movements into data. The sensors used in these systems vary, and the cost varies accordingly, making the method expensive to implement.

On the other hand, the Vision-Based approach makes use of a camera that interprets hand signs as commands for the computer using various image-processing techniques [2].

Our project uses a vision-based approach and implements functions such as left-click, right-click and drag. It is developed in the Python programming language, which is well suited to image processing thanks to the powerful libraries it possesses, such as OpenCV, Pillow and NumPy.

This project makes use of these powerful libraries for efficiency. OpenCV is used for image processing and for converting images from one form to another. NumPy lets us perform various operations on an image by treating it as an array. PyAutoGUI allows Python to control the mouse and keyboard, so it is also used in our project. Several other supporting libraries are used as well, but the pillars of this project are OpenCV, NumPy and PyAutoGUI.

Proposed Method

The proposed system uses colored caps to define the hand gestures [1]. The block diagram of the proposed system is shown in Figure 1. The web camera provides the input, which is then used to convert the gestures into a machine-readable form. OpenCV detects the color, and various NumPy operations extract only that color while ignoring the background. Once the color is extracted, its position with respect to the frame is calculated, which defines the position of the cursor [4]. The various entities are explained below in more detail.

Web Camera

The web camera captures the hand movements made by the user. In OpenCV, video can be read from a camera connected to the computer. To do this, a VideoCapture object is first created; it can take either a camera device or a video file. To use a device, its index is passed as the argument of the constructor, as sketched below.
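
A minimal sketch of this step, assuming the built-in camera at device index 0:

    import cv2

    # Pass 0 for the built-in camera, or 1 for an externally connected one.
    cap = cv2.VideoCapture(0)
    if not cap.isOpened():
        raise RuntimeError("Unable to open the camera")

    ret, frame = cap.read()   # grab one BGR frame; ret is False on failure
    cap.release()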

Color Detection

Color detection is the technique of locating a region of a specific color in an image or video. A color range is defined by setting lower and upper threshold values, and each pixel of a frame is then checked to see whether it falls within that range. In this way the color is detected; it can be visualized as shown in Figure 4, and a sketch is given below.
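
A minimal sketch of the thresholding, assuming frame holds a BGR frame read as above; the HSV bounds shown are illustrative values for a red cap and must be tuned to the actual cap color and lighting:

    import cv2
    import numpy as np

    # Illustrative HSV bounds for a red cap; these values are assumptions.
    LOWER = np.array([160, 100, 100])
    UPPER = np.array([179, 255, 255])

    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)  # frame: BGR image from the webcam
    mask = cv2.inRange(hsv, LOWER, UPPER)         # 255 where the pixel is in range, else 0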

Contour Extraction

Contours are the boundaries of the shapes in an image. The contours of the detected colored pixels are therefore computed, and the hand gesture is recognized from the contours obtained. Two types of gestures are used here: the open gesture and the closed gesture. The contours help determine which of the two is being made. The extracted contours are shown in Figure 5; a sketch of the extraction step follows.
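
A sketch of the extraction step, assuming mask is the binary image from the color-detection step; the noise-area threshold is an assumed value:

    import cv2

    # With OpenCV 4.x, findContours returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Discard tiny contours produced by noise; 500 px² is an assumption.
    contours = [c for c in contours if cv2.contourArea(c) > 500]

    # Two remaining objects suggest the open gesture, one the closed gesture.
    gesture = "open" if len(contours) == 2 else "closed"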

Hand Tracking

Based on the contours, the mouse movements are decided. The position of the fingers with respect to the frame is calculated, and the mouse movements are derived from it. The number of colored objects is also counted, which helps determine the operation to be performed.

Methodology

The algorithm design is shown in Figure 2. The steps taken at each phase are listed there and briefly explained in the sections below. The design can be divided into two phases: the first is image acquisition and the second is mouse movements.

Image Acquisition

  • Setting up Camera

The camera is the heart of this project; the camera connected to the laptop or desktop does the job of collecting the input. As discussed earlier, a VideoCapture object is created and an argument specifying the device index is passed to it: 0 if the camera built into the system is used, 1 if an externally connected camera is used.

  • Capturing frames

The camera plays a vital role in acquiring the input. An infinite loop is run so that frames are captured continuously for as long as the program is running. The captured frames are processed using the libraries and converted from the RGB color space (BGR, as OpenCV stores it) to the HSV color space; OpenCV provides more than 150 color-space conversion methods. A sketch of the capture loop follows.
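
A sketch of the capture loop; the 'q'-key exit condition is an illustrative addition:

    import cv2

    cap = cv2.VideoCapture(0)
    while True:                     # capture frames continuously
        ret, frame = cap.read()
        if not ret:
            break
        # OpenCV stores frames in BGR order; convert to HSV for color detection.
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # ... color detection, masking and cursor control go here ...
        if cv2.waitKey(1) & 0xFF == ord('q'):   # press 'q' to stop
            break
    cap.release()
    cv2.destroyAllWindows()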

  • Masking technique

The detected colors are isolated using the masking technique. The contours give us the boundaries of the colored regions, and masking then displays only the colored object while blacking out the background. This is done by performing a bitwise AND of the input image and the threshold image; the result of the AND operation is displayed using the imshow() function [2]. A screenshot of the masking technique is shown in Figure 4, and a sketch is given below.
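
A sketch of the masking step, assuming frame and mask from the previous snippets:

    import cv2

    # Bitwise AND keeps only the pixels where mask is non-zero.
    result = cv2.bitwise_and(frame, frame, mask=mask)

    cv2.imshow("mask", mask)       # binary view of the detected color
    cv2.imshow("result", result)   # colored object on a blacked-out background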

  • Displaying Frames

Every stage of the image acquisition phase can be displayed on the screen using the imshow() function. The waitKey() function plays a vital role in displaying the frames: it introduces a short delay during which window events are processed, and without it the frames would not appear on the screen.
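
A sketch of the display step, assuming frame is the current frame inside the capture loop; the Esc-key condition is illustrative:

    import cv2

    cv2.imshow("frame", frame)   # show the current frame in a window
    key = cv2.waitKey(1)         # wait up to 1 ms for a key; refreshes the window
    # cv2.waitKey(0) would instead block until any key is pressed,
    # freezing the video on a single frame.
    if key & 0xFF == 27:         # Esc closes the windows
        cv2.destroyAllWindows()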

Mouse Movements

The second phase of the algorithm design is the mouse movements. To drive them, the colored objects are detected and the center of each object is calculated by averaging the maximum and minimum points of its bounding box, giving us two points. When the open gesture is used, the distance between the two objects is calculated, as seen in Figure 5, and the center of the connecting line is highlighted. The position of this point with respect to the frame is determined and acts as the reference for the cursor; the mouse location is then set according to this reference. The open gesture thus corresponds to mouse movement [3]. When only one object is observed in the frame, a click operation is performed. These operations are explained in detail below, starting with a sketch of the movement step.
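
A sketch of the movement step, assuming contours holds the two detected objects and frame is the current frame; mapping frame coordinates to screen coordinates by simple scaling is an assumption of this sketch:

    import cv2
    import pyautogui

    SCREEN_W, SCREEN_H = pyautogui.size()

    def bbox_center(contour):
        # Average of the bounding box's minimum and maximum points.
        x, y, w, h = cv2.boundingRect(contour)
        return x + w // 2, y + h // 2

    (x1, y1), (x2, y2) = bbox_center(contours[0]), bbox_center(contours[1])
    cx, cy = (x1 + x2) // 2, (y1 + y2) // 2   # midpoint of the connecting line

    # Scale the frame position to screen coordinates and move the cursor.
    frame_h, frame_w = frame.shape[:2]
    pyautogui.moveTo(cx * SCREEN_W // frame_w, cy * SCREEN_H // frame_h)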

  • Click

The click operation is performed with the closed gesture. It is similar to the open gesture, except that the closed gesture appears as a single object, and we calculate the center of that single object. The operation is performed at the location where the mouse has already been set: whenever the system recognizes a closed gesture, the left-click operation is performed, as sketched below.
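
A sketch, assuming contours holds the colored objects detected in the current frame:

    import pyautogui

    # A single detected object means the closed gesture: perform a left click
    # at the position where the cursor has already been placed.
    if len(contours) == 1:
        pyautogui.click()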

  • Right-Click

The right-click operation also makes use of the closed gesture. A delay is introduced to differentiate the right-click from the left-click: if the system sees the closed gesture for more than 10 frames, the right-click operation is performed, as sketched below.
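
A sketch of the frame-count delay; fully disambiguating this from the left-click (e.g. suppressing the immediate click) is left out for brevity:

    import pyautogui

    RIGHT_CLICK_FRAMES = 10   # threshold from the text: more than 10 frames
    close_counter = 0

    # Inside the per-frame loop:
    if len(contours) == 1:            # closed gesture seen in this frame
        close_counter += 1
        if close_counter > RIGHT_CLICK_FRAMES:
            pyautogui.rightClick()
            close_counter = 0
    else:
        close_counter = 0             # gesture released; reset the counter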

  • Drag

To implement dragging, we introduce a variable called 'drag flag'. When the click operation is performed, this flag is set to 1. If the flag is set to 1 and the open gesture is used immediately afterwards, the drag operation is performed; if the flag is not set to 1 and there is an open gesture, ordinary mouse movement is employed. A sketch of this logic follows.
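
One possible sketch of the drag-flag logic; the text does not specify when the flag is reset, so that part is an assumption:

    import pyautogui

    drag_flag = 0   # set to 1 once a click has been performed

    # Inside the per-frame loop (cx, cy is the cursor reference point):
    if len(contours) == 1:            # closed gesture: click sets the flag
        drag_flag = 1
    elif len(contours) == 2:          # open gesture
        if drag_flag == 1:
            pyautogui.mouseDown()     # hold the button so that...
            pyautogui.moveTo(cx, cy)  # ...movement becomes a drag
        else:
            pyautogui.moveTo(cx, cy)  # ordinary cursor movement
    # Releasing the button (mouseUp) and resetting drag_flag when the drag
    # ends are left to the full implementation.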

Results

As this system mainly aims to reduce the use of hardware, it is built at zero cost and can run on any platform. The prerequisites to consider before implementing it are a camera of at least 2 MP, at least a Pentium processor and at least 256 MB of RAM. Screenshots of the project are shared below.

Conclusion

Hence, this project is a good alternative to conventional touch-screen panels. It has many advantages, such as zero implementation cost and reduced hardware requirements. It also has some limitations: the resolution of the camera makes a difference, and the lighting conditions of the surroundings affect the performance of the system. With its pros and cons, this project nevertheless sets an example for the new era of HCI technology.

References

  1. Sandeep Thakur, Rajesh Mehra, Buddi Prakash, "Vision Based Mouse Control Using Hand Gestures", 2015 International Conference on Soft Computing Techniques and Implementations (ICSCTI), 2015.
  2. R. Meena Prakash, T. Deepa, T. Gunasundari, N. Kasturi, "Gesture Recognition and Finger Tip Detection for Human Computer Interaction", 2017 International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS), 2017.