Finger Detection and Tracking using OpenCV and Python


Tracking the movement of a finger is an important feature of many computer vision applications. In this application, a histogram-based approach is used to separate the hand from the background frame. Thresholding and filtering techniques are used for background cancellation to obtain optimal results.

One of the challenges I faced in detecting fingers was differentiating the hand from the background and identifying the tip of a finger. I’ll show you the technique I used for tracking a finger in this project. To see finger detection and tracking in action, check out this video.

In an application where you want to track a user’s hand movement, a skin color histogram is very useful. The histogram is then used to subtract the background from an image, leaving only the parts of the image that contain skin tone.

A much simpler method to detect skin would be to find pixels that are in a certain RGB or HSV range. If you want to know more about this approach follow here.
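For reference, the range check can be a single cv2.inRange call. Below is a minimal sketch; the HSV bounds are illustrative values of my own, not ones from this project, and would need tuning for your lighting:

  import cv2
  import numpy as np

  def skin_mask_by_range(frame):
      # Rough, commonly used HSV bounds for skin tones; treat them as a
      # starting point, not as values from this application
      hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
      lower = np.array([0, 48, 80], dtype=np.uint8)
      upper = np.array([20, 255, 255], dtype=np.uint8)
      mask = cv2.inRange(hsv, lower, upper)
      return cv2.bitwise_and(frame, frame, mask=mask)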

The problem with the range-based approach is that changing light conditions and varying skin colors can really throw off the detection. A histogram, on the other hand, tends to be more accurate because it takes the current light conditions into account.

Hand over the rectangles

Green rectangles are drawn on the frame, and the user places their hand inside them. The application takes skin color samples from the user’s hand and creates a histogram from them.

The rectangles are drawn with the following function:

  import cv2
  import numpy as np

  total_rectangle = 9  # number of 10x10 sampling rectangles


  def draw_rect(frame):
      rows, cols, _ = frame.shape
      global total_rectangle, hand_rect_one_x, hand_rect_one_y, hand_rect_two_x, hand_rect_two_y

      # Top-left corners of the nine rectangles, laid out in a 3x3 grid
      # around the centre of the frame
      hand_rect_one_x = np.array(
          [6 * rows / 20, 6 * rows / 20, 6 * rows / 20, 9 * rows / 20, 9 * rows / 20,
           9 * rows / 20, 12 * rows / 20, 12 * rows / 20, 12 * rows / 20], dtype=np.uint32)

      hand_rect_one_y = np.array(
          [9 * cols / 20, 10 * cols / 20, 11 * cols / 20, 9 * cols / 20, 10 * cols / 20,
           11 * cols / 20, 9 * cols / 20, 10 * cols / 20, 11 * cols / 20], dtype=np.uint32)

      # Bottom-right corners: each rectangle is 10 pixels on a side
      hand_rect_two_x = hand_rect_one_x + 10
      hand_rect_two_y = hand_rect_one_y + 10

      for i in range(total_rectangle):
          cv2.rectangle(frame, (hand_rect_one_y[i], hand_rect_one_x[i]),
                        (hand_rect_two_y[i], hand_rect_two_x[i]),
                        (0, 255, 0), 1)

      return frame


There’s nothing too complicated going on here. I have created four arrays, hand_rect_one_x, hand_rect_one_y, hand_rect_two_x, and hand_rect_two_y, to hold the coordinates of each rectangle. The code then iterates over these arrays and draws the rectangles on the frame using cv2.rectangle. Note that cv2.rectangle expects points as (x, y), i.e. (column, row), which is why the y arrays come first in each point tuple. Here, total_rectangle is just the length of the arrays, i.e. 9.

Now that the user knows where to place his or her palm, the next step is to extract the pixels inside these rectangles and use them to generate an HSV histogram.

  def hand_histogram(frame):
      global hand_rect_one_x, hand_rect_one_y

      hsv_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
      roi = np.zeros([90, 10, 3], dtype=hsv_frame.dtype)

      # Stack the 10x10 patch under each rectangle into a 90x10 strip
      for i in range(total_rectangle):
          roi[i * 10: i * 10 + 10, 0: 10] = hsv_frame[hand_rect_one_x[i]:hand_rect_one_x[i] + 10,
                                                      hand_rect_one_y[i]:hand_rect_one_y[i] + 10]

      # 2D histogram over hue and saturation, then min-max normalization
      hand_hist = cv2.calcHist([roi], [0, 1], None, [180, 256], [0, 180, 0, 256])
      return cv2.normalize(hand_hist, hand_hist, 0, 255, cv2.NORM_MINMAX)


Here the function transforms the input frame to HSV. Using NumPy, we create an image of size 90×10 with 3 color channels and name it ROI (Region of Interest). The loop then takes the 900 pixel values from the green rectangles and puts them into the ROI matrix.

cv2.calcHist creates a histogram from the ROI matrix for the skin color, and cv2.normalize normalizes this matrix using the norm type cv2.NORM_MINMAX. We now have a histogram we can use to detect skin regions in subsequent frames.
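To show how these pieces could fit together, here is a minimal, hypothetical sampling loop: it keeps drawing the rectangles until the user presses a key, then builds the histogram from the current frame (the 'z' key and the window name are my assumptions, not necessarily the project’s):

  capture = cv2.VideoCapture(0)
  hand_hist = None

  while capture.isOpened():
      ret, frame = capture.read()
      if not ret:
          break
      # Sample from the raw frame, before the green borders are painted on it
      if cv2.waitKey(1) & 0xFF == ord('z'):
          hand_hist = hand_histogram(frame)
          break
      cv2.imshow("Live Feed", draw_rect(frame))

  capture.release()
  cv2.destroyAllWindows()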


Now that we hold a skin color histogram, we can use it to find the parts of a frame that contain skin. OpenCV provides a convenient method, cv2.calcBackProject, that uses a histogram to separate features in an image. I used this function to apply the skin color histogram to the frame. If you want to read more about back projection, you can read here and here.

  def hist_masking(frame, hist):
      hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
      dst = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)

      # Smooth the back projection with an elliptical kernel
      disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (31, 31))
      cv2.filter2D(dst, -1, disc, dst)

      ret, thresh = cv2.threshold(dst, 150, 255, cv2.THRESH_BINARY)

      # Expand the single-channel mask to 3 channels so it can be
      # ANDed with the BGR frame
      thresh = cv2.merge((thresh, thresh, thresh))

      return cv2.bitwise_and(frame, thresh)


In the first two lines, I convert the input frame to HSV and apply cv2.calcBackProject with the skin color histogram hist. After that, I use filtering and thresholding to smooth the result. Lastly, I mask the input frame with the cv2.bitwise_and function. The final frame should contain only the skin color regions of the frame.

Hand separated from background (1)

Hand separated from background (2)

Now we have a frame with only the skin color regions, but what we really want is the location of a fingertip. Using OpenCV you can find contours in a frame; if you don’t know what a contour is, you can read here. From the contours you can compute convexity defects, which are potential fingertip locations.
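The fingertip-finding code below relies on a few helpers (contours, max_contour, centroid) whose bodies the article does not list. Here is one plausible reconstruction; the two-value findContours return assumes OpenCV 4.x:

  def contours(hist_mask_image):
      # The masked frame is BGR; convert to gray and threshold before
      # extracting contours
      gray = cv2.cvtColor(hist_mask_image, cv2.COLOR_BGR2GRAY)
      _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)
      cont, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
      return cont

  def max_contour(contour_list):
      # Assume the largest contour by area is the hand
      if not contour_list:
          return None
      return max(contour_list, key=cv2.contourArea)

  def centroid(max_cont):
      # Image moments give the contour's centre of mass
      moment = cv2.moments(max_cont)
      if moment['m00'] == 0:
          return None
      return int(moment['m10'] / moment['m00']), int(moment['m01'] / moment['m00'])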

In my application, I needed to find the tip of the finger with which the user is pointing. To do this, I determined the convexity defect that is farthest from the centroid of the contour. This is done by the following code:

  def manage_image_opr(frame, hand_hist):
      hist_mask_image = hist_masking(frame, hand_hist)
      contour_list = contours(hist_mask_image)
      max_cont = max_contour(contour_list)

      if max_cont is not None:
          # Mark the centroid of the hand contour in purple
          cnt_centroid = centroid(max_cont)
          cv2.circle(frame, cnt_centroid, 5, [255, 0, 255], -1)

          hull = cv2.convexHull(max_cont, returnPoints=False)
          defects = cv2.convexityDefects(max_cont, hull)
          far_point = farthest_point(defects, max_cont, cnt_centroid)
          print("Centroid : " + str(cnt_centroid) + ", farthest Point : " + str(far_point))
          # Mark the fingertip candidate in red
          cv2.circle(frame, far_point, 5, [0, 0, 255], -1)

          # Keep only the 20 most recent fingertip positions
          if len(traverse_point) < 20:
              traverse_point.append(far_point)
          else:
              traverse_point.pop(0)
              traverse_point.append(far_point)

          draw_circles(frame, traverse_point)


Contour in Frame (1)

Contour in Frame (2)

The code first masks the frame, then determines the largest contour. For that contour, it finds the convex hull, the centroid, and the convexity defects.

Defects in red circle and Centroid in purple circle

Now that you have all these defects, you find the one farthest from the center of the contour. This point is assumed to be the pointing fingertip. The center is drawn in purple and the farthest point in red. And there you have it: you’ve found a fingertip.

Centroid in purple color and Farthest point in red color
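The farthest_point helper is not listed in the article either. Here is a sketch of how it could work, comparing the start point of every defect against the centroid:

  def farthest_point(defects, contour, centroid):
      # Each defect row holds (start_idx, end_idx, farthest_idx, depth);
      # pick the defect start point with the largest squared distance
      # from the contour centroid
      if defects is None or centroid is None:
          return None
      starts = defects[:, 0, 0]
      points = contour[starts, 0, :]
      cx, cy = centroid
      sq_dist = (points[:, 0] - cx) ** 2 + (points[:, 1] - cy) ** 2
      farthest = points[sq_dist.argmax()]
      return int(farthest[0]), int(farthest[1])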

The hard part is done. All that’s left is to create a list to store the changing location of the farthest_point across frames. How many points you store is up to you; I am storing only the last 20.
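draw_circles is the last missing piece. A plausible sketch that fades the trail by shrinking the radius of older points (the fading formula is my choice, not necessarily the project’s):

  traverse_point = []

  def draw_circles(frame, traverse_point):
      # Older points get a slightly smaller radius so the trail fades
      for i, point in enumerate(traverse_point):
          if point is not None:
              cv2.circle(frame, point, int(5 - (i * 15) / 100), [0, 255, 255], -1)

With these helpers in place, the tracking loop simply calls manage_image_opr(frame, hand_hist) on each new frame once the histogram has been sampled, and shows the annotated frame with cv2.imshow.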


Lastly, thank you for reading this post. For more awesome posts, you can also follow me on Twitter — iamarpandey, Github — amarlearning.

Happy coding! 🤓

Article from: dev.to
