How to mask and unmask a particular object in an image using its location

Hello guys, this is going to be a fairly long explanation (I have included the code, hence the length), so please bear with me.

I have an image, [Trolley_problem.jpg](/upfiles/1521831287642846.jpg), and I want to detect the track and the objects on it and draw bounding boxes around the objects. I did detect the objects and the tracks, but there was a problem: in the track (i.e. line) detection code, the lines on the tram were also detected, which is not desired. The result is [Result1](/upfiles/15218318133388192.jpg).

The code was therefore modified as follows:

1. Mask the tram
2. Detect the tracks
3. Unmask the tram
4. Detect the tram and the other objects

The result is [Result 2](/upfiles/15218321206621661.jpg), and the code is below:

```python
import numpy as np
import cv2

# Read the main image and convert it to grayscale
inputImage = cv2.imread('/home/raov/Downloads/Trolley Problem/Trolley_problem.jpg')
inputImageGray = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)

# Read the templates
tramTemplate = cv2.imread('/home/raov/Downloads/Trolley Problem/tram1.jpg')
humanTemplate = cv2.imread('/home/raov/Downloads/Trolley Problem/h1.jpg')
humanSlantTemplate = cv2.imread('/home/raov/Downloads/Trolley Problem/hr1.jpg')
leverTemplate = cv2.imread('/home/raov/Downloads/Trolley Problem/lever.jpg')

# Store the height, width and channels of the tram template
height, width, channels = tramTemplate.shape

# Region of interest (top-left corner, same size as the tram template)
roi = inputImage[0:height, 0:width]

# Colour values
white = [255, 255, 255]
black = [0, 0, 0]

# Hide the tram by painting the ROI white
for x in range(0, width):
    for y in range(0, height):
        inputImage[y, x] = white

# Save the intermediate result and reopen it
cv2.imwrite(r'/home/raov/Downloads/Trolley Problem/newImage.jpg', inputImage)
newImage = cv2.imread('/home/raov/Downloads/Trolley Problem/newImage.jpg')

# Convert it to grayscale
newImageGray = cv2.cvtColor(newImage, cv2.COLOR_BGR2GRAY)

# Line (track) detection
edges = cv2.Canny(newImageGray, 150, 200, apertureSize=3)
minLineLength = 30  # minimum length of a line; shorter segments are rejected
maxLineGap = 5      # maximum allowed gap between segments to treat them as one line
lines = cv2.HoughLinesP(edges, 1, np.pi/180, 30,
                        minLineLength=minLineLength, maxLineGap=maxLineGap)
for i in range(0, len(lines)):
    for x1, y1, x2, y2 in lines[i]:
        pts = np.array([[x1, y1], [x2, y2]], np.int32)
        cv2.polylines(newImage, [pts], True, (0, 255, 0))

# Insert the text
font = cv2.FONT_HERSHEY_COMPLEX
cv2.putText(newImage, "Tracks Detected", (500, 250), font, 0.5, 255)

# Convert the templates to grayscale
tramTemplateGray = cv2.cvtColor(tramTemplate, cv2.COLOR_BGR2GRAY)
humanTemplateGray = cv2.cvtColor(humanTemplate, cv2.COLOR_BGR2GRAY)
humanSlantTemplateGray = cv2.cvtColor(humanSlantTemplate, cv2.COLOR_BGR2GRAY)
leverTemplateGray = cv2.cvtColor(leverTemplate, cv2.COLOR_BGR2GRAY)

# Store the width and height of the templates
h1, w1 = tramTemplateGray.shape
h2, w2 = humanTemplateGray.shape
h3, w3 = leverTemplateGray.shape
h4, w4 = humanSlantTemplateGray.shape

# Create a mask of the tram template (and, if needed, its inverse)
ret, mask = cv2.threshold(tramTemplateGray, 10, 255, cv2.THRESH_BINARY)
#mask_inv = cv2.bitwise_not(mask)

# Add the tram template back into the new image
newImage_bg = cv2.bitwise_and(roi, roi, mask=mask)
tramTemplate_fg = cv2.bitwise_and(tramTemplate, tramTemplate, mask=mask)

# Put the tram in the ROI and modify the new image
dst = cv2.add(newImage_bg, tramTemplate_fg)
newImage[0:height, 0:width] = dst
cv2.imwrite(r'/home/raov/Downloads/Trolley Problem/New_Image.jpg', newImage)

# Open the resulting image and convert it to grayscale
inputImage = cv2.imread('/home/raov/Downloads/Trolley Problem/New_Image.jpg')
inputImageGray = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)

# Perform the match operations
tramResult = cv2.matchTemplate(inputImageGray, tramTemplateGray, cv2.TM_CCOEFF_NORMED)
humanResult = cv2.matchTemplate(inputImageGray, humanTemplateGray, cv2.TM_CCOEFF_NORMED)
leverResult = cv2.matchTemplate(inputImageGray, leverTemplateGray, cv2.TM_CCOEFF_NORMED)
humanSlantResult = cv2.matchTemplate(inputImageGray, humanSlantTemplateGray, cv2.TM_CCOEFF_NORMED)

# Matching threshold
threshold = 0.75

# Store the coordinates of the matched areas
loc1 = np.where(tramResult >= threshold)
loc2 = np.where(humanResult >= threshold)
loc3 = np.where(leverResult >= threshold)
loc4 = np.where(humanSlantResult >= 0.80)

# Draw rectangles around the matched regions
for pt in zip(*loc1[::-1]):
    cv2.rectangle(inputImage, pt, (pt[0] + w1, pt[1] + h1), (0, 255, 255), 1)
    #cv2.putText(inputImage, 'Tram Detected', (pt[0] + w1, pt[1] + h1), font, 0.5, 255)
cv2.putText(inputImage, "Tram Detected", (200, 50), font, 0.5, 255)

for pt in zip(*loc2[::-1]):
    cv2.rectangle(inputImage, pt, (pt[0] + w2, pt[1] + h2), (0, 0, 255), 1)
cv2.putText(inputImage, "Humans Detected", (800, 50), font, 0.5, 255)

for pt in zip(*loc3[::-1]):
    cv2.rectangle(inputImage, pt, (pt[0] + w3, pt[1] + h3), (255, 255, 0), 1)
cv2.putText(inputImage, "Lever Detected", (480, 50), font, 0.5, 255)

for pt in zip(*loc4[::-1]):
    cv2.rectangle(inputImage, pt, (pt[0] + w4, pt[1] + h4), (0, 0, 255), 1)

# Save the final result
cv2.imwrite(r'/home/raov/Downloads/Trolley Problem/Trolley_Problem_Result.jpg', inputImage)
```
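A side note on the template-matching part: `np.where` returns every location whose score is at or above the threshold, so a single object can produce many overlapping rectangles. If only the single best match is wanted (for example, one bounding box for the tram), `cv2.minMaxLoc` gives its top-left corner directly. A minimal sketch, reusing `tramResult`, `threshold`, `w1`, `h1` and `inputImage` from the code above:

```python
# Best tram match only: with TM_CCOEFF_NORMED the maximum of the result map is the best score.
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(tramResult)
if max_val >= threshold:
    top_left = max_loc                                   # (x, y) of the best match
    bottom_right = (top_left[0] + w1, top_left[1] + h1)
    cv2.rectangle(inputImage, top_left, bottom_right, (0, 255, 255), 1)
```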
The above code works fine only for the input Trolley_problem.jpg; it fails when I change the input. For example, when the input is [FatManTrolleyProblem.jpg](/upfiles/1521832688211398.jpg), the result is [FatMan_Result](/upfiles/15218327455704797.jpg), which is not what I want.

I now understand that one way to do it is:

1. Detect the tram (and the bridge too, in the case of FatManTrolleyProblem.jpg)
2. Mask the tram (and the bridge), since its location, i.e. the bounding box, will be known
3. Detect the tracks and the other objects
4. Unmask the tram (and the bridge)

Can somebody help me out with point 2 (mask the tram) and point 4 (unmask the tram)? The only way I can think of is to find the contours of the tram template, but that doesn't work as expected. (A rough sketch of the bounding-box-based masking I have in mind follows below.)

Regards
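One possible direction for points 2 and 4 (just a sketch, not a tested solution): once the bounding box of the tram is known, keep a copy of the pixels inside it, paint the region white before running the track detection, and write the saved patch back afterwards. The box values below are placeholders for whatever the detection step returns:

```python
import cv2

image = cv2.imread('/home/raov/Downloads/Trolley Problem/Trolley_problem.jpg')
x, y, w, h = 100, 50, 200, 150   # hypothetical tram bounding box from the detection step

# 2. Mask the tram: remember the patch, then paint the region white
saved_patch = image[y:y+h, x:x+w].copy()
image[y:y+h, x:x+w] = (255, 255, 255)

# 3. ...run the track detection on `image` here...

# 4. Unmask the tram: restore the saved pixels
image[y:y+h, x:x+w] = saved_patch
```

The same save-and-restore idea would cover the bridge in FatManTrolleyProblem.jpg as well, with one saved patch per detected box.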
