Multi-Person Pose Estimation in OpenCV using OpenPose (2023)

In our previous post, we used the OpenPose model with OpenCV's DNN module to run Human Pose Estimation for a single person. In this post, we discuss how to use it for multi-person pose estimation.

When there are multiple people in a photo, pose estimation produces multiple independent sets of keypoints. We then have to figure out which keypoints belong to the same person.

For this article, we will use the 18-point model trained on the COCO dataset. The keypoints, together with the numbering used by the COCO dataset, are given below:

COCO Output Format
Nose - 0, Neck - 1, Right Shoulder - 2, Right Elbow - 3, Right Wrist - 4,
Left Shoulder - 5, Left Elbow - 6, Left Wrist - 7, Right Hip - 8,
Right Knee - 9, Right Ankle - 10, Left Hip - 11, Left Knee - 12,
Left Ankle - 13, Right Eye - 14, Left Eye - 15, Right Ear - 16,
Left Ear - 17, Background - 18
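For quick reference in code, the mapping above can be written out as a plain Python list. This is a convenience sketch of our own; the names KEYPOINT_NAMES and KEYPOINT_INDEX are not part of the model or the provided code:

```python
# COCO 18-point model: index -> body part name, plus the background class.
# KEYPOINT_NAMES is our own helper list, mirroring the table above.
KEYPOINT_NAMES = [
    "Nose", "Neck", "Right Shoulder", "Right Elbow", "Right Wrist",
    "Left Shoulder", "Left Elbow", "Left Wrist", "Right Hip",
    "Right Knee", "Right Ankle", "Left Hip", "Left Knee",
    "Left Ankle", "Right Eye", "Left Eye", "Right Ear",
    "Left Ear", "Background",
]

# Reverse lookup, e.g. KEYPOINT_INDEX["Left Shoulder"] == 5
KEYPOINT_INDEX = {name: i for i, name in enumerate(KEYPOINT_NAMES)}
```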

1. Network Architecture

The OpenPose architecture is shown below.

The model takes as input a color image of size H x W and produces as output an array of matrices, consisting of the confidence maps of keypoints and the Part Affinity heatmaps for each keypoint pair. The network architecture consists of two stages, as explained below:

  1. Stage 0: The first 10 layers of VGGNet are used to create feature maps for the input image.
  2. Stage 1: A multi-stage, 2-branch CNN is used, where
    • The first branch predicts a set of 2D confidence maps (S) of body part locations (e.g. elbow, knee, etc.). A confidence map is a grayscale image that has high values at locations where the likelihood of a certain body part is high. For example, the confidence map for the Left Shoulder is shown in Figure 2 below. It has high values at all locations where there is a left shoulder.

      For the 18-point model, the first 19 matrices of the output correspond to the confidence maps.

      Figure 2: Confidence map of the Left Shoulder for the given image


    • The second branch predicts a set of 2D vector fields (L) of Part Affinity Fields (PAFs), which encode the degree of association between parts (keypoints). In Figure 3 below, the part affinity between the Neck and the Left Shoulder is shown.

      Figure 3: Part Affinity maps for the Neck - Left Shoulder pair for the given image

The confidence maps are used to find the keypoints, and the affinity maps are used to get the valid connections between the keypoints.
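For the 18-point COCO model, the raw network output has 57 channels: the first 19 are the confidence maps (18 parts plus background), and the remaining 38 are the PAFs (an x and a y component for each limb pair). A minimal numpy sketch of slicing such an output blob (the random array below merely stands in for a real net.forward() result):

```python
import numpy as np

# Stand-in for net.forward(): shape (1, channels, h, w); 57 = 19 + 38 for COCO.
h, w = 46, 46
output = np.random.rand(1, 57, h, w).astype(np.float32)

nPoints = 18
confidence_maps = output[0, :nPoints + 1, :, :]  # 19 maps: 18 parts + background
pafs = output[0, nPoints + 1:, :, :]             # 38 maps: x/y fields, one pair per limb
```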

2. Download model weights

Use the script provided with the code to download the model weights file. Note that the configuration proto files are already present in the folders.

Run the following from the command line in the downloaded folder (we assume the download script is named getModels.sh, as in the provided code):

sudo chmod a+x getModels.sh
./getModels.sh

Check the folders to make sure that the model binaries (.caffemodel files) have been downloaded. Once you have downloaded the weights file, put it in the "pose/coco/" folder.

3. Step 1: Generate output from the image

3.1. Load the network



Python

protoFile = "pose/coco/pose_deploy_linevec.prototxt"
weightsFile = "pose/coco/pose_iter_440000.caffemodel"
net = cv2.dnn.readNetFromCaffe(protoFile, weightsFile)

C++

cv::dnn::Net inputNet = cv::dnn::readNetFromCaffe("./pose/coco/pose_deploy_linevec.prototxt", "./pose/coco/pose_iter_440000.caffemodel");

3.2. Load the image and create an input blob


Python

image1 = cv2.imread("group.jpg")
# Fix the input height and compute the width according to the aspect ratio
inHeight = 368
inWidth = int((inHeight / frameHeight) * frameWidth)
inpBlob = cv2.dnn.blobFromImage(image1, 1.0 / 255, (inWidth, inHeight), (0, 0, 0), swapRB=False, crop=False)

C++

std::string inputFile = "./group.jpg";
if (argc > 1) {
    inputFile = std::string(argv[1]);
}
cv::Mat input = cv::imread(inputFile, CV_LOAD_IMAGE_COLOR);
cv::Mat inputBlob = cv::dnn::blobFromImage(input, 1.0 / 255.0, cv::Size((int)((368 * input.cols) / input.rows), 368), cv::Scalar(0, 0, 0), false, false);

3.3. Forward pass through the network


Python

net.setInput(inpBlob)
output = net.forward()

C++

inputNet.setInput(inputBlob);
cv::Mat netOutputBlob = inputNet.forward();

3.4. Sample output

We first resize the output to the same size as the input. Then we check the confidence map corresponding to the nose keypoint. You can also use OpenCV's alpha blending (cv2.addWeighted) to overlay the probMap on the image.

Python

i = 0
probMap = output[0, i, :, :]
probMap = cv2.resize(probMap, (frameWidth, frameHeight))
plt.imshow(cv2.cvtColor(image1, cv2.COLOR_BGR2RGB))
plt.imshow(probMap, alpha=0.6)

Figure 4: Confidence map corresponding to the nose keypoint.

4. Step 2: Detection of keypoints

As seen in the figure above, the zeroth matrix gives the confidence map for the nose. Similarly, the first matrix corresponds to the neck, and so on. For a single person we would find the location of each keypoint by simply taking the maximum of its confidence map. However, we cannot do this in the multi-person scenario, since one map can contain peaks for several people.

Note: The explanation and code snippets in this section belong to the getKeypoints() function.

For every keypoint, we apply a threshold (0.1 in this case) to the confidence map.


Python

mapSmooth = cv2.GaussianBlur(probMap, (3, 3), 0, 0)
mapMask = np.uint8(mapSmooth > threshold)

C++

cv::Mat smoothProbMap;
cv::GaussianBlur(probMap, smoothProbMap, cv::Size(3, 3), 0, 0);

cv::Mat maskedProbMap;
cv::threshold(smoothProbMap, maskedProbMap, threshold, 255, cv::THRESH_BINARY);

This gives a matrix containing blobs in the region corresponding to the keypoint, as shown below.
Figure 5: Confidence map after applying the threshold

To find the exact location of the keypoints, we need to find the maximum for each blob. We do the following:

  1. First find all the contours of the region corresponding to the keypoints.
  2. Create a mask for this region.
  3. Extract the probMap for this region by multiplying the probMap with this mask.
  4. Find the local maximum for this region. This is done for each contour (keypoint region).
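The four steps above can be illustrated without OpenCV on a toy confidence map: threshold it, label the connected blobs (a small flood fill stands in for cv2.findContours), and take each blob's maximum. This is our own numpy sketch, not the getKeypoints() code itself:

```python
import numpy as np
from collections import deque

# Toy probability map with two separate peaks (e.g. two people's noses).
probMap = np.zeros((7, 7), dtype=np.float32)
probMap[1, 1], probMap[1, 2] = 0.9, 0.5
probMap[5, 4], probMap[5, 5] = 0.4, 0.8

threshold = 0.1
mask = probMap > threshold

def blob_maxima(probMap, mask):
    """Return (x, y, score) for the maximum of each connected blob in mask."""
    h, w = mask.shape
    seen = np.zeros_like(mask)
    peaks = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                best = (sx, sy)                  # track this blob's maximum
                queue = deque([(sy, sx)])
                seen[sy, sx] = True
                while queue:                     # flood fill, 4-connectivity
                    y, x = queue.popleft()
                    if probMap[y, x] > probMap[best[1], best[0]]:
                        best = (x, y)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                peaks.append(best + (float(probMap[best[1], best[0]]),))
    return peaks

keypoints = blob_maxima(probMap, mask)  # one (x, y, score) triple per blob
```

Each blob yields exactly one keypoint, which is what lets several people share a single confidence map.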


Python

# Find the blobs (OpenCV 3.x returns 3 values from findContours)
_, contours, _ = cv2.findContours(mapMask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

# For each blob, find the maximum
for cnt in contours:
    blobMask = np.zeros(mapMask.shape)
    blobMask = cv2.fillConvexPoly(blobMask, cnt, 1)
    maskedProbMap = mapSmooth * blobMask
    _, maxVal, _, maxLoc = cv2.minMaxLoc(maskedProbMap)
    keypoints.append(maxLoc + (probMap[maxLoc[1], maxLoc[0]],))

C++

std::vector<std::vector<cv::Point> > contours;
cv::findContours(maskedProbMap, contours, cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE);

for (int i = 0; i < contours.size(); ++i) {
    cv::Mat blobMask = cv::Mat::zeros(smoothProbMap.rows, smoothProbMap.cols, smoothProbMap.type());
    cv::fillConvexPoly(blobMask, contours[i], cv::Scalar(1));

    double maxVal;
    cv::Point maxLoc;
    cv::minMaxLoc(smoothProbMap.mul(blobMask), 0, &maxVal, 0, &maxLoc);

    keyPoints.push_back(KeyPoint(maxLoc, probMap.at<float>(maxLoc.y, maxLoc.x)));
}

We store the x, y coordinates and the probability score for each keypoint. We also assign an ID to each keypoint that we find. This is used later when joining the pairs, i.e. connections between keypoints.

The keypoints detected for the input image are shown below. You can see that the model does a good job even for a partially visible person and even for a person facing away from the camera.

Figure 6: Keypoints detected on the input image

The keypoints are also shown below without overlaying them on the input image.
Figure 7: Detected points plotted on a black background.

From the first image, you can see that we have found all the keypoints. But when the keypoints are not overlaid on the image (Figure 7), we cannot tell which part belongs to which person. We have to robustly map each keypoint to a person. This is not trivial and can lead to many errors if not done correctly. To do it, we find the valid connections (or valid pairs) between keypoints and then assemble these connections to create skeletons for all the people.


5. Step 3: Find valid pairs

A valid pair is a body part joining two keypoints that belong to the same person. One simple way of finding valid pairs, illustrated below, would be to find the minimum distance between one joint and all candidates for the other joint. For example, we can find the distance between the marked Nose and all the Necks; the minimum-distance Neck should belong to the corresponding person.

Figure 8: Getting the connection between keypoints using a simple distance measure.

This approach might not work for all pairs, especially when the image contains many people or when parts are occluded. For example, for the pair Left Elbow -> Left Wrist, the wrist of the 3rd person is closer to the elbow of the 2nd person than to his own wrist. Thus, distance alone will not yield a valid pair.

Figure 9: Using only the distance between keypoints can fail in some cases.

This is where the Part Affinity Maps come into play. They give the direction along with the affinity between two joints. So a valid pair should not only have a small distance; its direction should also comply with the direction of the PAF heatmap.

The heatmap for the Left Elbow -> Left Wrist connection is shown below.

Figure 10: Heatmap for the Left Elbow -> Left Wrist pair.

So, even though the distance measure wrongly identifies the pair, OpenPose gives the correct result, since the PAF complies only with the unit vector joining the 2nd person's elbow and wrist.

The approach taken in the paper is as follows:

  1. Divide the line joining the two points comprising the pair; find "n" points on this line.
  2. Check if the PAF at these points has the same direction as the line joining the points for this pair.
  3. If the direction matches to a certain extent, it is a valid pair.
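These steps can be tried on synthetic data: with a PAF that points along +x everywhere in a region (as if a limb runs horizontally through it), a horizontally aligned candidate pair scores close to 1, while a vertical pair scores near 0 and is rejected. The field and coordinates below are invented purely for illustration; getValidPairs() in the provided code does the real work:

```python
import numpy as np

n_interp_samples = 10   # "n" points on the line
paf_score_th = 0.1      # per-point alignment threshold
conf_th = 0.7           # fraction of points that must pass

# Synthetic PAF on a 10x10 grid: unit vectors along +x everywhere.
pafA = np.ones((10, 10), dtype=np.float32)   # x component
pafB = np.zeros((10, 10), dtype=np.float32)  # y component

def pair_score(a, b):
    """Average PAF alignment along segment a -> b, and whether the pair is valid."""
    d_ij = np.subtract(b, a).astype(np.float32)
    norm = np.linalg.norm(d_ij)
    if norm:
        d_ij = d_ij / norm                       # unit vector of the candidate limb
    # Interpolated points p(u) on the line joining a and b.
    coords = zip(np.linspace(a[0], b[0], num=n_interp_samples),
                 np.linspace(a[1], b[1], num=n_interp_samples))
    paf_interp = np.array([[pafA[int(round(y)), int(round(x))],
                            pafB[int(round(y)), int(round(x))]] for x, y in coords])
    scores = paf_interp.dot(d_ij)                # dot product with the unit vector
    valid = np.count_nonzero(scores > paf_score_th) / n_interp_samples > conf_th
    return float(scores.mean()), valid

aligned_score, aligned_ok = pair_score((1, 4), (8, 4))  # horizontal pair: matches PAF
crossed_score, crossed_ok = pair_score((4, 1), (4, 8))  # vertical pair: rejected
```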

Let's see how this is done in code; the code snippets below belong to the getValidPairs() function in the provided code.

For every body part pair, we do the following:

  1. Take the keypoints belonging to the pair and put them in separate lists (candA and candB). Each point from candA will be connected to some point in candB. The figure below shows candA and candB for the pair Neck -> Right Shoulder.

Figure 11: Showing the candidates for matching the pair Neck -> Right Shoulder.


Python

# A -> B constitutes a limb
pafA = output[0, mapIdx[k][0], :, :]
pafB = output[0, mapIdx[k][1], :, :]

# Find the keypoints for the first and second joint of the limb
candA = detected_keypoints[POSE_PAIRS[k][0]]
candB = detected_keypoints[POSE_PAIRS[k][1]]

C++

// A -> B forms a limb
cv::Mat pafA = netOutputParts[mapIdx[k].first];
cv::Mat pafB = netOutputParts[mapIdx[k].second];

// The keypoints for the first and second joint of the limb
const std::vector<KeyPoint>& candA = detectedKeypoints[posePairs[k].first];
const std::vector<KeyPoint>& candB = detectedKeypoints[posePairs[k].second];
  1. Find the unit vector joining the two points under consideration. This gives the direction of the line joining them.


Python

d_ij = np.subtract(candB[j][:2], candA[i][:2])
norm = np.linalg.norm(d_ij)
if norm:
    d_ij = d_ij / norm

C++

std::pair<float, float> distance(candB[j].point.x - candA[i].point.x, candB[j].point.y - candA[i].point.y);
float norm = std::sqrt(distance.first * distance.first + distance.second * distance.second);
if (!norm) {
    continue;
}
distance.first /= norm;
distance.second /= norm;
  1. Create an array of 10 interpolated points on the line joining the two points.


Python

# Find p(u)
interp_coord = list(zip(np.linspace(candA[i][0], candB[j][0], num=n_interp_samples),
                        np.linspace(candA[i][1], candB[j][1], num=n_interp_samples)))
# Find L(p(u))
paf_interp = []
for k in range(len(interp_coord)):
    paf_interp.append([pafA[int(round(interp_coord[k][1])), int(round(interp_coord[k][0]))],
                       pafB[int(round(interp_coord[k][1])), int(round(interp_coord[k][0]))]])

C++

// Find p(u)
std::vector<cv::Point> interpCoords;
populateInterpPoints(candA[i].point, candB[j].point, nInterpSamples, interpCoords);

// Find L(p(u))
std::vector<std::pair<float, float> > pafInterp;
for (int l = 0; l < interpCoords.size(); ++l) {
    pafInterp.push_back(std::pair<float, float>(
        pafA.at<float>(interpCoords[l].y, interpCoords[l].x),
        pafB.at<float>(interpCoords[l].y, interpCoords[l].x)));
}
  1. Take the dot product between the PAF at these points and the unit vector d_ij.


Python

# Find E
paf_scores = np.dot(paf_interp, d_ij)
avg_paf_score = sum(paf_scores) / len(paf_scores)

C++

std::vector<float> pafScores;
float sumOfPafScores = 0;
int numOverTh = 0;
for (int l = 0; l < pafInterp.size(); ++l) {
    float score = pafInterp[l].first * distance.first + pafInterp[l].second * distance.second;
    sumOfPafScores += score;
    if (score > pafScoreTh) {
        ++numOverTh;
    }
    pafScores.push_back(score);
}
float avgPafScore = sumOfPafScores / ((float)pafInterp.size());
  1. The pair is marked as valid if 70% of the points satisfy the criterion.


Python

# Check if the connection is valid:
# if the fraction of interpolated vectors aligned with the PAF is higher than the threshold -> valid pair
if (len(np.where(paf_scores > paf_score_th)[0]) / n_interp_samples) > conf_th:
    if avg_paf_score > maxScore:
        max_j = j
        maxScore = avg_paf_score

C++

if (((float)numOverTh) / ((float)nInterpSamples) > confTh) {
    if (avgPafScore > maxScore) {
        maxJ = j;
        maxScore = avgPafScore;
        found = true;
    }
}

6. Step 4: Assemble person-wise keypoints

Now that we have joined all the keypoints into pairs, we can assemble the pairs that share the same part-detection candidates into full-body poses of multiple people.

Let us look at how this is done in code; the code snippets in this section belong to the getPersonwiseKeypoints() function in the provided code.

  1. We first create empty lists to store the keypoints of each person. Then we go over each pair and check if partA of the pair is already present in any of the lists. If it is, then the pair belongs to that person, and partB of the pair should also belong to the same person, so we add it to the list where partA was found.


Python

for j in range(len(personwiseKeypoints)):
    if personwiseKeypoints[j][indexA] == partAs[i]:
        person_idx = j
        found = 1
        break

if found:
    personwiseKeypoints[person_idx][indexB] = partBs[i]

C++

for (int j = 0; !found && j < personwiseKeypoints.size(); ++j) {
    if (indexA < personwiseKeypoints[j].size() && personwiseKeypoints[j][indexA] == localValidPairs[i].aId) {
        personIdx = j;
        found = true;
    }
}/* j */

if (found) {
    personwiseKeypoints[personIdx].at(indexB) = localValidPairs[i].bId;
}
  1. If partA is not present in any of the lists, then the pair belongs to a new person who is not in the list yet, so a new list is created.


Python

elif not found and k < 17:
    row = -1 * np.ones(19)
    row[indexA] = partAs[i]
    row[indexB] = partBs[i]
    personwiseKeypoints = np.vstack([personwiseKeypoints, row])

C++

else if (k < 17) {
    std::vector<int> lpkp(std::vector<int>(18, -1));
    lpkp.at(indexA) = localValidPairs[i].aId;
    lpkp.at(indexB) = localValidPairs[i].bId;
    personwiseKeypoints.push_back(lpkp);
}
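The two rules above can be sketched in plain Python on toy data. Each valid pair below is a (partA-id, partB-id) tuple of globally unique keypoint ids; we walk the limbs in order and grow one list per person. This is our own simplified sketch of the getPersonwiseKeypoints() logic, with invented ids:

```python
# Toy skeleton with 3 part types and 2 limbs: 0 -> 1 (e.g. Nose -> Neck)
# and 1 -> 2 (e.g. Neck -> Right Shoulder).
POSE_PAIRS = [(0, 1), (1, 2)]
N_PARTS = 3

# Valid pairs per limb for two people (keypoint ids are invented):
# limb 0: 10->20 (person one), 11->21 (person two); limb 1: 20->30, 21->31.
valid_pairs = [[(10, 20), (11, 21)], [(20, 30), (21, 31)]]

def assemble(valid_pairs):
    people = []  # one list of keypoint ids (-1 = not found) per person
    for k, (indexA, indexB) in enumerate(POSE_PAIRS):
        for partA, partB in valid_pairs[k]:
            for person in people:
                if person[indexA] == partA:   # partA already belongs to this person,
                    person[indexB] = partB    # so partB joins the same person
                    break
            else:                             # partA unseen: start a new person
                row = [-1] * N_PARTS
                row[indexA], row[indexB] = partA, partB
                people.append(row)
    return people

people = assemble(valid_pairs)
```

Because limbs are processed in skeleton order, each shared joint (here the neck) chains the pairs of one person into a single list.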

7. Results

We go over every person and draw the skeleton on the input image.


Python

for i in range(17):
    for n in range(len(personwiseKeypoints)):
        index = personwiseKeypoints[n][np.array(POSE_PAIRS[i])]
        if -1 in index:
            continue
        B = np.int32(keypoints_list[index.astype(int), 0])
        A = np.int32(keypoints_list[index.astype(int), 1])
        cv2.line(frameClone, (B[0], A[0]), (B[1], A[1]), colors[i], 2, cv2.LINE_AA)

cv2.imshow("Detected Pose", frameClone)
cv2.waitKey(0)

C++

for (int i = 0; i < nPoints - 1; ++i) {
    for (int n = 0; n < personwiseKeypoints.size(); ++n) {
        const std::pair<int, int>& posePair = posePairs[i];
        int indexA = personwiseKeypoints[n][posePair.first];
        int indexB = personwiseKeypoints[n][posePair.second];

        if (indexA == -1 || indexB == -1) {
            continue;
        }

        const KeyPoint& kpA = keyPointsList[indexA];
        const KeyPoint& kpB = keyPointsList[indexB];

        cv::line(outputFrame, kpA.point, kpB.point, colors[i], 3, cv::LINE_AA);
    }
}

cv::imshow("Detected Pose", outputFrame);
cv::waitKey(0);

The figure below shows the skeletons for each of the detected people!

Take a look at the code provided with the post!

I would like to thank my teammate Chandrashekara Keralapura for writing the C++ version of the code.



References:
[Video used for the demo]
[OpenPose paper]
[OpenPose re-implemented in Keras]


Article information

Author: Delena Feil

Last Updated: 02/09/2023
