In this post, we walk through how to train Detectron2 to detect custom objects in this Detectron2 Colab notebook. After reading, you will be able to train your own custom Detectron2 detector, changing only a single line of code to import your custom data.

Contents

- Overview of Detectron2
- Overview of our custom dataset
- Install Detectron2 dependencies
- Download custom Detectron2 object detection data
- Visualize Detectron2 training data
- Write our Detectron2 training configuration
- Run Detectron2 training
- Evaluate Detectron2 performance
- Run Detectron2 inference on test images

Custom Detectron2 training resources
- Colab notebook implementing Detectron2 on custom data
- Public blood cell detection dataset
Overview of Detectron2
Detectron2 is a popular PyTorch-based modular computer vision library. It is the second iteration of Detectron, which was originally written in Caffe2. The Detectron2 system allows you to plug state-of-the-art computer vision technologies into your workflow. From the Detectron2 release notes:
Detectron2 includes all the models that were available in the original Detectron, such as Faster R-CNN, Mask R-CNN, RetinaNet, and DensePose. It also features several new models, including Cascade R-CNN, Panoptic FPN, and TensorMask, and we will continue to add more algorithms. We have also added features such as synchronous Batch Norm and support for new datasets like LVIS.

In this post, we show how Detectron2 can be trained on custom data for object detection specifically. After reading, however, you will be familiar with the Detectron2 ecosystem and will be able to generalize to the other capabilities included in Detectron2 (for example, how to run instance segmentation on a custom dataset).
General description of custom data
We will open our custom detector of detectron2Detection of public blood cellsData lodged for free in Roboflow. Data record for the detection of blood blood cells is representative of a small set of personalized object detection data that can be collected to create a personalized object detection system.We have to train underlying networks for our personalized task.

If you want to follow along with the tutorial step by step, you can fork the public blood cell dataset. Otherwise, you can upload your own dataset in any annotation format.
Use your own data with Detectron2
To export your own data for this tutorial, sign up for Roboflow and make a public workspace, or make a new public workspace in your existing account. If your data is private, you can upgrade to a paid plan to export your data for use in external training routines like this one, or try Roboflow's internal training solution.
Install Detectron2 dependencies
To begin, make a copy of the Colab notebook implementing Detectron2 on custom data. Google Colab provides us with free GPU resources.
To train our custom detector, we install torch==1.5 and torchvision==0.6. Then, after importing torch, we can check the torch version and double-check that a GPU is available; this should print 1.5.0+cu101 True.
Next, we install the Detectron2 library and make a number of submodule imports.
```python
# install detectron2 first (pick the wheel matching your torch/CUDA version)
# !pip install detectron2 -f <detectron2 wheel index URL>

# import some common libraries
import numpy as np
import cv2
import random
from google.colab.patches import cv2_imshow

# import some common detectron2 utilities
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog
```
Download custom Detectron2 object detection data
We download our custom data in COCO JSON format from Roboflow with a single line of code, and this is the only line of code you need to change to train on your own custom objects!
Note: In this tutorial, we export object detection data with bounding boxes. If you are looking to train a semantic segmentation model, Roboflow also supports annotation, dataset export, custom model training notebooks, AutoML training, and deployment solutions.
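For reference, a COCO JSON export stores images, annotations, and categories together in one file. Here is a minimal sketch of that structure and how to read it with the standard library; the filenames, IDs, and bounding box values below are made up for illustration, not taken from the blood cell dataset:

```python
import json

# A made-up, minimal COCO-style annotation file for illustration
coco = {
    "images": [{"id": 1, "file_name": "cell_001.jpg", "width": 416, "height": 416}],
    "annotations": [
        {"id": 1, "image_id": 1, "category_id": 2, "bbox": [48, 240, 195, 371]}
    ],
    "categories": [
        {"id": 1, "name": "Platelets"},
        {"id": 2, "name": "RBC"},
        {"id": 3, "name": "WBC"},
    ],
}

with open("_annotations.coco.json", "w") as f:
    json.dump(coco, f)

# Reading it back: map category ids to names and list boxes per annotation
with open("_annotations.coco.json") as f:
    data = json.load(f)

names = {c["id"]: c["name"] for c in data["categories"]}
for ann in data["annotations"]:
    print(names[ann["category_id"]], ann["bbox"])  # bbox is [x, y, width, height]
```

Note that COCO boxes are stored as [x, y, width, height], not as two corner points.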

If you have unlabeled images, you will first need to label them. For free open source labeling, we recommend the following guides: Getting Started with LabelImg or Getting Started with CVAT (or LabelMe) annotation tools. Try labeling ~50 images to proceed with this tutorial. To improve your model's performance later, you will want to label more.
You can also consider building a free object detection dataset from Open Images.
Once you have labeled data, to move your data into Roboflow, create a free account, and then you can drag in your dataset in any format (VOC XML, COCO JSON, TensorFlow Object Detection CSV, etc.).
Once uploaded, you can select preprocessing and augmentation steps:

Then, click Generate and then Download, and you will be able to choose COCO JSON format.

When prompted, select "Show Code Snippet." This gives you a snippet that downloads your data in the correct object detection annotation format.
Detectron2 keeps a registry of the datasets available to it, so we have to register our custom data with Detectron2 before it can be invoked for training.
```python
from detectron2.data.datasets import register_coco_instances

register_coco_instances("my_dataset_train", {}, "/content/train/_annotations.coco.json", "/content/train")
register_coco_instances("my_dataset_val", {}, "/content/valid/_annotations.coco.json", "/content/valid")
register_coco_instances("my_dataset_test", {}, "/content/test/_annotations.coco.json", "/content/test")
```
Visualize Detectron2 training data
Detectron2 makes it easy to visualize our training data to make sure the data was imported correctly. We do this with the following code:
```python
import random
from detectron2.data import DatasetCatalog, MetadataCatalog

my_dataset_train_metadata = MetadataCatalog.get("my_dataset_train")
dataset_dicts = DatasetCatalog.get("my_dataset_train")

# draw the annotations on a few random training images
for d in random.sample(dataset_dicts, 3):
    img = cv2.imread(d["file_name"])
    visualizer = Visualizer(img[:, :, ::-1], metadata=my_dataset_train_metadata, scale=0.5)
    vis = visualizer.draw_dataset_dict(d)
    cv2_imshow(vis.get_image()[:, :, ::-1])
```

It looks like our dataset was registered correctly!
Write our detectron2 training configuration
Next, we write our custom training configuration.
```python
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("my_dataset_train",)
cfg.DATASETS.TEST = ("my_dataset_val",)
cfg.DATALOADER.NUM_WORKERS = 4
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml")  # let training initialize from model zoo
cfg.SOLVER.IMS_PER_BATCH = 4
cfg.SOLVER.BASE_LR = 0.001
cfg.SOLVER.MAX_ITER = 1500  # adjust based on your validation metrics
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 4  # your number of classes + 1
cfg.TEST.EVAL_PERIOD = 500
```
The most notable thing we specify here is the Faster R-CNN architecture. Detectron2 allows you to choose your model architecture from the Detectron2 model zoo.
The following models are available for object detection:

The other major configuration choice we made is the MAX_ITER parameter, which specifies how long the model will train. You may need to adjust it up or down based on the validation metrics you see.
Run Detectron2 training
Before training, we need to make sure the model is evaluated against our validation set as it trains. We can do this by defining a custom trainer, built on the DefaultTrainer, that uses a COCOEvaluator:
```python
import os
from detectron2.engine import DefaultTrainer
from detectron2.evaluation import COCOEvaluator

class CocoTrainer(DefaultTrainer):
    @classmethod
    def build_evaluator(cls, cfg, dataset_name, output_folder=None):
        if output_folder is None:
            os.makedirs("coco_eval", exist_ok=True)
            output_folder = "coco_eval"
        return COCOEvaluator(dataset_name, cfg, False, output_folder)
```
OK! Now that we have our CocoTrainer, we can kick off training:

Training will run for a while, printing evaluation metrics on our validation set as it goes. Wondering what mAP is? Check out this breakdown of mAP (mean average precision).
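mAP is built on top of intersection over union (IoU): a prediction counts as correct at, say, IoU=0.50 if its box overlaps a ground-truth box by at least that much. Here is a minimal sketch of the IoU computation, written from scratch for illustration rather than taken from Detectron2's internals:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes in [x1, y1, x2, y2] form."""
    # corners of the overlap rectangle
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# two 10x10 boxes overlapping in a 5x5 corner
print(round(iou([0, 0, 10, 10], [5, 5, 15, 15]), 3))  # → 0.143
```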
Once training has finished, we can move on to evaluation and inference!
Evaluate Detectron2 performance
First, we can display a TensorBoard of results to see how the training process performed.

There are many metrics of interest there, most notably total_loss and validation mAP.
We then run the same evaluation procedure we used on our validation set against the test set.
```python
from detectron2.data import DatasetCatalog, MetadataCatalog, build_detection_test_loader
from detectron2.evaluation import COCOEvaluator, inference_on_dataset

cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
cfg.DATASETS.TEST = ("my_dataset_test",)
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.85
predictor = DefaultPredictor(cfg)
evaluator = COCOEvaluator("my_dataset_test", cfg, False, output_dir="./output/")
val_loader = build_detection_test_loader(cfg, "my_dataset_test")
inference_on_dataset(trainer.model, val_loader, evaluator)
```
Output:
```
Accumulating evaluation results...
DONE (t=0.03s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.592
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.881
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.677
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.392
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.633
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.684
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.257
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.709
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.439
Per-category AP:
| category | AP     |
|:---------|:-------|
| …        | 60.326 |
| WBC      | 77.039 |
```
This gives you a good sense of how your new custom-trained detector will perform in the wild. If you are curious to learn more about these metrics, check out this post breaking down mAP.
Run Detectron2 inference on test images
And finally, we can run our trained model on the test images, which the model has never seen before.
```python
import glob
from detectron2.utils.visualizer import ColorMode

cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
cfg.DATASETS.TEST = ("my_dataset_test",)
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7  # set a custom testing threshold
predictor = DefaultPredictor(cfg)
test_metadata = MetadataCatalog.get("my_dataset_test")

for imageName in glob.glob('/content/test/*jpg'):
    im = cv2.imread(imageName)
    outputs = predictor(im)
    v = Visualizer(im[:, :, ::-1], metadata=test_metadata, scale=0.8)
    out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
    cv2_imshow(out.get_image()[:, :, ::-1])
```
Output:

Our model makes good predictions, showing that it has learned how to identify red blood cells, white blood cells, and platelets.
You might consider playing with the SCORE_THRESH_TEST setting to change the confidence threshold the model requires before making a prediction.
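What that threshold does, conceptually, is drop any detection whose confidence score falls below it. Here is a toy stand-in for that filtering step; the class names and scores below are made up, and this is not Detectron2's internal code:

```python
def filter_predictions(predictions, score_thresh):
    """Keep only detections whose confidence meets the threshold."""
    return [p for p in predictions if p["score"] >= score_thresh]

preds = [
    {"class": "RBC", "score": 0.93},
    {"class": "WBC", "score": 0.55},
    {"class": "Platelets", "score": 0.71},
]

# At 0.7, the low-confidence WBC detection is dropped
print([p["class"] for p in filter_predictions(preds, 0.7)])  # → ['RBC', 'Platelets']
```

Raising the threshold trades recall for precision: fewer boxes survive, but those that do are more trustworthy.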
You can now save your weights, written to os.path.join(cfg.OUTPUT_DIR, "model_final.pth"), for future inference by exporting them to Google Drive.
You can also inspect the underlying predictions in the outputs object, which can be used in other parts of your application.
Use your trained Detectron2 model
You now know how to train your own custom Detectron2 detector on a completely new domain.
Not seeing the results you need? Object detection models have improved since Detectron2's release. Check out some of our other tutorials, such as How to Train YOLOv5 and How to Train YOLOv4.
Build and deploy with Roboflow for free
Use Roboflow to manage datasets, train models in one click, and deploy to web, mobile, or edge devices. With just a few images, you can train a working computer vision model in an afternoon.