
Object Localization and Recognition with the YOLO Algorithm

Introduction

This article uses the YOLO algorithm to locate and recognize objects in video.

In this article we will:

  • Discuss the model and architecture behind the YOLO object detection algorithm
  • Apply YOLO to detect objects in a video stream
  • Discuss some of YOLO's drawbacks

The YOLO Object Detection Algorithm

Three model families are in common use in deep-learning-based object detection:

  • R-CNN and its variants, including the original R-CNN, Fast R-CNN, and Faster R-CNN
  • Single Shot Detectors (SSDs)
  • YOLO
Let's start with R-CNN, the classic non-end-to-end detector, i.e. a two-stage algorithm. Roughly speaking, it first extracts candidate boxes that might contain an object (region proposals) from the image, then feeds those candidates into a CNN, which decides whether each one really contains an object and, if so, what class it belongs to. The task therefore splits into two parts: the object's position and size, which are determined during the region-proposal step, and its class, which is decided by the CNN. Because the CNN receives not the raw image but preprocessed candidate bounding boxes, the pipeline is not end-to-end. Its strength is detection quality (the smallest object size it can recognize, how densely packed objects can be, and so on); its weakness is speed, which is painfully slow, especially on an ordinary laptop. One correction: Faster R-CNN has since become an end-to-end algorithm, as it dropped the Selective Search requirement.

Now for YOLO (not the "you only live once" kind). YOLO is the classic end-to-end (or single-stage) detector, and it has now reached version 3. In the R-CNN family, the CNN is essentially just a classifier; it does not perform the localization itself. YOLO, by contrast, handles both localization and recognition with a single CNN: the raw image goes into the network, and the positions and classes of all objects in the image come out directly. That is what we mean by end-to-end.

The YOLO authors trained the algorithm jointly for detection and classification on the ImageNet classification dataset and the COCO detection dataset (joint training), so it can recognize most common everyday objects. If you are interested, have a look at the YOLOv3 tutorial.

Implementing the Algorithm

Now let's implement the YOLO algorithm in Python.

Import the necessary packages:

# import the necessary packages
import numpy as np
import argparse
import imutils
import time
import cv2
import os  # used for building filesystem paths

Parsing the command-line arguments

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--input",  # not required, so we can fall back to the webcam
    help="path to input video")
ap.add_argument("-o", "--output", required=True,
    help="path to output video")
ap.add_argument("-y", "--yolo", required=True,
    help="base path to YOLO directory")
ap.add_argument("-c", "--confidence", type=float, default=0.5,
    help="minimum probability to filter weak detections")
ap.add_argument("-t", "--threshold", type=float, default=0.3,
    help="threshold when applying non-maxima suppression")
# on a 13x13 grid with 5 boxes per cell, YOLO predicts 845 boxes in
# total; most of them score very low, and non-maxima suppression at
# this IoU threshold prunes the weak, overlapping ones
args = vars(ap.parse_args())
  • --input: path to the input video
  • --output: path where the processed output video is saved
  • --yolo: base path to the YOLO detector files
  • --confidence: confidence threshold; boxes scoring below this value are filtered out
  • --threshold: threshold for non-maxima suppression; see non-maxima suppression
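
With the arguments in place, the script is run from the terminal. An example invocation (the script name and paths here are placeholders, not files from this article; substitute your own):

python yolo_video.py --input videos/input.mp4 --output output/output.avi \
    --yolo yolo-coco --confidence 0.5 --threshold 0.3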

Load the COCO class labels and assign a random color to each:

# load the COCO class labels our YOLO model was trained on
labelsPath = os.path.sep.join([args["yolo"], "coco.names"])
LABELS = open(labelsPath).read().strip().split("\n")

# initialize a list of colors to represent each possible class label
np.random.seed(42)
COLORS = np.random.randint(0, 255, size=(len(LABELS), 3),
    dtype="uint8") 
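
As a quick sanity check (this snippet is illustrative and not part of the original script), you can confirm that all 80 COCO labels loaded and peek at a color:

# illustrative only: verify the labels and colors loaded as expected
print(len(LABELS))           # 80 COCO class names
print(LABELS[0], COLORS[0])  # "person" plus its randomly assigned color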

Derive the paths to the YOLO weights and configuration files, then load YOLO from disk:

# derive the paths to the YOLO weights and model configuration
weightsPath = os.path.sep.join([args["yolo"], "yolov3.weights"])
configPath = os.path.sep.join([args["yolo"], "yolov3.cfg"])

# load our YOLO object detector trained on COCO dataset (80 classes)
# and determine only the *output* layer names that we need from YOLO
print("[INFO] loading YOLO from disk...")
net = cv2.dnn.readNetFromDarknet(configPath, weightsPath)
ln = net.getLayerNames()
ln = [ln[i[0] - 1] for i in net.getUnconnectedOutLayers()]
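
A note on compatibility: the list comprehension above assumes an OpenCV version where getUnconnectedOutLayers() returns nested indices (e.g. [[200], [227], [254]]); newer OpenCV 4.x releases return a flat array instead, which makes i[0] fail. A version-tolerant sketch (the exact return type depends on your installed OpenCV, so treat this as an assumption):

# handle both the nested and the flat return formats of
# getUnconnectedOutLayers() across OpenCV versions
ln = net.getLayerNames()
try:
    # older OpenCV: indices arrive nested, e.g. [[200], [227], [254]]
    ln = [ln[i[0] - 1] for i in net.getUnconnectedOutLayers()]
except IndexError:
    # newer OpenCV: indices arrive as a flat array
    ln = [ln[i - 1] for i in net.getUnconnectedOutLayers()]

Recent releases also provide net.getUnconnectedOutLayersNames(), which returns the output layer names directly and sidesteps the indexing altogether.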

Open the video stream, initialize the video writer and frame dimensions, and try to determine the total number of frames in the video file so we can estimate how long processing the whole video will take:

# initialize the video stream, pointer to output video file, and
# frame dimensions
# if no --input path was supplied, grab frames from the webcam;
# otherwise, open the video file on disk
if not args.get("input", False):
    vs = cv2.VideoCapture(0)
else:
    vs = cv2.VideoCapture(args["input"])
writer = None
(W, H) = (None, None)

# try to determine the total number of frames in the video file
try:
    prop = cv2.cv.CV_CAP_PROP_FRAME_COUNT if imutils.is_cv2() \
        else cv2.CAP_PROP_FRAME_COUNT
    total = int(vs.get(prop))
    print("[INFO] {} total frames in video".format(total))

# an error occurred while trying to determine the total
# number of frames in the video file
except:
    print("[INFO] could not determine # of frames in video")
    print("[INFO] no approx. completion time can be provided")
    total = -1

Now we process the frames one by one, as follows:

# loop over frames from the video file stream
while True:
    # read the next frame from the file
    (grabbed, frame) = vs.read()

    # if the frame was not grabbed, then we have reached the end
    # of the stream
    if not grabbed:
        break

    # if the frame dimensions are empty, grab them
    if W is None or H is None:
        (H, W) = frame.shape[:2]

Let's run YOLO's forward pass with the current frame as input. Here we build a blob and pass it through the network to obtain the predictions. I have wrapped the forward pass in timestamps so we can measure how long prediction takes on a single frame; this will help us estimate how long processing the whole video will take. We then initialize the three lists we will use: boxes, confidences, and classIDs.

# construct a blob from the input frame and then perform a forward
# pass of the YOLO object detector, giving us our bounding boxes
# and associated probabilities
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
    swapRB=True, crop=False)
net.setInput(blob)
start = time.time()
layerOutputs = net.forward(ln)
end = time.time()

# initialize our lists of detected bounding boxes, confidences,
# and class IDs, respectively
boxes = []
confidences = []
classIDs = []
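
Before parsing layerOutputs, it helps to know its layout: each element corresponds to one YOLO output layer, and each row of an element is one detection, holding 4 box values (center x, center y, width, height, all relative to the input size), an objectness score, and 80 class scores. A short illustrative snippet (not part of the original script):

# illustrative only: inspect the raw YOLO output structure
for output in layerOutputs:
    # for a 416x416 input, expect shapes like (507, 85), (2028, 85),
    # and (8112, 85): rows are detections, columns are 4 box values +
    # 1 objectness score + 80 class scores
    print(output.shape)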

Next, we loop over each of the layer outputs and their detections, extract the classID, filter out weak predictions (is there really an object?), then compute the bounding-box coordinates and keep updating our three lists:

# loop over each of the layer outputs
for output in layerOutputs:
    # loop over each of the detections
    for detection in output:
        # extract the class ID and confidence (i.e., probability)
        # of the current object detection
        scores = detection[5:]
        classID = np.argmax(scores)
        confidence = scores[classID]

        # filter out weak predictions by ensuring the detected
        # probability is greater than the minimum probability
        if confidence > args["confidence"]:
            # scale the bounding box coordinates back relative to
            # the size of the image, keeping in mind that YOLO
            # actually returns the center (x, y)-coordinates of
            # the bounding box followed by the boxes' width and
            # height
            box = detection[0:4] * np.array([W, H, W, H])
            (centerX, centerY, width, height) = box.astype("int")

            # use the center (x, y)-coordinates to derive the top
            # and left corner of the bounding box
            x = int(centerX - (width / 2))
            y = int(centerY - (height / 2))

            # update our list of bounding box coordinates,
            # confidences, and class IDs
            boxes.append([x, y, int(width), int(height)])
            confidences.append(float(confidence))
            classIDs.append(classID)

Next, we call cv2.dnn.NMSBoxes to suppress weak, overlapping bounding boxes, loop over the indexes that NMS kept (idxs), and draw the corresponding bounding box and label:

# apply non-maxima suppression to suppress weak, overlapping
# bounding boxes
idxs = cv2.dnn.NMSBoxes(boxes, confidences, args["confidence"],
    args["threshold"])

# ensure at least one detection exists
if len(idxs) > 0:
    # loop over the indexes we are keeping
    for i in idxs.flatten():
        # extract the bounding box coordinates
        (x, y) = (boxes[i][0], boxes[i][1])
        (w, h) = (boxes[i][2], boxes[i][3])

        # draw a bounding box rectangle and label on the frame
        color = [int(c) for c in COLORS[classIDs[i]]]
        cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
        text = "{}: {:.4f}".format(LABELS[classIDs[i]],
            confidences[i])
        cv2.putText(frame, text, (x, y - 5),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)

Then we initialize the video writer, print our estimate of how long the whole video will take to process, and write the processed frame to the output file. Finally, we release the pointers and destroy the windows:

# check if the video writer is None
if writer is None:
    # initialize our video writer
    fourcc = cv2.VideoWriter_fourcc(*"MJPG")
    writer = cv2.VideoWriter(args["output"], fourcc, 5,
        (frame.shape[1], frame.shape[0]), True)

# some information on processing a single frame
if total > 0:
    elap = (end - start)
    print("[INFO] single frame took {:.4f} seconds".format(elap))
    print("[INFO] estimated total time to finish: {:.4f}".format(
        elap * total))

# write the output frame to disk
writer.write(frame)

# show the frame and allow early exit with the "q" key
cv2.imshow("Frame", frame)
key = cv2.waitKey(1) & 0xFF
if key == ord("q"):
    break

# after the loop ends: release the file pointers
cv2.destroyAllWindows()
print("[INFO] cleaning up...")
writer.release()
vs.release()
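
One last detail worth flagging: the writer above hard-codes 5 FPS, so the output video may play faster or slower than the source. If you want the output to match the input's frame rate, a sketch along these lines should work (CAP_PROP_FPS can return 0 for webcams and some streams, hence the fallback):

# optional tweak, not in the original script: match the source FPS
fps = vs.get(cv2.CAP_PROP_FPS)
if not fps or fps <= 0:
    fps = 5  # fall back to the tutorial's fixed rate
writer = cv2.VideoWriter(args["output"], fourcc, fps,
    (frame.shape[1], frame.shape[0]), True)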

Drawbacks of the YOLO Algorithm

1. It does not handle small objects well.
2. It struggles with objects that are very close to each other.
