Milestones
(I) Technologies to be developed
- Inter-camera correspondence
There are several cues for inter-camera correspondence: a face recognition algorithm can
provide a similarity score for faces captured by multiple cameras; a space-time cue can
estimate the probability of a person entering a given camera at a given time, based on the
location, time, and velocity of their exit from other cameras together with the scene
modeling results; and a human appearance cue can be represented by a distance in color
space. In this project, we will develop algorithms to quantify these cues and a method to
fuse them together.
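As an illustration only, the sketch below shows one possible way to fuse the three cues into a single match score. The function names, the weights, and the log-linear combination are assumptions made for this example, not the project's committed design.

import math

def fuse_cues(face_similarity, space_time_prob, appearance_distance,
              weights=(1.0, 1.0, 1.0), eps=1e-6):
    """Combine the three inter-camera correspondence cues into one match score.

    face_similarity    : score in [0, 1] from the face recognition module
    space_time_prob    : probability of the observed camera-to-camera transition
    appearance_distance: color-space distance between the two appearance models
    All names and the log-linear weighting are illustrative assumptions.
    """
    w_face, w_st, w_app = weights
    # Convert the appearance distance into a similarity in (0, 1].
    appearance_similarity = math.exp(-appearance_distance)
    # Log-linear fusion: sum of weighted log-likelihood-style terms.
    return (w_face * math.log(face_similarity + eps)
            + w_st * math.log(space_time_prob + eps)
            + w_app * math.log(appearance_similarity + eps))

# Example: pick the candidate exit that best matches a new observation.
candidates = {
    "camera2_track7": fuse_cues(0.82, 0.40, 0.35),
    "camera3_track2": fuse_cues(0.55, 0.05, 1.20),
}
best_match = max(candidates, key=candidates.get)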
- Self-calibration and site modeling
Without manual inter-camera calibration, the system must calibrate itself and model the
scene through a training session that learns the camera topology and the path probabilities
of targets. During the training stage, it is not necessary to track every person across
cameras; only the best matches (those closest in face and appearance) will be used for
learning. The related algorithms will be developed in this project.
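The following sketch illustrates one way a training session could accumulate camera topology and transition-time statistics from only the most confident matches. The data layout, the minimum-count threshold, and the returned statistics are assumptions made for this example.

from collections import defaultdict

def learn_topology(confident_matches, min_count=5):
    """Estimate camera topology and transition-time statistics.

    confident_matches: list of (exit_camera, entry_camera, transit_time_sec)
                       built only from the best face/appearance matches.
    Returns, per camera pair, the transition probability and mean transit time.
    The data layout and min_count threshold are illustrative assumptions.
    """
    counts = defaultdict(int)
    times = defaultdict(list)
    exits = defaultdict(int)
    for exit_cam, entry_cam, dt in confident_matches:
        counts[(exit_cam, entry_cam)] += 1
        times[(exit_cam, entry_cam)].append(dt)
        exits[exit_cam] += 1

    topology = {}
    for (exit_cam, entry_cam), n in counts.items():
        if n < min_count:          # ignore links with too little evidence
            continue
        topology[(exit_cam, entry_cam)] = {
            "transition_prob": n / exits[exit_cam],
            "mean_transit_time": sum(times[(exit_cam, entry_cam)]) / n,
        }
    return topology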
(II) Innovative use of existing technologies
- Face Recognition
Frontal faces will be generated by our previously developed face reconstruction method
[15-17] and used as input to the face recognition module, which outputs a similarity score for
a pair of faces. We will select an appropriate face recognition algorithm and implement it in
this project.
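As an illustration, the sketch below computes a pairwise similarity score from face feature vectors using cosine similarity. The final face recognition algorithm will be selected during the project, so the feature vectors and the cosine measure here are only assumed placeholders.

import numpy as np

def face_similarity(embedding_a, embedding_b):
    """Cosine similarity between two face feature vectors, mapped to [0, 1].

    The embeddings are assumed to come from whichever face recognition
    algorithm is finally selected; cosine similarity is one common choice,
    not the project's committed design.
    """
    a = np.asarray(embedding_a, dtype=float)
    b = np.asarray(embedding_b, dtype=float)
    cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return 0.5 * (cos + 1.0)   # rescale from [-1, 1] to [0, 1]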
- Human tracking at the single-camera level
We will follow our previous research on human tracking at the single-camera level [11-14].
However, in severely occluded cases, regular tracking algorithms may fail to track the
targets even within a single camera. With the assistance of face recognition results, an
advanced tracking method will be developed in this project to handle these extreme cases.
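The sketch below illustrates one possible face-assisted recovery step for tracks broken by severe occlusion. The dictionary layout, the 0.7 threshold, and the greedy assignment are assumptions for this example; the similarity function could be, for instance, the cosine measure sketched above.

def recover_track_after_occlusion(lost_tracks, new_detections,
                                  face_similarity_fn, threshold=0.7):
    """Re-associate tracks lost during severe occlusion using face cues.

    lost_tracks    : {track_id: face_embedding} for tracks broken by occlusion
    new_detections : {detection_id: face_embedding} for re-appearing people
    The data layout and the threshold are illustrative assumptions.
    """
    assignments = {}
    for det_id, det_face in new_detections.items():
        best_id, best_score = None, threshold
        for track_id, track_face in lost_tracks.items():
            score = face_similarity_fn(track_face, det_face)
            if score > best_score:
                best_id, best_score = track_id, score
        if best_id is not None:
            assignments[det_id] = best_id   # resume the broken track
    return assignments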