OpenFace API Documentation¶
- The code is available on GitHub at cmusatyalab/openface
- The main website is available at http://cmusatyalab.github.io/openface.
Contents:
openface package¶
openface.AlignDlib class¶
- class openface.AlignDlib(facePredictor)[source]¶ Use dlib's landmark estimation to align faces.
The alignment preprocesses faces for input into a neural network: faces are resized to the same size (such as 96x96) and transformed so that landmarks (such as the eyes and nose) appear at the same location in every image.
Normalized landmarks:
Instantiate an 'AlignDlib' object.
Parameters: facePredictor (str) – The path to dlib's face predictor model.
- INNER_EYES_AND_BOTTOM_LIP = [39, 42, 57]¶ Landmark indices for the inner eyes and bottom lip.
- OUTER_EYES_AND_NOSE = [36, 45, 33]¶ Landmark indices for the outer eyes and nose.
- align(imgDim, rgbImg, bb=None, landmarks=None, landmarkIndices=INNER_EYES_AND_BOTTOM_LIP, skipMulti=False)[source]¶ Transform and align a face in an image.
Parameters: - imgDim (int) – The edge length in pixels of the square the image is resized to.
- rgbImg (numpy.ndarray) – RGB image to process. Shape: (height, width, 3)
- bb (dlib.rectangle) – Bounding box around the face to align. Defaults to the largest face.
- landmarks (list of (x,y) tuples) – Detected landmark locations. Landmarks found on bb if not provided.
- landmarkIndices (list of ints) – The indices to transform to.
- skipMulti (bool) – Skip image if more than one face detected.
Returns: The aligned RGB image. Shape: (imgDim, imgDim, 3)
Return type: numpy.ndarray
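Internally, align maps the chosen landmarks onto fixed template positions with an affine transform. A minimal numpy sketch of that mapping, using hypothetical landmark and template coordinates (not OpenFace's actual template values):

```python
import numpy as np

# Hypothetical detected landmark positions (e.g. outer eyes and nose) in the
# source image. Real values come from findLandmarks.
src = np.array([[30.0, 40.0], [70.0, 42.0], [50.0, 60.0]])
# Illustrative template positions the landmarks should map to in a 96x96
# aligned image.
dst = np.array([[18.0, 30.0], [78.0, 30.0], [48.0, 60.0]])

# Solve for the 2x3 affine matrix A such that A @ [x, y, 1] == [x', y'] for
# each landmark pair (the same computation cv2.getAffineTransform performs).
ones = np.ones((3, 1))
A = np.linalg.solve(np.hstack([src, ones]), dst).T

# Applying A to the source landmarks recovers the template positions exactly.
mapped = (A @ np.hstack([src, ones]).T).T
print(np.allclose(mapped, dst))  # → True
```

The same matrix is then applied to every pixel of the image, which is what makes the eyes and nose land at the same place in every aligned output.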
- findLandmarks(rgbImg, bb)[source]¶ Find the landmarks of a face.
Parameters: - rgbImg (numpy.ndarray) – RGB image to process. Shape: (height, width, 3)
- bb (dlib.rectangle) – Bounding box around the face to find landmarks for.
Returns: Detected landmark locations.
Return type: list of (x,y) tuples
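The return value is a plain list of (x, y) tuples rather than a dlib object. A sketch of the conversion, with a stub class standing in for the shape-predictor output (dlib's full_object_detection), which exposes point objects via parts():

```python
# Stub mimicking the slice of dlib's full_object_detection interface used
# here; the real object comes from dlib's shape predictor.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

class Shape:
    def __init__(self, pts):
        self._pts = pts
    def parts(self):
        return self._pts

shape = Shape([Point(10, 20), Point(30, 40)])

# The predictor output is flattened to (x, y) tuples like this:
landmarks = [(p.x, p.y) for p in shape.parts()]
print(landmarks)  # → [(10, 20), (30, 40)]
```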
- getAllFaceBoundingBoxes(rgbImg)[source]¶ Find all face bounding boxes in an image.
Parameters: rgbImg (numpy.ndarray) – RGB image to process. Shape: (height, width, 3)
Returns: All face bounding boxes in an image.
Return type: dlib.rectangles
- getLargestFaceBoundingBox(rgbImg, skipMulti=False)[source]¶ Find the largest face bounding box in an image.
Parameters: - rgbImg (numpy.ndarray) – RGB image to process. Shape: (height, width, 3)
- skipMulti (bool) – Skip image if more than one face detected.
Returns: The largest face bounding box in the image, or None if no face is detected (or if skipMulti is set and more than one face is found).
Return type: dlib.rectangle
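The selection among detected boxes is by area. A sketch of that logic in plain Python, with (left, top, right, bottom) tuples standing in for dlib.rectangle objects:

```python
# Sketch of the selection behind getLargestFaceBoundingBox: pick the box
# with the largest area, or return None when skip_multi rejects the image.
def largest_box(boxes, skip_multi=False):
    if not boxes or (skip_multi and len(boxes) > 1):
        return None  # mirrors the documented None return
    return max(boxes, key=lambda b: (b[2] - b[0]) * (b[3] - b[1]))

faces = [(10, 10, 50, 60), (0, 0, 120, 140), (30, 30, 40, 45)]
print(largest_box(faces))                   # → (0, 0, 120, 140)
print(largest_box(faces, skip_multi=True))  # → None
```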
-
openface.TorchNeuralNet class¶
- class openface.TorchNeuralNet(model=defaultModel, imgDim=96, cuda=False)[source]¶ Use a Torch subprocess for feature extraction.
It can also be used as a context manager via the with statement:

    with TorchNeuralNet(model=model) as net:
        # use Torch's neural network

or

    net = TorchNeuralNet(model=model)
    with net:
        # use Torch's neural network

Either way, the Torch subprocess is closed at the end of the with block (PEP 343).
Instantiate a 'TorchNeuralNet' object.
Starts openface_server.lua as a subprocess.
Parameters: - model (str) – The path to the Torch model to use.
- imgDim (int) – The edge length of the square input image.
- cuda (bool) – Flag to use CUDA in the subprocess.
- defaultModel = '/home/docs/checkouts/readthedocs.org/user_builds/openface-api/checkouts/latest/openface/../models/openface/nn4.small2.v1.t7'¶ The default Torch model to use.
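The context-manager behavior follows the standard PEP 343 protocol: start the worker subprocess on construction, terminate it when the with-block exits. A self-contained sketch of that pattern, with "cat" standing in for the real openface_server.lua worker:

```python
import subprocess

# Minimal sketch of the pattern TorchNeuralNet follows: a subprocess is
# started in __init__ and terminated in __exit__. "cat" is a stand-in
# command, not the real Torch worker.
class SubprocessWrapper:
    def __init__(self, cmd):
        self.p = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                                  stdout=subprocess.PIPE)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.p.terminate()
        self.p.wait()

with SubprocessWrapper(["cat"]) as net:
    alive = net.p.poll() is None  # the worker is running inside the block

print(alive, net.p.poll() is not None)  # → True True (closed on exit)
```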
openface.data module¶
Module for image data.
- class openface.data.Image(cls, name, path)[source]¶ Object containing image metadata.
Instantiate an ‘Image’ object.
Parameters: - cls (str) – The image’s class; the name of the person.
- name (str) – The image’s name.
- path (str) – Path to the image on disk.
- openface.data.iterImgs(directory)[source]¶ Iterate through the images in a directory.
The images should be organized in subdirectories named by the image’s class (who the person is):
    $ tree directory
    person-1
    ├── image-1.jpg
    ├── image-2.png
    ...
    └── image-p.png
    ...
    person-m
    ├── image-1.png
    ├── image-2.jpg
    ...
    └── image-q.png
Parameters: directory (str) – The directory to iterate through.
Returns: An iterator over Image objects.
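The iteration is driven purely by the directory layout above: each subdirectory name becomes the image's class (the person), and each file inside it yields one record. A sketch using plain tuples in place of openface.data.Image objects:

```python
import os
import tempfile

# Sketch of the traversal iterImgs performs; (cls, name, path) tuples stand
# in for openface.data.Image instances.
def iter_imgs(directory):
    for cls in sorted(os.listdir(directory)):
        for name in sorted(os.listdir(os.path.join(directory, cls))):
            yield (cls, name, os.path.join(directory, cls, name))

# Build a tiny tree matching the documented layout.
root = tempfile.mkdtemp()
for person, imgs in [("person-1", ["a.jpg", "b.png"]), ("person-2", ["c.png"])]:
    os.makedirs(os.path.join(root, person))
    for img in imgs:
        open(os.path.join(root, person, img), "w").close()

print([(c, n) for c, n, _ in iter_imgs(root)])
# → [('person-1', 'a.jpg'), ('person-1', 'b.png'), ('person-2', 'c.png')]
```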