
Vision

The Google Cloud Vision API (see the Vision API docs) enables developers to understand the content of an image by encapsulating powerful machine learning models in an easy-to-use REST API. It quickly classifies images into thousands of categories (e.g., "sailboat", "lion", "Eiffel Tower"), detects individual objects and faces within images, and finds and reads printed words contained within images. You can build metadata on your image catalog, moderate offensive content, or enable new marketing scenarios through image sentiment analysis. Analyze images uploaded in the request, or integrate with your image storage on Google Cloud Storage.

Authentication and Configuration

>>> from google.cloud import vision
>>> client = vision.ImageAnnotatorClient()

Or pass in credentials explicitly:

>>> from google.cloud import vision
>>> client = vision.ImageAnnotatorClient(
...     credentials=creds,
... )
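
One common way to obtain explicit credentials is from a service account key file, using the google-auth library's service-account support (the key file path below is a placeholder):

>>> from google.oauth2 import service_account
>>> creds = service_account.Credentials.from_service_account_file(
...     '/path/to/keyfile.json')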

Annotate an Image

You can call the :meth:`annotate_image` method directly:

>>> from google.cloud import vision
>>> client = vision.ImageAnnotatorClient()
>>> response = client.annotate_image({
...     'image': {'source': {'image_uri': 'gs://my-test-bucket/image.jpg'}},
...     'features': [
...         {'type': vision.enums.Feature.Type.FACE_DETECTION},
...         {'type': vision.enums.Feature.Type.LOGO_DETECTION},
...     ],
... })
>>> for face in response.face_annotations:
...     print(face.joy_likelihood)
Likelihood.VERY_LIKELY
Likelihood.VERY_LIKELY
Likelihood.VERY_LIKELY
>>> for logo in response.logo_annotations:
...     print(logo.description)
google
github

Single-feature Shortcuts

If you are only requesting a single feature, you may find it easier to ask for it using our direct methods:

>>> from google.cloud import vision
>>> client = vision.ImageAnnotatorClient()
>>> response = client.face_detection({
...   'source': {'image_uri': 'gs://my-test-bucket/image.jpg'},
... })
>>> for face in response.face_annotations:
...     print(face.joy_likelihood)
Likelihood.VERY_LIKELY
Likelihood.VERY_LIKELY
Likelihood.VERY_LIKELY
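
The other single-feature helpers follow the same pattern. For example, printed text can be read with :meth:`~google.cloud.vision.ImageAnnotatorClient.text_detection`; each annotation on the response's ``text_annotations`` field carries the recognized text in its ``description`` (the bucket path below is illustrative):

>>> response = client.text_detection({
...     'source': {'image_uri': 'gs://my-test-bucket/sign.jpg'},
... })
>>> for text in response.text_annotations:
...     print(text.description)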

No results found

If the requested detection finds no results in the image, an empty list is returned. This behavior is the same for all detection types.

Example with :meth:`~google.cloud.vision.ImageAnnotatorClient.logo_detection`:

>>> from google.cloud import vision
>>> client = vision.ImageAnnotatorClient()
>>> with open('./image.jpg', 'rb') as image_file:
...     content = image_file.read()
>>> response = client.logo_detection({
...     'content': content,
... })
>>> len(response.logo_annotations)
0
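
Note that an empty result is not itself an error. Actual failures are reported on the response's ``error`` field (a ``google.rpc.Status`` message), which you can check explicitly:

>>> response = client.logo_detection({
...     'content': content,
... })
>>> if response.error.message:
...     raise RuntimeError(response.error.message)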

API Reference

.. toctree::
  :maxdepth: 2

  gapic/v1/api
  gapic/v1/types

Releases

For a list of all google-cloud-vision releases:

.. toctree::

  releases