diff --git a/.bumpversion.cfg b/.bumpversion.cfg index a1a5402f1..0951758da 100644 --- a/.bumpversion.cfg +++ b/.bumpversion.cfg @@ -1,5 +1,5 @@ [bumpversion] -current_version = 4.0.4 +current_version = 4.1.0 commit = True [bumpversion:file:ibm_watson/version.py] diff --git a/.github/CODE_OF_CONDUCT.md b/.github/CODE_OF_CONDUCT.md new file mode 100644 index 000000000..c046c47f5 --- /dev/null +++ b/.github/CODE_OF_CONDUCT.md @@ -0,0 +1,46 @@ +# Contributor Covenant Code of Conduct + +## Our Pledge + +In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation. + +## Our Standards + +Examples of behavior that contributes to creating a positive environment include: + +* Using welcoming and inclusive language +* Being respectful of differing viewpoints and experiences +* Gracefully accepting constructive criticism +* Focusing on what is best for the community +* Showing empathy towards other community members + +Examples of unacceptable behavior by participants include: + +* The use of sexualized language or imagery and unwelcome sexual attention or advances +* Trolling, insulting/derogatory comments, and personal or political attacks +* Public or private harassment +* Publishing others' private information, such as a physical or electronic address, without explicit permission +* Other conduct which could reasonably be considered inappropriate in a professional setting + +## Our Responsibilities + +Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. 
+ +Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. + +## Scope + +This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. + +## Enforcement + +Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at ehdsouza27@gmail.com. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. + +Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. 
+ +## Attribution + +This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version] + +[homepage]: http://contributor-covenant.org +[version]: http://contributor-covenant.org/version/1/4/ \ No newline at end of file diff --git a/.github/issue_template.md b/.github/issue_template.md deleted file mode 100644 index c789fbb17..000000000 --- a/.github/issue_template.md +++ /dev/null @@ -1,12 +0,0 @@ -#### Expected behavior - -#### Actual behavior - -#### Steps to reproduce the problem - -#### Code snippet (Note: Do not paste your credentials) - -#### python sdk version - -#### python version - diff --git a/CHANGELOG.md b/CHANGELOG.md index 6fae472a4..1c425f246 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,18 @@ +# [4.1.0](https://github.com/watson-developer-cloud/python-sdk/compare/v4.0.4...v4.1.0) (2019-11-27) + + +### Features + +* **assistantv1:** New param `disambiguation_opt_out` in `create_dialog_node()` ([5a5b840](https://github.com/watson-developer-cloud/python-sdk/commit/5a5b84076ff4b0d87355ed71cf7a2cbb9612c866)) +* **assistantv1:** New param `new_disambiguation_opt_out` in `update_dialog_node()` ([6e52e07](https://github.com/watson-developer-cloud/python-sdk/commit/6e52e07b3e3ab0a9bc2687406b8a98c5e5826e33)) +* **assistantv1:** New param `webhooks` in `create_workspace()` and `update_workspace()` ([0134b69](https://github.com/watson-developer-cloud/python-sdk/commit/0134b6981c09fc7132297aeb161eb75029bbd54d)) +* **assistantv1:** New properties `randomize` and `max_suggestions` in `WorkspaceSystemSettingsDisambiguation` ([27a8cd7](https://github.com/watson-developer-cloud/python-sdk/commit/27a8cd7173a48fb6aaf909598fc3eb34e1320fe4)) +* **assistantv1:** New property `off_topic` in `WorkspaceSystemSettings` ([5f93c55](https://github.com/watson-developer-cloud/python-sdk/commit/5f93c552828b539b846c9a44df4f69ed888d27b4)) +* **discoveryv1:** `title` property not
part of `QueryNoticesResult` and `QueryResult` ([2ce0ad3](https://github.com/watson-developer-cloud/python-sdk/commit/2ce0ad33c91714eb6d9b2adb7ac44ff70ad378e9)) +* **discoveryv2:** Add examples for discoveryv2 ([2b54527](https://github.com/watson-developer-cloud/python-sdk/commit/2b54527725438d229e4acd80dc31d0869bdaa464)) +* **discoveryv2:** New discovery v2 available on CP4D ([73df7e4](https://github.com/watson-developer-cloud/python-sdk/commit/73df7e4a53ef83ad1271b71215ab357f7a538177)) +* **VisualRecognitionv4:** New method `get_training_usage` ([a5bec46](https://github.com/watson-developer-cloud/python-sdk/commit/a5bec467005db9340f6983654c293c94587258d9)) + ## [4.0.4](https://github.com/watson-developer-cloud/python-sdk/compare/v4.0.3...v4.0.4) (2019-11-22) diff --git a/README.md b/README.md index 0f148b3d4..20c1ecbc3 100755 --- a/README.md +++ b/README.md @@ -13,6 +13,7 @@ Python client library to quickly get started with the various [Watson APIs][wdc] * [Before you begin](#before-you-begin) * [Installation](#installation) * [Examples](#examples) + * [Discovery v2 only on CP4D](#discovery-v2-only-on-cp4d) * [Running in IBM Cloud](#running-in-ibm-cloud) * [Authentication](#authentication) * [Getting credentials](#getting-credentials) @@ -83,6 +84,9 @@ For more details see [#405](https://github.com/watson-developer-cloud/python-sdk The [examples][examples] folder has basic and advanced examples. The examples within each service assume that you already have [service credentials](#getting-credentials). +## Discovery v2 only on CP4D +Discovery v2 is only available on Cloud Pak for Data. + ## Running in IBM Cloud If you run your app in IBM Cloud, the SDK gets credentials from the [`VCAP_SERVICES`][vcap_services] environment variable. 
diff --git a/examples/discovery_v2.py b/examples/discovery_v2.py new file mode 100644 index 000000000..07bbfdd59 --- /dev/null +++ b/examples/discovery_v2.py @@ -0,0 +1,84 @@ +import json +import os +from ibm_watson import DiscoveryV2 +from ibm_watson.discovery_v2 import TrainingExample +from ibm_cloud_sdk_core.authenticators import CloudPakForDataAuthenticator, BearerTokenAuthenticator + +## Important: Discovery v2 is only available on Cloud Pak for Data. ## + +## Authentication ## +## Option 1: username/password +authenticator = CloudPakForDataAuthenticator('', + '', + '', + disable_ssl_verification=True) + +## Option 2: bearer token (uncomment to use instead of option 1) +# authenticator = BearerTokenAuthenticator('your bearer token') + +## Initialize discovery instance ## +discovery = DiscoveryV2(version='2019-11-22', authenticator=authenticator) +discovery.set_service_url( + '' +) +discovery.set_disable_ssl_verification(True) + +PROJECT_ID = 'your project id' +## List Collections ## +collections = discovery.list_collections(project_id=PROJECT_ID).get_result() +print(json.dumps(collections, indent=2)) + +## Component settings ## +settings_result = discovery.get_component_settings( + project_id=PROJECT_ID).get_result() +print(json.dumps(settings_result, indent=2)) + +## Add Document ## +COLLECTION_ID = 'your collection id' +with open(os.path.join(os.getcwd(), '..', 'resources', + 'simple.html')) as fileinfo: + add_document_result = discovery.add_document(project_id=PROJECT_ID, + collection_id=COLLECTION_ID, + file=fileinfo).get_result() +print(json.dumps(add_document_result, indent=2)) +document_id = add_document_result.get('document_id') + +## Create Training Data ## +training_example = TrainingExample(document_id=document_id, + collection_id=COLLECTION_ID, + relevance=1) +create_query = discovery.create_training_query( + project_id=PROJECT_ID, + natural_language_query='How is the weather today?', + examples=[training_example]).get_result() +print(json.dumps(create_query, indent=2)) +
+training_queries = discovery.list_training_queries( + project_id=PROJECT_ID).get_result() +print(json.dumps(training_queries, indent=2)) + +## Queries ## +query_result = discovery.query( + project_id=PROJECT_ID, + collection_ids=[COLLECTION_ID], + natural_language_query='How is the weather today?').get_result() +print(json.dumps(query_result, indent=2)) + +autocomplete_result = discovery.get_autocompletion( + project_id=PROJECT_ID, prefix="The content").get_result() +print(json.dumps(autocomplete_result, indent=2)) + +query_notices_result = discovery.query_notices( + project_id=PROJECT_ID, natural_language_query='warning').get_result() +print(json.dumps(query_notices_result, indent=2)) + +list_fields = discovery.list_fields(project_id=PROJECT_ID).get_result() +print(json.dumps(list_fields, indent=2)) + +## Cleanup ## +discovery.delete_training_queries(project_id=PROJECT_ID).get_result() + +delete_document_result = discovery.delete_document( + project_id=PROJECT_ID, collection_id=COLLECTION_ID, + document_id=document_id).get_result() +print(json.dumps(delete_document_result, indent=2)) diff --git a/examples/visual_recognition_v4.py b/examples/visual_recognition_v4.py index 6ea4eb7b0..bb690deee 100644 --- a/examples/visual_recognition_v4.py +++ b/examples/visual_recognition_v4.py @@ -35,9 +35,15 @@ TrainingDataObject(object='giraffe training data', location=Location(64, 270, 755, 784)) ]).get_result() +print(json.dumps(training_data, indent=2)) # train collection train_result = service.train(collection_id).get_result() +print(json.dumps(train_result, indent=2)) + +# training usage +training_usage = service.get_training_usage().get_result() +print(json.dumps(training_usage, indent=2)) # analyze dog_path = os.path.join(os.path.dirname(__file__), '../resources/dog.jpg') @@ -51,7 +57,7 @@ FileWithMetadata(giraffe_files) ], image_url=['https://upload.wikimedia.org/wikipedia/commons/thumb/4/47/American_Eskimo_Dog.jpg/1280px-American_Eskimo_Dog.jpg']).get_result() - assert
analyze_images is not None + print(json.dumps(analyze_images, indent=2)) # delete collection service.delete_collection(collection_id) diff --git a/ibm_watson/__init__.py b/ibm_watson/__init__.py index 8b757e30e..4f8afd62c 100755 --- a/ibm_watson/__init__.py +++ b/ibm_watson/__init__.py @@ -24,6 +24,7 @@ from .text_to_speech_v1 import TextToSpeechV1 from .tone_analyzer_v3 import ToneAnalyzerV3 from .discovery_v1 import DiscoveryV1 +from .discovery_v2 import DiscoveryV2 from .compare_comply_v1 import CompareComplyV1 from .visual_recognition_v3 import VisualRecognitionV3 from .version import __version__ diff --git a/ibm_watson/assistant_v1.py b/ibm_watson/assistant_v1.py index c47cd91d7..c93fc3ada 100644 --- a/ibm_watson/assistant_v1.py +++ b/ibm_watson/assistant_v1.py @@ -243,6 +243,7 @@ def create_workspace(self, entities=None, dialog_nodes=None, counterexamples=None, + webhooks=None, **kwargs): """ Create workspace. @@ -272,6 +273,7 @@ def create_workspace(self, describing the dialog nodes in the workspace. :param list[Counterexample] counterexamples: (optional) An array of objects defining input examples that have been marked as irrelevant input. + :param list[Webhook] webhooks: (optional) :param dict headers: A `dict` containing the request headers :return: A `DetailedResponse` containing the result, headers and HTTP status code. 
:rtype: DetailedResponse @@ -287,6 +289,8 @@ def create_workspace(self, dialog_nodes = [self._convert_model(x) for x in dialog_nodes] if counterexamples is not None: counterexamples = [self._convert_model(x) for x in counterexamples] + if webhooks is not None: + webhooks = [self._convert_model(x) for x in webhooks] headers = {} if 'headers' in kwargs: @@ -306,7 +310,8 @@ def create_workspace(self, 'intents': intents, 'entities': entities, 'dialog_nodes': dialog_nodes, - 'counterexamples': counterexamples + 'counterexamples': counterexamples, + 'webhooks': webhooks } url = '/v1/workspaces' @@ -388,6 +393,7 @@ def update_workspace(self, entities=None, dialog_nodes=None, counterexamples=None, + webhooks=None, append=None, **kwargs): """ @@ -419,6 +425,7 @@ def update_workspace(self, describing the dialog nodes in the workspace. :param list[Counterexample] counterexamples: (optional) An array of objects defining input examples that have been marked as irrelevant input. + :param list[Webhook] webhooks: (optional) :param bool append: (optional) Whether the new data is to be appended to the existing data in the workspace. 
If **append**=`false`, elements included in the new data completely replace the corresponding existing @@ -445,6 +452,8 @@ def update_workspace(self, dialog_nodes = [self._convert_model(x) for x in dialog_nodes] if counterexamples is not None: counterexamples = [self._convert_model(x) for x in counterexamples] + if webhooks is not None: + webhooks = [self._convert_model(x) for x in webhooks] headers = {} if 'headers' in kwargs: @@ -464,7 +473,8 @@ def update_workspace(self, 'intents': intents, 'entities': entities, 'dialog_nodes': dialog_nodes, - 'counterexamples': counterexamples + 'counterexamples': counterexamples, + 'webhooks': webhooks } url = '/v1/workspaces/{0}'.format(*self._encode_path_vars(workspace_id)) @@ -2387,6 +2397,7 @@ def create_dialog_node(self, digress_out=None, digress_out_slots=None, user_label=None, + disambiguation_opt_out=None, **kwargs): """ Create dialog node. @@ -2437,6 +2448,8 @@ def create_dialog_node(self, top-level nodes while filling out slots. :param str user_label: (optional) A label that can be displayed externally to describe the purpose of the node to users. + :param bool disambiguation_opt_out: (optional) Whether the dialog node + should be excluded from disambiguation suggestions. :param dict headers: A `dict` containing the request headers :return: A `DetailedResponse` containing the result, headers and HTTP status code. :rtype: DetailedResponse @@ -2480,7 +2493,8 @@ def create_dialog_node(self, 'digress_in': digress_in, 'digress_out': digress_out, 'digress_out_slots': digress_out_slots, - 'user_label': user_label + 'user_label': user_label, + 'disambiguation_opt_out': disambiguation_opt_out } url = '/v1/workspaces/{0}/dialog_nodes'.format( @@ -2561,6 +2575,7 @@ def update_dialog_node(self, new_digress_out=None, new_digress_out_slots=None, new_user_label=None, + new_disambiguation_opt_out=None, **kwargs): """ Update dialog node. @@ -2613,6 +2628,8 @@ def update_dialog_node(self, to top-level nodes while filling out slots. 
:param str new_user_label: (optional) A label that can be displayed externally to describe the purpose of the node to users. + :param bool new_disambiguation_opt_out: (optional) Whether the dialog node + should be excluded from disambiguation suggestions. :param dict headers: A `dict` containing the request headers :return: A `DetailedResponse` containing the result, headers and HTTP status code. :rtype: DetailedResponse @@ -2656,7 +2673,8 @@ def update_dialog_node(self, 'digress_in': new_digress_in, 'digress_out': new_digress_out, 'digress_out_slots': new_digress_out_slots, - 'user_label': new_user_label + 'user_label': new_user_label, + 'disambiguation_opt_out': new_disambiguation_opt_out } url = '/v1/workspaces/{0}/dialog_nodes/{1}'.format( @@ -2789,7 +2807,8 @@ def list_all_logs(self, :param str filter: A cacheable parameter that limits the results to those matching the specified filter. You must specify a filter query that - includes a value for `language`, as well as a value for `workspace_id` or + includes a value for `language`, as well as a value for + `request.context.system.assistant_id`, `workspace_id`, or `request.context.metadata.deployment`. For more information, see the [documentation](https://cloud.ibm.com/docs/services/assistant?topic=assistant-filter-reference#filter-reference). :param str sort: (optional) How to sort the returned log events. You can @@ -3696,6 +3715,8 @@ class DialogNode(): top-level nodes while filling out slots. :attr str user_label: (optional) A label that can be displayed externally to describe the purpose of the node to users. + :attr bool disambiguation_opt_out: (optional) Whether the dialog node should be + excluded from disambiguation suggestions. :attr bool disabled: (optional) For internal use only. :attr datetime created: (optional) The timestamp for creation of the object. 
:attr datetime updated: (optional) The timestamp for the most recent update to @@ -3722,6 +3743,7 @@ def __init__(self, digress_out=None, digress_out_slots=None, user_label=None, + disambiguation_opt_out=None, disabled=None, created=None, updated=None): @@ -3767,6 +3789,8 @@ def __init__(self, top-level nodes while filling out slots. :param str user_label: (optional) A label that can be displayed externally to describe the purpose of the node to users. + :param bool disambiguation_opt_out: (optional) Whether the dialog node + should be excluded from disambiguation suggestions. :param bool disabled: (optional) For internal use only. :param datetime created: (optional) The timestamp for creation of the object. @@ -3791,6 +3815,7 @@ def __init__(self, self.digress_out = digress_out self.digress_out_slots = digress_out_slots self.user_label = user_label + self.disambiguation_opt_out = disambiguation_opt_out self.disabled = disabled self.created = created self.updated = updated @@ -3803,8 +3828,8 @@ def _from_dict(cls, _dict): 'dialog_node', 'description', 'conditions', 'parent', 'previous_sibling', 'output', 'context', 'metadata', 'next_step', 'title', 'type', 'event_name', 'variable', 'actions', 'digress_in', - 'digress_out', 'digress_out_slots', 'user_label', 'disabled', - 'created', 'updated' + 'digress_out', 'digress_out_slots', 'user_label', + 'disambiguation_opt_out', 'disabled', 'created', 'updated' ] bad_keys = set(_dict.keys()) - set(valid_keys) if bad_keys: @@ -3854,6 +3879,8 @@ def _from_dict(cls, _dict): args['digress_out_slots'] = _dict.get('digress_out_slots') if 'user_label' in _dict: args['user_label'] = _dict.get('user_label') + if 'disambiguation_opt_out' in _dict: + args['disambiguation_opt_out'] = _dict.get('disambiguation_opt_out') if 'disabled' in _dict: args['disabled'] = _dict.get('disabled') if 'created' in _dict: @@ -3903,6 +3930,9 @@ def _to_dict(self): _dict['digress_out_slots'] = self.digress_out_slots if hasattr(self, 'user_label') and 
self.user_label is not None: _dict['user_label'] = self.user_label + if hasattr(self, 'disambiguation_opt_out' + ) and self.disambiguation_opt_out is not None: + _dict['disambiguation_opt_out'] = self.disambiguation_opt_out if hasattr(self, 'disabled') and self.disabled is not None: _dict['disabled'] = self.disabled if hasattr(self, 'created') and self.created is not None: @@ -4084,6 +4114,7 @@ class TypeEnum(Enum): SERVER = "server" CLOUD_FUNCTION = "cloud_function" WEB_ACTION = "web_action" + WEBHOOK = "webhook" class DialogNodeCollection(): @@ -5018,7 +5049,8 @@ class DialogSuggestion(): DialogSuggestion. :attr str label: The user-facing label for the disambiguation option. This label - is taken from the **user_label** property of the corresponding dialog node. + is taken from the **title** or **user_label** property of the corresponding + dialog node, depending on the disambiguation options. :attr DialogSuggestionValue value: An object defining the message input, intents, and entities to be sent to the Watson Assistant service if the user selects the corresponding disambiguation option. @@ -5035,8 +5067,8 @@ def __init__(self, label, value, *, output=None, dialog_node=None): Initialize a DialogSuggestion object. :param str label: The user-facing label for the disambiguation option. This - label is taken from the **user_label** property of the corresponding dialog - node. + label is taken from the **title** or **user_label** property of the + corresponding dialog node, depending on the disambiguation options. :param DialogSuggestionValue value: An object defining the message input, intents, and entities to be sent to the Watson Assistant service if the user selects the corresponding disambiguation option. @@ -7472,7 +7504,7 @@ class RuntimeResponseGeneric(): :attr list[DialogSuggestion] suggestions: (optional) An array of objects describing the possible matching dialog nodes from which the user can choose. 
**Note:** The **suggestions** property is part of the disambiguation feature, - which is only available for Premium users. + which is only available for Plus and Premium users. """ def __init__(self, @@ -7522,7 +7554,7 @@ def __init__(self, describing the possible matching dialog nodes from which the user can choose. **Note:** The **suggestions** property is part of the disambiguation - feature, which is only available for Premium users. + feature, which is only available for Plus and Premium users. """ self.response_type = response_type self.text = text @@ -8071,6 +8103,151 @@ def __ne__(self, other): return not self == other +class Webhook(): + """ + A webhook that can be used by dialog nodes to make programmatic calls to an external + function. + **Note:** Currently, only a single webhook named `main_webhook` is supported. + + :attr str url: The URL for the external service or application to which you want + to send HTTP POST requests. + :attr str name: The name of the webhook. Currently, `main_webhook` is the only + supported value. + :attr list[WebhookHeader] headers: (optional) An optional array of HTTP headers + to pass with the HTTP request. + """ + + def __init__(self, url, name, *, headers=None): + """ + Initialize a Webhook object. + + :param str url: The URL for the external service or application to which + you want to send HTTP POST requests. + :param str name: The name of the webhook. Currently, `main_webhook` is the + only supported value. + :param list[WebhookHeader] headers: (optional) An optional array of HTTP + headers to pass with the HTTP request. 
+ """ + self.url = url + self.name = name + self.headers = headers + + @classmethod + def _from_dict(cls, _dict): + """Initialize a Webhook object from a json dictionary.""" + args = {} + valid_keys = ['url', 'name', 'headers'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class Webhook: ' + + ', '.join(bad_keys)) + if 'url' in _dict: + args['url'] = _dict.get('url') + else: + raise ValueError( + 'Required property \'url\' not present in Webhook JSON') + if 'name' in _dict: + args['name'] = _dict.get('name') + else: + raise ValueError( + 'Required property \'name\' not present in Webhook JSON') + if 'headers' in _dict: + args['headers'] = [ + WebhookHeader._from_dict(x) for x in (_dict.get('headers')) + ] + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'url') and self.url is not None: + _dict['url'] = self.url + if hasattr(self, 'name') and self.name is not None: + _dict['name'] = self.name + if hasattr(self, 'headers') and self.headers is not None: + _dict['headers'] = [x._to_dict() for x in self.headers] + return _dict + + def __str__(self): + """Return a `str` version of this Webhook object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class WebhookHeader(): + """ + A key/value pair defining an HTTP header and a value. + + :attr str name: The name of an HTTP header (for example, `Authorization`). + :attr str value: The value of an HTTP header. + """ + + def __init__(self, name, value): + """ + Initialize a WebhookHeader object. 
+ + :param str name: The name of an HTTP header (for example, `Authorization`). + :param str value: The value of an HTTP header. + """ + self.name = name + self.value = value + + @classmethod + def _from_dict(cls, _dict): + """Initialize a WebhookHeader object from a json dictionary.""" + args = {} + valid_keys = ['name', 'value'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class WebhookHeader: ' + + ', '.join(bad_keys)) + if 'name' in _dict: + args['name'] = _dict.get('name') + else: + raise ValueError( + 'Required property \'name\' not present in WebhookHeader JSON') + if 'value' in _dict: + args['value'] = _dict.get('value') + else: + raise ValueError( + 'Required property \'value\' not present in WebhookHeader JSON') + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'name') and self.name is not None: + _dict['name'] = self.name + if hasattr(self, 'value') and self.value is not None: + _dict['value'] = self.value + return _dict + + def __str__(self): + """Return a `str` version of this WebhookHeader object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + class Workspace(): """ Workspace. @@ -8098,6 +8275,7 @@ class Workspace(): the dialog nodes in the workspace. :attr list[Counterexample] counterexamples: (optional) An array of counterexamples. 
+ :attr list[Webhook] webhooks: (optional) """ def __init__(self, @@ -8115,7 +8293,8 @@ def __init__(self, intents=None, entities=None, dialog_nodes=None, - counterexamples=None): + counterexamples=None, + webhooks=None): """ Initialize a Workspace object. @@ -8144,6 +8323,7 @@ def __init__(self, describing the dialog nodes in the workspace. :param list[Counterexample] counterexamples: (optional) An array of counterexamples. + :param list[Webhook] webhooks: (optional) """ self.name = name self.description = description @@ -8159,6 +8339,7 @@ def __init__(self, self.entities = entities self.dialog_nodes = dialog_nodes self.counterexamples = counterexamples + self.webhooks = webhooks @classmethod def _from_dict(cls, _dict): @@ -8167,7 +8348,7 @@ def _from_dict(cls, _dict): valid_keys = [ 'name', 'description', 'language', 'metadata', 'learning_opt_out', 'system_settings', 'workspace_id', 'status', 'created', 'updated', - 'intents', 'entities', 'dialog_nodes', 'counterexamples' + 'intents', 'entities', 'dialog_nodes', 'counterexamples', 'webhooks' ] bad_keys = set(_dict.keys()) - set(valid_keys) if bad_keys: @@ -8226,6 +8407,10 @@ def _from_dict(cls, _dict): Counterexample._from_dict(x) for x in (_dict.get('counterexamples')) ] + if 'webhooks' in _dict: + args['webhooks'] = [ + Webhook._from_dict(x) for x in (_dict.get('webhooks')) + ] return cls(**args) def _to_dict(self): @@ -8264,6 +8449,8 @@ def _to_dict(self): _dict['counterexamples'] = [ x._to_dict() for x in self.counterexamples ] + if hasattr(self, 'webhooks') and self.webhooks is not None: + _dict['webhooks'] = [x._to_dict() for x in self.webhooks] return _dict def __str__(self): @@ -8369,15 +8556,18 @@ class WorkspaceSystemSettings(): related to the Watson Assistant user interface. :attr WorkspaceSystemSettingsDisambiguation disambiguation: (optional) Workspace settings related to the disambiguation feature. - **Note:** This feature is available only to Premium users. 
+ **Note:** This feature is available only to Plus and Premium users. :attr dict human_agent_assist: (optional) For internal use only. + :attr WorkspaceSystemSettingsOffTopic off_topic: (optional) Workspace settings + related to detection of irrelevant input. """ def __init__(self, *, tooling=None, disambiguation=None, - human_agent_assist=None): + human_agent_assist=None, + off_topic=None): """ Initialize a WorkspaceSystemSettings object. @@ -8385,18 +8575,23 @@ def __init__(self, settings related to the Watson Assistant user interface. :param WorkspaceSystemSettingsDisambiguation disambiguation: (optional) Workspace settings related to the disambiguation feature. - **Note:** This feature is available only to Premium users. + **Note:** This feature is available only to Plus and Premium users. :param dict human_agent_assist: (optional) For internal use only. + :param WorkspaceSystemSettingsOffTopic off_topic: (optional) Workspace + settings related to detection of irrelevant input. """ self.tooling = tooling self.disambiguation = disambiguation self.human_agent_assist = human_agent_assist + self.off_topic = off_topic @classmethod def _from_dict(cls, _dict): """Initialize a WorkspaceSystemSettings object from a json dictionary.""" args = {} - valid_keys = ['tooling', 'disambiguation', 'human_agent_assist'] + valid_keys = [ + 'tooling', 'disambiguation', 'human_agent_assist', 'off_topic' + ] bad_keys = set(_dict.keys()) - set(valid_keys) if bad_keys: raise ValueError( @@ -8411,6 +8606,9 @@ def _from_dict(cls, _dict): _dict.get('disambiguation')) if 'human_agent_assist' in _dict: args['human_agent_assist'] = _dict.get('human_agent_assist') + if 'off_topic' in _dict: + args['off_topic'] = WorkspaceSystemSettingsOffTopic._from_dict( + _dict.get('off_topic')) return cls(**args) def _to_dict(self): @@ -8424,6 +8622,8 @@ def _to_dict(self): self, 'human_agent_assist') and self.human_agent_assist is not None: _dict['human_agent_assist'] = self.human_agent_assist + if 
hasattr(self, 'off_topic') and self.off_topic is not None: + _dict['off_topic'] = self.off_topic._to_dict() return _dict def __str__(self): @@ -8444,7 +8644,7 @@ def __ne__(self, other): class WorkspaceSystemSettingsDisambiguation(): """ Workspace settings related to the disambiguation feature. - **Note:** This feature is available only to Premium users. + **Note:** This feature is available only to Plus and Premium users. :attr str prompt: (optional) The text of the introductory prompt that accompanies disambiguation options presented to the user. @@ -8457,6 +8657,12 @@ class WorkspaceSystemSettingsDisambiguation(): to intent detection conflicts. Set to **high** if you want the disambiguation feature to be triggered more often. This can be useful for testing or demonstration purposes. + :attr bool randomize: (optional) Whether the order in which disambiguation + suggestions are presented should be randomized (but still influenced by relative + confidence). + :attr int max_suggestions: (optional) The maximum number of disambiguation + suggestions that can be included in a `suggestion` response. + :attr str suggestion_text_policy: (optional) For internal use only. """ def __init__(self, @@ -8464,7 +8670,10 @@ def __init__(self, prompt=None, none_of_the_above_prompt=None, enabled=None, - sensitivity=None): + sensitivity=None, + randomize=None, + max_suggestions=None, + suggestion_text_policy=None): """ Initialize a WorkspaceSystemSettingsDisambiguation object. @@ -8479,18 +8688,28 @@ def __init__(self, feature to intent detection conflicts. Set to **high** if you want the disambiguation feature to be triggered more often. This can be useful for testing or demonstration purposes. + :param bool randomize: (optional) Whether the order in which disambiguation + suggestions are presented should be randomized (but still influenced by + relative confidence).
+ :param int max_suggestions: (optional) The maximum number of disambiguation + suggestions that can be included in a `suggestion` response. + :param str suggestion_text_policy: (optional) For internal use only. """ self.prompt = prompt self.none_of_the_above_prompt = none_of_the_above_prompt self.enabled = enabled self.sensitivity = sensitivity + self.randomize = randomize + self.max_suggestions = max_suggestions + self.suggestion_text_policy = suggestion_text_policy @classmethod def _from_dict(cls, _dict): """Initialize a WorkspaceSystemSettingsDisambiguation object from a json dictionary.""" args = {} valid_keys = [ - 'prompt', 'none_of_the_above_prompt', 'enabled', 'sensitivity' + 'prompt', 'none_of_the_above_prompt', 'enabled', 'sensitivity', + 'randomize', 'max_suggestions', 'suggestion_text_policy' ] bad_keys = set(_dict.keys()) - set(valid_keys) if bad_keys: @@ -8506,6 +8725,12 @@ def _from_dict(cls, _dict): args['enabled'] = _dict.get('enabled') if 'sensitivity' in _dict: args['sensitivity'] = _dict.get('sensitivity') + if 'randomize' in _dict: + args['randomize'] = _dict.get('randomize') + if 'max_suggestions' in _dict: + args['max_suggestions'] = _dict.get('max_suggestions') + if 'suggestion_text_policy' in _dict: + args['suggestion_text_policy'] = _dict.get('suggestion_text_policy') return cls(**args) def _to_dict(self): @@ -8520,6 +8745,14 @@ def _to_dict(self): _dict['enabled'] = self.enabled if hasattr(self, 'sensitivity') and self.sensitivity is not None: _dict['sensitivity'] = self.sensitivity + if hasattr(self, 'randomize') and self.randomize is not None: + _dict['randomize'] = self.randomize + if hasattr(self, + 'max_suggestions') and self.max_suggestions is not None: + _dict['max_suggestions'] = self.max_suggestions + if hasattr(self, 'suggestion_text_policy' + ) and self.suggestion_text_policy is not None: + _dict['suggestion_text_policy'] = self.suggestion_text_policy return _dict def __str__(self): @@ -8546,6 +8779,59 @@ class
SensitivityEnum(Enum): HIGH = "high" +class WorkspaceSystemSettingsOffTopic(): + """ + Workspace settings related to detection of irrelevant input. + + :attr bool enabled: (optional) Whether enhanced irrelevance detection is enabled + for the workspace. + """ + + def __init__(self, *, enabled=None): + """ + Initialize a WorkspaceSystemSettingsOffTopic object. + + :param bool enabled: (optional) Whether enhanced irrelevance detection is + enabled for the workspace. + """ + self.enabled = enabled + + @classmethod + def _from_dict(cls, _dict): + """Initialize a WorkspaceSystemSettingsOffTopic object from a json dictionary.""" + args = {} + valid_keys = ['enabled'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class WorkspaceSystemSettingsOffTopic: ' + + ', '.join(bad_keys)) + if 'enabled' in _dict: + args['enabled'] = _dict.get('enabled') + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'enabled') and self.enabled is not None: + _dict['enabled'] = self.enabled + return _dict + + def __str__(self): + """Return a `str` version of this WorkspaceSystemSettingsOffTopic object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + class WorkspaceSystemSettingsTooling(): """ Workspace settings related to the Watson Assistant user interface. 
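The new `WorkspaceSystemSettingsOffTopic` model follows the same key-validation and round-trip pattern as the other models in this diff. A minimal, self-contained sketch of that pattern (a stand-in class, not the SDK itself — `OffTopicSketch`, `from_dict`, and `to_dict` are hypothetical names mirroring the diff's `_from_dict`/`_to_dict` logic):

```python
import json


class OffTopicSketch:
    """Minimal stand-in mirroring WorkspaceSystemSettingsOffTopic's logic."""

    VALID_KEYS = {'enabled'}

    def __init__(self, enabled=None):
        self.enabled = enabled

    @classmethod
    def from_dict(cls, _dict):
        # Reject unrecognized keys, as the SDK's _from_dict methods do.
        bad_keys = set(_dict) - cls.VALID_KEYS
        if bad_keys:
            raise ValueError('Unrecognized keys detected: ' +
                             ', '.join(sorted(bad_keys)))
        return cls(enabled=_dict.get('enabled'))

    def to_dict(self):
        # Emit only attributes that are set, as the SDK's _to_dict methods do.
        _dict = {}
        if self.enabled is not None:
            _dict['enabled'] = self.enabled
        return _dict


settings = OffTopicSketch.from_dict({'enabled': True})
print(json.dumps(settings.to_dict()))  # {"enabled": true}
```

The strict key check is what makes typos in workspace settings fail fast at deserialization time rather than being silently dropped.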
diff --git a/ibm_watson/assistant_v2.py b/ibm_watson/assistant_v2.py index 8f36be0bc..6489bfab1 100644 --- a/ibm_watson/assistant_v2.py +++ b/ibm_watson/assistant_v2.py @@ -89,7 +89,10 @@ def create_session(self, assistant_id, **kwargs): Create a session. Create a new session. A session is used to send user input to a skill and receive - responses. It also maintains the state of the conversation. + responses. It also maintains the state of the conversation. A session persists + until it is deleted, or until it times out because of inactivity. (For more + information, see the + [documentation](https://cloud.ibm.com/docs/services/assistant?topic=assistant-assistant-settings)). :param str assistant_id: Unique identifier of the assistant. To find the assistant ID in the Watson Assistant user interface, open the assistant @@ -127,7 +130,9 @@ def delete_session(self, assistant_id, session_id, **kwargs): """ Delete session. - Deletes a session explicitly before it times out. + Deletes a session explicitly before it times out. (For more information about the + session inactivity timeout, see the + [documentation](https://cloud.ibm.com/docs/services/assistant?topic=assistant-assistant-settings)). :param str assistant_id: Unique identifier of the assistant. To find the assistant ID in the Watson Assistant user interface, open the assistant @@ -680,7 +685,8 @@ class DialogSuggestion(): DialogSuggestion. :attr str label: The user-facing label for the disambiguation option. This label - is taken from the **user_label** property of the corresponding dialog node. + is taken from the **title** or **user_label** property of the corresponding + dialog node, depending on the disambiguation options. :attr DialogSuggestionValue value: An object defining the message input to be sent to the assistant if the user selects the corresponding disambiguation option. @@ -693,8 +699,8 @@ def __init__(self, label, value, *, output=None): Initialize a DialogSuggestion object.
:param str label: The user-facing label for the disambiguation option. This - label is taken from the **user_label** property of the corresponding dialog - node. + label is taken from the **title** or **user_label** property of the + corresponding dialog node, depending on the disambiguation options. :param DialogSuggestionValue value: An object defining the message input to be sent to the assistant if the user selects the corresponding disambiguation option. diff --git a/ibm_watson/common.py b/ibm_watson/common.py index 5929c6b57..81ecc7ec0 100644 --- a/ibm_watson/common.py +++ b/ibm_watson/common.py @@ -21,21 +21,30 @@ USER_AGENT_HEADER = 'User-Agent' SDK_NAME = 'watson-apis-python-sdk' + def get_system_info(): - return '{0} {1} {2}'.format(platform.system(), # OS - platform.release(), # OS version - platform.python_version()) # Python version + return '{0} {1} {2}'.format( + platform.system(), # OS + platform.release(), # OS version + platform.python_version()) # Python version + + def get_user_agent(): return user_agent + def get_sdk_analytics(service_name, service_version, operation_id): return 'service_name={0};service_version={1};operation_id={2}'.format( service_name, service_version, operation_id) + user_agent = '{0}-{1} {2}'.format(SDK_NAME, __version__, get_system_info()) + def get_sdk_headers(service_name, service_version, operation_id): headers = {} - headers[SDK_ANALYTICS_HEADER] = get_sdk_analytics(service_name, service_version, operation_id) + headers[SDK_ANALYTICS_HEADER] = get_sdk_analytics(service_name, + service_version, + operation_id) headers[USER_AGENT_HEADER] = get_user_agent() return headers diff --git a/ibm_watson/discovery_v1.py b/ibm_watson/discovery_v1.py index e5e5145a4..eac3e348e 100644 --- a/ibm_watson/discovery_v1.py +++ b/ibm_watson/discovery_v1.py @@ -1276,7 +1276,7 @@ def add_document(self, :param str collection_id: The ID of the collection. :param file file: (optional) The content of the document to ingest. 
The maximum supported file size when adding a file to a collection is 50 - megabytes, the maximum supported file size when testing a confiruration is + megabytes, the maximum supported file size when testing a configuration is 1 megabyte. Files larger than the supported size are rejected. :param str filename: (optional) The filename for file. :param str file_content_type: (optional) The content type of file. @@ -1391,7 +1391,7 @@ def update_document(self, :param str document_id: The ID of the document. :param file file: (optional) The content of the document to ingest. The maximum supported file size when adding a file to a collection is 50 - megabytes, the maximum supported file size when testing a confiruration is + megabytes, the maximum supported file size when testing a configuration is 1 megabyte. Files larger than the supported size are rejected. :param str filename: (optional) The filename for file. :param str file_content_type: (optional) The content type of file. @@ -9649,7 +9649,6 @@ class QueryNoticesResult(): containing the document for this result. :attr QueryResultMetadata result_metadata: (optional) Metadata of a query result. - :attr str title: (optional) Automatically extracted result title. :attr int code: (optional) The internal status code returned by the ingestion subsystem indicating the overall result of ingesting the source document. :attr str filename: (optional) Name of the original source file (if available). @@ -9665,7 +9664,6 @@ def __init__(self, metadata=None, collection_id=None, result_metadata=None, - title=None, code=None, filename=None, file_type=None, @@ -9681,7 +9679,6 @@ def __init__(self, containing the document for this result. :param QueryResultMetadata result_metadata: (optional) Metadata of a query result. - :param str title: (optional) Automatically extracted result title. 
:param int code: (optional) The internal status code returned by the ingestion subsystem indicating the overall result of ingesting the source document. @@ -9697,7 +9694,6 @@ def __init__(self, self.metadata = metadata self.collection_id = collection_id self.result_metadata = result_metadata - self.title = title self.code = code self.filename = filename self.file_type = file_type @@ -9724,9 +9720,6 @@ def _from_dict(cls, _dict): args['result_metadata'] = QueryResultMetadata._from_dict( _dict.get('result_metadata')) del xtra['result_metadata'] - if 'title' in _dict: - args['title'] = _dict.get('title') - del xtra['title'] if 'code' in _dict: args['code'] = _dict.get('code') del xtra['code'] @@ -9759,8 +9752,6 @@ def _to_dict(self): if hasattr(self, 'result_metadata') and self.result_metadata is not None: _dict['result_metadata'] = self.result_metadata._to_dict() - if hasattr(self, 'title') and self.title is not None: - _dict['title'] = self.title if hasattr(self, 'code') and self.code is not None: _dict['code'] = self.code if hasattr(self, 'filename') and self.filename is not None: @@ -9780,8 +9771,8 @@ def _to_dict(self): def __setattr__(self, name, value): properties = { - 'id', 'metadata', 'collection_id', 'result_metadata', 'title', - 'code', 'filename', 'file_type', 'sha1', 'notices' + 'id', 'metadata', 'collection_id', 'result_metadata', 'code', + 'filename', 'file_type', 'sha1', 'notices' } if not hasattr(self, '_additionalProperties'): super(QueryNoticesResult, self).__setattr__('_additionalProperties', @@ -10076,7 +10067,6 @@ class QueryResult(): containing the document for this result. :attr QueryResultMetadata result_metadata: (optional) Metadata of a query result. - :attr str title: (optional) Automatically extracted result title. """ def __init__(self, @@ -10085,7 +10075,6 @@ def __init__(self, metadata=None, collection_id=None, result_metadata=None, - title=None, **kwargs): """ Initialize a QueryResult object. 
@@ -10096,14 +10085,12 @@ def __init__(self, containing the document for this result. :param QueryResultMetadata result_metadata: (optional) Metadata of a query result. - :param str title: (optional) Automatically extracted result title. :param **kwargs: (optional) Any additional properties. """ self.id = id self.metadata = metadata self.collection_id = collection_id self.result_metadata = result_metadata - self.title = title for _key, _value in kwargs.items(): setattr(self, _key, _value) @@ -10125,9 +10112,6 @@ def _from_dict(cls, _dict): args['result_metadata'] = QueryResultMetadata._from_dict( _dict.get('result_metadata')) del xtra['result_metadata'] - if 'title' in _dict: - args['title'] = _dict.get('title') - del xtra['title'] args.update(xtra) return cls(**args) @@ -10143,8 +10127,6 @@ def _to_dict(self): if hasattr(self, 'result_metadata') and self.result_metadata is not None: _dict['result_metadata'] = self.result_metadata._to_dict() - if hasattr(self, 'title') and self.title is not None: - _dict['title'] = self.title if hasattr(self, '_additionalProperties'): for _key in self._additionalProperties: _value = getattr(self, _key, None) @@ -10153,9 +10135,7 @@ def _to_dict(self): return _dict def __setattr__(self, name, value): - properties = { - 'id', 'metadata', 'collection_id', 'result_metadata', 'title' - } + properties = {'id', 'metadata', 'collection_id', 'result_metadata'} if not hasattr(self, '_additionalProperties'): super(QueryResult, self).__setattr__('_additionalProperties', set()) if name not in properties: diff --git a/ibm_watson/discovery_v2.py b/ibm_watson/discovery_v2.py new file mode 100644 index 000000000..ef9dc5b8f --- /dev/null +++ b/ibm_watson/discovery_v2.py @@ -0,0 +1,5613 @@ +# coding: utf-8 + +# (C) Copyright IBM Corp. 2019. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +""" +IBM Watson™ Discovery for IBM Cloud Pak for Data is a cognitive search and content +analytics engine that you can add to applications to identify patterns, trends and +actionable insights to drive better decision-making. Securely unify structured and +unstructured data with pre-enriched content, and use a simplified query language to +eliminate the need for manual filtering of results. +""" + +import json +from .common import get_sdk_headers +from enum import Enum +from ibm_cloud_sdk_core import BaseService +from ibm_cloud_sdk_core import datetime_to_string, string_to_datetime +from ibm_cloud_sdk_core import get_authenticator_from_environment +from ibm_cloud_sdk_core import read_external_sources +from os.path import basename + +############################################################################## +# Service +############################################################################## + + +class DiscoveryV2(BaseService): + """The Discovery V2 service.""" + + default_service_url = None + + def __init__( + self, + version, + authenticator=None, + ): + """ + Construct a new client for the Discovery service. + + :param str version: The API version date to use with the service, in + "YYYY-MM-DD" format. Whenever the API is changed in a backwards + incompatible way, a new minor version of the API is released. + The service uses the API version for the date you specify, or + the most recent version before that date. 
Note that you should + not programmatically specify the current date at runtime, in + case the API has been updated since your application's release. + Instead, specify a version date that is compatible with your + application, and don't change it until your application is + ready for a later version. + + :param Authenticator authenticator: The authenticator specifies the authentication mechanism. + Get up to date information from https://github.com/IBM/python-sdk-core/blob/master/README.md + about initializing the authenticator of your choice. + """ + + service_url = self.default_service_url + disable_ssl_verification = False + + config = read_external_sources('discovery') + if config.get('URL'): + service_url = config.get('URL') + if config.get('DISABLE_SSL'): + disable_ssl_verification = config.get('DISABLE_SSL') + + if not authenticator: + authenticator = get_authenticator_from_environment('discovery') + + BaseService.__init__(self, + service_url=service_url, + authenticator=authenticator, + disable_ssl_verification=disable_ssl_verification) + self.version = version + + ######################### + # Collections + ######################### + + def list_collections(self, project_id, **kwargs): + """ + List collections. + + Lists existing collections for the specified project. + + :param str project_id: The ID of the project. This information can be found + from the deploy page of the Discovery administrative tooling. + :param dict headers: A `dict` containing the request headers + :return: A `DetailedResponse` containing the result, headers and HTTP status code. 
+ :rtype: DetailedResponse + """ + + if project_id is None: + raise ValueError('project_id must be provided') + + headers = {} + if 'headers' in kwargs: + headers.update(kwargs.get('headers')) + sdk_headers = get_sdk_headers('discovery', 'V2', 'list_collections') + headers.update(sdk_headers) + + params = {'version': self.version} + + url = '/v2/projects/{0}/collections'.format( + *self._encode_path_vars(project_id)) + request = self.prepare_request(method='GET', + url=url, + headers=headers, + params=params, + accept_json=True) + response = self.send(request) + return response + + ######################### + # Queries + ######################### + + def query(self, + project_id, + *, + collection_ids=None, + filter=None, + query=None, + natural_language_query=None, + aggregation=None, + count=None, + return_=None, + offset=None, + sort=None, + highlight=None, + spelling_suggestions=None, + table_results=None, + suggested_refinements=None, + passages=None, + **kwargs): + """ + Query a project. + + By using this method, you can construct queries. For details, see the [Discovery + documentation](https://cloud.ibm.com/docs/services/discovery-data?topic=discovery-data-query-concepts). + + :param str project_id: The ID of the project. This information can be found + from the deploy page of the Discovery administrative tooling. + :param list[str] collection_ids: (optional) A comma-separated list of + collection IDs to be queried against. + :param str filter: (optional) A cacheable query that excludes documents + that don't mention the query content. Filter searches are better for + metadata-type searches and for assessing the concepts in the data set. + :param str query: (optional) A query search returns all documents in your + data set with full enrichments and full text, but with the most relevant + documents listed first. Use a query search when you want to find the most + relevant search results. 
+ :param str natural_language_query: (optional) A natural language query that + returns relevant documents by utilizing training data and natural language + understanding. + :param str aggregation: (optional) An aggregation search that returns an + exact answer by combining query search with filters. Useful for + applications to build lists, tables, and time series. For a full list of + possible aggregations, see the Query reference. + :param int count: (optional) Number of results to return. + :param list[str] return_: (optional) A list of the fields in the document + hierarchy to return. If this parameter is not specified, then all top-level + fields are returned. + :param int offset: (optional) The number of query results to skip at the + beginning. For example, if the total number of results that are returned is + 10 and the offset is 8, it returns the last two results. + :param str sort: (optional) A comma-separated list of fields in the + document to sort on. You can optionally specify a sort direction by + prefixing the field with `-` for descending or `+` for ascending. Ascending + is the default sort direction if no prefix is specified. This parameter + cannot be used in the same query as the **bias** parameter. + :param bool highlight: (optional) When `true`, a highlight field is + returned for each result which contains the fields which match the query + with `<em></em>` tags around the matching query terms. + :param bool spelling_suggestions: (optional) When `true` and the + **natural_language_query** parameter is used, the + **natural_language_query** parameter is spell checked. The most likely + correction is returned in the **suggested_query** field of the response (if + one exists). + :param QueryLargeTableResults table_results: (optional) Configuration for + table retrieval. + :param QueryLargeSuggestedRefinements suggested_refinements: (optional) + Configuration for suggested refinements.
+ :param QueryLargePassages passages: (optional) Configuration for passage + retrieval. + :param dict headers: A `dict` containing the request headers + :return: A `DetailedResponse` containing the result, headers and HTTP status code. + :rtype: DetailedResponse + """ + + if project_id is None: + raise ValueError('project_id must be provided') + if table_results is not None: + table_results = self._convert_model(table_results) + if suggested_refinements is not None: + suggested_refinements = self._convert_model(suggested_refinements) + if passages is not None: + passages = self._convert_model(passages) + + headers = {} + if 'headers' in kwargs: + headers.update(kwargs.get('headers')) + sdk_headers = get_sdk_headers('discovery', 'V2', 'query') + headers.update(sdk_headers) + + params = {'version': self.version} + + data = { + 'collection_ids': collection_ids, + 'filter': filter, + 'query': query, + 'natural_language_query': natural_language_query, + 'aggregation': aggregation, + 'count': count, + 'return': return_, + 'offset': offset, + 'sort': sort, + 'highlight': highlight, + 'spelling_suggestions': spelling_suggestions, + 'table_results': table_results, + 'suggested_refinements': suggested_refinements, + 'passages': passages + } + + url = '/v2/projects/{0}/query'.format( + *self._encode_path_vars(project_id)) + request = self.prepare_request(method='POST', + url=url, + headers=headers, + params=params, + data=data, + accept_json=True) + response = self.send(request) + return response + + def get_autocompletion(self, + project_id, + prefix, + *, + collection_ids=None, + field=None, + count=None, + **kwargs): + """ + Get Autocomplete Suggestions. + + Returns completion query suggestions for the specified prefix. + + :param str project_id: The ID of the project. This information can be found + from the deploy page of the Discovery administrative tooling. + :param str prefix: The prefix to use for autocompletion. 
For example, the + prefix `Ho` could autocomplete to `Hot`, `Housing`, or `How do I upgrade`. + :param list[str] collection_ids: (optional) Comma separated list of the + collection IDs. If this parameter is not specified, all collections in the + project are used. + :param str field: (optional) The field in the result documents that + autocompletion suggestions are identified from. + :param int count: (optional) The number of autocompletion suggestions to + return. + :param dict headers: A `dict` containing the request headers + :return: A `DetailedResponse` containing the result, headers and HTTP status code. + :rtype: DetailedResponse + """ + + if project_id is None: + raise ValueError('project_id must be provided') + if prefix is None: + raise ValueError('prefix must be provided') + + headers = {} + if 'headers' in kwargs: + headers.update(kwargs.get('headers')) + sdk_headers = get_sdk_headers('discovery', 'V2', 'get_autocompletion') + headers.update(sdk_headers) + + params = { + 'version': self.version, + 'prefix': prefix, + 'collection_ids': self._convert_list(collection_ids), + 'field': field, + 'count': count + } + + url = '/v2/projects/{0}/autocompletion'.format( + *self._encode_path_vars(project_id)) + request = self.prepare_request(method='GET', + url=url, + headers=headers, + params=params, + accept_json=True) + response = self.send(request) + return response + + def query_notices(self, + project_id, + *, + filter=None, + query=None, + natural_language_query=None, + count=None, + offset=None, + **kwargs): + """ + Query system notices. + + Queries for notices (errors or warnings) that might have been generated by the + system. Notices are generated when ingesting documents and performing relevance + training. + + :param str project_id: The ID of the project. This information can be found + from the deploy page of the Discovery administrative tooling.
+ :param str filter: (optional) A cacheable query that excludes documents + that don't mention the query content. Filter searches are better for + metadata-type searches and for assessing the concepts in the data set. + :param str query: (optional) A query search returns all documents in your + data set with full enrichments and full text, but with the most relevant + documents listed first. + :param str natural_language_query: (optional) A natural language query that + returns relevant documents by utilizing training data and natural language + understanding. + :param int count: (optional) Number of results to return. The maximum for + the **count** and **offset** values together in any one query is **10000**. + :param int offset: (optional) The number of query results to skip at the + beginning. For example, if the total number of results that are returned is + 10 and the offset is 8, it returns the last two results. The maximum for + the **count** and **offset** values together in any one query is **10000**. + :param dict headers: A `dict` containing the request headers + :return: A `DetailedResponse` containing the result, headers and HTTP status code. + :rtype: DetailedResponse + """ + + if project_id is None: + raise ValueError('project_id must be provided') + + headers = {} + if 'headers' in kwargs: + headers.update(kwargs.get('headers')) + sdk_headers = get_sdk_headers('discovery', 'V2', 'query_notices') + headers.update(sdk_headers) + + params = { + 'version': self.version, + 'filter': filter, + 'query': query, + 'natural_language_query': natural_language_query, + 'count': count, + 'offset': offset + } + + url = '/v2/projects/{0}/notices'.format( + *self._encode_path_vars(project_id)) + request = self.prepare_request(method='GET', + url=url, + headers=headers, + params=params, + accept_json=True) + response = self.send(request) + return response + + def list_fields(self, project_id, *, collection_ids=None, **kwargs): + """ + List fields. 
+ + Gets a list of the unique fields (and their types) stored in the specified + collections. + + :param str project_id: The ID of the project. This information can be found + from the deploy page of the Discovery administrative tooling. + :param list[str] collection_ids: (optional) Comma separated list of the + collection IDs. If this parameter is not specified, all collections in the + project are used. + :param dict headers: A `dict` containing the request headers + :return: A `DetailedResponse` containing the result, headers and HTTP status code. + :rtype: DetailedResponse + """ + + if project_id is None: + raise ValueError('project_id must be provided') + + headers = {} + if 'headers' in kwargs: + headers.update(kwargs.get('headers')) + sdk_headers = get_sdk_headers('discovery', 'V2', 'list_fields') + headers.update(sdk_headers) + + params = { + 'version': self.version, + 'collection_ids': self._convert_list(collection_ids) + } + + url = '/v2/projects/{0}/fields'.format( + *self._encode_path_vars(project_id)) + request = self.prepare_request(method='GET', + url=url, + headers=headers, + params=params, + accept_json=True) + response = self.send(request) + return response + + ######################### + # Component settings + ######################### + + def get_component_settings(self, project_id, **kwargs): + """ + Configuration settings for components. + + Returns default configuration settings for components. + + :param str project_id: The ID of the project. This information can be found + from the deploy page of the Discovery administrative tooling. + :param dict headers: A `dict` containing the request headers + :return: A `DetailedResponse` containing the result, headers and HTTP status code.
+ :rtype: DetailedResponse + """ + + if project_id is None: + raise ValueError('project_id must be provided') + + headers = {} + if 'headers' in kwargs: + headers.update(kwargs.get('headers')) + sdk_headers = get_sdk_headers('discovery', 'V2', + 'get_component_settings') + headers.update(sdk_headers) + + params = {'version': self.version} + + url = '/v2/projects/{0}/component_settings'.format( + *self._encode_path_vars(project_id)) + request = self.prepare_request(method='GET', + url=url, + headers=headers, + params=params, + accept_json=True) + response = self.send(request) + return response + + ######################### + # Documents + ######################### + + def add_document(self, + project_id, + collection_id, + *, + file=None, + filename=None, + file_content_type=None, + metadata=None, + x_watson_discovery_force=None, + **kwargs): + """ + Add a document. + + Add a document to a collection with optional metadata. + Returns immediately after the system has accepted the document for processing. + * The user must provide document content, metadata, or both. If the request is + missing both document content and metadata, it is rejected. + * The user can set the **Content-Type** parameter on the **file** part to + indicate the media type of the document. If the **Content-Type** parameter is + missing or is one of the generic media types (for example, + `application/octet-stream`), then the service attempts to automatically detect the + document's media type. + * The following field names are reserved and will be filtered out if present + after normalization: `id`, `score`, `highlight`, and any field with the prefix of: + `_`, `+`, or `-` + * Fields with empty name values after normalization are filtered out before + indexing. 
+ * Fields containing the following characters after normalization are filtered + out before indexing: `#` and `,` + If the document is uploaded to a collection that has its data shared with + another collection, the **X-Watson-Discovery-Force** header must be set to `true`. + **Note:** Documents can be added with a specific **document_id** by using the + **_/v2/projects/{project_id}/collections/{collection_id}/documents** method. + **Note:** This operation only works on collections created to accept direct file + uploads. It cannot be used to modify a collection that connects to an external + source such as Microsoft SharePoint. + + :param str project_id: The ID of the project. This information can be found + from the deploy page of the Discovery administrative tooling. + :param str collection_id: The ID of the collection. + :param file file: (optional) The content of the document to ingest. The + maximum supported file size when adding a file to a collection is 50 + megabytes, the maximum supported file size when testing a configuration is + 1 megabyte. Files larger than the supported size are rejected. + :param str filename: (optional) The filename for file. + :param str file_content_type: (optional) The content type of file. + :param str metadata: (optional) The maximum supported metadata file size is + 1 MB. Metadata parts larger than 1 MB are rejected. Example: ``` { + "Creator": "Johnny Appleseed", + "Subject": "Apples" + } ```. + :param bool x_watson_discovery_force: (optional) When `true`, the uploaded + document is added to the collection even if the data for that collection is + shared with other collections. + :param dict headers: A `dict` containing the request headers + :return: A `DetailedResponse` containing the result, headers and HTTP status code.
+ :rtype: DetailedResponse + """ + + if project_id is None: + raise ValueError('project_id must be provided') + if collection_id is None: + raise ValueError('collection_id must be provided') + + headers = {'X-Watson-Discovery-Force': x_watson_discovery_force} + if 'headers' in kwargs: + headers.update(kwargs.get('headers')) + sdk_headers = get_sdk_headers('discovery', 'V2', 'add_document') + headers.update(sdk_headers) + + params = {'version': self.version} + + form_data = [] + if file: + if not filename and hasattr(file, 'name'): + filename = basename(file.name) + if not filename: + raise ValueError('filename must be provided') + form_data.append(('file', (filename, file, file_content_type or + 'application/octet-stream'))) + if metadata: + form_data.append(('metadata', (None, metadata, 'text/plain'))) + + url = '/v2/projects/{0}/collections/{1}/documents'.format( + *self._encode_path_vars(project_id, collection_id)) + request = self.prepare_request(method='POST', + url=url, + headers=headers, + params=params, + files=form_data, + accept_json=True) + response = self.send(request) + return response + + def update_document(self, + project_id, + collection_id, + document_id, + *, + file=None, + filename=None, + file_content_type=None, + metadata=None, + x_watson_discovery_force=None, + **kwargs): + """ + Update a document. + + Replace an existing document or add a document with a specified **document_id**. + Starts ingesting a document with optional metadata. + If the document is uploaded to a collection that has its data shared with another + collection, the **X-Watson-Discovery-Force** header must be set to `true`. + **Note:** When uploading a new document with this method, it automatically replaces + any document stored with the same **document_id** if it exists. + **Note:** This operation only works on collections created to accept direct file + uploads. It cannot be used to modify a collection that connects to an external + source such as Microsoft SharePoint.
+
+ :param str project_id: The ID of the project. This information can be found
+ from the deploy page of the Discovery administrative tooling.
+ :param str collection_id: The ID of the collection.
+ :param str document_id: The ID of the document.
+ :param file file: (optional) The content of the document to ingest. The
+ maximum supported file size when adding a file to a collection is 50
+ megabytes; the maximum supported file size when testing a configuration is
+ 1 megabyte. Files larger than the supported size are rejected.
+ :param str filename: (optional) The filename for file.
+ :param str file_content_type: (optional) The content type of file.
+ :param str metadata: (optional) The maximum supported metadata file size is
+ 1 MB. Metadata parts larger than 1 MB are rejected. Example: ``` {
+ "Creator": "Johnny Appleseed",
+ "Subject": "Apples"
+ } ```.
+ :param bool x_watson_discovery_force: (optional) When `true`, the uploaded
+ document is added to the collection even if the data for that collection is
+ shared with other collections.
+ :param dict headers: A `dict` containing the request headers
+ :return: A `DetailedResponse` containing the result, headers and HTTP status code.
+ :rtype: DetailedResponse
+ """
+
+ if project_id is None:
+ raise ValueError('project_id must be provided')
+ if collection_id is None:
+ raise ValueError('collection_id must be provided')
+ if document_id is None:
+ raise ValueError('document_id must be provided')
+
+ headers = {'X-Watson-Discovery-Force': x_watson_discovery_force}
+ if 'headers' in kwargs:
+ headers.update(kwargs.get('headers'))
+ sdk_headers = get_sdk_headers('discovery', 'V2', 'update_document')
+ headers.update(sdk_headers)
+
+ params = {'version': self.version}
+
+ form_data = []
+ if file:
+ if not filename and hasattr(file, 'name'):
+ filename = basename(file.name)
+ if not filename:
+ raise ValueError('filename must be provided')
+ form_data.append(('file', (filename, file, file_content_type or
+ 'application/octet-stream')))
+ if metadata:
+ form_data.append(('metadata', (None, metadata, 'text/plain')))
+
+ url = '/v2/projects/{0}/collections/{1}/documents/{2}'.format(
+ *self._encode_path_vars(project_id, collection_id, document_id))
+ request = self.prepare_request(method='POST',
+ url=url,
+ headers=headers,
+ params=params,
+ files=form_data,
+ accept_json=True)
+ response = self.send(request)
+ return response
+
+ def delete_document(self,
+ project_id,
+ collection_id,
+ document_id,
+ *,
+ x_watson_discovery_force=None,
+ **kwargs):
+ """
+ Delete a document.
+
+ If the given document ID is invalid, or if the document is not found, then a
+ success response is returned (HTTP status code `200`) with the status set to
+ 'deleted'.
+ **Note:** This operation only works on collections created to accept direct file
+ uploads. It cannot be used to modify a collection that connects to an external
+ source such as Microsoft SharePoint.
+
+ :param str project_id: The ID of the project. This information can be found
+ from the deploy page of the Discovery administrative tooling.
+ :param str collection_id: The ID of the collection.
+ :param str document_id: The ID of the document.
+ :param bool x_watson_discovery_force: (optional) When `true`, the uploaded + document is added to the collection even if the data for that collection is + shared with other collections. + :param dict headers: A `dict` containing the request headers + :return: A `DetailedResponse` containing the result, headers and HTTP status code. + :rtype: DetailedResponse + """ + + if project_id is None: + raise ValueError('project_id must be provided') + if collection_id is None: + raise ValueError('collection_id must be provided') + if document_id is None: + raise ValueError('document_id must be provided') + + headers = {'X-Watson-Discovery-Force': x_watson_discovery_force} + if 'headers' in kwargs: + headers.update(kwargs.get('headers')) + sdk_headers = get_sdk_headers('discovery', 'V2', 'delete_document') + headers.update(sdk_headers) + + params = {'version': self.version} + + url = '/v2/projects/{0}/collections/{1}/documents/{2}'.format( + *self._encode_path_vars(project_id, collection_id, document_id)) + request = self.prepare_request(method='DELETE', + url=url, + headers=headers, + params=params, + accept_json=True) + response = self.send(request) + return response + + ######################### + # Training data + ######################### + + def list_training_queries(self, project_id, **kwargs): + """ + List training queries. + + List the training queries for the specified project. + + :param str project_id: The ID of the project. This information can be found + from the deploy page of the Discovery administrative tooling. + :param dict headers: A `dict` containing the request headers + :return: A `DetailedResponse` containing the result, headers and HTTP status code. 
+ :rtype: DetailedResponse + """ + + if project_id is None: + raise ValueError('project_id must be provided') + + headers = {} + if 'headers' in kwargs: + headers.update(kwargs.get('headers')) + sdk_headers = get_sdk_headers('discovery', 'V2', + 'list_training_queries') + headers.update(sdk_headers) + + params = {'version': self.version} + + url = '/v2/projects/{0}/training_data/queries'.format( + *self._encode_path_vars(project_id)) + request = self.prepare_request(method='GET', + url=url, + headers=headers, + params=params, + accept_json=True) + response = self.send(request) + return response + + def delete_training_queries(self, project_id, **kwargs): + """ + Delete training queries. + + Removes all training queries for the specified project. + + :param str project_id: The ID of the project. This information can be found + from the deploy page of the Discovery administrative tooling. + :param dict headers: A `dict` containing the request headers + :return: A `DetailedResponse` containing the result, headers and HTTP status code. + :rtype: DetailedResponse + """ + + if project_id is None: + raise ValueError('project_id must be provided') + + headers = {} + if 'headers' in kwargs: + headers.update(kwargs.get('headers')) + sdk_headers = get_sdk_headers('discovery', 'V2', + 'delete_training_queries') + headers.update(sdk_headers) + + params = {'version': self.version} + + url = '/v2/projects/{0}/training_data/queries'.format( + *self._encode_path_vars(project_id)) + request = self.prepare_request(method='DELETE', + url=url, + headers=headers, + params=params, + accept_json=False) + response = self.send(request) + return response + + def create_training_query(self, + project_id, + natural_language_query, + examples, + *, + filter=None, + **kwargs): + """ + Create training query. + + Add a query to the training data for this project. The query can contain a filter + and natural language query. + + :param str project_id: The ID of the project. 
This information can be found
+ from the deploy page of the Discovery administrative tooling.
+ :param str natural_language_query: The natural text query for the training
+ query.
+ :param list[TrainingExample] examples: Array of training examples.
+ :param str filter: (optional) The filter used on the collection before the
+ **natural_language_query** is applied.
+ :param dict headers: A `dict` containing the request headers
+ :return: A `DetailedResponse` containing the result, headers and HTTP status code.
+ :rtype: DetailedResponse
+ """
+
+ if project_id is None:
+ raise ValueError('project_id must be provided')
+ if natural_language_query is None:
+ raise ValueError('natural_language_query must be provided')
+ if examples is None:
+ raise ValueError('examples must be provided')
+ examples = [self._convert_model(x) for x in examples]
+
+ headers = {}
+ if 'headers' in kwargs:
+ headers.update(kwargs.get('headers'))
+ sdk_headers = get_sdk_headers('discovery', 'V2',
+ 'create_training_query')
+ headers.update(sdk_headers)
+
+ params = {'version': self.version}
+
+ data = {
+ 'natural_language_query': natural_language_query,
+ 'examples': examples,
+ 'filter': filter
+ }
+
+ url = '/v2/projects/{0}/training_data/queries'.format(
+ *self._encode_path_vars(project_id))
+ request = self.prepare_request(method='POST',
+ url=url,
+ headers=headers,
+ params=params,
+ data=data,
+ accept_json=True)
+ response = self.send(request)
+ return response
+
+ def get_training_query(self, project_id, query_id, **kwargs):
+ """
+ Get a training data query.
+
+ Get details for a specific training data query, including the query string and all
+ examples.
+
+ :param str project_id: The ID of the project. This information can be found
+ from the deploy page of the Discovery administrative tooling.
+ :param str query_id: The ID of the query used for training.
+ :param dict headers: A `dict` containing the request headers
+ :return: A `DetailedResponse` containing the result, headers and HTTP status code.
+ :rtype: DetailedResponse
+ """
+
+ if project_id is None:
+ raise ValueError('project_id must be provided')
+ if query_id is None:
+ raise ValueError('query_id must be provided')
+
+ headers = {}
+ if 'headers' in kwargs:
+ headers.update(kwargs.get('headers'))
+ sdk_headers = get_sdk_headers('discovery', 'V2', 'get_training_query')
+ headers.update(sdk_headers)
+
+ params = {'version': self.version}
+
+ url = '/v2/projects/{0}/training_data/queries/{1}'.format(
+ *self._encode_path_vars(project_id, query_id))
+ request = self.prepare_request(method='GET',
+ url=url,
+ headers=headers,
+ params=params,
+ accept_json=True)
+ response = self.send(request)
+ return response
+
+ def update_training_query(self,
+ project_id,
+ query_id,
+ natural_language_query,
+ examples,
+ *,
+ filter=None,
+ **kwargs):
+ """
+ Update a training query.
+
+ Updates an existing training query and its examples.
+
+ :param str project_id: The ID of the project. This information can be found
+ from the deploy page of the Discovery administrative tooling.
+ :param str query_id: The ID of the query used for training.
+ :param str natural_language_query: The natural text query for the training
+ query.
+ :param list[TrainingExample] examples: Array of training examples.
+ :param str filter: (optional) The filter used on the collection before the
+ **natural_language_query** is applied.
+ :param dict headers: A `dict` containing the request headers
+ :return: A `DetailedResponse` containing the result, headers and HTTP status code.
+ :rtype: DetailedResponse + """ + + if project_id is None: + raise ValueError('project_id must be provided') + if query_id is None: + raise ValueError('query_id must be provided') + if natural_language_query is None: + raise ValueError('natural_language_query must be provided') + if examples is None: + raise ValueError('examples must be provided') + examples = [self._convert_model(x) for x in examples] + + headers = {} + if 'headers' in kwargs: + headers.update(kwargs.get('headers')) + sdk_headers = get_sdk_headers('discovery', 'V2', + 'update_training_query') + headers.update(sdk_headers) + + params = {'version': self.version} + + data = { + 'natural_language_query': natural_language_query, + 'examples': examples, + 'filter': filter + } + + url = '/v2/projects/{0}/training_data/queries/{1}'.format( + *self._encode_path_vars(project_id, query_id)) + request = self.prepare_request(method='POST', + url=url, + headers=headers, + params=params, + data=data, + accept_json=True) + response = self.send(request) + return response + + +class AddDocumentEnums(object): + + class FileContentType(Enum): + """ + The content type of file. + """ + APPLICATION_JSON = 'application/json' + APPLICATION_MSWORD = 'application/msword' + APPLICATION_VND_OPENXMLFORMATS_OFFICEDOCUMENT_WORDPROCESSINGML_DOCUMENT = 'application/vnd.openxmlformats-officedocument.wordprocessingml.document' + APPLICATION_PDF = 'application/pdf' + TEXT_HTML = 'text/html' + APPLICATION_XHTML_XML = 'application/xhtml+xml' + + +class UpdateDocumentEnums(object): + + class FileContentType(Enum): + """ + The content type of file. 
+ """ + APPLICATION_JSON = 'application/json' + APPLICATION_MSWORD = 'application/msword' + APPLICATION_VND_OPENXMLFORMATS_OFFICEDOCUMENT_WORDPROCESSINGML_DOCUMENT = 'application/vnd.openxmlformats-officedocument.wordprocessingml.document' + APPLICATION_PDF = 'application/pdf' + TEXT_HTML = 'text/html' + APPLICATION_XHTML_XML = 'application/xhtml+xml' + + +############################################################################## +# Models +############################################################################## + + +class Collection(): + """ + A collection for storing documents. + + :attr str collection_id: (optional) The unique identifier of the collection. + :attr str name: (optional) The name of the collection. + """ + + def __init__(self, *, collection_id=None, name=None): + """ + Initialize a Collection object. + + :param str collection_id: (optional) The unique identifier of the + collection. + :param str name: (optional) The name of the collection. + """ + self.collection_id = collection_id + self.name = name + + @classmethod + def _from_dict(cls, _dict): + """Initialize a Collection object from a json dictionary.""" + args = {} + valid_keys = ['collection_id', 'name'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class Collection: ' + + ', '.join(bad_keys)) + if 'collection_id' in _dict: + args['collection_id'] = _dict.get('collection_id') + if 'name' in _dict: + args['name'] = _dict.get('name') + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'collection_id') and self.collection_id is not None: + _dict['collection_id'] = self.collection_id + if hasattr(self, 'name') and self.name is not None: + _dict['name'] = self.name + return _dict + + def __str__(self): + """Return a `str` version of this Collection object.""" + return json.dumps(self._to_dict(), indent=2) + + def 
__eq__(self, other):
+ """Return `true` when self and other are equal, false otherwise."""
+ if not isinstance(other, self.__class__):
+ return False
+ return self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ """Return `true` when self and other are not equal, false otherwise."""
+ return not self == other
+
+
+class Completions():
+ """
+ An object containing an array of autocompletion suggestions.
+
+ :attr list[str] completions: (optional) Array of autocomplete suggestions based on
+ the provided prefix.
+ """
+
+ def __init__(self, *, completions=None):
+ """
+ Initialize a Completions object.
+
+ :param list[str] completions: (optional) Array of autocomplete suggestions
+ based on the provided prefix.
+ """
+ self.completions = completions
+
+ @classmethod
+ def _from_dict(cls, _dict):
+ """Initialize a Completions object from a json dictionary."""
+ args = {}
+ valid_keys = ['completions']
+ bad_keys = set(_dict.keys()) - set(valid_keys)
+ if bad_keys:
+ raise ValueError(
+ 'Unrecognized keys detected in dictionary for class Completions: ' +
+ ', '.join(bad_keys))
+ if 'completions' in _dict:
+ args['completions'] = _dict.get('completions')
+ return cls(**args)
+
+ def _to_dict(self):
+ """Return a json dictionary representing this model."""
+ _dict = {}
+ if hasattr(self, 'completions') and self.completions is not None:
+ _dict['completions'] = self.completions
+ return _dict
+
+ def __str__(self):
+ """Return a `str` version of this Completions object."""
+ return json.dumps(self._to_dict(), indent=2)
+
+ def __eq__(self, other):
+ """Return `true` when self and other are equal, false otherwise."""
+ if not isinstance(other, self.__class__):
+ return False
+ return self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ """Return `true` when self and other are not equal, false otherwise."""
+ return not self == other
+
+
+class ComponentSettingsAggregation():
+ """
+ Display settings for aggregations.
+
+ :attr str name: (optional) Identifier used to map aggregation settings to
+ aggregation configuration.
+ :attr str label: (optional) User-friendly alias for the aggregation.
+ :attr bool multiple_selections_allowed: (optional) Whether the user is allowed to
+ select more than one of the aggregation terms.
+ :attr str visualization_type: (optional) Type of visualization to use when
+ rendering the aggregation.
+ """
+
+ def __init__(self,
+ *,
+ name=None,
+ label=None,
+ multiple_selections_allowed=None,
+ visualization_type=None):
+ """
+ Initialize a ComponentSettingsAggregation object.
+
+ :param str name: (optional) Identifier used to map aggregation settings to
+ aggregation configuration.
+ :param str label: (optional) User-friendly alias for the aggregation.
+ :param bool multiple_selections_allowed: (optional) Whether the user is
+ allowed to select more than one of the aggregation terms.
+ :param str visualization_type: (optional) Type of visualization to use when
+ rendering the aggregation.
+ """ + self.name = name + self.label = label + self.multiple_selections_allowed = multiple_selections_allowed + self.visualization_type = visualization_type + + @classmethod + def _from_dict(cls, _dict): + """Initialize a ComponentSettingsAggregation object from a json dictionary.""" + args = {} + valid_keys = [ + 'name', 'label', 'multiple_selections_allowed', 'visualization_type' + ] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class ComponentSettingsAggregation: ' + + ', '.join(bad_keys)) + if 'name' in _dict: + args['name'] = _dict.get('name') + if 'label' in _dict: + args['label'] = _dict.get('label') + if 'multiple_selections_allowed' in _dict: + args['multiple_selections_allowed'] = _dict.get( + 'multiple_selections_allowed') + if 'visualization_type' in _dict: + args['visualization_type'] = _dict.get('visualization_type') + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'name') and self.name is not None: + _dict['name'] = self.name + if hasattr(self, 'label') and self.label is not None: + _dict['label'] = self.label + if hasattr(self, 'multiple_selections_allowed' + ) and self.multiple_selections_allowed is not None: + _dict[ + 'multiple_selections_allowed'] = self.multiple_selections_allowed + if hasattr( + self, + 'visualization_type') and self.visualization_type is not None: + _dict['visualization_type'] = self.visualization_type + return _dict + + def __str__(self): + """Return a `str` version of this ComponentSettingsAggregation object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" 
+ return not self == other + + class VisualizationTypeEnum(Enum): + """ + Type of visualization to use when rendering the aggregation. + """ + AUTO = "auto" + FACET_TABLE = "facet_table" + WORD_CLOUD = "word_cloud" + MAP = "map" + + +class ComponentSettingsFieldsShown(): + """ + Fields shown in the results section of the UI. + + :attr ComponentSettingsFieldsShownBody body: (optional) Body label. + :attr ComponentSettingsFieldsShownTitle title: (optional) Title label. + """ + + def __init__(self, *, body=None, title=None): + """ + Initialize a ComponentSettingsFieldsShown object. + + :param ComponentSettingsFieldsShownBody body: (optional) Body label. + :param ComponentSettingsFieldsShownTitle title: (optional) Title label. + """ + self.body = body + self.title = title + + @classmethod + def _from_dict(cls, _dict): + """Initialize a ComponentSettingsFieldsShown object from a json dictionary.""" + args = {} + valid_keys = ['body', 'title'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class ComponentSettingsFieldsShown: ' + + ', '.join(bad_keys)) + if 'body' in _dict: + args['body'] = ComponentSettingsFieldsShownBody._from_dict( + _dict.get('body')) + if 'title' in _dict: + args['title'] = ComponentSettingsFieldsShownTitle._from_dict( + _dict.get('title')) + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'body') and self.body is not None: + _dict['body'] = self.body._to_dict() + if hasattr(self, 'title') and self.title is not None: + _dict['title'] = self.title._to_dict() + return _dict + + def __str__(self): + """Return a `str` version of this ComponentSettingsFieldsShown object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return 
self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ """Return `true` when self and other are not equal, false otherwise."""
+ return not self == other
+
+
+class ComponentSettingsFieldsShownBody():
+ """
+ Body label.
+
+ :attr bool use_passage: (optional) Use the whole passage as the body.
+ :attr str field: (optional) Use a specific field as the body.
+ """
+
+ def __init__(self, *, use_passage=None, field=None):
+ """
+ Initialize a ComponentSettingsFieldsShownBody object.
+
+ :param bool use_passage: (optional) Use the whole passage as the body.
+ :param str field: (optional) Use a specific field as the body.
+ """
+ self.use_passage = use_passage
+ self.field = field
+
+ @classmethod
+ def _from_dict(cls, _dict):
+ """Initialize a ComponentSettingsFieldsShownBody object from a json dictionary."""
+ args = {}
+ valid_keys = ['use_passage', 'field']
+ bad_keys = set(_dict.keys()) - set(valid_keys)
+ if bad_keys:
+ raise ValueError(
+ 'Unrecognized keys detected in dictionary for class ComponentSettingsFieldsShownBody: '
+ + ', '.join(bad_keys))
+ if 'use_passage' in _dict:
+ args['use_passage'] = _dict.get('use_passage')
+ if 'field' in _dict:
+ args['field'] = _dict.get('field')
+ return cls(**args)
+
+ def _to_dict(self):
+ """Return a json dictionary representing this model."""
+ _dict = {}
+ if hasattr(self, 'use_passage') and self.use_passage is not None:
+ _dict['use_passage'] = self.use_passage
+ if hasattr(self, 'field') and self.field is not None:
+ _dict['field'] = self.field
+ return _dict
+
+ def __str__(self):
+ """Return a `str` version of this ComponentSettingsFieldsShownBody object."""
+ return json.dumps(self._to_dict(), indent=2)
+
+ def __eq__(self, other):
+ """Return `true` when self and other are equal, false otherwise."""
+ if not isinstance(other, self.__class__):
+ return False
+ return self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ """Return `true` when self and other are not equal, false otherwise."""
+
return not self == other
+
+
+class ComponentSettingsFieldsShownTitle():
+ """
+ Title label.
+
+ :attr str field: (optional) Use a specific field as the title.
+ """
+
+ def __init__(self, *, field=None):
+ """
+ Initialize a ComponentSettingsFieldsShownTitle object.
+
+ :param str field: (optional) Use a specific field as the title.
+ """
+ self.field = field
+
+ @classmethod
+ def _from_dict(cls, _dict):
+ """Initialize a ComponentSettingsFieldsShownTitle object from a json dictionary."""
+ args = {}
+ valid_keys = ['field']
+ bad_keys = set(_dict.keys()) - set(valid_keys)
+ if bad_keys:
+ raise ValueError(
+ 'Unrecognized keys detected in dictionary for class ComponentSettingsFieldsShownTitle: '
+ + ', '.join(bad_keys))
+ if 'field' in _dict:
+ args['field'] = _dict.get('field')
+ return cls(**args)
+
+ def _to_dict(self):
+ """Return a json dictionary representing this model."""
+ _dict = {}
+ if hasattr(self, 'field') and self.field is not None:
+ _dict['field'] = self.field
+ return _dict
+
+ def __str__(self):
+ """Return a `str` version of this ComponentSettingsFieldsShownTitle object."""
+ return json.dumps(self._to_dict(), indent=2)
+
+ def __eq__(self, other):
+ """Return `true` when self and other are equal, false otherwise."""
+ if not isinstance(other, self.__class__):
+ return False
+ return self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ """Return `true` when self and other are not equal, false otherwise."""
+ return not self == other
+
+
+class ComponentSettingsResponse():
+ """
+ A response containing the default component settings.
+
+ :attr ComponentSettingsFieldsShown fields_shown: (optional) Fields shown in the
+ results section of the UI.
+ :attr bool autocomplete: (optional) Whether or not autocomplete is enabled.
+ :attr bool structured_search: (optional) Whether or not structured search is
+ enabled.
+ :attr int results_per_page: (optional) Number of results shown per page.
+
+ :attr list[ComponentSettingsAggregation] aggregations: (optional) A list of
+ component setting aggregations.
+ """
+
+ def __init__(self,
+ *,
+ fields_shown=None,
+ autocomplete=None,
+ structured_search=None,
+ results_per_page=None,
+ aggregations=None):
+ """
+ Initialize a ComponentSettingsResponse object.
+
+ :param ComponentSettingsFieldsShown fields_shown: (optional) Fields shown
+ in the results section of the UI.
+ :param bool autocomplete: (optional) Whether or not autocomplete is
+ enabled.
+ :param bool structured_search: (optional) Whether or not structured search
+ is enabled.
+ :param int results_per_page: (optional) Number of results shown per page.
+ :param list[ComponentSettingsAggregation] aggregations: (optional) A list
+ of component setting aggregations.
+ """
+ self.fields_shown = fields_shown
+ self.autocomplete = autocomplete
+ self.structured_search = structured_search
+ self.results_per_page = results_per_page
+ self.aggregations = aggregations
+
+ @classmethod
+ def _from_dict(cls, _dict):
+ """Initialize a ComponentSettingsResponse object from a json dictionary."""
+ args = {}
+ valid_keys = [
+ 'fields_shown', 'autocomplete', 'structured_search',
+ 'results_per_page', 'aggregations'
+ ]
+ bad_keys = set(_dict.keys()) - set(valid_keys)
+ if bad_keys:
+ raise ValueError(
+ 'Unrecognized keys detected in dictionary for class ComponentSettingsResponse: '
+ + ', '.join(bad_keys))
+ if 'fields_shown' in _dict:
+ args['fields_shown'] = ComponentSettingsFieldsShown._from_dict(
+ _dict.get('fields_shown'))
+ if 'autocomplete' in _dict:
+ args['autocomplete'] = _dict.get('autocomplete')
+ if 'structured_search' in _dict:
+ args['structured_search'] = _dict.get('structured_search')
+ if 'results_per_page' in _dict:
+ args['results_per_page'] = _dict.get('results_per_page')
+ if 'aggregations' in _dict:
+ args['aggregations'] = [
+ ComponentSettingsAggregation._from_dict(x)
+ for x in (_dict.get('aggregations'))
+ ]
+ return cls(**args)
+
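The model classes in this diff all follow the same hand-rolled serialization pattern: `_from_dict` rejects any key the model does not define before constructing the object, and `_to_dict` emits only the attributes that are actually set. A minimal standalone sketch of that pattern (the `Widget` class here is hypothetical, not part of the SDK):

```python
import json


class Widget:
    """Minimal model mirroring the SDK's _from_dict/_to_dict pattern."""

    def __init__(self, *, widget_id=None, name=None):
        self.widget_id = widget_id
        self.name = name

    @classmethod
    def _from_dict(cls, _dict):
        # Reject unrecognized keys up front, as the SDK models do.
        valid_keys = {'widget_id', 'name'}
        bad_keys = set(_dict.keys()) - valid_keys
        if bad_keys:
            raise ValueError(
                'Unrecognized keys detected in dictionary for class Widget: '
                + ', '.join(bad_keys))
        return cls(widget_id=_dict.get('widget_id'), name=_dict.get('name'))

    def _to_dict(self):
        # Emit only the attributes that are set, so round trips are lossless.
        all_attrs = {'widget_id': self.widget_id, 'name': self.name}
        return {k: v for k, v in all_attrs.items() if v is not None}


# A dict with only valid keys survives a round trip unchanged.
w = Widget._from_dict({'widget_id': 'w1'})
print(json.dumps(w._to_dict()))  # prints {"widget_id": "w1"}
```

The strict-key check trades forward compatibility for early error detection: a response field added server-side would raise here rather than be silently dropped.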
def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'fields_shown') and self.fields_shown is not None: + _dict['fields_shown'] = self.fields_shown._to_dict() + if hasattr(self, 'autocomplete') and self.autocomplete is not None: + _dict['autocomplete'] = self.autocomplete + if hasattr(self, + 'structured_search') and self.structured_search is not None: + _dict['structured_search'] = self.structured_search + if hasattr(self, + 'results_per_page') and self.results_per_page is not None: + _dict['results_per_page'] = self.results_per_page + if hasattr(self, 'aggregations') and self.aggregations is not None: + _dict['aggregations'] = [x._to_dict() for x in self.aggregations] + return _dict + + def __str__(self): + """Return a `str` version of this ComponentSettingsResponse object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class DeleteDocumentResponse(): + """ + Information returned when a document is deleted. + + :attr str document_id: (optional) The unique identifier of the document. + :attr str status: (optional) Status of the document. A deleted document has the + status deleted. + """ + + def __init__(self, *, document_id=None, status=None): + """ + Initialize a DeleteDocumentResponse object. + + :param str document_id: (optional) The unique identifier of the document. + :param str status: (optional) Status of the document. A deleted document + has the status deleted. 
+ """ + self.document_id = document_id + self.status = status + + @classmethod + def _from_dict(cls, _dict): + """Initialize a DeleteDocumentResponse object from a json dictionary.""" + args = {} + valid_keys = ['document_id', 'status'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class DeleteDocumentResponse: ' + + ', '.join(bad_keys)) + if 'document_id' in _dict: + args['document_id'] = _dict.get('document_id') + if 'status' in _dict: + args['status'] = _dict.get('status') + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'document_id') and self.document_id is not None: + _dict['document_id'] = self.document_id + if hasattr(self, 'status') and self.status is not None: + _dict['status'] = self.status + return _dict + + def __str__(self): + """Return a `str` version of this DeleteDocumentResponse object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + class StatusEnum(Enum): + """ + Status of the document. A deleted document has the status deleted. + """ + DELETED = "deleted" + + +class DocumentAccepted(): + """ + Information returned after an uploaded document is accepted. + + :attr str document_id: (optional) The unique identifier of the ingested + document. + :attr str status: (optional) Status of the document in the ingestion process. A + status of `processing` is returned for documents that are ingested with a + *version* date before `2019-01-01`. The `pending` status is returned for all + others. 
+ """ + + def __init__(self, *, document_id=None, status=None): + """ + Initialize a DocumentAccepted object. + + :param str document_id: (optional) The unique identifier of the ingested + document. + :param str status: (optional) Status of the document in the ingestion + process. A status of `processing` is returned for documents that are + ingested with a *version* date before `2019-01-01`. The `pending` status is + returned for all others. + """ + self.document_id = document_id + self.status = status + + @classmethod + def _from_dict(cls, _dict): + """Initialize a DocumentAccepted object from a json dictionary.""" + args = {} + valid_keys = ['document_id', 'status'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class DocumentAccepted: ' + + ', '.join(bad_keys)) + if 'document_id' in _dict: + args['document_id'] = _dict.get('document_id') + if 'status' in _dict: + args['status'] = _dict.get('status') + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'document_id') and self.document_id is not None: + _dict['document_id'] = self.document_id + if hasattr(self, 'status') and self.status is not None: + _dict['status'] = self.status + return _dict + + def __str__(self): + """Return a `str` version of this DocumentAccepted object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + class StatusEnum(Enum): + """ + Status of the document in the ingestion process. A status of `processing` is + returned for documents that are ingested with a *version* date before + `2019-01-01`. 
The `pending` status is returned for all others. + """ + PROCESSING = "processing" + PENDING = "pending" + + +class DocumentAttribute(): + """ + List of document attributes. + + :attr str type: (optional) The type of attribute. + :attr str text: (optional) The text associated with the attribute. + :attr TableElementLocation location: (optional) The numeric location of the + identified element in the document, represented with two integers labeled + `begin` and `end`. + """ + + def __init__(self, *, type=None, text=None, location=None): + """ + Initialize a DocumentAttribute object. + + :param str type: (optional) The type of attribute. + :param str text: (optional) The text associated with the attribute. + :param TableElementLocation location: (optional) The numeric location of + the identified element in the document, represented with two integers + labeled `begin` and `end`. + """ + self.type = type + self.text = text + self.location = location + + @classmethod + def _from_dict(cls, _dict): + """Initialize a DocumentAttribute object from a json dictionary.""" + args = {} + valid_keys = ['type', 'text', 'location'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class DocumentAttribute: ' + + ', '.join(bad_keys)) + if 'type' in _dict: + args['type'] = _dict.get('type') + if 'text' in _dict: + args['text'] = _dict.get('text') + if 'location' in _dict: + args['location'] = TableElementLocation._from_dict( + _dict.get('location')) + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'type') and self.type is not None: + _dict['type'] = self.type + if hasattr(self, 'text') and self.text is not None: + _dict['text'] = self.text + if hasattr(self, 'location') and self.location is not None: + _dict['location'] = self.location._to_dict() + return _dict + + def __str__(self): + """Return a `str` version of this 
DocumentAttribute object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class Field(): + """ + Object containing field details. + + :attr str field: (optional) The name of the field. + :attr str type: (optional) The type of the field. + :attr str collection_id: (optional) The collection Id of the collection where + the field was found. + """ + + def __init__(self, *, field=None, type=None, collection_id=None): + """ + Initialize a Field object. + + :param str field: (optional) The name of the field. + :param str type: (optional) The type of the field. + :param str collection_id: (optional) The collection Id of the collection + where the field was found. + """ + self.field = field + self.type = type + self.collection_id = collection_id + + @classmethod + def _from_dict(cls, _dict): + """Initialize a Field object from a json dictionary.""" + args = {} + valid_keys = ['field', 'type', 'collection_id'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class Field: ' + + ', '.join(bad_keys)) + if 'field' in _dict: + args['field'] = _dict.get('field') + if 'type' in _dict: + args['type'] = _dict.get('type') + if 'collection_id' in _dict: + args['collection_id'] = _dict.get('collection_id') + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'field') and self.field is not None: + _dict['field'] = self.field + if hasattr(self, 'type') and self.type is not None: + _dict['type'] = self.type + if hasattr(self, 'collection_id') and self.collection_id is not None: + 
_dict['collection_id'] = self.collection_id + return _dict + + def __str__(self): + """Return a `str` version of this Field object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + class TypeEnum(Enum): + """ + The type of the field. + """ + NESTED = "nested" + STRING = "string" + DATE = "date" + LONG = "long" + INTEGER = "integer" + SHORT = "short" + BYTE = "byte" + DOUBLE = "double" + FLOAT = "float" + BOOLEAN = "boolean" + BINARY = "binary" + + +class ListCollectionsResponse(): + """ + Response object containing an array of collection details. + + :attr list[Collection] collections: (optional) An array containing information + about each collection in the project. + """ + + def __init__(self, *, collections=None): + """ + Initialize a ListCollectionsResponse object. + + :param list[Collection] collections: (optional) An array containing + information about each collection in the project. 
+        """
+        self.collections = collections
+
+    @classmethod
+    def _from_dict(cls, _dict):
+        """Initialize a ListCollectionsResponse object from a json dictionary."""
+        args = {}
+        valid_keys = ['collections']
+        bad_keys = set(_dict.keys()) - set(valid_keys)
+        if bad_keys:
+            raise ValueError(
+                'Unrecognized keys detected in dictionary for class ListCollectionsResponse: '
+                + ', '.join(bad_keys))
+        if 'collections' in _dict:
+            args['collections'] = [
+                Collection._from_dict(x) for x in (_dict.get('collections'))
+            ]
+        return cls(**args)
+
+    def _to_dict(self):
+        """Return a json dictionary representing this model."""
+        _dict = {}
+        if hasattr(self, 'collections') and self.collections is not None:
+            _dict['collections'] = [x._to_dict() for x in self.collections]
+        return _dict
+
+    def __str__(self):
+        """Return a `str` version of this ListCollectionsResponse object."""
+        return json.dumps(self._to_dict(), indent=2)
+
+    def __eq__(self, other):
+        """Return `true` when self and other are equal, false otherwise."""
+        if not isinstance(other, self.__class__):
+            return False
+        return self.__dict__ == other.__dict__
+
+    def __ne__(self, other):
+        """Return `true` when self and other are not equal, false otherwise."""
+        return not self == other
+
+
+class ListFieldsResponse():
+    """
+    The list of fetched fields.
+    The fields are returned using a fully qualified name format; however, the format
+    differs slightly from that used by the query operations.
+    * Fields which contain nested objects are assigned a type of "nested".
+    * Fields which belong to a nested object are prefixed with `.properties` (for
+    example, `warnings.properties.severity` means that the `warnings` object has a
+    property called `severity`).
+
+    :attr list[Field] fields: (optional) An array containing information about each
+    field in the collections.
+    """
+
+    def __init__(self, *, fields=None):
+        """
+        Initialize a ListFieldsResponse object.
+ + :param list[Field] fields: (optional) An array containing information about + each field in the collections. + """ + self.fields = fields + + @classmethod + def _from_dict(cls, _dict): + """Initialize a ListFieldsResponse object from a json dictionary.""" + args = {} + valid_keys = ['fields'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class ListFieldsResponse: ' + + ', '.join(bad_keys)) + if 'fields' in _dict: + args['fields'] = [ + Field._from_dict(x) for x in (_dict.get('fields')) + ] + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'fields') and self.fields is not None: + _dict['fields'] = [x._to_dict() for x in self.fields] + return _dict + + def __str__(self): + """Return a `str` version of this ListFieldsResponse object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class Notice(): + """ + A notice produced for the collection. + + :attr str notice_id: (optional) Identifies the notice. Many notices might have + the same ID. This field exists so that user applications can programmatically + identify a notice and take automatic corrective action. 
Typical notice IDs
+    include: `index_failed`, `index_failed_too_many_requests`,
+    `index_failed_incompatible_field`, `index_failed_cluster_unavailable`,
+    `ingestion_timeout`, `ingestion_error`, `bad_request`, `internal_error`,
+    `missing_model`, `unsupported_model`,
+    `smart_document_understanding_failed_incompatible_field`,
+    `smart_document_understanding_failed_internal_error`,
+    `smart_document_understanding_failed_warning`,
+    `smart_document_understanding_page_error`,
+    `smart_document_understanding_page_warning`. **Note:** This is not a complete
+    list; other values might be returned.
+    :attr datetime created: (optional) The creation date of the collection in the
+    format yyyy-MM-dd'T'HH:mm:ss.SSS'Z'.
+    :attr str document_id: (optional) Unique identifier of the document.
+    :attr str collection_id: (optional) Unique identifier of the collection.
+    :attr str query_id: (optional) Unique identifier of the query used for relevance
+    training.
+    :attr str severity: (optional) Severity level of the notice.
+    :attr str step: (optional) Ingestion or training step in which the notice
+    occurred.
+    :attr str description: (optional) The description of the notice.
+    """
+
+    def __init__(self,
+                 *,
+                 notice_id=None,
+                 created=None,
+                 document_id=None,
+                 collection_id=None,
+                 query_id=None,
+                 severity=None,
+                 step=None,
+                 description=None):
+        """
+        Initialize a Notice object.
+
+        :param str notice_id: (optional) Identifies the notice. Many notices might
+        have the same ID. This field exists so that user applications can
+        programmatically identify a notice and take automatic corrective action.
+        Typical notice IDs include: `index_failed`,
+        `index_failed_too_many_requests`, `index_failed_incompatible_field`,
+        `index_failed_cluster_unavailable`, `ingestion_timeout`, `ingestion_error`,
+        `bad_request`, `internal_error`, `missing_model`, `unsupported_model`,
+        `smart_document_understanding_failed_incompatible_field`,
+        `smart_document_understanding_failed_internal_error`,
+        `smart_document_understanding_failed_warning`,
+        `smart_document_understanding_page_error`,
+        `smart_document_understanding_page_warning`. **Note:** This is not a
+        complete list; other values might be returned.
+        :param datetime created: (optional) The creation date of the collection in
+        the format yyyy-MM-dd'T'HH:mm:ss.SSS'Z'.
+        :param str document_id: (optional) Unique identifier of the document.
+        :param str collection_id: (optional) Unique identifier of the collection.
+        :param str query_id: (optional) Unique identifier of the query used for
+        relevance training.
+        :param str severity: (optional) Severity level of the notice.
+        :param str step: (optional) Ingestion or training step in which the notice
+        occurred.
+        :param str description: (optional) The description of the notice.
+ """ + self.notice_id = notice_id + self.created = created + self.document_id = document_id + self.collection_id = collection_id + self.query_id = query_id + self.severity = severity + self.step = step + self.description = description + + @classmethod + def _from_dict(cls, _dict): + """Initialize a Notice object from a json dictionary.""" + args = {} + valid_keys = [ + 'notice_id', 'created', 'document_id', 'collection_id', 'query_id', + 'severity', 'step', 'description' + ] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class Notice: ' + + ', '.join(bad_keys)) + if 'notice_id' in _dict: + args['notice_id'] = _dict.get('notice_id') + if 'created' in _dict: + args['created'] = string_to_datetime(_dict.get('created')) + if 'document_id' in _dict: + args['document_id'] = _dict.get('document_id') + if 'collection_id' in _dict: + args['collection_id'] = _dict.get('collection_id') + if 'query_id' in _dict: + args['query_id'] = _dict.get('query_id') + if 'severity' in _dict: + args['severity'] = _dict.get('severity') + if 'step' in _dict: + args['step'] = _dict.get('step') + if 'description' in _dict: + args['description'] = _dict.get('description') + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'notice_id') and self.notice_id is not None: + _dict['notice_id'] = self.notice_id + if hasattr(self, 'created') and self.created is not None: + _dict['created'] = datetime_to_string(self.created) + if hasattr(self, 'document_id') and self.document_id is not None: + _dict['document_id'] = self.document_id + if hasattr(self, 'collection_id') and self.collection_id is not None: + _dict['collection_id'] = self.collection_id + if hasattr(self, 'query_id') and self.query_id is not None: + _dict['query_id'] = self.query_id + if hasattr(self, 'severity') and self.severity is not None: + _dict['severity'] = 
self.severity + if hasattr(self, 'step') and self.step is not None: + _dict['step'] = self.step + if hasattr(self, 'description') and self.description is not None: + _dict['description'] = self.description + return _dict + + def __str__(self): + """Return a `str` version of this Notice object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + class SeverityEnum(Enum): + """ + Severity level of the notice. + """ + WARNING = "warning" + ERROR = "error" + + +class QueryAggregation(): + """ + An abstract aggregation type produced by Discovery to analyze the input provided. + + :attr str type: The type of aggregation command used. Options include: term, + histogram, timeslice, nested, filter, min, max, sum, average, unique_count, and + top_hits. + """ + + def __init__(self, type): + """ + Initialize a QueryAggregation object. + + :param str type: The type of aggregation command used. Options include: + term, histogram, timeslice, nested, filter, min, max, sum, average, + unique_count, and top_hits. 
+ """ + self.type = type + + @classmethod + def _from_dict(cls, _dict): + """Initialize a QueryAggregation object from a json dictionary.""" + args = {} + valid_keys = ['type'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class QueryAggregation: ' + + ', '.join(bad_keys)) + if 'type' in _dict: + args['type'] = _dict.get('type') + else: + raise ValueError( + 'Required property \'type\' not present in QueryAggregation JSON' + ) + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'type') and self.type is not None: + _dict['type'] = self.type + return _dict + + def __str__(self): + """Return a `str` version of this QueryAggregation object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class QueryCalculationAggregation(): + """ + Returns a scalar calculation across all documents for the field specified. Possible + calculations include min, max, sum, average, and unique_count. + + :attr str field: The field to perform the calculation on. + :attr float value: (optional) The value of the calculation. + """ + + def __init__(self, type, field, *, value=None): + """ + Initialize a QueryCalculationAggregation object. + + :param str type: The type of aggregation command used. Options include: + term, histogram, timeslice, nested, filter, min, max, sum, average, + unique_count, and top_hits. + :param str field: The field to perform the calculation on. + :param float value: (optional) The value of the calculation. 
+        """
+        self.type = type
+        self.field = field
+        self.value = value
+
+    @classmethod
+    def _from_dict(cls, _dict):
+        """Initialize a QueryCalculationAggregation object from a json dictionary."""
+        args = {}
+        valid_keys = ['type', 'field', 'value']
+        bad_keys = set(_dict.keys()) - set(valid_keys)
+        if bad_keys:
+            raise ValueError(
+                'Unrecognized keys detected in dictionary for class QueryCalculationAggregation: '
+                + ', '.join(bad_keys))
+        if 'type' in _dict:
+            args['type'] = _dict.get('type')
+        else:
+            raise ValueError(
+                'Required property \'type\' not present in QueryCalculationAggregation JSON'
+            )
+        if 'field' in _dict:
+            args['field'] = _dict.get('field')
+        else:
+            raise ValueError(
+                'Required property \'field\' not present in QueryCalculationAggregation JSON'
+            )
+        if 'value' in _dict:
+            args['value'] = _dict.get('value')
+        return cls(**args)
+
+    def _to_dict(self):
+        """Return a json dictionary representing this model."""
+        _dict = {}
+        if hasattr(self, 'type') and self.type is not None:
+            _dict['type'] = self.type
+        if hasattr(self, 'field') and self.field is not None:
+            _dict['field'] = self.field
+        if hasattr(self, 'value') and self.value is not None:
+            _dict['value'] = self.value
+        return _dict
+
+    def __str__(self):
+        """Return a `str` version of this QueryCalculationAggregation object."""
+        return json.dumps(self._to_dict(), indent=2)
+
+    def __eq__(self, other):
+        """Return `true` when self and other are equal, false otherwise."""
+        if not isinstance(other, self.__class__):
+            return False
+        return self.__dict__ == other.__dict__
+
+    def __ne__(self, other):
+        """Return `true` when self and other are not equal, false otherwise."""
+        return not self == other
+
+
+class QueryFilterAggregation():
+    """
+    A modifier that will narrow down the document set of the sub aggregations it precedes.
+
+    :attr str match: The filter written in Discovery Query Language syntax applied
+    to the documents before sub aggregations are run.
+    :attr int matching_results: Number of documents matching the filter.
+    :attr list[QueryAggregation] aggregations: (optional) An array of sub
+    aggregations.
+    """
+
+    def __init__(self, type, match, matching_results, *, aggregations=None):
+        """
+        Initialize a QueryFilterAggregation object.
+
+        :param str type: The type of aggregation command used. Options include:
+        term, histogram, timeslice, nested, filter, min, max, sum, average,
+        unique_count, and top_hits.
+        :param str match: The filter written in Discovery Query Language syntax
+        applied to the documents before sub aggregations are run.
+        :param int matching_results: Number of documents matching the filter.
+        :param list[QueryAggregation] aggregations: (optional) An array of sub
+        aggregations.
+        """
+        self.type = type
+        self.match = match
+        self.matching_results = matching_results
+        self.aggregations = aggregations
+
+    @classmethod
+    def _from_dict(cls, _dict):
+        """Initialize a QueryFilterAggregation object from a json dictionary."""
+        args = {}
+        valid_keys = ['type', 'match', 'matching_results', 'aggregations']
+        bad_keys = set(_dict.keys()) - set(valid_keys)
+        if bad_keys:
+            raise ValueError(
+                'Unrecognized keys detected in dictionary for class QueryFilterAggregation: '
+                + ', '.join(bad_keys))
+        if 'type' in _dict:
+            args['type'] = _dict.get('type')
+        else:
+            raise ValueError(
+                'Required property \'type\' not present in QueryFilterAggregation JSON'
+            )
+        if 'match' in _dict:
+            args['match'] = _dict.get('match')
+        else:
+            raise ValueError(
+                'Required property \'match\' not present in QueryFilterAggregation JSON'
+            )
+        if 'matching_results' in _dict:
+            args['matching_results'] = _dict.get('matching_results')
+        else:
+            raise ValueError(
+                'Required property \'matching_results\' not present in QueryFilterAggregation JSON'
+            )
+        if 'aggregations' in _dict:
+            args['aggregations'] = [
+                QueryAggregation._from_dict(x)
+                for x in (_dict.get('aggregations'))
+            ]
+        return cls(**args)
+
+    def _to_dict(self):
+        """Return a json dictionary representing this model."""
+        _dict = {}
+        if hasattr(self, 'type') and self.type is not None:
+            _dict['type'] = self.type
+        if hasattr(self, 'match') and self.match is not None:
+            _dict['match'] = self.match
+        if hasattr(self,
+                   'matching_results') and self.matching_results is not None:
+            _dict['matching_results'] = self.matching_results
+        if hasattr(self, 'aggregations') and self.aggregations is not None:
+            _dict['aggregations'] = [x._to_dict() for x in self.aggregations]
+        return _dict
+
+    def __str__(self):
+        """Return a 
`str` version of this QueryFilterAggregation object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class QueryHistogramAggregation(): + """ + Numeric interval segments to categorize documents by using field values from a single + numeric field to describe the category. + + :attr str field: The numeric field name used to create the histogram. + :attr int interval: The size of the sections the results are split into. + :attr list[QueryHistogramAggregationResult] results: (optional) Array of numeric + intervals. + """ + + def __init__(self, type, field, interval, *, results=None): + """ + Initialize a QueryHistogramAggregation object. + + :param str type: The type of aggregation command used. Options include: + term, histogram, timeslice, nested, filter, min, max, sum, average, + unique_count, and top_hits. + :param str field: The numeric field name used to create the histogram. + :param int interval: The size of the sections the results are split into. + :param list[QueryHistogramAggregationResult] results: (optional) Array of + numeric intervals. 
+        """
+        self.type = type
+        self.field = field
+        self.interval = interval
+        self.results = results
+
+    @classmethod
+    def _from_dict(cls, _dict):
+        """Initialize a QueryHistogramAggregation object from a json dictionary."""
+        args = {}
+        valid_keys = ['type', 'field', 'interval', 'results']
+        bad_keys = set(_dict.keys()) - set(valid_keys)
+        if bad_keys:
+            raise ValueError(
+                'Unrecognized keys detected in dictionary for class QueryHistogramAggregation: '
+                + ', '.join(bad_keys))
+        if 'type' in _dict:
+            args['type'] = _dict.get('type')
+        else:
+            raise ValueError(
+                'Required property \'type\' not present in QueryHistogramAggregation JSON'
+            )
+        if 'field' in _dict:
+            args['field'] = _dict.get('field')
+        else:
+            raise ValueError(
+                'Required property \'field\' not present in QueryHistogramAggregation JSON'
+            )
+        if 'interval' in _dict:
+            args['interval'] = _dict.get('interval')
+        else:
+            raise ValueError(
+                'Required property \'interval\' not present in QueryHistogramAggregation JSON'
+            )
+        if 'results' in _dict:
+            args['results'] = [
+                QueryHistogramAggregationResult._from_dict(x)
+                for x in (_dict.get('results'))
+            ]
+        return cls(**args)
+
+    def _to_dict(self):
+        """Return a json dictionary representing this model."""
+        _dict = {}
+        if hasattr(self, 'type') and self.type is not None:
+            _dict['type'] = self.type
+        if hasattr(self, 'field') and self.field is not None:
+            _dict['field'] = self.field
+        if hasattr(self, 'interval') and self.interval is not None:
+            _dict['interval'] = self.interval
+        if hasattr(self, 'results') and self.results is not None:
+            _dict['results'] = [x._to_dict() for x in self.results]
+        return _dict
+
+    def __str__(self):
+        """Return a `str` version of this QueryHistogramAggregation object."""
+        return json.dumps(self._to_dict(), indent=2)
+
+    def __eq__(self, other):
+        """Return `true` when self and other are equal, false otherwise."""
+        if not isinstance(other, self.__class__):
+            return False
+        return self.__dict__ == other.__dict__
+
+    def __ne__(self, other):
+        """Return `true` when self and other are not equal, false otherwise."""
+        return not self == other
+
+
+class QueryHistogramAggregationResult():
+    """
+    Histogram numeric interval result.
+ + :attr int key: The value of the upper bound for the numeric segment. + :attr int matching_results: Number of documents with the specified key as the + upper bound. + :attr list[QueryAggregation] aggregations: (optional) An array of sub + aggregations. + """ + + def __init__(self, key, matching_results, *, aggregations=None): + """ + Initialize a QueryHistogramAggregationResult object. + + :param int key: The value of the upper bound for the numeric segment. + :param int matching_results: Number of documents with the specified key as + the upper bound. + :param list[QueryAggregation] aggregations: (optional) An array of sub + aggregations. + """ + self.key = key + self.matching_results = matching_results + self.aggregations = aggregations + + @classmethod + def _from_dict(cls, _dict): + """Initialize a QueryHistogramAggregationResult object from a json dictionary.""" + args = {} + valid_keys = ['key', 'matching_results', 'aggregations'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class QueryHistogramAggregationResult: ' + + ', '.join(bad_keys)) + if 'key' in _dict: + args['key'] = _dict.get('key') + else: + raise ValueError( + 'Required property \'key\' not present in QueryHistogramAggregationResult JSON' + ) + if 'matching_results' in _dict: + args['matching_results'] = _dict.get('matching_results') + else: + raise ValueError( + 'Required property \'matching_results\' not present in QueryHistogramAggregationResult JSON' + ) + if 'aggregations' in _dict: + args['aggregations'] = [ + QueryAggregation._from_dict(x) + for x in (_dict.get('aggregations')) + ] + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'key') and self.key is not None: + _dict['key'] = self.key + if hasattr(self, + 'matching_results') and self.matching_results is not None: + _dict['matching_results'] = self.matching_results 
+        if hasattr(self, 'aggregations') and self.aggregations is not None:
+            _dict['aggregations'] = [x._to_dict() for x in self.aggregations]
+        return _dict
+
+    def __str__(self):
+        """Return a `str` version of this QueryHistogramAggregationResult object."""
+        return json.dumps(self._to_dict(), indent=2)
+
+    def __eq__(self, other):
+        """Return `true` when self and other are equal, false otherwise."""
+        if not isinstance(other, self.__class__):
+            return False
+        return self.__dict__ == other.__dict__
+
+    def __ne__(self, other):
+        """Return `true` when self and other are not equal, false otherwise."""
+        return not self == other
+
+
+class QueryLargePassages():
+    """
+    Configuration for passage retrieval.
+
+    :attr bool enabled: (optional) A passages query that returns the most relevant
+    passages from the results.
+    :attr bool per_document: (optional) When `true`, passages will be returned
+    within their respective result.
+    :attr int max_per_document: (optional) Maximum number of passages to return per
+    result.
+    :attr list[str] fields: (optional) A list of fields that passages are drawn
+    from. If this parameter is not specified, then all top-level fields are included.
+    :attr int count: (optional) The maximum number of passages to return. The search
+    returns fewer passages if the requested total is not found. The default is `10`.
+    The maximum is `100`.
+    :attr int characters: (optional) The approximate number of characters that any
+    one passage will have.
+    """
+
+    def __init__(self,
+                 *,
+                 enabled=None,
+                 per_document=None,
+                 max_per_document=None,
+                 fields=None,
+                 count=None,
+                 characters=None):
+        """
+        Initialize a QueryLargePassages object.
+
+        :param bool enabled: (optional) A passages query that returns the most
+        relevant passages from the results.
+        :param bool per_document: (optional) When `true`, passages will be returned
+        within their respective result.
+        :param int max_per_document: (optional) Maximum number of passages to
+        return per result.
+        :param list[str] fields: (optional) A list of fields that passages are
+        drawn from. If this parameter is not specified, then all top-level fields
+        are included.
+        :param int count: (optional) The maximum number of passages to return. The
+        search returns fewer passages if the requested total is not found. The
+        default is `10`. The maximum is `100`.
+        :param int characters: (optional) The approximate number of characters that
+        any one passage will have.
+        """
+        self.enabled = enabled
+        self.per_document = per_document
+        self.max_per_document = max_per_document
+        self.fields = fields
+        self.count = count
+        self.characters = characters
+
+    @classmethod
+    def _from_dict(cls, _dict):
+        """Initialize a QueryLargePassages object from a json dictionary."""
+        args = {}
+        valid_keys = [
+            'enabled', 'per_document', 'max_per_document', 'fields', 'count',
+            'characters'
+        ]
+        bad_keys = set(_dict.keys()) - set(valid_keys)
+        if bad_keys:
+            raise ValueError(
+                'Unrecognized keys detected in dictionary for class QueryLargePassages: '
+                + ', '.join(bad_keys))
+        if 'enabled' in _dict:
+            args['enabled'] = _dict.get('enabled')
+        if 'per_document' in _dict:
+            args['per_document'] = _dict.get('per_document')
+        if 'max_per_document' in _dict:
+            args['max_per_document'] = _dict.get('max_per_document')
+        if 'fields' in _dict:
+            args['fields'] = _dict.get('fields')
+        if 'count' in _dict:
+            args['count'] = _dict.get('count')
+        if 'characters' in _dict:
+            args['characters'] = _dict.get('characters')
+        return cls(**args)
+
+    def _to_dict(self):
+        """Return a json dictionary representing this model."""
+        _dict = {}
+        if hasattr(self, 'enabled') and self.enabled is not None:
+            _dict['enabled'] = self.enabled
+        if hasattr(self, 'per_document') and self.per_document is not None:
+            _dict['per_document'] = self.per_document
+        if hasattr(self,
+                   'max_per_document') and self.max_per_document is not None:
+            _dict['max_per_document'] = self.max_per_document
+        if hasattr(self, 'fields') and 
self.fields is not None: + _dict['fields'] = self.fields + if hasattr(self, 'count') and self.count is not None: + _dict['count'] = self.count + if hasattr(self, 'characters') and self.characters is not None: + _dict['characters'] = self.characters + return _dict + + def __str__(self): + """Return a `str` version of this QueryLargePassages object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class QueryLargeSuggestedRefinements(): + """ + Configuration for suggested refinements. + + :attr bool enabled: (optional) Whether to perform suggested refinements. + :attr int count: (optional) Maximum number of suggested refinements texts to be + returned. The default is `10`. The maximum is `100`. + """ + + def __init__(self, *, enabled=None, count=None): + """ + Initialize a QueryLargeSuggestedRefinements object. + + :param bool enabled: (optional) Whether to perform suggested refinements. + :param int count: (optional) Maximum number of suggested refinements texts + to be returned. The default is `10`. The maximum is `100`. 
+ """ + self.enabled = enabled + self.count = count + + @classmethod + def _from_dict(cls, _dict): + """Initialize a QueryLargeSuggestedRefinements object from a json dictionary.""" + args = {} + valid_keys = ['enabled', 'count'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class QueryLargeSuggestedRefinements: ' + + ', '.join(bad_keys)) + if 'enabled' in _dict: + args['enabled'] = _dict.get('enabled') + if 'count' in _dict: + args['count'] = _dict.get('count') + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'enabled') and self.enabled is not None: + _dict['enabled'] = self.enabled + if hasattr(self, 'count') and self.count is not None: + _dict['count'] = self.count + return _dict + + def __str__(self): + """Return a `str` version of this QueryLargeSuggestedRefinements object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class QueryLargeTableResults(): + """ + Configuration for table retrieval. + + :attr bool enabled: (optional) Whether to enable table retrieval. + :attr int count: (optional) Maximum number of tables to return. + """ + + def __init__(self, *, enabled=None, count=None): + """ + Initialize a QueryLargeTableResults object. + + :param bool enabled: (optional) Whether to enable table retrieval. + :param int count: (optional) Maximum number of tables to return. 
+ """ + self.enabled = enabled + self.count = count + + @classmethod + def _from_dict(cls, _dict): + """Initialize a QueryLargeTableResults object from a json dictionary.""" + args = {} + valid_keys = ['enabled', 'count'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class QueryLargeTableResults: ' + + ', '.join(bad_keys)) + if 'enabled' in _dict: + args['enabled'] = _dict.get('enabled') + if 'count' in _dict: + args['count'] = _dict.get('count') + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'enabled') and self.enabled is not None: + _dict['enabled'] = self.enabled + if hasattr(self, 'count') and self.count is not None: + _dict['count'] = self.count + return _dict + + def __str__(self): + """Return a `str` version of this QueryLargeTableResults object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class QueryNestedAggregation(): + """ + A restriction that alter the document set used for sub aggregations it precedes to + nested documents found in the field specified. + + :attr str path: The path to the document field to scope sub aggregations to. + :attr int matching_results: Number of nested documents found in the specified + field. + :attr list[QueryAggregation] aggregations: (optional) An array of sub + aggregations. + """ + + def __init__(self, type, path, matching_results, *, aggregations=None): + """ + Initialize a QueryNestedAggregation object. + + :param str type: The type of aggregation command used. 
Options include: + term, histogram, timeslice, nested, filter, min, max, sum, average, + unique_count, and top_hits. + :param str path: The path to the document field to scope sub aggregations + to. + :param int matching_results: Number of nested documents found in the + specified field. + :param list[QueryAggregation] aggregations: (optional) An array of sub + aggregations. + """ + self.path = path + self.matching_results = matching_results + self.aggregations = aggregations + + @classmethod + def _from_dict(cls, _dict): + """Initialize a QueryNestedAggregation object from a json dictionary.""" + args = {} + valid_keys = ['path', 'matching_results', 'aggregations'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class QueryNestedAggregation: ' + + ', '.join(bad_keys)) + if 'path' in _dict: + args['path'] = _dict.get('path') + else: + raise ValueError( + 'Required property \'path\' not present in QueryNestedAggregation JSON' + ) + if 'matching_results' in _dict: + args['matching_results'] = _dict.get('matching_results') + else: + raise ValueError( + 'Required property \'matching_results\' not present in QueryNestedAggregation JSON' + ) + if 'aggregations' in _dict: + args['aggregations'] = [ + QueryAggregation._from_dict(x) + for x in (_dict.get('aggregations')) + ] + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'path') and self.path is not None: + _dict['path'] = self.path + if hasattr(self, + 'matching_results') and self.matching_results is not None: + _dict['matching_results'] = self.matching_results + if hasattr(self, 'aggregations') and self.aggregations is not None: + _dict['aggregations'] = [x._to_dict() for x in self.aggregations] + return _dict + + def __str__(self): + """Return a `str` version of this QueryNestedAggregation object.""" + return json.dumps(self._to_dict(), indent=2) + + 
def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class QueryNoticesResponse(): + """ + Object containing notice query results. + + :attr int matching_results: (optional) The number of matching results. + :attr list[Notice] notices: (optional) Array of document results that match the + query. + """ + + def __init__(self, *, matching_results=None, notices=None): + """ + Initialize a QueryNoticesResponse object. + + :param int matching_results: (optional) The number of matching results. + :param list[Notice] notices: (optional) Array of document results that + match the query. + """ + self.matching_results = matching_results + self.notices = notices + + @classmethod + def _from_dict(cls, _dict): + """Initialize a QueryNoticesResponse object from a json dictionary.""" + args = {} + valid_keys = ['matching_results', 'notices'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class QueryNoticesResponse: ' + + ', '.join(bad_keys)) + if 'matching_results' in _dict: + args['matching_results'] = _dict.get('matching_results') + if 'notices' in _dict: + args['notices'] = [ + Notice._from_dict(x) for x in (_dict.get('notices')) + ] + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, + 'matching_results') and self.matching_results is not None: + _dict['matching_results'] = self.matching_results + if hasattr(self, 'notices') and self.notices is not None: + _dict['notices'] = [x._to_dict() for x in self.notices] + return _dict + + def __str__(self): + """Return a `str` version of this QueryNoticesResponse object.""" + return 
json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class QueryResponse(): + """ + A response containing the documents and aggregations for the query. + + :attr int matching_results: (optional) The number of matching results for the + query. + :attr list[QueryResult] results: (optional) Array of document results for the + query. + :attr list[QueryAggregation] aggregations: (optional) Array of aggregations for + the query. + :attr RetrievalDetails retrieval_details: (optional) An object containing retrieval + type information. + :attr str suggested_query: (optional) Suggested correction to the submitted + **natural_language_query** value. + :attr list[QuerySuggestedRefinement] suggested_refinements: (optional) Array of + suggested refinements. + :attr list[QueryTableResult] table_results: (optional) Array of table results. + """ + + def __init__(self, + *, + matching_results=None, + results=None, + aggregations=None, + retrieval_details=None, + suggested_query=None, + suggested_refinements=None, + table_results=None): + """ + Initialize a QueryResponse object. + + :param int matching_results: (optional) The number of matching results for + the query. + :param list[QueryResult] results: (optional) Array of document results for + the query. + :param list[QueryAggregation] aggregations: (optional) Array of + aggregations for the query. + :param RetrievalDetails retrieval_details: (optional) An object containing + retrieval type information. + :param str suggested_query: (optional) Suggested correction to the + submitted **natural_language_query** value. + :param list[QuerySuggestedRefinement] suggested_refinements: (optional) + Array of suggested refinements.
+ :param list[QueryTableResult] table_results: (optional) Array of table + results. + """ + self.matching_results = matching_results + self.results = results + self.aggregations = aggregations + self.retrieval_details = retrieval_details + self.suggested_query = suggested_query + self.suggested_refinements = suggested_refinements + self.table_results = table_results + + @classmethod + def _from_dict(cls, _dict): + """Initialize a QueryResponse object from a json dictionary.""" + args = {} + valid_keys = [ + 'matching_results', 'results', 'aggregations', 'retrieval_details', + 'suggested_query', 'suggested_refinements', 'table_results' + ] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class QueryResponse: ' + + ', '.join(bad_keys)) + if 'matching_results' in _dict: + args['matching_results'] = _dict.get('matching_results') + if 'results' in _dict: + args['results'] = [ + QueryResult._from_dict(x) for x in (_dict.get('results')) + ] + if 'aggregations' in _dict: + args['aggregations'] = [ + QueryAggregation._from_dict(x) + for x in (_dict.get('aggregations')) + ] + if 'retrieval_details' in _dict: + args['retrieval_details'] = RetrievalDetails._from_dict( + _dict.get('retrieval_details')) + if 'suggested_query' in _dict: + args['suggested_query'] = _dict.get('suggested_query') + if 'suggested_refinements' in _dict: + args['suggested_refinements'] = [ + QuerySuggestedRefinement._from_dict(x) + for x in (_dict.get('suggested_refinements')) + ] + if 'table_results' in _dict: + args['table_results'] = [ + QueryTableResult._from_dict(x) + for x in (_dict.get('table_results')) + ] + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, + 'matching_results') and self.matching_results is not None: + _dict['matching_results'] = self.matching_results + if hasattr(self, 'results') and self.results is not None: + 
_dict['results'] = [x._to_dict() for x in self.results] + if hasattr(self, 'aggregations') and self.aggregations is not None: + _dict['aggregations'] = [x._to_dict() for x in self.aggregations] + if hasattr(self, + 'retrieval_details') and self.retrieval_details is not None: + _dict['retrieval_details'] = self.retrieval_details._to_dict() + if hasattr(self, + 'suggested_query') and self.suggested_query is not None: + _dict['suggested_query'] = self.suggested_query + if hasattr(self, 'suggested_refinements' + ) and self.suggested_refinements is not None: + _dict['suggested_refinements'] = [ + x._to_dict() for x in self.suggested_refinements + ] + if hasattr(self, 'table_results') and self.table_results is not None: + _dict['table_results'] = [x._to_dict() for x in self.table_results] + return _dict + + def __str__(self): + """Return a `str` version of this QueryResponse object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class QueryResult(): + """ + Result document for the specified query. + + :attr str document_id: The unique identifier of the document. + :attr dict metadata: (optional) Metadata of the document. + :attr QueryResultMetadata result_metadata: Metadata of a query result. + :attr list[QueryResultPassage] document_passages: (optional) Passages returned + by Discovery. + """ + + def __init__(self, + document_id, + result_metadata, + *, + metadata=None, + document_passages=None, + **kwargs): + """ + Initialize a QueryResult object. + + :param str document_id: The unique identifier of the document. + :param QueryResultMetadata result_metadata: Metadata of a query result. 
+ :param dict metadata: (optional) Metadata of the document. + :param list[QueryResultPassage] document_passages: (optional) Passages + returned by Discovery. + :param **kwargs: (optional) Any additional properties. + """ + self.document_id = document_id + self.metadata = metadata + self.result_metadata = result_metadata + self.document_passages = document_passages + for _key, _value in kwargs.items(): + setattr(self, _key, _value) + + @classmethod + def _from_dict(cls, _dict): + """Initialize a QueryResult object from a json dictionary.""" + args = {} + xtra = _dict.copy() + if 'document_id' in _dict: + args['document_id'] = _dict.get('document_id') + del xtra['document_id'] + else: + raise ValueError( + 'Required property \'document_id\' not present in QueryResult JSON' + ) + if 'metadata' in _dict: + args['metadata'] = _dict.get('metadata') + del xtra['metadata'] + if 'result_metadata' in _dict: + args['result_metadata'] = QueryResultMetadata._from_dict( + _dict.get('result_metadata')) + del xtra['result_metadata'] + else: + raise ValueError( + 'Required property \'result_metadata\' not present in QueryResult JSON' + ) + if 'document_passages' in _dict: + args['document_passages'] = [ + QueryResultPassage._from_dict(x) + for x in (_dict.get('document_passages')) + ] + del xtra['document_passages'] + args.update(xtra) + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'document_id') and self.document_id is not None: + _dict['document_id'] = self.document_id + if hasattr(self, 'metadata') and self.metadata is not None: + _dict['metadata'] = self.metadata + if hasattr(self, + 'result_metadata') and self.result_metadata is not None: + _dict['result_metadata'] = self.result_metadata._to_dict() + if hasattr(self, + 'document_passages') and self.document_passages is not None: + _dict['document_passages'] = [ + x._to_dict() for x in self.document_passages + ] + if hasattr(self, 
'_additionalProperties'): + for _key in self._additionalProperties: + _value = getattr(self, _key, None) + if _value is not None: + _dict[_key] = _value + return _dict + + def __setattr__(self, name, value): + properties = { + 'document_id', 'metadata', 'result_metadata', 'document_passages' + } + if not hasattr(self, '_additionalProperties'): + super(QueryResult, self).__setattr__('_additionalProperties', set()) + if name not in properties: + self._additionalProperties.add(name) + super(QueryResult, self).__setattr__(name, value) + + def __str__(self): + """Return a `str` version of this QueryResult object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class QueryResultMetadata(): + """ + Metadata of a query result. + + :attr str document_retrieval_source: (optional) The document retrieval source + that produced this search result. + :attr str collection_id: The collection id associated with this training data + set. + :attr float confidence: (optional) The confidence score for the given result. + Calculated based on how relevant the result is estimated to be. confidence can + range from `0.0` to `1.0`. The higher the number, the more relevant the + document. The `confidence` value for a result was calculated using the model + specified in the `document_retrieval_strategy` field of the result set. This + field is only returned if the **natural_language_query** parameter is specified + in the query. + """ + + def __init__(self, + collection_id, + *, + document_retrieval_source=None, + confidence=None): + """ + Initialize a QueryResultMetadata object. 
+ + :param str collection_id: The collection id associated with this training + data set. + :param str document_retrieval_source: (optional) The document retrieval + source that produced this search result. + :param float confidence: (optional) The confidence score for the given + result. Calculated based on how relevant the result is estimated to be. + confidence can range from `0.0` to `1.0`. The higher the number, the more + relevant the document. The `confidence` value for a result was calculated + using the model specified in the `document_retrieval_strategy` field of the + result set. This field is only returned if the **natural_language_query** + parameter is specified in the query. + """ + self.document_retrieval_source = document_retrieval_source + self.collection_id = collection_id + self.confidence = confidence + + @classmethod + def _from_dict(cls, _dict): + """Initialize a QueryResultMetadata object from a json dictionary.""" + args = {} + valid_keys = [ + 'document_retrieval_source', 'collection_id', 'confidence' + ] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class QueryResultMetadata: ' + + ', '.join(bad_keys)) + if 'document_retrieval_source' in _dict: + args['document_retrieval_source'] = _dict.get( + 'document_retrieval_source') + if 'collection_id' in _dict: + args['collection_id'] = _dict.get('collection_id') + else: + raise ValueError( + 'Required property \'collection_id\' not present in QueryResultMetadata JSON' + ) + if 'confidence' in _dict: + args['confidence'] = _dict.get('confidence') + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'document_retrieval_source' + ) and self.document_retrieval_source is not None: + _dict['document_retrieval_source'] = self.document_retrieval_source + if hasattr(self, 'collection_id') and self.collection_id is not None: + 
_dict['collection_id'] = self.collection_id + if hasattr(self, 'confidence') and self.confidence is not None: + _dict['confidence'] = self.confidence + return _dict + + def __str__(self): + """Return a `str` version of this QueryResultMetadata object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + class DocumentRetrievalSourceEnum(Enum): + """ + The document retrieval source that produced this search result. + """ + SEARCH = "search" + CURATION = "curation" + + +class QueryResultPassage(): + """ + A passage query result. + + :attr str passage_text: (optional) The content of the extracted passage. + :attr int start_offset: (optional) The position of the first character of the + extracted passage in the originating field. + :attr int end_offset: (optional) The position of the last character of the + extracted passage in the originating field. + :attr str field: (optional) The label of the field from which the passage has + been extracted. + """ + + def __init__(self, + *, + passage_text=None, + start_offset=None, + end_offset=None, + field=None): + """ + Initialize a QueryResultPassage object. + + :param str passage_text: (optional) The content of the extracted passage. + :param int start_offset: (optional) The position of the first character of + the extracted passage in the originating field. + :param int end_offset: (optional) The position of the last character of the + extracted passage in the originating field. + :param str field: (optional) The label of the field from which the passage + has been extracted. 
+ """ + self.passage_text = passage_text + self.start_offset = start_offset + self.end_offset = end_offset + self.field = field + + @classmethod + def _from_dict(cls, _dict): + """Initialize a QueryResultPassage object from a json dictionary.""" + args = {} + valid_keys = ['passage_text', 'start_offset', 'end_offset', 'field'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class QueryResultPassage: ' + + ', '.join(bad_keys)) + if 'passage_text' in _dict: + args['passage_text'] = _dict.get('passage_text') + if 'start_offset' in _dict: + args['start_offset'] = _dict.get('start_offset') + if 'end_offset' in _dict: + args['end_offset'] = _dict.get('end_offset') + if 'field' in _dict: + args['field'] = _dict.get('field') + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'passage_text') and self.passage_text is not None: + _dict['passage_text'] = self.passage_text + if hasattr(self, 'start_offset') and self.start_offset is not None: + _dict['start_offset'] = self.start_offset + if hasattr(self, 'end_offset') and self.end_offset is not None: + _dict['end_offset'] = self.end_offset + if hasattr(self, 'field') and self.field is not None: + _dict['field'] = self.field + return _dict + + def __str__(self): + """Return a `str` version of this QueryResultPassage object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class QuerySuggestedRefinement(): + """ + A suggested additional query term or terms user to filter results. + + :attr str text: (optional) The text used to filter. 
+ """ + + def __init__(self, *, text=None): + """ + Initialize a QuerySuggestedRefinement object. + + :param str text: (optional) The text used to filter. + """ + self.text = text + + @classmethod + def _from_dict(cls, _dict): + """Initialize a QuerySuggestedRefinement object from a json dictionary.""" + args = {} + valid_keys = ['text'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class QuerySuggestedRefinement: ' + + ', '.join(bad_keys)) + if 'text' in _dict: + args['text'] = _dict.get('text') + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'text') and self.text is not None: + _dict['text'] = self.text + return _dict + + def __str__(self): + """Return a `str` version of this QuerySuggestedRefinement object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class QueryTableResult(): + """ + A tables whose content or context match a search query. + + :attr str table_id: (optional) The identifier for the retrieved table. + :attr str source_document_id: (optional) The identifier of the document the + table was retrieved from. + :attr str collection_id: (optional) The identifier of the collection the table + was retrieved from. + :attr str table_html: (optional) HTML snippet of the table info. + :attr int table_html_offset: (optional) The offset of the table html snippet in + the original document html. + :attr TableResultTable table: (optional) Full table object retrieved from Table + Understanding Enrichment. 
+ """ + + def __init__(self, + *, + table_id=None, + source_document_id=None, + collection_id=None, + table_html=None, + table_html_offset=None, + table=None): + """ + Initialize a QueryTableResult object. + + :param str table_id: (optional) The identifier for the retrieved table. + :param str source_document_id: (optional) The identifier of the document + the table was retrieved from. + :param str collection_id: (optional) The identifier of the collection the + table was retrieved from. + :param str table_html: (optional) HTML snippet of the table info. + :param int table_html_offset: (optional) The offset of the table html + snippet in the original document html. + :param TableResultTable table: (optional) Full table object retrieved from + Table Understanding Enrichment. + """ + self.table_id = table_id + self.source_document_id = source_document_id + self.collection_id = collection_id + self.table_html = table_html + self.table_html_offset = table_html_offset + self.table = table + + @classmethod + def _from_dict(cls, _dict): + """Initialize a QueryTableResult object from a json dictionary.""" + args = {} + valid_keys = [ + 'table_id', 'source_document_id', 'collection_id', 'table_html', + 'table_html_offset', 'table' + ] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class QueryTableResult: ' + + ', '.join(bad_keys)) + if 'table_id' in _dict: + args['table_id'] = _dict.get('table_id') + if 'source_document_id' in _dict: + args['source_document_id'] = _dict.get('source_document_id') + if 'collection_id' in _dict: + args['collection_id'] = _dict.get('collection_id') + if 'table_html' in _dict: + args['table_html'] = _dict.get('table_html') + if 'table_html_offset' in _dict: + args['table_html_offset'] = _dict.get('table_html_offset') + if 'table' in _dict: + args['table'] = TableResultTable._from_dict(_dict.get('table')) + return cls(**args) + + def _to_dict(self): + """Return a 
json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'table_id') and self.table_id is not None: + _dict['table_id'] = self.table_id + if hasattr( + self, + 'source_document_id') and self.source_document_id is not None: + _dict['source_document_id'] = self.source_document_id + if hasattr(self, 'collection_id') and self.collection_id is not None: + _dict['collection_id'] = self.collection_id + if hasattr(self, 'table_html') and self.table_html is not None: + _dict['table_html'] = self.table_html + if hasattr(self, + 'table_html_offset') and self.table_html_offset is not None: + _dict['table_html_offset'] = self.table_html_offset + if hasattr(self, 'table') and self.table is not None: + _dict['table'] = self.table._to_dict() + return _dict + + def __str__(self): + """Return a `str` version of this QueryTableResult object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class QueryTermAggregation(): + """ + Returns the top values for the field specified. + + :attr str field: The field in the document used to generate top values from. + :attr int count: (optional) The number of top values returned. + :attr list[QueryTermAggregationResult] results: (optional) Array of top values + for the field. + """ + + def __init__(self, type, field, *, count=None, results=None): + """ + Initialize a QueryTermAggregation object. + + :param str type: The type of aggregation command used. Options include: + term, histogram, timeslice, nested, filter, min, max, sum, average, + unique_count, and top_hits. + :param str field: The field in the document used to generate top values + from. 
+ :param int count: (optional) The number of top values returned. + :param list[QueryTermAggregationResult] results: (optional) Array of top + values for the field. + """ + self.field = field + self.count = count + self.results = results + + @classmethod + def _from_dict(cls, _dict): + """Initialize a QueryTermAggregation object from a json dictionary.""" + args = {} + valid_keys = ['field', 'count', 'results'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class QueryTermAggregation: ' + + ', '.join(bad_keys)) + if 'field' in _dict: + args['field'] = _dict.get('field') + else: + raise ValueError( + 'Required property \'field\' not present in QueryTermAggregation JSON' + ) + if 'count' in _dict: + args['count'] = _dict.get('count') + if 'results' in _dict: + args['results'] = [ + QueryTermAggregationResult._from_dict(x) + for x in (_dict.get('results')) + ] + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'field') and self.field is not None: + _dict['field'] = self.field + if hasattr(self, 'count') and self.count is not None: + _dict['count'] = self.count + if hasattr(self, 'results') and self.results is not None: + _dict['results'] = [x._to_dict() for x in self.results] + return _dict + + def __str__(self): + """Return a `str` version of this QueryTermAggregation object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class QueryTermAggregationResult(): + """ + Top value result for the term aggregation. 
+ + :attr str key: Value of the field with a non-zero frequency in the document set. + :attr int matching_results: Number of documents containing the 'key'. + :attr list[QueryAggregation] aggregations: (optional) An array of sub + aggregations. + """ + + def __init__(self, key, matching_results, *, aggregations=None): + """ + Initialize a QueryTermAggregationResult object. + + :param str key: Value of the field with a non-zero frequency in the + document set. + :param int matching_results: Number of documents containing the 'key'. + :param list[QueryAggregation] aggregations: (optional) An array of sub + aggregations. + """ + self.key = key + self.matching_results = matching_results + self.aggregations = aggregations + + @classmethod + def _from_dict(cls, _dict): + """Initialize a QueryTermAggregationResult object from a json dictionary.""" + args = {} + valid_keys = ['key', 'matching_results', 'aggregations'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class QueryTermAggregationResult: ' + + ', '.join(bad_keys)) + if 'key' in _dict: + args['key'] = _dict.get('key') + else: + raise ValueError( + 'Required property \'key\' not present in QueryTermAggregationResult JSON' + ) + if 'matching_results' in _dict: + args['matching_results'] = _dict.get('matching_results') + else: + raise ValueError( + 'Required property \'matching_results\' not present in QueryTermAggregationResult JSON' + ) + if 'aggregations' in _dict: + args['aggregations'] = [ + QueryAggregation._from_dict(x) + for x in (_dict.get('aggregations')) + ] + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'key') and self.key is not None: + _dict['key'] = self.key + if hasattr(self, + 'matching_results') and self.matching_results is not None: + _dict['matching_results'] = self.matching_results + if hasattr(self, 'aggregations') and 
self.aggregations is not None: + _dict['aggregations'] = [x._to_dict() for x in self.aggregations] + return _dict + + def __str__(self): + """Return a `str` version of this QueryTermAggregationResult object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class QueryTimesliceAggregation(): + """ + A specialized histogram aggregation that uses dates to create interval segments. + + :attr str field: The date field name used to create the timeslice. + :attr str interval: The date interval value. Valid values are seconds, minutes, + hours, days, weeks, and years. + :attr list[QueryTimesliceAggregationResult] results: (optional) Array of + aggregation results. + """ + + def __init__(self, type, field, interval, *, results=None): + """ + Initialize a QueryTimesliceAggregation object. + + :param str type: The type of aggregation command used. Options include: + term, histogram, timeslice, nested, filter, min, max, sum, average, + unique_count, and top_hits. + :param str field: The date field name used to create the timeslice. + :param str interval: The date interval value. Valid values are seconds, + minutes, hours, days, weeks, and years. + :param list[QueryTimesliceAggregationResult] results: (optional) Array of + aggregation results. 
+ """ + self.field = field + self.interval = interval + self.results = results + + @classmethod + def _from_dict(cls, _dict): + """Initialize a QueryTimesliceAggregation object from a json dictionary.""" + args = {} + valid_keys = ['field', 'interval', 'results'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class QueryTimesliceAggregation: ' + + ', '.join(bad_keys)) + if 'field' in _dict: + args['field'] = _dict.get('field') + else: + raise ValueError( + 'Required property \'field\' not present in QueryTimesliceAggregation JSON' + ) + if 'interval' in _dict: + args['interval'] = _dict.get('interval') + else: + raise ValueError( + 'Required property \'interval\' not present in QueryTimesliceAggregation JSON' + ) + if 'results' in _dict: + args['results'] = [ + QueryTimesliceAggregationResult._from_dict(x) + for x in (_dict.get('results')) + ] + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'field') and self.field is not None: + _dict['field'] = self.field + if hasattr(self, 'interval') and self.interval is not None: + _dict['interval'] = self.interval + if hasattr(self, 'results') and self.results is not None: + _dict['results'] = [x._to_dict() for x in self.results] + return _dict + + def __str__(self): + """Return a `str` version of this QueryTimesliceAggregation object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class QueryTimesliceAggregationResult(): + """ + A timeslice interval segment. 
+ + :attr str key_as_string: String date value of the upper bound for the timeslice + interval in ISO-8601 format. + :attr int key: Numeric date value of the upper bound for the timeslice interval + in UNIX milliseconds since epoch. + :attr int matching_results: Number of documents with the specified key as the + upper bound. + :attr list[QueryAggregation] aggregations: (optional) An array of sub + aggregations. + """ + + def __init__(self, + key_as_string, + key, + matching_results, + *, + aggregations=None): + """ + Initialize a QueryTimesliceAggregationResult object. + + :param str key_as_string: String date value of the upper bound for the + timeslice interval in ISO-8601 format. + :param int key: Numeric date value of the upper bound for the timeslice + interval in UNIX milliseconds since epoch. + :param int matching_results: Number of documents with the specified key as + the upper bound. + :param list[QueryAggregation] aggregations: (optional) An array of sub + aggregations. + """ + self.key_as_string = key_as_string + self.key = key + self.matching_results = matching_results + self.aggregations = aggregations + + @classmethod + def _from_dict(cls, _dict): + """Initialize a QueryTimesliceAggregationResult object from a json dictionary.""" + args = {} + valid_keys = [ + 'key_as_string', 'key', 'matching_results', 'aggregations' + ] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class QueryTimesliceAggregationResult: ' + + ', '.join(bad_keys)) + if 'key_as_string' in _dict: + args['key_as_string'] = _dict.get('key_as_string') + else: + raise ValueError( + 'Required property \'key_as_string\' not present in QueryTimesliceAggregationResult JSON' + ) + if 'key' in _dict: + args['key'] = _dict.get('key') + else: + raise ValueError( + 'Required property \'key\' not present in QueryTimesliceAggregationResult JSON' + ) + if 'matching_results' in _dict: + args['matching_results'] =
_dict.get('matching_results') + else: + raise ValueError( + 'Required property \'matching_results\' not present in QueryTimesliceAggregationResult JSON' + ) + if 'aggregations' in _dict: + args['aggregations'] = [ + QueryAggregation._from_dict(x) + for x in (_dict.get('aggregations')) + ] + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'key_as_string') and self.key_as_string is not None: + _dict['key_as_string'] = self.key_as_string + if hasattr(self, 'key') and self.key is not None: + _dict['key'] = self.key + if hasattr(self, + 'matching_results') and self.matching_results is not None: + _dict['matching_results'] = self.matching_results + if hasattr(self, 'aggregations') and self.aggregations is not None: + _dict['aggregations'] = [x._to_dict() for x in self.aggregations] + return _dict + + def __str__(self): + """Return a `str` version of this QueryTimesliceAggregationResult object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class QueryTopHitsAggregation(): + """ + Returns the top documents ranked by the score of the query. + + :attr int size: The number of documents to return. + :attr QueryTopHitsAggregationResult hits: (optional) + """ + + def __init__(self, type, size, *, hits=None): + """ + Initialize a QueryTopHitsAggregation object. + + :param str type: The type of aggregation command used. Options include: + term, histogram, timeslice, nested, filter, min, max, sum, average, + unique_count, and top_hits. + :param int size: The number of documents to return. 
+ :param QueryTopHitsAggregationResult hits: (optional) + """ + self.size = size + self.hits = hits + + @classmethod + def _from_dict(cls, _dict): + """Initialize a QueryTopHitsAggregation object from a json dictionary.""" + args = {} + valid_keys = ['size', 'hits'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class QueryTopHitsAggregation: ' + + ', '.join(bad_keys)) + if 'size' in _dict: + args['size'] = _dict.get('size') + else: + raise ValueError( + 'Required property \'size\' not present in QueryTopHitsAggregation JSON' + ) + if 'hits' in _dict: + args['hits'] = QueryTopHitsAggregationResult._from_dict( + _dict.get('hits')) + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'size') and self.size is not None: + _dict['size'] = self.size + if hasattr(self, 'hits') and self.hits is not None: + _dict['hits'] = self.hits._to_dict() + return _dict + + def __str__(self): + """Return a `str` version of this QueryTopHitsAggregation object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class QueryTopHitsAggregationResult(): + """ + A query response containing the matching documents for the preceding aggregations. + + :attr int matching_results: Number of matching results. + :attr list[dict] hits: (optional) An array of the document results. + """ + + def __init__(self, matching_results, *, hits=None): + """ + Initialize a QueryTopHitsAggregationResult object. + + :param int matching_results: Number of matching results. 
+ :param list[dict] hits: (optional) An array of the document results. + """ + self.matching_results = matching_results + self.hits = hits + + @classmethod + def _from_dict(cls, _dict): + """Initialize a QueryTopHitsAggregationResult object from a json dictionary.""" + args = {} + valid_keys = ['matching_results', 'hits'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class QueryTopHitsAggregationResult: ' + + ', '.join(bad_keys)) + if 'matching_results' in _dict: + args['matching_results'] = _dict.get('matching_results') + else: + raise ValueError( + 'Required property \'matching_results\' not present in QueryTopHitsAggregationResult JSON' + ) + if 'hits' in _dict: + args['hits'] = _dict.get('hits') + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, + 'matching_results') and self.matching_results is not None: + _dict['matching_results'] = self.matching_results + if hasattr(self, 'hits') and self.hits is not None: + _dict['hits'] = self.hits + return _dict + + def __str__(self): + """Return a `str` version of this QueryTopHitsAggregationResult object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class RetrievalDetails(): + """ + An object containing retrieval type information. + + :attr str document_retrieval_strategy: (optional) Identifies the document + retrieval strategy used for this query. `relevancy_training` indicates that the + results were returned using a relevancy trained model.
+ **Note**: In the event of trained collections being queried, but the trained + model is not used to return results, the **document_retrieval_strategy** will be + listed as `untrained`. + """ + + def __init__(self, *, document_retrieval_strategy=None): + """ + Initialize a RetrievalDetails object. + + :param str document_retrieval_strategy: (optional) Identifies the document + retrieval strategy used for this query. `relevancy_training` indicates that + the results were returned using a relevancy trained model. + **Note**: In the event of trained collections being queried, but the + trained model is not used to return results, the + **document_retrieval_strategy** will be listed as `untrained`. + """ + self.document_retrieval_strategy = document_retrieval_strategy + + @classmethod + def _from_dict(cls, _dict): + """Initialize a RetrievalDetails object from a json dictionary.""" + args = {} + valid_keys = ['document_retrieval_strategy'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class RetrievalDetails: ' + + ', '.join(bad_keys)) + if 'document_retrieval_strategy' in _dict: + args['document_retrieval_strategy'] = _dict.get( + 'document_retrieval_strategy') + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'document_retrieval_strategy' + ) and self.document_retrieval_strategy is not None: + _dict[ + 'document_retrieval_strategy'] = self.document_retrieval_strategy + return _dict + + def __str__(self): + """Return a `str` version of this RetrievalDetails object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false
otherwise.""" + return not self == other + + class DocumentRetrievalStrategyEnum(Enum): + """ + Identifies the document retrieval strategy used for this query. + `relevancy_training` indicates that the results were returned using a relevancy + trained model. + **Note**: In the event of trained collections being queried, but the trained + model is not used to return results, the **document_retrieval_strategy** will be + listed as `untrained`. + """ + UNTRAINED = "untrained" + RELEVANCY_TRAINING = "relevancy_training" + + +class TableBodyCells(): + """ + Cells that are not table header, column header, or row header cells. + + :attr str cell_id: (optional) The unique ID of the cell in the current table. + :attr TableElementLocation location: (optional) The numeric location of the + identified element in the document, represented with two integers labeled + `begin` and `end`. + :attr str text: (optional) The textual contents of this cell from the input + document without associated markup content. + :attr int row_index_begin: (optional) The `begin` index of this cell's `row` + location in the current table. + :attr int row_index_end: (optional) The `end` index of this cell's `row` + location in the current table. + :attr int column_index_begin: (optional) The `begin` index of this cell's + `column` location in the current table. + :attr int column_index_end: (optional) The `end` index of this cell's `column` + location in the current table. + :attr list[TableRowHeaderIds] row_header_ids: (optional) A list of table row + header ids. + :attr list[TableRowHeaderTexts] row_header_texts: (optional) A list of table row + header texts. + :attr list[TableRowHeaderTextsNormalized] row_header_texts_normalized: + (optional) A list of table row header texts normalized. + :attr list[TableColumnHeaderIds] column_header_ids: (optional) A list of table + column header ids. + :attr list[TableColumnHeaderTexts] column_header_texts: (optional) A list of + table column header texts.
+ :attr list[TableColumnHeaderTextsNormalized] column_header_texts_normalized: + (optional) A list of table column header texts normalized. + :attr list[DocumentAttribute] attributes: (optional) A list of document + attributes. + """ + + def __init__(self, + *, + cell_id=None, + location=None, + text=None, + row_index_begin=None, + row_index_end=None, + column_index_begin=None, + column_index_end=None, + row_header_ids=None, + row_header_texts=None, + row_header_texts_normalized=None, + column_header_ids=None, + column_header_texts=None, + column_header_texts_normalized=None, + attributes=None): + """ + Initialize a TableBodyCells object. + + :param str cell_id: (optional) The unique ID of the cell in the current + table. + :param TableElementLocation location: (optional) The numeric location of + the identified element in the document, represented with two integers + labeled `begin` and `end`. + :param str text: (optional) The textual contents of this cell from the + input document without associated markup content. + :param int row_index_begin: (optional) The `begin` index of this cell's + `row` location in the current table. + :param int row_index_end: (optional) The `end` index of this cell's `row` + location in the current table. + :param int column_index_begin: (optional) The `begin` index of this cell's + `column` location in the current table. + :param int column_index_end: (optional) The `end` index of this cell's + `column` location in the current table. + :param list[TableRowHeaderIds] row_header_ids: (optional) A list of table + row header ids. + :param list[TableRowHeaderTexts] row_header_texts: (optional) A list of + table row header texts. + :param list[TableRowHeaderTextsNormalized] row_header_texts_normalized: + (optional) A list of table row header texts normalized. + :param list[TableColumnHeaderIds] column_header_ids: (optional) A list of + table column header ids. 
+ :param list[TableColumnHeaderTexts] column_header_texts: (optional) A list + of table column header texts. + :param list[TableColumnHeaderTextsNormalized] + column_header_texts_normalized: (optional) A list of table column header + texts normalized. + :param list[DocumentAttribute] attributes: (optional) A list of document + attributes. + """ + self.cell_id = cell_id + self.location = location + self.text = text + self.row_index_begin = row_index_begin + self.row_index_end = row_index_end + self.column_index_begin = column_index_begin + self.column_index_end = column_index_end + self.row_header_ids = row_header_ids + self.row_header_texts = row_header_texts + self.row_header_texts_normalized = row_header_texts_normalized + self.column_header_ids = column_header_ids + self.column_header_texts = column_header_texts + self.column_header_texts_normalized = column_header_texts_normalized + self.attributes = attributes + + @classmethod + def _from_dict(cls, _dict): + """Initialize a TableBodyCells object from a json dictionary.""" + args = {} + valid_keys = [ + 'cell_id', 'location', 'text', 'row_index_begin', 'row_index_end', + 'column_index_begin', 'column_index_end', 'row_header_ids', + 'row_header_texts', 'row_header_texts_normalized', + 'column_header_ids', 'column_header_texts', + 'column_header_texts_normalized', 'attributes' + ] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class TableBodyCells: ' + + ', '.join(bad_keys)) + if 'cell_id' in _dict: + args['cell_id'] = _dict.get('cell_id') + if 'location' in _dict: + args['location'] = TableElementLocation._from_dict( + _dict.get('location')) + if 'text' in _dict: + args['text'] = _dict.get('text') + if 'row_index_begin' in _dict: + args['row_index_begin'] = _dict.get('row_index_begin') + if 'row_index_end' in _dict: + args['row_index_end'] = _dict.get('row_index_end') + if 'column_index_begin' in _dict: + 
args['column_index_begin'] = _dict.get('column_index_begin') + if 'column_index_end' in _dict: + args['column_index_end'] = _dict.get('column_index_end') + if 'row_header_ids' in _dict: + args['row_header_ids'] = [ + TableRowHeaderIds._from_dict(x) + for x in (_dict.get('row_header_ids')) + ] + if 'row_header_texts' in _dict: + args['row_header_texts'] = [ + TableRowHeaderTexts._from_dict(x) + for x in (_dict.get('row_header_texts')) + ] + if 'row_header_texts_normalized' in _dict: + args['row_header_texts_normalized'] = [ + TableRowHeaderTextsNormalized._from_dict(x) + for x in (_dict.get('row_header_texts_normalized')) + ] + if 'column_header_ids' in _dict: + args['column_header_ids'] = [ + TableColumnHeaderIds._from_dict(x) + for x in (_dict.get('column_header_ids')) + ] + if 'column_header_texts' in _dict: + args['column_header_texts'] = [ + TableColumnHeaderTexts._from_dict(x) + for x in (_dict.get('column_header_texts')) + ] + if 'column_header_texts_normalized' in _dict: + args['column_header_texts_normalized'] = [ + TableColumnHeaderTextsNormalized._from_dict(x) + for x in (_dict.get('column_header_texts_normalized')) + ] + if 'attributes' in _dict: + args['attributes'] = [ + DocumentAttribute._from_dict(x) + for x in (_dict.get('attributes')) + ] + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'cell_id') and self.cell_id is not None: + _dict['cell_id'] = self.cell_id + if hasattr(self, 'location') and self.location is not None: + _dict['location'] = self.location._to_dict() + if hasattr(self, 'text') and self.text is not None: + _dict['text'] = self.text + if hasattr(self, + 'row_index_begin') and self.row_index_begin is not None: + _dict['row_index_begin'] = self.row_index_begin + if hasattr(self, 'row_index_end') and self.row_index_end is not None: + _dict['row_index_end'] = self.row_index_end + if hasattr( + self, + 'column_index_begin') and self.column_index_begin is 
not None: + _dict['column_index_begin'] = self.column_index_begin + if hasattr(self, + 'column_index_end') and self.column_index_end is not None: + _dict['column_index_end'] = self.column_index_end + if hasattr(self, 'row_header_ids') and self.row_header_ids is not None: + _dict['row_header_ids'] = [ + x._to_dict() for x in self.row_header_ids + ] + if hasattr(self, + 'row_header_texts') and self.row_header_texts is not None: + _dict['row_header_texts'] = [ + x._to_dict() for x in self.row_header_texts + ] + if hasattr(self, 'row_header_texts_normalized' + ) and self.row_header_texts_normalized is not None: + _dict['row_header_texts_normalized'] = [ + x._to_dict() for x in self.row_header_texts_normalized + ] + if hasattr(self, + 'column_header_ids') and self.column_header_ids is not None: + _dict['column_header_ids'] = [ + x._to_dict() for x in self.column_header_ids + ] + if hasattr( + self, + 'column_header_texts') and self.column_header_texts is not None: + _dict['column_header_texts'] = [ + x._to_dict() for x in self.column_header_texts + ] + if hasattr(self, 'column_header_texts_normalized' + ) and self.column_header_texts_normalized is not None: + _dict['column_header_texts_normalized'] = [ + x._to_dict() for x in self.column_header_texts_normalized + ] + if hasattr(self, 'attributes') and self.attributes is not None: + _dict['attributes'] = [x._to_dict() for x in self.attributes] + return _dict + + def __str__(self): + """Return a `str` version of this TableBodyCells object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class TableCellKey(): + """ + A key in a key-value pair. 
+ + :attr str cell_id: (optional) The unique ID of the key in the table. + :attr TableElementLocation location: (optional) The numeric location of the + identified element in the document, represented with two integers labeled + `begin` and `end`. + :attr str text: (optional) The text content of the table cell without HTML + markup. + """ + + def __init__(self, *, cell_id=None, location=None, text=None): + """ + Initialize a TableCellKey object. + + :param str cell_id: (optional) The unique ID of the key in the table. + :param TableElementLocation location: (optional) The numeric location of + the identified element in the document, represented with two integers + labeled `begin` and `end`. + :param str text: (optional) The text content of the table cell without HTML + markup. + """ + self.cell_id = cell_id + self.location = location + self.text = text + + @classmethod + def _from_dict(cls, _dict): + """Initialize a TableCellKey object from a json dictionary.""" + args = {} + valid_keys = ['cell_id', 'location', 'text'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class TableCellKey: ' + + ', '.join(bad_keys)) + if 'cell_id' in _dict: + args['cell_id'] = _dict.get('cell_id') + if 'location' in _dict: + args['location'] = TableElementLocation._from_dict( + _dict.get('location')) + if 'text' in _dict: + args['text'] = _dict.get('text') + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'cell_id') and self.cell_id is not None: + _dict['cell_id'] = self.cell_id + if hasattr(self, 'location') and self.location is not None: + _dict['location'] = self.location._to_dict() + if hasattr(self, 'text') and self.text is not None: + _dict['text'] = self.text + return _dict + + def __str__(self): + """Return a `str` version of this TableCellKey object.""" + return json.dumps(self._to_dict(), indent=2) + + def 
__eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class TableCellValues(): + """ + A value in a key-value pair. + + :attr str cell_id: (optional) The unique ID of the value in the table. + :attr TableElementLocation location: (optional) The numeric location of the + identified element in the document, represented with two integers labeled + `begin` and `end`. + :attr str text: (optional) The text content of the table cell without HTML + markup. + """ + + def __init__(self, *, cell_id=None, location=None, text=None): + """ + Initialize a TableCellValues object. + + :param str cell_id: (optional) The unique ID of the value in the table. + :param TableElementLocation location: (optional) The numeric location of + the identified element in the document, represented with two integers + labeled `begin` and `end`. + :param str text: (optional) The text content of the table cell without HTML + markup. 
+ """ + self.cell_id = cell_id + self.location = location + self.text = text + + @classmethod + def _from_dict(cls, _dict): + """Initialize a TableCellValues object from a json dictionary.""" + args = {} + valid_keys = ['cell_id', 'location', 'text'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class TableCellValues: ' + + ', '.join(bad_keys)) + if 'cell_id' in _dict: + args['cell_id'] = _dict.get('cell_id') + if 'location' in _dict: + args['location'] = TableElementLocation._from_dict( + _dict.get('location')) + if 'text' in _dict: + args['text'] = _dict.get('text') + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'cell_id') and self.cell_id is not None: + _dict['cell_id'] = self.cell_id + if hasattr(self, 'location') and self.location is not None: + _dict['location'] = self.location._to_dict() + if hasattr(self, 'text') and self.text is not None: + _dict['text'] = self.text + return _dict + + def __str__(self): + """Return a `str` version of this TableCellValues object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class TableColumnHeaderIds(): + """ + An array of values, each being the `id` value of a column header that is applicable to + the current cell. + + :attr str id: (optional) The `id` value of a column header. + """ + + def __init__(self, *, id=None): + """ + Initialize a TableColumnHeaderIds object. + + :param str id: (optional) The `id` value of a column header. 
+ """ + self.id = id + + @classmethod + def _from_dict(cls, _dict): + """Initialize a TableColumnHeaderIds object from a json dictionary.""" + args = {} + valid_keys = ['id'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class TableColumnHeaderIds: ' + + ', '.join(bad_keys)) + if 'id' in _dict: + args['id'] = _dict.get('id') + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'id') and self.id is not None: + _dict['id'] = self.id + return _dict + + def __str__(self): + """Return a `str` version of this TableColumnHeaderIds object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class TableColumnHeaderTexts(): + """ + An array of values, each being the `text` value of a column header that is applicable + to the current cell. + + :attr str text: (optional) The `text` value of a column header. + """ + + def __init__(self, *, text=None): + """ + Initialize a TableColumnHeaderTexts object. + + :param str text: (optional) The `text` value of a column header. 
+ """ + self.text = text + + @classmethod + def _from_dict(cls, _dict): + """Initialize a TableColumnHeaderTexts object from a json dictionary.""" + args = {} + valid_keys = ['text'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class TableColumnHeaderTexts: ' + + ', '.join(bad_keys)) + if 'text' in _dict: + args['text'] = _dict.get('text') + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'text') and self.text is not None: + _dict['text'] = self.text + return _dict + + def __str__(self): + """Return a `str` version of this TableColumnHeaderTexts object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class TableColumnHeaderTextsNormalized(): + """ + If you provide customization input, the normalized version of the column header texts + according to the customization; otherwise, the same value as `column_header_texts`. + + :attr str text_normalized: (optional) The normalized version of a column header + text. + """ + + def __init__(self, *, text_normalized=None): + """ + Initialize a TableColumnHeaderTextsNormalized object. + + :param str text_normalized: (optional) The normalized version of a column + header text. 
+ """ + self.text_normalized = text_normalized + + @classmethod + def _from_dict(cls, _dict): + """Initialize a TableColumnHeaderTextsNormalized object from a json dictionary.""" + args = {} + valid_keys = ['text_normalized'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class TableColumnHeaderTextsNormalized: ' + + ', '.join(bad_keys)) + if 'text_normalized' in _dict: + args['text_normalized'] = _dict.get('text_normalized') + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, + 'text_normalized') and self.text_normalized is not None: + _dict['text_normalized'] = self.text_normalized + return _dict + + def __str__(self): + """Return a `str` version of this TableColumnHeaderTextsNormalized object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class TableColumnHeaders(): + """ + Column-level cells, each applicable as a header to other cells in the same column as + itself, of the current table. + + :attr str cell_id: (optional) The unique ID of the cell in the current table. + :attr object location: (optional) The location of the column header cell in the + current table as defined by its `begin` and `end` offsets, respectively, in the + input document. + :attr str text: (optional) The textual contents of this cell from the input + document without associated markup content. + :attr str text_normalized: (optional) If you provide customization input, the + normalized version of the cell text according to the customization; otherwise, + the same value as `text`.
+ :attr int row_index_begin: (optional) The `begin` index of this cell's `row` + location in the current table. + :attr int row_index_end: (optional) The `end` index of this cell's `row` + location in the current table. + :attr int column_index_begin: (optional) The `begin` index of this cell's + `column` location in the current table. + :attr int column_index_end: (optional) The `end` index of this cell's `column` + location in the current table. + """ + + def __init__(self, + *, + cell_id=None, + location=None, + text=None, + text_normalized=None, + row_index_begin=None, + row_index_end=None, + column_index_begin=None, + column_index_end=None): + """ + Initialize a TableColumnHeaders object. + + :param str cell_id: (optional) The unique ID of the cell in the current + table. + :param object location: (optional) The location of the column header cell + in the current table as defined by its `begin` and `end` offsets, + respectively, in the input document. + :param str text: (optional) The textual contents of this cell from the + input document without associated markup content. + :param str text_normalized: (optional) If you provide customization input, + the normalized version of the cell text according to the customization; + otherwise, the same value as `text`. + :param int row_index_begin: (optional) The `begin` index of this cell's + `row` location in the current table. + :param int row_index_end: (optional) The `end` index of this cell's `row` + location in the current table. + :param int column_index_begin: (optional) The `begin` index of this cell's + `column` location in the current table. + :param int column_index_end: (optional) The `end` index of this cell's + `column` location in the current table.
+        """
+        self.cell_id = cell_id
+        self.location = location
+        self.text = text
+        self.text_normalized = text_normalized
+        self.row_index_begin = row_index_begin
+        self.row_index_end = row_index_end
+        self.column_index_begin = column_index_begin
+        self.column_index_end = column_index_end
+
+    @classmethod
+    def _from_dict(cls, _dict):
+        """Initialize a TableColumnHeaders object from a json dictionary."""
+        args = {}
+        valid_keys = [
+            'cell_id', 'location', 'text', 'text_normalized', 'row_index_begin',
+            'row_index_end', 'column_index_begin', 'column_index_end'
+        ]
+        bad_keys = set(_dict.keys()) - set(valid_keys)
+        if bad_keys:
+            raise ValueError(
+                'Unrecognized keys detected in dictionary for class TableColumnHeaders: '
+                + ', '.join(bad_keys))
+        if 'cell_id' in _dict:
+            args['cell_id'] = _dict.get('cell_id')
+        if 'location' in _dict:
+            args['location'] = _dict.get('location')
+        if 'text' in _dict:
+            args['text'] = _dict.get('text')
+        if 'text_normalized' in _dict:
+            args['text_normalized'] = _dict.get('text_normalized')
+        if 'row_index_begin' in _dict:
+            args['row_index_begin'] = _dict.get('row_index_begin')
+        if 'row_index_end' in _dict:
+            args['row_index_end'] = _dict.get('row_index_end')
+        if 'column_index_begin' in _dict:
+            args['column_index_begin'] = _dict.get('column_index_begin')
+        if 'column_index_end' in _dict:
+            args['column_index_end'] = _dict.get('column_index_end')
+        return cls(**args)
+
+    def _to_dict(self):
+        """Return a json dictionary representing this model."""
+        _dict = {}
+        if hasattr(self, 'cell_id') and self.cell_id is not None:
+            _dict['cell_id'] = self.cell_id
+        if hasattr(self, 'location') and self.location is not None:
+            _dict['location'] = self.location
+        if hasattr(self, 'text') and self.text is not None:
+            _dict['text'] = self.text
+        if hasattr(self,
+                   'text_normalized') and self.text_normalized is not None:
+            _dict['text_normalized'] = self.text_normalized
+        if hasattr(self,
+                   'row_index_begin') and self.row_index_begin is not None:
+            _dict['row_index_begin'] = self.row_index_begin
+        if hasattr(self, 'row_index_end') and self.row_index_end is not None:
+            _dict['row_index_end'] = self.row_index_end
+        if hasattr(
+                self,
+                'column_index_begin') and self.column_index_begin is not None:
+            _dict['column_index_begin'] = self.column_index_begin
+        if hasattr(self,
+                   'column_index_end') and self.column_index_end is not None:
+            _dict['column_index_end'] = self.column_index_end
+        return _dict
+
+    def __str__(self):
+        """Return a `str` version of this TableColumnHeaders object."""
+        return json.dumps(self._to_dict(), indent=2)
+
+    def __eq__(self, other):
+        """Return `true` when self and other are equal, false otherwise."""
+        if not isinstance(other, self.__class__):
+            return False
+        return self.__dict__ == other.__dict__
+
+    def __ne__(self, other):
+        """Return `true` when self and other are not equal, false otherwise."""
+        return not self == other
+
+
+class TableElementLocation():
+    """
+    The numeric location of the identified element in the document, represented with two
+    integers labeled `begin` and `end`.
+
+    :attr int begin: The element's `begin` index.
+    :attr int end: The element's `end` index.
+    """
+
+    def __init__(self, begin, end):
+        """
+        Initialize a TableElementLocation object.
+
+        :param int begin: The element's `begin` index.
+        :param int end: The element's `end` index.
+        """
+        self.begin = begin
+        self.end = end
+
+    @classmethod
+    def _from_dict(cls, _dict):
+        """Initialize a TableElementLocation object from a json dictionary."""
+        args = {}
+        valid_keys = ['begin', 'end']
+        bad_keys = set(_dict.keys()) - set(valid_keys)
+        if bad_keys:
+            raise ValueError(
+                'Unrecognized keys detected in dictionary for class TableElementLocation: '
+                + ', '.join(bad_keys))
+        if 'begin' in _dict:
+            args['begin'] = _dict.get('begin')
+        else:
+            raise ValueError(
+                'Required property \'begin\' not present in TableElementLocation JSON'
+            )
+        if 'end' in _dict:
+            args['end'] = _dict.get('end')
+        else:
+            raise ValueError(
+                'Required property \'end\' not present in TableElementLocation JSON'
+            )
+        return cls(**args)
+
+    def _to_dict(self):
+        """Return a json dictionary representing this model."""
+        _dict = {}
+        if hasattr(self, 'begin') and self.begin is not None:
+            _dict['begin'] = self.begin
+        if hasattr(self, 'end') and self.end is not None:
+            _dict['end'] = self.end
+        return _dict
+
+    def __str__(self):
+        """Return a `str` version of this TableElementLocation object."""
+        return json.dumps(self._to_dict(), indent=2)
+
+    def __eq__(self, other):
+        """Return `true` when self and other are equal, false otherwise."""
+        if not isinstance(other, self.__class__):
+            return False
+        return self.__dict__ == other.__dict__
+
+    def __ne__(self, other):
+        """Return `true` when self and other are not equal, false otherwise."""
+        return not self == other
+
+
+class TableHeaders():
+    """
+    The contents of the current table's header.
+
+    :attr str cell_id: (optional) The unique ID of the cell in the current table.
+    :attr object location: (optional) The location of the table header cell in the
+    current table as defined by its `begin` and `end` offsets, respectively, in the
+    input document.
+    :attr str text: (optional) The textual contents of the cell from the input
+    document without associated markup content.
+    :attr int row_index_begin: (optional) The `begin` index of this cell's `row`
+    location in the current table.
+    :attr int row_index_end: (optional) The `end` index of this cell's `row`
+    location in the current table.
+    :attr int column_index_begin: (optional) The `begin` index of this cell's
+    `column` location in the current table.
+    :attr int column_index_end: (optional) The `end` index of this cell's `column`
+    location in the current table.
+    """
+
+    def __init__(self,
+                 *,
+                 cell_id=None,
+                 location=None,
+                 text=None,
+                 row_index_begin=None,
+                 row_index_end=None,
+                 column_index_begin=None,
+                 column_index_end=None):
+        """
+        Initialize a TableHeaders object.
+
+        :param str cell_id: (optional) The unique ID of the cell in the current
+        table.
+        :param object location: (optional) The location of the table header cell in
+        the current table as defined by its `begin` and `end` offsets,
+        respectively, in the input document.
+        :param str text: (optional) The textual contents of the cell from the input
+        document without associated markup content.
+        :param int row_index_begin: (optional) The `begin` index of this cell's
+        `row` location in the current table.
+        :param int row_index_end: (optional) The `end` index of this cell's `row`
+        location in the current table.
+        :param int column_index_begin: (optional) The `begin` index of this cell's
+        `column` location in the current table.
+        :param int column_index_end: (optional) The `end` index of this cell's
+        `column` location in the current table.
+        """
+        self.cell_id = cell_id
+        self.location = location
+        self.text = text
+        self.row_index_begin = row_index_begin
+        self.row_index_end = row_index_end
+        self.column_index_begin = column_index_begin
+        self.column_index_end = column_index_end
+
+    @classmethod
+    def _from_dict(cls, _dict):
+        """Initialize a TableHeaders object from a json dictionary."""
+        args = {}
+        valid_keys = [
+            'cell_id', 'location', 'text', 'row_index_begin', 'row_index_end',
+            'column_index_begin', 'column_index_end'
+        ]
+        bad_keys = set(_dict.keys()) - set(valid_keys)
+        if bad_keys:
+            raise ValueError(
+                'Unrecognized keys detected in dictionary for class TableHeaders: '
+                + ', '.join(bad_keys))
+        if 'cell_id' in _dict:
+            args['cell_id'] = _dict.get('cell_id')
+        if 'location' in _dict:
+            args['location'] = _dict.get('location')
+        if 'text' in _dict:
+            args['text'] = _dict.get('text')
+        if 'row_index_begin' in _dict:
+            args['row_index_begin'] = _dict.get('row_index_begin')
+        if 'row_index_end' in _dict:
+            args['row_index_end'] = _dict.get('row_index_end')
+        if 'column_index_begin' in _dict:
+            args['column_index_begin'] = _dict.get('column_index_begin')
+        if 'column_index_end' in _dict:
+            args['column_index_end'] = _dict.get('column_index_end')
+        return cls(**args)
+
+    def _to_dict(self):
+        """Return a json dictionary representing this model."""
+        _dict = {}
+        if hasattr(self, 'cell_id') and self.cell_id is not None:
+            _dict['cell_id'] = self.cell_id
+        if hasattr(self, 'location') and self.location is not None:
+            _dict['location'] = self.location
+        if hasattr(self, 'text') and self.text is not None:
+            _dict['text'] = self.text
+        if hasattr(self,
+                   'row_index_begin') and self.row_index_begin is not None:
+            _dict['row_index_begin'] = self.row_index_begin
+        if hasattr(self, 'row_index_end') and self.row_index_end is not None:
+            _dict['row_index_end'] = self.row_index_end
+        if hasattr(
+                self,
+                'column_index_begin') and self.column_index_begin is not None:
+            _dict['column_index_begin'] = self.column_index_begin
+        if hasattr(self,
+                   'column_index_end') and self.column_index_end is not None:
+            _dict['column_index_end'] = self.column_index_end
+        return _dict
+
+    def __str__(self):
+        """Return a `str` version of this TableHeaders object."""
+        return json.dumps(self._to_dict(), indent=2)
+
+    def __eq__(self, other):
+        """Return `true` when self and other are equal, false otherwise."""
+        if not isinstance(other, self.__class__):
+            return False
+        return self.__dict__ == other.__dict__
+
+    def __ne__(self, other):
+        """Return `true` when self and other are not equal, false otherwise."""
+        return not self == other
+
+
+class TableKeyValuePairs():
+    """
+    Key-value pairs detected across cell boundaries.
+
+    :attr TableCellKey key: (optional) A key in a key-value pair.
+    :attr list[TableCellValues] value: (optional) A list of values in a key-value
+    pair.
+    """
+
+    def __init__(self, *, key=None, value=None):
+        """
+        Initialize a TableKeyValuePairs object.
+
+        :param TableCellKey key: (optional) A key in a key-value pair.
+        :param list[TableCellValues] value: (optional) A list of values in a
+        key-value pair.
+ """ + self.key = key + self.value = value + + @classmethod + def _from_dict(cls, _dict): + """Initialize a TableKeyValuePairs object from a json dictionary.""" + args = {} + valid_keys = ['key', 'value'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class TableKeyValuePairs: ' + + ', '.join(bad_keys)) + if 'key' in _dict: + args['key'] = TableCellKey._from_dict(_dict.get('key')) + if 'value' in _dict: + args['value'] = [ + TableCellValues._from_dict(x) for x in (_dict.get('value')) + ] + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'key') and self.key is not None: + _dict['key'] = self.key._to_dict() + if hasattr(self, 'value') and self.value is not None: + _dict['value'] = [x._to_dict() for x in self.value] + return _dict + + def __str__(self): + """Return a `str` version of this TableKeyValuePairs object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class TableResultTable(): + """ + Full table object retrieved from Table Understanding Enrichment. + + :attr TableElementLocation location: (optional) The numeric location of the + identified element in the document, represented with two integers labeled + `begin` and `end`. + :attr str text: (optional) The textual contents of the current table from the + input document without associated markup content. + :attr TableTextLocation section_title: (optional) Text and associated location + within a table. + :attr TableTextLocation title: (optional) Text and associated location within a + table. 
+ :attr list[TableHeaders] table_headers: (optional) An array of table-level cells + that apply as headers to all the other cells in the current table. + :attr list[TableRowHeaders] row_headers: (optional) An array of row-level cells, + each applicable as a header to other cells in the same row as itself, of the + current table. + :attr list[TableColumnHeaders] column_headers: (optional) An array of + column-level cells, each applicable as a header to other cells in the same + column as itself, of the current table. + :attr list[TableKeyValuePairs] key_value_pairs: (optional) An array of key-value + pairs identified in the current table. + :attr list[TableBodyCells] body_cells: (optional) An array of cells that are + neither table header nor column header nor row header cells, of the current + table with corresponding row and column header associations. + :attr list[TableTextLocation] contexts: (optional) An array of lists of textual + entries across the document related to the current table being parsed. + """ + + def __init__(self, + *, + location=None, + text=None, + section_title=None, + title=None, + table_headers=None, + row_headers=None, + column_headers=None, + key_value_pairs=None, + body_cells=None, + contexts=None): + """ + Initialize a TableResultTable object. + + :param TableElementLocation location: (optional) The numeric location of + the identified element in the document, represented with two integers + labeled `begin` and `end`. + :param str text: (optional) The textual contents of the current table from + the input document without associated markup content. + :param TableTextLocation section_title: (optional) Text and associated + location within a table. + :param TableTextLocation title: (optional) Text and associated location + within a table. + :param list[TableHeaders] table_headers: (optional) An array of table-level + cells that apply as headers to all the other cells in the current table. 
+        :param list[TableRowHeaders] row_headers: (optional) An array of row-level
+        cells, each applicable as a header to other cells in the same row as
+        itself, of the current table.
+        :param list[TableColumnHeaders] column_headers: (optional) An array of
+        column-level cells, each applicable as a header to other cells in the same
+        column as itself, of the current table.
+        :param list[TableKeyValuePairs] key_value_pairs: (optional) An array of
+        key-value pairs identified in the current table.
+        :param list[TableBodyCells] body_cells: (optional) An array of cells that
+        are neither table header nor column header nor row header cells, of the
+        current table with corresponding row and column header associations.
+        :param list[TableTextLocation] contexts: (optional) An array of lists of
+        textual entries across the document related to the current table being
+        parsed.
+        """
+        self.location = location
+        self.text = text
+        self.section_title = section_title
+        self.title = title
+        self.table_headers = table_headers
+        self.row_headers = row_headers
+        self.column_headers = column_headers
+        self.key_value_pairs = key_value_pairs
+        self.body_cells = body_cells
+        self.contexts = contexts
+
+    @classmethod
+    def _from_dict(cls, _dict):
+        """Initialize a TableResultTable object from a json dictionary."""
+        args = {}
+        valid_keys = [
+            'location', 'text', 'section_title', 'title', 'table_headers',
+            'row_headers', 'column_headers', 'key_value_pairs', 'body_cells',
+            'contexts'
+        ]
+        bad_keys = set(_dict.keys()) - set(valid_keys)
+        if bad_keys:
+            raise ValueError(
+                'Unrecognized keys detected in dictionary for class TableResultTable: '
+                + ', '.join(bad_keys))
+        if 'location' in _dict:
+            args['location'] = TableElementLocation._from_dict(
+                _dict.get('location'))
+        if 'text' in _dict:
+            args['text'] = _dict.get('text')
+        if 'section_title' in _dict:
+            args['section_title'] = TableTextLocation._from_dict(
+                _dict.get('section_title'))
+        if 'title' in _dict:
+            args['title'] = TableTextLocation._from_dict(_dict.get('title'))
+        if 'table_headers' in _dict:
+            args['table_headers'] = [
+                TableHeaders._from_dict(x) for x in (_dict.get('table_headers'))
+            ]
+        if 'row_headers' in _dict:
+            args['row_headers'] = [
+                TableRowHeaders._from_dict(x)
+                for x in (_dict.get('row_headers'))
+            ]
+        if 'column_headers' in _dict:
+            args['column_headers'] = [
+                TableColumnHeaders._from_dict(x)
+                for x in (_dict.get('column_headers'))
+            ]
+        if 'key_value_pairs' in _dict:
+            args['key_value_pairs'] = [
+                TableKeyValuePairs._from_dict(x)
+                for x in (_dict.get('key_value_pairs'))
+            ]
+        if 'body_cells' in _dict:
+            args['body_cells'] = [
+                TableBodyCells._from_dict(x) for x in (_dict.get('body_cells'))
+            ]
+        if 'contexts' in _dict:
+            args['contexts'] = [
+                TableTextLocation._from_dict(x) for x in (_dict.get('contexts'))
+            ]
+        return cls(**args)
+
+    def _to_dict(self):
+        """Return a json dictionary representing this model."""
+        _dict = {}
+        if hasattr(self, 'location') and self.location is not None:
+            _dict['location'] = self.location._to_dict()
+        if hasattr(self, 'text') and self.text is not None:
+            _dict['text'] = self.text
+        if hasattr(self, 'section_title') and self.section_title is not None:
+            _dict['section_title'] = self.section_title._to_dict()
+        if hasattr(self, 'title') and self.title is not None:
+            _dict['title'] = self.title._to_dict()
+        if hasattr(self, 'table_headers') and self.table_headers is not None:
+            _dict['table_headers'] = [x._to_dict() for x in self.table_headers]
+        if hasattr(self, 'row_headers') and self.row_headers is not None:
+            _dict['row_headers'] = [x._to_dict() for x in self.row_headers]
+        if hasattr(self, 'column_headers') and self.column_headers is not None:
+            _dict['column_headers'] = [
+                x._to_dict() for x in self.column_headers
+            ]
+        if hasattr(self,
+                   'key_value_pairs') and self.key_value_pairs is not None:
+            _dict['key_value_pairs'] = [
+                x._to_dict() for x in self.key_value_pairs
+            ]
+        if hasattr(self, 'body_cells') and self.body_cells is not None:
+            _dict['body_cells'] = [x._to_dict() for x in self.body_cells]
+        if hasattr(self, 'contexts') and self.contexts is not None:
+            _dict['contexts'] = [x._to_dict() for x in self.contexts]
+        return _dict
+
+    def __str__(self):
+        """Return a `str` version of this TableResultTable object."""
+        return json.dumps(self._to_dict(), indent=2)
+
+    def __eq__(self, other):
+        """Return `true` when self and other are equal, false otherwise."""
+        if not isinstance(other, self.__class__):
+            return False
+        return self.__dict__ == other.__dict__
+
+    def __ne__(self, other):
+        """Return `true` when self and other are not equal, false otherwise."""
+        return not self == other
+
+
+class TableRowHeaderIds():
+    """
+    An array of values, each being the `id` value of a row header that is applicable to
+    this body cell.
+
+    :attr str id: (optional) The `id` value of a row header.
+    """
+
+    def __init__(self, *, id=None):
+        """
+        Initialize a TableRowHeaderIds object.
+
+        :param str id: (optional) The `id` value of a row header.
+ """ + self.id = id + + @classmethod + def _from_dict(cls, _dict): + """Initialize a TableRowHeaderIds object from a json dictionary.""" + args = {} + valid_keys = ['id'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class TableRowHeaderIds: ' + + ', '.join(bad_keys)) + if 'id' in _dict: + args['id'] = _dict.get('id') + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'id') and self.id is not None: + _dict['id'] = self.id + return _dict + + def __str__(self): + """Return a `str` version of this TableRowHeaderIds object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class TableRowHeaderTexts(): + """ + An array of values, each being the `text` value of a row header that is applicable to + this body cell. + + :attr str text: (optional) The `text` value of a row header. + """ + + def __init__(self, *, text=None): + """ + Initialize a TableRowHeaderTexts object. + + :param str text: (optional) The `text` value of a row header. 
+ """ + self.text = text + + @classmethod + def _from_dict(cls, _dict): + """Initialize a TableRowHeaderTexts object from a json dictionary.""" + args = {} + valid_keys = ['text'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class TableRowHeaderTexts: ' + + ', '.join(bad_keys)) + if 'text' in _dict: + args['text'] = _dict.get('text') + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'text') and self.text is not None: + _dict['text'] = self.text + return _dict + + def __str__(self): + """Return a `str` version of this TableRowHeaderTexts object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class TableRowHeaderTextsNormalized(): + """ + If you provide customization input, the normalized version of the row header texts + according to the customization; otherwise, the same value as `row_header_texts`. + + :attr str text_normalized: (optional) The normalized version of a row header + text. + """ + + def __init__(self, *, text_normalized=None): + """ + Initialize a TableRowHeaderTextsNormalized object. + + :param str text_normalized: (optional) The normalized version of a row + header text. 
+ """ + self.text_normalized = text_normalized + + @classmethod + def _from_dict(cls, _dict): + """Initialize a TableRowHeaderTextsNormalized object from a json dictionary.""" + args = {} + valid_keys = ['text_normalized'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class TableRowHeaderTextsNormalized: ' + + ', '.join(bad_keys)) + if 'text_normalized' in _dict: + args['text_normalized'] = _dict.get('text_normalized') + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, + 'text_normalized') and self.text_normalized is not None: + _dict['text_normalized'] = self.text_normalized + return _dict + + def __str__(self): + """Return a `str` version of this TableRowHeaderTextsNormalized object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class TableRowHeaders(): + """ + Row-level cells, each applicable as a header to other cells in the same row as itself, + of the current table. + + :attr str cell_id: (optional) The unique ID of the cell in the current table. + :attr TableElementLocation location: (optional) The numeric location of the + identified element in the document, represented with two integers labeled + `begin` and `end`. + :attr str text: (optional) The textual contents of this cell from the input + document without associated markup content. + :attr str text_normalized: (optional) If you provide customization input, the + normalized version of the cell text according to the customization; otherwise, + the same value as `text`. 
+ :attr int row_index_begin: (optional) The `begin` index of this cell's `row` + location in the current table. + :attr int row_index_end: (optional) The `end` index of this cell's `row` + location in the current table. + :attr int column_index_begin: (optional) The `begin` index of this cell's + `column` location in the current table. + :attr int column_index_end: (optional) The `end` index of this cell's `column` + location in the current table. + """ + + def __init__(self, + *, + cell_id=None, + location=None, + text=None, + text_normalized=None, + row_index_begin=None, + row_index_end=None, + column_index_begin=None, + column_index_end=None): + """ + Initialize a TableRowHeaders object. + + :param str cell_id: (optional) The unique ID of the cell in the current + table. + :param TableElementLocation location: (optional) The numeric location of + the identified element in the document, represented with two integers + labeled `begin` and `end`. + :param str text: (optional) The textual contents of this cell from the + input document without associated markup content. + :param str text_normalized: (optional) If you provide customization input, + the normalized version of the cell text according to the customization; + otherwise, the same value as `text`. + :param int row_index_begin: (optional) The `begin` index of this cell's + `row` location in the current table. + :param int row_index_end: (optional) The `end` index of this cell's `row` + location in the current table. + :param int column_index_begin: (optional) The `begin` index of this cell's + `column` location in the current table. + :param int column_index_end: (optional) The `end` index of this cell's + `column` location in the current table. 
+        """
+        self.cell_id = cell_id
+        self.location = location
+        self.text = text
+        self.text_normalized = text_normalized
+        self.row_index_begin = row_index_begin
+        self.row_index_end = row_index_end
+        self.column_index_begin = column_index_begin
+        self.column_index_end = column_index_end
+
+    @classmethod
+    def _from_dict(cls, _dict):
+        """Initialize a TableRowHeaders object from a json dictionary."""
+        args = {}
+        valid_keys = [
+            'cell_id', 'location', 'text', 'text_normalized', 'row_index_begin',
+            'row_index_end', 'column_index_begin', 'column_index_end'
+        ]
+        bad_keys = set(_dict.keys()) - set(valid_keys)
+        if bad_keys:
+            raise ValueError(
+                'Unrecognized keys detected in dictionary for class TableRowHeaders: '
+                + ', '.join(bad_keys))
+        if 'cell_id' in _dict:
+            args['cell_id'] = _dict.get('cell_id')
+        if 'location' in _dict:
+            args['location'] = TableElementLocation._from_dict(
+                _dict.get('location'))
+        if 'text' in _dict:
+            args['text'] = _dict.get('text')
+        if 'text_normalized' in _dict:
+            args['text_normalized'] = _dict.get('text_normalized')
+        if 'row_index_begin' in _dict:
+            args['row_index_begin'] = _dict.get('row_index_begin')
+        if 'row_index_end' in _dict:
+            args['row_index_end'] = _dict.get('row_index_end')
+        if 'column_index_begin' in _dict:
+            args['column_index_begin'] = _dict.get('column_index_begin')
+        if 'column_index_end' in _dict:
+            args['column_index_end'] = _dict.get('column_index_end')
+        return cls(**args)
+
+    def _to_dict(self):
+        """Return a json dictionary representing this model."""
+        _dict = {}
+        if hasattr(self, 'cell_id') and self.cell_id is not None:
+            _dict['cell_id'] = self.cell_id
+        if hasattr(self, 'location') and self.location is not None:
+            _dict['location'] = self.location._to_dict()
+        if hasattr(self, 'text') and self.text is not None:
+            _dict['text'] = self.text
+        if hasattr(self,
+                   'text_normalized') and self.text_normalized is not None:
+            _dict['text_normalized'] = self.text_normalized
+        if hasattr(self,
+                   'row_index_begin') and self.row_index_begin is not None:
+            _dict['row_index_begin'] = self.row_index_begin
+        if hasattr(self, 'row_index_end') and self.row_index_end is not None:
+            _dict['row_index_end'] = self.row_index_end
+        if hasattr(
+                self,
+                'column_index_begin') and self.column_index_begin is not None:
+            _dict['column_index_begin'] = self.column_index_begin
+        if hasattr(self,
+                   'column_index_end') and self.column_index_end is not None:
+            _dict['column_index_end'] = self.column_index_end
+        return _dict
+
+    def __str__(self):
+        """Return a `str` version of this TableRowHeaders object."""
+        return json.dumps(self._to_dict(), indent=2)
+
+    def __eq__(self, other):
+        """Return `true` when self and other are equal, false otherwise."""
+        if not isinstance(other, self.__class__):
+            return False
+        return self.__dict__ == other.__dict__
+
+    def __ne__(self, other):
+        """Return `true` when self and other are not equal, false otherwise."""
+        return not self == other
+
+
+class TableTextLocation():
+    """
+    Text and associated location within a table.
+
+    :attr str text: (optional) The text retrieved.
+    :attr TableElementLocation location: (optional) The numeric location of the
+    identified element in the document, represented with two integers labeled
+    `begin` and `end`.
+    """
+
+    def __init__(self, *, text=None, location=None):
+        """
+        Initialize a TableTextLocation object.
+
+        :param str text: (optional) The text retrieved.
+        :param TableElementLocation location: (optional) The numeric location of
+        the identified element in the document, represented with two integers
+        labeled `begin` and `end`.
+ """ + self.text = text + self.location = location + + @classmethod + def _from_dict(cls, _dict): + """Initialize a TableTextLocation object from a json dictionary.""" + args = {} + valid_keys = ['text', 'location'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class TableTextLocation: ' + + ', '.join(bad_keys)) + if 'text' in _dict: + args['text'] = _dict.get('text') + if 'location' in _dict: + args['location'] = TableElementLocation._from_dict( + _dict.get('location')) + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'text') and self.text is not None: + _dict['text'] = self.text + if hasattr(self, 'location') and self.location is not None: + _dict['location'] = self.location._to_dict() + return _dict + + def __str__(self): + """Return a `str` version of this TableTextLocation object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class TrainingExample(): + """ + Object containing example response details for a training query. + + :attr str document_id: The document ID associated with this training example. + :attr str collection_id: The collection ID associated with this training + example. + :attr int relevance: The relevance of the training example. + :attr date created: (optional) The date and time the example was created. + :attr date updated: (optional) The date and time the example was updated. + """ + + def __init__(self, + document_id, + collection_id, + relevance, + *, + created=None, + updated=None): + """ + Initialize a TrainingExample object. 
+
+        :param str document_id: The document ID associated with this training
+        example.
+        :param str collection_id: The collection ID associated with this training
+        example.
+        :param int relevance: The relevance of the training example.
+        :param date created: (optional) The date and time the example was created.
+        :param date updated: (optional) The date and time the example was updated.
+        """
+        self.document_id = document_id
+        self.collection_id = collection_id
+        self.relevance = relevance
+        self.created = created
+        self.updated = updated
+
+    @classmethod
+    def _from_dict(cls, _dict):
+        """Initialize a TrainingExample object from a json dictionary."""
+        args = {}
+        valid_keys = [
+            'document_id', 'collection_id', 'relevance', 'created', 'updated'
+        ]
+        bad_keys = set(_dict.keys()) - set(valid_keys)
+        if bad_keys:
+            raise ValueError(
+                'Unrecognized keys detected in dictionary for class TrainingExample: '
+                + ', '.join(bad_keys))
+        if 'document_id' in _dict:
+            args['document_id'] = _dict.get('document_id')
+        else:
+            raise ValueError(
+                'Required property \'document_id\' not present in TrainingExample JSON'
+            )
+        if 'collection_id' in _dict:
+            args['collection_id'] = _dict.get('collection_id')
+        else:
+            raise ValueError(
+                'Required property \'collection_id\' not present in TrainingExample JSON'
+            )
+        if 'relevance' in _dict:
+            args['relevance'] = _dict.get('relevance')
+        else:
+            raise ValueError(
+                'Required property \'relevance\' not present in TrainingExample JSON'
+            )
+        if 'created' in _dict:
+            args['created'] = _dict.get('created')
+        if 'updated' in _dict:
+            args['updated'] = _dict.get('updated')
+        return cls(**args)
+
+    def _to_dict(self):
+        """Return a json dictionary representing this model."""
+        _dict = {}
+        if hasattr(self, 'document_id') and self.document_id is not None:
+            _dict['document_id'] = self.document_id
+        if hasattr(self, 'collection_id') and self.collection_id is not None:
+            _dict['collection_id'] = self.collection_id
+        if hasattr(self, 'relevance') and self.relevance is not None:
+            _dict['relevance'] = self.relevance
+        if hasattr(self, 'created') and self.created is not None:
+            _dict['created'] = self.created
+        if hasattr(self, 'updated') and self.updated is not None:
+            _dict['updated'] = self.updated
+        return _dict
+
+    def __str__(self):
+        """Return a `str` version of this TrainingExample object."""
+        return json.dumps(self._to_dict(), indent=2)
+
+    def __eq__(self, other):
+        """Return `true` when self and other are equal, false otherwise."""
+        if not isinstance(other, self.__class__):
+            return False
+        return self.__dict__ == other.__dict__
+
+    def __ne__(self, other):
+        """Return `true` when self and other are not equal, false otherwise."""
+        return not self == other
+
+
+class TrainingQuery():
+    """
+    Object containing training query details.
+
+    :attr str query_id: (optional) The query ID associated with the training query.
+    :attr str natural_language_query: The natural text query for the training query.
+    :attr str filter: (optional) The filter used on the collection before the
+    **natural_language_query** is applied.
+    :attr date created: (optional) The date and time the query was created.
+    :attr date updated: (optional) The date and time the query was updated.
+    :attr list[TrainingExample] examples: Array of training examples.
+    """
+
+    def __init__(self,
+                 natural_language_query,
+                 examples,
+                 *,
+                 query_id=None,
+                 filter=None,
+                 created=None,
+                 updated=None):
+        """
+        Initialize a TrainingQuery object.
+
+        :param str natural_language_query: The natural text query for the training
+        query.
+        :param list[TrainingExample] examples: Array of training examples.
+        :param str query_id: (optional) The query ID associated with the training
+        query.
+        :param str filter: (optional) The filter used on the collection before the
+        **natural_language_query** is applied.
+        :param date created: (optional) The date and time the query was created.
+ :param date updated: (optional) The date and time the query was updated. + """ + self.query_id = query_id + self.natural_language_query = natural_language_query + self.filter = filter + self.created = created + self.updated = updated + self.examples = examples + + @classmethod + def _from_dict(cls, _dict): + """Initialize a TrainingQuery object from a json dictionary.""" + args = {} + valid_keys = [ + 'query_id', 'natural_language_query', 'filter', 'created', + 'updated', 'examples' + ] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class TrainingQuery: ' + + ', '.join(bad_keys)) + if 'query_id' in _dict: + args['query_id'] = _dict.get('query_id') + if 'natural_language_query' in _dict: + args['natural_language_query'] = _dict.get('natural_language_query') + else: + raise ValueError( + 'Required property \'natural_language_query\' not present in TrainingQuery JSON' + ) + if 'filter' in _dict: + args['filter'] = _dict.get('filter') + if 'created' in _dict: + args['created'] = _dict.get('created') + if 'updated' in _dict: + args['updated'] = _dict.get('updated') + if 'examples' in _dict: + args['examples'] = [ + TrainingExample._from_dict(x) for x in (_dict.get('examples')) + ] + else: + raise ValueError( + 'Required property \'examples\' not present in TrainingQuery JSON' + ) + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'query_id') and self.query_id is not None: + _dict['query_id'] = self.query_id + if hasattr(self, 'natural_language_query' + ) and self.natural_language_query is not None: + _dict['natural_language_query'] = self.natural_language_query + if hasattr(self, 'filter') and self.filter is not None: + _dict['filter'] = self.filter + if hasattr(self, 'created') and self.created is not None: + _dict['created'] = self.created + if hasattr(self, 'updated') and self.updated is not None: + 
_dict['updated'] = self.updated + if hasattr(self, 'examples') and self.examples is not None: + _dict['examples'] = [x._to_dict() for x in self.examples] + return _dict + + def __str__(self): + """Return a `str` version of this TrainingQuery object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other + + +class TrainingQuerySet(): + """ + Object specifying the training queries contained in the identified training set. + + :attr list[TrainingQuery] queries: (optional) Array of training queries. + """ + + def __init__(self, *, queries=None): + """ + Initialize a TrainingQuerySet object. + + :param list[TrainingQuery] queries: (optional) Array of training queries. + """ + self.queries = queries + + @classmethod + def _from_dict(cls, _dict): + """Initialize a TrainingQuerySet object from a json dictionary.""" + args = {} + valid_keys = ['queries'] + bad_keys = set(_dict.keys()) - set(valid_keys) + if bad_keys: + raise ValueError( + 'Unrecognized keys detected in dictionary for class TrainingQuerySet: ' + + ', '.join(bad_keys)) + if 'queries' in _dict: + args['queries'] = [ + TrainingQuery._from_dict(x) for x in (_dict.get('queries')) + ] + return cls(**args) + + def _to_dict(self): + """Return a json dictionary representing this model.""" + _dict = {} + if hasattr(self, 'queries') and self.queries is not None: + _dict['queries'] = [x._to_dict() for x in self.queries] + return _dict + + def __str__(self): + """Return a `str` version of this TrainingQuerySet object.""" + return json.dumps(self._to_dict(), indent=2) + + def __eq__(self, other): + """Return `true` when self and other are equal, false otherwise.""" + if not isinstance(other, 
self.__class__): + return False + return self.__dict__ == other.__dict__ + + def __ne__(self, other): + """Return `true` when self and other are not equal, false otherwise.""" + return not self == other diff --git a/ibm_watson/speech_to_text_v1.py b/ibm_watson/speech_to_text_v1.py index 9865543c4..2697d164e 100644 --- a/ibm_watson/speech_to_text_v1.py +++ b/ibm_watson/speech_to_text_v1.py @@ -1045,6 +1045,10 @@ def create_language_model(self, language model can be used only with the base model for which it is created. The model is owned by the instance of the service whose credentials are used to create it. + You can create a maximum of 1024 custom language models, per credential. The + service returns an error if you attempt to create more than 1024 models. You do + not lose any models, but you cannot create any more until your model count is + below the limit. **See also:** [Create a custom language model](https://cloud.ibm.com/docs/services/speech-to-text?topic=speech-to-text-languageCreate#createModel-language). @@ -2278,6 +2282,10 @@ def create_acoustic_model(self, acoustic model can be used only with the base model for which it is created. The model is owned by the instance of the service whose credentials are used to create it. + You can create a maximum of 1024 custom acoustic models, per credential. The + service returns an error if you attempt to create more than 1024 models. You do + not lose any models, but you cannot create any more until your model count is + below the limit. **See also:** [Create a custom acoustic model](https://cloud.ibm.com/docs/services/speech-to-text?topic=speech-to-text-acoustic#createModel-acoustic). 
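Reviewer note: the new Discovery v2 model classes above all follow the same strict `_from_dict`/`_to_dict` pattern (reject unknown keys, require mandatory keys, drop `None` values on serialization). A standalone sketch of that pattern, using a trimmed, illustrative reimplementation of `TrainingExample` rather than the SDK class itself:

```python
import json


class TrainingExample:
    """Minimal standalone sketch of the SDK's model-class pattern."""

    def __init__(self, document_id, collection_id, relevance, *,
                 created=None, updated=None):
        self.document_id = document_id
        self.collection_id = collection_id
        self.relevance = relevance
        self.created = created
        self.updated = updated

    @classmethod
    def _from_dict(cls, _dict):
        # Reject keys the model does not know about.
        valid_keys = {'document_id', 'collection_id', 'relevance',
                      'created', 'updated'}
        bad_keys = set(_dict) - valid_keys
        if bad_keys:
            raise ValueError('Unrecognized keys: ' + ', '.join(sorted(bad_keys)))
        # Enforce required properties before constructing.
        for required in ('document_id', 'collection_id', 'relevance'):
            if required not in _dict:
                raise ValueError(
                    "Required property '%s' not present" % required)
        return cls(**_dict)

    def _to_dict(self):
        # Serialize only the attributes that were actually set.
        return {k: v for k, v in vars(self).items() if v is not None}


example = TrainingExample._from_dict(
    {'document_id': 'doc1', 'collection_id': 'col1', 'relevance': 1})
print(json.dumps(example._to_dict(), sort_keys=True))
# → {"collection_id": "col1", "document_id": "doc1", "relevance": 1}
```

The same round-trip (`_from_dict` then `_to_dict`) applies to `TrainingQuery` and `TrainingQuerySet`, which additionally recurse into their nested example/query lists.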
diff --git a/ibm_watson/speech_to_text_v1_adapter.py b/ibm_watson/speech_to_text_v1_adapter.py
index d9eded45c..9f59e4721 100644
--- a/ibm_watson/speech_to_text_v1_adapter.py
+++ b/ibm_watson/speech_to_text_v1_adapter.py
@@ -20,7 +20,9 @@
 BEARER = 'Bearer'
+
 class SpeechToTextV1Adapter(SpeechToTextV1):
+
     def recognize_using_websocket(self,
                                   audio,
                                   content_type,
@@ -194,7 +196,8 @@ def recognize_using_websocket(self,
             raise ValueError('audio must be provided')
         if not isinstance(audio, AudioSource):
             raise Exception(
-                'audio is not of type AudioSource. Import the class from ibm_watson.websocket')
+                'audio is not of type AudioSource. Import the class from ibm_watson.websocket'
+            )
         if content_type is None:
             raise ValueError('content_type must be provided')
         if recognize_callback is None:
@@ -251,11 +254,7 @@ def recognize_using_websocket(self,
         options = {k: v for k, v in options.items() if v is not None}
         request['options'] = options
-        RecognizeListener(audio,
-                          request.get('options'),
-                          recognize_callback,
-                          request.get('url'),
-                          request.get('headers'),
-                          http_proxy_host,
-                          http_proxy_port,
+        RecognizeListener(audio, request.get('options'), recognize_callback,
+                          request.get('url'), request.get('headers'),
+                          http_proxy_host, http_proxy_port,
                           self.disable_ssl_verification)
diff --git a/ibm_watson/text_to_speech_adapter_v1.py b/ibm_watson/text_to_speech_adapter_v1.py
index 16f71e68b..0c2f92229 100644
--- a/ibm_watson/text_to_speech_adapter_v1.py
+++ b/ibm_watson/text_to_speech_adapter_v1.py
@@ -1,4 +1,3 @@
-
 # coding: utf-8
 # Copyright 2018 IBM All Rights Reserved.
@@ -21,7 +20,9 @@
 BEARER = 'Bearer'
+
 class TextToSpeechV1Adapter(TextToSpeechV1):
+
     def synthesize_using_websocket(self,
                                    text,
                                    synthesize_callback,
@@ -94,18 +95,11 @@ def synthesize_using_websocket(self,
         url += '/v1/synthesize?{0}'.format(urlencode(params))
         request['url'] = url
-        options = {
-            'text': text,
-            'accept': accept,
-            'timings': timings
-        }
+        options = {'text': text, 'accept': accept, 'timings': timings}
         options = {k: v for k, v in options.items() if v is not None}
         request['options'] = options
-        SynthesizeListener(request.get('options'),
-                           synthesize_callback,
-                           request.get('url'),
-                           request.get('headers'),
-                           http_proxy_host,
-                           http_proxy_port,
+        SynthesizeListener(request.get('options'), synthesize_callback,
+                           request.get('url'), request.get('headers'),
+                           http_proxy_host, http_proxy_port,
                            self.disable_ssl_verification)
diff --git a/ibm_watson/version.py b/ibm_watson/version.py
index d5f9f2a7b..fa721b497 100644
--- a/ibm_watson/version.py
+++ b/ibm_watson/version.py
@@ -1 +1 @@
-__version__ = '4.0.4'
+__version__ = '4.1.0'
diff --git a/ibm_watson/visual_recognition_v3.py b/ibm_watson/visual_recognition_v3.py
index 24cfb08d1..bfa038545 100644
--- a/ibm_watson/visual_recognition_v3.py
+++ b/ibm_watson/visual_recognition_v3.py
@@ -166,7 +166,7 @@ def classify(self,
             form_data.append(('url', (None, url, 'text/plain')))
         if threshold:
             form_data.append(
-                ('threshold', (None, threshold, 'application/json')))
+                ('threshold', (None, str(threshold), 'application/json')))
         if owners:
             owners = self._convert_list(owners)
             form_data.append(('owners', (None, owners, 'text/plain')))
diff --git a/ibm_watson/visual_recognition_v4.py b/ibm_watson/visual_recognition_v4.py
index a70ca9b7e..7ccad5166 100644
--- a/ibm_watson/visual_recognition_v4.py
+++ b/ibm_watson/visual_recognition_v4.py
@@ -16,10 +16,6 @@
 """
 Provide images to the IBM Watson™ Visual Recognition service for analysis. The
 service detects objects based on a set of images with training data.
-**Beta:** The Visual Recognition v4 API and Object Detection model are beta features. For
-more information about beta features, see the [Release
-notes](https://cloud.ibm.com/docs/services/visual-recognition?topic=visual-recognition-release-notes#beta).
-{: important}
 """
 import json
@@ -158,7 +154,7 @@ def analyze(self,
                 form_data.append(('image_url', (None, item, 'text/plain')))
         if threshold:
             form_data.append(
-                ('threshold', (None, threshold, 'application/json')))
+                ('threshold', (None, str(threshold), 'application/json')))
         url = '/v4/analyze'
         request = self.prepare_request(method='POST',
@@ -555,7 +551,10 @@ def get_jpeg_image(self, collection_id, image_id, *, size=None, **kwargs):
         :param str collection_id: The identifier of the collection.
         :param str image_id: The identifier of the image.
-        :param str size: (optional) Specify the image size.
+        :param str size: (optional) The image size. Specify `thumbnail` to return a
+               version that maintains the original aspect ratio but is no larger than 200
+               pixels in the larger dimension. For example, an original 800 x 1000 image
+               is resized to 160 x 200 pixels.
         :param dict headers: A `dict` containing the request headers
         :return: A `DetailedResponse` containing the result, headers and HTTP status code.
         :rtype: DetailedResponse
@@ -679,6 +678,47 @@ def add_image_training_data(self,
         response = self.send(request)
         return response
+    def get_training_usage(self, *, start_time=None, end_time=None, **kwargs):
+        """
+        Get training usage.
+
+        Information about the completed training events. You can use this information to
+        determine how close you are to the training limits for the month.
+
+        :param str start_time: (optional) The earliest day to include training
+               events. Specify dates in YYYY-MM-DD format. If empty or not specified, the
+               earliest training event is included.
+        :param str end_time: (optional) The most recent day to include training
+               events. Specify dates in YYYY-MM-DD format. All events for the day are
+               included. If empty or not specified, the current day is used. Specify the
+               same value as `start_time` to request events for a single day.
+        :param dict headers: A `dict` containing the request headers
+        :return: A `DetailedResponse` containing the result, headers and HTTP status code.
+        :rtype: DetailedResponse
+        """
+
+        headers = {}
+        if 'headers' in kwargs:
+            headers.update(kwargs.get('headers'))
+        sdk_headers = get_sdk_headers('watson_vision_combined', 'V4',
+                                      'get_training_usage')
+        headers.update(sdk_headers)
+
+        params = {
+            'version': self.version,
+            'start_time': start_time,
+            'end_time': end_time
+        }
+
+        url = '/v4/training_usage'
+        request = self.prepare_request(method='GET',
+                                       url=url,
+                                       headers=headers,
+                                       params=params,
+                                       accept_json=True)
+        response = self.send(request)
+        return response
+
     #########################
     # User data
     #########################
@@ -736,9 +776,12 @@ class GetJpegImageEnums(object):
     class Size(Enum):
         """
-        Specify the image size.
+        The image size. Specify `thumbnail` to return a version that maintains the
+        original aspect ratio but is no larger than 200 pixels in the larger dimension.
+        For example, an original 800 x 1000 image is resized to 160 x 200 pixels.
         """
         FULL = 'full'
+        THUMBNAIL = 'thumbnail'
 ##############################################################################
@@ -1307,7 +1350,8 @@ class Image():
     :attr ImageDimensions dimensions: Height and width of an image.
     :attr DetectedObjects objects: Container for the list of collections that have
           objects detected in an image.
-    :attr Error errors: (optional) Details about an error.
+    :attr list[Error] errors: (optional) A container for the problems in the
+          request.
     """
     def __init__(self, source, dimensions, objects, *, errors=None):
@@ -1318,7 +1362,8 @@ def __init__(self, source, dimensions, objects, *, errors=None):
         :param ImageDimensions dimensions: Height and width of an image.
         :param DetectedObjects objects: Container for the list of collections that
                have objects detected in an image.
-        :param Error errors: (optional) Details about an error.
+        :param list[Error] errors: (optional) A container for the problems in the
+               request.
         """
         self.source = source
         self.dimensions = dimensions
@@ -1352,7 +1397,9 @@ def _from_dict(cls, _dict):
             raise ValueError(
                 'Required property \'objects\' not present in Image JSON')
         if 'errors' in _dict:
-            args['errors'] = Error._from_dict(_dict.get('errors'))
+            args['errors'] = [
+                Error._from_dict(x) for x in (_dict.get('errors'))
+            ]
         return cls(**args)
     def _to_dict(self):
@@ -1365,7 +1412,7 @@ def _to_dict(self):
         if hasattr(self, 'objects') and self.objects is not None:
             _dict['objects'] = self.objects._to_dict()
         if hasattr(self, 'errors') and self.errors is not None:
-            _dict['errors'] = self.errors._to_dict()
+            _dict['errors'] = [x._to_dict() for x in self.errors]
         return _dict
     def __str__(self):
@@ -1387,38 +1434,40 @@ class ImageDetails():
     """
     Details about an image.
-    :attr str image_id: The identifier of the image.
-    :attr datetime updated: Date and time in Coordinated Universal Time (UTC) that
-          the image was most recently updated.
-    :attr datetime created: Date and time in Coordinated Universal Time (UTC) that
-          the image was created.
+    :attr str image_id: (optional) The identifier of the image.
+    :attr datetime updated: (optional) Date and time in Coordinated Universal Time
+          (UTC) that the image was most recently updated.
+    :attr datetime created: (optional) Date and time in Coordinated Universal Time
+          (UTC) that the image was created.
     :attr ImageSource source: The source type of the image.
-    :attr ImageDimensions dimensions: Height and width of an image.
-    :attr Error errors: (optional) Details about an error.
-    :attr TrainingDataObjects training_data: Training data for all objects.
+    :attr ImageDimensions dimensions: (optional) Height and width of an image.
+    :attr list[Error] errors: (optional)
+    :attr TrainingDataObjects training_data: (optional) Training data for all
+          objects.
     """
     def __init__(self,
-                 image_id,
-                 updated,
-                 created,
                  source,
-                 dimensions,
-                 training_data,
                  *,
-                 errors=None):
+                 image_id=None,
+                 updated=None,
+                 created=None,
+                 dimensions=None,
+                 errors=None,
+                 training_data=None):
         """
         Initialize a ImageDetails object.
-        :param str image_id: The identifier of the image.
-        :param datetime updated: Date and time in Coordinated Universal Time (UTC)
-               that the image was most recently updated.
-        :param datetime created: Date and time in Coordinated Universal Time (UTC)
-               that the image was created.
         :param ImageSource source: The source type of the image.
-        :param ImageDimensions dimensions: Height and width of an image.
-        :param TrainingDataObjects training_data: Training data for all objects.
-        :param Error errors: (optional) Details about an error.
+        :param str image_id: (optional) The identifier of the image.
+        :param datetime updated: (optional) Date and time in Coordinated Universal
+               Time (UTC) that the image was most recently updated.
+        :param datetime created: (optional) Date and time in Coordinated Universal
+               Time (UTC) that the image was created.
+        :param ImageDimensions dimensions: (optional) Height and width of an image.
+        :param list[Error] errors: (optional)
+        :param TrainingDataObjects training_data: (optional) Training data for all
+               objects.
         """
         self.image_id = image_id
         self.updated = updated
@@ -1443,22 +1492,10 @@ def _from_dict(cls, _dict):
                 + ', '.join(bad_keys))
         if 'image_id' in _dict:
             args['image_id'] = _dict.get('image_id')
-        else:
-            raise ValueError(
-                'Required property \'image_id\' not present in ImageDetails JSON'
-            )
         if 'updated' in _dict:
             args['updated'] = string_to_datetime(_dict.get('updated'))
-        else:
-            raise ValueError(
-                'Required property \'updated\' not present in ImageDetails JSON'
-            )
         if 'created' in _dict:
             args['created'] = string_to_datetime(_dict.get('created'))
-        else:
-            raise ValueError(
-                'Required property \'created\' not present in ImageDetails JSON'
-            )
         if 'source' in _dict:
             args['source'] = ImageSource._from_dict(_dict.get('source'))
         else:
@@ -1467,19 +1504,13 @@ def _from_dict(cls, _dict):
         if 'dimensions' in _dict:
             args['dimensions'] = ImageDimensions._from_dict(
                 _dict.get('dimensions'))
-        else:
-            raise ValueError(
-                'Required property \'dimensions\' not present in ImageDetails JSON'
-            )
         if 'errors' in _dict:
-            args['errors'] = Error._from_dict(_dict.get('errors'))
+            args['errors'] = [
+                Error._from_dict(x) for x in (_dict.get('errors'))
+            ]
         if 'training_data' in _dict:
             args['training_data'] = TrainingDataObjects._from_dict(
                 _dict.get('training_data'))
-        else:
-            raise ValueError(
-                'Required property \'training_data\' not present in ImageDetails JSON'
-            )
         return cls(**args)
     def _to_dict(self):
@@ -1496,7 +1527,7 @@ def _to_dict(self):
         if hasattr(self, 'dimensions') and self.dimensions is not None:
             _dict['dimensions'] = self.dimensions._to_dict()
         if hasattr(self, 'errors') and self.errors is not None:
-            _dict['errors'] = self.errors._to_dict()
+            _dict['errors'] = [x._to_dict() for x in self.errors]
         if hasattr(self, 'training_data') and self.training_data is not None:
             _dict['training_data'] = self.training_data._to_dict()
         return _dict
@@ -1593,16 +1624,16 @@ class ImageDimensions():
     """
     Height and width of an image.
-    :attr int height: Height in pixels of the image.
-    :attr int width: Width in pixels of the image.
+    :attr int height: (optional) Height in pixels of the image.
+    :attr int width: (optional) Width in pixels of the image.
     """
-    def __init__(self, height, width):
+    def __init__(self, *, height=None, width=None):
         """
         Initialize a ImageDimensions object.
-        :param int height: Height in pixels of the image.
-        :param int width: Width in pixels of the image.
+        :param int height: (optional) Height in pixels of the image.
+        :param int width: (optional) Width in pixels of the image.
         """
         self.height = height
         self.width = width
@@ -1619,16 +1650,8 @@ def _from_dict(cls, _dict):
                 + ', '.join(bad_keys))
         if 'height' in _dict:
             args['height'] = _dict.get('height')
-        else:
-            raise ValueError(
-                'Required property \'height\' not present in ImageDimensions JSON'
-            )
         if 'width' in _dict:
             args['width'] = _dict.get('width')
-        else:
-            raise ValueError(
-                'Required property \'width\' not present in ImageDimensions JSON'
-            )
         return cls(**args)
     def _to_dict(self):
@@ -2270,6 +2293,219 @@ def __ne__(self, other):
         return not self == other
+class TrainingEvent():
+    """
+    Details about the training event.
+
+    :attr str type: (optional) Trained object type. Only `objects` is currently
+          supported.
+    :attr str collection_id: (optional) Identifier of the trained collection.
+    :attr datetime completion_time: (optional) Date and time in Coordinated
+          Universal Time (UTC) that training on the collection finished.
+    :attr str status: (optional) Training status of the training event.
+    :attr int image_count: (optional) The total number of images that were used in
+          training for this training event.
+    """
+
+    def __init__(self,
+                 *,
+                 type=None,
+                 collection_id=None,
+                 completion_time=None,
+                 status=None,
+                 image_count=None):
+        """
+        Initialize a TrainingEvent object.
+
+        :param str type: (optional) Trained object type. Only `objects` is
+               currently supported.
+        :param str collection_id: (optional) Identifier of the trained collection.
+        :param datetime completion_time: (optional) Date and time in Coordinated
+               Universal Time (UTC) that training on the collection finished.
+        :param str status: (optional) Training status of the training event.
+        :param int image_count: (optional) The total number of images that were
+               used in training for this training event.
+        """
+        self.type = type
+        self.collection_id = collection_id
+        self.completion_time = completion_time
+        self.status = status
+        self.image_count = image_count
+
+    @classmethod
+    def _from_dict(cls, _dict):
+        """Initialize a TrainingEvent object from a json dictionary."""
+        args = {}
+        valid_keys = [
+            'type', 'collection_id', 'completion_time', 'status', 'image_count'
+        ]
+        bad_keys = set(_dict.keys()) - set(valid_keys)
+        if bad_keys:
+            raise ValueError(
+                'Unrecognized keys detected in dictionary for class TrainingEvent: '
+                + ', '.join(bad_keys))
+        if 'type' in _dict:
+            args['type'] = _dict.get('type')
+        if 'collection_id' in _dict:
+            args['collection_id'] = _dict.get('collection_id')
+        if 'completion_time' in _dict:
+            args['completion_time'] = string_to_datetime(
+                _dict.get('completion_time'))
+        if 'status' in _dict:
+            args['status'] = _dict.get('status')
+        if 'image_count' in _dict:
+            args['image_count'] = _dict.get('image_count')
+        return cls(**args)
+
+    def _to_dict(self):
+        """Return a json dictionary representing this model."""
+        _dict = {}
+        if hasattr(self, 'type') and self.type is not None:
+            _dict['type'] = self.type
+        if hasattr(self, 'collection_id') and self.collection_id is not None:
+            _dict['collection_id'] = self.collection_id
+        if hasattr(self,
+                   'completion_time') and self.completion_time is not None:
+            _dict['completion_time'] = datetime_to_string(self.completion_time)
+        if hasattr(self, 'status') and self.status is not None:
+            _dict['status'] = self.status
+        if hasattr(self, 'image_count') and self.image_count is not None:
+            _dict['image_count'] = self.image_count
+        return _dict
+
+    def __str__(self):
+        """Return a `str` version of this TrainingEvent object."""
+        return json.dumps(self._to_dict(), indent=2)
+
+    def __eq__(self, other):
+        """Return `true` when self and other are equal, false otherwise."""
+        if not isinstance(other, self.__class__):
+            return False
+        return self.__dict__ == other.__dict__
+
+    def __ne__(self, other):
+        """Return `true` when self and other are not equal, false otherwise."""
+        return not self == other
+
+    class TypeEnum(Enum):
+        """
+        Trained object type. Only `objects` is currently supported.
+        """
+        OBJECTS = "objects"
+
+    class StatusEnum(Enum):
+        """
+        Training status of the training event.
+        """
+        FAILED = "failed"
+        SUCCEEDED = "succeeded"
+
+
+class TrainingEvents():
+    """
+    Details about the training events.
+
+    :attr datetime start_time: (optional) The starting day for the returned training
+          events in Coordinated Universal Time (UTC). If not specified in the request, it
+          identifies the earliest training event.
+    :attr datetime end_time: (optional) The ending day for the returned training
+          events in Coordinated Universal Time (UTC). If not specified in the request, it
+          lists the current time.
+    :attr int completed_events: (optional) The total number of training events in
+          the response for the start and end times.
+    :attr int trained_images: (optional) The total number of images that were used
+          in training for the start and end times.
+    :attr list[TrainingEvent] events: (optional) The completed training events for
+          the start and end time.
+    """
+
+    def __init__(self,
+                 *,
+                 start_time=None,
+                 end_time=None,
+                 completed_events=None,
+                 trained_images=None,
+                 events=None):
+        """
+        Initialize a TrainingEvents object.
+
+        :param datetime start_time: (optional) The starting day for the returned
+               training events in Coordinated Universal Time (UTC). If not specified in
+               the request, it identifies the earliest training event.
+        :param datetime end_time: (optional) The ending day for the returned
+               training events in Coordinated Universal Time (UTC). If not specified in
+               the request, it lists the current time.
+        :param int completed_events: (optional) The total number of training events
+               in the response for the start and end times.
+        :param int trained_images: (optional) The total number of images that were
+               used in training for the start and end times.
+        :param list[TrainingEvent] events: (optional) The completed training events
+               for the start and end time.
+        """
+        self.start_time = start_time
+        self.end_time = end_time
+        self.completed_events = completed_events
+        self.trained_images = trained_images
+        self.events = events
+
+    @classmethod
+    def _from_dict(cls, _dict):
+        """Initialize a TrainingEvents object from a json dictionary."""
+        args = {}
+        valid_keys = [
+            'start_time', 'end_time', 'completed_events', 'trained_images',
+            'events'
+        ]
+        bad_keys = set(_dict.keys()) - set(valid_keys)
+        if bad_keys:
+            raise ValueError(
+                'Unrecognized keys detected in dictionary for class TrainingEvents: '
+                + ', '.join(bad_keys))
+        if 'start_time' in _dict:
+            args['start_time'] = string_to_datetime(_dict.get('start_time'))
+        if 'end_time' in _dict:
+            args['end_time'] = string_to_datetime(_dict.get('end_time'))
+        if 'completed_events' in _dict:
+            args['completed_events'] = _dict.get('completed_events')
+        if 'trained_images' in _dict:
+            args['trained_images'] = _dict.get('trained_images')
+        if 'events' in _dict:
+            args['events'] = [
+                TrainingEvent._from_dict(x) for x in (_dict.get('events'))
+            ]
+        return cls(**args)
+
+    def _to_dict(self):
+        """Return a json dictionary representing this model."""
+        _dict = {}
+        if hasattr(self, 'start_time') and self.start_time is not None:
+            _dict['start_time'] = datetime_to_string(self.start_time)
+        if hasattr(self, 'end_time') and self.end_time is not None:
+            _dict['end_time'] = datetime_to_string(self.end_time)
+        if hasattr(self,
+                   'completed_events') and self.completed_events is not None:
+            _dict['completed_events'] = self.completed_events
+        if hasattr(self, 'trained_images') and self.trained_images is not None:
+            _dict['trained_images'] = self.trained_images
+        if hasattr(self, 'events') and self.events is not None:
+            _dict['events'] = [x._to_dict() for x in self.events]
+        return _dict
+
+    def __str__(self):
+        """Return a `str` version of this TrainingEvents object."""
+        return json.dumps(self._to_dict(), indent=2)
+
+    def __eq__(self, other):
+        """Return `true` when self and other are equal, false otherwise."""
+        if not isinstance(other, self.__class__):
+            return False
+        return self.__dict__ == other.__dict__
+
+    def __ne__(self, other):
+        """Return `true` when self and other are not equal, false otherwise."""
+        return not self == other
+
+
 class TrainingStatus():
     """
     Training status information for the collection.
@@ -2442,7 +2678,7 @@ def _from_dict(cls, _dict):
                 'Unrecognized keys detected in dictionary for class FileWithMetadata: '
                 + ', '.join(bad_keys))
         if 'data' in _dict:
-            args['data'] = _dict.get('data')
+            args['data'] = file._from_dict(_dict.get('data'))
         else:
             raise ValueError(
                 'Required property \'data\' not present in FileWithMetadata JSON'
@@ -2457,7 +2693,7 @@ def _to_dict(self):
         """Return a json dictionary representing this model."""
         _dict = {}
         if hasattr(self, 'data') and self.data is not None:
-            _dict['data'] = self.data.__str__()
+            _dict['data'] = self.data
         if hasattr(self, 'filename') and self.filename is not None:
             _dict['filename'] = self.filename
         if hasattr(self, 'content_type') and self.content_type is not None:
diff --git a/ibm_watson/websocket/audio_source.py b/ibm_watson/websocket/audio_source.py
index b33930578..dfeb44b8e 100644
--- a/ibm_watson/websocket/audio_source.py
+++ b/ibm_watson/websocket/audio_source.py
@@ -14,6 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
+
 class AudioSource(object):
     """"Audio source for the speech to text recognize using websocket"""
diff --git a/ibm_watson/websocket/recognize_abstract_callback.py b/ibm_watson/websocket/recognize_abstract_callback.py
index ed77ce253..1c8ab5220 100644
--- a/ibm_watson/websocket/recognize_abstract_callback.py
+++ b/ibm_watson/websocket/recognize_abstract_callback.py
@@ -16,6 +16,7 @@
 class RecognizeCallback(object):
+
     def __init__(self):
         pass
diff --git a/ibm_watson/websocket/recognize_listener.py b/ibm_watson/websocket/recognize_listener.py
index a37e6792e..09f2f1276 100644
--- a/ibm_watson/websocket/recognize_listener.py
+++ b/ibm_watson/websocket/recognize_listener.py
@@ -31,7 +31,9 @@
 START = "start"
 STOP = "stop"
+
 class RecognizeListener(object):
+
     def __init__(self,
                  audio_source,
                  options,
@@ -64,7 +66,8 @@ def __init__(self,
         self.ws_client.run_forever(http_proxy_host=self.http_proxy_host,
                                    http_proxy_port=self.http_proxy_port,
-                                   sslopt={"cert_reqs": ssl.CERT_NONE} if self.verify is not None else None)
+                                   sslopt={"cert_reqs": ssl.CERT_NONE}
+                                   if self.verify is not None else None)
     @classmethod
     def build_start_message(cls, options):
@@ -102,6 +105,7 @@ def send_audio(self, ws):
         :param ws: Websocket client
         """
+
         def run(*args):
             """Background process to stream the data"""
             if not self.audio_source.is_buffer:
@@ -118,7 +122,8 @@ def run(*args):
             try:
                 if not self.audio_source.input.empty():
                     chunk = self.audio_source.input.get()
-                    self.ws_client.send(chunk, websocket.ABNF.OPCODE_BINARY)
+                    self.ws_client.send(chunk,
+                                        websocket.ABNF.OPCODE_BINARY)
                     time.sleep(TEN_MILLISECONDS)
                 if self.audio_source.input.empty():
                     if self.audio_source.is_recording:
@@ -132,7 +137,8 @@ def run(*args):
                     break
                 time.sleep(TEN_MILLISECONDS)
-            self.ws_client.send(self.build_closing_message(), websocket.ABNF.OPCODE_TEXT)
+            self.ws_client.send(self.build_closing_message(),
+                                websocket.ABNF.OPCODE_TEXT)
         thread.start_new_thread(run, ())
@@ -147,7 +153,8 @@ def on_open(self, ws):
         # Send initialization message
         init_data = self.build_start_message(self.options)
-        self.ws_client.send(json.dumps(init_data).encode('utf8'), websocket.ABNF.OPCODE_TEXT)
+        self.ws_client.send(
+            json.dumps(init_data).encode('utf8'), websocket.ABNF.OPCODE_TEXT)
     def on_data(self, ws, message, message_type, fin):
         """
diff --git a/ibm_watson/websocket/synthesize_callback.py b/ibm_watson/websocket/synthesize_callback.py
index e153b66b2..c8ee34c3c 100644
--- a/ibm_watson/websocket/synthesize_callback.py
+++ b/ibm_watson/websocket/synthesize_callback.py
@@ -16,6 +16,7 @@
 class SynthesizeCallback(object):
+
     def __init__(self):
         pass
diff --git a/ibm_watson/websocket/synthesize_listener.py b/ibm_watson/websocket/synthesize_listener.py
index fe21fcc1d..9c110daea 100644
--- a/ibm_watson/websocket/synthesize_listener.py
+++ b/ibm_watson/websocket/synthesize_listener.py
@@ -23,10 +23,11 @@
 except ImportError:
     import _thread as thread
-
 TEN_MILLISECONDS = 0.01
+
 class SynthesizeListener(object):
+
     def __init__(self,
                  options,
                  callback,
@@ -56,13 +57,15 @@ def __init__(self,
         self.ws_client.run_forever(http_proxy_host=self.http_proxy_host,
                                    http_proxy_port=self.http_proxy_port,
-                                   sslopt={'cert_reqs': ssl.CERT_NONE} if self.verify is not None else None)
+                                   sslopt={'cert_reqs': ssl.CERT_NONE}
+                                   if self.verify is not None else None)
     def send_text(self):
         """
         Sends the text message
         Note: The service handles one request per connection
         """
+
        def run(*args):
             """Background process to send the text"""
             self.ws_client.send(json.dumps(self.options).encode('utf8'))
@@ -94,7 +97,8 @@ def on_data(self, ws, message, message_type, fin):
         if message_type == websocket.ABNF.OPCODE_TEXT:
             json_object = json.loads(message)
             if 'binary_streams' in json_object:
-                self.callback.on_content_type(json_object['binary_streams'][0]['content_type'])
+                self.callback.on_content_type(
+                    json_object['binary_streams'][0]['content_type'])
             elif 'error' in json_object:
                 self.on_error(ws, json_object.get('error'))
                 return
diff --git a/package-lock.json
b/package-lock.json index df88bc2c5..3603f0197 100644 --- a/package-lock.json +++ b/package-lock.json @@ -195,9 +195,9 @@ "integrity": "sha512-tHq6qdbT9U1IRSGf14CL0pUlULksvY9OZ+5eEgl1N7t+OA3tGvNpxJCzuKQlsNgCVwbAs670L1vcVQi8j9HjnA==" }, "@types/node": { - "version": "12.12.11", - "resolved": "https://registry.npmjs.org/@types/node/-/node-12.12.11.tgz", - "integrity": "sha512-O+x6uIpa6oMNTkPuHDa9MhMMehlxLAd5QcOvKRjAFsBVpeFWTOPnXbDvILvFgFFZfQ1xh1EZi1FbXxUix+zpsQ==" + "version": "12.12.14", + "resolved": "https://registry.npmjs.org/@types/node/-/node-12.12.14.tgz", + "integrity": "sha512-u/SJDyXwuihpwjXy7hOOghagLEV1KdAST6syfnOk6QZAMzZuWZqXy5aYYZbh8Jdpd4escVFP0MvftHNDb9pruA==" }, "@types/retry": { "version": "0.12.0", @@ -380,9 +380,9 @@ "integrity": "sha512-kVscqXk4OCp68SZ0dkgEKVi6/8ij300KBWTJq32P/dYeWTSwK41WyTxalN1eRmA5Z9UU/LX9D7FWSmV9SAYx6g==" }, "execa": { - "version": "3.3.0", - "resolved": "https://registry.npmjs.org/execa/-/execa-3.3.0.tgz", - "integrity": "sha512-j5Vit5WZR/cbHlqU97+qcnw9WHRCIL4V1SVe75VcHcD1JRBdt8fv0zw89b7CQHQdUHTt2VjuhcF5ibAgVOxqpg==", + "version": "3.4.0", + "resolved": "https://registry.npmjs.org/execa/-/execa-3.4.0.tgz", + "integrity": "sha512-r9vdGQk4bmCuK1yKQu1KTwcT2zwfWdbdaXfCtAh+5nU/4fSX+JAb7vZGvI5naJrQlvONrEB20jeruESI69530g==", "requires": { "cross-spawn": "^7.0.0", "get-stream": "^5.0.0", diff --git a/setup.py b/setup.py index 36f6bf589..fb53120b9 100644 --- a/setup.py +++ b/setup.py @@ -18,7 +18,7 @@ import os import sys -__version__ = '4.0.4' +__version__ = '4.1.0' if sys.argv[-1] == 'publish': diff --git a/test/integration/test_visual_recognition_v4.py b/test/integration/test_visual_recognition_v4.py index 8ed3aa073..036e3510e 100644 --- a/test/integration/test_visual_recognition_v4.py +++ b/test/integration/test_visual_recognition_v4.py @@ -120,5 +120,9 @@ def test_04_training(self): assert train_result is not None assert train_result.get('training_status') is not None + # training usage + training_usage = 
self.visual_recognition.get_training_usage(start_time='2019-11-01', end_time='2019-11-27').get_result() + assert training_usage is not None + # delete collection self.visual_recognition.delete_collection(collection_id) diff --git a/test/unit/test_assistant_v1.py b/test/unit/test_assistant_v1.py index dad8ed580..8ef258a87 100644 --- a/test/unit/test_assistant_v1.py +++ b/test/unit/test_assistant_v1.py @@ -8,7 +8,7 @@ from ibm_watson.assistant_v1 import Context, Counterexample, \ CounterexampleCollection, Entity, EntityCollection, Example, \ ExampleCollection, MessageInput, Intent, IntentCollection, Synonym, \ - SynonymCollection, Value, ValueCollection, Workspace, WorkspaceCollection + SynonymCollection, Value, ValueCollection, Workspace, WorkspaceCollection, Webhook, WebhookHeader from ibm_cloud_sdk_core.authenticators import BasicAuthenticator platform_url = 'https://gateway.watsonplatform.net' @@ -1344,7 +1344,8 @@ def test_create_workspace(): version='2017-02-03', authenticator=authenticator) workspace = service.create_workspace( name='Pizza app', description='Pizza app', language='en', metadata={}, - system_settings={'tooling': {'store_generic_responses' : True}}).get_result() + system_settings={'tooling': {'store_generic_responses' : True}}, + webhooks=[Webhook(url='fake-jenkins-url', name='jenkins', headers=[WebhookHeader('fake', 'header')])]).get_result() assert len(responses.calls) == 1 assert responses.calls[0].request.url.startswith(url) assert workspace == response @@ -1471,7 +1472,8 @@ def test_update_workspace(): description='Pizza app', language='en', metadata={}, - system_settings={'tooling': {'store_generic_responses' : True}}).get_result() + system_settings={'tooling': {'store_generic_responses' : True}}, + webhooks=[Webhook(url='fake-jenkins-url', name='jenkins', headers=[WebhookHeader('fake', 'header')])]).get_result() assert len(responses.calls) == 1 assert responses.calls[0].request.url.startswith(url) assert workspace == response @@ -1488,6 
+1490,13 @@ def test_dialog_nodes(): status=200, content_type='application/json') + responses.add( + responses.POST, + "{0}/location-done?version=2017-02-03".format(url), + body='{ "application/json": { "dialog_node": "location-done" }}', + status=200, + content_type='application/json') + responses.add( responses.POST, "{0}?version=2017-02-03".format(url), @@ -1513,19 +1522,22 @@ def test_dialog_nodes(): assistant = ibm_watson.AssistantV1( version='2017-02-03', authenticator=authenticator) - assistant.create_dialog_node('id', 'location-done', user_label='xxx') + assistant.create_dialog_node('id', 'location-done', user_label='xxx', disambiguation_opt_out=False) assert responses.calls[0].response.json()['application/json']['dialog_node'] == 'location-done' + assistant.update_dialog_node('id', 'location-done', user_label='xxx', new_disambiguation_opt_out=False) + assert responses.calls[1].response.json()['application/json']['dialog_node'] == 'location-done' + assistant.delete_dialog_node('id', 'location-done') - assert responses.calls[1].response.json() == {"description": "deleted successfully"} + assert responses.calls[2].response.json() == {"description": "deleted successfully"} assistant.get_dialog_node('id', 'location-done') - assert responses.calls[2].response.json() == {"application/json": {"dialog_node": "location-atm"}} + assert responses.calls[3].response.json() == {"application/json": {"dialog_node": "location-atm"}} assistant.list_dialog_nodes('id') - assert responses.calls[3].response.json() == {"application/json": {"dialog_node": "location-atm"}} + assert responses.calls[4].response.json() == {"application/json": {"dialog_node": "location-atm"}} - assert len(responses.calls) == 4 + assert len(responses.calls) == 5 @responses.activate def test_delete_user_data(): diff --git a/test/unit/test_discovery_v2.py b/test/unit/test_discovery_v2.py new file mode 100644 index 000000000..da2d2caad --- /dev/null +++ b/test/unit/test_discovery_v2.py @@ -0,0 +1,1213 @@ 
+# -*- coding: utf-8 -*- +# (C) Copyright IBM Corp. 2019. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from ibm_cloud_sdk_core.authenticators.no_auth_authenticator import NoAuthAuthenticator +import json +import responses +import tempfile +from ibm_watson.discovery_v2 import * + +base_url = 'https://fake' + +############################################################################## +# Start of Service: Collections +############################################################################## +# region + + +#----------------------------------------------------------------------------- +# Test Class for list_collections +#----------------------------------------------------------------------------- +class TestListCollections(): + + #-------------------------------------------------------- + # Test 1: Send fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_list_collections_response(self): + body = self.construct_full_body() + response = fake_response_ListCollectionsResponse_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_list_collections_required_response(self): + # Check response with required params + body = self.construct_required_body() + response 
= fake_response_ListCollectionsResponse_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 3: Send empty data and check response + #-------------------------------------------------------- + @responses.activate + def test_list_collections_empty(self): + check_empty_required_params(self, + fake_response_ListCollectionsResponse_json) + check_missing_required_params(self) + assert len(responses.calls) == 0 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): + endpoint = '/v2/projects/{0}/collections'.format(body['project_id']) + url = '{0}{1}'.format(base_url, endpoint) + return url + + def add_mock_response(self, url, response): + responses.add(responses.GET, + url, + body=json.dumps(response), + status=200, + content_type='application/json') + + def call_service(self, body): + service = DiscoveryV2(authenticator=NoAuthAuthenticator(), + version='2019-11-22') + service.set_service_url(base_url) + output = service.list_collections(**body) + return output + + def construct_full_body(self): + body = dict() + body['project_id'] = "string1" + return body + + def construct_required_body(self): + body = dict() + body['project_id'] = "string1" + return body + + +# endregion +############################################################################## +# End of Service: Collections +############################################################################## + +############################################################################## +# Start of Service: Queries +############################################################################## +# region + + +#----------------------------------------------------------------------------- +# Test Class for query +#----------------------------------------------------------------------------- +class TestQuery(): + + #-------------------------------------------------------- + # Test 1: Send fake data and 
check response + #-------------------------------------------------------- + @responses.activate + def test_query_response(self): + body = self.construct_full_body() + response = fake_response_QueryResponse_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_query_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response_QueryResponse_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 3: Send empty data and check response + #-------------------------------------------------------- + @responses.activate + def test_query_empty(self): + check_empty_required_params(self, fake_response_QueryResponse_json) + check_missing_required_params(self) + assert len(responses.calls) == 0 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): + endpoint = '/v2/projects/{0}/query'.format(body['project_id']) + url = '{0}{1}'.format(base_url, endpoint) + return url + + def add_mock_response(self, url, response): + responses.add(responses.POST, + url, + body=json.dumps(response), + status=200, + content_type='application/json') + + def call_service(self, body): + service = DiscoveryV2(authenticator=NoAuthAuthenticator(), + version='2019-11-22') + service.set_service_url(base_url) + output = service.query(**body) + return output + + def construct_full_body(self): + body = dict() + body['project_id'] = "string1" + body.update({ + "collection_ids": [], + "filter": + "string1", + "query": + "string1", + "natural_language_query": + "string1", + "aggregation": + "string1", + "count": + 12345, + "return_": [], + "offset": + 12345, + "sort": + "string1", + "highlight": + 
True, + "spelling_suggestions": + True, + "table_results": + QueryLargeTableResults._from_dict( + json.loads("""{"enabled": false, "count": 5}""")), + "suggested_refinements": + QueryLargeSuggestedRefinements._from_dict( + json.loads("""{"enabled": false, "count": 5}""")), + "passages": + QueryLargePassages._from_dict( + json.loads( + """{"enabled": false, "per_document": true, "max_per_document": 16, "fields": [], "count": 5, "characters": 10}""" + )), + }) + return body + + def construct_required_body(self): + body = dict() + body['project_id'] = "string1" + return body + + +#----------------------------------------------------------------------------- +# Test Class for get_autocompletion +#----------------------------------------------------------------------------- +class TestGetAutocompletion(): + + #-------------------------------------------------------- + # Test 1: Send fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_get_autocompletion_response(self): + body = self.construct_full_body() + response = fake_response_Completions_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_get_autocompletion_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response_Completions_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 3: Send empty data and check response + #-------------------------------------------------------- + @responses.activate + def test_get_autocompletion_empty(self): + check_empty_required_params(self, fake_response_Completions_json) + check_missing_required_params(self) + 
assert len(responses.calls) == 0 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): + endpoint = '/v2/projects/{0}/autocompletion'.format(body['project_id']) + url = '{0}{1}'.format(base_url, endpoint) + return url + + def add_mock_response(self, url, response): + responses.add(responses.GET, + url, + body=json.dumps(response), + status=200, + content_type='application/json') + + def call_service(self, body): + service = DiscoveryV2(authenticator=NoAuthAuthenticator(), + version='2019-11-22') + service.set_service_url(base_url) + output = service.get_autocompletion(**body) + return output + + def construct_full_body(self): + body = dict() + body['project_id'] = "string1" + body['prefix'] = "string1" + body['collection_ids'] = [] + body['field'] = "string1" + body['count'] = 12345 + return body + + def construct_required_body(self): + body = dict() + body['project_id'] = "string1" + body['prefix'] = "string1" + return body + + +#----------------------------------------------------------------------------- +# Test Class for query_notices +#----------------------------------------------------------------------------- +class TestQueryNotices(): + + #-------------------------------------------------------- + # Test 1: Send fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_query_notices_response(self): + body = self.construct_full_body() + response = fake_response_QueryNoticesResponse_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_query_notices_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response_QueryNoticesResponse_json + send_request(self, body, response) 
+ assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 3: Send empty data and check response + #-------------------------------------------------------- + @responses.activate + def test_query_notices_empty(self): + check_empty_required_params(self, + fake_response_QueryNoticesResponse_json) + check_missing_required_params(self) + assert len(responses.calls) == 0 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): + endpoint = '/v2/projects/{0}/notices'.format(body['project_id']) + url = '{0}{1}'.format(base_url, endpoint) + return url + + def add_mock_response(self, url, response): + responses.add(responses.GET, + url, + body=json.dumps(response), + status=200, + content_type='application/json') + + def call_service(self, body): + service = DiscoveryV2(authenticator=NoAuthAuthenticator(), + version='2019-11-22') + service.set_service_url(base_url) + output = service.query_notices(**body) + return output + + def construct_full_body(self): + body = dict() + body['project_id'] = "string1" + body['filter'] = "string1" + body['query'] = "string1" + body['natural_language_query'] = "string1" + body['count'] = 12345 + body['offset'] = 12345 + return body + + def construct_required_body(self): + body = dict() + body['project_id'] = "string1" + return body + + +#----------------------------------------------------------------------------- +# Test Class for list_fields +#----------------------------------------------------------------------------- +class TestListFields(): + + #-------------------------------------------------------- + # Test 1: Send fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_list_fields_response(self): + body = self.construct_full_body() + response = fake_response_ListFieldsResponse_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + 
#-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_list_fields_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response_ListFieldsResponse_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 3: Send empty data and check response + #-------------------------------------------------------- + @responses.activate + def test_list_fields_empty(self): + check_empty_required_params(self, fake_response_ListFieldsResponse_json) + check_missing_required_params(self) + assert len(responses.calls) == 0 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): + endpoint = '/v2/projects/{0}/fields'.format(body['project_id']) + url = '{0}{1}'.format(base_url, endpoint) + return url + + def add_mock_response(self, url, response): + responses.add(responses.GET, + url, + body=json.dumps(response), + status=200, + content_type='application/json') + + def call_service(self, body): + service = DiscoveryV2(authenticator=NoAuthAuthenticator(), + version='2019-11-22') + service.set_service_url(base_url) + output = service.list_fields(**body) + return output + + def construct_full_body(self): + body = dict() + body['project_id'] = "string1" + body['collection_ids'] = [] + return body + + def construct_required_body(self): + body = dict() + body['project_id'] = "string1" + return body + + +# endregion +############################################################################## +# End of Service: Queries +############################################################################## + +############################################################################## +# Start of Service: ComponentSettings 
+############################################################################## +# region + + +#----------------------------------------------------------------------------- +# Test Class for get_component_settings +#----------------------------------------------------------------------------- +class TestGetComponentSettings(): + + #-------------------------------------------------------- + # Test 1: Send fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_get_component_settings_response(self): + body = self.construct_full_body() + response = fake_response_ComponentSettingsResponse_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_get_component_settings_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response_ComponentSettingsResponse_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 3: Send empty data and check response + #-------------------------------------------------------- + @responses.activate + def test_get_component_settings_empty(self): + check_empty_required_params( + self, fake_response_ComponentSettingsResponse_json) + check_missing_required_params(self) + assert len(responses.calls) == 0 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): + endpoint = '/v2/projects/{0}/component_settings'.format( + body['project_id']) + url = '{0}{1}'.format(base_url, endpoint) + return url + + def add_mock_response(self, url, response): + responses.add(responses.GET, + url, + body=json.dumps(response), + status=200, + content_type='application/json') + + def 
call_service(self, body): + service = DiscoveryV2(authenticator=NoAuthAuthenticator(), + version='2019-11-22') + service.set_service_url(base_url) + output = service.get_component_settings(**body) + return output + + def construct_full_body(self): + body = dict() + body['project_id'] = "string1" + return body + + def construct_required_body(self): + body = dict() + body['project_id'] = "string1" + return body + + +# endregion +############################################################################## +# End of Service: ComponentSettings +############################################################################## + +############################################################################## +# Start of Service: Documents +############################################################################## +# region + + +#----------------------------------------------------------------------------- +# Test Class for add_document +#----------------------------------------------------------------------------- +class TestAddDocument(): + + #-------------------------------------------------------- + # Test 1: Send fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_add_document_response(self): + body = self.construct_full_body() + response = fake_response_DocumentAccepted_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_add_document_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response_DocumentAccepted_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 3: Send empty data and check 
response + #-------------------------------------------------------- + @responses.activate + def test_add_document_empty(self): + check_empty_required_params(self, fake_response_DocumentAccepted_json) + check_missing_required_params(self) + assert len(responses.calls) == 0 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): + endpoint = '/v2/projects/{0}/collections/{1}/documents'.format( + body['project_id'], body['collection_id']) + url = '{0}{1}'.format(base_url, endpoint) + return url + + def add_mock_response(self, url, response): + responses.add(responses.POST, + url, + body=json.dumps(response), + status=202, + content_type='application/json') + + def call_service(self, body): + service = DiscoveryV2(authenticator=NoAuthAuthenticator(), + version='2019-11-22') + service.set_service_url(base_url) + output = service.add_document(**body) + return output + + def construct_full_body(self): + body = dict() + body['project_id'] = "string1" + body['collection_id'] = "string1" + body['file'] = tempfile.NamedTemporaryFile() + body['filename'] = "string1" + body['file_content_type'] = "string1" + body['metadata'] = "string1" + body['x_watson_discovery_force'] = True + return body + + def construct_required_body(self): + body = dict() + body['project_id'] = "string1" + body['collection_id'] = "string1" + return body + + +#----------------------------------------------------------------------------- +# Test Class for update_document +#----------------------------------------------------------------------------- +class TestUpdateDocument(): + + #-------------------------------------------------------- + # Test 1: Send fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_update_document_response(self): + body = self.construct_full_body() + response = fake_response_DocumentAccepted_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + 
#-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_update_document_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response_DocumentAccepted_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 3: Send empty data and check response + #-------------------------------------------------------- + @responses.activate + def test_update_document_empty(self): + check_empty_required_params(self, fake_response_DocumentAccepted_json) + check_missing_required_params(self) + assert len(responses.calls) == 0 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): + endpoint = '/v2/projects/{0}/collections/{1}/documents/{2}'.format( + body['project_id'], body['collection_id'], body['document_id']) + url = '{0}{1}'.format(base_url, endpoint) + return url + + def add_mock_response(self, url, response): + responses.add(responses.POST, + url, + body=json.dumps(response), + status=202, + content_type='application/json') + + def call_service(self, body): + service = DiscoveryV2(authenticator=NoAuthAuthenticator(), + version='2019-11-22') + service.set_service_url(base_url) + output = service.update_document(**body) + return output + + def construct_full_body(self): + body = dict() + body['project_id'] = "string1" + body['collection_id'] = "string1" + body['document_id'] = "string1" + body['file'] = tempfile.NamedTemporaryFile() + body['filename'] = "string1" + body['file_content_type'] = "string1" + body['metadata'] = "string1" + body['x_watson_discovery_force'] = True + return body + + def construct_required_body(self): + body = dict() + body['project_id'] = "string1" + body['collection_id'] = "string1" + body['document_id'] = "string1" + 
return body + + +#----------------------------------------------------------------------------- +# Test Class for delete_document +#----------------------------------------------------------------------------- +class TestDeleteDocument(): + + #-------------------------------------------------------- + # Test 1: Send fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_delete_document_response(self): + body = self.construct_full_body() + response = fake_response_DeleteDocumentResponse_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_delete_document_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response_DeleteDocumentResponse_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 3: Send empty data and check response + #-------------------------------------------------------- + @responses.activate + def test_delete_document_empty(self): + check_empty_required_params(self, + fake_response_DeleteDocumentResponse_json) + check_missing_required_params(self) + assert len(responses.calls) == 0 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): + endpoint = '/v2/projects/{0}/collections/{1}/documents/{2}'.format( + body['project_id'], body['collection_id'], body['document_id']) + url = '{0}{1}'.format(base_url, endpoint) + return url + + def add_mock_response(self, url, response): + responses.add(responses.DELETE, + url, + body=json.dumps(response), + status=200, + content_type='application/json') + + def call_service(self, body): + service = 
DiscoveryV2(authenticator=NoAuthAuthenticator(), + version='2019-11-22') + service.set_service_url(base_url) + output = service.delete_document(**body) + return output + + def construct_full_body(self): + body = dict() + body['project_id'] = "string1" + body['collection_id'] = "string1" + body['document_id'] = "string1" + body['x_watson_discovery_force'] = True + return body + + def construct_required_body(self): + body = dict() + body['project_id'] = "string1" + body['collection_id'] = "string1" + body['document_id'] = "string1" + return body + + +# endregion +############################################################################## +# End of Service: Documents +############################################################################## + +############################################################################## +# Start of Service: TrainingData +############################################################################## +# region + + +#----------------------------------------------------------------------------- +# Test Class for list_training_queries +#----------------------------------------------------------------------------- +class TestListTrainingQueries(): + + #-------------------------------------------------------- + # Test 1: Send fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_list_training_queries_response(self): + body = self.construct_full_body() + response = fake_response_TrainingQuerySet_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_list_training_queries_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response_TrainingQuerySet_json + 
send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 3: Send empty data and check response + #-------------------------------------------------------- + @responses.activate + def test_list_training_queries_empty(self): + check_empty_required_params(self, fake_response_TrainingQuerySet_json) + check_missing_required_params(self) + assert len(responses.calls) == 0 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): + endpoint = '/v2/projects/{0}/training_data/queries'.format( + body['project_id']) + url = '{0}{1}'.format(base_url, endpoint) + return url + + def add_mock_response(self, url, response): + responses.add(responses.GET, + url, + body=json.dumps(response), + status=200, + content_type='application/json') + + def call_service(self, body): + service = DiscoveryV2(authenticator=NoAuthAuthenticator(), + version='2019-11-22') + service.set_service_url(base_url) + output = service.list_training_queries(**body) + return output + + def construct_full_body(self): + body = dict() + body['project_id'] = "string1" + return body + + def construct_required_body(self): + body = dict() + body['project_id'] = "string1" + return body + + +#----------------------------------------------------------------------------- +# Test Class for delete_training_queries +#----------------------------------------------------------------------------- +class TestDeleteTrainingQueries(): + + #-------------------------------------------------------- + # Test 1: Send fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_delete_training_queries_response(self): + body = self.construct_full_body() + response = fake_response__json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 2: Send only required fake data and check response 
+ #-------------------------------------------------------- + @responses.activate + def test_delete_training_queries_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response__json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 3: Send empty data and check response + #-------------------------------------------------------- + @responses.activate + def test_delete_training_queries_empty(self): + check_empty_required_params(self, fake_response__json) + check_missing_required_params(self) + assert len(responses.calls) == 0 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): + endpoint = '/v2/projects/{0}/training_data/queries'.format( + body['project_id']) + url = '{0}{1}'.format(base_url, endpoint) + return url + + def add_mock_response(self, url, response): + responses.add(responses.DELETE, + url, + body=json.dumps(response), + status=204, + content_type='') + + def call_service(self, body): + service = DiscoveryV2(authenticator=NoAuthAuthenticator(), + version='2019-11-22') + service.set_service_url(base_url) + output = service.delete_training_queries(**body) + return output + + def construct_full_body(self): + body = dict() + body['project_id'] = "string1" + return body + + def construct_required_body(self): + body = dict() + body['project_id'] = "string1" + return body + + +#----------------------------------------------------------------------------- +# Test Class for create_training_query +#----------------------------------------------------------------------------- +class TestCreateTrainingQuery(): + + #-------------------------------------------------------- + # Test 1: Send fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_create_training_query_response(self): + body = self.construct_full_body() + 
response = fake_response_TrainingQuery_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_create_training_query_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response_TrainingQuery_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 3: Send empty data and check response + #-------------------------------------------------------- + @responses.activate + def test_create_training_query_empty(self): + check_empty_required_params(self, fake_response_TrainingQuery_json) + check_missing_required_params(self) + assert len(responses.calls) == 0 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): + endpoint = '/v2/projects/{0}/training_data/queries'.format( + body['project_id']) + url = '{0}{1}'.format(base_url, endpoint) + return url + + def add_mock_response(self, url, response): + responses.add(responses.POST, + url, + body=json.dumps(response), + status=201, + content_type='application/json') + + def call_service(self, body): + service = DiscoveryV2(authenticator=NoAuthAuthenticator(), + version='2019-11-22') + service.set_service_url(base_url) + output = service.create_training_query(**body) + return output + + def construct_full_body(self): + body = dict() + body['project_id'] = "string1" + body.update({ + "natural_language_query": "string1", + "examples": [], + "filter": "string1", + }) + return body + + def construct_required_body(self): + body = dict() + body['project_id'] = "string1" + body.update({ + "natural_language_query": "string1", + "examples": [], + }) + return body + + 
+#----------------------------------------------------------------------------- +# Test Class for get_training_query +#----------------------------------------------------------------------------- +class TestGetTrainingQuery(): + + #-------------------------------------------------------- + # Test 1: Send fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_get_training_query_response(self): + body = self.construct_full_body() + response = fake_response_TrainingQuery_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_get_training_query_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response_TrainingQuery_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 3: Send empty data and check response + #-------------------------------------------------------- + @responses.activate + def test_get_training_query_empty(self): + check_empty_required_params(self, fake_response_TrainingQuery_json) + check_missing_required_params(self) + assert len(responses.calls) == 0 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): + endpoint = '/v2/projects/{0}/training_data/queries/{1}'.format( + body['project_id'], body['query_id']) + url = '{0}{1}'.format(base_url, endpoint) + return url + + def add_mock_response(self, url, response): + responses.add(responses.GET, + url, + body=json.dumps(response), + status=200, + content_type='application/json') + + def call_service(self, body): + service = DiscoveryV2(authenticator=NoAuthAuthenticator(), + version='2019-11-22') + 
service.set_service_url(base_url) + output = service.get_training_query(**body) + return output + + def construct_full_body(self): + body = dict() + body['project_id'] = "string1" + body['query_id'] = "string1" + return body + + def construct_required_body(self): + body = dict() + body['project_id'] = "string1" + body['query_id'] = "string1" + return body + + +#----------------------------------------------------------------------------- +# Test Class for update_training_query +#----------------------------------------------------------------------------- +class TestUpdateTrainingQuery(): + + #-------------------------------------------------------- + # Test 1: Send fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_update_training_query_response(self): + body = self.construct_full_body() + response = fake_response_TrainingQuery_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_update_training_query_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response_TrainingQuery_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 3: Send empty data and check response + #-------------------------------------------------------- + @responses.activate + def test_update_training_query_empty(self): + check_empty_required_params(self, fake_response_TrainingQuery_json) + check_missing_required_params(self) + assert len(responses.calls) == 0 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): + endpoint = '/v2/projects/{0}/training_data/queries/{1}'.format( + body['project_id'], 
body['query_id']) + url = '{0}{1}'.format(base_url, endpoint) + return url + + def add_mock_response(self, url, response): + responses.add(responses.POST, + url, + body=json.dumps(response), + status=201, + content_type='application/json') + + def call_service(self, body): + service = DiscoveryV2(authenticator=NoAuthAuthenticator(), + version='2019-11-22') + service.set_service_url(base_url) + output = service.update_training_query(**body) + return output + + def construct_full_body(self): + body = dict() + body['project_id'] = "string1" + body['query_id'] = "string1" + body.update({ + "natural_language_query": "string1", + "examples": [], + "filter": "string1", + }) + return body + + def construct_required_body(self): + body = dict() + body['project_id'] = "string1" + body['query_id'] = "string1" + body.update({ + "natural_language_query": "string1", + "examples": [], + }) + return body + + +# endregion +############################################################################## +# End of Service: TrainingData +############################################################################## + + +def check_empty_required_params(obj, response): + """Test function to assert that the operation throws an error when given empty required data + + Args: + obj: The test class instance + + """ + body = obj.construct_full_body() + body = {k: None for k in body.keys()} + error = False + try: + send_request(obj, body, response) + except ValueError: + error = True + assert error + + +def check_missing_required_params(obj): + """Test function to assert that the operation throws an error when missing required data + + Args: + obj: The test class instance + + """ + body = obj.construct_full_body() + url = obj.make_url(body) + error = False + try: + send_request(obj, {}, {}, url=url) + except TypeError: + error = True + assert error + + +def check_empty_response(obj): + """Test function to assert that the operation will return an 
empty response when given an empty request + + Args: + obj: The test class instance + + """ + body = obj.construct_full_body() + url = obj.make_url(body) + send_request(obj, {}, {}, url=url) + + +def send_request(obj, body, response, url=None): + """Test function to create a request, send it, and assert its accuracy to the mock response + + Args: + obj: The test class instance + body: Dict filled with fake data for calling the service + response: Mock response data returned by the mocked endpoint + url: Optional explicit request URL; defaults to obj.make_url(body) + + """ + if not url: + url = obj.make_url(body) + obj.add_mock_response(url, response) + output = obj.call_service(body) + assert responses.calls[0].request.url.startswith(url) + assert output.get_result() == response + + +#################### +## Mock Responses ## +#################### + +fake_response__json = None +fake_response_ListCollectionsResponse_json = """{"collections": []}""" +fake_response_QueryResponse_json = """{"matching_results": 16, "results": [], "aggregations": [], "retrieval_details": {"document_retrieval_strategy": "fake_document_retrieval_strategy"}, "suggested_query": "fake_suggested_query", "suggested_refinements": [], "table_results": []}""" +fake_response_Completions_json = """{"completions": []}""" +fake_response_QueryNoticesResponse_json = """{"matching_results": 16, "notices": []}""" +fake_response_ListFieldsResponse_json = """{"fields": []}""" +fake_response_ComponentSettingsResponse_json = """{"fields_shown": {"body": {"use_passage": false, "field": "fake_field"}, "title": {"field": "fake_field"}}, "autocomplete": true, "structured_search": false, "results_per_page": 16, "aggregations": []}""" +fake_response_DocumentAccepted_json = """{"document_id": "fake_document_id", "status": "fake_status"}""" +fake_response_DeleteDocumentResponse_json = """{"document_id": "fake_document_id", "status": "fake_status"}""" +fake_response_TrainingQuerySet_json = 
"""{"queries": []}""" +fake_response_TrainingQuery_json = """{"query_id": "fake_query_id", "natural_language_query": "fake_natural_language_query", "filter": "fake_filter", "examples": []}""" +fake_response_TrainingQuery_json = """{"query_id": "fake_query_id", "natural_language_query": "fake_natural_language_query", "filter": "fake_filter", "examples": []}""" +fake_response_TrainingQuery_json = """{"query_id": "fake_query_id", "natural_language_query": "fake_natural_language_query", "filter": "fake_filter", "examples": []}""" diff --git a/test/unit/test_visual_recognition_v4.py b/test/unit/test_visual_recognition_v4.py index ffc1d4f01..0c408ad30 100644 --- a/test/unit/test_visual_recognition_v4.py +++ b/test/unit/test_visual_recognition_v4.py @@ -1,3 +1,4 @@ +# -*- coding: utf-8 -*- # (C) Copyright IBM Corp. 2019. # # Licensed under the Apache License, Version 2.0 (the "License"); @@ -12,824 +13,1199 @@ # See the License for the specific language governing permissions and # limitations under the License. 
-import json -import ibm_watson -from ibm_watson.visual_recognition_v4 import AnalyzeEnums, FileWithMetadata +from ibm_cloud_sdk_core.authenticators.no_auth_authenticator import NoAuthAuthenticator import responses -import os -import jwt -import time -from unittest import TestCase -from ibm_cloud_sdk_core.authenticators import IAMAuthenticator - -platform_url = 'https://gateway.watsonplatform.net' -service_path = '/visual-recognition/api' -base_url = '{0}{1}'.format(platform_url, service_path) - - -def get_access_token(): - access_token_layout = { - "username": "dummy", - "role": "Admin", - "permissions": ["administrator", "manage_catalog"], - "sub": "admin", - "iss": "sss", - "aud": "sss", - "uid": "sss", - "iat": 3600, - "exp": int(time.time()) - } - - access_token = jwt.encode( - access_token_layout, - 'secret', - algorithm='HS256', - headers={'kid': '230498151c214b788dd97f22b85410a5'}) - return access_token.decode('utf-8') - - -class TestVisualRecognitionV4(TestCase): - - @classmethod - def setUp(cls): - iam_url = "https://iam.cloud.ibm.com/identity/token" - iam_token_response = { - "access_token": get_access_token(), - "token_type": "Bearer", - "expires_in": 3600, - "expiration": 1524167011, - "refresh_token": "jy4gl91BQ" - } - responses.add(responses.POST, - url=iam_url, - body=json.dumps(iam_token_response), - status=200) +import tempfile +from ibm_watson.visual_recognition_v4 import * + +base_url = 'https://gateway.watsonplatform.net/visual-recognition/api' + +############################################################################## +# Start of Service: Analysis +############################################################################## +# region + + +#----------------------------------------------------------------------------- +# Test Class for analyze +#----------------------------------------------------------------------------- +class TestAnalyze(): + + #-------------------------------------------------------- + # Test 1: Send fake data and 
check response + #-------------------------------------------------------- + @responses.activate + def test_analyze_response(self): + body = self.construct_full_body() + response = fake_response_AnalyzeResponse_json + send_request(self, body, response) + assert len(responses.calls) == 1 - ######################### - # analysis - ######################### + #-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_analyze_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response_AnalyzeResponse_json + send_request(self, body, response) + assert len(responses.calls) == 1 + #-------------------------------------------------------- + # Test 3: Send empty data and check response + #-------------------------------------------------------- @responses.activate - def test_analyze(self): + def test_analyze_empty(self): + check_empty_required_params(self, fake_response_AnalyzeResponse_json) + check_missing_required_params(self) + assert len(responses.calls) == 0 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): endpoint = '/v4/analyze' url = '{0}{1}'.format(base_url, endpoint) - response = { - "images": [{ - "objects": { - "collections": [{ - "collection_id": - "collection_id", - "objects": [{ - "score": 7.0614014, - "location": { - "top": 1, - "left": 5, - "width": 5, - "height": 2 - }, - "object": "object" - }, { - "score": 7.0614014, - "location": { - "top": 1, - "left": 5, - "width": 5, - "height": 2 - }, - "object": "object" - }] - }, { - "collection_id": - "collection_id", - "objects": [{ - "score": 7.0614014, - "location": { - "top": 1, - "left": 5, - "width": 5, - "height": 2 - }, - "object": "object" - }, { - "score": 7.0614014, - "location": { - "top": 1, - "left": 5, - "width": 5, - "height": 2 - }, - "object": 
"object" - }] - }] - }, - "source": { - "archive_filename": "archive_filename", - "filename": "filename", - "type": "file", - "resolved_url": "resolved_url", - "source_url": "source_url" - }, - "errors": { - "code": - "invalid_field", - "message": - "The date provided for `version` is not valid. Specify dates in `YYYY-MM-DD` format.", - "more_info": - "https://cloud.ibm.com/apidocs/visual-recognition-v4#versioning", - "target": { - "type": "parameter", - "name": "version" - } - }, - "dimensions": { - "width": 6, - "height": 0 - } - }, { - "objects": { - "collections": [{ - "collection_id": - "collection_id", - "objects": [{ - "score": 7.0614014, - "location": { - "top": 1, - "left": 5, - "width": 5, - "height": 2 - }, - "object": "object" - }, { - "score": 7.0614014, - "location": { - "top": 1, - "left": 5, - "width": 5, - "height": 2 - }, - "object": "object" - }] - }, { - "collection_id": - "collection_id", - "objects": [{ - "score": 7.0614014, - "location": { - "top": 1, - "left": 5, - "width": 5, - "height": 2 - }, - "object": "object" - }, { - "score": 7.0614014, - "location": { - "top": 1, - "left": 5, - "width": 5, - "height": 2 - }, - "object": "object" - }] - }] - }, - "source": { - "archive_filename": "archive_filename", - "filename": "filename", - "type": "file", - "resolved_url": "resolved_url", - "source_url": "source_url" - }, - "errors": { - "code": - "invalid_field", - "message": - "The date provided for `version` is not valid. 
Specify dates in `YYYY-MM-DD` format.", - "more_info": - "https://cloud.ibm.com/apidocs/visual-recognition-v4#versioning", - "target": { - "type": "parameter", - "name": "version" - } - }, - "dimensions": { - "width": 6, - "height": 0 - } - }], - "trace": - "trace", - "warnings": [{ - "code": "invalid_field", - "more_info": "more_info", - "message": "message" - }, { - "code": "invalid_field", - "more_info": "more_info", - "message": "message" - }] - } + return url + + def add_mock_response(self, url, response): responses.add(responses.POST, url, body=json.dumps(response), status=200, content_type='application/json') - authenticator = IAMAuthenticator('bogusapikey') - service = ibm_watson.VisualRecognitionV4('YYYY-MM-DD', - authenticator=authenticator) + def call_service(self, body): + service = VisualRecognitionV4(authenticator=NoAuthAuthenticator(), + version='2019-02-11') service.set_service_url(base_url) + output = service.analyze(**body) + return output + + def construct_full_body(self): + body = dict() + body['collection_ids'] = ['collection_id1, collection_id2'] + body['features'] = ['test'] + body['image_url'] = ['https://upload.wikimedia.org/wikipedia/commons/thumb/4/47/American_Eskimo_Dog.jpg/1280px-American_Eskimo_Dog.jpg'] + body['threshold'] = 12345.0 + body['images_file'] = [FileWithMetadata(tempfile.NamedTemporaryFile())] + return body + + def construct_required_body(self): + body = dict() + body['collection_ids'] = ['fake'] + body['features'] = [AnalyzeEnums.Features.OBJECTS.value] + return body + + +# endregion +############################################################################## +# End of Service: Analysis +############################################################################## + +############################################################################## +# Start of Service: Collections +############################################################################## +# region + + 
+#----------------------------------------------------------------------------- +# Test Class for create_collection +#----------------------------------------------------------------------------- +class TestCreateCollection(): + + #-------------------------------------------------------- + # Test 1: Send fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_create_collection_response(self): + body = self.construct_full_body() + response = fake_response_Collection_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_create_collection_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response_Collection_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 3: Send empty data and check response + #-------------------------------------------------------- + @responses.activate + def test_create_collection_empty(self): + check_empty_response(self) + assert len(responses.calls) == 1 - with open( - os.path.join(os.path.dirname(__file__), - '../../resources/cars.zip'), 'rb') as cars: - detailed_response = service.analyze( - collection_ids=['collection_id1, collection_id2'], - features=[AnalyzeEnums.Features.OBJECTS.value], - images_file=[FileWithMetadata(cars)], - image_url=[ - 'https://upload.wikimedia.org/wikipedia/commons/thumb/4/47/American_Eskimo_Dog.jpg/1280px-American_Eskimo_Dog.jpg' - ], - threshold='0.2') - result = detailed_response.get_result() - assert result is not None - assert len(responses.calls) == 2 - - ######################### - # collections - ######################### - - 
@responses.activate - def test_create_collection(self): + #----------- + #- Helpers - + #----------- + def make_url(self, body): endpoint = '/v4/collections' url = '{0}{1}'.format(base_url, endpoint) - response = { - "collection_id": "collection_id", - "training_status": { - "objects": { - "in_progress": "true", - "data_changed": "true", - "ready": "true", - "latest_failed": "true", - "description": "description" - } - }, - "created": "2000-01-23T04:56:07.000+00:00", - "name": "name", - "description": "description", - "image_count": 0, - "updated": "2000-01-23T04:56:07.000+00:00" - } + return url + + def add_mock_response(self, url, response): responses.add(responses.POST, url, body=json.dumps(response), status=200, content_type='application/json') - authenticator = IAMAuthenticator('bogusapikey') - service = ibm_watson.VisualRecognitionV4('YYYY-MM-DD', - authenticator=authenticator) + + def call_service(self, body): + service = VisualRecognitionV4(authenticator=NoAuthAuthenticator(), + version='2019-02-11') service.set_service_url(base_url) - detailed_response = service.create_collection(name='name', - description='description') - result = detailed_response.get_result() - assert result is not None - assert len(responses.calls) == 2 + output = service.create_collection(**body) + return output + + def construct_full_body(self): + body = dict() + body.update({ + "name": "string1", + "description": "string1", + }) + return body + + def construct_required_body(self): + body = dict() + body.update({ + "name": "string1", + "description": "string1", + }) + return body + + +#----------------------------------------------------------------------------- +# Test Class for list_collections +#----------------------------------------------------------------------------- +class TestListCollections(): + #-------------------------------------------------------- + # Test 1: Send fake data and check response + #-------------------------------------------------------- 
@responses.activate - def test_list_collections(self): + def test_list_collections_response(self): + body = self.construct_full_body() + response = fake_response_CollectionsList_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_list_collections_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response_CollectionsList_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 3: Send empty data and check response + #-------------------------------------------------------- + @responses.activate + def test_list_collections_empty(self): + check_empty_response(self) + assert len(responses.calls) == 1 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): endpoint = '/v4/collections' url = '{0}{1}'.format(base_url, endpoint) - response = { - "collections": [{ - "collection_id": "collection_id", - "training_status": { - "objects": { - "in_progress": "true", - "data_changed": "true", - "ready": "true", - "latest_failed": "true", - "description": "description" - } - }, - "created": "2000-01-23T04:56:07.000+00:00", - "name": "name", - "description": "description", - "image_count": 0, - "updated": "2000-01-23T04:56:07.000+00:00" - }, { - "collection_id": "collection_id", - "training_status": { - "objects": { - "in_progress": "true", - "data_changed": "true", - "ready": "true", - "latest_failed": "true", - "description": "description" - } - }, - "created": "2000-01-23T04:56:07.000+00:00", - "name": "name", - "description": "description", - "image_count": 0, - "updated": "2000-01-23T04:56:07.000+00:00" - }] - } + return url + + def add_mock_response(self, url, 
response): responses.add(responses.GET, url, body=json.dumps(response), status=200, content_type='application/json') - authenticator = IAMAuthenticator('bogusapikey') - service = ibm_watson.VisualRecognitionV4('YYYY-MM-DD', - authenticator=authenticator) + def call_service(self, body): + service = VisualRecognitionV4(authenticator=NoAuthAuthenticator(), + version='2019-02-11') service.set_service_url(base_url) + output = service.list_collections(**body) + return output - detailed_response = service.list_collections() - result = detailed_response.get_result() - assert result is not None - assert len(responses.calls) == 2 + def construct_full_body(self): + body = dict() + return body + def construct_required_body(self): + body = dict() + return body + + +#----------------------------------------------------------------------------- +# Test Class for get_collection +#----------------------------------------------------------------------------- +class TestGetCollection(): + + #-------------------------------------------------------- + # Test 1: Send fake data and check response + #-------------------------------------------------------- @responses.activate - def test_get_collection(self): - endpoint = '/v4/collections/{0}'.format('collection_id') + def test_get_collection_response(self): + body = self.construct_full_body() + response = fake_response_Collection_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_get_collection_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response_Collection_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 3: Send empty data and 
check response + #-------------------------------------------------------- + @responses.activate + def test_get_collection_empty(self): + check_empty_required_params(self, fake_response_Collection_json) + check_missing_required_params(self) + assert len(responses.calls) == 0 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): + endpoint = '/v4/collections/{0}'.format(body['collection_id']) url = '{0}{1}'.format(base_url, endpoint) - response = { - "collection_id": "collection_id", - "training_status": { - "objects": { - "in_progress": "true", - "data_changed": "true", - "ready": "true", - "latest_failed": "true", - "description": "description" - } - }, - "created": "2000-01-23T04:56:07.000+00:00", - "name": "name", - "description": "description", - "image_count": 0, - "updated": "2000-01-23T04:56:07.000+00:00" - } + return url + + def add_mock_response(self, url, response): responses.add(responses.GET, url, body=json.dumps(response), status=200, content_type='application/json') - authenticator = IAMAuthenticator('bogusapikey') - service = ibm_watson.VisualRecognitionV4('YYYY-MM-DD', - authenticator=authenticator) + def call_service(self, body): + service = VisualRecognitionV4(authenticator=NoAuthAuthenticator(), + version='2019-02-11') service.set_service_url(base_url) + output = service.get_collection(**body) + return output + + def construct_full_body(self): + body = dict() + body['collection_id'] = "string1" + return body + + def construct_required_body(self): + body = dict() + body['collection_id'] = "string1" + return body - detailed_response = service.get_collection( - collection_id='collection_id') - result = detailed_response.get_result() - assert result is not None - assert len(responses.calls) == 2 +#----------------------------------------------------------------------------- +# Test Class for update_collection +#----------------------------------------------------------------------------- +class TestUpdateCollection(): + + 
#-------------------------------------------------------- + # Test 1: Send fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_update_collection_response(self): + body = self.construct_full_body() + response = fake_response_Collection_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- @responses.activate - def test_update_collection(self): - endpoint = '/v4/collections/{0}'.format('collection_id') + def test_update_collection_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response_Collection_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 3: Send empty data and check response + #-------------------------------------------------------- + @responses.activate + def test_update_collection_empty(self): + check_empty_required_params(self, fake_response_Collection_json) + check_missing_required_params(self) + assert len(responses.calls) == 0 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): + endpoint = '/v4/collections/{0}'.format(body['collection_id']) url = '{0}{1}'.format(base_url, endpoint) - response = { - "collection_id": "collection_id", - "training_status": { - "objects": { - "in_progress": "true", - "data_changed": "true", - "ready": "true", - "latest_failed": "true", - "description": "description" - } - }, - "created": "2000-01-23T04:56:07.000+00:00", - "name": "name", - "description": "description", - "image_count": 0, - "updated": "2000-01-23T04:56:07.000+00:00" - } + return url + + def add_mock_response(self, url, response): responses.add(responses.POST, url, body=json.dumps(response), 
status=200, content_type='application/json') - authenticator = IAMAuthenticator('bogusapikey') - service = ibm_watson.VisualRecognitionV4('YYYY-MM-DD', - authenticator=authenticator) + def call_service(self, body): + service = VisualRecognitionV4(authenticator=NoAuthAuthenticator(), + version='2019-02-11') service.set_service_url(base_url) + output = service.update_collection(**body) + return output + + def construct_full_body(self): + body = dict() + body['collection_id'] = "string1" + body.update({ + "name": "string1", + "description": "string1", + }) + return body + + def construct_required_body(self): + body = dict() + body['collection_id'] = "string1" + return body - detailed_response = service.update_collection( - collection_id='collection_id', - name='name', - description='description') - result = detailed_response.get_result() - assert result is not None - assert len(responses.calls) == 2 +#----------------------------------------------------------------------------- +# Test Class for delete_collection +#----------------------------------------------------------------------------- +class TestDeleteCollection(): + + #-------------------------------------------------------- + # Test 1: Send fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_delete_collection_response(self): + body = self.construct_full_body() + response = fake_response__json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- @responses.activate - def test_delete_collection(self): - endpoint = '/v4/collections/{0}'.format('collection_id') + def test_delete_collection_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response__json + send_request(self, 
body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 3: Send empty data and check response + #-------------------------------------------------------- + @responses.activate + def test_delete_collection_empty(self): + check_empty_required_params(self, fake_response__json) + check_missing_required_params(self) + assert len(responses.calls) == 0 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): + endpoint = '/v4/collections/{0}'.format(body['collection_id']) url = '{0}{1}'.format(base_url, endpoint) - response = {} + return url + + def add_mock_response(self, url, response): responses.add(responses.DELETE, url, body=json.dumps(response), status=200, content_type='') - authenticator = IAMAuthenticator('bogusapikey') - service = ibm_watson.VisualRecognitionV4('YYYY-MM-DD', - authenticator=authenticator) + def call_service(self, body): + service = VisualRecognitionV4(authenticator=NoAuthAuthenticator(), + version='2019-02-11') service.set_service_url(base_url) + output = service.delete_collection(**body) + return output + + def construct_full_body(self): + body = dict() + body['collection_id'] = "string1" + return body + + def construct_required_body(self): + body = dict() + body['collection_id'] = "string1" + return body - detailed_response = service.delete_collection( - collection_id='collection_id') - result = detailed_response.get_result() - assert result is not None - assert len(responses.calls) == 2 - # ######################### - # # images - # ######################### +# endregion +############################################################################## +# End of Service: Collections +############################################################################## + +############################################################################## +# Start of Service: Images +############################################################################## +# region + + 
+#----------------------------------------------------------------------------- +# Test Class for add_images +#----------------------------------------------------------------------------- +class TestAddImages(): + + #-------------------------------------------------------- + # Test 1: Send fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_add_images_response(self): + body = self.construct_full_body() + response = fake_response_ImageDetailsList_json + send_request(self, body, response) + assert len(responses.calls) == 1 + #-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- @responses.activate - def test_add_images(self): - endpoint = '/v4/collections/{0}/images'.format('collection_id') + def test_add_images_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response_ImageDetailsList_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 3: Send empty data and check response + #-------------------------------------------------------- + @responses.activate + def test_add_images_empty(self): + check_empty_required_params(self, fake_response_ImageDetailsList_json) + check_missing_required_params(self) + assert len(responses.calls) == 0 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): + endpoint = '/v4/collections/{0}/images'.format(body['collection_id']) url = '{0}{1}'.format(base_url, endpoint) - response = { - "images": [{ - "training_data": { - "objects": [{ - "location": { - "top": 1, - "left": 5, - "width": 5, - "height": 2 - }, - "object": "object" - }, { - "location": { - "top": 1, - "left": 5, - "width": 5, - "height": 2 - }, - "object": "object" - }] - }, - "created": 
"2000-01-23T04:56:07.000+00:00", - "source": { - "archive_filename": "archive_filename", - "filename": "filename", - "type": "file", - "resolved_url": "resolved_url", - "source_url": "source_url" - }, - "image_id": "image_id", - "updated": "2000-01-23T04:56:07.000+00:00", - "errors": { - "code": - "invalid_field", - "message": - "The date provided for `version` is not valid. Specify dates in `YYYY-MM-DD` format.", - "more_info": - "https://cloud.ibm.com/apidocs/visual-recognition-v4#versioning", - "target": { - "type": "parameter", - "name": "version" - } - }, - "dimensions": { - "width": 6, - "height": 0 - } - }, { - "training_data": { - "objects": [{ - "location": { - "top": 1, - "left": 5, - "width": 5, - "height": 2 - }, - "object": "object" - }, { - "location": { - "top": 1, - "left": 5, - "width": 5, - "height": 2 - }, - "object": "object" - }] - }, - "created": "2000-01-23T04:56:07.000+00:00", - "source": { - "archive_filename": "archive_filename", - "filename": "filename", - "type": "file", - "resolved_url": "resolved_url", - "source_url": "source_url" - }, - "image_id": "image_id", - "updated": "2000-01-23T04:56:07.000+00:00", - "errors": { - "code": - "invalid_field", - "message": - "The date provided for `version` is not valid. 
Specify dates in `YYYY-MM-DD` format.", - "more_info": - "https://cloud.ibm.com/apidocs/visual-recognition-v4#versioning", - "target": { - "type": "parameter", - "name": "version" - } - }, - "dimensions": { - "width": 6, - "height": 0 - } - }], - "trace": - "trace", - "warnings": [{ - "code": "invalid_field", - "more_info": "more_info", - "message": "message" - }, { - "code": "invalid_field", - "more_info": "more_info", - "message": "message" - }] - } + return url + + def add_mock_response(self, url, response): responses.add(responses.POST, url, body=json.dumps(response), status=200, content_type='application/json') - authenticator = IAMAuthenticator('bogusapikey') - service = ibm_watson.VisualRecognitionV4('YYYY-MM-DD', - authenticator=authenticator) + def call_service(self, body): + service = VisualRecognitionV4(authenticator=NoAuthAuthenticator(), + version='2019-02-11') service.set_service_url(base_url) + output = service.add_images(**body) + return output + + def construct_full_body(self): + body = dict() + body['collection_id'] = "string1" + body['images_file'] = [] + body['image_url'] = [] + body['training_data'] = "string1" + return body + + def construct_required_body(self): + body = dict() + body['collection_id'] = "string1" + return body - detailed_response = service.add_images(collection_id='collection_id', - image_url='image_url', - training_data='training_data') - result = detailed_response.get_result() - assert result is not None - assert len(responses.calls) == 2 +#----------------------------------------------------------------------------- +# Test Class for list_images +#----------------------------------------------------------------------------- +class TestListImages(): + #-------------------------------------------------------- + # Test 1: Send fake data and check response + #-------------------------------------------------------- @responses.activate - def test_list_images(self): - endpoint = 
'/v4/collections/{0}/images'.format('collection_id') + def test_list_images_response(self): + body = self.construct_full_body() + response = fake_response_ImageSummaryList_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_list_images_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response_ImageSummaryList_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 3: Send empty data and check response + #-------------------------------------------------------- + @responses.activate + def test_list_images_empty(self): + check_empty_required_params(self, fake_response_ImageSummaryList_json) + check_missing_required_params(self) + assert len(responses.calls) == 0 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): + endpoint = '/v4/collections/{0}/images'.format(body['collection_id']) url = '{0}{1}'.format(base_url, endpoint) - response = { - "images": [{ - "image_id": "image_id", - "updated": "2000-01-23T04:56:07.000+00:00" - }, { - "image_id": "image_id", - "updated": "2000-01-23T04:56:07.000+00:00" - }] - } + return url + + def add_mock_response(self, url, response): responses.add(responses.GET, url, body=json.dumps(response), status=200, content_type='application/json') - authenticator = IAMAuthenticator('bogusapikey') - service = ibm_watson.VisualRecognitionV4('YYYY-MM-DD', - authenticator=authenticator) + def call_service(self, body): + service = VisualRecognitionV4(authenticator=NoAuthAuthenticator(), + version='2019-02-11') service.set_service_url(base_url) + output = service.list_images(**body) + return output + + def 
construct_full_body(self): + body = dict() + body['collection_id'] = "string1" + return body + + def construct_required_body(self): + body = dict() + body['collection_id'] = "string1" + return body - detailed_response = service.list_images(collection_id='collection_id') - result = detailed_response.get_result() - assert result is not None - assert len(responses.calls) == 2 +#----------------------------------------------------------------------------- +# Test Class for get_image_details +#----------------------------------------------------------------------------- +class TestGetImageDetails(): + + #-------------------------------------------------------- + # Test 1: Send fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_get_image_details_response(self): + body = self.construct_full_body() + response = fake_response_ImageDetails_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- @responses.activate - def test_get_image_details(self): + def test_get_image_details_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response_ImageDetails_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 3: Send empty data and check response + #-------------------------------------------------------- + @responses.activate + def test_get_image_details_empty(self): + check_empty_required_params(self, fake_response_ImageDetails_json) + check_missing_required_params(self) + assert len(responses.calls) == 0 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): endpoint = '/v4/collections/{0}/images/{1}'.format( - 'collection_id', 
'image_id').format('image_id') + body['collection_id'], body['image_id']) url = '{0}{1}'.format(base_url, endpoint) - response = { - "training_data": { - "objects": [{ - "location": { - "top": 1, - "left": 5, - "width": 5, - "height": 2 - }, - "object": "object" - }, { - "location": { - "top": 1, - "left": 5, - "width": 5, - "height": 2 - }, - "object": "object" - }] - }, - "created": "2000-01-23T04:56:07.000+00:00", - "source": { - "archive_filename": "archive_filename", - "filename": "filename", - "type": "file", - "resolved_url": "resolved_url", - "source_url": "source_url" - }, - "image_id": "image_id", - "updated": "2000-01-23T04:56:07.000+00:00", - "errors": { - "code": - "invalid_field", - "message": - "The date provided for `version` is not valid. Specify dates in `YYYY-MM-DD` format.", - "more_info": - "https://cloud.ibm.com/apidocs/visual-recognition-v4#versioning", - "target": { - "type": "parameter", - "name": "version" - } - }, - "dimensions": { - "width": 6, - "height": 0 - } - } + return url + + def add_mock_response(self, url, response): responses.add(responses.GET, url, body=json.dumps(response), status=200, content_type='application/json') - authenticator = IAMAuthenticator('bogusapikey') - service = ibm_watson.VisualRecognitionV4('YYYY-MM-DD', - authenticator=authenticator) + def call_service(self, body): + service = VisualRecognitionV4(authenticator=NoAuthAuthenticator(), + version='2019-02-11') service.set_service_url(base_url) + output = service.get_image_details(**body) + return output + + def construct_full_body(self): + body = dict() + body['collection_id'] = "string1" + body['image_id'] = "string1" + return body - detailed_response = service.get_image_details( - collection_id='collection_id', image_id='image_id') - result = detailed_response.get_result() - assert result is not None - assert len(responses.calls) == 2 + def construct_required_body(self): + body = dict() + body['collection_id'] = "string1" + body['image_id'] = "string1" + 
return body + +#----------------------------------------------------------------------------- +# Test Class for delete_image +#----------------------------------------------------------------------------- +class TestDeleteImage(): + + #-------------------------------------------------------- + # Test 1: Send fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_delete_image_response(self): + body = self.construct_full_body() + response = fake_response__json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_delete_image_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response__json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 3: Send empty data and check response + #-------------------------------------------------------- @responses.activate - def test_delete_image(self): + def test_delete_image_empty(self): + check_empty_required_params(self, fake_response__json) + check_missing_required_params(self) + assert len(responses.calls) == 0 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): endpoint = '/v4/collections/{0}/images/{1}'.format( - 'collection_id', 'image_id').format('image_id') + body['collection_id'], body['image_id']) url = '{0}{1}'.format(base_url, endpoint) - response = {} + return url + + def add_mock_response(self, url, response): responses.add(responses.DELETE, url, body=json.dumps(response), status=200, content_type='') - authenticator = IAMAuthenticator('bogusapikey') - service = ibm_watson.VisualRecognitionV4('YYYY-MM-DD', - authenticator=authenticator) + 
def call_service(self, body): + service = VisualRecognitionV4(authenticator=NoAuthAuthenticator(), + version='2019-02-11') service.set_service_url(base_url) + output = service.delete_image(**body) + return output + + def construct_full_body(self): + body = dict() + body['collection_id'] = "string1" + body['image_id'] = "string1" + return body - detailed_response = service.delete_image(collection_id='collection_id', - image_id='image_id') - result = detailed_response.get_result() - assert result is not None - assert len(responses.calls) == 2 + def construct_required_body(self): + body = dict() + body['collection_id'] = "string1" + body['image_id'] = "string1" + return body + +#----------------------------------------------------------------------------- +# Test Class for get_jpeg_image +#----------------------------------------------------------------------------- +class TestGetJpegImage(): + + #-------------------------------------------------------- + # Test 1: Send fake data and check response + #-------------------------------------------------------- @responses.activate - def test_get_jpeg_image(self): + def test_get_jpeg_image_response(self): + body = self.construct_full_body() + response = fake_response_BinaryIO_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_get_jpeg_image_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response_BinaryIO_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 3: Send empty data and check response + #-------------------------------------------------------- + @responses.activate + def test_get_jpeg_image_empty(self): + 
check_empty_required_params(self, fake_response_BinaryIO_json) + check_missing_required_params(self) + assert len(responses.calls) == 0 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): endpoint = '/v4/collections/{0}/images/{1}/jpeg'.format( - 'collection_id', 'image_id').format('image_id') + body['collection_id'], body['image_id']) url = '{0}{1}'.format(base_url, endpoint) - response = {} + return url + + def add_mock_response(self, url, response): responses.add(responses.GET, url, body=json.dumps(response), status=200, content_type='') - authenticator = IAMAuthenticator('bogusapikey') - service = ibm_watson.VisualRecognitionV4('YYYY-MM-DD', - authenticator=authenticator) + def call_service(self, body): + service = VisualRecognitionV4(authenticator=NoAuthAuthenticator(), + version='2019-02-11') service.set_service_url(base_url) + output = service.get_jpeg_image(**body) + return output + + def construct_full_body(self): + body = dict() + body['collection_id'] = "string1" + body['image_id'] = "string1" + body['size'] = "string1" + return body - detailed_response = service.get_jpeg_image( - collection_id='collection_id', image_id='image_id', size='size') - result = detailed_response.get_result() - assert result is not None - assert len(responses.calls) == 2 + def construct_required_body(self): + body = dict() + body['collection_id'] = "string1" + body['image_id'] = "string1" + return body - ######################### - # training - ######################### +# endregion +############################################################################## +# End of Service: Images +############################################################################## + +############################################################################## +# Start of Service: Training +############################################################################## +# region + + +#----------------------------------------------------------------------------- +# Test 
Class for train +#----------------------------------------------------------------------------- +class TestTrain(): + + #-------------------------------------------------------- + # Test 1: Send fake data and check response + #-------------------------------------------------------- @responses.activate - def test_train(self): - endpoint = '/v4/collections/{0}/train'.format('collection_id') + def test_train_response(self): + body = self.construct_full_body() + response = fake_response_Collection_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_train_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response_Collection_json + send_request(self, body, response) + assert len(responses.calls) == 1 + + #-------------------------------------------------------- + # Test 3: Send empty data and check response + #-------------------------------------------------------- + @responses.activate + def test_train_empty(self): + check_empty_required_params(self, fake_response_Collection_json) + check_missing_required_params(self) + assert len(responses.calls) == 0 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): + endpoint = '/v4/collections/{0}/train'.format(body['collection_id']) url = '{0}{1}'.format(base_url, endpoint) - response = { - "collection_id": "collection_id", - "training_status": { - "objects": { - "in_progress": "true", - "data_changed": "true", - "ready": "true", - "latest_failed": "true", - "description": "description" - } - }, - "created": "2000-01-23T04:56:07.000+00:00", - "name": "name", - "description": "description", - "image_count": 0, - "updated": "2000-01-23T04:56:07.000+00:00" - } + return url + + def 
add_mock_response(self, url, response): responses.add(responses.POST, url, body=json.dumps(response), status=202, content_type='application/json') - authenticator = IAMAuthenticator('bogusapikey') - service = ibm_watson.VisualRecognitionV4('YYYY-MM-DD', - authenticator=authenticator) + def call_service(self, body): + service = VisualRecognitionV4(authenticator=NoAuthAuthenticator(), + version='2019-02-11') service.set_service_url(base_url) + output = service.train(**body) + return output + + def construct_full_body(self): + body = dict() + body['collection_id'] = "string1" + return body + + def construct_required_body(self): + body = dict() + body['collection_id'] = "string1" + return body + + +#----------------------------------------------------------------------------- +# Test Class for add_image_training_data +#----------------------------------------------------------------------------- +class TestAddImageTrainingData(): + + #-------------------------------------------------------- + # Test 1: Send fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_add_image_training_data_response(self): + body = self.construct_full_body() + response = fake_response_TrainingDataObjects_json + send_request(self, body, response) + assert len(responses.calls) == 1 - detailed_response = service.train(collection_id='collection_id') - result = detailed_response.get_result() - assert result is not None - assert len(responses.calls) == 2 + #-------------------------------------------------------- + # Test 2: Send only required fake data and check response + #-------------------------------------------------------- + @responses.activate + def test_add_image_training_data_required_response(self): + # Check response with required params + body = self.construct_required_body() + response = fake_response_TrainingDataObjects_json + send_request(self, body, response) + assert len(responses.calls) == 1 + 
#-------------------------------------------------------- + # Test 3: Send empty data and check response + #-------------------------------------------------------- @responses.activate - def test_add_image_training_data(self): + def test_add_image_training_data_empty(self): + check_empty_required_params(self, + fake_response_TrainingDataObjects_json) + check_missing_required_params(self) + assert len(responses.calls) == 0 + + #----------- + #- Helpers - + #----------- + def make_url(self, body): endpoint = '/v4/collections/{0}/images/{1}/training_data'.format( - 'collection_id', 'image_id') + body['collection_id'], body['image_id']) url = '{0}{1}'.format(base_url, endpoint) - response = { - "objects": [{ - "location": { - "top": 1, - "left": 5, - "width": 5, - "height": 2 - }, - "object": "object" - }, { - "location": { - "top": 1, - "left": 5, - "width": 5, - "height": 2 - }, - "object": "object" - }] - } + return url + + def add_mock_response(self, url, response): responses.add(responses.POST, url, body=json.dumps(response), status=200, content_type='application/json') - authenticator = IAMAuthenticator('bogusapikey') - service = ibm_watson.VisualRecognitionV4('YYYY-MM-DD', - authenticator=authenticator) + def call_service(self, body): + service = VisualRecognitionV4(authenticator=NoAuthAuthenticator(), + version='2019-02-11') + service.set_service_url(base_url) + output = service.add_image_training_data(**body) + return output + + def construct_full_body(self): + body = dict() + body['collection_id'] = "string1" + body['image_id'] = "string1" + body.update({ + "objects": [], + }) + return body + + def construct_required_body(self): + body = dict() + body['collection_id'] = "string1" + body['image_id'] = "string1" + body.update({ + "objects": [], + }) + return body + + +#----------------------------------------------------------------------------- +# Test Class for get_training_usage +#----------------------------------------------------------------------------- 
+class TestGetTrainingUsage():
+
+    #--------------------------------------------------------
+    # Test 1: Send fake data and check response
+    #--------------------------------------------------------
+    @responses.activate
+    def test_get_training_usage_response(self):
+        body = self.construct_full_body()
+        response = fake_response_TrainingEvents_json
+        send_request(self, body, response)
+        assert len(responses.calls) == 1
+
+    #--------------------------------------------------------
+    # Test 2: Send only required fake data and check response
+    #--------------------------------------------------------
+    @responses.activate
+    def test_get_training_usage_required_response(self):
+        # Check response with required params
+        body = self.construct_required_body()
+        response = fake_response_TrainingEvents_json
+        send_request(self, body, response)
+        assert len(responses.calls) == 1
+
+    #--------------------------------------------------------
+    # Test 3: Send empty data and check response
+    #--------------------------------------------------------
+    @responses.activate
+    def test_get_training_usage_empty(self):
+        check_empty_response(self)
+        assert len(responses.calls) == 1
+
+    #-----------
+    #- Helpers -
+    #-----------
+    def make_url(self, body):
+        endpoint = '/v4/training_usage'
+        url = '{0}{1}'.format(base_url, endpoint)
+        return url
+
+    def add_mock_response(self, url, response):
+        responses.add(responses.GET,
+                      url,
+                      body=json.dumps(response),
+                      status=200,
+                      content_type='application/json')
+
+    def call_service(self, body):
+        service = VisualRecognitionV4(authenticator=NoAuthAuthenticator(),
+                                      version='2019-02-11')
+        service.set_service_url(base_url)
+        output = service.get_training_usage(**body)
+        return output
-        detailed_response = service.add_image_training_data(
-            collection_id='collection_id', image_id='image_id')
-        result = detailed_response.get_result()
-        assert result is not None
-        assert len(responses.calls) == 2
+
+    def construct_full_body(self):
+        body = dict()
+        body['start_time'] = "string1"
+        body['end_time'] = "string1"
+        return body
-    #########################
-    # userData
-    #########################
+
+    def construct_required_body(self):
+        body = dict()
+        return body
+
+
+# endregion
+##############################################################################
+# End of Service: Training
+##############################################################################
+
+##############################################################################
+# Start of Service: UserData
+##############################################################################
+# region
+
+
+#-----------------------------------------------------------------------------
+# Test Class for delete_user_data
+#-----------------------------------------------------------------------------
+class TestDeleteUserData():
+
+    #--------------------------------------------------------
+    # Test 1: Send fake data and check response
+    #--------------------------------------------------------
     @responses.activate
-    def test_delete_user_data(self):
+    def test_delete_user_data_response(self):
+        body = self.construct_full_body()
+        response = fake_response__json
+        send_request(self, body, response)
+        assert len(responses.calls) == 1
+
+    #--------------------------------------------------------
+    # Test 2: Send only required fake data and check response
+    #--------------------------------------------------------
+    @responses.activate
+    def test_delete_user_data_required_response(self):
+        # Check response with required params
+        body = self.construct_required_body()
+        response = fake_response__json
+        send_request(self, body, response)
+        assert len(responses.calls) == 1
+
+    #--------------------------------------------------------
+    # Test 3: Send empty data and check response
+    #--------------------------------------------------------
+    @responses.activate
+    def test_delete_user_data_empty(self):
+        check_empty_required_params(self, fake_response__json)
+        check_missing_required_params(self)
+        assert len(responses.calls) == 0
+
+    #-----------
+    #- Helpers -
+    #-----------
+    def make_url(self, body):
         endpoint = '/v4/user_data'
         url = '{0}{1}'.format(base_url, endpoint)
-        response = {}
+        return url
+
+    def add_mock_response(self, url, response):
         responses.add(responses.DELETE,
                       url,
                       body=json.dumps(response),
                       status=202,
                       content_type='')
-        authenticator = IAMAuthenticator('bogusapikey')
-        service = ibm_watson.VisualRecognitionV4('YYYY-MM-DD',
-                                                 authenticator=authenticator)
+
+    def call_service(self, body):
+        service = VisualRecognitionV4(authenticator=NoAuthAuthenticator(),
+                                      version='2019-02-11')
         service.set_service_url(base_url)
+        output = service.delete_user_data(**body)
+        return output
+
+    def construct_full_body(self):
+        body = dict()
+        body['customer_id'] = "string1"
+        return body
+
+    def construct_required_body(self):
+        body = dict()
+        body['customer_id'] = "string1"
+        return body
+
+
+# endregion
+##############################################################################
+# End of Service: UserData
+##############################################################################
+
+
+def check_empty_required_params(obj, response):
+    """Test function to assert that the operation will throw an error when given empty required data
+
+    Args:
+        obj: The generated test function
+
+    """
+    body = obj.construct_full_body()
+    body = {k: None for k in body.keys()}
+    error = False
+    try:
+        send_request(obj, body, response)
+    except ValueError:
+        error = True
+    assert error
+
+
+def check_missing_required_params(obj):
+    """Test function to assert that the operation will throw an error when missing required data
+
+    Args:
+        obj: The generated test function
+
+    """
+    body = obj.construct_full_body()
+    url = obj.make_url(body)
+    error = False
+    try:
+        send_request(obj, {}, {}, url=url)
+    except TypeError:
+        error = True
+    assert error
+
+
+def check_empty_response(obj):
+    """Test function to assert that the operation will return an empty response when given an empty request
+
+    Args:
+        obj: The generated test function
+
+    """
+    body = obj.construct_full_body()
+    url = obj.make_url(body)
+    send_request(obj, {}, {}, url=url)
+
+
+def send_request(obj, body, response, url=None):
+    """Test function to create a request, send it, and assert its accuracy to the mock response
+
+    Args:
+        obj: The generated test function
+        body: Dict filled with fake data for calling the service
+        response: Mock response to register and compare against
+
+    """
+    if not url:
+        url = obj.make_url(body)
+    obj.add_mock_response(url, response)
+    output = obj.call_service(body)
+    assert responses.calls[0].request.url.startswith(url)
+    assert output.get_result() == response
+
+
+####################
+## Mock Responses ##
+####################
-        detailed_response = service.delete_user_data(customer_id='customer_id')
-        result = detailed_response.get_result()
-        assert result is not None
-        assert len(responses.calls) == 2
+
+fake_response__json = None
+fake_response_AnalyzeResponse_json = """{"images": [], "warnings": [], "trace": "fake_trace"}"""
+fake_response_Collection_json = """{"collection_id": "fake_collection_id", "name": "fake_name", "description": "fake_description", "created": "2017-05-16T13:56:54.957Z", "updated": "2017-05-16T13:56:54.957Z", "image_count": 11, "training_status": {"objects": {"ready": false, "in_progress": false, "data_changed": true, "latest_failed": false, "description": "fake_description"}}}"""
+fake_response_CollectionsList_json = """{"collections": []}"""
+fake_response_ImageDetailsList_json = """{"images": [], "warnings": [], "trace": "fake_trace"}"""
+fake_response_ImageSummaryList_json = """{"images": []}"""
+fake_response_ImageDetails_json = """{"image_id": "fake_image_id", "updated": "2017-05-16T13:56:54.957Z", "created": "2017-05-16T13:56:54.957Z", "source": {"type": "fake_type", "filename": "fake_filename", "archive_filename": "fake_archive_filename", "source_url": "fake_source_url", "resolved_url": "fake_resolved_url"}, "dimensions": {"height": 6, "width": 5}, "errors": [], "training_data": {"objects": []}}"""
+fake_response_BinaryIO_json = """Contents of response byte-stream..."""
+fake_response_TrainingDataObjects_json = """{"objects": []}"""
+fake_response_TrainingEvents_json = """{"start_time": "2017-05-16T13:56:54.957Z", "end_time": "2017-05-16T13:56:54.957Z", "completed_events": 16, "trained_images": 14, "events": []}"""
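
For review purposes, the negative-path logic of the generated `check_empty_required_params` and `check_missing_required_params` helpers can be sketched standalone against a toy service. This is a minimal sketch with no HTTP layer; `ToyService` and its single operation are hypothetical stand-ins, not part of the SDK:

```python
class ToyService:
    """Hypothetical stand-in for a generated SDK client with one operation."""

    def delete_user_data(self, customer_id):
        # Mirrors the SDK convention assumed by the tests:
        # a required parameter set to None raises ValueError.
        if customer_id is None:
            raise ValueError('customer_id must be provided')
        return {}


def check_empty_required_params(service, full_body):
    """Null out every required field and expect the call to be rejected."""
    body = {k: None for k in full_body.keys()}
    try:
        service.delete_user_data(**body)
    except ValueError:
        return True
    return False


def check_missing_required_params(service):
    """Omit required arguments entirely; Python itself raises TypeError."""
    try:
        service.delete_user_data()
    except TypeError:
        return True
    return False


service = ToyService()
assert check_empty_required_params(service, {'customer_id': 'string1'})
assert check_missing_required_params(service)
```

The two failure modes are deliberately distinct: an explicit `None` is a runtime validation error (`ValueError`), while a missing keyword never reaches the SDK at all (`TypeError` from the call itself), which is why the generated test asserts `len(responses.calls) == 0` afterwards.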
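
Similarly, the register-mock / call / assert cycle that `send_request` drives through each class's `make_url` / `add_mock_response` / `call_service` trio can be approximated without the `responses` library using an in-memory transport. All names below are illustrative, not SDK APIs:

```python
import json


class MockTransport:
    """Illustrative stand-in for the responses mock: maps (method, url) to canned replies."""

    def __init__(self):
        self.calls = []   # every request made, analogous to responses.calls
        self.routes = {}  # (method, url) -> (status, body)

    def add(self, method, url, status, body):
        self.routes[(method, url)] = (status, body)

    def request(self, method, url):
        self.calls.append((method, url))
        return self.routes[(method, url)]


def send_request(transport, method, url, expected):
    """Register the mock, fire the request, and assert the round trip is faithful."""
    transport.add(method, url, 200, json.dumps(expected))
    status, raw = transport.request(method, url)
    assert status == 200
    assert json.loads(raw) == expected
    return json.loads(raw)


transport = MockTransport()
send_request(transport, 'GET', 'https://test/v4/training_usage',
             {'completed_events': 16, 'trained_images': 14})
assert len(transport.calls) == 1  # the same invariant each generated test checks
```

This mirrors the generated pattern's key property: the mock and the assertion share one source of truth (the `expected` payload), so a drifting serializer or URL builder fails the test rather than silently passing.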