diff --git a/CHANGELOG.md b/CHANGELOG.md
index 02ccc86a03..84b8709cbe 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -4,6 +4,24 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
+## 0.26.0 (20th December, 2023)
+
+### Added
+
+* The `proxy` argument was added. You should use the `proxy` argument instead of the deprecated `proxies`, or use `mounts=` for more complex configurations. (#2879)
+
+### Deprecated
+
+* The `proxies` argument is now deprecated. It will continue to work, but will be removed in a future release. (#2879)
+
+### Fixed
+
+* Fix cases of double escaping of URL path components. Allow `/` as a safe character in the query portion. (#2990)
+* Handle `NO_PROXY` envvar cases when a fully qualified URL is supplied as the value. (#2741)
+* Allow URLs where username or password contains unescaped '@'. (#2986)
+* Ensure ASGI `raw_path` does not include URL query component. (#2999)
+* Ensure `Response.iter_text()` cannot yield empty strings. (#2998)
+
## 0.25.2 (24th November, 2023)
### Added
diff --git a/docs/advanced.md b/docs/advanced.md
index 2a4779662e..bb003a1a52 100644
--- a/docs/advanced.md
+++ b/docs/advanced.md
@@ -504,7 +504,7 @@ The `NetRCAuth()` class uses [the `netrc.netrc()` function from the Python stand
## HTTP Proxying
-HTTPX supports setting up [HTTP proxies](https://en.wikipedia.org/wiki/Proxy_server#Web_proxy_servers) via the `proxies` parameter to be passed on client initialization or top-level API functions like `httpx.get(..., proxies=...)`.
+HTTPX supports setting up [HTTP proxies](https://en.wikipedia.org/wiki/Proxy_server#Web_proxy_servers) via the `proxy` parameter to be passed on client initialization or top-level API functions like `httpx.get(..., proxy=...)`.

@@ -516,19 +516,19 @@ HTTPX supports setting up [HTTP proxies](https://en.wikipedia.org/wiki/Proxy_ser
To route all traffic (HTTP and HTTPS) to a proxy located at `http://localhost:8030`, pass the proxy URL to the client...
```python
-with httpx.Client(proxies="http://localhost:8030") as client:
+with httpx.Client(proxy="http://localhost:8030") as client:
...
```
-For more advanced use cases, pass a proxies `dict`. For example, to route HTTP and HTTPS requests to 2 different proxies, respectively located at `http://localhost:8030`, and `http://localhost:8031`, pass a `dict` of proxy URLs:
+For more advanced use cases, pass a `mounts` `dict`. For example, to route HTTP and HTTPS requests to two different proxies, located at `http://localhost:8030` and `http://localhost:8031` respectively, pass a `dict` mapping URL patterns to transports:
```python
-proxies = {
- "http://": "http://localhost:8030",
- "https://": "http://localhost:8031",
+proxy_mounts = {
+ "http://": httpx.HTTPTransport(proxy="http://localhost:8030"),
+ "https://": httpx.HTTPTransport(proxy="http://localhost:8031"),
}
-with httpx.Client(proxies=proxies) as client:
+with httpx.Client(mounts=proxy_mounts) as client:
...
```
@@ -546,132 +546,10 @@ For detailed information about proxy routing, see the [Routing](#routing) sectio
Proxy credentials can be passed as the `userinfo` section of the proxy URL. For example:
```python
-proxies = {
- "http://": "http://username:password@localhost:8030",
- # ...
-}
-```
-
-### Routing
-
-HTTPX provides fine-grained controls for deciding which requests should go through a proxy, and which shouldn't. This process is known as proxy routing.
-
-The `proxies` dictionary maps URL patterns ("proxy keys") to proxy URLs. HTTPX matches requested URLs against proxy keys to decide which proxy should be used, if any. Matching is done from most specific proxy keys (e.g. `https://<domain>:<port>`) to least specific ones (e.g. `https://`).
-
-HTTPX supports routing proxies based on **scheme**, **domain**, **port**, or a combination of these.
-
-#### Wildcard routing
-
-Route everything through a proxy...
-
-```python
-proxies = {
- "all://": "http://localhost:8030",
-}
-```
-
-#### Scheme routing
-
-Route HTTP requests through one proxy, and HTTPS requests through another...
-
-```python
-proxies = {
- "http://": "http://localhost:8030",
- "https://": "http://localhost:8031",
-}
-```
-
-#### Domain routing
-
-Proxy all requests on domain "example.com", let other requests pass through...
-
-```python
-proxies = {
- "all://example.com": "http://localhost:8030",
-}
-```
-
-Proxy HTTP requests on domain "example.com", let HTTPS and other requests pass through...
-
-```python
-proxies = {
- "http://example.com": "http://localhost:8030",
-}
-```
-
-Proxy all requests to "example.com" and its subdomains, let other requests pass through...
-
-```python
-proxies = {
- "all://*example.com": "http://localhost:8030",
-}
-```
-
-Proxy all requests to strict subdomains of "example.com", let "example.com" and other requests pass through...
-
-```python
-proxies = {
- "all://*.example.com": "http://localhost:8030",
-}
-```
-
-#### Port routing
-
-Proxy HTTPS requests on port 1234 to "example.com"...
-
-```python
-proxies = {
- "https://example.com:1234": "http://localhost:8030",
-}
-```
-
-Proxy all requests on port 1234...
-
-```python
-proxies = {
- "all://*:1234": "http://localhost:8030",
-}
-```
-
-#### No-proxy support
-
-It is also possible to define requests that _shouldn't_ be routed through proxies.
-
-To do so, pass `None` as the proxy URL. For example...
-
-```python
-proxies = {
- # Route requests through a proxy by default...
- "all://": "http://localhost:8031",
- # Except those for "example.com".
- "all://example.com": None,
-}
-```
-
-#### Complex configuration example
-
-You can combine the routing features outlined above to build complex proxy routing configurations. For example...
-
-```python
-proxies = {
- # Route all traffic through a proxy by default...
- "all://": "http://localhost:8030",
- # But don't use proxies for HTTPS requests to "domain.io"...
- "https://domain.io": None,
- # And use another proxy for requests to "example.com" and its subdomains...
- "all://*example.com": "http://localhost:8031",
- # And yet another proxy if HTTP is used,
- # and the "internal" subdomain on port 5550 is requested...
- "http://internal.example.com:5550": "http://localhost:8032",
-}
+with httpx.Client(proxy="http://username:password@localhost:8030") as client:
+ ...
```
-#### Environment variables
-
-HTTP proxying can also be configured through environment variables, although with less fine-grained control.
-
-See documentation on [`HTTP_PROXY`, `HTTPS_PROXY`, `ALL_PROXY`](environment_variables.md#http_proxy-https_proxy-all_proxy) for more information.
-
### Proxy mechanisms
!!! note
@@ -707,7 +585,7 @@ $ pip install httpx[socks]
You can now configure a client to make requests via a proxy using the SOCKS protocol:
```python
-httpx.Client(proxies='socks5://user:pass@host:port')
+httpx.Client(proxy='socks5://user:pass@host:port')
```
## Timeout Configuration
@@ -1294,3 +1172,125 @@ Adding support for custom schemes:
mounts = {"file://": FileSystemTransport()}
client = httpx.Client(mounts=mounts)
```
+
+### Routing
+
+HTTPX provides a powerful mechanism for routing requests, allowing you to write complex rules that specify which transport should be used for each request.
+
+The `mounts` dictionary maps URL patterns to HTTP transports. HTTPX matches requested URLs against URL patterns to decide which transport should be used, if any. Matching is done from most specific URL patterns (e.g. `https://<domain>:<port>`) to least specific ones (e.g. `https://`).
+
+HTTPX supports routing requests based on **scheme**, **domain**, **port**, or a combination of these.
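The "most specific pattern first" rule above can be sketched in plain Python. This is a hypothetical illustration only — the real matching lives in an internal httpx pattern class and handles more cases (wildcard subdomains, default ports) than this sort key does:

```python
# Hypothetical sketch of "most specific pattern first" ordering.
# A pattern is more specific if it names a host, then a port,
# then a concrete scheme rather than the "all" wildcard.
def specificity(pattern: str):
    scheme, _, host = pattern.partition("://")
    return (host != "", ":" in host, scheme != "all")

patterns = ["https://", "all://", "https://example.com:1234", "all://example.com"]
ordered = sorted(patterns, key=specificity, reverse=True)
# Most specific first: the scheme+host+port pattern wins,
# the bare "all://" wildcard is consulted last.
```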
+
+#### Wildcard routing
+
+Route everything through a transport...
+
+```python
+mounts = {
+ "all://": httpx.HTTPTransport(proxy="http://localhost:8030"),
+}
+```
+
+#### Scheme routing
+
+Route HTTP requests through one transport, and HTTPS requests through another...
+
+```python
+mounts = {
+ "http://": httpx.HTTPTransport(proxy="http://localhost:8030"),
+ "https://": httpx.HTTPTransport(proxy="http://localhost:8031"),
+}
+```
+
+#### Domain routing
+
+Proxy all requests on domain "example.com", let other requests pass through...
+
+```python
+mounts = {
+ "all://example.com": httpx.HTTPTransport(proxy="http://localhost:8030"),
+}
+```
+
+Proxy HTTP requests on domain "example.com", let HTTPS and other requests pass through...
+
+```python
+mounts = {
+ "http://example.com": httpx.HTTPTransport(proxy="http://localhost:8030"),
+}
+```
+
+Proxy all requests to "example.com" and its subdomains, let other requests pass through...
+
+```python
+mounts = {
+ "all://*example.com": httpx.HTTPTransport(proxy="http://localhost:8030"),
+}
+```
+
+Proxy all requests to strict subdomains of "example.com", let "example.com" and other requests pass through...
+
+```python
+mounts = {
+ "all://*.example.com": httpx.HTTPTransport(proxy="http://localhost:8030"),
+}
+```
+
+#### Port routing
+
+Proxy HTTPS requests on port 1234 to "example.com"...
+
+```python
+mounts = {
+ "https://example.com:1234": httpx.HTTPTransport(proxy="http://localhost:8030"),
+}
+```
+
+Proxy all requests on port 1234...
+
+```python
+mounts = {
+ "all://*:1234": httpx.HTTPTransport(proxy="http://localhost:8030"),
+}
+```
+
+#### No-proxy support
+
+It is also possible to define requests that _shouldn't_ be routed through a proxy.
+
+To do so, map the URL pattern to `None` instead of a transport. For example...
+
+```python
+mounts = {
+ # Route requests through a proxy by default...
+ "all://": httpx.HTTPTransport(proxy="http://localhost:8031"),
+ # Except those for "example.com".
+ "all://example.com": None,
+}
+```
+
+#### Complex configuration example
+
+You can combine the routing features outlined above to build complex proxy routing configurations. For example...
+
+```python
+mounts = {
+ # Route all traffic through a proxy by default...
+ "all://": httpx.HTTPTransport(proxy="http://localhost:8030"),
+ # But don't use proxies for HTTPS requests to "domain.io"...
+ "https://domain.io": None,
+ # And use another proxy for requests to "example.com" and its subdomains...
+ "all://*example.com": httpx.HTTPTransport(proxy="http://localhost:8031"),
+ # And yet another proxy if HTTP is used,
+ # and the "internal" subdomain on port 5550 is requested...
+ "http://internal.example.com:5550": httpx.HTTPTransport(proxy="http://localhost:8032"),
+}
+```
+
+#### Environment variables
+
+HTTP proxying can also be configured through environment variables, although with less fine-grained control than `mounts`.
+
+See documentation on [`HTTP_PROXY`, `HTTPS_PROXY`, `ALL_PROXY`](environment_variables.md#http_proxy-https_proxy-all_proxy) for more information.
+
diff --git a/docs/compatibility.md b/docs/compatibility.md
index 3e8bf9b965..7190b65898 100644
--- a/docs/compatibility.md
+++ b/docs/compatibility.md
@@ -157,13 +157,17 @@ httpx.get('https://www.example.com', timeout=None)
## Proxy keys
-When using `httpx.Client(proxies={...})` to map to a selection of different proxies, we use full URL schemes, such as `proxies={"http://": ..., "https://": ...}`.
+HTTPX uses the `mounts` argument for HTTP proxying and transport routing.
+It is more general than `proxies`: it lets you route requests to arbitrary transports, not just proxies.
+For more detailed documentation, see [Mounting Transports](advanced.md#mounting-transports).
+
+When using `httpx.Client(mounts={...})` to map to a selection of different transports, we use full URL schemes, such as `mounts={"http://": ..., "https://": ...}`.
This is different to the `requests` usage of `proxies={"http": ..., "https": ...}`.
-This change is for better consistency with more complex mappings, that might also include domain names, such as `proxies={"all://": ..., "all://www.example.com": None}` which maps all requests onto a proxy, except for requests to "www.example.com" which have an explicit exclusion.
+This change is for better consistency with more complex mappings that might also include domain names, such as `mounts={"all://": ..., "all://www.example.com": None}`, which maps all requests onto a transport, except for requests to "www.example.com" which have an explicit exclusion.
-Also note that `requests.Session.request(...)` allows a `proxies=...` parameter, whereas `httpx.Client.request(...)` does not.
+Also note that `requests.Session.request(...)` allows a `proxies=...` parameter, whereas `httpx.Client.request(...)` does not allow `mounts=...`.
## SSL configuration
@@ -195,7 +199,7 @@ We don't support `response.is_ok` since the naming is ambiguous there, and might
There is no notion of [prepared requests](https://requests.readthedocs.io/en/stable/user/advanced/#prepared-requests) in HTTPX. If you need to customize request instantiation, see [Request instances](advanced.md#request-instances).
-Besides, `httpx.Request()` does not support the `auth`, `timeout`, `follow_redirects`, `proxies`, `verify` and `cert` parameters. However these are available in `httpx.request`, `httpx.get`, `httpx.post` etc., as well as on [`Client` instances](advanced.md#client-instances).
+Besides, `httpx.Request()` does not support the `auth`, `timeout`, `follow_redirects`, `mounts`, `verify` and `cert` parameters. However these are available in `httpx.request`, `httpx.get`, `httpx.post` etc., as well as on [`Client` instances](advanced.md#client-instances).
## Mocking
diff --git a/docs/contributing.md b/docs/contributing.md
index 1d44616f73..47dd9dc5e3 100644
--- a/docs/contributing.md
+++ b/docs/contributing.md
@@ -213,9 +213,7 @@ this is where our previously generated `client.pem` comes in:
```
import httpx
-proxies = {"all://": "http://127.0.0.1:8080/"}
-
-with httpx.Client(proxies=proxies, verify="/path/to/client.pem") as client:
+with httpx.Client(proxy="http://127.0.0.1:8080/", verify="/path/to/client.pem") as client:
response = client.get("https://example.org")
print(response.status_code) # should print 200
```
diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md
index 459f744edf..a0cb210ccf 100644
--- a/docs/troubleshooting.md
+++ b/docs/troubleshooting.md
@@ -19,9 +19,9 @@ httpx.ProxyError: _ssl.c:1091: The handshake operation timed out
**Resolution**: it is likely that you've set up your proxies like this...
```python
-proxies = {
- "http://": "http://myproxy.org",
- "https://": "https://myproxy.org",
+mounts = {
+ "http://": httpx.HTTPTransport(proxy="http://myproxy.org"),
+ "https://": httpx.HTTPTransport(proxy="https://myproxy.org"),
}
```
@@ -32,16 +32,18 @@ But if you get the error above, it is likely that your proxy doesn't support con
Change the scheme of your HTTPS proxy to `http://...` instead of `https://...`:
```python
-proxies = {
- "http://": "http://myproxy.org",
- "https://": "http://myproxy.org",
+mounts = {
+ "http://": httpx.HTTPTransport(proxy="http://myproxy.org"),
+ "https://": httpx.HTTPTransport(proxy="http://myproxy.org"),
}
```
This can be simplified to:
```python
-proxies = "http://myproxy.org"
+proxy = "http://myproxy.org"
+with httpx.Client(proxy=proxy) as client:
+ ...
```
For more information, see [Proxies: FORWARD vs TUNNEL](advanced.md#forward-vs-tunnel).
diff --git a/httpx/__version__.py b/httpx/__version__.py
index c6bc0ac023..3edc842c69 100644
--- a/httpx/__version__.py
+++ b/httpx/__version__.py
@@ -1,3 +1,3 @@
__title__ = "httpx"
__description__ = "A next generation HTTP client, for Python 3."
-__version__ = "0.25.2"
+__version__ = "0.26.0"
diff --git a/httpx/_api.py b/httpx/_api.py
index 571289cf2b..c7af947218 100644
--- a/httpx/_api.py
+++ b/httpx/_api.py
@@ -10,6 +10,7 @@
CookieTypes,
HeaderTypes,
ProxiesTypes,
+ ProxyTypes,
QueryParamTypes,
RequestContent,
RequestData,
@@ -32,6 +33,7 @@ def request(
headers: typing.Optional[HeaderTypes] = None,
cookies: typing.Optional[CookieTypes] = None,
auth: typing.Optional[AuthTypes] = None,
+ proxy: typing.Optional[ProxyTypes] = None,
proxies: typing.Optional[ProxiesTypes] = None,
timeout: TimeoutTypes = DEFAULT_TIMEOUT_CONFIG,
follow_redirects: bool = False,
@@ -63,6 +65,7 @@ def request(
request.
* **auth** - *(optional)* An authentication class to use when sending the
request.
+ * **proxy** - *(optional)* A proxy URL where all the traffic should be routed.
* **proxies** - *(optional)* A dictionary mapping proxy keys to proxy URLs.
* **timeout** - *(optional)* The timeout configuration to use when sending
the request.
@@ -91,6 +94,7 @@ def request(
"""
with Client(
cookies=cookies,
+ proxy=proxy,
proxies=proxies,
cert=cert,
verify=verify,
@@ -124,6 +128,7 @@ def stream(
headers: typing.Optional[HeaderTypes] = None,
cookies: typing.Optional[CookieTypes] = None,
auth: typing.Optional[AuthTypes] = None,
+ proxy: typing.Optional[ProxyTypes] = None,
proxies: typing.Optional[ProxiesTypes] = None,
timeout: TimeoutTypes = DEFAULT_TIMEOUT_CONFIG,
follow_redirects: bool = False,
@@ -143,6 +148,7 @@ def stream(
"""
with Client(
cookies=cookies,
+ proxy=proxy,
proxies=proxies,
cert=cert,
verify=verify,
@@ -171,6 +177,7 @@ def get(
headers: typing.Optional[HeaderTypes] = None,
cookies: typing.Optional[CookieTypes] = None,
auth: typing.Optional[AuthTypes] = None,
+ proxy: typing.Optional[ProxyTypes] = None,
proxies: typing.Optional[ProxiesTypes] = None,
follow_redirects: bool = False,
cert: typing.Optional[CertTypes] = None,
@@ -193,6 +200,7 @@ def get(
headers=headers,
cookies=cookies,
auth=auth,
+ proxy=proxy,
proxies=proxies,
follow_redirects=follow_redirects,
cert=cert,
@@ -209,6 +217,7 @@ def options(
headers: typing.Optional[HeaderTypes] = None,
cookies: typing.Optional[CookieTypes] = None,
auth: typing.Optional[AuthTypes] = None,
+ proxy: typing.Optional[ProxyTypes] = None,
proxies: typing.Optional[ProxiesTypes] = None,
follow_redirects: bool = False,
cert: typing.Optional[CertTypes] = None,
@@ -231,6 +240,7 @@ def options(
headers=headers,
cookies=cookies,
auth=auth,
+ proxy=proxy,
proxies=proxies,
follow_redirects=follow_redirects,
cert=cert,
@@ -247,6 +257,7 @@ def head(
headers: typing.Optional[HeaderTypes] = None,
cookies: typing.Optional[CookieTypes] = None,
auth: typing.Optional[AuthTypes] = None,
+ proxy: typing.Optional[ProxyTypes] = None,
proxies: typing.Optional[ProxiesTypes] = None,
follow_redirects: bool = False,
cert: typing.Optional[CertTypes] = None,
@@ -269,6 +280,7 @@ def head(
headers=headers,
cookies=cookies,
auth=auth,
+ proxy=proxy,
proxies=proxies,
follow_redirects=follow_redirects,
cert=cert,
@@ -289,6 +301,7 @@ def post(
headers: typing.Optional[HeaderTypes] = None,
cookies: typing.Optional[CookieTypes] = None,
auth: typing.Optional[AuthTypes] = None,
+ proxy: typing.Optional[ProxyTypes] = None,
proxies: typing.Optional[ProxiesTypes] = None,
follow_redirects: bool = False,
cert: typing.Optional[CertTypes] = None,
@@ -312,6 +325,7 @@ def post(
headers=headers,
cookies=cookies,
auth=auth,
+ proxy=proxy,
proxies=proxies,
follow_redirects=follow_redirects,
cert=cert,
@@ -332,6 +346,7 @@ def put(
headers: typing.Optional[HeaderTypes] = None,
cookies: typing.Optional[CookieTypes] = None,
auth: typing.Optional[AuthTypes] = None,
+ proxy: typing.Optional[ProxyTypes] = None,
proxies: typing.Optional[ProxiesTypes] = None,
follow_redirects: bool = False,
cert: typing.Optional[CertTypes] = None,
@@ -355,6 +370,7 @@ def put(
headers=headers,
cookies=cookies,
auth=auth,
+ proxy=proxy,
proxies=proxies,
follow_redirects=follow_redirects,
cert=cert,
@@ -375,6 +391,7 @@ def patch(
headers: typing.Optional[HeaderTypes] = None,
cookies: typing.Optional[CookieTypes] = None,
auth: typing.Optional[AuthTypes] = None,
+ proxy: typing.Optional[ProxyTypes] = None,
proxies: typing.Optional[ProxiesTypes] = None,
follow_redirects: bool = False,
cert: typing.Optional[CertTypes] = None,
@@ -398,6 +415,7 @@ def patch(
headers=headers,
cookies=cookies,
auth=auth,
+ proxy=proxy,
proxies=proxies,
follow_redirects=follow_redirects,
cert=cert,
@@ -414,6 +432,7 @@ def delete(
headers: typing.Optional[HeaderTypes] = None,
cookies: typing.Optional[CookieTypes] = None,
auth: typing.Optional[AuthTypes] = None,
+ proxy: typing.Optional[ProxyTypes] = None,
proxies: typing.Optional[ProxiesTypes] = None,
follow_redirects: bool = False,
cert: typing.Optional[CertTypes] = None,
@@ -436,6 +455,7 @@ def delete(
headers=headers,
cookies=cookies,
auth=auth,
+ proxy=proxy,
proxies=proxies,
follow_redirects=follow_redirects,
cert=cert,
diff --git a/httpx/_client.py b/httpx/_client.py
index 2c7ae030f5..2813a84f01 100644
--- a/httpx/_client.py
+++ b/httpx/_client.py
@@ -36,6 +36,7 @@
CookieTypes,
HeaderTypes,
ProxiesTypes,
+ ProxyTypes,
QueryParamTypes,
RequestContent,
RequestData,
@@ -597,6 +598,7 @@ class Client(BaseClient):
to authenticate the client. Either a path to an SSL certificate file, or
two-tuple of (certificate file, key file), or a three-tuple of (certificate
file, key file, password).
+ * **proxy** - *(optional)* A proxy URL where all the traffic should be routed.
* **proxies** - *(optional)* A dictionary mapping proxy keys to proxy
URLs.
* **timeout** - *(optional)* The timeout configuration to use when sending
@@ -628,8 +630,11 @@ def __init__(
cert: typing.Optional[CertTypes] = None,
http1: bool = True,
http2: bool = False,
+ proxy: typing.Optional[ProxyTypes] = None,
proxies: typing.Optional[ProxiesTypes] = None,
- mounts: typing.Optional[typing.Mapping[str, BaseTransport]] = None,
+ mounts: typing.Optional[
+ typing.Mapping[str, typing.Optional[BaseTransport]]
+ ] = None,
timeout: TimeoutTypes = DEFAULT_TIMEOUT_CONFIG,
follow_redirects: bool = False,
limits: Limits = DEFAULT_LIMITS,
@@ -666,8 +671,17 @@ def __init__(
"Make sure to install httpx using `pip install httpx[http2]`."
) from None
+ if proxies:
+ message = (
+ "The 'proxies' argument is now deprecated."
+ " Use 'proxy' or 'mounts' instead."
+ )
+ warnings.warn(message, DeprecationWarning)
+ if proxy:
+                raise RuntimeError("Use either 'proxy' or 'proxies', not both.")
+
allow_env_proxies = trust_env and app is None and transport is None
- proxy_map = self._get_proxy_map(proxies, allow_env_proxies)
+ proxy_map = self._get_proxy_map(proxies or proxy, allow_env_proxies)
self._transport = self._init_transport(
verify=verify,
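The reconciliation between the new `proxy` argument and the deprecated `proxies` one can be summarized as a standalone helper. This is a hypothetical sketch for illustration — in httpx the logic lives inline in `Client.__init__`, not in a helper like this:

```python
import warnings

def resolve_proxy_argument(proxy=None, proxies=None):
    """Hypothetical sketch of the proxy/proxies reconciliation above."""
    if proxies:
        warnings.warn(
            "The 'proxies' argument is now deprecated."
            " Use 'proxy' or 'mounts' instead.",
            DeprecationWarning,
        )
        if proxy:
            raise RuntimeError("Use either 'proxy' or 'proxies', not both.")
    # The deprecated 'proxies' wins when given, otherwise fall back to 'proxy'.
    return proxies or proxy
```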
@@ -1264,7 +1278,9 @@ def __enter__(self: T) -> T:
if self._state != ClientState.UNOPENED:
msg = {
ClientState.OPENED: "Cannot open a client instance more than once.",
- ClientState.CLOSED: "Cannot reopen a client instance, once it has been closed.",
+ ClientState.CLOSED: (
+ "Cannot reopen a client instance, once it has been closed."
+ ),
}[self._state]
raise RuntimeError(msg)
@@ -1322,6 +1338,7 @@ class AsyncClient(BaseClient):
file, key file, password).
* **http2** - *(optional)* A boolean indicating if HTTP/2 support should be
enabled. Defaults to `False`.
+ * **proxy** - *(optional)* A proxy URL where all the traffic should be routed.
* **proxies** - *(optional)* A dictionary mapping HTTP protocols to proxy
URLs.
* **timeout** - *(optional)* The timeout configuration to use when sending
@@ -1353,8 +1370,11 @@ def __init__(
cert: typing.Optional[CertTypes] = None,
http1: bool = True,
http2: bool = False,
+ proxy: typing.Optional[ProxyTypes] = None,
proxies: typing.Optional[ProxiesTypes] = None,
- mounts: typing.Optional[typing.Mapping[str, AsyncBaseTransport]] = None,
+ mounts: typing.Optional[
+ typing.Mapping[str, typing.Optional[AsyncBaseTransport]]
+ ] = None,
timeout: TimeoutTypes = DEFAULT_TIMEOUT_CONFIG,
follow_redirects: bool = False,
limits: Limits = DEFAULT_LIMITS,
@@ -1391,8 +1411,17 @@ def __init__(
"Make sure to install httpx using `pip install httpx[http2]`."
) from None
+ if proxies:
+ message = (
+ "The 'proxies' argument is now deprecated."
+ " Use 'proxy' or 'mounts' instead."
+ )
+ warnings.warn(message, DeprecationWarning)
+ if proxy:
+                raise RuntimeError("Use either 'proxy' or 'proxies', not both.")
+
allow_env_proxies = trust_env and app is None and transport is None
- proxy_map = self._get_proxy_map(proxies, allow_env_proxies)
+ proxy_map = self._get_proxy_map(proxies or proxy, allow_env_proxies)
self._transport = self._init_transport(
verify=verify,
@@ -1980,7 +2009,9 @@ async def __aenter__(self: U) -> U:
if self._state != ClientState.UNOPENED:
msg = {
ClientState.OPENED: "Cannot open a client instance more than once.",
- ClientState.CLOSED: "Cannot reopen a client instance, once it has been closed.",
+ ClientState.CLOSED: (
+ "Cannot reopen a client instance, once it has been closed."
+ ),
}[self._state]
raise RuntimeError(msg)
diff --git a/httpx/_content.py b/httpx/_content.py
index 0aaea33749..cd0d17f171 100644
--- a/httpx/_content.py
+++ b/httpx/_content.py
@@ -105,7 +105,7 @@ async def __aiter__(self) -> AsyncIterator[bytes]:
def encode_content(
- content: Union[str, bytes, Iterable[bytes], AsyncIterable[bytes]]
+ content: Union[str, bytes, Iterable[bytes], AsyncIterable[bytes]],
) -> Tuple[Dict[str, str], Union[SyncByteStream, AsyncByteStream]]:
if isinstance(content, (bytes, str)):
body = content.encode("utf-8") if isinstance(content, str) else content
diff --git a/httpx/_decoders.py b/httpx/_decoders.py
index b4ac9a44af..3f507c8e04 100644
--- a/httpx/_decoders.py
+++ b/httpx/_decoders.py
@@ -212,7 +212,7 @@ def __init__(self, chunk_size: typing.Optional[int] = None) -> None:
def decode(self, content: str) -> typing.List[str]:
if self._chunk_size is None:
- return [content]
+ return [content] if content else []
self._buffer.write(content)
if self._buffer.tell() >= self._chunk_size:
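The one-line change above is what guarantees `iter_text()` never yields empty strings. A simplified sketch of just that guard (the buffered fixed-size mode of the real decoder is elided here):

```python
class TextChunker:
    """Simplified sketch of the fix: with no chunk size configured,
    an empty input now yields no chunks instead of a single empty string."""

    def __init__(self, chunk_size=None):
        self._chunk_size = chunk_size

    def decode(self, content: str):
        if self._chunk_size is None:
            # The fix: filter out the empty string rather than yielding it.
            return [content] if content else []
        raise NotImplementedError("fixed-size chunking elided in this sketch")
```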
@@ -259,7 +259,8 @@ class LineDecoder:
"""
Handles incrementally reading lines from text.
- Has the same behaviour as the stdllib splitlines, but handling the input iteratively.
+    Has the same behaviour as the stdlib splitlines,
+    but handles the input iteratively.
"""
def __init__(self) -> None:
@@ -279,7 +280,9 @@ def decode(self, text: str) -> typing.List[str]:
text = text[:-1]
if not text:
- return []
+ # NOTE: the edge case input of empty text doesn't occur in practice,
+ # because other httpx internals filter out this value
+ return [] # pragma: no cover
trailing_newline = text[-1] in NEWLINE_CHARS
lines = text.splitlines()
diff --git a/httpx/_exceptions.py b/httpx/_exceptions.py
index 24a4f8aba3..123692955b 100644
--- a/httpx/_exceptions.py
+++ b/httpx/_exceptions.py
@@ -313,7 +313,10 @@ class ResponseNotRead(StreamError):
"""
def __init__(self) -> None:
- message = "Attempted to access streaming response content, without having called `read()`."
+ message = (
+ "Attempted to access streaming response content,"
+ " without having called `read()`."
+ )
super().__init__(message)
@@ -323,7 +326,10 @@ class RequestNotRead(StreamError):
"""
def __init__(self) -> None:
- message = "Attempted to access streaming request content, without having called `read()`."
+ message = (
+ "Attempted to access streaming request content,"
+ " without having called `read()`."
+ )
super().__init__(message)
diff --git a/httpx/_main.py b/httpx/_main.py
index 7c12ce841d..adb57d5fc0 100644
--- a/httpx/_main.py
+++ b/httpx/_main.py
@@ -63,20 +63,21 @@ def print_help() -> None:
)
table.add_row(
"--auth [cyan]",
- "Username and password to include in the request. Specify '-' for the password to use "
- "a password prompt. Note that using --verbose/-v will expose the Authorization "
- "header, including the password encoding in a trivially reversible format.",
+ "Username and password to include in the request. Specify '-' for the password"
+ " to use a password prompt. Note that using --verbose/-v will expose"
+ " the Authorization header, including the password encoding"
+ " in a trivially reversible format.",
)
table.add_row(
- "--proxies [cyan]URL",
+ "--proxy [cyan]URL",
"Send the request via a proxy. Should be the URL giving the proxy address.",
)
table.add_row(
"--timeout [cyan]FLOAT",
- "Timeout value to use for network operations, such as establishing the connection, "
- "reading some data, etc... [Default: 5.0]",
+ "Timeout value to use for network operations, such as establishing the"
+ " connection, reading some data, etc... [Default: 5.0]",
)
table.add_row("--follow-redirects", "Automatically follow redirects.")
@@ -385,8 +386,8 @@ def handle_help(
),
)
@click.option(
- "--proxies",
- "proxies",
+ "--proxy",
+ "proxy",
type=str,
default=None,
help="Send the request via a proxy. Should be the URL giving the proxy address.",
@@ -455,7 +456,7 @@ def main(
headers: typing.List[typing.Tuple[str, str]],
cookies: typing.List[typing.Tuple[str, str]],
auth: typing.Optional[typing.Tuple[str, str]],
- proxies: str,
+ proxy: str,
timeout: float,
follow_redirects: bool,
verify: bool,
@@ -472,7 +473,7 @@ def main(
try:
with Client(
- proxies=proxies,
+ proxy=proxy,
timeout=timeout,
verify=verify,
http2=http2,
diff --git a/httpx/_models.py b/httpx/_models.py
index 8a5e6280f3..b8617cdab5 100644
--- a/httpx/_models.py
+++ b/httpx/_models.py
@@ -358,7 +358,8 @@ def __init__(
# Using `content=...` implies automatically populated `Host` and content
# headers, of either `Content-Length: ...` or `Transfer-Encoding: chunked`.
#
- # Using `stream=...` will not automatically include *any* auto-populated headers.
+ # Using `stream=...` will not automatically include *any*
+ # auto-populated headers.
#
# As an end-user you don't really need `stream=...`. It's only
# useful when:
@@ -852,7 +853,7 @@ def iter_text(
yield chunk
text_content = decoder.flush()
for chunk in chunker.decode(text_content):
- yield chunk
+ yield chunk # pragma: no cover
for chunk in chunker.flush():
yield chunk
@@ -956,7 +957,7 @@ async def aiter_text(
yield chunk
text_content = decoder.flush()
for chunk in chunker.decode(text_content):
- yield chunk
+ yield chunk # pragma: no cover
for chunk in chunker.flush():
yield chunk
diff --git a/httpx/_multipart.py b/httpx/_multipart.py
index 6d5baa8639..5122d5114f 100644
--- a/httpx/_multipart.py
+++ b/httpx/_multipart.py
@@ -48,7 +48,8 @@ def __init__(
)
if value is not None and not isinstance(value, (str, bytes, int, float)):
raise TypeError(
- f"Invalid type for value. Expected primitive type, got {type(value)}: {value!r}"
+ "Invalid type for value. Expected primitive type,"
+ f" got {type(value)}: {value!r}"
)
self.name = name
self.value: typing.Union[str, bytes] = (
@@ -96,11 +97,13 @@ def __init__(self, name: str, value: FileTypes) -> None:
content_type: typing.Optional[str] = None
# This large tuple based API largely mirror's requests' API
- # It would be good to think of better APIs for this that we could include in httpx 2.0
- # since variable length tuples (especially of 4 elements) are quite unwieldly
+ # It would be good to think of better APIs for this that we could
+    # include in httpx 2.0, since variable length tuples (especially of 4 elements)
+    # are quite unwieldy
if isinstance(value, tuple):
if len(value) == 2:
- # neither the 3rd parameter (content_type) nor the 4th (headers) was included
+ # neither the 3rd parameter (content_type) nor the 4th (headers)
+ # was included
filename, fileobj = value # type: ignore
elif len(value) == 3:
filename, fileobj, content_type = value # type: ignore
@@ -116,9 +119,9 @@ def __init__(self, name: str, value: FileTypes) -> None:
has_content_type_header = any("content-type" in key.lower() for key in headers)
if content_type is not None and not has_content_type_header:
- # note that unlike requests, we ignore the content_type
- # provided in the 3rd tuple element if it is also included in the headers
- # requests does the opposite (it overwrites the header with the 3rd tuple element)
+ # note that unlike requests, we ignore the content_type provided in the 3rd
+            # tuple element if it is also included in the headers; requests does
+            # the opposite (it overwrites the header with the 3rd tuple element)
headers["Content-Type"] = content_type
if isinstance(fileobj, io.StringIO):
diff --git a/httpx/_transports/asgi.py b/httpx/_transports/asgi.py
index f67f0fbd5b..08cd392f75 100644
--- a/httpx/_transports/asgi.py
+++ b/httpx/_transports/asgi.py
@@ -103,7 +103,7 @@ async def handle_async_request(
"headers": [(k.lower(), v) for (k, v) in request.headers.raw],
"scheme": request.url.scheme,
"path": request.url.path,
- "raw_path": request.url.raw_path,
+ "raw_path": request.url.raw_path.split(b"?")[0],
"query_string": request.url.query,
"server": (request.url.host, request.url.port),
"client": self.client,
diff --git a/httpx/_transports/default.py b/httpx/_transports/default.py
index 343c588f9f..14a087389a 100644
--- a/httpx/_transports/default.py
+++ b/httpx/_transports/default.py
@@ -47,7 +47,8 @@
WriteTimeout,
)
from .._models import Request, Response
-from .._types import AsyncByteStream, CertTypes, SyncByteStream, VerifyTypes
+from .._types import AsyncByteStream, CertTypes, ProxyTypes, SyncByteStream, VerifyTypes
+from .._urls import URL
from .base import AsyncBaseTransport, BaseTransport
T = typing.TypeVar("T", bound="HTTPTransport")
@@ -124,13 +125,14 @@ def __init__(
http2: bool = False,
limits: Limits = DEFAULT_LIMITS,
trust_env: bool = True,
- proxy: typing.Optional[Proxy] = None,
+ proxy: typing.Optional[ProxyTypes] = None,
uds: typing.Optional[str] = None,
local_address: typing.Optional[str] = None,
retries: int = 0,
socket_options: typing.Optional[typing.Iterable[SOCKET_OPTION]] = None,
) -> None:
ssl_context = create_ssl_context(verify=verify, cert=cert, trust_env=trust_env)
+ proxy = Proxy(url=proxy) if isinstance(proxy, (str, URL)) else proxy
if proxy is None:
self._pool = httpcore.ConnectionPool(
@@ -190,7 +192,8 @@ def __init__(
)
else: # pragma: no cover
raise ValueError(
- f"Proxy protocol must be either 'http', 'https', or 'socks5', but got {proxy.url.scheme!r}."
+ "Proxy protocol must be either 'http', 'https', or 'socks5',"
+ f" but got {proxy.url.scheme!r}."
)
def __enter__(self: T) -> T: # Use generics for subclass support.
@@ -263,13 +266,14 @@ def __init__(
http2: bool = False,
limits: Limits = DEFAULT_LIMITS,
trust_env: bool = True,
- proxy: typing.Optional[Proxy] = None,
+ proxy: typing.Optional[ProxyTypes] = None,
uds: typing.Optional[str] = None,
local_address: typing.Optional[str] = None,
retries: int = 0,
socket_options: typing.Optional[typing.Iterable[SOCKET_OPTION]] = None,
) -> None:
ssl_context = create_ssl_context(verify=verify, cert=cert, trust_env=trust_env)
+ proxy = Proxy(url=proxy) if isinstance(proxy, (str, URL)) else proxy
if proxy is None:
self._pool = httpcore.AsyncConnectionPool(
@@ -328,7 +332,8 @@ def __init__(
)
else: # pragma: no cover
raise ValueError(
- f"Proxy protocol must be either 'http', 'https', or 'socks5', but got {proxy.url.scheme!r}."
+ "Proxy protocol must be either 'http', 'https', or 'socks5',"
+ " but got {proxy.url.scheme!r}."
)
async def __aenter__(self: A) -> A: # Use generics for subclass support.
diff --git a/httpx/_types.py b/httpx/_types.py
index 83cf35a32a..649d101d54 100644
--- a/httpx/_types.py
+++ b/httpx/_types.py
@@ -78,7 +78,8 @@
Tuple[Optional[float], Optional[float], Optional[float], Optional[float]],
"Timeout",
]
-ProxiesTypes = Union[URLTypes, "Proxy", Dict[URLTypes, Union[None, URLTypes, "Proxy"]]]
+ProxyTypes = Union[URLTypes, "Proxy"]
+ProxiesTypes = Union[ProxyTypes, Dict[URLTypes, Union[None, ProxyTypes]]]
AuthTypes = Union[
Tuple[Union[str, bytes], Union[str, bytes]],
diff --git a/httpx/_urlparse.py b/httpx/_urlparse.py
index 8e060424e8..07bbea9070 100644
--- a/httpx/_urlparse.py
+++ b/httpx/_urlparse.py
@@ -62,8 +62,8 @@
(
r"(?:(?P{userinfo})@)?" r"(?P{host})" r":?(?P{port})?"
).format(
- userinfo="[^@]*", # Any character sequence not including '@'.
- host="(\\[.*\\]|[^:]*)", # Either any character sequence not including ':',
+ userinfo=".*", # Any character sequence.
+ host="(\\[.*\\]|[^:@]*)", # Either any character sequence excluding ':' or '@',
# or an IPv6 address enclosed within square brackets.
port=".*", # Any character sequence.
)
@@ -260,10 +260,8 @@ def urlparse(url: str = "", **kwargs: typing.Optional[str]) -> ParseResult:
# For 'path' we need to drop ? and # from the GEN_DELIMS set.
parsed_path: str = quote(path, safe=SUB_DELIMS + ":/[]@")
# For 'query' we need to drop '#' from the GEN_DELIMS set.
- # We also exclude '/' because it is more robust to replace it with a percent
- # encoding despite it not being a requirement of the spec.
parsed_query: typing.Optional[str] = (
- None if query is None else quote(query, safe=SUB_DELIMS + ":?[]@")
+ None if query is None else quote(query, safe=SUB_DELIMS + ":/?[]@")
)
# For 'fragment' we can include all of the GEN_DELIMS set.
parsed_fragment: typing.Optional[str] = (
@@ -360,24 +358,25 @@ def normalize_port(
def validate_path(path: str, has_scheme: bool, has_authority: bool) -> None:
"""
- Path validation rules that depend on if the URL contains a scheme or authority component.
+ Path validation rules that depend on whether the URL contains
+ a scheme or authority component.
See https://datatracker.ietf.org/doc/html/rfc3986.html#section-3.3
"""
if has_authority:
- # > If a URI contains an authority component, then the path component
- # > must either be empty or begin with a slash ("/") character."
+ # If a URI contains an authority component, then the path component
+ # must either be empty or begin with a slash ("/") character.
if path and not path.startswith("/"):
raise InvalidURL("For absolute URLs, path must be empty or begin with '/'")
else:
- # > If a URI does not contain an authority component, then the path cannot begin
- # > with two slash characters ("//").
+ # If a URI does not contain an authority component, then the path cannot begin
+ # with two slash characters ("//").
if path.startswith("//"):
raise InvalidURL(
"URLs with no authority component cannot have a path starting with '//'"
)
- # > In addition, a URI reference (Section 4.1) may be a relative-path reference, in which
- # > case the first path segment cannot contain a colon (":") character.
+ # In addition, a URI reference (Section 4.1) may be a relative-path reference,
+ # in which case the first path segment cannot contain a colon (":") character.
if path.startswith(":") and not has_scheme:
raise InvalidURL(
"URLs with no scheme component cannot have a path starting with ':'"
@@ -431,13 +430,12 @@ def is_safe(string: str, safe: str = "/") -> bool:
if char not in NON_ESCAPED_CHARS:
return False
- # Any '%' characters must be valid '%xx' escape sequences.
- return string.count("%") == len(PERCENT_ENCODED_REGEX.findall(string))
+ return True
-def quote(string: str, safe: str = "/") -> str:
+def percent_encoded(string: str, safe: str = "/") -> str:
"""
- Use percent-encoding to quote a string if required.
+ Use percent-encoding to quote a string.
"""
if is_safe(string, safe=safe):
return string
@@ -448,17 +446,57 @@ def quote(string: str, safe: str = "/") -> str:
)
+def quote(string: str, safe: str = "/") -> str:
+ """
+ Use percent-encoding to quote a string, omitting existing '%xx' escape sequences.
+
+ See: https://www.rfc-editor.org/rfc/rfc3986#section-2.1
+
+ * `string`: The string to be percent-escaped.
+ * `safe`: A string containing characters that may be treated as safe, and do not
+ need to be escaped. Unreserved characters are always treated as safe.
+ See: https://www.rfc-editor.org/rfc/rfc3986#section-2.3
+ """
+ parts = []
+ current_position = 0
+ for match in re.finditer(PERCENT_ENCODED_REGEX, string):
+ start_position, end_position = match.start(), match.end()
+ matched_text = match.group(0)
+ # Add any text up to the '%xx' escape sequence.
+ if start_position != current_position:
+ leading_text = string[current_position:start_position]
+ parts.append(percent_encoded(leading_text, safe=safe))
+
+ # Add the '%xx' escape sequence.
+ parts.append(matched_text)
+ current_position = end_position
+
+ # Add any text after the final '%xx' escape sequence.
+ if current_position != len(string):
+ trailing_text = string[current_position:]
+ parts.append(percent_encoded(trailing_text, safe=safe))
+
+ return "".join(parts)
+
+
def urlencode(items: typing.List[typing.Tuple[str, str]]) -> str:
- # We can use a much simpler version of the stdlib urlencode here because
- # we don't need to handle a bunch of different typing cases, such as bytes vs str.
- #
- # https://github.com/python/cpython/blob/b2f7b2ef0b5421e01efb8c7bee2ef95d3bab77eb/Lib/urllib/parse.py#L926
- #
- # Note that we use '%20' encoding for spaces. and '%2F for '/'.
- # This is slightly different than `requests`, but is the behaviour that browsers use.
- #
- # See
- # - https://github.com/encode/httpx/issues/2536
- # - https://github.com/encode/httpx/issues/2721
- # - https://docs.python.org/3/library/urllib.parse.html#urllib.parse.urlencode
- return "&".join([quote(k, safe="") + "=" + quote(v, safe="") for k, v in items])
+ """
+ We can use a much simpler version of the stdlib urlencode here because
+ we don't need to handle a bunch of different typing cases, such as bytes vs str.
+
+ https://github.com/python/cpython/blob/b2f7b2ef0b5421e01efb8c7bee2ef95d3bab77eb/Lib/urllib/parse.py#L926
+
+ Note that we use '%20' encoding for spaces, and '%2F' for '/'.
+ This is slightly different than `requests`, but is the behaviour that browsers use.
+
+ See
+ - https://github.com/encode/httpx/issues/2536
+ - https://github.com/encode/httpx/issues/2721
+ - https://docs.python.org/3/library/urllib.parse.html#urllib.parse.urlencode
+ """
+ return "&".join(
+ [
+ percent_encoded(k, safe="") + "=" + percent_encoded(v, safe="")
+ for k, v in items
+ ]
+ )
diff --git a/httpx/_urls.py b/httpx/_urls.py
index b023941b62..26202e95db 100644
--- a/httpx/_urls.py
+++ b/httpx/_urls.py
@@ -51,21 +51,23 @@ class URL:
assert url.raw_host == b"xn--fiqs8s.icom.museum"
* `url.port` is either None or an integer. URLs that include the default port for
- "http", "https", "ws", "wss", and "ftp" schemes have their port normalized to `None`.
+ "http", "https", "ws", "wss", and "ftp" schemes have their port
+ normalized to `None`.
assert httpx.URL("http://example.com") == httpx.URL("http://example.com:80")
assert httpx.URL("http://example.com").port is None
assert httpx.URL("http://example.com:80").port is None
- * `url.userinfo` is raw bytes, without URL escaping. Usually you'll want to work with
- `url.username` and `url.password` instead, which handle the URL escaping.
+ * `url.userinfo` is raw bytes, without URL escaping. Usually you'll want to work
+ with `url.username` and `url.password` instead, which handle the URL escaping.
* `url.raw_path` is raw bytes of both the path and query, without URL escaping.
This portion is used as the target when constructing HTTP requests. Usually you'll
want to work with `url.path` instead.
- * `url.query` is raw bytes, without URL escaping. A URL query string portion can only
- be properly URL escaped when decoding the parameter names and values themselves.
+ * `url.query` is raw bytes, without URL escaping. A URL query string portion can
+ only be properly URL escaped when decoding the parameter names and values
+ themselves.
"""
def __init__(
@@ -115,7 +117,8 @@ def __init__(
self._uri_reference = url._uri_reference.copy_with(**kwargs)
else:
raise TypeError(
- f"Invalid type for url. Expected str or httpx.URL, got {type(url)}: {url!r}"
+ "Invalid type for url. Expected str or httpx.URL,"
+ f" got {type(url)}: {url!r}"
)
@property
@@ -305,7 +308,8 @@ def raw(self) -> RawURL:
Provides the (scheme, host, port, target) for the outgoing request.
In older versions of `httpx` this was used in the low-level transport API.
- We no longer use `RawURL`, and this property will be deprecated in a future release.
+ We no longer use `RawURL`, and this property will be deprecated
+ in a future release.
"""
return RawURL(
self.raw_scheme,
@@ -342,7 +346,9 @@ def copy_with(self, **kwargs: typing.Any) -> "URL":
For example:
- url = httpx.URL("https://www.example.com").copy_with(username="jo@gmail.com", password="a secret")
+ url = httpx.URL("https://www.example.com").copy_with(
+ username="jo@gmail.com", password="a secret"
+ )
assert url == "https://jo%40email.com:a%20secret@www.example.com"
"""
return URL(self, **kwargs)
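The `copy_with` docstring above relies on the username and password being percent-escaped into the userinfo component. A rough stdlib equivalent of that escaping (the `encode_userinfo` helper is hypothetical, not the httpx implementation):

```python
from urllib.parse import quote


def encode_userinfo(username: str, password: str) -> str:
    # With safe="", reserved characters such as '@' and ' ' are escaped
    # to %40 and %20, so they cannot be confused with the '@' that
    # separates userinfo from the host.
    return f"{quote(username, safe='')}:{quote(password, safe='')}"
```

This is why a username like `jo@gmail.com` appears as `jo%40gmail.com` in the resulting URL.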
diff --git a/httpx/_utils.py b/httpx/_utils.py
index ba5807c048..bc3cb001dd 100644
--- a/httpx/_utils.py
+++ b/httpx/_utils.py
@@ -152,7 +152,7 @@ def parse_content_type_charset(content_type: str) -> typing.Optional[str]:
def obfuscate_sensitive_headers(
- items: typing.Iterable[typing.Tuple[typing.AnyStr, typing.AnyStr]]
+ items: typing.Iterable[typing.Tuple[typing.AnyStr, typing.AnyStr]],
) -> typing.Iterator[typing.Tuple[typing.AnyStr, typing.AnyStr]]:
for k, v in items:
if to_str(k.lower()) in SENSITIVE_HEADERS:
@@ -227,7 +227,9 @@ def get_environment_proxies() -> typing.Dict[str, typing.Optional[str]]:
# (But not "wwwgoogle.com")
# NO_PROXY can include domains, IPv6, IPv4 addresses and "localhost"
# NO_PROXY=example.com,::1,localhost,192.168.0.0/16
- if is_ipv4_hostname(hostname):
+ if "://" in hostname:
+ mounts[hostname] = None
+ elif is_ipv4_hostname(hostname):
mounts[f"all://{hostname}"] = None
elif is_ipv6_hostname(hostname):
mounts[f"all://[{hostname}]"] = None
@@ -293,14 +295,10 @@ async def _get_time(self) -> float:
import trio
return trio.current_time()
- elif library == "curio": # pragma: no cover
- import curio
-
- return typing.cast(float, await curio.clock())
-
- import asyncio
+ else:
+ import asyncio
- return asyncio.get_event_loop().time()
+ return asyncio.get_event_loop().time()
def sync_start(self) -> None:
self.started = time.perf_counter()
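The `NO_PROXY` hunk above adds a branch for values that are fully qualified URLs. A simplified sketch of how such values could map to mount keys; the IPv4 and domain branches here are loose stand-ins for the real `is_ipv4_hostname()`/`is_ipv6_hostname()` helpers, not httpx code:

```python
def no_proxy_mounts(no_proxy: str) -> dict:
    mounts: dict = {}
    for hostname in no_proxy.split(","):
        hostname = hostname.strip()
        if not hostname:
            continue
        if "://" in hostname:
            # New behaviour: a fully qualified URL is used as-is
            # as the mount key.
            mounts[hostname] = None
        elif hostname.replace(".", "").isdigit():
            # Crude IPv4 check, standing in for is_ipv4_hostname().
            mounts[f"all://{hostname}"] = None
        else:
            # Domain names match themselves and their subdomains.
            mounts[f"all://*{hostname}"] = None
    return mounts
```

Previously a value like `NO_PROXY=http://127.0.0.1` fell through the hostname branches; now it becomes a direct mount key, which is the fix from #2741.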
diff --git a/pyproject.toml b/pyproject.toml
index 326a880cfc..4f7a848f83 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -97,10 +97,6 @@ replacement = 'src="https://raw.githubusercontent.com/encode/httpx/master/\1"'
[tool.ruff]
select = ["E", "F", "I", "B", "PIE"]
ignore = ["B904", "B028"]
-line-length = 88
-
-[tool.ruff.pycodestyle]
-max-line-length = 120
[tool.ruff.isort]
combine-as-imports = true
diff --git a/requirements.txt b/requirements.txt
index e859bfc615..3597bc37f5 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -12,7 +12,7 @@ types-chardet==5.0.4.5
# Documentation
mkdocs==1.5.3
mkautodoc==0.2.0
-mkdocs-material==9.4.7
+mkdocs-material==9.4.14
# Packaging
build==1.0.3
@@ -20,12 +20,12 @@ twine==4.0.2
# Tests & Linting
coverage[toml]==7.3.0
-cryptography==41.0.4
+cryptography==41.0.7
mypy==1.5.1
types-certifi==2021.10.8.2
pytest==7.4.3
-ruff==0.1.3
+ruff==0.1.6
trio==0.22.2
-trio-typing==0.9.0
+trio-typing==0.10.0
trustme==1.1.0
-uvicorn==0.22.0
+uvicorn==0.24.0.post1
diff --git a/tests/client/test_auth.py b/tests/client/test_auth.py
index 9b1dd88f5e..e6bac23dfc 100644
--- a/tests/client/test_auth.py
+++ b/tests/client/test_auth.py
@@ -596,7 +596,8 @@ async def test_digest_auth_resets_nonce_count_after_401() -> None:
# with this we now force a 401 on a subsequent (but initial) request
app.send_response_after_attempt = 2
- # we expect the client again to try to authenticate, i.e. the history length must be 1
+ # we expect the client again to try to authenticate,
+ # i.e. the history length must be 1
response_2 = await client.get(url, auth=auth)
assert response_2.status_code == 200
assert len(response_2.history) == 1
diff --git a/tests/client/test_proxies.py b/tests/client/test_proxies.py
index 62ffc380bf..7bba1ab2c3 100644
--- a/tests/client/test_proxies.py
+++ b/tests/client/test_proxies.py
@@ -33,7 +33,8 @@ def url_to_origin(url: str) -> httpcore.URL:
],
)
def test_proxies_parameter(proxies, expected_proxies):
- client = httpx.Client(proxies=proxies)
+ with pytest.warns(DeprecationWarning):
+ client = httpx.Client(proxies=proxies)
client_patterns = [p.pattern for p in client._mounts.keys()]
client_proxies = list(client._mounts.values())
@@ -47,15 +48,31 @@ def test_proxies_parameter(proxies, expected_proxies):
assert len(expected_proxies) == len(client._mounts)
+def test_socks_proxy_deprecated():
+ url = httpx.URL("http://www.example.com")
+
+ with pytest.warns(DeprecationWarning):
+ client = httpx.Client(proxies="socks5://localhost/")
+ transport = client._transport_for_url(url)
+ assert isinstance(transport, httpx.HTTPTransport)
+ assert isinstance(transport._pool, httpcore.SOCKSProxy)
+
+ with pytest.warns(DeprecationWarning):
+ async_client = httpx.AsyncClient(proxies="socks5://localhost/")
+ async_transport = async_client._transport_for_url(url)
+ assert isinstance(async_transport, httpx.AsyncHTTPTransport)
+ assert isinstance(async_transport._pool, httpcore.AsyncSOCKSProxy)
+
+
def test_socks_proxy():
url = httpx.URL("http://www.example.com")
- client = httpx.Client(proxies="socks5://localhost/")
+ client = httpx.Client(proxy="socks5://localhost/")
transport = client._transport_for_url(url)
assert isinstance(transport, httpx.HTTPTransport)
assert isinstance(transport._pool, httpcore.SOCKSProxy)
- async_client = httpx.AsyncClient(proxies="socks5://localhost/")
+ async_client = httpx.AsyncClient(proxy="socks5://localhost/")
async_transport = async_client._transport_for_url(url)
assert isinstance(async_transport, httpx.AsyncHTTPTransport)
assert isinstance(async_transport._pool, httpcore.AsyncSOCKSProxy)
@@ -121,7 +138,12 @@ def test_socks_proxy():
],
)
def test_transport_for_request(url, proxies, expected):
- client = httpx.Client(proxies=proxies)
+ if proxies:
+ with pytest.warns(DeprecationWarning):
+ client = httpx.Client(proxies=proxies)
+ else:
+ client = httpx.Client(proxies=proxies)
+
transport = client._transport_for_url(httpx.URL(url))
if expected is None:
@@ -136,7 +158,8 @@ def test_transport_for_request(url, proxies, expected):
@pytest.mark.network
async def test_async_proxy_close():
try:
- client = httpx.AsyncClient(proxies={"https://": PROXY_URL})
+ with pytest.warns(DeprecationWarning):
+ client = httpx.AsyncClient(proxies={"https://": PROXY_URL})
await client.get("http://example.com")
finally:
await client.aclose()
@@ -145,15 +168,21 @@ async def test_async_proxy_close():
@pytest.mark.network
def test_sync_proxy_close():
try:
- client = httpx.Client(proxies={"https://": PROXY_URL})
+ with pytest.warns(DeprecationWarning):
+ client = httpx.Client(proxies={"https://": PROXY_URL})
client.get("http://example.com")
finally:
client.close()
+def test_unsupported_proxy_scheme_deprecated():
+ with pytest.warns(DeprecationWarning), pytest.raises(ValueError):
+ httpx.Client(proxies="ftp://127.0.0.1")
+
+
def test_unsupported_proxy_scheme():
with pytest.raises(ValueError):
- httpx.Client(proxies="ftp://127.0.0.1")
+ httpx.Client(proxy="ftp://127.0.0.1")
@pytest.mark.parametrize(
@@ -279,8 +308,31 @@ def test_proxies_environ(monkeypatch, client_class, url, env, expected):
],
)
def test_for_deprecated_proxy_params(proxies, is_valid):
- if not is_valid:
- with pytest.raises(ValueError):
+ with pytest.warns(DeprecationWarning):
+ if not is_valid:
+ with pytest.raises(ValueError):
+ httpx.Client(proxies=proxies)
+ else:
httpx.Client(proxies=proxies)
- else:
- httpx.Client(proxies=proxies)
+
+
+def test_proxy_and_proxies_together():
+ with pytest.warns(DeprecationWarning), pytest.raises(
+ RuntimeError,
+ ):
+ httpx.Client(proxies={"all://": "http://127.0.0.1"}, proxy="http://127.0.0.1")
+
+ with pytest.warns(DeprecationWarning), pytest.raises(
+ RuntimeError,
+ ):
+ httpx.AsyncClient(
+ proxies={"all://": "http://127.0.0.1"}, proxy="http://127.0.0.1"
+ )
+
+
+def test_proxy_with_mounts():
+ proxy_transport = httpx.HTTPTransport(proxy="http://127.0.0.1")
+ client = httpx.Client(mounts={"http://": proxy_transport})
+
+ transport = client._transport_for_url(httpx.URL("http://example.com"))
+ assert transport == proxy_transport
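The tests above exercise three behaviours: `proxies=` still works but warns, `proxy=` does not warn, and passing both raises `RuntimeError`. A standalone sketch of that argument handling (`resolve_proxy` is a hypothetical helper illustrating the pattern, not the httpx client code):

```python
import warnings
from typing import Optional


def resolve_proxy(proxy: Optional[str] = None,
                  proxies: Optional[str] = None) -> Optional[str]:
    if proxies is not None:
        # Deprecated path: keep working, but emit a DeprecationWarning.
        warnings.warn(
            "The 'proxies' argument is now deprecated.", DeprecationWarning
        )
        if proxy is not None:
            # Ambiguous configuration: refuse to guess.
            raise RuntimeError("Use either 'proxy' or 'proxies', not both.")
        proxy = proxies
    return proxy
```

This matches the shape the tests assert with `pytest.warns(DeprecationWarning)` and `pytest.raises(RuntimeError)`.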
diff --git a/tests/client/test_redirects.py b/tests/client/test_redirects.py
index 6155df1447..f65827134c 100644
--- a/tests/client/test_redirects.py
+++ b/tests/client/test_redirects.py
@@ -345,7 +345,7 @@ def test_can_stream_if_no_redirect():
class ConsumeBodyTransport(httpx.MockTransport):
def handle_request(self, request: httpx.Request) -> httpx.Response:
assert isinstance(request.stream, httpx.SyncByteStream)
- [_ for _ in request.stream]
+ list(request.stream)
return self.handler(request) # type: ignore[return-value]
diff --git a/tests/models/test_cookies.py b/tests/models/test_cookies.py
index dbe1bfb99e..f7abe11ad4 100644
--- a/tests/models/test_cookies.py
+++ b/tests/models/test_cookies.py
@@ -92,7 +92,7 @@ def test_cookies_repr():
cookies.set(name="foo", value="bar", domain="http://blah.com")
cookies.set(name="fizz", value="buzz", domain="http://hello.com")
- assert (
- repr(cookies)
- == ", ]>"
+ assert repr(cookies) == (
+ ","
+ " ]>"
)
diff --git a/tests/models/test_requests.py b/tests/models/test_requests.py
index d0d4f11d32..ad6d6705f2 100644
--- a/tests/models/test_requests.py
+++ b/tests/models/test_requests.py
@@ -82,7 +82,7 @@ def test_read_and_stream_data():
request.read()
assert request.stream is not None
assert isinstance(request.stream, typing.Iterable)
- content = b"".join([part for part in request.stream])
+ content = b"".join(list(request.stream))
assert content == request.content
diff --git a/tests/models/test_responses.py b/tests/models/test_responses.py
index 9177773a50..d639625825 100644
--- a/tests/models/test_responses.py
+++ b/tests/models/test_responses.py
@@ -397,19 +397,19 @@ def test_iter_raw():
def test_iter_raw_with_chunksize():
response = httpx.Response(200, content=streaming_body())
- parts = [part for part in response.iter_raw(chunk_size=5)]
+ parts = list(response.iter_raw(chunk_size=5))
assert parts == [b"Hello", b", wor", b"ld!"]
response = httpx.Response(200, content=streaming_body())
- parts = [part for part in response.iter_raw(chunk_size=7)]
+ parts = list(response.iter_raw(chunk_size=7))
assert parts == [b"Hello, ", b"world!"]
response = httpx.Response(200, content=streaming_body())
- parts = [part for part in response.iter_raw(chunk_size=13)]
+ parts = list(response.iter_raw(chunk_size=13))
assert parts == [b"Hello, world!"]
response = httpx.Response(200, content=streaming_body())
- parts = [part for part in response.iter_raw(chunk_size=20)]
+ parts = list(response.iter_raw(chunk_size=20))
assert parts == [b"Hello, world!"]
@@ -422,7 +422,7 @@ def streaming_body_with_empty_chunks() -> typing.Iterator[bytes]:
response = httpx.Response(200, content=streaming_body_with_empty_chunks())
- parts = [part for part in response.iter_raw()]
+ parts = list(response.iter_raw())
assert parts == [b"Hello, ", b"world!"]
@@ -445,7 +445,7 @@ def test_iter_raw_on_async():
)
with pytest.raises(RuntimeError):
- [part for part in response.iter_raw()]
+ list(response.iter_raw())
def test_close_on_async():
@@ -538,21 +538,21 @@ def test_iter_bytes():
def test_iter_bytes_with_chunk_size():
response = httpx.Response(200, content=streaming_body())
- parts = [part for part in response.iter_bytes(chunk_size=5)]
+ parts = list(response.iter_bytes(chunk_size=5))
assert parts == [b"Hello", b", wor", b"ld!"]
response = httpx.Response(200, content=streaming_body())
- parts = [part for part in response.iter_bytes(chunk_size=13)]
+ parts = list(response.iter_bytes(chunk_size=13))
assert parts == [b"Hello, world!"]
response = httpx.Response(200, content=streaming_body())
- parts = [part for part in response.iter_bytes(chunk_size=20)]
+ parts = list(response.iter_bytes(chunk_size=20))
assert parts == [b"Hello, world!"]
def test_iter_bytes_with_empty_response():
response = httpx.Response(200, content=b"")
- parts = [part for part in response.iter_bytes()]
+ parts = list(response.iter_bytes())
assert parts == []
@@ -565,7 +565,7 @@ def streaming_body_with_empty_chunks() -> typing.Iterator[bytes]:
response = httpx.Response(200, content=streaming_body_with_empty_chunks())
- parts = [part for part in response.iter_bytes()]
+ parts = list(response.iter_bytes())
assert parts == [b"Hello, ", b"world!"]
@@ -611,23 +611,23 @@ def test_iter_text():
def test_iter_text_with_chunk_size():
response = httpx.Response(200, content=b"Hello, world!")
- parts = [part for part in response.iter_text(chunk_size=5)]
+ parts = list(response.iter_text(chunk_size=5))
assert parts == ["Hello", ", wor", "ld!"]
response = httpx.Response(200, content=b"Hello, world!!")
- parts = [part for part in response.iter_text(chunk_size=7)]
+ parts = list(response.iter_text(chunk_size=7))
assert parts == ["Hello, ", "world!!"]
response = httpx.Response(200, content=b"Hello, world!")
- parts = [part for part in response.iter_text(chunk_size=7)]
+ parts = list(response.iter_text(chunk_size=7))
assert parts == ["Hello, ", "world!"]
response = httpx.Response(200, content=b"Hello, world!")
- parts = [part for part in response.iter_text(chunk_size=13)]
+ parts = list(response.iter_text(chunk_size=13))
assert parts == ["Hello, world!"]
response = httpx.Response(200, content=b"Hello, world!")
- parts = [part for part in response.iter_text(chunk_size=20)]
+ parts = list(response.iter_text(chunk_size=20))
assert parts == ["Hello, world!"]
@@ -664,7 +664,7 @@ def test_iter_lines():
200,
content=b"Hello,\nworld!",
)
- content = [line for line in response.iter_lines()]
+ content = list(response.iter_lines())
assert content == ["Hello,", "world!"]
diff --git a/tests/models/test_url.py b/tests/models/test_url.py
index a47205f97d..79e1605a5a 100644
--- a/tests/models/test_url.py
+++ b/tests/models/test_url.py
@@ -2,99 +2,158 @@
import httpx
+# Tests for `httpx.URL` instantiation and property accessors.
+
+
+def test_basic_url():
+ url = httpx.URL("https://www.example.com/")
+
+ assert url.scheme == "https"
+ assert url.userinfo == b""
+ assert url.netloc == b"www.example.com"
+ assert url.host == "www.example.com"
+ assert url.port is None
+ assert url.path == "/"
+ assert url.query == b""
+ assert url.fragment == ""
+
+ assert str(url) == "https://www.example.com/"
+ assert repr(url) == "URL('https://www.example.com/')"
+
+
+def test_complete_url():
+ url = httpx.URL("https://example.org:123/path/to/somewhere?abc=123#anchor")
+ assert url.scheme == "https"
+ assert url.host == "example.org"
+ assert url.port == 123
+ assert url.path == "/path/to/somewhere"
+ assert url.query == b"abc=123"
+ assert url.raw_path == b"/path/to/somewhere?abc=123"
+ assert url.fragment == "anchor"
+
+ assert str(url) == "https://example.org:123/path/to/somewhere?abc=123#anchor"
+ assert (
+ repr(url) == "URL('https://example.org:123/path/to/somewhere?abc=123#anchor')"
+ )
+
+
+def test_url_with_empty_query():
+ """
+ URLs with and without a trailing `?` but an empty query component
+ should preserve the information on the raw path.
+ """
+ url = httpx.URL("https://www.example.com/path")
+ assert url.path == "/path"
+ assert url.query == b""
+ assert url.raw_path == b"/path"
+
+ url = httpx.URL("https://www.example.com/path?")
+ assert url.path == "/path"
+ assert url.query == b""
+ assert url.raw_path == b"/path?"
+
+
+def test_url_no_scheme():
+ url = httpx.URL("://example.com")
+ assert url.scheme == ""
+ assert url.host == "example.com"
+ assert url.path == "/"
+
+
+def test_url_no_authority():
+ url = httpx.URL("http://")
+ assert url.scheme == "http"
+ assert url.host == ""
+ assert url.path == "/"
+
+
+# Tests for percent encoding across path, query, and fragment...
+
@pytest.mark.parametrize(
- "given,idna,host,raw_host,scheme,port",
+ "url,raw_path,path,query,fragment",
[
+ # URL with unescaped chars in path.
(
- "http://中国.icom.museum:80/",
- "http://xn--fiqs8s.icom.museum:80/",
- "中国.icom.museum",
- b"xn--fiqs8s.icom.museum",
- "http",
- None,
+ "https://example.com/!$&'()*+,;= abc ABC 123 :/[]@",
+ b"/!$&'()*+,;=%20abc%20ABC%20123%20:/[]@",
+ "/!$&'()*+,;= abc ABC 123 :/[]@",
+ b"",
+ "",
),
+ # URL with escaped chars in path.
(
- "http://Königsgäßchen.de",
- "http://xn--knigsgchen-b4a3dun.de",
- "königsgäßchen.de",
- b"xn--knigsgchen-b4a3dun.de",
- "http",
- None,
+ "https://example.com/!$&'()*+,;=%20abc%20ABC%20123%20:/[]@",
+ b"/!$&'()*+,;=%20abc%20ABC%20123%20:/[]@",
+ "/!$&'()*+,;= abc ABC 123 :/[]@",
+ b"",
+ "",
),
+ # URL with mix of unescaped and escaped chars in path.
+ # WARNING: This has the incorrect behaviour, adding the test as an interim step.
(
- "https://faß.de",
- "https://xn--fa-hia.de",
- "faß.de",
- b"xn--fa-hia.de",
- "https",
- None,
+ "https://example.com/ %61%62%63",
+ b"/%20%61%62%63",
+ "/ abc",
+ b"",
+ "",
),
+ # URL with unescaped chars in query.
(
- "https://βόλος.com:443",
- "https://xn--nxasmm1c.com:443",
- "βόλος.com",
- b"xn--nxasmm1c.com",
- "https",
- None,
+ "https://example.com/?!$&'()*+,;= abc ABC 123 :/[]@?",
+ b"/?!$&'()*+,;=%20abc%20ABC%20123%20:/[]@?",
+ "/",
+ b"!$&'()*+,;=%20abc%20ABC%20123%20:/[]@?",
+ "",
),
+ # URL with escaped chars in query.
(
- "http://ශ්රී.com:444",
- "http://xn--10cl1a0b660p.com:444",
- "ශ්රී.com",
- b"xn--10cl1a0b660p.com",
- "http",
- 444,
+ "https://example.com/?!$&%27()*+,;=%20abc%20ABC%20123%20:%2F[]@?",
+ b"/?!$&%27()*+,;=%20abc%20ABC%20123%20:%2F[]@?",
+ "/",
+ b"!$&%27()*+,;=%20abc%20ABC%20123%20:%2F[]@?",
+ "",
),
+ # URL with mix of unescaped and escaped chars in query.
(
- "https://نامهای.com:4433",
- "https://xn--mgba3gch31f060k.com:4433",
- "نامهای.com",
- b"xn--mgba3gch31f060k.com",
- "https",
- 4433,
+ "https://example.com/?%20%97%98%99",
+ b"/?%20%97%98%99",
+ "/",
+ b"%20%97%98%99",
+ "",
+ ),
+ # URL encoding characters in fragment.
+ (
+ "https://example.com/#!$&'()*+,;= abc ABC 123 :/[]@?#",
+ b"/",
+ "/",
+ b"",
+ "!$&'()*+,;= abc ABC 123 :/[]@?#",
),
- ],
- ids=[
- "http_with_port",
- "unicode_tr46_compat",
- "https_without_port",
- "https_with_port",
- "http_with_custom_port",
- "https_with_custom_port",
],
)
-def test_idna_url(given, idna, host, raw_host, scheme, port):
- url = httpx.URL(given)
- assert url == httpx.URL(idna)
- assert url.host == host
- assert url.raw_host == raw_host
- assert url.scheme == scheme
- assert url.port == port
-
+def test_path_query_fragment(url, raw_path, path, query, fragment):
+ url = httpx.URL(url)
+ assert url.raw_path == raw_path
+ assert url.path == path
+ assert url.query == query
+ assert url.fragment == fragment
-def test_url():
- url = httpx.URL("https://example.org:123/path/to/somewhere?abc=123#anchor")
- assert url.scheme == "https"
- assert url.host == "example.org"
- assert url.port == 123
- assert url.path == "/path/to/somewhere"
- assert url.query == b"abc=123"
- assert url.raw_path == b"/path/to/somewhere?abc=123"
- assert url.fragment == "anchor"
- assert (
- repr(url) == "URL('https://example.org:123/path/to/somewhere?abc=123#anchor')"
- )
- new = url.copy_with(scheme="http", port=None)
- assert new == httpx.URL("http://example.org/path/to/somewhere?abc=123#anchor")
- assert new.scheme == "http"
+def test_url_query_encoding():
+ """
+ URL query parameters should use '%20' for encoding spaces,
+ and should treat '/' as a safe character. This behaviour differs
+ across clients, but we're matching browser behaviour here.
+ See https://github.com/encode/httpx/issues/2536
+ and https://github.com/encode/httpx/discussions/2460
+ """
+ url = httpx.URL("https://www.example.com/?a=b c&d=e/f")
+ assert url.raw_path == b"/?a=b%20c&d=e/f"
-def test_url_eq_str():
- url = httpx.URL("https://example.org:123/path/to/somewhere?abc=123#anchor")
- assert url == "https://example.org:123/path/to/somewhere?abc=123#anchor"
- assert str(url) == url
+ url = httpx.URL("https://www.example.com/", params={"a": "b c", "d": "e/f"})
+ assert url.raw_path == b"/?a=b%20c&d=e%2Ff"
def test_url_params():
@@ -109,51 +168,290 @@ def test_url_params():
assert url.params == httpx.QueryParams({"a": "123"})
-def test_url_join():
+# Tests for username and password
+
+
+@pytest.mark.parametrize(
+ "url,userinfo,username,password",
+ [
+ # username and password in URL.
+ (
+ "https://username:password@example.com",
+ b"username:password",
+ "username",
+ "password",
+ ),
+ # username and password in URL with percent escape sequences.
+ (
+ "https://username%40gmail.com:pa%20ssword@example.com",
+ b"username%40gmail.com:pa%20ssword",
+ "username@gmail.com",
+ "pa ssword",
+ ),
+ (
+ "https://user%20name:p%40ssword@example.com",
+ b"user%20name:p%40ssword",
+ "user name",
+ "p@ssword",
+ ),
+ # username and password in URL without percent escape sequences.
+ (
+ "https://username@gmail.com:pa ssword@example.com",
+ b"username%40gmail.com:pa%20ssword",
+ "username@gmail.com",
+ "pa ssword",
+ ),
+ (
+ "https://user name:p@ssword@example.com",
+ b"user%20name:p%40ssword",
+ "user name",
+ "p@ssword",
+ ),
+ ],
+)
+def test_url_username_and_password(url, userinfo, username, password):
+ url = httpx.URL(url)
+ assert url.userinfo == userinfo
+ assert url.username == username
+ assert url.password == password
+
+
+# Tests for different host types
+
+
+def test_url_valid_host():
+ url = httpx.URL("https://example.com/")
+ assert url.host == "example.com"
+
+
+def test_url_normalized_host():
+ url = httpx.URL("https://EXAMPLE.com/")
+ assert url.host == "example.com"
+
+
+def test_url_ipv4_like_host():
+ """rare host names used to qualify as IPv4"""
+ url = httpx.URL("https://023b76x43144/")
+ assert url.host == "023b76x43144"
+
+
+# Tests for different port types
+
+
+def test_url_valid_port():
+ url = httpx.URL("https://example.com:123/")
+ assert url.port == 123
+
+
+def test_url_normalized_port():
+ # If the port matches the scheme default it is normalized to None.
+ url = httpx.URL("https://example.com:443/")
+ assert url.port is None
+
+
+def test_url_invalid_port():
+ with pytest.raises(httpx.InvalidURL) as exc:
+ httpx.URL("https://example.com:abc/")
+ assert str(exc.value) == "Invalid port: 'abc'"
+
+
+# Tests for path handling
+
+
+def test_url_normalized_path():
+ url = httpx.URL("https://example.com/abc/def/../ghi/./jkl")
+ assert url.path == "/abc/ghi/jkl"
+
+
+def test_url_escaped_path():
+ url = httpx.URL("https://example.com/ /🌟/")
+ assert url.raw_path == b"/%20/%F0%9F%8C%9F/"
+
+
+def test_url_leading_dot_prefix_on_absolute_url():
+ url = httpx.URL("https://example.com/../abc")
+ assert url.path == "/abc"
+
+
+def test_url_leading_dot_prefix_on_relative_url():
+ url = httpx.URL("../abc")
+ assert url.path == "../abc"
+
+
+# Tests for optional percent encoding
+
+
+def test_param_requires_encoding():
+ url = httpx.URL("http://webservice", params={"u": "with spaces"})
+ assert str(url) == "http://webservice?u=with%20spaces"
+
+
+def test_param_does_not_require_encoding():
+ url = httpx.URL("http://webservice", params={"u": "with%20spaces"})
+ assert str(url) == "http://webservice?u=with%20spaces"
+
+
+def test_param_with_existing_escape_requires_encoding():
+ url = httpx.URL("http://webservice", params={"u": "http://example.com?q=foo%2Fa"})
+ assert str(url) == "http://webservice?u=http%3A%2F%2Fexample.com%3Fq%3Dfoo%252Fa"
+
+
+# Tests for invalid URLs
+
+
+def test_url_invalid_hostname():
"""
- Some basic URL joining tests.
+ Ensure that invalid URLs raise an `httpx.InvalidURL` exception.
"""
- url = httpx.URL("https://example.org:123/path/to/somewhere")
- assert url.join("/somewhere-else") == "https://example.org:123/somewhere-else"
+ with pytest.raises(httpx.InvalidURL):
+ httpx.URL("https://😇/")
+
+
+def test_url_excessively_long_url():
+ with pytest.raises(httpx.InvalidURL) as exc:
+ httpx.URL("https://www.example.com/" + "x" * 100_000)
+ assert str(exc.value) == "URL too long"
+
+
+def test_url_excessively_long_component():
+ with pytest.raises(httpx.InvalidURL) as exc:
+ httpx.URL("https://www.example.com", path="/" + "x" * 100_000)
+ assert str(exc.value) == "URL component 'path' too long"
+
+
+def test_url_non_printing_character_in_url():
+ with pytest.raises(httpx.InvalidURL) as exc:
+ httpx.URL("https://www.example.com/\n")
+ assert str(exc.value) == "Invalid non-printable ASCII character in URL"
+
+
+def test_url_non_printing_character_in_component():
+ with pytest.raises(httpx.InvalidURL) as exc:
+ httpx.URL("https://www.example.com", path="/\n")
assert (
- url.join("somewhere-else") == "https://example.org:123/path/to/somewhere-else"
+ str(exc.value)
+ == "Invalid non-printable ASCII character in URL component 'path'"
)
+
+
+# Tests for URL components
+
+
+def test_url_with_components():
+ url = httpx.URL(scheme="https", host="www.example.com", path="/")
+
+ assert url.scheme == "https"
+ assert url.userinfo == b""
+ assert url.host == "www.example.com"
+ assert url.port is None
+ assert url.path == "/"
+ assert url.query == b""
+ assert url.fragment == ""
+
+ assert str(url) == "https://www.example.com/"
+
+
+def test_urlparse_with_invalid_scheme():
+ with pytest.raises(httpx.InvalidURL) as exc:
+ httpx.URL(scheme="~", host="www.example.com", path="/")
+ assert str(exc.value) == "Invalid URL component 'scheme'"
+
+
+def test_urlparse_with_invalid_path():
+ with pytest.raises(httpx.InvalidURL) as exc:
+ httpx.URL(scheme="https", host="www.example.com", path="abc")
+ assert str(exc.value) == "For absolute URLs, path must be empty or begin with '/'"
+
+ with pytest.raises(httpx.InvalidURL) as exc:
+ httpx.URL(path="//abc")
assert (
- url.join("../somewhere-else") == "https://example.org:123/path/somewhere-else"
+ str(exc.value)
+ == "URLs with no authority component cannot have a path starting with '//'"
+ )
+
+ with pytest.raises(httpx.InvalidURL) as exc:
+ httpx.URL(path=":abc")
+ assert (
+ str(exc.value)
+ == "URLs with no scheme component cannot have a path starting with ':'"
)
- assert url.join("../../somewhere-else") == "https://example.org:123/somewhere-else"
-def test_url_set_param_manipulation():
+def test_url_with_relative_path():
+ # This path would be invalid for an absolute URL, but is valid as a relative URL.
+ url = httpx.URL(path="abc")
+ assert url.path == "abc"
+
+
+# Tests for `httpx.URL` python built-in operators.
+
+
+def test_url_eq_str():
"""
- Some basic URL query parameter manipulation.
+ Ensure that `httpx.URL` supports the equality operator.
"""
- url = httpx.URL("https://example.org:123/?a=123")
- assert url.copy_set_param("a", "456") == "https://example.org:123/?a=456"
+ url = httpx.URL("https://example.org:123/path/to/somewhere?abc=123#anchor")
+ assert url == "https://example.org:123/path/to/somewhere?abc=123#anchor"
+ assert str(url) == url
-def test_url_add_param_manipulation():
+def test_url_set():
"""
- Some basic URL query parameter manipulation.
+ Ensure that `httpx.URL` instances can be used in sets.
"""
- url = httpx.URL("https://example.org:123/?a=123")
- assert url.copy_add_param("a", "456") == "https://example.org:123/?a=123&a=456"
+ urls = (
+ httpx.URL("http://example.org:123/path/to/somewhere"),
+ httpx.URL("http://example.org:123/path/to/somewhere/else"),
+ )
+ url_set = set(urls)
-def test_url_remove_param_manipulation():
+ assert all(url in urls for url in url_set)
+
+
+# Tests for TypeErrors when instantiating `httpx.URL`.
+
+
+def test_url_invalid_type():
"""
- Some basic URL query parameter manipulation.
+ Ensure that invalid types on `httpx.URL()` raise a `TypeError`.
"""
- url = httpx.URL("https://example.org:123/?a=123")
- assert url.copy_remove_param("a") == "https://example.org:123/"
+ class ExternalURLClass: # representing external URL class
+ pass
+
+ with pytest.raises(TypeError):
+ httpx.URL(ExternalURLClass()) # type: ignore
-def test_url_merge_params_manipulation():
+
+def test_url_with_invalid_component():
+ with pytest.raises(TypeError) as exc:
+ httpx.URL(scheme="https", host="www.example.com", incorrect="/")
+ assert str(exc.value) == "'incorrect' is an invalid keyword argument for URL()"
+
+
+# Tests for `URL.join()`.
+
+
+def test_url_join():
"""
- Some basic URL query parameter manipulation.
+ Some basic URL joining tests.
"""
- url = httpx.URL("https://example.org:123/?a=123")
- assert url.copy_merge_params({"b": "456"}) == "https://example.org:123/?a=123&b=456"
+ url = httpx.URL("https://example.org:123/path/to/somewhere")
+ assert url.join("/somewhere-else") == "https://example.org:123/somewhere-else"
+ assert (
+ url.join("somewhere-else") == "https://example.org:123/path/to/somewhere-else"
+ )
+ assert (
+ url.join("../somewhere-else") == "https://example.org:123/path/somewhere-else"
+ )
+ assert url.join("../../somewhere-else") == "https://example.org:123/somewhere-else"
def test_relative_url_join():
@@ -219,15 +517,32 @@ def test_url_join_rfc3986():
assert url.join("g#s/../x") == "http://example.com/b/c/g#s/../x"
-def test_url_set():
- urls = (
- httpx.URL("http://example.org:123/path/to/somewhere"),
- httpx.URL("http://example.org:123/path/to/somewhere/else"),
- )
+def test_resolution_error_1833():
+ """
+ See https://github.com/encode/httpx/issues/1833
+ """
+ url = httpx.URL("https://example.com/?[]")
+ assert url.join("/") == "https://example.com/"
- url_set = set(urls)
- assert all(url in urls for url in url_set)
+# Tests for `URL.copy_with()`.
+
+
+def test_copy_with():
+ url = httpx.URL("https://www.example.com/")
+ assert str(url) == "https://www.example.com/"
+
+ url = url.copy_with()
+ assert str(url) == "https://www.example.com/"
+
+ url = url.copy_with(scheme="http")
+ assert str(url) == "http://www.example.com/"
+
+ url = url.copy_with(netloc=b"example.com")
+ assert str(url) == "http://example.com/"
+
+ url = url.copy_with(path="/abc")
+ assert str(url) == "http://example.com/abc"
def test_url_copywith_authority_subcomponents():
@@ -321,56 +636,150 @@ def test_url_copywith_security():
url.copy_with(scheme=bad)
-def test_url_invalid():
- with pytest.raises(httpx.InvalidURL):
- httpx.URL("https://😇/")
+# Tests for copy-modifying-parameters methods.
+#
+# `URL.copy_set_param()`
+# `URL.copy_add_param()`
+# `URL.copy_remove_param()`
+# `URL.copy_merge_params()`
-def test_url_invalid_type():
- class ExternalURLClass: # representing external URL class
- pass
-
- with pytest.raises(TypeError):
- httpx.URL(ExternalURLClass()) # type: ignore
+def test_url_set_param_manipulation():
+ """
+ Some basic URL query parameter manipulation.
+ """
+ url = httpx.URL("https://example.org:123/?a=123")
+ assert url.copy_set_param("a", "456") == "https://example.org:123/?a=456"
-def test_url_with_empty_query():
+def test_url_add_param_manipulation():
"""
- URLs with and without a trailing `?` but an empty query component
- should preserve the information on the raw path.
+ Some basic URL query parameter manipulation.
"""
- url = httpx.URL("https://www.example.com/path")
- assert url.path == "/path"
- assert url.query == b""
- assert url.raw_path == b"/path"
-
- url = httpx.URL("https://www.example.com/path?")
- assert url.path == "/path"
- assert url.query == b""
- assert url.raw_path == b"/path?"
+ url = httpx.URL("https://example.org:123/?a=123")
+ assert url.copy_add_param("a", "456") == "https://example.org:123/?a=123&a=456"
-def test_url_query_encoding():
+def test_url_remove_param_manipulation():
"""
- URL query parameters should use '%20' to encoding spaces,
- and should treat '/' as a safe character. This behaviour differs
- across clients, but we're matching browser behaviour here.
+ Some basic URL query parameter manipulation.
+ """
+ url = httpx.URL("https://example.org:123/?a=123")
+ assert url.copy_remove_param("a") == "https://example.org:123/"
- See https://github.com/encode/httpx/issues/2536
- and https://github.com/encode/httpx/discussions/2460
+
+def test_url_merge_params_manipulation():
"""
- url = httpx.URL("https://www.example.com/?a=b c&d=e/f")
- assert url.raw_path == b"/?a=b%20c&d=e%2Ff"
+ Some basic URL query parameter manipulation.
+ """
+ url = httpx.URL("https://example.org:123/?a=123")
+ assert url.copy_merge_params({"b": "456"}) == "https://example.org:123/?a=123&b=456"
- url = httpx.URL("https://www.example.com/", params={"a": "b c", "d": "e/f"})
- assert url.raw_path == b"/?a=b%20c&d=e%2Ff"
+# Tests for IDNA hostname support.
-def test_url_with_url_encoded_path():
- url = httpx.URL("https://www.example.com/path%20to%20somewhere")
- assert url.path == "/path to somewhere"
- assert url.query == b""
- assert url.raw_path == b"/path%20to%20somewhere"
+
+@pytest.mark.parametrize(
+ "given,idna,host,raw_host,scheme,port",
+ [
+ (
+ "http://中国.icom.museum:80/",
+ "http://xn--fiqs8s.icom.museum:80/",
+ "中国.icom.museum",
+ b"xn--fiqs8s.icom.museum",
+ "http",
+ None,
+ ),
+ (
+ "http://Königsgäßchen.de",
+ "http://xn--knigsgchen-b4a3dun.de",
+ "königsgäßchen.de",
+ b"xn--knigsgchen-b4a3dun.de",
+ "http",
+ None,
+ ),
+ (
+ "https://faß.de",
+ "https://xn--fa-hia.de",
+ "faß.de",
+ b"xn--fa-hia.de",
+ "https",
+ None,
+ ),
+ (
+ "https://βόλος.com:443",
+ "https://xn--nxasmm1c.com:443",
+ "βόλος.com",
+ b"xn--nxasmm1c.com",
+ "https",
+ None,
+ ),
+ (
+ "http://ශ්රී.com:444",
+ "http://xn--10cl1a0b660p.com:444",
+ "ශ්රී.com",
+ b"xn--10cl1a0b660p.com",
+ "http",
+ 444,
+ ),
+ (
+ "https://نامهای.com:4433",
+ "https://xn--mgba3gch31f060k.com:4433",
+ "نامهای.com",
+ b"xn--mgba3gch31f060k.com",
+ "https",
+ 4433,
+ ),
+ ],
+ ids=[
+ "http_with_port",
+ "unicode_tr46_compat",
+ "https_without_port",
+ "https_with_port",
+ "http_with_custom_port",
+ "https_with_custom_port",
+ ],
+)
+def test_idna_url(given, idna, host, raw_host, scheme, port):
+ url = httpx.URL(given)
+ assert url == httpx.URL(idna)
+ assert url.host == host
+ assert url.raw_host == raw_host
+ assert url.scheme == scheme
+ assert url.port == port
+
+
+def test_url_unescaped_idna_host():
+ url = httpx.URL("https://中国.icom.museum/")
+ assert url.raw_host == b"xn--fiqs8s.icom.museum"
+
+
+def test_url_escaped_idna_host():
+ url = httpx.URL("https://xn--fiqs8s.icom.museum/")
+ assert url.raw_host == b"xn--fiqs8s.icom.museum"
+
+
+def test_url_invalid_idna_host():
+ with pytest.raises(httpx.InvalidURL) as exc:
+ httpx.URL("https://☃.com/")
+ assert str(exc.value) == "Invalid IDNA hostname: '☃.com'"
+
+
+# Tests for IPv4 hostname support.
+
+
+def test_url_valid_ipv4():
+ url = httpx.URL("https://1.2.3.4/")
+ assert url.host == "1.2.3.4"
+
+
+def test_url_invalid_ipv4():
+ with pytest.raises(httpx.InvalidURL) as exc:
+ httpx.URL("https://999.999.999.999/")
+ assert str(exc.value) == "Invalid IPv4 address: '999.999.999.999'"
+
+
+# Tests for IPv6 hostname support.
def test_ipv6_url():
@@ -380,6 +789,26 @@ def test_ipv6_url():
assert url.netloc == b"[::ffff:192.168.0.1]:5678"
+def test_url_valid_ipv6():
+ url = httpx.URL("https://[2001:db8::ff00:42:8329]/")
+ assert url.host == "2001:db8::ff00:42:8329"
+
+
+def test_url_invalid_ipv6():
+ with pytest.raises(httpx.InvalidURL) as exc:
+ httpx.URL("https://[2001]/")
+ assert str(exc.value) == "Invalid IPv6 address: '[2001]'"
+
+
+@pytest.mark.parametrize("host", ["[::ffff:192.168.0.1]", "::ffff:192.168.0.1"])
+def test_ipv6_url_from_raw_url(host):
+ url = httpx.URL(scheme="https", host=host, port=443, path="/")
+
+ assert url.host == "::ffff:192.168.0.1"
+ assert url.netloc == b"[::ffff:192.168.0.1]"
+ assert str(url) == "https://[::ffff:192.168.0.1]/"
+
+
@pytest.mark.parametrize(
"url_str",
[
@@ -397,24 +826,13 @@ def test_ipv6_url_copy_with_host(url_str, new_host):
assert str(url) == "http://[::ffff:192.168.0.1]:1234"
-@pytest.mark.parametrize("host", ["[::ffff:192.168.0.1]", "::ffff:192.168.0.1"])
-def test_ipv6_url_from_raw_url(host):
- url = httpx.URL(scheme="https", host=host, port=443, path="/")
-
- assert url.host == "::ffff:192.168.0.1"
- assert url.netloc == b"[::ffff:192.168.0.1]"
- assert str(url) == "https://[::ffff:192.168.0.1]/"
+# Test for deprecated API
-def test_resolution_error_1833():
+def test_url_raw_compatibility():
"""
- See https://github.com/encode/httpx/issues/1833
+ Test case for the (to-be-deprecated) `url.raw` accessor.
"""
- url = httpx.URL("https://example.com/?[]")
- assert url.join("/") == "https://example.com/"
-
-
-def test_url_raw_compatibility():
url = httpx.URL("https://www.example.com/path")
scheme, host, port, raw_path = url.raw
diff --git a/tests/test_asgi.py b/tests/test_asgi.py
index 14d6df6ded..8bb6dcb7bc 100644
--- a/tests/test_asgi.py
+++ b/tests/test_asgi.py
@@ -120,6 +120,19 @@ async def test_asgi_raw_path():
assert response.json() == {"raw_path": "/user@example.org"}
+@pytest.mark.anyio
+async def test_asgi_raw_path_should_not_include_querystring_portion():
+ """
+ See https://github.com/encode/httpx/issues/2810
+ """
+ async with httpx.AsyncClient(app=echo_raw_path) as client:
+ url = httpx.URL("http://www.example.org/path?query")
+ response = await client.get(url)
+
+ assert response.status_code == 200
+ assert response.json() == {"raw_path": "/path"}
+
+
@pytest.mark.anyio
async def test_asgi_upload():
async with httpx.AsyncClient(app=echo_body) as client:
diff --git a/tests/test_config.py b/tests/test_config.py
index 00913b2c17..6f6ee4f575 100644
--- a/tests/test_config.py
+++ b/tests/test_config.py
@@ -101,7 +101,10 @@ def test_create_ssl_context_with_get_request(server, cert_pem_file):
def test_limits_repr():
limits = httpx.Limits(max_connections=100)
- expected = "Limits(max_connections=100, max_keepalive_connections=None, keepalive_expiry=5.0)"
+ expected = (
+ "Limits(max_connections=100, max_keepalive_connections=None,"
+ " keepalive_expiry=5.0)"
+ )
assert repr(limits) == expected
diff --git a/tests/test_content.py b/tests/test_content.py
index a4d5f7a1fc..21c92dd799 100644
--- a/tests/test_content.py
+++ b/tests/test_content.py
@@ -15,7 +15,7 @@ async def test_empty_content():
assert isinstance(request.stream, httpx.SyncByteStream)
assert isinstance(request.stream, httpx.AsyncByteStream)
- sync_content = b"".join([part for part in request.stream])
+ sync_content = b"".join(list(request.stream))
async_content = b"".join([part async for part in request.stream])
assert request.headers == {"Host": "www.example.com", "Content-Length": "0"}
@@ -29,7 +29,7 @@ async def test_bytes_content():
assert isinstance(request.stream, typing.Iterable)
assert isinstance(request.stream, typing.AsyncIterable)
- sync_content = b"".join([part for part in request.stream])
+ sync_content = b"".join(list(request.stream))
async_content = b"".join([part async for part in request.stream])
assert request.headers == {"Host": "www.example.com", "Content-Length": "13"}
@@ -42,7 +42,7 @@ async def test_bytes_content():
assert isinstance(request.stream, typing.Iterable)
assert isinstance(request.stream, typing.AsyncIterable)
- sync_content = b"".join([part for part in request.stream])
+ sync_content = b"".join(list(request.stream))
async_content = b"".join([part async for part in request.stream])
assert request.headers == {"Host": "www.example.com", "Content-Length": "13"}
@@ -56,7 +56,7 @@ async def test_bytesio_content():
assert isinstance(request.stream, typing.Iterable)
assert not isinstance(request.stream, typing.AsyncIterable)
- content = b"".join([part for part in request.stream])
+ content = b"".join(list(request.stream))
assert request.headers == {"Host": "www.example.com", "Content-Length": "13"}
assert content == b"Hello, world!"
@@ -100,7 +100,7 @@ def hello_world() -> typing.Iterator[bytes]:
assert isinstance(request.stream, typing.Iterable)
assert not isinstance(request.stream, typing.AsyncIterable)
- content = b"".join([part for part in request.stream])
+ content = b"".join(list(request.stream))
assert request.headers == {
"Host": "www.example.com",
@@ -109,7 +109,7 @@ def hello_world() -> typing.Iterator[bytes]:
assert content == b"Hello, world!"
with pytest.raises(httpx.StreamConsumed):
- [part for part in request.stream]
+ list(request.stream)
# Support 'data' for compat with requests.
with pytest.warns(DeprecationWarning):
@@ -117,7 +117,7 @@ def hello_world() -> typing.Iterator[bytes]:
assert isinstance(request.stream, typing.Iterable)
assert not isinstance(request.stream, typing.AsyncIterable)
- content = b"".join([part for part in request.stream])
+ content = b"".join(list(request.stream))
assert request.headers == {
"Host": "www.example.com",
@@ -168,7 +168,7 @@ async def test_json_content():
assert isinstance(request.stream, typing.Iterable)
assert isinstance(request.stream, typing.AsyncIterable)
- sync_content = b"".join([part for part in request.stream])
+ sync_content = b"".join(list(request.stream))
async_content = b"".join([part async for part in request.stream])
assert request.headers == {
@@ -186,7 +186,7 @@ async def test_urlencoded_content():
assert isinstance(request.stream, typing.Iterable)
assert isinstance(request.stream, typing.AsyncIterable)
- sync_content = b"".join([part for part in request.stream])
+ sync_content = b"".join(list(request.stream))
async_content = b"".join([part async for part in request.stream])
assert request.headers == {
@@ -204,7 +204,7 @@ async def test_urlencoded_boolean():
assert isinstance(request.stream, typing.Iterable)
assert isinstance(request.stream, typing.AsyncIterable)
- sync_content = b"".join([part for part in request.stream])
+ sync_content = b"".join(list(request.stream))
async_content = b"".join([part async for part in request.stream])
assert request.headers == {
@@ -222,7 +222,7 @@ async def test_urlencoded_none():
assert isinstance(request.stream, typing.Iterable)
assert isinstance(request.stream, typing.AsyncIterable)
- sync_content = b"".join([part for part in request.stream])
+ sync_content = b"".join(list(request.stream))
async_content = b"".join([part async for part in request.stream])
assert request.headers == {
@@ -240,7 +240,7 @@ async def test_urlencoded_list():
assert isinstance(request.stream, typing.Iterable)
assert isinstance(request.stream, typing.AsyncIterable)
- sync_content = b"".join([part for part in request.stream])
+ sync_content = b"".join(list(request.stream))
async_content = b"".join([part async for part in request.stream])
assert request.headers == {
@@ -265,7 +265,7 @@ async def test_multipart_files_content():
assert isinstance(request.stream, typing.Iterable)
assert isinstance(request.stream, typing.AsyncIterable)
- sync_content = b"".join([part for part in request.stream])
+ sync_content = b"".join(list(request.stream))
async_content = b"".join([part async for part in request.stream])
assert request.headers == {
@@ -304,7 +304,7 @@ async def test_multipart_data_and_files_content():
assert isinstance(request.stream, typing.Iterable)
assert isinstance(request.stream, typing.AsyncIterable)
- sync_content = b"".join([part for part in request.stream])
+ sync_content = b"".join(list(request.stream))
async_content = b"".join([part async for part in request.stream])
assert request.headers == {
@@ -348,7 +348,7 @@ async def test_empty_request():
assert isinstance(request.stream, typing.Iterable)
assert isinstance(request.stream, typing.AsyncIterable)
- sync_content = b"".join([part for part in request.stream])
+ sync_content = b"".join(list(request.stream))
async_content = b"".join([part async for part in request.stream])
assert request.headers == {"Host": "www.example.com", "Content-Length": "0"}
@@ -375,7 +375,7 @@ async def test_multipart_multiple_files_single_input_content():
assert isinstance(request.stream, typing.Iterable)
assert isinstance(request.stream, typing.AsyncIterable)
- sync_content = b"".join([part for part in request.stream])
+ sync_content = b"".join(list(request.stream))
async_content = b"".join([part async for part in request.stream])
assert request.headers == {
@@ -421,7 +421,7 @@ async def test_response_empty_content():
assert isinstance(response.stream, typing.Iterable)
assert isinstance(response.stream, typing.AsyncIterable)
- sync_content = b"".join([part for part in response.stream])
+ sync_content = b"".join(list(response.stream))
async_content = b"".join([part async for part in response.stream])
assert response.headers == {}
@@ -435,7 +435,7 @@ async def test_response_bytes_content():
assert isinstance(response.stream, typing.Iterable)
assert isinstance(response.stream, typing.AsyncIterable)
- sync_content = b"".join([part for part in response.stream])
+ sync_content = b"".join(list(response.stream))
async_content = b"".join([part async for part in response.stream])
assert response.headers == {"Content-Length": "13"}
@@ -453,13 +453,13 @@ def hello_world() -> typing.Iterator[bytes]:
assert isinstance(response.stream, typing.Iterable)
assert not isinstance(response.stream, typing.AsyncIterable)
- content = b"".join([part for part in response.stream])
+ content = b"".join(list(response.stream))
assert response.headers == {"Transfer-Encoding": "chunked"}
assert content == b"Hello, world!"
with pytest.raises(httpx.StreamConsumed):
- [part for part in response.stream]
+ list(response.stream)
@pytest.mark.anyio
diff --git a/tests/test_decoders.py b/tests/test_decoders.py
index 61c9a4acca..170a93453c 100644
--- a/tests/test_decoders.py
+++ b/tests/test_decoders.py
@@ -219,6 +219,17 @@ def test_text_decoder_empty_cases():
assert response.text == ""
+@pytest.mark.parametrize(
+ ["data", "expected"],
+ [((b"Hello,", b" world!"), ["Hello,", " world!"])],
+)
+def test_streaming_text_decoder(
+ data: typing.Iterable[bytes], expected: typing.List[str]
+) -> None:
+ response = httpx.Response(200, content=iter(data))
+ assert list(response.iter_text()) == expected
+
+
def test_line_decoder_nl():
response = httpx.Response(200, content=[b""])
assert list(response.iter_lines()) == []
diff --git a/tests/test_multipart.py b/tests/test_multipart.py
index 55211fb1a9..fc283c9cc4 100644
--- a/tests/test_multipart.py
+++ b/tests/test_multipart.py
@@ -174,7 +174,10 @@ def test_multipart_file_tuple_headers(file_content_type: typing.Optional[str]) -
def test_multipart_headers_include_content_type() -> None:
- """Content-Type from 4th tuple parameter (headers) should override the 3rd parameter (content_type)"""
+ """
+ Content-Type from 4th tuple parameter (headers) should
+ override the 3rd parameter (content_type)
+ """
file_name = "test.txt"
file_content = io.BytesIO(b"")
file_content_type = "text/plain"
diff --git a/tests/test_urlparse.py b/tests/test_urlparse.py
deleted file mode 100644
index b03291b44f..0000000000
--- a/tests/test_urlparse.py
+++ /dev/null
@@ -1,285 +0,0 @@
-import pytest
-
-import httpx
-
-
-def test_urlparse():
- url = httpx.URL("https://www.example.com/")
-
- assert url.scheme == "https"
- assert url.userinfo == b""
- assert url.netloc == b"www.example.com"
- assert url.host == "www.example.com"
- assert url.port is None
- assert url.path == "/"
- assert url.query == b""
- assert url.fragment == ""
-
- assert str(url) == "https://www.example.com/"
-
-
-def test_urlparse_no_scheme():
- url = httpx.URL("://example.com")
- assert url.scheme == ""
- assert url.host == "example.com"
- assert url.path == "/"
-
-
-def test_urlparse_no_authority():
- url = httpx.URL("http://")
- assert url.scheme == "http"
- assert url.host == ""
- assert url.path == "/"
-
-
-# Tests for different host types
-
-
-def test_urlparse_valid_host():
- url = httpx.URL("https://example.com/")
- assert url.host == "example.com"
-
-
-def test_urlparse_normalized_host():
- url = httpx.URL("https://EXAMPLE.com/")
- assert url.host == "example.com"
-
-
-def test_urlparse_ipv4_like_host():
- """rare host names used to quality as IPv4"""
- url = httpx.URL("https://023b76x43144/")
- assert url.host == "023b76x43144"
-
-
-def test_urlparse_valid_ipv4():
- url = httpx.URL("https://1.2.3.4/")
- assert url.host == "1.2.3.4"
-
-
-def test_urlparse_invalid_ipv4():
- with pytest.raises(httpx.InvalidURL) as exc:
- httpx.URL("https://999.999.999.999/")
- assert str(exc.value) == "Invalid IPv4 address: '999.999.999.999'"
-
-
-def test_urlparse_valid_ipv6():
- url = httpx.URL("https://[2001:db8::ff00:42:8329]/")
- assert url.host == "2001:db8::ff00:42:8329"
-
-
-def test_urlparse_invalid_ipv6():
- with pytest.raises(httpx.InvalidURL) as exc:
- httpx.URL("https://[2001]/")
- assert str(exc.value) == "Invalid IPv6 address: '[2001]'"
-
-
-def test_urlparse_unescaped_idna_host():
- url = httpx.URL("https://中国.icom.museum/")
- assert url.raw_host == b"xn--fiqs8s.icom.museum"
-
-
-def test_urlparse_escaped_idna_host():
- url = httpx.URL("https://xn--fiqs8s.icom.museum/")
- assert url.raw_host == b"xn--fiqs8s.icom.museum"
-
-
-def test_urlparse_invalid_idna_host():
- with pytest.raises(httpx.InvalidURL) as exc:
- httpx.URL("https://☃.com/")
- assert str(exc.value) == "Invalid IDNA hostname: '☃.com'"
-
-
-# Tests for different port types
-
-
-def test_urlparse_valid_port():
- url = httpx.URL("https://example.com:123/")
- assert url.port == 123
-
-
-def test_urlparse_normalized_port():
- # If the port matches the scheme default it is normalized to None.
- url = httpx.URL("https://example.com:443/")
- assert url.port is None
-
-
-def test_urlparse_invalid_port():
- with pytest.raises(httpx.InvalidURL) as exc:
- httpx.URL("https://example.com:abc/")
- assert str(exc.value) == "Invalid port: 'abc'"
-
-
-# Tests for path handling
-
-
-def test_urlparse_normalized_path():
- url = httpx.URL("https://example.com/abc/def/../ghi/./jkl")
- assert url.path == "/abc/ghi/jkl"
-
-
-def test_urlparse_escaped_path():
- url = httpx.URL("https://example.com/ /🌟/")
- assert url.raw_path == b"/%20/%F0%9F%8C%9F/"
-
-
-def test_urlparse_leading_dot_prefix_on_absolute_url():
- url = httpx.URL("https://example.com/../abc")
- assert url.path == "/abc"
-
-
-def test_urlparse_leading_dot_prefix_on_relative_url():
- url = httpx.URL("../abc")
- assert url.path == "../abc"
-
-
-# Tests for optional percent encoding
-
-
-def test_param_requires_encoding():
- url = httpx.URL("http://webservice", params={"u": "with spaces"})
- assert str(url) == "http://webservice?u=with%20spaces"
-
-
-def test_param_does_not_require_encoding():
- url = httpx.URL("http://webservice", params={"u": "with%20spaces"})
- assert str(url) == "http://webservice?u=with%20spaces"
-
-
-def test_param_with_existing_escape_requires_encoding():
- url = httpx.URL("http://webservice", params={"u": "http://example.com?q=foo%2Fa"})
- assert str(url) == "http://webservice?u=http%3A%2F%2Fexample.com%3Fq%3Dfoo%252Fa"
-
-
-# Tests for invalid URLs
-
-
-def test_urlparse_excessively_long_url():
- with pytest.raises(httpx.InvalidURL) as exc:
- httpx.URL("https://www.example.com/" + "x" * 100_000)
- assert str(exc.value) == "URL too long"
-
-
-def test_urlparse_excessively_long_component():
- with pytest.raises(httpx.InvalidURL) as exc:
- httpx.URL("https://www.example.com", path="/" + "x" * 100_000)
- assert str(exc.value) == "URL component 'path' too long"
-
-
-def test_urlparse_non_printing_character_in_url():
- with pytest.raises(httpx.InvalidURL) as exc:
- httpx.URL("https://www.example.com/\n")
- assert str(exc.value) == "Invalid non-printable ASCII character in URL"
-
-
-def test_urlparse_non_printing_character_in_component():
- with pytest.raises(httpx.InvalidURL) as exc:
- httpx.URL("https://www.example.com", path="/\n")
- assert (
- str(exc.value)
- == "Invalid non-printable ASCII character in URL component 'path'"
- )
-
-
-# Test for urlparse components
-
-
-def test_urlparse_with_components():
- url = httpx.URL(scheme="https", host="www.example.com", path="/")
-
- assert url.scheme == "https"
- assert url.userinfo == b""
- assert url.host == "www.example.com"
- assert url.port is None
- assert url.path == "/"
- assert url.query == b""
- assert url.fragment == ""
-
- assert str(url) == "https://www.example.com/"
-
-
-def test_urlparse_with_invalid_component():
- with pytest.raises(TypeError) as exc:
- httpx.URL(scheme="https", host="www.example.com", incorrect="/")
- assert str(exc.value) == "'incorrect' is an invalid keyword argument for URL()"
-
-
-def test_urlparse_with_invalid_scheme():
- with pytest.raises(httpx.InvalidURL) as exc:
- httpx.URL(scheme="~", host="www.example.com", path="/")
- assert str(exc.value) == "Invalid URL component 'scheme'"
-
-
-def test_urlparse_with_invalid_path():
- with pytest.raises(httpx.InvalidURL) as exc:
- httpx.URL(scheme="https", host="www.example.com", path="abc")
- assert str(exc.value) == "For absolute URLs, path must be empty or begin with '/'"
-
- with pytest.raises(httpx.InvalidURL) as exc:
- httpx.URL(path="//abc")
- assert (
- str(exc.value)
- == "URLs with no authority component cannot have a path starting with '//'"
- )
-
- with pytest.raises(httpx.InvalidURL) as exc:
- httpx.URL(path=":abc")
- assert (
- str(exc.value)
- == "URLs with no scheme component cannot have a path starting with ':'"
- )
-
-
-def test_urlparse_with_relative_path():
- # This path would be invalid for an absolute URL, but is valid as a relative URL.
- url = httpx.URL(path="abc")
- assert url.path == "abc"
-
-
-# Tests for accessing and modifying `urlparse` results.
-
-
-def test_copy_with():
- url = httpx.URL("https://www.example.com/")
- assert str(url) == "https://www.example.com/"
-
- url = url.copy_with()
- assert str(url) == "https://www.example.com/"
-
- url = url.copy_with(scheme="http")
- assert str(url) == "http://www.example.com/"
-
- url = url.copy_with(netloc=b"example.com")
- assert str(url) == "http://example.com/"
-
- url = url.copy_with(path="/abc")
- assert str(url) == "http://example.com/abc"
-
-
-# Tests for percent encoding across path, query, and fragment...
-
-
-def test_path_percent_encoding():
- # Test percent encoding for SUB_DELIMS ALPHA NUM and allowable GEN_DELIMS
- url = httpx.URL("https://example.com/!$&'()*+,;= abc ABC 123 :/[]@")
- assert url.raw_path == b"/!$&'()*+,;=%20abc%20ABC%20123%20:/[]@"
- assert url.path == "/!$&'()*+,;= abc ABC 123 :/[]@"
- assert url.query == b""
- assert url.fragment == ""
-
-
-def test_query_percent_encoding():
- # Test percent encoding for SUB_DELIMS ALPHA NUM and allowable GEN_DELIMS
- url = httpx.URL("https://example.com/?!$&'()*+,;= abc ABC 123 :/[]@" + "?")
- assert url.raw_path == b"/?!$&'()*+,;=%20abc%20ABC%20123%20:%2F[]@?"
- assert url.path == "/"
- assert url.query == b"!$&'()*+,;=%20abc%20ABC%20123%20:%2F[]@?"
- assert url.fragment == ""
-
-
-def test_fragment_percent_encoding():
- # Test percent encoding for SUB_DELIMS ALPHA NUM and allowable GEN_DELIMS
- url = httpx.URL("https://example.com/#!$&'()*+,;= abc ABC 123 :/[]@" + "?#")
- assert url.raw_path == b"/"
- assert url.path == "/"
- assert url.query == b""
- assert url.fragment == "!$&'()*+,;= abc ABC 123 :/[]@?#"
diff --git a/tests/test_utils.py b/tests/test_utils.py
index dedb92f7f2..5391f9c22d 100644
--- a/tests/test_utils.py
+++ b/tests/test_utils.py
@@ -109,7 +109,8 @@ def test_logging_redirect_chain(server, caplog):
(
"httpx",
logging.INFO,
- 'HTTP Request: GET http://127.0.0.1:8000/redirect_301 "HTTP/1.1 301 Moved Permanently"',
+ "HTTP Request: GET http://127.0.0.1:8000/redirect_301"
+ ' "HTTP/1.1 301 Moved Permanently"',
),
(
"httpx",
@@ -196,6 +197,7 @@ def test_get_ssl_cert_file():
({"no_proxy": "localhost"}, {"all://localhost": None}),
({"no_proxy": "github.com"}, {"all://*github.com": None}),
({"no_proxy": ".github.com"}, {"all://*.github.com": None}),
+ ({"no_proxy": "http://github.com"}, {"http://github.com": None}),
],
)
def test_get_environment_proxies(environment, proxies):