Python is in the midst of a resurgence. It never went away, but usage now grows like never before. With machine learning developers and data scientists relying on Python, much of the web development ecosystem around the language continues to grow.
One thing that benefits all three of these specializations is the power of APIs. Pulling in data and connecting to external services is an essential part of working in any language. In this article, we'll look at the primary libraries for making HTTP requests, along with some common use cases that allow you to connect to an API in Python. Before that, we should ask an important question.
Is Python good for APIs?
It seems like a strange question, but given the large web presence of Node.js and Ruby, you may think that Python isn't good for making API calls. This isn't true. In fact, Python has had a long and dedicated presence on the web, most notably with its Flask and Django frameworks.
As Python is a powerful, accessible way to manipulate data, it makes sense to also use it for acquiring the data sources. This is where API calls come in. Let's start with the most popular Python HTTP library used for making API calls: Requests.
Requests
Requests is the accessible, leading library that developers use for making API requests in Python. It offers an interface to make HTTP requests synchronously. Let's get right into some common types of requests you can make with Requests. The following examples will all assume that your project includes Requests. You can follow their installation instructions, but the gist is:
Install it via pip or pipenv:
pip install requests
Then, make sure to import requests into your project:
import requests
Common types of API calls with Requests
The simplest GET request is intuitive:
response = requests.get('https://example.com')
As we can see with the get method above, Requests offers shortcut methods for the HTTP verbs, including POST, PUT, DELETE, HEAD, and OPTIONS.
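To make that concrete, here's a quick sketch of a few of those shortcut methods (the URL is a placeholder, not a real endpoint):

url = 'https://example.com/items/1'

response = requests.put(url, data={'name': 'Bearer'})  # update a resource
response = requests.delete(url)                        # remove a resource
response = requests.head(url)                          # fetch only the headers
response = requests.options(url)                       # ask which methods are allowed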
These requests are pretty simple. Let's look at more complex ones. Often, an API's documentation will require that you pass query parameters to a specific endpoint. To pass query parameters, we can pass them into get as the second argument, params:
response = requests.get('https://example.com', params={'name': 'Bearer'})
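If you're curious how those parameters are encoded, the final URL is available on the response object (the output shown is only illustrative):

print(response.url)  # e.g. https://example.com/?name=Bearer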
The response variable contains the data returned by the API in our examples. There are a few primary ways to access that data (see the short sketch after this list):
- As text, with response.text
- As bytes, with response.content
- As JSON, with response.json()
- Or as the raw response, with response.raw
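Here is a short sketch of those accessors, assuming a placeholder endpoint that returns JSON:

response = requests.get('https://example.com/data')

print(response.text)     # the body as a decoded string
print(response.content)  # the body as raw bytes
data = response.json()   # the body parsed into Python data structures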
In addition to the body of the response, we can also access the status code with response.status_code, the headers with response.headers, and so on. You can find a full list of properties and methods available on Response in the requests.Response documentation.
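For example, continuing with the same response object:

print(response.status_code)              # e.g. 200
print(response.ok)                       # True when the status code is below 400
print(response.headers['Content-Type'])  # headers act like a case-insensitive dict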
As we saw with the params argument, we can also pass headers to the request.
response = requests.get('https://example.com', headers={'example-header': 'Bearer'})
Here, we pass the headers argument with a Python dictionary of headers.
The last common API call type we'll make is a full-featured POST with authentication. This will combine the previous headers technique with the use of the data argument.
url = 'https://example.com'
headers = {'Authorization': 'Bearer example-auth-code'}
payload = {'name': 'Mark', 'email': 'mark@bearer.sh'}
response = requests.post(url, headers=headers, data=payload)
This sends the payload as form-encoded data. For most modern APIs, we often need to send data as JSON instead. In this next example, we use the built-in json argument from Requests.
url = 'https://example.com'
headers = {'Authorization': 'Bearer example-auth-code'}
payload = {'name': 'Mark', 'email': 'mark@bearer.sh'}
response = requests.post(url, headers=headers, json=payload)
This will encode the payload as JSON, as well as automatically change the Content-Type header to application/json.
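If you want to confirm what was actually sent, the prepared request is available on the response object:

print(response.request.headers['Content-Type'])  # application/json
print(response.request.body)                     # the JSON-encoded payload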
Requests is excellent for synchronous API calls, but sometimes your app may depend on asynchronous requests. For this, we can use an asynchronous HTTP library like aiohttp.
aiohttp
When making asynchronous HTTP requests, you'll need to take advantage of some newer features in Python 3. While the requests library does have variations and plugins to handle asynchronous programming, one of the more popular libraries for async is aiohttp. Used together with asyncio, we can use aiohttp to make requests in an asynchronous way. The code is a little more complex, but it offers all the additional freedom that async calls provide.
To get started, we'll need to install aiohttp:
pip install aiohttp
Common types of API calls with aiohttp
We will begin with the same GET request we saw earlier. To start, import both libraries and define an async function.
import asyncio # [1]
import aiohttp

async def main(): # [2]
    async with aiohttp.ClientSession() as session: # [3]
        async with session.get('http://example.com') as resp: # [4]
            response = await resp.read() # [5]
            print(response)

asyncio.run(main()) # [6]
In the code above, we perform the following:
1. We import the required libraries.
2. Define main as an async function.
3. We set up a ClientSession from aiohttp.
4. We use the session to perform an HTTP GET request.
5. Next, we await the response and print it.
6. Finally, we use the run method of Python's asyncio to call the asynchronous function.
If you haven't worked with async in Python before, this may look strange and complicated compared to the earlier examples. The makers of aiohttp recommend creating a single session per application and opening/closing connections on that single session. To make our examples self-contained, I have left them in the less efficient format.
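As a minimal sketch of that single-session pattern (the URLs are placeholders), one session can be shared across several requests:

import asyncio
import aiohttp

URLS = ['https://example.com/a', 'https://example.com/b']

async def fetch(session, url):
    # Reuse the shared session for each request
    async with session.get(url) as resp:
        return await resp.text()

async def main():
    # One session for the whole application
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(*(fetch(session, url) for url in URLS))
        print(results)

asyncio.run(main())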
Next, let's look at a full-featured POST with auth headers, like in the Requests example.
# ...
async def main():
    async with aiohttp.ClientSession() as session:
        async with session.post('http://example.com',
                                headers={'Authorization': 'Bearer 123456', 'Content-Type': 'application/json'},
                                json={'title': 'Try Bearer'}) as resp: # [1]
            response = await resp.json() # [2]
            print(response)

asyncio.run(main())
There are a few differences between this example and the previous:
1. The session uses the post method, and passes in headers and json dictionaries in addition to the URL.
2. We use the library's built-in json method from the response to parse the returned JSON.
With these two snippets, we're able to perform the majority of common API-related tasks. For additional features like file uploading and form data, take a look at aiohttp's developer documentation.
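As a quick sketch of the form data case, passing a plain dictionary to the data argument sends it form-encoded (the URL is a placeholder):

# ...
async def main():
    async with aiohttp.ClientSession() as session:
        async with session.post('https://example.com/form',
                                data={'name': 'Mark'}) as resp:
            print(resp.status)  # the HTTP status code of the response

asyncio.run(main())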
Additional libraries to try
While Requests is the most popular, you may find value in some of these additional libraries for more unique use cases.
- httpx: httpx offers both sync and async support. It also uses a requests-compatible API that will make moving between the two much easier (see the short sketch after this list). Currently the library is in a beta state with a 1.0 expected mid-summer 2020, but it is worth keeping an eye on as it matures.
- httpcore: Keeping with the pre-1.0 trend, httpcore is an interesting option if you are building a library. It is low-level, so you can build your own abstractions on top of it. They explicitly recommend not to use it unless you need a low-level library.
- urllib3: We should mention urllib3, if only because it is the underlying library that Requests and many others (including pip) are built on top of. While less user-friendly than some of the high-level libraries, urllib3 is powerful and battle-tested. If for some reason you need something with fewer abstractions than Requests, it is a good option.
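As a rough sketch of httpx's Requests-style interface (the exact API may shift as the library matures, and the URL is a placeholder):

import asyncio
import httpx

# Synchronous call, mirroring the Requests interface
response = httpx.get('https://example.com', params={'name': 'Bearer'})
print(response.status_code)

# Asynchronous call from the same library
async def main():
    async with httpx.AsyncClient() as client:
        response = await client.get('https://example.com')
        print(response.status_code)

asyncio.run(main())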
Making the most of API calls
With Python's power in data processing and its recent resurgence, thanks in part to the ML and data science communities, it is a great option for interacting with APIs. It is important to remember that even the most battle-tested and popular third-party APIs and services still suffer problems and outages. At Bearer, we're building tools to help manage these problems and better monitor your third-party APIs. Give Bearer a try today, and let us know what you think!