This tutorial was originally posted to developer.twitter.com.
Since I have a car in New York City, it is subject to the city’s alternate side of the street parking regulations. This means most nights I need to move my car before the early morning street cleaning in my neighborhood. I developed a nightly routine of moving my car before going to bed. I am sometimes a bit too good at this, and I often move my car on days I don’t need to. Alternate side of the street parking is often suspended for holidays or bad weather, and there is a Twitter handle, @NYCASP, that posts daily and whenever there is an emergency suspension.
To solve my problem I used Twitter data and Twilio: I now get a text message whenever I don’t need to move my car. I used the Search Tweets Python wrapper to grab data from the @NYCASP handle and Twilio to send the text. The script checks whether the words suspended and tomorrow appear in a Tweet and sends me a text message if both conditions are met. This tutorial will walk you through how I created this solution.
Getting our requirements set
The code was written with Python 3.6, and I recommend using this version as well. You will need pip to install the dependencies below, so install pip first if you don’t already have it. For this tutorial we’ll use Atom as our editor; make sure you have its command line tools installed.
We will also need to run the following in our command line to make sure we have the right dependencies set:
pip install pandas
pip install twilio
pip install searchtweets
Now, on the command line, let’s create a new directory for this project and change into it:
mkdir parking
cd parking
Setting up a Jupyter notebook
I wrote my code in a Jupyter notebook so I could work with the data interactively, which was important for this project because I wanted to see the Tweets I was searching for. You can use the Anaconda distribution of Python, which has Jupyter preinstalled, or install it yourself:
pip install jupyter
Once you have Jupyter installed, you can run your notebook by typing this into your command line:
jupyter notebook
If you need to stop your Jupyter notebook at any point, press ctrl+c to do so.
If the setup was done correctly, a listing of your directory should open in your browser. In the right-hand corner, click New and choose Python 3 from the dropdown to launch a notebook. Once it launches, click where it says Untitled and rename the notebook to parking.
Setting up to connect with Twilio
Next we’ll create a script that connects to Twilio. To do this you will need a Twilio account; check out the getting started documentation on the subject. To create the script, let’s open up the directory we’re working in.
In our command line let’s type:
atom .
We’ll also need to set the environment variables for our Twilio account. These are found in your console once you have created a Twilio account:
export TWILIO_ACCOUNT_SID='xxxxxxxxxxxxxxxxxxx'
export TWILIO_AUTH_TOKEN='xxxxxxxxxxxxxxxxxxxxxxx'
export TWILIO_PHONE_NUMBER='xxxxxxxxxxx'
export CELL_PHONE_NUMBER='xxxxxxxxxxx'
When we add our phone numbers as environment variables, each number goes in a single string containing the country code, area code, and number. For example, if I were in the US with the phone number (555) 555-5555, the string for the phone number variable would be ‘15555555555’.
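For example, using that hypothetical US number for both the Twilio number and my cell number, the two phone number exports would look like this:
export TWILIO_PHONE_NUMBER='15555555555'
export CELL_PHONE_NUMBER='15555555555'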
From here we can add a file called twilio_connect_demo.py:
We will first add the two import statements we’ll need:
import os
from twilio.rest import Client
Now we can write a function to help us connect to the Twilio API.
def twilio_connect():
    account_sid = os.environ.get('TWILIO_ACCOUNT_SID')
    auth_token = os.environ.get('TWILIO_AUTH_TOKEN')
    client = Client(account_sid, auth_token)
    return client
Now we can write another function to send the text message:
def send_message(client):
    return client.messages.create(from_=os.environ.get('TWILIO_PHONE_NUMBER'),
                                   to=os.environ.get('CELL_PHONE_NUMBER'),
                                   body="You don't have to move your car tonight. Enjoy your night!")
We now have a script that sends a text message; it will be helpful later on in this tutorial.
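If you’d like to sanity-check the script on its own before wiring it into the notebook (an optional step, not part of the original flow), you can run something like the following from a Python session in the same directory, assuming the environment variables above are set in that shell. Note that this sends a real text message.
# Optional quick test of twilio_connect_demo.py
from twilio_connect_demo import twilio_connect, send_message

client = twilio_connect()
send_message(client)  # sends the "You don't have to move your car tonight" text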
Importing what we need
Inside our notebook, in the first cell, we’ll import the libraries we’ll be working with. We’ll be using datetime, which is part of the Python standard library and lets us work with the dates for our search. We’ll also use pandas to turn our data into a dataframe. To work with the Search Tweets API, we’ll import ResultStream, gen_rule_payload, and load_credentials from the search tweets Python wrapper. Finally, we’ll import the two functions from the script we just wrote so we can send text messages later in the code.
import datetime
import pandas as pd
from searchtweets import ResultStream, gen_rule_payload, load_credentials
from twilio_connect_demo import twilio_connect, send_message
To run this code we can press the play button or use the keyboard shortcut shift-enter.
Connecting to the Twitter API
You will need to create a Twitter app, which allows you to connect to the API. Before you can do that, you are going to need an approved developer account. You can apply for a developer account here.
Once your account is approved, you will need to create an app. Then set up that app for the Search Tweets: 30-Days environment and give it a dev environment label.
After you have completed the initial setup, you will need to find the consumer API key and API secret key for your Twitter app. For more information about locating your keys, please review our documentation on the subject.
In Atom, we will need to add a new file that holds our app key and secret. Let’s create a file called secret.yaml that contains the following:
search_tweets_api:
  account_type: premium
  endpoint: https://api.twitter.com/1.1/tweets/search/30day/env_name.json
  consumer_key: xxxxxxxxxxxxxxxxxxx
  consumer_secret: xxxxxxxxxxxxxxxxxxx
The env_name in the endpoint is the name of the dev environment you created on developer.twitter.com. You will need to change this to the name of your own dev environment.
Make sure you add this file to your .gitignore before you push your project to GitHub. For more information about working with a gitignore file, check out this page. You also might want to review our guide on keeping tokens secure.
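For example, a minimal .gitignore for this project could contain just the name of the credentials file we created above:
# .gitignore
secret.yaml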
Working with Twitter data
Back in the browser where we are running our Jupyter notebook, we can create a variable called search_args that loads our credentials so we can connect to the Twitter API.
search_args = load_credentials(filename="secret.yaml",
yaml_key="search_tweets_api",
env_overwrite=False)
Since we want the dates we are searching to be dynamic, meaning we don’t have to enter them by hand, we’ll need to create some variables. Let’s start by creating one called today that grabs today’s date from the datetime library we imported earlier.
today = datetime.date.today()
print(today)
When we run this line of code we should get back today’s date. To get the start date, which is 30 days before today, we’ll create a variable called start_date. This takes today’s date and subtracts 30 days using timedelta from the datetime library.
start_date = today + datetime.timedelta(-30)
print(start_date)
After we run this we should get the start date, 30 days before today. Now we have the dates we can pass into gen_rule_payload, which we imported from the search tweets Python wrapper. It creates a rule that lets us pull data from the past 30 days from the @NYCASP Twitter handle.
rule = gen_rule_payload("from:NYCASP",
from_date=str(start_date),
to_date=str(today),
results_per_call=500,
)
print(rule)
After you run this line of code, you should get back the rule, which you can also paste into the body of a REST client such as Postman or Insomnia to view the data ahead of time.
{"query": "from:NYCASP", "maxResults": 500, "toDate": "201811060000", "fromDate": "201810070000"}
Let’s now create a variable called rs, short for ResultStream, since we are going to create a result stream of Tweets. To do this we pass in the rule we just created, along with our credentials and a few other parameters, as follows:
rs = ResultStream(rule_payload=rule,
max_results=500,
max_pages=1,
**search_args)
print(rs)
Once we print the variable called rs we can see the information we passed into it. This should look something like this:
ResultStream:
{
"username": null,
"endpoint": "https://api.twitter.com/1.1/tweets/search/30day/env_name.json",
"rule_payload": {
"query": "from:NYCASP",
"maxResults": 500,
"toDate": "201811060000",
"fromDate": "201810070000"
},
"tweetify": true,
"max_results": 500
}
We can call the .stream() method and assign the resulting stream of Tweets to a variable called tweets.
tweets = rs.stream()
From here we can convert the result stream into a list so we can work with the data in a more robust way. This is the point where we start to see the Tweets we are working with. Let’s use a list comprehension to print out the first 5 Tweets, just to make sure we are on the right track.
list_tweets = list(tweets)
[print(tweet.all_text, end='\n\n') for tweet in list_tweets[0:5]];
When we run this code we’ll get something that looks like this:
#NYCASP rules will be suspended tomorrow, Wednesday, November 7 for Diwali. Parking meters will be in effect.
#NYCASP rules are suspended today, November 6 for Election Day. Parking meters are in effect.
#NYCASP rules will be suspended tomorrow, Tuesday, November 6 for Election Day. Parking meters will be in effect.
#NYCASP rules are in effect today, November 5.
#NYCASP rules will be in effect tomorrow, Monday, November 5.
Now that we know we are receiving the data properly, we can create two lists: one for the date and one for the Tweet text. We’ll start with two empty lists and use a for loop to iterate through our data.
tweet_text = []
tweet_date = []
for tweet in list_tweets:
    tweet_text.append(tweet['text'])
    tweet_date.append(tweet['created_at'])
Now we can use pandas to create a dataframe consisting of these two columns.
df = pd.DataFrame({'tweet':tweet_text, 'date':tweet_date})
We can use the head method to see the first 5 rows of the dataframe.
df.head()
We’ll see something that looks like this:
Sending the message
Now that we have the Twitter data in the right shape, we can set up the message to be sent if the right conditions are met. We’ll first need to create a variable called client that allows us to connect to Twilio.
client = twilio_connect()
From here we can set the logic to send us a text message if the words suspended and tomorrow appear in the last tweet.
if 'suspended' in df['tweet'].values[0]:
    if 'tomorrow' in df['tweet'].values[0]:
        send_message(client=client)
        print('text sent')
    else:
        print('suspended but not tomorrow, no text sent')
else:
    print('not suspended, no text sent')
As a result, we should get the corresponding print message depending on whether alternate side of the street parking is suspended tomorrow. This allows us to debug from the command line if needed.
If the words suspended and tomorrow appear in the last tweet we’ll get a text message that looks like this:
To download this script as a .py file, click the File menu, select Download as, and when prompted to choose a format be sure to select Python.
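Alternatively (not covered in the original post), if you prefer the command line, Jupyter can export the notebook to a script for you, producing parking.py in the current directory:
jupyter nbconvert --to script parking.ipynb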
Deployment
As it currently stands, this is the only time you will get the text message unless you deploy the script to a server. If you’d like to get a text message whenever you don’t need to move your car, you can set this script up to run daily on a server with a cron job. I have it set so the script runs daily at 7:30pm.
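For example, assuming the notebook has been exported to a script at /home/me/parking/parking.py (a hypothetical path) and the environment variables above are available to cron, a crontab entry for a daily 7:30pm run might look like this:
# run the parking check every day at 7:30pm
30 19 * * * cd /home/me/parking && python3 parking.py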
Next steps
The full code is found here. My coworker in the London office mentioned this code could easily be adapted to run against this handle so she can see when there are delays on the Oxford Tube. While talking with others about this project, some mentioned it might make more sense if it told you when you do need to move your car, instead of when you don’t. You could easily make a few changes to the code to make that happen; as it currently stands, this code can serve as a template to build on for other similar ideas.
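If you wanted to try that inversion, a rough sketch (reusing the df, client, and send_message names from earlier, and assuming you also change the message body inside send_message) might look like this:
# Alert when rules are in effect tomorrow, i.e. when you do need to move your car
if 'in effect' in df['tweet'].values[0] and 'tomorrow' in df['tweet'].values[0]:
    send_message(client=client)
    print('text sent: rules are in effect tomorrow')
else:
    print('no text sent')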
Let us know on the forums or Tweet us at @TwitterDev if this inspires you to build anything.