Moving on from where we stopped, we'll focus here on setting up our database (PostgreSQL) and Redis connections. We'll leverage the awesome Rust ecosystem and our previous setup to make this seamless. Redis is needed to help us forcefully destroy some tokens and to efficiently store our session cookies later on.
Source code
The source code for this series is hosted on GitHub via:
After that, change directory into each subdirectory, backend and frontend, in different terminals. Then follow the instructions in each subdirectory to run them.
You can get an overview of the code for this article on GitHub.
Step 1: Create a users submodule in the routes module
This step isn't strictly relevant here, but to keep us focused, I decided to include it. Let's prepare our application for the task ahead. In the src/routes folder, create a subfolder and name it users. Make the new folder a module:
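One way to do this from the project root (assuming a Unix-like shell; a folder becomes a module once it contains a mod.rs file):

~/rust-auth/backend$ mkdir src/routes/users && touch src/routes/users/mod.rs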
Then link the newly created module to its parent module:
// src/routes/mod.rs
...
mod users;
Let's leave that there and focus on this article's main business: linking our application up with a database and Redis.
Step 2: Install database connection dependencies and set them up
As discussed in the previous article, we'll be using SQLx to asynchronously interact with the database. It's not an ORM, so we'll be writing raw SQL which, though perhaps tedious, will let us learn good old SQL along the way. If you need an ORM, you can check out Diesel. Let's install SQLx:
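If you use cargo-edit, a command along these lines should work (note that the runtime-actix-rustls feature exists in SQLx up to v0.6; newer versions organize runtime features differently, so adjust accordingly):

~/rust-auth/backend$ cargo add sqlx --features runtime-actix-rustls,postgres,uuid,chrono,migrate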
We are activating the runtime-actix-rustls (since we use actix-web), postgres (our database of choice), uuid (for IDs), chrono (to support Rust's chrono crate for dates and times), and migrate (to allow automatic migrations) features. What do we want to migrate? We'll be migrating some SQL tables. To start out, let's create a migrations folder at the root of our application. That's the default path where it should be located; however, you can change this. Whenever you run a migration, this folder is checked to determine whether anything in the database schema needs to be altered.
~/rust-auth/backend$ mkdir migrations
Now, using the SQLx CLI, we can generate .sql files for our tables in the migrations folder:
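Assuming you've installed the CLI (for example, via cargo install sqlx-cli), the following generates the pair of files, named after our users table:

~/rust-auth/backend$ sqlx migrate add -r users_table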
You should now see two .sql files generated. The -r flag makes the migrations reversible, generating corresponding "up" and "down" scripts. If the above command didn't work, don't worry; we'll make it work.
Let's add some credentials to the settings/base.yaml file for our DB and Redis connections:
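Here's a sketch of what base.yaml might contain; the values below are placeholders for a local setup, and the keys mirror the DatabaseSettings and RedisSettings structs we define next:

database:
  username: "postgres"
  password: "password"
  port: 5432
  host: "localhost"
  database_name: "rust_auth"
  require_ssl: false
redis:
  uri: "redis://127.0.0.1:6379"
  pool_max_open: 16
  pool_max_idle: 8
  pool_timeout_seconds: 15
  pool_expire_seconds: 60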
These are basic settings you need to supply for your local setup of PostgreSQL and Redis. Proceeding to src/settings.rs:
// src/settings.rs
use sqlx::ConnectOptions;

/// Global settings for exposing all preconfigured variables
#[derive(serde::Deserialize, Clone)]
pub struct Settings {
    pub application: ApplicationSettings,
    pub debug: bool,
    pub database: DatabaseSettings,
    pub redis: RedisSettings,
}
...
/// Redis settings for the entire app
#[derive(serde::Deserialize, Clone, Debug)]
pub struct RedisSettings {
    pub uri: String,
    pub pool_max_open: u64,
    pub pool_max_idle: u64,
    pub pool_timeout_seconds: u64,
    pub pool_expire_seconds: u64,
}

/// Database settings for the entire app
#[derive(serde::Deserialize, Clone)]
pub struct DatabaseSettings {
    pub username: String,
    pub password: String,
    pub port: u16,
    pub host: String,
    pub database_name: String,
    pub require_ssl: bool,
}

impl DatabaseSettings {
    pub fn connect_to_db(&self) -> sqlx::postgres::PgConnectOptions {
        let ssl_mode = if self.require_ssl {
            sqlx::postgres::PgSslMode::Require
        } else {
            sqlx::postgres::PgSslMode::Prefer
        };
        let mut options = sqlx::postgres::PgConnectOptions::new()
            .host(&self.host)
            .username(&self.username)
            .password(&self.password)
            .port(self.port)
            .ssl_mode(ssl_mode)
            .database(&self.database_name);
        options.log_statements(tracing::log::LevelFilter::Trace);
        options
    }
}
We added RedisSettings and DatabaseSettings to our settings file and reflected the change in the global Settings struct. We also implemented a method, connect_to_db, on DatabaseSettings so that we can easily connect to our database using the credentials provided. It's time to integrate these settings with our application. Open up src/startup.rs:
// src/startup.rs
...
    pub async fn build(
        settings: crate::settings::Settings,
+       test_pool: Option<sqlx::postgres::PgPool>,
    ) -> Result<Self, std::io::Error> {
+       let connection_pool = if let Some(pool) = test_pool {
+           pool
+       } else {
+           get_connection_pool(&settings.database).await
+       };
+       sqlx::migrate!()
+           .run(&connection_pool)
+           .await
+           .expect("Failed to migrate the database.");
        let address = format!(
            "{}:{}",
            settings.application.host, settings.application.port
        );
        let listener = std::net::TcpListener::bind(&address)?;
        let port = listener.local_addr().unwrap().port();
-       let server = run(listener).await?;
+       let server = run(listener, connection_pool, settings).await?;
        Ok(Self { port, server })
    }
...
+ pub async fn get_connection_pool(
+     settings: &crate::settings::DatabaseSettings,
+ ) -> sqlx::postgres::PgPool {
+     sqlx::postgres::PgPoolOptions::new()
+         .acquire_timeout(std::time::Duration::from_secs(2))
+         .connect_lazy_with(settings.connect_to_db())
+ }
async fn run(
    listener: std::net::TcpListener,
+   db_pool: sqlx::postgres::PgPool,
+   settings: crate::settings::Settings,
) -> Result<actix_web::dev::Server, std::io::Error> {
+   // Database connection pool application state
+   let pool = actix_web::web::Data::new(db_pool);
+   // Redis connection pool
+   let cfg = deadpool_redis::Config::from_url(settings.clone().redis.uri);
+   let redis_pool = cfg
+       .create_pool(Some(deadpool_redis::Runtime::Tokio1))
+       .expect("Cannot create deadpool redis.");
+   let redis_pool_data = actix_web::web::Data::new(redis_pool);
    let server = actix_web::HttpServer::new(move || {
-       actix_web::App::new().service(crate::routes::health_check)
+       actix_web::App::new()
+           .service(crate::routes::health_check)
+           // Add database pool to application state
+           .app_data(pool.clone())
+           // Add redis pool to application state
+           .app_data(redis_pool_data.clone())
    })
    .listen(listener)?
    .run();
    Ok(server)
}
We created a new function, get_connection_pool, that lazily connects our application to the DB and returns the connection pool for the app's use. Our run function was extended to take more parameters, such as the pool returned by the aforementioned function. Since many endpoints will need access to the DB (and Redis) pools created, we need to make them available app-wide. To do this, actix-web provides an extractor, actix_web::web::Data<T>, to help share application state "with all routes and resources within the same scope". We used this API to create pool and redis_pool_data, which were then attached to the application via App::app_data(). As for the build method, we extended it to accept an optional argument, test_pool, which will be used when tests are being run. We also enabled automatic migration of the DB using the sqlx::migrate! macro. If you created your migrations folder somewhere other than the root directory, you must pass the path to that folder to this macro.
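To see what this buys us, here is a minimal, hypothetical handler sketch showing how a route can later extract the shared pool from the application state (the route path, handler name, and query are illustrative only, not part of our codebase yet):

// A hypothetical handler demonstrating shared-state extraction.
// `web::Data<T>` retrieves whatever was registered via `.app_data(...)`.
#[actix_web::get("/users/count")]
async fn count_users(
    pool: actix_web::web::Data<sqlx::postgres::PgPool>,
) -> actix_web::HttpResponse {
    // `get_ref()` gives us a `&PgPool` from the `Data` wrapper.
    match sqlx::query_scalar::<_, i64>("SELECT COUNT(*) FROM users")
        .fetch_one(pool.get_ref())
        .await
    {
        Ok(count) => actix_web::HttpResponse::Ok().body(count.to_string()),
        Err(_) => actix_web::HttpResponse::InternalServerError().finish(),
    }
}

Before we install deadpool-redis, let's update our src/main.rs one last time: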
// src/main.rs
#[tokio::main]
async fn main() -> std::io::Result<()> {
    dotenv::dotenv().ok();
    let settings = backend::settings::get_settings().expect("Failed to read settings.");
    let subscriber = backend::telemetry::get_subscriber(settings.clone().debug);
    backend::telemetry::init_subscriber(subscriber);
-   let application = backend::startup::Application::build(settings).await?;
+   let application = backend::startup::Application::build(settings, None).await?;
    tracing::event!(target: "backend", tracing::Level::INFO, "Listening on http://127.0.0.1:{}/", application.port());
    application.run_until_stopped().await?;
    Ok(())
}
Since this is the real app, we set test_pool to None. Now, install deadpool-redis:
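Assuming cargo-edit again (pin a version compatible with your tokio setup if needed):

~/rust-auth/backend$ cargo add deadpool-redis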
If the migration-generation command failed before, you can rerun it now. It's time to write some SQL. Open up migrations/*_users_table.up.sql:
-- migrations/*_users_table.up.sql
-- Add up migration script here

-- User table
CREATE TABLE IF NOT EXISTS users (
    id UUID NOT NULL PRIMARY KEY DEFAULT gen_random_uuid(),
    email TEXT NOT NULL UNIQUE,
    password TEXT NOT NULL,
    first_name TEXT NOT NULL,
    last_name TEXT NOT NULL,
    is_active BOOLEAN DEFAULT FALSE,
    is_staff BOOLEAN DEFAULT FALSE,
    is_superuser BOOLEAN DEFAULT FALSE,
    thumbnail TEXT NULL,
    date_joined TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

CREATE INDEX IF NOT EXISTS users_id_email_is_active_indx ON users (id, email, is_active);

-- Create a domain for phone data type
CREATE DOMAIN phone AS TEXT CHECK (
    octet_length(VALUE) BETWEEN 1/*+*/+8 AND 1/*+*/+15+3
    AND VALUE ~ '^\+\d+$'
);

-- User details table (One-to-one relationship)
CREATE TABLE user_profile (
    id UUID NOT NULL PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID NOT NULL UNIQUE,
    phone_number phone NULL,
    birth_date DATE NULL,
    github_link TEXT NULL,
    FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE
);

CREATE INDEX IF NOT EXISTS users_detail_id_user_id ON user_profile (id, user_id);
As someone with some Django experience, I tend to like its default User model, and the users table above models it, except that username is replaced by email. I also like using UUIDs for my primary keys.
From the SQL, we have two simple tables: users and user_profile. user_profile has a one-to-one relationship with the users table, since a user can have only one profile. We also created a custom data type, phone, using SQL's DOMAIN. This lets us constrain any text stored as a phone number (the E.164 standard was used). Database indexes were also created.
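For completeness, the -r flag also generated a corresponding down script. A sketch of what it might contain to reverse the migration (drop order matters, because user_profile depends on both users and the phone domain):

-- migrations/*_users_table.down.sql
-- Add down migration script here
DROP TABLE IF EXISTS user_profile;
DROP DOMAIN IF EXISTS phone;
DROP TABLE IF EXISTS users;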