SQL concepts from A to Z

Helen Anderson - Feb 6 '19 - Dev Community

It's time for some jargon-busting.

A big part of my role is to onboard and support junior data analysts. Many have just started using SQL, have come from the world of Excel analysis, and have taught themselves the SQL basics.

Here are some of those terms and concepts that pop up during training if you need a refresher or have a junior analyst in your life who needs something to refer back to.


Alias
Begin Transaction
CTEs v Subqueries
Design
ETL
Function
Group By
Heaped Storage
Integrity
Join
Key
Lock
Massively Parallel Processing
Normalisation
OLTP v OLAP
Privileges
Query Plan
Recovery
System Tables
Truncate v Drop
Union
View
Window Function
XML
Year
Zero


Alias

When joining tables, we need to state which column from which table we want to match up, and which columns we want to return in the results. If there are columns with the same name we need to be specific about which column we want to return.

select
  orders.item, 
  inventory.item,
  inventory.unitprice
from 
  orders 
inner join 
  inventory 
on orders.order_item = inventory.inventory_item

To make it quicker to type we can alias the two tables with something shorter.

select
  o.item, 
  i.item,
  i.unitprice
from 
  orders o
inner join 
  inventory i 
on o.order_item = i.inventory_item

Instead of having to type out the whole table name each time we want a new column added, we can alias them with the letters 'o' for orders and 'i' for inventory.

Read more about JOINs and aliasing in this beginner-friendly post:


Begin Transaction

SQL transactions group statements so they succeed or fail as a single unit. By default, an UPDATE or DELETE statement is auto-committed as soon as it runs.

By wrapping the statement in a transaction we can 'roll back' or 'commit' when we are sure that it should be executed, or if a condition has been met.

The following transaction will run in a block and commit if successful.

begin transaction

update orders
set status = 'sent'
where order_id = '12345'

update orders
set status = 'sent'
where order_id = '54321'

commit transaction
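To see both the commit and the roll back paths, here is a minimal sketch using SQLite through Python's sqlite3 module; the orders table and its values are hypothetical, and other databases behave the same way with their own syntax:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table orders (order_id text, status text)")
conn.execute("insert into orders values ('12345', 'pending'), ('54321', 'pending')")
conn.commit()

# Both updates run inside one transaction: either both stick, or neither does.
try:
    conn.execute("update orders set status = 'sent' where order_id = '12345'")
    conn.execute("update orders set status = 'sent' where order_id = '54321'")
    conn.commit()          # make both changes permanent
except sqlite3.Error:
    conn.rollback()        # on any error, undo both updates

statuses = [row[0] for row in conn.execute("select status from orders")]
```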

Read more about how these work:


CTEs v Subqueries

CTEs (Common Table Expressions) are temporary, named result sets we can refer back to within the scope of a single SELECT, INSERT, UPDATE, DELETE or MERGE statement.

I use them when dealing with large tables for example: get all the columns I need from the 'emailsent' table, then get all the columns I need from the 'emailunsubscribe' table. Then in a final step join them together.

; -- start the CTE with a semicolon to terminate anything above

with sent as -- here is where you name the dataset

(select 
  emailaddress,
  emailid, 
  senddate
from
  marketing.emailsent
where
  senddate between '2018-01-01' and '2018-01-31'
),  -- add a comma if you need to add a subsequent CTE

unsubs as

(select 
  emailaddress,
  emailid, 
  senddate
from
  marketing.emailunsubscribe
where
  senddate between '2018-01-01' and '2018-01-31'
) -- no comma for the last CTE

select
  'January' as [monthdelivered],
  c.country, 
  count(distinct sent.emailaddress) as [countofdelivered], 
  count(distinct unsubs.emailaddress) as [countofunsubd]
from sent
left join 
  marketing.customers c on sent.emailaddress = c.emailaddress
left join  
  unsubs on sent.emailaddress = unsubs.emailaddress 
  and sent.emailid = unsubs.emailid
group by c.country
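Here is a cut-down, runnable version of the same pattern, using SQLite via Python's sqlite3; the tables and addresses are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table emailsent (emailaddress text, emailid int);
create table emailunsubscribe (emailaddress text, emailid int);
insert into emailsent values ('a@x.com', 1), ('b@x.com', 1), ('c@x.com', 1);
insert into emailunsubscribe values ('b@x.com', 1);
""")

# Two named result sets, then a final query that joins them.
query = """
with sent as (
    select emailaddress, emailid from emailsent
),
unsubs as (
    select emailaddress, emailid from emailunsubscribe
)
select
    count(distinct sent.emailaddress)   as countofdelivered,
    count(distinct unsubs.emailaddress) as countofunsubd
from sent
left join unsubs
    on sent.emailaddress = unsubs.emailaddress
   and sent.emailid = unsubs.emailid
"""
delivered, unsubd = conn.execute(query).fetchone()
```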

Chidiebere has written an excellent series on CTEs and how they compare with the subquery.


Design

Star and snowflake schemas are made of two types of tables.

  • Facts - record the measurements or events of a business process, such as sales.
  • Dimensions - (or Dims) describe the attributes of those events, such as store, product or date.

In the star model, we can have a sales table as our fact in the centre. Dimension tables for the store, product and location surround the fact like a star.


*Attribution: SqlPac at English Wikipedia*

The snowflake is similar but takes the dimensions one step further. Instead of just a location table, we may have a city, country and postcode table. All the dimensions become the points on the snowflake.


*Attribution: SqlPac at English Wikipedia*
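The star layout can be sketched in code. This is a minimal, hypothetical example using SQLite via Python's sqlite3: a central fact table of sales with keys out to two dimension tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- dimension tables describe attributes
create table dim_store   (store_id integer primary key, store_name text);
create table dim_product (product_id integer primary key, product_name text);

-- the fact table sits in the centre and records each sale
create table fact_sales (
    sale_id    integer primary key,
    store_id   integer references dim_store(store_id),
    product_id integer references dim_product(product_id),
    amount     real
);

insert into dim_store   values (1, 'Wellington');
insert into dim_product values (10, 'Widget');
insert into fact_sales  values (100, 1, 10, 9.99);
""")

# Reporting means joining the fact back out to its dimensions.
row = conn.execute("""
    select s.store_name, p.product_name, f.amount
    from fact_sales f
    join dim_store   s on f.store_id = s.store_id
    join dim_product p on f.product_id = p.product_id
""").fetchone()
```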

Read more about the advantages and disadvantages of both:


ETL

ETL and ELT are the steps needed to move data from a source to a destination system.

  • Extract - in the Extract step the raw data is moved from the source to a temporary or staging area.

  • Transform - the Transform step converts the data to match the destination table.

  • Load - the Load step moves the data to its final destination for analysis or reporting.
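The three steps can be sketched in a few lines. This is a toy pipeline, not a real ETL tool: the source rows, masking rule and destination table are all hypothetical, and SQLite (via Python's sqlite3) stands in for the destination:

```python
import sqlite3

# Extract: raw rows pulled from a source into a staging area.
raw_rows = [
    {"name": "Ana", "email": "ana@example.com", "internal_notes": "vip"},
    {"name": "Ben", "email": "ben@example.com", "internal_notes": ""},
]

# Transform: keep only the columns the destination needs and mask the email.
def transform(row):
    masked = row["email"].split("@")[0][0] + "***"
    return (row["name"], masked)

clean_rows = [transform(r) for r in raw_rows]

# Load: write the transformed rows to the destination table.
conn = sqlite3.connect(":memory:")
conn.execute("create table customers (name text, email_masked text)")
conn.executemany("insert into customers values (?, ?)", clean_rows)
count = conn.execute("select count(*) from customers").fetchone()[0]
```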

ETL is great for putting data into the right format, stripping out unnecessary columns, and masking fields relating to GDPR compliance.

However, ELT has become a more popular approach when used in conjunction with data lake architecture. The data arrives quickly as it does not have to be altered in any way. The Data Scientist can then use only the data needed, get results, and not deal with delays if a transformation step fails.

Consideration needs to be given to how reliable the data is in its raw form. Each Data Scientist or end user should apply the same logic and business rules during analysis to keep results consistent.

For more on ETL, ELT and data pipelines check out this post from SeattleDataGuy who writes excellent posts on all things data.


Function

In PostgreSQL, we can execute blocks of code, called functions, on demand or on a schedule using an external scheduler. They can be written like the statements we run ad hoc on the database or can be passed variables to make them dynamic.

Read more about how to write and execute functions:


Group By

Aggregate functions allow us to perform calculations across rows. The most common ones are SUM, COUNT, MIN, MAX and AVG.

For example, to see the total amount due for each item in the orders table we can use the SUM of the amount_due column and GROUP BY the order_item column:

select 
  order_item, 
  sum(amount_due) 
from orders
group by order_item;
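The same query can be run end to end with SQLite via Python's sqlite3; the sample rows are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table orders (order_item text, amount_due real)")
conn.executemany("insert into orders values (?, ?)",
                 [("apple", 2.0), ("apple", 3.0), ("banana", 1.5)])

# One row per order_item, with the total amount due for each.
totals = dict(conn.execute(
    "select order_item, sum(amount_due) from orders group by order_item"
))
```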

Read more about Aggregate functions:


Heaped Storage

Heaped storage is a term for tables that live on the database with no clustered index. The data is stored in no particular order and new data is added as it comes in.

Indexes are a way of telling the database to order the data or where to look to find the data you query often.

  • Clustered Indexes are like the contents page of a book. Applying this kind of index tells the database how the data itself should be ordered, like the pages in a book.


  • Non-clustered Indexes are like the index of a book, the pages haven't been arranged that way physically, but you now have a lookup to get to what you need faster.

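A quick way to watch an index change the plan is SQLite's EXPLAIN QUERY PLAN, run here through Python's sqlite3; the table and index names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table orders (order_id integer, status text)")
conn.executemany("insert into orders values (?, ?)",
                 [(i, "sent") for i in range(1000)])

# Without an index the only option is to scan every row.
plan_before = " ".join(row[3] for row in conn.execute(
    "explain query plan select * from orders where order_id = 42"))

# A non-clustered index gives the engine a lookup to the matching rows.
conn.execute("create index idx_orders_id on orders(order_id)")
plan_after = " ".join(row[3] for row in conn.execute(
    "explain query plan select * from orders where order_id = 42"))
```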

Read more in this beginner-friendly post:


Integrity

This refers to data quality and rules ensuring data is traceable, searchable and recoverable.

  • Entity Integrity - each table must have a unique primary key
  • Referential Integrity - each foreign key refers to a primary key on another table or is NULL
  • Domain Integrity - each column has a specific data type and length.

Read more about integrity and database design:


Join

Because our database contains tables which are normalised you may not find all the data you need on one single table. To put the data back together in a way that makes the most sense for us we use JOINs. This adds columns from multiple tables into one dataset.

Use an INNER JOIN, shortened to 'JOIN' to find the match between two tables. You need a column on both tables that you join ON, this is where the matching happens. Any results where there is not a match are discarded.


Use a LEFT JOIN when you want to find a match between two tables, but also show a NULL where there is no match from the right table. A RIGHT JOIN does the same but in reverse.

Like the INNER JOIN, you need a column to join ON. Unlike the INNER JOIN, a NULL is used to show there is no match between the two tables.

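Here are both joins side by side on two tiny, made-up tables, using SQLite via Python's sqlite3; note the None (NULL) the LEFT JOIN produces for the unmatched row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table orders    (order_item text);
create table inventory (inventory_item text, unitprice real);
insert into orders    values ('apple'), ('kiwi');
insert into inventory values ('apple', 1.0), ('pear', 2.0);
""")

# INNER JOIN: only rows with a match on both sides survive.
inner = conn.execute("""
    select o.order_item, i.unitprice
    from orders o
    join inventory i on o.order_item = i.inventory_item
    order by o.order_item
""").fetchall()

# LEFT JOIN: every order is kept; unmatched rows get NULL for the price.
left = conn.execute("""
    select o.order_item, i.unitprice
    from orders o
    left join inventory i on o.order_item = i.inventory_item
    order by o.order_item
""").fetchall()
```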

Katie has written a great post on 'Every JOIN you will ever need' using the Oracle syntax.


Key

A primary key is a column (or combination of columns) that uniquely identifies each row, like an ID.

  • It ensures that there are no duplicates
  • It cannot be unknown (NULL)
  • There can only be one primary key per table

A foreign key is a column that matches a primary key in another table and enforces integrity between the two.

To create a primary key in SQL Server add the reserved words 'primary key' after the data type of your chosen column.

create table students (
  id int not null primary key,
  firstname varchar(255) not null,
  lastname varchar(255) not null
);
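Referential integrity can be seen in action with SQLite via Python's sqlite3 (SQLite needs foreign key enforcement switched on, and the grades table here is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("pragma foreign_keys = on")  # SQLite enforces FKs only when asked
conn.executescript("""
create table students (
  id int not null primary key,
  firstname varchar(255) not null
);
create table grades (
  student_id int references students(id),  -- foreign key
  grade text
);
insert into students values (1, 'Ana');
""")

conn.execute("insert into grades values (1, 'A')")   # matches a student: fine
grade_count = conn.execute("select count(*) from grades").fetchone()[0]

# A row pointing at a student that doesn't exist violates referential integrity.
try:
    conn.execute("insert into grades values (99, 'B')")
    violation_caught = False
except sqlite3.IntegrityError:
    violation_caught = True
```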

Lenique puts this into practice with this post on relational model design.


Lock

When two users try to query or update the same table at the same time it may result in a lock. In the same way, if two people with ATM cards try to withdraw the same $100 from the same bank account, one will be locked out until the first transaction completes.

Rhymes does a great job of explaining how it works on the database:

"...database locks serve the purpose of protecting access to shared resources (tables, rows, data).

In a system where tens if not hundreds of connections operate on the same dataset, there has to be a system to avoid two connections invalidating each other's operation (or in other cases causing a deadlock) ...

Locks are a way to do that. An operation comes to the database, declares it needs a resource, finishes its modification, and then releases such resource so the next operation can do the same. If they didn't lock their resource two operations might overwrite each other's data causing disasters."


Massively Parallel Processing

In Massively Parallel Processing databases, like Redshift, data is partitioned across multiple compute nodes with each node having memory to process data locally.

Redshift distributes the rows of a table to the nodes so that the data can be processed in parallel. By selecting an appropriate distribution key for each table, the workload can be balanced.

Read more in alexantra's complete guide to Redshift:


Normalisation

Database normalisation increases data integrity and allows new data to be added without changing the underlying structure.

  • Eliminate or minimise duplication - repeating a value across multiple tables means that tables take up more space than needed which increases storage costs. Storing customers' address details on one table with keys linking to their orders will take up less space than repeating address details on each row of the order table.

  • Simplify updates to data - by keeping a value in one table with a key to another we minimise the risk of errors when updates are made. If there are two places where a customer's email is stored and only one gets updated there will be confusion over which one is correct.

  • Simplify queries - searching and sorting becomes easier if there is no duplication on tables.
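The 'update in one place' point can be sketched with SQLite via Python's sqlite3 (the tables and addresses are made up): the email lives only on customers, so one UPDATE is enough for every order to see the new value.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- contact details live in one place ...
create table customers (customer_id integer primary key, email text);
-- ... and orders carry only a key back to the customer
create table orders (order_id integer primary key, customer_id integer);
insert into customers values (1, 'old@example.com');
insert into orders values (100, 1), (101, 1);
""")

# One update in one place, and every order now sees the new address.
conn.execute("update customers set email = 'new@example.com' where customer_id = 1")
emails = conn.execute("""
    select c.email
    from orders o
    join customers c on o.customer_id = c.customer_id
""").fetchall()
```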

Read more about normalisation:


OLTP v OLAP

OLTP and OLAP refer to types of databases and tools that perform different functions.

  • OLTP - Online Transaction Processing - used for fast data processing and responds immediately to queries.
  • OLAP - Online Analytics Processing - used for storing historical data and data mining.

Read more about OLTP and OLAP:


Privileges

If you intend to share a table with colleagues who have access to your schema, you need to grant access to them. This keeps data locked down to just those who need to see it.

GRANT ALL ON <schemaname.tablename> TO <username>  
-- grants every privilege, including SELECT, INSERT, UPDATE and DELETE

GRANT SELECT ON <schemaname.tablename> TO <username> 
-- if you would like them to be able to only SELECT

Read more about permissions and everything else you need to know about databases:


Query Plan

When we run a query the SQL engine considers the joins, the available indexes, and whether it will have to scan through the whole table or contend with table locking.

In SQL Server we can use the Execution Plan to visualise runtime information and any warnings.


*From the SQL Server documentation*

In PostgreSQL we can check the query plan using the EXPLAIN command:

EXPLAIN -- show the execution plan of a statement
EXPLAIN ANALYZE -- causes the query to be executed as well as explained

Read more about what each query looks like and how to interpret the results:


Recovery

Disaster Recovery in the database world relates to the backups, logs and replication instances maintained while everything is working fine. These can then be switched on, switched over, and analysed if there is a hardware failure, natural disaster, or even human error.

  • Failover - multiple clusters are set up so if one fails the other can take over.

  • Mirroring - maintaining two copies of the same database at different locations. The mirror copy is kept offline and in sync, ready to be brought online when needed.

  • Replication - the secondary database is online and can be queried. This is not only good for Disaster Recovery but can be useful if you utilise one instance for reporting and one for live queries. If you are using AWS setting this up takes just a few clicks.


System Tables

In SQL Server these are often referred to as system tables and views. They can be found in the master database, which holds data about the database.

Examples of system views

  • sys.objects - shows each object, its type and created date
  • sys.indexes - shows each index and type
  • information_schema.columns - shows each column, its position and datatype

In PostgreSQL, a similar collection of tables can be found in the information_schema and PostgreSQL catalog.

Examples of catalog objects

  • information_schema.tables - shows each table and its type
  • pg_index - shows each index and type
  • information_schema.columns - shows each column, its position and datatype

Truncate v Drop

Both of these commands will remove the data from a table, but in different ways. The DML command DELETE is worth comparing too.

  • TRUNCATE is a DDL command which removes the contents of the table while leaving the structure in place

truncate table marketing.emailcampaign

  • DROP is a DDL command which removes the table entirely, both the data and the structure

drop table marketing.emailcampaign

  • DELETE is a DML command which removes only the rows matching a WHERE clause

delete from 
  marketing.emailcampaign
where
  month = 'January'
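SQLite (used here via Python's sqlite3) has no TRUNCATE, but the contrast between removing rows and removing the whole table is easy to demonstrate with DELETE and DROP; the table and rows are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table emailcampaign (month text)")
conn.executemany("insert into emailcampaign values (?)",
                 [("January",), ("February",)])

# DELETE removes matching rows; the table itself stays in place.
conn.execute("delete from emailcampaign where month = 'January'")
remaining = conn.execute("select count(*) from emailcampaign").fetchone()[0]

# DROP removes the whole table; querying it afterwards is an error.
conn.execute("drop table emailcampaign")
try:
    conn.execute("select count(*) from emailcampaign")
    table_exists = True
except sqlite3.OperationalError:
    table_exists = False
```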

For a breakdown of which commands fall into DDL and which are DML commands check out this post.


Union

While a JOIN combines columns horizontally, a UNION combines the results vertically. Using a UNION stacks the results of two queries into one result set and removes duplicate rows. If your queries return multiple columns, they need to return the same number of columns, in the same order, with compatible types, to complete the UNION.


select *
from
  orders
union
select *
from
  inventory

UNION ALL combines the results of two queries in the same way as UNION, but keeps the duplicates in the result.


select *
from
  orders
union all
select *
from
  inventory
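Both behaviours are easy to verify with SQLite via Python's sqlite3, using two tiny, made-up tables sharing one overlapping value:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table orders    (item text);
create table inventory (item text);
insert into orders    values ('apple'), ('pear');
insert into inventory values ('apple'), ('kiwi');
""")

# UNION removes the duplicate 'apple'; UNION ALL keeps both copies.
union_rows = conn.execute(
    "select item from orders union select item from inventory").fetchall()
union_all_rows = conn.execute(
    "select item from orders union all select item from inventory").fetchall()
```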

View

Views are not tables; they are queries that are executed on the fly, used as a way to create a level of abstraction over the base table.

Joe sums it up perfectly in his post

A view is a stored query. When you create a database view, the database stores the SQL you gave it. Then, when you come along and query that view, the database takes the stored view query, adds in the extras from the query against the view, and executes it. That's it!
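The 'stored query' behaviour is easy to demonstrate with SQLite via Python's sqlite3 (the table and view names are hypothetical): a row added after the view is created still shows up, because the view re-runs its query each time.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table orders (order_id integer, status text);
insert into orders values (1, 'sent'), (2, 'pending');

-- the view stores the query, not the data
create view sent_orders as
select order_id from orders where status = 'sent';
""")

sent = conn.execute("select * from sent_orders").fetchall()

# New rows in the base table appear in the view on the next query.
conn.execute("insert into orders values (3, 'sent')")
sent_after = conn.execute("select * from sent_orders").fetchall()
```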


Window Function

A window function performs a calculation across a set of rows related to the current row. Unlike an aggregate function used with GROUP BY, it keeps each row intact while adding a value such as a rank, row number or running total.

Here is an example using the orders table that returns a rank using the amount_due column.

select
  order_id,
  order_name,
  order_date,
  rank() over(order by amount_due desc) as [rank]
from 
  dbo.orders
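Here is a runnable version using SQLite via Python's sqlite3 (SQLite has supported window functions since version 3.25; the rows are made up). Note that all three rows survive, each with its rank:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table orders (order_id integer, amount_due real)")
conn.executemany("insert into orders values (?, ?)",
                 [(1, 50.0), (2, 200.0), (3, 120.0)])

# Every row survives; rank() just adds a column ordered by amount_due.
rows = conn.execute("""
    select order_id,
           amount_due,
           rank() over (order by amount_due desc) as rnk
    from orders
""").fetchall()

ranks = {order_id: rnk for order_id, amount_due, rnk in rows}
```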

XML

We can import files into tables using the import/export wizard, but they don't have to be just csv or txt files. With a few lines of code we can import XML as well.

Read more with instructions:


Year

The syntax used to calculate the difference between two dates varies between databases. This adds complexity when moving between databases so keep this in mind on your next migration project.
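For example, here is the same one-year difference in a few dialects. Only the SQLite line actually runs in this sketch (via Python's sqlite3); the others are shown as comments for comparison:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# SQLite: difference in days via julianday()
days = conn.execute(
    "select julianday('2019-02-06') - julianday('2018-02-06')").fetchone()[0]

# The same question elsewhere (not runnable here, shown for comparison):
#   SQL Server:  select datediff(day, '2018-02-06', '2019-02-06')
#   PostgreSQL:  select date '2019-02-06' - date '2018-02-06'
#   MySQL:       select datediff('2019-02-06', '2018-02-06')
```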


Zero

NULL means that the value is unknown: not zero and not an empty string. Because NULL is unknown, one NULL never equals another, which makes comparisons tricky.

Because NULL is not a value we can't use the usual comparison operators. Instead, we need to use the IS NULL and IS NOT NULL operators:

select * 
from 
  inventory
where 
  unitprice is not null
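The difference between = and IS is easy to demonstrate with SQLite via Python's sqlite3; the inventory rows are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table inventory (itemname text, unitprice real)")
conn.executemany("insert into inventory values (?, ?)",
                 [("apple", 1.0), ("mystery", None)])

# '= null' never matches: NULL is unknown, so the comparison is never true.
equals_null = conn.execute(
    "select * from inventory where unitprice = null").fetchall()

# 'is null' is the correct test.
is_null = conn.execute(
    "select * from inventory where unitprice is null").fetchall()
```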

Depending on the database, NULLs sort either before or after every real value, making sorting at best annoying and at worst misleading. To get around this we can use COALESCE to treat NULLs as a 0.

select 
  itemname, 
  coalesce(unitprice, 0)
from 
  inventory
order by 2

There we have it, a quick introduction to the key terms, concepts and jargon for those new to SQL and databases.
