Sometimes I do contract work helping companies with their transitions to AWS. It is great experience and a wonderful way to level up in services that you may not typically encounter in your "real" job.
This story of errors recently occurred on one such contract. The client is in a highly regulated industry and required me to use an Amazon WorkSpaces virtual desktop that the company had provisioned for me.
I installed the AWS CLI on my workstation, authenticated, and ran the first command I always run on a new install or CLI login: aws s3 ls. It's short and usually confirms that I'm successfully logged in (and it's easier to type than aws sts get-caller-identity).
$ aws s3 ls
Access Denied
$
I begrudgingly ran aws sts get-caller-identity, which showed that I was, in fact, authenticated and assuming the appropriate role. So I checked the AWS web console, where I could see the buckets. Strange.
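For anyone who hasn't run it, aws sts get-caller-identity returns something like the following; the account ID, role name, and session name below are placeholders:
$ aws sts get-caller-identity
{
    "UserId": "AROAEXAMPLEID:contractor-session",
    "Account": "111111111111",
    "Arn": "arn:aws:sts::111111111111:assumed-role/ContractorRole/contractor-session"
}
$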
I then added the region flag to the S3 command:
$ aws s3 ls --region us-east-1
Access Denied
$ aws s3 ls --region us-east-2
bucket1 (in us-east-1)
bucket2 (in us-east-1)
bucket3 (in us-east-1)
etc...
$
Odd. What was going on? I looked at the role in IAM and confirmed that it had the s3:ListAllMyBuckets permission and that there was no SCP preventing the action in us-east-1. But wait!
There was an inline policy intended to allow requests only from known company IPs:
{
    "Sid": "RestrictIPs",
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": {
        "NotIpAddress": {
            "aws:SourceIp": [
                "workspaces_public_range",
                "other_corp_ranges"
            ]
        }
    }
}
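If you want to pull an inline policy like this yourself, the IAM CLI will hand it back; the role and policy names here are hypothetical:
$ aws iam list-role-policies --role-name ContractorRole
$ aws iam get-role-policy --role-name ContractorRole --policy-name RestrictIPs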
I checked my IP from the WorkSpace and it was within the allowed range. I checked my IP from the CLI (roughly as shown below) and it was the same... What on Earth was going on here?
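A quick way to see your public egress IP from a shell is AWS's own IP echo service:
$ curl https://checkip.amazonaws.com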
I reached out to the client's full-time infrastructure people, but they didn't have the issue. Even when they assumed my role, they were unable to reproduce it. We concluded that there appeared to be nothing that should block my request.
Finally, I went to the CloudWatch logs and looked at some of my denied requests.
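Assuming a CloudTrail trail is what's feeding those CloudWatch logs, you can also pull the same events straight from CloudTrail's event history in the CLI; sourceIPAddress and errorCode are right in the event JSON, and ListBuckets is the API call behind aws s3 ls:
$ aws cloudtrail lookup-events \
    --region us-east-1 \
    --lookup-attributes AttributeKey=EventName,AttributeValue=ListBuckets \
    --max-results 10 \
    --query 'Events[].CloudTrailEvent'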
The requests to --region us-east-2 came from the expected IP address and succeeded. But the requests to --region us-east-1 were coming from an IP that was not the one I expected, and not one that was in the allowed range! Well, that's why they were failing, but why were they coming from a different IP?
Oh! That unfamiliar IP looked like a private IP address! Why were S3 calls to us-east-1 using a private IP? I checked the WorkSpaces VPC in the WorkSpaces account and THERE IT WAS! An S3 Gateway VPC Endpoint in us-east-1 that keeps S3 traffic in that region internal to save on data egress costs!
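You can spot a gateway endpoint like that from the CLI too; in the WorkSpaces account it would be something along these lines:
$ aws ec2 describe-vpc-endpoints \
    --region us-east-1 \
    --filters Name=service-name,Values=com.amazonaws.us-east-1.s3 \
    --query 'VpcEndpoints[].[VpcEndpointId,VpcId,VpcEndpointType]'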
I recreated these conditions in my own account and verified that the block was caused by the combination of the aws:SourceIp rule and the S3 Gateway endpoint. Removing either one resulted in a successful request. Per the documentation:
You can't use the aws:SourceIp condition in an identity policy or a bucket policy for requests to Amazon S3 that traverse a VPC endpoint. Instead, use the aws:VpcSourceIp condition.
Note: You can also use the aws:SourceVpc condition.
Modifying the policy to allow requests from the VPC looks like this:
{
    "Sid": "RestrictIPs",
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": {
        "NotIpAddress": {
            "aws:SourceIp": [
                "workspaces_public_range",
                "other_corp_ranges"
            ]
        },
        "StringNotEqualsIfExists": {
            "aws:SourceVpc": "vpc-id"
        }
    }
}
And that addition resolved the issue.
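Once the updated policy was in place, the original command worked from the WorkSpace, roughly like this (bucket names are placeholders):
$ aws s3 ls --region us-east-1
bucket1
bucket2
bucket3
$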
I shared my findings and the solution with the infrastructure team (along with the documentation outlining the need to allow the VPC), and in short order I was back in business!
TL;DR: If you have roles that Deny based on aws:SourceIp conditions, be aware that requests made from AWS compute (in my case, an Amazon WorkSpace connecting to S3 in the same region through an S3 Gateway VPC Endpoint) may not come from the public IP you expect and can be blocked.