AWS has become the clear market leader among cloud platforms, providing comprehensive, easy-to-use services across the IaaS, PaaS, and SaaS models. As with any popular platform, it faces security threats of its own. Sometimes developers mistakenly leak credentials by hardcoding them in their codebase, which can impact the whole organization. Here we will discuss common vulnerability classes on the AWS platform. Some of them are listed below:
- S3 Bucket
- Lambda
- SSRF
- IAM Issues & Security Groups
- Others
One thing to note: reconnaissance and OSINT are the key to pentesting cloud services and applications. When attacking apps and servers, collect as much information and as many resources as you can. With the cloud, post-exploitation has practically no limits.
S3 Bucket
Amazon Web Services (AWS) provides a service called Simple Storage Service (S3) which exposes a storage container interface. The storage container is called a “bucket” and the files inside the bucket are called “objects”. S3 provides unlimited storage for each bucket, and owners can use buckets to serve files. Files can be served either privately (via signed URLs) or publicly via an appropriately configured ACL (Access Control List) or ACP (Access Control Policy).
S3 directory traversal
S3 bucket permissions are secure by default, meaning that upon creation, only the bucket and object owners have access to the resources on the S3 server, as explained in the S3 FAQ. In practice, though, developers make plenty of mistakes and misconfigurations; let's dig into them in more detail.
Attacker’s view
To enumerate buckets, first work out likely bucket names. I recommend trying combinations of the target's domains and subdomains, even with top-level domains. To determine whether a bucket exists, navigate to the predefined S3 URL (default format: https://bucketname.s3.amazonaws.com) and check the response code.
The DNS entry of the domain might reveal the bucket name directly if the host points straight to S3.
You can also use the AWS CLI to list the contents of a bucket:
aws s3 ls s3://bucketname --region $region
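The name combinations above can be generated with a small shell loop; the suffix list here is a hypothetical example, and real wordlists used for bucket enumeration are much larger:

```shell
# Build candidate S3 bucket URLs for a target name (hypothetical suffixes)
target="example"
for suffix in "" "-assets" "-backup" "-dev" "-prod" "-static"; do
  echo "https://${target}${suffix}.s3.amazonaws.com"
done
# Requesting each URL tells you the bucket's state:
#   404 "NoSuchBucket" -> the name is unclaimed
#   403 "AccessDenied" -> the bucket exists but is private
#   200               -> the bucket is listable
```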
Defender’s view
- Avoid passing user-supplied input to the file system and APIs altogether
- Perform strict input validation on all parameters
Subdomain takeover on s3
Each S3 bucket maps to a specific domain or subdomain. When a bucket is no longer in use, the developer/user might delete it from the Amazon account but forget to remove the DNS entry pointing to that subdomain. This can escalate to a subdomain takeover, because Amazon allows a non-existent bucket name to be claimed again from any other account.
Suppose an S3 bucket is created at https://momobhai.s3.amazonaws.com and bound to a subdomain belonging to the organization in order to obfuscate the AWS S3 URL. Later the bucket is deleted from S3, but the CNAME record on the DNS platform (Route 53, Cloudflare) is not removed. An attacker can then create an S3 bucket with the same name, and the malicious contents of that bucket will be served on the victim's domain (e.g., storage.example.net). This is how an S3 subdomain takeover happens.
Attacker’s view
dig cname <subdomain>
- Visit the CNAME URL returned by the dig command
- If you notice the message “The specified bucket does not exist”, this particular S3 bucket is vulnerable:
<Error>
<Code>NoSuchBucket</Code>
<Message>The specified bucket does not exist</Message>
<BucketName>momobhai.cloud</BucketName>
<RequestId>ASJKADUSDSADASDSA</RequestId>
<HostId>sdausdhausdakslcnxuc=</HostId>
</Error>
- If you notice the message “Access Denied”, this particular S3 bucket is not vulnerable.
You can use a single command to check for an S3 subdomain takeover:
http -b GET http://{SOURCE DOMAIN NAME} | grep -E -q '<Code>NoSuchBucket</Code>|<li>Code: NoSuchBucket</li>' && echo "Subdomain takeover may be possible" || echo "Subdomain takeover is not possible"
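The same check can be wrapped in a small shell function that classifies the response body; the function name and messages here are illustrative, and in practice the body would come from `curl` or `http` against the CNAME target:

```shell
# Classify an S3 error body read from stdin (illustrative sketch)
classify_s3_response() {
  body=$(cat)
  if printf '%s' "$body" | grep -q 'NoSuchBucket'; then
    echo "takeover may be possible"        # bucket name is unclaimed
  elif printf '%s' "$body" | grep -qi 'AccessDenied\|Access Denied'; then
    echo "bucket exists, not vulnerable"   # bucket is claimed, just private
  else
    echo "inconclusive"
  fi
}

# Example: feed it the error XML shown above
printf '<Code>NoSuchBucket</Code>' | classify_s3_response   # prints "takeover may be possible"
```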
If it’s considered vulnerable, to claim the bucket:
- Go to S3 panel
- Click Create Bucket
- Set Bucket name to source domain name (i.e., the domain you want to take over)
- Open the created bucket
- Upload a file named index.html to the bucket, and under the Permissions tab select Grant public read access to this object(s)
- Click Static website hosting
- Select Use this bucket to host a website
Defender’s view
- Remove the dangling CNAME
- Always create in this order: S3 -> CloudFront -> DNS
- Always delete in this order: DNS -> CloudFront -> S3
- If you are experimenting, don't forget to clean up!
Insecure S3 upload policy & Directory traversal
Attacker’s view
- Check whether the S3 bucket is accessible to unauthenticated or public users.
- Identify who owns the S3 bucket.
- Check what kind of domain is being used to serve the objects from the bucket.
- Analyze the contents of files inside the bucket. Some S3 buckets are used for static hosting and might include credentials in JavaScript files.
- Try to find global DELETE or GET permissions with write/upload enabled via the bucket policy; this could let an attacker upload a custom JavaScript library and serve malicious JavaScript (e.g., a BeEF hook) to clients.
aws s3 ls s3://bucketname
aws s3 rm s3://bucketname/objectname --no-sign-request # try to remove a publicly exposed file
aws s3 cp filename s3://bucketname/ --no-sign-request # try to upload a file to the bucket
aws s3 mv filename s3://bucketname/ --no-sign-request # try to move a file into the bucket; if a move works, it can also delete the source file
- Check whether the bucket reveals any ACP/ACL policy.
- An S3 bucket with an upload-enabled policy can be vulnerable to directory traversal and bucket takeover, and simply uploading an XSS script to the bucket can lead to XSS as well. You can try S3Exploits, a script that automates finding misconfigured AWS S3 buckets that expose many vulnerabilities (XSS, phishing, site defacement, and more).
Defender’s view
- Developers must always define a base path when creating a POST upload policy.
- If you want to store publicly accessible content in a bucket, use AWS CloudFront: keep the bucket private and use it as the origin of a CloudFront distribution. With this configuration, only CloudFront is allowed to fetch content from the bucket.
- You can use Access Analyzer to review buckets, their policies, ACLs, and access points that grant public access. Access Analyzer for S3 alerts you to buckets that are configured to allow access to anyone on the internet. Note: by default, all AWS S3 resources (buckets, objects, and related subresources) are private.
- Try using an AWS S3 virus scan on your infrastructure too.
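A private-bucket-behind-CloudFront setup can be expressed as a bucket policy that only allows the CloudFront service principal. This is a sketch using the origin access control (OAC) style policy; the bucket name, account ID, and distribution ID are all placeholders:

```shell
# Attach a CloudFront-only read policy to a private bucket (illustrative values)
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "cloudfront.amazonaws.com" },
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-private-bucket/*",
    "Condition": {
      "StringEquals": {
        "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
      }
    }
  }]
}
EOF
aws s3api put-bucket-policy --bucket my-private-bucket --policy file://policy.json
```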
Lambda
Serverless is a cloud computing execution model that automatically scales resources according to demand and simplifies building and deploying applications. The most common flaws we have seen so far are injection flaws, which occur when untrusted input is passed directly to an interpreter and eventually executed. This can lead to OS command injection, file system access, privilege escalation, and other attacks, including function runtime code injection in Node.js, Python, Java, and Go.
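As a toy illustration of the injection pattern (not AWS-specific), compare splicing untrusted input into a shell command string with passing it as a literal argument; the filename is hypothetical:

```shell
# Untrusted input containing a command separator
user_input='report.txt; id'

# VULNERABLE pattern: the input is spliced into a command string,
# so `sh -c "cat $user_input"` would run `cat report.txt` AND `id`.

# Safer pattern: pass the input as a single literal argument;
# here the semicolon is just data and is never interpreted by the shell.
printf 'requested file: %s\n' "$user_input"   # prints "requested file: report.txt; id"
```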
Here is the free Lambda vulnerable lab.
Defender’s view
- Identify trusted sources and whitelist them
- Always run functions with least privileges to perform the task
SSRF
If an attacker can make arbitrary HTTP/HTTPS requests from the server and read the responses, it is considered an SSRF (Server-Side Request Forgery) vulnerability. On AWS, SSRF can reach the EC2 metadata service with the instance's role, which might include private keys too. An attacker could then impersonate the role attached to the machine using the temporary credentials and do additional discovery or damage. It is typically performed via a GET-based approach, where the response for a user-supplied URL is fetched via an HTTP GET request through an endpoint of the EC2 instance. There is no authentication on the Instance Metadata endpoint, which allows a simple GET request with no additional/custom headers to retrieve information.
curl http://169.254.169.254/latest/meta-data/
To determine whether the EC2 instance has an IAM role associated with it, query the iam path. A 404 response indicates there is no IAM role associated. You may also get an empty 200 response, which indicates that there was an IAM role but it has since been revoked.
curl http://169.254.169.254/latest/meta-data/iam/info
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
Make sure you have the AWS CLI installed and check whether those credentials are active. Export the necessary credentials into your environment:
aws sts get-caller-identity
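The security-credentials response is JSON; a minimal sketch of exporting it into the environment (with a fabricated sample payload, and `sed` standing in for a proper JSON parser such as `jq`) might look like:

```shell
# Sample (fake) response from .../iam/security-credentials/<role-name>
creds='{"AccessKeyId":"ASIAFAKEKEYID","SecretAccessKey":"fakeSecret","Token":"fakeToken"}'

# Crude field extraction; `jq -r .AccessKeyId` etc. is preferable when available
json_field() { printf '%s' "$creds" | sed -n "s/.*\"$1\":\"\([^\"]*\)\".*/\1/p"; }

export AWS_ACCESS_KEY_ID=$(json_field AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(json_field SecretAccessKey)
export AWS_SESSION_TOKEN=$(json_field Token)
```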
Based on the credentials, attackers can leverage whatever services they wish within the target organization.
Defender’s view
- Enforce Instance Metadata Service version 2 (IMDSv2), which requires a session token to obtain information about the instance, via the console or the AWS CLI:
aws ec2 run-instances --image-id <ami-id> \
    --metadata-options "HttpEndpoint=enabled,HttpTokens=required"
For an existing instance:
aws ec2 modify-instance-metadata-options \
    --instance-id <instance-id> \
    --http-tokens required \
    --http-endpoint enabled
- Always use a least-privilege user. It is good practice to create a non-privileged user once the instance spins up and run the application as that user.
- Restrict traffic to the instance via security groups; this stops unauthorized users from making requests to the instance.
IAM Issues & Security Groups
Identity and Access Management (IAM) allows you to assign AWS permissions to people, controlling what they can do with AWS resources. Security groups, by contrast, are a network feature, like a firewall, that controls what inbound (incoming) or outbound (outgoing) traffic can reach a specific service (instance).
If no security group is assigned to an instance, the instance automatically gets the default security group of the VPC, which can allow overly broad access to all the associated resources. If this happens on RDS, an attacker with credentials can gain public access to the database. Any security group rule with the source 0.0.0.0/0 allows unrestricted, public access.
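The 0.0.0.0/0 check is easy to script when auditing exported security-group rules; this helper simply classifies a source CIDR string (the function name is made up):

```shell
# Classify a security-group source CIDR (sketch)
is_world_open() {
  case "$1" in
    0.0.0.0/0|::/0) echo "open to the world" ;;
    *)              echo "restricted" ;;
  esac
}

is_world_open "0.0.0.0/0"     # prints "open to the world"
is_world_open "10.0.0.0/16"   # prints "restricted"
```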
I highly recommend Pacu for exploiting configuration flaws in IAM and the wider AWS infrastructure.
Note: IAM exploitation requires credentials in most cases.
Defender’s view
- Many organizations don't even enable multi-factor authentication for privileged accounts. Create separate AWS accounts for different environments, and use a single AWS Organization to enforce policies across the sub-environments.
- Always grant developers only the permissions necessary for the task.
- Decouple AWS credentials from the codebase and never hardcode them; set up alert notifications to notify you if a credential goes public.
- Actively monitor the IAM configuration so you know whether an access key is active.
- Write strict custom rules to enforce policy; for example, require users to change their AWS password every 45 days.
- [More](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html)
- Remove unused accounts and permissions that are not required in user groups.
Others
Snapshots exposed to the public
Snapshots are an effective way to replicate data and applications, but there is a chance a snapshot ends up exposed to the public. A developer may want to give access to certain users, and rather than defining fine-grained permissions, it is easier and faster to share broadly, so the snapshot ends up public.
Attacker’s view
- An unauthorized user can enumerate the AWS infrastructure, find these snapshots, restore them to their own RDS instance, simply reset the database credentials, and gain full access to the snapshot's contents.
Defender’s view
- Limit snapshot access to particular AWS users. View the shared status of each EBS snapshot in the AWS Management Console, and choose the Public Snapshots filter to see those that are public.
Conclusion
You have successfully reached the end and learned more about AWS security practices. Hopefully, you have learned something new. If you have any thoughts on this, please feel free to reach out to me on Twitter.
One last thing: to never miss updates about blog posts, you can subscribe to my SRE/DevOps newsletter.