Set Up
Steps to get your Mantle environment set up for the first time.
AWS
In order for Mantle to organize your data and run pipelines within your AWS account, you will need to set up a few things.
Prerequisites
- An AWS account
- S3 buckets storing your raw data
1. S3 Buckets
Determine which S3 buckets you want to use for your Mantle environment. You will need to specify these buckets within the IAM policy you create for Mantle.
Mantle uploads all output files to a single S3 bucket. We recommend creating a new bucket for Mantle to use. This bucket should be in the same region as the Batch environment you create. Please grant both read and write permissions to Mantle for this bucket (see below).
If you want to use a bucket that you already have, you will need to specify a prefix for Mantle to use. This will help keep Mantle’s files organized within your bucket.
Provide the bucket name and prefix to Mantle to set up your account.
Bucket CORS Policy
For the Mantle write bucket, add the following CORS policy to allow Mantle to upload files to the bucket:
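Below is a minimal sketch of a write-enabled CORS configuration; the wildcard AllowedOrigins value is an assumption and should be restricted to the origin your Mantle workspace is served from:

```json
[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["GET", "HEAD", "PUT", "POST"],
        "AllowedOrigins": ["*"],
        "ExposeHeaders": ["ETag"]
    }
]
```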
For any read buckets you want to use with Mantle, add the following CORS policy to allow Mantle to read files from the bucket:
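For read-only buckets, a sketch that permits only GET and HEAD (again, the wildcard origin is a placeholder to tighten):

```json
[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["GET", "HEAD"],
        "AllowedOrigins": ["*"],
        "ExposeHeaders": ["ETag"]
    }
]
```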
2. Create IAM Users
Mantle User
- Go to the IAM Console and navigate to “Policies” in the left-hand menu.
- Click “Create Policy” and select the “JSON” tab.
- Use the following JSON:
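A sketch of such a policy is below; the bucket ARNs use placeholder names (your-input-bucket, your-mantle-output-bucket) that you should replace with the buckets from step 1, and the Batch action list is an assumption covering what Nextflow typically needs to submit, track, and terminate jobs:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadInputBuckets",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::your-input-bucket",
                "arn:aws:s3:::your-input-bucket/*"
            ]
        },
        {
            "Sid": "ReadWriteOutputBucket",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::your-mantle-output-bucket",
                "arn:aws:s3:::your-mantle-output-bucket/*"
            ]
        },
        {
            "Sid": "BatchAccess",
            "Effect": "Allow",
            "Action": [
                "batch:SubmitJob",
                "batch:DescribeJobs",
                "batch:TerminateJob",
                "batch:RegisterJobDefinition",
                "batch:DescribeJobDefinitions",
                "batch:DescribeJobQueues",
                "batch:DescribeComputeEnvironments",
                "batch:ListJobs"
            ],
            "Resource": ["*"]
        }
    ]
}
```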
This policy allows the IAM user to read from the S3 buckets you specify and write to the Mantle output bucket. It also allows the user to submit Batch jobs. Nextflow requires Batch actions to be performed on "Resource": [ "*" ].
If your pipeline relies on Docker images stored in ECR, you will need to provide Mantle with access to the images. This can be done by adding the following snippet to your IAM policy. Further instructions for how to use containers in your Nextflow pipeline can be found here.
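As a sketch, the statement below grants image pulls; ecr:GetAuthorizationToken must stay on "Resource": "*" (it cannot be scoped to a repository), while the three image actions can optionally be narrowed to your repository ARNs:

```json
{
    "Sid": "PullPipelineImages",
    "Effect": "Allow",
    "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
    ],
    "Resource": ["*"]
}
```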
Note: We are testing different configurations to reduce the number of permissions we need. It may be sufficient in the future to ensure your AWS Batch role has these permissions rather than giving them to the Mantle user — we will update the documentation.
- Click “Review Policy” and give it a name, such as “MantlePolicy”.
- Navigate to “Users” in the left-hand menu and click “Add user”.
- Give the user a name, such as “MantleUser”, and select “Programmatic access”.
- Click “Next: Permissions” and attach the policy you just created.
- Click “Next: Tags” and “Next: Review”.
- Click “Create user” and save the access key and secret key in a secure location.
If you navigate away before saving, you will need to create a new access key.
Provide the access key and secret key to Mantle to set up your account.
Batch Role
To use EBS auto-scaling, you will need to create a role granting the appropriate access.
- Navigate to IAM Console and click Policies in the left side bar
- Click “Create Policy”
- Select JSON
- Paste the following in the “Policy editor” (see the example policy after this list):
- Name the policy amazon-ebs-autoscale-policy-nextflow
- Click “Create Policy”
- Attach this new policy in the next step.
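As a sketch, the policy referenced above grants the EC2 volume actions that amazon-ebs-autoscale needs to create, attach, tag, and delete scratch volumes; this mirrors what the upstream awslabs/amazon-ebs-autoscale project documents, and the action list may change between releases:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EbsAutoscaleVolumeManagement",
            "Effect": "Allow",
            "Action": [
                "ec2:AttachVolume",
                "ec2:CreateVolume",
                "ec2:CreateTags",
                "ec2:DeleteVolume",
                "ec2:DescribeTags",
                "ec2:DescribeVolumeAttribute",
                "ec2:DescribeVolumeStatus",
                "ec2:DescribeVolumes",
                "ec2:ModifyInstanceAttribute"
            ],
            "Resource": "*"
        }
    ]
}
```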
Each job within the Batch queue has its access scoped to an IAM role. To set up a role with the access the queue requires, follow the steps below:
- Go to the IAM Console and navigate to “Roles” in the left-hand menu.
- Click “Create role”.
- Select “AWS service” and “EC2” as the service that will use this role.
- Click “Next”.
- Attach the following policies:
- AmazonEC2ContainerServiceforEC2Role
- AmazonS3FullAccess (see below for a more restrictive policy, if desired)
- amazon-ebs-autoscale-policy-nextflow
- Click “Next” and give the role a name, such as “MantleBatchRole”.
- Click “Create role”.
For more restrictive S3 access:
- Click on the role you just created.
- Click “Add inline policy”.
- Click “JSON” and use the following policy:
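A sketch of such an inline policy; the bucket names are placeholders, and the intent is to scope the Batch instances to read from your input buckets and read/write the Mantle output bucket:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadInputBuckets",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::your-input-bucket",
                "arn:aws:s3:::your-input-bucket/*"
            ]
        },
        {
            "Sid": "ReadWriteOutputBucket",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::your-mantle-output-bucket",
                "arn:aws:s3:::your-mantle-output-bucket/*"
            ]
        }
    ]
}
```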
3. Launch Template
In order for Mantle to run Nextflow pipelines on AWS Batch, the instances that run your jobs need the following software installed (set up via the launch template described below):
- aws cli (installed through miniconda)
- docker
We recommend only installing these two packages on the AMI to keep it lightweight and reduce the time it takes to launch an instance. All other dependencies should be installed within the Docker container that runs the pipeline.
Bioinformatics pipelines can process large amounts of data. Because of this, we recommend using “EBS autoscaling”, which monitors a mount point on an instance and dynamically increases the available storage based on predefined capacity thresholds. Setting this up involves installing a few lightweight dependencies and a simple daemon on the host instance. More information on ebs-autoscaling can be found here.
Create a launch template with autoscaling EBS
Here are the instructions for configuring an EC2 instance with EBS autoscaling:
- Go to the EC2 Console
- Click on “Launch Templates” (under “Instances”)
- Click on “Create launch template”
- Under “Launch template name and description”, name it something meaningful.
- For “Application and OS Images (Amazon Machine Image)”, type “Amazon ECS-Optimized Amazon Linux 2 AMI” in the search bar and select the latest version.
- Leave “Instance Type” as “Don’t include in launch template”
- Leave “Key pair (login)” as “Don’t include in launch template”
- Leave “Network settings” as “Don’t include in launch template”
- Under “Storage volumes”, click “Add new volume”. This will add an entry called “Volume 3 (custom)”. Configure it as follows:
  - Set Size to 100 GiB
  - Set Delete on termination to Yes
  - Set Device name to /dev/xvdba
  - Set Volume type to General purpose SSD (gp2)
  - Set Encrypted to Yes
The different volumes are described below:
- Volume 1 is used for the root filesystem. The default size of 8GB is typically sufficient.
- Volume 2 (the one you created above) will be used for job scratch space. This will be mapped to /var/lib/docker which is used for container storage - i.e. what each running container will use to create its internal filesystem.
- Expand the “Advanced details” section and add the following script to “User data”, leaving the rest of the fields as default:
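Below is a sketch of such a user-data script. It assumes the MIME multipart format that AWS Batch launch templates expect, a miniconda install prefix of /miniconda (matching the path referenced below), and the install flags of a recent awslabs/amazon-ebs-autoscale release; check that project’s README for the exact syntax of the version you install:

```sh
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==BOUNDARY=="

--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
set -e

# Dependencies used by amazon-ebs-autoscale (btrfs is its default filesystem)
yum install -y git unzip btrfs-progs wget

# Install the aws cli via miniconda so it is self-contained;
# it ends up at /miniconda/bin/aws
wget -q https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O /tmp/miniconda.sh
bash /tmp/miniconda.sh -b -f -p /miniconda
/miniconda/bin/conda install -y -c conda-forge awscli

# Install amazon-ebs-autoscale to watch /var/lib/docker, backed by the
# /dev/xvdba volume defined in this launch template.
# NOTE: flag names vary between releases; consult the project README.
git clone https://github.com/awslabs/amazon-ebs-autoscale.git /opt/amazon-ebs-autoscale
sh /opt/amazon-ebs-autoscale/install.sh -m /var/lib/docker -d /dev/xvdba

--==BOUNDARY==--
```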
This script installs the aws cli and ebs-autoscale on the instance.
The ebs-autoscale daemon will monitor the /var/lib/docker mount point and dynamically increase the available storage based on predefined capacity thresholds.
This installs the aws-cli to /miniconda/bin/aws (see instructions on how to incorporate this into your pipeline here).
4. Batch Queue
Mantle runs Nextflow pipelines on AWS Batch. You will need to create a queue for Mantle to use.
Compute Environment
- Navigate to the AWS Batch Console.
- Click “Create compute environment”.
- Select “Amazon Elastic Compute Cloud (EC2)” for the compute environment type.
- Give the environment a name and select “Managed” for the environment type.
- Select an “Instance role” (the role you created earlier).
- Click “Next”.
- Provide a min, desired, and max vCPUs for the environment.
  a. We recommend starting with a min of 0 to save costs when the environment is not in use.
  b. Desired vCPUs can be set to 0, since the environment will be managed by AWS Batch.
  c. Set the max vCPUs to the maximum number of vCPUs you want to use at any given time. (If this is higher than your account’s limits, you may run into errors when trying to scale up.)
- Select the allowed instance types (we recommend using the “optimal” option).
- Under “Instance Configuration” -> “Additional Configuration”, select the launch template you created.
- Click “Next”, review your VPC and security group settings, and click “Next”.
- Review your settings and click “Create compute environment”.
Job Queue
- Navigate to the AWS Batch Console.
- Click “Job queues” in the left-hand menu.
- Click “Create”.
- Select “Amazon Elastic Compute Cloud (EC2)” for the orchestration type.
- Give the queue a name and select the compute environment you just created.
- Set the priority and click “Create job queue”.