A Five Minute Overview of AWS Transfer for SFTP
If you would prefer to listen to this article, click this link to hear it using Amazon Polly. It will also be available in iTunes: search for LabR Learning Resources.
Recently I was listening to a discussion involving two applications which needed to share files, and the only method these two entities could agree on was SFTP.
Typically, organizations wanting to provide an SFTP service have had to stand up a server to host it, provide an authentication mechanism, and operate the SFTP software itself. Security teams are concerned with ensuring the server doesn’t give users access to resources they are not approved for.
If the organization wanted to provide the service for both internal and external connections, it was necessary to provide infrastructure for both, monitor capacity, and maintain the operating system, security, and application patches.
AWS Transfer for SFTP is a managed service from Amazon Web Services which alleviates the problems with providing an SFTP service.
What is AWS Transfer for SFTP?
AWS now offers a managed solution for enterprises needing SFTP services. Instead of operating an EC2 instance configured to accept SFTP connections, AWS Transfer for SFTP accepts SFTP transfers and stores the files in an S3 bucket for incoming file transfers, and retrieves files from an S3 bucket for outgoing transfers.
The service is persistent and highly available, so there is no requirement for an organization to operate EC2 instances for SFTP, manage those instances, or build a scalable and available infrastructure for the service. AWS Transfer for SFTP provides all of these benefits.
AWS Transfer uses the user’s public SSH key for authentication, or authentication can be configured using a custom authentication provider.
Setting up AWS Transfer for SFTP
Setting up the service is as simple as going to the AWS Console, enabling the service, associating your SFTP hostname (or using a service-provided hostname), configuring the IAM roles, associating an identity provider, creating users, and assigning the S3 bucket. With those tasks performed, the service is operational.
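The same steps can be performed through the SDK instead of the console. Here is a minimal sketch using boto3; the logging role ARN is a hypothetical placeholder, and the actual API call requires AWS credentials.

```python
def build_server_params(logging_role_arn, tags=None):
    """Assemble the arguments for the Transfer create_server call."""
    params = {
        "EndpointType": "PUBLIC",                   # service-generated endpoint name
        "IdentityProviderType": "SERVICE_MANAGED",  # users and SSH keys stored by the service
        "LoggingRole": logging_role_arn,            # role allowing writes to CloudWatch
    }
    if tags:
        params["Tags"] = [{"Key": k, "Value": v} for k, v in tags.items()]
    return params

def create_sftp_server(logging_role_arn):
    import boto3  # requires AWS credentials to actually run
    client = boto3.client("transfer")
    response = client.create_server(**build_server_params(logging_role_arn))
    return response["ServerId"]
```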
Here is an example. Log in to AWS Console and select “Transfer for SFTP” from the menu items. When the page loads, click the “Create Server” item.

First, we have to provide some details on the SFTP server itself.

There are three options for the server endpoint name:
- None, resulting in a service generated endpoint name like s-1d24762a32354fd1b.server.transfer.us-east-1.amazonaws.com. This is not terribly friendly from a user perspective.
- Amazon Route 53 DNS alias, which creates a friendly name for the endpoint in a Route 53 hosted zone you manage.
- Other DNS, which you should use if you already have a custom domain name managed by DNS somewhere. This would allow you to create a name like sftp.cloud.example.com.
After selecting the server endpoint name, you need to specify whether users will be managed by the service or by a custom identity provider. If you select “Service Managed”, the service stores each user’s SSH public key. If you select a custom provider, such as Microsoft AD, you are responsible for the configuration and management of that provider.
If using the service-managed identity provider, authentication uses the user’s public SSH key. If you do not manage those keys, the user will have to provide their SSH public key so it can be configured on the server.
If using the custom provider option, the users will authenticate using a username and password, which is managed through that identity provider.
The final step in creating the server is to select a logging role and apply any tags. The logging role grants AWS Transfer for SFTP permission to write events to CloudWatch.

After providing the logging details, we can add any tags we wish for the server, and click the “Create” button. At this point, the process is launched to create the SFTP endpoint.

Now, the service is operational and we can see the details for our endpoint.

S3 Buckets and IAM Roles
As mentioned, AWS Transfer for SFTP uses an S3 bucket, specified when a user is created along with an optional folder; together these form the user’s home directory.
The documentation indicates you specify the S3 bucket when the server is created. This is not currently how the server creation process works.
According to the documentation, the user is not restricted to the home directory unless you create a “scope-down” policy. The scope-down policy is defined and associated with the user when the user is created.
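Following the pattern shown in the AWS documentation, a scope-down policy uses `${transfer:...}` policy variables, which are resolved per user at session time so each user is confined to their own home directory. A sketch of building one:

```python
import json

def scope_down_policy():
    """Scope-down policy restricting a user to their home directory."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowListingOfUserFolder",
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": ["arn:aws:s3:::${transfer:HomeBucket}"],
                "Condition": {
                    "StringLike": {
                        "s3:prefix": [
                            "${transfer:HomeFolder}/*",
                            "${transfer:HomeFolder}",
                        ]
                    }
                },
            },
            {
                "Sid": "HomeDirObjectAccess",
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                "Resource": ["arn:aws:s3:::${transfer:HomeDirectory}*"],
            },
        ],
    }

# The JSON text is what you would paste into the Policy section for the user.
policy_json = json.dumps(scope_down_policy(), indent=2)
```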
The service uses a specific IAM role that is granted access to the S3 bucket, allowing it to read files from and write files to the bucket.
Take care when setting up the S3 bucket used for AWS Transfer for SFTP: ensure the bucket is not public, its objects cannot be made public, and access is limited to only the IAM roles or users that need it.
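As a sketch, the policy attached to that IAM role needs roughly the following S3 permissions; the bucket name is a placeholder, and you may trim the actions (for example, drop `s3:DeleteObject`) to match your use case.

```python
def bucket_access_policy(bucket):
    """IAM policy for the Transfer role: list the bucket, read/write its objects."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
                "Resource": f"arn:aws:s3:::{bucket}",
            },
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                "Resource": f"arn:aws:s3:::{bucket}/*",
            },
        ],
    }
```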
Users
When creating a user, you specify their username, the access policy from IAM (the Access section), and, if desired, the scope-down policy (the Policy section). You also specify the S3 bucket and optional folder that form the user’s home directory.

The last part of the user configuration is to upload their SSH public key. This means you will either need the user to provide you with their public key, or you will generate the public and private keys and provide the private key to the user.
This key management is not provided as part of the AWS Transfer service. To specify the public key for the user’s account, copy the public key text into the public key field.
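User creation can also be scripted through boto3, which helps when onboarding many users. The server id, role ARN, bucket, and key below are hypothetical placeholders, and the actual API call requires AWS credentials.

```python
def build_user_params(server_id, user_name, role_arn, bucket, folder, public_key):
    """Assemble the arguments for the Transfer create_user call."""
    return {
        "ServerId": server_id,
        "UserName": user_name,
        "Role": role_arn,                        # IAM role granting bucket access
        "HomeDirectory": f"/{bucket}/{folder}",  # bucket plus optional folder
        "SshPublicKeyBody": public_key,          # text of the user's .pub file
    }

def create_sftp_user(**kwargs):
    import boto3  # requires AWS credentials to actually run
    return boto3.client("transfer").create_user(**build_user_params(**kwargs))
```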

Using AWS Transfer for SFTP
To upload or download files using AWS Transfer for SFTP, the user initiates a connection using their favorite SFTP client.
Here is a sample exchange with our endpoint using an SFTP client on a Fedora 29 Linux workstation.
```
[chare@macaroni ~]$ sftp -i my_key sftp-user1@s-1d24762a32354fd1b.server.transfer.us-east-1.amazonaws.com
Connected to sftp-user1@s-1d24762a32354fd1b.server.transfer.us-east-1.amazonaws.com.
sftp> dir
sftp> put ap.py
Uploading ap.py to /transfer-bucket-030619/sftp-user1/ap.py
ap.py                         100%  132     1.6KB/s   00:00
sftp> dir
ap.py
sftp> quit
[chare@macaroni ~]$
```
Notice there is no password exchange in this example because the authentication is performed using the user’s SSH keys.
Monitoring AWS Transfer for SFTP
When we created the SFTP server, we specified a role for CloudWatch logging. We can see the file transfer from the previous sample by looking in CloudWatch at the log stream associated with the service.
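The service writes its events to a CloudWatch Logs group named after the server id, so the logs can also be pulled programmatically. A boto3 sketch; the server id is a placeholder and the call requires AWS credentials.

```python
def log_group_for(server_id):
    """CloudWatch Logs group used by an AWS Transfer server."""
    return f"/aws/transfer/{server_id}"

def recent_transfer_events(server_id, limit=20):
    """Fetch the most recent log messages for the server."""
    import boto3  # requires AWS credentials to actually run
    logs = boto3.client("logs")
    resp = logs.filter_log_events(
        logGroupName=log_group_for(server_id),
        limit=limit,
    )
    return [event["message"] for event in resp["events"]]
```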

What does it Cost?
Like other AWS services, AWS Transfer is billed for what you use, with the exception of the endpoint: once configured, the endpoint incurs an hourly charge whether or not the service is used in that billing cycle, as does any S3 storage consumed. Data transfer charges are only incurred when data moves into or out of the service.
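A back-of-the-envelope estimate makes the billing model concrete. The rates below are illustrative assumptions, not current prices; check the AWS pricing page for your region before relying on them.

```python
def estimate_monthly_cost(hours, gb_transferred,
                          endpoint_rate=0.30,  # assumed USD per endpoint-hour
                          data_rate=0.04):     # assumed USD per GB in or out
    """Endpoint hours are billed whether or not any data is transferred."""
    return endpoint_rate * hours + data_rate * gb_transferred
```

Note that even an idle endpoint accrues the hourly charge for every hour of the month; only the data component drops to zero.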
Conclusion
AWS Transfer for SFTP greatly simplifies the file exchange process, both inter-organizational and intra-organizational. As files are moved in and out of S3, they are automatically converted between files and S3 objects. Both the authentication process and the data exchange are secure, as they are protected by the SSH protocol.
The tricky parts an organization will have to address are user management for the SFTP server, including keys, and the folder layout in the S3 bucket. As AWS Transfer for SFTP has CLI and SDK interfaces, these problems can be addressed at the organization level using defined naming conventions. Additionally, if an organization would prefer to use a custom identity provider instead of SSH keys, that option is also available.
References
AWS Transfer for SFTP Overview
AWS Transfer for SFTP Documentation
AWS Identity and Access Management
Copyright 2019, Chris Hare