Send and receive email for your domain with Postmark and Amazon’s SES, S3, and Lambda services

A long, long, long time ago, sending email via your website was really horrible. Alongside the static HTML powering your Guestbook, you had some copy/pasted CGI script your ISP somehow allowed you to use that probably didn’t work, but oh crap it started working I hope it doesn’t break now.

A long, long time ago, sending email suddenly became easier. It kind of just worked accidentally. You installed WordPress or another app on a shared host and got emails when people left comments.

A while ago, things started to get hard again. When it’s easy for everyone to send email, it’s also really easy for people to send massive amounts of spam. So the larger email providers got smart and started requiring things like DKIM and SPF to really guarantee mail delivery. Without these configured on your domain, you’re at the mercy of the algorithm. Thankfully, places like DigitalOcean had excellent documentation for configuring stuff like this, and with a bit of elbow grease, you could get there on a $10/month Linode server.

But then it got super easy! Mandrill offered a free tier for transactional emails that had a limit nobody would reach with a standard blog/comment configuration. You could sign up for an account and use a wp_mail drop-in that handled the API interactions for you.

Of course, as with all free services, there’s a limit. Mandrill reached that limit this year and changed direction, becoming a transactional email add-on for MailChimp accounts.

It happens, especially when it’s free. ¯\_(ツ)_/¯

And so it goes. On to the next service, hopefully configured in a structure that’s prepared for long-term use.

Why Postmark

It looks nice, I’ve seen it mentioned in a handful of conversations, the API seems straightforward, and I didn’t run into anything that made it hard to set up an account. I’m easy.

You get 25,000 free emails with Postmark. After that you pay a really reasonable rate. If you send a ton of emails, the rate gets more reasonable. I think this is a good model and they should probably even charge earlier because it’s going to take me a while to send 25,000 emails.

Once you sign up, Postmark is just as easy as Mandrill. There’s an official plugin that provides a settings screen for you to add your API key and a wp_mail replacement that handles the API calls. If you’re like me, you’ll skip the full plugin and grab only the wp_mail drop-in, add some static configuration, and toss it into mu-plugins.
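If you go the drop-in route, the underlying API call is small. Here’s a hedged sketch of the request the drop-in ends up making; the endpoint and X-Postmark-Server-Token header come from Postmark’s API docs, and the token and addresses below are placeholders:

```javascript
// Build the HTTP request for Postmark's "send a single email" endpoint.
// The token and addresses are placeholders, not real credentials.
function buildPostmarkRequest(token, message) {
  return {
    url: 'https://api.postmarkapp.com/email',
    method: 'POST',
    headers: {
      'Accept': 'application/json',
      'Content-Type': 'application/json',
      'X-Postmark-Server-Token': token,
    },
    body: JSON.stringify({
      From: message.from,
      To: message.to,
      Subject: message.subject,
      TextBody: message.text,
    }),
  };
}

const req = buildPostmarkRequest('POSTMARK-SERVER-TOKEN', {
  from: 'jeremy@chipconf.com',
  to: 'someone@example.com',
  subject: 'Hello',
  text: 'Sent via the Postmark API.',
});
```

The drop-in wraps exactly this kind of request behind wp_mail so the rest of WordPress never knows the difference.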

The catch…

As it should, Postmark requires that you add Sender Signatures for any email address from which you’ll be sending email. So before I can send email from jeremy@chipconf.com, I need to show that I can already receive email on that same address.

At this point, a normal person decides to enable email forwarding through their domain registrar or host. That is the easy way, but it was Saturday and I was looking for a party.

Receiving email through Amazon

Amazon has 9 million AWS services. It only takes 3 of them to solve the email receipt problem and not one involves setting up a server. The hardest part is keeping track of all the open tabs.

  • Amazon Simple Email Service (SES) was originally built to handle the sending of transactional emails. In September 2015, they added support for receiving email through the same service.
  • Amazon Simple Storage Service (S3) is a place to store things. In this case, it will be where we drop incoming emails to be processed.
  • Amazon’s AWS Lambda is the cool new kid on the block and allows for “serverless” computing. You define a function and are charged only for the computing time that the function actually uses.

To get through this, you’re going to need a verified AWS account and access to your domain’s DNS settings via whichever name server you use. I use DNSimple, which has made every DNS configuration in the last 5 years a pleasant experience. That’s an affiliate link even though it’s already only $7/month for me. 🍻

Let’s do it.

Configuring SES to receive and forward email

  1. Go to SES via the Services menu in your AWS Console and select Domains under Identity Management.
  2. Click on Verify a New Domain at the top of the screen.
  3. Enter the root of the domain you’re verifying, in my case chipconf.com, check the Generate DKIM Settings option, and click Verify This Domain.
  4. You’ll be presented with an overlay containing the new records that need to be attached to your domain as part of your DNS configuration. Carefully enter all of these as any mistakes may add extra time to the verification process. I set all of mine with a 10 minute TTL so that any errors may resolve sooner.
    • TXT record that acts as domain verification.
    • Three CNAME records for the DKIM record set, which SES rotates through automatically.
    • And an MX record to route incoming email on your domain to AWS.
  5. Click Close on the DNS overlay. You’ll now need to be patient as the domain is verified. Amazon says this may take 72 hours, but it’s taken 5 minutes for 3 of my domains and 20 minutes for one where I had an error in the config at first. You’ll get a couple emails as soon as the verification goes through.
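For reference, the records from step 4 might look something like this in zone-file form. Every value here is a placeholder (the verification token, the DKIM selectors, and the inbound region); use the exact values from the SES overlay:

```
; Hypothetical zone entries for chipconf.com with a 10 minute (600s) TTL
_amazonses.chipconf.com.            600 IN TXT   "verification-token-from-ses"
selector1._domainkey.chipconf.com.  600 IN CNAME selector1.dkim.amazonses.com.
selector2._domainkey.chipconf.com.  600 IN CNAME selector2.dkim.amazonses.com.
selector3._domainkey.chipconf.com.  600 IN CNAME selector3.dkim.amazonses.com.
chipconf.com.                       600 IN MX    10 inbound-smtp.us-east-1.amazonaws.com.
```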

In the meantime, you’ll want to verify any email addresses that you’ll be forwarding email to. As part of the initial SES configuration, you’re locked into the Amazon SES sandbox and can only send emails to addresses you have verified ahead of time.

  1. Select Email Addresses under Identity Management.
  2. Click on Verify a New Email Address at the top of the screen.
  3. Enter the address you’ll be forwarding mail to and click Verify This Email Address.
  4. Once you receive an email from AWS, click on the link to complete the verification.

Note: You’re also limited to sending 200 messages every 24 hours and a maximum of one per second. Because transactional emails will be sent using Postmark, and only replies to those emails will come in through SES, that shouldn’t be a huge deal. If you do reach that limit, you’ll need to request a sending limit increase for SES. If you think you’ll be receiving large volumes of email, you may also want to consider using SES for all of your transactional email (Human Made has a plugin) and not use Postmark at all.
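If you want to keep an eye on that limit, SES exposes it through the GetSendQuota API call. A small sketch of interpreting its response, assuming the documented field names and using the sandbox numbers above:

```javascript
// GetSendQuota responses carry Max24HourSend, MaxSendRate, and
// SentLast24Hours. This helper just reports how many sends are left.
function sendsRemaining(quota) {
  return quota.Max24HourSend - quota.SentLast24Hours;
}

// Sample values mirroring the sandbox defaults (200/day, 1/second).
const sandboxQuota = { Max24HourSend: 200, MaxSendRate: 1, SentLast24Hours: 42 };
const remaining = sendsRemaining(sandboxQuota); // 158
```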

Ok, go back to Domains under Identity Management and check that the status for your domain is listed as verified. Once it is, we can continue. If you’re concerned that something isn’t working properly, use a command like dig to double check the TXT record’s response.

› dig TXT _amazonsesbroken.chipconf.com @ns1.dnsimple.com +short


› dig TXT _amazonses.chipconf.com @ns1.dnsimple.com +short
"4QdRWJvCM6j1dS4IdK+csUB42YxdCWEniBKKAn9rgeM="

The first example returns nothing because it’s an invalid record. The second returns the expected value.

Note that I’m using ns1.dnsimple.com above. I can change that to ns2.dnsimple.com, ns3.dnsimple.com, etc. to verify that the record has propagated to all of my name servers. Use your own domain’s name servers when running dig.

  1. Once domain verification has processed, click on Rule Sets under Email Receiving on the left.
  2. Click on View Active Rule Set to view the default rule set. If a default rule set does not exist, create a new one.
  3. Click Create Rule to create a receipt rule for this domain.
  4. For recipient, enter the base of your domain (e.g. chipconf.com) rather than a full email address so that all addresses at that domain will match. Click Next Step.
  5. Select S3 as the first action.
  6. Choose Create S3 bucket in the S3 Bucket dropdown and enter a bucket name. Click Create Bucket.
  7. Leave Object key prefix blank and Encrypt Message unchecked.
  8. Choose Create SNS Topic in the SNS Topic dropdown and enter a Topic Name and Display Name. Click Create Topic.
  9. Click Next Step. We’ll need to do some things before adding the Lambda function.
  10. Give the rule a name, make sure Enabled is checked, Require TLS is unchecked, and Enable spam and virus scanning is checked. Click Next Step.
  11. Review the details and click Create Rule.

Now head over to Lambda via the Services menu in the top navigation. Before completing the rule, we need to add the function used to forward emails that are stored in the S3 bucket to one of the verified email addresses.

Luckily, the legwork for this has already been done. We’ll be using the appropriately named and MIT-licensed AWS Lambda SES Email Forwarder function. The README on that repository is worth reading as well; it provides more detail for the instructions in this section.

  1. Click Create a Lambda function.
  2. Click Skip on the next screen without selecting a blueprint.
  3. Enter a name and description for the function. Make sure Runtime is set to Node.js 4.3. Paste the contents of the AWS Lambda SES Email Forwarder index.js file into the Lambda function code area.
  4. Edit the defaultConfig object at the top of this file to reflect your configuration.
    • fromEmail should be something like noreply@chipconf.com
    • emailBucket should be the name of the S3 bucket you created earlier.
    • emailKeyPrefix should be an empty string.
    • forwardMapping is used to configure one or more relationships between the incoming email address and the one the email is forwarded to. Use something like @chipconf.com as a catch-all for the last rule.
  5. Leave Handler set to index.handler.
  6. Select Basic Execution Role from the role list. A new window will appear to grant Lambda permissions to other AWS resources.
  7. Choose Create a new IAM Role from the IAM Role drop down and provide a Role Name.
  8. Click View Policy Document and then Edit to edit the policy document. Copy and paste the below policy document, also taken from the AWS Lambda SES Email Forwarder repository, into that text area. Make sure to change the S3 bucket name in that policy to match yours; I replaced S3-BUCKET-NAME with chipconf-emails in mine.
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "logs:CreateLogGroup",
            "logs:CreateLogStream",
            "logs:PutLogEvents"
          ],
          "Resource": "arn:aws:logs:*:*:*"
        },
        {
          "Effect": "Allow",
          "Action": "ses:SendRawEmail",
          "Resource": "*"
        },
        {
          "Effect": "Allow",
          "Action": [
            "s3:GetObject",
            "s3:PutObject"
          ],
          "Resource": "arn:aws:s3:::S3-BUCKET-NAME/*"
        }
      ]
    }
  9. Click Allow. You should be transferred back to the Lambda screen.
  10. Under Advanced Settings, set Memory to 128MB and Timeout to 10 seconds. You can leave VPC set to No VPC.
  11. Click Next.
  12. Review the new function details and click Create function.
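For context, my edited defaultConfig from step 4 looked roughly like this. The key names come from the aws-lambda-ses-forwarder README; the bucket and addresses are examples, and resolveForwards below is only my illustration of how the mapping behaves, not code from the forwarder itself:

```javascript
// Example defaultConfig values for the forwarder. The addresses and
// bucket name are placeholders for your own verified configuration.
const defaultConfig = {
  fromEmail: 'noreply@chipconf.com',
  emailBucket: 'chipconf-emails',
  emailKeyPrefix: '',
  forwardMapping: {
    'jeremy@chipconf.com': ['verified-address@example.com'],
    '@chipconf.com': ['verified-address@example.com'], // catch-all, kept last
  },
};

// Illustration only: exact address match wins, then the domain
// catch-all, then nothing.
function resolveForwards(config, recipient) {
  const key = recipient.toLowerCase();
  if (config.forwardMapping[key]) return config.forwardMapping[key];
  const domainKey = key.slice(key.indexOf('@'));
  return config.forwardMapping[domainKey] || [];
}
```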

Whew. Almost there.

Now head back to SES via the Services menu in the top navigation. We need to edit the rule set to use the new Lambda function.

  1. Click Rule Sets under Email Receiving and then View Active Rule Set to see the existing rules.
  2. Click on the name of the rule from the previous steps.
  3. Select Lambda as an action type next to Add Action.
  4. Select the new function you created next to Lambda function. Leave Event selected for the Invocation type. Leave None selected for SNS topic.
  5. Click Save Rule.
  6. A permissions overlay will appear to request access for SES to invoke the function on Lambda. Click Add permissions.

Voilà!

Now I can go back to Postmark and add jeremy@chipconf.com as a valid Sender Signature so that the server can use the Postmark API to send emails on behalf of jeremy@chipconf.com to any address.

If someone replies to one of those emails (or just sends one to jeremy@chipconf.com), it is now received by Amazon SES. The email is then processed and stored as an object in Amazon S3. SES then notifies Amazon Lambda, which fires the stored function used to process that email and forward it via SES to the mapped email address.

Now that you have 1800 words to guide you through the process, I’m going to dump a bunch of screenshots that may help provide some context. Feel free to leave a comment if one of these steps isn’t clear enough.

Pieter Hintjens on Building Open Source Communities

This was a really interesting listen. Pieter Hintjens, the founder of ZeroMQ, lays out a handful of rules for building open source communities.

  1. Put people before code.
  2. Make progress before you get consensus.
  3. Problems before solutions.
  4. Contracts before internals.

Everything you do as a founder of a community should be aimed at getting the people into your project and getting them happy and getting them productive.

And one of the ways to do that, according to Hintjens in rule 2, is to merge pull requests liberally and get new contributors’ “vision of progress on record” so that they immediately become members of the community. Worry about fixing the progress later.

His thoughts around licensing (our contract) were also interesting. Without formal contracts, pull requests rely on the license applied to a project. If a project has a very lenient license, such as MIT, and somebody forks your project, it’s feasible that a different license could be applied to the code they submit back through a pull request. If a project has a share-alike license (his example was MPLv2; I’m more familiar with the GPL), then you can rely on incoming patches already being compatible without additional paperwork.

I’d like to explore more around that, as I’m sure there are other legal angles. It further stresses that paying attention to the license you choose matters. It would be interesting to know whether my thinking would have changed during our late license selection process for VVV.

The Q&A has some gems too, with great questions and great answers, so keep listening. Here are two of my favorite answers. 🙂

“throw away your ego, it will help a lot”

And, in response to someone asking why you shouldn’t just let everyone commit directly to master.

“If you send a pull request and somebody says merge, it feels good. […] If you’re merging your own work, […] it feels lonely, you feel uncertain, you start making mistakes, and no one is there to stop you.”


Email to Slack bot idea

It would be fun to have a Slack bot that could be copied late into a long email thread of reply-alls. It would create a channel or join an existing one, parse the thread into a conversation with messages applied to people in the conversation, and then reply-all to the email thread with a link to the conversation in Slack.

This would allow you to politely suggest a conversation move to Slack and move it there at the same time.

See also: the same, but with a new GitHub issue.

Wired on Drone Geo-Fencing

“This is NOT something users want,” another critic added. “I have a good relationship with my local airports and have worked with every local tower or control center. I get clearance to fly and they have been great, but this ‘update’ takes away my control.”

Ryan Calo, a University of Washington law professor who studies robots and the law, traces the resistance to two sources. “One is a complaint about restricting innovation. The second one says you should own your own stuff, and it’s a liberty issue: corporate versus individual control and autonomy,” Calo says. “When I purchase something I own it, and when someone else controls what I own, it will be serving someone else’s interest, not mine.”

Source: Why the US Government Is Terrified of Hobbyist Drones | WIRED

Intersections of technology and government regulation are interesting.

When a piece of technology is so small and cheap, it’s easy to apply personal ideas of how you should be able to interact with it. At some level it makes sense to compare geo-fence restrictions on drones to DRM on e-books. But really, it’s not the same concept at all.

When something is large and expensive, such as a private plane, then it’s probably easier to agree with (and understand) restrictions on where you can use it. The same thing applies to cars—just because I own a vehicle doesn’t mean I can drive it down a one way street or onto private property without consequence.

WCSF 2014 Talk: Public Universities and Open Source Software

The above is my talk about applying the open source ethos to sharing our work as a community in public land-grant universities. I posted earlier with the full text and slides.

You may notice that the talk description is very far off from the actual talk. 🙂 I originally submitted an expansive talk on public universities using and contributing to open source software. When I was invited to do a 5 minute lightning talk instead, I chopped and chopped at the original material. Once I reached the 8 minute mark, I had to pick between two paths and this felt the most right.

Boone asked a question after the talk which was exactly related to the other path. And I flubbed the answer. I’m in the process of writing a post now with what I really wanted to say and it’s definitely a topic I want to continue discussing.

I will also note that I loved the lightning talk format. It was the hardest talk I’ve had to prepare for and I’m happy that I recognized that far enough in advance. It was great to be a part of such a wonderful lineup this year at WCSF.

An old answer to the Vagrant vs MAMP question on Reddit

I’m about to delete my Reddit account because it’s weird to have these passwords around for accounts that I really have no intent of using. I did find one answer I left to a question about my “Evolving WordPress Development with Vagrant” post that I feel like saving for posterity.

Is there an advantage for a single designer/developer who uses MAMP to set up and develop WordPress sites? I read their docs but I couldn’t discern a real advantage if it’s just a single user configuration? – grafxbill

While there probably isn’t an immediate advantage, I would argue that if you can see yourself ever working with more than one site or a host that does not use a LAMP stack, then the long term benefits are worth it. Though one of my next goals is to set up a basic LAMP stack via Vagrant, and at that point I’ll come back and tell you to ditch MAMP completely. 🙂

Caching alone is probably the most important reason for having a matching environment. These issues are so hard to troubleshoot locally if you don’t have the same setup. APC via MAMP can be helpful at first, but really duplicating the environment takes away a lot of the guesswork.

My other favorite thing is the quality of the sandbox. I can play around with server settings, screw everything up, and then ‘vagrant destroy’ to be back at square one without my personal computer feeling any of the side effects.

SSL remains fairly terrifying

Moxie Marlinspike‘s presentation on SSL Stripping, while 5 years old, is both fascinating and terrifying. I’m not sure I’ll ever turn my secure VPN off again. At the same time, I’m not sure if it really does me any good.

The 55 minutes of his talk are very much worth it. Some moments from the video:

“when looking for security vulnerabilities … it’s good to start with places where developers probably don’t really know what they’re doing but feel really [confident] about the solutions they’ve come up with.”

“A padlock, who’d have thought … it doesn’t inspire security.”

“[EV Certs]: Now we’re supposed to pay extra for the Certificate Authorities to do the thing they were supposed to do to begin with.”

And the most important to remember, which is also the least assuring:

“Lots of times the security of HTTPS comes down to the security of HTTP, and HTTP is not secure.”

Major props to Zack, who prodded me to watch this many times before I finally ran into it again today.

Amazon’s petition for exemption to fly drones commercially

Amazon filed a petition for exemption with the FAA last week so that they could fly prototype drones outdoors as part of research and development for their future Prime Air offering. It’s a quick read, with a couple fun points:

Because Amazon is a commercial enterprise we have been limited to conducting R&D flights indoors or in other countries.

[…]

We will effectively operate our own private model airplane field, but with additional safeguards that go far beyond those that FAA has long-held provide a sufficient level of safety for public model airplane fields – and only with sUAS.

It’s pretty amazing to think that Amazon would have made this much progress—eight or nine generations—without flying anything outside. One of the items listed in their request was the mention that their drones flew up to 50mph with 5lb payloads. How big is this facility that they’re testing in?

Or is this a lie that many of those working on serious commercial efforts with drones right now are telling?

Current Thoughts on Pressgram

Pressgram is an iOS app with a great story and a great tagline.

The best way to filter & publish photos to your WordPress-powered site.

I’ll admit not paying much attention to anything beyond my assumption of the concept for the last 6 months, though I was still excited to try Pressgram almost immediately when the app launched to the public last week. And while I have some reservations, my overall outlook is hopeful and I’m planning on finding ways to use it.

First thought.

Pressgram is just as much a service as it is an app. Terms of service are an unavoidable thing when running a service. In order to keep operating and avoid liability issues, the terms will likely be written in favor of the service rather than the user. This doesn’t mean the user loses all rights, or that the service is out to get the user, it’s just legalese.

That said, terms of service lead to things like content moderation. Not necessarily a bad thing for a community sharing photos with each other. Not necessarily an ideal thing for publishing to your own site.

Which leads into my assumptions, or wishful thinking. When I first heard of Pressgram, my brain went straight to how it would be great to have an app on my phone that would take pictures and publish them directly to WordPress. I’ve been trying to find the right workflow for a while that does just that. The WordPress app is probably pretty close right now, but nothing is perfect.

Second thought.

Pressgram sends photos to a central server (or servers) first and then proceeds to push them out to whichever services were selected—Twitter, Facebook, WordPress—while also focusing on providing that photo to other Pressgram users through the feed.

In order to do this, Pressgram needs authorization. With Twitter and Facebook, OAuth provides a layer of separation between our passwords and Pressgram. With WordPress, we have XMLRPC over HTTP, which requires a username and password for each connection. This isn’t really any different than when uploading photos through a browser. The only difference is that Pressgram needs to store this username and password combo on a server somewhere so that the user isn’t bugged for it every time a photo needs to be published.

So my feeling around this is that it’s kind of a bummer to hand over my site’s username and password in the hopes that it’s well taken care of. There are ways to mitigate the worry. With my single-author site, I can just create another author user and assign a unique password for use with Pressgram. With more complex sites, there is likely room for a plugin that provides one-off authorization passwords for use by apps that rely on XMLRPC. I guess it’s even likely that it exists already.

All that said, I have a pretty short wish list at this point:

  1. Some transparency around password management. What is being done on the server side to protect users’ WordPress sites?
  2. Photo uploading from the iOS app directly to WordPress over XMLRPC, no middleman.

Those are my ramblings, hopefully more constructive than fleeting tweets. I’m going to knock around some plugins to solve my near-term worries so that I can keep using the app, and I’m definitely looking forward to seeing where it goes. All in all, John Saddington has done a great job thus far.