What’s the right USB-C to HDMI adapter for a Dell XPS 13″ 9350 running Ubuntu 16.10


When I first got my Dell XPS 13″ 9350 (Developer Edition), I needed an adapter so that I could power an HDMI display via the laptop’s USB-C port.

After poking around for a few minutes, it seemed like the DA200 USB-C to HDMI/VGA/Ethernet/USB 3.0 adapter was the right choice. It works with plenty of Dell XPS laptops and says things like “USB-C to HDMI” in the name of the adapter.

I was wrong!

I have no idea why I was wrong, but thanks to the answer on this Ask Ubuntu post, I learned that the DA200 is not exactly what it claims to be. The only way this adapter actually works with a 55″ TV or similar is at 800×600 resolution. Definitely not what you expect when connecting HDMI.

After reading that post, I purchased the Apple USB-C Digital AV Multiport Adapter. It has only a 2 star rating on the Apple site, but it worked immediately and powered the 55″ TV as expected over HDMI at 1920×1080.

Hopefully this post helps make it more obvious via Google that the DA200 is the wrong adapter and the Apple USB-C Digital AV Multiport Adapter works great!

Things I learned (or broke) while trying to fix my wireless in Ubuntu 16.10

I’ve had a series of strange issues with my Dell XPS 13 9350 DE under Ubuntu 14.04, 16.04, and now 16.10. Bouncing around between versions has helped some things, but this strange wireless issue seems to have followed me around.

Everything works just fine for X amount of time and then the wireless will just drop. In my /var/log/kern.log file (or via dmesg), I’ll see something like this:

wlp58s0: disconnect from AP 58:6d:8f:99:8b:f4 for new auth to 58:6d:8f:99:8b:f5

Just in case I missed it, that message will generously repeat every 2 minutes or so. Sometimes nothing noticeable happens, other times my connection will break and I’ll have to restart the network manager service.
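One thing worth noticing: the two MAC addresses in that log line differ only in the final octet, which often (though not always) means the card is bouncing between two radios of the same access point, such as its 2.4GHz and 5GHz bands, rather than between different routers. A quick sketch for pulling the two BSSIDs out of a line like that (the sample line is hard-coded here; in practice you’d pipe in `grep 'disconnect from AP' /var/log/kern.log`):

```shell
# Extract the "from" and "to" BSSIDs from an iwlwifi reauth message.
# The sample log line is hard-coded for illustration.
line='wlp58s0: disconnect from AP 58:6d:8f:99:8b:f4 for new auth to 58:6d:8f:99:8b:f5'
from_ap=$(echo "$line" | awk '{print $5}')   # 5th field is the old BSSID
to_ap=$(echo "$line" | awk '{print $NF}')    # last field is the new BSSID
echo "from=$from_ap to=$to_ap"
```

If the two BSSIDs are nearly identical like this, the band/channel changes discussed below are a reasonable place to start.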

I spent some time today digging into the issue. I’m not sure that I actually fixed anything, but I at least learned a few commands and want to save links to a few helpful sites.

FWIW, my laptop has an Intel 8260 rev 3a wireless adapter, but these commands should help in general troubleshooting as well.

sudo lshw -C network

lshw is “a small tool to extract detailed information on the hardware configuration of the machine.”

This gave me some cool information like the product, vendor, and firmware version of the wireless adapter. The firmware version listed when I started was 21.302800.0, which didn’t make any sense when compared to the list of iwlwifi drivers maintained on kernel.org.

sudo iwlist scan

iwlist, combined with “scan”, provides a list of in-range access points. I didn’t really use this much this time, but it does help show what wireless frequencies other nearby APs are using.

uname -r -m

uname prints system information. This isn’t entirely useful for troubleshooting wireless issues, but is an easy way to get your kernel information. I didn’t know this command until recently, so I’m putting it here. See also: uname -a.

dmesg | grep iwl

dmesg displays “all messages from the kernel ring buffer”. There’s a term.

This is probably the most useful command. It showed me what firmware linux was trying to load for the wireless adapter, what failed, and what succeeded.

[ 8.687719] iwlwifi 0000:3a:00.0: Direct firmware load for iwlwifi-8000C-22.ucode failed with error -2
[ 8.690891] iwlwifi 0000:3a:00.0: loaded firmware version 21.302800.0 op_mode iwlmvm

There was a lot more than that, but I was able to see that it was attempting to load 3 wrong versions of the firmware and then successfully loading a very wrong version of the firmware.

modprobe -r iwlwifi; modprobe iwlwifi

I removed the “wrong” versions of the firmware from /lib/firmware/ and ran the modprobe commands to first remove the loaded module and then re-add it again. I confirmed through dmesg that the “correct” firmware version was loaded.

The worst part about all of this is that it’s still happening. So I guess it may not be a firmware issue, but some kind of hardware thing. The list of “platform noise” issues on the kernel.org iwlwifi page is probably where I’m going to focus next. I changed the channel on my router once, but I may be able to dig deeper on that.

Update (an hour later): Things seem to be more stable after finding a more open 2.4Ghz channel. I also found what seems to be a maintained list of core releases for the iwlwifi drivers. That led me to what should be the latest mainline version of the 8260 firmware that I need. After making iwlwifi-8000C-22.ucode available and running modprobe, the correct firmware (22) was picked up and loaded.

Update (Jan 8, 2017): The problem is definitely back, even with all of the fixes above. I’m starting to wonder if it’s partially the fault of my router, as I’m always able to reach the router admin interface, just not anything that requires DNS. Today I disabled IPv6 on the wireless adapter and set a static IP address with static DNS. We’ll see how that holds up!

Update (Jan 8, 2017): Disabling IPv6 didn’t matter at all. I did find a new piece of the puzzle though. Ubuntu’s Network Manager uses dnsmasq by default. I disabled the configuration for this and now when my wireless drops and reconnects, DNS will at least continue working. Some other network stuff is still having trouble, but I don’t have to completely restart the network services right now.
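For reference, the dnsmasq change amounts to commenting out one line in NetworkManager’s config file. This is a sketch of what /etc/NetworkManager/NetworkManager.conf looks like on my 16.10 install; the surrounding lines may differ on yours:

```
[main]
plugins=ifupdown,keyfile
#dns=dnsmasq

[ifupdown]
managed=false
```

After commenting out dns=dnsmasq, restart with sudo service network-manager restart so that resolv.conf points at your real DNS servers instead of the local 127.0.1.1 forwarder.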

Here are some links that I found useful for learning how to troubleshoot this all:

“VR” photos with Google Cardboard Camera

I was really excited when Google offered a Daydream VR headset with my new Google Pixel back in October, but I wasn’t really familiar with why I should be excited. I figured I’d play a couple games, but that it would mostly be just another stepping stone toward a real VR system in a few years.

When I got the Daydream, it took me a couple uses before I discovered the Google Cardboard Camera app, which is freaking amazing. I was surprised how well the panorama shots I had taken with my iPhone translated into the VR format. Even cooler (!) is that the app itself lets you capture 360° shots with audio. When you share them with others later, they can experience the surroundings along with the ambient noise you recorded.

It’s still a bit gimmicky, but I’m a pretty big fan of the concept. If you have the Google Cardboard Camera app, here’s the view I enjoyed earlier today when snowshoeing in the Saint Joe National Forest:


The crunching of snow is me turning around while taking the shot. Otherwise, it was a super peaceful day with hardly any wind. More about that in another post!

Things I’ve enjoyed reading about open source in 2016

I started this post back in February after reading Nadia Eghbal’s “What success really looks like in open source” so that I’d have something other than tweets to reference when I wanted to refer back to some good stuff.

So here’s a list, not in any particular order, and including the article already mentioned, of some things I’ve enjoyed reading about open source in 2016:

There were many more great articles that I likely forgot to add here, but that list is a good one to revisit.

The next two are videos. The first is Pieter Hintjens on building open source communities. I was so in love with this talk from the beginning that I opened a new post window to start writing while I was watching.

Pieter Hintjens on Building Open Source Communities

Hintjens, who passed away earlier this year, left behind an enormously useful body of work in his writings on open source, development, and everything else. I know I’ll continue to go back and discover nice pieces.

The next video is Joshua Matthews on optimizing your open source project for contribution. Yet another one where I was just nodding along the entire time. I know I picked up a few useful tips from this.

I kind of like that I kept this post rolling this year, so I’m going to try to do the same thing next year. It’d probably be good if I wrote a sentence or two along the way on why I felt the piece was important enough to revisit!

Enjoy and feel free to send me links on open source that you think might be missing from the list above.

Send and receive email for your domain with Postmark and Amazon’s SES, S3, and Lambda services

A long, long, long time ago, sending email via your website was really horrible. Alongside the static HTML powering your Guestbook, you had some copy/pasted CGI script your ISP somehow allowed you to use that probably didn’t work, but oh crap it started working I hope it doesn’t break now.

A long, long time ago, sending email suddenly became easier. It kind of just worked accidentally. You’d install WordPress or another app on a shared host and you’d get emails when people left comments.

A while ago, things started to get hard again. When it’s easy for everyone to send email, it’s also really easy for people to send massive amounts of spam. So the larger email providers got smart and started requiring things like DKIM and SPF to really guarantee mail delivery. Without these configured on your domain, you’re at the mercy of the algorithm. Thankfully, places like Digital Ocean had excellent documentation for configuring stuff like this, and with a bit of elbow grease, you could get there on a $10/month Linode server.

But then it got super easy! Mandrill offered a free tier for transactional emails that had a limit nobody would reach with a standard blog/comment configuration. You could sign up for an account and use a wp_mail drop-in that handled the API interactions for you.

Of course, as with all free services, there’s a limit. Mandrill reached that limit this year and changed directions into a transactional email add-on for MailChimp accounts.

It happens, especially when it’s free. ¯\_(ツ)_/¯

And so it goes. On to the next service, hopefully configured in a structure that’s prepared for long-term use.

Why Postmark

It looks nice, I’ve seen it mentioned in a handful of conversations, the API seems straightforward, and I didn’t run into anything that made it hard to set up an account. I’m easy.

You get 25,000 free emails with Postmark. After that you pay a really reasonable rate. If you send a ton of emails, the rate gets more reasonable. I think this is a good model and they should probably even charge earlier because it’s going to take me a while to send 25,000 emails.

Once you sign up, Postmark is just as easy as Mandrill. There’s an official plugin that provides a settings screen for you to add your API key and a wp_mail replacement that handles the API calls. If you’re like me, you’ll skip the full plugin and grab only the wp_mail drop-in, add some static configuration, and toss it into mu-plugins.

The catch…

As it should, Postmark requires that you add Sender Signatures for any email address from which you’ll be sending email. So before I can send email from jeremy@chipconf.com, I need to show that I can already receive email on that same address.

At this point, a normal person decides to enable email forwarding through their domain registrar or host. That is the easy way, but it was Saturday and I was looking for a party.

Receiving email through Amazon

Amazon has 9 million AWS services. It only takes 3 of them to solve the email receipt problem and not one involves setting up a server. The hardest part is keeping track of all the open tabs.

  • Amazon Simple Email Service (SES) was originally built to handle the sending of transactional emails. In September 2015, they added support for receiving email through the same service.
  • Amazon Simple Storage Service (S3) is a place to store things. In this case, it will be where we drop incoming emails to be processed.
  • Amazon’s AWS Lambda is the cool new kid on the block and allows for “serverless” computing. You define a function and are charged only for the computing time that the function actually uses.

To get through this, you’re going to need a verified AWS account and access to your domain’s DNS settings via whichever name server you use. I use DNSimple, which has made every DNS configuration in the last 5 years a pleasant experience. That’s an affiliate link even though it’s already only $7/month for me. 🍻

Let’s do it.

Configuring SES to receive and forward email

  1. Go to SES via the Services menu in your AWS Console and select Domains under Identity Management.
  2. Click on Verify a New Domain at the top of the screen.
  3. Enter the root of the domain you’re verifying, in my case chipconf.com, check the Generate DKIM Settings option, and click Verify This Domain.
  4. You’ll be presented with an overlay containing the new records that need to be attached to your domain as part of your DNS configuration. Carefully enter all of these as any mistakes may add extra time to the verification process. I set all of mine with a 10 minute TTL so that any errors may resolve sooner.
    • TXT record that acts as domain verification.
    • Three CNAME records for the DKIM record set, which SES rotates through automatically.
    • And an MX record to route incoming email on your domain to AWS.
  5. Click Close on the DNS overlay. You’ll now need to be patient as the domain is verified. Amazon says this may take 72 hours, but it’s taken 5 minutes for 3 of my domains and 20 minutes for one where I had an error in the config at first. You’ll get a couple emails as soon as the verification goes through.

In the meantime, you’ll want to verify any email addresses that you will be forwarding email to. As part of the initial SES configuration, you’re locked in the Amazon SES sandbox and can only send emails to addresses you have verified ahead of time.

  1. Select Email Addresses under Identity Management.
  2. Click on Verify a New Email Address at the top of the screen.
  3. Enter the address you’ll be forwarding mail to and click Verify This Email Address.
  4. Once you receive an email from AWS, click on the link to complete the verification.

Note: You’re also limited to sending 200 messages every 24 hours and a maximum of one per second. Because transactional emails will be sent using Postmark, and only replies to those emails will come in through SES, that shouldn’t be a huge deal. If you do reach that limit, you’ll need to request a sending limit increase for SES. If you think you’ll be receiving large volumes of email, you may want to also consider using SES for all of your transactional email (Human Made has a plugin) and not use Postmark at all.

Ok, go back to Domains under Identity Management and check that the status for your domain is listed as verified. Once it is, we can continue. If you’re concerned that something isn’t working properly, use a command like dig to double check the TXT record’s response.

› dig TXT _amazonsesbroken.chipconf.com @ns1.dnsimple.com +short

› dig TXT _amazonses.chipconf.com @ns1.dnsimple.com +short

The first example returns nothing because it’s an invalid record. The second returns the expected value.

Note that I’m using ns1.dnsimple.com above. I can change that to ns2.dnsimple.com, ns3.dnsimple.com, etc. to verify that the record has propagated to all of my name servers. You should use your own domain’s name servers when checking with dig.
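If you’d rather not retype the command for each server, a small loop can generate the checks. This assumes DNSimple’s ns1–ns4 naming; swap in your own domain and name servers:

```shell
# Print a dig command for each name server; the TXT value returned by each
# should match once propagation is complete.
domain="_amazonses.chipconf.com"
for ns in ns1 ns2 ns3 ns4; do
  printf 'dig TXT %s @%s.dnsimple.com +short\n' "$domain" "$ns"
done
```

Run each printed command (or pipe the loop through sh) and compare the outputs.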

  1. Once domain verification has processed, click on Rule Sets under Email Receiving on the left.
  2. Click on View Active Rule Set to view the default rule set. If a default rule set does not exist, create a new one.
  3. Click Create Rule to create a receipt rule for this domain.
  4. For recipient, enter the base of your domain (e.g. chipconf.com) rather than a full email address so that all addresses at that domain will match. Click Next Step.
  5. Select S3 as the first action.
  6. Choose Create S3 bucket in the S3 Bucket dropdown and enter a bucket name. Click Create Bucket.
  7. Leave Object key prefix blank and Encrypt Message unchecked.
  8. Choose Create SNS Topic in the SNS Topic dropdown and enter a Topic Name and Display Name. Click Create Topic.
  9. Click Next Step. We’ll need to do some things before adding the Lambda function.
  10. Give the rule a name, make sure Enabled is checked, Require TLS is unchecked, and Enable spam and virus scanning is checked. Click Next Step.
  11. Review the details and click Create Rule.

Now head over to Lambda via the Services menu in the top navigation. Before completing the rule, we need to add the function used to forward emails that are stored in the S3 bucket to one of the verified email addresses.

Luckily, the hard legwork for this has already been done. We’ll be using the appropriately named and MIT-licensed AWS Lambda SES Email Forwarder function. The README on that repository is worth reading as well; it provides more detail on the steps in this section.

  1. Click Create a Lambda function.
  2. Click Skip on the next screen without selecting a blueprint.
  3. Enter a name and description for the function. Make sure Runtime is set to Node.js 4.3. Paste the contents of the AWS Lambda SES Email Forwarder index.js file into the Lambda function code area.
  4. Edit the defaultConfig object at the top of this file to reflect your configuration.
    • fromEmail should be something like noreply@chipconf.com
    • emailBucket should be the name of the S3 bucket you created earlier.
    • emailKeyPrefix should be an empty string.
    • forwardMapping is used to configure one or more relationships between the incoming email address and the one the email is forwarded to. Use something like @chipconf.com as a catch-all for the last rule.
  5. Leave Handler set to index.handler.
  6. Select Basic Execution Role from the role list. A new window will appear to grant Lambda permissions to other AWS resources.
  7. Choose Create a new IAM Role from the IAM Role drop down and provide a Role Name.
  8. Click View Policy Document and then Edit to edit the policy document. Copy and paste the below policy document, also taken from the AWS Lambda SES Email Forwarder repository, into that text area. Make sure to change the S3 bucket name in that policy to match yours. In the below policy document, I replaced S3-BUCKET-NAME with chipconf-emails.
    • {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "logs:CreateLogGroup",
              "logs:CreateLogStream",
              "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
          },
          {
            "Effect": "Allow",
            "Action": "ses:SendRawEmail",
            "Resource": "*"
          },
          {
            "Effect": "Allow",
            "Action": [
              "s3:GetObject",
              "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::chipconf-emails/*"
          }
        ]
      }
  9. Click Allow. You should be transferred back to the Lambda screen.
  10. Under Advanced Settings, set Memory to 128MB and Timeout to 10 seconds. You can leave VPC set to No VPC.
  11. Click Next.
  12. Review the new function details and click Create function.
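For context, here is roughly what the edited defaultConfig from step 4 ends up looking like. The forward-to address below is a placeholder; use one of the addresses you verified earlier in SES:

```
var defaultConfig = {
  fromEmail: "noreply@chipconf.com",
  emailBucket: "chipconf-emails",
  emailKeyPrefix: "",
  forwardMapping: {
    "jeremy@chipconf.com": ["verified-address@example.com"],
    "@chipconf.com": ["verified-address@example.com"]
  }
};
```

The "@chipconf.com" entry acts as the catch-all, so list it last.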

Whew. Almost there.

Now head back to SES via the Services menu in the top navigation. We need to edit the rule set to use the new Lambda function.

  1. Click Rule Sets under Email Receiving and then View Active Rule Set to see the existing rules.
  2. Click on the name of the rule from the previous steps.
  3. Select Lambda as an action type next to Add Action.
  4. Select the new function you created next to Lambda function. Leave Event selected for the Invocation type. Leave None selected for SNS topic.
  5. Click Save Rule.
  6. A permissions overlay will appear to request access for SES to invoke the function on Lambda. Click Add permissions.


Now I can go back to Postmark and add jeremy@chipconf.com as a valid Sender Signature so that the server can use the Postmark API to send emails on behalf of jeremy@chipconf.com to any address.

If someone replies to one of those emails (or just sends one to jeremy@chipconf.com), it is now received by Amazon SES. The email is then processed and stored as an object in Amazon S3. SES then notifies Amazon Lambda, which fires the stored function used to process that email and forward it via SES to the mapped email address.

Now that you have 1800 words to guide you through the process, I’m going to dump a bunch of screenshots that may help provide some context. Feel free to leave a comment if one of these steps isn’t clear enough.

Pieter Hintjens on Building Open Source Communities

This was a really interesting listen. Pieter Hintjens, the founder of ZeroMQ, lays out a handful of rules for building open source communities.

  1. Put people before code.
  2. Make progress before you get consensus.
  3. Problems before solutions.
  4. Contracts before internals.

Everything you do as a founder of a community should be aimed at getting the people into your project and getting them happy and getting them productive.

And one of the ways to do that, according to Hintjens in rule 2, is to merge pull requests liberally and get new contributors’ “vision of progress on record” so that they immediately become members of the community. Worry about fixing the progress later.

His thoughts around licensing (our contract) were also interesting. Without formal contracts, pull requests rely on the license applied to a project. If a project has a very lenient license, such as MIT, and somebody forks your project, it’s feasible that a different license could be applied to code they submit back to the project through a pull request. If a project has a share-alike license—his example was MPLv2, I’m more familiar with GPL—then you can rely on incoming patches already being compatible without additional paperwork.

I’d like to explore more around that, as I’m sure there are some other legal angles. It does further stress that paying attention to the license you choose is good. It would be interesting to know whether this would have changed my thinking during our late license selection process for VVV.

The Q/A has some good gems too, great questions and great answers, so keep listening. Here are two of my more favorite answers. 🙂

“throw away your ego, it will help a lot”

And, in response to someone asking why you shouldn’t just let everyone commit directly to master.

“If you send a pull request and somebody says merge, it feels good. […] If you’re merging your own work, […] it feels lonely, you feel uncertain, you start making mistakes, and no one is there to stop you.”


Email to Slack bot idea

It would be fun to have a Slack bot that could be copied late into a long email thread of reply-alls. It would create a channel or join an existing one, parse the thread into a conversation with messages applied to people in the conversation, and then reply-all to the email thread with a link to the conversation in Slack.

This would allow you to politely suggest a conversation move to Slack and move it there at the same time.

See also: the same, but with a new GitHub issue.

Wired on Drone Geo-Fencing

“This is NOT something users want,” another critic added. “I have a good relationship with my local airports and have worked with every local tower or control center. I get clearance to fly and they have been great, but this ‘update’ takes away my control.”

Ryan Calo, a University of Washington law professor who studies robots and the law, traces the resistance to two sources. “One is a complaint about restricting innovation. The second one says you should own your own stuff, and it’s a liberty issue: corporate versus individual control and autonomy,” Calo says. “When I purchase something I own it, and when someone else controls what I own, it will be serving someone else’s interest, not mine.”

Source: Why the US Government Is Terrified of Hobbyist Drones | WIRED

Intersections of technology and government regulation are interesting.

When a piece of technology is so small and cheap, it’s easy to apply personal ideas of how you should be able to interact with it. At some level it makes sense to compare geo-fence restrictions on drones to DRM on e-books. But really, it’s not the same concept at all.

When something is large and expensive, such as a private plane, then it’s probably easier to agree with (and understand) restrictions on where you can use it. The same thing applies to cars—just because I own a vehicle doesn’t mean I can drive it down a one way street or onto private property without consequence.

WCSF 2014 Talk: Public Universities and Open Source Software

The above is my talk about applying the open source ethos to sharing our work as a community in public land-grant universities. I posted earlier with the full text and slides.

You may notice that the talk description is very far off from the actual talk. 🙂 I originally submitted an expansive talk on public universities using and contributing to open source software. When I was invited to do a 5 minute lightning talk instead, I chopped and chopped at the original material. Once I reached the 8 minute mark, I had to pick between two paths and this felt the most right.

Boone asked a question after the talk which was exactly related to the other path. And I flubbed the answer. I’m in the process of writing a post now with what I really wanted to say and it’s definitely a topic I want to continue discussing.

I will also note that I loved the lightning talk format. It was the hardest talk I’ve had to prepare for and I’m happy that I recognized that far enough in advance. It was great to be a part of such a wonderful lineup this year at WCSF.