Sony A7R II Eye AF Settings

I recently got a Sony A7R II camera to replace my Panasonic GH2, along with the 55mm prime and 24-70mm zoom lenses. It’s a great camera, and I’m still learning the ins and outs of it. One really nice feature is Eye AF, the ability to focus automatically on someone’s eyes. It works surprisingly well and is especially great if you have young children (who aren’t so keen on staying in one spot for very long). You can tell when the camera has locked onto someone’s eyes because it’ll draw little green boxes around the eyes.

Eye AF is great, but the way it’s implemented is a little quirky. With the default configuration of the camera, you must first half-press the shutter button to focus the camera (as you would normally), then press and hold the center button of the control wheel to activate Eye AF (while still half-pressing the shutter button). This is awkward, to say the least. I have reconfigured my camera to make things easier. I now use back-button focus, tied to the AEL button. Also, I have configured the AF/MF/AEL switch lever to toggle between normal AF and Eye AF. So now, to focus the camera, I hold down the AEL button, and then use the shutter button to take the photo. Depending on which position the AF/MF/AEL lever is in, when focusing I’ll either be in normal AF or Eye AF. I don’t have to hold down two buttons at once to focus, and I can quickly switch autofocus modes.

It took me a bit to figure out how to configure the camera to do this, so here are the steps required (2.5 means go to the second tab in the Menu, 5th screen):

  • 2.5 AF w/shutter – Off
  • 2.6 AEL w/shutter – Auto
  • 2.7 Custom Key Settings
    • AEL Button – Eye AF
    • AF/MF Button – AF On

It’s also important to note that for Eye AF to work, the camera must be in AF-C (continuous autofocus) mode.

Also, here’s the Sony Help Guide for the A7R II.

Backup Strategy 2015

I just changed large parts of our family backup strategy, and it’s been two years since I last detailed what we use, so I thought it’d be a good time to revisit the topic. For computers, our family has several Macs. For data, we have about 20 GB of personal and financial documents and a little more than 200 GB of photos. I believe in having at least two backups, one of which must be offsite/in the cloud.

Previously, we used Dropbox and Boxcryptor to share our personal files, and the photos resided on a Synology DS412+ NAS. I was never comfortable having our personal information on Dropbox, even with Boxcryptor, which also had the side effect of making things more cumbersome. Synology has a private Dropbox-like feature, called Cloud Station, and we’ve moved everything off of Dropbox onto it. It’s been problem-free.

For backups, we continue to use Time Machine to back up our Macs to a Time Capsule, and Crashplan to back them up to the cloud. This works fine. Previously, I had also used Crashplan on the Synology to back up our photos to the cloud and to an external USB drive. This never worked well. Crashplan is not officially supported on Synology, so anyone wishing to use it has to rely on a third-party package. Every time Synology updated the operating system (which is fairly often), Crashplan would break. It would also break at other, random times. Eventually I couldn’t get it to work at all. As an aside, during my troubleshooting I learned that if it encounters problems, the Synology will mount the USB drive read-only without making it apparent that it has done so. This backup system was just not working.

So, I threw out Crashplan and the external USB drive on the Synology and replaced them with two things. First, I started using Synology’s package to do backups to Amazon Glacier, Amazon’s cloud archiving service. It took about 5 days to back up 230 GB of data, and cost a little under $10. If I understand the billing correctly, it will continue to cost about $10/month to store that data, which is admittedly somewhat expensive. And should I need to recover it, it will cost a lot more. But I consider the Glacier backup a disaster-recovery backup only, and I don’t anticipate ever having to restore from it. The Glacier backup is scheduled to run once a week.

The other thing I did was purchase a second Synology NAS (a DS415+, the next hardware rev of the DS412+) and set up nightly backups from the first Synology to it. I believe it’s an rsync-based system, and it keeps multiple versions of files. It was painless to set up, and because it’s a local backup, the initial full backup took a bit less than 2 hours.

So now, I have our data on 4 hard drives locally (each Synology has 2 drives in a RAID), as well as in the cloud. Additionally, our Macs are backed up to two different places, one of which is in the cloud.

Groups.io Update

It’s been almost six months since I launched Groups.io and four months since I’ve talked about it here on the blog, so I figured it was time for an update. I’ve been heads-down, working on new features and bug fixes. Here’s a short list of the major features added during that time:

Slack Member Sync

Mailing lists and chat, like peanut butter and chocolate, go great together. Do you have a Slack Team? You can now link it with your group. Our new Slack Member Sync feature lets you synchronize your Slack and Groups.io member lists. When someone joins your group, they will automatically get an invite to join your Slack Team. And when someone joins your Slack Team, they’ll automatically be added to your group. You can configure the sync to be automatic, or you can sync members by hand. Access the new member sync area from the Settings page for your group.

As an aside, another potentially great combination, bacon and chocolate, do not go great together. Trust us, we’ve tried.

Google Log-in

You can now log into Groups.io using Google. New users who sign up this way can skip the confirmation email step, making it quicker and easier to join your groups.

Markdown and Syntax Highlighting Support

You can now post messages using Markdown and emoji characters. And we support syntax highlighting of code snippets.

Archive Management Tools

The heart of a group is its message archive. And nobody likes an unorganized archive. We’ve added the ability to split and merge threads. Has a thread changed topics halfway through? Split it into two threads. Or if two threads are talking about the same thing, you can merge them. You can also delete individual messages and change the subject of threads.

Subgroups

Groups.io now supports subgroups. A subgroup is a group within another group. When viewing your group on the website, you can create a subgroup by clicking the ‘Subgroup’ tab on the left side. Each subgroup gets its own email address, derived from the parent group’s address.

Subgroups have all the functionality of normal groups, with one exception. To be a member of a subgroup, you must be a member of the parent group. A subgroup can be open to all members of the parent group, or it can be restricted. Archives can be viewable by members of the parent group, or they can be private to the members of the subgroup. Subgroups are listed on the group home page, or they can be completely hidden.

Calendar, Files and Wiki

Every group now has a dedicated full-featured Calendar, Files section, and Wiki.

In other news, we also started an Easy Group Transfer program, for people who wish to move their groups from Yahoo or Google over to Groups.io.

Email groups are all about community, and I’m pleased that the Groups.io Beta group has developed into a valuable community, helping define new features and scope out bugs. I’m working to be as transparent as possible about the development of Groups.io through that group, and through a dedicated Trello board which catalogs requested features and bug reports. If you’re interested, please join and help shape the future of Groups.io!

Groups.io Database Design

Continuing to talk about the design of Groups.io, today I’ll cover our database design.

Database Design

Groups.io is built on top of PostgreSQL. We use GORP to handle marshaling our database objects. We split our data over several separate databases. The databases are all currently running in one PostgreSQL instance, but this will allow us to easily split the data over several physical databases as we scale up. A downside is that we end up having to manage more database connections now, and the code is more complicated, but we won’t have to change any code in the future when we split the databases across multiple machines (sharding is a whole other thing).
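
Here’s a rough sketch of what that looks like in Go (the connection strings and names are made up for illustration, and I’m assuming the go-gorp import path; this isn’t our actual configuration):

package main

import (
    "database/sql"
    "log"

    "github.com/go-gorp/gorp"
    _ "github.com/lib/pq"
)

// openDbMap opens one logical database and wraps it in a gorp.DbMap.
// Because each database has its own connection string, any of them can
// later be pointed at a different physical machine without code changes.
func openDbMap(dsn string) (*gorp.DbMap, error) {
    db, err := sql.Open("postgres", dsn)
    if err != nil {
        return nil, err
    }
    return &gorp.DbMap{Db: db, Dialect: gorp.PostgresDialect{}}, nil
}

func main() {
    userDb, err := openDbMap("host=db01 dbname=userdb sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    archiveDb, err := openDbMap("host=db01 dbname=archivedb sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    _, _ = userDb, archiveDb // table registration and queries go here
}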

There are no joins in the system and there are no foreign key constraints. We enforce constraints in an application layer. We did this for future scalability. It did require more work in the beginning and it remains to be seen if we engaged in an act of premature optimization. Every record in every table has a 64-bit integer primary key.
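
As an illustration of what an application-layer constraint looks like (building on the gorp.DbMap sketch above; the types and table layout are hypothetical, not our schema), here’s the shape of a check we’d otherwise get from a foreign key:

// Hypothetical sketch: with no foreign keys in the database, the code
// verifies the referenced user exists before inserting a subscription.
// Note that a check-then-insert like this can race; handling that is
// also up to the application layer. Requires the fmt package.
func addSubscription(userDb, subDb *gorp.DbMap, sub *Subscription) error {
    obj, err := userDb.Get(User{}, sub.UserId)
    if err != nil {
        return err
    }
    if obj == nil {
        return fmt.Errorf("user %d does not exist", sub.UserId)
    }
    return subDb.Insert(sub)
}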

We have 3 database machines. DB01 is our main database machine. DB02 is a warm standby, and DB03 is a hot standby. We use WAL-E to back up DB01’s database to S3. DB02 uses WAL-E to pull its data from S3 to stay warm. All three machines also run Elasticsearch as part of a cluster. We run statistics on DB03.

Our data is segmented into the following main databases: userdb, archivedb, activitydb, deliverydb, integrationdb.


Userdb

The userdb contains user, group and subscription records. Subscriptions provide a mapping from users to groups, and we copy down several bits of information from users and groups into the subscription records, to make some processing easier. Here are some of the copied-down columns:

GroupName string // Group.Name
Email string // User.Email
UserName string // User.UserName
FullName string // User.FullName
UserStatus uint8 // User.Status
Privacy uint8 // Group.Privacy

We maintain these columns in an application layer above the database. By duplicating this information in the subscription record, we greatly reduce the number of user and group record fetches we need to do throughout the system. These fields rarely change, so there’s not a large write penalty. There is definitely a memory penalty with the expanded subscription record, but I figured that was a good trade-off.
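
As a concrete example of that maintenance (the table and column names here are invented, not our schema), here’s the shape of the write path when a user changes their email address:

// Update the canonical user row, then fan the new address out to that
// user's denormalized subscription rows; both tables live in the userdb.
func updateUserEmail(userDb *gorp.DbMap, userId int64, email string) error {
    if _, err := userDb.Exec("UPDATE users SET email = $1 WHERE id = $2", email, userId); err != nil {
        return err
    }
    _, err := userDb.Exec("UPDATE subscriptions SET email = $1 WHERE user_id = $2", email, userId)
    return err
}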


Archivedb

The archivedb stores everything related to message archives. The main tables are the thread table and the message table. We store every message in the message table as raw compressed text, but before we insert each message, we strip out any attachments and instead store them in Amazon’s S3. This reduces the average size of messages to a much more manageable level.
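
Here’s a sketch of that flow; stripAttachments and storeAttachment are hypothetical stand-ins for the MIME handling and the S3 upload (neither is our real code), while the compression uses the standard library:

import (
    "bytes"
    "compress/gzip"
)

// archiveBody strips the attachments out of a raw message, stores them
// in S3, and returns the gzip-compressed remainder for the message table.
func archiveBody(raw []byte) ([]byte, error) {
    body, attachments, err := stripAttachments(raw) // hypothetical MIME walk
    if err != nil {
        return nil, err
    }
    for _, att := range attachments {
        if err := storeAttachment(att); err != nil { // hypothetical S3 upload
            return nil, err
        }
    }
    var buf bytes.Buffer
    zw := gzip.NewWriter(&buf)
    if _, err := zw.Write(body); err != nil {
        return nil, err
    }
    if err := zw.Close(); err != nil {
        return nil, err
    }
    return buf.Bytes(), nil
}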


Activitydb

The activitydb stores activity logging records for each group.


Deliverydb

The deliverydb stores bounce information for users.


Integrationdb

The integrationdb stores information relating to the various integrations available in Groups.io.


Search

We use Elasticsearch for our search, and our indexes mirror the PostgreSQL tables. We have a Group index, a Thread index and a Message index. I tried a couple of Go Elasticsearch libraries and didn’t like any of them, so I wrote my own simple library to talk to our cluster.
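
Since Elasticsearch speaks plain JSON over HTTP, such a library doesn’t need much. A minimal sketch of indexing a document (the host, index and type names here are made up):

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
)

// indexDocument PUTs a JSON document into an Elasticsearch index.
func indexDocument(host, index, docType string, id int64, doc interface{}) error {
    body, err := json.Marshal(doc)
    if err != nil {
        return err
    }
    url := fmt.Sprintf("http://%s:9200/%s/%s/%d", host, index, docType, id)
    req, err := http.NewRequest("PUT", url, bytes.NewReader(body))
    if err != nil {
        return err
    }
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    if resp.StatusCode >= 300 {
        return fmt.Errorf("elasticsearch: %s", resp.Status)
    }
    return nil
}

Indexing a message then becomes a one-liner like indexDocument("db03", "messages", "message", msg.Id, msg) (again, hypothetical names).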

Next Time

In future articles, I’ll talk about some aspects of the code itself. Are there any specific topics you’d like me to address? Please let me know.

Are you unhappy with Yahoo Groups or Google Groups? Or are you looking for an email groups service for your company? Please try Groups.io.

What Runs Groups.io

I always appreciate it when people talk about how they’ve built a particular piece of software or a web service, so I thought I’d talk about some of the architecture choices I made when building Groups.io, my recently launched email groups service. This will be a multi-part series.


Go

One of the goals I had when I first started working on Groups.io was to use it as an opportunity to learn the new language Go. Groups.io is written completely in Go and is my first project in the language. As a diehard C programmer (ONElist was written in C, and Bloglines was written in C++), it took very little time to get up to speed on Go, and I now consider myself a huge fan of the language. There are many reasons why I like to code in Go. It’s compiled, so it’s fast and you get all the code checks you miss with interpreted languages. It generates standalone binaries, which is great for distributing to production machines. It’s got a great standard library. It’s easy to write multithreaded code (using lightweight threads called goroutines). The documentation system is good. But besides all that, the philosophy behind Go just fits my mental model better than any other language I’ve worked in. It all combines to make programming in Go the most fun I’ve had coding in a very long time.

Components

Groups.io consists of several components that interact with each other. All interactions are done using JSON over HTTP.
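
The glue for that is small. Here’s a sketch of the client side (not our exact code):

import (
    "bytes"
    "encoding/json"
    "net/http"
)

// postJSON marshals in, POSTs it to url, and decodes the JSON reply into out.
func postJSON(url string, in, out interface{}) error {
    body, err := json.Marshal(in)
    if err != nil {
        return err
    }
    resp, err := http.Post(url, "application/json", bytes.NewReader(body))
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    return json.NewDecoder(resp.Body).Decode(out)
}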


Web

The web server handles all web traffic, naturally. It is proxied behind nginx, because I believe that makes for a more flexible and slightly more secure system. Nginx terminates the encrypted HTTPS traffic and passes the unencrypted traffic to the web process. We use the standard Go HTML template system for our web templates, and we use several parts of the Gorilla web toolkit. We use Bootstrap for our HTML framework.
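
In outline, the web process is shaped something like this (the routes and template names are invented for illustration; we use more of Gorilla than just the router):

package main

import (
    "html/template"
    "net/http"

    "github.com/gorilla/mux"
)

var templates = template.Must(template.ParseGlob("templates/*.html"))

func home(w http.ResponseWriter, r *http.Request) {
    templates.ExecuteTemplate(w, "home.html", nil)
}

func main() {
    r := mux.NewRouter()
    r.HandleFunc("/", home)
    // nginx terminates HTTPS and proxies plain HTTP to this port.
    http.ListenAndServe("127.0.0.1:8080", r)
}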


Smtpd

The smtpd daemon handles incoming SMTP traffic for the groups.io domain. It is also proxied behind nginx. The email it handles consists mainly of group messages, although there are some other messages as well, including bounce messages. It sends group and bounce messages to the messageserver for processing. Other messages are forwarded, using a set of rules, to other email addresses. We based smtpd heavily on Go-Guerrilla’s SMTPd.


Messageserver

The messageserver daemon processes group messages, bounce messages and email commands. For group messages, it verifies that the poster is subscribed and has permission to post to the group, archives the message, and sends it out to the group subscribers, using Karl to do the actual sending. It also sends the messages to our Elasticsearch cluster. Bounce and email command messages are processed as well. All group messages go through the messageserver, whether they arrive via smtpd or are posted through the web site.
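
In rough outline (every function name below is a hypothetical stand-in for the real step):

// The path of a group message through the messageserver.
func processGroupMessage(msg *Message) error {
    if !canPost(msg.From, msg.GroupId) { // subscribed, and allowed to post?
        return ErrNotAllowed
    }
    if err := archive(msg); err != nil { // write to the archivedb
        return err
    }
    if err := indexMessage(msg); err != nil { // send to Elasticsearch
        return err
    }
    return sendViaKarl(msg) // fan out to the group's subscribers
}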


Karl

Karl, named after Karl ‘The Mailman’ Malone, is our email sending process. It is responsible for all emails originating from the groups.io domain. It is passed an email message, a footer template, a sender, and a set of data about each receiver the message should be sent to. For each receiver, it evaluates the template, inserting subscriber-specific information, and then merges it with the email message before sending it out. It also handles DKIM signing of emails. It stores all emails using Google’s LevelDB database until they are successfully sent.
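
The store-then-send idea looks roughly like this in Go, using the goleveldb library (one Go option for LevelDB; this is a sketch of the concept, not Karl’s actual code):

import "github.com/syndtr/goleveldb/leveldb"

// spoolAndSend persists a rendered message before attempting delivery and
// removes it only after the send succeeds, so a crash mid-send loses nothing.
func spoolAndSend(db *leveldb.DB, key, rendered []byte) error {
    if err := db.Put(key, rendered, nil); err != nil {
        return err
    }
    if err := smtpSend(rendered); err != nil { // hypothetical SMTP delivery
        return err // message stays spooled for a later retry
    }
    return db.Delete(key, nil)
}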

A reasonable question to ask is why I didn’t outsource the email delivery part of the service. There are several companies that provide outsourced email delivery. In general, outsourcing is a way to save development time. But when I thought about it, I didn’t think I’d be able to save much time by outsourcing; I’d still have to connect our data with whatever templating system the email delivery service used. And Karl did not take very long to write. But more importantly, email delivery is a core competency of our service, and I believe we have to own it.


Errord

Errord is a simple logging process, used to log error messages and stack traces from any crashes in any of the other processes. I can look at the errord log and instantly see if anything in the system has crashed and where it crashed.

Rsscrawler, Instagramcrawler

Rsscrawler and Instagramcrawler are cronjobs that deal with the Feed and Instagram integrations, respectively. Rsscrawler looks for updates in feeds that are integrated with our groups, and Instagramcrawler does the same for Instagram accounts. They’re currently run twice an hour. If they find an update, they generate a group message and pass it along to the messageserver.


Bouncer

Bouncer is a cronjob that is run once a day to manage bouncing users.


Expirethreads

Expirethreads is a cronjob that’s run twice an hour to expire threads that are tagged with hashtags that have an expiration.


Senddigests

Senddigests is a cronjob that’s run once a night to generate digest emails for users with digest subscriptions.

Next Time

In future articles, I’ll talk about the machine cluster running Groups.io, the database design behind the service, and some aspects of the code itself. Are there any specific topics you’d like me to address? Please let me know.

Are you unhappy with Yahoo Groups or Google Groups? Or are you looking for an email groups service for your company? Please try Groups.io.


Introducing Groups.io

I’m not one to live in the past (well, except maybe for A-Team re-runs), but for many years now, I’ve felt like I’ve had unfinished business. I started the service ONElist in 1998. ONElist made it easy for people to create, manage, run and find email groups. As it grew over the next two and a half years, we expanded, changed our name to eGroups, and, in the summer of 2000, were acquired by Yahoo. The service was renamed Yahoo Groups, and I left the company to pursue other startups.

But really this story starts even further back, in the winter of 1989, when I was introduced to mailing lists in college. I was instantly hooked. It was obvious that a mailing list was a great way to communicate with a group of people about a common interest. I started subscribing to lists dedicated to my favorite bands (’80s hair metal, anyone?). I joined a list for a local running club. And, at every company I’ve worked at since graduating, there have been invaluable internal company mailing lists.

But that doesn’t mean that mailing lists can’t improve. And this is where we get back to the unfinished business. Because email groups (the modern version of mailing lists) have stagnated over the past decade. Yahoo Groups and Google Groups both exude the dank air of benign neglect. Google Groups hasn’t been updated in years, and some of Yahoo’s recent changes have actually made Yahoo Groups worse! And yet, millions of people put up with this uncertainty and neglect, because email groups are still one of the best ways to communicate with groups of people. And I have a plan to make them even better.

So today I’m launching Groups.io in beta, to bring email groups into the 21st century. At launch, we have many features that those other services don’t have, including:

  • Integration with other services, including: Github, Google Hangouts, Dropbox, Instagram, Facebook Pages, and the ability to import Feeds into your groups.
  • Businesses and organizations can have their own private groups on their own subdomain.
  • Better archive organization, using hashtags.
  • Many more email delivery options.
  • The ability to mute threads or hashtags.
  • Fully searchable archives, including searching within attachments.

We’re just starting out; following the tradition of new startups everywhere, we’re in beta. We’re working hard to squash the inevitable bugs and to make the system even better (based on your feedback!).

I’m passionate about email groups. They are one of the very best things about the Internet and, with Groups.io, I’ve set out to make them even better. As John ‘Hannibal’ Smith, leader of the A-Team, liked to say, “I love it when a plan comes together.”

Turning A Web Site Into A Mac App

For some web sites, I have multiple accounts and need to be able to switch between those accounts easily. I created a set of site-specific browsers for each web site and account using Fluid. A site-specific browser looks like a normal app, but is actually a self-contained browser set to open a specific web page. These site-specific browsers don’t share resources, so you can set up multiple ones targeting the same web page but using different logins. The problem with Fluid, however, is that it doesn’t seem to work with 1Password, the app I use to manage all my passwords. This meant that each time I launched a Fluid app, I’d have to also launch 1Password, look up the appropriate password, and then cut and paste it into the Fluid app to log in. Not ideal. Fortunately, I’ve come across a better solution, using Chrome. It allows me to create site-specific browsers using Chrome, and it also integrates with 1Password. And it’s free. It involves just a couple of steps.

First, you must download this shell script. Each time you run it, it will create a new site-specific browser app. It requires three bits of information: the name you want to call the app, the web page it should open, and an icon to use for the app. For icons, I used Google Image Search.

Once you run the script, it creates the new app in your /Applications directory. Clicking on this app will launch a Chrome process, separate from your normal Chrome browser, pointed at the page you specified. So far, we’ve duplicated Fluid. Now we need to install the 1Password extension. Hit Command-T to open a new tab in the app, and go to the 1Password extensions download page. Then click on the green button to install the 1Password extension. Now the site-specific browser you’ve created has 1Password installed. Quit out of it and restart it. You can now right-click to bring up 1Password and fill in any login form.

Reducing Craigslist Flakes

Ever try to give something away on Craigslist and have to deal with people flaking out on you? You’ll get the initial ‘Is it still available?’ barrage of emails and then never hear from them again. Or you’ll agree on a time and place for them to pick up the item, and they won’t show up. It’s so frustrating that I came to dread what should have been an easy act: giving something away.

At least part of the reason this happens is that the other person has nothing at ‘risk’. It’s easy to send an initial email. It’s easy to agree to a place and time. If they don’t show up, it’s no skin off their back. After this happened to me a couple of times, I came up with a potential solution: make the other person demonstrate their commitment to picking up the item by having them donate a token amount of money to a charity and then send you the receipt. Here’s an email I used recently when giving away a piece of furniture:


Thank you for your interest in the furniture. It’s still available! I’m flexible with dates and times; I’m sure we can agree on something soon.

Have you ever sold or given anything away on Craigslist? If so, you know that many people will flake out and never show up. I’m running an experiment with this listing. To show that you’re serious about these end tables, I’d like you to make a small donation to a charity. You get to pick which charity; PayPal makes it easy to do so.

Just pick a charity, make a $5 donation, and then email me some proof of the donation. The proof could be the email receipt you received from PayPal, or it could be the receipt from the charity itself. Anything that proves you made the donation. Once you do that, we’ll set up a time for you to pick up the end tables. You get to feel good about helping a charity, and I know that you’re serious about picking up these end tables.

If you are not interested in doing this, please let me know and I’ll move on to the next person who’s interested in the end tables. Also, if I don’t hear back from you one way or the other within the next hour, I’ll assume you’re not interested and will move on to the next person.

Regardless, any feedback you have about this idea of making a charitable donation to show interest in this listing would be greatly appreciated.



It worked for me. If you try it, please let me know how it goes!

Multi-User Lightroom

My wife and I take a lot of photos and we’ve been searching for a system where we could combine and manage our various pictures. I had been using Adobe Lightroom to manage my photos and she had been using Apple’s Aperture. We wanted one system where we could access, catalog, manage, develop and print our photos. We decided to standardize on Lightroom, but Lightroom is currently single-user only. We needed to be able to access our Lightroom catalog from multiple computers and Lightroom’s SQLite-based database is not designed for that. So after some research, I put together the following system. It allows us to use one Lightroom catalog on multiple computers. The caveat is that only one of us can be running Lightroom at a time. Other than that, it solves our problem.

WARNING: This is a hack. While it works for us, I do not guarantee that this will not trash your Lightroom catalog. Make backups and proceed carefully.

SECOND WARNING: These directions and the script are not polished. This post assumes some technical savvy.

There are a couple of parts to my solution. It requires a network share on a NAS, and it requires a service like Dropbox that syncs a set of files across multiple computers. Some NAS devices come with software that provides Dropbox-like functionality. The NAS I have, a Synology DS412+, has software called Cloud Station, which provides this functionality. Also, we’re an Apple Mac-based household. This solution should work for Windows as well, but you will have to customize the shell script.

In short, we store our photos on the NAS, and we store the Lightroom catalog in a Dropbox folder. We invoke Lightroom using a shell script that ensures only one person can run Lightroom at a time. The reason we put the Lightroom catalog in a Dropbox folder is speed; the catalog and previews are stored locally on each computer.

Many people already store their photos on a NAS. If you are not currently doing so, there are several tutorials to help you migrate your photos, such as this one.

To begin, make sure you’re not running Lightroom. Locate the Lightroom catalog, which is usually stored in your Pictures folder. You’re looking for the ‘Lightroom 5 Catalog.lrcat’ and ‘Lightroom 5 Catalog Previews.lrdata’ files. Copy these to a folder in your Dropbox, and then rename the old ones so that Lightroom doesn’t try to use them in the future. When you next launch Lightroom, it will ask you for the catalog file; point it to the one in your Dropbox folder.

The ‘Lightroom 5 Catalog Previews.lrdata’ file is a cache of previews of your photos. It can be large, but it can be regenerated at any time. I chose not to have Dropbox/Cloud Station sync it across the various computers, and instead let each computer generate it when Lightroom is run. Dropbox and Cloud Station both have selective sync functions that allow you to exclude files/folders from syncing; that’s what I use.

Now you should have a normally working Lightroom installation, with your photos on the network share on the NAS and your catalog in the Dropbox folder. The last bit of the solution is to run Lightroom only through the following shell script, which I’ll explain.




#!/bin/bash

USER="your_nas_username"        # customize: your NAS user name
PASSWORD="your_nas_password"    # customize: your NAS password
NAS="your_nas_hostname"         # customize: your NAS hostname or IP
MOUNTDIR="/Volumes/home"
LOCKDIR="${MOUNTDIR}/Lightroom/lock"
TIMEFILE="${MOUNTDIR}/Lightroom/lastrun"

# Make sure the network share containing the photos is mounted
if [ ! -d "${MOUNTDIR}" ]; then
 mkdir "${MOUNTDIR}"
 mount_afp afp://${USER}:${PASSWORD}@${NAS}/home ${MOUNTDIR}
fi

# Want to delay at least N seconds since last instance was closed to
# allow for CloudStation propagation
if [ -f "${TIMEFILE}" ]; then
 if test `find "${TIMEFILE}" -atime +15s`; then
  echo "ok"
 else
  osascript -e 'tell app "System Events" to display alert "Need to sleep, Lightroom will start momentarily"'
  sleep 15
 fi
fi

# mkdir is atomic: it succeeds only if no one else holds the lock
if mkdir "${LOCKDIR}"; then
 echo "Locking succeeded" >&2
 # Adjust the app name to match your Lightroom version
 open -W /Applications/Adobe\ Photoshop\ Lightroom\ 5.app
 touch "${TIMEFILE}"
 rmdir "${LOCKDIR}"
else
 osascript -e 'tell app "System Events" to display alert "Someone else is currently using Lightroom"'
 echo "Lock failed - exit" >&2
 exit 1
fi

What the shell script does is as follows:

  1. It makes sure the network share containing the photos is mounted.
  2. On the network share, it looks for a time file, created by a previous run of the shell script, which indicates the last time the script (and Lightroom) were run.
  3. If the file exists, it checks the time and makes sure it’s been at least 15 seconds since the last run. This is to allow Dropbox time to synchronize the catalog from any other computer. The 15 seconds is a guess on my part; you may want to make it longer.
  4. Once it’s been at least 15 seconds, the script attempts to create a lock directory on the network share. This only succeeds if the lock directory doesn’t already exist. If it exists, the script assumes that someone else is running Lightroom and displays an error message.
  5. Once the lock directory is created, it launches Lightroom and then waits for Lightroom to close.
  6. Once Lightroom closes, it removes the lock directory and updates/creates the time file.

Things you have to customize in the script:

  • The USER, PASSWORD and NAS variables (lines 3, 4 and 5 of the script)
  • The script assumes your network share is mounted at /Volumes/home and that there is a Lightroom directory there. This does not have to be where your photos are stored.

To run the script, I used Platypus to create a Mac application out of the shell script. I placed the resulting app, which I call ‘RunLightroom’, on the network share, and then on each computer I dragged it to the Dock to make it easy to run.

Hopefully this helps someone else out. Family photo sharing/management is a huge opportunity that Adobe should probably own (for better or worse). This post only addresses part of the problem; another issue is access to your photos on all your devices. Synology has a solution for that and I’m working on integrating that with Lightroom. I’ll put up another post when/if I have that figured out.

Please let me know if you have suggestions for improving this post; this is just a first draft and these instructions are admittedly pretty rough.

Photo – Norwegian Boat Houses


This past summer we vacationed in Scandinavia, visiting Norway, Sweden and Russia. This photo was taken as we travelled by ferry from Oslo to Balestrand, Norway.

