Optimizing SHA-1 Performance on OS X

At work we need to do some SHA-1 verification on application startup, and naturally we want it to run as fast as possible: no user likes their browser bouncing too many times on the dock before it shows up.

The initial implementation is an extremely naïve one, yet quite portable. It's based on the portable SHA-1 implementation in Chromium (which our code base is built upon): we first read the entire file into a std::string, then pass that string to SHA1HashString(). Job done. Simple, right?

Unfortunately, for the files we have it takes at least 280 milliseconds (61 MB/s) on a rather quick (2.6 GHz Core i7) desktop machine, of which about 30 ms is spent on reading the file and the rest on SHA-1. Needless to say, the memory footprint is quite big given the file sizes we have.

The first step towards using any other SHA-1 function is to decouple the ReadFileToString() call into a normal stdio file-reading loop:

FILE* file = fopen(path, "rb");
char buf[1 << 16];
size_t len;
// SHA-1 context initialization.
while ((len = fread(buf, 1, sizeof(buf), file)) > 0) {
  // SHA-1 context update with buf[0..len).
}
// SHA-1 context finalization; read out the digest.
fclose(file);

By doing this we already save quite a lot of time otherwise spent on std::string concatenation; the time spent on file reading is now down to 5 ms. There is not much room for further improvement there, so let's see how we can speed up the SHA-1 computation itself.
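To make the loop above concrete, here is a compact, unoptimized sketch of the whole streaming pattern, with a minimal SHA-1 in the spirit of the public-domain implementations discussed below. All the names (Sha1Ctx, sha1_init, and so on) are mine for illustration, not from any of the libraries mentioned:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Minimal streaming SHA-1 (RFC 3174), unoptimized. */
typedef struct {
    uint32_t h[5];     /* intermediate hash state */
    uint8_t  buf[64];  /* partial input block */
    size_t   buflen;   /* bytes currently buffered */
    uint64_t total;    /* total input length in bytes */
} Sha1Ctx;

static uint32_t rol(uint32_t v, int n) { return (v << n) | (v >> (32 - n)); }

/* Process one full 64-byte block. */
static void sha1_compress(Sha1Ctx *c, const uint8_t *p) {
    uint32_t w[80];
    for (int i = 0; i < 16; i++)
        w[i] = (uint32_t)p[4*i] << 24 | (uint32_t)p[4*i+1] << 16 |
               (uint32_t)p[4*i+2] << 8 | p[4*i+3];
    for (int i = 16; i < 80; i++)
        w[i] = rol(w[i-3] ^ w[i-8] ^ w[i-14] ^ w[i-16], 1);
    uint32_t a = c->h[0], b = c->h[1], cc = c->h[2], d = c->h[3], e = c->h[4];
    for (int i = 0; i < 80; i++) {
        uint32_t f, k;
        if (i < 20)      { f = (b & cc) | (~b & d);           k = 0x5A827999; }
        else if (i < 40) { f = b ^ cc ^ d;                    k = 0x6ED9EBA1; }
        else if (i < 60) { f = (b & cc) | (b & d) | (cc & d); k = 0x8F1BBCDC; }
        else             { f = b ^ cc ^ d;                    k = 0xCA62C1D6; }
        uint32_t t = rol(a, 5) + f + e + k + w[i];
        e = d; d = cc; cc = rol(b, 30); b = a; a = t;
    }
    c->h[0] += a; c->h[1] += b; c->h[2] += cc; c->h[3] += d; c->h[4] += e;
}

static void sha1_init(Sha1Ctx *c) {
    c->h[0] = 0x67452301; c->h[1] = 0xEFCDAB89; c->h[2] = 0x98BADCFE;
    c->h[3] = 0x10325476; c->h[4] = 0xC3D2E1F0;
    c->buflen = 0;
    c->total = 0;
}

static void sha1_update(Sha1Ctx *c, const void *data, size_t len) {
    const uint8_t *p = (const uint8_t *)data;
    c->total += len;
    while (len > 0) {
        size_t n = 64 - c->buflen;       /* room left in the block buffer */
        if (n > len) n = len;
        memcpy(c->buf + c->buflen, p, n);
        c->buflen += n; p += n; len -= n;
        if (c->buflen == 64) { sha1_compress(c, c->buf); c->buflen = 0; }
    }
}

static void sha1_final(Sha1Ctx *c, uint8_t digest[20]) {
    uint64_t bits = c->total * 8;       /* message length before padding */
    uint8_t byte = 0x80;
    sha1_update(c, &byte, 1);           /* the mandatory 1 bit after the data */
    byte = 0;
    while (c->buflen != 56) sha1_update(c, &byte, 1);  /* zero padding */
    uint8_t lenbuf[8];                  /* 64-bit big-endian bit count */
    for (int i = 0; i < 8; i++) lenbuf[i] = (uint8_t)(bits >> (56 - 8 * i));
    sha1_update(c, lenbuf, 8);
    for (int i = 0; i < 20; i++)
        digest[i] = (uint8_t)(c->h[i / 4] >> (24 - 8 * (i % 4)));
}

/* The fread() loop from above, wired to the context. */
static int sha1_file(const char *path, uint8_t digest[20]) {
    FILE *file = fopen(path, "rb");
    if (!file) return -1;
    Sha1Ctx ctx;
    sha1_init(&ctx);
    char buf[1 << 16];
    size_t len;
    while ((len = fread(buf, 1, sizeof(buf), file)) > 0)
        sha1_update(&ctx, buf, len);
    fclose(file);
    sha1_final(&ctx, digest);
    return 0;
}
```

Every implementation tried below fits this same init/update/final shape, so swapping them in and out is mostly a matter of renaming three calls.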

A well-known fast SHA-1 implementation is the one written by Linus Torvalds for Git, called block-sha1. The code is derived from the Mozilla NSS library, though Linus claims he has rewritten it entirely. It indeed performed quite well: the actual calculation time went down to 60 ~ 65 ms (246 MB/s).

However, the block-sha1 license is not entirely clear for our closed-source use. Linus said he wouldn't mind licensing it as MPL, but so far we still consider it licensed like the rest of Git, since it resides in Git's repository, and we obviously can't link GPLv2-licensed code into closed-source software.

Another alternative is Steve Reid's SHA-1 implementation in C, which is completely public-domain code. It performs quite well too: around 88 ms for us, which equals 181 MB/s.

Now that we have a good backup plan, I tried the SHA-1 implementation from Mozilla as well. It takes almost as long as Steve Reid's implementation, so no real improvement there, but not much licensing burden for us either.

I would have liked to try OpenSSL's implementation as well, since according to Improving the Performance of the Secure Hash Algorithm (SHA-1), it has an implementation optimized for Intel SIMD instructions (SSSE3). However, I didn't manage to get the project to build with OpenSSL due to some other complications, and I found a better alternative than fiddling with OpenSSL's fragile API: the CommonDigest API from Apple's CommonCrypto library.

It performed much better than block-sha1: only 40 ms (400 MB/s). Judging from the released source, Apple seems to be using the cross-platform OpenSSL implementation here as well, but it would still be nice to see how it compares with the OpenSSL library shipped with the system. I will try to post some results in the coming days.

Preserving Extended Attributes on OS X

When codesigning a Mach-O file (an OS X executable or library), the signature information is stored in the file itself through a Mach-O extension. When codesigning a bundle (.app or .framework), a _CodeSignature directory is created. But what happens when you codesign a plain text file? The signature information is stored in extended attributes. Because of that, when packaging or copying such files, you would expect your tools to preserve extended attributes. Not all of them do that by default.

tar on OS X preserves extended attributes by default, both when archiving and when unarchiving. But zip doesn't; a better replacement is ditto -k. ditto can also be used as a replacement for cp, though cp on OS X preserves extended attributes by default.

When using rsync, -E (--extended-attributes) will make sure it copies extended attributes.

When creating a dmg with hdiutil, keep in mind that the makehybrid command will lose extended attributes, so you will have to use alternative ways.
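As a quick reference, the commands above look like this on OS X (file names and destinations are made up for illustration):

```shell
# Inspect extended attributes; a signed plain file grows com.apple.cs.* ones.
xattr -l signed.txt

# tar preserves extended attributes by default, both ways.
tar -czf archive.tar.gz signed.txt

# zip does not; use ditto in zip mode instead.
ditto -c -k signed.txt archive.zip

# ditto also works as a cp replacement
# (though cp on OS X preserves xattrs by default anyway).
ditto signed.txt /backup/signed.txt

# rsync needs -E (--extended-attributes) to copy them.
rsync -aE signed.txt user@host:/backup/
```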


Codesigning is one of the worst issues we have had since we started working on the new Opera for Mac. How Apple managed to screw it up has never ceased to amaze us.

Yesterday morning our build servers started getting a CSSMERR_TP_NOT_TRUSTED error while codesigning the Mac builds. We didn't notice until we tried to release the new Opera Next build in the afternoon, which was obviously bad timing. Our immediate reaction was to search for the error on Google. Unfortunately, word hadn't spread yet, so all the results we got were from 2009 ~ 2011 and pointed at missing intermediate certificates, which completely misled us. We spent a couple of hours inspecting the certificates on all three of our Mac buildbot servers; none of them seemed wrong. One of my colleagues tried to re-sign a package locally with the same certificates/keys installed and got the same error.

Fortunately our build servers didn't get the error every time, so we managed to get a build out for the release.

When I later searched for the same keywords but limited the results to the last 24 hours, I finally found the real answer. According to this discussion:

Apple timestamp server(s) after all that is the problem here. If I add the --timestamp=none option, codesign always succeeds.

I have exactly the same problem. Probably Apple got two timeservers, with one broken, and a 50% chance for us to reach the working one.

And it worked perfectly for us as well. The only thing I didn't know was whether it is safe to release a build without requesting a timestamp (or where we could find other trusted timestamp servers).
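For the record, the workaround amounts to adding a single flag to the codesign invocation (the signing identity and bundle name here are made up):

```shell
# Normally codesign contacts Apple's timestamp server to obtain
# a signed timestamp as part of the signature:
codesign --force --sign "Developer ID Application: Example Corp" Example.app

# With --timestamp=none no timestamp is requested, so a broken
# timestamp server can no longer make the signing step fail:
codesign --force --sign "Developer ID Application: Example Corp" \
         --timestamp=none Example.app
```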

This morning I woke up and saw this summary of yesterday's incident.

According to Allan Odgaard (the author of TextMate):

As long as the key hasn’t expired, there should be no issue with shipping an app without a date stamp, and quite sure I have shipped a few builds without the signed date stamp.

That at least gives us some confidence that if such an incident happens again, it shouldn't be a big issue to turn timestamps off.

Update: More explanations from Apple:

The point of cryptographic timestamps is to assist with situations where your key is compromised. You recover from key compromise by asking Apple to revoke your certificate, which will invalidate (as far as code signing and Gatekeeper are concerned) every signature ever made with it unless it has a cryptographic timestamp that proves it was made before you lost control of your key. Every signature that does not have such a timestamp will become invalid upon revocation.

Connecting a Mac to a Surround Sound System

Having watched movies at home for quite a while, I have long been pondering whether to switch to a more capable 5.1 surround sound system. Since space is limited, a professional audio setup would be overkill and I wouldn't hear the difference anyway. But a recent XBMC bug with simulated analog stereo got me thinking again about switching to digital output, so that the DTS 5.1 audio in most BluRay rips doesn't go to waste.

Logitech Z906

A Mac has two ways to output digital audio: through the combined digital/analog 3.5mm audio port with a TOSLINK cable, or through HDMI. Since the Mac mini I use for movies is still a 2009 model, TOSLINK is the only option. Note that, compared to HDMI, TOSLINK's bandwidth limit means it cannot carry DTS-HD MA or Dolby TrueHD audio directly; but given that OS X cannot output these two formats directly even over HDMI, this doesn't really matter.

For the hardware, I first considered a low-end AV receiver such as the Yamaha RX-V473 or the Pioneer VSX-527-S, both similarly priced at 2000 ~ 3000 kr. The advantage is that either of these low-end receivers would be more than enough for my needs, and both support 7.1 HD audio, leaving room for future upgrades. The drawback is also obvious: a receiver alone only solves half the problem; I would still have to buy a 5.1 speaker set to replace my old Logitech Z523 2.1 system.

So after taking a closer look at Logitech's current lineup, the Z906, the successor to the Z5500, turned out to be a very good choice: it has built-in DTS and Dolby 5.1 decoding plus both digital and analog audio inputs, which saves the cost of a receiver, and it is a decent 5.1 speaker set in its own right. Reviews everywhere looked good, so I bought one to try, together with a TOSLINK to 3.5mm cable, with the 3.5mm plug going into the Mac mini. Alternatively, you can buy a TOSLINK to TOSLINK cable plus a TOSLINK to 3.5mm adapter.

The package arrived today. Once connected, OS X recognized the digital output right away (a side effect is that this locks the volume, which must then be controlled on the decoder unit). After switching XBMC to digital output, the 5.1 DTS/AC3 audio was decoded correctly by the Z906 without a hitch. The only thing to watch out for is that the Z906's "Input" selection must be switched to the port the digital input is connected to (there are two TOSLINK inputs, 3 and 4; I use 3). Compared to the old speakers, the sound is indeed a big improvement.

Besides XBMC, both MPlayerX and VLC should support 5.1 DTS/AC3 output, but I haven't tried other applications, such as Mac games, yet. I may also get another TOSLINK cable and see how PS3 gaming sounds on it.

The Logitech Z906's shortcomings are obvious, too: no DTS-HD MA/Dolby TrueHD decoding, and no HDMI input or output. Hopefully the next generation will improve on this.

My new job

In my previous post I talked about leaving Nokia and the Qt community. So what am I joining? It turns out I'm staying in Oslo, for Opera Software. Why? There are a few reasons.

  • When I applied for a job at Nokia, Qt Development Frameworks, I also sent my resume to Opera, but their response came too late (I got a “Your background looks very interesting…” letter after 4 months). By the time I received it, I had finished my interviews at Nokia and almost decided to join them. So I joined the trolls for 2 years, but I have always wondered what it would be like to work on Opera instead. Now I have the chance.
  • I joined the trolls expecting to be a Mac developer, but as it turned out I actually focused on my other interest: typography. It's wonderful to be one of the few typography engineers in the world, but I still want to sharpen my Cocoa skills from time to time. So now I'm working full time as a Mac developer for Opera.
  • Working on typography has been my dream job since I was a child. But I had the fear that I was too familiar with the internals of Qt, and thus afraid of change and of learning new things. Now I get exposure to a whole new area and have to quickly learn a lot of new things, which is exactly what I wanted.
  • Doing framework work is a great learning experience: the code has to be solid and stable, and I get to work with many great engineers. But from time to time I wanted to work on a product closer to the end user, like a browser; something you can tell the rest of the people about at a party. (Explaining Qt to non-tech people is not exactly my strength.)

I have worked in the new Opera office for more than a month, and so far it has been a really great experience. The work is fast-paced and challenging, and my colleagues are friendly. The best thing so far is that we have free beer every Friday 🙂 I will probably write about my job again after a few months and tell you more.