hosting-compose (or) the sad buyout of Webfaction

docker-compose is one of those essential tools that make working with docker so much better. I do use docker directly occasionally, but for anything non-trivial, I’d reach for docker-compose immediately. It allows you to “glue” things together and describe the whole stack in such a neat way.

I currently handle my dev environments with docker-compose, and even some live and staging deployments (like thumbor). I also manage remote database backups with it (using restic, postgresql, stunnel, redis and rdb-tools). In the latter example, it saves me from installing different versions of the database clients and connectors. I am able to instantly upgrade them, and then connect to the remote databases and back them up or restore. It makes the backup system itself immutable and disposable.

Recently however, I started using docker-compose for something that I haven’t considered before: a replacement for shared hosting.

a snippet of my hosting-compose docker-compose.yml
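The snippet itself appeared as an image in the original post. As a purely hypothetical illustration (the service names, images and volumes below are mine, not the author’s), a shared-hosting-style compose file might look something like this:

```yaml
version: "3"
services:
  proxy:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./conf.d:/etc/nginx/conf.d:ro
    restart: unless-stopped

  blog:
    image: wordpress:fpm-alpine
    env_file: blog.env
    volumes:
      - blog_data:/var/www/html
    restart: unless-stopped

  db:
    image: mariadb:10
    env_file: db.env
    volumes:
      - db_data:/var/lib/mysql
    restart: unless-stopped

volumes:
  blog_data:
  db_data:
```

Each “tenant” of the shared host becomes a service behind a single reverse proxy, and `restart: unless-stopped` gives you the always-on behaviour you’d expect from shared hosting.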


Continue reading “hosting-compose (or) the sad buyout of Webfaction”

too many toys?

My wife and I are wannabe-minimalists. We try to reduce how much we consume, make our home a bit more organized, and get rid of excess. We also like vintage items, so it’s always hard. Next to my desk, I have an old calculator from the 60s or 70s (I guess) that I picked up at a flea market a few years ago. It’s just cool, but serves no purpose. Maybe I should get rid of it, but it’s still there. Right next to my own Nokia 8210 from 1998 or so… I somehow got attached to this phone.

My calculator and Nokia 8210

Continue reading “too many toys?”

Security through obscurity with Bitwarden

I never thought I’d write something negative about Bitwarden. I love it. It’s an incredible password manager, and I even created envwarden: a small open-source wrapper to handle server secrets with Bitwarden.

But I recently bumped into a small issue that looks like Security through obscurity to me. And I thought it was odd for a security-focused product.

The issue was that I couldn’t export the items in my company’s vault, even though I had access to the cards [1].

I contacted Bitwarden about it, and they said that:

An Organization user cannot export the Organization’s Vault without being an Admin or Owner.

I tried to understand why: I did have access to the cards in my organization, so why couldn’t I export them? I was told:

We do not allow people to export the Organization Vault unless they are an Admin simply because this has been requested by demand from our customers. Being able to dump all passwords in one quick action is different than having to access every one individually to copy them out.

I explained that this seems like Security through obscurity, since I had vault access, and also it’s trivial to dump all passwords using the Bitwarden CLI anyway.

Continue reading “Security through obscurity with Bitwarden”

SEO optimization for suckers

There’s a famous Jewish, Yiddish phrase:

Man plans and God laughs.

I think the same applies to SEO and Google nowadays.

Man SEOs and Google laughs.

I was always a bit suspicious of SEO, and let’s face it, the sea of snake-oil SEO salesmen doesn’t help to establish credibility here, does it?

But I think that I’m becoming even more cynical of it every day.

The problem with getting good advice for SEO is that there’s no money in telling you “Don’t do anything”, “It’s a waste of time”, or “Focus on valuable content for your audience”. But there’s tons of money in doing a site audit, in telling you about best strategies to extract link juice, or why alt tags for images are important.

But it works

Continue reading “SEO optimization for suckers”

Planning for the unplanned

There’s an expression in Hebrew: “Baltam”. It’s a shorthand for something unplanned, or more precisely, it strongly implies: [something that is] impossible to plan. I think it has its roots in the military. On the battlefield, you always have to account for some surprises. You cannot possibly have everything planned. Israelis are also (in)famous for improvising. Not so famous for planning ahead.

As an (ex?) Israeli, I recently felt awkward, essentially being accused of being overly bureaucratic. And by a German colleague, of all people. Can you imagine it?? :)

Some things take you by surprise

Ok, and just to clarify one thing, this post isn’t about cultural stereotypes, but rather trying to figure out a practical approach to a real problem that my team is facing with new ideas and features:

How do you deal with new tasks or ideas, especially small ones?

Continue reading “Planning for the unplanned”

simple and secure cron using AWS Lambda

Many apps require some tasks to execute on schedule: cleaning up inactive user accounts, generating daily, weekly or monthly reports, sending out reminders via email, etc.

cron is a simple and trusted scheduler for unix, and is used on pretty much any unix-based system I come across.

So cron seems like a natural candidate for triggering those job executions. But it’s not always the best solution.

In our case, we’ve used the whenever gem for rails successfully for a long while. The gem acts as a cron DSL and lets you inject and manage cron entries from your rails app.

The problem starts, however, when you grow and your app spans more than one server. Or even if you only use one server, but want to be able to fail over, or switch from one server to another.

Why? Suddenly you have more than one cron launcher, and jobs that should execute once end up executing once on each server. This can cause some weird and unexpected lockouts, duplication and other issues.

So what’s the alternative?
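The full answer is behind the cut, but the shape of the idea is easy to sketch: a single scheduled Lambda (triggered by a CloudWatch Events / EventBridge rule) makes one signed HTTP call to the app, so each job fires exactly once no matter how many app servers sit behind the load balancer. This is a hedged sketch only; the endpoint, environment variables and HMAC scheme below are illustrative, not the post’s actual code:

```python
# a minimal sketch of a scheduled Lambda that pings the app over HTTP.
# CRON_ENDPOINT / CRON_SECRET and the HMAC scheme are illustrative names,
# not a specific product's API.
import hashlib
import hmac
import json
import os
import urllib.request

ENDPOINT = os.environ.get("CRON_ENDPOINT", "https://app.example.com/cron")


def sign(secret: str, payload: str) -> str:
    """HMAC-SHA256 signature the app can recompute to authenticate the call."""
    return hmac.new(secret.encode(), payload.encode(), hashlib.sha256).hexdigest()


def handler(event, context):
    # EventBridge invokes this on a schedule; the event can name the task
    payload = json.dumps({"task": event.get("task", "daily_report")})
    req = urllib.request.Request(
        ENDPOINT,
        data=payload.encode(),
        headers={
            "Content-Type": "application/json",
            "X-Signature": sign(os.environ["CRON_SECRET"], payload),
        },
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return {"status": resp.status}
```

The app verifies the signature before running the task, which is what makes this “secure” rather than just a public URL anyone can hit.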

Continue reading “simple and secure cron using AWS Lambda”

why I stopped using Intercom

This post has been in the back of my mind for a couple of years now. I think we actually switched off Intercom in 2016 or so… But the reasons should still stand now, or might even be stronger. Of course, things might have shifted, so please forgive me if some features are totally different by now.

For those who don’t know Intercom (well, you probably do know it, but maybe not by name): it’s the technology (or company) that adds those little “bubbles” on websites, with friendly faces offering to help.

How Intercom works

Of course, Intercom isn’t the only one now, and there are a few competitors in this space. The principle is pretty similar though. I think Intercom was the most successful company doing this, or the first, or both. But it’s not really important. This is mostly about Intercom as a concept, rather than a specific implementation.


The short, simple, and most crucial reason: it didn’t work. How do I know? We A/B tested it, over a fairly long period and with a large number of people.

Continue reading “why I stopped using Intercom”

marketing lessons from the street market

When you walk inside the Ben Thanh market in Ho Chi Minh City, Vietnam, you’ll eventually end up inside the food area. There are probably hundreds of stalls selling local food. Lots of delicious Banh Mi sandwiches, noodle soups, fruit juices and summer rolls.

One thing you can’t ignore, however, is that as soon as you walk around, you’ll get approached by one of the stall owners. They’ll simply hand you the menu to choose from.

Continue reading “marketing lessons from the street market”

take your pick

The book piqued my curiosity, so I picked it up and took a peek at the first page. It was written by an artist at the peak of her career.

As a non-native speaker, I guess when I pronounce any of those words: pick, peek, peak, or pique, they all sound the same. So it’s even harder for me to memorise which is which. I mostly get it right, but can occasionally confuse some forms.

Especially peek and peak.

It doesn’t happen with meet and meat, feet and feat, leek and leak though. I wonder why.

Continue reading “take your pick”

Innovation, Promises, Lies and Toupées

I recently finished reading “Bad Blood – Secrets and Lies in a Silicon Valley Startup”, by John Carreyrou. It’s a remarkable piece of investigative journalism and an amazingly gripping read. I just couldn’t put it down.

I think it particularly stood out because of the stark contrast with another book I recently wrote about: “It doesn’t have to be crazy at work”, by the co-founders of Basecamp.

Continue reading “Innovation, Promises, Lies and Toupées”

A Guava a day

This isn’t exactly a standard type of post for this blog, but then perhaps I shouldn’t be too strict with myself as far as things I write about. After all, it’s my personal blog. I make (and break) the rules. And anyway, nobody reads it. If you are reading this, consider yourself one of a very select few.

I’m no health specialist, and this is just a sample size of one, and much less scientific than my A/B testing for coffee (which wasn’t scientific at all), but I’m totally crazy about Guava, and what I perceive to be its health benefits for me.

Growing up in Israel, I knew guavas as a kinda pungent, slightly mushy, yellowish fruit. It was also one of those things the local Israeli folklore qualified as “either you love it or you hate it” (we have no Marmite in Israel, not to my knowledge anyway. Or maybe there’s a strong consensus and everyone hates it? Anyway, I digress).

I guess I was in the “love it” camp, but I don’t recall being particularly crazy about it either. I think the local wisdom was also that it causes constipation, so I guess I tried not to have too much of it.

I no longer live in Israel. But I also don’t come across Guavas. At all.

I lived in London for a number of years, and I don’t recall eating any there, or even seeing them. Maybe pink, artificial guava juice. I’ve now been living in Berlin for several years, and I can’t recall seeing any here either. How come??

I do see them everywhere in Thailand and Vietnam though. They’re literally around every street corner. Every fruit stand would typically have them beside papayas, pineapples and watermelons. You can also have a proper, fresh guava juice or shake in lots of places.

White Guava

Continue reading “A Guava a day”

invisible reCAPTCHA v3 with Rails and Devise

We’ve recently been getting hit by more and more bots.

Some of them are crawling our site and hitting valid or invalid endpoints. We’ve seen plenty of credential stuffing attacks as well. Most of them distributed across different IPs, with each IP hitting us at low frequency.

And most recently, someone abused our registration form to spam their recipients via our system.

It was quite clever actually. When you register, you enter your name, email and password. We then send a confirmation email saying something like

“Hey Roberta, thanks for joining. Please click here to confirm your account”.

Now those guys used their victims’ email addresses, and stuffed the name field with a link to a URL. So those users would get an email:

“Hey lottery tickets, thanks for joining. Please click here to confirm your account”.

Slimy. Naturally, our own email system took the hit of sending the spam. Double ouch.

Luckily, we had some anomaly detection in place, and we blocked those guys quickly. They used some browser automation from a fixed set of IPs, so it was easy to block. At least until the next wave…

I’ve been dealing with those types of scenarios with fail2ban, and it’s really quite effective. We define regular expressions to inspect our log files matching certain patterns, and then ban if we see repeated offensive behaviour. fail2ban is limited though in some aspects.

First of all, those rules are a bit of a pain to create and maintain, and you need to make sure the offending IP appears on the application log record you want to capture. In some cases it’s easy, but not always. The bigger problem however is that fail2ban doesn’t scale. The more servers you have — let’s say in a load-balanced setup — the less accurate fail2ban becomes. Or you need to aggregate all your logs on a single fail2ban host, creating a single point of failure or a bottleneck…

So I was searching for a better solution. Sadly there aren’t many. Cloudflare, which we also use, offers some degree of protection. But it’s not as flexible. And of course there’s reCAPTCHA. You know, those annoying things asking you to pick traffic signs, or even just click “I’m not a robot”?

Now, I was initially hesitant to use it. I’m not sure why, but the fact that it doesn’t really have any real competition bothers me. Plus, as a user, I’m frequently annoyed by those challenges, and I hate this experience.

Luckily, the latest version of reCAPTCHA (v3) doesn’t present any user-facing challenges. It’s completely invisible. The no-competition problem is not something I can solve. I discovered that even Cloudflare itself uses reCAPTCHA in some cases! And these guys have their own Javascript challenge and what not… So I decided to bite the bullet, and give it a shot.

Setting it up is surprisingly simple and, from my limited experience, quite effective. That is, the scores it produced were surprisingly accurate, albeit my ability to test different scenarios was limited.

I’ll try to give some pointers for implementing reCAPTCHA v3 with Rails 5.1 and Devise 4. The implementation can work on any form or controller however, and not just with Devise.
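The Rails specifics are behind the cut, but the server-side half of v3 is language-agnostic: your form submits a token, and your backend POSTs it to Google’s siteverify endpoint, which returns `success`, `score` and `action` fields. Here’s a sketch in Python for brevity (the 0.5 threshold and the `signup` action name are arbitrary starting points you’d tune, not part of the API):

```python
# sketch of reCAPTCHA v3 server-side verification; the endpoint and response
# fields are Google's documented siteverify API, the threshold is illustrative
import json
import urllib.parse
import urllib.request

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"


def verify_token(secret: str, token: str) -> dict:
    """POST the form's token to Google and return the parsed JSON response."""
    data = urllib.parse.urlencode({"secret": secret, "response": token}).encode()
    with urllib.request.urlopen(VERIFY_URL, data=data, timeout=5) as resp:
        return json.load(resp)


def human_enough(result: dict, expected_action: str = "signup",
                 threshold: float = 0.5) -> bool:
    """Decide from the parsed siteverify response whether to allow the request."""
    return (result.get("success", False)
            and result.get("action") == expected_action
            and result.get("score", 0.0) >= threshold)
```

Checking `action` as well as `score` matters: it confirms the token was minted for the form you expect, not lifted from elsewhere on your site.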

Continue reading “invisible reCAPTCHA v3 with Rails and Devise”

Is it zen at work?

I really enjoyed reading It Doesn’t Have to Be Crazy at Work recently. It’s another bestseller from Basecamp. After reading Rework before, a lot of things felt a bit familiar. Too familiar, perhaps. But their new book still has a few new ideas and covers things from a different angle. Well worth a read.

Working remotely, and at a company with very similar culture and values to Basecamp, a lot of what they write about resonated. Much of the way we structure things at work was, to be completely honest, inspired by or wholesale copied from Basecamp. Why reinvent the wheel when someone hands you an instruction manual for building a perfect one?

But some things caught me by surprise. It felt a little too zen, or even contradictory in some cases? But it definitely gave me pause. Maybe we’re doing some things wrong, and can improve even further? I’m still unsure, but hope we can experiment with some ideas. Let me jump into a few examples…

Continue reading “Is it zen at work?”

SmugMug video data loss

I’ve written only recently about SmugMug, and expressed my frustration as a developer who built an open-source tool for their platform. This led me to try to get my data out of SmugMug, as I was considering moving away from it as well… Only to discover that some of my video data is lost and/or not being made available. This applies only to videos: the quality is potentially degraded, and the metadata that is available on SmugMug cannot be downloaded or exported out of their platform.

If you upload a video to SmugMug, they don’t actually store the original video for you. Here’s a quote from their official page:


We don’t keep a copy of the original video you upload. We make high-quality display copies, which are probably altered from what you send us.

I’m not sure what this high-quality display copy means in actual terms, but I won’t be surprised if some quality is lost in the process. For a company that prides itself on caring for photographers, where quality and reliability are key, I find it rather vague and disconcerting.

Furthermore, what isn’t mentioned on this page is that if you want to download your videos again, those videos will be stripped of the original metadata as well. This metadata includes information about the camera you used, the date/time of the video, location information etc. All of this data is still stored on SmugMug, but you can’t get it back when you download it. It’s locked in. For me, personally, this is even worse than losing video quality. My video memories are very tightly linked to the time and location of those original scenes. Without this info, the videos are next to useless. I just can’t find them (without manually going through hundreds or thousands of dateless and location-less videos, that is).

Continue reading “SmugMug video data loss”

An open letter to SmugMug


SmugMug is great, but its developer ecosystem is, in my humble opinion, crumbling, and could use some serious love — or be put out of its misery and die…

Dear SmugMug, there are lots of people, myself included, who want to see you thrive and succeed. People who are spending their free time, resources and energy on sharing their tools with the community. People who can build great things on top of SmugMug, and can make SmugMug even more successful than it currently is. Please don’t forget us. We are the potential evangelists, multipliers, and we do this for free. Please treat our free gifts with respect. These gifts might be free, but they are precious. They should be cherished, rather than ignored, or discarded.

Continue reading “An open letter to SmugMug”

The one (stupid) feature

My wife and I started using Amazon Photos a while ago. I didn’t think much of it at first, but it was included with our Prime membership, and offered automatic upload from our phones, plus free storage (for photos), so why not?

Fast forward a couple of years. We’ve since cancelled Prime, and I wanted to switch to Dropbox, which has comparable automatic upload, a mobile app, and superior sync with a proper linux client. But I couldn’t. Why? Because of this one (stupid) feature.

Which one?

Continue reading “The one (stupid) feature”

Why I’m not using Fastmail

Prepare for a somewhat ranty post, but it doesn’t come from a bad place. I honestly want Fastmail to succeed. I’m eager to see more alternatives for email hosting and clients (and there are scarily few).
I also acknowledge that some of the problems I bumped into are quite specific to my own setup, which isn’t common. So in some ways, it’s not about you, Fastmail. It’s me. Make your own judgement.

TL;DR – Fastmail is pretty neat, but their support sucks. Their support ticket system sucks even more, and their product is not clear enough to work without support. From my personal experience anyway.

Continue reading “Why I’m not using Fastmail”

a scalable Analytics backend with Google BigQuery, AWS Lambda and Kinesis

In my previous post, I described the architecture of Gimel – an A/B testing backend using AWS Lambda and redis HyperLogLog. One of the commenters suggested looking into Google BigQuery as a potential alternative backend.

It looked quite promising, with the potential of increasing result accuracy even further. HyperLogLog is pretty awesome, but trades space for accuracy. Google BigQuery offers a very affordable analytics data storage with an SQL query interface.

There was one more thing I wanted to look into that could also improve the redis backend: batching writes. The current gimel architecture writes every event directly to redis. Whilst redis itself is fast and offers low latency, the AWS Lambda architecture means we might have lots of active simultaneous connections to redis. As another commenter noted, this can become a bottleneck, particularly on lower-end redis hosting plans. In addition, any other backend that does not offer low-latency writes could benefit from batching. Even before trying out BigQuery, I knew I’d be looking at much higher latency and needed to queue and batch writes.
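As a rough sketch of the batching idea (the function and stream names are hypothetical, and this is not Gimel’s actual code): buffer events and flush them to Kinesis in chunks rather than one write per request. Kinesis `put_records` accepts at most 500 records per call, hence the chunking:

```python
# sketch: chunk buffered events and flush them to Kinesis in batches
import json


def chunk(records, size=500):
    """Split records into lists of at most `size` items (Kinesis's limit)."""
    return [records[i:i + size] for i in range(0, len(records), size)]


def flush(stream_name, events):
    """Send buffered events to Kinesis; event shape here is illustrative."""
    import boto3  # third-party; imported lazily so the pure helper above
                  # works without it installed
    client = boto3.client("kinesis")
    for batch in chunk(events):
        client.put_records(
            StreamName=stream_name,
            Records=[{"Data": json.dumps(e), "PartitionKey": e["experiment"]}
                     for e in batch],
        )
```

A Lambda consumer on the other end can then drain each batch into BigQuery (or redis) with far fewer connections and round-trips.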

Continue reading “a scalable Analytics backend with Google BigQuery, AWS Lambda and Kinesis”

a Scaleable A/B testing backend in ~100 lines of code (and for free*)

(updated: 2016-05-07)

tip-toeing on the shoulders of giants

Before I dive into the reasons for writing Gimel in the first place, I’d like to cover what it’s based on. Clearly, 100 lines of code won’t get you that far on their own. There are two (or three) essential components this backend runs on, which make it scalable and also light-weight in terms of actual code:

  1. AWS Lambda (and Amazon API Gateway) – handle the requests to both store experiment data and to return the experiment results.
  2. Redis – using Sets and HyperLogLog data structures to store the experiment data. It provides an extremely efficient memory footprint and great performance.
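To make the HyperLogLog part concrete, here is a hedged sketch (the key names are hypothetical, not Gimel’s actual schema) of how experiment events can map onto redis via redis-py, which exposes PFADD/PFCOUNT as `pfadd()`/`pfcount()`:

```python
# sketch: one HyperLogLog per (experiment, variant); PFCOUNT estimates
# unique participants in ~12KB per key, regardless of traffic volume
def hll_key(experiment: str, variant: str, event: str = "participate") -> str:
    return f"gimel:{event}:{experiment}:{variant}"


def track(redis_client, experiment: str, variant: str, client_id: str) -> None:
    """Record one client in the variant's HyperLogLog (idempotent per id)."""
    redis_client.pfadd(hll_key(experiment, variant), client_id)


def uniques(redis_client, experiment: str, variant: str) -> int:
    """Estimated number of unique clients who saw this variant."""
    return redis_client.pfcount(hll_key(experiment, variant))
```

Because PFADD is idempotent per client id, repeated events from the same visitor don’t inflate the count — which is exactly what you want for experiment participation.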

For free?

Continue reading “a Scaleable A/B testing backend in ~100 lines of code (and for free*)”

Stop showing me your homepage

I hadn’t noticed it much before, but it’s become a pet peeve since I started paying attention to it.

We LOVE homepages. Like eyes being the windows to our souls, our homepage shows who we really are. What we stand for. They turn random visitors into loyal customers. They inspire trust, build an emotional connection, they bind us together… ok ok. You got the picture. Homepages are great.

But once I’m sold. I’m in. I gave you my email. I’m a loyal customer. I go to your site every. single. day. Do I really need to see your homepage again??! Do I actually care that you changed the photo on the frontpage and highlighted another benefit to potential customers? Or most importantly – do I really have to click the ‘Login’, ‘Go to my app’, ‘Dashboard’ or whatever other link you give me to get started?

Continue reading “Stop showing me your homepage”

Coffee A/B Tasting – Results

This is the final post in this series. I started by covering the method for A/B testing coffee, as well as the motivation and approach. I later wrote about the first test session using the Hario V60 and about comparing those beans as Espresso, and the last post described two more preparation methods: Aeropress and Cappuccino.

I repeated a similar process using various combinations of the A, B, C, D and E coffee beans. This post will be briefer, with the “results” based on my personal preferences and how I ended up scoring all 5 types of beans.

Continue reading “Coffee A/B Tasting – Results”

Coffee A/B Tasting – Creme de la Crema

In my previous post, I covered the first blind A/B tasting session using the “Gingerlime Tasting Technique” ™. You can read some more background about the motivation and method, as well as a full list of coffees I’m comparing, in the first post in the series.

After the first taste using the pour-over Hario V60 filter, I was anxious to find out whether both A and B coffees would show similar characteristics using other preparation methods. Namely: Espresso, Aeropress and Cappuccino. Would B stay my favourite when served with milk? Would the Aeropress extract different flavours out of A than I managed with the Hario?

Continue reading “Coffee A/B Tasting – Creme de la Crema”

Coffee A/B testing – first A/B taste

This is the second post in a series exploring the “Gingerlime Tasting Technique” ™. You can read some background in the previous post, where I explain the motivation and testing method, and how I started exploring A/B testing for coffee: different tasting sessions comparing two types of beans, trying to choose the better of the two.

A taste test

The first tasting was between coffee A and B (still unknown to me at this point in time). The test was actually a series of 4 different tasting sessions. Each session used a different method of making coffee: Hario V60 filter, Espresso, Aeropress and a Cappuccino.

Continue reading “Coffee A/B testing – first A/B taste”

Coffee A/B testing

I do quite a bit of A/B testing and find it to be a great tool for experimenting and ultimately improving things.

But what’s “Coffee A/B testing”?

The idea came to me when I was visiting my wife’s family in Japan. We went to a restaurant, and my father- and brother-in-law ordered two types of Sake. They let me taste both and decide which one I liked the most. It was a simple task, but an interesting one. The tastes were subtly different, but enough that I could clearly pick my personal favourite.

It then occurred to me that as much as I love coffee, and tend to pick some beans over others, I don’t quite know what makes me like a certain type, or what it is that I’m looking for in my “ultimate” coffee.

What if I could A/B test coffees? Try two types of beans (or blends), and pick the one I like. Then, by repeating the process, I could gradually find the one I like the most. And in doing that, I could also figure out what it is that I like, and pay more attention to the differences. I rarely compare coffees. Well, not any more!

Continue reading “Coffee A/B testing”

Cutting through red-tape with Stripe

It’s not all about the technology. Stripe does one thing that makes it light-years better than its competition: Time to market. Or in simpler terms, its activation process to allow you to receive actual payments.

My wife and I run a small website selling vintage items from Germany to Japan. Until now, my wife had been asking all her customers to pay via bank transfer. This is naturally time-consuming and, for most of her customers (Japanese housewives and arts & crafts lovers), inconvenient. A few months ago I suggested she introduce credit-card payments on her website. How difficult could this be to implement?

Continue reading “Cutting through red-tape with Stripe”

Android Teleportation (or silly location restrictions)

My wife and I recently had a baby. Amongst the toys and clothes we received as gifts, there were a few CDs and DVDs with music for the little one. We then realised that we no longer have a CD or DVD drive in our computers. So we bought an external USB DVD/CD drive. When playing the DVDs, the region-selection menu appeared. I’d nearly forgotten about it. Oh, the good ol’ copy-protection of the ’90s. So I chalked it up as one of those oddities of life, and thought how silly it seems today in the Internet age and all that. My wife is Japanese, and I’m Israeli. And we live in Berlin. Naturally each side of the family wanted to send us music in their own language, so there you go.

Only a few days later, my wife asked for my help with her Nexus 7. She had bought a few eBooks from a Japanese site. Those work fine on her iPhone and Mac. But somehow the Play store wouldn’t install the app (never mind why someone needs a bespoke app to read books).

“This item is not available in your country”.

This time I was determined to work around this.

Here’s a quick howto which does not require a rooted Android device.

Continue reading “Android Teleportation (or silly location restrictions)”

Route53 healthcheck failover for SSL pages with nginx

UPDATE: AWS recently introduced SSL Health checks. So the method in this post should no longer be necessary.

Amazon Route53 offers a DNS healthcheck that allows you to failover to another host / region if one IP is not responsive. This works great if you want to create a secondary site, or even a simple maintenance page to give your users a little more info than just an empty browser window.

There are some limitations to the healthchecks currently. Route53 allows you to choose between TCP and HTTP. However, there’s no HTTPS / SSL support for URLs.

So what can you do if your site is running only with SSL?
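One workaround (a hypothetical sketch, not necessarily the post’s exact config) is to expose a plain-HTTP location in nginx that proxies to the HTTPS site. Route53’s HTTP check then indirectly exercises the SSL stack: if HTTPS serving breaks, the health endpoint fails too.

```nginx
# plain-HTTP health endpoint for Route53; ports and paths are illustrative
server {
    listen 8080;

    location /health {
        # fails if the HTTPS server block stops serving
        proxy_pass https://127.0.0.1/health;
    }
}
```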
Continue reading “Route53 healthcheck failover for SSL pages with nginx”

Quick & Dirty SSL tunnelling for rails development

Just a quick & dirty guide on setting up SSL tunnelling in your development environment. This is written for Rails, but can easily be used for Django, Node, or any other web development.

Why SSL in development?

There’s no compelling reason to use SSL for development, but sometimes you just seem to have to. I was trying to build an integration with helpscout, using their dynamic custom app. For some reason, helpscout forces you to use SSL for the external URL. Even for development. I won’t go into details of why I think it’s unnecessary, but rather focus on how to set it up. After all, it might be something else that requires SSL within development, so here’s one quick way to do so.
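One quick way (a sketch with illustrative ports and cert path, assuming stunnel is installed) is a tiny stunnel configuration that terminates SSL on one port and forwards plain HTTP to the rails dev server:

```
; quick & dirty stunnel config: SSL on :3001, forwarded to rails on :3000
pid =
foreground = yes
cert = localhost.pem

[https]
accept  = 3001
connect = 3000
```

A self-signed certificate is fine for development, e.g. `openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem -days 365` and then `cat key.pem cert.pem > localhost.pem`.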

Continue reading “Quick & Dirty SSL tunnelling for rails development”

Getting a bit creepy

I spend a lot of time working with monitoring solutions, and like to measure and track things. The information we collect from our apps tells us a lot about what’s going on. Who’s using it. How frequently they access it. Where they are from. How much time they spend accessing the app etc. And then there’s a lot we can do as app owners with this data. We can measure it, trend it, slice and dice it and produce nice reports. We can also act on this info. Offer people stuff based on their behaviour. Use those ‘lifecycle’ emails to improve conversion. Increase our sales. Bring people back to using our products.

I’m getting used to those supposedly-personal emails from Matt, the founder of Widgets inc., who’s “just checking if I need any help using the product”, or Stuart from Rackspace who has “only one question”. I know it’s automated, but it’s fine. As long as I can hit reply and actually reach a person, that’s ok with me. I pretend not to notice.

However, I’m feeling recently that some of those emails get a little creepy. A couple of random examples:

Continue reading “Getting a bit creepy”

Matryoshka Fragment Caching in Rails

“Russian doll Caching” gained some popularity recently, I suspect in part due to its catchy (or cachie?) name and how easy it is to visualize the concept. Rails 4 should have this improved caching available by default. With Rails 3 you need to install the cache-digests gem. It’s pretty easy to get started with it, and the documentation is clear. It makes a lot of sense to start using it in your Rails app. I won’t attempt to cover the basics and will assume you are already familiar with it. I want to talk about a specific aspect of fragment caching surrounding the generation of the cache keys.
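Rails specifics aside, the nesting idea behind those cache keys is easy to show with a toy sketch (this illustrates the concept only, not Rails’ actual implementation): a template’s digest folds in the digests of its nested partials, so editing a partial changes every ancestor’s cache key and busts the whole chain of dolls.

```python
# toy illustration of the cache-digests idea behind russian doll caching
import hashlib


def template_digest(source: str, child_digests=()) -> str:
    """Digest of a template, combined with digests of the partials it renders."""
    h = hashlib.md5(source.encode())
    for child in child_digests:
        h.update(child.encode())
    return h.hexdigest()


def fragment_key(record: str, updated_at: str, digest: str) -> str:
    # record + timestamp bust the key on data changes;
    # the digest busts it on template changes
    return f"views/{record}/{updated_at}/{digest}"
```

Because the parent digest depends on its children, you never need to manually version nested fragments — the invalidation propagates upwards for free.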

Continue reading “Matryoshka Fragment Caching in Rails”

Django-Tastypie Authorization glitch


If a request using django-tastypie is not authorized, please make sure to raise an Unauthorized() exception in your _detail authorization methods in Tastypie v0.9.12.

The longer version

In one of my previous posts I wrote at length about django-tastypie authorization, and gave some tips and tricks on how to work more flexibly and securely with this framework. A lot has happened since, and it was hard to keep track of all the various changes and updates to Tastypie.

Since version 0.9.12, the authorization mechanisms in tastypie changed rather radically, and that’s a very good improvement. It plugged some holes with nested resources and authorization, and made authorization decisions more granular. From a simple is_authorized and apply_limits, now each operation can be authorized, broken down to CRUD elements (create, read, update, delete). Each element is authorized for _list and _detail operations (I’ll try to cover this in more depth on a follow-up post at some stage).

For now, I just wanted to highlight an important pitfall you might want to avoid when using the new tastypie authorization that could leave you exposed. There’s a fix in the pipeline very soon, but until then, you should protect yourself by making a small change to your authorization methods, and the _detail ones in particular.

The crux of the issue is that the _detail authorization methods should make a binary decision – is this authorized? (yes/no). If the method returns True, or does nothing, the request is authorized. If the method returns False or raises an Unauthorized exception, the request should be blocked.

The glitch is that if your _detail authorization methods return False, the request still goes through and is effectively authorized. Until the fix is in place, please make sure to raise an Unauthorized() exception if you’re using Tastypie v0.9.12.
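To make the safe pattern concrete, here is a self-contained sketch. The two stub classes stand in for `tastypie.authorization.Authorization` and `tastypie.exceptions.Unauthorized` so the example runs without tastypie installed; in a real app you would import those instead:

```python
# stand-ins so this sketch is self-contained; in a real app use:
#   from tastypie.authorization import Authorization
#   from tastypie.exceptions import Unauthorized
class Unauthorized(Exception):
    pass


class Authorization:
    pass


class OwnerOnlyAuthorization(Authorization):
    """Allow detail access only to the object's owner."""

    def read_detail(self, object_list, bundle):
        # raise -- don't `return False` -- so the request is blocked even on
        # Tastypie 0.9.12, where a False return from *_detail slips through
        if bundle.obj.user != bundle.request.user:
            raise Unauthorized("You do not own this object.")
        return True

    # the same binary yes/no decision applies to updates and deletes
    update_detail = read_detail
    delete_detail = read_detail
```

The class is then wired up on a resource as usual, via `Meta.authorization = OwnerOnlyAuthorization()`.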

Software? eugh!

I had a strange conversation with my wife this morning.

She told me that Google Reader is closing down.

She’s using it much more than I do. So I said to her something like “I’m sure you can install some other RSS reader software to replace Google Reader”.

Her response was a bit of a surprise to me: “Software?! eugh!”.

Then I said “Ok then, or an app”, and she seemed rather pleased.

How did software become such a dirty word?!

Graphite Alerts with Monit

I love Graphite. It's the most robust, flexible, kick-ass monitoring tool out there. But when I say monitoring, I'm not actually describing what graphite really does. In fact, it does almost anything but monitoring. It collects metrics via carbon, stores them using whisper, and provides a front-end (both API and web-based) via graphite-web. It does not, however, monitor anything, and certainly does not alert when certain things happen (or fail to happen).

So graphite is great for collecting, viewing and analyzing data, particularly with the multitude of dashboard front-ends, my favourite being giraffe ;-). But what can you do when you want an email or a text message when, say, carbon throws some errors, or your web server starts to bleed 500's like there's no tomorrow? Even better, wouldn't you want an email when your signup conversion rate drops below a certain mark?

Monitoring graphite

So what can you use if you want to monitor stuff using graphite? And what kind of stuff can you monitor? I've come across a really great approach using nagios. In fact, I 'borrowed' the author's method for alerting on 500 errors for my own approach. I wanted to do something very similar, but I really didn't want nagios. It's overkill for me if all I want is to get an email (or run a script) when something goes wrong.
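To give a flavour of the approach ahead of the details: Monit can run any small check script and alert on its exit status. Here's a sketch of such a script, querying Graphite's render API in its raw format and returning a non-zero status when the latest datapoint crosses a threshold. The metric name, URL and threshold are illustrative assumptions, not the exact script from this post.

```python
import urllib.request


def last_value(raw):
    """Return the most recent non-None datapoint from a Graphite
    'raw' render response, shaped like: target,start,end,step|v1,v2,...,vN
    """
    values = raw.strip().split("|", 1)[1].split(",")
    for v in reversed(values):
        if v != "None":
            return float(v)
    return None


def check(url, threshold):
    """Fetch a metric and return a Monit-friendly exit code:
    0 = OK, 1 = threshold crossed (Monit alerts on non-zero)."""
    raw = urllib.request.urlopen(url).read().decode()
    value = last_value(raw)
    return 1 if value is not None and value > threshold else 0
```

Monit would then wire it up with something like a `check program` block (`if status != 0 then alert`), pointing at a URL such as `/render?target=stats.http.500&from=-5min&format=raw`.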

Continue reading “Graphite Alerts with Monit”

Rails IP Spoofing Vulnerabilities and Protection

I’ve recently bumped into an interesting post about a stackoverflow vulnerability discovered by Anthony Ferrara. I didn’t think too much about it. I’ve come across similar issues before, where the application relies on a piece of information that is easy to forge. Telephony systems are vulnerable to Caller ID spoofing, which is becoming increasingly easy with Voice-over-IP providers. Web-based applications can also be fooled if they rely on header information, such as X-Forwarded-For, a header typically set by proxy servers.

I was experimenting with switching rails from Phusion Passenger to Unicorn, when I suddenly came across a strange error message:

    ActionDispatch::RemoteIp::IpSpoofAttackError (IP spoofing attack?! HTTP_CLIENT_IP="" HTTP_X_FORWARDED_FOR=""): app/controllers/application_controller.rb:138:in `append_info_to_payload'

That looked quite impressive. Rails is trying to identify spoofing attacks and raise an exception when it happens? Nice.

However, after digging a little deeper, trying to figure out what’s actually happening, it seems that Rails might actually be vulnerable to spoofing attacks under certain setups. I will try to describe those scenarios and suggest a few workarounds to avoid any pitfalls.

What I observed applies to the latest stable Rails (3.2.9 at the time of writing), previous versions, and potentially future versions as well (including 4.0).


Your rails application might be vulnerable to IP spoofing. To test it, try to add a fake X-Forwarded-For header and check which IP address appears in your log files.


curl -H "X-Forwarded-For: 1.2.3.4" http://your-app.example.com/

You can try to implement one of the workarounds mentioned below.
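Whatever the framework, the principle behind such workarounds is the same: X-Forwarded-For is a comma-separated chain that each proxy appends to, so only the hops added by proxies you control can be trusted. Here's a language-agnostic sketch of that idea (not necessarily what Rails itself does; the trusted proxy address is a made-up example):

```python
TRUSTED_PROXIES = {"10.0.0.1"}  # assumption: your own load balancer's IP


def client_ip(xff_header, peer_ip):
    """Walk the chain right to left, starting from the connecting peer,
    and return the first address that isn't one of our trusted proxies."""
    chain = [ip.strip() for ip in xff_header.split(",") if ip.strip()]
    for ip in reversed(chain + [peer_ip]):  # the peer is the last hop
        if ip not in TRUSTED_PROXIES:
            return ip
    return peer_ip  # every hop was a trusted proxy
```

A spoofed header from an untrusted client is simply ignored: `client_ip("6.6.6.6", "203.0.113.9")` returns `"203.0.113.9"`, because the connecting peer itself is not a trusted proxy.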

Continue reading “Rails IP Spoofing Vulnerabilities and Protection”

I’m not pinterested in spam

Just a quick rant this time.

I recently signed up for pinterest. I wasn’t actually interested in signing up, but wanted to see what their sign-up process looks like. If you’ve read one of my previous posts, you’d know I nearly always use unique, unpredictable email addresses for new services I sign up to. Pinterest registration is quite nice, and only asks for a few details and an email address (that is, if you prefer a username and password instead of using Facebook or Twitter to login). Once you enter the details, pinterest sends a “Please verify your email” message to your inbox. So far, so good.

However, what happens if you don’t verify your email? That was the case here: I wasn’t actually interested in creating an account, and I assumed I wouldn’t hear from pinterest again. Wrong. I just received an email from pinterest, announcing their new secret boards. So much for confirming my account. According to Spamhaus, this is considered unconfirmed opt-in, which is categorized as spam.

To add insult to injury, if I try to opt out of the email I just received, Pinterest asks me to log in to my (unconfirmed) account. These are all small annoyances, I know. But is it really that difficult to do things right? An unconfirmed account should not receive any messages. Opt-out links should just be one click, and that’s it.

Statsd and Carbon security

I’ve written about installing and using Graphite, and it’s a really great tool for measuring all kinds of metrics. Most of the guides online don’t touch on the security aspects of this setup, and there was at least one thing I thought was worth writing about.

How are we measuring

Metrics we gather from our applications have the following characteristics / requirements:

  • We want to gather lots of data over time.
  • Any single data-point isn’t significant on its own. Only in aggregate.
  • Measuring is important, but not if it slows down our application in any way.
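These requirements are exactly why statsd favours fire-and-forget UDP for its wire protocol: no handshake, no waiting, and nothing blows up in the application if the collector is down. A minimal sketch of a counter increment (host, port and metric name are just examples):

```python
import socket


def statsd_incr(metric, value=1, host="127.0.0.1", port=8125):
    """Send a statsd counter packet over UDP. The send doesn't block on
    a response, so a dead or unreachable collector can't slow the app."""
    payload = f"{metric}:{value}|c".encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))
    return payload
```

The flip side of UDP, of course, is that packets can be silently lost or spoofed, which ties directly into the security discussion that follows.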

Continue reading “Statsd and Carbon security”

Rackspace ate my homework pt. II

For those who followed my previous post, I thought I should post a quick update.

Something positive

I was naturally quite surprised to be contacted by Rackspace shortly after posting. This was a nice surprise, and the communication afterwards was somewhat more understanding. At least I could sense they felt sorry for my situation.

Lost homework

As expected, there was no way to recover the lost image. I received a follow-up message on the original ticket confirming this quite clearly. They then rather swiftly changed the tone into legal-speak and referred me to their terms of service, which I quote here for the benefit of the world at large.

Continue reading “Rackspace ate my homework pt. II”

More ActiveAdmin Customizations with CanCan

Coming from Django, I was a little surprised/disappointed that permissions aren’t as tightly integrated with Rails’ ActiveAdmin as they are with the Django admin. Luckily, my search for better authorization for ActiveAdmin led me to this very informative post by Chad Boyd. It makes things much easier, so we can authorize resources more flexibly.

However, there were a couple of aspects that I still wasn’t 100% happy with:

  1. When an unauthorized action is attempted, the user is simply redirected with an error message. I personally prefer to return a 403 response / page. Yes, I’m nitpicking. I know.
  2. Default actions like Edit, View and Delete still appear, regardless of the permissions the user has. Clicking on them won’t actually allow you to do anything, but why show options on the screen that aren’t actually allowed?

So with my rather poor Ruby/Rails skills, and together with my much more experienced colleague, we made a few tweaks to the proposal in Chad’s post to make it happen.

Continue reading “More ActiveAdmin Customizations with CanCan”

Simple Detection of Comment Spam in Rails

It’s always nice to be able to get some feedback, or for users to make contact via a simple Contact form. However, it didn’t take too long before spammers started hitting those forms too. It was quite interesting to see the kind of messages we started receiving. In a way, most of those submissions were more like stories, or snippets from an email to a friend. They didn’t have any of the very much expected keywords for fake watches or erectile dysfunction enhancers. Many didn’t even have any links. So what were these messages then? My personal guess was that these were some kind of reconnaissance attempt. The bots send innocent messages first to various online forms. Then, I imagine, they crawl the site, trying to see if those submissions appear elsewhere. If/when they do, they hit those forms hard with the real spam content. In any case, these were all speculations that I didn’t really care to prove right or wrong. I just wanted to get rid of this junk. Fast.

Continue reading “Simple Detection of Comment Spam in Rails”

Bootstrap cloud shoot-out part II

A recent comment by Martyn on my cloud performance shoot-out post prompted me to do another round of testing. As the bootstrap process I described in the last post has evolved, it’s always a good idea to test it anyway, so why not kill two birds with one stone? The comment suggested that the Amazon EC2 micro instance is CPU-throttled, and that after a long period (in computer terms: about 20 seconds, according to the comment) you could lose up to 99% of CPU power, whereas on a small instance this shouldn’t happen. So is the EC2 small instance going to perform way better than the micro? And how will it perform against an equivalent Linode or Rackspace VPS?

Continue reading “Bootstrap cloud shoot-out part II”

Fabric Installer for Graphite

fabric-graphite is a fabric script to install Graphite, Nginx, uwsgi and all dependencies on a debian-based host.


I was reading a few interesting posts about graphite. When I tried to install it, however, I couldn’t find anything that really covered all the steps. Some guides covered it well for Apache, others covered Nginx but had steps missing or assumed the reader knew about them, etc.

I’m a big fan of fabric, and try to do all deployments and installations with it. This way I can re-run the process, and also better document what needs to be done. So instead of writing another guide, I created this fabric script.
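To show why a fabric script doubles as documentation, here's the rough shape of such an installer. This is not the actual fabric-graphite code: the package list is abbreviated, and `run` (which in real fabric executes the command on the remote host over SSH) is stubbed out so the sketch runs standalone.

```python
# Stand-in for fabric.api.run: records commands instead of running
# them over SSH, so this sketch is self-contained.
executed = []


def run(cmd):
    executed.append(cmd)


def install_graphite():
    """Each step documents the install process as well as performing it."""
    run("apt-get update -q")
    run("apt-get install -yq python-dev python-pip libcairo2-dev")
    run("pip install whisper carbon graphite-web")


install_graphite()
```

Re-running the real task against a fresh host replays exactly these steps in order, which is what makes the process repeatable.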

Continue reading “Fabric Installer for Graphite”

bootstrap shooting at the clouds

One of my primary aims when building a resilient cloud architecture is being able to spawn instances quickly. Many cloud providers give you tools to create images or snapshots of existing cloud instances and launch them. This is great, but not particularly portable. If I have one instance on Linode and I want to clone it to Rackspace, I can’t easily do that.

That’s one of the reasons I am creating bootstrap scripts that completely automate a server (re)build process. Given an IP address and root password, the script should connect to the instance, install all necessary packages, pull the code from the repository, initialize the database, configure the web server and get the server ready for restore of user-data.

I’m primarily using fabric to automate this process, and use a standard operating system across different cloud providers. This allows fairly consistent deployments across providers. It also means the architecture is not dependent on a single provider, which in my opinion is a huge benefit. Not only can my architecture run in different data centres or geographic locations, but I can also be flexible in my choice of hosting providers.

All that aside however, building and refining this bootstrapping process allowed me to run it across different cloud providers, namely: Rackspace, Linode, and EC2. Whilst running the bootstrapping process many times, I thought it might be a great opportunity to compare the performance of those providers side-by-side. My bootstrap process runs the same commands in order, and covers quite a variety of operations. This should give an interesting indication of how each of the cloud providers performs.
Continue reading “bootstrap shooting at the clouds”

How much (cache) is too much?

One of the best rules of thumb I know is the 80/20 rule. I can’t think of a more practical rule in almost any situation. Combined with the law of diminishing returns, it pretty much sums up how the universe works. One case-study that hopes to illustrate both of these, if only a little, is a short experiment in optimization I carried out recently. I was reading so many posts about optimizing wordpress using nginx, varnish, W3-Total-Cache and php-fpm. The results on some of them were staggering in terms of improvements, and I was inspired to try to come up with a similar setup that will really push the boundary of how fast I can serve a wordpress site.

Spoiler – Conclusion

So I know there isn’t such a thing as too much cash, but does the same apply to cache?
Continue reading “How much (cache) is too much?”