My wife and I are wannabe-minimalists. We try to reduce how much we consume, make our home a bit more organized and get rid of excess. We also like vintage items, so it’s always hard. Next to my desk, I have an old calculator from the 60s or 70s (I guess) that I picked up at a flea market a few years ago. It’s just cool, but serves no purpose. Maybe I should get rid of it, but it’s still there. It sits next to my old Nokia 8210 from 1998 or so… I somehow got attached to that phone.
I never thought I’d write something negative about Bitwarden. I love it. It’s an incredible password manager, and I even created envwarden, a small open-source wrapper for handling server secrets with Bitwarden.
But I recently bumped into a small issue that looks like security through obscurity to me, and that struck me as odd for a security-focused product.
The issue was that I couldn’t export the items in my company’s vault, even though I had access to the cards.
I contacted Bitwarden about it, and they said that:
An Organization user cannot export the Organization’s Vault without being an Admin or Owner.
I tried to understand why: I did have access to the cards in my organization, so why couldn’t I export them? I was told:
We do not allow people to export the Organization Vault unless they are an Admin simply because this has been requested by demand from our customers. Being able to dump all passwords in one quick action is different than having to access every one individually to copy them out.
I explained that this seems like security through obscurity: I did have vault access, and it’s trivial to dump all passwords using the Bitwarden CLI anyway.
There’s a famous Yiddish phrase:
Man plans and God laughs.
I think the same applies to SEO and Google nowadays.
Man SEOs and Google laughs.
I was always a bit suspicious of SEO, and let’s face it, the sea of snake-oil SEO salesmen doesn’t help to establish credibility here, does it?
But I think that I’m becoming even more cynical of it every day.
The problem with getting good advice for SEO is that there’s no money in telling you “Don’t do anything”, “It’s a waste of time”, or “Focus on valuable content for your audience”. But there’s tons of money in doing a site audit, in telling you about the best strategies to extract link juice, or why alt tags for images are important.
But it works
There’s an expression in Hebrew: “Baltam”. It’s a shorthand for something unplanned, or more precisely, it strongly implies: [something that is] impossible to plan. I think it has its roots in the military. On the battlefield, you always have to account for some surprises. You cannot possibly have everything planned. Israelis are also (in)famous for improvising, and not so famous for planning ahead.
As an (ex?) Israeli, I recently felt awkward being accused, essentially, of being overly bureaucratic. And by a German colleague, of all people. Can you imagine?? :)
Ok, just to clarify one thing: this post isn’t about cultural stereotypes, but about trying to figure out a practical approach to a real problem my team is facing with new ideas and features:
How do you deal with new tasks or ideas, especially small ones?
Many apps require some tasks to execute on schedule: cleaning up inactive user accounts, generating daily, weekly or monthly reports, sending out reminders via email, etc.
cron is a simple and trusted scheduler for unix, used on pretty much any unix-based system I come across.
So cron seems like a natural candidate for triggering those job executions. But it’s not always the best solution.
In our case, we’ve used the whenever gem for rails successfully for a long while. The gem acts as a cron DSL and lets you inject and manage cron entries from your rails app.
The problem starts however when you start growing, and your app spans more than one server. Or even if you only use one server, but want to be able to fail-over, or switch from one server to another.
Why? Suddenly you have more than one cron launcher, and jobs that should execute once end up executing once on each server. This can cause weird and unexpected locking issues, duplication and other problems.
So what’s the alternative?
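One common answer (just a sketch of the idea, not how whenever itself works) is a shared lock: before running, each server tries to claim the job atomically in a shared store such as Redis, whose SET NX EX command gives exactly this “set if absent, with expiry” primitive, and only the winner executes. A minimal Python sketch, with an in-memory stand-in for the shared store:

```python
import threading
import time

class LockStore:
    """Stand-in for a shared store like Redis. In production, Redis's
    SET NX EX gives the same atomic "set if absent, with expiry"
    primitive across all servers."""
    def __init__(self):
        self._expiries = {}
        self._mutex = threading.Lock()

    def set_if_absent(self, key, ttl_seconds):
        with self._mutex:
            now = time.monotonic()
            expiry = self._expiries.get(key)
            if expiry is not None and expiry > now:
                return False  # another server already claimed this job
            self._expiries[key] = now + ttl_seconds
            return True

def run_once(store, job_name, job, ttl_seconds=300):
    """Run `job` only if no other server has claimed it within the TTL."""
    if store.set_if_absent(f"cron-lock:{job_name}", ttl_seconds):
        job()
        return True
    return False
```

Every server still fires the cron entry, but only the first one to grab the lock actually does the work; the TTL means a crashed holder doesn’t block the job forever.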
This post has been at the back of my mind for a couple of years now. I think we actually switched off Intercom in 2016 or so… But the reasons should still stand now, or might even be stronger. Of course, things might have shifted, so please forgive me if some features are totally different by now.
For those who don’t know intercom.io (now intercom.com), well, I think you probably do know it, but maybe not by name. It’s the technology (or company) that adds those little “bubbles” on websites, with friendly faces offering to help.
Of course, intercom.io isn’t the only one now, and there are a few competitors in this space. The principle is pretty similar though. I think intercom was the most successful company doing this, or the first, or both. But it’s not really important. It’s mostly about intercom as a concept, rather than a specific implementation.
The short, simple, and most crucial reason: it didn’t work. How do I know? We A/B tested it. Over a fairly long time and a large number of people.
When you walk inside the Ben Thanh market in Ho Chi Minh City, Vietnam, you’ll eventually end up inside the food area. There are probably hundreds of stalls selling local food: lots of delicious Banh Mi sandwiches, noodle soups, fruit juices and summer rolls.
One thing you can’t ignore, however, is that as soon as you walk around, you’ll get approached by one of the stall owners. They’ll simply hand you the menu to choose from.
The book piqued my curiosity, so I picked it up and took a peek at the first page. It was written by an artist at the peak of her career.
As a non-native speaker, I guess when I pronounce any of those words: pick, peek, peak, or pique, they all sound the same. So it’s even harder for me to clearly memorise. I mostly get it right, but can occasionally confuse some forms.
Especially peek and peak.
It doesn’t happen with meet and meat, feet and feat, leek and leak though. I wonder why.
I recently finished reading “Bad Blood – Secrets and Lies in a Silicon Valley Startup” by John Carreyrou. It’s a remarkable piece of investigative journalism and an amazingly gripping read. I just couldn’t put it down.
I think it particularly stood out because of the amazingly stark contrast with another book I recently wrote about: “It doesn’t have to be crazy at work”, by the co-founders of Basecamp.
This isn’t exactly a standard type of post for this blog, but then perhaps I shouldn’t be too strict with myself as far as things I write about. After all, it’s my personal blog. I make (and break) the rules. And anyway, nobody reads it. If you are reading this, consider yourself one of a very select few.
I’m no health specialist, and this is just a sample of one, even less scientific than my A/B testing for coffee (which wasn’t scientific at all), but I’m totally crazy about guava, and what I perceive to be its health benefits for me.
Growing up in Israel, I knew guavas as a kinda pungent, slightly mushy, yellowish fruit. They were also one of those things the local Israeli folklore qualified as “either you love it or you hate it” (we have no Marmite in Israel, not to my knowledge anyway. Or maybe there’s a strong consensus and everyone hates it? Anyway, I digress).
I guess I was in the “love it” camp, but I don’t recall being particularly crazy about it either. I think the local wisdom was also that it causes constipation, so I guess I tried not to have too much of it.
I no longer live in Israel. But I also don’t come across Guavas. At all.
I lived in London for a number of years, and I don’t recall eating any there, or even seeing them. Maybe pink, artificial guava juice. I’ve now been living in Berlin for several years, and I can’t think of seeing any here either. How come??
I do see them everywhere in Thailand and Vietnam though. They’re literally around every street corner. Every fruit stand would typically have them beside papayas, pineapples and watermelons. You can also get a proper, fresh guava juice or shake in lots of places.
We’ve recently been hit by more and more bots.
Some of them are crawling our site and hitting valid or invalid endpoints. We’ve seen plenty of credential stuffing attacks as well. Most of them distributed across different IPs, with each IP hitting us at low frequency.
And most recently, someone abused our registration form to spam their recipients via our system.
It was quite clever actually. When you register, you enter your name, email and password. We then send a confirmation email saying something like
“Hey Roberta, thanks for joining. Please click here to confirm your account”.
Now those guys used their victim’s email address, and put a URL in the name field. So those users would get an email like:
“Hey lottery tickets http://some.link, thanks for joining. Please click here to confirm your account”.
Slimy. Naturally, our own email system took the hit of sending the spam. Double ouch.
Luckily, we had some anomaly detection in place, and we blocked those guys quickly. They used some browser automation from a fixed set of IPs, so it was easy to block. At least until the next wave…
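One cheap extra defence (a sketch of the idea, not necessarily what we deployed) is to validate the name field itself: real names have no business containing URLs, so anything that looks like a link can be rejected before a confirmation email ever goes out.

```python
import re

# Rough heuristic: flag anything that looks like a URL or a bare domain.
# The domain suffix list is just an example; extend it for your traffic.
URL_PATTERN = re.compile(
    r"(https?://|www\.|\w+\.(com|net|org|info)\b)",
    re.IGNORECASE,
)

def suspicious_name(name: str) -> bool:
    """Return True if the 'name' looks like spam payload rather than a name."""
    return bool(URL_PATTERN.search(name))
```

This obviously won’t stop a determined attacker, but it blocks exactly the “lottery tickets http://some.link” payload above, and it costs nothing.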
I’ve been dealing with those types of scenarios using fail2ban, and it’s really quite effective. We define regular expressions that inspect our log files for certain patterns, and then ban an IP if we see repeated offensive behaviour. fail2ban is limited in some aspects though.
First of all, those rules are a bit of a pain to create and maintain, and you need to make sure the offending IP appears on the application log record you want to capture. In some cases it’s easy, but not always. The bigger problem however is that fail2ban doesn’t scale. The more servers you have — let’s say in a load-balanced setup — the less accurate fail2ban becomes. Or you need to aggregate all your logs on a single fail2ban host, creating a single point of failure or a bottleneck…
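For illustration, a fail2ban filter is just a file of regular expressions matched against a log. A hypothetical filter catching repeated signup POSTs from one IP might look like this (the file name, endpoint and log format are all assumptions; adjust to your own logs):

```ini
# /etc/fail2ban/filter.d/signup-abuse.conf (hypothetical filter name)
[Definition]
# <HOST> is fail2ban's placeholder for the offending IP address,
# which is why the IP must appear on the matched log line.
failregex = ^<HOST> .* "POST /signup HTTP/1\.[01]"
ignoreregex =
```

Paired with a jail’s maxretry/findtime settings, fail2ban then bans IPs that match repeatedly — which also shows why the offending IP has to be present in the application log record in the first place.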
So I was searching for a better solution. Sadly there aren’t many. Cloudflare, which we also use, offers some degree of protection. But it’s not as flexible. And of course there’s reCAPTCHA. You know, those annoying things asking you to pick traffic signs, or even just click “I’m not a robot”?
Now, I was initially hesitant to use it. I’m not sure why, but the fact that it doesn’t really have any real competition bothers me. Plus, as a user, I’m frequently annoyed by those challenges, and I hate the experience.
Setting it up is surprisingly simple and, from my limited experience, quite effective. That is, the scores it produced were surprisingly accurate, although my ability to test different scenarios was limited.
I’ll try to give some pointers for implementing reCAPTCHA v3 with Rails 5.1 and Devise 4. The implementation can work with any form or controller however, not just with Devise.
I really enjoyed reading It Doesn’t Have to Be Crazy at Work recently. It’s another bestseller from Basecamp. After reading Rework before, a lot of things felt a bit familiar. Too familiar, perhaps. But their new book still has a few new ideas and covers things from a different angle. Well worth a read.
Working remotely, and at a company with a very similar culture and values to Basecamp, a lot of what they write about resonated. Much of the way we structure things at work was, to be completely honest, inspired by or wholesale copied from Basecamp. Why reinvent the wheel when someone hands you an instruction manual for building a perfect one?
But some things caught me by surprise. It felt a little too zen, or even contradictory in some cases. It definitely gave me pause, though. Maybe we’re doing some things wrong, and can improve even further? I’m still unsure, but I hope we can experiment with some of the ideas. Let me jump into a few examples…
I wrote only recently about SmugMug, expressing my frustration as a developer who built an open-source tool for their platform. This led me to try to get my data out of SmugMug, as I was considering moving away from it as well… only to discover that some of my video data is lost and/or not made available. This applies only to videos: both the quality is potentially degraded, and the metadata that is available on SmugMug cannot be downloaded or exported out of their platform.
If you upload a video to SmugMug, they don’t actually store the original video for you. Here’s a quote from their official page:
We don’t keep a copy of the original video you upload. We make high-quality display copies, which are probably altered from what you send us.
I’m not sure what this high-quality display copy means in actual terms, but I won’t be surprised if some quality is lost in the process. For a company that prides itself on caring for photographers, where quality and reliability are key, I find it rather vague and disconcerting.
Furthermore, what isn’t mentioned on this page is that if you want to download your videos again, those videos will be stripped of the original metadata as well. This metadata includes information about the camera you used, the date/time of the video, location information, etc. All of this data is still stored on SmugMug, but you can’t get it back when you download the video. It’s locked in. For me, personally, this is even worse than losing video quality. My video memories are very tightly linked to the time and location of the original scenes. Without this info, the videos are next to useless. I just can’t find them (without manually going through hundreds or thousands of dateless and location-less videos, that is).
SmugMug is great, but its developer ecosystem is, in my humble opinion, crumbling, and could use some serious love, or be put out of its misery and die…
Dear SmugMug, there are lots of people, myself included, who want to see you thrive and succeed. People who are spending their free time, resources and energy on sharing their tools with the community. People who can build great things on top of SmugMug, and can make SmugMug even more successful than it currently is. Please don’t forget us. We are the potential evangelists, multipliers, and we do this for free. Please treat our free gifts with respect. These gifts might be free, but they are precious. They should be cherished, rather than ignored, or discarded.
Rails Russian Doll Caching is super cool. It’s simple, effective and makes caching much easier to reason about.
There’s a dark side to it though. Not in the negative, evil sense. But rather the hidden, unknown, confusing sense.
My wife and I started using Amazon Photos a while ago. I didn’t think much of it at first, but it was included with our Prime membership, and offered automatic upload from our phones, plus free storage (for photos), so why not?
Fast forward a couple of years. We’ve since cancelled Prime, and I wanted to switch to Dropbox, which has comparable automatic upload, a mobile app, and superior sync with a proper linux client. But I couldn’t. Why? Because of this one (stupid) feature.
Prepare for a somewhat ranty post, but it doesn’t come from a bad place. I honestly want Fastmail to succeed. I’m eager to see more alternatives for email hosting and clients (and there are scarily few).
I also acknowledge that some of the problems I bumped into are quite specific to my own setup, which isn’t common. So in some ways, it’s not about you, Fastmail. It’s me. Make your own judgement.
TL;DR – Fastmail is pretty neat, but their support sucks. Their support ticket system sucks even more, and their product is not clear enough to work without support. From my personal experience anyway.
In my previous post, I described the architecture of Gimel, an A/B testing backend using AWS Lambda and redis HyperLogLog. One of the commenters suggested looking into Google BigQuery as a potential alternative backend.
It looked quite promising, with the potential to increase result accuracy even further. HyperLogLog is pretty awesome, but trades accuracy for space. Google BigQuery offers very affordable analytics data storage with an SQL query interface.
There was one more thing I wanted to look into that could also improve the redis backend: batching writes. The current gimel architecture writes every event directly to redis. Whilst redis itself is fast and offers low latency, the AWS Lambda architecture means we might have lots of simultaneously active connections to redis. As another commenter noted, this can become a bottleneck, particularly on lower-end redis hosting plans. In addition, any other backend that does not offer low-latency writes could benefit from batching. Even before trying out BigQuery, I knew I’d be looking at much higher latency and would need to queue and batch writes.
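The batching idea itself is independent of the backend: buffer events in memory and flush them in bulk once the buffer is full or old enough. A rough sketch (the sizes and ages are arbitrary, and a real Lambda would also need to flush before its container is frozen):

```python
import time

class BatchWriter:
    """Buffer events in memory and hand them to the backend in batches,
    either when the buffer fills up or when max_age seconds have passed."""
    def __init__(self, flush_fn, max_items=50, max_age=2.0):
        self.flush_fn = flush_fn      # called with a list of buffered events
        self.max_items = max_items
        self.max_age = max_age
        self._buffer = []
        self._oldest = None           # monotonic time of the oldest event

    def write(self, event):
        if self._oldest is None:
            self._oldest = time.monotonic()
        self._buffer.append(event)
        if (len(self._buffer) >= self.max_items
                or time.monotonic() - self._oldest >= self.max_age):
            self.flush()

    def flush(self):
        if self._buffer:
            self.flush_fn(list(self._buffer))
            self._buffer.clear()
            self._oldest = None
```

With a high-latency backend like BigQuery, this turns N round-trips into N / max_items, at the cost of losing up to one buffer’s worth of events on a crash.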
tip-toeing on the shoulders of giants
Before I dive into the reasons for writing Gimel in the first place, I’d like to cover what it’s based on. Clearly, 100 lines of code won’t get you that far on their own. There are two (or three) essential components this backend runs on, which make it scalable and also lightweight in terms of actual code:
- AWS Lambda (and Amazon API Gateway) – handle the requests to both store experiment data and to return the experiment results.
- Redis – using Sets and HyperLogLog data structures to store the experiment data. It provides an extremely efficient memory footprint and great performance.
I hadn’t noticed it much before, but it’s become a pet peeve since I started paying attention to it.
We LOVE homepages. Like eyes being the windows to our souls, our homepage shows who we really are. What we stand for. They turn random visitors into loyal customers. They inspire trust, build an emotional connection, they bind us together… ok ok. You get the picture. Homepages are great.
But once I’m sold, I’m in. I gave you my email. I’m a loyal customer. I go to your site every. single. day. Do I really need to see your homepage again??! Do I actually care that you changed the photo on the frontpage and highlighted another benefit to potential customers? Or most importantly – do I really have to click the ‘Login’, ‘Go to my app’, ‘Dashboard’ or whatever other link you give me to get started?
This is the final post in this series. I started by covering the method for A/B testing coffee, as well as the motivation and approach. I later wrote about the first test session using the Hario V60, then compared those beans by making Espresso, and the last post described two preparation methods: Aeropress and Cappuccino.
I repeated a similar process using various combinations of the E coffee beans. This post will be more brief, with the “results” based on my personal preferences and how I ended up scoring all 5 types of beans.
In previous posts I covered the method for A/B testing coffee, as well as the motivation and approach. I later wrote about the first test session using the Hario V60. The last post compared those beans by making Espresso.
This post will cover two tasting sessions of the same mysterious B beans: Aeropress and Cappuccino.
In my previous post, I covered the first blind A/B tasting session using the “Gingerlime Tasting Technique” ™. You can read some more background about the motivation and method, as well as a full list of the coffees I’m comparing, in the first post in the series.
After the first taste using the pour-over Hario V60 filter, I was anxious to find out whether both B coffees would show similar characteristics using other preparation methods, namely: Espresso, Aeropress and Cappuccino. Would B stay my favourite when served with milk? Would the Aeropress extract different flavours out of A than I managed with the Hario?
This is the second post in a series exploring the “Gingerlime Tasting Technique” ™. You can read some background in the previous post, where I explain the motivation and testing method, and how I started exploring A/B testing for coffee: different tasting sessions comparing two types of beans, trying to choose the better of the two.
A taste test
The first tasting was between coffees A and B (still unknown to me at that point in time). The test was actually a series of 4 different tasting sessions. Each session used a different method of making coffee: Hario V60 filter, Espresso, Aeropress and Cappuccino.
I do quite a bit of A/B testing and find it to be a great tool for experimenting and ultimately improving things.
But what’s “Coffee A/B testing”?
The idea came to me when I was visiting my wife’s family in Japan. We went to a restaurant, and my father- and brother-in-law ordered two types of Sake. They let me taste both and decide which one I liked most. It was a simple task, but an interesting one. The tastes were subtly different, but enough so that I could clearly pick my personal favourite.
It then occurred to me that as much as I love coffee, and tend to pick some beans over others, I don’t quite know what makes me like a certain type, or what it is I’m looking for in my “ultimate” coffee.
What if I could A/B test coffees? Try two types of beans (or blends), and pick the one I like. Then, by repeating the process, I could gradually find the one I like the most. And in doing so, I could also figure out what it is that I like, and pay more attention to the differences. I rarely compare coffees. Well, not any more!
It’s not all about the technology. Stripe does one thing that makes it light-years better than its competition: Time to market. Or in simpler terms, its activation process to allow you to receive actual payments.
My wife and I run a small website selling vintage items from Germany to Japan. So far, my wife had been asking all her customers to pay via bank transfer. This is naturally time-consuming and, for most of her customers, Japanese housewives and arts & crafts lovers, inconvenient. A few months ago I suggested she introduce credit-card payments on her website. How difficult could it be to implement?
My wife and I recently had a baby. Amongst the toys and clothes we received as gifts were a few CDs and DVDs with music for the little one. We then realised that we no longer have a CD or DVD drive in our computers, so we bought an external USB DVD/CD drive. When playing the DVDs, the region-selection menu appeared. I had nearly forgotten about it. Oh, the good ol’ copy protection of the 90s. So I chalked it up as one of those oddities of life, and thought how silly it seems today, in the Internet age and all that. My wife is Japanese, and I’m Israeli. And we live in Berlin. Naturally, each side of the family wanted to send us music in their own language, so there you go.
Only a few days later, my wife asked for my help with her Nexus 7. She had bought a few eBooks from a Japanese site. They work fine on her iPhone and Mac, but somehow the Play store wouldn’t install the app (never mind the question of why someone needs a bespoke app to read books).
“This item is not available in your country”.
This time I was determined to work around this.
Here’s a quick howto which does not require a rooted android.
UPDATE: AWS recently introduced SSL Health checks. So the method in this post should no longer be necessary.
Amazon Route53 offers a DNS healthcheck that allows you to failover to another host / region if one IP is not responsive. This works great if you want to create a secondary site, or even a simple maintenance page to give your users a little more info than just an empty browser window.
There are some limitations to the healthchecks currently. Route53 allows you to choose between TCP and HTTP. However, there’s no HTTPS / SSL support for URLs.
So what can you do if your site is running only with SSL?
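One way around it (a sketch with a placeholder domain, not the exact config from the post): keep serving the real site over HTTPS, but let the same nginx answer a plain-HTTP health endpoint that Route53 can check, while redirecting everything else to SSL.

```nginx
# Plain-HTTP server block: redirect everything to HTTPS,
# but answer the Route53 healthcheck directly over HTTP.
server {
    listen 80;
    server_name example.com;  # placeholder domain

    # Route53 HTTP healthcheck target
    location = /health {
        return 200 "OK\n";
    }

    location / {
        return 301 https://$host$request_uri;
    }
}
```

Note that this only proves nginx itself is up; if you need the check to reflect application health, proxy the /health location to your app instead of returning a static 200.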
Continue reading “Route53 healthcheck failover for SSL pages with nginx”
Just a quick & dirty guide on setting up SSL tunnelling in your development environment. This is written for Rails, but can easily be used with Django, Node, or any other web development stack.
Why SSL in development?
There’s no important reason to use SSL in development, but sometimes you just seem to have to. I was trying to build an integration with helpscout, using their dynamic custom app. For some reason, helpscout forces you to use SSL for the external URL, even for development. I won’t go into details of why I think it’s unnecessary, but rather focus on how to set it up. After all, something else might also require SSL within development, so here’s one quick way to do so.
I spend a lot of time working with monitoring solutions, and like to measure and track things. The information we collect from our apps tells us a lot about what’s going on. Who’s using it. How frequently they access it. Where they are from. How much time they spend accessing the app, etc. And then there’s a lot we can do as app owners with this data. We can measure it, trend it, slice and dice it and produce nice reports. We can also act on this info. Offer people stuff based on their behaviour. Use those ‘lifecycle’ emails to improve conversion. Increase our sales. Bring people back to using our products.
I’m getting used to those supposedly-personal emails from Matt, the founder of Widgets inc., who’s “just checking if I need any help using the product”, or Stuart from Rackspace, who has “only one question”. I know it’s automated, but that’s fine. As long as I can hit reply and actually reach a person, it’s ok with me. I pretend not to notice.
However, I’m feeling recently that some of those emails get a little creepy. A couple of random examples:
“Russian doll Caching” gained some popularity recently, I suspect in part due to its catchy (or cache-y?) name and how easy it is to visualize the concept. Rails 4 should have this improved caching available by default. With Rails 3 you need to install the cache-digests gem. It’s pretty easy to get started with it, and the documentation is clear. It makes a lot of sense to start using it in your Rails app. I won’t attempt to cover the basics, and will assume you are already familiar with it. I want to talk about a specific aspect of fragment caching: the generation of the cache keys.
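To give a flavour of what those keys involve: a fragment cache key combines the model’s id and updated_at timestamp with a digest of the template (and, with cache-digests, its dependencies), so touching either the record or the view invalidates the fragment. A back-of-the-envelope sketch of the same idea in Python (the key layout loosely mimics Rails’ format, but is illustrative only):

```python
import hashlib

def cache_key(view_path, template_source, record_id, updated_at):
    """Compose a fragment-cache key from the record's identity/timestamp
    and a digest of the template source. Changing either the record's
    updated_at or the template text yields a new key, which is what makes
    stale fragments expire without explicit invalidation."""
    digest = hashlib.md5(template_source.encode()).hexdigest()
    return f"views/{view_path}/{record_id}-{updated_at}/{digest}"
```

The subtlety the post digs into lives entirely in that digest part: which templates contribute to it, and when it changes.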
If a request using django-tastypie is not authorized, please make sure to raise an Unauthorized() exception in your *_detail authorization methods in Tastypie v0.9.12.
The longer version
In one of my previous posts, I wrote at length about django-tastypie authorization and gave some tips and tricks on how to work more flexibly and securely with this framework. A lot has happened since, and it has been hard to keep track of all the various changes and updates to Tastypie.
Since version 0.9.12, the authorization mechanisms in tastypie have changed rather radically, and that’s a very good improvement. It plugged some holes with nested resources and authorization, and made authorization decisions more granular. From a simple apply_limits, each operation can now be authorized, broken down into CRUD elements (create, read, update, delete). Each element is authorized for list and detail operations (I’ll try to cover this in more depth in a follow-up post at some stage).
For now, I just wanted to highlight an important pitfall you might want to avoid when using the new tastypie authorization, one that could leave you exposed. There’s a fix in the pipeline very soon, but until then, you should protect yourself by making a small change to your authorization methods, and the *_detail ones in particular.
The crux of the issue is that the *_detail authorization methods should make a binary decision: is this authorized? (yes/no). If the method returns True, or does nothing, the request is authorized. If the method returns False or raises an Unauthorized exception, the request should be blocked.
The glitch is that if your *_detail authorization functions return False, the request still goes through and is effectively authorized. Until the fix is in place, please make sure to raise an Unauthorized() exception if you’re using Tastypie v0.9.12.
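Concretely, the safe pattern looks something like this (a standalone sketch: the Unauthorized class below stands in for tastypie.exceptions.Unauthorized so the snippet runs on its own, and the ownership check is a made-up example):

```python
class Unauthorized(Exception):
    """Stand-in for tastypie.exceptions.Unauthorized so this sketch runs
    standalone; in a real app, import it from tastypie instead."""

class UserOnlyAuthorization:
    """Example of a *_detail authorization method that raises
    Unauthorized() instead of returning False, since affected
    versions treat a False return as authorized."""
    def read_detail(self, object_list, bundle):
        # Hypothetical ownership check: only the owner may read.
        if bundle.obj.user == bundle.request.user:
            return True
        raise Unauthorized("You do not own this resource.")
```

The point is simply never to `return False` from the detail methods: either return True or raise.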
I had a strange conversation with my wife this morning.
She told me that google reader is closing down.
She uses it much more than I do. So I said to her something like “I’m sure you can install some other RSS reader software to replace Google”.
Her response was a bit of a surprise for me: “Software?! eugh!”.
Then I said “Ok then, or an app”, and she seemed rather pleased.
How did software become such a dirty word?!
I love Graphite. It’s the most robust, flexible, kick-ass monitoring tool out there. But when I say monitoring, I’m actually not describing what graphite really does. In fact, it does almost anything but monitoring. It collects metrics via carbon, stores them using whisper, and provides a front-end (both API and web-based) via graphite-web. It does not, however, monitor anything, and it certainly does not alert when certain things happen (or fail to happen).
So graphite is great for collecting, viewing and analyzing data, particularly with the multitude of dashboard front-ends, my favourite being giraffe ;-). But what can you do when you want to get an email or a text message when, say, carbon throws some errors, or your web server starts to bleed with 500’s like there’s no tomorrow? Even better – do you want to get an email when your conversion signup rates drops below a certain mark??
So what can you use if you want to monitor stuff using graphite? And what kind of stuff can you monitor? I’ve come across a really great approach using nagios. In fact, I ‘borrowed’ the method the author was using for alerting on 500 errors for my own approach. I wanted to do something very similar, but I really didn’t want nagios. It’s overkill for me if all I want is to get an email (or run a script) when something goes wrong.
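The building block for any such approach is graphite’s render API, which can return any target as JSON; a small script can then compare the latest datapoint to a threshold and send an email or run a command. A sketch (the graphite host is a placeholder, and the fetch function is injectable so the logic can be tested offline):

```python
import json
import urllib.request

def latest_value(datapoints):
    """Graphite's render API returns [[value, timestamp], ...];
    pick the most recent non-null value."""
    for value, _ts in reversed(datapoints):
        if value is not None:
            return value
    return None

def check_metric(target, threshold, fetch=None,
                 base="http://graphite.example.com"):  # placeholder host
    """Return True (i.e. "alert") if the metric's latest value
    exceeds the threshold."""
    if fetch is None:
        def fetch(url):
            with urllib.request.urlopen(url) as resp:
                return json.loads(resp.read().decode())
    url = f"{base}/render?target={target}&from=-5min&format=json"
    series = fetch(url)
    value = latest_value(series[0]["datapoints"]) if series else None
    return value is not None and value > threshold
```

Wire the boolean result to sendmail, a webhook, or any script, and you have the email-on-500s behaviour without dragging in nagios.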
I’ve recently bumped into an interesting post about a stackoverflow vulnerability discovered by Anthony Ferrara. I didn’t think too much about it. I’ve come across similar issues before, where the application relies on a piece of information that might be easy to forge. Telephony systems are vulnerable to Caller ID spoofing, which becomes increasingly easier with Voice-Over-IP providers. Web based applications can also be fooled if they rely on header information, such as the X-Forwarded-For, typically used by Proxy servers.
I was experimenting with switching rails from Phusion Passenger to Unicorn, when I suddenly came across a strange error message:
ActionDispatch::RemoteIp::IpSpoofAttackError (IP spoofing attack?!HTTP_CLIENT_IP="192.168.0.131"HTTP_X_FORWARDED_FOR="192.168.0.131"): app/controllers/application_controller.rb:138:in `append_info_to_payload'
That looked quite impressive. Rails is trying to identify spoofing attacks and raise an exception when it happens? Nice.
However, after digging a little deeper to figure out what’s actually happening, it seems that Rails might actually be vulnerable to spoofing attacks under certain setups. I will try to describe those scenarios and suggest a few workarounds to avoid any pitfalls.
What I observed applies to Rails latest stable (3.2.9 at the time of writing), previous versions and potentially future versions as well (including 4.0).
Your rails application might be vulnerable to IP spoofing. To test it, try adding a fake X-Forwarded-For header and check which IP address appears in your log files:
curl -H "X-Forwarded-For: 188.8.131.52" http://your.website.com
You can try to implement one of the workarounds mentioned below.
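The principle behind most such workarounds is the same: addresses in X-Forwarded-For are only trustworthy from the rightmost end, where your own proxies appended them; anything further left was supplied by the client and can be forged. A language-neutral sketch of that selection logic (the trusted ranges below are examples; Rails expresses the same idea via its trusted proxies configuration):

```python
from ipaddress import ip_address, ip_network

# Example ranges for our own proxies / load balancers.
TRUSTED = [ip_network("127.0.0.0/8"), ip_network("10.0.0.0/8")]

def client_ip(x_forwarded_for):
    """Walk X-Forwarded-For right to left and return the first address
    that is not one of our own proxies. Everything to its left is
    client-supplied and must not be trusted."""
    hops = [h.strip() for h in x_forwarded_for.split(",") if h.strip()]
    for hop in reversed(hops):
        if not any(ip_address(hop) in net for net in TRUSTED):
            return hop
    # All hops were our own proxies; fall back to the leftmost entry.
    return hops[0] if hops else None
```

With this rule, a forged address prepended by the attacker simply gets ignored, because it sits to the left of the first untrusted hop.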
Just a quick rant this time.
I recently signed up for pinterest. I wasn’t actually interested in signing up, but wanted to see what their sign-up process looks like. If you’ve read one of my previous posts, you’d know I nearly always use unique, unpredictable email addresses for new services I sign up to. Pinterest registration is quite nice, and only asks for a few details and an email address (that is, if you prefer a username and password instead of using Facebook or Twitter to log in). Once you enter the details, pinterest sends a “Please verify your email” message to your inbox. So far, so good.
However, what happens if you don’t verify your email, as was the case here? I wasn’t actually interested in creating an account, and assumed I wouldn’t hear from Pinterest again. Wrong. I just received an email from Pinterest, announcing their new secret boards. So much for confirming my account. According to Spamhaus, this is considered unconfirmed opt-in, which is categorized as spam.
To add insult to injury, if I try to opt out from the email I just received, Pinterest asks me to log in to my (unconfirmed) account. These are all small annoyances, I know. But is it really that difficult to do things right? An unconfirmed account should not receive any messages, and opt-out links should be one click and that’s it.
I’ve written about installing and using Graphite, and it’s a really great tool for measuring all kinds of metrics. Most of the guides online don’t touch on the security aspects of this setup, and there was at least one thing I thought was worth writing about.
How are we measuring?
Metrics we gather from our applications have the following characteristics / requirements:
- We want to gather lots of data over time.
- Any single data-point isn’t significant on its own. Only in aggregate.
- Measuring is important, but not if it slows down our application in any way.
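These requirements point at fire-and-forget UDP. Here is a minimal sketch (assuming Carbon is configured with its UDP listener on the usual port 2003; the helper names are mine, not from the post): a lost packet costs one data-point, which is acceptable since no single data-point matters on its own, and the send can never block the application even if the Graphite server is slow or down.

```ruby
require "socket"

# Carbon's plaintext protocol: "<metric path> <value> <unix timestamp>\n"
def format_metric(name, value, timestamp = Time.now.to_i)
  "#{name} #{value} #{timestamp}\n"
end

# Fire-and-forget over UDP: no connection setup, no waiting on the server.
def send_metric(name, value, host: "localhost", port: 2003)
  UDPSocket.new.send(format_metric(name, value), 0, host, port)
end
```

Note that UDP is also what makes the security aspects interesting: anyone who can reach that port can inject data-points, which is exactly the kind of thing most guides skip over.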
For those who followed my previous post, I thought I should post a quick update.
I was naturally quite surprised to be contacted by Rackspace shortly after posting. This was a nice surprise, and the contacts afterwards were somewhat more understanding. At least I could sense they felt sorry for my situation.
As expected, there was no way to recover the lost image. I received a follow-up message on the original ticket confirming this quite clearly. They then rather swiftly changed the tone into legal-speak and referred me to their terms of service, which I quote here for the benefit of the world at large.
One of the greatest promises of cloud computing is resiliency. Store your data ‘in the cloud’ and access it from anywhere, enjoy high durability and speed. You know the marketing spiel already. A recent incident reminded me of the importance of backups. In fact, the importance of backups of backups. Sounds strange? Of course. This is the tale of a missing server image.
Coming from Django, I was a little surprised/disappointed that permissions aren’t as tightly integrated into ActiveAdmin as they are into the Django admin. Luckily, my search for better authorization for ActiveAdmin led me to this very informative post by Chad Boyd. It makes things much easier, so we can authorize resources more flexibly.
However, there were a couple of aspects that I still wasn’t 100% happy with:
- When an unauthorized action is attempted, the user is simply redirected with an error message. I personally like to return a 403 response / page. Yes, I’m nitpicking. I know.
- Default actions like Edit, View and Delete still appear, regardless of the permissions the user has. Clicking on them won’t actually let you do anything, but why show options on the screen that aren’t actually allowed?
So with my rather poor Ruby/Rails skills, together with my much more experienced colleague, we made a few tweaks to the proposal in Chad’s post to make it happen.
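As a rough illustration of the first tweak (a sketch under assumptions, not the exact code we ended up with; it presumes a CanCan-style authorization layer that raises CanCan::AccessDenied, as in Chad’s post), the redirect can be replaced with a proper 403 response:

```ruby
# Hypothetical sketch: render a static 403 page instead of redirecting
# with a flash message when authorization fails.
ActiveAdmin::BaseController.class_eval do
  rescue_from CanCan::AccessDenied do |_exception|
    render file: Rails.root.join("public", "403.html").to_s,
           status: :forbidden, layout: false
  end
end
```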
It’s always nice to get some feedback, or for users to make contact via a simple contact form. However, it didn’t take long before spammers started hitting those forms too. It was quite interesting to see the kind of messages we started receiving. Most of those submissions read more like stories, or snippets from an email to a friend. They didn’t have any of the expected keywords for fake watches or erectile dysfunction enhancers. Many didn’t even have any links.

So what were these messages then? My personal guess was that these were some kind of reconnaissance attempt: the bots send innocent messages first to various online forms, then crawl the site to see whether those submissions appear elsewhere. If and when they do, they hit those forms hard with the real spam content. In any case, these were all speculations that I didn’t really care to prove right or wrong. I just wanted to get rid of this junk. Fast.
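One cheap counter-measure against this kind of bot is a honeypot field: an extra form field hidden from humans with CSS, which bots tend to fill in because they fill everything. A minimal sketch (the “website” field name is an arbitrary choice, not from the original form):

```ruby
# Honeypot check: the form includes a "website" field hidden via CSS.
# Humans never see it, so any non-blank value marks the submission as spam.
def spammy_submission?(params)
  !params.fetch("website", "").strip.empty?
end

spammy_submission?("name" => "Alice", "website" => "")   # => false
spammy_submission?("name" => "Bot",
                   "website" => "http://spam.example")   # => true
```

It’s not bulletproof, but it filters out exactly the kind of unsophisticated bots that blanket-submit every form they find.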
A recent comment by Martyn on my cloud performance shoot-out post prompted me to do another round of testing. As the bootstrap process I described in the last post has evolved, it’s always a good idea to test it anyway, so why not kill two birds with one stone? The comment suggested that the Amazon EC2 micro instance is CPU-throttled, and that after a long period (in computer terms: about 20 seconds, according to the comment) you could lose up to 99% of CPU power, whereas on a small instance this shouldn’t happen. So is an EC2 small instance going to perform way better than the micro instance? And how will it perform against an equivalent Linode or Rackspace VPS?
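One crude way to check the throttling claim yourself (a hypothetical sketch, not the benchmark methodology used in the post): burn CPU in a tight loop and compare the amount of work done per second at the start against a sample taken after sustained load.

```ruby
# Count how many iterations of busy-work fit into a wall-clock window.
def cpu_burn_rate(seconds = 1.0)
  count = 0
  finish = Time.now + seconds
  count += 1 while Time.now < finish
  count
end

baseline = cpu_burn_rate(1.0)
# ...keep the CPU busy for ~30 seconds, then sample again...
# On a throttled micro instance the second sample should be dramatically
# lower than the baseline; on a small instance it should stay roughly flat.
```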
This post starts as a rant about WebFaction, but somehow turns into a rave. I recently discovered (the hard way) that I can fail over almost any site to a secondary host in a different data centre, all with a few scripts on a WebFaction shared hosting account.
I was reading a few interesting posts about Graphite. When I tried to install it, however, I couldn’t find anything that really covered all the steps. Some guides covered it well for Apache, others covered Nginx, but had steps missing or assumed the reader already knew about them.
I’m a big fan of fabric, and try to do all deployments and installations using it. This way I can re-run the process, and also better document what needs to be done. So instead of writing another guide, I created this fabric script.
One of my primary aims when building a resilient cloud architecture is being able to spawn instances quickly. Many cloud providers give you tools to create images or snapshots of existing cloud instances and launch them. This is great, but not particularly portable. If I have one instance on Linode and I want to clone it to Rackspace, I can’t easily do that.
That’s one of the reasons I am creating bootstrap scripts that completely automate a server (re)build process. Given an IP address and root password, the script should connect to the instance, install all necessary packages, pull the code from the repository, initialize the database, configure the web server and get the server ready for restore of user-data.
I’m primarily using fabric for automating this process, and use a standard operating system across different cloud providers. This allows fairly consistent deployments across providers. It also means the architecture is not dependent on a single provider, which in my opinion is a huge benefit. Not only can my architecture run in different data centres or geographic locations, but I can also be flexible in the choice of hosting providers.
All that aside, building and refining this bootstrapping process allowed me to run it across different cloud providers, namely Rackspace, Linode and EC2. Whilst running the bootstrapping process many times, I thought it might be a great opportunity to compare the performance of those providers side-by-side. My bootstrap process runs the same commands in order, and covers quite a variety of operations. This should give an interesting indication of how each of the cloud providers performs.
Continue reading “bootstrap shooting at the clouds”
One of the best rules of thumb I know is the 80/20 rule. I can’t think of a more practical rule in almost any situation. Combined with the law of diminishing returns, it pretty much sums up how the universe works. One case-study that hopes to illustrate both, if only a little, is a short experiment in optimization I carried out recently. I was reading so many posts about optimizing WordPress using Nginx, Varnish, W3 Total Cache and php-fpm. The results on some of them were staggering in terms of improvements, and I was inspired to try to come up with a similar setup that would really push the boundary of how fast I can serve a WordPress site.
Spoiler – Conclusion
So I know there isn’t such a thing as too much cash, but does the same apply to cache?
Continue reading “How much (cache) is too much?”
It’s always nice to discover a new tool or service that does things differently. Even if just a little. I remember when someone first told me about hipmunk. Just when I thought all flight search websites are pretty much the same, here’s one example of something different.
Perhaps this wasn’t as obviously different as Hipmunk is, but one of the tools I came across recently within the security testing world is Arachni. A number of things made it stand out a little. First of all, it is written in Ruby. That already sparked some curiosity. I’m not entirely sure why, but I guess I’m naturally more interested in programs and tools written in Ruby and Python. The next thing that was evidently different from other web scanners was that Arachni seems to be very pluggable and interface-able. Arachni appears to be geared towards interfacing with external scripts or programs through an API. One of its core features is its distributed architecture, allowing you to launch many modules independently and control them programmatically.
After playing around with it, I came across some issues and couldn’t make it work as I expected. Most of them were down to my own lack of knowledge, or being too lazy to read through the extensive documentation. Luckily, it didn’t take more than a few minutes after posting a question on GitHub to receive a response from Arachni’s creator, Tasos Laskos, aka Zapotek.
After chatting with Tasos a few times via email, I became even more intrigued about him and the project. I then decided it would be interesting to interview him for my blog. I have no experience interviewing people, but what the heck.
Tasos accepted my invitation for an interview, with the condition that it must be a text-based interview. So this interview was carried out via email alone. I personally suspect his voice is funny, but he (obviously) denied it :)
Tasos is certainly not an ordinary person. It becomes apparent when you read his blog, or even the documentation for Arachni. As you can see from the interview, Tasos has very strong and clear opinions. He doesn’t mince his words, and expresses what he thinks very directly. Nevertheless, Tasos and Arachni seem to be doing something a little different, and there’s definitely more to look forward to.
Continue reading “A different kind of spider”