Just a quick rant this time.
I recently signed up for Pinterest. I wasn’t actually interested in signing up, but wanted to see what their sign-up process looks like. If you’ve read one of my previous posts, you’d know I nearly always use unique, unpredictable email addresses for new services I sign up to. Pinterest registration is quite nice, and only asks for a few details and an email address (that is, if you prefer a username and password instead of using Facebook or Twitter to log in). Once you enter the details, Pinterest sends a “Please verify your email” message to your inbox. So far, so good.
However, what happens if you don’t verify your email? That was the case here: I wasn’t actually interested in creating an account, and I assumed I wouldn’t hear from Pinterest again. Wrong. I just received an email from Pinterest announcing their new secret boards. So much for confirming my account. According to Spamhaus, this is considered unconfirmed opt-in, which is categorized as spam.
To add insult to injury, if I try to opt out of the email I just received, Pinterest asks me to log in to my (unconfirmed) account. These are all small annoyances, I know. But is it really that difficult to do things right? An unconfirmed account should not receive any messages. Opt-out links should just be one click and that’s it.
I’ve written about installing and using Graphite before; it’s a really great tool for measuring lots of kinds of metrics. However, most of the guides online don’t touch on the security aspects of this setup, and there was at least one thing I thought was worth writing about.
How are we measuring?
Metrics we gather from our applications have the following characteristics / requirements:
- We want to gather lots of data over time.
- No single data point is significant on its own; only in aggregate.
- Measuring is important, but not if it slows down our application in any way.
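These requirements are exactly why statsd-style metrics go over fire-and-forget UDP: a down or slow stats server never blocks the application, and a lost packet only costs one insignificant data point. A minimal sketch — the counter wire format `name:1|c` is statsd’s, but the host, port and metric name here are just placeholders for your own setup:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def incr(metric, host="127.0.0.1", port=8125):
    """Fire-and-forget counter increment in statsd's 'name:1|c' format."""
    try:
        sock.sendto(f"{metric}:1|c".encode(), (host, port))
    except OSError:
        pass  # losing one data point is fine -- they only matter in aggregate

incr("signups")
```

Because the send never waits for an acknowledgement, the cost to the request path is a single non-blocking syscall.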
Continue reading “Statsd and Carbon security”
For those who followed my previous post, I thought I should post a quick update.
I was naturally quite surprised to be contacted by Rackspace shortly after posting. This was a nice surprise, and the contact afterwards was somewhat more understanding. At least I could sense they felt sorry for my situation.
As expected, there was no way to recover the lost image. I received a follow-up message on the original ticket confirming this quite clearly. They then rather swiftly changed the tone into legal-speak and referred me to their terms of service, which I quote here for the benefit of the world at large.
Continue reading “Rackspace ate my homework pt. II”
One of the greatest promises of cloud computing is resilience. Store your data ‘in the cloud’ and access it from anywhere; enjoy high durability and speed. You know the marketing spiel already. A recent incident reminded me of the importance of backups. In fact, the importance of backups of backups. Sounds strange? Of course it does. This is the tale of a missing server image.
Continue reading “Rackspace ate my homework”
Coming from Django, I was a little surprised/disappointed that permissions aren’t as tightly integrated with Rails’ ActiveAdmin as they are with the Django admin. Luckily, my search for better authorization for ActiveAdmin led me to this very informative post by Chad Boyd. It makes things much easier, so we can authorize resources more flexibly.
However, there were a couple of aspects that I still wasn’t 100% happy with:
- When an unauthorized action is attempted, the user is simply redirected with an error message. I personally like to return a 403 response / page. Yes, I’m nitpicking. I know.
- Default actions like Edit, View and Delete still appear, regardless of the permissions the user has. Clicking on those won’t actually let you do anything, but why show options on the screen if they are not actually allowed?
So with my rather poor Ruby/Rails skills, and together with my much more experienced colleague, we’ve made a few tweaks to the approach in Chad’s post to make it happen.
Continue reading “More ActiveAdmin Customizations with CanCan”
It’s always nice to be able to get some feedback, or for users to make contact via a simple contact form. However, it didn’t take too long before spammers started hitting those forms too. It was quite interesting to see the kind of messages we started receiving. In a way, most of those submissions were more like stories, or snippets from an email to a friend. They didn’t have any of those much-expected keywords for fake watches or erectile dysfunction enhancers. Many didn’t even have any links. So what were these messages then? My personal guess was that these were some kind of reconnaissance attempt. The bots were sending innocent messages first to various online forms. Then, I imagine, they would crawl the site, trying to see if those submissions appear elsewhere. If/when they do, they will hit those forms hard with the real spam content. In any case, these were all speculations that I didn’t really care to prove right or wrong. I just wanted to get rid of this junk. Fast.
Continue reading “Simple Detection of Comment Spam in Rails”
A recent comment by Martyn on my cloud performance shoot-out post prompted me to do another round of testing. As the bootstrap process I described in the last post has evolved, it’s always a good idea to test it anyway, so why not kill two birds with one stone? The comment suggested that the Amazon EC2 micro instance is CPU throttled, and that after a long period (in computer terms: about 20 seconds, according to the comment), you could lose up to 99% of CPU power, whereas on a small instance this shouldn’t happen. So is the EC2 small instance going to perform way better than the micro instance? And how will it perform against an equivalent Linode or Rackspace VPS?
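One simple way to observe this kind of throttling for yourself is to count busy-loop iterations in fixed wall-clock windows and watch the rate over time. This is only a rough sketch, not the benchmark used in the post:

```python
import time

def burn(seconds):
    """Count busy-loop iterations completed in a fixed wall-clock window."""
    end = time.time() + seconds
    n = 0
    while time.time() < end:
        n += 1
    return n

# Sample repeatedly: on a throttled instance the iteration count
# collapses once the CPU allowance runs out (reportedly ~20 seconds).
samples = [burn(0.5) for _ in range(3)]
```

On an unthrottled machine the samples stay roughly constant; a sudden drop of an order of magnitude or two between early and late samples is the throttle kicking in.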
Continue reading “Bootstrap cloud shoot-out part II”
This post starts as a rant about WebFaction, but somehow turns into a rave. I recently discovered (the hard way) that I can fail over almost any site to a secondary host in a different data centre, all with a few scripts on a WebFaction shared hosting account.
Continue reading “Webfaction fail. over.”
fabric-graphite is a fabric script to install Graphite, Nginx, uwsgi and all dependencies on a Debian-based host.
I was reading a few interesting posts about Graphite. When I tried to install it, however, I couldn’t find anything that really covered all the steps. Some guides covered it well for Apache, others covered Nginx but had steps missing, or assumed the reader knew about them, etc.
I’m a big fan of fabric, and try to do all deployments and installations using it. This way I can re-run the process, and also better document what needs to be done. So instead of writing another guide, I created this fabric script.
Continue reading “Fabric Installer for Graphite”
One of my primary aims when building a resilient cloud architecture is being able to spawn instances quickly. Many cloud providers give you tools to create images or snapshots of existing cloud instances and launch them. This is great, but not particularly portable. If I have one instance on Linode and I want to clone it to Rackspace, I can’t easily do that.
That’s one of the reasons I am creating bootstrap scripts that completely automate a server (re)build process. Given an IP address and root password, the script should connect to the instance, install all necessary packages, pull the code from the repository, initialize the database, configure the web server and get the server ready for restore of user-data.
I’m primarily using fabric for automating this process, and use a standard operating system across different cloud providers. This allows fairly consistent deployments across different providers. It also means the architecture is not dependent on a single provider, which in my opinion is a huge benefit. Not only can my architecture run in different data centres or geographic locations, but I can also be flexible in the choice of hosting providers.
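To make the idea concrete, here’s a minimal sketch of such a bootstrap sequence. Fabric’s `run()` is replaced with a local stand-in that just records commands, so the sketch runs anywhere without a live host; the package list, repository URL and paths are all made up:

```python
# A stand-in for Fabric's run(): it records each command, so the
# bootstrap sequence can be sketched (and tested) without a live host.
# Package names, repository URL and paths below are hypothetical.
executed = []

def run(cmd):
    executed.append(cmd)

def bootstrap():
    run("apt-get update && apt-get -y install nginx postgresql git")
    run("git clone https://example.com/app.git /srv/app")
    run("/srv/app/manage.py syncdb --noinput")
    run("service nginx restart")

bootstrap()
```

In the real script each `run()` executes over SSH, so the same ordered command list works against a fresh instance from any provider.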
All that aside, building and refining this bootstrapping process allowed me to run it across different cloud providers, namely Rackspace, Linode, and EC2. Whilst running the bootstrapping process many times, I thought it might be a great opportunity to compare the performance of those providers side by side. My bootstrap process runs the same commands in order, and covers quite a variety of operations. This should give an interesting indication of how each of the cloud providers performs.
Continue reading “bootstrap shooting at the clouds”
One of the best rules of thumb I know is the 80/20 rule. I can’t think of a more practical rule in almost any situation. Combined with the law of diminishing returns, it pretty much sums up how the universe works. One case study that hopes to illustrate both of these, if only a little, is a short experiment in optimization I carried out recently. I was reading so many posts about optimizing WordPress using nginx, Varnish, W3 Total Cache and php-fpm. The results on some of them were staggering in terms of improvements, and I was inspired to try to come up with a similar setup that would really push the boundary of how fast I can serve a WordPress site.
Spoiler – Conclusion
So I know there isn’t such a thing as too much cash, but does the same apply to cache?
Continue reading “How much (cache) is too much?”
It’s always nice to discover a new tool or service that does things differently. Even if just a little. I remember when someone first told me about hipmunk. Just when I thought all flight search websites were pretty much the same, here was one example of something different.
Perhaps this wasn’t as obviously different as hipmunk is, but one of the tools I came across recently within the security testing world is Arachni. A number of things made it stand out a little. First of all, it is written in Ruby. That already sparked some curiosity. I’m not entirely sure why, but I guess I’m naturally more interested in programs and tools written in Ruby and Python. The next thing that was evidently different from other web scanners was that Arachni seems very pluggable and interface-able. Arachni appears to be geared towards interfacing with external scripts or programs through an API. One of its core features is its distributed architecture, allowing many modules to be launched independently and controlled programmatically.
After playing around with it, I came across some issues and couldn’t make it work as I expected. Most of them were down to my own lack of knowledge, or laziness in reading through the extensive documentation. Luckily, it took no more than a few minutes after posting a question on github before I received a response from Arachni’s creator, Tasos Laskos, aka Zapotek.
After chatting with Tasos a few times via email, I became even more intrigued about him and the project. I then decided it would be interesting to interview him for my blog. I have no experience interviewing people, but what the heck.
Tasos accepted my invitation for an interview, with the condition that it must be a text-based interview. So this interview was carried out via email alone. I personally suspect his voice is funny, but he (obviously) denied it :)
Tasos is certainly not an ordinary person. It becomes apparent when you read his blog, or even the documentation for Arachni. As you can see from the interview, Tasos has very strong and clear opinions. He doesn’t mince his words, and expresses what he thinks very directly. Nevertheless, Tasos and Arachni seem to be doing something a little different, and there’s definitely more to look forward to.
Continue reading “A different kind of spider”
If you’re updating your debian stable (squeeze) and using Lighttpd as your web server, you might come across a security notice on how to fix your lighttpd against the BEAST attack.
lighttpd (1.4.28-2+squeeze1) stable-security; urgency=high
This releases includes an option to force Lighttpd to honor the cipher order
in ssl.cipher-list. This mitigates the effects of a SSL CBC attack commonly
referred to as "BEAST attack". See  and CVE-2011-3389 for more details.
To minimze the risk of this attack it is recommended either to disable all CBC
ciphers (beware: this will break older clients), or pursue clients to use safe
ciphers where possible at least. To do so, set
ssl.ciphers = "ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!EDH:!AESGCM"
ssl.honor-cipher-order = "enable"
in your /etc/lighttpd/conf-available/10-ssl.conf file or on any SSL enabled
host you configured. If you did not change this file previously, this upgrade
will update it automatically.
There’s a mistake in this note, however. Instead of
Continue reading “updating lighttpd on debian squeeze against the BEAST”
Please note that since Tastypie v0.9.12 the authorization framework was rewritten. Lots of information on this post no longer applies. I’m hoping to write a follow-up post at some stage.
I’ve been using tastypie, the very awesome Django REST API framework, for a little while now (btw, that’s not the official title, but it might as well be). I’m not going to write yet another comparison between tastypie and django-piston. My reasons for choosing tastypie were that its code looked nicer, and it seemed a much more active project.
One of the things that I immediately liked about tastypie, being a security-geek and all, was the security framework built into it. Primarily the authentication and authorization classes. They make it very easy to extend, and almost a no-brainer to apply to any resource. This means that providing resource-level authorization is also very easy and clean.
However, whilst working with tastypie and applying some authorization rules to my resources, I noticed a couple of pitfalls. They are quite easy to miss if you’re not very familiar with the tastypie codebase. I wouldn’t say it’s a vulnerability or a bug as such; perhaps more of a (sub-optimal) design choice from a security perspective. That said, if you use tastypie incorrectly, or are unaware of those pitfalls, you might create a security vulnerability in your otherwise delicious API.
Continue reading “Keep your hands off my tastypie”
When talking about security, the first thing that usually comes to mind is encryption. Spies secretly coding (or de-coding) some secret message that should not be revealed to the enemy. Encryption is this mysterious thing that turns all text into a part of the matrix. Developers generally like encryption. It’s kinda cool. You pass stuff into a function, get some completely scrambled output. Nobody can tell what’s in there. You pass it back through another function – the text is clear again. Magic.
Encryption is cool. It is fundamental to doing lots of things on the Internet. How could you pay with your credit card on Amazon without encryption? How can you check your bank balance? How can MI5 pass their secret messages without Al-Qaida intercepting it?
But encryption is actually not as useful as people think. It is often used in the wrong place. It can easily give a false sense of security. Why? People forget that encryption, by itself, is usually not sufficient. You cannot read the encrypted data, but nothing stops you from changing it. In many cases, it is very easy to change encrypted data without knowledge of the encryption key.
Continue reading “encryption is not the right solution”
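To illustrate that last point about changing encrypted data without the key, here’s a small Python toy: XOR with a random keystream stands in for a stream cipher, and an attacker who knows only the message format (not the key) flips a single digit of the amount. This is a deliberately simplified sketch, not any specific protocol:

```python
import os

def xor(data, keystream):
    # stand-in for a stream cipher: encryption and decryption are the same XOR
    return bytes(a ^ b for a, b in zip(data, keystream))

keystream = os.urandom(16)          # the secret -- the attacker never sees it
plaintext = b"PAY   100 POUNDS"
ciphertext = xor(plaintext, keystream)

# Attacker knows the format: byte 6 holds the first digit of the amount.
# XORing the ciphertext byte with ('1' ^ '9') turns that digit into a 9.
tampered = bytearray(ciphertext)
tampered[6] ^= ord("1") ^ ord("9")

assert xor(bytes(tampered), keystream) == b"PAY   900 POUNDS"
```

The recipient decrypts a perfectly readable message that says something the sender never wrote; this is why authentication (a MAC) is needed alongside encryption.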
Scoring a goal against Google is never easy. Google Analytics allows you to do some strange and wonderful things, but not without some teeth grinding. I was struggling with this for a little while, and it was a great source of frustration, since there’s hardly any info out there about it. Or maybe there is lots of info, but no solution to this particular problem. I think I finally nailed it.
Dynamic Goal Conversion Values
I was trying to get some dynamic goal conversion values into Analytics. I ended up reading about Ecommerce tracking, and it seemed like the way to go. Not only would I be able to pick the goal conversion value dynamically, it also gives you a breakdown of each and every transaction. Very nice. After implementing it, I was quite impressed to see each transaction, product, SKU etc. appear neatly on the ecommerce reports. So far so good. But somehow, goals – which were set on the very same page as the ecommerce tracking code – failed to add the transaction value. The goals were tracked just fine, I could see them adding up, but not the goal value. grrrr…
Continue reading “dynamic goal values in google analytics”
I’ve come across a small nuisance that seems to appear occasionally with unicode URLs. Some websites encode/escape/quote URLs as soon as they see any symbol (particularly the % sign). They appear to assume it needs to be encoded, and convert any such character to its URL-encoded form. For example, the percent (%) symbol will convert to %25, ampersand (&) to %26 and so on.
This is not normally a problem, unless the URL is already encoded. Since all unicode-based URLs use this encoding, they are more prone to these errors. What happens then is that a URL that looks like this:
will be encoded again to this:
So clicking on such a double-encoded link will unfortunately lead to a 404 page (don’t try it with the links above, because the workaround was already applied there).
This workaround is specific to the WordPress 404.php, but can be applied quite easily in other frameworks like Django or Drupal, and maybe even using an Apache htaccess rule(?).
<?php
/* detecting 'double-encoded' urls
 * if the request uri contains %25 (the url-encoded form of the '%' symbol)
 * within the first few characters, we try to decode the url and redirect
 */
$pos = strpos($_SERVER['REQUEST_URI'], '%25');
if ($pos !== false && $pos < 10) :
    header("HTTP/1.1 301 Moved Permanently");
    header("Location: " . urldecode($_SERVER['REQUEST_URI']));
    exit;
endif;
?>
<h2>Error 404 - Page Not Found</h2>
<?php get_sidebar(); ?>
This is placed only in the 404 page. It grabs the request URI and checks whether it contains the string ‘%25’ within the first 10 characters (you can adjust the check to suit your needs). If it finds it, it redirects to a urldecoded version of the same page…
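The double-encoding itself is easy to reproduce. Here’s a short Python illustration — the URL path is made up, but the behaviour is exactly the naive re-encoding described above:

```python
from urllib.parse import quote, unquote

# a made-up, already percent-encoded URL path (e.g. a Hebrew slug)
url = "/2011/05/%D7%A9%D7%9C%D7%95%D7%9D/"

double = quote(url)            # naive re-encoding turns every '%' into '%25'
assert "%25" in double

assert unquote(double) == url  # a single decode pass restores the original
```

Which is why a single `urldecode()` in the 404 handler is enough to recover the intended URL.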
In my previous post I talked about Django memory management, the little-known maxrequests parameter in particular, and how it can help ‘pop’ some balloons, i.e. kill and restart some Django processes in order to release memory. In this post I’m going to cover some things to do or avoid in order to keep memory usage low from within your code. In addition, I’m going to show at least one method to monitor (and act automatically!) when memory usage shoots through the roof.
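As a taste of the monitoring part, here’s one possible sketch using Python’s standard resource module. The threshold and the restart action are hypothetical, and the units assume Linux, where ru_maxrss is reported in kilobytes:

```python
import os
import resource

MAX_RSS_MB = 200  # hypothetical threshold for one worker process

def memory_exceeded():
    # peak resident set size of this process; kilobytes on Linux
    rss_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return rss_kb / 1024.0 > MAX_RSS_MB

# e.g. checked once per request; the master process (whatever runs your
# workers) is assumed to spawn a replacement when a worker exits
if memory_exceeded():
    os._exit(0)
```

The idea is the same as maxrequests, just triggered by actual memory use rather than a fixed request count.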
Continue reading “django memory leaks, part II”
A while ago I was working on optimizing memory use for some Django instances. During that process, I managed to better understand memory management within Django, and thought it would be nice to share some of those insights. This is by no means a definitive guide. It’s likely to have some mistakes, but it helped me grasp the configuration options better, and allowed easier optimization.
Does django leak memory?
In actual fact, no, it doesn’t. The title is therefore misleading, I know. However, if you’re not careful, your memory usage or configuration can easily lead to exhausting all memory and crashing Django. So whilst Django itself doesn’t leak memory, the end result is very similar.
Memory management in Django – with (bad) illustrations
Let’s start with the basics: the Django process. A Django process is the basic unit that handles requests from users. We have several of those on the server, to allow handling more than one request at a time. Each process, however, handles one request at any given time.
But let’s look at just one.
Cute, isn’t it? It’s a little like a balloon, actually (and balloons are generally cute). The balloon has a certain initial size, to allow the process to do all the stuff it needs to. Let’s say this is balloon size 1.
Continue reading “django memory leaks, part I”
About a month ago I posted about tweaking timthumb to work with a CDN. Timthumb is a great script, but great scripts have bugs too, and a recently discovered one is rather serious. It can allow attackers to inject arbitrary PHP code onto your site, and from there onwards, pretty much take control over it.
Luckily no websites I know of or maintain were affected, possibly because the htaccess change I used shouldn’t allow remote URLs in the first place (and it also removed timthumb.php from the URL string, making it slightly obfuscated). I still very strongly advise anybody using timthumb to upgrade to the latest version to avoid risks.
Following on from my previous post, I’ve come across another issue related to caching in WordPress: dynamic content. There’s a constant trade-off between caching and dynamic content. If you want your content to be truly dynamic, you can’t cache it properly; if you cache the whole page, it won’t show the latest update. W3 Total Cache, WP Super Cache and others have some workarounds for this. For example, W3TC has something called fragment caching. So if you have a widget that displays dynamic content, you can use fragment caching to prevent it from being cached. However, from what I worked out, all it does is essentially prevent the page containing the fragment from being fully cached, which defeats the purpose of caching (especially if the widget is on the sidebar of all pages).
The thing is, I wanted to take it a step further. If I can do it by following this manual process, why can’t I use a plugin that, erm, ‘ajaxizes’ other plugins? I tried to search for solutions, but found none. So I decided to write one myself. It’s my first ‘proper’ plugin, but I think it works pretty well.
Continue reading “ajaxizing”
[IMPORTANT: please check that you have the latest version of timthumb! older versions might have a serious security vulnerability. A little more about it here]
I’ve recently been trying to optimize a WordPress-based site. It was running fine, but I wanted to make it even faster, and make the best use of resources. So I ended up picking W3 Total Cache (W3TC). It’s very robust and highly configurable, if perhaps a bit complicated to fully figure out. Eventually things were running fine, and my next task was to boost it even further by using a Content Delivery Network (CDN). In this case, the choice was Amazon CloudFront. The recent release allowed managing custom origins from the console, which made things even easier. One of the remaining issues, however, was trying to optimize timthumb.
Timthumb was already included with the theme, and I liked the way it works. It allowed some neat features, like fitting screenshots nicely, and also fitting company logos well within a fixed size (with the zc=2 option). A Google search led me to a couple of sources. However, for some reason none of them worked, so I ended up using a slightly different solution…
Continue reading “thumbs up”
A quick-tip on the importance of timestamps and making sure your time zone is set correctly.
I was recently playing around with fail2ban. It’s a really cool little tool that monitors your log files, matches certain patterns, and can act on it. Fail2ban would typically monitor your authentication log file, and if for example it spots 5 or more consecutive failures, it would simply add a filter to your iptables to block this IP address for a certain amount of time. I like fail2ban because it’s simple and effective. It does not try to be too sophisticated, or have too many features. It does one thing, and does it very well.
I was trying to build a custom rule to watch a specific application log file. I had a reasonably simple regular expression, and I was able to test it successfully using fail2ban-regex. It matched the lines in the log file, and gave me a successful result:
Success, the total number of match is 6
However, when running fail2ban, even though it loaded the configuration file correctly and detected changes in the log files, fail2ban, erm, failed to ban… I couldn’t work out what the problem was.
As it turns out, the timestamps in my log file were set to a different time zone, so fail2ban treated those log entries as too old and did not take action. Make sure your timestamps are correct and in the same time zone as your system! Once the time zone was set, fail2ban was working just fine.
[Also published on testuff.com]
Most people I know tend to simply use the same password on ALL websites. Email, Paypal, Amazon, Ebay, Facebook, Twitter. This is obviously a very bad idea.
Passwords are always a problem. Difficult to remember, hard to think of a good one when you need a new one, tricky to keep safe. For the moderately-paranoid and the sufficiently-techie there are many good solutions out there. Password managers. Online, offline, commercial, free. So I usually suggest to my friends and colleagues to use a password manager.
Continue reading “passwordless password manager”
This Saturday, 8th January 2011, I’m running a small geeky arts project at Madame Lillie’s gallery in Stoke Newington.
SMILE – a temporary exhibition
The smile project attempts to capture snapshots within the exhibition space. The audience takes an active role as part of the work, and passively or actively affects it. The exhibition space holds a number of webcams, each capturing still-image snapshots at random. Some cameras are hidden, whilst others are visible. Those snapshots are then randomly laid out and printed onto photographs every few minutes. The audience is invited to take those snapshots home, as a souvenir and a piece of the artwork. Each snapshot is unique and cannot be reproduced. The images are deleted immediately after being processed and printed out.
Influenced by thoughts about the London surveillance network, the smile project looks at the proliferation of cameras that capture parts of our lives, and the knowledge that we all, willingly or unknowingly, appear in images captured by others. With the advances in technology it is becoming increasingly easy to take photos and videos. It is also cheap and easy to keep those on file for a long period of time, perhaps indefinitely. Photos and videos that we take these days are instant and perishable: they appear briefly on our Facebook page and get immediate attention until quickly replaced by others. Yet at the same time we cannot truly delete them. Once posted online, they are beyond our control. They are stored electronically, archived and backed up. They are searchable and indexed. Whether we are the subjects of the images or those who create them, we have little control over them.
smile attempts both to make use of and to question the technology that dominates our modern lives. It is meant to be a fun and humorous experience, involving the audience and rewarding it. It uses digital imaging technology, but produces a tangible, unique output. The creation process involves programming in various scripting languages and a mix of digital tools, primarily open source, all forming part of a random montage.
I was really pleased when my good friend Chris asked me to help him with his edition of 1 project. I guess it was exciting working on an arts project. I also liked his project because randomness is an interesting concept, particularly when it comes to computers. Put very simply: computers have trouble picking stuff at random.
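A quick Python illustration of that trouble — a pseudo-random generator is entirely determined by its seed, whereas the operating system’s entropy pool is not reproducible this way:

```python
import os
import random

random.seed(42)                 # a PRNG is fully determined by its seed
first = [random.randint(0, 9) for _ in range(5)]
random.seed(42)
second = [random.randint(0, 9) for _ in range(5)]
assert first == second          # the same seed replays the same "random" sequence

token = os.urandom(8)           # OS entropy: no seed to replay
assert len(token) == 8
```

For an art project the replayable kind is usually fine; the point is simply that computers only ever approximate randomness unless they pull in entropy from outside.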
Continue reading “2nd edition of 1”
One-time passwords have always fascinated me. A long, long time ago, in a land far, far away, I suddenly had this idea. The idea was simple, and in today’s terms pretty common, perhaps trivial: a one-time password without the need for an extra token. After users key in their username and password, they get sent a random password via SMS. Ten years ago there wasn’t anything that did that. I created a basic RADIUS implementation with support for different SMS gateways, all in Java. Sadly however, with no funding, no clue how to turn it into a business, and having just finished my computer science degree, it had to be abandoned for an easier day job.
I was recently pulled into looking at two-factor authentication (2FA) solutions. I used SecurID at a previous job, and know of several solutions in this area. I was quite pleased to discover many soft-token solutions running on mobile phones (iPhone, BlackBerry, HTC, Nokia) and USB-based ideas like Yubikey. I was even more pleased to discover open source initiatives in this area, and OATH HOTP in particular.
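HOTP itself is pleasantly simple. Here’s a short Python sketch of the algorithm as specified in RFC 4226 — HMAC-SHA1 over a counter, followed by dynamic truncation — checked against the RFC’s published test vectors:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP per RFC 4226: HMAC-SHA1 of the counter, then dynamic truncation."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # low nibble picks a 4-byte window
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vectors for the ASCII secret "12345678901234567890"
assert hotp(b"12345678901234567890", 0) == "755224"
assert hotp(b"12345678901234567890", 1) == "287082"
```

The server and the token share the secret and a counter; each side computes the same six digits independently, which is what makes soft tokens on a phone possible.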
Continue reading “Once upon a time”
I recently noticed my iPhone clock wasn’t accurate. I’m not exactly sure why. It was only a few minutes behind, but it still annoyed me. Why couldn’t my iPhone sync its time with an internet time server? I know it is supposed to sync with my mobile network operator, but mine doesn’t seem to sync… Maybe it’s my operator?
For jailbroken iPhones, there’s a neat app on Cydia called NTPDate. It’s a great app and I recommend installing it. All you need to do is specify the NTP server and click ‘set’, and it will sync your clock for you. However, I wanted to go a step further. I wanted my iPhone to sync itself automatically, using a cron job. Well, not quite using cron, but it can be done automatically.
Continue reading “iphone running late”
In my last post I described how I get my Asterisk box to know the caller’s name from a CSV data file. The thing is, my address book keeps changing on my iPhone. People change their phone numbers, I meet new people (can you believe it? I don’t let it happen too often though)… I wanted to be able to sync it automatically to my Asterisk box. This synchronisation also doubles up as a backup of my address book.
Continue reading “iphone asterisk sync”
Caller ID is a wonderful feature. Don’t we love getting a call from someone we like, and perhaps more importantly, ignoring those annoying callers we really don’t want to talk to?
But this is yesterday’s news. We all have caller ID. It just works. Well, yes, it does. But what if we get a call on our landline? We get the caller ID there too, but do we know who it is? All our contacts are on our mobile phones. Standard phones don’t usually have the capacity to hold more than about 10 names. And even if they did, who’s got the energy to key in all those numbers?
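The lookup at the heart of this is tiny. Here’s a hedged Python sketch — the numbers, names and two-column file format are made up, and the idea is that the PBX would invoke something like this (e.g. from a script) with the incoming caller ID:

```python
import csv
import io

# hypothetical address-book export, one "number,name" row per contact
ADDRESS_BOOK = "02079460000,Bob the Plumber\n07700900123,Alice\n"

def caller_name(number: str, csv_text: str) -> str:
    """Return the contact name for a number, or a fallback label."""
    for row in csv.reader(io.StringIO(csv_text)):
        if row and row[0] == number:
            return row[1]
    return "Unknown caller"

assert caller_name("07700900123", ADDRESS_BOOK) == "Alice"
```

The returned name can then be attached to the call so it shows on the handset display instead of a bare number.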
Continue reading “who’s calling?”
ISPs are a strange breed. They’re supposed to provide a very straightforward service – plug me in to the Internet, please. That’s all. Plain and simple. Yet it seems some ISPs have different ideas as to their roles and responsibilities. Traffic shaping is one of those. Port/service blocking is its ugly cousin. I don’t like either. You’re not my Big Brother. If I wanted one I’d move to China.
Continue reading “Get in shape”
Perhaps yet another misleading title for this post, but bear with me. When I was a child we used to play outside a lot. I always remember the neighbours complaining if we made too much noise. Such is life. There was one period of the day when I knew I would get in trouble, though. We would get told off big time! When? Every day between 2 and 4 in the afternoon. There was even a sign in big red letters telling us all to keep quiet at ‘rest time’ (loosely translated). There was no sign about making noise after 11pm, but there was one for the afternoon nap. It was THAT important.
Those days seem long gone now. Does anybody have time for an afternoon nap? I certainly don’t recall seeing any such signs around.
Continue reading “Guilty Pleasures”
This is something I never thought would concern me. All those domain ownership issues, I thought, happened to people who just aren’t organised enough to renew their domains, forget their passwords, or pick domains that others try to steal.
I’m currently managing a couple of .DE domains and one .CO.UK domain on GoDaddy. The .co.uk domain was bought for a fairly long period, so no problems there. Recently, however, both .de domains were up for renewal. I received a couple of email reminders from GoDaddy, but when I logged into my account I noticed they were only due to expire on 26th March – more than a month away. When the last email arrived on the 20th, I decided it was probably time to renew, so I clicked the link and got my credit card ready.
To my surprise, the GoDaddy domain portal did not allow me to renew them, and marked them as ‘pending expiration’. Hmmm… weird. Oh well, I emailed GoDaddy and asked to renew, not worrying too much. Maybe a system glitch of some sort.
Continue reading “GoDaddy taking european domains hostage prior to expiration”
A strange thing happened to me a while ago. I sent a CD in a padded envelope to someone, and it was returned to me. Well, it didn’t look like it was returned; more like they actually delivered it to me instead of to the person I sent it to.
Then I noticed something. I was re-using an old envelope. For environmental reasons, of course (read: being so tight-assed, saving money on padded envelopes). I did write the destination address on the right side, so my address was nowhere on the envelope. What was left there, however, was the tiny printed barcode from the previous sender, carrying my address.
Continue reading “Postcode, Barcode and python code”
Perhaps the title is a little misleading, but it’s an opportunity to combine two of my greatest loves: food and computers. I suppose even this intro is misleading. Oh, forget it. Let’s get down to business. And this time our business is rather short (and sweet).
Running tcpdump on my Linksys router (well, Buffalo WHR-54GS to be precise, but the same famous WRT54G clone that runs open source firmware).
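The gist is a one-liner: ssh into the router, run tcpdump there, and pipe the raw capture back to a desktop analyser. A sketch, with illustrative address, interface and user names, assuming tcpdump is installed on the router (e.g. via optware):

```shell
# Capture on the router's LAN bridge and stream the pcap straight into
# Wireshark on the desktop. "not port 22" keeps the ssh session itself
# out of the capture, avoiding a feedback loop.
ssh root@192.168.1.1 "tcpdump -i br0 -s 0 -U -w - not port 22" | wireshark -k -i -
```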
Continue reading “sniffing some fresh tomatoes”
I don’t like being late. It runs in the family. My dad is so obsessed with not being late that he’s always early. How embarrassing. Unfortunately, I seem to have inherited it from him. I do, however, try to compensate: I am deliberately late, on time. I usually plan to be a bit late.
Anyway, enough about me.
Continue reading “cron woes – file timestamp monitoring plugin for munin”
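The plugin the title refers to boils down to graphing the age of a file that a cron job is supposed to touch, using munin’s usual config/fetch protocol. A minimal sketch, not the actual plugin; the stamp path and field names are my own illustration:

```shell
#!/bin/sh
# Hypothetical munin plugin sketch: report how many seconds ago a
# timestamp file was last touched, so a stalled cron job shows up as a
# climbing line on the graph. The default path below is illustrative.
STAMP="${STAMP:-/var/run/nightly-job.stamp}"

plugin() {
    case "$1" in
    config)
        # munin calls the plugin with "config" to get graph metadata
        echo "graph_title Age of ${STAMP}"
        echo "graph_vlabel seconds since last touch"
        echo "age.label file age"
        ;;
    *)
        now=$(date +%s)
        # GNU stat prints the mtime as a unix timestamp; fall back to
        # "now" (i.e. age 0) if the file is missing
        mtime=$(stat -c %Y "$STAMP" 2>/dev/null || echo "$now")
        echo "age.value $((now - mtime))"
        ;;
    esac
}

plugin "$@"
```

Symlinked into /etc/munin/plugins/ (with STAMP set via munin’s plugin-conf.d env mechanism), munin-node would then poll it every five minutes.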
Not a particularly interesting post, unless you happen to be running Debian 4.0 (etch) on a Virtuozzo host and are trying to install JDK 5. I saw a user reporting the same error on the Parallels forum, but couldn’t find a clear solution until I stumbled onto an obscure forum in German, which luckily linked to a related bug report (in English).
Continue reading “JDK 5 Debian etch Virtuozzo installation oddities”
Just to set the record straight, I’m the real dummy here, as you will obviously see.
I have absolutely no experience with Python, lighttpd, TurboGears, WSGI or FastCGI, but I was given the task of setting it all up on a server. All I used were some online posts and common sense. Normally I wouldn’t post a blog entry about it; most of the stuff is already out there. On this occasion, however, I felt some documentation was missing; or else, simply because I know nothing about any of these, it was trickier than it usually is. Perhaps it’s one indicator of how this all looks from the sysadmin perspective, as opposed to a TurboGears developer’s.
Continue reading “Turbogears, Lighttpd and WSGI for real dummies”
[Migrated from tuzig.com]
This is something I thought would be simple. Then I began scratching my head reading all the posts on the web, thinking I might actually need to get a proper book. Finally I realised it was actually quite simple…
Continue reading “Windows 2003 split DNS”
[Migrated from tuzig.com]
I’ve been struggling to get the iCal app on Mac OS X to use a shared calendar on Apache, installed on Windows and using domain authentication (SSPI). It is all supposed to be so simple, yet it didn’t work. Authentication failed with this message:
Access is denied at http://server.com/file.ics with this login and password
When using Safari to access the same WebDAV URL, authentication works just fine. I could even download the calendar file in Safari and open it in iCal, but for some bizarre reason neither subscribing nor publishing worked.
I’ll leave out all my trials and errors. Eventually it came down to SSPI authentication not working well with iCal, even with SSPIOfferBasic On and SSPIBasicPreferred On. Only when I added SSPIOfferSSPI Off did it start working…
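For reference, the relevant mod_auth_sspi section ended up looking roughly like this. A sketch only: the directory path, AuthName and the rest of the block are illustrative, not copied from my actual config.

```apache
# Illustrative mod_auth_sspi setup for the shared calendar directory
<Directory "C:/calendars">
    AuthName "Shared calendars"
    AuthType SSPI
    SSPIAuth On
    SSPIOfferBasic On        # let non-SSPI clients fall back to Basic auth
    SSPIBasicPreferred On    # advertise Basic before SSPI
    SSPIOfferSSPI Off        # the directive that finally made iCal work
    Require valid-user
</Directory>
```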
I’m still slightly confused as to why subscribed calendars can’t be modified in iCal, since Sunbird does allow it, but at least you can publish your own calendar and subscribe to others’.
[Migrated from tuzig.com]
2009 Aug 14 Update: Looks like rsnapshot is now packaged in cygwin! Thanks to pseudo-anonymous coward for the comment! Some of the information below may still be of interest, so it’s left unchanged.
Rsnapshot is a great tool for backing up data, keeping versions and using relatively minimal disk space. From the rsnapshot website:
“rsnapshot is a filesystem snapshot utility for making backups of local and remote systems.”
Continue reading “Rsnapshot Server on Windows”
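To give a flavour, a minimal rsnapshot.conf under cygwin might look something like this. All paths and retention values are illustrative, fields must be separated by tabs (not spaces), and older rsnapshot versions spell `retain` as `interval`:

```
# Minimal rsnapshot.conf sketch for a cygwin install (illustrative)
config_version	1.2
snapshot_root	/cygdrive/d/snapshots/
retain	hourly	6
retain	daily	7
backup	/cygdrive/c/Users/	localhost/
backup	root@remotehost:/etc/	remotehost/
```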