I’ve just released a simple Chef cookbook that installs Node.js from source. You can check it out directly on GitHub or download it from the Opscode cookbook site. Let me know what you think if you find it useful.
Eucalyptus 2.0 was just released yesterday; the latest version of the Eucalyptus open source cloud introduces several new features, including iSCSI support for EBS volumes, S3 versioning, virtio support for KVM hypervisors, and new administrator tools:
High Scalability: Eucalyptus employs a software design in which scalability is achieved at two levels: transactional scalability at the front end and resource scalability at the back end. The new version of Eucalyptus improves back-end cluster scalability to support massive private and hybrid clouds.
Support for iSCSI protocol for EBS volumes: Eucalyptus now supports Internet Small Computer System Interface (iSCSI) protocol for EBS volumes, which can make overlaying a Eucalyptus cloud on top of existing IT infrastructure even easier. This feature gives Eucalyptus users the flexibility to situate the EBS controller machine anywhere on the cloud, including outside the broadcast domain of the cloud nodes.
KVM virtio support: Eucalyptus 2.0 supports KVM virtio, an efficient abstraction for hypervisors and a common set of I/O virtualization drivers. Users now have the flexibility to choose between emulated device drivers or direct kernel supported I/O devices via virtio for performance tuning.
S3 versioning: Eucalyptus 2.0 extends its innate compatibility with AWS with support for S3 versioning. Now users can perform version control on the objects stored in Eucalyptus Walrus. Through a well-defined API, Eucalyptus users can retrieve specific versions of objects.
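Since Walrus follows the S3 API, a specific object version is addressed with the standard `versionId` query parameter; here is a sketch against a hypothetical Walrus endpoint (the host, bucket, key, and credentials are all made up for illustration):

```
$ curl "http://walrus.example.com:8773/services/Walrus/mybucket/backup.tar?versionId=Zqr3X1pS" \
    -H "Authorization: AWS ACCESSKEY:SIGNATURE" -o backup.tar
```

Omitting `versionId` returns the latest version of the object, as with plain S3.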
Eucalyptus also introduced enhancements to its open.eucalyptus.com website to make submitting patches to the Eucalyptus open source code easier and more transparent. Many of these changes seem to be the result of the recent controversy around Eucalyptus and its open-core model, which culminated in the highly publicized launch of OpenStack at OSCON. This demonstrates again (if it was needed) that it is good to have several alternatives; competition will only result in innovation and progress.
“Google Analytics is the enterprise-class web analytics solution that gives you rich insights into your website traffic and marketing effectiveness. Powerful, flexible and easy-to-use features now let you see and analyze your traffic data in an entirely new way. With Google Analytics, you’re more prepared to write better-targeted ads, strengthen your marketing initiatives and create higher converting websites.”
Everyone uses Google Analytics (GA), right? It’s a great product and, even better, it’s free. It’s a win/win situation; there is no point in anyone running their own analytics unless they need something custom and specific that is not covered by GA.
Still, as with other Google products, its documentation is, let’s say, not the best. A while ago I started working on a project that had no web analytics in place. I asked why, and was told the site had too much traffic to be accepted into GA. Hmm… I looked into it and I must admit I could not find much of the information we were interested in. Finally I looked over the terms of service, and there I found: _“2. FEES AND SERVICES. Subject to Section 15 herein, the Service is provided without charge to You for **up to 5 million pageviews per month per account**, and if You have an active AdWords campaign in good standing, the Service is provided without charge to You without a pageview limitation.”_
So there is a limit, and a tiny one I would say: 5 million pageviews per month (per account, not even per site). Our site was doing about 45 million pageviews at the time. Per day! So what if we wanted to use GA? We searched everywhere but could not find any commercial offering of GA or any other information. We asked @googleanalytics on Twitter but were completely ignored.
What to do? Well, we just gave it a try: we added the site and started tracking it in GA like any other site. Surprisingly, it worked just fine for a few months. Yesterday, though, we received an email from the Google Analytics team (or should I say the GA “robot”?) telling us they had detected traffic much higher than the allowed limit of 5 million pageviews per month, and that from now on we would no longer have live reports, only daily updated reports. This is a limitation we can live with, but it would have been great if they had given us some option to pay for extra service. My client would have been happy to pay in the first place, but I assume this is something Google doesn’t care about at all; they just want to offer it as a free service. There is a great opportunity for a product that could handle high-traffic analytics with real-time reports and other goodies; we would definitely be interested. In the meantime, if you have a site that does more than 5 million pageviews per month (not so uncommon), you can definitely use GA; in the worst case they will restrict your report updates to keep up with your traffic. For our site we tracked 1,608,074,379 pageviews last month in GA, and it works just fine.
Everyone knows and loves screen for running long-running scripts in the background without worrying that the ssh connection will drop and the script will have to be run again. Still, I have found myself many times in the situation where I started a process and then needed to put it in the background to run something else on the console. Uff… if only I had started it with screen. But wait, there is hope. This quick tip will show how to put a running process in the background and then bring it back to the foreground.
This works in bash and uses the ’suspend’ key (CTRL+Z) together with the bg (background) and fg (foreground) commands. Let’s say we are running an intensive rsync command and want to check how much disk space is still available, without opening a new ssh session (yes, I know):
```
$ rsync -avz /data/ backup:/data/
^Z
[1]+  Stopped                 rsync -avz /data/ backup:/data/
```
Let it run in the background:
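With the transfer suspended, `bg` resumes it in the background; the output would look something like this (the rsync arguments are hypothetical):

```
$ bg
[1]+ rsync -avz /data/ backup:/data/ &
```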
Now we can run some other commands like du:
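For example, checking free space and directory sizes while the transfer keeps running (the paths here are illustrative):

```shell
df -h /    # free space on the root filesystem
du -sh .   # total size of the current directory
```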
We can see the background process with ps or jobs:
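A self-contained illustration of `jobs`, using `sleep` as a stand-in for the rsync job (job control is on by default in interactive shells, but must be enabled with `set -m` in a script):

```shell
set -m      # enable job control
sleep 2 &   # stand-in for the background rsync
jobs        # lists the job, e.g. "[1]+  Running  sleep 2 &"
kill %1     # clean up the example job
```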
And finally we can bring it back to foreground with fg:
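`fg` prints the command line it resumes and then blocks until it finishes, so the session would end with something like:

```
$ fg
rsync -avz /data/ backup:/data/
```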
Note: this only works within the running ssh/bash session; the job will be terminated once you exit. Logging out should warn you about running jobs and that they will be lost if you exit.
Ok, I must admit that I was not at all excited when I received the notice from SoftLayer that they have been acquired. By whom? GI Partners, which controls their biggest competitor, ThePlanet. This is déjà vu for me and I really hope it will not end the same way. A few years ago I was a happy customer of EV1Servers, a hosting company that was one of the best in the business. I was using them for most of my clients and had a great relationship with them. And then it happened. You normally receive something like: “we are so happy to announce the acquisition, we are going to take this to a new level”, and bla bla bla. Ha. Never happened. Maybe it’s great news for the owners and the people cashing out, but for clients, and sometimes even employees, it is not quite the same. We were doing great until now, right? We don’t want to change… Anyway, the short story is that it went horribly wrong, and the service and support from the new ThePlanet (which incorporated EV1 as well) were terrible. I moved all my clients to SoftLayer and was happy again.
Until now. I mean, anyone with some experience can easily see that SoftLayer has already grown a lot and lowered its level of performance and support. Their tech people seem much less experienced and less interested in helping you out than they used to be, but this is not such a big deal, because as I see it SoftLayer’s strength is their automation; they created a system designed to not need them so much. You can do everything yourself from their control panel, or even from their API, and as long as that works correctly all is good. You can order a server using API calls, cancel a server, reboot it, and even respond to a ticket through the API. Now, with this merger, I am assuming they are going to move ThePlanet’s infrastructure onto the SoftLayer automation; this is the only way that would make sense. SoftLayer is so much better than anything ThePlanet has, and there is no question in my mind this is what will happen. Still the concern remains, and unfortunately for me I don’t see the next ‘place’ to move to if that becomes necessary. SoftLayer raised the bar so high that other hosting companies don’t even dream of coming close. SoftLayer was built by some of the original ThePlanet people (back in the days when it was still a great hosting company), and with their experience they knew exactly what they wanted to build. And they were right… They’ve done a great job.
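As a taste of the API-for-everything approach, listing your servers is a single authenticated REST call; the endpoint is SoftLayer’s real one, but the username, API key, and the particular method chosen here are just illustrative:

```
$ curl -s -u myuser:myapikey \
    https://api.service.softlayer.com/rest/v3/SoftLayer_Account/getHardware.json
```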
I would be really interested if anyone knows or can recommend other advanced hosting companies like SoftLayer; they need to have an API for everything, plus cloud computing solutions. I would love to try them. Let me know…
Debian has a nice way to handle multiple Java installations on the same machine. Let’s say that for some reason you want to have Sun Java 1.5 and also 1.6 installed on the server; we can easily configure the default one with the update-java-alternatives command (part of the java-common package). Here is how it can be used:
To see what versions of Java we have installed on the system (from Debian packages):
```
$ update-java-alternatives -l
java-1.5.0-sun 53 /usr/lib/jvm/java-1.5.0-sun
java-1.6.0-sun 63 /usr/lib/jvm/java-1.6.0-sun
```
We can see that the default version is 1.6 in my case (as it was the last one installed):
```
$ java -version
java version "1.6.0_20"
Java(TM) SE Runtime Environment (build 1.6.0_20-b02)
Java HotSpot(TM) Client VM (build 16.3-b01, mixed mode, sharing)
```
We can change the default version with `update-java-alternatives --jre -s <java-version>` (in this case, `update-java-alternatives --jre -s java-1.5.0-sun`),
and now the default is 1.5:
```
$ java -version
java version "1.5.0_22"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_22-b03)
Java HotSpot(TM) Client VM (build 1.5.0_22-b03, mixed mode, sharing)
```
This is quite handy if you need multiple Java versions installed and want a quick way to change the default one (you can, of course, still access any of them directly via its full path).
During the annual Debian developer conference ”DebConf10” in New York, Debian’s release managers announced the freeze of the upcoming stable release, Debian 6.0 “Squeeze”. Basically this means that no new features will be added and all work will now be concentrated on fixing existing bugs.
The upcoming Debian stable release will include:
- Linux 2.6.32 kernel
- Apache 2.2.16, PHP 5.3.2, MySQL 5.1.48, PostgreSQL 8.4.4
- Python 2.6 and 3.1, Perl 5.10, Ruby 1.8.7 and 1.9.2~svn28788, GCC 4.4
- DKMS, a framework to generate Linux kernel modules whose sources do not reside in the Linux kernel source tree.
- Dependency-based ordering of init scripts using insserv, allowing parallel execution to shorten the time needed to boot the system.
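The dependency-based ordering works off the LSB header block at the top of each init script, which insserv uses to compute the boot order; a minimal sketch for a hypothetical service:

```
### BEGIN INIT INFO
# Provides:          myservice
# Required-Start:    $remote_fs $network
# Required-Stop:     $remote_fs $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Example daemon
### END INIT INFO
```

Scripts with no ordering dependency between them can then be started in parallel.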
Hopefully we will see Squeeze going stable in the next 4-6 months, ideally by the end of the year!
Release Announcement: http://www.debian.org/News/2010/20100806
Today I finally moved the email for my domain ducea.com to Google Apps for domains. I’m probably one of the few people who still ran their own email server these days, and I’m sure anyone would question why I would want to run that on my own server. The answer is that I didn’t, but I thought this migration would be more complicated and time-consuming, so I kept pushing it to the back of my todo list. I wanted to do it for a long time, but never got to it.
It seems like lately I’ve moved everyone I could onto Google Apps: friends, clients, even strangers; I could easily convince them how great it is not to worry about your email server and to put it into the hands of someone like Google, and all this for free. Then why did it take so long for me to move? Well, email is very important to my business, and this is why a long time ago (too many years to remember) I made the decision to serve it from my own dedicated server instead of a cheap VPS. This was the main reason I rented a server in a good hosting facility (I started with ThePlanet and then moved to SoftLayer about 3 years ago) and was happy to pay for it, knowing I had a reliable service and my email would be reliable too; I could be sure that if I got an email from a client, or some Nagios alert that something was not working, I would receive it immediately, as expected. I’ve been a big fan of IMAP and used it all the time, so I could check my email from different locations and have a central place where the files live and can be easily backed up. Like any sysadmin, I ended up with a big .procmailrc file with many rules, some of which are most certainly no longer needed (projects completed, etc.), and with a huge Maildir, as I like to save anything that might be useful in the future. Don’t get me wrong, I hit delete probably 80% of the time, but over time this grew to something like 1.2G quite easily. I’m sure many people have much bigger mailboxes than this, but anyway…
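The rules in question are simple procmail recipes; a typical one of the sort I mean (the sender address and folder name are made up) files Nagios alerts into their own Maildir folder:

```
# file Nagios alerts into a dedicated Maildir folder
:0
* ^From:.*nagios@example\.com
.nagios/
```

The trailing slash on the destination tells procmail to deliver in Maildir format rather than mbox.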
Ever since I saw the OSCON presentation on Reconnoiter I’ve wanted to check it out and play with it. Yesterday I finally had some time to do this, and I thought it would be a good idea to document it as a short howto. Most of the information I used comes from the README (BUILDING), the wiki, and the excellent writeup by Thomas Dudziak on how to install Reconnoiter on Ubuntu.
The daemons, noitd and stratcond, are written in C; the database used is PostgreSQL, while the web interface is written in PHP. We will need to install a few dependencies to be able to compile noitd/stratcond:
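On Ubuntu, something along these lines pulls in the usual build dependencies; the exact package list here is an assumption, so check the BUILDING file for your release:

```
$ sudo apt-get install build-essential autoconf \
    libssl-dev libpcre3-dev libxml2-dev libxslt1-dev \
    libpq-dev libsqlite3-dev libncurses5-dev
```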
As Matt Simmons announced on his blog, I’ll be one of the members of the LISA2010 blogging team. I’m really excited to be part of such a great team with Matt, Matthew and Ben, and I’m looking forward to a great event. We will be blogging and sharing the things we find interesting at LISA on the USENIX blog, which you should definitely bookmark if you haven’t already. If you will be at LISA2010, definitely come say hi; I’d love to meet up and chat.
Matt’s full announcement on the USENIX blog: Introducing the 2010 LISA Blogging Team