I have been working with DigitalOcean for several months; on average, it deploys a VPS in 55 seconds. But after the server is deployed, all the manual, error-prone, boring configuration work still remains.
As I use Puppet to configure all my servers, I have created provisioningDO, a Rakefile script (based on John Arundel's book Puppet 3 Cookbook) that deploys and configures my servers in 4 min 15 s. That means that 4 min 15 s after starting, my servers are ready for production.
provisioningDO uses Jack Pearkes' tugboat CLI tool, so a fully installed and configured tugboat is required. It shouldn't take you more than 5-10 minutes to get a working, ready-to-go tugboat installation.
Today I have released my first Puppet module to the public:
It installs and configures knockd, a port-knocking daemon.
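For reference, a minimal knockd configuration looks something like this (the knock sequence, log file and iptables command below are illustrative; adapt them to your setup):

```
[options]
    logfile = /var/log/knockd.log

[openSSH]
    sequence    = 7000,8000,9000
    seq_timeout = 5
    tcpflags    = syn
    command     = /sbin/iptables -A INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
```

Once knockd sees TCP SYNs on ports 7000, 8000 and 9000, in that order and within 5 seconds, it runs the command, opening SSH for the knocking IP.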
Several months ago I finally got the Wake-on-LAN (WOL) feature of my RTL8111/8168B NIC working. The problem was that it needed a different driver from the one shipped with Debian, plus a special PCI configuration.
The other problem I had to deal with was the configuration of the ADSL router (a Comtrend HG532c, the one provided by the Spanish ISP Jazztel):
- Open the required ports: this was the easy part, just opening ports 7 to 9 and forwarding them to the server we want to wake from the Internet.
- Make the router remember the server's MAC/IP tuple. That was easy too, but some manual work was needed, since the ARP table is flushed every time the router restarts.
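The magic packet itself is simple: six 0xFF bytes followed by the target MAC address repeated 16 times, sent over UDP. A minimal Python sketch (the MAC and broadcast address below are placeholders), using port 9, one of the 7-9 range forwarded above:

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, host: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet over UDP."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (host, port))
```

From the Internet, point it at the router's public address instead of the broadcast address; the 7-9 port forwarding takes care of the rest.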
In my current job I recently had to change some configuration and restart more than 600 IP phones. To perform such a titanic task I wrote a quick and dirty expect script. It worked like a charm, and it made me think about automating the way I set the ARP table in my Comtrend HG532c ADSL router.
A year ago I couldn't connect to my office's network using my VPN client. The reason was that my p12 certificate had expired. AFAIK, IPsec cannot renew certificates automatically the way the Windows VPN client does, so to make it work I had to renew the certificate using the Windows client and then migrate the p12 file to a Linux/IPsec-friendly format. As I was in a little hurry, I tried installing the Linux Citrix client
Today I attended the Ubuntu Cloud webcast, presented by Mark Shuttleworth (Canonical founder) and Stephen O'Grady from RedMonk.
Since 2005 I have hosted this web page at Bluehost, a cPanel-based hosting company: first with Joomla, and recently migrated to WordPress.
Bluehost lets you download a daily, weekly or monthly backup from your cPanel control panel, but manual intervention is needed:
- Log in to the control panel
- Navigate to the backup page
- Perform the backup
- Download it to your local computer.
This is a manual, time-consuming task, and of course you must not forget to do it!
In this post I am going to show my automated method for backing up files and databases using:
- Crontab for automatic backups.
- Public/private keys for passwordless ssh connections.
- Rsync for synchronizing directories between the remote and local servers. This reduces bandwidth usage: if a file has already been copied to the local server, no data transfer is needed.
- Mysqldump for dumping the MySQL databases to a local file.
- SpiderOak for data deduplication and remote backup.
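Putting the pieces together, the local machine's crontab can look something like this (hostname, paths and database name are placeholders for illustration):

```
# m h dom mon dow  command
# files: rsync over ssh, so only changed files are transferred
30 3 * * * rsync -az -e ssh user@example.bluehost.com:public_html/ /backup/site/files/
# database: run mysqldump remotely and store the dump locally
# (note: % must be escaped as \% inside a crontab entry)
45 3 * * * ssh user@example.bluehost.com 'mysqldump --opt mydb' > /backup/site/mydb-$(date +\%F).sql
```

SpiderOak then watches /backup/site and takes care of deduplication and the off-site copy.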
Some previous knowledge is needed to understand how it works; in any case, there are some useful links to help with that.
A long time ago, in a galaxy far, far away, when I started with OpenVZ, I followed this tutorial for Debian template creation. Now I am adapting it (using my own experience and this template-squeeze tutorial as well) to QEMU/KVM disk images that can later be used directly or via libvirt.
This procedure tries to generalize the template. When working with cloned disk images, many elements need to be "generalized" before capturing the image and deploying it to multiple computers. Some of these elements include:
- ssh keys
The more "generalized" a template is, the less manual work is needed after deploying it.
This method should work on other virtualization systems (VMware, VirtualBox, etc.), as it is "virtualizer/hypervisor/emulator independent": it focuses only on the disk image.
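As an illustration of the generalization step, here is a small Python sketch that strips some machine-specific files from a mounted image root (the file list is an assumption based on a typical Debian guest; extend it as needed):

```python
import glob
import os

# Machine-specific files typically removed from a Debian template before
# capturing it (an assumed list -- adapt it to your distribution and needs).
GENERALIZE_GLOBS = [
    "etc/ssh/ssh_host_*",                        # host keys, regenerated on first boot
    "etc/udev/rules.d/70-persistent-net.rules",  # cached MAC -> ethX mapping
    "var/log/*.log",                             # stale logs from the build
]

def generalize(root):
    """Remove machine-specific files under a mounted image root.

    Returns the list of paths that were removed.
    """
    removed = []
    for pattern in GENERALIZE_GLOBS:
        for path in glob.glob(os.path.join(root, pattern)):
            if os.path.isfile(path):
                os.remove(path)
                removed.append(path)
    return removed
```

Run it against the image mounted on, say, /mnt/template (e.g. via qemu-nbd or guestmount) before shutting the guest down and capturing the disk.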
When I started learning Puppet several weeks ago, I wanted to install the client and the server on the same host, using different aliases for the same machine. But there are several funny errors related to the puppet master and client sharing the same SSL directory: SSL certificate confusion, obscure errors, and SSL revocation horrors.
I took the main ideas from Splitting puppetd from puppetmaster on madduck's blog, but with this method you don't have to create two different SSL directories: both installations (client and server) will share the same one. I think it's easier to implement and maintain.
The golden rule is to create all the SSL material (CA, keys, certificates, etc.) at the right moment. And you may ask... when is the right moment? After the file /etc/puppet/puppet.conf has been created with the certname directive properly set, because by default Puppet creates all the SSL material using the hostname instead of the alias you want.
This tutorial assumes you are using Debian (but it should work on its derivatives: Ubuntu, Mint, etc.) and that you have one server with two aliases resolving to the same host (via /etc/hosts or DNS). In my case: puppet (server) and mediacenter (client).
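In practice, that means /etc/puppet/puppet.conf should contain something like this before either daemon runs for the first time (a sketch for my alias setup; adapt certname, server and ssldir to your own names):

```
[main]
    # both master and agent share this SSL directory
    ssldir = /var/lib/puppet/ssl

[master]
    certname = puppet

[agent]
    certname = mediacenter
    server   = puppet
```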
Last week I finally finished the migration from my old Joomla 1.0.15 installation to shiny new WordPress 3.2.1. I had in mind migrating to the Joomla 1.5.x series, but there was no easy one-click upgrade tool, as there were so many core differences between versions that some manual work had to be done. That was the reason to study other options.
Finally I decided to move on with WordPress, and with the help of Misterpah's Mambo Importer plug-in at least half of the work was already done, although some manual work still had to be done (recreating paths, images, etc.).
Special thanks to Misterpah for sharing his knowledge and time!
P.S.: Starting from today, all (or at least almost all) new posts/pages will be written in English.