During the COVID-19 lockdown I invested some of the “free time” it gave me to refresh some old topics, like capacity planning and command-line optimization.
In 2011 I got my
LPIC-3, and while studying for the previous LPIC-2, two of the topics were Capacity Planning and Predict Future Resource Needs. To refresh this knowledge I recently took Matthew Pearson’s Linux Capacity Planning course on LinuxAcademy.
My interest in Data Science and Business Intelligence started with a course I took where the main tool was
Pentaho, mostly PDI (aka Kettle) for ETL jobs and Report Designer for report automation. Then I continued with the University of Waikato’s WEKA courses, and that path led me to Jeroen Janssens’ book Data Science at the Command Line, which I have recently re-read. In his book, Jeroen uses Ole Tange’s GNU Parallel, a tool I have already written about in my A Quick and Neat 🙂 Orchestrator using GNU Parallel post.
How are Linux capacity planning, ETL, the command line and job parallelization related, you might wonder? Let’s dig into it.
During the last few weeks I have been interviewed for several DevOps positions. In two of them I had to answer a skills checklist, and in the other one, solve an exercise and send it back by email. I think these checklist interviews are not a good fit for DevOps positions, especially if the checklists are not kept up to date. Let’s see why…
Sometimes you have to deal with servers that you don’t know anything about:
You are a short-term IT consultant with no previous knowledge of the environment.
The CMDB is out of date.
You are in a DR situation.
Or the main administrator is simply not around.
And you need:
Run commands in parallel
Get info from many servers at a time
Troubleshoot DNS problems
Check how many servers are up and running
On my systems I use two orchestrators,
MCollective and SaltStack (configured automatically using Puppet), and they fulfill my needs. But let’s see a quick way to get an orchestrator up and running in a hurry.
I have been working with
DigitalOcean for several months; on average, DigitalOcean deploys your VPS server in 55 seconds. After the server is deployed, all the manual, error-prone, boring configuration work still remains.
As I use Puppet to configure all my servers, I have created the
provisioningDO rakefile script (based on John Arundel’s book Puppet 3 Cookbook) to deploy and configure my servers in 4 min 15 sec. That means that after 4 min 15 sec, my servers are ready for production.
provisioningDO uses Jack Pearkes’ tugboat CLI tool, so a fully installed and configured tugboat is required. It shouldn’t take you more than 5-10 minutes to get a working, ready-to-go tugboat installation 🙂
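A hypothetical sketch of the deploy step tugboat handles (the droplet name web01 is an assumption, and tugboat picks up sizes, regions and SSH keys from its own configuration):

```shell
# Deploy a droplet and block until it is active; Puppet then takes over
# the configuration on the first agent run. This is a sketch, not the
# actual provisioningDO rakefile.
tugboat create web01
tugboat wait web01
```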
Today I have released my first public Puppet module:
it installs and configures knockd (a port-knocking daemon).
I had been mulling over the idea of starting a Puppet user group in Alicante for a while, although I am not sure how many users there are…
Last week I sent an email to the
Puppet users mailing list in case anyone was interested, and today I received an email from puppetlabs.com saying that if I had a meetup group, they would put a link to it on their website. So I decided to create a group on meetup.com.
As of today, the
Alicante Puppet Users Group
has officially been created.
So if you are interested in Puppet, DevOps, data center and operations automation, and basically doing things only once and letting the computers do the rest, this is your group.
I hope you sign up, and once there are a few of us we can hold the first meetup.
Cheers, Alicante puppeteers!