Zuzur's corner: technology, the web, programming and a bit of this and that …

20 Dec 2012

notebook: importing existing users from chef

I recently had (too much) fun trying to import a few existing Chef users from Opscode's platform to another Chef server. There is currently no facility to export your existing users ('clients' if you are running your own server, 'users' on the Opscode platform).

What I ended up doing involves a lot of manual typing, but I intend to automate it in the future.

The key problem is that there is no way to export your users' public keys without accessing the underlying CouchDB server, and since I'm pretty sure Opscode runs a multi-tenant CouchDB server, you can't do that on their platform, for obvious security reasons.

So I ended up:

  • creating the same users on my new Chef server
  • using irb on the new Chef server to overwrite each new user's public key using some of Chef's internal methods

Some ranting on Opscode's platform

knife doesn't even know about users, as the Chef server code on that platform makes a distinction between users (who can upload cookbooks, roles, etc.) and API clients (nodes fetching their run lists and the values stored in data bags). The chef backup script provided by jtimberman only exports roles and nodes, and even after massaging it to dump clients as well, the public_key field comes back empty - as if the platform's server stripped it on purpose.

I didn't find any documentation about the differences in the REST API between the hosted platform and the open source Chef server. I suppose the API is there but not exposed in the gem ... I'll look into that later.

Since Opscode won't let you export a client's public_key, there is no way to easily script the process: you have to go through the very tedious task of copying and pasting every user's public key and updating each new user's document with it ...

Solution

  • quickly hack a chef-client.rb that allows a client (knife or chef-client) to connect using an existing API client that you will not modify during this process - modifying it would be a nice way to shoot yourself in the foot :-P
  • display the user on Opscode's platform: https://manage.opscode.com/users/
  • copy the public key
  • on a console connected to the Chef server, load the client, set the key and save it back (we're going to use the Chef::ApiClient.cdb_load and cdb_save methods, which bypass Chef's REST API and need the same CouchDB access as the Chef server ...):
earzur@chef-server [16:30:51]
 /etc/chef $ irb
 irb(main):001:0> require 'rubygems'
 => true
 irb(main):002:0> require 'chef'
 => true
<...>
 irb(main):004:0> Chef::Config.from_file('chef-client.rb')
 => "none"
<...>
irb(main):007:0> u = Chef::ApiClient.cdb_load('earzur')
 => #<Chef::ApiClient:0x7f0f498a5188 @couchdb=#<Chef::CouchDB:0x7f0f498aaef8 @rest=#<Chef::REST:0x7f0f498aaea8 @redirects_followed=0, @auth_credentials=#<Chef::REST::AuthCredentials:0x7f0f498aadb8 @key_file=nil, @client_name=nil>, @cookies={}, @sign_request=true, @default_headers={}, @disable_gzip=false, @url="http://localhost:5984", @sign_on_redirect=true, @redirect_limit=10>, @db="chef">, @admin=true, @couchdb_id="f0f19583-00a9-4b60-887d-95f410821856", @couchdb_rev="3-f3ce3f9e58f6988636168f8bd611db8e", @private_key=nil, @index_id="f0f19583-00a9-4b60-887d-95f410821856", @public_key="-----BEGIN RSA PUBLIC KEY-----\n....DAQAB\n-----END RSA PUBLIC KEY-----", @name="earzur">

 irb(main):008:0> u.public_key("-----BEGIN RSA PUBLIC KEY-----\n...\n-----END RSA PUBLIC KEY-----")
 => "-----BEGIN RSA PUBLIC KEY-----\n...-----END RSA PUBLIC KEY-----\n"

 irb(main):010:0> u.cdb_save
 ~ Qrack::Queue#publish will be removed in Bunny 0.8. Use direct_exchange = bunny.exchange(''); direct_exchange.publish('message', key: queue.name) if you want to publish directly to one given queue. For more informations see https://github.com/ruby-amqp/bunny/issues/15 and for more theoretical explanation check http://bit.ly/nOF1CK
 => "4-ed15fba63937223e6690765809ef0e2c"


Done ... now repeat that for the other clients and they should be able to access both servers with the same certificates ... This doesn't scale well, but I'm not going to have to distribute another set of certificates for this Chef development server!
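
Since the only manual part is the copy-and-paste, the irb session itself can be scripted. Here is a minimal sketch, assuming you have collected the keys into a YAML file mapping client names to their PEM public keys (the file name and format are my own convention, not something Chef provides):

#!/usr/bin/env ruby
# Sketch: bulk-update client public keys on the new Chef server.
# Assumes public_keys.yml maps client names to the PEM keys copied
# from the Opscode management console, e.g.:
#   earzur: |
#     -----BEGIN RSA PUBLIC KEY-----
#     ...
#     -----END RSA PUBLIC KEY-----
require 'rubygems'
require 'chef'
require 'yaml'

Chef::Config.from_file('/etc/chef/chef-client.rb')

YAML.load_file('public_keys.yml').each do |name, pem|
  client = Chef::ApiClient.cdb_load(name)  # straight from CouchDB, bypassing the REST API
  client.public_key(pem)                   # same setter as in the irb session above
  client.cdb_save
  puts "updated #{name}"
end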


2 Oct 2012

Notebook: chef interactive testing using irb

I use this trick a lot to check the values of attributes on Chef nodes without having to navigate through the UI. You can also do everything you would do in the UI (change attributes, save them ...). I usually do this from the top of my Chef repository, but you can pass any valid path to a proper knife or chef-client configuration when calling Chef::Config.from_file ...

Quite handy:


[10:46:11]-erwan@ip-192-168-0-59:~/dev/chef-repo(master) > irb
1.9.2p320 :001 >
1.9.2p320 :001 > require 'rubygems'
false
1.9.2p320 :002 > require 'awesome_print'
false
1.9.2p320 :003 > require 'chef'
true
1.9.2p320 :004 > Chef::Config.from_file(".chef/knife.rb")
"none"
1.9.2p320 :015 > n = Chef::Node.load('build')
node[build]
1.9.2p320 :016 > n.ohai_time
1335465628.75024
1.9.2p320 :017 > n.ntp
#<Chef::Node::Attribute @normal={...=>false, "service"=>"ntpd", "servers"=>["0.us.pool.ntp.org", "1.us.pool.ntp.org", "0.pool.ntp.org", "1.pool.ntp.org", "2.pool.ntp.org"]}, @override={...}, @current_override={"servers"=>["0.pool.ntp.org", "1.pool.ntp.org", "2.pool.ntp.org"]}, @automatic={...}, @current_automatic=nil, @current_nesting_level=[:ntp], @auto_vivifiy_on_read=false, @set_unless_value_present=false, @set_type=nil, @has_been_read=false>
1.9.2p320 :019 > n['ntp']['servers']
[
[0] "0.pool.ntp.org",
[1] "1.pool.ntp.org",
[2] "2.pool.ntp.org"
]
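
The same console works for writes, too. A quick sketch (the attribute values are only examples; node.set was the usual way to set normal attributes on Chef versions of that era):

# continuing in the same irb session: change an attribute and
# save the node back to the server through the REST API
n = Chef::Node.load('build')
n.set['ntp']['servers'] = %w[0.pool.ntp.org 1.pool.ntp.org 2.pool.ntp.org]
n.save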

29 Dec 2011

Notebook: batch m4a to flac conversion using find/xargs/ffmpeg

If you have a large collection of files in one format and need to convert them to another, it can be quite tricky to do so while retaining all the tag information (I didn't find any tool that would let me do that; even the excellent xACT doesn't).

So ...

You will need to have ffmpeg installed. I just used sudo port install ffmpeg on my Mac.

First you need to write a little shell script. I've called it toflac.sh. This script performs the conversion:

#!/bin/sh

# convert one file; the output keeps the base name with a .flac extension
ffmpeg -i "$1" -f flac "${1%.m4a}.flac"

(change the extensions appropriately to fit your needs)

Then call it using the following command line:

find . -name '*.m4a' -print0 | xargs -0 -P 4 -n 1 ./toflac.sh

This runs up to 4 parallel ffmpeg processes to convert your files. Quite handy.
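
If you'd rather not write the helper script, here is a rough Ruby equivalent of the same pipeline (a sketch: the thread count mirrors the -P 4 above, and it assumes ffmpeg is on your PATH):

#!/usr/bin/env ruby
# rough Ruby equivalent of the find/xargs pipeline above: convert every
# .m4a under the current directory to FLAC, four files at a time
require 'thread'

queue = Queue.new
Dir.glob('**/*.m4a') { |f| queue << f }

workers = Array.new(4) do
  Thread.new do
    loop do
      src = queue.pop(true) rescue break   # stop when the queue is empty
      system('ffmpeg', '-i', src, '-f', 'flac', src.sub(/\.m4a\z/, '.flac'))
    end
  end
end
workers.each(&:join)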

31 Oct 2010

Notebook: MacBook – Battery calibration

A few weeks ago, my Apr '09 13" MacBook started behaving strangely. After a few minutes off the power cord, it would simply shut down without warning (no low battery warning message ... blam! Black screen of death). The LED battery indicator would show the battery at 80%, and if you started the laptop again, it would work for 5-10 minutes ... then blam!

I looked for solutions to this issue for a while; the behaviour didn't match Apple's knowledge base documents about battery calibration / PMU reset, and I tried following every step documented there (links provided below) to no avail:

  • resetting the SMC
  • resetting the PRAM and NVRAM

I finally managed to get back to a properly running Mac after following those instructions (Battery calibration) to the letter.

When the system shut down improperly, I would just start it up again, and I did this for 2 hours (about 6 or 7 power-off / power-on cycles) until the battery was completely depleted.

Then I performed a full charge, let the system rest for a few hours fully charged, powered it on, removed the power adapter with my fingers crossed ... and ... I got my MacBook back! :-) Woooot :-)


31 Oct 2010

scripting with chef

Chef is a very cool platform management solution. Once it is set up, you have a clean (Ruby DSL) and quick way to distribute server configurations across as many servers as you need. A must on AWS.

One thing you can't easily find in Chef's documentation is how to use the Chef API to write scripts that consume the information Chef stores.

Imagine a script that runs regularly (from cron) and updates your internal DNS zone with an A record for each server it finds in your Chef database.

I'm not sure I used the "right" approach. I just pasted some code borrowed from knife (Chef's command-line tool) into my own script. I intend to find a cleaner approach later (mixins, inheritance, etc.).

#!/usr/bin/env ruby
require 'rubygems'
require 'chef/application'
require 'chef/client'
 
require 'mixlib/cli'
 
class Client < Chef::Application
  include Mixlib::CLI
 
  banner "Usage: #{$0} (options)"
 
  option :config_file, 
    :short => "-c CONFIG",
    :long  => "--config CONFIG",
    :description => "The configuration file to use"
 
  option :log_level, 
    :short        => "-l LEVEL",
    :long         => "--log_level LEVEL",
    :description  => "Set the log level (debug, info, warn, error, fatal)",
    :proc         => lambda { |l| l.to_sym }
 
  option :log_location,
    :short        => "-L LOGLOCATION",
    :long         => "--logfile LOGLOCATION",
    :description  => "Set the log file location, defaults to STDOUT",
    :proc         => nil
 
  option :editor,
    :short        => "-e EDITOR",
    :long         => "--editor EDITOR",
    :description  => "Set the editor to use for interactive commands",
    :default      => ENV['EDITOR']
 
  option :no_editor,
    :short        => "-n",
    :long         => "--no-editor",
    :description  => "Do not open EDITOR, just accept the data as is",
    :boolean      => true
 
  option :help,
    :short        => "-h",
    :long         => "--help",
    :description  => "Show this message",
    :on           => :tail,
    :boolean      => true
 
  option :node_name,
    :short => "-u USER",
    :long => "--user USER",
    :description => "API Client Username"
 
  option :client_key,
    :short => "-k KEY",
    :long => "--key KEY",
    :description => "API Client Key"
 
  option :chef_server_url,
    :short => "-s URL",
    :long => "--server-url URL",
    :description => "Chef Server URL"
 
  option :yes,
    :short => "-y",
    :long => "--yes",
    :description => "Say yes to all prompts for confirmation"
 
  option :defaults,
    :long => "--defaults",
    :description => "Accept default values for all questions"
 
  option :print_after,
    :short => "-p",
    :long => "--print-after",
    :description => "Show the data after a destructive operation"
 
  option :format,
    :short => "-F FORMAT",
    :long => "--format FORMAT",
    :description => "Which format to use for output",
    :default => "json"
 
  option :version,
    :short        => "-v",
    :long         => "--version",
    :description  => "Show chef version",
    :boolean      => true,
    :proc         => lambda {|v| puts "Chef: #{::Chef::VERSION}"},
    :exit         => 0
 
  def configure_chef
    unless config[:config_file]
      full_path = Dir.pwd.split(File::SEPARATOR)
      (full_path.length - 1).downto(0) do |i|
        config_file_to_check = File.join([ full_path[0..i], ".chef", "client.rb" ].flatten)
        if File.exists?(config_file_to_check)
          config[:config_file] = config_file_to_check 
          break
        end
      end
      # If we haven't set a config yet and $HOME is set, and the home
      # client.rb exists, use it:
      if (!config[:config_file]) && ENV['HOME'] && File.exist?(File.join(ENV['HOME'], '.chef', 'client.rb'))
        config[:config_file] = File.join(ENV['HOME'], '.chef', 'client.rb')
      end
    end
 
    # Don't try to load a client.rb if it doesn't exist.
    if config[:config_file]
      Chef::Config.from_file(config[:config_file])
    else
      # ...but do log a message if no config was found.
      self.msg("No knife configuration file found")
    end
 
    Chef::Config[:log_level] = config[:log_level] if config[:log_level]
    Chef::Config[:log_location] = config[:log_location] if config[:log_location]
    Chef::Config[:node_name] = config[:node_name] if config[:node_name]
    Chef::Config[:client_key] = config[:client_key] if config[:client_key]
    Chef::Config[:chef_server_url] = config[:chef_server_url] if config[:chef_server_url]
    Mixlib::Log::Formatter.show_time = false
    Chef::Log.init(Chef::Config[:log_location])
    Chef::Log.level(Chef::Config[:log_level])
 
    Chef::Log.debug("Using configuration from #{config[:config_file]}")
 
    if Chef::Config[:node_name].nil?
      raise ArgumentError, "No user specified, pass via -u or specify 'node_name' in #{config[:config_file] ? config[:config_file] : "~/.chef/knife.rb"}"
    end
  end
 
 
  def parse_options(args=[]) 
    super
    config    
  end
 
  def run(args)
    parse_options(args)
    configure_chef
    Chef::Search::Query.new.search(:node,"*:*") do |n|
       ## iterate on every node. You have access to every information
       ## chef and ohai could collect from the nodes
    end
  end
end
 
Client.new.run(ARGV)

The command-line options for your script are the same as knife's. And it works flawlessly.
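
To connect this back to the DNS idea above, here is a hedged sketch of what the search block in run could do; fqdn and ipaddress are standard ohai attributes, and the zone-file output format is just an illustration:

  def run(args)
    parse_options(args)
    configure_chef
    # emit one A record per node, ready to paste into a zone file
    Chef::Search::Query.new.search(:node, "*:*") do |n|
      puts "#{n['fqdn']}. IN A #{n['ipaddress']}"
    end
  end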

12 Oct 2010

notebook: running a unix command for a given amount of time

Sometimes you need to run a command (a capture, etc.) for a given amount of time. Here's an example of a script capturing MySQL traffic for half an hour:

#!/bin/sh
DATE=`date +%Y%m%d%H%M%S`
# start the capture in the background; $! holds the PID of the last background job
tcpdump -i eth0 port 3306 -s 65535 -x -n -q -tttt > capture-$DATE.out &
sleep 1800
kill $!
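
The same pattern, sketched in Ruby for comparison (needs Ruby 1.9+ for Process.spawn; the command and duration are just the example above):

#!/usr/bin/env ruby
# run a command for a fixed amount of time, then kill it
duration = 1800  # seconds
out = "capture-#{Time.now.strftime('%Y%m%d%H%M%S')}.out"
pid = Process.spawn('tcpdump', '-i', 'eth0', 'port', '3306',
                    '-s', '65535', '-x', '-n', '-q', '-tttt',
                    :out => out)
sleep duration
Process.kill('TERM', pid)   # same effect as kill $! in the shell version
Process.wait(pid)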
10 Oct 2010

notebook: nagios / ndo2db on CentOS 5.5 64-bit

When trying to set up Nagios 3.1 with ndo2db on a 64-bit platform, ndo2db may not work properly and crash over and over.

The symptoms are:

[1286679019] ndomod: Still unable to connect to data sink.  7575 items lost, 5000 queued items to flush.

in the Nagios log file, and /var/log/messages containing reports of ndo2db segfaults:

Oct 10 12:52:26 ip-10-112-41-174 kernel: ndo2db[15666]: segfault at 00007fff8701cff8 rip 00002aaaabf2d211 rsp 00007fff8701d000 error 6

Apparently, this comes from ndo2db being improperly linked against a 32-bit version of the MySQL client library (running ldd on the ndo2db binary shows which one it picked up).

You need to configure ndo2db like this:

./configure --prefix=/opt/nagios --enable-mysql --disable-pgsql --with-ndo2db-user=nagios --with-ndo2db-group=nagios --with-mysql-lib=/usr/lib/mysql

Then you can go on and install ndo2db as documented, and Nagios will happily log events into your monitoring database.


4 Oct 2010

Some fun with groovy and AWS Identity And Access Management

I'm currently playing with the all-new AWS Identity and Access Management, and wanted to share some Groovy magic for playing with users and groups ...

@Grapes([
  @Grab(group='com.amazonaws', module='aws-java-sdk', version='1.0.11')
])
 
import com.amazonaws.auth.BasicAWSCredentials
import com.amazonaws.services.identitymanagement.AmazonIdentityManagementClient
import com.amazonaws.services.identitymanagement.model.*
 
AWS_ACCESS_KEY='MY AWESOME KEY'
AWS_SECRET_KEY='MY EVEN MORE AWESOME KEY'
 
def cred = new BasicAWSCredentials(AWS_ACCESS_KEY,AWS_SECRET_KEY)
 
def ami = new AmazonIdentityManagementClient(cred)
 
println "Group 'Administrators' ?"
 
def admins = null
try {
 admins = ami.getGroup(new GetGroupRequest().withGroupName('Administrators'))?.group
} catch (NoSuchEntityException e) {
  println "Didn't find group 'Administrators' : creating it ..."
  admins = ami.createGroup(new CreateGroupRequest().withGroupName('Administrators')).group
}
println admins
 
println "User 'erwan' ?"
def erwan = null
 
try {
  erwan = ami.getUser(new GetUserRequest().withUserName('erwan'))?.user
} catch (NoSuchEntityException e) {
  println "Didn't find user 'erwan' : creating it ..."
  erwan = ami.createUser(new CreateUserRequest().withUserName('erwan')).user
}
 
println erwan
 
if (erwan) {
  println "Listing erwan's groups ..."
  java.util.List groups = ami.listGroupsForUser(new ListGroupsForUserRequest().withUserName(erwan.userName)).getGroups().collect {
    it.groupName
  }
  println groups
 
  if (!groups.contains('Administrators')) {
    println "Adding user 'erwan' to 'Administrators'"
    ami.addUserToGroup (new AddUserToGroupRequest().withUserName(erwan.userName).withGroupName(admins.groupName))
  }
  println "done !"
}

Even with some groove in it, Java is still way too verbose for my taste, but I guess I'll have to live with it ...
And this is the output of the awesome script:

Group 'Administrators' ?
4 oct. 2010 19:05:50 com.amazonaws.http.HttpClient execute
INFO: Sending Request: POST https://iam.amazonaws.com / Parameters: (Action: GetGroup, GroupName: Administrators, SignatureMethod: HmacSHA256, AWSAccessKeyId: MY AWESOME KEY, Version: 2010-05-08, SignatureVersion: 2, Timestamp: 2010-10-04T17:05:50.839Z, Signature: 3V3lEJzcqXXXXXXXXXXXXBeyMx9DzwFXA=, )
4 oct. 2010 19:05:51 com.amazonaws.http.HttpClient handleResponse
INFO: Received successful response: 200, AWS Request ID: a40b6fd5-cfd9-11df-8b03-8bc9f2ff0492
{Path: /, GroupName: Administrators, GroupId: AGPAJZSRVEMSLHZEOKMI6, Arn: arn:aws:iam::XXXXXXXXXXXXX:group/Administrators, }
User 'erwan' ?
4 oct. 2010 19:05:51 com.amazonaws.http.HttpClient execute
INFO: Sending Request: POST https://iam.amazonaws.com / Parameters: (Action: GetUser, SignatureMethod: HmacSHA256, UserName: erwan, AWSAccessKeyId: MY AWESOME KEY, Version: 2010-05-08, SignatureVersion: 2, Timestamp: 2010-10-04T17:05:51.222Z, Signature: XXXXXXXXXXXXXXXXXXXXXXXXXXXX, )
4 oct. 2010 19:05:51 com.amazonaws.http.HttpClient handleResponse
INFO: Received successful response: 200, AWS Request ID: a422f04a-cfd9-11df-b738-6709d34e9585
{Path: /, UserName: erwan, UserId: XXXXXXXXXXXXXXXXXXXXXXX, Arn: arn:aws:iam::XXXXXXXXXXXXXX:user/erwan, }
Listing erwan's groups ...
4 oct. 2010 19:05:51 com.amazonaws.http.HttpClient execute
INFO: Sending Request: POST https://iam.amazonaws.com / Parameters: (Action: ListGroupsForUser, SignatureMethod: HmacSHA256, UserName: erwan, AWSAccessKeyId: MY AWESOME KEY, Version: 2010-05-08, SignatureVersion: 2, Timestamp: 2010-10-04T17:05:51.388Z, Signature: XXXXXXXXXXXXXXXXXXXXXXXXXXXXX, )
4 oct. 2010 19:05:51 com.amazonaws.http.HttpClient handleResponse
INFO: Received successful response: 200, AWS Request ID: a43bf67d-cfd9-11df-a1ef-f7061c8dca90
[]
Adding user 'erwan' to 'Administrators'
4 oct. 2010 19:05:51 com.amazonaws.http.HttpClient execute
INFO: Sending Request: POST https://iam.amazonaws.com / Parameters: (Action: AddUserToGroup, GroupName: Administrators, SignatureMethod: HmacSHA256, UserName: erwan, AWSAccessKeyId: MY AWESOME KEY, Version: 2010-05-08, SignatureVersion: 2, Timestamp: 2010-10-04T17:05:51.569Z, Signature: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX )
4 oct. 2010 19:05:51 com.amazonaws.http.HttpClient handleResponse
INFO: Received successful response: 200, AWS Request ID: a457e216-cfd9-11df-a356-3d1e141e353d
done !

AWESOME !! \o/

Next, I'll make something useful out of it, like a Unix-style adduser/addgroup script that creates individual developer and admin access keys, play with policies to restrict usage based on group membership, try to reduce Groovy's startup time by preloading the interpreter, etc.


31 Aug 2010

Time Machine on a network drive : you will need to increase the band size …

... but not by hiring a new drummer :-)

My QNAP TS-209 Pro's newest firmware introduces direct support for Time Machine. It works out of the box and is pretty cool, but I quickly realized there would be a problem in the long run: a full backup of my MacBook creates about 22,000 bands in the sparse bundle volume, which the poor little QNAP has a lot of trouble reading. I believe Time Capsules and many other appliances with few CPU cycles to spare on reading directories have trouble too. Maybe someone at Apple could think about adding 1 or 2 levels of directories to the bands of sparse images? That would make a lot of happy Time Capsule users ;-)

Suggestion:

  • bands are named by hexadecimal codes (0..f, then 2 letters when the namespace is exhausted, etc.), so create one sub-directory per prefix: 0/(every file that starts with 0), ..., f/(every file that starts with f) - see the sketch after this list
  • in order to maintain compatibility with existing sparsebundle volumes, add a "directory-prefix-level" key in Info.plist, used by the function that resolves a band's exact location in the bundle
  • directory-prefix-level should be increased each time the namespace is exhausted, and the storage driver should redistribute files across the new directories when that happens (blocking access to the volume in the meantime). This is a classic problem in B-trees (key redistribution across pages)
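
To make the proposed naming rule concrete, here is a toy sketch of the lookup it implies (the scheme and the function are purely hypothetical; this is not how sparse bundles work today):

# hypothetical band lookup for the suggested "directory-prefix-level" key:
# each level peels one more character off the band name into a directory
def band_path(band_name, prefix_level)
  dirs = (0...prefix_level).map { |i| band_name[i, 1] }
  File.join(*dirs, band_name)
end

band_path('a3f', 0)  # => "a3f"      (today's flat layout)
band_path('a3f', 1)  # => "a/a3f"
band_path('a3f', 2)  # => "a/3/a3f"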

So, until Apple fixes their sparse bundle driver, the solution is to increase the band size, to make sure there are fewer files in the directory.

You need to have an existing backup stored on your NAS. First, I strongly suggest making a copy of the backup's directory if you care about your backups (you care about your backups, don't you?).

The QNAP Time Machine service creates a share named /share/TMBackup. Once Time Machine has used it once (even for a cancelled backup), it will contain a directory named after your Mac's hostname (mine is moody):

moody.sparsebundle

This directory is a plain, standard sparse bundle like those created with hdiutil(1), with the exception of a file named com.apple.TimeMachine.MachineID.plist. This file links the backup to your host (it contains the hostname and MAC address), and Time Machine will not recognize a sparse bundle as one of its own if it can't find this file inside. When that happens, TM creates another directory and starts all over: moody 1.sparsebundle

Make sure TM won't try to access the drive remotely while you run the procedure. You should disable it altogether by choosing "no drive" in its preferences pane.

Make sure you have the share mounted and accessible from a terminal. To do that, I had to "enter Time Machine", cancel, open a Terminal, sudo, and then I was finally able to cd /Volumes/TMBackup (the name of the volume will obviously differ depending on your setup).

Create a copy of your sparse bundle with a bigger band size. This procedure needs free space; I believe there is no way to convert a sparse bundle in place (if you know how to do it in place, I'd be happy to hear about it :-)):

hdiutil convert -verbose -tgtimagekey sparse-band-size=262144 -format UDSB moody.sparsebundle -o tmp.moody.sparsebundle

It will take a long time, maybe hours for an existing, valid backup. Be warned: you are transferring a lot of data across the network. (hdiutil's sparse-band-size is expressed in 512-byte sectors, so 262144 gives 128 MB bands instead of the default 8 MB.)

Once the command completes, you should have moody.sparsebundle and tmp.moody.sparsebundle in the share directory on the NAS. Copy com.apple.TimeMachine.MachineID.plist to the new location:


cp moody.sparsebundle/com.apple.TimeMachine.MachineID.plist tmp.moody.sparsebundle/

Change the new sparse bundle's name to the former one:


mv moody.sparsebundle moody.sparsebundle.old
mv tmp.moody.sparsebundle moody.sparsebundle

You should be good to go with TM now. Give it a try ...

Once you have validated TM can access your new bundle and is displaying data from your backup, you can get rid of the old moody.sparsebundle.old directory.

8 Jun 2010

Monitis vs Open Source? WTF?

In a recently published white paper, Monitis tries to prove that their cloud-based monitoring solution is far superior to "Open Source Monitoring Software".

What are the arguments that should make you discard any Open Source based monitoring solution?

  • robust notifications and alerts
  • quick and easy to setup
  • low cost of entry and TCO
  • monitoring from outside the internal network
  • easily scalable
  • green
  • cool

So, what do these bullets have to do with Open Source versus proprietary, exactly? Absolutely nothing.

This white paper proves that there may be better and more cost-effective options than hosting your monitoring infrastructure yourself. Well, news at ten! Wow! This is absolutely fantastic!

Any seasoned sysadmin should know that a switch can fry or a mail server can go down, leaving your monitoring infrastructure without any means to probe service health and alert you. That's part of monitoring 101!

This is not an "OSS or proprietary", "OSS or hosted", "OSS or whatever" decision; it is a design decision. They are comparing setting up a full-blown monitoring infrastructure (probes, queues, passive/active checks, agents, etc.) against editing probes and alerts in an existing platform. Theirs, to be exact.

Robust notifications and alerts? Can I be notified via Jabber (or Yammer) when using Monitis? No. The available notification options are e-mail and/or text messages. My monitoring infrastructure sends alerts using e-mail, Jabber and private yams. Can I do that with Monitis? No. I wrote the alert scripts myself. Can I do that with Monitis? No. Argument dismissed.

Quick and easy to set up? I'm pretty sure the platform Monitis runs their service on is not easy to set up. That's part of why they provide a pretty good product; that's the barrier to entry in their business. They sell a service that's easy to set up. I would expect no less. Now, is adding monitoring for new hosts and services difficult on my current monitoring platform? Not at all. Once past the initial setup phase, which is, admittedly, quite a steep slope to climb, I can add services and hosts in a breeze. My platform runs on a cloud infrastructure similar to the one they are selling as green and cool tech. I even add hosts to the monitoring infrastructure automatically when the instances are launched. I have started writing software that monitors a specific metric (not a TCP port's availability or a server's transmit time, a business-related metric) and starts instances when needed. Can I do that with the Monitis platform? No. My platform is just as "quick and easy" to set up.

Low cost of entry and TCO? Admittedly, if you want to be serious about monitoring, you need dedicated hardware. In the white paper, the arguments against setting up your own platform are the cost of the hardware and of the decently competent people required to operate it:

For open-source tools, however, the tab for monitoring is high. Consider these costs:
  • $2,000+ for a server
  • $2,000+ for a backup server and storage
  • $1,000 yearly per server for electricity
  • Setup and maintenance labor cost (usually a dedicated or a part-time resource – at least $30,000 per year) to do such things as:
    • Add plug-ins
    • Backup data
    • Fix issues, patching
    • Update software
    • Setup monitoring and alerting

Where is the connection with Open Source Software? They are assuming operating costs for their customers and mutualizing them, making sure they can collect significant margins. Nice, but if I change "Open Source" to "proprietary", the argument stands. Even more so, actually. I know a lot of proprietary monitoring solutions that require much more hardware and administration than any "Open Source" one (thinking of HP OpenView, CA Unicenter, ...).

Monitoring from outside? Having an external monitor is good practice and a proper design decision. Again, what does it have to do with "Open Source"? Nothing in the "Open Source" solution they are trying to debunk prevents you from setting up an external monitoring system. The software even has built-in support for that. And does Monitis allow me to monitor services inside my infrastructure without exposing them? If their platform is hosted in the cloud, they must be using many different IP addresses to run their probes. How do I set up my border routers? Do I even want to do that?! No, I don't :-) A proper monitoring architecture must include both external and internal servers running probes.

Easily scalable? Their platform scales. Good for them. Congrats. With its centralized architecture, and the fact that by default every probe is run from a central host, Nagios may not scale well, OK. But the "few hundred" servers assumed in this white paper are far from the limit. And you can make Nagios scale. I don't claim it is easy, though.

Green? My Nagios server runs on top of the cloud. It's green too! It's cool too! What a joke!

OK, that was fun. Just a few totally irrelevant arguments against using Open Source Monitoring Software, by a provider of a proprietary monitoring solution ...

It reminds me of when Nominum argued about their "superior proprietary solution", allegedly more secure because the source code wasn't available. As if cache poisoning attacks were caused by the availability of BIND's source code.

If Monitis can prove that they don't use any kind of open source software in their platform, these arguments might have some basis, but I doubt it. I'm pretty sure they use open source software all over their platform. For a start, netcraft.com's data about their web servers doesn't show such a dislike of OSS when it comes to web servers and operating systems. I can't help wondering if it's the same internally.

In any case, they should not shoot at projects such as Nagios, which, at the very least, must have helped them decide to design their own proprietary platform! The reasoning in this white paper is completely flawed. It should be titled "Why Monitis hosted monitoring is better", and every reference to "Open Source Software" in it should be replaced by "self-managed monitoring solution".
