Tarus Balog : Early/Often on the Horizon

May 22, 2015 07:51 PM

Lots of stuff, and I mean lots of cool stuff, is going on, and to paraphrase Hamlet, I have not enough thoughts to put them in, imagination to give them shape, or time to act them in. I spent this week in the UK, but I should be home for a while and I hope to catch up.

But I wanted to put down at least one thought. When we made the very difficult decision to split OpenNMS into two products, Horizon and Meridian, we had some doubts that it was the right thing to do. Well, at least for me, those doubts have been removed.

It used to take us 18 or more months to get a major release out. Due to the support business, we were hesitant both to remove code we no longer needed and to try the newest things. Since we moved to the Horizon model we’ve released three major versions in six months, and not only have we added a number of great features, we are finally getting around to removing stuff we no longer need and finishing projects that have languished in the past.

In the meantime we’re delivering Meridian to customers who value stability over features, with the knowledge that the version they are running is supported for three years. Seriously, we have some customers upgrading from OpenNMS 1.8 (six major releases back) who obviously want longer release cycles. And even if you don’t need support, you can get the Meridian software for a rather modest fee, coupled with OpenNMS Connect for those times when you really just need to ask a question.

Anything OpenNMS does well is a reflection on our great team and community, but I take personally any shortcomings. At least now I can see the path to minimize them if not remove them completely.

It’s a good feeling.

Warren Myers : primary elections should happen at the same time across the country

May 21, 2015 03:08 PM

In Kentucky, this past Tuesday was Primary Day: the day every registered voter, in the appropriate party, could go to the polls and say who we want to run to represent our party in the General Election.

While political parties, as such, need to be abolished, and voting is borked, that every state holds its Primary on a different day is wildly unhelpful.

Because they are not on the same day, you are often presented with candidates who are neither your first nor your second, nor perhaps even your third, choice.

Since the General Election is fixed nationally to happen on the first Tuesday after the first Monday in November, state-by-state Primary Election days should also be fixed to happen simultaneously across the country.

Jesse Morgan : The Pain and Fury of vmware-cli on CentOS 7, Part 2

May 20, 2015 07:27 PM

 

Last time, we left off with a functioning vmware-cli and no way to replicate it in ansible. Let’s fix that. We’ll be using FPM to create an RPM containing all of the files that were created by the installer (including cpan modules).
But first, we need to figure out which files need to be packaged. VMware aaallmost kinda makes this easy for us- each file it installs is tracked in /etc/vmware-vcli/locations (excluding cpan files). Parsing through this, we see

  • a list of all the CPAN modules installed
  • a list of directories it’s created
  • a list of the files it’s created (including symlinks)
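A rough way to split a locations-style manifest into those groups can be sketched in shell. Note that the “file <path>” / “directory <path>” line format below is an assumption demonstrated on a toy sample- check the real /etc/vmware-vcli/locations before trusting the field layout:

```shell
# Toy stand-in for /etc/vmware-vcli/locations; real entries may carry extra fields.
cat > /tmp/locations.demo <<'EOF'
answer BINDIR /usr/bin
directory /opt/vmwarecli
file /usr/bin/esxcli
file /opt/vmwarecli/bin/vmware-cmd
EOF

# Split the manifest into directory and file lists keyed on the first field.
awk '$1 == "directory" { print $2 }' /tmp/locations.demo > /tmp/vcli-dirs.list
awk '$1 == "file"      { print $2 }' /tmp/locations.demo > /tmp/vcli-files.list
cat /tmp/vcli-dirs.list /tmp/vcli-files.list
```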

Step 1: Identifying vmware-cli Files to Package

We’ll hold off on CPAN modules for the moment and jump ahead to directories and files. We can trim this list down quite a bit because we can ignore the filenames in directories that the installer created- in other words, we can include /foo/bar/ and it’ll automatically include /foo/bar/baz.pm. This allows us to cut our 1400+ files and directories down to about 63- most of which are symlinks and can be simplified with a wild card. Once we refactor all that info, the end result is something like this:
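The “directories subsume their contents” trick is easy to script against a sorted path list: keep an entry only when it isn’t beneath the last entry we kept. A sketch with toy paths, not the real manifest:

```shell
# Toy sorted path list; /foo should swallow everything beneath it.
printf '%s\n' /foo /foo/bar /foo/bar/baz.pm /quux/file.pm > /tmp/paths.demo

# Keep only top-most entries: skip any path that begins with "<last kept>/".
awk 'NR==1 { keep=$0; print; next }
     index($0, keep "/") != 1 { keep=$0; print }' /tmp/paths.demo
```

This prints only /foo and /quux/file.pm.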

/etc/vmware-vcli/locations /usr/share/perl5/VMware/ /usr/share/perl5/WSMan/ /opt/vmwarecli/ /usr/bin/dcli /usr/bin/esxcfg-* /usr/bin/esxcli /usr/bin/resxtop /usr/bin/svmotion /usr/bin/vicfg-* /usr/bin/vifs /usr/bin/vihostupdate /usr/bin/vihostupdate35 /usr/bin/viperl-support /usr/bin/vmkfstools /usr/bin/vmware-cmd /usr/bin/vmware-uninstall-vSphere-CLI.pl /etc/vmware-vcli/config

Save this list for later.

An Aside- Thanks for the mess, CPAN.

 

If you recall, we took a snapshot of our perl modules before and after our installation.  You may remember us doing the following (don’t do this right now!!!):

perl -e 'use doesntexist;'
Can't locate doesntexist.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at -e line 1.
BEGIN failed--compilation aborted at -e line 1.
find /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 >/tmp/pristine.list

then later:

find /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 >/tmp/final.list

Unfortunately that’s only part of the story. When we rerun that first command:

perl -e 'use doesntexist;'
Can't locate doesntexist.pm in @INC (@INC contains: /root/perl5/lib/perl5/x86_64-linux-thread-multi /root/perl5/lib/perl5 /root/perl5/lib/perl5/x86_64-linux-thread-multi /root/perl5/lib/perl5 /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at -e line 1.
BEGIN failed--compilation aborted at -e line 1.

What the heck? Where did all those /root/perl5/ paths come from? From our /root/.bashrc:

<...>
export PERL_LOCAL_LIB_ROOT="$PERL_LOCAL_LIB_ROOT:/root/perl5";
export PERL_MB_OPT="--install_base /root/perl5";
export PERL_MM_OPT="INSTALL_BASE=/root/perl5";
export PERL5LIB="/root/perl5/lib/perl5:$PERL5LIB";
export PATH="/root/perl5/bin:$PATH";

export PERL_LOCAL_LIB_ROOT="$PERL_LOCAL_LIB_ROOT:/root/perl5";
export PERL_MB_OPT="--install_base /root/perl5";
export PERL_MM_OPT="INSTALL_BASE=/root/perl5";
export PERL5LIB="/root/perl5/lib/perl5:$PERL5LIB";
export PATH="/root/perl5/bin:$PATH";

These… these were not here before. So what is in /root/perl5/? The big takeaways are:

/root/perl5/lib/perl5/CPAN/Meta/*
/root/perl5/lib/perl5/ExtUtils/*
/root/perl5/lib/perl5/JSON/*
/root/perl5/lib/perl5/Parse/*
/root/perl5/bin/instmodsh

*sigh* I have no idea when or why or how these were inserted. I don’t know if that was CPAN not cleaning up after itself or what. All I know is that a script running as a regular user might now use different libraries than one running as root. I don’t know if those libs are important, but judging from the contents I’m presuming they’re only for building packages and are not functionally involved (I hope).

So we’ll blissfully ignore them and hope it doesn’t bite us later on.
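For what it’s worth, those exports look like the standard local::lib bootstrap block that cpan’s first-run setup offers to append to your shell profile- that’s my best guess for the culprit. Their effect is easy to see by comparing @INC with and without PERL5LIB (the path here is illustrative):

```shell
# @INC entries contributed by PERL5LIB disappear when the variable is unset.
PERL5LIB=/root/perl5/lib/perl5 perl -e 'print "$_\n" for @INC' > /tmp/inc.with
env -u PERL5LIB perl -e 'print "$_\n" for @INC' > /tmp/inc.without
diff /tmp/inc.with /tmp/inc.without || true
```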

Step 2: Identifying CPAN-Installed Files to Package

OK, so we have our two files- /tmp/pristine.list and /tmp/final.list. We can use the following to gather a list of new files:

cat /tmp/pristine.list /tmp/final.list |sort|uniq -u >/tmp/unique.list
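One caveat: uniq -u keeps lines that are unique to either file, so anything deleted between the two snapshots would show up in the list too, not just new files. A toy demo of the trick:

```shell
# Two snapshot lists; the second gains one new path.
printf '%s\n' /usr/share/perl5 /usr/lib64/perl5 | sort > /tmp/pristine.demo
printf '%s\n' /usr/share/perl5 /usr/lib64/perl5 /usr/share/perl5/VMware | sort > /tmp/final.demo

# Lines appearing in only one of the two files survive uniq -u.
cat /tmp/pristine.demo /tmp/final.demo | sort | uniq -u
```

This prints just /usr/share/perl5/VMware.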

We can do roughly the same thing with this list as we did with /etc/vmware-vcli/locations- ignore the files under any directories we’ve created. In addition, we can ignore any directories we’ve already documented, like /usr/share/perl5/VMware and /usr/share/perl5/WSMan/. Unfortunately, there are some troublesome directories- /usr/local/share/perl5 and /usr/local/lib64/perl5 are pretty generic; if we include those, they may cause conflicts with other RPMs down the road. I honestly don’t know if this is the case, so we’ll hope for the best. We end up with the following list:

/usr/lib64/perl5/auto/Locale
/usr/lib64/perl5/auto/Params
/usr/lib64/perl5/perllocal.pod
/usr/local/lib64/perl5
/usr/local/share/perl5
/usr/share/perl5/Locale/Maketext
/usr/share/perl5/Params

Save this list for later.

Final Packages

Time to bundle everything up… But we’ll need fpm installed first:

yum install rpm-build ruby-devel gcc rubygems
gem install fpm

From here, we can take those file/directory lists and combine them into a single file:

ls -d /etc/vmware-vcli/locations /usr/share/perl5/VMware/ /usr/share/perl5/WSMan/ /opt/vmwarecli/ /usr/bin/dcli /usr/bin/esxcfg-* /usr/bin/esxcli /usr/bin/resxtop /usr/bin/svmotion /usr/bin/vicfg-* /usr/bin/vifs /usr/bin/vihostupdate /usr/bin/vihostupdate35 /usr/bin/viperl-support /usr/bin/vmkfstools /usr/bin/vmware-cmd /usr/bin/vmware-uninstall-vSphere-CLI.pl /etc/vmware-vcli/config /usr/lib64/perl5/auto/Locale /usr/lib64/perl5/auto/Params /usr/lib64/perl5/perllocal.pod /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/share/perl5/Locale/Maketext /usr/share/perl5/Params > /tmp/fpm_final_list

Now we’ll feed /tmp/fpm_final_list into FPM to create a shiny-new RPM:

fpm -s dir -d perl-Data-Dumper -a noarch --rpm-user root --rpm-group root -t rpm -n vmware-cli -v '6.0.0_2503617' --iteration 1 -C /  /etc/vmware-vcli/locations /usr/share/perl5/VMware/ /usr/share/perl5/WSMan/ /opt/vmwarecli/ /usr/bin/dcli /usr/bin/esxcfg-* /usr/bin/esxcli /usr/bin/resxtop /usr/bin/svmotion /usr/bin/vicfg-* /usr/bin/vifs /usr/bin/vihostupdate /usr/bin/vihostupdate35 /usr/bin/viperl-support /usr/bin/vmkfstools /usr/bin/vmware-cmd /usr/bin/vmware-uninstall-vSphere-CLI.pl /etc/vmware-vcli/config /usr/lib64/perl5/auto/Locale /usr/lib64/perl5/auto/Params /usr/lib64/perl5/perllocal.pod /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/share/perl5/Locale/Maketext /usr/share/perl5/Params

And because nothing is easy, this throws errors; apparently the current version of FPM doesn’t like symlinks (all of the files in /usr/bin). If we cut those out:

fpm -s dir -d perl-Data-Dumper -a noarch --rpm-user root --rpm-group root -t rpm -n vmware-cli -v '6.0.0_2503617' --iteration=1 -C / /etc/vmware-vcli /opt/vmwarecli /usr/lib64/perl5/auto/Locale /usr/lib64/perl5/auto/Params /usr/lib64/perl5/perllocal.pod /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/share/perl5/Locale/Maketext /usr/share/perl5/Params /usr/share/perl5/VMware /usr/share/perl5/WSMan

 

This builds fine, BUT we have no symlinks. Ugh, this is kludgy, but we can make a fugly little postinstall script and have FPM include that (I hope):

echo '#!/bin/bash' > /tmp/vmlinker.sh
# link targets must be absolute, or /usr/bin/$i would just point at itself
echo 'for i in /opt/vmwarecli/bin/* ; do ln -sf "$i" "/usr/bin/${i##*/}"; done' >> /tmp/vmlinker.sh
chmod 755 /tmp/vmlinker.sh

and now we can include it in our fpm command:

fpm -f --post-install /tmp/vmlinker.sh -s dir -d perl-Data-Dumper -a noarch --rpm-user root --rpm-group root -t rpm -n vmware-cli -v '6.0.0_2503617' --iteration=1 -C / /etc/vmware-vcli /opt/vmwarecli /usr/lib64/perl5/auto/Locale /usr/lib64/perl5/auto/Params /usr/lib64/perl5/perllocal.pod /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/share/perl5/Locale/Maketext /usr/share/perl5/Params /usr/share/perl5/VMware /usr/share/perl5/WSMan
no value for epoch is set, defaulting to nil {:level=>:warn}
Force flag given. Overwriting package at vmware-cli-6.0.0_2503617-1.noarch.rpm {:level=>:warn}
no value for epoch is set, defaulting to nil {:level=>:warn}
Created package {:path=>"vmware-cli-6.0.0_2503617-1.noarch.rpm"}

And there we go. Time to test it.

Before we go, we need to snapshot our machine yet again as “package-created.” We also need to copy our wonderful RPM down to our local machine so we can revert back to the raw snapshot and test it.

Testing our RPM

Revert back to our Raw Snapshot, then copy our rpm to /tmp on it.

yum install /tmp/vmware-cli-6.0.0_2503617-1.noarch.rpm

Note that this will install perl-Data-Dumper as a dependency. Why? Because somehow the rpm was installed and I didn’t notice that it was still there, and I don’t want to backtrack to have cpan build it. So now it’s a dependency- we’ll circle back to this in a moment- for now… does it work?

/opt/vmwarecli/bin/vmware-cmd --server vcenter.domain.com -l -h esxhost1 --username vcenteruser@domain.com --password 'somethingsecure'
/vmfs/volumes/54eaad0a-923ab37c-90b6-b82a72d4d796/hostname1/hostname1.vmx
/vmfs/volumes/54eaad24-7c233c62-970e-b82a72d4d796/hostname2/hostname2.vmx
<...>

SUCCESS!

Now that we know our RPM works, we can touch back on some more uncomfortable questions. Right now we have about 28 perl modules installed via CPAN (possibly more). Many of our other apps (such as morgnagplug and manubulon) depend on the RPM versions of some of those modules. When we install those apps, we could have conflicts down the road. So how many of those modules could be installed as RPMs instead?

module Devel::StackTrace
module Class::Data::Inheritable
module Convert::ASN1
module Crypt::OpenSSL::RSA
module Crypt::X509
module Exception::Class
module UUID::Random
module Archive::Zip
module Compress::Zlib
module Compress::Raw::Zlib
module Path::Class
module Try::Tiny
module Crypt::SSLeay
module IO::Compress::Base
module IO::Compress::Zlib::Constants
module HTML::Parser
module UUID
module Data::Dump
module SOAP::Lite
module URI
module XML::SAX
module XML::NamespaceSupport
module XML::LibXML::Common
module XML::LibXML
module LWP
module LWP::Protocol::https
module IO::Socket::INET6
module Net::INET6Glue

I’m not sure. Let’s find out.

Let’s snapshot this as “vmware rpm installed”. As an aside, the snapshot functionality almost makes up for the grief VMware is putting me through.

Which CPAN modules can be swapped for RPMs?

Here’s my plan of attack: for each module listed above,

  1. yum search modulename; not all modules will have RPMs, but this is the best way to find out if a replacement is possible.
  2. run rpm -ql vmware-cli |grep modulename ; note that you should replace the :: with a slash, so Devel::StackTrace becomes Devel/StackTrace.
  3. delete the matching files from the filesystem (we can always reinstall the RPM if we get in trouble, or even revert back to the “vmware rpm installed” snapshot).
  4. yum install moduleRPM ; this should install ONLY that one module, no dependencies; if it has dependencies, put it to the side.
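Steps 1 and 2 are mechanical enough to script: the perl-Foo-Bar RPM naming convention and the ::-to-slash rewrite are plain substitutions (the naming convention holds for most, though not all, perl RPMs):

```shell
# Turn each module name into its likely RPM name and its on-disk path fragment.
for mod in Devel::StackTrace Class::Data::Inheritable Convert::ASN1; do
  pkg="perl-$(printf '%s' "$mod" | sed 's/::/-/g')"   # e.g. perl-Devel-StackTrace
  path="$(printf '%s' "$mod" | sed 's#::#/#g')"       # e.g. Devel/StackTrace
  echo "yum search $pkg ; rpm -ql vmware-cli | grep $path"
done
```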

I’m now going to run through each module and document it as I go:

  • Devel::StackTrace   RPM exists, has no dependencies. Files removed, rpm installed, vmware-cmd still works
  • Class::Data::Inheritable   RPM exists, has no dependencies, Files removed, rpm installed, vmware-cmd still works
  • Convert::ASN1  RPM exists, has no dependencies, Files removed, rpm installed, vmware-cmd still works
  • Crypt::OpenSSL::RSA RPM exists, HAS DEPENDENCIES, shelved
  • Crypt::X509  No RPM currently exists
  • Exception::Class   RPM exists, has no dependencies, Files removed, rpm installed, vmware-cmd still works
  • UUID::Random No RPM currently exists
  • Archive::Zip  RPM exists, HAS DEPENDENCIES, shelved
  • Compress::Zlib  RPM exists, HAS DEPENDENCIES, shelved
  • Compress::Raw::Zlib   RPM exists, has no dependencies, Files removed, rpm installed, vmware-cmd still works
  • Path::Class No RPM currently exists
  • Try::Tiny RPM exists (perl-try-tiny), has no dependencies, Files removed, rpm installed, vmware-cmd still works
  • Crypt::SSLeay   RPM exists, has no dependencies, Files removed, rpm installed, vmware-cmd still works
  • IO::Compress::Base  No RPM currently exists (probably installed with perl-IO-Compress, but it has dependencies)
  • IO::Compress::Zlib::Constants  No RPM currently exists (possibly installed with perl-IO-Compress, but it has dependencies)
  • HTML::Parser  RPM exists, HAS DEPENDENCIES, shelved (here be dragons)
  • UUID  Difficult to discern- the UUID RPMs don’t seem to line up; shelved.
  • Data::Dump No RPM currently exists (not to be confused with Data::Dumper)
  • SOAP::Lite No RPM currently exists
  • URI Difficult to discern… RPM exists, HAS DEPENDENCIES, shelved
  • XML::SAX RPM exists, HAS DEPENDENCIES, shelved
  • XML::NamespaceSupport  RPM exists, has no dependencies. Files removed, rpm installed, vmware-cmd still works
  • XML::LibXML::Common  No RPM currently exists (may be included in a different RPM)
  • XML::LibXML  RPM exists, HAS DEPENDENCIES, shelved
  • LWP  RPM exists, HAS DEPENDENCIES, shelved (here be dragons)
  • LWP::Protocol::https   RPM exists, HAS DEPENDENCIES, shelved (here be dragons)
  • IO::Socket::INET6   RPM exists, HAS DEPENDENCIES, shelved
  • Net::INET6Glue No RPM currently exists (may be included in a different RPM)

Circling back, let’s see if any of the libs with dependencies have since had their dependencies installed:

  • Archive::Zip  RPM exists, has no dependencies. Files removed, rpm installed, vmware-cmd still works

The rest are dead ends. We can now hypothesize that the following libs are acceptable in RPM form:

perl-Devel-StackTrace perl-Class-Data-Inheritable perl-Convert-ASN1 perl-Exception-Class perl-Compress-Raw-Zlib perl-Try-Tiny perl-Crypt-SSLeay perl-XML-NamespaceSupport perl-Archive-Zip

if we exclude the following paths from our FPM build process:

/usr/local/lib64/perl5/auto/Devel/StackTrace/ /usr/local/share/perl5/Devel/StackTrace.pm /usr/local/share/perl5/Devel/StackTrace/ /usr/local/lib64/perl5/auto/Class/Data/ /usr/local/share/perl5/Class/Data /usr/local/lib64/perl5/auto/Convert/ /usr/local/share/perl5/Convert/ /usr/local/lib64/perl5/auto/Exception/ /usr/local/share/perl5/Exception/ /usr/local/lib64/perl5/Compress/ /usr/local/lib64/perl5/auto/Compress/ /usr/local/lib64/perl5/auto/Compress/ /usr/local/lib64/perl5/auto/Try/ /usr/local/share/perl5/Try/ /usr/local/lib64/perl5/Crypt/SSLeay/  /usr/local/lib64/perl5/Crypt/SSLeay.pm  /usr/local/lib64/perl5/auto/Crypt/SSLeay/ /usr/local/lib64/perl5/auto/XML/NamespaceSupport/ /usr/local/share/perl5/XML/NamespaceSupport.pm /usr/local/lib64/perl5/auto/Archive/ /usr/local/share/perl5/Archive/

 

Let’s switch over to the “package created” snapshot and repackage our RPM with new dependencies and exclusions:

fpm -f --post-install /tmp/vmlinker.sh -s dir -d perl-Data-Dumper -a noarch --rpm-user root --rpm-group root -t rpm -n vmware-cli -v '6.0.0_2503617' --iteration=1 -C / \
 \
-d perl-Devel-StackTrace -d perl-Class-Data-Inheritable -d perl-Convert-ASN1 -d perl-Exception-Class -d perl-Compress-Raw-Zlib -d perl-Try-Tiny -d perl-Crypt-SSLeay -d perl-XML-NamespaceSupport -d perl-Archive-Zip \
 \
-x "**usr/local/lib64/perl5/auto/Devel/StackTrace/**" -x "**usr/local/share/perl5/Devel/StackTrace.pm" -x "**usr/local/share/perl5/Devel/StackTrace/**" -x "**usr/local/lib64/perl5/auto/Class/Data/**" -x "**usr/local/share/perl5/Class/Data**" -x "**usr/local/lib64/perl5/auto/Convert/**" -x "**usr/local/share/perl5/Convert/**" -x "**usr/local/lib64/perl5/auto/Exception/**" -x "**usr/local/share/perl5/Exception/**" -x "**usr/local/lib64/perl5/Compress/**" -x "**usr/local/lib64/perl5/auto/Compress/**" -x "**usr/local/lib64/perl5/auto/Compress/**" -x "**usr/local/lib64/perl5/auto/Try/**" -x "**usr/local/share/perl5/Try/**" -x "**usr/local/lib64/perl5/Crypt/SSLeay/**" -x "**usr/local/lib64/perl5/Crypt/SSLeay.pm" -x "**usr/local/lib64/perl5/auto/Crypt/SSLeay/**" -x "**usr/local/lib64/perl5/auto/XML/NamespaceSupport/**" -x "**usr/local/share/perl5/XML/NamespaceSupport.pm" -x "**usr/local/lib64/perl5/auto/Archive/**" -x "**usr/local/share/perl5/Archive/**" \
 \
/etc/vmware-vcli /opt/vmwarecli /usr/lib64/perl5/auto/Locale /usr/lib64/perl5/auto/Params /usr/lib64/perl5/perllocal.pod /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/share/perl5/Locale/Maketext /usr/share/perl5/Params /usr/share/perl5/VMware /usr/share/perl5/WSMan

Note: fpm’s excludes are picky as hell- you need double quotes around each one, as well as double wildcards in place of a leading / and a trailing one if it’s a directory.

Even then, this STILL leaves the directories, just not the contents of the directory.

I’m trying too hard. Let’s try something slightly different:

fpm -f --post-install /tmp/vmlinker.sh -s dir -d perl-Data-Dumper -a noarch --rpm-user root --rpm-group root -t rpm -n vmware-cli -v '6.0.0_2503617' --iteration=1 -C / \
 \
-d perl-Devel-StackTrace -d perl-Class-Data-Inheritable -d perl-Convert-ASN1 -d perl-Exception-Class -d perl-Compress-Raw-Zlib -d perl-Try-Tiny -d perl-Crypt-SSLeay -d perl-XML-NamespaceSupport -d perl-Archive-Zip \
 \
-x "**Devel/StackTrace**" -x "**Class/Data/Inheritable**" -x "**Convert/ASN1**" -x "**Exception/Class**" -x "**Compress/Raw/Zlib**" -x "**Try/Tiny**" -x "**Crypt/SSLeay**" -x "**XML/NamespaceSupport**" -x "**Archive/Zip**" \
 \
/etc/vmware-vcli /opt/vmwarecli /usr/lib64/perl5/auto/Locale /usr/lib64/perl5/auto/Params /usr/lib64/perl5/perllocal.pod /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/share/perl5/Locale/Maketext /usr/share/perl5/Params /usr/share/perl5/VMware /usr/share/perl5/WSMan

Jackpot. None of the replaced modules are in the RPM, but it did reveal something else…

VMware didn’t track the dependencies that CPAN built, only the ones it needed directly. What do I mean? For example, /usr/local/lib64/perl5/Class/MethodMaker.pm appears in our RPM bundle and our final.list. It was installed while vmware-cli was installing, but there is no trace of Class::MethodMaker in /etc/vmware-vcli/locations.
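Finding those untracked leftovers is a set-difference problem: subtract the paths the locations file claims from the paths CPAN actually dropped on disk. Sketched with toy lists (comm wants both inputs sorted):

```shell
# What CPAN installed vs. what the manifest tracks.
printf '%s\n' /a.pm /lib/Class/MethodMaker.pm /c.pm | sort > /tmp/installed.demo
printf '%s\n' /a.pm /c.pm | sort > /tmp/tracked.demo

# comm -23 prints lines only in the first file: installed but never tracked.
comm -23 /tmp/installed.demo /tmp/tracked.demo
```

This prints only /lib/Class/MethodMaker.pm.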

That means there’s an entirely separate group of CPAN modules that we probably don’t need. We’ll put this on the back burner and circle back later (maybe). In the meantime we need to switch over to our Raw snapshot and test this new RPM. Make sure to copy it to your local machine before reverting.

Once it’s copied over, try installing it again:

yum install /tmp/vmware-cli-6.0.0_2503617-1.noarch.rpm
<...>
Install 1 Package (+10 Dependent packages)
<...>

Those 10 dependent packages should be our friendly new RPMs.

/opt/vmwarecli/bin/vmware-cmd --server vcenter.domain.com -l -h esxhost1 --username vcenteruser@domain.com --password 'somethingsecure'
/vmfs/volumes/54eaad0a-923ab37c-90b6-b82a72d4d796/hostname1/hostname1.vmx
/vmfs/volumes/54eaad24-7c233c62-970e-b82a72d4d796/hostname2/hostname2.vmx
<...>

SUCCESS!

 

At this point, our RPM is installing the following CPAN modules (based on packlist files):

Locale::Maketext::Simple
Params::Check
Class::Inspector
Class::MethodMaker
Compress::Raw::Bzip2
Crypt::OpenSSL::RSA
Crypt::OpenSSL::Random
Crypt::X509
Data::Dump
Devel::CheckLib
Digest::MD5
Encode::Locale
Env
ExtUtils::CBuilder
ExtUtils::MakeMaker
File::Listing
HTML::Parser
HTML::Tagset
HTTP::Cookies
HTTP::Daemon
HTTP::Date
HTTP::Message
HTTP::Negotiate
IO::CaptureOutput
IO::Compress
IO::HTML
IO::SessionData
IO::Socket::INET6
IO::Socket::SSL
IPC::Cmd
Import::Into
JSON::PP
LWP
LWP::MediaTypes
LWP::Protocol::https
Module::Build
Module::CoreList
Module::Load
Module::Load::Conditional
Module::Metadata
Module::Runtime
Mozilla::CA
Net::HTTP
Net::INET6Glue
Net::SSLeay
Path::Class
Perl::OSType
SOAP::Lite
Socket6
Sub::Uplevel
Task::Weaken
Test::Simple
Test::Warn
URI
UUID
UUID::Random
WWW::RobotRules
XML::LibXML
XML::Parser::Lite
XML::SAX
XML::SAX::Base
autodie
version
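For reference, the module list above can be regenerated from the .packlist files themselves- each installed distribution leaves one under an auto/ directory. A sketch against a fake tree (point find at the real perl lib dirs, e.g. /usr/local/lib64/perl5, to do it for real):

```shell
# Fake site-perl tree: each installed distribution leaves auto/<Name>/.packlist.
mkdir -p /tmp/perl5.demo/auto/Class/MethodMaker /tmp/perl5.demo/auto/SOAP/Lite
touch /tmp/perl5.demo/auto/Class/MethodMaker/.packlist \
      /tmp/perl5.demo/auto/SOAP/Lite/.packlist

# Convert each .packlist path back into a module name.
find /tmp/perl5.demo -name .packlist \
  | sed 's#.*/auto/##; s#/\.packlist$##; s#/#::#g' | sort
```

This prints Class::MethodMaker and SOAP::Lite.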

 

I’m positive we could trim out many if not most of these, but I’m exhausted. I still need to get the rest of my stuff installed.

 

If this was helpful, please leave a comment below.

Jesse Morgan : The Pain and Fury of vmware-cli on CentOS 7

May 20, 2015 01:06 PM

OK, this has reached such a point of absurdity that I need to document it.

The Goal

Install vmware-cli on CentOS 7 for check_vmware_esx.pl to use.

The Requirements

  • installation needs to be reproducible with ansible.
  • the latest version of the cli, VMware-vSphere-CLI-6.0.0-2503617.x86_64.tar.gz

The Complications

  1. CentOS 7 uses a new version of Perl that is binary-incompatible with VMware’s pre-compiled modules.
  2. VMware doesn’t pay attention.
  3. CPAN seemed like a great idea 22 years ago. Unfortunately it didn’t keep up with the times.

The Setup

  • Freshly installed CentOS 7
  • The following custom ansible packages are installed (to give you an idea of what I was starting with, without absurd detail):
    • vim
    • sudo
    • sshkeys
    • bashconfig
    • snmpd
    • mariadb
    • httpd
    • git
    • icinga2(nagios-plugins,nagios-plugins-all )
    • manubulonplugins (perl-Net-SNMP)
    • morgnagplug (perl-HTML-Parser,perl-Compress-Raw-Zlib,perl-IO-Zlib,perl-libxml-perl,perl-XML-LibXML,perl-Time-Duration,perl-Number-Format,perl-Config-IniFiles,perl-DateTime)
  • CPAN is NOT installed (it’s a PITA with ansible, so it’s a last resort).

I currently have two snapshots:

  • a barebones snapshot without all this stuff installed (but it takes 20 minutes to reinstall it all). (Raw Snapshot)
  • a snapshot with all of the stuff listed above installed. (Configured Snapshot)

Test 1: Simplest possible install using --default

The initial installation demands openssl-devel be installed (I’m skipping that hassle for brevity).

yum install openssl-devel
./vmware-install.pl --prefix=/opt/vmwarecli EULA_AGREED=yes --default

This will select the perl modules pre-packaged by VMware. Note that it does complain about the following:

MIME::Base64 3.14 or newer 
Try::Tiny 0.22 or newer 
Socket6 0.23 or newer

These are installed via RPM for net-snmp and other packages, and are non-negotiable. The install completes fine, except the result doesn’t work:

 /opt/vmwarecli/bin/vmware-cmd --server vcenter.domain.com -l -h esxhost1 --username vcenteruser@domain.com --password 'somethingsecure'
Hiding the command line arguments: symbol lookup error: /usr/lib64/perl5/auto/Crypt/SSLeay/SSLeay.so: undefined symbol: Perl_Gthr_key_ptr

This is where Complication #1 comes into play- SSLeay.so was compiled against a different build of perl, so it’s binary-incompatible. That’s a simple fix, right?

./bin/vmware-uninstall-vSphere-CLI.pl
yum install perl-Crypt-SSLeay
./vmware-install.pl --prefix=/opt/vmwarecli EULA_AGREED=yes --default
/opt/vmwarecli/bin/vmware-cmd --server vcenter.domain.com -l -h esxhost1 --username vcenteruser@domain.com --password 'somethingsecure'
 length() used on @array (did you mean "scalar(@array)"?) at /usr/lib64/perl5/IO/Compress/Zlib/Extra.pm line 198.
SOAP request error - possibly a protocol issue: 500 read timeout

Out of the frying pan and into the fire. There are two actual problems here:

  1. The pre-packaged modules include a broken copy of IO::Compress::Zlib. This is annoying, but it doesn’t stop anything (I don’t think).
  2. There’s a problem with reading the soap stuff. What problem? Not sure yet.

That SOAP error first appears in 2009. It ties back to an old version of LWP (aka perl-libwww-perl.rpm). This is installed because morgnagplug uses perl-XML-LibXML.

OK, fine. I can break morgnagplug for the time being (testing purposes), so I remove that and rerun the installer. This pulls down the vmware-cli pre-packaged version of LWP and libxml (note that perl-libxml-perl.noarch.rpm is still installed through all of this). I’ve also removed it from my

/opt/vmwarecli/bin/vmware-cmd --server vcenter.domain.com -l -h esxhost1 --username vcenteruser@domain.com --password 'somethingsecure'
/usr/bin/perl: symbol lookup error: /usr/lib64/perl5/auto/XML/LibXML/Common/Common.so: undefined symbol: Perl_Gthr_key_ptr

So now I’m at an impasse- two of the vmware prepackaged modules are not binary compatible. While I could work around Crypt::SSLeay, I couldn’t work around XML::LibXML, because it’s dependent on perl-libwww-perl (LWP), which is incompatible.

At this point I can attempt to track down older versions of perl-libwww-perl, but that will then require older RPMs of half a dozen other libraries. This will require pinning them so they’re not accidentally upgraded and ignoring any security patches that may appear. Obviously this is not acceptable.

I don’t think I can go any further down this road. Time to consider CPAN.

Test 2: Maximum CPAN

I roll back to my Configured snapshot, install openssl-devel like before, then cpan and friends. This includes libuuid-devel for UUID and gcc (!) to get everything CPAN wants to install on the first try. Note this took about 3 days of trial and error to figure out what was missing so that everything would install properly (mostly because of UUID).

yum install openssl-devel libuuid-devel perl-YAML perl-Devel-CheckLib gcc perl-CPAN

Just as an FYI, CPAN is dependent on perl-ExtUtils-Install, perl-ExtUtils-MakeMaker, perl-ExtUtils-Manifest, perl-ExtUtils-ParseXS, perl-Test-Harness, perl-devel, and perl-local-lib among other things. This is important in a moment (Note: RPM installed perl-ExtUtils-MakeMaker-6.68-3.el7.noarch)

./vmware-install.pl --prefix=/opt/vmwarecli EULA_AGREED=yes
<...>
Do you want to install precompiled Perl modules for RHEL?
[yes] no
<...>
Can't locate Module/Build.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .).
BEGIN failed--compilation aborted.
Below mentioned modules with their version needed to be installed,
these modules are available in your system but vCLI need specific 
version to run properly

Module: ExtUtils::MakeMaker, Version: 6.96 
Module: Module::Build, Version: 0.4205 
Module: LWP, Version: 5.837 
Do you want to continue? (yes/no)

Note that the compilation failure above is due to a missing Module::Build, which it looks like it’ll install below anyway. Also note the three packages that need explicit versions-

  • perl-ExtUtils-MakeMaker 6.68 vs ExtUtils::MakeMaker 6.96 (Newer than RPM)
  • perl-Module-Build 0.40.05 vs Module::Build 0.4205 (Newer than RPM)
  • perl-libwww-perl 6.05 vs LWP 5.837 (Older than RPM)

In either case, the install fails because Class::MethodMaker can’t be built, and the script gives up:

CPAN not able to install following Perl modules on the system. These must be 
installed manually for use by vSphere CLI:

Class::MethodMaker 2.10 or newer

Following its directions, I try to install it manually to see what it’s missing:

 cpan -i Class::MethodMaker
Reading '/root/.cpan/Metadata'
 Database was generated on Tue, 19 May 2015 16:17:02 GMT
Running install for module 'Class::MethodMaker'
Running make for S/SC/SCHWIGON/class-methodmaker/Class-MethodMaker-2.24.tar.gz
Checksum for /root/.cpan/sources/authors/id/S/SC/SCHWIGON/class-methodmaker/Class-MethodMaker-2.24.tar.gz ok

 CPAN.pm: Building S/SC/SCHWIGON/class-methodmaker/Class-MethodMaker-2.24.tar.gz

Checking if your kit is complete...
Looks good
JSON::PP 2.27103 is not available
 at /usr/local/share/perl5/CPAN/Meta/Converter.pm line 22.
 at /usr/local/share/perl5/ExtUtils/MM_Any.pm line 831.
JSON::PP 2.27103 is not available
 at /usr/local/share/perl5/CPAN/Meta/Converter.pm line 22.
Warning: No success on command[/usr/bin/perl Makefile.PL]
 SCHWIGON/class-methodmaker/Class-MethodMaker-2.24.tar.gz
 /usr/bin/perl Makefile.PL -- NOT OK
Running make test
 Make had some problems, won't test
Running make install
 Make had some problems, won't install
Could not read metadata file. Falling back to other methods to determine prerequisites

So CPAN fails because it requires JSON::PP. I have three choices: install JSON::PP via CPAN, install JSON::PP via RPM, or install Class::MethodMaker via RPM. I don’t want to get into dependency hell with cpan, and I know from previous experience that it will complain that the rpm for JSON::PP is too old, so I’ll attempt to simply install the perl-Class-MethodMaker rpm (which, oddly, isn’t dependent on JSON::PP). I’ll add this to my “always install first” list with openssl-devel.

(This is especially frustrating because I *know* that the installer will eventually use cpan to install JSON::PP, and this error is likely caused by poor dependency ordering.)

This works, and I’m able to run the install script again; this time with only two problematic modules:

The following Perl modules were found on the system but may be too old to work 
with vSphere CLI:

MIME::Base64 3.14 or newer 
Socket6 0.23 or newer

The real question- will vmware-cmd work?

/opt/vmwarecli/bin/vmware-cmd --server vcenter.domain.com -l -h esxhost1 --username vcenteruser@domain.com --password 'somethingsecure'

And that’s a no-go. It’ll hang for 5 minutes or so before erroring. I can tell 20 seconds in that this is the case, but for certainty I have to wait for the timeout before I can continue. What I get is even more interesting:

SOAP request error - possibly a protocol issue: <?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"
<GIGANTIC BLOCK OF XML>

Well, what the hell: four screens full of solid XML. Looking it over, I see snippets like this:

<summary>ISO [Datastore 1] ISOs/SW_DVD9_Windows_Svr_Std_and_DataCtr_2012_R2_64Bit_English_-4_MLF_X19-82891.ISO</summary>

Which means it’s at least getting through.

So I google for that error and find this page, which indicates that Net::HTTP needs to be an older version, just like LWP. The thing is, I’m not intentionally installing the perl-Net-HTTP RPM; it’s installed by morgnagplug along with perl-XML-LibXML, perl-XML-SAX and perl-libwww-perl… so I guess I uninstall those four RPMs and rerun the installer? This is getting confusing. I suspect I should reimage this machine without morgnagplug and see if that makes things easier. I’ll save that for the next attempt.

yum remove perl-Net-HTTP perl-XML-LibXML perl-XML-SAX perl-libwww-perl
./vmware-install.pl --prefix=/opt/vmwarecli EULA_AGREED=yes
<...>
CPAN not able to install following Perl modules on the system. These must be 
installed manually for use by vSphere CLI:

XML::LibXML::Common 0.13 or newer 
XML::LibXML 1.63 or newer

*Sigh*

cpan -i XML::LibXML
<...>
Checking for ability to link against libxml2...libxml2, zlib, and/or the Math library (-lm) have not been found.
Try setting LIBS and INC values on the command line
Or get libxml2 from
 http://xmlsoft.org/
If you install via RPMs, make sure you also install the -devel
RPMs, as this is where the headers (.h files) are.

Also, you may try to run perl Makefile.PL with the DEBUG=1 parameter
to see the exact reason why the detection of libxml2 installation
failed or why Makefile.PL was not able to compile a test program.
No 'Makefile' created SHLOMIF/XML-LibXML-2.0121.tar.gz
<...>

If I read that correctly, cpan won’t build it because the libxml2-devel RPM is not installed.

yum install libxml2-devel.x86_64
cpan -i XML::LibXML
<...>
Warning: prerequisite XML::SAX 0.11 not found.
JSON::PP 2.27103 is not available
 at /usr/local/share/perl5/CPAN/Meta/Converter.pm line 22.
 at /usr/local/share/perl5/ExtUtils/MM_Any.pm line 831.
JSON::PP 2.27103 is not available
 at /usr/local/share/perl5/CPAN/Meta/Converter.pm line 22.
Warning: No success on command[/usr/bin/perl Makefile.PL]
 SHLOMIF/XML-LibXML-2.0121.tar.gz
 /usr/bin/perl Makefile.PL -- NOT OK
<...>

Dammit. Looks like I’m going down the CPAN rabbit hole.

cpan -i JSON::PP
cpan -i XML::SAX
<...>
t/21saxini.t ..... Can't locate Fatal.pm in @INC (@INC contains: /root/.cpan/build/XML-SAX-0.99-OJOmqR/blib/lib /root/.cpan/build/XML-SAX-0.99-OJOmqR/blib/arch /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at t/21saxini.t line 6.
BEGIN failed--compilation aborted at t/21saxini.t line 6.
t/21saxini.t ..... Dubious, test returned 2 (wstat 512, 0x200)
No subtests run 
t/40cdata.t ...... ok 
t/42entities.t ... ok 
t/99cleanup.t .... ok 

Test Summary Report
-------------------
t/21saxini.t (Wstat: 512 Tests: 0 Failed: 0)
 Non-zero exit status: 2
 Parse errors: No plan found in TAP output
Files=14, Tests=98, 2 wallclock secs ( 0.11 usr 0.02 sys + 2.07 cusr 0.16 csys = 2.36 CPU)
Result: FAIL
Failed 1/14 test programs. 0/98 subtests failed.
make: *** [test_dynamic] Error 255
 GRANTM/XML-SAX-0.99.tar.gz
 /bin/make test -- NOT OK
//hint// to see the cpan-testers results for installing this module, try:
 reports GRANTM/XML-SAX-0.99.tar.gz
Running make install
 make test had returned bad status, won't install without force

what? Fine.

cpan -i Fatal
cpan -i XML::SAX
cpan -i XML::LibXML

FINALLY.

ok… where was I? Oh yes.

./vmware-install.pl --prefix=/opt/vmwarecli EULA_AGREED=yes

It runs and complains about MIME::Base64 and Socket6 again.

FINGERS CROSSED…

/opt/vmwarecli/bin/vmware-cmd --server vcenter.domain.com -l -h esxhost1 --username vcenteruser@domain.com --password 'somethingsecure'
SOAP request error - possibly a protocol issue: <?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"
<...>

and it takes forever again… not a good sign… aaand another giant block of XML.
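Since the hang is obvious within seconds but the command takes minutes to give up on its own, one trick worth noting is capping the wait with coreutils `timeout`. A sketch, with `sleep` standing in for the hung vmware-cmd and an arbitrary 2-second budget:

```shell
# `timeout` kills the command when the limit expires and exits with
# status 124, so a hang is detectable without waiting out the full
# SOAP timeout. `sleep 10` stands in for a hung vmware-cmd here.
timeout 2 sleep 10
echo "exit status: $?"
```

Against the real command that would look like `timeout 30 /opt/vmwarecli/bin/vmware-cmd …`; an exit status of 124 means “it hung again” after 30 seconds instead of five minutes.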

Searching again finds this page.

Again pointing out that Net::HTTP is the source of the problem… but I uninstalled the bad version and let the installer install the correct one, right?

 perl -MNet::HTTP -le 'print $Net::HTTP::VERSION'
6.07
perl -e 'use LWP; print LWP->VERSION."\n"'
6.13

WAT?

OK… so let’s install the specific friggin’ versions manually. The installer mentions LWP 5.837:

cpan -i GAAS/libwww-perl-5.837.tar.gz
perl -MNet::HTTP -le 'print $Net::HTTP::VERSION'
5.834
perl -e 'use LWP; print LWP->VERSION."\n"'
5.837

OK, seventeenth try is a charm…

/opt/vmwarecli/bin/vmware-cmd --server vcenter.domain.com -l -h esxhost1 --username vcenteruser@domain.com --password 'somethingsecure'
/vmfs/volumes/54aead0a-9231b37c-90a6-b82a72d4d796/hostname1/hostname1.vmx
/vmfs/volumes/54aead24-7c23cb62-980e-b82a72d4d796/hostname2/hostname2.vmx
<...>

WOOOOOOOOOOOOO. IT FINALLY WORKS!

… so you remember how we did that, right?

It’s SOOOO tempting to throw my hands up, say “done,” and walk away, but that doesn’t meet our first requirement: reproducible from Ansible.

Remember, you can’t use *cpan* from Ansible. Sure, you can use cpanm, but not cpan.
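For reference, an untested sketch of what that could look like as a playbook task, assuming cpanm itself is already present on the target and using Ansible’s cpanm module:

```yaml
# Untested sketch: drive the module installs through cpanm so the whole
# procedure stays expressible as Ansible tasks (cpan itself can't be).
- name: Install Perl modules needed by vSphere CLI
  cpanm:
    name: "{{ item }}"
  with_items:
    - JSON::PP
    - Fatal
    - Env
    - Class::MethodMaker
```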

 

Now that I roughly know what I need, I’m going to go back to the raw snapshot, capture the state of all my Perl module directories, reinstall vmware-cli, and then bundle them all into a single RPM with FPM. Why? Because CPAN will no longer be needed at that point, and I will be able to rebuild that state much more easily.

Test 3: Clean Slate and FPM

A clean slate sucks because I don’t have sudo set up, along with a slew of other useful things. Oh well. First things first: let’s get a list of all of our current Perl modules.

perl -e 'use doesntexist;'
Can't locate doesntexist.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at -e line 1.
BEGIN failed--compilation aborted at -e line 
find /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 >/tmp/pristine.list

Some of those dirs don’t even exist yet, but it’s still important to track them.

I copy the vmware folder back over and begin installing all of my prereqs. At this point it’s only a few things:

yum install openssl-devel
yum install libuuid-devel perl-YAML perl-Devel-CheckLib gcc perl-CPAN libxml2-devel.x86_64

Next we’re going to run the installation script. I’m pretty sure it’ll fail, but at least it’ll configure CPAN the way it needs it, rather than my bumbling through it as on the first run.

./vmware-install.pl --prefix=/opt/vmwarecli EULA_AGREED=yes
<...>
Do you want to install precompiled Perl modules for RHEL?
[yes] no
<...>

Can't locate Module/Build.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .).
BEGIN failed--compilation aborted.
Can't locate LWP.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .).
BEGIN failed--compilation aborted.
Below mentioned modules with their version needed to be installed,
these modules are available in your system but vCLI need specific 
version to run properly

Module: ExtUtils::MakeMaker, Version: 6.96 
Module: Module::Build, Version: 0.4205 
Module: LWP, Version: 5.837 
Do you want to continue? (yes/no) yes
<...>

CPAN not able to install following Perl modules on the system. These must be 
installed manually for use by vSphere CLI:

Class::MethodMaker 2.10 or newer

OK, now that CPAN is installed, I should be able to simply install these three:

cpan -i JSON::PP
cpan -i Fatal
cpan -i Class::MethodMaker
<...>
PERL_DL_NONLAZY=1 /usr/bin/perl "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/array.t .............. Can't locate Env.pm in @INC (@INC contains: /root/.cpan/build/Class-MethodMaker-2.24-hloQDv/t /root/.cpan/build/Class-MethodMaker-2.24-hloQDv/blib/lib /root/.cpan/build/Class-MethodMaker-2.24-hloQDv/blib/arch /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /root/.cpan/build/Class-MethodMaker-2.24-hloQDv/t/test.pm line 135.
BEGIN failed--compilation aborted at /root/.cpan/build/Class-MethodMaker-2.24-hloQDv/t/test.pm line 135.
Compilation failed in require at t/array.t line 22.
BEGIN failed--compilation aborted at t/array.t line 22.
t/array.t .............. Dubious, test returned 2 (wstat 512, 0x200)
<...>
Files=8, Tests=0, 1 wallclock secs ( 0.04 usr 0.00 sys + 0.44 cusr 0.06 csys = 0.54 CPU)
Result: FAIL
Failed 8/8 test programs. 0/0 subtests failed.
make: *** [test_dynamic] Error 2
 SCHWIGON/class-methodmaker/Class-MethodMaker-2.24.tar.gz
 /bin/make test -- NOT OK
//hint// to see the cpan-testers results for installing this module, try:
 reports SCHWIGON/class-methodmaker/Class-MethodMaker-2.24.tar.gz
Running make install
 make test had returned bad status, won't install without force


Oh FFS. Maybe there’s an Env package I’m unfamiliar with that’s needed? I thought that was part of Perl core… it is, but Red Hat splits many core modules into separate RPMs, and this one isn’t installed.

cpan -i Env
cpan -i Class::MethodMaker

and rerun the installer…

./vmware-install.pl --prefix=/opt/vmwarecli EULA_AGREED=yes
<...>
Do you want to install precompiled Perl modules for RHEL?
[yes] no
<...>

SUCCESS.

/opt/vmwarecli/bin/vmware-cmd --server vcenter.domain.com -l -h esxhost1 --username vcenteruser@domain.com --password 'somethingsecure'
/vmfs/volumes/54eaad0a-923ab37c-90b6-b82a72d4d796/hostname1/hostname1.vmx
/vmfs/volumes/54eaad24-7c233c62-970e-b82a72d4d796/hostname2/hostname2.vmx
<...>

HURRAY IT WORKS!

Make a quick snapshot to give us a recovery point.

Cleaning up

While there are many things we need to clean up, there’s one thing I want to get out of the way quickly: removing some of the cruft we installed to get CPAN to work. Unlike Debian’s apt, yum doesn’t notify you when dependencies are no longer needed, so I browsed /var/log/yum.log and reviewed all of the packages installed while we were doing this. Here’s the list I came up with:

yum remove libcom_err-devel libsepol-devel zlib-devel keyutils-libs-devel pcre-devel libselinux-devel libverto-devel krb5-devel openssl-devel mpfr libmpc cpp kernel-headers glibc-headers glibc-devel libdb-devel perl-local-lib perl-Digest perl-Digest-SHA gdbm-devel perl-Test-Harness pyparsing systemtap-sdt-devel perl-ExtUtils-Manifest perl-ExtUtils-Install perl-ExtUtils-MakeMaker perl-ExtUtils-ParseXS perl-devel perl-CPAN gcc perl-YAML libuuid-devel perl-Devel-CheckLib xz-devel libxml2-devel
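Browsing the log by hand works fine; for the record, the same list can be pulled out mechanically, since yum records each install as `Installed: <package>`. A sketch using a stand-in excerpt (on the real box the file is /var/log/yum.log and you’d filter by timestamp):

```shell
# Fake two-line excerpt in yum.log format; the awk prints the last
# field (the package NVRA) from each "Installed:" line.
cat <<'EOF' > /tmp/yum.log.demo
May 20 12:00:01 Installed: gcc-4.8.5-4.el7.x86_64
May 20 12:00:05 Installed: libxml2-devel-2.9.1-5.el7.x86_64
EOF
awk '/Installed:/ {print $NF}' /tmp/yum.log.demo
```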

After removing those, we verify that everything still works:

/opt/vmwarecli/bin/vmware-cmd --server vcenter.domain.com -l -h esxhost1 --username vcenteruser@domain.com --password 'somethingsecure'
/vmfs/volumes/54eaad0a-923ab37c-90b6-b82a72d4d796/hostname1/hostname1.vmx
/vmfs/volumes/54eaad24-7c233c62-970e-b82a72d4d796/hostname2/hostname2.vmx
<...>

Excellent: everything seems to be working.

Let’s take one last snapshot of our Perl modules for later comparison to pristine.list:

find /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 >/tmp/final.list
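The comparison itself will be a one-liner: `comm -13` prints only the lines unique to the second file, i.e. everything the installer and CPAN added. (`comm` requires sorted input, so run both find outputs through sort first.) A sketch with tiny stand-in files:

```shell
# Stand-ins for pristine.list and final.list (already sorted here; the
# real find output should be sorted before comparing).
printf 'a\nb\n'    > /tmp/pristine.demo
printf 'a\nb\nc\n' > /tmp/final.demo
# Lines only in the second file = files added since the pristine snapshot.
comm -13 /tmp/pristine.demo /tmp/final.demo
```

That list of new paths is exactly what would get handed to FPM to build the bundle RPM.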

 

We now have a functioning vmware-cmd, but it STILL doesn’t meet our primary requirement: it must work via Ansible.

 

We’ll pick up the rest in Part 2.

 

Warren Myers : a simple restructuring of elections

May 19, 2015 03:55 PM

In close follow-up with my desire to see political parties abolished, we also need to rethink how voting is done.

In the United States, you can only vote for a single candidate for most positions (town councils are an exception).

You do not have the opportunity to say anything more than a binary yes|no to a given person for a given office.

You can vote for Bob for mayor. But not voting for Mary, Quentin, and Zoe doesn’t really say anything about what you think of them – just that you liked Bob the best.

And there is the problem. There is an explicit elimination of relative preference when voting: all you can do is vote “yes” for a candidate.

That is very different from voting “no” against a candidate.

What should happen instead is you should vote for your favorite candidates in order of preference, so Bob is number 1, Zoe number 2, Quentin number 3, and Mary number 4.

Then when I vote, and rank them Mary 1, Zoe 2, Quentin 3, and Bob 4, we can get a picture of the relative preference of any given candidate running for the office.

Do this across all voters in a given election, and assign the winner to the person with the lowest score (in the numbering shown above – flip the values to assign the winner to the person with the highest score).

Perhaps even look at the top 3 or 4 after gestalt ranking, then vote again to determine the winner (this would be ideal for a Primary-then-General Election method).

What research shows is that while you and I may wildly disagree on “best” and “worst”, we’ll probably be pretty close on who we think is “good enough”.

In the Bob-Mary-Quentin-Zoe example with two voters, Mary & Bob both got 5 points. Quentin received 6, but Zoe earned 4.

The two voters, therefore, think Zoe is “good enough”, even though they part ways on “best” and “worst” (Bob & Mary).
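The scoring is mechanical; here is a sketch that tallies the two sample ballots from above (each line is “candidate rank”, rank 1 = most preferred, lowest total wins):

```shell
# Sum each candidate's ranks across both ballots and sort ascending,
# so the consensus "good enough" candidate floats to the top.
printf 'Bob 1\nZoe 2\nQuentin 3\nMary 4\nMary 1\nZoe 2\nQuentin 3\nBob 4\n' |
  awk '{sum[$1] += $2} END {for (c in sum) print c, sum[c]}' |
  sort -k2n
```

Zoe’s total of 4 sorts to the top, matching the walkthrough.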

Combine such a ranking system with a fully-open Primary election (i.e., you rank every candidate regardless of “party”), and we would see much more representative-of-the-citizenry candidates appear in the final Election.

Scott Schulz : The Werewolf of Bamberg

May 17, 2015 12:05 PM

I was pleasantly surprised to receive an email from Amazon letting me know that Oliver Pötzsch will be releasing the fifth book in his Hangman's Daughter series. Currently titled The Werewolf of Bamberg, Amazon lists the tentative release date as 29 September 2015, so it should be here in time for the fall reading pile. Hopefully the Audible version won't be too far behind.

If you haven't read any of the previous four books in the series, and if you are a fan of historical fiction / mysteries set in 1600's Germany, then this is for you.

Werewolf of Bamberg

Warren Myers : political parties should be abolished

May 16, 2015 03:47 PM

John Adams and George Washington, among many others, both warned of the dangers of political parties.

There is nothing which I dread so much as a division of the republic into two great parties, each arranged under its leader, and concerting measures in opposition to each other. This, in my humble apprehension, is to be dreaded as the greatest political evil under our Constitution. –John Adams

And from George Washington:

The alternate domination of one faction over another, sharpened by the spirit of revenge, natural to party dissension, which in different ages and countries has perpetrated the most horrid enormities, is itself a frightful despotism. But this leads at length to a more formal and permanent despotism. The disorders and miseries, which result, gradually incline the minds of men to seek security and repose in the absolute power of an individual; and sooner or later the chief of some prevailing faction, more able or more fortunate than his competitors, turns this disposition to the purposes of his own elevation, on the ruins of Public Liberty

Without looking forward to an extremity of this kind, (which nevertheless ought not to be entirely out of sight,) the common and continual mischiefs of the spirit of party are sufficient to make it the interest and duty of a wise people to discourage and restrain it.

It serves always to distract the Public Councils, and enfeeble the Public Administration. It agitates the Community with ill-founded jealousies and false alarms; kindles the animosity of one part against another, foments occasionally riot and insurrection. It opens the door to foreign influence and corruption, which find a facilitated access to the government itself through the channels of party passions. Thus the policy and the will of one country are subjected to the policy and will of another.

There is an opinion, that parties in free countries are useful checks upon the administration of the Government, and serve to keep alive the spirit of Liberty. This within certain limits is probably true; and in Governments of a Monarchical cast, Patriotism may look with indulgence, if not with favor, upon the spirit of party. But in those of the popular character, in Governments purely elective, it is a spirit not to be encouraged. From their natural tendency, it is certain there will always be enough of that spirit for every salutary purpose. And, there being constant danger of excess, the effort ought to be, by force of public opinion, to mitigate and assuage it. A fire not to be quenched, it demands a uniform vigilance to prevent its bursting into a flame, lest, instead of warming, it should consume.

And yet for the last 200+ years, not only have we had a party-based system, but even with the American public supposedly interested in viable third parties, of which there are myriad, none have come close to appearing in a major election since 1968, when George Wallace won 46 electoral votes and just shy of 10,000,000 popular votes (Nixon and Humphrey won 301 & 191 electoral votes respectively, and 31.7m & 30.9m popular votes respectively). The major parties have enacted all kinds of de facto “rules” to prevent competition.

That’s nearly 50 years since a third-party candidate won a state in a Presidential election.

No wonder candidates declare for races affiliated with one of the Big 2 instead of with whichever group they actually feel more closely aligned.

“Politics exists as soon as two people are in the same room,” was cleverly told to me by a former colleague at a highly-politicized company. And it’s true. As soon as you have two people together, disagreement arises. Priorities are different. Interests are different. Parties can help group together folks with more-or-less similar ideas, but they tend to either be so tightly- or loosely-defined that affiliating with the “party” either makes you look like a kook, or says nothing at all about you.

We all know there are no perfect candidates (though I’m awful darn close!) – and while aligning with a party might tell you something about the person, it often says little at all.

So I propose to make “official” party affiliation a thing of the past. Remove barriers to entry for candidates. Remove party affiliations when registering to vote.

After all, we’re all just citizens. We shouldn’t be judged by party affiliation.

Mark Turner : Another canceled credit card

May 16, 2015 02:06 AM

We got an email from Chase.com earlier this evening, alerting us to fraudulent charges on our credit card. Someone has apparently programmed our credit card number onto another card and gone on a shopping spree.

It began with a swipe at a food vending machine owned by Berkshire Food, Inc. somewhere in Connecticut. Berkshire is in Danbury, but there’s no way of knowing whether the transaction happened there or the payment was merely processed there. The first transaction was $1.60. I’ve heard that thieves will usually start off their spree with a small amount and increase as they gain confidence that the card works.

Our thieves then began to get hungry, so they stopped into L.C. Chen’s, a Chinese restaurant in Fairfield, CT, at 6:19 PM. The two women bought Pad Thai and a drink, one signing the receipt as “Vanessa Smith,” according to Linda, the nice lady I spoke with.

At this point, the fraud party continued up to Bridgeport, CT, and to Modell’s, a sporting goods store. Two transactions for $200 were declined there, Chase’s fraud alert was triggered, and the card was canceled soon afterward. Thus the spree ended. I’ve got a call in to Modell’s security to get their information, including surveillance video, if any.

How can someone spend money with a credit card they do not have? It’s amazingly easy, actually. The magnetic stripe on the back of the card is just like a cassette tape, easily re-recorded. Someone, somewhere either got my credit card number or randomly generated it, then applied it to another card. Perhaps one belonging to Vanessa Smith. At the Chinese restaurant, the thief never had to provide the three-digit security code since the bank (wrongly) assumes that swiping a card in person is somehow more secure than using one online.
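“Randomly generated” deserves a footnote: to look plausible at all, a card number has to satisfy the Luhn checksum, and that check is simple enough that generating valid-looking numbers is trivial. A toy validator (using the well-known 4111111111111111 test number, not a real account):

```shell
# Luhn check: starting from the rightmost digit, double every second
# digit, subtract 9 from any result over 9, sum everything; a valid
# number's sum is divisible by 10.
echo 4111111111111111 | awk '{
  sum = 0; n = length($0)
  for (i = 0; i < n; i++) {
    x = substr($0, n - i, 1) + 0   # i-th digit from the right
    if (i % 2 == 1) { x *= 2; if (x > 9) x -= 9 }
    sum += x
  }
  if (sum % 10 == 0) print "valid checksum"; else print "invalid checksum"
}'
```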

The fact that Chase has begun issuing credit cards with chips in them is a step in the right direction, but it does not help unless merchants upgrade their payment machines to read the chips.

What’s really needed is to require PINs with cards, the way the Europeans secure their card transactions. I ran into this on my last business trip to the Netherlands, where I could not use my credit card because I couldn’t remember my PIN. Banks think their customers are too dumb to use card + PIN, so it seems we’re stuck with getting new cards every 6-9 months as they get compromised. Sheesh.

Mark Turner : Keeping Secrets — STANFORD magazine — Medium

May 15, 2015 12:57 PM

WHAT IF your research could help solve a looming national problem, but government officials thought publishing it would be tantamount to treason? A Stanford professor and his graduate students found themselves in that situation 37 years ago, when their visionary work on computer privacy issues ran afoul of the National Security Agency. At the time, knowledge of how to encrypt and decrypt information was the domain of government; the NSA feared that making the secrets of cryptography public would severely hamper intelligence operations. But as the researchers saw it, society’s growing dependence on computers meant that the private sector would also need effective measures to safeguard information. Both sides’ concerns proved prescient; their conflict foreshadowed what would become a universal tug-of-war between privacy-conscious technologists and security-conscious government officials.

Source: Keeping Secrets — STANFORD magazine — Medium

Mark Turner : Underwater Test-fire of Korean-style Powerful Strategic Submarine Ballistic Missile | 38 North: Informed Analysis of North Korea

May 14, 2015 01:06 PM

The imagery and information released by KCNA would lead an observer to conclude that this recent test was conducted from the SINPO-class experimental ballistic missile submarine based at the Sinpo South Shipyard. This, however, may be incorrect … It would appear to be more reasonably in line with assessed North Korean capabilities, however, that the test launch was conducted from a submerged barge—possibly the one seen at the Sinpo South Naval Shipyard.

Source: Underwater Test-fire of Korean-style Powerful Strategic Submarine Ballistic Missile | 38 North: Informed Analysis of North Korea

Tarus Balog : OpenNMS 16 Released

May 13, 2015 08:53 PM

In keeping with our new Horizon release policy of a new major release every three to four months, we are happy to announce the availability of OpenNMS 16, codenamed Daredevil.

Most of the changes in OpenNMS 16 are under the covers. We are trying to streamline the code and thus have removed both capsd (which was deprecated) and linkd (which was replaced by enhanced linkd). This version also requires Java 8.

The main visible feature is that the Dashboard has been rewritten and should be a considerable improvement to those who use it.

A nearly complete list of changes is as follows:

Bug

  • [NMS-863] – "24hr Avail" went negative
  • [NMS-2213] – SLM categories totals are not being updated during runtime
  • [NMS-5631] – Deadlock inside RTC's DataManager during shutdown
  • [NMS-6100] – The Stp interface box page throws an exception
  • [NMS-6158] – When displaying Linkd link info on node, ifAlias data in interface columns missing opening quote
  • [NMS-6536] – NRTG is throwing ConcurrentModificationException
  • [NMS-6567] – IfIndex not updated in ipinterface table on change
  • [NMS-6568] – Requisition UI has inconsistent field labels for building the provisioning requisition
  • [NMS-6583] – linkd can't make use of learned MAC addresses on ports to determine path mapping
  • [NMS-6593] – sort order interfaces on node page
  • [NMS-6802] – EnLinkD IS-IS Link discovery fails on Cisco routers
  • [NMS-6902] – Geomaps are quite slow
  • [NMS-6905] – Remove Link Status Menu Item
  • [NMS-6912] – lldpchassisid not properly decoded for DragonWave in Enhanced Linkd Lldp node discovery
  • [NMS-6972] – test failure: org.opennms.netmgt.provision.detector.SmtpDetectorTest
  • [NMS-6974] – Link Status Provider is still an option for older Linkd Topology Provider
  • [NMS-7029] – Java 8 build fails some tests
  • [NMS-7089] – MAC 00:00:00:00:00:00 should be treated as null
  • [NMS-7090] – IpNetToMedia Table: Manage duplicated ip address
  • [NMS-7096] – Toggle icons on Node List Page are too small on resolutions greater than Full HD
  • [NMS-7148] – Geo-Maps running on a server without internet connection breaks the UI for valid nodes.
  • [NMS-7175] – Alarms dashlet: "ago" and node label columns can overlap when tiled
  • [NMS-7183] – LLdp link discovery: lldpRemLocalPortNum value 0
  • [NMS-7184] – LldpHelper decode exception
  • [NMS-7192] – Remove the logging directories from the DEB package
  • [NMS-7207] – Switch direction to zoom in and out in the topology
  • [NMS-7251] – Change filterfavorites.filter to 'text' SQL data type
  • [NMS-7294] – Enhanced Linkd inserts wrong Local Port bridge number
  • [NMS-7320] – Java environment in Debian has to be configured twice
  • [NMS-7337] – Database Report "Response time by node" Not Working.
  • [NMS-7358] – IllegalArgumentException on ipnettomediatable
  • [NMS-7362] – No CDP neighbors on a topological map
  • [NMS-7372] – ACLs ineffective in geographic map
  • [NMS-7379] – Unable to display performance data from Host Resource processor table
  • [NMS-7400] – KSC Reports with non-existing resources generate exceptions on the WebUI
  • [NMS-7410] – Title information on the node detail page are confusing
  • [NMS-7412] – Double footer in resource graph page
  • [NMS-7432] – Normalize the HTTP Host Header with the new HttpClientWrapper
  • [NMS-7434] – Disabling Notifd crashes webUI
  • [NMS-7456] – JRB to RRD converter no longer compiles
  • [NMS-7466] – Reload Collectd and Pollerd Configuration without restart OpenNMS
  • [NMS-7467] – Path Outage severity is not indicated in Web UI
  • [NMS-7481] – DrayTek Vigor2820 Series agent bug: zero-length IpAddress instance ID
  • [NMS-7485] – queued creates its own category for loggings
  • [NMS-7518] – SNMP version syntax inconsistent across components
  • [NMS-7531] – Surveillance View configuration is no longer dynamic
  • [NMS-7533] – EventconfFactoryTest fails with no events eventconf.xml
  • [NMS-7537] – Vaadin SV on index page not fitting to view
  • [NMS-7543] – Vaadin:Dashboard SV dashlet no longer indicate context of other dashlets
  • [NMS-7549] – NPE on admin/notification/noticeWizard/chooseUeis.jsp
  • [NMS-7554] – Smoke test is failing with the new dashboard
  • [NMS-7563] – gui and maps does not display lldp and cdp links
  • [NMS-7570] – Dashboard Auto-Refresh runs JVM out of memory (Full-GC)
  • [NMS-7576] – The XSD for the SNMP Hardware Inventory Provisioning Adapter is not included on the RPM/DEB packages.
  • [NMS-7577] – Search by foreignSource or severityLabel doesn't work on Geo Maps
  • [NMS-7590] – List of service names in the requisition editor should be pulled from the poller conifguration instead of capsd
  • [NMS-7597] – Tog depth for VmwareMonitor and VmwareCimMonitor is wront
  • [NMS-7598] – Varbinddecodes are being ignored on Notifications
  • [NMS-7603] – Some parameters logged out of order since slf4j conversion
  • [NMS-7604] – Replace PermGen VM arguments with Metaspace equivalents
  • [NMS-7610] – Remote Poller throws ClassNotFound Exception when loading config
  • [NMS-7615] – RPM dependency for JDK 8 is wrong
  • [NMS-7616] – Compass can't make a POST request from FILE URLs in some cases
  • [NMS-7617] – Test failure: org.opennms.netmgt.provision.service.Nms5414Test
  • [NMS-7620] – Scrolling issue
  • [NMS-7622] – Memory leak in RTC
  • [NMS-7626] – The PSM doesn't work with IPv6 addresses if the ${ipaddr} placeholder is used on host or virtual-host
  • [NMS-7629] – Timeline image links are not working with services containing spaces
  • [NMS-7630] – Database reports don't run in 16
  • [NMS-7631] – Match event params for auto-ack of Notification
  • [NMS-7633] – include-url doesn't work on poller packages
  • [NMS-7634] – ClassCastException in BSFNotificationStrategy
  • [NMS-7636] – Node resources are deleted when provisiond aborts a scan
  • [NMS-7637] – Default date width in Database Reports is too small
  • [NMS-7640] – Test failure: testImportAddrThenChangeAddr
  • [NMS-7641] – The IP Interface page is blank.
  • [NMS-7642] – The global variable org.opennms.rrd.queuing.category is set to OpenNMS.Queued and should be queued
  • [NMS-7643] – Test failure: testSerialFailover
  • [NMS-7644] – Fixing Logging Prefix/Category on several classes
  • [NMS-7645] – Test failure: tryStatus
  • [NMS-7650] – XML data collection with HTTP POST requests is not working
  • [NMS-7651] – Improving exception handling on the XML Collector
  • [NMS-7657] – Vaadin surveillance view configuration doesn't work with Firefox
  • [NMS-7658] – Error in Debian/Ubuntu init script

Enhancement

  • [NMS-1504] – Add option to turn off snmp v3 passphrase clear text in log files
  • [NMS-2995] – Trapd is not able to process SNMPv3 INFORMs
  • [NMS-4619] – XMPP: Make SASL mechanism configurable
  • [NMS-6442] – Set vertex to focal point
  • [NMS-6581] – Drools Update to 6.0.1 Final
  • [NMS-6963] – PATCH — Bridgewave Wireless Bridge
  • [NMS-7146] – Move RTC over to Spring and Hibernate
  • [NMS-7229] – Be able to set the rescanExisting flag when defining a scheduler task on provisiond-configuration.xml
  • [NMS-7310] – add Siemens HiPath 3000 event files
  • [NMS-7311] – add Siemens HiPath 3000 HG1500 event files
  • [NMS-7312] – add Siemens HiPath 8000 / OpenScapeVoice event files
  • [NMS-7318] – Move notification status indicator to header
  • [NMS-7424] – Add pathOutageEnabled="false" to poller-configuration.xml by default
  • [NMS-7441] – Change varchar to text for CDP and LLDP tables
  • [NMS-7453] – Update Smack API
  • [NMS-7461] – Update asciidoctor maven plugin from 1.5.0 to 1.5.2
  • [NMS-7473] – Remove Capsd from OpenNMS
  • [NMS-7474] – Modify WebDetector/Monitor/Plugin/Client to expose ability to enable/disable certificate validation
  • [NMS-7476] – Add support for gzip compression on REST APIs
  • [NMS-7479] – Allow RRD data to be retrieved via REST
  • [NMS-7480] – Make resource data accessible through ReST
  • [NMS-7505] – The DefaultResourceDao loads all child resources when retrieving a specific resource by id
  • [NMS-7528] – Use the default threshold definition as a template when adding TriggeredUEI/RearmedUEI on thresholds through the WebUI
  • [NMS-7579] – Remove unnecessary output from opennms-doc module
  • [NMS-7593] – BSFMonitor creates a new BSFManager every poll which makes caching script engines ineffective
  • [NMS-7595] – SNMP interface RRD migrator should create and clean up backups interface-wise
  • [NMS-7609] – Create a ReST API to expose the available detectors/policies/categories/assets/services required to manipulate foreign sources
  • [NMS-7612] – Need upgrade task for collection strategy classes
  • [NMS-7619] – Create opennms.properties option to choose between new and old dashboard
  • [NMS-7632] – Deprecation of LinkD

Story

  • [NMS-7299] – Allow user to create and modify surveillance views
  • [NMS-7303] – Migrate Surveillance view GWT UI component to Vaadin
  • [NMS-7304] – Migrate Alarms GWT UI component to Vaadin
  • [NMS-7305] – Migrate Notifications GWT UI component to Vaadin
  • [NMS-7306] – Migrate Node Status component from GWT to Vaadin
  • [NMS-7307] – Migrate Resource Graph Viewer component from GWT to Vaadin
  • [NMS-7323] – Update user documentation
  • [NMS-7325] – Allow user to select surveillance view in the Dashboard
  • [NMS-7326] – Remove the GWT dashboard from the code base
  • [NMS-7429] – Remove "report-category" attribute
  • [NMS-7430] – Add surveillance view's name in the left header cell
  • [NMS-7431] – Add an option to disable "refreshing"
  • [NMS-7469] – Add preview window in config UI
  • [NMS-7489] – Icons for alarms and notifications
  • [NMS-7490] – Modal window to show node, alarm and notification details
  • [NMS-7491] – Admin configuration panel shows dashboard instead of surveillance view
  • [NMS-7492] – Allow to configure refresh time per surveillance view
  • [NMS-7530] – Rename the surveillance config panel link in Admin menu
  • [NMS-7540] – Dashboard Dashlet: Refresh indicator
  • [NMS-7542] – Vaadin Dashboard: Alarm Dashlet should have severity sorting by default

Tarus Balog : Dev-Jam 2015 – Magical Number 10

May 13, 2015 03:59 PM

We are just about a month away from one of my favorite weeks of the year: The OpenNMS Developer’s Jamboree, or Dev-Jam.

This is the tenth one we’ve had, which is hard for me to believe. I think it is a testament to the community around the OpenNMS Project that we can have these year after year (and not a testament to the fact that I’m quickly becoming an “old guy”).

We have people from all over the world who contribute to OpenNMS, and for one week out of the year we get together to hack and hang out. It was an “unconference” before such things were popular.

The first one was held at the Pittsboro OpenNMS HQ in 2005, but we quickly learned that we needed a bigger venue. The requirements for a successful Dev-Jam are as follows:

  • A room big enough to hold everyone
  • Fast Internet
  • A place for everyone to sleep
  • Food

We found a great home for Dev-Jam at the University of Minnesota’s Twin Cities campus in Minneapolis, specifically in a dorm called Yudof Hall. We lease the downstairs “club room”, a large rectangular room that is big enough for our crowd. On one side is a kitchen and on the other side is an area with a television and couches. In the middle we set up tables for everyone to work.

We also get rooms in the same dorm, so people can come and go as they please. Some people like to get up in the morning. Others stay up late and don’t come down until noon. The campus offers a number of places to eat, and in the evening we can walk to a restaurant for dinner and drinks. We try to see a Twins game while we are there as well as take a trip to Mall of America.

This will be the first year that access to the light rail system is available from campus, which will make getting around so much easier.

For those of you who haven’t spent a lot of time embedded with an open source project, you probably don’t understand how much fun an event like this can be, or why just writing about it makes me eager for June to arrive. Technically I’ll be at work, but it is unlike any other job I’ve ever had.

If you would like to come, we still have a few places left. Check out the Registration page for more information. Everyone is welcome, but be advised that this is a “code” heavy conference with little formal structure. For more casual OpenNMS users, there is the User’s Conference in September.

Hope to see you at Dev-Jam, and if not there, at the OUCE.

Mark Turner : Seymour M. Hersh · The Killing of Osama bin Laden · LRB 21 May 2015

May 13, 2015 12:13 AM

On Sunday, investigative journalist Seymour Hersh published an account of the bin Laden SEAL raid that differs markedly from the official account. Hersh insists that Pakistan knew of the raid and that the Obama administration’s account is a “lie.” Hersh’s reporting is now being called into question as he relies heavily on a single anonymous source.

I’ve been a fan of Hersh’s work, but these are extraordinary claims which demand convincing evidence. Unless Hersh can provide stronger sources I will have to wonder whether his account is trustworthy.

It’s been four years since a group of US Navy Seals assassinated Osama bin Laden in a night raid on a high-walled compound in Abbottabad, Pakistan. The killing was the high point of Obama’s first term, and a major factor in his re-election. The White House still maintains that the mission was an all-American affair, and that the senior generals of Pakistan’s army and Inter-Services Intelligence agency (ISI) were not told of the raid in advance. This is false, as are many other elements of the Obama administration’s account.

Source: Seymour M. Hersh · The Killing of Osama bin Laden · LRB 21 May 2015

Tarus Balog : Touchscreen Issues with OnePlus One Phone

May 12, 2015 06:58 PM

Last September I was able to purchase a OnePlus One phone, and my initial impressions were very positive.

Having owned it now for over six months, I can state that this is the best smartphone I’ve ever owned, a list which has included two iPhones, several Nexus devices, a couple of Samsung devices and an HTC One. It is fast, runs well, has a wonderful screen and is the right size for my hand.

Being a fan, I have followed the drama surrounding OnePlus and CyanogenMod, and I am very unhappy about the new OxygenOS being closed source. But still, I decided to upgrade to Lollipop (Cyanogenmod 12S) when it became available and that’s when I started to notice an issue with the touchscreen.

I play a game called Ingress, and within the application is a mini-game called “Glyph Hacking”. In the mini-game you are presented with a number of patterns on a grid, and you have to replicate the patterns, in order, in a certain amount of time. I really enjoy the game as a mental exercise, but I started noticing that as I was trying to draw the glyphs it would often just stop drawing or jump to the next glyph in the sequence. This was frustrating.

I found a thread that suggests a number of other people are having this issue with the phone and that it may be a software problem. I’m not so sure this is the case with my handset, because up until this last week it had been working fine (I had never seen the issue before). But just in case, I restored the phone to KitKat (CyanogenMod 11S) and the problem remained. All of the suggestions I’ve found on-line, from plugging the phone in to “ground” it to rebooting, haven’t helped.

Using a program called “Yet Another Multitouch Test” I was able to demonstrate that the screen is registering additional touches that I didn’t make, especially near the top of the screen. I’ve contacted OnePlus support so we’ll see what happens. Here is a video demonstrating the issue.

Warren Myers : pydio has clients now

May 12, 2015 02:10 PM

As an update to my recent how-to, I found out from the founder of Pydio that there are dedicated clients now. IOW, you don’t have to use just the WebUI.

I haven’t tried any of them yet, but it’s good to know they’re there now – it makes comparing Pydio and other tools like ownCloud easier.

Warren Myers : welcome, zebediah!

May 11, 2015 04:30 PM

We got to meet the latest addition to our family a few days ago, on the 5th. For the second time in under a year, we had the last-minute opportunity to adopt a baby boy. Last year we welcomed a 3.5-month-old, and this year we have a newborn.

He’s had some complications and has been in the NICU since a few hours after birth. However, he’s started to make some good progress, and while not out of the woods, he is on his way to being able to come home in, hopefully, a week.

Zebediah joins big brother Abijah, and brings our family from three to four.

Mark Turner : Jacksonville

May 11, 2015 12:09 PM

Waking up to a Florida sunrise on Amtrak's southbound Silver Star


Good morning, Jacksonville! I am passing through Jacksonville, Florida, now. Jacksonville is the largest city by population in Florida and the largest city by area in the contiguous United States.

This city holds a special place in my heart. Why, do you ask? Why would America’s most sprawling city captivate me? It’s the rich history of the city as well as the months I spent here in 2000, when I was working at NeTraverse.

I was working on a deal at AllTel, implementing a proof of concept of NeTraverse’s Win4Lin product. I stayed at a charming bed and breakfast within walking distance, owned by two characters (is there any other kind of BnB owner?). My hosts were an English professor of economics and a former Alabama beauty queen, an unlikely pairing. Yet they were so welcoming! I’ll always remember this home away from home.

My career seemed to be so full of promise those days. I was doing what I really loved to do: sales engineering, where I provided solutions to customers. My mentor and our Sales VP, Ed Stevens (or as we called him, “Fast Eddie”) visited occasionally and pumped us up with tales of how we were all going to be filthy rich. That didn’t work out, obviously, but it sure was a hell of a ride.

The train ride itself has been pretty smooth. I brought double hearing protection, an eyemask, and a blanket to better sleep and it mostly worked. I caught several hours of shuteye. My back would have preferred a seat that angled further back but all told it wasn’t so bad. Sure beats driving 13 hours!

The scenery is much more interesting now that the sun is up, so I’ll update this more as I go.

Mark Turner : Train happenings

May 10, 2015 07:59 PM

Groundbreaking of Raleigh Union Station


I raced out of work Friday morning to see the groundbreaking of Raleigh’s new Union Station. Mayor McFarlane, Gov. McCrory, NCDOT Secretary Tata, Rep. David Price, and Federal DOT and Amtrak officials were there to break ground on this new multi-modal station. Looking around the crowd of spectators, many of whom were sweating under the strong sun, I wondered how many of them had ever actually ridden Amtrak. I’d bet the closest most have come is the hundred yards to the tracks where the NCDOT’s version of Amtrak, the Piedmont, was right then pulling into Raleigh.

Fortunately up front I spied my friend Rob Rousseau, a train buff involved in the local railroad historical society. We chatted a bit about the meaningfulness of the new station and how it stacks up to renovations at other local stations, like Durham’s. The consensus is that Raleigh is in for a big improvement over the cramped station we have now.

I’m actually taking Amtrak’s Silver Star overnight tonight to attend the funeral of my aunt in Florida. She lived in Winter Park, Florida, which is just north of Orlando and a whopping 13 hour drive away from Raleigh. Fortunately for me, her home is less than three miles from the Winter Park station. The problem I have with Amtrak is that it typically doesn’t go everywhere you want it to, but it covers many East Coast destinations well. I’m happy not to be driving and hope I can catch some sleep on the train as I go. I’m angling to get a sleeper room but if that doesn’t work out I will still be more comfortable on Amtrak than I would riding in my car or someone else’s.

It’s weird packing for this trip since I’m traveling overnight. How do I dress for a train trip that takes me to morning? It’s the same conundrum I had when flying 13 hours to Australia, only in this case I’m staying in the same time zone, so that helps.

I’ll keep you posted on the journey.

Mark Turner : Text of Brian Dyson’s commencement speech at Georgia Tech, Sept 1991.

May 10, 2015 03:46 PM

This is the full text of the speech given by Brian Dyson, former CEO of Coca-Cola Enterprises, at Georgia Tech’s 172nd commencement on September 6, 1991, as reported in the Georgia Tech Whistle faculty newspaper. See my previous post to learn how I tracked this down.

Coca-Cola CEO’s Secret Formula For Success: Vision, Confidence And Luck

(Brian G. Dyson, president and chief executive officer of Coca-Cola Enterprises Inc., was the featured speaker at Georgia Tech’s 172nd commencement on Sept. 6.)

I think the ingredients for success, or as we would say at Coca-Cola, “the secret formula,” is a combination of three things: vision, knowing what you want to be when you grow up; confidence, knowing who you are; and luck, or what I would call being in the right place at the right time.

With those three ingredients and your Georgia Tech diploma, you have the formula for success. You have a first class education from a world class university, and I really congratulate you all on your achievement.

Georgia Tech is not just a school that has national leadership in many categories of scholarship and research. It is not just an institution that has gone from the most humble beginnings to great international recognition. It is not just the home of the 1990 football champions! It is all of those things and much more. Georgia Tech today is an inspiring realization of the American dream! Like my company, Coca-Cola, your school has expanded its influence from small beginnings on North Avenue to the farthest reaches of the globe, including being a future centerpiece for the 1996 Olympics and hopefully for the 1994 World Cup Soccer. I travel extensively and I am very much attuned to worldwide trademarks and brands, and I can tell you in the academic field, Georgia Tech is achieving worldwide name recognition.

The first ingredient in the secret formula for success is vision — what you would like to be. Because remember that we all live under the same sky, but we do not have the same horizon. A vision is different, I think, from the short-term goals that characterize a young life. These are often set for you by teachers, parents, advisers. They all have, to one degree or another, some stock in your life, and they quite appropriately set goals for you.

There’s no harm in taking advice, but now you will shape your own destiny. Now you need a larger vision.

I believe that vision is an essential component of the life of a successful individual, of a successful institution, of a successful company. Let’s take my own enterprise, Coca-Cola. It has a rich history of vision.

Sometime around 1899, three wise men travelled from Chattanooga to Atlanta. Two were businessmen and the third was the inevitable lawyer. They visited with Mr. Asa Candler, the then owner of Coca-Cola, and described how, on a recent visit to Havana, Cuba, they had observed a crowd of Cubans watching a baseball game and drinking a soft drink called Pina Colada. This drink was served in a bottle that had a marble-like top that you popped open in order to consume it. They felt that this same principle could be applied to the soft drink, Coca-Cola, so as to take it out of its exclusive soda fountain venue and have it enjoyed everywhere. As some of you know, this led to these three wise men receiving the sole rights for almost all of the U.S. to place Coca-Cola in bottles, and the legal tender for this right was a symbolic $1, which appears to have never actually changed hands. Pretty good vision!

Similarly, we have the vision of Mr. Robert Woodruff who in the 1920s dreamed of creating a global marketplace for Coca-Cola. Undoubtedly, it was sparked by his belief that “life belongs to the discontented” — that restlessness of spirit that impels some of us to go that extra step that brings about a breakthrough. Pretty good vision!

Again, still on home ground, consider Billy Payne’s vision of having Atlanta compete for the honor of becoming the host city of the 1996 Summer Olympics. In Tokyo last September, I listened to Billy Payne relate a personal, very inspiring story to the International Olympic Committee [IOC]. Billy told them how he had been a child and a young teenager in 1956 and 1960 watching the daily highlights of the Melbourne and Rome Games and how every single night of those Olympic Games he had fallen asleep imagining himself on the starting line of the 100 meter finals, only to discover the next morning that he had been dreaming. He related how later in life, while always a good athlete, he realized he would never be good enough to be an Olympian, but he never stopped dreaming. Billy was able to communicate to the IOC his new dream, the dream of an Atlantan — one of many Atlantans — who held the same dream. The dream that the Centennial Games would be celebrated in Atlanta in 1996. I tell you ladies and gentlemen, that was pretty good vision!

Georgia Tech also had the vision to participate in that effort with a total commitment of time and technological expertise. Unquestionably, Tech’s interactive video programs were a decisive factor in convincing the IOC. But I suspect that Georgia Tech had more in mind than just helping out the Atlanta Olympic Committee as a proud citizen of this city. I suspect that Georgia Tech saw that, through the platform of the Olympics, it would project an image for itself to a worldwide audience that not even Madison Avenue could conjure up. And I think it is for that same reason that Dr. Crecine is so involved in our bid for Atlanta to be one of the venues for the World Cup Soccer in 1994, an effort I am knowledgeable of and appreciative of in my capacity as co-chairman of the Atlanta World Cup Soccer Advisory Board. Visionary people see in these associations things that cannot be wrought through conventional molds.

The final example of vision I will give you is America — not just the geographical entity of the U.S. — but for what America means as a vision to the world at large. You may think I am exaggerating. I beg you not to make that mistake. I have lived most of my life in other countries as an outsider looking in at the U.S. I have a deep, deep regard and affection for this nation, even though I did not have the privilege of being born here.

America is made up of an amazing, remarkable population representing virtually every race, religion, nationality, and language on earth. The diverse American people are a fabulous resource unequalled in any other nation.

The reason people continue to come here from all over the world — sometimes risking their lives — is because of this very simple, but very clear vision and that’s this incredible notion of a chance, a chance to start again. It’s this brilliant idea that here you can wipe the slate clean and try to be whatever you want to be! With all that is wrong in this nation, it still offers people the best chance on earth to apply their skills and realize their dreams.

Even with all her warts and blemishes exposed through an open democratic society and probing news cameras, as an outsider I can tell you that the world at large still sees a nation of freedom and opportunity unequalled anywhere. While many here focus only on the failures, the world at large sees a nation that has delivered on more of its promises than any other nation ever in history.

You should be proud because although people may criticize this country, they also yearn to come here. It’s this dichotomy of feelings that is important to understand. Another ingredient I mentioned as being important to me is confidence — a basic acceptance of what I am and a realistic understanding of what I am not. It is an understanding of your potential.

To realize this potential, you must be at peace with yourself. You must focus on your strengths and attributes, and you must develop them to the max. I think I was in my twenties when this truth finally dawned on me, because until then I had been thrashing around, trying to be all things to all people. It doesn’t work that way. Confidence in your potential means you can look anybody in the eye and not be in awe of them. Confidence is seeing an equal, level playing field.

So there you have it, my ingredients for success. Vision, confidence and thirdly, luck. Don’t think that if you have vision and confidence, luck will come looking for you. Sometimes you have to make your own luck.

Lastly, I would caution you that as intelligent and active participants in a dynamic society like America, you must bring balance into your lives. Imagine life as a game in which you are juggling some five balls in the air. You name them — work, family, health, friends and spirit — and you’re keeping all of these in the air. You will soon understand that work is a rubber ball. If you drop it, it will bounce back. But the other four balls — family, health, friends and spirit — are made of glass. If you drop one of these, they will be irrevocably scuffed, marked, nicked, damaged or even shattered. They will never be the same. You must understand that and strive for balance in your life.

You live in a world of growing opportunity at one of the most exciting times in history, and you have been prepared with an exceptionally fine education. Because you are all so well educated, let me pose this final question to you. What is education for? Is it for the pursuit of knowledge or for the pursuit of significance? How you answer makes a difference.

Knowledge is merely a tool. There is someone in Argentina or Singapore who has the same degree as you. The difference lies in how you use it. Will you use your education for life or just as a living? It’s up to you now.

Mark Turner : Yes, Coca-Cola CEO Brian Dyson really did give that “five balls” speech

May 10, 2015 03:41 PM

On social media, a friend forwarded what was called the “Shortest speech by CEO of Coca Cola…”
It reads:

Imagine life as a game in which you are juggling some five balls in the air. You name them – Work, Family, Health, Friends and Spirit and you’re keeping all of these in the air.

You will soon understand that work is a rubber ball. If you drop it, it will bounce back. But the other four Balls – Family, Health, Friends and Spirit – are made of glass. If you drop one of these, they will be irrevocably scuffed, marked, nicked, damaged or even shattered. They will never be the same. You must understand that and strive for it.”

Work efficiently during office hours and leave on time. Give the required time to your family, friends & have proper rest

Value has a value only if its value is valued

While I’m not the first to cast doubt on this alleged speech, the quote sounded too cheesy to be true so I decided to study it a bit.

“Value has a value only if its value is valued”

First off, what the hell does this even mean? No CEO would say this. Perhaps the goober who put together the image with the quote on it tacked this on. I’ll assume this is the case.

A Google search of “Brian Dyson speech” turns up 236,000 results. Many of them are “inspirational quote”-type websites. Even NPR quotes the speech:

Brian J. Dyson
Georgia Tech, September 6, 1996

Nothing is really over until the moment you stop trying.

Some search results include longer versions of the alleged speech:

Imagine life as a game in which you are juggling some five balls in the air. You name them – work, family, health, friends and spirit … and you’re keeping all of these in the air.

You will soon understand that work is a rubber ball. If you drop it, it will bounce back. But the other four balls – family, health, friends and spirit – are made of glass. If you drop one of these, they will be irrevocably scuffed, marked, nicked, damaged or even shattered. They will never be the same. You must understand that and strive for Balance in your life.

How?

Don’t undermine your worth by comparing yourself with others. It is because we are different that each of us is special.

Don’t set your goals by what other people deem important. Only you know what is best for you.

Don’t take for granted the things closest to your heart. Cling to them as you would your life, for without them, life is meaningless.

Don’t let your life slip through your fingers by living in the past or for the future. By living your life one day at a time, you live all the days of your life.

Don’t give up when you still have something to give. Nothing is really over until the moment you stop trying.

Don’t be afraid to admit that you are less than perfect. It is this fragile thread that binds us to each together.

Don’t be afraid to encounter risks. It is by taking chances that we learn how to be brave.

Don’t shut love out of your life by saying it’s impossible to find time. The quickest way to receive love is to give; the fastest way to lose love is to hold it too tightly; and the best way to keep love is to give it wings!

Don’t run through life so fast that you forget not only where you’ve been, but also where you are going.

Don’t forget, a person’s greatest emotional need is to feel appreciated.

Don’t be afraid to learn. Knowledge is weightless, a treasure you can always carry easily.

Don’t use time or words carelessly. Neither can be retrieved. Life is not a race, but a journey to be savored each step of the way.

Georgia Tech
Sept. 6th, 1996

A search for this commencement date at Georgia Tech returns few plausible results, so it appears this date is incorrect. A more general search of “georgia tech” “commencement” “1996” helpfully turns up an archive at Georgia Tech of the President’s commencement speeches.

According to this archive, the Spring 1996 commencement speaker was Georgia Gov. Zell Miller:

It is now my great pleasure to introduce our commencement speaker. We are very fortunate this commencement, to have a speaker who is not only renowned for his political achievements, but is also recognized for his profound influence on the future of Georgia.

Before ascending to his current office, our speaker had a diverse career. At one time or other, he has been a businessman, a college professor, a Marine Sergeant, the author of three books, and even a short-order cook and college baseball coach.

Today, he is the governor of Georgia.

Since taking office in 1991, Governor Zell Miller’s love of teaching and commitment to education has resulted in one of the most ambitious agendas to improve public education in this century.

The Fall 1996 commencement speaker was Ms. Jackie M. Ward:

It is now my pleasure to introduce our commencement speaker, Ms. Jackie M. Ward.

Ms. Ward is a founder and chief executive officer of Computer Generation Incorporated. No stranger herself to drive and dedication, she began her career in data processing with the J.P. Stevens Company and later worked her way up through technical and management positions with General Electric and the UNIVAC division of Sperry Rand Corporation.

So, at least the date of this alleged speech is incorrect.

A search of the Snopes message board reveals an early mention of Mr. Dyson’s commencement address to Georgia Tech … in 1991. It mentions a news story from the Athens Banner-Herald where a school superintendent was accused of plagiarism for appropriating part of Mr. Dyson’s speech:

The superintendent of schools has admitted plagiarizing a portion of the commencement address he gave [in June 1999] at Hopkinton High School, saying it was an oversight.

Michael Ananis used a portion of a 1991 speech given to Georgia Tech graduates by former Coca-Cola executive Brian Dyson. The plagiarism was discovered after an anonymous letter to a local newspaper.

Does this prove Mr. Dyson gave the 1991 commencement speech? Well, not by itself. The Banner-Herald story ran (and Mr. Ananis was accused of plagiarism) in the summer of 1999, at which point the Internet (and its search engines) had become commonplace. The Banner-Herald story also does not quote the parts of the speech plagiarized, nor does it name the “local newspaper” that published the letter from the “anonymous” writer. It’s possible that Mr. Ananis simply searched online for a commencement speech to borrow and found mention of the Dyson speech. We don’t know if what he found was an accurate transcript or simply fiction on someone’s part.

Unfortunately, Georgia Tech’s speech archive appears to only go back as far as 1996, which doesn’t do us much good in determining if Mr. Dyson spoke at commencement in 1991.

Just when it looks as if this speech has been busted, yet another Google search hits pay dirt. The same SmartTech server that archives Georgia Tech’s Presidential speeches also archives The Whistle, Georgia Tech’s faculty newspaper. The September 30, 1991 edition of the Whistle includes Mr. Dyson’s speech in its entirety:

Here’s a link to Mr. Dyson’s full speech and the PDF scan of the Whistle page.

So, there you have it. Brian Dyson did speak at Georgia Tech’s commencement and he really did provide the “five balls” example. However:

  • It was at the 172nd commencement on Sept. 6, 1991, not Sept 6, 1996.
  • It was not the “shortest speech” ever, but was part of a full, well-written speech.
  • This dumb “value only has value” part is superfluous, as is the “longer version” mentioned on other websites.
  • The quote on NPR’s website is pure fiction.

That sets the record straight on that little Internet mystery!

Mark Turner : Folks don’t appreciate this

May 08, 2015 10:46 PM

I mostly agreed with this Maclean’s story about America dumbing down, until the author quoted Susan Jacoby nitpicking the word “folks.”

By 2008, journalist Susan Jacoby was warning that the denseness—“a virulent mixture of anti-rationalism and low expectations”—was more of a permanent state. In her book, The Age of American Unreason, she posited that it trickled down from the top, fuelled by faux-populist politicians striving to make themselves sound approachable rather than smart. Their creeping tendency to refer to everyone—voters, experts, government officials—as “folks” is “symptomatic of a debasement of public speech inseparable from a more general erosion of American cultural standards,” she wrote. “Casual, colloquial language also conveys an implicit denial of the seriousness of whatever issue is being debated: talking about folks going off to war is the equivalent of describing rape victims as girls.”

Whoa. Talking about “folks” is like denigrating rape victims? Hyperbole much?

Obama can be “the most cerebral and eloquent American leader in a generation” and still say “folks” in a speech. Bill Clinton is brilliant and also … well, a “hayseed.” Can he not say “folks?”

There’s nothing wrong with the word “folks.” Unless you’re an elitist, that is.

via America dumbs down: a rising tide of anti-intellectual thinking.

Warren Myers : vision for lexington

May 08, 2015 01:25 PM

Over the past 5 years, I have witnessed some of the growth Lexington, KY has started to undergo: the population of the city proper has grown from about 260,000 in 2000 to 295,000 in 2010 to an estimated 315,000 in 2015.

While there seems to be something of a plan/vision for the downtown area, the majority of Lexington (and its urban area) seems to be more-or-less ignored from an infrastructural perspective (the last update was in 2009, and only for a small part of Lexington).

Public Transit

The public transit system, as hard as I am sure Lextran employees work, is underutilized, poorly routed, and offers no way to connect into it from outside Lexington (full route map (PDF)).

In comparison to where I grew up, the Capital District of New York, the public transit system is both too inwardly-focused and too poorly-promoted to be useful for most Lexingtonians. CDTA, for example, has connectors to cities and towns other than just Albany. You can start where I grew up in Cohoes (about 10 miles north of Albany), and get more-or-less anywhere in the greater Capital District by bus. It might take a while, but you can get there (or get close). There are also several Park’n’Ride locations for commuters to take advantage of.

Lextran doesn’t offer anything to connect to Nicholasville, Versailles, or Georgetown. With workers commuting in from those locales (and more – some come from Richmond or Frankfort, or go in the opposite direction), one would think urban planners would want to alleviate traffic congestion. But there is nothing visible along those lines.

Lost Neighborhoods

There are large chunks of Lexington where the houses are crumbling, crime rates are higher than the rest of the city, and the citizens living there are being [almost] actively avoided and/or neglected by the city.

Some limited business development has gone into these neighborhoods (like West 6th Brewing), but as a whole they are becoming places “to be avoided”, rather than places where anyone is taking the time and effort to improve, promote, and generally bring in line with the rest of the city.

Yes, everywhere has regions that folks try to avoid, but the lost and dying neighborhoods in Lexington are saddening.

Walking


Lexington is – in places – a walkable city, but for most of the residential areas, it was/is up to the developers of the subdivisions as to whether or not there are sidewalks. And if they weren’t put in then, getting them done now is like pulling teeth.

Being able to walk to many/most places (or types of places) you might want to go is one of the major hallmarks of a city. One that is only exhibited in pockets in Lexington.

It should even be a hallmark of shopping areas – but look at Hamburg Pavilion. A shopping, housing, and services mini-town (apartments, condos, houses, banking, education, restaurants, clothes, etc), Hamburg is one of the regional Meccas for folks who want to do major shopping trips or eat at nice restaurants. The map (PDF), however (which only shows part of the Hamburg complex), demonstrates that while pockets of the center are walkable, getting from one shopping/eating/entertainment pod to another requires walking across large parking lots – impractical if shopping with children, or when carrying more than a couple bags.

Crosswalks and lighted crossings on major roads, in some cases, leave mere seconds to spare before the light changes – if you’re moving at a crisp clip. Add a stroller, collapsible shopping cart, or heavy book bag, and several crossings become “safe” only if drivers see you are already crossing and wait for you. Stories of pedestrians being hit, like this one, are far too common in local news media.

Employment

There is no lack of employment opportunities in the Lexington area – there are 15 major employers in Lexington, hundreds of small-to-medium businesses running the gamut of offerings from auto dealers to lawn care, IT to healthcare, equine products, home construction, etc; and hundreds of national chains (retail, restaurants, services, etc) are here, too.

Finding said employment can be difficult, though. There are some services like In2Lex which send newsletters with employment opportunities – but if you don’t know about them, finding work in the area isn’t as easy as one would expect a Chamber of Commerce to want it to be. Yes, employers need to advertise their openings, but even finding lists of companies in the area is difficult.

Connectivity to Other Areas

Direct flights into and out of Lexington Bluegrass Airport reach 15 major metro areas across half the country.

Interstates 75 and 64 cross just outside city limits.

The Underlying Problem

The major problem Lexington seems to have is that it doesn’t know it’s become a decent-sized metropolitan area. There are about 500,000 people in the MSA, or about 12% of the population of the whole state. It’s a little under half the size of the Louisville MSA (which includes a couple counties in Indiana). There are 8 colleges/universities in Lexington alone (PDF), and 15 under an hour from downtown.

To paraphrase Reno NV’s slogan, Lexington is the biggest little town in Kentucky. The last major infrastructural improvement done was Man O’ War Boulevard, completed in 1988 – more than a quarter century past. There were improvements done to New Circle Road in the 1990s, but that ended over 15 years ago. Lexington proper was 30% smaller in 1990 than it is now (225,000 vs 315,000).

Lexington’s 65+ year-old Urban Service Area, while great for maintaining the old character of the city and region, hasn’t been reviewed since 1997. A few related changes have been added since, but the last of those was in 2001.

One and a half decades since major infrastructural improvements. Activities like the much-delayed Centre Point (which I agree doesn’t need to be done in the manner originally planned), the now-begun Summit, and other development projects may, eventually, be good for business and the city as a whole, but there has been little-to-no consideration for what will happen with traffic. Traffic problems and general accessibility are among the core responsibilities of local government.

The diverging diamond interchange installed a couple years back on Harrodsburg Rd was a good improvement to that intersection. But it was only good for that intersection. It alleviated some traffic concerns, crashes, and complications, but only on one road.

Lexington needs leadership that sees where the city not only was 10, 25, 50 years ago, but where it is now and where it wants to be in another 10, 20, 50 years.

My Vision

My vision for Lexington, infrastructurally, includes interchange improvements / rebuilds for more New Circle Road exits. Exit 7, Leestown Road, grants access to Coke, FedEx, Masterson Station, the VA hospital, a BCTC campus, and more. Big Ass Fans is between exit 8 from New Circle and exit 118 of I-75. Exit 9 from New Circle more-or-less exists to provide Lexmark with a way for their employees to arrive. The major employers in the area are great for economic stability. But with traffic congestion, getting into and out of them needs to be as smooth as possible.

West Sixth Brewery and Transylvania University are two of the highlights in an otherwise-aging, -dying, and -lost area of the city. There needs to be a public commitment on the part of both the city and the citizenry to not allow the city to become segregated. Not segregated based on skin tone, but on economic status.

Bryan Station High School has a reputation, deservedly or not, of being one of the worst high schools in the region, because of the dying/lost status of the parts of town it draws from. You can buy a 2 bedroom, 1 bath, 1300 square foot house for under $20,000 near Bryan Station. It needs a little bit of work, but what does that say about the neighborhood?

The leadership of Lexington seems to be ignoring parts of the city that are going downhill, preferring instead to focus on regions that are going up. Ignoring dying parts of the city from an infrastructural perspective isn’t going to make them any better – they will only drag more of the city down with them. As a citizen and a homeowner, I want to see my city do well.

I do not like paying taxes any more than anyone else, but I do like seeing the city taking initiative and working to both heal itself and take steps towards attracting future generations, businesses, and more that we don’t even know are coming.

Lexington has great promise – it is growing, expanding, and burgeoning. But if its leadership – political, business, and citizenry – doesn’t take the time, effort, and money to ensure it’s prepared for this growth, it will become a morass to traverse, live in, and do business with.


Some more interesting regional data (PDF)

Tanner Lovelace : Blogging

May 07, 2015 03:26 AM

Back in December I had all these plans about blogging all about my experience this year training for my first iron distance triathlon. Well, now it’s May and I haven’t done anything. :-( I suppose, though, that now is as good a time as any to start.

So, what’s happened since my last blog post? First, I trained for and ran my first marathon: the Raleigh Rock-n-Roll Marathon. If you are thinking of doing a marathon for the first time, let me give you a bit of advice: pick a different race. While I did the half last year and had lots of fun, and I had lots of fun this year too, I think the Raleigh course is just too hilly for a first marathon. I had hoped to finish in under 4:45, but with the hills I was lucky to come in under five hours. My hamstrings started acting up near the end and I basically hobbled the last 25 yards, but I just barely made it, crossing the finish line in 4:59:38. While I don’t plan on doing another marathon this year, except for my iron distance triathlon, I may look into doing another one next year, and if I do I’m going to pick something flat: either the Tobacco Road (although maybe not, since spectator support there is apparently very weak) or the Wrightsville Beach marathon. My Garmin 920XT seems to think I could do a marathon in under 4 hours, but I think that would have to be on a flat course, and perhaps 20 pounds lighter.

Anyway, at the moment, my next big race is the Raleigh 70.3. Unfortunately, I’ve had a few bumps in the road getting there. Just over two weeks ago, while heading out for a bike ride, I slipped and fell on the stairs in my garage and hit my arm. While I didn’t break it, I managed to end up with a hematoma the size of a chicken’s egg! This was possibly caused by the fact that I went out and did my hour long bike ride after the fall! In any case, I ended up in a sling for a week and then took it easy for another week and therefore lost two weeks of training! It’s just this week that I’m back to my training schedule.

This evening, I managed to get my first open water swim of the year in with a group that swims on Wednesdays and Fridays out at Falls Lake. I’m very glad I got out, even if I didn’t do that much because this Sunday I am signed up for the Jordan Lake Open Water Swim. I’m doing that in conjunction with the Inside-Out Sports Triathlon Club’s Raleigh 70.3 Preview ride so I will, in effect, get both a preview swim and ride for my upcoming half iron race at the end of the month. Never mind that I had two weeks off. I’m pretty sure I can do them both, I just won’t break any time records doing so.

After that, the next weekend I will be racing the Cary Long Course Duathlon. This will be my first ever duathlon and it will be interesting to see how it goes. I actually won an entry into this race at my triathlon club’s annual kickoff party back in Feb/Mar (so long ago I don’t remember exactly when it was!).  Again, I’m not expecting to break any speed records, but it will help me remember how to handle transitions.

The other big news since December is that my wife was diagnosed with breast cancer back in February/March. Now, any cancer is bad news, but since the diagnosis we’ve probably gotten the best news we could have possibly gotten. The cancer came back as only stage 1. It had not migrated to the lymph nodes yet. It was removed successfully via a lumpectomy. And, while she is currently undergoing radiation treatment and will then do hormone therapy afterward, she will not need to have chemotherapy. So, while we would rather she just not have gotten the cancer in the first place, we feel fairly blessed with how things have gone since then. That, obviously, puts a bit of strain on my training plans, but by and large I’ve been able to work around that.

Also, I recently spent two weekends taking the Boy Scout Wood Badge advanced leadership training. As part of that, I now have 18 months in which to accomplish some goals that I defined. All of my goals have to do with adding more STEM to my son’s Boy Scout troop. The Boy Scouts have a STEM program via a set of awards called Nova and Supernova. Unfortunately, at the moment, they aren’t well known and hardly anyone even bothers to do the requirements for them. I aim to change that, for our troop at least.

I guess that’s enough for the moment. Hopefully I’ll be able to keep this up and start regularly reporting on my race training. As soon as the Raleigh 70.3 is done I hope to retain the services of a triathlon coach in my quest to finish my first iron distance race: the Beach2Battleship full in Wilmington in October. While I have now run a full marathon, I have yet to swim 2.4 miles or bike 112 miles so getting there will be quite a journey.

Mark Turner : The world of threats to the US is an illusion – Opinion – The Boston Globe

May 06, 2015 05:19 PM

When Americans look out at the world, we see a swarm of threats. China seems resurgent and ambitious. Russia is aggressive. Iran menaces our allies. Middle East nations we once relied on are collapsing in flames. Latin American leaders sound steadily more anti-Yankee. Terror groups capture territory and commit horrific atrocities. We fight Ebola with one hand while fending off Central American children with the other.

In fact, this world of threats is an illusion. The United States has no potent enemies. We are not only safe, but safer than any big power has been in all of modern history.

via The world of threats to the US is an illusion – Opinion – The Boston Globe.

Warren Myers : traveling consultant cheat sheet

May 06, 2015 03:55 PM

“Join the Navy and See the World!”*

Perhaps one of the most famous recruitment phrases ever established in the United States.

And it’s not at all dissimilar from what a lot of budding consultants think they are going to do when either joining a services organization, or starting their own business.

I have been fortunate in that I have gotten to “see the world” as a professional services engineer – at least a little.

What the recruitment phrase fails to mention is that while you may “see” the world, you [probably] won’t get to do much while you’re “seeing” it. I’ve been to or through nearly 60 airports in the last several years. I “saw” the coast of Japan a few times when going into and out of Narita. I’ve “seen” Las Vegas – from a couplefew thousand feet. I’ve “seen” Houston – from IAH. And so on and so forth.

The far more realistic view of what will happen is something like this:

  • get call Friday afternoon asking you to be onsite in <someplace> Monday morning
  • book flight, hotel, rental car (if appropriate)
  • make sure clothes are clean
  • do as much Saturday and/or Sunday as you can, since you’ll be gone for a week
  • fly out Sunday evening or Monday morning (I’ll talk about this later)
  • get rental car
  • check into hotel
  • go to customer site
    • work
    • eat
    • sleep
    • repeat
  • check out from hotel
  • return car
  • fly home
  • repeat all of above

As someone who has been doing a travel-based job for 7+ years now, let me share some of the things I have learned with you.

Basics

Loyalty programs

Sign up for airline frequent flyer programs. In the US, this means Delta, United, Southwest, and American Airlines.

Sign up for hotel rewards. Hyatt, Hilton, Marriott, Wyndham.

Sign up for the car rental programs. Hertz, Avis, Budget, Dollar, Thrifty, Enterprise, National.

Stay “loyal”

So long as you are able – i.e., costs are reasonable, schedules are good, etc – stick with a single primary chain for each of the travel categories (airline, car, hotel). If you’re going to get status, might as well get it all with one place when possible.

Sign up for every promotion your loyalty partners make available. For example, I’m a United Guy (used to be a Delta Guy – but that’s a different story). I’m also a Hilton Guy (because Marriott hasn’t been as competitive (price, location) in the markets I’ve been to as they used to be). I have my Hilton HHonors Double Dip go to HHonors points and United miles. And I make sure any time there is a promo to get more points or miles that I sign up for it. If Hilton wants to give me an extra 5,000 United miles for every stay after the second between now and 31 August, why not take advantage of that?

Choose the best rewards – for you

Maybe you like traveling so much you want to have Avis points so you can get free car rentals on vacation. Personally, I find turning all my reward points into frequent flyer miles is my best option – renting a car for a week is almost always less expensive than paying for a flight – especially when my family goes somewhere on vacation.

Clothes

Every shirt and pair of pants I take when I go onsite is “no iron”. This saves time when you arrive. And you won’t have nearly as much time as you think you will, most of the time.

Get slip-on dress shoes. You will appreciate this most when going through airport security. But also if you have to go through security to get into customer buildings, etc.

Have an arrival and departure change of clothes that are comfortable – I like jeans and either a polo or comfortable t-shirt.

What about jackets? I like the lightest-weight jacket I can carry/wear: there will not be enough space on the plane for it, it’ll get hot in the airport, and you really only normally need it to walk from the airport to the rental car shuttle / counter, from the rental counter to the car, the car to the hotel, the hotel to the office, and all in reverse. You probably won’t need a parka for those types of activities.

Baggage

There’s a big conversation that surrounds this topic, but I’m going to tell you what works for me. First, check your main bag – it’ll accelerate your time to board, your time between flights (if you have one or more connections), and make it easier to get around the airport when you arrive (easier to use the bathroom, get a meal, etc). So save everyone headaches and check your main bag.

In your one carry-on – a laptop bag – you should have the following:

  • single change of clothes
  • snack & water bottle (empty, of course)
  • basic minimal toiletries (toothbrush, toothpaste, deodorant, etc)
  • book (or Kindle, but I like a physical book – there’s never anything to have to turn off)
  • all required chargers (laptop, cell phone, mifi, etc)
  • portable battery backup like an EasyAcc Classic

Arriving and Departing

Day-of? Or night before?

This is almost entirely a personal preference: arriving day-of (eg Monday morning) can be good if you have a family, don’t mind getting up hyper early to get to the airport, and can function well enough on little sleep.

Arriving night before (eg Sunday night) can be good because if you’re bumped or delayed on a flight, you have cushion before your customer expects to see you.

Either way, always try to check-into your hotel before going to your customer – if it’s an early-Monday arrival, change out of your travel clothes at the airport into work clothes, and have the hotel hold your bags for you.

I alternate between which is better for me to do based on how many connections I have, customer expectations (if you have a mandatory 0900 meeting Monday and your flight won’t arrive until 0930, you have to come in Sunday night), time of year (weather considerations), etc.

What did I miss?

What would you add/change/tweak on this cheat sheet?


* I always thought it should read, “Join the Navy and Sea the World”

Warren Myers : what level of abstraction is appropriate?

May 05, 2015 01:58 PM

Every day we all work at multiple levels of abstraction.

Perhaps this XKCD comic sums it up best:

(xkcd: “Abstraction” – alt text: “If I'm such a god, why isn't Maru *my* cat?”)

But unless you’re weird and think about these kinds of things (like I do), you probably just run through your life happily interacting at whatever level seems most appropriate at the time.

Most drivers, for example, don’t think about the abstraction they use to interact with their car. Pretty much every car follows the same procedure for starting, shifting into gear, steering, and accelerating/decelerating: you insert a key (or have a fob), turn it (or push a button), move the drive mode selection stick (gear shift, knob, etc), turn a steering wheel, and use the gas or brake pedals.

But that’s not really how you start a car. It’s not really how you select drive mode. It’s not really how you steer, etc.

But it’s a convenient, abstract interface to operate a car. It is one which allows you to adapt rapidly to different vehicles from different manufacturers which operate under the hood* in potentially very different ways.

The problem with any form of abstraction is that it’s just a summary – an interface – to whatever it is trying to abstract away. And sometimes those interfaces leak. You turn the key in your car and it doesn’t start. Crud. What did I forget to do, or is the car broken? Did I depress the brake and clutch pedals? Is it in Park? Did I make sure to not leave the lights on overnight? Did the starter motor seize? Is there gas in the tank? Did the fuel pump quit? These are all thoughts that might run through your mind (hopefully in decreasing order of probability/severity) when the simple act of turning the key doesn’t work like you expect.

For a typical computer user, the only time they’ll even begin to care about how their system really works is when they try to do something they expect it to do … and it doesn’t. Just like drivers don’t think about their cars’ need for the fuel injector system to make minute adjustments thousands of times per second, most people don’t think about what it actually takes to go from typing “www.google.com” in their browser bar to getting the website returned (or how their computer goes from off to “ready to use” after pushing the power button).

Automation provides an abstraction to manual processes (be it furniture making or tier 1 operations run book scenarios). And abstractions are good things … except when they leak (or outright break).

Depending on your level of engagement, the abstraction you need to work with will differ – but knowing that you’re at some level of abstraction (and, ideally, which level) is vital to being the most effective at whatever your role is.

I was asked recently how a presentation on the benefits of automation would vary based on audience. The possible audiences given in the question were: engineer, manager, & CIO. And I realized that when I’ve been asked questions like this before, I’ve never answered them wrong, but I’ve answered them very inefficiently: I have never used the level of abstraction to solve the general case of what this question is really getting at. The question is not about whether or not you’re comfortable speaking to any given “level” of customer representative (though that’s important). It is not about verifying you’re not lying about your work history (though that’s also important).

No. That question is about finding out if you really know how to abstract to the proper level (in leakier fashions as you go upwards assumed) for the specific “type” of person you are talking to.

It is vital to be able to do the “three pitches” – the elevator (30 second), the 3 minute, and the 30 minute. Every one will cover the “same” content – but in very different ways. It’s very much related to the “10/20/30 rule of PowerPoint” that Guy Kawasaki promulgates: “a PowerPoint presentation should have ten slides, last no more than twenty minutes, and contain no font smaller than thirty points.” Or, to quote Winston Churchill, “A good speech should be like a woman’s skirt; long enough to cover the subject and short enough to create interest.”

The answer that epiphanized for me when I was asked that question most recently was this: “I presume everyone in the room is ‘as important’ as the CIO – but everyone gets the same ‘sales pitch’ from me: it’s all about ROI. The ‘return’ on ‘investment’ is going to look different from the engineer’s, manager’s, or CIO’s perspectives, but it’s all just ROI.”

The exact same data presented at three different levels of abstraction will “look” different, even though it’s conveying the same thing – because the audience’s engagement is going to be at their level of abstraction (though hopefully they understand at least to some extent the levels above (and below) themselves).

A simple example: it currently takes a single engineer 8 hours to perform all of the tasks related to patching a Red Hat server. There are 1000 servers in the datacenter. Therefore it takes 8000 engineer-hours to patch them all.

That’s a lot.

It’s a crazy lot.

But I’ve seen it countless times in my career. It’s why patching can so easily get relegated to a once-a-year (or even less often) cycle. And why so many companies are woefully out-of-date with their basic systems from known issues. If your patching team consists of 4 people, it’ll take them a year to patch all 1000 systems – and then they just have to start over again. It’d be like painting the Golden Gate Bridge – an unending process.

Now let’s say you happen to have a management tool available (could be as simple as pssh with preshared SSH keys, or as big and encompassing as Server Automation). And let’s say you have a local mirror of RHN – so you can decide just what, exactly, of any given channel you want to apply in your updates.

Now that you have a central point from which you can launch tasks to all of the Red Hat servers that need to be updated, and a managed source from which each will source their updates, you can have a single engineer launch updates to dozens, scores, even hundreds of servers simultaneously – bringing them all up-to-date in one swell foop. What had taken a single engineer 8 hours is still 8 – but it’s 8 in parallel: in other words, the “same” 8 hours is now touching scores of machines instead of 1 at a time. The single engineer’s efficiency has been boosted by a factor of, say, 40 (let’s stay conservative – I’ve seen this number as high as 1000 or more).
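As a minimal sketch of that parallel fan-out (hostnames, parallelism, and the update command here are made up for illustration; the post's actual tooling could be pssh, Server Automation, or anything in between):

```shell
# A real fan-out with pssh might look like (illustrative only):
#   pssh -h hosts.txt -l root -p 40 -t 0 'yum -y update'
# Below we simulate the hosts with a plain file so the parallel launch
# itself is visible and safe to run anywhere.
printf 'server%03d\n' $(seq 1 8) > /tmp/hosts.txt
# xargs -P 4 runs four "patch jobs" concurrently instead of one at a time;
# a real run would invoke ssh/yum instead of echo.
xargs -a /tmp/hosts.txt -P 4 -I{} sh -c 'echo "patched {}"' > /tmp/patch.log
wc -l < /tmp/patch.log
```

All eight hosts are touched, but only in the wall-clock time of two sequential batches rather than eight.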

Instead of it taking 8000 engineer-hours to update all 1000 servers, it’s now only 200. Your 4 engineer patching team can now complete their update cycle in well under 2 weeks. What had taken a full year, is now being measured in days or weeks.

The “return on investment” at the abstraction level of the engineer is they have each been “given back” 1900 hours a year to work on other things (which helps make them promotable). The team’s manager sees an ROI of >90% of his team’s time is available for new/different tasks (like patching a new OS). The CIO sees an ROI of 7800 FTE hours no longer being expended – which means the business’ need for expansion, with an associated doubling of server estate, is now feasible without having to double his patching staff.
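The arithmetic behind those ROI numbers, spelled out as a quick shell check (all values are the post's own):

```shell
# 1000 servers at 8 engineer-hours each, patched one at a time.
servers=1000; hours_each=8
serial=$((servers * hours_each))          # 8000 engineer-hours, serially
# Conservative parallel fan-out factor from the post.
factor=40
parallel=$((serial / factor))             # 200 engineer-hours with a central tool
returned=$((serial - parallel))           # 7800 FTE hours no longer expended
echo "$serial $parallel $returned"        # prints: 8000 200 7800
```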

Every abstraction is like that – there is a different ROI for a taxi driver on his car “just working” than there is for a hot rodder who’s truly getting under the hood. But it’s still an ROI – one is getting his return by being able to ferry passengers for pay, and the other by souping-up his ride to be just that little (or lot) bit better. The ROI of a 1% fuel economy improvement by the fuel injector system being made incrementally smarter in conjunction with a lighter engine block might only be measured in cents per hour driving – but for FedEx, that will be millions of dollars a year in either unburned fuel, or additional deliveries (both of which are good for their bottom line).

Or consider the abstraction of talking about financial statements (be they for companies or governments) – they [almost] never list revenues and expenditures down to the penny. Not because they’re being lazy, but because the scale of values being reported do not lend themselves well to such mundane thinking. When a company like Apple has $178 billion in cash on hand, no one is going to care if it’s really $178,000,102,034.17 or $177,982,117,730.49. At that scale, $178 billion is a close-enough approximation to reality. And that’s what an abstraction is – it is an approximation to the reality being expressed down one level. It’s good enough to say that you start your car by turning the key – if you’re not an automotive engineer or mechanic. It’s good enough to approximate the US Federal Budget at $3.9 trillion or maybe $3900 billion (whether it should be that high is a totally different topic). But it’s not a good approximation to say $3,895,736,835,150.91 – it may be precise, but it’s not helpful.

I guess that means the answer to the question I titled this post with is, “the level of abstraction appropriate is directly related to your ‘function’ in relation to the system at hand.” The abstraction needs to be helpful – the minute it is no longer helpful (by being either too approximate, or too precise), it needs to be refined and focused for the audience receiving it.


*see what I did there?

Jesse Morgan : Change Fatigue

May 05, 2015 02:37 AM

It’s not that I fear change, it’s that I’m weary of it.

This is a good example. I was recently notified that a new version of my operating system (Kubuntu) was released.

After upgrading, here is the list of things that needed to be fixed, in the order that I found them:

  • grub timeout needs to be reset to 1 second
  • log in and see all of my settings, widgets, backgrounds, etc are gone.
  • Rage because the text in konsole is almost illegible
  • Go to control panel
    • Go to look and feel
      • change desktop theme back to Oxygen, because I *think* that’s what I had. Still doesn’t look right
      • Switch cursor theme to Oxygen white. Looks better, but not sure if it’s the same one. I notice that this doesn’t affect the cursor when it’s over the desktop, just over the system settings box.
      • Switch splash screen to none because the new one is Ugly and I see no other options.
    • Go to Colors
      • try various schemes, none look right
      • try “get new schemes” to find it doesn’t work
      • go with Oxygen because it might be the one I had
    • Go to Fonts
      • switch all the damn fonts back to ubuntu (I recall this being the previous default)
    • Go to Icons
      • Switch the theme back to Oxygen, which is what I think I had
    • Rage because my cursor keeps getting bumped and moved because of the touchpad sensitivity being changed
    • Skip down to Input Devices
      • click on touchpad
        • go to scrolling
          • disable edge scrolling
          • disable twofinger scrolling
        • go to sensitivity
          • enable palm detection
          • make random decreases to width and pressure settings because you can’t tell if that fixed the problem
        • Go to Taps, set all corners to no action, disable tap-and-drag gestures because I don’t think I had those before.
    • Go Desktop Behavior
      • go to desktop effects
        • disable zoom, background contrast, blur, fade, login, logout, maximize, screen edge, sliding popups, translucency, minimize animation, slide, desktop grid, and present windows because they’re all annoying.
  • I cannot find “change desktop image” under System Settings despite having clicked on numerous things labeled “desktop” and “workspace”
  • right click on desktop to go to desktop settings
    • notice that all of the preloaded images have generic preview icons.
    • click through and apply each one trying to find my wallpaper I set 26 months ago.
    • Try to open and find the damn thing in my directory
    • find the Open Image dialog doesn’t behave *at all*. sidebar shortcuts do not open to the right places, it’s slow, laggy, misaligned and a general clusterfuck.
  • Find and edit /etc/default/grub and switch timeout to 1 second, then run “update-grub”
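For reference, that last fix can be scripted. This sketch works on a copy of the file so it's safe to try anywhere; on the real system you'd edit `/etc/default/grub` as root and then run `update-grub`:

```shell
# Work on a copy so this sketch doesn't touch the live bootloader config.
cp /etc/default/grub /tmp/grub.test 2>/dev/null \
  || printf 'GRUB_TIMEOUT=10\n' > /tmp/grub.test
# Set the menu timeout to 1 second.
sed -i 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=1/' /tmp/grub.test
# Ensure the line exists even if the copied file lacked it.
grep -q '^GRUB_TIMEOUT=1$' /tmp/grub.test \
  || printf 'GRUB_TIMEOUT=1\n' >> /tmp/grub.test
grep '^GRUB_TIMEOUT' /tmp/grub.test
# On the real file, follow with: sudo update-grub
```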

Hope that this is a weird fever dream and reboot.

 

After a second reboot, it’s slightly more responsive.

  • re-add panel on second monitor
  • re-add task manager to second panel
  • re-enable “only show tasks from current screen”
  • try changing wallpaper again, previews still broken, dialog broken.
  • Open Konsole and set the shell profile font back to droid sans mono 10
  • Attempt to start virtualbox, get error that drivers need to be rebuilt

I went to re-add my desktop widgets (memory usage, CPU, weather, etc.), but they appear to be gone now.

  • get error that headers are missing
  • switch back to classic application K menu

 

Overall it was disappointing and frustrating, and I wish I hadn’t upgraded. While I understand it’s mainly due to the new version of Plasma, it simply wasn’t ready for release. I need to get stuff done, and I can’t wait for this to be fixed.

Update: Since writing this, I’ve decided to re-image my machine using Mint with KDE. So far, it’s much better than 15.04.

 

Mark Turner : The “Entitlement Generation” : Anchor Mom

May 04, 2015 05:04 PM

I had a few friends repost this on their Facebook pages, holding it up perhaps as an example of ideal parenting:

“If your parents had to use a wooden spoon on you, then they clearly didn’t know how to parent you.”

Yep. I got that email last night after I posted my blog. I honestly had to laugh. Here was a stranger criticizing my parents. I tend to think they did a pretty good job. They raised three, well-rounded children. One is a successful HR exec, one is a journalist and the other is a doctor. Clearly they did something right. 😉 And let’s be real for a minute, it wasn’t all about a wooden spoon. It was about manners and respect.

Put me in the camp of the person who told this woman “If your parents had to use a wooden spoon on you, then they clearly didn’t know how to parent you.”

There are better ways to earn respect than by beating your child. If you have to beat your child, you are doing it wrong. You. Are. Doing. It. Wrong.

You know, maybe if we stop teaching kids that might makes right and that violence is a legitimate solution to a problem, we would have fewer domestic abuse issues, murders, riots, and maybe even wars. Maybe adults could try acting like adults and work a little bit at the parenting thing, rather than striking out like a three-year-old would?

I don’t hit my kids, I’ve never hit my kids, and the thought of hitting my kids makes me sick. And you know what? They are awesome. They can be frustrating at times because they’re kids, but they respect me because I model the kind of behavior that I expect from them. If my kids make a mistake, they don’t feel the need to be deceitful in an effort to escape a beating. The lesson we teach is to own up to your mistakes and fix them. They claim both their successes and failures.

My ultimate job as a parent is to teach my kids how to interact with the adult world. If my friends or coworkers don’t do what I say, I don’t go punch them in the face. I talk with them and sort things out. This is what grown-ups do. This is how we solve problems.

I’m sick of corporal punishment apologists blaming the “sparing of the rod” for a kid’s issues. If a rod is all you’ve got in your parental toolbox, you’re a poor parent. And it’s not just your kid who will suffer.

via The “Entitlement Generation” : Anchor Mom.

Warren Myers : may 11 bglug meeting 6:30p at beaumont branch: topic – freeipa

May 04, 2015 03:55 PM

We will be meeting at the Beaumont Library Branch at 6:30p on 11 May.

Our speaker is the LUG’s own Nathaniel McCallum, one of the FreeIPA maintainers – and all-around nice guy.

Come out and support the LUG, learn something new, and meet cool people.

Warren Myers : all hail the thunderstorm!

April 30, 2015 07:06 PM

Got our first hail of the year today – pea sized, and not much (thankfully) – but it’s here.

Warren Myers : owncloud vs pydio – more diy cloud storage

April 30, 2015 11:55 AM

Last week I wrote a how-to on using Pydio as a front-end to a MooseFS distributed data storage cluster.

The big complaint I had while writing that was that I wanted to use ownCloud, but it doesn’t Just Work™ on CentOS 6*.

After finishing the tutorial, I decided to do some more digging – because ownCloud looks cool. And because it bugged me that it didn’t work on CentOS 6.

What I found is that ownCloud 8 doesn’t work on CentOS 6 (at least not easily).

The simple install guide and process really applies to version 8, and the last release that can be speedy-installed on CentOS 6 is 7. And as everyone knows, major version releases often make major changes in how they work. This appears to be very much the case with ownCloud going from 7 to 8.

In fact, the two pages needed for installing ownCloud are so easy to follow, I see no reason to copy them here. It’s literally three shell commands followed by a web wizard. It’s almost too easy.

You need to have MySQL/MariaDB installed and ready to accept connections (or use SQLite) – make a database, user, and give the user perms on the db. And you need Apache installed and running (along with PHP – but yum will manage that for you).
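
The database prep boils down to a few statements run in the mysql client as root (a sketch – the database name, user, and password are placeholders; pick your own):

```shell
# Inside "mysql -u root -p", run (placeholder names/password):
#   CREATE DATABASE owncloud;
#   GRANT ALL PRIVILEGES ON owncloud.* TO 'owncloud'@'localhost' IDENTIFIED BY 'changeme';
#   FLUSH PRIVILEGES;
```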

If you’re going to use MooseFS (or any other similar tool) for your storage backend to ownCloud, be sure, too, to bind mount your MFS mount point back to the ownCloud data directory (by default it’s /var/www/html/owncloud/data). Note: you could start by using local storage for ownCloud, and only migrate to a distributed setup later.
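
That bind mount might look like the following (a sketch – /mnt/mfs is an assumed MooseFS mount point; the data path is the ownCloud default mentioned above):

```shell
# One-off bind mount (run as root); point /mnt/mfs at your actual MFS mount:
#   mount --bind /mnt/mfs /var/www/html/owncloud/data
# Equivalent /etc/fstab entry to persist it across reboots:
#   /mnt/mfs  /var/www/html/owncloud/data  none  bind  0 0
```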

Pros of Pydio

  • very little futzing needed to make it work with CentOS 6
  • very clean user management
  • very clean webui
  • light system requirements (doesn’t even require a database)

Pros of ownCloud

  • apps available for major mobile platforms (iOS, Android) and the desktop
  • no futzing needed to work with CentOS 7
  • very clean user management
  • clean webui

Cons of Pydio

  • no interface except the webui

Cons of ownCloud

  • needs a database
  • heavier system requirements
  • doesn’t like CentOS 6

What about other cloud environments like Seafile? I like Seafile, too – I have it running, in fact, and would recommend it – though I think there are now better options (including ownCloud & Pydio).


*Why do I keep harping on the CentOS 6 vs 7 support / ease-of-use? Because CentOS / RHEL 7 is different from previous releases. I covered that it was different for the Blue Grass Linux User Group a few months ago. Yeah, I know I should be embracing the New Way™ of doing things – but like most people, I can be a technical curmudgeon (especially humorous when you consider I work in a field that is about not being curmudgeonly).

Guess this means I really need to dive into the new means of doing things (mostly the differences in how services are managed) – fortunately, the Fedora Project put together this handy cheatsheet. And Digital Ocean has a slew of tutorials on basic sysadmin things – one I used for this comparison was here.

Mark Turner : David Simon: ‘There are now two Americas. My country is a horror show’ | US news | The Guardian

April 29, 2015 04:47 PM

More of David Simon.

America is a country that is now utterly divided when it comes to its society, its economy, its politics. There are definitely two Americas. I live in one, on one block in Baltimore that is part of the viable America, the America that is connected to its own economy, where there is a plausible future for the people born into it. About 20 blocks away is another America entirely. It’s astonishing how little we have to do with each other, and yet we are living in such proximity.

There’s no barbed wire around West Baltimore or around East Baltimore, around Pimlico, the areas in my city that have been utterly divorced from the American experience that I know. But there might as well be. We’ve somehow managed to march on to two separate futures and I think you’re seeing this more and more in the west. I don’t think it’s unique to America.

via David Simon: 'There are now two Americas. My country is a horror show' | US news | The Guardian.

Mark Turner : David Simon on Baltimore’s Anguish | The Marshall Project

April 29, 2015 04:47 PM

Great interview of David Simon on the Baltimore police situation.

David Simon is Baltimore’s best-known chronicler of life on the hard streets. He worked for The Baltimore Sun city desk for a dozen years, wrote “Homicide: A Year on the Killing Streets” (1991) and with former homicide detective Ed Burns co-wrote “The Corner: A Year in the Life of an Inner-City Neighborhood”(1997), which Simon adapted into an HBO miniseries. He is the creator, executive producer and head writer of the HBO television series “The Wire” (2002–2008). Simon is a member of The Marshall Project’s advisory board. He spoke with Bill Keller on Tuesday.

via David Simon on Baltimore’s Anguish | The Marshall Project.

Warren Myers : jump start your brain by doug hall

April 29, 2015 11:55 AM

I’m happy I didn’t pay for this copy of Jump Start Your Brain.

I’m saddened someone else did in order to give it to me.

The core of Doug Hall’s creative self-help book from 1996 is decent: get outside yourself, remember what it’s like to be a kid, have fun, don’t take yourself too seriously, and be willing to take calculated risks.

The problem is that summary could be said of pretty much any 3-5 page group of the book, and the rest of the pages seem to be filled with text, quotes, and graphics to show you that you can’t be effectively creative if you’re stagnant in your thinking.

Save yourself the trouble of buying (or even reading) this book, and instead take its core advice, summarized above.

Maybe version 2.0 is better? I dunno. Not really psyched to find out.

But the blog looks nice.

Mark Turner : Death dream

April 28, 2015 02:55 AM

[image: i-told-you-i-was-sick]

I don’t normally post about my dreams but this one has been on my mind. An entry from my dream journal, dated 16 July 2013:

I dreamt that I had 1,346 more days to live. I would die of an expensive disease like cancer, one that would stretch the limits of my health insurance. It was all matter-of-fact. According to the calculator on timeanddate.com, 1,346 days from now is Thursday, 23 March 2017. Of course, I am not ready to die and almost certainly won’t be ready on 23 March 2017. Even so, it makes me consider how I might choose to spend these days if I know I only have x number left.

To add some detail, I was told in my dream by someone in authority that this was how many days I had left to live. It was simply explained to me that this was how it was going to be. This was my fate. And it did seem matter-of-fact, as if this was the plan I had agreed to all along. I recall not being particularly excited or concerned about the news.

And the way the data was presented in days rather than a date really stuck with me. It is a very unusual way of conveying that information, perhaps so that I would better remember it.

Dreams don’t always come true. I know this. This dream had a very sober reality that I can’t ignore, though. It is an important message to me.

So if it’s wrong, we will all have a good laugh. I will go ahead and pen a future blog entry, scheduled to post on 24 March 2017. With good fortune perhaps I will mock it along with everyone else. In the meantime, though, I am going to take in as much as I can in the 696 days I might have left.

Because you never know when you might die. Or do you?

Warren Myers : steam by andrea sutcliffe

April 27, 2015 11:55 AM

Andrea Sutcliffe’s book Steam: The Untold Story of America’s First Great Invention was a pure joy to read. This is the second review I’m writing with my “new” system, and I hope you find this book as interesting as I have.

In 1784, James Rumsey designed a boat that could, by purely mechanical means, move its way upstream. What he devised was truly brilliant: imagine a catamaran or pontoon boat with a platform across the two hulls. Anchored to the platform is a waterwheel. The waterwheel dips into the river, and is connected via a linkage to poles that push the boat against the current like a Venetian Gondola.

Why did he develop such a device? Because at the time, shipping by barge and the like was incredibly simple downstream – you load up the barge, give it a small crew, and float downriver. But there was no way of mechanically returning the vessel upstream (short of sail power, which can be fickle to use and eats a lot of otherwise-usable cargo area). So barges and shipping vessels tended to be crudely made so they would only ever go downstream – at their destination they’d be turned into building materials, and the crews would have to return on foot. To put this in perspective, it took about 4 weeks to float a barge from Pittsburgh down the Ohio to the Mississippi to New Orleans. And it took about 6 months to get home.

Enter the need for reliable mechanical ship propulsion.

Beginning in his teens as a surveyor for the 6th Lord Fairfax, George Washington became enamored with the idea of inland navigation – that is, using streams, canals, rivers, and lakes to transport people and goods instead of the ocean. During his tenure as a surveyor, then an engineer, then a general, he never lost sight of what he viewed as the budding nation’s biggest hurdle to westward expansion – the overwhelmingly high cost of transporting goods from east to west, and vice versa. Along the coast, transport was simple and cheap. But to go far inland made prices exorbitantly high for both consumers and shippers – which made markets hard to tap.

The initial days of the steam wars are proof that ideas are worthless. Stationary steam engines, like those made by Boulton & Watt were too heavy and inefficient to possibly consider putting on a boat – at any scale. So while the idea of steam-powered travel had been running around folks’ minds for 20+ years by the time Rumsey built his simple mechanical boat, there was no way to practically use it.

What was needed were major improvements on steam engine design and implementation before wider applications for their power could be found. This is where the steamboat wars start to become exciting. Independently, Rumsey and a man named John Fitch (with his business partner) developed the pipe boiler which reduced the amount of water needed for operating an engine for the same power output, increased fuel efficiency, cut heating time, and lightened the engine itself. Traditional steam engines used a pot boiler – effectively a massive tank of water that would be heated in gestalt. As anyone who has ever timed how long it takes to start boiling water in a tea kettle vs a stock pot knows, water is very difficult to heat, and lots of energy is needed to move it even a couple degrees.

The fact is, that one new idea leads to another, that to a third, and so on through a course of time until someone, with whom none of these ideas was original, combines all together, and produces what is justly called a new invention. –Thomas Jefferson

Fascinatingly, Thomas Jefferson was against the idea of patents and copyright law, and likely would have campaigned heavily against it in the Constitutional process had he not been Minister to France. From a letter he wrote years after serving on the first Patent Commission Board:

He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. That ideas should freely spread from one to another over the globe for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature… Inventions then cannot, in nature, be a subject of property. Society may give an exclusive right to the profits arising from them, as an encouragement to men to pursue ideas which may produce utility, but this may or may not be done, according to the will and convenience of the society.

Contrast this to the efforts of both Fitch and Rumsey who lobbied for patent boards of some kind (at both the state and federal levels) between the end of the Revolutionary War and the ratification of the United States Constitution.

Sutcliffe’s account of the first “steamboat wars” shows that intellectual property litigation is an expensive, time-consuming, and distracting effort – whose end may or may not have any value.

Progress is an illusion, it happens, but it is slow and invariably disappointing. –George Orwell

Thornton’s condenser is undoubtedly one of the best calculated to condense without a jet of water, but I conceive the difficulty of getting rid of the air insurmountable … when [the air] is drove back again by the steam to the cold condenser, it becomes nearly equal to common air in density, and skulks into the bottom of the condenser for security. –John Fitch (describing a new condenser design in 1790)

Based upon the extensive research Ms Sutcliffe has done into the early history and designs of steam engines and their associated mechanical conveyances, an old idea of mine has newly gained plausible validity: that of a steam-powered tank. Back in high school I postulated that both the power-to-weight and power-to-size ratios of steam engines had advanced sufficiently by the late 1850s that, in conjunction with a primitive form of caterpillar track design (which Fitch would have called an “endless chain of feet” (vs an early idea of his to use an “endless chain of paddles”)), the first fully-mechanized war machines could have been built and sent into battle not in WWI, as the first tanks actually were, but during the Civil War – 50 years sooner. Leonardo da Vinci had designed a human-powered armored car in the late 15th century. Replacing man power with steam power could have been a logical thing to do – but no one ever did.

In the availability of men willing to persevere with a possibly “ridiculous” idea, America had an advantage. –Frank D Prager on the early successes of the Industrial Revolution in America.

Fitch and Rumsey took their war to the people in a series of “pamphlets” published over the course of many months. From Sutcliffe’s description of a “pamphlet” in this context, it seems they were the late 18th century version of a sourced blog or op-ed. Ranging from 20 to 50 (or more) pages in length, with affidavits, letters, and histories presented, the pamphlet was the common man’s research or position paper. I suppose they may have been used by others, too – but the context given in Steam shows them used as marketing and propaganda pieces.

He that studies and writes on the improvements of the arts and sciences labours to benefit generations unborn, for it is impossible that his contemporaries will pay any attention to him. –Oliver Evans

It’s the same each time with progress. First they ignore you, then they say you’re mad, then dangerous, then there’s a pause and then you can’t find anyone who disagrees with you. –Tony Benn (British Labour politician)

Seems that’s where Gandhi may have gotten the inspiration for this famous quotation:

First they ignore you, then they laugh at you, then they fight you, then you win.

Or perhaps it was Benn who was inspired by Gandhi. Or maybe they just realized the same thing independently.

Mark Turner : Charles Lane has taken vet fire before

April 27, 2015 12:07 AM

Today’s opinion piece is not the first time Charles Lane has come under fire from veterans. Veteran blogger Jonn Lilyea took Lane apart after Lane took aim last year at TriCare, the veteran health care system:

So, this fairly disingenuous fellow, Charles Lane, writes in the Washington Post opinion section about how we veterans don’t deserve Tricare as it currently exists. Apparently, we shouldn’t expect the government to honor its promises after we’ve fulfilled our commitment;

And this:

I wouldn’t bother mailing your ignorant ass, Mr. Lane. Especially someone who feels a need to say that he respects and honors veterans, you know, right before he throws us under the bus. That’s probably the most disingenuous statement one of these mighty mouths can make. I respect and honor journalists at the Washington Post, but they should all be tarred and feathered and run out of town on a rail. See how that feels, Mr. Lane? At least he spared us the usual “My grandfather’s neighbor’s doctor’s dog’s mother’s owner was a veteran, so I respect and honor veterans.”

Ouch.

Mark Turner : Privatizing veteran’s care? I don’t think so

April 26, 2015 11:58 PM

Journalist Charles Lane



Washington Post opinion writer Charles Lane suggested today that “market signals” can do a better job than the Veteran’s Administration in taking care of our nation’s veterans.

Without market signals to help allocate resources, long waits and other patient frustrations are inevitable, no matter how sincerely, or how threateningly, Washington orders their elimination.

Ah yes, market signals. That must be why every hospital in America is clamoring to staff its cardiology department, since heart disease is the leading cause of death in the United States. Cancer is #2, so cancer centers are springing up everywhere, too. There’s a huge market for these services but do they do anything to actually advance medical science? The vast majority of them do not. They are, however, unbelievably profitable for the hospitals that have them.

“Market signals” would say every hospital needs heart and cancer centers, but what about the other diseases that are just as deadly if not as popular? ALS was off the public’s radar until last year’s “Ice Bucket Challenge.” The fad brought in more research money for ALS than ever but will the interest remain? Should we not pursue research and treatment because the “market signals” say it’s not as profitable as cancer? Do you tell your loved one with ALS, “sorry, dear. Our death panels, … er, I mean “Wall Street analysts” … say you should’ve gotten cancer instead.”

Serving in the military is one of the few jobs around where one knows full well that one could be asked to sacrifice one’s life. Military veterans bring to civilian life unique health issues. Veterans make up only 13% of the American population. Are “market signals” enough to provide the care to this group that was promised when these citizens signed on the bottom line?

Even to suggest that veterans’ health care should be partially spun off to the private sector — that, say, former service members should be provided a generous subsidy to purchase health insurance — is to invite a charge of callousness toward those who have sacrificed so much, or to risk being labeled a right-wing ideologue.

We not only ask for military members to put their lives at risk, we pay them poorly for it, too. Over 5,000 active duty military members live on SNAP benefits (food stamps), and base pay is barely above the poverty line. Health benefits are one of the few upsides to a life of constant financial struggle, relocation, and time away from family. If our country is going to ask these sacrifices of our men and women in uniform, it’s our duty to honor our commitment to care for them when their service is done. Through their service to our country, veterans have earned the right to heal their broken bodies. Does it make you “callous” to suggest otherwise? Hell fucking yeah, it does.

As for VA construction, undoubtedly its woes reflect a lack of institutional competency, compared with other federal agencies such as the Army Corps of Engineers, which some reformers believe should take over the job. Yet the root cause would seem to be the insistence on a dedicated system of state-run hospitals in the first place.

Here Lane blames state-run hospitals, yet offers no supporting evidence. I suppose it’s okay if he says it “would seem.”

Let’s talk about state-run hospitals a minute. When our infant son became feverish on vacation to Italy years ago, the state-run hospital patched him up in a caring and professional manner without costing us a dime. I reentered the VA system again after mysterious health issues I suspect are Gulf War Illness became worse. The entire process was unexpectedly smooth, with friendly assistance and efficient care. My appointment was actually on time and I never felt the least bit rushed by my primary care physician. She was happy to listen rather than dismiss me in mere minutes the way civilian doctors do.

You know how efficient “market signal-based” health care is? Market-based health care charges you $30 for an aspirin and $546 for six liters of saltwater. Market-based health care sues poor patients when they can’t afford to pay. It’s a market worth billions of dollars, which means many billions of those dollars go to waste.

Is there waste in the VA? Sure. But any VA waste absolutely pales in comparison to the waste generated by the private health care market. Yes, it’s expensive to care for our veterans. As long as there’s a need for veterans, though, there’s a need to honor our commitments and provide them with the health care they’ve earned, expenses be damned.

David Cafaro : Using a ReadyNAS with OS6 as a central syslog server

April 24, 2015 03:05 AM

In the process of building out my network intelligence system I need to have a central location to collect system and event logs on my network.  Since my ReadyNAS has Linux under the hood I figured what better place (since it has plenty of space to store LOTS of logs).  Here is what I did.

First, you need to have a ReadyNAS with OS6 on it.  In my case I have one of the older ReadyNAS Pro 6 boxes which only officially support the older 4.x OS.  But there is a very easy way to upgrade to OS6, and it has been very reliable for me.  The downside is that it will require wiping out all data on your NAS and reformatting (Backup, Backup, BACKUP!).  I believe it’s well worth the hassle of backing up and restoring data to get this upgrade.  It will void your warranty (or make it much more difficult to get through tech support), but it appears that Netgear has been reasonably responsive in adding fixes for the unsupported legacy hardware.  Once my NAS was converted, updates have been easy and automatic.  Anyway, here is the info I followed to convert:  ReadyNAS Forums

Now to setup syslog (rsyslog) to receive incoming logs on your network do the following:

  1. Log into your NAS and enable SSH
    1. Go to System -> Settings -> Service -> SSH
  2. Create a new folder to store/share your logs
    1. Go to Shares -> Choose a Volume (or create one)
    2. Create a new Folder (call it logs?) and set permissions as you like
  3. Create a new group
    1. Go to Accounts -> Groups -> New Group
    2. Create a new Group (call it logs?) and set permissions as you like
  4. Go back to your new “logs” share folder and set permissions such that the “logs” group has read/write perms
    (These are very liberal permissions and basic groups/users, you can go much more restrictive, which I would recommend once you’ve got the basics working)
  5. Now ssh to your ReadyNAS as root using the same password as your web based admin account
  6. Install rsyslog
    1. apt-get install rsyslog
  7. Configure rsyslog
    1. vim.tiny /etc/rsyslog.conf
      If you don’t know vim, go read up first – you need to know how to insert, delete, and save
    2. Change the following lines:

      Remove the # signs in front of these lines at the top:
      $ModLoad imudp
      $UDPServerRun 514
      $ModLoad imtcp
      $InputTCPServerRun 514

      Add the # sign to these lines:
      #*.*;auth,authpriv.none -/var/log/syslog
      #cron.* /var/log/cron.log
      #daemon.* -/var/log/daemon.log
      #kern.* -/var/log/kern.log
      #lpr.* -/var/log/lpr.log
      #mail.* -/var/log/mail.log
      #user.* -/var/log/user.log
      #mail.info -/var/log/mail.info
      #mail.warn -/var/log/mail.warn
      #mail.err /var/log/mail.err
      #news.crit /var/log/news/news.crit
      #news.err /var/log/news/news.err
      #news.notice -/var/log/news/news.notice
      #*.=debug;\
      #            auth,authpriv.none;\
      #            news.none;mail.none -/var/log/debug
      #*.=info;*.=notice;*.=warn;\
      #             auth,authpriv.none;\
      #             cron,daemon.none;\
      #             mail,news.none -/var/log/messages

      And add these lines to the bottom:
      $template RemoteLog,"/data/logs/%$YEAR%/%$MONTH%/%fromhost-ip%/syslog.log"
      *.* ?RemoteLog

    3. Be sure to change the /data/logs part to match with your volume and folder you created in steps 2 above
  8. Now enable and restart rsyslog
    1. systemctl restart rsyslog.service
    2. systemctl enable rsyslog.service
  9. Check to make sure rsyslog started happily
    1. systemctl status rsyslog.service
    2. tailf /data/logs/2015/03/127.0.0.1/syslog.log
      You should see something like this: rsyslogd: [origin software="rsyslogd" swVersion="5.8.11" x-pid="24127" x-info="http://www.rsyslog.com"] start
  10. Log out of SSH and disable it if you don’t need it anymore.

That should cover the basics.  By default the ReadyNAS will log as from an IP of 127.0.0.1, all other hosts will log from their IPs on your network.  There is of course a lot more custom configuration you can do.  This is just the basics.  You will also be able to view your logs from the shared volume you created.
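
With rsyslog restarted, you can sanity-check the whole path from another machine on the network. A minimal sketch using util-linux’s logger (the IP is a placeholder – substitute your ReadyNAS address; -d selects UDP, -n the server, -P the port, -t the tag):

```shell
# Fire one test message at the syslog server over UDP port 514:
logger -n 127.0.0.1 -P 514 -d -t central-syslog-test "hello from a client"
# On the NAS it should show up under /data/logs/<YEAR>/<MONTH>/<client-ip>/syslog.log
```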

I commented out a lot of lines above to avoid duplicate logging in the /var/log directory, as that’s only about 4GB in size.  You can always re-enable them and change their path if you choose.

 

Eric Christensen : Fedora Security Team’s 90-day Challenge

April 23, 2015 05:05 PM

Earlier this month the Fedora Security Team started a 90-day challenge to close all critical and important CVEs in Fedora that came out in 2014 and before.  These bugs include packages affected in both Fedora and EPEL repositories.  Since we started the process we’ve made some good progress.

Of the thirty-eight Important CVE bugs, six have been closed, three are on QA, and the rest are open.  The one Critical bug, rubygems-activesupport in EPEL, still remains but may be fixed as early as this week.

Want to help?  Please join us in helping make Fedora (and EPEL) a safer place and pitch in to help close these security bugs.


Mark Turner : We Can’t Let John Deere Destroy the Very Idea of Ownership | WIRED

April 23, 2015 04:39 PM

You should have the right to use anything you own the way you want to use it. John Deere needs to get a grip.

It’s official: John Deere and General Motors want to eviscerate the notion of ownership. Sure, we pay for their vehicles. But we don’t own them. Not according to their corporate lawyers, anyway.

In a particularly spectacular display of corporate delusion, John Deere—the world’s largest agricultural machinery maker—told the Copyright Office that farmers don’t own their tractors. Because computer code snakes through the DNA of modern tractors, farmers receive “an implied license for the life of the vehicle to operate the vehicle.”

It’s John Deere’s tractor, folks. You’re just driving it.

via We Can't Let John Deere Destroy the Very Idea of Ownership | WIRED.

Warren Myers : hey yahoo! sports – why not always post the magic number for every team?

April 23, 2015 03:00 PM

Since the magic number (and I’ll take the example of baseball, because while I don’t get to watch them much, I do follow the Mets) is so easy to calculate, why not post it on the standings as soon as there have been games played?

This would be a good use of technology relative to baseball (or any sport).

In case you’re wondering, the math for the magic number is as follows:

G + 1 − WA − LB

where

  • G is the total number of games in the season
  • WA is the number of wins that Team A has in the season
  • LB is the number of losses that Team B has in the season

As of today, the magic number for the Mets is 162 + 1 – 12 – 6, or 145.
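
If you want to compute it yourself, the arithmetic fits in a shell one-liner (the numbers below are the Mets example from above):

```shell
# G = games in the season, WA = Team A's wins, LB = Team B's losses
G=162; WA=12; LB=6
echo "magic number: $(( G + 1 - WA - LB ))"   # prints "magic number: 145"
```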

Warren Myers : why do i use digital ocean?

April 22, 2015 02:41 AM

Besides the fact that I have a referral code, I think Digital Ocean has done a great job of making an accessible, affordable, cloud environment for folks (like me) to spin-up and -down servers for trying new things out.

You can’t beat an average of 55 seconds to get a new server.

There are other great hosting options out there. I know folks who work at and/or use Rackspace. And AWS. Or Chunk Host.

They all have their time and place, but for me, DO has been the best option for much of what I want to do.

Their API is simple and easily-accessed, billing is straight-forward, and you can make your own templates to deploy servers from. For example, I could make a template for MooseFS Chunk servers so I could just add new ones whenever I need them to the cluster.

And I can expand/contract servers as needed, too.

Warren Myers : create your own clustered cloud storage system with moosefs and pydio

April 21, 2015 11:09 AM

This started-off as a how-to on installing ownCloud. But their own installation procedures don’t work for the 8.0x release on CentOS 6.

Most of you know I’ve been interested in distributed / cloud storage for quite some time.

And that I find MooseFS to be fascinating. As of 2.0, MooseFS comes in two flavors – the Community Edition, and the Professional Edition. This how-to uses the CE flavor, but it’d work with the Pro version, too.

I started with the MooseFS install guide (pdf) and the Pydio quick start steps. And, as usual, I used Digital Ocean to host the cluster while I built it out. Of course, this will work with any hosting provider (even internal to your data center using something like Backblaze storage pods – I chose Digital Ocean because they have hourly pricing; Chunk Host is a “better” deal if you don’t care about hourly pricing).

In many ways, this how-to is in response to my rather hackish (though quite functional) need to offer file storage in an otherwise-overloaded lab several years back. Make sure you have “private networking” (or equivalent) enabled for your VMs – don’t want to be sharing-out your MooseFS storage to just anyone :)

Also, as I’ve done in other how-tos on this blog, I’m using CentOS Linux for my distro of choice (because I’m an RHEL guy, and it shortens my learning curve).

With the introduction out of the way, here’s what I did – and what you can do, too:

Preliminaries

  • spin-up at least 3 (4 would be better) systems (for purposes of the how-to, low-resource (512M RAM, 20G storage) machines were used; use the biggest [storage] machines you can for Chunk Servers, and the biggest [RAM] machine(s) you can for the Master(s))
    • 1 for the MooseFS Master Server (if using Pro, you want at least 2)
    • (1 or more for metaloggers – only for the Community edition, and not required)
    • 2+ for MooseFS Chunk Servers (minimum required to ensure data is available in the event of a Chunk failure)
    • 1 for Pydio (this might be able to co-reside with the MooseFS Master – this tutorial uses a fully-separate / tiered approach)
  • make sure the servers are either all in the same data center, or that you’re not paying for inter-DC traffic
  • make sure you have “private networking” (or equivalent) enabled so you do not share your MooseFS mounts to the world
  • make sure you have some swap space on every server (may not matter, but I prefer “safe” to “sorry”) – I covered how to do this in the etherpad tutorial

MooseFS Master

  • install MooseFS master
    • curl "http://ppa.moosefs.com/RPM-GPG-KEY-MooseFS" > /etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS && curl "http://ppa.moosefs.com/MooseFS-stable-rhsysv.repo" > /etc/yum.repos.d/MooseFS.repo && yum -y install moosefs-master moosefs-cli
  • make changes to /etc/mfs/mfsexports.cfg
    • # Allow everything but "meta".
    • #* / rw,alldirs,maproot=0
    • 10.132.0.0/16 / rw,alldirs,maproot=0
  • add hostname entry to /etc/hosts
    • 10.132.41.59 mfsmaster
  • start master
    • service moosefs-master start
  • see how much space is available to you (none to start)
    • mfscli -SIN

MooseFS Chunk(s)

  • install MooseFS chunk
    • curl "http://ppa.moosefs.com/RPM-GPG-KEY-MooseFS" > /etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS && curl "http://ppa.moosefs.com/MooseFS-stable-rhsysv.repo" > /etc/yum.repos.d/MooseFS.repo && yum -y install moosefs-chunkserver
  • add the mfsmaster line from previous steps to /etc/hosts
    • cat >> /etc/hosts
    • 10.132.41.59 mfsmaster
    • <ctrl>-d
  • make your share directory
    • mkdir /mnt/mfschunks
  • add your freshly-made directory to the end of /etc/mfs/mfshdd.cfg, with a size you want to share
    • /mnt/mfschunks 15GiB
  • start the chunk
    • service moosefs-chunkserver start
  • on the MooseFS master, make sure your new space has become available
    • mfscli -SIN
  • repeat for as many chunks as you want to have

Pydio / MooseFS Client

  • install MooseFS client
    • curl "http://ppa.moosefs.com/RPM-GPG-KEY-MooseFS" > /etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS && curl "http://ppa.moosefs.com/MooseFS-stable-rhsysv.repo" > /etc/yum.repos.d/MooseFS.repo && yum -y install moosefs-client
  • add the mfsmaster line from previous steps to /etc/hosts
    • cat >> /etc/hosts
    • 10.132.41.59 mfsmaster
    • <ctrl>-d
  • mount MooseFS share somewhere where Pydio will be able to get to it later (we’ll use a bind mount for that in a while) – the mount point has to exist first
    • mkdir -p /mnt/mfs && mfsmount /mnt/mfs -H mfsmaster
  • install Apache and PHP
    • yum -y install httpd
    • yum -y install php-common
      • you need more than this, and hopefully Apache grabs it for you – I installed Nginx then uninstalled it, which brought-in all the PHP stuff I needed (and probably stuff I didn’t)
  • modify php.ini to support large files (Pydio is exclusively a webapp for now)
    • memory_limit = 384M
    • post_max_size = 256M
    • upload_max_filesize = 200M
  • grab Pydio
    • you can use either the yum method, or the manual – I picked manual
    • curl -O http://hivelocity.dl.sourceforge.net/project/ajaxplorer/pydio/stable-channel/6.0.6/pydio-core-6.0.6.tar.gz
      • URL correct as of publish date of this blog post; without -O, curl dumps the tarball to stdout
  • extract Pydio tgz to /var/www/html
  • move everything in /var/www/html/data to /mnt/mfs
  • bind mount /mnt/mfs to /var/www/html/data
    • mount --bind /mnt/mfs /var/www/html/data
  • set ownership of all Pydio files to apache:apache
    • cd /var/www/html && chown -R apache:apache *
    • note – this will give an error such as the one in this screenshot:
    • (screenshot of the error) this is “ok” – but don’t leave it like this (good enough for a how-to, not production)
  • start Pydio wizard
  • fill-in forms as they say they should be (admin, etc)
    • I picked “No DB” for this tutorial – you should use a database if you want to roll this out “for real”
  • log in and start using it
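If you are building more than one of these, the three php.ini changes above are easy to script with sed. A hedged sketch – run here against a scratch copy (./php.ini.demo, a name I made up for the demo) so you can eyeball the result before pointing it at the real /etc/php.ini:

```shell
# scratch copy standing in for /etc/php.ini
PHP_INI=./php.ini.demo
printf 'memory_limit = 128M\npost_max_size = 8M\nupload_max_filesize = 2M\n' > "$PHP_INI"

# bump the limits to the values used in this how-to
sed -i -e 's/^memory_limit.*/memory_limit = 384M/' \
       -e 's/^post_max_size.*/post_max_size = 256M/' \
       -e 's/^upload_max_filesize.*/upload_max_filesize = 200M/' "$PHP_INI"

# show the three updated limits
grep -E 'memory_limit|max' "$PHP_INI"
```

Keep the ordering in mind: post_max_size should stay larger than upload_max_filesize, and memory_limit larger than both.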


Now what?

Why would you want to do this? Maybe you need an in-house shared/shareable storage environment for your company / organization / school / etc. Maybe you’re just a geek who likes to play with new things. Or maybe you want to get into the reselling business, and being able to offer a redundant, clustered, cloud, on-demand type storage service is something you, or your customers, would find profitable.

Caveats of the above how-to:

  • nothing about this example is “production-level” in any manner (I used Digital Ocean droplets at the very small end of the spectrum (512M memory, 20G storage, 1 CPU))
    • there is a [somewhat outdated] sizing guide for ownCloud (pdf) that shows just how much it wants for resources in anything other than a toy deployment
    • Pydio is pretty light on its basic requirements – which also helped this how-to out
    • while MooseFS is leaner when it comes to system requirements, it still shouldn’t be nerfed by being stuck on small machines
  • you shouldn’t be managing hostnames via /etc/hosts – you should be using DNS
    • DNS settings are far more than I wanted to deal with in this tutorial
  • security has, intentionally, been ignored in this how-to
    • just like verifying your inputs is ignored in the vast majority of programming classes, I ignored security considerations (other than putting the MooseFS servers on non-public-facing IPs)
    • don’t be dumb about security – it’s a real issue, and one you need to plan-in from the very start
      • DO encrypt your file systems
      • DO ensure your passwords are complex (and used rarely)
      • DO use key-based authentication wherever possible
      • DON’T be naive
  • you should be on the MooseFS mailing list and the Pydio forum
    • the communities are excellent, and have been extremely helpful to me, even as a lurker
  • I cannot answer more than basic questions about any of the tools used herein
  • why I picked what I picked and did it the way I did
    • I picked MooseFS because it seems the easiest to run
    • I picked Pydio because the ownCloud docs were borked for the 8.0x release on CentOS 6 – and it seems better than alternatives I could find (Seafile, etc) for this tutorial
    • I wanted to use ownCloud because it has clients for everywhere (iOS, Android, web, etc)
    • I have no affiliation with either MooseFS or Pydio beyond thinking they’re cool
    • I like learning new things and showing them off to others

Final thoughts

Please go make this better and show-off what you did that was smarter, more efficient, cheaper, faster, etc. Turn it into something you could deploy as an AMI on AWS. Or Docker containers. Or something I couldn’t imagine. Everything on this site is licensed under the CC BY 3.0 – have fun with what you find, make it awesomer, and then tell everyone else about it.

I think I’ll give LizardFS a try next time – their architecture is, diagrammatically, identical to the “pro” edition of MooseFS. And it’d be fun to have experience with more than one solution.

Mark Turner : Liberals and the racist label

April 21, 2015 01:23 AM

Our local, world-famous RPD beat officer posted to the East CAC Facebook page today about his upcoming meeting with the owners of the local shopping center and asked neighbors what things he should discuss with the owners. Several citizens posted thoughtful, helpful critiques of the shopping center, though a few noted how some teens who sometime loiter in the parking lot make them nervous.

This made one neighbor uncomfortable. She responded:

“I’ve shopped at [this store] regularly for five years and I have never–not once–been solicited, approached, or bothered in any way, shape, or form by teenagers or loiterers. I’m confused as to where this concern is coming from (and yeah, I know there was that big fight there a month or so ago) Frankly, it’s making me a little bit uncomfortable, as this thread seems to be a bunch of white people talking about how to make the neighborhood shopping center a better place. A good conversation, for sure, but are (black) teenagers hanging out outside of a local grocery store really a safety concern?”

This led me to dryly remark on Twitter:

“The community discussion made it all the way to 31 posts before a white person accused the other white people of being racist.”

It was a great community discussion, full of good suggestions for the shopping center but as soon as one or two people mentioned feeling uncomfortable by the people hanging out suddenly they were branded racists.

The way I see it there are two types of people: those who give a shit about others and those who do not. You’d be surprised at how well this sums people up. I do my best to treat everyone the way I want to be treated. If you’re good people, it doesn’t matter what color you are: you’re good people and I will be proud to be associated with you.

That said, when a fellow liberal throws a nasty label on others in a holier-than-thou attempt at liberal one-upmanship it really boils my blood. Nowhere in the earlier conversation had anyone mentioned anything about race until the accuser did. In fact, the conversation was perfectly reasonable up until that point, where it took an extreme left turn into name-calling.

Have there been problems at that shopping center? Yes. Even some frightening ones. Still, I consider it safe enough that I have no concern with Kelly shopping there alone. When we moved here, though, the first shopping center story I heard was that of the elderly mother of a well-known community activist getting mugged in the parking lot. She was African-American and to my knowledge no one accused her of being racist.

It was this particular crime that spurred me to urge that Raleigh Police Department boost its presence in the shopping center. This effort led to the opening of the shopping center’s RPD Neighborhood Office, a place in the shopping center where for three years beat officers could stop in and do paperwork or meet with community members. I was disappointed to see the office close last year without much fanfare. Fortunately, the shopping center has progressed to the point that the office is really no longer needed.

So, if a group of loitering teens makes one nervous, does that make one a racist? Having once been a teenage boy, I can say with some authority that any group of unsupervised teenage boys, no matter the race, has innate potential to do stupid things. At the Harris Teeter over in lily-white Cameron Village, teens loitering outside the door have occasionally made me feel uncomfortable. White teens. Teens who often were sitting on the curb in handcuffs by the time I exited the store.

I’m a sailor. I’ve walked some tough streets in my time and I’ve learned how to keep a sharp eye on people when I’m in areas that call for it. I can take care of myself but I understand when a crowd of unruly teens might make someone feel threatened. I don’t belittle them for it, though.

There are enough real examples of racism in the world that there’s no need to make up bullshit ones. Everyone deserves to live in a safe neighborhood. Everyone! Pretending crime doesn’t exist does no one any good, and neither does throwing around damning labels.

Tarus Balog : POSSCON 2015

April 19, 2015 09:32 PM

POSSCON (or the Palmetto Open Source Software Conference) is a regional conference held every year in Columbia, South Carolina. It dawned on me that I travel too much, because when I mentioned to a neighbor that I spent some time in Columbia, she paused and then asked “oh, it’s almost winter down there”. I had to explain that I meant the Columbia that is three hours away and not Colombia in South America.

I really like regional grassroots open source conferences, but for some reason I was never able to make POSSCON. This year I decided to change that and OpenNMS was even able to sponsor it.

Sponsor Sign

POSSCON is organized by IT-ology, a non-profit dedicated to promoting technology careers for students in kindergarten through 12th grade. I think they must know what they are doing since they really know how to organize conferences (they are also responsible for All Things Open held in Raleigh, North Carolina, each October).

We piled five of us into the Ulf-mobile and drove down Monday night. Ben came along even though Tuesday was his birthday, so we decided to go out on Monday night to celebrate. There are a number of highly rated restaurants in the downtown Columbia area, and with my penchant for vintage cocktails and Ben’s taste for whiskey we decided on Bourbon. It was a wonderful evening and for his birthday we bought him a flight of Pappy Van Winkle, an incredibly difficult to find bourbon. The verdict: it is worth the hype.

Pappy van Winkle bottles

The show officially started on Tuesday and spanned two days. The first day consisted of roughly hour-long talks like most conferences. Where it differed was that the talks were held in different buildings around downtown Columbia. While it made it a little harder to jump from one venue to another, the weather, for the most part, was good.

The opening keynote was held at the Music Farm. As a sponsor we had a table which was also in the auditorium and I really liked that. One of the issues with having any sort of booth is that they are often set off in a side room. If you have booth duty you can’t see any of the presentations, and traffic between presentations is light. This way we had some down time during the presentations and yet got a lot of foot traffic in between them. Seemed to make the day go faster. The mayor of Columbia spoke and claimed to be the only mayor in America who was into open source, but I know of at least one other mayor, the mayor of Portland, Oregon, who attends these shows (I should disclose that the City of Portland is an OpenNMS customer). I didn’t want to bring it up though, ’cause this is a good thing to be proud of.

POSSCON Keynote

My presentation on the Linux Desktop was held at the Liberty Tap Room (‘natch) and while it was cool, it wasn’t the best place for presentations. The projector screen was dim (more useful for sports broadcasting at night than for tech talks in the middle of the morning). During one talk I had to listen to the Miller Lite truck idling on the road outside the door as the driver made his delivery.

Mine was the last one of the day, but I wanted to check out the venue so I went early and stayed for a talk on open source licensing (by one of the other sponsors) and one by Jason Hibbets of opensource.com fame.

I thought the presenter of the law talk was pretty brave discussing licensing with Bradley Kuhn in the room, but while I enjoyed the talk I could tell it was over the heads of most of the audience (you have to have lived it to really enjoy the finer aspects of the GPL and enforcement). I liked Jason’s talk, which I had not seen before, on the tools and processes they use at opensource.com to build community.

Jason Hibbets

Toward the end of the day I saw a talk by Erica Stanley on open source and the Internet of Things. It was good but due to the lack of a sound system it was hard to hear everything. I presented after her and didn’t have that problem (grin).

I think my talk on using the Linux Desktop went well. Now three years after leaving Apple I’m still using it and still loving it.

Tuesday evening there was a reception back at Music Farm followed by a speaker/sponsor dinner held at Blue Marlin. Ben, Jess and I ended up at a table with Bradley Kuhn, Erica Stanley and Carol Smith from Google. We talked briefly about the Google Summer of Code. OpenNMS was involved for several years, but these last two years we were not accepted. Last year I was told it was because they wanted to give other projects a chance, and this year, to be quite frank, I don’t think our proposals were strong enough. Instead of complaining like some projects, I am hoping this will motivate the team to do better next year. I think GSoC is a wonderful program and I wish it was around when I was in school, as both the pay and work environment would have been better than the hours I put in at a non-air-conditioned plastic injection molding plant (although I will say the experience motivated me to finish my degree).

Wednesday’s format was a little different. Everything was held at the IT-ology offices, which was good since the weather was rainy all day. It was made up of workshops, and I did two and a half hours on OpenNMS. Everyone seemed to enjoy it.

Overall, it was a great conference. Over 800 people registered and I think they all got their money’s worth. It was also a great way to market Columbia (I know we spent some money there). It has made me look forward to this year’s All Things Open conference (note that the Call for Speakers is open).

Tarus Balog : Review: System 76 Sable

April 18, 2015 05:24 PM

As you might guess, I am a big fan of all things open, and I tend to vote with my wallet. When the need arose to replace some iMacs in the office, I decided to check out the Sable systems offered by Linux-friendly vendor System 76.

System 76 was a sponsor at SCaLE this year (like OpenNMS) and they also sponsored the Bad Voltage Live event where they gave away a laptop and a server, so they already had my goodwill.

Back in 2008 I needed some machines for our training courses, so being an Apple fanboy at the time I bought iMacs. Outfitting training rooms can be problematic if you don’t do training full time because you usually end up with nice systems that you don’t use very often. Seems wasteful, so we decided to use them to run Bamboo and our unit tests for OpenNMS when they weren’t being used for training.

Seth noticed that it was taking those machines around 240 minutes to run the suite of tests versus 160 minutes for the newer iMacs we were using, and this was having a negative impact on development (almost everything we do relies on test driven development). Since we were running Ubuntu on the boxes anyway, I decided on a Linux alternative and chose System 76 for the first six replacement systems.

I like all-in-one systems for training since they tend to move around (we use the training room as a conference room when there are no classes). The all-in-one form factor makes them easy to carry. The Sables I ordered came with a 23.6 inch touch screen at 1080p, 3.1 GHz i7 processor, 16GB of RAM and a 500GB SSD for a total price of US$1731.

The ordering process went smoothly (there was one glitch when the original quote was for seven instead of six but it was quickly corrected). I placed the order on March 18th and they shipped a week later on the 25th.

They arrived in six boxes marked AIO PC:

System 76 boxes

I think AIO must be the manufacturer in China, but I couldn’t find a similar system on the web. One box had a smashed-in corner, so I opened it first, but it was packed well enough that the unit wasn’t damaged:

System 76 open box

I removed the packing and pulled the unit out. It was wrapped to protect the screen.

System 76 screen wrap

and the whole unit was covered in plastic wrap to prevent scratches.

System 76 plastic wrap

These units come with a power brick that is external to the system and I ordered them with a Logitech keyboard and mouse. These came in a separate box along with extra cables, etc., for expansion (unlike Apple products, you can actually work on these systems).

System 76 keyboard box

The hardest part about the whole process was figuring out how to turn the darn thing on. I finally found the switch on the back of the system on the lower right side (as you face it). I felt kind of stupid and yes, I even read the little pamphlet that came with it. Perhaps they should add an IKEA-like drawing with the little dude pointing to the switch.

It booted right up into Ubuntu 14.10, and all I had to do was create an account and set the IP address. Ben was then able to get in and deploy our Bamboo image and we were up and running in no time.

System 76 screen

While we still have some iMacs being used, the Sables have, so far, proven to be a solid replacement. I haven’t really used them as a desktop, yet, but they can run our test suite in a little over an hour, which is almost a four-fold speed-up.

System 76 in a line

While Apple doesn’t offer a 24-inch iMac anymore, the 21-inch version with similar processor, RAM and SSD is US$2399, or quite a premium. The Sable is not nearly as thin or stylish as the iMac, but it is a nice looking machine and after struggling this week to correctly replace the hard drive in a late 2009 iMac I appreciate the fact that I can work on these if I need to, and the extra cables shipped with it even encourage me to do so.

And that’s what open is all about.

Mark Turner : Former Obama Pilot: TWA Flight 800 was shot down, here’s why – NY Daily News

April 17, 2015 04:44 PM

I’m glad I’m not the only one.

Was TWA Flight 800 shot out of the sky? As a former pilot, that is a question I get asked about all the time.

I’m no conspiracy theorist, but let’s be clear: Yes. I say it was. And I believe the FBI covered it up.

There are many reasons to disbelieve the official explanation of what happened to TWA 800 almost 19 years ago, on July 17, 1996, off the South Shore of Long Island. There’s hardly an airline pilot among the hundreds I know who buys the official explanation — that it was a fuel-tank explosion — offered by the National Transportation Safety Board some four years later.

Lots can go wrong with an airplane. Engines can fail; they can catch fire. Devices can malfunction. Pilots make errors.

But jets do not explode in midair.

via Former Obama Pilot: TWA Flight 800 was shot down, here's why – NY Daily News.

Mark Turner : Obama to Remove Cuba From State Sponsor of Terror List – ABC News

April 15, 2015 01:59 AM

Obama removes Cuba from the terror sponsor list. I wonder if Raul Castro will remove America from Cuba’s terror sponsor list?

The terror designation has been a stain on Cuba’s pride and a major stumbling block for efforts to mend ties between Washington and Havana. In a message to Congress, Obama said the government of Cuba "has not provided any support for international terrorism" over the last six months. He also told lawmakers that Cuba "has provided assurances that it will not support acts of international terrorism in the future."

via Obama to Remove Cuba From State Sponsor of Terror List – ABC News.

Alan Porter : tar + netcat = very fast copy

April 13, 2015 10:50 PM

I reformatted a hard disk this weekend. In the process, I needed to copy a bunch of files from one machine to the other. Since both of these machines were smaller embedded devices, neither one of them had very capable CPUs. So I wanted to copy all of the files without compression or encryption.

Normally, I would use "rsync -avz --delete --progress user@other:/remote/path/ /local/path/", but this does both compression (-z) and encryption (via rsync-over-ssh).

Here’s what I ended up with. It did not disappoint.

Step 1 – On the machine being restored:

box1$ netcat -l -p 2020 | tar --numeric-owner -xvf -

Step 2 – On the machine with the backup (run from inside the directory being copied, since tar needs to be told what to archive):

box2$ tar --numeric-owner -cvf - . | netcat -w3 box1 2020
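If you want to sanity-check the tar flags before running them over the network, the same create-to-extract pipeline works over a plain local pipe (the scratch paths below are just for the demo):

```shell
# same tar-to-tar pipeline, minus netcat, against scratch directories
mkdir -p /tmp/src /tmp/dst
echo hello > /tmp/src/file.txt

# create on one side of the pipe, extract on the other
( cd /tmp/src && tar --numeric-owner -cf - . ) | ( cd /tmp/dst && tar --numeric-owner -xf - )

cat /tmp/dst/file.txt   # → hello
```

Once that looks right, swap the local pipe for the netcat pair above and you have the full copy.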