Tarus Balog : Nextcloud and OpenNMS

August 25, 2016 07:40 PM

Last weekend, OpenNMS-er extraordinaire Ronny Trommer was at a conference where he met Jos Poortvliet from Nextcloud. I’ve been following Nextcloud pretty intently since I recognized kindred souls in their desire to create a business that was successful and still 100% open source (and not, for example, fauxpensource). Jos mentioned that Nextcloud was getting a new monitoring API and thought it would be cool if OpenNMS could use it.

Since their API returns the monitoring information as XML, Ronny used the XML Collector to gather the data. Once the data is in OpenNMS, you can graph it, set thresholds, configure notifications, etc.
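If you want to see what the collector will be parsing, it can help to pull the XML by hand first. Here is a quick sketch, assuming the serverinfo endpoint Nextcloud shipped at the time (the host, credentials and exact path are placeholders, so check your own install):

curl -u admin:secret -H "OCS-APIRequest: true" "https://cloud.example.com/ocs/v2.php/apps/serverinfo/api/v1/info"

The response is a single XML document with sections for CPU, memory, storage, shares and active users, which is what the XML Collector configuration then picks apart.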

Available metrics include:

  • CPU load and memory usage
  • Number of active users over time
  • Number of shares in various categories
  • Storage statistics
  • Server settings like PHP version, database type and size, memory limits and more

Here’s an example of the number of files from a small demo system:

Files in Nextcloud

Of course, since OpenNMS is a platform, once the data is in the system you can leverage its integrations with applications such as Grafana:

Nextcloud Metrics in Grafana

Some applications will go on and on about how many “plugins” they have. Often, these are little more than scripts that do something simple, like an SNMP GET, but with all the overhead of having to run a shell. To add something like Nextcloud to OpenNMS, it is just a simple matter of configuring a couple of files, but to make that easier a lot of configurations have been added to a git repository. If you want to try out the Nextcloud integration, follow these instructions.

True open source solutions can offer the best features, performance and value for most companies, but unfortunately there are so few pure open source companies providing them. I applaud Nextcloud and look forward to working with them for years to come.

Tarus Balog : New Additions to OpenNMS

August 25, 2016 04:07 PM

I am very happy to announce that Chris Manigan has joined the OpenNMS team.

Chris has been using OpenNMS since 2010 when he worked at Towerstream in Rhode Island. He gave us a very nice testimonial for the website, and has a lot of experience with using OpenNMS at scale.

Chris Manigan

He put that experience to use at Turbine, ensuring that their infrastructure could deliver gaming content to users who demand performance. Now he’s going to use that experience to ensure that OpenNMS is ready to take on the Internet of Things, for both our internal infrastructure and that of our customers.

I also want to announce that Jesse White, our CTO, and his wife Sara welcomed Charles White into the world early yesterday morning.

Charles White

Weighing 7 pounds and 11 ounces, he is already writing code in Python and we hope to have him making commits in Java in the next week or so.

Tarus Balog : Nagios XI vs. OpenNMS Meridian – the Return of the FUD

August 23, 2016 04:29 PM

It seems like our friends over at Nagios have been watching a little too much election coverage this year, and they’ve updated their “Nagios vs. OpenNMS” document with even more rhetoric and misinformation.

As my three readers may recall, back in 2011 I tore apart the first version of this document. Now they have decided to update it to target our Meridian™ version.

Let’s see how they did (please look at it and follow along as it is quite amusing).

The first misleading bit is the opening paragraph with the phrase “most widely used open-source monitoring project in the world”. Now, granted, they do indicate that means “Nagios Core” but it seems a little disingenuous since what they are selling is Nagios XI, which is much different.

Nagios XI is not open source. It is published under the “Nagios Open Software License” which is about as proprietary as they get. I’m not even sure why the word “open” was added, except to further mislead people into thinking it is open source. The license contains clauses like “The Software may not be Forked” and “The Software may only be used in conjunction with products, projects, and other software distributed by the Company.” Think about it, you can’t even integrate Nagios XI with, say, a home grown trouble ticketing system without violating the license. Doesn’t sound very “open” at all. OpenNMS Meridian is published under the AGPLv3, or a similar proprietary license should your organization have an issue with the AGPL. You don’t have that choice with Nagios XI.

Next, let’s check out the price. The OpenNMS Group has always published its prices on-line. One instance of Meridian, which includes support in the form of access to our “Connect” community, is $6,000. They have it listed as $25,995, which is the price should you choose the much more intensive “Prime” support option. I’m not sure why they didn’t just choose our most expensive product, Ultra Support with the 24×7 option, to make them seem even better.

Nagios XI Node Limitation

Also, note the fine print “Price based on one instance of XI with 220 nodes/devices”. There is no device limit with OpenNMS Meridian. So let’s be clear: for $6,000 you get access to the Meridian software under an open source license versus $5,000 to monitor 220 nodes with extreme limitations on your rights.

Our smaller customers tend to have around 2000 devices, which means to manage that with Nagios XI you would need roughly ten instances costing nearly $50,000 (using the math presented in this document). And from what we’ve heard from customers coming to us from Nagios, the reason it is limited to so few nodes is that you probably can’t run much more on a single instance of Nagios XI. Compare that to OpenNMS where we have customers with over 100,000 devices in a single instance (and they’ve been running it for years).

We also price OpenNMS as a platform. You get everything: trouble-ticketing integration, graphing, reporting, etc. in one application. It looks like Nagios has decided to nickel and dime you for logs, etc. and a thing called “Nagios Fusion” which you’ll need to manage your growing number of Nagios instances since it won’t natively scale. And remember, due to the license you are forbidden from using the software with your own tools.

I especially had to laugh at the “You Speak, We Listen” part. If you have a feature or change you need and you ask nicely, they might make it for you. With OpenNMS Meridian you are free to make any changes you need since it is 100% open source, and with our open issue tracker we address dozens of user requests each point release.

Finally, there is the feature comparison, which at a minimum is misleading and is often just blatantly false. Almost every feature marked as lacking in Meridian exists, and at a level far beyond what Nagios XI can provide. Seriously, is it really objective to state that OpenNMS doesn’t support Nagvis, a specific tool that even has “Nagios” in the name?

Nagvis

I had to laugh at the hubris. They obviously didn’t Google “opennms nagvis”, because, guess what? There has been an OpenNMS Nagvis integration for some time now, contributed by our community. Just in case you were wondering, we have an integration with Network Weathermap as well.

Nagios is just another proprietary software product that wants to lock you into its ecosystem, and this is just a shameful attempt to monetize an application that is long past its prime. Heck, it was the inability of the Nagios leadership to get along with others that resulted in the very popular Icinga fork, and with it Nagios lost a lot of contribution that helped make up its “Thousands of Free Add-Ons” (and the way Nagios took over the community-led plug-in site was also poorly handled). Plus, many of those add-ons won’t scale in an enterprise environment, which probably led to the 220 device limit.

Compare that to OpenNMS. We not only want to encourage you to integrate with other products, we do a lot of it for you. OpenNMS has great graphing, but we also created the first third party plug-in for Grafana. When it comes to mapping, OpenNMS is on the leading edge, with a focus on various topology views that can ultimately handle millions of devices in a fashion that is actually usable. Need to see a Layer 2 topology? Choose the “enhanced linkd view”. Run VMware and vCenter? It is simple to import all of your machines and see them in a view that shows hosts, guests and network storage. Plus the unique ability to focus on just those devices of interest allows you to use a map with hundreds of thousands if not millions of nodes.

Nagios Map

Compare that to the Nagios map screenshot where it looks like “localhost” is having some issues. Oh no, not localhost! That’s like, all of my machines.

As for “Business Process Intelligence” I’ve been told that the Nagios XI version is like our Business Service Monitor “Except BSM is more featureful, and has a significantly better UI/UX”. Need real Business Intelligence? OpenNMS has Red Hat Drools support, the open source leader, built right into the product.

We also support integration with popular Trouble Ticketing systems such as Request Tracker, Jira, OTRS and Remedy. And the kicker is that you can also run any Nagios check script natively in OpenNMS using the “System Execute Monitor”, but once you get used to the OpenNMS platform, why would you?

I’m not really sure why Nagios goes out of its way to spread fear, uncertainty and doubt about OpenNMS. We rarely compete in the same markets. I’m sure that Sunrise Community Banks get their money’s worth from Nagios, and for companies like NRS Small Business Solutions, Nagios might be a good fit. But if you have enterprise and carrier-level requirements, there is no way Nagios will work for you in the long term.

When a company does something like this to mislead, from wrong information about our product to using terms like “open” when they mean “closed”, it shows you what they think of their competition. What does it say about what they think about their customers?

Tarus Balog : The Inverter: Episode 72 – Walking Into Trees

August 10, 2016 12:40 PM

I figured I’d better get this review of the latest Bad Voltage out before the next one drops this week (sigh).

The episode clocks in at a svelte 51 minutes, and mainly focuses on two segments, one on Pokémon Go and the other on streaming music.

As the guys point out, unless you’ve been living in a cave you have probably heard of Pokémon Go (and even people in caves are playing it). It is the augmented reality (AR) game from Niantic based on the characters made popular by Nintendo.

It has also had its share of controversy, with stories of people being injured while playing and in my own neck of the woods a row over people being fined for visiting the grave of a friend.

The game from Niantic that preceded it was Ingress, which I’ve talked about before and I showed it to the team when they did their show in Fulda. Ingress can be pretty addictive, so I was set on not playing Pokémon at all. I didn’t really need another time sink in my life.

But a couple of things happened. First, I was introduced to this short from South Park that parodied Pokémon with “Chinpokomon”. I laughed since “chinpoko” is a rude Japanese word, so of course it was one of the few words of Japanese I know. I was determined to be “chinpokomon” on Pokémon Go.

I installed the app the weekend it was released and tried to register with that “trainer” name. It wasn’t to be. I tried every variation I could think of but it wouldn’t accept it. I’m not sure if it was because they were disallowing names with anything like Pokémon in them, or that, by that time, some of the 10 million people who had downloaded it had had the same idea. So I uninstalled the app and forgot about it.

Flash forward a week or so and not only did Andrea start playing, a bunch of people at the office did too, so I decided to check it out again.

I’ll post a full review soon, but I have a few thoughts to share here. Ingress suffers from three main issues: GPS “spoofing” where people fake their location, people playing multiple accounts and the in-game chat system which is often used to heap abuse on other players.

Pokémon Go is much nicer in that there is no chat system and you can’t trade items (making multiple accounts somewhat useless). That may change in the future but for now you can play in a rather friendly environment. Even in battles your Pokémon don’t die, they “faint” and you get them back. There is still an issue with spoofing, which is how many players access the game in countries that don’t have it.

The problem with Pokémon Go is the gameplay gets old, fast. The variety of game items makes it an order of magnitude more complex than Ingress, but I’m not really into collecting a 100% version of each Pokémon. I do like getting new ones (catch them all) but Niantic has made that pretty difficult. There is a part of the screen that will show you nearby Pokémon but you don’t get a clue as to where they are. There was a website called Pokévision that reverse engineered the API and would display them on a map, and I used that extensively to get uniques. I got a lot of exercise running around the UMN campus during Dev-Jam to get one I needed. I was averaging 25,000 steps a day according to my watch, but since Pokévision has been shut down I am less eager to run around in circles hoping a Pokémon will pop up.

Pokémon Gold Medal

In a couple of weeks of casual play I’ve made it to Level 24 and caught 105 Pokémon (I’ve seen 106, damn Weezing) and my interest is starting to wane (although the Tauros is my favorite, ‘natch). I’ll probably hit Level 25 (where you get access to a new item) and then cut back drastically. Which I think is going to be the major problem with Pokémon Go.

We often eat at this one restaurant in Pittsboro every Friday night, and two weeks ago one young lady who works there was really into the game. This past Friday I asked her how she was doing with it, and she said she’d stopped playing because “it was boring”.

Don’t get me wrong, Niantic has a hit on its hands, I just don’t think they will sustain the level of interest they had at launch.

The guys made some good points about it. Jeremy noted that while it is called “AR” it is really nothing more than taking the video feed from the camera and superimposing Pokémon pictures on it. It does nothing for scale or distance, for example.

Bryan detailed some interesting history that I didn’t know concerning the origin of Niantic. It grew out of a spooky company called Keyhole with designs on tracking and influencing people’s habits (although they are better known for being the technology behind Google Maps and Google Earth). Now, as an Ingress player I’ve already opted in to allow Google to track my location, and it came in handy when Jeremy roofied me at Oktoberfest and I wandered around Munich for a few hours. I had a record of where I had gone.

On a side note, Bryan went on to state that on Android you can’t control access to the microphone. Now, I’ll agree that the only way to be sure would be to have a hardware kill switch installed so you could disable the microphone entirely, but I run a version of AOSP called OmniROM and I seem to have the choice to limit access to the microphone on a “per app” basis.

Android Microphone Permissions

Not sure if that’s available on all Android releases, but it seems to work on mine. Of course, many apps use Google Play Services so there’s that.

The second segment was on streaming music services. I don’t stream music so I don’t really have much to add, but I have heard that Pandora uses OpenNMS so I’m a big fan. (grin)

I do sometimes listen to SiriusXM at my desk. We have it in our cars so I have the option to stream it as well. I was listening to AltNation when I heard a track called “Loud(y)” by Lewis Del Mar. I found it on Soundcloud with a number of other tracks by them, and after it played them all it continued with similar artists. I really liked the mix (which included songs like “Thrill” by CZAR) and ended up listening to it for a couple of days. What I liked most about it is that all of them were from artists new to me. I buy the music I like and so I tend not to get much from streaming, and I also tend to listen when being connected to a network is not feasible (such as in a car or a plane), but I am considering the service from Soundcloud that lets you listen offline (ironically called Soundcloud Go).

Which brings me to another sore point. The guys brought up vinyl. As many people know, vinyl is making a comeback, but dammit, it is just some sort of hipster thing since almost all music today is digitally mastered. You probably haven’t listened to a commercial record that didn’t go through Pro Tools, so when I hear “oh, but vinyl is so much richer and warmer” I have to call bullshit. Get a FLAC version of the song and you can’t get any better. Sure, you may need to upgrade your sound card and your speakers, but when, say, I get a FLAC master track from MC Frontalot it is the one that is sitting on his computer where he created it. It contains all of the information captured, and I can’t see how that gets improved by sticking it on a vinyl record whose sound quality starts to decay the moment you play it for the first time.

(sigh)

The Outro for the episode was kind of cool, as the guys talked about old gadgets and things like BBS’s. I can remember being in Tokyo when the Sharp Zaurus was introduced and I scoured the city looking for one in English. It was a cool device and I also liked the name. And the show brought back memories of having flame wars on a WWIV BBS system over a 2400 baud modem. The host (a high school kid who worked as a bag boy at a grocery store to pay the phone bill) could only afford a single phone line so you had to take turns. It made flame wars kind of fun – once you got in, you’d post your rant, log off, wait 30 minutes and then log back in to see if there was a reply.

All in all a nice, light episode. Nothing too heavy, kind of like a sorbet. Hoping they bring back the meat this week.

Eric Christensen : POSM, OSM without the Internet

August 09, 2016 02:11 PM

Disclaimer: I am in no way affiliated with the POSM or its development.  I’m just an OSM contributor who thought this was neat and wanted to share the love.

For a while I’ve been envisioning some sort of system that would allow map data to be collected over a large area and then committed and later shared without an Internet connection.  Going into a rural area without sufficient or existing Internet connectivity would surely be a problem with using tools for compiling and rendering OpenStreetMap (OSM) data.  I had come up with a few solutions that were not unique and seem to have been tried before.

Sneakernet

Yep, just toss your GPS tracks, pictures, and JOSM output onto a USB thumb drive and walk/drive it over to a centralized location, where Internet connectivity is available, for processing.  Sure, it might take a while to collect all the information, and a while longer to redistribute it to the people in the field, but it works.

Intranet

Okay, being a network geek this is my favorite solution; build your own network!  For the record, I’m not talking about stringing wire from village to village like soldiers did around Europe in WWII.  No, I’m talking about building wireless MANs to connect wired/wireless LANs that may already exist in these villages (or we can build our own!).

Adding our own infrastructure (email, web, and other servers) to the network would provide basic communications between villages with a potential connection to the Internet from a faraway town.

But this is far from fun for a software geek (I’m not one of those).  From here enter the POSM.

POSM

The Portable OpenStreetMap, or POSM, device is a small server that hosts all the tools needed to compile, edit, and publish collected mapping data without Internet connectivity.  The project was discussed at the US State of the Map (2016) and the video is a must-watch.

Of course a POSM could be added to either a Sneakernet or an Intranet to allow distributed data to be collected faster, but the POSM alone seems to make working with this data much easier in the field.

Back to my thoughts

Honestly, my first thought about making a box like this, even before I heard about POSM, was the syncing of data back to the master OSM database.  If you watched the video to the end it appears someone else in the crowd had the same concern.  The answer to this was the use of git to manage conflicts.  To me this is very smart as git was made for this type of use-case (distributed data that needs to be compiled together at a core location).
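To illustrate the idea (this is a generic git round trip, not necessarily how POSM itself wires it up, and the repository paths and remote names are made up):

# in the field, with no Internet: everything gets committed locally
git clone /media/usb/osm-extract.git village-survey
cd village-survey
# ... add GPS traces, imagery, JOSM output ...
git add traces/ edits/
git commit -m "North village survey"

# later, at the core location: pull in each team's work and resolve conflicts once
git remote add team-a /media/usb/village-survey
git fetch team-a
git merge team-a/master

Since every field copy carries the full history, any one of them could also rebuild the central data set if another copy were lost.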

I do wonder how well POSM would work if you had one in each village with MAN connections between and having the POSMs sync among themselves, sharing the data in near-real time.  This would be beneficial as there would be a backup of the data, adding some redundancy in the event one of the POSM devices died.  Providing connectivity could also aid in communications between sites through IRC or XMPP.

Lots of ideas…  Lots of options…


Tarus Balog : 2016 PB and Jam

August 05, 2016 06:22 PM

OpenNMS is headquartered in the idyllic small town of Pittsboro, NC, sometimes just called “PBO”. Since a number of people who come to Dev-Jam travel a fair distance, we’ve started a tradition of a “mini Dev-Jam” the week after, hosted at OpenNMS HQ.

This is much more focused on the work of The OpenNMS Group, but it is still a lot of fun. Last night as a team building exercise we decided to try an “escape” room.

This is a relatively new thing where a group of people get put in a room and they have a certain amount of time to figure out puzzles and escape. Jessica set us up with Cipher Escape in their “Geek Room” which was the only one that could accommodate 11 of us.

It’s a lot of fun. For our experience we were led into about a 15×15 room and given the following backstory: you are watching your neighbors’ cat while they are on vacation and after you feed her you realize you are locked in their house. You have 60 minutes to escape.

One thing I thought was funny was that the room was dotted with little pink stickers and we were told that these indicate things that don’t need to be manipulated (e.g. there was a picture frame that when you turned it over you would see the stickers, which meant you weren’t supposed to take it apart). I can only imagine the beta testing that went into determining where to put the stickers (our hostess specifically mentioned that you didn’t need to take the legs off the furniture).

To tell anything more would spoil it, but I was extremely proud that the team escaped with over 10 minutes to spare (we missed the best time by ten minutes, so it wasn’t close, but we did beat a team from Cisco that didn’t escape at all).

Escape Room Success

It was a ton of fun, and I’d put this team up against any challenge.

Afterward, most of us went out for sushi at Waraji. I’ve known the owner Masatoshi Tsujimura for almost 30 years, and even though they were packed they were able to set us up with a tatami room.

Waraji Dinner

It’s a bit out of the way for me to visit often, so I was happy to have an excuse for a victory celebration.

Tarus Balog : 2016 Dev-Jam: Day 5

August 05, 2016 05:59 PM

The last day of Dev-Jam is always bittersweet. The bitter part is the goodbyes, but the sweet part is “Show and Tell” when folks share what they have accomplished in the week.

We also get together for a group picture. Just before that Jonathan’s son Eddie joined us from the UK on the robot:

Dev-Jam Jonathan and Eddie

and David, who had to leave for a family issue, joined us via robot as well.

Dev-Jam 2016 Group Pic

All of the presentations are up on Youtube.

Chandra has been working on adding provisioning detectors to the Minion:

Deepak and Pavan, who work for a large electronic medical records company, discussed how they are using OpenNMS at scale:

Seth has been managing a lot of that work, which is currently focused on syslog, and he did a presentation on new syslog parsing functionality:

Alejandro presented some awesome improvements to the UI:

Markus has been working on project Atlas, which includes major improvements to OpenNMS maps. Here he demonstrates the integration of the geographical map with the topology map:

More UI enhancements were offered by Christian who added trend lines to the OpenNMS home page:

Ronny talked about his ideas for making device configurations more modular and managing that with git:

And he has also been creating reusable Docker containers with OpenNMS:

One project I found extra exciting was “Underling” which is an instance of Minion written in Go. This makes it incredibly small (about 6MB) which should allow the Minion to run on very inexpensive hardware.

I plan to demonstrate more Minion stuff at the OpenNMS Users Conference (and if you haven’t registered, you should).

In the evening we walked back across the river to dine at Town Hall Brewery.

Dev-Jam Final Dinner

It was the last time all of us will be together until next year, and I can’t wait.

Tarus Balog : 2016 Dev-Jam: Day 4

August 04, 2016 03:47 PM

Dev-Jam is made up of two main groups of people: those who work on OpenNMS full time and those who don’t. For those who work on OpenNMS full time, we try to depart from the day to day running of the project to both try new things and have fun. Think of it as “special projects week”.

Since OpenNMS is aiming to be a platform for the Internet of Things, this tends to involve a lot of electronics.

Dev-Jam Electronics

I decided to take some time out to further explore the Virtual Reality provided by Google Cardboard. I played with it last Dev-Jam, but I bought a nice headset from Homido since the Cardboard experience with the actual cardboard holder, while novel, was a little bit wanting.

The downside is that it doesn’t have the little magnet thingie that acts as a mouse click. Most people using the Homido tend to pair some other controller to their Android device in order to navigate, and since I have a PS3 (that I mainly use to play Blu-ray disks) I had a Sixaxis controller I could use. I had to buy an app in order to deploy a driver on my Nexus 6P that would work with the Sixaxis, and after a bit of tinkering I got it to work (note that it disables the regular Bluetooth driver when you run it).

I configured the “X” button to act as a mouse click, and pretty soon I was able to move about the Google Cardboard demos. The Homido fits well and the image is good, but it does allow some light to bleed in around the edges, so it works best in a dim or dark room.

I then went off to find some apps. This is not a field that a lot of developers have explored, and most of the apps are pretty passive. While this can work (check out the creepy “Sisters”) I wanted something more along the lines of what I experienced with the Samsung Gear VR, which includes immersive games. I found one called Hardcode VR that was fun, sort of a platformer along the lines of Portal. The controller worked out of the box exactly like you would expect it to: the right joystick was used for looking around and the left one for moving. I did get a slight headache after playing it for a while, though, so I think that for the time being mobile device-based VR is still a novelty.

My experiment did amuse some of the folks at the conference, and Ronny made this comparison:

Tarus vs. Bender

I am always humbled by the people who give a week of their lives up to come to Dev-Jam, and even more so since DJ was away from his wife on his birthday. We did make sure he had a cake, though.

DJ's birthday cake

The cake was from Salty Tart and it was mighty tasty.

Warren Myers : here seems like it would be perfect for pilots

July 29, 2016 09:18 PM

With Here, you can download maps to use offline. 

And, via personal experimentation, I can attest to the rapidity with which the screen will update (even in “airplane mode”) on my iPhone when in a commercial jet if I have Here open. 

So why don’t they advertise their mapping product(s) to pilots?

Or do they, and I just haven’t noticed?

I’d think running Here on an iPad Pro or even an iPhone 6S Plus would be fantastic for pilots of all stripes – private, charter, military, and commercial.


I’m sure other devices will handle Here well, too – but have only tried on my iPhone & my dad’s Samsung Note.

Warren Myers : automating mysql backups

July 29, 2016 10:39 AM

I want to backup all of the MySQL databases on my server on a routine basis.

As I started asking how to get a list of all databases in MySQL on Stack Overflow, I came across this previous SO question, entitled, “Drop All Databases in MySQL” (the best answer for which, in turn, republished the kernel from this blog post). Thinking that sounded promising, I opened it and found this little gem:

mysql -uroot -ppassword -e "show databases" | grep -v Database | grep -v mysql| grep -v information_schema | gawk '{print "drop database " $1 ";select sleep(0.1);"}' | mysql -uroot -ppassword

That will drop all databases. No doubt about it. But that’s not what I want to do, so I edited the leading command down to this:

mysql -uroot -e "show databases" | grep -v Database | grep -v mysql| grep -v information_schema| grep -v test | grep -v OLD | grep -v performance_schema

Which gives back a list of all the databases created by a user.

Now I need a place to keep the dumps .. /tmp sounded good.

And each database should be in its own file, so for each one I need mysqldump to write to $db.identifier.extension

Made the ‘identifier’ the output of date +%s to get seconds since the Unix epoch (which is plenty unique enough for me).

All of which adds up to this one-liner:

for db in `mysql -uroot -e "show databases" | grep -v Database | grep -v mysql| grep -v information_schema| grep -v test | grep -v OLD | grep -v performance_schema`; do mysqldump $db > /tmp/$db.dump.`date +%s`.sql; done

Plop that puppy in root’s crontab on a good schedule for you, and you have a hands-free method to back up databases.

Thought about using xargs, but I couldn’t come up with a quick/easy way to uniquely identify each file in the corresponding output.

Might consider adding some compression and/or a better place for dumps to live and/or cleaning-up ‘old’ ones (however you want to determine that), but it’s a healthy start.
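For what it’s worth, here is a rough sketch of what that might look like bolted onto the same loop (the gzip, the ~/sqlbackups directory and the seven-day retention are just choices of mine):

for db in `mysql -uroot -e "show databases" | grep -v Database | grep -v mysql | grep -v information_schema | grep -v test | grep -v OLD | grep -v performance_schema`; do
  # compress each dump as it is written
  mysqldump $db | gzip > ~/sqlbackups/$db.dump.`date +%s`.sql.gz
done

# prune anything older than a week (tune -mtime to whatever 'old' means to you)
find ~/sqlbackups -name '*.sql.gz' -mtime +7 -delete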


You can also do mysqldump --all-databases if you think you want to restore all of them simultaneously … I like the idea of individually dumping them for individual restoration / migration / etc.

The full script I am using (which does include backups, etc):

#!/bin/bash
############################

date

echo 'Archiving old database backups'

tar zcf mysql-dbs.`date +%s`.tar.gz ~/sqlbackups
rm -f ~/sqlbackups/*

date

echo 'Backing up MySQL / MariaDB databases'

for db in `mysql -uroot -e "show databases" | grep -v Database | grep -v mysql| grep -v information_schema| grep -v test | grep -v OLD | grep -v performance_schema`; do mysqldump $db > ~/sqlbackups/$db.dump.`date +%s`.sql; done

echo 'Done with backups. Files can be found in ~/sqlbackups'

date

Tarus Balog : 2016 Dev-Jam: Day 3

July 28, 2016 02:25 PM

It’s hard to believe this year’s Dev-Jam is half over. After months of planning it seems to go by so fast.

One of the goals I had this week was to understand more about the OpenNMS Documentation Project. For years I’ve been saying that OpenNMS documentation sucks, like that of most open source projects, but I can’t say that any more. It has actually become quite mature. There is a detailed installation guide, a user’s guide, an administrator’s guide and a guide for developers. Each release the docs are compiled right alongside the code, and it even rates its own section on the new website.

Web Site Docs Page

It’s written in AsciiDoc, and all of the documentation is version controlled and kept in git.

Ronny Trommer is one of the leads on the documentation project, and I asked him to spend some time with me to explain how everything is organized.

Ronny Trommer

Of the four main guides, the installation guide is almost complete. Everything else is constantly improving, with the user guide aimed at people working through the GUI and the administration guide more focused on configuration. For example, the discussion of the path outage feature is in the user’s guide but how to turn it on is in the admin guide.

There is even something for everyone in the developers guide (I am the first to state I am not a developer). One section lays out the style rules for documentation in great detail. For example, in order to manage changes, each sentence should be on a single line. That way a small change to, say, a misspelled word doesn’t cause a huge diff. Also, we are limited as to the types of images we can display, so people are encouraged to upload the raw “source” image as well as an exported one to save time in the future should someone want to edit it.
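To make that concrete, a paragraph in the AsciiDoc source ends up looking something like this (the content is made up, it is just there to show the convention):

The path outage feature suppresses notifications for nodes behind a failed upstream device.
It is disabled by default.
You can enable it from the admin section of the web UI.

AsciiDoc joins consecutive lines into a single rendered paragraph, so fixing a typo in the second sentence touches exactly one line in the diff.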

It is really well done and now I’m eager to start contributing.

Speaking of well done, Jonathan has figured out what is keeping OpenNMS from using the latest version of OTRS (and he’s sent a patch over to them) and Jesse showed me some amazing work he’s done on the Minion code.

We’ve been struggling to figure out how to implement the Minion code since we want to be able to run it on tiny machines like the Raspberry Pi, but because OpenNMS is written in Java there is a lot of overhead to using that language on these smaller systems. He re-wrote it in Go and then uploaded it to a device on my home network. At only 5.6MB it’s tiny, and yet it was able to do discovery as well as data collection (including NRTG). Sheer awesomeness.

Wednesday was also Twins night.

Twins Tickets

For several years now we’ve been going as a group to see the Minnesota Twins baseball team play at Target Field. It’s a lot of fun, although this year the Germans decided that they’d had enough of baseball and spent the time wandering around downtown Minneapolis.

At first I thought they had the right idea, as the Braves went up 4 to 0 in the first and by the top of the fourth were leading 7 to 0. However, the Twins rallied and made it interesting, although they did end up losing 9 to 7.

Our seats were out in left field, ‘natch.

Twins Tickets

Warren Myers : change your default font in windows 10

July 27, 2016 09:22 PM

Starting from a tutorial I found recently, I want to share how to change your default font in Windows 10 – but in a shorter edition than that long one (and in, I think, a less-confusing way).

Back in the Good Ole Days™, you could easily change system font preferences by right-clicking on your desktop, and going into the themes and personalization tab to set whatever you wanted however you wanted (this is also where you could turn off (or back on) icons on your desktop (like My Documents), set window border widths, colors for everything, etc).

Windows 10 doesn’t let you do that through any form of Control Panel anymore, so you need to break-out the Registry Editor*.

0th, Start regedit

WindowsKey-R brings up the Run dialog – type regedit to start the Registry Editor

2016-07-27 (3)

NOTE: you should back-up any keys you plan to edit, just in case you forget what you did, want to revert, or make a mistake.

1st, Navigate to the right key area
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\FontSubstitutes

2016-07-27

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Fonts

2016-07-27 (1)

Those two keys are where you’ll need to be to make these changes.

2nd, Blank entries for Segoe UI

For all of the “Segoe UI” entries in Fonts, change their Data field to blank (“”)

3rd, Add a Segoe UI substitute font

In FontSubstitutes, click Edit->New->String Value. Name it “Segoe UI” (without the quotes). In the “Value data” field, enter your preferred font name. I used Lucida Console.

2016-07-27 (2)

4th, Log out, or reboot, and log in again to see your changes take effect.

* You can also download my registry keys here, which have the substitution already done. And you can pick any other font instead of Lucida Console you like – just edit the key file in your favorite text editor (I like TextPad) before merging into your Registry.
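If you would rather build the .reg file yourself than download mine, it looks roughly like this. The exact “Segoe UI …” value names under Fonts can vary a little between Windows 10 builds, so check what your Registry actually contains before blanking them, and swap in whatever font you prefer:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Fonts]
"Segoe UI (TrueType)"=""
"Segoe UI Bold (TrueType)"=""
"Segoe UI Bold Italic (TrueType)"=""
"Segoe UI Italic (TrueType)"=""
"Segoe UI Light (TrueType)"=""
"Segoe UI Semibold (TrueType)"=""
"Segoe UI Symbol (TrueType)"=""

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\FontSubstitutes]
"Segoe UI"="Lucida Console"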

Tarus Balog : 2016 Dev-Jam: Day 2

July 27, 2016 02:49 PM

By Day Two people have settled into a rhythm. Get up, eat breakfast, start hacking on OpenNMS. I tend to start my day with these blog posts.

It’s nice to have most of the team together. Remember, OpenNMS is over 15 years old so there is a lot of different technology in the monitoring platform. I think David counted 18 different libraries and tools in the GUI alone, so there was a meeting held to discuss cleaning that up and settling on a much smaller set moving forward.

In any case ReST will play a huge role. OpenNMS Compass is built entirely on ReST, and so the next generation GUI will do the same. It makes integrating with OpenNMS simple, as Antonio demonstrated in a provisioning dashboard he wrote for one of his customers in Italy.

Antonio Teaching

They needed an easier way to manage their ten thousand plus devices, so he was able to use the ReST interface to build out exactly what they wanted. And of course the source is open.
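To give a flavor of how approachable the interface is, listing nodes is a single call. The credentials and port here are the stock defaults and the hostname is made up, so adjust for your own install:

curl -u admin:admin "http://opennms.example.com:8980/opennms/rest/nodes?limit=10"

You get XML back by default, and from there a dashboard like Antonio’s is mostly a matter of rendering that data and pushing changes back through the same interface.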

Several years ago we started a tradition of having a local restaurant, Brasa, cater dinner one night. This year it was Tuesday, and it is always the best meal of the week.

Antonio Teaching

As we were getting ready to eat, Alex Hoogerhuis, a big supporter of OpenNMS who lives in Norway, decided to join us via our Double Robotics robot, Ulfbot. It worked flawlessly, and he was the best first time driver we’ve had. Ben, Jeff and Jonathan joined him for a picture.

Alex and Team

We like using the Yudof Hall Club Room for Dev-Jam for a number of reasons, one of which is the big patio overlooking the river with picnic tables. Alex was able to drive around and spend some time with the rest of the team, although we had to lift him up to see over the wall to the Mississippi (we also had to carry him in when the wind picked up – heh).

Alex at Dinner

After dinner people kept working (DJ was up until nearly 2am chasing a bug) but we also took a break to watch Deadpool. It’s why “Dev-Jam” rhymes with “fun”.

Tarus Balog : Review: X-Arcade Gaming Cabinet

July 26, 2016 10:07 PM

Last year I wanted to do something special for the team to commemorate the release of OpenNMS Meridian.

Since all the cool kids in Silicon Valley have access to a classic arcade machine, I decided to buy one for the office. I did a lot of research and I settled on one from X-Arcade.

X-Arcade Machine

The main reasons were that it looked well-made but it also included all of my favorites, including Pac-Man, Galaga and Tempest.

X-Arcade Games

The final piece that sold me on it was the ability to add your own graphics. I went to Jessica, our Graphic Designer, and she put together this wonderful graphic done in the classic eight-bit “platformer” style and featuring all the employees.

X-Arcade Graphic

Ulf took the role of Donkey Kong, and here is the picture meant to represent me:

X-Arcade Tarus

The “Tank Stick” controls are solid and responsive, although I did end up adding a spinner since none of the controls really worked for Tempest.

When you order one of these things, they stress that you need to make sure it arrives safely. Seriously, like four times, in big bold letters, they state you should check the machine on delivery.

I was going to be out of town when it arrived, so I made sure to tell the person checking in the delivery to make sure it was okay (i.e. take it out of the box).

They didn’t (the box looked “fine”) and so we ended up with this:

X-Arcade Cracked Top

(sigh)

Outside of that, everything arrived in working order. You get a small Dell desktop running Windows with the software pre-installed, but you also get CDs with all the games that are included with the system. It’s a little bit of a pain to set up since the instructions are a little vague, but after about an hour or so I had it up and running.

Anyway, it is real fun to play. It supports MAME games, Sega games, Atari 2600 games and even that short-lived laserdisc franchise “Dragon’s Lair”. You can copy other games to the system if you have them, although scrolling through the menu can get a bit tiring if you have a long list of titles.

We had an issue with the CRT about 11 months after buying the system. I came back from a business trip to find the thing dark (it never goes dark; even if the computer is hung for some reason you’ll still see a “no signal” graphic on the monitor). Turns out the CRT had died, but they sent us a replacement under warranty and hassle free. It took about an hour to replace (those instructions were pretty detailed) and it worked better than ever afterward.

This motivated me to consider fixing the top. When we had the system apart to replace the monitor, I noticed that the top was a) the only thing broken and b) held on with eight screws. I contacted them about a replacement piece and to my surprise it arrived two days later – no charge.

The only issue I have remaining with the system is the fact that it is Windows-based. This seems to be the perfect application for a small solid-state Linux box, but I haven’t had the time to investigate a migration. Instead I just turned off or removed as much software as I could (all the Dell Update stuff kept popping up in the middle of playing a game) and so far so good.

I am very happy with the product and extremely happy with the company behind it. If you are in the market for such a cabinet, please check them out.

Tarus Balog : 2016 Dev-Jam: Day 1

July 26, 2016 03:15 PM

Dev-Jam officially started on Monday at 10am, where I did my usual kick-off speech before turning it over to Seth and Jesse who handle the technical side of things.

Yesterday I stated that this was our tenth Dev-Jam at UMN. I forgot that the first one was held at my house, so this is actually the ninth at UMN (we’ve still had eleven since 2005).

Yudof Club Room

Everyone went around the room and talked about the things they wanted to work on this week. A lot of them focused on Minion, a technology rather unique to OpenNMS. A Minion is a Karaf container that implements features for remote monitoring. It is key for OpenNMS to be able to scale to the Internet of Things (IoT) level of millions of devices and billions of metrics. And speaking of IoT, Ken turned me on to openHAB which is something I need to check out.

Yudof Kitchen

It is often hard for me to describe Dev-Jam to other people, as it is truly a lightly structured “un-conference”. In a great example of the Open Source Way it is very self-organizing, and I look forward to Friday when everyone presents what they have done.

Some of the Germans

We did have Alex Finger, one of the creators of the OpenNMS Foundation, join us via robot. He was having some sound issues and I think he did get stymied by the robot’s lack of hands when he came across a door, but it was cool he was able to visit from Europe.

Alex on the Robot

We use this week for planning and sharing, so Jesse took some time to go over the Business Service Monitor (BSM) which allows you to create a “business level” view of your services versus just the devices themselves. It is fully implemented via ReST and is pretty powerful, although as with a lot of things OpenNMS that very power can add complexity. I’m hoping our community will find great uses for it.

jesse and BSM

That evening about half of us walked to a theatre to see Star Trek Beyond. Most of us disliked it and I posted a negative review, but it was fun to go out with my friends.

Tarus Balog : 2016 Dev-Jam: Day 0

July 25, 2016 02:25 PM

♬ It’s the most wonderful time of the year ♬

Ah yes, it’s Dev-Jam time, where we descend onto the campus of the University of Minnesota, Twin Cities, for a week of OpenNMS goodness.

This is our eleventh annual Dev-Jam and our tenth at UMN. They are really good hosts so we’ve found it hard to look elsewhere for a place to hold the conference.

This is not a user’s conference. That is coming up in September. Instead, this is a chance for the core contributors of OpenNMS, and those people who’d like to become core contributors, to get together, share and determine the direction of OpenNMS for another year.

This year we are just shy of 30 people from four different countries: the US, the UK, Italy and India. Alejandro and his wife Carolina are now permanent residents of the US so I can’t really count them as being from Venezuela any more, and that happened directly through his involvement with Dev Jam. We’ve had more people but 30 seems to be the magic number (one year we had 40 and it was much harder to manage).

MSP sign at airport

My trip to MSP was uneventful. I flew through Dallas even though there is a direct RDU->MSP flight on Delta since I’m extremely close to Lifetime Platinum status on American Airlines. Also, AA has added a cool feature on their mobile app that lets me track my bags. This was important since I was shipping a box of four 12-packs of Cheerwine – a Dev-Jam favorite and as always a target for TSA inspection (apparently a 40+ pound box of soda is suspicious). Everything got here fine.

Including Ulf:

Ulf in Admiral's Club

Ulf is the OpenNMS mascot and he, too, is a product of Dev-Jam. Many years ago Craig Miskell came to Dev-Jam from New Zealand. He brought this plush toy and gave it to the Germans, who named him “Ulf”. Since then he has been around the world spreading the Good News about OpenNMS, so it wouldn’t be Dev-Jam without him.

We stay in a dorm called Yudof Hall where we take over the Club Room, a large room on the ground floor that includes a kitchen and an area with sofas and a television. In the middle we set up tables where we work, and due to UMN being a top-tier university we have great bandwidth. There is a huge brick patio next to it that looks out over the Mississippi River. It’s a very nice place to spend the week.

Speaking of the Mississippi, we crossed it last night to our usual kick-off spot, the Town Hall Brewery. As a cocktail aficionado, I was happy to see some craft cocktails on the menu, and a number of us tried the “Hallbach”, their take on the Seelbach Cocktail:

Hallbach Cocktail

It was very nice, as they used a high proof bourbon and replaced the champagne with sparkling cider.

We like Town Hall since we can seat 30 people. We do cater in as well as go out. The new light rail service to campus makes getting around easy, especially to the Mall of America and Target Field.

Speaking of baseball, we’re all going to the game on Wednesday. If you are in the area and want to join us, I should have a couple of tickets available. Just drop me a note. We also brought along the Ulfbot, which is a tele-presence robot so do the note dropping thing if you want to “visit”.

Dev-Jam!

Mark Turner : Dix Park Advisory Committee chosen

July 20, 2016 04:23 PM

Raleigh city council approved the members of the Dix Park Advisory Committee yesterday. My son Travis and I did not make the list. I was disappointed about this for a little while until I recognized how much time I now won’t be spending in meetings. I had cleared the decks to devote the proper time and attention to this but now I am free to pursue other initiatives. Now, how to fill it?

Mark Turner : Aunt Beverly and Uncle Bill

July 20, 2016 12:46 PM

My uncle, Bill Turner

My uncle, Bill Turner


Until recently I had been fortunate not to have any of my friends and relatives die. Sadly, that is no longer the case. My Aunt Linda died last spring and in the past two weeks I’ve lost both my Aunt Beverly and my Uncle Bill.

Aunt Beverly was married to my dad’s oldest brother, Jimmy. She was a longtime Spanish teacher in Birmingham and raised two of her three kids on her own after Jimmy died in the 80s. I didn’t know her too well.

While I was making plans to attend her funeral I found out Uncle Bill was in serious condition. Bill died last week. I was unable to attend Beverly’s funeral but was able to get away for Bill’s, also in Birmingham.

It’s an 8+ hour drive to Birmingham from Raleigh, so my brothers and I carpooled there. We spent the next few days eating barbecue, catching up, and attending services. Then it was a long trip back.

Uncle Bill was just a fun guy. He always had a funny story to tell, the result of a keen sense of observation. He worked as a service manager at a car dealership for most of his career but filled his retirement with golf and trips. He was also the only other member of my family to be a veteran of the Navy. Like me, Uncle Bill only spent four years in the Navy but those four years transformed his life. He was fiercely proud of his service and his Navy. I hope my service was up to his standards!

The occasion did give me the opportunity to spend time with my family in a way that has become increasingly rare. I just hope the next occasion is a more positive one.

Tarus Balog : New Fancy Website for www.opennms.org

July 20, 2016 11:39 AM

As some of you may have noticed, a little while ago the OpenNMS Project website got updated to a new, fancy, responsive version.

OpenNMS Platform

This was mainly the work of Ronny Trommer with a big assist from our graphic designer, Jessica.

We are often so busy working on the code that we forget how important it is to tell people about what we are doing. Most people who take the time to learn about the project realize how awesome it is, but it can be hard to get over that first hump in the learning curve.

I hope that the new site will both reflect the benefits of using OpenNMS as well as the work of the community behind it.

Tarus Balog : OpenNMS Meridian 2016 Released

July 19, 2016 12:32 AM

I am woefully behind on blog posts, so please forgive the latency in posting about Meridian 2016.

As you know, early last year we split OpenNMS into two flavors: Horizon and Meridian. The goal was to create a faster release cycle for OpenNMS while still providing a stable and supportable version for those who didn’t need the latest features.

This has worked out extremely well. While there used to be eighteen months or so between major releases, we did five versions of Horizon in the same amount of time. That has led to the rapid development of such features as the Newts integration and the Business Service Monitor (BSM).

But that doesn’t mean the features in Horizon are perfect on Day One. For example, one early adopter of the Newts integration in Horizon 17 helped us find a couple of major performance issues that were corrected by the time Meridian 2016 came out.

The Meridian line is supported for three years. So, if you are using Meridian 2015 and don’t need any of the features in Meridian 2016, you don’t need to upgrade. Fixes for major performance issues and all security issues, as well as most of the new configurations, will be backported to that release until Meridian 2018 comes out.

Compare and contrast that with Horizon: once Horizon 18 was released all work stopped on Horizon 17. This means a much more rapid upgrade cycle. The upside being that Horizon users get to see all the new shiny features first.

Meridian 2016 is based on Horizon 17, which has been out since the beginning of the year and has been highly vetted. Users of Horizon 17 or earlier should have an easy migration path.

I’m very happy that the team has consistently delivered on both Horizon and Meridian releases. It is hoped that this new model will both keep OpenNMS on the cutting edge of the network monitoring space while providing a more stable option for those with environments that require it.

Warren Myers : hov lanes are misnamed

July 13, 2016 02:23 PM

I dislike HOV lanes on principle, but I also dislike them grammatically: "high-occupancy vehicle" states the vehicle can hold many people (ie, it has a "high-occupancy").

They should be labeled "highly-occupied" vehicle lanes – same acronym, but with better grammar.

Alan Porter : The Wrist Watch Boneyard

July 11, 2016 12:42 AM

Audrey was looking for a replacement battery for an old watch, and that got me looking through my own wrist watch boneyard. I gave up wearing watches in 2008.

Back in the late 1990’s and early 2000’s, I wore one of these:

casio-abx-20-u-b3

The Casio ABX-20 was an analog watch with a digital display that floated above the hands. I thought it was pretty cool at the time (although I am sure everyone else thought it was dorky). I also had a couple of Timex “Expedition” analog/digital watches — they had Indiglo backlights.
I still think the analog/digital dual format is pretty cool.

Sadly, the Casio ABX-20 is beyond repair. But while we were getting a battery for Audrey’s watch, I picked up a few batteries for some of the other boneyard watches, just to take them for a nostalgic spin.

Tarus Balog : Upgrading Linux Mint 17.3 to Mint 18 In Place

July 06, 2016 03:02 PM

Okay, I thought I could wait, but I couldn’t, so yesterday I decided to do an “in place” upgrade of my office desktop from Linux Mint 17.3 to Mint 18.

It didn’t go smoothly.

First, let me stress that the Linux Mint community strongly recommends a fresh install every time you upgrade from one release to another, and especially when it is from one major release, like Mint 17, to another, i.e. Mint 18. They ask you to back up your home directory and package lists, base the system and then restore. The problem is that I often make a lot of changes to my system, which usually involves editing files in the system /etc directory, and this doesn’t capture that.

One thing I’ve always loved about Debian is the ability to upgrade in place (and often remotely) and this holds true for Debian-based distros like Ubuntu and Mint. So I was determined to try it out.

I found a couple of posts that suggested all you need to do is replace “rosa” with “sarah” in your repository file, and then do a “apt-get update” followed by an “apt-get dist-upgrade”. That doesn’t work, as I found out, because Mint 18 is based on Xenial (Ubuntu 16.04) and not Trusty (Ubuntu 14.04). Thus, you also need to replace every instance of “trusty” with “xenial” to get it to work.
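In practice that boils down to something like this (just a sketch of the substitutions described above; back the file up first, and the file name assumes a stock Mint install):

sudo cp /etc/apt/sources.list.d/official-package-repositories.list /etc/apt/sources.list.d/official-package-repositories.list.bak
sudo sed -i -e 's/rosa/sarah/g' -e 's/trusty/xenial/g' /etc/apt/sources.list.d/official-package-repositories.list
sudo apt-get update
sudo apt-get dist-upgrade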

Finally, once I got that working, I couldn’t get into the graphical desktop. Cinnamon wouldn’t load. It turns out Cinnamon is in a “backport” branch for some reason, so I had to add that to my repository file as well.

To save trouble for anyone else wanting to do this, here is my current /etc/apt/sources.list.d/official-package-repositories.list file:

deb http://packages.linuxmint.com sarah main upstream import backport #id:linuxmint_main
# deb http://extra.linuxmint.com sarah main #id:linuxmint_extra

deb http://archive.ubuntu.com/ubuntu xenial main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-updates main restricted universe multiverse

deb http://security.ubuntu.com/ubuntu/ xenial-security main restricted universe multiverse
deb http://archive.canonical.com/ubuntu/ xenial partner

Note that I commented out the “extra” repository since one doesn’t exist for sarah yet.

The upgrade took a long time. We have a decent connection to the Internet at the office and still it was over an hour to download the packages. There were a number of conflicts I had to deal with, but overall that part of the process was smooth.

Things seem to be working, and the system seems a little faster but that could just be me wanting it to be faster. Once again many thanks to the Mint team for making this possible.

Tarus Balog : MC Frontalot and The Doubleclicks at All Things Open

July 05, 2016 01:50 PM

I am happy to finally be able to confirm that MC Frontalot and his band, along with The Doubleclicks, will be playing an exclusive show during the All Things Open conference in October. The OpenNMS Group, at great expense (seriously, this is like our entire marketing budget for the year), has secured these two great acts to help celebrate all things open, and All Things Open.

MC Frontalot

I first met Damian (aka Frontalot) back in 2012 when I hired him to play at the Ohio Linuxfest. I subscribe to the Chris DiBona theory that open source business should give back to the community (he once described his job as “giving money to his friends”) and thus I thought it would be cool to introduce the übernerd Frontalot to the open source world.

We hit it off and now we’ve hired him a number of times. The last time was for OSCON in 2015, where we decided to bring in the entire band. What an eye-opening experience that was. A lot of tech firms talk about “synergy” – the situation when the whole is greater than the sum of its parts – but Front with his band takes the Frontalot experience to a whole new level.

Also at the OSCON show we were able to get The Doubleclicks to open. This duo of sisters, Angela and Aubrey Webber, bring a quirky sensibility to geek culture and were the perfect opening act.

Now, I love open source conferences, but I overdid it last year. So this year I’m on a hiatus and have been to *zero* shows, but I made an exception for All Things Open. First, it’s in my home city of Raleigh, North Carolina, which is also home to Red Hat. We like to think of the area as the hotbed of open source if not its heart. Second, the conference is organized by Todd Lewis, the Nicest Man in Open Source™. He spends his life making the world a better place and it is reflected in his show. We couldn’t think of a better way to celebrate that than to bring in some top entertainment for the attendees.

That’s right: there are only two ways to get into this show. The easiest is to register for the conference, as the conference badge is what you’ll need to get into the venue. The second way is to ask us nicely, but we’ll probably ask you to prove your dedication to free and open source software by performing a task along the lines of a Labor of Hercules, except ours will most likely be obscenely biological.

Seriously, if you care about FOSS you don’t want to miss All Things Open, so register.

If you are unfamiliar with the work of MC Frontalot, may I suggest you check out “Stoop Sale” and “Critical Hit”, or if you’re Old Skool like me, watch “It Is Pitch Dark”. His most recent album was about fairy tales (think of it as antique superhero origin stories). Check out “Start Over” or better yet the version of “Shudders” featuring the OpenNMS mascot, Ulf.

As for The Doubleclicks, you can browse most of their catalog on their website. One song that really resonates with me, especially at conferences, is “Nothing to Prove” which I hope they’ll do at the show.

Oh, and I saved the best for last, Front has been working on a free software song. Yup, he is bringing his mastery of rhymes to bear on the conflict between “free as in beer” and “free as in liberty” and its world premiere will be, you guessed it, at All Things Open.

The show will be held at King’s Barcade, just a couple of blocks from the conference, on Wednesday night the 26th of October. You don’t want to miss it.

Tarus Balog : First Thoughts on Linux Mint 18 “Sarah”

July 01, 2016 10:42 PM

I am a big fan of Linux Mint and I look forward to every release. This week Mint 18 “Sarah” was released. I decided to try it out on my Dell XPS 13 laptop since it is the easiest machine of mine to base and they really haven’t suggested an upgrade path. The one article I was able to find suggested a clean install, which is what I did.

First, I backed up my home directory, which is where most of my stuff lives, and I backed up the system /etc directory since I’m always making a change there and forgetting that I need it (usually concerning setting up the network interface as a bridge).

I then installed a fresh copy of Mint 18. Now they brag that the HiDPI support has improved (as I will grouse later, so does everyone else) but it hasn’t. So the first thing I did was to go to Preferences -> General and set “User interface scaling” to “Double”. This worked pretty well in Mint 17 and it seems to be fine in Mint 18 too.

I then did a basic install (I used a USB dongle to connect to a wired network since I didn’t want to mess with the Broadcom drivers at this point) and chose to encrypt the entire hard drive, which is something I usually do on laptops.

I hit my first snag when I rebooted. The boot cycle would hang at the password screen to decrypt the drive. In Mint 17 the password prompt would be on top of the “LM” logo. I would type in the password and it would boot. Now the “LM” logo has five little dots under it, like the Ubuntu boot screen, and the password prompt is below that. It’s just that it won’t accept input. If I boot in recovery mode, the password prompt is from the command line and works fine.

(sigh)

This seems to be a problem introduced with Ubuntu 16.04. Well, before I dropped back down to Mint 17 I decided to try out Ubuntu 16.04 itself as well as Kubuntu. My laptop was based in any case.

I ran into the usual HiDPI problems with both of those. I really, really want to like Kubuntu but with my dense screen I can’t make out anything and thus I can’t find the option to scale it. Ubuntu’s Unity was easier as it has a little sliding scaler, but when I got it to a resolution I liked many of the icon labels were clipped, just like last time I looked at it.

(sigh)

Then it dawned on me that I could just install Mint 18 but see if encrypting just my home directory would work. It did, so for now I’m using Mint 18 without full disk encryption. The next step was to install the proprietary Broadcom driver and then wireless worked.

Next, I edited /etc/fstab and added my backup NFS mount entry, mounted the drive and started restoring my home directory. That went smoothly, until I decided to reboot.

The laptop just hung at the boot screen.

Now there is a bug in Dell BIOS that if I try to boot with a USB network adapter plugged in, it erases the EFI entry for “ubuntu” and I have to go into setup and manually re-add it. Thus I was disconnecting the dongle for every reboot. On a whim I plugged it back in and the system booted. This led me to believe that there was an issue with the NFS mount in /etc/fstab, and that’s what the problem turned out to be.

The problem is that systemd likes to get its little hands into everything, so it tries to mount the volume before the wireless network is initialized. The solution is to add a special option that will cause systemd to automount the volume when it is first requested. Here is what worked:

172.20.10.5:/volume1/Backups /media/backups nfs noauto,x-systemd.automount,nouser,rsize=8192,wsize=8192,atime,rw,dev,exec,suid 0 0

The key bits are “noauto,x-systemd.automount”.
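If you want to confirm that systemd actually picked up the change, a quick sanity check is something like:

sudo systemctl daemon-reload           # regenerate mount units from /etc/fstab
systemctl list-units --type=automount  # the new automount unit should show up here
ls /media/backups                      # the first access is what triggers the NFS mount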

With that out of the way, I added mounts for my music and my video collection. That’s when I noticed a new weirdness in Cinnamon: dual icons on the desktop. I have set the desktop option to display icons for mounted file systems and now I get two of them for each remote mount point.

Double Desktop Icons

Annoying and I haven’t found a solution, so I just turned that option back off.

Now I was ready to play with the laptop. I’m often criticized for buying brand new hardware and expecting solid Linux support (yeah, you, Eric) but this laptop has been out for over a year. Still, the trackpad is a little wonky – the cursor tends to jump to the lower right hand corner. Mint 18 ships with a 4.4 kernel but I had been using Mint 17 with a 4.6 kernel. One of the features of 4.6 is “Dell laptop improvements” so while I was hoping 4.4 would work for me (and that the features I needed would have been backported) it isn’t so. I installed 4.6 and my trackpad problems went away.
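For reference, installing a mainline kernel on Mint is just a matter of grabbing the .deb packages from the Ubuntu kernel team’s mainline builds and installing them by hand, something along these lines. The exact file names change with every build, so the ones below are placeholders:

cd /tmp
# pick the headers and image packages for your architecture from
# http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.6/
wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.6/linux-headers-4.6.0-NNN_all.deb
wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.6/linux-headers-4.6.0-NNN-generic_amd64.deb
wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.6/linux-image-4.6.0-NNN-generic_amd64.deb
sudo dpkg -i linux-headers-4.6.0-NNN*.deb linux-image-4.6.0-NNN*.deb
sudo reboot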

The final issue I needed to fix concerned ssh. I use ssh-agent and keys to access a lot of my remote servers, and it wasn’t working on Mint 18. Usually this is a permissions issue, but I compared the laptop to a working configuration on my desktop and the permissions were identical.

The error I got was:

debug1: Connection established.
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_rsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_rsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_dsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_dsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_ecdsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_ecdsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_ed25519 type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/tarus/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0

It turns out that OpenSSH 7.0 seems to require that an “IdentityFile” parameter be expressly defined. I might be able to do this in ssh_config but instead I just created a ~/.ssh/config file with the line:

IdentityFile ~/.ssh/id_dsa_main

That got me farther. Now the error changed to:

debug1: Skipping ssh-dss key /home/tarus/.ssh/id_dsa_main - not in PubkeyAcceptedKeyTypes
debug1: Skipping ssh-dss key tarus@server1.sortova.com - not in PubkeyAcceptedKeyTypes

It seems the key I created back in 2001 is no longer considered secure. Since I didn’t want to go through the process of creating a new key just right now, I added another line to my ~/.ssh/config file:

IdentityFile ~/.ssh/id_dsa_main
PubkeyAcceptedKeyTypes=+ssh-dss

and now it works as expected. The weird part is that you would think this would be controlled on the server side, but the failure was coming from the client and thus I had to fix it on the laptop.
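The longer-term fix, of course, is to retire that ancient DSS key instead of whitelisting it. When I get around to it, it should be as simple as generating a modern key and copying it out to the servers (sketched below using the host from the debug output above):

ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
ssh-copy-id -i ~/.ssh/id_ed25519.pub server1.sortova.com
# then point IdentityFile at the new key and drop the PubkeyAcceptedKeyTypes workaround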

Now that it is installed and seems to be working, I haven’t really played around with Mint 18 much, so I may have to write another post soon. I do give them props for finally updating the default desktop wallpaper. I know the old wallpaper was traditional, but man was it dated.

This was a more complex upgrade than usual, and I don’t agree that you must base your system to do it, even from major release to major release. This isn’t Fedora. It’s based on Ubuntu, which is based on Debian, and I have rarely had issues with those upgrades. Usually you just change your repositories and then do “apt-get dist-upgrade”.

But … I might wait a week or two after they approve an upgrade procedure and let other people hit the bugs first, just in case. My desktops are more important to me than my laptop.

Hats off to the Mint team. I’m pretty tied to this operating system so I’m encouraged that it keeps moving forward as quickly as it does.

Tarus Balog : The Inverter: Episode 70– Delicious Amorphous Tech Bubble

June 30, 2016 08:11 PM

This week the Gang of Four is down to three as Stuart is off on holiday with his daughter in New York City. The episode runs 82 minutes long and I’m seeing a trend that the shorter episodes happen when Jeremy is out. I think it is because he clutters up the whole show with facts and reasoning.

The first segment asked the question “Are we in another tech bubble, and if so, what shape is it?” Of course we are in another tech bubble, as Jeremy so deftly demonstrates by comparing a number of startups with over a billion dollars in valuation to real companies such as General Electric. They talk about a number of reasons for it, but I think they left an important one out: egos.

Look, growing up as a geek in the late 1970s early 1980s, we didn’t get much respect. Now with the various tech bubbles and widespread adoption of technology by the masses, geeks can at least be wealthy if not popular. But I think we still harbor, deep down, a resentment of the jocks and popular kids that results in problems with self-esteem. Take Marc Andreessen as an example. By most measures he’s successful, but take a look at him. He is not a pretty man, even though that male pattern baldness does suggest a big wee-wee. I think he still has something to prove which is why he dumps money into impossible things like uBeam which has something like a $500MM valuation. I think a lot of the big names in Silicon Valley have such a huge fear of missing out that they drive up valuations on companies without a business model and no hope of making a profit, much less a product.

But then Microsoft bought LinkedIn for $26.2B so what do I know.

Well, I do know the shape of the tech bubble: it’s a pear.

In the next segment the guys almost spooge all over themselves talking about the Pixel C tablet. I’ve never been a tablet guy. I have a six-inch … smart phone and it works fine for all of my mobile stuff. If I need anything bigger, I use a Dell XPS laptop running Mint. I do own a Nexus 10 but only use it to read eBooks that come in PDF format.

But all three of them really like it, meaning that if I decide to get a new tablet I’ll seriously consider it. Bryan did mention a couple of apps I was unfamiliar with, so I’ll have to check them out.

The first is called Termux and it provides a terminal emulator (already got one) but it adds a Linux environment as well. Could be cool. The other is DroidEdit which is a text editor for Android with lots of features, similar to vim or gedit on steroids. Bryan used these during his ill-fated attempt to live in the Linux shell for 30 days.

Apparently the Pixel C is magnetic, with magnets so strong you can hang it on your fridge. Add a webcam and I won’t need one of these.

The third segment was on Nextcloud. I’ll give the Nextcloud guys some props for getting press. This is something like the third in-depth interview I’ve listened to in the past three weeks. If you’ve been living under a rock and don’t know that Nextcloud is a fork of OwnCloud, start here. They interviewed Frank Karlitschek and Jan-Christoph Borchardt about the split and their plans.

I was hoping for more details on what caused the fork (because I’m a nosy bastard) but Jono started off with something like a 90-second leading question to Frank that pretty much handed him an explanation. I was screaming “Objection! Leading the witness!” but it didn’t help. I guess it really doesn’t matter.

I do think I’d really enjoy meeting Frank. They are dedicated to keeping Nextcloud 100% open source (like good ol’ OpenNMS). They also brought up a point that is very hard to make with large, complex open source projects. Everyone will ask “How do you compare with OwnCloud” when the better question is “How do you compare to Dropbox”? At OpenNMS we are always getting the “How are you different from Nagios” when the better question is “How do you compare to Tivoli or OpenView”?

The fourth segment was on the XPrize Global Learning Project. The main takeaway I got from it was that the very nature of the XPrize doesn’t lend itself to the Open Source Way. The prize amount is so high it doesn’t encourage sharing. Still, a couple of projects are trying it so I wish them all the luck.

The final “segment” is the outro where the guys usually just shoot the breeze. They mentioned Stuart, visiting the US, getting slammed with Brexit questions, and I do find that amusing having traveled to the UK numerous times and been peppered with questions about stupid US politics. It’s one of the reasons I hope Donald Trump doesn’t get elected – I’m not ready to go back to claiming to be Canadian when I travel.

They also talked about fast food restaurants. I’m surprised In-N-Out Burger didn’t get a mention. From the moment a new one opens it is usually slammed at all hours. They did mention Chick-Fil-A, which I used to love until I boycotted them over their political activism. There is a pretty cool article on five incredible fast food chains you shouldn’t eat at (including Chick-Fil-A) and one you should but probably can’t (In-N-Out).

Overall I thought it was a solid show, although it needed more ginger. Good to see the guys getting back into form.

Warren Myers : browsers should have integrated sharing ability

June 30, 2016 11:06 AM

Mobile browsers can all share pages via whatever is available on your tablet, iPad, Android, iPhone, etc.

Why do ‘full’ browsers not offer the same thing without goofy extensions?

Mark Turner : Russia is harassing U.S. diplomats all over Europe – The Washington Post

June 27, 2016 03:00 PM

Russian intelligence and security services have been waging a campaign of harassment and intimidation against U.S. diplomats, embassy staff and their families in Moscow and several other European capitals that has rattled ambassadors and prompted Secretary of State John F. Kerry to ask Vladimir Putin to put a stop to it.

At a recent meeting of U.S. ambassadors from Russia and Europe in Washington, U.S. ambassadors to several European countries complained that Russian intelligence officials were constantly perpetrating acts of harassment against their diplomatic staff that ranged from the weird to the downright scary. Some of the intimidation has been routine: following diplomats or their family members, showing up at their social events uninvited or paying reporters to write negative stories about them.

Source: Russia is harassing U.S. diplomats all over Europe – The Washington Post

Warren Myers : tesla’s solarcity bid isn’t about energy production

June 27, 2016 01:02 AM

Ben Thompson* (temporary paywall) makes an excellent first-order analysis of Elon Musk's bid to acquimerge SolarCity with Tesla. But he, uncharacteristically, stops short of seeing the mid- and long-term reasons for the acquimerge.

It's about SpaceX.

It's about Mars.

It's about the Moon.

Musk knows that he needs an incredibly-solid pipeline of technology to get SpaceX past its initial "toy" phases of being a launch company to the ISS.

He wants to ensure that he's able to support the future on non-terrestrial bodies – lunar missions, Mars missions, long-term space exploration, high-altitude space stations, etc.

Sure, it happens to be good for Tesla (integrating solar tech at Tesla charging stations is a no-brainer). But that's not the end game.

The goal is space.


* Follow Ben on Twitter – @benthompson

Mark Turner : Brexit Is Only the Latest Proof of the Insularity and Failure of Western Establishment Institutions

June 27, 2016 12:41 AM

Great commentary by Glenn Greenwald on Brexit.

Brexit — despite all of the harm it is likely to cause and despite all of the malicious politicians it will empower — could have been a positive development. But that would require that elites (and their media outlets) react to the shock of this repudiation by spending some time reflecting on their own flaws, analyzing what they have done to contribute to such mass outrage and deprivation, in order to engage in course correction. Exactly the same potential opportunity was created by the Iraq debacle, the 2008 financial crisis, the rise of Trumpism and other anti-establishment movements: This is all compelling evidence that things have gone very wrong with those who wield the greatest power, that self-critique in elite circles is more vital than anything.

But, as usual, that’s exactly what they most refuse to do.

Source: Brexit Is Only the Latest Proof of the Insularity and Failure of Western Establishment Institutions

Tarus Balog : OpenNMS and Elasticsearch

June 24, 2016 09:06 PM

With Horizon 18 we added support for sending OpenNMS events into Elasticsearch. Unfortunately, it only works with Elasticsearch 1.0. Elasticsearch 2.0 and higher requires Camel 17, but OpenNMS can’t use it. I wondered why, and if you were wondering too, here is the answer from Seth:

Camel 17 has changed their OSGi metadata to only be compatible with Spring 4.1 and higher. We’re still using Spring 4.0 so that’s one problem. The second issue is that ActiveMQ’s OSGi metadata bans Spring 4.0 and higher. So currently, ActiveMQ and Camel are mutually incompatible with one another inside Karaf at any version higher than the ones that we are currently running.

The biggest issue is the ActiveMQ problem, I’ve opened this bug and it sounds like they’re going to address it in their next major release

So there you have it.

Mark Turner : What mysterious force whisked away the water on Venus? – CSMonitor.com

June 23, 2016 03:11 PM

Fascinating research might explain why all the water on Venus has disappeared.

Venus is remarkably Earth-like, with a similar size and gravity to our own planet. But the second planet from the sun is missing a key element to be a twin to our blue planet: water.

Scientists say there were once oceans on Venus’s surface, but with surface temperatures topping 860 degrees Fahrenheit, it’s no surprise the surface of Venus today is bone-dry.

But where did that water disappear to?

Source: What mysterious force whisked away the water on Venus? – CSMonitor.com

Tarus Balog : The Inverter: Episode 69 – Bill and Ted and Jeremy and Bryan and Jono and Stuart’s Excellent Adventure

June 17, 2016 03:24 PM

So the Gang of Four decided to actually produce a regular episode of Bad Voltage for the first time in, like, a month, so I decided to resurrect this little column making fun of them.

I am actually supposed to be on vacation this week, but for me vacation means working around the farm. I was working outside when the heat index hit 108.5F so while I was recovering from heat stroke I decided to give this week’s show a listen.

Clocking in at a healthy 75 minutes, give or take, it was an okay show, although the last fifteen minutes kind of wandered (much like most of this review).

The first segment concerned the creation of NextCloud as a fork of OwnCloud. I’ve already presented my thoughts on it from Bryan’s Youtube interview with the founders of NextCloud, and not much new was covered here. But it was a chance for all four of them to discuss it. One of the touted benefits of the new project is the lack of a contributor agreement. I don’t find this a good thing. Note that while I wholeheartedly agree that many contributor agreements are evil, that doesn’t make them all evil. Take the OpenNMS contributor agreement. It’s pretty simple, and it protects both the contributor and the project. The most important feature, to me, is that the contributor states that they have a right to contribute the code to the project. I think that’s important, although if it were lacking or the contributor lied, the results would be the same (the infringing code would be removed from the application). It at least makes people think just a bit before sending in code.

Bryan made an offhand mention about trademarks in the same discussion, and I wasn’t sure what he meant by it. Does it mean NextCloud won’t enforce trademarks, or that there is an easy process that allows people to freely use them? I think enforcing trademarks is extremely important for open source companies. Otherwise, someone could take your code, crap all over it, and then ship it out under the same name. At OpenNMS we had issues with this back in 2005 but luckily since then it has been pretty quiet.

While there was even more speculation, no one really knows why the NextCloud fork happened. Some say it was that Frank Karlitschek was friends with Niels Mache of Spreed.me and wanted a partnership, but OwnCloud was against it. I think we’ll never know. Another suggestion that has been made is that it had to do with the community of OwnCloud vs. the investors. Jono made the statement that VCs don’t take an active role in the community, but I have to disagree. My interactions with 90% of VCs have been like an episode of Silicon Valley, and while they may not take an active role, you can expect them to say things like “These features over here will be part of our ‘enterprise’ version and not open, and make sure to hobble the ‘community’ version to drive sales, but other than that, run your community the way you want.”

One new point that was brought up was the business perception of the company. I think everyone who self identifies as an open source fan who is using OwnCloud will most likely switch to NextCloud since that is where the developers went, but will businesses be cautious about investing in NextCloud? The argument can be made that “who knows what will set Frank off next?” and the threat of NextNextCloud might worry some. I am not expecting this to happen (once bitten, twice shy, I bet Frank has learned a lot about what he wants out of his project) but it is a concern.

It is similar to LibreOffice. I don’t know anyone in the open source world using OpenOffice, but it is still huge outside of that world (I did a ride-along with a friend who is a police officer and was pleasantly surprised to see him bring up OpenOffice on his patrol car’s laptop).

It kind of reminds me when Google killed Reader and then announced Keep – seemed a bit ironic at the time. If a company can radically change or even remove a service you have come to rely on, will you trust them in the future?

The segment ended with a discussion of the early days of Ubuntu. Bryan made the claim that Ubuntu was made as an easier to use version of Debian, which Jono vehemently denied. He claimed the goal was to create a free, powerful desktop operating system. All I remember from those days were those kids from the United Colors of Benetton ads on the covers of the free CDs.

The next piece was Bryan reviewing the latest Dell XPS 13 laptop. My last two laptops have been XPS 13 models and I love them. They ship with Linux (which I want to encourage) and I find they provide a great Linux desktop experience.

I got my newest one last year, and the main issue I’ve had is with the trackpad. Later kernels seem to have addressed most of my problems. I also dumped the Ubuntu 14.04 that shipped with it in exchange for Linux Mint, but I’m still running mainline kernels (4.6 at the moment). I’m eager for Mint 18 to release to see if the (rumoured) 4.4 kernel will work well (they keep backporting device driver changes) but outside of that I’ve had few problems.

Battery life is great, and the HiDPI screen is a big improvement over my old XPS 13. The main weirdness, for my model, is the location of the camera. In order to make the InfinityEdge display, they moved it to the bottom left of the screen so that the top bezel could be as thin as possible. It means people end up looking at the flabby underside of my chin instead of my face at times, but I use it so little that it doesn’t bother me much.

The third segment was about funding open source projects. It’s an eternal question: how do you pay for developers to work on free software? The guys didn’t really address it, focusing for the most part on programs that would provide some compensation for, say, travel to a conference, versus paying someone enough to make their mortgage. Stuart finally brought up that point but no real answers were offered.

The last fifteen minutes was the gang just shooting the breeze. Bryan used the term “duck fart” which apparently is a cocktail (sounds nasty, so don’t expect it on the cocktail blog). There is also, apparently, a science fiction novel called Bad Voltage that is not supposed to be that great, and the suggestion was made that the four of them should write their own version, but in the form of an “exquisite corpse” (my term, not theirs) where each would write their section independently and see what happens when it gets combined.

All in all, not a horrible show but not great, either. It is nice to have them all back together.

I’m eager to see how Bryan manages the next one, since he is spending 30 days solely in the Linux shell. How will Google Hangouts (which is what they use to make the show) work?

Curious minds want to know.

Tarus Balog : Choose the Right Thermometer

June 09, 2016 10:59 PM

Okay, so I have a love/hate relationship with Centurylink. Centurylink provides a DSL circuit to my house. I love the fact that I have something resembling broadband with 10Mbps down and about 1Mbps up. Now that doesn’t even qualify as broadband according to the FCC, but it beats the heck out of the alternatives (and I am jealous of my friends with cable who have 100Mbps down or even 300Mbps).

The hate part comes from reliability, which lately has been crap. This post is actually focused on OpenNMS so I won’t go into all of my issues, but I’ve been struggling with long outages in my service.

The latest issue is a new one: packet loss. Usually the circuit is either up or completely down, but for the last three days I’ve been having issues with a large percentage of dropped packets. Of course I monitor my home network from the office OpenNMS instance, and this will usually manifest itself with multiple nodeLostService events around HTTP since I have a personal web server that I monitor.

The default ICMP monitor does not measure packet loss. As long as at least one ping reply makes it, ICMP is considered up, so the node itself remains up. OpenNMS does have a monitor for packet loss called Strafeping. It sends out 20 pings in a short amount of time and then measures how long they take to come back. So I added it to the node for my home and I saw something unusual: a consistent 19 out of 20 lost packets.

Strafeping Graph
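By the way, enabling Strafeping is just a matter of defining the service in poller-configuration.xml and assigning it to the package that covers the node. The stock example looks something like this (quoted from memory, so check the file that ships with your version before copying anything):

<service name="StrafePing" interval="300000" user-defined="false" status="on">
  <parameter key="retry" value="0"/>
  <parameter key="timeout" value="3000"/>
  <parameter key="ping-count" value="20"/>
  <parameter key="failure-ping-count" value="20"/>
  <parameter key="wait-interval" value="50"/>
  <parameter key="rrd-repository" value="/opt/opennms/share/rrd/response"/>
  <parameter key="rrd-base-name" value="strafeping"/>
  <parameter key="ds-name" value="strafeping"/>
</service>

<monitor service="StrafePing" class-name="org.opennms.netmgt.poller.monitors.StrafePingMonitor"/>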

Power cycling the DSL modem seems to correct the problem, and the command line ping was reporting no lost packets, so why was I seeing such packet loss from the monitor? Was Strafeping broken?

While it is always a possibility, I didn’t think that Strafeping was broken, but I did check a number of graphs for other circuits and they looked fine. Thus it had to be something else.

This brings up a touchy subject for me: false positives. Is OpenNMS reporting false problems?

It reminds me of an event that happened when I was studying physics back in the late 1980s. I was working with some newly discovered ceramic material that exhibited superconductivity at relatively high temperatures (around 92K). That temperature can be reached using liquid nitrogen, which was relatively easy to source compared to cooler liquids like liquid helium.

I needed to measure the temperature of the ceramic, but mercury (used in most common thermometers) is a solid at those temperatures, so I went to my advisor for suggestions. His first question to me was “What does a thermometer measure?”

I thought it was a trick question, so I answered “temperature” (“thermo” meaning temperature and meter meaning “to measure”). He replied, “Okay, smart guy, the temperature of what?”

That was harder to answer exactly, so I said vague things like the ambient environment, whatever it was next to, etc. He interrupted me and said “No, a thermometer measures one thing: the temperature of the thermometer”.

This was an important lesson, even though it seems obvious. In the case of the ceramic it meant a lot of extra steps to make sure the thermometer we were using (which was based on changes in resistance) was as close to the temperature of the material as possible.

What does that have to do with OpenNMS? Well, OpenNMS is like that thermometer. It is up to us to make sure that the way we decide to use it for monitoring is as close to our criteria as possible. A “false positive” usually indicates a problem with the method versus the tool – OpenNMS is behaving exactly as it should but we need to match it better to what we expect.

In my case I found out the router I use was limited by default to responding to 1 ping per second (to avoid DDoS attacks I assume), so last night when I upped that to allow 20 pings per second Strafeping started to work as expected (as you can see in the graph above).

This allowed me to detect when my DSL circuit packet loss started again today. A little after 14:00 the system detected high packet loss. When this happened before, power cycling the modem seemed to fix it, so I headed home to do just that.

While I was on the way, around 15:30, the packet loss seemed to improve, but as you can see from the graph the ping times were all over the place (the line is green but there is a lot of extra “smoke” around it indicating a variance in the response times). I proactively power cycled the modem and things settled down. The Centurylink agent agreed to send me a new modem.

The point of this post is to stress that you need to understand how your monitoring tools actually work and you can often correct issues that make a monitor unusable and turn it into to something useful. Choose the right thermometer.

Tarus Balog : Nextcloud, Never Stop Nexting!

June 03, 2016 07:19 PM

It’s been awhile since I’ve posted a long, navel-gazing rant about the business of open source software. I’ve been trying to focus more on our business than spending time talking about it, but yesterday an announcement was made that brought all of it back to the fore.

TL;DR; Yesterday the Nextcloud project was announced as a fork of the popular ownCloud project. It was founded by many of the core developers of ownCloud. On the same day, the US corporation behind ownCloud shut its doors, citing Nextcloud as the reason. Is this a good thing? Only time will tell, but it represents the (still) ongoing friction between open source software and traditional software business models.

I was looking over my Google+ stream yesterday when I saw a post by Bryan Lunduke announcing a special “secret” broadcast coming at 1pm (10am Pacific). As I am a Lundookie, I made a point to watch it. I missed the start of it but when I joined it turned out to be an interview with the technical team behind a new project called Nextcloud, which was for the most part the same team behind ownCloud.

Nextcloud is a fork, and in the open source world a “fork” is the nuclear option. When a project’s community becomes so divided that they can’t work things out, or they don’t want to work things out for whatever reasons, there is the option to take the code and start a new project. It always represents a failure but sometimes it can’t be helped. The two forks I can think of off hand, Joomla from Mambo and Icinga from Nagios, both resulted in stronger projects and better software, so maybe this will happen here.

In part I blame the VC model for financing software companies for the fork. In the traditional software model, a bunch of money is poured into a company to create software, but once that software is created the cost of reproducing it is near zero, so the business model is to sell licenses to the software to the end users in order to generate revenue in the future. This model breaks when it comes to free and open source software, since once the software is created there is no way to force the end users to pay for it.

That still doesn’t keep companies from trying. This resulted in a trend (which is dying out) called “open core” – the idea that some software is available under an open source license but certain features are kept proprietary. As Brian Prentice at Gartner pointed out, there is little difference between this and just plain old proprietary software. You end up with the same lack of freedom and same vendor lock in.

Those of us who support free software tend to be bothered by this. Few things get me angrier than to be at a conference and have someone go “Oh, this OpenNMS looks nice – how much is the enterprise version?”. We only have the enterprise version and every bit of code we produce is available under an open source license.

Perhaps this happened at ownCloud. When one of the founders was on Bad Voltage awhile back, I had this to say about the interview:

The only thing that wasn’t clear to me was the business model. The founder Frank Karlitschek states that ownCloud is not “open core” (or as we like to call it “fauxpensource“) but I’m not clear on their “enterprise” vs. “community” features. My gut tells me that they are on the side of good.

Frank seemed really to be on the side of freedom, and I could see this being a problem if the rest of the ownCloud team wasn’t so dedicated.

On the interview yesterday I asked if Nextcloud was going to have a proprietary (or “enterprise”) version. As you can imagine I am pretty strongly against that.

The reason I asked was from this article on the new company that stated:

There will be two editions of Nextcloud: the free of cost community edition and the paid enterprise edition. The enterprise edition will have some additional features suited for enterprise customers, but unlike ownCloud, the community and enterprise editions for Nextcloud will borrow features from each other more freely.

Frank wouldn’t commit to making all of Nextcloud open, but he does seem genuinely determined to make as much of it open as possible.

Which leads me to wonder, what’s stopping him?

It’s got to be the money guys, right? Look, nothing says that open source companies can’t make money, it’s just you have to do it differently than you would with proprietary software. I can’t stress this enough – if your “open source” business model involves selling proprietary software you are not an open source company.

This is one of the reasons my blood pressure goes up whenever I visit Silicon Valley. Seriously, when I watch the HBO show to me it isn’t a comedy, it’s a documentary (and the fact that I most closely identify with the character of Erlich doesn’t make me feel all that better about myself).

I want to make things. I want to make things that last. I can remember the first true vacation I took, several years after taking over the OpenNMS project, when it had grown to the point that it didn’t need me all the time. I was so happy that it had reached that point. I want OpenNMS to be around well after I’m gone.

It seems, however, that Silicon Valley is more interested in making money rather than making things. They hunt “unicorns” – startups with more than a $1 billion valuation – and frequently no one can really determine how they arrive at that valuation. They are so consumed with jargon that quite often you can’t even figure out what some of these companies do, and many of them fade in value after the IPO.

I can remember a keynote at OSCON by Martin Mickos about Eucalyptus, and how it was “open source” but of course would have proprietary code because “well, we need to make money”. He is one of those Silicon Valley darlings who just doesn’t get open source, and it’s why we now have OpenStack.

The biggest challenge to making money in open source is educating the consumer that free software doesn’t mean free solution. Free software can be very powerful but it comes with a certain level of complexity, and to get the most out of it you have to invest in it. The companies focused on free and open source software make money by providing products that address this complexity.

Traditionally, this has been service and support. I like to say at OpenNMS we don’t sell software, we sell time. Since we do little marketing, all of our users are self selecting (which makes them incredibly intelligent and usually quite physically beautiful) and most of them have the ability to figure out their own issues. But by working with us we can greatly shorten the time to deploy as well as make them aware of options they may not know exist.

In more recent times, there is also the option to offer open source software as a service. Take WordPress, one of my favorite examples. While I find it incredibly easy to install an instance of WordPress, if you don’t want to or if you find it difficult, you can always pay them to host it for you. Change your mind later? You can export it to an instance you control.

The market is always changing and with it there is opportunity. As OpenNMS is a network monitoring platform and the network keeps getting larger, we are focusing on moving it to OpenStack for ultimate scalability, and then coupled with our Minions we’ll have the ability to handle an “Internet of Things” amount of devices. At each point there are revenue opportunities as we can help our clients get it set up in their private cloud, or help them by letting them outsource some or all of it, such as Newts storage. The beauty is that the end user gets to own their solution and they always have the option of bringing it back in house.

None of these models involves requiring a license purchase as part of the business plan. In fact, I can foresee a time in the near future where purchasing a proprietary software product without fully exploring open source alternatives will be considered a breach of fiduciary responsibility.

And these consumers will be savvy enough to demand pure open source solutions. That is why I think Nextcloud, if they are able to focus their revenue efforts on things such as an appliance, has a better chance of success than a company like ownCloud that relies on revenue from software licensing sales. The fact that most of the creators have left doesn’t help them, either.

The lack of revenue from licenses sales makes most VCs panic, and it looks like that’s exactly what happened with the US division of ownCloud:

Unfortunately, the announcement has consequences for ownCloud, Inc. based in Lexington, MA. Our main lenders in the US have cancelled our credit. Following American law, we are forced to close the doors of ownCloud, Inc. with immediate effect and terminate the contracts of 8 employees. The ownCloud GmbH is not directly affected by this and the growth of the ownCloud Foundation will remain a key priority.

I look forward to the time in the not too distant future when the open core model is seen as quaint as selling software on floppy disks at the local electronics store, and I eagerly await the first release of Nextcloud.

Magnus Hedemark : breaking up with Twitter

May 29, 2016 05:36 PM

This morning I found myself locked out of my Twitter account. Twitter claimed that my account was playing shenanigans (inconceivable!) and that to rescue my account, I had to give Twitter my phone number to validate that it’s really me.

Except Twitter never had my phone number before, so giving it to them would validate nothing.

Twitter and I have had a rough year. I was pretty active under the account @Magnus919, but had to abandon it since Twitter did not give me the means by which to clean up the cruft in my content. Earlier this year, I moved to @MagNetDevOps which was being used a little more carefully, and for the purpose of engaging with the professional community in my field.

With @Magnus919, I was using Twitter to connect with people from any of a number of far-flung interests, from DevOps to deafness, from autism to fountain pens. But the way Twitter is structured, it actually punishes people for having a wide variety of interests. It rewards deep focus on one or two special interests. If I spent a couple of days focusing on DevOps, I’d build followers in that space, and then just as quickly lose them when engaging the community of people with disabilities.

So I thought I’d create a number of accounts that were all clearly identified as belonging to me, but each specializing in a single field of interest. Twitter made it impossible to clean up the corpus of old tweets on @Magnus919, so I deleted that account and created @MagNetDevOps to focus on my professional interests. From there, I was going to create some other accounts for other areas of interest, but never got that far.

Photo: Twitter error message that reads: "What happened? Your account appears to have exhibited automated behavior that violates the Twitter Rules. To unlock your account, please complete the steps below and confirm that you are the valid account owner. What you can do: To unlock you account, you must do the following: Verify your phone number" Twitter form that reads: "Add a phone number. Enter the phone number that you would like to associate with your Twitter account. We will send you an SMS with a verification code. SMS fees may apply."

Twitter, you see, has joined Facebook in requiring a phone number in order to have an account. Under duress, I complied with Facebook’s policy. Since Facebook was always meant to be one person / one account / real name, I wasn’t too terribly broken up about it. But Twitter is something else altogether. I’m aware of a number of people who, for their own safety, require keeping a safe space between their online identity and their legal identity. These are people who, for any number of reasons, have legitimate grounds to fear real-world hostility from the people around them online. They may be seeking support for an invisible disability, or trying to have frank conversations about transgender rights. Or maybe, like the many who led the Arab Spring movement, they are speaking out against a government that oppresses them. By requiring such personally identifiable information in conjunction with a Twitter account, Twitter is going to silence those voices.

Twitter made a vague claim that my account had done something wrong, via automation, without giving any details about what it had supposedly been party to. I could not find any evidence of wrong doing on my Twitter account. They claimed that giving them a phone number would validate that I’m the account owner, yet this account was created with an email address and not a phone number. If the goal was to validate that I owned the account, it would make sense to do this via the email account used to create the Twitter account. Twitter seems to want my phone number, but I doubt that their stated reason matches their actual intent. I’m calling Twitter’s integrity into question here.

As a matter of principle, I don’t want to be a part of legitimizing this change in policy. Going forward, my Twitter account will lay fallow until Twitter no longer requires personally identifiable information to be associated with its accounts.


Mark Turner : Obama, Truman, and the atomic bomb

May 27, 2016 09:48 PM

Harry S. Truman

On my port visit to Sasebo, Japan, during my Navy service, I decided to take a tour of Nagasaki. Standing at ground zero of this city was an unexpectedly deeply moving experience for me, one that I will never forget. The U.S. Army photos displayed there of mangled, radiation-poisoned bodies will haunt me forever.

It was a horrendous decision to drop the bomb. Anyone who visits Nagasaki or Hiroshima and does not agree has lost all humanity.

Obama is visiting Hiroshima and some of my right-wing friends are having a hissy fit about it. Many claim this is a “slap in the face to veterans,” though many of them are not veterans themselves, so it’s unclear how they can speak for veterans.

As a veteran I have debated whether dropping the bomb was the right thing to do. I always thought Harry Truman did a lot of good as President but how could I reconcile his decision to nuke hundreds of thousands of people with his good deeds? I’ve since grudgingly come to think it was the right call, given the fanaticism in Japan at the time. Casualties from an invasion of Japan (proposed as Operation Downfall) would have been from 500,000 to over a million in bloody, take-no-prisoners fighting.

So Truman’s decision most likely saved lives, though it brought the world the madness of nuclear weapons. It was a decision we’re still paying for today.

It’s easy to second-guess President Truman today since things look so much different from our perspective. The war, however, has long been over. Japan and America are close friends and important allies.

Should Obama apologize? I really don’t care either way. The only people who do care are the ones who just can’t let go.

Warren Myers : there is no such object on the server

May 27, 2016 10:13 AM

Gee. Thanks, Active Directory.

This is one of the more useless error messages you can get when trying to programmatically access AD.

Feel free to Google (or DuckDuckGo, or Bing, or whomever) that error message. Go ahead, I’ll wait.

Your eyes bleeding, and gray matter leaking from your ears yet? No? Then you obviously didn’t do what I just told you to – go search the error message, I’ll be here when you get back.

Background for how I found this particular gem: I have a customer (same one I was working with on SAP a while back where I had BAPI problems) that is trying to automate Active Directory user provisioning with HP Operations Orchestration. As a part of this, of course, I need to verify I can connect to their AD environment, OUs are reachable, etc etc.

In this scenario, I’m provisioning users into a custom OU (ie not merely Users).

Provisioning into Users doesn’t give this error – only in the custom OU. Which is weird. So we tried making sure there was already a user in the OU, in case the error was being kicked back by having an empty OU (if an OU is empty, does it truly exist?).

That didn’t help.

Finally, after several hours of beard-stroking, diving into deep AD docs, MSDN articles, HP forae, and more … customer’s AD admin says, “hey – how long is the password you’re trying to use; and does it meet 3-of-4?” I reply, “it’s ‘Password!’ – 3-of-4, 9 characters long”. “Make it 14 characters long – for kicks.”

Lo and behold! There is a security policy on that OU that mandates a minimum password length as well as complexity – but that’s not even close to what AD was sending back as an error message. “There is no such object on the server”, as the end result of a failed user create, is 100% useless – all it tells you is the user isn’t there. It doesn’t say anything about why it isn’t there.

Sigh.

Yet another example of [nearly] completely ineffective error messages.

AD should give you something that resembles a why for the what – not merely the ‘what’.

Something like, “object could not be created; security policy violation” – while not 100% of the answer – would put you a lot closer to solving an issue than just “there is no such object on the server”.

Get it together, developers! When other people cannot understand your error messages, regardless of how “smart” they are, what field they work in, etc, you are Doing It Wrong™.

Mark Turner : The Astonishing Age of a Neanderthal Cave Construction Site – The Atlantic

May 27, 2016 12:47 AM

The Bruniquel Cave site is an incredible discovery of the earliest known civilization in Europe, 176,000 years ago. We are learning that our distant Neandertal cousins were at least as clever as we were.

Bruniquel Cave

After drilling into the stalagmites and pulling out cylinders of rock, the team could see an obvious transition between two layers. On one side were old minerals that were part of the original stalagmites; on the other were newer layers that had been laid down after the fragments were broken off by the cave’s former users. By measuring uranium levels on either side of the divide, the team could accurately tell when each stalagmite had been snapped off for construction.

Their date? 176,500 years ago, give or take a few millennia.

Source: The Astonishing Age of a Neanderthal Cave Construction Site – The Atlantic

Mark Turner : Does criticism of government turn off new leaders?

May 27, 2016 12:42 AM

A few weeks ago, a local media outlet published a story taking a few swipes at Raleigh’s city manager. While the criticism was mostly harmless (and city managers know it comes with the territory), it reminded me again that while taking digs at city government might seem to win points with hipster readers, it also alienates those hipsters from possibly getting involved themselves. Make public service look uncool and you run the risk of scaring off good people who might do great things with it.

I’m not saying don’t afflict the comforted when they rightfully earn it, but at the same time if you’re taking swipes just for the sake of taking swipes then you could be inadvertently turning away the bright, creative people who could be doing us all good.

I guess the constant focus on the negative when there’s really a ton of good being done gets tiring to me. And it’s not just the local level but at every level. Maybe it’s human nature to find something to complain about. Or maybe not.

Mark Turner : Soaring profit?

May 27, 2016 12:27 AM

A “free market” story I read tonight reminded me of one of the most surprising aspects of the Wright Brothers’ invention of the airplane. The Bishop’s Boys author Tom D. Crouch makes the point that Wilbur and Orville Wright were not motivated by profit when they began their chase for powered flight. The Wrights took their airplane designs on more as an interesting hobby, funded by their very successful bicycle shop. They were not venture-funded and did not answer to Wall Street. Their innovation grew mainly from their intense curiosity and desire to create things.

That’s not to say that they were altruistic because they certainly weren’t. Once they began flying, the brothers became secretive and litigious. They went after anyone else who seemed to infringe on their patents, with the aim of making as much money as possible.

While they were not top-notch businessmen, they were top-notch engineers. Their love of engineering, not their love of money, wound up making them a fortune.

Tarus Balog : Emley Moor, Kirklees, West Yorkshire

May 26, 2016 10:21 PM

I spent last week back in the United Kingdom. I always find it odd to travel to the UK. When I’m in, say, Germany or Spain, I know I’m in a different country. With the UK I sometimes forget and hijinks ensue. As Shaw may have once said, we are two countries separated by a common language.

Usually I spend time in the South, mainly Hampshire, but this trip was in Yorkshire, specifically West Yorkshire. I was looking forward to this for a number of reasons. For example, I love Yorkshire Pudding, and the Four Yorkshiremen is my favorite Monty Python routine.

Also, it meant that I could fly into Manchester Airport and miss Heathrow. Well, I didn’t exactly miss it.

I was visiting a big client that most people have never heard of, even though they are probably an integral part of your life if you live in the UK. Arqiva provides the broadcast infrastructure for much of the television and mobile phone industry in the country, as well as being involved in deploying networks for projects such as smart metering and the Internet of Things.

We were working at the Emley Moor location, which is home to the Emley Moor Mast. This is the largest freestanding structure in Britain (and third in the European Union). With a total height of 1084 feet, it is higher than the Eiffel Tower and almost twice as high as the Washington Monument.

Emley Moor Mast View

The mast was built in 1971 to replace a metal lattice tower that fell, due to a combination of ice and wind, in 1969. I love the excerpt from the log book mentioned in the Wikipedia article:

  • Day: Lee, Caffell, Vander Byl
  • Ice hazard – Packed ice beginning to fall from mast & stays. Roads close to station temporarily closed by Councils. Please notify councils when roads are safe (!)
  • Pye monitor – no frame lock – V10 replaced (low ins). Monitor overheating due to fan choked up with dust- cleaned out, motor lubricated and fan blades reset.
  • Evening :- Glendenning, Bottom, Redgrove
  • 1,265 ft (386 m) Mast :- Fell down across Jagger Lane (corner of Common Lane) at 17:01:45. Police, I.T.A. HQ, R.O., etc., all notified.
  • Mast Power Isolator :- Fuses removed & isolator locked in the “OFF” position. All isolators in basement feeding mast stump also switched off. Dehydrators & TXs switched off.

They still have that log book, open to that page.

Emley Moor Log Book

If you have 20 minutes, there is a great old documentary on the fall of the old tower and the construction of the new mast.

On my last day there we got to go up into the structure. It’s pretty impressive:

Emley Moor Mast Up Close

and the inside looks like something from a 1970s sci-fi movie:

Emley Moor Mast Inside

The article stated that it takes seven minutes to ride the lift to the top. I timed it at six minutes, fifty-seven seconds, so that’s about right (it’s fifteen seconds quicker going down). I was working with Dr. Craig Gallen who remembers going up in the open lift carriage, but we were in an enclosed car. It’s very small and with five of us in it I will admit to a small amount of claustrophobia on the way up.

But getting to the top is worth it. The view is amazing:

View from Emley Moor Mast

It was a calm day but you could still feel the tower sway a bit. They have a plumb bob set up to measure the drift, and it was barely moving while we were up there. Toby, our host, told of a time he had to spend seven hours installing equipment when the bob was moving four to five inches side to side. They had to move around on their hands and knees to avoid falling over.

Plumb Bob

I’m glad I wasn’t there on that day, but our day was fantastic. Here is a shot of the parking lot where the first picture (above) was taken.

View of Emley Moor Parking Lot

I had a really great time on this trip. The client was amazing, and I really like the area. It reminds me a bit of the North Carolina mountains. I did get my Yorkshire Pudding in Yorkshire (bucket list item):

Yorkshire Pudding in Yorkshire

and one evening Craig and I got to meet up with Keith Spragg.

Keith Spragg and Craig Gallen

Keith is a regular on the OpenNMS IRC channel (#opennms on freenode.net), and he works for Southway Housing Trust. They are a non-profit that manages several thousand homes, and part of that involves providing certain IT services to their tenants. They are mainly a Windows/Citrix shop but OpenNMS is running on one of the two Linux machines in their environment. He tried out a number of solutions before finding that OpenNMS met his needs, and he pays it forward by helping people via IRC. It always warms my heart to see OpenNMS being used in such places.

I hope to return to the area, although I was glad I was there in May. It’s around 53 degrees north latitude, which puts it level with the southern Alaskan islands. It would get light around 4am, and in the winter ice has been known to fall in sheets from the Mast (the walkways are covered to help protect the people who work there).

I bet Yorkshire Pudding really hits the spot on a cold winter’s day.

Mark Turner : Clinton allies blame Bernie for bad polls | TheHill

May 24, 2016 11:15 PM

Here it goes. Clinton supporters are already blaming Sanders for Clinton losing to Trump. It has nothing to do with all of Clinton's faults, of course. Oh no. If she doesn't win, surely it must be Bernie Sanders's fault.

I’m so tired of Clinton playing the victim card. All. The. Time. The same thing played out in this political cartoon.

Poor Hillary.

Hillary Clinton allies worried about polls that suggest a tightening general election match-up with Donald Trump are placing blame on Bernie Sanders. They say that the long primary fight with the independent senator from Vermont, which looks like it could go all the way to the Democratic convention in Philadelphia, has taken a toll on Clinton’s standing in the polls. In the latest RealClearPolitics average, she is two-tenths of a point behind Trump, the presumptive GOP presidential nominee.

The surrogates say they’re concerned that Sanders is still — this late in the game — throwing shots at Clinton and the Democratic establishment.

“I don’t think he realizes the damage he’s doing at this point,” one ally said of Sanders. “I understand running the campaign until the end, fine. But at least take the steps to begin bringing everyone together.”

Source: Clinton allies blame Bernie for bad polls | TheHill

Alan Porter : Moogfest

May 22, 2016 05:19 PM

This is either a story of poorly-managed expectations, or of me being an idiot, depending on how generous you’re feeling.

Eight months ago, when I heard that Moogfest was coming to Durham, I jumped on the chance to get tickets. I like electronic music, and I’ve always been fascinated by sound and signals and even signal processing mathematics. At the time, I was taking an online course in Digital Signal Processing for Music Applications. I recruited a wingman; my friend Jeremy is also into making noise using open source software.

moogfest2016

The festival would take place over a four-day weekend in May, so I signed up for two vacation days and I cleared the calendar for four days of music and tech geekery. Since I am not much of a night-owl, I wanted to get my fill of the festival in the daytime and then return home at night… one benefit of being local to Durham.

Pretty soon, the emails started coming in… about one a week, usually about some band or another playing in Durham, with one or two being way off base, about some music-related parties on the west coast. So I started filing these emails in a folder called “moogfest”. Buried in the middle of that pile would be one email that was important… although I had purchased a ticket, I’d need to register for workshops that had limited attendance.

Unfortunately, I didn’t do any homework in advance of Moogfest. You know, life happens. After all, I’d have four days to deal with the festival. So Jeremy and I showed up at the American Tobacco campus on Thursday with a clean slate… dumb and dumber.

Thursday

Moog shop keyboards

Thursday started with drizzly rain to set the mood.

I’m not super familiar with Durham, but I know my way around the American Tobacco campus, so that’s where we started. We got our wristbands, visited the Modular Marketplace (a very small and crowded vendor area where they showed off modular synthesizer blocks) and the Moog Pop-up Factory (one part factory assembly area, and one part Guitar Center store).  Thankfully, both of these areas made heavy use of headphones to keep the cacophony down.

From there, we ventured north, outside of my familiarity. The provided map was too small to really make any sense of — mainly because they tried to show the main festival area and the outlying concert area on the same map. So we spent a lot of time wandering, trying to figure out what we were supposed to see. We got lost and stopped for a milkshake and a map-reading. Finally, we found the 21c hotel and museum. There were three classrooms inside the building that housed workshops and talks, but that was not very clearly indicated anywhere. At every turn, it felt like we were in the “wrong place“.

girl in Moog shop

We finally found a talk on “IBM Watson: Cognitive Tech for Developers“. This was one of the workshops that required pre-registration, but there seemed to be room left over from no-shows, so they let us in. This ended up being a marketing pitch for IBM’s research projects — nothing to do with music synthesis or really even with IBM’s core business.

Being unfamiliar with Durham, and since several points on the map seemed to land in a large construction area, we wandered back to the American Tobacco campus for a talk. We arrived just after the talk started, so the doors were closed. So we looked for lunch. There were a few sit-down restaurants, but not much in terms of quick meals (on Friday, I discovered the food trucks).

Finally, we declared Thursday to be a bust, and we headed home.

We’d basically just spent $200 and a vacation day to attend three advertising sessions.  I seriously considered just going back to work on Friday.

With hopes of salvaging Friday, I spent three hours that night poring over the schedule to figure out how it’s supposed to be done.

  • I looked up all of the venues, noting that several were much farther north than we had wandered.
  • I registered (wait-listed) for workshops that might be interesting.
  • I tried to visualize the entire day on a single grid, gave up on that, and found I could filter the list.
  • I read the descriptions of every event and put a ranking on my schedule.
  • I learned – much to my disappointment – that the schedule was clearly divided at supper time, with talks and workshops in the daytime and music at night.
  • I made a specific plan for Friday, which included sleeping in later and staying later in the night to hear some music.

Friday

I flew solo on Friday, starting off with some static displays and exploring the venues along West Morgan Street (the northern area).  Then I attended a talk on “Techno-Shamanism“, a topic that looked interesting because it was so far out of my experience.  The speaker was impressively expressive, but it was hard to tell whether he was sharing deep philosophical secrets or just babbling eloquently… I am still undecided.

I rushed off to the Carolina Theater for a live recording of the podcast “Song Exploder“.  However, the theater filled just as I arrived — I mean literally, the people in front of me were seated — and the rest of the line was sent away.  Severe bummer.

I spent a lot of time at a static display called the Wifi Whisperer, something that looked pretty dull from the description in the schedule, but that was actually pretty intriguing.  It showed how our phones volunteer information about previous wifi spots we have attached to.  My question – why would my phone share with the Moogfest network the name of the wifi from the beach house we stayed at last summer?  Sure enough, it was there on the board!
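
Just for my own curiosity, I later sketched out how a display like that might work. This is not the Wifi Whisperer's actual code, only a rough illustration in Python with scapy (the "mon0" interface name and the monitor-mode setup are my assumptions), listening for the probe requests our phones broadcast while hunting for networks they have joined before:

    # Rough sketch only, not the Wifi Whisperer's code. Assumes a wireless
    # card already switched into monitor mode and exposed as "mon0".
    from scapy.all import sniff
    from scapy.layers.dot11 import Dot11, Dot11ProbeReq, Dot11Elt

    def show_probe(pkt):
        # Probe requests advertise the SSIDs a device is looking for, which
        # often includes networks it has joined in the past.
        if pkt.haslayer(Dot11ProbeReq):
            elt = pkt.getlayer(Dot11Elt)
            while elt is not None:
                if elt.ID == 0 and elt.info:   # element ID 0 is the SSID
                    ssid = elt.info.decode(errors="ignore")
                    print(pkt[Dot11].addr2, "is asking for", repr(ssid))
                elt = elt.payload.getlayer(Dot11Elt)

    sniff(iface="mon0", prn=show_probe, store=False)

Newer phones randomize their MAC addresses and often leave the SSID out of their probes, so a board like this presumably sees a little less every year.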

Polyrhythmic Loops

Determined to not miss any more events, I rushed back to ATC for a talk on Polyrhythmic Loops, where the speaker demonstrated how modular synth clocks can be used to construct complex rhythms by sending sequences of triggers to sampler playback modules.  I kind of wish we could’ve seen some of the wire-connecting madness involved, but instead he did a pretty good job of describing what he was doing and then he played the results.  It was interesting, but unnecessarily loud.
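
The underlying trick is easy to demonstrate away from the hardware. Here is a tiny, purely illustrative Python sketch (nothing to do with his actual modular rig) that mimics two clock dividers firing triggers off a shared master clock, which is all it takes to get a 3-against-4 polyrhythm:

    # Illustrative only: a software stand-in for two clock dividers driving
    # sample triggers, printing the pulses on which each "voice" would fire.
    MASTER_PPQ = 12          # pulses per beat from the master clock
    BEATS = 4                # length of the loop in beats

    def divider(division):
        """Return the pulse numbers on which a /division output fires."""
        step = MASTER_PPQ * BEATS // division
        return {n * step for n in range(division)}

    kick = divider(4)        # four evenly spaced hits per loop
    clave = divider(3)       # three evenly spaced hits per loop

    for pulse in range(MASTER_PPQ * BEATS):
        hits = [name for name, voice in (("kick", kick), ("clave", clave))
                if pulse in voice]
        if hits:
            print("pulse %2d: %s" % (pulse, " + ".join(hits)))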

The daytime talks were winding down, and my last one was about Kickstarter-funded music projects.

To fill the gap until the music started, I went to “Tech Jobs Under the Big Top“, a job fair that is held periodically in RTP.  As if to underscore the craziness of “having a ticket but still needing another registration” that plagued Moogfest, the Big Top folks required two different types of registration that kept me occupied for much longer than the time I actually spent inside their tent.  Note: the Big Top event was not part of Moogfest, but they were clearly capitalizing on the location, and they were even listed in the Moogfest schedule.

Up until this point, I had still not heard any MUSIC.

Sonic Pi

My wingman returned and we popped into our first music act: Sam Aaron playing a "Live Coding" set on his Sonic Pi.  This performance finally brought Moogfest back into the black, justifying the ticket price and the hassles of the earlier schedule.  His set was unbelievable, dropping beats from the command line like a Linux geek.

Grimes

To wrap up the night, we hiked a half mile to the MotorCo stage to see Grimes, one of the headline attractions of Moogfest.  Admittedly, I am not part of the target audience for this show, since I had never actually heard of Grimes, and I am about 20 years older than many of the attendees.  But I had been briefly introduced to her sound at one of the static displays, so I was stoked for a good show.  However, the performance itself was really more of a military theatrical production than a concert.

Sure, there was a performer somewhere on that tiny stage in the distance, but any potential talent there was hidden behind explosions of LEDs and lasers, backed by a few kilotons of speaker blasts.

When the bombs stopped for a moment, the small amount of interstitial audience engagement reminded me of a middle school pep rally, both in tone and in body language. The words were mostly indiscernible, but the message was clear.  Strap in, because this rocket is about to blast off!  We left after a few songs.

Saturday

Feeling that I had overstayed my leave from home, I planned a light docket for Saturday. There were only two talks that I wanted to see, both in the afternoon. I could be persuaded to see some more evening shows, but at that point, I could take them or leave them.

Some folks from Virginia Tech gave a workshop on the “Linux Laptop Orchestra” (titled “Designing Synthesizers with Pd-L2Ork“). From my brief pre-study, it looked like a mathematical tool used to design filters and create synthesizers. Instead, it turned out to be an automation tool similar to PLC ladder logic that could be used to trigger the playback of samples in specific patterns. This seemed like the laptop equivalent to the earlier talk on Polyrhythmic Loops done with synth modules. The talk was more focused on the wide array of toys (raspi, wii remotes) that could be connected to this ecosystem, and less about music. Overall, it looked like a very cool system, but not enough to justify a whole lot of tinkering to get it to run on my laptop (for some reason, my Ubuntu 15.10 and 16.04 systems both rejected the .deb packages because of package dependencies — perhaps this would be a good candidate for a docker container).

The final session of Moogfest (for me, at least) was the workshop behind Sam Aaron’s Friday night performance. Titled “Synthesize Sounds with Live Code in Sonic Pi“, he explained in 90 minutes how to write Ruby code in Sonic Pi, how to sequence samples and synth sounds, occasionally diving deep into computer science topics like the benefits of pseudo-randomness and concurrency in programs. Sam is a smart fellow and a natural teacher, and he has developed a system that is both approachable by school kids and sophisticated enough for post-graduate adults.
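
One idea from the session translates to any language: seed the pseudo-random generator and a "random" melody becomes repeatable, which is what lets a live coder improvise and still stay in control. Here is that idea in a few lines of Python (not Sonic Pi code, which is Ruby-based, and the note values are just my own stab at a pentatonic riff):

    # The pseudo-randomness idea from the workshop, outside of Sonic Pi.
    # Seeding the generator makes a "random" riff repeatable on every run.
    import random

    SCALE = [52, 55, 57, 59, 62, 64]    # rough E minor pentatonic MIDI notes

    def riff(seed, length=8):
        rng = random.Random(seed)       # independent, reproducible stream
        return [rng.choice(SCALE) for _ in range(length)]

    print(riff(4))    # same seed, same riff, run after run
    print(riff(4))
    print(riff(99))   # new seed, new riff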

Wrap Up

I skipped Sunday… I’d had enough.

My wife asked me if I would attend again next year, and I'm undecided (they DID announce 2017 dates today).  I am thrilled that Moogfest has decided to give Durham a try. But for me personally, the experience was an impedance mismatch. I think a few adjustments, both on my part and on the part of the organizers, would make the festival a lot more attractive.  Here is a list of suggestions that could help.

  • Clearly, I should've done my homework.  I should have read through each and every one of the 58 emails I received from them, possibly as I received them, rather than stockpiling them for later.  I should have tuned in more closely a few weeks in advance of the date for some advance planning as the schedule materialized.
  • Moogfest could have been less prolific with their emails, and clearly labeled the ones that required some action on my part.
  • The organizers could schedule music events throughout the day instead of just during the night shift… I compare this festival with the IBMA Wide Open Bluegrass festival in Raleigh, which has music throughout the day and into the nights.  Is there a particular reason why electronic music has to be played at night?
  • I would enjoy a wider variety of smaller, more intimate performances, rather than megawatt-sized blockbuster performances.  At least one performance at the Armory was loud enough to send me out of the venue, even though I had earplugs.  It was awful.
  • The festival could be held in a tighter geographic area.  The American Tobacco Campus ended up being an outlier, with most of the action being between West Morgan Street and West Main Street (I felt like ATC was only included so Durham could showcase it for visitors).  Having the events nearer to one another would mean less walking to-and-from events (I walked 14½ miles over the three days I attended).  Shuttle buses could be provided for the severely outlying venues like MotorCo.
  • The printed schedule could give a short description of the sessions, because the titles alone did not mean much.  Static displays (red) should not be listed on the schedule as if they are timed events.
  • The web site did a pretty good job of slicing and dicing the schedule, but I would like to be able to vote items up and down, then filter by my votes (don’t show me anything I have already thumbs-downed).  I would also like to be able to turn on and off entire categories – for example, do not show me the (red) static events, but show all (orange) talks and (grey) workshops.
  • The register-for-workshops process was clearly broken.  As a late-registerer, my name was not on anyone’s printed list.  But there was often room anyway, because there’s no reason for anyone to ever un-register for a workshop they later decided to skip.
  • The time slots did not offer any time to get to and from venues.  Maybe they should be staggered (northern-most events start on the hour, southern-most start on the half-hour) to give time for walking between them.

All in all, I had a good time.  But I feel like I burned two vacation days (and some family karma/capital) to attend a couple of good workshops and several commercial displays.  I think I would have been just as happy to attend only on Saturday and Sunday, if the music and talks were intermixed throughout the day and did not require me to stick around until 2am.

Alan Porter : Duck Patrol

May 15, 2016 11:19 PM

IMG_2176

On my way home today, I stopped by our neighborhood gas station to fill up the tank. As I was leaving, I noticed a mother duck and four ducklings walking along the curb of the shopping center driveway. They were making a lot of noise. The mother was cluck-cluck-clicking, and the ducklings were cheep-cheep-cheeping.

IMG_2177

They were standing pretty close to a storm drain. Then a car came whizzing by and one of the ducklings jumped into the storm drain! I went over to the storm drain and found six ducklings at the bottom!

So I rushed home and recruited Audrey and Sydney, who were eager to help. We got some buckets and brooms and some rope and went back to the shopping center. By that time, a couple of other people were gathered around, and they said they had called the Cary Police.

We went ahead and lifted the storm drain grate and one lady climbed in, carrying a bucket. One by one, she lured them close and plucked them up and into the bucket!

IMG_2183 IMG_2180 IMG_2182

The Policeman finally showed up, and we went looking for the mother duck and the other three ducklings. They could’ve been in the woods or near one of the storm drains. We finally spotted them in the pond across the street.

IMG_2186

So we carried our bucket to the pond. When we got close, the mother heard the ducklings cheeping and she ran over to us. Sydney laid the bucket down sideways in the grass and we all backed away. The mother duck ran to us, quacking like crazy, and all of the ducklings started cheeping even louder. The mother went to the bucket and then escorted them all down the grass and into the pond. And then they swam away in a tight formation, all nine babies clinging closely behind the mother.

Sydney said that it was the best day ever!

Mark Turner : Rosie the Seaboard Station ghost?

May 15, 2016 01:59 AM

Does Rosie the Riveter have a doppelganger at Seaboard Station?

I needed a part to fix our broken dishwasher so I drove over to Seaboard Ace Friday morning before work. On my way out of the store, I spotted an African American woman slowly walking toward me from the north in the parking lot. I did not want to keep her waiting as I backed out of the space so I wasted no time in getting going. Sure I was out of her way, I headed towards the lot’s exit. In the time it took me to reach the stop sign in front of Logan’s Trading Company the woman had somehow made it into the next parking lot, where the Phydeaux store used to be.

I was stunned. I was sure I backed out of the space before this woman could’ve reached my car, and somehow she had beaten my car to the stop sign? How?

Not wanting to seem like I was stalking her, I continued left to Halifax Street, then turned right to go back down the little one-way alley between Phydeaux and 18 Seaboard. The woman was still in the Phydeaux parking lot, this time slowly walking west.

Just to make sure I hadn’t mistaken the woman for another one dressed similarly, I drove back down in front of the hardware store. No other similarly-dressed women were around. I turned around just past Peace China and headed back towards the woman.

This time when I reached the Phydeaux parking lot the woman was gone. I drove the counterclockwise loop from Logan’s back to the one-way alley but could not find her.

I still couldn’t believe what I had just seen. How did this slow-walking woman suddenly leap ahead of me? And where had she gone? What had just happened here!?

Not sure what I had just seen, I later emailed the hardware store owner (an acquaintance of mine), asking if he may have captured video of the encounter. Unfortunately, his cameras pointed the wrong way. How else could I rule things out, I wondered.

I had to make another trip to the hardware store this afternoon and took the opportunity to time how long it would take for the woman to travel that distance. At my usual, typically brisk pace it took me 80 seconds. I then tried driving it the way I did yesterday. Even though I had to wait a moment when some rude driver insisted on passing me while I backed out, I still made it to the stop sign in 48 seconds.

I should have beaten the woman to the intersection by roughly half a minute, give or take. That’s a long time. If it were a race it would have been no contest.

I never saw her cross behind my car as I was backing up (remember my goal was to get out before she reached me). If she had opted to take the sidewalk instead of the lot this almost certainly would have added 5-10 seconds to her trip as she navigated the steps. So that’s unlikely.

Then I recalled my coworkers’ stories of ghosts that hung around the nearby, century-old Pilot Mill offices where I work. Pilot Mill and Seaboard Station have been around for many decades and might be unsurprising places to find ghosts. Was this a ghost I had seen?

Both times I saw her she looked like any person would. She was a medium-complexioned African American woman, stocky build, who wore a long blue denim dress, dark-rimmed glasses, and had a headcovering, perhaps a blue handkerchief. She was moving somewhat slowly, hobbling really, and did not at all seem like anyone capable of sprinting ahead of me even if she had wanted to. Her clothes were a little homely but not entirely out of place. They looked like work clothes, and could have been in style from 80 years ago to today. Rosie the Riveter comes to mind.

Was “Rosie” a ghost? If not, how the hell did she make it to the next parking lot so astonishingly fast? And without me seeing her? I can think of few rational explanations. The world is a strange place indeed.

Mark Turner : What science knows (and doesn’t know) about animals

May 14, 2016 10:47 PM

I was unexpectedly on-call Monday night, and the pages I got made me sleep very lightly the rest of the night. When 3:30 AM rolled around, I was a little surprised to be serenaded by the birds outside. As I dozed, I began to wonder what it is about 3:30 AM that prompts the birds to sing. There can be no sign of dawn at that early hour, even on May 10th. Is there some sort of environmental variable that tips birds off that it's time to sing?

Later that day, I naturally did some Googling on the research about birds. A query on "what makes birds sing in the morning" brought up a few interesting articles but also left me exasperated.

Here’s why. So much of the research into this is incomplete. For instance, around 2003, two researchers attempted to see what made the early morning special to birds, but the way they tested it was by playing recordings of bird songs at various times of day and comparing how their human ears perceived those sounds. The theory they were testing (and ultimately claimed to confirm) was that sound traveled better in the morning (allegedly when the wind isn’t blowing). Though their sound theory was later disproved, it bothered me that this was the test they tried since it was based on a flawed assumption. They tested how sound is perceived by humans at various times of day but didn’t actually test the damn birds.

It got me thinking that what we know about the animal kingdom is laughably incomplete. Other bird studies theorize that birds are announcing their territory. How does anyone know this? How can we humans really know the motivation behind animal behavior? Is it all about territory, or have we been grossly simplifying things?

It may make me a bad science nerd, but I’ve never been fully convinced that the motivations we have attributed to animals in the Theory of Evolution and other theories are accurate. Since the time of Darwin we’ve learned so much more about the beings we share the planet with. What these studies show is that we have long greatly underestimated the abilities of animals. Animal behavior is far more sophisticated than we’ve recognized. The old thought that other animals couldn’t be possibly intelligent due to their smaller brains is beginning to change.

Just because other Earth creatures cannot build a nuclear weapon does not mean they don’t possess intelligence (and frankly, as far as nukes are concerned, they show more intelligence). Just because animals don’t possess our sophisticated spoken language doesn’t mean they don’t have their own rich means of communication.

So, what cranks birds up before the crack of dawn? I don't know. Then again, from what I can gather, it seems that no one else really knows either, and I find this really astounding.

Tarus Balog : OpenNMS Horizon 18 “Tardigrade” Is Now Available

May 11, 2016 05:03 PM

I am extremely happy to announce the availability of Horizon 18, codenamed "Tardigrade". Ben is responsible for naming our releases and he's decided that the theme for Horizon 18 will be animals. The name "Tardigrade" was suggested in the IRC channel by Uberpenguin, and while they aren't the prettiest things, Wikipedia describes them as "perhaps the most durable of known organisms", so in the context of OpenNMS the name is appropriate.

OpenNMS Horizon 18

I am also happy to see the Horizon program working. When we split OpenNMS into Horizon and Meridian, the main reason was to drive faster development. Now instead of a new stable release every 18 months, we are getting them out every 3 to 4 months. And these are great releases – not just major releases in name only.

The first thing you’ll notice if you log in to Horizon 18 as a user in the admin role is that we’ve added a new “opt-in” feature that let’s us know a little bit about how OpenNMS is being used by people. We hope that most of you will choose to send us this information, and in the spirit of the Open Source Way we’ve made all of the statistics available publicly.

OpenNMS Opt-In Screen

One of the key things we are looking for is the list of SNMP Object IDs. This will let us know what devices are being monitored by our users and help us increase the level of support for them. Of course, this requires that your OpenNMS instance be able to reach the stats server on the Internet, and you can change your choice at any time on the Configuration admin page under "Data Choices". It will only send this information once every 24 hours, so we don't expect it to impact network traffic at all.

Once you’ve opted in, the next thing you’ll probably notice is new problem lists on the home page listing “services” and “applications”.

OpenNMS BSM Problem Lists

This relates to the major feature addition in Horizon 18: the Business Service Monitor (BSM).

OpenNMS BSM OpenDaylight

As people move from treating servers as pets to treating them like cattle, the emphasis has shifted to understanding how well applications and microservices are running as a whole instead of focusing on individual devices. The BSM allows you to configure these services and then leverage all the usual OpenNMS crunchy goodness as you would a legacy service like HTTP running on a particular box. The above screenshot comes from some prototype work Jesse has been doing to integrate OpenNMS with OpenDaylight. As you can see at a glance, while the ICMP service is down on a particular device, the overall Network Fabric is still functioning perfectly.
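
To make the idea concrete, here is a small sketch of the kind of roll-up a business service performs. This is not OpenNMS code or its API, just an illustration (with made-up names) of reducing the severities of a service's children into a single parent status:

    # Conceptual sketch only; not the OpenNMS BSM API. A business service
    # rolls the severities of its children up into one status, so a single
    # dead box does not necessarily mark the whole "Network Fabric" as down.
    SEVERITIES = ["Normal", "Warning", "Minor", "Major", "Critical"]

    def highest_severity(children):
        """One possible reduction: the parent takes the worst child state."""
        return max(children, key=SEVERITIES.index)

    def majority_unhealthy(children):
        """Another: only degrade the parent if most children are unhealthy."""
        bad = sum(1 for s in children if s != "Normal")
        return "Major" if bad > len(children) / 2 else "Normal"

    # Hypothetical children of a "Network Fabric" service: ICMP is down on
    # one device, everything else is healthy.
    children = ["Critical", "Normal", "Normal", "Normal"]
    print(highest_severity(children))    # Critical: pessimistic roll-up
    print(majority_unhealthy(children))  # Normal: the fabric is still fine

As I understand it, how that reduction is done is configurable in the BSM; the point is simply that status now belongs to the service as a whole rather than to any one box.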

Another thing I’m extremely proud of is the increase in the quality of documentation. Ronny and the rest of the documentation team are doing a great job, and we’ve made it a requirement that new features aren’t complete without documentation. Please check out the release notes as an example. It contains a pretty comprehensive lists of changes in 18.

A few I’d like to point out:

Horizon 17 is one of the most powerful and stable releases of OpenNMS ever, and we hope to continue that tradition with Horizon 18. Hats off to the team for such great work.

Here is a list of all the issues addressed in Horizon 18:

Release Notes – OpenNMS – Version 18.0.0

Bug

  • [NMS-3489] – "ADD NODE" produces "too much" config
  • [NMS-4845] – RrdUtils.createRRD log message is unclear
  • [NMS-5788] – model-importer.properties should be deprecated and removed
  • [NMS-5839] – Bring WaterfallExecutor logging on par with RunnableConsumerThreadPool
  • [NMS-5915] – The retry handler used with HttpClient is not going to do what we expect
  • [NMS-5970] – No HTML title on Topology Map
  • [NMS-6344] – provision.pl does not import requisitions with spaces in the name
  • [NMS-6549] – Eventd does not honor reloadDaemonConfig event
  • [NMS-6623] – Update JNA.jar library to support ARM based systems
  • [NMS-7263] – jaxb.properties not included in jar
  • [NMS-7471] – SNMP Plugin tests regularly failing
  • [NMS-7525] – ArrayOutOfBounds Exception in Topology Map when selecting bridge-port
  • [NMS-7582] – non RFC conform behaviour of SmtpMonitor
  • [NMS-7731] – Remote poller dies when trying to use the PageSequenceMonitor
  • [NMS-7763] – Bridge Data is not Collected on Cisco Nexus
  • [NMS-7792] – NPE in JmxRrdMigratorOffline
  • [NMS-7846] – Slow LinkdTopologyProvider/EnhancedLinkdTopologyProvider in bigger enviroments
  • [NMS-7871] – Enlinkd bridge discovery creates erroneous entries in the Bridge Forwarding Tables of unrelated switches when host is a kvm virtual host
  • [NMS-7872] – 303 See Other on requisitions response breaks the usage of the Requisitions ReST API
  • [NMS-7880] – Integration tests in org.opennms.core.test-api.karaf have incomplete dependencies
  • [NMS-7918] – Slow BridgeBridgeTopologie discovery with enlinkd.
  • [NMS-7922] – Null pointer exceptions with whitespace in requisition name
  • [NMS-7959] – Bouncycastle JARs break large-key crypto operations
  • [NMS-7967] – XML namespace locations are not set correctly for namespaces cm, and ext
  • [NMS-7975] – Rest API v2 returns http-404 (not found) for http-204 (no content) cases
  • [NMS-8003] – Topology-UI shows LLDP links not correct
  • [NMS-8018] – Vacuumd sends automation events before transaction is closed
  • [NMS-8056] – opennms-setup.karaf shouldn't try to start ActiveMQ
  • [NMS-8057] – Add the org.opennms.features.activemq.broker .xml and .cfg files to the Minion repo webapp
  • [NMS-8058] – Poll all interface w/o critical service is incorrect
  • [NMS-8072] – NullPointerException for NodeDiscoveryBridge
  • [NMS-8079] – The OnmsDaoContainer does not update its cache correctly, leading to a NumberFormatException
  • [NMS-8080] – VLAN name is not displayed
  • [NMS-8086] – Provisioning Requisitions with spaces in their name.
  • [NMS-8096] – JMX detector connection errors use wrong log level
  • [NMS-8098] – PageSequenceMonitor sometimes gives poor failure reasons
  • [NMS-8104] – init script checkXmlFiles() fails to pick up errors
  • [NMS-8116] – Heat map Alarms/Categories do not show all categories
  • [NMS-8118] – CXF returning 204 on NULL responses, rather than 404
  • [NMS-8125] – Memory leak when using Groovy + BSF
  • [NMS-8128] – NPE if provisioning requisition name has spaces
  • [NMS-8137] – OpenNMS incorrectly discovers VLANs
  • [NMS-8146] – "Show interfaces" link forgets the filters in some circumstances
  • [NMS-8167] – Cannot search by MAC address
  • [NMS-8168] – Vaadin Applications do not show OpenNMS favicon
  • [NMS-8189] – Wrong interface status color on node detail page
  • [NMS-8194] – Return an HTTP 303 for PUT/POST request on a ReST API is a bad practice
  • [NMS-8198] – Provisioning UI indication for changed nodes is too bright
  • [NMS-8208] – Upgrade maven-bundle-plugin to v3.0.1
  • [NMS-8214] – AlarmdIT.testPersistManyAlarmsAtOnce() test ordering issue?
  • [NMS-8215] – Chart servlet reloads Notifd config instead of Charts config
  • [NMS-8216] – Discovery config screen problems in latest code
  • [NMS-8221] – Operation "Refresh Now" and "Automatic Refresh" referesh the UI differently
  • [NMS-8224] – JasperReports measurements data-source step returning null
  • [NMS-8235] – Jaspersoft Studio cannot be used anymore to debug/create new reports
  • [NMS-8240] – Requisition synchronization is failing due to space in requisition name
  • [NMS-8248] – Many Rcsript (RScript) files in OPENNMS_DATA/tmp
  • [NMS-8257] – Test flapping: ForeignSourceRestServiceIT.testForeignSources()
  • [NMS-8272] – snmp4j does not process agent responses
  • [NMS-8273] – %post error when Minion host.key already exists
  • [NMS-8274] – All the defined Statsd's reports are being executed even if they are disabled.
  • [NMS-8277] – %post failure in opennms-minion-features-core: sed not found
  • [NMS-8293] – Config Tester Tool doesn't check some of the core configuration files
  • [NMS-8298] – Label of Vertex is too short in some cases
  • [NMS-8299] – Topology UI recenters even if Manual Layout is selected
  • [NMS-8300] – Center on Selection no longer works in STUI
  • [NMS-8301] – v2 Rest Services are deployed twice to the WEB-INF/lib directory
  • [NMS-8302] – Json deserialization throws "unknown property" exception due to usage of wrong Jax-rs Provider
  • [NMS-8304] – An error on threshd-configuration.xml breaks Collectd when reloading thresholds configuration
  • [NMS-8313] – Pan moving in Topology UI automatically recenters
  • [NMS-8314] – Weird zoom behavior in Topology UI using mouse wheel
  • [NMS-8320] – Ping is available for HTTP services
  • [NMS-8324] – Friendly name of an IP service is never shown in BSM
  • [NMS-8330] – Switching Topology Providers causes Exception
  • [NMS-8335] – Focal points are no longer persisted
  • [NMS-8337] – Non-existing resources or attributes break JasperReports when using the Measurements API
  • [NMS-8353] – Plugin Manager fails to load
  • [NMS-8361] – Incorrect documentation for org.opennms.newts.query.heartbeat
  • [NMS-8371] – The contents of the info panel should refresh when the vertices and edges are refreshed
  • [NMS-8373] – The placeholder {diffTime} is not supported by Backshift.
  • [NMS-8374] – The logic to find event definitions confuses the Event Translator when translating SNMP Traps
  • [NMS-8375] – License / copyright situation in release notes introduction needs simplifying
  • [NMS-8379] – Sluggish performance with Cassandra driver
  • [NMS-8383] – jmxconfiggenerator feature has unnecessary includes
  • [NMS-8386] – Requisitioning UI fails to load in modern browsers if used behind a proxy
  • [NMS-8388] – Document resources ReST service
  • [NMS-8389] – Heatmap is not showing
  • [NMS-8394] – NoSuchElement exception when loading the TopologyUI
  • [NMS-8395] – Logging improvements to Notifd
  • [NMS-8401] – There are errors on the graph definitions for OpenNMS JMX statistics
  • [NMS-8403] – Document styles of identifying nodes in resource IDs

Enhancement

  • [NMS-2504] – Create a better landing page for Configure Discovery aftermath
  • [NMS-4229] – Detect tables with Provisiond SNMP detector
  • [NMS-5077] – Allow other services to work with Path Outages other than ICMP
  • [NMS-5905] – Add ifAlias to bridge Link Interface Info
  • [NMS-5979] – Make the Provisioning Requisitions "Node Quick-Add" look pretty
  • [NMS-7123] – Expose SNMP4J 2.x noGetBulk and allowSnmpV2cInV1 capabilities
  • [NMS-7446] – Enhance Bridge Link Object Model
  • [NMS-7447] – Update BridgeTopology to use the new Object Model
  • [NMS-7448] – Update Bridge Topology Discovery Strategy
  • [NMS-7756] – Change icon for Dell PowerConnector switch
  • [NMS-7798] – Add Sonicwall Firewall Events
  • [NMS-7903] – Elasticsearch event and alarm forwarder
  • [NMS-7950] – Create an overview for the developers guide
  • [NMS-7965] – Add support for setting system properties via user supplied .properties files
  • [NMS-7976] – Merge OSGi Plugin Manager into Admin UI
  • [NMS-7980] – provide HTTPS Quicklaunch into node page
  • [NMS-8015] – Remove Dependencies on RXTX
  • [NMS-8041] – Refactor Enhanced Linkd Topology
  • [NMS-8044] – Provide link for Microsoft RDP connections
  • [NMS-8063] – Update asciidoc dependencies to latest 1.5.3
  • [NMS-8076] – Allow user to access local documentation from OpenNMS Jetty Webapp
  • [NMS-8077] – Add NetGear Prosafe Smart switch SNMP trap events and syslog events
  • [NMS-8092] – Add OpenWrt syslog and related event definitions
  • [NMS-8129] – Disallow restricted characters from foreign source and foreign ID
  • [NMS-8149] – Update asciidoctorj to 1.5.4 and asciidoctorjPdf to 1.5.0-alpha.11
  • [NMS-8152] – Collect and publish anonymous statistics to stats.opennms.org
  • [NMS-8160] – Remove Quick-Add node to avoid confusions and avoid breaking the ReST API
  • [NMS-8163] – Requisitions UI Enhancements
  • [NMS-8179] – ifIndex >= 2^31
  • [NMS-8182] – Add HTTPS as quick-link on the node page
  • [NMS-8205] – Generate events for alarm lifecycle changes
  • [NMS-8209] – Upgrade junit to v4.12
  • [NMS-8210] – Add support for calculating the derivative with a Measurements API Filter
  • [NMS-8211] – Add support for retrieving nodes with a filter expression via the ReST API
  • [NMS-8218] – External event source tweaks to admin guide
  • [NMS-8219] – Copyright bump on asciidoc docs
  • [NMS-8225] – Integrate the Minion container and packages into the mainline OpenNMS build
  • [NMS-8226] – Upgrade SNMP4J to version 2.4
  • [NMS-8238] – Topology providers should provide a description for display
  • [NMS-8251] – Parameterize product name in asciidoc docs
  • [NMS-8259] – Cleanup testdata in SnmpDetector tests
  • [NMS-8265] – SNMP collection systemDefs for Cisco ASA5525-X, ASA5515-X
  • [NMS-8266] – SNMP collection systemDefs for Juniper SRX210he2, SRX100h
  • [NMS-8267] – Create documentation for SNMP detector
  • [NMS-8271] – Enable correlation engines to register for all events
  • [NMS-8296] – Be able to re-order the policies on a requisition through the UI
  • [NMS-8334] – Implement org.opennms.timeseries.strategy=evaluate to facilitate the sizing process
  • [NMS-8336] – Set the required fields when not specified while adding events through ReST
  • [NMS-8349] – Update screenshots with 18 theme in user documentation
  • [NMS-8365] – Add metric counter for drop counts when the ring buffer is full
  • [NMS-8377] – Applying some organizational changes on the Requisitions UI (Grunt, JSHint, Dist)

Story

Task

  • [NMS-8236] – Move the "vaadin-extender-service" module to opennms code base

Warren Myers : new service – free, secure password generation

May 01, 2016 04:53 PM

Today, I am formally announcing a brand-new service / website for secure password generation.

Go visit password.cf

Get yourself random passwords of commonly-required lengths and complexities*.

Password Varieties:

  • 4 of 4
  • upper & lower alphanumeric
  • lower alphanumeric

Lengths generated: 12, 16, & 24 characters

Visit the GitHub project page ..

.. if you want to run the site on your own server.

You can view the source “live” ..

.. if you’d like to see how it works without visiting GitHub – and verify nothing is saved anywhere by the code: it’s just a script with no filesystem / database access.

It’s fast ..

.. load times tend to be under 0.15 seconds!

It will always be linked from my Projects page, and from the ‘External’ links menu on this blog.


*Also findable at password.ga – same server, same code

Mark Turner : Neighborhood joy

April 30, 2016 01:08 PM

As sad as it is that Miss Ruth has moved away, our changing neighborhood ain’t all bad. In fact, there is lots to celebrate. Over the winter, Kelly and I finally bought a storm door for our front door, which gives us a look at what goes on outside. With the arrival of beautiful spring weather, I’ve been delighted to see all the neighbors out walking, running, pushing strollers, walking their dogs, and being neighborly. Last Friday evening alone I must have watched a dozen people passing happily by our home.

I’ve always considered as a sign of the health of a community how many people you see out interacting with each other. I’m thrilled to see so many of my friends and neighbors out getting to know their community.