Warren Myers : tesla’s solarcity bid isn’t about energy production

June 27, 2016 01:02 AM

Ben Thompson* (temporary paywall) makes an excellent first-order analysis of Elon Musk's bid to acquimerge SolarCity with Tesla. But he, uncharacteristically, stops short of seeing the mid- and long-term reasons for the acquimerge.

It's about SpaceX.

It's about Mars.

It's about the Moon.

Musk knows that he needs an incredibly solid pipeline of technology to get SpaceX past its initial "toy" phase of being a launch company servicing the ISS.

He wants to ensure that he's able to support the future on non-terrestrial bodies – lunar missions, Mars missions, long-term space exploration, high-altitude space stations, etc.

Sure, it happens to be good for Tesla (integrating solar tech at Tesla charging stations is a no-brainer). But that's not the end game.

The goal is space.


* Follow Ben on Twitter – @benthompson

Mark Turner : Brexit Is Only the Latest Proof of the Insularity and Failure of Western Establishment Institutions

June 27, 2016 12:41 AM

Great commentary by Glenn Greenwald on Brexit.

Brexit — despite all of the harm it is likely to cause and despite all of the malicious politicians it will empower — could have been a positive development. But that would require that elites (and their media outlets) react to the shock of this repudiation by spending some time reflecting on their own flaws, analyzing what they have done to contribute to such mass outrage and deprivation, in order to engage in course correction. Exactly the same potential opportunity was created by the Iraq debacle, the 2008 financial crisis, the rise of Trumpism and other anti-establishment movements: This is all compelling evidence that things have gone very wrong with those who wield the greatest power, that self-critique in elite circles is more vital than anything.

But, as usual, that’s exactly what they most refuse to do.

Source: Brexit Is Only the Latest Proof of the Insularity and Failure of Western Establishment Institutions

Tarus Balog : OpenNMS and Elasticsearch

June 24, 2016 09:06 PM

With Horizon 18 we added support for sending OpenNMS events into Elasticsearch. Unfortunately, it only works with Elasticsearch 1.0. Supporting Elasticsearch 2.0 and higher requires Camel 17, which OpenNMS can’t use yet. I wondered why, and if you were wondering too, here is the answer from Seth:

Camel 17 has changed their OSGi metadata to only be compatible with Spring 4.1 and higher. We’re still using Spring 4.0 so that’s one problem. The second issue is that ActiveMQ’s OSGi metadata bans Spring 4.0 and higher. So currently, ActiveMQ and Camel are mutually incompatible with one another inside Karaf at any version higher than the ones that we are currently running.

The biggest issue is the ActiveMQ problem. I’ve opened this bug and it sounds like they’re going to address it in their next major release.
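For the curious, the conflict Seth describes is expressed in each bundle’s OSGi manifest as an Import-Package version range. A hypothetical pair of MANIFEST.MF fragments (the version ranges are illustrative, not copied from the real bundles) shows why Karaf can’t resolve both at once:

```
camel-core MANIFEST.MF (accepts only Spring 4.1 and higher):
  Import-Package: org.springframework.beans;version="[4.1,5)"

activemq-osgi MANIFEST.MF (bans Spring 4.0 and higher):
  Import-Package: org.springframework.beans;version="[3.2,4.0)"
```

The two ranges don’t overlap, so no single Spring bundle can satisfy both importers inside the same container.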

So there you have it.

Mark Turner : What mysterious force whisked away the water on Venus? – CSMonitor.com

June 23, 2016 03:11 PM

Fascinating research might explain why all the water on Venus has disappeared.

Venus is remarkably Earth-like, with a similar size and gravity to our own planet. But the second planet from the sun is missing a key element to be a twin to our blue planet: water.

Scientists say there were once oceans on Venus’s surface, but with surface temperatures topping 860 degrees Fahrenheit, it’s no surprise the surface of Venus today is bone-dry.

But where did that water disappear to?

Source: What mysterious force whisked away the water on Venus? – CSMonitor.com

Tarus Balog : The Inverter: Episode 69 – Bill and Ted and Jeremy and Bryan and Jono and Stuart’s Excellent Adventure

June 17, 2016 03:24 PM

So the Gang of Four decided to actually produce a regular episode of Bad Voltage for the first time in, like, a month, so I decided to resurrect this little column making fun of them.

I am actually supposed to be on vacation this week, but for me vacation means working around the farm. I was working outside when the heat index hit 108.5F so while I was recovering from heat stroke I decided to give this week’s show a listen.

Clocking in at a healthy 75 minutes, give or take, it was an okay show, although the last fifteen minutes kind of wandered (much like most of this review).

The first segment concerned the creation of NextCloud as a fork of OwnCloud. I’ve already presented my thoughts on it from Bryan’s YouTube interview with the founders of NextCloud, and not much new was covered here. But it was a chance for all four of them to discuss it.

One of the touted benefits of the new project is the lack of a contributor agreement. I don’t find this a good thing. While I wholeheartedly agree that many contributor agreements are evil, that doesn’t make them all evil. Take the OpenNMS contributor agreement. It’s pretty simple, and it protects both the contributor and the project. The most important feature, to me, is that the contributor states that they have a right to contribute the code to the project. I think that’s important, although if it were lacking or the contributor lied, the results would be the same (the infringing code would be removed from the application). It at least makes people think just a bit before sending in code.

Bryan made an offhand mention about trademarks in the same discussion, and I wasn’t sure what he meant by it. Does it mean NextCloud won’t enforce trademarks, or that there is an easy process that allows people to freely use them? I think enforcing trademarks is extremely important for open source companies. Otherwise, someone could take your code, crap all over it, and then ship it out under the same name. At OpenNMS we had issues with this back in 2005 but luckily since then it has been pretty quiet.

While there was even more speculation, no one really knows why the NextCloud fork happened. Some say it was that Frank Karlitschek was friends with Niels Mache of Spreed.me and wanted a partnership, but OwnCloud was against it. I think we’ll never know. Another suggestion that has been made is that it had to do with the community of OwnCloud vs. the investors. Jono made the statement that VCs don’t take an active role in the community, but I have to disagree. My interactions with 90% of VCs have been like an episode of Silicon Valley, and while they may not take an active role, you can expect them to say things like “These features over here will be part of our ‘enterprise’ version and not open, and make sure to hobble the ‘community’ version to drive sales, but other than that, run your community the way you want.”

One new point that was brought up was the business perception of the company. I think everyone who self identifies as an open source fan who is using OwnCloud will most likely switch to NextCloud since that is where the developers went, but will businesses be cautious about investing in NextCloud? The argument can be made that “who knows what will set Frank off next?” and the threat of NextNextCloud might worry some. I am not expecting this to happen (once bitten, twice shy, I bet Frank has learned a lot about what he wants out of his project) but it is a concern.

It is similar to LibreOffice. I don’t know anyone in the open source world using OpenOffice, but it is still huge outside of that world (I did a ride-along with a friend who is a police officer and was pleasantly surprised to see him bring up OpenOffice on his patrol car’s laptop).

It kind of reminds me when Google killed Reader and then announced Keep – seemed a bit ironic at the time. If a company can radically change or even remove a service you have come to rely on, will you trust them in the future?

The segment ended with a discussion of the early days of Ubuntu. Bryan made the claim that Ubuntu was made as an easier-to-use version of Debian, which Jono vehemently denied. He claimed the goal was to create a free, powerful desktop operating system. All I remember from those days were those kids from the United Colors of Benetton ads on the covers of the free CDs.

The next piece was Bryan reviewing the latest Dell XPS 13 laptop. My last two laptops have been XPS 13 models and I love them. They ship with Linux (which I want to encourage) and I find they provide a great Linux desktop experience.

I got my newest one last year, and the main issue I’ve had is with the trackpad. Later kernels seem to have addressed most of my problems. I also dumped the Ubuntu 14.04 that shipped with it in exchange for Linux Mint, but I’m still running mainline kernels (4.6 at the moment). I’m eager for Mint 18 to release to see if the (rumoured) 4.4 kernel will work well (they keep backporting device driver changes) but outside of that I’ve had few problems.

Battery life is great, and the HiDPI screen is a big improvement over my old XPS 13. The main weirdness, for my model, is the location of the camera. In order to make the InfinityEdge display, they moved it to the bottom left of the screen so that the top bezel could be as thin as possible. It means people end up looking at the flabby underside of my chin instead of my face at times, but I use it so little that it doesn’t bother me much.

The third segment was about funding open source projects. It’s an eternal question: how do you pay for developers to work on free software? The guys didn’t really address it, focusing for the most part on programs that would provide some compensation for, say, travel to a conference, versus paying someone enough to make their mortgage. Stuart finally brought up that point but no real answers were offered.

The last fifteen minutes was the gang just shooting the breeze. Bryan used the term “duck fart” which apparently is a cocktail (sounds nasty, so don’t expect it on the cocktail blog). There is also, apparently, a science fiction novel called Bad Voltage that is not supposed to be that great, and the suggestion was made that the four of them should write their own version, but in the form of an “exquisite corpse” (my term, not theirs) where each would write their section independently and see what happens when it gets combined.

All in all, not a horrible show but not great, either. It is nice to have them all back together.

I’m eager to see how Bryan manages the next one, since he is spending 30 days solely in the Linux shell. How will Google Hangouts (which is what they use to make the show) work?

Curious minds want to know.

Tarus Balog : Choose the Right Thermometer

June 09, 2016 10:59 PM

Okay, so I have a love/hate relationship with Centurylink. Centurylink provides a DSL circuit to my house. I love the fact that I have something resembling broadband with 10Mbps down and about 1Mbps up. Now that doesn’t even qualify as broadband according to the FCC, but it beats the heck out of the alternatives (and I am jealous of my friends with cable who have 100Mbps down or even 300Mbps).

The hate part comes from reliability, which lately has been crap. This post is actually focused on OpenNMS so I won’t go into all of my issues, but I’ve been struggling with long outages in my service.

The latest issue is a new one: packet loss. Usually the circuit is either up or completely down, but for the last three days I’ve been having issues with a large percentage of dropped packets. Of course I monitor my home network from the office OpenNMS instance, and this will usually manifest itself with multiple nodeLostService events around HTTP since I have a personal web server that I monitor.

The default ICMP monitor does not measure packet loss. As long as at least one ping reply makes it, ICMP is considered up, so the node itself remains up. OpenNMS does have a monitor for packet loss called Strafeping. It sends out 20 pings in a short amount of time and then measures how long they take to come back. So I added it to the node for my home and I saw something unusual: a consistent 19 out of 20 lost packets.
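For anyone curious what that looks like in code, here is a rough Python sketch of the idea (my illustration, not OpenNMS’s actual implementation; `send_ping` stands in for whatever actually emits the ICMP probe):

```python
import statistics

def strafe_probe(send_ping, count=20):
    """Fire `count` probes in quick succession and summarize the results.

    `send_ping` should return a round-trip time in milliseconds, or
    None if the probe timed out. This mirrors the idea behind
    Strafeping, not its actual implementation.
    """
    rtts = [rtt for rtt in (send_ping() for _ in range(count)) if rtt is not None]
    lost = count - len(rtts)
    return {
        "sent": count,
        "lost": lost,
        "loss_pct": 100.0 * lost / count,
        "median_rtt_ms": statistics.median(rtts) if rtts else None,
    }
```

A link showing the symptom I describe below would report 19 lost out of 20 sent, or 95% loss.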

Strafeping Graph

Power cycling the DSL modem seems to correct the problem, and the command line ping was reporting no lost packets, so why was I seeing such packet loss from the monitor? Was Strafeping broken?

While it is always a possibility, I didn’t think that Strafeping was broken, but I did check a number of graphs for other circuits and they looked fine. Thus it had to be something else.

This brings up a touchy subject for me: false positives. Is OpenNMS reporting false problems?

It reminds me of an event that happened when I was studying physics back in the late 1980s. I was working with some newly discovered ceramic material that exhibited superconductivity at relatively high temperatures (around 92K). That temperature can be reached using liquid nitrogen, which was relatively easy to source compared to cooler liquids like liquid helium.

I needed to measure the temperature of the ceramic, but mercury (used in most common thermometers) is a solid at those temperatures, so I went to my advisor for suggestions. His first question to me was “What does a thermometer measure?”

I thought it was a trick question, so I answered “temperature” (“thermo” meaning heat and “meter” meaning to measure). He replied, “Okay, smart guy, the temperature of what?”

That was harder to answer exactly, so I said vague things like the ambient environment, whatever it was next to, etc. He interrupted me and said “No, a thermometer measures one thing: the temperature of the thermometer”.

This was an important lesson, even though it seems obvious. In the case of the ceramic it meant a lot of extra steps to make sure the thermometer we were using (which was based on changes in resistance) was as close to the temperature of the material as possible.

What does that have to do with OpenNMS? Well, OpenNMS is like that thermometer. It is up to us to make sure that the way we decide to use it for monitoring is as close to our criteria as possible. A “false positive” usually indicates a problem with the method versus the tool – OpenNMS is behaving exactly as it should but we need to match it better to what we expect.

In my case I found out the router I use was limited by default to responding to 1 ping per second (to avoid DDoS attacks, I assume), so last night when I upped that to allow 20 pings per second Strafeping started to work as expected (as you can see in the graph above).
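I don’t know what firmware this particular router runs, but on a Linux-based router that kind of default is typically enforced by a rule like the following (a sketch – the numbers and rule layout are illustrative, not taken from my actual config):

```shell
# Allow at most 20 echo requests per second (the default was 1/second),
# then drop anything over the limit.
iptables -A INPUT -p icmp --icmp-type echo-request \
  -m limit --limit 20/second --limit-burst 20 -j ACCEPT
iptables -A INPUT -p icmp --icmp-type echo-request -j DROP
```

With a 1/second limit, 19 of Strafeping’s 20 back-to-back probes get dropped – exactly the graph I was seeing.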

This allowed me to detect when my DSL circuit packet loss started again today. A little after 14:00 the system detected high packet loss. When this happened before, power cycling the modem seemed to fix it, so I headed home to do just that.

While I was on the way, around 15:30, the packet loss seemed to improve, but as you can see from the graph the ping times were all over the place (the line is green but there is a lot of extra “smoke” around it indicating a variance in the response times). I proactively power cycled the modem and things settled down. The Centurylink agent agreed to send me a new modem.

The point of this post is to stress that you need to understand how your monitoring tools actually work. You can often correct issues that make a monitor unusable and turn it into something useful. Choose the right thermometer.

Tarus Balog : Nextcloud, Never Stop Nexting!

June 03, 2016 07:19 PM

It’s been a while since I’ve posted a long, navel-gazing rant about the business of open source software. I’ve been trying to focus more on our business than spending time talking about it, but yesterday an announcement was made that brought all of it back to the fore.

TL;DR: Yesterday the Nextcloud project was announced as a fork of the popular ownCloud project. It was founded by many of the core developers of ownCloud. On the same day, the US corporation behind ownCloud shut its doors, citing Nextcloud as the reason. Is this a good thing? Only time will tell, but it represents the (still) ongoing friction between open source software and traditional software business models.

I was looking over my Google+ stream yesterday when I saw a post by Bryan Lunduke announcing a special “secret” broadcast coming at 1pm (10am Pacific). As I am a Lundookie, I made a point to watch it. I missed the start of it but when I joined it turned out to be an interview with the technical team behind a new project called Nextcloud, which was for the most part the same team behind ownCloud.

Nextcloud is a fork, and in the open source world a “fork” is the nuclear option. When a project’s community becomes so divided that they can’t work things out, or they don’t want to work things out for whatever reasons, there is the option to take the code and start a new project. It always represents a failure but sometimes it can’t be helped. The two forks I can think of offhand, Joomla from Mambo and Icinga from Nagios, both resulted in stronger projects and better software, so maybe this will happen here.

In part I blame the fork on the VC model of financing software companies. In the traditional software model, a bunch of money is poured into a company to create software. Once that software is created, the cost of reproducing it is near zero, so the business model is to sell licenses to the end users in order to generate future revenue. This model breaks when it comes to free and open source software, since once the software is created there is no way to force the end users to pay for it.

That still doesn’t keep companies from trying. This resulted in a trend (which is dying out) called “open core” – the idea that some software is available under an open source license but certain features are kept proprietary. As Brian Prentice at Gartner pointed out, there is little difference between this and just plain old proprietary software. You end up with the same lack of freedom and same vendor lock in.

Those of us who support free software tend to be bothered by this. Few things get me angrier than to be at a conference and have someone go “Oh, this OpenNMS looks nice – how much is the enterprise version?”. We only have the enterprise version and every bit of code we produce is available under an open source license.

Perhaps this happened at ownCloud. When one of the founders was on Bad Voltage awhile back, I had this to say about the interview:

The only thing that wasn’t clear to me was the business model. The founder Frank Karlitschek states that ownCloud is not “open core” (or as we like to call it “fauxpensource”) but I’m not clear on their “enterprise” vs. “community” features. My gut tells me that they are on the side of good.

Frank seemed really to be on the side of freedom, and I could see this being a problem if the rest of the ownCloud team wasn’t so dedicated.

On the interview yesterday I asked if Nextcloud was going to have a proprietary (or “enterprise”) version. As you can imagine I am pretty strongly against that.

The reason I asked was from this article on the new company that stated:

There will be two editions of Nextcloud: the free of cost community edition and the paid enterprise edition. The enterprise edition will have some additional features suited for enterprise customers, but unlike ownCloud, the community and enterprise editions for Nextcloud will borrow features from each other more freely.

Frank wouldn’t commit to making all of Nextcloud open, but he does seem genuinely determined to make as much of it open as possible.

Which leads me to wonder, what’s stopping him?

It’s got to be the money guys, right? Look, nothing says that open source companies can’t make money, it’s just you have to do it differently than you would with proprietary software. I can’t stress this enough – if your “open source” business model involves selling proprietary software you are not an open source company.

This is one of the reasons my blood pressure goes up whenever I visit Silicon Valley. Seriously, when I watch the HBO show to me it isn’t a comedy, it’s a documentary (and the fact that I most closely identify with the character of Erlich doesn’t make me feel all that better about myself).

I want to make things. I want to make things that last. I can remember the first true vacation I took, several years after taking over the OpenNMS project when it had grown it to the point that it didn’t need me all the time. I was so happy that it had reached that point. I want OpenNMS to be around well after I’m gone.

It seems, however, that Silicon Valley is more interested in making money rather than making things. They hunt “unicorns” – startups with more than a $1 billion valuation – and frequently no one can really determine how they arrive at that valuation. They are so consumed with jargon that quite often you can’t even figure out what some of these companies do, and many of them fade in value after the IPO.

I can remember a keynote at OSCON by Mårten Mickos about Eucalyptus, and how it was “open source” but of course would have proprietary code because “well, we need to make money”. He is one of those Silicon Valley darlings who just doesn’t get open source, and it’s why we now have OpenStack.

The biggest challenge to making money in open source is educating the consumer that free software doesn’t mean free solution. Free software can be very powerful but it comes with a certain level of complexity, and to get the most out of it you have to invest in it. The companies focused on free and open source software make money by providing products that address this complexity.

Traditionally, this has been service and support. I like to say at OpenNMS we don’t sell software, we sell time. Since we do little marketing, all of our users are self selecting (which makes them incredibly intelligent and usually quite physically beautiful) and most of them have the ability to figure out their own issues. But by working with us we can greatly shorten the time to deploy as well as make them aware of options they may not know exist.

In more recent times, there is also the option to offer open source software as a service. Take WordPress, one of my favorite examples. While I find it incredibly easy to install an instance of WordPress, if you don’t want to or if you find it difficult, you can always pay them to host it for you. Change your mind later? You can export it to an instance you control.

The market is always changing and with it there is opportunity. As OpenNMS is a network monitoring platform and the network keeps getting larger, we are focusing on moving it to OpenStack for ultimate scalability, and then coupled with our Minions we’ll have the ability to handle an “Internet of Things” amount of devices. At each point there are revenue opportunities as we can help our clients get it set up in their private cloud, or help them by letting them outsource some or all of it, such as Newts storage. The beauty is that the end user gets to own their solution and they always have the option of bringing it back in house.

None of these models involves requiring a license purchase as part of the business plan. In fact, I can foresee a time in the near future where purchasing a proprietary software product without fully exploring open source alternatives will be considered a breach of fiduciary responsibility.

And these consumers will be savvy enough to demand pure open source solutions. That is why I think Nextcloud, if they are able to focus their revenue efforts on things such as an appliance, has a better chance of success than a company like ownCloud that relies on revenue from software licensing sales. The fact that most of the creators have left doesn’t help them, either.

The lack of revenue from license sales makes most VCs panic, and it looks like that’s exactly what happened with the US division of ownCloud:

Unfortunately, the announcement has consequences for ownCloud, Inc. based in Lexington, MA. Our main lenders in the US have cancelled our credit. Following American law, we are forced to close the doors of ownCloud, Inc. with immediate effect and terminate the contracts of 8 employees. The ownCloud GmbH is not directly affected by this and the growth of the ownCloud Foundation will remain a key priority.

I look forward to the time in the not too distant future when the open core model is seen as quaint as selling software on floppy disks at the local electronics store, and I eagerly await the first release of Nextcloud.

Magnus Hedemark : breaking up with Twitter

May 29, 2016 05:36 PM

This morning I found myself locked out of my Twitter account. Twitter claimed that my account was playing shenanigans (inconceivable!) and that to rescue my account, I had to give Twitter my phone number to validate that it’s really me.

Except Twitter never had my phone number before, so giving it to them would validate nothing.

Twitter and I have had a rough year. I was pretty active under the account @Magnus919, but had to abandon it since Twitter did not give me the means by which to clean up the cruft in my content. Earlier this year, I moved to @MagNetDevOps which was being used a little more carefully, and for the purpose of engaging with the professional community in my field.

With @Magnus919, I was using Twitter to connect with people from any of a number of far-flung interests, from DevOps to deafness, from autism to fountain pens. But the way Twitter is structured, it actually punishes people for having a wide variety of interests. It rewards deep focus on one or two special interests. If I spent a couple of days focusing on DevOps, I’d build followers in that space, and then just as quickly lose them when engaging the community of people with disabilities.

So I thought I’d create a number of accounts that were all clearly identified as belonging to me, but each specializing in a single field of interest. Twitter made it impossible to clean up the corpus of old tweets on @Magnus919, so I deleted that account and created @MagNetDevOps to focus on my professional interests. From there, I was going to create some other accounts for other areas of interest, but never got that far.

Photo: Twitter error message that reads: "What happened? Your account appears to have exhibited automated behavior that violates the Twitter Rules. To unlock your account, please complete the steps below and confirm that you are the valid account owner. What you can do: To unlock you account, you must do the following: Verify your phone number" Twitter form that reads: "Add a phone number. Enter the phone number that you would like to associate with your Twitter account. We will send you an SMS with a verification code. SMS fees may apply."

Twitter, you see, has joined Facebook in requiring a phone number in order to have an account. Under duress, I complied with Facebook’s policy. Since Facebook was always meant to be one person / one account / real name, I wasn’t too terribly broken up about it. But Twitter is something else altogether. I’m aware of a number of people who, for their own safety, require keeping a safe space between their online identity and their legal identity. These are people who, for any number of reasons, have legitimate reasons to fear real-world hostility from the people online around them. They may be seeking support for an invisible disability, or trying to have frank conversations about transgender rights. Or maybe, like the many who led the Arab Spring movement, they are speaking out against a government that oppresses them. By requiring such personally identifiable information in conjunction with a Twitter account, Twitter is going to silence those voices.

Twitter made a vague claim that my account had done something wrong, via automation, without giving any details about what it had supposedly been party to. I could not find any evidence of wrongdoing on my Twitter account. They claimed that giving them a phone number would validate that I’m the account owner, yet this account was created with an email address and not a phone number. If the goal was to validate that I owned the account, it would make sense to do this via the email account used to create the Twitter account. Twitter seems to want my phone number, but I doubt that their stated reason matches their actual intent. I’m calling Twitter’s integrity into question here.

As a matter of principle, I don’t want to be a part of legitimizing this change in policy. Going forward, my Twitter account will lie fallow until Twitter no longer requires personally identifiable information to be associated with its accounts.


Mark Turner : Obama, Truman, and the atomic bomb

May 27, 2016 09:48 PM

Harry S. Truman

On my port visit to Sasebo, Japan, during my Navy service, I decided to take a tour of Nagasaki. Standing at ground zero of this city was an unexpectedly deeply moving experience for me, one that I will never forget. The U.S. Army photos displayed there of mangled, radiation-poisoned bodies will haunt me forever.

It was a horrendous decision to drop the bomb. Anyone who visits Nagasaki or Hiroshima and does not agree has lost all humanity.

Obama is visiting Hiroshima and some of my right-wing friends are having a hissy fit about it. Many claim this is a “slap in the face to veterans,” though many of them are not veterans themselves, so it’s unclear how they can speak for veterans.

As a veteran I have debated whether dropping the bomb was the right thing to do. I always thought Harry Truman did a lot of good as President but how could I reconcile his decision to nuke hundreds of thousands of people with his good deeds? I’ve since grudgingly come to think it was the right call, given the fanaticism in Japan at the time. Casualties from an invasion of Japan (proposed as Operation Downfall) would have been from 500,000 to over a million in bloody, take-no-prisoners fighting.

So Truman’s decision most likely saved lives, though it brought the world the madness of nuclear weapons. It was a decision we’re still paying for today.

It’s easy to second-guess President Truman today since things look so much different from our perspective. The war, however, has long been over. Japan and America are close friends and important allies.

Should Obama apologize? I really don’t care either way. The only people who do care are the ones who just can’t let go.

Warren Myers : there is no such object on the server

May 27, 2016 10:13 AM

Gee. Thanks, Active Directory.

This is one of the more useless error messages you can get when trying to programmatically access AD.

Feel free to Google (or DuckDuckGo, or Bing, or whomever) that error message. Go ahead, I’ll wait.

Your eyes bleeding, and gray matter leaking from your ears yet? No? Then you obviously didn’t do what I just told you to – go search the error message, I’ll be here when you get back.

Background for how I found this particular gem: I have a customer (same one I was working with on SAP a while back where I had BAPI problems) that is trying to automate Active Directory user provisioning with HP Operations Orchestration. As a part of this, of course, I need to verify I can connect to their AD environment, OUs are reachable, etc etc.

In this scenario, I’m provisioning users into a custom OU (i.e., not merely Users).

Provisioning into Users doesn’t give this error – only in the custom OU. Which is weird. So we tried making sure there was already a user in the OU, in case the error was being kicked back because of an empty OU (if an OU is empty, does it truly exist?).

That didn’t help.

Finally, after several hours of beard-stroking, diving into deep AD docs, MSDN articles, HP fora, and more … customer’s AD admin says, “hey – how long is the password you’re trying to use; and does it meet 3-of-4?” I reply, “it’s ‘Password!’ – 3-of-4, 9 characters long”. “Make it 14 characters long – for kicks.”

Lo and behold! There is a security policy on that OU that mandates a minimum password length as well as complexity – but that’s not even close to what AD was sending back as an error message. “There is no such object on the server”, as the end result of a failed user create, is 100% useless – all it tells you is the user isn’t there. It doesn’t say anything about why it isn’t there.
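In the spirit of surfacing the *why*, a pre-flight check on the client side can catch this before AD swallows it. Here is a minimal Python sketch – the policy parameters are assumptions matching the numbers in the story above; the authoritative policy lives in AD and may differ:

```python
import re

def meets_password_policy(password, min_length=14, min_classes=3):
    """Pre-flight check for an AD-style password policy: a minimum
    length plus at least `min_classes` of the four character classes.

    The defaults are illustrative, chosen to match the values that
    finally worked here; query the real policy from AD if you can.
    """
    classes = [
        re.search(r"[a-z]", password),         # lowercase letter
        re.search(r"[A-Z]", password),         # uppercase letter
        re.search(r"[0-9]", password),         # digit
        re.search(r"[^A-Za-z0-9]", password),  # symbol
    ]
    return (len(password) >= min_length
            and sum(c is not None for c in classes) >= min_classes)
```

Running the provisioning flow through a check like this turns “there is no such object on the server” into an actionable “password too short” before the create is ever attempted.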

Sigh.

Yet another example of [nearly] completely ineffective error messages.

AD should give you something that resembles a why for the what – not merely the ‘what’.

Something like, “object could not be created; security policy violation” – while not 100% of the answer – would put you a lot closer to solving an issue than just “there is no such object on the server”.
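For what it’s worth, AD does at least bury a numeric sub-code in many of its extended error strings (the “data 52e”-style suffix you see on bind failures). A little translation layer goes a long way. Here’s a quick Python sketch – the lookup table is the commonly documented set of AD bind sub-codes, illustrative rather than exhaustive, and it wouldn’t have helped with my particular “no such object” gem, which carried no sub-code at all:

```python
import re

# Commonly documented AD extended-error sub-codes (illustrative subset,
# not an official or exhaustive mapping).
AD_SUBCODES = {
    "525": "user not found",
    "52e": "invalid credentials",
    "530": "logon not permitted at this time",
    "531": "logon not permitted at this workstation",
    "532": "password expired",
    "533": "account disabled",
    "701": "account expired",
    "773": "user must reset password",
    "775": "account locked out",
}

def explain_ad_error(message):
    """Pull the 'data NNN' sub-code out of an AD error string and
    translate it, falling back to the raw message."""
    match = re.search(r"data (\w+)", message)
    if match:
        code = match.group(1).lower()
        if code in AD_SUBCODES:
            return f"{AD_SUBCODES[code]} (sub-code {code})"
    return f"unrecognized error: {message}"

# A typical AcceptSecurityContext error string:
err = "80090308: LdapErr: DSID-0C0903A9, comment: AcceptSecurityContext error, data 52e, v1db1"
print(explain_ad_error(err))  # invalid credentials (sub-code 52e)
```

That’s the whole ask: a table lookup between the raw failure and a human being.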

Get it together, developers! When other people cannot understand your error messages, regardless of how “smart” they are, what field they work in, etc, you are Doing It Wrong™.

Mark Turner : The Astonishing Age of a Neanderthal Cave Construction Site – The Atlantic

May 27, 2016 12:47 AM

The Bruniquel Cave site is an incredible discovery of the earliest known construction in Europe, 176,000 years ago. We are learning that our distant Neanderthal cousins were at least as clever as we were.

Bruniquel Cave

After drilling into the stalagmites and pulling out cylinders of rock, the team could see an obvious transition between two layers. On one side were old minerals that were part of the original stalagmites; on the other were newer layers that had been laid down after the fragments were broken off by the cave’s former users. By measuring uranium levels on either side of the divide, the team could accurately tell when each stalagmite had been snapped off for construction.

Their date? 176,500 years ago, give or take a few millennia.

Source: The Astonishing Age of a Neanderthal Cave Construction Site – The Atlantic

Mark Turner : Does criticism of government turn off new leaders?

May 27, 2016 12:42 AM

A few weeks ago, a local media outlet published a story taking a few swipes at Raleigh’s city manager. While the criticism was mostly harmless (and city managers know it comes with the territory), it reminded me again that while taking digs at city government might seem to win points with hipster readers, it also alienates those hipsters from possibly getting involved themselves. Make public service look uncool and you run the risk of scaring off good people who might do great things with it.

I’m not saying don’t afflict the comforted when they rightfully earn it, but at the same time if you’re taking swipes just for the sake of taking swipes then you could be inadvertently turning away the bright, creative people who could be doing us all good.

I guess the constant focus on the negative when there’s really a ton of good being done gets tiring to me. And it’s not just the local level but at every level. Maybe it’s human nature to find something to complain about. Or maybe not.

Mark Turner : Soaring profit?

May 27, 2016 12:27 AM

A “free market” story I read tonight reminded me of one of the most surprising aspects of the Wright Brothers’ invention of the airplane. The Bishop’s Boys author Tom D. Crouch makes the point that Wilbur and Orville Wright were not motivated by profit when they began their chase for powered flight. The Wrights took on their airplane designs more as an interesting hobby, funded by their very successful bicycle shop. They were not venture-funded and did not answer to Wall Street. Their innovation grew mainly from their intense curiosity and desire to create things.

That’s not to say that they were altruistic because they certainly weren’t. Once they began flying, the brothers became secretive and litigious. They went after anyone else who seemed to infringe on their patents, with the aim of making as much money as possible.

While they were not top-notch businessmen, they were top-notch engineers. Their love of engineering, not their love of money, wound up making them a fortune.

Tarus Balog : Emley Moor, Kirklees, West Yorkshire

May 26, 2016 10:21 PM

I spent last week back in the United Kingdom. I always find it odd to travel to the UK. When I’m in, say, Germany or Spain, I know I’m in a different country. With the UK I sometimes forget and hijinks ensue. As Shaw may have once said, we are two countries separated by a common language.

Usually I spend time in the South, mainly Hampshire, but this trip was in Yorkshire, specifically West Yorkshire. I was looking forward to this for a number of reasons. For example, I love Yorkshire Pudding, and the Four Yorkshiremen is my favorite Monty Python routine.

Also, it meant that I could fly into Manchester Airport and miss Heathrow. Well, I didn’t exactly miss it.

I was visiting a big client that most people have never heard of, even though they are probably an integral part of your life if you live in the UK. Arqiva provides the broadcast infrastructure for much of the television and mobile phone industry in the country, as well as being involved in deploying networks for projects such as smart metering and the Internet of Things.

We were working at the Emley Moor location, which is home to the Emley Moor Mast. This is the tallest freestanding structure in Britain (and third in the European Union). With a total height of 1084 feet, it is higher than the Eiffel Tower and almost twice as high as the Washington Monument.

Emley Moor Mast View

The mast was built in 1971 to replace a metal lattice tower that fell, due to a combination of ice and wind, in 1969. I love the excerpt from the log book mentioned in the Wikipedia article:

  • Day: Lee, Caffell, Vander Byl
  • Ice hazard – Packed ice beginning to fall from mast & stays. Roads close to station temporarily closed by Councils. Please notify councils when roads are safe (!)
  • Pye monitor – no frame lock – V10 replaced (low ins). Monitor overheating due to fan choked up with dust- cleaned out, motor lubricated and fan blades reset.
  • Evening :- Glendenning, Bottom, Redgrove
  • 1,265 ft (386 m) Mast :- Fell down across Jagger Lane (corner of Common Lane) at 17:01:45. Police, I.T.A. HQ, R.O., etc., all notified.
  • Mast Power Isolator :- Fuses removed & isolator locked in the “OFF” position. All isolators in basement feeding mast stump also switched off. Dehydrators & TXs switched off.

They still have that log book, open to that page.

Emley Moor Log Book

If you have 20 minutes, there is a great old documentary on the fall of the old tower and the construction of the new mast.

On my last day there we got to go up into the structure. It’s pretty impressive:

Emley Moor Mast Up Close

and the inside looks like something from a 1970s sci-fi movie:

Emley Moor Mast Inside

The article stated that it takes seven minutes to ride the lift to the top. I timed it at six minutes, fifty-seven seconds, so that’s about right (it’s fifteen seconds quicker going down). I was working with Dr. Craig Gallen, who remembers going up in the open lift carriage, but we were in an enclosed car. It’s very small and with five of us in it I will admit to a small amount of claustrophobia on the way up.

But getting to the top is worth it. The view is amazing:

View from Emley Moor Mast

It was a calm day but you could still feel the tower sway a bit. They have a plumb bob set up to measure the drift, and it was barely moving while we were up there. Toby, our host, told of a time he had to spend seven hours installing equipment when the bob was moving four to five inches side to side. They had to move around on their hands and knees to avoid falling over.

Plumb Bob

I’m glad I wasn’t there on that day, but our day was fantastic. Here is a shot of the parking lot where the first picture (above) was taken.

View of Emley Moor Parking Lot

I had a really great time on this trip. The client was amazing, and I really like the area. It reminds me a bit of the North Carolina mountains. I did get my Yorkshire Pudding in Yorkshire (bucket list item):

Yorkshire Pudding in Yorkshire

and one evening Craig and I got to meet up with Keith Spragg.

Keith Spragg and Craig Gallen

Keith is a regular on the OpenNMS IRC channel (#opennms on freenode.net), and he works for Southway Housing Trust. They are a non-profit that manages several thousand homes, and part of that involves providing certain IT services to their tenants. They are mainly a Windows/Citrix shop but OpenNMS is running on one of the two Linux machines in their environment. He tried out a number of solutions before finding that OpenNMS met his needs, and he pays it forward by helping people via IRC. It always warms my heart to see OpenNMS being used in such places.

I hope to return to the area, although I was glad I was there in May. It’s around 53 degrees north latitude, which puts it level with the southern Alaskan islands. It would get light around 4am, and in the winter ice has been known to fall in sheets from the Mast (the walkways are covered to help protect the people who work there).

I bet Yorkshire Pudding really hits the spot on a cold winter’s day.

Mark Turner : Clinton allies blame Bernie for bad polls | TheHill

May 24, 2016 11:15 PM

Here it goes. Clinton supporters are already blaming Sanders for Clinton losing to Trump. It has nothing to do with all of Clinton’s faults, of course. Oh no. If she didn’t win, surely it must be Bernie Sanders’s fault.

I’m so tired of Clinton playing the victim card. All. The. Time. The same thing played out in this political cartoon.

Poor Hillary.

Hillary Clinton allies worried about polls that suggest a tightening general election match-up with Donald Trump are placing blame on Bernie Sanders. They say that the long primary fight with the independent senator from Vermont, which looks like it could go all the way to the Democratic convention in Philadelphia, has taken a toll on Clinton’s standing in the polls. In the latest RealClearPolitics average, she is two-tenths of a point behind Trump, the presumptive GOP presidential nominee.

The surrogates say they’re concerned that Sanders is still — this late in the game — throwing shots at Clinton and the Democratic establishment.

“I don’t think he realizes the damage he’s doing at this point,” one ally said of Sanders. “I understand running the campaign until the end, fine. But at least take the steps to begin bringing everyone together.”

Source: Clinton allies blame Bernie for bad polls | TheHill

Alan Porter : Moogfest

May 22, 2016 05:19 PM

This is either a story of poorly-managed expectations, or of me being an idiot, depending on how generous you’re feeling.

Eight months ago, when I heard that Moogfest was coming to Durham, I jumped on the chance to get tickets. I like electronic music, and I’ve always been fascinated by sound and signals and even signal processing mathematics. At the time, I was taking an online course in Digital Signal Processing for Music Applications. I recruited a wingman: my friend Jeremy is also into making noise using open source software.

moogfest2016

The festival would take place over a four-day weekend in May, so I signed up for two vacation days and I cleared the calendar for four days of music and tech geekery. Since I am not much of a night-owl, I wanted to get my fill of the festival in the daytime and then return home at night… one benefit of being local to Durham.

Pretty soon, the emails started coming in… about one a week, usually about some band or another playing in Durham, with one or two being way off base, about some music-related parties on the west coast. So I started filing these emails in a folder called “moogfest”. Buried in the middle of that pile would be one email that was important… although I had purchased a ticket, I’d need to register for workshops that had limited attendance.

Unfortunately, I didn’t do any homework in advance of Moogfest. You know, life happens. After all, I’d have four days to deal with the festival. So Jeremy and I showed up at the American Tobacco campus on Thursday with a clean slate… dumb and dumber.

Thursday

Moog shop keyboards

Thursday started with drizzly rain to set the mood.

I’m not super familiar with Durham, but I know my way around the American Tobacco campus, so that’s where we started. We got our wristbands, visited the Modular Marketplace (a very small and crowded vendor area where they showed off modular synthesizer blocks) and the Moog Pop-up Factory (one part factory assembly area, and one part Guitar Center store).  Thankfully, both of these areas made heavy use of headphones to keep the cacophony down.

From there, we ventured north, outside of my familiarity. The provided map was too small to really make any sense of — mainly because they tried to show the main festival area and the outlying concert area on the same map. So we spent a lot of time wandering, trying to figure out what we were supposed to see. We got lost and stopped for a milkshake and a map-reading. Finally, we found the 21c hotel and museum. There were three classrooms inside the building that housed workshops and talks, but that was not very clearly indicated anywhere. At every turn, it felt like we were in the “wrong place”.

girl in Moog shop

We finally found a talk on “IBM Watson: Cognitive Tech for Developers”. This was one of the workshops that required pre-registration, but there seemed to be room left over from no-shows, so they let us in. This ended up being a marketing pitch for IBM’s research projects — nothing to do with music synthesis or really even with IBM’s core business.

Being unfamiliar with Durham, and since several points on the map seemed to land in a large construction area, we wandered back to the American Tobacco campus for a talk. We arrived just after the talk started, so the doors were closed. So we looked for lunch. There were a few sit-down restaurants, but not much in terms of quick meals (on Friday, I discovered the food trucks).

Finally, we declared Thursday to be a bust, and we headed home.

We’d basically just spent $200 and a vacation day to attend three advertising sessions.  I seriously considered just going back to work on Friday.

With hopes of salvaging Friday, I spent three hours that night poring over the schedule to figure out how it’s supposed to be done.

  • I looked up all of the venues, noting that several were much farther north than we had wandered.
  • I registered (wait-listed) for workshops that might be interesting.
  • I tried to visualize the entire day on a single grid, gave up on that, and found I could filter the list.
  • I read the descriptions of every event and put a ranking on my schedule.
  • I learned – much to my disappointment – that the schedule was clearly divided at supper time, with talks and workshops in the daytime and music at night.
  • I made a specific plan for Friday, which included sleeping in later and staying later in the night to hear some music.

Friday

I flew solo on Friday, starting off with some static displays and exploring the venues along West Morgan Street (the northern area).  Then I attended a talk on “Techno-Shamanism”, a topic that looked interesting because it was so far out of my experience.  The speaker was impressively expressive, but it was hard to tell whether he was sharing deep philosophical secrets or just babbling eloquently… I am still undecided.

I rushed off to the Carolina Theater for a live recording of the podcast “Song Exploder”.  However, the theater filled just as I arrived — I mean literally, the people in front of me were seated — and the rest of the line was sent away.  Severe bummer.

I spent a lot of time at a static display called the Wifi Whisperer, something that looked pretty dull from the description in the schedule, but that was actually pretty intriguing.  It showed how our phones volunteer information about wifi networks we have previously joined.  My question – why would my phone share with the Moogfest network the name of the wifi from the beach house we stayed at last summer?  Sure enough, it was there on the board!

Polyrhythmic Loops

Determined to not miss any more events, I rushed back to ATC for a talk on Polyrhythmic Loops, where the speaker demonstrated how modular synth clocks can be used to construct complex rhythms by sending sequences of triggers to sampler playback modules.  I kind of wish we could’ve seen some of the wire-connecting madness involved, but instead he did a pretty good job of describing what he was doing and then he played the results.  It was interesting, but unnecessarily loud.
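The core trick he demonstrated – one master clock divided down into separate trigger streams that only realign periodically – is simple enough to sketch in a few lines of Python (purely illustrative; the real thing is patch cables and clock-divider modules, not code):

```python
def trigger_pattern(steps, division):
    """One clock-divider channel: fire a trigger every `division`
    ticks of the master clock."""
    return ["x" if step % division == 0 else "." for step in range(steps)]

# Two divided clocks over one bar of 12 master ticks give the classic
# 3-against-4 polyrhythm: the streams only line up on the downbeat.
kick  = trigger_pattern(12, 4)  # every 4th tick
clave = trigger_pattern(12, 3)  # every 3rd tick
print("".join(kick))   # x...x...x...
print("".join(clave))  # x..x..x..x..
```

Each “x” is a trigger sent to a sampler playback module; run enough of these streams against one another and you get the complex rhythms he was playing.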

The daytime talks were winding down, and my last one was about Kickstarter-funded music projects.

To fill the gap until the music started, I went to “Tech Jobs Under the Big Top”, a job fair that is held periodically in RTP.  As if to underscore the craziness of “having a ticket but still needing another registration” that plagued Moogfest, the Big Top folks required two different types of registration that kept me occupied for much longer than the time I actually spent inside their tent.  Note: the Big Top event was not part of Moogfest, but they were clearly capitalizing on the location, and they were even listed in the Moogfest schedule.

Up until this point, I had still not heard any MUSIC.

Sonic Pi

My wingman returned and we popped into our first music act: Sam Aaron playing a “Live Coding” set on his Sonic Pi.  This performance finally brought Moogfest back into the black, justifying the ticket price and the hassles of the earlier schedule.  His set was unbelievable, dropping beats from the command line like a Linux geek.

Grimes

To wrap up the night, we hiked a half mile to the MotorCo stage to see Grimes, one of the headline attractions of Moogfest.  Admittedly, I am not part of the target audience for this show, since I had never actually heard of Grimes, and I am about 20 years older than many of the attendees.  But I had been briefly introduced to her sound at one of the static displays, so I was stoked for a good show.  However, the performance itself was really more of a military theatrical production than a concert.

Sure, there was a performer somewhere on that tiny stage in the distance, but any potential talent there was hidden behind explosions of LEDs and lasers, backed by a few kilotons of speaker blasts.

When the bombs stopped for a moment, the small amount of interstitial audience engagement reminded me of a middle school pep rally, both in tone and in body language. The words were mostly indiscernible, but the message was clear.  Strap in, because this rocket is about to blast off!  We left after a few songs.

Saturday

Feeling that I had overstayed my leave from home, I planned a light docket for Saturday. There were only two talks that I wanted to see, both in the afternoon. I could be persuaded to see some more evening shows, but at that point, I could take them or leave them.

Some folks from Virginia Tech gave a workshop on the “Linux Laptop Orchestra” (titled “Designing Synthesizers with Pd-L2Ork”). From my brief pre-study, it looked like a mathematical tool used to design filters and create synthesizers. Instead, it turned out to be an automation tool similar to PLC ladder logic that could be used to trigger the playback of samples in specific patterns. This seemed like the laptop equivalent of the earlier talk on Polyrhythmic Loops done with synth modules. The talk was more focused on the wide array of toys (Raspberry Pi, Wii remotes) that could be connected to this ecosystem, and less about music. Overall, it looked like a very cool system, but not enough to justify a whole lot of tinkering to get it to run on my laptop (for some reason, my Ubuntu 15.10 and 16.04 systems both rejected the .deb packages because of package dependencies — perhaps this would be a good candidate for a docker container).

The final session of Moogfest (for me, at least) was the workshop behind Sam Aaron’s Friday night performance. Titled “Synthesize Sounds with Live Code in Sonic Pi”, he explained in 90 minutes how to write Ruby code in Sonic Pi, how to sequence samples and synth sounds, occasionally diving deep into computer science topics like the benefits of pseudo-randomness and concurrency in programs. Sam is a smart fellow and a natural teacher, and he has developed a system that is both approachable by school kids and sophisticated enough for post-graduate adults.

Wrap Up

I skipped Sunday… I’d had enough.

My wife asked me if I would attend again next year, and I’m undecided (they DID announce 2017 dates today).  I am thrilled that Moogfest has decided to give Durham a try. But for me personally, the experience was an impedance mismatch. I think a few adjustments, both on my part and on the part of the organizers, would make the festival a lot more attractive.  Here is a list of suggestions that could help.

  • Clearly, I should’ve done my homework.  I should have read through each and every one of the 58 emails I received from them, possibly as I received them, rather than stockpiling them for later.  I should have tuned in more closely a few weeks in advance of the date for some advance planning as the schedule materialized.
  • Moogfest could have been less prolific with their emails, and clearly labeled the ones that required some action on my part.
  • The organizers could schedule music events throughout the day instead of just during the night shift… I compare this festival with the IBMA Wide Open Bluegrass festival in Raleigh, which has music throughout the day and into the nights.  Is there a particular reason why electronic music has to be played at night?
  • I would enjoy a wider variety of smaller, more intimate performances, rather than megawatt-sized blockbuster performances.  At least one performance at the Armory was loud enough to send me out of the venue, even though I had earplugs.  It was awful.
  • The festival could be held in a tighter geographic area.  The American Tobacco Campus ended up being an outlier, with most of the action being between West Morgan Street and West Main Street (I felt like ATC was only included so Durham could showcase it for visitors).  Having the events nearer to one another would mean less walking to-and-from events (I walked 14½ miles over the three days I attended).  Shuttle buses could be provided for the severely outlying venues like MotorCo.
  • The printed schedule could give a short description of the sessions, because the titles alone did not mean much.  Static displays (red) should not be listed on the schedule as if they are timed events.
  • The web site did a pretty good job of slicing and dicing the schedule, but I would like to be able to vote items up and down, then filter by my votes (don’t show me anything I have already thumbs-downed).  I would also like to be able to turn on and off entire categories – for example, do not show me the (red) static events, but show all (orange) talks and (grey) workshops.
  • The register-for-workshops process was clearly broken.  As a late-registerer, my name was not on anyone’s printed list.  But there was often room anyway, because there’s no reason for anyone to ever un-register for a workshop they later decided to skip.
  • The time slots did not offer any time to get to and from venues.  Maybe they should be staggered (northern-most events start on the hour, southern-most start on the half-hour) to give time for walking between them.

All in all, I had a good time.  But I feel like I burned two vacation days (and some family karma/capital) to attend a couple of good workshops and several commercial displays.  I think I would have been equally as happy to attend just on Saturday and Sunday, if the music and talks were intermixed throughout the day, and did not require me to stick around until 2am.

Alan Porter : Duck Patrol

May 15, 2016 11:19 PM

IMG_2176

On my way home today, I stopped by our neighborhood gas station to fill up the tank. As I was leaving, I noticed a mother duck and four ducklings walking along the curb of the shopping center driveway. They were making a lot of noise. The mother was cluck-cluck-clicking, and the ducklings were cheep-cheep-cheeping.

IMG_2177

They were standing pretty close to a storm drain. Then a car came whizzing by and one of the ducklings jumped into the storm drain! I went over to the storm drain and found six ducklings at the bottom!

So I rushed home and recruited Audrey and Sydney, who were eager to help. We got some buckets and brooms and some rope and went back to the shopping center. By that time, a couple of other people were gathered around, and they said they had called the Cary Police.

We went ahead and lifted the storm drain grate and one lady climbed in, carrying a bucket. One by one, she lured them close and plucked them up and into the bucket!

IMG_2183 IMG_2180 IMG_2182

The Policeman finally showed up, and we went looking for the mother duck and the other three ducklings. They could’ve been in the woods or near one of the storm drains. We finally spotted them in the pond across the street.

IMG_2186

So we carried our bucket to the pond. When we got close, the mother heard the ducklings cheeping and she ran over to us. Sydney laid the bucket down sideways in the grass and we all backed away. The mother duck ran to us, quacking like crazy, and all of the ducklings started cheeping even louder. The mother went to the bucket and then escorted them all down the grass and into the pond. And then they swam away in a tight formation, all nine babies clinging closely behind the mother.

Sydney said that it was the best day ever!

Mark Turner : Rosie the Seaboard Station ghost?

May 15, 2016 01:59 AM

Does Rosie the Riveter have a doppelganger at Seaboard Station?

I needed a part to fix our broken dishwasher so I drove over to Seaboard Ace Friday morning before work. On my way out of the store, I spotted an African American woman slowly walking toward me from the north in the parking lot. I did not want to keep her waiting as I backed out of the space so I wasted no time in getting going. Sure I was out of her way, I headed towards the lot’s exit. In the time it took me to reach the stop sign in front of Logan’s Trading Company the woman had somehow made it into the next parking lot, where the Phydeaux store used to be.

I was stunned. I was sure I backed out of the space before this woman could’ve reached my car, and somehow she had beaten my car to the stop sign? How?

Not wanting to seem like I was stalking her, I continued left to Halifax Street, then turned right to go back down the little one-way alley between Phydeaux and 18 Seaboard. The woman was still in the Phydeaux parking lot, this time slowly walking west.

Just to make sure I hadn’t mistaken the woman for another one dressed similarly, I drove back down in front of the hardware store. No other similarly-dressed women were around. I turned around just past Peace China and headed back towards the woman.

This time when I reached the Phydeaux parking lot the woman was gone. I drove the counterclockwise loop from Logan’s back to the one-way alley but could not find her.

I still couldn’t believe what I had just seen. How did this slow-walking woman suddenly leap ahead of me? And where had she gone? What had just happened here!?

Not sure what I had just seen, I later emailed the hardware store owner (an acquaintance of mine), asking if he may have captured video of the encounter. Unfortunately, his cameras pointed the wrong way. How else could I rule things out, I wondered.

I had to make another trip to the hardware store this afternoon and took the opportunity to time how long it would take for the woman to travel that distance. At my usual, typically brisk pace it took me 80 seconds. I then tried driving it the way I did yesterday. Even though I had to wait a moment when some rude driver insisted on passing me while I backed out, I still made it to the stop sign in 48 seconds.

I should have beaten the woman to the intersection by roughly half a minute, give or take. That’s a long time. If it were a race it would have been no contest.

I never saw her cross behind my car as I was backing up (remember my goal was to get out before she reached me). If she had opted to take the sidewalk instead of the lot this almost certainly would have added 5-10 seconds to her trip as she navigated the steps. So that’s unlikely.

Then I recalled my coworkers’ stories of ghosts that hung around the nearby, century-old Pilot Mill offices where I work. Pilot Mill and Seaboard Station have been around for many decades and might be unsurprising places to find ghosts. Was this a ghost I had seen?

Both times I saw her she looked like any person would. She was a medium-complexioned African American woman, stocky build, who wore a long blue denim dress, dark-rimmed glasses, and had a headcovering, perhaps a blue handkerchief. She was moving somewhat slowly, hobbling really, and did not at all seem like anyone capable of sprinting ahead of me even if she had wanted to. Her clothes were a little homely but not entirely out of place. They looked like work clothes, and could have been in style from 80 years ago to today. Rosie the Riveter comes to mind.

Was “Rosie” a ghost? If not, how the hell did she make it to the next parking lot so astonishingly fast? And without me seeing her? I can think of few rational explanations. The world is a strange place indeed.

Mark Turner : What science knows (and doesn’t know) about animals

May 14, 2016 10:47 PM

I was unexpectedly on-call Monday night and the pages I got made me sleep very lightly the rest of the night. When 3:30 AM rolled around, I was a little surprised to be serenaded by the birds outside. As I dozed, I began to wonder what it is about 3:30 AM that prompts the birds to sing. There can be no sign of dawn at that early time, even on May 10th. Is there some sort of environmental variable that tips birds off that it’s time to sing?

Later that day, I naturally did some Googling into the research about birds. A query on “what makes birds sing in the morning” brought up a few interesting articles but also left me exasperated.

Here’s why. So much of the research into this is incomplete. For instance, around 2003, two researchers attempted to see what made the early morning special to birds, but the way they tested it was by playing recordings of bird songs at various times of day and comparing how their human ears perceived those sounds. The theory they were testing (and ultimately claimed to confirm) was that sound traveled better in the morning (allegedly when the wind isn’t blowing). Though their sound theory was later disproved, it bothered me that this was the test they tried since it was based on a flawed assumption. They tested how sound is perceived by humans at various times of day but didn’t actually test the damn birds.

It got me thinking that what we know about the animal kingdom is laughably incomplete. Other bird studies theorize that birds are announcing their territory. How does anyone know this? How can we humans really know the motivation behind animal behavior? Is it all about territory, or have we been grossly simplifying things?

It may make me a bad science nerd, but I’ve never been fully convinced that the motivations we have attributed to animals in the Theory of Evolution and other theories are accurate. Since the time of Darwin we’ve learned so much more about the beings we share the planet with. What these studies show is that we have long greatly underestimated the abilities of animals. Animal behavior is far more sophisticated than we’ve recognized. The old thought that other animals couldn’t possibly be intelligent due to their smaller brains is beginning to change.

Just because other Earth creatures cannot build a nuclear weapon does not mean they don’t possess intelligence (and frankly, as far as nukes are concerned, they show more intelligence). Just because animals don’t possess our sophisticated spoken language doesn’t mean they don’t have their own rich means of communication.

So, what cranks birds up before the crack of dawn? I don’t know. Then again, it seems that no one else really knows, either, from what I can gather, and I find this really astounding.

Tarus Balog : OpenNMS Horizon 18 “Tardigrade” Is Now Available

May 11, 2016 05:03 PM

I am extremely happy to announce the availability of Horizon 18, codenamed “Tardigrade”. Ben is responsible for naming our releases and he’s decided that the theme for Horizon 18 will be animals. The name “Tardigrade” was suggested in the IRC channel by Uberpenguin, and while they aren’t the prettiest things, Wikipedia describes them as “perhaps the most durable of known organisms” so in the context of OpenNMS that is appropriate.

OpenNMS Horizon 18

I am also happy to see the Horizon program working. When we split OpenNMS into Horizon and Meridian, the main reason was to drive faster development. Now instead of a new stable release every 18 months, we are getting them out every 3 to 4 months. And these are great releases – not just major releases in name only.

The first thing you’ll notice if you log in to Horizon 18 as a user in the admin role is that we’ve added a new “opt-in” feature that lets us know a little bit about how OpenNMS is being used. We hope that most of you will choose to send us this information, and in the spirit of the Open Source Way we’ve made all of the statistics publicly available.

OpenNMS Opt-In Screen

One of the key things we are looking for is the list of SNMP Object IDs. This will let us know what devices our users are monitoring so that we can increase our level of support for them. Of course, this requires that your OpenNMS instance be able to reach the stats server on the Internet, and you can change your choice at any time on the Configuration admin page under “Data Choices”. The information is only sent once every 24 hours, so we don’t expect it to impact network traffic at all.

Once you’ve opted in, the next thing you’ll probably notice is new problem lists on the home page listing “services” and “applications”.

OpenNMS BSM Problem Lists

This relates to the major feature addition in Horizon 18: the Business Service Monitor (BSM).

OpenNMS BSM OpenDaylight

As people move from treating servers as pets to treating them like cattle, the emphasis has shifted to understanding how well applications and microservices are running as a whole instead of focusing on individual devices. The BSM allows you to configure these services and then leverage all the usual OpenNMS crunchy goodness as you would a legacy service like HTTP running on a particular box. The above screenshot comes from some prototype work Jesse has been doing with integrating OpenNMS with OpenDaylight. As you can see at a glance, while the ICMP service is down on a particular device, the overall Network Fabric is still functioning perfectly.

Another thing I’m extremely proud of is the increase in the quality of the documentation. Ronny and the rest of the documentation team are doing a great job, and we’ve made it a requirement that new features aren’t complete without documentation. Please check out the release notes as an example; they contain a pretty comprehensive list of the changes in 18.

A few I’d like to point out:

Horizon 17 is one of the most powerful and stable releases of OpenNMS ever, and we hope to continue that tradition with Horizon 18. Hats off to the team for such great work.

Here is a list of all the issues addressed in Horizon 18:

Release Notes – OpenNMS – Version 18.0.0

Bug

  • [NMS-3489] – "ADD NODE" produces "too much" config
  • [NMS-4845] – RrdUtils.createRRD log message is unclear
  • [NMS-5788] – model-importer.properties should be deprecated and removed
  • [NMS-5839] – Bring WaterfallExecutor logging on par with RunnableConsumerThreadPool
  • [NMS-5915] – The retry handler used with HttpClient is not going to do what we expect
  • [NMS-5970] – No HTML title on Topology Map
  • [NMS-6344] – provision.pl does not import requisitions with spaces in the name
  • [NMS-6549] – Eventd does not honor reloadDaemonConfig event
  • [NMS-6623] – Update JNA.jar library to support ARM based systems
  • [NMS-7263] – jaxb.properties not included in jar
  • [NMS-7471] – SNMP Plugin tests regularly failing
  • [NMS-7525] – ArrayOutOfBounds Exception in Topology Map when selecting bridge-port
  • [NMS-7582] – non RFC conform behaviour of SmtpMonitor
  • [NMS-7731] – Remote poller dies when trying to use the PageSequenceMonitor
  • [NMS-7763] – Bridge Data is not Collected on Cisco Nexus
  • [NMS-7792] – NPE in JmxRrdMigratorOffline
  • [NMS-7846] – Slow LinkdTopologyProvider/EnhancedLinkdTopologyProvider in bigger enviroments
  • [NMS-7871] – Enlinkd bridge discovery creates erroneous entries in the Bridge Forwarding Tables of unrelated switches when host is a kvm virtual host
  • [NMS-7872] – 303 See Other on requisitions response breaks the usage of the Requisitions ReST API
  • [NMS-7880] – Integration tests in org.opennms.core.test-api.karaf have incomplete dependencies
  • [NMS-7918] – Slow BridgeBridgeTopologie discovery with enlinkd.
  • [NMS-7922] – Null pointer exceptions with whitespace in requisition name
  • [NMS-7959] – Bouncycastle JARs break large-key crypto operations
  • [NMS-7967] – XML namespace locations are not set correctly for namespaces cm, and ext
  • [NMS-7975] – Rest API v2 returns http-404 (not found) for http-204 (no content) cases
  • [NMS-8003] – Topology-UI shows LLDP links not correct
  • [NMS-8018] – Vacuumd sends automation events before transaction is closed
  • [NMS-8056] – opennms-setup.karaf shouldn't try to start ActiveMQ
  • [NMS-8057] – Add the org.opennms.features.activemq.broker .xml and .cfg files to the Minion repo webapp
  • [NMS-8058] – Poll all interface w/o critical service is incorrect
  • [NMS-8072] – NullPointerException for NodeDiscoveryBridge
  • [NMS-8079] – The OnmsDaoContainer does not update its cache correctly, leading to a NumberFormatException
  • [NMS-8080] – VLAN name is not displayed
  • [NMS-8086] – Provisioning Requisitions with spaces in their name.
  • [NMS-8096] – JMX detector connection errors use wrong log level
  • [NMS-8098] – PageSequenceMonitor sometimes gives poor failure reasons
  • [NMS-8104] – init script checkXmlFiles() fails to pick up errors
  • [NMS-8116] – Heat map Alarms/Categories do not show all categories
  • [NMS-8118] – CXF returning 204 on NULL responses, rather than 404
  • [NMS-8125] – Memory leak when using Groovy + BSF
  • [NMS-8128] – NPE if provisioning requisition name has spaces
  • [NMS-8137] – OpenNMS incorrectly discovers VLANs
  • [NMS-8146] – "Show interfaces" link forgets the filters in some circumstances
  • [NMS-8167] – Cannot search by MAC address
  • [NMS-8168] – Vaadin Applications do not show OpenNMS favicon
  • [NMS-8189] – Wrong interface status color on node detail page
  • [NMS-8194] – Return an HTTP 303 for PUT/POST request on a ReST API is a bad practice
  • [NMS-8198] – Provisioning UI indication for changed nodes is too bright
  • [NMS-8208] – Upgrade maven-bundle-plugin to v3.0.1
  • [NMS-8214] – AlarmdIT.testPersistManyAlarmsAtOnce() test ordering issue?
  • [NMS-8215] – Chart servlet reloads Notifd config instead of Charts config
  • [NMS-8216] – Discovery config screen problems in latest code
  • [NMS-8221] – Operation "Refresh Now" and "Automatic Refresh" referesh the UI differently
  • [NMS-8224] – JasperReports measurements data-source step returning null
  • [NMS-8235] – Jaspersoft Studio cannot be used anymore to debug/create new reports
  • [NMS-8240] – Requisition synchronization is failing due to space in requisition name
  • [NMS-8248] – Many Rcsript (RScript) files in OPENNMS_DATA/tmp
  • [NMS-8257] – Test flapping: ForeignSourceRestServiceIT.testForeignSources()
  • [NMS-8272] – snmp4j does not process agent responses
  • [NMS-8273] – %post error when Minion host.key already exists
  • [NMS-8274] – All the defined Statsd's reports are being executed even if they are disabled.
  • [NMS-8277] – %post failure in opennms-minion-features-core: sed not found
  • [NMS-8293] – Config Tester Tool doesn't check some of the core configuration files
  • [NMS-8298] – Label of Vertex is too short in some cases
  • [NMS-8299] – Topology UI recenters even if Manual Layout is selected
  • [NMS-8300] – Center on Selection no longer works in STUI
  • [NMS-8301] – v2 Rest Services are deployed twice to the WEB-INF/lib directory
  • [NMS-8302] – Json deserialization throws "unknown property" exception due to usage of wrong Jax-rs Provider
  • [NMS-8304] – An error on threshd-configuration.xml breaks Collectd when reloading thresholds configuration
  • [NMS-8313] – Pan moving in Topology UI automatically recenters
  • [NMS-8314] – Weird zoom behavior in Topology UI using mouse wheel
  • [NMS-8320] – Ping is available for HTTP services
  • [NMS-8324] – Friendly name of an IP service is never shown in BSM
  • [NMS-8330] – Switching Topology Providers causes Exception
  • [NMS-8335] – Focal points are no longer persisted
  • [NMS-8337] – Non-existing resources or attributes break JasperReports when using the Measurements API
  • [NMS-8353] – Plugin Manager fails to load
  • [NMS-8361] – Incorrect documentation for org.opennms.newts.query.heartbeat
  • [NMS-8371] – The contents of the info panel should refresh when the vertices and edges are refreshed
  • [NMS-8373] – The placeholder {diffTime} is not supported by Backshift.
  • [NMS-8374] – The logic to find event definitions confuses the Event Translator when translating SNMP Traps
  • [NMS-8375] – License / copyright situation in release notes introduction needs simplifying
  • [NMS-8379] – Sluggish performance with Cassandra driver
  • [NMS-8383] – jmxconfiggenerator feature has unnecessary includes
  • [NMS-8386] – Requisitioning UI fails to load in modern browsers if used behind a proxy
  • [NMS-8388] – Document resources ReST service
  • [NMS-8389] – Heatmap is not showing
  • [NMS-8394] – NoSuchElement exception when loading the TopologyUI
  • [NMS-8395] – Logging improvements to Notifd
  • [NMS-8401] – There are errors on the graph definitions for OpenNMS JMX statistics
  • [NMS-8403] – Document styles of identifying nodes in resource IDs

Enhancement

  • [NMS-2504] – Create a better landing page for Configure Discovery aftermath
  • [NMS-4229] – Detect tables with Provisiond SNMP detector
  • [NMS-5077] – Allow other services to work with Path Outages other than ICMP
  • [NMS-5905] – Add ifAlias to bridge Link Interface Info
  • [NMS-5979] – Make the Provisioning Requisitions "Node Quick-Add" look pretty
  • [NMS-7123] – Expose SNMP4J 2.x noGetBulk and allowSnmpV2cInV1 capabilities
  • [NMS-7446] – Enhance Bridge Link Object Model
  • [NMS-7447] – Update BridgeTopology to use the new Object Model
  • [NMS-7448] – Update Bridge Topology Discovery Strategy
  • [NMS-7756] – Change icon for Dell PowerConnector switch
  • [NMS-7798] – Add Sonicwall Firewall Events
  • [NMS-7903] – Elasticsearch event and alarm forwarder
  • [NMS-7950] – Create an overview for the developers guide
  • [NMS-7965] – Add support for setting system properties via user supplied .properties files
  • [NMS-7976] – Merge OSGi Plugin Manager into Admin UI
  • [NMS-7980] – provide HTTPS Quicklaunch into node page
  • [NMS-8015] – Remove Dependencies on RXTX
  • [NMS-8041] – Refactor Enhanced Linkd Topology
  • [NMS-8044] – Provide link for Microsoft RDP connections
  • [NMS-8063] – Update asciidoc dependencies to latest 1.5.3
  • [NMS-8076] – Allow user to access local documentation from OpenNMS Jetty Webapp
  • [NMS-8077] – Add NetGear Prosafe Smart switch SNMP trap events and syslog events
  • [NMS-8092] – Add OpenWrt syslog and related event definitions
  • [NMS-8129] – Disallow restricted characters from foreign source and foreign ID
  • [NMS-8149] – Update asciidoctorj to 1.5.4 and asciidoctorjPdf to 1.5.0-alpha.11
  • [NMS-8152] – Collect and publish anonymous statistics to stats.opennms.org
  • [NMS-8160] – Remove Quick-Add node to avoid confusions and avoid breaking the ReST API
  • [NMS-8163] – Requisitions UI Enhancements
  • [NMS-8179] – ifIndex >= 2^31
  • [NMS-8182] – Add HTTPS as quick-link on the node page
  • [NMS-8205] – Generate events for alarm lifecycle changes
  • [NMS-8209] – Upgrade junit to v4.12
  • [NMS-8210] – Add support for calculating the derivative with a Measurements API Filter
  • [NMS-8211] – Add support for retrieving nodes with a filter expression via the ReST API
  • [NMS-8218] – External event source tweaks to admin guide
  • [NMS-8219] – Copyright bump on asciidoc docs
  • [NMS-8225] – Integrate the Minion container and packages into the mainline OpenNMS build
  • [NMS-8226] – Upgrade SNMP4J to version 2.4
  • [NMS-8238] – Topology providers should provide a description for display
  • [NMS-8251] – Parameterize product name in asciidoc docs
  • [NMS-8259] – Cleanup testdata in SnmpDetector tests
  • [NMS-8265] – SNMP collection systemDefs for Cisco ASA5525-X, ASA5515-X
  • [NMS-8266] – SNMP collection systemDefs for Juniper SRX210he2, SRX100h
  • [NMS-8267] – Create documentation for SNMP detector
  • [NMS-8271] – Enable correlation engines to register for all events
  • [NMS-8296] – Be able to re-order the policies on a requisition through the UI
  • [NMS-8334] – Implement org.opennms.timeseries.strategy=evaluate to facilitate the sizing process
  • [NMS-8336] – Set the required fields when not specified while adding events through ReST
  • [NMS-8349] – Update screenshots with 18 theme in user documentation
  • [NMS-8365] – Add metric counter for drop counts when the ring buffer is full
  • [NMS-8377] – Applying some organizational changes on the Requisitions UI (Grunt, JSHint, Dist)

Story

Task

  • [NMS-8236] – Move the "vaadin-extender-service" module to opennms code base

Warren Myers : new service – free, secure password generation

May 01, 2016 04:53 PM

Today, I am formally announcing a brand-new service / website for secure password generation.

Go visit password.cf

Get yourself random passwords of commonly-required lengths and complexities*.

Password Varieties:

  • 4 of 4
  • upper & lower alphanumeric
  • lower alphanumeric

Lengths generated: 12, 16, & 24 characters
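For the curious, here’s a rough sketch of how passwords in these varieties could be generated. This is NOT the actual password.cf code (see the GitHub project page for that), just an illustration using Python’s cryptographically secure `secrets` module:

```python
# Hypothetical sketch -- not the real password.cf implementation.
import secrets
import string

CHARSETS = {
    "upper & lower alphanumeric": string.ascii_letters + string.digits,
    "lower alphanumeric": string.ascii_lowercase + string.digits,
}

def generate(variety, length):
    """Draw each character uniformly from the variety's charset."""
    chars = CHARSETS[variety]
    return "".join(secrets.choice(chars) for _ in range(length))

def four_of_four(length):
    """'4 of 4': re-draw until all four character classes are present."""
    classes = [string.ascii_uppercase, string.ascii_lowercase,
               string.digits, string.punctuation]
    pool = "".join(classes)
    while True:
        pw = "".join(secrets.choice(pool) for _ in range(length))
        if all(any(ch in cls for ch in pw) for cls in classes):
            return pw

for length in (12, 16, 24):
    print(four_of_four(length))
```

Like the site itself, nothing here touches a filesystem or database – it’s pure in-memory generation.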

Visit the GitHub project page ..

.. if you want to run the site on your own server.

You can view the source “live” ..

.. if you’d like to see how it works without visiting GitHub – and verify nothing is saved anywhere by the code: it’s just a script with no filesystem / database access.

It’s fast ..

.. load times tend to be under 0.15 seconds!

It will always be linked from my Projects page, and from the ‘External’ links menu on this blog.


*Also findable at password.ga – same server, same code

Mark Turner : Neighborhood joy

April 30, 2016 01:08 PM

As sad as it is that Miss Ruth has moved away, our changing neighborhood ain’t all bad. In fact, there is lots to celebrate. Over the winter, Kelly and I finally bought a storm door for our front door, which gives us a look at what goes on outside. With the arrival of beautiful spring weather, I’ve been delighted to see all the neighbors out walking, running, pushing strollers, walking their dogs, and being neighborly. Last Friday evening alone I must have watched a dozen people passing happily by our home.

I’ve always considered the number of people you see out interacting with one another to be a sign of the health of a community. I’m thrilled to see so many of my friends and neighbors out getting to know their community.

Mark Turner : Miss Ruth moves away

April 29, 2016 02:19 PM

Miss Ruth Gartrell poses with the Turner family, February 2016.


I knew the day would one day come, and about two weeks ago it did: the day our wonderful next-door neighbor “Miss Ruth” Gartrell moved away. Her once-bustling home is now empty and it makes me sad.

We first found out about her impending move over New Year’s when a for sale sign appeared in her yard. She told me that she was unable to keep up with her large home the way she used to and also felt she should move back to California where she could be closer to more of her family. A few months then went by before her packing began in earnest and one morning about two weeks ago she and her family left for good.

Living next to her was like living next to an angel. We always looked out for each other. She once wrote one of the most humbling things anyone has ever said about me when she delivered a thank you note to me for something I’d done. In it she had written “I thank God that you are my neighbor.” Wow.

The Empty Gartrell Home



We will all miss her friendly smile, the stories she would tell, the cookies she would bake for the kids, and the hugs she was always happy to share. She’s invited us to come see her in California anytime and I’m hoping we may one day be able to accept. We miss having her next door but someone in California just got an awesome neighbor.

Her empty home won’t stand as a memorial for long. A developer plans to raze it and replace it with three upscale homes. This work might take place as early as next week. It’s progress, I guess, but it just won’t be the same without the comforting presence of Miss Ruth.

Magnus Hedemark : state of the nerd

April 29, 2016 11:26 AM

It’s been awhile since I’ve written, and much has changed. I thought it was time to lay down some updates. Since my last post, I’ve made some big career decisions.

Career

The elephant in the room. Let’s tackle that first.

Happiness is important. And it’s been a little while since I’ve had happiness in my career. I think the last time I was truly happy was the first time that I actually enjoyed being in a leadership position.

[a 3,000+ word essay about the last five years of my career was here]

I’m really happy to announce that I’ve resigned my role as Principal Software Engineer at NetSuite/Bronto to take on a role in the leadership team at Optum. I started on Monday as Manager of Continuous Delivery. And I’m hiring.

The last few years of my career, coinciding with when I switched from Management back into Engineering, have not been very fulfilling or challenging. The happiness has been missing for awhile. I’ve not had stake in influencing the kind of organizational growth and change that really make me want to come to work every day and do my best work. So I’m now back in Management, and I’m already getting access to influence the kinds of things that I’m really passionate about.

 

Reading

My reading habits have suffered over the last few months. My love for reading took a hit from some other big changes, mostly around my career path. I did read, but I didn’t find joy in it. I don’t think I have any books I want to single out right now as being wonderful. I know I’ve hit a couple of turkeys, but I’m not going to shame them here.

I’m looking at a stack of ten books on my desk that I’ve singled out for reading. The last time I was a manager, I read a lot of books that were meant to help me better understand the discipline and help me to imagine better solutions. When I went from Management to Engineering a few years ago, I’d found that my input was discouraged without the leadership title attached. This was true to the point that I even got a formal reprimand when I was at Red Hat for tweeting praise of Ricardo Semler’s book “Maverick“. So I’d drifted back into reading, and writing, fiction during my time as an Engineer.

The ten books I’ve singled out for reading that are sitting on my desk now:

  1. The 5 Languages of Appreciation in the Workplace: Empowering Organizations by Encouraging People
  2. Winning Teams, Winning Cultures
  3. The 7 Habits of Highly Effective People: Powerful Lessons in Personal Change
  4. Up The Mood Elevator: Your Guide to Success Without Stress
  5. The Practice of Management
  6. The Open Organization: Igniting Passion and Performance
  7. The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses
  8. Business Stripped Bare: Adventures of a Global Entrepreneur
  9. Leading the Transformation: Applying Agile and DevOps Principles at Scale
  10. Designing Delivery

I’ve also re-started my digital subscription to Harvard Business Review on my Kindle. I tend to use the Kindle only for things that I don’t feel I’ll want enduring hard copies of.

All of these books were picked specifically because they will help me to understand the existing leadership culture at Optum, or because they will help me to better focus my own individual leadership values.

Writing

Back in February I took a week off with my family and went to Clearwater Beach, Florida. I spent a little bit of that time on the balcony of my hotel room, overlooking the Gulf of Mexico, (re-)beginning the manuscript for a very existential science fiction story that’s been kicking around in my head. Unlike the last manuscript I wrote, which was done 100% electronically, this time I’ve been writing with a fountain pen on paper. Neither way is better than the other, but I will say that writing with a pen does change the cadence and does change the quality of writing.

I’ve been a badly behaved writer. Or perhaps I’ve been a typical one. I came home from Florida to a job that I was feeling really sad about, and this story that I’m writing is meant to be one of hope. I’d put down the pen and stopped writing. I haven’t touched the project in the last two months.

Perhaps now that I’ve so thoroughly rearranged my life, maybe I’ll get back to it. Though I’m still trying to figure out what my new routine is going to look like. I’m now part of an international organization. I have meeting requests for time slots outside of the traditional 9-5 which need to be respected because, well, there are no times that are convenient when you have attendees everywhere from Utah to Minnesota to India.

One of the things that worked well for me when writing My Love, My Slave was taking a Macbook Air everywhere with me. I’d chip away at that story any time I had five or ten spare minutes to call my own. I might need to do that again. That would mean abandoning the pen & paper approach to writing. Or maybe just putting that project away and starting with one of the others in my backlog that could better accommodate my hectic new schedule.

Fitness

I’ve been trying… again… to reclaim my fitness. I’m using apps to help me out now, mostly MyFitnessPal and MapMyWalk. Since I’ve been so overweight for so long, I’m using a Scosche Rhythm+ armband heart rate monitor to help me find a cadence that lets me get an aerobic workout without pushing too hard. I’m down eight pounds so far.

I’ve also splurged and picked up a Bowflex Max Trainer M5 for the house. It kind of scares me, to be honest. Even on the easiest setting, my heart rate shoots up to a level higher than I think is probably safe. I can only last a minute or two before my knees grind to a halt and I can go no further. It’s going to be a long time before I can get a full fourteen-minute workout out of this thing.


Mark Turner : Puerto Rico

April 28, 2016 05:39 PM

metric system
Spanish
beaches/urban
driving
small
no voting for president
cannon-fodder citizenship
coqui frogs
expensive electricity
abandoned Roosevelt Roads
three days getting back
old san juan
drinking and driving
hike up waterfall/rappelling/ziplining
gas prices (liters)
beaches in every direction
governor’s beachhouse
Hallie new friends (Kristen and James?)
star-filled skies
rough waters for Vieques
snorkeling off the beach
Mayoral races
iguanas like squirrels
cops on every corner in Old San Juan

Mark Turner : Tallying up electric vehicle savings

April 26, 2016 01:07 AM

I was showing off my electric car to an engineer friend when he asked me a very engineer-like question.

“So, how much money have you saved?” he grinned. “I know you’ve figured it out, right?”

“Well, yes and no,” was my response. I went on to briefly explain fluctuating electric and gasoline costs and how the solar panels must also factor in. It’s not so simple to say “I have saved x dollars.”

That said, I do have a record of my electricity usage, both before and after the EV. I can figure out my cost of charging during off-peak hours and extrapolate that over the time we’ve owned the car. Perhaps I can find a resource that shows the average price of unleaded gasoline for the past year or so. Finally, I can say with certainty how many miles I’ve driven. Putting all of this into a spreadsheet ought to give me a ballpark figure for what it has cost to drive. Then I can factor in the skipped oil changes and other unneeded mechanical work and get a decent guess as to what we’ve saved.
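The spreadsheet math is simple enough to sketch in a few lines of Python. Every number here is a hypothetical placeholder, not my actual data:

```python
# Back-of-the-envelope EV savings math. All inputs below are made-up
# placeholders -- substitute your own off-peak rate, gas price,
# mileage, and efficiency figures.

def ev_savings(miles, ev_kwh_per_mile, kwh_price,
               gas_mpg, gas_price, maintenance_saved=0.0):
    """Return (ev_cost, gas_cost, total_savings) in dollars."""
    ev_cost = miles * ev_kwh_per_mile * kwh_price
    gas_cost = (miles / gas_mpg) * gas_price
    return ev_cost, gas_cost, gas_cost - ev_cost + maintenance_saved

ev, gas, saved = ev_savings(
    miles=10_000,           # miles driven since buying the car
    ev_kwh_per_mile=0.30,   # a typical EV efficiency
    kwh_price=0.09,         # off-peak $/kWh
    gas_mpg=30,             # the gas car it replaced
    gas_price=2.25,         # average $/gallon over the period
    maintenance_saved=120,  # skipped oil changes, etc.
)
print(f"EV: ${ev:.2f}, gas: ${gas:.2f}, saved: ${saved:.2f}")
# -> EV: $270.00, gas: $750.00, saved: $600.00
```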

This might be a fun Saturday afternoon project.

Warren Myers : turn on spf filtering with postfix and centos 7

April 25, 2016 08:44 AM

After running my new server for a while, I was noticing an unusually high level of bogus email arriving in my inbox – mail that was being spoofed to look like it was coming from myself (to myself).

After a great deal of research, I learned there is a component of the DNS specification that allows for TXT (or dedicated SPF) records. Sender Policy Framework was developed to help mail servers identify whether or not messages are being sent by authorized servers for their respective domains.

While there is a huge amount of stuff that could be added into a SPF record, what I am using for my domains is:

"v=spf1 mx -all"

Note: some DNS providers (like DigitalOcean) will make you use a TXT record instead of a dedicated SPF record (my registrar / DNS provider Pairnic supports the latter).

If they require it be via TEXT record, it’ll look something like this: TXT @ "v=spf1 a include:_spf.google.com ~all"
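To make the record’s structure concrete, here’s a toy illustration (nowhere near a full RFC 7208 parser) that splits an SPF record into its qualifier/mechanism pairs:

```python
# Toy SPF record splitter -- for illustration only, not a real
# RFC 7208 evaluator.
QUALIFIERS = {"+": "pass", "-": "fail", "~": "softfail", "?": "neutral"}

def parse_spf(record):
    terms = record.split()
    assert terms[0] == "v=spf1", "not an SPF version 1 record"
    parsed = []
    for term in terms[1:]:
        qualifier = "+"  # a bare mechanism defaults to "pass"
        if term[0] in QUALIFIERS:
            qualifier, term = term[0], term[1:]
        parsed.append((QUALIFIERS[qualifier], term))
    return parsed

print(parse_spf("v=spf1 mx -all"))
# -> [('pass', 'mx'), ('fail', 'all')]
```

So my record reads: pass mail from my domain’s MX hosts, hard-fail everything else.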

Starting with this old how-to I found for CentOS 6, I added the policy daemon for Postfix (though it’s now in Python and not Perl) thusly:

yum install pypolicyd-spf

(I already had the EPEL yum repository installed – to get it setup, follow their directions, found here.)

Then I edited the master.cf config file for Postfix, adding the following at the bottom:

policy unix - n n - 0 spawn user=nobody argv=/bin/python /usr/libexec/postfix/policyd-spf

Note: those are actually tabs in my config file – but spaces work, too.
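Note also that defining the service in master.cf isn’t enough by itself – Postfix has to be told to consult it from main.cf as well. Your restriction list will vary, but with the “policy” service name used above, the entry looks something like this:

```
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    check_policy_service unix:private/policy
```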

When you’re done with your edits and record additions, restart Postfix:

systemctl restart postfix

Then you’ll see messages like this in your /var/log/maillog file:

Apr 23 18:58:59 khopesh postfix/smtpd[18199]: NOQUEUE: reject: RCPT from unknown[197.27.40.169]: 550 5.7.1 <warren@datente.com>: Recipient address rejected: Message rejected due to: SPF fail - not authorized. Please see http://www.openspf.net/Why?s=mfrom;id=warren@datente.com;ip=197.27.40.169;r=warren@datente.com; from=<warren@datente.com> to=<warren@datente.com> proto=ESMTP helo=<[197.27.40.169]>

And if you follow the directive to go visit the “Why” page on OpenSPF, you’ll see something like this explanation:


Why did SPF cause my mail to be rejected?

What is SPF?

SPF is an extension to Internet e-mail. It prevents unauthorized people from forging your e-mail address (see the introduction). But for it to work, your own or your e-mail service provider’s setup may need to be adjusted. Otherwise, the system may mistake you for an unauthorized sender.

Note that there is no central institution that enforces SPF. If a message of yours gets blocked due to SPF, this is because (1) your domain has declared an SPF policy that forbids you to send through the mail server through which you sent the message, and (2) the recipient’s mail server detected this and blocked the message.

warren@datente.com rejected a message that claimed an envelope sender address of warren@datente.com. warren@datente.com received a message from 197.27.40.169 that claimed an envelope sender address of warren@datente.com.

However, the domain datente.com has declared using SPF that it does not send mail through 197.27.40.169. That is why the message was rejected.


Tarus Balog : Welcome Ecuador (Country 29)

April 22, 2016 03:49 PM

It is with mixed emotions that I get to announce that we now have a customer in Ecuador, our 29th country.

My emotions are mixed as my excitement at having a new customer in a new country is offset by the tragedy that country suffered recently. Everyone at OpenNMS is sending out our best thoughts and we hope things settle down (quite literally) soon.

They join the following countries:

Australia, Canada, Chile, China, Costa Rica, Denmark, Egypt, Finland, France, Germany, Honduras, India, Ireland, Israel, Italy, Japan, Malta, Mexico, The Netherlands, Portugal, Singapore, Spain, Sweden, Switzerland, Trinidad, the UAE, the UK and the US.

Mark Turner : Is Facebook secretly snooping on my photos to serve ads?

April 22, 2016 02:59 PM

I’ve been taking part in an experimental drug study at the local Veterans Administration hospital. Now that the study is wrapping up, I thought it might be wise to take a photo of my medicine bottle for future reference. So, during a break in traffic on my way to my appointment the other day, I picked up my work Android phone and snapped some photos of my medicine bottle, like this one.

Until this blog post I hadn't shared this photo with anyone.


All seemed well until I logged into Facebook on the same phone yesterday. That’s when I was astonished to see this targeted ad show up in my Facebook feed.

Holy shit! What are the odds that Facebook would just happen to serve up an ad that matched a photo I took less than 24 hours earlier, a photo that I hadn’t shared with anyone? Call me paranoid but I can’t even fathom the odds that this is coincidental. I don’t post any medical stuff on Facebook, have never mentioned medicine or bottles or … anything. No keywords. There is nothing I’ve shared voluntarily on Facebook that could have summoned an ad that just happens to match a photograph I had just taken but never intended to share.

Did my Facebook app spy on my private photo to serve me this ad?


The simplest explanation is that Facebook is snooping on my phone’s photos and using them without my knowledge to send me targeted ads. There is just no way this can be coincidental.

This makes me furious. That Facebook monetizes the content I willingly share isn’t the issue – after all, I’ve long understood that if something is free, then that makes me the product. The issue is whether Facebook is making use of the content that I am not willing to share, behind my back! It certainly looks like it is.

So, can Facebook do this? Certainly Facebook Messenger has raised privacy issues, one of the many reasons I don’t use it. Back in November, Facebook added a feature to Messenger called “Photo Magic,” which automatically scours your phone’s photos, allegedly to tag and alert any Facebook friends it finds. Says Yahoo Business News:

In a bit of “Photo Magic,” Facebook is testing a new feature to make it easier to share your photos with friends — before you even upload them to the social network.

Using facial recognition, Facebook Messenger will look through your newly taken photos in your phone’s camera roll to identify your friends in them.

If Photo Magic recognizes one of your friends, Messenger will immediately send you a notification to send it to the person in the photo, so you don’t have to go the extra step to message or text them later.

Is tagging friends the only thing Facebook is doing when it’s snooping through your photos, or is it also using your photos to send you targeted ads? And what about the regular Facebook app? Did Photo Magic get quietly slipped into it as well?

To double-check what permissions I granted the Facebook app, I checked the listing on Google Play:

This app has access to:
Device & app history: retrieve running apps

Identity: find accounts on the device, read your own contact card, add or remove accounts

Calendar: add or modify calendar events and send email to guests without owners’ knowledge, read calendar events plus confidential information

Contacts: find accounts on the device, read your contacts, modify your contacts

Location: precise location (GPS and network-based), approximate location (network-based)

SMS: read your text messages (SMS or MMS)

Phone: read phone status and identity, write call log, read call log, directly call phone numbers

Photos/Media/Files: modify or delete the contents of your USB storage, read the contents of your USB storage

Storage: modify or delete the contents of your USB storage, read the contents of your USB storage

Camera: take pictures and videos

Microphone: record audio

Wi-Fi connection information: view Wi-Fi connections

Device ID & call information: read phone status and identity

Other: adjust your wallpaper size, receive data from Internet, download files without notification, control vibration, reorder running apps, run at startup, draw over other apps, send sticky broadcast, connect and disconnect from Wi-Fi, create accounts and set passwords, change network connectivity, prevent device from sleeping, set wallpaper, install shortcuts, expand/collapse status bar, read battery statistics, read sync settings, toggle sync on and off, read Google service configuration, view network connections, change your audio settings, full network access

Pretty all-encompassing list, isn’t it? For comparison, I looked up the permissions to Facebook Messenger:

This app has access to:

Identity: find accounts on the device, read your own contact card, add or remove accounts

Contacts: find accounts on the device, read your contacts, modify your contacts

Location: precise location (GPS and network-based), approximate location (network-based)

SMS: edit your text messages (SMS or MMS), receive text messages (SMS), send SMS messages, read your text messages (SMS or MMS), receive text messages (MMS)

Phone: read phone status and identity, read call log, directly call phone numbers, reroute outgoing calls

Photos/Media/Files: modify or delete the contents of your USB storage, read the contents of your USB storage

Storage: modify or delete the contents of your USB storage, read the contents of your USB storage

Camera: take pictures and videos

Microphone: record audio

Wi-Fi connection information: view Wi-Fi connections

Device ID & call information: read phone status and identity

Other: receive data from Internet, download files without notification, control vibration, run at startup, draw over other apps, pair with Bluetooth devices, send sticky broadcast, create accounts and set passwords, change network connectivity, prevent device from sleeping, install shortcuts, read battery statistics, read sync settings, toggle sync on and off, read Google service configuration, view network connections, change your audio settings, full network access

You can see that Messenger has a few extra things that one would expect for a messenger app, such as more SMS rights, but look at the storage and camera rights:

Facebook:

Photos/Media/Files: modify or delete the contents of your USB storage, read the contents of your USB storage

Storage: modify or delete the contents of your USB storage, read the contents of your USB storage

Camera: take pictures and videos

Messenger:

Photos/Media/Files: modify or delete the contents of your USB storage, read the contents of your USB storage

Storage: modify or delete the contents of your USB storage, read the contents of your USB storage

Camera: take pictures and videos

As you can see above, the rights that both the standard Facebook app and Facebook Messenger use to read your photos, videos, and camera are identical. Thus, there is nothing from Android’s point of view that prevents the Facebook app from spying on your private photos the same way Messenger’s Photo Magic does.

So, am I being paranoid? Perhaps, but I am highly suspicious that something underhanded is going on here. The odds of this ad being shown to me by pure coincidence are just too slim not to make me nervous. Further investigation is warranted.

A few parting thoughts:

  • I never opted in to allow Facebook access to photos I did not explicitly share (i.e., Photo Magic).
  • I cannot find any settings in the Facebook mobile app that might disable this feature.
  • If Facebook has access to my private photos, then state security organizations can, too.
  • Android 6.x offers the ability to fine-tune app permissions. It can’t get deployed to my phones fast enough.

Mark Turner : KeePass2Android password manager

April 20, 2016 01:29 AM

At $WORK, I use a commercial password management tool that seems to fit my needs as well as the company’s. For my home use, however, I prefer open source.

My password manager of choice has been KeePass. I like its open nature and wide variety of supported platforms. As I began to use it regularly, though, I realized that keeping all these password databases in sync is a huge challenge. Earlier this week I went searching to see if another open source password manager might do the trick and thanks to this post on the excellent Linuxious blog I discovered KeePass2Android.

KeePass2Android is a fork of KeePass and uses KeePass’s same libraries to manipulate its databases. The big win for KeePass2Android, though, is its extensive support for remote files. It supports databases hosted on popular file-sharing tools such as Google Drive, Dropbox, and Box.com, as well as SFTP- and WebDAV-hosted files. It’s also been rewritten from Java to Mono for Android, which seems to be snappier than the Java version.

Now I have KeePass2Android installed on all of my devices and pointed to the same database! That’s one big feature now no longer solely the domain of commercial password managers. Score one for open source!

Mark Turner : The mystery of place memory

April 20, 2016 12:17 AM

Yesterday, I was leaving my desk for a meeting when I realized I had my high-tech, shiny Macbook Pro in one hand and a low-tech notepad in the other. There was no reason I needed a notepad when I had my laptop, and yet it didn’t seem right to attend a meeting without one.

After pointing out my absurdity to my coworkers for a laugh, I pondered how writing something down with a pencil or pen seems to strengthen my recall of it. I could easily type whatever I’d be jotting down and do it much faster with a computer, yet I’m certain I would not retain it as well as if I had used a pen or pencil.

Watching my dog make his rounds to all of the neighborhood pee spots got me thinking of how a dog’s world must be organized. Smells act as a dog’s map. If a dog finds a treat somewhere in the house, the dog will continually check that spot long afterward, even if the treat was there only once. Dogs seem to create memories based on place (and reinforced with one of the strongest memory-making senses, the sense of smell).

I also thought of how we humans tend to organize our memories based on place. When recalling a fact or replaying a memory in our heads, we often instinctively look up to a particular place in space, as if that spot in physical space somehow holds the answer. Another example is how walking into a new room sometimes instantly erases the memory of what you were looking for. Or how a visit to old stomping grounds can ferret out long-lost memories.

We are oriented to operate in 3D space, so it makes sense that our memory process might be similarly designed. Check out some fascinating research on this topic and the role that the brain’s retrosplenial cortex plays.

Warren Myers : helping a magpierss-powered site perform better

April 19, 2016 11:52 PM

I rely on MagpieRSS to run one of my websites. (If you'd like to see the basic code for the site, see my GitHub profile.)

One of the drawbacks to Magpie, and to dynamic websites in general, is that they can be bottlenecked by external sources – in the case of Magpie, those sources are the myriad RSS feeds that Datente draws from.

To overcome some of this sluggishness, and to take better advantage of Magpie's caching feature, I recently started a simple cron job that loads every page on the site every X minutes – this refreshes the cache and helps ensure a faster experience for readers. By scheduling a background refresh of every page, I cut average page load times by nearly a factor of 10! While this is quite dramatic, my worst-performing page was still taking upwards of 10 seconds to load a not-insignificant percentage of the time (sometimes more than a minute!) 🙁

Enter last week's epiphany – since RSS content doesn't change all that often (even crazy-frequent-updating feeds rarely exceed 4 updates per hour), I could take advantage of a "trick" and make the displayed pages nearly static (I still have an Amazon sidebar that's dynamically-loaded). With this stupidly-simple hack, I cut the slowest page load time from ~10-12 seconds to under 1: another 10x improvement!

"What is the 'trick'," you ask? Simple – I copied every page and prefixed it with a short character sequence, and then modified my cron job to still run every X minutes, but now call the "build" pages, redirecting the response (which is a web page, of course) into the "display" pages. In other words, make the display pages static by building them in the background every so often.

If you'd like to see exactly how I'm doing this for one page (the rest are similar), check out this stupidly-short shell script:

(time (/bin/curl -f http://datente.com/genindex.php > ~/temp.out)) 2>&1 | grep real

(The time is in there for my cron reports.)

Compare the run time to the [nearly] static version:

(time (/bin/curl -f http://datente.com/index.php > ~/temp.out)) 2>&1 | grep real
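Putting the pieces together, the whole background-refresh job looks something like the sketch below. The datente.com URL and the "gen" build-page prefix come from the post; the page list, web-root path, and the temp-file-then-move step are my assumptions (the move makes the swap atomic, so readers never see a half-written display page). The curl fetch is stubbed here so the sketch is self-contained and runnable:

```shell
#!/bin/sh
# Sketch of the cron-driven "build -> display" refresh.
# In cron this might run every 15 minutes, e.g.:
#   */15 * * * * /home/warren/rebuild.sh

webroot=$(mktemp -d)   # stand-in for the real web root

build_page() {
  # Real version would be: /bin/curl -f "http://datente.com/gen$1.php"
  # Stubbed so the example runs anywhere:
  echo "<html><body>freshly built $1</body></html>"
}

for page in index; do
  # Build to a temp file, then move into place atomically.
  build_page "$page" > "$webroot/$page.php.tmp"
  mv "$webroot/$page.php.tmp" "$webroot/$page.php"
done

cat "$webroot/index.php"
```

The display page is now a plain static file as far as readers are concerned; only the background job ever waits on the slow RSS fetches.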

Mark Turner : Parks board past

April 19, 2016 11:06 PM

While fueling up at the gas station this morning, I recognized the gentleman behind me as Ed Morris, the former chair of the Mordecai Historic Park board on which I served for four years. Ed was happy to see me and we caught up for a bit as we haven’t seen each other in far too long.

I was touched when Ed told me I was missed over at Mordecai. Serving on Mordecai’s board was not only a committee assignment for me while I was on the Parks board but it was also a personal treat. I am proud that I participated in the project to create an Interpretive Center at Mordecai and worked with the community to build consensus for the plan. It was a fun group to serve with, and then in a flash it was over.

I’ve turned my attention to other endeavors but I will always be proud of Raleigh’s parks. I hope to continue getting Dix Park designed, which would pretty much top it all.

Tarus Balog : Agent Provocateur

April 19, 2016 03:31 PM

I’ve been involved with the monitoring of computer networks for a long time, two decades actually, and I’m seeing an alarming trend. Every new monitoring application seems to be insisting on software agents. Basically, in order to get any value out of the application, you have to go out and install additional software on each server in your network.

Now there was a time when this was necessary. BMC Software made a lot of money with its PATROL series of agents, yet people hated them then as much as they hate agents now. Why? Well, first there was the cost, both in terms of licensing and in continuing to maintain them (upgrades, etc.). Next there was the fact that you had to add software to already overloaded systems. I can remember the first time the company I worked for back then deployed a PATROL agent on an Oracle database: when it was started up, it took the database down as it slammed the system with requests. Which leads me to the final point: beyond the security issues that arise from running more applications on a system, the moment the system experiences a problem, the blame will fall on the agent.

Despite that, agents still seem to proliferate. In part I think it is political. Downloading and installing agents looks like useful work. “Hey, I’m busy monitoring the network with these here agents”. Also in part, it is laziness. I have never met a programmer who liked working on someone else’s code, so why not come up with a proprietary protocol and write agents to implement it?

But what bothers me the most is that it is so unnecessary. The information you need for monitoring, with the possible exception of Windows, is already there. Modern operating systems (again, with the exception of Windows) ship with an SNMP agent, usually based on Net-SNMP. This is a secure, powerful, extensible agent that has been tried and tested for many years, and it is maintained directly on the server itself. You can use SNMPv3 for secure communications, and the “extend” and “pass” directives to make it easy to customize.
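To make that concrete, here is a minimal snmpd.conf sketch using the standard Net-SNMP “extend” directive. The directives themselves are stock Net-SNMP; the community string, allowed host, and script path are placeholder assumptions:

```
# /etc/snmp/snmpd.conf (sketch)
rocommunity monitoring 192.0.2.10          # read-only access for the poller
# Expose the output of a custom health script via NET-SNMP-EXTEND-MIB:
extend app-health /usr/local/bin/app-health.sh
```

The management station can then read the script's output with something like `snmpwalk -v2c -c monitoring <host> NET-SNMP-EXTEND-MIB::nsExtendObjects` – no extra agent required.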

Heck, even Windows ships with an extensible SNMP agent, and you can also access data via WMI and PowerShell.

But what about applications? Don’t you need an agent for that?

Not really. Modern applications tend to have an API, usually based on ReST, that can be queried by a management station for important information. Java applications support JMX, databases support ODBC, and when all that fails you can usually use good ol’ HTTP to query the application directly. And the best part is that the application itself can be written to guard against a monitoring query causing undue load on the system.
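The HTTP case can be sketched in a few lines of shell. The health-check URL is hypothetical, and fetch_status is stubbed so the example is self-contained; in practice it would be `curl -s -o /dev/null -w '%{http_code}' "$1"`:

```shell
#!/bin/sh
# Sketch of an agentless HTTP check against an application's own API.

fetch_status() {
  # Stub standing in for:
  #   curl -s -o /dev/null -w '%{http_code}' "$1"
  echo 200
}

status=$(fetch_status "https://app.example.com/health")
if [ "$status" = "200" ]; then
  echo "app is up"
else
  echo "app is down (HTTP $status)"
fi
```

The point is that the check runs entirely from the management station's side; the application decides how cheap or expensive answering that query is.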

At OpenNMS we work with a lot of large customers, and they are loath to install new software on all of their servers. Plus, many of our customers have devices that can’t support additional agents, such as routers and switches, and IoT devices such as thermostats and door locks. This is the main reason why the OpenNMS monitoring platform is, by design, agentless.

A critic might point out that OpenNMS does have an agent in the remote poller, as well as in the upcoming Minion feature set. True, but those act as “user agents”, giving OpenNMS a view into networks as if it were a user of those networks. The software is not installed on every server; instead it just needs the same access a user would have, so it can be installed on an existing system or on a small system purchased for that purpose – at a minimum, just one for each network to be monitored.

While some new IT fields may require agents, most successful solutions try to avoid them. Even in newer fields such as IT automation, the best solutions are agentless. They are not necessary, and I strongly suggest that anyone who is asked to install an agent for monitoring question that requirement.

Mark Turner : Exercise Is ADHD Medication – The Atlantic

April 17, 2016 10:10 PM


Mental exercises to build (or rebuild) attention span have shown promise recently as adjuncts or alternatives to amphetamines in addressing symptoms common to Attention Deficit Hyperactivity Disorder (ADHD). Building cognitive control, to be better able to focus on just one thing, or single-task, might involve regular practice with a specialized video game that reinforces “top-down” cognitive modulation, as was the case in a popular paper in Nature last year. Cool but still notional. More insipid but also more clearly critical to addressing what’s being called the ADHD epidemic is plain old physical activity.

Source: Exercise Is ADHD Medication – The Atlantic

Mark Turner : Russia’s military rejects U.S. criticism of new Baltic encounter | Reuters

April 17, 2016 01:58 PM

The USS Donald Cook (DDG-75) was buzzed earlier this week by a pair of Russian SU-24 Fencer bombers as the Cook transited the Baltic Sea. The Fencers flew an attack profile and flew within 100 feet (and some say within 30 feet) of the Cook in what the Cook skipper CDR Charles Hamilton called an unsafe and unprofessional manner.

While the incident was unusually unsafe, this kind of response from Russia is no surprise. Russia has long been irked by the U.S. Navy’s stubborn insistence on exercising its right of free passage through international waters, including the Baltic and Black Seas near Russia’s coast. Russia has a history of aggressively challenging the U.S. Navy as it operates in these areas, behavior which has sometimes resulted in collisions.

While some old-salt Navy shipmates have criticized the Cook’s response as “weak,” the truth is that the Cook is extraordinarily capable of defending itself and could have easily handled the Fencers. However, given the history of operating near Russia, the Cook was almost certainly prepared for this aggressive response to its presence and did not take Russia’s bait by refusing to escalate the confrontation.

Given the close call of this latest incident, though, I don’t know if the U.S. Navy will be so willing to play nice the next time around. I would not be surprised if any future Russian bombers that pretend-attack a U.S. Navy warship operating near Russia get pretend-lit-up by that ship’s weapons radars.

Overall, though, the Russian military remains a shadow of its former self. The plunging oil prices have gutted Russia’s military funding. These highly-publicized dangerous confrontations are nothing more than propaganda used to prop up Russian nationalism.

In short, nothing to see here. Move along.

Russia’s military rejected criticism by U.S. European Command on Sunday that a Russian jet had made aggressive maneuvers near a U.S. reconnaissance plane over the Baltic Sea, a second incident in the region between the Cold War-era foes in the past week.

Source: Russia’s military rejects U.S. criticism of new Baltic encounter | Reuters

Mark Turner : Too busy to blog

April 16, 2016 10:21 PM

I’m hoping to catch up at some point with documenting all of the stuff that’s been going on lately. We’ve had a trip to Savannah, a trip to Puerto Rico, and a work trip to Boulder. I’ve been pretty exhausted in-between, too. Hopefully tonight and tomorrow I’ll have time to properly write it all down. Stay tuned.

Warren Myers : how did i never know about .ssh/config?

April 14, 2016 12:56 AM

I’m sure folks have tried to explain this to me before, but it wasn’t until today that it finally clicked – using .ssh/config will save you a world of hurt when managing various systems from a Linux host (I imagine it works on other platforms, too – but I’ve only started using it on CentOS).

Following directions I found here, I started a config file on a server I use as a jump box. In it I have an entry for my web server, and I’ll be adding other frequently-accessed servers to it as time goes on.
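For anyone who, like me, hadn't seen this before: an entry is just a Host alias plus whatever options you'd otherwise pass on the command line. The alias, hostname, user, and key path below are made-up examples, not my actual config:

```
# ~/.ssh/config (sketch)
Host web
    HostName webserver.example.com
    User warren
    Port 22
    IdentityFile ~/.ssh/web_key
```

With that in place, `ssh web` does the work of `ssh -p 22 -i ~/.ssh/web_key warren@webserver.example.com` – and scp, sftp, and rsync-over-ssh pick up the same aliases for free.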

Thanks, nerderati, man pages … and whomever else tried to explain this to me before but I didn’t grok.

Jesse Morgan : Puppet Enterprise + firewall = pain.

April 12, 2016 06:11 PM

I’ve been tasked with setting up puppet enterprise. For numerous reasons it’s shaping up to be the project from hell (some the fault of puppet, but many that aren’t), but I’d like to share this little tidbit for posterity.

The main issue I’ve run into is that our puppet server is in a highly restricted vlan with no internet access. Since puppet pulls its modules from puppetforge, this becomes problematic.  The solution we came up with is to explicitly state the git repo to use for each module in the Puppetfile.

Problem 1: Naming conventions.

I can’t keep 100% fidelity on the project names when we migrate them over – for the puppet module KyleAnderson/consul, I don’t want to create a KyleAnderson user, so I have to mangle it to merge the user and project names together (project names alone may not be unique; e.g., if bob/ntp wrote his module for Windows and kevin/ntp wrote his for Linux, we can’t just call either one puppet/ntp or we’ll get a collision).

We go from this:

forge "http://forge.puppetlabs.com"
mod "KyleAnderson/consul", :latest
mod "arioch/redis", :latest
...

to

forge "http://forge.puppetlabs.com"
mod "KyleAnderson/consul", :latest
  :git => 'https://internalgit/puppet/KyleAnderson-consul'
mod "arioch/redis", :latest
  :git => 'https://internalgit/puppet/arioch-redis'
...

In order to do this, we needed to get the git repo for each and mirror it. Well, that was the intent.

Problem 2: Names don’t match

KyleAnderson/consul does not exist on github. After manually searching the forge, I see his URL is actually solarkennedy/consul. So this means we need to get the project URL for each module to be able to clone the git repo. After much experimentation with puppet help module, I realized I can search for the module name, export as yaml and grep out the project name. I end up using the following command to check out the 51 modules I need:

for i in `cat .file | sed -e 's/.*"\(.*\)".*/\1/'`; do
  # look the module up on the Forge, grab its project_url, and clone it
  puppet module search ${i} --render-as=yaml | grep project_url | sed -e 's/.*: //' | xargs git clone
done

Problem 3: Inconsistent project URLs

…except that only works for about 80% of the modules- the rest have bad urls. Oh well, 43 is better than nothing.

ok, I have the modules now, time to check them into my git repo…

Problem 4: can’t check modules into git without the project existing first.

I have to create all 43 projects in the github enterprise web interface; that’s painful. I search and find documentation that eventually leads me to this little nugget:

for i in `ls` ; do curl -u "jmorgan3:$token" http://internalgit/api/v3/orgs/puppet/repos -d '{"name": "'${i}'"}' ; done

which creates 43 glorious repos. Then, I set the origin URL to my server:

for i in `ls` ; do cd $i ; echo $i ; git remote set-url origin git@internalgit:puppet/${i}.git ; cd ~/Projects/puppetmods/ ; done

and finally push them up

for i in `ls` ; do cd $i ; echo $i ; git remote -v ; cd ~/Projects/puppetmods/ ; done
for i in `ls` ; do cd $i ; echo $i ; git push ; cd ~/Projects/puppetmods/ ; done

Now I have all 43 modules checked into my internal git server.

I need to match up repos with modules (since the names may not match).

Problem 5: Repos were horribly named.

By using the repo names from the project URL, I still ended up with names like realmd, puppet-wordpress, and sssd. Hopefully this won’t bite us later.

I’ve commented out the remaining 7 unmatched projects, committed and pushed my Puppetfile changes, and am now rerunning “r10k deploy environment -pv”

Fingers crossed that this will work.

Problem 6: Bad syntax, I guess?

There were 100 little syntax issues with the Puppetfile. While I fixed most, this one was not resolvable:

# r10k deploy environment -pv
INFO -> Deploying environment /etc/puppetlabs/code/environments/master
INFO -> Environment master is now at 2481f9677469711705bcdb20dd9f0260466b955d
ERROR -> Failed to evaluate /etc/puppetlabs/code/environments/master/Puppetfile
Original exception:
wrong number of arguments (3 for 1..2)
INFO -> Deploying environment /etc/puppetlabs/code/environments/production
INFO -> Environment production is now at a6a7d5eda88334b0293d8534de81191a1375cf06
ERROR -> Failed to evaluate /etc/puppetlabs/code/environments/production/Puppetfile
Original exception:
wrong number of arguments (3 for 1..2)

Problem 7: The control repo changed!

Between originally checking this out 3 weeks ago and now, they have gutted and rebuilt the example I was using. The rationale makes total sense (it was over-opinionated previously), but now the new version is incomplete, so I’m left twisting in the wind.

I have a call with our puppet reps scheduled shortly and will pick up there.

Warren Myers : wordpress plugins i use

April 11, 2016 10:46 PM

As promised last time, I now have a page dedicated to the WordPress plugins I use.

Check it out here.

Warren Myers : use prettypress if you’re running a wordpress blog

April 04, 2016 09:12 AM

Like my list of used Chrome Extensions, I’m building a list of recommended WordPress plugins.

But until I get it done, I have to give some pretty big props to PrettyPress. It’s a plugin that lets you edit in Visual, Text, and Markdown – the markup format of sites like reddit, GitHub, GitLab, and the Stack Exchange family.

Warren Myers : a couple months late – but my prediction was pretty close

April 02, 2016 06:06 PM

Tesla’s Model 3 is debuting at $35,000.

That is distinctly within reach of most “normal” people.

It should be shipping sometime in 2017. I’d buy one.

Warren Myers : improve your entropy pool in linux

April 01, 2016 09:34 AM

A few years ago, I ran into a known issue with one of the products I use that manifests when the Red Hat Linux server it’s running on has a low entropy pool. And, as highlighted in that question, the steps I found 5 years ago didn’t work for me. (It turns out modifying the t parameter from ‘1’ to ‘.1’ did work – rngd -r /dev/urandom -o /dev/random -f -t .1 – but I digress; the ‘t’ option is no longer correct in CentOS 7 anyway.)

In playing around with the Mozilla-provided SSL configurator, I noticed a line in the example SSL config that referenced “truerand”. After a little Googling, I found an open-source implementation called “twuewand”.

And a little more Googling about adding entropy turned up this interesting tutorial from Digital Ocean for “haveged” (which, interestingly enough, allowed me to answer a 6-month-old question on Server Fault about CloudLinux).

Haveged “is an attempt to provide an easy-to-use, unpredictable random number generator based upon an adaptation of the HAVEGE algorithm. Haveged was created to remedy low-entropy conditions in the Linux random device that can occur under some workloads, especially on headless servers.”

And twuewand “is software that creates hardware-generated random data. It accomplishes this by exploiting the fact that the CPU clock and the RTC (real-time clock) are physically separate, and that time and work are not linked.”

For workloads that require lots of entropy (generating SSL keys, SSH keys, PGP keys, and pretty much anything else that wants lots of random (or strong pseudorandom) seeding), the very real problem of running out of entropy (especially on headless boxes or virtual machines) is something you can face quite easily / frequently.

Enter solutions like OpenRNG which are hardware entropy generators (that one is a USB dongle (see also this skh-tec post)). Those are awesome – unless you’re running in cloud space somewhere, or even just a “traditional” virtual machine.

One of the funny things about getting “random” data is that it’s actually very very hard to get. It’s easy to describe, but generating “truly” random data is incredibly difficult. (If you want to have an aneurysm (or you’re like me and think this stuff is unendingly fascinating), go read the Wikipedia entry on “Cryptographically Secure Pseudo Random Number Generator“.)

If you’re in a situation, though, like I was (and still am), where you need to maintain a relatively high quantity of fairly decent entropy (probably close to CSPRNG level), use haveged. And run twuewand occasionally – at the very least when starting Apache (at least if you’re running HTTPS – which you should be, since it’s so easy now).
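A quick way to see whether you're in that situation is to watch the kernel's entropy counter. This check is Linux-specific, and the 1000-bit threshold below is a common rule of thumb of mine, not a hard limit:

```shell
#!/bin/sh
# Report the kernel's available entropy (in bits) and whether it looks low.
avail=$(cat /proc/sys/kernel/random/entropy_avail)
if [ "$avail" -lt 1000 ]; then
  echo "low entropy ($avail bits) - consider running haveged"
else
  echo "entropy looks ok ($avail bits)"
fi
```

Run it before and after starting haveged (or while hammering the box with key generation) and you can watch the pool drain and refill.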

Mark Turner : How one programmer broke the internet by deleting a tiny piece of code – Quartz

March 30, 2016 10:50 PM


This is a fascinating story of how one programmer’s deletion of 11 lines of his code wound up breaking the Internet. Yes, we are really that interconnected.

A man in Oakland, California, disrupted web development around the world last week by deleting 11 lines of code.

The story of how 28-year-old Azer Koçulu briefly broke the internet shows how writing software for the web has become dependent on a patchwork of code that itself relies on the benevolence of fellow programmers. When that system breaks down, as it did last week, the consequences can be vast and unpredictable.

Source: How one programmer broke the internet by deleting a tiny piece of code – Quartz

Tarus Balog : OpenNMS is Sweet Sixteen

March 30, 2016 03:15 PM

It was sixteen years ago today that the first code for OpenNMS was published on Sourceforge. While the project was started in the summer of 1999, no one seems to remember the exact date, so we use March 30th to mark the birthday of the OpenNMS project.

OpenNMS Project Details

While I’ve been closely associated with OpenNMS for a very long time, I didn’t start it. It was started by Steve Giles, Luke Rindfuss and Brian Weaver. They were soon joined by Shane O’Donnell, and while none of them are associated with the project today, they are the reason it exists.

Their company was called Oculan, and I joined them in 2001. They built management appliances marketed as “purple boxes” based on OpenNMS and I was brought on to build a business around just the OpenNMS piece of the solution.

As far as I know, this is the only surviving picture of most of the original team, taken at the OpenNMS 1.0 Release party:

OpenNMS 1.0 Release Team

In 2002 Oculan decided to close source all future work on their product, thus ending their involvement with OpenNMS. I saw the potential, so I talked with Steve Giles and soon left the company to become the OpenNMS project maintainer. When it comes to writing code I am very poorly suited to the job, but my one true talent is getting great people to work with me, and judging by the quality of people involved in OpenNMS, it is almost a superpower.

I worked out of my house and helped maintain the community mainly through the #opennms IRC channel on freenode, and surprisingly the project managed not only to survive, but to grow. When I found out that Steve Giles was leaving Oculan, I applied to be their new CEO, which I’ve been told was the source of a lot of humor among the executives. The man they hired had a track record of snuffing out all potential from a number of startups, but he had the proper credentials that VCs seem to like so he got the job. I have to admit to a bit of schadenfreude when Oculan closed its doors in 2004.

But on a good note, if you look at the two guys in the above picture right next to the cake, Seth Leger and Ben Reed, they still work for OpenNMS today. We’re still here. In fact we have the greatest team I’ve ever worked with in my life, and the OpenNMS project has grown tremendously in the last 18 months. This July we’ll have our eleventh (!) annual developers conference, Dev-Jam, which will bring together people dedicated to OpenNMS, both old and new, for a week of hacking and camaraderie.

Our goal is nothing short of making OpenNMS the de facto management platform of choice for everyone, and while we still have a long way to go, we keep getting closer. My heartfelt thanks go out to everyone who made OpenNMS possible, and I look forward to writing many more of these notes in the future.

Warren Myers : can you disable encryption on a windows server?

March 30, 2016 09:40 AM

This was asked recently on Server Fault.

I’m asking if there’s a way to prevent files from being encrypted. I’m referring to some extent to ransomware, but specifically I want the following scenario:

  • Windows File server w/ shares (on the E: drive)

I want a way to tell the above server “don’t allow files on the E: drive to ever be encrypted by anyone or any software/process.”

And, of course, the answer to this question is “no”, as I and others said:

No, you cannot prevent files from being encrypted. How is the OS supposed to know if a file is encrypted vs being of some format it doesn’t “know” about?

You can disable OS-level encryption, and perhaps some programs from running via GPO, but that cannot stop every program, nor users uploading already encrypted files.

What you want to do is ensure users are only putting files where they are supposed to – and nowhere else.

But more interesting is why you would even ask something like this: is it because you really only want “plaintext” files on the share? (Even when the “plaintext” is a binary format, like an EXE, PNG, etc.?) I suppose there could be “value” in disallowing even the concept of encrypted files – but since encrypted files look like any other files (albeit ones that cannot be meaningfully opened), there is no reliable way to detect them.

And I think this really betrays an exceptionally-poor understanding of what encryption is – and what it is not. Encryption is meant to protect (or hide) specific content (the “specific content” might be the entirety of your phone or hard drive, or an email, or a trade secret, etc) from eyes that shouldn’t be allowed to see it. Yes, there is ransomware that will encrypt or obfuscate files or file systems and demand payment for decryption – but attempting to solve for that corner case by disallowing even the concept of encrypted data is highly misguided: the way to prevent/mitigate ransomware is a combination of good system management practices, solid IDS and IDP software/appliances, sane anti-virus policies, and general good user behavior. (And, maybe, by using OSes less targeted by ransomware authors.)

Warren Myers : how to turn a google+ community into a quasi “mailing list”

March 22, 2016 03:53 PM

Spurred by a recent question from an acquaintance in town, I asked on Google+ whether or not you can enable emailed notifications for a Community. This led to the elaborate Settings page for G+.

It turns out that if you combine enabling a Community’s “Community notifications” option (found under the specific Community’s settings, which you reach by clicking the vertical-ellipsis button on the Community page) with the following path in your general Google+ settings – Notifications -> Email -> Communities -> “Shares something with a community you get notifications from” – you get a “mailing list” of sorts from your Community, which, niftily enough, also allows you to comment on a post via email (at least from the first notification of said post)!

Mark Turner : Reliable Sources under new ownership

March 21, 2016 12:57 AM

ReliableSources.com’s transfer to CNN is now complete, as the screenshot below shows. Sniff.

[Screenshot: ReliableSources.com redirecting to CNN, March 20, 2016]

Mark Turner : Why Bernie Sanders Is Adopting a Nordic-Style Approach – The Atlantic

March 21, 2016 12:51 AM

Good article taking issue with those who say Bernie Sanders’s healthcare and college proposals won’t work here like they do in Nordic countries.

Bernie Sanders is hanging on, still pushing his vision of a Nordic-like socialist utopia for America, and his supporters love him for it. Hillary Clinton, meanwhile, is chalking up victories by sounding more sensible. “We are not Denmark,” she said in the first Democratic debate, pointing instead to America’s strengths as a land of freedom for entrepreneurs and businesses. Commentators repeat endlessly the mantra that Sanders’s Nordic-style policies might sound nice, but they’d never work in the U.S. The upshot is that Sanders, and his supporters, are being treated a bit like children—good-hearted, but hopelessly naive. That’s probably how Nordic people seem to many Americans, too.

Source: Why Bernie Sanders Is Adopting a Nordic-Style Approach – The Atlantic

Tanner Lovelace : St Paddy’s Day 8K and Kilt Run Race Report

March 19, 2016 10:47 PM

I’ve been wanting to do this race for a while but I wasn’t quite sure about the kilt thing. They came out with a very cool medal this year, though, and that pushed me over the edge. I went ahead and got a kilt and came out for this. But, since it was the day before my early season half marathon, I decided to take it very, very easy.

We did the kilt run first and unfortunately didn’t have enough people to break the Guinness World Record. Oh well, it was a nice little warmup.

We then lined up for the 8K and I really tried to keep it slow. I’ve been doing most of my runs this year between 9 and 10 min/mi. For this one I ended up at 11:17 min/mi so I think I did fairly well. Raleigh is very hilly so that was a factor too. But I finished in 56:09, spent some time at the expo afterward, signed up for the Octoberfest 8K (the virtual version since I’m busy the day of the actual race) and then headed on to Wilmington to get ready for my half marathon the very next day.

Full race results can be seen at Athlinks: https://www.athlinks.com/Athletes/180752944/Race/251123395/Details