Mark Turner : Vets study links PB pills, genetic variations to Gulf War illness | TribLIVE

February 01, 2015 03:11 AM

A government-issued pill intended to protect troops from nerve agents may have made some troops more vulnerable to a chronic condition marked by headaches, cognitive problems, pain and fatigue, researchers say.

People with certain genetic variations were 40 times more likely to contract Gulf War illness if they took pyridostigmine bromide, or PB, pills that the Defense Department issued to protect them from soman, a nerve agent, during the 1990-91 war, researchers concluded in a study funded by the U.S. Army Medical Research and Materiel Command and published this month in the journal Environmental Health.

via Vets study links PB pills, genetic variations to Gulf War illness | TribLIVE.

Mark Turner : Yes, Walking Through A Doorway Really Does Make You Forget — Here’s Why – Forbes

January 31, 2015 09:31 PM

I forgot to post this earlier.

More often than I care to admit, I’ll walk from one room to another with a clear vision in mind of whatever I need to do once I get there, but then I get there and can’t remember why I started. The only thing that happened between my first movement and my last is that I walked through a doorway. Surely that has nothing at all to do with forgetting something I knew just moments before, right?

Wrong, says new research. As it turns out, walking through a doorway exerts an imperceptible influence on memory. In fact, merely imagining walking through a doorway can zap memory.

via Yes, Walking Through A Doorway Really Does Make You Forget — Here's Why – Forbes.

Mark Turner : Is It Time To Kill The K-Cup, Before It Kills Our Planet?

January 31, 2015 06:50 PM

We have these coffee machines at work and they sure do generate a lot of trash for the amount of coffee they produce.

Your Keurig coffee pods have a dirty little secret. Actually, make that a big secret.

In 2013, Keurig Green Mountain produced 8.3 billion K-Cups that were brewed on millions of machines around the world — enough to circle the globe 10.5 times. Last year, production rose to nearly 9.8 billion, and most of those pods are not recyclable.

A new video made by Canadian production company Egg Studios takes a look at the environmental impact our coffee addiction has created. Titled "Kill The K-Cup," the short showcases a dystopian future where a single-use coffee pod monster destroys everything in its path.

via Is It Time To Kill The K-Cup, Before It Kills Our Planet?.

Tarus Balog : SCaLE 13x – February 2015

January 30, 2015 03:20 PM

We are three weeks away from the Southern California Linux Expo and I am getting really excited about it.

For those of you who are into OpenNMS, tune in that day, because we are making a pretty significant announcement at the show. Be sure to come by the booth on the expo floor and say “hi” to the team. Both Jeff and I will be speaking (although at least during my talk you probably have better things to go see. For example, have you met our Lord and Savior, Docker?)

We are also incredibly excited that MC Frontalot will be performing. I’m not sure of the exact details but I believe it will be Saturday night.

(Note: I stole that picture from here since I like the fact that he has hair in it, well for certain values of “hair”, and note that link may not be safe for work [nudity])

If you are unfamiliar with his work, be sure to check out his YooToob Channel, and if you are so inclined I strongly recommend reading this well-written bit (on Jezebel, no less) concerning an issue surrounding a Penny Arcade comic a few years ago that really showcases the type of guy he is. Again, it might not be safe for work (language). Be sure to click on the link to the original post for more detail.

If you are still on the fence about SCaLE, perhaps this little nugget will sway you: use Promo Code “ONMS” and get 40% off show registration. It’s cheap at twice the price and one of my favorite events of any year, but we want it to be extra special for 2015.

Jesse Morgan : Geomorph Theory

January 29, 2015 06:43 PM

*Random Crazy Person Thought of the Day: Ultra-specialized Geomorphs and Naming Conventions*

Standard Geomorph

A geomorph has 4 sides and connects on all four sides via two entryways or “ports” per side. It looks a little bit like an octothorpe/hash with the center filled in (#).


Base2 Geomorph Sets

While Standard Geomorph tiles are cool, there's no way to close the system. To do this, you need to introduce the concept of a side being open (has two connecting ports) or closed (has no connecting ports).

Since there are only two options per side, we can represent each side with a binary digit: 0 for closed, 1 for open. By using binary, we can now represent our tile as a four-digit binary number. A four-digit binary number has 16 possible states (0000, 0001, 0010, 0011, 0100, 0101, 0110, 0111, etc.). If we allow for rotation, we can reduce the total number of unique tiles needed, i.e. 1000, 0100, 0010 and 0001 can all be represented with the same one-sided geomorph by turning it.

With the addition of rotation, we can reduce our 16 down to 6 unique configurations:

0000= Sealed Geomorph
0001= One-sided Cave Geomorph
0011= Two-sided Corner Geomorph
0101= Two-sided Tunnel Geomorph
0111= Three-sided Edge Geomorph
1111= Four-sided Standard Geomorph

[Images: the six unique base2 tile configurations]

Suppose we wanted to store hundreds of tiles with these configurations. How would we store them? The most logical way is to create directories based on their configuration, named after the binary numbers above. If you needed an eastern wall, you could translate it to 1011, which is simply a three-sided Edge geomorph with a 90 degree clockwise rotation. You could then snag a random three-sided edge tile from 0111/ and simply rotate it.
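Here's a minimal Python sketch of that translation, assuming the four digits are read clockwise (north, east, south, west) and the canonical form is simply the lowest-valued rotation:

# Minimal sketch: reduce a 4-digit binary side configuration to its
# canonical rotation, so tiles only need to be stored under one name.
def canonicalize(config):
    """Return (canonical form, how many rotation steps away it is)."""
    rotations = [config[i:] + config[:i] for i in range(4)]
    canonical = min(rotations)
    return canonical, rotations.index(canonical)

# An eastern wall (1011) is the three-sided edge tile (0111), rotated:
print(canonicalize("1011"))  # ('0111', 1)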

Base4 Geomorph Set

While this is neat, you can take it a step further with segmented geomorphs, which track the state of individual left and right ports:

 

00= both closed
01= first open
10= second open
11= both open

The addition of these two new states forces us to use 2 bits per side, or 8 bits total, to represent a tile.

[Image: tile with its eight port positions labeled]

This means there are 256 different configurations for tiles. This can be reduced not only by rotation, but by flipping:

10 00 00 00 = Top left open
01 00 00 00 = Top right open (the above tile, flipped on its Y axis)

(Also note, flipping along the X and Y axis has the same effect as rotating 180 degrees.)

00 10 00 00 = right top open
00 01 00 00 = right bottom open (flipped along X axis)
00 00 00 01 = left top open (flipped along Y axis)
00 00 00 10 = left bottom open (rotated 180 degrees)
00 00 00 10 = left bottom open (flipped along X and Y axis)

 

By the time you add in flipping and rotating, we end up with significantly fewer than 256 tiles. How many? I have no idea; the math is beyond me right now without drawing them all out (though a quick brute-force sketch after the list below gives a number). What I can say is that we can represent them with Base4 notation:

0= both closed
1= first open
2= second open
3= both open
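For what it's worth, the count is small enough to brute-force. Here's a quick Python sketch, assuming the eight port bits are read clockwise around the tile (so a 90 degree rotation shifts the string by two bits, and a mirror flip reverses the reading order); under those assumptions it reports 43 distinct tiles:

# Count 8-port tiles that are distinct up to rotations and mirror flips.
from itertools import product

def variants(bits):
    forms = set()
    for s in (bits, bits[::-1]):      # original and mirror image
        for r in range(0, 8, 2):      # the four 90 degree rotations
            forms.add(s[r:] + s[:r])
    return forms

canonical = {min(variants(bits)) for bits in product((0, 1), repeat=8)}
print(len(canonical))  # 43, under the port-ordering assumption above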

[Images: example base4 tile configurations]

This allows us to represent every tile category with only 4 digits. Looking at what we’ve represented previously:

0000= Sealed Geomorph
0003= One-sided Cave Geomorph
0033= Two-sided Corner Geomorph
0303= Two-sided Tunnel Geomorph
0333= Three-sided Edge Geomorph
3333= Four-sided Standard Geomorph

But we could also represent things like:

  • a pinwheel configuration: 1111
  • a crooked fork in the road: 0302
  • a narrow corridor: 0102

[Images: pinwheel, fork and corridor tiles]

Among others.

Base8 Geomorphs

Let's take it another step: let's say that the solid center part between the two ports is changeable, essentially giving us 3 ports per side; three binary positions give us a total of 8 combinations per side.

000 = all closed
001 = right open
010 = center open
011 = center and right open
100 = left open
101 = left and right open  (standard geomorph)
110 = center and left open
111 = all three open

With 3 bits per side, that gives us a total of 12 bits to represent a geomorph; if I remember my Base2 properly, that's 4096 possible configurations (again, far fewer with rotations and flips). We could still represent our standard configurations with only 4 digits if we use octal:

0000= Sealed Geomorph
0005= One-sided Cave Geomorph
0055= Two-sided corner Geomorph
0505= Two-sided Tunnel Geomorph
0555= Three-sided Edge Geomorph
5555= Four-sided Standard Geomorph
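Since each side's three port bits map straight to one octal digit, the encoding is trivial to check in Python:

# Unpack the octal code for the three-sided edge tile back into port bits.
sides = [format(int(digit, 8), "03b") for digit in "0555"]
print(sides)  # ['000', '101', '101', '101']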

In addition we could create neat things like plusses, crosses, Y’s, trees, etc.

[Images: one-sided, elbow and other base8 tile examples]

[Images: tree, fat ladder and ladder tiles]

Base32 Geomorphs

If we wanted to take this one last insane step further, we could introduce the idea of ultra-specialized geomorphs, where the 2 solid edges of each side are turned into ports as well. This means there are 5 binary areas (open or closed) per side, which translates to 32 configurations per side, meaning we can use base32 to encode each of the four sides with a simple four-letter code.

Under this scheme, you could represent a “regular” geomorph side with the binary representation, i.e. 01010, which is 10 in decimal and A in base32. This means a regular geomorph tile would be encoded as AAAA.

0000=sealed Geomorph
A000= One-sided Geomorph
AA00= Two-sided Corner Geomorph
A0A0= Two-sided Tunnel Geomorph
AAA0= Three-sided Edge Geomorph
AAAA= Four-sided Standard Geomorph
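A small sketch of that encoding, assuming the usual 0-9 then A-V digit set for base32 (which is what makes binary 01010 come out as “A”):

# Encode a tile's four sides (five port bits each) as a four-letter
# base32 code, one digit per side.
DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUV"

def encode_tile(sides):
    return "".join(DIGITS[int(bits, 2)] for bits in sides)

print(encode_tile(["01010"] * 4))  # AAAA, the standard geomorph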

So, the final tally? Five binary ports on 4 sides is 20 bits of data per tile; that's over a million different variations. My brain hurts now.

Until I sat down and did the math, I thought 5-bit-sided geomorphs were doable. Now I see how wrong I was.

 

Mark Turner : N&O runs horrible broadband op-ed

January 29, 2015 05:57 PM

The Google Fiber op-ed that ran in today’s N&O entitled “Google Fiber deal not in best interest of NC public” is so godawful that I don’t even know where to begin. Written by Dawson Gage, who calls himself an “IT worker, freelance writer, and aspiring law student,” it is incredibly misinformed on so many levels:

I rejoiced when my family first got broadband Internet when I was about 13, but I doubt it has made any of our lives richer or more productive. The usefulness of computers, for the most part, has little enough to do with how fast they are. No one wants delivery vans and school buses that go 20,000 mph.

Is Gage actually suggesting that life isn't richer than in the days of dialup? Before YouTube, Netflix, Wikipedia, Facebook, and Google? Apparently, having a mind-blowing amount of the world's information instantly available isn't enough to make his life richer or more productive. I bet he's a big fan of the abacus.

In light of this, a massive dose of skepticism is appropriate. The upshot of the Google deal is that an enormously valuable piece of public infrastructure, which ought to be owned in common by the public, is handed over lock, stock and barrel to a private company based in California.

Do what, now? There is no “public infrastructure” being handed over to Google or anyone else. Google is getting nothing for free here. It’s paying its franchise fees, permits, taxes, and other costs just like any other company. I have no idea what Gage means here.

This same company was deeply involved in the illegal, secret surveillance of all our Internet usage by the NSA.

Well, no. Google reached out to NSA for help when the company found it had been hacked by the Chinese government. Soon afterward, when Edward Snowden’s documents revealed NSA was tapping directly into Google’s unsecured internal networks, the company angrily protested and immediately set about encrypting all of its links. This was the subject of an extensive story in June in the New York Times. What Gage wrote is patently false.

Its entire business model is founded on the premise that Google has the right to meticulously monitor and record every morsel of data that passes within its reach.

Google’s business model is to make money. They do this very well with advertising but it’s not all Google does. Part of Google’s mission seems to be pushing technological boundaries. This results in innovations like Google Earth, Google Maps, Google Voice, and Google Fiber. Sometimes these ideas don’t pan out (like Google Glass), but not everything they do is to support their advertising business.

Moreover, the law passed by the General Assembly to make public municipal Internet services illegal (save for that of Wilson, which was grandfathered in) is itself testament to the fact that public alternatives are feasible and sustainable. Indeed, at the time of that bill’s passage, the town of Chapel Hill was already laying its own high-speed fiber, which now presumably will be annexed by Google.

Well, no, again. See above. Municipally-owned networks will stay municipally-owned, and the same law Gage cites prevents cities from letting commercial entities use their networks even if they wanted to.

At the time of the law’s passage in 2011, its proponents argued that municipal or other government involvement in providing Internet service was “an interference in the free market.” Last time I checked, lobbying the government to outlaw an entire sector of potential competition was not much of a “free-market” approach. What erstwhile advocates of “free market” principles in the realm of infrastructure actually believe in is a doctrine of private ownership as an unchallenged system.

Why not simply contract Google – or even better, some of the many competent North Carolina businesses – to build a high-speed fiber network, which would then be owned by the public? Would any of us wish to drive on privately owned toll roads? Those who stand to benefit and, yes, profit from such ventures as the Google plan would prefer we did not ask such questions.

These passages echo the broadband op-ed I wrote back in 2011. Nothing new here.

And do we not imagine that Google views owning our Internet infrastructure as a fantastic bonanza of the data on which it feeds? Google Fiber is a business venture, not an act of philanthropy.

Yes, it’s a business, and Gage implying Google is only interested in monitoring its Google Fiber customers is not only unsupported by any evidence but goes a little into the tinfoil-hat arena.

Appeals to the virtues of the market are hollow in the cases of government-anointed monopolies like Google or, for that matter, Duke Energy. In the era of Gov. Pat McCrory, I understand that many of those in power see the private ownership of public infrastructure as a beau-ideal, part of the natural order.

Again, Google wasn’t awarded any monopoly here. Nor, strictly speaking, was Time Warner Cable (as much as I hate to admit it). The cities that succeeded in attracting Google did so by working through a checklist of requirements Google needed to determine whether a deployment was possible. This was what spurred on the N.C. Next Generation Network (NCNGN) effort: to figure out how to streamline these requirements. AT&T was the first company to agree to the NCNGN terms and has started rolling out its own fiber deployment. There is nothing preventing Google from also agreeing to the NCNGN terms and providing Google Fiber under this agreement. Google, however, has preferred to do things its own way in its previous deployments and I’m betting its Triangle deployments will be similar.

And for the last time, there is no “private ownership of public infrastructure” going on here.

Gage might want to check his facts before penning another op-ed, and maybe the N&O should pay a little more attention before it runs one like this.

Mark Turner : Google Fiber: fast download AND upload speeds

January 29, 2015 02:37 PM

Most of the local news stories I’ve read about Google Fiber coming to Raleigh highlight the ability to “download YouTube videos quickly.” Quickly downloading the stuff you’ve always downloaded is cool, but it isn’t an Earth-shattering use case. The real value of Google Fiber is that Google treats the Internet the way it should be treated – like a two-way street.

Other broadband providers will sell you fast connections, but those connections are strictly asymmetrical. You may get a 15 Mbps download speed, but you'll only get a 1 Mbps upload speed. You see, Big Telecom wants to treat you as a “consumer,” meaning you'll take whatever the media companies choose to give you. They don't think of you as having anything to bring to the conversation.

Google Fiber is different. Not only can you get 1 Gbps download speeds, you also get equally fast 1 Gbps upload speeds! Your download and upload speeds are equal, exactly how God intended. You become a full partner in the Internet, not only able to download a multitude of cat videos from YouTube at blazing speeds but also able to offer up your own. Or you can hold videoconferences with your friends without being interrupted by buffering. Or play video games with others without sluggishness.

When last year’s dreary, snow-filled winter kept everyone home, I had the hankering to hold a guitar-picking session with some of the musicians in the neighborhood. I thought it would be cool to do this over the Internet, but such coordination could never happen with traditional, compression-filled video solutions because the timing would always be off. With a fat pipe like Google Fiber on either end, a jam session could be held with neither side missing a beat. This would ordinarily only be possible with expensive, time-locked (and … well, ancient) technology like T1 or ISDN lines.

Couple this with the impromptu jam sessions we’ve seen around town during Raleigh’s new showcase event, the International Bluegrass Festival, and I predict you’ll soon see some really cool musical collaborations that wouldn’t ordinarily be possible. I would love to see roving teams of broadband broadcasters out beaming street music into the homes of viewers right as it happens.

The beauty of Google Fiber is that it enables everyone to contribute to the Internet. So, rather than thinking in terms of fast video downloads, imagine what fast upload speeds now make possible.

Tarus Balog : Twitter

January 29, 2015 11:23 AM

After a long absence, I thought I’d let my three readers know I’m back on Twitter (as @sortova). Don’t expect much from me since I can’t even say “I’m back on Twitter” in less than 140 characters, and I tend to echo the sentiments of John Cleese on the subject, but it should allow those of you with nothing better to do more things with which to do nothing.

Mark Turner : An Introduction to Google Fiber

January 29, 2015 02:22 AM


One of the most useful things I got out of yesterday’s Google Fiber press conference (well, aside from a sweet Google Fiber water bottle) is an insightful booklet called “An Introduction To Google Fiber.” It basically spells out what the next steps are for the Google Fiber rollout.

Of particular interest is the question of “how do I get Google Fiber in my neighborhood?” Google’s answer?

Our approach is to build where people want us.

Fiber optic cable will travel into your neighborhood into boxes called telecom cabinets. One of these cabinets can serve you and a few hundred of your neighbors with Fiber — we call this grouping your “fiberhood”.

That’s where you come in. For us to bring Google Fiber to you — i.e. for us to light up your local telecom cabinet with working Google Fiber service and then for us to bring that service right down the street and up to your house — you and your neighbors first need to tell us you want us. Each fiberhood will have a sign-up goal that you can see on our website by entering your address — and the process is transparent, so you and your neighbors can see how close your fiberhood is to the goal.

After you and your neighbors reach your goal, we’ll be able to bring fiber the last mile (or so) from the cabinet to your home.

Wondering why we do it this way? It’s because we focus our energy on a handful of fiberhoods at once, doing an all-out installation and construction blitz. We do this so we can provide you with better, faster service; we won’t make you wait around for a crew that’s stuck across town. After we’re done in one fiberhood, we’ll move on to the next.

Already, word of the Google Fiber signup page has lit up neighborhood email lists, Facebook pages, and NextDoor pages all across town. Geeks in Cary have organized a MeetUp to engage their neighbors in the signup strategy. Now that Raleigh has worked its way towards achieving Gig City status, it’s amusing to me to see neighborhoods vying amongst themselves to be the first “fiberhoods.”

I spoke with Erik Garr, Head of Google Fiber Raleigh/Durham, at tonight’s reception. He insisted that the Google Fiber rollout would not simply target Raleigh’s wealthiest neighborhoods first. Instead, Google will include neighborhoods of all economic means. Mr. Garr emphasized that Google would be making good use of its free service. This approach makes me happy as it will mean Google Fiber’s presence will help bridge the “digital divide” rather than increase it (exponentially).

I highly encourage you to read the rest of the nuggets contained in Google’s booklet, downloadable from the City of Raleigh’s website.

Happy Fiber Hunting!

Tarus Balog : Welcome to OpenNMS 15

January 28, 2015 09:31 PM

Today OpenNMS 15 was released. It was a year and a half between the release of OpenNMS 1.12 and OpenNMS 14, but only three months between OpenNMS 14 and OpenNMS 15.

As we move forward this year we are trying to adhere more to the open source mantra of “release early, release often”, and thus the new major release. There have been 1177 new commits since 14.0.3.

You’ll also notice that this version of OpenNMS has a new name – Horizon. We’ve always thought that OpenNMS represents the best network management platform available, and the name is meant to reflect that. We hope to make as many improvements as we can, as fast as we can, without sacrificing quality, thus keeping OpenNMS out on the “horizon” ahead of the competition.

The main improvement for the 15 release is in the webUI. Although you might not notice it at first, we’ve spent months migrating the whole interface to a technology called Bootstrap. The Bootstrap framework allows us to create a responsive UI that should look fine on a computer, a tablet or a phone. This should allow us a lot more freedom to modify the style sheet, and we hope to be able to add “skinnable” theme options soon.

A cool feature that can be found in this new UI is the ability to automatically resize resource graphs. If you have a particular set of resource graphs displayed:

and then you shrink the window, you’ll note that the menu turns into a dropdown and the graphs themselves now fit the narrower window:

There are a number of bug fixes and other new features, and a complete list can be found at the bottom of this post or in our Jira instance (but for some reason you have to be logged in to see it). I am happy to say that there was no need for major security fixes in this release. (grin)

Sub-task

  • [NMS-6642] – CiscoPingMibMonitor
  • [NMS-6674] – NetScalerGroupHealthMonitor
  • [NMS-7060] – merge DocuMerge branch into develop branch
  • [NMS-7086] – alter documentation deploy step in bamboo to match the new structure
  • [NMS-7164] – Fix fortinet event typos (fortinet vs fortimail)
  • [NMS-7238] – Fix UEI names for CitrixNetScaler trap events
  • [NMS-7264] – Document CORS Support

Bug

  • [NMS-1956] – Missing localised time in web pages
  • [NMS-2358] – Time to load Path Outages page grows with each entry added
  • [NMS-2580] – Null/blank sysName value causes null/blank node label
  • [NMS-3033] – Create a HibernateEventWriter to replace JdbcEventWriter
  • [NMS-3207] – Able to get to non authorised devices via path outages link.
  • [NMS-3615] – Custom Resource Performance Reports not available
  • [NMS-3847] – jdbcEventWriter: Failed to convert time to Timestamp
  • [NMS-4009] – wrong content type in rss.jsp
  • [NMS-4246] – Paging arrows invisible with firefox on mac
  • [NMS-4493] – Notification WebUI has issues
  • [NMS-4528] – Time format on Event webpage is different that on Notices webpage
  • [NMS-5057] – Installer database upgrade script (install -d) scans every RRD directory, bombs with "too many open files"
  • [NMS-5427] – RSS feeds are not valid
  • [NMS-5618] – notifications list breadcrumbs differs from notifications index page
  • [NMS-5858] – Resource Graphs No Longer Centered
  • [NMS-6022] – Vaadin Header not consistent with JSP Header
  • [NMS-6042] – Empty Notification search bug
  • [NMS-6472] – Map Menu is not listing all maps
  • [NMS-6529] – Web UI shows not the correct Java version
  • [NMS-6613] – Problems installing "Testing" on Ubuntu 14.04
  • [NMS-6826] – Queued Ops Pending default graph needs rename
  • [NMS-6827] – Many graph definitions in snmp-graph.properties have line continuation slashes
  • [NMS-6894] – New Focal Point Topology UI (STUI-2) very slow
  • [NMS-6917] – Node page availability graph isn't "(last 24 hours)"
  • [NMS-6924] – WMI collector does not support persistence selectors
  • [NMS-6956] – test failure: org.opennms.mock.snmp.LLDPMibTest
  • [NMS-6958] – Requisition list very slow to display
  • [NMS-6967] – GeoMap polygons activation doesn't accurately reflect cursor location
  • [NMS-7015] – Navbar in Distributed Map is missing
  • [NMS-7059] – Local interface not displayed correctly in "Cdp Cache Table Links"
  • [NMS-7075] – xss in device snmp settings
  • [NMS-7112] – provision.pl just works if the admin user credentials are used
  • [NMS-7115] – Message Error in DnsMonitor
  • [NMS-7120] – Unable to add graph to KSC report
  • [NMS-7126] – ReST call for outages ends up with 500 status
  • [NMS-7144] – OpenNMS logo doesn't point to the same file
  • [NMS-7149] – footer rendering is weird in opennms docs
  • [NMS-7170] – Add a unit test for NodeLabel.computeLabel()
  • [NMS-7176] – ie9 does not display any 'interfaces' on a switch node – the tabs are blank
  • [NMS-7185] – NullPointerException When Querying offset in ReST Events Endpoint
  • [NMS-7246] – OpenNMS does not eat yellow runts
  • [NMS-7270] – HTTP 500 errors in WebUI after upgrade to 14.0.2
  • [NMS-7277] – WMI changed naming format for wmiLogicalDisk and wmiPhysicalDisk device
  • [NMS-7279] – Enable WMI Opennms Cent OS box
  • [NMS-7287] – Non provisioned switches with multiple VLANs generate an error
  • [NMS-7322] – SNMP configuration shows v1 as default and v2c is set.
  • [NMS-7330] – Include parts of a configuration doesn't work
  • [NMS-7331] – Outage timeline does not show all outages in timeframe
  • [NMS-7332] – Unnecessary and confusing DEBUG entry on poller.log
  • [NMS-7333] – Switches values retrieved incorrectly in the BSF notification strategy
  • [NMS-7335] – QueryManagerDaoImpl crashes in getNodeServices()
  • [NMS-7359] – Acknowledging alarms from the geo-map is not working
  • [NMS-7360] – Add/Edit notifications takes too much time
  • [NMS-7363] – Update Java in OpenNMS yum repos
  • [NMS-7367] – Octectstring not well stored in strings.properties file
  • [NMS-7368] – RrdDao.getLastFetchValue() throws an exception when using RRDtool
  • [NMS-7381] – Authentication defined in XML collector URLs cannot contain some reserved characters, even if escaped.
  • [NMS-7387] – The hardware inventory scanner doesn't recognize PhysicalClass::cpu(12) for entPhysicalClass
  • [NMS-7391] – Crash on path outage JSP after DAO upgrade

Enhancement

  • [NMS-1595] – header should always contain links for all sections
  • [NMS-2233] – No link back to node after manually unmanaging services
  • [NMS-2359] – Group path outages by critical node
  • [NMS-2582] – Search for nodes by sysObjectID in web UI
  • [NMS-2694] – Modify results JSP to render multiple columns
  • [NMS-5079] – Sort the Path Outages by Critical Path Node
  • [NMS-5085] – Default hrStorageUsed disk space relativeChange threshold only alerts on a sudden _increase of free space_, not a decrease of free space
  • [NMS-5133] – Add ability to search for nodes by SNMP values like Location and Contact
  • [NMS-5182] – Upgrade JasperReports 3.7.6 to most recent version
  • [NMS-5448] – Add link to a node's upstream critical path node in the dependent node's web page
  • [NMS-6508] – Event definitions: Fortinet
  • [NMS-6736] – ImapMonitor does not work with nginx
  • [NMS-7123] – Expose SNMP4J 2.x noGetBulk and allowSnmpV2cInV1 capabilities
  • [NMS-7157] – showNodes.jsp should show nodes in alphabetical order
  • [NMS-7166] – Backup Exec UEI contain "http://" in uei
  • [NMS-7205] – Rename link to configure the Ops Board in the Admin section.
  • [NMS-7206] – Remove "JMX Config Generator Web UI ALPHA" from stable
  • [NMS-7228] – Document that user must be in 'rest', 'provision' or 'admin' role for provision.pl to work
  • [NMS-7247] – Add collection of SNMP MIB2 UDP scalar stats
  • [NMS-7261] – CORS Support
  • [NMS-7278] – Improve the speed of the ReST API and Service Layer for the requisitions' repositories.
  • [NMS-7308] – Enforce selecting a single resource for Custom Resource Performance Reports
  • [NMS-7317] – Rearrange Node/Event/Alarm/Outage links on bootstrap UI
  • [NMS-7384] – Add configuration property for protobuf queue size
  • [NMS-7388] – IpInterfaceScan shouldDetect() method should check for empty string in addition to null string

Mark Turner : N&O Editors miss Hatem hypocrisy

January 28, 2015 06:29 PM

I was disappointed to read the N&O’s take in this editorial.

Greg Hatem is an acquaintance of mine. He’s done a tremendous job helping kick-start downtown Raleigh’s renaissance, investing when others would not. He’s earned some respect and should have his say.

On this issue, though, I must respectfully disagree with Greg. Downtown has continued to grow since those days when Empire Properties was the only game in town. Greg’s businesses have grown and thrived as well in this new, noisier downtown Raleigh. Heck, his businesses have contributed more than their share to the noise and revelry. For Greg Hatem to have played such a large role in popularizing downtown (and profited from it) and to now complain about its success seems a tad hypocritical, doesn’t it?

It mystifies me how the editors at the News & Observer failed to see this irony.

When someone heads a company with 40 buildings and 500 employees connected to downtown Raleigh, getting the Raleigh City Council’s attention is fairly easy.

And Greg Hatem – whose company owns the restaurants Sitti, Gravy, The Pit and the Raleigh and Morning Times, along with many other properties – has earned that attention. Hatem’s involvement with downtown Raleigh goes back to a time when it was by no means certain that the city would see the boom it has. Hatem took big chances and got big returns.

But he’s moving his family, which includes younger children, out of a Fayetteville Street apartment into the Oakwood neighborhood near downtown. Why? The noise and party aftermath have made downtown, he says, "unlivable." He doesn’t like the idea of his family waking up to the garbage and other remnants of the previous night’s revels.

via Lower the volume on Raleigh's boom | Editorials | NewsObserver.com.

Mark Turner : Wake Forest police address concerns about ‘stranger danger’ cases :: WRAL.com

January 28, 2015 06:07 PM

Wake Forest Police have expressed exasperation with citizens sharing information on Facebook about a recent spate of “stranger danger” incidents. The incidents involve men driving a silver or gray SUV and trying to lure kids into the vehicle.

It’s a very frightening situation and any parent’s worst nightmare. People are afraid, and rightfully so. They want answers, and if the police aren’t giving them, these folks will fill the void using social media outlets like Facebook and NextDoor.

I’ve seen how social media can help solve crimes. It works. Nothing helps police efforts like citizens working together. Instead of blaming social media for “hearsay,” Wake Forest PD should embrace it as a “force multiplier” to solve crimes. If there are rumors that should be quashed, they should go online and set the record straight. It’s a new world we live in, after all.

Leonard said the police department has received other reports on social media that investigators have looked into, noting that they have had to use resources to track down "inaccurate information and hearsay."

"If you see something that looks suspicious in your neighborhood, call the police department first rather than posting it on Facebook," Leonard said.

via Wake Forest police address concerns about 'stranger danger' cases :: WRAL.com.

Warren Myers : merging centos iso images

January 28, 2015 05:56 PM

Thanks to @Anon on Unix.SE for the pointer on how to do this. And to @Andy‘s comment on @mmckinst‘s answer for the warning about additional packages you may need.

As my three readers know, I run a CentOS mirror. One of the idiosyncrasies of CentOS, like its upstream RHEL, is that DVD ISOs aren’t always just one image – for example, the 6.6 x64 image comes on two ISOs. I suppose this has something to do with the “normal” or “simple” capacity of a DVD disc, but it’s annoying.

Enter the mkdvdiso.sh script (original found here) from Chris Kloiber & Phil Schaffner.

The process I used to combine these two ISOs into one is as follows:
yum install isomd5sum createrepo mkisofs
mkdvdiso.sh /full/path/to/original/isos /full/path/to/destination.iso

For posterity, and in case the CentOS wiki dies, below is the mkdvdiso.sh script:

#!/bin/bash

# by Chris Kloiber 
# Mods under CentOS by Phil Schaffner 

# A quick hack that will create a bootable DVD iso of a Red Hat Linux
# Distribution. Feed it either a directory containing the downloaded
# iso files of a distribution, or point it at a directory containing
# the "RedHat", "isolinux", and "images" directories.

# This version only works with "isolinux" based Red Hat Linux versions.

# Lots of disk space required to work, 3X the distribution size at least.

# GPL version 2 applies. No warranties, yadda, yadda. Have fun.

# Modified to add sanity checks and fix CentOS4 syntax errors

# TODO:
#   Add checks for available disk space on devices holding output and
#       temp files.
#   Add optional 3rd parameter to specify location of temp directory.
#   Create .discinfo if not present.

OS_VER=\
$((test -e /etc/fedora-release && rpm -qf /etc/fedora-release --qf "FC%{VERSION}") \
|| (test -e /etc/redhat-release && rpm -qf /etc/redhat-release --qf "EL%{VERSION}") \
|| echo OS_unknown)

case "$OS_VER" in
  EL[45]*|FC?)
        IMPLANT=/usr/lib/anaconda-runtime/implantisomd5
        if [ ! -f $IMPLANT ]; then
            echo "Error: $IMPLANT Not Found!"
            echo "Please install anaconda-runtime and try again."
            exit 1
        fi
        ;;
  EL6*|FC1?)
        IMPLANT=/usr/bin/implantisomd5
        if [ ! -f $IMPLANT ]; then
            echo "Error: $IMPLANT Not Found!"
            echo "Please install isomd5sum and try again."
            exit 1
        fi
        ;;
  OS_unknown)
        echo "Unknown OS."
        exit 1
        ;;
  *)
        echo "Fix this script for $OS_VER"
        exit 1
esac

if [ $# -lt 2 ]; then
        echo "Usage: `basename $0` source /destination/DVD.iso"
        echo ""
        echo "        The 'source' can be either a directory containing a single"
        echo "        set of isos, or an exploded tree like an ftp site."
        exit 1
fi

DVD_DIR=`dirname $2`
DVD_FILE=`basename $2`

echo "DVD directory is $DVD_DIR"
echo "ISO file is $DVD_FILE"

if [ "$DVD_DIR" = "." ]; then
    echo "Destinaton Directory $DVD_DIR does not exist"
    exit 1
else
    if [ ! -d "/$DVD_DIR" ]; then
        echo "Destinaton Directory $DVD_DIR must be an absolute path"
        exit 1
    else
        if [ "$DVD_FILE" = "" ] || [ -d "$DVD_DIR/$DVD_FILE" ]; then
            echo "Null ISO file name."
            exit 1
        fi
    fi
fi

which mkisofs >&/dev/null
if [ "$?" != 0 ]; then
    echo "mkisofs Not Found"
    echo "yum install mkisofs"
fi

which createrepo >&/dev/null
if [ "$?" != 0 ]; then
    echo "createrepo Not Found"
    echo "yum install createrepo"
fi

if [ -f $2 ]; then
    echo "DVD ISO destination $2 already exists. Remove first to recreate."
    exit 1
fi

# Make sure there is enough free space to hold the DVD image on the filesystem
# where the home directory resides, otherwise change ~/mkrhdvd to point to
# a filesystem with sufficient free space.

cleanup() {
    [ ${LOOP:=/tmp/loop} = "/" ] && echo "LOOP mount point = \/, dying!" && exit
    [ -d $LOOP ] && rm -rf $LOOP 
    [ ${DVD:=~/mkrhdvd} = "/" ] && echo "DVD data location is \/, dying!" && exit
    [ -d $DVD ] && rm -rf $DVD 
}

cleanup
mkdir -p $LOOP
mkdir -p $DVD

ls $1/*.iso &>/dev/null
if [ "$?" = 0 ]; then

    echo "Found ISO CD images..."

    CDS=`expr 0`
    DISKS="1"

    [ -w / ] || {   # Very portable, but perhaps not perfect, test for superuser.
        echo "Only 'root' may use this script for loopback mounts" 1>&2
        exit 1
    }

    for f in `ls $1/*.iso`; do
        mount -o loop $f $LOOP
        cp -av $LOOP/* $DVD
        if [ -f $LOOP/.discinfo ]; then
            cp -av $LOOP/.discinfo $DVD
            CDS=`expr $CDS + 1`
            if [ $CDS != 1 ] ; then
                DISKS=`echo ${DISKS},${CDS}`
            fi
        fi
        umount $LOOP
    done
else
    if [ -f $1/isolinux/isolinux.bin ]; then

        echo "Found FTP-like tree..."

        if [ -e $1/.discinfo ]; then
            cp -av $1/.discinfo $DVD
        else
# How does one construct a legal .discinfo file if none is found?
            echo "Error: No .discinfo file found in $1"
            cleanup
            exit 1
        fi
        cp -av $1/* $DVD
    else
        echo "Error: No CD images nor FTP-like tree found in $1"
        cleanup
        exit 1
    fi
fi

if [ -e $DVD/.discinfo ]; then
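    # Line 4 of .discinfo holds the disc number list; rewrite it to "ALL" for the merged image.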
    awk '{ if ( NR == 4 ) { print disks } else { print ; } }' disks="ALL" $DVD/.discinfo > $DVD/.discinfo.new
    mv $DVD/.discinfo.new $DVD/.discinfo
else
    echo  "Error: No .discinfo file found in $DVD"
    cleanup
    exit 1
fi

rm -rf $DVD/isolinux/boot.cat
find $DVD -name TRANS.TBL | xargs rm -f

cd $DVD
createrepo -g repodata/comps.xml ./
mkisofs -J -R -v -T -o $2 -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 8 -boot-info-table $DVD
if [ "$?" = 0 ]; then

    echo ""
    echo "Image complete, create md5sum..."

#  $IMPLANT --force $2
# Don't like forced mediacheck? Try this instead.
    $IMPLANT --supported-iso --force $2

    echo "Start cleanup..."

    cleanup

    echo ""
    echo "Process Complete!"
    echo "Wrote DVD ISO image to $DVD_DIR/$DVD_FILE"
    echo ""
else
    echo "ERROR: Image creation failed, start cleanup..."

    cleanup

    echo ""
    echo "Failed to create ISO image $DVD_DIR/$DVD_FILE"
    echo ""
fi

Tarus Balog : OUCE 2014 Videos Now Available

January 28, 2015 03:34 PM

The dates are now set for the 2015 OpenNMS Users Conference, but if you can’t wait until September you can now relive the 2014 conference through the magic of YouTube.

You can visit the 2014 conference events calendar and if a video is available it will show up under the “Links” section.

Markus Neumann has been working through the videos and doing his best to improve them, but apologies in advance for the quality of some of them. We’ll attempt to record things better in Fulda.

Mark Turner : Photos from the Google Fiber announcement

January 28, 2015 02:45 PM

Google Fiber is coming to the Triangle



I was able to attend yesterday’s Google Fiber announcement. As I walked towards the auditorium in the North Carolina Museum of Natural Sciences, I was attracted to a table out front that displayed shiny plastic. Spying the Canon camera in my hand, the helpful woman staffing the table asked, “would you like a media pass?”

Feeling like the limo driver in the Bud Light “Dr. Galakawicz” commercials, I answered “yeaaassss, I would” and smoothly hung it around my neck.

Inside, I hung out with the media pros and snapped photos with wild abandon. I’ve collected the shots into my Google Plus album. Check them out!

Mark Turner : These four lucky cities are now officially getting Google Fiber – The Washington Post

January 28, 2015 02:30 PM

Yesterday’s Google Fiber announcement has gotten some press in WaPo this morning. Unfortunately, it has hit one of my pet peeves:

After months of speculation, Google confirmed Tuesday that its ultra-fast Internet service will soon be coming to four more cities — Atlanta, Charlotte, Nashville and Raleigh-Durham, N.C. Those regions, along with more than a dozen cities in their immediate vicinity, will be the latest to benefit from high-speed Internet provided by the search giant.

Uh, sorry to disappoint you, Mr. Fung, but that’s five cities, not four: Atlanta, Charlotte, Nashville, Raleigh, and Durham.

The mayors of both Raleigh and Durham spoke at the press conference yesterday. Both cities’ Chief Information Officers spoke about the project and put in incredibly long hours to get their cities where they are now. Both cities have completely different permitting processes, different infrastructure, different laws and regulations. The way outsiders lump Raleigh and Durham into Raleigh-Durham has always annoyed me (and will be the topic of an upcoming blog post).

And saying it’s just Raleigh and Durham isn’t even accurate, as the nearby municipalities of Carrboro, Cary, Chapel Hill, Garner, and Morrisville are also included. These cities’ mayors were also present but were overlooked by the reporter.

It’s just as big a deal to these other cities that they are getting Google Fiber. It would be nice if they got a little credit for their hard work, too.

via These four lucky cities are now officially getting Google Fiber – The Washington Post.

Mark Turner : Raleigh gets Google Fiber

January 27, 2015 02:31 PM

Google Fiber is coming to Raleigh



Last week, word leaked out that Google was hosting two events this week: one in Raleigh and one in Durham. Of course, it doesn’t take a genius to guess that Google Fiber is on its way to the Triangle. Word now is that Charlotte will also get the gigabit-speed Internet service.

I hope to attend the upcoming meetings to learn more about this service, after having fought a long battle to bring truly high-speed Internet to the state. I have no special inside track on the goings-on, though, so I’ll likely learn about it like everyone else: through the media. It would’ve been great to receive an invitation, but in the bigger picture I’m just glad that a cause I’ve supported for many years will finally become reality.

The Goog and The Gov will hold a 1 PM press conference today to announce the news.

Mark Turner : Larry Stogner and ALS

January 27, 2015 02:21 AM

I was saddened to hear local WTVD anchor Larry Stogner has ALS, also known as Lou Gehrig’s Disease. He has been the face and voice of the Raleigh-Durham area for decades, and to see him doing battle with this devastating disease is heartbreaking.

I’ve been thinking of my own recent health issues. For a while it seemed that the twitching that popped up last summer had subsided but recently it has come back just as strong as before. I can’t sit at my desk during the day without feeling some muscle somewhere just twitching away. I had to reschedule my follow-up visit with a neurologist due to a PTA conflict but I see him again next week. I hope we can figure this out.

Mark Turner : Goodbye, CR-V

January 27, 2015 02:05 AM

2001 Honda CR-V EX



This past Friday we said goodbye to our 2001 Honda CR-V, sold to a very happy young woman who answered our Craigslist ad. The CR-V had been in our family for over nine years and was a very good, reliable car. It was also a bit boxy for my taste and the 2.0l engine was underpowered for the car’s size. And it’s not electric, like our new Ford Focus.

For some reason it’s hard for me to say goodbye to vehicles. I suppose I like to take care of things and try to make them last. And vehicles become like a second set of clothes, in a way. They become part of one’s image.

For a while there we had four vehicles, though, with the CR-V, Odyssey, Focus, and the 2014 Kia Sorento we bought last week all taking up space in our driveway and garage. After hearing what the dealers would give us for trade-ins, we decided we would try selling the older cars on our own. This has worked out nicely for the CR-V and I’ve already had a few serious buyers inquire about the Odyssey. I am glad to find good homes for these cars but the whole process is a giant production that I can only stand about once every ten years.

Tarus Balog : Building an Open Source PVR: Step Three – Electronic Program Guide

January 26, 2015 06:55 PM

The most frustrating thing about this project was getting the Electronic Program Guide (EPG) to work. Unfortunately, it isn’t easy.

This is one of the things that TiVo excels at: you are basically paying for a very up-to-date program guide. They also offer something called a “Season Pass” which causes all of the episodes of a particular program to be recorded without having to explicitly select them.

When I got my EyeTV system, this part was a snap. They partner with TV Guide to provide the service, and unlike TiVo’s $14.95/month fee it is a yearly fee of $19.95 (with the first year being included with the unit).

Even my Sony Bravia is able to get over the air EPG information, but I wasn’t able to get that to work with OpenELEC.

The actual EPG configuration occurs in the Tvheadend software. You get a screen like this:

There are three main areas of configuration: the “Internal Grabber”, “Over-the-air Grabbers” and “External Interfaces”.

The internal section displayed a cron job but no module options. None of the OTA grabbers seemed to work, and there wasn’t a module for North America. That left the external grabbers.

I started digging around and found that it really isn’t easy to get this running.

One tool that kept coming up was XMLTV. On the frontend configuration for the Tvheadend client in OpenELEC they even have a section on it:

XMLTV is a number of things, such as a format for representing TV listings, but it is mainly a set of tools “to obtain, manipulate, and search TV Listings”. It contains programs that will connect to an external source to gather EPG data.

Unfortunately, OpenELEC doesn’t ship with it. There is a script called “tv_grab_file” which is used to manage the XMLTV data, but not to actually acquire that data.

For me the easiest solution was to install XMLTV via apt on my home Debian server. It comes with a script called tv_grab_na_dd that can be used to fetch the data.

But I still wasn’t done. I needed a data source. It looks like all the cool kids use Schedules Direct. They are a non-profit that promotes open source software and provides, for a fee, access to EPG information. Since they had a free trial I signed up, configured my tv_grab_na_dd script to access their information, and voilà, I had an XML file with what appeared to be useful information.

I placed that in the webroot of my server, and then configured OpenELEC to point to it. Nothing happened. So I copied the file to the OpenELEC server, modified the client to use the “FILE” method (see screenshot above) and nothing happened.

I finally had to uncheck the XMLTV checkbox under “External Interfaces”. When I did that I finally had something under the “Internal Grabbers” section.

The last chore was to associate the channels I had discovered with the program guide.

Prior to getting all of that to work, the drop down for “EPG Source” had been blank.

So, to summarize my steps:

  1. Get an account at schedulesdirect.org
  2. Install the XMLTV tools somewhere (I used a Debian box)
  3. Configure XMLTV to access your Schedules Direct account
  4. Set up a cron job to periodically grab the updated EPG information and store it in a web root:
     0 1,13 * * * /usr/bin/tv_grab_na_dd --config-file ~/xmltv.conf --days 7 > /secure/html/xmltv.xml
    
  5. On the OpenELEC box, set up a cron to fetch the data:
    0 2,14 * * * wget http://172.20.10.12/xmltv.xml -O /storage/xmltv.xml
    

Whew. So far everything has been working well. You want to be sure not to fetch the data too often as that can overwhelm the Schedules Direct servers. My current seven day XML file is about 10MB.
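If you're curious whether the grab worked, the XMLTV file is easy to poke at with a few lines of Python (a throwaway sketch; the path is the one from my cron job above):

# Throwaway sanity check: print the first few programmes in the grabbed file.
import xml.etree.ElementTree as ET

root = ET.parse("/secure/html/xmltv.xml").getroot()
names = {c.get("id"): c.findtext("display-name", "?")
         for c in root.iter("channel")}

for prog in list(root.iter("programme"))[:10]:
    start = prog.get("start", "")[:12]            # YYYYMMDDhhmm
    title = prog.findtext("title", "(untitled)")
    print(start, names.get(prog.get("channel"), "?"), "-", title)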

I went ahead and signed up for a year account for $25, bringing my total to $705.92 (the hardware was $680.92 and the software was, yup, $0). It’s quite possible to shave off about $200 by going with less memory and a smaller SSD (or using an HDD) or if you already have a server to run the Tvheadend backend you could get by with a Raspberry Pi.

My next steps are to play with all the cool add-ons and to try and organize my pictures in a fashion where they would be usable with the system. More fun for me.

Alan Porter : idea: spectral shift hearing aids

January 26, 2015 02:39 PM

This is part of a series I have been thinking about for a long time. When I have a fleeting thought about some neat idea, I should publish it to ensure that it cannot be patented later.

I saw an ad for hearing aids, and that made me wonder if instead of simply amplifying, hearing aids could do some more sophisticated sound transforms. Maybe they do already.

Since hearing loss is typically non-uniform across the hearing spectrum, it would make sense to transpose sounds from “bad” ranges to “good” ranges. Of course, in practice, that might sound weird. For example, someone with high-frequency hearing loss might have high-pitched consonant sounds transposed to a lower end of the spectrum. I’m sure the listener would have to adjust to that, since we’re used to vowels sounding low and consonants sounding high.
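As a toy illustration of the transposition idea (nothing like a real hearing-aid DSP; the cutoff and shift here are made-up numbers), one could move energy above a cutoff down by a fixed offset, one short frame at a time:

# Toy sketch: shift spectral energy above cutoff_hz down by shift_hz.
# Assumes shift_hz < cutoff_hz so the shifted bins stay in range.
import numpy as np

def transpose_down(frame, rate, cutoff_hz=4000, shift_hz=1500):
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / rate)
    bin_shift = int(shift_hz * len(frame) / rate)
    idx = np.nonzero(freqs >= cutoff_hz)[0]       # bins in the "bad" range
    out = spectrum.copy()
    out[idx] = 0                                  # silence the bad range...
    out[idx - bin_shift] += spectrum[idx]         # ...and replay it lower down
    return np.fft.irfft(out, n=len(frame))

frame = np.random.randn(320)                      # one 20 ms frame at 16 kHz
print(transpose_down(frame, 16000).shape)         # (320,)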

Alan Porter : idea: car sobriety switch

January 26, 2015 02:34 PM

This is part of a series I have been thinking about for a long time. When I have a fleeting thought about some neat idea, I should publish it to ensure that it cannot be patented later.

This morning I read an article about a drunk driver that killed a motorcyclist. I know there are companies that make sobriety tests that tie into vehicle ignition systems. Some courts order offenders to have these installed.

I thought it would make sense to use the car’s existing controls (buttons on the steering wheel) and displays to run a reaction-time test that has to be passed before the car can be started.

Of course, this would be annoying. So maybe the car could be configured (via web page?) to require this test only at certain times. I log into car.com and set it to require a sobriety test to be started between 10pm and 4am. It could provide options if I fail. Say, after two failures, the car could phone a friend, or it could (via a service like OnStar) call a cab to my location.
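A sketch of the gating logic (all of the thresholds and hooks here are invented for illustration):

# Toy sketch: require a passed reaction-time test for a late-night start,
# and escalate after two failures.
import random
import time
from datetime import datetime

def needs_test(now=None):
    hour = (now or datetime.now()).hour
    return hour >= 22 or hour < 4                 # between 10pm and 4am

def reaction_test(threshold_s=0.5):
    time.sleep(random.uniform(1.0, 3.0))          # unpredictable delay
    print("PRESS THE WHEEL BUTTON NOW")
    start = time.monotonic()
    input()                                       # stand-in for the button press
    return time.monotonic() - start < threshold_s

def allow_start(max_failures=2):
    if not needs_test():
        return True
    if any(reaction_test() for _ in range(max_failures)):
        return True
    print("Start blocked; contacting your designated driver...")
    return False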

Tarus Balog : Building an Open Source PVR: Step Two – Software

January 25, 2015 06:46 PM

So in my quest to replace my Mac-based PVR I wanted something lightweight that could be controlled via a remote. I had issues with my current setup when a keyboard or a mouse just had to be used, and I wanted to avoid that. Since this system would be dedicated to the PVR, I didn’t want to install anything that wasn’t necessary.

This left me two options: Kodi (formerly XBMC) and MythTV. I decided to try out Kodi via the OpenELEC project. OpenELEC aims to create a very lightweight instance of Kodi that can be installed (and probably even run) from a USB stick. Sounded like just what I needed.

The easiest way to install OpenELEC is to create a bootable install USB stick. This is pretty easy, if you read the instructions correctly. I actually spent a lot more time on this than I needed to because of a failure to do so. Once you download the image you need (I used the new bundled “generic” version, which works with Intel-based devices as well as most others), you insert your USB stick and then run the “create_livestick” command. You pass a parameter to that command which indicates the path to the USB stick, i.e. /dev/sdX where X is the drive letter.

This is where I screwed up. I could easily tell that the stick I used was mounted on /dev/sdh1, so that’s what I used. The problem was by adding the “1” I was specifying a partition and not the whole drive. It took me an embarrassingly long time to figure out what I was doing wrong.

Once the stick was created, I just booted the Intel NUC with it and followed the on-screen instructions. Pretty soon I had a working OpenELEC system.

Now let me stress that OpenELEC is not designed as a dedicated video recorder. It is designed to run Kodi which is a media center, so most of its functions are aimed at managing libraries of media and not recording television. The menu is organized by media type:

You have a Pictures menu and, if you have the PVR add-ons installed, a TV menu item. Videos are movies, whereas TV Shows are media files that have been identified as TV shows (different from things you have recorded on your PVR).

Then you have your Music files, any Programs you have added to the system (as in software programs) and the System menu itself.

The system information screen gives you a read-only overview of the system, including memory usage and frames per second.

To actually change things you need to go to the configuration menu:

You can add media sources via pretty much every network protocol currently in use. As I have a couple of UPnP servers on my home network I used that format, but I found that when I added new content the system wouldn’t pick up the changes. So I installed an add-on to update my library but it didn’t help:

I searched but couldn’t find any way to get the changes to show up. There is a menu that comes up when you hit the left arrow that is supposed to update the library but it wouldn’t work for me. Since my UPnP servers can also serve files over SMB, I tried that and it not only fixed the issue but opened up a whole new level of coolness.

You can scan for TV Shows in your media files, and when you do Kodi will try to “scrape” information off of the Internet for such things as artwork and episode synopsis. You have to have your library named in a particular fashion (which I do) but then it is pretty automatic:

This didn’t happen when I was using UPnP.
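(As an aside, the “particular fashion” is the usual show/season/episode layout. A sketch of the pattern the scrapers generally recognize; treat the details as illustrative, since the exact rules vary:)

# Build the Show/Season NN/Show - SxxEyy - Title.ext layout that Kodi's
# scrapers generally recognize (a common convention, not a strict spec).
def episode_path(show, season, episode, title, ext="mkv"):
    return ("%s/Season %02d/%s - S%02dE%02d - %s.%s"
            % (show, season, show, season, episode, title, ext))

print(episode_path("Doctor Who", 1, 2, "The End of the World"))
# Doctor Who/Season 01/Doctor Who - S01E02 - The End of the World.mkv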

This is all well and good, but I still get a lot of content through Over the Air (OTA) television broadcasts and the whole purpose of this exercise was to get that working. In order to add PVR functionality to OpenELEC you need to install add-ons. This usually consists of a “backend” application that does all of the heavy lifting with respect to video capture and encoding, and a “frontend” or client application that connects with the backend and displays the video. I specifically chose more powerful hardware as I wanted both features on the same unit.

First I needed to install the backend, which is a piece of software called “Tvheadend”. It was a little hard to find in the menus as it is a “service” and not a normal add-on, so you have to find the “services” section of the add-ons menu:

and then you can find and enable your services:

Like most add-ons within Kodi, you will have an “information” screen:

and a configuration screen:

The configuration screen comes into play when you set up the Electronic Program Guide (EPG) but I’ve reserved that for a separate post.

To access the Tvheadend software, you have to browse to it via http://[openelec-server]:9981. That would be a different URL, of course, if you installed it on a server other than the OpenELEC box. This is where it got difficult as most of the documentation on-line is out of date and the menu options have changed. I’ll post what I did in the hope that it might help someone else out.

First, you want to go to the Configuration tab:

You don’t have to do anything on the “General” tab if you don’t want to, but you do need to see a TV capture device on the “DVB Inputs” tab:

If you have chosen a supported capture unit, it should be displayed here. If not you’ll need to either figure out why it isn’t or get another unit. My Kworld UB435-Q showed up with support for both DVB-C and ATSC formats. Since I am interested in OTA broadcasts in the United States, I chose to enable the ATSC interface as the other is used for cable, which I don’t have.

Note in this screenshot that there is a Network entry called “OTA”. This was not there when I first enabled the interface. I had to go and set it up on the “Networks” tab and then add it.

This took me a rather long time to figure out. You need to tell the Network what multiplexes (“muxes”) to use, and it looks like you would need to add them individually under the “Muxes” tab. It turns out that there are a number of pre-defined muxes, including one for North American ATSC called “United States: us-ATSC-center-frequencies-8VSB”, so I just chose that for my “OTA” network:

Once I associated it with my adapter in the “DVB Inputs” tab, I had a list of television channels:

Tvheadend is pretty cool on its own. If you look at the screenshot, there is a “play” button next to the channel name, and if you click it you get a stream that will play on your computer. We recently had a bad weather day and I worked from home, and I was able to keep the local news up in a window the whole time. I haven’t really explored all of the features of Tvheadend, but once I got to this point it was time to head back to OpenELEC and set up the frontend client.

Going back to the configuration menu and looking through the Kodi add-ons, there is a section called “PVR Clients”:

I wanted the Tvheadend HTSP client:

Just like the backend, there is an information screen:

and a configuration screen:

In this configuration screen, you have to point to the Tvheadend backend, which in my case is on localhost.

If you get to this point, then you should see a new “TV” menu item:

You can watch live TV:

But the main reason I wanted a PVR was to time-shift and store TV programs so that I could a) skip the commercials and b) make sure I didn’t miss anything. This requires access to the Electronic Program Guide, and I could not figure out how to get it to work. I spent days’ worth of my limited free time working on this. The forums and the existing documentation were not much help.

I got so frustrated that I wiped the system and installed Mythbuntu, an Ubuntu-based distribution that focuses on MythTV in the same fashion OpenELEC focuses on Kodi. I figured that since MythTV was designed to be a PVR from the start, it might be easier.

There were a number of differences apparent right away. Mythbuntu is huge compared to OpenELEC. It includes a number of things that just aren’t required. It was, however, easy to install, and building on my newly earned knowledge with OpenELEC I was able to navigate the initial setup easily. I found that the MythTV documentation was slightly better than OpenELEC/Kodi/Tvheadend, but I still hit snags.

The first was that MythTV wouldn’t recognize the Kworld tuner I was using. It did, however, see the EyeTV tuner from my Mac-based install, but scanning for channels with it turned up nothing: the scan seemed to complete as expected, yet no channels were discovered.

I spent another day’s worth of free time trying to get that to work, but I gave up pretty easily. I wanted to use the Kworld tuner and possibly sell the EyeTV unit, so it bothered me that it wasn’t recognized. Plus, Mythbuntu just wasn’t the lightweight install I wanted, so I decided to go back to OpenELEC.

I did finally get the EPG working, but I’m going to reserve that story for the third and final post in this series. Once that happened I could see the guide in OpenELEC:

Yay!

Is it perfect? No. The OpenELEC TV frontend is pretty limited. While I can schedule a show for recording through the EPG by setting a “timer”, I have not found a way through the GUI to record a whole series. I can do it through the Tvheadend web interface by selecting the show in the EPG and choosing “Record Series”:

and then it will show up on the “Timers” section of OpenELEC:

You can access saved recordings through the menu as well:

but it frustrates me that there doesn’t seem to be a way to delete the recording once I’ve seen it. I have to do that through the Tvheadend web page.

(sigh)
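In theory this housekeeping could be scripted against Tvheadend’s HTTP API rather than clicked through the web page. A rough sketch follows; the endpoint names are assumptions based on the JSON API in recent Tvheadend builds, and they have changed between versions, so verify them against your own install:

    import requests
    from requests.auth import HTTPDigestAuth

    # Placeholder host and credentials; adjust for your installation.
    BASE = "http://openelec-server:9981"
    AUTH = HTTPDigestAuth("admin", "password")

    # List finished recordings (assumed endpoint).
    grid = requests.get(BASE + "/api/dvr/entry/grid_finished", auth=AUTH, timeout=5).json()
    for entry in grid.get("entries", []):
        print(entry.get("uuid"), entry.get("disp_title"))

    # Delete a watched recording by UUID (assumed endpoint; uuid is a placeholder).
    # requests.get(BASE + "/api/dvr/entry/remove", params={"uuid": "..."}, auth=AUTH, timeout=5)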

Overall I’m happy with my new OpenELEC Kodi install. There are a large number of add-ons that I haven’t explored yet, and perhaps I’ll have the time one day. When I was younger and got a new piece of technology I would try out every single feature. Now I tend to do the bare minimum I need to have a viable solution and then stop. (grin)

If you don’t care about OTA television, then OpenELEC is a breeze to set up. The only issue I see is that there are a number of closed solutions, such as Google’s Chromecast and Amazon’s FireTV, that do pretty much the same thing, at least with respect to video, and they cost about as much as a nice meal versus the several hundred dollars I spent.

But I like OTA television. Between it and other services I have, like Netflix and Amazon Prime Instant Video, I always have something to watch. Plus, OTA HDTV signals aren’t as heavily compressed as those from cable and satellite providers, so the quality is excellent.

This experiment to create an open source PVR emphasizes both the good and the bad about free software. I consider myself pretty technically savvy, but I had a lot of issues getting this to work. On the other hand, I learned a whole lot about four open source communities (OpenELEC, Kodi, Tvheadend and MythTV) and about how OTA television actually works. My PVR is not some magical black box but a tool that I can control and manipulate to my benefit.

Technology is key to personal freedom, and ceding the understanding of how it works to third parties can be dangerous. I know it sounds silly to sow fear about something as trivial as the ability to record “The Big Bang Theory“, but societal change rarely happens in a huge way all at once. It is more a series of small things chipping away at our freedoms over time, and getting this to work made me feel that, at least in my own life, I was making a difference.

Many thanks to the people behind OpenELEC, Kodi, Tvheadend and their communities for making this possible.

Mark Turner : How does he know?

January 20, 2015 03:03 AM

I witnessed very interesting behavior from our dog, Rocket, this evening. He was napping on the floor next to me while I read in the recliner and Hallie surfed the Internet from the other room. Kelly had been at work all day and was bringing Travis home from his piano lesson.

Everything was quiet in the house, so I wondered where our dog was going when he suddenly hopped up from his nap and walked over to the door leading to the garage. Seconds later, the garage door went up and Kelly and Travis walked in, with Rocket there to greet them.

I sat there astonished. Could it be the dog had somehow known they were coming home? How? He clearly hopped up from his nap and went directly to the door as if he knew they would arrive. I can’t say for sure what his intentions were but to my eyes it certainly seemed like he was ready to greet someone at the door.

While it’s true that the front blinds were open and Kelly did drive past the house on the way to the driveway, Kelly was driving a car we had bought only 48 hours earlier. Rocket has not only never been in it, he has never even seen it. Could he have associated the noise of its engine that quickly? I don’t think so.

I have heard of research that shows that pets know when their humans are on the way home and I’ve always taken those reports with a grain of salt. Even so, it’s very intriguing to watch what certainly looks like this behavior.

I’ve long asserted that Kelly is Rocket’s favorite human. Could he have somehow known she was coming home? I wouldn’t bet against it.

Mark Turner : The fine line of classroom discipline

January 19, 2015 04:16 PM

Today is Martin Luther King Day, honoring a great man who pushed America to honor its commitments to everyone. It’s got me in a contemplative mood.

A well-meaning liberal friend forwarded this article from the NEA about the “school-to-prison” pipeline. It purports to raise alarms about how a kid who gets suspended often winds up on a path toward crime. This is indeed a serious issue with troubling implications. I was disappointed, though, to see the article miss an important point. For example:

According to the U.S. Department of Justice, which last year ordered school districts to respond to student misbehavior in “fair, non-discriminatory, and effective” ways, Black students are suspended and expelled at a rate three times greater than White students, while Black and Latino students account for 70 percent of police referrals.

Also, students with disabilities are twice as likely to be suspended than their non-disabled peers, and LGBT students are 1.4 times more likely to face suspension than their straight peers. In Ohio, a Black child with an emotional disability was 17 times more likely to be suspended than a White, non-disabled peer. Combine these “risk factors,” and you’re talking about a child who might as well stay home.

The bias starts early. Black children represent 18 percent of pre-school students, but account for 48 percent of pre-school suspensions. Yes, we’re talking about 4-year-olds.

“It’s crystal clear that Black students, especially boys, get it worse,” said Jacqui Greadington, chair of the NEA Black Caucus. “Studies have shown that a Black child, especially a male, is seen to be a bigger threat just because they are. They are. They exist.”

Is there “bias” here, or is there a real problem with these kids disrupting class? I think there are some folks like Ms. Greadington who want to believe that this is simply racism, that it’s just the system being unfair to these kids, but I don’t think that’s necessarily the case.

If kids are having problems behaving in class, it might be worthwhile to figure out why they’re misbehaving in class rather than charging their teachers with being racist. Do they need extra tutoring? Mentoring? Do they have enough parental supervision and support? Sleep? Do they live in a safe, loving home? Do they have enough money or opportunity?

We fail kids when we misrepresent the challenges they face or underestimate their ability to overcome them. It doesn’t do any good to say, “well, that’s just racism” and throw up our hands. Let’s focus instead on the solutions that will help every child reach his or her potential. As I said before, some kids face nearly impossible odds. How can we help get them where they need to be?

Mark Turner : Who built this country?

January 19, 2015 02:16 AM

My friend and new Wake County Commissioner John Burns was at an N.C. Association of County Commissioners event where commissioners were given a presentation on the state’s changing demographics. Demographic trends show that white people will soon no longer be the majority.

One commissioner took issue with this and, according to John, announced “so we’re just going to take what built America for 200 years and throw it in the trash can, I guess.”

Of course, it was the immigrants who built America: Blacks, Chinese, Irish, Mexicans, and many others. The people who did the jobs no one else wanted to do (and, in the case of slavery, jobs they didn’t want to do either, but were forced to).

Fortunately, everyone around this guy rolled their eyes. And it makes me glad that idiots like him are getting left behind.

Kevin Sonney : Brandon the Collie 2003-2015

January 17, 2015 08:13 PM

12 years ago, I got a puppy.

The Puppy

He was a good puppy, from a friend who bred border collies. He would be good with the kids, I thought. Everyone should have a dog growing up.

I named him for the lead singer of Incubus, because it felt right. I am very particular about naming my pets, and unless they come pre-named, I usually wait to see what name fits. In his case, “Brandon” stuck.

As he grew, we realized he wasn’t going to be a typical border collie.

Big Puppy

He was sweet, gentle, playful, and nowhere near as neurotic as the breed is normally. And he was also BIG. Bigger than anyone else in his family. And he loved me, and he loved my kids, and he loved my (now ex-)wife.

When I got divorced, Brandon stayed with me. We were buds, and we hung out together at home. He took care of the other animals, and made sure I was where I was supposed to be. Frankly, he kept me sane during one of the roughest periods of my life.

Even after a year or so of just us, he came to accept Ursula as part of my life, and while he never respected her as much as he did me, I know he came to love her. Or at least, like her well enough.

Sometimes we think of our pets as family. Children even. Sometimes we get really lucky and get a good companion. It isn’t often that we get a best friend.

I hit the jackpot.

Today, after a period of decline and illness, Brandon left us. He was a good dog. Hell, he was the best dog, and I’ll never have another like him. There have been and will be tears over the next several days and weeks. The children are holding up better than I am, which I half expected.

Tonight though, please raise a glass for my Brandon, the best dog ever. May we all get so lucky at least once.

Tarus Balog : Building an Open Source PVR: Step One – Hardware

January 17, 2015 06:12 PM

Many years ago I was jealous of my friends who had a TiVo. Since I live out on a farm and get my television the old-fashioned way, via an antenna, TiVo didn’t work for me. I was stuck with using a VCR to record my television programs.

UPDATE: My friend Tanner pointed out that TiVo does offer an Over the Air (OTA) option now. It runs $49.99 for the hardware plus a monthly charge of $14.99 (a “lifetime” option is available as well). Looks pretty cool and I’m sure it is easier to set up than what I did. If I wasn’t all about open source I would have seriously considered this.

Then I got introduced to a product called EyeTV. This is a product designed for OS X that looks like an old school USB stick with a coax connector at one end. Connect that to your antenna, plug the USB end into your Mac, install the software and, voilà, you have your own personal video recorder (PVR). The hardware is actually a Hauppauge WinTV HVR-980 but the secret sauce is the software to make it easy to use.

Note: it appears they don’t make that unit for the US anymore and instead sell an external unit.

The EyeTV setup works fine, but as I’ve moved away from Apple products I have been wanting to replace it with an open source PVR. There are a number of open source media suites, including Kodi (previously XBMC) and MythTV, and I’ll cover those in my second post. I decided to explore OpenELEC, which is a lightweight packaging of Kodi, and since I could not find an “all in one” guide for setting something like this up, I wanted to document the process I went through in the hope that it will help someone else.

The first challenge was finding suitable hardware. I run EyeTV on a Mac Mini. It’s small and quiet and has enough horsepower to drive the EyeTV software at HD resolutions. But it is a full operating system and has some frustrating issues. First, I use Front Row, which Apple dropped with OS X Lion. Next, the Mini is connected to a UPS, and when the power blips (which happens frequently on the farm) I get a little pop-up on the screen that requires me to click “OK”, and that can’t be disabled. Almost every time I go to watch a program I have to dig out a wireless mouse just so I can acknowledge the dialog box. I wanted something that could be run entirely by remote and was as lightweight as possible.

Noise from the unit was a big consideration for me. If you want to do a lot of video processing you need a rather powerful machine, but those tend to need cooling, and cooling means noise. To deal with this, most PVR software is built around a “frontend”, or playback client, that talks to a “backend” that actually acquires and manipulates the video stream. A lot of people have had success using a Raspberry Pi as the frontend, but I wanted a unit powerful enough to act as both the frontend and the backend while not generating much noise.

Yeah, I know, first world problem.

The first unit I tried was the Z3RO Pro from Xi3. My friend Donnie works at WDL Systems, which specializes in embedded devices, so he tends to be pretty up to date on the latest new shiny, and he recommended I check it out. I ordered one from Amazon preconfigured with an SSD and 4GB of RAM.

It’s a very stylish and compact unit, and the one I bought came preloaded with SuSE Linux. It did have noticeable fan noise, however. Nothing too obnoxious, but in my quiet office I could definitely hear it.

That wasn’t a showstopper, but what did kill it for me was the fact that I couldn’t get it to talk to any of my HDMI devices. The Z3RO Pro comes with two video ports: one for DisplayPort and a combo DP/HDMI connection. You have to be very careful when putting an HDMI plug into that port, as there is nothing to really guide the orientation (you can put it in upside down), and if you force it in the wrong way you can damage it. I didn’t have an HDMI cable at the office, so I used an HDMI-to-DVI one and it worked fine, but when I got home the unit wouldn’t recognize the monitor with a straight HDMI cable. I tried three different cables and three different monitors, and not even the boot screen would show up. Since the Z3RO Pro has no audio out other than HDMI, and audio is a big part of this experiment, I had to return the unit.

The next unit I tried was the Intel NUC (Next Unit of Computing). This looks like a taller, smaller version of the original Mac Mini. It comes as a kit and you have to add your own storage and memory. Since my current setup has a 350GB HDD, I opted for a 512GB SSD and 8GB of Crucial RAM. The NUC also requires a mini-HDMI connection, so I bought a mini-HDMI to HDMI cable as well.

When the unit arrived, there was a cute little Easter Egg upon opening the box:

In addition to the computer hardware, I had to obtain a compatible TV tuner and a remote.

For the tuner I went with a Kworld UB435-Q, which is supported by OpenELEC, and an Ortek VRC-1100 for the remote control. I didn’t really care about the remote itself as I have a Harmony 900 universal remote and my plan was to use it, but I needed an IR receiver and the Ortek was cheap.

I did manage to get this setup working well with OpenELEC, but it wasn’t easy, and I’ll cover the details in the next post. One funny thing I found while programming the Harmony is that the NUC appears to include its own IR receiver, making the Ortek unnecessary. You might understand my confusion when the remote was working but the little red LED on the Ortek receiver wasn’t blinking. I unplugged it, and when I could still control OpenELEC I concluded that the NUC must have one built in. During this process, however, I ended up blanking my Harmony and had to reconfigure everything, and at some point the NUC’s IR stopped working. Since I got it all working again with the Ortek’s receiver, and I’m not in any need of extra USB ports, I kept it.

The cost of this experiment so far:

Intel NUC BOXD54250WYKH1 $351.00
Crucial MX100 512GB SATA 2.5″ SSD $213.99
Crucial 8GB Kit (4GBx2) DDR3 1600 $66.99
Kworld UB435-Q USB ATSC TV Tuner $24.99
Ortek Windows 7 Vista XP Media Center MCE PC Remote Control $16.96
AmazonBasics High-Speed A to C Type, HDMI to Mini-HDMI Cable $6.99
Total: $680.92

You can shave $100 to $150 off of that price by using a smaller SSD. The Intel NUC I bought also supports laptop-size HDDs, but if you plan to use an SSD and just want a smaller unit, there is an SSD-only version of the NUC that is a little smaller and a couple of dollars cheaper.

This is much more expensive than a lot of media players out there, and the main cost has to do with my requirement to capture live over-the-air digital television. For many people, this is an archaic concept (I was talking about this in the office and Ben jokingly asked “What’s a channel?”) and thus of limited value, but for those of us still using the old paradigm a PVR like this is useful.

Finally, while my NUC is based on Intel’s Haswell chip, they just announced that the Broadwell-based units will ship soon. In an ironic twist, they are being manufactured via a partnership with Xi3. As soon as that happens, you can expect the price of my unit to drop, so if you are considering a project like this, you have options.

And that’s what open source is all about to me: options.

Tarus Balog : Zabbix and OpenNMS

January 16, 2015 10:57 PM

The network management application space is rather cluttered, with a number of “fauxpensource” offerings that can really confuse the landscape when people are looking for truly open solutions.

Two exceptions to that are OpenNMS (‘natch) and Zabbix.

According to the Wikipedia article, Zabbix was started in 1998, which makes it a little older than OpenNMS, which I’m told was started in July of 1999 although we use March 29th, 2000, as the official “birth date” since that’s when the project was registered on Sourceforge.

Despite being in the same space and about the same age, I’d never really used Zabbix or interacted with their community until 2009 when I met Rihards Olups.

Rihards is kind of “the Mouth of Zabbix”, and I met him at the 2009 Open Source Monitoring Conference, where he brought me some gifts from his home in Latvia. He repeated the gesture at this year’s OSMC, and I asked when his next trip to the US would be so I could return the favor. He pulled out his handy (his mobile phone) and said “Are you anywhere near Raleigh, North Carolina?”

Since that happens to be pretty much my home I was happy to find out that he was coming to town. Even though he was sick with the flu that had been going around, we managed to get a gang together for dinner.

Left to right, that’s Rihards (with the awesome beard), Eric (who was in town from Texas), Sarah, Seth, David, Me, and Ben.

We went to The Pit, an acceptable local barbecue restaurant that is much more “presentable” than some of my favorite dives although the food isn’t quite as good, and afterward we went next door to the “barcade” and played games.

I played pinball (one of my favorite things to do) and Rihards played his first game on a real pinball machine. Yes, I’m a bit older than him.

One of the things I like about my job is that I can go most anyplace and find like-minded free software people. It’s awesome and I always have a good time. I hope to visit Riga in September, around the time of the OpenNMS Users Conference, and meet more of them.

Warren Myers : columnar “email”

January 15, 2015 04:20 PM

There needs to be a better way of handling group conversations. IRC uses the constant scroll mentality. Email has reply-at-top, reply-at-bottom, and reply-inline.

Forums, reddit, Google+, Facebook, Twitter, and the like have a scroll-like view – every new post is merely sequentially listed after the last.

This can all lead to highly confusing digital conversations.

Somebody should make a parallel (maybe columnar) discussion/chat/email system where every participant can get their own space to reply, they can reply to specific things from different people, and everything can be viewed in an identified manner. Similar to how Track Changes works in Microsoft Word.

Surely it shouldn’t be That Difficult™ to do this, should it?

David Cafaro : Updated Network

January 14, 2015 03:59 AM

I’ve been very busy updating my home network infrastructure lately.  I wanted to improve the zone separation, while at the same time providing a reasonably secure connection between my resources at home and my resources on the net.

Some of these changes include:

  • Replacing my SSG-140-SH Firewall with a new SRX220H2 w/POE Firewall.
  • Replacing my DELL 5448 Switch with a new Netgear GS724T Switch.
  • Removing an old 4 port POE switch.
  • Replacing my old VLAN setup (Main, Media, Utils) with my new VLAN setup (Main, Wireless, Media, Utils, LAB, VPN, Tunnel).
  • Upgrading my old Dell 860 (250GB RAID1 and 8GB RAM) co-located server with a new SuperMicro-based server that has 12TB of storage and 32GB of RAM.  This is split into virtualization images, so I’ll be able to work with Docker/CoreOS/KVM-based technologies in my personal cloud.  This is tied into my home network via an OpenSwan -> SRX IPsec tunnel.  Additionally, the SRX will be able to provide dynamic SSL VPN capability for when I’m on the road.

All of the above gets added to my existing 12TB NAS, multiple POE wireless access points, and virtualization server.

I have a few more tweaks left to handle multicasting and cross-LAN traffic on the network, finish up my log aggregation and analysis tools, and do the CoreOS and Docker work for PaaS deployments.  This should provide some nice resources for my security research.

Tarus Balog : Important Security Issue with OpenNMS

January 13, 2015 05:54 PM

It is said that “given enough eyeballs, all bugs are shallow”, which is true, but the tricky part is finding enough eyeballs, especially useful ones and not the ones in that jar in Blade Runner.

Recently, an end user reported a rather severe security issue with OpenNMS.

The process that serves up the “Categories” section on the front page of the web interface is called RTC (for Real Time Console). The database queries that create the availability numbers on that page can be expensive in terms of resources, so the RTC daemon was created to periodically query the database and cache the results, so that lots of users wouldn’t create an undue load on the system.

We use a tool called Castor to process XML data within OpenNMS. Due to a bug in Castor, when it discovers an error while processing an XML file, it can throw an exception that includes the contents of the file.

This is very useful when the files relate to OpenNMS and you are trying to debug them, but you don’t exactly want the contents of /etc/shadow or /etc/passwd displayed indiscriminately. That’s exactly what this exploit allows.

Since the default username and password for the RTC user is “rtc” and exists on every system, a malicious person could use that information to obtain the contents of any file on the system. Note that as far as the OpenNMS application is concerned, the RTC user has very limited permissions, but because the flaw is in Castor, those permissions are just enough to trigger it.

This has been reported as our first ever CVE: CVE-2015-0975

The best fix is to upgrade to OpenNMS 14.0.3. If, however, you are unable to upgrade soon, you can edit the Spring security file to limit requests from RTC to just the localhost, which should mitigate most of the issue. Full instructions and files can be found on the wiki.

To summarize, all versions of OpenNMS prior to 14.0.3 contain a bug where *anyone* with access to the webUI (port 8980 on the OpenNMS server) can retrieve any file that is on the system. While this isn’t the end of the world, it definitely could be considered bad and should be addressed.

Tarus Balog : OUCE 2015 – 28 September to 1 October

January 13, 2015 05:08 PM

Just a quick post to remind folks to reserve the dates for the next OpenNMS Users Conference Europe, to be held in Fulda, Germany, the last week in September. It is usually held earlier in the year, but construction on the University of Applied Sciences campus pushed it out. I am really looking forward to the nicer weather (it snowed the last time we met in Fulda).

Organized and run by the independent OpenNMS Foundation, this is a yearly gathering of OpenNMS users and developers from around the world for several days of training, presentations and camaraderie. It’s a great time and I look forward to it every year.

And yes, there is beer, some of it free.

The Call for Papers is open.

Hope to see you there.

Mark Turner : Impossible odds

January 13, 2015 03:29 AM

See this gentleman? He was arrested last month for a string of burglaries around East Raleigh. Before he was busted in December, he had been arrested six times since September. This photo was taken today at the county jail, when he was charged again with possession of a stolen firearm and possession of stolen goods.

Now here’s the mugshot of his mother, taken the same day her son was arrested. Note the shiner. Mom was charged with marijuana possession and possession of a stolen firearm. She has a rap sheet stretching back to 1995 with a few larcenies, license revocation charges, and minor drug charges. In each case her sentences were suspended, and you know what? She has largely stayed out of trouble since 2003.
Though I’ve been quite willing to send kids like this one on their way to jail whenever they’ve been caught stealing in my neighborhood, it has made me wonder how a kid winds up in such a situation. It’s a damn shame to have to send a kid to jail.

As a PTA president, I hear a lot of stories of sad cases, absolutely heartbreaking cases of completely dysfunctional families. I heard one today that will haunt me into my dreams tonight, a story of a child whose parents are apparently no longer interested in being parents and want the child gone.

What kind of world is that for a child to grow up in? When you have no advocate at all? And no love? What kind of future does that child have?

Days like today leave me speechless, that in America in this day and age kids can fall through the cracks and no one even notices. A huge swath of America has no idea of the struggles of those around them.

These kids face truly impossible odds. They have so much stacked against them. I wonder what it will take to reach these kids and overcome the impossible odds, because whatever we’re doing now is not working.

Tarus Balog : Welcome to 2015

January 12, 2015 05:30 PM

I don’t know why I like the new year so much. It’s a pretty arbitrary holiday. I mean, yeah, the Earth has circled the Sun one more time, but is there really any difference between December 31st and January 1st?

I think I like it because, no matter what happened in the previous year, the slate (to a large degree) has been wiped clean. You get a fresh start, and after 2014 I am excited to have one.

For the OpenNMS Group, 2014 was bookended by two departures. The first came in January, when the man we had hired to take over as CEO decided to leave us for a very senior position at Blackberry. Considering the compensation Blackberry bestows upon its major executives, I really can’t blame him for taking the job. I am certain, however, that he could have handled the situation better. His departure was so sudden, and so much of what we had going on depended on him, that it left us spinning for several months and put me in a bit of a depression.

The second departure was that of Matt Brozowski, our CTO and one of the three founders of the company. This hit me much more than Ron’s departure, because when a founder leaves any company it has to be seen as a vote of no confidence and at a minimum is a failure to meet expectations. I do understand his reasons for leaving, however. Being an entrepreneur is not for everyone. What we are trying to do is hard. People who have never attempted it must think it is a life of leisure – being your own boss, calling all the shots and taking vacation whenever you want. What I’ve found is that I spend most of my time acting as an umbrella to keep the crap from falling on the rest of the team so they can do the real work, and I’m tied to obligations that aren’t mine to control. I end my day by checking my e-mail and start it the same way. Only in the last few years has OpenNMS gotten to the point that I feel comfortable in taking a vacation, and even then I have a system for getting notified if something needs my attention.

Now I thrive on that, but not everyone does. I understand the lure of the safety and security of a big organization. Every so often I find myself wanting to swap places with some of my clients who make a lot more money than me and work in large corporations. But I know that, ultimately, it wouldn’t make me happy. We spend a third of our adult lives at work, and it is a shame if you aren’t doing something that makes you happy. We all wish the best for Matt and hope he finds happiness in his new position.

While both of these events were serious downers, there was a lot of good in 2014. We had the best Dev Jam ever, I think. We’ve been doing these for a long time and the whole team just gelled this year (plus, the Twins won). We released OpenNMS 14, which marked a new release philosophy with an emphasis on “early and often”, and I think it’s great. I constantly discover new things in it that I didn’t realize I needed.

From a financial standpoint, the company lost money in 2014 for the first time. It took us a while to get our focus back, but the last quarter was awesome, with revenue in December, always our strongest month, setting a new record and coming in over 40% higher than the same month in 2013. We have a number of major announcements in the next six months that should get us back on track for an awesome 2015.

So, my parting thought to my three readers is this: if you had a great 2014, here’s hoping that 2015 is even better. If 2014 knocked you down, pick yourself up, brush yourself off and leave it in the dust. It’s a new year.

Go do great things.

Mark Turner : Busy busy weekend

January 12, 2015 03:12 AM

Been pretty busy around Chez Turner. First off, right around Christmas I caught some sort of cold which sapped much of my energy for a few days. Then my stuffy nose kept me from sleeping well for several nights. But that wasn’t enough to keep me from trying to do way too much as is my habit.

The changing calendar brought about the urge to knock out plenty of tasks that had been nagging us for a while. We cleared a ton of unneeded stuff out of our attic. Then we did the same to the garage. Then we did the same to the utility room. Then we painted our dining room (after, what, six years?). Then we shifted our living room furniture around. Then we hung pictures on the wall (after, what, six years?). Oh, and I put in a charging station for our electric car.

In between, we found time to go ice skating with our friends, go on a run or two, host our kids’ friends for playdates, go see the excellent movie The Imitation Game, and even get in some music practice. I’ve also spent some time building a spreadsheet to decipher our Time of Use – Demand (TOUD) electricity rate from Duke Energy Progress. I fixed up our CR-V to sell (Armor-All, car wax, engine cleaning, photography, creating an ad) and used a smartphone app and a $15 OBD2 adapter to check it over and get it running like new. I also toyed with the new RTL-SDR tuners I bought from China, capable of tuning from about 50 MHz to 2200 MHz. And somewhere in there I made time to cook a very tasty meal tonight, after I watched the second half of the N.C. State win over Duke.

Life sure is busy but it’s also good.

Warren Myers : what should an “ideal” support ticketing system provide?

January 09, 2015 03:55 PM

If you were going to create a support ticketing system from scratch – what would you put in it?

My initial list of needs (some of which derive from my book, Debugging and Supporting Software Systems, and others from my experiences in ticket smashes):

  • “long” title support (HP truncates at 80 characters – give me at least 255)
  • “long” field update support (HP truncates at 4k characters – that’s not enough for some stack traces)
  • clear contact fields for both filer and support case owner
  • allow updates to be made via email or web ui
  • allow attachments (for log files, screenshots, etc)
  • have “private” updates visible only to support personnel
  • clear date/time stamps for updates
  • ability to turn case “result” into a KB article
  • clear resolution field
  • web ui should be highly responsive – and run usably on any modern browser (mobile, desktop, tablet, etc)
  • ability to cross-link to other cases filed by same customer
  • clear indication of who has made updates (maybe alternating colors for customer vs support updates?)
  • as few hoops as possible to open new cases & to update existing ones
  • simple way to close a case if you’re the opener
  • easy means to transfer ownership of a case – both for the customer and for the support technician
  • ability to search previous cases – both for customers and engineers

What else would you add? What would you change?
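To make the wish list concrete, here is a minimal sketch of the data model it implies. The class and field names are hypothetical, not drawn from any existing product:

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List, Optional

    @dataclass
    class Update:
        author: str
        body: str                       # no 4k truncation
        created: datetime               # clear date/time stamp
        private: bool = False           # visible only to support personnel
        attachments: List[str] = field(default_factory=list)

    @dataclass
    class Case:
        title: str                      # allow at least 255 characters
        filer: str                      # clear contact fields for both parties
        owner: str
        updates: List[Update] = field(default_factory=list)
        resolution: Optional[str] = None
        related_cases: List[str] = field(default_factory=list)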

Warren Myers : fix ibm – hire me as your ceo

January 08, 2015 03:25 PM

Robert Cringely has written myriad times on IBM. His most recent post was titled, “How to fix IBM”.

His solution is simple and easy: “Go back to customers being a corporate priority.”

But IBM, as it stands today, will never get there.

And all the “leadership” they’ve brought in over the years has only compounded their errors faster – they’ve never done anything to even try to fix them. Why? Because they keep bringing in stodgy, old-thinking people who have no concept of what customer service means.

Ginni Rometty and the rest of the senior leadership at IBM need to go. Absolutely. But when IBM brings in new leadership, it truly needs to be, well, “new”. You need the same kind of leadership sea change Jack Ryan championed in Tom Clancy’s Executive Orders – you don’t need career managers and “senior” leadership: you need people with ideas who are willing to try something new. Who are willing to fail, but to fail fast. Who will learn from failure, and keep iterating until there’s something that works.

So, IBM, I have a simple solution for you: hire me as your CEO. Give me 36 months to fix your problems. If I haven’t, let me go back to whence I came. But when I have, Wall Street will love you, and you’ll be on track to stay relevant for the next hundred years. Or, at least the next 30 (since I’ll want to retire some day). I’ve got a team of people already in mind who can do more for you in 18 months than the entire executive team has done in the last 180.

Mark Turner : If there’s an economy in your sharing then it’s not really sharing

January 07, 2015 07:02 PM

Wikipedia

You can say I know a thing or two about sharing. I was open source long before it was cool. I support Wikipedia with not only my money but my photography, which I freely donate to the public domain. Even this blog is licensed under Creative Commons, allowing anyone to take what I’ve made and use it practically any way they choose. So the brouhaha over the “sharing economy” in Raleigh has me puzzled.

I attended what was billed as a “public hearing” on Airbnb Monday night. Fans organized the meeting to make a case for why Raleigh should consider legalizing use of the home-hosting service. Like other cities, Raleigh, they say, needs to embrace the “sharing economy.” I’m friends with many of these folks but I have a different take on this issue.

Now, I may be old-school (I am sporting gray hairs on my chin at the moment) but to me, “sharing economy” is a contradiction in terms. If my neighbor asks to use my lawn mower and I tell him “sure,” that’s sharing. If I say “sure … for fifteen bucks an hour” that’s not sharing, that’s renting. And if I do that on a regular basis, not only am I the biggest schmuck in the neighborhood but I’m also a de facto business, subject to all the fun rules and regulations that come with that.

This distinction seemed to be lost on many of those who spoke during Monday night’s meeting. If you’re advertising a service and you’re profiting (or attempting to profit) from it, don’t be surprised when the city (state, etc.) wants to treat you like a business, because you are a business. Now, if you want to share your space with others with no requirement that money change hands, that seems to me to be truly sharing, and the government should steer clear. This is why I see services that facilitate this (like CouchSurfing) in a different light.

Airbnb isn’t doing this out of the kindness of its heart; the company exists to make money. What’s more, it seems its business model is predicated on making an end-run around ordinances and regulations meant to protect the public by making sure businesses meet certain standards. Municipalities rightfully take this very seriously, and so should you. I don’t want to stay in hotels that have carbon-monoxide problems, or don’t carry insurance, or are going to swindle me with hidden fees. I appreciate the protections that regulation provides. I also appreciate my city having amenities like the PNC Arena and the Raleigh Convention Center, both of which would not be possible without occupancy taxes. Airbnb provides neither regulation nor occupancy taxes, and this hurts the city we all call home.

Too many people buy into the argument that regulation and taxes are wrong. Everyone loves a free ride. Infrastructure is great so long as someone else pays for it. This argument is often made by businesses that depend on shifting these burdens to someone else. That “someone else” is often you and me. Don’t fall for it!

If you want to share, then share. If you want to run a business, though, stop whining about the rules. Suck it up and pay your fair share just like everyone else.

If there’s an “economy” in your sharing, then it’s not really sharing.

Eric Christensen : What’s worse?

January 06, 2015 08:05 PM


Eric Christensen : Securing Secure Shell

January 06, 2015 03:47 PM

I was passed an interesting article this morning about hardening Secure Shell (SSH) against weak crypto that could fall victim to cracking by the NSA and other entities.  The article is well written and discusses why the changes are necessary in light of recent Snowden file releases.
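For flavor, this is the style of sshd_config changes the article advocates. Treat the excerpt below as a representative example for a reasonably recent OpenSSH (6.5 or later), not a drop-in policy; check the article and your own version’s documentation before applying anything:

    # /etc/ssh/sshd_config (excerpt) -- representative hardening choices
    KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
    Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes256-ctr
    MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com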


Greg DeKoenigsberg : A good year for Ansible users

December 31, 2014 01:43 PM

About a year ago, Stephen O’Grady of Redmonk published a comparison of the community metrics of the major configuration management tools. It’s a good read, and I won’t rehash its points. Go read it first.

Today I’d like to take a look at where Ansible is, a year later, using last year’s report as a benchmark. I think it’s fair to say that we’ve done pretty well for our users in 2014.


Debian Popcon

Debian’s Popularity Contest is an opt-in way for Debian users to share information about the software they’re running on their systems.  Although it represents only a small sample of the Linux distro world, it’s useful because it’s one of the few places where we can really see an apples-to-apples comparison of install bases of the various tools.

First, though, a note about the original Popcon analysis: for Ansible, it was an apples-to-oranges comparison.  Stephen’s report compared the Ansible control tool (the functional equivalent of a “server”) to the Puppet/Chef/Salt agents. When comparing the Ansible package to the server packages of the other configuration management tools, the picture is quite different, and more in line with what we’re seeing elsewhere:

Debian Popcon comparison, configuration management servers

Strong growth demonstrated by Ansible in 2014. Important caveats about the above chart:

  • The Puppet line shows far more variability than do the other lines, which may be an artifact of collection method.
  • It appears that Chef has moved away from the distro distribution model for their server, so they are significantly under-represented. libchef-ruby was chosen because it appeared to be the best proxy. (If anyone has a better “chef server” package for Popcon comparison purposes, let me know.)
  • Ansible is also under-represented to some degree, since a significant amount of Ansible’s user base installs Ansible from PyPi.
  • Puppet and Salt could also be under-represented for similar reasons.

Because of Ansible’s agentless model, it’s impossible to get a realistic picture of how many systems are under active management by Ansible.  However, if we look at “systems immediately addressable by each configuration management system”, then we would use openssh-server as our “agent”, and the comparison looks like this:

Debian Popcon: Systems immediately addressable by the various configuration management tools

Obviously, not every system that has openssh installed is currently being managed by Ansible. Still, I include the graph because it’s a compelling picture of the power of not having to bootstrap an agent.
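For anyone who wants to poke at the raw numbers themselves, Popcon publishes its rankings as a plain-text table. Here is a rough Python sketch that pulls the rows for the packages compared above; the URL and column layout are assumptions based on the by_inst file that popcon.debian.org publishes, so adjust if the format differs:

    import urllib.request

    # Package names used as proxies in the charts above.
    PACKAGES = {"ansible", "puppetmaster", "libchef-ruby", "salt-master", "openssh-server"}

    with urllib.request.urlopen("https://popcon.debian.org/by_inst") as resp:
        for line in resp.read().decode("latin-1").splitlines():
            fields = line.split()
            # Expected layout: rank, package, inst, vote, old, recent, no-files
            if len(fields) > 2 and fields[1] in PACKAGES:
                print(line)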


Github Metrics

Stephen tracked three metrics last year for the four projects: stars, forks, and merged PRs in the last 30 days.  A key caveat for all of these graphs: because Ansible and Salt are both Github-native projects, their Github numbers are understandably higher than those of their older counterparts.

First, stars. Ansible has opened up its lead from last year, and now has 2-3x more stars than its counterparts.

GitHub stars comparison

Second, merged pull requests in last 30 days. This graph looks very similar proportionally to last year, and represents different stages of maturity and different upstream philosophies among the projects. In number of lifetime contributors, Ansible (935) and Saltstack (934) significantly outpace Chef (363) and Puppet (344).

GitHub merged pull requests, last 30 days

Third, forks. Ansible had a slight lead at the end of 2013; that gap has widened in 2014.

GitHub forks comparison


Hacker News Jobs

In his analysis from last year, Stephen referenced a metric he called “Hacker mentions adjusted for growth.”  I was unable to replicate that metric, so instead I used Ryan Williams’ excellent “Hacker News Hiring Trends” metric, which pulls from the “whoishiring” discussion threads.  It’s a narrow metric, but it shows steadily growing demand for Ansible knowledge among the Hacker News crowd.

Hacker News Hiring Trends chart

Caveat: neither “Salt” nor “Saltstack” appear to be tracked in Ryan’s dataset.


Indeed.com Job Trends

On last year’s chart from indeed.com, Ansible was practically invisible.  This year’s chart shows significant growth, even though there’s a long way to go to catch up with incumbents Chef and Puppet.

Caveat: Stephen’s original search used “technology” as the term to try to weed out extraneous data.  In my search I used the term “devops” instead, which shows similar results but also allows for the inclusion of Salt.

Indeed.com job trends chart


LinkedIn.com User Groups

Again, as a function of time, Ansible lags behind the incumbents Chef and Puppet, but it has made significant strides in the past year, almost tripling the number of members in its LinkedIn.com user groups.

LinkedIn.com user groups chart


StackOverflow Questions

The total number of StackOverflow questions tagged with the different terms. Interestingly, although Ansible’s numbers are higher here than last year, there have still been fewer questions asked about Ansible than about any of its competitors.

StackOverflow tagged questions chart

Given the evidently increasing popularity of Ansible along all other metrics, it’s a curious stat — but I like to think that the gap can be explained by Ansible’s ease of use, strong documentation, and large and helpful user community. That explanation may even be reasonable. :)


 

Ansible demonstrated strong growth in 2014, and people have noticed. Thoughtworks moved us rapidly from “trial” to “adopt” in 2014 in their technology radar, with some very kind words of endorsement. Opensource.com ranked us as one of the top ten open source projects of 2014.

Which means that the bar has been raised. SDTimes called us the #1 company to watch in 2015. We’ll see about that.

As always, our success is a direct reflection of the success of our passionate community of users and contributors. Thanks to all of you. We’re looking forward to a great new year.

Mark Turner : Tablets and E-readers May Disrupt Your Sleep

December 24, 2014 02:08 AM

Screen time before bedtime disrupts your sleep, a new study says. I love the science of sleep.

People who receive a tablet or e-book reader for the holidays might wind up spending some sleepless nights because of their new gadget.

That’s because the light emitted by a tablet like an iPad can disrupt sleep if the device is used in the hours before bedtime, according to a new Harvard study.

People who read before bed using an iPad or similar "e-reader" device felt less sleepy and took longer to fall asleep than when they read a regular printed book, researchers found.

via Tablets and E-readers May Disrupt Your Sleep.

Mark Turner : ICEd out of parking spots

December 22, 2014 05:30 PM

The N&O’s Andrew Kinney writes about ICEing, which is what EV owners call it when their charging spot is blocked by an Internal Combustion Engine (ICE) vehicle. Kinney reports that the city has collected a lot in fines due to drivers not paying attention to where they park.

I agree with Bonner Gaylord: perhaps painting the EV parking spaces a bright color might help clueless drivers pay better attention.

RALEIGH — People can’t seem to resist Raleigh’s electric-vehicle parking and charging spots – even when they’re driving gas guzzlers.

A sign next to each of the city’s 23 special spaces warns that gasoline-powered vehicles blocking the charger will get a $50 ticket. Yet fuel-burners just keep coming, especially to spot No. 378, which may be the most frequent site of parking tickets in Raleigh.

via RALEIGH: Gas guzzlers can’t stay out of electric-only parking in Raleigh | The Raleigh Report | NewsObserver.com.

Mark Turner : How ‘Jingle Bells’ by the Singing Dogs Changed Music Forever – The Atlantic

December 21, 2014 02:32 AM

This is a fascinating account of the version of “Jingle Bells” recorded by The Singing Dogs. I always assumed this song was from the late 1970s – big deal, someone sampled dogs and made a song. I was shocked tonight to find out it was actually recorded in 1955! I had no idea that this was such a groundbreaking song, launching the arts of multitrack recording and sampling. Who knew?

Let’s, for a moment, consider "Jingle Bells" as performed by the Singing Dogs. With jaded, 21st-century ears, it’s easy to dismiss as Yuletide kitsch. It topped a 2007 survey of most-hated Christmas songs, but there was a time when listeners marveled at it—Dogs! And they’re singing!

It’s time we give the Singing Dogs their due. Created in Denmark in the early 1950s by a self-taught ornithologist and released in the U.S. in 1955, the record marks a turning point in how we listen to music. I’ll explain.

via How 'Jingle Bells' by the Singing Dogs Changed Music Forever – The Atlantic.

Mark Turner : Cuban relations

December 18, 2014 06:09 PM

President Obama caused quite a stir yesterday when he announced the normalization of relations with Cuba. Of course, Republicans quickly went ape-shit at this announcement and are already lining up to oppose it. Given that it’s the President’s constitutional prerogative to conduct foreign affairs, I’m not sure what whiny Congressmembers can do.

As for ditching restrictions on Cuba, I say good riddance! I’ve never understood the continuing economic embargo against Cuba. Yes, Cuba is a communist country, but for decades we’ve had no trouble doing business with communist (and nuclear-armed) China. Hell, China actively spies on us, conducts provocative naval maneuvers, and is actively working to diminish the stature of the United States in the Pacific region. I suppose if Cuba had a population of a billion potential consumers like China, we’d be falling all over ourselves to put aside our differences.

The other issue at play here is the hit in stature that Cuba-focused politicians take. For decades, these politicians have drawn their power from the Cuban embargo. They’ve positioned themselves as the gatekeepers to Cuba, and if that gate swings wide open, their power goes with it. For them, the best interests of Cuba and/or the United States have always taken a back seat to their own interests. Screw ‘em.

I am thrilled that President Obama is taking this step and am thankful that Pope Francis helped shepherd it along. It’s high time we all moved on.

(Cuba and Castro have always been a useful boogey man. Read this stunning transcript in the National Security Archive of Castro’s offer to be a boogey man for LBJ.)

Tanner Lovelace : Preparing for endurance events

December 17, 2014 05:29 PM

For 2015, I’m currently signed up for my first marathon in April (Raleigh Rock-n-Roll), the Raleigh 70.3 half-iron triathlon (May 31), and the Beach2Battleship full-iron-distance triathlon in October. I will probably also do several smaller races. To make sure I can make it through all these events, I went to see a physical therapist today. I already knew my core muscles were weak, but I didn’t realize just how tight they all were. So I’m going to be working on this for a while until I can get my body back into shape. We’ll see how it goes.

Magnus Hedemark : Patterns for Success – Landing the Job: An Overview

December 16, 2014 04:52 AM

Welcome to my first of what I hope will be many contributions to the Autism community via Autism Daily Newscast. As a high functioning autistic person with a well-established career in the software industry, I expect to research and share with you patterns for success in your career endeavors. While it is frequently a challenge, I’m convinced that success can be yours.



Scott Schulz : Tweet: Blog: Twitterversary – 8 Years http://t.co/Xy7vHYN…

December 14, 2014 06:21 AM

Blog: Twitterversary – 8 Years ift.tt/1GB9Yuj

Eric Christensen : How to really screw up TLS

December 12, 2014 04:09 PM

I’ve noticed a few of my favorite websites failing with some odd error from Firefox.

Firefox’s “Unable to connect securely” error message

The Firefox error message is a bit misleading.  It actually has nothing to do with the website supporting SSL 3.0, but the advanced info is spot on.  The error “ssl_error_no_cypher_overlap” means that the client didn’t offer any ciphers that the server also supports.  Generally when I see this I assume that the server has been set up poorly and only supports unsafe ciphers.  In this case the website only supports the RC4 cipher.  I wondered why RC4, which so many websites had recently removed (especially since RC4 is very weak and on the way out), was suddenly making a comeback.  Apparently these websites all use F5 load balancers, which had a bad implementation of the TLS 1.0 standard, causing a POODLE-like vulnerability.

Stepping back for a moment: back in October the POODLE vulnerability hit the streets, and a mass exodus from SSL 3.0 happened around the world.  I was happy to see so many people running away from the broken cryptographic protocol, and very happy to see the big push toward implementing the latest version of TLS, TLS 1.2.  So with SSL 3.0 out of the way and the POODLE vulnerability squelched, why are we seeing problems in TLS 1.0 now?

Well, simply put, F5 load balancers don’t implement TLS 1.0 correctly.  The problem with SSL 3.0 is that the padding format isn’t checked, and apparently on the F5 devices that is still a problem in TLS 1.0.  And while the company did offer up patches to fix the issue, some really bad advice has been circulating the Internetz telling people to only support RC4, again.  Sigh.
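If you want to see for yourself what a server will negotiate, a few lines of Python’s standard ssl module will do it. The hostname below is a placeholder; with a default context (which already refuses RC4 in recent Python builds), an RC4-only server should fail the handshake with exactly this kind of no-shared-cipher error:

    import socket
    import ssl

    host = "example.com"  # placeholder; substitute a site showing the error

    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # cipher() returns (name, protocol, secret_bits)
            print(tls.cipher())

    # To refuse RC4 explicitly on an older Python, tighten the cipher list first:
    # ctx.set_ciphers("DEFAULT:!RC4")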

When RC4 finally dies a fiery death I’ll likely throw a party.  I’m sure I won’t be the only one…