Tarus Balog : Electronic Program Guide Changes at Schedules Direct

February 26, 2015 06:09 PM

I just noticed that my OpenELEC, Kodi and Tvheadend based DVR was no longer updating the Electronic Program Guide (EPG).

I would get the error:

Service description 'http://docs.tms.tribune.com/tech/tmsdatadirect/schedulesdirect/tvDataDelivery.wsdl' can't be loaded: 500 Can't connect to docs.tms.tribune.com:80 (Connection timed out)

when running the fetch script.

Digging around, I found out the reason is that the Gracenote service is being discontinued and thus some URLs have changed.

I use a script called tv_grab_na_dd from the Debian (wheezy) xmltv-utils package. Version 0.5.63-2 doesn’t appear to use the new URLs. The link above suggests adding:

54.85.117.227  docs.tms.tribune.com webservices.schedulesdirect.tmsdatadirect.com 

to /etc/hosts and that worked well for me. Of course, if the IP address for Schedules Direct ever changes it will need to be updated.
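Since hand-editing /etc/hosts is easy to botch on repeat visits, here is a small idempotent sketch; the function name and file argument are mine, not from the post. Run it against /etc/hosts as root for real use.

```shell
# Append the Schedules Direct override to a hosts file, but only if the
# hostname is not already present, so re-running the function is harmless.
add_sd_hosts_entry() {
    hostsFile="$1"
    entry="54.85.117.227  docs.tms.tribune.com webservices.schedulesdirect.tmsdatadirect.com"
    grep -q "docs.tms.tribune.com" "$hostsFile" || echo "$entry" >> "$hostsFile"
}
```

Usage: `add_sd_hosts_entry /etc/hosts` (with sudo). If the Schedules Direct IP ever changes, only the `entry` line needs updating.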

It looks like this is fixed in xmltv-utils version 0.5.66.

Mark Turner : Wilson’s official statement on today’s FCC ruling

February 26, 2015 05:39 PM

Here is Wilson’s official statement on today’s FCC ruling.

CITY OF WILSON APPLAUDS FCC CHAIRMAN WHEELER AND THE COMMISSION FOR ITS LEADERSHIP IN DECIDING IN FAVOR OF LOCAL BROADBAND CHOICE

Wilson, N.C. — The City of Wilson applauds FCC Chairman Wheeler and the Commission for their leadership today in approving the City’s petition to preempt a North Carolina state law that restricts municipal Gigabit broadband deployment. Today’s historic decision now enables Wilson and other North Carolina municipalities to provide the Gigabit broadband infrastructure and services that North Carolina and America need in order to remain competitive in our emerging knowledge-based global economy.

Wilson filed this petition not with immediate plans to expand into its rural neighboring communities, but to facilitate the future advancement of its critical Gigabit fiber-optic infrastructure over the long term. Going forward, the City will continue to expand its Gigabit network in the same measured and responsible manner it has utilized in the past, as opportunities and resources allow. The FCC’s decision will also permit the City to share its experience, knowledge and expertise with other communities to help foster the growth of critical economic infrastructure for their businesses and residents.

By its action today, the FCC has empowered local North Carolina communities to do whatever it takes for all of our citizens to realize the benefits of access to essential Gigabit infrastructure in our beautiful state. All possibilities are now on the table, whether through public-private partnerships or municipally-owned broadband networks, to ensure North Carolina’s businesses and residents remain competitive in the global economy.

Mark Turner : Fuse cutout – Wikipedia, the free encyclopedia

February 26, 2015 05:19 PM

During bad weather, many folks will hear electrical booms in their area and blame it on a “transformer blowing.”

The truth is that transformers are expensive, so the power companies protect them with equipment called “cut-out fuses.” In a lot of cases where a branch has brushed a power line, these fuses will blow and cut power to a street. If the branch falls away and the line isn’t damaged, a lineman can quickly restore power just by resetting the fuse using a long pole.

So now you know.

In electrical distribution, a fuse cutout or cut-out fuse is a combination of a fuse and a switch, used in primary overhead feeder lines and taps to protect distribution transformers from current surges and overloads. An overcurrent caused by a fault in the transformer or customer circuit will cause the fuse to melt, disconnecting the transformer from the line. It can also be opened manually by utility linemen standing on the ground and using a long insulating stick called a "hot stick".

via Fuse cutout – Wikipedia, the free encyclopedia.

Mark Turner : The FCC rules against state limits on city-run Internet – The Washington Post

February 26, 2015 05:08 PM

Wilson’s petition to the FCC was just granted and I couldn’t be happier. North Carolina’s “Level Playing Field” law, written by Time Warner Cable, is now null and void. Now communities across the state can build their own digital future with community broadband service.

I would be dancing in the street if the street wasn’t a slushy mess right now!

For years, cities around the country have been trying to build their own, local competitors to Verizon, Charter and other major Internet providers. Such government-run Internet service would be faster and cheaper than private alternatives, they argued. But in roughly 20 states, those efforts have been stymied by state laws.

Now, the nation’s top telecom regulators want to change that. On Thursday, the Federal Communications Commission voted 3-2 to override laws preventing Chattanooga, Tenn., and Wilson, N.C., from expanding the high-speed Internet service the cities already offer to some residents.

via The FCC rules against state limits on city-run Internet – The Washington Post.

Jesse Morgan : Please Steal This Idea.

February 25, 2015 04:19 PM

Someone take this idea and run with it; just make sure it’s free to use. I don’t have the time for it and it’s too great not to write down.


The idea? A simple graph paper map maker. Nothing with fancy graphics like Campaign Cartographer or FantasticMapper; just a simple graph paper mapper.

The interface consists of two major parts: the map window and the collapsible sidebar.

Main Window

The main window looks like an unblemished Minesweeper screen with a giant crosshair segmenting it into four parts. There are both horizontal and vertical scrollbars. Only half of the window is showing according to the scrollbars, meaning you can scroll left or right, up or down. In the bottom corner is a zoom tool similar to Google Maps.

Sidebar

The sidebar appears as a small button on the left that folds out when clicked. It contains a vertical accordion menu with the following headers:

Select

[S]elect mode will let you click on an object underneath the cursor. Multiple clicks will cycle focus to the next item in the stack underneath the cursor. This could be a Feature, Interior Wall, or Path.

Excavate

[E]xcavate has two main options to toggle between: Excavate (default) and Fill. While this tool is active, clicking squares in the main window will either empty or fill them. Using the shift key reverses the active option: Excavate becomes Fill and vice versa. Excavate is the default tool when the page is loaded.

Wall

[W]all would be relatively straightforward. It would only affect existing exterior walls, either in part, in whole, or in individual lengths. Options would include smooth, rough, and natural.

Interior Wall

[I]nterior Wall has several options and two modes. The primary mode, “Snap,” would snap to the grid between two excavated squares; the secondary mode, “Free,” would allow straight lines to be drawn between two arbitrary points. Clicking and dragging will draw out a temporary path until the mouse is released. Using the shift key lets you draw a rectangle of interior walls. Options would include walls (default), half walls, engraved walls, magical forcefields, cliffs/ledges, ruined walls, lower level walls, walls with arrow slits, water lines, etc.

Door

[D]oors would have several options, all of which snap to and highlight a border between two squares. This would work between two dug squares, or between a dug and a filled square (i.e. a fake/useless door). Options would include regular door (default), double door (two squares long), secret door, portcullis, false door, etc.

Features

[F]eatures can be placed and resized on any map, and are layered between the floor tiles and the walls, allowing for things like puddles to be half-covered. This will contain a block of highlightable icons, which will let you draw an item that can be moved, resized, or rotated. Icons include stairs (default), circular stairwell, debris, water, pit, pillar, altar, chair, throne, table, crate, barrel, fireplace, statue, well, sarcophagus, dais, bridge, carpet, etc.

Markers

Traps

[T]rap behavior would be dictated by type, but would mostly act like Features. Options would include pit traps (default), spike traps, blade traps, poison gas, etc.

Path

[P]ath would allow you to draw a simple line from point A to point B. This could be a dispersed “sandy” type line, or a dashed, dotted, or solid line of configurable width.

Options

[O]ptions would contain:

  • Line color (black, blue, gray)
  • Grid (none, excavated areas, all areas)
  • Grid Fade (100%, 50%, 25%)
  • Grid Color (black, blue, gray)
  • Border (click and drag the region to be included in PNGs)
  • Show Compass checkbox
  • Show scale checkbox
  • Tile Pattern (none, granite, stone, etc)
  • Fill Pattern (none, stone, line color, black)
  • Square scale (5ft, 10ft, other)

Save

[S]ave would give you the option of saving the output (SVG) to Google Drive or locally, or exporting to PDF. If a border is not defined, it will make a best guess.

Navigation

Right click would drag the map; +/- would zoom; arrow keys would pan. The map will pan infinitely in any direction, based on the centerpoint.

Hotkeys would include:

  • [S]elect mode
  • [E]xcavate
  • [I]nterior Wall
  • [D]oors
  • [F]eatures
  • [T]rap
  • [P]ath
  • [O]ptions
  • [S]ave
  • [ctrl+z] undo
  • [ctrl+shift+z] redo
  • standard copy/cut/paste

So that’s the idea that’s been kicking around in my head. If you’re a UI person and interested in helping, I’d be glad to give guidance on functionality, but I don’t have the time to develop it myself.

Mark Turner : 6 of the Most Unbelievably Cheap Paradises on Earth | Thrillist

February 22, 2015 09:07 PM

Wanderlust.

Everyone at one time or another has wanted to get away from it all and beach/ski/paraglide-bum it in some foreign land. Small problem: that’s very expensive. Or is it? That’s our sweet rhetorical way of saying maybe not. Check this list of 12 shockingly affordable paradises you can live in for peanuts… though you’ll probably be packed and out the door by number seven.

via 6 of the Most Unbelievably Cheap Paradises on Earth | Thrillist.

Alan Porter : Merging multiple git projects into one

February 22, 2015 09:04 PM

Over the last few months, my daughter Sydney and I have been working on Python programming assignments. I showed her that we can occasionally make a snapshot of our work using git, so if we mess something up, we can always get back to our previous checkpoint.

So we got into the habit of starting off new assignments with “git init .”.

Recently, though, I decided I wanted to host a copy of her assignments on my home file server, so we could check out the assignments on her computer or on mine. In the process, I decided to merge all of the separate assignments into a single git project. As a matter of principle, I wanted to preserve the change histories (diffs and author and dates — but not necessarily the old SHA hashes, which would have been impossible).

I did some searching on the topic, and I found a variety of solutions. One of them used a perl script that sent me off into the weeds of getting CPAN to work. A couple of good posts (here and here) used branches for each assignment, and then merged all of the branches together. The results were OK, but I had the problem where the assignment files started off on their own top-level directory, and then I later moved the files to their own assignment subdirectories. I really wanted to rewrite history so it looked like the files were in their own subdirectories all along.

Then I noticed that my daughter and I had misspelled her name in her original “git config --global”. Oops! This ended up being a blessing in disguise.

This last little snag got me thinking along a different track, though. Instead of using branches and merges to get my projects together, maybe I could use patches. That way, I could edit her name in the commits, and I could also make sure that files were created inside the per-assignment directories!

So I whipped up a little shell script that would take a list of existing projects, iterate through the list, generate a patch file for each one, alter the patch file to use a subdirectory, (fix the mis-spelled name), and then import all of the patches. The options we pass to git format-patch and git am will preserve the author and timestamp for each commit.

#!/bin/bash

remoteProjects="$*"

git init .

for remoteProject in $remoteProjects ; do
   echo "remote project = $remoteProject"
   subProject=$(basename "$remoteProject")
   ( cd "$remoteProject" ; git format-patch --root master --src-prefix=AAAA --dst-prefix=BBBB --stdout ) > "$subProject.patch"
   # essential file path fixes
   sed -i -e "s|AAAA|a/$subProject/|g" "$subProject.patch"
   sed -i -e "s|BBBB|b/$subProject/|g" "$subProject.patch"
   sed -i -e "s|/$subProject/dev/null|/dev/null|g" "$subProject.patch"
   # other fixes, while we're here
   sed -i -e 's/syndey/sydney/g' "$subProject.patch"
   # bring the patch into our repo
   git am --committer-date-is-author-date < "$subProject.patch"
   # clean up
   rm "$subProject.patch"
done

exit 0

I think this solution works nicely.
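The AAAA/BBBB placeholders deserve a note: git format-patch normally emits a/ and b/ path prefixes, and substituting unique markers lets the sed pass rewrite every path into the sub-project directory without accidentally matching file contents. Here is a minimal sketch of just that rewrite, pulled out as a stdin/stdout filter; the function name is mine.

```shell
# Rewrite format-patch placeholder prefixes so every path lands under
# the given sub-project directory; mirrors the sed calls in the script.
rewrite_patch_prefixes() {
    subProject="$1"
    sed -e "s|AAAA|a/$subProject/|g" \
        -e "s|BBBB|b/$subProject/|g" \
        -e "s|/$subProject/dev/null|/dev/null|g"
}
```

Usage: `rewrite_patch_prefixes hw1 < raw.patch > hw1.patch`.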

The one with the separate branches above was kind of cool because a git tree would show the work we did on each assignment. But in the end, the linear history that we produced by using patches was just as appropriate for our project, since we actually worked on a single homework assignment each week.

I suppose I could combine the two solutions by creating a branch before doing the "git am" step (git am applies a series of patches from a mailbox). That is left as an exercise for the reader.

Mark Turner : A Newbie’s Guide to Publishing: Unconscionability

February 22, 2015 05:44 PM

An author picks apart the standard publishing contract, showing how ridiculously one-sided it is.

Unconscionability also known as unconscientious dealings is a term used in contract law to describe a defense against the enforcement of a contract based on the presence of terms that are excessively unfair to one party. Typically, such a contract is held to be unenforceable because the consideration offered is lacking or is so obviously inadequate that to enforce the contract would be unfair to the party seeking to escape the contract.

If you read this blog, you know where I’m going with this. I’m going to point out some of the more one-sided, onerous terms in a standard publishing contract. And make no mistake–these are practically universal, and for the most part, non-negotiable.

via A Newbie's Guide to Publishing: Unconscionability.

Tarus Balog : SCaLE 13x – Day One

February 21, 2015 04:50 PM

Well, technically it was Day Two, but with the launch of the new OpenNMS Group website, our Meridian product, and trying to finish up my slides for my SCaLE presentation, it was the first day I actually made it to the show.

I love this show. It was the first real grassroots open source conference I ever attended (at SCaLE 5x back in 2007) and it was amazing. I haven’t been able to make as many of them as I would have liked (they scheduled one on Valentine’s Day once) but I always welcome the opportunity. This year they can accommodate 3000 attendees and while they haven’t released actual numbers, that is a lot of geeks.

I spent almost all of the day in the expo hall. We introduced the new Horizon/Meridian booth:

which I think turned out well. I also got to wander around and talk with a few of the other projects that are here. One was the Kodi team:

and having used it for several weeks now I think it is an amazing piece of software. I also got to talk briefly with Jeremy Sands, one of the organizers of the SouthEast LinuxFest:

and I should point out that the dates have been set for the conference this year (12-14 June) and the RFP is now open.

My talk at SCaLE is about the changing nature of open source, and there has never been a better time to be involved if you want a job. At most shows I see signs like this:

and there is even a career booth hosted by Disney, of all companies:

We had a nice amount of booth traffic. The OpenNMS shirts were gone in the first hour (should have brought more) and, in honor of MC Frontalot performing on Saturday night, we are giving away signed sets of all six of his CDs.

The Friday winner was Ganeshbaba who registered at the very last minute, but we still have two more sets to give away.

Anyway, if you are at the show be sure to stop by and if you aren’t, well, why the heck aren’t you here?

Mark Turner : Is this the fuel cell that will crack the code to the data center? | Gigaom

February 20, 2015 07:31 PM

Microsoft is exploring putting fuel cells directly in datacenter racks and skipping the DC/AC/DC conversion.

The controversial idea of using fuel cells to power data centers has been under discussion for the past couple of years. Probably the most famous project out there is Apple’s 10 MW fuel cell farm, which uses 50 fuel cells from Silicon Valley startup Bloom Energy installed next to its east coast data center in North Carolina.

But Microsoft is just starting to kick off a pretty unusual and innovative project using fuel cells and data centers that could some day draw a lot of interest. Microsoft is working with young startup Redox Power Systems and using a grant from the Department of Energy’s ARPA-E program, to test out Redox’s fuel cells to power individual server racks within a data center.

via Is this the fuel cell that will crack the code to the data center? | Gigaom.

Mark Turner : Reporters on the CIA take

February 20, 2015 05:55 PM

The story of Ken Dilanian playing footsie with the CIA brought to mind a comment I heard a few years back from someone in a position to know who insisted that news anchor Ted Koppel was a paid CIA asset. That was quite an extraordinary claim but I did not follow up and I could not find much evidence on the web to back it up.

It is not, however, a new phenomenon. Legendary journalist Carl Bernstein wrote a lengthy story about improper CIA involvement with the media. Wikipedia describes “Operation Mockingbird” as a CIA plan to influence media and speaks of it in the past tense, though there is no indication that the operation has ended. Perhaps it hasn’t.

Mark Turner : AP reporter soft-pedals phone key theft

February 20, 2015 05:32 PM

Ken Dilanian

Associated Press Intelligence reporter Ken Dilanian reports on the NSA/GCHQ’s theft of mobile phone keys, as reported by The Intercept.

WASHINGTON (AP) — Britain’s electronic spying agency, in cooperation with the U.S. National Security Agency, hacked into the networks of a Dutch company to steal codes that allow both governments to seamlessly eavesdrop on mobile phones worldwide, according to the documents given to journalists by Edward Snowden.

via AP News | The Times-Tribune | thetimes-tribune.com.

Dilanian’s soft-pedaling arrives in the second paragraph:

A story about the documents posted Thursday on the website The Intercept offered no details on how the intelligence agencies employed the eavesdropping capability — providing no evidence, for example, that they misused it to spy on people who weren’t valid intelligence targets. But the surreptitious operation against the world’s largest manufacturer of mobile phone data chips is bound to stoke anger around the world. It fuels an impression that the NSA and its British counterpart will do whatever they deem necessary to further their surveillance prowess, even if it means stealing information from law-abiding Western companies.

Dilanian claims there is “no evidence” that intelligence agencies “misused it to spy on people who weren’t valid intelligence targets.” However, by qualifying this with “employed the eavesdropping capability,” he glosses over the fact that GCHQ targeted a mobile phone security company accused of no wrongdoing whatsoever. Intelligence agencies targeted innocent employees of Gemalto in what was clearly an extra-legal activity.

Dilanian also fails to tell his audience why this is a serious issue. All the reader gets is “experts called it a major compromise in mobile phone security.” No “experts” are quoted although credible security experts are pretty easy to find. It might be helpful for readers to know that they can now have zero confidence that their communications can’t be monitored but he didn’t find it worth mentioning.

Interestingly, Dilanian has a history of being cozy with intelligence agencies. Last September, The Intercept broke the news that FOIA documents showed Dilanian was collaborating with the CIA, sending them entire stories before he published them and even requesting feedback from agency officials on improving his stories. This is a major breach of journalistic ethics and apparently continued even after he left the Los Angeles Times for the AP.

When confronted with the CIA allegations, Dilanian responded:

“I shouldn’t have done it, and I wouldn’t do it now,” he said. “[But] it had no meaningful impact on the outcome of the stories. I probably should’ve been reading them the stuff instead of giving it to them.”

Riiiight. So in hindsight you should’ve read your stories to them because then you wouldn’t have gotten caught, is that it? For a reporter who writes about intelligence he doesn’t seem to have much of his own.

Dilanian posted this on his Twitter feed:

The NSA’s job is to break the laws of other countries by stealing info that helps US security. Still, this one looks bad.

You know what really looks bad? Being a journalist with a lack of ethics.

Mark Turner : The VA’s crystal ball

February 20, 2015 01:56 PM

VA diagnosis by crystal ball

The Veterans Administration is the most amazing medical system anywhere, bar none. I had always been under the impression that rendering a diagnosis required a doctor but somehow the VA can do it without one.

After years of mysterious health issues, I finally got mad enough two weeks ago to file paperwork to enroll in VA coverage. A day or two after mailing my paperwork I was delighted to receive a phone call from a VA representative who helpfully set me up with an appointment. Having long worked in customer service, I was impressed with my representative’s knowledge of his job and his rapport with his customer. In fact, I was already working on a blog post and even considered sharing my praise with Rep. David Price. All was looking up until I got this fancy-looking, full-color customized booklet in the mail yesterday. On page five was the bad news:

“Nonservice-connected.”

I was aghast. The multiple-page application Form 10-10 EZ I had labored over never asked any medical questions, did not include any request for release of my civilian medical records, and yet somehow VA determined that my ills are nonservice-connected.

That’s some crystal ball they’ve got there. Right off the bat I’m on the defensive. Helpfully included with my pamphlet are two forms which describe the appeal process. My appeal window is one year and the clock is already ticking.

Though Form 10-10 EZ asks nothing about medical history, it is heavy on net worth questions. I am fortunate to be well-compensated for my work and I have no problem shouldering my fair share of any treatment costs. I can’t help but wonder if the VA saw the numbers and decided I didn’t matter, though. How can VA diagnose anyone based on their wealth?

And what about this full-color booklet? It was printed with my unique information in it, name, case disposition, and other details. It must have cost a relative fortune to print this, but for what? If you see your name printed in an expensive booklet and it says right there that you’re a chump, are you going to feel encouraged to contest it? How many others would take this booklet as gospel?

It was only after I left the Navy that I came to appreciate the skills I had picked up near the end of my service, the skills successful people use to navigate huge bureaucracies. There is the official way of getting things done and then there’s the backchannel way. It’s about who you know, making allies of those who hold the key to your solution. Obviously I have to brush up on these skills.

It looks like I’m in for a long slog. To keep up with it, I’m creating a new category on the blog: VA. I will share my journey since it might help someone else.

Mark Turner : The Great SIM Heist: How Spies Stole the Keys to the Encryption Castle

February 20, 2015 01:44 AM

NSA hacked SIM card manufacturer Gemalto and stole millions of encryption keys without the company’s knowledge. While I don’t particularly mind NSA targeting bad guys (that’s why we have NSA), I consider hacking the good guys to get the bad guys to be very poor form.

I am not surprised that this took place on Obama’s watch, either. His record is just as bad as George W. Bush’s. Perhaps worse.

The monitoring of the lawful communications of employees of major international corporations shows that such statements by Obama, other U.S. officials and British leaders — that they only intercept and monitor the communications of known or suspected criminals or terrorists — were untrue. “The NSA and GCHQ view the private communications of people who work for these companies as fair game,” says the ACLU’s Soghoian. “These people were specifically hunted and targeted by intelligence agencies, not because they did anything wrong, but because they could be used as a means to an end.”

via The Great SIM Heist: How Spies Stole the Keys to the Encryption Castle.

Mark Turner : Lenovo shipping laptops with pre-installed adware that kills HTTPS | CSO Online

February 19, 2015 04:24 PM

Whoops. Lenovo shipped computers with adware that breaks ALL SSL on its laptops. Not only that, but the private key is also widely available, meaning anyone can spoof any website on an unsuspecting Lenovo owner’s computer. Major security fail!

Lenovo is in hot water after it was revealed on Wednesday that the company is shipping consumer laptops with Superfish Adware pre-installed. Security experts are alarmed, as the software performs Man-in-the-Middle attacks that compromises all SSL connections.

It’s a fact of life; PC manufacturers are paid to install software at the factory, and in many cases this is where their profit margin comes from. However, pre-installed software is mostly an annoyance for consumers. Yet, when this pre-installed software places their security at risk, it becomes a serious problem.

via Lenovo shipping laptops with pre-installed adware that kills HTTPS | CSO Online.

Update: More technical info here and here.

Eric Christensen : RC4 prohibited

February 19, 2015 03:25 PM

Originally posted on securitypitfalls:

After nearly half a year of work, the Internet Engineering Task Force (IETF) Request for Comments (RFC) 7465 is published.

What it does, in a nutshell, is disallow the use of any kind of RC4 ciphersuite, in effect making all servers or clients that use one non-standard-compliant.

View original
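In practice, complying with RFC 7465 on a web server is a one-line cipher-list change. A hedged example for nginx (the cipher string is OpenSSL syntax, and the `HIGH:!aNULL` baseline is purely illustrative, not a hardening recommendation):

```nginx
# Refuse RC4 (and anonymous ciphers) per RFC 7465; the rest of the
# string is an illustrative baseline, not a complete hardening policy.
ssl_ciphers 'HIGH:!aNULL:!RC4';
ssl_prefer_server_ciphers on;
```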


Mark Turner : New Snowden Docs Indicate Scope of NSA Preparations for Cyber Battle – SPIEGEL ONLINE

February 18, 2015 03:10 AM

Germany’s Der Spiegel published Snowden documents last month that describe an NSA project to modify hard drive firmware for spying purposes. This pretty much fingers the NSA as the “Equation Group” Kaspersky mentioned in its report.

Normally, internship applicants need to have polished resumes, with volunteer work on social projects considered a plus. But at Politerain, the job posting calls for candidates with significantly different skill sets. We are, the ad says, "looking for interns who want to break things."

Politerain is not a project associated with a conventional company. It is run by a US government intelligence organization, the National Security Agency (NSA). More precisely, it’s operated by the NSA’s digital snipers with Tailored Access Operations (TAO), the department responsible for breaking into computers.

via New Snowden Docs Indicate Scope of NSA Preparations for Cyber Battle – SPIEGEL ONLINE.

Mark Turner : Equation Group: NSA-linked spying team have software to hack into any computer – News – Gadgets and Tech – The Independent

February 17, 2015 07:52 PM

Astonishing. The apparent creators of Stuxnet have learned how to alter the firmware in hard drives to hide spying software in hidden sectors.

The US security services have developed software that has enabled it to spy on home computers almost anywhere in the world. Russian researchers at Kaspersky Lab have claimed that the software gave those behind it, thought to be the US National Security Agency, the power to listen in on the majority of the world’s computers.

It could be installed on practically any of the world’s most common hard drives and spy on the computer while going undetected.

It was used to break in to government and other important institutions in 30 countries across the world, they claim.

via Equation Group: NSA-linked spying team have software to hack into any computer – News – Gadgets and Tech – The Independent.

Update 10:20 PM: Read Kaspersky’s blog post on the Equation Group and its Equation Group Q&A [PDF].

Mark Turner : Why Tesla’s battery for your home should terrify utilities | The Verge

February 14, 2015 07:39 PM

Tesla and SolarCity are working on a residential battery that might let people drop off the electric grid completely. The utilities are sweating.

Earlier this week, during a disappointing Tesla earnings call, Elon Musk mentioned in passing that he’d be producing a stationary battery for powering the home in the next few months. It sounded like a throwaway side project from someone who’s never seen a side project he doesn’t like. But it’s a very smart move, and one that’s more central to Musk’s ambitions than it might seem.

via Why Tesla's battery for your home should terrify utilities | The Verge.

Mark Turner : MicLoc – DIY acoustic triangulation

February 13, 2015 06:22 PM

On the East CAC Facebook page, some neighbors recently asked if the police department was using acoustic triangulation systems for tracking gunfire. I responded that systems like ShotSpotter were interesting but that the police department couldn’t afford the $300k cost.

Ah, the joys of open source! It turns out one enterprising hacker has built his own Arduino-based triangulation system using easy-to-obtain parts. This has me thinking that if a few neighbors here and there were willing to station these near their homes, the fixes that could be plotted would be extremely accurate. Even a small network of these would do wonders. In this way, neighbors could be helping to fight crime in their area without actually having to do anything. It sounds like a great solution!
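For intuition, the core relation these systems exploit is time difference of arrival (TDOA). This is a standard multilateration sketch, not taken from the MicLoc docs: a sound source at position p and two microphones at m_1 and m_2 with measured arrival-time difference Δt satisfy

```latex
% TDOA constraint: the source lies on one branch of a hyperbola
% whose foci are the two microphones.
\lVert p - m_1 \rVert - \lVert p - m_2 \rVert = c\,\Delta t,
\qquad c \approx 343~\mathrm{m/s}
```

Each microphone pair contributes one such hyperbola, and intersecting the curves from three or more microphones pins down the source. Sub-meter precision therefore hinges on timestamping arrivals to well under 3 ms, since sound covers a meter in roughly 2.9 ms.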

MicLoc is an effort to develop a device capable of passively identifying a sound-based event’s position on a given map, therefore pinpointing its location. The whole idea is to achieve this goal with everyday electronics and reduced development costs. With the advent of small, affordable, powerful microprocessors and electronics in general, this technology now seems accessible to potential commercial applications and general public use. The main goals of this project are:

  • Develop a low cost, compact device capable of identifying a sound source location on a map with sub-meter precision.
  • Develop, detail and open-source the hardware and plans used so anyone can build this device.
  • Develop, detail and open-source the software needed to interface the device with a computer.

via rural hacker: MicLoc.

Mark Turner : Google Cloud and latency

February 13, 2015 02:53 AM

Since I’ve been having so much fun with Amazon Web Services, I thought I would check out Google’s offering, called Google Cloud. I’ve only had a trial running with it for about 24 hours but so far it seems solid. The server I am using is fast and has good connectivity to Google’s servers, which is a good thing.

What is a bad thing, however, is that my hosted server has very poor connectivity to me. The round-trip ping time is about 55ms, whereas AWS with its Ashburn, VA datacenter gets me 25ms. Huge difference! Also, my AWS instance has 14 routers to navigate before it gets to me, but my Google Cloud instance travels through a whopping 24 routers. Those packets bounce around like ping pong balls! I was hoping that with Google’s company-owned fiber network and datacenters located here in North Carolina I would get faster response times. No such luck … yet.
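For what it’s worth, you can reproduce this kind of comparison yourself even where ICMP ping is blocked, by timing TCP handshakes instead. A rough sketch (the function name is mine; point it at any reachable host and open port):

```python
import socket
import time

def tcp_rtt(host, port=443, samples=5):
    """Estimate round-trip latency by timing full TCP handshakes; a rough
    stand-in for ping when ICMP is filtered. Returns the best observed
    time in milliseconds."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        # create_connection completes the three-way handshake, so the
        # elapsed time is roughly one round trip plus setup overhead
        with socket.create_connection((host, port), timeout=2):
            pass
        times.append((time.perf_counter() - start) * 1000.0)
    return min(times)  # the minimum filters out scheduling noise
```

Taking the minimum of several samples is deliberate: any single handshake can be inflated by scheduling or retransmission, but the fastest one approaches the true path latency.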

Why “yet”? Well, Google Fiber is coming to the Triangle, in case you’ve been under a rock. I’m hopeful that once I’m on the Google Fiber network, my latency to Google Cloud will drop considerably, perhaps to under 1ms. This invites all sorts of innovations. Give clever developers fat resources located close (on the network, anyway) to their audience and some interesting things start to happen.

Google Fiber could be the fire that lights off Google Cloud. I figure it’s worth checking out the new landscape now so that I can get in on the game.

Mark Turner : Up to speed on Amazon Web Services

February 13, 2015 02:43 AM

I’ve been getting up to speed on Amazon Web Services over the past few weeks. With the end-of-year bonus I got from work, I put down the money for a 3-year reserved instance, gaining a hefty hosted server for a remarkably low price.

I’d had an Amazon instance for a few months just to kick the tires. However, once my reserved instance was purchased, it took me a while to figure out that Amazon had changed its virtualization technology, and to take advantage of the new instance I would have to convert my existing image to a completely new one. The blocker was that the CentOS-based AMI I used seemed locked: the root drive couldn’t be mounted on a new instance, so I had to copy everything over using the old instance.

I created my new instance completely from scratch, using a recipe that helped me build it from the ground up. Now that I have a good base to start from I can build some useful AMIs and share them with others. I hope to make a Rivendell Radio Automation AMI someday so that people can launch their own online radio station with a few clicks.

I’ve also dug into the wonder that is S3, creating an s3fs “filesystem” on my Linux instance for serving up music for my Rivendell install. I will eventually do the same for the media included here on MT.net and push that to CloudFront.
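For reference, an s3fs mount can be made persistent with an fstab entry; the bucket name and mount point below are hypothetical, and credentials are assumed to live in /etc/passwd-s3fs (mode 600):

```
# /etc/fstab: mount the hypothetical bucket "my-music-bucket" at /mnt/music
s3fs#my-music-bucket /mnt/music fuse _netdev,allow_other,passwd_file=/etc/passwd-s3fs 0 0
```

The _netdev option matters: it tells the init system to wait for networking before attempting the mount.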

The cool thing about the cloud is that it’s a geek’s ultimate laboratory. It’s incredibly easy and cheap to spin up computer sessions. I can play with technologies without having to commit to them long-term. I’m having a lot of fun with it.

I’m particularly proud that I was able to migrate the server that hosts my neighborhood email lists from a locally-hosted server over to AWS without any of my neighbors knowing I’d done it. I guess twenty years of sysadmin experience pays off every now and then!

Tarus Balog : OpenNMS Horizon 15.0.1 Released

February 12, 2015 06:06 PM

Just a quick note to let everyone know that OpenNMS 15.0.1 has been released. This is the first bug fix release for OpenNMS 15, and if you are running it I strongly suggest you upgrade.

As we are working to complete our transition to Hibernate (which will allow OpenNMS to use any database backend, not just PostgreSQL) we discovered an old issue where, under certain circumstances, duplicate outage records could be created. When this happened under the new code, it would cause an exception and the outages would never be cleared. This has been corrected.

The complete list of changes is as follows:

Bug

  • [NMS-7331] – Outage timeline does not show all outages in timeframe
  • [NMS-7392] – Side-menu layout issues in node resources
  • [NMS-7394] – Outage records are not getting written to the database
  • [NMS-7395] – Overlapping input label in login screen
  • [NMS-7396] – Notifications with asset fields on the message are not working
  • [NMS-7399] – Surveillance box on start page doesn't work
  • [NMS-7403] – Data Collection Logs in wrong file
  • [NMS-7406] – Incorrect Availability information and Outage information
  • [NMS-7409] – Visual issues on the start page
  • [NMS-7423] – Duplicate copies of bootstrap.js are included in our pages
  • [NMS-7425] – Poller: start: Failed to schedule existing interfaces
  • [NMS-7426] – Not monitored services are shown as 100% available on the WebUI
  • [NMS-7427] – The PageSequenceMonitor is broken in OpenNMS 15
  • [NMS-7432] – Normalize the HTTP Host Header with the new HttpClientWrapper
  • [NMS-7433] – Topology UI takes a long time to load after login
  • [NMS-7434] – Disabling Notifd crashes webUI
  • [NMS-7435] – The Quick Add Node menu item shouldn't be under the Admin menu
  • [NMS-7437] – The default log level is DEBUG instead of WARN on log4j2.xml
  • [NMS-7452] – CORS filter not working
  • [NMS-7454] – Netscaler systemDef will never match a real Netscaler

Enhancement

  • [NMS-7419] – Read port and authentication user from XMP config
  • [NMS-7438] – Apply the auto-resize feature for the timeline charts

Warren Myers : my tech predictions for 2015

February 12, 2015 02:18 PM

I put these up as a comment on Cringely.com – but they deserve sharing here, too.

In no particular order:
– AIX EoL’d
– HP-UX retired
– Itanium EoL’d (perhaps on an accelerated schedule)
– Solaris truly open-sourced / abandoned by Oracle in favor of OEL
– HP spins-off more business units
– IBM loses 25-35% of its value – and spins-off / sells more business units to make Wall Street happy
– POWER continues to slow; IBM doesn’t understand it needs to stop putting so much money into it until all the engineers have been fired
– z/OS systems grow dramatically – the only place IBM makes *more* money
– people finally realize “cloud” isn’t a “thing” – it’s just renting crap when you need it (perhaps from yourself (private cloud)) and giving it back when you don’t
– cloud hosting providers cut prices so things like AWS instances are no longer more expensive than dedicated hardware (see eg http://benmilleare.com/how-shaving-0-001s-from-a-function-saved-us-400-dollars)
– enough of the Old Guard hits retirement age that New School tech can finally make big inroads into stodgy businesses and government (automation, cloud, *aaS, etc)
– buzzword-compliance becomes necessary even for mom-and-pop shops who don’t have computers
– Android 6 brings native, “real” 3D to cell phones
– … and iOS 9 makes it look “good”
– there’s a new MacBook Flex that offers touchscreen, a fold-flat-reverse form factor, and 12 hours of battery life; the iPad 5 is the first 5K-resolution tablet, with a full day of battery life
– Mac OS 10.11, aka Denali, allows users to run iOS apps via a “fat binary” model (harking back to the shift from 68k to PowerPC and then from PowerPC to x86)
– Apple announces the first non-x86 Macs (starting with the Flex)
– Apple buys a car company in cash – Porsche or Hyundai (Hyundai would be the smart move – get more electronics manufacturing capability in-house; spin off the heavy industry wing)
– Tesla introduces a model that non-millionaires can afford – bringing snazzy competition to the Volt price point
– SpaceX sends a mission to Venus, and another to Mars
– Square opens an online bank
– Uber and Lyft grow, win cases against taxi companies – and local competition pops up all over the country
– several major metro areas across the US enter the “gigacity” club
– … but it’s led by smaller metro areas (like Chattanooga has already done)

Mark Turner : RALEIGH: Senate plan would cut NC gas tax | State Politics | NewsObserver.com

February 10, 2015 03:24 PM

Our state legislature is considering cutting our state gasoline tax when we should be doubling it. How unfortunate.

Also, I’m not happy with Bruce Siceloff’s story about it, as he doesn’t explain why our state’s gasoline tax is so high. North Carolina maintains one of the largest state highway systems in the country, second only to Texas in miles of state-maintained roads. That’s why North Carolina’s gas taxes are higher than neighboring states’. Shame on you, Bruce, for failing to mention this fact.

The legislature has moved twice over the past decade to put an upper limit on rising gas tax rates. But in 2009, a tax ceiling that had been enacted two years earlier was converted to a floor to close a gap in the DOT budget. Without that action in 2009, the tax rate would have dropped from 29.9 to 27.9 cents.

North Carolina’s gas tax is one of the highest in the nation. The highway use tax collected at the time of car sales, another major source of road money, is lower in North Carolina than in neighboring states.

via RALEIGH: Senate plan would cut NC gas tax | State Politics | NewsObserver.com.

Update: As I noted then, the N&O’s editorial board mentioned this back in May 2012:

“There’s a good reason why our gas tax is so hefty. State government here, due to a policy with roots in the Depression, bears a much greater share of local road expenses than in most states. North Carolina ranks second only to Texas in miles of state-maintained roadways. This policy serves to lighten the load on county governments and is reflected in their relatively low tax rates.”

I feel it is only fair that when our state’s high gas tax is mentioned, our state’s gigantic, state-owned highway system should be mentioned, too.

Mark Turner : Brian Williams and lies about Iraq

February 10, 2015 02:55 PM

There’s a lot being made of NBC News anchor Brian Williams having claimed he was in a helicopter in Iraq that made an emergency landing after being hit by enemy fire. I give Williams a pass. He has made a living telling other people’s stories, stories he did not write. After reading thousands of these over the years, it must become difficult to keep straight what one did and what one only read or saw. It does not diminish my perception of Williams if his helicopter wasn’t hit as he claimed. In the heat of it all, it becomes difficult to piece together what’s what.

As the photo above attests, it would be a shame if Williams were the only one punished for lying about Iraq. There are presidents, vice presidents, cabinet officials and, yes, news media outlets that buried everyone under lie upon lie about Iraq. Williams’s faux pas is tame by comparison.

Hanging Brian Williams out to dry for Iraq lies is like making Martha Stewart the fall guy for insider trading. The worst offenders get away.

Mark Turner : Dean Smith passes away

February 08, 2015 03:21 PM

Dean Smith speaks with Erskine Bowles


Dean Smith, legendary basketball coach of the team I love to beat (the Tar Heels), passed away last night at the age of 83. Though I’m a Wolfpack fan, I had a lot of respect for Coach Smith. You knew when your team beat his it was something special because he always had his teams prepared.

I was fortunate to stand behind him at the Kerry-Edwards rally at N.C. State on July 10, 2004. It was unbearably hot and he was sweating through his dress shirt. I asked him if the heat bothered him and he smiled and said it was actually his bad knees that bothered him. We were on risers with no seats, and at that moment I wanted to flag down an organizer and demand a seat be provided to Coach Smith.

Mark Turner : How RadioShack Helped Build Silicon Valley | WIRED

February 07, 2015 09:21 PM

My friend Laura Leslie posted a classic advertisement for the RadioShack TRS-80, complete with absurdly-high price tags. It reminded me of RadioShack’s Chapter 11 bankruptcy filing on Thursday, and of how different I’d be if it weren’t for RadioShack.

RadioShack was once every geek’s Mecca for electronics. Much of our digital world would not exist if it weren’t for RadioShack’s inspiration on a generation of geeks and tinkerers. Wired.com takes a fond look back at how many of our modern-day tech giants spent their formative years browsing the aisles at their local RadioShack.

Today, RadioShack filed for Chapter 11 bankruptcy. Part of a coming reorganization will involve co-branding as many as 1,750 stores with Sprint, one of the company’s largest creditors, and will almost certainly result in the closing of many others. While the RadioShack name may live on, its original spirit is probably gone for good. As it goes, so goes one of the unsung heroes of a generation of tinkerers and builders, a key piece of the Silicon Valley tech-boom puzzle.

via How RadioShack Helped Build Silicon Valley | WIRED.

Mark Turner : Street closing hints of Google Fiber disruption

February 05, 2015 07:24 PM

Traffic backs up on Edmund St.



Tuesday night, street crews began blocking off Glascock Street and side streets in preparation for a traffic-calming and sewer line replacement project. Glascock’s traffic was detoured down the normally serene side street of Edmund, where traffic now roared down the 25 MPH road. Understandably, the neighbors were livid over this gigantic disruption, especially since no notice was given to the community beyond the few neighbors who live on Glascock itself. Hopefully in the future the city will choose to notify the neighbors on the detour street, too, as they are impacted just as strongly as those on the street getting the construction.

The whole mess got me thinking of what it might be like in the next few years when Google Fiber gets started here in earnest. Tuesday’s closure affected just one block whereas Google likely will be tearing things up everywhere. How will people react to this kind of disruption happening all over town?

Mark Turner : Google Fiber and an FCC decision could give more people cheaper access to the Internet | News Feature | Indy Week

February 05, 2015 05:50 PM

Indyweek talked with Erica Swanson, head of Google Fiber’s Community Impact programs, about bringing broadband to all income levels.

The bad news about Google Fiber coming to seven cities in the Triangle is that the high-speed Internet service won’t be installed in your neighborhood by the next season of House of Cards.

The good news is that Google Fiber says it will seek out traditionally underserved communities—low-income, minority, non-English speaking areas, where some residents don’t have home Internet at all.

About 60 million people in the U.S. don’t have Internet at home, according to the Pew Research Center. In cities, that number is 1 in 4. For some, a computer and a connection are too expensive; others say they don’t need it—the Internet has no place in their lives.

That might change, hinging on Google’s expansion plans, along with a pending decision by the FCC, that could give more people cheaper access to the Internet.

"Affordable connectivity, that’s the piece we can address," says Erica Swanson, Google’s head of Community Impact Programs.

via Google Fiber and an FCC decision could give more people cheaper access to the Internet | News Feature | Indy Week.

Mark Turner : R-Line envy

February 04, 2015 06:55 PM

Speaking of transit, I see that the marketing director for Cameron Village is trying to drum up support for diverting the R-Line buses from the original mission of serving downtown Raleigh. I’m all for improved transportation around Cameron Village because trying to drive anywhere around there is a nightmare. That said, I’m not sure extending the R-Line is the answer.

The R-Line buses came about through a joint effort of the Raleigh Transit Authority, the Downtown Raleigh Alliance (DRA), and the Greater Raleigh Convention and Visitors Bureau (GRCVB). All three groups helped make the R-Line possible. Cameron Village is not part of DRA and I don’t see that they do much with the GRCVB. Is the shopping center proposing to help pay for this extended service the way these other groups have? If so, I haven’t heard it. It would be great to get everything for free, but someone has to pick up the tab.

Cameron Village already has city bus service (two routes, 12 and 16). It makes sense to improve this existing service and leave the R-Line to do what it’s been doing: giving visitors an easy way to get around downtown Raleigh. That’s why downtown businesses subsidize it.

Mark Turner : N&O’s Christensen gets light rail wrong

February 04, 2015 06:24 PM

The N&O’s Rob Christensen makes the classic light rail vs. commuter rail blunder in this week’s column. If the media can’t even properly explain the difference between light rail and commuter rail, how do we ever expect the public to understand?

When it comes to a light-rail system for Raleigh, label me a skeptic.

I am a believer in buses, and I think our bus system should be expanded and more bus shelters erected.

Before we sink huge bundles of money into a light-rail system, I think a stronger case needs to be made, given our limited resources.

He also misidentifies the real problem with our bus system, which is that it’s unusable to all but those who have no other choice. I’ve written about that before.

via Christensen: Raleigh needs buses, not rail | Rob Christensen | NewsObserver.com.

Mark Turner : Taking aim at Gage’s Google Fiber op-ed

February 04, 2015 04:48 PM

I submitted this letter to the editor to the N&O today. I trust they’ll agree with it and run it to correct the errors in the abysmal op-ed they ran last week.

Dawson Gage’s recent opinion piece about Google Fiber was deeply flawed. No public infrastructure is being “handed over” to Google. In actuality, Google will buy or build its infrastructure like any other provider. Gage also alleges Google was “deeply involved in the illegal, secret surveillance” when in fact much evidence exists to the contrary. Furthermore, how Gage can suggest that broadband hasn’t enriched our lives is bizarre and puzzling.

I know Google Fiber’s arrival is exciting news but let’s keep our heads, please.

Mark Turner
Raleigh, NC
Founder, Bring Google Fiber to Raleigh! Facebook Group

Update 6 Feb: The N&O ran my letter today and gave it the headline “Google all good.” I’m not sure I’d go that far, but at least someone has now set the record straight. On the same page, though, another letter writer repeated Gage’s “public giveaway” premise. Sigh.

Tarus Balog : Review: 2015 Dell XPS 13 (9343) Running Linux

February 03, 2015 11:51 PM

In short, it doesn’t run Linux very well. (sigh)

When and if Eric reads this he’s just going to shake his head. For two years in a row now I’ve been lured by the wonders of new laptops announced at CES, and in both years I’ve been disappointed. He tells me I’m stupid for ordering the “new shiny” and expecting it to work, but I refuse to give up my dream.

Luckily this isn’t a huge issue for me since my main machines are desktops, but my second generation Dell XPS 13 “sputnik” is getting a little old. I am really looking forward to a slightly larger screen. The pixel density isn’t great on my laptop, especially compared to what is out now, and I am finding myself a little cramped for screen real estate.

The new XPS 13 is an amazingly beautiful device. I spent over three days trying to get it to work just because it was gorgeous. It had become precious to me.

My precious.

But it was not to be. I first started out with my default desktop, Linux Mint. It installed easily and I was very happy to see that code had been added to deal with the insane size of the screen (3600×1800 pixels). While a few icons were still small (like the reload arrow at the end of the Firefox search bar) most adjusted well, including the icons in the settings window. Great job Cinnamon team.

No, the issue I fought long and hard to fix was the touchpad. Every minute or so it would just freeze:

Feb  1 13:15:48 sting kernel: [ 1746.787178] psmouse serio1: resync failed, issuing reconnect request
Feb  1 13:15:52 sting kernel: [ 1750.722621] psmouse serio1: TouchPad at isa0060/serio1/input0 lost sync at byte 1
Feb  1 13:15:52 sting kernel: [ 1750.723734] psmouse serio1: TouchPad at isa0060/serio1/input0 lost sync at byte 1
Feb  1 13:15:52 sting kernel: [ 1750.724642] psmouse serio1: TouchPad at isa0060/serio1/input0 lost sync at byte 1
Feb  1 13:15:52 sting kernel: [ 1750.725717] psmouse serio1: TouchPad at isa0060/serio1/input0 lost sync at byte 1
Feb  1 13:15:52 sting kernel: [ 1750.737756] psmouse serio1: TouchPad at isa0060/serio1/input0 - driver resynced.
Feb  1 13:15:55 sting kernel: [ 1753.855093] psmouse serio1: TouchPad at isa0060/serio1/input0 lost synchronization, throwing 2 bytes away.
Feb  1 13:15:55 sting kernel: [ 1754.361293] psmouse serio1: resync failed, issuing reconnect request

I found a post that discussed changing out the driver, which seemed to help some, but I could never get the problem to go away completely. The amazingly helpful Arch Linux folks suggested some workarounds, but nothing helped. I found it ironic that the touch screen worked fine.
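One class of workaround that gets suggested for flaky PS/2 touchpads is forcing the psmouse driver into a simpler protocol via a kernel parameter. Whether it helps the 9343 specifically is an assumption on my part, and it trades away multitouch features; on a GRUB-based system the change would look like:

```
# /etc/default/grub: add psmouse.proto=imps to the kernel command line,
# then run "sudo update-grub" and reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash psmouse.proto=imps"
```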

I then switched to Ubuntu, thinking that might help. It didn’t, and along the way I lost audio. It seemed the audio device would just disappear. I tried 14.04, 14.10 and the alpha of 15.04. Also, Ubuntu did not handle the resolution well. While I could adjust the settings, it wasn’t done automatically for me like with Cinnamon, and certain things like the settings window remained tiny and somewhat “clipped”.

I went back to Mint and discovered that now I had wonky audio issues there, too. Sometimes the audio device would be there and other times not. I stayed on 17.1 but updated the kernel to the 3.19 release candidate; that didn’t help either.

The scariest issue was that on occasion the screen would just go blank. It didn’t kill the system – if I was playing a movie file I could still hear the audio (assuming audio was working) – but no combination of keystrokes would bring it back. I did find that closing the lid (to suspend) and reopening it would fix it for a while, but I don’t necessarily want to have to do that in the middle of an important presentation.

Note: while the system seemed to suspend and resume okay, the power light didn’t blink to let you know it was still on like on the older XPS 13 model.

Now I’m certain that most of this will be corrected in the next few months. The Broadwell chipset is still pretty new, and rumor has it that Dell plans to support Ubuntu 14.04 on this laptop, but they will have a lot of work to do since it seems to require the 3.18+ kernel for most of the new shiny.

In the meantime I returned it and bought an M3800 preloaded with Ubuntu. While it is a bigger laptop than I’m used to, I like supporting Linux-native products and I will at least have the ability to contact Dell with issues should they arise.

I should point out that, while not quite to Apple standards, Dell has been pretty amazing throughout the process of ordering and returning this laptop. The XPS 13 isn’t ready for prime time yet, but if you are in the market in a couple of months for a small, awesome Linux laptop, be sure to check it out. Unless you are a masochist like me, though, you definitely should wait.

Oh, and if any Dell folks should join the ranks of my three readers, I’m more than happy to test any unit you might send my way (grin).

Mark Turner : What does it Mean to be a Gig City? Upload Speeds Powering Entrepreneurs — Next Century Cities

February 03, 2015 05:20 PM

Remember when I pointed out the secret sauce of Google Fiber is the upload speeds? Will Aycock, operations manager of Wilson’s Greenlight community broadband system, agrees.

It’s all about the upload. If you are the owner of a small engineering business with dense blueprints to send to your European clients, or a specialized country doctor who depends on the quick transmission of x-rays, a digital film effects company, or a media artist, your ability to upload your dense information to your clients means business. For GigCity, Wilson, North Carolina, offering gigabit upload speeds to its community is real business for its future.

via What does it Mean to be a Gig City? Upload Speeds Powering Entrepreneurs — Next Century Cities.

Mark Turner : The FCC is moving to preempt state broadband limits – The Washington Post

February 02, 2015 06:47 PM

It looks increasingly likely that the FCC will overturn North Carolina’s anti-municipal-broadband law, freeing cities like Wilson, NC to provide broadband to whomever they choose.

Federal regulators are moving ahead with a proposal to help two cities fighting with their state governments over the ability to build public alternatives to large Internet providers.

The Federal Communications Commission this week will begin considering a draft decision to intervene against state laws in Tennessee and North Carolina that limit Internet access operated and sold by cities, according to a senior FCC official. The agency’s chairman, Tom Wheeler, could circulate the draft to his fellow commissioners as early as Monday and the decision will be voted on in the FCC’s public meeting on Feb. 26.

Chairman Wheeler just released the following statement:

FCC Chairman Tom Wheeler issued the following statement today regarding a proposed Order on community broadband that he will circulate to his fellow commissioners this week:

“Communities across the nation know that access to robust broadband is key to their economic future – and the future of their citizens. Many communities have found that existing private-sector broadband deployment or investment fails to meet their needs.

They should be able to make their own decisions about building the networks they need to thrive. After looking carefully at petitions by two community broadband providers asking the FCC to pre-empt provisions of state laws preventing expansion of their very successful networks, I recommend approval by the Commission so that these two forward-thinking cities can serve the many citizens clamoring for a better broadband future.”

I wonder if this means the FCC can also veto the spending limitations that state laws have shackled municipalities with.

via The FCC is moving to preempt state broadband limits – The Washington Post.

Mark Turner : Vets study links PB pills, genetic variations to Gulf War illness | TribLIVE

February 01, 2015 03:11 AM

A government-issued pill intended to protect troops from nerve agents may have made some troops more vulnerable to a chronic condition marked by headaches, cognitive problems, pain and fatigue, researchers say.

People with certain genetic variations were 40 times more likely to contract Gulf War illness if they took pyridostigmine bromide, or PB, pills that the Defense Department issued to protect them from soman, a nerve agent, during the 1990-91 war, researchers concluded in a study funded by the U.S. Army Medical Research and Materiel Command and published this month in the journal Environmental Health.

via Vets study links PB pills, genetic variations to Gulf War illness | TribLIVE.

Mark Turner : Yes, Walking Through A Doorway Really Does Make You Forget — Here’s Why – Forbes

January 31, 2015 09:31 PM

I forgot to post this earlier.

More often than I care to admit, I’ll walk from one room to another with a clear vision in mind of whatever I need to do once I get there, but then I get there and can’t remember why I started. The only thing that happened between my first movement and my last is that I walked through a doorway. Surely that has nothing at all to do with forgetting something I knew just moments before, right?

Wrong, says new research. As it turns out, walking through a doorway exerts an imperceptible influence on memory. In fact, merely imagining walking through a doorway can zap memory.

via Yes, Walking Through A Doorway Really Does Make You Forget — Here's Why – Forbes.

Mark Turner : Is It Time To Kill The K-Cup, Before It Kills Our Planet?

January 31, 2015 06:50 PM

We have these coffee machines at work and they sure do generate a lot of trash for the amount of coffee they brew.

Your Keurig coffee pods have a dirty little secret. Actually, make that a big secret.

In 2013, Keurig Green Mountain produced 8.3 billion K-Cups that were brewed on millions of machines around the world — enough to circle the globe 10.5 times. Last year, production rose to nearly 9.8 billion, and most of those pods are not recyclable.

A new video made by Canadian production company Egg Studios takes a look at the environmental impact our coffee addiction has created. Titled "Kill The K-Cup," the short showcases a dystopian future where a single-use coffee pod monster destroys everything in its path.

via Is It Time To Kill The K-Cup, Before It Kills Our Planet?.

Tarus Balog : SCaLE 13x – February 2015

January 30, 2015 03:20 PM

We are three weeks away from the Southern California Linux Expo and I am getting really excited about it.

For those of you who are into OpenNMS, tune in, because we are making a pretty significant announcement at the show. Be sure to come by the booth on the expo floor and say “hi” to the team. Both Jeff and I will be speaking (although at least during my talk you probably have better things to go see. For example, have you met our Lord and Savior, Docker?)

We are also incredibly excited that MC Frontalot will be performing. I’m not sure of the exact details but I believe it will be Saturday night.

(Note: I stole that picture from here since I like the fact that he has hair in it, well for certain values of “hair”, and note that link may not be safe for work [nudity])

If you are unfamiliar with his work, be sure to check out his YooToob Channel, and if you are so inclined I strongly recommend reading this well-written bit (on Jezebel, no less) concerning an issue surrounding a Penny Arcade comic a few years ago that really showcases the type of guy he is. Again, might not be safe for work (language). Be sure to click on the link to the original post for more detail.

If you are still on the fence about SCaLE, perhaps this little nugget will sway you: use Promo Code “ONMS” and get 40% off show registration. It’s cheap at twice the price and one of my favorite events of any year, but we want it to be extra special for 2015.

Jesse Morgan : Geomorph Theory

January 29, 2015 06:43 PM

*Random Crazy Person Thought of the Day: Ultra-specialized Geomorphs and Naming Conventions*

Standard Geomorph

A geomorph has 4 sides and connects on all four of them via two entryways or “ports” per side. It looks a little bit like an octothorpe/hash with the center filled in (#).


Base2 Geomorph Sets

While Standard Geomorph tiles are cool, there’s no way to close the system. To do this, you need to introduce the concept of a side being open (has two connecting ports) or closed (has no connecting ports).

Since there are only two options per side, we can represent each side with a binary digit – 0 for closed, 1 for open. This lets us represent our tile as a four-digit binary number, which has 16 possible states (0000, 0001, 0010, 0011, 0100, 0101, 0110, 0111, etc.). If we allow for rotation, we can reduce the total number of unique tiles needed, i.e. 1000, 0100, 0010 and 0001 can all be represented with the same one-sided geomorph by turning it.

With the addition of rotation, we can reduce our 16 down to 6 unique configurations:

0000= Sealed Geomorph
0001= One-sided Cave Geomorph
0011= Two-sided Corner Geomorph
0101= Two-sided Tunnel Geomorph
0111= Three-sided Edge Geomorph
1111= Four-sided Standard Geomorph


Suppose we wanted to create and store hundreds of tiles with these configurations. How would we store them? The most logical way is to create directories based on their configuration, which could be named after the binary numbers above. If you needed an eastern wall, you could translate it to 1011, which is simply a three-sided Edge geomorph with a 90-degree clockwise rotation. You could then snag a random three-sided edge tile from 0111/ and simply rotate it.
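The translate-and-rotate lookup can be made concrete in a few lines; a sketch (the function is hypothetical, assuming the four-digit side encoding above):

```python
def canonical(sides):
    """Return the lexicographically smallest rotation of a 4-digit side
    string, plus the number of quarter-turns that produced it.
    For example, '1011' normalizes to '0111' after one turn."""
    return min((sides[i:] + sides[:i], i) for i in range(4))
```

Running every 4-bit string through this collapses the 16 raw configurations into the 6 unique ones listed above, which is exactly the reduction the directory scheme relies on.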

Base4 Geomorph Set

While this is neat, you can take it a step further with segmented geomorphs, which track the state of individual left and right ports:

 

00= both closed
01= first open
10= second open
11= both open

The addition of these two new states forces us to use 2 bits to represent state per side, or 8 bits total to represent a tile.


This means there are 256 different configurations for tiles. This can be reduced not only by rotation, but by flipping:

10 00 00 00 = Top left open
01 00 00 00 = Top right open (the above tile, flipped on its Y axis)

(Also note, flipping along the X and Y axis has the same effect as rotating 180 degrees.)

00 10 00 00 = right top open
00 01 00 00 = right bottom open (flipped along X axis)
00 00 00 01 = left top open (flipped along Y axis)
00 00 00 10 = left bottom open (rotated 180 degrees)
00 00 00 10 = left bottom open (flipped along X and Y axis)

 

By the time you add in flipping and rotating, we end up with significantly fewer than 256 tiles. How many? I have no idea; the math is beyond me right now without drawing them all out. What I can say is that we can represent them with Base4 notation:

0= both closed
1= first open
2= second open
3= both open
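The open question above — how many tiles survive once rotations and flips are folded together — can be brute-forced. The sketch below treats a tile as its 8 port bits in clockwise order around the perimeter, so a 90-degree rotation is a shift by two positions and a mirror flip is a reversal of that cycle; both encoding choices are my assumptions, not the post's:

```python
from itertools import product

def variants(tile):
    """The 8 square symmetries of an 8-port tile:
    4 rotations, each with and without a mirror flip."""
    out = []
    for t in (tile, tile[::-1]):    # reversal acts as a diagonal flip
        for k in range(4):          # one side per rotation = shift by 2
            out.append(t[2 * k:] + t[:2 * k])
    return out

# Canonical representative = smallest symmetry; count the distinct ones.
classes = {min(variants("".join(bits))) for bits in product("01", repeat=8)}
print(len(classes))  # 43
```

Under those assumptions, the 256 raw configurations collapse to 43 distinct tiles.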


This allows us to represent every tile category with only 4 digits. Looking at what we’ve represented previously:

0000= Sealed Geomorph
0003= One-sided Cave Geomorph
0033= Two-sided Corner Geomorph
0303= Two-sided Tunnel Geomorph
0333= Three-sided Edge Geomorph
3333= Four-sided Standard Geomorph

But we could also represent things like:

  • a pinwheel configuration: 1111
  • a crooked fork in the road: 0302
  • a narrow corridor: 0102


Among others.

Base8 Geomorphs

Let’s take it another step: let’s say that the solid center part between the two ports was changeable, essentially giving us 3 ports per side; three binary positions give us a total of 8 combinations per side.

000 = all closed
001 = right open
010 = center open
011 = center and right open
100 = left open
101 = left and right open  (standard geomorph)
110 = center and left open
111 = all three open

With 3 bits per side, that gives us a total of 12 bits to represent a geomorph; if I remember my Base2 properly, that’s 4096 possible configurations (again, far fewer once rotations and flips are allowed). We could still represent our standard configurations with only 4 digits if we use octal:

0000= Sealed Geomorph
0005= One-sided Cave Geomorph
0055= Two-sided corner Geomorph
0505= Two-sided Tunnel Geomorph
0555= Three-sided Edge Geomorph
5555= Four-sided Standard Geomorph
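The "far fewer" reduction can be brute-forced the same way for any port count: encode the ports clockwise around the perimeter, rotate by shifting one side's worth of positions, flip by reversing. A generalized sketch (the perimeter encoding is my assumption, not the post's):

```python
from itertools import product

def reduced_count(ports_per_side):
    """Count tile classes under the square's 4 rotations and 4 flips,
    with `ports_per_side` binary ports on each of the 4 sides."""
    n = ports_per_side

    def variants(tile):
        syms = []
        for t in (tile, tile[::-1]):    # reversal = diagonal flip
            for k in range(4):          # rotation = shift by one side (n ports)
                syms.append(t[n * k:] + t[:n * k])
        return syms

    all_tiles = product("01", repeat=4 * n)
    return len({min(variants("".join(t))) for t in all_tiles})

print(reduced_count(1))  # 6 — matches the Base2 result above
print(reduced_count(3))  # 570 of the 4096 Base8 configurations
```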

In addition we could create neat things like plusses, crosses, Y’s, trees, etc.



Base32 Geomorphs

If we wanted to take this one last insane step further, we could introduce the idea of ultra-specialized geomorphs, where the 2 solid edges of each side are turned into ports. This means there are 5 binary areas (open or closed) per side, which translates to 32 configurations per side, meaning we can use base32 to encode each of the four sides with a simple four-letter code.

To this end, you could represent a “regular” geomorph side with the binary representation, i.e. 01010, which is 10 in decimal and A in base32. This means a regular geomorph tile would be encoded as AAAA.
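Note this assumes the “0–9 then A–V” base32 digit set, where A is 10 (unlike RFC 4648 base32, where A is 0). A quick check of the side-to-digit encoding:

```python
DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUV"  # base32 digits, A = 10

def side_to_digit(bits):
    """Encode one 5-port side (a 5-bit string) as a base32 digit."""
    return DIGITS[int(bits, 2)]

regular = side_to_digit("01010")  # ports: closed, open, closed, open, closed
print(regular * 4)                # AAAA — the standard geomorph tile code
```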

0000= Sealed Geomorph
A000= One-sided Geomorph
AA00= Two-sided Corner Geomorph
A0A0= Two-sided Tunnel Geomorph
AAA0= Three-sided Edge Geomorph
AAAA= Four-sided Standard Geomorph

So, the final tally? Five binary ports on 4 sides is 20 bits of data per tile; that’s over a million (2^20 = 1,048,576) different variations. My brain hurts now.

Until I sat down and did the math, I thought 5-bit-sided geomorphs were doable. Now I see how wrong I was.

 

Mark Turner : N&O runs horrible broadband op-ed

January 29, 2015 05:57 PM

The Google Fiber op-ed that ran in today’s N&O entitled “Google Fiber deal not in best interest of NC public” is so godawful that I don’t even know where to begin. Written by Dawson Gage, who calls himself an “IT worker, freelance writer, and aspiring law student,” it is incredibly misinformed on so many levels:

I rejoiced when my family first got broadband Internet when I was about 13, but I doubt it has made any of our lives richer or more productive. The usefulness of computers, for the most part, has little enough to do with how fast they are. No one wants delivery vans and school buses that go 20,000 mph.

Is Gage actually suggesting that life isn’t richer than in the days of dialup? Before YouTube, Netflix, Wikipedia, Facebook, and Google? Apparently, having a mind-blowing amount of the world’s information instantly available hasn’t made life rich or productive enough for him. I bet he’s a big fan of the abacus.

In light of this, a massive dose of skepticism is appropriate. The upshot of the Google deal is that an enormously valuable piece of public infrastructure, which ought to be owned in common by the public, is handed over lock, stock and barrel to a private company based in California.

Do what, now? There is no “public infrastructure” being handed over to Google or anyone else. Google is getting nothing for free here. It’s paying its franchise fees, permits, taxes, and other costs just like any other company. I have no idea what Gage means here.

This same company was deeply involved in the illegal, secret surveillance of all our Internet usage by the NSA.

Well, no. Google reached out to NSA for help when the company found it had been hacked by the Chinese government. Soon afterward, when Edward Snowden’s documents revealed NSA was tapping directly into Google’s unsecured internal networks, the company angrily protested and immediately set about encrypting all of its links. This was the subject of an extensive story in June in the New York Times. What Gage wrote is patently false.

Its entire business model is founded on the premise that Google has the right to meticulously monitor and record every morsel of data that passes within its reach.

Google’s business model is to make money. They do this very well with advertising but it’s not all Google does. Part of Google’s mission seems to be pushing technological boundaries. This results in innovations like Google Earth, Google Maps, Google Voice, and Google Fiber. Sometimes these ideas don’t pan out (like Google Glass), but not everything they do is to support their advertising business.

Moreover, the law passed by the General Assembly to make public municipal Internet services illegal (save for that of Wilson, which was grandfathered in) is itself testament to the fact that public alternatives are feasible and sustainable. Indeed, at the time of that bill’s passage, the town of Chapel Hill was already laying its own high-speed fiber, which now presumably will be annexed by Google.

Well, no, again. See above. Municipally-owned networks will stay municipally-owned, and the same law Gage cites prevents cities from letting commercial entities use their networks even if they wanted to.

At the time of the law’s passage in 2011, its proponents argued that municipal or other government involvement in providing Internet service was “an interference in the free market.” Last time I checked, lobbying the government to outlaw an entire sector of potential competition was not much of a “free-market” approach. What erstwhile advocates of “free market” principles in the realm of infrastructure actually believe in is a doctrine of private ownership as an unchallenged system.

Why not simply contract Google – or even better, some of the many competent North Carolina businesses – to build a high-speed fiber network, which would then be owned by the public? Would any of us wish to drive on privately owned toll roads? Those who stand to benefit and, yes, profit from such ventures as the Google plan would prefer we did not ask such questions.

These passages echo the broadband op-ed I wrote back in 2011. Nothing new here.

And do we not imagine that Google views owning our Internet infrastructure as a fantastic bonanza of the data on which it feeds? Google Fiber is a business venture, not an act of philanthropy.

Yes, it’s a business, and Gage implying Google is only interested in monitoring its Google Fiber customers is not only unsupported by any evidence but goes a little into the tinfoil-hat arena.

Appeals to the virtues of the market are hollow in the cases of government-anointed monopolies like Google or, for that matter, Duke Energy. In the era of Gov. Pat McCrory, I understand that many of those in power see the private ownership of public infrastructure as a beau-ideal, part of the natural order.

Again, Google wasn’t awarded any monopoly here. Nor, strictly speaking, was Time Warner Cable (as much as I hate to admit it). The cities that succeeded in attracting Google did so by working through a checklist of requirements Google needed to determine whether a deployment was possible. This was what spurred on the N.C. Next Generation Network (NCNGN) effort: to figure out how to streamline these requirements. AT&T was the first company to agree to the NCNGN terms and has started rolling out its own fiber deployment. There is nothing preventing Google from also agreeing to the NCNGN terms and providing Google Fiber under this agreement. Google, however, has preferred to do things its own way in its previous deployments and I’m betting its Triangle deployments will be similar.

And for the last time, there is no “private ownership of public infrastructure” going on here.

Gage might want to check his facts before penning another op-ed, and maybe the N&O should pay a little more attention before it runs one like this.

Mark Turner : Google Fiber: fast download AND upload speeds

January 29, 2015 02:37 PM

Most of the local news stories I’ve read about Google Fiber coming to Raleigh highlight the ability to “download YouTube videos quickly.” Quickly downloading the stuff you’ve always downloaded is cool, but it isn’t an Earth-shattering use case. The real value of Google Fiber is that Google treats the Internet the way it should be treated – like a two-way street.

Other broadband providers will sell you fast connections, but those connections are strictly asymmetrical. You may get a 15 Mbps download speed but only a 1 Mbps upload speed. You see, Big Telecom wants to treat you as a “consumer,” meaning you’ll take whatever the media companies choose to give you. They don’t think of you as having anything to bring to the conversation.

Google Fiber is different. Not only can you get 1 Gbps download speeds, you also get equally fast 1 Gbps upload speeds! Your download and upload speeds are equal, exactly how God intended. You become a full partner in the Internet, able not only to download a multitude of cat videos from YouTube at blazing speeds but also to offer up your own. Or, you can hold videoconferences with your friends without being interrupted by buffering. Or play video games with others without sluggishness.

When last year’s dreary, snow-filled winter kept everyone home, I had the hankering to hold a guitar-picking session with some of the musicians in the neighborhood. I thought it would be cool to do this over the Internet, but such coordination could never happen with traditional, compression-filled video solutions because the timing would always be off. With a fat pipe like Google Fiber on either end, a jam session could be held with neither side missing a beat. This would ordinarily only be possible with expensive, time-locked (and … well, ancient) technology like T1 or ISDN lines.

Couple this with the impromptu jam sessions we’ve seen around town during Raleigh’s new showcase event, the International Bluegrass Festival, and I predict you’ll soon see some really cool musical collaborations that wouldn’t ordinarily be possible. I would love to see roving teams of broadband broadcasters out beaming street music into the homes of viewers right as it happens.

The beauty of Google Fiber is that it enables everyone to contribute to the Internet. So, rather than thinking in terms of fast video downloads, imagine what fast upload speeds now make possible.

Tarus Balog : Twitter

January 29, 2015 11:23 AM

After a long absence, I thought I’d let my three readers know I’m back on Twitter (as @sortova). Don’t expect much from me since I can’t even say “I’m back on Twitter” in less than 140 characters, and I tend to echo the sentiments of John Cleese on the subject, but it should allow those of you with nothing better to do more things with which to do nothing.

Mark Turner : An Introduction to Google Fiber

January 29, 2015 02:22 AM


One of the most useful things I got out of yesterday’s Google Fiber press conference (well, aside from a sweet Google Fiber water bottle) is an insightful booklet called “An Introduction To Google Fiber.” It basically spells out what the next steps are for the Google Fiber rollout.

Of particular interest is the question of “how do I get Google Fiber in my neighborhood?” Google’s answer?

Our approach is to build where people want us.

Fiber optic cable will travel into your neighborhood into boxes called telecom cabinets. One of these cabinets can serve you and a few hundred of your neighbors with Fiber — we call this grouping your “fiberhood”.

That’s where you come in. For us to bring Google Fiber to you — i.e. for us to light up your local telecom cabinet with working Google Fiber service and then for us to bring that service right down the street and up to your house — you and your neighbors first need to tell us you want us. Each fiberhood will have a sign-up goal that you can see on our website by entering your address — and the process is transparent, so you and your neighbors can see how close your fiberhood is to the goal.

After you and your neighbors reach your goal, we’ll be able to bring fiber the last mile (or so) from the cabinet to your home.

Wondering why we do it this way? It’s because we focus our energy on a handful of fiberhoods at once, doing an all-out installation and construction blitz. We do this so we can provide you with better, faster service; we won’t make you wait around for a crew that’s stuck across town. After we’re done in one fiberhood, we’ll move on to the next.

Already, word of the Google Fiber signup page has lit up neighborhood email lists, Facebook pages, and NextDoor pages all across town. Geeks in Cary have organized a MeetUp to engage their neighbors in the signup strategy. Now that Raleigh has worked its way towards achieving Gig City status, it’s amusing to me to see neighborhoods vying amongst themselves to be the first “fiberhoods.”

I spoke with Erik Garr, Head of Google Fiber Raleigh/Durham, at tonight’s reception. He insisted that the Google Fiber rollout would not simply target Raleigh’s wealthiest neighborhoods first. Instead, Google will include neighborhoods of all economic means. Mr. Garr emphasized that Google would be making good use of its free service. This approach makes me happy as it will mean Google Fiber’s presence will help bridge the “digital divide” rather than increase it (exponentially).

I highly encourage you to read the rest of the nuggets contained in Google’s booklet, downloadable from the City of Raleigh’s website.

Happy Fiber Hunting!

Tarus Balog : Welcome to OpenNMS 15

January 28, 2015 09:31 PM

Today OpenNMS 15 was released. It was a year and a half between the release of OpenNMS 1.12 and OpenNMS 14, but only three months between OpenNMS 14 and OpenNMS 15.

As we move forward this year we are trying to adhere more to the open source mantra of “release early, release often”, and thus the new major release. There have been 1177 new commits since 14.0.3.

You’ll also notice that this version of OpenNMS has a new name – Horizon. We’ve always thought that OpenNMS represents the best network management platform available and the name is meant to reflect that. We hope to make as many improvements as we can, as fast as we can, without sacrificing quality, thus keeping OpenNMS out on the “horizon” ahead of the competition.

The main improvement for the 15 release is in the webUI. Although you might not notice it at first, we’ve spent months migrating the whole interface to a technology called Bootstrap. The Bootstrap framework allows us to create a responsive UI that should look fine on a computer, a tablet or a phone. This should give us a lot more freedom to modify the style sheet, and we hope to be able to add “skinnable” theme options soon.

A cool feature that can be found in this new UI is the ability to automatically resize resource graphs. If you have a particular set of resource graphs displayed:

and then you shrink the window, you’ll note that the menu turns into a dropdown and the graphs themselves now fit the more narrow window:

There are a number of bug fixes and other new features, and a complete list can be found at the bottom of this post or in our Jira instance (but for some reason you have to be logged in to see it). I am happy to say that there was no need for major security fixes in this release. (grin)

Sub-task

  • [NMS-6642] – CiscoPingMibMonitor
  • [NMS-6674] – NetScalerGroupHealthMonitor
  • [NMS-7060] – merge DocuMerge branch into develop branch
  • [NMS-7086] – alter documentation deploy step in bamboo to match the new structure
  • [NMS-7164] – Fix fortinet event typos (fortinet vs fortimail)
  • [NMS-7238] – Fix UEI names for CitrixNetScaler trap events
  • [NMS-7264] – Document CORS Support

Bug

  • [NMS-1956] – Missing localised time in web pages
  • [NMS-2358] – Time to load Path Outages page grows with each entry added
  • [NMS-2580] – Null/blank sysName value causes null/blank node label
  • [NMS-3033] – Create a HibernateEventWriter to replace JdbcEventWriter
  • [NMS-3207] – Able to get to non authorised devices via path outages link.
  • [NMS-3615] – Custom Resource Performance Reports not available
  • [NMS-3847] – jdbcEventWriter: Failed to convert time to Timestamp
  • [NMS-4009] – wrong content type in rss.jsp
  • [NMS-4246] – Paging arrows invisible with firefox on mac
  • [NMS-4493] – Notification WebUI has issues
  • [NMS-4528] – Time format on Event webpage is different that on Notices webpage
  • [NMS-5057] – Installer database upgrade script (install -d) scans every RRD directory, bombs with "too many open files"
  • [NMS-5427] – RSS feeds are not valid
  • [NMS-5618] – notifications list breadcrumbs differs from notifications index page
  • [NMS-5858] – Resource Graphs No Longer Centered
  • [NMS-6022] – Vaadin Header not consistent with JSP Header
  • [NMS-6042] – Empty Notification search bug
  • [NMS-6472] – Map Menu is not listing all maps
  • [NMS-6529] – Web UI shows not the correct Java version
  • [NMS-6613] – Problems installing "Testing" on Ubuntu 14.04
  • [NMS-6826] – Queued Ops Pending default graph needs rename
  • [NMS-6827] – Many graph definitions in snmp-graph.properties have line continuation slashes
  • [NMS-6894] – New Focal Point Topology UI (STUI-2) very slow
  • [NMS-6917] – Node page availability graph isn't "(last 24 hours)"
  • [NMS-6924] – WMI collector does not support persistence selectors
  • [NMS-6956] – test failure: org.opennms.mock.snmp.LLDPMibTest
  • [NMS-6958] – Requisition list very slow to display
  • [NMS-6967] – GeoMap polygons activation doesn't accurately reflect cursor location
  • [NMS-7015] – Navbar in Distributed Map is missing
  • [NMS-7059] – Local interface not displayed correctly in "Cdp Cache Table Links"
  • [NMS-7075] – xss in device snmp settings
  • [NMS-7112] – provision.pl just works if the admin user credentials are used
  • [NMS-7115] – Message Error in DnsMonitor
  • [NMS-7120] – Unable to add graph to KSC report
  • [NMS-7126] – ReST call for outages ends up with 500 status
  • [NMS-7144] – OpenNMS logo doesn't point to the same file
  • [NMS-7149] – footer rendering is weird in opennms docs
  • [NMS-7170] – Add a unit test for NodeLabel.computeLabel()
  • [NMS-7176] – ie9 does not display any 'interfaces' on a switch node – the tabs are blank
  • [NMS-7185] – NullPointerException When Querying offset in ReST Events Endpoint
  • [NMS-7246] – OpenNMS does not eat yellow runts
  • [NMS-7270] – HTTP 500 errors in WebUI after upgrade to 14.0.2
  • [NMS-7277] – WMI changed naming format for wmiLogicalDisk and wmiPhysicalDisk device
  • [NMS-7279] – Enable WMI Opennms Cent OS box
  • [NMS-7287] – Non provisioned switches with multiple VLANs generate an error
  • [NMS-7322] – SNMP configuration shows v1 as default and v2c is set.
  • [NMS-7330] – Include parts of a configuration doesn't work
  • [NMS-7331] – Outage timeline does not show all outages in timeframe
  • [NMS-7332] – Unnecessary and confusing DEBUG entry on poller.log
  • [NMS-7333] – Switches values retrieved incorrectly in the BSF notification strategy
  • [NMS-7335] – QueryManagerDaoImpl crashes in getNodeServices()
  • [NMS-7359] – Acknowledging alarms from the geo-map is not working
  • [NMS-7360] – Add/Edit notifications takes too much time
  • [NMS-7363] – Update Java in OpenNMS yum repos
  • [NMS-7367] – Octectstring not well stored in strings.properties file
  • [NMS-7368] – RrdDao.getLastFetchValue() throws an exception when using RRDtool
  • [NMS-7381] – Authentication defined in XML collector URLs cannot contain some reserved characters, even if escaped.
  • [NMS-7387] – The hardware inventory scanner doesn't recognize PhysicalClass::cpu(12) for entPhysicalClass
  • [NMS-7391] – Crash on path outage JSP after DAO upgrade

Enhancement

  • [NMS-1595] – header should always contain links for all sections
  • [NMS-2233] – No link back to node after manually unmanaging services
  • [NMS-2359] – Group path outages by critical node
  • [NMS-2582] – Search for nodes by sysObjectID in web UI
  • [NMS-2694] – Modify results JSP to render multiple columns
  • [NMS-5079] – Sort the Path Outages by Critical Path Node
  • [NMS-5085] – Default hrStorageUsed disk space relativeChange threshold only alerts on a sudden _increase of free space_, not a decrease of free space
  • [NMS-5133] – Add ability to search for nodes by SNMP values like Location and Contact
  • [NMS-5182] – Upgrade JasperReports 3.7.6 to most recent version
  • [NMS-5448] – Add link to a node's upstream critical path node in the dependent node's web page
  • [NMS-6508] – Event definitions: Fortinet
  • [NMS-6736] – ImapMonitor does not work with nginx
  • [NMS-7123] – Expose SNMP4J 2.x noGetBulk and allowSnmpV2cInV1 capabilities
  • [NMS-7157] – showNodes.jsp should show nodes in alphabetical order
  • [NMS-7166] – Backup Exec UEI contain "http://" in uei
  • [NMS-7205] – Rename link to configure the Ops Board in the Admin section.
  • [NMS-7206] – Remove "JMX Config Generator Web UI ALPHA" from stable
  • [NMS-7228] – Document that user must be in 'rest', 'provision' or 'admin' role for provision.pl to work
  • [NMS-7247] – Add collection of SNMP MIB2 UDP scalar stats
  • [NMS-7261] – CORS Support
  • [NMS-7278] – Improve the speed of the ReST API and Service Layer for the requisitions' repositories.
  • [NMS-7308] – Enforce selecting a single resource for Custom Resource Performance Reports
  • [NMS-7317] – Rearrange Node/Event/Alarm/Outage links on bootstrap UI
  • [NMS-7384] – Add configuration property for protobuf queue size
  • [NMS-7388] – IpInterfaceScan shouldDetect() method should check for empty string in addition to null string

Mark Turner : N&O Editors miss Hatem hypocrisy

January 28, 2015 06:29 PM

I was disappointed to read the N&O’s take in this editorial.

Greg Hatem is an acquaintance of mine. He’s done a tremendous job helping kick-start downtown Raleigh’s renaissance, investing when others would not. He’s earned some respect and should have his say.

On this issue, though, I must respectfully disagree with Greg. Downtown has continued to grow since those days when Empire Properties was the only game in town. Greg’s businesses have grown and thrived as well in this new, noisier downtown Raleigh. Heck, his businesses have contributed more than their share to the noise and revelry. For Greg Hatem to have played such a large role (as well as profited) in popularizing downtown and now complain about its success seems a tad hypocritical, doesn’t it?

It mystifies me how the editors at the News and Observer failed to see this irony.

When someone heads a company with 40 buildings and 500 employees connected to downtown Raleigh, getting the Raleigh City Council’s attention is fairly easy.

And Greg Hatem – whose company owns the restaurants Sitti, Gravy, The Pit and the Raleigh and Morning Times, along with many other properties – has earned that attention. Hatem’s involvement with downtown Raleigh goes back to a time when it was by no means certain that the city would see the boom it has. Hatem took big chances and got big returns.

But he’s moving his family, which includes younger children, out of a Fayetteville Street apartment into the Oakwood neighborhood near downtown. Why? The noise and party aftermath have made downtown, he says, "unlivable." He doesn’t like the idea of his family waking up to the garbage and other remnants of the previous night’s revels.

via Lower the volume on Raleigh's boom | Editorials | NewsObserver.com.

Mark Turner : Wake Forest police address concerns about ‘stranger danger’ cases :: WRAL.com

January 28, 2015 06:07 PM

Wake Forest Police have expressed exasperation with citizens sharing information on Facebook about a recent spate of “stranger danger” incidents. The incidents involve men driving a silver or gray SUV and trying to lure kids into the vehicle.

It’s a very frightening situation and any parent’s worst nightmare. People are afraid and rightfully so. They want answers, and if the police aren’t giving them then these folks will fill the void using social media outlets like Facebook and NextDoor.

I’ve seen how social media can help solve crimes. It works. Nothing helps police efforts like citizens working together. Instead of dismissing it as “hearsay,” Wake Forest PD should embrace social media as a “force multiplier” to solve crimes. If there are rumors that should be quashed, they should go online and set the record straight. It’s a new world we live in, after all.

Leonard said the police department has received other reports on social media that investigators have looked into, noting that they have had to use resources to track down "inaccurate information and hearsay."

"If you see something that looks suspicious in your neighborhood, call the police department first rather than posting it on Facebook," Leonard said.

via Wake Forest police address concerns about 'stranger danger' cases :: WRAL.com.

Warren Myers : merging centos iso images

January 28, 2015 05:56 PM

Thanks to @Anon on Unix.SE for the pointer on how to do this. And to @Andy‘s comment on @mmckinst‘s answer for the warning about additional packages you may need.

As my three readers know, I run a CentOS mirror. One of the idiosyncrasies of CentOS, like its upstream RHEL, is that DVD ISOs aren’t always just one image – for example, the 6.6 x64 image comes on two ISOs. I suppose this has something to do with the “normal” or “simple” capacity of a DVD disc, but it’s annoying.

Enter the mkdvdiso.sh script (original found here) from Chris Kloiber & Phil Schaffner.

The process I used to combine these two ISOs into one is as follows:
yum install isomd5sum createrepo mkisofs
mkdvdiso.sh /full/path/to/original/isos /full/path/to/destination.iso

For posterity, and in case the CentOS wiki dies, below is the mkdvdiso.sh script:

#!/bin/bash

# by Chris Kloiber 
# Mods under CentOS by Phil Schaffner 

# A quick hack that will create a bootable DVD iso of a Red Hat Linux
# Distribution. Feed it either a directory containing the downloaded
# iso files of a distribution, or point it at a directory containing
# the "RedHat", "isolinux", and "images" directories.

# This version only works with "isolinux" based Red Hat Linux versions.

# Lots of disk space required to work, 3X the distribution size at least.

# GPL version 2 applies. No warranties, yadda, yadda. Have fun.

# Modified to add sanity checks and fix CentOS4 syntax errors

# TODO:
#   Add checks for available disk space on devices holding output and
#       temp files.
#   Add optional 3rd parameter to specify location of temp directory.
#   Create .discinfo if not present.

OS_VER=\
$((test -e /etc/fedora-release && rpm -qf /etc/fedora-release --qf "FC%{VERSION}") \
|| (test -e /etc/redhat-release && rpm -qf /etc/redhat-release --qf "EL%{VERSION}") \
|| echo OS_unknown)

case "$OS_VER" in
  EL[45]*|FC?)
        IMPLANT=/usr/lib/anaconda-runtime/implantisomd5
        if [ ! -f $IMPLANT ]; then
            echo "Error: $IMPLANT Not Found!"
            echo "Please install anaconda-runtime and try again."
            exit 1
        fi
        ;;
  EL6*|FC1?)
        IMPLANT=/usr/bin/implantisomd5
        if [ ! -f $IMPLANT ]; then
            echo "Error: $IMPLANT Not Found!"
            echo "Please install isomd5sum and try again."
            exit 1
        fi
        ;;
  OS_unknown)
        echo "Unknown OS."
        exit 1
        ;;
  *)
        echo "Fix this script for $OS_VER"
        exit 1
esac

if [ $# -lt 2 ]; then
        echo "Usage: `basename $0` source /destination/DVD.iso"
        echo ""
        echo "        The 'source' can be either a directory containing a single"
        echo "        set of isos, or an exploded tree like an ftp site."
        exit 1
fi

DVD_DIR=`dirname $2`
DVD_FILE=`basename $2`

echo "DVD directory is $DVD_DIR"
echo "ISO file is $DVD_FILE"

if [ "$DVD_DIR" = "." ]; then
    echo "Destination Directory $DVD_DIR does not exist"
    exit 1
else
    if [ ! -d "/$DVD_DIR" ]; then
        echo "Destination Directory $DVD_DIR must be an absolute path"
        exit 1
    else
        if [ "$DVD_FILE" = "" ] || [ -d "$DVD_DIR/$DVD_FILE" ]; then
            echo "Null ISO file name."
            exit 1
        fi
    fi
fi

which mkisofs >&/dev/null
if [ "$?" != 0 ]; then
    echo "mkisofs Not Found"
    echo "yum install mkisofs"
fi

which createrepo >&/dev/null
if [ "$?" != 0 ]; then
    echo "createrepo Not Found"
    echo "yum install createrepo"
fi

if [ -f $2 ]; then
    echo "DVD ISO destination $2 already exists. Remove first to recreate."
    exit 1
fi

# Make sure there is enough free space to hold the DVD image on the filesystem
# where the home directory resides, otherwise change ~/mkrhdvd to point to
# a filesystem with sufficient free space.

cleanup() {
    [ ${LOOP:=/tmp/loop} = "/" ] && echo "LOOP mount point = \/, dying!" && exit
    [ -d $LOOP ] && rm -rf $LOOP 
    [ ${DVD:=~/mkrhdvd} = "/" ] && echo "DVD data location is \/, dying!" && exit
    [ -d $DVD ] && rm -rf $DVD 
}

cleanup
mkdir -p $LOOP
mkdir -p $DVD

ls $1/*.iso &>/dev/null
if [ "$?" = 0 ]; then

    echo "Found ISO CD images..."

    CDS=`expr 0`
    DISKS="1"

    [ -w / ] || {   # Very portable, but perhaps not perfect, test for superuser.
        echo "Only 'root' may use this script for loopback mounts" 1>&2
        exit 1
    }

    for f in `ls $1/*.iso`; do
        mount -o loop $f $LOOP
        cp -av $LOOP/* $DVD
        if [ -f $LOOP/.discinfo ]; then
            cp -av $LOOP/.discinfo $DVD
            CDS=`expr $CDS + 1`
            if [ $CDS != 1 ] ; then
                DISKS=`echo ${DISKS},${CDS}`
            fi
        fi
        umount $LOOP
    done
else
    if [ -f $1/isolinux/isolinux.bin ]; then

        echo "Found FTP-like tree..."

        if [ -e $1/.discinfo ]; then
            cp -av $1/.discinfo $DVD
        else
# How does one construct a legal .discinfo file if none is found?
            echo "Error: No .discinfo file found in $1"
            cleanup
            exit 1
        fi
        cp -av $1/* $DVD
    else
        echo "Error: No CD images nor FTP-like tree found in $1"
        cleanup
        exit 1
    fi
fi

if [ -e $DVD/.discinfo ]; then
    awk '{ if ( NR == 4 ) { print disks } else { print ; } }' disks="ALL" $DVD/.discinfo > $DVD/.discinfo.new
    mv $DVD/.discinfo.new $DVD/.discinfo
else
    echo  "Error: No .discinfo file found in $DVD"
    cleanup
    exit 1
fi

rm -rf $DVD/isolinux/boot.cat
find $DVD -name TRANS.TBL | xargs rm -f

cd $DVD
createrepo -g repodata/comps.xml ./
mkisofs -J -R -v -T -o $2 -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 8 -boot-info-table $DVD
if [ "$?" = 0 ]; then

    echo ""
    echo "Image complete, create md5sum..."

#  $IMPLANT --force $2
# Don't like forced mediacheck? Try this instead.
    $IMPLANT --supported-iso --force $2

    echo "Start cleanup..."

    cleanup

    echo ""
    echo "Process Complete!"
    echo "Wrote DVD ISO image to $DVD_DIR/$DVD_FILE"
    echo ""
else
    echo "ERROR: Image creation failed, start cleanup..."

    cleanup

    echo ""
    echo "Failed to create ISO image $DVD_DIR/$DVD_FILE"
    echo ""
fi

Tarus Balog : OUCE 2014 Videos Now Available

January 28, 2015 03:34 PM

The dates are now set for the 2015 OpenNMS Users Conference, but if you can’t wait until September you can now relive the 2014 conference through the magic of YouTube.

You can visit the 2014 conference events calendar and if a video is available it will show up under the “Links” section.

Markus Neumann has been working through the videos and doing his best to improve them, but apologies in advance for the quality of some of them. We’ll attempt to record things better in Fulda.