An Alternative to our Broken Delegate System

We all know the primary problem with our delegate voting system–it all but eliminates the odds of a third-party win, because voters have the Sophie’s Choice of either supporting the “lesser evil” major party candidate, or voting their conscience and risking that doing so will tip the scale toward the “greater evil” candidate.

I’ve looked into alternatives such as Instant Runoff Voting (IRV), but they tend to be complicated, and would require both retooling of all existing manual and electronic voting systems and retraining the entire populace in how to vote.

My solution, Instant Vote Transfer (IVT), is much simpler. It works like this:

  1. An election is held in the usual way, with each voter choosing the candidate they think is best.
  2. Votes are counted and delegates are assigned (geographic or proportional, doesn’t matter).
  3. Candidates with no delegates exit the race.
  4. The candidate with the fewest delegates also exits, but has the option of sending their delegates to any remaining candidate of their choice.
  5. (4) above is repeated until two candidates remain.
  6. The candidate with more delegates wins.
  7. This entire process can be broadcast in real time. I believe that having all the candidates share a stage while this happens would result in more solidarity and unity as a country than the current process does.
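
If it helps to see the elimination rounds spelled out, here’s a rough sketch in C# (purely illustrative; candidate data and transfer choices are made up, and in a real election each transfer decision would come from the eliminated candidate, not from a lookup table):

using System;
using System.Collections.Generic;
using System.Linq;

class Candidate
{
    public string Name;
    public int Delegates;
    public string TransferTo; // who this candidate would send delegates to if eliminated
}

static class InstantVoteTransfer
{
    public static string Run(List<Candidate> candidates)
    {
        // Step 3: candidates with no delegates exit the race immediately.
        var remaining = candidates.Where(c => c.Delegates > 0).ToList();

        // Steps 4-5: repeat until only two candidates remain.
        while (remaining.Count > 2)
        {
            var lowest = remaining.OrderBy(c => c.Delegates).First();
            remaining.Remove(lowest);

            // The eliminated candidate may send their delegates to any remaining candidate.
            var recipient = remaining.FirstOrDefault(c => c.Name == lowest.TransferTo);
            if (recipient != null)
                recipient.Delegates += lowest.Delegates;
        }

        // Step 6: the candidate with more delegates wins.
        return remaining.OrderByDescending(c => c.Delegates).First().Name;
    }
}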

Under this system, it is impossible for a third-party candidate to “spoil” an election, because their votes are not lost; they are only transferred to the next-best choice (according to that candidate). And while, at first, the process would still likely produce wins for the centrist major parties, things get interesting as more third parties gain votes and transfer to one another rather than straight to the GOP or DNC. The major parties could easily be marginalized by a virtual coalition of third parties on election night.

Another advantage of IVT is that we no longer need party primaries. There can be 10 Republican candidates, 15 Democratic candidates, etc., because, again, splitting the vote among candidates no longer spoils the party’s chances. In fact, it may *help*, because they no longer have to pick one candidate who could alienate segments of their base (as both parties have managed to do this year). Sure, too many candidates make party messaging and funding more complicated, but we could at least see primary elections where, say, three candidates are nominated rather than one.

One other aspect of IVT I love is that it could be used even in local or state elections, where you simply replace “delegates” above with “votes.”

On the off chance that there is a tie, two things would happen:

  1. First, all losing candidates who did not send their delegates have another chance to do so. So, a third-party candidate who really does hate everyone else on the stage can initially withhold their delegates out of principle, but can still suck it up and choose the lesser evil if a tiebreaker is needed.
  2. If this does not resolve the tie (or if all losing-candidate delegates were already transferred), the candidate with the fewest (but more than zero) delegates is given back their delegates. This is guaranteed to break the tie. No Supreme Court needed.

Thoughts? Comments / trackbacks welcome.

ASP.NET Core Web API in Production on OS X

My main web site, which is mostly made up of galleries of my art photography, runs on a MAPP stack — Mac OS X, Apache, PHP, and PostgreSQL. I’ve been playing with ASP.NET Core and loving it, and I’ve been wanting to make the switch to using it for the web gallery API. I prefer C# over PHP, and it’s a good real-world project for learning the new bits, since my day job will be using legacy ASP.NET for the foreseeable future.

However, I didn’t want ASP.NET Core to run the whole show — Apache is perfectly capable of handling my static files and unrelated PHP code, I just wanted Kestrel to handle database communication for the photo galleries. Creating the photo gallery API replacement in Visual Studio Code was straightforward. With some POCOs, Dapper, and the .NET Core PostgreSQL driver, I had Kestrel serving up my JSON API on port 5000 in a Terminal window without too much effort. The future is here!

But to use it “in production,” I didn’t want Kestrel exposed directly (nor does Microsoft recommend that), and I didn’t want my web site to go down if I logged out or closed the Terminal window where the dotnet process was running. A workable solution would require (a) proxying the API through Apache, and (b) finding a way to reliably and automatically run Kestrel in the background.

Creating a Reverse Proxy in Apache

The first goal was to get Apache to call my Web API application and convey its response to the calling browser. This required setting Apache to load the proxy module by uncommenting these two lines in the Apache configuration file (/private/etc/apache2/httpd.conf):

LoadModule proxy_module libexec/apache2/mod_proxy.so
LoadModule proxy_http_module libexec/apache2/mod_proxy_http.so

The mod_proxy module requires some new configuration settings in the same file (or, if you have multiple virtual hosts, in whatever configuration file has your site’s vhost config):

ProxyPass /api/ http://localhost:5000/
ProxyPassReverse /api/ http://localhost:5000/
ProxyHTMLEnable On
ProxyHTMLURLMap http://localhost:5000 http://[your domain]/api
ProxyPreserveHost On
<Location /api/>
ProxyHTMLURLMap / /api/
</Location>

This tells Apache to send any request to http://[your domain]/api over to Kestrel and return Kestrel’s response. So, for example, a request to http://[your domain]/api/blah/1 will make Apache retrieve http://localhost:5000/blah/1 and return the response to the client. The ProxyPreserveHost option tells Apache to pass the client’s original Host header through to Kestrel rather than replacing it with its own.

Note that the client’s IP address will be sent in the “X-Forwarded-For” header (rather than the usual “REMOTE_ADDR”). Also, if your Controller uses the default attribute [Route("api/[controller]")] and you use the above configuration, your client-side endpoint will look like http://[your domain]/api/api/[controller]. I removed the extra “api/” in the attribute to solve this, but could have just adjusted the mappings above as well.

After making these changes and restarting Apache, I had a working proxy! Half of the job was done; now I just needed the dotnet process to run my application on boot.

Running Kestrel Automatically

Under OS X, the launchd service takes over the duties of old-school *nix tools such as cron. It can be used to launch processes automatically on boot, or when a user logs in. I wanted my ASP.NET Core application to launch on boot, without requiring a user login, so I needed launchd to run it as a Launch Daemon. This is done by creating a “plist” XML file. launchd looks in several folders for “*.plist” files; I put mine in /Library/LaunchDaemons/. I used the freeware LaunchControl application to create the file, but you can also create it by hand. Here’s my plist file:
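
(The label, paths, and log locations below are placeholders rather than my exact values; point them at your own dotnet binary, publish folder, and a log directory that _www can write to.)

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>dotnet.api</string>
    <!-- Run dotnet against the published DLL -->
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/share/dotnet/dotnet</string>
        <string>MyWebApi.dll</string>
    </array>
    <!-- The folder containing the published application -->
    <key>WorkingDirectory</key>
    <string>/full/path/to/where/api/application/should/launch</string>
    <!-- Run as the same account Apache uses -->
    <key>UserName</key>
    <string>_www</string>
    <!-- Log file locations -->
    <key>StandardOutPath</key>
    <string>/full/path/to/logs/dotnet-api.log</string>
    <key>StandardErrorPath</key>
    <string>/full/path/to/logs/dotnet-api-error.log</string>
    <!-- Start at boot and relaunch the process if it exits -->
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>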

This tells launchd to run the dotnet process from the directory where the ASP.NET Core application is published (by default, in ./bin/Debug/netcoreapp1.0/publish/ relative to your project folder) and load the application (which is actually a DLL file in Core-world). It starts dotnet using the _www user account, the same one used to run Apache, so it’s important that this account has proper access to that directory. The other configuration options here set up the log files. I actually haven’t seen anything written to stderr from dotnet — errors appear to be going to stdout in the current version. There’s also an option that tells launchd to relaunch the process if it exits.

Since my project files are in my user folder, I didn’t want _www having permissions there, and I wanted to be able to quickly relaunch the process if I make changes. To handle both of these issues, I created a shell script in my main project folder:

sudo pkill -f "dotnet MyWebApi.dll"
rsync -a --delete bin/Debug/netcoreapp1.0/publish/ /full/path/to/where/api/application/should/launch
sudo launchctl load /Library/LaunchDaemons/dotnet.api.plist

This does the following:

  1. Kills the existing dotnet process, if it is running. pkill -f uses the full command line to find the process(es) to kill, unlike killall, which would only be able to kill all dotnet processes.
  2. Overwrites my _www-accessible folder with the latest compiled and published application files.
  3. Tells launchd to restart the application.

This does prompt for the superuser password, but other than that, deploying changes is painless. To make it even more convenient, I added the following to my project.json file so the script runs automatically when I run dotnet publish:

"scripts": { "postpublish": "bash ./" }

(This is dead code walking, since project.json is on the way out, so if you’re reading this a few months from now, you’ll need to look up the equivalent MSBuild setting.)
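
If you’re already on the csproj side of that transition, the rough equivalent is a target hooked to Publish, something like the sketch below (the script name is a placeholder; double-check the current docs for the exact form):

<Target Name="RunPostPublishScript" AfterTargets="Publish">
  <Exec Command="bash ./publish.sh" WorkingDirectory="$(MSBuildProjectDirectory)" />
</Target>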

Final Thoughts

There may be an easier way to do some of this, I’m just showing how I muddled through it. I’m pretty happy with my solution, and it’s doing a great job serving up my galleries now. Hopefully other developers working on OS X will find this useful.

Libertarians, Check your Privilege!

I’m happy you found in Gary Johnson a candidate who agrees with you on social issues, foreign policy, and domestic security. I’m right there with ya.

But before you jump on that train, you should understand libertarian *economic* positions, because they are downright bonkers.

(If you think I’m making any of this shit up, go read the party platform, as well as speeches and interviews by the current and past candidates.)

Libertarians do not believe in Social Security, Medicare, Medicaid, welfare assistance, minimum wage, labor laws, health and safety laws, product safety laws, public parks, subsidized health insurance, unemployment insurance, public transportation, air or water pollution regulations, restaurant food safety inspections, laws against discrimination for jobs or commerce, or requiring liability insurance for cars. They don’t even believe in city ordinances that require your neighbors to cut their grass.

The libertarian response to any infringement of your life, health, or property by another person (including a huge corporation) is that you should sue them. So, if you don’t like the air being polluted, for example, you would need to sue the local refineries and PROVE in court that it affects your health.

Libertarians would also turn public schools over to private corporations, who would then run them into the ground to scrape as much profit out of each voucher as possible.

They also don’t believe in corporations paying taxes, or estates of the ultra-rich being taxed, or even a higher tax rate for someone making $100 million a year than for someone who made $10,000. And they’re perfectly fine with not having ANY laws to limit the campaign donations of corporations or the rich.

Basically, the libertarian response for everything is “you’re on your own, or good luck finding charity.” If you’re educated, middle class, white, young, straight, healthy, etc. (and I’m all of those, so that’s no dig), this may sound tempting, because any dollar you don’t pay in taxes is a dollar you can spend on Pokemon purchases, and if the shit hits the fan, you have your own cushion or support network to fall back on.

But we do have a BUNCH of people in this country who, through no fault of their own (or even because of a dumb mistake), NEED institutional-scale help to stay afloat, or dig themselves out of an economic problem, or to help their kids have better lives than they’ve had, or to ensure that they aren’t taken advantage of because of their race, gender, sexual orientation, economic status, etc.

So, if you can vote for someone who, given the opportunity, would completely dismantle both the protections that government provides ALL of us against unreasonable actions by other people and the safety net that catches you if you end up falling on hard times, you really should recognize that while you might have the privilege necessary to weather the next economic storm or personal health crisis, not everyone does.

(If you basically agree with the Libertarian Party but *do* believe in government helping the poor and protecting your property and health from corporations, you might want to check out the Green Party. As with Gary Johnson, there’s no hope in Jill Stein even getting into the debates, but if you’re going to vote your conscience rather than for the lesser major party evil, vote for the lesser third-party evil!)

A Silly Hack Around the Excel Cell Character Limit

Users will always find the limits of your application.

I manage a web application where users primarily edit data by importing and exporting Excel files. This allows them to work on huge swaths of data without the limitations of a web-based interface. The workbooks store 1-to-many relationships using comma-delimited lists of integer IDs, and use macros, validation, etc. to maintain referential integrity.

For almost all cases, this works perfectly; it’s unusual to have more than 100 relationships of a given type for any entity. But for one relationship, there could be thousands. And that’s where we found the limit.

The limit is that a single Excel cell may only hold 32,767 characters. Our IDs have an average of 8-10 characters, plus the comma delimiter overhead, so we max out at around 3,000-4,000 relationships. The exceptions are rare, but they do happen, so I needed a solution.

My first thought was to use hexadecimal to reduce the average number of digits required. While decimal encodes around 3.3 bits per digit, hex gives 4. But that would only improve our situation by around 20%. I feared we would find a new exception that would bust the limit again within a few months. I needed a step change.

I felt like switching bases was on the right track. If I could get a 20% improvement with hexadecimal, what base would make me comfortable that we’ve addressed the limitation for our users?

Base64 was the obvious next choice. It’s well-known and has native support in C# (for the ASP.NET side of things), and its six bits per character encoding would nearly double my relationship limit. However, the “+” symbol had a special meaning in my data loader, and the letters “E” and “e” could trick Excel into treating the encoded string as a number in certain situations. I could substitute those characters in my algorithm, but for simplicity, I decided to use only [A-DF-Za-df-z0-9], effectively creating a Base 60 encoder.

This worked, and gave me the headroom I needed for now, but it got me thinking. What if I could pack even more bits per character? Where would I hit the ceiling where the code complexity or performance were sub-optimal?

Using additional ASCII characters was out — I’d already used 60, and after discounting control characters and others with special meaning, I wouldn’t see enough improvement to bother. But to go to higher-order characters, I had to know if Excel’s limit was truly a character limit or a byte limit. Fortunately, this was easy enough to test using REPT() and a Unicode character with a value over 255. It’s definitely a character limit.
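
If you want to reproduce the test yourself, something along these lines is all it takes (UNICHAR requires Excel 2013 or later, and 299 is just an arbitrary code point above 255):

=REPT(UNICHAR(299), 32767)

If the cap were 32,767 bytes, a string of 32,767 two-byte characters would blow right past it; instead, the formula happily returns the full string, and it’s only when you bump the count to 32,768 that you get a #VALUE! error.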

So, I could use any Unicode character I wanted, other than [+*,~-], the characters with special meaning for my data loader. But Unicode itself is a Swiss cheese patchwork of allocated and unallocated code points, and even many allocated characters have poor font support. While users don’t have to understand the values, I didn’t want a bunch of rectangles or invisible characters. Unicode also has random code points assigned as modifiers, whitespace, or other poor candidates for a human-readable (if human-incomprehensible) encoding. I wanted to squeeze some real juice from the 16-bit character space (I didn’t test Excel’s limit for characters above 65535), but I didn’t want to have code that required tacking together a hodgepodge of value ranges.

I went on a little search through Unicode, looking for the longest contiguous range of allocated characters that would show up in a US English setting in Windows without needing special fonts. Not surprisingly, this turned out to be the CJK Unified Ideographs, the blocks in the range 4E00–9FFF. Within this range, the first 20,950 code points are allocated for Chinese characters, all of which are supported by Windows by falling back from whatever Western font you use in Excel to whatever the default Chinese font is on your version of Windows.

While I could use all 20,950 characters, I decided to use 14-bit encoding, the first 16,384 characters. There’s something satisfying about an even power of 2, and while it shouldn’t matter for my application, the micro-optimizer in me likes the fact that I can use bit-shifting in the conversion process.
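
For the curious about the mechanics, the conversion boils down to something like this (a simplified C# sketch rather than my exact production code; the class and method names here are made up):

using System;
using System.Text;

public static class Cjk14BitEncoder
{
    private const int BaseCodePoint = 0x4E00; // start of the CJK Unified Ideographs block
    private const int Radix = 1 << 14;        // 16,384 symbols = 14 bits per character

    public static string Encode(long id)
    {
        if (id < 0) throw new ArgumentOutOfRangeException(nameof(id));
        var sb = new StringBuilder();
        do
        {
            // Peel off 14 bits at a time; the most significant chunk ends up first in the string.
            sb.Insert(0, (char)(BaseCodePoint + (int)(id & (Radix - 1))));
            id >>= 14;
        } while (id > 0);
        return sb.ToString();
    }

    public static long Decode(string encoded)
    {
        long value = 0;
        foreach (var c in encoded)
        {
            value = (value << 14) | (long)(c - BaseCodePoint);
        }
        return value;
    }
}

Encode(1234567890), for example, comes out as a three-character string of Chinese characters, and Decode turns it back into the original value.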

The final result? My encoding is now down to 1-3 characters per integer, plus the comma, giving my users an upper bound of around 10,000 relationships. This, as Bill Gates famously probably didn’t say about 640KB, should be enough for anyone. I don’t foresee any business case for us that would support more than around 5,000, so having the extra headroom is nice, and the conversions are fast enough that they are a rounding error when importing and exporting workbooks.

I doubt anyone in their right mind will find a use for it, but for the curious, my code is posted here as a Gist…

You don’t get to decide.

I posted this on Facebook, but since there’s a sliver of a chance that someone still reads this blog, I’m cross-posting it here.

I just got blocked by a photographer friend because I stood up for TG people while he was making demeaning comments about people self-identifying as something other than their birth-assigned gender.

I can argue all day with my conservative friends about taxes or federalism or foreign policy, but I won’t stand for belittling of marginalized minorities.

The most important freedom we have is NOT the right to carry a gun. It isn’t the right to be a jerk in front of a microphone. It isn’t the right to own land or save up for retirement or buy a nice car. It isn’t the right to pay low taxes or start a business.

No, the MOST important freedom we have is that, in the most private and intimate aspects of our life (gender, race, sexual orientation, religion, love, art, etc.), WE get to tell people who we are and act on those deeply-held beliefs, without asking for someone else’s permission, and without being harassed or forced to accept “separate accommodations” in lieu of equality, provided our beliefs and actions do not impede on those same rights for others.

So if you truly believe that TG people are mentally ill, or are “faking it” to get attention, or are possessed by demons, or need to stop eating gluten, good for you. You also have the right to believe in the Flying Spaghetti Monster. But what you DO NOT have the right to do is to use the color of law to force TG people to conform to who YOU think they are when they are going about their lives minding their own business.

Managing EPPlus Macro Modules

I use EPPlus to create Excel workbooks server-side, and overall it does a great job — it’s far better than mucking around with OOXML file internals directly.

While it supports adding VBA code modules to the workbooks you create (normal modules, or Workbook, Worksheet, or class modules), the VBA code itself that you want to insert is a string, and unless your VBA code is incredibly simple, using C# string constants is going to quickly become a pain. I’ve seen code samples where the VBA was loaded from local text files, but that’s a no-go for me — I don’t want to clutter up my web application folders. Usually I would resort to storing something like this in my app’s database, but then I’d need to create an interface for managing and editing the code files, which is overkill for me.

Fortunately, I found a quick and easy solution to embed the VBA source in my compiled application without using string constants:

  1. In Solution Explorer, choose Add…New Item…
  2. Select Text File, but give the file an extension of “.vb” rather than “.txt”.
  3. For that file, go to the Properties and change the Build Action to Embedded Resource.
  4. Paste the code from Excel into the file in Visual Studio.

Boom! Not only can I edit the file without embedding it in a string, I even have partial Intellisense support! There are some caveats (which I’ll get to in a minute), but it’s good enough for me.

Retrieving the code as a string and getting it into my EPPlus file was also simple, but there are a few tricks involved. I created a utility function to grab the resource:

using System.IO;
using System.Reflection;

public string GetVBACodeFromResource(string resourceFilename) {
    // Resources are keyed by their path within the solution, with "." as the separator
    var key = "MainFolder.SubfolderWithVBAFiles."
        + resourceFilename + ".vb";
    string result = null;
    using(var reader = new StreamReader(Assembly.GetExecutingAssembly().GetManifestResourceStream(key))) {
        result = reader.ReadToEnd();
    }
    if(string.IsNullOrEmpty(result)) return string.Empty;
    // Excel's VBA editor doesn't convert embedded tabs, so swap them for spaces
    return result.Replace("\t", "    ").Trim();
}
Then later, to call the function to add a module to my package:

var m = myExcelPackage.Workbook.VbaProject.Modules.AddModule("modMain");
m.Code = GetVBACodeFromResource("modMain");

Note in the “key” above, I’m prefixing my filename with “MainFolder.SubfolderWithVBAFiles.”. This is because resources are identified by their path within your solution, using “.” in the resource key rather than the usual “/” as a folder separator. If you get stuck figuring out the names of your current resources, you can use “Assembly.GetExecutingAssembly().GetManifestResourceNames()” to get a list of the current resources in your assembly.

I also converted tabs to spaces, because I have Visual Studio set to keep my tabs (I like tabs — I know I’m in the minority), but when raw tabs get embedded in the VBA files, Excel doesn’t convert them to spaces and indent properly.

I mentioned there are a few caveats to using Visual Studio’s VB.NET Intellisense to edit VBA code, because VB and VBA, while sharing a common ancestry, are not the same language:

  1. VB.NET requires procedure call arguments to be wrapped in parentheses (e.g., “Foo(bar)”), while VBA doesn’t. However, by prefixing the statement with the Call keyword (e.g., “Call Foo(bar)”), the syntax is legal in both languages.
  2. The above doesn’t work with “Debug.Print” calls — you’ll have to live with VS putting red squiggles under those calls.
  3. The “Set” and “Let” keywords were removed from VB.NET, so they will also always be formatted as errors.
  4. Same goes for Variant and other VBA/Excel-specific types that aren’t native to VB.
  5. VB.NET has no concept of “Option Base” or the “To” statement in Dim/ReDim.
  6. When you paste or edit code, VS will break some code–notably, rewriting some Option directives and removing “_” line continuation characters. You can fix this behavior by disabling “Pretty listing (reformatting) of code” under Options: Text Editor: Basic: Advanced, but it would impact the IDE for all VB code you have (if you usually use C#, you’re all good).

This sounds like a long list, but in reality, most VBA code will be properly color-coded, and the Visual Studio IDE even does a decent job of providing auto-complete and error detection. You won’t be able to do everything you can do in Excel’s VBA editor, but it’s great for quick edits.

Political post ahead…

I was trying to explain yesterday why my beliefs lean to left libertarianism (i.e., somewhat close to democratic socialism), and how that has absolutely nothing to do with Marxist Socialism or Communism.

In short, it’s because I believe the government is not the enemy, nor is business. Instead, the enemy is *unfettered* corporate power and *unchecked* government.

We check corporate power through a combination of free-market capitalism (voting with our feet) and unions, and where those are ineffective (and they are in a number of broad classes of corporate abuse), we use the law and the courts to regulate them. Now, I work in the business of dealing with regulations, so I’m keenly aware that there are good regulations and bad ones, but that doesn’t mean we should drop all regulations just because a big business complains about them.

*Both* of these are incredibly important to keeping corporate power in check. And if you’ve *read* The Wealth of Nations, you would know that this is fully in agreement with Adam Smith’s own beliefs on capitalism — he is not the father of the economic anarchy that the far-right libertarian wing makes him out to be.

On the other hand, we check government powers by ensuring that legislators are working for educated, engaged voters, not for special interests who plaster the airwaves with lies to scare the masses who can’t be bothered to do some research before taking a position.

And we check both with a free, competitive, open press.

Well, here’s one small example of what happens when regulators fight for their *constituents*, not for big media conglomerates who bankroll their campaigns and then abuse their natural monopoly. And it also happens to be one more reason that I’m confident that I’m backing the right guy.

And… that’s my last political post for awhile…

“Take Our Country Back”

When someone says that, this is what I hear:

TAKE — providing nothing in return, using force, manipulation, mob rule, bribery, obstruction, or any other means to achieve the goal.
OUR — WASPs, aka “real” Americans.
COUNTRY — militant nationalists, willfully ignorant of anything else going on around this tiny blue dot.
BACK — back to the days when non-conformists and minorities of every kind were enslaved, ridiculed, railroaded, interned, denied the right to vote, and ignored in the courthouses and statehouses.

I don’t want to “take” this country back. I want to share it with people who didn’t win the genetic lottery by being born here. I want to change it to make a more perfect union. I want to build it to be a light shining on a hill, an example of informed democracy.

Ten Reasons I Hate Local News

Let me preface this by saying (a) I used to work in local television, and (b) I have friends who are or have been part of the industry, so I don’t really blame the on-screen talent or even some of the people behind the scenes.

I hate local newscasts.

I still record one newscast a day, and skim it about 75% of the time, usually during dinner. I’ve chosen the least objectionable local newscast, which for me is KFDM, but it’s still pretty terrible.

The reason I hate local news is that it could be so much better. As in, it could be a true force for good and change in our communities, rather than being the mostly-useless filler between advertisements that it is today.

In the interest of time, I’ve whittled my many grievances down to ten of the top reasons I hate local news, in no particular order.

1. Local Sports

Sports takes up almost half of the average local newscast, and then half of that is wasted regurgitating scores for national and state games that anyone with an Internet connection who gives a shit already knows.

Then, what passes for local coverage is a mindless droning of scores, with the occasional inane interview with a local coach or athlete that is completely interchangeable with any other interview (“we’re just gonna go out there and have teamwork and try our hardest blah blah blah…”).

I’ll admit I’m not a fan of that taxpayer-sponsored religion we call school sports in the first place, but if you’re going to cover it, cover it!

Take just one or two of the 20 hours of live local news programming each week and create a dedicated show for local sports. Show recaps of local games. Go over the schedules. Talk about the athletes. Cover the little-league and soccer teams. Also, extend coverage to include other intramural competitions — debate, theater, band, chess, robotics, spelling, math, etc., showing local children that throwing a ball around is not the only way to be recognized on local TV.

2. Weather

Everyone has a smart phone now. We’re all two clicks away from a 10-day forecast that generally beats the local guy’s predictions, and it doesn’t come with the 10-minute lecture about high- and low-pressure systems or the gleeful watching of every storm cloud in the Gulf of Mexico that could, with enough butterfly-flaps, transform into a hurricane in a few weeks.

So unless there’s a tornado coming, just show some pretty infographics with sunrise, sunset, forecast, and the boring stuff only people with boats care about, then go away.

If you want to jazz it up, show us something interesting happening in astronomy, or help shed some light on climate change for the 30% of Americans who still believe it’s a liberal conspiracy to take away their incandescent lights and gas-guzzling duallies.

3. Pre-Packaged News

You aren’t fooling us with that “Tech Time” and “Healthcare Watch” and “Market Minute” and other bullshit content (including lead-in copy) that you bought to fill the time. These pieces are the true bottom of the barrel of journalism, with their third-rate analysis, copy ripped straight from press releases, and the intellectual depth of the average Kardashian. I watch the local news for local news, not so I can hear Sally the Generic Reporter tell me that evil hackers want to steal my credit card and I’d better protect myself by using a good password and buying an antivirus program.

4. Biased National Politics

Poll questions obviously written by a drunk Tea Party activist. News copy ripped straight from the GOP daily talking points. Gushing coverage of Republican candidates who come to town. Lack of even the most basic fact-checking when reporting what a politician says. Those of us who don’t get our national news only from local sources are on to you, and that includes the majority of Millennials, including conservative ones who can still smell one-sided BS. I suppose pandering to the old white audience is what sells more truck commercials, but the bias is obvious, and it stinks.

5. Social Media Comments

If I wanted to attend a virtual KKK rally, I already have an Internet connection and I can go look at the ignorance and knee-jerk hatred spewing from the comment section of every article on your Facebook page. Repeating that shit on the air, especially without any sort of critical analysis, just adds fuel to the flame.

6. We Are Experiencing Technical Difficulties

It’s disappointingly rare to watch an entire live newscast without seeing some stupid technical snafu — dead mics, missing audio, mistimed B-roll, swapped graphics, poor lighting, misspelled crawls, drifting camera shots, reporters staring at their notes unaware that they’re live — the list goes on. Seriously, people, get your shit together! I’ve seen better production values at an elementary school musical.

(Ok, the video isn’t exactly on-point, but it’s still funny… I’m actually far more forgiving of people flubbing their lines…)

7. Advernewsment

Yes, we the audience do notice that the people you interview or book as “experts” just happen to be associated with the companies who advertise heavily on your station. We also notice that when news comes along that happens to be bad for local industry, it gets glossed over, or only told from the industry’s perspective.

8. Quantity over Quality

Many network stations are churning out three or more hours of live local programming per day in newscasts and morning/afternoon shows. Worse, since so many stations are owned by the same media conglomerates, the same news program gets thrown out on multiple channels, or they share the same news desk.

The reason is obvious — stations believe they can make more in ad revenue with three hours of shite than with 30 minutes of hard news from journalists who have the time to research and produce compelling stories.

Maybe they’re right, and it’s more important simply to capture bored eyeballs immediately before and after the workday than it is to create a show people would actually make plans to watch. After all, the 24-hour news channels have the same approach — continuous, uncritical repetition of opinion, propaganda, and speculation rather than focused, critical journalism. Still, it’s sad.

9. Horrendous Web Sites


Seriously, folks, they are SO BAD. Horrific. Slow, ad-laden, broken, lacking in aesthetics, mobile-unfriendly, Flash-driven, content-sparse, disorganized, … I could go on.

I suspect the design templates and back-end programming are mandated by the media conglomerate bosses (who probably outsourced the work to some Elbonian programmers to hack around on a “content management system” sold to them by a guy in a slick suit). So it’s not all the local station’s fault. But that doesn’t make the user experience any better for the local viewer, and it devalues the station’s brand on the very platform that will eventually replace the time-slot broadcast news they depend on for so much ad revenue.

National newspaper sites aren’t much better. It’s like all the people who knew a damned thing about typography, photography, white space, etc. were fired when pixels replaced paper, and they haven’t realized yet that their web sites look worse than a mimeographed church newsletter from the 1980s. Hell, you’re reading this post right now on the free standard WordPress template, and it looks cleaner and more professional than 90% of the major news sites.

Local television stations need to recognize that the Internet isn’t going away, and that their only long-term hope is to capture a younger audience who live online, don’t subscribe to cable, don’t have a UHF antenna, and won’t put up with slow pages, broken links, pop-up ads, and designs that make their eyes bleed. (Edit: They redesigned their web site in early 2016; it looks MUCH better now!)

The last link is the local newspaper… they’re just as bad.

10. Little Proactive Reporting

All too often, I hear people in Beaumont say about some local event with a poor turn-out, “I wish I had known about it!” Same goes for interesting items that were on the agenda at city council or school board meetings, debates between local politicians, etc.

One of the things the Internet doesn’t do well these days is connect nonprofits, schools, governments, churches, etc. with their local communities so they can promote their events to the public. Facebook actively works against such promotion, unless the organization in question wants to pay the extortion fees to “advertise” to their own fans.

Local news generally fails to actively engage with local NPOs to promote public events and opportunities before they happen. Sure, a few chosen favorites like YMBL and Gift of Life get pre-coverage of their events, but it’s nearly impossible for, say, a nonprofit art gallery to get a little story about a local artist’s show opening, or a children’s program or fundraiser. Likewise, coverage of basic election information, such as poll locations and interviews with people on the ballot, is dismally thin.

I have no doubt that if the local newscast included a stronger focus on letting people know what’s going to happen in their communities, people would tune in more often. I don’t really need to know about every car crash, house fire, and storm-felled tree, but I would like to know when things are happening that I might want to get involved in, not just see reports about them after the fact.

Wrapping It Up

I hate on local news not simply because it is so bad, but because I see what it could be if only station leadership (and their corporate overlords) had the vision to do more than crank out the same thing, over and over. I hope some of them recognize and address these issues before it’s too late and local newsrooms go the way of the dodo.

A Meaningful Backup Strategy for Photographers

For the second time in the past few weeks, I’ve heard of a photographer who lost many years’ worth of work due to their computer and drives being stolen. This has caused me to start re-evaluating my own backup strategy, and I thought I would share a few notes about what I’ve already learned and how I’m planning to improve my own data security.

IMHO, a good backup strategy involves five prongs: good drives, local live backup, local online backup, remote backup, and portable backup.

Good Drives

The first line of defense is that you should entrust your RAW and PSD files only to drives with a strong record of low failure rates. Based on recent numbers from a study done by Backblaze, Hitachi drives outperform other brands.

Of course, the best drives are solid state drives — their Annual Failure Rate (AFR) runs closer to 1%, while physical drives run 3-8% depending on their age. But for now, solid state drives are too expensive for serious photographers — we churn through way too much storage.

Live Backup

Assuming you aren’t reading this in the future where 4TB SSD drives have a 0.5% AFR and cost $100, you’ll still be using good old spinning-platter drives. The catastrophic death of these drives is not a matter of “if,” but of “when.”

So, the second prong in a good backup strategy is that photography drive volumes should always be created in pairs — a RAID 1 set (mirrored). With this approach, every byte written to the volume is written to both drives simultaneously, and either drive can be used on its own if the other fails.

In years past, RAID 5 was the gold standard, since it offered a better balance of usable space than RAID 1 (with three drives, 2/3 of the space is available, versus 1/2 for RAID 1). However, as the storage size of drives has increased exponentially, so has the chance that, when a drive in a RAID 5 set fails, an unrecoverable read error will occur during the rebuild, which then puts the entire volume in peril.

RAID 5 also relies on proprietary logic that determines how the data and parity stripes are laid out on the physical drives. Thus, if the RAID controller hardware fails and you can’t find replacement hardware that uses the same firmware, there’s a good chance your array will become an instant doorstop.

At the time of this writing, Hitachi 4TB drives run around $180, and cheaper drives are $140. If your average shoot runs around 16GB, you’ll be paying around $1.50 per photo shoot for mirrored storage.

Local Online Backup

A mirrored volume is not, by itself, a sufficient backup. It mitigates the issue of drive failure, but does nothing to protect you from yourself. If you accidentally trash the wrong folder, run a bad script, or overwrite the wrong file, you can lose hours of work, entire shoots, or in the worst case, everything.

The reason that a local online (i.e., connected and turned on at all times) backup is important is that, being human, you will forget to connect and use your backup drives.

I recommend software such as Apple’s Time Machine, which silently, reliably, and quickly backs up all changes you make to your files each hour, and makes it a cinch to restore the files, should the need arise.

It is preferable to locate this drive in your house, but in a separate physical location from your main computer. This reduces the chances that a localized fire or theft will result in the loss of both your primary and backup drives.

If you’re an Apple fan, a good solution is to use an external hard drive connected to your Airport Extreme router. Apple sells a version of the router with a drive built in (“Time Capsule”), but as usual with Apple, the price is much higher than just connecting your own drive via its USB port.

These backups should contain not just your photographs, but also your boot drive, applications, Lightroom catalogs, and anything else you need to back up on your computer.

Again, because drives do fail and backups are difficult to restart from scratch, having a RAID 1 volume for your backup drive is a good idea. Unfortunately, the Airport Extreme does not support software RAID, so to use it, you need a drive enclosure with a built-in hardware RAID controller (so the Airport Extreme only sees one virtual drive).

I use an Akitio Hydra enclosure, which supports RAID 1 or 5 for up to 4 drives. I recommend against the Drobo — while I’m sure they’ve put a lot of work into their “BeyondRAID” algorithm, the fact remains that it is proprietary, and I’ve heard a number of horror stories of people losing Drobo volumes and having no means of recovering them.

Remote Backup

Having all of your primary and backup drives in one physical location (and online) is a bad idea, because it exposes you to a number of potential threats — a thorough thief, fire, flood, lightning strike, power surge, etc.

So, it is essential to have a backup in another physical location, and to keep it up to date. This is the piece I’m lacking in my own strategy, and it’s something I’m working to address.

The simplest solution is to ask a friend or family member to hold your backup drive for you, and swap them out once every few months (a safety deposit box would also work). If you don’t work from home, you could also just keep the drive in your office.

To make these remote backups, you may need to buy some additional software that can do incremental copies of your main drive to your backup drives — synching only the files that have changed. If you’re handy with the command line, rsync on OS X and xcopy on Windows can do this for free.
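
For example, an rsync one-liner along these lines (the paths are placeholders for your photo volume and the remote backup drive) copies only what has changed since the last run:

rsync -a --delete /Volumes/Photos/ /Volumes/OffsiteBackup/Photos/

The --delete flag keeps the backup an exact mirror by also removing files you’ve deleted from the source, so leave it off if you’d rather hang on to orphaned copies.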

In this situation, encrypting the drive is probably a good idea, especially if your photographs are sensitive in nature (boudoir, art nudes, etc.). Even if you trust the person holding the drives, you can’t trust a thief who might take off with your drive while robbing their house. Fortunately, this is very simple to do in Disk Utility on a Mac, and using BitLocker on Windows.

There are a number of “cloud” backup services (Carbonite, Backblaze, Crashplan, Amazon Cloud Drive, Microsoft OneDrive, and Mozy, to name a few). On the plus side, these offer continuous (daily) backups and can allow you to access the files online from another computer. However, there are some disadvantages:

  • They can be expensive over time compared to just buying a hard drive or two.
  • You have to understand which of their plans you need to use. For example, if you use Carbonite, you would need to use the $100/year plan, not the $50/year one, because only the more expensive plan will back up something other than your main user directory (which will almost certainly not be the drive you’re using for your RAW and PSD files).
  • Upload speeds can be terrible. Most Internet providers give you only a modest upload speed — mine is 1.5Mbps, which is 1/10th of the download speed. At this speed, sending 16GB of RAW files to the cloud would take a full 24 hours and would saturate my uplink, which might cause issues with other Internet usage. So before you consider a cloud solution, test your upload speed and do the math!
  • Consider the risks of systems that don’t offer end-to-end encryption. Some services encrypt your files on their server and in transit, but they hold the keys, so anyone who compromises their system can read your files. The only safe encryption is where the encryption key never leaves your computer. If you don’t want your boudoir clients or models being involved in the next “The Fappening”-style breach, be sure you understand the basics of encryption and how they treat your files (good rule of thumb: if you can log into their site using a normal web browser and see your files, any “encryption” they say they do on your files is not sufficient).

I will say that of the cloud services I’ve seen, the one I like best is CrashPlan’s “Offsite Drive” option. This is a free service: it basically allows you to designate an external drive as your backup, trade drives with a friend, and have your changed files sent to your drive on their computer, directly and automatically, over the Internet (and vice versa). The drives are heavily encrypted (the right way), so you can’t see each other’s files. And if you want them to additionally store the files on their servers, they are happy to do that as well (for a fee, of course).

While this concept from Crashplan doesn’t completely overcome the issue of upload speed, at least you are only uploading new/changed files to one another, not trying to upload your entire library of files. If you were trying to, say, upload 2TB of photos via a 1.5Mbps uplink, it would take over 4 months to complete the backup, so trading drives with a friend is far better than using a traditional cloud backup service, which is optimized for “normal” people who may only have 20-50GB of total data.


Portable Backup

The final piece to the puzzle is to have an emergency backup of your most important documents on your person at all times.

If the nightmare scenario happened and someone was able to compromise and destroy your files both on your local copies and your cloud backup, the goal of this backup would be to save (a) personal files of great importance, such as family photos and tax records, and (b) your legacy of work as a photographer.

Carrying around multi-terabyte hard drives is obviously not an option (yet), but you don’t really need to. Right now, a 256GB flash drive runs around $70. This won’t be nearly enough for your RAW and PSD files, but you can at least use it to store a very large number of full-resolution, final JPEGs of your work.

If you ever found yourself with only that flash drive remaining, the loss of the PSDs and original files would be regrettable, but you would still have a digital master that is suitable for making new prints.

Again, encryption is absolutely essential for this — if your USB drive is ever lost or stolen, you don’t want your personal information to be available to whoever ends up with the drive.

A good strategy is to create two partitions — one very small, unencrypted FAT partition with just a “readme.txt” file containing your contact information and a promise of a few bucks for the return of the drive, and a second one for your main encrypted storage. Giving the smaller partition some extra breathing room (say, 4-8GB) might also be useful for keeping some basic data rescue programs, or just so you can use the drive in untrusted computers for short-term file transfers.

This final layer of protection may seem as if it borders on paranoia, but keep in mind that if every other backup you have happens because of automatic processes, you need at least one backup that requires a manual copy process.

Keep in mind, however, that current flash drive technology requires that drives be used — if you let a flash drive sit dormant long enough (a year or two), you could end up with corrupted data. As such, flash drives aren’t a perfect replacement for other backup media. (SSD drives have the same bit-rot issue.)

Final Thoughts

Data is fragile, and thinking through the potential points of failure requires good planning and a solid understanding of basic technology. No one ever thinks something bad will happen to their data, until it does. Years or even decades of work can disappear in the blink of an eye. So, stop reading this and GO BACK UP RIGHT NOW. 🙂