Where We Seem To Have Arrived (A Non-Technical Post)

Fifteen years ago, I took part in an essay competition run by the Ohio State University’s Glenn School of Public Policy and Management. The prompt asked whether and how the Internet would affect politics, and in what ways it might improve democracy in America.

I won second place, based on what I thought to be mildly biased reasoning from the judges. That is, I said that the Internet might not actually help improve civic engagement, and I’m pretty sure they felt that it would. Which is, of course, what the first-place winner’s essay said. (I read it; he had a lot of charts and graphs cribbed from various places to back his argument, but it wasn’t particularly well-written.)

But this was the early ‘aughts, and the Internet still held some kind of mystical promise. The dotcom crash had happened, of course, but Google was slowly happening, and the promise of unfettered, unlimited, unbridled access to information was something most people thought would make the world a better place. ISIS hadn’t happened yet, though September 11 had. Social media wasn’t a thing yet, and domestic surveillance was still mostly analog.

Unfortunately, I think my thesis is still correct: The Internet didn’t enable the kind of democracy we will need to effectively use what resources we have left, before we collapse as a species.

The fact is that the Internet has allowed people to amplify and justify their innate tendencies. It has allowed people of similar dispositions to join forces, for sure. But it hasn’t given people reasons to change their dispositions. And by that, I mean, it certainly hasn’t helped people of conservative tendencies to realize that their beliefs, and especially the exercise of those beliefs onto others, have no place in modern pluralistic societies.

The two things that fix this, the broad distribution of prosperity and the right of women to control their reproductive destiny, are both now under major stress.

The Internet hasn’t led to a broader distribution of prosperity, nor led to public policy that prioritizes this outcome.

There’s a hilarious question I’ve been asked before: Communism or Capitalism, which one is better?

The answer, of course, is neither. At their late stages (earlier in Communism’s case) they both colossally misallocate resources. If neither system allocates resources appropriately, both will eventually run out of them. There’s no suspension of disbelief inherent to either system that prevents that outcome.

Efficient supply chain management, abundant and cheap container shipping, end-to-end tracking from production to point of sale: these things help a corporation, and a society as a machine, to be more efficient. But they’re driving a trickle-up economics, a mechanism for efficiently extracting capital from labor in even the most minimal arbitrage situations. And when advanced machine learning and AI start getting applied, they will learn to make this process even more efficient, with the corresponding ruthless effects on people.

By also discounting negative externalities and by entirely ignoring mispriced nonrenewable and limited resources, these machines will eventually recursively optimize for the worst possible outcomes for humanity itself, while simultaneously optimizing for the best possible outcomes for system revenue and cost of goods sold.

It’s not clear how to fix this, besides throwing out the catch-all “more people need to get more educated” cliché. Even then, it’s not clear that education is actually fixing the situation, given the current generation’s intent to retreat intellectually, in fear, from anything that might remotely threaten it.

In order to survive comfortably into the next century, we absolutely need strong democracies and prosperous societies, across the globe. But it is even less clear to me now how we get there from here. We seem to be caught in a resonating positive feedback loop, one that is more and more rapidly destroying itself and everything around it.

Power Hungry Desktops

I’ve been mucking about with a Linux desktop again, and doing electrical power measurements to figure out how efficient it is. Most home users probably aren’t thinking about this, as the difference between 100W and 200W is inconsequential to them. But I’m curious about processing capacity per unit of power, or perhaps processing capacity per CPU core. When you consider that it takes about 1 pound of coal to produce a kilowatt-hour of electricity (equivalent to running a 100W computer for 10 hours), the difference is no longer inconsequential over even normal periods of operation.
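The coal arithmetic is simple enough to sketch in a few lines of Python. This is strictly back-of-the-envelope: the 1 lb of coal per kWh figure is the rough approximation above, and real generation efficiency varies by plant and fuel.

```python
# Rough conversion from wall power to coal burned,
# using the ~1 lb of coal per kWh approximation above.

def coal_pounds(watts: float, hours: float) -> float:
    """Approximate pounds of coal burned to supply `watts` for `hours`."""
    kwh = watts * hours / 1000.0
    return kwh * 1.0  # ~1 lb of coal per kWh (rough approximation)

print(coal_pounds(100, 10))  # laptop-class draw: 1.0 lb per 10 hours
print(coal_pounds(200, 10))  # desktop-class draw: 2.0 lb per 10 hours
```

Running the numbers this way makes the laptop-versus-desktop gap concrete: over a year of 10-hour days, the difference is hundreds of pounds of coal.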

At the moment, my usage pattern bounces between two systems: a MacBook Pro from 2009 and a Dell desktop from 2011.

The MacBook Pro has an Intel Core 2 Duo P8400 processor, which according to this performs at an abstract level of 1484. That works out to a performance level of 742 per processor core. It does feel slower when I’m developing and compiling software, but then it uses half the power of the bigger system (100W).

The Dell desktop has an AMD Phenom II X6 1055T Processor, which according to this performs at an abstract level of 5059. This works out to a performance level of 843 per processor core. The system uses 250W overall, to run everything.

But let’s say I’ve been thinking about buying a new MacBook Pro with Retina Display. The late-2013 model uses an Intel Core i5-4258U processor, which according to this performs at an abstract level of 4042, which works out to a performance level of 2021 per processor core. If its processor cores offer roughly 2.7 times the performance of my current MacBook Pro’s, and at least twice the Dell desktop’s, there’s a good chance that for many single-threaded apps the overall experience of using the device would be better anyway. And let’s face it, most of the time the user interface is running on a single thread anyway. If the system also draws only 100W at idle (likely less, given the improvements in process technology), then it offers almost the same performance as the desktop at half the energy consumption, which is a huge win.
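Pulling those numbers together, here is a quick sketch of performance per core and per watt. The benchmark scores and wall-power figures are the ones quoted above; note that the new model’s 100W draw is an assumption on my part, not a measurement.

```python
# Back-of-the-envelope comparison of the three systems discussed above.
# Scores and wattages are the figures quoted in the text; the late-2013
# MacBook Pro's 100 W figure is an assumption, not a measurement.

systems = {
    # name: (benchmark score, cores, wall power in watts)
    "MacBook Pro (2009, Core 2 Duo P8400)":    (1484, 2, 100),
    "Dell desktop (2011, Phenom II X6 1055T)": (5059, 6, 250),
    "MacBook Pro (late 2013, Core i5-4258U)":  (4042, 2, 100),
}

for name, (score, cores, watts) in systems.items():
    per_core = score / cores
    per_watt = score / watts
    print(f"{name}: {per_core:.0f} per core, {per_watt:.1f} per watt")
```

By these rough numbers, the late-2013 model more than doubles both metrics relative to the 2009 MacBook Pro, which is the comparison driving the upgrade question.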

The trouble with all existing processors is that they can’t completely shut off processor cores when they aren’t needed. If I’m idle at the computer 99% of the time, and it can handily process everything I’m doing, then the power spent keeping the extra cores running, even at the lowest C-state, seems like a terrible waste.

Power Hungry GPUs

One other thing that struck me as a bit odd: when I hook up a second monitor to the desktop, the power utilization measured at the wall jumps from 128W (idle) to 200W (idle). Powering each monitor uses about 20W, so I can only assume that the graphics card is chewing up the remaining 50W, but I don’t understand how the GPU architecture can be so power hungry, or the drivers so poor. It doesn’t make sense to me that the difference between driving one monitor and two is a roughly 56% increase in total power consumption.

In a nutshell, this desktop system is burning 2 pounds of coal every 10 hours, which seems a bit much since it spends 99% of its time idling.

Quite Possibly

Quite possibly the world’s worst-labeled option on a WordPress plugin:

Force SSL Exclusively - Any page that is not secured via Force SSL or URL Filters will be redirected to HTTP.

What you might assume, as I did, is that setting this option would enable HTTPS to be used on the entire WordPress installation and that it would make the site exclusively HTTPS-based. And you’d be wrong, too.

Instead the option really means: “If you set this, you have to explicitly mark each and every post you want to be secured by HTTPS. Your entire site will now remain completely HTTP-only.” And somewhat snidely, it might add: “Thanks for defeating the purpose of this plugin!”

Now, I can’t imagine who would actually want to do that, or what their site configuration would look like. But HTTPS always seems to me to be an all-or-nothing proposition, and this mislabeled option only led to time wasted hunting for a culprit in the .htaccess file, the mod_rewrite rules, the wp-config file, and elsewhere.

A Lack Of Negative Reinforcement

When I was growing up, my parents taught me that if you couldn’t say something nice, you shouldn’t say anything at all. When I grew up, I figured out that that was bullshit, but I still tend to hold the line.

Google, Facebook, and others don’t seem to understand that the lack of a negative reinforcement signal does not help to generate results that users want.

I’m tired, namely, of this appearing in various, completely unrelated search results on YouTube:

The Ultimate Girls Fail Compilation 2012

How about a “never show me this again” option? Or an Unlike button. Without one, every Like in the universe is biased: you have only two choices, the first a conflation of “I dislike it and would gladly never see it again” and “I am ambivalent about it and couldn’t care less,” and the second being Like.

On YouTube, I believe you can downvote a video, after you’ve clicked on it, which seems kind of stupid, since it gives the uploader the view they so desperately want. There should be an option to remove items you find stupid when you’re hovering over suggestions, and that ought to count in some way against them.

I suppose the only saving grace is that the social network operators of the world know only my Likes, but not yet my Dislikes. The higher their signal-to-noise ratio gets, the creepier the online experience becomes.