I’ve been spending some time refactoring the CSS on TandemExchange, and have been using LESS CSS’s fantastic support for media queries to create very concise and adaptive style rules.
My general rule at the moment is that there are roughly three form factors to worry about (plus hidpi-style displays, though I haven’t tackled those yet), and they are easy to define using LESS CSS’s ability to store media queries in variables:
@phone: ~"only screen and (max-width: 480px)";
@tablet: ~"only screen and (max-width: 800px)";
@desktop: ~"only screen and (min-width: 801px)";
In plain English: every screen up to 480px wide matches the @phone query, everything up to 800px matches @tablet (note that a phone-sized screen matches both, so let the @phone overrides come last), and anything wider than 800px matches @desktop.
Then, when defining the CSS class rules, you can just do neat things like this:
.someClass {
    color: #101010;

    @media @phone {
        color: #ffffff;
    }
}
Though I’m not sure why you’d ever want text to flip to bright white on a phone. For many positioning issues, and especially for switching elements from side-by-side to inline layouts, this is a huge boon to maintaining a tight set of CSS rules.
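In case it isn’t obvious what the compiler does with that nested rule: LESS bubbles the media query up to the top level and wraps the selector inside it, so the snippet above should compile to roughly this plain CSS:

.someClass {
    color: #101010;
}
@media only screen and (max-width: 480px) {
    .someClass {
        color: #ffffff;
    }
}

Which is exactly what you’d write by hand, just without repeating the breakpoint everywhere.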
The ending quote from the article’s protagonist is golden:
“When the dishwasher or the washing machine are running, I want them to tell me when they’re finished or how long there is to go,” he says. “I want that kind of information, because it’s just irritating not knowing.”
Which raises the existential question:
If he’s home, he’ll probably hear the ending beeps. But if he’s not home, and a bear shat in the woods, what difference would it make?
I’m not sure what the global market for smarter home appliances is, or ever will be. But I’m guessing its appeal is somewhere between 3D movies at home and internet-capable televisions that require text input via on-screen keyboards. There may be benefits, but just getting them set up might be such a hair-tearing experience that the average user won’t bother.
In any case, I do know that the last thing I’ll probably ever need is something like the following:
Besides seven years and a few generations of processor technology, surprisingly little separates these two laptops. The Dell was born in 2004, the Mac in 2011. There’s USB 2.0, Gigabit Ethernet, and 802.11a/b/g on the Dell thanks to eBayed expansion cards, so the connectivity options are roughly equivalent. The latest Mac models certainly have better power management, more cores, multithreading, more instructions per cycle (IPC) at the chip level, and built-in video decoding, among various other technical improvements.
But does the laptop on the right actually help me get work done faster than the laptop on the left?
Probably not.
And therein lies a major source of Intel’s and Microsoft’s year-on-year growth problems. Their chips, and in fact the whole desktop PC revolution, have produced machines that far outstrip the average person’s needs. The acute cases (playing games, folding proteins, making and breaking cryptography, and so on) will always benefit from improved speed. But the average web surfer / Google Docs editor / web designer / blogger could probably make do with a machine from 2004.
In fact, one might even say that the old computer is better for getting work done, because playing a YouTube video consumes such a huge fraction of its CPU that there’s no room left for the faux, 10-way “efficient multitasking” people believe they’re capable of these days.
Which means I can essentially single-task or dual-task (write some code, refresh the browser), and keep my attention focused on fewer tasks. I’m constrained in a useful way by the hardware, and I can use those constraints to my advantage.
The software stack remains the same (one of the great achievements of the Windows era): Google Chrome, PuTTY, iTunes, maybe a Firefox window open in there. All of this runs just fine on a Pentium 4-M at 2.0 GHz with 1 GB of RAM (probably about the performance of the Nexus 7, with its 4+1-core ARM processor and 1 GB of RAM), and the CPU is still over 90% idle. When Intel announces that it’s building chips with ten times the performance, that just means the processor is ten times more idle on average. Big whoop.
What I’m saying is: if all the average user needs is the compute-equivalent of a laptop from 2004, then that’s effectively what a current smartphone or tablet is.
But then what I’m also asking is: Why can’t I do as much work on a smartphone from 2013 as I can on a laptop from 2004?
I believe there are a number of huge shortcomings in the phone and tablet world that still cause people to unnecessarily purchase laptops or desktops:
The output options are terrible. While it’s nice to have a Retina display with what is essentially mid-1990s laser-printer resolution in dots per inch, you can still only fit so much text into a small space before people can’t read it anyway. It doesn’t matter how sharp it is; the screen is just too damn small.
What would be nicer for anyone buying these expensive, output-only devices would be the ability to hook any tablet up to a 24″ HD monitor and then throw down some serious work or watch some serious Netflix.
Why can’t I, for the love of God, plug a USB memory stick into one of these things? It’s just a computer, computers have USB ports, ergo, I ought to be able to plug a memory stick into a tablet.
Whose artificial constraints and massively egocentric desire to organize all the world’s information are keeping me from using a tablet to quietly organize mine in the comfort and literal walled garden of my own home?
There’s no Microsoft Excel on a tablet, and Google Sheets is obviously not a useful replacement. But that is a moot point, because…
…the input options for tablets are also terrible: there are no sensible input options for anyone wanting to do more than consume information. Content creation, especially when it comes to inputting data, is impossible. Keyboards integrated into rubbery, crappy covers will not help here. So tablets are still only useful for 10-foot user interfaces and things that don’t require much precision.
Sure, Autodesk and other big-league CAD companies have built glorified 3D model viewers for tablets, but note how many of the app descriptions include the word “view” but not the words “edit” or “create”. No serious designer would work without keyboard shortcuts and a mouse or Wacom tablet.
The walled-garden approach to user-interface design and data management in the cloud is terrible. There are multiple idioms for saving and loading data, multiple processes, and no uniformity at the operating-system level. By killing off the single Open dialog, coupled to hierarchical storage and user-defined storage schemas in the form of endlessly buried directories, these platforms make it very hard for a user to get a sense of the scale and context of their data.
People have their own ways of navigating and keying in on their data, but the one-size-fits-all cloud model rubs abrasively right up against that. It seems to me that eliminating the Open dialog was more an act of dogma than of user testing, and that it’s almost heresy (and in fact essentially coded into OS X (via App Sandboxing), iOS, and Android’s DNA now) to wish for the unencumbered dialog to return. Cordoning data off into multiple silos is akin to saving data on floppies or that old stack of burned CDs you might have lying around somewhere: it’s not really fun to have to search through them all to find what you were looking for.
Unfortunately, because no one will consider the above points, no one is currently thinking about using these phones and tablets for what they really are: full-fledged desktop/laptop replacements, particularly when combined with things like Nvidia’s GRID technology for virtualized 3D scene rendering.
The market for computing devices is now very strange indeed. Phablet manufacturers sell input/output-hobbled devices for more than full-fledged machines and even command huge price premiums for them. Meanwhile, the old desktop machine in the corner is still more than powerful enough to survive another processor generation, but it’s the old hotness and no one really wants to use it. And no one is making real moves to make the smartphone the center of computing in the home, even though that is where most of the useful performance/Watt research and development is going.
So here’s a simple request for the kind of freedom to use things in a way the manufacturers never anticipated: I’d like text editing on a big monitor to work flawlessly out of the box with my next smartphone, using HDMI output and a Bluetooth keyboard. (Why aren’t we there yet? It took several versions of Android and several ROM flashes before I could get it working at all, and even now the Control keys don’t quite work.)
Add in Chrome / Firefox on the smartphone and you have enough of a setup for a competent web developer to do some damage and get through most of the write, refresh, test development cycle. And to do this anywhere via WiFi or HSDPA/LTE connections (a fallback data connection apparently being necessary if you ever want to do work at this place, snarkiness aside).
There’s so much more usability that could be unlocked if the input, output, and storage subsystems of the major smartphone OSes were ever left unconstrained.
Such a shift to smartphone primacy is almost possible today, and I can only wonder whether in a few years’ time (2-3 phone generations) this scenario will become the norm. I’m not holding my breath, though. The companies involved in the current round of empire building (including some of the companies involved in the last one) seem too concerned with controlling every part of the experience to allow something better to reach us all.
So I’m looking for a pair of cables to adapt a 4-pin 3.5mm TRRS headphone + mic jack to two 3-pin 3.5mm TRS jacks, because I want to do a little audio-synthesis experimentation with my Android tablet and phone. But I’ve now learned that, for no good reason beyond screwing consumers on overpriced accessories, Apple took an existing standard, forked it, and then patented it.
Now I’m not sure which cable to buy, because no one clearly marks which standard their cables conform to, and the two wirings are incompatible with one another. The standards bodies were asleep on the job, and for low-end commodity goods like wire and headphone splitters, who actually wants to take the time to sort it out?
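For reference, the two competing TRRS pinouts, as far as I can tell, break down like this (CTIA/AHJ being the variant Apple uses):

OMTP: Tip = left audio, Ring 1 = right audio, Ring 2 = microphone, Sleeve = ground
CTIA/AHJ: Tip = left audio, Ring 1 = right audio, Ring 2 = ground, Sleeve = microphone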
Note that the only difference is the reversal of the microphone and ground positions. This is 100% bullshit masquerading as progress. I don’t see how this isn’t consumer hostile or how this in any way fosters healthy competition. I don’t see how the USPTO and other patent-granting bodies could see this and somehow consider it a reasonable thing to allow. This helps no one, not even Apple’s customers. It’s just a spiteful bit of “engineering”.
So when you wonder why you can’t just reuse that headset + mic you got for your Apple device on any other device you may have bought on the open market, this is the reason why.
Update: OK, I realize this post is a bit unfair to Apple; this is ultimately a problem with the way intellectual property is assigned. It is, of course, not the government’s job to decide the pinout for an audio headset. But it still strikes me as extremely stupid that intellectual property rights cause situations like this. Since the functionality to control your phone or music player via an inline controller + mic has spread to all of the devices on the market anyway, what was the point of utterly bifurcating something simple into two incompatible standards? It seems to have been done only so that one party could avoid the other party invoking their IPRs, to the detriment of everyone.