Thresholds in computing: Part 8 – Thin-ITX vs Mini-ITX

(Part 8 in a series of posts on small-form-factor computing)

When Intel released the first Thin-ITX motherboards at Computex in 2011, many folks were left scratching their heads: what use is Thin-ITX when we already have Mini-ITX? It quickly seemed that, aside from use in all-in-one (AIO) systems, Thin-ITX was a stillborn idea, consigned to a quiet fizzle-out once Intel had a more robust strategy figured out.

Thin-ITX parts #

Today it is still hard to tell whether Thin-ITX is really going to take off. But what I noticed, assembling my own Thin-ITX system, is that it’s not about the “thin” at all. The first difference between the two hits you as you shop for parts for the build: laptop RAM (SODIMMs), mSATA drives, perhaps even a wifi card. It feels a lot like buying parts for a laptop upgrade. Obviously these parts are needed for the thinness, but they also take up much less space.

Q87T layout

Laptop SODIMMs lie flat against the board, which means their footprint on the board is much wider than that of regular desktop DIMM slots. But because the memory slot no longer takes up an entire edge on its own, more edge space is freed up for connectors and ports. At the same time, the board area beneath the SODIMMs, apart from the solder points, remains usable for small chips and the like.

Desktop DIMMs stretch from edge to edge

This matters for ITX allometry. Mini-ITX could not go any smaller than 17×17cm, because a smaller board would no longer fit a full-size desktop memory slot. On Thin-ITX, right now, no single component dictates the minimum width. That Thin-ITX PCB looks dense indeed, but I believe it would be possible to go smaller, say 16×16cm or below, before we bump into another shrinking bottleneck. There are reasons for not doing so, such as spec fragmentation: we don’t want too many small-form-factor standards. The more of them there are, the more troublesome it is to design hardware and accessories that work well with all of them.

Sandisk X110 256GB SSD, in mSATA format

While many Thin-ITX builders still end up using 2.5″ drives for their better availability and variety, the mSATA slot offers an alternative storage option for more compact systems. The advantages are twofold: an mSATA drive is smaller, and it needs no cabling to PSU or motherboard connectors.

This space-saving advantage sounds farcical on paper: it’s just two cables! But combine it with the other compounding savings, and the difference is very tangible once you assemble your own Thin-ITX system. And anyone who has had to wrestle with one of those SATA power T-connectors knows just how unwieldy those things can be in a small-form-factor system.

Power delivery from Mini-ITX to Thin-ITX #

Look at that. Take a good, long look at that.

Motherboard 24-pin power cable

It’s a sight that anyone who has assembled a desktop system in the past ten years will be familiar with.

Zoom out, take another look.

The full bundle of PSU cables

Just business as usual. A big, black, messy bunch of PSU cables, nothing more.

Building SFF systems is like watching a good satire: all the absurdity of fitting a legacy support system to a modern device comes starkly into focus once the empty cruft is stripped out. On my previous Mini-ITX build, there was a short section of 24-pin cable, no longer than 20cm, whose sole purpose was to carry power from the LR1005 DC-DC converter to the motherboard. A thick, stiff, unwieldy 24-pin cable to connect two power points a mere 17cm apart. And that cable pretty much dominated the space, leaving little room for any other components: front USB ports, a small card reader, and so on.

LR1005 DC-DC converter connected to motherboard by a thick section of cable

What do we really need 24 pins on that cable for? Understanding the development of this cable requires a trip to the past.

A quick run-down of ATX PSU history:

  1. The first IBM PC power supply delivered 5V and 12V. Most microchips of the time used 5V; 12V was for higher-power devices, such as the motors in hard drives and cooling fans.
  2. Subsequent microprocessors, such as the 486, used 3.3V. It’s an observable trend: the smaller transistors get, the lower their operating voltage can go. Motherboards derived the 3.3V the processor needed from the 5V rail, using inefficient linear regulators (which simply dissipate the unneeded power as heat).
  3. Intel developed the ATX standard, including the 20-pin power supply connector, in 1995. A 3.3V rail was now a requirement. And since low voltages are sensitive to fluctuations (a 0.3V rise or drop matters far more at 3.3V than at 12V), multiple wires were provided for 3.3V.
  4. Microchips grew small enough for their operating voltages to drop to around 1V. The 3.3V rail remained to power other chips on the motherboard, while the CPU’s supply was now converted on the motherboard itself, from the 5V rail.
  5. The Pentium 4 happened. It needed so much power that the 5V rail was insufficient, and it was more effective to draw that power from the 12V rail instead (the quick calculation after this list shows why). Since the 20-pin motherboard cable has only one 12V wire, the ATX12V extension was introduced, adding a separate 4-pin P4 connector (two 12V wires plus two grounds) for the CPU. The 5V rail continued to be included for legacy reasons.
  6. In 2003, the 20-pin motherboard cable was extended by four more pins, adding one more 3.3V, 5V, and 12V wire each. (The fourth added pin is another ground.)
    24-pin motherboard cable pin-out diagram
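
To put some rough numbers on points 3 and 5 above, here is a quick back-of-the-envelope sketch. The 100W CPU figure is just an illustrative assumption on my part, not a number from any spec:

```python
# Back-of-the-envelope only; the 100 W CPU figure is an illustrative assumption.
cpu_power_w = 100.0

# Current needed to deliver that power over a 5 V rail vs a 12 V rail.
amps_at_5v = cpu_power_w / 5.0    # 20 A: needs many thick wires and pins
amps_at_12v = cpu_power_w / 12.0  # ~8.3 A: manageable over a small connector

# Why low-voltage rails need tighter regulation: the same 0.3 V swing is a
# much larger fraction of a 3.3 V rail than of a 12 V rail.
swing_v = 0.3
print(f"{amps_at_5v:.1f} A at 5 V vs {amps_at_12v:.1f} A at 12 V")
print(f"0.3 V is {swing_v / 3.3:.1%} of 3.3 V, but only {swing_v / 12:.1%} of 12 V")
```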

We are looking at the buildup of 30 years of specification neurosis in a single connector. There is a pin for standby power (+5VSB). There is a pin for a power-on signal (PS_ON). There is a pin for the power supply to tell the motherboard “okay, the current is now stable, you may proceed to switch on” (PWR_OK). Not to mention the multiple ground pins, which make up a third of the connector. And keep in mind that this still does not supply the CPU’s power directly; that has to be converted on the motherboard itself, down to the roughly 1V the CPU needs. I am well aware of the need for backward compatibility, but when a system stops making sense for whatever you are trying to build, at some point you just have to ask: when can we achieve a clean break?
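
For the curious, here is a small tally of that connector, using the pin counts of the standard ATX 2.x 24-pin pinout as it is commonly documented (worth double-checking against your own PSU's documentation):

```python
from collections import Counter

# Pin counts for the standard ATX 2.x 24-pin motherboard connector,
# as commonly documented. Verify against your PSU's pinout before relying on it.
pins = Counter({
    "GND": 8,      # ground: a full third of the connector
    "+5V": 5,
    "+3.3V": 4,    # several wires because of the tight tolerance at 3.3 V
    "+12V": 2,
    "-12V": 1,     # legacy rail, rarely used by modern boards
    "+5VSB": 1,    # standby power
    "PS_ON": 1,    # "please switch on" signal from the motherboard
    "PWR_OK": 1,   # "the current is now stable" signal from the PSU
    "NC": 1,       # reserved (was -5V on older supplies)
})

total = sum(pins.values())
print(f"{total} pins, of which {pins['GND']} ({pins['GND'] / total:.0%}) are ground")
```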

In recent years there has been an increasing willingness to publicly move on from this. Fujitsu announced a 12V-only motherboard in 2011, although adoption seems to be largely confined to industrial servers and similar systems. Such a motherboard takes in power at a single voltage, 12V, and converts it on the board to the other voltages required by the various chips, rather than having that conversion take place in a bulky power supply.

The Open Compute Project also has a design for a single-voltage power supply, because its AMD and Intel motherboards are designed to run on 12V. AMD’s Open Modular Server takes a similar approach to power delivery.

But for the rest of us, the answer is simpler: once you no longer need an ATX power supply, you can do away with this neurosis altogether.

Thin-ITX simplicity #

The ITX form factor actually has a lot of history. Started by VIA in 2001, it was, in essence, a stripped-down ATX board, using the innermost set of four mounting holes. Any new form factor in those days needed compatibility with existing systems to reach widespread availability, so the earliest ITX boards kept the standard ATX motherboard power connector. Many of these ITX boards went into industrial and embedded systems, and the form factor only trickled into consumer systems after Intel switched its processor strategy around to target lower power, with the Core 2 product line and beyond.

With Thin-ITX, Intel finally made a clean break from all that. There are some high-power ITX builds, designed by hobbyists who enjoy pairing the motherboard with a graphics card larger than the board itself; these still need ATX PSUs and the accompanying 24-pin cable. But for the most part, ITX systems are relatively low-power systems that nonetheless support all but the highest-end CPUs, and they don’t have to deal with ATX PSUs at all.

The Thin-ITX Design Guide makes 19V input a requirement. It can be supplied via a 19V DC jack or a 2-pin internal connector, and every other required voltage is converted on the motherboard itself. No more 24-pin motherboard cable!
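
To get a rough sense of scale for that single 19V input, here is a small sketch. The wattage and efficiency figures are illustrative assumptions on my part, not numbers from the Design Guide:

```python
# Illustrative assumptions: a 65 W CPU, ~25 W for the rest of the board,
# and a 90% efficient on-board conversion stage. Not figures from the spec.
cpu_w = 65.0
rest_of_board_w = 25.0
converter_efficiency = 0.90

input_power_w = (cpu_w + rest_of_board_w) / converter_efficiency
input_current_a = input_power_w / 19.0

print(f"~{input_power_w:.0f} W drawn at 19 V is only ~{input_current_a:.1f} A")
# Roughly 5 A: easily carried by a DC barrel jack or a small 2-pin header,
# with no need for a 24-pin harness.
```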

Complete build, minus PCIe card and wifi antennae

The first thing that struck me about this build is how barren it looks. No chunky, inflexible 24-pin cable mess. No SATA power cables with unused connectors left dangling. No 4-pin P4 CPU cable, which usually splits off from the other cables and plugs in near the middle of an ITX board. Just front-panel cables for the power button and USB3 ports, which tend to be thinner (HDPlex’s cable for the USB3 cluster makes no sense to me yet).

The only other cables in that photo join the 3-pin AC power input to the internal power supply, and then connect its 19V output to the motherboard. Both could be moved out of the case if I went with an external 19V AC adapter, but I like this arrangement better.

Conclusion #

Thin-ITX is really about moving away from some of the ATX spec’s legacy baggage, the parts that are no longer relevant to modern desktop systems. The height reduction is a bonus; the real value is in the options it opens up for a clutter-free system.

There is a tradeoff, of course: Thin-ITX systems are generally limited to a TDP of 65W or less, which restricts the range of CPUs that can be safely and effectively used. But that’s a story for another post.

On to Thresholds in computing: Part 9 – heat dissipation and Thin-ITX

See also

Thresholds in computing: Part 7 – ITX cooling