Balanced audio was developed to fight noise and other issues common in professional audio contexts. But it was only a matter of time before high-end consumer audio companies picked up on it as a gotta-have marketing differentiator. Below I explore a number of issues any manufacturer should think about before deciding that they need to support balanced inputs and outputs in their consumer hi-fi equipment.
Balanced doesn’t mean differential
There’s a fair amount of confusion, or at least assumption, regarding what balanced audio is. The primary defining feature of a balanced topology is that outputs consist of two legs — a hot and a cold with identical output impedances — and that inputs differentially sum the hot and cold legs. This produces a net signal in which any common-mode noise picked up in the interconnection is canceled out.
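As a toy numeric illustration — the sample values are made up, and real signals are of course continuous waveforms, not single numbers — the differential sum cancels noise induced identically on both legs:

```javascript
// Illustrative arithmetic only: a differential input subtracts the cold
// leg from the hot leg, so noise picked up equally on both legs
// (common-mode noise) cancels while the wanted signal survives.
function differentialReceive(hot, cold) {
  return hot - cold; // the receiver's differential sum
}

const signal = 0.5;           // wanted audio sample (hypothetical value)
const commonModeNoise = 0.25; // noise induced identically on both legs

// Fully differential drive: hot carries +signal, cold carries -signal.
const hot = signal + commonModeNoise;
const cold = -signal + commonModeNoise;
console.log(differentialReceive(hot, cold)); // 1 (2 * signal; noise gone)

// "AKG-style": only the hot leg is driven; the cold leg is merely
// terminated, so it picks up the same noise but carries no signal.
const hotOnly = signal + commonModeNoise;
const coldTerminated = commonModeNoise;
console.log(differentialReceive(hotOnly, coldTerminated)); // 0.5 (noise still gone)
```

Note that the noise cancellation is identical in both cases; the only difference is the 2× signal level of the fully differential drive.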
The above doesn’t require that a balanced output be differential. The noise cancellation will work as intended if only one leg of the hot/cold pair is driven, as long as the leg that isn’t driven is terminated with the proper impedance. Before you cry “Foul!”, note that many classic AKG mics are configured this way.
So, a truly balanced output doesn’t need to be differential, and it’s expected that a truly balanced device will produce identical or nearly identical output behavior if it receives a fully differential input versus an “AKG-style” input. Contrast this with a topology that I refer to as “pure differential.” In a pure differential system, it’s assumed that both hot and cold are actively driven with differential signals all the way from the source to the system output (typically loudspeakers). This distinction is important in the discussion that follows.
Typical pro-audio style balanced topologies are useless for consumer hi-fi audio
A lot of line-level balanced gear in the wild consists of a single-ended core with balanced-to-single-ended and single-ended-to-balanced converters on the front and back. This works well for professional sound setups where the main motivation is to eliminate E.M. noise and other issues resulting from long cable runs. But for consumer hi-fi, where cable runs are so short that E.M. noise pickup is negligible, this approach offers no benefit whatsoever. Worse, it likely introduces signal degradation owing to the additional balanced receiver/driver circuitry. In fact, the requirement that a balanced device produce identical output whether or not the input is actively differential imposes all sorts of constraints, which may make the highest level of fidelity impossible or impractical to achieve while maintaining truly balanced behavior.
Maintaining the highest levels of fidelity with a pure differential approach is much easier. Further, a pure differential approach when implemented for the highest levels of fidelity may result in better performance than its non-differential equivalent. This is because a fully differential configuration has the potential to null even-order nonlinearities in the gain stages, something not true in general with pure balanced approaches. The problem with a pure differential approach is that it introduces use limitations the customer is not likely to fully understand.
Supporting balanced or differential audio gets very complex very quickly
Will preamp inputs take only balanced/differential signals, or will there be provisions for single-ended inputs as well? Supporting both can lead to a great deal of extra circuitry and/or circuit switching.
Will outputs be both balanced/differential and unbalanced? Doing both probably means additional circuitry.
Will inputs and outputs be “truly balanced” or “pure differential”? Are you ready to educate your users in the proper use of pure differential devices?
Is it better to try removing the need to balance rather than use balancing to cover up the limitations of existing circuits?
The only thing balancing offers hi-fi, and then only when it is implemented as a pure differential topology, is the potential to null even-order nonlinearities generated by the electronics. Which is to say, any benefit seen by going to a pure differential setup means something in the system isn’t working as well as it could. This means you may be able to achieve the same benefit at much lower cost by optimizing your existing circuitry for better overall linearity.
Of course a pure differential topology will get rid of even-order nonlinearities even if they are small, and that might bring some added benefit after you’ve optimized things as much as possible. However, as with anything, be critical and weigh the costs. Directly following from this …
Is balancing line level signals worth it?
Running a pair of power amps differentially often results in better sound because power amps tend to be the most stressed devices in a system. They tend to operate more deeply in their nonlinear regions than other equipment. But you don’t need a balanced system to experience the benefit of differential amps — you only need to bridge your power amp setup. Note that many switching amp topologies are already bridged.
While it’s not uncommon for power amps to generate audible even-order nonlinearities in typical use, even-order nonlinearity in your line level designs may already be so low that turning them into a fully differential topology won’t give you any audible benefit. If there is a benefit, you may, as pointed out above, be able to achieve the same benefit at much lower cost by optimizing your existing line level circuitry for better overall linearity.
Are balanced systems still a thing?
There was once a lot of buzz (pun intended) in high-end hi-fi circles about balanced systems. But things have changed a lot in the last couple decades. What percentage of the market do the balanced-happy or balanced-sympathetic now represent? If it’s 10% or less, you will need to become quite a hero in that circle if you’re going to make back your development costs. The alternative is to commit yourself to convincing those outside the balanced circle that they need to get in. In deciding to do this, keep in mind that those folks walked away from the temptation at least once already.
The new DAC MK 5 that I’ve been working on for Audio by Van Alstine has finally been released.
I am very grateful to Frank Van Alstine for giving me so much room to develop the best reasonably priced DAC I know how to design. The results have so far exceeded everyone’s expectations, including my own. We all learned a lot through the process of designing this unit, which is as it should be. Rapid prototyping turned out to be instrumental in exploring a number of early electronic design alternatives. Looking forward to the reviews!
There’s a growing series of good videos covering ESP8266 Tips & Tricks on ACROBOTIC’s YouTube channel. The ESP8266 has become quite a darling in the IoT world, and a seriously cool community is growing around it.
NodeMCU devkit picture by Vowstar (Own work) [CC BY-SA 4.0 (http://creativecommons.org/licenses/by-sa/4.0)], via Wikimedia Commons.
I was in a large technology retailer store the other day when I noticed a young adult/late adolescent standing in front of the audio display at the entry to their high-end audio and video department. The system consisted of a pair of hybrid electrostatic loudspeakers from a company with a long history of making such things, a tube amp of notable lineage, and some other stuff. It was playing (not the listener’s choice) some really awfully mastered pop music. Truly, utterly, dreadfully done.
I observed the listener to see what he found interesting in the setup. I’m pretty sure he didn’t know I was watching him. This is all conjecture of course, but here is my impression of what was happening in this young lad’s mind:
Wow, this looks impressive. I’m wondering how a system like this might improve my experience of music and whether I should start coveting something like this. Hmmm … Hmmm … maybe if I move back a little … or up … It really does look impressive … Hmmm … Hmmm … I guess my ears aren’t good enough to notice a difference. I’ll go look at TVs.
The system was set up to let the dipole electrostatics work well: lots of space all around. In spite of this, like the young lad, I could barely stand to listen to it. With the chosen content, I’m guessing the system was showing about 10% of what it could do if you knew what to listen for and close to zero if you didn’t.
There once was a time when high-end gear could make poorly done recordings sound listenable. I am beginning to think that current kill, crush, and destroy mastering practices have succeeded in subverting this.
I am cautiously optimistic about the recently announced resolution of conflict between Arduino LLC and Arduino Srl. Back when the issue flared up, I took sides based on the information I had available, but I then decided to refrain from public comment as additional information on the issue did not seem to be forthcoming.
While I am hoping this announcement means a lovebath for everyone, I am concerned about some of the wording used in the announcement, specifically that, “The newly created ‘Arduino Holding’ will become the single point of contact for the wholesale distribution of all current and future products, and will continue to bring tremendous innovations to the market.” Does this mean that Arduino will shift its focus toward for-profit and more closed designs? In other words, will the hardware arm of the project maintain the project’s fully open culture? There have been signs that things have been closing up on the software side as well, since Arduino LLC seems to be in no hurry to answer questions regarding whether the code for their new SaaS IDE will be open sourced or not.
So, cautiously optimistic I am.
Since I had to install Arduino support in a new Netbeans installation, I decided it was a good time to document what’s needed. Check it on the wiki.
It’s amazing how much of what comes up here is relevant forty-seven years on.
National Instruments hosts a very handy table for finding IPC 7351 equivalents to JEDEC and IEC package names.
I’m linking to a local PDF version for convenience.
Hybrid applications seem to be gaining traction now. In what follows, I’d like to present thoughts on an alternative to the emerging standard hybrid app architecture.
The conventional hybrid architecture
In this model, which to the best of my knowledge is used by Electron, NW.js, and others, the user interface is rendered as HTML using whatever HTML, CSS, and front-end JS frameworks you desire. The use of Web technologies for the UI is an explicit goal of this architecture.
The UI is tightly bound in a one-to-one relationship with the app engine.1 The app engine is implemented with a Web back-end technology, typically Node.js. The app engine makes system calls through the engine’s baked-in features or through generic child_process.exec()-like calls. This means custom and platform-specific behaviors that the app may require will need to be implemented as external child_process.exec()-callable units.
This architecture does a good job of leveraging Web technologies to create secure conventional desktop apps. In addition, frameworks like Electron and NW.js have matured to the point that developing hybrid apps that use many desktop app conventions is relatively easy.
An alternative hybrid architecture
In what follows, I present an alternative approach to developing hybrid apps — one that demands more carefully considered design but yields greater flexibility in return.
In this model, the tightly bound UI↔app engine connection is replaced by a REST API. Thus the app engine becomes a REST server, possibly embellished with some needed superpowers for accessing host resources. When the app interface is API driven, any REST client technology can be used for the interface, including HTML/CSS/JS clients, native mobile clients, terminal clients, etc. In addition, the client need not be local, making remote-controlled apps almost trivial to implement. Adequate measures must be taken to assure secure and authorized communication with the REST server.
The other change in the above model is that the REST server is implemented in C++. When this is the case, interacting with the host system can be done directly using a wide range of C++ libraries written for this purpose. The choice of C++ here is somewhat arbitrary; it can be any language that offers ready-to-roll support for the system manipulations your app requires.
The two changes outlined above are decoupled—meaning that either can be adopted in the absence of the other.
One downside to using C++ (or Java, or Python…) for the server part of this approach is that the server must be able to run on the host platform. This isn’t a significant issue for desktop deployment: only recompiling the REST server for each target platform will be required. But it does currently present a problem for mobile deployment, as few mobile platforms provide native support for C++ and its oft-used libraries. If you plan to target mobile apps, implementing the REST server in a more universally supported language will likely be required.
Follow updates on my wiki.
1. I’m using “app engine” generically here, not as a reference to Google’s App Engine.