JavaScript and ‘this’

Keeping your head square about JavaScript’s this variable can be a little challenging. A wonderfully concise summary of the issue appears in chuckj’s answer to a StackOverflow question (modified here to account for differences between ECMAScript 5’s strict and non-strict modes):

“this can be thought of as an additional parameter to the function that is bound at the call site. If the function is not called as a method then the global object (non-strict mode) or undefined (strict mode) is passed as this.”

Let’s see what this means (pun intended?) for various scenarios.
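To make the call-site idea concrete, here’s a minimal sketch (the function and object names are mine, not from chuckj’s answer):

```javascript
function whoAmI() {
  return this;
}

// Strict-mode variant for comparison.
function whoAmIStrict() {
  "use strict";
  return this;
}

const obj = { whoAmI, whoAmIStrict };
const other = {};

// Called as a method, `this` is the object before the dot.
console.assert(obj.whoAmI() === obj);

// Called as a plain function in strict mode, `this` is undefined.
console.assert(whoAmIStrict() === undefined);

// call/apply/bind let you supply `this` explicitly at the call site.
console.assert(whoAmI.call(other) === other);
```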

Continue reading “JavaScript and ‘this’”

What is a single-ended amplifier?

Single-ended amplifiers, whether made with triodes (as in the single-ended triode, or SET, amplifier), pentodes, or solid state devices, entered the high-end consumer audio consciousness a couple of decades ago, and they continue to have a particular pull for a certain camp of audiophiles. This may lead the rest of us to wonder whether these folks are onto something that we should pay attention to. However, there seems to be some confusion regarding what exactly single-ended amplifiers are. So I thought I’d try to clear things up a little.

So, what exactly is a single-ended amplifier?

It might be easier if we first cover what isn’t a single-ended amplifier.

Continue reading “What is a single-ended amplifier?”

Displays for classic Arduinos

Arduino driving TFT display

You often hear that to work with graphic displays on the Arduino platform you need to use a Mega or other high-performance board. I got curious about how much you can actually get done on a measly Uno and similar boards based on the classic ATmega328P. You can find the ongoing results on my wiki.

The story so far: 128×64 and smaller monochrome displays are usable. The smallest TFT displays much less so.

AVA preamp chassis

AVA “SLR” remote controlled preamplifier chassis

A recent chassis redesign project I undertook for Audio by Van Alstine is now in production.

This project pushed “constraints as creative resource” to the limit. The client specified that the design language and elements from the product’s predecessor be maintained—down to the knobs, faceplate treatments, and typography.

The project brief revolved around electronic and industrial design work to bring the client’s preamplifier platform up to functional parity with current market offerings within a framework that fits with the client’s existing manufacturing capabilities. The result is a platform that is significantly more capable than what it replaces yet easier for the client to manufacture. It is also amenable to comprehensive appearance changes if and when the client deems the timing is right.

So while it might not seem there’s much innovation on the outside, there is a lot of innovation for the client on the inside.

HTTP API apps: Part 1

code screenshot

Way back near the turn of the century, Winamp was one of the coolest things around. One of the things that made it cool was plugins, and one of the coolest plugins was one that provided a Web interface. Not only did it let you control your Winamp remotely, but it also had decent interactivity, like a nice indicator of remaining track time, etc. To get the client-side job done, it used plain old HTML, CSS, and JavaScript rather than Flash. And this got me thinking: Why not use HTML, CSS, and JavaScript for the UI for apps in general?

Now I wasn’t the first or only person to think this by any means, but the idea of using Web technologies to build desktop apps didn’t really begin to gain much traction until fairly recently. Frameworks like NW.js and Electron, on which GitHub’s Atom and Microsoft’s Visual Studio Code are built, do just that. But they’re not quite the same thing as what I had in mind: implementing an app as a server that all sorts of different kinds of UI clients might interact with, including those built with Web front end technologies.

Here’s what I mean. Below is a diagram of what Electron and similar frameworks do:

Conventional hybrid app architecture

In this model, the user interface is rendered inside a dedicated environment provided by the framework using whatever HTML, CSS, and front-end JS frameworks you desire. The UI is tightly bound in a one-to-one relationship with the app engine. The app engine is implemented with a Web back-end language, typically Node.js, and it makes system calls through the engine’s baked-in features or, if the baked-in features don’t do what you need, through generic child_process.exec()-like calls that then invoke custom functions written on the host.

What I’ve got in mind looks like this:

Alternative hybrid app architecture

In this model, the tight UI↔app engine binding is replaced by a flexible REST API binding, and the app engine is simply a REST server embellished with whatever libraries are needed for accessing the required host resources.

The UI can be implemented with any kind of client that can speak the API, including HTML/CSS/JS clients, native mobile clients, terminal clients, etc. The REST server can be implemented in any language on which a REST server can be built. Thus the language the “app” is written in can be one that most easily supports the necessary system interactions or is otherwise best-suited to the app’s requirements. Also, unlike the conventional architecture, the client need not be local, which makes remote-controlled apps almost trivial to implement. As with any network-based communication, adequate measures must be taken to assure secure communication with the REST server.

So a few vacations ago, I set out to see whether this approach was actually workable. Because I have a high threshold of abuse, my test case wasn’t going to be some self-contained desktop app. Instead, I created a mini research project to study this architecture in the context of real-time control of a physical appliance. (In reality, this was relevant to some other research I was doing. My tolerance of abuse had little to do with it.) The appliance I decided to prototype was an audio preamp with screen-based control.

This is where I ended up:

It’s not much to look at, but there’s a fair amount going on under the hood — which I’ll talk about in a future post.

But for now, that’s all my stories.

Programming Fundamentals with Processing

Kind of a big day here. I’ve decided to put online what I’ve written so far of my book on Processing. I’m pretty sure this will motivate me to do more work on it.

I’m about 80% done with the first half. I’m sort of thinking that once the first half is actually done, I might try a Kickstarter or GoFundMe.

Feel free to kick my butt about this.

Thoughts on balanced audio for hi-fi

Balanced audio was developed to fight noise and other issues common in professional audio contexts. But it was only a matter of time before high-end consumer audio companies picked up on it as a gotta-have marketing differentiator. Below I explore a number of issues any manufacturer should think about before deciding that they need to support balanced inputs and outputs in their consumer hi-fi equipment.

Balanced doesn’t mean differential

There’s a fair amount of confusion, or at least assumption, regarding what balanced audio is. The primary defining feature of a balanced topology is that outputs consist of two legs — a hot and a cold with identical output impedance — and that inputs differentially sum the hot and cold legs. This produces a net signal in which any common-mode noise picked up in the interconnection is canceled out.

The above doesn’t require that a balanced output be differential. The noise cancellation will work as intended if only one leg of the hot/cold pair is driven, as long as the undriven leg is terminated with the proper impedance. Before you cry “Foul!”, note that many classic AKG mics are configured this way.

So, a truly balanced output doesn’t need to be differential. Further, it’s expected that a truly balanced device will produce identical or nearly identical output behavior whether it receives a fully differential input or an “AKG-style” input. Contrast this with a topology that I refer to as “pure differential.” In a pure differential system, it’s assumed that both hot and cold are actively driven with differential signals all the way from the source to the system output (typically loudspeakers). This distinction is important in the discussion that follows.
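A toy numeric sketch of the common-mode cancellation described above (the sample values are made up):

```javascript
// Identical noise couples onto both legs; the differential input
// subtracts cold from hot, so the common-mode term drops out.
const signal = [0.1, -0.3, 0.5];
const noise = [0.02, 0.05, -0.01]; // picked up equally on both legs

const hot = signal.map((s, i) => s + noise[i]);
const cold = signal.map((s, i) => -s + noise[i]);

// Differential receiver: hot minus cold.
const received = hot.map((h, i) => h - cold[i]);

// received ≈ [0.2, -0.6, 1.0] — twice the signal, noise canceled.
```

The AKG-style case follows the same math: with the cold leg carrying only the noise, hot minus cold still yields the clean signal, just at half the level.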

Typical pro-audio style balanced topologies are useless for consumer hi-fi audio

A lot of line-level balanced stuff in the wild consists of a single-ended core with balanced to single-ended and single-ended to balanced converters on the front and back. This works well for professional sound setups where the main motivation is to eliminate E.M. noise and other issues resulting from long cable runs. But for consumer hi-fi, where cable runs are so short that E.M. noise pickup is negligible, this approach offers no benefit whatsoever. Worse, it likely introduces signal degradation owing to the additional balanced receiver/driver circuitry. In fact, the requirement that a balanced device produce identical output whether or not the input is actively differential imposes constraints that can make the highest levels of fidelity impossible or impractical to achieve while maintaining pure balanced behavior.

Maintaining the highest levels of fidelity with a pure differential approach is much easier. Further, a pure differential approach, when implemented for the highest levels of fidelity, may result in better performance than its non-differential equivalent. This is because a fully differential configuration has the potential to null even-order nonlinearities in the gain stages, something not true in general with pure balanced approaches. The problem with a pure differential approach is that it introduces use limitations the customer is not likely to fully understand.

Supporting balanced or differential audio gets very complex very quickly

Will preamp inputs take only balanced/differential inputs or will there be provisions for single-ended inputs as well? Supporting both potentially leads to a crazy lot of circuits and/or circuit switching.

Will outputs be both balanced/differential and unbalanced? Doing both probably means additional circuitry.

Will inputs and outputs be “truly balanced” or “pure differential”? Are you ready to educate your users in the proper use of pure differential devices?

Is it better to try removing the need to balance rather than use balancing to cover up the limitations of existing circuits?

The only thing balancing offers hi-fi, and then only when it is implemented as a pure differential topology, is the potential to null even-order nonlinearities generated by the electronics. Which is to say, any benefit seen by going to a pure differential setup means something in the system isn’t working as well as it could. This means you may be able to achieve the same benefit at much lower cost by optimizing your existing circuitry for better overall linearity.

Of course a pure differential topology will get rid of even-order nonlinearities even if they are small, and that might bring some added benefit after you’ve optimized things as much as possible. However, as with anything, be critical and weigh the costs. Directly following from this …
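The even-order nulling is easy to see with a toy model — say each matched gain stage adds a small, hypothetical second-order error term:

```javascript
// Hypothetical gain stage with a weak second-order (even) error term.
const stage = (x) => x + 0.1 * x * x;

// Pure differential: drive matched stages with x and -x, then subtract.
// The x² term is identical on both legs and cancels; the linear (odd)
// term doubles.
const differential = (x) => stage(x) - stage(-x);

stage(0.5);        // ≈ 0.525 — signal plus distortion product
differential(0.5); // ≈ 1.0 — even-order term nulled, signal doubled
```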

Is balancing line level signals worth it?

Running a pair of power amps differentially often results in better sound because power amps tend to be the most stressed devices in a system. They tend to operate more deeply in their nonlinear regions than other equipment. But you don’t need a balanced system to experience the benefit of differential amps — you only need to bridge your power amp setup. Note that many switching amp topologies are already bridged.

While it’s not uncommon for power amps to generate audible even-order nonlinearities in typical use, even-order nonlinearity in your line level designs may already be so low that turning them into a fully differential topology won’t give you any audible benefit. If there is a benefit, you may, as pointed out above, be able to achieve the same benefit at much lower cost by optimizing your existing line level circuitry for better overall linearity.

Are balanced systems still a thing?

There was once a lot of buzz (pun intended) in high-end hi-fi circles about balanced systems. But things have changed a lot in the last couple decades. What percentage of the market do the balanced-happy or balanced-sympathetic now represent? If it’s 10% or less, you will need to become quite a hero in that circle if you’re going to make back your development costs. The alternative is that you have to commit yourself to convincing those outside the balanced circle that they need to get in. In deciding to do this, keep in mind that those folks abandoned the temptation at least once already.

AVA DAC MK 5 released

DAC MK 5 with Berkeley
Berkeley investigates the new DAC MK 5

The new DAC MK 5 that I’ve been working on for Audio by Van Alstine has finally been released.

I am grateful to Frank Van Alstine for giving me the room to develop the best reasonably priced DAC I know how to design. The results have so far exceeded all expectations, including my own. We all learned a lot through the process of designing this unit, which is as it should be. Rapid prototyping turned out to be instrumental in exploring a number of early electronic design alternatives. Looking forward to the reviews!

Acrobotic’s ESP8266 Tips & Tricks

NodeMCU devkit

There’s a growing series of good videos covering ESP8266 Tips & Tricks on ACROBOTIC’s YouTube channel. The ESP8266 has become quite a darling in the IoT world, and a seriously cool community is growing around it.

NodeMCU devkit picture by Vowstar (own work), CC BY-SA 4.0, via Wikimedia Commons.