Janko Keyboard

Binaural Synthesizer

Thorns in the Bed of Roses

Easier Arm

A Brief History of Code and Data Chunking

Cellular Calculators


The common model, taught in elementary school, says the world is made up of persons, places, and things (nouns), that perform actions (verbs).

There is nothing wrong with this theory. It describes pretty well how some human language is structured.

But nouns and verbs are not enough. You also need subjects and objects, which define a one way relationship between nouns.

With prepositions, things start to get messy, because they describe additional relationships, such as inside and outside, and the ambiguous "of".

Then you need tenses, like past, present, and future, further modified by moods, like subjunctive, and it goes on.

By the time you've finished studying grammar, you're pretty much bored to death with all the abstractions, and you still don't have a clue what life is about.

So you give up on analyzing what is going on, and concentrate on how you feel about your various friends, and what you are going to do together, and you dream about what you will do when you grow up.

Sometimes it seems the whole purpose of elementary education is to make you lose any interest in understanding what the world is, and how you fit into it.

Consider what would happen if you had been taught that your world is made up of relationships. Your mom, dad, brothers and sisters, dog, and cat are your family relations.

You are warmed by the Sun, caressed by the wind, and fed by a whole intricate web of economic relations, with the farmers who grow your food, the brokers who distribute it, the grocers who stock it.

Your classmates are connected in the same way you are, as well as differently, to other networks into which they were born. The more you learn about your friends, teachers, vendors, and protectors, the more fascinating your life becomes.

You begin to see how you fit into the web of life, and how it nourishes and protects you. You develop respect for the people in your web, and strangers, whose connections you do not yet understand.

If you have scientific interests, you learn that the world is not made of solid objects, but of connections between a wide menagerie of tiny particles, which pass influence in endless waves of interaction.

It becomes apparent that the world is not a bunch of objects, but a web of activity in which individuals, animals, plants, and even rocks are supported, and nourished by natural processes and forces.

Properly educated children would understand and feel supported by the web of life that supports us all. Such children would feel a reverence for everyone and everything around them.

Janko Keyboard

On a standard piano, a tune is played differently in different keys. To accompany a variety of vocalists, a pianist must have taught his fingers the motions needed to play in each key. In 1882, pianist Paul von Janko designed a keyboard that eliminated this annoying requirement.

Early Janko keyboard implementation:

The piano has the same number of notes as before. The key linkages are a bit more complex than those of the standard keyboard, but the tradeoff is a piano that is easier to play.

Each row in the keyboard contains every other note in a regular piano keyboard. Each octave of the Janko keyboard occupies less linear keyboard space than the conventional keyboard.

According to the website:

"Because it has an isomorphic layout, each chord, scale, and interval has a consistent shape and can be played with the same fingering, regardless of its pitch or what the current key is. If you know a piece of music in one key you can transpose it simply by starting at a different pitch because the fingering is the same in every key."
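The isomorphic property can be sketched in a few lines. This is a minimal model, assuming each row ascends in whole tones and adjacent rows are offset by one semitone (the usual Janko arrangement); the function name and base note are illustrative only:

```python
# Sketch of an isomorphic (Janko-style) pitch mapping.
# Assumptions: each row ascends in whole tones (2 semitones per column),
# adjacent rows are offset by one semitone, and (0, 0) sounds MIDI note 48.

def janko_midi_note(row, col, base=48):
    """MIDI note number for the key at (row, col)."""
    return base + 2 * col + row

# The same fingering shape gives the same interval anywhere on the grid:
# one column right is always a whole tone, one row up is always a semitone,
# so every chord and scale keeps a consistent shape in every key.
major_third = janko_midi_note(0, 2) - janko_midi_note(0, 0)
```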

In BiSynth, the keys are evenly spaced in a hexagonal grid. This better utilizes the limited iPhone keyboard space. The keyboard scrolls from side to side to provide five full octaves.

A video demonstrating the use of a modern Janko keyboard on an otherwise normal piano is available at: Janko Keyboard Demo.

More information is linked from the site.

Binaural Synthesizer

As a musical hobbyist, I have long been interested in synthesizers, both analog and digital. The iPhone synthesizer BiSynth has grown out of that interest.

BiSynth is a virtual analog synthesizer. The audio output is produced by a pair of computed waveforms modulated by a note envelope.

The left and right output channels of the iPhone may be given different waveforms from a selection available on the settings screen. The note envelopes are the same on each channel, and are computed from attack, decay, sustain, and release settings, also on the settings screen.
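A note envelope of this kind can be sketched as a simple piecewise function. This is an illustrative linear ADSR, not BiSynth's actual code; the parameter names are assumptions:

```python
# Illustrative linear ADSR note envelope (times in seconds, levels 0..1).
# Assumption: the note is released after the attack and decay have finished.

def adsr_level(t, note_off, attack, decay, sustain, release):
    """Envelope level at time t, for a note released at time note_off."""
    if t >= note_off:                      # release: ramp sustain -> 0
        return max(0.0, sustain * (1 - (t - note_off) / release))
    if t < attack:                         # attack: ramp 0 -> 1
        return t / attack
    if t < attack + decay:                 # decay: ramp 1 -> sustain
        return 1 - (1 - sustain) * (t - attack) / decay
    return sustain                         # sustain: hold until note_off
```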

A frequency difference between the left and right channels can be set in Hertz (cycles per second) or halfsteps (the smallest musical interval on the piano). Halfstep differences are useful for exploring harmonious and not-so-harmonious intervals. Differences in Hertz can generate vibrato at various frequencies, as well as a kind of poor man's surround-sound simulation.
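The two ways of setting the difference can be sketched as follows. A halfstep corresponds to a frequency ratio of 2^(1/12), while a fixed offset in Hertz makes the channels beat at the difference frequency; the function names are illustrative, not BiSynth's API:

```python
# Sketch of the left/right frequency difference described above.
# A halfstep is a frequency ratio of 2**(1/12); a fixed offset in Hertz
# makes the channels beat at the difference frequency.

def detune_halfsteps(f_left, halfsteps):
    """Right-channel frequency a given number of halfsteps above the left."""
    return f_left * 2 ** (halfsteps / 12)

def detune_hertz(f_left, delta_hz):
    """Right-channel frequency offset by a fixed number of Hz."""
    return f_left + delta_hz

# e.g. a 7 Hz offset on an A-440 pair beats (and may entrain) at 7 Hz
beat = abs(detune_hertz(440.0, 7.0) - 440.0)
```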

A variable cutoff, resonant filter allows the user to adjust the sound of each pair of waveforms, and a pan control allows the left and right channels to be mixed in different proportions.
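A filter and pan stage of this sort might be sketched as below. This uses a textbook Chamberlin state-variable low-pass and a simple linear pan; it illustrates the idea, and is not BiSynth's implementation:

```python
import math

# Illustrative filter/pan stage: a Chamberlin state-variable low-pass
# with variable cutoff and resonance, plus a linear pan that mixes the
# two channels in different proportions.

def svf_lowpass(samples, cutoff_hz, resonance, sample_rate=44100.0):
    """Resonant low-pass; higher resonance means less damping."""
    f = 2.0 * math.sin(math.pi * cutoff_hz / sample_rate)
    q = 1.0 / resonance
    low = band = 0.0
    out = []
    for x in samples:
        low += f * band
        high = x - low - q * band
        band += f * high
        out.append(low)
    return out

def pan(left, right, position):
    """position 0.0 = all left-channel signal, 1.0 = all right-channel."""
    return [(1 - position) * l + position * r for l, r in zip(left, right)]
```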

The output sounds different with earphones than with speakers. This reflects the fact that the brain mixes sounds differently than the room does. Some experimenters have discovered that brain waves are sometimes entrained at the frequency difference between the left and right ear tones.

When you fiddle around with the settings and produce a sound that you like, you can save it in a database of presets.

BiSynth provides a choice between a standard five-octave keyboard, which scrolls from side to side, and a scrolling Janko keyboard with the same number of notes.

Thorns in the Bed of Roses

My first experience with mbed was a positive one.

The mbed library removes a lot of hassles involved with pin identification, peripheral setup, and the use of common peripherals.

The flagship board, the mbed NXP LPC1768, works out of the box, with or without the application board that provides useful peripherals.

Simple applications of the main-loop/interrupt routine variety can be coded and downloaded to the board from the web-based IDE.

The code can also be exported for development on the user's own workstation, provided it has a suitable IDE or compiler.

When I moved beyond the core libraries, the quality of user-supplied modules was not great.

Limited debugging can be done by sending messages out the virtual serial port used for application loading. But the absence of hardware breakpoints and memory inspection cut my productivity in half.

The stylized schematic supplied with the NXP board was helpful, but no substitute for real schematics.

The next mbed board I tried was a combination GPS/cell-modem board with a Cortex-M3 processor. I intended to use it for a GPS locator/tracker.

This board was much more difficult to use. The implementation of the USB code loader and virtual serial port was inadequate, leading to semi-infinite gotchas while loading and debugging code.

After wasting a lot of time, I put together a component system with a Cortex-M3 header board, a GPS receiver, and cell modem board. Using the Em:Blocks IDE on my own computer, development went more quickly.

In summary, the web-based mbed environment appears to be a useful tool for embedded developers, provided they check reviews of the mbed board of interest. A lack of hardware debugging, spotty peripheral libraries, and inadequate board schematics may limit its use in typical real world projects.

Easier Arm

In years past, developers of embedded ARM systems have taken a couple of different paths:

Path 1) Buy a finished ARM board with enough resources to run Windows or Linux, obtain a board support package or kernel configuration from the board vendor, and write code in any language and library set supported by Windows or Linux.

Path 2) Build a custom board based upon the design of an evaluation kit provided by a chip manufacturer or board house. Select a compiler, and possibly an RTOS. Then use the chip library supplied by the chip manufacturer, a board library supplied by the board house, a C library supplied by the compiler vendor, and possibly a kernel provided by the RTOS vendor.

Those developers selecting the first path often run into incompatibilities between the device drivers of the board support package or supplied kernel and the needs of their intended application. In the Windows environment, this can result in large sums paid to device-driver developers late in the development cycle. In the Linux environment, it may be necessary to learn far more than one would like about Linux kernel configuration, also late in the development cycle.

Developers who choose path number 2 are forced to spend a lot of time integrating libraries from chip supplier, board house, compiler vendor, and RTOS vendor, if any. This can be a big job, but it usually takes place earlier in the development cycle. A frequent obstacle in this approach is the belated discovery of library bugs creating havoc with the application code.

To solve the problems associated with path 2, some compiler and RTOS vendors undertook the task of producing an IDE furnished with driver code for the peripherals in a variety of ARM chips. Often they simply rebranded driver code furnished by the chip and board vendors with their own name, and made it available in their IDE for various evaluation kits which they also sold. The downside of this approach is the high cost of such Integrated Development Environments.

A consortium of ARM Ltd. and chip, board, and compiler vendors has lately sought to relieve the embedded developer of some of the headaches associated with paths 1 and 2. This consortium has established the mbed website.

The mbed consortium has taken the approach of creating a standard library definition, which uses standard ARM interfaces. This library definition provides a common interface capable of supporting a wide variety of embedded ARM processors, peripherals, and evaluation boards. Board and chip vendors implement the functions required by the library definition, and any common ARM compiler or IDE may be used to build the application.

Apparently, a lot of work went into the library design, because it has proven itself easier to use than most library sets hitherto provided by chip, board, or compiler vendors.

The mbed website allows access to an online IDE which may be used to code, compile, and download executables for a variety of evaluation boards. A selection of free and paid ARM compilers is supported, and more evaluation boards and processors are added to the list of supported targets as time goes on.

Abundant example code is available, including makefiles that can be used in any build environment. Developers who choose to can download application and library source and object code from the website to their own development systems in office or home.

The expense of operating the website and supplying the library code is borne by the consortium members. All of this support for embedded ARM development is therefore provided free of charge.

A Brief History of Code and Data Chunking

Mr. A. M. Turing proved that a suitably provisioned computing machine can simulate any other computing machine. This led to the widespread acceptance of a simple hardware model for computers invented by Mr. von Neumann.

Assemblers were invented to translate mnemonic instruction codes and data into binary machine code. Linkers allowed placement of chunks of code and data into von Neumann's address space.

With the fundamentals of programming in place, John Backus of IBM developed FORTRAN, a language useful for scientific computation. Admiral Hopper and IBM collaborated on the definition of COBOL, a business-oriented language.

Compilers for FORTRAN and COBOL improved the ability of mere mortals to produce code and data chunks, and allowed the code chunks to invoke other code chunks in an orderly way, and to access various types of data chunks.

Subsequent language developers invented ever more elaborate types of code and data chunking.

A whole boatload of geniuses collaborated to invent Algol, which introduced nested scoping conventions for data and code chunks. Algol became, for a while, a standard for algorithm definition.

Niklaus Wirth's Pascal simplified Algol's code chunking features, allowing university students to take up programming more easily.

IBM introduced its elaborate language PL/I in 1964, providing a great many new attributes of data, I/O, and code chunks.

In the mid-70s, Donald Chamberlin and Raymond Boyce, also of IBM, revolutionized access to persistent data chunks with the invention of Structured Query Language (SQL).

In 1978 Kernighan and Ritchie published the first edition of The C Programming Language. This language had lightweight versions of the data chunks that IBM had been evolving since the invention of COBOL and, later, PL/I. It also had an easy-to-parse syntax and a simple runtime library, which allowed for widespread dissemination of the language.

Then all hell broke loose.

Alan Kay integrated ways of chunking data with ways of chunking code in his Smalltalk-80 language. He invented something called a "class", which consisted of several code chunks and the definitions of some data chunks, which the code chunks presumably could manipulate.

Then he defined sets of "objects" belonging to a class as addressable configurations of memory, each of which could contain all of the class's data chunks.

To allow objects to interact, he named the code chunks of a class "methods", which could be invoked on an object of that class by sending the corresponding "message" from the same or another object of the same or a different class.

Mr. Kay also included in Smalltalk-80 the concept of class inheritance: classes could be defined that inherited data chunk definitions and code chunks from other classes.

Thus an object of one class could accept not only messages intended to invoke its own class's code chunks, but also messages which invoked code chunks belonging to any of its super-classes. In addition, objects of a subclass were expected to allocate, store, and access data defined by the data chunk definitions of all of its super-classes.
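The scheme described above survives almost unchanged in modern object-oriented languages. Here is a minimal sketch in Python terms, with invented class names:

```python
# Sketch of classes, messages, and inheritance in Python terms.
# The Shape/Circle names and methods are invented for illustration.

class Shape:                          # superclass: data + code chunks
    def __init__(self, name):
        self.name = name              # data chunk defined by Shape
    def describe(self):               # code chunk (a "method")
        return "a shape called " + self.name

class Circle(Shape):                  # subclass inherits both kinds of chunk
    def __init__(self, name, radius):
        super().__init__(name)        # allocate the superclass's data too
        self.radius = radius          # data chunk added by Circle
    def area(self):
        return 3.14159 * self.radius ** 2

c = Circle("dot", 2.0)
c.describe()   # message answered by code inherited from Shape
c.area()       # message answered by Circle's own code
```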

A description of the language was devised that made perfect sense, without mentioning the underlying complexity of interaction of code and data chunks in the computer's address space.

The buzz-word that came to be used to describe this kind of code and data chunking was "object orientation".

A rush ensued to produce object-oriented versions of C. Bjarne Stroustrup of Bell Labs developed C++. James Gosling at Sun Microsystems developed early versions of Java. Brad Cox and Tom Love produced the first Objective-C compiler.

Object orientation succeeded beyond all expectation. The encapsulation of code and data into objects is a powerful software design metaphor. The inheritance by one class of the code and data of other classes facilitated the design and implementation of windowing systems for Amiga, Apple, and Wintel computers, as well as the X Window System display environment.

IBM added object orientation to its PL/I language, as well as to COBOL and FORTRAN.

Failing to sell an object-oriented language called J++, Microsoft produced C#, yet another object-oriented C extension.

After many years of using object-orientation concepts in designing, coding, and maintaining software systems, several observations deserve mention:

In software design, the encapsulation of data and functionality within objects is beneficial. That benefit carries over throughout the lifecycle of a project, making it easier to code, test, maintain, and reuse.

Once a computer language is compiled into code and data chunks and linked into a computer's address space, there is no discernible difference in performance, safety, or corruptibility between object-oriented and non-object-oriented languages.

An object-oriented language is not needed to implement encapsulation of data and functionality within objects.
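One illustrative way to see this: a closure in any language with nested functions bundles data chunks with the only code chunks allowed to touch them, with no class in sight. (In C the same idiom uses a struct plus the functions that take it.) The counter below is a hypothetical example:

```python
# Encapsulation without a class: a closure bundles a private data chunk
# with the only code chunks allowed to touch it.

def make_counter(start=0):
    state = {"count": start}          # private data chunk
    def increment():                  # code chunk with exclusive access
        state["count"] += 1
        return state["count"]
    def value():
        return state["count"]
    return increment, value           # the data is reachable only via these

inc, val = make_counter()
inc()
inc()
# val() now returns 2; nothing else can read or corrupt the count
```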

If a project requires code and data definition inheritance, then it is easier to use a language with those features built in.

More skill and training are required to use object-oriented languages than to use their simpler predecessors.

Cellular Calculators

During the Manhattan project in WWII, it was necessary to perform massive numbers of arithmetic computations to complete the engineering calculations necessary to produce the bomb.

According to Dr. Feynman, an assembly line was developed in which individuals stood ready with calculating machines to evaluate mathematical expressions from supplied formulae and input values passed to them by other individuals. A conductor programmed the assemblage by distributing formulas to the individual calculators and gave input values to the first tier of formula calculators. Each member of the team performed the assigned calculation with the inputs supplied and passed the resulting value on to the designated next calculator. After everyone had done her part, a result was produced by the final tier of calculators.

Each member of the team was effectively a single cell of a cellular calculator.

After Jobs, Wozniak, et al. developed the Apple ][, Dan Bricklin and Bob Frankston created for it a program called VisiCalc. VisiCalc implemented an automatic financial spreadsheet with a virtual two-dimensional array of cells, each of which was capable of producing a programmable result from the inputs of other cells in the virtual array. Each cell was programmed by entering an arithmetic formula involving the input cells. When that cell produced its result, that became the input to the formulae of other cells in the array.

VisiCalc was soon copied by better-financed organizations using more powerful computers, and eventually the modern spreadsheet appeared.

MathWeb is a more recent example of a cellular calculator. It uses a browser/editor rather than a spreadsheet metaphor for its user interface. As in VisiCalc, each cell contains a single numeric value and/or a script which may set that value to the result of a computation involving other cells.
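A cell network of this general kind can be sketched in a few lines. The cell names and formulas below are invented for illustration, and scripts are modeled as plain functions over other cells:

```python
# Minimal sketch of a cellular calculator: each cell holds a number or a
# formula over other cells, and evaluation follows the links between cells.

def evaluate(cells, name, seen=frozenset()):
    """Resolve a cell to a number, recursively evaluating its inputs."""
    if name in seen:
        raise ValueError("circular reference at " + name)
    entry = cells[name]
    if not callable(entry):
        return entry                  # a plain numeric value
    get = lambda other: evaluate(cells, other, seen | {name})
    return entry(get)                 # a formula computed from other cells

cells = {
    "price":    10.0,
    "quantity": 3.0,
    "subtotal": lambda get: get("price") * get("quantity"),
    "total":    lambda get: get("subtotal") * 1.08,  # illustrative 8% tax
}
```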

The browser allows the user to create, delete, and edit cells, their values, and their scripts, and to execute at will all or part of the resulting network of calculations.

The user may navigate the web of cells by following links (cellnames) within the script fields of the cells, or by selecting from an alphabetic list of cellnames.