|Complex Discrete Logs||Compressed Histogram Hash|
|Heterodyne Physics||McCoskey Count Sort|
|Oil Spill Fix||Personal Rail|
|Rational Residue||Residue Arithmetic|
|Reverse Translation Analysis||Rotary Engine|
In the late 1970s I came up with an algorithm for translating from residue number systems into decimal. I wrote the code on a TI-59 calculator. On December 22, 1980, I filed for patent number 4,458,327 based on this principle.
In 1966 I invented a thing called the resettable counter. I built a prototype out of RTL. It took 5 amps when it was running, which wasn't a lot. It was meant to be a musical instrument, and it had three of these counters. I did a patent search and had a lot of meetings with corporate executives and lawyers, but got no capital. One of these executives, from CBS, told me that the way they would handle an idea like mine would be to make some minor change, go ahead and use the idea, and there would be nothing I could do about it. Eventually I did get a patent on some parts of the idea, but I didn't file until 1978.
In 1966 I left LA, went back East, and got a job working for GE as a technician. Then I gave up on corporate life for a number of years. In about 1971 GE filed for a patent on a pre-settable counter, which was my circuit with a minor change. Today these circuits are in almost every electronic device, from microwave ovens to cell phones, cars, and mainframe computers. The patent attorney who wrote my music patent was a professional colleague of the patent attorney who wrote the GE patent on the pre-settable counter.
DARPA offered to let me log onto a parallel computer of theirs to develop the residue arithmetic idea. But when I asked them about ownership of the work product, they wouldn't commit any percentage of ownership to me, so I declined their offer. I have no way of knowing whether any of the boxes owned by the NSA, DARPA, and the like are based on this principle. Maybe someday I'll find out.
In the prior art, as the patent guys say, there were two ways of translating from residue to a polynomial number system. One is called the Chinese remainder method and has been known since antiquity. The other was invented by Gauss and is called the Gaussian remainder algorithm.
Back in the 80s I didn't see my new algorithm as much of an advance. I thought it was about the same as the Gaussian algorithm. Now I have a different view. But then as now, I think other aspects of the patent are more important.
The Chinese remainder method is the one you will find in the number theory texts. It is computationally the most inefficient of the three algorithms because it requires a computational range twice as large as the number being decoded, and it requires that range at every step of the algorithm. But it is easy to explain and easy to use.
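For the curious, here is a minimal sketch of the textbook Chinese remainder method in Python. The moduli and the test value are my own illustrative choices, not anything from the patent. Note how the running sum climbs well past the system's full range before the final reduction, which is the oversized computational range mentioned above.

```python
from math import prod

def crt_decode(residues, moduli):
    """Translate a residue representation back to an ordinary integer
    using the classic Chinese remainder method."""
    M = prod(moduli)  # full range of the residue system
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m               # product of all the other moduli
        inv = pow(Mi, -1, m)      # modular inverse of Mi mod m (a precomputed constant)
        x += r * Mi * inv         # running sum can greatly exceed M
    return x % M                  # final reduction brings it back into range

moduli = (3, 5, 7)                          # pairwise coprime; range M = 105
residues = tuple(52 % m for m in moduli)    # encode 52 as (1, 2, 3)
assert crt_decode(residues, moduli) == 52
```

The terms `Mi * inv` depend only on the moduli, so in practice they would be precomputed once, just as the constants for the other two algorithms are.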
Both the Gaussian method and my method require only an arithmetic range as large as the number to be decoded. The calculations start with really small numbers and get larger as the translation proceeds. But both my method and the Gaussian method require precomputed constants for their operation. In a silicon computational device built on these principles, the constants would be built into the system. In other words, the constants would not need to be recomputed for most applications, but would be computed once and then used to translate any amount of data.
Now for the differences between the Gaussian algorithm and my algorithm. Both work on the principle of rotation: the Gaussian method rotates the number being translated, while my method rotates the answer. It turns out that rotating the answer is a lot easier in both hardware and software.
The computers we use nowadays are designed from the bottom up to use the binary number system. Number theory programming is not nearly as easy on them as it would be on a computer designed from the bottom up for residue arithmetic. On a polynomial computer a residue number must be stored in several separate registers, and every arithmetic operation takes multiple instructions, while a polynomial number can be stored in one register and added, subtracted, multiplied, or divided with a single instruction. The result is that the Gaussian method requires N squared divided by two operations to translate a number, but my method does it in N operations.
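The point about separate registers can be sketched in a few lines of Python. This is only an illustration of ordinary residue arithmetic, not the patented translation algorithm: each modulus lives in its own "register," and one residue add or multiply costs one machine operation per modulus, with no carries crossing between them.

```python
MODULI = (3, 5, 7)  # illustrative small moduli; range = 105

def to_residue(n):
    """Split an ordinary integer across one 'register' per modulus."""
    return tuple(n % m for m in MODULI)

def res_add(a, b):
    # one residue add = one small add per modulus, carry-free
    return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))

def res_mul(a, b):
    # likewise one small multiply per modulus, independent of the others
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

a, b = to_residue(17), to_residue(4)
assert res_add(a, b) == to_residue(17 + 4)
assert res_mul(a, b) == to_residue(17 * 4)
```

On a binary machine this componentwise bookkeeping is overhead; in hardware built for residue arithmetic, the independence of the channels is exactly what makes addition and multiplication fast.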
In a gate-level hardware environment, both methods require the same amount of time to complete the translation. That was the basis of my earlier conclusion: I was focused on how much time the operations take. What I overlooked is that the Gaussian method takes approximately N squared silicon area, while my method takes only N silicon area.
Polynomial numbers have advantages in some situations and residue numbers in others, so low-level hardware for both number systems should be included in a computer. For example, addition and subtraction are both much faster and simpler in a residue configuration, so registers that only require addition and subtraction, like the program counter and address registers, could be residue. But that would require substantial re-thinking of the architecture. A simpler way to integrate residue into a system would be as a residue coprocessor.
My algorithm can also be used in number theory research. Basically, each computational cell has two inputs and one output. If you know any two of these three values, you can determine the third. That allows you to define the constraints on a possible translation of a number at the level of any cell using simple look-up tables. Fixing some of these values at any cell constrains the numbers the translation can output.
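A small sketch of that two-in, one-out idea, with an assumed cell function: the patent doesn't spell out the cell here, so suppose for illustration that a cell computes s = (a + b) mod m. Then the forward table and its inversion really do pin down any missing value from the other two.

```python
m = 7  # illustrative modulus for one cell

# forward table: inputs (a, b) -> output s, assuming the cell adds mod m
forward = {(a, b): (a + b) % m for a in range(m) for b in range(m)}

# inverse table: output plus one input -> the other input,
# built by simply turning the forward table inside out
inverse = {(s, a): b for (a, b), s in forward.items()}

a, b = 3, 6
s = forward[(a, b)]          # the cell's output
assert inverse[(s, a)] == b  # knowing s and a recovers b by look-up
```

Because s is a bijective function of b for each fixed a, the inverse table is consistent; any constraint placed on one leg of a cell propagates through tables like these to constrain the whole translation.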
In the original version of this page I had some sample code, but it was not rendering correctly in a browser, so I took it out. Click on the Complex Discrete Logs link to see samples of this algorithm in Python code.