Inner Circle Newsletter November-December 2025

Contents

How to Train Your Pluto
A Picture is Worth a Thousand Words
Opulent Voice Update: Abandon Ship!
Opulent Voice Update: From Pluto to Libre
Opulent Voice Update: From Boot Failure to RF Transmission
Opulent Voice Update: Correlator Upgrade
ORI Regulatory Update: FCC Proposes Deleting BPL Rules
August-November Puzzle Solution

How to Train Your Pluto

by Paul Williamson KB5MU

Have you ever saved a `pluto.frm` firmware file and then lost track of which build it was? Or perhaps you have a whole directory full of different .frm files with cryptic names? This little shell script will analyze the .frm file(s) and give you a short report about each one.

% ./pluto-frm-info.sh *.frm
Version information for pluto-1e2d-main-syncdet.frm:
device-fw 1e2d
buildroot 2022.02.3-adi-5712-gf70f4a
linux v5.15-20952-ge14e351
u-boot-xlnx v0.20-PlutoSDR-25-g90401c
Version information for pluto-peakrdl-clean.frm:
device-fw abfa
buildroot 2022.02.3-adi-5712-gf70f4a
linux v5.15-20952-ge14e351
u-boot-xlnx v0.20-PlutoSDR-25-g90401c
Version information for pluto-peakrdl-debug.frm:
device-fw d124
buildroot 2022.02.3-adi-5712-gf70f4a
linux v5.15-20952-ge14e351
u-boot-xlnx v0.20-PlutoSDR-25-g90401c

The first line gives the `device-fw` *description*. This is created in the Makefile by `git describe --abbrev=4 --always --tags`, based on the git commit hash of the code the .frm file was built with. It may be just the first four characters of the hash, as shown here. It can also include tag information, if tags are in use in your repository, in which case it also includes the number of commits between the latest tag and the current commit.

Note that if you commit, build a version, make some changes, and build another version without committing beforehand, you will end up with two builds that have the same description. There is no mechanism in the current build procedure to catch that situation, so there’s no way to extract that information from the .frm file.
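
One way to at least flag that situation at build time (not part of the current build procedure, just a suggestion) is `git describe`’s `--dirty` flag, which appends a marker when tracked files have uncommitted modifications:

```shell
# --dirty appends a suffix to the description when tracked files have
# uncommitted modifications, so two builds made from the same commit but
# different working trees are at least distinguishable from each other.
git describe --abbrev=4 --always --tags --dirty
# prints something like "1e2d-dirty" when changes are pending
```

This only helps if the Makefile is changed to use it, of course; it does not recover any information from an existing .frm file.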

Theory of Operation

The .frm file for loading new firmware into an ADALM PLUTO, as built with the Analog Devices [plutosdr-fw](https://github.com/analogdevicesinc/plutosdr-fw) Makefile, is in the format of a device tree blob. That is, a compiled binary representation of a Linux device tree. Without specialized tools, this is an opaque binary file (a blob) that is difficult to understand.

So, the first step in understanding the file is to decompile it. Fortunately, the device tree compiler tool, `dtc`, is also able to decompile binary device trees into an ASCII source format.

The decompiled source is in a structured format, and also in a very consistent physical layout. It mostly consists of large image blocks, represented as long strings of two-digit hexadecimal byte values, enclosed in square brackets, each on one very long line of text. There is an image block of this type for each binary resource needed to program a Pluto device. Among other things, this includes an image of the Linux filesystem as it will appear in a RAM disk. The build procedure places certain information about other components of the build into defined locations within the filesystem, in order to make that information available to the Pluto user. We take advantage of the rigid format of the decompiled ASCII format to extract the hexadecimal byte values encoding the filesystem image. This is done in two steps. First, `sed` is used to find the header of this image object, identified by its name `ramdisk@1`, and extract the second line after the line containing that string. Second, the hex data is found between the square brackets using `awk`. This all depends on the exact format of the decompilation, as created by `dtc` in its decompilation mode.
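
To see these two steps in action, here is a tiny, hand-made stand-in for the decompiled layout (in the real output, the hex run after `ramdisk@1` is millions of characters long):

```shell
# A tiny fabricated stand-in for the dtc decompilation layout.
cat > sample.dts <<'EOF'
        ramdisk@1 {
                description = "Linux Ramdisk";
                data = [1f 8b 08 00];
        };
EOF

# sed prints the second line after the ramdisk@1 header, then quits;
# awk splits on the square brackets and keeps what's between them.
sed -n '/ramdisk@1/{n;n;p;q;}' sample.dts | awk -F'[][]' '{print $2}'
# prints: 1f 8b 08 00
```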

We convert the hexadecimal values into binary using `xxd`. This binary is gzip compressed, so we use `gunzip` to uncompress it. The result is a CPIO archive file containing the Linux filesystem. We use `cpio` to extract from the archive the contents of the file that would be `/opt/VERSIONS` when installed. This file contains the information we are after; we print it to stdout.
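
You can exercise the same conversion chain on any small piece of data, which makes a handy sanity check that your local `xxd` and `gunzip` behave the way the script expects:

```shell
# Round trip: text -> gzip -> hex dump -> binary -> uncompressed text.
# The right-hand half of this pipeline is exactly what the script does
# to the ramdisk hex data.
printf 'hello from the ramdisk\n' | gzip -c | xxd -plain | xxd -revert -plain | gunzip
# prints: hello from the ramdisk
```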

That whole procedure is wrapped in a little loop, so it runs once for each file named (or expanded from a wildcard pattern) on the script’s command line.

Prerequisites

This is a bash script, which I’ve developed on macOS Sequoia and Raspberry Pi OS (Debian) bookworm. It ought to work on any Unix-like system.

You will probably need to install some or all of these utilities before the script will work. These should all be available in your favorite package manager, if not already installed on your system.

– dtc
– sed
– awk
– xxd
– gunzip (part of gzip)
– cpio

A special note about `cpio` for macOS users

The script uses the `--to-stdout` command line flag to avoid writing any files to your host’s filesystem. The version of `cpio` that Apple ships with macOS does not support this feature, but the version available in [Homebrew](https://brew.sh) does. When you install `cpio` using brew, you’re warned that brew’s cpio does not override the Apple-provided cpio by default (it’s installed “keg-only”). Brew suggests a line you can add to your shell’s startup script to put brew’s cpio on the path, so you always get brew’s cpio. If you do that, you can use the standard version of this script, named `pluto-frm-info.sh`. If you choose not to do that (which is what I recommend), and your brew installation is in `/opt/homebrew` as usual, you can use the modified version found in `pluto-frm-info-macOS.sh`. If your brew installation is elsewhere, modify the path to cpio in the macOS version.

Speed

This script does a substantial amount of work on largish data objects, so it can be slow. It takes about 3 seconds per file to run on my 2024 M4 Pro Mac Mini, but it takes about 75 seconds per file on my Raspberry Pi 4B.

Credits

Thanks to all those unknown developers who shared their sed and awk scripts and tutorials on the web, where they could be scraped for LLM training. I don’t know enough `sed` or `awk` to have written the respective command lines without a lot of research and struggle. I used suggestions provided by everybody’s favorite search engine when I asked how to do those tasks, and in this case the LLM-generated answers were more immediately useful than the first few hits of presumably human-generated content.

Missing Features

– Output build time, building user, and building hostname. This information is in the .frm file somewhere, so this feature might be added with some additional research.

– Describe changes since the commit that appears in the build description. As noted above, I think this information is not contained in the .frm file, so it cannot be added.

– In ORI builds, an extract of the git log for the build is placed (by a command in the Makefile) in `root/fwhistory.txt`. This could be extracted the same way as `opt/VERSIONS`, by just changing the string in this script. However, this history information is better extracted from the repo online, starting from the commit revealed by the `device-fw` line in VERSIONS.

Other Places to Find Information

If you have the Pluto up and running, it has a web server. If you look at [index.html](http://pluto.local/index.html) (on the Pluto) with a web browser (on a connected computer), you’ll see all kinds of build information, much more than is returned by this script.

But if you look in the ramdisk image in the pluto.frm file, you’ll find the HTML file in /www/index.html contains only placeholders for all of this information. It turns out that these placeholders are replaced with the actual information by a script that runs when the Pluto boots up. That script is located at /etc/init.d/S45msd (or in the Analog Devices repo at [buildroot/board/pluto/S45msd](https://github.com/analogdevicesinc/buildroot/blob/f70f4aff40bcc16e3d9a920984d034ad108f4993/board/pluto/S45msd)). This script has a function `patch_html_page()` that obtains most of its information from resources that are only available at runtime, such as `/sys`, or by running programs like `uname`. Some of the info comes from /opt/VERSIONS, which is where this script looks. Some of that runtime information actually comes out of the hardware, like the serial number, or non-volatile storage on the hardware, like the USB Ethernet mode. The rest, including more build information, must be in the pluto.frm file somewhere, possibly scattered around in multiple places.
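
As an illustration of the general technique (the placeholder token and the value below are invented for this sketch; see `S45msd` for the real ones, which are read from `/sys` and friends at runtime), the substitution works along these lines:

```shell
# Hypothetical miniature of what patch_html_page() does at boot:
# replace a placeholder token in the HTML with a value that is only
# known at runtime. The #SERIAL# token is made up for this sketch.
cat > index.html <<'EOF'
<p>Serial number: #SERIAL#</p>
EOF
serial="104473dc599"   # in real life: read from the hardware via /sys
sed "s/#SERIAL#/$serial/" index.html
# prints: <p>Serial number: 104473dc599</p>
```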

The name of the computer the firmware was compiled on, the username of the user who compiled it, and more, are buried in the kernel image, and are made accessible at runtime via

[pluto3:~]# cat /proc/version

Linux version 5.15.0-20952-ge14e351533f9 (abraxas3d@mymelody) (arm-none-linux-gnueabihf-gcc (GNU Toolchain for the A-profile Architecture 10.3-2021.07 (arm-10.29)) 10.3.1 20210621, GNU ld (GNU Toolchain for the A-profile Architecture 10.3-2021.07 (arm-10.29)) 2.36.1.20210621) #2 SMP PREEMPT Fri Nov 7 23:04:52 PST 2025

[pluto3:~]#

The first part of this output is stored in the kernel in a string named `linux_proc_banner` in `init/version.c`. If I knew how to decompress the Linux zImage, I could probably extract this information.
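
For what it’s worth, the kernel tree ships a script, scripts/extract-vmlinux, that knows how to do this. The core trick, assuming the kernel image is gzip-compressed, is to find the gzip magic bytes inside the image and decompress from there. A rough sketch (“zImage” is a placeholder filename; offsets and compression type vary by build):

```shell
# Find the byte offset of the first gzip magic sequence (1f 8b 08)
# inside the image, decompress from that point, and search the result
# for the version banner. Errors from trailing garbage are discarded.
offset=$(LC_ALL=C grep -a -b -o -m1 $'\x1f\x8b\x08' zImage | cut -d: -f1)
tail -c +$((offset + 1)) zImage | gzip -dc 2>/dev/null | strings | grep -m1 'Linux version'
```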

#!/bin/bash
# Output version information embedded in one or more
# Pluto firmware files in .frm format.
#
# Specify filename(s) on the command line. Wildcards ok.
for filename in "$@"; do
    echo "Version information for $filename:"
    cat "$filename" |
    dtc -I dtb -O dts -q |            # decompile the device tree
    sed -n '/ramdisk@1/{n;n;p;q;}' |  # find the ramdisk image entity
    awk -F'[][]' '{print $2}' |       # extract hex data between [] brackets
    xxd -revert -plain |              # convert the hex to binary
    gunzip |                          # decompress the gzip archive
    cpio --extract --to-stdout --quiet opt/VERSIONS  # extract the VERSIONS file from /opt
    echo                              # blank line to separate multiple files
done

A Picture is Worth a Thousand Words

This image, made with the macOS DataGraph program by Paul Williamson KB5MU, shows that the Opulent Voice transmit first in first out buffer (FIFO) wasn’t properly reset at the start of the program. This means that there was data already in the transmit FIFO at startup. This is not desirable, because when we want to transmit, we want to start with the data that we’re sending. Whether this is voice, keyboard, or a file transfer, we don’t want extra data at the beginning of the transmission. For voice, this might just mean a very short burst of noise. But, for a text message or data file, it could cause corruption. Visualizations like this help make our designs better, because they reveal what’s truly going on with the flow of data through the system.

Opulent Voice Update: Abandon Ship!

Opulent Voice, our digital communications protocol, is used as the uplink for our open source satellite program Haifuraiya. Opulent Voice is also perfectly suited for terrestrial communications links. 

Development so far has targeted the PlutoSDR. This platform has served us extremely well. However we’ve driven it as far as it can go. This is the story of how we learned we’d hit the wall, and what we did about it. 

The long-term goal for Opulent Voice is an open source ASIC and world-class radio hardware. On the PlutoSDR, Opulent Voice data payloads are delivered from an external source to the modem’s network socket (USB). These data payloads have the Opulent Voice header, COBS header, UDP header, IP header, and, if voice, RTP and OPUS headers. These data payloads arrive in the modem and are sent into a transmit first in first out buffer (FIFO). The FIFO absorbs some of the network latency and uncertainties, so that we can support remote radio deployments as well as other challenging real-world timing situations.

The ARM Processor and FPGA in the PLUTO work together in order to send a preamble at the beginning of a transmission, randomize each data payload, apply forward error correction encoding to each data payload, interleave all the bits to take full advantage of the error correction, and then prepend a three-byte synchronization word to the beginning of each frame. The resulting 271 byte frame goes out over the air, modulated as a minimum shift keying signal. 

Received signals are demodulated. Preambles help recover bit timing. The synchronization word is used to detect the start of the frame. The resulting payload is deinterleaved, the error correction is decoded, and then the resulting data is derandomized. We now have a data payload frame with Opulent Voice (and other) headers. This is delivered to the human-radio interface so that the data, voice, or text can be presented to the operator.

Up until the point where we fully integrated the forward error correction (FEC), the entire transceiver could fit into the Zynq 7010 in the PlutoSDR. This has 17,600 look-up tables (LUTs), a metric of what we call utilization on an FPGA. The number of LUTs available is similar to the number of shelves in a warehouse. If you fill up all the shelves, then there is no more room for inventory. However, that’s not the entire story. Filling up the LUTs with our logic is one aspect of FPGA utilization. Another aspect is how well the different parts of the design are connected together. Data flows through the design, and there are FPGA resources that must be used to make these connections. Some of the connections are only a bit wide, and some are 32 bits wide. The connection resources are like the aisles between the shelving systems in a warehouse. If you can’t reach a shelf, then it doesn’t matter if you have the inventory or not. Unreachable inventory is not useful. 

Below is a visualization of the FPGA utilization from Vivado. The cyan blocks are LUTs that are assigned. Blank spots in the upper 20% or so of the image are unassigned LUTs. Utilization immediately before complete FEC integration was approximately 60%. Why the difference between the visual 80% and the reported 60%? Because each cyan block in the image is not entirely full. What appears to be 80% utilization at this point in development was 60% by LUT count. This is what you want to see, with functions spread out over the resources and not densely packed into smaller areas.

Radio functions at this point were good. Randomization, the FEC placeholder, and interleaving were all working. Frame sync word was being received and baseband data was being recovered. We expected to integrate “real” FEC and have it fit. There appeared to be enough resources. We decided to go for it.

We’d had a placeholder for the FEC in the design for a while. Since this was a rate 1/2 convolutional encoder, one bit in to the encoder resulted in two bits out. For the placeholder, we simply duplicated every bit and sent it on its way. Once we replaced this placeholder with the much more complicated real convolutional encoder and decoder, the utilization went over the resource limit. After a lot of work, we got it back down under the limit, and it looked like the design would still fit in the PlutoSDR’s relatively small Zynq 7010.

Or did it?

After carefully writing and testing in loopback an open source rate 1/2, constraint length 7, decoder depth 35 convolutional FEC (yes, the time-honored “NASA code”), we integrated the new code into the frame encoder and decoder in the source code. And, we went over budget. Not by much, but enough that the design simply would not fit. After some work reducing the generous allocation to the transmit and receive FIFOs to get back some resources, we came in under the LUT budget, but then routing failed. The next compromise was to drop back to a simpler interleaver.

Interleavers reorder the bits in a frame in a way that spreads them as far apart from their original positions as possible. This makes the frame resilient against burst errors: a sudden crash of noise, interference, or other dropout that lasts for a stretch of time. The type of forward error correction that we were using wasn’t great against burst errors. A burst error would hurt us more than distributed errors, which are the type of damage you get from low signal to noise ratios.

Burst errors are like someone ripping 40 pages out of a novel you’re reading. That’s really annoying, but you can still finish the book. You just lose all that storyline. Now, if someone ripped 40 pages out of a book where the pages were all mixed up and not in sequential order, then you could put the pages back together in the right order and you’d just be missing a page every now and then. That’s easier to deal with, because the damage is now spread out over the whole book. You can infer more of the storyline, since contiguous pages were not affected.

Now, imagine that instead of interleaving the pages before you leave your book lying around where book vandals can find it, you interleaved all the paragraphs. Losing 40 pages’ worth of paragraphs is much less noticeable. Let’s keep thinking about this. How about interleaved sentences? Even better! Finally, let’s consider the best possible case: interleaved letters. At this level of book defense, you can figure out almost every word in the book if you’re just missing a letter every so often. This is how interleaving helps our forward error correction. Our FEC can deal really well with burst errors that have been spread out, just like our brains can deal with missing letters spread out over a whole book.

Unfortunately, our “interleave the letters” logic was too expensive. We had to drop back to something like “interleave the pages”. We had been interleaving each bit and enjoying the benefits. To reduce the size of the interleaver, we first simplified the design so that the buffer could be assigned to block RAM resources instead of LUTs. At one point this did get things under the LUT count, but the design wouldn’t route. We had a full warehouse, but couldn’t reach all the shelves. Next, we changed the interleaver to reorder each byte, instead of each bit. This design required a simpler buffer and a smaller lookup table for the positions. And, this new smaller design fit under the LUT count and routing worked again.

Utilization went down to 86%. We were thrilled. This was a huge step forward. We made a firmware build for the PlutoSDR and went into the lab to test over the air. However, the transmitter sent exactly one frame, and then quit. We called this bug “the transmitter stall” and started working on fixing it.

The immediate blame fell on the encoder. We reasoned that this was probably a broken handshake between the data-passing functions of the FIFO to the encoder, or the encoder to the deserializer. Not great, not terrible, just another thing to sort out. Simulation worked flawlessly, so the problem was only in hardware. Bypassing the encoder resulted in data flowing. It wasn’t being received, but the receiver was trying to decode unencoded data of the wrong size, so we didn’t think it was much of a clue. But, after combing through the code, and generating a lot of excellent bug fixes and other improvements, the transmitter stall stubbornly remained.

Additional signals were brought out to status and control registers, so that we could get a little more visibility into the internals. Unlike in an FPGA simulator, we just can’t see most of the signals in the design in the PlutoSDR hardware. We can only see what’s exposed to the processor side through registers. We had recently gotten three new registers focused on the FIFO and frame synchronization. There was plenty of room in two of them, so we took over those bits to tell us what the encoder was up to. And then it got very interesting. The patterns that we were seeing clearly showed a stall. But, not in the forward error correction, which was the new code and therefore getting the suspicion. Instead, the stall was in the interleaver.

The real bug was in the loop for the interleaver. An Opulent Voice data payload has 134 bytes. A forward error corrected data payload has 268 bytes. But, the interleaver was only reordering 134 of the 268 bytes. This was an easy fix, only one line of code. But that one line of code caused utilization to soar above the LUT limit again. This was very curious. 

And then the real learning started. The process of turning source code into FPGA hardware starts with synthesis. Synthesis figures out how to represent your source code as logic gates. Synthesis is followed by implementation, where the design is placed and routed on a particular hardware target. Synthesis can and will optimize parts of your design away. Synthesis will remove dead or unreachable code. And, an interleaver that only does 134 of 268 things leaves quite a bit of logic looking unreachable, so synthesis quietly removed it.

Once this became clear, we dug deeper into the design. We knew we had a tricky situation with the pseudo random binary sequencer (PRBS) tricking the synthesizer into not bothering to implement the encoder. We’d already protected the encoder with “don’t touch” attributes that told the synthesizer to keep its ambitious little hands off our code. But, we didn’t protect the separate module for the “real” FEC. And, we hadn’t protected the decoder either. And, now we had this much larger loop in the interleaver. We got to work protecting the design against the optimizer, and then doing a lot of optimizing ourselves in order to free up more resources. After properly protecting all the new code, which implemented all the missing parts of the encoder and the decoder, we now also had more logic in the design from the proper loop sizing. We removed the (unused) TDD function, I2C peripheral, and SPI peripheral. We simplified anything we could think of that was a buffer. We thought about removing PRBS entirely, but the savings were minimal. For a brief moment, we got under the LUT limit. Here’s what that looked like.

It looked like we’d succeeded. But, the table of utilization results broke the bad news. We’d protected the frame encoder, the FEC encoder, and the frame decoder, but the synthesizer had still removed most of the internals from the FEC decoder. It looked good from the top level, but it was missing vital functions deep inside. Protecting all the signals in the decoder busted our LUT limit hard. There was nothing else to remove without cutting deeply into the quality of the design. We were already settling for hard (instead of soft) decisions in the frame sync word detector and FEC, and we were already running with a compromised byte-level interleaver. We still had symbol lock to integrate, and we didn’t want to rewrite the entire design just to fit this one hardware development target. 

It was time to move to a different development target. This process of changing from a platform you’ve outgrown to another with better resources is much like abandoning a sinking ship. You really don’t want to jump into the freezing cold ocean unless you can see the lifeboats coming from the other ship. But, we knew this day was coming, and we were prepared. 

Opulent Voice Update: From Pluto to Libre

Once the decision was made to find a larger FPGA, we had to decide what development platform we should move to. There are many choices. We have multiple FPGA development boards, ranging from the Basys 3 (33,280 logic units) to the ZCU102 (equivalent to 600,000 logic units). But, in order to continue development, we really needed something with an integrated or connected radio. Something similar to what we were already using, which was a radio from the Analog Devices 936x family. We also had experience with the 9009 and 9002 radio chips.

We settled on the LibreSDR, a PlutoSDR clone. See the github repository that we used here: https://github.com/hz12opensource/libresdr

This SDR had been recommended by Evariste F5OEO, one of the Opulent Voice technical volunteers. Remote Labs had gone ahead and purchased one in anticipation of running out of space on the PlutoSDR. The layout, form factor, and bill of materials were very similar to the PlutoSDR’s. The FPGA was a Zynq 7020, with 53,200 LUTs. At three times the resource capacity of the PlutoSDR’s Zynq 7010, but with most other things remaining very similar or the same, this SDR should work for us.

Getting the LibreSDR up and running in the lab for Opulent Voice development had several stages. First, we had to decide how we were going to set up the repository for the source code and firmware creation framework. Most of the mechanisms for this come from either Xilinx (AMD) or Analog Devices. We decided to add the LibreSDR firmware factory in parallel to the PlutoSDR firmware factory in the pluto_msk repository. A command line switch would tell the firmware creation scripts what target we wanted. The alternative was a standalone repository.

We gathered the technical differences between the PlutoSDR and LibreSDR designs. We created new constraints, modified the top level source code blocks, and then tackled the firmware creation scripts themselves. This is where we ran into a bit of a headache. 

The scripts from Xilinx (AMD) take command line arguments to identify the hardware target. However, these arguments, if given on the command line, cause the variable name to concatenate itself onto directory names. Then, when crucial files are fetched later in the process, the directory doesn’t match the place the scripts thought things were. The Xilinx system archive file, which contains the FPGA bitfile created early in the process, came up “missing”. This doesn’t usually happen to most people, because most people simply type “make” for PlutoSDR and not “make TARGET=pluto”. Since we were adding the option of making LibreSDR software to an existing PlutoSDR firmware creation process, we now needed to use the command line argument. And, we ran into the directory names being mangled. We needed a way to tell the scripts that we wanted to use the LibreSDR files and make a libre.frm file, and not use the PlutoSDR files and create a pluto.frm file.

Figuring this out and getting around the problem took a combination of carefully reading scripts, cargo-culting a lot of cruft, and making up a new procedure that neither Analog Devices nor LibreSDR folks were using. We’d use the command line switch (make TARGET=libre) but we’d ignore it in later stages. We had tried to clear this variable and then unset this variable, but neither of those tactics worked.

Ignoring the variable after it did its job did work, and a baseline firmware build, with none of our custom code, was produced. This would prove that the basic process of producing the firmware image for the LibreSDR was working. But, was it a usable image? The firmware image was then sent to the lab, installed on the LibreSDR hardware, and it successfully booted up on the device. The first stage of migration from PlutoSDR to LibreSDR was a success.

This modified pluto_msk repository may not be the permanent solution, but it will serve us until a more stable solution comes online in Remote Labs. https://github.com/OpenResearchInstitute/pluto_msk

What is that more stable solution? It’s Tezuka, a project from Evariste F5OEO that provides a universal Zynq/AD9363 firmware builder for a variety of SDRs. The current state of this project can be found here: https://github.com/F5OEO/tezuka_fw

This brought us to the second step of the migration process, where we added in our custom logic to the LibreSDR reference design, and then attempted to produce a firmware build with our custom code inside. This would reproduce the excellent results we were getting with the Pluto build process.

The PlutoSDR and LibreSDR, and many other radio boards that use Analog Devices radio chipsets, come with a transceiver reference design. This reference design fills in most of the basic system block diagram for the transceiver. This gives designers an enormous head start, since we don’t have to design the direct memory access controllers for the transmitter and receiver. We don’t have to set up the register access to the microprocessor, or design basic transmit filters. We are also given the digital highways and traffic signals that our data needs to get from memory to the transmitter, and from the received signal back to memory. 

The way we integrate our custom design into this existing design has several moving parts. First, we use a file that lists the connections between parts. Each part of the radio block diagram has input and output ports. To insert a new design in an existing pathway, we disconnect that pathway. We break the connections. The unconnected outputs now go to new inputs. The outputs of our new design then go to the newly exposed inputs of the existing design. 

Now, this sounds easy enough, and it is. The script we’re modifying is a text file. The commands are intuitive and simple. “Connect from here to here with a wire”. But this is the beginning of the process, and not the end. Second, we have to tell the software that programs our FPGA the location of the new files that control the behavior of this new block we’ve dropped on its head, and we have to make sure that adding a new set of functions in the middle of a busy digital highway doesn’t have any repercussions. Spoiler: it almost always does have repercussions!

For example, what we do with Opulent Voice is take over the pathway that dumps IQ samples from memory to the transmitter, and we take over the pathway that brings IQ samples back to the processor. Instead of IQ samples to and from the processor, which are in a format almost ready to transmit, we instruct the processor to send and receive data bits instead. Our custom FPGA code turns data bits into IQ samples, instead of getting these samples from the processor. We do all the work to prepare, modulate, and encode these data bits into IQ samples inside the FPGA fabric. We are moving more of the work into the FPGA, so that digital signal processing can happen faster and more efficiently. Doing this also frees up the processor to add user interface and user experience functions that a human operator will appreciate. We have the FPGA doing what it does best (DSP) and the processor is much more free to do what it does best (high level human-focused communications tasks). Even better for our future, the FPGA design will then become an ASIC, for compact, efficient, and modern manufactured radios. 

After we integrated the design into the FPGA, we created the SD card image for the LibreSDR. There were some hiccups, but they got worked out in short order. The process cleared up, we sent the newly created files over, and power cycled. And, it didn’t work! 

Now we had a problem on our hands that did not have a clear solution.

Opulent Voice Update: From Boot Failure to RF Transmission

Porting the Opulent Voice MSK modem from PlutoSDR to LibreSDR hit a hard wall. The LibreSDR uses a different digital interface to the radio chip internally than the PlutoSDR does. Part of bringing up this new interface (LVDS) is a tuning algorithm, needed to get the interface timing calibrated. The transmit tuning algorithm failed consistently during boot. This tuning algorithm doesn’t tune the RF transmitter; it calibrates how transmit data moves across the digital bus between the FPGA and the radio chip. Usually, tuning algorithm information is sent to the next block down in the reference design, and that block knows how to participate in this tuning algorithm. However, we cut those wires and “soldered in” our own components. We don’t do any of this tuning algorithm. What we have done is take over the timing for the radio within our logic. We can handle it, but the radio chip doesn’t know this!

The tuning diagnostic showed all failures across the entire timing grid. Here’s what it looked like in the logs:

SAMPL CLK: 61440000 tuning: 
TX  0:1:2:3:4:5:6:7:8:9:a:b:c:d:e:f:
0:# # # # # # # # # # # # # # # #
1:# # # # # # # # # # # # # # # #
ad9361 spi0.0: ad9361_dig_tune_delay: 

Tuning TX FAILED!

This pattern indicates a fundamental problem: the timing test is not happening at all, rather than showing marginal timing. The system worked fine on PlutoSDR. Stock LibreSDR firmware booted without issues. What was different? Well, the presence of our design was different. But how could a design working perfectly on another platform, and working perfectly in simulation, cause this sort of failure?

The key insight came from comparing Pluto and LibreSDR at the hardware interface level. Pluto uses a CMOS digital interface to the AD9361 radio chip. No timing calibration needed. LibreSDR uses LVDS, which requires precise timing calibration between FPGA and AD9361. The driver’s tuning algorithm sends test patterns through the transmit path and checks what comes back on the feedback clock.

Here’s where our MSK circuits caused the problem. In our FPGA design, the MSK modulator sits directly in the TX data path. During kernel boot, before any userspace initialization, the MSK modulator outputs zeros. The tuning algorithm expects to see its test patterns reflected back. Instead, it sees nothing but zeros at every timing setting. Every cell fails. Stock LibreSDR firmware passes tuning because its FPGA design has a clean path from the internal DDS to the DAC during boot.

The AD9361 driver supports a digital-interface-tune-skip-mode device tree property. That’s a fancy way of saying that we have choices for how the driver does these tests. There’s a setting that can be 0, 1, 2, or 3.

0 = Tune both RX and TX

1 = Skip RX tuning

2 = Skip TX tuning

3 = Skip both

Setting skip-mode to 2 tells the driver “Don’t try to calibrate TX timing because the FPGA handles it.” This looked like the most correct setting for our design. MSK owns the transmit data path, and our FPGA timing constraints were already met with 0.932 ns of “slack”, or timing margin. RX tuning still runs normally because MSK sits downstream of where this test occurs on the receive path.

The fix was a one-line change in ori/libre/linux-dts/zynq-libre.dtsi. Here’s that change!

adi,digital-interface-tune-skip-mode = <2>; /* Skip TX tuning - MSK owns TX path */

This one line removed the roadblock: we were able to boot and confirm transmission over the air. That success revealed yet more interesting problems, which will be described in next month’s newsletter.

While debugging the boot issue, we discovered the build system was generating a Pluto-centric uEnv.txt that lacked SD card boot support for LibreSDR. We had to manually swap in the uEnv.txt file to get it to boot off the SD card. This wasn’t going to work long-term, so we updated the Makefile. It now automatically adds the sdboot command for SD card booting and fixes the serial port address (serial@e0001000 to serial@e0000000). These fixes apply only when PLATFORM=libre, keeping Pluto builds unchanged.

With these changes, LibreSDR booted successfully with the MSK modem. We confirmed TX/RX state machine registers responding via libiio, RSSI register readable (custom MSK logic working), frame sync status visible, RF transmission confirmed on a spectrum analyzer, and the 61.44 MHz sample clock verified. This was a huge step forward, and gave us valuable experience in porting our design to different FPGAs. We expect to port the design to the ZCU102 development board (with UltraScale+ FPGA) in order to demonstrate Haifuraiya HEO/GEO satellite work in 2026. That port, which will put Opulent Voice into the uplink receiver channel bank, will go very similarly to what is described here.

In Remote Labs today, we’re now debugging actual MSK modem behavior (frame timing and synchronization) rather than fighting boot failures. This represents a significant milestone: the first successful integration of the Opulent Voice FPGA design with LibreSDR hardware.

Lessons learned? CMOS vs LVDS interfaces have different boot-time requirements that aren’t obvious until you hit them. When custom FPGA logic sits in the data path, driver auto-calibration may not work as expected. Device tree properties can tell drivers “I know what I’m doing” when appropriate. Build system automation prevents manual copy errors that waste debugging time. 

Next steps? Debug the weird 9.42-frame gap appearing after dummy frames. Investigate frame synchronization timing. Loopback testing to verify full transmit and receive chain. And, integration testing with Dialogus and Interlocutor. 

Finally, we could close the circle. We had to abandon the PlutoSDR because we ran out of room on the FPGA. What did the FPGA utilization look like now on the LibreSDR?

Well, that’s a lot better! The design has a different shape because of the different layout of the programmable logic. And, there’s more room. But wait. There’s something wrong. 

Look at the decoder utilization. Only 3 logic units? That’s not even remotely plausible. The Viterbi decoder had been completely optimized out! Our decoder is a hollow shell, just passing data from derandomizer to deinterleaver.

Aggressively adding directives to the synthesis tool reversed the damage. Carefully inspecting the log files for any disconnections or removed logic, and protecting the affected signals everywhere in our design, finally resulted in a completely clean bill of health. We ran the utilization reports again, and the true picture of how much logic it takes to place and route our design came into focus.

With a hard decision Viterbi decoder and a hard decision synchronization word detector, we are at 56% utilization. We now have plenty of room to go back to the bit-level interleaver, upgrade to a soft decision decoder, and get a true correlator for the sync word detector. This is a very satisfying result and gives us a truly good place to be for 2026.

Opulent Voice Update: Correlator Upgrade

How we went from theory to gates in optimizing the frame synchronization for Opulent Voice. 

Finding the beginning of a data frame in a noisy radio channel is like searching for a needle in a haystack. Except, the haystack is constantly shifting, and some of the hay looks suspiciously like needles. For the Opulent Voice digital voice protocol, we’ve tackled this physics challenge on two fronts. First, we mathematically optimized our synchronization word. Second, we upgraded our FPGA detector from hard-decision matching to soft-decision correlation in order to take advantage of the better mathematics of the optimized codeword.

When we first designed the frame structure for Opulent Voice, we experimented with a familiar tool used in legacy digital voice protocols, called Barker codes. These binary sequences, discovered by R.H. Barker in 1953, have near-perfect autocorrelation properties. This means that when you slide them past themselves, you get a sharp peak at alignment and minimal response elsewhere. The problem? The longest Barker code is only 13 bits, and we needed 24 bits. 

The textbook solution is concatenation. Lucky for us, we could stick an 11-bit Barker code together with a 13-bit Barker code and get a 24-bit output. This gives you 0xE25F35, with a Peak Sidelobe-to-Mainlobe Ratio (PSLR) of 3:1. Respectable, but we realized that this wasn’t necessarily optimal for 24 bits.

The answer required brute force. With 2^24 = 16,777,216 possible sequences, modern computers can exhaustively search the entire space in about 90 seconds. The results were illuminating. 6,864 sequences achieve the optimal PSLR of 8:1. This was nearly three times better than our concatenated Barker code.
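The metric behind this search is the peak sidelobe of the sequence’s aperiodic autocorrelation. A short Python sketch of that metric follows (a model for illustration, not the actual search program; the ~90 second figure above presumably assumes a fast compiled implementation, as pure Python over all 2^24 words is much slower):

```python
def psl(word, n):
    """Peak sidelobe of the aperiodic autocorrelation of an n-bit word.
    Bits map to bipolar symbols: 0 -> +1, 1 -> -1 (sign choice doesn't
    affect autocorrelation magnitudes)."""
    s = [1 - 2 * ((word >> (n - 1 - i)) & 1) for i in range(n)]
    return max(abs(sum(s[i] * s[i + k] for i in range(n - k)))
               for k in range(1, n))

# Sanity check against the 13-bit Barker code (+++++--++-+-+, '+' as 0):
BARKER13 = 0b0000011001010
assert psl(BARKER13, 13) == 1        # mainlobe 13, sidelobes at most 1

# The selected Opulent Voice sync word achieves the optimal sidelobe of 3,
# for a PSLR of 24/3 = 8:1:
assert psl(0x02B8DB, 24) == 3

# Exhaustive search over all 24-bit words would look like:
# optimal = [w for w in range(1 << 24) if psl(w, 24) == 3]
```

The mainlobe is always n (here 24), so minimizing the peak sidelobe maximizes the PSLR.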

We can think of this like antenna directivity. A PSLR of 8:1 means our “main lobe” (the correlation peak when perfectly aligned) is eight times stronger than any “sidelobe” (responses at other alignments). Higher PSLR translates directly to better false-detection rejection, especially in multipath environments where delayed signal copies can trigger spurious sync detections.

For Opulent Voice, we selected 0x02B8DB from the optimal set. Besides having the best possible PSLR, it has good DC balance (11 ones, 13 zeros) and a maximum run length of 6 zeros. This would be important for tracking loop stability in minimum shift keying modulation. The mnemonic is “oh to be eight dB” for the hex digits.

Having an optimal sync word is only half the battle. The detector implementation matters just as much.

Our original frame sync detector used Hamming distance. We counted how many bits of the received pattern differed from our known sync word; if fewer than some threshold differed, we declared that sync was found. This works fine for strong signals, but there’s a fundamental problem for signals buried lower in the noise. By the time bits reach the detector, the minimum shift keying demodulator has already made “hard decisions”. That means each symbol has been quantized to a definitive 0 or 1.

Hard decisions throw away valuable information. The demodulator might have been 99% confident about one bit but only 51% confident about another, yet both become equally weighted in the Hamming distance calculation. In a D&D analogy, it’s like reducing your attack rolls to just “hit” or “miss” without tracking the actual roll. You lose the ability to distinguish a near-miss from a catastrophic fumble. Now, if all we had to work with was hard decisions, then this is the best we could do. But there’s something really neat about our demodulator. It also reports how confident it is of that 1 or 0. The soft decision metric is already produced and already available as a demodulator output.

The solution to our sync word detection optimization problem is to use soft-decision correlation. Instead of binary bits, we work with signed values that indicate both the decision and the confidence. A value of +7 means “definitely a 0 with high confidence,” while +1 means “probably a 0 but not very sure.” Negative values indicate 1s.

The math is elegant. For each of the 24 sync word positions, we multiply the soft sample by +1 if we expect a ‘0’ or -1 if we expect a ‘1’, then sum all 24 products. Perfect alignment produces a large positive value; misalignment produces values near zero. The peak stands out sharply from the noise floor. We already had the information. We just had to use it.
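Both approaches can be sketched in a few lines of Python (a model of the method, not the VHDL; the ±7 scale, threshold value, and polarity convention follow the description in this article):

```python
SYNC_WORD = 0x02B8DB   # 24-bit optimized sync word
N = 24

def sync_bits():
    """MSB-first bit list of the sync word."""
    return [(SYNC_WORD >> (N - 1 - i)) & 1 for i in range(N)]

def hamming_detect(hard_bits, threshold=3):
    """Old approach: count differing hard-decision bits."""
    errors = sum(b != s for b, s in zip(hard_bits, sync_bits()))
    return errors <= threshold

def soft_correlate(soft_samples):
    """New approach: add the sample where a '0' is expected and
    subtract it where a '1' is expected (positive soft values mean
    '0' per the demodulator convention described above)."""
    return sum(x if s == 0 else -x for x, s in zip(soft_samples, sync_bits()))

# Perfectly confident samples: +7 for each expected '0', -7 for each '1'.
perfect = [7 if s == 0 else -7 for s in sync_bits()]
assert soft_correlate(perfect) == 24 * 7   # maximum peak of 168
```

Misaligned or noisy input produces sums near zero, so the aligned peak stands out sharply.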

The new `frame_sync_detector.vhd` in our encoder-dev branch implements soft-decision correlation with several key features:

First, we have parallel data paths. We maintain two shift registers. One is for soft decisions (24 × 16-bit signed values) and one is for hard decisions (24 bits). The soft path feeds the correlator; the hard path handles byte assembly after sync is found. This lets us have our cake and eat it too.

Second, we have polarity-aware correlation. This sounds fancy, but it’s a simple process. Our minimum shift keying demodulator uses a convention where positive soft values indicate ‘0’ bits and negative values indicate ‘1’ bits. The correlator accounts for this. When the sync word expects a ‘1’, we subtract the sample (making a negative contribution become positive). This detail matters. If we get the polarity wrong, our correlator becomes an anti-correlator.

Third, we have frame tracking with a flywheel. Once locked, we don’t search for sync on every bit. Instead, we count through the known frame length and verify sync where we expect it. This “flywheel” approach dramatically reduces computation and provides robustness against brief interference. It takes three consecutive successful sync word detections to declare lock, and we maintain lock through up to two consecutive missed syncs before returning to full search mode. We may update these numbers later on if they are too small or too big. This is a good start and is similar to what commercial communications systems implement. We’re on the right track here.

Fourth, we have adaptive thresholds. Our HUNTING mode uses a stricter threshold than LOCKED mode. When searching, we need high confidence to avoid false positives. Once locked and tracking, we can be more forgiving. If sync is in roughly the right place with reasonable correlation, we stay locked. We have to really lose track of our frame boundaries in order to go back to HUNTING, where we search through every single bit we receive with a sliding window and correlator to find our optimized pattern.

Fifth, we have some debug instrumentation. The design exports correlation values and peak tracking signals, essential for threshold calibration. We can’t set thresholds blindly; they depend on your ADC scaling, Costas loop gains, and signal levels. We need to know what the correlator calculated and we need to know the peak detected. Otherwise we might be way off on thresholds.

The combination of optimized sync word and soft-decision detection provides measurable improvements.

For pure AWGN channels, correlation detection offers roughly 2-3 dB improvement over Hamming distance at moderate SNR. The optimal sync word provides a slight additional edge at very low SNR compared to concatenated Barker. This means that we can deliver the same performance with about half the signal power. That’s not bad. But the real payoff comes in multipath environments. With delayed echoes from terrain features, the 8:1 PSLR sync word dramatically outperforms the 3:1 concatenated Barker code. The suppressed sidelobes mean echoes are far less likely to trigger false sync detection. Combined with correlation-based detection, we see substantial improvement in frame acquisition reliability under realistic VHF/UHF propagation conditions.

If you’re building an Opulent Voice implementation, here’s how to calibrate the correlation thresholds.

First, connect the debug correlation output to an ILA or register interface. Second, transmit known sync words and observe the peak correlation value. Third, set `HUNTING_THRESHOLD` to 70-80% of this observed peak. Finally, set `LOCKED_THRESHOLD` to 40-50% of the observed peak.

The defaults in the VHDL (10,000 for hunting, 5,000 for locked) were conservative starting points. Your actual values will depend on the particular signal chain in your design.

ORI Regulatory Update: FCC Proposes Deleting BPL Rules

The FCC’s “Delete, Delete, Delete” initiative proposes removing the entire Access Broadband over Power Line (BPL) regulatory framework (Part 15 Subpart G) from the Code of Federal Regulations. The reasoning: BPL was never successfully commercialized, so the rules are dead letter. This item is scheduled for the December 18, 2025 FCC Open Meeting.

For those who weren’t in the amateur radio trenches in the United States during the mid-2000s, BPL was one of the most contentious regulatory battles in recent ham radio history. The technology promised broadband internet delivery over power lines, but there was a big catch. Power lines make excellent antennas in the HF spectrum. BPL systems operating in the 1.7 MHz to 80 MHz range caused substantial interference to amateur radio operations, shortwave broadcasting, and other licensed services. This was documented by radio groups large and small across the US.

ARRL fought this battle all the way to federal court. In 2008, the DC Circuit Court found the FCC had violated the Administrative Procedure Act in its BPL rulemaking. At the time, this was recognized as a significant victory. The court ordered the FCC to reconsider, but the Commission largely reaffirmed its original rules in 2011, leading to continued legal challenges that threatened to drag on for years.

Then a plot twist happened. The market solved the problem before the courts got back around to it. Every major commercial BPL deployment in the United States eventually shut down because they failed their business cases. Fiber, DSL, cable, and wireless broadband simply won. The last significant BPL internet provider (IBEC) closed shop in 2012. Cincinnati’s BPL system pulled the plug in 2014.

Part 15 Subpart G contained special provisions for Access BPL devices, including things like exclusion zones, database registration requirements, consultation requirements, mandated measurement procedures, and notching requirements for amateur bands.

Without Subpart G, any future BPL-like device would be regulated under the general Part 15 unintentional radiator provisions. These are the same rules that govern everything from your laptop to your garage door opener.

So, does this matter now? Well, yes. First of all, good riddance. These rules governed a technology that no longer exists in commercial deployment. Removing dead regulations is good regulatory hygiene. If someone wanted to resurrect BPL tomorrow, they’d still need to meet Part 15 emission limits and couldn’t cause harmful interference to licensed services. That’s the spectrum regulatory reality regardless of Subpart G.

But it’s not that simple. The specialized Subpart G rules existed precisely because generic Part 15 limits were inadequate for dealing with how harsh BPL interference really was. NTIA studies showed that BPL systems operating at generic Part 15 limits had essentially 100% probability of interfering with nearby HF operations. Removing the framework means any future power-line broadband technology would start from scratch without the hard-won protections built into Subpart G.

This is being processed as what is known as a Direct Final Rule. This means that the FCC believes it’s non-controversial and doesn’t require the traditional notice-and-comment process. However, the agency is accepting input. If adverse comments are filed, the rule would convert to a standard rulemaking requiring public comment.

Parties who have views on this deletion (like ARRL, which invested significant resources fighting these battles) have an opportunity to weigh in before the December 18 meeting.

FCC Document: DOC-415572A1 (Delete, Delete, Delete – Direct Final Rule)
Current regulations: 47 CFR Part 15 Subpart G (§§15.601–15.615)
Background: ARRL v. FCC, DC Circuit Court of Appeals (2008)

For ORI members interested in the regulatory history, the ARRL’s BPL archive at arrl.org/broadband-over-powerline-bpl contains extensive documentation of the interference measurements, court filings, and technical studies from this era.

BPL Regulatory Throwback

From ARRL Bulletin ARLB003, in February 2005:

ARRL CEO David Sumner, K1ZZ, called Powell’s performance ‘a deep disappointment’ after some initial optimism–especially given his unabashed cheerleading on behalf of the FCC’s broadband over power line (BPL) initiative.

‘It’s no secret that we thought Chairman Powell was going entirely in the wrong direction on BPL and dragging the other commissioners and FCC staff along–willing or not–because he was, after all, the chairman,’ Sumner said. ‘A new chairman might be a chance for a fresh start.’

When the FCC adopted new Part 15 rules for BPL last October, Powell called it ‘a banner day.’ While conceding that BPL will affect some spectrum users, including ‘all those wonderful Amateur Radio operators out there,’ Powell implied that the FCC must balance the benefits of BPL against the relative value of other licensed services.

August-November Puzzle Solution

This month’s puzzle update is a VHDL solution for the August “bug report” and September test bench hint. 

Link to .vhd file here: https://github.com/OpenResearchInstitute/documents/blob/master/Papers_Articles_Presentations/Articles_and_Announcements/VHDL_newsletter_puzzle_solution.vhd

Inner Circle Newsletter January-February 2026

Contents

Welcome!
DVB-S2 Receiver for Haifuraiya
A Picture is Worth a Thousand Words
Inner Circle Sphere of Activity
Rolling Dice Over Radio
Four Dice Bugs and a Microphone
Upgrading a Hard-Decision Viterbi Decoder to Soft-Decision
Retevis RT86, Hamcation Sponsorship, and 70cm Band Protection
Is Amateur Radio an Alternative to Age-Restricted Social Media?

Welcome to the Debugging Issue!

It’s February 2026 and this issue of Inner Circle has a special focus on Debugging. How do you feel about this necessary and often frustrating process?

Open Research Institute is a non-profit dedicated to open source digital radio work on the amateur bands. We do both technical and regulatory work. Our designs are intended for both space and terrestrial deployment. We’re all volunteer and we work to use and protect the amateur radio bands. You can get involved by visiting https://openresearch.institute/getting-started 

Membership is free. All work is published to the general public at no cost. Review and download our work at https://github.com/OpenResearchInstitute

We equally value ethical behavior and over-the-air demonstrations of innovative and relevant open source solutions. We offer remotely accessible lab benches for microwave band radio hardware and software development. We host meetups and events at least once a week. Members come from around the world. 

Calling All Adventurers: Help Us Complete the dvb_fpga Repository

Rolling for Initiative

Fellow adventurers of the amateur radio realm, we have a quest of legendary proportions before us. The dvb_fpga repository, Open Research Institute’s open-source FPGA implementation of DVB-S2 components, sits at a critical juncture. The transmitter side has been conquered, tested, and proven in battle. But the receiver? That’s the dragon’s lair we haven’t fully mapped yet.

We’ve built a magnificent ballista (the transmitter) that can launch messages into the sky with precision. But catching those messages when they come back? That requires a completely different set of skills. Timing, synchronization, error correction, and the arcane arts of signal processing.

The Story So Far: Our Transmitter Victory

The dvb_fpga repository at https://github.com/OpenResearchInstitute/dvb_fpga already has 130 stars and 39 forks. This is a testament to Suoto’s leadership and the community’s interest. The transmitter chain is complete. The Baseband Scrambler, BCH Encoder, LDPC Encoder, Bit Interleaver, Constellation Mapper, and Physical Layer Framing have all been tested and hardware-verified.

The entire transmitter chain synthesizes cleanly in Vivado for a Zynq UltraScale+ at 300 MHz, using only about 6.5k LUTs, 6.1k flip-flops, 20 block RAMs, and 64 DSP slices. It’s lean, mean, and ready for deployment. All outputs match GNU Radio reference implementations bit-for-bit.

The Dragon’s Lair: Building the Receiver

Here’s where the quest gets interesting. If the transmitter is like carefully packing a message into an enchanted bottle and throwing it into a stormy sea, the receiver is like trying to catch that bottle while blindfolded, in a hurricane, not knowing exactly when it will arrive—and then having to unscramble the message even if some of the ink got smeared.

The DVB-S2 receiver needs several major components, each a boss encounter in its own right:

Symbol Timing Recovery “The Temporal Synchronizer”

Our receiver clock and the transmitter clock are in different time zones, metaphorically speaking. They drift, they jitter, they disagree about the fundamental nature of time. Symbol timing recovery must analyze the received waveform and figure out exactly when to sample each symbol.

Frame Synchronization “The Beacons are Lit!”

DVB-S2 frames start with a 26-symbol Physical Layer Start of Frame (PLSOF) sequence. It’s like a lighthouse beacon in the rain and fog. The frame synchronizer must detect this pattern, lock onto it, and maintain frame alignment even as conditions change. Miss the beacon, and you’re lost at sea.

Carrier Recovery “The Phase Walker”

Frequency offsets and phase drift cause the received constellation to spin and wobble. Carrier recovery must track these impairments and correct them in real-time. It’s like merging into traffic on a busy freeway. You have to match the speed of the rest of the traffic in order to get where you want to go.

LDPC Decoder “The Error Slayer”

This is the final boss. Low-Density Parity Check (LDPC) codes have near-Shannon-limit error correction performance, but decoding them requires iterative belief propagation across massive sparse matrices. The DVB-S2 LDPC decoder must handle frame sizes up to 64,800 bits with various code rates. Implementations exist (Charles Brain’s GPU version, Ahmet Inan’s C version in GNU Radio), but we need an efficient, open-source FPGA implementation.

Adventurers Wanted: Your Skills Are Needed

This quest isn’t for a single hero. It’s for a party, and we need diverse classes of contributors. We need FPGA Wizards, versed in VHDL or Verilog, who can write synthesizable RTL (the existing codebase uses VUnit for testing). We need DSP Clerics, signal processing experts who understand timing recovery algorithms, PLLs, and carrier synchronization techniques. We need Algorithm Bards, who can implement LDPC decoders (Min-Sum, layered architectures) and understand the mathematics of iterative decoding. We need GNU Radio Rangers, Python experts who can create reference implementations and test vectors. And we need Documentation Warlocks, technical writers who can document architectures, interfaces, and usage in clear, accessible language.

Your Starting Equipment

You don’t have to start from scratch. ORI provides Remote Labs, granting access to Xilinx development boards (including a ZCU102 with ADRV9002 and a ZC706 with ADRV9009) and test equipment up to 6 GHz. Real hardware, remotely accessible. Existing test infrastructure, VUnit-based testbenches and GNU Radio data generation scripts, is already in the repository. Reference implementations exist: GNU Radio’s gr-dvbs2rx and Ron Economos’s work provide software references to test against. And we have a community with an enforced code of conduct. The ORI Slack, regular video calls, and an international team of collaborators have built a friendly environment for people to build quality open source hardware, firmware, and software.

The Treasure at Journey’s End

Why does this matter? An open-source, FPGA-based DVB-S2/X receiver enables amateur satellite communications, including but not limited to Phase 4 Ground’s digital multiplexing transceiver for GEO/HEO missions. Students can learn real-world DSP implementations. Experimenters can modify and experiment without proprietary limitations. Commercial DVB-S2 receivers cost thousands of dollars and are black boxes. An open-source FPGA implementation changes the game entirely.

Join the Party

Ready to roll for initiative? Here are some ways to get started.
Fork the repository: github.com/OpenResearchInstitute/dvb_fpga
Join ORI Slack: Request an invite at openresearch.institute/getting-started
Check the issues: Seven open issues await eager contributors
Request Remote Lab access: Real hardware for real testing
Make your own Adventure: Start with symbol timing, frame sync, or dive into LDPC decoding

At Open Research Institute, we believe that open source approaches to digital communications yield the best outcomes for everyone. This project is living proof. The transmitter exists because volunteers made it happen. The receiver needs you.

May your synthesis always meet timing, and your simulations always match hardware!

A Picture is Worth a Thousand Words

This image, made with the MacOS DataGraph program by Paul Williamson KB5MU, was created in order to answer a question about the LibreSDR integration effort. Are the frames really coming out at 40ms all the time, and at what rate is the modulator trying to accept bits during that time? Answers should be yes, and 2168 bits per frame. Here we’ve measured both the duration and the count of bits made available in an FPGA register, and normalized them both so that the nominal value is shown as 1.0. Black circles are bits, and red circles are durations. You’ll have to look closely to see that the red and black circles are paired up almost perfectly.  There’s some variation in the duration measured by this method, but it is mostly synchronous with the bit count measurement. The variation shown here is only a few percent, though it looks a bit messy. This is a good result, showing that frame timing was preserved. 

Quality instrumentation enables debugging.

Rolling Dice Over Radio: Games, Community, and an Extensible Command System for Interlocutor

Building a framework for shared activities on the Opulent Voice Protocol

Michelle Thompson, W5NYV

Drop into many repeater nets or tune across the bands and you may find the same small group of regulars having the same conversations. Newcomers often report that the airwaves don’t feel as welcoming as they should: there’s no obvious reason to key up, nothing to do together, no shared activity that would give someone a reason to come back tomorrow. The technical barriers to entry have been falling for years, but some social ones remain.

What if your radio could roll dice? What if a group of operators could play a tabletop role-playing game (RPG) campaign over a digital voice link, with the interface handling initiative rolls and ability checks right in the chat window? What if the same framework that rolls a d20 could also deal cards for poker night, flip coins for a bet, or run a trivia game? Suddenly there’s a reason to get on the air that has nothing to do with signal reports and everything to do with having fun together.

That’s the idea behind the new command system in Interlocutor, the human-radio interface for the Opulent Voice Protocol. We built an extensible slash-command architecture, starting with a Dungeons and Dragons (D&D) dice roller, that gives operators and developers a framework for creating shared activities over radio. This article describes the design, how it works, and how you can extend it.

Why Games Matter for Amateur Radio

Games are how humans build community. A weekly D&D campaign gives people a standing reason to show up. A card game creates natural turn-taking and conversation. Even something as simple as a shared dice roll creates a moment of anticipation and reaction, like rolling a “natural” 20, that bonds people together. These are exactly the dynamics that amateur radio nets have. The pressure is off the individual to show up and make conversation or feel like they are on the spot. The topic or purpose of the net is the standing reason to show up. Games give this purpose additional interest and fun. Having commands that make games easier enables increased operator participation on the airwaves.

The tabletop gaming community and the amateur radio community already overlap more than most people realize. Both involve people who love systems, protocols, and rules. Both have rich traditions of face-to-face gathering. Both struggle with bringing in younger participants. By giving operators a way to play games over radio, we create a bridge between these communities and give yet another answer to the question every new ham asks: “I got my license, now what do I do with it?”

At first, the dice roller was the entire point and purpose of the code edit. It was going to be built as an “Easter Egg”, an undocumented fun or frivolous feature. But after thinking it through, we didn’t want to build just a dice roller, and there was no need to hide it. We realized that we wanted a framework that the community can extend. We wanted this because we can’t predict what games and activities people will want to play, and the command lexicons on Slack, Discord, multi-user dungeons (MUDs), and massively multiplayer online role-playing games (MMORPGs) like Everquest are expansive and well-established. The architecture had to be open and inviting to contributors, just like the airwaves should be open and inviting to operators.

The Architecture Became a Command Dispatch System

The command system sits between user input and the radio transmission pipeline. When you type a line in the chat window, the dispatcher checks whether it starts with a slash-command. If it does, the command is executed locally and the result displayed in your chat window. If it doesn’t, the text passes through to ChatManagerAudioDriven for normal radio transmission. Commands are never sent over the air as raw text. They’re intercepted, interpreted, and (in general) consumed locally.

The implemented design is a three-layer stack that mirrors the modem’s own protocol architecture. The CommandDispatcher is the routing layer. This is analogous to frame sync detection in the radio receiver. It examines the preamble (the slash and command name) and routes to the right handler. Each Command subclass is a self-contained handler that parses its own arguments and returns a structured CommandResult. The result contains both a human-readable summary for the CLI and a structured data dictionary for rich web UI rendering. This is much better than a hardcoded and hidden dice roller.
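For concreteness, the three-layer shape might look something like this minimal Python sketch. The class and method names follow the description above (CommandDispatcher, CommandResult, dispatch); the actual Interlocutor code may differ in its details.

```python
from dataclasses import dataclass, field

@dataclass
class CommandResult:
    summary: str                               # human-readable text for the CLI
    data: dict = field(default_factory=dict)   # structured payload for the web UI

class CommandDispatcher:
    """Routing layer: examines the slash preamble and routes to a handler."""
    def __init__(self):
        self.handlers = {}                     # command name -> handler

    def register(self, name, handler):
        self.handlers[name] = handler

    def dispatch(self, line):
        # Only slash-prefixed input is treated as a command.
        if not line.startswith('/'):
            return None
        name, _, args = line[1:].partition(' ')
        handler = self.handlers.get(name)
        if handler is None:
            return None                        # unknown commands fall through to chat
        return handler(args)
```

Returning None for anything that isn't a registered command is what lets ordinary chat, and unrecognized slash-commands, pass through to the radio untouched.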

The Dispatch Pattern

The integration is identical in both CLI and web interfaces. Every line of user input passes through the dispatcher before reaching the chat manager:

result = dispatcher.dispatch(user_input)
if result is not None:
    display(result)     # local only, never transmitted
else:
    send_to_radio(user_input)  # normal chat

Unrecognized slash-commands pass through as normal chat rather than producing errors. This is intentional. Nothing should stand between an operator and their radio. The system helps when it can and stays out of the way when it can’t.

The First Spell in the Book: /roll

The /roll command implements standard tabletop dice notation, the same grammar that RPG players have used for decades and that is well-documented in source code examples online. Type /roll 4d6+2 and Interlocutor rolls four six-sided dice, adds 2, and displays the result with individual roll values visible for full transparency. The alias /r provides shorthand for frequent rollers, because when initiative is on the line, every keystroke counts!

Command        Result
/roll d20      d20 is [14] = 14
/roll 4d6+2    4d6+2 is [3, 5, 1, 6] + 2 = 17
/roll d100     d100 is [73] = 73
/roll 3d6-2    3d6-2 is [4, 2, 6] - 2 = 10

Individual roll values are always shown in brackets because some game mechanics depend on them: dropping the lowest die for some methods of ability score generation, counting natural 20s (D&D), counting the number of 1s or 6s (Warhammer 40k), identifying critical failures (D&D). It also builds trust in the group. You can see the dice aren’t being fudged. Guardrails enforce sensible limits (maximum 100 dice, sides between 2 and 1000), and the error messages include flavor text because error messages are part of the user experience too: “Even Tiamat doesn’t roll that many dice.”
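For readers who want to experiment, the XdY+Z grammar can be parsed in a few lines of Python. This is a sketch of the notation described above, not Interlocutor's actual parser; the guardrail limits mirror the ones just mentioned.

```python
import random
import re

def roll(notation, rng=random):
    """Parse simple XdY+Z dice notation and roll it.

    Sketch only: real dice grammars support more (keep-highest, exploding dice).
    Returns (list of individual rolls, total including modifier).
    """
    m = re.fullmatch(r'(\d*)d(\d+)([+-]\d+)?', notation.strip())
    if m is None:
        raise ValueError(f"bad dice notation: {notation}")
    count = int(m.group(1) or 1)       # "d20" means "1d20"
    sides = int(m.group(2))
    modifier = int(m.group(3) or 0)
    # Guardrails similar to Interlocutor's: max 100 dice, 2..1000 sides.
    if count > 100 or not (2 <= sides <= 1000):
        raise ValueError("Even Tiamat doesn't roll that many dice.")
    rolls = [rng.randint(1, sides) for _ in range(count)]
    return rolls, sum(rolls) + modifier
```

Passing a seeded random.Random as rng makes results reproducible for testing, while the default uses the module-level generator for real play.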

In the Web Interface: A Third Column

In Interlocutor’s web interface, incoming messages appear on the left. Outgoing messages appear on the right. Command results appear centered. They’re neither incoming nor outgoing, they’re a shared activity happening in the space between operators. They are essentially system messages. This three-column layout emerged naturally from the domain model. Messages have a direction (incoming, outgoing, or system), and the visual layout (and CSS styling) reflects that directly.

The web UI receives command results as structured data over WebSocket, not just formatted text. This means the frontend can render dice results with custom styling, highlight critical hits, animate the roll, or display any rich visualization that makes sense for the command. The CLI gets a clean text summary of the same result. Both interfaces use the same backend dispatch logic.

Extending the System: Your Game Here

Adding a new command requires no changes to the dispatcher, the web interface, or the CLI. You create a Python file, subclass Command, implement three properties and one method, and register it. The dispatcher handles all routing. The entire command system is standard-library Python with no external dependencies.
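As a sketch of how small an extension could be, here is a hypothetical /coinflip command. The Command base class and its property names here are assumptions based on the description above (three properties and one method); the repository's actual interface may differ.

```python
import random

class Command:
    """Stand-in for Interlocutor's Command base class (hypothetical shape)."""
    name = ""          # the slash-command name, without the slash
    aliases = ()       # optional shorthand names
    help_text = ""     # shown to the operator on request
    def execute(self, args):
        raise NotImplementedError

class CoinFlipCommand(Command):
    name = "coinflip"
    aliases = ("flip",)
    help_text = "Flip a coin: /coinflip"
    def execute(self, args):
        face = random.choice(["heads", "tails"])
        # Return both a CLI summary and structured data for the web UI,
        # matching the dual-purpose result described in the article.
        return {"summary": f"The coin lands on {face}!",
                "data": {"command": "coinflip", "result": face}}
```

Registering the class with the dispatcher would be the only remaining step; the routing, CLI display, and web rendering all come for free.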

Commands we envision for the community:

More dice mechanics: /roll 4d6kh3 (keep highest 3 for ability scores), /roll 2d20kh1 (D&D advantage), exploding dice, World of Darkness target number pools, other tabletop game conventions.

Game utilities: /coinflip, /draw (deal cards from a shared deck), /initiative (roll for the whole party and sort the results), /shuffle.

Radio commands: /freq, /power, /mode, which let you display or adjust radio parameters without cluttering the chat.

Conference management: /who (list participants), /mute, /invite — for Interlocutor’s upcoming conference tab.

The architecture is deliberately inviting to contributors. If you can write a Python function, you can add a command. Customize your ride!

Looking Ahead: Game Night Over the Air

Right now, command results are local to the operator’s interface. That means that all dice rolls are private. You see your own results in your own interface. Nobody else does. This is fine for solo testing and local play, but it’s not ideal for a game session over radio.

When Interlocutor gains its conference tab, command results will become shareable through the conference signaling layer. The conference model is straightforward. A conference is a list of target call signs, and traffic addressed to that conference reaches all members. Dice rolls will travel through this conference channel, not the radio text channel, much as party chat and zone chat are separate channels in an online game.

The interesting design question is visibility control. On platforms like D&D Beyond, players choose whether to roll publicly or privately. A dungeon master (DM) rolling behind the screen needs private rolls. The party shouldn’t see whether that monster’s saving throw succeeded before the DM narrates the result. But player rolls generally need to be public for the group to trust them. We’ll need something like /roll d20 for public rolls and /rollpriv d20 or /roll d20 --private for DM-only results. We’re not there yet, but the CommandResult structure already carries enough metadata to support it. So, adding a visibility flag is straightforward once the conference transport exists.

Imagine a Saturday night D&D session over Opulent Voice. The DM describes a scene by voice, a player types /roll d20+5 for their attack, and every operator in the conference sees the result appear in their chat window. The dice emoji, the individual roll, the modifier, the total. Someone keys up: “Natural 18 plus 5, that’s a 23, does that hit?” The radio comes alive with exactly the kind of activity that gives people a reason to come back next week. Data mode allows simple maps. This won’t replace the advanced graphics of dndbeyond.com, but it does fully enable the theater of the mind at the heart of roleplaying games.

That is what welcoming looks like. Not just a sign on the door, but something worth doing on the other side of it.

Get Involved

The Interlocutor command system is open source and available in the Interlocutor repository on GitHub (https://github.com/OpenResearchInstitute/interlocutor/).

We welcome contributions—new commands, new game ideas, bug reports, or just telling us what you’d want to do over radio that you can’t do today. The best way to make the airwaves more welcoming is to give people something fun to do when they get there. If you’ve ever wished your radio could roll a d20, now it can! And if you want it to do something else entirely, the framework is ready for you.

Four Dice Bugs and a Microphone

Debugging the Interlocutor Command System

Michelle Thompson, W5NYV

We added a “simple” feature to Interlocutor’s web interface and spent more time fighting the browser than writing the feature. Here is our debugging war story from the Opulent Voice Protocol project.

A companion article in this newsletter describes the new slash-command system for Interlocutor. This is an extensible architecture that lets operators type /roll d20 in the chat window and see dice results locally instead of transmitting the raw text over the air. The design is clean, the test suite passes, and the CLI works perfectly.

The web interface, however, had other plans!

What followed was a debugging session that touched every layer of the stack. Python async handlers, JavaScript message routing, browser rendering, and (most painfully for me) browser caching. Each bug had a clear symptom, a non-obvious cause, and a fix that taught us all something about the assumptions hiding in our code.

So here is the story of those four bugs!

Bug 1: The Eager Blue Bubble

Symptom

Type /roll d6 in the web interface. A blue outgoing message bubble appears on the right side of the chat, showing the raw text /roll d6, as if it were a normal chat message being sent over the radio. No dice result appears. It’s supposed to be in the middle and a different color, due to a special CSS case for commands. Commands are neither sent messages nor received messages, so they belong in the center of the message area, visually distinct with their own color.

After refreshing the browser, the blue bubble disappears and the correct dice result appears instead, centered and properly styled! Well, that isn’t going to work.

The Investigation

The fact that refresh fixed the display turned out to be the key clue. It meant the server was doing its job correctly. It was dispatching the command, generating the result, and storing it in message history. The problem was in how the browser rendered the initial interaction, right after the operator pressed return at the end of the command.

Here’s the original sendMessage() function in app.js:

function sendMessage() {
    const messageInput = document.getElementById('message-input');
    const message = messageInput.value.trim();

    if (!message) return;

    if (ws && ws.readyState === WebSocket.OPEN) {
        const timestamp = new Date().toISOString();
        displayOutgoingMessage(message, timestamp);   // ← THE CULPRIT

        sendWebSocketMessage('send_text_message', { message });
        messageInput.value = '';
        messageInput.style.height = 'auto';
    }
}

See the line marked THE CULPRIT? displayOutgoingMessage(message, timestamp) fires immediately, before the WebSocket message even leaves the browser. The function creates a blue right-aligned bubble and appends it to the chat history. So far so good. Then the message travels to the server, where the command dispatcher intercepts it and sends back a command_result. But, by then, the user is already looking at a blue bubble containing /roll d6.

This is an optimistic UI pattern. This is the kind you see in iMessage or Slack, where sent messages appear instantly without waiting for server confirmation. It’s the right design for normal chat messages, where the server is just a relay. But slash-commands aren’t normal chat. They need to be processed by the server before the UI knows what to display.

The Fix

A one-line gate:

    if (ws && ws.readyState === WebSocket.OPEN) {
        const timestamp = new Date().toISOString();
        // Don't display slash-commands as outgoing chat — the server
        // will send back a command_result that renders properly
        if (!message.startsWith('/')) {
            displayOutgoingMessage(message, timestamp);
        }

        sendWebSocketMessage('send_text_message', { message });

Normal chat still gets the instant blue bubble. Slash-commands wait for the server’s command_result response and render through the proper handler. The UI now reflects the actual data flow, which is almost always the best way to do it.

The Lesson

Optimistic UI is a performance optimization with semantic consequences. When you render before processing, you’re saying that you already know what the result looks like. For relay-style operations like send text or display text, this assumption holds. For operations that transform input like parse command, execute, or return structured result, it doesn’t. The display strategy needs to match the processing model.

Bug 2: The Silent Tagged Template Literal

Symptom

After adding the slash-command gate to sendMessage(), the web interface stops working entirely. Whoops! The page loads, but no WebSocket connection is established. The server logs show HTTP 200 for the page and JavaScript files, but no WebSocket upgrade requests. The browser appears completely dead! Doh.

The Investigation

Opening Safari’s Web Inspector, the console showed:

SyntaxError: Unexpected token ';'. Expected ')' to end a compound expression.
    (anonymous function) (app.js:234)

Line 234 wasn’t anywhere near our edit. It was this line, which had existed in the codebase before we touched anything:

        addLogEntry`Sent message: ${message.substring(0, 50)}...`, 'info');

Spot the problem? I didn’t, at first. There’s a missing ( between addLogEntry and the backtick. The correct call should be:

        addLogEntry(`Sent message: ${message.substring(0, 50)}...`, 'info');

Here’s where it gets interesting. This line had been working before our edit. It had worked all along no problem. But, how?

In JavaScript, functionName followed by a template literal (backtick string) is valid syntax. It’s called a tagged template literal. It calls the function with the template parts as arguments. Why do we have tagged template literals in our code? Spoiler alert. We don’t!

JavaScript didn’t complain because addLogEntry`…` is coincidentally valid syntax. It’s a tagged template literal call. The language feature exists so you can do things like sanitizing HTML (html`<p>${userInput}</p>`) or building SQL queries with automatic escaping. Libraries like styled-components and GraphQL’s gql tag use them heavily.

But nobody chose to use one here. The typo just happened to land in the exact one spot where a missing parenthesis produces a different valid program instead of a syntax error. It was an accidental bug hiding in plain sight.

So addLogEntry`Sent message: ...` was being parsed as a tagged template call, which would produce garbage results but wouldn’t throw an error.

The , 'info'); after the closing backtick was previously being parsed as part of a larger expression that happened to be syntactically valid in context. But our edit to sendMessage() changed the surrounding code structure just enough that the JavaScript parser could no longer make sense of the stray , 'info'). And, Safari, unlike Chrome, refused to be lenient about it.

One missing parenthesis, silently wrong for who knows how long, suddenly became fatal because we edited a nearby line.

The Fix

Add the (:

        addLogEntry(`Sent message: ${message.substring(0, 50)}...`, 'info');

The Lesson

Tagged template literals can be a silent trap. A missing ( before a backtick doesn’t produce a syntax error. It produces a different valid program. The bug was latent in the codebase, asymptomatic until a nearby change shifted the parser’s interpretation of the surrounding code. This is the kind of thing a linter catches instantly, and it’s a good argument for running one.

Bug 3: Safari’s Immortal Cache

Symptom

After fixing the tagged template literal, we save app.js, restart the server, and reload the browser. The same error appears! We use Safari’s “Empty Caches” command (Develop menu, select Empty Caches). Same error. We hard-refresh with Cmd+Shift+R. Same error. The server logs show 304 Not Modified for app.js. The browser isn’t even requesting the new file. Ugh.

The Investigation

FastAPI’s StaticFiles serves JavaScript files with default cache headers that tell the browser to cache aggressively. Safari honors this enthusiastically. The “Empty Caches” command clears the disk cache, but Safari also holds cached resources in memory for any open tabs or windows. As long as a Safari window exists, even if you’ve navigated away from the page, the in-memory cache can survive a disk cache clear.

We verified this by checking the server logs. After “Empty Caches” and reload, the server never received a request for app.js at all. Safari was serving the old file from memory without even asking the server if it had changed. In production, this is useful. In development, it can be confusing and result in wasted time and effort.

The Fix

Quit Safari completely. Cmd+Q, not just closing the window, and then relaunch. On the fresh launch, Safari requested all files from the server (status 200), got the corrected app.js, and the WebSocket connection established immediately. This could be seen in Interlocutor’s terminal output.

For future development, we can consider three approaches. First, adding Cache-Control: no-cache headers via middleware. Second, appending cache-buster query strings to script tags (app.js?v=2). Third, using content-hashed filenames. All are legitimate. For an actively-developed project without a build system, the full-browser-quit approach during development is the simplest, and proper cache headers can be added when the project matures.
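As one illustration, the cache-buster approach could be sketched like this. The function name is hypothetical and Interlocutor doesn't currently do this; it simply derives the ?v= query string from a hash of the file's contents, so the URL changes whenever the file does.

```python
import hashlib
from pathlib import Path

def cache_busted_url(path):
    """Build a script URL with a ?v= tag derived from the file's content hash.

    Hypothetical sketch of the cache-buster approach: because the URL changes
    whenever the file contents change, the browser can't serve a stale copy.
    """
    digest = hashlib.md5(Path(path).read_bytes()).hexdigest()[:8]
    return f"{path}?v={digest}"
```

The template that emits the script tag would call this at page-render time, trading a small read-and-hash cost for guaranteed freshness during development.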

The Lesson

Browser caching is not a single mechanism. Disk cache, memory cache, service worker cache, and HTTP cache negotiation are all separate systems that interact in browser-specific ways. “Clear the cache” can mean different things depending on which layer you’re clearing. When changes to static files seem to have no effect, verify at the network level (server logs or browser network tab) that the new file is actually being requested, not just that the old cache has been “cleared.”

Bug 4: The Split-Personality Refresh

Symptom

With the cache issue resolved, slash-commands now work in the web interface. Yay! Type /roll d6 and a properly styled command result appears, centered in the chat with a dark background and dice emoji. Type /roll fireball damage and a red error message appears, also centered. It looks great.

Then hit refresh.

The same messages reload from history, but now they’re displayed as incoming messages. They are left-aligned, with a light background and the wrong styling. The live rendering and the history rendering are producing completely different visual output for the same data. Blech.

The Investigation

Interlocutor’s web interface loads message history on every WebSocket connection and this includes reconnects and page refreshes. The loadMessageHistory() function in app.js iterates over all stored messages and dutifully renders each one:

function loadMessageHistory(messages) {
    messages.forEach(messageData => {
        let direction = 'incoming';
        let from = messageData.from;

        if (messageData.from === currentStation ||
            messageData.direction === 'outgoing') {
            direction = 'outgoing';
            from = 'You';
        }

        const message = createMessageElement(
            messageData.content, direction, from, messageData.timestamp
        );
        messageHistory.appendChild(message);
    });
}

This function knows about two types of messages: incoming and outgoing. A command result has direction: "system" and from: "Interlocutor", which doesn’t match the outgoing check, so it falls through to the default direction = 'incoming'. The function dutifully renders it as a left-aligned incoming message. It’s just doing what it’s told.

Meanwhile, live command results arrive as WebSocket messages with type: "command_result", which routes to handleCommandResult(). This is a completely separate rendering path that produces the centered, dark-styled output.

Same data, two rendering paths, two visual results. The message type field was present in the stored data but loadMessageHistory() never checked it.

The Fix

Add a type check at the top of the history loop:

    messages.forEach(messageData => {
        // Handle command results from history
        if (messageData.type === 'command_result') {
            handleCommandResult(messageData);
            return;
        }

        let direction = 'incoming';
        // ... existing code continues ...

Now history-loaded command results route through the same handleCommandResult() function as live ones. Same code path, same visual output, regardless of whether you’re seeing the result live or after a refresh.

The Lesson

When you add a new message type to a system that stores and replays messages, there are always two rendering paths: the live path and the history path. If you only add handling to the live path, the system appears to work, but only until someone refreshes. This is a specific instance of a more general principle. Any system that persists data and reconstructs UI from it must handle every data type in both the write path and the read path. Miss one and you get a split personality. And that is what happened here.

The Meta-Lesson

All four bugs share a common thread. Interlocutor had multiple paths to the same destination, and we only modified some of them!

The blue bubble existed because sendMessage() had an immediate rendering path and a server-response rendering path, and we only added command handling to the server path. The tagged template literal survived because JavaScript had two valid parsings of the same token sequence, and we only intended one. The cache persisted because Safari had a memory cache and a disk cache, and we only cleared the disk. The split-personality refresh existed because the UI had a live rendering path and a history rendering path, and we only added command handling to the live path.

In each case, the fix was pretty small: a conditional check, a parenthesis, a browser restart, a type guard. The debugging time came from discovering which path we’d missed. The lesson isn’t about any particular technology, and it had nothing to do with the functionality implemented with this code commit. It’s about the discipline of asking “What are all the ways this data can reach this code?” and making sure every path handles every case.

For a radio system where reliability matters, that discipline is well worth cultivating.

The Interlocutor command system is open source and available in the Interlocutor repository on GitHub (https://github.com/OpenResearchInstitute/interlocutor/). The new interlocutor_command module includes comprehensive documentation, a demo program to show how it works, 43 tests in a mini test suite, and an integration.md guide that now includes a Troubleshooting section born directly from these four bugs.

Upgrading a Hard-Decision Viterbi Decoder to Soft-Decision: A Case Study in FPGA Debugging

This article documents the process of upgrading a working hard-decision Viterbi decoder to soft-decision decoding in an FPGA-based MSK modem implementing the Opulent Voice protocol. Soft-decision decoding provides approximately 2-3 dB of coding gain over hard-decision, which is significant for satellite communications and weak signal work where every dB matters. We describe the architectural changes, the bugs encountered, and the systematic debugging approach used to resolve them.

Introduction

The Opulent Voice (OPV) protocol uses a rate 1/2, constraint length 7 convolutional code with generator polynomials G1=171 and G2=133 in octal representation. The original implementation used hard-decision Viterbi decoding, where each received bit is quantized to 0 or 1 before decoding. Soft-decision decoding preserves additional information from the demodulator. In addition to the output of a 0 or a 1, we also have what we are going to call “confidence information” from the demodulator. This additional information allows us to make better decisions because some bits are more reliable than others. Some bits come in strong and clear and others are very noisy. If we knew how sure we were about whether a bit was a 0 or a 1, then we could improve our final answers on what we thought was sent to us. How is this improvement achieved? We can’t read the mind of the transmitter, so where does this “confidence information” come from? How do we use it?

Consider receiving two bits. We get one bit with a strong signal and we get the other bit near the noise floor. Hard decision treats both bits equally. They’re either 0 or 1, case closed. Soft decision decoding says “I’m 95% confident this first bit is a 1, but only 55% confident about the second bit being a 0.” When the decoder must choose between competing paths, it can weight reliable bits more heavily than the ones it has less confidence about.

When our modem demodulates a bit, the result is calculated as a signed 16 bit number. For hard decisions, we just take the sign bit from this number. This is the bit that tells us if the number is positive or negative. Negative numbers are interpreted as 1 and positive numbers are interpreted as 0. The rest of the number, for hard decisions, is thrown away. However, we are going to use the rest of the calculation for soft decisions. How close to full scale 1 or 0 was the rest of the number? This is our confidence information.
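In Python terms, the hard decision keeps only the sign, and the magnitude we would otherwise throw away is the confidence. This is a model for illustration, not the modem's actual HDL.

```python
def hard_decision(soft_metric):
    """Keep only the sign of the 16-bit soft metric.
    Negative -> 1, positive (or zero) -> 0, per the convention above."""
    return 1 if soft_metric < 0 else 0

def confidence(soft_metric):
    """The magnitude discarded by the hard decision is the confidence
    information that soft decoding keeps."""
    return abs(soft_metric)
```

A strongly negative metric like -300 and a barely negative one like -3 both decode to the same hard bit 1, but carry very different confidence, which is exactly the information the soft path preserves.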

In practice, a technique called 3-bit soft quantization captures most of the available information and gets us the answers we are after. Quantization means that we translate our 16 bit number, which represents a very high resolution of 65536 levels of confidence, into a 3 bit number, which represents a more manageable 8 levels of confidence. Think of this like when someone asks you to rate a restaurant on a scale from 1 to 5. That’s relatively easy. 1 is terrible. 5 is great. 3 is average, or middle of the road. If you were asked to rate a restaurant on a scale from 1 to 65536, you probably could, but how many levels of quality are there really? Simplifying a rating to a smaller number of steps makes it easier to deal with and communicate to others. This is what we are doing with our 16 bit calculation. Converting it to a 3 bit calculation simplifies our design by quite a bit without sacrificing a lot of performance. We can always go back to the 16 bit number if we have to. Since we were using signed binary representation, 000 is the biggest number and 111 is the smallest. If we print the numbers out, you can see how it works if we just take the sign bit and “round up” or “round down” the rest of the result.

Here’s our quantized demodulator output. The sign of the number (positive or negative) is the first binary digit. Then the rest of the number follows.

000 largest positive number - definitely received a 0
001 probably a 0
010 might be a 0
011 close to zero, but still positive
100 close to zero, but still negative
101 might be a 1
110 probably a 1
111 smallest negative number - definitely received a 1

After the conversion, our implementation uses the following rubric.

confidence        outcome
000               strong ‘0’ (high positive correlation)
111               strong ‘1’ (high negative correlation)
011-100           uncertain!

System Architecture

Original Hard-Decision Path

[Demodulator] to [Frame Sync] to [Frame Decoder] to [Output]
                     |              |
                  rx_bit      Deinterleave
                  rx_valid    Hard Decision Viterbi
                              Derandomize

The hard-decision path is as follows. The demodulator outputs `rx_bit` (0 or 1) and an `rx_valid` strobe. This strobe tells us when the `rx_bit` is worth looking at. We don’t want to pick up the wrong order, or get something out of the oven too early (still frozen) or too late (oops it’s burned). `rx_valid` tells us when it’s “just right”.  The frame sync detector finds sync words in the incoming received bitstream and then assembles bytes into frames. The sync word is then thrown away, having done its job. The resulting frame needs to be deinterleaved, to put the bits back in the right order, and then we do our forward error correction. After that, we derandomize. We now have a received data frame.

New Soft-Decision Path

[Demodulator] to [Frame Sync Soft] to [Frame Decoder Soft] to [Output]
                     |                      |
                  rx_bit              Deinterleave
                  rx_valid            Soft Decision Viterbi
                  rx_soft[15:0]       Derandomize
                     |
              Quantize to 3-bit
              Store soft buffer

There is a lot here that is the same. We deinterleave, decode, and derandomize. We decode with a new soft decision Viterbi decoder, but the flow in the soft frame decoder is essentially the same as in the hard decision version.

What is new is that the demodulator provides 16-bit signed soft metric alongside the previously provided hard bit. This is just bringing out the “rest” of the calculation used to get us the hard bit in the first place. A really nice thing about our radio design is that this data was there all along. We didn’t have to change the demodulator in order to use it.

Another update is that the frame sync detector quantizes and buffers these soft “confidence information” values. So, we have an additional buffer involved. Finally, the frame decoder uses soft Viterbi with separate G1/G2 soft inputs, instead of the `rx_bit` that we were using before.

Implementation Details

Soft Value Quantization

The demodulator’s soft output is the difference between correlator outputs: `data_f1_sum - data_f2_sum`. Large positive values indicate a confident ‘0’, large negative values a confident ‘1’.

FUNCTION quantize_soft(soft : signed(15 DOWNTO 0)) RETURN std_logic_vector IS
BEGIN
    -- POLARITY: negative soft = '1' bit, positive soft = '0' bit
    IF soft < -300 THEN
        RETURN "111";  -- Strong '1' (large negative soft)
    ELSIF soft < -150 THEN
        RETURN "101";  -- Medium '1'
    ELSIF soft < -50 THEN
        RETURN "100";  -- Weak '1'
    ELSIF soft < 50 THEN
        RETURN "011";  -- Erasure/uncertain
    ELSIF soft < 150 THEN
        RETURN "010";  -- Weak '0'
    ELSIF soft < 300 THEN
        RETURN "001";  -- Medium '0'
    ELSE
        RETURN "000";  -- Strong '0' (large positive soft)
    END IF;
END FUNCTION;

The thresholds (+/- 50, +/- 150, +/- 300) must be calibrated for the specific demodulator. If you want to implement our code in your project, then start with these values and adjust based on observed soft value distributions.
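When calibrating, it can help to model the quantizer offline against captured soft values. This Python sketch mirrors the VHDL above, thresholds included; it is a model for experimentation, not part of the FPGA design.

```python
def quantize_soft(soft):
    """3-bit soft quantizer mirroring the VHDL function above.

    Input: signed soft metric (negative = '1' bit, positive = '0' bit).
    Output: the 3-bit code as a string, using the same thresholds
    (+/-50, +/-150, +/-300) as the hardware implementation.
    """
    if soft < -300:
        return "111"   # strong '1' (large negative soft)
    elif soft < -150:
        return "101"   # medium '1'
    elif soft < -50:
        return "100"   # weak '1'
    elif soft < 50:
        return "011"   # erasure/uncertain
    elif soft < 150:
        return "010"   # weak '0'
    elif soft < 300:
        return "001"   # medium '0'
    else:
        return "000"   # strong '0' (large positive soft)
```

Running a recorded stream of demodulator metrics through this model and histogramming the output codes is a quick way to check whether the thresholds suit your demodulator before committing them to hardware.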

Soft Buffer Architecture

The Opulent Voice frame contains 2144 encoded bits (134 payload bytes × 8 bits × 2 for rate-1/2 error correcting code). Each bit needs a 3-bit soft value, requiring 6432 bits of storage.

TYPE soft_frame_buffer_t IS ARRAY(0 TO 2143) OF std_logic_vector(2 DOWNTO 0);
SIGNAL soft_frame_buffer : soft_frame_buffer_t;

Bit Ordering Challenges

The most subtle bugs in getting this design to work involved bit ordering mismatches between hard and soft paths. The system has multiple bit-ordering conventions that must align. There were several bugs that tripped us up in this category. The way to solve it was to carefully check the waveforms and repeatedly check assumptions about indexing.

Byte transmission: MSB-first (bit 7 transmitted before bit 0)
Byte assembly in receiver: Shift register fills MSB first
Interleaver: 67×32 matrix, column-major order
Soft buffer indexing: Must match hard bit indexing

The Arrival Order Problem

Bytes transmit MSB-first: for byte N, a `bit_count` of 0 receives byte(7) = interleaved[N×8 + 7], and a `bit_count` of 7 receives byte(0) = interleaved[N×8 + 0].

The hard path naturally handles this through shift register assembly. The soft path must explicitly account for it. We got this wrong at first and had to sort it out carefully.

Wrong approach which caused bugs:

-- Tried to match input_bits ordering with complex formula
soft_frame_buffer(frame_byte_count * 8 + (7 - bit_count)) <= quantize_soft(...);

This formula has a timing bug: `bit_count` is read before it updates (VHDL signal semantics), causing off-by-one errors.

Correct approach which gave the right results:

-- Store in arrival order, handle reordering in decoder
soft_frame_buffer(frame_soft_idx) <= quantize_soft(s_axis_soft_tdata);
frame_soft_idx <= frame_soft_idx + 1;

Then in the decoder, we used a combined deinterleave+reorder function:

FUNCTION soft_deinterleave_address(deint_idx : NATURAL) RETURN NATURAL IS
    VARIABLE interleaved_pos : NATURAL;
    VARIABLE byte_num : NATURAL;
    VARIABLE bit_in_byte : NATURAL;
BEGIN
    -- Find which interleaved position has the deinterleaved bit
    interleaved_pos := interleave_address_bit(deint_idx);
    -- Convert interleaved position to arrival position (MSB-first correction)
    byte_num := interleaved_pos / 8;
    bit_in_byte := interleaved_pos MOD 8;
    RETURN byte_num * 8 + (7 - bit_in_byte);
END FUNCTION;
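
A Python model of the same address function is handy when cross-checking waveforms. This sketch assumes the 67×32 column-major interleaver used by the Python reference later in this article; the function names mirror the VHDL:

```python
# Python model of soft_deinterleave_address, assuming the 67x32
# column-major interleaver from the Python reference implementation.
ROWS, COLS = 67, 32

def interleave_address_bit(deint_idx):
    # Interleaved position that holds deinterleaved bit deint_idx.
    row, col = deint_idx // COLS, deint_idx % COLS
    return col * ROWS + row

def soft_deinterleave_address(deint_idx):
    interleaved_pos = interleave_address_bit(deint_idx)
    # MSB-first correction: convert interleaved position to arrival position.
    byte_num, bit_in_byte = divmod(interleaved_pos, 8)
    return byte_num * 8 + (7 - bit_in_byte)
```

Because both steps are bijections, the combined mapping is a permutation of all 2144 positions, which is easy to verify exhaustively.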

Soft Viterbi Decoder

The soft Viterbi decoder computes branch metrics differently than a hard Viterbi decoder does. The hard-decision branch metric is a Hamming distance, taking the value 0, 1, or 2.

branch_metric := (g1_received XOR g1_expected) + (g2_received XOR g2_expected);

Soft-decision branch metric is the sum of soft confidences.

-- For expected bit = '1': metric = soft_value (high if received '1')
-- For expected bit = '0': metric = 7 - soft_value (high if received '0')
IF g1_expected = '1' THEN
    bm := bm + unsigned(g1_soft);
ELSE
    bm := bm + (7 - unsigned(g1_soft));
END IF;
-- Same for G2

The path with the highest cumulative metric wins. This is the opposite convention from hard-decision Hamming distance, where the path with the fewest bit differences wins.
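
The same metric is easy to model for cross-checking. This is a sketch of the convention above with 3-bit confidences in 0..7 (the function name is ours):

```python
def soft_branch_metric(g1_soft, g2_soft, g1_expected, g2_expected):
    # Accumulate confidence toward each expected bit: a soft value of 7
    # means "strongly a 1", so it scores 7 toward an expected 1 and
    # 7 - 7 = 0 toward an expected 0.
    bm = g1_soft if g1_expected == 1 else 7 - g1_soft
    bm += g2_soft if g2_expected == 1 else 7 - g2_soft
    return bm

# A perfect match scores 14 (best); a perfect mismatch scores 0 (worst).
print(soft_branch_metric(7, 7, 1, 1), soft_branch_metric(7, 7, 0, 0))  # 14 0
```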

Debugging Journal

Bug #1: Quantization Thresholds

The symptom was that all soft values were 7 (strong '1'), even though we knew our data was about half 0 and about half 1. The root cause was that the initial thresholds (+/- 12000) were far outside the actual soft value range (+/- 400). We adjusted the thresholds to +/- 300, +/- 150, and +/- 50.

Lesson? Always check actual signal ranges before setting thresholds.
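
A quantizer sketch using those corrected thresholds, with 0 assumed as the implicit midpoint. The exact level assignment in the FPGA may differ, and whether positive maps toward '1' or '0' is the polarity convention discussed in Bug #2:

```python
def quantize_soft(raw):
    # Map a raw correlator value to a 3-bit confidence (0..7) using the
    # corrected thresholds +/-300, +/-150, +/-50, with 0 as the midpoint.
    level = 0
    for t in (-300, -150, -50, 0, 50, 150, 300):
        if raw >= t:
            level += 1
    return level

print(quantize_soft(400), quantize_soft(0), quantize_soft(-400))  # 7 4 0
```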

Bug #2: Polarity Inversion

The symptom was that output frames were bit-inverted. The root cause was that the soft value polarity convention was backwards: positive was mapped to '1' instead of '0'. This was fixed by inverting the quantization mapping.

Bug #3: Viterbi Output Bit Ordering

The symptom was that decoded bytes had reversed bit order. Viterbi traceback produces bits in a specific order that wasn’t matched during byte packing. After several missed guesses, we corrected the bit-to-byte packing loop.

FOR i IN 0 TO PAYLOAD_BYTES - 1 LOOP
    FOR j IN 0 TO 7 LOOP
        fec_decoded_buffer(i)(j) <= decoder_output_buf(PAYLOAD_BYTES*8 - 1 - i*8 - j);
    END LOOP;
END LOOP;
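
In Python terms, the packing is an MSB-first fold of decoded bits into bytes. This illustrative sketch assumes decoded_bits[0] is the most significant bit of byte 0; it is not a line-for-line translation of the VHDL indexing above:

```python
def pack_bits_msb_first(decoded_bits):
    # Pack a flat list of 0/1 values into bytes, first bit = MSB of byte 0.
    out = bytearray()
    for i in range(0, len(decoded_bits), 8):
        byte = 0
        for bit in decoded_bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

print(pack_bits_msb_first([1, 0, 0, 0, 0, 0, 0, 0]))  # b'\x80'
```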

Bug #4: VHDL Timing – Stale G1/G2 Data

The symptom was that the first decoded bytes were correct, and the rest were garbage. This was super annoying. The root cause was that `decoder_start` was asserted in the same clock cycle as G1/G2 packing, but VHDL signal assignments don’t take effect until the end of the process. We added a pipeline stage: we pack G1/G2 in `PREP_FEC_DECODE`, assert start, and wait for `decoder_busy` before transitioning to `FEC_DECODE`.

Bug #5: Vivado Optimization Removing Signals

In Vivado’s waveform visualizer, the `deinterleaved_soft` array was partially uninitialized in simulation. The cause was unclear, but we surmised that Vivado optimized away signals it deemed unnecessary.

We added `dont_touch` and `ram_style` attributes. This seemed to fix the symptom, but we didn’t feel like it was a cure.

ATTRIBUTE ram_style : STRING;
ATTRIBUTE ram_style OF soft_buffer : SIGNAL IS "block";
ATTRIBUTE dont_touch : STRING;
ATTRIBUTE dont_touch OF soft_buffer : SIGNAL IS "true";

Bug #6: Soft Buffer Index Timing

We saw that soft values were stored at the wrong indices; the entire pattern was shifted. The storage formula `byte*8 + (7 - bit_count)` read `bit_count` before it incremented, which pulled everything off by one. The waveform showed index 405 being written when 404 was expected. The formula was mathematically correct, but VHDL signal assignment semantics meant `bit_count` still had its old value.

We abandoned the complex indexing formula. We now store in arrival order using a simple incrementing counter and handle reordering in the decoder.

Bug #7: Wrong Randomizer Sequence

Output frames were completely corrupted despite all other signals appearing correct. The cause was that the soft decoder was created with a different randomizer lookup table than the encoder and hard decoder.

Encoder/Hard Decoder: `x"A3", x"81", x"5C", x"C4", …`
Soft Decoder (wrong): `x"96", x"83", x"3F", x"5B", …`

When creating the soft decoder, the randomizer table was generated incorrectly instead of being copied from the working hard decoder. We copied the exact randomizer sequence from `ov_frame_decoder.vhd`, and it started working. Lesson learned? When creating a new module based on an existing one, copy constants exactly. Don’t regenerate them.
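
Derandomization itself is just a byte-wise XOR with the table, which is why a mismatched table corrupts every byte. A sketch, using only the four table entries quoted above (the full table lives in `ov_frame_decoder.vhd`):

```python
# First entries of the randomizer table, as quoted in this article.
RANDOMIZER_HEAD = [0xA3, 0x81, 0x5C, 0xC4]

def derandomize(data, table):
    # XOR each payload byte with the matching table entry.
    return bytes(b ^ t for b, t in zip(data, table))

# XOR is its own inverse, so applying the same table twice restores the data.
```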

Debugging Methodology

Systematic Signal Tracing

When output is wrong, work backwards from output to input. Check final output values. Check intermediate values after major transformations (after derandomize, after Viterbi, after deinterleave). Check input values to each stage, even if you totally believe you’re getting the right data. Find where expected and actual diverge. Don’t try to solve “in the middle” of wrongness. Work on finding the edges between correct and incorrect, even if it points in unexpected directions.

Reference Implementation

Maintain a reference design. This independent implementation should use a different platform and language than the one you are working in. It performs identical operations and can help you figure things out. For example, our Python references helped us solve problems in our VHDL.

def convolutional_encode(input_bits):
    G1, G2 = 0o171, 0o133
    shift_reg = 0
    output = []
    for bit in input_bits:
        shift_reg = ((shift_reg << 1) | bit) & 0x7F
        g1_bit = bin(shift_reg & G1).count('1') % 2
        g2_bit = bin(shift_reg & G2).count('1') % 2
        output.extend([g1_bit, g2_bit])
    return output

def interleave(bits):
    ROWS, COLS = 67, 32
    interleaved = [0] * len(bits)
    for i in range(len(bits)):
        row, col = i // COLS, i % COLS
        interleaved[col * ROWS + row] = bits[i]
    return interleaved

Compare FPGA signals against reference at each stage.
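
For example, a quick round-trip check with the reference functions (repeated here so the snippet runs standalone, with a deinterleave helper added by us):

```python
def convolutional_encode(input_bits):
    # Rate-1/2, K=7 convolutional encoder from the reference implementation.
    G1, G2 = 0o171, 0o133
    shift_reg, output = 0, []
    for bit in input_bits:
        shift_reg = ((shift_reg << 1) | bit) & 0x7F
        output.append(bin(shift_reg & G1).count('1') % 2)
        output.append(bin(shift_reg & G2).count('1') % 2)
    return output

def interleave(bits):
    # 67x32 column-major interleaver from the reference implementation.
    ROWS, COLS = 67, 32
    out = [0] * len(bits)
    for i, b in enumerate(bits):
        row, col = i // COLS, i % COLS
        out[col * ROWS + row] = b
    return out

def deinterleave(bits):
    # Inverse of interleave(), added here for the round-trip check.
    ROWS, COLS = 67, 32
    out = [0] * len(bits)
    for i in range(len(bits)):
        row, col = i // COLS, i % COLS
        out[i] = bits[col * ROWS + row]
    return out

payload = list(range(0x86))                       # the sequential test frame
bits = [(b >> (7 - j)) & 1 for b in payload for j in range(8)]
encoded = convolutional_encode(bits)              # 2144 bits for a full frame
assert deinterleave(interleave(encoded)) == encoded
```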

Test Patterns

Use recognizable patterns that make errors obvious. For example, we use alternating frames for our test payload data. The first frame is sequential: the bytes go 0x00, 0x01, 0x02, and so on up to 0x85. The second frame is offset from this: 0x80, 0x81, 0x82, up to 0xFF, where it rolls over to 0x00, 0x01, 0x02, 0x03, 0x04, ending at 0x05. Alternating distinctive frames like these help to reveal a wide variety of errors. Frame boundary issues, such as when a frame starts in the middle of another frame, can be spotted. Initialization issues might be revealed as the root cause if only the first frame works and all the rest fail. And, state machine issues could be the underlying problem if the pattern of output bytes is inconsistent.

Waveform Analysis Tips

Check before the clock edge! Signal values are sampled at the rising edge. What you see “after” may be the next value, not the current value.
Watch for ‘U’ (uninitialized). This indicates a signal that was never written or got optimized away. Why wasn’t it ever written? Why is it missing? This is a clue!
Track indices. When storing to arrays, verify that both the index and the value are correct. Off-by-one errors are very common.
Compare parallel paths. If, for example, a hard decision path works but soft decisions do not, the difference reveals the bug.

Results

After fixing all bugs, the soft-decision decoder produces identical output to the hard-decision decoder for clean signals. The benefit appears at low SNR, where soft decisions allow the Viterbi algorithm to make better path selections. Soft decisions add 2-3 dB of coding gain over hard decisions, bringing our total coding gain to roughly 5 dB.

Conclusions

Upgrading from hard to soft decision decoding requires careful attention to bit ordering, VHDL timing, polarity conventions, and code re-use. Multiple conventions must align: you have to get transmission order, assembly order, interleaver order, and buffer indexing all correct. Signal vs. variable semantics matter for single-cycle operations. This is specific to VHDL, but every language has crucial semantic subtleties of its own. Document and verify positive/negative soft value meanings. In other words, pick heads or tails and stick with it. Copy constants and lookup tables exactly from working code. Re-use saves time, until a copy-and-paste error slows the debugging process for hours or days.

The 2-3 dB coding gain from soft decisions is worth the implementation complexity for satellite communications where link margins are precious. The coding gain helps in terrestrial settings to increase range and reliability.

Source Code

The complete implementation is available in the Open Research Institute `pluto_msk` repository at `https://github.com/OpenResearchInstitute/pluto_msk`. `frame_sync_detector_soft.vhd` is the frame sync with soft value quantization and buffering. `ov_frame_decoder_soft.vhd` is the frame decoder with soft Viterbi. And `viterbi_decoder_k7_soft.vhd` is the soft-decision Viterbi decoder core. These are separate files from the hard decision versions, which are also available; look for files without “soft” in the title. This work may still be in the encoder-dev branch when you read this, but the eventual destination is main.

Acknowledgments

This work was performed at Open Research Institute as part of the Opulent Voice digital voice protocol development. It is open source and is published under the CERN Open Hardware Licence version 2.0. Special thanks to Paul Williamson KB5MU for Remote Labs hardware support and testing, Matthew Wishek NB0X for modem architecture, design, and implementation, and Evariste F5OEO for integration advice for the Libre SDR. Thank you to the many volunteers reviewing the work and providing encouragement and support during the debugging process.

Retevis RT86, Hamcation Sponsorship, and 70cm Band Protection

A Technical Analysis

This analysis examines concerns regarding Retevis sponsorship of Orlando Hamcation 2026 and the RT86 radio’s potential to facilitate illegal business operations on the 70cm amateur band. The concern is real. The Retevis RT86 holds only FCC Part 15B certification. This certification is for receivers and unintentional radiators. Yet, the radio is marketed for business use on 430 to 440 MHz. These frequencies are allocated exclusively to amateur radio. The sponsorship of Hamcation 2026 by Retevis has resulted in strong emotions and an accusation of bribery. Characterizing commercial sponsorship as bribery is legally incorrect and counterproductive to addressing the underlying regulatory concern. That concern is warranted and real.

Technical Facts: The RT86 Certification Gap

The Retevis RT86 presents a clear regulatory discrepancy between its FCC certification and its marketing.

RT86 specifications:

Frequency Range: 430-440 MHz (70cm amateur band)
Power Output: 10W high / 5W med / 1W low
FCC ID: 2ASNSRT86
Equipment Class: CXX (Communications Receiver)
FCC Certification: Part 15B ONLY (unintentional radiators/receivers)
Marketing: Business use: warehouses, construction, logistics, security

The Certification vs. Use Discrepancy

Aspect: RT86 Reality vs. Legal Business Radio
FCC Certification: Part 15B only vs. Part 90 required
Frequency Band: 430-440 MHz (amateur) vs. 450-470 MHz (business)
Programming: User-programmable vs. restricted (dealer only)
License Required: Amateur license for these frequencies vs. Part 90 business license

FCC Enforcement Precedent: Skydive Elsinore (May 2024)

The FCC does enforce against businesses operating illegally in amateur bands. In May 2024, the FCC issued a Notice of Unlicensed Operation to Skydive Elsinore, LLC (Lake Elsinore, CA) for transmitting on 442.725 MHz without proper authorization. This confirms that the FCC takes action when violations are documented, though enforcement does depend on available resources and case priority.

The Enforcement Pipeline: Riley Hollingsworth and FCC Resource Constraints

Some community members have expressed frustration that Riley Hollingsworth, K4ZDH, hasn’t been able to resolve complaints about illegal business operations on 70cm. Understanding Hollingsworth’s current role, and its limitations, is essential context.

Riley Hollingsworth served as Special Counsel for the Spectrum Enforcement Division of the FCC’s Enforcement Bureau from 1998 until his retirement in 2008. During that decade, he was effectively the face of amateur radio enforcement at the FCC. He handled violations, issued warning letters, and pursued enforcement actions. He became legendary in the amateur community for his hands-on approach to spectrum enforcement.

In 2019, the ARRL and FCC signed a Memorandum of Understanding creating the Volunteer Monitor (VM) program, with Hollingsworth as the Volunteer Monitor Program Administrator. This program replaced the old Official Observer system. Under this arrangement, trained Volunteer Monitors observe the bands, document violations, and report them to Hollingsworth, who reviews cases and refers appropriate ones to the FCC for action.

Hollingsworth is a contractor/volunteer for ARRL. He is no longer an FCC employee. He can review cases, document violations, and refer them to the FCC, but he has zero enforcement authority. The FCC Enforcement Bureau must decide whether to act on any referral. The VM program was explicitly created “in the wake of several FCC regional office closures and a reduction in field staff.” The FCC doesn’t have the resources to handle amateur enforcement at the level it once did. Cases referred by the VM program compete for attention with interference to aviation, public safety, and cellular, all of which have paying constituencies and Congressional interest.

When someone says “Riley knows about it, but can’t do anything,” they’re essentially correct. He can pass complaints to the FCC, but the FCC Enforcement Bureau prioritizes cases based on available resources and perceived harm. Unlicensed business operations on 70cm, while a legitimate violation, may not rise to the level that triggers immediate FCC action when balanced against their other enforcement priorities.

This doesn’t mean filing complaints is pointless. Documented cases build the record that can eventually justify FCC action, and the VM program gives cases priority treatment over the general complaint process. But expectations should be calibrated to the reality that FCC amateur enforcement operates under significant resource constraints.

The Bribery Question: Why the Characterization Matters

Characterizing Retevis’s Hamcation sponsorship as “bribery” is legally incorrect and strategically counterproductive. This is a characterization made by people genuinely upset and outraged by Hamcation accepting Retevis as a sponsor. It feels bad and looks bad. This isn’t just another radio equipment company, but one that has sold products that have been recorded operating on the 70 cm ham band in open violation of FCC regulations.

Why is bribery the wrong characterization? Bribery requires offering value to influence an official act. Commercial sponsorship is transparent and a routine business practice.

Hamfests regularly accept sponsorship from vendors whose products can be used illegally. Many radios can transmit outside their authorized frequencies. Almost all of Open Research Institute’s work is done on software-defined radios that have essentially no limits on which band they can transmit on. It’s entirely up to our volunteers and experimenters to operate the equipment legally and well.

The legitimate question is one of organizational ethics. Should hamfests vet sponsors for regulatory compliance? That’s a policy discussion, not a criminal allegation.

The technical facts, like the Part 15B-only certification, the business marketing, the amateur-band defaults out of the box, are all strong enough to stand on their own.

Recommendations

For the Amateur Radio Community

Focus on regulatory facts. Document specific instances of illegal operation with time, frequency, location, and signal characteristics. This has been done and has been reported to Riley Hollingsworth; more reports are needed. File FCC complaints through the VM program (K4ZDH@arrl.net) or directly at ConsumerComplaints.FCC.gov.

Understand that enforcement takes time and resources. Building a documented case history helps justify eventual FCC action. It may feel like a lot is being done, for a long time, with nothing to show for it. That’s often how this sort of work feels.

It’s a good question as to whether or not individual amateur radio operators have a responsibility to educate businesses about licensing requirements when encountering illegal operations. It’s almost never a good idea to confront someone doing something that is illegal. Amateur radio is supposed to be self-policing, but this has almost always been taken to reinforce the idea of policing our own ranks (signals that splatter, tuning up on top of the DX station, repeater hogging), and not policing band incursions outside our ranks. Like, from commercial operations that might take great exception to an interruption of logistics at a construction site. If education can be done safely and if it corrects a genuine misunderstanding of the radio products that a warehouse or delivery service has purchased, then good. If it results in a physical confrontation or being trespassed, then not good. ORI cannot and does not recommend physically confronting people that are abusing the bands. Use email or a phone call, be professional and brief, cite the regulations, and make a report as outlined above.

For Hamfest Organizers

Consider developing sponsor vetting criteria that include regulatory compliance review. Engage constructively with community concerns about sponsor practices.

For Retevis:

Obtain proper Part 90 certification for radios marketed for business use. Adjust default programming to appropriate frequencies, specifically 450 to 470 MHz for business applications, or clearly market 430 to 440 MHz products as amateur-only equipment.

Conclusion

The technical concerns about the RT86 have merit. The combination of Part 15B-only certification, explicit business marketing, and default programming on 430 to 440 MHz creates conditions where unsuspecting purchasers will operate illegally. This is a regulatory gap worth addressing.

The FCC has demonstrated willingness to enforce against 70cm band intrusion (Skydive Elsinore), though enforcement capacity is limited. Documented complaints through proper channels combined with realistic expectations about enforcement timelines are the most productive path forward.

Written by Michelle W5NYV, with Kenneth Hendrickson N8KH

Is Amateur Radio an Alternative to Age-Restricted Social Media?

Those of us at Open Research Institute think the answer is overwhelmingly yes. Amateur radio occupies a legally and structurally distinct space that makes it essentially immune to age verification laws as currently written, while providing exactly the kind of meaningful social and technical connection these laws threaten to eliminate.

The age verification laws sweeping across the US and globally are remarkably consistent in what they purport to target. Florida’s HB 3 defines social media platforms as online forums, websites, or applications that meet several criteria: users can upload or view content from other users; at least 10% of daily active users under 16 spend two or more hours daily on the platform; the platform employs algorithms to select content for users; and the platform has “addictive features” like infinite scrolling or push notifications. (https://www.hunton.com/privacy-and-information-security-law/florida-enacts-legislation-restriction-social-media-accounts-for-minors) The law explicitly exempts platforms limited to email or direct messaging.

Alabama’s proposed bill targets any online service that both allows users to upload or view other users’ content and employs algorithms that analyze user data to present content. Both criteria must be met. (https://en.wikipedia.org/wiki/Social_media_age_verification_laws_in_the_United_States)

These definitions consistently hinge on several key elements. Algorithmic content curation, addictive design features, commercial operation, and the scale or size of the service. Some laws like the earlier Florida SB 7072 targeted platforms with 100 million or more monthly users. This limits the services affected to a relatively small number. Multiple state laws explicitly exclude services where interaction is limited to direct messaging, educational resources, and non-commercial communications. (https://www.informationpolicycentre.com/uploads/5/7/1/0/57104281/cipl_age_assurance_in_the_us_sept24.pdf)

Amateur radio, regulated under 47 CFR Part 97, exists in an entirely different regulatory universe. The FCC defines the amateur and amateur-satellite services as being “for qualified persons of any age who are interested in radio technique solely with a personal aim and without pecuniary interest,” presenting “an opportunity for self-training, intercommunication, and technical investigations.” (https://www.fcc.gov/wireless/bureau-divisions/mobility-division/amateur-radio-service)

The five statutory purposes of amateur radio under Part 97.1 recognize its value as a voluntary noncommercial communication service. They are, in brief: the service’s value to the public, particularly for emergency communications; advancement of the radio art; advancement of communication and technical skills; expansion of the reservoir of trained operators, technicians, and electronics experts; and enhancement of international goodwill.

This is not a technicality or simply fluff. It’s a fundamental jurisdictional distinction. Amateur radio is a federally licensed radio service, not a commercial internet platform. The FCC, not state legislatures regulating commercial internet companies, has the primary jurisdiction.

Youth participation is a proud tradition, not an afterthought or inconvenience. There is no minimum age for amateur radio licensing. Applicants as young as five years old have passed examinations and been granted licenses. (https://en.wikipedia.org/wiki/Amateur_radio_licensing_in_the_United_States)

Amateur radio is a merit-based system. If you can demonstrate the knowledge, you earn the privilege, regardless of age. You need not be a US citizen, though you must have valid photo identification and be able to receive mail at a US address; there is no lower or upper age limit.

The community actively encourages youth participation. The ARRL Youth Licensing Grant Program covers the one-time $35 FCC application fee for new license candidates younger than 18, and candidates under 18 pay only $5 for the exam itself (https://www.arrl.org/youth-licensing-grant-program) rather than the standard $15. This isn’t grudging compliance with some inclusion mandate. This is the community investing in its future by removing financial barriers for young people.

Let’s take a specific amateur radio application and compare it to social media. Sometimes we hear “amateur radio is the original social media”. But, is amateur radio simply an early version of social media? Or is amateur radio something distinctive? Let’s compare Opulent Voice and Interlocutor vs. the social media platforms targeted by age verification laws.

What is triggering age verification laws? Let’s go through the list.

Algorithmic content curation designed to maximize engagement. Opulent Voice and Interlocutor have none of this. Content is delivered based on who’s transmitting, not algorithmic selection.

Addictive design features such as infinite scroll, autoplay, and engagement metrics. Opulent Voice and Interlocutor have none of this. Opulent Voice is a communications system, not an engagement-maximization engine.

Commercial operation with data monetization. Opulent Voice and Interlocutor are non-commercial by both design and legal requirement under Part 97.

Targeted advertising. Opulent Voice and Interlocutor? Targeted advertising is prohibited under amateur radio regulations. You cannot conduct commercial activity on amateur frequencies.

Collection and exploitation of minor user data. Opulent Voice and Interlocutor have no data harvesting model. This is open-source infrastructure with no business incentive to collect personal data.

What do Opulent Voice and Interlocutor provide that maps to beneficial social media functions? We’ve listed the big differences, but what do social media and a modern amateur radio protocol and product have in common?

Text communication (like messaging), voice communication (like voice/video calls), data exchange, social activities (the dice roller, game commands), conference capabilities (group interaction), community building around shared interests.

The critical distinction is that OVP and Interlocutor freely and openly provide the communications substrate, the part of social media that actually benefits people, without the engagement manipulation layer that these laws are targeting. The dice roller functions and game commands we recently built into Interlocutor are particularly important because they demonstrate that social interaction and fun are possible without commercial algorithmic manipulation.

Does Amateur Radio Fall Under Age Verification Laws? Based on Open Research Institute’s analysis of the statutory language across multiple states, no, for several reasons.

First, a federal preemption. Amateur radio is a federally regulated service under FCC jurisdiction. State laws regulating “social media platforms” and “commercial entities” don’t reach into FCC-regulated radio services. The Supremacy Clause creates a strong argument that states cannot impose additional restrictions on participation in a federal radio service that the FCC has deliberately made open to all ages.

Second, a clear definitional exclusion. The statutory definitions consistently require combinations of the following. Commercial operation, algorithmic content curation, addictive design features, and scale or size thresholds. An amateur radio system fails every single one of these criteria. Even the broadest definitions we’ve found at ORI wouldn’t capture a non-commercial, non-algorithmic, licensed radio communication system.

Third, we have licensing as our gatekeeper. Amateur radio already has a long history of a highly functional knowledge-based qualification system. A young person who passes a Technician exam has demonstrated competency in RF safety, regulations, and operating practices. This is arguably a more meaningful form of “age-appropriate access” than any ID-checking scheme. It verifies capability rather than just birthday.

Finally, the Part 97 non-commercial requirement. The amateur service’s prohibition on pecuniary interest means it structurally cannot become the kind of data-exploiting, attention-harvesting platform these laws target.

We come to a clear and probing question. Can we market this? When we say “we”, then we have to be very clear. We are talking about considering many different categories of people and power. Can ARRL market this? Can individual clubs market this? Can open source authors and organizations like ORI market this? Can the amateur community in general better market this? Can amateur equipment companies market this? Can amateur radio lobbyists market this?

This is where we need some nuance and some caution. There are at least two approaches, and they’re not mutually exclusive:

The direct approach would be positioning protocols and products like Opulent Voice and Interlocutor explicitly as a youth-accessible alternative to restricted social media. This has appeal. It’s a genuine differentiator and a compelling narrative. But it carries risks. It could attract regulatory attention from legislators who might try to expand definitions, and it could attract users who are not genuinely interested in radio technique. This could create Part 97 compliance issues if usage drifts away from the amateur service’s purposes.

Then there is a more subtle approach. Organizations like Open Research Institute could position work (like Opulent Voice and Interlocutor) as what it actually is. An educational and technical communications platform built on open-source principles within a federally licensed radio service. We don’t need to say “unlike social media, kids can use this.” We could frame it as “Amateur radio has always welcomed young people who demonstrate technical curiosity and competency. Opulent Voice provides modern digital communications capabilities, such as text, voice, data, and social interaction, within this tradition.”

The narrative practically writes itself. While commercial platforms are being restricted because their business models depend on manipulating users (including children), amateur radio has always operated on a fundamentally different model. The “restriction” is that you have to learn something first. The “verification” is demonstrating knowledge, not surrendering your identity documents to a commercial entity.

Age verification laws, if they had been in effect when I was young, would have fundamentally altered my journey, experience, and wellbeing. I found people that were like me, I made very meaningful and positive connections, and I learned things that my parents had no idea were important to me. It’s not that my parents and teachers didn’t want me to learn, but the internet removed the solid barriers of geography and reduced some of the barriers of sexism and ageism. It didn’t primarily matter that I was a girl in Arkansas that wanted to learn more about telephone hardware, how to build and play guitars, and how internal combustion engines worked. For the most part, with some exceptions, I was able to learn about these things, and a whole lot more beyond that. If I had not had positive reinforcement, access to a diverse set of “others” that spoke to me like I was a person and not a silly little girl, the excellent recommendations for what to master and what to skip, time-saving advice (some of it bluntly given, some of it without any tact, sure), along with the congratulations, compliments, and celebration, then I would absolutely not be where I am today. Not anywhere close.

This loss is part of what the EFF and other civil liberties organizations are warning about. The EFF has characterized these laws as creating “a sprawling surveillance regime that would reshape how people of all ages use the internet.” The EFF has said that “age verification mandates are spreading rapidly, despite clear evidence that they don’t work and actively harm the people they claim to protect.” (https://www.eff.org/deeplinks/2025/12/year-states-chose-surveillance-over-safety-2025-review)

Research has repeatedly shown, for decades, that these sorts of laws don’t reduce access to restricted content. They just change how people access it. Florida saw a 1,150% increase in VPN demand after its law took effect (https://www.eff.org/deeplinks/2025/12/year-states-chose-surveillance-over-safety-2025-review). So the kids who are technically savvy enough to find workarounds will do so, while the ones who most need connection, the isolated, the curious, the ones in unsupportive environments, will get cut off.

Amateur radio can legally provide something commercial social media cannot. A federally protected, non-commercial communications service with no age minimum, no algorithmic manipulation, no data harvesting, and a built-in community of mentors. People who have always been part of ham radio culture. A young person who gets their Technician license and connects through something like Opulent Voice and talking to others on a conference server or a satellite or a terrestrial groundsat (repeater) isn’t consuming algorithmically-curated content designed to maximize a corporation’s ad revenue. They’re participating in a technical community that the federal government has recognized for nearly a century as serving the public interest.

The strongest position for those of us that care about this is probably to document this regulatory distinction clearly whenever we can, as soon as we can, and as firmly as we can, while creating protocols and products that are delightful and easy for amateur operators to use.

Perhaps we need more formal white papers or FCC/TAC filings. And, we should make the case not as “we’re a social media alternative” but as “the amateur radio service has always provided youth with meaningful technical and social engagement, and modern digital amateur radio protocols and products continue this tradition in a way that is structurally incompatible with the harms these laws address.” That framing is both legally sound and genuinely true.

Inspired by Mike W2FBI, written by Michelle W5NYV

Retevis RT86, Hamcation Sponsorship, and 70cm Band Protection

A Technical Analysis

This analysis examines concerns regarding Retevis sponsorship of Orlando Hamcation 2026 and the RT86 radio’s potential to facilitate illegal business operations on the 70cm amateur band. The concern is real. The Retevis RT86 holds only FCC Part 15B certification, a certification for receivers and unintentional radiators. Yet the radio is marketed for business use on 430 to 440 MHz, frequencies allocated to the amateur radio service and not to any business radio service. The sponsorship of Hamcation 2026 by Retevis has stirred strong emotions and an accusation of bribery. Characterizing commercial sponsorship as bribery is legally incorrect and counterproductive to addressing the underlying regulatory concern, which is nonetheless warranted and real.

Technical Facts: The RT86 Certification Gap

The Retevis RT86 presents a clear regulatory discrepancy between its FCC certification and its marketing.

Specification: RT86 Value
Frequency Range: 430-440 MHz (70cm amateur band)
Power Output: 10W high / 5W med / 1W low
FCC ID: 2ASNSRT86
Equipment Class: CXX (Communications Receiver)
FCC Certification: Part 15B ONLY (unintentional radiators/receivers)
Marketing: Business use (warehouses, construction, logistics, security)

The Certification vs. Use Discrepancy

Aspect: RT86 Reality / Legal Business Radio
FCC Certification: Part 15B only / Part 90 required
Frequency Band: 430-440 MHz (amateur) / 450-470 MHz (business)
Programming: User-programmable / Restricted (dealer only)
License Required: Amateur license for these frequencies / Part 90 business license

FCC Enforcement Precedent: Skydive Elsinore (May 2024)

The FCC does enforce against businesses operating illegally in amateur bands. In May 2024, the FCC issued a Notice of Unlicensed Operation to Skydive Elsinore, LLC (Lake Elsinore, CA) for transmitting on 442.725 MHz without proper authorization. This confirms that the FCC takes action when violations are documented, though enforcement depends on available resources and case priority.

The Enforcement Pipeline: Riley Hollingsworth and FCC Resource Constraints

Some community members have expressed frustration that Riley Hollingsworth, K4ZDH, hasn’t been able to resolve complaints about illegal business operations on 70cm. Understanding Hollingsworth’s current role, and its limitations, is essential context.

Riley Hollingsworth served as Special Counsel for the Spectrum Enforcement Division of the FCC’s Enforcement Bureau from 1998 until his retirement in 2008. During that decade, he was effectively the face of amateur radio enforcement at the FCC. He handled violations, issued warning letters, and pursued enforcement actions. He became legendary in the amateur community for his hands-on approach to spectrum enforcement.

In 2019, the ARRL and FCC signed a Memorandum of Understanding creating the Volunteer Monitor (VM) program, with Hollingsworth as the Volunteer Monitor Program Administrator. This program replaced the old Official Observer system. Under this arrangement, trained Volunteer Monitors observe the bands, document violations, and report them to Hollingsworth, who reviews cases and refers appropriate ones to the FCC for action.

Hollingsworth is a contractor/volunteer for ARRL. He is no longer an FCC employee. He can review cases, document violations, and refer them to the FCC, but he has zero enforcement authority. The FCC Enforcement Bureau must decide whether to act on any referral. The VM program was explicitly created “in the wake of several FCC regional office closures and a reduction in field staff.” The FCC doesn’t have the resources to handle amateur enforcement at the level it once did. Cases referred by the VM program compete for attention with interference to aviation, public safety, and cellular, all of which have paying constituencies and Congressional interest.

When someone says “Riley knows about it, but can’t do anything,” they’re essentially correct. He can pass complaints to the FCC, but the FCC Enforcement Bureau prioritizes cases based on available resources and perceived harm. Unlicensed business operations on 70cm, while a legitimate violation, may not rise to the level that triggers immediate FCC action when balanced against their other enforcement priorities.

This doesn’t mean filing complaints is pointless. Documented cases build the record that can eventually justify FCC action, and the VM program gives cases priority treatment over the general complaint process. But expectations should be calibrated to the reality that FCC amateur enforcement operates under significant resource constraints.

The Bribery Question: Why the Characterization Matters

Characterizing Retevis’s Hamcation sponsorship as “bribery” is legally incorrect and strategically counterproductive. This is a characterization made by people genuinely upset and outraged by Hamcation accepting Retevis as a sponsor. It feels bad and looks bad. This isn’t just another radio equipment company, but one that has sold products that have been recorded operating on the 70 cm ham band in open violation of FCC regulations.

Why is bribery the wrong characterization? Bribery requires offering value to influence an official act. Commercial sponsorship is transparent and a routine business practice.

Hamfests regularly accept sponsorship from vendors whose products can be used illegally. Many radios can transmit outside their authorized frequencies. Almost all of Open Research Institute’s work is done on software-defined radios that have essentially no limits on which band they can transmit on. It’s entirely up to our volunteers and experimenters to operate the equipment legally and well.

The legitimate question is one of organizational ethics. Should hamfests vet sponsors for regulatory compliance? That’s a policy discussion, not a criminal allegation.

The technical facts, like the Part 15B-only certification, the business marketing, the amateur-band defaults out of the box, are all strong enough to stand on their own.

Recommendations

For the Amateur Radio Community

Focus on regulatory facts. Document specific instances of illegal operation with time, frequency, location, and signal characteristics. This has been done and has been reported to Riley Hollingsworth. More reports are needed. File FCC complaints through the VM program (K4ZDH@arrl.net) or directly at ConsumerComplaints.FCC.gov.

Understand that enforcement takes time and resources. Building a documented case history helps justify eventual FCC action. It may feel like a lot is being done, for a long time, with nothing to show for it. That’s often how this sort of work feels.

It’s a good question whether individual amateur radio operators have a responsibility to educate businesses about licensing requirements when encountering illegal operations. It’s almost never a good idea to confront someone doing something illegal. Amateur radio is supposed to be self-policing, but this has almost always been taken to mean policing our own ranks (signals that splatter, tuning up on top of the DX station, repeater hogging), not policing band incursions from outside our ranks, for example from commercial operations that might take great exception to an interruption of logistics at a construction site. If education can be done safely and corrects a genuine misunderstanding of the radio products a warehouse or delivery service has purchased, then good. If it results in a physical confrontation or being trespassed, then not good. ORI cannot and does not recommend physically confronting people that are abusing the bands. Use email or a phone call, be professional and brief, cite the regulations, and make a report as outlined above.

For Hamfest Organizers

Consider developing sponsor vetting criteria that include regulatory compliance review. Engage constructively with community concerns about sponsor practices.

For Retevis:

Obtain proper Part 90 certification for radios marketed for business use. Adjust default programming to appropriate frequencies, specifically 450 to 470 MHz for business applications, or clearly market 430 to 440 MHz products as amateur-only equipment.

Conclusion

The technical concerns about the RT86 have merit. The combination of Part 15B-only certification, explicit business marketing, and default programming on 430 to 440 MHz creates conditions where unsuspecting purchasers will operate illegally. This is a regulatory gap worth addressing.

The FCC has demonstrated willingness to enforce against 70cm band intrusion (Skydive Elsinore), though enforcement capacity is limited. Documented complaints through proper channels combined with realistic expectations about enforcement timelines are the most productive path forward.

Written by Michelle W5NYV, with Kenneth Hendrickson N8KH

Is Amateur Radio an Alternative to Age-Restricted Social Media?

Those of us at Open Research Institute think the answer is overwhelmingly yes. Amateur radio occupies a legally and structurally distinct space that makes it essentially immune to age verification laws as currently written, while providing exactly the kind of meaningful social and technical connection these laws threaten to eliminate.

The age verification laws sweeping across the US and globally are remarkably consistent in what they purport to target. Florida’s HB 3 defines covered social media platforms as online forums, websites, or applications that meet all of the following criteria: users can upload or view content from other users; at least 10% of daily active users under 16 spend two or more hours daily on the platform; the platform employs algorithms to select content for users; and the platform has “addictive features” like infinite scrolling or push notifications. (https://www.hunton.com/privacy-and-information-security-law/florida-enacts-legislation-restriction-social-media-accounts-for-minors) The law explicitly exempts platforms limited to email or direct messaging.

Alabama’s proposed bill targets any online service that both allows users to upload or view other users’ content and employs algorithms that analyze user data to present content. Both criteria must be met. (https://en.wikipedia.org/wiki/Social_media_age_verification_laws_in_the_United_States)

These definitions consistently hinge on several key elements: algorithmic content curation, addictive design features, commercial operation, and the scale of the service. Some laws, like the earlier Florida SB 7072, targeted platforms with 100 million or more monthly users, limiting the affected services to a relatively small number. Multiple state laws explicitly exclude services where interaction is limited to direct messaging, educational resources, and non-commercial communications. (https://www.informationpolicycentre.com/uploads/5/7/1/0/57104281/cipl_age_assurance_in_the_us_sept24.pdf)

Amateur radio, regulated under 47 CFR Part 97, exists in an entirely different regulatory universe. The FCC defines the amateur and amateur-satellite services as being “for qualified persons of any age who are interested in radio technique solely with a personal aim and without pecuniary interest,” presenting “an opportunity for self-training, intercommunication, and technical investigations.” (https://www.fcc.gov/wireless/bureau-divisions/mobility-division/amateur-radio-service)

The five statutory purposes of amateur radio under Part 97.1 begin with recognition of its value as a voluntary noncommercial communication service, particularly for emergency communications. The others are advancement of the radio art, improvement of communication and technical skills, expansion of the reservoir of trained operators, technicians, and electronics experts, and enhancement of international goodwill.

This is not a technicality or simply fluff. It’s a fundamental jurisdictional distinction. Amateur radio is a federally licensed radio service, not a commercial internet platform. The FCC, not state legislatures regulating commercial internet companies, has the primary jurisdiction.

Youth participation is a proud tradition, not an afterthought or inconvenience. There is no minimum age for amateur radio licensing. Applicants as young as five years old have passed examinations and been granted licenses. (https://en.wikipedia.org/wiki/Amateur_radio_licensing_in_the_United_States)

Amateur radio is a merit-based system. If you can demonstrate the knowledge, then you earn the privilege, regardless of age. You need not be a US citizen, though you must have valid photo identification and be able to get mail at a US address. There is no lower or upper age limit.

The community actively encourages youth participation. The ARRL Youth Licensing Grant Program covers the one-time $35 FCC application fee for new license candidates younger than 18, and candidates under 18 pay only $5 for the exam itself (https://www.arrl.org/youth-licensing-grant-program) rather than the standard $15. This isn’t grudging compliance with some inclusion mandate. This is the community investing in its future by removing financial barriers for young people.

Let’s take a specific amateur radio application and compare it to social media. Sometimes we hear that “amateur radio is the original social media.” But is amateur radio simply an early version of social media, or is it something distinctive? Let’s compare Opulent Voice and Interlocutor with the social media platforms targeted by age verification laws.

What is triggering age verification laws? Let’s go through the list.

Algorithmic content curation designed to maximize engagement. Opulent Voice and Interlocutor have none of this. Content is delivered based on who’s transmitting, not algorithmic selection.

Addictive design features such as infinite scroll, autoplay, and engagement metrics. Opulent Voice and Interlocutor have none of this. Opulent Voice is a communications system, not an engagement-maximization engine.

Commercial operation with data monetization. Opulent Voice and Interlocutor are non-commercial by both design and legal requirement under Part 97.

Targeted advertising. Opulent Voice and Interlocutor? Targeted advertising is prohibited under amateur radio regulations. You cannot conduct commercial activity on amateur frequencies.

Collection and exploitation of minor user data. Opulent Voice and Interlocutor have no data harvesting model. This is open-source infrastructure with no business incentive to collect personal data.

What do Opulent Voice and Interlocutor provide that maps to beneficial social media functions? We’ve listed the big differences, but what do social media and a modern amateur radio protocol and product have in common?

Text communication (like messaging), voice communication (like voice/video calls), data exchange, social activities (the dice roller, game commands), conference capabilities (group interaction), community building around shared interests.

The critical distinction is that OVP and Interlocutor freely and openly provide the communications substrate, the part of social media that actually benefits people, without the engagement manipulation layer that these laws are targeting. The dice roller functions and game commands we recently built into Interlocutor are particularly important because they demonstrate that social interaction and fun are possible without commercial algorithmic manipulation.

Does Amateur Radio Fall Under Age Verification Laws? Based on Open Research Institute’s analysis of the statutory language across multiple states, no, for several reasons.

First, a federal preemption. Amateur radio is a federally regulated service under FCC jurisdiction. State laws regulating “social media platforms” and “commercial entities” don’t reach into FCC-regulated radio services. The Supremacy Clause creates a strong argument that states cannot impose additional restrictions on participation in a federal radio service that the FCC has deliberately made open to all ages.

Second, a clear definitional exclusion. The statutory definitions consistently require combinations of the following. Commercial operation, algorithmic content curation, addictive design features, and scale or size thresholds. An amateur radio system fails every single one of these criteria. Even the broadest definitions we’ve found at ORI wouldn’t capture a non-commercial, non-algorithmic, licensed radio communication system.

Third, we have licensing as our gatekeeper. Amateur radio has a long history with a highly functional knowledge-based qualification system. A young person who passes a Technician exam has demonstrated competency in RF safety, regulations, and operating practices. This is arguably a more meaningful form of “age-appropriate access” than any ID-checking scheme. It verifies capability rather than just a birthday.

Finally, the Part 97 non-commercial requirement. The amateur service’s prohibition on pecuniary interest means it structurally cannot become the kind of data-exploiting, attention-harvesting platform these laws target.
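The definitional-exclusion argument above can be made concrete. The sketch below is a toy illustration in Python, emphatically not legal analysis; the criteria names and the all-elements-required rule paraphrase the Florida- and Alabama-style definitions discussed earlier, and the function and field names are invented for this sketch.

```python
# Toy illustration, not legal analysis. The recurring statutory elements are
# paraphrased from the state-law definitions discussed above; field and
# function names are invented for this sketch.

def meets_social_media_definition(service: dict) -> bool:
    """A service is captured only if it satisfies ALL recurring elements."""
    return (
        service["commercial"]                 # operated for pecuniary interest
        and service["algorithmic_curation"]   # engagement-driven content selection
        and service["addictive_features"]     # infinite scroll, autoplay, etc.
        and service["meets_scale_threshold"]  # e.g. monthly-user minimums
    )

typical_platform = {
    "commercial": True,
    "algorithmic_curation": True,
    "addictive_features": True,
    "meets_scale_threshold": True,
}

amateur_radio_service = {
    "commercial": False,             # Part 97 prohibits pecuniary interest
    "algorithmic_curation": False,   # you hear whoever is transmitting
    "addictive_features": False,     # no engagement-maximization layer
    "meets_scale_threshold": False,  # no commercial user-count threshold applies
}

print(meets_social_media_definition(typical_platform))       # True
print(meets_social_media_definition(amateur_radio_service))  # False
```

Because the statutes require these elements in combination, failing even one would exclude a service; amateur radio fails all four.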

We come to a clear and probing question. Can we market this? When we say “we”, then we have to be very clear. We are talking about considering many different categories of people and power. Can ARRL market this? Can individual clubs market this? Can open source authors and organizations like ORI market this? Can the amateur community in general better market this? Can amateur equipment companies market this? Can amateur radio lobbyists market this?

This is where we need some nuance and some caution. There are at least two approaches, and they’re not mutually exclusive:

The direct approach would be positioning protocols and products like Opulent Voice and Interlocutor explicitly as a youth-accessible alternative to restricted social media. This has appeal. It’s a genuine differentiator and a compelling narrative. But it carries risks. It could attract regulatory attention from legislators who might try to expand definitions, and it could attract users who are not genuinely interested in radio technique. This could create Part 97 compliance issues if usage drifts away from the amateur service’s purposes.

Then there is a more subtle approach. Organizations like Open Research Institute could position work (like Opulent Voice and Interlocutor) as what it actually is. An educational and technical communications platform built on open-source principles within a federally licensed radio service. We don’t need to say “unlike social media, kids can use this.” We could frame it as “Amateur radio has always welcomed young people who demonstrate technical curiosity and competency. Opulent Voice provides modern digital communications capabilities, such as text, voice, data, and social interaction, within this tradition.”

The narrative practically writes itself. While commercial platforms are being restricted because their business models depend on manipulating users (including children), amateur radio has always operated on a fundamentally different model. The “restriction” is that you have to learn something first. The “verification” is demonstrating knowledge, not surrendering your identity documents to a commercial entity.

Age verification laws, if they had been in effect when I was young, would have fundamentally altered my journey, experience, and wellbeing. I found people that were like me, I made very meaningful and positive connections, and I learned things that my parents had no idea were important to me. It’s not that my parents and teachers didn’t want me to learn, but the internet removed the solid barriers of geography and reduced some of the barriers of sexism and ageism. It didn’t much matter that I was a girl in Arkansas who wanted to learn more about telephone hardware, how to build and play guitars, and how internal combustion engines worked. For the most part, with some exceptions, I was able to learn about these things, and a whole lot more beyond that. If I had not had positive reinforcement, access to a diverse set of “others” that spoke to me like I was a person and not a silly little girl, the excellent recommendations for what to master and what to skip, and time-saving advice (some of it bluntly given, some of it without any tact, sure), along with the congratulations, compliments, and celebration, then I would absolutely not be where I am today. Not anywhere close.

This loss is part of what the EFF and other civil liberties organizations are warning about. The EFF has characterized these laws as creating “a sprawling surveillance regime that would reshape how people of all ages use the internet.” The EFF has said that “age verification mandates are spreading rapidly, despite clear evidence that they don’t work and actively harm the people they claim to protect.” (https://www.eff.org/deeplinks/2025/12/year-states-chose-surveillance-over-safety-2025-review)

Research has repeatedly shown for decades that these sorts of laws don’t reduce access to restricted content. They just change how people access it. Florida saw a 1,150% increase in VPN demand after its law took effect (https://www.eff.org/deeplinks/2025/12/year-states-chose-surveillance-over-safety-2025-review). So the kids who are technically savvy enough to find workarounds will do so, while the ones who most need connection, the isolated, the curious, the ones in unsupportive environments, will get cut off.

Amateur radio can legally provide something commercial social media cannot. A federally protected, non-commercial communications service with no age minimum, no algorithmic manipulation, no data harvesting, and a built-in community of mentors. People who have always been part of ham radio culture. A young person who gets their Technician license and connects through something like Opulent Voice, talking to others on a conference server or a satellite or a terrestrial groundsat (repeater), isn’t consuming algorithmically-curated content designed to maximize a corporation’s ad revenue. They’re participating in a technical community that the federal government has recognized for nearly a century as serving the public interest.

The strongest position for those of us that care about this is probably to document this regulatory distinction clearly whenever we can, as soon as we can, and as firmly as we can, while creating protocols and products that are delightful and easy for amateur operators to use.

Perhaps we need more formal white papers or FCC/TAC filings. And, we should make the case not as “we’re a social media alternative” but as “the amateur radio service has always provided youth with meaningful technical and social engagement, and modern digital amateur radio protocols and products continue this tradition in a way that is structurally incompatible with the harms these laws address.” That framing is both legally sound and genuinely true.

Inspired by Mike W2FBI, written by Michelle W5NYV

Inner Circle Newsletter August 2025

The Who What When Where Why

Open Research Institute is a non-profit dedicated to open source digital radio work on the amateur bands. We do both technical and regulatory work. Our designs are intended for both space and terrestrial deployment. We’re all volunteer and we work to use and protect the amateur radio bands.

You can get involved in our work by visiting https://openresearch.institute/getting-started 

Membership is free. All work is published to the general public at no cost. Our work can be reviewed and designs downloaded at https://github.com/OpenResearchInstitute

We equally value ethical behavior and over-the-air demonstrations of innovative and relevant open source solutions. We offer remotely accessible lab benches for microwave band radio hardware and software development. We host meetups and events at least once a week. Members come from around the world.

Solution to July Puzzle

August Puzzle: The Mysterious Lock Loss

-- SCENARIO: You're debugging a Costas loop implementation that works
-- perfectly in simulation but fails intermittently in hardware.
-- The loop locks quickly to F1 (carrier + 1kHz), but when the
-- input switches to F2 (carrier + 3kHz), it sometimes loses lock
-- entirely instead of reacquiring.
-- PUZZLE: What's causing this intermittent lock loss?
-- HINT: Look carefully at the loop filter characteristics and gain scheduling.

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;
use IEEE.MATH_REAL.ALL;

entity costas_loop_puzzle is
    generic (
        DATA_WIDTH  : integer := 16;
        PHASE_WIDTH : integer := 12;
        F1_OFFSET   : integer := 1000;  -- 1 kHz offset from carrier
        F2_OFFSET   : integer := 3000   -- 3 kHz offset from carrier
    );
    port (
        clk         : in  std_logic;
        reset       : in  std_logic;
        rf_input    : in  signed(DATA_WIDTH-1 downto 0);
        freq_select : in  std_logic;  -- '0' for F1, '1' for F2
        -- Outputs for debugging
        i_data      : out signed(DATA_WIDTH-1 downto 0);
        q_data      : out signed(DATA_WIDTH-1 downto 0);
        phase_error : out signed(DATA_WIDTH-1 downto 0);
        vco_freq    : out signed(PHASE_WIDTH-1 downto 0);
        lock_detect : out std_logic
    );
end entity;

architecture behavioral of costas_loop_puzzle is
    -- VCO signals
    signal vco_phase    : signed(PHASE_WIDTH-1 downto 0) := (others => '0');
    signal vco_i, vco_q : signed(DATA_WIDTH-1 downto 0);
    signal vco_control  : signed(DATA_WIDTH-1 downto 0) := (others => '0');
    -- Mixer outputs
    signal mixer_i, mixer_q : signed(DATA_WIDTH-1 downto 0);
    -- Loop filter components
    signal integrator   : signed(DATA_WIDTH+4-1 downto 0) := (others => '0');
    signal proportional : signed(DATA_WIDTH-1 downto 0);
    signal error_signal : signed(DATA_WIDTH-1 downto 0);
    -- Lock detection
    signal error_magnitude : unsigned(DATA_WIDTH-1 downto 0);
    signal lock_counter    : unsigned(15 downto 0) := (others => '0');
    -- Critical parameters (this is where the puzzle lies!)
    constant KP : signed(7 downto 0) := to_signed(32, 8);   -- Proportional gain
    constant KI : signed(7 downto 0) := to_signed(2, 8);    -- Integral gain
    -- Gain scheduling based on frequency (THE TRAP!)
    signal adaptive_kp : signed(7 downto 0);
    signal adaptive_ki : signed(7 downto 0);
begin

    -- Gain scheduling logic: reduces gains at higher frequencies.
    -- This looks reasonable but creates the error starvation!
    process(freq_select)
    begin
        if freq_select = '0' then  -- F1 mode
            adaptive_kp <= KP;
            adaptive_ki <= KI;
        else  -- F2 mode: "optimize" for stability at higher frequency
            adaptive_kp <= shift_right(KP, 2);  -- KP/4
            adaptive_ki <= shift_right(KI, 3);  -- KI/8
        end if;
    end process;

    -- VCO phase accumulator
    process(clk, reset)
    begin
        if reset = '1' then
            vco_phase <= (others => '0');
        elsif rising_edge(clk) then
            vco_phase <= vco_phase + resize(vco_control, PHASE_WIDTH);
        end if;
    end process;

    -- VCO sine/cosine generation (simplified)
    -- In a real implementation, this would be a lookup table
    vco_i <= to_signed(integer(32767.0 * cos(real(to_integer(vco_phase)) * MATH_PI / 2048.0)), DATA_WIDTH);
    vco_q <= to_signed(integer(32767.0 * sin(real(to_integer(vco_phase)) * MATH_PI / 2048.0)), DATA_WIDTH);

    -- Quadrature mixers
    process(clk)
    begin
        if rising_edge(clk) then
            -- Multiply and low-pass filter (simplified); resize the
            -- full-precision product back down to DATA_WIDTH
            mixer_i <= resize(shift_right(rf_input * vco_i, 15), DATA_WIDTH);
            mixer_q <= resize(shift_right(rf_input * vco_q, 15), DATA_WIDTH);
        end if;
    end process;

    -- Costas loop error detector (classic I*sign(Q) approach)
    process(clk)
        variable q_sign : signed(DATA_WIDTH-1 downto 0);
    begin
        if rising_edge(clk) then
            if mixer_q >= 0 then
                q_sign := to_signed(1, DATA_WIDTH);
            else
                q_sign := to_signed(-1, DATA_WIDTH);
            end if;
            error_signal <= resize(shift_right(mixer_i * q_sign, 8), DATA_WIDTH);
        end if;
    end process;

    -- Loop filter with adaptive gains
    process(clk, reset)
        variable scaled_error : signed(DATA_WIDTH+4-1 downto 0);
        variable prop_term    : signed(DATA_WIDTH+4-1 downto 0);
    begin
        if reset = '1' then
            integrator  <= (others => '0');
            vco_control <= (others => '0');
        elsif rising_edge(clk) then
            -- Scale error by adaptive gains
            scaled_error := resize(error_signal * adaptive_ki, DATA_WIDTH+4);
            prop_term    := resize(error_signal * adaptive_kp, DATA_WIDTH+4);
            -- Integrate with adaptive gain
            integrator <= integrator + scaled_error;
            -- PI controller output
            vco_control <= resize(shift_right(integrator + prop_term, 4), DATA_WIDTH);
        end if;
    end process;

    -- Lock detector: measures error magnitude
    process(clk, reset)
    begin
        if reset = '1' then
            lock_counter <= (others => '0');
            lock_detect  <= '0';
        elsif rising_edge(clk) then
            error_magnitude <= unsigned(abs(error_signal));
            if error_magnitude < 100 then  -- Low error threshold
                if lock_counter < 65535 then
                    lock_counter <= lock_counter + 1;
                end if;
            else
                lock_counter <= (others => '0');
            end if;
            -- Declare lock after 1000 consecutive low-error samples
            if lock_counter > 1000 then
                lock_detect <= '1';
            else
                lock_detect <= '0';
            end if;
        end if;
    end process;

    -- Output assignments
    i_data      <= mixer_i;
    q_data      <= mixer_q;
    phase_error <= error_signal;
    vco_freq    <= resize(vco_control, PHASE_WIDTH);

end behavioral;
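The hint can be quantified with a back-of-the-envelope second-order loop calculation, sketched below in Python for convenience. The combined phase-detector/NCO gain KD_K0 is a placeholder (the true value depends on the fixed-point scaling in the VHDL above), so only the ratio between the two gain sets is meaningful.

```python
import math

# Rough second-order PLL analysis of the puzzle's PI loop filter.
# KD_K0 stands in for the combined phase-detector and NCO gain; absolute
# numbers are arbitrary, but ratios between the two gain sets are not.
KD_K0 = 1.0

def loop_params(kp: float, ki: float) -> tuple:
    """Return (natural frequency, damping factor, approximate lock range)."""
    wn = math.sqrt(ki * KD_K0)        # natural frequency grows with sqrt(KI)
    zeta = 0.5 * kp * KD_K0 / wn      # damping grows with KP
    lock_range = 2.0 * zeta * wn      # classic lock-in estimate, sinusoidal PD
    return wn, zeta, lock_range

wn1, zeta1, lock1 = loop_params(kp=32.0, ki=2.0)          # F1 gain set
wn2, zeta2, lock2 = loop_params(kp=32.0 / 4, ki=2.0 / 8)  # F2 gain set (KP/4, KI/8)

print(f"F1: wn={wn1:.3f} zeta={zeta1:.2f} lock range={lock1:.1f}")
print(f"F2: wn={wn2:.3f} zeta={zeta2:.2f} lock range={lock2:.1f}")
print(f"Lock range shrinks by a factor of {lock1 / lock2:.1f}")
```

Under this estimate the F2 gain set cuts the lock-in range by a factor of four (2·zeta·wn reduces to KP·KD_K0), at exactly the moment the frequency step grows from 1 kHz to 3 kHz. That mismatch is one way to see why reacquisition becomes intermittent.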

Defending Amateur Radio Spectrum: The AST SpaceMobile Battle Continues

Partial Victory in 430-440 MHz Band Defense

The amateur radio community has achieved a significant but limited victory in protecting the 430-440 MHz band from commercial satellite encroachment. AST SpaceMobile’s request for broad commercial use of amateur spectrum has been restricted to emergency-only operations for a maximum of 24 hours and only 20 satellites—but the fight isn’t over.

AST SpaceMobile (AST & Science LLC) operates a constellation of large commercial satellites designed to provide cellular service directly to mobile phones. Think of it as trying to turn satellites into massive cell towers in space. The problem? They wanted to use the 430-440 MHz amateur radio band for their Telemetry, Tracking, and Command (TT&C) operations across a planned 243-satellite constellation.

This isn’t just about frequency coordination—it’s about fundamental spectrum philosophy. The amateur bands exist for experimentation, emergency communications, and education. Commercial operations fundamentally change the character of these allocations, much like turning a public research laboratory into a private factory floor.

The Technical Challenge

AST SpaceMobile’s satellites are massive. These are some of the largest commercial satellites ever deployed, with solar arrays spanning over 700 square meters. These aren’t small CubeSats doing modest experiments. They are industrial-scale infrastructure requiring robust command and control systems.

The company initially deployed five Bluebird satellites in September 2024, operating on amateur frequencies at 430.5, 432.3, 434.1, 435.9, and 439.5 MHz with 50 kHz bandwidth. These were launched and operated without proper authorization. Each planned satellite would require TT&C channels with bandwidths between 64 and 256 kHz, creating a significant interference footprint across the entire 10 MHz amateur allocation.

The Open Research Institute, along with numerous international amateur radio organizations, filed strong opposition to AST SpaceMobile’s request. Our argument was both technical and philosophical:

Summarized from our filed comment, the technical objections included the following:

1) Multiple established commercial satellite bands exist (S-band: 2025-2110 MHz, X-band: 8025-8400 MHz, Ka-band: 27.5-30.0 GHz)

2) ITU studies specifically excluded the 430-440 MHz amateur allocation from commercial TT&C considerations

3) Modern satellite technology readily supports operations in higher frequency bands with better propagation characteristics. 430-440 MHz is not the best choice.

We raised the following philosophical and cultural concerns:

1) Amateur radio bands serve critical emergency communications when commercial infrastructure fails

2) These frequencies support STEM education and technological innovation. Where do you think many RF engineers get their start?

3) Commercial encroachment sets a dangerous precedent that could completely destroy the experimental character of amateur allocations

The FCC’s Decision: A Limited Victory

On August 29, 2025, the FCC issued a modified grant that significantly restricts AST SpaceMobile’s operations:

1) 24-hour limit. TT&C operations in the 430-440 MHz band are permitted only for periods not exceeding 24 hours.

2) Emergency only. Operations are restricted to Launch and Early Orbit Phase (LEOP) and emergency situations when no other band is available.

3) 20-satellite cap. Authorization covers only the next 20 satellites, including the FM1 prototype.

FM1 stands for “Flight Model 1” and is AST SpaceMobile’s first “Block 2” BlueBird satellite. It’s a much bigger, more powerful version of their current satellites. According to AST, it is about three times larger than their first-generation BlueBird satellites with 10 times the capacity. Launch dates have been delayed over the past year and the satellite might go up in early 2026. 

This represents a major step back from AST SpaceMobile’s original request for blanket commercial access across their entire constellation.

What Does This Mean for the Amateur Community?

The decision validates several key principles that we and many others have been patiently asserting to regulators. 

1) Amateur spectrum is different. The FCC acknowledged that amateur allocations can’t simply be treated as general-purpose commercial spectrum. The severe restrictions imposed recognize the unique character and public service value of amateur radio.

2) Technical alternatives exist. By limiting operations to emergencies “when no other band is available,” the FCC effectively endorsed our argument that commercial TT&C bands are technically viable for these operations.

3) Precedent matters. While it shouldn’t have to be repeatedly argued, precedent really does matter and vigilance is required in order to keep a solid regulatory foundation for amateur radio. Rather than opening the floodgates to commercial use of amateur spectrum, the FCC imposed strict limits that discourage similar requests from other operators.

Industry Response and Next Steps

AMSAT-DL described this as a “greater (partial) success” for amateur radio and AMSAT satellite operators. The 24-hour emergency-only restriction and 20-satellite cap should give AST SpaceMobile sufficient time to redesign their constellation for proper commercial frequency usage.

However, this isn’t a complete victory. AST SpaceMobile still has temporary access to amateur spectrum, and the company may seek to extend or modify these restrictions as their constellation develops.

Lessons for Open Source and Amateur Communities

This case illustrates several important principles for defending community resources. Documentation matters. Technical arguments backed by ITU studies, engineering analysis, and regulatory precedent carried significant weight in the FCC’s decision. Without this, things would have worked out very differently. 

Community coordination works. International amateur radio organizations presenting unified opposition demonstrated the global impact of spectrum decisions. 

Vigilance must continue. Protecting community resources, whether spectrum, software licenses, IP addresses, or technical standards, requires continuous engagement with regulatory and governance processes. 

The amateur radio community must remain vigilant as commercial space operations continue expanding. AST SpaceMobile’s modified authorization creates a framework for emergency use that other operators will definitely seek to exploit.

We encourage continued support for organizations like AMSAT-DL, ARRL, and the Open Research Institute that actively defend amateur spectrum rights.

You can participate in FCC comment periods on spectrum issues. Yes! You! Reach out to your local amateur radio organization and be part of the process. Support technical education that demonstrates amateur radio’s ongoing innovation. Engage with emergency communications activities that highlight amateur radio’s public service value.

The 430-440 MHz band remains primarily protected for amateur use, but this victory required sustained technical and legal advocacy. Our spectrum allocations, just like our open source projects, exist because communities actively defend and develop them. The technical part of a project is never the hardest part. The hardest part of any project is the people part. Negotiating, collaborating, compromising, defending, and communicating in a complex world are all “people work”. 

Technical Details and References

For those interested in the regulatory details, please refer to the following documents. 

FCC Proceeding 25-201 (application/licensing proceeding)

ICFS File Number SAT-MOD-20250612-00145 (FCC’s “case number” or filing reference for AST SpaceMobile’s request to modify their satellite authorization.)

Open Research Institute Comment to 25-201 (Filed July 21, 2025 by Michelle Thompson W5NYV)

FCC Decision (DA-24-756A1.pdf from August 29, 2025)

The full technical analysis includes frequency coordination studies, interference modeling, and alternative band analysis available through the FCC’s Electronic Comment Filing System.

ORI’s Tiny Payload Delivered to AmbaSat

The most recent mission update from AmbaSat is below.

Dear AmbaSat Launch Partner,

A big thank you to everyone who has already returned their AmbaSat-1 ChipSats – your support and timely action are helping to keep us on track for mission integration.

If you haven’t yet returned your ChipSat, we kindly ask that you do so as soon as possible to ensure its inclusion in the upcoming 3U CubeSat assembly. If there are any issues or delays, please don’t hesitate to contact us directly at support@ambasat.com — we’re happy to assist.

Mission Update

We’re pleased to share that the UK Civil Aviation Authority (CAA) engineering team recently visited AmbaSat HQ, where they carried out a detailed review of our processes, concept of operations, risk management strategy, and supporting documentation. Twice-monthly CAA meetings are ongoing.

Following the visit, we’ve developed a focused CubeSat action plan to:

Reduce the risk of fragmentation from stored energy

Strengthen documentation of our engineering methodology and V&V campaign

Document and minimise the chance of accidental debris release

Finalise details of the ground segment and operational architecture, including responsibilities and procedures

Document acceptable re-entry and collision risks

Produce additional virtual modelling of the combined AmbaSat, Flight Tray, and CubeSat assembly

In summary, we’re making solid progress towards both integration and licensing milestones, and we’ll continue to keep you updated as the mission advances.

Thank you once again for being part of this exciting step into Low Earth Orbit.

Ad Astra,

Martin & the AmbaSat Team

AmbaSat Ltd

Office: +44 (0)1609 600884

Email: martin@ambasat.com

Web: https://ambasat.com

Opulent Voice Progress Report: Issue #22

Paul Williamson, KB5MU

Here’s a summary of Dialogus debugging progress leading up to September 2, 2025 and resulting in improved code committed to Git at https://github.com/OpenResearchInstitute/dialogus/commit/741137a17cadbf68086392bce4805cf1d037a029 . The problems we were attempting to address, and some of the key observations made along the way, are captured in this Git issue: https://github.com/OpenResearchInstitute/dialogus/issues/22

Recall that the problems we were seeing had to do with the flow of encapsulated Opulent Voice frame data to be transmitted from the user interface program Interlocutor to the transmit output of the Pluto, where we can observe it with the Rigol RSA5065N spectrum analyzer in the Remote Lab. Let’s start with a rundown of the components of that flow, as implemented here in the lab.

Components of the Frame Data Flow for Transmit

Interlocutor (https://github.com/openresearchinstitute/interlocutor) is a Python program running on a Raspberry Pi 5. It accepts voice input from an attached microphone and/or text message input from a keyboard. Interlocutor understands the logical structure of an over-the-air Opulent Voice frame. It composes these frames to contain the data to be transmitted. Each frame contains 12 bytes of Opulent Voice frame header and 122 bytes of COBS-encoded payload data, for a total of 134 bytes.

Interlocutor encapsulates each packet into a UDP message with port number 57372 addressed to the IP address of the Pluto. The Linux networking stack routes this by IP address to the appropriate network interface port, eth1 in this case.
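The frame layout and encapsulation just described can be sketched in Python. This is a sketch, not Interlocutor's actual code: the helper names are mine, and the header and payload contents shown are placeholders rather than real COBS output.

```python
import socket

OVP_HEADER_LEN = 12     # 6-byte station ID + 3-byte auth tag + 3 bytes reserved
OVP_PAYLOAD_LEN = 122   # COBS-encoded payload data
OVP_FRAME_LEN = OVP_HEADER_LEN + OVP_PAYLOAD_LEN   # 134 bytes total
OVP_UDP_PORT = 57372

def build_frame(header: bytes, cobs_payload: bytes) -> bytes:
    """Compose one over-the-air Opulent Voice frame."""
    assert len(header) == OVP_HEADER_LEN
    assert len(cobs_payload) == OVP_PAYLOAD_LEN
    return header + cobs_payload

def send_frame(frame: bytes, pluto_ip: str) -> None:
    """Encapsulate the frame in a UDP datagram addressed to the Pluto."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(frame, (pluto_ip, OVP_UDP_PORT))
```

The Linux networking stack then routes the datagram by IP address to eth1, as described above.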

But eth1 is not physically an Ethernet port at all. The Pluto doesn’t have an Ethernet port. It does have a USB port, and one of the standard ways to connect a Pluto to a computer is to use Ethernet over USB. There are several such protocols, any of which can make the USB connection act like a network connection. The supported protocols on the Pluto are RNDIS, CDC-NCM, and CDC-ECM. Linux supports all three of these protocols, but Windows and macOS each support only one, and of course it’s not the same one. Because we are using more Macs than Windows computers here, we chose CDC-NCM, which is the one supported by macOS. This is configured in the Pluto as explained in https://wiki.analog.com/university/tools/pluto/users/customizing

So our encapsulated packets flow over this simulated Ethernet and arrive in the Pluto. Recall that the Pluto is based on a Xilinx device that contains an ARM computer core and an FPGA fabric (Zynq XD7Z010-1CLG225C), plus an Analog Devices AD9363 radio transceiver. The ARM core runs a build of Linux provided by Analog Devices at https://github.com/analogdevicesinc/plutosdr-fw , which we have lightly customized at https://github.com/openresearchinstitute/pluto_msk/firmware . The encapsulated packets arrive over the USB port (which is part of the Zynq) and are handled by the Linux network stack running on the ARM.

Dialogus (https://github.com/openresearchinstitute/dialogus) is a C program running under Linux on the Pluto’s ARM. Built with the command line flag -DOVP_FRAME_MODE, it listens for the encapsulated Opulent Voice frames arriving on UDP port 57372. Its job is to arrange for the frames to be transmitted in the specified way using the MSK modulator.

The MSK modulator is a part of Locutus (https://github.com/openresearchinstitute/pluto_msk), the FPGA-based implementation of an Opulent Voice modem. Overall, Locutus is a modification of the standard reference design for the Pluto FPGA, provided by Analog Devices within https://github.com/analogdevicesinc/hdl . The reference design is what ships installed with the Pluto to create the PlutoSDR product. PlutoSDR is what’s usually described as a Software Defined Radio (SDR) device. That is, it is a translator between I/Q samples and RF signals, capable of any type of radio transmission and/or reception within its limits of frequency coverage, bandwidth, and sample rate, but only in conjunction with a computer that can supply and/or interpret the stream of I/Q samples. The device itself doesn’t know anything about any modulation or waveform details. That’s up to the connected computer.

Since the goal was to implement the modem inside the FPGA, we added modem blocks inside the mostly-unchanged FPGA reference design. Instead of exchanging I/Q samples with the computer, the modified design exchanges data bits.

Inside the reference design, data flow between blocks is mostly done using a scheme called AXI-S (Advanced eXtensible Interface – Stream). This involves multiple handshaking signals between any two communicating blocks, arranged such that a small quantum of data is transferred when both sides are ready for the transfer, no sooner and no later. Each block has to be designed so that it can pause its operation seamlessly and await the readiness of the other block. This scheme allows for various blocks within the design to process at different speeds, even varying their processing rate in time, without any block falling behind.

Specifically, in the transmit pipeline, the first major block that handles incoming data from the computer is a specialized DMA controller, with access to the ARM’s memory on the computer side and an AXI-S interface on the other side. In the reference design, this DMA controller feeds into a series of scaling and filtering facilities, which in turn feed into the final processing stages before the DACs. We replaced the unneeded scaling and filtering blocks with the MSK modulator block. Data bits come in from the computer’s memory through the DMA controller and are delivered to the input of the Modulator block. The Modulator block outputs I/Q samples, which pass on into the rest of the reference design’s pipeline, eventually to be delivered to the DACs to create the analog signals.

The framework that controls how a computer talks to and controls PlutoSDR is called IIO, for Industrial Input/Output. It’s an extensive system meant to be general purpose for all sorts of devices that stream data to and/or from a computer, fast or slow. Besides the sample streams themselves, IIO provides a variety of control and status functions. With these, the computer can control the radio as well as the flow of samples. The computer side of IIO can be run on a local machine such as the ARM inside the Pluto, or over a USB interface, or over a network interface. So, PlutoSDR with IIO is *almost* exactly what we needed. We use IIO essentially as in the reference design, except that we abuse the IIO interface with the computer by using it to carry data bits instead of I/Q samples.

One other thing that will turn out to be important about IIO: because it’s designed to work with both fast hardware and relatively slow software, it handles samples in relatively large batches. The user software creating transmit samples fills up a buffer full of samples using IIO utility functions, and then “pushes” that buffer into the Linux kernel where the actual transfer takes place. The kernel manages a finite pool of buffers, four by default, though this can be adjusted. If a kernel buffer is available, the push operation is quick. If the kernel buffers are all in use, presumably three are full and waiting their turn and one is actively being transferred to the hardware. In that case, an attempt to push a new buffer will result in the user program blocking until a kernel buffer is freed. This creates a similar effect to the AXI-S handshaking: the user program is slowed down so it never gets too far ahead of the hardware processing.
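The blocking behavior of the buffer pool can be illustrated with a toy model. This is a sketch, not libiio: it assumes the default four kernel buffers, a hardware drain time of 40ms per buffer, and a user program issuing pushes back-to-back.

```python
N_BUFFERS = 4     # default kernel buffer pool size
DRAIN_MS = 40     # time for the hardware to consume one full buffer

def push_return_times(n_pushes: int) -> list:
    """Time (ms) at which each back-to-back push call returns.

    The first N_BUFFERS pushes find a free kernel buffer and return
    immediately; after that, push i must block until buffer i-4 has
    drained, which (with the hardware never starved) happens at
    (i - N_BUFFERS + 1) * DRAIN_MS.
    """
    times = []
    for i in range(n_pushes):
        if i < N_BUFFERS:
            times.append(0)
        else:
            times.append((i - N_BUFFERS + 1) * DRAIN_MS)
    return times
```

Here push_return_times(6) gives [0, 0, 0, 0, 40, 80]: the user program is throttled to the hardware's pace, much like the AXI-S handshaking described earlier.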

The 8X Problem

The problem initially detailed in Issue #22 referenced above was that the transmission observed on the spectrum analyzer was taking approximately eight times as long as it should have been taking. The shortest transmission is a single 40ms frame, but we also send a 40ms frame of preamble before the data frame, and follow it with 25 dummy frames (one second) of “hang time” in case another transmission comes along right away, and follow that with a 40ms frame of postamble. So that short transmission occupies 28 frames, which is 1120ms at 40ms per frame. The duration actually observed was estimated at 8500ms using a stopwatch. Here’s what it looked like on the waterfall on the spectrum analyzer.

Image: Short transmission extended by 8x
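As a quick check of the frame arithmetic above, with all counts taken straight from the text:

```python
FRAME_MS = 40

preamble_frames = 1
data_frames = 1        # shortest transmission: a single 40ms frame
hangtime_frames = 25   # one second of dummy frames
postamble_frames = 1

total_frames = preamble_frames + data_frames + hangtime_frames + postamble_frames
duration_ms = total_frames * FRAME_MS

assert total_frames == 28
assert duration_ms == 1120   # nominal duration of a short transmission
```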

The Zynq contains a hardware clock that’s useful for precise timestamping from software. I made use of this facility to timestamp each frame, and found most of them to be spaced 302ms to 303ms apart, instead of the nominal 40ms. The spacing was suspiciously consistent. What’s more, the first few frames were transferred more quickly. This was consistent with the spectrum analyzer waterfall display shown in the original issue #22 comment.

After some confusion and fiddling around, the cause became evident. The IIO buffer size in use had not been adjusted to match the size of the transfer. It was set to 1024 samples. Since the Opulent Voice system is intended to be used with realtime voice conversations, we don’t buffer up multiple frames. Instead, there’s an IIO transfer for each individual frame. Since we are sending the logical data from the UDP-encapsulated frame, that is only 134 bytes. 1024 / 134 is about 7.64, and 7.64 * 1120ms is about 8559ms, neatly accounting for the observed duration. The software was packing 134 bytes into each buffer, and then pushing the whole 1024-sample buffer into the kernel, and the Pluto had no way to know that only the first 134 bytes were significant.
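The padding arithmetic can be checked the same way (sizes from the text, with one data byte per sample slot):

```python
BUFFER_SAMPLES = 1024   # IIO buffer size actually in use, one data byte per sample
FRAME_BYTES = 134       # meaningful bytes packed per buffer
NOMINAL_MS = 1120       # correct duration of a short transmission

stretch_factor = BUFFER_SAMPLES / FRAME_BYTES   # about 7.64x padding per push
predicted_ms = stretch_factor * NOMINAL_MS      # about 8559 ms

# This matches the roughly 8500 ms stopwatch estimate.
assert round(stretch_factor, 1) == 7.6
assert round(predicted_ms) == 8559
```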

There are two ways to solve that. The simplest way is to change the buffer size to match the frame size, 134. With no other changes, that eliminated almost all of the excess delay. The other way is to leave the buffers alone, and instead replace the calls to iio_buffer_push() with calls to iio_buffer_push_partial() and pass a length of 134 to the latter function. We were suspicious of iio_buffer_push_partial(), because we remembered having difficulty with it in a previous encounter, so I tried both methods and compared the results. Both apparently worked the same. I decided to stick with changing the buffer size to 134.

Looking again at the waterfall display on the spectrum analyzer, we could see that the duration was very close to correct, even though there’s no way to get a precise time measurement from the waterfall display. But now we had precise timestamps in the code, and we could see from the timestamps that the duration was still a little bit long, 1214ms instead of 1120ms. That was an extra 94ms unaccounted for. I re-learned how to use the spectrum analyzer in zero-span mode, and was able to see that the 1214ms duration was real, and not an error in how the timestamps were handled.

Image: 1214ms transmission seen in zero-span mode

Putting the spectrum analyzer back into waterfall mode, we observed that the signal was not entirely clean during that 1214ms period. There appeared to be small gaps in the modulation. Something was still wrong.

Possibly Leaking Kernel Buffers

The timestamp trace (posted in a comment to Issue #22) showed that each call to iio_buffer_push() was taking about 39ms to return. That was nice and consistent, but it should not have been. With four kernel buffers allocated, at least the first three calls to iio_buffer_push() ought to return very quickly. But we weren’t seeing that, even after long idle periods.

A reboot of the Pluto cleared up this issue. We found that all 28 of the iio_buffer_push() calls in a short transmission were returning in under 2ms, as expected. The overall duration of the transmission had fallen to 1135ms, just 15ms longer than nominal.

This was still a little too long to attribute to measurement uncertainty. We concluded that there were probably brief underruns adding delays between frames, and attributed this to a timeline with no slack for timing uncertainty introduced by the USB Ethernet interface. We resolved to get control over the timeline and redesign it with wider windows, but in the meantime we moved on to longer transmissions. Voice transmissions.

Voice Transmissions Still Getting Stretched

We made voice transmissions for a count of 10 or 20 mississippis using the PTT button in the Interlocutor GUI. During the transmission, we observed the waterfall display on the spectrum analyzer. There appeared to be tiny gaps, medium sized gaps, and a few pretty large gaps in the modulation (never in the overall power). That’s not right.

We were also collecting timestamp information, of course. It showed that there were 469 encapsulated frames processed, which would add up to 18.76 seconds at 40ms each. At this point, we happened to be using iio_buffer_push_partial(), and the debug log showed 645 calls to iio_buffer_push_partial(). The 645 - 469 = 176 extra pushes could only have been frames inserted by the Dialogus code. It always inserts a preamble frame and a postamble frame, and the log showed that it inserted 174 dummy frames, so that adds up. Only the 25 dummy frames at the end of the transmission are expected, leaving 149 dummy frames that must have been inserted due to underruns. That is, when a 40ms boundary passes and no new encapsulated frame data is available, Dialogus sees this as a possible end of the transmission and starts to count off a one-second hang time filled with dummy frames. This event was also visible in the log, 61 times. That works out to about 2.5 dummy frames emitted per declared hang time. That’s too much to be due to narrow timing windows; each failure to hit a window by a millisecond or several would only generate a single dummy frame. There were still excess dummy frames being sent.
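The dummy-frame accounting from the log works out as follows (all counts from the text):

```python
total_pushes = 645          # calls to iio_buffer_push_partial()
encapsulated_frames = 469   # frames received from Interlocutor
logged_dummies = 174
preamble = postamble = 1

extra_pushes = total_pushes - encapsulated_frames
assert extra_pushes == 176
assert preamble + postamble + logged_dummies == extra_pushes

expected_hangtime_dummies = 25   # the one-second hang time at the end
underrun_dummies = logged_dummies - expected_hangtime_dummies
assert underrun_dummies == 149   # dummy frames attributable to underruns

hang_declarations = 61
dummies_per_hang = underrun_dummies / hang_declarations   # roughly 2.4
```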

I made a table of the function call durations for iio_buffer_push_partial(), which we were still using at that time. It tells a confusing story. A total of 18 calls were 7ms or less, indicating that the kernel buffer was freed around the same time the next buffer was pushed. A total of 501 calls were clustered around 40ms, indicating that all the kernel buffers were full around the time the next buffer was pushed. The remaining 126 calls had durations ranging all the way up to 1000ms, which caused an IIO timeout error. How is that possible, with only four kernel buffers that only take 40ms each to clear out? Seemingly at least one of these assumptions is wrong.

A Period of Confusion

We went back to trying to characterize the behavior during short transmissions, mainly because they seemed a lot closer to being correct and they were easier to experiment with. I captured a test with three short transmissions on the waterfall, with matching log files. The three transmissions contained exactly the same data, but they did not look alike on the waterfall. The visible gaps looked different.

We mused about the limitations of the waterfall display. It works by capturing some number of samples, doing an FFT on them, and drawing a row of pixels on the display, and that happens approximately 30 times per second. I don’t recall seeing any documentation on how many samples are used, but it’s probably much less than 100% duty cycle. Worse, 30 per second is barely faster than our frame rate of 25 per second, so we are unlikely to clearly see any details at the level of individual frames, much less fractions of a frame. A faster waterfall display would be useful to have.

For these three short transmissions, I measured the time between successive calls to iio_buffer_push(). Except for several well-understood special cases, they were all clustered tightly around 40ms, as expected.

Image: Three short transmissions on the waterfall


I also measured the duration of each iio_buffer_push() call. They were all 1ms or shorter. That’s good.

And the overall transmission session durations were 1132ms, 1134ms, and 1133ms, still a tiny bit long. Maybe that could be explained as overhead?

The only worrisome indication of a problem was the appearance of the waterfall, and we’re not sure how meaningful that really is for these short transmissions. But we hadn’t forgotten the voice transmissions, which were clearly terrible on the waterfall.

USB Ethernet Suspected

I had spent a lot of energy trying to blame the USB Ethernet connection between the Raspberry Pi and the Pluto, probably because it was an external interface that I didn’t fully understand and could not trace. I went so far as to shop for USB sniffer devices so we could see what was actually happening on the bus. They are expensive, and none of the devices I found advertised a feature for tracing USB Ethernet.

To answer this question, I made a special Dialogus build that didn’t process any of the encapsulated frames beyond noting their length and some bytes of data from each one. This removed any interaction with the FPGA from the test scenario. I ran a lengthy transmission through it. Every single encapsulated frame arrived like clockwork, within several milliseconds of 40ms. The USB Ethernet was working fine when isolated from IIO.

Doing the Arithmetic

During Open Research Institute’s regular Tuesday morning Zoom call for FPGA development projects and such, we had a chance to discuss this with the designer of Locutus, Matthew Wishek, NB0X (see https://www.openresearch.institute/2025/08/04/matthew-wishek-wins-2025-arrl-technical-innovation-award/ ). I had made an attempt to read the relevant VHDL code in the pluto_msk repository to clarify my understanding of how the Modulator interacted with the surrounding AXI-S interfaces. Matthew confirmed my shaky understanding that the Modulator would consume data bits at an absolutely constant rate, and that it was not capable of slowing down for an underrun or of exerting extra “backpressure” on a data source that was trying to go too fast. That constant rate was familiar to me from working on the numerology for the older 4FSK version of Opulent Voice: 54200 bits per second.

A bit of light began to dawn.

That number is based on a full implementation of the Opulent Voice waveform. It starts from our choice of one of the recommended bit rates for the Opus audio codec, which is the key to excellent voice quality. We long ago decided to allow the voice codec to use 16000 bits per second. We also chose one of the frame durations recommended by Opus, 40ms. 16000 * 0.040 = 640 bits = 80 bytes. Wrap it up in IP/UDP/RTP as is usually done for streaming Opus on networks, and you’re up to 80 + 12 + 8 + 20 = 120 bytes. Add 2 for packet framing using COBS, 122. Add 12 bytes of frame header consisting of 6 bytes of station ID, 3 bytes of authentication tag, and 3 bytes reserved for protocol use, and that’s where the 134 byte 40ms frame comes from.

But that’s not the end of the transmitting process. The header is Golay encoded for FEC, which doubles its size to 24 bytes. The COBS data (including IP/UDP/RTP/Opus) is convolutionally encoded for FEC, which doubles its size as well. Now we’re up to 268 bytes. We also prepend an uncoded frame synchronization word before each frame, so the receiving modem can unambiguously find the frame boundaries. The frame sync is 3 bytes long, so now it’s 271 bytes, 2168 bits, and THAT is what gets fed to the MSK Modulator.

2168 bits * 25 frames/second = 54200 bits per second.
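The full numerology can be verified end to end; every size below comes from the two paragraphs above.

```python
# Payload side: Opus at 16000 bit/s in 40 ms frames
opus_bytes = 16000 * 40 // 1000 // 8          # 640 bits = 80 bytes
rtp, udp, ip = 12, 8, 20                      # streaming-Opus encapsulation
cobs_overhead = 2                             # COBS packet framing
payload = opus_bytes + rtp + udp + ip + cobs_overhead
assert payload == 122

# Header side: station ID + authentication tag + reserved
header = 6 + 3 + 3
assert header + payload == 134                # the encapsulated 40 ms frame

# FEC and framing applied before the Modulator
golay_header = header * 2                     # Golay doubles the header to 24
fec_payload = payload * 2                     # convolutional code doubles the payload
frame_sync = 3                                # uncoded sync word
tx_bytes = frame_sync + golay_header + fec_payload
assert tx_bytes == 271

tx_bits = tx_bytes * 8
assert tx_bits == 2168
assert tx_bits * 25 == 54200                  # bits per second into the MSK Modulator
```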

We had been sending the raw data, 134 bytes per frame, to Locutus. That would be great if Locutus implemented the FEC codes and prepended the frame sync word. However, at the current state of development, Locutus is just a bare Modulator. It takes a stream of bits, which have to be at 54200 bits per second, and modulates them using MSK, and that’s all it does. The software doesn’t implement those features, either. We haven’t even reached a firm decision about where those features *should* be implemented, hardware or software. So it was never going to work like that, and we knew that from the start.

I got to work modifying the Dialogus code to send frames of the right length. I added calls in the frame building logic to encode the header and the payload for FEC, and wrote dummy routines that fake the FEC codes by simply including the contents twice. I changed the buffer size to 271, and added checks to make sure the built frames were 271 bytes long. This would at least come close to keeping the Modulator well-fed.
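Here is a sketch of those placeholder FEC routines, in Python rather than the C used in Dialogus. The helper names and the sync word value are mine, and real Golay and convolutional encoders will eventually replace the simple duplication.

```python
FRAME_SYNC = b'\x4f\x56\x31'  # placeholder 3-byte sync word, not the real value

def fake_golay_encode(header: bytes) -> bytes:
    """Stand-in for Golay FEC: include the 12-byte header twice (12 -> 24)."""
    return header + header

def fake_conv_encode(payload: bytes) -> bytes:
    """Stand-in for convolutional FEC: include the 122-byte payload twice (122 -> 244)."""
    return payload + payload

def build_tx_frame(header: bytes, payload: bytes) -> bytes:
    """Build the 271-byte frame that keeps the 54200 bit/s Modulator fed."""
    frame = FRAME_SYNC + fake_golay_encode(header) + fake_conv_encode(payload)
    assert len(frame) == 271
    return frame
```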

Close But No Cigar

Actually, when tested, it was worse. A lot worse. Nearly every call to iio_buffer_push() was taking a long time. I now understood this to mean that the Modulator was consuming data more slowly than we were trying to send it. Not only were the kernel buffers all full, but they were taking a lot more than 40ms each to empty out.

Nonetheless, I was baffled. I re-examined the code that filled up the buffers, for probably the seventeenth time. This is some of the oldest code in the program, having been taken originally from the Analog Devices example code for streaming samples to a PlutoSDR. It’s full of cryptic calls to IIO utility routines, but I knew what those routines did, and it was nothing very complicated in this case where we had only one I/Q channel open for transmit. Really they were just setting up start index, end index, and stride for a loop that would visit each sample in the buffer. One sample for every channel (total of one channel) would constitute an AXI-S transfer when it got into the FPGA, and that same increment was the unit for allocating buffer sizes. Each sample had room for 16 bits of I and 16 bits of Q. Four bytes. Of course, I knew I had to put one byte of frame data in there instead of I/Q samples, and that’s what the code was doing.

There was one weird thing about the code that I did not know the story behind. The code put that one byte of data in two places within the I/Q sample. It carefully copied the data byte into the high byte of the I and then copied it again into the low byte of the Q. There were lovely comments on each line describing accurately what each line of code was doing at the byte level. Whoever wrote that code must have known the reason for it. Maybe they figured that by filling in the first byte and the last byte, they’d have a good chance of having the byte in the right place. Clearly they were aware that only one data byte needed to go into the sample. Unless they were an LLM.

Funny story. I knew for sure that those lines of code had been changed from the Analog Devices reference code. I remembered that the transmit buffers in that reference code had been filled with all zeroes. Which is just about the stupidest thing you could put into the I/Q samples of a demonstration intended to show how it worked. It would generate no modulation for most normal modulation types, and no power output at all for amplitude-modulation methods. Which we learned the hard way, and wasted some time tracking down.

Anyway, despite several good reasons I should have known better, I assumed that code was probably right and went looking for other things to check.

What About the Register Initializations?

There are a bunch of reads and writes to registers in Locutus as part of the initialization code in the main() function of Dialogus. These were inherited from older programs that did their jobs successfully. Some had been modified as new registers were added, mostly to help with debugging. I undertook to read and check them all against the register definitions. I wasn’t really expecting to find anything.

Until I got to the TX_DATA_WIDTH register. Short description, “Modem Tx Input Data Width”. Long description, “Set the parallel data width of the parallel-to-serial converter”. Default value at reset: 8. I knew what this was. The Modulator block is designed to be flexible about its input format, to make it easier to connect to a variety of hosts. When an AXI-S transfer arrives at the Modulator, this register tells it how many bits of that transfer contain meaningful data bits for modulation. I knew it was 8. It had to be 8, because we needed to send an odd number of bytes in each transfer.

But it wasn’t set to 8 in the initialization code. It was set to 32. That meant the Modulator was trying to send out four bytes for every one byte that we intended. I changed it to 8.

I also wanted to know where in the 32-bit “sample” the 8-bit data was supposed to go. I ran a sequence of tests at TX_DATA_WIDTHs of 8, 16, and 24, checking for modulation with data in each of the four locations within the sample. It turns out the buffer-filling code was wrong in both of the positions where it placed the data byte. It should have been placed in the lower byte of I. This is now corrected and documented in the code.
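
The two packings can be illustrated with a small Python sketch, under the assumption (for illustration only) that I occupies the low 16 bits of the 32-bit sample and Q the high 16 bits; the function names are hypothetical, not the actual Dialogus code.

```python
def pack_sample_fixed(data_byte):
    """Corrected placement: the data byte goes in the low byte of I,
    with every other byte zero (assuming I = bits 15:0, Q = bits 31:16)."""
    return data_byte & 0xFF

def pack_sample_buggy(data_byte):
    """What the inherited code did: copy the byte into the high byte
    of I and copy it again into the low byte of Q."""
    b = data_byte & 0xFF
    return (b << 8) | (b << 16)
```

For example, a data byte of 0xAB packs to 0x000000AB in the fixed version, but to 0x00ABAB00 in the inherited version.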

Characterizing the Results

I made a long voice transmission with all the usual logging. The waterfall looked good, with no dropouts visible. The higher tone was much stronger than the lower tone, but that could be simply because the data wasn’t scrambled for whitening, nor actually encoded.

I extracted some statistics from the log file and did what felt like a systematic and thorough evaluation. Everything looked good and I got excited and committed the fixed code. I didn’t say so online, but I thought it was fixed. I did promise a detailed report “later on”, and that turned into this document.

On further study (as a result of writing this document) I can see that there are definitely still some things wrong, and clues about where to look next.

The elapsed time between calls to iio_buffer_push() was a mix of around 40ms (2736 occurrences) and around 0ms (662 occurrences). There were no outliers or long waits between push calls. I’m not sure I understand the mix of 40ms and 0ms elapsed times, but I suspect it can be explained by timing jitter around a too-narrow window. Some more debugging may be needed on this one.
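
A sketch of how such a distribution can be extracted, assuming the log has already been reduced to a list of push timestamps in milliseconds (a hypothetical format, not the actual Dialogus log):

```python
from collections import Counter

def bucket_push_intervals(timestamps_ms):
    """Count the gaps between consecutive iio_buffer_push() calls,
    rounded to the nearest 10 ms."""
    gaps = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    return Counter(round(g / 10) * 10 for g in gaps)
```

For example, pushes at 0, 40, 40, and 80 ms yield two gaps of about 40 ms and one of about 0 ms.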

I measured the duration of every iio_buffer_push() call. They were all quick. Most values were 0ms, none were longer than 2ms, and only a few reached 2ms in the run. This is what I’d hope to see.

I also looked at the axis_xfer_count register. This is one I haven’t mentioned before. It is supposed to measure how many AXI-S transfers have taken place. I logged it as a cumulative value and also as a differential between pushes. The difference between start of session and end of session was 458597 transfers, which is almost exactly 135 (not 134) transfers per push, but 169.4 transfers per frame as measured by the overall duration of the session, which was 2707 x 40ms. Neither of those numbers makes sense. It ought to be 271 per frame now. The distribution of the delta values was interesting, too.
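
The arithmetic behind those ratios, using the logged totals quoted above:

```python
transfers = 458597            # axis_xfer_count delta over the whole session
pushes = 2736 + 662           # iio_buffer_push() calls (40 ms and 0 ms gaps)
frames = 2707                 # session duration divided by 40 ms

per_push = transfers / pushes     # ~134.96, "almost exactly 135"
per_frame = transfers / frames    # ~169.4
expected_per_frame = 271          # what it ought to be now
```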

I logged starts and cancels of hang times. There were 664 hang times declared and canceled. I didn’t log dummy frames, but I captured periodic statistics reports and they reveal that 689 dummy frames were sent. Discarding the 25 dummy frames at the natural end of transmission, that’s 664 dummy frames, exactly one for each hang time declared. That’s what we would expect from short underruns. Looking for patterns in the hang time events, I see cases where an encapsulated frame is logged as arriving after a hang time is declared, but the hang time is not canceled. This suggests race conditions in the hang time checking.

Image: Delta of axis_xfer_count between buffer pushes

Conclusion

The debugging detailed here was (probably) good progress, but there’s more work to do before we can consider the problems solved.

The final test will of course be to receive the transmissions created this way, and check that the data error rate is appropriate at a variety of signal strengths. Getting the receiver to work that well may be the harder part of the problem, so it’s essential to keep looking at Dialogus until every weird behavior is understood or corrected.

IIO Timeline Management in Dialogus — Transmit

Paul Williamson, KB5MU 

The next step in debugging/optimizing the design of the transmit pipeline in Dialogus (https://github.com/OpenResearchInstitute/dialogus) is to get control of the timeline. Frames are coming in via the network from Interlocutor (https://github.com/OpenResearchInstitute/interlocutor), being processed by Dialogus, and going out toward the modulator via iio_buffer_push() calls that transfer data into Linux kernel buffers on the Pluto’s ARM. The kernel’s IIO driver then uses the special DMA controller in the Pluto’s FPGA reference design to turn this into a stream of 32-bit AXI-S transfers into the Modulator block. The Modulator block has hard realtime requirements and is not capable of waiting for an AXI-S transfer that is delayed. Dialogus’s job is to make sure the kernel never runs out of DMA data.

Goals

1. Minimize latency

2. Maximize robustness to timing errors

Assumptions

Data frames arrive UDP-encapsulated over the network interface from Interlocutor.

Data frames arrive without warning.

Data frames stop without any special indication.

Data frames may then resume at any time, without necessarily preserving the frame rhythm.

Data frames arrive in order within a transmission.

Requirements

A preamble frame must be transmitted whenever the transmitter turns on.

The preamble frame duration needs to be settable up to one entire 40ms frame.

There must be no gap between the preamble and the first frame of data.

A postamble frame must be transmitted whenever the transmitter turns off.

There must be no gap between the preamble and the postamble frame.

Dialogus may insert dummy frames when needed to prevent any gaps.

Dialogus must limit the number of consecutive dummy frames to a settable “hang time” duration.

Dialogus should not do any inspection of the frame contents.

Dialogus should not allow frame delays to accumulate. Latency should be bounded.

Dialogus is allowed and encouraged to combine transmissions whenever a transmission begins shortly after another transmission ends. That is, the new transmission begins within the hang time.

Derived Requirements

When not transmitting, Dialogus need not keep track of time.

When a first frame arrives, Dialogus should plan a timeline.

The timeline should be designed to create a wide time window during which a new encapsulated frame may correctly arrive.

The window should accommodate frame jitter in either direction from the base timing derived from the arrival time of the first frame.

The timeline will necessarily include a hard deadline for the arrival of a new encapsulated frame.

When no new frame has arrived by the hard deadline, Dialogus has no choice but to generate a dummy frame.

If an encapsulated frame arrives after the deadline but before the next window opens, Dialogus must assume that the frame was simply late in arriving.

Dialogus should adhere to the planned timeline through the end of the transmission, so that the receiver sees consistent frame timing throughout. The timeline should not drift or track incoming frame timing.

Observations

When the first frame arrives, Dialogus can only assume that the arrival time of that frame is representative of frame timing for the entire transmission. The window for future frame arrivals must include arrival times that are late or early as compared to exact 40ms spacing from the first frame, with enough margin to tolerate the maximum expected delay jitter.

The window does not track with varying arrival times, because that would imply that the output frame timing would track as well, and that’s not what the receiver is expecting. Once a timeline is established, the transmitter is stuck with that timeline until the transmission ends. After that, when the next transmission occurs, a completely new timeline will be established, and the preamble will be transmitted again to give the receiver time to re-acquire the signal and synchronize on the new first frame’s sync word.

When the first frame arrives and Dialogus is planning the timeline, the minimum possible latency is achieved by immediately starting the preamble transmission. The hardware must receive the first frame before the end of the preamble transmission, and every frame thereafter with deadlines spaced 40ms apart. That implies that the window must close slightly before that time, early enough that Dialogus has time to decide whether to send the data frame or a dummy frame and to send the chosen frame to the hardware. If the preamble duration is set to a short value, this may not be possible. In that case, Dialogus can either extend the preamble duration or delay the start of the preamble, or a combination of both, sufficient to establish an acceptable window. Essentially, this just places a lower limit on the duration of the preamble.

Let’s think about the current case, where the preamble duration is fixed at 40ms. We’ll ignore the duration of fast processing steps. From idle, a first frame arrives. Call that time T=0. We expect every future frame to arrive at time T = N * 40ms ± J ms of timing jitter. We immediately push a preamble frame and then push the data frame. The kernel now holds 80ms worth of bits (counting down), and Dialogus can’t do anything more until the next frame arrives. If the next frame arrives exactly on time, 40ms after the first frame, then the preamble buffer will have been emptied and DMA will be just starting on the first frame’s buffer. When we push the arrived frame, the kernel holds 80ms of bytes again, counting down. If the next frame arrives a little early, DMA will still be working on the preamble buffer, and the whole first frame’s buffer will still be sitting there, and if we push the arrived frame, it will be sitting there as well, for a total of more than 80ms worth of data. If the next frame arrives a little late, DMA will have finished emptying out and freeing the preamble’s buffer, and will have started on the first frame’s buffer. If we push the arrived frame, there will be less than 80ms worth of data in the kernel. All of these cases are fine; we have provided a new full buffer long before the previous buffer was emptied.
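
The buffer arithmetic in this scenario can be reduced to a toy model (illustrative only): "occupancy" here is milliseconds of queued data in the kernel, with DMA draining 1 ms of data per 1 ms of wall time.

```python
FRAME_MS = 40

def occupancy_after_push(arrival_ms, queued_at_t0_ms=80):
    """Milliseconds of data in the kernel just after pushing a frame that
    arrived at arrival_ms, given that the preamble plus first frame
    (80 ms total) were pushed at T = 0 and DMA has been draining since."""
    remaining = max(queued_at_t0_ms - arrival_ms, 0)
    return remaining + FRAME_MS
```

An on-time arrival at T = 40 ms restores 80 ms of queued data; an early arrival at T = 35 ms leaves 85 ms; a late arrival at T = 45 ms leaves 75 ms, matching the cases described above.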

What if the frame arrives much later? If it arrives 40ms late, or later, or even a little sooner, DMA will have finished emptying the previous frame’s buffer and will have nothing left to provide to the modulator before we can do anything about it. That’s bad. We need to have taken some action before this is allowed to occur.

Let’s say the frame arrives sooner than that. There are multiple cases to consider. One possibility is that the frame we were expecting was just delayed inside Interlocutor or while traversing the network between Interlocutor and Dialogus. In this case, we’d like to get that frame pushed if at all possible, and hope that subsequent frames aren’t delayed even more (exceeding our margin) or a lot less (possibly even arriving out of order, unbeknownst to Dialogus).

A second possibility is that the frame we were expecting was lost in transit of the network, and is never going to arrive, and the arrival is actually the frame after the one we were expecting, arriving a little early. In this case, we could push a dummy frame to take the place of the lost frame, and also push the newly arrived frame. That would get us back to a situation similar to the initial conditions, after we pushed a preamble and the first frame. That’d certainly be fine. We could also choose to push just the newly arrived frame, which might be better in some circumstances, but would leave us with less margin. Dialogus might need to inspect the frames to know which is best, and I said above that we wouldn’t be doing that.

A third possibility is that the frame we were expecting never existed, but a new transmission with its own timing was originated by Interlocutor, coincidentally very soon after the end of the transmission we were making. In that case, we need to conform Interlocutor’s new frame timing to our existing timeline and resume sending data frames. This may require inserting a dummy frame that’s not immediately necessary, just to re-establish sufficient timing margin. By combining transmissions in this way we are saving the cost of a postamble and new preamble, and saving the receiver the cost of re-acquiring our signal.

In real time, Dialogus can’t really distinguish these cases, so it has to have simple rules that work out well enough in all cases.

Analyzing a Simple Rule

The simplest rule is probably to make a complete decision once per frame, at Td = N * 40ms + 20ms, which is halfway between the nominal arrival time of the expected frame N and the nominal arrival time of the frame after that, N+1. If a frame arrives before Td, we assume it was the expected frame N and push it. If, on the other hand, no frame has arrived before Td, we push a dummy frame, which may end up being just a fill-in for a late or lost frame, or it may end up being the first dummy frame of a hang time. We don’t care which at this moment. If, on the gripping hand, more than one frame has arrived before Td, things have gotten messed up. The best we can do is probably to count the event as an error and push the most recently arrived frame. Note that we never push a frame immediately under this rule. We only ever push at decision time, Td. We never lose anything by waiting until Td, though. In all cases the kernel buffers are not going to be empty before we push.
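
A minimal sketch of that once-per-frame decision (hypothetical function and return convention, not the actual Dialogus code):

```python
def decide_at_td(arrived_frames):
    """Decision made once per frame at Td = N * 40ms + 20ms.
    arrived_frames: frames received since the last decision, oldest first.
    Returns (frame_to_push, is_error)."""
    if not arrived_frames:
        return "DUMMY", False          # late/lost frame, or start of hang time
    if len(arrived_frames) == 1:
        return arrived_frames[0], False
    # more than one arrival: count the event as an error, push the newest
    return arrived_frames[-1], True
```

Usage: `decide_at_td([])` yields a dummy frame, `decide_at_td(["f1"])` pushes the expected frame, and `decide_at_td(["f1", "f2"])` pushes the most recent frame and flags an error.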

It’s easy to see that this rule has us pushing some sort of frame every 40ms, about 20ms before it is needed to avoid underrunning the IIO stream. That would work as long as the jitter isn’t too bad, and coping with lots of frame arrival jitter would require extra mechanisms in the encapsulation protocol and would impose extra latency.

Pushing the frame 20ms before needed doesn’t sound very optimal. We need some margin to account for delays arising in servicing events under Linux, but probably not that much. With some attention to details, we could probably guarantee a 5ms response time. So, can we trim up to 15 milliseconds off that figure, and would it actually help?

We can trim the excess margin by reducing the duration of the preamble. Every millisecond removed from the preamble is a millisecond off the 15ms of unused margin. This would in fact translate to less latency as well, since we’d get that first data frame rolling through DMA that much sooner, and that locks in the latency for the duration of the transmission. This option would have to be evaluated against the needs of the receiver, which may need lots of preamble to meet system acquisition requirements.

We could also reduce the apparent excess margin by just choosing to move Td later in the frame. That effectively makes our tolerance to jitter asymmetrical. A later Td means we can better tolerate late frames, but our tolerance of early frames is reduced. Recall that “late” and “early” are defined relative to the arrival time of that very first frame. If that singular arrival time is likely to be biased with respect to other arrival times, that effectively biases all the arrival times. Perhaps first frames are likely to be later, because of extra overhead in trying to route the network packets to an uncached destination. Perhaps other effects might be bigger. We can’t really know the statistics of arrival time jitter without knowing the source of the jitter, so biasing the window seems ill-advised. Worse, biasing the window doesn’t reduce latency.

Conclusion

I’d argue that the simple rule described above is probably the best choice.

ORI Open Source Digital Radio at DEFCON

by KK6OOZ

At DEFCON in RF Village, we had a place to demonstrate work from ORI. We showed off open source synthetic aperture radar with coffee cans and GNU Radio and a PLUTO, and had space to show our “FT8 performance for keyboard chat” RFBitBanger QRP HF kit. We had room for the regulatory work for ITAR/EAR/219 MHz. And, very importantly, we had enough space to show major components of our UHF-and-up Opulent Voice communications system for amateur terrestrial and satellite use, fully up and running. At DEFCON, the human-radio interface and the modem ran as separate, fully functional demonstrations.

Today, these two components have been combined and are working end-to-end. It’s coughing and sputtering, but it’s a solid first light. This means that microphone/keyboard/data processing, from user input to waveforms over the air, is happening.

The design goal of the Opulent Voice project is to deliver very good voice quality in a modern way. AMBE and CODEC2 honestly sound terrible, and amateur radio deserves better audio quality. Therefore, we baseline Opus at 16 kbps. It sounds great. Want more? There’s a path to 32 kbps.

We were very tired of seeing a separate, broken packet mode for data in ham digital voice product after ham digital voice product. Opulent Voice has keyboard chat and data in a single prioritized stream. No separate clunky packet mode. No 1980s architecture. It just works. In your browser, or at a command line interface. Chat only, with transcriptions of all the audio received? With a microphone and audio in your ears, never having to look at a screen? Your choice.

There are transcriptions for received audio (if you want that – it’s configurable), and text to speech for text messages is literally the next issue to be addressed. Accessibility is designed in from the start.

For terrestrial use, we have a demonstration conference server running on ORI’s Linode instance. This was up and running for DEFCON. It’s an internet-only implementation of the repeater for terrestrial or space use, so that folks can see what the UI/UX looks like.

Everything is open source. Upcoming events? Opulent Voice will be presented to the ESA, AMSAT-DL, JAMSAT, and others in the next few months.

To everyone here that’s been supportive and wonderful – thank you so much.

RF Village has been instrumental and irreplaceable for this work to get a wider audience. This has been of enormous help. The benefits last year-round.

We’ve exhibited, presented, and contributed towards Ham Radio Village as well.

Here’s where we’ve been publishing and documenting the work. Under active development. Approach with the usual caution.

https://github.com/OpenResearchInstitute/interlocutor human-radio interface

https://github.com/OpenResearchInstitute/dialogus frames-to-modem application code for PLUTO SDR. Accepts Interlocutor connections and delivers them to the PLUTO Opulent Voice modem.

https://github.com/OpenResearchInstitute/pluto_msk Locutus, the award-winning modem firmware for the PLUTO SDR that makes the Opulent Voice waveforms.

Want to set up your own repeater for Opulent Voice?

https://github.com/OpenResearchInstitute/locus

Want to be more involved or informed about our work?

https://www.openresearch.institute/getting-started/

Opulent Voice in Space

Here’s Opulent Voice on a sounding rocket (RockSat-X project). Thank you to University of Puerto Rico for being such a great educational partner! Nice clean signal the entire time.

“Antenna Pattern”, colored pencil on paper, 2006

“Take This Job” for August 2025

Interested in open source software and hardware? Not sure how to get started? Here are some places to begin at Open Research Institute. If you would like to take on one of these tasks, please write hello@openresearch.institute and let us know which one. We will onboard you onto the team and get you started.

Opulent Voice:

Add a carrier sync lock detector in VHDL. After the receiver has successfully synchronized to the carrier, a signal needs to be presented to the application layer that indicates success. Work output is tested VHDL code. 

Bit Error Rate (BER) waterfall curves for Additive White Gaussian Noise (AWGN) channel.

Bit Error Rate (BER) waterfall curves for Doppler shift.

Bit Error Rate (BER) waterfall curves for other channels and impairments.

Review Proportional-Integral Gain design document and provide feedback for improvement. 

Generate and write a pull request to include a Numerically Controlled Oscillator (NCO) design document for the repository located at https://github.com/OpenResearchInstitute/nco. 

Generate and write a pull request to include a Pseudo Random Binary Sequence (PRBS) design document for the repository located at https://github.com/OpenResearchInstitute/prbs.

Generate and write a pull request to include a Minimum Shift Keying (MSK) Demodulator design document for the repository located at https://github.com/OpenResearchInstitute/msk_demodulator 

Generate and write a pull request to include a Minimum Shift Keying (MSK) Modulator design document for the repository located at https://github.com/OpenResearchInstitute/msk_modulator

Evaluate loop stability with unscrambled data sequences of zeros or ones.

Determine and implement Eb/N0/SNR/EVM measurement. Work product is tested VHDL code.

Review implementation of Tx I/Q outputs to support mirror image cancellation at RF. 

Haifuraiya:

HTML5 radio interface requirements, specifications, and prototype. This is the primary user interface for the satellite downlink, which is DVB-S2/X and contains all of the uplink Opulent Voice channel data. Using HTML5 allows any device with a browser and enough processor to provide a useful user interface. What should that interface look like? What functions should be prioritized and provided? A paper and/or slide presentation would be the work product of this project. 

Default digital downlink requirements and specifications. This specifies what is transmitted on the downlink when no user data is present. Think of this as a modern test pattern, to help operators set up their stations quickly and efficiently. The data might rotate through all the modulation and coding, transmitting a short loop of known data. This would allow operators to calibrate their receiver’s performance against the modulation and coding signal-to-noise ratio (SNR) slope. A paper and/or slide presentation would be the work product of this project.

Inner Circle Sphere of Activity

If you know of an event that would welcome ORI, please let your favorite board member know at our hello at openresearch dot institute email address. 

1 September 2025 – Our Complex Modulation Math article will be published in ARRL’s QEX magazine in the September/October issue. 

5 September 2025 – Charter for the current Technological Advisory Council of the US Federal Communications Commission concludes. 

19-21 September 2025 – ESA and AMSAT-DL workshop in Bochum, Germany.

3 October 2025 – Deadline for submission for FCC TAC membership.

10-12 October 2025 – See us at Pacificon, San Ramon Marriot, CA, USA.

11-12 October 2025 – Presentation (recorded) to the AMSAT-UK Symposium.

25-26 October 2025 – Open Source Cubesat Workshop, Athens, Greece.


Thank you to all who support our work! We certainly couldn’t do it without you. 

Anshul Makkar, Director ORI
Keith Wheeler, Secretary ORI
Steve Conklin, CFO ORI
Michelle Thompson, CEO ORI
Matthew Wishek, Director ORI

Defending Amateur Radio Spectrum: The AST SpaceMobile Battle Continues

Partial Victory in 430-440 MHz Band Defense

The amateur radio community has achieved a significant but limited victory in protecting the 430-440 MHz band from commercial satellite encroachment. AST SpaceMobile’s request for broad commercial use of amateur spectrum has been restricted to emergency-only operations for a maximum of 24 hours and only 20 satellites—but the fight isn’t over.

AST SpaceMobile (AST & Science LLC) operates a constellation of large commercial satellites designed to provide cellular service directly to mobile phones. Think of it as trying to turn satellites into massive cell towers in space. The problem? They wanted to use the 430-440 MHz amateur radio band for their Telemetry, Tracking, and Command (TT&C) operations across a planned 243-satellite constellation.

This isn’t just about frequency coordination—it’s about fundamental spectrum philosophy. The amateur bands exist for experimentation, emergency communications, and education. Commercial operations fundamentally change the character of these allocations, much like turning a public research laboratory into a private factory floor.

The Technical Challenge

AST SpaceMobile’s satellites are massive. These are some of the largest commercial satellites ever deployed, with solar arrays spanning over 700 square meters. These aren’t small CubeSats doing modest experiments. They are industrial-scale infrastructure requiring robust command and control systems.

The company initially deployed five Bluebird satellites in September 2024, operating on amateur frequencies at 430.5, 432.3, 434.1, 435.9, and 439.5 MHz with 50 kHz bandwidth. These were launched and operated without proper authorization. Each planned satellite would require TT&C channels with bandwidths between 64 and 256 kHz, creating a significant interference footprint across the entire 10 MHz amateur allocation.

The Open Research Institute, along with numerous international amateur radio organizations, filed strong opposition to AST SpaceMobile’s request. Our argument was both technical and philosophical:

Summarized from our filed comment, the technical objections included the following. 

1) Multiple established commercial satellite bands exist (S-band: 2025-2110 MHz, X-band: 8025-8400 MHz, Ka-band: 27.5-30.0 GHz)

2) ITU studies specifically excluded the 430-440 MHz amateur allocation from commercial TT&C considerations

3) Modern satellite technology readily supports operations in higher frequency bands with better propagation characteristics. 430-440 MHz is not the best choice.

We raised the following philosophical and cultural concerns.

1) Amateur radio bands serve critical emergency communications when commercial infrastructure fails

2) These frequencies support STEM education and technological innovation. Where do you think many RF engineers get their start?

3) Commercial encroachment sets a dangerous precedent that could completely destroy the experimental character of amateur allocations

The FCC’s Decision: A Limited Victory

On August 29, 2025, the FCC issued a modified grant that significantly restricts AST SpaceMobile’s operations:

1) 24-hour limit. TT&C operations in the 430-440 MHz band are permitted only for periods not exceeding 24 hours.

2) Emergency only. Operations are restricted to Launch and Early Orbit Phase (LEOP) and emergency situations when no other band is available.

3) 20-satellite cap. Authorization covers only the next 20 satellites, including the FM1 prototype.

FM1 stands for “Flight Model 1” and is AST SpaceMobile’s first “Block 2” BlueBird satellite. It’s a much bigger, more powerful version of their current satellites. According to AST, it is about three times larger than their first-generation BlueBird satellites with 10 times the capacity. Launch dates have been delayed over the past year and the satellite might go up in early 2026. 

This represents a major step back from AST SpaceMobile’s original request for blanket commercial access across their entire constellation.

What Does This Mean for the Amateur Community?

The decision validates several key principles that we and many others have been patiently asserting to regulators. 

1) Amateur spectrum is different. The FCC acknowledged that amateur allocations can’t simply be treated as general-purpose commercial spectrum. The severe restrictions imposed recognize the unique character and public service value of amateur radio.

2) Technical alternatives exist. By limiting operations to emergencies “when no other band is available,” the FCC effectively endorsed our argument that commercial TT&C bands are technically viable for these operations.

3) Precedent matters. While it shouldn’t have to be repeatedly argued, precedent really does matter and vigilance is required in order to keep a solid regulatory foundation for amateur radio. Rather than opening the floodgates to commercial use of amateur spectrum, the FCC imposed strict limits that discourage similar requests from other operators.

Industry Response and Next Steps

AMSAT-DL described this as a “greater (partial) success” for amateur radio and AMSAT satellite operators. The 24-hour emergency-only restriction and 20-satellite cap should give AST SpaceMobile sufficient time to redesign their constellation for proper commercial frequency usage.

However, this isn’t a complete victory. AST SpaceMobile still has temporary access to amateur spectrum, and the company may seek to extend or modify these restrictions as their constellation develops.

Lessons for Open Source and Amateur Communities

This case illustrates several important principles for defending community resources. Documentation matters. Technical arguments backed by ITU studies, engineering analysis, and regulatory precedent carried significant weight in the FCC’s decision. Without this, things would have worked out very differently. 

Community coordination works. International amateur radio organizations presenting unified opposition demonstrated the global impact of spectrum decisions. 

Vigilance must continue. Protecting community resources, whether spectrum, software licenses, IP addresses, or technical standards, requires continuous engagement with regulatory and governance processes. 

The amateur radio community must remain vigilant as commercial space operations continue expanding. AST SpaceMobile’s modified authorization creates a framework for emergency use that other operators may well seek to exploit.

We encourage continued support for organizations like AMSAT-DL, ARRL, and the Open Research Institute that actively defend amateur spectrum rights.

You can participate in FCC comment periods on spectrum issues. Yes! You! Reach out to your local amateur radio organization and be part of the process. Support technical education that demonstrates amateur radio’s ongoing innovation. Engage with emergency communications activities that highlight amateur radio’s public service value.

The 430-440 MHz band remains primarily protected for amateur use, but this victory required sustained technical and legal advocacy. Our spectrum allocations, just like our open source projects, exist because communities actively defend and develop them. The technical part of a project is never the hardest part. The hardest part of any project is the people part. Negotiating, collaborating, compromising, defending, and communicating in a complex world are all “people work”. 

Technical Details and References

For those interested in the regulatory details, please refer to the following documents.

FCC Proceeding 25-201 (application/licensing proceeding)

ICFS File Number SAT-MOD-20250612-00145 (FCC’s “case number” or filing reference for AST SpaceMobile’s request to modify their satellite authorization.)

Open Research Institute Comment to 25-201 (Filed July 21, 2025 by Michelle Thompson W5NYV)

FCC Decision (DA-24-756A1.pdf from August 29, 2025)

The full technical analysis includes frequency coordination studies, interference modeling, and alternative band analysis available through the FCC’s Electronic Comment Filing System.

Inner Circle Newsletter July 2025

The Who What When Where Why

Open Research Institute is a non-profit dedicated to open source digital radio work on the amateur bands. We do both technical and regulatory work. Our designs are intended for both space and terrestrial deployment. We’re all volunteer and we work to use and protect the amateur radio bands.

You can get involved in our work by visiting https://openresearch.institute/getting-started 

Membership is free. All work is published to the general public at no cost. Our work can be reviewed and designs downloaded at https://github.com/OpenResearchInstitute

We equally value ethical behavior and over-the-air demonstrations of innovative and relevant open source solutions. We offer remotely accessible lab benches for microwave band radio hardware and software development. We host meetups and events at least once a week. Members come from around the world.

Want more Inner Circle Newsletters? Sign up for email delivery at http://eepurl.com/h_hYzL 

The Microwave Link Mystery Puzzle (Solution Below)

Four amateur radio operators (Alice, Bob, Carol, and Dave) are setting up 10 GHz microwave links. Each has a different antenna polarization: Horizontal, Vertical, Right-Hand Circular (RHC), and Left-Hand Circular (LHC).

Polarization Loss Rules

Same polarization: 0 dB loss

Cross-polarized linear (H vs V): 20+ dB loss

Circular to linear: 3 dB loss (either direction)

Opposite circular (RHC vs LHC): 20+ dB loss
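The loss rules above can be captured in a few lines of Python, handy for checking your own reasoning as you work the clues. This is our own sketch; the value 20 stands in for the "20+ dB" cases.

```python
# Polarization loss lookup for the rules above (values in dB;
# 20 stands in for the "20+ dB" cases).
LINEAR = {"H", "V"}

def polarization_loss(a, b):
    if a == b:
        return 0      # same polarization
    if (a in LINEAR) != (b in LINEAR):
        return 3      # circular to linear, either direction
    return 20         # cross-polarized linear, or opposite circular
```

For example, polarization_loss("RHC", "V") returns 3, while polarization_loss("RHC", "LHC") returns 20.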

The Clues

1. Alice can communicate with Bob with perfect signal strength (0 dB loss).

2. Alice gets terrible reception from Carol (20+ dB loss).

3. Alice receives Dave’s signal at reduced power (3 dB loss).

4. Bob can barely hear Carol (20+ dB loss).

5. Bob gets a good but reduced signal from Dave (3 dB loss).

6. Carol receives Dave’s signal at reduced power (3 dB loss).

7. One operator forgot to rotate their new IC-905 dish from its factory vertical polarization setting.

8. Bob notices that a 10 degree rotation resulted in a lot of signal loss.

Who has which antenna polarization?

Solution:

Alice: Horizontal

Bob: Horizontal

Carol: Vertical (IC-905 factory setting – forgot to rotate!)

Dave: Right-Hand Circular (RHCP)

From clue 1, Alice and Bob have 0 dB loss, therefore they have identical polarization.

From clues 2 & 4, both Alice and Bob get 20+ dB loss from Carol, so Carol has the orthogonal polarization to Alice/Bob. This 20+ dB loss could happen in two scenarios:

1. Alice/Bob are one linear polarization (for example, Horizontal), Carol is the orthogonal linear (for example, Vertical).

2. Alice/Bob are one circular polarization (for example, RHCP), Carol is the opposite circular (for example, LHCP).

From clue 8, Bob has either vertical or horizontal polarization, because rotating his antenna caused noticeable loss. Rotating a circularly polarized antenna doesn’t cause much loss.

From clues 3, 5, & 6, Dave gets 3 dB loss from Alice, Bob, and Carol. A 3 dB loss occurs only between circular and linear polarizations, and we have established that Bob is linear, so Dave must be circular. In fact, the only way Dave can get 3 dB from all three is if Alice, Bob, and Carol are all linear and Dave is circular.

Then clue 7 (IC-905 vertical) helps us determine which linear polarizations they have.

From clue 7, someone has an IC-905 still at vertical polarization. That someone must be Carol: she is orthogonal to Alice and Bob, and since only one operator forgot to rotate, Alice and Bob can’t both be vertical.

Therefore: Carol = Vertical, and Alice/Bob = Horizontal.

Since we have Horizontal (Alice/Bob), Vertical (Carol), and one circular (Dave), Dave must be either RHCP or LHCP. We are going to say RHCP, but it is arbitrary. LHCP is correct too. 
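The deduction above can be double-checked by brute force. In this verification sketch (our own, not part of the puzzle), clue 7 is modeled as "exactly one operator is vertical" and clue 8 as "Bob is linear":

```python
from itertools import product

# Brute-force check of the puzzle: try every assignment of polarizations
# to the four operators and keep those consistent with all the clues.
LINEAR = {"H", "V"}

def loss(a, b):
    """Loss in dB between two polarizations; 20 stands in for 20+."""
    if a == b:
        return 0
    if (a in LINEAR) != (b in LINEAR):
        return 3
    return 20

POLS = ["H", "V", "RHC", "LHC"]
solutions = [
    (alice, bob, carol, dave)
    for alice, bob, carol, dave in product(POLS, repeat=4)
    if loss(alice, bob) == 0                           # clue 1
    and loss(alice, carol) == 20                       # clue 2
    and loss(alice, dave) == 3                         # clue 3
    and loss(bob, carol) == 20                         # clue 4
    and loss(bob, dave) == 3                           # clue 5
    and loss(carol, dave) == 3                         # clue 6
    and [alice, bob, carol, dave].count("V") == 1      # clue 7
    and bob in LINEAR                                  # clue 8
]
# Exactly two assignments survive, differing only in Dave's circular sense.
```

Both survivors have Alice and Bob horizontal and Carol vertical; Dave’s RHCP versus LHCP is exactly the arbitrary choice noted above.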

Amateur Radio Band Sudoku

“Take This Job”

30 July 2025

Interested in Open Source software and hardware? Not sure how to get started? Here are some places to begin at Open Research Institute. If you would like to take on one of these tasks, please write hello@openresearch.institute and let us know which one. We will onboard you onto the team and get you started.

Opulent Voice:

Add a carrier sync lock detector in VHDL. After the receiver has successfully synchronized to the carrier, a signal needs to be presented to the application layer that indicates success. Work output is tested VHDL code. 

Bit Error Rate (BER) waterfall curves for Additive White Gaussian Noise (AWGN) channel.

Bit Error Rate (BER) waterfall curves for Doppler shift.

Bit Error Rate (BER) waterfall curves for other channels and impairments.

Review Proportional-Integral Gain design document and provide feedback for improvement. 

Generate and write a pull request to include a Numerically Controlled Oscillator (NCO) design document for the repository located at https://github.com/OpenResearchInstitute/nco. 

Generate and write a pull request to include a Pseudo Random Binary Sequence (PRBS) design document for the repository located at https://github.com/OpenResearchInstitute/prbs.

Generate and write a pull request to include a Minimum Shift Keying (MSK) Demodulator design document for the repository located at https://github.com/OpenResearchInstitute/msk_demodulator 

Generate and write a pull request to include a Minimum Shift Keying (MSK) Modulator design document for the repository located at https://github.com/OpenResearchInstitute/msk_modulator

Evaluate loop stability with unscrambled data sequences of zeros or ones.

Determine and implement Eb/N0/SNR/EVM measurement. Work product is tested VHDL code.

Review implementation of Tx I/Q outputs to support mirror image cancellation at RF. 

Haifuraiya:

HTML5 radio interface requirements, specifications, and prototype. This is the primary user interface for the satellite downlink, which is DVB-S2/X and contains all of the uplink Opulent Voice channel data. Using HTML5 allows any device with a browser and enough processing power to provide a useful user interface. What should that interface look like? What functions should be prioritized and provided? A paper and/or slide presentation would be the work product of this project. 

Default digital downlink requirements and specifications. This specifies what is transmitted on the downlink when no user data is present. Think of this as a modern test pattern, to help operators set up their stations quickly and efficiently. The data might rotate through all the modulation and coding, transmitting a short loop of known data. This would allow an operator to calibrate receiver performance against the modulation and coding signal to noise ratio (SNR) slope. A paper and/or slide presentation would be the work product of this project.

ORI’s “Real and Complex Signal Basics” Article to be Published in QEX

The September/October 2025 issue of ARRL’s QEX magazine features “Real and Complex Signal Basics” by Michelle Thompson W5NYV. The article provides a step-by-step mathematical explanation of how complex modulation works in digital radio systems.

The piece starts with simple single-carrier real signals and builds up to explain quadrature amplitude modulation (QAM). Using clear mathematical derivations, it shows how two real signals can be transmitted simultaneously using sine and cosine carriers that are 90 degrees out of phase, then separated at the receiver using trigonometric identities and integration.

Subjects covered include:


How real signals create symmetrical frequency domain images

The transition from 4-level amplitude modulation to 16QAM using I and Q coordinates

The mathematical basis for quadrature mixing at both transmitter and receiver

Why complex modulation eliminates unwanted frequency images

How this approach enables higher data rates without requiring finer amplitude resolution

The article emphasizes the practical advantages of complex modulation. You get increased spectral efficiency, easier filtering due to single-sided transmission, and the flexibility to implement any modulation scheme through software-defined radio techniques.
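The separation mechanism the article derives can be demonstrated numerically in a few lines of pure Python. This is a toy sketch, independent of the article’s notation: two real values ride on cosine and sine carriers, and multiplying by each carrier and averaging over whole cycles (the discrete analog of the integration step) recovers them.

```python
import math

# Toy quadrature modulation/demodulation over one symbol period.
N = 1000        # samples in the symbol
CYCLES = 10     # a whole number of carrier cycles keeps the sums exact
I_tx, Q_tx = 0.7, -0.3

theta = [2 * math.pi * CYCLES * n / N for n in range(N)]
s = [I_tx * math.cos(x) - Q_tx * math.sin(x) for x in theta]  # transmitted signal

# Receiver: cross terms average to zero; each component survives scaled by 1/2.
I_rx = 2 / N * sum(si * math.cos(x) for si, x in zip(s, theta))
Q_rx = -2 / N * sum(si * math.sin(x) for si, x in zip(s, theta))
# I_rx and Q_rx match I_tx and Q_tx to within floating-point error.
```

Two independent real values were carried on one signal and cleanly separated, which is the heart of the QAM argument in the article.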

This mathematical foundation underlies much of ORI’s digital radio development work, including the Opulent Voice protocol and other broadband digital communications projects.


The full article is available to ARRL members through QEX magazine. Want to publish it in your club newsletter? The article is available on request from ARRL at qst@arrl.org.

Looking to Learn More about IQ Modulation?

Basics of IQ Signals and IQ modulation & demodulation – A tutorial by W2AEW

https://www.youtube.com/watch?v=h_7d-m1ehoY

Software Defined Radio For Engineers (free PDF from Analog Devices)

https://www.analog.com/en/resources/technical-books/software-defined-radio-for-engineers.html

These resources will get you well on your way!

ORI’s FCC Comment on Proceeding 25-201

Opposition to AST & Science LLC (AST SpaceMobile) Request for Amateur Radio Band Usage

July 21, 2025

Executive Summary

We respectfully submit this comment in strong opposition to AST & Science LLC’s (AST SpaceMobile) request to utilize the 430-440 MHz amateur radio band for Telemetry, Tracking, and Command (TT&C) operations for their planned 243-satellite constellation. We urge the Commission to deny this application and direct AST SpaceMobile toward established commercial satellite frequency allocations that are much more appropriate for their commercial operations.

Background and Technical Concerns

First, unauthorized operations are already underway. AST SpaceMobile currently operates five Bluebird commercial satellites, launched on September 12, 2024, using amateur radio frequencies at 430.5, 432.3, 434.1, 435.9, and 439.5 MHz with 50 kHz bandwidth for telemetry links. This existing operation has already demonstrated the potential for interference with legitimate amateur radio operations. 

The scope of the proposed expansion is a problem. AST SpaceMobile seeks to expand this usage to a 243-satellite constellation, with each TT&C beam supporting command and telemetry channels with bandwidths between 64-256 kHz. This massive expansion would fundamentally transform the character of the amateur radio band from experimental and emergency communications to commercial satellite operations.

Amateur Radio uses this band and is important. The 430-440 MHz band serves a variety of critical Amateur Radio applications including amateur space communications, weak-signal SSB, digital television, data communications, repeaters and other applications. The amateur radio service in this band supports:

Emergency Communications: Amateur radio operators provide vital public service during disasters when commercial communications infrastructure fails.

Space Communication: Educational and experimental satellite communications that advance the radio arts.

Technical Innovation: Experimentation and development of new communication technologies. Where do we think new engineers come from? Many of them come from amateur radio. 

International Coordination: The proposed constellation will cause interference to amateurs worldwide. This is opposed by a wide variety of international amateur radio organizations. 

Regulatory and Precedential Concerns

This is a very inappropriate band allocation. The 430-440 MHz band is allocated to the Amateur Radio Service, not commercial satellite operations. ITU study groups investigated potential TT&C frequency allocations in the frequency ranges 150.05–174 MHz and 400.15–420 MHz, specifically excluding the amateur allocation at 430-440 MHz. Permitting a commercial satellite constellation to operate in amateur radio spectrum sets a dangerous precedent that could lead to further commercial encroachment on bands reserved for experimental, educational, and emergency communications.

Frequency coordination frameworks exist. Satellite frequency coordination, particularly in these frequency bands, relies on a global regulatory and technical framework maintained by the International Telecommunication Union (ITU). AST SpaceMobile should utilize this established framework rather than seeking unauthorized access to amateur spectrum. The ITU study results are clear: sharing studies conducted across various bands concluded that no new allocations are suitable for small satellite TT&C on a co-channel sharing basis. Proper commercial allocations exist that would not interfere with amateur operations.

Proposed Alternative Solutions

We recommend the Commission direct AST SpaceMobile to utilize appropriate commercial satellite frequency allocations:

1. S-Band Operations: Migrate TT&C operations to established S-band satellite allocations (2025-2110 MHz and 2200-2290 MHz)

2. X-Band Implementation: Utilize X-band frequencies (8025-8400 MHz) which offer excellent propagation characteristics for satellite communications

3. Ka-Band Adoption: Consider Ka-band frequencies for high-capacity operations

4. Proper ITU Coordination: Work through established international coordination procedures for legitimate commercial satellite spectrum

Technical feasibility is not an issue. Modern satellite technology readily supports operations in these higher frequency bands. The primary frequency bands of S, X, and Ka are more advantageous than using the UHF band, which has a higher probability of local interference.

Economic and Public Interest Considerations

Protecting Public Service is important. Amateur radio operators provide critical emergency communications during disasters. Interference from commercial satellite operations could compromise this vital public service capability. The amateur radio service serves as a proving ground for new technologies and provides STEM education opportunities. Commercial encroachment limits these important societal benefits and harms our national competitiveness. 

Precedential impact is negative. Approving commercial use of amateur spectrum without compelling technical justification would invite similar requests from other commercial operators, potentially destroying the character of amateur radio allocations.

Conclusion and Recommendations: We respectfully urge the Commission to:

1. DENY AST SpaceMobile’s request to operate in the 430-440 MHz amateur radio band

2. DIRECT AST SpaceMobile to utilize appropriate commercial satellite frequency allocations in S, X, or Ka bands

3. REQUIRE proper ITU coordination for international satellite operations

4. REAFFIRM the Commission’s commitment to protecting amateur radio spectrum for its intended non-commercial, experimental, and emergency communications purposes

The amateur radio bands serve critical public interest functions that would be compromised by large-scale commercial satellite operations. Abundant alternative spectrum exists that is specifically allocated for commercial satellite TT&C operations. We urge the Commission to preserve the amateur radio bands for their intended purposes and direct AST SpaceMobile toward appropriate commercial spectrum.


References: FCC DA 25-532 (June 20, 2025), AMSAT-UK Technical Analysis, ITU Radio Regulations and Study Reports, and NASA Small Satellite Guidelines

ORI’s Contribution to FCC Technological Advisory Council


Open Research Institute contributed to the US Federal Communications Commission Technological Advisory Council final report for the 2024-2025 term. A summary of ORI’s final draft contribution to the report is presented here.

We describe how spectrum sharing models must evolve to meet growing demand, particularly focusing on terrestrial-satellite integration. The core thesis suggests we’re experiencing a crisis in current spectrum management that requires transitioning to a new “Era 4” model incorporating AI/ML automation and market-based mechanisms.


Historical Evolution of Spectrum Management

We identify three distinct eras of spectrum management.


Era 1 (1890-1912): Unregulated Model – A “loudest-served” system with no regulatory oversight, which collapsed following the Titanic disaster due to communication congestion.


Era 2 (1927-1981): Command-and-Control Model – Centralized FCC authority making static allocations based on “public interest.” This system struggled with emerging technologies like FM radio and cellular networks.


Era 3 (1993-present): Market-Based/Flexible Use Model – Introduced spectrum auctions and flexible licensing, but now showing signs of regulatory overload and crisis.


Evidence of Current Crisis


Several indicators suggest Era 3 regulatory models are failing.

219 MHz Band Limbo: Years of regulatory deadlock between amateur radio, commercial, and federal interests with zero amateur activity despite allocated rights


C-Band Aviation Disputes: $81 billion auction created interference concerns with radar altimeters, requiring presidential intervention


Inter-agency Conflicts: NTIA and FCC reaching opposite conclusions on identical technical evidence (Ligado case)


Reallocation Resistance: Broadcasting industry claiming all “low hanging fruit” has been picked from spectrum repacking


Technical Challenges in Terrestrial-Satellite Sharing


We highlight complex coordination requirements across multiple services in bands like 7.125-8.4 GHz, including Fixed Satellite Service, Mobile Satellite Service, and various terrestrial services. The SiriusXM situation exemplifies ongoing interference challenges between satellite and terrestrial broadband services.

AI/ML Enhanced Spectrum Management

The report positions AI/ML as essential for Era 4, comparing it to sophisticated air traffic control for the electromagnetic domain. Key capabilities include real-time spectrum sensing and occupancy analysis, dynamic allocation based on interference patterns, pattern recognition for optimization, and automated coordination at scale beyond human regulatory capacity.


However, the report recommends against mandating specific AI/ML technologies, favoring technology-neutral approaches.


Proposed Era 4 Solutions


Band Managers and Spectrum Bucks: Government exits direct allocation, appointing non-governmental band managers who negotiate usage using a “Spectrum Bucks” currency system. This would enable both commercial and non-commercial users to coexist through market mechanisms.

Amateur Radio Model: Highlighted as a successful example of dynamic spectrum sharing through self-regulation, technical excellence requirements, and community governance. Amateur satellites demonstrate effective secondary service operations and have pioneered numerous technologies later adopted commercially.


Regulatory Sandboxes: Supplemental Coverage from Space (SCS) is the first terrestrial-to-satellite spectrum leasing framework, creating economic incentives for cooperation rather than just technical coordination. This hybrid model enables spectrum reuse in the same geographic areas.


Key Recommendations


Improve Spectrum Sensing: Establish independent measurement networks through citizen science projects, public-private partnerships, and dedicated monitoring systems to provide transparent occupancy data.


Create More Regulatory Sandboxes: Use controlled environments to test new sharing models before broad deployment, building on SCS and amateur radio examples.


Optimize Satellite Uplink Sharing: Prioritize sharing arrangements for uplink services while providing separate allocations for downlink services, recognizing the different interference characteristics.


Develop HetNet Principles: Create coordination algorithms that leverage satellite orbital mechanics and optimize handoffs between terrestrial and non-terrestrial networks.


The report concludes that the complexity and scale of modern spectrum management demands a paradigm shift toward automated, AI/ML-enhanced systems that can handle what human regulators cannot, while maintaining proven principles from successful sharing models like amateur radio.

[We’ll share full versions of all the charter items when the final report is approved by the TAC. This should be in early August 2025. -Michelle Thompson ]

What is the ESA FutureGEO Project?

Here is ORI’s response to the call for participation from AMSAT-DL concerning the FutureGEO project, sponsored by the European Space Agency. We are looking forward to participating in the FutureGEO workshop in September 2025.

Matthew Wishek Wins 2025 ARRL Technical Innovation Award

We are thrilled to announce that Matthew Wishek NB0X has been awarded the prestigious 2025 ARRL Technical Innovation Award by the American Radio Relay League (ARRL) Board of Directors. This distinguished honor recognizes licensed radio amateurs who develop and apply new technical ideas or techniques that advance the state of amateur radio technology.

Matthew received this recognition for his innovative contributions to amateur radio digital communications, specifically his development of an open source minimum shift keying (MSK) implementation for software-defined radio.

Matthew’s primary achievement centers on his work with the pluto_msk project, a sophisticated MSK modem implementation designed for the Analog Devices ADALM-Pluto SDR platform. This project represents a significant advancement in efficient digital communications for amateur radio, particularly for the Opulent Voice (OPV) digital voice and data protocol.

MSK modulation is a type of continuous phase modulation that eliminates phase discontinuities, resulting in superior spectral efficiency compared to traditional FSK and other binary modulations. Matthew’s custom hardware description language (HDL) implementation, targeting the AMD Zynq 7010 system on chip, maximizes performance and resource utilization. The design includes multiple sophisticated components: numerically controlled oscillators (NCOs), proportional-integral (PI) controllers, power detectors, and exponential moving average filters.
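The phase-continuity property that gives MSK its spectral efficiency is easy to illustrate. This sketch is our own illustration, not code from the pluto_msk repository: each bit walks the carrier phase up or down by exactly 90 degrees in small per-sample steps, so the phase path never jumps between bits.

```python
import math

# MSK phase path: each bit contributes a +/- 90 degree phase ramp,
# spread smoothly over the bit's samples, so there are no phase jumps.
def msk_phase_path(bits, samples_per_bit=8):
    phase = 0.0
    path = []
    for bit in bits:
        step = (math.pi / 2) / samples_per_bit * (1 if bit else -1)
        for _ in range(samples_per_bit):
            phase += step
            path.append(phase)
    return path

path = msk_phase_path([1, 0, 1, 1])
# The largest sample-to-sample phase change is pi/16: no discontinuities,
# which is what keeps the transmitted spectrum compact.
```

Contrast this with abruptly switching between two free-running FSK oscillators, which can jump the phase at every bit boundary and spread energy into adjacent spectrum.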

Equally significant is Matthew’s pioneering modem module approach to HDL design. This architectural approach makes a large positive difference in how digital signal processing systems are designed and implemented in FPGAs. The modem module approach is the systematic creation of reusable, well-defined building blocks that can be combined for a variety of communication protocols. While many projects and people pay lip service to modularity, Matthew’s execution and leadership in this area have supercharged ORI’s work.


All components are freely available, fostering collaboration and continued innovation in the amateur radio community.

The ARRL Technical Innovation Award recognizes not just technical achievement, but also contributions that benefit the broader amateur radio community. Matthew’s work exemplifies both criteria.

Matthew’s innovations in modular HDL design and MSK implementation provide a solid foundation for future developments in amateur radio digital communications. His work demonstrates how modern software-defined radio platforms can be leveraged to implement sophisticated communication techniques that were previously the domain of commercial and military systems.

The amateur radio community benefits enormously from contributions like Matthew’s, which not only advance the technical state of the art but also provide practical, implementable solutions that enhance our communication capabilities.

Congratulations to Matthew Wishek on this well-deserved recognition of his outstanding technical contributions to amateur radio!

The American Radio Relay League (ARRL) is the national association for amateur radio, connecting hams around the U.S. with news, information and resources. The ARRL Technical Innovation Award is presented annually to recognize exceptional technical innovation that advances amateur radio.

Highlights from the New Interlocutor Installation and Operator Manual

A Human Radio Interface for Opulent Voice is ready for you to try out at https://github.com/OpenResearchInstitute/interlocutor

Overview

Interlocutor is the human-radio interface component of the Open Research Institute’s Opulent Voice digital communication system. Think of it as the “radio console” that transforms your computing device (such as a Raspberry Pi or a laptop) into a sophisticated digital voice and data terminal. While traditional amateur radio digital modes often sacrifice audio quality for bandwidth efficiency, Interlocutor enables very high-quality voice communications with seamless integration of keyboard chat, file transfer, and system control messages.

What Does Interlocutor Do?

Interlocutor provides high-quality digital voice communication using the high-bitrate open source Opus voice encoder. It enables keyboard chat that integrates seamlessly with voice, handles file transfer and system control messages, and offers both command-line and web-based interfaces. Interlocutor manages audio devices with sophisticated conflict resolution and implements priority-based message queuing (voice always wins).

It acts as the bridge between human operators and radio equipment. It processes voice, text, and data into properly formatted frames that can be sent to any Opulent Voice-compatible modem via Ethernet, enabling remote operation and modular system design.

On first run, you’ll be prompted to:

1. Select an audio input device: choose your microphone.

2. Test audio input: speak to verify the microphone works.

3. Select an audio output device: choose your speakers or headphones.

4. Test audio output: listen for the test tone.

Where to Send Your Frames?

After Interlocutor does all the work required to gather your voice and text input and create Opulent Voice Protocol frames, those frames are sent to a modem or other program that can make use of them. If the frames go to a modem, the modem turns them into a modulated signal that is sent out over the air. The currently implemented target modem is the PLUTO SDR from Analog Devices.

Or, the frames can go to a computer or conference server over the Internet. In other words, frames can be sent to another computer, a modem for radio transmission, a conference server (repeater) receiver, and more. If it has an IP address, and if it understands what to do with the frames, then you are ready to communicate.

The Basics of Running Interlocutor

See the online manual for detailed installation instructions.

# Launch with web interface

python3 interlocutor.py YOUR_CALLSIGN --web-interface

# Launch with a specific config file

python3 interlocutor.py YOUR_CALLSIGN -c myconfig.yaml --web-interface

You’ll need to configure network targets to tell Interlocutor where your Opulent Voice data frames need to be sent. Modify the network target configuration through the web interface in the Target IP Address box, or use the command-line argument -i <IP address> as needed.

The web interface is available at http://localhost:8000 on the device running interlocutor.py. 

Interlocutor features a modern glassmorphism-styled web interface for operation and complete system configuration. All configuration items are available at the command line or in the web interface.

The purpose of the Configuration System is to create, validate, and save one or more configuration files so that an operator can start the radio in a fully defined state. 

Operational Modes

First, let’s talk about the command line interface (CLI) mode. This offers traditional terminal-based operation with full keyboard chat capabilities. 

The simplest way to invoke this mode is by typing:

python3 interlocutor.py YOUR_CALLSIGN

In CLI mode, real-time voice transmission is done with a hardware PTT button. There is a keyboard chat interface. Voice has priority, with control messages second highest priority, and keyboard chat third. Debug output and system status are shown as text on the screen. 

Second, let’s explore the web interface mode. The web interface is a modern browser-based interface with visual controls. 

It is invoked by adding the --web-interface argument to program start.
python3 interlocutor.py YOUR_CALLSIGN --web-interface

The Configuration tab provides detailed configuration management, alongside live status indicators and real-time voice transmission with PTT control available in three different ways: a hardware switch on designated GPIOs, a clickable button in the web interface, or the space bar when the message entry box does not have focus. The web interface has keyboard chat and shows the system log, notifications for important events, debug output, and system status. Sent and received audio can be replayed from the message history window. 

Dual-Mode Operation

Both interfaces can run simultaneously, providing flexibility for different operational scenarios or preferences. There are instant updates between command line and web interfaces via WebSocket communication.

Protocol and Network Configuration

Interlocutor implements the Opulent Voice protocol with sophisticated frame management. Here are the frame types and priorities.

1. VOICE (Priority 1): Opus-encoded audio, immediate transmission

2. CONTROL (Priority 2): PTT state changes, high priority queue, A5 messages

3. TEXT (Priority 3): Keyboard chat, normal priority queue

4. DATA (Priority 4): File transfers, low priority queue
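The four-level ordering above maps naturally onto a heap-based queue. Here is a minimal sketch of priority-based queuing with first-in-first-out ordering within each class; the names are illustrative, not Interlocutor’s actual internals.

```python
import heapq
from itertools import count

# Priority constants matching the list above: lower number wins.
VOICE, CONTROL, TEXT, DATA = 1, 2, 3, 4

class FrameQueue:
    """Heap-based queue; a monotonic counter keeps FIFO order within a priority."""
    def __init__(self):
        self._heap = []
        self._tiebreak = count()

    def put(self, priority, frame):
        heapq.heappush(self._heap, (priority, next(self._tiebreak), frame))

    def get(self):
        return heapq.heappop(self._heap)[2]

q = FrameQueue()
q.put(TEXT, "chat: hello")
q.put(VOICE, "opus frame")
q.put(CONTROL, "PTT ON")
# get() now drains in priority order: voice first, then control, then text.
```

The counter tiebreak matters: without it, two frames at the same priority would be compared by payload, and arrival order within a class would not be guaranteed.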

External Network Ports:

57372: Network Transmitter port (configurable, connects to radio hardware, computer, or repeater). This is the only port you have to configure.

Internal Protocol Ports:

57373: Audio frames

57374: Text frames  

57375: Control frames

These ports, carried in the UDP header of each Opulent Voice frame, tell the receiver what kind of information it is receiving. The protocol is extensible: future data types get the next available port. 

All frames follow the Opulent Voice protocol format.
Opulent Voice Header: 12 bytes (sync word + station ID + type + sequence + length + reserved)

Payload: 122 bytes of data per 40 ms frame

Encoding: COBS (Consistent Overhead Byte Stuffing) framing

Transport: UDP over IP with RTP headers for audio, UDP over IP for control, text, and data
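COBS framing itself is a published algorithm and easy to sketch. The minimal encoder/decoder pair below omits the 254-byte block splitting that full COBS requires, which is safe here only because an Opulent Voice payload plus its headers stays well under that limit.

```python
def cobs_encode(data: bytes) -> bytes:
    """Minimal COBS encoder for frames shorter than 254 bytes."""
    assert len(data) < 254
    out = bytearray()
    idx = 0
    while idx <= len(data):
        zero = data.find(0, idx)      # next zero byte, or -1 if none
        if zero == -1:
            zero = len(data)
        block = data[idx:zero]
        out.append(len(block) + 1)    # code byte: distance to next zero
        out += block
        idx = zero + 1
    return bytes(out)

def cobs_decode(enc: bytes) -> bytes:
    """Inverse of the encoder above."""
    out = bytearray()
    i = 0
    while i < len(enc):
        code = enc[i]
        out += enc[i + 1:i + code]
        i += code
        if i < len(enc):
            out.append(0)             # every code byte but the last implies a zero
    return bytes(out)
```

The encoded stream contains no zero bytes, so a zero can serve as an unambiguous frame delimiter on the wire.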

Network Integration

# Basic operation (connects to default target with default values)

python3 interlocutor.py YOUR_CALLSIGN

# Specify target IP and port

python3 interlocutor.py YOUR_CALLSIGN --target-ip 192.168.1.100 --target-port 57372

# Load specific configuration file

python3 interlocutor.py YOUR_CALLSIGN -c mystation.yaml

Audio System Operation

Push-to-Talk (PTT) Operation:

GPIO Button: Physical button connected to Raspberry Pi GPIO

Web PTT: Additional click/touch controls in web interface. There’s a PTT button and the space bar activates PTT when the message entry box is not highlighted (does not have focus). 

Audio Processing Pipeline:

1. Microphone input to PyAudio capture

2. Audio validation

3. Opus encoding (40ms frames, 16,000 bps bitrate)

4. Hardware audio callback

5. RTP header addition

6. UDP header addition

7. IP header addition

8. COBS encoding

9. Opulent Voice header addition

10. Network transmission
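The Opus settings in step 3 fix the voice payload size, which makes for a quick sanity check on the 122-byte payload budget. The arithmetic below uses only figures stated in this article.

```python
BITRATE_BPS = 16_000   # Opus bitrate from step 3
FRAME_MS = 40          # frame duration from step 3

# Bits per frame, converted to bytes.
opus_bytes_per_frame = BITRATE_BPS * FRAME_MS // 1000 // 8   # 80 bytes
frames_per_second = 1000 // FRAME_MS                         # 25 frames/s
```

Each 40 ms frame carries 80 bytes of Opus audio. With standard 20-byte IPv4, 8-byte UDP, and 12-byte RTP headers, that suggests about 120 bytes of the 122-byte payload are accounted for.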

Chat Integration

Voice transmission has absolute priority. Text messages typed during PTT are buffered and transmit immediately when PTT releases. Control messages maintain high priority for system functions.

Chat Modes:

Voice + Chat: Normal operation with seamless integration. Operators can choose voice or text as appropriate. This is the default mode.

Chat Only Mode: Keyboard-to-keyboard communication (similar to RTTY). This mode is selected with a command line argument:
--chat-only

Automatic Reconnection System

Interlocutor implements intelligent reconnection logic for the web interface.

Reconnection Timing is as follows.

1. First retry: 1 second delay

2. Subsequent retries: Exponential backoff (1.5x increase)

3. Maximum delay: 30 seconds

4. Maximum attempts: 10 attempts

5. Total auto-retry time: 2-3 minutes

A manual retry button appears after auto-retry exhaustion. 
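The reconnection schedule above can be sketched as follows. Function and parameter names are illustrative, not Interlocutor's actual API.

```python
def retry_delays(first=1.0, factor=1.5, cap=30.0, attempts=10):
    """Yield the delay before each reconnection attempt:
    exponential backoff (1.5x) capped at 30 s, 10 attempts total."""
    delay = first
    for _ in range(attempts):
        yield delay
        delay = min(delay * factor, cap)
```

With these parameters the pure delay sums to roughly 105 seconds; per-attempt connection timeouts push the total auto-retry window into the documented 2-3 minute range.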

Documentation Resources

Project repository: https://github.com/OpenResearchInstitute/interlocutor

Open Research Institute: https://www.openresearch.institute/getting-started

GitHub Issues for bug reports

Code contributions welcome via GitHub pull requests and filing issues. Documentation improvements welcome and encouraged. Testing and feedback valuable for development. Hardware testing on different platforms welcome and encouraged! 

The system is actively developed open-source software, and features will evolve. Check the project repository for the latest updates and documentation. 

But Wait, There’s More

Opulent Voice Demonstration Conference Server Beta Test Now Open

This project is called Locus, and the repository link, with manual and installation instructions, can be found here:

https://github.com/OpenResearchInstitute/locus

Key components of a fully implemented conference server (Opulent Voice repeater) are outlined below.

1. FDMA Uplink Channels Received at Spacecraft or Terrestrial Repeater

Multiple receivers monitoring different frequency slots simultaneously, each capable of demodulating and decoding the Opulent Voice protocol.

2. Conference Management Hardware and Software

This manages how stations can connect with other stations by maintaining lists of rooms or conferences. Conferences are logical groupings of stations.

3. DVB-S2 Downlink Multiplexer

This component takes all the conference data and creates a single high-bandwidth downlink stream.

Software modifications to Interlocutor, the human-radio interface for Opulent Voice stations, have been made in order to realize a simple repeater system. The Interlocutor repository can be found here:

https://github.com/OpenResearchInstitute/interlocutor

By targeting the IP address of opulent.openresearch.institute, anyone running Interlocutor can participate on ORI’s demonstration conference repeater. This repeater is internet-only at the moment, but will have RF hardware in the next phase of work.

To configure Interlocutor for the conference server, here is an example invocation.

python3 interlocutor.py QUARTER --web-interface  -i 172.236.237.16

This sets up a station with ID QUARTER, running a web interface at 127.0.0.1:8000, and sends frames to opulent.openresearch.institute.

The next phase of work is to set up named conference rooms so that people can join based on event, subject matter, scheduled meetups, and other searchable categories. There will be a Conferences tab in the Interlocutor web interface that will populate whenever conference metadata is received.

Key Features Unlocked in the Next Phase

Multiple simultaneous conversations are possible. Instead of traditional “one-at-a-time” repeater operation, users will have multiple conferences running simultaneously. A local weather net, an emergency coordinator channel, and a tech Q&A room can all be active at once. Each voice transmission and chat message appears in the message history window of Interlocutor, filtered by the receiving station based on the conference from which it originated.

Operators will browse active conferences, see participant counts and last activity, and freely monitor any conference.

Conference participation is straightforward. Operators join a conference by transmitting in one. At that point, the station ID appears in the conference membership list. Until a station transmits in a conference, its ID does not appear in the list.

Conference lists are kept clean. Over time, a station ID that has joined a conference expires and is dropped from the list of stations in the conference after a configurable period of inactivity. The default in Locus is one hour. Conferences themselves, without active transmissions, can expire after some amount of time as well. This “time out” process emphasizes active conferences, reducing the number of “empty rooms” an operator would have to search through to find someone to communicate with.
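The expiry behavior amounts to periodically pruning a last-heard table. The sketch below is illustrative only; Locus's actual data structures and names may differ.

```python
import time

INACTIVITY_TIMEOUT_S = 3600  # Locus default: one hour

def prune_members(last_heard, now=None, timeout=INACTIVITY_TIMEOUT_S):
    """Drop station IDs that have been silent longer than the timeout.
    last_heard maps station ID -> timestamp of most recent transmission."""
    now = time.time() if now is None else now
    return {sid: t for sid, t in last_heard.items() if now - t <= timeout}
```

The same check, applied to a conference's own most recent transmission, can expire whole conferences as well.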

Technical Implementation Strategy

Within Interlocutor, current conference server operation is essentially the same as if it were connecting directly to another station. If the target IP address is a conference server, then voice and text from all transmitting stations appear in the message history. The Locus demonstration station at ORI simply forwards the frames it receives; it has essentially one conference. This will change in the next phase. Current work is to continuously improve performance and user experience with the basic functions. Additional conference functions will follow shortly.

For this next phase, Interlocutor will have a third Conference Tab in the web interface. This will allow conference discovery and enable full participation. The banner on the Communications Tab will change to show when a server is connected, as opposed to a single directly connected station. This will be unobtrusive but clear.

We need to implement the frequency division multiple access (FDMA) receiver that manages multiple frequency slots on the uplink, and build the DVB-S2 Multiplexer for the downlink aggregation. Hardware for our open source HEO/GEO satellite project Haifuraiya has been demonstrated in the past, and these designs will be used for this project as well.

Discoverability Improves Engagement

In order to take advantage of the conference functions, individual stations use the Interlocutor program in essentially the same way as they would in a point-to-point contact. A fully implemented conference server will broadcast messages to be detected by the Interlocutor receiver. These broadcasts list the conferences and their participants. The web interface then populates with the available conferences and exposes controls for starting a new conference, joining an existing conference, or monitoring any conference. Operators join conferences by participating with a transmission. The repeater handles the complexity. The broadcast is sent at the lowest priority level (data), so as not to interfere with voice, text, or control frames.

The concept of conferences is central to the repeater experience. People playing a D&D game over the air would be in a game conference. That conference might be called “The Elven Empire”. People coming and going during drive time might join “I-15 Rollout”. An individual calling another individual for a one-on-one QSO would be in a conference too, which might be called “Ken’s Coffee Break” or “Everyone Welcome”.

Conferences become the fundamental organizing principle. This shifts amateur radio from “everyone talks to everyone on one frequency” to “purposeful gatherings with context and discoverability.”

Traditional repeaters are like having one conference room where everyone shows up, takes turns to talk, and if you miss a transmission then that is just too bad. You missed it.

A conference-based system creates purposeful spaces. The D&D group doesn’t interfere with the emergency coordinators, and both can run simultaneously. If you missed a message, you can scroll up in the message history window. You can replay past audio transmissions from the UI Audio Bubbles or read the past chat message history. Both are in the message history window. Empty conferences time out and are removed, so that it doesn’t look like a ghost town or give a false sense of activity. 

Instead of “I hope someone interesting is on the repeater,” users can browse conferences in the past, present, and future. A list of past conferences, out to some point in the past, can be maintained to show the frequency and type of activity. Any conference can be monitored. Current conferences are joined by transmitting in the conference. Future conferences can be listed in a schedule. This is like the difference between wandering into a random tavern hoping for good company versus checking a bulletin board that says “Adventuring Party Seeking Ranger Meet by the Old Oak” or “Storytelling by the Fire in the Great Hall.” This type of radio experience has natural social dynamics.

An operator can make a casual discovery and meet new friends.

“Oh, there’s a Python programming discussion with 4 people – that sounds interesting!”

Operators can make and advertise planned events.

“Tech Book Club session every Tuesday at 7 PM – join us!”

Emergency coordination can be greatly improved. Operators can quickly and easily create dedicated emergency conferences during disasters. With authentication and authorization functions, these conferences can be limited to people that have, for example, completed specific training. A system like this can provide realistic emergency training opportunities that do not interfere with normal operation. Mentoring conferences can be set up to provide friendly and welcoming alternatives for education and learning.

The conference server manages conference data. A goal is to keep this to the minimal required state. The conference server doesn’t decide who can listen to what, unless there is an authentication and authorization policy in place from the repeater owner.

Use Cases

Opulent Voice naturally scales up. Here are some anticipated use cases. This is not intended to be a comprehensive list.

Simple Point-to-Point: Two or more people communicating point-to-point create the equivalent of a single conference. Whether the station is operating point-to-point or in a conference at a repeater, the program behaves almost identically. The only difference on the Communications tab is the heading color and title.

Local Repeater: Multiple conferences on one repeater system that is running a conference server.

Regional Network: Conferences can span multiple repeaters.

Satellite Integration: HEO satellites carry conference multiplexes to different continents.

Internet Bridging: Conferences can include remote participants via internet gateways.

The conference concept transforms amateur radio from “broadcast to everyone in range” to “purposeful communities of interest.” Conference discovery/joining is a different mental mode than active communication. Therefore, it gets a separate tab in the Interlocutor web interface.

Three-Tab Architecture Defined

Tab 1: Communications “I’m having a conversation”
We keep the current design
We use current message history and audio playback
Active conference indicator at the top: “Currently in: Morning Coffee Chat (4 people)”
Quick leave/switch button – minimal, non-intrusive
Focus stays on the actual conversations

Tab 2: Conferences “I’m choosing who to talk to”
New design to be implemented in the very near future
Conference browser and discovery
Create new conferences
Join/schedule conferences
Personal conference history and favorites
Block lists

Tab 3: Configuration “I’m setting up my radio”
We keep the current design
Configuration Tab moves from position 2 to position 3
New conference configuration items
Radio and system settings

Operator Mental Models

This creates a clear progression in the operator’s mind:

1) “I want to join a conversation”: check conference server status.
2) If connected, browse and join conferences.
3) If no conferences appeal, “I can still talk directly to someone.”

The central idea behind the Conferences tab is the transition experience: a user browses conferences, finds “Python Coding Discussion (3 people)”, clicks join, and is smoothly transitioned back to the familiar communications interface, but now talking with Python enthusiasts instead of whoever happened to be on the repeater.

Conference Servers feel like choosing your conversation rather than hoping for a good one.

Expect conference functionality to increase at the internet demonstration station over the next few months. 

Open Research Institute Sphere of Activity

August, September, October 2025

5 August 2025 – Final Technological Advisory Council meeting at the US Federal Communications Commission (FCC) in Washington, DC. The current charter concludes 5 September 2025. 

7-10 August 2025 – DEFCON 33 in Las Vegas, Nevada, USA. ORI plans an Open Source Digital Radio exhibit in RF Village, which is hosted by Radio Frequency Hackers Sanctuary. 

10 August 2025 – Submission deadline for Open Source Cubesat Workshop, to be held 25–26 October 2025. Location is Serafio of the Municipality of Athens, Greece.

1 September 2025 – Our Complex Modulation Math article will be published in ARRL’s QEX magazine in the September/October issue.

5 September 2025 – Charter for the current Technological Advisory Council of the US Federal Communications Commission concludes. 

19-21 September 2025 – ESA and AMSAT-DL workshop in Bochum, Germany.

25-26 October 2025 – Open Source Cubesat Workshop, Athens, Greece.


Thank you to all who support our work! We certainly couldn’t do it without you. 

Anshul Makkar, Director ORI

Keith Wheeler, Secretary ORI

Steve Conklin, CFO ORI

Michelle Thompson, CEO ORI

Matthew Wishek, Director ORI

ORI’s Contribution to FCC Technological Advisory Council


Open Research Institute contributed to the US Federal Communications Commission Technological Advisory Council final report for the 2024-2025 term. A high-level summary of ORI’s final draft contribution to the report is presented here.

We describe how spectrum sharing models must evolve to meet growing demand, particularly focusing on terrestrial-satellite integration. The core thesis suggests we’re experiencing a crisis in current spectrum management that requires transitioning to a new “Era 4” model incorporating AI/ML automation and market-based mechanisms.


Historical Evolution of Spectrum Management

We identify three distinct eras of spectrum management.


Era 1 (1890-1912): Unregulated Model – A “loudest-served” system with no regulatory oversight, which collapsed following the Titanic disaster due to communication congestion.


Era 2 (1927-1981): Command-and-Control Model – Centralized FCC authority making static allocations based on “public interest.” This system struggled with emerging technologies like FM radio and cellular networks.


Era 3 (1993-present): Market-Based/Flexible Use Model – Introduced spectrum auctions and flexible licensing, but now showing signs of regulatory overload and crisis.


Evidence of Current Crisis


Several indicators suggest Era 3 regulatory models are failing.

219 MHz Band Limbo: Years of regulatory deadlock between amateur radio, commercial, and federal interests with zero amateur activity despite allocated rights


C-Band Aviation Disputes: $81 billion auction created interference concerns with radar altimeters, requiring presidential intervention


Inter-agency Conflicts: NTIA and FCC reaching opposite conclusions on identical technical evidence (Ligado case)


Reallocation Resistance: Broadcasting industry claiming all “low hanging fruit” has been picked from spectrum repacking


Technical Challenges in Terrestrial-Satellite Sharing


We highlight complex coordination requirements across multiple services in bands like 7.125-8.4 GHz, including Fixed Satellite Service, Mobile Satellite Service, and various terrestrial services. The SiriusXM situation exemplifies ongoing interference challenges between satellite and terrestrial broadband services.

AI/ML Enhanced Spectrum Management

The report positions AI/ML as essential for Era 4, comparing it to sophisticated air traffic control for the electromagnetic domain. Key capabilities include real-time spectrum sensing and occupancy analysis, dynamic allocation based on interference patterns, pattern recognition for optimization, and automated coordination at scale beyond human regulatory capacity.


However, the report recommends against mandating specific AI/ML technologies, favoring technology-neutral approaches.


Proposed Era 4 Solutions


Band Managers and Spectrum Bucks: Government exits direct allocation, appointing non-governmental band managers who negotiate usage using a “Spectrum Bucks” currency system. This would enable both commercial and non-commercial users to coexist through market mechanisms.

Amateur Radio Model: Highlighted as a successful example of dynamic spectrum sharing through self-regulation, technical excellence requirements, and community governance. Amateur satellites demonstrate effective secondary service operations and have pioneered numerous technologies later adopted commercially.


Regulatory Sandboxes: Supplemental Coverage from Space (SCS) is the first terrestrial-to-satellite spectrum leasing framework, creating economic incentives for cooperation rather than just technical coordination. This hybrid model enables spectrum reuse in the same geographic areas.


Key Recommendations


Improve Spectrum Sensing: Establish independent measurement networks through citizen science projects, public-private partnerships, and dedicated monitoring systems to provide transparent occupancy data.


Create More Regulatory Sandboxes: Use controlled environments to test new sharing models before broad deployment, building on SCS and amateur radio examples.


Optimize Satellite Uplink Sharing: Prioritize sharing arrangements for uplink services while providing separate allocations for downlink services, recognizing the different interference characteristics.


Develop HetNet Principles: Create coordination algorithms that leverage satellite orbital mechanics and optimize handoffs between terrestrial and non-terrestrial networks.


The report concludes that the complexity and scale of modern spectrum management demands a paradigm shift toward automated, AI/ML-enhanced systems that can handle what human regulators cannot, while maintaining proven principles from successful sharing models like amateur radio.

[We’ll share full versions of all the charter items when the final report is approved by the TAC. This should be in early August 2025. -Michelle Thompson ]

ORI’s FCC Comment on Proceeding 25-201

Opposition to AST & Science LLC (AST SpaceMobile) Request for Amateur Radio Band Usage

July 21, 2025

Executive Summary

We respectfully submit this comment in strong opposition to AST & Science LLC’s (AST SpaceMobile) request to utilize the 430-440 MHz amateur radio band for Telemetry, Tracking, and Command (TT&C) operations for their planned 243-satellite constellation. We urge the Commission to deny this application and direct AST SpaceMobile toward established commercial satellite frequency allocations that are much more appropriate for their commercial operations.

Background and Technical Concerns

First, unauthorized operations are already underway. AST SpaceMobile currently operates five Bluebird commercial satellites launched on September 12, 2024, using amateur radio frequencies at 430.5, 432.3, 434.1, 435.9, and 439.5 MHz with 50 kHz bandwidth for telemetry links. This existing operation has already demonstrated the potential for interference with legitimate amateur radio operations.

The scope of the proposed expansion is a problem. AST SpaceMobile seeks to expand this usage to a 243-satellite constellation, with each TT&C beam supporting command and telemetry channels with bandwidths between 64-256 kHz. This massive expansion would fundamentally transform the character of the amateur radio band from experimental and emergency communications to commercial satellite operations.

Amateur Radio uses this band, and that use is important. The 430-440 MHz band serves a variety of critical Amateur Radio applications, including amateur space communications, weak-signal SSB, digital television, data communications, and repeaters. The amateur radio service in this band supports:

Emergency Communications: Amateur radio operators provide vital public service during disasters when commercial communications infrastructure fails.

Space Communication: Educational and experimental satellite communications that advance the radio arts.

Technical Innovation: Experimentation and development of new communication technologies. Where do we think new engineers come from? Many of them come from amateur radio. 

International Coordination: The proposed constellation will cause interference to amateurs world-wide. This is opposed by a wide variety of international amateur radio organizations. 

Regulatory and Precedential Concerns

This is a very inappropriate band allocation. The 430-440 MHz band is allocated to the Amateur Radio Service, not commercial satellite operations. ITU study groups investigated potential TT&C frequency allocations in the frequency ranges 150.05–174 MHz and 400.15–420 MHz, specifically excluding the amateur allocation at 430-440 MHz. Permitting a commercial satellite constellation to operate in amateur radio spectrum sets a dangerous precedent that could lead to further commercial encroachment on bands reserved for experimental, educational, and emergency communications.

Frequency coordination frameworks exist. Satellite frequency coordination, particularly in these frequency bands, relies on a global regulatory and technical framework maintained by the International Telecommunication Union (ITU). AST SpaceMobile should utilize this established framework rather than seeking unauthorized access to amateur spectrum. ITU study results are clear: sharing studies conducted in various bands concluded that no new allocations are suitable for small satellite TT&C on a co-channel sharing basis. Proper commercial allocations exist that would not interfere with amateur operations.

Proposed Alternative Solutions

We recommend the Commission direct AST SpaceMobile to utilize appropriate commercial satellite frequency allocations:

1. S-Band Operations: Migrate TT&C operations to established S-band satellite allocations (2025-2110 MHz and 2200-2290 MHz)

2. X-Band Implementation: Utilize X-band frequencies (8025-8400 MHz) which offer excellent propagation characteristics for satellite communications

3. Ka-Band Adoption: Consider Ka-band frequencies for high-capacity operations

4. Proper ITU Coordination: Work through established international coordination procedures for legitimate commercial satellite spectrum

Technical feasibility is not an issue. Modern satellite technology readily supports operations in these higher frequency bands. The primary frequency bands of S, X, and Ka are more advantageous than using the UHF band, which has a higher probability of local interference.

Economic and Public Interest Considerations

Protecting Public Service is important. Amateur radio operators provide critical emergency communications during disasters. Interference from commercial satellite operations could compromise this vital public service capability. The amateur radio service serves as a proving ground for new technologies and provides STEM education opportunities. Commercial encroachment limits these important societal benefits and harms our national competitiveness. 

Precedential impact is negative. Approving commercial use of amateur spectrum without compelling technical justification would invite similar requests from other commercial operators, potentially destroying the character of amateur radio allocations.

Conclusion and Recommendations: We respectfully urge the Commission to:

1. DENY AST SpaceMobile’s request to operate in the 430-440 MHz amateur radio band

2. DIRECT AST SpaceMobile to utilize appropriate commercial satellite frequency allocations in S, X, or Ka bands

3. REQUIRE proper ITU coordination for international satellite operations

4. REAFFIRM the Commission’s commitment to protecting amateur radio spectrum for its intended non-commercial, experimental, and emergency communications purposes

The amateur radio bands serve critical public interest functions that would be compromised by large-scale commercial satellite operations. Abundant alternative spectrum exists that is specifically allocated for commercial satellite TT&C operations. We urge the Commission to preserve the amateur radio bands for their intended purposes and direct AST SpaceMobile toward appropriate commercial spectrum.


References: FCC DA 25-532 (June 20, 2025), AMSAT-UK Technical Analysis, ITU Radio Regulations and Study Reports, and NASA Small Satellite Guidelines

Are Multi-Carrier Data Modes Illegal in Amateur Radio?

Mike McGinty W2FBI, organizer of the Amateur Cellular project (https://xcl.is) has a concern about Ham LTE. 

He writes, 

“The more I read part 97, the more I convinced myself that actually, Ham LTE and indeed all ham multi-carrier data is illegal. Further, all identification over almost all digital modes is illegal.

The important part is under Authorized Emissions. 

See https://xcl.is/news/standardized_ham_radio_lte_v1_rc/

Does part 97 need rephrasing?”

The Key Issues with Emission Designators in Part 97

Mike McGinty raises a legitimate question about the interpretation of Part 97 rules regarding emission types and digital codes, particularly for multi-carrier data modes like LTE. The core of the issue appears to be in two parts of 47 CFR Part 97:

  1. Definition of Emission Types (§97.3(c)): This section defines authorized emission types for amateur radio, including RTTY and data emissions with specific designators where only “a digital code of a type specifically authorized in this part may be transmitted.” (https://www.ecfr.gov/current/title-47/chapter-I/subchapter-D/part-97)
  2. RTTY and Data Emission Codes (§97.309): This section lists both specified digital codes that are authorized, and conditions under which unspecified digital codes may be used depending on frequency band and other requirements. (https://www.law.cornell.edu/cfr/text/47/97.309)

The potential contradiction arises because §97.307(f)(8) authorizes “A RTTY or data emission having designators with A, B, C, D, E, F, G, H, J or R as the first symbol; 1, 2, 7, 9 or X as the second symbol; and D or W as the third symbol” (https://www.law.cornell.edu/cfr/text/47/97.307)

 while the definition in §97.3 appears to be more restrictive for some emission types.

The LTE Question

LTE (Long-Term Evolution) uses OFDMA (Orthogonal Frequency-Division Multiple Access), which is a multi-carrier modulation technique. The emission designator for such transmissions could be something like G7W or D7W based on emission designator standards. There is ambiguity about whether this precisely fits within the authorized emission types in Part 97.

Possible Interpretations

There are several ways to interpret these rules:

  1. Strict Interpretation: Only specifically listed emission designators are allowed, making some modern digital modes (including amateur LTE implementations) potentially unauthorized.
  2. Liberal Interpretation: The rule allowing “unspecified digital codes” (§97.309(b)) encompasses modern digital modes as long as they meet bandwidth and other technical requirements. Under this reading, a code includes the way it is emitted, and there is no real problem.
  3. Documentation-Based Interpretation: As discussion about this question widened, Paul Williamson KB5MU suggested that there may be an “established understanding with FCC” that emission modes can be unspecified in the same way that digital codes can be unspecified, as long as the full details of the mode are published.

Is There any Existing FCC Guidance or Documentation?

After Paul suggested that there might be a letter or document that addresses this, I made attempts to search and asked around. While I was unable to find specific FCC guidance documents or formal letters directly addressing this question for amateur radio LTE or similar modern multi-carrier digital modes, I can say that there is precedent for the FCC taking a more permissive approach to new digital modes:

  1. In 2016, the FCC proposed rule changes to remove limitations on the symbol rate for amateur radio data transmissions while maintaining bandwidth limitations (https://www.federalregister.gov/documents/2016/08/12/2016-19085/amateur-radio-service-rules-to-permit-greater-flexibility-in-data-communications). I assert that this shows a general trend toward technical flexibility.
  2. The provision in §97.309(a)(4) indicates that “An amateur station transmitting a RTTY or data emission using a digital code specified in this paragraph may use any technique whose technical characteristics have been documented publicly” (https://www.law.cornell.edu/cfr/text/47/97.309) which strongly suggests openness to innovation within specified categories and is a rule widely used by experimenters. There are no reports of restrictions, repercussions, or negative outcomes when amateur operators have relied upon public documentation for digital codes not otherwise specified. 

This raises the question: Can we assume that “An amateur station transmitting a RTTY or data emission may use any emissions technique whose technical characteristics have been documented publicly”?

Interpretation

The interpretation of amateur radio service rules regarding multi-carrier modes like LTE falls into a gray area that hasn’t been explicitly addressed by the FCC. The concern that Mike raises is therefore valid – there is tension between the specific emission designator definitions and the broader allowances for unspecified digital codes.

Paul’s suggestion that “there is an established understanding with FCC that emission modes can be unspecified in the same way that digital codes can be unspecified, as long as the full details of the mode are published” aligns with the spirit of amateur radio’s experimental nature, but I couldn’t find formal documentation supporting this understanding.

Given the FCC’s trend toward embracing technical innovation while maintaining good spectrum practices, this is indeed potentially a “bear best left unpoked” as Paul suggested, unless the amateur radio community seeks formal clarification through a petition for declaratory ruling from the FCC.

We could take the opportunity to suggest clarifying language that would explicitly encompass modern multi-carrier digital modes while maintaining appropriate technical standards and interference protection.

Option 1: Adding Specific Commercial LTE Emission Designators

Commercial LTE typically uses emission designators like G7W, D7W, or W7D, depending on the specific implementation. Simply adding these to the list of authorized designators would be a straightforward approach, but it has some serious limitations:

  1. It would address only the current implementation of LTE without future-proofing for evolving technologies.
  2. It might not address other multi-carrier modes beyond LTE.
  3. It would require updating the rules again for each new technology.
  4. It doesn’t really fit into the Delete, Delete, Delete 25-133 proceeding because it’s adding and not taking away. 

Option 2: Comprehensive Modernization Approach

A more forward-looking approach would be to amend 47 CFR Part 97 with language that explicitly embraces modern digital techniques while maintaining appropriate technical standards. 

Draft:

I. INTRODUCTION

1. In this Notice of Proposed Rulemaking (or comment, or reply-to), we share proposed amendments to Part 97 of the Commission’s Rules governing the Amateur Radio Service. These proposed rules clarify and update the authorized emission types for digital communications. These proposed changes aim to explicitly accommodate modern multi-carrier digital modes, such as those based on orthogonal frequency-division multiplexing (OFDM) technologies, while maintaining appropriate technical standards and interference protection.

II. BACKGROUND

2. The Commission’s rules in Part 97 define various emission types authorized for amateur radio operators in § 97.3(c) and specify standards for RTTY and data emissions in § 97.309. These rules were established before many modern digital communication techniques were developed and in some cases do not clearly accommodate innovative technologies now commonly used in other radio services, including highly popular and efficient multi-carrier modulation methods.

3. Multi-carrier modulation techniques, such as those used in LTE (Long-Term Evolution) and other OFDM-based systems, offer significant advantages for efficient spectrum use and robustness against interference. Amateur radio operators have adapted these technologies for experimentation and improved communications, but questions have arisen about whether such modes are fully compliant with existing Part 97 emission designator requirements.

III. DISCUSSION

4. We propose to amend Part 97 to clarify that modern multi-carrier digital modes are authorized for amateur radio communications, provided they comply with other technical parameters specified for the relevant frequency bands. For example, a multi-carrier mode that exceeds the entire allocated bandwidth of an amateur band would obviously not be an authorized transmission. Specifically, we propose the following changes:

A. Proposed Rule Changes

5. We propose to amend § 97.3(c)(2) as follows:

(2) Data. Telemetry, telecommand and computer communications emissions having

   (i) designators with A, B, C, D, E, F, G, H, J, R or W as the first symbol, 1, 2, 7, 9 or X as the second symbol, and D or W as the third symbol;

   (ii) emission J2D; and

   (iii) emissions A1C, F1C, F2C, J2C, and J3C having an occupied bandwidth of 500 Hz or less when transmitted on an amateur service frequency below 30 MHz.
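
As a rough illustration only (not part of any filing), the symbol sets in the proposed § 97.3(c)(2)(i) text above can be expressed as a simple membership check over the three-symbol ITU emission designator core. The function and variable names here are our own, not drawn from any FCC rule:

```python
# Hypothetical sketch: check a three-symbol emission designator core
# against the symbol sets in the proposed Section 97.3(c)(2)(i) text above.
# Names are illustrative; this is not FCC rule text or an official tool.

FIRST = set("ABCDEFGHJRW")   # modulation of the main carrier (proposal adds W)
SECOND = set("1279X")        # nature of the modulating signal
THIRD = set("DW")            # type of information transmitted

def matches_proposed_data_rule(designator: str) -> bool:
    """Return True if the 3-symbol designator core fits the proposed sets."""
    if len(designator) != 3:
        return False
    first, second, third = designator.upper()
    return first in FIRST and second in SECOND and third in THIRD

# W7D (a multi-carrier designator mentioned earlier) would pass under the
# proposal; J3E (SSB voice) would not, since voice is covered elsewhere.
print(matches_proposed_data_rule("W7D"))  # True
print(matches_proposed_data_rule("J3E"))  # False
```

Note that a full emission designator also carries a bandwidth prefix; only the three-symbol core is checked here.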

6. We further propose to amend § 97.309 by adding a new paragraph (c) as follows:

(c) An amateur station may transmit any digital emission using a multi-carrier modulation technique, including but not limited to orthogonal frequency-division multiplexing (OFDM), provided that:

   (1) The emission complies with all bandwidth limitations specified in § 97.307 for the frequency band in use;

   (2) The technical characteristics of the emission have been documented publicly;

B. Effect of Proposed Changes

7. These proposed changes would explicitly authorize the use of multi-carrier digital modes like those based on OFDM technology, including amateur adaptations of LTE and similar systems. Adding “W” as an authorized first symbol in emission designators would encompass cases where the carrier is modulated using combinations of amplitude and angle modulation or where multiple carriers are employed. The additional provisions in § 97.309(c) would ensure that such operations maintain the experimental and open nature of amateur radio while preventing misuse.

8. We believe these changes would eliminate ambiguity while promoting technical innovation in the Amateur Radio Service. The requirement for public documentation of technical characteristics would maintain the educational and experimental purposes of the service.

IV. CONCLUSION

9. The proposed rule changes would modernize Part 97 to explicitly accommodate contemporary digital communication technologies while preserving the fundamental purposes and technical disciplines of the Amateur Radio Service.

Option 3: Harmonization with International Approach

Another approach would be to harmonize with the ITU Radio Regulations’ approach to emission classification, which might provide more flexibility. This would involve referring directly to the ITU emission classification system rather than listing specific designators, focusing on bandwidth limitations and interference protection rather than specific emission types, and (like above) establishing that any emission type that meets technical parameters is permitted unless specifically prohibited.

Recommendation

Option 2 provides the most comprehensive solution as it explicitly addresses multi-carrier modes including OFDM-based systems like LTE, adds the emission designator first symbol “W” which is used for combined or multi-carrier modulation, and creates a framework that can accommodate future innovations without requiring constant rule updates. We want to maintain the core principles of amateur radio regarding openness and documentation while also preserving technical standards and doing all we can to prevent interference. Option 2 is the one most likely to resolve Mike’s concern that “all ham multi-carrier data is illegal”. 

Comments and critique welcome and encouraged!

-Michelle Thompson, for Mike McGinty, with Paul Williamson

Connecting the Dots: Protecting and Improving Amateur Radio Access to 219-220 MHz

Introduction

The 219-220 MHz band represents both a challenge and an opportunity for the amateur radio community. Recent events highlighted in the Convair/220 Amateur Radio Club’s newsletter “Contrails” and the DR0NK Radio Network’s report demonstrate how vigilance is essential for protecting our spectrum rights. While our ongoing work at Open Research Institute (ORI) focuses on removing unnecessary regulatory barriers to make this valuable spectrum more accessible, it took boots on the ground to defend the 220 band from unexpected and harmful interference.

The Twin Challenges: Interference and Regulatory Burden

The amateur radio community currently faces two related challenges in the 220 MHz vicinity. First, external threats. As documented by the DR0NK Radio Network (https://k1ct.freeshell.org/c220.html), experimental licenses granted without proper consideration of existing amateur services have caused significant interference to operations in the 222-225 MHz band (see the April 2025 newsletter – we have a copy of it here). This experience serves as a warning about potential threats to adjacent frequencies. In addition, regulatory barriers still exist. Our work at ORI has identified that the 219-220 MHz band is hampered by outdated notification requirements that create unnecessary administrative burdens without providing meaningful interference protection.

Our Regulatory Approach

ORI has been actively participating in the FCC regulatory process with proposals that would streamline operations while maintaining necessary technical safeguards. We have spoken up about removing the outdated and burdensome notification and permissions requirements for amateur radio stations operating on 219 MHz. See https://www.openresearch.institute/2025/04/01/fcc-filing-for-219-mhz-rules-changes-from-ori/

Our most recent comments go further than this, and support removing regulations requiring interference studies to protect television channels 10 and 13 (this is also supported by AAR and WISPA) along with reiterating the elimination of all of the outdated notification requirements in Part 80 and Part 97.

REPLY COMMENTS OF OPEN RESEARCH INSTITUTE, INC

Open Research Institute, Inc replies in support of certain comments filed by the Association of American Railroads (AAR) [1], WISPA [2], and Steven K. Stroh [3]. Specifically, we support section II of the AAR comments requesting the removal of regulations requiring interference studies to protect television stations using Channel 10 and Channel 13. We also support WISPA’s comments regarding Part 80 of the Commission’s regulations requesting the same removal of interference studies. These regulations are outdated and based on a coast station use of AMTS that was never widely deployed. Furthermore, we support Steven K. Stroh’s recommendation to delete 97.307(f)(13), which restricts amateur station emissions in 219-220 MHz (hereafter “219”) from using more than 100 kHz of bandwidth. We take no position on other comments made by the AAR, WISPA, or Steven K. Stroh.

We believe that our comments [4] serve to encourage wider use of AMTS spectrum by removing other burdensome notification regulations in Part 80 that require AMTS licensees to notify the American Radio Relay League and Interactive Systems of station locations. Our proposed removal of notification requirements for 219 in Part 97 serves the same purpose.

The AAR observes that most Positive Train Control deployments are “[…] primarily located within railroad rights-of-way and generally away from households […]”. This necessarily limits the potential for harmful interference, given that we do not advocate for mobile use of 219 by amateur stations. Beyond the Amateur Radio Service’s documented history of coexisting with primary users, we believe our proposed retention of many of the restrictions on amateur use (secondary status, reduced power levels, fixed stations, and digital emissions) will sufficiently reduce the possibility of harmful interference with AMTS licensees. In sum, we believe that our proposals, combined with those of AAR, WISPA, and Steven K. Stroh, will lead to more efficient commercial and amateur use of 219-220 MHz. In keeping with the Commission’s goal of removing regulations, all of these proposals can be implemented solely as rule deletions.

[1] https://www.fcc.gov/ecfs/search/search-filings/filing/10412030480052
[2] https://www.fcc.gov/ecfs/search/search-filings/filing/104111661931041
[3] https://www.fcc.gov/ecfs/search/search-filings/filing/10412237159703
[4] https://www.fcc.gov/ecfs/search/search-filings/filing/10329271641887

In the past, we have supported the removal of the 100 kHz bandwidth restriction for amateur stations in the 219-220 MHz band, and continue to do so now. 

The Connection to Recent Interference Cases

The DR0NK Radio Network’s experience with interference from improperly granted experimental licenses offers valuable lessons for our regulatory advocacy. Documentation is critical. Success came through “coordinating efforts” and “documenting and communicating real issues.”

Technical arguments prevailed. Successful spectrum defense relied on demonstrating actual harmful interference through technical measurements, not administrative formalities.

Community vigilance works. As DR0NK notes, “the price of freedom, and of spectrum, is eternal vigilance.” This vigilance allowed them to detect and address interference that administrative processes failed to prevent.

Conclusion

ORI’s approach balances regulatory streamlining with responsible spectrum sharing. By eliminating outdated administrative barriers while maintaining secondary status and technical safeguards, we can make the 219-220 MHz band more accessible to amateurs while protecting primary users. The recent experiences documented by the DR0NK Radio Network demonstrate that our community can effectively protect spectrum through technical vigilance rather than bureaucratic procedures. Removing unnecessary and outdated bureaucratic procedures clears the way for operators and enthusiasts to effectively and efficiently police the bands. 

FCC Filing for 219 MHz Rules Changes from ORI

Thank you to the many people that have helped with this effort. Open Research Institute (ORI) has filed the first of what might be several comments and proposed rulemaking efforts to the FCC about reforming amateur radio use of the 219 MHz band.

https://www.fcc.gov/ecfs/search/search-filings/filing/10329271641887

The list of folks that have contributed and supported this effort to renovate 219 MHz for actual amateur radio use is quite long. This filing and any that follow are the result of over a year of work. Thank you especially to Mike McGinty, ARRL advisors, and Justin Overfelt. 

If you would like to help:

1) Please use this comment to make your own similar request under this particular proceeding. This is a “what regulations do you want to delete?” type of call. As with many FCC calls for comment, it will be dominated by commercial interests. Anything from amateur radio will stand out. The deadline for comments is 11 April 2025. Speak simply and directly. We’d like to use this band without unnecessary and burdensome requirements. 

2) Please be ready to file a “reply” comment after the 11 April 2025 deadline. This is a chance for you to say “I agree with this and support this.”

We are not asking to change the fundamental nature of the band. Fixed digital message forwarding is super exciting these days because of SDRs, mesh networking, and all sorts of amazing protocol work available to us. We decided to simply ask for removal of the notification and permissions requirements. These requirements have resulted in zero use of this band for over two decades.

The primary service back in the late 1990s when these rules came out was maritime (AMTS). Those licenses were never fully deployed and have now been leased out by railroads. This means, to us, that the permissions requirements now make no sense at all for secondary licensees. 

ORI is tired of this and is working to make this situation better. This is a great band with huge, innovative, digital promise. We deserve to have a seat at this table and that means the chair has to actually exist and the door to the room the table is located within has to actually be something we can open.

Inner Circle Newsletter: Autumn 2024

A Harvest of Regulatory and Technical Updates

Open Research Institute is a non-profit dedicated to open source digital radio work. We do both technical and regulatory work. Our designs are intended for both space and terrestrial deployment. We’re all volunteer. You can get involved by visiting https://openresearch.institute/getting-started

Membership is free. All work is published to the general public at no cost. Our work can be reviewed and designs downloaded at https://github.com/OpenResearchInstitute

We equally value ethical behavior and over-the-air demonstrations of innovative and relevant open source solutions. We offer remotely accessible lab benches for microwave band radio hardware and software development. We host meetups and events at least once a week. Members come from around the world.

Subscribe to this newsletter at http://eepurl.com/h_hYzL

Ribbit Radio Update

It isn’t often that a completely new way of using existing technology in a truly innovative way on the amateur bands excites the community, but Ribbit consistently delivers.

What is Ribbit?

Ribbit is a novel digital text messaging mode for VHF/UHF communications for recreational and emergency use. It radically increases the density of information transmitted per unit of spectrum used. Ribbit turns text into encoded audio, which can be played from your smartphone into the microphone of any radio, on any band, under even very noisy conditions. A smartphone with the Ribbit application on the other side can recover this message. Your information will get through.

Ribbit leverages the computing power of the modern smartphone to increase the capabilities of any Handy Talkie without requiring any additional hardware or cable.
Its redundant distributed nature allows it to function even when internet connectivity is lost during emergencies or just not available due to poor coverage.
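
Ribbit's actual modem is a far more sophisticated OFDM design (see the ribbit_webapp sources); purely as a toy illustration of the "text in, audio tones out" signal chain described above, here is a trivial one-tone-per-nibble mapping. This is emphatically not Ribbit's scheme, and all constants are made up for the example:

```python
# Toy illustration of "text -> audio tones" ONLY. Ribbit's real modem is an
# OFDM design; this trivial one-tone-per-nibble mapping just makes the
# concept concrete. All parameters below are invented for the sketch.
import math

SAMPLE_RATE = 8000             # samples per second
BAUD = 20                      # tones per second (toy value)
BASE_HZ, STEP_HZ = 400.0, 50.0 # nibble n -> tone at BASE_HZ + n*STEP_HZ

def text_to_audio(text: str) -> list:
    """Return raw mono samples encoding each 4-bit nibble as one sine tone."""
    samples = []
    per_symbol = SAMPLE_RATE // BAUD
    for byte in text.encode("utf-8"):
        for nibble in (byte >> 4, byte & 0xF):
            freq = BASE_HZ + nibble * STEP_HZ
            for i in range(per_symbol):
                samples.append(math.sin(2 * math.pi * freq * i / SAMPLE_RATE))
    return samples

audio = text_to_audio("CQ")
print(len(audio))  # 2 bytes * 2 nibbles * 400 samples per tone = 1600
```

The receive side would run the inverse: detect each tone, map it back to a nibble, and reassemble bytes. Ribbit's real design adds error correction and multi-carrier robustness on top of this basic idea.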

Developer Updates

Project Lead Pierre W4CKX and Rattlegram Developer Ahmet Inan supported a transition of project development to Alex Okita KO6BVA in late 2023. Since then, Alex has made several big strides forward in both development and demonstration.

Alex’s most recent TO DO list included the following tasks.

1) Building a scheme to compact the header for Ribbit. The header is a bit long and could be shortened to save some bits using a raw bit stream of a simplified character map.

2) Add a keyEncoder to Code Witches as the keyboard input gets letters added. When the “Enter” key is pressed begin encoding/consuming the characters in the input area.

3) May also want to add in a sort of message queue above the input area.

4) Build a useful CW-Font for code witches and anything else anyone wants to use it for.

All of this work enabled a highly successful demonstration of Ribbit at DEFCON 32 in Las Vegas, Nevada, in mid-August 2024. A special thank you to RF Village for hosting ORI’s Open Source Showcase.

Visitors to the exhibit were able to see Ribbit work in a busy and noisy environment. DEFCON, like most other very large conferences, is acoustically noisy, but also very electrically noisy. Ribbit was demonstrated all weekend long in a very harsh environment, as the WiFi Capture the Flag competition was happening literally steps away from the Ribbit demonstration. The venue, Las Vegas Convention Center West, was full of conference attendees. Despite the background noise and substantial RF interference, Ribbit worked from Smartphone app speaker output, to Family Radio Service microphone input, over the air on a busy FRS channel, to the receiving FRS radio speaker output, to a Ribbit application microphone input on the monitoring smartphone. Being able to transmit and reliably receive accurate text data is a notable achievement.

Alex has integrated the Ribbit functionality into a web application. This makes Ribbit even easier to use than traditional native applications. The live web app demonstration can be found at https://badkangaroo.github.io/

Recordings of presentations about Ribbit, along with posters and diagrams, can be found at https://www.ribbitradio.org

The source code can be found at https://github.com/OpenResearchInstitute/ribbit_webapp

Practical Real World Use

The technical demonstration for Ribbit, called Rattlegram, got very positive reviews for utility in the field from mountain rescue groups in Chile and Argentina. The challenge is supporting hikers and skiers in places where cellular coverage is limited or not available. Ribbit functionality provided clear benefits with no downsides.

Carlos, our contact in Argentina who supports mountain rescue operations, described his experiences using Ribbit technology.

“Hi everyone, how are you? I’m just back from the mountains in Chile. We did some testing with Rattlegram, in very windy conditions (without any cables) and worked just great !
Send you a picture & video of one of the test we made. We were at this summit:”

In Argentina, Carlos reported:

“These past few months, we have been going out to the mountains, and been using Rattlegram with lot of success. In one of the ‘refugio’ we left an old cell phone with Rattlegram loaded besides the base radio, and we have been sending messages in all the backcountry ski outings and all the messages arrived without a problem.”

If Ribbit can work on the top of the mountains in Chile and Argentina, and if it also performs really well in noisy urban environments, then what can’t it do?

Ribbit Can’t Flourish Without Your Support

Ribbit is innovative, useful, and fully supported by Open Research Institute. But, it won’t realize its full potential without continuing development and community support. If you are excited about Ribbit and want to help, please join ORI at https://openresearch.institute/getting-started and let us know you want to get involved with Ribbit. We’ll make sure you get on the mailing lists and Slack channels.

ORI gratefully accepts donations for Ribbit development. Ribbit has a dedicated bank account, follows non-profit best practices, has an enforced code of conduct, and regularly reports to ORI leadership.

Thank you to everyone helping make Ribbit successful, useful, exciting, and fun to use!

Opulent Voice Comes to the PLUTO SDR
An Open Source Minimum Shift Keying Implementation

Opulent Voice is the native digital voice and data protocol for ORI’s open source satellite (or terrestrial) system called Haifuraiya. With high fidelity voice quality, seamless integration of data, keyboard chat, and authentication and control messages, amateur operators experience something truly new and enjoyable. As described by ORI Director Steve Conklin AI4QR, “Leave scratchy and harsh proprietary vocoders with clunky or incomplete data mode integration behind for good.” All source code at https://github.com/OpenResearchInstitute/pluto_msk

Things worth doing are rarely easy. The Minimum Shift Keying (MSK) implementation of Opulent Voice (OPV) on the PLUTO SDR is no exception. Challenges to the documentation, design, implementation, and test cycles have come up throughout the process. The response from the open source digital radio community has been solid, inspiring, and productive, with patient problem-solving winning the day every time.

Michelle Thompson, ORI’s CEO, described her perspective on OPV’s progress recently at an IEEE event, stating:

“I would like to thank everyone helping ORI achieve ambitious goals for our uplink protocol. It’s a real pleasure to serve this community and meet the needs of this project. It’s a privilege to be part of something so innovative and relevant.”

This feeling is reflected by the OPV volunteer team, who are actively involved around the world in promoting open source work and helping ORI projects like OPV succeed.

Successes Over the Past Quarter and Current Status

OPV was the communications downlink for a NASA RockSat-X mission executed by the University of Puerto Rico, one of the most consistent and successful teams participating in this competitive program. The mid-August 2024 launch from Wallops went flawlessly. The performance of OPV met all expectations and data recovered from the launch is being used to further develop and improve the communications protocol.

OPV presentations at HamCon:Zion and DEFCON during the summer of 2024 were extremely well received, giving positive educational value to the community. Amateur Radio deserves quality digital communications. OPV delivers excellent results and is well-documented.

There is still a long way to go. The physical layer work is currently focused on tuning the tracking loops in the demodulator. Once the physical layer is reliable, then the functions that integrate voice, data, keyboard, and channel control messages will be integrated into the PLUTO SDR build and tested. Application layer functions in HTML 5 have a head start, with some tests already completed. Parallel integration efforts from F5OEO and OpenCPI, both of which will expand the number of hardware platforms that OPV can use, are well underway.

Below, an example of bit-accurate simulation results for OPV, with signal names on the left and waveform values on the right.

Below, the spectrum of OPV, showing main lobe null-to-null span of 81 kHz, and signal sidelobes.
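
For reference, MSK's main spectral lobe spans 1.5 times the bit rate null-to-null, so the 81 kHz figure above is consistent with a bit rate of roughly 54 kbps. A quick check (the 54 kbps value is inferred from the 81 kHz span shown, not taken from the OPV specification):

```python
# MSK baseband power spectral density, normalized to 1 at f = 0, from the
# standard closed form S(f) ~ [cos(2*pi*f*Tb) / (1 - (4*f*Tb)**2)]**2.
# Nulls fall at f = +/-0.75/Tb, so the null-to-null main lobe is 1.5 * Rb.
# The 54 kbps figure below is inferred from the 81 kHz span in the text.
import math

def msk_psd(f_hz: float, bit_rate: float) -> float:
    """Normalized MSK PSD at offset f_hz (the f*Tb = 0.25 singularity is
    not special-cased; avoid evaluating exactly there)."""
    x = f_hz / bit_rate
    num = math.cos(2 * math.pi * x)
    den = 1 - (4 * x) ** 2
    return (num / den) ** 2

bit_rate = 54_000.0
null_to_null = 1.5 * bit_rate
print(f"{null_to_null / 1e3:.1f} kHz")        # 81.0 kHz
print(msk_psd(null_to_null / 2, bit_rate))    # ~0 at the first null
```

The fast sidelobe roll-off implied by the squared cosine-over-quadratic shape is one reason MSK is attractive for narrowband amateur channels.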

Haifuraiya and Modem Module Update

The work on Haifuraiya has broader appeal than just as a HEO/GEO satellite design. The “modem module” approach, developed by Matthew NB0X, is summarized at left.

At left is the most recent high level block diagram.

Haifuraiya – Open Source HEO/GEO Amateur Satellite Update

The multirate channelizer has been implemented in Remote Labs West on the zc706/ADRV9009 station. At left is a summary of the channel plan for the satellite receiver.

Regulatory Work at ORI
Making Open Source Easier for Everyone

Past regulatory work at ORI can be found at
https://github.com/OpenResearchInstitute/documents/Regulatory

A special Halloween season workshop about exorcising the Haunted Band of 219-220 MHz was held by IEEE and ORI on 29 October 2024.

The Zoom meetup was well attended with the majority of RSVPs participating on the call. The meeting was recorded for those that couldn’t make it. The video recording of the background session, followed immediately by the workshop, can be found at https://youtu.be/wGX__qcFrT4

219 MHz is a secondary allocation for amateur radio. It has excellent channel characteristics, is open to fixed point-to-point digital links only, allows 100 kHz bandwidths, supports data rates of at least 56 kbps, and has had no activity for decades. It’s time to take this band back!

Why is there no activity? We believe this band is inactive because of the notification requirements in FCC part 97 and part 80. We assert that these notification requirements, which pull ARRL into the loop from two different directions, are impossible for both amateurs and commercial licensees to comply with.

What are these notification requirements? If you as an amateur want to operate on 219 MHz, you have to determine whether there is an Automated Maritime Telecommunications System (AMTS) station within 80 km of your amateur station. If there is, you have to request permission from the AMTS station(s), at least 30 days in advance, in order to operate legally. If you are between 80 km and 640 km away, you have to notify the AMTS stations at least 30 days in advance of activating the band.
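
The two distance tiers described above can be captured in a small sketch. The haversine great-circle distance is our own choice of formula here; the rule text itself does not specify how the distance is to be computed:

```python
# Sketch of the Part 97 distance tiers for 219 MHz described above:
# within 80 km of an AMTS station  -> permission required;
# 80 km to 640 km                  -> 30-day advance notification;
# beyond 640 km                    -> neither obligation.
# Haversine great-circle distance is our assumption, not the rule's wording.
import math

def km_between(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def obligation(distance_km: float) -> str:
    """Map a station separation to the notification tier described above."""
    if distance_km <= 80:
        return "permission required"
    if distance_km <= 640:
        return "30-day notification"
    return "none"

print(obligation(50))    # permission required
print(obligation(300))   # 30-day notification
print(obligation(1000))  # none
```

Even in sketch form, the compliance burden is visible: the amateur must locate every AMTS licensee, compute the separation, and then wait out the notice period before transmitting.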

A publicly accessible database of amateur and AMTS stations on 219 MHz is supposed to be maintained by ARRL. This is a rule in Part 97 (for the amateur radio stations) and in Part 80 (for AMTS stations). This database does not appear to exist.

AMTS stations did not report their information to ARRL. Requests to ARRL for a copy of the AMTS station list have not been answered. Requests for a copy of the list of amateur radio stations that have notified the ARRL of intent to operate have also gone unanswered.

These requests were made by multiple amateur radio operators in the October – December 2023 time frame. This was before the cyberattack in 2024, which did sideline a lot of ARRL’s services and activities.

The 219 MHz band regulations need a serious overhaul. We believe a proposal for new rules for this band is a necessary step towards restoring amateur use of this band. We believe that rules can be proposed that will both promote modern and innovative digital activity, and at the same time, not endanger the primary licensees in any way. While the AMTS service no longer exists as originally envisioned, the AMTS licenses have been leased out to fundamentally different services, such as railroads. These primary licensees can share the band with amateur radio in, at the very worst, a mutually inoffensive way and, at the very best, a mutually beneficial manner. It’s up to us to make this better future happen.

In order to make an effective proposal to the US FCC for updated rules for this band, there are things that we need to sort out. We need to know what we want and we need to know what the primary users want. We need to anticipate and answer objections to dropping the onerous notification requirements, and we need to clearly state the value that an amateur radio allocation on this band brings to education, public service, citizenship, and technical advancement of the radio arts.

The workshop on 29 October was the first strong step towards sorting out a draft of proposed rulemaking for restoring amateur radio service use of the 219 MHz band.

Discussion during the workshop resulted in an outline for a draft rulemaking proposal. ORI sincerely thanks everyone that participated during the event and through email correspondence.

Draft Outline

1) Demonstrate that the current rules are not working. State our case that the notification requirements are impossible to comply with and are the reason that there is no activity on this band.

2) Cite 60m as an example of successful similar activity.

3) Assert, providing evidence, that amateur radio operators already know how to share spectrum with other services. We already know how to not cause problems for the services that we share spectrum allocations with. We do not need additional notification requirements or to sign declarations to the ARRL in order to be able to successfully operate on an amateur radio band as a secondary user.

4) Show, from a ULS database perspective, the current utilization of 219-220 MHz. Amateur activity does not threaten primary licensees.

5) Show, to the best of our ability, the actual use of 219 – 220 MHz. This will require a call for participation for sensing stations, such as WebSDRs, amateur operators, receiver enthusiasts, and distributed receiver networks such as SatNOGS, to monitor 219-220 MHz for some period of time, in order to measure primary licensee activity on the band.

6) We assert that AMTS no longer exists. The current primary licensees are not coastal stations. The rules assuming coastal activity do not reflect the reality of this allocation as it exists today. Old rules are hampering efficient use of the spectrum.

7) Propose specific deletions of rules. What exactly should and could be struck down?

8) What are the anticipated effects and repercussions, both positive and negative, of deleting the notification requirements?

9) Is there anything else that could or should be changed? We discussed the 100 kHz bandwidth requirement and its origins in specific equipment available to amateurs at the time. Current FCC thinking is to not have rules that hardcode equipment requirements. Deleting the 100 kHz bandwidth limit in favor of no bandwidth or symbol rate limit had broad support at the workshop.

Thank you to all of you that support our regulatory work. It will take a lot of time and effort to draft proposed rules and to go through the proposal process. The FCC may or may not choose to entertain new rules for 219 MHz. If they do, then a public comment period will happen. If they do not, then the draft rules will still be of benefit, as they will record the point of view and consensus of digital mode amateur radio experimenters and enthusiasts.

Please join this effort at ORI by visiting https://openresearch.institute/getting-started

Sphere of Activity

July 12-13 2024 – ORI exhibited at HamCon:Zion in St. George, Utah, USA. We invited Deep Space Exploration Society (DSES) and QRZ.com columnist Dave Jensen to join our booth. We showcased their work alongside our technical and regulatory achievements. Total staff was three for this weekend event at a very nice conference center. Some of the travel expenses were reimbursed by ORI. This collaboration was entirely successful. Our CEO gave a talk about AI/ML in Amateur Radio at HamCon’s forum, which can be seen at https://www.youtube.com/watch?v=ZgHUavf5Ddo

August 8-11 2024 – Our Open Source Digital Radio exhibit at DEFCON 32 in Las Vegas, Nevada USA was a big success. ORI hosted demonstrations from RFBitBanger, Ribbit, Opulent Voice, and our Regulatory team in the RF Village. A staff of four set up, staffed, and tore down the exhibit. Travel and lodging were funded by ORI. Our CEO gave an RF Village talk about Synthetic Aperture Radar and Amateur Radio, which can be seen at https://www.youtube.com/watch?v=fiR4woe4O5Q

August 29 2024 – ORI participated in the quarterly Federal Communications Commission Technology Advisory Council meeting and presentation in Washington, DC. ORI is represented on the FCC TAC by Michelle Thompson and Matthew Wishek. Open Source and Amateur Radio interests have been defended at the federal level. ORI contributions to the Dynamic Spectrum Sharing and 6G Working Groups can be seen in the video at https://www.fcc.gov/news-events/events/2024/08/technological-advisory-council-meeting-august-29-2024. If you are interested in supporting regulatory participation and defense, please join ORI, support ORI’s work, and let us know how you want to help.

October 2-9 2024 – ORI organized a week-long event called Digital Update for Microwave (DUM) in Vancouver, British Columbia, Canada. There were three participants on site and nine attending virtually. Housing was reimbursed by ORI. The focus of the workshop was Opulent Voice and Iterative Receiver Theory technical work. We did not address the 10 GHz antenna questions posed in our outreach prior to the event, due to time limitations. Better 10 GHz mobile radio gear is what people are looking for, and this will get attention from ORI over the next quarter. DUM is an event that provides opportunities for people interested in digital communications on the amateur microwave frequencies. The contributions to ORI’s GitHub repositories showed that DUM was well worth the effort. Should ORI have a DUM II?

Thank you to all who support our work! We certainly couldn’t do it without you.

Anshul Makkar, Director ORI
Frank Brickle, Director ORI
Keith Wheeler, Secretary ORI
Steve Conklin, CFO ORI
Michelle Thompson, CEO ORI

219 MHz FCC Rules Revision Workshop Announcement

All of us at ORI are looking forward to the “let’s take back 219 MHz for innovative and awesome digital operations” workshop coming up in a few days. (This is a US centric spectrum management issue.)

The response has been very strong. Thank you!

Yes, the workshop will be recorded. No, it will not be successful without participation. If you care about completely unused VHF/UHF spectrum in the US, and want to see educational, modern, non-commercial use of this band happen without unnecessary impediments and silly rules, then we need your input now.

Here’s the link to RSVP (no you do not have to be an IEEE member to use vTools and sign up and attend):

https://events.vtools.ieee.org/m/439290

Even if you simply want a recording of the workshop, please register. Then you can get a notification.

If you would like to participate in “The Haunted Band” or any of the other 9 open source amateur radio projects at ORI, then it’s easy to join.

Please visit https://openresearch.institute/getting-started to get involved.

Inner Circle Newsletter: Summer 2024

A Sizzling Summer Summary prepared just for you!

Read on for highlights from all our technical and regulatory open source digital radio work. ORI’s work directly benefits amateur radio, provides educational and professional development opportunities for people all over the world, and puts ethics and good governance first.

Opulent Voice Flying High

Opulent Voice is an open source high bitrate digital voice (and data) protocol. It’s what we are using for our native digital uplink protocol for ORI’s broadband microwave digital satellite transponder project. Opulent Voice has excellent voice quality, putting it in a completely different category than low bitrate digital communications products such as D-Star, Yaesu System Fusion, and DMR.

Opulent Voice switches between high resolution voice and data without requiring the operator to switch to a separate packet mode. Opulent Voice also handles keyboard chat and digital file transmission. Seamless integration of different data types, using modern digital communication techniques, differentiates Opulent Voice from any other amateur radio protocol.

Opulent Voice will fly on the University of Puerto Rico’s RockSat-X launch on 13 August 2024. It’s been a very positive experience working with the students and faculty at the University.

An implementation on FPGA for the PLUTO SDR is well underway, with an active international team delivering quality results. This implementation will not only turn your PLUTO SDR into an Opulent Voice transceiver, but it will have remote operation functionality.

Hear what Opulent Voice sounds like by following the links in an earlier update at https://www.openresearch.institute/2022/07/30/opulent-voice-digital-voice-and-data-protocol-update/

We’ve come quite a long way in less than two years! The FPGA implementation upgrades the modulation from 4-ary frequency shift keying to minimum shift keying, and increases forward error correction performance and flexibility.
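For readers unfamiliar with minimum shift keying: it is continuous-phase FSK with modulation index 0.5, so the carrier phase advances or retards by exactly π/2 over each bit, giving a perfectly constant envelope. Here is a minimal NumPy sketch of a baseband MSK modulator. This is purely illustrative and is not the project's actual FPGA implementation; the function name and sample rate are our own choices.

```python
import numpy as np

def msk_baseband(bits, sps=8):
    """Generate complex baseband MSK samples at sps samples per bit.

    MSK is continuous-phase FSK with modulation index 0.5: the phase
    ramps linearly by +pi/2 over a 1 bit and -pi/2 over a 0 bit,
    which keeps the envelope constant."""
    # +1/-1 phase direction per bit, held for sps samples
    steps = np.repeat(np.where(np.asarray(bits) > 0, 1.0, -1.0), sps)
    # cumulative phase, scaled so one full bit moves the phase by pi/2
    phase = np.cumsum(steps) * (np.pi / 2) / sps
    return np.exp(1j * phase)
```

The constant envelope is what makes MSK attractive for saturated, power-efficient amplifiers, one reason it is a common upgrade from plain FSK.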

HAMCON:ZION 2024 is This Week!

Please visit us at HAMCON:ZION 2024 this weekend, 12-13 July 2024 in St. George, Utah, USA.

The event website is https://www.hamconzion.com/

ORI will have a club booth at the event. We opened our space to QRZ.com (https://www.qrz.com/) and Deep Space Exploration Society (https://dses.science/). This combined exhibit is a one-stop shop for the best in community, technical, and citizen science amateur radio activity.

We have a volunteer presenting on Artificial Intelligence and Machine Learning in Amateur Radio. The talk opens with a brief summary of the history of our relationship with created intelligence and then explores case studies of the use of artificial intelligence and machine learning in amateur radio. The talk is at 1 pm on Friday in Entrada B.

Open Research Institute at DEFCON32

We will present an Open Source Showcase at DEFCON in the Radio Frequency Village 12-13 August 2024, with accessible exhibits and demonstrations. Here is the list of scheduled project demonstrations.

Regulatory Efforts: ORI works hard for open source digital radio work and moves technology from proprietary and controlled to open and free in intelligent and mutually beneficial ways. Our work on ITAR, EAR, Debris Mitigation, AI/ML, and Synthetic Aperture Radar will be presented and explained. Find out more at https://github.com/OpenResearchInstitute/documents/tree/master/Regulatory

Ribbit: this open source communications protocol uses the highest performance error correction and modern techniques to turn any analog radio into a digital text terminal. No wires, no extra equipment. Learn how to use this communications system and get involved in building a truly innovative open source tactical radio service. Find out more at https://www.ribbitradio.org

Satellite: ORI has the world’s first and only open source HEO/GEO communications satellite project. All working parts of the transponder project will be demonstrated, from Opulent Voice to antenna designs.

The Dumbbell antenna: We have an HF antenna design based on a highly effective inductive loading technique first written about in 1958. Learn about this antenna and find out how to make your own. Repository can be found at https://github.com/OpenResearchInstitute/dumbbell

RFBitBanger: an HF QRP system and novel digital protocol called SCAMP. Kit information and updates will be available. Get your Batch 2 kit today at https://www.ebay.com/itm/364783754396

Radar: Our regulatory and technical work in synthetic aperture radar will be demonstrated. One of our volunteers will be giving a talk about open source synthetic aperture radar in the RF Village speakers track. Here is the abstract.

Synthetic Aperture Radar (SAR) is one of the most useful and interesting techniques in radar, providing high resolution radar satellite images from relatively small satellites. SAR is not limited by the time of day or by atmospheric conditions. It complements satellite photography and other remote sensing techniques, revealing activity on the Earth that would otherwise be hidden. How does the magic happen? This talk will explain the basics of SAR in an accessible and friendly way. That’s the good news.

The bad news? SAR is controlled by ITAR, the International Traffic in Arms Regulations, and is listed in the USML, the United States Munitions List. ITAR regulates the export of defense articles and services and is administered by the US State Department. This includes products and services as well as technical data, such as catalogs of high resolution radar imagery. The articles and services regulated by ITAR are identified in the USML. If ITAR doesn’t get you, then EAR just might. The Export Administration Regulations (EAR) are administered by the US Commerce Department, and items are listed on a Commerce Control List (CCL). Commercial products and services and dual-use items that are not subject to ITAR could be regulated by EAR. Even if you are free of ITAR and EAR, you may still be regulated by yet another agency, such as the FCC.

Regulation of SAR chills commercial activity, creating costly and time-consuming burdens. But why does any of this matter to signals hackers? Because technology has overtaken the rules, and devices used by enthusiasts, researchers, students, and hackers are increasingly likely to have enough capability to fall into export-controlled categories. The penalties are harsh. Fear of running afoul of ITAR is enough to stop a promising open source project in its tracks.

Is there a way forward? Yes. ITAR has a public domain carve out. Information that is published and that is generally accessible or available to the public is excluded from control as ITAR technical data. That’s great in theory, but how can we increase our confidence that we are interpreting these rules correctly? How can we use and build upon these rules, so that our community can learn and practice modern radio techniques with reduced fear and risk? Can we contribute towards regulatory relief when it comes to SAR? We will describe the process, report on the progress, and enumerate the challenges and roadblocks.

RFBitBanger Batch 2 Kits Available

Kits are available at our eBay store at this link https://www.ebay.com/itm/364783754396

Be a part of the future with a prototype Batch 2 kit build of the RFBitBanger, a low-power high-frequency digital radio by Dr. Daniel Marks KW4TI. Presented by Open Research Institute, this kit is designed to produce 4 watts of power and opens up a new digital protocol called SCAMP.

SCAMP is now available in FLDigi!

Source code and extensive instructions can be found at https://github.com/profdc9/fldigi

Your donation in exchange for this kit directly enables the further development of an innovative Class E amplifier based radio design. It has a display, button menu navigation, and a keyboard connection for keyboard modes and keyboard-enabled navigation. This radio can be taken portable or used in a case. If you have a 3D printer, then Dr. Marks has a design ready for you to print in the repository linked below.

  • Built-in digital modes: CW, RTTY, SCAMP (FSK and OOK, multiple speeds)
  • Key jack supports straight keys and iambic paddles
  • Open Source hardware and firmware, Arduino UNO compatible https://github.com/profdc9/RFBitBanger
  • External sound-card FSK digital modes supported (including FT4/FT8)
  • Experimental SSB support
  • Serial port support (2400 baud) for send and receive in keyboard modes

SCAMP is a new protocol that allows keyboard-to-keyboard contacts with a digital protocol that has excellent connection performance. See Dr. Marks’s presentation about RFBitBanger at QSO Today Academy in September 2023 to learn more about SCAMP and the RFBitBanger project. Link below:

All surface mount parts on the main board are pre-installed at the factory. All the through-hole parts you need to complete the radio are provided for you to solder yourself.

Builder’s notes and photos of all the components to help you identify and install them can be found here:

https://github.com/OpenResearchInstitute/RFBitBanger-kit/tree/main/batch2

If you don’t know how to wind toroids or solder surface mount capacitors, this is an excellent kit to learn on. There are just six toroids on the main board, and two on each band pass filter board. You can build just one band pass filter board and operate on a single band, or you can build an assortment. We provide 12 filter boards, enough toroids to build any 9 filters, and a supply of capacitors that will let you build those 9 filters for 9 different HF ham bands. These capacitors are size 1206, which is the largest common size for SMT capacitors and the easiest to solder manually. All you’ll need is a pair of tweezers and your regular soldering iron and solder. We provide detailed instructions on winding the toroids and soldering the capacitors. You get spare filter boards to experiment with.

Friendly Support is provided through a dedicated Open Research Institute Slack channel.

Instructions on how to join this community are here:

https://www.facebook.com/openresearchinstitute https://www.instagram.com/open_research_institute/
https://x.com/OpenResearchIns

Standards Groups and Advisory Work

We have been hard at work defending #opensource and #hamradio in multiple ways by actively supporting standards committees and advisory groups.

We are a member of the United States Federal Communications Commission Technological Advisory Council, serving on the Advanced Spectrum Sharing and 6G Working Groups.

We serve on IEEE P1954 Standard for Self-Organizing Spectrum-Agile Unmanned Aerial Vehicles Communications working group.

We participated in several notices for proposed rulemaking at multiple government agencies over the past year, and expect this work to continue.

We have assisted in several successful experimental license applications for amateur radio satellite communications projects.

ORI is doing the work that helps the future of open source digital radio.

Thank you to all who support our work.

Comments from ORI to the FCC on Advancing Understanding of Non-Federal Spectrum Usage

The comments as filed can be found here:

https://www.fcc.gov/ecfs/document/1009106258899/1

Text of the comments is below.

Comments of
Open Research Institute, Inc.

#1873 3525 Del Mar Heights Road
San Diego, CA, 92130
United States of America
9 October 2023

Open Research Institute (ORI) is a non-profit research and development organization devoted to Open Source digital radio technology.

ORI’s mission is to provide practical open source wireless communications solutions for terrestrial and space applications. ORI provides significant workforce development opportunities to the volunteers engaging in Open Source technical work.

Open Source means that everything required to reproduce a software, hardware, or firmware design is freely available to the general public. An Open Source design must have free redistribution, allow modifications and derived works, and be non-discriminatory against persons, groups, or fields of endeavor. Open Source work cannot be specific to a particular product, it cannot restrict other software or hardware, and needs to be technology-neutral.

Open Source is vital to the United States’ economic competitiveness in telecommunications. The Internet runs on Linux, an Open Source computer operating system, as 96.3% of the top one million web servers run on Linux, and the vast majority of the underlying networking infrastructure uses either Linux or other Open Source operating systems and libraries.

The telecommunications industry has historically been heavily reliant on proprietary software and hardware. An increase in the adoption of Open Source, from OpenRAN to Nephio, has enabled telecommunications companies to more quickly and efficiently meet market demands. There is a broad trend towards the continuing increase in adoption of Open Source designs in order to avoid silos of wheel reinvention and to promote effective interoperability.

The Open Source community can address this inquiry.

Introduction

ORI agrees that spectrum usage information is generally nonpublic and infrequently available. In the spirit of Open Source and Open Access, we believe that publicly available high-quality data about spectrum usage is in the national interest.

ORI agrees that the need for spectrum usage information will only continue to grow. In order to produce useful models, artificial intelligence and machine learning require enough data to complete the training process. Without enough data, a machine learning model can suffer from a condition called overfitting. When this happens, the model becomes a very expensive copy machine, presenting the training data as output, regardless of the input. The lack of sufficiently large high quality radiofrequency data sets is widely acknowledged as an impediment to research and development for radiofrequency machine learning.

ORI agrees that the development of new and innovative spectrum sharing techniques, allowing increased co-existence among users and services, will improve spectrum management. Spectrum usage information is required in order to develop new spectrum sharing techniques. This is true whether or not machine learning is used either in the process or in the product. In other words, even if only ordinary humans had the task of improving spectrum sharing over and above what we have today, those humans would still need spectrum usage information to achieve their goal.

Without good spectrum usage information, neither machine learning nor human architects will be able to confidently produce quality results. The most common outcome of making the best guess in the absence of spectrum usage information is a highly conservative spectrum sharing arrangement that does not fully utilize spectrum, overly restricts licensees, and prevents innovation in the field.

Central Question

We want a more sophisticated knowledge of non-Federal spectrum usage. The central question of this inquiry is: how can we take advantage of modern capabilities for gaining this knowledge in a cost-effective, accurate, scalable, and actionable manner?

In addition to the other spectrum monitoring efforts listed in this inquiry, we can start with the concepts established by the Spectrum Monitoring Pilot Program from NTIA/NIST.

This program measured spectrum occupancy with standardized distributed receivers reporting to Measured Spectrum Occupancy Databases. These databases publish the metadata of their measurements so that measured data can be retrieved over https:// connections. The concepts of federation are used in order to avoid inefficient and expensive replication of measured data.

The Spectrum Monitoring Pilot Program had two classes of receivers. One was dedicated to radar and the other to communications. The communications receiver approach is an order of magnitude less expensive than the radar receiver approach.

By restricting the hardware package to the less expensive category of communications receiver, using modern software-defined radios with Open Source software, incorporating existing online radios (“WebSDRs”) alongside the standardized stations, and being deliberate and innovative about incentives, we assert that a low-resolution receiver network that produces useful spectrum usage information is achievable.

A Low Resolution Receiver Network – Why?

Why low resolution? Because a broad heatmap of spectrum usage, even at low resolution, is valuable in a different way for spectrum management purposes than a small amount of high resolution data from one service in one geographical area.

The current situation is one of essentially no real-time spectrum usage information. Even if we simply had noise floor measurements across the bands across the country, and even if those measurements were gathered from stations of varying quality, we would have an immense improvement in our capacity to intelligently manage our spectrum over having no measurements at all.

Towards a Weather Map of the National Noise Floor

Noise floor is a measure of noise power per unit bandwidth. Getting a snapshot of something like a National Noise Floor, comparable to a national radar weather map, needs a diversity of radio receivers. We need to be able to measure or estimate power spectral density from these receivers. Services with intermittent use must be measured often enough to produce minimally accurate data, or those services must be covered with alternate techniques.
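To make the measurement concrete, here is a minimal Python sketch (our own illustration, assuming NumPy and complex IQ samples scaled in volts across 1 ohm) that estimates a noise floor in dBm/Hz by averaging segment periodograms and taking the median bin; the median resists bias from narrowband signals sitting above the floor:

```python
import numpy as np

def noise_floor_dbm_per_hz(iq, fs, nseg=64):
    """Estimate noise power spectral density in dBm/Hz from complex IQ.

    Averages the periodograms of nseg segments (Bartlett's method),
    then takes the median frequency bin so that a few strong
    narrowband signals do not drag the estimate upward."""
    seg_len = len(iq) // nseg
    segs = np.asarray(iq)[: seg_len * nseg].reshape(nseg, seg_len)
    # per-segment periodogram, normalized to watts per Hz
    psd = np.abs(np.fft.fft(segs, axis=1)) ** 2 / (fs * seg_len)
    avg = psd.mean(axis=0)
    floor_w_per_hz = np.median(avg)
    return 10 * np.log10(floor_w_per_hz) + 30  # watts -> dBm
```

Calibration of the station's gain ahead of the digitizer is what turns this relative number into an absolute one, which is why standardized stations matter.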

As an example of where alternate techniques can work, FT8 mode contacts on the 17 meter Amateur Radio Service band can be obtained from https://gridtracker.org/. These reports, which are text based, can be used to estimate the power spectral density at the radios using Grid Tracker. Reports from radios can be used to create a map of real-time spectrum usage without directly measuring the spectrum. These are estimates, but they are based on measured signal-to-noise reports that all the radios are giving each other, in each 15-second long exchange.

We can compare this type of observation to eavesdropping on roomfuls of people talking to each other, and writing down how many conversations (and attempted conversations) there were. Multiple eavesdroppers can spread through the house, combine their notes, and show where the conversational density was highest.

What does this look like on the 17 meter Amateur Radio Service allocation with a typical FT8 station?

A stream of which stations contacted other stations is reported from a single radio. The stream consists of all the contacts that the radio has heard from that location. The radio is not directly contacting all of these other stations, but it can hear them all and tracks who is trying to talk to whom and when. Any radio on the band can act like the eavesdropper above.

Open Source software (called WSJT-X) is controlling the radio and demodulating and decoding all received transmissions across the entire sub-band. WSJT-X does include a spectrum waterfall display, which could be used to obtain power spectral density of all the simultaneous transmissions, but we do not have to do this. We instead use another commonly installed Open Source software program (called Grid Tracker) which takes the text output of WSJT-X and provides a text list of active stations and their reported signal power. This list can produce a calculated estimate of the power spectral density in the band. It’s less math-intensive to work with text-based signal reports for fixed formal signals like FT8 than it is to use a spectrum analyzer waterfall, or to deal with the IQ recordings of a radiometer.
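A back-of-the-envelope version of that arithmetic can be sketched in a few lines. This is a hypothetical illustration, not Grid Tracker's actual output format or API: each decoded FT8 signal occupies roughly 50 Hz of bandwidth for about 12.6 seconds of its 15-second slot, so a count of decodes yields a rough time-bandwidth occupancy fraction for the sub-band.

```python
def ft8_occupancy(num_decodes, subband_hz=3000.0, slot_s=15.0):
    """Rough occupancy of an FT8 sub-band during one receive slot.

    Each FT8 transmission is about 50 Hz wide and lasts about 12.6 s
    of the 15 s slot; occupancy is the fraction of the sub-band's
    time-bandwidth product those signals consume, capped at 1.0."""
    sig_bw_hz, sig_dur_s = 50.0, 12.6
    used = num_decodes * sig_bw_hz * sig_dur_s
    return min(1.0, used / (subband_hz * slot_s))
```

For example, two decodes in one slot work out to about 2.8% occupancy of a 3 kHz sub-band, which shows how much headroom even a busy-sounding band can have.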

Adding more radios (more eavesdroppers) improves coverage.

Does this exact network exist today? It almost does. Instances of Grid Tracker, very commonly installed alongside WSJT-X and running whenever the station is active, already report all of this information, but as of today they do not have coordinated reporting. However, since the software is Open Source, adding a function to opt in, do some math, and donate the data to a server to produce a National Noise Floor snapshot, for this particular mode, is achievable.

This example outlines a proof of concept for a very tiny slice of the HF spectrum at 18.100 MHz, but it shows the advantage of using existing Open Source software, existing radios, and existing communities of licensed operators. Incentives for licensees to participate could range from the simple opportunity to defend their licensed spectrum to recognition awards for donating data consistently over time.

Achieving Aggregated Wireless Sensing

How can we broaden this concept? First, leverage existing online radio receivers, such as broadband WebSDRs. See http://www.websdr.org/ for deployed examples. Power spectral density measurements or estimates can be obtained from receivers already on the air and already connected to the Internet.

An anticipated and understandable objection is that the multitude of WebSDRs across the country are not calibrated stations and they are not standardized. An owner could take one down at any time. A new one might be put on the air tomorrow. The answer to these objections is that the aggregation of these observations, even if the observations are of varying quality, provides immense value in efforts to improve spectrum management, because these receivers can produce real-time spectral usage information with no additional radio hardware investment.

We should, of course, not stop there. We must commit to a both/and rather than an either/or approach to answer the central question of this inquiry.

Second, deploy inexpensive, standardized, and calibrated receivers to libraries, schools, post offices, and any other institution or organization that can be either incentivized or mandated.

For a model of an Open Source standardized distributed receiver system producing real-world practical radio results, please refer to the SatNOGS project at https://satnogs.org/

What are some standardized station examples that we could deploy in the United States to achieve the goals of this inquiry?

An Open Source PLUTO SDR running Open Source Maia software creates an inexpensive spectrum analyzer with a built-in webserver. The addition of the federated reporting functions is possible because the source code for Maia can be modified to include these additional functions. Maia can be found at https://maia-sdr.org/. Documentation for the standard PLUTO firmware (which is largely replaced by the Maia firmware) can be found at

https://github.com/analogdevicesinc/plutosdr-fw

and documentation for the PLUTO hardware can be found at

https://wiki.analog.com/university/tools/pluto/hackers

A PLUTO/Maia package can cover frequencies from 70 MHz to 6 GHz. It would require one or more antennas (depending on how many bands are to be monitored by that station), a power supply, a weatherproof enclosure, mechanical attachments, and cables. A proof of concept would be expected to cost less than the Spectrum Monitoring Pilot Program communications receiving station proof of concept, which came in at $6000 according to “An Overview of the NTIA/NIST Spectrum Monitoring Pilot Program”.

This can be read at

https://its.ntia.gov/umbraco/surface/download/publication?reportNumber=CottonSpectMonIwssSubmitted.pdf

A second and even less expensive example of a standardized station would be an RTL-SDR

https://www.rtl-sdr.com/about-rtl-sdr/

and a Raspberry Pi

https://www.raspberrypi.com/

running Linux. This kit can use a large number of Open Source software-defined radio packages. It can be tuned to any of the bands in its operating range of 2.4 MHz to 2 GHz. For a sweep of the entire operating range, multiple antennas and combiners would be necessary, along with additional equipment and software.

The WebSDRs combined with the standard package stations form a heterogeneous distributed receiver network. In aggregate, with enough stations, the resulting network can provide useful real-time reports of spectrum usage information. A centralized visualization of actual spectrum usage, even if not real-time, would be very informative. If a picture is worth a thousand words, then a video is worth a thousand pictures. People seeing gaps in the data may be motivated to put up or sponsor a station to fill in the gaps, similar to the effect we see with personal weather stations that measure temperature, wind speed, and other meteorological aspects.
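The aggregation step for such a heterogeneous network can be surprisingly simple. As a sketch (the report tuple format and function name are our own, assumed for illustration), binning station reports into coarse geographic cells and taking a median per cell is enough to draw a first weather-map-style picture:

```python
from collections import defaultdict
from statistics import median

def grid_heatmap(reports, cell_deg=1.0):
    """Aggregate (lat, lon, noise_dbm_per_hz) station reports into a
    coarse grid of median noise-floor values, one cell per degree by
    default. The median tolerates outlier stations of varying quality."""
    cells = defaultdict(list)
    for lat, lon, dbm in reports:
        key = (int(lat // cell_deg), int(lon // cell_deg))
        cells[key].append(dbm)
    return {k: median(v) for k, v in cells.items()}
```

Using a per-cell median rather than a mean is one way the network can absorb uncalibrated or misbehaving stations without discarding their data outright.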

TAC Working Group as Asset

The Dynamic Spectrum Allocation Working Group of the Technological Advisory Council of the Federal Communications Commission could provide advisory feedback on real-time spectral usage information obtained through opportunistic and inexpensive sensor networks, invite speakers to present about the topic, and give specific recommendations. Leveraging an existing Working Group with familiarity in this particular topic would be an efficient use of already-available expert advice.

Conclusion

A National Noise Floor heatmap, even if low resolution, is achievable and it is valuable. Any reasonable real-time data, whether obtained opportunistically or purposefully, is an enormous step forward compared to no data at all.

There are drawbacks to low resolution data. The limits of the resolution must be acknowledged. The measurements have to at least be reasonable, meaning that a snapshot of noise floor includes enough information that lower-power signals aren’t completely overlooked or missed. For each frequency allocation measured, a subject matter expert in that allocation would be expected to compare the real-time spectrum usage information to a model of expected spectrum usage. The difference between this theoretical or calculated model and the real-time spectrum usage information is valuable and informative in many ways. A subject matter expert would be able to explain observed differences, explain whether and why any difference was sufficient reason to make spectrum management adjustments, and provide feedback for improved spectrum sensing. There is no one-size-fits-all solution for either the measurement stations involved or the allocations they are measuring.

The architecture for gaining visibility of spectral usage has been previously piloted in the Spectrum Monitoring Pilot Program. This and other excellent prior work can be adapted, and citizens can be incentivized to participate in order to scale up the sensor network. Incentives range from the ability to individually contribute directly towards the defense of a spectral allocation, to awards or recognition for the technical achievement of constructing and calibrating a station to a published standard, to a scoreboard of who provided the most consistent reports of real-time spectral information over specific lengths of time.

There is a large amount of Open Source software and hardware that can be used to reduce costs and reward high quality collaborative and cooperative work. A “lower-tech, inexpensive, diverse, start now” approach, rather than a “high-tech, expensive, maybe sometime in the future” one, is cost-effective, accurate (enough), scalable, and actionable.

Respectfully,
Michelle Thompson
CEO ORI

AI vs. Human: ChatGPT vs. ORI Open Source Designers

Greetings all,

You know that ORI has been active in a variety of regulatory work for several years now. One of those areas has been in AI/ML and the effect on communications. We served on the US FCC TAC in 2022, have presented and written on the subject, and are working towards an AI/ML Handbook for Amateur Radio.

You may have seen news about OpenAI GPT, or ChatGPT.

https://openai.com/api/

There are a lot of people testing this chatbot for reverse engineering and for writing software. There have been some successes in cases where the model would have had access to code or documentation about protocols, observations, or technical behavior to train on, and where the requests match up well with the models.

For example, a friend asked the chatbot to create a Python script to access the webcam on their computer, and they shared a still image from the resulting feed. Another example, from another friend, produced better reverse engineering results than they had gotten with either IDA or Ghidra over many weeks.

So, how does ChatGPT handle VHDL?

We have something specific to compare against, since we just completed a COBS decoder in VHDL for our open source HEO/GEO transponder work for #Haifuraiya.

Let’s start off simple. Here’s the first dialogue with the chatbot, and the results.

-=-=-=-=-=-=-=-=-=-=-=-=-=-

Abraxas3d: Design a COBS decoder in VHDL

Sure, here’s an example of a COBS decoder in VHDL:

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity COBS_decoder is
  Port (
    input_data : in std_logic_vector;
    decoded_data : out std_logic_vector
  );
end COBS_decoder;

architecture Behavioral of COBS_decoder is
  -- Declare a variable to keep track of the current index
  -- in the input and output data streams
  variable current_index : integer := 1;

  -- Declare a variable to store the length of the
  -- run of zeros in the input data
  variable zero_run_length : integer := 0;
begin
  -- Iterate over the input data
  for i in input_data'range loop
    -- If the current input value is a zero, increment
    -- the zero run length variable
    if input_data(i) = '0' then
      zero_run_length := zero_run_length + 1;
    -- Otherwise, the current input value is not a zero,
    -- so we've reached the end of the zero run
    else
      -- Set the decoded value at the current index to
      -- the length of the zero run
      decoded_data(current_index) := conv_std_logic_vector(zero_run_length, 8);

      -- Increment the current index and reset the zero
      -- run length for the next iteration
      current_index := current_index + 1;
      zero_run_length := 0;
    end if;
  end loop;
end Behavioral;

This code decodes the input data using the COBS (Consistent Overhead Byte Stuffing) algorithm, which involves replacing runs of zeros in the input data with a single byte that indicates the length of the zero run. For example, the input ‘101001000’ would be decoded as ‘103101’.

This decoder iterates over the input data and maintains a variable zero_run_length that is incremented for each zero in the input data. When a non-zero value is encountered, the length of the zero run is written to the output data and the zero_run_length variable is reset.

I hope this helps! Let me know if you have any other questions.”

-=-=-=-=-=-=-=-=-=-=-=-=-=-

It knew what COBS meant. It produced code, with comments, and a brief “design document”, with examples.

The code produced is not licensed or cited.

Our open source HEO transponder project uses AXI-based blocks in the FPGA. So, let’s add AXI to the request.

-=-=-=-=-=-=-=-=-=-=-=-=-=-

Abraxas3d: Design a COBS decoder with AXI interface in VHDL

Sure, here’s an example of a COBS decoder with an AXI interface in VHDL:

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;

-- AXI interface definitions
library unisim;
use unisim.vcomponents.all;

entity COBS_decoder is
  Port (
    AXI_IN  : in  AXI4Stream;
    AXI_OUT : out AXI4Stream
  );
end COBS_decoder;

architecture Behavioral of COBS_decoder is
  -- Declare a variable to keep track of the current index
  -- in the input and output data streams
  variable current_index : integer := 1;

  -- Declare a variable to store the length of the
  -- run of zeros in the input data
  variable zero_run_length : integer := 0;
begin
  -- Iterate over the input data
  for i in AXI_IN.data'range loop
    -- If the current input value is a zero, increment
    -- the zero run length variable
    if AXI_IN.data(i) = '0' then
      zero_run_length := zero_run_length + 1;
    -- Otherwise, the current input value is not a zero,
    -- so we've reached the end of the zero run
    else
      -- Set the decoded value at the current index to
      -- the length of the zero run
      AXI_OUT.data(current_index) := conv_std_logic_vector(zero_run_length, 8);

      -- Increment the current index and reset the zero
      -- run length for the next iteration
      current_index := current_index + 1;
      zero_run_length := 0;
    end if;
  end loop;

  -- Set the valid and last flags in the output AXI stream
  AXI_OUT.valid <= '1';
  AXI_OUT.last  <= '1';
end Behavioral;

This code is similar to the previous example, but it uses an AXI (Advanced eXtensible Interface) interface for the input and output data. The AXI interface is a standard interface used in many FPGA designs, and it provides a way to stream data into and out of the decoder.

In this code, the input data is read from the AXI_IN.data port, and the decoded output is written to the AXI_OUT.data port. The valid and last flags are also set in the output AXI stream to indicate that the data is valid and that it is the last data in the stream.

I hope this helps! Let me know if you have any other questions.

-=-=-=-=-=-=-=-=-=-=-=-=-=-

It knew about the previous request and updated the design. Again, no license or citation.

Here’s a review of the COBS implementation from Paul Williamson KB5MU.

“I don’t know what this is, but it isn’t a COBS decoder with AXI interfaces.

It does seem at first glance to be some sort of thing with AXI interfaces, but it does not implement an AXI interface at all. It just relies on a (possibly imaginary) library type called AXI4Stream that somehow magically does AXI interfaces, in a way that’s both implausibly smart and inexplicably dumb. I think the classic phrase “not even wrong” applies here.

The AXI4Stream object apparently has a member .data that is an array of bytes, which might make sense if the entire AXI transfer is being treated as a single entity here. The code iterates through the input .data using the ’range attribute, as if the AXI4Stream object has a defined fixed transfer length.

Which it can’t have, if it’s a general purpose AXI stream implementation.

So, if this was ever anything real, it must have been a special purpose AXI implementation with a fixed transfer length. Assuming that the AXI4Stream .data actually works that way, the entity does do some sort of transformation on the input data to create the output data.

We asked for that to be a COBS decoder, but the function that it actually implements bears only the slightest resemblance to a COBS decoder. What it actually does is output a stream of words that are the lengths of the runs of zeroes in the input data. Nonzero data does not get passed through at all.

Instead, a run of nonzero bytes gets translated into a run of zeroes, as if two adjacent nonzero bytes meant nothing but a zero-length run of zeroes.

Here’s an example of what this code seems to do:

  Input  .data:  1 2 3 4 0 0 0 0 1 2 3 4
  Output .data:  0 0 0 0         4 0 0 0

That’s certainly not a COBS decoder, or even anything useful at all.

Note that the output is variable length, but never longer than the input. We had to assume that the AXI4Stream object had a fixed .data size to get this far, but now it has to have a variable data size for the output.

Supposing we had successfully suspended disbelief up to here, we now examine the final two lines of code in the Behavioral architecture. It sets the .valid and .last members of the AXI_OUT port to ‘1’.

That makes absolutely no sense if it’s supposed to encapsulate an entire AXI stream transfer. Those signals only make sense if we’re treating one word at a time, as you’d expect to see in a normal, general-purpose AXI stream object.

The extremely verbose commenting style in this code suggests that the original source material was from an introductory tutorial. Maybe the AI has been tuned to favor heavily commented code, and ends up picking up partially completed junk from the middle of tutorials?

Anyhow, this is not an example of automated programming in response to a high-level query. This is gibberish that succeeds only in having a superficial resemblance to the code requested.”
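For contrast with the behavior described in the review, here is what a correct COBS decode actually does. This short Python reference is written here purely for illustration (it is not part of the FPGA project) and follows the usual byte-oriented formulation: each sequence starts with a count byte, the following count-minus-one bytes are copied through unchanged, and an implied zero is appended after every sequence whose count is less than 255.

```python
def cobs_decode(encoded: bytes) -> bytes:
    """Decode one COBS-encoded frame (without its trailing 0x00 separator)."""
    out = bytearray()
    i = 0
    while i < len(encoded):
        code = encoded[i]                  # count byte: distance to the next zero
        if code == 0:
            raise ValueError("unexpected frame separator inside frame")
        block = encoded[i + 1 : i + code]  # the count covers code - 1 data bytes
        if len(block) != code - 1:
            raise ValueError("truncated frame")
        out += block
        i += code
        # every count < 0xFF implies a zero byte, except at the end of the frame
        if code < 0xFF and i < len(encoded):
            out.append(0)
    return bytes(out)

print(cobs_decode(b"\x01\x01").hex())              # prints 00
print(cobs_decode(b"\x03\x11\x22\x02\x33").hex())  # prints 11220033
```

Nonzero payload bytes pass through unchanged and only the count bytes are consumed, which is roughly the opposite of what the generated code does.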

So, does this work?

Let’s try it out in the lab!

The first thing we notice is an obvious syntax error: a missing semicolon. Aside from that, there is no AXI4Stream port type; it’s simply not recognized. Vivado appears to be disappointed in this file, and wants little to do with this code. This is where we should stop, but I spent some time looking to see if the ports were at least close to being declared correctly.

The model appears to have taken some custom or perhaps enumerated type from somewhere, and invented a custom AXI4Stream type. It is not recognized by the unisim library right off the bat, and that’s exactly the sort of thing that automated code writing needs to get right.

Let’s look at the code written by humans from ORI. The decoder can be found here:

https://github.com/Abraxas3d/fpga_cores/blob/master/src/decoder.vhd

-=-=-=-=-=-=-=-=-=-=-=-=-=-

-- Company: Open Research Institute, Inc.
-- Engineer: Skunkwrx, Abraxas3d
-- Design Name: COBS protocol decoder
-- Module Name: decoder - Behavioral
-- Project Name: Phase 4 "Haifuraiya"
-- Target Devices: 7000 Zynq
-- Tool Versions: 2021.1
-- Description: COBS protocol decoder.

-- Additional Comments: This work is Open Source and licensed using CERN OHL v2.0

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

use IEEE.NUMERIC_STD.ALL;

--Entity Declaration
entity decoder is
    Port ( rst      : in STD_LOGIC;
           clk      : in STD_LOGIC;
           s_tdata  : in STD_LOGIC_VECTOR (7 downto 0);
           s_tlast  : in STD_LOGIC;
           s_tvalid : in STD_LOGIC;
           s_tready : out STD_LOGIC;
           m_tdata  : out STD_LOGIC_VECTOR (7 downto 0);
           m_tlast  : out STD_LOGIC;
           m_tvalid : out STD_LOGIC;
           m_tready : in STD_LOGIC);
end decoder;

--Architecture
architecture Behavioral of decoder is

    -- internal copy of s_tdata
    signal input_data          : STD_LOGIC_VECTOR (7 downto 0);

    -- s_tdata delayed by one clk where s_tvalid is high
    signal input_data_d        : STD_LOGIC_VECTOR (7 downto 0);

    -- internal version of s_tlast
    -- not used for anything
    -- We create our own m_tlast from the COBS framing instead
    signal s_tlast_i           : STD_LOGIC;

    -- internal version of input handshaking signal s_tvalid
    signal s_tvalid_i          : STD_LOGIC;

    -- s_tvalid delayed by one clk
    -- thus, a version of s_tvalid aligned with input_data_d
    signal s_tvalid_i_d        : STD_LOGIC;

    -- s_tvalid delayed by two clks
    -- thus, a version of s_tvalid aligned with output_data
    signal s_tvalid_i_d_d      : STD_LOGIC;

    -- countdown of bytes in this sequence
    -- loads from input_data_d when it represents the frame count
    --   as determined by signal counter_load
    -- contains the count while m_tdata contains the implied 0 at
    --   the end of a sequence, or the next count that replaced it
    -- counts down thereafter, to contain 1 when the last non-zero
    --   byte of the sequence is on m_tdata
    -- allowed to count down to 0, but not beyond
    signal count               : STD_LOGIC_VECTOR (7 downto 0);

    -- enable to load count from input_data_d on this clk edge
    -- two cases detected:
    --    * first valid non-zero byte after a frame separator
    --    * first valid byte after count is exhausted
    -- allowed to be high for multiple cycles
    signal counter_load        : STD_LOGIC;

    -- counter_load delayed by one clk where s_tvalid is high
    -- used to identify the first valid data byte of any sequence,
    --   for purposes of computing m_tvalid (via pre_tvalid)
    signal counter_load_d      : STD_LOGIC;

    -- detection of a valid frame separator (zero) byte in input_data_d
    signal frame_sep           : STD_LOGIC;

    -- frame_sep delayed by one clk where s_tvalid is high
    -- used to compute counter_load
    -- used to compute rising edge of pre_tvalid
    signal frame_sep_d         : STD_LOGIC;

    -- frame_sep_d delayed by an additional clk (not depending on s_tvalid)
    -- used to find the first non-zero byte of the new frame
    signal frame_sep_d_d       : STD_LOGIC;

    -- move the frame_sep signal that occurred during m_tready low
    -- out to the first cycle when m_tready is high again
    signal use_saved_frame_sep : STD_LOGIC;

    -- flag to remember that the frame count for this sequence was 255,
    --   to handle the special case that such a sequence does not have
    --   an implied zero byte at the end.
    -- set when loading count with 255
    -- cleared when the counter is reloaded with anything else
    signal case_255            : STD_LOGIC;

    -- internal version of m_tdata output
    signal output_data         : STD_LOGIC_VECTOR (7 downto 0);

    -- internal version of m_tlast output
    -- high when the last byte of a frame is valid on m_tdata
    signal m_tlast_i           : STD_LOGIC;

    -- delayed versions of m_tlast
    signal m_tlast_i_d         : STD_LOGIC;
    signal m_tlast_i_d_d       : STD_LOGIC;

    -- intermediate result for m_tvalid.
    -- high across all data bytes of each sequence on m_tdata
    -- does not go low for bytes on m_tdata corresponding to
    --   bytes invalidated by s_tvalid.
    signal pre_tvalid          : STD_LOGIC;

    -- internal version of m_tvalid output.
    -- pre_tvalid with periods of low s_tvalid_d_d punched out
    signal m_tvalid_i          : STD_LOGIC;

    -- internal version of m_tready input
    -- also the internal version of s_tready output
    -- passes through m_tready to s_tready with no clk delays
    signal m_tready_i          : STD_LOGIC;

    -- constant byte value 0xFF, for comparison purposes
    signal all_ones            : STD_LOGIC_VECTOR(input_data'range) := (others => '1');

    -- constant byte value 0x00, for comparison purposes
    signal all_zeros           : STD_LOGIC_VECTOR(input_data'range) := (others => '0');

begin

    -- asynchronous assignments

    frame_sep <= '1' when input_data_d = all_zeros and s_tvalid_i_d = '1'
                else '0';

    m_tlast_i <= '1' when ((frame_sep = '1' and m_tvalid_i = '1' and m_tready = '1'))
                else '0';

    counter_load <= '1' when (input_data_d /= all_zeros and frame_sep_d = '1' and s_tvalid_i_d = '1')   -- start of frame
                          or (to_integer(unsigned(count)) = 1 and s_tvalid_i_d = '1')   -- start of next sequence in frame
                else '0';

    m_tvalid_i <= '1' when ((pre_tvalid = '1' and s_tvalid_i_d_d = '1'    -- usual case, if input_data was valid
                        and not (to_integer(unsigned(count)) = 1 and s_tvalid_i_d = '0')) -- defer last byte; might be m_tlast
                      or (pre_tvalid = '1' and to_integer(unsigned(count)) = 1
                        and s_tvalid_i_d = '1' and s_tvalid_i_d_d = '0')) -- pick up that deferred last byte
                else '0';
               
   
    s_tready <= m_tready_i;
    m_tdata <= output_data;
    input_data <= s_tdata;
    s_tvalid_i <= s_tvalid;
    m_tready_i <= m_tready;
    m_tvalid <= m_tvalid_i;
    m_tlast <= m_tlast_i;
   
-- processes

    set_case_255 : process (rst, clk)
    begin
        if rst = '1' then
            case_255 <= '0';
        elsif rising_edge(clk) and m_tready_i = '1' then
            if counter_load = '1' and input_data_d = all_ones then
                case_255 <= '1';
            elsif counter_load = '1' and input_data_d /= all_ones then
                case_255 <= '0';
            end if;
        end if;
    end process set_case_255;


    delay_s_tvalid : process (rst, clk)
    begin
        if rst = '1' then
            s_tvalid_i_d <= '0';
            s_tvalid_i_d_d <= '0';
        elsif rising_edge(clk) and m_tready_i = '1' then
            s_tvalid_i_d <= s_tvalid_i;
            s_tvalid_i_d_d <= s_tvalid_i_d;
        end if;
    end process delay_s_tvalid;


    create_pre_tvalid : process (rst, clk)
    begin
        if rst = '1' then
            counter_load_d <= '0';
            pre_tvalid <= '0';
        elsif rising_edge(clk) and m_tready_i = '1' then
            if s_tvalid_i_d = '1' then
                counter_load_d <= counter_load;
                if (frame_sep_d_d = '1' and frame_sep_d = '0')            -- normal last byte of frame
                or (counter_load_d = '1' and frame_sep_d = '0')           -- normal first byte of a sequence
                then
                    pre_tvalid <= '1';
                end if;
            end if;
            if frame_sep = '1'
            then
                pre_tvalid <= '0';
            end if;
            if counter_load = '1' and case_255 = '1' then
                pre_tvalid <= '0';
            end if;
        end if;
    end process create_pre_tvalid;


    delay_m_tlast_i : process (rst, clk)
    begin
        if rst = '1' then
            m_tlast_i_d <= '0';
            m_tlast_i_d_d <= '0';
        elsif rising_edge(clk) and m_tready_i = '1' then
            m_tlast_i_d <= m_tlast_i;
            m_tlast_i_d_d <= m_tlast_i_d;
        end if;
    end process delay_m_tlast_i;

    set_counter : process (rst, clk)
    begin
        if rst = '1' then
            count <= (others => '0');
            frame_sep_d <= '0';
            frame_sep_d_d <= '0';
        elsif rising_edge(clk) and m_tready_i = '1' then
            frame_sep_d_d <= frame_sep_d;
            if s_tvalid_i_d = '1' then
                frame_sep_d <= frame_sep;
                if counter_load = '1' then
                    count <= input_data_d;
                elsif count /= all_zeros
                then
                    count <= STD_LOGIC_VECTOR(unsigned(count) - 1);
                end if;
            end if;
        end if;
    end process set_counter;


    create_output : process (rst, clk)
    begin
        if rst = '1' then
            output_data <= (others => '0');
        elsif rising_edge(clk) and m_tready_i = '1' then
            if counter_load = '1'
            then
                output_data <= all_zeros;
            elsif s_tvalid_i_d = '1' then
                output_data <= input_data_d;
            end if;
        end if;
    end process create_output;


    selective_delay_of_input_data : process (rst, clk)
    begin
        if rst = '1' then
            input_data_d <= all_zeros;
        elsif rising_edge(clk) and m_tready_i = '1' then
            if s_tvalid_i = '1' then
                input_data_d <= input_data;
            end if;
        end if;
    end process selective_delay_of_input_data;
   

end Behavioral;

-=-=-=-=-=-=-=-=-=-=-=-=-=-

This codebase includes not just the source code, but also a testbench file, a design document, timing diagrams, a Python notebook that creates test vectors, and Python code that compares the testbench results against those test vectors.
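The notebook itself isn’t reproduced here, but the approach it embodies is easy to sketch: generate test vectors by running known payloads through a reference COBS encoder in Python, then feed the encoded bytes to the VHDL testbench and compare the outputs. A minimal encoder along those lines might look like this (a hypothetical stand-in written for illustration, not the project’s actual notebook code):

```python
def cobs_encode(data: bytes) -> bytes:
    """Reference COBS encoder: replace each zero with a count byte."""
    out = bytearray([0])   # placeholder for the first count byte
    idx = 0                # index of the count byte currently being filled in
    code = 1               # distance from the count byte to the next zero
    for byte in data:
        if byte == 0:
            out[idx] = code            # close out this sequence
            idx = len(out)
            out.append(0)              # open the next count byte
            code = 1
        else:
            out.append(byte)
            code += 1
            if code == 0xFF:           # maximum sequence: 254 data bytes
                out[idx] = code
                idx = len(out)
                out.append(0)
                code = 1
    out[idx] = code
    return bytes(out)

# A frame on the wire would be cobs_encode(payload) followed by the
# 0x00 frame separator that the decoder's m_tlast logic keys on.
```

Note the 0xFF branch: a full run of 254 non-zero bytes produces a maximum-length sequence with no implied zero at its end, which is exactly the special case the case_255 flag handles in the VHDL above.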

None of this sort of thing is produced by ChatGPT.

There are some functions, in some languages, that AI can correctly write at this point.

This is a situation similar to algorithmic musical composition. Algorithms can come up with useful ideas, motifs, sketches, or traverse a search space more quickly than a human. They can help a composer, but do not replace one. Algorithms can produce some sparkling gems of ideas. These are fresh cuts and views. This is mixed in with a lot of music that is just plain wrong.

If your programming job happens to be in a well-defined area that is easily automated, then you do have something to worry about today.

All jobs are a target for eventual automation with AI/ML. Yes, all of them. As AI/ML becomes more capable, it will automate even things like testbench writing (HDL Coder already does something like this) and support code (like a Python notebook that creates a solid set of test vectors for a testbench) and design documents. All of us should be aware of what we do being replaced by an algorithm some number of tomorrows in the future.

That day is not yet “today” with ChatGPT, when it comes to writing #VHDL.

What do you think? We’re interested in your experience and opinions with AI/ML writing code.