Standards Groups and Advisory Work

We have been hard at work defending #opensource and #hamradio on multiple fronts by actively supporting standards committees and advisory groups.

We are a member of the United States Federal Communications Commission Technological Advisory Council, serving on the Advanced Spectrum Sharing and 6G Working Groups.

We serve on the IEEE P1954 working group, Standard for Self-Organizing Spectrum-Agile Unmanned Aerial Vehicles Communications.

We responded to several notices of proposed rulemaking at multiple government agencies over the past year, and expect this work to continue.

We have assisted in several successful experimental license applications for amateur radio satellite communications projects.

ORI is doing the work that helps secure the future of open source digital radio.

Thank you to all who support our work.

Comments from ORI to the FCC on Advancing Understanding of Non-Federal Spectrum Usage

The comments as filed can be found here:

https://www.fcc.gov/ecfs/document/1009106258899/1

Text of the comments is below.

Comments of
Open Research Institute, Inc.

#1873 3525 Del Mar Heights Road
San Diego, CA, 92130
United States of America
9 October 2023

Open Research Institute (ORI) is a non-profit research and development organization devoted to Open Source digital radio technology.

ORI’s mission is to provide practical open source wireless communications solutions for terrestrial and space applications. ORI provides significant workforce development opportunities to the volunteers engaging in Open Source technical work.

Open Source means that everything required to reproduce a software, hardware, or firmware design is freely available to the general public. An Open Source design must have free redistribution, allow modifications and derived works, and be non-discriminatory against persons, groups, or fields of endeavor. Open Source work cannot be specific to a particular product, cannot restrict other software or hardware, and must be technology-neutral.

Open Source is vital to the United States’ economic competitiveness in telecommunications. The Internet runs on Linux, an Open Source computer operating system, as 96.3% of the top one million web servers run on Linux, and the vast majority of the underlying networking infrastructure uses either Linux or other Open Source operating systems and libraries.

The telecommunications industry has historically been heavily reliant on proprietary software and hardware. An increase in the adoption of Open Source, from OpenRAN to Nephio, has enabled telecommunications companies to more quickly and efficiently meet market demands. There is a broad trend towards the continuing increase in adoption of Open Source designs in order to avoid silos of wheel reinvention and to promote effective interoperability.

The Open Source community can address this inquiry.

Introduction

ORI agrees that spectrum usage information is generally nonpublic and infrequently available. In the spirit of Open Source and Open Access, we believe that publicly available high-quality data about spectrum usage is in the national interest.

ORI agrees that the need for spectrum usage information will only continue to grow. In order to produce useful models, artificial intelligence and machine learning require enough data to complete the training process. Without enough data, a machine learning model can suffer from a condition called overfitting. When this happens, the model becomes a very expensive copy machine, presenting the training data as output, regardless of the input. The lack of sufficiently large high quality radiofrequency data sets is widely acknowledged as an impediment to research and development for radiofrequency machine learning.

ORI agrees that the development of new and innovative spectrum sharing techniques, allowing increased co-existence among users and services, will improve spectrum management. Spectrum usage information is required in order to develop new spectrum sharing techniques. This is true whether or not machine learning is used either in the process or in the product. In other words, even if only ordinary humans had the task of improving spectrum sharing over and above what we have today, those humans would still need spectrum usage information to achieve their goal.

Without good spectrum usage information, neither machine learning nor human architects will be able to confidently produce quality results. The most common outcome of making best guesses in the absence of spectrum usage information is highly conservative spectrum sharing arrangements that do not fully utilize spectrum, overly restrict licensees, and prevent innovation in the field.

Central Question

We want a more sophisticated knowledge of non-Federal spectrum usage. The central question of this inquiry is: how can we take advantage of modern capabilities for gaining this knowledge in a cost-effective, accurate, scalable, and actionable manner?

In addition to the other spectrum monitoring efforts listed in this inquiry, we can start with the concepts established by the Spectrum Monitoring Pilot Program from NTIA/NIST.

This program measured spectrum occupancy with standardized distributed receivers reporting to Measured Spectrum Occupancy Databases. These databases publish the metadata of their measurements so that measured data can be retrieved over https:// connections. The concepts of federation are used in order to avoid inefficient and expensive replication of measured data.

The Spectrum Monitoring Pilot Program had two classes of receivers. One was dedicated to radar and the other to communications. The communications receiver approach is an order of magnitude less expensive than the radar receiver approach.

By restricting the hardware package to the less expensive category of communications receiver, using modern software-defined radios with Open Source software, incorporating existing online radios (“WebSDRs”) alongside the standardized stations, and being deliberate and innovative about incentives, we assert that a low-resolution receiver network that can produce useful spectrum usage information is achievable.

A Low Resolution Receiver Network – Why?

Why low resolution? Because a broad heatmap of spectrum usage, even at low resolution, is valuable in a different way for spectrum management purposes than a small amount of high resolution data from one service in one geographical area.

The current situation is one of essentially no real-time spectrum usage information. Even if we simply had noise floor measurements across the bands and across the country, and even if those measurements were gathered from stations of varying quality, we would have an immense improvement in our capacity to intelligently manage our spectrum over having no measurements at all.

Towards a Weather Map of the National Noise Floor

Noise floor is the measure of noise power per unit bandwidth. Getting a snapshot of something like a National Noise Floor, comparable to a national radar weather map, needs a diversity of radio receivers. We need to be able to measure or estimate power spectral density from these receivers. Services with intermittent use must be measured often enough to produce minimally accurate data, or those services must be covered with alternate techniques.

As an example of where alternate techniques can work, reports of FT8 mode contacts on the 17 meter Amateur Radio Service band can be obtained with Grid Tracker (https://gridtracker.org/). These text-based reports can be used to estimate the power spectral density at the radios running Grid Tracker. Reports from radios can be used to create a map of real-time spectrum usage without directly measuring the spectrum. These are estimates, but they are based on the measured signal-to-noise reports that all the radios are giving each other in each 15-second exchange.
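
To make the arithmetic concrete, here is a minimal Python sketch of this kind of estimate. It assumes FT8 signal-to-noise reports are referenced to a 2500 Hz noise bandwidth and that the reporting station knows (or assumes) its own noise floor; the function and field names are illustrative and are not part of WSJT-X or Grid Tracker.

import math
from dataclasses import dataclass

# Assumptions for this sketch: FT8 SNR reports are referenced to a 2500 Hz
# noise bandwidth, and the station's noise floor is known or assumed.
FT8_SNR_REF_BW_HZ = 2500.0   # reference bandwidth for reported SNR
SUBBAND_BW_HZ = 3000.0       # audio passband typically monitored for FT8

@dataclass
class Decode:
    freq_offset_hz: float    # audio offset of the signal within the sub-band
    snr_db: float            # SNR as reported in the decode text

def estimate_subband_psd_dbm_per_hz(decodes, noise_floor_dbm_per_hz=-140.0):
    """Rough average power spectral density across the sub-band for one
    15-second FT8 cycle, derived from text reports alone (no IQ samples)."""
    n0_mw_per_hz = 10 ** (noise_floor_dbm_per_hz / 10)
    total_mw = n0_mw_per_hz * SUBBAND_BW_HZ          # noise power in the sub-band
    for d in decodes:
        # Reported SNR (re 2500 Hz) converted back to an absolute signal power.
        total_mw += n0_mw_per_hz * FT8_SNR_REF_BW_HZ * 10 ** (d.snr_db / 10)
    return 10 * math.log10(total_mw / SUBBAND_BW_HZ)

# Three decodes heard in one 15-second cycle:
cycle = [Decode(440, -12), Decode(1210, 3), Decode(2050, -20)]
print(round(estimate_subband_psd_dbm_per_hz(cycle), 1), "dBm/Hz (estimated)")

The point is not precision; it is that a usable estimate falls out of text reports that stations already generate.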

We can compare this type of observation to eavesdropping on roomfuls of people talking to each other, and writing down how many conversations (and attempted conversations) there were. Multiple eavesdroppers can spread through the house, combine their notes, and show where the conversational density was highest.

What does this look like on the 17 meter Amateur Radio Service allocation with a typical FT8 station?

A stream of which stations contacted which other stations is reported from a single radio. The stream consists of all the contacts that the radio has heard from that location. The radio is not directly contacting all of these other stations, but it can hear them all, and it tracks who is trying to talk to whom, and when. Any radio on the band can act like the eavesdropper above.

Open Source software (called WSJT-X) controls the radio and demodulates and decodes all received transmissions across the entire sub-band. WSJT-X does include a spectrum waterfall display, which could be used to obtain the power spectral density of all the simultaneous transmissions, but we do not have to do this. We instead use another commonly installed Open Source software program (called Grid Tracker), which takes the text output of WSJT-X and provides a text list of active stations and their reported signal power. This list can produce a calculated estimate of the power spectral density in the band. It is less math-intensive to work with text-based signal reports for fixed formal signals like FT8 than it is to use a spectrum analyzer waterfall or deal with the IQ recordings of a radiometer.
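
As a rough illustration of how little processing this takes, the short Python sketch below tallies how many signals were heard in each 15-second cycle and their median reported SNR. The input line format is a simplified stand-in for WSJT-X decode text, not the exact file layout.

from collections import defaultdict
from statistics import median

# Simplified stand-in format: "<HHMMSS> <SNR dB> <audio Hz> <message>"
sample_lines = [
    "194500 -12 1440 CQ W1ABC FN42",
    "194500  +3 1210 K6XYZ W1ABC -08",
    "194515 -20  650 CQ VE3DEF FN03",
]

def tally_by_cycle(lines):
    cycles = defaultdict(list)
    for line in lines:
        fields = line.split()
        cycle_time, snr_db = fields[0], int(fields[1])
        cycles[cycle_time].append(snr_db)
    return cycles

for cycle_time, snrs in sorted(tally_by_cycle(sample_lines).items()):
    print(f"{cycle_time}: {len(snrs)} signals heard, median SNR {median(snrs)} dB")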

Adding more radios (more eavesdroppers) improves coverage.

Does this exact network exist today? It almost does. Instances of Grid Tracker, very commonly installed alongside WSJT-X and running whenever the station is active, already report all of this information, but as of today they do not have coordinated reporting. However, since the software is Open Source, adding a function to opt in, do some math, and donate the data to a server that produces a National Noise Floor snapshot, for this particular mode, is achievable.
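
Such an opt-in reporting function could be very small. The sketch below is purely hypothetical: the server URL and JSON fields are invented for illustration, and no such reporting feature currently exists in Grid Tracker.

# Hypothetical opt-in "donate my aggregated report" function.
# The endpoint and JSON fields are invented for illustration only.
import json
import urllib.request

def donate_report(grid_square, band_mhz, cycle_utc, signals_heard, est_psd_dbm_hz):
    payload = {
        "grid": grid_square,
        "band_mhz": band_mhz,
        "cycle_utc": cycle_utc,
        "signals_heard": signals_heard,
        "est_psd_dbm_per_hz": est_psd_dbm_hz,
    }
    req = urllib.request.Request(
        "https://example.org/noise-floor/api/reports",   # placeholder URL
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# donate_report("DM12", 18.100, "2023-10-09T19:45:00Z", 14, -135.6)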

This example outlines a proof of concept for a very tiny slice of the HF spectrum at 18.100 MHz, but it shows the advantage of using existing Open Source software, existing radios, and existing communities of licensed operators. Incentives for licensees to participate could range from the simple opportunity to defend their licensed spectrum to recognition awards for donating data consistently over time.

Achieving Aggregated Wireless Sensing

How can we broaden this concept? First, leverage existing online radio receivers, such as broadband WebSDRs. See http://www.websdr.org/ for deployed examples. Power spectral density measurements or estimates can be obtained from receivers already on the air and already connected to the Internet.

An anticipated and understandable objection is that the multitude of WebSDRs across the country are not calibrated stations and they are not standardized. An owner could take one down at any time. A new one might be put on the air tomorrow. The answer to these objections is that the aggregation of these observations, even if the observations are of varying quality, provides immense value in efforts to improve spectrum management, because these receivers can produce real-time spectral usage information with no additional radio hardware investment.
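
One simple way to aggregate observations of varying quality is to reduce them per grid square with a robust statistic, so that a few poorly calibrated stations cannot dominate a cell. A minimal sketch, with illustrative field names and made-up numbers:

# Aggregate noise-floor reports of varying quality into a coarse heatmap by
# grid square, using the median so outliers do not dominate a cell.
from statistics import median
from collections import defaultdict

reports = [
    {"grid": "DM12", "band_mhz": 18.1, "noise_dbm_hz": -136.2},
    {"grid": "DM12", "band_mhz": 18.1, "noise_dbm_hz": -128.5},  # noisier station
    {"grid": "FN42", "band_mhz": 18.1, "noise_dbm_hz": -141.0},
]

def heatmap_cells(reports, band_mhz):
    cells = defaultdict(list)
    for r in reports:
        if r["band_mhz"] == band_mhz:
            cells[r["grid"]].append(r["noise_dbm_hz"])
    return {grid: median(values) for grid, values in cells.items()}

print(heatmap_cells(reports, 18.1))
# {'DM12': -132.35, 'FN42': -141.0}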

We should, of course, not stop there. We must commit to a both/and and not an either/or approach to answer the central question of this inquiry.

Second, deploy inexpensive, standardized, and calibrated receivers to libraries, schools, post offices, and any other institution or organization that can be either incentivized or mandated.

For a model of an Open Source standardized distributed receiver system producing real-world practical radio results, please refer to the SatNOGS project at https://satnogs.org/

What are some standardized station examples that we could deploy in the United States to achieve the goals of this inquiry?

An Open Source PLUTO SDR running Open Source Maia software creates an inexpensive spectrum analyzer with a built-in webserver. The addition of the federated reporting functions is possible because the source code for Maia can be modified to include these additional functions. Maia can be found at https://maia-sdr.org/. Documentation for the standard PLUTO firmware (which is largely replaced by the Maia firmware) can be found at

https://github.com/analogdevicesinc/plutosdr-fw

and documentation for the PLUTO hardware can be found at

https://wiki.analog.com/university/tools/pluto/hackers

A PLUTO/Maia package can cover frequencies from 70 MHz to 6 GHz. It would require one or more antennas (depending on how many bands are to be monitored by that station), a power supply, a weatherproof enclosure, mechanical attachments, and cables. A proof of concept would be expected to cost less than the Spectrum Monitoring Pilot Program communications receiving station proof of concept, which came in at $6,000 according to “An Overview of the NTIA/NIST Spectrum Monitoring Pilot Program”.

This can be read at

https://its.ntia.gov/umbraco/surface/download/publication?reportNumber=CottonSpectMonIwssSubmitted.pdf

A second and even less expensive example of a standardized station would be an RTL-SDR

https://www.rtl-sdr.com/about-rtl-sdr/

and a Raspberry Pi

https://www.raspberrypi.com/

running Linux. This kit can use a large number of Open Source software-defined radio packages. It can be tuned to any of the bands in its operating range of 2.4 MHz to 2 GHz. For a sweep of the entire operating range, multiple antennas and combiners would be necessary, along with some additional equipment and software.
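
As one concrete example of what such a kit can already do with existing Open Source tools, the rtl_power utility distributed with the rtl-sdr software can sweep a frequency range and write CSV. A short Python sketch of driving it and summarizing the output is below; the exact CSV column layout should be checked against the installed rtl_power version.

# Illustrative sketch: drive rtl_power (part of the Open Source rtl-sdr tools)
# to sweep 118-137 MHz once and summarize the resulting CSV. Verify the CSV
# column layout against your installed rtl_power version.
import csv
import subprocess

OUTFILE = "sweep.csv"

# One-shot sweep: 25 kHz bins, 10 s integration, exit after a single pass.
subprocess.run(
    ["rtl_power", "-f", "118M:137M:25k", "-i", "10", "-1", OUTFILE],
    check=True,
)

with open(OUTFILE, newline="") as f:
    for row in csv.reader(f):
        # Typical rows: date, time, hz_low, hz_high, hz_step, samples, dB, dB, ...
        hz_low, hz_high = float(row[2]), float(row[3])
        levels = [float(x) for x in row[6:]]
        avg_db = sum(levels) / len(levels)
        print(f"{hz_low/1e6:.3f}-{hz_high/1e6:.3f} MHz: avg {avg_db:.1f} dB")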

The WebSDRs combined with the standard package stations form a heterogeneous distributed receiver network. In aggregate, with enough stations, the resulting network can provide useful real-time reports of spectrum usage information. A centralized visualization of actual spectrum usage, even if not real-time, would be very informative. If a picture is worth a thousand words, then a video is worth a thousand pictures. People seeing gaps in the data may be motivated to put up or sponsor a station to fill in the gaps, similar to the effect we see with personal weather stations that measure temperature, wind speed, and other meteorological aspects.

TAC Working Group as Asset

The Dynamic Spectrum Allocation Working Group of the Technological Advisory Council of the Federal Communications Commission could provide advisory feedback on real-time spectral usage information obtained through opportunistic and inexpensive sensor networks, invite speakers to present about the topic, and give specific recommendations. Leveraging an existing Working Group with familiarity in this particular topic would be an efficient use of already-available expert advice.

Conclusion

A National Noise Floor heatmap, even if low resolution, is achievable and it is valuable. Any reasonable real-time data, whether obtained opportunistically or purposefully, is an enormous step forward compared to no data at all.

There are drawbacks to low resolution data. The limits of the resolution must be acknowledged. The measurements have to at least be reasonable, meaning that a snapshot of noise floor includes enough information that lower power signals are not completely missed. For each frequency allocation measured, a subject matter expert in that allocation would be expected to compare the real-time spectrum usage information to a model of expected spectrum usage. The difference between this theoretical or calculated model and the real-time spectrum usage information is valuable and informative in many ways. A subject matter expert would be able to explain observed differences, explain whether and why any difference was sufficient reason to make spectrum management adjustments, and provide feedback for improved spectrum sensing. There is no one-size-fits-all solution for either the measurement stations involved or the allocations they are measuring.
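
A minimal sketch of that comparison step, with invented occupancy numbers and band labels purely for illustration:

# Illustrative comparison of measured vs. modeled occupancy per allocation.
expected = {"17m amateur": 0.15, "land mobile": 0.40}   # modeled fraction of time occupied
measured = {"17m amateur": 0.35, "land mobile": 0.38}   # from the sensor network

for band in expected:
    delta = measured[band] - expected[band]
    flag = "review" if abs(delta) > 0.10 else "ok"
    print(f"{band}: expected {expected[band]:.0%}, measured {measured[band]:.0%}, "
          f"difference {delta:+.0%} -> {flag}")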

The architecture for gaining visibility of spectral usage has been previously piloted in the Spectrum Monitoring Pilot Program. This and other excellent prior work can be adapted, and citizens can be incentivized to participate in order to scale up the sensor network. Incentives range from the simple fact of being able to individually contribute directly towards the defense of a spectral allocation, to awards or recognition for the technical achievement of constructing and calibrating a station to a published standard, to a scoreboard of who provided the most consistent reports of real-time spectral information over specific lengths of time.

There is a large amount of Open Source software and hardware that can be used to reduce costs and reward high quality collaborative and cooperative work. A “lower-tech, inexpensive, diverse, and start now” approach, instead of a “high-tech, expensive, maybe sometime in the future” one, is cost-effective, accurate (enough), scalable, and actionable.

Respectfully,
Michelle Thompson
CEO ORI

AI vs. Human: ChatGPT vs. ORI Open Source Designers

Greetings all,

You know that ORI has been active in a variety of regulatory work for several years now. One of those areas has been in AI/ML and the effect on communications. We served on the US FCC TAC in 2022, have presented and written on the subject, and are working towards an AI/ML Handbook for Amateur Radio.

You may have seen news about OpenAI GPT, or ChatGPT.

https://openai.com/api/

A lot of people are testing this chatbot out for reverse engineering and for writing software. There have been some successes in cases where the model would have had access to code or documentation about protocols, observations, or technical behavior during training, and where the requests match up well with what the model learned.

For example, a friend asked the chatbot to create a Python script to access the webcam on their computer, and they shared a still image from the resulting feed. Another example from another friend resulted in better reverse engineering results than they had gotten with either IDA or Ghidra over many weeks.

So, how does ChatGPT handle VHDL?

We have something specific to compare against, since we just completed a COBS decoder in VHDL for our open source HEO/GEO transponder work for #Haifuraiya.

Let’s start off simple. Here’s the first dialogue with the chatbot, and the results.

-=-=-=-=-=-=-=-=-=-=-=-=-=-

Abraxas3d: Design a COBS decoder in VHDL

Sure, here’s an example of a COBS decoder in VHDL:

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity COBS_decoder is
  Port (
    input_data : in std_logic_vector;
    decoded_data : out std_logic_vector
  );
end COBS_decoder;

architecture Behavioral of COBS_decoder is
  -- Declare a variable to keep track of the current index
  -- in the input and output data streams
  variable current_index : integer := 1;

  -- Declare a variable to store the length of the
  -- run of zeros in the input data
  variable zero_run_length : integer := 0;
begin
  -- Iterate over the input data
  for i in input_data'range loop
    -- If the current input value is a zero, increment
    -- the zero run length variable
    if input_data(i) = '0' then
      zero_run_length := zero_run_length + 1;
    -- Otherwise, the current input value is not a zero,
    -- so we've reached the end of the zero run
    else
      -- Set the decoded value at the current index to
      -- the length of the zero run
      decoded_data(current_index) := conv_std_logic_vector(zero_run_length, 8);

      -- Increment the current index and reset the zero
      -- run length for the next iteration
      current_index := current_index + 1;
      zero_run_length := 0;
    end if;
  end loop;
end Behavioral;

This code decodes the input data using the COBS (Consistent Overhead Byte Stuffing) algorithm, which involves replacing runs of zeros in the input data with a single byte that indicates the length of the zero run. For example, the input ‘101001000’ would be decoded as ‘103101’.

This decoder iterates over the input data and maintains a variable zero_run_length that is incremented for each zero in the input data. When a non-zero value is encountered, the length of the zero run is written to the output data and the zero_run_length variable is reset.

I hope this helps! Let me know if you have any other questions.”

-=-=-=-=-=-=-=-=-=-=-=-=-=-

It knew what COBS meant. It produced code, with comments, and a brief “design document”, with examples.

The code produced is not licensed or cited.

Our open source HEO transponder project uses AXI-based blocks in the FPGA. So, let’s add AXI to the request.

-=-=-=-=-=-=-=-=-=-=-=-=-=-

Abraxas3d: Design a COBS decoder with AXI interface in VHDL

Sure, here’s an example of a COBS decoder with an AXI interface in VHDL:

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;

-- AXI interface definitions
library unisim;
use unisim.vcomponents.all;

entity COBS_decoder is
  Port (
    AXI_IN  : in  AXI4Stream;
    AXI_OUT : out AXI4Stream
  );
end COBS_decoder;

architecture Behavioral of COBS_decoder is
  -- Declare a variable to keep track of the current index
  -- in the input and output data streams
  variable current_index : integer := 1;

  -- Declare a variable to store the length of the
  -- run of zeros in the input data
  variable zero_run_length : integer := 0;
begin
  -- Iterate over the input data
  for i in AXI_IN.data'range loop
    -- If the current input value is a zero, increment
    -- the zero run length variable
    if AXI_IN.data(i) = '0' then
      zero_run_length := zero_run_length + 1;
    -- Otherwise, the current input value is not a zero,
    -- so we've reached the end of the zero run
    else
      -- Set the decoded value at the current index to
      -- the length of the zero run
      AXI_OUT.data(current_index) := conv_std_logic_vector(zero_run_length, 8);

      -- Increment the current index and reset the zero
      -- run length for the next iteration
      current_index := current_index + 1;
      zero_run_length := 0;
    end if;
  end loop;

  -- Set the valid and last flags in the output AXI stream
  AXI_OUT.valid <= '1';
  AXI_OUT.last  <= '1';
end Behavioral;

This code is similar to the previous example, but it uses an AXI (Advanced eXtensible Interface) interface for the input and output data. The AXI interface is a standard interface used in many FPGA designs, and it provides a way to stream data into and out of the decoder.

In this code, the input data is read from the AXI_IN.data port, and the decoded output is written to the AXI_OUT.data port. The valid and last flags are also set in the output AXI stream to indicate that the data is valid and that it is the last data in the stream.

I hope this helps! Let me know if you have any other questions.

-=-=-=-=-=-=-=-=-=-=-=-=-=-

It retained the previous request and updated the design. Again, no license.

Here’s a review of the COBS implementation from Paul Williamson KB5MU.

“I don’t know what this is, but it isn’t a COBS decoder with AXI interfaces.

It does seem at first glance to be some sort of thing with AXI interfaces, but it does not implement an AXI interface at all. It just relies on a (possibly imaginary) library type called AXI4Stream that somehow magically does AXI interfaces, in a way that’s both implausibly smart and inexplicably dumb. I think the classic phrase “not even wrong” applies here.

The AXI4Stream object apparently has a member .data that is an array of bytes, which might make sense if the entire AXI transfer is being treated as a single entity here. The code iterates through the input .data using the 'range attribute, as if the AXI4Stream object has a defined fixed transfer length.

Which it can’t have, if it’s a general purpose AXI stream implementation.

So, if this was ever anything real, it must have been a special purpose AXI implementation with a fixed transfer length. Assuming that the AXI4Stream .data actually works that way, the entity does do some sort of transformation on the input data to create the output data.

We asked for that to be a COBS decoder, but the function that it actually implements bears only the slightest resemblance to a COBS decoder. What it actually does is output a stream of words that are the lengths of the runs of zeroes in the input data. Nonzero data does not get passed through at all.

Instead, a run of nonzero bytes gets translated into a run of zeroes, as if two adjacent nonzero bytes meant nothing but a zero-length run of zeroes.

Here’s an example of what this code seems to do:

  Input  .data:  1 2 3 4 0 0 0 0 1 2 3 4
  Output .data:  0 0 0 0         4 0 0 0

That’s certainly not a COBS decoder, or even anything useful at all.

Note that the output is variable length, but never longer than the input.
We had to assume that the AXI4Stream object had a fixed .data size to get this far, but now it has to have a variable data size for the output.

Supposing we had successfully suspended disbelief up to here, we now examine the final two lines of code in the Behavioral architecture. It sets the .valid and .last members of the AXI_OUT port to ‘1’.

That makes absolutely no sense if it’s supposed to encapsulate an entire AXI stream transfer. Those signals only make sense if we’re treating one word at a time, as you’d expect to see in a normal, general-purpose AXI stream object.

The extremely verbose commenting style in this code suggests that the original source material was from an introductory tutorial. Maybe the AI has been tuned to favor heavily commented code, and ends up picking up partially completed junk from the middle of tutorials?

Anyhow, this is not an example of automated programming in response to a high-level query. This is gibberish that succeeds only in having a superficial resemblance to the code requested.”

So, does this work?

Let’s try it out in the lab!

The first thing we notice is that there is an obvious syntax error with a missing semicolon. Aside from that, there is no AXI4Stream port type. It’s not recognized. Vivado appears to be disappointed in this file, and wants little to do with this code. This is where we should stop, but I spent some time looking to see if the ports were at least close to being declared correctly.

The model appears to have taken some custom or maybe enumerated type from somewhere, and come up with a custom AXI4Stream type. This is not recognized by the unisim library right off the bat, and that’s the sort of thing that automated code writing needs to get right.
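
For readers unfamiliar with the algorithm, here is what a correct COBS decode actually does, shown as a short byte-oriented Python reference sketch. It is illustrative only, independent of any AXI interface, and is not the ORI VHDL design shown below. Frames are normally delimited on the wire by a 0x00 byte, which is assumed to be stripped before decoding.

# Reference sketch of COBS decoding for one frame (framing 0x00 delimiter
# already removed). Each block starts with a code byte giving the distance
# to the next zero in the decoded data.
def cobs_decode(encoded: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(encoded):
        code = encoded[i]                 # block header
        if code == 0:
            raise ValueError("zero byte inside encoded data")
        block = encoded[i + 1 : i + code] # code - 1 literal non-zero bytes
        if len(block) != code - 1:
            raise ValueError("truncated block")
        out += block
        i += code
        if code < 0xFF and i < len(encoded):
            out.append(0)                 # the zero this block header replaced
    return bytes(out)

# Example: the COBS encoding of b"\x11\x22\x00\x33" is b"\x03\x11\x22\x02\x33"
assert cobs_decode(b"\x03\x11\x22\x02\x33") == b"\x11\x22\x00\x33"

Nothing in the ChatGPT output above performs this unstuffing of counted blocks, which is the whole point of the algorithm.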

Let’s look at the code written by humans from ORI. The decoder can be found here:

https://github.com/Abraxas3d/fpga_cores/blob/master/src/decoder.vhd

-=-=-=-=-=-=-=-=-=-=-=-=-=-

-- Company: Open Research Institute, Inc.
-- Engineer: Skunkwrx, Abraxas3d
-- Design Name: COBS protocol decoder
-- Module Name: decoder - Behavioral
-- Project Name: Phase 4 "Haifuraiya"
-- Target Devices: 7000 Zynq
-- Tool Versions: 2021.1
-- Description: COBS protocol decoder.

-- Additional Comments: This work is Open Source and licensed using CERN OHL v2.0

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

use IEEE.NUMERIC_STD.ALL;

--Entity Declaration
entity decoder is
    Port ( rst      : in STD_LOGIC;
           clk      : in STD_LOGIC;
           s_tdata  : in STD_LOGIC_VECTOR (7 downto 0);
           s_tlast  : in STD_LOGIC;
           s_tvalid : in STD_LOGIC;
           s_tready : out STD_LOGIC;
           m_tdata  : out STD_LOGIC_VECTOR (7 downto 0);
           m_tlast  : out STD_LOGIC;
           m_tvalid : out STD_LOGIC;
           m_tready : in STD_LOGIC);
end decoder;

--Architecture
architecture Behavioral of decoder is

-- internal copy of s_tdata
    signal input_data          : STD_LOGIC_VECTOR (7 downto 0);

    -- s_tdata delayed by one clk where s_tvalid is high
    signal input_data_d        : STD_LOGIC_VECTOR (7 downto 0);

    -- internal version of s_tlast
    -- not used for anything
    -- We create our own m_tlast from the COBS framing instead
    signal s_tlast_i           : STD_LOGIC;

    -- internal version of input handshaking signal s_tvalid
    signal s_tvalid_i          : STD_LOGIC;

    -- s_tvalid delayed by one clk
    -- thus, a version of s_tvalid aligned with input_data_d
    signal s_tvalid_i_d        : STD_LOGIC;

    -- s_tvalid delayed by two clks
    -- thus, a version of s_tvalid aligned with output_data
    signal s_tvalid_i_d_d      : STD_LOGIC;

    -- countdown of bytes in this sequence
    -- loads from input_data_d when it represents the frame count
    --   as determined by signal counter_load
    -- contains the count while m_tdata contains the implied 0 at
    --   the end of a sequence, or the next count that replaced it
    -- counts down thereafter, to contain 1 when the last non-zero
    --   byte of the sequence is on m_tdata
    -- allowed to count down to 0, but not beyond
    signal count               : STD_LOGIC_VECTOR (7 downto 0);

-- enable to load count from input_data_d on this clk edge
-- two cases detected:
--    * first valid non-zero byte after a frame separator
--    * first valid byte after count is exhausted
-- allowed to be high for multiple cycles
    signal counter_load        : STD_LOGIC;

    -- counter_load delayed by one clk where s_tvalid is high
    -- used to identify the first valid data byte of any sequence,
    --   for purposes of computing m_tvalid (via pre_tvalid)
    signal counter_load_d      : STD_LOGIC;

    -- detection of a valid frame separator (zero) byte in input_data_d
    signal frame_sep           : STD_LOGIC;

    -- frame_sep delayed by one clk where s_tvalid is high
    -- used to compute counter_load
    -- used to compute rising edge of pre_tvalid
    signal frame_sep_d         : STD_LOGIC;

    -- frame_sep_d delayed by an additional clk (not depending on s_tvalid)
    -- used to find the first non-zero byte of the new frame
    signal frame_sep_d_d       : STD_LOGIC;

    -- move the frame_sep signal that occurred during m_tready low
    -- out to the first cycle when m_tready is high again
    signal use_saved_frame_sep : STD_LOGIC;

    -- flag to remember that the frame count for this sequence was 255,
    --   to handle the special case that such a sequence does not have
    --   an implied zero byte at the end.
    -- set when loading count with 255
    -- cleared when the counter is reloaded with anything else
    signal case_255            : STD_LOGIC;

    -- internal version of m_tdata output
    signal output_data         : STD_LOGIC_VECTOR (7 downto 0);

    -- internal version of m_tlast output
    -- high when the last byte of a frame is valid on m_tdata
    signal m_tlast_i           : STD_LOGIC;

    -- delayed versions of m_tlast
    signal m_tlast_i_d         : STD_LOGIC;
    signal m_tlast_i_d_d       : STD_LOGIC;

    -- intermediate result for m_tvalid.
    -- high across all data bytes of each sequence on m_tdata
    -- does not go low for bytes on m_tdata corresponding to
    --   bytes invalidated by s_tvalid.
    signal pre_tvalid          : STD_LOGIC;

    -- internal version of m_tvalid output.
    -- pre_tvalid with periods of low s_tvalid_d_d punched out
    signal m_tvalid_i          : STD_LOGIC;

    -- internal version of m_tready input
    -- also the internal version of s_tready output
    -- passes through m_tready to s_tready with no clk delays
    signal m_tready_i          : STD_LOGIC;

    -- constant byte value 0xFF, for comparison purposes
    signal all_ones            : STD_LOGIC_VECTOR(input_data'range) := (others => '1');

    -- constant byte value 0x00, for comparison purposes
    signal all_zeros           : STD_LOGIC_VECTOR(input_data'range) := (others => '0');

begin

    -- asynchronous assignments

    frame_sep <= '1' when input_data_d = all_zeros and s_tvalid_i_d = '1'
                else '0';

    m_tlast_i <= '1' when ((frame_sep = '1' and m_tvalid_i = '1' and m_tready = '1'))
                else '0';

    counter_load <= '1' when (input_data_d /= all_zeros and frame_sep_d = '1' and s_tvalid_i_d = '1')   -- start of frame
                          or (to_integer(unsigned(count)) = 1 and s_tvalid_i_d = '1')   -- start of next sequence in frame
                else '0';

    m_tvalid_i <= '1' when ((pre_tvalid = '1' and s_tvalid_i_d_d = '1'    -- usual case, if input_data was valid
                        and not (to_integer(unsigned(count)) = 1 and s_tvalid_i_d = '0')) -- defer last byte; might be m_tlast
                      or (pre_tvalid = '1' and to_integer(unsigned(count)) = 1
                        and s_tvalid_i_d = '1' and s_tvalid_i_d_d = '0')) -- pick up that deferred last byte
                else '0';


    s_tready <= m_tready_i;
    m_tdata <= output_data;
    input_data <= s_tdata;
    s_tvalid_i <= s_tvalid;
    m_tready_i <= m_tready;
    m_tvalid <= m_tvalid_i;
    m_tlast <= m_tlast_i;
   
-- processes

    set_case_255 : process (rst, clk)
    begin
        if rst = '1' then
            case_255 <= '0';
        elsif rising_edge(clk) and m_tready_i = '1' then
            if counter_load = '1' and input_data_d = all_ones then
                case_255 <= '1';
            elsif counter_load = '1' and input_data_d /= all_ones then
                case_255 <= '0';
            end if;
        end if;
    end process set_case_255;

   
    delay_s_tvalid : process (rst, clk)
    begin
        if rst = '1' then
            s_tvalid_i_d <= '0';
            s_tvalid_i_d_d <= '0';
        elsif rising_edge(clk) and m_tready_i = '1' then
            s_tvalid_i_d <= s_tvalid_i;
            s_tvalid_i_d_d <= s_tvalid_i_d;
        end if;
    end process delay_s_tvalid;
   
   
   
    create_pre_tvalid : process (rst, clk)
    begin
        if rst = '1' then
            counter_load_d <= '0';
            pre_tvalid <= '0';
        elsif rising_edge(clk) and m_tready_i = '1' then
            if s_tvalid_i_d = '1' then
                counter_load_d <= counter_load;
                if (frame_sep_d_d = '1' and frame_sep_d = '0')            -- normal last byte of frame
                or (counter_load_d = '1' and frame_sep_d = '0')           -- normal first byte of a sequence
                then
                    pre_tvalid <= '1';
                end if;
            end if;
            if frame_sep = '1'
            then
                pre_tvalid <= '0';
            end if;
            if counter_load = '1' and case_255 = '1' then
                pre_tvalid <= '0';
            end if;
        end if;
    end process create_pre_tvalid;
     
     

    delay_m_tlast_i : process (rst, clk)
    begin
        if rst = '1' then
            m_tlast_i_d <= '0';
            m_tlast_i_d_d <= '0';
        elsif rising_edge(clk) and m_tready_i = '1' then
            m_tlast_i_d <= m_tlast_i;
            m_tlast_i_d_d <= m_tlast_i_d;
        end if;
    end process delay_m_tlast_i;

    set_counter : process (rst,clk)
    begin
        if rst = '1' then
            count <= (others => '0');
            frame_sep_d <= '0';
            frame_sep_d_d <= '0';
        elsif rising_edge(clk) and m_tready_i = '1' then
            frame_sep_d_d <= frame_sep_d;
            if s_tvalid_i_d = '1' then
                frame_sep_d <= frame_sep;
                if counter_load = '1' then
                    count <= input_data_d;
                elsif count /= all_zeros
                then
                    count <= STD_LOGIC_VECTOR(unsigned(count) - 1);
                end if;
            end if;
        end if;
    end process set_counter;

    create_output : process (rst, clk)
    begin
        if rst = '1' then
            output_data <= (others => '0');
        elsif rising_edge(clk) and m_tready_i = '1' then
            if counter_load = '1'
            then
                output_data <= all_zeros;
            elsif s_tvalid_i_d = '1' then
                output_data <= input_data_d;
            end if;
        end if;
    end process create_output;
   
 
   
    selective_delay_of_input_data : process (rst,clk)
    begin
        if rst = '1' then
            input_data_d <= all_zeros;
        elsif rising_edge(clk) and m_tready_i = '1' then
            if s_tvalid_i = '1' then
                input_data_d <= input_data;
            end if;
        end if;
    end process selective_delay_of_input_data;
   

end Behavioral;

-=-=-=-=-=-=-=-=-=-=-=-=-=-

This codebase includes not just the source code, but also a testbench file, a design document, timing diagrams, a Python notebook that creates test vectors, and Python code that compares the results produced by the testbench against those test vectors.

None of this sort of thing is produced by ChatGPT.

There are some functions, in some languages, that can be correctly written by AI at this point.

This is a situation similar to algorithmic musical composition. Algorithms can come up with useful ideas, motifs, sketches, or traverse a search space more quickly than a human. They can help a composer, but do not replace one. Algorithms can produce some sparkling gems of ideas. These are fresh cuts and views. This is mixed in with a lot of music that is just plain wrong.

If your programming job happens to be in a well-defined area that is easily automated, then you do have something to worry about today.

All jobs are a target for eventual automation with AI/ML. Yes, all of them. As AI/ML becomes more capable, it will automate even things like testbench writing (HDL Coder already does something like this), support code (like a Python notebook that creates a solid set of test vectors for a testbench), and design documents. All of us should be aware that what we do may be replaced by an algorithm some number of tomorrows from now.

That day is not yet “today” with ChatGPT, when it comes to writing #VHDL.

What do you think? We’re interested in your experience and opinions with AI/ML writing code.