Contains neural network models for signal classification implemented in the PyTorch framework.
When getting started with this package, you can use these models as-is: the default sizes and parameter values are reasonable for the bundled datasets. Next, you can explore how tuning the values provided as arguments to these models (“model hyperparameters”) affects their performance and accuracy. Finally, these models are each standalone functions, so you can copy them into your own code to add layers, change activation functions, and experiment with different model structures entirely!
default_network(input_len, output_len, num_filt=16, strides=(1, 1), filt_size=(32, 2), pool_size=(16, 1), fc_num_nodes=[128, 64, 32], padding=(0, 0))¶
Creates a convolutional signal classifier model for chunks of signal data that are a specified number of samples long.
Required arguments are the input length (the number of samples in the input) and the output length (the number of signal classes in the classifier). The remaining optional arguments are tunable hyperparameters for optimizing the model for specific datasets and/or types of signals. When an optional argument is set to
None, a default value is used that works reasonably well.
This model object is created with default-initialized weights and must first be trained before being used for inference.
input_len – Number of complex samples to be input to the model
output_len – Number of output classes for this classifier
num_filt – Number of output filters in the convolution
strides – Stride within convolutional layers
filt_size – Kernel/window size of convolutional layers, typically
(X, 2) for some X
pool_size – Kernel/window size of downsampling layers, typically
(X, 1) for some X
fc_num_nodes – List of sizes of fully-connected (FC) layers
padding – Padding dimensions
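As a rough sketch of how these hyperparameters map onto PyTorch layers (the exact architecture is internal to default_network, so the layer stack and input shape below are assumptions for illustration), the convolutional stage corresponds approximately to:

```python
import torch
import torch.nn as nn

# Default hyperparameter values from the signature above
num_filt, strides = 16, (1, 1)
filt_size, pool_size, padding = (32, 2), (16, 1), (0, 0)

conv = nn.Conv2d(in_channels=1, out_channels=num_filt,
                 kernel_size=filt_size, stride=strides, padding=padding)
pool = nn.MaxPool2d(kernel_size=pool_size)

# A batch of one 1024-sample chunk, real/imag split into a width-2 axis
x = torch.randn(1, 1, 1024, 2)
y = pool(conv(x))
print(y.shape)  # torch.Size([1, 16, 62, 1])
```

Changing filt_size or pool_size directly changes the spatial size of the feature maps that reach the fully-connected layers, which is why these values are exposed as tunable hyperparameters.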
Feed-forward pass of the neural network.
Input data must be converted from a complex tensor to a real-valued tensor before being passed to this method.
x_real (torch.float32) – real-valued (non-complex) input tensor
- Return type
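The required complex-to-real conversion can be done with torch.view_as_real, which splits the real and imaginary parts of each complex sample into a trailing size-2 axis (a minimal sketch; the chunk length here is arbitrary):

```python
import torch

# A chunk of complex IQ samples
iq = torch.randn(1024, dtype=torch.complex64)

# Split real/imag parts into a trailing size-2 axis as float32
x_real = torch.view_as_real(iq)
print(x_real.shape, x_real.dtype)  # torch.Size([1024, 2]) torch.float32
```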
Get a tensor (containing random data) of the shape and data type expected by this model. This dummy data tensor is used to print a summary of the model as well as during tracing for ONNX model export and in test cases.
You can always test that the model is sized correctly to pass data by running
batch_size (int) – batch size
- Return type
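The sizing check works by passing a random tensor of the expected input shape through the model and confirming the output shape. The sketch below uses a toy stand-in model, since the real model's dummy-data helper and input shape are not shown here:

```python
import torch
import torch.nn as nn

# Toy stand-in for the classifier; the real model's dummy-data method
# returns a random tensor of the shape and dtype the model expects
model = nn.Sequential(nn.Flatten(), nn.Linear(1024 * 2, 8))

batch_size = 4
dummy = torch.randn(batch_size, 1024, 2)  # shape the toy model expects

out = model(dummy)
assert out.shape == (batch_size, 8)  # data passes end to end
```

The same dummy tensor pattern is what tracing-based ONNX export uses: the exporter runs the model once on the example input to record the graph.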
calc_output_dims(h_in, w_in, c, kernel_size, stride=None, padding=None, dilation=None)¶
Calculate the output dimensions of a torch.nn.Conv2d or torch.nn.MaxPool2d layer
h_in (int) – height dimension of layer input
w_in (int) – width dimension of layer input
c (int) – channel dimension of layer output
kernel_size (tuple) – kernel dimensions
stride (Optional[tuple]) – stride dimensions
padding (Optional[tuple]) – padding dimensions
dilation (Optional[tuple]) – dilation dimensions
output dimensions of layer (C, H, W)
- Return type
Tuple[int, int, int]
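This calculation follows the standard Conv2d/MaxPool2d shape formula from the PyTorch documentation: out = floor((in + 2·padding − dilation·(kernel − 1) − 1) / stride) + 1, applied per axis. A self-contained sketch (the default values for stride, padding, and dilation are assumptions; they match torch.nn.Conv2d defaults, while MaxPool2d defaults its stride to the kernel size):

```python
def calc_output_dims(h_in, w_in, c, kernel_size,
                     stride=None, padding=None, dilation=None):
    # Assumed defaults, matching torch.nn.Conv2d
    stride = stride or (1, 1)
    padding = padding or (0, 0)
    dilation = dilation or (1, 1)

    # Standard per-axis output-size formula (floor division)
    h_out = (h_in + 2 * padding[0]
             - dilation[0] * (kernel_size[0] - 1) - 1) // stride[0] + 1
    w_out = (w_in + 2 * padding[1]
             - dilation[1] * (kernel_size[1] - 1) - 1) // stride[1] + 1
    return (c, h_out, w_out)

# 32x32 input, 16 output channels, 3x3 kernel, unit stride, no padding
print(calc_output_dims(32, 32, 16, (3, 3)))  # (16, 30, 30)
```

Chaining this function layer by layer lets you compute the flattened feature size that the first fully-connected layer must accept.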