airpack.pytorch.model
Contains neural network models for signal classification implemented in the PyTorch framework.
When getting started with this package, you can use these models as-is: the default sizes and parameter values are reasonable for the bundled datasets. Next, you can explore how tuning the values provided as arguments to these models ("model hyperparameters") affects their performance and accuracy. Finally, each model is self-contained, so you can copy it into your own code to add layers, change activation functions, and experiment with entirely different model structures!
Module Contents
class airpack.pytorch.model.default_network(input_len, output_len, num_filt=16, strides=(1, 1), filt_size=(32, 2), pool_size=(16, 1), fc_num_nodes=[128, 64, 32], padding=(0, 0))

Bases: torch.nn.Module
Creates a convolutional signal classifier model for chunks of signal data that are a specified number of samples long.
Required arguments are the input length (number of samples in the input) and the output length (number of signal classes in the classifier). The other, optional arguments are tunable hyperparameters for optimizing the model for specific datasets and/or types of signals. When the optional arguments are set to None, default values that work reasonably well will be used.

This model object is created with default-initialized weights and must be trained before it is used for inference.
Parameters
    input_len – Number of complex samples to be input to the model
    output_len – Number of output classes for this classifier
    num_filt – Number of output filters in the convolution
    strides – Stride within convolutional layers
    filt_size – Kernel/window size of convolutional layers, typically (X, 2) for some X
    pool_size – Kernel/window size of downsampling layers, typically (X, 1) for some X
    fc_num_nodes – List of sizes of fully-connected (FC) layers
    padding – Padding dimensions
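A minimal construction sketch (the input_len and output_len values below are hypothetical; use the sizes that match your own dataset):

    from airpack.pytorch.model import default_network

    # Instantiate with default hyperparameters: 2048 complex samples in, 8 classes out
    # (illustrative sizes only).
    model = default_network(input_len=2048, output_len=8)

    # The optional arguments expose the tunable hyperparameters, e.g. more
    # convolutional filters and a smaller fully-connected stack.
    tuned_model = default_network(
        input_len=2048,
        output_len=8,
        num_filt=32,
        fc_num_nodes=[64, 32],
    )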
forward(self, x_real)

Feed-forward pass of the neural network.
Note
Data must be converted from a complex tensor to a real-valued (torch.float32) tensor before calling this method; see the sketch below.
Parameters
    x_real (torch.float32) – real-valued (non-complex) tensor
Returns
    output
Return type
    torch.Tensor
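A minimal sketch of feeding data through forward(). The complex-to-real conversion below (torch.view_as_real followed by a flatten) is an assumption about the expected layout; the model's own get_dummy_data() is the authoritative reference for the input shape:

    import torch
    from airpack.pytorch.model import default_network

    model = default_network(input_len=2048, output_len=8)   # hypothetical sizes

    # Hypothetical batch of 16 complex IQ captures, 2048 samples each.
    iq_data = torch.randn(16, 2048, dtype=torch.complex64)

    # Convert complex -> float32; the interleaved layout here is an assumption.
    x_real = torch.view_as_real(iq_data).reshape(16, -1)

    # Sanity-check the layout against the model's own dummy tensor before use.
    assert x_real.shape == model.get_dummy_data(batch_size=16).shape, \
        "adjust the conversion to match the model's expected input layout"

    output = model.forward(x_real)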
get_dummy_data(self, batch_size)

Get a tensor (containing random data) of the shape and data type expected by this model. This dummy data tensor is used to print a summary of the model, during tracing for ONNX model export, and in test cases (see the sketch below).
Note
You can always test that the model is sized correctly to pass data by running
model.forward(model.get_dummy_data(batch_size))
Parameters
    batch_size (int) – batch size
Returns
    input_data
Return type
    torch.Tensor
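The documented sizing check, plus a sketch of how the dummy tensor can drive a traced ONNX export (the model sizes and output filename below are arbitrary placeholders):

    import torch
    from airpack.pytorch.model import default_network

    model = default_network(input_len=2048, output_len=8)   # hypothetical sizes
    dummy = model.get_dummy_data(batch_size=1)

    # Verify the model is sized correctly to pass data.
    _ = model.forward(dummy)

    # The same tensor can serve as the tracing input for ONNX export.
    torch.onnx.export(model, dummy, "default_network.onnx")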
airpack.pytorch.model.calc_output_dims(h_in, w_in, c, kernel_size, stride=None, padding=None, dilation=None)

Calculate torch.nn.Conv2d or torch.nn.MaxPool2d output dimensions.

Note
Ref: https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html and https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html
Parameters
    h_in (int) – height dimension of layer input
    w_in (int) – width dimension of layer input
    c (int) – channel dimension of layer output
    kernel_size (tuple) – kernel dimensions
    stride (Optional[tuple]) – stride dimensions
    padding (Optional[tuple]) – padding dimensions
    dilation (Optional[tuple]) – dilation dimensions
Returns
    output dimensions of layer (C, H, W)
Return type
    Tuple[int, int, int]
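For reference, the output-size formula from the linked Conv2d/MaxPool2d documentation, written as a self-contained sketch. The name conv2d_output_dims and its default handling are hypothetical stand-ins, not the library function itself; calc_output_dims presumably computes the same quantities:

    import math
    from typing import Optional, Tuple

    def conv2d_output_dims(h_in: int, w_in: int, c: int,
                           kernel_size: Tuple[int, int],
                           stride: Optional[Tuple[int, int]] = None,
                           padding: Optional[Tuple[int, int]] = None,
                           dilation: Optional[Tuple[int, int]] = None) -> Tuple[int, int, int]:
        """Output size per the PyTorch docs: floor((n + 2p - d*(k - 1) - 1) / s + 1)."""
        # Assumed defaults: Conv2d-style stride of 1 (note MaxPool2d defaults stride to kernel_size).
        stride = stride or (1, 1)
        padding = padding or (0, 0)
        dilation = dilation or (1, 1)
        h_out = math.floor((h_in + 2 * padding[0] - dilation[0] * (kernel_size[0] - 1) - 1) / stride[0] + 1)
        w_out = math.floor((w_in + 2 * padding[1] - dilation[1] * (kernel_size[1] - 1) - 1) / stride[1] + 1)
        return c, h_out, w_out

    # Example with the class defaults: a (32, 2) kernel, stride (1, 1), no padding,
    # on a 2048 x 2 input producing 16 channels:
    print(conv2d_output_dims(2048, 2, 16, (32, 2), stride=(1, 1)))   # -> (16, 2017, 1)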
airpack.pytorch.model.MODEL