hls4ml.utils package
Submodules
hls4ml.utils.attribute_descriptions module
Strings holding attribute descriptions.
hls4ml.utils.config module
- hls4ml.utils.config.config_from_keras_model(model, granularity='model', backend=None, default_precision='fixed<16,6>', default_reuse_factor=1, max_precision=None)
Create an HLS conversion config given the Keras model.
This function serves as the initial step in creating a custom conversion configuration. Users are advised to inspect the returned object and tweak the conversion configuration as needed. The returned object can be passed as the hls_config parameter to convert_from_keras_model.
- Parameters:
model – Keras model
granularity (str, optional) –
Granularity of the created config. Defaults to ‘model’. Can be set to ‘model’, ‘type’ or ‘name’.
Granularity can be used to generate a more verbose config that can be fine-tuned. The default granularity (‘model’) will generate config keys that apply to the whole model, so changes to the keys will affect the entire model. ‘type’ granularity will generate config keys that affect all layers of a given type, while the ‘name’ granularity will generate config keys for every layer separately, allowing for highly specific configuration tweaks.
backend (str, optional) – Name of the backend to use
default_precision (str, optional) – Default precision to use. Defaults to ‘fixed<16,6>’. Note, this must be an explicit precision: ‘auto’ is not allowed.
default_reuse_factor (int, optional) – Default reuse factor. Defaults to 1.
max_precision (str or None, optional) – Maximum width precision to use. Defaults to None, meaning no maximum. Note: Only integer and fixed precisions are supported
- Raises:
Exception – If Keras model has layers not supported by hls4ml.
- Returns:
The created config.
- Return type:
[dict]
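As an illustration of how the returned object can be tweaked, the sketch below works on a plain dictionary of the shape typically produced at ‘model’ granularity; the exact keys depend on the chosen backend, and ‘Model’/‘Precision’/‘ReuseFactor’ are assumptions based on the defaults documented above.

```python
# Illustrative sketch only: a config of the shape typically returned by
# config_from_keras_model(model, granularity='model'). The keys
# 'Model'/'Precision'/'ReuseFactor' are assumptions based on the
# documented defaults, not a guaranteed schema.
config = {
    'Model': {
        'Precision': 'fixed<16,6>',   # default_precision
        'ReuseFactor': 1,             # default_reuse_factor
    }
}

# Tweak the configuration before conversion, e.g. widen the precision
# and trade throughput for resources with a larger reuse factor.
config['Model']['Precision'] = 'fixed<18,8>'
config['Model']['ReuseFactor'] = 4
```

The tweaked dictionary would then be passed as hls_config to convert_from_keras_model.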
- hls4ml.utils.config.config_from_onnx_model(model, granularity='name', backend=None, default_precision='fixed<16,6>', default_reuse_factor=1, max_precision=None)
Create an HLS conversion config given the ONNX model.
This function serves as the initial step in creating a custom conversion configuration. Users are advised to inspect the returned object and tweak the conversion configuration as needed. The returned object can be passed as the hls_config parameter to convert_from_onnx_model.
- Parameters:
model – ONNX model
granularity (str, optional) –
Granularity of the created config. Defaults to ‘name’. Can be set to ‘model’, ‘type’ or ‘name’.
Granularity can be used to generate a more verbose config that can be fine-tuned. The ‘model’ granularity will generate config keys that apply to the whole model, so changes to the keys will affect the entire model. ‘type’ granularity will generate config keys that affect all layers of a given type, while the ‘name’ granularity (the default for ONNX models) will generate config keys for every layer separately, allowing for highly specific configuration tweaks.
backend (str, optional) – Name of the backend to use
default_precision (str, optional) – Default precision to use. Defaults to ‘fixed<16,6>’.
default_reuse_factor (int, optional) – Default reuse factor. Defaults to 1.
max_precision (str or None, optional) – Maximum width precision to use. Defaults to None, meaning no maximum. Note: Only integer and fixed precisions are supported
- Raises:
Exception – If ONNX model has layers not supported by hls4ml.
- Returns:
The created config.
- Return type:
[dict]
- hls4ml.utils.config.config_from_pytorch_model(model, input_shape, granularity='model', backend=None, default_precision='ap_fixed<16,6>', default_reuse_factor=1, channels_last_conversion='full', transpose_outputs=False, max_precision=None)
Create an HLS conversion config given the PyTorch model.
This function serves as the initial step in creating a custom conversion configuration. Users are advised to inspect the returned object and tweak the conversion configuration as needed. The returned object can be passed as the hls_config parameter to convert_from_pytorch_model.
Note that hls4ml internally follows the Keras convention for nested tensors, known as “channels last”, whereas PyTorch uses the “channels first” convention. For example, for a tensor encoding an image with 3 channels, PyTorch expects the data to be encoded as (Number_Of_Channels, Height, Width), whereas hls4ml expects (Height, Width, Number_Of_Channels). By default, hls4ml will perform the necessary conversions of the inputs and internal tensors automatically, but will return the output in “channels last” format. This behavior can be controlled by the user via the related arguments discussed below.
- Parameters:
model – PyTorch model
input_shape (tuple or list of tuples) – The shape of the input tensor, excluding the batch size.
granularity (str, optional) –
Granularity of the created config. Defaults to ‘model’. Can be set to ‘model’, ‘type’ or ‘name’.
Granularity can be used to generate a more verbose config that can be fine-tuned. The default granularity (‘model’) will generate config keys that apply to the whole model, so changes to the keys will affect the entire model. ‘type’ granularity will generate config keys that affect all layers of a given type, while the ‘name’ granularity will generate config keys for every layer separately, allowing for highly specific configuration tweaks.
backend (str, optional) – Name of the backend to use
default_precision (str, optional) – Default precision to use. Defaults to ‘ap_fixed<16,6>’. Note, this must be an explicit precision: ‘auto’ is not allowed.
default_reuse_factor (int, optional) – Default reuse factor. Defaults to 1.
channels_last_conversion (string, optional) – Configures the conversion of pytorch layers to ‘channels_last’ data format used by hls4ml internally. Can be set to ‘full’ (default), ‘internal’, or ‘off’. If ‘full’, both the inputs and internal layers will be converted. If ‘internal’, only internal layers will be converted; this assumes the inputs are converted by the user. If ‘off’, no conversion is performed.
transpose_outputs (bool, optional) – Set to True if the output should be transposed from the channels_last data format back into channels_first. Defaults to False. If False, outputs need to be transposed manually when channels_first outputs are required.
max_precision (str or None, optional) – Maximum width precision to use. Defaults to None, meaning no maximum. Note: Only integer and fixed precisions are supported
- Raises:
Exception – If PyTorch model has layers not supported by hls4ml.
- Returns:
The created config.
- Return type:
[dict]
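The channels_last conversion described above can be illustrated with plain NumPy; np.transpose here is only a stand-in for the library’s own internal conversion.

```python
import numpy as np

# A single 3-channel 4x5 "image" in PyTorch's channels_first layout: (C, H, W)
x_channels_first = np.arange(3 * 4 * 5).reshape(3, 4, 5)

# hls4ml's internal channels_last layout: (H, W, C)
x_channels_last = np.transpose(x_channels_first, (1, 2, 0))

print(x_channels_first.shape)  # (3, 4, 5)
print(x_channels_last.shape)   # (4, 5, 3)
```

With channels_last_conversion='full' this transposition of the inputs is handled by hls4ml itself; with 'internal' or 'off' the user is responsible for supplying data in the layout hls4ml expects.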
- hls4ml.utils.config.create_config(output_dir='my-hls-test', project_name='myproject', backend='Vivado', version='1.0.0', **kwargs)
Create an initial configuration to guide the conversion process.
The resulting configuration will contain general information about the project (like project name and output directory) as well as the backend-specific configuration (part numbers, clocks etc.). Extra arguments of this function will be passed to the backend’s create_initial_config. For the possible list of arguments, check the documentation of each backend.
- Parameters:
output_dir (str, optional) – The output directory to which the generated project will be written. Defaults to ‘my-hls-test’.
project_name (str, optional) – The name of the project, that will be used as a top-level function in HLS designs. Defaults to ‘myproject’.
backend (str, optional) – The backend to use. Defaults to ‘Vivado’.
version (str, optional) – Optional string to version the generated project for backends that support it. Defaults to ‘1.0.0’.
- Raises:
Exception – Raised if unknown backend is specified.
- Returns:
The conversion configuration.
- Return type:
dict
hls4ml.utils.dependency module
- hls4ml.utils.dependency.requires(pkg: str)
Mark a function or method as requiring a package to be installed.
- Parameters:
pkg (str) – The package to require. ‘name’ requires hls4ml[name] to be installed. ‘_name’ requires name to be installed.
hls4ml.utils.einsum_utils module
- class hls4ml.utils.einsum_utils.EinsumRecipe
Bases:
TypedDict
- C: int
- I: int
- L0: int
- L1: int
- direct_sum_axis: tuple[tuple[int, ...], tuple[int, ...]]
- in_transpose_idxs: tuple[tuple[int, ...], tuple[int, ...]]
- out_interpert_shape: tuple[int, ...]
- out_transpose_idxs: tuple[int, ...]
- hls4ml.utils.einsum_utils.einsum(fn: str, input0: ndarray, input1: ndarray) ndarray
Execute einsum operation on two input arrays.
Warning
Order of multiplication is reversed – watch out if you are using non-commutative operators
- Parameters:
fn – einsum string, e.g. ‘ij,jk->ik’
input0 – the first input array
input1 – the second input array
- Returns:
output array
- Return type:
np.ndarray
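The subscript-string semantics match numpy.einsum; the quick reference check below uses numpy.einsum rather than the hls4ml implementation to illustrate the ‘ij,jk->ik’ example and why the reversed multiplication order in the warning only matters for non-commutative operators.

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[5.0, 6.0], [7.0, 8.0]])

# 'ij,jk->ik' is ordinary matrix multiplication.
out = np.einsum('ij,jk->ik', a, b)
assert np.allclose(out, a @ b)

# Swapping the operand order (with subscripts adjusted to match) computes
# the same result because scalar multiplication is commutative; only
# non-commutative element operations would be affected by the reversal.
assert np.allclose(np.einsum('jk,ij->ik', b, a), a @ b)
```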
- hls4ml.utils.einsum_utils.parse_einsum(fn: str, input_shape0: tuple[int, ...], input_shape1: tuple[int, ...]) EinsumRecipe
Parse einsum operation on two input arrays, return a recipe for execution.
- Parameters:
fn – einsum string, e.g. ‘ij,jk->ik’
input_shape0 – shape of the first input array
input_shape1 – shape of the second input array
- Returns:
einsum recipe; executed by _exec_einsum
- Return type:
EinsumRecipe
hls4ml.utils.example_models module
- hls4ml.utils.example_models.fetch_example_list()
- hls4ml.utils.example_models.fetch_example_model(model_name, backend='Vivado')
Download an example model (and example data & configuration if available) from the GitHub repo to the working directory, and return the corresponding configuration:
https://github.com/fastmachinelearning/example-models
Use fetch_example_list() to see all the available models.
- Parameters:
model_name (str) – Name of the example model in the repo. Example: fetch_example_model(‘KERAS_3layer.json’)
backend (str, optional) – Name of the backend to use for model conversion.
- Returns:
Dictionary that stores the configuration to the model
- Return type:
dict
hls4ml.utils.fixed_point_utils module
- class hls4ml.utils.fixed_point_utils.FixedPointEmulator(N, I, signed=True, integer_bits=None, decimal_bits=None)
Bases:
object
Default constructor.
- Parameters:
N – Total number of bits in the fixed-point number
I – Integer bits in the fixed-point number (the remaining F = N - I bits are fractional)
signed – If True, use 2’s complement when converting to float
integer_bits – Bits corresponding to the integer part of the number
decimal_bits – Bits corresponding to the decimal part of the number
- exp_float(sig_figs=12)
- inv_float(sig_figs=12)
- set_msb_bits(bits)
- to_float()
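A conceptual sketch of what such an emulator computes when converting to float: this is not the FixedPointEmulator API itself, just the underlying arithmetic for a signed fixed<N,I> value held in the low N bits of an integer.

```python
def fixed_bits_to_float(raw: int, N: int, I: int, signed: bool = True) -> float:
    """Interpret the N low bits of `raw` as a fixed<N,I> value.

    I integer bits, N - I fractional bits; 2's complement when signed.
    (Illustrative sketch, not hls4ml's FixedPointEmulator.)
    """
    raw &= (1 << N) - 1                  # keep only the N low bits
    if signed and raw & (1 << (N - 1)):  # sign bit set -> negative value
        raw -= 1 << N                    # undo 2's complement
    return raw / (1 << (N - I))          # scale by 2**-(N-I)

# fixed<16,6>: 6 integer bits, 10 fractional bits, resolution 2**-10
print(fixed_bits_to_float(0b0000010000000000, 16, 6))  # 1.0
```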
- hls4ml.utils.fixed_point_utils.ceil_log2(i)
Returns log2(i), rounded up.
- Parameters:
i – Number
- Returns:
Value representing ceil(log2(i))
- hls4ml.utils.fixed_point_utils.next_pow2(x)
Return the smallest power of 2 greater than or equal to an integer.
- hls4ml.utils.fixed_point_utils.uint_to_binary(i, N)
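These helpers are small bit-arithmetic utilities; the one-liners below are functionally equivalent sketches, not the hls4ml implementations, and they assume that next_pow2 rounds up to a power of two and that uint_to_binary produces bits MSB-first.

```python
def ceil_log2(i: int) -> int:
    # Smallest e such that 2**e >= i, i.e. ceil(log2(i))
    return (i - 1).bit_length()

def next_pow2(x: int) -> int:
    # Smallest power of 2 greater than or equal to x (assumed semantics)
    return 1 << ceil_log2(x)

def uint_to_binary(i: int, N: int):
    # N-bit binary representation of an unsigned integer, MSB first (assumed)
    return [(i >> (N - 1 - b)) & 1 for b in range(N)]

print(ceil_log2(5))          # 3
print(next_pow2(5))          # 8
print(uint_to_binary(5, 4))  # [0, 1, 0, 1]
```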
hls4ml.utils.link module
- class hls4ml.utils.link.FilesystemModelGraph(project_dir: str | Path)
Bases:
ModelGraph
A subclass of ModelGraph that can link with an existing project in the filesystem.
This allows the user to call compile(), predict() and build() functions. All other methods are disabled and will raise an exception if accessed.
- build(**kwargs)
Builds the generated project using HLS compiler.
Please see the build() function of backends for a list of possible arguments.
- compile()
Compile the generated project and link the library into current environment.
Users should call this function if they want to use predict functionality for simulation.
- get_input_variables()
- get_output_variables()
- predict(x)
hls4ml.utils.plot module
Utilities related to model visualization.
- hls4ml.utils.plot.add_edge(dot, src, dst)
- hls4ml.utils.plot.check_pydot()
Returns True if PyDot and Graphviz are available.
- hls4ml.utils.plot.model_to_dot(model, show_shapes=False, show_layer_names=True, show_precision=False, rankdir='TB', dpi=96, subgraph=False)
Convert an HLS model to dot format.
- Parameters:
model – A HLS model instance.
show_shapes – whether to display shape information.
show_layer_names – whether to display layer names.
show_precision – whether to display precision of layer’s variables.
rankdir – rankdir argument passed to PyDot, a string specifying the format of the plot: ‘TB’ creates a vertical plot; ‘LR’ creates a horizontal plot.
dpi – Dots per inch.
subgraph – whether to return a pydot.Cluster instance.
- Returns:
A pydot.Dot instance representing the HLS model or a pydot.Cluster instance representing nested model if subgraph=True.
- Raises:
ImportError – if graphviz or pydot are not available.
- hls4ml.utils.plot.plot_model(model, to_file='model.png', show_shapes=False, show_layer_names=True, show_precision=False, rankdir='TB', dpi=96)
Converts an HLS model to dot format and saves it to a file.
- Parameters:
model – A HLS model instance
to_file – File name of the plot image.
show_shapes – whether to display shape information.
show_layer_names – whether to display layer names.
show_precision – whether to display precision of layer’s variables.
rankdir – rankdir argument passed to PyDot, a string specifying the format of the plot: ‘TB’ creates a vertical plot; ‘LR’ creates a horizontal plot.
dpi – Dots per inch.
- Returns:
A Jupyter notebook Image object if Jupyter is installed. This enables in-line display of the model plots in notebooks.
hls4ml.utils.profiling_utils module
hls4ml.utils.serialization module
- hls4ml.utils.serialization.deserialize_model(file_path, output_dir=None)
Deserializes an hls4ml model from a compressed file format (.fml).
This function extracts the model’s architecture, configuration, internal state, and version information from the provided .fml file and returns a new instance of ModelGraph. If testbench data was provided during the serialization, it will be restored to the specified output directory.
- Parameters:
file_path (str or pathlib.Path) – The path to the serialized model file (.fml).
output_dir (str or pathlib.Path, optional) – The directory where extracted testbench data files will be saved. If not specified, the files will be restored to the same directory as the .fml file.
- Returns:
The deserialized hls4ml model.
- Return type:
ModelGraph
- Raises:
FileNotFoundError – If the specified .fml file does not exist.
OSError – If an I/O error occurs during extraction or file operations.
Notes
The function ensures that input/output testbench data files are restored to the specified output directory if they were included during serialization.
The deserialized model includes its architecture, configuration, and internal state, allowing it to be used as if it were freshly created.
- hls4ml.utils.serialization.serialize_model(model, file_path)
Serializes an hls4ml model into a compressed file format (.fml).
This function saves the model’s architecture, configuration, internal state, and version information into a temporary directory. It then compresses the directory into a .fml file (a tar.gz archive with a custom extension) at the specified file path.
- Parameters:
model (ModelGraph) – The hls4ml model to be serialized.
file_path (str or pathlib.Path) – The path where the serialized model will be saved. If the file extension is not .fml, it will be automatically appended.
- Raises:
OSError – If the file cannot be written or an I/O error occurs.
Notes
The function also handles serialization of NumPy arrays and ensures that input/output testbench data files are included if specified in the model configuration.
Existing files at the specified path will be overwritten.
hls4ml.utils.string_utils module
- hls4ml.utils.string_utils.convert_to_pascal_case(snake_case)
Convert string in snake_case to PascalCase
- Parameters:
snake_case (str) – string to convert
- Returns:
converted string
- Return type:
str
- hls4ml.utils.string_utils.convert_to_snake_case(pascal_case)
Convert string in PascalCase to snake_case
- Parameters:
pascal_case (str) – string to convert
- Returns:
converted string
- Return type:
str
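The two conversions above can be sketched in a few lines; these are functionally equivalent sketches, and edge cases such as consecutive capitals or digits may differ from the hls4ml implementations.

```python
import re

def convert_to_snake_case(pascal_case: str) -> str:
    # Insert an underscore before each capital (except the first character),
    # then lowercase: 'ReuseFactor' -> 'reuse_factor'
    return re.sub(r'(?<!^)(?=[A-Z])', '_', pascal_case).lower()

def convert_to_pascal_case(snake_case: str) -> str:
    # Capitalize each underscore-separated word: 'reuse_factor' -> 'ReuseFactor'
    return ''.join(word.capitalize() for word in snake_case.split('_'))

print(convert_to_snake_case('ReuseFactor'))    # reuse_factor
print(convert_to_pascal_case('reuse_factor'))  # ReuseFactor
```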
hls4ml.utils.symbolic_utils module
- class hls4ml.utils.symbolic_utils.LUTFunction(name, math_func, range_start, range_end, table_size=1024)
Bases:
object
- hls4ml.utils.symbolic_utils.generate_operator_complexity(part, precision, unary_operators=None, binary_operators=None, hls_include_path=None, hls_libs_path=None)
Generates HLS projects and synthesizes them to obtain operator complexity (clock cycles per given math operation).
This function can be used to obtain a list of operator complexity for a given FPGA part at a given precision.
- Parameters:
part (str) – FPGA part number to use.
precision (str) – Precision to use.
unary_operators (list, optional) – List of unary operators to evaluate. Defaults to None.
binary_operators (list, optional) – List of binary operators to evaluate. Defaults to None.
hls_include_path (str, optional) – Path to the HLS include files. Defaults to None.
hls_libs_path (str, optional) – Path to the HLS libs. Defaults to None.
- Returns:
Dictionary of obtained operator complexities.
- Return type:
dict
- hls4ml.utils.symbolic_utils.init_pysr_lut_functions(init_defaults=False, function_definitions=None)
Register LUT-based approximations with PySR.
Functions should be in the form of:
<func_name>(x) = math_lut(<func>, x, N=<table_size>, range_start=<start>, range_end=<end>)
where <func_name> is a given name that can be used with PySR, <func> is the math function to approximate (sin, cos, log, …), <table_size> is the size of the lookup table, and <start> and <end> are the bounds of the range in which the function will be approximated. It is strongly recommended to use a power of two as the range. Registered functions can be passed by name to PySRRegressor (as unary_operators).
- Parameters:
init_defaults (bool, optional) – Register the most frequently used functions (sin, cos, tan, log, exp). Defaults to False.
function_definitions (list, optional) – List of strings with function definitions to register with PySR. Defaults to None.
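Following the template above, a definition string can be assembled as shown below; the name sin_lut and the range values are illustrative, not functions hls4ml registers by default.

```python
# Hypothetical example: build a definition string following the documented
# template. 'sin_lut' is an illustrative name, not one hls4ml registers.
func_name, func, table_size, start, end = 'sin_lut', 'sin', 1024, -4, 4
definition = (
    f'{func_name}(x) = math_lut({func}, x, N={table_size}, '
    f'range_start={start}, range_end={end})'
)
print(definition)
# sin_lut(x) = math_lut(sin, x, N=1024, range_start=-4, range_end=4)
```

Such strings would be passed in the function_definitions list of init_pysr_lut_functions.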
- hls4ml.utils.symbolic_utils.register_pysr_lut_function(func, julia_main=None)
hls4ml.utils.torch module
hls4ml.utils.transpose_utils module
- hls4ml.utils.transpose_utils.transpose_config_gen(name: str, shape: tuple[int, ...], perm: tuple[int, ...])
Generate new shape and perm_strides for a permute operation. Operates by mapping each output index to the corresponding input index: unravel the output index, map each dimension to the corresponding stride in the input tensor, and sum. The operation can be expressed as:
new_shape = tuple(shape[i] for i in perm)
strides = np.cumprod((shape[1:] + (1,))[::-1])[::-1]
perm_strides = [strides[i] for i in perm]
out[index] = inp[np.dot(np.unravel_index(index, new_shape), perm_strides)]
- Parameters:
name (str) – The name of the configuration.
shape (tuple[int, ...]) – The shape of the input tensor.
perm (tuple[int, ...]) – The permutation of the dimensions.
- Returns:
Dictionary containing the configuration.
- Return type:
dict
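The index mapping in the formula above can be checked directly with NumPy; this mirrors the documented formula (assuming C-order flattening) rather than calling transpose_config_gen itself.

```python
import numpy as np

# Illustrative check of the documented index mapping, for a (2, 3, 4)
# tensor permuted with perm = (2, 0, 1).
shape, perm = (2, 3, 4), (2, 0, 1)

new_shape = tuple(shape[i] for i in perm)
strides = np.cumprod((shape[1:] + (1,))[::-1])[::-1]  # C-order input strides
perm_strides = [strides[i] for i in perm]

inp = np.arange(np.prod(shape))  # flat input buffer
expected = inp.reshape(shape).transpose(perm).ravel()

out = np.empty_like(inp)
for index in range(inp.size):
    # Unravel the output index, dot with the permuted strides -> input index
    out[index] = inp[np.dot(np.unravel_index(index, new_shape), perm_strides)]

assert np.array_equal(out, expected)
```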