hls4ml.utils package

Submodules

hls4ml.utils.attribute_descriptions module

Strings holding attribute descriptions.

hls4ml.utils.config module

hls4ml.utils.config.config_from_keras_model(model, granularity='model', backend=None, default_precision='fixed<16,6>', default_reuse_factor=1, max_precision=None)

Create an HLS conversion config given the Keras model.

This function serves as the initial step in creating the custom conversion configuration. Users are advised to inspect the returned object to tweak the conversion configuration. The return object can be passed as hls_config parameter to convert_from_keras_model.

Parameters:
  • model – Keras model

  • granularity (str, optional) –

    Granularity of the created config. Defaults to ‘model’. Can be set to ‘model’, ‘type’ and ‘name’.

    Granularity can be used to generate a more verbose config that can be fine-tuned. The default granularity (‘model’) will generate config keys that apply to the whole model, so changes to the keys will affect the entire model. ‘type’ granularity will generate config keys that affect all layers of a given type, while the ‘name’ granularity will generate config keys for every layer separately, allowing for highly specific configuration tweaks.

  • backend (str, optional) – Name of the backend to use

  • default_precision (str, optional) – Default precision to use. Defaults to ‘fixed<16,6>’. Note, this must be an explicit precision: ‘auto’ is not allowed.

  • default_reuse_factor (int, optional) – Default reuse factor. Defaults to 1.

  • max_precision (str or None, optional) – Maximum width precision to use. Defaults to None, meaning no maximum. Note: Only integer and fixed precisions are supported

Raises:

Exception – If Keras model has layers not supported by hls4ml.

Returns:

The created config.

Return type:

[dict]
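
For example, a minimal usage sketch (the Keras model, the layer name dense_1 and the output directory are illustrative placeholders): create a config at ‘name’ granularity, tweak one per-layer key, and pass the result as hls_config to convert_from_keras_model.

    import hls4ml
    from tensorflow import keras

    # Placeholder Keras model; any architecture supported by hls4ml works here.
    model = keras.Sequential([
        keras.Input(shape=(8,)),
        keras.layers.Dense(16, activation='relu', name='dense_1'),
        keras.layers.Dense(1, activation='sigmoid', name='output'),
    ])

    # 'name' granularity exposes one config entry per layer.
    config = hls4ml.utils.config_from_keras_model(model, granularity='name', backend='Vivado')
    config['LayerName']['dense_1']['ReuseFactor'] = 4  # per-layer tweak; precisions can be adjusted similarly

    hls_model = hls4ml.converters.convert_from_keras_model(
        model, hls_config=config, backend='Vivado', output_dir='my-hls-test'
    )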

hls4ml.utils.config.config_from_onnx_model(model, granularity='name', backend=None, default_precision='fixed<16,6>', default_reuse_factor=1, max_precision=None)

Create an HLS conversion config given the ONNX model.

This function serves as the initial step in creating the custom conversion configuration. Users are advised to inspect the returned object to tweak the conversion configuration. The return object can be passed as hls_config parameter to convert_from_onnx_model.

Parameters:
  • model – ONNX model

  • granularity (str, optional) –

    Granularity of the created config. Defaults to ‘name’. Can be set to ‘model’, ‘type’ and ‘name’.

    Granularity can be used to generate a more verbose config that can be fine-tuned. The ‘model’ granularity will generate config keys that apply to the whole model, so changes to the keys will affect the entire model. ‘type’ granularity will generate config keys that affect all layers of a given type, while the ‘name’ granularity (the default for ONNX models) will generate config keys for every layer separately, allowing for highly specific configuration tweaks.

  • backend (str, optional) – Name of the backend to use

  • default_precision (str, optional) – Default precision to use. Defaults to ‘fixed<16,6>’.

  • default_reuse_factor (int, optional) – Default reuse factor. Defaults to 1.

  • max_precision (str or None, optional) – Maximum width precision to use. Defaults to None, meaning no maximum. Note: Only integer and fixed precisions are supported

Raises:

Exception – If ONNX model has layers not supported by hls4ml.

Returns:

The created config.

Return type:

[dict]
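
For example, a brief sketch (the file name model.onnx is a placeholder; depending on the hls4ml version, the ONNX graph may first need to be cleaned with qonnx):

    import onnx
    import hls4ml

    # Placeholder path to an ONNX model supported by hls4ml.
    onnx_model = onnx.load('model.onnx')

    # The default 'name' granularity gives one config entry per layer.
    config = hls4ml.utils.config_from_onnx_model(onnx_model, granularity='name', backend='Vivado')

    hls_model = hls4ml.converters.convert_from_onnx_model(
        onnx_model, hls_config=config, backend='Vivado', output_dir='my-hls-test'
    )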

hls4ml.utils.config.config_from_pytorch_model(model, input_shape, granularity='model', backend=None, default_precision='ap_fixed<16,6>', default_reuse_factor=1, channels_last_conversion='full', transpose_outputs=True, max_precision=None)

Create an HLS conversion config given the PyTorch model.

This function serves as the initial step in creating the custom conversion configuration. Users are advised to inspect the returned object to tweak the conversion configuration. The return object can be passed as hls_config parameter to convert_from_pytorch_model.

Note that hls4ml internally follows the Keras convention for nested tensors known as “channels last”, whereas PyTorch uses the “channels first” convention. For example, for a tensor encoding an image with 3 channels, PyTorch expects the data to be encoded as (Number_Of_Channels, Height, Width), whereas hls4ml expects (Height, Width, Number_Of_Channels). By default, hls4ml will perform the necessary conversions of the inputs and internal tensors automatically, but will return the output in “channels last”. This behavior can be controlled by the user through the related arguments discussed below.

Parameters:
  • model – PyTorch model

  • input_shape (tuple or list of tuples) – The shape of the input tensor, excluding the batch size.

  • granularity (str, optional) –

    Granularity of the created config. Defaults to ‘model’. Can be set to ‘model’, ‘type’ and ‘name’.

    Granularity can be used to generate a more verbose config that can be fine-tuned. The default granularity (‘model’) will generate config keys that apply to the whole model, so changes to the keys will affect the entire model. ‘type’ granularity will generate config keys that affect all layers of a given type, while the ‘name’ granularity will generate config keys for every layer separately, allowing for highly specific configuration tweaks.

  • backend (str, optional) – Name of the backend to use

  • default_precision (str, optional) – Default precision to use. Defaults to ‘ap_fixed<16,6>’. Note, this must be an explicit precision: ‘auto’ is not allowed.

  • default_reuse_factor (int, optional) – Default reuse factor. Defaults to 1.

  • channels_last_conversion (string, optional) – Configures the conversion of pytorch layers to ‘channels_last’ data format used by hls4ml internally. Can be set to ‘full’ (default), ‘internal’, or ‘off’. If ‘full’, both the inputs and internal layers will be converted. If ‘internal’, only internal layers will be converted; this assumes the inputs are converted by the user. If ‘off’, no conversion is performed.

  • transpose_outputs (bool, optional) – Set to False if the output should not be transposed from channels_last into channels_first data format. Defaults to False. If False, outputs need to be transposed manually.

  • max_precision (str or None, optional) – Maximum width precision to use. Defaults to None, meaning no maximum. Note: Only integer and fixed precisions are supported

Raises:

Exception – If PyTorch model has layers not supported by hls4ml.

Returns:

The created config.

Return type:

[dict]
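
For example, a minimal sketch (the model and shapes are placeholders; in some hls4ml versions convert_from_pytorch_model may take additional arguments):

    import torch.nn as nn
    import hls4ml

    # Placeholder PyTorch model expecting channels-first input of shape (3, 32, 32).
    model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3),
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(8 * 30 * 30, 10),
    )

    # input_shape excludes the batch dimension and uses PyTorch's channels-first order.
    config = hls4ml.utils.config_from_pytorch_model(
        model,
        input_shape=(3, 32, 32),
        granularity='model',
        backend='Vivado',
        channels_last_conversion='full',  # transpose inputs and internal tensors automatically
    )

    hls_model = hls4ml.converters.convert_from_pytorch_model(
        model, hls_config=config, backend='Vivado', output_dir='my-hls-test'
    )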

hls4ml.utils.config.create_config(output_dir='my-hls-test', project_name='myproject', backend='Vivado', version='1.0.0', **kwargs)

Create an initial configuration to guide the conversion process.

The resulting configuration will contain general information about the project (like project name and output directory) as well as the backend-specific configuration (part numbers, clocks etc). Extra arguments of this function will be passed to the backend’s create_initial_config. For the possible list of arguments, check the documentation of each backend.

Parameters:
  • output_dir (str, optional) – The output directory to which the generated project will be written. Defaults to ‘my-hls-test’.

  • project_name (str, optional) – The name of the project, that will be used as a top-level function in HLS designs. Defaults to ‘myproject’.

  • backend (str, optional) – The backend to use. Defaults to ‘Vivado’.

  • version (str, optional) – Optional string to version the generated project for backends that support it. Defaults to ‘1.0.0’.

Raises:

Exception – Raised if unknown backend is specified.

Returns:

The conversion configuration.

Return type:

dict
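
For example, a sketch for the Vivado backend (the extra keyword arguments part, clock_period and io_type are assumed to be accepted by the Vivado backend’s create_initial_config; consult the backend documentation for the authoritative list):

    from hls4ml.utils.config import create_config

    config = create_config(
        output_dir='my-hls-test',
        project_name='myproject',
        backend='Vivado',
        version='1.0.0',
        # Backend-specific options forwarded to create_initial_config:
        part='xcku115-flvb2104-2-i',
        clock_period=5,
        io_type='io_parallel',
    )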

hls4ml.utils.example_models module

hls4ml.utils.example_models.fetch_example_list()
hls4ml.utils.example_models.fetch_example_model(model_name, backend='Vivado')

Download an example model (and example data and configuration, if available) from the GitHub repository to the working directory, and return the corresponding configuration:

https://github.com/fastmachinelearning/example-models

Use fetch_example_list() to see all the available models.

Parameters:
  • model_name (str) – Name of the example model in the repo. Example: fetch_example_model(‘KERAS_3layer.json’)

  • backend (str, optional) – Name of the backend to use for model conversion.

Returns:

Dictionary that stores the configuration to the model

Return type:

dict
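
For example:

    import hls4ml

    # List the models available in the example-models repository.
    hls4ml.utils.fetch_example_list()

    # Download the model (plus its data/config, if available) into the working
    # directory and get back a ready-to-use conversion configuration.
    config = hls4ml.utils.fetch_example_model('KERAS_3layer.json', backend='Vivado')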

hls4ml.utils.fixed_point_utils module

class hls4ml.utils.fixed_point_utils.FixedPointEmulator(N, I, signed=True, integer_bits=None, decimal_bits=None)

Bases: object

Default constructor.

Parameters:
  • N – Total number of bits in the fixed-point number.

  • I – Integer bits in the fixed-point number (the remaining F = N - I bits are the fractional part).

  • signed – True/False. If True, use 2’s complement when converting to float.

  • integer_bits – Bits corresponding to the integer part of the number.

  • decimal_bits – Bits corresponding to the decimal part of the number.

exp_float(sig_figs=12)
inv_float(sig_figs=12)
set_msb_bits(bits)
to_float()
hls4ml.utils.fixed_point_utils.ceil_log2(i)

Returns log2(i), rounded up.

Parameters:

i – Number

Returns:

val, representing ceil(log2(i))

hls4ml.utils.fixed_point_utils.next_pow2(x)

Return the next bigger power of 2 of an integer

hls4ml.utils.fixed_point_utils.uint_to_binary(i, N)
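
A small usage sketch of the two rounding helpers above (the values in the comments follow from the documented behaviour; behaviour at exact powers of two is not shown):

    from hls4ml.utils.fixed_point_utils import ceil_log2, next_pow2

    ceil_log2(10)  # 4, since ceil(log2(10)) = 4
    next_pow2(10)  # 16, the next power of 2 above 10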

hls4ml.utils.plot module

Utilities related to model visualization.

hls4ml.utils.plot.add_edge(dot, src, dst)
hls4ml.utils.plot.check_pydot()

Returns True if PyDot and Graphviz are available.

hls4ml.utils.plot.model_to_dot(model, show_shapes=False, show_layer_names=True, show_precision=False, rankdir='TB', dpi=96, subgraph=False)

Convert an HLS model to dot format.

Parameters:
  • model – An HLS model instance.

  • show_shapes – whether to display shape information.

  • show_layer_names – whether to display layer names.

  • show_precision – whether to display precision of layer’s variables.

  • rankdir – rankdir argument passed to PyDot, a string specifying the format of the plot: ‘TB’ creates a vertical plot; ‘LR’ creates a horizontal plot.

  • dpi – Dots per inch.

  • subgraph – whether to return a pydot.Cluster instance.

Returns:

A pydot.Dot instance representing the HLS model or a pydot.Cluster instance representing nested model if subgraph=True.

Raises:

ImportError – if graphviz or pydot are not available.

hls4ml.utils.plot.plot_model(model, to_file='model.png', show_shapes=False, show_layer_names=True, show_precision=False, rankdir='TB', dpi=96)

Convert an HLS model to dot format and save it to a file.

Parameters:
  • model – An HLS model instance

  • to_file – File name of the plot image.

  • show_shapes – whether to display shape information.

  • show_layer_names – whether to display layer names.

  • show_precision – whether to display precision of layer’s variables.

  • rankdir – rankdir argument passed to PyDot, a string specifying the format of the plot: ‘TB’ creates a vertical plot; ‘LR’ creates a horizontal plot.

  • dpi – Dots per inch.

Returns:

A Jupyter notebook Image object if Jupyter is installed. This enables in-line display of the model plots in notebooks.
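
For example, a sketch that builds a small model and plots it (requires pydot and graphviz; the Keras model is a placeholder):

    import hls4ml
    from tensorflow import keras

    model = keras.Sequential([keras.Input(shape=(8,)), keras.layers.Dense(4, name='dense')])
    config = hls4ml.utils.config_from_keras_model(model)
    hls_model = hls4ml.converters.convert_from_keras_model(model, hls_config=config)

    # Horizontal plot with shapes and precisions, saved to model.png.
    hls4ml.utils.plot_model(hls_model, show_shapes=True, show_precision=True, rankdir='LR', to_file='model.png')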

hls4ml.utils.string_utils module

hls4ml.utils.string_utils.convert_to_pascal_case(snake_case)

Convert a string in snake_case to PascalCase.

Parameters:

snake_case (str) – string to convert

Returns:

converted string

Return type:

str

hls4ml.utils.string_utils.convert_to_snake_case(pascal_case)

Convert a string in PascalCase to snake_case.

Parameters:

pascal_case (str) – string to convert

Returns:

converted string

Return type:

str
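
For example (expected outputs shown in the comments):

    from hls4ml.utils.string_utils import convert_to_pascal_case, convert_to_snake_case

    convert_to_pascal_case('reuse_factor')  # 'ReuseFactor'
    convert_to_snake_case('ReuseFactor')    # 'reuse_factor'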

hls4ml.utils.symbolic_utils module

class hls4ml.utils.symbolic_utils.LUTFunction(name, math_func, range_start, range_end, table_size=1024)

Bases: object

hls4ml.utils.symbolic_utils.generate_operator_complexity(part, precision, unary_operators=None, binary_operators=None, hls_include_path=None, hls_libs_path=None)

Generates HLS projects and synthesizes them to obtain operator complexity (clock cycles per given math operation).

This function can be used to obtain a list of operator complexity for a given FPGA part at a given precision.

Parameters:
  • part (str) – FPGA part number to use.

  • precision (str) – Precision to use.

  • unary_operators (list, optional) – List of unary operators to evaluate. Defaults to None.

  • binary_operators (list, optional) – List of binary operators to evaluate. Defaults to None.

  • hls_include_path (str, optional) – Path to the HLS include files. Defaults to None.

  • hls_libs_path (str, optional) – Path to the HLS libs. Defaults to None.

Returns:

Dictionary of obtained operator complexities.

Return type:

dict

hls4ml.utils.symbolic_utils.init_pysr_lut_functions(init_defaults=False, function_definitions=None)

Register LUT-based approximations with PySR.

Functions should be in the form of:

<func_name>(x) = math_lut(<func>, x, N=<table_size>, range_start=<start>, range_end=<end>)

where <func_name> is a given name that can be used with PySR, <func> is the math function to approximate (sin, cos, log, …), <table_size> is the size of the lookup table, and <start> and <end> define the range over which the function will be approximated. It is strongly recommended to use a power-of-two range.

Registered functions can be passed by name to PySRRegressor (as unary_operators).

Parameters:
  • init_defaults (bool, optional) – Register the most frequently used functions (sin, cos, tan, log, exp). Defaults to False.

  • function_definitions (list, optional) – List of strings with function definitions to register with PySR. Defaults to None.
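
For example, a sketch that registers a LUT-based sine following the definition format above and uses it as a PySR unary operator (the function name mysin and the PySRRegressor settings are illustrative):

    from pysr import PySRRegressor

    from hls4ml.utils.symbolic_utils import init_pysr_lut_functions

    # Register a LUT-based sine approximated on [-8, 8) with a 1024-entry table.
    init_pysr_lut_functions(
        function_definitions=['mysin(x) = math_lut(sin, x, N=1024, range_start=-8, range_end=8)']
    )

    model = PySRRegressor(
        niterations=40,
        binary_operators=['+', '*'],
        unary_operators=['mysin'],  # the registered name can now be used by PySR
    )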

hls4ml.utils.symbolic_utils.register_pysr_lut_function(func, julia_main=None)

Module contents