hls4ml.model package

Submodules

hls4ml.model.attributes module

All information about a layer is stored in the attributes of a layer instance. This information can be a property of the layer, like the number of hidden units in a Dense layer or the number of filters in a convolutional layer, but it also includes the weight variables, output variables and all defined data types. The attribute system provides a mechanism that ensures layers are correctly initialized, store valid information and expose configurable endpoints.

This module contains the definitions of classes for handling attributes. The Attribute class and its subclasses describe an expected attribute, while the actual value is stored in the instance's attribute dict. This provides a unified view (mapping) of all attributes, but for convenience there are also mappings that expose only certain kinds of attributes (types, variables, weights, etc.) via the AttributeMapping class.
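As a sketch of how this is used in practice, a layer class declares the attributes it expects (the layer and attribute names below are illustrative; built-in layers follow the same pattern via a class-level _expected_attributes list):

from hls4ml.model.attributes import Attribute, ConfigurableAttribute, TypeAttribute
from hls4ml.model.layers import Layer

class MyCustomLayer(Layer):  # hypothetical layer, for illustration only
    _expected_attributes = [
        Attribute('n_filters'),                            # required integer property
        ConfigurableAttribute('reuse_factor', default=1),  # user-tunable after creation
        TypeAttribute('accum'),                            # stores a NamedType; the name ends in '_t' by convention
    ]
    # A real layer would also override initialize() to create its weight and output variables.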

class hls4ml.model.attributes.Attribute(name, value_type=<class 'numbers.Integral'>, default=None, configurable=False)

Bases: object

Base attribute class.

Attribute consists of a name, the type of value it will store, the optional default if no value is specified during layer creation, and a flag indicating if the value can be modified by the user. This class is generally expected to exist only as part of the expected_attributes property of the layer class.

Parameters:
  • name (str) – Name of the attribute

  • value_type (optional) – Type of the value expected to be stored in the attribute. If not specified, no validation of the stored value will be performed. Defaults to numbers.Integral.

  • default (optional) – Default value if no value is specified during layer creation. Defaults to None.

  • configurable (bool, optional) – Specifies if the attribute can be modified after creation. Defaults to False.

property config_name

Returns the name of the attribute as it will appear in the attribute dict of the layer instance.

The format is snake case, e.g., AttributeName -> attribute_name.

Returns:

The snake_case form of the attribute's name.

Return type:

str
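For example, with an attribute named in pascal case (a minimal sketch):

from hls4ml.model.attributes import Attribute

attr = Attribute('ReuseFactor')
print(attr.config_name)  # expected output: reuse_factor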

validate_value(value)
class hls4ml.model.attributes.AttributeDict(layer)

Bases: MutableMapping

Class containing all attributes of a given layer.

Instances of this class behave like a dictionary. Upon insertion, the key/value may trigger additional actions, such as registering variables or modifying the key name to ensure it follows the convention.

Specific “views” (mappings) of this class can be used to filter desired attributes via the AttributeMapping class.

class hls4ml.model.attributes.AttributeMapping(attributes, clazz)

Bases: MutableMapping

Base class used to filter attributes based on their expected class.

class hls4ml.model.attributes.ChoiceAttribute(name, choices, default=None, configurable=True)

Bases: Attribute

Represents an attribute whose value can be one of several predefined values.

validate_value(value)
class hls4ml.model.attributes.CodeAttrubute(name)

Bases: Attribute

Represents an attribute that will store a generated source code block.

class hls4ml.model.attributes.CodeMapping(attributes)

Bases: AttributeMapping

Mapping that only sees Source instances (i.e., generated source code blocks).

class hls4ml.model.attributes.ConfigurableAttribute(name, value_type=<class 'int'>, default=None)

Bases: Attribute

Represents a configurable attribute, i.e., an attribute whose value can be modified by the user.

This is a convenience class. It is advised to use ConfigurableAttribute over Attribute(..., configurable=True) when defining the expected attributes of layer classes.
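A minimal sketch of the two equivalent forms (the attribute name is illustrative):

from hls4ml.model.attributes import Attribute, ConfigurableAttribute

a1 = Attribute('reuse_factor', value_type=int, default=1, configurable=True)
a2 = ConfigurableAttribute('reuse_factor', value_type=int, default=1)  # preferred, equivalent to a1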

class hls4ml.model.attributes.TypeAttribute(name, default=None, configurable=True)

Bases: Attribute

Represents an attribute that will store a type, i.e., an instance of NamedType or its subclasses.

As a convention, the name of the attribute storing a type will end in _t.

class hls4ml.model.attributes.TypeMapping(attributes)

Bases: AttributeMapping

Mapping that only sees NamedType instances (i.e., defined types).

class hls4ml.model.attributes.VariableMapping(attributes)

Bases: AttributeMapping

Mapping that only sees TensorVariable instances (i.e., activation tensors).

class hls4ml.model.attributes.WeightAttribute(name)

Bases: Attribute

Represents an attribute that will store a weight variable.

class hls4ml.model.attributes.WeightMapping(attributes)

Bases: AttributeMapping

Mapping that only sees WeightVariable instances (i.e., weights).

hls4ml.model.graph module

class hls4ml.model.graph.HLSConfig(config)

Bases: object

The configuration class as stored in the ModelGraph.

Parameters:

config (dict) – The configuration dictionary

get_bram_size(layer)
get_compression(layer)
get_config_value(key, default=None)
get_conv_implementation(layer)
get_layer_config(layer)
get_layer_config_value(layer, key, default=None)
get_output_dir()
get_precision(layer, var='default')
get_project_dir()
get_project_name()
get_reuse_factor(layer)
get_strategy(layer)
get_target_cycles(layer)
get_writer_config()
is_resource_strategy(layer)
parse_name_config(layer_name, layer_cfg)

This is used by _parse_hls_config below, but also by optimizers when a new layer config is created.

set_name_config(name, config)

Sets hls_config["LayerName"][name] = config.
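A sketch of typical queries against an existing configuration (assuming model is a ModelGraph and layer is one of its Layer objects; the exact return values depend on the configuration granularity):

cfg = model.config                    # HLSConfig instance attached to the ModelGraph
rf = cfg.get_reuse_factor(layer)      # reuse factor resolved for this layer
precision = cfg.get_precision(layer)  # default precision resolved for this layer
out_dir = cfg.get_output_dir()        # output directory from the configuration dictionary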

class hls4ml.model.graph.ModelGraph(config, layer_list, inputs=None, outputs=None)

Bases: object

The ModelGraph represents the network that is being processed by hls4ml.

Parameters:
  • config (dict) – The configuration dictionary

  • layer_list (list(dict)) – A list containing one dictionary for each layer of the input model

  • inputs (list, optional) – The inputs to the model. If None, determined from layer_list

  • outputs (list, optional) – The outputs to the model. If None, determined from layer_list

apply_flow(flow, reapply='single')

Applies a flow (a collection of optimizers).

Parameters:
  • flow (str) – The name of the flow to apply

  • reapply (str, optional) – Determines the action to take if the flow and its requirements have already been applied. Possible values are: 'all' (apply the flow and all its requirements), 'single' (apply only the given flow, skipping already applied requirements), and 'none' (skip applying the flow). Defaults to 'single'.
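For example, assuming model is a ModelGraph (flow names are defined in hls4ml's flow registry and differ between versions and backends; 'convert' is shown as an illustration):

model.apply_flow('convert', reapply='single')   # apply the named flow, skipping already-applied requirements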

build(**kwargs)

Builds the generated project using HLS compiler.

Please see the build() function of backends for a list of possible arguments.

compile()

Compile the generated project and link the library into current environment.

Users should call this function if they want to use predict functionality for simulation.

get_input_variables()
get_layer_output_variable(output_name)
get_layers()
get_output_variables()
get_weight_variables()
insert_node(node, before=None, input_idx=0)

Insert a new node into the model graph.

The node to be inserted should be created with make_node() function. The optional parameter before can be used to specify the node that follows in case of ambiguities.

Parameters:
  • node (Layer) – Node to insert

  • before (Layer, optional) – The next node in sequence before which a new node should be inserted.

  • input_idx (int, optional) – If the next node takes multiple inputs, the input index

Raises:

Exception – If an attempt to insert a node with multiple inputs is made or if before does not specify a correct node in sequence.

make_node(kind, name, attributes, inputs, outputs=None)

Make a new node not connected to the model graph.

The ‘kind’ should be a valid layer registered with register_layer. If no outputs are specified, a default output named the same as the node will be created. The returned node should be added to the graph with insert_node or replace_node functions.

Parameters:
  • kind (type or str) – Type of node to add

  • name (str) – Name of the node

  • attributes (dict) – Initial set of attributes required to construct the node (Layer)

  • inputs (list) – List of inputs to the layer

  • outputs (list, optional) – The optional list of named outputs of the node

Raises:

Exception – If an attempt to insert a node with multiple inputs is made or if before does not specify a correct node in sequence.

Returns:

The node created.

Return type:

Layer
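A sketch of creating a node and adding it to the graph (the kind, name, attributes and input names are illustrative; the required attributes depend on the layer kind):

attrs = {'activation': 'relu'}                                        # illustrative attribute set
new_node = model.make_node('Activation', 'relu1', attrs, ['dense1'])  # 'dense1' is the output of an existing node
model.insert_node(new_node)                                           # wire it into the graph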

next_layer()
predict(x)
register_output_variable(out_name, variable)
remove_node(node, rewire=True)

Removes a node from the graph.

By default, this function connects the outputs of the previous node to the inputs of the next node. If the removed node has multiple input/output tensors, an exception is raised.

Parameters:
  • node (Layer) – The node to remove.

  • rewire (bool, optional) – Deprecated, has no effect.

Raises:

Exception – If an attempt is made to rewire a node with multiple inputs/outputs.

Note

The rewire parameter is deprecated and has no effect.

replace_node(old_node, new_node)

Replace an existing node in the graph with a new one.

Parameters:
  • old_node (Layer) – The node to replace

  • new_node (Layer) – The new node

split_node(old_node, new_node1, new_node2)

Replace an existing node in the graph with two nodes in sequence.

Parameters:
  • old_node (Layer) – The node to replace

  • new_node1 (Layer) – The first new node in sequence

  • new_node2 (Layer) – The second new node in sequence

trace(x)
write()

Write the generated project to disk.

This function converts the model to C++ and writes the generated files in the output directory specified in the config.
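The typical end of the workflow is therefore (a sketch; the keyword arguments shown belong to the Vivado backend's build() and may differ for other backends):

model.write()                                  # emit the HLS project to the output directory
report = model.build(csim=False, synth=True)   # run the backend's HLS compilation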

hls4ml.model.layers module

class hls4ml.model.layers.Activation(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
class hls4ml.model.layers.ApplyAlpha(model, name, attributes, inputs, outputs=None)

Bases: BatchNormalization

A custom layer to scale the output of a QDense layer that used ‘alpha != 1’. The inference computation uses BatchNormalization methods.

add_bias(bias, quantizer=None, precision=None)
add_weights(scale, quantizer=None, precision=None)
initialize()
class hls4ml.model.layers.BatchNormOnnx(model, name, attributes, inputs, outputs=None)

Bases: Layer

A transient layer formed from ONNX BatchNormalization that gets converted to BatchNormalization after the scale and bias are determined

initialize()
class hls4ml.model.layers.BatchNormalization(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
class hls4ml.model.layers.BiasAdd(model, name, attributes, inputs, outputs=None)

Bases: Merge

initialize()
class hls4ml.model.layers.Concatenate(model, name, attributes, inputs, outputs=None)

Bases: Merge

initialize()
class hls4ml.model.layers.Constant(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
class hls4ml.model.layers.Conv(model, name, attributes, inputs, outputs=None)

Bases: Layer

This is for the ONNX Conv node. Currently, it is only supported as an intermediate form that gets converted to an explicit ConvXD.

Note: these are always channels-last.

initialize()
class hls4ml.model.layers.Conv1D(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
class hls4ml.model.layers.Conv2D(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
class hls4ml.model.layers.Conv2DBatchnorm(model, name, attributes, inputs, outputs=None)

Bases: Conv2D

initialize()
class hls4ml.model.layers.Dense(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
class hls4ml.model.layers.DepthwiseConv1D(model, name, attributes, inputs, outputs=None)

Bases: Conv1D

initialize()
class hls4ml.model.layers.DepthwiseConv2D(model, name, attributes, inputs, outputs=None)

Bases: Conv2D

initialize()
class hls4ml.model.layers.Dot(model, name, attributes, inputs, outputs=None)

Bases: Merge

initialize()
class hls4ml.model.layers.Embedding(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
class hls4ml.model.layers.GRU(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
class hls4ml.model.layers.GarNet(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
ref_impl = False
class hls4ml.model.layers.GarNetStack(model, name, attributes, inputs, outputs=None)

Bases: GarNet

class hls4ml.model.layers.GlobalPooling1D(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
class hls4ml.model.layers.GlobalPooling2D(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
class hls4ml.model.layers.HardActivation(model, name, attributes, inputs, outputs=None)

Bases: Activation

Implements the hard sigmoid and hard tanh functions from Keras and QKeras. (Default parameters in QKeras are different, so they should be configured.) The hard sigmoid function is clip(slope * x + shift, 0, 1), and the hard tanh function is 2 * hard_sigmoid - 1.
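A NumPy sketch of the two functions as described above (the slope and shift defaults shown are those of Keras; QKeras uses different defaults):

import numpy as np

def hard_sigmoid(x, slope=0.2, shift=0.5):
    return np.clip(slope * x + shift, 0, 1)

def hard_tanh(x, slope=0.2, shift=0.5):
    return 2 * hard_sigmoid(x, slope, shift) - 1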

initialize()
class hls4ml.model.layers.Input(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
class hls4ml.model.layers.LSTM(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
class hls4ml.model.layers.Layer(model, name, attributes, inputs, outputs=None)

Bases: object

The base class for all layers, which are the nodes in the model graph. Note: they don’t necessarily correspond 1:1 with the network layers.

The expected attributes are index, trace (configurable), and result (type)

Parameters:
  • model (ModelGraph) – The ModelGraph that this Layer is part of

  • name (str) – The node name

  • attributes (dict) – Initial set of attributes required to construct the node (Layer)

  • inputs (list) – List of inputs to the layer

  • outputs (list, optional) – The optional list of named outputs of the node

add_bias(quantizer=None)
add_output_variable(shape, dim_names, out_name=None, var_name='layer{index}_out', type_name='layer{index}_t', precision=None)
add_weights(quantizer=None, compression=False)
add_weights_variable(name, var_name=None, type_name=None, precision=None, data=None, quantizer=None, compression=False)
property class_name
expected_attributes = [<hls4ml.model.attributes.Attribute object>, <hls4ml.model.attributes.ConfigurableAttribute object>, <hls4ml.model.attributes.TypeAttribute object>]
get_attr(key, default=None)
get_input_node(input_name=None)
get_input_variable(input_name=None)
get_layer_precision()
get_output_nodes(output_name=None)
get_output_use_map()
get_output_variable(output_name=None)
get_variables()
get_weights(var_name=None)
initialize()
set_attr(key, value)
class hls4ml.model.layers.LayerGroup(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
class hls4ml.model.layers.MatMul(model, name, attributes, inputs, outputs=None)

Bases: Layer

This is a matrix multiply. Currently, it is only supported as an intermediate form that gets converted to a Dense layer.

initialize()
class hls4ml.model.layers.Merge(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
class hls4ml.model.layers.PReLU(model, name, attributes, inputs, outputs=None)

Bases: Activation

initialize()
class hls4ml.model.layers.ParametrizedActivation(model, name, attributes, inputs, outputs=None)

Bases: Activation

initialize()
class hls4ml.model.layers.Pooling1D(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
class hls4ml.model.layers.Pooling2D(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
class hls4ml.model.layers.Quant(model, name, attributes, inputs, outputs=None)

Bases: Layer

This is a QONNX quantization layer. Optimizations should convert it before HLS is produced.

initialize()
class hls4ml.model.layers.Reshape(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
class hls4ml.model.layers.Resize(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
class hls4ml.model.layers.SeparableConv1D(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
class hls4ml.model.layers.SeparableConv2D(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
class hls4ml.model.layers.SimpleRNN(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
class hls4ml.model.layers.Softmax(model, name, attributes, inputs, outputs=None)

Bases: Activation

initialize()
class hls4ml.model.layers.SymbolicExpression(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
class hls4ml.model.layers.TernaryTanh(model, name, attributes, inputs, outputs=None)

Bases: Activation

initialize()
class hls4ml.model.layers.Transpose(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
class hls4ml.model.layers.ZeroPadding1D(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
class hls4ml.model.layers.ZeroPadding2D(model, name, attributes, inputs, outputs=None)

Bases: Layer

initialize()
class hls4ml.model.layers.classproperty(func)

Bases: object

hls4ml.model.layers.register_layer(name, clazz)

hls4ml.model.profiling module

hls4ml.model.quantizers module

This module contains the definitions of hls4ml quantizer classes. These classes apply a quantization function to the provided data. The quantization function may be defined locally or taken from a library, in which case the classes behave like simple wrappers.

class hls4ml.model.quantizers.BinaryQuantizer(bits=2)

Bases: Quantizer

Quantizer that quantizes to 0 and 1 (bits=1) or -1 and 1 (bits=2).

Parameters:

bits (int, optional) – Number of bits used by the quantizer. Defaults to 2.

Raises:

Exception – Raised if bits > 2
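Quantizer instances are applied by calling them on a data array; a minimal sketch:

import numpy as np
from hls4ml.model.quantizers import BinaryQuantizer, TernaryQuantizer

binary = BinaryQuantizer(bits=2)   # quantizes to -1 and 1
ternary = TernaryQuantizer()       # quantizes to -1, 0 and 1
data = np.array([-0.7, 0.05, 0.9])
qb = binary(data)
qt = ternary(data)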

class hls4ml.model.quantizers.QKerasBinaryQuantizer(config, xnor=False)

Bases: Quantizer

Wrapper around QKeras binary quantizer.

Parameters:

config (dict) – Config of the QKeras quantizer to wrap.

class hls4ml.model.quantizers.QKerasPO2Quantizer(config)

Bases: Quantizer

Wrapper around QKeras power-of-2 quantizers.

Parameters:

config (dict) – Config of the QKeras quantizer to wrap.

class hls4ml.model.quantizers.QKerasQuantizer(config)

Bases: Quantizer

Wrapper around QKeras quantizers.

Parameters:

config (dict) – Config of the QKeras quantizer to wrap.

class hls4ml.model.quantizers.QuantNodeQuantizer(precision)

Bases: Quantizer

This implements a quantizer for a FixedPrecisionType with width == integer. It is based on the sample implementation in finn-base.

class hls4ml.model.quantizers.Quantizer(bits, hls_type)

Bases: object

Base class for representing quantizers in hls4ml.

Subclasses of Quantizer are expected to wrap the quantizers of upstream tools (e.g., QKeras).

Parameters:
  • bits (int) – Total number of bits used by the quantizer.

  • hls_type (NamedType) – The hls4ml type used by the quantizer.

class hls4ml.model.quantizers.TernaryQuantizer

Bases: Quantizer

Quantizer that quantizes to -1, 0 and 1.

hls4ml.model.types module

This module contains the definitions of the classes hls4ml uses to represent data types. The data types are equivalents of C++/HLS data types. The basic type (PrecisionType) is defined as having a specified width in bits (its 'precision'). Precision types are given names for convenience (NamedType). Named types are the building blocks of higher-dimensional tensors, which are defined as arrays or FIFO streams in the generated code.
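A sketch of how these classes compose (names and widths are illustrative):

from hls4ml.model.types import FixedPrecisionType, IntegerPrecisionType, NamedType

fixed = FixedPrecisionType(width=16, integer=6)                               # ~ ap_fixed<16,6> / ac_fixed<16,6>
layer_t = NamedType('layer2_t', fixed)                                        # named type used in the generated code
index_t = NamedType('index_t', IntegerPrecisionType(width=8, signed=False))   # ~ ap_uint<8>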

class hls4ml.model.types.CompressedType(name, precision, index_precision, **kwargs)

Bases: NamedType

Class representing a compressed type in COO format.

Parameters:
  • name (str) – Name given to the type (used in generated C++/HLS).

  • precision (PrecisionType) – Precision data type.

  • index_precision (PrecisionType) – Precision of the index of COO format.

class hls4ml.model.types.CompressedWeightVariable(var_name, type_name, precision, data, reuse_factor, quantizer=None, **kwargs)

Bases: WeightVariable

Class representing a tensor containing the weights of a layer represented in the COO format.

Parameters:
  • var_name (str, optional) – Name of the variable in the generated C++/HLS.

  • type_name (str, optional) – Name of the data type used (in NamedType).

  • precision (PrecisionType, optional) – Precision data type.

  • data (ndarray) – The data array.

  • reuse_factor (_type_) – The reuse factor used to pad the data array.

  • quantizer (_type_, optional) – Quantizer to apply to the data array. Defaults to None.

next()
class hls4ml.model.types.ExponentPrecisionType(width=16, signed=True)

Bases: PrecisionType

Convenience class to differentiate ‘regular’ integers from those that represent exponents, e.g., for QKeras power-of-2 quantizers.

class hls4ml.model.types.ExponentType(name, precision, **kwargs)

Bases: NamedType

Special type used to mark an exponent type, used by the power-of-2 quantizers.

Parameters:
  • name (str) – Name given to the type (used in generated C++/HLS).

  • precision (PrecisionType) – Precision data type.

class hls4ml.model.types.ExponentWeightVariable(var_name, type_name, precision, data, quantizer=None, **kwargs)

Bases: WeightVariable

WeightVariable for exponent, aka power-of-2, data. The data should already be quantized by the quantizer.

Parameters:
  • var_name (str, optional) – Name of the variable in the generated C++/HLS.

  • type_name (str, optional) – Name of the data type used (in NamedType).

  • precision (PrecisionType, optional) – Precision data type.

  • data (ndarray) – The data array.

  • quantizer (_type_, optional) – Quantizer to apply to the data array. Defaults to None.

next()
class hls4ml.model.types.FixedPrecisionType(width=16, integer=6, signed=True, rounding_mode=None, saturation_mode=None, saturation_bits=None)

Bases: PrecisionType

Arbitrary precision fixed-point data type.

This type is equivalent to ap_(u)fixed and ac_fixed HLS types.

Parameters:
  • width (int, optional) – Total number of bits used. Defaults to 16.

  • integer (int, optional) – Number of integer bits left of the decimal point. Defaults to 6.

  • signed (bool, optional) – Signed or unsigned type. Defaults to True.

  • rounding_mode (RoundingMode, optional) – Quantization mode. Defaults to None (TRN).

  • saturation_mode (SaturationMode, optional) – Overflow mode. Defaults to None (WRAP).

  • saturation_bits (int, optional) – The number of saturation bits. Defaults to None.

property fractional
property rounding_mode
property saturation_bits
property saturation_mode
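For example, to request explicit rounding and saturation behaviour (a sketch; this corresponds to something like ap_fixed<16,6,AP_RND_CONV,AP_SAT> in the generated HLS):

from hls4ml.model.types import FixedPrecisionType, RoundingMode, SaturationMode

acc_t = FixedPrecisionType(
    width=16,
    integer=6,
    signed=True,
    rounding_mode=RoundingMode.RND_CONV,
    saturation_mode=SaturationMode.SAT,
)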
class hls4ml.model.types.InplaceTensorVariable(tv, input_var)

Bases: TensorVariable

A TensorVariable that is just a link to another TensorVariable.

Parameters:
  • tv (TensorVariable) – The tensor variable to link.

  • input_var (_type_) – The input variable that it should link to.

class hls4ml.model.types.IntegerPrecisionType(width=16, signed=True)

Bases: PrecisionType

Arbitrary precision integer data type.

This type is equivalent to ap_(u)int and ac_int HLS types.

Parameters:
  • width (int, optional) – Number of bits used. Defaults to 16.

  • signed (bool, optional) – Signed or unsigned type. Defaults to True.

property fractional
property integer
property rounding_mode
property saturation_bits
property saturation_mode
class hls4ml.model.types.NamedType(name, precision, **kwargs)

Bases: object

Class representing a named type.

For convenience, hls4ml gives names to data types used in the generated HLS. This is equivalent to defining types in C/C++ like:

typedef precision name;
Parameters:
  • name (str) – Name given to the type (used in generated C++/HLS).

  • precision (PrecisionType) – Precision data type.

class hls4ml.model.types.PackedType(name, precision, n_elem, n_pack, **kwargs)

Bases: NamedType

A type in which multiple tensor elements are concatenated and stored as a single element. It is used by the streaming implementations to store the elements of a tensor's last dimension as one packed element.

The tensor of shape (H, W, C) will be represented as a FIFO stream having H * W / n_pack elements where each element will be a concatenation of n_elem * n_pack elements of the original tensor.

Parameters:
  • name (str) – Name given to the type (used in generated C++/HLS).

  • precision (PrecisionType) – Precision data type.

  • n_elem (int) – Number of packed elements.

  • n_pack (int) – _description_

class hls4ml.model.types.PrecisionType(width, signed)

Bases: object

Base class representing a precision type of specified width.

Subclasses of this provide concrete implementations of arbitrary precision integer and fixed-point types.

Parameters:
  • width (int) – Number of bits used by the precision type.

  • signed (bool) – Signed or unsigned type.

class hls4ml.model.types.RoundingMode(value)

Bases: Enum

An enumeration.

RND = 3
RND_CONV = 7
RND_INF = 5
RND_MIN_INF = 6
RND_ZERO = 4
TRN = 1
TRN_ZERO = 2
classmethod from_string(mode)
class hls4ml.model.types.SaturationMode(value)

Bases: Enum

An enumeration.

SAT = 2
SAT_SYM = 4
SAT_ZERO = 3
WRAP = 1
classmethod from_string(mode)
class hls4ml.model.types.Source(code)

Bases: object

Class representing generated source code blocks.

Parameters:

code (str) – Generated source code.

class hls4ml.model.types.TensorVariable(shape, dim_names, var_name='layer{index}', type_name='layer{index}_t', precision=None, **kwargs)

Bases: Variable

Class representing the output of a layer (like an activation tensor).

Parameters:
  • shape (list, tuple) – Shape of the tensor.

  • dim_names (list, tuple) – Names given to the dimensions of the tensor.

  • var_name (str, optional) – Name of the variable in the generated C++/HLS. Defaults to layer{index}.

  • type_name (str, optional) – Name of the data type used (in NamedType). Defaults to layer{index}_t.

  • precision (PrecisionType, optional) – Precision data type. Defaults to None.

get_shape()
size()
size_cpp()
class hls4ml.model.types.UnspecifiedPrecisionType

Bases: PrecisionType

Class representing an unspecified precision type.

Instances of this class are expected to be replaced with concrete precision types during conversion.

class hls4ml.model.types.Variable(var_name, atype, **kwargs)

Bases: object

Base class representing a named multidimensional tensor.

Parameters:
  • var_name (str) – Name of the variable in the generated C++/HLS.

  • atype (NamedType) – Data type used by the tensor.

class hls4ml.model.types.WeightVariable(var_name, type_name, precision, data, quantizer=None, **kwargs)

Bases: Variable

Class representing a tensor containing the weights of a layer.

Precision type of the instance can be modified with the update_precision method.

Parameters:
  • var_name (str, optional) – Name of the variable in the generated C++/HLS.

  • type_name (str, optional) – Name of the data type used (in NamedType).

  • precision (PrecisionType, optional) – Precision data type.

  • data (ndarray) – The data array.

  • quantizer (_type_, optional) – Quantizer to apply to the data array. Defaults to None.

next()
update_precision(new_precision)
class hls4ml.model.types.XnorPrecisionType

Bases: PrecisionType

Convenience class to differentiate ‘regular’ integers from BNN XNOR ones.

hls4ml.model.types.find_minimum_width(data, signed=True)

Helper function to find the minimum integer width to express all entries in the data array without saturation / overflow.

Parameters:
  • data (ndarray) – Data array.

  • signed (bool, optional) – Signed or unsigned type. Defaults to True.

Returns:

Minimum integer width required.

Return type:

int
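For example (a sketch):

import numpy as np
from hls4ml.model.types import find_minimum_width

weights = np.array([-4, 0, 3])
width = find_minimum_width(weights, signed=True)  # smallest signed integer width that can hold [-4, 3]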

Module contents