taiyo_utils.models
AdaBoost
(data, random_state_value=None, test_value=None) AdaBoost class Implementation.
DecisionTree
(min_samples_leaf) Decision tree classifier class.
GradientBoosting
(data, random_state_value=None, test_value=None) Gradient Boosting classifier class Implementation.
Helper
(n_estimators, random_state, x, y, n_splits)
ARIMA
(order, *args, **kwargs) Base stats API model (ARIMA) class Implementation.
Args:
- data: dataframe that contains the data
Returns:
- an ARIMA model
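For context, a minimal sketch of fitting a model with the given order, assuming the wrapper delegates to statsmodels (an assumption; the series here is synthetic):

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    series = np.cumsum(np.random.randn(200))       # synthetic random-walk series
    fitted = ARIMA(series, order=(1, 1, 1)).fit()  # order = (p, d, q)
    forecast = fitted.forecast(steps=5)            # forecast the next 5 values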
BDLSTM
(input_shape, neurons, dropouts, activations) Implementation of Bidirectional LSTM.
Bagging
(model, n_estimators=10) Bagging ensemble class Implementation.
Args:
- model (Keras Model): Taiyo_ model class instance from the repo
- n_estimators (int): the number of estimators for the bagging model; e.g. 10 means ten copies of the given model will be made and their results combined in a weighted-average fashion
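A minimal numpy sketch of the weighted-average combination described above; the inverse-validation-error weighting is an illustrative assumption, not necessarily the library's scheme:

    import numpy as np

    # predictions from n_estimators independently trained copies of the model,
    # shape (n_estimators, n_samples)
    preds = np.array([[1.0, 2.0], [1.2, 1.8], [0.9, 2.1]])
    val_errors = np.array([0.10, 0.20, 0.15])  # each copy's validation error

    weights = 1.0 / val_errors       # better copies get larger weights
    weights /= weights.sum()         # normalize so the weights sum to 1
    combined = weights @ preds       # weighted average across the copies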
CatBoost
(iterations, depth, learning_rate, loss_function='RMSE')
ConvBDLSTM
(input_shape, neurons, dropouts, activations, network_width, conv_depth, kernel_size, num_lstmunits) Implementation of 1D Conv + BDLSTM model.
Args:
- input_shape: shape of input data, (n_memory_steps, n_in_features)
- network_width: the number of parallel 1D Conv + BDLSTM branches
- conv_depth: the number of 1D Conv + MaxPool layers before the BDLSTM
- num_lstmunits: number of LSTM units in the BDLSTM
- kernel_size: the kernel size in the 1D Conv layers (the most important parameter; ideally 9)
Architecture (network_width parallel branches, concatenated):
(1D Conv -- MaxPool1D) -- (1D Conv -- MaxPool1D) -- ... -- (BDLSTM -- Dropout -- Dense) \
(1D Conv -- MaxPool1D) -- (1D Conv -- MaxPool1D) -- ... -- (BDLSTM -- Dropout -- Dense) -- Concat -- Dense
(1D Conv -- MaxPool1D) -- (1D Conv -- MaxPool1D) -- ... -- (BDLSTM -- Dropout -- Dense) /
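A minimal Keras sketch of that branch structure; layer widths, dropout rate, and output size are illustrative assumptions:

    from tensorflow.keras import layers, Model

    def build_conv_bdlstm(input_shape=(30, 4), network_width=3, conv_depth=2,
                          kernel_size=9, num_lstmunits=32, n_outputs=1):
        inp = layers.Input(shape=input_shape)
        branches = []
        for _ in range(network_width):
            x = inp
            for _ in range(conv_depth):  # 1D Conv + MaxPool blocks before the BDLSTM
                x = layers.Conv1D(32, kernel_size, padding="same", activation="relu")(x)
                x = layers.MaxPooling1D(pool_size=2, padding="same")(x)
            x = layers.Bidirectional(layers.LSTM(num_lstmunits))(x)
            x = layers.Dropout(0.2)(x)
            branches.append(layers.Dense(16, activation="relu")(x))
        merged = layers.Concatenate()(branches)  # Concat -- Dense
        return Model(inp, layers.Dense(n_outputs)(merged))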
DecisionTree
(min_samples_leaf) Decision Tree class Implementation.
Arguments:
- min_samples_leaf: the minimum number of samples required at a leaf node
Returns:
- a decision tree model based on the inputs
DilatedConv
(input_shape, neurons, dropouts, activations, n_filters, filter_width) Implementation of a dilated causal convolutional model.
Model architecture:
- 16 dilated causal convolutional blocks
- preprocessing and postprocessing (time distributed) fully connected layers (convolutions with filter width 1): 16 output units
- 32 filters of width 2 per block
- exponentially increasing dilation rate with a reset (1, 2, 4, 8, ..., 128, 1, 2, ..., 128)
- gated activations
- residual and skip connections
- 2 (time distributed) fully connected layers to map the sum of skip outputs to the final output
- neurons (any int, e.g. 8) sets the schedule, so that the model looks like [1, 2, 4, ..., 2^8, 1, 2, 4, ..., 2^8]
Note: some activations are fixed and not meant to be changed.
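A minimal Keras sketch of one gated, dilated causal block with residual and skip connections, the pattern the list above describes; sizes and the block count are illustrative:

    from tensorflow.keras import layers

    def gated_residual_block(x, n_filters=32, filter_width=2, dilation_rate=1):
        # dilated causal convolution with a gated activation: tanh(.) * sigmoid(.)
        f = layers.Conv1D(n_filters, filter_width, dilation_rate=dilation_rate,
                          padding="causal", activation="tanh")(x)
        g = layers.Conv1D(n_filters, filter_width, dilation_rate=dilation_rate,
                          padding="causal", activation="sigmoid")(x)
        z = layers.Multiply()([f, g])
        # width-1 convolutions produce the residual and skip contributions
        residual = layers.Conv1D(n_filters, 1)(z)
        skip = layers.Conv1D(n_filters, 1)(z)
        return layers.Add()([x, residual]), skip

    inp = layers.Input(shape=(128, 1))
    x = layers.Conv1D(32, 1)(inp)   # preprocessing conv (width 1) so shapes match
    skips = []
    for d in (1, 2, 4, 8):          # exponentially increasing dilation rates
        x, s = gated_residual_block(x, dilation_rate=d)
        skips.append(s)
    out = layers.Add()(skips)       # the sum of skip outputs feeds the final layers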
EncoderDecoderRNN
(input_shape, output_shape, neurons, dropouts, activations, cell) Implementation of an Encoder-Decoder RNN model without teacher forcing.
Args:
- input_shape: shape of input data, (n_memory_steps, n_in_features)
- output_shape: shape of output data, (n_forecast_steps, n_out_features)
- cell: cell in the RNN part, 'SimpleRNN' / 'LSTM' / 'GRU'
- cell_units: number of hidden cell units in the RNN part, integer, e.g. 100
- dense_units: units of the hidden dense layers, a tuple, e.g. (20, 30)
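A minimal Keras sketch of the encoder-decoder layout without teacher forcing, assuming an LSTM cell; the shapes are illustrative:

    from tensorflow.keras import layers, Model

    n_memory_steps, n_in_features = 30, 4
    n_forecast_steps, n_out_features = 5, 1

    inp = layers.Input(shape=(n_memory_steps, n_in_features))
    state = layers.LSTM(100)(inp)                     # encoder: compress the window
    x = layers.RepeatVector(n_forecast_steps)(state)  # no teacher forcing: repeat state
    x = layers.LSTM(100, return_sequences=True)(x)    # decoder
    out = layers.TimeDistributed(layers.Dense(n_out_features))(x)
    model = Model(inp, out)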
GRU
(input_shape, neurons, dropouts, activations) Gated Recurrent Unit (GRU) network Implementation.
Identity
() Base stats API model class Implementation.
Args:
- data: dataframe that contains the data
Returns:
- a stats API model
KNNRegressor
(n_neighbors) KNN Regressor class Implementation.
LSTM
(input_shape, neurons, dropouts, activations, r_dropouts) Long Short-Term Memory class abstraction.
Args:
- r_dropouts (list(float)): list of floats (0-1) of length neurons - 1 defining the dropout at each level; do not include the final layer
LogisticRegression
(regularization_value) Logistic Regression class Implementation.
Arguments:
- regularization_value: the regularization strength
Returns:
- a logistic regression model based on the inputs
MA
(days) Base stats API model (Moving Average) class Implementation.
Args:
- data: dataframe that contains the data
Returns:
- a moving average model
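For context, a minimal pandas sketch of what a days-window moving-average forecaster computes; the column name and data are illustrative:

    import pandas as pd

    data = pd.DataFrame({"close": [10.0, 11.0, 12.0, 11.5, 12.5, 13.0]})
    days = 3
    # the forecast for each step is the mean of the previous `days` observations
    forecast = data["close"].rolling(window=days).mean().shift(1)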
MLP
(hidden_layer_sizes=2, activation='relu', solver='lbfgs', max_iter=200) Multi-Layer Perceptron class Implementation.
Args:
- length (int): length of the input array - part of the definition of the first layer shape
- numAttr (int): number of attributes - second part of the definition of the first layer shape
- neurons (list(int)): array of ints defining the number of neurons in each layer and the number of layers (by the length of the array); do not include the final layer
- dropouts (list(float)): array of floats (0-1) of length neurons - 1 defining the dropout at each level; do not include the final layer
- activations (list(str)): array of strings of length neurons to define the activation of each layer; do not include the final layer
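The constructor parameters mirror scikit-learn's MLPRegressor; a minimal sketch assuming the class wraps it (an assumption), on synthetic data:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    X = np.random.rand(100, 4)
    y = X.sum(axis=1)                       # synthetic regression target
    model = MLPRegressor(hidden_layer_sizes=(8, 8), activation="relu",
                         solver="lbfgs", max_iter=200)
    model.fit(X, y)
    preds = model.predict(X[:5])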
Model
() This is an abstract class that is used as a parent class for all model abstractions.
NF
(shift) Base stats API model (Naive Forecaster) class Implementation.
Args:
- data: dataframe that contains the data
Returns:
- a naive forecaster model
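A minimal pandas sketch of the naive forecaster: the prediction at each step is simply the value shift steps earlier; the data are illustrative:

    import pandas as pd

    data = pd.DataFrame({"close": [10.0, 11.0, 12.0, 11.5, 12.5]})
    shift = 1
    forecast = data["close"].shift(shift)   # naive forecast: repeat the prior value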
RNN
(input_shape, neurons, dropouts, activations) Base RNN class Implementation.
Args:
- input_shape (tuple(int, int)): a tuple of (length of sequence, number of features)
- neurons (list(int)): array of ints defining the number of neurons in each layer and the number of layers (by the length of the array); do not include the final layer
- dropouts (list(float)): array of floats (0-1) of length neurons - 1 defining the dropout at each level; do not include the final layer
- activations (list(str)): array of strings of length neurons to define the activation of each layer; do not include the final layer
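A minimal Keras sketch of how such list-driven stacking is typically assembled; this is an illustration of the Args above, not the library's actual code:

    from tensorflow.keras import layers, Sequential

    def build_rnn(input_shape=(30, 4), neurons=(32, 16), dropouts=(0.2,),
                  activations=("tanh", "tanh"), n_outputs=1):
        model = Sequential()
        model.add(layers.Input(shape=input_shape))
        for i, (n, act) in enumerate(zip(neurons, activations)):
            # every listed layer except the last returns sequences so they stack
            model.add(layers.SimpleRNN(n, activation=act,
                                       return_sequences=(i < len(neurons) - 1)))
            if i < len(dropouts):
                model.add(layers.Dropout(dropouts[i]))
        model.add(layers.Dense(n_outputs))  # final layer, not part of neurons
        return model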
RNNDense
(input_shape, output_shape, neurons, dropouts, activations, cell, dense_units) Implementation of RNN + Dense layer model.
Args:
- input_shape: shape of input data, (n_memory_steps, n_in_features)
- output_shape: shape of output data, (n_forecast_steps, n_out_features)
- cell: cell in the RNN part, 'SimpleRNN' / 'LSTM' / 'GRU'
- neurons: number of hidden cell units in the RNN part, integer, e.g. 100
- dense_units: units of the hidden dense layers, a tuple, e.g. (20, 30)
RNNHiddenDense
(input_shape, output_shape, neurons, dropouts, activations, cell, dense_units, stack_size) Implementation of a stacked RNN + Dense layer model; here the hidden states go to the dense layers.
Args:
- input_shape: shape of input data, (n_memory_steps, n_in_features)
- output_shape: shape of output data, (n_forecast_steps, n_out_features)
- cell: cell in the RNN part, 'SimpleRNN' / 'LSTM' / 'GRU'
- cell_units: number of hidden cell units in the RNN part, integer, e.g. 100
- dense_units: units of the hidden dense layers, a tuple, e.g. (20, 30)
- stack_size: the size of the stack of RNN layers before the dense ones
SVR
(kernel) SVR class Implementation.
SimpleRNN
(input_shape, neurons, dropouts, activations) SimpleRNN model using RNN layers.
StackingRegressor
(n_estimators, random_state)
VotingRegressor
(min_samples_split, n_estimators, max_features)
XGBoost
(n_estimators, max_depth, learning_rate, min_child_weight, reg_alpha)
KNeighbors
(data, random_state_value=None, test_value=None)
LightGBM
() LightGBM classifier class Implementation.
LogisticRegression
(regularization_value) Logistic Regression classifier class Implementation.
NaiveBayes
() Naive Bayes classifier class Implementation.
RandomForest
(min_samples_split, n_estimators, max_features) Random Forest class Implementation.
SKLearnBaseEstimator
(args, kwargs) Base SK Learn model class Implementation.
Arguments:
- args: (Dict) arguments of the base model
- kwargs: (Dict) kwargs of the base model
Returns:
None
SVM
(kernel, C, gamma) Support vector machine class Implementation.
Voting
(kernel, C, gamma, voting, min_samples_split, n_estimators, max_features) Voting classifier class Implementation.
XGBoost
(n_estimators, max_depth, learning_rate, objective) XGBoost classifier class Implementation.
classical_models
classifiers
ensembles
linear_model
neural_network
new_models
toBaseModel