1. Kaggle Jane Street Market Prediction: Baseline Code Walkthrough

Introduction

Jane Street Market Prediction

I. Introduction to Data Competitions

II. Competition Problem Description

III. Approach to the Problem

IV. Baseline Code Walkthrough

Link (extraction code: 1234)

1. Autoencoder model

def create_autoencoder(input_dim,output_dim,noise=0.05):
    i = Input(input_dim)
    # Autoencoder part
    # autoencoder: x = decoder(encoder(x)) => 130 -> 64 -> 130
    # Encoder: compresses the input to a lower-dimensional representation
    encoded = BatchNormalization()(i)
    encoded = GaussianNoise(noise)(encoded)
    encoded = Dense(64,activation='relu')(encoded)
    # Decoder
    # maps the representation back up to the original input dimension
    decoded = Dropout(0.2)(encoded)
    decoded = Dense(input_dim,name='decoded')(decoded)
    
    # then train a classification head on top of the decoded output
    x = Dense(32,activation='relu')(decoded)
    x = BatchNormalization()(x)
    x = Dropout(0.2)(x)
    x = Dense(32,activation='relu')(x)
    x = BatchNormalization()(x)
    x = Dropout(0.2)(x)    
    x = Dense(output_dim,activation='sigmoid',name='label_output')(x)
    
    encoder = Model(inputs=i,outputs=encoded)
    autoencoder = Model(inputs=i,outputs=[decoded,x])
    # The loss has two parts. Reconstruction: mean squared error. Classification: binary cross-entropy
    autoencoder.compile(optimizer=Adam(0.005),loss={'decoded':'mse','label_output':'binary_crossentropy'})
    return autoencoder, encoder
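
A minimal usage sketch of the function above (assuming NumPy and the Keras imports from the full code further down; the sample count and shapes here are made up for illustration):

import numpy as np

n_samples, n_features, n_targets = 256, 130, 5   # illustrative sizes
X_demo = np.random.randn(n_samples, n_features).astype('float32')
y_demo = np.random.randint(0, 2, (n_samples, n_targets)).astype('float32')

autoencoder, encoder = create_autoencoder(n_features, n_targets, noise=0.05)
# two targets: reconstruct X (mse) and predict the labels (binary cross-entropy)
autoencoder.fit(X_demo, (X_demo, y_demo), epochs=1, batch_size=64, verbose=0)
print(encoder.predict(X_demo).shape)         # (256, 64): compressed representation
print(autoencoder.predict(X_demo)[0].shape)  # (256, 130): reconstruction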

2. Fully connected network (MLP) model

def create_model(input_dim,output_dim,encoder):
    inputs = Input(input_dim)
    # the encoder compresses the inputs into a more useful learned representation
    x = encoder(inputs)
    x = Concatenate()([x,inputs]) #use both raw and encoded features
    x = BatchNormalization()(x)
    x = Dropout(0.13)(x)
    # several hidden layers
    hidden_units = [384, 896, 896, 394]
    for idx, hidden_unit in enumerate(hidden_units):
        x = Dense(hidden_unit)(x)
        x = BatchNormalization()(x)
        x = Lambda(tf.keras.activations.relu)(x)
        x = Dropout(0.25)(x)
    # output layer
    x = Dense(output_dim,activation='sigmoid')(x)
    model = Model(inputs=inputs,outputs=x)
    # label_smoothing softens the hard 0/1 targets
    model.compile(optimizer=Adam(0.0005),loss=BinaryCrossentropy(label_smoothing=0.05),metrics=[tf.keras.metrics.AUC(name = 'auc')])
    return model
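
The label_smoothing=0.05 option replaces each hard 0/1 target y with y * (1 - 0.05) + 0.025 before the cross-entropy is computed. A small sketch of that equivalence (the probabilities below are made up):

import numpy as np
from tensorflow.keras.losses import BinaryCrossentropy

y_true = np.array([[0.], [1.], [1.], [0.]], dtype='float32')
y_pred = np.array([[0.1], [0.8], [0.6], [0.3]], dtype='float32')
a = 0.05
smoothed = y_true * (1.0 - a) + 0.5 * a      # 0 -> 0.025, 1 -> 0.975

loss_smooth = BinaryCrossentropy(label_smoothing=a)(y_true, y_pred)
loss_manual = BinaryCrossentropy()(smoothed, y_pred)
print(float(loss_smooth), float(loss_manual))  # should agree up to float error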

3. Full code

The main TensorFlow/Keras components imported below:

tf.keras.layers.BatchNormalization
tf.keras.layers.Lambda
tf.keras.layers.GaussianNoise
tf.keras.layers.Activation
tf.keras.losses.BinaryCrossentropy

from tensorflow.keras.layers import Input, Dense, BatchNormalization, Dropout, Concatenate, Lambda, GaussianNoise, Activation
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.losses import BinaryCrossentropy
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping
import tensorflow as tf
import numpy as np
import pandas as pd
from sklearn.model_selection import GroupKFold

from tqdm import tqdm
from random import choices


# PurgedGroupTimeSeriesSplit: splits the dataset according to time order
import numpy as np
from sklearn.model_selection import KFold
from sklearn.model_selection._split import _BaseKFold, indexable, _num_samples
from sklearn.utils.validation import _deprecate_positional_args

# modified code for group gaps; source
# https://github.com/getgaurav2/scikit-learn/blob/d4a3af5cc9da3a76f0266932644b884c99724c57/sklearn/model_selection/_split.py#L2243
class PurgedGroupTimeSeriesSplit(_BaseKFold):
    """Time Series cross-validator variant with non-overlapping groups.
    Allows for a gap in groups to avoid potentially leaking info from
    train into test if the model has windowed or lag features.
    Provides train/test indices to split time series data samples
    that are observed at fixed time intervals according to a
    third-party provided group.
    In each split, test indices must be higher than before, and thus shuffling
    in cross validator is inappropriate.
    This cross-validation object is a variation of :class:`KFold`.
    In the kth split, it returns first k folds as train set and the
    (k+1)th fold as test set.
    The same group will not appear in two different folds (the number of
    distinct groups has to be at least equal to the number of folds).
    Note that unlike standard cross-validation methods, successive
    training sets are supersets of those that come before them.
    Read more in the :ref:`User Guide <cross_validation>`.
    Parameters
    ----------
    n_splits : int, default=5
        Number of splits. Must be at least 2.
    max_train_group_size : int, default=Inf
        Maximum group size for a single training set.
    group_gap : int, default=None
        Gap (in number of groups) left out between the train and test sets.
    max_test_group_size : int, default=Inf
        Maximum group size for a single test set.
    """

    @_deprecate_positional_args
    def __init__(self,
                 n_splits=5,
                 *,
                 max_train_group_size=np.inf,
                 max_test_group_size=np.inf,
                 group_gap=None,
                 verbose=False
                 ):
        super().__init__(n_splits, shuffle=False, random_state=None)
        self.max_train_group_size = max_train_group_size
        self.group_gap = group_gap
        self.max_test_group_size = max_test_group_size
        self.verbose = verbose

    def split(self, X, y=None, groups=None):
        """Generate indices to split data into training and test set.
        Parameters
        ----------
        X : array-like of shape (n_samples, n_features)
            Training data, where n_samples is the number of samples
            and n_features is the number of features.
        y : array-like of shape (n_samples,)
            Always ignored, exists for compatibility.
        groups : array-like of shape (n_samples,)
            Group labels for the samples used while splitting the dataset into
            train/test set.
        Yields
        ------
        train : ndarray
            The training set indices for that split.
        test : ndarray
            The testing set indices for that split.
        """
        if groups is None:
            raise ValueError(
                "The 'groups' parameter should not be None")
        X, y, groups = indexable(X, y, groups)
        n_samples = _num_samples(X)
        n_splits = self.n_splits
        group_gap = self.group_gap
        max_test_group_size = self.max_test_group_size
        max_train_group_size = self.max_train_group_size
        n_folds = n_splits + 1
        group_dict = {}
        u, ind = np.unique(groups, return_index=True)
        unique_groups = u[np.argsort(ind)]
        n_samples = _num_samples(X)
        n_groups = _num_samples(unique_groups)
        for idx in np.arange(n_samples):
            if (groups[idx] in group_dict):
                group_dict[groups[idx]].append(idx)
            else:
                group_dict[groups[idx]] = [idx]
        if n_folds > n_groups:
            raise ValueError(
                ("Cannot have number of folds={0} greater than"
                 " the number of groups={1}").format(n_folds,
                                                     n_groups))

        group_test_size = min(n_groups // n_folds, max_test_group_size)
        group_test_starts = range(n_groups - n_splits * group_test_size,
                                  n_groups, group_test_size)
        for group_test_start in group_test_starts:
            train_array = []
            test_array = []

            group_st = max(0, group_test_start - group_gap - max_train_group_size)
            for train_group_idx in unique_groups[group_st:(group_test_start - group_gap)]:
                train_array_tmp = group_dict[train_group_idx]
                
                train_array = np.sort(np.unique(
                                      np.concatenate((train_array,
                                                      train_array_tmp)),
                                      axis=None), axis=None)

            train_end = train_array.size
 
            for test_group_idx in unique_groups[group_test_start:
                                                group_test_start +
                                                group_test_size]:
                test_array_tmp = group_dict[test_group_idx]
                test_array = np.sort(np.unique(
                                              np.concatenate((test_array,
                                                              test_array_tmp)),
                                     axis=None), axis=None)

            test_array  = test_array[group_gap:]
            
            
            if self.verbose > 0:
                    pass
                    
            yield [int(i) for i in train_array], [int(i) for i in test_array]
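
# Illustrative, self-contained check of the splitter on synthetic groups
# (20 fake "dates" with 5 rows each; the _demo names are made up for this sketch).
# group_gap keeps a buffer of dates between train and test, so windowed or
# lagged features built from train dates cannot leak into the test fold.
_X_demo = np.arange(100).reshape(-1, 1)
_groups_demo = np.repeat(np.arange(20), 5)
_cv_demo = PurgedGroupTimeSeriesSplit(n_splits=4, group_gap=2)
for _tr, _te in _cv_demo.split(_X_demo, groups=_groups_demo):
    print('train size:', len(_tr),
          'test dates:', _groups_demo[_te].min(), '-', _groups_demo[_te].max())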


# Load the training data
# TRAINING switches between training and submission: TRAINING = True to train, TRAINING = False to predict/submit
TRAINING = True
USE_FINETUNE = False     
FOLDS = 4 # 4 folds
SEED = 42

# Read the data and use part of it as our training set
train = pd.read_csv('train.csv',nrows = None)
# Filter with the query expression 'date > 85', then reset to a plain integer index
train = train.query('date > 85').reset_index(drop = True) 
# Cast float64 => float32 to reduce memory usage
train = train.astype({c: np.float32 for c in train.select_dtypes(include='float64').columns}) #limit memory use
# Fill missing values with the per-column means
train.fillna(train.mean(),inplace=True)
# Keep only the rows with positive weight
train = train.query('weight > 0').reset_index(drop = True)
# Build the action column (1 only when all five resp columns are positive)
#train['action'] = (train['resp'] > 0).astype('int')
train['action'] =  (  (train['resp_1'] > 0 ) & (train['resp_2'] > 0 ) & (train['resp_3'] > 0 ) & (train['resp_4'] > 0 ) &  (train['resp'] > 0  )   ).astype('int')                                                                                                                
# the 130 feature columns
features = [c for c in train.columns if 'feature' in c]

resp_cols = ['resp_1', 'resp_2', 'resp_3', 'resp', 'resp_4']
# X,y
X = train[features].values
y = np.stack([(train[c] > 0).astype('int') for c in resp_cols]).T #Multitarget
# per-column means of features 1..129, used later to fill missing values at submission time
f_mean = np.mean(train[features[1:]].values,axis=0)
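
# Sanity check (illustrative): y stacks five binary targets, one per resp
# horizon, and 'action' is 1 only when every horizon is positive, so it
# should equal the row-wise minimum of y.
print(X.shape, y.shape)                                # (n_rows, 130), (n_rows, 5)
print((y.min(axis=1) == train['action'].values).all()) # expect True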


# Autoencoder
def create_autoencoder(input_dim,output_dim,noise=0.05):
    i = Input(input_dim)
    # Autoencoder part
    # autoencoder: x = decoder(encoder(x)) => 130 -> 64 -> 130
    # Encoder: compresses the input to a lower-dimensional representation
    encoded = BatchNormalization()(i)
    encoded = GaussianNoise(noise)(encoded)
    encoded = Dense(64,activation='relu')(encoded)
    # Decoder
    # maps the representation back up to the original input dimension
    decoded = Dropout(0.2)(encoded)
    decoded = Dense(input_dim,name='decoded')(decoded)
    
    # then train a classification head on top of the decoded output
    x = Dense(32,activation='relu')(decoded)
    x = BatchNormalization()(x)
    x = Dropout(0.2)(x)
    x = Dense(32,activation='relu')(x)
    x = BatchNormalization()(x)
    x = Dropout(0.2)(x)    
    x = Dense(output_dim,activation='sigmoid',name='label_output')(x)
    
    encoder = Model(inputs=i,outputs=encoded)
    autoencoder = Model(inputs=i,outputs=[decoded,x])
    # The loss has two parts. Reconstruction: mean squared error. Classification: binary cross-entropy
    autoencoder.compile(optimizer=Adam(0.005),loss={'decoded':'mse','label_output':'binary_crossentropy'})
    return autoencoder, encoder


# Fully connected network (MLP)
def create_model(input_dim,output_dim,encoder):
    inputs = Input(input_dim)
    # the encoder compresses the inputs into a more useful learned representation
    x = encoder(inputs)
    # Concatenate the encoder output with the raw features, so both the original
    # information and the compressed representation are available and the network
    # decides what to use. The downside is a longer input vector, hence more
    # parameters and a harder optimization problem.
    x = Concatenate()([x,inputs]) #use both raw and encoded features
    x = BatchNormalization()(x)
    x = Dropout(0.13)(x)
    # several hidden layers
    hidden_units = [384, 896, 896, 394]
    for idx, hidden_unit in enumerate(hidden_units):
        x = Dense(hidden_unit)(x)
        x = BatchNormalization()(x)
        x = Lambda(tf.keras.activations.relu)(x)
        x = Dropout(0.25)(x)
    # output layer
    x = Dense(output_dim,activation='sigmoid')(x)
    model = Model(inputs=inputs,outputs=x)
    # label_smoothing softens the hard 0/1 targets
    model.compile(optimizer=Adam(0.0005),loss=BinaryCrossentropy(label_smoothing=0.05),metrics=[tf.keras.metrics.AUC(name = 'auc')])
    return model


# Define and train the autoencoder. Gaussian noise is added to the batch-normalized inputs as regularization; after training, the encoder layers are frozen so they are not updated further.
autoencoder, encoder = create_autoencoder(X.shape[-1],y.shape[-1],noise=0.1)
if TRAINING:
    autoencoder.fit(X,(X,y),
                    epochs=1000,
                    batch_size=4096, 
                    validation_split=0.1,
                    callbacks=[EarlyStopping('val_loss',patience=10,restore_best_weights=True)])
    encoder.save_weights('./encoder.hdf5')
else:
    encoder.load_weights('./encoder.hdf5')
encoder.trainable = False
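
# Quick check (illustrative): with trainable = False the encoder exposes no
# trainable weights, so the MLPs built on top of it will not update it.
print(len(encoder.trainable_weights))  # expect 0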


# Training and prediction
FOLDS = 5 # note: this overrides the FOLDS = 4 set above
SEED = 42

oof = np.zeros((X.shape[0],5))  # out-of-fold predictions for the five resp targets

if TRAINING:
    gkf = PurgedGroupTimeSeriesSplit(n_splits = FOLDS, group_gap=20)
    splits = list(gkf.split(y, groups=train['date'].values))

    for fold, (train_indices, test_indices) in enumerate(splits):
        model = create_model(130, 5, encoder)
        X_train, X_test = X[train_indices], X[test_indices]
        y_train, y_test = y[train_indices], y[test_indices]
        # First train on the training split, then fine-tune on the held-out split
        model.fit(X_train,y_train,validation_data=(X_test,y_test),
                  epochs=100,batch_size=4096,
                  callbacks=[EarlyStopping('val_auc',mode='max',patience=10,restore_best_weights=True)])
        
        model.save_weights(f'./model_{SEED}_{fold}.hdf5')
        model.compile(Adam(0.00001),loss='binary_crossentropy')
        
        model.fit(X_test,y_test,epochs=3,batch_size=4096)
        model.save_weights(f'./model_{SEED}_{fold}_finetune.hdf5')
        
        oof[test_indices] = model.predict(X_test)
else:
    models = []
    for f in range(FOLDS):
        model = create_model(130, 5, encoder)
        if USE_FINETUNE:
            model.load_weights(f'./model_{SEED}_{f}_finetune.hdf5')
        else:
            model.load_weights(f'./model_{SEED}_{f}.hdf5')
        models.append(model)


# Scoring
from sklearn.metrics import roc_auc_score,roc_curve

score_oof = roc_auc_score(train['action'].values,
                         np.median(np.where(oof[:,:] >= 0.5,1,0).astype(int),1))
print(score_oof)
# this AUC is in line with the online (leaderboard) score
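
# What the scoring line above does, on a made-up example: threshold each of the
# five per-resp probabilities at 0.5, then take the row-wise median, i.e. a
# majority vote across the five targets.
_demo_oof = np.array([[0.6, 0.4, 0.7, 0.55, 0.3],
                      [0.2, 0.1, 0.6, 0.40, 0.3]])
print(np.median(np.where(_demo_oof >= 0.5, 1, 0).astype(int), 1))  # [1. 0.]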


# Submission
if not TRAINING:
    f = np.median   # median
    models = models[-2:]  # keep only the models from the last two folds
    import janestreet
    env = janestreet.make_env()
    th = 0.503  # decision threshold
    # iterate over the test samples served by the janestreet environment
    for (test_df, pred_df) in tqdm(env.iter_test()):
        if test_df['weight'].item() > 0:
            x_tt = test_df.loc[:, features].values
            if np.isnan(x_tt[:, 1:].sum()):
                # fill missing values with the precomputed feature means
                x_tt[:, 1:] = np.nan_to_num(x_tt[:, 1:]) + np.isnan(x_tt[:, 1:]) * f_mean
                
            # average the per-resp_ predictions over the kept models
            pred = np.mean([model(x_tt, training = False).numpy() for model in models],axis=0)
            # take the median of pred across the resp targets
            pred = f(pred)
            # convert pred into an action via the threshold
            pred_df.action = np.where(pred >= th, 1, 0).astype(int)
        else:
            pred_df.action = 0
        env.predict(pred_df)
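
# How the NaN-filling one-liner above works, on a made-up row: np.nan_to_num
# zeroes out the NaNs, and np.isnan(...) * f_mean adds the column mean back in
# exactly at those positions, leaving observed values untouched.
_row = np.array([[1.0, np.nan, 3.0]])
_means = np.array([10.0, 20.0, 30.0])
print(np.nan_to_num(_row) + np.isnan(_row) * _means)  # [[ 1. 20.  3.]]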
