Implementation of an ANN in Python (Keras with TensorFlow) to Predict Contact Point Temperature in Heat Conduction
Problem Statement:
T1, T4 = boundary temperatures
T2, T3 = contact temperatures
k1 = k3 = insulation thermal conductivity
k2 = brick thermal conductivity
L1, L3 = insulation width
L2 = brick layer width
Analytically, this problem can be solved using the thermal resistance concept. The solution to this problem can be found here.
In this blog, the above-mentioned analytical approach will be adopted to generate a dataset, and this dataset will be used to train an ANN to predict the temperature. Only the prediction of T2 will be demonstrated here; T3 can be predicted using the same approach.
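As a brief illustration of the thermal-resistance approach, here is a hand calculation for one assumed set of values (the numbers below are arbitrary examples within the ranges used later, not values taken from the original problem figure):

# Hand calculation of T2 via series thermal resistances (assumed example values)
T1, T4 = 135, 15                 # boundary temperatures
L1, L2, L3 = 0.4, 0.3, 0.4       # layer widths
k1, k2, k3 = 0.08, 0.8, 0.08     # thermal conductivities

R1, R2, R3 = L1/k1, L2/k2, L3/k3   # per-layer resistances: 5.0, 0.375, 5.0
q = (T1 - T4) / (R1 + R2 + R3)     # heat flux, roughly 11.57
print(T1 - q * R1)                  # contact temperature T2, roughly 77.2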
Constructing the dataset
To create the dataset, the values of T1, T4, L1, L2, L3, k1, k2, and k3 will be varied within certain ranges, and the corresponding T2 will be calculated. The ranges of the variables are given below:
T1: 130-140, step increment: 1
T4: 10-20, step increment: 1
L1, L3: 0.3-0.6, step increment: 0.1
L2: 0.1-0.5, step increment: 0.1
k1, k3: 0.07-0.1, step increment: 0.01
k2: 0.7-1.0, step increment: 0.1
Creating a function to yield T2
First of all, let's construct a function that yields T2 given the input parameters:
def T2(T1, T4, L1, L2, L3, k1, k2, k3):
    R1 = L1 / k1        # thermal resistance of the first insulation layer
    R2 = L2 / k2        # thermal resistance of the brick layer
    R3 = L3 / k3        # thermal resistance of the second insulation layer
    R = R1 + R2 + R3    # total thermal resistance of the composite wall
    q = (T1 - T4) / R   # heat flux through the wall
    T2 = T1 - q * R1    # temperature drop across the first layer gives T2
    return T2
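With the same assumed values as in the illustration above, the function reproduces the hand calculation:

print(T2(135, 15, 0.4, 0.3, 0.4, 0.08, 0.8, 0.08))   # roughly 77.2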
Constructing the input dataset
Now, let's construct the dataset containing different combinations of input parameters:
import numpy as np

T1 = [i for i in range(130, 141, 1)]
T4 = [i for i in range(10, 20, 1)]
k1 = [i/100 for i in range(7, 10)]
k2 = [i/10 for i in range(7, 10)]
k3 = k1
L1 = [i/10 for i in range(3, 7)]
L2 = [i/10 for i in range(1, 6)]
L3 = L1

# total number of parameter combinations
length = len(T1)*len(T4)*len(k1)*len(k2)*len(k3)*len(L1)*len(L2)*len(L3)
dataset = np.zeros((length, 8))

# fill one row per combination: [T1, T4, k1, k2, k3, L1, L2, L3]
i = 0
for t1 in T1:
    for t4 in T4:
        for K1 in k1:
            for K2 in k2:
                for K3 in k3:
                    for l1 in L1:
                        for l2 in L2:
                            for l3 in L3:
                                dataset[i, 0] = t1
                                dataset[i, 1] = t4
                                dataset[i, 2] = K1
                                dataset[i, 3] = K2
                                dataset[i, 4] = K3
                                dataset[i, 5] = l1
                                dataset[i, 6] = l2
                                dataset[i, 7] = l3
                                i = i + 1
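As a sanity check (not part of the original code), the number of rows should equal the product of the range lengths as coded above, 11 × 10 × 3 × 3 × 3 × 4 × 5 × 4 = 237,600:

print(dataset.shape)   # expected: (237600, 8)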
Then, the output dataset (temperature T2 dataset) is constructed by this code:
output = np.zeros((len(dataset), 1))
for i in range(len(dataset)):
    # argument order of T2() is (T1, T4, L1, L2, L3, k1, k2, k3)
    output[i] = T2(dataset[i, 0], dataset[i, 1], dataset[i, 5], dataset[i, 6],
                   dataset[i, 7], dataset[i, 2], dataset[i, 3], dataset[i, 4])
Scaling and processing the data
Scaling data
The input features span very different scales (temperatures of order 100 versus conductivities of order 0.1), so the inputs are mapped to the [0, 1] range with MinMaxScaler; the output T2 is left in its original units.
from sklearn.preprocessing import MinMaxScaler

scaler_x = MinMaxScaler()
scaler_x.fit(dataset)
xscale = scaler_x.transform(dataset)
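As a quick check (not in the original post), the fitted scaler exposes the per-feature minima and maxima it uses, and the scaled inputs should now lie between 0 and 1:

print(scaler_x.data_min_)                        # per-feature minimum of the raw dataset
print(scaler_x.data_max_)                        # per-feature maximum of the raw dataset
print(xscale.min(axis=0), xscale.max(axis=0))    # should be all 0s and all 1s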
Splitting data
We will split this data into training and test datasets so that we can test the ANN model's accuracy after training. Here, 20% of the dataset will be used as the test set.
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(xscale, output, test_size=0.2)
Now that our datasets are constructed, we can proceed to build our ANN model. For this, we first need to import the required Keras and TensorFlow modules:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Activation, Dense
from tensorflow.keras.optimizers import Adam
Build and run the model
Build model
Now, it's time to build our model. To do that, we will consider:
- two hidden layers, with 100 and 50 nodes respectively
- a learning rate of 0.1
- the Adam optimizer and a mean squared error (MSE) loss function
model = keras.Sequential([
    Dense(units=100, input_dim=8, activation='relu', kernel_initializer='he_uniform'),
    Dense(units=50, activation='relu'),
    Dense(units=1, activation='linear')
])
model.compile(optimizer=Adam(learning_rate=0.1), loss='mse', metrics=['mse'])
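Before training, model.summary() is a handy check of the architecture; with 8 inputs, the layers above contribute 8 × 100 + 100 = 900, 100 × 50 + 50 = 5,050 and 50 × 1 + 1 = 51 weights, i.e. 6,001 trainable parameters in total:

model.summary()   # expect three Dense layers with 6,001 trainable parameters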
Run model
model.fit(X_train, y_train, batch_size=50, validation_split=0.1, epochs=6)
Training for six epochs produces output along these lines:
Epoch 1/6
3422/3422 [==============================] - 6s 2ms/step - loss: 9.8294 - mse: 9.8294 - val_loss: 0.4728 - val_mse: 0.4728
Epoch 2/6
3422/3422 [==============================] - 5s 2ms/step - loss: 1.1623 - mse: 1.1623 - val_loss: 2.4225 - val_mse: 2.4225
Epoch 3/6
3422/3422 [==============================] - 5s 2ms/step - loss: 0.6389 - mse: 0.6389 - val_loss: 0.3654 - val_mse: 0.3654
Epoch 4/6
3422/3422 [==============================] - 5s 2ms/step - loss: 0.4456 - mse: 0.4456 - val_loss: 0.0981 - val_mse: 0.0981
Epoch 5/6
3422/3422 [==============================] - 5s 2ms/step - loss: 0.3601 - mse: 0.3601 - val_loss: 0.2467 - val_mse: 0.2467
Epoch 6/6
3422/3422 [==============================] - 5s 2ms/step - loss: 0.3293 - mse: 0.3293 - val_loss: 0.2020 - val_mse: 0.2020
Finally, the trained model is used to predict T2 for the test set, and the predictions are compared with the analytical values:
y_predict = model.predict(X_test)
print("actual", y_test)
print("predicted", y_predict)
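A minimal sketch of how the results could be summarised further: scikit-learn's mean_absolute_error gives a single error figure for the test set, and a new, unscaled input can be fed to the model through the same scaler. The helper import and the example input values here are additions for illustration, not part of the original post.

from sklearn.metrics import mean_absolute_error

# Single error figure for the held-out test set
mae = mean_absolute_error(y_test, y_predict)
print("mean absolute error on the test set:", mae)

# Keras' own evaluation on the test data (returns [loss, mse])
print(model.evaluate(X_test, y_test))

# Predicting T2 for a new, unscaled input; column order is [T1, T4, k1, k2, k3, L1, L2, L3]
new_sample = scaler_x.transform([[135, 15, 0.08, 0.8, 0.08, 0.4, 0.3, 0.4]])
print(model.predict(new_sample))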