
Sklearn metrics import mean squared error


What is Mean Squared Error (MSE)?

The mean squared error (MSE) measures the error of an estimator or predictive model against a given set of observations. Intuitively, the MSE gauges the quality of a model by comparing the predictions it makes on the training dataset with the true label/output values; in other words, it represents the cost associated with the predictions, or the loss incurred by them. The squared loss (the squared difference between true and predicted values) is advantageous because it exaggerates large differences between the true and predicted values.

Two or more regression models built from the same sample of data can be compared by their MSE: the lower the MSE, the better the regression model. When a linear regression model is trained on a given set of observations, the model with the least mean squared error (MSE) is selected as the best model. Python and R packages select the best-fit model as the one with the lowest MSE or RMSE when training linear regression models.

In 1805, the French mathematician Adrien-Marie Legendre, who first published the sum-of-squares method for gauging the quality of a model, stated that squaring each error before summing all of them to find the total loss is convenient. One may ask: why not take the error as the absolute value of the loss (the difference between y and y_hat in the following formula) and sum up all the errors to find the total loss? The absolute value of the error is inconvenient because it does not have a continuous derivative, which makes the loss function non-smooth. Functions that are not smooth are difficult to work with when trying to find closed-form solutions to optimization problems using linear algebra.
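To make the smoothness point concrete, here is a small illustration (not from the original text) comparing the derivative of the squared loss with the "derivative" of the absolute loss near zero:

```python
import numpy as np

# Errors just below and just above zero
errors = np.array([-0.1, -0.001, 0.001, 0.1])

# Derivative of the squared loss e**2 is 2*e: it passes smoothly through zero
squared_loss_grad = 2 * errors

# "Derivative" of the absolute loss |e| is sign(e): it jumps from -1 to 1 at zero
absolute_loss_grad = np.sign(errors)

print(squared_loss_grad)   # values: -0.2, -0.002, 0.002, 0.2
print(absolute_loss_grad)  # values: -1, -1, 1, 1
```

The squared loss gradient shrinks continuously toward zero, while the absolute loss gradient jumps discontinuously, which is what makes closed-form solutions awkward.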

Mathematically, the MSE is calculated as the average of the squared differences between the actual values and the predicted or estimated values represented by the regression model (line or plane). It is also termed the mean squared deviation (MSD). This is how it is represented mathematically:

MSE = (1/N) · Σᵢ₌₁ᴺ (Yᵢ − Ŷᵢ)²

Fig 1. Mean Squared Error

The value of MSE is always positive. A value close to zero will represent better quality of the estimator/predictor (regression model).

An MSE of zero (0) represents the fact that the predictor is a perfect predictor.

When you take the square root of the MSE, you get the root mean squared error (RMSE), also termed the root mean square deviation (RMSD). In the above equation, Y represents the actual value and Y_hat the predicted value found on the regression line or plane. Here is the diagrammatic representation of MSE for a simple linear (univariate) regression model:

Fig 2. Mean Squared Error Representation
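As a quick numerical check of the two properties above (MSE is non-negative, and RMSE is its square root), here is a sketch using made-up values, not numbers from the article:

```python
from math import sqrt

y = [3.0, 2.0, 6.0, 1.0, 5.0]        # hypothetical actual values
y_hat = [2.0, 2.4, 2.8, 3.2, 3.6]    # hypothetical predicted values

# MSE: the average of squared differences -- always >= 0
mse = sum((a - p) ** 2 for a, p in zip(y, y_hat)) / len(y)

# RMSE: the square root of the MSE, back in the original units
rmse = sqrt(mse)

print(mse, rmse)  # mse = 3.64, rmse ~ 1.9079
```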


Interpreting the Mean Squared Error

The mean squared error is always 0 or positive. A larger MSE indicates that the linear regression model does not accurately fit the data.

An important point to note is that the MSE is sensitive to outliers. This is because it averages every data point's squared error, so a large error on an outlier is amplified in the MSE.

There is no "target" value for the MSE. The MSE can, however, be a good indicator of how well a model fits your data, and it can serve as a basis for choosing one model over another.
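For instance, two candidate models can be compared on the same data by their MSE. The values below are made up for illustration:

```python
y_true = [3.0, 2.0, 6.0, 1.0, 5.0]   # hypothetical observations
model_a = [2.8, 2.1, 5.5, 1.4, 4.9]  # hypothetical predictions from model A
model_b = [2.0, 2.4, 2.8, 3.2, 3.6]  # hypothetical predictions from model B

def mse(actual, predicted):
    # average of squared differences
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

mse_a = mse(y_true, model_a)  # ~0.094
mse_b = mse(y_true, model_b)  # ~3.64

# The model with the lower MSE fits this data better
best = "A" if mse_a < mse_b else "B"
print(best)  # A
```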

2. Mean Squared Error or MSE

MSE is calculated by taking the average of the square of the difference between the original and predicted values of the data.

Hence, MSE = (1/N) · Σᵢ₌₁ᴺ (yᵢ − ŷᵢ)²

Here N is the total number of observations/rows in the dataset. The sigma symbol denotes that the squared difference between the actual and predicted values is summed over every i ranging from 1 to N.

This can be implemented using sklearn’s mean_squared_error method:

from sklearn.metrics import mean_squared_error

actual_values = [3, -0.5, 2, 7]
predicted_values = [2.5, 0.0, 2, 8]

mean_squared_error(actual_values, predicted_values)
# 0.375

In most of the regression problems, mean squared error is used to determine the model’s performance.

Calculating Mean Squared Error Without Using any Modules

true_value_of_y = [3, 2, 6, 1, 5]
predicted_value_of_y = [2.0, 2.4, 2.8, 3.2, 3.6]
summation_of_value = 0
n = len(true_value_of_y)
for i in range(0, n):
    difference_of_value = true_value_of_y[i] - predicted_value_of_y[i]
    squared_difference = difference_of_value**2
    summation_of_value = summation_of_value + squared_difference
MSE = summation_of_value/n
print("The Mean Squared Error is: ", MSE)

The true and predicted values are declared in two variables, and summation_of_value is initialized to zero to accumulate the squared differences. The len() function gives the number of values in true_value_of_y. A for loop iterates over the values, calculating the difference between true_value_of_y[i] and predicted_value_of_y[i] and then squaring it. After adding up all the squared differences and dividing by n, we get the MSE.


The Mean Squared Error is: 3.6400000000000006

Mean Absolute Error

The mean absolute error, or MAE, is calculated as the average of the forecast error values, where all of the error values are forced to be positive.

Forcing the values to be positive is called taking their absolute value. This is denoted by the absolute function abs(), or shown mathematically as two pipe characters around the value: |value|.

mean_absolute_error = mean( abs(forecast_error) )

where abs() makes the values positive, forecast_error is one error or a sequence of forecast errors, and mean() calculates the average value.

We can use the mean_absolute_error() function from the scikit-learn library to calculate the mean absolute error for a list of predictions. The example below demonstrates this function.

from sklearn.metrics import mean_absolute_error

expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
mae = mean_absolute_error(expected, predictions)
print('MAE: %f' % mae)

Running the example calculates and prints the mean absolute error for a list of 5 expected and predicted values.

MAE: 0.140000

These error values are in the original units of the predicted values. A mean absolute error of zero indicates no error.
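The same MAE can be reproduced by hand from the formula above. A minimal sketch, reusing the same five values:

```python
expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]

# mean( abs(forecast_error) ), written out explicitly
errors = [e - p for e, p in zip(expected, predictions)]
mae = sum(abs(err) for err in errors) / len(errors)
print('MAE: %f' % mae)  # MAE: 0.140000
```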

How to Calculate RMSE in Python

Let's understand how to calculate RMSE in Python with the example code given below.

from sklearn.metrics import mean_squared_error
from math import sqrt
import numpy as np

# Define actual and predicted arrays
actual = np.array([10, 11, 12, 12, 14, 18, 20])
pred = np.array([9, 10, 13, 14, 17, 16, 18])

# Calculate RMSE
result = sqrt(mean_squared_error(actual, pred))

# Print the result
print("RMSE:", result)

In the above example, we created the actual and prediction arrays with the help of the numpy array() function.

We then used the mean_squared_error() function of the sklearn.metrics library, which takes the actual and prediction arrays as input and returns the mean squared error value.

Later, we found the RMSE value by taking the square root of the mean squared error value.

The above code returns a root mean squared error (RMSE) of 1.85164 for the given actual and predicted values.

Let's check out the root mean squared error (RMSE) calculation with a few more examples.

3. Root Mean Squared Error or RMSE

RMSE is the standard deviation of the errors which occur when a prediction is made on a dataset. It is the same as MSE (Mean Squared Error), except that the root of the value is taken when determining the accuracy of the model.

from sklearn.metrics import mean_squared_error
from math import sqrt

actual_values = [3, -0.5, 2, 7]
predicted_values = [2.5, 0.0, 2, 8]

mse = mean_squared_error(actual_values, predicted_values)
# taking the root of the mean squared error
root_mean_squared_error = sqrt(mse)

Mean Squared Error

The mean squared error, or MSE, is calculated as the average of the squared forecast error values. Squaring the forecast errors forces them to be positive; it also has the effect of putting more weight on large errors.

Squared forecast errors that are very large, or outliers, pull up the mean of the squared forecast errors, resulting in a larger mean squared error. In effect, the score gives worse marks to models that make large wrong predictions.

mean_squared_error = mean(forecast_error^2)

We can use the mean_squared_error() function from scikit-learn to calculate the mean squared error for a list of predictions. The example below demonstrates this function.

from sklearn.metrics import mean_squared_error

expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
mse = mean_squared_error(expected, predictions)
print('MSE: %f' % mse)

Running the example calculates and prints the mean squared error for a list of expected and predicted values.

MSE: 0.022000

The error values are in squared units of the predicted values. A mean squared error of zero indicates perfect skill, or no error.



Root Mean Square Error with NumPy module

Let us have a look at the formula below:

RMSE = √( (1/n) · Σᵢ₌₁ⁿ (ŷᵢ − yᵢ)² )

So, as seen above, the Root Mean Square Error is the square root of the average of the squared differences between the estimated and the actual values of the variable/feature.

In the below example, we have implemented the concept of RMSE using the functions of NumPy module as mentioned below–

  • Calculate the difference between the estimated and the actual value using numpy.subtract() function.
  • Further, calculate the square of the above results using numpy.square() function.
  • Finally, calculate the mean of the squared value using numpy.mean() function. The output is the MSE score.
  • At the end, calculate the square root of MSE using math.sqrt() function to get the RMSE value.


import math
import numpy as np

y_actual = [1, 2, 3, 4, 5]
y_predicted = [1.6, 2.5, 2.9, 3, 4.1]

MSE = np.square(np.subtract(y_actual, y_predicted)).mean()

RMSE = math.sqrt(MSE)
print("Root Mean Square Error:\n", RMSE)


Root Mean Square Error:
 0.6971370023173351






Mean Absolute Error (MAE)


Let's take an example where we have some points and a line that fits them. When we average the absolute distances from the points to the line, we get the mean absolute error. The problem with this metric is that it is not differentiable at zero. Let us translate this into how we can use Scikit-Learn to calculate this metric.

>>> from sklearn.metrics import mean_absolute_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> mean_absolute_error(y_true, y_pred)
0.5
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> mean_absolute_error(y_true, y_pred)
0.75
>>> mean_absolute_error(y_true, y_pred, multioutput='raw_values')
array([0.5, 1. ])
>>> mean_absolute_error(y_true, y_pred, multioutput=[0.3, 0.7])
0.85


Line regression graph

Let us consider the values (1,3), (2,2), (3,6), (4,1), (5,5) to plot the graph.

Line regression graph

The straight line represents the predicted values in this graph, and the points represent the actual data. The differences between this line and the points are squared and averaged, which is known as the mean squared error.





R2 Score

Let us take a naive approach: average all the points, which amounts to drawing a horizontal line through them. Then we can calculate the MSE for this simple model.


The R2 score answers the question of whether the linear regression model has a smaller error than this simple model and, in terms of metrics, how much smaller. The R2 score is 1 − (error of the linear regression model / error of the simple average model).

The best possible score is 1.0, and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an R² score of 0.0. In Scikit-Learn it looks like:

>>> from sklearn.metrics import r2_score
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> r2_score(y_true, y_pred)
0.948...
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> r2_score(y_true, y_pred, multioutput='variance_weighted')
0.938...
>>> y_true = [1, 2, 3]
>>> y_pred = [1, 2, 3]
>>> r2_score(y_true, y_pred)
1.0
>>> y_true = [1, 2, 3]
>>> y_pred = [2, 2, 2]
>>> r2_score(y_true, y_pred)
0.0
>>> y_true = [1, 2, 3]
>>> y_pred = [3, 2, 1]
>>> r2_score(y_true, y_pred)
-3.0

Root Mean Squared Error

The mean squared error described above is in squared units of the predictions.

It can be converted back into the original units of the predictions by taking the square root of the mean squared error. This is called the root mean squared error, or RMSE.

rmse = sqrt(mean_squared_error)

This can be calculated using the sqrt() math function on the mean squared error calculated with the mean_squared_error() scikit-learn function.

from sklearn.metrics import mean_squared_error
from math import sqrt

expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
mse = mean_squared_error(expected, predictions)
rmse = sqrt(mse)
print('RMSE: %f' % rmse)

Running the example calculates the root mean squared error.

RMSE: 0.148324

The RMSE error values are in the same units as the predictions. As with the mean squared error, an RMSE of zero indicates no error.



Loading a Sample Pandas DataFrame

Let’s start off by loading a sample Pandas DataFrame. If you want to follow along with this tutorial line-by-line, simply copy the code below and paste it into your favorite code editor.

# Importing a sample Pandas DataFrame
import pandas as pd

df = pd.DataFrame.from_dict({
    'x': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    'y': [1, 2, 2, 4, 4, 5, 6, 7, 9, 10]})

# First five rows:
#    x  y
# 0  1  1
# 1  2  2
# 2  3  2
# 3  4  4
# 4  5  4

You can see that the editor has loaded a DataFrame containing values for the variables x and y. We can plot this data, including the line of best fit, using Seaborn's .regplot() function:

# Plotting a line of best fit
import seaborn as sns
import matplotlib.pyplot as plt

sns.regplot(data=df, x='x', y='y', ci=None)
plt.show()

This returns the following visualization:

Plotting a line of best fit to help visualize mean squared error in Python

The mean squared error calculates the average of the squared differences between each data point and the line of best fit. By virtue of this, the lower the mean squared error, the better the line represents the relationship.

We can calculate this line of best fit using Scikit-Learn. The code below predicts a value for each x value using the linear model:

# Calculating prediction y values in sklearn
from sklearn.linear_model import LinearRegression

model = LinearRegression()
model.fit(df[['x']], df['y'])
y_2 = model.predict(df[['x']])
df['y_predicted'] = y_2

# Returns (first five rows):
#    x  y  y_predicted
# 0  1  1     0.581818
# 1  2  2     1.563636
# 2  3  2     2.545455
# 3  4  4     3.527273
# 4  5  4     4.509091

Bonus: Gradient Descent

Gradient descent is used to find a local minimum of a function. In this case, the function needs to be differentiable. The basic idea is to move in the direction opposite to the derivative at any point.

The following code works on a set of values that are available on the Github repository.
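Since that snippet isn't reproduced here, below is a minimal sketch of gradient descent on the MSE of a simple linear model. The sample values are made up (the repository's data isn't available), and the learning rate and iteration count are illustrative choices:

```python
import numpy as np

# Hypothetical sample data (not the repository's values)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.0, 2.0, 6.0, 1.0, 5.0])

# Fit y ~ a*x + b by repeatedly stepping against the gradient of the MSE
a, b = 0.0, 0.0
lr = 0.01  # learning rate
for _ in range(5000):
    y_hat = a * x + b
    error = y_hat - y
    # Partial derivatives of MSE = mean((a*x + b - y)**2)
    grad_a = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    a -= lr * grad_a
    b -= lr * grad_b

print(a, b)  # converges toward the least-squares fit a = 0.3, b = 2.5
```

For this data the closed-form least-squares solution is a = 0.3, b = 2.5, and the iterative estimates approach it because the MSE is a smooth, convex function of a and b.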


Derivation of Mean Squared Error

First, find the regression line for the values (1,3), (2,2), (3,6), (4,1), (5,5). The regression equation is y = 1.6 + 0.4x. Next, find the new y values. The new values for y are tabulated below.

Given x value | Calculating y value | New y value
1 | 1.6+0.4(1) | 2.0
2 | 1.6+0.4(2) | 2.4
3 | 1.6+0.4(3) | 2.8
4 | 1.6+0.4(4) | 3.2
5 | 1.6+0.4(5) | 3.6

Now find the errors ( Yᵢ − Ŷᵢ ), square each of them, and average the squared errors to get the MSE.
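The worked derivation above can be checked numerically. A short sketch using the same points and regression line:

```python
y_true = [3, 2, 6, 1, 5]
x_vals = [1, 2, 3, 4, 5]

# New y values from the regression line y = 1.6 + 0.4x
y_pred = [1.6 + 0.4 * x for x in x_vals]   # [2.0, 2.4, 2.8, 3.2, 3.6]

# Errors (Yi - Y^i), squared, then averaged
errors = [t - p for t, p in zip(y_true, y_pred)]
squared_errors = [e ** 2 for e in errors]
mse = sum(squared_errors) / len(squared_errors)

print(mse)  # ~3.64, matching the manual calculation earlier
```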


Example 2 – RMSE Calculation

Let's assume we have the actual and predicted datasets as follows:

actual = [14,17,13,19,12,18,14,10,12,12]

prediction = [15,14,14,18,10,16,12,11,11,13]

Calculate RMSE for given model.

Here, again, we will be using the numpy package to create the actual and prediction arrays, and the mean_squared_error() function of the sklearn.metrics library for the RMSE calculation in Python.

The Python code is given below:

from sklearn.metrics import mean_squared_error
from math import sqrt
import numpy as np

# Define actual and predicted arrays
actual = np.array([14, 17, 13, 19, 12, 18, 14, 10, 12, 12])
pred = np.array([15, 14, 14, 18, 10, 16, 12, 11, 11, 13])

# Calculate RMSE
result = sqrt(mean_squared_error(actual, pred))

# Print the result
print("RMSE:", result)

The above code returns a root mean squared error (RMSE) of 1.643167 for the given actual and predicted datasets.

Calculate Mean Squared Error Using Negative Values

Now let us consider some negative values for calculating the MSE. The values are (1,2), (3,-1), (5,0.6), (4,-0.7), (2,-0.2). The regression line equation is y = 1.13 − 0.33x.

The line regression graph for this value is:

Calculate mean squared error using negative values

New y values for this will be:

Given x value | Calculating y value | New y value


>>> from sklearn.metrics import mean_squared_error
>>> y_true = [2, -1, 0.6, -0.7, -0.2]
>>> y_pred = [0.9, 0.1, -0.4, -0.1, 0.6]
>>> mean_squared_error(y_true, y_pred)

First, we import the module and declare values for the variables, here including negative values. Using the mean_squared_error function, we calculate the MSE.


0.884


4. R Squared

It is also known as the coefficient of determination. This metric indicates how well a model fits a given dataset, that is, how close the regression line (i.e., the predicted values) is to the actual data values. The R squared value usually lies between 0 and 1, where 0 indicates that the model does not fit the given data and 1 indicates that the model fits the dataset perfectly.

import numpy as np

X = np.random.randn(100, 1)
y = np.random.randn(100) # y has nothing to do with X whatsoever

from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

scores = cross_val_score(LinearRegression(), X, y, scoring='r2')
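The R² score can also be computed directly from its definition, 1 − SS_res/SS_tot, and compared against sklearn's r2_score. The values below are the same small example used elsewhere in this article:

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

# Residual sum of squares: error of the regression model
ss_res = np.sum((y_true - y_pred) ** 2)

# Total sum of squares: error of the simple "predict the mean" model
ss_tot = np.sum((y_true - y_true.mean()) ** 2)

r2_manual = 1 - ss_res / ss_tot
print(r2_manual)                 # ~0.9486
print(r2_score(y_true, y_pred))  # same value
```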


In this tutorial, you learned what the mean squared error is and how it can be calculated using Python. First, you learned how to use Scikit-Learn’s mean_squared_error() function and then you built a custom function using Numpy.

The MSE is an important metric to use in evaluating the performance of your machine learning models. While Scikit-Learn abstracts the way in which the metric is calculated, understanding how it can be implemented from scratch can be a helpful tool.

Calculating the Mean Squared Error from Scratch using Numpy

Numpy itself doesn’t come with a function to calculate the mean squared error, but you can easily define a custom function to do this. We can make use of the subtract() function to subtract arrays element-wise.

# Defining a custom function to calculate the MSE
import numpy as np

def mse(actual, predicted):
    actual = np.array(actual)
    predicted = np.array(predicted)
    differences = np.subtract(actual, predicted)
    squared_differences = np.square(differences)
    return squared_differences.mean()

print(mse(df['y'], df['y_predicted']))

# Returns: 0.24727272727272714

The code above is a bit verbose, but it shows how the function operates. We can cut down the code significantly, as shown below:

# A shorter version of the code above
import numpy as np

def mse(actual, predicted):
    return np.square(np.subtract(np.array(actual), np.array(predicted))).mean()

print(mse(df['y'], df['y_predicted']))

# Returns: 0.24727272727272714


To get the MSE using sklearn

sklearn is a library used for many mathematical calculations in Python. Here we are going to use this library to calculate the MSE.


sklearn.metrics.mean_squared_error(y_true, y_pred, *, sample_weight=None, multioutput='uniform_average', squared=True)


  • y_true – true values of y
  • y_pred – predicted values of y
  • sample_weight – optional weights for individual samples
  • multioutput – how to aggregate errors for multiple outputs: 'raw_values' returns one error per output, 'uniform_average' (the default) averages them with uniform weight
  • squared – if True, returns the MSE; if False, returns the RMSE

Returns: the mean squared error.
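For instance, with the hypothetical multi-output values below, multioutput='raw_values' returns one error per output column instead of a single average:

```python
from sklearn.metrics import mean_squared_error

y_true = [[0.5, 1], [-1, 1], [7, -6]]
y_pred = [[0, 2], [-1, 2], [8, -5]]

# One MSE per output column
per_output = mean_squared_error(y_true, y_pred, multioutput='raw_values')
print(per_output)  # values: ~0.41666667 and 1.0

# Default behaviour: uniform average over the output columns
print(mean_squared_error(y_true, y_pred))  # ~0.70833
```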


from sklearn.metrics import mean_squared_error

true_value_of_y = [3, 2, 6, 1, 5]
predicted_value_of_y = [2.0, 2.4, 2.8, 3.2, 3.6]
mean_squared_error(true_value_of_y, predicted_value_of_y)

From the sklearn.metrics library we import mean_squared_error and create two variables: true_value_of_y holds the original values and predicted_value_of_y holds the calculated values. The function then computes the mean squared error.


3.6400000000000006


To get the Mean Squared Error in Python using NumPy

import numpy as np

true_value_of_y = [3, 2, 6, 1, 5]
predicted_value_of_y = [2.0, 2.4, 2.8, 3.2, 3.6]
MSE = np.square(np.subtract(true_value_of_y, predicted_value_of_y)).mean()

We import the numpy library as np and create two variables: true_value_of_y holds the original values and predicted_value_of_y holds the calculated values. Then we apply the formula to calculate the mean squared error.





Mean Forecast Error (or Forecast Bias)

The mean forecast error is calculated as the average of the forecast error values.

mean_forecast_error = mean(forecast_error)

Forecast errors can be positive or negative. This means that when the average of these values is calculated, an ideal mean forecast error would be zero.

A mean forecast error other than zero indicates a tendency of the model to over-forecast (positive error) or under-forecast (negative error). As such, the mean forecast error is also called the forecast bias.

The forecast error can be calculated directly as the mean of the error values. The example below demonstrates how the mean of the forecast errors can be calculated manually.

expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
forecast_errors = [expected[i] - predictions[i] for i in range(len(expected))]
bias = sum(forecast_errors) * 1.0 / len(expected)
print('Bias: %f' % bias)

Running the example prints the mean forecast error, also known as the forecast bias.

Bias: -0.100000

The units of the forecast bias are the same as the units of the predictions. A forecast bias of zero, or a very small number near zero, indicates an unbiased model.

RMSE with Python scikit learn library

In this example, we calculate the MSE score using the mean_squared_error() function from the sklearn.metrics library.

Then we calculate the RMSE score as the square root of the MSE, as shown below:


from sklearn.metrics import mean_squared_error
import math

y_actual = [1, 2, 3, 4, 5]
y_predicted = [1.6, 2.5, 2.9, 3, 4.1]

MSE = mean_squared_error(y_actual, y_predicted)

RMSE = math.sqrt(MSE)
print("Root Mean Square Error:\n", RMSE)


Root Mean Square Error:
 0.6971370023173351


If you don't have the numpy package installed on your system, use the command below at a command prompt:

pip install numpy




