Neural Networks and Deep Learning – III: Logistic Regression

Logistic regression models the relationship between a dependent variable and one or more independent variables. The most common case is a dependent variable with two categories (for example, healed vs. not healed). The goal of logistic regression is to find the simplest model that predicts the outcome of the dependent variable.

Logistic regression is therefore used to estimate the probability of an event. This probability is denoted P, short for probability.

Example: P (chips | beer) = 0.7

This expression means that 70% of customers who buy beer also buy chips. Such a conditional probability can be estimated directly from data, as in the sketch below.
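As a minimal sketch, the conditional probability can be estimated from transaction counts. The transaction data below is made up purely for illustration and is not from the article.

# Hypothetical market-basket data: each set is one customer's purchase.
transactions = [
    {"beer", "chips"},
    {"beer", "chips", "soda"},
    {"beer"},
    {"chips"},
    {"beer", "chips"},
]

beer = sum(1 for t in transactions if "beer" in t)                       # purchases containing beer
beer_and_chips = sum(1 for t in transactions if {"beer", "chips"} <= t)  # purchases containing both

# P(chips | beer) = count(beer and chips) / count(beer)
print(beer_and_chips / beer)  # 3 / 4 = 0.75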
In logistic regression, we are given an input x, and the model produces an output y. We want the probability that this input x gives the output y = 1. For this situation, we define the probability as follows, where ŷ represents the estimated result:

ŷ = P(y = 1 | x)

Since ŷ is a probability, the notation 0 ≤ ŷ ≤ 1 applies.

Sigmoid Function (σ)

We use the sigmoid function σ so that the output ŷ falls in the 0–1 range.

The products of the inputs and the weights are summed together with a bias (threshold) value, and passing this sum through the sigmoid gives a value between 0 and 1: ŷ = σ(wᵀx + b). A sketch of this forward pass follows the code below.

The sigmoid function in Python:

import numpy as np

def sigmoid(z):
    # sigma(z) = 1 / (1 + e^(-z)) maps any real number into the range (0, 1)
    s = 1.0 / (1.0 + np.exp(-z))
    return s
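As a minimal sketch of the forward pass described above, reusing the sigmoid function and numpy import just defined, the weights w, bias b, and input x below are made-up values chosen only for illustration:

# Hypothetical parameters and input for a single example.
w = np.array([0.5, -0.3])   # weights
b = 0.1                     # bias (threshold) term
x = np.array([2.0, 1.0])    # input features

z = np.dot(w, x) + b        # weighted sum of inputs plus bias: z = w·x + b = 0.8
y_hat = sigmoid(z)          # squashed into (0, 1), interpreted as P(y = 1 | x)
print(y_hat)                # approximately 0.69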

• If z is a large positive number, σ(z) approaches 1.
• If z is a large negative number, σ(z) approaches 0.
• If z = 0, σ(z) = 0.5.
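These limiting behaviors can be checked numerically with the sigmoid function defined above (the test values are arbitrary):

print(sigmoid(10.0))    # ~0.99995  -> close to 1 for a large positive z
print(sigmoid(-10.0))   # ~0.000045 -> close to 0 for a large negative z
print(sigmoid(0.0))     # exactly 0.5 when z = 0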
