
To understand what a neural network is, we need to start with the brain. The human brain has about 100 billion cells called neurons. Neurons are connected to each other through pathways that transmit electrical signals; these connections give neurons the ability to send and receive electrical impulses, which in turn are responsible for the brain's function on a larger scale. Each brain cell can therefore be seen as a mathematical function: it has inputs and an output, and when it receives electrical impulses from one cell, it passes a signal on to the other cells it is connected to.

A neural network is an attempt to make computers model the brain. The reasoning is that if computers were more like the brain, they could be good at some of the things humans are good at, such as pattern recognition. A neural network simulates a collection of neurons, and just as in the brain, these simulated neurons take inputs and give outputs through their connections.

Here we see what a neural network would look like if it were just a jumbled mess of connections. A network like this is of no use to us; to make it compute something, we need to designate some cells as input cells and other cells as output cells. To make things simpler, the cells are arranged in layers: the input cells form the leftmost layer, the hidden cells form the middle layer, and the output cells form the rightmost layer. There can be more than one hidden layer, but it is simpler to illustrate just one.

The connections between cells can be strong or weak, so the whole neural network is just a function: it takes in a few input numbers, does some computation in the hidden layers, and outputs a few output numbers. So let's look at an individual cell in the neural network. Each cell is itself a function: it takes inputs from other cells, considers how strong the connections to those cells are, and produces an output.
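A single simulated cell of the kind just described can be sketched in a few lines of Python. The function name and the example numbers here are illustrative, not from any particular library; the activation function (hyperbolic tangent, discussed below) is one common choice.

```python
import math

def cell_output(inputs, weights):
    """One simulated neuron: multiply each input by its connection
    strength (weight), sum the products, and squash the sum into
    the range (-1, 1) with the hyperbolic tangent."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return math.tanh(total)

# A cell with two incoming connections: one strong (0.9), one weak (0.05).
# The weak connection contributes almost nothing to the result.
print(cell_output([1.0, 1.0], [0.9, 0.05]))
```

Because the weak connection's input is multiplied by a number close to zero, dropping it would barely change the output, which matches the intuition that weakly connected cells have little effect on the function.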
We multiply each input signal by the strength of its connection, and then add all of these products together. In this way, inputs arriving over very weak connections are multiplied by numbers close to zero, so they have almost no effect on the function; conversely, inputs arriving over strong connections are multiplied by large numbers and have a large effect. The cell then applies a function to this sum of inputs times connection strengths. Many different functions can be used, but the hyperbolic tangent generally works well: it takes the sum as input and outputs a number between -1 and 1.

Going back to our diagram of the neural network, every cell works like this: it takes its inputs, multiplies them by their connection strengths, takes the sum, passes it through the function, and sends the result along its output connections. The computer does this work for the cells layer by layer, moving left to right through the network. Taking the example inputs x1 and x2, each cell in the first layer gets its input directly from the inputs, so their outputs are just y1 = f(x1) and y2 = f(x2), with the connection weights labelling how strongly each cell feeds into the next layer. Now we have a general sense of how a neural network works; next we need to know how it learns.
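The layer-by-layer, left-to-right evaluation described above can be sketched as follows. All of the weight values are made-up illustrations; a real network would learn them, which is the topic that comes next.

```python
import math

def tanh_cell(inputs, weights):
    # One cell: weighted sum of inputs, squashed into (-1, 1).
    return math.tanh(sum(x * w for x, w in zip(inputs, weights)))

def forward(x, hidden_weights, output_weights):
    """Evaluate the network layer by layer, moving left to right.
    hidden_weights holds one weight list per hidden cell;
    output_weights holds one weight list per output cell."""
    hidden = [tanh_cell(x, w) for w in hidden_weights]
    return [tanh_cell(hidden, w) for w in output_weights]

# Two inputs x1, x2 -> two hidden cells -> one output cell.
x = [0.5, -0.2]
hidden_w = [[0.8, 0.1], [-0.4, 0.9]]   # illustrative connection strengths
output_w = [[1.0, -0.6]]
print(forward(x, hidden_w, output_w))
```

Each layer's outputs become the next layer's inputs, which is why the computation proceeds strictly left to right: the hidden cells cannot be evaluated until the input values are known, and the output cells cannot be evaluated until the hidden cells are.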