Neural Networks – A perceptron in Matlab

Neural networks can be used to determine relationships and patterns between inputs and outputs. A simple single-layer feed forward neural network which has the ability to learn and classify data sets is known as a perceptron.

Single layer feed forward perceptron

By iteratively “learning” the weights, it is possible for the perceptron to find a solution to linearly separable data (data that can be separated by a hyperplane). In this example, we will run a simple perceptron to determine the solution to a 2-input OR.

The OR of X1 and X2 is defined by the following truth table:

X1   X2   Out
0    0    0
1    0    1
0    1    1
1    1    1

If you want to verify this yourself, run the following code in Matlab; it can be further modified to fit your needs. We first initialize our variables of interest, including the input, desired output, bias, learning coefficient and weights.

input = [0 0; 0 1; 1 0; 1 1];   % the four input patterns
numIn = 4;                      % number of input patterns
desired_out = [0;1;1;1];        % OR truth table outputs
bias = -1;
coeff = 0.7;                    % learning rate
rand('state',sum(100*clock));   % seed the generator (newer Matlab: rng('shuffle'))
weights = -1 + 2.*rand(3,1);    % random initial weights in [-1, 1]

The input and desired_out are self-explanatory, with the bias initialized to a constant. This value can be set to any non-zero number between -1 and 1. The coeff represents the learning rate, which specifies how large an adjustment is made to the network weights after each iteration. The closer the coefficient is to 1, the larger the weight adjustments; smaller values make the learning more conservative. Finally, the weights are randomly assigned.

A perceptron is defined by the equation:

out = f( w1*x1 + w2*x2 + wb*bias )

where f is the transfer function. Therefore, in our example, the weighted sum is y = w1*x1 + w2*x2 + wb*bias, and out = f(y). We will assume that weights(1,1) is the bias weight wb and weights(2:3,1) are the weights for X1 and X2, respectively.

One more variable we will set is iterations, which specifies how many times to train, i.e. how many passes to make through the data while modifying the weights.

iterations = 10;

Now for the feed forward perceptron code:

for i = 1:iterations
     out = zeros(4,1);
     for j = 1:numIn
          % weighted sum of the bias and the two inputs
          y = bias*weights(1,1)+...
               input(j,1)*weights(2,1)+input(j,2)*weights(3,1);
          % sigmoid transfer function squashes y into (0,1)
          out(j) = 1/(1+exp(-y));
          % delta rule: adjust each weight by coeff * input * error
          delta = desired_out(j)-out(j);
          weights(1,1) = weights(1,1)+coeff*bias*delta;
          weights(2,1) = weights(2,1)+coeff*input(j,1)*delta;
          weights(3,1) = weights(3,1)+coeff*input(j,2)*delta;
     end
end

A little explanation of the code: first, 'out' is computed from the equation mentioned above, and then run through a sigmoid function to ensure values are squashed into the [0, 1] range. The weights are then modified iteratively based on the delta rule.
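The inner loop above can also be written in vectorized Matlab. This is just a sketch of the equivalent per-pattern update; the variables x and w are introduced here for illustration (w plays the role of the 'weights' vector, and the other variables come from the initialization code above):

```matlab
% Vectorized version of one training pass (sketch)
w = weights;
for j = 1:numIn
     x = [bias; input(j,1); input(j,2)];          % augmented input vector
     out(j) = 1/(1 + exp(-(w'*x)));               % sigmoid of the weighted sum
     w = w + coeff*(desired_out(j) - out(j))*x;   % delta rule update
end
```

The dot product w'*x computes the same weighted sum as the three explicit terms in the loop above, and the single vector update replaces the three per-weight assignments.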

When running the perceptron over 10 iterations, the outputs begin to converge, but are still not precisely as expected:

out = 
  0.3756
  0.8596
  0.9244
  0.9952
weights = 
  0.6166
  3.2359
  2.7409

As the iterations approach 1000, the output converges towards the desired output.

out = 
  0.0043
  0.9984
  0.9987
  1.0000
weights = 
  5.4423
  12.1084
  11.8823

As the OR logic condition is linearly separable, a solution will be reached after a finite number of loops. Convergence time can also change based on the initial weights, the learning rate, the transfer function (sigmoid, linear, etc.) and the learning rule (in this case the delta rule is used, but other algorithms like Levenberg-Marquardt also exist). If you are interested, try running the same code for other logical conditions like AND or NAND to see what you get.
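For instance, training on AND only requires changing the desired outputs before rerunning the training loop above (a sketch; NAND is analogous):

```matlab
% AND truth table: output is 1 only when both inputs are 1
desired_out = [0; 0; 0; 1];
% for NAND, use desired_out = [1; 1; 1; 0];
% ...then rerun the training loop above
```

Both AND and NAND are linearly separable, so the same perceptron converges on them as well.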

While a single-layer perceptron like this can solve simple linearly separable problems, it is not suitable for non-separable data, such as XOR. In order to learn such a data set, you will need to use a multi-layer perceptron.
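You can check this with the same code: substituting the XOR targets below (a sketch) and rerunning the training loop will not converge to the desired outputs, because no single line separates the two XOR classes.

```matlab
% XOR truth table: not linearly separable
desired_out = [0; 1; 1; 0];
% rerunning the training loop will leave the outputs far from
% the targets, no matter how many iterations are used
```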


27 thoughts on “Neural Networks – A perceptron in Matlab”

  1. This is my code
    clear
    clc
    Class = [1 1 2 2];
    Agment = 1;
    V = [0;0;0];
    C = 1;
    X1 = [Agment;2;1];
    X2 = [Agment;2;5];
    X3 = [Agment;5;2];
    X4 = [Agment;4;6];
    X = [X1 X2 X3 X4];

    for I = 1 : size(X,2)
        if Class(1, I) == 2
            X(:, I) = X(:, I) * (-1);
        end
    end
    K = 1;
    for Step = 1 : 100
        t = (V' * X(:, K));
        if (V' * X(:, K)) <= 0
            V = V + (C * X(:, K));
        end
        K = mod(K, size(X,2));
        K = K + 1;
    end

  2. Now I want to enter only a 2-bit input and get the answer; what should I do? I mean the input is a matrix, and I want the inputs to be only 2 bits: [0 1], [1 0], [0 0] or [1 1].

    • % here it is, it works well
      clear out
      test = [1 1];
      y = bias*weights(1,1)+...
           test(1,1)*weights(2,1)+test(1,2)*weights(3,1);
      out = 1/(1+exp(-y))

  3. hi
    i am just a beginner using the nn tool in Matlab. The first question is: when we have to give both inputs and outputs to the nn GUI tool in the feedforward algorithm, then what are we using it for? I mean, if we already have the output, why do we need to use that tool? Please, someone help me; I am not able to understand anything.

    • A neural network is a kind of thing that learns from experience. So by giving inputs and outputs we are training it to recognize another input which is similar in pattern to those we trained on. In here, by giving inputs and outputs, we are training the network.

  4. Hi,
    Could you please tell me how to implement a perceptron when we have an image of size (say 50×50) as input and more than one output (say 5 options)?

  5. Question again: why are there 3 weights? And I tried solving the weight output but it doesn't give me the correct answer, like
    input 1 = 1;
    input 2 = 1;
    desired output = 1;

    the weight given to me by the code after i executed the code is
    36.2816
    24.0756
    24.0756

    so,
    1×36.2816 = 36.2816
    1×24.0756 = 24.0756

    Bias = -1
    36.2816+24.0756+(-1) = 59.3572

    why is the answer not 1?

  6. Pingback: Neural Networks – A perceptron in Matlab – PIYABUTE FUANGKHON

  7. hi all,
    I have a question and I really need help because I've tried everything but in vain.
    I have a matrix A = [k x 1] where k = 10,000, with values ranging, say, from a to b, randomly.
    I need to get a matrix B = [m x 1] from A, where m is from a to c (a<c<b).
    Basically, what I want to do is to "shrink" A and get a smaller matrix B.

    Thanks everybody.

    Nietzsche.

    • From your question, I’m assuming something like the following?:

      % Create a random 10000 x 1 matrix A
      A = rand(10000,1);
      a = min(A)
      b = max(A)
      % set c to a value between a and b. Let's choose 0.5 for this example
      c = 0.5
      % Then keep the elements of A between a and c
      B = A(A<=c)
      % Can also use the following, though the second part is redundant in this case
      B = A(A<=c & A>=a)

  8. shouldn't the input entered be:

    input = [0 0; 1 0; 0 1; 1 1];

    Instead of….

    input = [0 0; 0 1; 1 0; 1 1];

    I’m sure I’m just confused but I need to use the following input data (and am uncertain about how to enter it):
    X1=0, 0, 1, 1
    X2=0, 1, 0, 1

    would it be

    input = [0 0;0 1; 1 0; 1 1]

    or

    input = [0 0;1 0;0 1;1 1]

    Your help would be much appreciated

    • You are correct. In our example here for OR, both [1 0] and [0 1] map to an output of 1 though, so it works still.

      If you have a matrix of inputs = [X1 X2] which are defined as follows:
      X1=0, 0, 1, 1
      X2=0, 1, 0, 1

      Then you would use this:
      input = [0 0;0 1; 1 0; 1 1]


  9. Hello

    I’ve tried this example. I always get same results:
    Out
    0.5
    0.5
    0.5
    0.5

    weights:
    0
    0
    0

    I don’t know what is wrong with my code. please help. here is my code
    input =[0 0; 0 1; 1 0; 1 1];
    numIn = 4;
    desired_out = [0;1;1;1];
    bias = -1;
    coeff = 1;
    %rand('state', sum(100*clock));
    weights = -1*2.*rand(3,1);
    iterations = 10000;

    for i =1:iterations
    out = zeros(4,1);
    for j=1:numIn
    y = bias*weights(1,1)+...
    input(j,1)*weights(2,1)+input(j,2)*weights(3,1);
    out(j) = 1/(1+exp(-y));
    delta=desired_out(j)-out(j);
    weights(1,1)=weights(1,1)*coeff*bias*delta;
    weights(2,1)=weights(2,1)*coeff*input(j,1)*delta;
    weights(3,1)=weights(3,1)*coeff*input(j,2)*delta;
    end
    end

  10. hi, thanks for the good explanation of the perceptron.
    I have one question: this program is to train the network on the input, right?

    Then... how am I going to test an input for classification using the perceptron?

  11. David,
    I don’t know if I follow your question. You could plot the results, residuals, MSE errors or other variables over each iteration. If you want to do something like this, that would be possible. If this isn’t what you were looking for, let me know.

    Vipul

  12. hi
    thank you for having this brief and useful tutorial.
    I'd really appreciate it if you could send me a multilayer perceptron implementation using Matlab.
    best regards.
