Image Processing By Neural Network Computer Science Essay

Abstract- This paper discusses the categorization of different neural networks in terms of their activation functions. These include the Threshold Logic Unit, the Hopfield net, ADALINE, and the Perceptron and its types. It also covers the various types of learning algorithms, including the Back-Propagation algorithm, Support Vector Machines and Logistic Regression. The Analysis and Discussion section of this paper explains the best solution for Optical Character Recognition.

Index Terms- Neural Network, Optical Character Recognition


A neural network is made up of two or more layers of neurons.

Each neuron acts as a black box containing an activation function. Given a set of input values, their respective weights, and a threshold (or bias) value, the neuron module calculates the output. [2]

Section two describes the different types of neural network, classified according to the activation function they generally use. It discusses each activation function in detail, clarifying the pros and cons of each neural network.


Section three discusses the learning algorithms that can be used only with networks of more than two layers. This section discusses the different types of algorithms and highlights the importance of each.

Section four analyses and discusses the application of each neural network and learning algorithm in turn, and explains which method may be most suitable for Optical Character Recognition and how.

Neural Networks

There are many different kinds of neural networks, with different names and different implementations. [1] However, the two major classifications of neural network types are:

Single-Layer Neural Network

Multi-Layer Neural Network

The single-layer networks generally use the Heaviside step or linear activation functions.


The multi-layer networks, on the other hand, generally use the sigmoid function or similar functions that can easily be differentiated.

Each neural network calculates the weighted sum, defined as the sum of the products of the inputs and their corresponding weights [3]:


Weighted Sum = Σ ( input_i × weight_i )
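As an illustrative sketch (the function name and example values here are our own, not from the paper), the weighted sum can be computed as follows:

```python
def weighted_sum(inputs, weights):
    # Sum of each input multiplied by its corresponding weight.
    return sum(x * w for x, w in zip(inputs, weights))

# Example: 1*2 + 0*3 + 1*4 = 6
print(weighted_sum([1, 0, 1], [2, 3, 4]))  # 6
```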



Figure: Each input value is assigned a weight [11]

Heaviside Step function

The step function is a very particular function whose graph consists of a series of line segments. It was the first type of activation function used in neural networks. The function and its graph are shown below:

f(x) = [[x - 1]]


Figure: A graph of a step function [12]

The graph above shows how the output of the function steps from one level to the next. Each step may be as small as 1 unit or as large as 5 units. The output of the function is always a real number. For example, if the calculated value of the activation function meets the threshold, the output is 1, otherwise zero.

The Threshold Logic Unit (TLU) and the Hopfield net are two examples of neural networks that use the Heaviside step function. [4, 5] The difference between the two networks is that the Hopfield net uses binary threshold units: it essentially converges the output to the local minimum value (via gradient descent), that is, zero or one, whereas the TLU is more generalized.
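A minimal sketch of a TLU built on the Heaviside step, assuming a simple thresholded weighted sum (the names and example values are illustrative):

```python
def heaviside(x):
    # Step function: 1 once the argument is non-negative, else 0.
    return 1 if x >= 0 else 0

def tlu(inputs, weights, threshold):
    # The unit fires (outputs 1) when the weighted sum reaches the threshold.
    s = sum(x * w for x, w in zip(inputs, weights))
    return heaviside(s - threshold)

print(tlu([1, 1], [0.6, 0.6], 1.0))  # 1, since 1.2 >= 1.0
print(tlu([1, 0], [0.6, 0.6], 1.0))  # 0, since 0.6 <  1.0
```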

Linear activation function

The graph of the linear activation function is a straight line. An example of such a function and its corresponding graph is shown below:

f(x) = 2x + 2


Figure: Graph of a linear function [13]

As can be seen above, a linear combination is basically a combination of a linear transformation and a translation. It allows vector addition as well as scalar multiplication within its function. The output of such a function is the weighted sum of the inputs plus the bias term.

The two most common types of neural networks that implement this kind of function are the Perceptron and ADALINE. The Perceptron is the simplest form of feed-forward neural network.

Both are single-layer neural networks that have the ability to learn; the main difference between the two is that the ADALINE network adjusts its weights according to the weighted sum [6], whereas the Perceptron adjusts them using the linear activation function. [7]
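As a hedged sketch of the Perceptron learning rule, here trained on the OR truth table (the learning rate, epoch count and data layout are our own illustrative assumptions):

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    # samples: list of ((x1, x2), target) pairs with 0/1 targets.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0
            err = target - out
            # Perceptron rule: nudge each weight by lr * error * input.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn the OR truth table.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After training, the learned weights reproduce OR for all four input pairs.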

Sigmoid activation function

The sigmoid function resembles a stretched-out cosine curve. The function and its graph are shown below:

f(x) = 1 / (1 + e^(-x))


Figure: Graph showing a sigmoid curve [14]

The sigmoid is a simple non-linear function that has a region of uncertainty. This means, as shown in the graph above, that the corresponding output is not clearly deterministic.

The sigmoid curve is chiefly used within multi-layer neural networks. The major example of a neural network using this function is the multilayer Perceptron. [7] The fact that it can easily be differentiated helps in the re-learning feature of neural networks.
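The sigmoid and its conveniently simple derivative can be sketched as follows (the function names are illustrative):

```python
import math

def sigmoid(x):
    # Logistic function: a smooth S-shaped curve between 0 and 1.
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_deriv(x):
    # The derivative is expressible through the function itself, s * (1 - s),
    # which is what makes differentiating it for learning so easy.
    s = sigmoid(x)
    return s * (1.0 - s)

print(sigmoid(0))        # 0.5
print(sigmoid_deriv(0))  # 0.25
```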

Learning Algorithms

There are many kinds of algorithms available that make the re-learning of a neural network possible and efficient. The main type of algorithm that helps the network relearn is the back-propagation algorithm.

The Back-Propagation Algorithm

It requires the desired output to train its network. Once it gets a wrong output, it goes back to the weights, updates them accordingly and then re-calculates until it reaches the desired output. [8, 9]


Figure: A neural network implementing the back-propagation algorithm. [15]
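A minimal sketch of one back-propagation step for a tiny two-hidden-unit network with sigmoid activations (the architecture, initial weights and learning rate are our own assumptions, not from the paper); repeating the step shrinks the squared error:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(x, target, w_hidden, w_out, lr=0.5):
    # Forward pass: two hidden sigmoid units feeding one sigmoid output.
    h = [sigmoid(sum(xi * wi for xi, wi in zip(x, w))) for w in w_hidden]
    y = sigmoid(sum(hi * wi for hi, wi in zip(h, w_out)))
    # Backward pass: propagate the output error back through each layer.
    d_out = (y - target) * y * (1 - y)
    d_hid = [d_out * w_out[j] * h[j] * (1 - h[j]) for j in range(len(h))]
    w_out = [w_out[j] - lr * d_out * h[j] for j in range(len(h))]
    w_hidden = [[w_hidden[j][i] - lr * d_hid[j] * x[i] for i in range(len(x))]
                for j in range(len(h))]
    return w_hidden, w_out, (y - target) ** 2

w_h, w_o = [[0.1, 0.2], [0.3, 0.4]], [0.5, -0.5]
errors = []
for _ in range(20):
    w_h, w_o, err = train_step([1.0, 0.0], 1.0, w_h, w_o)
    errors.append(err)
# The recorded squared error shrinks as the weights are corrected.
```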

Support Vector Machines

The support vector machine basically separates the sample set with respect to some major weighted vectors, also known as the support vectors. They separate the vectors in a way that distinguishes the required output from the actual output. [10]


Figure: How support vector machines work [16]
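The paper gives no implementation; as one common concrete realisation, a linear SVM can be trained with a sub-gradient method on the hinge loss (Pegasos-style; the data, learning rate and regularisation constant below are illustrative assumptions):

```python
def train_linear_svm(samples, lam=0.01, lr=0.1, epochs=100):
    # samples: list of (features, label) pairs with labels in {-1, +1}.
    w = [0.0] * len(samples[0][0])
    for _ in range(epochs):
        for x, y in samples:
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            # Sub-gradient step on the hinge loss plus L2 regularisation.
            if margin < 1:
                w = [(1 - lr * lam) * wi + lr * y * xi for wi, xi in zip(w, x)]
            else:
                w = [(1 - lr * lam) * wi for wi in w]
    return w

# Two linearly separable clusters (illustrative data).
data = [([2.0, 2.0], 1), ([3.0, 3.0], 1), ([-2.0, -2.0], -1), ([-3.0, -1.0], -1)]
w = train_linear_svm(data)
```

After training, the sign of the weighted sum matches the label for every sample.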

Logistic Regression

This algorithm basically predicts the output given the inputs and certain conditional facts. It works on the probability of the output being correct. Each time it learns, the prediction becomes more efficient and the algorithm gives a better output.
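A minimal logistic-regression sketch fitted by gradient descent on the log-loss (the one-dimensional data and hyper-parameters are illustrative assumptions):

```python
import math

def train_logistic(samples, lr=0.5, epochs=200):
    # samples: list of (features, label) pairs with labels in {0, 1}.
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # predicted probability of class 1
            # Gradient of the log-loss: (p - y) times each input.
            w = [wi - lr * (p - y) * xi for wi, xi in zip(w, x)]
            b -= lr * (p - y)
    return w, b

data = [([0.0], 0), ([1.0], 0), ([3.0], 1), ([4.0], 1)]
w, b = train_logistic(data)
```

The learned model assigns a probability below 0.5 to the class-0 inputs and above 0.5 to the class-1 inputs.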


Analysis and Discussion

Each class of neural network below shows some statistics based on the inputs given to the functions.

Heaviside Step Function

The input was a set of zeros and ones. The threshold was set to 0.5, and each input value was assigned a weight. The neural network gave the correct output for the OR function. Basically, the function calculates the f(x) values as:
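That OR experiment can be reproduced with a short sketch (weights of 0.5 per input are an assumption consistent with the stated 0.5 threshold):

```python
def step_or(x1, x2, w1=0.5, w2=0.5, threshold=0.5):
    # Fires when the weighted sum of the two binary inputs reaches 0.5.
    return 1 if x1 * w1 + x2 * w2 >= threshold else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", step_or(a, b))  # matches the OR truth table
```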



Gradient descent

Table: Step function f(x) = [[x - 1]]

Linear Function

The linear function operates as below:

Table: Linear function f(x) = 2x + 2

Once an error is determined, the line on the graph translates itself towards the right or the left and changes its gradient in such a way that it comes closest to the desired output.

Sigmoid Function

The basic output of the sigmoid function is as below:

Table: Sigmoid curve of f(x) = 1 / (1 + exp(-x))

However, when the curve is shifted using the derivative of the sigmoid function, it gives an uncertain output. The output cannot be predicted, and so it behaves the closest to a human brain.

Back-Propagation Algorithm: it performs a number of iterations before arriving at the correct output. It requires a large amount of memory, though its accuracy level is quite high.

Support Vector Machines: they work well in most situations provided the right support vectors have been chosen. There are situations where the support vectors are such that the output is jagged throughout, giving a vague result.

Logistic Regression: it gives an unpredictable output, and hence the reliability of its results is initially poor.


Having said all that, I must say that a back-propagation neural network using the sigmoid function as its activation function would work best in almost all situations.

However, as the above discussion suggests, even a single-layer network using the Heaviside step function would work; indeed, for this task it may work best.

Cite this page

Image Processing By Neural Network Computer Science Essay. (2020, Jun 02). Retrieved from
